[
  {
    "path": ".gitignore",
    "content": "*.pkl\n*.jpg\n*.pth\n*.pyc\n__pycache__\n*.h5\n*.mkv\n*.gif\n*.webm\ncheckpoints/*\nresults/*\ntemp/*\nsegments.txt\n.DS_Store\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "# Code of Conduct\n\n## Our Pledge\n\nIn the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.\n\n## Our Standards\n\nExamples of behavior that contributes to creating a positive environment include:\n\n- Using welcoming and inclusive language\n- Being respectful of differing viewpoints and experiences\n- Gracefully accepting constructive criticism\n- Focusing on what is best for the community\n- Showing empathy towards other community members\n\nExamples of unacceptable behavior by participants include:\n\n- The use of sexualized language or imagery and unwelcome sexual attention or advances\n- Trolling, insulting/derogatory comments, and personal or political attacks\n- Public or private harassment\n- Publishing others' private information, such as a physical or electronic address, without explicit permission\n- Other conduct that could reasonably be considered inappropriate in a professional setting\n\n## Our Responsibilities\n\nProject maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.\n\nProject maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned with this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.\n\n## Scope\n\nThis Code of Conduct applies both within project spaces and in public spaces when an individual is 
representing the project or its community. Examples of representing a project or community include using an official project email address, posting via an official social media account, or acting as an appointed representative at an online or offline event.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident.\n\nProject maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.\n\n## Attribution\n\nThis Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at [https://www.contributor-covenant.org/version/2/0/code_of_conduct.html](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html).\n"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "<div align=\"center\">\n\n<h2>VideoReTalking <br/> <span style=\"font-size:12px\">Audio-based Lip Synchronization for Talking Head Video Editing in the Wild</span> </h2> \n\n  <a href='https://arxiv.org/abs/2211.14758'><img src='https://img.shields.io/badge/ArXiv-2211.14758-red'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href='https://vinthony.github.io/video-retalking/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vinthony/video-retalking/blob/main/quick_demo.ipynb)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n[![Replicate](https://replicate.com/cjwbw/video-retalking/badge)](https://replicate.com/cjwbw/video-retalking)\n\n<div>\n    <a target='_blank'>Kun Cheng <sup>*,1,2</sup> </a>&emsp;\n    <a href='https://vinthony.github.io/' target='_blank'>Xiaodong Cun <sup>*,2</sup></a>&emsp;\n    <a href='https://yzhang2016.github.io/yongnorriszhang.github.io/' target='_blank'>Yong Zhang <sup>2</sup></a>&emsp;\n    <a href='https://menghanxia.github.io/' target='_blank'>Menghan Xia <sup>2</sup></a>&emsp;\n    <a href='https://feiiyin.github.io/' target='_blank'>Fei Yin <sup>2,3</sup></a>&emsp;<br/>\n    <a href='https://web.xidian.edu.cn/mrzhu/en/index.html' target='_blank'>Mingrui Zhu <sup>1</sup></a>&emsp;\n    <a href='https://xuanwangvc.github.io/' target='_blank'>Xuan Wang <sup>2</sup></a>&emsp;\n    <a href='https://juewang725.github.io/' target='_blank'>Jue Wang <sup>2</sup></a>&emsp;\n    <a href='https://web.xidian.edu.cn/nnwang/en/index.html' target='_blank'>Nannan Wang <sup>1</sup></a>\n</div>\n<br>\n<div>\n    <sup>1</sup> Xidian University &emsp; <sup>2</sup> Tencent AI Lab &emsp; <sup>3</sup> Tsinghua University\n</div>\n<br>\n<i><strong><a href='https://sa2022.siggraph.org/' target='_blank'>SIGGRAPH Asia 2022 Conference Track</a></strong></i>\n<br>\n<br>\n<img 
src=\"https://opentalker.github.io/video-retalking/static/images/teaser.png\" width=\"768px\">\n\n\n<div align=\"justify\">  <BR> We present VideoReTalking, a new system to edit the faces of a real-world talking head video according to input audio, producing a high-quality and lip-syncing output video even with a different emotion. Our system disentangles this objective into three sequential tasks:\n  \n <BR> (1) face video generation with a canonical expression\n<BR> (2) audio-driven lip-sync and \n  <BR> (3) face enhancement for improving photo-realism. \n  \n <BR>  Given a talking-head video, we first modify the expression of each frame according to the same expression template using the expression editing network, resulting in a video with the canonical expression. This video, together with the given audio, is then fed into the lip-sync network to generate a lip-syncing video. Finally, we improve the photo-realism of the synthesized faces through an identity-aware face enhancement network and post-processing. 
We use learning-based approaches for all three steps and all our modules can be tackled in a sequential pipeline without any user intervention.</div>\n<BR>\n\n<p>\n<img alt='pipeline' src=\"./docs/static/images/pipeline.png?raw=true\" width=\"768px\"><br>\n<em align='center'>Pipeline</em>\n</p>\n\n</div>\n\n## Results in the Wild (contains audio)\nhttps://user-images.githubusercontent.com/4397546/224310754-665eb2dd-aadc-47dc-b1f9-2029a937b20a.mp4\n\n\n\n\n## Environment\n```\ngit clone https://github.com/vinthony/video-retalking.git\ncd video-retalking\nconda create -n video_retalking python=3.8\nconda activate video_retalking\n\nconda install ffmpeg\n\n# Please follow the instructions from https://pytorch.org/get-started/previous-versions/\n# This installation command only works on CUDA 11.1\npip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html\n\npip install -r requirements.txt\n```\n\n## Quick Inference\n\n#### Pretrained Models\nPlease download our [pre-trained models](https://drive.google.com/drive/folders/18rhjMpxK8LVVxf7PI6XwOidt8Vouv_H0?usp=share_link) and put them in `./checkpoints`.\n\n<!-- We also provide some [example videos and audio](https://drive.google.com/drive/folders/14OwbNGDCAMPPdY-l_xO1axpUjkPxI9Dv?usp=share_link). Please put them in `./examples`. -->\n\n#### Inference\n\n```\npython3 inference.py \\\n  --face examples/face/1.mp4 \\\n  --audio examples/audio/1.wav \\\n  --outfile results/1_1.mp4\n```\nThis script includes the data preprocessing steps, so you can test any talking-face video without manual alignment. Note, however, that DNet cannot handle extreme poses.\n\nYou can also control the expression by adding the following parameters:\n\n```--exp_img```: Pre-defined expression template. The default is \"neutral\". 
You can choose \"smile\" or an image path.\n\n```--up_face```: You can choose \"surprise\" or \"angry\" to modify the expression of the upper face with [GANimation](https://github.com/donydchen/ganimation_replicate).\n\n\n\n## Citation\n\nIf you find our work useful in your research, please consider citing:\n\n```\n@misc{cheng2022videoretalking,\n        title={VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild}, \n        author={Kun Cheng and Xiaodong Cun and Yong Zhang and Menghan Xia and Fei Yin and Mingrui Zhu and Xuan Wang and Jue Wang and Nannan Wang},\n        year={2022},\n        eprint={2211.14758},\n        archivePrefix={arXiv},\n        primaryClass={cs.CV}\n  }\n```\n\n## Acknowledgement\nThanks to\n[Wav2Lip](https://github.com/Rudrabha/Wav2Lip),\n[PIRenderer](https://github.com/RenYurui/PIRender), \n[GFP-GAN](https://github.com/TencentARC/GFPGAN), \n[GPEN](https://github.com/yangxy/GPEN),\n[ganimation_replicate](https://github.com/donydchen/ganimation_replicate),\n[STIT](https://github.com/rotemtzaban/STIT)\nfor sharing their code.\n\n\n## Related Work\n- [StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN (ECCV 2022)](https://github.com/FeiiYin/StyleHEAT)\n- [CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior (CVPR 2023)](https://github.com/Doubiiu/CodeTalker)\n- [SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation (CVPR 2023)](https://github.com/Winfredy/SadTalker)\n- [DPE: Disentanglement of Pose and Expression for General Video Portrait Editing (CVPR 2023)](https://github.com/Carlyx/DPE)\n- [3D GAN Inversion with Facial Symmetry Prior (CVPR 2023)](https://github.com/FeiiYin/SPI/)\n- [T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations (CVPR 2023)](https://github.com/Mael-zys/T2M-GPT)\n\n## Disclaimer\n\nThis is not an official product of Tencent. 
\n\n```\n1. Please carefully read and comply with the open-source license applicable to this code before using it. \n2. Please carefully read and comply with the intellectual property declaration applicable to this code before using it.\n3. This open-source code runs completely offline and does not collect any personal information or other data. If you use this code to provide services to end-users and collect related data, please take necessary compliance measures according to applicable laws and regulations (such as publishing privacy policies, adopting necessary data security strategies, etc.). If the collected data involves personal information, user consent must be obtained (if applicable). Any legal liabilities arising from this are unrelated to Tencent.\n4. Without Tencent's written permission, you are not authorized to use the names or logos legally owned by Tencent, such as \"Tencent.\" Otherwise, you may be held legally liable.\n5. This open-source code does not have the ability to directly provide services to end-users. If you need to use this code for further model training or demos, as part of your product to provide services to end-users, or for similar use, please comply with applicable laws and regulations for your product or service. Any legal liabilities arising from this are unrelated to Tencent.\n6. It is prohibited to use this open-source code for activities that harm the legitimate rights and interests of others (including but not limited to fraud, deception, infringement of others' portrait rights, reputation rights, etc.), or other behaviors that violate applicable laws and regulations or go against social ethics and good customs (including providing incorrect or false information, spreading pornographic, terrorist, and violent information, etc.). 
Otherwise, you may be held legally liable.\n\n```\n## All Thanks To Our Contributors\n\n<a href=\"https://github.com/OpenTalker/video-retalking/graphs/contributors\">\n  <img src=\"https://contrib.rocks/image?repo=OpenTalker/video-retalking\" />\n</a>\n"
  },
  {
    "path": "cog.yaml",
    "content": "# Configuration for Cog ⚙️\n# Reference: https://github.com/replicate/cog/blob/main/docs/yaml.md\n\nbuild:\n  gpu: true\n  system_packages:\n    - \"libgl1-mesa-glx\"\n    - \"libglib2.0-0\"\n    - \"ffmpeg\"\n  python_version: \"3.11\"\n  python_packages:\n    - \"torch==2.0.1\"\n    - \"torchvision==0.15.2\"\n    - \"basicsr==1.4.2\"\n    - \"kornia==0.5.1\"\n    - \"face-alignment==1.3.4\"\n    - \"ninja==1.10.2.3\"\n    - \"einops==0.4.1\"\n    - \"facexlib==0.2.5\"\n    - \"librosa==0.9.2\"\n    - \"cmake==3.27.7\"\n    - \"numpy==1.23.4\"\n  run:\n    - pip install dlib\n    - mkdir -p /root/.pyenv/versions/3.11.6/lib/python3.11/site-packages/facexlib/weights/ && wget --output-document \"/root/.pyenv/versions/3.11.6/lib/python3.11/site-packages/facexlib/weights/detection_Resnet50_Final.pth\" \"https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth\"\n    - mkdir -p /root/.pyenv/versions/3.11.6/lib/python3.11/site-packages/facexlib/weights/ && wget --output-document \"/root/.pyenv/versions/3.11.6/lib/python3.11/site-packages/facexlib/weights/parsing_parsenet.pth\" \"https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth\"\n    - mkdir -p /root/.cache/torch/hub/checkpoints/ && wget --output-document \"/root/.cache/torch/hub/checkpoints/s3fd-619a316812.pth\" \"https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth\"\n    - mkdir -p /root/.cache/torch/hub/checkpoints/ && wget --output-document \"/root/.cache/torch/hub/checkpoints/2DFAN4-cd938726ad.zip\" \"https://www.adrianbulat.com/downloads/python-fan/2DFAN4-cd938726ad.zip\"\npredict: \"predict.py:Predictor\"\n"
  },
  {
    "path": "docs/index.html",
    "content": "<!DOCTYPE html>\n<html>\n<head>\n  <meta charset=\"utf-8\">\n  <!-- Meta tags for social media banners; these should be filled in appropriately as they are your \"business card\" -->\n  <!-- Replace the content tag with appropriate information -->\n  <meta content=\"VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild\"\n  property=\"og:title\">\n  <meta content=\"VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild\"\n    name=\"description\" property=\"og:description\">\n  <meta content=\"https://vinthony.github.io/video-retalking/\" property=\"og:url\">\n  <!-- Path to banner image, should be in the path listed below. Optimal dimensions are 1200x630-->\n  <meta property=\"og:image\" content=\"static/image/your_banner_image.png\" />\n  <meta property=\"og:image:width\" content=\"1200\"/>\n  <meta property=\"og:image:height\" content=\"630\"/>\n\n\n  <meta name=\"twitter:title\" content=\"TWITTER BANNER TITLE META TAG\">\n  <meta name=\"twitter:description\" content=\"TWITTER BANNER DESCRIPTION META TAG\">\n  <!-- Path to banner image, should be in the path listed below. 
Optimal dimensions are 1200x600-->\n  <meta name=\"twitter:image\" content=\"static/images/your_twitter_banner_image.png\">\n  <meta name=\"twitter:card\" content=\"summary_large_image\">\n  <!-- Keywords for your paper to be indexed by-->\n  <meta name=\"keywords\" content=\"KEYWORDS SHOULD BE PLACED HERE\">\n  <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n\n\n  <title>VideoReTalking</title>\n  <link rel=\"icon\" type=\"image/x-icon\" href=\"static/images/favicon.ico\">\n  <link href=\"https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro\"\n  rel=\"stylesheet\">\n\n  <link rel=\"stylesheet\" href=\"static/css/bulma.min.css\">\n  <link rel=\"stylesheet\" href=\"static/css/bulma-carousel.min.css\">\n  <link rel=\"stylesheet\" href=\"static/css/bulma-slider.min.css\">\n  <link rel=\"stylesheet\" href=\"static/css/fontawesome.all.min.css\">\n  <link rel=\"stylesheet\"\n  href=\"https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css\">\n  <link rel=\"stylesheet\" href=\"static/css/index.css\">\n\n  <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js\"></script>\n  <script src=\"https://documentcloud.adobe.com/view-sdk/main.js\"></script>\n  <script defer src=\"static/js/fontawesome.all.min.js\"></script>\n  <script src=\"static/js/bulma-carousel.min.js\"></script>\n  <script src=\"static/js/bulma-slider.min.js\"></script>\n  <script src=\"static/js/index.js\"></script>\n</head>\n<body>\n\n\n  <section class=\"hero\">\n    <div class=\"hero-body\">\n      <div class=\"container is-max-desktop\">\n        <div class=\"columns is-centered\">\n          <div class=\"column has-text-centered\">\n            <h1 class=\"xtitle is-1 publication-title\">VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild</h1>\n            <br/>\n            <div class=\"is-size-5 publication-authors\">\n              <!-- Paper authors -->\n              <span 
class=\"author-block\">\n                <a href=\"#\" target=\"_blank\">Kun Cheng</a><sup>*,1,2</sup></span>\n                <span class=\"author-block\">\n                  <a href=\"https://vinthony.github.io\" target=\"_blank\">Xiaodong Cun</a><sup>*,2</sup></span>\n                  <span class=\"author-block\">\n                    <a href=\"https://yzhang2016.github.io\" target=\"_blank\">Yong Zhang</a><sup>2</sup>\n                  </span>\n                  <span class=\"author-block\">\n                    <a href=\"https://menghanxia.github.io/\" target=\"_blank\">Menghan Xia</a><sup>2</sup>\n                  </span>\n                  <span class=\"author-block\">\n                    <a href=\"https://feiiyin.github.io/\" target=\"_blank\">Fei Yin</a><sup>2,3</sup>\n                  </span>\n                </br>\n                  <span class=\"author-block\">\n                    <a href=\"https://web.xidian.edu.cn/mrzhu/en/index.html\" target=\"_blank\">Mingrui Zhu</a><sup>1</sup>\n                  </span>\n                  <span class=\"author-block\">\n                    <a href=\"https://xuanwangvc.github.io/\" target=\"_blank\">Xuan Wang</a><sup>2</sup>\n                  </span>\n                  <span class=\"author-block\">\n                    <a href=\"https://juewang725.github.io/\" target=\"_blank\">Jue Wang</a><sup>2</sup>\n                  </span>\n                  <span class=\"author-block\">\n                    <a href=\"https://web.xidian.edu.cn/nnwang/en/index.html\" target=\"_blank\">Nannan Wang</a><sup>1</sup>\n                  </span>\n                  </div>\n                  <br/>\n                  <div class=\"is-size-5 publication-authors\">\n                    <span class=\"author-block\">\n                      <sup>1</sup> Xidian University &nbsp;&nbsp;&nbsp;\n                      <sup>2</sup> Tencent AI Lab &nbsp;&nbsp;&nbsp;\n                      <sup>3</sup> Tsinghua University\n                      
<br>SIGGRAPH Asia 2022 (Conference Track)</span>\n                    <span class=\"eql-cntrb\"><small><br><sup>*</sup>Indicates Equal Contribution</small></span>\n                  </div>\n\n                  <div class=\"column has-text-centered\">\n                    <div class=\"publication-links\">\n                         <!-- Arxiv PDF link -->\n                      <span class=\"link-block\">\n                        <a href=\"https://arxiv.org/pdf/2211.14758.pdf\" target=\"_blank\"\n                        class=\"external-link \">\n                        <span class=\"icon\">\n                          <i class=\"fas fa-file-pdf\"></i>\n                        </span>\n                        <span>Paper</span>\n                      </a>\n                    </span>\n\n                  <!-- Github link -->\n                  <span class=\"link-block\">\n                    <a href=\"https://github.com/vinthony/video-retalking/\" target=\"_blank\"\n                    class=\"external-link \">\n                    <span class=\"icon\">\n                      <i class=\"fab fa-github\"></i>\n                    </span>\n                    <span>Code</span>\n                  </a>\n                </span>\n\n                <!-- ArXiv abstract Link -->\n                <span class=\"link-block\">\n                  <a href=\"https://arxiv.org/abs/2211.14758\" target=\"_blank\"\n                  class=\"external-link \">\n                  <span class=\"icon\">\n                    <i class=\"ai ai-arxiv\"></i>\n                  </span>\n                  <span>arXiv</span>\n                </a>\n              </span>\n            </div>\n          </div>\n        </div>\n      </div>\n    </div>\n  </div>\n</section>\n\n\n<!-- Teaser video-->\n<section class=\"hero teaser\">\n  <div class=\"container is-max-desktop\">\n    <div class=\"hero-body-img\">\n      <img src=\"./static/images/teaser.png\" width=\"80%\">\n    </div>\n  
</div>\n</section>\n<!-- End teaser video -->\n\n<!-- Paper abstract -->\n<section class=\"section hero is-light\">\n  <div class=\"container is-max-desktop\">\n    <div class=\"columns is-centered has-text-centered\">\n      <div class=\"column is-four-fifths\">\n        <h2 class=\"title is-3\">Abstract</h2>\n        <div class=\"content has-text-justified\">\n          <p>\n            We present VideoReTalking, a new system to edit the faces of a real-world talking head video according to input audio, \n        producing a high-quality and lip-syncing output video even with a different emotion. Our system disentangles this objective \n        into three sequential tasks: (1) face video generation with a canonical expression; (2) audio-driven lip-sync; and \n        (3) face enhancement for improving photo-realism. Given a talking-head video, we first modify the expression of each frame \n        according to the same expression template using the expression editing network, resulting in a video with the canonical \n        expression. This video, together with the given audio, is then fed into the lip-sync network to generate a lip-syncing video.\n        Finally, we improve the photo-realism of the synthesized faces through an identity-aware face enhancement network and \n        post-processing. We use learning-based approaches for all three steps and all our modules can be tackled in a sequential \n        pipeline without any user intervention.\n          </p>\n        </div>\n      </div>\n    </div>\n  </div>\n</section>\n<!-- End paper abstract -->\n\n\n\n\n\n\n<!-- Youtube video -->\n<section class=\"hero is-small is-light\">\n  <div class=\"hero-body\">\n    <div class=\"container\">\n      <!-- Paper video. 
-->\n      <h2 class=\"title is-3\">Pipeline</h2>\n      <div class=\"columns is-centered has-text-centered\">\n        <div class=\"column is-four-fifths\">\n          \n          <div class=\"hero-body-img\">\n            <!-- Youtube embed code here -->\n            <img width='80%' src=\"static/images/pipeline.png\">\n          </div>\n        </div>\n      </div>\n    </div>\n  </div>\n</section>\n<!-- End youtube video -->\n\n\n<!-- Youtube video -->\n<section class=\"hero is-small is-light\">\n  <div class=\"hero-body\">\n    <div class=\"container\">\n      <!-- Paper video. -->\n      <h2 class=\"title is-3\"><strong>Video 1</strong>: Video Results in the Wild.</h2>\n      <div class=\"columns is-centered has-text-centered\">\n        <div class=\"column is-four-fifths\">\n          \n          <video controls=\"\"  width=\"100%\">\n                    <!-- t=0.001 is a hack to make iPhone show video thumbnail -->\n                    <source src=\"./static/videos/Results_in_the_wild.mp4#t=0.001\" type=\"video/mp4\">\n                </video>\n        </div>\n      </div>\n    </div>\n  </div>\n</section>\n<!-- End youtube video -->\n\n<!-- Youtube video -->\n<section class=\"hero is-small is-light\">\n  <div class=\"hero-body\">\n    <div class=\"container\">\n      <!-- Paper video. -->\n      <h2 class=\"title is-3\"><strong>Video 2</strong>: Comparison with SOTA Methods.</h2>\n      <div class=\"columns is-centered has-text-centered\">\n        <div class=\"column is-four-fifths\">\n          \n          <video controls=\"\"  width=\"100%\">\n                    <!-- t=0.001 is a hack to make iPhone show video thumbnail -->\n                    <source src=\"./static/videos/Comparison.mp4#t=0.001\" type=\"video/mp4\">\n                </video>\n        </div>\n      </div>\n    </div>\n  </div>\n</section>\n\n  <section class=\"hero is-small is-light\">\n    <div class=\"hero-body\">\n      <div class=\"container\">\n        <!-- Paper video. 
-->\n        <h2 class=\"title is-3\"><strong>Video 3</strong>: Ablation Study on Different Modules.</h2>\n        <div class=\"columns is-centered has-text-centered\">\n          <div class=\"column is-four-fifths\">\n            \n            <video controls=\"\"  width=\"100%\">\n                      <!-- t=0.001 is a hack to make iPhone show video thumbnail -->\n                      <source src=\"./static/videos/Ablation.mp4#t=0.001\" type=\"video/mp4\">\n                  </video>\n          </div>\n        </div>\n      </div>\n    </div>\n  </section>\n\n<!-- BibTeX citation -->\n  <section class=\"section\" id=\"BibTeX\">\n    <div class=\"container is-max-desktop content\">\n      <h2 class=\"title\">BibTeX</h2>\n      <pre><code>@misc{videoretalking,\n  title={VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild},\n  author={Kun Cheng and Xiaodong Cun and Yong Zhang and Menghan Xia and Fei Yin and Mingrui Zhu and Xuan Wang and Jue Wang and Nannan Wang},\n  year={2022},\n  eprint={2211.14758},\n  archivePrefix={arXiv},\n  primaryClass={cs.CV}\n}</code></pre>\n    </div>\n</section>\n<!-- End BibTeX citation -->\n\n\n  <footer class=\"footer\">\n  <div class=\"container\">\n    <div class=\"columns is-centered\">\n      <div class=\"column is-8\">\n        <div class=\"content\">\n\n          <p>\n            This page was built using a <a href=\"https://github.com/vinthony/project-page-template\">modified version</a> of the <a href=\"https://github.com/eliahuhorwitz/Academic-project-page-template\" target=\"_blank\">Academic Project Page Template</a> by <a href=\"https://github.com/vinthony\">vinthony</a>.\n            You are free to borrow the source code of this website; we just ask that you link back to this page in the footer. 
<br> This website is licensed under a <a rel=\"license\"  href=\"http://creativecommons.org/licenses/by-sa/4.0/\" target=\"_blank\">Creative\n            Commons Attribution-ShareAlike 4.0 International License</a>.\n          </p>\n\n        </div>\n      </div>\n    </div>\n  </div>\n</footer>\n\n<!-- Statcounter tracking code -->\n  \n<!-- You can add a tracker to track page visits by creating an account at statcounter.com -->\n\n    <!-- End of Statcounter Code -->\n\n  </body>\n  </html>\n"
  },
  {
    "path": "docs/static/css/bulma.css.map.txt",
    "content": "{\"version\":3,\"sources\":[\"../bulma.sass\",\"../sass/utilities/_all.sass\",\"../sass/utilities/animations.sass\",\"bulma.css\",\"../sass/utilities/mixins.sass\",\"../sass/utilities/initial-variables.sass\",\"../sass/utilities/controls.sass\",\"../sass/base/_all.sass\",\"../sass/base/minireset.sass\",\"../sass/base/generic.sass\",\"../sass/utilities/derived-variables.sass\",\"../sass/elements/_all.sass\",\"../sass/elements/box.sass\",\"../sass/elements/button.sass\",\"../sass/utilities/functions.sass\",\"../sass/elements/container.sass\",\"../sass/elements/content.sass\",\"../sass/elements/icon.sass\",\"../sass/elements/image.sass\",\"../sass/elements/notification.sass\",\"../sass/elements/progress.sass\",\"../sass/elements/table.sass\",\"../sass/elements/tag.sass\",\"../sass/elements/title.sass\",\"../sass/elements/other.sass\",\"../sass/form/_all.sass\",\"../sass/form/shared.sass\",\"../sass/form/input-textarea.sass\",\"../sass/form/checkbox-radio.sass\",\"../sass/form/select.sass\",\"../sass/form/file.sass\",\"../sass/form/tools.sass\",\"../sass/components/_all.sass\",\"../sass/components/breadcrumb.sass\",\"../sass/components/card.sass\",\"../sass/components/dropdown.sass\",\"../sass/components/level.sass\",\"../sass/components/media.sass\",\"../sass/components/menu.sass\",\"../sass/components/message.sass\",\"../sass/components/modal.sass\",\"../sass/components/navbar.sass\",\"../sass/components/pagination.sass\",\"../sass/components/panel.sass\",\"../sass/components/tabs.sass\",\"../sass/grid/_all.sass\",\"../sass/grid/columns.sass\",\"../sass/grid/tiles.sass\",\"../sass/helpers/_all.sass\",\"../sass/helpers/color.sass\",\"../sass/helpers/flexbox.sass\",\"../sass/helpers/float.sass\",\"../sass/helpers/other.sass\",\"../sass/helpers/overflow.sass\",\"../sass/helpers/position.sass\",\"../sass/helpers/spacing.sass\",\"../sass/helpers/typography.sass\",\"../sass/helpers/visibility.sass\",\"../sass/layout/_all.sass\",\"../sass/layout/hero.sass\",\
"../sass/layout/section.sass\",\"../sass/layout/footer.sass\"],\"names\":[],\"mappings\":\"AACA,6DAAA;ACDA,oBAAA;ACAA;EACE;IACE,uBAAuB;ECGzB;EDFA;IACE,yBAAyB;ECI3B;AACF;ADTA;EACE;IACE,uBAAuB;ECGzB;EDFA;IACE,yBAAyB;ECI3B;AACF;;AC0JA;;;;EANE,2BAA2B;EAC3B,yBAAyB;EACzB,sBAAsB;EACtB,qBAAqB;EACrB,iBAAiB;AD7InB;;ACkKA;EAfE,6BAD8B;EAE9B,kBAAkB;EAClB,eAAe;EACf,aAAa;EACb,YAAY;EACZ,cAAc;EACd,eAAe;EACf,qBAAqB;EACrB,oBAAoB;EACpB,kBAAkB;EAClB,QAAQ;EACR,yBAAyB;EACzB,wBAAwB;EACxB,cAAc;AD/IhB;;ACqJE;;EACE,qBC3IkB;AFNtB;;ACwNA;EAhEE,qBAAqB;EACrB,wBAAwB;EACxB,uCClM2B;EDmM3B,YAAY;EACZ,uBC/HuB;EDgIvB,eAAe;EACf,oBAAoB;EACpB,qBAAqB;EACrB,YAAY;EACZ,cAAc;EACd,YAAY;EACZ,YAAY;EACZ,gBAAgB;EAChB,eAAe;EACf,gBAAgB;EAChB,eAAe;EACf,aAAa;EACb,kBAAkB;EAClB,mBAAmB;EACnB,WAAW;ADpJb;;ACqJE;EAEE,uBCzM2B;ED0M3B,WAAW;EACX,cAAc;EACd,SAAS;EACT,kBAAkB;EAClB,QAAQ;EACR,0DAA0D;EAC1D,+BAA+B;ADnJnC;;ACoJE;EACE,WAAW;EACX,UAAU;ADjJd;;ACkJE;EACE,WAAW;EACX,UAAU;AD/Id;;ACgJE;EAEE,uCCtOyB;AFwF7B;;AC+IE;EACE,uCCxOyB;AF4F7B;;AC8IE;EACE,YAAY;EACZ,gBAAgB;EAChB,eAAe;EACf,gBAAgB;EAChB,eAAe;EACf,WAAW;AD3If;;AC4IE;EACE,YAAY;EACZ,gBAAgB;EAChB,eAAe;EACf,gBAAgB;EAChB,eAAe;EACf,WAAW;ADzIf;;AC0IE;EACE,YAAY;EACZ,gBAAgB;EAChB,eAAe;EACf,gBAAgB;EAChB,eAAe;EACf,WAAW;ADvIf;;ACwJA;EAXE,mDAA2C;UAA3C,2CAA2C;EAC3C,yBC7P4B;ED8P5B,uBCjMuB;EDkMvB,+BAA+B;EAC/B,6BAA6B;EAC7B,WAAW;EACX,cAAc;EACd,WAAW;EACX,kBAAkB;EAClB,UAAU;ADzIZ;;ACqJA;;;;;;;;;;;;;;;;;EANE,SADuB;EAEvB,OAFuB;EAGvB,kBAAkB;EAClB,QAJuB;EAKvB,MALuB;ADtHzB;;AGvHA;;;;;EA3BE,qBAAqB;EACrB,wBAAwB;EACxB,mBAAmB;EACnB,6BAA+C;EAC/C,kBDqDU;ECpDV,gBAAgB;EAChB,oBAAoB;EACpB,eDkBW;ECjBX,aAfoB;EAgBpB,2BAA2B;EAC3B,gBAhBuB;EAiBvB,iCAf+D;EAgB/D,gCAfkE;EAgBlE,iCAhBkE;EAiBlE,8BAlB+D;EAmB/D,kBAAkB;EAClB,mBAAmB;AH0JrB;;AGxJE;;;;;;;;;;;;;;;;;EAIE,aAAa;AHwKjB;;AGvKE;;;;;;;;;;;;;;;;EAEE,mBAAmB;AHwLvB;;AI7NA,eAAA;ACAA,0EAAA;AAEA;;;;;;;;;;;;;;;;;;;;;;;EAuBE,SAAS;EACT,UAAU;ALgOZ;;AK7NA;;;;;;EAME,eAAe;EACf,mBAAmB;ALgOrB;;AK7NA;EACE,gBAAgB;ALgOlB;;AK7NA;;;;EAIE,SAAS;ALgOX;;AK7NA;EACE,sBAAsB;ALgOxB;;AK9NA;EAII,mBAAmB;
AL8NvB;;AK3NA;;EAEE,YAAY;EACZ,eAAe;AL8NjB;;AK3NA;EACE,SAAS;AL8NX;;AK3NA;EACE,yBAAyB;EACzB,iBAAiB;AL8NnB;;AK5NA;;EAEE,UAAU;AL+NZ;;AKjOA;;EAII,mBAAmB;ALkOvB;;AK9PA;EClBE,uBJjB6B;EIkB7B,eAhCc;EAiCd,kCAAkC;EAClC,mCAAmC;EACnC,gBAlCoB;EAmCpB,kBAhCsB;EAiCtB,kBAhCsB;EAiCtB,kCApCiC;EAqCjC,8BAAsB;KAAtB,2BAAsB;MAAtB,0BAAsB;UAAtB,sBAAsB;ANoRxB;;AMlRA;;;;;;;EAOE,cAAc;ANqRhB;;AMnRA;;;;;;EAME,oLJ7ByL;AFmT3L;;AMpRA;;EAEE,6BAA6B;EAC7B,4BAA4B;EAC5B,sBJlC0B;AFyT5B;;AMrRA;EACE,cJ3D4B;EI4D5B,cA1DkB;EA2DlB,gBJ3BiB;EI4BjB,gBA1DoB;ANkVtB;;AMpRA;EACE,cJpDgC;EIqDhC,eAAe;EACf,qBAAqB;ANuRvB;;AM1RA;EAKI,mBAAmB;ANyRvB;;AM9RA;EAOI,cJ1E0B;AFqW9B;;AMzRA;EACE,4BJtE4B;EIuE5B,cCpBsB;EDqBtB,kBArEiB;EAsEjB,mBAvEkB;EAwElB,4BAzEgC;ANqWlC;;AM1RA;EACE,4BJ7E4B;EI8E5B,YAAY;EACZ,cAAc;EACd,WAxEa;EAyEb,gBAxEkB;ANqWpB;;AM3RA;EACE,YAAY;EACZ,eAAe;AN8RjB;;AM5RA;;EAEE,wBAAwB;AN+R1B;;AM7RA;EACE,kBAvFuB;ANuXzB;;AM9RA;EACE,mBAAmB;EACnB,oBAAoB;ANiStB;;AM/RA;EACE,cJ1G4B;EI2G5B,gBJrEe;AFuWjB;;AM9RA;EACE,YAAY;ANiSd;;AM/RA;EL1DE,iCAAiC;EK4DjC,4BJ7G4B;EI8G5B,cJpH4B;EIqH5B,kBAjGqB;EAkGrB,gBAAgB;EAChB,uBAlG0B;EAmG1B,gBAAgB;EAChB,iBAAiB;ANkSnB;;AM1SA;EAUI,6BAA6B;EAC7B,mBAAmB;EACnB,cAvGoB;EAwGpB,UAAU;ANoSd;;AMlSA;;EAGI,mBAAmB;ANoSvB;;AMvSA;;EAKM,mBAAmB;ANuSzB;;AM5SA;EAOI,cJxI0B;AFib9B;;AQvbA,mBAAA;ACSA;EAEE,uBPI6B;EOH7B,kBP0DgB;EOzDhB,0FPX2B;EOY3B,cPP4B;EOQ5B,cAAc;EACd,gBAZmB;AT6brB;;AS/aA;EAGI,yEPC8B;AF+alC;;ASnbA;EAKI,oEPD8B;AFmblC;;AUzZA;EAGE,uBRpC6B;EQqC7B,qBR1C4B;EQ2C5B,iBPlDwB;EOmDxB,cRhD4B;EQiD5B,eAAe;EAGf,uBAAuB;EACvB,iCApD6D;EAqD7D,iBApD6B;EAqD7B,kBArD6B;EAsD7B,8BAvD6D;EAwD7D,kBAAkB;EAClB,mBAAmB;AVwZrB;;AUxaA;EAkBI,cAAc;AV0ZlB;;AU5aA;EAwBM,aAAa;EACb,YAAY;AVwZlB;;AUjbA;ETgGI,+BSrEwG;ETqExG,oBSpEgE;AV0ZpE;;AUtbA;ETgGI,mBSlEgE;ETkEhE,gCSjEwG;AV4Z5G;;AU3bA;EAiCM,+BAAmF;EACnF,gCAAoF;AV8Z1F;;AUhcA;EAsCI,qBR7E0B;EQ8E1B,cRjF0B;AF+e9B;;AUrcA;EA0CI,qBRpE8B;EQqE9B,cRrF0B;AFof9B;;AU1cA;EA6CM,kDRvE4B;AFwelC;;AU9cA;EAgDI,qBRzF0B;EQ0F1B,cR3F0B;AF6f9B;;AUndA;EAoDI,6BAA6B;EAC7B,yBAAyB;EACzB,cR/F0B;EQgG1B,0BAjF8B;AVoflC;;AU1dA;EA4DM,4BR/F
wB;EQgGxB,cRvGwB;AFygB9B;;AU/dA;EAgEM,yBCH2B;EDI3B,cR3GwB;AF8gB9B;;AUpeA;;EAoEM,6BAA6B;EAC7B,yBAAyB;EACzB,gBAAgB;AVqatB;;AU3eA;EA2EM,uBR5GyB;EQ6GzB,yBAAyB;EACzB,cR3HuB;AF+hB7B;;AUjfA;EAgFQ,yBCnByB;EDoBzB,yBAAyB;EACzB,cRhIqB;AFqiB7B;;AUvfA;EAqFQ,yBAAyB;EACzB,cRpIqB;AF0iB7B;;AU5fA;EAwFU,mDRzHqB;AFiiB/B;;AUhgBA;EA2FQ,yBC9ByB;ED+BzB,yBAAyB;EACzB,cR3IqB;AFojB7B;;AUtgBA;;EAgGQ,uBRjIuB;EQkIvB,yBAAyB;EACzB,gBAAgB;AV2axB;;AU7gBA;EAoGQ,yBRlJqB;EQmJrB,YRtIuB;AFmjB/B;;AUlhBA;EAwGU,uBC3CuB;AXydjC;;AUthBA;;EA2GU,yBRzJmB;EQ0JnB,yBAAyB;EACzB,gBAAgB;EAChB,YR/IqB;AF+jB/B;;AU9hBA;EAiHU,gEAA4E;AVibtF;;AUliBA;EAmHQ,6BAA6B;EAC7B,mBRrJuB;EQsJvB,YRtJuB;AFykB/B;;AUxiBA;EA0HU,uBR3JqB;EQ4JrB,mBR5JqB;EQ6JrB,cR1KmB;AF4lB7B;;AU9iBA;EA+HY,4DAA8D;AVmb1E;;AUljBA;EAqIc,gEAA4E;AVib1F;;AUtjBA;;EAwIU,6BAA6B;EAC7B,mBR1KqB;EQ2KrB,gBAAgB;EAChB,YR5KqB;AF+lB/B;;AU9jBA;EA6IQ,6BAA6B;EAC7B,qBR5LqB;EQ6LrB,cR7LqB;AFknB7B;;AUpkBA;EAoJU,yBRlMmB;EQmMnB,YRtLqB;AF0mB/B;;AUzkBA;EA4Jc,4DAA8D;AVib5E;;AU7kBA;;EA+JU,6BAA6B;EAC7B,qBR9MmB;EQ+MnB,gBAAgB;EAChB,cRhNmB;AFmoB7B;;AUrlBA;EA2EM,yBRzHuB;EQ0HvB,yBAAyB;EACzB,YR9GyB;AF4nB/B;;AU3lBA;EAgFQ,yBCnByB;EDoBzB,yBAAyB;EACzB,YRnHuB;AFkoB/B;;AUjmBA;EAqFQ,yBAAyB;EACzB,YRvHuB;AFuoB/B;;AUtmBA;EAwFU,gDRtImB;AFwpB7B;;AU1mBA;EA2FQ,uBC9ByB;ED+BzB,yBAAyB;EACzB,YR9HuB;AFipB/B;;AUhnBA;;EAgGQ,yBR9IqB;EQ+IrB,yBAAyB;EACzB,gBAAgB;AVqhBxB;;AUvnBA;EAoGQ,uBRrIuB;EQsIvB,cRnJqB;AF0qB7B;;AU5nBA;EAwGU,yBC3CuB;AXmkBjC;;AUhoBA;;EA2GU,uBR5IqB;EQ6IrB,yBAAyB;EACzB,gBAAgB;EAChB,cR5JmB;AFsrB7B;;AUxoBA;EAiHU,4DAA4E;AV2hBtF;;AU5oBA;EAmHQ,6BAA6B;EAC7B,qBRlKqB;EQmKrB,cRnKqB;AFgsB7B;;AUlpBA;EA0HU,yBRxKmB;EQyKnB,qBRzKmB;EQ0KnB,YR7JqB;AFyrB/B;;AUxpBA;EA+HY,gEAA8D;AV6hB1E;;AU5pBA;EAqIc,4DAA4E;AV2hB1F;;AUhqBA;;EAwIU,6BAA6B;EAC7B,qBRvLmB;EQwLnB,gBAAgB;EAChB,cRzLmB;AFstB7B;;AUxqBA;EA6IQ,6BAA6B;EAC7B,mBR/KuB;EQgLvB,YRhLuB;AF+sB/B;;AU9qBA;EAoJU,uBRrLqB;EQsLrB,cRnMmB;AFiuB7B;;AUnrBA;EA4Jc,gEAA8D;AV2hB5E;;AUvrBA;;EA+JU,6BAA6B;EAC7B,mBRjMqB;EQkMrB,gBAAgB;EAChB,YRnMqB;AFguB/B;;AU/rBA;EA2EM,4BR9GwB;EQ+GxB,yBAAyB;EACzB,yBC7Ce
;AXqqBrB;;AUrsBA;EAgFQ,yBCnByB;EDoBzB,yBAAyB;EACzB,yBClDa;AX2qBrB;;AU3sBA;EAqFQ,yBAAyB;EACzB,yBCtDa;AXgrBrB;;AUhtBA;EAwFU,mDR3HoB;AFuvB9B;;AUptBA;EA2FQ,yBC9ByB;ED+BzB,yBAAyB;EACzB,yBC7Da;AX0rBrB;;AU1tBA;;EAgGQ,4BRnIsB;EQoItB,yBAAyB;EACzB,gBAAgB;AV+nBxB;;AUjuBA;EAoGQ,oCCpEa;EDqEb,iBRxIsB;AFywB9B;;AUtuBA;EAwGU,oCC3CuB;AX6qBjC;;AU1uBA;;EA2GU,oCC3EW;ED4EX,yBAAyB;EACzB,gBAAgB;EAChB,iBRjJoB;AFqxB9B;;AUlvBA;EAiHU,sFAA4E;AVqoBtF;;AUtvBA;EAmHQ,6BAA6B;EAC7B,wBRvJsB;EQwJtB,iBRxJsB;AF+xB9B;;AU5vBA;EA0HU,4BR7JoB;EQ8JpB,wBR9JoB;EQ+JpB,yBC5FW;AXkuBrB;;AUlwBA;EA+HY,sEAA8D;AVuoB1E;;AUtwBA;EAqIc,sFAA4E;AVqoB1F;;AU1wBA;;EAwIU,6BAA6B;EAC7B,wBR5KoB;EQ6KpB,gBAAgB;EAChB,iBR9KoB;AFqzB9B;;AUlxBA;EA6IQ,6BAA6B;EAC7B,gCC9Ga;ED+Gb,yBC/Ga;AXwvBrB;;AUxxBA;EAoJU,oCCpHW;EDqHX,iBRxLoB;AFg0B9B;;AU7xBA;EA4Jc,sEAA8D;AVqoB5E;;AUjyBA;;EA+JU,6BAA6B;EAC7B,gCChIW;EDiIX,gBAAgB;EAChB,yBClIW;AXywBrB;;AUzyBA;EA2EM,yBRrHwB;EQsHxB,yBAAyB;EACzB,WC3CU;AX6wBhB;;AU/yBA;EAgFQ,yBCnByB;EDoBzB,yBAAyB;EACzB,WChDQ;AXmxBhB;;AUrzBA;EAqFQ,yBAAyB;EACzB,WCpDQ;AXwxBhB;;AU1zBA;EAwFU,gDRlIoB;AFw2B9B;;AU9zBA;EA2FQ,yBC9ByB;ED+BzB,yBAAyB;EACzB,WC3DQ;AXkyBhB;;AUp0BA;;EAgGQ,yBR1IsB;EQ2ItB,yBAAyB;EACzB,gBAAgB;AVyuBxB;;AU30BA;EAoGQ,sBClEQ;EDmER,cR/IsB;AF03B9B;;AUh1BA;EAwGU,yBC3CuB;AXuxBjC;;AUp1BA;;EA2GU,sBCzEM;ED0EN,yBAAyB;EACzB,gBAAgB;EAChB,cRxJoB;AFs4B9B;;AU51BA;EAiHU,0DAA4E;AV+uBtF;;AUh2BA;EAmHQ,6BAA6B;EAC7B,qBR9JsB;EQ+JtB,cR/JsB;AFg5B9B;;AUt2BA;EA0HU,yBRpKoB;EQqKpB,qBRrKoB;EQsKpB,WC1FM;AX00BhB;;AU52BA;EA+HY,gEAA8D;AVivB1E;;AUh3BA;EAqIc,0DAA4E;AV+uB1F;;AUp3BA;;EAwIU,6BAA6B;EAC7B,qBRnLoB;EQoLpB,gBAAgB;EAChB,cRrLoB;AFs6B9B;;AU53BA;EA6IQ,6BAA6B;EAC7B,kBC5GQ;ED6GR,WC7GQ;AXg2BhB;;AUl4BA;EAoJU,sBClHM;EDmHN,cR/LoB;AFi7B9B;;AUv4BA;EA4Jc,gEAA8D;AV+uB5E;;AU34BA;;EA+JU,6BAA6B;EAC7B,kBC9HM;ED+HN,gBAAgB;EAChB,WChIM;AXi3BhB;;AUn5BA;EA2EM,yBRvG4B;EQwG5B,yBAAyB;EACzB,WC3CU;AXu3BhB;;AUz5BA;EAgFQ,yBCnByB;EDoBzB,yBAAyB;EACzB,WChDQ;AX63BhB;;AU/5BA;EAqFQ,yBAAyB;EACzB,WCpDQ;AXk4BhB;;AUp6BA;EAwFU,iDRpHwB;AFo8BlC;;AUx6BA;EA2FQ,yBC9ByB;ED+BzB,yBAAyB;EACzB,WC3
DQ;AX44BhB;;AU96BA;;EAgGQ,yBR5H0B;EQ6H1B,yBAAyB;EACzB,gBAAgB;AVm1BxB;;AUr7BA;EAoGQ,sBClEQ;EDmER,cRjI0B;AFs9BlC;;AU17BA;EAwGU,yBC3CuB;AXi4BjC;;AU97BA;;EA2GU,sBCzEM;ED0EN,yBAAyB;EACzB,gBAAgB;EAChB,cR1IwB;AFk+BlC;;AUt8BA;EAiHU,0DAA4E;AVy1BtF;;AU18BA;EAmHQ,6BAA6B;EAC7B,qBRhJ0B;EQiJ1B,cRjJ0B;AF4+BlC;;AUh9BA;EA0HU,yBRtJwB;EQuJxB,qBRvJwB;EQwJxB,WC1FM;AXo7BhB;;AUt9BA;EA+HY,gEAA8D;AV21B1E;;AU19BA;EAqIc,0DAA4E;AVy1B1F;;AU99BA;;EAwIU,6BAA6B;EAC7B,qBRrKwB;EQsKxB,gBAAgB;EAChB,cRvKwB;AFkgClC;;AUt+BA;EA6IQ,6BAA6B;EAC7B,kBC5GQ;ED6GR,WC7GQ;AX08BhB;;AU5+BA;EAoJU,sBClHM;EDmHN,cRjLwB;AF6gClC;;AUj/BA;EA4Jc,gEAA8D;AVy1B5E;;AUr/BA;;EA+JU,6BAA6B;EAC7B,kBC9HM;ED+HN,gBAAgB;EAChB,WChIM;AX29BhB;;AU7/BA;EAwKU,yBC/HsC;EDgItC,cCvH2D;AXg9BrE;;AUlgCA;EA4KY,yBC/GqB;EDgHrB,yBAAyB;EACzB,cC5HyD;AXs9BrE;;AUxgCA;EAiLY,yBCpHqB;EDqHrB,yBAAyB;EACzB,cCjIyD;AX49BrE;;AU9gCA;EA2EM,yBRrG4B;EQsG5B,yBAAyB;EACzB,WC3CU;AXk/BhB;;AUphCA;EAgFQ,yBCnByB;EDoBzB,yBAAyB;EACzB,WChDQ;AXw/BhB;;AU1hCA;EAqFQ,yBAAyB;EACzB,WCpDQ;AX6/BhB;;AU/hCA;EAwFU,kDRlHwB;AF6jClC;;AUniCA;EA2FQ,yBC9ByB;ED+BzB,yBAAyB;EACzB,WC3DQ;AXugChB;;AUziCA;;EAgGQ,yBR1H0B;EQ2H1B,yBAAyB;EACzB,gBAAgB;AV88BxB;;AUhjCA;EAoGQ,sBClEQ;EDmER,cR/H0B;AF+kClC;;AUrjCA;EAwGU,yBC3CuB;AX4/BjC;;AUzjCA;;EA2GU,sBCzEM;ED0EN,yBAAyB;EACzB,gBAAgB;EAChB,cRxIwB;AF2lClC;;AUjkCA;EAiHU,0DAA4E;AVo9BtF;;AUrkCA;EAmHQ,6BAA6B;EAC7B,qBR9I0B;EQ+I1B,cR/I0B;AFqmClC;;AU3kCA;EA0HU,yBRpJwB;EQqJxB,qBRrJwB;EQsJxB,WC1FM;AX+iChB;;AUjlCA;EA+HY,gEAA8D;AVs9B1E;;AUrlCA;EAqIc,0DAA4E;AVo9B1F;;AUzlCA;;EAwIU,6BAA6B;EAC7B,qBRnKwB;EQoKxB,gBAAgB;EAChB,cRrKwB;AF2nClC;;AUjmCA;EA6IQ,6BAA6B;EAC7B,kBC5GQ;ED6GR,WC7GQ;AXqkChB;;AUvmCA;EAoJU,sBClHM;EDmHN,cR/KwB;AFsoClC;;AU5mCA;EA4Jc,gEAA8D;AVo9B5E;;AUhnCA;;EA+JU,6BAA6B;EAC7B,kBC9HM;ED+HN,gBAAgB;EAChB,WChIM;AXslChB;;AUxnCA;EAwKU,yBC/HsC;EDgItC,cCvH2D;AX2kCrE;;AU7nCA;EA4KY,yBC/GqB;EDgHrB,yBAAyB;EACzB,cC5HyD;AXilCrE;;AUnoCA;EAiLY,yBCpHqB;EDqHrB,yBAAyB;EACzB,cCjIyD;AXulCrE;;AUzoCA;EA2EM,yBRtG4B;EQuG5B,yBAAyB;EACzB,WC3CU;AX6mChB;;AU/oCA;EAgFQ,yBCnByB;EDoBzB,yBAAyB;EACzB,WChDQ;
AXmnChB;;AUrpCA;EAqFQ,yBAAyB;EACzB,WCpDQ;AXwnChB;;AU1pCA;EAwFU,kDRnHwB;AFyrClC;;AU9pCA;EA2FQ,yBC9ByB;ED+BzB,yBAAyB;EACzB,WC3DQ;AXkoChB;;AUpqCA;;EAgGQ,yBR3H0B;EQ4H1B,yBAAyB;EACzB,gBAAgB;AVykCxB;;AU3qCA;EAoGQ,sBClEQ;EDmER,cRhI0B;AF2sClC;;AUhrCA;EAwGU,yBC3CuB;AXunCjC;;AUprCA;;EA2GU,sBCzEM;ED0EN,yBAAyB;EACzB,gBAAgB;EAChB,cRzIwB;AFutClC;;AU5rCA;EAiHU,0DAA4E;AV+kCtF;;AUhsCA;EAmHQ,6BAA6B;EAC7B,qBR/I0B;EQgJ1B,cRhJ0B;AFiuClC;;AUtsCA;EA0HU,yBRrJwB;EQsJxB,qBRtJwB;EQuJxB,WC1FM;AX0qChB;;AU5sCA;EA+HY,gEAA8D;AVilC1E;;AUhtCA;EAqIc,0DAA4E;AV+kC1F;;AUptCA;;EAwIU,6BAA6B;EAC7B,qBRpKwB;EQqKxB,gBAAgB;EAChB,cRtKwB;AFuvClC;;AU5tCA;EA6IQ,6BAA6B;EAC7B,kBC5GQ;ED6GR,WC7GQ;AXgsChB;;AUluCA;EAoJU,sBClHM;EDmHN,cRhLwB;AFkwClC;;AUvuCA;EA4Jc,gEAA8D;AV+kC5E;;AU3uCA;;EA+JU,6BAA6B;EAC7B,kBC9HM;ED+HN,gBAAgB;EAChB,WChIM;AXitChB;;AUnvCA;EAwKU,yBC/HsC;EDgItC,cCvH2D;AXssCrE;;AUxvCA;EA4KY,yBC/GqB;EDgHrB,yBAAyB;EACzB,cC5HyD;AX4sCrE;;AU9vCA;EAiLY,yBCpHqB;EDqHrB,yBAAyB;EACzB,cCjIyD;AXktCrE;;AUpwCA;EA2EM,yBRxG4B;EQyG5B,yBAAyB;EACzB,WC3CU;AXwuChB;;AU1wCA;EAgFQ,yBCnByB;EDoBzB,yBAAyB;EACzB,WChDQ;AX8uChB;;AUhxCA;EAqFQ,yBAAyB;EACzB,WCpDQ;AXmvChB;;AUrxCA;EAwFU,kDRrHwB;AFszClC;;AUzxCA;EA2FQ,yBC9ByB;ED+BzB,yBAAyB;EACzB,WC3DQ;AX6vChB;;AU/xCA;;EAgGQ,yBR7H0B;EQ8H1B,yBAAyB;EACzB,gBAAgB;AVosCxB;;AUtyCA;EAoGQ,sBClEQ;EDmER,cRlI0B;AFw0ClC;;AU3yCA;EAwGU,yBC3CuB;AXkvCjC;;AU/yCA;;EA2GU,sBCzEM;ED0EN,yBAAyB;EACzB,gBAAgB;EAChB,cR3IwB;AFo1ClC;;AUvzCA;EAiHU,0DAA4E;AV0sCtF;;AU3zCA;EAmHQ,6BAA6B;EAC7B,qBRjJ0B;EQkJ1B,cRlJ0B;AF81ClC;;AUj0CA;EA0HU,yBRvJwB;EQwJxB,qBRxJwB;EQyJxB,WC1FM;AXqyChB;;AUv0CA;EA+HY,gEAA8D;AV4sC1E;;AU30CA;EAqIc,0DAA4E;AV0sC1F;;AU/0CA;;EAwIU,6BAA6B;EAC7B,qBRtKwB;EQuKxB,gBAAgB;EAChB,cRxKwB;AFo3ClC;;AUv1CA;EA6IQ,6BAA6B;EAC7B,kBC5GQ;ED6GR,WC7GQ;AX2zChB;;AU71CA;EAoJU,sBClHM;EDmHN,cRlLwB;AF+3ClC;;AUl2CA;EA4Jc,gEAA8D;AV0sC5E;;AUt2CA;;EA+JU,6BAA6B;EAC7B,kBC9HM;ED+HN,gBAAgB;EAChB,WChIM;AX40ChB;;AU92CA;EAwKU,yBC/HsC;EDgItC,cCvH2D;AXi0CrE;;AUn3CA;EA4KY,yBC/GqB;EDgHrB,yBAAyB;EACzB,cC5HyD;AXu0CrE;;AUz3CA;EAiLY,yBCpHqB;EDqHrB,yBAAyB;E
ACzB,cCjIyD;AX60CrE;;AU/3CA;EA2EM,yBRzG4B;EQ0G5B,yBAAyB;EACzB,yBC7Ce;AXq2CrB;;AUr4CA;EAgFQ,yBCnByB;EDoBzB,yBAAyB;EACzB,yBClDa;AX22CrB;;AU34CA;EAqFQ,yBAAyB;EACzB,yBCtDa;AXg3CrB;;AUh5CA;EAwFU,kDRtHwB;AFk7ClC;;AUp5CA;EA2FQ,yBC9ByB;ED+BzB,yBAAyB;EACzB,yBC7Da;AX03CrB;;AU15CA;;EAgGQ,yBR9H0B;EQ+H1B,yBAAyB;EACzB,gBAAgB;AV+zCxB;;AUj6CA;EAoGQ,oCCpEa;EDqEb,cRnI0B;AFo8ClC;;AUt6CA;EAwGU,oCC3CuB;AX62CjC;;AU16CA;;EA2GU,oCC3EW;ED4EX,yBAAyB;EACzB,gBAAgB;EAChB,cR5IwB;AFg9ClC;;AUl7CA;EAiHU,sFAA4E;AVq0CtF;;AUt7CA;EAmHQ,6BAA6B;EAC7B,qBRlJ0B;EQmJ1B,cRnJ0B;AF09ClC;;AU57CA;EA0HU,yBRxJwB;EQyJxB,qBRzJwB;EQ0JxB,yBC5FW;AXk6CrB;;AUl8CA;EA+HY,gEAA8D;AVu0C1E;;AUt8CA;EAqIc,sFAA4E;AVq0C1F;;AU18CA;;EAwIU,6BAA6B;EAC7B,qBRvKwB;EQwKxB,gBAAgB;EAChB,cRzKwB;AFg/ClC;;AUl9CA;EA6IQ,6BAA6B;EAC7B,gCC9Ga;ED+Gb,yBC/Ga;AXw7CrB;;AUx9CA;EAoJU,oCCpHW;EDqHX,cRnLwB;AF2/ClC;;AU79CA;EA4Jc,gEAA8D;AVq0C5E;;AUj+CA;;EA+JU,6BAA6B;EAC7B,gCChIW;EDiIX,gBAAgB;EAChB,yBClIW;AXy8CrB;;AUz+CA;EAwKU,yBC/HsC;EDgItC,cCvH2D;AX47CrE;;AU9+CA;EA4KY,yBC/GqB;EDgHrB,yBAAyB;EACzB,cC5HyD;AXk8CrE;;AUp/CA;EAiLY,yBCpHqB;EDqHrB,yBAAyB;EACzB,cCjIyD;AXw8CrE;;AU1/CA;EA2EM,yBRnG2B;EQoG3B,yBAAyB;EACzB,WC3CU;AX89ChB;;AUhgDA;EAgFQ,yBCnByB;EDoBzB,yBAAyB;EACzB,WChDQ;AXo+ChB;;AUtgDA;EAqFQ,yBAAyB;EACzB,WCpDQ;AXy+ChB;;AU3gDA;EAwFU,kDRhHuB;AFuiDjC;;AU/gDA;EA2FQ,yBC9ByB;ED+BzB,yBAAyB;EACzB,WC3DQ;AXm/ChB;;AUrhDA;;EAgGQ,yBRxHyB;EQyHzB,yBAAyB;EACzB,gBAAgB;AV07CxB;;AU5hDA;EAoGQ,sBClEQ;EDmER,cR7HyB;AFyjDjC;;AUjiDA;EAwGU,yBC3CuB;AXw+CjC;;AUriDA;;EA2GU,sBCzEM;ED0EN,yBAAyB;EACzB,gBAAgB;EAChB,cRtIuB;AFqkDjC;;AU7iDA;EAiHU,0DAA4E;AVg8CtF;;AUjjDA;EAmHQ,6BAA6B;EAC7B,qBR5IyB;EQ6IzB,cR7IyB;AF+kDjC;;AUvjDA;EA0HU,yBRlJuB;EQmJvB,qBRnJuB;EQoJvB,WC1FM;AX2hDhB;;AU7jDA;EA+HY,gEAA8D;AVk8C1E;;AUjkDA;EAqIc,0DAA4E;AVg8C1F;;AUrkDA;;EAwIU,6BAA6B;EAC7B,qBRjKuB;EQkKvB,gBAAgB;EAChB,cRnKuB;AFqmDjC;;AU7kDA;EA6IQ,6BAA6B;EAC7B,kBC5GQ;ED6GR,WC7GQ;AXijDhB;;AUnlDA;EAoJU,sBClHM;EDmHN,cR7KuB;AFgnDjC;;AUxlDA;EA4Jc,gEAA8D;AVg8C5E;;AU5lDA;;EA+JU,6BAA6B;EAC7B,kBC9HM;ED+HN,gBAAgB;EAChB,WChIM;AXkkDhB;;AUpmDA
;EAwKU,yBC/HsC;EDgItC,cCvH2D;AXujDrE;;AUzmDA;EA4KY,yBC/GqB;EDgHrB,yBAAyB;EACzB,cC5HyD;AX6jDrE;;AU/mDA;EAiLY,yBCpHqB;EDqHrB,yBAAyB;EACzB,cCjIyD;AXmkDrE;;AUrnDA;EATE,kBR6BgB;EQ5BhB,kBRFc;AFooDhB;;AU1nDA;EANE,eRLW;AFyoDb;;AU9nDA;EAJE,kBRRc;AF8oDhB;;AUloDA;EAFE,iBRXa;AFmpDf;;AUtoDA;;EAgMI,uBRjO2B;EQkO3B,qBRvO0B;EQwO1B,gBAtNyB;EAuNzB,YAtNyB;AViqD7B;;AU9oDA;EAqMI,aAAa;EACb,WAAW;AV68Cf;;AUnpDA;EAwMI,6BAA6B;EAC7B,oBAAoB;AV+8CxB;;AUxpDA;ETvCE,kBAAkB;EAKhB,2BAAiC;EACjC,0BAAgC;ES8O9B,6BAA6B;AVk9CnC;;AU/pDA;EA+MI,4BRlP0B;EQmP1B,qBRtP0B;EQuP1B,cRzP0B;EQ0P1B,gBAAgB;EAChB,oBAAoB;AVo9CxB;;AUvqDA;EAqNI,uBR9LqB;EQ+LrB,gCAA0D;EAC1D,iCAA2D;AVs9C/D;;AUp9CA;EACE,mBAAmB;EACnB,aAAa;EACb,eAAe;EACf,2BAA2B;AVu9C7B;;AU39CA;EAMI,qBAAqB;AVy9CzB;;AU/9CA;ETzHI,oBSiIwC;AV29C5C;;AUn+CA;EAUI,sBAAsB;AV69C1B;;AUv+CA;EAYI,mBAAmB;AV+9CvB;;AU3+CA;EAlOE,kBR6BgB;EQ5BhB,kBRFc;AFmtDhB;;AUh/CA;EA7NE,kBRRc;AFytDhB;;AUp/CA;EA3NE,iBRXa;AF8tDf;;AUx/CA;EA0BQ,4BAA4B;EAC5B,yBAAyB;AVk+CjC;;AU7/CA;EA6BQ,6BAA6B;EAC7B,0BAA0B;ETvJ9B,kBSwJwC;AVo+C5C;;AUngDA;ETzHI,eS0JqC;AVs+CzC;;AUvgDA;EAoCQ,UAAU;AVu+ClB;;AU3gDA;EA0CQ,UAAU;AVq+ClB;;AU/gDA;EA4CU,UAAU;AVu+CpB;;AUnhDA;EA8CQ,YAAY;EACZ,cAAc;AVy+CtB;;AUxhDA;EAiDI,uBAAuB;AV2+C3B;;AU5hDA;EAoDQ,oBAAoB;EACpB,qBAAqB;AV4+C7B;;AUjiDA;EAuDI,yBAAyB;AV8+C7B;;AUriDA;EA0DQ,oBAAoB;EACpB,qBAAqB;AV++C7B;;AYhzDA;EACE,YAAY;EACZ,cAAc;EACd,kBAAkB;EAClB,WAAW;AZmzDb;;AYvzDA;EAMI,0BAA0B;EAC1B,kBV2CM;EU1CN,mBV0CM;EUzCN,WAAW;AZqzDf;;AChuDE;EW9FF;IAWI,gBAAuC;EZwzDzC;AACF;;AC5tDI;EWxGJ;IAcM,iBAAqE;EZ2zDzE;AACF;;ACntDI;EWvHJ;IAiBM,iBAAiE;EZ8zDrE;AACF;;ACnuDI;EW7GJ;IAoBM,iBAAqE;EZi0DzE;AACF;;AC1tDI;EW5HJ;IAuBM,iBAAiE;EZo0DrE;AACF;;Aa50DA;EAII,kBAAkB;Ab40DtB;;Aah1DA;;;;;;;EAcM,kBAAkB;Ab40DxB;;Aa11DA;;;;;;EAqBI,cXlC0B;EWmC1B,gBXEiB;EWDjB,kBAxC+B;Abs3DnC;;Aar2DA;EAyBI,cAAc;EACd,oBAAoB;Abg1DxB;;Aa12DA;EA4BM,eAAe;Abk1DrB;;Aa92DA;EA8BI,iBAAiB;EACjB,uBAAuB;Abo1D3B;;Aan3DA;EAiCM,oBAAoB;Abs1D1B;;Aav3DA;EAmCI,gBAAgB;EAChB,uBAAuB;Abw1D3B;;Aa53DA;EAsCM,oBAAoB;Ab01D1B;;Aah4DA;EAwCI,iBAAiB;EACjB,oBAAoB;Ab41DxB;;Aar4DA;E
A2CI,kBAAkB;EAClB,uBAAuB;Ab81D3B;;Aa14DA;EA8CI,cAAc;EACd,kBAAkB;Abg2DtB;;Aa/4DA;EAiDI,4BXvD0B;EDmI1B,8BCtI0B;EW4D1B,qBAhEqC;Abk6DzC;;Aar5DA;EAqDI,4BAA4B;EZwE5B,gBYvEmC;EACnC,eAAe;Abo2DnB;;Aa35DA;EAyDM,wBAAwB;Abs2D9B;;Aa/5DA;EA2DQ,4BAA4B;Abw2DpC;;Aan6DA;EA6DQ,4BAA4B;Ab02DpC;;Aav6DA;EA+DQ,4BAA4B;Ab42DpC;;Aa36DA;EAiEQ,4BAA4B;Ab82DpC;;Aa/6DA;EAmEI,wBAAwB;EZ0DxB,gBYzDmC;EACnC,eAAe;Abg3DnB;;Aar7DA;EAuEM,uBAAuB;EACvB,iBAAiB;Abk3DvB;;Aa17DA;EA0EQ,uBAAuB;Abo3D/B;;Aa97DA;EZ6HI,gBYjDmC;Abs3DvC;;Aal8DA;EA8EI,gBAAgB;EAChB,iBAAiB;EACjB,kBAAkB;Abw3DtB;;Aax8DA;EAkFM,eAAe;Ab03DrB;;Aa58DA;EAoFM,kBAAkB;Ab43DxB;;Aah9DA;EAsFM,qBAAqB;Ab83D3B;;Aap9DA;EAwFM,kBAAkB;Abg4DxB;;Aax9DA;EZ2CE,iCAAiC;EYgD/B,gBAAgB;EAChB,qBAvG8B;EAwG9B,gBAAgB;EAChB,iBAAiB;Abk4DrB;;Aah+DA;;EAiGI,cAAc;Abo4DlB;;Aar+DA;EAmGI,WAAW;Abs4Df;;Aaz+DA;;EAsGM,yBX/GwB;EWgHxB,qBA/GmC;EAgHnC,qBA/GmC;EAgHnC,mBAAmB;Abw4DzB;;Aaj/DA;EA2GM,cXxHwB;AFkgE9B;;Aar/DA;EA6GQ,mBAAmB;Ab44D3B;;Aaz/DA;;EAiHQ,qBAtHsC;EAuHtC,cX/HsB;AF4gE9B;;Aa//DA;;EAsHQ,qBAzHsC;EA0HtC,cXpIsB;AFkhE9B;;AargEA;;EA6HY,sBAAsB;Ab64DlC;;Aa1gEA;EAgIM,aAAa;Ab84DnB;;Aa9gEA;EAmII,kBXhHY;AF+/DhB;;AalhEA;EAqII,kBXpHY;AFqgEhB;;AathEA;EAuII,iBXvHW;AF0gEf;;AcxiEA;EACE,mBAAmB;EACnB,oBAAoB;EACpB,uBAAuB;EACvB,cATsB;EAUtB,aAVsB;AdqjExB;;AchjEA;EAQI,YAZwB;EAaxB,WAbwB;AdyjE5B;;AcrjEA;EAWI,YAdyB;EAezB,WAfyB;Ad6jE7B;;Ac1jEA;EAcI,YAhBwB;EAiBxB,WAjBwB;AdikE5B;;AelkEA;EACE,cAAc;EACd,kBAAkB;AfqkEpB;;AevkEA;EAII,cAAc;EACd,YAAY;EACZ,WAAW;AfukEf;;Ae7kEA;EAQM,uBb6DmB;AF4gEzB;;AejlEA;EAUI,WAAW;Af2kEf;;AerlEA;;;;;;;;;;;;;;;;;EA+BM,YAAY;EACZ,WAAW;Af0kEjB;;Ae1mEA;EAmCI,iBAAiB;Af2kErB;;Ae9mEA;EAqCI,gBAAgB;Af6kEpB;;AelnEA;EAuCI,gBAAgB;Af+kEpB;;AetnEA;EAyCI,qBAAqB;AfilEzB;;Ae1nEA;EA2CI,gBAAgB;AfmlEpB;;Ae9nEA;EA6CI,mBAAmB;AfqlEvB;;AeloEA;EA+CI,gBAAgB;AfulEpB;;AetoEA;EAiDI,qBAAqB;AfylEzB;;Ae1oEA;EAmDI,iBAAiB;Af2lErB;;Ae9oEA;EAqDI,sBAAsB;Af6lE1B;;AelpEA;EAuDI,iBAAiB;Af+lErB;;AetpEA;EAyDI,sBAAsB;AfimE1B;;Ae1pEA;EA2DI,sBAAsB;AfmmE1B;;Ae9pEA;EA6DI,iBAAiB;AfqmErB;;AelqEA;EA+DI,iBAAiB;AfumErB;;AetqEA;EAmEM,YA
AwB;EACxB,WAAuB;AfumE7B;;Ae3qEA;EAmEM,YAAwB;EACxB,WAAuB;Af4mE7B;;AehrEA;EAmEM,YAAwB;EACxB,WAAuB;AfinE7B;;AerrEA;EAmEM,YAAwB;EACxB,WAAuB;AfsnE7B;;Ae1rEA;EAmEM,YAAwB;EACxB,WAAuB;Af2nE7B;;Ae/rEA;EAmEM,YAAwB;EACxB,WAAuB;AfgoE7B;;AepsEA;EAmEM,aAAwB;EACxB,YAAuB;AfqoE7B;;AgBlsEA;EAEE,4BdE4B;EcD5B,kBdyDU;EcxDV,kBAAkB;EAEhB,sCAXoD;AhB8sExD;;AgBzsEA;EAUI,mBAAmB;EACnB,0BAA0B;AhBmsE9B;;AgB9sEA;EAaI,mBAAmB;AhBqsEvB;;AgBltEA;;EAgBI,iBdV2B;AFitE/B;;AgBvtEA;EAkBI,uBAAuB;AhBysE3B;;AgB3tEA;Ef+II,ae3H4B;EAC5B,kBAAkB;EAClB,WAAW;AhB2sEf;;AgBjuEA;;;EA0BI,mBAAmB;AhB6sEvB;;AgBvuEA;EAgCM,uBd1ByB;Ec2BzB,cdxCuB;AFmvE7B;;AgB5uEA;EAgCM,yBdvCuB;EcwCvB,Yd3ByB;AF2uE/B;;AgBjvEA;EAgCM,4Bd5BwB;Ec6BxB,yBLsCe;AX+qErB;;AgBtvEA;EAgCM,yBdnCwB;EcoCxB,WLwCU;AXkrEhB;;AgB3vEA;EAgCM,yBdrB4B;EcsB5B,WLwCU;AXurEhB;;AgBhwEA;EAuCU,yBLyCsC;EKxCtC,cLiD2D;AX4qErE;;AgBrwEA;EAgCM,yBdnB4B;EcoB5B,WLwCU;AXisEhB;;AgB1wEA;EAuCU,yBLyCsC;EKxCtC,cLiD2D;AXsrErE;;AgB/wEA;EAgCM,yBdpB4B;EcqB5B,WLwCU;AX2sEhB;;AgBpxEA;EAuCU,yBLyCsC;EKxCtC,cLiD2D;AXgsErE;;AgBzxEA;EAgCM,yBdtB4B;EcuB5B,WLwCU;AXqtEhB;;AgB9xEA;EAuCU,yBLyCsC;EKxCtC,cLiD2D;AX0sErE;;AgBnyEA;EAgCM,yBdvB4B;EcwB5B,yBLsCe;AXiuErB;;AgBxyEA;EAuCU,yBLyCsC;EKxCtC,cLiD2D;AXotErE;;AgB7yEA;EAgCM,yBdjB2B;EckB3B,WLwCU;AXyuEhB;;AgBlzEA;EAuCU,yBLyCsC;EKxCtC,cLiD2D;AX8tErE;;AiBxzEA;EAEE,qBAAqB;EACrB,wBAAwB;EACxB,YAAY;EACZ,uBf0DuB;EezDvB,cAAc;EACd,YfsBW;EerBX,gBAAgB;EAChB,UAAU;EACV,WAAW;AjB0zEb;;AiBp0EA;EAYI,yBfT2B;AFq0E/B;;AiBx0EA;EAcI,yBff0B;AF60E9B;;AiB50EA;EAgBI,yBfjB0B;AFi1E9B;;AiBh1EA;EAkBI,yBfnB0B;EeoB1B,YAAY;AjBk0EhB;;AiBr1EA;EAyBQ,uBflBuB;AFk1E/B;;AiBz1EA;EA2BQ,uBfpBuB;AFs1E/B;;AiB71EA;EA6BQ,uBftBuB;AF01E/B;;AiBj2EA;EA+BQ,mEAA2F;AjBs0EnG;;AiBr2EA;EAyBQ,yBf/BqB;AF+2E7B;;AiBz2EA;EA2BQ,yBfjCqB;AFm3E7B;;AiB72EA;EA6BQ,yBfnCqB;AFu3E7B;;AiBj3EA;EA+BQ,qEAA2F;AjBs1EnG;;AiBr3EA;EAyBQ,4BfpBsB;AFo3E9B;;AiBz3EA;EA2BQ,4BftBsB;AFw3E9B;;AiB73EA;EA6BQ,4BfxBsB;AF43E9B;;AiBj4EA;EA+BQ,wEAA2F;AjBs2EnG;;AiBr4EA;EAyBQ,yBf3BsB;AF24E9B;;AiBz4EA;EA2BQ,yBf7BsB;AF+4E9B;;AiB74EA;EA6BQ,yBf/BsB;AFm5E9B;;AiBj5EA;E
A+BQ,qEAA2F;AjBs3EnG;;AiBr5EA;EAyBQ,yBfb0B;AF64ElC;;AiBz5EA;EA2BQ,yBff0B;AFi5ElC;;AiB75EA;EA6BQ,yBfjB0B;AFq5ElC;;AiBj6EA;EA+BQ,qEAA2F;AjBs4EnG;;AiBr6EA;EAyBQ,yBfX0B;AF25ElC;;AiBz6EA;EA2BQ,yBfb0B;AF+5ElC;;AiB76EA;EA6BQ,yBff0B;AFm6ElC;;AiBj7EA;EA+BQ,qEAA2F;AjBs5EnG;;AiBr7EA;EAyBQ,yBfZ0B;AF46ElC;;AiBz7EA;EA2BQ,yBfd0B;AFg7ElC;;AiB77EA;EA6BQ,yBfhB0B;AFo7ElC;;AiBj8EA;EA+BQ,qEAA2F;AjBs6EnG;;AiBr8EA;EAyBQ,yBfd0B;AF87ElC;;AiBz8EA;EA2BQ,yBfhB0B;AFk8ElC;;AiB78EA;EA6BQ,yBflB0B;AFs8ElC;;AiBj9EA;EA+BQ,qEAA2F;AjBs7EnG;;AiBr9EA;EAyBQ,yBff0B;AF+8ElC;;AiBz9EA;EA2BQ,yBfjB0B;AFm9ElC;;AiB79EA;EA6BQ,yBfnB0B;AFu9ElC;;AiBj+EA;EA+BQ,qEAA2F;AjBs8EnG;;AiBr+EA;EAyBQ,yBfTyB;AFy9EjC;;AiBz+EA;EA2BQ,yBfXyB;AF69EjC;;AiB7+EA;EA6BQ,yBfbyB;AFi+EjC;;AiBj/EA;EA+BQ,qEAA2F;AjBs9EnG;;AiBr/EA;EAkCI,gCAtCkC;UAsClC,wBAtCkC;EAuClC,2CAAmC;UAAnC,mCAAmC;EACnC,yCAAiC;UAAjC,iCAAiC;EACjC,yCAAiC;UAAjC,iCAAiC;EACjC,yBfnC2B;EeoC3B,qEAA0F;EAC1F,6BAA6B;EAC7B,4BAA4B;EAC5B,0BAA0B;AjBu9E9B;;AiBjgFA;EA4CM,6BAA6B;AjBy9EnC;;AiBrgFA;EA8CM,6BAA6B;AjB29EnC;;AiBzgFA;EAgDM,oBAAoB;AjB69E1B;;AiB7gFA;EAoDI,eftBY;AFm/EhB;;AiBjhFA;EAsDI,ef1BY;AFy/EhB;;AiBrhFA;EAwDI,cf7BW;AF8/Ef;;AiB/9EA;EACE;IACE,2BAA2B;EjBk+E7B;EiBj+EA;IACE,4BAA4B;EjBm+E9B;AACF;;AiBx+EA;EACE;IACE,2BAA2B;EjBk+E7B;EiBj+EA;IACE,4BAA4B;EjBm+E9B;AACF;;AkB/gFA;EAEE,uBhBd6B;EgBe7B,chBxB4B;AFyiF9B;;AkBphFA;;EAMI,yBhBvB0B;EgBwB1B,qBA9B6B;EA+B7B,qBA9B6B;EA+B7B,mBAAmB;AlBmhFvB;;AkB5hFA;;EAeQ,uBhB3BuB;EgB4BvB,mBhB5BuB;EgB6BvB,chB1CqB;AF4jF7B;;AkBniFA;;EAeQ,yBhBxCqB;EgByCrB,qBhBzCqB;EgB0CrB,YhB7BuB;AFsjF/B;;AkB1iFA;;EAeQ,4BhB7BsB;EgB8BtB,wBhB9BsB;EgB+BtB,yBPoCa;AX4/ErB;;AkBjjFA;;EAeQ,yBhBpCsB;EgBqCtB,qBhBrCsB;EgBsCtB,WPsCQ;AXigFhB;;AkBxjFA;;EAeQ,yBhBtB0B;EgBuB1B,qBhBvB0B;EgBwB1B,WPsCQ;AXwgFhB;;AkB/jFA;;EAeQ,yBhBpB0B;EgBqB1B,qBhBrB0B;EgBsB1B,WPsCQ;AX+gFhB;;AkBtkFA;;EAeQ,yBhBrB0B;EgBsB1B,qBhBtB0B;EgBuB1B,WPsCQ;AXshFhB;;AkB7kFA;;EAeQ,yBhBvB0B;EgBwB1B,qBhBxB0B;EgByB1B,WPsCQ;AX6hFhB;;AkBplFA;;EAeQ,yBhBxB0B;EgByB1B,qBhBzB0B;EgB0B1B,yBPoCa;AXsiFrB;;AkB3lFA;;EAeQ,yBhBlByB;EgBmBzB,qBhBnByB;
EgBoBzB,WPsCQ;AX2iFhB;;AkBlmFA;;EAoBM,mBAAmB;EACnB,SAAS;AlBmlFf;;AkBxmFA;;EAuBM,yBhB9B4B;EgB+B5B,WP+BU;AXujFhB;;AkB9mFA;;;;EA2BQ,mBAAmB;AlB0lF3B;;AkBrnFA;;EA6BM,sBAAsB;AlB6lF5B;;AkB1nFA;EA+BI,chBpD0B;AFmpF9B;;AkB9nFA;EAiCM,mBAAmB;AlBimFzB;;AkBloFA;EAoCM,yBhB3C4B;EgB4C5B,WPkBU;AXglFhB;;AkBvoFA;;EAwCQ,mBAAmB;AlBomF3B;;AkB5oFA;;EA2CQ,kBPYQ;EOXR,mBAAmB;AlBsmF3B;;AkBlpFA;EA8CI,6BA5DqC;AlBoqFzC;;AkBtpFA;;EAiDM,qBApEgC;EAqEhC,chBvEwB;AFirF9B;;AkB5pFA;EAoDI,6BAhEqC;AlB4qFzC;;AkBhqFA;;EAuDM,qBAxEgC;EAyEhC,chB7EwB;AF2rF9B;;AkBtqFA;EA0DI,6BAvEqC;AlBurFzC;;AkB1qFA;;EA+DU,sBAAsB;AlBgnFhC;;AkB/qFA;;EAoEM,iBAAiB;AlBgnFvB;;AkBprFA;;EAyEU,wBAAwB;AlBgnFlC;;AkBzrFA;EA2EI,WAAW;AlBknFf;;AkB7rFA;EAgFU,yBhB7FoB;AF8sF9B;;AkBjsFA;EAqFY,yBhBlGkB;AFktF9B;;AkBrsFA;EAuFc,4BhBrGgB;AFutF9B;;AkBzsFA;;EA2FM,qBAAqB;AlBmnF3B;;AkB9sFA;EAgGU,yBhB7GoB;AF+tF9B;;AkBhnFA;EjB/DE,iCAAiC;EiBkEjC,cAAc;EACd,kBAAkB;EAClB,eAAe;AlBknFjB;;AmB7uFA;EACE,mBAAmB;EACnB,aAAa;EACb,eAAe;EACf,2BAA2B;AnBgvF7B;;AmBpvFA;EAMI,qBAAqB;AnBkvFzB;;AmBxvFA;ElByII,oBkBjIwC;AnBovF5C;;AmB5vFA;EAUI,sBAAsB;AnBsvF1B;;AmBhwFA;EAYI,mBAAmB;AnBwvFvB;;AmBpwFA;EAgBM,ejBcO;AF0uFb;;AmBxwFA;EAmBM,kBjBUU;AF+uFhB;;AmB5wFA;EAqBI,uBAAuB;AnB2vF3B;;AmBhxFA;EAuBM,qBAAqB;EACrB,oBAAoB;AnB6vF1B;;AmBrxFA;EA0BI,yBAAyB;AnB+vF7B;;AmBzxFA;EA6BQ,mBAAmB;AnBgwF3B;;AmB7xFA;EA+BQ,eAAe;AnBkwFvB;;AmBjyFA;ElByII,ekBvGmC;AnBmwFvC;;AmBryFA;ElByII,ckBrGqC;EAE/B,yBAAyB;EACzB,4BAA4B;AnBowFtC;;AmB3yFA;EA6CU,0BAA0B;EAC1B,6BAA6B;AnBkwFvC;;AmB7vFA;EACE,mBAAmB;EACnB,4BjB/C4B;EiBgD5B,kBjBQU;EiBPV,cjBvD4B;EiBwD5B,oBAAoB;EACpB,kBjB1Bc;EiB2Bd,WAAW;EACX,uBAAuB;EACvB,gBAAgB;EAChB,oBAAoB;EACpB,qBAAqB;EACrB,mBAAmB;AnBgwFrB;;AmB5wFA;ElBsFI,oBkBxEuC;ElBwEvC,uBkBvEyC;AnBkwF7C;;AmBjxFA;EAqBM,uBjBhEyB;EiBiEzB,cjB9EuB;AF80F7B;;AmBtxFA;EAqBM,yBjB7EuB;EiB8EvB,YjBjEyB;AFs0F/B;;AmB3xFA;EAqBM,4BjBlEwB;EiBmExB,yBRAe;AX0wFrB;;AmBhyFA;EAqBM,yBjBzEwB;EiB0ExB,WREU;AX6wFhB;;AmBryFA;EAqBM,yBjB3D4B;EiB4D5B,WREU;AXkxFhB;;AmB1yFA;EA4BU,yBRGsC;EQFtC,cRW2D;AXuwFrE;;AmB/yFA;EAqBM,yBjBzD4B;EiB0D5B,WREU;AX4xFhB;;A
mBpzFA;EA4BU,yBRGsC;EQFtC,cRW2D;AXixFrE;;AmBzzFA;EAqBM,yBjB1D4B;EiB2D5B,WREU;AXsyFhB;;AmB9zFA;EA4BU,yBRGsC;EQFtC,cRW2D;AX2xFrE;;AmBn0FA;EAqBM,yBjB5D4B;EiB6D5B,WREU;AXgzFhB;;AmBx0FA;EA4BU,yBRGsC;EQFtC,cRW2D;AXqyFrE;;AmB70FA;EAqBM,yBjB7D4B;EiB8D5B,yBRAe;AX4zFrB;;AmBl1FA;EA4BU,yBRGsC;EQFtC,cRW2D;AX+yFrE;;AmBv1FA;EAqBM,yBjBvD2B;EiBwD3B,WREU;AXo0FhB;;AmB51FA;EA4BU,yBRGsC;EQFtC,cRW2D;AXyzFrE;;AmBj2FA;EAgCI,kBjBpDY;AFy3FhB;;AmBr2FA;EAkCI,ejBvDS;AF83Fb;;AmBz2FA;EAoCI,kBjB1DY;AFm4FhB;;AmB72FA;ElBsFI,qBkB/C0C;ElB+C1C,sBkB9C0C;AnB00F9C;;AmBl3FA;ElBsFI,qBkB5C0C;ElB4C1C,sBkB3C0C;AnB40F9C;;AmBv3FA;ElBsFI,qBkBzC0C;ElByC1C,sBkBxC0C;AnB80F9C;;AmB53FA;ElBsFI,gBkB7ImB;EAyGnB,UAAU;EACV,kBAAkB;EAClB,UAAU;AnB+0Fd;;AmBn4FA;EAuDM,8BAA8B;EAC9B,WAAW;EACX,cAAc;EACd,SAAS;EACT,kBAAkB;EAClB,QAAQ;EACR,0DAA0D;EAC1D,+BAA+B;AnBg1FrC;;AmB94FA;EAgEM,WAAW;EACX,UAAU;AnBk1FhB;;AmBn5FA;EAmEM,WAAW;EACX,UAAU;AnBo1FhB;;AmBx5FA;EAuEM,yBAAmD;AnBq1FzD;;AmB55FA;EAyEM,yBAAoD;AnBu1F1D;;AmBh6FA;EA2EI,uBjB9DqB;AFu5FzB;;AmBv1FA;EAEI,0BAA0B;AnBy1F9B;;AoB/8FA;;EAGE,sBAAsB;ApBi9FxB;;AoBp9FA;;;;EAMI,oBAAoB;ApBq9FxB;;AoB39FA;;EAQI,iBApBmB;ApB4+FvB;;AoBh+FA;;EAUI,iBArBmB;ApBg/FvB;;AoBr+FA;;EAYI,sBAAsB;ApB89F1B;;AoB59FA;EACE,clB5B4B;EkB+B5B,elBHW;EkBIX,gBlBKmB;EkBJnB,kBAnCuB;ApBggGzB;;AoBn+FA;EAQI,cApCwB;EAqCxB,oBApCyB;ApBmgG7B;;AoBx+FA;EAWI,oBAAoB;ApBi+FxB;;AoB5+FA;EAaI,oBA7B+B;ApBggGnC;;AoBh/FA;EAkBM,elBnBO;AFq/Fb;;AoBp/FA;EAkBM,iBlBlBS;AFw/Ff;;AoBx/FA;EAkBM,elBjBO;AF2/Fb;;AoB5/FA;EAkBM,iBlBhBS;AF8/Ff;;AoBhgGA;EAkBM,kBlBfU;AFigGhB;;AoBpgGA;EAkBM,elBdO;AFogGb;;AoBxgGA;EAkBM,kBlBbU;AFugGhB;;AoBx/FA;EACE,clB/C4B;EkBkD5B,kBlBrBc;EkBsBd,gBlBjBiB;EkBkBjB,iBA7CyB;ApBsiG3B;;AoB//FA;EAQI,clBvD0B;EkBwD1B,gBlBnBiB;AF8gGrB;;AoBpgGA;EAWI,oBA/C+B;ApB4iGnC;;AoBxgGA;EAgBM,elBrCO;AFiiGb;;AoB5gGA;EAgBM,iBlBpCS;AFoiGf;;AoBhhGA;EAgBM,elBnCO;AFuiGb;;AoBphGA;EAgBM,iBlBlCS;AF0iGf;;AoBxhGA;EAgBM,kBlBjCU;AF6iGhB;;AoB5hGA;EAgBM,elBhCO;AFgjGb;;AoBhiGA;EAgBM,kBlB/BU;AFmjGhB;;AqBnlGA;EACE,cAAc;EACd,eAAe;EACf,mBAAmB;EACnB,kBAAkB;EAClB,yBAAyB;ArBslG3B;
;AqBplGA;EAEE,gBnB0BiB;EmBzBjB,eAAe;EACf,gBAAgB;EAChB,UAAU;ArBslGZ;;AqB3lGA;EAOI,cAAc;EACd,eAAe;ArBwlGnB;;AqBnlGA;EACE,mBAAmB;EACnB,4BnBf4B;EmBgB5B,uBnB0CuB;EmBzCvB,oBAAoB;EACpB,kBnBKc;EmBJd,WAAW;EACX,uBAAuB;EACvB,oBAAoB;EACpB,gBAAgB;EAChB,uBAAuB;EACvB,kBAAkB;EAClB,mBAAmB;ArBslGrB;;AsB5nGA,eAAA;ACuDA;EAxBE,uBrBhB6B;EqBiB7B,qBrBtB4B;EqBuB5B,kBrBoCU;EqBnCV,crB5B4B;AF8nG9B;;ACjkGI;EsB/BA,4BrB9B0B;AFkoG9B;;ACrkGI;EsB/BA,4BrB9B0B;AFsoG9B;;ACzkGI;EsB/BA,4BrB9B0B;AF0oG9B;;AC7kGI;EsB/BA,4BrB9B0B;AF8oG9B;;AuB/mGE;EAEE,qBrB9B0B;AF+oG9B;;AuBhnGE;EAIE,qBrBtB8B;EqBuB9B,kDrBvB8B;AFuoGlC;;AuB/mGE;;;;;EAEE,4BrBnC0B;EqBoC1B,wBrBpC0B;EqBqC1B,gBAAgB;EAChB,crB3C0B;AFgqG9B;;ACrmGI;;;;;EsBdE,+BrB7CwB;AFwqG9B;;AC7mGI;;;;;EsBdE,+BrB7CwB;AFgrG9B;;ACrnGI;;;;;EsBdE,+BrB7CwB;AFwrG9B;;AC7nGI;;;;;EsBdE,+BrB7CwB;AFgsG9B;;AwBlsGA;EAEE,2DtBN2B;EsBO3B,eAAe;EACf,WAAW;AxBosGb;;AwBnsGE;EACE,gBAAgB;AxBssGpB;;AwBlsGI;EACE,mBtBFyB;AFusG/B;;AwBtsGK;EAMG,mDtBPuB;AF2sG/B;;AwB1sGI;EACE,qBtBfuB;AF4tG7B;;AwB9sGK;EAMG,gDtBpBqB;AFguG7B;;AwBltGI;EACE,wBtBJwB;AFytG9B;;AwBttGK;EAMG,mDtBTsB;AF6tG9B;;AwB1tGI;EACE,qBtBXwB;AFwuG9B;;AwB9tGK;EAMG,gDtBhBsB;AF4uG9B;;AwBluGI;EACE,qBtBG4B;AFkuGlC;;AwBtuGK;EAMG,iDtBF0B;AFsuGlC;;AwB1uGI;EACE,qBtBK4B;AFwuGlC;;AwB9uGK;EAMG,kDtBA0B;AF4uGlC;;AwBlvGI;EACE,qBtBI4B;AFivGlC;;AwBtvGK;EAMG,kDtBD0B;AFqvGlC;;AwB1vGI;EACE,qBtBE4B;AF2vGlC;;AwB9vGK;EAMG,kDtBH0B;AF+vGlC;;AwBlwGI;EACE,qBtBC4B;AFowGlC;;AwBtwGK;EAMG,kDtBJ0B;AFwwGlC;;AwB1wGI;EACE,qBtBO2B;AFswGjC;;AwB9wGK;EAMG,kDtBEyB;AF0wGjC;;AwB1wGE;ErBoBA,kBDwBgB;ECvBhB,kBDPc;AFiwGhB;;AwB7wGE;ErBqBA,kBDXc;AFuwGhB;;AwB/wGE;ErBqBA,iBDda;AF4wGf;;AwBhxGE;EACE,cAAc;EACd,WAAW;AxBmxGf;;AwBlxGE;EACE,eAAe;EACf,WAAW;AxBqxGf;;AwBnxGA;EAGI,uBtB8BqB;EsB7BrB,gDAA4D;EAC5D,iDAA6D;AxBoxGjE;;AwBzxGA;EAOI,6BAA6B;EAC7B,yBAAyB;EACzB,gBAAgB;EAChB,eAAe;EACf,gBAAgB;AxBsxGpB;;AwBpxGA;EAEE,cAAc;EACd,eAAe;EACf,eAAe;EACf,2BrB/CkE;EqBgDlE,gBAAgB;AxBsxGlB;;AwB5xGA;EAQI,gBA1DsB;EA2DtB,eA1DqB;AxBk1GzB;;AwBjyGA;EAWI,eAAe;AxB0xGnB;;AwBryGA;EAcI,YAAY;AxB2xGhB;;AyB51GA;EACE,eAAe;E
ACf,qBAAqB;EACrB,iBAAiB;EACjB,kBAAkB;AzB+1GpB;;AyB91GE;EACE,eAAe;AzBi2GnB;;AyBh2GE;EACE,cvBF0B;AFq2G9B;;AyBl2GE;;;;;EAGE,cvBJ0B;EuBK1B,mBAAmB;AzBu2GvB;;AyBl2GA;ExB8HI,kBwB3HqC;AzBm2GzC;;A0Bt3GA;EACE,qBAAqB;EACrB,eAAe;EACf,kBAAkB;EAClB,mBAAmB;A1By3GrB;;A0B73GA;EAMI,avBHkB;AH83GtB;;A0Bj4GA;EAUM,qBxBU4B;EDkI9B,cyB3I+B;EAC7B,UAAU;A1B23GhB;;A0Bv4GA;EAeM,uBxBsDmB;EDyErB,iByB9HsC;A1B43G1C;;A0B54GA;EAmBI,eAAe;EACf,cAAc;EACd,cAAc;EACd,eAAe;EACf,aAAa;A1B63GjB;;A0Bp5GA;EAyBM,aAAa;A1B+3GnB;;A0Bx5GA;;EA4BM,wBxBjBwB;AFk5G9B;;A0B75GA;EzB8II,oByBhHwC;A1Bm4G5C;;A0Bj6GA;EAgCM,YAAY;EACZ,UAAU;A1Bq4GhB;;A0Bt6GA;EAmCQ,kBAAkB;A1Bu4G1B;;A0B16GA;EAuCM,qBxBnCwB;AF06G9B;;A0B96GA;EA6CQ,mBxBhCuB;AFq6G/B;;A0Bl7GA;EA+CQ,mBxBlCuB;AFy6G/B;;A0Bt7GA;EAkDU,qBfyDuB;AX+0GjC;;A0B17GA;EAuDU,mDxB1CqB;AFi7G/B;;A0B97GA;EA6CQ,qBxB7CqB;AFk8G7B;;A0Bl8GA;EA+CQ,qBxB/CqB;AFs8G7B;;A0Bt8GA;EAkDU,mBfyDuB;AX+1GjC;;A0B18GA;EAuDU,gDxBvDmB;AF88G7B;;A0B98GA;EA6CQ,wBxBlCsB;AFu8G9B;;A0Bl9GA;EA+CQ,wBxBpCsB;AF28G9B;;A0Bt9GA;EAkDU,qBfyDuB;AX+2GjC;;A0B19GA;EAuDU,mDxB5CoB;AFm9G9B;;A0B99GA;EA6CQ,qBxBzCsB;AF89G9B;;A0Bl+GA;EA+CQ,qBxB3CsB;AFk+G9B;;A0Bt+GA;EAkDU,qBfyDuB;AX+3GjC;;A0B1+GA;EAuDU,gDxBnDoB;AF0+G9B;;A0B9+GA;EA6CQ,qBxB3B0B;AFg+GlC;;A0Bl/GA;EA+CQ,qBxB7B0B;AFo+GlC;;A0Bt/GA;EAkDU,qBfyDuB;AX+4GjC;;A0B1/GA;EAuDU,iDxBrCwB;AF4+GlC;;A0B9/GA;EA6CQ,qBxBzB0B;AF8+GlC;;A0BlgHA;EA+CQ,qBxB3B0B;AFk/GlC;;A0BtgHA;EAkDU,qBfyDuB;AX+5GjC;;A0B1gHA;EAuDU,kDxBnCwB;AF0/GlC;;A0B9gHA;EA6CQ,qBxB1B0B;AF+/GlC;;A0BlhHA;EA+CQ,qBxB5B0B;AFmgHlC;;A0BthHA;EAkDU,qBfyDuB;AX+6GjC;;A0B1hHA;EAuDU,kDxBpCwB;AF2gHlC;;A0B9hHA;EA6CQ,qBxB5B0B;AFihHlC;;A0BliHA;EA+CQ,qBxB9B0B;AFqhHlC;;A0BtiHA;EAkDU,qBfyDuB;AX+7GjC;;A0B1iHA;EAuDU,kDxBtCwB;AF6hHlC;;A0B9iHA;EA6CQ,qBxB7B0B;AFkiHlC;;A0BljHA;EA+CQ,qBxB/B0B;AFsiHlC;;A0BtjHA;EAkDU,qBfyDuB;AX+8GjC;;A0B1jHA;EAuDU,kDxBvCwB;AF8iHlC;;A0B9jHA;EA6CQ,qBxBvByB;AF4iHjC;;A0BlkHA;EA+CQ,qBxBzByB;AFgjHjC;;A0BtkHA;EAkDU,qBfyDuB;AX+9GjC;;A0B1kHA;EAuDU,kDxBjCuB;AFwjHjC;;A0B9kHA;EvB0CE,kBDwBgB;ECvBhB,kBDPc;AF+iHhB;;A0BnlHA;EvB6CE,kBDXc;AFqjHhB;;A0
BvlHA;EvB+CE,iBDda;AF0jHf;;A0B3lHA;EAkEM,qBxB5DwB;AFylH9B;;A0B/lHA;EAoEI,WAAW;A1B+hHf;;A0BnmHA;EAsEM,WAAW;A1BiiHjB;;A0BvmHA;EA0EM,aAAa;EACb,kBAAkB;EzB2EpB,cyB1E+B;EAC7B,YAAY;EACZ,eAAe;A1BiiHrB;;A0B/mHA;EAgFM,kBxB5CU;AF+kHhB;;A0BnnHA;EAkFM,kBxBhDU;AFqlHhB;;A0BvnHA;EAoFM,iBxBnDS;AF0lHf;;A2B9mHA;EAEE,oBAAoB;EACpB,aAAa;EACb,2BAA2B;EAC3B,kBAAkB;A3BgnHpB;;A2BrnHA;EAYQ,uBzBZuB;EyBavB,yBAAyB;EACzB,czB3BqB;AFwoH7B;;A2B3nHA;EAkBU,yBhB4EuB;EgB3EvB,yBAAyB;EACzB,czBjCmB;AF8oH7B;;A2BjoHA;EAwBU,yBAAyB;EACzB,+CzBzBqB;EyB0BrB,czBvCmB;AFopH7B;;A2BvoHA;EA8BU,yBhBgEuB;EgB/DvB,yBAAyB;EACzB,czB7CmB;AF0pH7B;;A2B7oHA;EAYQ,yBzBzBqB;EyB0BrB,yBAAyB;EACzB,YzBduB;AFmpH/B;;A2BnpHA;EAkBU,yBhB4EuB;EgB3EvB,yBAAyB;EACzB,YzBpBqB;AFypH/B;;A2BzpHA;EAwBU,yBAAyB;EACzB,4CzBtCmB;EyBuCnB,YzB1BqB;AF+pH/B;;A2B/pHA;EA8BU,uBhBgEuB;EgB/DvB,yBAAyB;EACzB,YzBhCqB;AFqqH/B;;A2BrqHA;EAYQ,4BzBdsB;EyBetB,yBAAyB;EACzB,yBhBmDa;AX0mHrB;;A2B3qHA;EAkBU,yBhB4EuB;EgB3EvB,yBAAyB;EACzB,yBhB6CW;AXgnHrB;;A2BjrHA;EAwBU,yBAAyB;EACzB,+CzB3BoB;EyB4BpB,yBhBuCW;AXsnHrB;;A2BvrHA;EA8BU,yBhBgEuB;EgB/DvB,yBAAyB;EACzB,yBhBiCW;AX4nHrB;;A2B7rHA;EAYQ,yBzBrBsB;EyBsBtB,yBAAyB;EACzB,WhBqDQ;AXgoHhB;;A2BnsHA;EAkBU,yBhB4EuB;EgB3EvB,yBAAyB;EACzB,WhB+CM;AXsoHhB;;A2BzsHA;EAwBU,yBAAyB;EACzB,4CzBlCoB;EyBmCpB,WhByCM;AX4oHhB;;A2B/sHA;EA8BU,yBhBgEuB;EgB/DvB,yBAAyB;EACzB,WhBmCM;AXkpHhB;;A2BrtHA;EAYQ,yBzBP0B;EyBQ1B,yBAAyB;EACzB,WhBqDQ;AXwpHhB;;A2B3tHA;EAkBU,yBhB4EuB;EgB3EvB,yBAAyB;EACzB,WhB+CM;AX8pHhB;;A2BjuHA;EAwBU,yBAAyB;EACzB,6CzBpBwB;EyBqBxB,WhByCM;AXoqHhB;;A2BvuHA;EA8BU,yBhBgEuB;EgB/DvB,yBAAyB;EACzB,WhBmCM;AX0qHhB;;A2B7uHA;EAYQ,yBzBL0B;EyBM1B,yBAAyB;EACzB,WhBqDQ;AXgrHhB;;A2BnvHA;EAkBU,yBhB4EuB;EgB3EvB,yBAAyB;EACzB,WhB+CM;AXsrHhB;;A2BzvHA;EAwBU,yBAAyB;EACzB,8CzBlBwB;EyBmBxB,WhByCM;AX4rHhB;;A2B/vHA;EA8BU,yBhBgEuB;EgB/DvB,yBAAyB;EACzB,WhBmCM;AXksHhB;;A2BrwHA;EAYQ,yBzBN0B;EyBO1B,yBAAyB;EACzB,WhBqDQ;AXwsHhB;;A2B3wHA;EAkBU,yBhB4EuB;EgB3EvB,yBAAyB;EACzB,WhB+CM;AX8sHhB;;A2BjxHA;EAwBU,yBAAyB;EACzB,8CzBnBwB;EyBoBxB,WhByCM;AXotHhB;;A2BvxHA;EA8BU,yBhBgEuB;EgB/DvB,yBA
AyB;EACzB,WhBmCM;AX0tHhB;;A2B7xHA;EAYQ,yBzBR0B;EyBS1B,yBAAyB;EACzB,WhBqDQ;AXguHhB;;A2BnyHA;EAkBU,yBhB4EuB;EgB3EvB,yBAAyB;EACzB,WhB+CM;AXsuHhB;;A2BzyHA;EAwBU,yBAAyB;EACzB,8CzBrBwB;EyBsBxB,WhByCM;AX4uHhB;;A2B/yHA;EA8BU,yBhBgEuB;EgB/DvB,yBAAyB;EACzB,WhBmCM;AXkvHhB;;A2BrzHA;EAYQ,yBzBT0B;EyBU1B,yBAAyB;EACzB,yBhBmDa;AX0vHrB;;A2B3zHA;EAkBU,yBhB4EuB;EgB3EvB,yBAAyB;EACzB,yBhB6CW;AXgwHrB;;A2Bj0HA;EAwBU,yBAAyB;EACzB,8CzBtBwB;EyBuBxB,yBhBuCW;AXswHrB;;A2Bv0HA;EA8BU,yBhBgEuB;EgB/DvB,yBAAyB;EACzB,yBhBiCW;AX4wHrB;;A2B70HA;EAYQ,yBzBHyB;EyBIzB,yBAAyB;EACzB,WhBqDQ;AXgxHhB;;A2Bn1HA;EAkBU,yBhB4EuB;EgB3EvB,yBAAyB;EACzB,WhB+CM;AXsxHhB;;A2Bz1HA;EAwBU,yBAAyB;EACzB,8CzBhBuB;EyBiBvB,WhByCM;AX4xHhB;;A2B/1HA;EA8BU,yBhBgEuB;EgB/DvB,yBAAyB;EACzB,WhBmCM;AXkyHhB;;A2Br2HA;EAmCI,kBzBZY;AFk1HhB;;A2Bz2HA;EAqCI,kBzBhBY;AFw1HhB;;A2B72HA;EAwCQ,eAAe;A3By0HvB;;A2Bj3HA;EA0CI,iBzBtBW;AFi2Hf;;A2Br3HA;EA6CQ,eAAe;A3B40HvB;;A2Bz3HA;EAiDM,6BAA6B;EAC7B,0BAA0B;A3B40HhC;;A2B93HA;EAoDM,4BAA4B;EAC5B,yBAAyB;A3B80H/B;;A2Bn4HA;EAwDQ,kBzBFI;AFi1HZ;;A2Bv4HA;EA0DQ,aAAa;A3Bi1HrB;;A2B34HA;EA6DM,sBAAsB;A3Bk1H5B;;A2B/4HA;EA+DM,sBAAsB;EACtB,YAAY;EACZ,gBAAgB;A3Bo1HtB;;A2Br5HA;EAmEM,uBAAuB;A3Bs1H7B;;A2Bz5HA;EAqEM,aAAa;EACb,YAAY;A3Bw1HlB;;A2B95HA;EAwEQ,eAAe;A3B01HvB;;A2Bl6HA;EA2EQ,eAAe;A3B21HvB;;A2Bt6HA;EA8EQ,eAAe;A3B41HvB;;A2B16HA;EAiFQ,eAAe;A3B61HvB;;A2B96HA;EAoFQ,0BAA4C;A3B81HpD;;A2Bl7HA;EAsFQ,0BzBhCI;EyBiCJ,uBAAuB;A3Bg2H/B;;A2Bv7HA;EAyFI,uBAAuB;A3Bk2H3B;;A2B37HA;EA4FM,WAAW;A3Bm2HjB;;A2B/7HA;EA8FM,YAAY;EACZ,eAAe;A3Bq2HrB;;A2Bp8HA;EAiGI,yBAAyB;A3Bu2H7B;;A2Bx8HA;EAmGM,0BAA4C;A3By2HlD;;A2B58HA;EAqGM,0BzB/CM;EyBgDN,2BAA2B;EAC3B,SAAS;A3B22Hf;;A2Bz2HA;EACE,oBAAoB;EACpB,aAAa;EACb,eAAe;EACf,2BAA2B;EAC3B,gBAAgB;EAChB,kBAAkB;A3B42HpB;;A2Bl3HA;EASM,yBhBpB2B;EgBqB3B,czB5HwB;AFy+H9B;;A2Bv3HA;EAYM,qBhBvB2B;AXs4HjC;;A2B33HA;EAeM,yBhB1B2B;EgB2B3B,czBlIwB;AFk/H9B;;A2Bh4HA;EAkBM,qBhB7B2B;AX+4HjC;;A2Bh3HA;EACE,YAAY;EACZ,OAAO;EACP,UAAU;EACV,aAAa;EACb,kBAAkB;EAClB,MAAM;EACN,WAAW;A3Bm3Hb;;A2Bj3HA;;EAGE,qBzB9I4B;EyB+I5B,kBzBpFU;EyBqFV,cAAc;EACd,iBAAiB;
EACjB,kBAAkB;EAClB,mBAAmB;A3Bm3HrB;;A2Bj3HA;EACE,4BzBnJ4B;EyBoJ5B,czB1J4B;AF8gI9B;;A2Bl3HA;EACE,qBzB1J4B;EyB2J5B,mBA5J4B;EA6J5B,2BA5JoC;EA6JpC,cAAc;EACd,eA7JwB;EA8JxB,gBAAgB;EAChB,mBAAmB;EACnB,uBAAuB;A3Bq3HzB;;A2Bn3HA;EACE,mBAAmB;EACnB,aAAa;EACb,WAAW;EACX,uBAAuB;E1BjCrB,mB0BkCmC;EACrC,UAAU;A3Bs3HZ;;A2B53HA;EAQI,eAAe;A3Bw3HnB;;A4BtiIA;EACE,c1BF4B;E0BG5B,cAAc;EACd,e1B2BW;E0B1BX,gB1BiCe;AFwgIjB;;A4B7iIA;EAMI,oBAAoB;A5B2iIxB;;A4BjjIA;EASI,kB1BsBY;AFshIhB;;A4BrjIA;EAWI,kB1BkBY;AF4hIhB;;A4BzjIA;EAaI,iB1BeW;AFiiIf;;A4B9iIA;EACE,cAAc;EACd,kB1Bcc;E0Bbd,mBAAmB;A5BijIrB;;A4BpjIA;EAOM,Y1BdyB;AF+jI/B;;A4BxjIA;EAOM,c1B3BuB;AFglI7B;;A4B5jIA;EAOM,iB1BhBwB;AFykI9B;;A4BhkIA;EAOM,c1BvBwB;AFolI9B;;A4BpkIA;EAOM,c1BT4B;AF0kIlC;;A4BxkIA;EAOM,c1BP4B;AF4kIlC;;A4B5kIA;EAOM,c1BR4B;AFilIlC;;A4BhlIA;EAOM,c1BV4B;AFulIlC;;A4BplIA;EAOM,c1BX4B;AF4lIlC;;A4BxlIA;EAOM,c1BL2B;AF0lIjC;;A4BjlIA;EAEI,sBAAsB;A5BmlI1B;;A4BrlIA;EAKI,aAAa;EACb,2BAA2B;A5BolI/B;;A4B1lIA;E3B+GI,kB2BtGwC;A5BqlI5C;;A4B9lIA;;;EAcU,gBAAgB;A5BslI1B;;A4BpmIA;;;EAoBY,6BAA6B;EAC7B,0BAA0B;A5BslItC;;A4B3mIA;;;EA8BY,4BAA4B;EAC5B,yBAAyB;A5BmlIrC;;A4BlnIA;;;;;EAyCY,UAAU;A5BilItB;;A4B1nIA;;;;;;;;;EA8CY,UAAU;A5BwlItB;;A4BtoIA;;;;;;;;;EAgDc,UAAU;A5BkmIxB;;A4BlpIA;EAkDQ,YAAY;EACZ,cAAc;A5BomItB;;A4BvpIA;EAqDM,uBAAuB;A5BsmI7B;;A4B3pIA;EAuDM,yBAAyB;A5BwmI/B;;A4B/pIA;EA0DQ,YAAY;EACZ,cAAc;A5BymItB;;A4BpqIA;EA6DI,aAAa;EACb,2BAA2B;A5B2mI/B;;A4BzqIA;EAgEM,cAAc;A5B6mIpB;;A4B7qIA;EAkEQ,gBAAgB;E3B6CpB,qB2B5C2C;A5B+mI/C;;A4BlrIA;EAqEQ,YAAY;EACZ,cAAc;A5BinItB;;A4BvrIA;EAwEM,uBAAuB;A5BmnI7B;;A4B3rIA;EA0EM,yBAAyB;A5BqnI/B;;A4B/rIA;EA4EM,eAAe;A5BunIrB;;A4BnsIA;EAgFU,sBAAsB;A5BunIhC;;A4BvsIA;EAkFQ,uBAAuB;A5BynI/B;;A4B3sIA;EAoFQ,gBAAgB;A5B2nIxB;;AC3pIE;E2BpDF;IAuFM,aAAa;E5B6nIjB;AACF;;A4B5nIA;EAEI,kBAAkB;A5B8nItB;;ACzqIE;E2ByCF;IAII,qBAAqB;E5BioIvB;AACF;;AC3qIE;E2BqCF;IAMI,aAAa;IACb,YAAY;IACZ,cAAc;I3Bcd,oB2BbsC;IACtC,iBAAiB;E5BqoInB;E4B/oIF;IAYM,kB1BhGU;I0BiGV,oBAAoB;E5BsoIxB;E4BnpIF;IAeM,oBAAoB;E5BuoIxB;E4BtpIF;IAiBM,kB1BvGU;I0BwGV,oBAAoB;E5BwoIxB;E4B1pIF;IAoBM,
iB1B3GS;I0B4GT,oBAAoB;E5ByoIxB;AACF;;A4BxoIA;EAEI,gBAAgB;A5B0oIpB;;ACxsIE;E2B4DF;IAII,aAAa;IACb,aAAa;IACb,YAAY;IACZ,cAAc;E5B6oIhB;E4BppIF;IASM,gBAAgB;E5B8oIpB;E4BvpIF;IAWM,cAAc;E5B+oIlB;E4B1pIF;IAaQ,YAAY;E5BgpIlB;E4B7pIF;I3BDI,qB2BgB2C;E5BipI7C;AACF;;A4BhpIA;EACE,sBAAsB;EACtB,WAAW;EACX,e1BhIW;E0BiIX,kBAAkB;EAClB,mBAAmB;A5BmpIrB;;A4BxpIA;;;EAaU,c1BxKoB;AFyzI9B;;A4B9pIA;;;EAeQ,kB1B3IQ;AFgyIhB;;A4BpqIA;;;EAiBQ,kB1B/IQ;AFwyIhB;;A4B1qIA;;;EAmBQ,iB1BlJO;AF+yIf;;A4BhrIA;EAqBM,c1B7KwB;E0B8KxB,azBnLgB;EyBoLhB,oBAAoB;EACpB,kBAAkB;EAClB,MAAM;EACN,YzBvLgB;EyBwLhB,UAAU;A5B+pIhB;;A4B1rIA;;EA+BM,mBzB5LgB;AH41ItB;;A4B/rIA;EAiCM,OAAO;A5BkqIb;;A4BnsIA;;EAqCM,oBzBlMgB;AHq2ItB;;A4BxsIA;EAuCM,QAAQ;A5BqqId;;A4B5sIA;EA2CM,6BAA6B;E3BrD/B,c2BsD+B;EAC7B,YAAY;EACZ,UAAU;A5BqqIhB;;A4BntIA;EAgDM,kB1B5KU;AFm1IhB;;A4BvtIA;EAkDM,kB1BhLU;AFy1IhB;;A4B3tIA;EAoDM,iB1BnLS;AF81If;;A6Bj4IA,qBAAA;ACSA;EAGE,e5ByBW;E4BxBX,mBAAmB;A9B03IrB;;A8B93IA;EAMI,mBAAmB;EACnB,c5BM8B;E4BL9B,aAAa;EACb,uBAAuB;EACvB,iBAduC;A9B04I3C;;A8Bt4IA;EAYM,c5BfwB;AF64I9B;;A8B14IA;EAcI,mBAAmB;EACnB,aAAa;A9Bg4IjB;;A8B/4IA;E7BuII,e6BtHoC;A9Bk4IxC;;A8Bn5IA;EAoBQ,c5BvBsB;E4BwBtB,eAAe;EACf,oBAAoB;A9Bm4I5B;;A8Bz5IA;EAwBM,c5BxBwB;E4ByBxB,iBAAiB;A9Bq4IvB;;A8B95IA;;EA4BI,uBAAuB;EACvB,aAAa;EACb,eAAe;EACf,2BAA2B;A9Bu4I/B;;A8Bt6IA;E7BuII,mB6BrGuC;A9Bw4I3C;;A8B16IA;E7BuII,kB6BnGuC;A9B04I3C;;A8B96IA;;EAyCM,uBAAuB;A9B04I7B;;A8Bn7IA;;EA6CM,yBAAyB;A9B24I/B;;A8Bx7IA;EAgDI,kB5BnBY;AF+5IhB;;A8B57IA;EAkDI,kB5BvBY;AFq6IhB;;A8Bh8IA;EAoDI,iB5B1BW;AF06If;;A8Bp8IA;EAwDM,iBAAiB;A9Bg5IvB;;A8Bx8IA;EA2DM,iBAAiB;A9Bi5IvB;;A8B58IA;EA8DM,iBAAiB;A9Bk5IvB;;A8Bh9IA;EAiEM,iBAAiB;A9Bm5IvB;;A+Bx8IA;EACE,uB7BP6B;E6BQ7B,sBApBmB;EAqBnB,0F7BtB2B;E6BuB3B,c7BlB4B;E6BmB5B,eAAe;EACf,gBAvBoB;EAwBpB,kBAAkB;A/B28IpB;;A+Bz8IA;EACE,6BAzBwC;EA0BxC,oBAAoB;EACpB,kD7B/B2B;E6BgC3B,aAAa;A/B48If;;A+B18IA;EACE,mBAAmB;EACnB,c7BhC4B;E6BiC5B,aAAa;EACb,YAAY;EACZ,gB7BGe;E6BFf,qBAlCgC;A/B++IlC;;A+Bn9IA;EAQI,uBAAuB;A/B+8I3B;;A+B78IA;EACE,mBAAmB;EACnB,eAAe;EACf,aAAa;EACb,uBAAuB;EACvB,qBA3CgC;A/B2/IlC;;
A+B98IA;EACE,cAAc;EACd,kBAAkB;A/Bi9IpB;;A+B/8IA;EACE,6BA9CyC;EA+CzC,eA9C2B;A/BggJ7B;;A+Bh9IA;EACE,6BA/CwC;EAgDxC,6B7BpD6B;E6BqD7B,oBAAoB;EACpB,aAAa;A/Bm9If;;A+Bj9IA;EACE,mBAAmB;EACnB,aAAa;EACb,aAAa;EACb,YAAY;EACZ,cAAc;EACd,uBAAuB;EACvB,gBAzD2B;A/B6gJ7B;;A+B39IA;E9B6EI,+BCrI2B;AFuhJ/B;;A+Bl9IA;EAEI,qB7BlCkB;AFs/ItB;;AgCnhJA;EACE,oBAAoB;EACpB,kBAAkB;EAClB,mBAAmB;AhCshJrB;;AgCzhJA;EAOM,cAAc;AhCshJpB;;AgC7hJA;EAUM,UAAU;EACV,QAAQ;AhCuhJd;;AgCliJA;EAcM,YAAY;EACZ,mBA9BuB;EA+BvB,oBAAoB;EACpB,SAAS;AhCwhJf;;AgCthJA;EACE,aAAa;E/BiHX,O+BhHqB;EACvB,gBAzC6B;EA0C7B,gBAtC2B;EAuC3B,kBAAkB;EAClB,SAAS;EACT,WApCqB;AhC6jJvB;;AgCvhJA;EACE,uB9BjC6B;E8BkC7B,kB9BoBU;E8BnBV,0F9BhD2B;E8BiD3B,sBA9CsC;EA+CtC,mBA9CmC;AhCwkJrC;;AgB5jJgB;EgBqCd,c9BhD4B;E8BiD5B,cAAc;EACd,mBAAmB;EACnB,gBAAgB;EAChB,sBAAsB;EACtB,kBAAkB;AhC2hJpB;;AgCzhJA;;E/BkFI,mB+BhFmC;EACrC,mBAAmB;EACnB,mBAAmB;EACnB,WAAW;AhC4hJb;;AgCjiJA;;EAOI,4B9BxD0B;E8ByD1B,c9BpEyB;AFmmJ7B;;AgCviJA;;EAUI,yB9BlD8B;E8BmD9B,WrBSY;AXyhJhB;;AgChiJA;EACE,yB9BjE6B;E8BkE7B,YAAY;EACZ,cAAc;EACd,WAAW;EACX,gBAAgB;AhCmiJlB;;AiCjnJA;EAEE,mBAAmB;EACnB,8BAA8B;AjCmnJhC;;AiCtnJA;EAKI,kB/B8DQ;AFujJZ;;AiC1nJA;EAOI,qBAAqB;EACrB,mBAAmB;AjCunJvB;;AiC/nJA;EAWI,aAAa;AjCwnJjB;;AiCnoJA;;EAcM,aAAa;AjC0nJnB;;AiCxoJA;EAgBM,aAAa;AjC4nJnB;;AiC5oJA;EAmBQ,gBAAgB;EhC2HpB,qBgChJqC;AjCmpJzC;;AiCjpJA;EAsBQ,YAAY;AjC+nJpB;;AClkJE;EgCnFF;IAyBI,aAAa;EjCioJf;EiC1pJF;IA4BQ,YAAY;EjCioJlB;AACF;;AiChoJA;EACE,mBAAmB;EACnB,aAAa;EACb,gBAAgB;EAChB,YAAY;EACZ,cAAc;EACd,uBAAuB;AjCmoJzB;;AiCzoJA;;EASI,gBAAgB;AjCqoJpB;;AC7lJE;EgCjDF;IAaM,sBA7CmC;EjCmrJvC;AACF;;AiCroJA;;EAEE,gBAAgB;EAChB,YAAY;EACZ,cAAc;AjCwoJhB;;AiC5oJA;;EAQM,YAAY;AjCyoJlB;;AC3mJE;EgCtCF;;IhCiGI,qBgChJqC;EjCssJvC;AACF;;AiC1oJA;EACE,mBAAmB;EACnB,2BAA2B;AjC6oJ7B;;AC3nJE;EgCpBF;IAMM,kBAAkB;EjC8oJtB;AACF;;AC7nJE;EgCxBF;IAQI,aAAa;EjCkpJf;AACF;;AiCjpJA;EACE,mBAAmB;EACnB,yBAAyB;AjCopJ3B;;ACxoJE;EgCdF;IAKI,aAAa;EjCspJf;AACF;;AkC/tJA;EACE,uBAAuB;EACvB,aAAa;EACb,mBAAmB;AlCkuJrB;;AkCruJA;EAKI,sBAAsB;AlCouJ1B;;AkCzuJA;EAOI,8ChCD0B;EgCE1B,aAAa;EACb
,oBAAoB;AlCsuJxB;;AkC/uJA;;EAYM,qBAAqB;AlCwuJ3B;;AkCpvJA;EAcM,mBAAmB;AlC0uJzB;;AkCxvJA;EAgBQ,kBAAkB;AlC4uJ1B;;AkC5vJA;EAkBI,8ChCZ0B;EgCa1B,gBAtBgB;EAuBhB,iBAvBgB;AlCqwJpB;;AkClwJA;EAwBM,kBA1BsB;EA2BtB,mBA3BsB;AlCywJ5B;;AkC5uJA;;EAEE,gBAAgB;EAChB,YAAY;EACZ,cAAc;AlC+uJhB;;AkC7uJA;EjC2GI,kBiC/IgB;AlCqxJpB;;AkC9uJA;EjCwGI,iBiC/IgB;AlCyxJpB;;AkC/uJA;EACE,gBAAgB;EAChB,YAAY;EACZ,cAAc;EACd,mBAAmB;AlCkvJrB;;AChtJE;EiCtCF;IAQI,gBAAgB;ElCmvJlB;AACF;;AmCrxJA;EACE,ejCkBW;AFswJb;;AmCzxJA;EAII,kBjCgBY;AFywJhB;;AmC7xJA;EAMI,kBjCYY;AF+wJhB;;AmCjyJA;EAQI,iBjCSW;AFoxJf;;AmC3xJA;EACE,iBArB0B;AnCmzJ5B;;AmC/xJA;EAGI,kBjCqCc;EiCpCd,cjCzB0B;EiC0B1B,cAAc;EACd,qBAzBiC;AnCyzJrC;;AmCtyJA;EAQM,4BjCvBwB;EiCwBxB,cjC/BwB;AFi0J9B;;AmC3yJA;EAYM,yBjClB4B;EiCmB5B,WxByCU;AX0vJhB;;AmChzJA;ElCoHI,8BCtI0B;EiCmCxB,cAnC0B;ElCsI5B,oBkCrIkC;AnCu0JtC;;AmClyJA;EACE,cjCzC4B;EiC0C5B,iBApC2B;EAqC3B,qBApC+B;EAqC/B,yBAAyB;AnCqyJ3B;;AmCzyJA;EAMI,eAtCoB;AnC60JxB;;AmC7yJA;EAQI,kBAxCoB;AnCi1JxB;;AoC50JA;EAEE,4BlCV4B;EkCW5B,kBlC6CU;EkC5CV,elCYW;AFk0Jb;;AoCl1JA;EAMI,mBAAmB;ApCg1JvB;;AoCt1JA;EAQI,mBAAmB;EACnB,0BAA0B;ApCk1J9B;;AoC31JA;EAYI,kBlCKY;AF80JhB;;AoC/1JA;EAcI,kBlCCY;AFo1JhB;;AoCn2JA;EAgBI,iBlCFW;AFy1Jf;;AoCv2JA;EAsCM,uBAH+C;ApCw0JrD;;AoC32JA;EAwCQ,uBlC9CuB;EkC+CvB,clC5DqB;AFm4J7B;;AoCh3JA;EA2CQ,mBlCjDuB;AF03J/B;;AoCp3JA;EAsCM,yBAH+C;ApCq1JrD;;AoCx3JA;EAwCQ,yBlC3DqB;EkC4DrB,YlC/CuB;AFm4J/B;;AoC73JA;EA2CQ,qBlC9DqB;AFo5J7B;;AoCj4JA;EAsCM,yBAH+C;ApCk2JrD;;AoCr4JA;EAwCQ,4BlChDsB;EkCiDtB,yBzBkBa;AX+0JrB;;AoC14JA;EA2CQ,wBlCnDsB;AFs5J9B;;AoC94JA;EAsCM,yBAH+C;ApC+2JrD;;AoCl5JA;EAwCQ,yBlCvDsB;EkCwDtB,WzBoBQ;AX01JhB;;AoCv5JA;EA2CQ,qBlC1DsB;AF06J9B;;AoC35JA;EAsCM,yBzB8B0C;AX21JhD;;AoC/5JA;EAwCQ,yBlCzC0B;EkC0C1B,WzBoBQ;AXu2JhB;;AoCp6JA;EA2CQ,qBlC5C0B;EkC6C1B,czBiC6D;AX41JrE;;AoCz6JA;EAsCM,yBzB8B0C;AXy2JhD;;AoC76JA;EAwCQ,yBlCvC0B;EkCwC1B,WzBoBQ;AXq3JhB;;AoCl7JA;EA2CQ,qBlC1C0B;EkC2C1B,czBiC6D;AX02JrE;;AoCv7JA;EAsCM,yBzB8B0C;AXu3JhD;;AoC37JA;EAwCQ,yBlCxC0B;EkCyC1B,WzBoBQ;AXm4JhB;;AoCh8JA;EA2CQ,qBlC3C0B;EkC4C1B,czBiC6D;AXw3JrE;;AoC
r8JA;EAsCM,yBzB8B0C;AXq4JhD;;AoCz8JA;EAwCQ,yBlC1C0B;EkC2C1B,WzBoBQ;AXi5JhB;;AoC98JA;EA2CQ,qBlC7C0B;EkC8C1B,czBiC6D;AXs4JrE;;AoCn9JA;EAsCM,yBzB8B0C;AXm5JhD;;AoCv9JA;EAwCQ,yBlC3C0B;EkC4C1B,yBzBkBa;AXi6JrB;;AoC59JA;EA2CQ,qBlC9C0B;EkC+C1B,czBiC6D;AXo5JrE;;AoCj+JA;EAsCM,yBzB8B0C;AXi6JhD;;AoCr+JA;EAwCQ,yBlCrCyB;EkCsCzB,WzBoBQ;AX66JhB;;AoC1+JA;EA2CQ,qBlCxCyB;EkCyCzB,czBiC6D;AXk6JrE;;AoCj8JA;EACE,mBAAmB;EACnB,yBlC9D4B;EkC+D5B,0BAAgE;EAChE,WzBWc;EyBVd,aAAa;EACb,gBlC7Be;EkC8Bf,8BAA8B;EAC9B,iBAAiB;EACjB,mBAtEiC;EAuEjC,kBAAkB;ApCo8JpB;;AoC98JA;EAYI,YAAY;EACZ,cAAc;EnCgEd,mBmC/DsC;ApCs8J1C;;AoCp9JA;EAgBI,eAjEgC;EAkEhC,yBAAyB;EACzB,0BAA0B;ApCw8J9B;;AoCt8JA;EACE,qBlC9E4B;EkC+E5B,kBlCpBU;EkCqBV,mBAAmB;EACnB,uBAjFmC;EAkFnC,clCrF4B;EkCsF5B,qBAjFiC;ApC0hKnC;;AoC/8JA;;EASI,uBlCjF2B;AF4hK/B;;AoCp9JA;EAWI,6BAlFgD;ApC+hKpD;;AqC/gKA;EAEE,mBAAmB;EACnB,aAAa;EACb,sBAAsB;EACtB,uBAAuB;EACvB,gBAAgB;EAChB,eAAe;EACf,WAxCU;ArCyjKZ;;AqCzhKA;EAWI,aAAa;ArCkhKjB;;AqChhKA;EAEE,wCnC7C2B;AF+jK7B;;AqChhKA;;EAEE,cA9CgC;EA+ChC,+BAA0D;EAC1D,cAAc;EACd,kBAAkB;EAClB,WAAW;ArCmhKb;;ACjgKE;EoCxBF;;IASI,cAAc;IACd,8BAA0D;IAC1D,YAxDuB;ErC8kKzB;AACF;;AqCrhKA;EAEE,gBAAgB;EAChB,YAxD2B;EAyD3B,eAAe;EpCsFb,WoC9IoB;EA0DtB,SAzDoB;EA0DpB,WA5D2B;ArCmlK7B;;AqCrhKA;EACE,aAAa;EACb,sBAAsB;EACtB,8BAAgD;EAChD,gBAAgB;EAChB,uBAAuB;ArCwhKzB;;AqCthKA;;EAEE,mBAAmB;EACnB,4BnCpE4B;EmCqE5B,aAAa;EACb,cAAc;EACd,2BAA2B;EAC3B,aApE4B;EAqE5B,kBAAkB;ArCyhKpB;;AqCvhKA;EACE,gCnC/E4B;EmCgF5B,2BnCpBgB;EmCqBhB,4BnCrBgB;AF+iKlB;;AqCxhKA;EACE,cnCxF4B;EmCyF5B,YAAY;EACZ,cAAc;EACd,iBnC9Da;EmC+Db,cA7E8B;ArCwmKhC;;AqCzhKA;EACE,8BnC/BgB;EmCgChB,+BnChCgB;EmCiChB,6BnC7F4B;AFynK9B;;AqC/hKA;EpC4CI,mBoCtCuC;ArC6hK3C;;AqC3hKA;EpC9CE,iCAAiC;EoCgDjC,uBnC/F6B;EmCgG7B,YAAY;EACZ,cAAc;EACd,cAAc;EACd,aAtF4B;ArConK9B;;AsCxlKA;EACE,uBpC1C6B;EoC2C7B,mBAvDqB;EAwDrB,kBAAkB;EAClB,WAtDW;AtCipKb;;AsC/lKA;EASM,uBpClDyB;EoCmDzB,cpChEuB;AF0pK7B;;AsCpmKA;;EAcU,cpCpEmB;AF+pK7B;;AsCzmKA;;;;EAoBY,yB3BiCqB;E2BhCrB,cpC3EiB;AFuqK7B;;AsCjnKA;EAwBY,qBpC9EiB;AF2qK7B;;AsCrnKA;EA0BQ,cpChFqB;AF+qK7
B;;ACxmKE;EqCjBF;;;;IAgCY,cpCtFiB;EFurK3B;EsCjoKF;;;;;;;;;;IAsCc,yB3BemB;I2BdnB,cpC7Fe;EFosK3B;EsC9oKF;;IA0Cc,qBpChGe;EFwsK3B;EsClpKF;;;IA8CU,yB3BOuB;I2BNvB,cpCrGmB;EF8sK3B;EsCxpKF;IAmDc,uBpC5FiB;IoC6FjB,cpC1Ge;EFktK3B;AACF;;AsC7pKA;EASM,yBpC/DuB;EoCgEvB,YpCnDyB;AF2sK/B;;AsClqKA;;EAcU,YpCvDqB;AFgtK/B;;AsCvqKA;;;;EAoBY,uB3BiCqB;E2BhCrB,YpC9DmB;AFwtK/B;;AsC/qKA;EAwBY,mBpCjEmB;AF4tK/B;;AsCnrKA;EA0BQ,YpCnEuB;AFguK/B;;ACtqKE;EqCjBF;;;;IAgCY,YpCzEmB;EFwuK7B;EsC/rKF;;;;;;;;;;IAsCc,uB3BemB;I2BdnB,YpChFiB;EFqvK7B;EsC5sKF;;IA0Cc,mBpCnFiB;EFyvK7B;EsChtKF;;;IA8CU,uB3BOuB;I2BNvB,YpCxFqB;EF+vK7B;EsCttKF;IAmDc,yBpCzGe;IoC0Gf,YpC7FiB;EFmwK7B;AACF;;AsC3tKA;EASM,4BpCpDwB;EoCqDxB,yB3Bce;AXwsKrB;;AsChuKA;;EAcU,yB3BUW;AX6sKrB;;AsCruKA;;;;EAoBY,yB3BiCqB;E2BhCrB,yB3BGS;AXqtKrB;;AsC7uKA;EAwBY,gC3BAS;AXytKrB;;AsCjvKA;EA0BQ,yB3BFa;AX6tKrB;;ACpuKE;EqCjBF;;;;IAgCY,yB3BRS;EXquKnB;EsC7vKF;;;;;;;;;;IAsCc,yB3BemB;I2BdnB,yB3BfO;EXkvKnB;EsC1wKF;;IA0Cc,gC3BlBO;EXsvKnB;EsC9wKF;;;IA8CU,yB3BOuB;I2BNvB,yB3BvBW;EX4vKnB;EsCpxKF;IAmDc,4BpC9FgB;IoC+FhB,yB3B5BO;EXgwKnB;AACF;;AsCzxKA;EASM,yBpC3DwB;EoC4DxB,W3BgBU;AXowKhB;;AsC9xKA;;EAcU,W3BYM;AXywKhB;;AsCnyKA;;;;EAoBY,yB3BiCqB;E2BhCrB,W3BKI;AXixKhB;;AsC3yKA;EAwBY,kB3BEI;AXqxKhB;;AsC/yKA;EA0BQ,W3BAQ;AXyxKhB;;AClyKE;EqCjBF;;;;IAgCY,W3BNI;EXiyKd;EsC3zKF;;;;;;;;;;IAsCc,yB3BemB;I2BdnB,W3BbE;EX8yKd;EsCx0KF;;IA0Cc,kB3BhBE;EXkzKd;EsC50KF;;;IA8CU,yB3BOuB;I2BNvB,W3BrBM;EXwzKd;EsCl1KF;IAmDc,yBpCrGgB;IoCsGhB,W3B1BE;EX4zKd;AACF;;AsCv1KA;EASM,yBpC7C4B;EoC8C5B,W3BgBU;AXk0KhB;;AsC51KA;;EAcU,W3BYM;AXu0KhB;;AsCj2KA;;;;EAoBY,yB3BiCqB;E2BhCrB,W3BKI;AX+0KhB;;AsCz2KA;EAwBY,kB3BEI;AXm1KhB;;AsC72KA;EA0BQ,W3BAQ;AXu1KhB;;ACh2KE;EqCjBF;;;;IAgCY,W3BNI;EX+1Kd;EsCz3KF;;;;;;;;;;IAsCc,yB3BemB;I2BdnB,W3BbE;EX42Kd;EsCt4KF;;IA0Cc,kB3BhBE;EXg3Kd;EsC14KF;;;IA8CU,yB3BOuB;I2BNvB,W3BrBM;EXs3Kd;EsCh5KF;IAmDc,yBpCvFoB;IoCwFpB,W3B1BE;EX03Kd;AACF;;AsCr5KA;EASM,yBpC3C4B;EoC4C5B,W3BgBU;AXg4KhB;;AsC15KA;;EAcU,W3BYM;AXq4KhB;;AsC/5KA;;;;EAoBY,yB3BiCqB;E2BhCrB,W3BKI;AX64KhB;;AsCv6KA;EAwBY,kB3BEI;AXi5KhB;;AsC36K
A;EA0BQ,W3BAQ;AXq5KhB;;AC95KE;EqCjBF;;;;IAgCY,W3BNI;EX65Kd;EsCv7KF;;;;;;;;;;IAsCc,yB3BemB;I2BdnB,W3BbE;EX06Kd;EsCp8KF;;IA0Cc,kB3BhBE;EX86Kd;EsCx8KF;;;IA8CU,yB3BOuB;I2BNvB,W3BrBM;EXo7Kd;EsC98KF;IAmDc,yBpCrFoB;IoCsFpB,W3B1BE;EXw7Kd;AACF;;AsCn9KA;EASM,yBpC5C4B;EoC6C5B,W3BgBU;AX87KhB;;AsCx9KA;;EAcU,W3BYM;AXm8KhB;;AsC79KA;;;;EAoBY,yB3BiCqB;E2BhCrB,W3BKI;AX28KhB;;AsCr+KA;EAwBY,kB3BEI;AX+8KhB;;AsCz+KA;EA0BQ,W3BAQ;AXm9KhB;;AC59KE;EqCjBF;;;;IAgCY,W3BNI;EX29Kd;EsCr/KF;;;;;;;;;;IAsCc,yB3BemB;I2BdnB,W3BbE;EXw+Kd;EsClgLF;;IA0Cc,kB3BhBE;EX4+Kd;EsCtgLF;;;IA8CU,yB3BOuB;I2BNvB,W3BrBM;EXk/Kd;EsC5gLF;IAmDc,yBpCtFoB;IoCuFpB,W3B1BE;EXs/Kd;AACF;;AsCjhLA;EASM,yBpC9C4B;EoC+C5B,W3BgBU;AX4/KhB;;AsCthLA;;EAcU,W3BYM;AXigLhB;;AsC3hLA;;;;EAoBY,yB3BiCqB;E2BhCrB,W3BKI;AXygLhB;;AsCniLA;EAwBY,kB3BEI;AX6gLhB;;AsCviLA;EA0BQ,W3BAQ;AXihLhB;;AC1hLE;EqCjBF;;;;IAgCY,W3BNI;EXyhLd;EsCnjLF;;;;;;;;;;IAsCc,yB3BemB;I2BdnB,W3BbE;EXsiLd;EsChkLF;;IA0Cc,kB3BhBE;EX0iLd;EsCpkLF;;;IA8CU,yB3BOuB;I2BNvB,W3BrBM;EXgjLd;EsC1kLF;IAmDc,yBpCxFoB;IoCyFpB,W3B1BE;EXojLd;AACF;;AsC/kLA;EASM,yBpC/C4B;EoCgD5B,yB3Bce;AX4jLrB;;AsCplLA;;EAcU,yB3BUW;AXikLrB;;AsCzlLA;;;;EAoBY,yB3BiCqB;E2BhCrB,yB3BGS;AXykLrB;;AsCjmLA;EAwBY,gC3BAS;AX6kLrB;;AsCrmLA;EA0BQ,yB3BFa;AXilLrB;;ACxlLE;EqCjBF;;;;IAgCY,yB3BRS;EXylLnB;EsCjnLF;;;;;;;;;;IAsCc,yB3BemB;I2BdnB,yB3BfO;EXsmLnB;EsC9nLF;;IA0Cc,gC3BlBO;EX0mLnB;EsCloLF;;;IA8CU,yB3BOuB;I2BNvB,yB3BvBW;EXgnLnB;EsCxoLF;IAmDc,yBpCzFoB;IoC0FpB,yB3B5BO;EXonLnB;AACF;;AsC7oLA;EASM,yBpCzC2B;EoC0C3B,W3BgBU;AXwnLhB;;AsClpLA;;EAcU,W3BYM;AX6nLhB;;AsCvpLA;;;;EAoBY,yB3BiCqB;E2BhCrB,W3BKI;AXqoLhB;;AsC/pLA;EAwBY,kB3BEI;AXyoLhB;;AsCnqLA;EA0BQ,W3BAQ;AX6oLhB;;ACtpLE;EqCjBF;;;;IAgCY,W3BNI;EXqpLd;EsC/qLF;;;;;;;;;;IAsCc,yB3BemB;I2BdnB,W3BbE;EXkqLd;EsC5rLF;;IA0Cc,kB3BhBE;EXsqLd;EsChsLF;;;IA8CU,yB3BOuB;I2BNvB,W3BrBM;EX4qLd;EsCtsLF;IAmDc,yBpCnFmB;IoCoFnB,W3B1BE;EXgrLd;AACF;;AsC3sLA;EAsDI,oBAAoB;EACpB,aAAa;EACb,mBA7GmB;EA8GnB,WAAW;AtCypLf;;AsCltLA;EA2DI,gCpCtG0B;AFiwL9B;;AsCttLA;EALE,OAAO;EACP,eAAe;EACf,QAAQ;EACR,WA/CiB;AtC8wLnB;;AsC7tLA;EA
gEI,SAAS;AtCiqLb;;AsCjuLA;EAkEM,iCpC7GwB;AFgxL9B;;AsCruLA;EAoEI,MAAM;AtCqqLV;;AsCnqLA;;EAGI,oBA9HmB;AtCmyLvB;;AsCxqLA;;EAKI,uBAhImB;AtCwyLvB;;AsCtqLA;;EAEE,oBAAoB;EACpB,aAAa;EACb,cAAc;EACd,mBAvIqB;AtCgzLvB;;AsCvqLA;EAIM,6BAA6B;AtCuqLnC;;AsCrqLA;ErCpFE,iCAAiC;EqCsFjC,gBAAgB;EAChB,gBAAgB;EAChB,kBAAkB;AtCwqLpB;;AsCtqLA;EACE,cpClJ4B;EDoB5B,eAAe;EACf,cAAc;EACd,eqC1BqB;ErC2BrB,kBAAkB;EAClB,cqC5BqB;ErC6InB,iBqCWkC;AtC6qLtC;;ACxyLE;EACE,8BAA8B;EAC9B,cAAc;EACd,WAAW;EACX,qBAAqB;EACrB,kBAAkB;EAClB,wBAAwB;EACxB,yBCiCQ;EDhCR,yDAAyD;EACzD,oCC0Ba;EDzBb,WAAW;AD2yLf;;AC1yLI;EACE,oBAAoB;AD6yL1B;;AC5yLI;EACE,oBAAoB;AD+yL1B;;AC9yLI;EACE,oBAAoB;ADizL1B;;AChzLE;EACE,qCAAiC;ADmzLrC;;AC/yLM;EACE,wCAAwC;ADkzLhD;;ACjzLM;EACE,UAAU;ADozLlB;;ACnzLM;EACE,0CAA0C;ADszLlD;;AsCptLA;EACE,aAAa;AtCutLf;;AsCrtLA;;EAEE,cpC3J4B;EoC4J5B,cAAc;EACd,gBAAgB;EAChB,uBAAuB;EACvB,kBAAkB;AtCwtLpB;;AsC9tLA;;EASM,qBAAqB;EACrB,sBAAsB;AtC0tL5B;;AsCxtLA;;EAEE,eAAe;AtC2tLjB;;AsC7tLA;;;;;EAOI,yBpCrK0B;EoCsK1B,cpC9J8B;AF43LlC;;AsC5tLA;EACE,YAAY;EACZ,cAAc;AtC+tLhB;;AsCjuLA;EAII,mBA5KgC;AtC64LpC;;AsCruLA;EAMI,UAAU;AtCmuLd;;AsCzuLA;EAQI,YAAY;EACZ,cAAc;AtCquLlB;;AsC9uLA;EAWI,oCAAoC;EACpC,mBA/LmB;EAgMnB,kCAAkC;AtCuuLtC;;AsCpvLA;EAgBM,6BApLyC;EAqLzC,4BpCjL4B;AFy5LlC;;AsCzvLA;EAmBM,6BApL0C;EAqL1C,4BpCpL4B;EoCqL5B,0BApLuC;EAqLvC,wBApLqC;EAqLrC,cpCvL4B;EoCwL5B,kCAAwE;AtC0uL9E;;AsCxuLA;EACE,YAAY;EACZ,cAAc;AtC2uLhB;;AsCzuLA;ErCpEI,oBqCqEoC;AtC4uLxC;;AsC7uLA;EAII,qBpClM8B;EoCmM9B,oBAAoB;ErCjEpB,cqCkE6B;AtC6uLjC;;AsC3uLA;EACE,mBAAmB;EACnB,sBAAsB;EACtB,mBAAmB;AtC8uLrB;;AsCjvLA;EAKI,oBAAoB;EACpB,qBAAqB;AtCgvLzB;;AsC9uLA;EACE,4BpCxN4B;EoCyN5B,YAAY;EACZ,aAAa;EACb,WA9LyB;EA+LzB,gBAAgB;AtCivLlB;;AC74LE;EqCrBF;IAqLI,cAAc;EtCkvLhB;EsCjvLA;;IAGI,mBAAmB;IACnB,aAAa;EtCkvLjB;EsCjvLA;IAEI,aAAa;EtCkvLjB;EsC10LF;IA0FI,uBpCxO2B;IoCyO3B,4CpCtPyB;IoCuPzB,iBAAiB;EtCmvLnB;EsCtvLA;IAKI,cAAc;EtCovLlB;EsClvLA;IA1MA,OAAO;IACP,eAAe;IACf,QAAQ;IACR,WA/CiB;EtC8+LjB;EsCxvLA;IAKI,SAAS;EtCsvLb;EsC3vLA;IAOM,4CpClQqB;EFy/L3B;EsC9vLA;IASI,MAAM;EtCwvLV;EsCjwLA;IrC/LA,iC
AAiC;IqC6M3B,iCAA2C;IAC3C,cAAc;EtCuvLpB;EsCtvLA;;IAGI,oBA7QiB;EtCogMrB;EsC1vLA;;IAKI,uBA/QiB;EtCwgMrB;AACF;;ACn8LE;EqC4MA;;;;IAIE,oBAAoB;IACpB,aAAa;EtC2vLf;EsC79LF;IAoOI,mBAzRmB;EtCqhMrB;EsC7vLA;IAGI,kBAzR0B;EtCshM9B;EsChwLA;;IAMM,mBAAmB;EtC8vLzB;EsCpwLA;;IASM,kBpC/NI;EF89LV;EsCxwLA;;;;IAgBQ,wCAAwC;EtC8vLhD;EsC9wLA;IAuBU,wCAAwC;EtC0vLlD;EsCjxLA;IA4BU,4BpC1SkB;IoC2SlB,cpCtTiB;EF8iM3B;EsCrxLA;IA+BU,4BpC7SkB;IoC8SlB,cpCrSsB;EF8hMhC;EsC55LF;IAqKI,aAAa;EtC0vLf;EsCv5LF;;IAgKI,mBAAmB;IACnB,aAAa;EtC2vLf;EsCt4LF;IA8IM,oBAAoB;EtC2vLxB;EsC7vLA;IAKM,oDAAoD;EtC2vL1D;EsChwLA;IAOM,gCpC/TsB;IoCgUtB,0BAAkE;IAClE,gBAAgB;IAChB,YAAY;IACZ,4CpC3UqB;IoC4UrB,SAAS;EtC4vLf;EsCxwLA;IAkBM,cAAc;EtCyvLpB;EsCxvLM;IAEE,UAAU;IACV,oBAAoB;IACpB,wBAAwB;EtCyvLhC;EsCr7LF;IA8LI,YAAY;IACZ,cAAc;EtC0vLhB;EsCzvLA;IACE,2BAA2B;IrC9M3B,kBqC+MoC;EtC2vLtC;EsC1vLA;IACE,yBAAyB;IrCjNzB,iBqCkNoC;EtC4vLtC;EsCl4LF;IAwII,uBpCrV2B;IoCsV3B,8BpC/Rc;IoCgSd,+BpChSc;IoCiSd,6BpC7V0B;IoC8V1B,2CpCtWyB;IoCuWzB,aAAa;IACb,mBAAmB;IrClNnB,OqCmNuB;IACvB,eAAe;IACf,kBAAkB;IAClB,SAAS;IACT,WAhVkB;EtC6kMpB;EsCh5LF;IAqJM,sBAAsB;IACtB,mBAAmB;EtC8vLvB;EsC7wLA;IrCnNE,mBqCoOuC;EtC+vLzC;EsChxLA;IAoBM,4BpC1WsB;IoC2WtB,cpCtXqB;EFqnM3B;EsCpxLA;IAuBM,4BpC7WsB;IoC8WtB,cpCrW0B;EFqmMhC;EsC/vLE;IAEE,kBpCxTY;IoCyTZ,gBAAgB;IAChB,4EpC9XuB;IoC+XvB,cAAc;IACd,UAAU;IACV,oBAAoB;IACpB,wBAA8C;IAC9C,2BAA2B;IAC3B,yBpC9TM;IoC+TN,uCAAuC;EtCgwL3C;EsCpyLA;IAsCI,UAAU;IACV,QAAQ;EtCiwLZ;EsCv6LF;IAwKI,cAAc;EtCkwLhB;EsCjwLA;;IrC7PE,qBqCgQyC;EtCkwL3C;EsCrwLA;;IrC7PE,sBqCkQyC;EtCowL3C;EsClwLA;IAjWA,OAAO;IACP,eAAe;IACf,QAAQ;IACR,WA/CiB;EtCqpMjB;EsCxwLA;IAKI,SAAS;EtCswLb;EsC3wLA;IAOM,4CpCzZqB;EFgqM3B;EsC9wLA;IASI,MAAM;EtCwwLV;EsCvwLA;;IAGI,oBA9ZiB;EtCsqMrB;EsC3wLA;;IAKI,uBAhaiB;EtC0qMrB;EsC/wLA;;IAOI,oBAA4D;EtC4wLhE;EsCnxLA;;IASI,uBAA+D;EtC8wLnE;EsC5wLA;;IAGI,cpC1auB;EFurM3B;EsChxLA;;IAKI,6BAja2C;EtCgrM/C;EsC9wLA;IAKM,yBpCtasB;EFkrM5B;AACF;;AsCzwLA;EAEI,iCAA2C;AtC2wL/C;;AuCtqMA;EAEE,erCIW;EqCHX,gBAhC0B;AvCwsM5B;;AuC3qMA;EAMI,kBrCCY;AFwqMhB;;AuC/qMA;EAQI,kBrCHY;AF8qMhB;;AuCnrMA;
EAUI,iBrCNW;AFmrMf;;AuCvrMA;;EAcM,iBAAiB;EACjB,kBAAkB;EAClB,uBrCwBmB;AFspMzB;;AuC9rMA;EAkBM,uBrCsBmB;AF0pMzB;;AuC9qMA;;EAEE,mBAAmB;EACnB,aAAa;EACb,uBAAuB;EACvB,kBAAkB;AvCirMpB;;AuC/qMA;;;;EAME,cA3D6B;EA4D7B,uBAAuB;EACvB,eA5D8B;EA6D9B,mBA5DkC;EA6DlC,oBA5DmC;EA6DnC,kBAAkB;AvCgrMpB;;AuC9qMA;;;EAGE,qBrChE4B;EqCiE5B,crCrE4B;EqCsE5B,gBpCvEoB;AHwvMtB;;AuCtrMA;;;EAOI,qBrCrE0B;EqCsE1B,crCzE0B;AF8vM9B;;AuC7rMA;;;EAUI,qBrC3D8B;AFovMlC;;AuCnsMA;;;EAYI,iDrCjFyB;AF8wM7B;;AuCzsMA;;;EAcI,yBrC3E0B;EqC4E1B,qBrC5E0B;EqC6E1B,gBAAgB;EAChB,crChF0B;EqCiF1B,YAAY;AvCisMhB;;AuC/rMA;;EAEE,oBAAoB;EACpB,qBAAqB;EACrB,mBAAmB;AvCksMrB;;AuChsMA;EAEI,yBrC7E8B;EqC8E9B,qBrC9E8B;EqC+E9B,W5BnBY;AXqtMhB;;AuChsMA;EACE,crC/F4B;EqCgG5B,oBAAoB;AvCmsMtB;;AuCjsMA;EACE,eAAe;AvCosMjB;;AC/tME;EsClDF;IAiFI,eAAe;EvCqsMjB;EuC1tMF;;IAwBI,YAAY;IACZ,cAAc;EvCssMhB;EuCrsMA;IAEI,YAAY;IACZ,cAAc;EvCssMlB;AACF;;AC1uME;EsCsBF;IAiBI,YAAY;IACZ,cAAc;IACd,2BAA2B;IAC3B,QAAQ;EvCwsMV;EuCvsMA;IACE,QAAQ;EvCysMV;EuCxsMA;IACE,QAAQ;EvC0sMV;EuC9yMF;IAsGI,8BAA8B;EvC2sMhC;EuC5sMA;IAIM,QAAQ;EvC2sMd;EuC/sMA;IAMM,uBAAuB;IACvB,QAAQ;EvC4sMd;EuCntMA;IASM,QAAQ;EvC6sMd;EuCttMA;IAYM,QAAQ;EvC6sMd;EuCztMA;IAcM,QAAQ;EvC8sMd;EuC5tMA;IAgBM,yBAAyB;IACzB,QAAQ;EvC+sMd;AACF;;AwCv0MA;EACE,kBtCuCgB;EsCtChB,0FtC9B2B;EsC+B3B,etCIW;AFs0Mb;;AwC70MA;EAKI,qBtCakB;AF+zMtB;;AwCj1MA;EAYQ,uBtC3BuB;EsC4BvB,ctCzCqB;AFk3M7B;;AwCt1MA;EAeQ,0BtC9BuB;AFy2M/B;;AwC11MA;EAiBQ,YtChCuB;AF62M/B;;AwC91MA;EAYQ,yBtCxCqB;EsCyCrB,YtC5BuB;AFk3M/B;;AwCn2MA;EAeQ,4BtC3CqB;AFm4M7B;;AwCv2MA;EAiBQ,ctC7CqB;AFu4M7B;;AwC32MA;EAYQ,4BtC7BsB;EsC8BtB,yB7BqCa;AX8zMrB;;AwCh3MA;EAeQ,+BtChCsB;AFq4M9B;;AwCp3MA;EAiBQ,iBtClCsB;AFy4M9B;;AwCx3MA;EAYQ,yBtCpCsB;EsCqCtB,W7BuCQ;AXy0MhB;;AwC73MA;EAeQ,4BtCvCsB;AFy5M9B;;AwCj4MA;EAiBQ,ctCzCsB;AF65M9B;;AwCr4MA;EAYQ,yBtCtB0B;EsCuB1B,W7BuCQ;AXs1MhB;;AwC14MA;EAeQ,4BtCzB0B;AFw5MlC;;AwC94MA;EAiBQ,ctC3B0B;AF45MlC;;AwCl5MA;EAYQ,yBtCpB0B;EsCqB1B,W7BuCQ;AXm2MhB;;AwCv5MA;EAeQ,4BtCvB0B;AFm6MlC;;AwC35MA;EAiBQ,ctCzB0B;AFu6MlC;;AwC/5MA;EAYQ,yBtCrB0B;EsCsB1B,W7BuCQ;AXg3MhB;;AwCp6MA;E
AeQ,4BtCxB0B;AFi7MlC;;AwCx6MA;EAiBQ,ctC1B0B;AFq7MlC;;AwC56MA;EAYQ,yBtCvB0B;EsCwB1B,W7BuCQ;AX63MhB;;AwCj7MA;EAeQ,4BtC1B0B;AFg8MlC;;AwCr7MA;EAiBQ,ctC5B0B;AFo8MlC;;AwCz7MA;EAYQ,yBtCxB0B;EsCyB1B,yB7BqCa;AX44MrB;;AwC97MA;EAeQ,4BtC3B0B;AF88MlC;;AwCl8MA;EAiBQ,ctC7B0B;AFk9MlC;;AwCt8MA;EAYQ,yBtClByB;EsCmBzB,W7BuCQ;AXu5MhB;;AwC38MA;EAeQ,4BtCrByB;AFq9MjC;;AwC/8MA;EAiBQ,ctCvByB;AFy9MjC;;AwCh8MA;;EAGI,gCtCzC2B;AF2+M/B;;AwCh8MA;EACE,yBtC5C6B;EsC6C7B,0BAA8C;EAC9C,ctCnD4B;EsCoD5B,iBAhDyB;EAiDzB,gBtCfe;EsCgBf,iBArD8B;EAsD9B,mBArDgC;AxCw/MlC;;AwCj8MA;EACE,qBAAqB;EACrB,aAAa;EACb,kBArD4B;EAsD5B,uBAAuB;AxCo8MzB;;AwCx8MA;EAMI,gCtC3D0B;EsC4D1B,mBAAmB;EACnB,cAAc;AxCs8MlB;;AwC98MA;EAWM,4BtCnEwB;EsCoExB,ctCrEwB;AF4gN9B;;AwCr8MA;EAEI,ctCxE0B;AF+gN9B;;AwCz8MA;EAIM,ctC3D4B;AFogNlC;;AwCv8MA;EACE,mBAAmB;EACnB,ctC/E4B;EsCgF5B,aAAa;EACb,2BAA2B;EAC3B,qBAAqB;AxC08MvB;;AwC/8MA;EvC6DI,oBuCtDsC;AxC48M1C;;AwCn9MA;EASI,YAAY;EACZ,cAAc;EACd,WAAW;AxC88Mf;;AwCz9MA;EAaI,eAAe;AxCg9MnB;;AwC79MA;EAeI,0BtC5E8B;EsC6E9B,ctC7F0B;AF+iN9B;;AwCl+MA;EAkBM,ctC/E4B;AFmiNlC;;AwCt+MA;EAoBI,8BtCjCc;EsCkCd,+BtClCc;AFw/MlB;;AwCp9MA;;EAEE,eAAe;AxCu9MjB;;AwCz9MA;;EAII,4BtCjG0B;AF2jN9B;;AwCx9MA;EvC9FE,qBAAqB;EACrB,euC8FgB;EvC7FhB,WuC6FqB;EvC5FrB,gBuC4FqB;EvC3FrB,kBAAkB;EAClB,mBAAmB;EACnB,UuCyFqB;EACrB,ctC1G4B;EDwI1B,oBuC7BoC;AxCi+MxC;;AwCp+MA;EAKI,kBAAkB;EAClB,oBAAoB;AxCm+MxB;;AyC7jNA;ExCkCE,iCAAiC;EwC9BjC,oBAAoB;EACpB,aAAa;EACb,evCGW;EuCFX,8BAA8B;EAC9B,gBAAgB;EAChB,gBAAgB;EAChB,mBAAmB;AzC8jNrB;;AyCxkNA;EAYI,mBAAmB;EACnB,4BvC/B0B;EuCgC1B,0BAzC4B;EA0C5B,wBAzC0B;EA0C1B,cvCrC0B;EuCsC1B,aAAa;EACb,uBAAuB;EACvB,mBAA6C;EAC7C,kBAxCyB;EAyCzB,mBAAmB;AzCgkNvB;;AyCrlNA;EAuBM,4BvC7CwB;EuC8CxB,cvC9CwB;AFgnN9B;;AyC1lNA;EA0BI,cAAc;AzCokNlB;;AyC9lNA;EA6BQ,4BvCnC0B;EuCoC1B,cvCpC0B;AFymNlC;;AyCnmNA;EAgCI,mBAAmB;EACnB,4BvCnD0B;EuCoD1B,0BA7D4B;EA8D5B,wBA7D0B;EA8D1B,aAAa;EACb,YAAY;EACZ,cAAc;EACd,2BAA2B;AzCukN/B;;AyC9mNA;EAyCM,qBAAqB;AzCykN3B;;AyClnNA;EA2CM,UAAU;EACV,uBAAuB;EACvB,oBAAoB;EACpB,qBAAqB;AzC2kN3B;;AyCznNA;EAgDM,yBAAyB;EACzB,oBAAoB;AzC6kN1B;;
AyC9nNA;ExCoHI,mBwChEuC;AzC8kN3C;;AyCloNA;ExCoHI,kBwC9DuC;AzCglN3C;;AyCtoNA;EA0DM,uBAAuB;AzCglN7B;;AyC1oNA;EA6DM,yBAAyB;AzCilN/B;;AyC9oNA;EAiEM,6BAA6B;EAE3B,0BAAkE;AzCglN1E;;AyCnpNA;EAuEQ,4BvCtFsB;EuCuFtB,4BvC1FsB;AF0qN9B;;AyCxpNA;EA4EU,uBvCzFqB;EuC0FrB,qBvC/FoB;EuCgGpB,2CAA2E;AzCglNrF;;AyC9pNA;EAiFM,YAAY;EACZ,cAAc;AzCilNpB;;AyCnqNA;EAqFM,qBvCvGwB;EuCwGxB,mBA/F+B;EAgG/B,iBA/F6B;EAgG7B,gBAAgB;EAChB,kBAAkB;AzCklNxB;;AyC3qNA;EA2FQ,4BvC1GsB;EuC2GtB,qBvC/GsB;EuCgHtB,UAAU;AzColNlB;;AyCjrNA;ExCoHI,iBwCpBuE;AzCqlN3E;;AyCrrNA;EAmGU,2BvC1DE;EuC2DF,8BvC3DE;AFipNZ;;AyC1rNA;EA0GU,4BvCjEE;EuCkEF,+BvClEE;AFspNZ;;AyC/rNA;EAiHU,yBvCvHwB;EuCwHxB,qBvCxHwB;EuCyHxB,W9B7DM;E8B8DN,UAAU;AzCklNpB;;AyCtsNA;EAsHM,mBAAmB;AzColNzB;;AyC1sNA;EA2HY,mCvChFa;EuCiFb,gCvCjFa;EuCkFb,oBAAoB;AzCmlNhC;;AyChtNA;EAoIY,oCvCzFa;EuC0Fb,iCvC1Fa;EuC2Fb,qBAAqB;AzCglNjC;;AyCttNA;EA6II,kBvCnIY;AFgtNhB;;AyC1tNA;EA+II,kBvCvIY;AFstNhB;;AyC9tNA;EAiJI,iBvC1IW;AF2tNf;;A0C9vNA,eAAA;ACEA;EACE,cAAc;EACd,aAAa;EACb,YAAY;EACZ,cAAc;EACd,gBAPkB;A3CuwNpB;;A2C/vNE;EACE,UAAU;A3CkwNd;;A2CjwNE;EACE,UAAU;EACV,WAAW;A3CowNf;;A2CnwNE;EACE,UAAU;EACV,UAAU;A3CswNd;;A2CrwNE;EACE,UAAU;EACV,eAAe;A3CwwNnB;;A2CvwNE;EACE,UAAU;EACV,UAAU;A3C0wNd;;A2CzwNE;EACE,UAAU;EACV,eAAe;A3C4wNnB;;A2C3wNE;EACE,UAAU;EACV,UAAU;A3C8wNd;;A2C7wNE;EACE,UAAU;EACV,UAAU;A3CgxNd;;A2C/wNE;EACE,UAAU;EACV,UAAU;A3CkxNd;;A2CjxNE;EACE,UAAU;EACV,UAAU;A3CoxNd;;A2CnxNE;EACE,UAAU;EACV,UAAU;A3CsxNd;;A2CrxNE;E1CwGE,gB0CvGmC;A3CwxNvC;;A2CvxNE;E1CsGE,qB0CrGwC;A3C0xN5C;;A2CzxNE;E1CoGE,gB0CnGmC;A3C4xNvC;;A2C3xNE;E1CkGE,qB0CjGwC;A3C8xN5C;;A2C7xNE;E1CgGE,gB0C/FmC;A3CgyNvC;;A2C/xNE;E1C8FE,gB0C7FmC;A3CkyNvC;;A2CjyNE;E1C4FE,gB0C3FmC;A3CoyNvC;;A2CnyNE;E1C0FE,gB0CzFmC;A3CsyNvC;;A2CryNE;E1CwFE,gB0CvFmC;A3CwyNvC;;A2CtyNI;EACE,UAAU;EACV,SAA0B;A3CyyNhC;;A2CxyNI;E1CkFA,e0CjFqD;A3C2yNzD;;A2C/yNI;EACE,UAAU;EACV,eAA0B;A3CkzNhC;;A2CjzNI;E1CkFA,qB0CjFqD;A3CozNzD;;A2CxzNI;EACE,UAAU;EACV,gBAA0B;A3C2zNhC;;A2C1zNI;E1CkFA,sB0CjFqD;A3C6zNzD;;A2Cj0NI;EACE,UAAU;EACV,UAA0B;A3Co0NhC;;A2Cn0NI;E1CkFA,gB0CjFqD;A3Cs0N
zD;;A2C10NI;EACE,UAAU;EACV,gBAA0B;A3C60NhC;;A2C50NI;E1CkFA,sB0CjFqD;A3C+0NzD;;A2Cn1NI;EACE,UAAU;EACV,gBAA0B;A3Cs1NhC;;A2Cr1NI;E1CkFA,sB0CjFqD;A3Cw1NzD;;A2C51NI;EACE,UAAU;EACV,UAA0B;A3C+1NhC;;A2C91NI;E1CkFA,gB0CjFqD;A3Ci2NzD;;A2Cr2NI;EACE,UAAU;EACV,gBAA0B;A3Cw2NhC;;A2Cv2NI;E1CkFA,sB0CjFqD;A3C02NzD;;A2C92NI;EACE,UAAU;EACV,gBAA0B;A3Ci3NhC;;A2Ch3NI;E1CkFA,sB0CjFqD;A3Cm3NzD;;A2Cv3NI;EACE,UAAU;EACV,UAA0B;A3C03NhC;;A2Cz3NI;E1CkFA,gB0CjFqD;A3C43NzD;;A2Ch4NI;EACE,UAAU;EACV,gBAA0B;A3Cm4NhC;;A2Cl4NI;E1CkFA,sB0CjFqD;A3Cq4NzD;;A2Cz4NI;EACE,UAAU;EACV,gBAA0B;A3C44NhC;;A2C34NI;E1CkFA,sB0CjFqD;A3C84NzD;;A2Cl5NI;EACE,UAAU;EACV,WAA0B;A3Cq5NhC;;A2Cp5NI;E1CkFA,iB0CjFqD;A3Cu5NzD;;ACr4NE;E0C/EF;IAgEM,UAAU;E3Cy5Nd;E2Cz9NF;IAkEM,UAAU;IACV,WAAW;E3C05Nf;E2C79NF;IAqEM,UAAU;IACV,UAAU;E3C25Nd;E2Cj+NF;IAwEM,UAAU;IACV,eAAe;E3C45NnB;E2Cr+NF;IA2EM,UAAU;IACV,UAAU;E3C65Nd;E2Cz+NF;IA8EM,UAAU;IACV,eAAe;E3C85NnB;E2C7+NF;IAiFM,UAAU;IACV,UAAU;E3C+5Nd;E2Cj/NF;IAoFM,UAAU;IACV,UAAU;E3Cg6Nd;E2Cr/NF;IAuFM,UAAU;IACV,UAAU;E3Ci6Nd;E2Cz/NF;IA0FM,UAAU;IACV,UAAU;E3Ck6Nd;E2C7/NF;IA6FM,UAAU;IACV,UAAU;E3Cm6Nd;E2CjgOF;I1C8II,gB0C9CqC;E3Co6NvC;E2CpgOF;I1C8II,qB0C5C0C;E3Cq6N5C;E2CvgOF;I1C8II,gB0C1CqC;E3Cs6NvC;E2C1gOF;I1C8II,qB0CxC0C;E3Cu6N5C;E2C7gOF;I1C8II,gB0CtCqC;E3Cw6NvC;E2ChhOF;I1C8II,gB0CpCqC;E3Cy6NvC;E2CnhOF;I1C8II,gB0ClCqC;E3C06NvC;E2CthOF;I1C8II,gB0ChCqC;E3C26NvC;E2CzhOF;I1C8II,gB0C9BqC;E3C46NvC;E2C5hOF;IAmHQ,UAAU;IACV,SAA0B;E3C46NhC;E2ChiOF;I1C8II,e0CxBuD;E3C66NzD;E2CniOF;IAmHQ,UAAU;IACV,eAA0B;E3Cm7NhC;E2CviOF;I1C8II,qB0CxBuD;E3Co7NzD;E2C1iOF;IAmHQ,UAAU;IACV,gBAA0B;E3C07NhC;E2C9iOF;I1C8II,sB0CxBuD;E3C27NzD;E2CjjOF;IAmHQ,UAAU;IACV,UAA0B;E3Ci8NhC;E2CrjOF;I1C8II,gB0CxBuD;E3Ck8NzD;E2CxjOF;IAmHQ,UAAU;IACV,gBAA0B;E3Cw8NhC;E2C5jOF;I1C8II,sB0CxBuD;E3Cy8NzD;E2C/jOF;IAmHQ,UAAU;IACV,gBAA0B;E3C+8NhC;E2CnkOF;I1C8II,sB0CxBuD;E3Cg9NzD;E2CtkOF;IAmHQ,UAAU;IACV,UAA0B;E3Cs9NhC;E2C1kOF;I1C8II,gB0CxBuD;E3Cu9NzD;E2C7kOF;IAmHQ,UAAU;IACV,gBAA0B;E3C69NhC;E2CjlOF;I1C8II,sB0CxBuD;E3C89NzD;E2CplOF;IAmHQ,UAAU;IACV,gBAA0B;E3Co+NhC;E2CxlOF;I1C8
II,sB0CxBuD;E3Cq+NzD;E2C3lOF;IAmHQ,UAAU;IACV,UAA0B;E3C2+NhC;E2C/lOF;I1C8II,gB0CxBuD;E3C4+NzD;E2ClmOF;IAmHQ,UAAU;IACV,gBAA0B;E3Ck/NhC;E2CtmOF;I1C8II,sB0CxBuD;E3Cm/NzD;E2CzmOF;IAmHQ,UAAU;IACV,gBAA0B;E3Cy/NhC;E2C7mOF;I1C8II,sB0CxBuD;E3C0/NzD;E2ChnOF;IAmHQ,UAAU;IACV,WAA0B;E3CggOhC;E2CpnOF;I1C8II,iB0CxBuD;E3CigOzD;AACF;;ACriOE;E0CnFF;IA0HM,UAAU;E3CmgOd;E2C7nOF;IA6HM,UAAU;IACV,WAAW;E3CmgOf;E2CjoOF;IAiIM,UAAU;IACV,UAAU;E3CmgOd;E2CroOF;IAqIM,UAAU;IACV,eAAe;E3CmgOnB;E2CzoOF;IAyIM,UAAU;IACV,UAAU;E3CmgOd;E2C7oOF;IA6IM,UAAU;IACV,eAAe;E3CmgOnB;E2CjpOF;IAiJM,UAAU;IACV,UAAU;E3CmgOd;E2CrpOF;IAqJM,UAAU;IACV,UAAU;E3CmgOd;E2CzpOF;IAyJM,UAAU;IACV,UAAU;E3CmgOd;E2C7pOF;IA6JM,UAAU;IACV,UAAU;E3CmgOd;E2CjqOF;IAiKM,UAAU;IACV,UAAU;E3CmgOd;E2CrqOF;I1C8II,gB0CuBqC;E3CmgOvC;E2CxqOF;I1C8II,qB0C0B0C;E3CmgO5C;E2C3qOF;I1C8II,gB0C6BqC;E3CmgOvC;E2C9qOF;I1C8II,qB0CgC0C;E3CmgO5C;E2CjrOF;I1C8II,gB0CmCqC;E3CmgOvC;E2CprOF;I1C8II,gB0CsCqC;E3CmgOvC;E2CvrOF;I1C8II,gB0CyCqC;E3CmgOvC;E2C1rOF;I1C8II,gB0C4CqC;E3CmgOvC;E2C7rOF;I1C8II,gB0C+CqC;E3CmgOvC;E2ChsOF;IAiMQ,UAAU;IACV,SAA0B;E3CkgOhC;E2CpsOF;I1C8II,e0CuDuD;E3CkgOzD;E2CvsOF;IAiMQ,UAAU;IACV,eAA0B;E3CygOhC;E2C3sOF;I1C8II,qB0CuDuD;E3CygOzD;E2C9sOF;IAiMQ,UAAU;IACV,gBAA0B;E3CghOhC;E2CltOF;I1C8II,sB0CuDuD;E3CghOzD;E2CrtOF;IAiMQ,UAAU;IACV,UAA0B;E3CuhOhC;E2CztOF;I1C8II,gB0CuDuD;E3CuhOzD;E2C5tOF;IAiMQ,UAAU;IACV,gBAA0B;E3C8hOhC;E2ChuOF;I1C8II,sB0CuDuD;E3C8hOzD;E2CnuOF;IAiMQ,UAAU;IACV,gBAA0B;E3CqiOhC;E2CvuOF;I1C8II,sB0CuDuD;E3CqiOzD;E2C1uOF;IAiMQ,UAAU;IACV,UAA0B;E3C4iOhC;E2C9uOF;I1C8II,gB0CuDuD;E3C4iOzD;E2CjvOF;IAiMQ,UAAU;IACV,gBAA0B;E3CmjOhC;E2CrvOF;I1C8II,sB0CuDuD;E3CmjOzD;E2CxvOF;IAiMQ,UAAU;IACV,gBAA0B;E3C0jOhC;E2C5vOF;I1C8II,sB0CuDuD;E3C0jOzD;E2C/vOF;IAiMQ,UAAU;IACV,UAA0B;E3CikOhC;E2CnwOF;I1C8II,gB0CuDuD;E3CikOzD;E2CtwOF;IAiMQ,UAAU;IACV,gBAA0B;E3CwkOhC;E2C1wOF;I1C8II,sB0CuDuD;E3CwkOzD;E2C7wOF;IAiMQ,UAAU;IACV,gBAA0B;E3C+kOhC;E2CjxOF;I1C8II,sB0CuDuD;E3C+kOzD;E2CpxOF;IAiMQ,UAAU;IACV,WAA0B;E3CslOhC;E2CxxOF;I1C8II,iB0CuDuD;E3CslOzD;AACF;;ACjsOE;E0C3FF;IAwMM,UAAU;E3CylOd;
E2CjyOF;IA0MM,UAAU;IACV,WAAW;E3C0lOf;E2CryOF;IA6MM,UAAU;IACV,UAAU;E3C2lOd;E2CzyOF;IAgNM,UAAU;IACV,eAAe;E3C4lOnB;E2C7yOF;IAmNM,UAAU;IACV,UAAU;E3C6lOd;E2CjzOF;IAsNM,UAAU;IACV,eAAe;E3C8lOnB;E2CrzOF;IAyNM,UAAU;IACV,UAAU;E3C+lOd;E2CzzOF;IA4NM,UAAU;IACV,UAAU;E3CgmOd;E2C7zOF;IA+NM,UAAU;IACV,UAAU;E3CimOd;E2Cj0OF;IAkOM,UAAU;IACV,UAAU;E3CkmOd;E2Cr0OF;IAqOM,UAAU;IACV,UAAU;E3CmmOd;E2Cz0OF;I1C8II,gB0C0FqC;E3ComOvC;E2C50OF;I1C8II,qB0C4F0C;E3CqmO5C;E2C/0OF;I1C8II,gB0C8FqC;E3CsmOvC;E2Cl1OF;I1C8II,qB0CgG0C;E3CumO5C;E2Cr1OF;I1C8II,gB0CkGqC;E3CwmOvC;E2Cx1OF;I1C8II,gB0CoGqC;E3CymOvC;E2C31OF;I1C8II,gB0CsGqC;E3C0mOvC;E2C91OF;I1C8II,gB0CwGqC;E3C2mOvC;E2Cj2OF;I1C8II,gB0C0GqC;E3C4mOvC;E2Cp2OF;IA2PQ,UAAU;IACV,SAA0B;E3C4mOhC;E2Cx2OF;I1C8II,e0CgHuD;E3C6mOzD;E2C32OF;IA2PQ,UAAU;IACV,eAA0B;E3CmnOhC;E2C/2OF;I1C8II,qB0CgHuD;E3ConOzD;E2Cl3OF;IA2PQ,UAAU;IACV,gBAA0B;E3C0nOhC;E2Ct3OF;I1C8II,sB0CgHuD;E3C2nOzD;E2Cz3OF;IA2PQ,UAAU;IACV,UAA0B;E3CioOhC;E2C73OF;I1C8II,gB0CgHuD;E3CkoOzD;E2Ch4OF;IA2PQ,UAAU;IACV,gBAA0B;E3CwoOhC;E2Cp4OF;I1C8II,sB0CgHuD;E3CyoOzD;E2Cv4OF;IA2PQ,UAAU;IACV,gBAA0B;E3C+oOhC;E2C34OF;I1C8II,sB0CgHuD;E3CgpOzD;E2C94OF;IA2PQ,UAAU;IACV,UAA0B;E3CspOhC;E2Cl5OF;I1C8II,gB0CgHuD;E3CupOzD;E2Cr5OF;IA2PQ,UAAU;IACV,gBAA0B;E3C6pOhC;E2Cz5OF;I1C8II,sB0CgHuD;E3C8pOzD;E2C55OF;IA2PQ,UAAU;IACV,gBAA0B;E3CoqOhC;E2Ch6OF;I1C8II,sB0CgHuD;E3CqqOzD;E2Cn6OF;IA2PQ,UAAU;IACV,UAA0B;E3C2qOhC;E2Cv6OF;I1C8II,gB0CgHuD;E3C4qOzD;E2C16OF;IA2PQ,UAAU;IACV,gBAA0B;E3CkrOhC;E2C96OF;I1C8II,sB0CgHuD;E3CmrOzD;E2Cj7OF;IA2PQ,UAAU;IACV,gBAA0B;E3CyrOhC;E2Cr7OF;I1C8II,sB0CgHuD;E3C0rOzD;E2Cx7OF;IA2PQ,UAAU;IACV,WAA0B;E3CgsOhC;E2C57OF;I1C8II,iB0CgHuD;E3CisOzD;AACF;;ACj2OE;E0C/FF;IAiQM,UAAU;E3CosOd;E2Cr8OF;IAmQM,UAAU;IACV,WAAW;E3CqsOf;E2Cz8OF;IAsQM,UAAU;IACV,UAAU;E3CssOd;E2C78OF;IAyQM,UAAU;IACV,eAAe;E3CusOnB;E2Cj9OF;IA4QM,UAAU;IACV,UAAU;E3CwsOd;E2Cr9OF;IA+QM,UAAU;IACV,eAAe;E3CysOnB;E2Cz9OF;IAkRM,UAAU;IACV,UAAU;E3C0sOd;E2C79OF;IAqRM,UAAU;IACV,UAAU;E3C2sOd;E2Cj+OF;IAwRM,UAAU;IACV,UAAU;E3C4sOd;E2Cr+OF;IA2RM,UAAU;IACV,UAAU;E3C6sOd;E2Cz+OF;IA8RM,U
AAU;IACV,UAAU;E3C8sOd;E2C7+OF;I1C8II,gB0CmJqC;E3C+sOvC;E2Ch/OF;I1C8II,qB0CqJ0C;E3CgtO5C;E2Cn/OF;I1C8II,gB0CuJqC;E3CitOvC;E2Ct/OF;I1C8II,qB0CyJ0C;E3CktO5C;E2Cz/OF;I1C8II,gB0C2JqC;E3CmtOvC;E2C5/OF;I1C8II,gB0C6JqC;E3CotOvC;E2C//OF;I1C8II,gB0C+JqC;E3CqtOvC;E2ClgPF;I1C8II,gB0CiKqC;E3CstOvC;E2CrgPF;I1C8II,gB0CmKqC;E3CutOvC;E2CxgPF;IAoTQ,UAAU;IACV,SAA0B;E3CutOhC;E2C5gPF;I1C8II,e0CyKuD;E3CwtOzD;E2C/gPF;IAoTQ,UAAU;IACV,eAA0B;E3C8tOhC;E2CnhPF;I1C8II,qB0CyKuD;E3C+tOzD;E2CthPF;IAoTQ,UAAU;IACV,gBAA0B;E3CquOhC;E2C1hPF;I1C8II,sB0CyKuD;E3CsuOzD;E2C7hPF;IAoTQ,UAAU;IACV,UAA0B;E3C4uOhC;E2CjiPF;I1C8II,gB0CyKuD;E3C6uOzD;E2CpiPF;IAoTQ,UAAU;IACV,gBAA0B;E3CmvOhC;E2CxiPF;I1C8II,sB0CyKuD;E3CovOzD;E2C3iPF;IAoTQ,UAAU;IACV,gBAA0B;E3C0vOhC;E2C/iPF;I1C8II,sB0CyKuD;E3C2vOzD;E2CljPF;IAoTQ,UAAU;IACV,UAA0B;E3CiwOhC;E2CtjPF;I1C8II,gB0CyKuD;E3CkwOzD;E2CzjPF;IAoTQ,UAAU;IACV,gBAA0B;E3CwwOhC;E2C7jPF;I1C8II,sB0CyKuD;E3CywOzD;E2ChkPF;IAoTQ,UAAU;IACV,gBAA0B;E3C+wOhC;E2CpkPF;I1C8II,sB0CyKuD;E3CgxOzD;E2CvkPF;IAoTQ,UAAU;IACV,UAA0B;E3CsxOhC;E2C3kPF;I1C8II,gB0CyKuD;E3CuxOzD;E2C9kPF;IAoTQ,UAAU;IACV,gBAA0B;E3C6xOhC;E2CllPF;I1C8II,sB0CyKuD;E3C8xOzD;E2CrlPF;IAoTQ,UAAU;IACV,gBAA0B;E3CoyOhC;E2CzlPF;I1C8II,sB0CyKuD;E3CqyOzD;E2C5lPF;IAoTQ,UAAU;IACV,WAA0B;E3C2yOhC;E2ChmPF;I1C8II,iB0CyKuD;E3C4yOzD;AACF;;ACt/OI;E0C9GJ;IA0TM,UAAU;E3C+yOd;E2CzmPF;IA4TM,UAAU;IACV,WAAW;E3CgzOf;E2C7mPF;IA+TM,UAAU;IACV,UAAU;E3CizOd;E2CjnPF;IAkUM,UAAU;IACV,eAAe;E3CkzOnB;E2CrnPF;IAqUM,UAAU;IACV,UAAU;E3CmzOd;E2CznPF;IAwUM,UAAU;IACV,eAAe;E3CozOnB;E2C7nPF;IA2UM,UAAU;IACV,UAAU;E3CqzOd;E2CjoPF;IA8UM,UAAU;IACV,UAAU;E3CszOd;E2CroPF;IAiVM,UAAU;IACV,UAAU;E3CuzOd;E2CzoPF;IAoVM,UAAU;IACV,UAAU;E3CwzOd;E2C7oPF;IAuVM,UAAU;IACV,UAAU;E3CyzOd;E2CjpPF;I1C8II,gB0C4MqC;E3C0zOvC;E2CppPF;I1C8II,qB0C8M0C;E3C2zO5C;E2CvpPF;I1C8II,gB0CgNqC;E3C4zOvC;E2C1pPF;I1C8II,qB0CkN0C;E3C6zO5C;E2C7pPF;I1C8II,gB0CoNqC;E3C8zOvC;E2ChqPF;I1C8II,gB0CsNqC;E3C+zOvC;E2CnqPF;I1C8II,gB0CwNqC;E3Cg0OvC;E2CtqPF;I1C8II,gB0C0NqC;E3Ci0OvC;E2CzqPF;I1C8II,gB0C4NqC;E3Ck0OvC;E2C5qPF;IA6WQ,UAAU;IACV,SAA0B;E
3Ck0OhC;E2ChrPF;I1C8II,e0CkOuD;E3Cm0OzD;E2CnrPF;IA6WQ,UAAU;IACV,eAA0B;E3Cy0OhC;E2CvrPF;I1C8II,qB0CkOuD;E3C00OzD;E2C1rPF;IA6WQ,UAAU;IACV,gBAA0B;E3Cg1OhC;E2C9rPF;I1C8II,sB0CkOuD;E3Ci1OzD;E2CjsPF;IA6WQ,UAAU;IACV,UAA0B;E3Cu1OhC;E2CrsPF;I1C8II,gB0CkOuD;E3Cw1OzD;E2CxsPF;IA6WQ,UAAU;IACV,gBAA0B;E3C81OhC;E2C5sPF;I1C8II,sB0CkOuD;E3C+1OzD;E2C/sPF;IA6WQ,UAAU;IACV,gBAA0B;E3Cq2OhC;E2CntPF;I1C8II,sB0CkOuD;E3Cs2OzD;E2CttPF;IA6WQ,UAAU;IACV,UAA0B;E3C42OhC;E2C1tPF;I1C8II,gB0CkOuD;E3C62OzD;E2C7tPF;IA6WQ,UAAU;IACV,gBAA0B;E3Cm3OhC;E2CjuPF;I1C8II,sB0CkOuD;E3Co3OzD;E2CpuPF;IA6WQ,UAAU;IACV,gBAA0B;E3C03OhC;E2CxuPF;I1C8II,sB0CkOuD;E3C23OzD;E2C3uPF;IA6WQ,UAAU;IACV,UAA0B;E3Ci4OhC;E2C/uPF;I1C8II,gB0CkOuD;E3Ck4OzD;E2ClvPF;IA6WQ,UAAU;IACV,gBAA0B;E3Cw4OhC;E2CtvPF;I1C8II,sB0CkOuD;E3Cy4OzD;E2CzvPF;IA6WQ,UAAU;IACV,gBAA0B;E3C+4OhC;E2C7vPF;I1C8II,sB0CkOuD;E3Cg5OzD;E2ChwPF;IA6WQ,UAAU;IACV,WAA0B;E3Cs5OhC;E2CpwPF;I1C8II,iB0CkOuD;E3Cu5OzD;AACF;;AC3oPI;E0C7HJ;IAmXM,UAAU;E3C05Od;E2C7wPF;IAqXM,UAAU;IACV,WAAW;E3C25Of;E2CjxPF;IAwXM,UAAU;IACV,UAAU;E3C45Od;E2CrxPF;IA2XM,UAAU;IACV,eAAe;E3C65OnB;E2CzxPF;IA8XM,UAAU;IACV,UAAU;E3C85Od;E2C7xPF;IAiYM,UAAU;IACV,eAAe;E3C+5OnB;E2CjyPF;IAoYM,UAAU;IACV,UAAU;E3Cg6Od;E2CryPF;IAuYM,UAAU;IACV,UAAU;E3Ci6Od;E2CzyPF;IA0YM,UAAU;IACV,UAAU;E3Ck6Od;E2C7yPF;IA6YM,UAAU;IACV,UAAU;E3Cm6Od;E2CjzPF;IAgZM,UAAU;IACV,UAAU;E3Co6Od;E2CrzPF;I1C8II,gB0CqQqC;E3Cq6OvC;E2CxzPF;I1C8II,qB0CuQ0C;E3Cs6O5C;E2C3zPF;I1C8II,gB0CyQqC;E3Cu6OvC;E2C9zPF;I1C8II,qB0C2Q0C;E3Cw6O5C;E2Cj0PF;I1C8II,gB0C6QqC;E3Cy6OvC;E2Cp0PF;I1C8II,gB0C+QqC;E3C06OvC;E2Cv0PF;I1C8II,gB0CiRqC;E3C26OvC;E2C10PF;I1C8II,gB0CmRqC;E3C46OvC;E2C70PF;I1C8II,gB0CqRqC;E3C66OvC;E2Ch1PF;IAsaQ,UAAU;IACV,SAA0B;E3C66OhC;E2Cp1PF;I1C8II,e0C2RuD;E3C86OzD;E2Cv1PF;IAsaQ,UAAU;IACV,eAA0B;E3Co7OhC;E2C31PF;I1C8II,qB0C2RuD;E3Cq7OzD;E2C91PF;IAsaQ,UAAU;IACV,gBAA0B;E3C27OhC;E2Cl2PF;I1C8II,sB0C2RuD;E3C47OzD;E2Cr2PF;IAsaQ,UAAU;IACV,UAA0B;E3Ck8OhC;E2Cz2PF;I1C8II,gB0C2RuD;E3Cm8OzD;E2C52PF;IAsaQ,UAAU;IACV,gBAA0B;E3Cy8OhC;E2Ch3PF;I1C8II,sB0C2RuD;E3C08OzD;E2Cn3PF;IAsaQ,UAAU;I
ACV,gBAA0B;E3Cg9OhC;E2Cv3PF;I1C8II,sB0C2RuD;E3Ci9OzD;E2C13PF;IAsaQ,UAAU;IACV,UAA0B;E3Cu9OhC;E2C93PF;I1C8II,gB0C2RuD;E3Cw9OzD;E2Cj4PF;IAsaQ,UAAU;IACV,gBAA0B;E3C89OhC;E2Cr4PF;I1C8II,sB0C2RuD;E3C+9OzD;E2Cx4PF;IAsaQ,UAAU;IACV,gBAA0B;E3Cq+OhC;E2C54PF;I1C8II,sB0C2RuD;E3Cs+OzD;E2C/4PF;IAsaQ,UAAU;IACV,UAA0B;E3C4+OhC;E2Cn5PF;I1C8II,gB0C2RuD;E3C6+OzD;E2Ct5PF;IAsaQ,UAAU;IACV,gBAA0B;E3Cm/OhC;E2C15PF;I1C8II,sB0C2RuD;E3Co/OzD;E2C75PF;IAsaQ,UAAU;IACV,gBAA0B;E3C0/OhC;E2Cj6PF;I1C8II,sB0C2RuD;E3C2/OzD;E2Cp6PF;IAsaQ,UAAU;IACV,WAA0B;E3CigPhC;E2Cx6PF;I1C8II,iB0C2RuD;E3CkgPzD;AACF;;A2CjgPA;E1C7RI,qB0ChJgB;E1CgJhB,sB0ChJgB;EAgblB,oBAhbkB;A3Co7PpB;;A2CvgPA;EAKI,uBAlbgB;A3Cw7PpB;;A2C3gPA;EAOI,qCAA4C;A3CwgPhD;;A2C/gPA;EAUI,uBAAuB;A3CygP3B;;A2CnhPA;E1C7RI,c0CySiC;E1CzSjC,e0C0SiC;EACjC,aAAa;A3C2gPjB;;A2CzhPA;EAgBM,SAAS;EACT,qBAAqB;A3C6gP3B;;A2C9hPA;EAmBM,qBAAqB;A3C+gP3B;;A2CliPA;EAqBM,gBAAgB;A3CihPtB;;A2CtiPA;EAuBI,aAAa;A3CmhPjB;;A2C1iPA;EAyBI,eAAe;A3CqhPnB;;A2C9iPA;EA2BI,mBAAmB;A3CuhPvB;;AC14PE;E0CwVF;IA+BM,aAAa;E3CwhPjB;AACF;;ACp4PE;E0C4UF;IAmCM,aAAa;E3C0hPjB;AACF;;A2CxhPE;EACE,oBAAY;E1CpUZ,wC0CqU2D;E1CrU3D,yC0CsU2D;A3C2hP/D;;A2C9hPE;EAKI,8BAA8B;EAC9B,+BAA+B;A3C6hPrC;;A2CniPE;EASM,iBAAY;A3C8hPpB;;ACz6PE;E0CkYA;IAYQ,iBAAY;E3CgiPpB;AACF;;AC36PE;E0C8XA;IAeQ,iBAAY;E3CmiPpB;AACF;;AC76PE;E0C0XA;IAkBQ,iBAAY;E3CsiPpB;AACF;;AC/6PE;E0CsXA;IAqBQ,iBAAY;E3CyiPpB;AACF;;ACj7PE;E0CkXA;IAwBQ,iBAAY;E3C4iPpB;AACF;;ACl7PI;E0C6WF;IA2BQ,iBAAY;E3C+iPpB;AACF;;AC96PI;E0CmWF;IA8BQ,iBAAY;E3CkjPpB;AACF;;AC/6PI;E0C8VF;IAiCQ,iBAAY;E3CqjPpB;AACF;;AC36PI;E0CoVF;IAoCQ,iBAAY;E3CwjPpB;AACF;;A2C7lPE;EASM,oBAAY;A3CwlPpB;;ACn+PE;E0CkYA;IAYQ,oBAAY;E3C0lPpB;AACF;;ACr+PE;E0C8XA;IAeQ,oBAAY;E3C6lPpB;AACF;;ACv+PE;E0C0XA;IAkBQ,oBAAY;E3CgmPpB;AACF;;ACz+PE;E0CsXA;IAqBQ,oBAAY;E3CmmPpB;AACF;;AC3+PE;E0CkXA;IAwBQ,oBAAY;E3CsmPpB;AACF;;AC5+PI;E0C6WF;IA2BQ,oBAAY;E3CymPpB;AACF;;ACx+PI;E0CmWF;IA8BQ,oBAAY;E3C4mPpB;AACF;;ACz+PI;E0C8VF;IAiCQ,oBAAY;E3C+mPpB;AACF;;ACr+PI;E0CoVF;IAoCQ,oBAAY;E3CknPpB;AACF;;A2CvpPE;EASM,mBAAY;A3CkpPpB;;AC7hQE;E0CkYA;IAYQ
,mBAAY;E3CopPpB;AACF;;AC/hQE;E0C8XA;IAeQ,mBAAY;E3CupPpB;AACF;;ACjiQE;E0C0XA;IAkBQ,mBAAY;E3C0pPpB;AACF;;ACniQE;E0CsXA;IAqBQ,mBAAY;E3C6pPpB;AACF;;ACriQE;E0CkXA;IAwBQ,mBAAY;E3CgqPpB;AACF;;ACtiQI;E0C6WF;IA2BQ,mBAAY;E3CmqPpB;AACF;;ACliQI;E0CmWF;IA8BQ,mBAAY;E3CsqPpB;AACF;;ACniQI;E0C8VF;IAiCQ,mBAAY;E3CyqPpB;AACF;;AC/hQI;E0CoVF;IAoCQ,mBAAY;E3C4qPpB;AACF;;A2CjtPE;EASM,oBAAY;A3C4sPpB;;ACvlQE;E0CkYA;IAYQ,oBAAY;E3C8sPpB;AACF;;ACzlQE;E0C8XA;IAeQ,oBAAY;E3CitPpB;AACF;;AC3lQE;E0C0XA;IAkBQ,oBAAY;E3CotPpB;AACF;;AC7lQE;E0CsXA;IAqBQ,oBAAY;E3CutPpB;AACF;;AC/lQE;E0CkXA;IAwBQ,oBAAY;E3C0tPpB;AACF;;AChmQI;E0C6WF;IA2BQ,oBAAY;E3C6tPpB;AACF;;AC5lQI;E0CmWF;IA8BQ,oBAAY;E3CguPpB;AACF;;AC7lQI;E0C8VF;IAiCQ,oBAAY;E3CmuPpB;AACF;;ACzlQI;E0CoVF;IAoCQ,oBAAY;E3CsuPpB;AACF;;A2C3wPE;EASM,iBAAY;A3CswPpB;;ACjpQE;E0CkYA;IAYQ,iBAAY;E3CwwPpB;AACF;;ACnpQE;E0C8XA;IAeQ,iBAAY;E3C2wPpB;AACF;;ACrpQE;E0C0XA;IAkBQ,iBAAY;E3C8wPpB;AACF;;ACvpQE;E0CsXA;IAqBQ,iBAAY;E3CixPpB;AACF;;ACzpQE;E0CkXA;IAwBQ,iBAAY;E3CoxPpB;AACF;;AC1pQI;E0C6WF;IA2BQ,iBAAY;E3CuxPpB;AACF;;ACtpQI;E0CmWF;IA8BQ,iBAAY;E3C0xPpB;AACF;;ACvpQI;E0C8VF;IAiCQ,iBAAY;E3C6xPpB;AACF;;ACnpQI;E0CoVF;IAoCQ,iBAAY;E3CgyPpB;AACF;;A2Cr0PE;EASM,oBAAY;A3Cg0PpB;;AC3sQE;E0CkYA;IAYQ,oBAAY;E3Ck0PpB;AACF;;AC7sQE;E0C8XA;IAeQ,oBAAY;E3Cq0PpB;AACF;;AC/sQE;E0C0XA;IAkBQ,oBAAY;E3Cw0PpB;AACF;;ACjtQE;E0CsXA;IAqBQ,oBAAY;E3C20PpB;AACF;;ACntQE;E0CkXA;IAwBQ,oBAAY;E3C80PpB;AACF;;ACptQI;E0C6WF;IA2BQ,oBAAY;E3Ci1PpB;AACF;;AChtQI;E0CmWF;IA8BQ,oBAAY;E3Co1PpB;AACF;;ACjtQI;E0C8VF;IAiCQ,oBAAY;E3Cu1PpB;AACF;;AC7sQI;E0CoVF;IAoCQ,oBAAY;E3C01PpB;AACF;;A2C/3PE;EASM,mBAAY;A3C03PpB;;ACrwQE;E0CkYA;IAYQ,mBAAY;E3C43PpB;AACF;;ACvwQE;E0C8XA;IAeQ,mBAAY;E3C+3PpB;AACF;;ACzwQE;E0C0XA;IAkBQ,mBAAY;E3Ck4PpB;AACF;;AC3wQE;E0CsXA;IAqBQ,mBAAY;E3Cq4PpB;AACF;;AC7wQE;E0CkXA;IAwBQ,mBAAY;E3Cw4PpB;AACF;;AC9wQI;E0C6WF;IA2BQ,mBAAY;E3C24PpB;AACF;;AC1wQI;E0CmWF;IA8BQ,mBAAY;E3C84PpB;AACF;;AC3wQI;E0C8VF;IAiCQ,mBAAY;E3Ci5PpB;AACF;;ACvwQI;E0CoVF;IAoCQ,mBAAY;E3Co5PpB;AACF;;A2Cz7PE;EASM,oBAAY;A3Co7PpB;;AC/zQE;E0CkYA;IAYQ,oBAAY;E3Cs7PpB;AACF
;;ACj0QE;E0C8XA;IAeQ,oBAAY;E3Cy7PpB;AACF;;ACn0QE;E0C0XA;IAkBQ,oBAAY;E3C47PpB;AACF;;ACr0QE;E0CsXA;IAqBQ,oBAAY;E3C+7PpB;AACF;;ACv0QE;E0CkXA;IAwBQ,oBAAY;E3Ck8PpB;AACF;;ACx0QI;E0C6WF;IA2BQ,oBAAY;E3Cq8PpB;AACF;;ACp0QI;E0CmWF;IA8BQ,oBAAY;E3Cw8PpB;AACF;;ACr0QI;E0C8VF;IAiCQ,oBAAY;E3C28PpB;AACF;;ACj0QI;E0CoVF;IAoCQ,oBAAY;E3C88PpB;AACF;;A2Cn/PE;EASM,iBAAY;A3C8+PpB;;ACz3QE;E0CkYA;IAYQ,iBAAY;E3Cg/PpB;AACF;;AC33QE;E0C8XA;IAeQ,iBAAY;E3Cm/PpB;AACF;;AC73QE;E0C0XA;IAkBQ,iBAAY;E3Cs/PpB;AACF;;AC/3QE;E0CsXA;IAqBQ,iBAAY;E3Cy/PpB;AACF;;ACj4QE;E0CkXA;IAwBQ,iBAAY;E3C4/PpB;AACF;;ACl4QI;E0C6WF;IA2BQ,iBAAY;E3C+/PpB;AACF;;AC93QI;E0CmWF;IA8BQ,iBAAY;E3CkgQpB;AACF;;AC/3QI;E0C8VF;IAiCQ,iBAAY;E3CqgQpB;AACF;;AC33QI;E0CoVF;IAoCQ,iBAAY;E3CwgQpB;AACF;;A4C9/QA;EACE,oBAAoB;EACpB,cAAc;EACd,aAAa;EACb,YAAY;EACZ,cAAc;EACd,+BAAuB;EAAvB,4BAAuB;EAAvB,uBAAuB;A5CigRzB;;A4CvgRA;EASI,qBAA+B;EAC/B,sBAAgC;EAChC,oBAA8B;A5CkgRlC;;A4C7gRA;EAaM,uBAAiC;A5CogRvC;;A4CjhRA;EAeM,sBAjBgB;A5CuhRtB;;A4CrhRA;EAiBI,oBAAoB;A5CwgRxB;;A4CzhRA;EAmBI,gBArBkB;A5C+hRtB;;A4C7hRA;EAqBI,sBAAsB;A5C4gR1B;;A4CjiRA;EAuBM,gCAAgC;A5C8gRtC;;ACl9QE;E2CnFF;IA2BM,aAAa;E5C+gRjB;E4C1iRF;IA8BQ,UAAU;IACV,eAAuB;E5C+gR7B;E4C9iRF;IA8BQ,UAAU;IACV,gBAAuB;E5CmhR7B;E4CljRF;IA8BQ,UAAU;IACV,UAAuB;E5CuhR7B;E4CtjRF;IA8BQ,UAAU;IACV,gBAAuB;E5C2hR7B;E4C1jRF;IA8BQ,UAAU;IACV,gBAAuB;E5C+hR7B;E4C9jRF;IA8BQ,UAAU;IACV,UAAuB;E5CmiR7B;E4ClkRF;IA8BQ,UAAU;IACV,gBAAuB;E5CuiR7B;E4CtkRF;IA8BQ,UAAU;IACV,gBAAuB;E5C2iR7B;E4C1kRF;IA8BQ,UAAU;IACV,UAAuB;E5C+iR7B;E4C9kRF;IA8BQ,UAAU;IACV,gBAAuB;E5CmjR7B;E4CllRF;IA8BQ,UAAU;IACV,gBAAuB;E5CujR7B;E4CtlRF;IA8BQ,UAAU;IACV,WAAuB;E5C2jR7B;AACF;;A6C7lRA,kBAAA;ACEE;EACE,uBAAwB;A9C+lR5B;;A8C9lRE;EAGI,yBAA0C;A9C+lRhD;;A8C9lRE;EACE,kCAAmC;A9CimRvC;;A8CxmRE;EACE,yBAAwB;A9C2mR5B;;A8C1mRE;EAGI,uBAA0C;A9C2mRhD;;A8C1mRE;EACE,oCAAmC;A9C6mRvC;;A8CpnRE;EACE,4BAAwB;A9CunR5B;;A8CtnRE;EAGI,yBAA0C;A9CunRhD;;A8CtnRE;EACE,uCAAmC;A9CynRvC;;A8ChoRE;EACE,yBAAwB;A9CmoR5B;;A8CloRE;EAGI,yBAA0C;A9CmoRhD;;A8CloRE;EACE,oCAAmC;A9CqoRvC;;A8C5oRE;EACE,yBAAwB;A9C+oR5B;;A8C9oRE;
EAGI,yBAA0C;A9C+oRhD;;A8C9oRE;EACE,oCAAmC;A9CipRvC;;A8C5oRI;EACE,yBAA8B;A9C+oRpC;;A8C9oRI;EAGI,yBAAgD;A9C+oRxD;;A8C9oRI;EACE,oCAAyC;A9CipR/C;;A8C/oRI;EACE,yBAA6B;A9CkpRnC;;A8CjpRI;EAGI,yBAAgD;A9CkpRxD;;A8CjpRI;EACE,oCAAwC;A9CopR9C;;A8ChrRE;EACE,yBAAwB;A9CmrR5B;;A8ClrRE;EAGI,yBAA0C;A9CmrRhD;;A8ClrRE;EACE,oCAAmC;A9CqrRvC;;A8ChrRI;EACE,yBAA8B;A9CmrRpC;;A8ClrRI;EAGI,yBAAgD;A9CmrRxD;;A8ClrRI;EACE,oCAAyC;A9CqrR/C;;A8CnrRI;EACE,yBAA6B;A9CsrRnC;;A8CrrRI;EAGI,yBAAgD;A9CsrRxD;;A8CrrRI;EACE,oCAAwC;A9CwrR9C;;A8CptRE;EACE,yBAAwB;A9CutR5B;;A8CttRE;EAGI,yBAA0C;A9CutRhD;;A8CttRE;EACE,oCAAmC;A9CytRvC;;A8CptRI;EACE,yBAA8B;A9CutRpC;;A8CttRI;EAGI,yBAAgD;A9CutRxD;;A8CttRI;EACE,oCAAyC;A9CytR/C;;A8CvtRI;EACE,yBAA6B;A9C0tRnC;;A8CztRI;EAGI,yBAAgD;A9C0tRxD;;A8CztRI;EACE,oCAAwC;A9C4tR9C;;A8CxvRE;EACE,yBAAwB;A9C2vR5B;;A8C1vRE;EAGI,yBAA0C;A9C2vRhD;;A8C1vRE;EACE,oCAAmC;A9C6vRvC;;A8CxvRI;EACE,yBAA8B;A9C2vRpC;;A8C1vRI;EAGI,yBAAgD;A9C2vRxD;;A8C1vRI;EACE,oCAAyC;A9C6vR/C;;A8C3vRI;EACE,yBAA6B;A9C8vRnC;;A8C7vRI;EAGI,yBAAgD;A9C8vRxD;;A8C7vRI;EACE,oCAAwC;A9CgwR9C;;A8C5xRE;EACE,yBAAwB;A9C+xR5B;;A8C9xRE;EAGI,yBAA0C;A9C+xRhD;;A8C9xRE;EACE,oCAAmC;A9CiyRvC;;A8C5xRI;EACE,yBAA8B;A9C+xRpC;;A8C9xRI;EAGI,yBAAgD;A9C+xRxD;;A8C9xRI;EACE,oCAAyC;A9CiyR/C;;A8C/xRI;EACE,yBAA6B;A9CkyRnC;;A8CjyRI;EAGI,yBAAgD;A9CkyRxD;;A8CjyRI;EACE,oCAAwC;A9CoyR9C;;A8Ch0RE;EACE,yBAAwB;A9Cm0R5B;;A8Cl0RE;EAGI,yBAA0C;A9Cm0RhD;;A8Cl0RE;EACE,oCAAmC;A9Cq0RvC;;A8Ch0RI;EACE,yBAA8B;A9Cm0RpC;;A8Cl0RI;EAGI,yBAAgD;A9Cm0RxD;;A8Cl0RI;EACE,oCAAyC;A9Cq0R/C;;A8Cn0RI;EACE,yBAA6B;A9Cs0RnC;;A8Cr0RI;EAGI,yBAAgD;A9Cs0RxD;;A8Cr0RI;EACE,oCAAwC;A9Cw0R9C;;A8Cr0RE;EACE,yBAAwB;A9Cw0R5B;;A8Cv0RE;EACE,oCAAmC;A9C00RvC;;A8C70RE;EACE,yBAAwB;A9Cg1R5B;;A8C/0RE;EACE,oCAAmC;A9Ck1RvC;;A8Cr1RE;EACE,yBAAwB;A9Cw1R5B;;A8Cv1RE;EACE,oCAAmC;A9C01RvC;;A8C71RE;EACE,yBAAwB;A9Cg2R5B;;A8C/1RE;EACE,oCAAmC;A9Ck2RvC;;A8Cr2RE;EACE,yBAAwB;A9Cw2R5B;;A8Cv2RE;EACE,oCAAmC;A9C02RvC;;A8C72RE;EACE,yBAAwB;A9Cg3R5B;;A8C/2RE;EACE,oCAAmC;A9Ck3RvC;;A8Cr3RE;EACE,yBAAwB;A9Cw3R5B;;A8Cv3RE;EACE,oCAAmC;A9C03RvC
;;A8C73RE;EACE,4BAAwB;A9Cg4R5B;;A8C/3RE;EACE,uCAAmC;A9Ck4RvC;;A8Cr4RE;EACE,yBAAwB;A9Cw4R5B;;A8Cv4RE;EACE,oCAAmC;A9C04RvC;;A+C56RE;EACE,8BAAiC;A/C+6RrC;;A+Ch7RE;EACE,sCAAiC;A/Cm7RrC;;A+Cp7RE;EACE,iCAAiC;A/Cu7RrC;;A+Cx7RE;EACE,yCAAiC;A/C27RrC;;A+Cv7RE;EACE,4BAA4B;A/C07RhC;;A+C37RE;EACE,0BAA4B;A/C87RhC;;A+C/7RE;EACE,kCAA4B;A/Ck8RhC;;A+C97RE;EACE,sCAAkC;A/Ci8RtC;;A+Cl8RE;EACE,oCAAkC;A/Cq8RtC;;A+Ct8RE;EACE,kCAAkC;A/Cy8RtC;;A+C18RE;EACE,yCAAkC;A/C68RtC;;A+C98RE;EACE,wCAAkC;A/Ci9RtC;;A+Cl9RE;EACE,wCAAkC;A/Cq9RtC;;A+Ct9RE;EACE,iCAAkC;A/Cy9RtC;;A+C19RE;EACE,+BAAkC;A/C69RtC;;A+C99RE;EACE,gCAAkC;A/Ci+RtC;;A+Cl+RE;EACE,iCAAkC;A/Cq+RtC;;A+Cj+RE;EACE,oCAAgC;A/Co+RpC;;A+Cr+RE;EACE,kCAAgC;A/Cw+RpC;;A+Cz+RE;EACE,gCAAgC;A/C4+RpC;;A+C7+RE;EACE,uCAAgC;A/Cg/RpC;;A+Cj/RE;EACE,sCAAgC;A/Co/RpC;;A+Cr/RE;EACE,sCAAgC;A/Cw/RpC;;A+Cz/RE;EACE,iCAAgC;A/C4/RpC;;A+C7/RE;EACE,+BAAgC;A/CggSpC;;A+CjgSE;EACE,6BAAgC;A/CogSpC;;A+CrgSE;EACE,kCAAgC;A/CwgSpC;;A+CpgSE;EACE,+BAA8B;A/CugSlC;;A+CxgSE;EACE,kCAA8B;A/C2gSlC;;A+C5gSE;EACE,gCAA8B;A/C+gSlC;;A+ChhSE;EACE,8BAA8B;A/CmhSlC;;A+CphSE;EACE,gCAA8B;A/CuhSlC;;A+CxhSE;EACE,6BAA8B;A/C2hSlC;;A+C5hSE;EACE,2BAA8B;A/C+hSlC;;A+ChiSE;EACE,kCAA8B;A/CmiSlC;;A+CpiSE;EACE,gCAA8B;A/CuiSlC;;A+CniSE;EACE,2BAA6B;A/CsiSjC;;A+CviSE;EACE,iCAA6B;A/C0iSjC;;A+C3iSE;EACE,+BAA6B;A/C8iSjC;;A+C/iSE;EACE,6BAA6B;A/CkjSjC;;A+CnjSE;EACE,+BAA6B;A/CsjSjC;;A+CvjSE;EACE,8BAA6B;A/C0jSjC;;A+CrjSI;EACE,uBAAqC;A/CwjS3C;;A+CzjSI;EACE,uBAAqC;A/C4jS3C;;A+C7jSI;EACE,uBAAqC;A/CgkS3C;;A+CjkSI;EACE,uBAAqC;A/CokS3C;;A+CrkSI;EACE,uBAAqC;A/CwkS3C;;A+CzkSI;EACE,uBAAqC;A/C4kS3C;;A+C7kSI;EACE,yBAAqC;A/CglS3C;;A+CjlSI;EACE,yBAAqC;A/ColS3C;;A+CrlSI;EACE,yBAAqC;A/CwlS3C;;A+CzlSI;EACE,yBAAqC;A/C4lS3C;;A+C7lSI;EACE,yBAAqC;A/CgmS3C;;A+CjmSI;EACE,yBAAqC;A/ComS3C;;ACnoSE;EACE,WAAW;EACX,YAAY;EACZ,cAAc;ADsoSlB;;AgDzoSA;EACE,sBAAsB;AhD4oSxB;;AgD1oSA;EACE,uBAAuB;AhD6oSzB;;AiDppSA;EACE,2BAA2B;AjDupS7B;;AiDrpSA;EACE,2BAA2B;AjDwpS7B;;AiDtpSA;EACE,0BAA0B;AjDypS5B;;AkDhqSA;EACE,2BAA2B;AlDmqS7B;;AmDjqSA;EACE,6BAA6B;AnDoqS/B;;Ao
DxqSA;EACE,oBAAoB;ApD2qStB;;AoDzqSA;EACE,qBAAqB;ApD4qSvB;;AoDjqSI;EACE,oBAA+B;ApDoqSrC;;AoDjqSM;EACE,wBAA8C;ApDoqStD;;AoDrqSM;EACE,0BAA8C;ApDwqStD;;AoDzqSM;EACE,2BAA8C;ApD4qStD;;AoD7qSM;EACE,yBAA8C;ApDgrStD;;AoD7qSM;EACE,yBAAyC;EACzC,0BAA2C;ApDgrSnD;;AoD7qSM;EACE,wBAAuC;EACvC,2BAA6C;ApDgrSrD;;AoD/rSI;EACE,0BAA+B;ApDksSrC;;AoD/rSM;EACE,8BAA8C;ApDksStD;;AoDnsSM;EACE,gCAA8C;ApDssStD;;AoDvsSM;EACE,iCAA8C;ApD0sStD;;AoD3sSM;EACE,+BAA8C;ApD8sStD;;AoD3sSM;EACE,+BAAyC;EACzC,gCAA2C;ApD8sSnD;;AoD3sSM;EACE,8BAAuC;EACvC,iCAA6C;ApD8sSrD;;AoD7tSI;EACE,yBAA+B;ApDguSrC;;AoD7tSM;EACE,6BAA8C;ApDguStD;;AoDjuSM;EACE,+BAA8C;ApDouStD;;AoDruSM;EACE,gCAA8C;ApDwuStD;;AoDzuSM;EACE,8BAA8C;ApD4uStD;;AoDzuSM;EACE,8BAAyC;EACzC,+BAA2C;ApD4uSnD;;AoDzuSM;EACE,6BAAuC;EACvC,gCAA6C;ApD4uSrD;;AoD3vSI;EACE,0BAA+B;ApD8vSrC;;AoD3vSM;EACE,8BAA8C;ApD8vStD;;AoD/vSM;EACE,gCAA8C;ApDkwStD;;AoDnwSM;EACE,iCAA8C;ApDswStD;;AoDvwSM;EACE,+BAA8C;ApD0wStD;;AoDvwSM;EACE,+BAAyC;EACzC,gCAA2C;ApD0wSnD;;AoDvwSM;EACE,8BAAuC;EACvC,iCAA6C;ApD0wSrD;;AoDzxSI;EACE,uBAA+B;ApD4xSrC;;AoDzxSM;EACE,2BAA8C;ApD4xStD;;AoD7xSM;EACE,6BAA8C;ApDgyStD;;AoDjySM;EACE,8BAA8C;ApDoyStD;;AoDrySM;EACE,4BAA8C;ApDwyStD;;AoDrySM;EACE,4BAAyC;EACzC,6BAA2C;ApDwySnD;;AoDrySM;EACE,2BAAuC;EACvC,8BAA6C;ApDwySrD;;AoDvzSI;EACE,yBAA+B;ApD0zSrC;;AoDvzSM;EACE,6BAA8C;ApD0zStD;;AoD3zSM;EACE,+BAA8C;ApD8zStD;;AoD/zSM;EACE,gCAA8C;ApDk0StD;;AoDn0SM;EACE,8BAA8C;ApDs0StD;;AoDn0SM;EACE,8BAAyC;EACzC,+BAA2C;ApDs0SnD;;AoDn0SM;EACE,6BAAuC;EACvC,gCAA6C;ApDs0SrD;;AoDr1SI;EACE,uBAA+B;ApDw1SrC;;AoDr1SM;EACE,2BAA8C;ApDw1StD;;AoDz1SM;EACE,6BAA8C;ApD41StD;;AoD71SM;EACE,8BAA8C;ApDg2StD;;AoDj2SM;EACE,4BAA8C;ApDo2StD;;AoDj2SM;EACE,4BAAyC;EACzC,6BAA2C;ApDo2SnD;;AoDj2SM;EACE,2BAAuC;EACvC,8BAA6C;ApDo2SrD;;AoDn3SI;EACE,qBAA+B;ApDs3SrC;;AoDn3SM;EACE,yBAA8C;ApDs3StD;;AoDv3SM;EACE,2BAA8C;ApD03StD;;AoD33SM;EACE,4BAA8C;ApD83StD;;AoD/3SM;EACE,0BAA8C;ApDk4StD;;AoD/3SM;EACE,0BAAyC;EACzC,2BAA2C;ApDk4SnD;;AoD/3SM;EACE,yBAAuC;EACvC,4BAA6C;ApDk4SrD;;AoDj5SI;EACE,2BAA+B;ApDo5SrC;;AoDj5SM;EACE,+BAA8C;ApDo
5StD;;AoDr5SM;EACE,iCAA8C;ApDw5StD;;AoDz5SM;EACE,kCAA8C;ApD45StD;;AoD75SM;EACE,gCAA8C;ApDg6StD;;AoD75SM;EACE,gCAAyC;EACzC,iCAA2C;ApDg6SnD;;AoD75SM;EACE,+BAAuC;EACvC,kCAA6C;ApDg6SrD;;AoD/6SI;EACE,0BAA+B;ApDk7SrC;;AoD/6SM;EACE,8BAA8C;ApDk7StD;;AoDn7SM;EACE,gCAA8C;ApDs7StD;;AoDv7SM;EACE,iCAA8C;ApD07StD;;AoD37SM;EACE,+BAA8C;ApD87StD;;AoD37SM;EACE,+BAAyC;EACzC,gCAA2C;ApD87SnD;;AoD37SM;EACE,8BAAuC;EACvC,iCAA6C;ApD87SrD;;AoD78SI;EACE,2BAA+B;ApDg9SrC;;AoD78SM;EACE,+BAA8C;ApDg9StD;;AoDj9SM;EACE,iCAA8C;ApDo9StD;;AoDr9SM;EACE,kCAA8C;ApDw9StD;;AoDz9SM;EACE,gCAA8C;ApD49StD;;AoDz9SM;EACE,gCAAyC;EACzC,iCAA2C;ApD49SnD;;AoDz9SM;EACE,+BAAuC;EACvC,kCAA6C;ApD49SrD;;AoD3+SI;EACE,wBAA+B;ApD8+SrC;;AoD3+SM;EACE,4BAA8C;ApD8+StD;;AoD/+SM;EACE,8BAA8C;ApDk/StD;;AoDn/SM;EACE,+BAA8C;ApDs/StD;;AoDv/SM;EACE,6BAA8C;ApD0/StD;;AoDv/SM;EACE,6BAAyC;EACzC,8BAA2C;ApD0/SnD;;AoDv/SM;EACE,4BAAuC;EACvC,+BAA6C;ApD0/SrD;;AoDzgTI;EACE,0BAA+B;ApD4gTrC;;AoDzgTM;EACE,8BAA8C;ApD4gTtD;;AoD7gTM;EACE,gCAA8C;ApDghTtD;;AoDjhTM;EACE,iCAA8C;ApDohTtD;;AoDrhTM;EACE,+BAA8C;ApDwhTtD;;AoDrhTM;EACE,+BAAyC;EACzC,gCAA2C;ApDwhTnD;;AoDrhTM;EACE,8BAAuC;EACvC,iCAA6C;ApDwhTrD;;AoDviTI;EACE,wBAA+B;ApD0iTrC;;AoDviTM;EACE,4BAA8C;ApD0iTtD;;AoD3iTM;EACE,8BAA8C;ApD8iTtD;;AoD/iTM;EACE,+BAA8C;ApDkjTtD;;AoDnjTM;EACE,6BAA8C;ApDsjTtD;;AoDnjTM;EACE,6BAAyC;EACzC,8BAA2C;ApDsjTnD;;AoDnjTM;EACE,4BAAuC;EACvC,+BAA6C;ApDsjTrD;;AqDjlTI;EACE,0BAA2B;ArDolTjC;;AqDrlTI;EACE,4BAA2B;ArDwlTjC;;AqDzlTI;EACE,0BAA2B;ArD4lTjC;;AqD7lTI;EACE,4BAA2B;ArDgmTjC;;AqDjmTI;EACE,6BAA2B;ArDomTjC;;AqDrmTI;EACE,0BAA2B;ArDwmTjC;;AqDzmTI;EACE,6BAA2B;ArD4mTjC;;AC/hTE;EoD9EE;IACE,0BAA2B;ErDinT/B;EqDlnTE;IACE,4BAA2B;ErDonT/B;EqDrnTE;IACE,0BAA2B;ErDunT/B;EqDxnTE;IACE,4BAA2B;ErD0nT/B;EqD3nTE;IACE,6BAA2B;ErD6nT/B;EqD9nTE;IACE,0BAA2B;ErDgoT/B;EqDjoTE;IACE,6BAA2B;ErDmoT/B;AACF;;ACnjTE;EoDlFE;IACE,0BAA2B;ErDyoT/B;EqD1oTE;IACE,4BAA2B;ErD4oT/B;EqD7oTE;IACE,0BAA2B;ErD+oT/B;EqDhpTE;IACE,4BAA2B;ErDkpT/B;EqDnpTE;IACE,6BAA2B;ErDqpT/B;EqDtpTE;IACE,0BAA2B;ErDwpT/B;EqDzpTE;IACE,6BAA2B;ErD2pT/B;AACF
;;ACnkTE;EoD1FE;IACE,0BAA2B;ErDiqT/B;EqDlqTE;IACE,4BAA2B;ErDoqT/B;EqDrqTE;IACE,0BAA2B;ErDuqT/B;EqDxqTE;IACE,4BAA2B;ErD0qT/B;EqD3qTE;IACE,6BAA2B;ErD6qT/B;EqD9qTE;IACE,0BAA2B;ErDgrT/B;EqDjrTE;IACE,6BAA2B;ErDmrT/B;AACF;;ACvlTE;EoD9FE;IACE,0BAA2B;ErDyrT/B;EqD1rTE;IACE,4BAA2B;ErD4rT/B;EqD7rTE;IACE,0BAA2B;ErD+rT/B;EqDhsTE;IACE,4BAA2B;ErDksT/B;EqDnsTE;IACE,6BAA2B;ErDqsT/B;EqDtsTE;IACE,0BAA2B;ErDwsT/B;EqDzsTE;IACE,6BAA2B;ErD2sT/B;AACF;;AChmTI;EoD7GA;IACE,0BAA2B;ErDitT/B;EqDltTE;IACE,4BAA2B;ErDotT/B;EqDrtTE;IACE,0BAA2B;ErDutT/B;EqDxtTE;IACE,4BAA2B;ErD0tT/B;EqD3tTE;IACE,6BAA2B;ErD6tT/B;EqD9tTE;IACE,0BAA2B;ErDguT/B;EqDjuTE;IACE,6BAA2B;ErDmuT/B;AACF;;ACzmTI;EoD5HA;IACE,0BAA2B;ErDyuT/B;EqD1uTE;IACE,4BAA2B;ErD4uT/B;EqD7uTE;IACE,0BAA2B;ErD+uT/B;EqDhvTE;IACE,4BAA2B;ErDkvT/B;EqDnvTE;IACE,6BAA2B;ErDqvT/B;EqDtvTE;IACE,0BAA2B;ErDwvT/B;EqDzvTE;IACE,6BAA2B;ErD2vT/B;AACF;;AqDnuTE;EACE,6BAAqC;ArDsuTzC;;AqDvuTE;EACE,8BAAqC;ArD0uTzC;;AqD3uTE;EACE,2BAAqC;ArD8uTzC;;AqD/uTE;EACE,4BAAqC;ArDkvTzC;;AC/rTE;EoD/CE;IACE,6BAAqC;ErDkvTzC;AACF;;ACjsTE;EoDhDE;IACE,6BAAqC;ErDqvTzC;AACF;;ACnsTE;EoDjDE;IACE,6BAAqC;ErDwvTzC;AACF;;ACrsTE;EoDlDE;IACE,6BAAqC;ErD2vTzC;AACF;;ACvsTE;EoDnDE;IACE,6BAAqC;ErD8vTzC;AACF;;ACxsTI;EoDrDA;IACE,6BAAqC;ErDiwTzC;AACF;;ACpsTI;EoD5DA;IACE,6BAAqC;ErDowTzC;AACF;;ACrsTI;EoD9DA;IACE,6BAAqC;ErDuwTzC;AACF;;ACjsTI;EoDrEA;IACE,6BAAqC;ErD0wTzC;AACF;;ACrvTE;EoD/CE;IACE,8BAAqC;ErDwyTzC;AACF;;ACvvTE;EoDhDE;IACE,8BAAqC;ErD2yTzC;AACF;;ACzvTE;EoDjDE;IACE,8BAAqC;ErD8yTzC;AACF;;AC3vTE;EoDlDE;IACE,8BAAqC;ErDizTzC;AACF;;AC7vTE;EoDnDE;IACE,8BAAqC;ErDozTzC;AACF;;AC9vTI;EoDrDA;IACE,8BAAqC;ErDuzTzC;AACF;;AC1vTI;EoD5DA;IACE,8BAAqC;ErD0zTzC;AACF;;AC3vTI;EoD9DA;IACE,8BAAqC;ErD6zTzC;AACF;;ACvvTI;EoDrEA;IACE,8BAAqC;ErDg0TzC;AACF;;AC3yTE;EoD/CE;IACE,2BAAqC;ErD81TzC;AACF;;AC7yTE;EoDhDE;IACE,2BAAqC;ErDi2TzC;AACF;;AC/yTE;EoDjDE;IACE,2BAAqC;ErDo2TzC;AACF;;ACjzTE;EoDlDE;IACE,2BAAqC;ErDu2TzC;AACF;;ACnzTE;EoDnDE;IACE,2BAAqC;ErD02TzC;AACF;;ACpzTI;EoDrDA;IACE,2BAAqC;ErD62TzC;AACF;;AChzTI;EoD5DA;IACE,2BAAqC;ErDg3TzC
;AACF;;ACjzTI;EoD9DA;IACE,2BAAqC;ErDm3TzC;AACF;;AC7yTI;EoDrEA;IACE,2BAAqC;ErDs3TzC;AACF;;ACj2TE;EoD/CE;IACE,4BAAqC;ErDo5TzC;AACF;;ACn2TE;EoDhDE;IACE,4BAAqC;ErDu5TzC;AACF;;ACr2TE;EoDjDE;IACE,4BAAqC;ErD05TzC;AACF;;ACv2TE;EoDlDE;IACE,4BAAqC;ErD65TzC;AACF;;ACz2TE;EoDnDE;IACE,4BAAqC;ErDg6TzC;AACF;;AC12TI;EoDrDA;IACE,4BAAqC;ErDm6TzC;AACF;;ACt2TI;EoD5DA;IACE,4BAAqC;ErDs6TzC;AACF;;ACv2TI;EoD9DA;IACE,4BAAqC;ErDy6TzC;AACF;;ACn2TI;EoDrEA;IACE,4BAAqC;ErD46TzC;AACF;;AqD36TA;EACE,qCAAqC;ArD86TvC;;AqD56TA;EACE,oCAAoC;ArD+6TtC;;AqD76TA;EACE,oCAAoC;ArDg7TtC;;AqD96TA;EACE,6BAA6B;ArDi7T/B;;AqD/6TA;EACE,2BAAqC;ArDk7TvC;;AqDj7TA;EACE,2BAAsC;ArDo7TxC;;AqDn7TA;EACE,2BAAsC;ArDs7TxC;;AqDr7TA;EACE,2BAAwC;ArDw7T1C;;AqDv7TA;EACE,2BAAoC;ArD07TtC;;AqDx7TA;EACE,+LAAuC;ArD27TzC;;AqDz7TA;EACE,+LAAyC;ArD47T3C;;AqD17TA;EACE,+LAA0C;ArD67T5C;;AqD37TA;EACE,iCAAyC;ArD87T3C;;AqD57TA;EACE,iCAAoC;ArD+7TtC;;AsD3hUE;EACE,yBAA+B;AtD8hUnC;;ACn9TE;EqDzEE;IACE,yBAA+B;EtDgiUnC;AACF;;ACr9TE;EqD1EE;IACE,yBAA+B;EtDmiUnC;AACF;;ACv9TE;EqD3EE;IACE,yBAA+B;EtDsiUnC;AACF;;ACz9TE;EqD5EE;IACE,yBAA+B;EtDyiUnC;AACF;;AC39TE;EqD7EE;IACE,yBAA+B;EtD4iUnC;AACF;;AC59TI;EqD/EA;IACE,yBAA+B;EtD+iUnC;AACF;;ACx9TI;EqDtFA;IACE,yBAA+B;EtDkjUnC;AACF;;ACz9TI;EqDxFA;IACE,yBAA+B;EtDqjUnC;AACF;;ACr9TI;EqD/FA;IACE,yBAA+B;EtDwjUnC;AACF;;AsDrlUE;EACE,wBAA+B;AtDwlUnC;;AC7gUE;EqDzEE;IACE,wBAA+B;EtD0lUnC;AACF;;AC/gUE;EqD1EE;IACE,wBAA+B;EtD6lUnC;AACF;;ACjhUE;EqD3EE;IACE,wBAA+B;EtDgmUnC;AACF;;ACnhUE;EqD5EE;IACE,wBAA+B;EtDmmUnC;AACF;;ACrhUE;EqD7EE;IACE,wBAA+B;EtDsmUnC;AACF;;ACthUI;EqD/EA;IACE,wBAA+B;EtDymUnC;AACF;;AClhUI;EqDtFA;IACE,wBAA+B;EtD4mUnC;AACF;;ACnhUI;EqDxFA;IACE,wBAA+B;EtD+mUnC;AACF;;AC/gUI;EqD/FA;IACE,wBAA+B;EtDknUnC;AACF;;AsD/oUE;EACE,0BAA+B;AtDkpUnC;;ACvkUE;EqDzEE;IACE,0BAA+B;EtDopUnC;AACF;;ACzkUE;EqD1EE;IACE,0BAA+B;EtDupUnC;AACF;;AC3kUE;EqD3EE;IACE,0BAA+B;EtD0pUnC;AACF;;AC7kUE;EqD5EE;IACE,0BAA+B;EtD6pUnC;AACF;;AC/kUE;EqD7EE;IACE,0BAA+B;EtDgqUnC;AACF;;AChlUI;EqD/EA;IACE,0BAA+B;EtDmqUnC;AACF;;AC5kUI;EqDtFA;IACE,0BAA+B;EtDsqUnC;AACF;;AC7kUI;
EqDxFA;IACE,0BAA+B;EtDyqUnC;AACF;;ACzkUI;EqD/FA;IACE,0BAA+B;EtD4qUnC;AACF;;AsDzsUE;EACE,gCAA+B;AtD4sUnC;;ACjoUE;EqDzEE;IACE,gCAA+B;EtD8sUnC;AACF;;ACnoUE;EqD1EE;IACE,gCAA+B;EtDitUnC;AACF;;ACroUE;EqD3EE;IACE,gCAA+B;EtDotUnC;AACF;;ACvoUE;EqD5EE;IACE,gCAA+B;EtDutUnC;AACF;;ACzoUE;EqD7EE;IACE,gCAA+B;EtD0tUnC;AACF;;AC1oUI;EqD/EA;IACE,gCAA+B;EtD6tUnC;AACF;;ACtoUI;EqDtFA;IACE,gCAA+B;EtDguUnC;AACF;;ACvoUI;EqDxFA;IACE,gCAA+B;EtDmuUnC;AACF;;ACnoUI;EqD/FA;IACE,gCAA+B;EtDsuUnC;AACF;;AsDnwUE;EACE,+BAA+B;AtDswUnC;;AC3rUE;EqDzEE;IACE,+BAA+B;EtDwwUnC;AACF;;AC7rUE;EqD1EE;IACE,+BAA+B;EtD2wUnC;AACF;;AC/rUE;EqD3EE;IACE,+BAA+B;EtD8wUnC;AACF;;ACjsUE;EqD5EE;IACE,+BAA+B;EtDixUnC;AACF;;ACnsUE;EqD7EE;IACE,+BAA+B;EtDoxUnC;AACF;;ACpsUI;EqD/EA;IACE,+BAA+B;EtDuxUnC;AACF;;AChsUI;EqDtFA;IACE,+BAA+B;EtD0xUnC;AACF;;ACjsUI;EqDxFA;IACE,+BAA+B;EtD6xUnC;AACF;;AC7rUI;EqD/FA;IACE,+BAA+B;EtDgyUnC;AACF;;AsD/xUA;EACE,wBAAwB;AtDkyU1B;;AsDhyUA;EACE,uBAAuB;EACvB,iCAAiC;EACjC,yBAAyB;EACzB,2BAA2B;EAC3B,qBAAqB;EACrB,6BAA6B;EAC7B,8BAA8B;EAC9B,wBAAwB;AtDmyU1B;;AChwUE;EqDhCA;IACE,wBAAwB;EtDoyU1B;AACF;;AClwUE;EqDhCA;IACE,wBAAwB;EtDsyU1B;AACF;;ACpwUE;EqDhCA;IACE,wBAAwB;EtDwyU1B;AACF;;ACtwUE;EqDhCA;IACE,wBAAwB;EtD0yU1B;AACF;;ACxwUE;EqDhCA;IACE,wBAAwB;EtD4yU1B;AACF;;ACzwUI;EqDjCF;IACE,wBAAwB;EtD8yU1B;AACF;;ACrwUI;EqDvCF;IACE,wBAAwB;EtDgzU1B;AACF;;ACtwUI;EqDxCF;IACE,wBAAwB;EtDkzU1B;AACF;;AClwUI;EqD9CF;IACE,wBAAwB;EtDozU1B;AACF;;AsDnzUA;EACE,6BAA6B;AtDszU/B;;AC1zUE;EqDOA;IACE,6BAA6B;EtDuzU/B;AACF;;AC5zUE;EqDOA;IACE,6BAA6B;EtDyzU/B;AACF;;AC9zUE;EqDOA;IACE,6BAA6B;EtD2zU/B;AACF;;ACh0UE;EqDOA;IACE,6BAA6B;EtD6zU/B;AACF;;ACl0UE;EqDOA;IACE,6BAA6B;EtD+zU/B;AACF;;ACn0UI;EqDMF;IACE,6BAA6B;EtDi0U/B;AACF;;AC/zUI;EqDAF;IACE,6BAA6B;EtDm0U/B;AACF;;ACh0UI;EqDDF;IACE,6BAA6B;EtDq0U/B;AACF;;AC5zUI;EqDPF;IACE,6BAA6B;EtDu0U/B;AACF;;AuDj8UA,iBAAA;ACQA;EACE,oBAAoB;EACpB,aAAa;EACb,sBAAsB;EACtB,8BAA8B;AxD67UhC;;AwDj8UA;EAMI,gBAAgB;AxD+7UpB;;AwDr8UA;EASM,mBAAmB;AxDg8UzB;;AwDz8UA;EAeM,uBtDRyB;EsDSzB,ctDtBuB;AFo9U7B;;AwD98UA;;EAmBQ,cAAc;AxDg8UtB;;AwDn9
UA;EAqBQ,ctD3BqB;AF69U7B;;AwDv9UA;EAuBQ,4BtD7BqB;AFi+U7B;;AwD39UA;;EA0BU,ctDhCmB;AFs+U7B;;AC34UE;EuDrFF;IA6BU,uBtDtBqB;EF89U7B;AACF;;AwDt+UA;;EAgCQ,4BtDtCqB;AFi/U7B;;AwD3+UA;;;EAqCU,yB7CgEuB;E6C/DvB,ctD5CmB;AFw/U7B;;AwDl/UA;EAyCU,ctD/CmB;EsDgDnB,YAAY;AxD68UtB;;AwDv/UA;EA4CY,UAAU;AxD+8UtB;;AwD3/UA;EA+CY,UAAU;AxDg9UtB;;AwD//UA;EAmDY,ctDzDiB;AFygV7B;;AwDngVA;EAqDc,uCtD3De;AF6gV7B;;AwDvgVA;EAyDc,yBtD/De;EsDgEf,qBtDhEe;EsDiEf,YtDpDiB;AFsgV/B;;AwD7gVA;EAiEU,4EAAyG;AxDg9UnH;;ACx8UE;EuDzEF;IAoEc,4EAAyG;ExDk9UrH;AACF;;AwDvhVA;EAeM,yBtDrBuB;EsDsBvB,YtDTyB;AFqhV/B;;AwD5hVA;;EAmBQ,cAAc;AxD8gVtB;;AwDjiVA;EAqBQ,YtDduB;AF8hV/B;;AwDriVA;EAuBQ,+BtDhBuB;AFkiV/B;;AwDziVA;;EA0BU,YtDnBqB;AFuiV/B;;ACz9UE;EuDrFF;IA6BU,yBtDnCmB;EFyjV3B;AACF;;AwDpjVA;;EAgCQ,+BtDzBuB;AFkjV/B;;AwDzjVA;;;EAqCU,uB7CgEuB;E6C/DvB,YtD/BqB;AFyjV/B;;AwDhkVA;EAyCU,YtDlCqB;EsDmCrB,YAAY;AxD2hVtB;;AwDrkVA;EA4CY,UAAU;AxD6hVtB;;AwDzkVA;EA+CY,UAAU;AxD8hVtB;;AwD7kVA;EAmDY,YtD5CmB;AF0kV/B;;AwDjlVA;EAqDc,uCtD3De;AF2lV7B;;AwDrlVA;EAyDc,uBtDlDiB;EsDmDjB,mBtDnDiB;EsDoDjB,ctDjEe;AFimV7B;;AwD3lVA;EAiEU,8EAAyG;AxD8hVnH;;ACthVE;EuDzEF;IAoEc,8EAAyG;ExDgiVrH;AACF;;AwDrmVA;EAeM,4BtDVwB;EsDWxB,yB7CwDe;AXkiVrB;;AwD1mVA;;EAmBQ,cAAc;AxD4lVtB;;AwD/mVA;EAqBQ,yB7CmDa;AX2iVrB;;AwDnnVA;EAuBQ,yB7CiDa;AX+iVrB;;AwDvnVA;;EA0BU,yB7C8CW;AXojVrB;;ACviVE;EuDrFF;IA6BU,4BtDxBoB;EF4nV5B;AACF;;AwDloVA;;EAgCQ,yB7CwCa;AX+jVrB;;AwDvoVA;;;EAqCU,yB7CgEuB;E6C/DvB,yB7CkCW;AXskVrB;;AwD9oVA;EAyCU,yB7C+BW;E6C9BX,YAAY;AxDymVtB;;AwDnpVA;EA4CY,UAAU;AxD2mVtB;;AwDvpVA;EA+CY,UAAU;AxD4mVtB;;AwD3pVA;EAmDY,yB7CqBS;AXulVrB;;AwD/pVA;EAqDc,uCtD3De;AFyqV7B;;AwDnqVA;EAyDc,oC7CeO;E6CdP,gC7CcO;E6CbP,iBtDtDgB;AFoqV9B;;AwDzqVA;EAiEU,iFAAyG;AxD4mVnH;;ACpmVE;EuDzEF;IAoEc,iFAAyG;ExD8mVrH;AACF;;AwDnrVA;EAeM,yBtDjBwB;EsDkBxB,W7C0DU;AX8mVhB;;AwDxrVA;;EAmBQ,cAAc;AxD0qVtB;;AwD7rVA;EAqBQ,W7CqDQ;AXunVhB;;AwDjsVA;EAuBQ,+B7CmDQ;AX2nVhB;;AwDrsVA;;EA0BU,W7CgDM;AXgoVhB;;ACrnVE;EuDrFF;IA6BU,yBtD/BoB;EFitV5B;AACF;;AwDhtVA;;EAgCQ,+B7C0CQ;AX2oVhB;;AwDrtVA;;;EAqCU,yB7CgEuB;E6C/DvB,W7CoCM;AXkpVhB;;AwD5tV
A;EAyCU,W7CiCM;E6ChCN,YAAY;AxDurVtB;;AwDjuVA;EA4CY,UAAU;AxDyrVtB;;AwDruVA;EA+CY,UAAU;AxD0rVtB;;AwDzuVA;EAmDY,W7CuBI;AXmqVhB;;AwD7uVA;EAqDc,uCtD3De;AFuvV7B;;AwDjvVA;EAyDc,sB7CiBE;E6ChBF,kB7CgBE;E6CfF,ctD7DgB;AFyvV9B;;AwDvvVA;EAiEU,gFAAyG;AxD0rVnH;;AClrVE;EuDzEF;IAoEc,gFAAyG;ExD4rVrH;AACF;;AwDjwVA;EAeM,yBtDH4B;EsDI5B,W7C0DU;AX4rVhB;;AwDtwVA;;EAmBQ,cAAc;AxDwvVtB;;AwD3wVA;EAqBQ,W7CqDQ;AXqsVhB;;AwD/wVA;EAuBQ,+B7CmDQ;AXysVhB;;AwDnxVA;;EA0BU,W7CgDM;AX8sVhB;;ACnsVE;EuDrFF;IA6BU,yBtDjBwB;EFixVhC;AACF;;AwD9xVA;;EAgCQ,+B7C0CQ;AXytVhB;;AwDnyVA;;;EAqCU,yB7CgEuB;E6C/DvB,W7CoCM;AXguVhB;;AwD1yVA;EAyCU,W7CiCM;E6ChCN,YAAY;AxDqwVtB;;AwD/yVA;EA4CY,UAAU;AxDuwVtB;;AwDnzVA;EA+CY,UAAU;AxDwwVtB;;AwDvzVA;EAmDY,W7CuBI;AXivVhB;;AwD3zVA;EAqDc,uCtD3De;AFq0V7B;;AwD/zVA;EAyDc,sB7CiBE;E6ChBF,kB7CgBE;E6CfF,ctD/CoB;AFyzVlC;;AwDr0VA;EAiEU,gFAAyG;AxDwwVnH;;AChwVE;EuDzEF;IAoEc,gFAAyG;ExD0wVrH;AACF;;AwD/0VA;EAeM,yBtDD4B;EsDE5B,W7C0DU;AX0wVhB;;AwDp1VA;;EAmBQ,cAAc;AxDs0VtB;;AwDz1VA;EAqBQ,W7CqDQ;AXmxVhB;;AwD71VA;EAuBQ,+B7CmDQ;AXuxVhB;;AwDj2VA;;EA0BU,W7CgDM;AX4xVhB;;ACjxVE;EuDrFF;IA6BU,yBtDfwB;EF61VhC;AACF;;AwD52VA;;EAgCQ,+B7C0CQ;AXuyVhB;;AwDj3VA;;;EAqCU,yB7CgEuB;E6C/DvB,W7CoCM;AX8yVhB;;AwDx3VA;EAyCU,W7CiCM;E6ChCN,YAAY;AxDm1VtB;;AwD73VA;EA4CY,UAAU;AxDq1VtB;;AwDj4VA;EA+CY,UAAU;AxDs1VtB;;AwDr4VA;EAmDY,W7CuBI;AX+zVhB;;AwDz4VA;EAqDc,uCtD3De;AFm5V7B;;AwD74VA;EAyDc,sB7CiBE;E6ChBF,kB7CgBE;E6CfF,ctD7CoB;AFq4VlC;;AwDn5VA;EAiEU,gFAAyG;AxDs1VnH;;AC90VE;EuDzEF;IAoEc,gFAAyG;ExDw1VrH;AACF;;AwD75VA;EAeM,yBtDF4B;EsDG5B,W7C0DU;AXw1VhB;;AwDl6VA;;EAmBQ,cAAc;AxDo5VtB;;AwDv6VA;EAqBQ,W7CqDQ;AXi2VhB;;AwD36VA;EAuBQ,+B7CmDQ;AXq2VhB;;AwD/6VA;;EA0BU,W7CgDM;AX02VhB;;AC/1VE;EuDrFF;IA6BU,yBtDhBwB;EF46VhC;AACF;;AwD17VA;;EAgCQ,+B7C0CQ;AXq3VhB;;AwD/7VA;;;EAqCU,yB7CgEuB;E6C/DvB,W7CoCM;AX43VhB;;AwDt8VA;EAyCU,W7CiCM;E6ChCN,YAAY;AxDi6VtB;;AwD38VA;EA4CY,UAAU;AxDm6VtB;;AwD/8VA;EA+CY,UAAU;AxDo6VtB;;AwDn9VA;EAmDY,W7CuBI;AX64VhB;;AwDv9VA;EAqDc,uCtD3De;AFi+V7B;;AwD39VA;EAyDc,sB7CiBE;E6ChBF,kB7CgBE;E6CfF,ctD9CoB;AFo9VlC;;AwDj+VA;EAiEU,gFAAyG;AxDo6VnH;
;AC55VE;EuDzEF;IAoEc,gFAAyG;ExDs6VrH;AACF;;AwD3+VA;EAeM,yBtDJ4B;EsDK5B,W7C0DU;AXs6VhB;;AwDh/VA;;EAmBQ,cAAc;AxDk+VtB;;AwDr/VA;EAqBQ,W7CqDQ;AX+6VhB;;AwDz/VA;EAuBQ,+B7CmDQ;AXm7VhB;;AwD7/VA;;EA0BU,W7CgDM;AXw7VhB;;AC76VE;EuDrFF;IA6BU,yBtDlBwB;EF4/VhC;AACF;;AwDxgWA;;EAgCQ,+B7C0CQ;AXm8VhB;;AwD7gWA;;;EAqCU,yB7CgEuB;E6C/DvB,W7CoCM;AX08VhB;;AwDphWA;EAyCU,W7CiCM;E6ChCN,YAAY;AxD++VtB;;AwDzhWA;EA4CY,UAAU;AxDi/VtB;;AwD7hWA;EA+CY,UAAU;AxDk/VtB;;AwDjiWA;EAmDY,W7CuBI;AX29VhB;;AwDriWA;EAqDc,uCtD3De;AF+iW7B;;AwDziWA;EAyDc,sB7CiBE;E6ChBF,kB7CgBE;E6CfF,ctDhDoB;AFoiWlC;;AwD/iWA;EAiEU,gFAAyG;AxDk/VnH;;AC1+VE;EuDzEF;IAoEc,gFAAyG;ExDo/VrH;AACF;;AwDzjWA;EAeM,yBtDL4B;EsDM5B,yB7CwDe;AXs/VrB;;AwD9jWA;;EAmBQ,cAAc;AxDgjWtB;;AwDnkWA;EAqBQ,yB7CmDa;AX+/VrB;;AwDvkWA;EAuBQ,yB7CiDa;AXmgWrB;;AwD3kWA;;EA0BU,yB7C8CW;AXwgWrB;;AC3/VE;EuDrFF;IA6BU,yBtDnBwB;EF2kWhC;AACF;;AwDtlWA;;EAgCQ,yB7CwCa;AXmhWrB;;AwD3lWA;;;EAqCU,yB7CgEuB;E6C/DvB,yB7CkCW;AX0hWrB;;AwDlmWA;EAyCU,yB7C+BW;E6C9BX,YAAY;AxD6jWtB;;AwDvmWA;EA4CY,UAAU;AxD+jWtB;;AwD3mWA;EA+CY,UAAU;AxDgkWtB;;AwD/mWA;EAmDY,yB7CqBS;AX2iWrB;;AwDnnWA;EAqDc,uCtD3De;AF6nW7B;;AwDvnWA;EAyDc,oC7CeO;E6CdP,gC7CcO;E6CbP,ctDjDoB;AFmnWlC;;AwD7nWA;EAiEU,gFAAyG;AxDgkWnH;;ACxjWE;EuDzEF;IAoEc,gFAAyG;ExDkkWrH;AACF;;AwDvoWA;EAeM,yBtDC2B;EsDA3B,W7C0DU;AXkkWhB;;AwD5oWA;;EAmBQ,cAAc;AxD8nWtB;;AwDjpWA;EAqBQ,W7CqDQ;AX2kWhB;;AwDrpWA;EAuBQ,+B7CmDQ;AX+kWhB;;AwDzpWA;;EA0BU,W7CgDM;AXolWhB;;ACzkWE;EuDrFF;IA6BU,yBtDbuB;EFmpW/B;AACF;;AwDpqWA;;EAgCQ,+B7C0CQ;AX+lWhB;;AwDzqWA;;;EAqCU,yB7CgEuB;E6C/DvB,W7CoCM;AXsmWhB;;AwDhrWA;EAyCU,W7CiCM;E6ChCN,YAAY;AxD2oWtB;;AwDrrWA;EA4CY,UAAU;AxD6oWtB;;AwDzrWA;EA+CY,UAAU;AxD8oWtB;;AwD7rWA;EAmDY,W7CuBI;AXunWhB;;AwDjsWA;EAqDc,uCtD3De;AF2sW7B;;AwDrsWA;EAyDc,sB7CiBE;E6ChBF,kB7CgBE;E6CfF,ctD3CmB;AF2rWjC;;AwD3sWA;EAiEU,gFAAyG;AxD8oWnH;;ACtoWE;EuDzEF;IAoEc,gFAAyG;ExDgpWrH;AACF;;AwDrtWA;EAwEM,eA/E0B;AxDguWhC;;AC5oWE;EuD7EF;IA4EQ,oBAlF8B;ExDouWpC;AACF;;AClpWE;EuD7EF;IAgFQ,qBArF8B;ExDyuWpC;AACF;;AwDruWA;EAqFM,mBAAmB;EACnB,aAAa;AxDopWnB;;AwD1uWA;EAwFQ,YAAY;EACZ,cAAc;AxDspWtB;;Aw
D/uWA;EA2FI,gBAAgB;AxDwpWpB;;AwDnvWA;EA6FI,iBAAiB;AxD0pWrB;;AwDtpWA;EAEE,gBAAgB;AxDwpWlB;;AwD1pWA;EAII,SAAS;EACT,gBAAgB;EAChB,eAAe;EACf,kBAAkB;EAClB,QAAQ;EACR,qCAAqC;AxD0pWzC;;AwDnqWA;EAYI,YAAY;AxD2pWhB;;AC/rWE;EuDwBF;IAeI,aAAa;ExD6pWf;AACF;;AwD5pWA;EACE,kBAAkB;AxD+pWpB;;ACzsWE;EuDyCF;IAKM,aAAa;ExDgqWjB;EwDrqWF;IAOQ,sBAAsB;ExDiqW5B;AACF;;AC9sWE;EuDqCF;IASI,aAAa;IACb,uBAAuB;ExDqqWzB;EwD/qWF;IvDsBI,oBuDVwC;ExDsqW1C;AACF;;AwDnqWA;;EAEE,YAAY;EACZ,cAAc;AxDsqWhB;;AwDpqWA;EACE,YAAY;EACZ,cAAc;EACd,oBAlJ6B;AxDyzW/B;;AyDrzWA;EACE,oBAL2B;AzD6zW7B;;AC5tWE;EwD7FF;IAMM,oBAT8B;EzDi0WlC;EyD9zWF;IAQM,qBAV8B;EzDm0WlC;AACF;;A0Dl0WA;EACE,yBxDS4B;EwDR5B,yBAJ+B;A1Dy0WjC\",\"file\":\"bulma.css\"}"
  },
  {
    "path": "docs/static/css/index.css",
    "content": "body {\n  font-family: 'Noto Sans', sans-serif;\n}\n\n.hero-body-img{\n  text-align: center;\n}\n\n.footer .icon-link {\n    font-size: 25px;\n    color: #000;\n}\n\n.link-block a {\n    margin-top: 5px;\n    margin-bottom: 5px;\n}\n\n.dnerf {\n  font-variant: small-caps;\n}\n\n\n.teaser .hero-body {\n  padding-top: 0;\n  padding-bottom: 3rem;\n}\n\n.teaser {\n  font-family: 'Google Sans', sans-serif;\n}\n\n\n.publication-title {\n}\n\n.publication-banner {\n  max-height: parent;\n\n}\n\n.publication-banner video {\n  position: relative;\n  left: auto;\n  top: auto;\n  transform: none;\n  object-fit: fit;\n}\n\n.publication-header .hero-body {\n}\n\n.publication-title {\n    font-family: 'Google Sans', sans-serif;\n}\n\n.publication-authors {\n    font-family: 'Google Sans', sans-serif;\n}\n\n.publication-venue {\n    color: #555;\n    width: fit-content;\n    font-weight: bold;\n}\n\n.publication-awards {\n    color: #ff3860;\n    width: fit-content;\n    font-weight: bolder;\n}\n\n.publication-authors {\n}\n\n.author-block {\n  font-size: 16px;\n  padding: 0 5px;\n  display: inline-block;\n}\n\n.publication-banner img {\n}\n\n.publication-authors {\n  /*color: #4286f4;*/\n}\n\n.publication-video {\n    position: relative;\n    width: 100%;\n    height: 0;\n    padding-bottom: 56.25%;\n\n    overflow: hidden;\n    border-radius: 10px !important;\n}\n\n.publication-video iframe {\n    position: absolute;\n    top: 0;\n    left: 0;\n    width: 100%;\n    height: 100%;\n}\n\n.publication-body img {\n}\n\n.results-carousel {\n  overflow: hidden;\n}\n\n.results-carousel .item {\n  margin: 5px;\n  overflow: hidden;\n  padding: 20px;\n  font-size: 0;\n}\n\n.results-carousel video {\n  margin: 0;\n}\n\n.slider-pagination .slider-page {\n  background: #000000;\n}\n\n.eql-cntrb { \n  font-size: smaller;\n}\n\n\n\n\nbody{\n    font-weight: 200;\n    font-size: 16px;\n    /*background-color: rgb(43, 60, 197);*/\n    /*color: rgb(0, 79, 241);*/\n    /*color: 
white;*/\n    border-top:5px solid rgb(255, 180, 240);\n    /*border-bottom:5px solid orange;*/\n}\nb{\n  color:rgb(0, 79, 241);\n}\n\n.title{\n  text-align: center;\n}\n\n.posts{\n    /*font-family: \"Helvetica Neue\", Helvetica, Arial, sans-serif;*/\n    font-size: 14px;\n\n}\n.news{\n  line-height: 1.5em;\n}\n.post{\n  border-left: 5px solid rgb(255, 180, 240);\n}\n.xtitle{\n    font-family: \"Helvetica Neue\", Helvetica, Arial, sans-serif;\n    font-size: 30px;\n    text-align: center;\n    /* margin: 10px 0; */\n    /* font-weight: 400; */\n    /*color: rgb(0, 79, 241);*/\n}\n\n.posts .teaser{\n    width: 160px;\n    height: 120px;\n    float: left;\n    margin: 0 0 10px 10px;\n}\n\n.link-block{\n  margin: 0 10px;\n}\n\na{\n  color: #111;\n  position: relative;\n}\na:after{\n  content: '';\n  position: absolute;\n  top: 60%;\n  left: -0.1em;\n  right: -0.1em;\n  bottom: 0;\n  transition:top 200ms cubic-bezier(0, 0.8, 0.13, 1);\n  /*background-color: rgba(225, 166, 121, 0.5);*/\n  background-color: rgba(255,211,30, 0.4);\n}\n.emojilink:after{\n  background-color: rgba(255,211,30, 0.0) \n  /*rgba(225, 166, 121, 0.0);*/\n}\na:hover{\n  color: black;\n  text-decoration: none;\n}\na:hover:after{\n  color:black;\n  /*color: #111;*/\n  text-decoration: none;\n  top:0%;\n}\n.entry{\n    position: relative;\n    top:0;\n    left: 20px;\n    margin-top: 5px;\n}\n\n.posts > .post{\n    border-bottom: 0;\n    padding-bottom:0em;\n    padding-top: 10px;\n    margin-bottom: 5px;\n}\n.papertitle{\n    margin-top: 0px;\n    font-weight: 600;\n    font-size:16px;\n    font-style: italic;\n    /*font-style: italic;*/\n    /*height: 2.6em*/\n}\n"
  },
  {
    "path": "docs/static/js/bulma-carousel.js",
    "content": "(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"bulmaCarousel\"] = factory();\n\telse\n\t\troot[\"bulmaCarousel\"] = factory();\n})(typeof self !== 'undefined' ? self : this, function() {\nreturn /******/ (function(modules) { // webpackBootstrap\n/******/ \t// The module cache\n/******/ \tvar installedModules = {};\n/******/\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(installedModules[moduleId]) {\n/******/ \t\t\treturn installedModules[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = installedModules[moduleId] = {\n/******/ \t\t\ti: moduleId,\n/******/ \t\t\tl: false,\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/\n/******/ \t\t// Execute the module function\n/******/ \t\tmodules[moduleId].call(module.exports, module, module.exports, __webpack_require__);\n/******/\n/******/ \t\t// Flag the module as loaded\n/******/ \t\tmodule.l = true;\n/******/\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/\n/******/\n/******/ \t// expose the modules object (__webpack_modules__)\n/******/ \t__webpack_require__.m = modules;\n/******/\n/******/ \t// expose the module cache\n/******/ \t__webpack_require__.c = installedModules;\n/******/\n/******/ \t// define getter function for harmony exports\n/******/ \t__webpack_require__.d = function(exports, name, getter) {\n/******/ \t\tif(!__webpack_require__.o(exports, name)) {\n/******/ \t\t\tObject.defineProperty(exports, name, {\n/******/ \t\t\t\tconfigurable: false,\n/******/ \t\t\t\tenumerable: true,\n/******/ 
\t\t\t\tget: getter\n/******/ \t\t\t});\n/******/ \t\t}\n/******/ \t};\n/******/\n/******/ \t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t__webpack_require__.n = function(module) {\n/******/ \t\tvar getter = module && module.__esModule ?\n/******/ \t\t\tfunction getDefault() { return module['default']; } :\n/******/ \t\t\tfunction getModuleExports() { return module; };\n/******/ \t\t__webpack_require__.d(getter, 'a', getter);\n/******/ \t\treturn getter;\n/******/ \t};\n/******/\n/******/ \t// Object.prototype.hasOwnProperty.call\n/******/ \t__webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); };\n/******/\n/******/ \t// __webpack_public_path__\n/******/ \t__webpack_require__.p = \"\";\n/******/\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(__webpack_require__.s = 5);\n/******/ })\n/************************************************************************/\n/******/ ([\n/* 0 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* unused harmony export addClasses */\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"d\", function() { return removeClasses; });\n/* unused harmony export show */\n/* unused harmony export hide */\n/* unused harmony export offset */\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"e\", function() { return width; });\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"b\", function() { return height; });\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"c\", function() { return outerHeight; });\n/* unused harmony export outerWidth */\n/* unused harmony export position */\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"a\", function() { return css; });\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__type__ = 
__webpack_require__(2);\n\n\nvar addClasses = function addClasses(element, classes) {\n\tclasses = Array.isArray(classes) ? classes : classes.split(' ');\n\tclasses.forEach(function (cls) {\n\t\telement.classList.add(cls);\n\t});\n};\n\nvar removeClasses = function removeClasses(element, classes) {\n\tclasses = Array.isArray(classes) ? classes : classes.split(' ');\n\tclasses.forEach(function (cls) {\n\t\telement.classList.remove(cls);\n\t});\n};\n\nvar show = function show(elements) {\n\telements = Array.isArray(elements) ? elements : [elements];\n\telements.forEach(function (element) {\n\t\telement.style.display = '';\n\t});\n};\n\nvar hide = function hide(elements) {\n\telements = Array.isArray(elements) ? elements : [elements];\n\telements.forEach(function (element) {\n\t\telement.style.display = 'none';\n\t});\n};\n\nvar offset = function offset(element) {\n\tvar rect = element.getBoundingClientRect();\n\treturn {\n\t\ttop: rect.top + document.body.scrollTop,\n\t\tleft: rect.left + document.body.scrollLeft\n\t};\n};\n\n// returns an element's width\nvar width = function width(element) {\n\treturn element.getBoundingClientRect().width || element.offsetWidth;\n};\n// returns an element's height\nvar height = function height(element) {\n\treturn element.getBoundingClientRect().height || element.offsetHeight;\n};\n\nvar outerHeight = function outerHeight(element) {\n\tvar withMargin = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : false;\n\n\tvar height = element.offsetHeight;\n\tif (withMargin) {\n\t\tvar style = window.getComputedStyle(element);\n\t\theight += parseInt(style.marginTop) + parseInt(style.marginBottom);\n\t}\n\treturn height;\n};\n\nvar outerWidth = function outerWidth(element) {\n\tvar withMargin = arguments.length > 1 && arguments[1] !== undefined ? 
arguments[1] : false;\n\n\tvar width = element.offsetWidth;\n\tif (withMargin) {\n\t\tvar style = window.getComputedStyle(element);\n\t\twidth += parseInt(style.marginLeft) + parseInt(style.marginRight);\n\t}\n\treturn width;\n};\n\nvar position = function position(element) {\n\treturn {\n\t\tleft: element.offsetLeft,\n\t\ttop: element.offsetTop\n\t};\n};\n\nvar css = function css(element, obj) {\n\tif (!obj) {\n\t\treturn window.getComputedStyle(element);\n\t}\n\tif (Object(__WEBPACK_IMPORTED_MODULE_0__type__[\"b\" /* isObject */])(obj)) {\n\t\tvar style = '';\n\t\tObject.keys(obj).forEach(function (key) {\n\t\t\tstyle += key + ': ' + obj[key] + ';';\n\t\t});\n\n\t\telement.style.cssText += style;\n\t}\n};\n\n/***/ }),\n/* 1 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony export (immutable) */ __webpack_exports__[\"a\"] = detectSupportsPassive;\nfunction detectSupportsPassive() {\n\tvar supportsPassive = false;\n\n\ttry {\n\t\tvar opts = Object.defineProperty({}, 'passive', {\n\t\t\tget: function get() {\n\t\t\t\tsupportsPassive = true;\n\t\t\t}\n\t\t});\n\n\t\twindow.addEventListener('testPassive', null, opts);\n\t\twindow.removeEventListener('testPassive', null, opts);\n\t} catch (e) {}\n\n\treturn supportsPassive;\n}\n\n/***/ }),\n/* 2 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"a\", function() { return isFunction; });\n/* unused harmony export isNumber */\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"c\", function() { return isString; });\n/* unused harmony export isDate */\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"b\", function() { return isObject; });\n/* unused harmony export isEmptyObject */\n/* unused harmony export isNode */\n/* unused harmony export isVideo */\n/* unused harmony export isHTML5 */\n/* unused 
harmony export isIFrame */\n/* unused harmony export isYoutube */\n/* unused harmony export isVimeo */\nvar _typeof = typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\" ? function (obj) { return typeof obj; } : function (obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; };\n\nvar isFunction = function isFunction(unknown) {\n\treturn typeof unknown === 'function';\n};\nvar isNumber = function isNumber(unknown) {\n\treturn typeof unknown === \"number\";\n};\nvar isString = function isString(unknown) {\n\treturn typeof unknown === 'string' || !!unknown && (typeof unknown === 'undefined' ? 'undefined' : _typeof(unknown)) === 'object' && Object.prototype.toString.call(unknown) === '[object String]';\n};\nvar isDate = function isDate(unknown) {\n\treturn (Object.prototype.toString.call(unknown) === '[object Date]' || unknown instanceof Date) && !isNaN(unknown.valueOf());\n};\nvar isObject = function isObject(unknown) {\n\treturn (typeof unknown === 'function' || (typeof unknown === 'undefined' ? 
'undefined' : _typeof(unknown)) === 'object' && !!unknown) && !Array.isArray(unknown);\n};\nvar isEmptyObject = function isEmptyObject(unknown) {\n\tfor (var name in unknown) {\n\t\tif (unknown.hasOwnProperty(name)) {\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n};\n\nvar isNode = function isNode(unknown) {\n\treturn !!(unknown && unknown.nodeType === HTMLElement | SVGElement);\n};\nvar isVideo = function isVideo(unknown) {\n\treturn isYoutube(unknown) || isVimeo(unknown) || isHTML5(unknown);\n};\nvar isHTML5 = function isHTML5(unknown) {\n\treturn isNode(unknown) && unknown.tagName === 'VIDEO';\n};\nvar isIFrame = function isIFrame(unknown) {\n\treturn isNode(unknown) && unknown.tagName === 'IFRAME';\n};\nvar isYoutube = function isYoutube(unknown) {\n\treturn isIFrame(unknown) && !!unknown.src.match(/\\/\\/.*?youtube(-nocookie)?\\.[a-z]+\\/(watch\\?v=[^&\\s]+|embed)|youtu\\.be\\/.*/);\n};\nvar isVimeo = function isVimeo(unknown) {\n\treturn isIFrame(unknown) && !!unknown.src.match(/vimeo\\.com\\/video\\/.*/);\n};\n\n/***/ }),\n/* 3 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _toConsumableArray(arr) { if (Array.isArray(arr)) { for (var i = 0, arr2 = Array(arr.length); i < arr.length; i++) { arr2[i] = arr[i]; } return arr2; } else { return Array.from(arr); } }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof 
Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nvar EventEmitter = function () {\n  function EventEmitter() {\n    var events = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : [];\n\n    _classCallCheck(this, EventEmitter);\n\n    this.events = new Map(events);\n  }\n\n  _createClass(EventEmitter, [{\n    key: \"on\",\n    value: function on(name, cb) {\n      var _this = this;\n\n      this.events.set(name, [].concat(_toConsumableArray(this.events.has(name) ? this.events.get(name) : []), [cb]));\n\n      return function () {\n        return _this.events.set(name, _this.events.get(name).filter(function (fn) {\n          return fn !== cb;\n        }));\n      };\n    }\n  }, {\n    key: \"emit\",\n    value: function emit(name) {\n      for (var _len = arguments.length, args = Array(_len > 1 ? _len - 1 : 0), _key = 1; _key < _len; _key++) {\n        args[_key - 1] = arguments[_key];\n      }\n\n      return this.events.has(name) && this.events.get(name).map(function (fn) {\n        return fn.apply(undefined, args);\n      });\n    }\n  }]);\n\n  return EventEmitter;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (EventEmitter);\n\n/***/ }),\n/* 4 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw 
new TypeError(\"Cannot call a class as a function\"); } }\n\nvar Coordinate = function () {\n\tfunction Coordinate() {\n\t\tvar x = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : 0;\n\t\tvar y = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : 0;\n\n\t\t_classCallCheck(this, Coordinate);\n\n\t\tthis._x = x;\n\t\tthis._y = y;\n\t}\n\n\t_createClass(Coordinate, [{\n\t\tkey: 'add',\n\t\tvalue: function add(coord) {\n\t\t\treturn new Coordinate(this._x + coord._x, this._y + coord._y);\n\t\t}\n\t}, {\n\t\tkey: 'sub',\n\t\tvalue: function sub(coord) {\n\t\t\treturn new Coordinate(this._x - coord._x, this._y - coord._y);\n\t\t}\n\t}, {\n\t\tkey: 'distance',\n\t\tvalue: function distance(coord) {\n\t\t\tvar deltaX = this._x - coord._x;\n\t\t\tvar deltaY = this._y - coord._y;\n\n\t\t\treturn Math.sqrt(Math.pow(deltaX, 2) + Math.pow(deltaY, 2));\n\t\t}\n\t}, {\n\t\tkey: 'max',\n\t\tvalue: function max(coord) {\n\t\t\tvar x = Math.max(this._x, coord._x);\n\t\t\tvar y = Math.max(this._y, coord._y);\n\n\t\t\treturn new Coordinate(x, y);\n\t\t}\n\t}, {\n\t\tkey: 'equals',\n\t\tvalue: function equals(coord) {\n\t\t\tif (this == coord) {\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tif (!coord || coord == null) {\n\t\t\t\treturn false;\n\t\t\t}\n\t\t\treturn this._x == coord._x && this._y == coord._y;\n\t\t}\n\t}, {\n\t\tkey: 'inside',\n\t\tvalue: function inside(northwest, southeast) {\n\t\t\tif (this._x >= northwest._x && this._x <= southeast._x && this._y >= northwest._y && this._y <= southeast._y) {\n\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\treturn false;\n\t\t}\n\t}, {\n\t\tkey: 'constrain',\n\t\tvalue: function constrain(min, max) {\n\t\t\tif (min._x > max._x || min._y > max._y) {\n\t\t\t\treturn this;\n\t\t\t}\n\n\t\t\tvar x = this._x,\n\t\t\t    y = this._y;\n\n\t\t\tif (min._x !== null) {\n\t\t\t\tx = Math.max(x, min._x);\n\t\t\t}\n\t\t\tif (max._x !== null) {\n\t\t\t\tx = Math.min(x, max._x);\n\t\t\t}\n\t\t\tif (min._y !== null) {\n\t\t\t\ty 
= Math.max(y, min._y);\n\t\t\t}\n\t\t\tif (max._y !== null) {\n\t\t\t\ty = Math.min(y, max._y);\n\t\t\t}\n\n\t\t\treturn new Coordinate(x, y);\n\t\t}\n\t}, {\n\t\tkey: 'reposition',\n\t\tvalue: function reposition(element) {\n\t\t\telement.style['top'] = this._y + 'px';\n\t\t\telement.style['left'] = this._x + 'px';\n\t\t}\n\t}, {\n\t\tkey: 'toString',\n\t\tvalue: function toString() {\n\t\t\treturn '(' + this._x + ',' + this._y + ')';\n\t\t}\n\t}, {\n\t\tkey: 'x',\n\t\tget: function get() {\n\t\t\treturn this._x;\n\t\t},\n\t\tset: function set() {\n\t\t\tvar value = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : 0;\n\n\t\t\tthis._x = value;\n\t\t\treturn this;\n\t\t}\n\t}, {\n\t\tkey: 'y',\n\t\tget: function get() {\n\t\t\treturn this._y;\n\t\t},\n\t\tset: function set() {\n\t\t\tvar value = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : 0;\n\n\t\t\tthis._y = value;\n\t\t\treturn this;\n\t\t}\n\t}]);\n\n\treturn Coordinate;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Coordinate);\n\n/***/ }),\n/* 5 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\nObject.defineProperty(__webpack_exports__, \"__esModule\", { value: true });\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__utils_index__ = __webpack_require__(6);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_1__utils_css__ = __webpack_require__(0);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_2__utils_type__ = __webpack_require__(2);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_3__utils_eventEmitter__ = __webpack_require__(3);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_4__components_autoplay__ = __webpack_require__(7);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_5__components_breakpoint__ = __webpack_require__(9);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_6__components_infinite__ = __webpack_require__(10);\n/* harmony import */ var 
__WEBPACK_IMPORTED_MODULE_7__components_loop__ = __webpack_require__(11);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_8__components_navigation__ = __webpack_require__(13);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_9__components_pagination__ = __webpack_require__(15);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_10__components_swipe__ = __webpack_require__(18);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_11__components_transitioner__ = __webpack_require__(19);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_12__defaultOptions__ = __webpack_require__(22);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_13__templates__ = __webpack_require__(23);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_14__templates_item__ = __webpack_require__(24);\nvar _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; };\n\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _defineProperty(obj, key, value) { if (key in obj) { Object.defineProperty(obj, key, { value: value, enumerable: true, configurable: true, writable: true }); } else { obj[key] = value; } return obj; }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } 
}\n\nfunction _possibleConstructorReturn(self, call) { if (!self) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return call && (typeof call === \"object\" || typeof call === \"function\") ? call : self; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function, not \" + typeof superClass); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, enumerable: false, writable: true, configurable: true } }); if (superClass) Object.setPrototypeOf ? Object.setPrototypeOf(subClass, superClass) : subClass.__proto__ = superClass; }\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nvar bulmaCarousel = function (_EventEmitter) {\n  _inherits(bulmaCarousel, _EventEmitter);\n\n  function bulmaCarousel(selector) {\n    var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {};\n\n    _classCallCheck(this, bulmaCarousel);\n\n    var _this = _possibleConstructorReturn(this, (bulmaCarousel.__proto__ || Object.getPrototypeOf(bulmaCarousel)).call(this));\n\n    _this.element = Object(__WEBPACK_IMPORTED_MODULE_2__utils_type__[\"c\" /* isString */])(selector) ? document.querySelector(selector) : selector;\n    // An invalid selector or non-DOM node has been provided.\n    if (!_this.element) {\n      throw new Error('An invalid selector or non-DOM node has been provided.');\n    }\n    _this._clickEvents = ['click', 'touch'];\n\n    // Use Element dataset values to override options\n    var elementConfig = _this.element.dataset ? 
Object.keys(_this.element.dataset).filter(function (key) {\n      return Object.keys(__WEBPACK_IMPORTED_MODULE_12__defaultOptions__[\"a\" /* default */]).includes(key);\n    }).reduce(function (obj, key) {\n      return _extends({}, obj, _defineProperty({}, key, _this.element.dataset[key]));\n    }, {}) : {};\n    // Set default options - dataset attributes are master\n    _this.options = _extends({}, __WEBPACK_IMPORTED_MODULE_12__defaultOptions__[\"a\" /* default */], options, elementConfig);\n\n    _this._id = Object(__WEBPACK_IMPORTED_MODULE_0__utils_index__[\"a\" /* uuid */])('slider');\n\n    _this.onShow = _this.onShow.bind(_this);\n\n    // Initiate plugin\n    _this._init();\n    return _this;\n  }\n\n  /**\n   * Initiate all DOM element containing datePicker class\n   * @method\n   * @return {Array} Array of all datePicker instances\n   */\n\n\n  _createClass(bulmaCarousel, [{\n    key: '_init',\n\n\n    /****************************************************\n     *                                                  *\n     * PRIVATE FUNCTIONS                                *\n     *                                                  *\n     ****************************************************/\n    /**\n     * Initiate plugin instance\n     * @method _init\n     * @return {Slider} Current plugin instance\n     */\n    value: function _init() {\n      this._items = Array.from(this.element.children);\n\n      // Load plugins\n      this._breakpoint = new __WEBPACK_IMPORTED_MODULE_5__components_breakpoint__[\"a\" /* default */](this);\n      this._autoplay = new __WEBPACK_IMPORTED_MODULE_4__components_autoplay__[\"a\" /* default */](this);\n      this._navigation = new __WEBPACK_IMPORTED_MODULE_8__components_navigation__[\"a\" /* default */](this);\n      this._pagination = new __WEBPACK_IMPORTED_MODULE_9__components_pagination__[\"a\" /* default */](this);\n      this._infinite = new __WEBPACK_IMPORTED_MODULE_6__components_infinite__[\"a\" /* default 
*/](this);\n      this._loop = new __WEBPACK_IMPORTED_MODULE_7__components_loop__[\"a\" /* default */](this);\n      this._swipe = new __WEBPACK_IMPORTED_MODULE_10__components_swipe__[\"a\" /* default */](this);\n\n      this._build();\n\n      if (Object(__WEBPACK_IMPORTED_MODULE_2__utils_type__[\"a\" /* isFunction */])(this.options.onReady)) {\n        this.options.onReady(this);\n      }\n\n      return this;\n    }\n\n    /**\n     * Build Slider HTML component and append it to the DOM\n     * @method _build\n     */\n\n  }, {\n    key: '_build',\n    value: function _build() {\n      var _this2 = this;\n\n      // Generate HTML Fragment of template\n      this.node = document.createRange().createContextualFragment(Object(__WEBPACK_IMPORTED_MODULE_13__templates__[\"a\" /* default */])(this.id));\n      // Save pointers to template parts\n      this._ui = {\n        wrapper: this.node.firstChild,\n        container: this.node.querySelector('.slider-container')\n\n        // Add slider to DOM\n      };this.element.appendChild(this.node);\n      this._ui.wrapper.classList.add('is-loading');\n      this._ui.container.style.opacity = 0;\n\n      this._transitioner = new __WEBPACK_IMPORTED_MODULE_11__components_transitioner__[\"a\" /* default */](this);\n\n      // Wrap all items by slide element\n      this._slides = this._items.map(function (item, index) {\n        return _this2._createSlide(item, index);\n      });\n\n      this.reset();\n\n      this._bindEvents();\n\n      this._ui.container.style.opacity = 1;\n      this._ui.wrapper.classList.remove('is-loading');\n    }\n\n    /**\n     * Bind all events\n     * @method _bindEvents\n     * @return {void}\n     */\n\n  }, {\n    key: '_bindEvents',\n    value: function _bindEvents() {\n      this.on('show', this.onShow);\n    }\n  }, {\n    key: '_unbindEvents',\n    value: function _unbindEvents() {\n      this.off('show', this.onShow);\n    }\n  }, {\n    key: '_createSlide',\n    value: function 
_createSlide(item, index) {\n      var slide = document.createRange().createContextualFragment(Object(__WEBPACK_IMPORTED_MODULE_14__templates_item__[\"a\" /* default */])()).firstChild;\n      slide.dataset.sliderIndex = index;\n      slide.appendChild(item);\n      return slide;\n    }\n\n    /**\n     * Calculate slider dimensions\n     */\n\n  }, {\n    key: '_setDimensions',\n    value: function _setDimensions() {\n      var _this3 = this;\n\n      if (!this.options.vertical) {\n        if (this.options.centerMode) {\n          this._ui.wrapper.style.padding = '0px ' + this.options.centerPadding;\n        }\n      } else {\n        this._ui.wrapper.style.height = Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"c\" /* outerHeight */])(this._slides[0]) * this.slidesToShow;\n        if (this.options.centerMode) {\n          this._ui.wrapper.style.padding = this.options.centerPadding + ' 0px';\n        }\n      }\n\n      this._wrapperWidth = Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"e\" /* width */])(this._ui.wrapper);\n      this._wrapperHeight = Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"c\" /* outerHeight */])(this._ui.wrapper);\n\n      if (!this.options.vertical) {\n        this._slideWidth = Math.ceil(this._wrapperWidth / this.slidesToShow);\n        this._containerWidth = Math.ceil(this._slideWidth * this._slides.length);\n        this._ui.container.style.width = this._containerWidth + 'px';\n      } else {\n        this._slideWidth = Math.ceil(this._wrapperWidth);\n        this._containerHeight = Math.ceil(Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"c\" /* outerHeight */])(this._slides[0]) * this._slides.length);\n        this._ui.container.style.height = this._containerHeight + 'px';\n      }\n\n      this._slides.forEach(function (slide) {\n        slide.style.width = _this3._slideWidth + 'px';\n      });\n    }\n  }, {\n    key: '_setHeight',\n    value: function _setHeight() {\n      if (this.options.effect !== 'translate') {\n   
     this._ui.container.style.height = Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"c\" /* outerHeight */])(this._slides[this.state.index]) + 'px';\n      }\n    }\n\n    // Update slides classes\n\n  }, {\n    key: '_setClasses',\n    value: function _setClasses() {\n      var _this4 = this;\n\n      this._slides.forEach(function (slide) {\n        Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"d\" /* removeClasses */])(slide, 'is-active is-current is-slide-previous is-slide-next');\n        if (Math.abs((_this4.state.index - 1) % _this4.state.length) === parseInt(slide.dataset.sliderIndex, 10)) {\n          slide.classList.add('is-slide-previous');\n        }\n        if (Math.abs(_this4.state.index % _this4.state.length) === parseInt(slide.dataset.sliderIndex, 10)) {\n          slide.classList.add('is-current');\n        }\n        if (Math.abs((_this4.state.index + 1) % _this4.state.length) === parseInt(slide.dataset.sliderIndex, 10)) {\n          slide.classList.add('is-slide-next');\n        }\n      });\n    }\n\n    /****************************************************\n     *                                                  *\n     * GETTERS and SETTERS                              *\n     *                                                  *\n     ****************************************************/\n\n    /**\n     * Get id of current carousel\n     */\n\n  }, {\n    key: 'onShow',\n\n\n    /****************************************************\n     *                                                  *\n     * EVENTS FUNCTIONS                                 *\n     *                                                  *\n     ****************************************************/\n    value: function onShow(e) {\n      this._navigation.refresh();\n      this._pagination.refresh();\n      this._setClasses();\n    }\n\n    /****************************************************\n     *                                                  *\n     * PUBLIC 
FUNCTIONS                                 *\n     *                                                  *\n     ****************************************************/\n\n  }, {\n    key: 'next',\n    value: function next() {\n      if (!this.options.loop && !this.options.infinite && this.state.index + this.slidesToScroll > this.state.length - this.slidesToShow && !this.options.centerMode) {\n        this.state.next = this.state.index;\n      } else {\n        this.state.next = this.state.index + this.slidesToScroll;\n      }\n      this.show();\n    }\n  }, {\n    key: 'previous',\n    value: function previous() {\n      if (!this.options.loop && !this.options.infinite && this.state.index === 0) {\n        this.state.next = this.state.index;\n      } else {\n        this.state.next = this.state.index - this.slidesToScroll;\n      }\n      this.show();\n    }\n  }, {\n    key: 'start',\n    value: function start() {\n      this._autoplay.start();\n    }\n  }, {\n    key: 'pause',\n    value: function pause() {\n      this._autoplay.pause();\n    }\n  }, {\n    key: 'stop',\n    value: function stop() {\n      this._autoplay.stop();\n    }\n  }, {\n    key: 'show',\n    value: function show(index) {\n      var force = arguments.length > 1 && arguments[1] !== undefined ? 
arguments[1] : false;\n\n      // If all slides are already visible then return\n      if (!this.state.length || this.state.length <= this.slidesToShow) {\n        return;\n      }\n\n      if (typeof index === 'number') {\n        this.state.next = index;\n      }\n\n      if (this.options.loop) {\n        this._loop.apply();\n      }\n      if (this.options.infinite) {\n        this._infinite.apply();\n      }\n\n      // If new slide is already the current one then return\n      if (this.state.index === this.state.next) {\n        return;\n      }\n\n      this.emit('before:show', this.state);\n      this._transitioner.apply(force, this._setHeight.bind(this));\n      this.emit('after:show', this.state);\n\n      this.emit('show', this);\n    }\n  }, {\n    key: 'reset',\n    value: function reset() {\n      var _this5 = this;\n\n      this.state = {\n        length: this._items.length,\n        index: Math.abs(this.options.initialSlide),\n        next: Math.abs(this.options.initialSlide),\n        prev: undefined\n      };\n\n      // Fix options\n      if (this.options.loop && this.options.infinite) {\n        this.options.loop = false;\n      }\n      if (this.options.slidesToScroll > this.options.slidesToShow) {\n        this.options.slidesToScroll = this.slidesToShow;\n      }\n      this._breakpoint.init();\n\n      if (this.state.index >= this.state.length && this.state.index !== 0) {\n        this.state.index = this.state.index - this.slidesToScroll;\n      }\n      if (this.state.length <= this.slidesToShow) {\n        this.state.index = 0;\n      }\n\n      this._ui.wrapper.appendChild(this._navigation.init().render());\n      this._ui.wrapper.appendChild(this._pagination.init().render());\n\n      if (this.options.navigationSwipe) {\n        this._swipe.bindEvents();\n      } else {\n        this._swipe._bindEvents();\n      }\n\n      this._breakpoint.apply();\n      // Move all created slides into slider\n      this._slides.forEach(function (slide) 
{\n        return _this5._ui.container.appendChild(slide);\n      });\n      this._transitioner.init().apply(true, this._setHeight.bind(this));\n\n      if (this.options.autoplay) {\n        this._autoplay.init().start();\n      }\n    }\n\n    /**\n     * Destroy Slider\n     * @method destroy\n     */\n\n  }, {\n    key: 'destroy',\n    value: function destroy() {\n      var _this6 = this;\n\n      this._unbindEvents();\n      this._items.forEach(function (item) {\n        _this6.element.appendChild(item);\n      });\n      this.node.remove();\n    }\n  }, {\n    key: 'id',\n    get: function get() {\n      return this._id;\n    }\n  }, {\n    key: 'index',\n    set: function set(index) {\n      this._index = index;\n    },\n    get: function get() {\n      return this._index;\n    }\n  }, {\n    key: 'length',\n    set: function set(length) {\n      this._length = length;\n    },\n    get: function get() {\n      return this._length;\n    }\n  }, {\n    key: 'slides',\n    get: function get() {\n      return this._slides;\n    },\n    set: function set(slides) {\n      this._slides = slides;\n    }\n  }, {\n    key: 'slidesToScroll',\n    get: function get() {\n      return this.options.effect === 'translate' ? this._breakpoint.getSlidesToScroll() : 1;\n    }\n  }, {\n    key: 'slidesToShow',\n    get: function get() {\n      return this.options.effect === 'translate' ? this._breakpoint.getSlidesToShow() : 1;\n    }\n  }, {\n    key: 'direction',\n    get: function get() {\n      return this.element.dir.toLowerCase() === 'rtl' || this.element.style.direction === 'rtl' ? 
'rtl' : 'ltr';\n    }\n  }, {\n    key: 'wrapper',\n    get: function get() {\n      return this._ui.wrapper;\n    }\n  }, {\n    key: 'wrapperWidth',\n    get: function get() {\n      return this._wrapperWidth || 0;\n    }\n  }, {\n    key: 'container',\n    get: function get() {\n      return this._ui.container;\n    }\n  }, {\n    key: 'containerWidth',\n    get: function get() {\n      return this._containerWidth || 0;\n    }\n  }, {\n    key: 'slideWidth',\n    get: function get() {\n      return this._slideWidth || 0;\n    }\n  }, {\n    key: 'transitioner',\n    get: function get() {\n      return this._transitioner;\n    }\n  }], [{\n    key: 'attach',\n    value: function attach() {\n      var _this7 = this;\n\n      var selector = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : '.slider';\n      var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {};\n\n      var instances = new Array();\n\n      var elements = Object(__WEBPACK_IMPORTED_MODULE_2__utils_type__[\"c\" /* isString */])(selector) ? document.querySelectorAll(selector) : Array.isArray(selector) ? 
selector : [selector];\n      [].forEach.call(elements, function (element) {\n        if (typeof element[_this7.constructor.name] === 'undefined') {\n          var instance = new bulmaCarousel(element, options);\n          element[_this7.constructor.name] = instance;\n          instances.push(instance);\n        } else {\n          instances.push(element[_this7.constructor.name]);\n        }\n      });\n\n      return instances;\n    }\n  }]);\n\n  return bulmaCarousel;\n}(__WEBPACK_IMPORTED_MODULE_3__utils_eventEmitter__[\"a\" /* default */]);\n\n/* harmony default export */ __webpack_exports__[\"default\"] = (bulmaCarousel);\n\n/***/ }),\n/* 6 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"a\", function() { return uuid; });\n/* unused harmony export isRtl */\n/* unused harmony export defer */\n/* unused harmony export getNodeIndex */\n/* unused harmony export camelize */\nfunction _toConsumableArray(arr) { if (Array.isArray(arr)) { for (var i = 0, arr2 = Array(arr.length); i < arr.length; i++) { arr2[i] = arr[i]; } return arr2; } else { return Array.from(arr); } }\n\nvar uuid = function uuid() {\n\tvar prefix = arguments.length > 0 && arguments[0] !== undefined ? 
arguments[0] : '';\n\treturn prefix + ([1e7] + -1e3 + -4e3 + -8e3 + -1e11).replace(/[018]/g, function (c) {\n\t\treturn (c ^ crypto.getRandomValues(new Uint8Array(1))[0] & 15 >> c / 4).toString(16);\n\t});\n};\nvar isRtl = function isRtl() {\n\treturn document.documentElement.getAttribute('dir') === 'rtl';\n};\n\nvar defer = function defer() {\n\tthis.promise = new Promise(function (resolve, reject) {\n\t\tthis.resolve = resolve;\n\t\tthis.reject = reject;\n\t}.bind(this));\n\n\tthis.then = this.promise.then.bind(this.promise);\n\tthis.catch = this.promise.catch.bind(this.promise);\n};\n\nvar getNodeIndex = function getNodeIndex(node) {\n\treturn [].concat(_toConsumableArray(node.parentNode.children)).indexOf(node);\n};\nvar camelize = function camelize(str) {\n\t// Replace each '-x' with 'X' (the previously referenced `toUpper` helper was undefined)\n\treturn str.replace(/-(\\w)/g, function (_match, letter) {\n\t\treturn letter.toUpperCase();\n\t});\n};\n\n/***/ }),\n/* 7 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__utils_eventEmitter__ = __webpack_require__(3);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_1__utils_device__ = __webpack_require__(8);\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _possibleConstructorReturn(self, call) { if (!self) { throw new ReferenceError(\"this hasn't been 
called\"); } return call && (typeof call === \"object\" || typeof call === \"function\") ? call : self; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function, not \" + typeof superClass); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, enumerable: false, writable: true, configurable: true } }); if (superClass) Object.setPrototypeOf ? Object.setPrototypeOf(subClass, superClass) : subClass.__proto__ = superClass; }\n\n\n\n\nvar onVisibilityChange = Symbol('onVisibilityChange');\nvar onMouseEnter = Symbol('onMouseEnter');\nvar onMouseLeave = Symbol('onMouseLeave');\n\nvar defaultOptions = {\n\tautoplay: false,\n\tautoplaySpeed: 3000\n};\n\nvar Autoplay = function (_EventEmitter) {\n\t_inherits(Autoplay, _EventEmitter);\n\n\tfunction Autoplay(slider) {\n\t\t_classCallCheck(this, Autoplay);\n\n\t\tvar _this = _possibleConstructorReturn(this, (Autoplay.__proto__ || Object.getPrototypeOf(Autoplay)).call(this));\n\n\t\t_this.slider = slider;\n\n\t\t_this.onVisibilityChange = _this.onVisibilityChange.bind(_this);\n\t\t_this.onMouseEnter = _this.onMouseEnter.bind(_this);\n\t\t_this.onMouseLeave = _this.onMouseLeave.bind(_this);\n\t\treturn _this;\n\t}\n\n\t_createClass(Autoplay, [{\n\t\tkey: 'init',\n\t\tvalue: function init() {\n\t\t\tthis._bindEvents();\n\t\t\treturn this;\n\t\t}\n\t}, {\n\t\tkey: '_bindEvents',\n\t\tvalue: function _bindEvents() {\n\t\t\tdocument.addEventListener('visibilitychange', this.onVisibilityChange);\n\t\t\tif (this.slider.options.pauseOnHover) {\n\t\t\t\tthis.slider.container.addEventListener(__WEBPACK_IMPORTED_MODULE_1__utils_device__[\"a\" /* pointerEnter */], this.onMouseEnter);\n\t\t\t\tthis.slider.container.addEventListener(__WEBPACK_IMPORTED_MODULE_1__utils_device__[\"b\" /* pointerLeave */], this.onMouseLeave);\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 
'_unbindEvents',\n\t\tvalue: function _unbindEvents() {\n\t\t\tdocument.removeEventListener('visibilitychange', this.onVisibilityChange);\n\t\t\tthis.slider.container.removeEventListener(__WEBPACK_IMPORTED_MODULE_1__utils_device__[\"a\" /* pointerEnter */], this.onMouseEnter);\n\t\t\tthis.slider.container.removeEventListener(__WEBPACK_IMPORTED_MODULE_1__utils_device__[\"b\" /* pointerLeave */], this.onMouseLeave);\n\t\t}\n\t}, {\n\t\tkey: 'start',\n\t\tvalue: function start() {\n\t\t\tvar _this2 = this;\n\n\t\t\tthis.stop();\n\t\t\tif (this.slider.options.autoplay) {\n\t\t\t\tthis.emit('start', this);\n\t\t\t\tthis._interval = setInterval(function () {\n\t\t\t\t\tif (!(_this2._hovering && _this2.slider.options.pauseOnHover)) {\n\t\t\t\t\t\tif (!_this2.slider.options.centerMode && _this2.slider.state.next >= _this2.slider.state.length - _this2.slider.slidesToShow && !_this2.slider.options.loop && !_this2.slider.options.infinite) {\n\t\t\t\t\t\t\t_this2.stop();\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t_this2.slider.next();\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}, this.slider.options.autoplaySpeed);\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'stop',\n\t\tvalue: function stop() {\n\t\t\tthis._interval = clearInterval(this._interval);\n\t\t\tthis.emit('stop', this);\n\t\t}\n\t}, {\n\t\tkey: 'pause',\n\t\tvalue: function pause() {\n\t\t\tvar _this3 = this;\n\n\t\t\tvar speed = arguments.length > 0 && arguments[0] !== undefined ? 
arguments[0] : 0;\n\n\t\t\tif (this.paused) {\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (this.timer) {\n\t\t\t\tthis.stop();\n\t\t\t}\n\t\t\tthis.paused = true;\n\t\t\tif (speed === 0) {\n\t\t\t\tthis.paused = false;\n\t\t\t\tthis.start();\n\t\t\t} else {\n\t\t\t\tthis.slider.on('transition:end', function () {\n\t\t\t\t\tif (!_this3) {\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\t_this3.paused = false;\n\t\t\t\t\tif (!_this3.run) {\n\t\t\t\t\t\t_this3.stop();\n\t\t\t\t\t} else {\n\t\t\t\t\t\t_this3.start();\n\t\t\t\t\t}\n\t\t\t\t});\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'onVisibilityChange',\n\t\tvalue: function onVisibilityChange(e) {\n\t\t\tif (document.hidden) {\n\t\t\t\tthis.stop();\n\t\t\t} else {\n\t\t\t\tthis.start();\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'onMouseEnter',\n\t\tvalue: function onMouseEnter(e) {\n\t\t\tthis._hovering = true;\n\t\t\tif (this.slider.options.pauseOnHover) {\n\t\t\t\tthis.pause();\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'onMouseLeave',\n\t\tvalue: function onMouseLeave(e) {\n\t\t\tthis._hovering = false;\n\t\t\tif (this.slider.options.pauseOnHover) {\n\t\t\t\tthis.pause();\n\t\t\t}\n\t\t}\n\t}]);\n\n\treturn Autoplay;\n}(__WEBPACK_IMPORTED_MODULE_0__utils_eventEmitter__[\"a\" /* default */]);\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Autoplay);\n\n/***/ }),\n/* 8 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* unused harmony export isIE */\n/* unused harmony export isIETouch */\n/* unused harmony export isAndroid */\n/* unused harmony export isiPad */\n/* unused harmony export isiPod */\n/* unused harmony export isiPhone */\n/* unused harmony export isSafari */\n/* unused harmony export isUiWebView */\n/* unused harmony export supportsTouchEvents */\n/* unused harmony export supportsPointerEvents */\n/* unused harmony export supportsTouch */\n/* unused harmony export pointerDown */\n/* unused harmony export pointerMove */\n/* unused harmony export pointerUp */\n/* harmony export 
(binding) */ __webpack_require__.d(__webpack_exports__, \"a\", function() { return pointerEnter; });\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"b\", function() { return pointerLeave; });\nvar isIE = window.navigator.pointerEnabled || window.navigator.msPointerEnabled;\nvar isIETouch = window.navigator.msPointerEnabled && window.navigator.msMaxTouchPoints > 1 || window.navigator.pointerEnabled && window.navigator.maxTouchPoints > 1;\nvar isAndroid = navigator.userAgent.match(/(Android);?[\\s\\/]+([\\d.]+)?/);\nvar isiPad = navigator.userAgent.match(/(iPad).*OS\\s([\\d_]+)/);\nvar isiPod = navigator.userAgent.match(/(iPod)(.*OS\\s([\\d_]+))?/);\nvar isiPhone = !navigator.userAgent.match(/(iPad).*OS\\s([\\d_]+)/) && navigator.userAgent.match(/(iPhone\\sOS)\\s([\\d_]+)/);\nvar isSafari = navigator.userAgent.toLowerCase().indexOf('safari') >= 0 && navigator.userAgent.toLowerCase().indexOf('chrome') < 0 && navigator.userAgent.toLowerCase().indexOf('android') < 0;\nvar isUiWebView = /(iPhone|iPod|iPad).*AppleWebKit(?!.*Safari)/i.test(navigator.userAgent);\n\nvar supportsTouchEvents = !!('ontouchstart' in window);\nvar supportsPointerEvents = !!('PointerEvent' in window);\nvar supportsTouch = supportsTouchEvents || window.DocumentTouch && document instanceof DocumentTouch || navigator.maxTouchPoints; // IE >=11\nvar pointerDown = !supportsTouch ? 'mousedown' : 'mousedown ' + (supportsTouchEvents ? 'touchstart' : 'pointerdown');\nvar pointerMove = !supportsTouch ? 'mousemove' : 'mousemove ' + (supportsTouchEvents ? 'touchmove' : 'pointermove');\nvar pointerUp = !supportsTouch ? 'mouseup' : 'mouseup ' + (supportsTouchEvents ? 'touchend' : 'pointerup');\nvar pointerEnter = supportsTouch && supportsPointerEvents ? 'pointerenter' : 'mouseenter';\nvar pointerLeave = supportsTouch && supportsPointerEvents ? 
'pointerleave' : 'mouseleave';\n\n/***/ }),\n/* 9 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nvar onResize = Symbol('onResize');\n\nvar Breakpoints = function () {\n\tfunction Breakpoints(slider) {\n\t\t_classCallCheck(this, Breakpoints);\n\n\t\tthis.slider = slider;\n\t\tthis.options = slider.options;\n\n\t\tthis[onResize] = this[onResize].bind(this);\n\n\t\tthis._bindEvents();\n\t}\n\n\t_createClass(Breakpoints, [{\n\t\tkey: 'init',\n\t\tvalue: function init() {\n\t\t\tthis._defaultBreakpoint = {\n\t\t\t\tslidesToShow: this.options.slidesToShow,\n\t\t\t\tslidesToScroll: this.options.slidesToScroll\n\t\t\t};\n\t\t\t// Sort ascending by changePoint; the comparator must return a number, not a boolean\n\t\t\tthis.options.breakpoints.sort(function (a, b) {\n\t\t\t\treturn parseInt(a.changePoint, 10) - parseInt(b.changePoint, 10);\n\t\t\t});\n\t\t\tthis._currentBreakpoint = this._getActiveBreakpoint();\n\n\t\t\treturn this;\n\t\t}\n\t}, {\n\t\tkey: 'destroy',\n\t\tvalue: function destroy() {\n\t\t\tthis._unbindEvents();\n\t\t}\n\t}, {\n\t\tkey: '_bindEvents',\n\t\tvalue: function _bindEvents() {\n\t\t\twindow.addEventListener('resize', this[onResize]);\n\t\t\twindow.addEventListener('orientationchange', this[onResize]);\n\t\t}\n\t}, {\n\t\tkey: '_unbindEvents',\n\t\tvalue: function _unbindEvents() 
{\n\t\t\twindow.removeEventListener('resize', this[onResize]);\n\t\t\twindow.removeEventListener('orientationchange', this[onResize]);\n\t\t}\n\t}, {\n\t\tkey: '_getActiveBreakpoint',\n\t\tvalue: function _getActiveBreakpoint() {\n\t\t\t//Get breakpoint for window width\n\t\t\tvar _iteratorNormalCompletion = true;\n\t\t\tvar _didIteratorError = false;\n\t\t\tvar _iteratorError = undefined;\n\n\t\t\ttry {\n\t\t\t\tfor (var _iterator = this.options.breakpoints[Symbol.iterator](), _step; !(_iteratorNormalCompletion = (_step = _iterator.next()).done); _iteratorNormalCompletion = true) {\n\t\t\t\t\tvar point = _step.value;\n\n\t\t\t\t\tif (point.changePoint >= window.innerWidth) {\n\t\t\t\t\t\treturn point;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} catch (err) {\n\t\t\t\t_didIteratorError = true;\n\t\t\t\t_iteratorError = err;\n\t\t\t} finally {\n\t\t\t\ttry {\n\t\t\t\t\tif (!_iteratorNormalCompletion && _iterator.return) {\n\t\t\t\t\t\t_iterator.return();\n\t\t\t\t\t}\n\t\t\t\t} finally {\n\t\t\t\t\tif (_didIteratorError) {\n\t\t\t\t\t\tthrow _iteratorError;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn this._defaultBreakpoint;\n\t\t}\n\t}, {\n\t\tkey: 'getSlidesToShow',\n\t\tvalue: function getSlidesToShow() {\n\t\t\treturn this._currentBreakpoint ? this._currentBreakpoint.slidesToShow : this._defaultBreakpoint.slidesToShow;\n\t\t}\n\t}, {\n\t\tkey: 'getSlidesToScroll',\n\t\tvalue: function getSlidesToScroll() {\n\t\t\treturn this._currentBreakpoint ? 
this._currentBreakpoint.slidesToScroll : this._defaultBreakpoint.slidesToScroll;\n\t\t}\n\t}, {\n\t\tkey: 'apply',\n\t\tvalue: function apply() {\n\t\t\tif (this.slider.state.index >= this.slider.state.length && this.slider.state.index !== 0) {\n\t\t\t\tthis.slider.state.index = this.slider.state.index - this._currentBreakpoint.slidesToScroll;\n\t\t\t}\n\t\t\tif (this.slider.state.length <= this._currentBreakpoint.slidesToShow) {\n\t\t\t\tthis.slider.state.index = 0;\n\t\t\t}\n\n\t\t\tif (this.options.loop) {\n\t\t\t\tthis.slider._loop.init().apply();\n\t\t\t}\n\n\t\t\tif (this.options.infinite) {\n\t\t\t\tthis.slider._infinite.init().apply();\n\t\t\t}\n\n\t\t\tthis.slider._setDimensions();\n\t\t\tthis.slider._transitioner.init().apply(true, this.slider._setHeight.bind(this.slider));\n\t\t\tthis.slider._setClasses();\n\n\t\t\tthis.slider._navigation.refresh();\n\t\t\tthis.slider._pagination.refresh();\n\t\t}\n\t}, {\n\t\tkey: onResize,\n\t\tvalue: function value(e) {\n\t\t\tvar newBreakPoint = this._getActiveBreakpoint();\n\t\t\tif (newBreakPoint.slidesToShow !== this._currentBreakpoint.slidesToShow) {\n\t\t\t\tthis._currentBreakpoint = newBreakPoint;\n\t\t\t\tthis.apply();\n\t\t\t}\n\t\t}\n\t}]);\n\n\treturn Breakpoints;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Breakpoints);\n\n/***/ }),\n/* 10 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; 
}();\n\nfunction _toConsumableArray(arr) { if (Array.isArray(arr)) { for (var i = 0, arr2 = Array(arr.length); i < arr.length; i++) { arr2[i] = arr[i]; } return arr2; } else { return Array.from(arr); } }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nvar Infinite = function () {\n\tfunction Infinite(slider) {\n\t\t_classCallCheck(this, Infinite);\n\n\t\tthis.slider = slider;\n\t}\n\n\t_createClass(Infinite, [{\n\t\tkey: 'init',\n\t\tvalue: function init() {\n\t\t\tif (this.slider.options.infinite && this.slider.options.effect === 'translate') {\n\t\t\t\tif (this.slider.options.centerMode) {\n\t\t\t\t\tthis._infiniteCount = Math.ceil(this.slider.slidesToShow + this.slider.slidesToShow / 2);\n\t\t\t\t} else {\n\t\t\t\t\tthis._infiniteCount = this.slider.slidesToShow;\n\t\t\t\t}\n\n\t\t\t\tvar frontClones = [];\n\t\t\t\tvar slideIndex = 0;\n\t\t\t\tfor (var i = this.slider.state.length; i > this.slider.state.length - 1 - this._infiniteCount; i -= 1) {\n\t\t\t\t\tslideIndex = i - 1;\n\t\t\t\t\tfrontClones.unshift(this._cloneSlide(this.slider.slides[slideIndex], slideIndex - this.slider.state.length));\n\t\t\t\t}\n\n\t\t\t\tvar backClones = [];\n\t\t\t\tfor (var _i = 0; _i < this._infiniteCount + this.slider.state.length; _i += 1) {\n\t\t\t\t\tbackClones.push(this._cloneSlide(this.slider.slides[_i % this.slider.state.length], _i + this.slider.state.length));\n\t\t\t\t}\n\n\t\t\t\tthis.slider.slides = [].concat(frontClones, _toConsumableArray(this.slider.slides), backClones);\n\t\t\t}\n\t\t\treturn this;\n\t\t}\n\t}, {\n\t\tkey: 'apply',\n\t\tvalue: function apply() {}\n\t}, {\n\t\tkey: 'onTransitionEnd',\n\t\tvalue: function onTransitionEnd(e) {\n\t\t\tif (this.slider.options.infinite) {\n\t\t\t\tif (this.slider.state.next >= this.slider.state.length) {\n\t\t\t\t\tthis.slider.state.index = this.slider.state.next = this.slider.state.next - 
this.slider.state.length;\n\t\t\t\t\tthis.slider.transitioner.apply(true);\n\t\t\t\t} else if (this.slider.state.next < 0) {\n\t\t\t\t\tthis.slider.state.index = this.slider.state.next = this.slider.state.length + this.slider.state.next;\n\t\t\t\t\tthis.slider.transitioner.apply(true);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: '_cloneSlide',\n\t\tvalue: function _cloneSlide(slide, index) {\n\t\t\tvar newSlide = slide.cloneNode(true);\n\t\t\tnewSlide.dataset.sliderIndex = index;\n\t\t\tnewSlide.dataset.cloned = true;\n\t\t\tvar ids = newSlide.querySelectorAll('[id]') || [];\n\t\t\tids.forEach(function (id) {\n\t\t\t\tid.setAttribute('id', '');\n\t\t\t});\n\t\t\treturn newSlide;\n\t\t}\n\t}]);\n\n\treturn Infinite;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Infinite);\n\n/***/ }),\n/* 11 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__utils_dom__ = __webpack_require__(12);\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\n\n\nvar Loop = function () {\n\tfunction Loop(slider) {\n\t\t_classCallCheck(this, Loop);\n\n\t\tthis.slider = slider;\n\t}\n\n\t_createClass(Loop, [{\n\t\tkey: \"init\",\n\t\tvalue: function init() {\n\t\t\treturn this;\n\t\t}\n\t}, {\n\t\tkey: 
\"apply\",\n\t\tvalue: function apply() {\n\t\t\tif (this.slider.options.loop) {\n\t\t\t\tif (this.slider.state.next > 0) {\n\t\t\t\t\tif (this.slider.state.next < this.slider.state.length) {\n\t\t\t\t\t\tif (this.slider.state.next > this.slider.state.length - this.slider.slidesToShow && Object(__WEBPACK_IMPORTED_MODULE_0__utils_dom__[\"a\" /* isInViewport */])(this.slider._slides[this.slider.state.length - 1], this.slider.wrapper)) {\n\t\t\t\t\t\t\tthis.slider.state.next = 0;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tthis.slider.state.next = Math.min(Math.max(this.slider.state.next, 0), this.slider.state.length - this.slider.slidesToShow);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tthis.slider.state.next = 0;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif (this.slider.state.next <= 0 - this.slider.slidesToScroll) {\n\t\t\t\t\t\tthis.slider.state.next = this.slider.state.length - this.slider.slidesToShow;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tthis.slider.state.next = 0;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}]);\n\n\treturn Loop;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Loop);\n\n/***/ }),\n/* 12 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"a\", function() { return isInViewport; });\nvar isInViewport = function isInViewport(element, html) {\n\tvar rect = element.getBoundingClientRect();\n\thtml = html || document.documentElement;\n\treturn rect.top >= 0 && rect.left >= 0 && rect.bottom <= (window.innerHeight || html.clientHeight) && rect.right <= (window.innerWidth || html.clientWidth);\n};\n\n/***/ }),\n/* 13 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__templates_navigation__ = __webpack_require__(14);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_1__utils_detect_supportsPassive__ = __webpack_require__(1);\nvar 
_createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\n\n\n\nvar Navigation = function () {\n\tfunction Navigation(slider) {\n\t\t_classCallCheck(this, Navigation);\n\n\t\tthis.slider = slider;\n\n\t\tthis._clickEvents = ['click', 'touch'];\n\t\tthis._supportsPassive = Object(__WEBPACK_IMPORTED_MODULE_1__utils_detect_supportsPassive__[\"a\" /* default */])();\n\n\t\tthis.onPreviousClick = this.onPreviousClick.bind(this);\n\t\tthis.onNextClick = this.onNextClick.bind(this);\n\t\tthis.onKeyUp = this.onKeyUp.bind(this);\n\t}\n\n\t_createClass(Navigation, [{\n\t\tkey: 'init',\n\t\tvalue: function init() {\n\t\t\tthis.node = document.createRange().createContextualFragment(Object(__WEBPACK_IMPORTED_MODULE_0__templates_navigation__[\"a\" /* default */])(this.slider.options.icons));\n\t\t\tthis._ui = {\n\t\t\t\tprevious: this.node.querySelector('.slider-navigation-previous'),\n\t\t\t\tnext: this.node.querySelector('.slider-navigation-next')\n\t\t\t};\n\n\t\t\tthis._unbindEvents();\n\t\t\tthis._bindEvents();\n\n\t\t\tthis.refresh();\n\n\t\t\treturn this;\n\t\t}\n\t}, {\n\t\tkey: 'destroy',\n\t\tvalue: function destroy() {\n\t\t\tthis._unbindEvents();\n\t\t}\n\t}, {\n\t\tkey: '_bindEvents',\n\t\tvalue: function _bindEvents() {\n\t\t\tvar _this = this;\n\n\t\t\tthis.slider.wrapper.addEventListener('keyup', 
this.onKeyUp);\n\t\t\tthis._clickEvents.forEach(function (clickEvent) {\n\t\t\t\t_this._ui.previous.addEventListener(clickEvent, _this.onPreviousClick);\n\t\t\t\t_this._ui.next.addEventListener(clickEvent, _this.onNextClick);\n\t\t\t});\n\t\t}\n\t}, {\n\t\tkey: '_unbindEvents',\n\t\tvalue: function _unbindEvents() {\n\t\t\tvar _this2 = this;\n\n\t\t\tthis.slider.wrapper.removeEventListener('keyup', this.onKeyUp);\n\t\t\tthis._clickEvents.forEach(function (clickEvent) {\n\t\t\t\t_this2._ui.previous.removeEventListener(clickEvent, _this2.onPreviousClick);\n\t\t\t\t_this2._ui.next.removeEventListener(clickEvent, _this2.onNextClick);\n\t\t\t});\n\t\t}\n\t}, {\n\t\tkey: 'onNextClick',\n\t\tvalue: function onNextClick(e) {\n\t\t\tif (!this._supportsPassive) {\n\t\t\t\te.preventDefault();\n\t\t\t}\n\n\t\t\tif (this.slider.options.navigation) {\n\t\t\t\tthis.slider.next();\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'onPreviousClick',\n\t\tvalue: function onPreviousClick(e) {\n\t\t\tif (!this._supportsPassive) {\n\t\t\t\te.preventDefault();\n\t\t\t}\n\n\t\t\tif (this.slider.options.navigation) {\n\t\t\t\tthis.slider.previous();\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'onKeyUp',\n\t\tvalue: function onKeyUp(e) {\n\t\t\tif (this.slider.options.keyNavigation) {\n\t\t\t\tif (e.key === 'ArrowRight' || e.key === 'Right') {\n\t\t\t\t\tthis.slider.next();\n\t\t\t\t} else if (e.key === 'ArrowLeft' || e.key === 'Left') {\n\t\t\t\t\tthis.slider.previous();\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'refresh',\n\t\tvalue: function refresh() {\n\t\t\t// let centerOffset = Math.floor(this.options.slidesToShow / 2);\n\t\t\tif (!this.slider.options.loop && !this.slider.options.infinite) {\n\t\t\t\tif (this.slider.options.navigation && this.slider.state.length > this.slider.slidesToShow) {\n\t\t\t\t\tthis._ui.previous.classList.remove('is-hidden');\n\t\t\t\t\tthis._ui.next.classList.remove('is-hidden');\n\t\t\t\t\tif (this.slider.state.next === 0) 
{\n\t\t\t\t\t\tthis._ui.previous.classList.add('is-hidden');\n\t\t\t\t\t\tthis._ui.next.classList.remove('is-hidden');\n\t\t\t\t\t} else if (this.slider.state.next >= this.slider.state.length - this.slider.slidesToShow && !this.slider.options.centerMode) {\n\t\t\t\t\t\tthis._ui.previous.classList.remove('is-hidden');\n\t\t\t\t\t\tthis._ui.next.classList.add('is-hidden');\n\t\t\t\t\t} else if (this.slider.state.next >= this.slider.state.length - 1 && this.slider.options.centerMode) {\n\t\t\t\t\t\tthis._ui.previous.classList.remove('is-hidden');\n\t\t\t\t\t\tthis._ui.next.classList.add('is-hidden');\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tthis._ui.previous.classList.add('is-hidden');\n\t\t\t\t\tthis._ui.next.classList.add('is-hidden');\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'render',\n\t\tvalue: function render() {\n\t\t\treturn this.node;\n\t\t}\n\t}]);\n\n\treturn Navigation;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Navigation);\n\n/***/ }),\n/* 14 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony default export */ __webpack_exports__[\"a\"] = (function (icons) {\n\treturn \"<div class=\\\"slider-navigation-previous\\\">\" + icons.previous + \"</div>\\n<div class=\\\"slider-navigation-next\\\">\" + icons.next + \"</div>\";\n});\n\n/***/ }),\n/* 15 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__templates_pagination__ = __webpack_require__(16);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_1__templates_pagination_page__ = __webpack_require__(17);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_2__utils_detect_supportsPassive__ = __webpack_require__(1);\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable 
= true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\n\n\n\n\nvar Pagination = function () {\n\tfunction Pagination(slider) {\n\t\t_classCallCheck(this, Pagination);\n\n\t\tthis.slider = slider;\n\n\t\tthis._clickEvents = ['click', 'touch'];\n\t\tthis._supportsPassive = Object(__WEBPACK_IMPORTED_MODULE_2__utils_detect_supportsPassive__[\"a\" /* default */])();\n\n\t\tthis.onPageClick = this.onPageClick.bind(this);\n\t\tthis.onResize = this.onResize.bind(this);\n\t}\n\n\t_createClass(Pagination, [{\n\t\tkey: 'init',\n\t\tvalue: function init() {\n\t\t\tthis._pages = [];\n\t\t\tthis.node = document.createRange().createContextualFragment(Object(__WEBPACK_IMPORTED_MODULE_0__templates_pagination__[\"a\" /* default */])());\n\t\t\tthis._ui = {\n\t\t\t\tcontainer: this.node.firstChild\n\t\t\t};\n\n\t\t\tthis._count = Math.ceil((this.slider.state.length - this.slider.slidesToShow) / this.slider.slidesToScroll);\n\n\t\t\tthis._draw();\n\t\t\tthis.refresh();\n\n\t\t\treturn this;\n\t\t}\n\t}, {\n\t\tkey: 'destroy',\n\t\tvalue: function destroy() {\n\t\t\tthis._unbindEvents();\n\t\t}\n\t}, {\n\t\tkey: '_bindEvents',\n\t\tvalue: function _bindEvents() {\n\t\t\tvar _this = this;\n\n\t\t\twindow.addEventListener('resize', this.onResize);\n\t\t\twindow.addEventListener('orientationchange', this.onResize);\n\n\t\t\tthis._clickEvents.forEach(function (clickEvent) {\n\t\t\t\t_this._pages.forEach(function (page) {\n\t\t\t\t\treturn page.addEventListener(clickEvent, _this.onPageClick);\n\t\t\t\t});\n\t\t\t});\n\t\t}\n\t}, {\n\t\tkey: 
'_unbindEvents',\n\t\tvalue: function _unbindEvents() {\n\t\t\tvar _this2 = this;\n\n\t\t\twindow.removeEventListener('resize', this.onResize);\n\t\t\twindow.removeEventListener('orientationchange', this.onResize);\n\n\t\t\tthis._clickEvents.forEach(function (clickEvent) {\n\t\t\t\t_this2._pages.forEach(function (page) {\n\t\t\t\t\treturn page.removeEventListener(clickEvent, _this2.onPageClick);\n\t\t\t\t});\n\t\t\t});\n\t\t}\n\t}, {\n\t\tkey: '_draw',\n\t\tvalue: function _draw() {\n\t\t\tthis._ui.container.innerHTML = '';\n\t\t\tif (this.slider.options.pagination && this.slider.state.length > this.slider.slidesToShow) {\n\t\t\t\tfor (var i = 0; i <= this._count; i++) {\n\t\t\t\t\tvar newPageNode = document.createRange().createContextualFragment(Object(__WEBPACK_IMPORTED_MODULE_1__templates_pagination_page__[\"a\" /* default */])()).firstChild;\n\t\t\t\t\tnewPageNode.dataset.index = i * this.slider.slidesToScroll;\n\t\t\t\t\tthis._pages.push(newPageNode);\n\t\t\t\t\tthis._ui.container.appendChild(newPageNode);\n\t\t\t\t}\n\t\t\t\tthis._bindEvents();\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'onPageClick',\n\t\tvalue: function onPageClick(e) {\n\t\t\tif (!this._supportsPassive) {\n\t\t\t\te.preventDefault();\n\t\t\t}\n\n\t\t\tthis.slider.state.next = e.currentTarget.dataset.index;\n\t\t\tthis.slider.show();\n\t\t}\n\t}, {\n\t\tkey: 'onResize',\n\t\tvalue: function onResize() {\n\t\t\tthis._draw();\n\t\t}\n\t}, {\n\t\tkey: 'refresh',\n\t\tvalue: function refresh() {\n\t\t\tvar _this3 = this;\n\n\t\t\tvar newCount = void 0;\n\n\t\t\tif (this.slider.options.infinite) {\n\t\t\t\tnewCount = Math.ceil((this.slider.state.length - 1) / this.slider.slidesToScroll);\n\t\t\t} else {\n\t\t\t\tnewCount = Math.ceil((this.slider.state.length - this.slider.slidesToShow) / this.slider.slidesToScroll);\n\t\t\t}\n\t\t\tif (newCount !== this._count) {\n\t\t\t\tthis._count = newCount;\n\t\t\t\tthis._draw();\n\t\t\t}\n\n\t\t\tthis._pages.forEach(function (page) 
{\n\t\t\t\tpage.classList.remove('is-active');\n\t\t\t\tif (parseInt(page.dataset.index, 10) === _this3.slider.state.next % _this3.slider.state.length) {\n\t\t\t\t\tpage.classList.add('is-active');\n\t\t\t\t}\n\t\t\t});\n\t\t}\n\t}, {\n\t\tkey: 'render',\n\t\tvalue: function render() {\n\t\t\treturn this.node;\n\t\t}\n\t}]);\n\n\treturn Pagination;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Pagination);\n\n/***/ }),\n/* 16 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony default export */ __webpack_exports__[\"a\"] = (function () {\n\treturn \"<div class=\\\"slider-pagination\\\"></div>\";\n});\n\n/***/ }),\n/* 17 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony default export */ __webpack_exports__[\"a\"] = (function () {\n  return \"<div class=\\\"slider-page\\\"></div>\";\n});\n\n/***/ }),\n/* 18 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__utils_coordinate__ = __webpack_require__(4);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_1__utils_detect_supportsPassive__ = __webpack_require__(1);\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\n\n\n\nvar Swipe = 
function () {\n\tfunction Swipe(slider) {\n\t\t_classCallCheck(this, Swipe);\n\n\t\tthis.slider = slider;\n\n\t\tthis._supportsPassive = Object(__WEBPACK_IMPORTED_MODULE_1__utils_detect_supportsPassive__[\"a\" /* default */])();\n\n\t\tthis.onStartDrag = this.onStartDrag.bind(this);\n\t\tthis.onMoveDrag = this.onMoveDrag.bind(this);\n\t\tthis.onStopDrag = this.onStopDrag.bind(this);\n\n\t\tthis._init();\n\t}\n\n\t_createClass(Swipe, [{\n\t\tkey: '_init',\n\t\tvalue: function _init() {}\n\t}, {\n\t\tkey: 'bindEvents',\n\t\tvalue: function bindEvents() {\n\t\t\tvar _this = this;\n\n\t\t\tthis.slider.container.addEventListener('dragstart', function (e) {\n\t\t\t\tif (!_this._supportsPassive) {\n\t\t\t\t\te.preventDefault();\n\t\t\t\t}\n\t\t\t});\n\t\t\tthis.slider.container.addEventListener('mousedown', this.onStartDrag);\n\t\t\tthis.slider.container.addEventListener('touchstart', this.onStartDrag);\n\n\t\t\twindow.addEventListener('mousemove', this.onMoveDrag);\n\t\t\twindow.addEventListener('touchmove', this.onMoveDrag);\n\n\t\t\twindow.addEventListener('mouseup', this.onStopDrag);\n\t\t\twindow.addEventListener('touchend', this.onStopDrag);\n\t\t\twindow.addEventListener('touchcancel', this.onStopDrag);\n\t\t}\n\t}, {\n\t\tkey: 'unbindEvents',\n\t\tvalue: function unbindEvents() {\n\t\t\tvar _this2 = this;\n\n\t\t\tthis.slider.container.removeEventListener('dragstart', function (e) {\n\t\t\t\tif (!_this2._supportsPassive) {\n\t\t\t\t\te.preventDefault();\n\t\t\t\t}\n\t\t\t});\n\t\t\tthis.slider.container.removeEventListener('mousedown', this.onStartDrag);\n\t\t\tthis.slider.container.removeEventListener('touchstart', this.onStartDrag);\n\n\t\t\twindow.removeEventListener('mousemove', this.onMoveDrag);\n\t\t\twindow.removeEventListener('touchmove', this.onMoveDrag);\n\n\t\t\twindow.removeEventListener('mouseup', this.onStopDrag);\n\t\t\twindow.removeEventListener('touchend', this.onStopDrag);\n\t\t\twindow.removeEventListener('touchcancel', 
this.onStopDrag);\n\t\t}\n\n\t\t/**\n   * @param {MouseEvent|TouchEvent}\n   */\n\n\t}, {\n\t\tkey: 'onStartDrag',\n\t\tvalue: function onStartDrag(e) {\n\t\t\tif (e.touches) {\n\t\t\t\tif (e.touches.length > 1) {\n\t\t\t\t\treturn;\n\t\t\t\t} else {\n\t\t\t\t\te = e.touches[0];\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tthis._origin = new __WEBPACK_IMPORTED_MODULE_0__utils_coordinate__[\"a\" /* default */](e.screenX, e.screenY);\n\t\t\tthis.width = this.slider.wrapperWidth;\n\t\t\tthis.slider.transitioner.disable();\n\t\t}\n\n\t\t/**\n   * @param {MouseEvent|TouchEvent}\n   */\n\n\t}, {\n\t\tkey: 'onMoveDrag',\n\t\tvalue: function onMoveDrag(e) {\n\t\t\tif (this._origin) {\n\t\t\t\tvar point = e.touches ? e.touches[0] : e;\n\t\t\t\tthis._lastTranslate = new __WEBPACK_IMPORTED_MODULE_0__utils_coordinate__[\"a\" /* default */](point.screenX - this._origin.x, point.screenY - this._origin.y);\n\t\t\t\tif (e.touches) {\n\t\t\t\t\tif (Math.abs(this._lastTranslate.x) > Math.abs(this._lastTranslate.y)) {\n\t\t\t\t\t\tif (!this._supportsPassive) {\n\t\t\t\t\t\t\te.preventDefault();\n\t\t\t\t\t\t}\n\t\t\t\t\t\te.stopPropagation();\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/**\n   * @param {MouseEvent|TouchEvent}\n   */\n\n\t}, {\n\t\tkey: 'onStopDrag',\n\t\tvalue: function onStopDrag(e) {\n\t\t\tif (this._origin && this._lastTranslate) {\n\t\t\t\tif (Math.abs(this._lastTranslate.x) > 0.2 * this.width) {\n\t\t\t\t\tif (this._lastTranslate.x < 0) {\n\t\t\t\t\t\tthis.slider.next();\n\t\t\t\t\t} else {\n\t\t\t\t\t\tthis.slider.previous();\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tthis.slider.show(true);\n\t\t\t\t}\n\t\t\t}\n\t\t\tthis._origin = null;\n\t\t\tthis._lastTranslate = null;\n\t\t}\n\t}]);\n\n\treturn Swipe;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Swipe);\n\n/***/ }),\n/* 19 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__transitions_fade__ = 
__webpack_require__(20);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_1__transitions_translate__ = __webpack_require__(21);\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\n\n\n\nvar Transitioner = function () {\n\tfunction Transitioner(slider) {\n\t\t_classCallCheck(this, Transitioner);\n\n\t\tthis.slider = slider;\n\t\tthis.options = slider.options;\n\n\t\tthis._animating = false;\n\t\tthis._animation = undefined;\n\n\t\tthis._translate = new __WEBPACK_IMPORTED_MODULE_1__transitions_translate__[\"a\" /* default */](this, slider, slider.options);\n\t\tthis._fade = new __WEBPACK_IMPORTED_MODULE_0__transitions_fade__[\"a\" /* default */](this, slider, slider.options);\n\t}\n\n\t_createClass(Transitioner, [{\n\t\tkey: 'init',\n\t\tvalue: function init() {\n\t\t\tthis._fade.init();\n\t\t\tthis._translate.init();\n\t\t\treturn this;\n\t\t}\n\t}, {\n\t\tkey: 'isAnimating',\n\t\tvalue: function isAnimating() {\n\t\t\treturn this._animating;\n\t\t}\n\t}, {\n\t\tkey: 'enable',\n\t\tvalue: function enable() {\n\t\t\tthis._animation && this._animation.enable();\n\t\t}\n\t}, {\n\t\tkey: 'disable',\n\t\tvalue: function disable() {\n\t\t\tthis._animation && this._animation.disable();\n\t\t}\n\t}, {\n\t\tkey: 'apply',\n\t\tvalue: function apply(force, callback) {\n\t\t\t// If we don't force refresh and 
animation in progress then return\n\t\t\tif (this._animating && !force) {\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tswitch (this.options.effect) {\n\t\t\t\tcase 'fade':\n\t\t\t\t\tthis._animation = this._fade;\n\t\t\t\t\tbreak;\n\t\t\t\tcase 'translate':\n\t\t\t\tdefault:\n\t\t\t\t\tthis._animation = this._translate;\n\t\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tthis._animationCallback = callback;\n\n\t\t\tif (force) {\n\t\t\t\tthis._animation && this._animation.disable();\n\t\t\t} else {\n\t\t\t\tthis._animation && this._animation.enable();\n\t\t\t\tthis._animating = true;\n\t\t\t}\n\n\t\t\tthis._animation && this._animation.apply();\n\n\t\t\tif (force) {\n\t\t\t\tthis.end();\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'end',\n\t\tvalue: function end() {\n\t\t\tthis._animating = false;\n\t\t\tthis._animation = undefined;\n\t\t\tthis.slider.state.index = this.slider.state.next;\n\t\t\tif (this._animationCallback) {\n\t\t\t\tthis._animationCallback();\n\t\t\t}\n\t\t}\n\t}]);\n\n\treturn Transitioner;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Transitioner);\n\n/***/ }),\n/* 20 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__utils_css__ = __webpack_require__(0);\nvar _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; };\n\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) 
defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\n\n\nvar Fade = function () {\n\tfunction Fade(transitioner, slider) {\n\t\tvar options = arguments.length > 2 && arguments[2] !== undefined ? arguments[2] : {};\n\n\t\t_classCallCheck(this, Fade);\n\n\t\tthis.transitioner = transitioner;\n\t\tthis.slider = slider;\n\t\tthis.options = _extends({}, options);\n\t}\n\n\t_createClass(Fade, [{\n\t\tkey: 'init',\n\t\tvalue: function init() {\n\t\t\tvar _this = this;\n\n\t\t\tif (this.options.effect === 'fade') {\n\t\t\t\tthis.slider.slides.forEach(function (slide, index) {\n\t\t\t\t\tObject(__WEBPACK_IMPORTED_MODULE_0__utils_css__[\"a\" /* css */])(slide, {\n\t\t\t\t\t\tposition: 'absolute',\n\t\t\t\t\t\tleft: 0,\n\t\t\t\t\t\ttop: 0,\n\t\t\t\t\t\tbottom: 0,\n\t\t\t\t\t\t'z-index': slide.dataset.sliderIndex == _this.slider.state.index ? 0 : -2,\n\t\t\t\t\t\topacity: slide.dataset.sliderIndex == _this.slider.state.index ? 
1 : 0\n\t\t\t\t\t});\n\t\t\t\t});\n\t\t\t}\n\t\t\treturn this;\n\t\t}\n\t}, {\n\t\tkey: 'enable',\n\t\tvalue: function enable() {\n\t\t\tvar _this2 = this;\n\n\t\t\tthis._oldSlide = this.slider.slides.filter(function (slide) {\n\t\t\t\treturn slide.dataset.sliderIndex == _this2.slider.state.index;\n\t\t\t})[0];\n\t\t\tthis._newSlide = this.slider.slides.filter(function (slide) {\n\t\t\t\treturn slide.dataset.sliderIndex == _this2.slider.state.next;\n\t\t\t})[0];\n\t\t\tif (this._newSlide) {\n\t\t\t\tthis._newSlide.addEventListener('transitionend', this.onTransitionEnd.bind(this));\n\t\t\t\tthis._newSlide.style.transition = this.options.duration + 'ms ' + this.options.timing;\n\t\t\t\tif (this._oldSlide) {\n\t\t\t\t\tthis._oldSlide.addEventListener('transitionend', this.onTransitionEnd.bind(this));\n\t\t\t\t\tthis._oldSlide.style.transition = this.options.duration + 'ms ' + this.options.timing;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'disable',\n\t\tvalue: function disable() {\n\t\t\tvar _this3 = this;\n\n\t\t\tthis._oldSlide = this.slider.slides.filter(function (slide) {\n\t\t\t\treturn slide.dataset.sliderIndex == _this3.slider.state.index;\n\t\t\t})[0];\n\t\t\tthis._newSlide = this.slider.slides.filter(function (slide) {\n\t\t\t\treturn slide.dataset.sliderIndex == _this3.slider.state.next;\n\t\t\t})[0];\n\t\t\tif (this._newSlide) {\n\t\t\t\tthis._newSlide.removeEventListener('transitionend', this.onTransitionEnd.bind(this));\n\t\t\t\tthis._newSlide.style.transition = 'none';\n\t\t\t\tif (this._oldSlide) {\n\t\t\t\t\tthis._oldSlide.removeEventListener('transitionend', this.onTransitionEnd.bind(this));\n\t\t\t\t\tthis._oldSlide.style.transition = 'none';\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'apply',\n\t\tvalue: function apply(force) {\n\t\t\tvar _this4 = this;\n\n\t\t\tthis._oldSlide = this.slider.slides.filter(function (slide) {\n\t\t\t\treturn slide.dataset.sliderIndex == _this4.slider.state.index;\n\t\t\t})[0];\n\t\t\tthis._newSlide = 
this.slider.slides.filter(function (slide) {\n\t\t\t\treturn slide.dataset.sliderIndex == _this4.slider.state.next;\n\t\t\t})[0];\n\n\t\t\tif (this._oldSlide && this._newSlide) {\n\t\t\t\tObject(__WEBPACK_IMPORTED_MODULE_0__utils_css__[\"a\" /* css */])(this._oldSlide, {\n\t\t\t\t\topacity: 0\n\t\t\t\t});\n\t\t\t\tObject(__WEBPACK_IMPORTED_MODULE_0__utils_css__[\"a\" /* css */])(this._newSlide, {\n\t\t\t\t\topacity: 1,\n\t\t\t\t\t'z-index': force ? 0 : -1\n\t\t\t\t});\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'onTransitionEnd',\n\t\tvalue: function onTransitionEnd(e) {\n\t\t\tif (this.options.effect === 'fade') {\n\t\t\t\tif (this.transitioner.isAnimating() && e.target == this._newSlide) {\n\t\t\t\t\tif (this._newSlide) {\n\t\t\t\t\t\tObject(__WEBPACK_IMPORTED_MODULE_0__utils_css__[\"a\" /* css */])(this._newSlide, {\n\t\t\t\t\t\t\t'z-index': 0\n\t\t\t\t\t\t});\n\t\t\t\t\t\tthis._newSlide.removeEventListener('transitionend', this.onTransitionEnd.bind(this));\n\t\t\t\t\t}\n\t\t\t\t\tif (this._oldSlide) {\n\t\t\t\t\t\tObject(__WEBPACK_IMPORTED_MODULE_0__utils_css__[\"a\" /* css */])(this._oldSlide, {\n\t\t\t\t\t\t\t'z-index': -2\n\t\t\t\t\t\t});\n\t\t\t\t\t\tthis._oldSlide.removeEventListener('transitionend', this.onTransitionEnd.bind(this));\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tthis.transitioner.end();\n\t\t\t}\n\t\t}\n\t}]);\n\n\treturn Fade;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Fade);\n\n/***/ }),\n/* 21 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__utils_coordinate__ = __webpack_require__(4);\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_1__utils_css__ = __webpack_require__(0);\nvar _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; };\n\nvar 
_createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\n\n\n\nvar Translate = function () {\n\tfunction Translate(transitioner, slider) {\n\t\tvar options = arguments.length > 2 && arguments[2] !== undefined ? arguments[2] : {};\n\n\t\t_classCallCheck(this, Translate);\n\n\t\tthis.transitioner = transitioner;\n\t\tthis.slider = slider;\n\t\tthis.options = _extends({}, options);\n\n\t\tthis.onTransitionEnd = this.onTransitionEnd.bind(this);\n\t}\n\n\t_createClass(Translate, [{\n\t\tkey: 'init',\n\t\tvalue: function init() {\n\t\t\tthis._position = new __WEBPACK_IMPORTED_MODULE_0__utils_coordinate__[\"a\" /* default */](this.slider.container.offsetLeft, this.slider.container.offsetTop);\n\t\t\tthis._bindEvents();\n\t\t\treturn this;\n\t\t}\n\t}, {\n\t\tkey: 'destroy',\n\t\tvalue: function destroy() {\n\t\t\tthis._unbindEvents();\n\t\t}\n\t}, {\n\t\tkey: '_bindEvents',\n\t\tvalue: function _bindEvents() {\n\t\t\tthis.slider.container.addEventListener('transitionend', this.onTransitionEnd);\n\t\t}\n\t}, {\n\t\tkey: '_unbindEvents',\n\t\tvalue: function _unbindEvents() {\n\t\t\tthis.slider.container.removeEventListener('transitionend', this.onTransitionEnd);\n\t\t}\n\t}, {\n\t\tkey: 'enable',\n\t\tvalue: function enable() {\n\t\t\tthis.slider.container.style.transition = this.options.duration + 'ms ' + 
this.options.timing;\n\t\t}\n\t}, {\n\t\tkey: 'disable',\n\t\tvalue: function disable() {\n\t\t\tthis.slider.container.style.transition = 'none';\n\t\t}\n\t}, {\n\t\tkey: 'apply',\n\t\tvalue: function apply() {\n\t\t\tvar _this = this;\n\n\t\t\tvar maxOffset = void 0;\n\t\t\tif (this.options.effect === 'translate') {\n\t\t\t\tvar slide = this.slider.slides.filter(function (slide) {\n\t\t\t\t\treturn slide.dataset.sliderIndex == _this.slider.state.next;\n\t\t\t\t})[0];\n\t\t\t\tvar slideOffset = new __WEBPACK_IMPORTED_MODULE_0__utils_coordinate__[\"a\" /* default */](slide.offsetLeft, slide.offsetTop);\n\t\t\t\tif (this.options.centerMode) {\n\t\t\t\t\tmaxOffset = new __WEBPACK_IMPORTED_MODULE_0__utils_coordinate__[\"a\" /* default */](Math.round(Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"e\" /* width */])(this.slider.container)), Math.round(Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"b\" /* height */])(this.slider.container)));\n\t\t\t\t} else {\n\t\t\t\t\tmaxOffset = new __WEBPACK_IMPORTED_MODULE_0__utils_coordinate__[\"a\" /* default */](Math.round(Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"e\" /* width */])(this.slider.container) - Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"e\" /* width */])(this.slider.wrapper)), Math.round(Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"b\" /* height */])(this.slider.container) - Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"b\" /* height */])(this.slider.wrapper)));\n\t\t\t\t}\n\t\t\t\tvar nextOffset = new __WEBPACK_IMPORTED_MODULE_0__utils_coordinate__[\"a\" /* default */](Math.min(Math.max(slideOffset.x * -1, maxOffset.x * -1), 0), Math.min(Math.max(slideOffset.y * -1, maxOffset.y * -1), 0));\n\t\t\t\tif (this.options.loop) {\n\t\t\t\t\tif (!this.options.vertical && Math.abs(this._position.x) > maxOffset.x) {\n\t\t\t\t\t\tnextOffset.x = 0;\n\t\t\t\t\t\tthis.slider.state.next = 0;\n\t\t\t\t\t} else if (this.options.vertical && Math.abs(this._position.y) > maxOffset.y) 
{\n\t\t\t\t\t\tnextOffset.y = 0;\n\t\t\t\t\t\tthis.slider.state.next = 0;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tthis._position.x = nextOffset.x;\n\t\t\t\tthis._position.y = nextOffset.y;\n\t\t\t\tif (this.options.centerMode) {\n\t\t\t\t\tthis._position.x = this._position.x + this.slider.wrapperWidth / 2 - Object(__WEBPACK_IMPORTED_MODULE_1__utils_css__[\"e\" /* width */])(slide) / 2;\n\t\t\t\t}\n\n\t\t\t\tif (this.slider.direction === 'rtl') {\n\t\t\t\t\tthis._position.x = -this._position.x;\n\t\t\t\t\tthis._position.y = -this._position.y;\n\t\t\t\t}\n\t\t\t\tthis.slider.container.style.transform = 'translate3d(' + this._position.x + 'px, ' + this._position.y + 'px, 0)';\n\n\t\t\t\t/**\n     * update the index with the nextIndex only if\n     * the offset of the nextIndex is in the range of the maxOffset\n     */\n\t\t\t\tif (slideOffset.x > maxOffset.x) {\n\t\t\t\t\tthis.slider.transitioner.end();\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}, {\n\t\tkey: 'onTransitionEnd',\n\t\tvalue: function onTransitionEnd(e) {\n\t\t\tif (this.options.effect === 'translate') {\n\n\t\t\t\tif (this.transitioner.isAnimating() && e.target == this.slider.container) {\n\t\t\t\t\tif (this.options.infinite) {\n\t\t\t\t\t\tthis.slider._infinite.onTransitionEnd(e);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tthis.transitioner.end();\n\t\t\t}\n\t\t}\n\t}]);\n\n\treturn Translate;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (Translate);\n\n/***/ }),\n/* 22 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\nvar defaultOptions = {\n  initialSlide: 0,\n  slidesToScroll: 1,\n  slidesToShow: 1,\n\n  navigation: true,\n  navigationKeys: true,\n  navigationSwipe: true,\n\n  pagination: true,\n\n  loop: false,\n  infinite: false,\n\n  effect: 'translate',\n  duration: 300,\n  timing: 'ease',\n\n  autoplay: false,\n  autoplaySpeed: 3000,\n  pauseOnHover: true,\n  breakpoints: [{\n    changePoint: 480,\n    slidesToShow: 1,\n    slidesToScroll: 1\n  }, {\n    
changePoint: 640,\n    slidesToShow: 2,\n    slidesToScroll: 2\n  }, {\n    changePoint: 768,\n    slidesToShow: 3,\n    slidesToScroll: 3\n  }],\n\n  onReady: null,\n  icons: {\n    'previous': '<svg viewBox=\"0 0 50 80\" xml:space=\"preserve\">\\n      <polyline fill=\"currentColor\" stroke-width=\".5em\" stroke-linecap=\"round\" stroke-linejoin=\"round\" points=\"45.63,75.8 0.375,38.087 45.63,0.375 \"/>\\n    </svg>',\n    'next': '<svg viewBox=\"0 0 50 80\" xml:space=\"preserve\">\\n      <polyline fill=\"currentColor\" stroke-width=\".5em\" stroke-linecap=\"round\" stroke-linejoin=\"round\" points=\"0.375,0.375 45.63,38.087 0.375,75.8 \"/>\\n    </svg>'\n  }\n};\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (defaultOptions);\n\n/***/ }),\n/* 23 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony default export */ __webpack_exports__[\"a\"] = (function (id) {\n  return \"<div id=\\\"\" + id + \"\\\" class=\\\"slider\\\" tabindex=\\\"0\\\">\\n    <div class=\\\"slider-container\\\"></div>\\n  </div>\";\n});\n\n/***/ }),\n/* 24 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n/* harmony default export */ __webpack_exports__[\"a\"] = (function () {\n  return \"<div class=\\\"slider-item\\\"></div>\";\n});\n\n/***/ })\n/******/ ])[\"default\"];\n});"
  },
  {
    "path": "docs/static/js/bulma-slider.js",
    "content": "(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"bulmaSlider\"] = factory();\n\telse\n\t\troot[\"bulmaSlider\"] = factory();\n})(typeof self !== 'undefined' ? self : this, function() {\nreturn /******/ (function(modules) { // webpackBootstrap\n/******/ \t// The module cache\n/******/ \tvar installedModules = {};\n/******/\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(installedModules[moduleId]) {\n/******/ \t\t\treturn installedModules[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = installedModules[moduleId] = {\n/******/ \t\t\ti: moduleId,\n/******/ \t\t\tl: false,\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/\n/******/ \t\t// Execute the module function\n/******/ \t\tmodules[moduleId].call(module.exports, module, module.exports, __webpack_require__);\n/******/\n/******/ \t\t// Flag the module as loaded\n/******/ \t\tmodule.l = true;\n/******/\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/\n/******/\n/******/ \t// expose the modules object (__webpack_modules__)\n/******/ \t__webpack_require__.m = modules;\n/******/\n/******/ \t// expose the module cache\n/******/ \t__webpack_require__.c = installedModules;\n/******/\n/******/ \t// define getter function for harmony exports\n/******/ \t__webpack_require__.d = function(exports, name, getter) {\n/******/ \t\tif(!__webpack_require__.o(exports, name)) {\n/******/ \t\t\tObject.defineProperty(exports, name, {\n/******/ \t\t\t\tconfigurable: false,\n/******/ \t\t\t\tenumerable: true,\n/******/ 
\t\t\t\tget: getter\n/******/ \t\t\t});\n/******/ \t\t}\n/******/ \t};\n/******/\n/******/ \t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t__webpack_require__.n = function(module) {\n/******/ \t\tvar getter = module && module.__esModule ?\n/******/ \t\t\tfunction getDefault() { return module['default']; } :\n/******/ \t\t\tfunction getModuleExports() { return module; };\n/******/ \t\t__webpack_require__.d(getter, 'a', getter);\n/******/ \t\treturn getter;\n/******/ \t};\n/******/\n/******/ \t// Object.prototype.hasOwnProperty.call\n/******/ \t__webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); };\n/******/\n/******/ \t// __webpack_public_path__\n/******/ \t__webpack_require__.p = \"\";\n/******/\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(__webpack_require__.s = 0);\n/******/ })\n/************************************************************************/\n/******/ ([\n/* 0 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\nObject.defineProperty(__webpack_exports__, \"__esModule\", { value: true });\n/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, \"isString\", function() { return isString; });\n/* harmony import */ var __WEBPACK_IMPORTED_MODULE_0__events__ = __webpack_require__(1);\nvar _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; };\n\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, 
descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nvar _typeof = typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\" ? function (obj) { return typeof obj; } : function (obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; };\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _possibleConstructorReturn(self, call) { if (!self) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return call && (typeof call === \"object\" || typeof call === \"function\") ? call : self; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function, not \" + typeof superClass); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, enumerable: false, writable: true, configurable: true } }); if (superClass) Object.setPrototypeOf ? Object.setPrototypeOf(subClass, superClass) : subClass.__proto__ = superClass; }\n\n\n\nvar isString = function isString(unknown) {\n  return typeof unknown === 'string' || !!unknown && (typeof unknown === 'undefined' ? 'undefined' : _typeof(unknown)) === 'object' && Object.prototype.toString.call(unknown) === '[object String]';\n};\n\nvar bulmaSlider = function (_EventEmitter) {\n  _inherits(bulmaSlider, _EventEmitter);\n\n  function bulmaSlider(selector) {\n    var options = arguments.length > 1 && arguments[1] !== undefined ? 
arguments[1] : {};\n\n    _classCallCheck(this, bulmaSlider);\n\n    var _this = _possibleConstructorReturn(this, (bulmaSlider.__proto__ || Object.getPrototypeOf(bulmaSlider)).call(this));\n\n    _this.element = typeof selector === 'string' ? document.querySelector(selector) : selector;\n    // An invalid selector or non-DOM node has been provided.\n    if (!_this.element) {\n      throw new Error('An invalid selector or non-DOM node has been provided.');\n    }\n\n    _this._clickEvents = ['click'];\n    /// Set default options and merge with instance defined\n    _this.options = _extends({}, options);\n\n    _this.onSliderInput = _this.onSliderInput.bind(_this);\n\n    _this.init();\n    return _this;\n  }\n\n  /**\n   * Initiate all DOM element containing selector\n   * @method\n   * @return {Array} Array of all slider instances\n   */\n\n\n  _createClass(bulmaSlider, [{\n    key: 'init',\n\n\n    /**\n     * Initiate plugin\n     * @method init\n     * @return {void}\n     */\n    value: function init() {\n      this._id = 'bulmaSlider' + new Date().getTime() + Math.floor(Math.random() * Math.floor(9999));\n      this.output = this._findOutputForSlider();\n\n      this._bindEvents();\n\n      if (this.output) {\n        if (this.element.classList.contains('has-output-tooltip')) {\n          // Get new output position\n          var newPosition = this._getSliderOutputPosition();\n\n          // Set output position\n          this.output.style['left'] = newPosition.position;\n        }\n      }\n\n      this.emit('bulmaslider:ready', this.element.value);\n    }\n  }, {\n    key: '_findOutputForSlider',\n    value: function _findOutputForSlider() {\n      var _this2 = this;\n\n      var result = null;\n      var outputs = document.getElementsByTagName('output') || [];\n\n      Array.from(outputs).forEach(function (output) {\n        if (output.htmlFor == _this2.element.getAttribute('id')) {\n          result = output;\n          return true;\n        }\n      
});\n      return result;\n    }\n  }, {\n    key: '_getSliderOutputPosition',\n    value: function _getSliderOutputPosition() {\n      // Update output position\n      var newPlace, minValue;\n\n      var style = window.getComputedStyle(this.element, null);\n      // Measure width of range input\n      var sliderWidth = parseInt(style.getPropertyValue('width'), 10);\n\n      // Figure out placement percentage between left and right of input\n      if (!this.element.getAttribute('min')) {\n        minValue = 0;\n      } else {\n        minValue = this.element.getAttribute('min');\n      }\n      var newPoint = (this.element.value - minValue) / (this.element.getAttribute('max') - minValue);\n\n      // Prevent bubble from going beyond left or right (unsupported browsers)\n      if (newPoint < 0) {\n        newPlace = 0;\n      } else if (newPoint > 1) {\n        newPlace = sliderWidth;\n      } else {\n        newPlace = sliderWidth * newPoint;\n      }\n\n      return {\n        'position': newPlace + 'px'\n      };\n    }\n\n    /**\n     * Bind all events\n     * @method _bindEvents\n     * @return {void}\n     */\n\n  }, {\n    key: '_bindEvents',\n    value: function _bindEvents() {\n      if (this.output) {\n        // Add event listener to update output when slider value change\n        this.element.addEventListener('input', this.onSliderInput, false);\n      }\n    }\n  }, {\n    key: 'onSliderInput',\n    value: function onSliderInput(e) {\n      e.preventDefault();\n\n      if (this.element.classList.contains('has-output-tooltip')) {\n        // Get new output position\n        var newPosition = this._getSliderOutputPosition();\n\n        // Set output position\n        this.output.style['left'] = newPosition.position;\n      }\n\n      // Check for prefix and postfix\n      var prefix = this.output.hasAttribute('data-prefix') ? this.output.getAttribute('data-prefix') : '';\n      var postfix = this.output.hasAttribute('data-postfix') ? 
this.output.getAttribute('data-postfix') : '';\n\n      // Update output with slider value\n      this.output.value = prefix + this.element.value + postfix;\n\n      this.emit('bulmaslider:ready', this.element.value);\n    }\n  }], [{\n    key: 'attach',\n    value: function attach() {\n      var _this3 = this;\n\n      var selector = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : 'input[type=\"range\"].slider';\n      var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {};\n\n      var instances = new Array();\n\n      var elements = isString(selector) ? document.querySelectorAll(selector) : Array.isArray(selector) ? selector : [selector];\n      elements.forEach(function (element) {\n        if (typeof element[_this3.constructor.name] === 'undefined') {\n          var instance = new bulmaSlider(element, options);\n          element[_this3.constructor.name] = instance;\n          instances.push(instance);\n        } else {\n          instances.push(element[_this3.constructor.name]);\n        }\n      });\n\n      return instances;\n    }\n  }]);\n\n  return bulmaSlider;\n}(__WEBPACK_IMPORTED_MODULE_0__events__[\"a\" /* default */]);\n\n/* harmony default export */ __webpack_exports__[\"default\"] = (bulmaSlider);\n\n/***/ }),\n/* 1 */\n/***/ (function(module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nfunction 
_classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nvar EventEmitter = function () {\n  function EventEmitter() {\n    var listeners = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : [];\n\n    _classCallCheck(this, EventEmitter);\n\n    this._listeners = new Map(listeners);\n    this._middlewares = new Map();\n  }\n\n  _createClass(EventEmitter, [{\n    key: \"listenerCount\",\n    value: function listenerCount(eventName) {\n      if (!this._listeners.has(eventName)) {\n        return 0;\n      }\n\n      var eventListeners = this._listeners.get(eventName);\n      return eventListeners.length;\n    }\n  }, {\n    key: \"removeListeners\",\n    value: function removeListeners() {\n      var _this = this;\n\n      var eventName = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : null;\n      var middleware = arguments.length > 1 && arguments[1] !== undefined ? 
arguments[1] : false;\n\n      if (eventName !== null) {\n        if (Array.isArray(eventName)) {\n          eventName.forEach(function (e) {\n            return _this.removeListeners(e, middleware);\n          });\n        } else {\n          this._listeners.delete(eventName);\n\n          if (middleware) {\n            this.removeMiddleware(eventName);\n          }\n        }\n      } else {\n        this._listeners = new Map();\n      }\n    }\n  }, {\n    key: \"middleware\",\n    value: function middleware(eventName, fn) {\n      var _this2 = this;\n\n      if (Array.isArray(eventName)) {\n        eventName.forEach(function (e) {\n          return _this2.middleware(e, fn);\n        });\n      } else {\n        if (!Array.isArray(this._middlewares.get(eventName))) {\n          this._middlewares.set(eventName, []);\n        }\n\n        this._middlewares.get(eventName).push(fn);\n      }\n    }\n  }, {\n    key: \"removeMiddleware\",\n    value: function removeMiddleware() {\n      var _this3 = this;\n\n      var eventName = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : null;\n\n      if (eventName !== null) {\n        if (Array.isArray(eventName)) {\n          eventName.forEach(function (e) {\n            return _this3.removeMiddleware(e);\n          });\n        } else {\n          this._middlewares.delete(eventName);\n        }\n      } else {\n        this._middlewares = new Map();\n      }\n    }\n  }, {\n    key: \"on\",\n    value: function on(name, callback) {\n      var _this4 = this;\n\n      var once = arguments.length > 2 && arguments[2] !== undefined ? 
arguments[2] : false;\n\n      if (Array.isArray(name)) {\n        name.forEach(function (e) {\n          return _this4.on(e, callback);\n        });\n      } else {\n        name = name.toString();\n        var split = name.split(/,|, | /);\n\n        if (split.length > 1) {\n          split.forEach(function (e) {\n            return _this4.on(e, callback);\n          });\n        } else {\n          if (!Array.isArray(this._listeners.get(name))) {\n            this._listeners.set(name, []);\n          }\n\n          this._listeners.get(name).push({ once: once, callback: callback });\n        }\n      }\n    }\n  }, {\n    key: \"once\",\n    value: function once(name, callback) {\n      this.on(name, callback, true);\n    }\n  }, {\n    key: \"emit\",\n    value: function emit(name, data) {\n      var _this5 = this;\n\n      var silent = arguments.length > 2 && arguments[2] !== undefined ? arguments[2] : false;\n\n      name = name.toString();\n      var listeners = this._listeners.get(name);\n      var middlewares = null;\n      var doneCount = 0;\n      var execute = silent;\n\n      if (Array.isArray(listeners)) {\n        listeners.forEach(function (listener, index) {\n          // Start Middleware checks unless we're doing a silent emit\n          if (!silent) {\n            middlewares = _this5._middlewares.get(name);\n            // Check and execute Middleware\n            if (Array.isArray(middlewares)) {\n              middlewares.forEach(function (middleware) {\n                middleware(data, function () {\n                  var newData = arguments.length > 0 && arguments[0] !== undefined ? 
arguments[0] : null;\n\n                  if (newData !== null) {\n                    data = newData;\n                  }\n                  doneCount++;\n                }, name);\n              });\n\n              if (doneCount >= middlewares.length) {\n                execute = true;\n              }\n            } else {\n              execute = true;\n            }\n          }\n\n          // If Middleware checks have been passed, execute\n          if (execute) {\n            if (listener.once) {\n              listeners[index] = null;\n            }\n            listener.callback(data);\n          }\n        });\n\n        // Dirty way of removing used Events\n        while (listeners.indexOf(null) !== -1) {\n          listeners.splice(listeners.indexOf(null), 1);\n        }\n      }\n    }\n  }]);\n\n  return EventEmitter;\n}();\n\n/* harmony default export */ __webpack_exports__[\"a\"] = (EventEmitter);\n\n/***/ })\n/******/ ])[\"default\"];\n});"
  },
  {
    "path": "docs/static/js/index.js",
    "content": "window.HELP_IMPROVE_VIDEOJS = false;\n\n\n$(document).ready(function() {\n    // Check for click events on the navbar burger icon\n\n    var options = {\n\t\t\tslidesToScroll: 1,\n\t\t\tslidesToShow: 1,\n\t\t\tloop: true,\n\t\t\tinfinite: true,\n\t\t\tautoplay: true,\n\t\t\tautoplaySpeed: 5000,\n    }\n\n\t\t// Initialize all div with carousel class\n    var carousels = bulmaCarousel.attach('.carousel', options);\n\t\n    bulmaSlider.attach();\n\n})\n"
  },
  {
    "path": "inference.py",
    "content": "import numpy as np\nimport cv2, os, sys, subprocess, platform, torch\nfrom tqdm import tqdm\nfrom PIL import Image\nfrom scipy.io import loadmat\n\nsys.path.insert(0, 'third_part')\nsys.path.insert(0, 'third_part/GPEN')\nsys.path.insert(0, 'third_part/GFPGAN')\n\n# 3dmm extraction\nfrom third_part.face3d.util.preprocess import align_img\nfrom third_part.face3d.util.load_mats import load_lm3d\nfrom third_part.face3d.extract_kp_videos import KeypointExtractor\n# face enhancement\nfrom third_part.GPEN.gpen_face_enhancer import FaceEnhancement\nfrom third_part.GFPGAN.gfpgan import GFPGANer\n# expression control\nfrom third_part.ganimation_replicate.model.ganimation import GANimationModel\n\nfrom utils import audio\nfrom utils.ffhq_preprocess import Croper\nfrom utils.alignment_stit import crop_faces, calc_alignment_coefficients, paste_image\nfrom utils.inference_utils import Laplacian_Pyramid_Blending_with_mask, face_detect, load_model, options, split_coeff, \\\n                                  trans_image, transform_semantic, find_crop_norm_ratio, load_face3d_net, exp_aus_dict\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nargs = options()\n\ndef main():    \n    device = 'cuda' if torch.cuda.is_available() else 'cpu'\n    print('[Info] Using {} for inference.'.format(device))\n    os.makedirs(os.path.join('temp', args.tmp_dir), exist_ok=True)\n\n    enhancer = FaceEnhancement(base_dir='checkpoints', size=512, model='GPEN-BFR-512', use_sr=False, \\\n                               sr_model='rrdb_realesrnet_psnr', channel_multiplier=2, narrow=1, device=device)\n    restorer = GFPGANer(model_path='checkpoints/GFPGANv1.3.pth', upscale=1, arch='clean', \\\n                        channel_multiplier=2, bg_upsampler=None)\n\n    base_name = args.face.split('/')[-1]\n    if os.path.isfile(args.face) and args.face.split('.')[1] in ['jpg', 'png', 'jpeg']:\n        args.static = True\n    if not os.path.isfile(args.face):\n        raise 
ValueError('--face argument must be a valid path to video/image file')\n    elif args.face.split('.')[1] in ['jpg', 'png', 'jpeg']:\n        full_frames = [cv2.imread(args.face)]\n        fps = args.fps\n    else:\n        video_stream = cv2.VideoCapture(args.face)\n        fps = video_stream.get(cv2.CAP_PROP_FPS)\n\n        full_frames = []\n        while True:\n            still_reading, frame = video_stream.read()\n            if not still_reading:\n                video_stream.release()\n                break\n            y1, y2, x1, x2 = args.crop\n            if x2 == -1: x2 = frame.shape[1]\n            if y2 == -1: y2 = frame.shape[0]\n            frame = frame[y1:y2, x1:x2]\n            full_frames.append(frame)\n\n    print (\"[Step 0] Number of frames available for inference: \"+str(len(full_frames)))\n    # face detection & cropping, cropping the first frame as the style of FFHQ\n    croper = Croper('checkpoints/shape_predictor_68_face_landmarks.dat')\n    full_frames_RGB = [cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) for frame in full_frames]\n    full_frames_RGB, crop, quad = croper.crop(full_frames_RGB, xsize=512)\n\n    clx, cly, crx, cry = crop\n    lx, ly, rx, ry = quad\n    lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry)\n    oy1, oy2, ox1, ox2 = cly+ly, min(cly+ry, full_frames[0].shape[0]), clx+lx, min(clx+rx, full_frames[0].shape[1])\n    # original_size = (ox2 - ox1, oy2 - oy1)\n    frames_pil = [Image.fromarray(cv2.resize(frame,(256,256))) for frame in full_frames_RGB]\n\n    # get the landmark according to the detected face.\n    if not os.path.isfile('temp/'+base_name+'_landmarks.txt') or args.re_preprocess:\n        print('[Step 1] Landmarks Extraction in Video.')\n        kp_extractor = KeypointExtractor()\n        lm = kp_extractor.extract_keypoint(frames_pil, './temp/'+base_name+'_landmarks.txt')\n    else:\n        print('[Step 1] Using saved landmarks.')\n        lm = np.loadtxt('temp/'+base_name+'_landmarks.txt').astype(np.float32)\n  
      lm = lm.reshape([len(full_frames), -1, 2])\n       \n    if not os.path.isfile('temp/'+base_name+'_coeffs.npy') or args.exp_img is not None or args.re_preprocess:\n        net_recon = load_face3d_net(args.face3d_net_path, device)\n        lm3d_std = load_lm3d('checkpoints/BFM')\n\n        video_coeffs = []\n        for idx in tqdm(range(len(frames_pil)), desc=\"[Step 2] 3DMM Extraction In Video:\"):\n            frame = frames_pil[idx]\n            W, H = frame.size\n            lm_idx = lm[idx].reshape([-1, 2])\n            if np.mean(lm_idx) == -1:\n                lm_idx = (lm3d_std[:, :2]+1) / 2.\n                lm_idx = np.concatenate([lm_idx[:, :1] * W, lm_idx[:, 1:2] * H], 1)\n            else:\n                lm_idx[:, -1] = H - 1 - lm_idx[:, -1]\n\n            trans_params, im_idx, lm_idx, _ = align_img(frame, lm_idx, lm3d_std)\n            trans_params = np.array([float(item) for item in np.hsplit(trans_params, 5)]).astype(np.float32)\n            im_idx_tensor = torch.tensor(np.array(im_idx)/255., dtype=torch.float32).permute(2, 0, 1).to(device).unsqueeze(0) \n            with torch.no_grad():\n                coeffs = split_coeff(net_recon(im_idx_tensor))\n\n            pred_coeff = {key:coeffs[key].cpu().numpy() for key in coeffs}\n            pred_coeff = np.concatenate([pred_coeff['id'], pred_coeff['exp'], pred_coeff['tex'], pred_coeff['angle'],\\\n                                         pred_coeff['gamma'], pred_coeff['trans'], trans_params[None]], 1)\n            video_coeffs.append(pred_coeff)\n        semantic_npy = np.array(video_coeffs)[:,0]\n        np.save('temp/'+base_name+'_coeffs.npy', semantic_npy)\n    else:\n        print('[Step 2] Using saved coeffs.')\n        semantic_npy = np.load('temp/'+base_name+'_coeffs.npy').astype(np.float32)\n\n    # generate the 3dmm coeff from a single image\n    if args.exp_img is not None and ('.png' in args.exp_img or '.jpg' in args.exp_img):\n        print('extract the exp from',args.exp_img)\n 
       exp_pil = Image.open(args.exp_img).convert('RGB')\n        lm3d_std = load_lm3d('third_part/face3d/BFM')\n        \n        W, H = exp_pil.size\n        kp_extractor = KeypointExtractor()\n        lm_exp = kp_extractor.extract_keypoint([exp_pil], 'temp/'+base_name+'_temp.txt')[0]\n        if np.mean(lm_exp) == -1:\n            lm_exp = (lm3d_std[:, :2] + 1) / 2.\n            lm_exp = np.concatenate(\n                [lm_exp[:, :1] * W, lm_exp[:, 1:2] * H], 1)\n        else:\n            lm_exp[:, -1] = H - 1 - lm_exp[:, -1]\n\n        trans_params, im_exp, lm_exp, _ = align_img(exp_pil, lm_exp, lm3d_std)\n        trans_params = np.array([float(item) for item in np.hsplit(trans_params, 5)]).astype(np.float32)\n        im_exp_tensor = torch.tensor(np.array(im_exp)/255., dtype=torch.float32).permute(2, 0, 1).to(device).unsqueeze(0)\n        with torch.no_grad():\n            expression = split_coeff(net_recon(im_exp_tensor))['exp'][0]\n        del net_recon\n    elif args.exp_img == 'smile':\n        expression = torch.tensor(loadmat('checkpoints/expression.mat')['expression_mouth'])[0]\n    else:\n        print('using expression center')\n        expression = torch.tensor(loadmat('checkpoints/expression.mat')['expression_center'])[0]\n\n    # load DNet, model(LNet and ENet)\n    D_Net, model = load_model(args, device)\n\n    if not os.path.isfile('temp/'+base_name+'_stablized.npy') or args.re_preprocess:\n        imgs = []\n        for idx in tqdm(range(len(frames_pil)), desc=\"[Step 3] Stabilize the expression In Video:\"):\n            if args.one_shot:\n                source_img = trans_image(frames_pil[0]).unsqueeze(0).to(device)\n                semantic_source_numpy = semantic_npy[0:1]\n            else:\n                source_img = trans_image(frames_pil[idx]).unsqueeze(0).to(device)\n                semantic_source_numpy = semantic_npy[idx:idx+1]\n            ratio = find_crop_norm_ratio(semantic_source_numpy, semantic_npy)\n            coeff = 
transform_semantic(semantic_npy, idx, ratio).unsqueeze(0).to(device)\n        \n            # hacking the new expression\n            coeff[:, :64, :] = expression[None, :64, None].to(device) \n            with torch.no_grad():\n                output = D_Net(source_img, coeff)\n            img_stablized = np.uint8((output['fake_image'].squeeze(0).permute(1,2,0).cpu().clamp_(-1, 1).numpy() + 1 )/2. * 255)\n            imgs.append(cv2.cvtColor(img_stablized,cv2.COLOR_RGB2BGR)) \n        np.save('temp/'+base_name+'_stablized.npy',imgs)\n        del D_Net\n    else:\n        print('[Step 3] Using saved stabilized video.')\n        imgs = np.load('temp/'+base_name+'_stablized.npy')\n    torch.cuda.empty_cache()\n\n    if not args.audio.endswith('.wav'):\n        command = 'ffmpeg -loglevel error -y -i {} -strict -2 {}'.format(args.audio, 'temp/{}/temp.wav'.format(args.tmp_dir))\n        subprocess.call(command, shell=True)\n        args.audio = 'temp/{}/temp.wav'.format(args.tmp_dir)\n    wav = audio.load_wav(args.audio, 16000)\n    mel = audio.melspectrogram(wav)\n    if np.isnan(mel.reshape(-1)).sum() > 0:\n        raise ValueError('Mel contains nan! Using a TTS voice? 
Add a small epsilon noise to the wav file and try again')\n\n    mel_step_size, mel_idx_multiplier, i, mel_chunks = 16, 80./fps, 0, []\n    while True:\n        start_idx = int(i * mel_idx_multiplier)\n        if start_idx + mel_step_size > len(mel[0]):\n            mel_chunks.append(mel[:, len(mel[0]) - mel_step_size:])\n            break\n        mel_chunks.append(mel[:, start_idx : start_idx + mel_step_size])\n        i += 1\n\n    print(\"[Step 4] Load audio; Length of mel chunks: {}\".format(len(mel_chunks)))\n    imgs = imgs[:len(mel_chunks)]\n    full_frames = full_frames[:len(mel_chunks)]  \n    lm = lm[:len(mel_chunks)]\n    \n    imgs_enhanced = []\n    for idx in tqdm(range(len(imgs)), desc='[Step 5] Reference Enhancement'):\n        img = imgs[idx]\n        pred, _, _ = enhancer.process(img, img, face_enhance=True, possion_blending=False)\n        imgs_enhanced.append(pred)\n    gen = datagen(imgs_enhanced.copy(), mel_chunks, full_frames, None, (oy1,oy2,ox1,ox2))\n\n    frame_h, frame_w = full_frames[0].shape[:-1]\n    out = cv2.VideoWriter('temp/{}/result.mp4'.format(args.tmp_dir), cv2.VideoWriter_fourcc(*'mp4v'), fps, (frame_w, frame_h))\n    \n    if args.up_face != 'original':\n        instance = GANimationModel()\n        instance.initialize()\n        instance.setup()\n\n    kp_extractor = KeypointExtractor()\n    for i, (img_batch, mel_batch, frames, coords, img_original, f_frames) in enumerate(tqdm(gen, desc='[Step 6] Lip Synthesis:', total=int(np.ceil(float(len(mel_chunks)) / args.LNet_batch_size)))):\n        img_batch = torch.FloatTensor(np.transpose(img_batch, (0, 3, 1, 2))).to(device)\n        mel_batch = torch.FloatTensor(np.transpose(mel_batch, (0, 3, 1, 2))).to(device)\n        img_original = torch.FloatTensor(np.transpose(img_original, (0, 3, 1, 2))).to(device)/255. 
# BGR -> RGB\n        \n        with torch.no_grad():\n            incomplete, reference = torch.split(img_batch, 3, dim=1) \n            pred, low_res = model(mel_batch, img_batch, reference)\n            pred = torch.clamp(pred, 0, 1)\n\n            if args.up_face in ['sad', 'angry', 'surprise']:\n                tar_aus = exp_aus_dict[args.up_face]\n            else:\n                pass\n            \n            if args.up_face == 'original':\n                cur_gen_faces = img_original\n            else:\n                test_batch = {'src_img': torch.nn.functional.interpolate((img_original * 2 - 1), size=(128, 128), mode='bilinear'), \n                              'tar_aus': tar_aus.repeat(len(incomplete), 1)}\n                instance.feed_batch(test_batch)\n                instance.forward()\n                cur_gen_faces = torch.nn.functional.interpolate(instance.fake_img / 2. + 0.5, size=(384, 384), mode='bilinear')\n                \n            if args.without_rl1 is not False:\n                incomplete, reference = torch.split(img_batch, 3, dim=1)\n                mask = torch.where(incomplete==0, torch.ones_like(incomplete), torch.zeros_like(incomplete)) \n                pred = pred * mask + cur_gen_faces * (1 - mask) \n        \n        pred = pred.cpu().numpy().transpose(0, 2, 3, 1) * 255.\n\n        torch.cuda.empty_cache()\n        for p, f, xf, c in zip(pred, frames, f_frames, coords):\n            y1, y2, x1, x2 = c\n            p = cv2.resize(p.astype(np.uint8), (x2 - x1, y2 - y1))\n            \n            ff = xf.copy() \n            ff[y1:y2, x1:x2] = p\n            \n            # mouth region enhancement by GFPGAN\n            cropped_faces, restored_faces, restored_img = restorer.enhance(\n                ff, has_aligned=False, only_center_face=True, paste_back=True)\n                # 0,   1,   2,   3,   4,   5,   6,   7,   8,  9, 10,  11,  12,\n            mm = [0,   0,   0,   0,   0,   0,   0,   0,   0,  0, 255, 255, 255, 0, 
0, 0, 0, 0, 0]\n            mouth_mask = np.zeros_like(restored_img)\n            tmp_mask = enhancer.faceparser.process(restored_img[y1:y2, x1:x2], mm)[0]\n            mouth_mask[y1:y2, x1:x2] = cv2.resize(tmp_mask, (x2 - x1, y2 - y1))[:, :, np.newaxis] / 255.\n\n            height, width = ff.shape[:2]\n            restored_img, ff, full_mask = [cv2.resize(x, (512, 512)) for x in (restored_img, ff, np.float32(mouth_mask))]\n            img = Laplacian_Pyramid_Blending_with_mask(restored_img, ff, full_mask[:, :, 0], 10)\n            pp = np.uint8(cv2.resize(np.clip(img, 0, 255), (width, height)))\n\n            pp, orig_faces, enhanced_faces = enhancer.process(pp, xf, bbox=c, face_enhance=False, possion_blending=True)\n            out.write(pp)\n    out.release()\n    \n    if os.path.dirname(args.outfile):\n        os.makedirs(os.path.dirname(args.outfile), exist_ok=True)\n    command = 'ffmpeg -loglevel error -y -i {} -i {} -strict -2 -q:v 1 {}'.format(args.audio, 'temp/{}/result.mp4'.format(args.tmp_dir), args.outfile)\n    subprocess.call(command, shell=platform.system() != 'Windows')\n    print('outfile:', args.outfile)\n\n\n# frames: 256x256, full_frames: original size\ndef datagen(frames, mels, full_frames, frames_pil, cox):\n    img_batch, mel_batch, frame_batch, coords_batch, ref_batch, full_frame_batch = [], [], [], [], [], []\n    base_name = args.face.split('/')[-1]\n    refs = []\n    image_size = 256\n\n    # original frames\n    kp_extractor = KeypointExtractor()\n    fr_pil = [Image.fromarray(frame) for frame in frames]\n    lms = kp_extractor.extract_keypoint(fr_pil, 'temp/'+base_name+'x12_landmarks.txt')\n    frames_pil = [(lm, frame) for frame, lm in zip(fr_pil, lms)]  # frames is the cropped version of the modified face\n    crops, orig_images, quads = crop_faces(image_size, frames_pil, scale=1.0, use_fa=True)\n    inverse_transforms = [calc_alignment_coefficients(quad + 0.5, [[0, 0], [0, image_size], [image_size, image_size], 
[image_size, 0]]) for quad in quads]\n    del kp_extractor.detector\n\n    oy1,oy2,ox1,ox2 = cox\n    face_det_results = face_detect(full_frames, args, jaw_correction=True)\n\n    for inverse_transform, crop, full_frame, face_det in zip(inverse_transforms, crops, full_frames, face_det_results):\n        imc_pil = paste_image(inverse_transform, crop, Image.fromarray(\n            cv2.resize(full_frame[int(oy1):int(oy2), int(ox1):int(ox2)], (256, 256))))\n\n        ff = full_frame.copy()\n        ff[int(oy1):int(oy2), int(ox1):int(ox2)] = cv2.resize(np.array(imc_pil.convert('RGB')), (ox2 - ox1, oy2 - oy1))\n        oface, coords = face_det\n        y1, y2, x1, x2 = coords\n        refs.append(ff[y1: y2, x1:x2])\n\n    for i, m in enumerate(mels):\n        idx = 0 if args.static else i % len(frames)\n        frame_to_save = frames[idx].copy()\n        face = refs[idx]\n        oface, coords = face_det_results[idx].copy()\n\n        face = cv2.resize(face, (args.img_size, args.img_size))\n        oface = cv2.resize(oface, (args.img_size, args.img_size))\n\n        img_batch.append(oface)\n        ref_batch.append(face) \n        mel_batch.append(m)\n        coords_batch.append(coords)\n        frame_batch.append(frame_to_save)\n        full_frame_batch.append(full_frames[idx].copy())\n\n        if len(img_batch) >= args.LNet_batch_size:\n            img_batch, mel_batch, ref_batch = np.asarray(img_batch), np.asarray(mel_batch), np.asarray(ref_batch)\n            img_masked = img_batch.copy()\n            img_original = img_batch.copy()\n            img_masked[:, args.img_size//2:] = 0\n            img_batch = np.concatenate((img_masked, ref_batch), axis=3) / 255.\n            mel_batch = np.reshape(mel_batch, [len(mel_batch), mel_batch.shape[1], mel_batch.shape[2], 1])\n\n            yield img_batch, mel_batch, frame_batch, coords_batch, img_original, full_frame_batch\n            img_batch, mel_batch, frame_batch, coords_batch, img_original, full_frame_batch, 
ref_batch = [], [], [], [], [], [], []\n\n    if len(img_batch) > 0:\n        img_batch, mel_batch, ref_batch = np.asarray(img_batch), np.asarray(mel_batch), np.asarray(ref_batch)\n        img_masked = img_batch.copy()\n        img_original = img_batch.copy()\n        img_masked[:, args.img_size//2:] = 0\n        img_batch = np.concatenate((img_masked, ref_batch), axis=3) / 255.\n        mel_batch = np.reshape(mel_batch, [len(mel_batch), mel_batch.shape[1], mel_batch.shape[2], 1])\n        yield img_batch, mel_batch, frame_batch, coords_batch, img_original, full_frame_batch\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "inference_videoretalking.sh",
    "content": "python3 inference.py \\\n  --face ./examples/face/1.mp4 \\\n  --audio ./examples/audio/1.wav \\\n  --outfile results/1_1.mp4"
  },
  {
    "path": "models/DNet.py",
    "content": "# TODO\nimport functools\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom utils import flow_util\nfrom models.base_blocks import LayerNorm2d, ADAINHourglass, FineEncoder, FineDecoder\n\n# DNet\nclass DNet(nn.Module):\n    def __init__(self):  \n        super(DNet, self).__init__()\n        self.mapping_net = MappingNet()\n        self.warpping_net = WarpingNet()\n        self.editing_net = EditingNet()\n \n    def forward(self, input_image, driving_source, stage=None):\n        if stage == 'warp':\n            descriptor = self.mapping_net(driving_source)\n            output = self.warpping_net(input_image, descriptor)\n        else:\n            descriptor = self.mapping_net(driving_source)\n            output = self.warpping_net(input_image, descriptor)\n            output['fake_image'] = self.editing_net(input_image, output['warp_image'], descriptor)\n        return output\n\nclass MappingNet(nn.Module):\n    def __init__(self, coeff_nc=73, descriptor_nc=256, layer=3):\n        super( MappingNet, self).__init__()\n\n        self.layer = layer\n        nonlinearity = nn.LeakyReLU(0.1)\n\n        self.first = nn.Sequential(\n            torch.nn.Conv1d(coeff_nc, descriptor_nc, kernel_size=7, padding=0, bias=True))\n\n        for i in range(layer):\n            net = nn.Sequential(nonlinearity,\n                torch.nn.Conv1d(descriptor_nc, descriptor_nc, kernel_size=3, padding=0, dilation=3))\n            setattr(self, 'encoder' + str(i), net)   \n\n        self.pooling = nn.AdaptiveAvgPool1d(1)\n        self.output_nc = descriptor_nc\n\n    def forward(self, input_3dmm):\n        out = self.first(input_3dmm)\n        for i in range(self.layer):\n            model = getattr(self, 'encoder' + str(i))\n            out = model(out) + out[:,:,3:-3]\n        out = self.pooling(out)\n        return out   \n\nclass WarpingNet(nn.Module):\n    def __init__(\n        self, \n        image_nc=3, \n        
descriptor_nc=256, \n        base_nc=32, \n        max_nc=256, \n        encoder_layer=5, \n        decoder_layer=3, \n        use_spect=False\n        ):\n        super( WarpingNet, self).__init__()\n\n        nonlinearity = nn.LeakyReLU(0.1)\n        norm_layer = functools.partial(LayerNorm2d, affine=True) \n        kwargs = {'nonlinearity':nonlinearity, 'use_spect':use_spect}\n\n        self.descriptor_nc = descriptor_nc \n        self.hourglass = ADAINHourglass(image_nc, self.descriptor_nc, base_nc,\n                                       max_nc, encoder_layer, decoder_layer, **kwargs)\n\n        self.flow_out = nn.Sequential(norm_layer(self.hourglass.output_nc), \n                                      nonlinearity,\n                                      nn.Conv2d(self.hourglass.output_nc, 2, kernel_size=7, stride=1, padding=3))\n\n        self.pool = nn.AdaptiveAvgPool2d(1)\n\n    def forward(self, input_image, descriptor):\n        final_output={}\n        output = self.hourglass(input_image, descriptor)\n        final_output['flow_field'] = self.flow_out(output)\n\n        deformation = flow_util.convert_flow_to_deformation(final_output['flow_field'])\n        final_output['warp_image'] = flow_util.warp_image(input_image, deformation)\n        return final_output\n\n\nclass EditingNet(nn.Module):\n    def __init__(\n        self, \n        image_nc=3, \n        descriptor_nc=256, \n        layer=3, \n        base_nc=64, \n        max_nc=256, \n        num_res_blocks=2, \n        use_spect=False):  \n        super(EditingNet, self).__init__()\n\n        nonlinearity = nn.LeakyReLU(0.1)\n        norm_layer = functools.partial(LayerNorm2d, affine=True) \n        kwargs = {'norm_layer':norm_layer, 'nonlinearity':nonlinearity, 'use_spect':use_spect}\n        self.descriptor_nc = descriptor_nc\n\n        # encoder part\n        self.encoder = FineEncoder(image_nc*2, base_nc, max_nc, layer, **kwargs)\n        self.decoder = FineDecoder(image_nc, self.descriptor_nc, 
base_nc, max_nc, layer, num_res_blocks, **kwargs)\n\n    def forward(self, input_image, warp_image, descriptor):\n        x = torch.cat([input_image, warp_image], 1)\n        x = self.encoder(x)\n        gen_image = self.decoder(x, descriptor)\n        return gen_image\n"
  },
  {
    "path": "models/ENet.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom models.base_blocks import ResBlock, StyleConv, ToRGB\n\n\nclass ENet(nn.Module):\n    def __init__(\n        self, \n        num_style_feat=512,\n        lnet=None,\n        concat=False\n        ):  \n        super(ENet, self).__init__()\n\n        self.low_res = lnet\n        for param in self.low_res.parameters():\n            param.requires_grad = False\n\n        channel_multiplier, narrow = 2, 1\n        channels = {\n            '4': int(512 * narrow),\n            '8': int(512 * narrow),\n            '16': int(512 * narrow),\n            '32': int(512 * narrow),\n            '64': int(256 * channel_multiplier * narrow),\n            '128': int(128 * channel_multiplier * narrow),\n            '256': int(64 * channel_multiplier * narrow),\n            '512': int(32 * channel_multiplier * narrow),\n            '1024': int(16 * channel_multiplier * narrow)\n        }\n\n        self.log_size = 8\n        first_out_size = 128\n        self.conv_body_first = nn.Conv2d(3, channels[f'{first_out_size}'], 1) # 256 -> 128\n\n        # downsample\n        in_channels = channels[f'{first_out_size}']\n        self.conv_body_down = nn.ModuleList()\n        for i in range(8, 2, -1):\n            out_channels = channels[f'{2**(i - 1)}']\n            self.conv_body_down.append(ResBlock(in_channels, out_channels, mode='down'))\n            in_channels = out_channels\n\n        self.num_style_feat = num_style_feat\n        linear_out_channel = num_style_feat\n        self.final_linear = nn.Linear(channels['4'] * 4 * 4, linear_out_channel)\n        self.final_conv = nn.Conv2d(in_channels, channels['4'], 3, 1, 1)\n\n        self.style_convs = nn.ModuleList()\n        self.to_rgbs = nn.ModuleList()\n        self.noises = nn.Module()\n        \n        self.concat = concat\n        if concat:\n            in_channels = 3 + 32 # channels['64']\n        else:\n            in_channels = 3\n\n 
       for i in range(7, 9):  # 128, 256\n            out_channels = channels[f'{2**i}'] # \n            self.style_convs.append(\n                StyleConv(\n                    in_channels,\n                    out_channels,\n                    kernel_size=3,\n                    num_style_feat=num_style_feat,\n                    demodulate=True,\n                    sample_mode='upsample'))\n            self.style_convs.append(\n                StyleConv(\n                    out_channels,\n                    out_channels,\n                    kernel_size=3,\n                    num_style_feat=num_style_feat,\n                    demodulate=True,\n                    sample_mode=None))\n            self.to_rgbs.append(ToRGB(out_channels, num_style_feat, upsample=True))\n            in_channels = out_channels\n\n    def forward(self, audio_sequences, face_sequences, gt_sequences):\n        B = audio_sequences.size(0)\n        input_dim_size = len(face_sequences.size())\n        inp, ref = torch.split(face_sequences,3,dim=1)\n\n        if input_dim_size > 4:\n            audio_sequences = torch.cat([audio_sequences[:, i] for i in range(audio_sequences.size(1))], dim=0)\n            inp = torch.cat([inp[:, :, i] for i in range(inp.size(2))], dim=0)\n            ref = torch.cat([ref[:, :, i] for i in range(ref.size(2))], dim=0)\n            gt_sequences = torch.cat([gt_sequences[:, :, i] for i in range(gt_sequences.size(2))], dim=0)\n        \n        # get the global style\n        feat = F.leaky_relu_(self.conv_body_first(F.interpolate(ref, size=(256,256), mode='bilinear')), negative_slope=0.2)\n        for i in range(self.log_size - 2):\n            feat = self.conv_body_down[i](feat)\n        feat = F.leaky_relu_(self.final_conv(feat), negative_slope=0.2)\n\n        # style code\n        style_code = self.final_linear(feat.reshape(feat.size(0), -1))\n        style_code = style_code.reshape(style_code.size(0), -1, self.num_style_feat)\n        \n        
LNet_input = torch.cat([inp, gt_sequences], dim=1)\n        LNet_input = F.interpolate(LNet_input, size=(96,96), mode='bilinear')\n        \n        if self.concat:\n            low_res_img, low_res_feat = self.low_res(audio_sequences, LNet_input)\n            # detach so no gradients flow back into the frozen low-res LNet\n            low_res_img = low_res_img.detach()\n            low_res_feat = low_res_feat.detach()\n            out = torch.cat([low_res_img, low_res_feat], dim=1) \n\n        else:\n            low_res_img = self.low_res(audio_sequences, LNet_input)\n            low_res_img = low_res_img.detach()\n            # 96 x 96\n            out = low_res_img \n        \n        p2d = (2,2,2,2)\n        out = F.pad(out, p2d, \"reflect\", 0)\n        skip = out\n\n        for conv1, conv2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], self.to_rgbs):\n            out = conv1(out, style_code)  # 96, 192, 384\n            out = conv2(out, style_code)\n            skip = to_rgb(out, style_code, skip)\n        _outputs = skip\n\n        # remove padding\n        _outputs = _outputs[:,:,8:-8,8:-8]\n\n        if input_dim_size > 4:\n            _outputs = torch.split(_outputs, B, dim=0)\n            outputs = torch.stack(_outputs, dim=2)\n            low_res_img = F.interpolate(low_res_img, outputs.size()[3:])\n            low_res_img = torch.split(low_res_img, B, dim=0) \n            low_res_img = torch.stack(low_res_img, dim=2)\n        else:\n            outputs = _outputs\n        return outputs, low_res_img"
  },
  {
    "path": "models/LNet.py",
    "content": "import functools\nimport torch\nimport torch.nn as nn\n\nfrom models.transformer import RETURNX, Transformer\nfrom models.base_blocks import Conv2d, LayerNorm2d, FirstBlock2d, DownBlock2d, UpBlock2d, \\\n                               FFCADAINResBlocks, Jump, FinalBlock2d\n\n\nclass Visual_Encoder(nn.Module):\n    def __init__(self, image_nc, ngf, img_f, layers, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(Visual_Encoder, self).__init__()\n        self.layers = layers\n        self.first_inp = FirstBlock2d(image_nc, ngf, norm_layer, nonlinearity, use_spect)\n        self.first_ref = FirstBlock2d(image_nc, ngf, norm_layer, nonlinearity, use_spect)\n        for i in range(layers):\n            in_channels = min(ngf*(2**i), img_f)\n            out_channels = min(ngf*(2**(i+1)), img_f)\n            model_ref = DownBlock2d(in_channels, out_channels, norm_layer, nonlinearity, use_spect)\n            model_inp = DownBlock2d(in_channels, out_channels, norm_layer, nonlinearity, use_spect)\n            if i < 2:\n                ca_layer = RETURNX()\n            else:\n                ca_layer = Transformer(2**(i+1) * ngf,2,4,ngf,ngf*4)\n            setattr(self, 'ca' + str(i), ca_layer)\n            setattr(self, 'ref_down' + str(i), model_ref)\n            setattr(self, 'inp_down' + str(i), model_inp)\n        self.output_nc = out_channels * 2\n\n    def forward(self, maskGT, ref):\n        x_maskGT, x_ref = self.first_inp(maskGT), self.first_ref(ref)\n        out=[x_maskGT]\n        for i in range(self.layers):\n            model_ref = getattr(self, 'ref_down'+str(i))\n            model_inp = getattr(self, 'inp_down'+str(i))\n            ca_layer = getattr(self, 'ca'+str(i))\n            x_maskGT, x_ref = model_inp(x_maskGT), model_ref(x_ref)\n            x_maskGT = ca_layer(x_maskGT, x_ref)\n            if i < self.layers - 1:\n                out.append(x_maskGT)\n            else:           \n                
out.append(torch.cat([x_maskGT, x_ref], dim=1)) # concat ref features !\n        return out\n\n\nclass Decoder(nn.Module):\n    def __init__(self, image_nc, feature_nc, ngf, img_f, layers, num_block, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(Decoder, self).__init__()\n        self.layers = layers\n        for i in range(layers)[::-1]:\n            if  i == layers-1:\n                in_channels = ngf*(2**(i+1)) * 2\n            else:\n                in_channels = min(ngf*(2**(i+1)), img_f)\n            out_channels = min(ngf*(2**i), img_f)\n            up = UpBlock2d(in_channels, out_channels, norm_layer, nonlinearity, use_spect)\n            res = FFCADAINResBlocks(num_block, in_channels, feature_nc, norm_layer, nonlinearity, use_spect)\n            jump = Jump(out_channels, norm_layer, nonlinearity, use_spect)\n\n            setattr(self, 'up' + str(i), up)\n            setattr(self, 'res' + str(i), res)            \n            setattr(self, 'jump' + str(i), jump)\n\n        self.final = FinalBlock2d(out_channels, image_nc, use_spect, 'sigmoid')\n        self.output_nc = out_channels\n\n    def forward(self, x, z):\n        out = x.pop()\n        for i in range(self.layers)[::-1]:\n            res_model = getattr(self, 'res' + str(i))\n            up_model = getattr(self, 'up' + str(i))\n            jump_model = getattr(self, 'jump' + str(i))\n            out = res_model(out, z)\n            out = up_model(out)\n            out = jump_model(x.pop()) + out\n        out_image = self.final(out)\n        return out_image\n\n\nclass LNet(nn.Module): \n    def __init__(\n        self, \n        image_nc=3, \n        descriptor_nc=512, \n        layer=3, \n        base_nc=64, \n        max_nc=512, \n        num_res_blocks=9, \n        use_spect=True,\n        encoder=Visual_Encoder,\n        decoder=Decoder\n        ):  \n        super(LNet, self).__init__()\n\n        nonlinearity = nn.LeakyReLU(0.1)\n        norm_layer 
= functools.partial(LayerNorm2d, affine=True) \n        kwargs = {'norm_layer':norm_layer, 'nonlinearity':nonlinearity, 'use_spect':use_spect}\n        self.descriptor_nc = descriptor_nc\n\n        self.encoder = encoder(image_nc, base_nc, max_nc, layer, **kwargs)\n        self.decoder = decoder(image_nc, self.descriptor_nc, base_nc, max_nc, layer, num_res_blocks, **kwargs)\n        self.audio_encoder = nn.Sequential(\n            Conv2d(1, 32, kernel_size=3, stride=1, padding=1),\n            Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),\n            Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),\n\n            Conv2d(32, 64, kernel_size=3, stride=(3, 1), padding=1),\n            Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),\n            Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),\n\n            Conv2d(64, 128, kernel_size=3, stride=3, padding=1),\n            Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),\n            Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),\n\n            Conv2d(128, 256, kernel_size=3, stride=(3, 2), padding=1),\n            Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),\n\n            Conv2d(256, 512, kernel_size=3, stride=1, padding=0),\n            Conv2d(512, descriptor_nc, kernel_size=1, stride=1, padding=0),\n            )\n\n    def forward(self, audio_sequences, face_sequences):\n        B = audio_sequences.size(0)\n        input_dim_size = len(face_sequences.size())\n        if input_dim_size > 4:\n            audio_sequences = torch.cat([audio_sequences[:, i] for i in range(audio_sequences.size(1))], dim=0)\n            face_sequences = torch.cat([face_sequences[:, :, i] for i in range(face_sequences.size(2))], dim=0)\n        cropped, ref = torch.split(face_sequences, 3, dim=1)\n\n        vis_feat = self.encoder(cropped, ref)\n        audio_feat = 
self.audio_encoder(audio_sequences) \n        _outputs = self.decoder(vis_feat, audio_feat)\n\n        if input_dim_size > 4:\n            _outputs = torch.split(_outputs, B, dim=0)\n            outputs = torch.stack(_outputs, dim=2) \n        else:\n            outputs = _outputs\n        return outputs"
  },
  {
    "path": "models/__init__.py",
"content": "import torch\nfrom models.DNet import DNet\nfrom models.LNet import LNet\nfrom models.ENet import ENet\n\n\ndef _load(checkpoint_path):\n    map_location = None if torch.cuda.is_available() else torch.device('cpu')\n    checkpoint = torch.load(checkpoint_path, map_location=map_location)\n    return checkpoint\n\ndef load_checkpoint(path, model):\n    print(\"Load checkpoint from: {}\".format(path))\n    checkpoint = _load(path)\n    s = checkpoint[\"state_dict\"] if 'arcface' not in path else checkpoint\n    new_s = {}\n    for k, v in s.items():\n        if 'low_res' in k:\n            continue\n        new_s[k.replace('module.', '')] = v\n    model.load_state_dict(new_s, strict=False)\n    return model\n\ndef load_network(args):\n    L_net = LNet()\n    L_net = load_checkpoint(args.LNet_path, L_net)\n    E_net = ENet(lnet=L_net)\n    model = load_checkpoint(args.ENet_path, E_net)\n    return model.eval()\n\ndef load_DNet(args):\n    D_Net = DNet()\n    print(\"Load checkpoint from: {}\".format(args.DNet_path))\n    checkpoint = torch.load(args.DNet_path, map_location=lambda storage, loc: storage)\n    D_Net.load_state_dict(checkpoint['net_G_ema'], strict=False)\n    return D_Net.eval()"
  },
  {
    "path": "models/base_blocks.py",
    "content": "import math\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.nn.modules.batchnorm import BatchNorm2d\nfrom torch.nn.utils.spectral_norm import spectral_norm as SpectralNorm\n\nfrom models.ffc import FFC\nfrom basicsr.archs.arch_util import default_init_weights\n\n\nclass Conv2d(nn.Module):\n    def __init__(self, cin, cout, kernel_size, stride, padding, residual=False, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.conv_block = nn.Sequential(\n                            nn.Conv2d(cin, cout, kernel_size, stride, padding),\n                            nn.BatchNorm2d(cout)\n                            )\n        self.act = nn.ReLU()\n        self.residual = residual\n\n    def forward(self, x):\n        out = self.conv_block(x)\n        if self.residual:\n            out += x\n        return self.act(out)\n\n\nclass ResBlock(nn.Module):\n    def __init__(self, in_channels, out_channels, mode='down'):\n        super(ResBlock, self).__init__()\n        self.conv1 = nn.Conv2d(in_channels, in_channels, 3, 1, 1)\n        self.conv2 = nn.Conv2d(in_channels, out_channels, 3, 1, 1)\n        self.skip = nn.Conv2d(in_channels, out_channels, 1, bias=False)\n        if mode == 'down':\n            self.scale_factor = 0.5\n        elif mode == 'up':\n            self.scale_factor = 2\n\n    def forward(self, x):\n        out = F.leaky_relu_(self.conv1(x), negative_slope=0.2)\n        # upsample/downsample\n        out = F.interpolate(out, scale_factor=self.scale_factor, mode='bilinear', align_corners=False)\n        out = F.leaky_relu_(self.conv2(out), negative_slope=0.2)\n        # skip\n        x = F.interpolate(x, scale_factor=self.scale_factor, mode='bilinear', align_corners=False)\n        skip = self.skip(x)\n        out = out + skip\n        return out\n\n\nclass LayerNorm2d(nn.Module):\n    def __init__(self, n_out, affine=True):\n        super(LayerNorm2d, self).__init__()\n        self.n_out = 
n_out\n        self.affine = affine\n\n        if self.affine:\n          self.weight = nn.Parameter(torch.ones(n_out, 1, 1))\n          self.bias = nn.Parameter(torch.zeros(n_out, 1, 1))\n\n    def forward(self, x):\n        normalized_shape = x.size()[1:]\n        if self.affine:\n          return F.layer_norm(x, normalized_shape, \\\n              self.weight.expand(normalized_shape), \n              self.bias.expand(normalized_shape))    \n        else:\n          return F.layer_norm(x, normalized_shape)  \n\n\ndef spectral_norm(module, use_spect=True):\n    if use_spect:\n        return SpectralNorm(module)\n    else:\n        return module\n\n\nclass FirstBlock2d(nn.Module):\n    def __init__(self, input_nc, output_nc, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(FirstBlock2d, self).__init__()\n        kwargs = {'kernel_size': 7, 'stride': 1, 'padding': 3}\n        conv = spectral_norm(nn.Conv2d(input_nc, output_nc, **kwargs), use_spect)\n\n        if type(norm_layer) == type(None):\n            self.model = nn.Sequential(conv, nonlinearity)\n        else:\n            self.model = nn.Sequential(conv, norm_layer(output_nc), nonlinearity)\n\n    def forward(self, x):\n        out = self.model(x)\n        return out \n\n\nclass DownBlock2d(nn.Module):\n    def __init__(self, input_nc, output_nc, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(DownBlock2d, self).__init__()\n        kwargs = {'kernel_size': 3, 'stride': 1, 'padding': 1}\n        conv = spectral_norm(nn.Conv2d(input_nc, output_nc, **kwargs), use_spect)\n        pool = nn.AvgPool2d(kernel_size=(2, 2))\n\n        if type(norm_layer) == type(None):\n            self.model = nn.Sequential(conv, nonlinearity, pool)\n        else:\n            self.model = nn.Sequential(conv, norm_layer(output_nc), nonlinearity, pool)\n\n    def forward(self, x):\n        out = self.model(x)\n        return out \n\n\nclass 
UpBlock2d(nn.Module):\n    def __init__(self, input_nc, output_nc, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(UpBlock2d, self).__init__()\n        kwargs = {'kernel_size': 3, 'stride': 1, 'padding': 1}\n        conv = spectral_norm(nn.Conv2d(input_nc, output_nc, **kwargs), use_spect)\n        if type(norm_layer) == type(None):\n            self.model = nn.Sequential(conv, nonlinearity)\n        else:\n            self.model = nn.Sequential(conv, norm_layer(output_nc), nonlinearity)\n\n    def forward(self, x):\n        out = self.model(F.interpolate(x, scale_factor=2))\n        return out\n\n\nclass ADAIN(nn.Module):\n    def __init__(self, norm_nc, feature_nc):\n        super().__init__()\n\n        self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False)\n\n        nhidden = 128\n        use_bias=True\n\n        self.mlp_shared = nn.Sequential(\n            nn.Linear(feature_nc, nhidden, bias=use_bias),            \n            nn.ReLU()\n        )\n        self.mlp_gamma = nn.Linear(nhidden, norm_nc, bias=use_bias)    \n        self.mlp_beta = nn.Linear(nhidden, norm_nc, bias=use_bias)    \n\n    def forward(self, x, feature):\n\n        # Part 1. generate parameter-free normalized activations\n        normalized = self.param_free_norm(x)\n        # Part 2. 
produce scaling and bias conditioned on feature\n        feature = feature.view(feature.size(0), -1)\n        actv = self.mlp_shared(feature)\n        gamma = self.mlp_gamma(actv)\n        beta = self.mlp_beta(actv)\n\n        # apply scale and bias\n        gamma = gamma.view(*gamma.size()[:2], 1,1)\n        beta = beta.view(*beta.size()[:2], 1,1)\n        out = normalized * (1 + gamma) + beta\n        return out\n\n\nclass FineADAINResBlock2d(nn.Module):\n    \"\"\"\n    Define an Residual block for different types\n    \"\"\"\n    def __init__(self, input_nc, feature_nc, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(FineADAINResBlock2d, self).__init__()\n        kwargs = {'kernel_size': 3, 'stride': 1, 'padding': 1}\n        self.conv1 = spectral_norm(nn.Conv2d(input_nc, input_nc, **kwargs), use_spect)\n        self.conv2 = spectral_norm(nn.Conv2d(input_nc, input_nc, **kwargs), use_spect)\n        self.norm1 = ADAIN(input_nc, feature_nc)\n        self.norm2 = ADAIN(input_nc, feature_nc)\n        self.actvn = nonlinearity\n\n    def forward(self, x, z):\n        dx = self.actvn(self.norm1(self.conv1(x), z))\n        dx = self.norm2(self.conv2(x), z)\n        out = dx + x\n        return out  \n\n\nclass FineADAINResBlocks(nn.Module):\n    def __init__(self, num_block, input_nc, feature_nc, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(FineADAINResBlocks, self).__init__()                                \n        self.num_block = num_block\n        for i in range(num_block):\n            model = FineADAINResBlock2d(input_nc, feature_nc, norm_layer, nonlinearity, use_spect)\n            setattr(self, 'res'+str(i), model)\n\n    def forward(self, x, z):\n        for i in range(self.num_block):\n            model = getattr(self, 'res'+str(i))\n            x = model(x, z)\n        return x   \n\n\nclass ADAINEncoderBlock(nn.Module):       \n    def __init__(self, input_nc, output_nc, 
feature_nc, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(ADAINEncoderBlock, self).__init__()\n        kwargs_down = {'kernel_size': 4, 'stride': 2, 'padding': 1}\n        kwargs_fine = {'kernel_size': 3, 'stride': 1, 'padding': 1}\n\n        self.conv_0 = spectral_norm(nn.Conv2d(input_nc,  output_nc, **kwargs_down), use_spect)\n        self.conv_1 = spectral_norm(nn.Conv2d(output_nc, output_nc, **kwargs_fine), use_spect)\n\n\n        self.norm_0 = ADAIN(input_nc, feature_nc)\n        self.norm_1 = ADAIN(output_nc, feature_nc)\n        self.actvn = nonlinearity\n\n    def forward(self, x, z):\n        x = self.conv_0(self.actvn(self.norm_0(x, z)))\n        x = self.conv_1(self.actvn(self.norm_1(x, z)))\n        return x\n\n\nclass ADAINDecoderBlock(nn.Module):\n    def __init__(self, input_nc, output_nc, hidden_nc, feature_nc, use_transpose=True, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(ADAINDecoderBlock, self).__init__()        \n        # Attributes\n        self.actvn = nonlinearity\n        hidden_nc = min(input_nc, output_nc) if hidden_nc is None else hidden_nc\n\n        kwargs_fine = {'kernel_size':3, 'stride':1, 'padding':1}\n        if use_transpose:\n            kwargs_up = {'kernel_size':3, 'stride':2, 'padding':1, 'output_padding':1}\n        else:\n            kwargs_up = {'kernel_size':3, 'stride':1, 'padding':1}\n\n        # create conv layers\n        self.conv_0 = spectral_norm(nn.Conv2d(input_nc, hidden_nc, **kwargs_fine), use_spect)\n        if use_transpose:\n            self.conv_1 = spectral_norm(nn.ConvTranspose2d(hidden_nc, output_nc, **kwargs_up), use_spect)\n            self.conv_s = spectral_norm(nn.ConvTranspose2d(input_nc, output_nc, **kwargs_up), use_spect)\n        else:\n            self.conv_1 = nn.Sequential(spectral_norm(nn.Conv2d(hidden_nc, output_nc, **kwargs_up), use_spect),\n                                        nn.Upsample(scale_factor=2))\n            self.conv_s = 
nn.Sequential(spectral_norm(nn.Conv2d(input_nc, output_nc, **kwargs_up), use_spect),\n                                        nn.Upsample(scale_factor=2))\n        # define normalization layers\n        self.norm_0 = ADAIN(input_nc, feature_nc)\n        self.norm_1 = ADAIN(hidden_nc, feature_nc)\n        self.norm_s = ADAIN(input_nc, feature_nc)\n        \n    def forward(self, x, z):\n        x_s = self.shortcut(x, z)\n        dx = self.conv_0(self.actvn(self.norm_0(x, z)))\n        dx = self.conv_1(self.actvn(self.norm_1(dx, z)))\n        out = x_s + dx\n        return out\n\n    def shortcut(self, x, z):\n        x_s = self.conv_s(self.actvn(self.norm_s(x, z)))\n        return x_s   \n\n\nclass FineEncoder(nn.Module):\n    \"\"\"docstring for Encoder\"\"\"\n    def __init__(self, image_nc, ngf, img_f, layers, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(FineEncoder, self).__init__()\n        self.layers = layers\n        self.first = FirstBlock2d(image_nc, ngf, norm_layer, nonlinearity, use_spect)\n        for i in range(layers):\n            in_channels = min(ngf*(2**i), img_f)\n            out_channels = min(ngf*(2**(i+1)), img_f)\n            model = DownBlock2d(in_channels, out_channels, norm_layer, nonlinearity, use_spect)\n            setattr(self, 'down' + str(i), model)\n        self.output_nc = out_channels\n\n    def forward(self, x):\n        x = self.first(x)\n        out=[x]\n        for i in range(self.layers):\n            model = getattr(self, 'down'+str(i))\n            x = model(x)\n            out.append(x)\n        return out\n\n\nclass FineDecoder(nn.Module):\n    \"\"\"docstring for FineDecoder\"\"\"\n    def __init__(self, image_nc, feature_nc, ngf, img_f, layers, num_block, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(FineDecoder, self).__init__()\n        self.layers = layers\n        for i in range(layers)[::-1]:\n            in_channels = 
min(ngf*(2**(i+1)), img_f)\n            out_channels = min(ngf*(2**i), img_f)\n            up = UpBlock2d(in_channels, out_channels, norm_layer, nonlinearity, use_spect)\n            res = FineADAINResBlocks(num_block, in_channels, feature_nc, norm_layer, nonlinearity, use_spect)\n            jump = Jump(out_channels, norm_layer, nonlinearity, use_spect)\n            setattr(self, 'up' + str(i), up)\n            setattr(self, 'res' + str(i), res)            \n            setattr(self, 'jump' + str(i), jump)\n        self.final = FinalBlock2d(out_channels, image_nc, use_spect, 'tanh')\n        self.output_nc = out_channels\n\n    def forward(self, x, z):\n        out = x.pop()\n        for i in range(self.layers)[::-1]:\n            res_model = getattr(self, 'res' + str(i))\n            up_model = getattr(self, 'up' + str(i))\n            jump_model = getattr(self, 'jump' + str(i))\n            out = res_model(out, z)\n            out = up_model(out)\n            out = jump_model(x.pop()) + out\n        out_image = self.final(out)\n        return out_image\n\n\nclass ADAINEncoder(nn.Module):\n    def __init__(self, image_nc, pose_nc, ngf, img_f, layers, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(ADAINEncoder, self).__init__()\n        self.layers = layers\n        self.input_layer = nn.Conv2d(image_nc, ngf, kernel_size=7, stride=1, padding=3)\n        for i in range(layers):\n            in_channels = min(ngf * (2**i), img_f)\n            out_channels = min(ngf *(2**(i+1)), img_f)\n            model = ADAINEncoderBlock(in_channels, out_channels, pose_nc, nonlinearity, use_spect)\n            setattr(self, 'encoder' + str(i), model)\n        self.output_nc = out_channels\n        \n    def forward(self, x, z):\n        out = self.input_layer(x)\n        out_list = [out]\n        for i in range(self.layers):\n            model = getattr(self, 'encoder' + str(i))\n            out = model(out, z)\n            out_list.append(out)\n        return 
out_list\n        \n        \nclass ADAINDecoder(nn.Module):\n    \"\"\"docstring for ADAINDecoder\"\"\"\n    def __init__(self, pose_nc, ngf, img_f, encoder_layers, decoder_layers, skip_connect=True, \n                 nonlinearity=nn.LeakyReLU(), use_spect=False):\n\n        super(ADAINDecoder, self).__init__()\n        self.encoder_layers = encoder_layers\n        self.decoder_layers = decoder_layers\n        self.skip_connect = skip_connect\n        use_transpose = True\n        for i in range(encoder_layers-decoder_layers, encoder_layers)[::-1]:\n            in_channels = min(ngf * (2**(i+1)), img_f)\n            in_channels = in_channels*2 if i != (encoder_layers-1) and self.skip_connect else in_channels\n            out_channels = min(ngf * (2**i), img_f)\n            model = ADAINDecoderBlock(in_channels, out_channels, out_channels, pose_nc, use_transpose, nonlinearity, use_spect)\n            setattr(self, 'decoder' + str(i), model)\n        self.output_nc = out_channels*2 if self.skip_connect else out_channels\n\n    def forward(self, x, z):\n        out = x.pop() if self.skip_connect else x\n        for i in range(self.encoder_layers-self.decoder_layers, self.encoder_layers)[::-1]:\n            model = getattr(self, 'decoder' + str(i))\n            out = model(out, z)\n            out = torch.cat([out, x.pop()], 1) if self.skip_connect else out\n        return out\n\n\nclass ADAINHourglass(nn.Module):\n    def __init__(self, image_nc, pose_nc, ngf, img_f, encoder_layers, decoder_layers, nonlinearity, use_spect):\n        super(ADAINHourglass, self).__init__()\n        self.encoder = ADAINEncoder(image_nc, pose_nc, ngf, img_f, encoder_layers, nonlinearity, use_spect)\n        self.decoder = ADAINDecoder(pose_nc, ngf, img_f, encoder_layers, decoder_layers, True, nonlinearity, use_spect)\n        self.output_nc = self.decoder.output_nc\n\n    def forward(self, x, z):\n        return self.decoder(self.encoder(x, z), z)        \n\n\nclass 
FineADAINLama(nn.Module):\n    def __init__(self, input_nc, feature_nc, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(FineADAINLama, self).__init__()\n        kwargs = {'kernel_size': 3, 'stride': 1, 'padding': 1}\n        self.actvn = nonlinearity\n        ratio_gin = 0.75\n        ratio_gout = 0.75\n        self.ffc = FFC(input_nc, input_nc, 3,\n                       ratio_gin, ratio_gout, 1, 1, 1,\n                       1, False, False, padding_type='reflect')\n        global_channels = int(input_nc * ratio_gout)\n        self.bn_l = ADAIN(input_nc - global_channels, feature_nc)\n        self.bn_g = ADAIN(global_channels, feature_nc)\n\n    def forward(self, x, z):\n        x_l, x_g = self.ffc(x)\n        x_l = self.actvn(self.bn_l(x_l, z))\n        x_g = self.actvn(self.bn_g(x_g, z))\n        return x_l, x_g\n\n\nclass FFCResnetBlock(nn.Module):\n    def __init__(self, dim, feature_dim, padding_type='reflect', norm_layer=nn.BatchNorm2d, activation_layer=nn.ReLU, dilation=1,\n                 spatial_transform_kwargs=None, inline=False, **conv_kwargs):\n        super().__init__()\n        self.conv1 = FineADAINLama(dim, feature_dim, **conv_kwargs)\n        self.conv2 = FineADAINLama(dim, feature_dim, **conv_kwargs)\n        # inline mode is always enabled here: a plain tensor input is split into\n        # local/global parts along the channel dimension in forward().\n        self.inline = True\n\n    def forward(self, x, z):\n        if self.inline:\n            x_l, x_g = x[:, :-self.conv1.ffc.global_in_num], x[:, -self.conv1.ffc.global_in_num:]\n        else:\n            x_l, x_g = x if type(x) is tuple else (x, 0)\n\n        id_l, id_g = x_l, x_g\n        x_l, x_g = self.conv1((x_l, x_g), z)\n        x_l, x_g = self.conv2((x_l, x_g), z)\n\n        x_l, x_g = id_l + x_l, id_g + x_g\n        out = x_l, x_g\n        if self.inline:\n            out = torch.cat(out, dim=1)\n        return out\n\n\nclass FFCADAINResBlocks(nn.Module):\n    def __init__(self, num_block, input_nc, feature_nc, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n    
    super(FFCADAINResBlocks, self).__init__()                                \n        self.num_block = num_block\n        for i in range(num_block):\n            model = FFCResnetBlock(input_nc, feature_nc, norm_layer, nonlinearity, use_spect)\n            setattr(self, 'res'+str(i), model)\n\n    def forward(self, x, z):\n        for i in range(self.num_block):\n            model = getattr(self, 'res'+str(i))\n            x = model(x, z)\n        return x \n\n\nclass Jump(nn.Module):\n    def __init__(self, input_nc, norm_layer=nn.BatchNorm2d, nonlinearity=nn.LeakyReLU(), use_spect=False):\n        super(Jump, self).__init__()\n        kwargs = {'kernel_size': 3, 'stride': 1, 'padding': 1}\n        conv = spectral_norm(nn.Conv2d(input_nc, input_nc, **kwargs), use_spect)\n        if type(norm_layer) == type(None):\n            self.model = nn.Sequential(conv, nonlinearity)\n        else:\n            self.model = nn.Sequential(conv, norm_layer(input_nc), nonlinearity)\n\n    def forward(self, x):\n        out = self.model(x)\n        return out   \n\n\nclass FinalBlock2d(nn.Module):\n    def __init__(self, input_nc, output_nc, use_spect=False, tanh_or_sigmoid='tanh'):\n        super(FinalBlock2d, self).__init__()\n        kwargs = {'kernel_size': 7, 'stride': 1, 'padding':3}\n        conv = spectral_norm(nn.Conv2d(input_nc, output_nc, **kwargs), use_spect)\n        if tanh_or_sigmoid == 'sigmoid':\n            out_nonlinearity = nn.Sigmoid()\n        else:\n            out_nonlinearity = nn.Tanh()            \n        self.model = nn.Sequential(conv, out_nonlinearity)\n\n    def forward(self, x):\n        out = self.model(x)\n        return out    \n\n\nclass ModulatedConv2d(nn.Module):\n    def __init__(self,\n                 in_channels,\n                 out_channels,\n                 kernel_size,\n                 num_style_feat,\n                 demodulate=True,\n                 sample_mode=None,\n                 eps=1e-8):\n        
super(ModulatedConv2d, self).__init__()\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.kernel_size = kernel_size\n        self.demodulate = demodulate\n        self.sample_mode = sample_mode\n        self.eps = eps\n\n        # modulation inside each modulated conv\n        self.modulation = nn.Linear(num_style_feat, in_channels, bias=True)\n        # initialization\n        default_init_weights(self.modulation, scale=1, bias_fill=1, a=0, mode='fan_in', nonlinearity='linear')\n\n        self.weight = nn.Parameter(\n            torch.randn(1, out_channels, in_channels, kernel_size, kernel_size) /\n            math.sqrt(in_channels * kernel_size**2))\n        self.padding = kernel_size // 2\n\n    def forward(self, x, style):\n        b, c, h, w = x.shape   \n        style = self.modulation(style).view(b, 1, c, 1, 1)\n        weight = self.weight * style  \n\n        if self.demodulate:\n            demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps)\n            weight = weight * demod.view(b, self.out_channels, 1, 1, 1)\n\n        weight = weight.view(b * self.out_channels, c, self.kernel_size, self.kernel_size)\n\n        # upsample or downsample if necessary\n        if self.sample_mode == 'upsample':\n            x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)\n        elif self.sample_mode == 'downsample':\n            x = F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)\n\n        b, c, h, w = x.shape\n        x = x.view(1, b * c, h, w)\n        out = F.conv2d(x, weight, padding=self.padding, groups=b)\n        out = out.view(b, self.out_channels, *out.shape[2:4])\n        return out\n\n    def __repr__(self):\n        return (f'{self.__class__.__name__}(in_channels={self.in_channels}, out_channels={self.out_channels}, '\n                f'kernel_size={self.kernel_size}, demodulate={self.demodulate}, sample_mode={self.sample_mode})')\n\n\nclass 
StyleConv(nn.Module):\n    def __init__(self, in_channels, out_channels, kernel_size, num_style_feat, demodulate=True, sample_mode=None):\n        super(StyleConv, self).__init__()\n        self.modulated_conv = ModulatedConv2d(\n            in_channels, out_channels, kernel_size, num_style_feat, demodulate=demodulate, sample_mode=sample_mode)\n        self.weight = nn.Parameter(torch.zeros(1))  # for noise injection\n        self.bias = nn.Parameter(torch.zeros(1, out_channels, 1, 1))\n        self.activate = nn.LeakyReLU(negative_slope=0.2, inplace=True)\n\n    def forward(self, x, style, noise=None):\n        # modulate\n        out = self.modulated_conv(x, style) * 2**0.5  # for conversion\n        # noise injection\n        if noise is None:\n            b, _, h, w = out.shape\n            noise = out.new_empty(b, 1, h, w).normal_()\n        out = out + self.weight * noise\n        # add bias\n        out = out + self.bias\n        # activation\n        out = self.activate(out)\n        return out\n\n\nclass ToRGB(nn.Module):\n    def __init__(self, in_channels, num_style_feat, upsample=True):\n        super(ToRGB, self).__init__()\n        self.upsample = upsample\n        self.modulated_conv = ModulatedConv2d(\n            in_channels, 3, kernel_size=1, num_style_feat=num_style_feat, demodulate=False, sample_mode=None)\n        self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))\n\n    def forward(self, x, style, skip=None):\n        out = self.modulated_conv(x, style)\n        out = out + self.bias\n        if skip is not None:\n            if self.upsample:\n                skip = F.interpolate(skip, scale_factor=2, mode='bilinear', align_corners=False)\n            out = out + skip\n        return out"
  },
  {
    "path": "models/ffc.py",
    "content": "# Fast Fourier Convolution NeurIPS 2020\n# original implementation https://github.com/pkumivision/FFC/blob/main/model_zoo/ffc.py\n# paper https://proceedings.neurips.cc/paper/2020/file/2fd5d41ec6cfab47e32164d5624269b1-Paper.pdf\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n# from models.modules.squeeze_excitation import SELayer\nimport torch.fft\n\nclass SELayer(nn.Module):\n    def __init__(self, channel, reduction=16):\n        super(SELayer, self).__init__()\n        self.avg_pool = nn.AdaptiveAvgPool2d(1)\n        self.fc = nn.Sequential(\n            nn.Linear(channel, channel // reduction, bias=False),\n            nn.ReLU(inplace=True),\n            nn.Linear(channel // reduction, channel, bias=False),\n            nn.Sigmoid()\n        )\n\n    def forward(self, x):\n        b, c, _, _ = x.size()\n        y = self.avg_pool(x).view(b, c)\n        y = self.fc(y).view(b, c, 1, 1)\n        res = x * y.expand_as(x)\n        return res\n\n\nclass FFCSE_block(nn.Module):\n    def __init__(self, channels, ratio_g):\n        super(FFCSE_block, self).__init__()\n        in_cg = int(channels * ratio_g)\n        in_cl = channels - in_cg\n        r = 16\n\n        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))\n        self.conv1 = nn.Conv2d(channels, channels // r,\n                               kernel_size=1, bias=True)\n        self.relu1 = nn.ReLU(inplace=True)\n        self.conv_a2l = None if in_cl == 0 else nn.Conv2d(\n            channels // r, in_cl, kernel_size=1, bias=True)\n        self.conv_a2g = None if in_cg == 0 else nn.Conv2d(\n            channels // r, in_cg, kernel_size=1, bias=True)\n        self.sigmoid = nn.Sigmoid()\n\n    def forward(self, x):\n        x = x if type(x) is tuple else (x, 0)\n        id_l, id_g = x\n\n        x = id_l if type(id_g) is int else torch.cat([id_l, id_g], dim=1)\n        x = self.avgpool(x)\n        x = self.relu1(self.conv1(x))\n\n        x_l = 0 if self.conv_a2l is None else 
id_l * \\\n            self.sigmoid(self.conv_a2l(x))\n        x_g = 0 if self.conv_a2g is None else id_g * \\\n            self.sigmoid(self.conv_a2g(x))\n        return x_l, x_g\n\n\nclass FourierUnit(nn.Module):\n\n    def __init__(self, in_channels, out_channels, groups=1, spatial_scale_factor=None, spatial_scale_mode='bilinear',\n                 spectral_pos_encoding=False, use_se=False, se_kwargs=None, ffc3d=False, fft_norm='ortho'):\n        # bn_layer not used\n        super(FourierUnit, self).__init__()\n        self.groups = groups\n\n        self.conv_layer = torch.nn.Conv2d(in_channels=in_channels * 2 + (2 if spectral_pos_encoding else 0),\n                                          out_channels=out_channels * 2,\n                                          kernel_size=1, stride=1, padding=0, groups=self.groups, bias=False)\n        self.bn = torch.nn.BatchNorm2d(out_channels * 2)\n        self.relu = torch.nn.ReLU(inplace=True)\n\n        # squeeze and excitation block\n        self.use_se = use_se\n        if use_se:\n            if se_kwargs is None:\n                se_kwargs = {}\n            self.se = SELayer(self.conv_layer.in_channels, **se_kwargs)\n\n        self.spatial_scale_factor = spatial_scale_factor\n        self.spatial_scale_mode = spatial_scale_mode\n        self.spectral_pos_encoding = spectral_pos_encoding\n        self.ffc3d = ffc3d\n        self.fft_norm = fft_norm\n\n    def forward(self, x):\n        batch = x.shape[0]\n\n        if self.spatial_scale_factor is not None:\n            orig_size = x.shape[-2:]\n            x = F.interpolate(x, scale_factor=self.spatial_scale_factor, mode=self.spatial_scale_mode, align_corners=False)\n\n        r_size = x.size()\n        # (batch, c, h, w/2+1, 2)\n        fft_dim = (-3, -2, -1) if self.ffc3d else (-2, -1)\n        ffted = torch.fft.rfftn(x, dim=fft_dim, norm=self.fft_norm)\n        ffted = torch.stack((ffted.real, ffted.imag), dim=-1)\n        ffted = ffted.permute(0, 1, 4, 2, 
3).contiguous()  # (batch, c, 2, h, w/2+1)\n        ffted = ffted.view((batch, -1,) + ffted.size()[3:])\n\n        if self.spectral_pos_encoding:\n            height, width = ffted.shape[-2:]\n            coords_vert = torch.linspace(0, 1, height)[None, None, :, None].expand(batch, 1, height, width).to(ffted)\n            coords_hor = torch.linspace(0, 1, width)[None, None, None, :].expand(batch, 1, height, width).to(ffted)\n            ffted = torch.cat((coords_vert, coords_hor, ffted), dim=1)\n\n        if self.use_se:\n            ffted = self.se(ffted)\n\n        ffted = self.conv_layer(ffted)  # (batch, c*2, h, w/2+1)\n        ffted = self.relu(self.bn(ffted))\n\n        ffted = ffted.view((batch, -1, 2,) + ffted.size()[2:]).permute(\n            0, 1, 3, 4, 2).contiguous()  # (batch,c, t, h, w/2+1, 2)\n        ffted = torch.complex(ffted[..., 0], ffted[..., 1])\n\n        ifft_shape_slice = x.shape[-3:] if self.ffc3d else x.shape[-2:]\n        output = torch.fft.irfftn(ffted, s=ifft_shape_slice, dim=fft_dim, norm=self.fft_norm)\n\n        if self.spatial_scale_factor is not None:\n            output = F.interpolate(output, size=orig_size, mode=self.spatial_scale_mode, align_corners=False)\n\n        return output\n\n\nclass SpectralTransform(nn.Module):\n    def __init__(self, in_channels, out_channels, stride=1, groups=1, enable_lfu=True, **fu_kwargs):\n        # bn_layer not used\n        super(SpectralTransform, self).__init__()\n        self.enable_lfu = enable_lfu\n        if stride == 2:\n            self.downsample = nn.AvgPool2d(kernel_size=(2, 2), stride=2)\n        else:\n            self.downsample = nn.Identity()\n\n        self.stride = stride\n        self.conv1 = nn.Sequential(\n            nn.Conv2d(in_channels, out_channels //\n                      2, kernel_size=1, groups=groups, bias=False),\n            nn.BatchNorm2d(out_channels // 2),\n            nn.ReLU(inplace=True)\n        )\n        self.fu = FourierUnit(\n            
out_channels // 2, out_channels // 2, groups, **fu_kwargs)\n        if self.enable_lfu:\n            self.lfu = FourierUnit(\n                out_channels // 2, out_channels // 2, groups)\n        self.conv2 = torch.nn.Conv2d(\n            out_channels // 2, out_channels, kernel_size=1, groups=groups, bias=False)\n\n    def forward(self, x):\n        x = self.downsample(x)\n        x = self.conv1(x)\n        output = self.fu(x)\n\n        if self.enable_lfu:\n            n, c, h, w = x.shape\n            split_no = 2\n            split_s = h // split_no\n            xs = torch.cat(torch.split(\n                x[:, :c // 4], split_s, dim=-2), dim=1).contiguous()\n            xs = torch.cat(torch.split(xs, split_s, dim=-1),\n                           dim=1).contiguous()\n            xs = self.lfu(xs)\n            xs = xs.repeat(1, 1, split_no, split_no).contiguous()\n        else:\n            xs = 0\n\n        output = self.conv2(x + output + xs)\n        return output\n\n\nclass FFC(nn.Module):\n\n    def __init__(self, in_channels, out_channels, kernel_size,\n                 ratio_gin, ratio_gout, stride=1, padding=0,\n                 dilation=1, groups=1, bias=False, enable_lfu=True,\n                 padding_type='reflect', gated=False, **spectral_kwargs):\n        super(FFC, self).__init__()\n\n        assert stride == 1 or stride == 2, \"Stride should be 1 or 2.\"\n        self.stride = stride\n\n        in_cg = int(in_channels * ratio_gin)\n        in_cl = in_channels - in_cg\n        out_cg = int(out_channels * ratio_gout)\n        out_cl = out_channels - out_cg\n\n        self.ratio_gin = ratio_gin\n        self.ratio_gout = ratio_gout\n        self.global_in_num = in_cg\n\n        module = nn.Identity if in_cl == 0 or out_cl == 0 else nn.Conv2d\n        self.convl2l = module(in_cl, out_cl, kernel_size,\n                              stride, padding, dilation, groups, bias, padding_mode=padding_type)\n        module = nn.Identity if in_cl == 0 or out_cg 
== 0 else nn.Conv2d\n        self.convl2g = module(in_cl, out_cg, kernel_size,\n                              stride, padding, dilation, groups, bias, padding_mode=padding_type)\n        module = nn.Identity if in_cg == 0 or out_cl == 0 else nn.Conv2d\n        self.convg2l = module(in_cg, out_cl, kernel_size,\n                              stride, padding, dilation, groups, bias, padding_mode=padding_type)\n        module = nn.Identity if in_cg == 0 or out_cg == 0 else SpectralTransform\n        self.convg2g = module(\n            in_cg, out_cg, stride, 1 if groups == 1 else groups // 2, enable_lfu, **spectral_kwargs)\n\n        self.gated = gated\n        module = nn.Identity if in_cg == 0 or out_cl == 0 or not self.gated else nn.Conv2d\n        self.gate = module(in_channels, 2, 1)\n\n    def forward(self, x):\n        x_l, x_g = x if type(x) is tuple else (x, 0)\n        out_xl, out_xg = 0, 0\n\n        if self.gated:\n            total_input_parts = [x_l]\n            if torch.is_tensor(x_g):\n                total_input_parts.append(x_g)\n            total_input = torch.cat(total_input_parts, dim=1)\n\n            gates = torch.sigmoid(self.gate(total_input))\n            g2l_gate, l2g_gate = gates.chunk(2, dim=1)\n        else:\n            g2l_gate, l2g_gate = 1, 1\n\n        if self.ratio_gout != 1:\n            out_xl = self.convl2l(x_l) + self.convg2l(x_g) * g2l_gate\n        if self.ratio_gout != 0:\n            out_xg = self.convl2g(x_l) * l2g_gate + self.convg2g(x_g)\n\n        return out_xl, out_xg"
  },
  {
    "path": "models/transformer.py",
    "content": "import torch\nfrom torch import nn\n\nfrom einops import rearrange\n\nimport torch.nn.functional as F\nimport numpy as np\n\n\nclass GELU(nn.Module):\n    def __init__(self):\n        super(GELU, self).__init__()\n    def forward(self, x):\n        # tanh approximation of GELU; torch.tanh replaces the deprecated F.tanh\n        return 0.5*x*(1+torch.tanh(np.sqrt(2/np.pi)*(x+0.044715*torch.pow(x,3))))\n\n# helpers\n\ndef pair(t):\n    return t if isinstance(t, tuple) else (t, t)\n\n# classes\n\nclass PreNorm(nn.Module):\n    def __init__(self, dim, fn):\n        super().__init__()\n        self.norm = nn.LayerNorm(dim)\n        self.fn = fn\n    def forward(self, x, **kwargs):\n        return self.fn(self.norm(x), **kwargs)\n\nclass DualPreNorm(nn.Module):\n    def __init__(self, dim, fn):\n        super().__init__()\n        self.normx = nn.LayerNorm(dim)\n        self.normy = nn.LayerNorm(dim)\n        self.fn = fn\n    def forward(self, x, y, **kwargs):\n        return self.fn(self.normx(x), self.normy(y), **kwargs)\n\nclass FeedForward(nn.Module):\n    def __init__(self, dim, hidden_dim, dropout = 0.):\n        super().__init__()\n        self.net = nn.Sequential(\n            nn.Linear(dim, hidden_dim),\n            GELU(),\n            nn.Dropout(dropout),\n            nn.Linear(hidden_dim, dim),\n            nn.Dropout(dropout)\n        )\n    def forward(self, x):\n        return self.net(x)\n\nclass Attention(nn.Module):\n    def __init__(self, dim, heads = 8, dim_head = 64, dropout = 0.):\n        super().__init__()\n        inner_dim = dim_head * heads\n        project_out = not (heads == 1 and dim_head == dim)\n\n        self.heads = heads\n        self.scale = dim_head ** -0.5\n\n        self.attend = nn.Softmax(dim = -1)\n\n        self.to_q = nn.Linear(dim, inner_dim, bias = False)\n        self.to_k = nn.Linear(dim, inner_dim, bias = False)\n        self.to_v = nn.Linear(dim, inner_dim, bias = False)\n\n        self.to_out = nn.Sequential(\n            nn.Linear(inner_dim, dim),\n            nn.Dropout(dropout)\n        ) if project_out else nn.Identity()\n\n    def forward(self, x, y):\n        q = rearrange(self.to_q(x), 'b n (h d) -> b h n d', h = self.heads) # q, k come from x\n        k = rearrange(self.to_k(x), 'b n (h d) -> b h n d', h = self.heads)\n        v = rearrange(self.to_v(y), 'b n (h d) -> b h n d', h = self.heads) # v comes from the reference features y\n\n        dots = torch.matmul(q, k.transpose(-1, -2)) * self.scale\n\n        attn = self.attend(dots)\n\n        out = torch.matmul(attn, v)\n        out = rearrange(out, 'b h n d -> b n (h d)')\n        return self.to_out(out)\n\nclass Transformer(nn.Module):\n    def __init__(self, dim, depth, heads, dim_head, mlp_dim, dropout = 0.):\n        super().__init__()\n        self.layers = nn.ModuleList([])\n        for _ in range(depth):\n            self.layers.append(nn.ModuleList([\n                DualPreNorm(dim, Attention(dim, heads = heads, dim_head = dim_head, dropout = dropout)),\n                PreNorm(dim, FeedForward(dim, mlp_dim, dropout = dropout))\n            ]))\n\n    def forward(self, x, y): # x is the cropped, y is the foreign reference\n        bs,c,h,w = x.size()\n\n        # img to embedding\n        x = x.view(bs,c,-1).permute(0,2,1)\n        y = y.view(bs,c,-1).permute(0,2,1)\n\n        for attn, ff in self.layers:\n            x = attn(x, y) + x\n            x = ff(x) + x\n\n        x = x.view(bs,h,w,c).permute(0,3,1,2)\n        return x\n\nclass RETURNX(nn.Module):\n    def __init__(self,):\n        super().__init__()\n\n    def forward(self, x, y): # x is the cropped, y is the foreign reference\n        return x"
  },
  {
    "path": "predict.py",
    "content": "# Prediction interface for Cog ⚙️\n# https://github.com/replicate/cog/blob/main/docs/python.md\n\nimport os\nimport sys\nimport argparse\nimport subprocess\nimport numpy as np\nfrom tqdm import tqdm\nfrom PIL import Image\nfrom scipy.io import loadmat\nimport torch\nimport cv2\nfrom cog import BasePredictor, Input, Path\n\nsys.path.insert(0, \"third_part\")\nsys.path.insert(0, \"third_part/GPEN\")\nsys.path.insert(0, \"third_part/GFPGAN\")\n\n# 3dmm extraction\nfrom third_part.face3d.util.preprocess import align_img\nfrom third_part.face3d.util.load_mats import load_lm3d\nfrom third_part.face3d.extract_kp_videos import KeypointExtractor\n\n# face enhancement\nfrom third_part.GPEN.gpen_face_enhancer import FaceEnhancement\nfrom third_part.GFPGAN.gfpgan import GFPGANer\n\n# expression control\nfrom third_part.ganimation_replicate.model.ganimation import GANimationModel\n\nfrom utils import audio\nfrom utils.ffhq_preprocess import Croper\nfrom utils.alignment_stit import crop_faces, calc_alignment_coefficients, paste_image\nfrom utils.inference_utils import (\n    Laplacian_Pyramid_Blending_with_mask,\n    face_detect,\n    load_model,\n    options,\n    split_coeff,\n    trans_image,\n    transform_semantic,\n    find_crop_norm_ratio,\n    load_face3d_net,\n    exp_aus_dict,\n)\n\n\nclass Predictor(BasePredictor):\n    def setup(self) -> None:\n        \"\"\"Load the model into memory to make running multiple predictions efficient\"\"\"\n        self.enhancer = FaceEnhancement(\n            base_dir=\"checkpoints\",\n            size=512,\n            model=\"GPEN-BFR-512\",\n            use_sr=False,\n            sr_model=\"rrdb_realesrnet_psnr\",\n            channel_multiplier=2,\n            narrow=1,\n            device=\"cuda\",\n        )\n        self.restorer = GFPGANer(\n            model_path=\"checkpoints/GFPGANv1.3.pth\",\n            upscale=1,\n            arch=\"clean\",\n            channel_multiplier=2,\n            
bg_upsampler=None,\n        )\n        self.croper = Croper(\"checkpoints/shape_predictor_68_face_landmarks.dat\")\n        self.kp_extractor = KeypointExtractor()\n\n        face3d_net_path = \"checkpoints/face3d_pretrain_epoch_20.pth\"\n\n        self.net_recon = load_face3d_net(face3d_net_path, \"cuda\")\n        self.lm3d_std = load_lm3d(\"checkpoints/BFM\")\n\n    def predict(\n        self,\n        face: Path = Input(description=\"Input video file of a talking-head.\"),\n        input_audio: Path = Input(description=\"Input audio file.\"),\n    ) -> Path:\n        \"\"\"Run a single prediction on the model\"\"\"\n        device = \"cuda\"\n        args = argparse.Namespace(\n            DNet_path=\"checkpoints/DNet.pt\",\n            LNet_path=\"checkpoints/LNet.pth\",\n            ENet_path=\"checkpoints/ENet.pth\",\n            face3d_net_path=\"checkpoints/face3d_pretrain_epoch_20.pth\",\n            face=str(face),\n            audio=str(input_audio),\n            exp_img=\"neutral\",\n            outfile=None,\n            fps=25,\n            pads=[0, 20, 0, 0],\n            face_det_batch_size=4,\n            LNet_batch_size=16,\n            img_size=384,\n            crop=[0, -1, 0, -1],\n            box=[-1, -1, -1, -1],\n            nosmooth=False,\n            static=False,\n            up_face=\"original\",\n            one_shot=False,\n            without_rl1=False,\n            tmp_dir=\"temp\",\n            re_preprocess=False,\n        )\n\n        base_name = os.path.basename(args.face)\n\n        # check the real file extension so paths containing extra dots are handled correctly\n        if os.path.splitext(args.face)[1].lower() in [\".jpg\", \".jpeg\", \".png\"]:\n            full_frames = [cv2.imread(args.face)]\n            args.static = True\n            fps = args.fps\n        else:\n            video_stream = cv2.VideoCapture(args.face)\n            fps = video_stream.get(cv2.CAP_PROP_FPS)\n            full_frames = []\n            while True:\n                still_reading, frame = video_stream.read()\n                if not still_reading:\n 
                   video_stream.release()\n                    break\n                y1, y2, x1, x2 = args.crop\n                if x2 == -1:\n                    x2 = frame.shape[1]\n                if y2 == -1:\n                    y2 = frame.shape[0]\n                frame = frame[y1:y2, x1:x2]\n                full_frames.append(frame)\n\n        full_frames_RGB = [\n            cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) for frame in full_frames\n        ]\n        full_frames_RGB, crop, quad = self.croper.crop(full_frames_RGB, xsize=512)\n\n        clx, cly, crx, cry = crop\n        lx, ly, rx, ry = quad\n        lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry)\n        oy1, oy2, ox1, ox2 = (\n            cly + ly,\n            min(cly + ry, full_frames[0].shape[0]),\n            clx + lx,\n            min(clx + rx, full_frames[0].shape[1]),\n        )\n        # original_size = (ox2 - ox1, oy2 - oy1)\n        frames_pil = [\n            Image.fromarray(cv2.resize(frame, (256, 256))) for frame in full_frames_RGB\n        ]\n\n        # get the landmark according to the detected face.\n        if (\n            not os.path.isfile(\"temp/\" + base_name + \"_landmarks.txt\")\n            or args.re_preprocess\n        ):\n            print(\"[Step 1] Landmarks Extraction in Video.\")\n            lm = self.kp_extractor.extract_keypoint(\n                frames_pil, \"./temp/\" + base_name + \"_landmarks.txt\"\n            )\n        else:\n            print(\"[Step 1] Using saved landmarks.\")\n            lm = np.loadtxt(\"temp/\" + base_name + \"_landmarks.txt\").astype(np.float32)\n            lm = lm.reshape([len(full_frames), -1, 2])\n\n        if (\n            not os.path.isfile(\"temp/\" + base_name + \"_coeffs.npy\")\n            or args.exp_img is not None\n            or args.re_preprocess\n        ):\n            video_coeffs = []\n            for idx in tqdm(\n                range(len(frames_pil)), desc=\"[Step 2] 3DMM Extraction In Video:\"\n       
     ):\n                frame = frames_pil[idx]\n                W, H = frame.size\n                lm_idx = lm[idx].reshape([-1, 2])\n                if np.mean(lm_idx) == -1:\n                    lm_idx = (self.lm3d_std[:, :2] + 1) / 2.0\n                    lm_idx = np.concatenate([lm_idx[:, :1] * W, lm_idx[:, 1:2] * H], 1)\n                else:\n                    lm_idx[:, -1] = H - 1 - lm_idx[:, -1]\n\n                trans_params, im_idx, lm_idx, _ = align_img(\n                    frame, lm_idx, self.lm3d_std\n                )\n                trans_params = np.array(\n                    [float(item) for item in np.hsplit(trans_params, 5)]\n                ).astype(np.float32)\n                im_idx_tensor = (\n                    torch.tensor(np.array(im_idx) / 255.0, dtype=torch.float32)\n                    .permute(2, 0, 1)\n                    .to(device)\n                    .unsqueeze(0)\n                )\n                with torch.no_grad():\n                    coeffs = split_coeff(self.net_recon(im_idx_tensor))\n\n                pred_coeff = {key: coeffs[key].cpu().numpy() for key in coeffs}\n                pred_coeff = np.concatenate(\n                    [\n                        pred_coeff[\"id\"],\n                        pred_coeff[\"exp\"],\n                        pred_coeff[\"tex\"],\n                        pred_coeff[\"angle\"],\n                        pred_coeff[\"gamma\"],\n                        pred_coeff[\"trans\"],\n                        trans_params[None],\n                    ],\n                    1,\n                )\n                video_coeffs.append(pred_coeff)\n            semantic_npy = np.array(video_coeffs)[:, 0]\n            np.save(\"temp/\" + base_name + \"_coeffs.npy\", semantic_npy)\n        else:\n            print(\"[Step 2] Using saved coeffs.\")\n            semantic_npy = np.load(\"temp/\" + base_name + \"_coeffs.npy\").astype(\n                np.float32\n            )\n\n        # generate 
the 3dmm coeff from a single image\n        if args.exp_img == \"smile\":\n            expression = torch.tensor(\n                loadmat(\"checkpoints/expression.mat\")[\"expression_mouth\"]\n            )[0]\n        else:\n            print(\"using expression center\")\n            expression = torch.tensor(\n                loadmat(\"checkpoints/expression.mat\")[\"expression_center\"]\n            )[0]\n\n        # load DNet, model(LNet and ENet)\n        D_Net, model = load_model(args, device)\n\n        if (\n            not os.path.isfile(\"temp/\" + base_name + \"_stablized.npy\")\n            or args.re_preprocess\n        ):\n            imgs = []\n            for idx in tqdm(\n                range(len(frames_pil)),\n                desc=\"[Step 3] Stabilize the expression In Video:\",\n            ):\n                if args.one_shot:\n                    source_img = trans_image(frames_pil[0]).unsqueeze(0).to(device)\n                    semantic_source_numpy = semantic_npy[0:1]\n                else:\n                    source_img = trans_image(frames_pil[idx]).unsqueeze(0).to(device)\n                    semantic_source_numpy = semantic_npy[idx : idx + 1]\n                ratio = find_crop_norm_ratio(semantic_source_numpy, semantic_npy)\n                coeff = (\n                    transform_semantic(semantic_npy, idx, ratio).unsqueeze(0).to(device)\n                )\n\n                # hacking the new expression\n                coeff[:, :64, :] = expression[None, :64, None].to(device)\n                with torch.no_grad():\n                    output = D_Net(source_img, coeff)\n                img_stablized = np.uint8(\n                    (\n                        output[\"fake_image\"]\n                        .squeeze(0)\n                        .permute(1, 2, 0)\n                        .cpu()\n                        .clamp_(-1, 1)\n                        .numpy()\n                        + 1\n                    )\n                   
 / 2.0\n                    * 255\n                )\n                imgs.append(cv2.cvtColor(img_stablized, cv2.COLOR_RGB2BGR))\n            np.save(\"temp/\" + base_name + \"_stablized.npy\", imgs)\n            del D_Net\n        else:\n            print(\"[Step 3] Using saved stabilized video.\")\n            imgs = np.load(\"temp/\" + base_name + \"_stablized.npy\")\n        torch.cuda.empty_cache()\n\n        if not args.audio.endswith(\".wav\"):\n            command = \"ffmpeg -loglevel error -y -i {} -strict -2 {}\".format(\n                args.audio, \"temp/{}/temp.wav\".format(args.tmp_dir)\n            )\n            subprocess.call(command, shell=True)\n            args.audio = \"temp/{}/temp.wav\".format(args.tmp_dir)\n        wav = audio.load_wav(args.audio, 16000)\n        mel = audio.melspectrogram(wav)\n        if np.isnan(mel.reshape(-1)).sum() > 0:\n            raise ValueError(\n                \"Mel contains nan! Using a TTS voice? Add a small epsilon noise to the wav file and try again\"\n            )\n\n        mel_step_size, mel_idx_multiplier, i, mel_chunks = 16, 80.0 / fps, 0, []\n        while True:\n            start_idx = int(i * mel_idx_multiplier)\n            if start_idx + mel_step_size > len(mel[0]):\n                mel_chunks.append(mel[:, len(mel[0]) - mel_step_size :])\n                break\n            mel_chunks.append(mel[:, start_idx : start_idx + mel_step_size])\n            i += 1\n\n        print(\"[Step 4] Load audio; Length of mel chunks: {}\".format(len(mel_chunks)))\n        imgs = imgs[: len(mel_chunks)]\n        full_frames = full_frames[: len(mel_chunks)]\n        lm = lm[: len(mel_chunks)]\n\n        imgs_enhanced = []\n        for idx in tqdm(range(len(imgs)), desc=\"[Step 5] Reference Enhancement\"):\n            img = imgs[idx]\n            pred, _, _ = self.enhancer.process(\n                img, img, face_enhance=True, possion_blending=False\n            )\n            imgs_enhanced.append(pred)\n        
gen = datagen(\n            imgs_enhanced.copy(), mel_chunks, full_frames, args, (oy1, oy2, ox1, ox2)\n        )\n\n        frame_h, frame_w = full_frames[0].shape[:-1]\n        out = cv2.VideoWriter(\n            \"temp/{}/result.mp4\".format(args.tmp_dir),\n            cv2.VideoWriter_fourcc(*\"mp4v\"),\n            fps,\n            (frame_w, frame_h),\n        )\n\n        if args.up_face != \"original\":\n            instance = GANimationModel()\n            instance.initialize()\n            instance.setup()\n\n        # kp_extractor = KeypointExtractor()\n        for i, (\n            img_batch,\n            mel_batch,\n            frames,\n            coords,\n            img_original,\n            f_frames,\n        ) in enumerate(\n            tqdm(\n                gen,\n                desc=\"[Step 6] Lip Synthesis:\",\n                total=int(np.ceil(float(len(mel_chunks)) / args.LNet_batch_size)),\n            )\n        ):\n            img_batch = torch.FloatTensor(np.transpose(img_batch, (0, 3, 1, 2))).to(\n                device\n            )\n            mel_batch = torch.FloatTensor(np.transpose(mel_batch, (0, 3, 1, 2))).to(\n                device\n            )\n            img_original = (\n                torch.FloatTensor(np.transpose(img_original, (0, 3, 1, 2))).to(device)\n                / 255.0\n            )  # normalize pixel values to [0, 1]\n\n            with torch.no_grad():\n                incomplete, reference = torch.split(img_batch, 3, dim=1)\n                pred, low_res = model(mel_batch, img_batch, reference)\n                pred = torch.clamp(pred, 0, 1)\n\n                if args.up_face in [\"sad\", \"angry\", \"surprise\"]:\n                    tar_aus = exp_aus_dict[args.up_face]\n                else:\n                    pass\n\n                if args.up_face == \"original\":\n                    cur_gen_faces = img_original\n                else:\n                    test_batch = {\n                        \"src_img\": 
torch.nn.functional.interpolate(\n                            (img_original * 2 - 1), size=(128, 128), mode=\"bilinear\"\n                        ),\n                        \"tar_aus\": tar_aus.repeat(len(incomplete), 1),\n                    }\n                    instance.feed_batch(test_batch)\n                    instance.forward()\n                    cur_gen_faces = torch.nn.functional.interpolate(\n                        instance.fake_img / 2.0 + 0.5, size=(384, 384), mode=\"bilinear\"\n                    )\n\n                if args.without_rl1 is not False:\n                    incomplete, reference = torch.split(img_batch, 3, dim=1)\n                    mask = torch.where(\n                        incomplete == 0,\n                        torch.ones_like(incomplete),\n                        torch.zeros_like(incomplete),\n                    )\n                    pred = pred * mask + cur_gen_faces * (1 - mask)\n\n            pred = pred.cpu().numpy().transpose(0, 2, 3, 1) * 255.0\n\n            torch.cuda.empty_cache()\n            for p, f, xf, c in zip(pred, frames, f_frames, coords):\n                y1, y2, x1, x2 = c\n                p = cv2.resize(p.astype(np.uint8), (x2 - x1, y2 - y1))\n\n                ff = xf.copy()\n                ff[y1:y2, x1:x2] = p\n\n                # mouth region enhancement by GFPGAN\n                cropped_faces, restored_faces, restored_img = self.restorer.enhance(\n                    ff, has_aligned=False, only_center_face=True, paste_back=True\n                )\n                # 0,   1,   2,   3,   4,   5,   6,   7,   8,  9, 10,  11,  12,\n                mm = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 0, 0, 0, 0, 0, 0]\n                mouse_mask = np.zeros_like(restored_img)\n                tmp_mask = self.enhancer.faceparser.process(\n                    restored_img[y1:y2, x1:x2], mm\n                )[0]\n                mouse_mask[y1:y2, x1:x2] = (\n                    cv2.resize(tmp_mask, (x2 - x1, 
y2 - y1))[:, :, np.newaxis] / 255.0\n                )\n\n                height, width = ff.shape[:2]\n                restored_img, ff, full_mask = [\n                    cv2.resize(x, (512, 512))\n                    for x in (restored_img, ff, np.float32(mouse_mask))\n                ]\n                img = Laplacian_Pyramid_Blending_with_mask(\n                    restored_img, ff, full_mask[:, :, 0], 10\n                )\n                pp = np.uint8(cv2.resize(np.clip(img, 0, 255), (width, height)))\n\n                pp, orig_faces, enhanced_faces = self.enhancer.process(\n                    pp, xf, bbox=c, face_enhance=False, possion_blending=True\n                )\n                out.write(pp)\n        out.release()\n\n        output_file = \"/tmp/output.mp4\"\n        command = \"ffmpeg -loglevel error -y -i {} -i {} -strict -2 -q:v 1 {}\".format(\n            args.audio, \"temp/{}/result.mp4\".format(args.tmp_dir), output_file\n        )\n        subprocess.call(command, shell=True)\n\n        return Path(output_file)\n\n\n# frames:256x256, full_frames: original size\ndef datagen(frames, mels, full_frames, args, cox):\n    img_batch, mel_batch, frame_batch, coords_batch, ref_batch, full_frame_batch = (\n        [],\n        [],\n        [],\n        [],\n        [],\n        [],\n    )\n    base_name = args.face.split(\"/\")[-1]\n    refs = []\n    image_size = 256\n\n    # original frames\n    kp_extractor = KeypointExtractor()\n    fr_pil = [Image.fromarray(frame) for frame in frames]\n    lms = kp_extractor.extract_keypoint(\n        fr_pil, \"temp/\" + base_name + \"x12_landmarks.txt\"\n    )\n    frames_pil = [\n        (lm, frame) for frame, lm in zip(fr_pil, lms)\n    ]  # frames is the cropped version of the modified face\n    crops, orig_images, quads = crop_faces(\n        image_size, frames_pil, scale=1.0, use_fa=True\n    )\n    inverse_transforms = [\n        calc_alignment_coefficients(\n            quad + 0.5,\n            [[0, 0], [0, 
image_size], [image_size, image_size], [image_size, 0]],\n        )\n        for quad in quads\n    ]\n    del kp_extractor.detector\n\n    oy1, oy2, ox1, ox2 = cox\n    face_det_results = face_detect(full_frames, args, jaw_correction=True)\n\n    for inverse_transform, crop, full_frame, face_det in zip(\n        inverse_transforms, crops, full_frames, face_det_results\n    ):\n        imc_pil = paste_image(\n            inverse_transform,\n            crop,\n            Image.fromarray(\n                cv2.resize(\n                    full_frame[int(oy1) : int(oy2), int(ox1) : int(ox2)], (256, 256)\n                )\n            ),\n        )\n\n        ff = full_frame.copy()\n        ff[int(oy1) : int(oy2), int(ox1) : int(ox2)] = cv2.resize(\n            np.array(imc_pil.convert(\"RGB\")), (ox2 - ox1, oy2 - oy1)\n        )\n        oface, coords = face_det\n        y1, y2, x1, x2 = coords\n        refs.append(ff[y1:y2, x1:x2])\n\n    for i, m in enumerate(mels):\n        idx = 0 if args.static else i % len(frames)\n        frame_to_save = frames[idx].copy()\n        face = refs[idx]\n        oface, coords = face_det_results[idx].copy()\n\n        face = cv2.resize(face, (args.img_size, args.img_size))\n        oface = cv2.resize(oface, (args.img_size, args.img_size))\n\n        img_batch.append(oface)\n        ref_batch.append(face)\n        mel_batch.append(m)\n        coords_batch.append(coords)\n        frame_batch.append(frame_to_save)\n        full_frame_batch.append(full_frames[idx].copy())\n\n        if len(img_batch) >= args.LNet_batch_size:\n            img_batch, mel_batch, ref_batch = (\n                np.asarray(img_batch),\n                np.asarray(mel_batch),\n                np.asarray(ref_batch),\n            )\n            img_masked = img_batch.copy()\n            img_original = img_batch.copy()\n            img_masked[:, args.img_size // 2 :] = 0\n            img_batch = np.concatenate((img_masked, ref_batch), axis=3) / 255.0\n            
mel_batch = np.reshape(\n                mel_batch, [len(mel_batch), mel_batch.shape[1], mel_batch.shape[2], 1]\n            )\n\n            yield img_batch, mel_batch, frame_batch, coords_batch, img_original, full_frame_batch\n            (\n                img_batch,\n                mel_batch,\n                frame_batch,\n                coords_batch,\n                img_original,\n                full_frame_batch,\n                ref_batch,\n            ) = ([], [], [], [], [], [], [])\n\n    if len(img_batch) > 0:\n        img_batch, mel_batch, ref_batch = (\n            np.asarray(img_batch),\n            np.asarray(mel_batch),\n            np.asarray(ref_batch),\n        )\n        img_masked = img_batch.copy()\n        img_original = img_batch.copy()\n        img_masked[:, args.img_size // 2 :] = 0\n        img_batch = np.concatenate((img_masked, ref_batch), axis=3) / 255.0\n        mel_batch = np.reshape(\n            mel_batch, [len(mel_batch), mel_batch.shape[1], mel_batch.shape[2], 1]\n        )\n        yield img_batch, mel_batch, frame_batch, coords_batch, img_original, full_frame_batch\n"
  },
  {
    "path": "quick_demo.ipynb",
    "content": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"provenance\": [],\n      \"authorship_tag\": \"ABX9TyMPin07iIA2oewCCP9ZTz6w\",\n      \"include_colab_link\": true\n    },\n    \"kernelspec\": {\n      \"name\": \"python3\",\n      \"display_name\": \"Python 3\"\n    },\n    \"language_info\": {\n      \"name\": \"python\"\n    },\n    \"accelerator\": \"GPU\",\n    \"gpuClass\": \"standard\"\n  },\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"view-in-github\",\n        \"colab_type\": \"text\"\n      },\n      \"source\": [\n        \"<a href=\\\"https://colab.research.google.com/github/vinthony/video-retalking/blob/main/quick_demo.ipynb\\\" target=\\\"_parent\\\"><img src=\\\"https://colab.research.google.com/assets/colab-badge.svg\\\" alt=\\\"Open In Colab\\\"/></a>\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## VideoReTalking：Audio-based Lip Synchronization for Talking Head Video Editing In the Wild\\n\",\n        \"\\n\",\n        \"[Arxiv](https://arxiv.org/abs/2211.14758) | [Project](https://vinthony.github.io/video-retalking/) | [Github](https://github.com/vinthony/video-retalking)\\n\",\n        \"\\n\",\n        \"Kun Cheng, Xiaodong Cun, Yong Zhang, Menghan Xia, Fei Yin, Mingrui Zhu, Xuan Wang, Jue Wang, Nannan Wang\\n\",\n        \"\\n\",\n        \"Xidian University, Tencent AI Lab, Tsinghua University\\n\",\n        \"\\n\",\n        \"*SIGGRAPH Asia 2022 Conferenence Track*\\n\",\n        \"\\n\"\n      ],\n      \"metadata\": {\n        \"id\": \"NVfkv2BXSpr3\"\n      }\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"**Installation** (30s)\"\n      ],\n      \"metadata\": {\n        \"id\": \"u9hdPaH6UL_F\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": 
\"PnKT9goiQ3Hk\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"#@title\\n\",\n        \"### make sure that CUDA is available in Edit -> Nootbook settings -> GPU\\n\",\n        \"!nvidia-smi\\n\",\n        \"\\n\",\n        \"!python --version  \\n\",\n        \"!apt-get update\\n\",\n        \"!apt install ffmpeg &> /dev/null \\n\",\n        \"\\n\",\n        \"print('Git clone project and install requirements...')\\n\",\n        \"!git clone https://github.com/vinthony/video-retalking.git &> /dev/null\\n\",\n        \"%cd video-retalking\\n\",\n        \"# !pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html\\n\",\n        \"!pip install -r requirements.txt\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"**Download Pretrained Models**\"\n      ],\n      \"metadata\": {\n        \"id\": \"uwJS0eaM61Cq\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"#@title\\n\",\n        \"print('Download pre-trained models...')\\n\",\n        \"!mkdir ./checkpoints  \\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/30_net_gen.pth -O ./checkpoints/30_net_gen.pth\\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/BFM.zip -O ./checkpoints/BFM.zip\\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/DNet.pt -O ./checkpoints/DNet.pt\\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/ENet.pth -O ./checkpoints/ENet.pth\\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/expression.mat -O ./checkpoints/expression.mat\\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/face3d_pretrain_epoch_20.pth -O ./checkpoints/face3d_pretrain_epoch_20.pth\\n\",\n        \"!wget 
https://github.com/vinthony/video-retalking/releases/download/v0.0.1/GFPGANv1.3.pth -O ./checkpoints/GFPGANv1.3.pth\\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/GPEN-BFR-512.pth -O ./checkpoints/GPEN-BFR-512.pth\\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/LNet.pth -O ./checkpoints/LNet.pth\\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/ParseNet-latest.pth -O ./checkpoints/ParseNet-latest.pth\\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/RetinaFace-R50.pth -O ./checkpoints/RetinaFace-R50.pth\\n\",\n        \"!wget https://github.com/vinthony/video-retalking/releases/download/v0.0.1/shape_predictor_68_face_landmarks.dat -O ./checkpoints/shape_predictor_68_face_landmarks.dat\\n\",\n        \"!unzip -d ./checkpoints/BFM ./checkpoints/BFM.zip\"\n      ],\n      \"metadata\": {\n        \"id\": \"x18qYuQY678E\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"**Inference**\\n\",\n        \"\\n\",\n        \"`--face`: Input video.\\n\",\n        \"\\n\",\n        \"`--audio`: Input audio. 
Both *.wav* and *.mp4* files are supported.\\n\",\n        \"\\n\",\n        \"You can choose our provided data from `./examples` folder or upload from your local computer.\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"\\n\"\n      ],\n      \"metadata\": {\n        \"id\": \"QJRTF4U8UOjv\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"#@title\\n\",\n        \"import glob, os, sys\\n\",\n        \"import ipywidgets as widgets\\n\",\n        \"from IPython.display import HTML\\n\",\n        \"from base64 import b64encode\\n\",\n        \"\\n\",\n        \"print(\\\"Choose the Video name to edit: (saved in folder 'examples/face')\\\")\\n\",\n        \"vid_list = glob.glob1('examples/face/', '*.mp4')\\n\",\n        \"vid_list.sort()\\n\",\n        \"default_vid_name = widgets.Dropdown(options=vid_list, value='1.mp4')\\n\",\n        \"display(default_vid_name)\\n\",\n        \"\\n\",\n        \"print(\\\"Choose the Audio name to edit: (saved in folder 'examples/audio')\\\")\\n\",\n        \"audio_list = glob.glob1('examples/audio/', '*')\\n\",\n        \"audio_list.sort()\\n\",\n        \"default_audio_name = widgets.Dropdown(options=audio_list, value='1.wav')\\n\",\n        \"display(default_audio_name)\\n\"\n      ],\n      \"metadata\": {\n        \"id\": \"U-IY-cBSporP\",\n        \"cellView\": \"form\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"Visualize the input video and audio:\"\n      ],\n      \"metadata\": {\n        \"id\": \"-MtI_R1bLJ-f\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"#@title\\n\",\n        \"input_video_name = './examples/face/{}'.format(default_vid_name.value)\\n\",\n        \"input_video_mp4 = open('{}'.format(input_video_name),'rb').read()\\n\",\n        \"input_video_data_url = \\\"data:video/x-m4v;base64,\\\" + 
b64encode(input_video_mp4).decode()\\n\",\n        \"print('Display input video: {}'.format(input_video_name), file=sys.stderr)\\n\",\n        \"display(HTML(\\\"\\\"\\\"\\n\",\n        \"  <video width=400 controls>\\n\",\n        \"        <source src=\\\"%s\\\" type=\\\"video/mp4\\\">\\n\",\n        \"  </video>\\n\",\n        \"  \\\"\\\"\\\" % input_video_data_url))\\n\",\n        \"\\n\",\n        \"input_audio_name = './examples/audio/{}'.format(default_audio_name.value)\\n\",\n        \"input_audio_mp4 = open('{}'.format(input_audio_name),'rb').read()\\n\",\n        \"input_audio_data_url = \\\"data:audio/wav;base64,\\\" + b64encode(input_audio_mp4).decode()\\n\",\n        \"print('Display input audio: {}'.format(input_audio_name), file=sys.stderr)\\n\",\n        \"display(HTML(\\\"\\\"\\\"\\n\",\n        \"  <audio width=400 controls>\\n\",\n        \"        <source src=\\\"%s\\\" type=\\\"audio/wav\\\">\\n\",\n        \"  </audio>\\n\",\n        \"  \\\"\\\"\\\" % input_audio_data_url))\\n\"\n      ],\n      \"metadata\": {\n        \"id\": \"ljbScdofJyGO\",\n        \"cellView\": \"form\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"input_video_path = 'examples/face/{}'.format(default_vid_name.value)\\n\",\n        \"input_audio_path = 'examples/audio/{}'.format(default_audio_name.value)\\n\",\n        \"\\n\",\n        \"!python3 inference.py \\\\\\n\",\n        \"  --face {input_video_path} \\\\\\n\",\n        \"  --audio {input_audio_path} \\\\\\n\",\n        \"  --outfile results/output.mp4\"\n      ],\n      \"metadata\": {\n        \"id\": \"D7hUwRCyUYEA\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"Visualize the output video:\"\n      ],\n      \"metadata\": {\n        \"id\": \"JB5RbKc-njkB\"\n      }\n    },\n    {\n      \"cell_type\": 
\"code\",\n      \"source\": [\n        \"#@title\\n\",\n        \"# visualize code from makeittalk\\n\",\n        \"from IPython.display import HTML\\n\",\n        \"from base64 import b64encode\\n\",\n        \"import os, sys, glob, cv2, subprocess, platform\\n\",\n        \"\\n\",\n        \"def read_video(vid_name):\\n\",\n        \"  video_stream = cv2.VideoCapture(vid_name)\\n\",\n        \"  fps = video_stream.get(cv2.CAP_PROP_FPS)\\n\",\n        \"  full_frames = []\\n\",\n        \"  while True:\\n\",\n        \"    still_reading, frame = video_stream.read()\\n\",\n        \"    if not still_reading:\\n\",\n        \"        video_stream.release()\\n\",\n        \"        break\\n\",\n        \"    full_frames.append(frame)\\n\",\n        \"  return full_frames, fps\\n\",\n        \"\\n\",\n        \"input_video_frames, fps = read_video(input_video_path)\\n\",\n        \"output_video_frames, _ = read_video('./results/output.mp4')\\n\",\n        \"\\n\",\n        \"frame_h, frame_w = input_video_frames[0].shape[:-1]\\n\",\n        \"out_concat = cv2.VideoWriter('./temp/temp/result_concat.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (frame_w*2, frame_h))\\n\",\n        \"for i in range(len(output_video_frames)):\\n\",\n        \"  frame_input = input_video_frames[i % len(input_video_frames)]\\n\",\n        \"  frame_output = output_video_frames[i]\\n\",\n        \"  out_concat.write(cv2.hconcat([frame_input, frame_output]))\\n\",\n        \"out_concat.release()\\n\",\n        \"\\n\",\n        \"command = 'ffmpeg -loglevel error -y -i {} -i {} -strict -2 -q:v 1 {}'.format(input_audio_path, './temp/temp/result_concat.mp4', './results/output_concat_input.mp4')\\n\",\n        \"subprocess.call(command, shell=platform.system() != 'Windows')\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"output_video_name = './results/output.mp4'\\n\",\n        \"output_video_mp4 = open('{}'.format(output_video_name),'rb').read()\\n\",\n        \"output_video_data_url = 
\\\"data:video/mp4;base64,\\\" + b64encode(output_video_mp4).decode()\\n\",\n        \"print('Display lip-syncing video: {}'.format(output_video_name), file=sys.stderr)\\n\",\n        \"display(HTML(\\\"\\\"\\\"\\n\",\n        \"  <video height=400 controls>\\n\",\n        \"        <source src=\\\"%s\\\" type=\\\"video/mp4\\\">\\n\",\n        \"  </video>\\n\",\n        \"  \\\"\\\"\\\" % output_video_data_url))\\n\",\n        \"\\n\",\n        \"output_concat_video_name = './results/output_concat_input.mp4'\\n\",\n        \"output_concat_video_mp4 = open('{}'.format(output_concat_video_name),'rb').read()\\n\",\n        \"output_concat_video_data_url = \\\"data:video/mp4;base64,\\\" + b64encode(output_concat_video_mp4).decode()\\n\",\n        \"print('Display input video and lip-syncing video: {}'.format(output_concat_video_name), file=sys.stderr)\\n\",\n        \"display(HTML(\\\"\\\"\\\"\\n\",\n        \"  <video height=400 controls>\\n\",\n        \"        <source src=\\\"%s\\\" type=\\\"video/mp4\\\">\\n\",\n        \"  </video>\\n\",\n        \"  \\\"\\\"\\\" % output_concat_video_data_url))\\n\"\n      ],\n      \"metadata\": {\n        \"id\": \"ravs9UDucMfy\",\n        \"cellView\": \"form\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    }\n  ]\n}"
  },
  {
    "path": "requirements.txt",
    "content": "basicsr==1.4.2\nkornia==0.5.1\nface-alignment==1.3.4\nninja==1.10.2.3\neinops==0.4.1\nfacexlib==0.2.5\nlibrosa==0.9.2\ndlib==19.24.0\ngradio>=3.7.0\nnumpy==1.23.4\n"
  },
  {
    "path": "third_part/GFPGAN/LICENSE",
    "content": "Tencent is pleased to support the open source community by making GFPGAN available.\n\nCopyright (C) 2021 THL A29 Limited, a Tencent company.  All rights reserved.\n\nGFPGAN is licensed under the Apache License Version 2.0 except for the third-party components listed below.\n\n\nTerms of the Apache License Version 2.0:\n---------------------------------------------\nApache License\n\nVersion 2.0, January 2004\n\nhttp://www.apache.org/licenses/\n\nTERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n1. Definitions.\n\n“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.\n\n“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.\n\n“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.\n\n“You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License.\n\n“Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.\n\n“Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.\n\n“Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the 
Appendix below).\n\n“Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.\n\n“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.”\n\n“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.\n\n2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.\n\n3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.\n\n4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:\n\nYou must give any other recipients of the Work or Derivative Works a copy of this License; and\n\nYou must cause any modified files to carry prominent notices stating that You changed the files; and\n\nYou must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and\n\nIf the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.\n\nYou may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.\n\n5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.\n\n6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.\n\n7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.\n\n8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.\n\n9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. 
However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.\n\nEND OF TERMS AND CONDITIONS\n\n\n\nOther  dependencies and licenses:\n\n\nOpen Source Software licensed under the Apache 2.0 license and Other Licenses of the Third-Party Components therein:\n---------------------------------------------\n1. basicsr\nCopyright 2018-2020 BasicSR Authors\n\n\nThis BasicSR project is released under the Apache 2.0 license.\n\nA copy of Apache 2.0 is included in this file.\n\nStyleGAN2\nThe codes are modified from the repository stylegan2-pytorch. Many thanks to the author - Kim Seonghyeon 😊 for translating from the official TensorFlow codes to PyTorch ones. Here is the license of stylegan2-pytorch.\nThe official repository is https://github.com/NVlabs/stylegan2, and here is the NVIDIA license.\nDFDNet\nThe codes are largely modified from the repository DFDNet. Their license is Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\n\nTerms of the Nvidia License:\n---------------------------------------------\n\n1. 
Definitions\n\n\"Licensor\" means any person or entity that distributes its Work.\n\n\"Software\" means the original work of authorship made available under\nthis License.\n\n\"Work\" means the Software and any additions to or derivative works of\nthe Software that are made available under this License.\n\n\"Nvidia Processors\" means any central processing unit (CPU), graphics\nprocessing unit (GPU), field-programmable gate array (FPGA),\napplication-specific integrated circuit (ASIC) or any combination\nthereof designed, made, sold, or provided by Nvidia or its affiliates.\n\nThe terms \"reproduce,\" \"reproduction,\" \"derivative works,\" and\n\"distribution\" have the meaning as provided under U.S. copyright law;\nprovided, however, that for the purposes of this License, derivative\nworks shall not include works that remain separable from, or merely\nlink (or bind by name) to the interfaces of, the Work.\n\nWorks, including the Software, are \"made available\" under this License\nby including in or with the Work either (a) a copyright notice\nreferencing the applicability of this License to the Work, or (b) a\ncopy of this License.\n\n2. License Grants\n\n    2.1 Copyright Grant. Subject to the terms and conditions of this\n    License, each Licensor grants to you a perpetual, worldwide,\n    non-exclusive, royalty-free, copyright license to reproduce,\n    prepare derivative works of, publicly display, publicly perform,\n    sublicense and distribute its Work and any resulting derivative\n    works in any form.\n\n3. Limitations\n\n    3.1 Redistribution. You may reproduce or distribute the Work only\n    if (a) you do so under this License, (b) you include a complete\n    copy of this License with your distribution, and (c) you retain\n    without modification any copyright, patent, trademark, or\n    attribution notices that are present in the Work.\n\n    3.2 Derivative Works. 
You may specify that additional or different\n    terms apply to the use, reproduction, and distribution of your\n    derivative works of the Work (\"Your Terms\") only if (a) Your Terms\n    provide that the use limitation in Section 3.3 applies to your\n    derivative works, and (b) you identify the specific derivative\n    works that are subject to Your Terms. Notwithstanding Your Terms,\n    this License (including the redistribution requirements in Section\n    3.1) will continue to apply to the Work itself.\n\n    3.3 Use Limitation. The Work and any derivative works thereof only\n    may be used or intended for use non-commercially. The Work or\n    derivative works thereof may be used or intended for use by Nvidia\n    or its affiliates commercially or non-commercially. As used herein,\n    \"non-commercially\" means for research or evaluation purposes only.\n\n    3.4 Patent Claims. If you bring or threaten to bring a patent claim\n    against any Licensor (including any claim, cross-claim or\n    counterclaim in a lawsuit) to enforce any patents that you allege\n    are infringed by any Work, then your rights under this License from\n    such Licensor (including the grants in Sections 2.1 and 2.2) will\n    terminate immediately.\n\n    3.5 Trademarks. This License does not grant any rights to use any\n    Licensor's or its affiliates' names, logos, or trademarks, except\n    as necessary to reproduce the notices described in this License.\n\n    3.6 Termination. If you violate any term of this License, then your\n    rights under this License (including the grants in Sections 2.1 and\n    2.2) will terminate immediately.\n\n4. Disclaimer of Warranty.\n\nTHE WORK IS PROVIDED \"AS IS\" WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR\nNON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER\nTHIS LICENSE.\n\n5. 
Limitation of Liability.\n\nEXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL\nTHEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE\nSHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT,\nINDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF\nOR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK\n(INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION,\nLOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER\nCOMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF\nTHE POSSIBILITY OF SUCH DAMAGES.\n\nMIT License\n\nCopyright (c) 2019 Kim Seonghyeon\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n\n\nOpen Source Software licensed under the BSD 3-Clause license:\n---------------------------------------------\n1. torchvision\nCopyright (c) Soumith Chintala 2016,\nAll rights reserved.\n\n2. 
torch\nCopyright (c) 2016-     Facebook, Inc            (Adam Paszke)\nCopyright (c) 2014-     Facebook, Inc            (Soumith Chintala)\nCopyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)\nCopyright (c) 2012-2014 Deepmind Technologies    (Koray Kavukcuoglu)\nCopyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)\nCopyright (c) 2011-2013 NYU                      (Clement Farabet)\nCopyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)\nCopyright (c) 2006      Idiap Research Institute (Samy Bengio)\nCopyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)\n\n\nTerms of the BSD 3-Clause License:\n---------------------------------------------\nRedistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.\n\n3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\n\nOpen Source Software licensed under the BSD 3-Clause License and Other Licenses of the Third-Party Components therein:\n---------------------------------------------\n1. numpy\nCopyright (c) 2005-2020, NumPy Developers.\nAll rights reserved.\n\nA copy of BSD 3-Clause License is included in this file.\n\nThe NumPy repository and source distributions bundle several libraries that are\ncompatibly licensed.  We list these here.\n\nName: Numpydoc\nFiles: doc/sphinxext/numpydoc/*\nLicense: BSD-2-Clause\n  For details, see doc/sphinxext/LICENSE.txt\n\nName: scipy-sphinx-theme\nFiles: doc/scipy-sphinx-theme/*\nLicense: BSD-3-Clause AND PSF-2.0 AND Apache-2.0\n  For details, see doc/scipy-sphinx-theme/LICENSE.txt\n\nName: lapack-lite\nFiles: numpy/linalg/lapack_lite/*\nLicense: BSD-3-Clause\n  For details, see numpy/linalg/lapack_lite/LICENSE.txt\n\nName: tempita\nFiles: tools/npy_tempita/*\nLicense: MIT\n  For details, see tools/npy_tempita/license.txt\n\nName: dragon4\nFiles: numpy/core/src/multiarray/dragon4.c\nLicense: MIT\n  For license text, see numpy/core/src/multiarray/dragon4.c\n\n\n\nOpen Source Software licensed under the MIT license:\n---------------------------------------------\n1. facexlib\nCopyright (c) 2020 Xintao Wang\n\n2. 
opencv-python\nCopyright (c) Olli-Pekka Heinisuo\nPlease note that only files in cv2 package are used.\n\n\nTerms of the MIT License:\n---------------------------------------------\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\n\nOpen Source Software licensed under the MIT license and Other Licenses of the Third-Party Components therein:\n---------------------------------------------\n1. 
tqdm\nCopyright (c) 2013 noamraph\n\n`tqdm` is a product of collaborative work.\nUnless otherwise stated, all authors (see commit logs) retain copyright\nfor their respective work, and release the work under the MIT licence\n(text below).\n\nExceptions or notable authors are listed below\nin reverse chronological order:\n\n* files: *\n  MPLv2.0 2015-2020 (c) Casper da Costa-Luis\n  [casperdcl](https://github.com/casperdcl).\n* files: tqdm/_tqdm.py\n  MIT 2016 (c) [PR #96] on behalf of Google Inc.\n* files: tqdm/_tqdm.py setup.py README.rst MANIFEST.in .gitignore\n  MIT 2013 (c) Noam Yorav-Raphael, original author.\n\n[PR #96]: https://github.com/tqdm/tqdm/pull/96\n\n\nMozilla Public Licence (MPL) v. 2.0 - Exhibit A\n-----------------------------------------------\n\nThis Source Code Form is subject to the terms of the\nMozilla Public License, v. 2.0.\nIf a copy of the MPL was not distributed with this file,\nYou can obtain one at https://mozilla.org/MPL/2.0/.\n\n\nMIT License (MIT)\n-----------------\n\nCopyright (c) 2013 noamraph\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of\nthe Software, and to permit persons to whom the Software is furnished to do so,\nsubject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS\nFOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR\nCOPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER\nIN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN\nCONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/__init__.py",
    "content": "# flake8: noqa\n\nfrom .archs import *\nfrom .data import *\nfrom .models import *\nfrom .utils import *\n\n# from .version import *\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/archs/__init__.py",
    "content": "import importlib\nfrom basicsr.utils import scandir\nfrom os import path as osp\n\n# automatically scan and import arch modules for registry\n# scan all the files that end with '_arch.py' under the archs folder\narch_folder = osp.dirname(osp.abspath(__file__))\narch_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(arch_folder) if v.endswith('_arch.py')]\n# import all the arch modules\n_arch_modules = [importlib.import_module(f'gfpgan.archs.{file_name}') for file_name in arch_filenames]\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/archs/arcface_arch.py",
    "content": "import torch.nn as nn\nfrom basicsr.utils.registry import ARCH_REGISTRY\n\n\ndef conv3x3(inplanes, outplanes, stride=1):\n    \"\"\"A simple wrapper for 3x3 convolution with padding.\n\n    Args:\n        inplanes (int): Channel number of inputs.\n        outplanes (int): Channel number of outputs.\n        stride (int): Stride in convolution. Default: 1.\n    \"\"\"\n    return nn.Conv2d(inplanes, outplanes, kernel_size=3, stride=stride, padding=1, bias=False)\n\n\nclass BasicBlock(nn.Module):\n    \"\"\"Basic residual block used in the ResNetArcFace architecture.\n\n    Args:\n        inplanes (int): Channel number of inputs.\n        planes (int): Channel number of outputs.\n        stride (int): Stride in convolution. Default: 1.\n        downsample (nn.Module): The downsample module. Default: None.\n    \"\"\"\n    expansion = 1  # output channel expansion ratio\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(BasicBlock, self).__init__()\n        self.conv1 = conv3x3(inplanes, planes, stride)\n        self.bn1 = nn.BatchNorm2d(planes)\n        self.relu = nn.ReLU(inplace=True)\n        self.conv2 = conv3x3(planes, planes)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.relu(out)\n\n        return out\n\n\nclass IRBlock(nn.Module):\n    \"\"\"Improved residual block (IR Block) used in the ResNetArcFace architecture.\n\n    Args:\n        inplanes (int): Channel number of inputs.\n        planes (int): Channel number of outputs.\n        stride (int): Stride in convolution. 
Default: 1.\n        downsample (nn.Module): The downsample module. Default: None.\n        use_se (bool): Whether use the SEBlock (squeeze and excitation block). Default: True.\n    \"\"\"\n    expansion = 1  # output channel expansion ratio\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None, use_se=True):\n        super(IRBlock, self).__init__()\n        self.bn0 = nn.BatchNorm2d(inplanes)\n        self.conv1 = conv3x3(inplanes, inplanes)\n        self.bn1 = nn.BatchNorm2d(inplanes)\n        self.prelu = nn.PReLU()\n        self.conv2 = conv3x3(inplanes, planes, stride)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.downsample = downsample\n        self.stride = stride\n        self.use_se = use_se\n        if self.use_se:\n            self.se = SEBlock(planes)\n\n    def forward(self, x):\n        residual = x\n        out = self.bn0(x)\n        out = self.conv1(out)\n        out = self.bn1(out)\n        out = self.prelu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        if self.use_se:\n            out = self.se(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.prelu(out)\n\n        return out\n\n\nclass Bottleneck(nn.Module):\n    \"\"\"Bottleneck block used in the ResNetArcFace architecture.\n\n    Args:\n        inplanes (int): Channel number of inputs.\n        planes (int): Channel number of outputs.\n        stride (int): Stride in convolution. Default: 1.\n        downsample (nn.Module): The downsample module. 
Default: None.\n    \"\"\"\n    expansion = 4  # output channel expansion ratio\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(Bottleneck, self).__init__()\n        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(planes)\n        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)\n        self.bn3 = nn.BatchNorm2d(planes * self.expansion)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.relu(out)\n\n        return out\n\n\nclass SEBlock(nn.Module):\n    \"\"\"The squeeze-and-excitation block (SEBlock) used in the IRBlock.\n\n    Args:\n        channel (int): Channel number of inputs.\n        reduction (int): Channel reduction ratio. 
Default: 16.\n    \"\"\"\n\n    def __init__(self, channel, reduction=16):\n        super(SEBlock, self).__init__()\n        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # pool to 1x1 without spatial information\n        self.fc = nn.Sequential(\n            nn.Linear(channel, channel // reduction), nn.PReLU(), nn.Linear(channel // reduction, channel),\n            nn.Sigmoid())\n\n    def forward(self, x):\n        b, c, _, _ = x.size()\n        y = self.avg_pool(x).view(b, c)\n        y = self.fc(y).view(b, c, 1, 1)\n        return x * y\n\n\n@ARCH_REGISTRY.register()\nclass ResNetArcFace(nn.Module):\n    \"\"\"ArcFace with ResNet architectures.\n\n    Ref: ArcFace: Additive Angular Margin Loss for Deep Face Recognition.\n\n    Args:\n        block (str): Block used in the ArcFace architecture.\n        layers (tuple(int)): Block numbers in each layer.\n        use_se (bool): Whether use the SEBlock (squeeze and excitation block). Default: True.\n    \"\"\"\n\n    def __init__(self, block, layers, use_se=True):\n        if block == 'IRBlock':\n            block = IRBlock\n        self.inplanes = 64\n        self.use_se = use_se\n        super(ResNetArcFace, self).__init__()\n\n        self.conv1 = nn.Conv2d(1, 64, kernel_size=3, padding=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(64)\n        self.prelu = nn.PReLU()\n        self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)\n        self.layer1 = self._make_layer(block, 64, layers[0])\n        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\n        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)\n        self.bn4 = nn.BatchNorm2d(512)\n        self.dropout = nn.Dropout()\n        self.fc5 = nn.Linear(512 * 8 * 8, 512)\n        self.bn5 = nn.BatchNorm1d(512)\n\n        # initialization\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                
nn.init.xavier_normal_(m.weight)\n            elif isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n            elif isinstance(m, nn.Linear):\n                nn.init.xavier_normal_(m.weight)\n                nn.init.constant_(m.bias, 0)\n\n    def _make_layer(self, block, planes, num_blocks, stride=1):\n        downsample = None\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(planes * block.expansion),\n            )\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample, use_se=self.use_se))\n        self.inplanes = planes\n        for _ in range(1, num_blocks):\n            layers.append(block(self.inplanes, planes, use_se=self.use_se))\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.prelu(x)\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n        x = self.layer2(x)\n        x = self.layer3(x)\n        x = self.layer4(x)\n        x = self.bn4(x)\n        x = self.dropout(x)\n        x = x.view(x.size(0), -1)\n        x = self.fc5(x)\n        x = self.bn5(x)\n\n        return x\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/archs/gfpgan_bilinear_arch.py",
    "content": "import math\nimport random\nimport torch\nfrom basicsr.utils.registry import ARCH_REGISTRY\nfrom torch import nn\n\nfrom .gfpganv1_arch import ResUpBlock\nfrom .stylegan2_bilinear_arch import (ConvLayer, EqualConv2d, EqualLinear, ResBlock, ScaledLeakyReLU,\n                                      StyleGAN2GeneratorBilinear)\n\n\nclass StyleGAN2GeneratorBilinearSFT(StyleGAN2GeneratorBilinear):\n    \"\"\"StyleGAN2 Generator with SFT modulation (Spatial Feature Transform).\n\n    It is the bilinear version. It does not use the complicated UpFirDnSmooth function that is not friendly for\n    deployment. It can be easily converted to the clean version: StyleGAN2GeneratorCSFT.\n\n    Args:\n        out_size (int): The spatial size of outputs.\n        num_style_feat (int): Channel number of style features. Default: 512.\n        num_mlp (int): Layer number of MLP style layers. Default: 8.\n        channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.\n        lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.\n        narrow (float): The narrow ratio for channels. Default: 1.\n        sft_half (bool): Whether to apply SFT on half of the input channels. 
Default: False.\n    \"\"\"\n\n    def __init__(self,\n                 out_size,\n                 num_style_feat=512,\n                 num_mlp=8,\n                 channel_multiplier=2,\n                 lr_mlp=0.01,\n                 narrow=1,\n                 sft_half=False):\n        super(StyleGAN2GeneratorBilinearSFT, self).__init__(\n            out_size,\n            num_style_feat=num_style_feat,\n            num_mlp=num_mlp,\n            channel_multiplier=channel_multiplier,\n            lr_mlp=lr_mlp,\n            narrow=narrow)\n        self.sft_half = sft_half\n\n    def forward(self,\n                styles,\n                conditions,\n                input_is_latent=False,\n                noise=None,\n                randomize_noise=True,\n                truncation=1,\n                truncation_latent=None,\n                inject_index=None,\n                return_latents=False):\n        \"\"\"Forward function for StyleGAN2GeneratorBilinearSFT.\n\n        Args:\n            styles (list[Tensor]): Sample codes of styles.\n            conditions (list[Tensor]): SFT conditions to generators.\n            input_is_latent (bool): Whether input is latent style. Default: False.\n            noise (Tensor | None): Input noise or None. Default: None.\n            randomize_noise (bool): Randomize noise, used when 'noise' is False. Default: True.\n            truncation (float): The truncation ratio. Default: 1.\n            truncation_latent (Tensor | None): The truncation latent tensor. Default: None.\n            inject_index (int | None): The injection index for mixing noise. Default: None.\n            return_latents (bool): Whether to return style latents. 
Default: False.\n        \"\"\"\n        # style codes -> latents with Style MLP layer\n        if not input_is_latent:\n            styles = [self.style_mlp(s) for s in styles]\n        # noises\n        if noise is None:\n            if randomize_noise:\n                noise = [None] * self.num_layers  # for each style conv layer\n            else:  # use the stored noise\n                noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)]\n        # style truncation\n        if truncation < 1:\n            style_truncation = []\n            for style in styles:\n                style_truncation.append(truncation_latent + truncation * (style - truncation_latent))\n            styles = style_truncation\n        # get style latents with injection\n        if len(styles) == 1:\n            inject_index = self.num_latent\n\n            if styles[0].ndim < 3:\n                # repeat latent code for all the layers\n                latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            else:  # used for encoder with different latent code for each layer\n                latent = styles[0]\n        elif len(styles) == 2:  # mixing noises\n            if inject_index is None:\n                inject_index = random.randint(1, self.num_latent - 1)\n            latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)\n            latent = torch.cat([latent1, latent2], 1)\n\n        # main generation\n        out = self.constant_input(latent.shape[0])\n        out = self.style_conv1(out, latent[:, 0], noise=noise[0])\n        skip = self.to_rgb1(out, latent[:, 1])\n\n        i = 1\n        for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2],\n                                                        noise[2::2], self.to_rgbs):\n            out = conv1(out, latent[:, i], noise=noise1)\n\n   
         # the conditions may have fewer levels\n            if i < len(conditions):\n                # SFT part to combine the conditions\n                if self.sft_half:  # only apply SFT to half of the channels\n                    out_same, out_sft = torch.split(out, int(out.size(1) // 2), dim=1)\n                    out_sft = out_sft * conditions[i - 1] + conditions[i]\n                    out = torch.cat([out_same, out_sft], dim=1)\n                else:  # apply SFT to all the channels\n                    out = out * conditions[i - 1] + conditions[i]\n\n            out = conv2(out, latent[:, i + 1], noise=noise2)\n            skip = to_rgb(out, latent[:, i + 2], skip)  # feature back to the rgb space\n            i += 2\n\n        image = skip\n\n        if return_latents:\n            return image, latent\n        else:\n            return image, None\n\n\n@ARCH_REGISTRY.register()\nclass GFPGANBilinear(nn.Module):\n    \"\"\"The GFPGAN architecture: Unet + StyleGAN2 decoder with SFT.\n\n    It is the bilinear version and it does not use the complicated UpFirDnSmooth function that is not friendly for\n    deployment. It can be easily converted to the clean version: GFPGANv1Clean.\n\n\n    Ref: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior.\n\n    Args:\n        out_size (int): The spatial size of outputs.\n        num_style_feat (int): Channel number of style features. Default: 512.\n        channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.\n        decoder_load_path (str): The path to the pre-trained decoder model (usually, the StyleGAN2). Default: None.\n        fix_decoder (bool): Whether to fix the decoder. Default: True.\n\n        num_mlp (int): Layer number of MLP style layers. Default: 8.\n        lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.\n        input_is_latent (bool): Whether input is latent style. 
Default: False.\n        different_w (bool): Whether to use different latent w for different layers. Default: False.\n        narrow (float): The narrow ratio for channels. Default: 1.\n        sft_half (bool): Whether to apply SFT on half of the input channels. Default: False.\n    \"\"\"\n\n    def __init__(\n            self,\n            out_size,\n            num_style_feat=512,\n            channel_multiplier=1,\n            decoder_load_path=None,\n            fix_decoder=True,\n            # for stylegan decoder\n            num_mlp=8,\n            lr_mlp=0.01,\n            input_is_latent=False,\n            different_w=False,\n            narrow=1,\n            sft_half=False):\n\n        super(GFPGANBilinear, self).__init__()\n        self.input_is_latent = input_is_latent\n        self.different_w = different_w\n        self.num_style_feat = num_style_feat\n\n        unet_narrow = narrow * 0.5  # by default, use a half of input channels\n        channels = {\n            '4': int(512 * unet_narrow),\n            '8': int(512 * unet_narrow),\n            '16': int(512 * unet_narrow),\n            '32': int(512 * unet_narrow),\n            '64': int(256 * channel_multiplier * unet_narrow),\n            '128': int(128 * channel_multiplier * unet_narrow),\n            '256': int(64 * channel_multiplier * unet_narrow),\n            '512': int(32 * channel_multiplier * unet_narrow),\n            '1024': int(16 * channel_multiplier * unet_narrow)\n        }\n\n        self.log_size = int(math.log(out_size, 2))\n        first_out_size = 2**(int(math.log(out_size, 2)))\n\n        self.conv_body_first = ConvLayer(3, channels[f'{first_out_size}'], 1, bias=True, activate=True)\n\n        # downsample\n        in_channels = channels[f'{first_out_size}']\n        self.conv_body_down = nn.ModuleList()\n        for i in range(self.log_size, 2, -1):\n            out_channels = channels[f'{2**(i - 1)}']\n            self.conv_body_down.append(ResBlock(in_channels, 
out_channels))\n            in_channels = out_channels\n\n        self.final_conv = ConvLayer(in_channels, channels['4'], 3, bias=True, activate=True)\n\n        # upsample\n        in_channels = channels['4']\n        self.conv_body_up = nn.ModuleList()\n        for i in range(3, self.log_size + 1):\n            out_channels = channels[f'{2**i}']\n            self.conv_body_up.append(ResUpBlock(in_channels, out_channels))\n            in_channels = out_channels\n\n        # to RGB\n        self.toRGB = nn.ModuleList()\n        for i in range(3, self.log_size + 1):\n            self.toRGB.append(EqualConv2d(channels[f'{2**i}'], 3, 1, stride=1, padding=0, bias=True, bias_init_val=0))\n\n        if different_w:\n            linear_out_channel = (int(math.log(out_size, 2)) * 2 - 2) * num_style_feat\n        else:\n            linear_out_channel = num_style_feat\n\n        self.final_linear = EqualLinear(\n            channels['4'] * 4 * 4, linear_out_channel, bias=True, bias_init_val=0, lr_mul=1, activation=None)\n\n        # the decoder: stylegan2 generator with SFT modulations\n        self.stylegan_decoder = StyleGAN2GeneratorBilinearSFT(\n            out_size=out_size,\n            num_style_feat=num_style_feat,\n            num_mlp=num_mlp,\n            channel_multiplier=channel_multiplier,\n            lr_mlp=lr_mlp,\n            narrow=narrow,\n            sft_half=sft_half)\n\n        # load pre-trained stylegan2 model if necessary\n        if decoder_load_path:\n            self.stylegan_decoder.load_state_dict(\n                torch.load(decoder_load_path, map_location=lambda storage, loc: storage)['params_ema'])\n        # fix decoder without updating params\n        if fix_decoder:\n            for _, param in self.stylegan_decoder.named_parameters():\n                param.requires_grad = False\n\n        # for SFT modulations (scale and shift)\n        self.condition_scale = nn.ModuleList()\n        self.condition_shift = nn.ModuleList()\n        for i 
in range(3, self.log_size + 1):\n            out_channels = channels[f'{2**i}']\n            if sft_half:\n                sft_out_channels = out_channels\n            else:\n                sft_out_channels = out_channels * 2\n            self.condition_scale.append(\n                nn.Sequential(\n                    EqualConv2d(out_channels, out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=0),\n                    ScaledLeakyReLU(0.2),\n                    EqualConv2d(out_channels, sft_out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=1)))\n            self.condition_shift.append(\n                nn.Sequential(\n                    EqualConv2d(out_channels, out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=0),\n                    ScaledLeakyReLU(0.2),\n                    EqualConv2d(out_channels, sft_out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=0)))\n\n    def forward(self, x, return_latents=False, return_rgb=True, randomize_noise=True):\n        \"\"\"Forward function for GFPGANBilinear.\n\n        Args:\n            x (Tensor): Input images.\n            return_latents (bool): Whether to return style latents. Default: False.\n            return_rgb (bool): Whether return intermediate rgb images. Default: True.\n            randomize_noise (bool): Randomize noise, used when 'noise' is False. 
Default: True.\n        \"\"\"\n        conditions = []\n        unet_skips = []\n        out_rgbs = []\n\n        # encoder\n        feat = self.conv_body_first(x)\n        for i in range(self.log_size - 2):\n            feat = self.conv_body_down[i](feat)\n            unet_skips.insert(0, feat)\n\n        feat = self.final_conv(feat)\n\n        # style code\n        style_code = self.final_linear(feat.view(feat.size(0), -1))\n        if self.different_w:\n            style_code = style_code.view(style_code.size(0), -1, self.num_style_feat)\n\n        # decode\n        for i in range(self.log_size - 2):\n            # add unet skip\n            feat = feat + unet_skips[i]\n            # ResUpLayer\n            feat = self.conv_body_up[i](feat)\n            # generate scale and shift for SFT layers\n            scale = self.condition_scale[i](feat)\n            conditions.append(scale.clone())\n            shift = self.condition_shift[i](feat)\n            conditions.append(shift.clone())\n            # generate rgb images\n            if return_rgb:\n                out_rgbs.append(self.toRGB[i](feat))\n\n        # decoder\n        image, _ = self.stylegan_decoder([style_code],\n                                         conditions,\n                                         return_latents=return_latents,\n                                         input_is_latent=self.input_is_latent,\n                                         randomize_noise=randomize_noise)\n\n        return image, out_rgbs\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/archs/gfpganv1_arch.py",
    "content": "import math\nimport random\nimport torch\nfrom basicsr.archs.stylegan2_arch import (ConvLayer, EqualConv2d, EqualLinear, ResBlock, ScaledLeakyReLU,\n                                          StyleGAN2Generator)\nfrom basicsr.ops.fused_act import FusedLeakyReLU\nfrom basicsr.utils.registry import ARCH_REGISTRY\nfrom torch import nn\nfrom torch.nn import functional as F\n\n\nclass StyleGAN2GeneratorSFT(StyleGAN2Generator):\n    \"\"\"StyleGAN2 Generator with SFT modulation (Spatial Feature Transform).\n\n    Args:\n        out_size (int): The spatial size of outputs.\n        num_style_feat (int): Channel number of style features. Default: 512.\n        num_mlp (int): Layer number of MLP style layers. Default: 8.\n        channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.\n        resample_kernel (list[int]): A list indicating the 1D resample kernel magnitude. A cross production will be\n            applied to extent 1D resample kernel to 2D resample kernel. Default: (1, 3, 3, 1).\n        lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.\n        narrow (float): The narrow ratio for channels. Default: 1.\n        sft_half (bool): Whether to apply SFT on half of the input channels. 
Default: False.\n    \"\"\"\n\n    def __init__(self,\n                 out_size,\n                 num_style_feat=512,\n                 num_mlp=8,\n                 channel_multiplier=2,\n                 resample_kernel=(1, 3, 3, 1),\n                 lr_mlp=0.01,\n                 narrow=1,\n                 sft_half=False):\n        super(StyleGAN2GeneratorSFT, self).__init__(\n            out_size,\n            num_style_feat=num_style_feat,\n            num_mlp=num_mlp,\n            channel_multiplier=channel_multiplier,\n            resample_kernel=resample_kernel,\n            lr_mlp=lr_mlp,\n            narrow=narrow)\n        self.sft_half = sft_half\n\n    def forward(self,\n                styles,\n                conditions,\n                input_is_latent=False,\n                noise=None,\n                randomize_noise=True,\n                truncation=1,\n                truncation_latent=None,\n                inject_index=None,\n                return_latents=False):\n        \"\"\"Forward function for StyleGAN2GeneratorSFT.\n\n        Args:\n            styles (list[Tensor]): Sample codes of styles.\n            conditions (list[Tensor]): SFT conditions to generators.\n            input_is_latent (bool): Whether input is latent style. Default: False.\n            noise (Tensor | None): Input noise or None. Default: None.\n            randomize_noise (bool): Randomize noise, used when 'noise' is False. Default: True.\n            truncation (float): The truncation ratio. Default: 1.\n            truncation_latent (Tensor | None): The truncation latent tensor. Default: None.\n            inject_index (int | None): The injection index for mixing noise. Default: None.\n            return_latents (bool): Whether to return style latents. 
Default: False.\n        \"\"\"\n        # style codes -> latents with Style MLP layer\n        if not input_is_latent:\n            styles = [self.style_mlp(s) for s in styles]\n        # noises\n        if noise is None:\n            if randomize_noise:\n                noise = [None] * self.num_layers  # for each style conv layer\n            else:  # use the stored noise\n                noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)]\n        # style truncation\n        if truncation < 1:\n            style_truncation = []\n            for style in styles:\n                style_truncation.append(truncation_latent + truncation * (style - truncation_latent))\n            styles = style_truncation\n        # get style latents with injection\n        if len(styles) == 1:\n            inject_index = self.num_latent\n\n            if styles[0].ndim < 3:\n                # repeat latent code for all the layers\n                latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            else:  # used for encoder with different latent code for each layer\n                latent = styles[0]\n        elif len(styles) == 2:  # mixing noises\n            if inject_index is None:\n                inject_index = random.randint(1, self.num_latent - 1)\n            latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)\n            latent = torch.cat([latent1, latent2], 1)\n\n        # main generation\n        out = self.constant_input(latent.shape[0])\n        out = self.style_conv1(out, latent[:, 0], noise=noise[0])\n        skip = self.to_rgb1(out, latent[:, 1])\n\n        i = 1\n        for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2],\n                                                        noise[2::2], self.to_rgbs):\n            out = conv1(out, latent[:, i], noise=noise1)\n\n   
         # the conditions may have fewer levels\n            if i < len(conditions):\n                # SFT part to combine the conditions\n                if self.sft_half:  # only apply SFT to half of the channels\n                    out_same, out_sft = torch.split(out, int(out.size(1) // 2), dim=1)\n                    out_sft = out_sft * conditions[i - 1] + conditions[i]\n                    out = torch.cat([out_same, out_sft], dim=1)\n                else:  # apply SFT to all the channels\n                    out = out * conditions[i - 1] + conditions[i]\n\n            out = conv2(out, latent[:, i + 1], noise=noise2)\n            skip = to_rgb(out, latent[:, i + 2], skip)  # feature back to the rgb space\n            i += 2\n\n        image = skip\n\n        if return_latents:\n            return image, latent\n        else:\n            return image, None\n\n\nclass ConvUpLayer(nn.Module):\n    \"\"\"Convolutional upsampling layer. It uses a bilinear upsampler + Conv.\n\n    Args:\n        in_channels (int): Channel number of the input.\n        out_channels (int): Channel number of the output.\n        kernel_size (int): Size of the convolving kernel.\n        stride (int): Stride of the convolution. Default: 1.\n        padding (int): Zero-padding added to both sides of the input. Default: 0.\n        bias (bool): If ``True``, adds a learnable bias to the output. Default: ``True``.\n        bias_init_val (float): Bias initialized value. Default: 0.\n        activate (bool): Whether to use activation. 
Default: True.\n    \"\"\"\n\n    def __init__(self,\n                 in_channels,\n                 out_channels,\n                 kernel_size,\n                 stride=1,\n                 padding=0,\n                 bias=True,\n                 bias_init_val=0,\n                 activate=True):\n        super(ConvUpLayer, self).__init__()\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.kernel_size = kernel_size\n        self.stride = stride\n        self.padding = padding\n        # self.scale is used to scale the convolution weights, which is related to the common initializations.\n        self.scale = 1 / math.sqrt(in_channels * kernel_size**2)\n\n        self.weight = nn.Parameter(torch.randn(out_channels, in_channels, kernel_size, kernel_size))\n\n        if bias and not activate:\n            self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val))\n        else:\n            self.register_parameter('bias', None)\n\n        # activation\n        if activate:\n            if bias:\n                self.activation = FusedLeakyReLU(out_channels)\n            else:\n                self.activation = ScaledLeakyReLU(0.2)\n        else:\n            self.activation = None\n\n    def forward(self, x):\n        # bilinear upsample\n        out = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)\n        # conv\n        out = F.conv2d(\n            out,\n            self.weight * self.scale,\n            bias=self.bias,\n            stride=self.stride,\n            padding=self.padding,\n        )\n        # activation\n        if self.activation is not None:\n            out = self.activation(out)\n        return out\n\n\nclass ResUpBlock(nn.Module):\n    \"\"\"Residual block with upsampling.\n\n    Args:\n        in_channels (int): Channel number of the input.\n        out_channels (int): Channel number of the output.\n    \"\"\"\n\n    def __init__(self, in_channels, 
out_channels):\n        super(ResUpBlock, self).__init__()\n\n        self.conv1 = ConvLayer(in_channels, in_channels, 3, bias=True, activate=True)\n        self.conv2 = ConvUpLayer(in_channels, out_channels, 3, stride=1, padding=1, bias=True, activate=True)\n        self.skip = ConvUpLayer(in_channels, out_channels, 1, bias=False, activate=False)\n\n    def forward(self, x):\n        out = self.conv1(x)\n        out = self.conv2(out)\n        skip = self.skip(x)\n        out = (out + skip) / math.sqrt(2)\n        return out\n\n\n@ARCH_REGISTRY.register()\nclass GFPGANv1(nn.Module):\n    \"\"\"The GFPGAN architecture: Unet + StyleGAN2 decoder with SFT.\n\n    Ref: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior.\n\n    Args:\n        out_size (int): The spatial size of outputs.\n        num_style_feat (int): Channel number of style features. Default: 512.\n        channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.\n        resample_kernel (list[int]): A list indicating the 1D resample kernel magnitude. A cross product will be\n            applied to extend the 1D resample kernel to a 2D resample kernel. Default: (1, 3, 3, 1).\n        decoder_load_path (str): The path to the pre-trained decoder model (usually, the StyleGAN2). Default: None.\n        fix_decoder (bool): Whether to fix the decoder. Default: True.\n\n        num_mlp (int): Layer number of MLP style layers. Default: 8.\n        lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.\n        input_is_latent (bool): Whether input is latent style. Default: False.\n        different_w (bool): Whether to use different latent w for different layers. Default: False.\n        narrow (float): The narrow ratio for channels. Default: 1.\n        sft_half (bool): Whether to apply SFT on half of the input channels. 
Default: False.\n    \"\"\"\n\n    def __init__(\n            self,\n            out_size,\n            num_style_feat=512,\n            channel_multiplier=1,\n            resample_kernel=(1, 3, 3, 1),\n            decoder_load_path=None,\n            fix_decoder=True,\n            # for stylegan decoder\n            num_mlp=8,\n            lr_mlp=0.01,\n            input_is_latent=False,\n            different_w=False,\n            narrow=1,\n            sft_half=False):\n\n        super(GFPGANv1, self).__init__()\n        self.input_is_latent = input_is_latent\n        self.different_w = different_w\n        self.num_style_feat = num_style_feat\n\n        unet_narrow = narrow * 0.5  # by default, use a half of input channels\n        channels = {\n            '4': int(512 * unet_narrow),\n            '8': int(512 * unet_narrow),\n            '16': int(512 * unet_narrow),\n            '32': int(512 * unet_narrow),\n            '64': int(256 * channel_multiplier * unet_narrow),\n            '128': int(128 * channel_multiplier * unet_narrow),\n            '256': int(64 * channel_multiplier * unet_narrow),\n            '512': int(32 * channel_multiplier * unet_narrow),\n            '1024': int(16 * channel_multiplier * unet_narrow)\n        }\n\n        self.log_size = int(math.log(out_size, 2))\n        first_out_size = 2**(int(math.log(out_size, 2)))\n\n        self.conv_body_first = ConvLayer(3, channels[f'{first_out_size}'], 1, bias=True, activate=True)\n\n        # downsample\n        in_channels = channels[f'{first_out_size}']\n        self.conv_body_down = nn.ModuleList()\n        for i in range(self.log_size, 2, -1):\n            out_channels = channels[f'{2**(i - 1)}']\n            self.conv_body_down.append(ResBlock(in_channels, out_channels, resample_kernel))\n            in_channels = out_channels\n\n        self.final_conv = ConvLayer(in_channels, channels['4'], 3, bias=True, activate=True)\n\n        # upsample\n        in_channels = channels['4']\n     
   self.conv_body_up = nn.ModuleList()\n        for i in range(3, self.log_size + 1):\n            out_channels = channels[f'{2**i}']\n            self.conv_body_up.append(ResUpBlock(in_channels, out_channels))\n            in_channels = out_channels\n\n        # to RGB\n        self.toRGB = nn.ModuleList()\n        for i in range(3, self.log_size + 1):\n            self.toRGB.append(EqualConv2d(channels[f'{2**i}'], 3, 1, stride=1, padding=0, bias=True, bias_init_val=0))\n\n        if different_w:\n            linear_out_channel = (int(math.log(out_size, 2)) * 2 - 2) * num_style_feat\n        else:\n            linear_out_channel = num_style_feat\n\n        self.final_linear = EqualLinear(\n            channels['4'] * 4 * 4, linear_out_channel, bias=True, bias_init_val=0, lr_mul=1, activation=None)\n\n        # the decoder: stylegan2 generator with SFT modulations\n        self.stylegan_decoder = StyleGAN2GeneratorSFT(\n            out_size=out_size,\n            num_style_feat=num_style_feat,\n            num_mlp=num_mlp,\n            channel_multiplier=channel_multiplier,\n            resample_kernel=resample_kernel,\n            lr_mlp=lr_mlp,\n            narrow=narrow,\n            sft_half=sft_half)\n\n        # load pre-trained stylegan2 model if necessary\n        if decoder_load_path:\n            self.stylegan_decoder.load_state_dict(\n                torch.load(decoder_load_path, map_location=lambda storage, loc: storage)['params_ema'])\n        # fix decoder without updating params\n        if fix_decoder:\n            for _, param in self.stylegan_decoder.named_parameters():\n                param.requires_grad = False\n\n        # for SFT modulations (scale and shift)\n        self.condition_scale = nn.ModuleList()\n        self.condition_shift = nn.ModuleList()\n        for i in range(3, self.log_size + 1):\n            out_channels = channels[f'{2**i}']\n            if sft_half:\n                sft_out_channels = out_channels\n            else:\n   
             sft_out_channels = out_channels * 2\n            self.condition_scale.append(\n                nn.Sequential(\n                    EqualConv2d(out_channels, out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=0),\n                    ScaledLeakyReLU(0.2),\n                    EqualConv2d(out_channels, sft_out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=1)))\n            self.condition_shift.append(\n                nn.Sequential(\n                    EqualConv2d(out_channels, out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=0),\n                    ScaledLeakyReLU(0.2),\n                    EqualConv2d(out_channels, sft_out_channels, 3, stride=1, padding=1, bias=True, bias_init_val=0)))\n\n    def forward(self, x, return_latents=False, return_rgb=True, randomize_noise=True):\n        \"\"\"Forward function for GFPGANv1.\n\n        Args:\n            x (Tensor): Input images.\n            return_latents (bool): Whether to return style latents. Default: False.\n            return_rgb (bool): Whether return intermediate rgb images. Default: True.\n            randomize_noise (bool): Randomize noise, used when 'noise' is False. 
Default: True.\n        \"\"\"\n        conditions = []\n        unet_skips = []\n        out_rgbs = []\n\n        # encoder\n        feat = self.conv_body_first(x)\n        for i in range(self.log_size - 2):\n            feat = self.conv_body_down[i](feat)\n            unet_skips.insert(0, feat)\n\n        feat = self.final_conv(feat)\n\n        # style code\n        style_code = self.final_linear(feat.view(feat.size(0), -1))\n        if self.different_w:\n            style_code = style_code.view(style_code.size(0), -1, self.num_style_feat)\n\n        # decode\n        for i in range(self.log_size - 2):\n            # add unet skip\n            feat = feat + unet_skips[i]\n            # ResUpLayer\n            feat = self.conv_body_up[i](feat)\n            # generate scale and shift for SFT layers\n            scale = self.condition_scale[i](feat)\n            conditions.append(scale.clone())\n            shift = self.condition_shift[i](feat)\n            conditions.append(shift.clone())\n            # generate rgb images\n            if return_rgb:\n                out_rgbs.append(self.toRGB[i](feat))\n\n        # decoder\n        image, _ = self.stylegan_decoder([style_code],\n                                         conditions,\n                                         return_latents=return_latents,\n                                         input_is_latent=self.input_is_latent,\n                                         randomize_noise=randomize_noise)\n\n        return image, out_rgbs\n\n\n@ARCH_REGISTRY.register()\nclass FacialComponentDiscriminator(nn.Module):\n    \"\"\"Facial component (eyes, mouth, nose) discriminator used in GFPGAN.\n    \"\"\"\n\n    def __init__(self):\n        super(FacialComponentDiscriminator, self).__init__()\n        # It now uses a VGG-style architecture with fixed model size\n        self.conv1 = ConvLayer(3, 64, 3, downsample=False, resample_kernel=(1, 3, 3, 1), bias=True, activate=True)\n        self.conv2 = ConvLayer(64, 128, 
3, downsample=True, resample_kernel=(1, 3, 3, 1), bias=True, activate=True)\n        self.conv3 = ConvLayer(128, 128, 3, downsample=False, resample_kernel=(1, 3, 3, 1), bias=True, activate=True)\n        self.conv4 = ConvLayer(128, 256, 3, downsample=True, resample_kernel=(1, 3, 3, 1), bias=True, activate=True)\n        self.conv5 = ConvLayer(256, 256, 3, downsample=False, resample_kernel=(1, 3, 3, 1), bias=True, activate=True)\n        self.final_conv = ConvLayer(256, 1, 3, bias=True, activate=False)\n\n    def forward(self, x, return_feats=False):\n        \"\"\"Forward function for FacialComponentDiscriminator.\n\n        Args:\n            x (Tensor): Input images.\n            return_feats (bool): Whether to return intermediate features. Default: False.\n        \"\"\"\n        feat = self.conv1(x)\n        feat = self.conv3(self.conv2(feat))\n        rlt_feats = []\n        if return_feats:\n            rlt_feats.append(feat.clone())\n        feat = self.conv5(self.conv4(feat))\n        if return_feats:\n            rlt_feats.append(feat.clone())\n        out = self.final_conv(feat)\n\n        if return_feats:\n            return out, rlt_feats\n        else:\n            return out, None\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/archs/gfpganv1_clean_arch.py",
    "content": "import math\nimport random\nimport torch\nfrom basicsr.utils.registry import ARCH_REGISTRY\nfrom torch import nn\nfrom torch.nn import functional as F\n\nfrom .stylegan2_clean_arch import StyleGAN2GeneratorClean\n\n\nclass StyleGAN2GeneratorCSFT(StyleGAN2GeneratorClean):\n    \"\"\"StyleGAN2 Generator with SFT modulation (Spatial Feature Transform).\n\n    It is the clean version without custom compiled CUDA extensions used in StyleGAN2.\n\n    Args:\n        out_size (int): The spatial size of outputs.\n        num_style_feat (int): Channel number of style features. Default: 512.\n        num_mlp (int): Layer number of MLP style layers. Default: 8.\n        channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.\n        narrow (float): The narrow ratio for channels. Default: 1.\n        sft_half (bool): Whether to apply SFT on half of the input channels. Default: False.\n    \"\"\"\n\n    def __init__(self, out_size, num_style_feat=512, num_mlp=8, channel_multiplier=2, narrow=1, sft_half=False):\n        super(StyleGAN2GeneratorCSFT, self).__init__(\n            out_size,\n            num_style_feat=num_style_feat,\n            num_mlp=num_mlp,\n            channel_multiplier=channel_multiplier,\n            narrow=narrow)\n        self.sft_half = sft_half\n\n    def forward(self,\n                styles,\n                conditions,\n                input_is_latent=False,\n                noise=None,\n                randomize_noise=True,\n                truncation=1,\n                truncation_latent=None,\n                inject_index=None,\n                return_latents=False):\n        \"\"\"Forward function for StyleGAN2GeneratorCSFT.\n\n        Args:\n            styles (list[Tensor]): Sample codes of styles.\n            conditions (list[Tensor]): SFT conditions to generators.\n            input_is_latent (bool): Whether input is latent style. 
Default: False.\n            noise (Tensor | None): Input noise or None. Default: None.\n            randomize_noise (bool): Randomize noise, used when 'noise' is False. Default: True.\n            truncation (float): The truncation ratio. Default: 1.\n            truncation_latent (Tensor | None): The truncation latent tensor. Default: None.\n            inject_index (int | None): The injection index for mixing noise. Default: None.\n            return_latents (bool): Whether to return style latents. Default: False.\n        \"\"\"\n        # style codes -> latents with Style MLP layer\n        if not input_is_latent:\n            styles = [self.style_mlp(s) for s in styles]\n        # noises\n        if noise is None:\n            if randomize_noise:\n                noise = [None] * self.num_layers  # for each style conv layer\n            else:  # use the stored noise\n                noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)]\n        # style truncation\n        if truncation < 1:\n            style_truncation = []\n            for style in styles:\n                style_truncation.append(truncation_latent + truncation * (style - truncation_latent))\n            styles = style_truncation\n        # get style latents with injection\n        if len(styles) == 1:\n            inject_index = self.num_latent\n\n            if styles[0].ndim < 3:\n                # repeat latent code for all the layers\n                latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            else:  # used for encoder with different latent code for each layer\n                latent = styles[0]\n        elif len(styles) == 2:  # mixing noises\n            if inject_index is None:\n                inject_index = random.randint(1, self.num_latent - 1)\n            latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)\n            latent = 
torch.cat([latent1, latent2], 1)\n\n        # main generation\n        out = self.constant_input(latent.shape[0])\n        out = self.style_conv1(out, latent[:, 0], noise=noise[0])\n        skip = self.to_rgb1(out, latent[:, 1])\n\n        i = 1\n        for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2],\n                                                        noise[2::2], self.to_rgbs):\n            out = conv1(out, latent[:, i], noise=noise1)\n\n            # the conditions may have fewer levels\n            if i < len(conditions):\n                # SFT part to combine the conditions\n                if self.sft_half:  # only apply SFT to half of the channels\n                    out_same, out_sft = torch.split(out, int(out.size(1) // 2), dim=1)\n                    out_sft = out_sft * conditions[i - 1] + conditions[i]\n                    out = torch.cat([out_same, out_sft], dim=1)\n                else:  # apply SFT to all the channels\n                    out = out * conditions[i - 1] + conditions[i]\n\n            out = conv2(out, latent[:, i + 1], noise=noise2)\n            skip = to_rgb(out, latent[:, i + 2], skip)  # feature back to the rgb space\n            i += 2\n\n        image = skip\n\n        if return_latents:\n            return image, latent\n        else:\n            return image, None\n\n\nclass ResBlock(nn.Module):\n    \"\"\"Residual block with bilinear upsampling/downsampling.\n\n    Args:\n        in_channels (int): Channel number of the input.\n        out_channels (int): Channel number of the output.\n        mode (str): Upsampling/downsampling mode. Options: down | up. 
Default: down.\n    \"\"\"\n\n    def __init__(self, in_channels, out_channels, mode='down'):\n        super(ResBlock, self).__init__()\n\n        self.conv1 = nn.Conv2d(in_channels, in_channels, 3, 1, 1)\n        self.conv2 = nn.Conv2d(in_channels, out_channels, 3, 1, 1)\n        self.skip = nn.Conv2d(in_channels, out_channels, 1, bias=False)\n        if mode == 'down':\n            self.scale_factor = 0.5\n        elif mode == 'up':\n            self.scale_factor = 2\n\n    def forward(self, x):\n        out = F.leaky_relu_(self.conv1(x), negative_slope=0.2)\n        # upsample/downsample\n        out = F.interpolate(out, scale_factor=self.scale_factor, mode='bilinear', align_corners=False)\n        out = F.leaky_relu_(self.conv2(out), negative_slope=0.2)\n        # skip\n        x = F.interpolate(x, scale_factor=self.scale_factor, mode='bilinear', align_corners=False)\n        skip = self.skip(x)\n        out = out + skip\n        return out\n\n\n@ARCH_REGISTRY.register()\nclass GFPGANv1Clean(nn.Module):\n    \"\"\"The GFPGAN architecture: Unet + StyleGAN2 decoder with SFT.\n\n    It is the clean version without custom compiled CUDA extensions used in StyleGAN2.\n\n    Ref: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior.\n\n    Args:\n        out_size (int): The spatial size of outputs.\n        num_style_feat (int): Channel number of style features. Default: 512.\n        channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.\n        decoder_load_path (str): The path to the pre-trained decoder model (usually, the StyleGAN2). Default: None.\n        fix_decoder (bool): Whether to fix the decoder. Default: True.\n\n        num_mlp (int): Layer number of MLP style layers. Default: 8.\n        input_is_latent (bool): Whether input is latent style. Default: False.\n        different_w (bool): Whether to use different latent w for different layers. 
Default: False.\n        narrow (float): The narrow ratio for channels. Default: 1.\n        sft_half (bool): Whether to apply SFT on half of the input channels. Default: False.\n    \"\"\"\n\n    def __init__(\n            self,\n            out_size,\n            num_style_feat=512,\n            channel_multiplier=1,\n            decoder_load_path=None,\n            fix_decoder=True,\n            # for stylegan decoder\n            num_mlp=8,\n            input_is_latent=False,\n            different_w=False,\n            narrow=1,\n            sft_half=False):\n\n        super(GFPGANv1Clean, self).__init__()\n        self.input_is_latent = input_is_latent\n        self.different_w = different_w\n        self.num_style_feat = num_style_feat\n\n        unet_narrow = narrow * 0.5  # by default, use a half of input channels\n        channels = {\n            '4': int(512 * unet_narrow),\n            '8': int(512 * unet_narrow),\n            '16': int(512 * unet_narrow),\n            '32': int(512 * unet_narrow),\n            '64': int(256 * channel_multiplier * unet_narrow),\n            '128': int(128 * channel_multiplier * unet_narrow),\n            '256': int(64 * channel_multiplier * unet_narrow),\n            '512': int(32 * channel_multiplier * unet_narrow),\n            '1024': int(16 * channel_multiplier * unet_narrow)\n        }\n\n        self.log_size = int(math.log(out_size, 2))\n        first_out_size = 2**(int(math.log(out_size, 2)))\n\n        self.conv_body_first = nn.Conv2d(3, channels[f'{first_out_size}'], 1)\n\n        # downsample\n        in_channels = channels[f'{first_out_size}']\n        self.conv_body_down = nn.ModuleList()\n        for i in range(self.log_size, 2, -1):\n            out_channels = channels[f'{2**(i - 1)}']\n            self.conv_body_down.append(ResBlock(in_channels, out_channels, mode='down'))\n            in_channels = out_channels\n\n        self.final_conv = nn.Conv2d(in_channels, channels['4'], 3, 1, 1)\n\n        # 
upsample\n        in_channels = channels['4']\n        self.conv_body_up = nn.ModuleList()\n        for i in range(3, self.log_size + 1):\n            out_channels = channels[f'{2**i}']\n            self.conv_body_up.append(ResBlock(in_channels, out_channels, mode='up'))\n            in_channels = out_channels\n\n        # to RGB\n        self.toRGB = nn.ModuleList()\n        for i in range(3, self.log_size + 1):\n            self.toRGB.append(nn.Conv2d(channels[f'{2**i}'], 3, 1))\n\n        if different_w:\n            linear_out_channel = (int(math.log(out_size, 2)) * 2 - 2) * num_style_feat\n        else:\n            linear_out_channel = num_style_feat\n\n        self.final_linear = nn.Linear(channels['4'] * 4 * 4, linear_out_channel)\n\n        # the decoder: stylegan2 generator with SFT modulations\n        self.stylegan_decoder = StyleGAN2GeneratorCSFT(\n            out_size=out_size,\n            num_style_feat=num_style_feat,\n            num_mlp=num_mlp,\n            channel_multiplier=channel_multiplier,\n            narrow=narrow,\n            sft_half=sft_half)\n\n        # load pre-trained stylegan2 model if necessary\n        if decoder_load_path:\n            self.stylegan_decoder.load_state_dict(\n                torch.load(decoder_load_path, map_location=lambda storage, loc: storage)['params_ema'])\n        # fix decoder without updating params\n        if fix_decoder:\n            for _, param in self.stylegan_decoder.named_parameters():\n                param.requires_grad = False\n\n        # for SFT modulations (scale and shift)\n        self.condition_scale = nn.ModuleList()\n        self.condition_shift = nn.ModuleList()\n        for i in range(3, self.log_size + 1):\n            out_channels = channels[f'{2**i}']\n            if sft_half:\n                sft_out_channels = out_channels\n            else:\n                sft_out_channels = out_channels * 2\n            self.condition_scale.append(\n                nn.Sequential(\n          
          nn.Conv2d(out_channels, out_channels, 3, 1, 1), nn.LeakyReLU(0.2, True),\n                    nn.Conv2d(out_channels, sft_out_channels, 3, 1, 1)))\n            self.condition_shift.append(\n                nn.Sequential(\n                    nn.Conv2d(out_channels, out_channels, 3, 1, 1), nn.LeakyReLU(0.2, True),\n                    nn.Conv2d(out_channels, sft_out_channels, 3, 1, 1)))\n\n    def forward(self, x, return_latents=False, return_rgb=True, randomize_noise=True):\n        \"\"\"Forward function for GFPGANv1Clean.\n\n        Args:\n            x (Tensor): Input images.\n            return_latents (bool): Whether to return style latents. Default: False.\n            return_rgb (bool): Whether to return intermediate RGB images. Default: True.\n            randomize_noise (bool): Randomize noise, used when 'noise' is None. Default: True.\n        \"\"\"\n        conditions = []\n        unet_skips = []\n        out_rgbs = []\n\n        # encoder\n        feat = F.leaky_relu_(self.conv_body_first(x), negative_slope=0.2)\n        for i in range(self.log_size - 2):\n            feat = self.conv_body_down[i](feat)\n            unet_skips.insert(0, feat)\n        feat = F.leaky_relu_(self.final_conv(feat), negative_slope=0.2)\n\n        # style code\n        style_code = self.final_linear(feat.view(feat.size(0), -1))\n        if self.different_w:\n            style_code = style_code.view(style_code.size(0), -1, self.num_style_feat)\n\n        # decode\n        for i in range(self.log_size - 2):\n            # add unet skip\n            feat = feat + unet_skips[i]\n            # ResUpLayer\n            feat = self.conv_body_up[i](feat)\n            # generate scale and shift for SFT layers\n            scale = self.condition_scale[i](feat)\n            conditions.append(scale.clone())\n            shift = self.condition_shift[i](feat)\n            conditions.append(shift.clone())\n            # generate rgb images\n            if return_rgb:\n              
  out_rgbs.append(self.toRGB[i](feat))\n\n        # decoder\n        image, _ = self.stylegan_decoder([style_code],\n                                         conditions,\n                                         return_latents=return_latents,\n                                         input_is_latent=self.input_is_latent,\n                                         randomize_noise=randomize_noise)\n\n        return image, out_rgbs\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/archs/stylegan2_bilinear_arch.py",
    "content": "import math\nimport random\nimport torch\nfrom basicsr.ops.fused_act import FusedLeakyReLU, fused_leaky_relu\nfrom basicsr.utils.registry import ARCH_REGISTRY\nfrom torch import nn\nfrom torch.nn import functional as F\n\n\nclass NormStyleCode(nn.Module):\n\n    def forward(self, x):\n        \"\"\"Normalize the style codes.\n\n        Args:\n            x (Tensor): Style codes with shape (b, c).\n\n        Returns:\n            Tensor: Normalized tensor.\n        \"\"\"\n        return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8)\n\n\nclass EqualLinear(nn.Module):\n    \"\"\"Equalized Linear as StyleGAN2.\n\n    Args:\n        in_channels (int): Size of each sample.\n        out_channels (int): Size of each output sample.\n        bias (bool): If set to ``False``, the layer will not learn an additive\n            bias. Default: ``True``.\n        bias_init_val (float): Bias initialized value. Default: 0.\n        lr_mul (float): Learning rate multiplier. Default: 1.\n        activation (None | str): The activation after ``linear`` operation.\n            Supported: 'fused_lrelu', None. 
Default: None.\n    \"\"\"\n\n    def __init__(self, in_channels, out_channels, bias=True, bias_init_val=0, lr_mul=1, activation=None):\n        super(EqualLinear, self).__init__()\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.lr_mul = lr_mul\n        self.activation = activation\n        if self.activation not in ['fused_lrelu', None]:\n            raise ValueError(f'Wrong activation value in EqualLinear: {activation}'\n                             \"Supported ones are: ['fused_lrelu', None].\")\n        self.scale = (1 / math.sqrt(in_channels)) * lr_mul\n\n        self.weight = nn.Parameter(torch.randn(out_channels, in_channels).div_(lr_mul))\n        if bias:\n            self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val))\n        else:\n            self.register_parameter('bias', None)\n\n    def forward(self, x):\n        if self.bias is None:\n            bias = None\n        else:\n            bias = self.bias * self.lr_mul\n        if self.activation == 'fused_lrelu':\n            out = F.linear(x, self.weight * self.scale)\n            out = fused_leaky_relu(out, bias)\n        else:\n            out = F.linear(x, self.weight * self.scale, bias=bias)\n        return out\n\n    def __repr__(self):\n        return (f'{self.__class__.__name__}(in_channels={self.in_channels}, '\n                f'out_channels={self.out_channels}, bias={self.bias is not None})')\n\n\nclass ModulatedConv2d(nn.Module):\n    \"\"\"Modulated Conv2d used in StyleGAN2.\n\n    There is no bias in ModulatedConv2d.\n\n    Args:\n        in_channels (int): Channel number of the input.\n        out_channels (int): Channel number of the output.\n        kernel_size (int): Size of the convolving kernel.\n        num_style_feat (int): Channel number of style features.\n        demodulate (bool): Whether to demodulate in the conv layer.\n            Default: True.\n        sample_mode (str | None): Indicating 'upsample', 
'downsample' or None.\n            Default: None.\n        eps (float): A value added to the denominator for numerical stability.\n            Default: 1e-8.\n    \"\"\"\n\n    def __init__(self,\n                 in_channels,\n                 out_channels,\n                 kernel_size,\n                 num_style_feat,\n                 demodulate=True,\n                 sample_mode=None,\n                 eps=1e-8,\n                 interpolation_mode='bilinear'):\n        super(ModulatedConv2d, self).__init__()\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.kernel_size = kernel_size\n        self.demodulate = demodulate\n        self.sample_mode = sample_mode\n        self.eps = eps\n        self.interpolation_mode = interpolation_mode\n        if self.interpolation_mode == 'nearest':\n            self.align_corners = None\n        else:\n            self.align_corners = False\n\n        self.scale = 1 / math.sqrt(in_channels * kernel_size**2)\n        # modulation inside each modulated conv\n        self.modulation = EqualLinear(\n            num_style_feat, in_channels, bias=True, bias_init_val=1, lr_mul=1, activation=None)\n\n        self.weight = nn.Parameter(torch.randn(1, out_channels, in_channels, kernel_size, kernel_size))\n        self.padding = kernel_size // 2\n\n    def forward(self, x, style):\n        \"\"\"Forward function.\n\n        Args:\n            x (Tensor): Tensor with shape (b, c, h, w).\n            style (Tensor): Tensor with shape (b, num_style_feat).\n\n        Returns:\n            Tensor: Modulated tensor after convolution.\n        \"\"\"\n        b, c, h, w = x.shape  # c = c_in\n        # weight modulation\n        style = self.modulation(style).view(b, 1, c, 1, 1)\n        # self.weight: (1, c_out, c_in, k, k); style: (b, 1, c, 1, 1)\n        weight = self.scale * self.weight * style  # (b, c_out, c_in, k, k)\n\n        if self.demodulate:\n            demod = 
torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps)\n            weight = weight * demod.view(b, self.out_channels, 1, 1, 1)\n\n        weight = weight.view(b * self.out_channels, c, self.kernel_size, self.kernel_size)\n\n        if self.sample_mode == 'upsample':\n            x = F.interpolate(x, scale_factor=2, mode=self.interpolation_mode, align_corners=self.align_corners)\n        elif self.sample_mode == 'downsample':\n            x = F.interpolate(x, scale_factor=0.5, mode=self.interpolation_mode, align_corners=self.align_corners)\n\n        b, c, h, w = x.shape\n        x = x.view(1, b * c, h, w)\n        # weight: (b*c_out, c_in, k, k), groups=b\n        out = F.conv2d(x, weight, padding=self.padding, groups=b)\n        out = out.view(b, self.out_channels, *out.shape[2:4])\n\n        return out\n\n    def __repr__(self):\n        return (f'{self.__class__.__name__}(in_channels={self.in_channels}, '\n                f'out_channels={self.out_channels}, '\n                f'kernel_size={self.kernel_size}, '\n                f'demodulate={self.demodulate}, sample_mode={self.sample_mode})')\n\n\nclass StyleConv(nn.Module):\n    \"\"\"Style conv.\n\n    Args:\n        in_channels (int): Channel number of the input.\n        out_channels (int): Channel number of the output.\n        kernel_size (int): Size of the convolving kernel.\n        num_style_feat (int): Channel number of style features.\n        demodulate (bool): Whether demodulate in the conv layer. 
Default: True.\n        sample_mode (str | None): Indicating 'upsample', 'downsample' or None.\n            Default: None.\n    \"\"\"\n\n    def __init__(self,\n                 in_channels,\n                 out_channels,\n                 kernel_size,\n                 num_style_feat,\n                 demodulate=True,\n                 sample_mode=None,\n                 interpolation_mode='bilinear'):\n        super(StyleConv, self).__init__()\n        self.modulated_conv = ModulatedConv2d(\n            in_channels,\n            out_channels,\n            kernel_size,\n            num_style_feat,\n            demodulate=demodulate,\n            sample_mode=sample_mode,\n            interpolation_mode=interpolation_mode)\n        self.weight = nn.Parameter(torch.zeros(1))  # for noise injection\n        self.activate = FusedLeakyReLU(out_channels)\n\n    def forward(self, x, style, noise=None):\n        # modulate\n        out = self.modulated_conv(x, style)\n        # noise injection\n        if noise is None:\n            b, _, h, w = out.shape\n            noise = out.new_empty(b, 1, h, w).normal_()\n        out = out + self.weight * noise\n        # activation (with bias)\n        out = self.activate(out)\n        return out\n\n\nclass ToRGB(nn.Module):\n    \"\"\"To RGB from features.\n\n    Args:\n        in_channels (int): Channel number of input.\n        num_style_feat (int): Channel number of style features.\n        upsample (bool): Whether to upsample. 
Default: True.\n    \"\"\"\n\n    def __init__(self, in_channels, num_style_feat, upsample=True, interpolation_mode='bilinear'):\n        super(ToRGB, self).__init__()\n        self.upsample = upsample\n        self.interpolation_mode = interpolation_mode\n        if self.interpolation_mode == 'nearest':\n            self.align_corners = None\n        else:\n            self.align_corners = False\n        self.modulated_conv = ModulatedConv2d(\n            in_channels,\n            3,\n            kernel_size=1,\n            num_style_feat=num_style_feat,\n            demodulate=False,\n            sample_mode=None,\n            interpolation_mode=interpolation_mode)\n        self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))\n\n    def forward(self, x, style, skip=None):\n        \"\"\"Forward function.\n\n        Args:\n            x (Tensor): Feature tensor with shape (b, c, h, w).\n            style (Tensor): Tensor with shape (b, num_style_feat).\n            skip (Tensor): Base/skip tensor. 
Default: None.\n\n        Returns:\n            Tensor: RGB images.\n        \"\"\"\n        out = self.modulated_conv(x, style)\n        out = out + self.bias\n        if skip is not None:\n            if self.upsample:\n                skip = F.interpolate(\n                    skip, scale_factor=2, mode=self.interpolation_mode, align_corners=self.align_corners)\n            out = out + skip\n        return out\n\n\nclass ConstantInput(nn.Module):\n    \"\"\"Constant input.\n\n    Args:\n        num_channel (int): Channel number of constant input.\n        size (int): Spatial size of constant input.\n    \"\"\"\n\n    def __init__(self, num_channel, size):\n        super(ConstantInput, self).__init__()\n        self.weight = nn.Parameter(torch.randn(1, num_channel, size, size))\n\n    def forward(self, batch):\n        out = self.weight.repeat(batch, 1, 1, 1)\n        return out\n\n\n@ARCH_REGISTRY.register()\nclass StyleGAN2GeneratorBilinear(nn.Module):\n    \"\"\"StyleGAN2 Generator.\n\n    Args:\n        out_size (int): The spatial size of outputs.\n        num_style_feat (int): Channel number of style features. Default: 512.\n        num_mlp (int): Layer number of MLP style layers. Default: 8.\n        channel_multiplier (int): Channel multiplier for large networks of\n            StyleGAN2. Default: 2.\n        lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.\n        narrow (float): Narrow ratio for channels. 
Default: 1.0.\n    \"\"\"\n\n    def __init__(self,\n                 out_size,\n                 num_style_feat=512,\n                 num_mlp=8,\n                 channel_multiplier=2,\n                 lr_mlp=0.01,\n                 narrow=1,\n                 interpolation_mode='bilinear'):\n        super(StyleGAN2GeneratorBilinear, self).__init__()\n        # Style MLP layers\n        self.num_style_feat = num_style_feat\n        style_mlp_layers = [NormStyleCode()]\n        for i in range(num_mlp):\n            style_mlp_layers.append(\n                EqualLinear(\n                    num_style_feat, num_style_feat, bias=True, bias_init_val=0, lr_mul=lr_mlp,\n                    activation='fused_lrelu'))\n        self.style_mlp = nn.Sequential(*style_mlp_layers)\n\n        channels = {\n            '4': int(512 * narrow),\n            '8': int(512 * narrow),\n            '16': int(512 * narrow),\n            '32': int(512 * narrow),\n            '64': int(256 * channel_multiplier * narrow),\n            '128': int(128 * channel_multiplier * narrow),\n            '256': int(64 * channel_multiplier * narrow),\n            '512': int(32 * channel_multiplier * narrow),\n            '1024': int(16 * channel_multiplier * narrow)\n        }\n        self.channels = channels\n\n        self.constant_input = ConstantInput(channels['4'], size=4)\n        self.style_conv1 = StyleConv(\n            channels['4'],\n            channels['4'],\n            kernel_size=3,\n            num_style_feat=num_style_feat,\n            demodulate=True,\n            sample_mode=None,\n            interpolation_mode=interpolation_mode)\n        self.to_rgb1 = ToRGB(channels['4'], num_style_feat, upsample=False, interpolation_mode=interpolation_mode)\n\n        self.log_size = int(math.log(out_size, 2))\n        self.num_layers = (self.log_size - 2) * 2 + 1\n        self.num_latent = self.log_size * 2 - 2\n\n        self.style_convs = nn.ModuleList()\n        self.to_rgbs = 
nn.ModuleList()\n        self.noises = nn.Module()\n\n        in_channels = channels['4']\n        # noise\n        for layer_idx in range(self.num_layers):\n            resolution = 2**((layer_idx + 5) // 2)\n            shape = [1, 1, resolution, resolution]\n            self.noises.register_buffer(f'noise{layer_idx}', torch.randn(*shape))\n        # style convs and to_rgbs\n        for i in range(3, self.log_size + 1):\n            out_channels = channels[f'{2**i}']\n            self.style_convs.append(\n                StyleConv(\n                    in_channels,\n                    out_channels,\n                    kernel_size=3,\n                    num_style_feat=num_style_feat,\n                    demodulate=True,\n                    sample_mode='upsample',\n                    interpolation_mode=interpolation_mode))\n            self.style_convs.append(\n                StyleConv(\n                    out_channels,\n                    out_channels,\n                    kernel_size=3,\n                    num_style_feat=num_style_feat,\n                    demodulate=True,\n                    sample_mode=None,\n                    interpolation_mode=interpolation_mode))\n            self.to_rgbs.append(\n                ToRGB(out_channels, num_style_feat, upsample=True, interpolation_mode=interpolation_mode))\n            in_channels = out_channels\n\n    def make_noise(self):\n        \"\"\"Make noise for noise injection.\"\"\"\n        device = self.constant_input.weight.device\n        noises = [torch.randn(1, 1, 4, 4, device=device)]\n\n        for i in range(3, self.log_size + 1):\n            for _ in range(2):\n                noises.append(torch.randn(1, 1, 2**i, 2**i, device=device))\n\n        return noises\n\n    def get_latent(self, x):\n        return self.style_mlp(x)\n\n    def mean_latent(self, num_latent):\n        latent_in = torch.randn(num_latent, self.num_style_feat, device=self.constant_input.weight.device)\n        latent = 
self.style_mlp(latent_in).mean(0, keepdim=True)\n        return latent\n\n    def forward(self,\n                styles,\n                input_is_latent=False,\n                noise=None,\n                randomize_noise=True,\n                truncation=1,\n                truncation_latent=None,\n                inject_index=None,\n                return_latents=False):\n        \"\"\"Forward function for StyleGAN2Generator.\n\n        Args:\n            styles (list[Tensor]): Sample codes of styles.\n            input_is_latent (bool): Whether input is latent style.\n                Default: False.\n            noise (Tensor | None): Input noise or None. Default: None.\n            randomize_noise (bool): Randomize noise, used when 'noise' is\n                None. Default: True.\n            truncation (float): The truncation ratio for the truncation trick;\n                values below 1 pull styles towards the mean latent. Default: 1.\n            truncation_latent (Tensor | None): The mean latent used for\n                truncation. Default: None.\n            inject_index (int | None): The injection index for mixing noise.\n                Default: None.\n            return_latents (bool): Whether to return style latents.\n                Default: False.\n        \"\"\"\n        # style codes -> latents with Style MLP layer\n        if not input_is_latent:\n            styles = [self.style_mlp(s) for s in styles]\n        # noises\n        if noise is None:\n            if randomize_noise:\n                noise = [None] * self.num_layers  # for each style conv layer\n            else:  # use the stored noise\n                noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)]\n        # style truncation\n        if truncation < 1:\n            style_truncation = []\n            for style in styles:\n                style_truncation.append(truncation_latent + truncation * (style - truncation_latent))\n            styles = style_truncation\n        # get style latent with injection\n        if len(styles) == 1:\n            inject_index = self.num_latent\n\n            if 
styles[0].ndim < 3:\n                # repeat latent code for all the layers\n                latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            else:  # used for encoder with different latent code for each layer\n                latent = styles[0]\n        elif len(styles) == 2:  # mixing noises\n            if inject_index is None:\n                inject_index = random.randint(1, self.num_latent - 1)\n            latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)\n            latent = torch.cat([latent1, latent2], 1)\n\n        # main generation\n        out = self.constant_input(latent.shape[0])\n        out = self.style_conv1(out, latent[:, 0], noise=noise[0])\n        skip = self.to_rgb1(out, latent[:, 1])\n\n        i = 1\n        for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2],\n                                                        noise[2::2], self.to_rgbs):\n            out = conv1(out, latent[:, i], noise=noise1)\n            out = conv2(out, latent[:, i + 1], noise=noise2)\n            skip = to_rgb(out, latent[:, i + 2], skip)\n            i += 2\n\n        image = skip\n\n        if return_latents:\n            return image, latent\n        else:\n            return image, None\n\n\nclass ScaledLeakyReLU(nn.Module):\n    \"\"\"Scaled LeakyReLU.\n\n    Args:\n        negative_slope (float): Negative slope. 
Default: 0.2.\n    \"\"\"\n\n    def __init__(self, negative_slope=0.2):\n        super(ScaledLeakyReLU, self).__init__()\n        self.negative_slope = negative_slope\n\n    def forward(self, x):\n        out = F.leaky_relu(x, negative_slope=self.negative_slope)\n        return out * math.sqrt(2)\n\n\nclass EqualConv2d(nn.Module):\n    \"\"\"Equalized Conv2d as in StyleGAN2.\n\n    Args:\n        in_channels (int): Channel number of the input.\n        out_channels (int): Channel number of the output.\n        kernel_size (int): Size of the convolving kernel.\n        stride (int): Stride of the convolution. Default: 1.\n        padding (int): Zero-padding added to both sides of the input.\n            Default: 0.\n        bias (bool): If ``True``, adds a learnable bias to the output.\n            Default: ``True``.\n        bias_init_val (float): Bias initialized value. Default: 0.\n    \"\"\"\n\n    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, bias=True, bias_init_val=0):\n        super(EqualConv2d, self).__init__()\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.kernel_size = kernel_size\n        self.stride = stride\n        self.padding = padding\n        self.scale = 1 / math.sqrt(in_channels * kernel_size**2)\n\n        self.weight = nn.Parameter(torch.randn(out_channels, in_channels, kernel_size, kernel_size))\n        if bias:\n            self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val))\n        else:\n            self.register_parameter('bias', None)\n\n    def forward(self, x):\n        out = F.conv2d(\n            x,\n            self.weight * self.scale,\n            bias=self.bias,\n            stride=self.stride,\n            padding=self.padding,\n        )\n\n        return out\n\n    def __repr__(self):\n        return (f'{self.__class__.__name__}(in_channels={self.in_channels}, '\n                f'out_channels={self.out_channels}, '\n         
       f'kernel_size={self.kernel_size},'\n                f' stride={self.stride}, padding={self.padding}, '\n                f'bias={self.bias is not None})')\n\n\nclass ConvLayer(nn.Sequential):\n    \"\"\"Conv Layer used in StyleGAN2 Discriminator.\n\n    Args:\n        in_channels (int): Channel number of the input.\n        out_channels (int): Channel number of the output.\n        kernel_size (int): Kernel size.\n        downsample (bool): Whether to downsample by a factor of 2.\n            Default: False.\n        bias (bool): Whether to use bias. Default: True.\n        activate (bool): Whether to use activation. Default: True.\n    \"\"\"\n\n    def __init__(self,\n                 in_channels,\n                 out_channels,\n                 kernel_size,\n                 downsample=False,\n                 bias=True,\n                 activate=True,\n                 interpolation_mode='bilinear'):\n        layers = []\n        self.interpolation_mode = interpolation_mode\n        # downsample\n        if downsample:\n            if self.interpolation_mode == 'nearest':\n                self.align_corners = None\n            else:\n                self.align_corners = False\n\n            layers.append(\n                torch.nn.Upsample(scale_factor=0.5, mode=interpolation_mode, align_corners=self.align_corners))\n        stride = 1\n        self.padding = kernel_size // 2\n        # conv\n        layers.append(\n            EqualConv2d(\n                in_channels, out_channels, kernel_size, stride=stride, padding=self.padding, bias=bias\n                and not activate))\n        # activation\n        if activate:\n            if bias:\n                layers.append(FusedLeakyReLU(out_channels))\n            else:\n                layers.append(ScaledLeakyReLU(0.2))\n\n        super(ConvLayer, self).__init__(*layers)\n\n\nclass ResBlock(nn.Module):\n    \"\"\"Residual block used in StyleGAN2 Discriminator.\n\n    Args:\n        in_channels (int): 
Channel number of the input.\n        out_channels (int): Channel number of the output.\n    \"\"\"\n\n    def __init__(self, in_channels, out_channels, interpolation_mode='bilinear'):\n        super(ResBlock, self).__init__()\n\n        self.conv1 = ConvLayer(in_channels, in_channels, 3, bias=True, activate=True)\n        self.conv2 = ConvLayer(\n            in_channels,\n            out_channels,\n            3,\n            downsample=True,\n            interpolation_mode=interpolation_mode,\n            bias=True,\n            activate=True)\n        self.skip = ConvLayer(\n            in_channels,\n            out_channels,\n            1,\n            downsample=True,\n            interpolation_mode=interpolation_mode,\n            bias=False,\n            activate=False)\n\n    def forward(self, x):\n        out = self.conv1(x)\n        out = self.conv2(out)\n        skip = self.skip(x)\n        out = (out + skip) / math.sqrt(2)\n        return out\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/archs/stylegan2_clean_arch.py",
    "content": "import math\nimport random\nimport torch\nfrom basicsr.archs.arch_util import default_init_weights\nfrom basicsr.utils.registry import ARCH_REGISTRY\nfrom torch import nn\nfrom torch.nn import functional as F\n\n\nclass NormStyleCode(nn.Module):\n\n    def forward(self, x):\n        \"\"\"Normalize the style codes.\n\n        Args:\n            x (Tensor): Style codes with shape (b, c).\n\n        Returns:\n            Tensor: Normalized tensor.\n        \"\"\"\n        return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8)\n\n\nclass ModulatedConv2d(nn.Module):\n    \"\"\"Modulated Conv2d used in StyleGAN2.\n\n    There is no bias in ModulatedConv2d.\n\n    Args:\n        in_channels (int): Channel number of the input.\n        out_channels (int): Channel number of the output.\n        kernel_size (int): Size of the convolving kernel.\n        num_style_feat (int): Channel number of style features.\n        demodulate (bool): Whether to demodulate in the conv layer. Default: True.\n        sample_mode (str | None): Indicating 'upsample', 'downsample' or None. Default: None.\n        eps (float): A value added to the denominator for numerical stability. 
Default: 1e-8.\n    \"\"\"\n\n    def __init__(self,\n                 in_channels,\n                 out_channels,\n                 kernel_size,\n                 num_style_feat,\n                 demodulate=True,\n                 sample_mode=None,\n                 eps=1e-8):\n        super(ModulatedConv2d, self).__init__()\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.kernel_size = kernel_size\n        self.demodulate = demodulate\n        self.sample_mode = sample_mode\n        self.eps = eps\n\n        # modulation inside each modulated conv\n        self.modulation = nn.Linear(num_style_feat, in_channels, bias=True)\n        # initialization\n        default_init_weights(self.modulation, scale=1, bias_fill=1, a=0, mode='fan_in', nonlinearity='linear')\n\n        self.weight = nn.Parameter(\n            torch.randn(1, out_channels, in_channels, kernel_size, kernel_size) /\n            math.sqrt(in_channels * kernel_size**2))\n        self.padding = kernel_size // 2\n\n    def forward(self, x, style):\n        \"\"\"Forward function.\n\n        Args:\n            x (Tensor): Tensor with shape (b, c, h, w).\n            style (Tensor): Tensor with shape (b, num_style_feat).\n\n        Returns:\n            Tensor: Modulated tensor after convolution.\n        \"\"\"\n        b, c, h, w = x.shape  # c = c_in\n        # weight modulation\n        style = self.modulation(style).view(b, 1, c, 1, 1)\n        # self.weight: (1, c_out, c_in, k, k); style: (b, 1, c, 1, 1)\n        weight = self.weight * style  # (b, c_out, c_in, k, k)\n\n        if self.demodulate:\n            demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps)\n            weight = weight * demod.view(b, self.out_channels, 1, 1, 1)\n\n        weight = weight.view(b * self.out_channels, c, self.kernel_size, self.kernel_size)\n\n        # upsample or downsample if necessary\n        if self.sample_mode == 'upsample':\n            x = 
F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)\n        elif self.sample_mode == 'downsample':\n            x = F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)\n\n        b, c, h, w = x.shape\n        x = x.view(1, b * c, h, w)\n        # weight: (b*c_out, c_in, k, k), groups=b\n        out = F.conv2d(x, weight, padding=self.padding, groups=b)\n        out = out.view(b, self.out_channels, *out.shape[2:4])\n\n        return out\n\n    def __repr__(self):\n        return (f'{self.__class__.__name__}(in_channels={self.in_channels}, out_channels={self.out_channels}, '\n                f'kernel_size={self.kernel_size}, demodulate={self.demodulate}, sample_mode={self.sample_mode})')\n\n\nclass StyleConv(nn.Module):\n    \"\"\"Style conv used in StyleGAN2.\n\n    Args:\n        in_channels (int): Channel number of the input.\n        out_channels (int): Channel number of the output.\n        kernel_size (int): Size of the convolving kernel.\n        num_style_feat (int): Channel number of style features.\n        demodulate (bool): Whether demodulate in the conv layer. Default: True.\n        sample_mode (str | None): Indicating 'upsample', 'downsample' or None. 
Default: None.\n    \"\"\"\n\n    def __init__(self, in_channels, out_channels, kernel_size, num_style_feat, demodulate=True, sample_mode=None):\n        super(StyleConv, self).__init__()\n        self.modulated_conv = ModulatedConv2d(\n            in_channels, out_channels, kernel_size, num_style_feat, demodulate=demodulate, sample_mode=sample_mode)\n        self.weight = nn.Parameter(torch.zeros(1))  # for noise injection\n        self.bias = nn.Parameter(torch.zeros(1, out_channels, 1, 1))\n        self.activate = nn.LeakyReLU(negative_slope=0.2, inplace=True)\n\n    def forward(self, x, style, noise=None):\n        # modulate\n        out = self.modulated_conv(x, style) * 2**0.5  # for conversion\n        # noise injection\n        if noise is None:\n            b, _, h, w = out.shape\n            noise = out.new_empty(b, 1, h, w).normal_()\n        out = out + self.weight * noise\n        # add bias\n        out = out + self.bias\n        # activation\n        out = self.activate(out)\n        return out\n\n\nclass ToRGB(nn.Module):\n    \"\"\"To RGB (image space) from features.\n\n    Args:\n        in_channels (int): Channel number of input.\n        num_style_feat (int): Channel number of style features.\n        upsample (bool): Whether to upsample. Default: True.\n    \"\"\"\n\n    def __init__(self, in_channels, num_style_feat, upsample=True):\n        super(ToRGB, self).__init__()\n        self.upsample = upsample\n        self.modulated_conv = ModulatedConv2d(\n            in_channels, 3, kernel_size=1, num_style_feat=num_style_feat, demodulate=False, sample_mode=None)\n        self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))\n\n    def forward(self, x, style, skip=None):\n        \"\"\"Forward function.\n\n        Args:\n            x (Tensor): Feature tensor with shape (b, c, h, w).\n            style (Tensor): Tensor with shape (b, num_style_feat).\n            skip (Tensor): Base/skip tensor. 
Default: None.\n\n        Returns:\n            Tensor: RGB images.\n        \"\"\"\n        out = self.modulated_conv(x, style)\n        out = out + self.bias\n        if skip is not None:\n            if self.upsample:\n                skip = F.interpolate(skip, scale_factor=2, mode='bilinear', align_corners=False)\n            out = out + skip\n        return out\n\n\nclass ConstantInput(nn.Module):\n    \"\"\"Constant input.\n\n    Args:\n        num_channel (int): Channel number of constant input.\n        size (int): Spatial size of constant input.\n    \"\"\"\n\n    def __init__(self, num_channel, size):\n        super(ConstantInput, self).__init__()\n        self.weight = nn.Parameter(torch.randn(1, num_channel, size, size))\n\n    def forward(self, batch):\n        out = self.weight.repeat(batch, 1, 1, 1)\n        return out\n\n\n@ARCH_REGISTRY.register()\nclass StyleGAN2GeneratorClean(nn.Module):\n    \"\"\"Clean version of StyleGAN2 Generator.\n\n    Args:\n        out_size (int): The spatial size of outputs.\n        num_style_feat (int): Channel number of style features. Default: 512.\n        num_mlp (int): Layer number of MLP style layers. Default: 8.\n        channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.\n        narrow (float): Narrow ratio for channels. 
Default: 1.0.\n    \"\"\"\n\n    def __init__(self, out_size, num_style_feat=512, num_mlp=8, channel_multiplier=2, narrow=1):\n        super(StyleGAN2GeneratorClean, self).__init__()\n        # Style MLP layers\n        self.num_style_feat = num_style_feat\n        style_mlp_layers = [NormStyleCode()]\n        for i in range(num_mlp):\n            style_mlp_layers.extend(\n                [nn.Linear(num_style_feat, num_style_feat, bias=True),\n                 nn.LeakyReLU(negative_slope=0.2, inplace=True)])\n        self.style_mlp = nn.Sequential(*style_mlp_layers)\n        # initialization\n        default_init_weights(self.style_mlp, scale=1, bias_fill=0, a=0.2, mode='fan_in', nonlinearity='leaky_relu')\n\n        # channel list\n        channels = {\n            '4': int(512 * narrow),\n            '8': int(512 * narrow),\n            '16': int(512 * narrow),\n            '32': int(512 * narrow),\n            '64': int(256 * channel_multiplier * narrow),\n            '128': int(128 * channel_multiplier * narrow),\n            '256': int(64 * channel_multiplier * narrow),\n            '512': int(32 * channel_multiplier * narrow),\n            '1024': int(16 * channel_multiplier * narrow)\n        }\n        self.channels = channels\n\n        self.constant_input = ConstantInput(channels['4'], size=4)\n        self.style_conv1 = StyleConv(\n            channels['4'],\n            channels['4'],\n            kernel_size=3,\n            num_style_feat=num_style_feat,\n            demodulate=True,\n            sample_mode=None)\n        self.to_rgb1 = ToRGB(channels['4'], num_style_feat, upsample=False)\n\n        self.log_size = int(math.log(out_size, 2))\n        self.num_layers = (self.log_size - 2) * 2 + 1\n        self.num_latent = self.log_size * 2 - 2\n\n        self.style_convs = nn.ModuleList()\n        self.to_rgbs = nn.ModuleList()\n        self.noises = nn.Module()\n\n        in_channels = channels['4']\n        # noise\n        for layer_idx in 
range(self.num_layers):\n            resolution = 2**((layer_idx + 5) // 2)\n            shape = [1, 1, resolution, resolution]\n            self.noises.register_buffer(f'noise{layer_idx}', torch.randn(*shape))\n        # style convs and to_rgbs\n        for i in range(3, self.log_size + 1):\n            out_channels = channels[f'{2**i}']\n            self.style_convs.append(\n                StyleConv(\n                    in_channels,\n                    out_channels,\n                    kernel_size=3,\n                    num_style_feat=num_style_feat,\n                    demodulate=True,\n                    sample_mode='upsample'))\n            self.style_convs.append(\n                StyleConv(\n                    out_channels,\n                    out_channels,\n                    kernel_size=3,\n                    num_style_feat=num_style_feat,\n                    demodulate=True,\n                    sample_mode=None))\n            self.to_rgbs.append(ToRGB(out_channels, num_style_feat, upsample=True))\n            in_channels = out_channels\n\n    def make_noise(self):\n        \"\"\"Make noise for noise injection.\"\"\"\n        device = self.constant_input.weight.device\n        noises = [torch.randn(1, 1, 4, 4, device=device)]\n\n        for i in range(3, self.log_size + 1):\n            for _ in range(2):\n                noises.append(torch.randn(1, 1, 2**i, 2**i, device=device))\n\n        return noises\n\n    def get_latent(self, x):\n        return self.style_mlp(x)\n\n    def mean_latent(self, num_latent):\n        latent_in = torch.randn(num_latent, self.num_style_feat, device=self.constant_input.weight.device)\n        latent = self.style_mlp(latent_in).mean(0, keepdim=True)\n        return latent\n\n    def forward(self,\n                styles,\n                input_is_latent=False,\n                noise=None,\n                randomize_noise=True,\n                truncation=1,\n                truncation_latent=None,\n             
   inject_index=None,\n                return_latents=False):\n        \"\"\"Forward function for StyleGAN2GeneratorClean.\n\n        Args:\n            styles (list[Tensor]): Sample codes of styles.\n            input_is_latent (bool): Whether input is latent style. Default: False.\n            noise (Tensor | None): Input noise or None. Default: None.\n            randomize_noise (bool): Randomize noise, used when 'noise' is None. Default: True.\n            truncation (float): The truncation ratio. Default: 1.\n            truncation_latent (Tensor | None): The truncation latent tensor. Default: None.\n            inject_index (int | None): The injection index for mixing noise. Default: None.\n            return_latents (bool): Whether to return style latents. Default: False.\n\n        Returns:\n            tuple[Tensor, Tensor | None]: Generated image and, if return_latents is True, the style latents.\n        \"\"\"\n        # style codes -> latents with Style MLP layer\n        if not input_is_latent:\n            styles = [self.style_mlp(s) for s in styles]\n        # noises\n        if noise is None:\n            if randomize_noise:\n                noise = [None] * self.num_layers  # for each style conv layer\n            else:  # use the stored noise\n                noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)]\n        # style truncation\n        if truncation < 1:\n            style_truncation = []\n            for style in styles:\n                style_truncation.append(truncation_latent + truncation * (style - truncation_latent))\n            styles = style_truncation\n        # get style latents with injection\n        if len(styles) == 1:\n            inject_index = self.num_latent\n\n            if styles[0].ndim < 3:\n                # repeat latent code for all the layers\n                latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            else:  # used for encoder with different latent code for each layer\n                latent = styles[0]\n        elif len(styles) == 2:  # mixing noises\n
            if inject_index is None:\n                inject_index = random.randint(1, self.num_latent - 1)\n            latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)\n            latent = torch.cat([latent1, latent2], 1)\n\n        # main generation\n        out = self.constant_input(latent.shape[0])\n        out = self.style_conv1(out, latent[:, 0], noise=noise[0])\n        skip = self.to_rgb1(out, latent[:, 1])\n\n        i = 1\n        for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2],\n                                                        noise[2::2], self.to_rgbs):\n            out = conv1(out, latent[:, i], noise=noise1)\n            out = conv2(out, latent[:, i + 1], noise=noise2)\n            skip = to_rgb(out, latent[:, i + 2], skip)  # feature back to the rgb space\n            i += 2\n\n        image = skip\n\n        if return_latents:\n            return image, latent\n        else:\n            return image, None\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/data/__init__.py",
    "content": "import importlib\nfrom basicsr.utils import scandir\nfrom os import path as osp\n\n# automatically scan and import dataset modules for registry\n# scan all the files that end with '_dataset.py' under the data folder\ndata_folder = osp.dirname(osp.abspath(__file__))\ndataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')]\n# import all the dataset modules\n_dataset_modules = [importlib.import_module(f'gfpgan.data.{file_name}') for file_name in dataset_filenames]\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/data/ffhq_degradation_dataset.py",
    "content": "import cv2\nimport math\nimport numpy as np\nimport os.path as osp\nimport torch\nimport torch.utils.data as data\nfrom basicsr.data import degradations as degradations\nfrom basicsr.data.data_util import paths_from_folder\nfrom basicsr.data.transforms import augment\nfrom basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor\nfrom basicsr.utils.registry import DATASET_REGISTRY\nfrom torchvision.transforms.functional import (adjust_brightness, adjust_contrast, adjust_hue, adjust_saturation,\n                                               normalize)\n\n\n@DATASET_REGISTRY.register()\nclass FFHQDegradationDataset(data.Dataset):\n    \"\"\"FFHQ dataset for GFPGAN.\n\n    It reads high resolution images, and then generate low-quality (LQ) images on-the-fly.\n\n    Args:\n        opt (dict): Config for train datasets. It contains the following keys:\n            dataroot_gt (str): Data root path for gt.\n            io_backend (dict): IO backend type and other kwarg.\n            mean (list | tuple): Image mean.\n            std (list | tuple): Image std.\n            use_hflip (bool): Whether to horizontally flip.\n            Please see more options in the codes.\n    \"\"\"\n\n    def __init__(self, opt):\n        super(FFHQDegradationDataset, self).__init__()\n        self.opt = opt\n        # file client (io backend)\n        self.file_client = None\n        self.io_backend_opt = opt['io_backend']\n\n        self.gt_folder = opt['dataroot_gt']\n        self.mean = opt['mean']\n        self.std = opt['std']\n        self.out_size = opt['out_size']\n\n        self.crop_components = opt.get('crop_components', False)  # facial components\n        self.eye_enlarge_ratio = opt.get('eye_enlarge_ratio', 1)  # whether enlarge eye regions\n\n        if self.crop_components:\n            # load component list from a pre-process pth files\n            self.components_list = torch.load(opt.get('component_path'))\n\n        # file client (lmdb 
io backend)\n        if self.io_backend_opt['type'] == 'lmdb':\n            self.io_backend_opt['db_paths'] = self.gt_folder\n            if not self.gt_folder.endswith('.lmdb'):\n                raise ValueError(f\"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}\")\n            with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin:\n                self.paths = [line.split('.')[0] for line in fin]\n        else:\n            # disk backend: scan file list from a folder\n            self.paths = paths_from_folder(self.gt_folder)\n\n        # degradation configurations\n        self.blur_kernel_size = opt['blur_kernel_size']\n        self.kernel_list = opt['kernel_list']\n        self.kernel_prob = opt['kernel_prob']\n        self.blur_sigma = opt['blur_sigma']\n        self.downsample_range = opt['downsample_range']\n        self.noise_range = opt['noise_range']\n        self.jpeg_range = opt['jpeg_range']\n\n        # color jitter\n        self.color_jitter_prob = opt.get('color_jitter_prob')\n        self.color_jitter_pt_prob = opt.get('color_jitter_pt_prob')\n        self.color_jitter_shift = opt.get('color_jitter_shift', 20)\n        # to gray\n        self.gray_prob = opt.get('gray_prob')\n\n        logger = get_root_logger()\n        logger.info(f'Blur: blur_kernel_size {self.blur_kernel_size}, sigma: [{\", \".join(map(str, self.blur_sigma))}]')\n        logger.info(f'Downsample: downsample_range [{\", \".join(map(str, self.downsample_range))}]')\n        logger.info(f'Noise: [{\", \".join(map(str, self.noise_range))}]')\n        logger.info(f'JPEG compression: [{\", \".join(map(str, self.jpeg_range))}]')\n\n        if self.color_jitter_prob is not None:\n            logger.info(f'Use random color jitter. Prob: {self.color_jitter_prob}, shift: {self.color_jitter_shift}')\n        if self.gray_prob is not None:\n            logger.info(f'Use random gray. 
Prob: {self.gray_prob}')\n        self.color_jitter_shift /= 255.\n\n    @staticmethod\n    def color_jitter(img, shift):\n        \"\"\"jitter color: randomly jitter the RGB values, in numpy formats\"\"\"\n        jitter_val = np.random.uniform(-shift, shift, 3).astype(np.float32)\n        img = img + jitter_val\n        img = np.clip(img, 0, 1)\n        return img\n\n    @staticmethod\n    def color_jitter_pt(img, brightness, contrast, saturation, hue):\n        \"\"\"jitter color: randomly jitter the brightness, contrast, saturation, and hue, in torch Tensor formats\"\"\"\n        fn_idx = torch.randperm(4)\n        for fn_id in fn_idx:\n            if fn_id == 0 and brightness is not None:\n                brightness_factor = torch.tensor(1.0).uniform_(brightness[0], brightness[1]).item()\n                img = adjust_brightness(img, brightness_factor)\n\n            if fn_id == 1 and contrast is not None:\n                contrast_factor = torch.tensor(1.0).uniform_(contrast[0], contrast[1]).item()\n                img = adjust_contrast(img, contrast_factor)\n\n            if fn_id == 2 and saturation is not None:\n                saturation_factor = torch.tensor(1.0).uniform_(saturation[0], saturation[1]).item()\n                img = adjust_saturation(img, saturation_factor)\n\n            if fn_id == 3 and hue is not None:\n                hue_factor = torch.tensor(1.0).uniform_(hue[0], hue[1]).item()\n                img = adjust_hue(img, hue_factor)\n        return img\n\n    def get_component_coordinates(self, index, status):\n        \"\"\"Get facial component (left_eye, right_eye, mouth) coordinates from a pre-loaded pth file\"\"\"\n        components_bbox = self.components_list[f'{index:08d}']\n        if status[0]:  # hflip\n            # exchange right and left eye\n            tmp = components_bbox['left_eye']\n            components_bbox['left_eye'] = components_bbox['right_eye']\n            components_bbox['right_eye'] = tmp\n            # 
modify the width coordinate\n            components_bbox['left_eye'][0] = self.out_size - components_bbox['left_eye'][0]\n            components_bbox['right_eye'][0] = self.out_size - components_bbox['right_eye'][0]\n            components_bbox['mouth'][0] = self.out_size - components_bbox['mouth'][0]\n\n        # get coordinates\n        locations = []\n        for part in ['left_eye', 'right_eye', 'mouth']:\n            mean = components_bbox[part][0:2]\n            half_len = components_bbox[part][2]\n            if 'eye' in part:\n                half_len *= self.eye_enlarge_ratio\n            loc = np.hstack((mean - half_len + 1, mean + half_len))\n            loc = torch.from_numpy(loc).float()\n            locations.append(loc)\n        return locations\n\n    def __getitem__(self, index):\n        if self.file_client is None:\n            self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)\n\n        # load gt image\n        # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32.\n        gt_path = self.paths[index]\n        img_bytes = self.file_client.get(gt_path)\n        img_gt = imfrombytes(img_bytes, float32=True)\n\n        # random horizontal flip\n        img_gt, status = augment(img_gt, hflip=self.opt['use_hflip'], rotation=False, return_status=True)\n        h, w, _ = img_gt.shape\n\n        # get facial component coordinates\n        if self.crop_components:\n            locations = self.get_component_coordinates(index, status)\n            loc_left_eye, loc_right_eye, loc_mouth = locations\n\n        # ------------------------ generate lq image ------------------------ #\n        # blur\n        kernel = degradations.random_mixed_kernels(\n            self.kernel_list,\n            self.kernel_prob,\n            self.blur_kernel_size,\n            self.blur_sigma,\n            self.blur_sigma, [-math.pi, math.pi],\n            noise_range=None)\n        img_lq = cv2.filter2D(img_gt, -1, kernel)\n  
      # downsample\n        scale = np.random.uniform(self.downsample_range[0], self.downsample_range[1])\n        img_lq = cv2.resize(img_lq, (int(w // scale), int(h // scale)), interpolation=cv2.INTER_LINEAR)\n        # noise\n        if self.noise_range is not None:\n            img_lq = degradations.random_add_gaussian_noise(img_lq, self.noise_range)\n        # jpeg compression\n        if self.jpeg_range is not None:\n            img_lq = degradations.random_add_jpg_compression(img_lq, self.jpeg_range)\n\n        # resize to original size\n        img_lq = cv2.resize(img_lq, (w, h), interpolation=cv2.INTER_LINEAR)\n\n        # random color jitter (only for lq)\n        if self.color_jitter_prob is not None and (np.random.uniform() < self.color_jitter_prob):\n            img_lq = self.color_jitter(img_lq, self.color_jitter_shift)\n        # random to gray (only for lq)\n        if self.gray_prob and np.random.uniform() < self.gray_prob:\n            img_lq = cv2.cvtColor(img_lq, cv2.COLOR_BGR2GRAY)\n            img_lq = np.tile(img_lq[:, :, None], [1, 1, 3])\n            if self.opt.get('gt_gray'):  # whether convert GT to gray images\n                img_gt = cv2.cvtColor(img_gt, cv2.COLOR_BGR2GRAY)\n                img_gt = np.tile(img_gt[:, :, None], [1, 1, 3])  # repeat the color channels\n\n        # BGR to RGB, HWC to CHW, numpy to tensor\n        img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True)\n\n        # random color jitter (pytorch version) (only for lq)\n        if self.color_jitter_pt_prob is not None and (np.random.uniform() < self.color_jitter_pt_prob):\n            brightness = self.opt.get('brightness', (0.5, 1.5))\n            contrast = self.opt.get('contrast', (0.5, 1.5))\n            saturation = self.opt.get('saturation', (0, 1.5))\n            hue = self.opt.get('hue', (-0.1, 0.1))\n            img_lq = self.color_jitter_pt(img_lq, brightness, contrast, saturation, hue)\n\n        # round and clip\n        img_lq 
= torch.clamp((img_lq * 255.0).round(), 0, 255) / 255.\n\n        # normalize\n        normalize(img_gt, self.mean, self.std, inplace=True)\n        normalize(img_lq, self.mean, self.std, inplace=True)\n\n        if self.crop_components:\n            return_dict = {\n                'lq': img_lq,\n                'gt': img_gt,\n                'gt_path': gt_path,\n                'loc_left_eye': loc_left_eye,\n                'loc_right_eye': loc_right_eye,\n                'loc_mouth': loc_mouth\n            }\n            return return_dict\n        else:\n            return {'lq': img_lq, 'gt': img_gt, 'gt_path': gt_path}\n\n    def __len__(self):\n        return len(self.paths)\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/models/__init__.py",
    "content": "import importlib\nfrom basicsr.utils import scandir\nfrom os import path as osp\n\n# automatically scan and import model modules for registry\n# scan all the files that end with '_model.py' under the model folder\nmodel_folder = osp.dirname(osp.abspath(__file__))\nmodel_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]\n# import all the model modules\n_model_modules = [importlib.import_module(f'gfpgan.models.{file_name}') for file_name in model_filenames]\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/models/gfpgan_model.py",
    "content": "import math\nimport os.path as osp\nimport torch\nfrom basicsr.archs import build_network\nfrom basicsr.losses import build_loss\n# from basicsr.losses.losses import r1_penalty\nfrom basicsr.losses import r1_penalty\nfrom basicsr.metrics import calculate_metric\nfrom basicsr.models.base_model import BaseModel\nfrom basicsr.utils import get_root_logger, imwrite, tensor2img\nfrom basicsr.utils.registry import MODEL_REGISTRY\nfrom collections import OrderedDict\nfrom torch.nn import functional as F\nfrom torchvision.ops import roi_align\nfrom tqdm import tqdm\n\n\n@MODEL_REGISTRY.register()\nclass GFPGANModel(BaseModel):\n    \"\"\"The GFPGAN model for Towards real-world blind face restoratin with generative facial prior\"\"\"\n\n    def __init__(self, opt):\n        super(GFPGANModel, self).__init__(opt)\n        self.idx = 0  # it is used for saving data for check\n\n        # define network\n        self.net_g = build_network(opt['network_g'])\n        self.net_g = self.model_to_device(self.net_g)\n        self.print_network(self.net_g)\n\n        # load pretrained model\n        load_path = self.opt['path'].get('pretrain_network_g', None)\n        if load_path is not None:\n            param_key = self.opt['path'].get('param_key_g', 'params')\n            self.load_network(self.net_g, load_path, self.opt['path'].get('strict_load_g', True), param_key)\n\n        self.log_size = int(math.log(self.opt['network_g']['out_size'], 2))\n\n        if self.is_train:\n            self.init_training_settings()\n\n    def init_training_settings(self):\n        train_opt = self.opt['train']\n\n        # ----------- define net_d ----------- #\n        self.net_d = build_network(self.opt['network_d'])\n        self.net_d = self.model_to_device(self.net_d)\n        self.print_network(self.net_d)\n        # load pretrained model\n        load_path = self.opt['path'].get('pretrain_network_d', None)\n        if load_path is not None:\n            
self.load_network(self.net_d, load_path, self.opt['path'].get('strict_load_d', True))\n\n        # ----------- define net_g with Exponential Moving Average (EMA) ----------- #\n        # net_g_ema only used for testing on one GPU and saving. There is no need to wrap with DistributedDataParallel\n        self.net_g_ema = build_network(self.opt['network_g']).to(self.device)\n        # load pretrained model\n        load_path = self.opt['path'].get('pretrain_network_g', None)\n        if load_path is not None:\n            self.load_network(self.net_g_ema, load_path, self.opt['path'].get('strict_load_g', True), 'params_ema')\n        else:\n            self.model_ema(0)  # copy net_g weight\n\n        self.net_g.train()\n        self.net_d.train()\n        self.net_g_ema.eval()\n\n        # ----------- facial component networks ----------- #\n        if ('network_d_left_eye' in self.opt and 'network_d_right_eye' in self.opt and 'network_d_mouth' in self.opt):\n            self.use_facial_disc = True\n        else:\n            self.use_facial_disc = False\n\n        if self.use_facial_disc:\n            # left eye\n            self.net_d_left_eye = build_network(self.opt['network_d_left_eye'])\n            self.net_d_left_eye = self.model_to_device(self.net_d_left_eye)\n            self.print_network(self.net_d_left_eye)\n            load_path = self.opt['path'].get('pretrain_network_d_left_eye')\n            if load_path is not None:\n                self.load_network(self.net_d_left_eye, load_path, True, 'params')\n            # right eye\n            self.net_d_right_eye = build_network(self.opt['network_d_right_eye'])\n            self.net_d_right_eye = self.model_to_device(self.net_d_right_eye)\n            self.print_network(self.net_d_right_eye)\n            load_path = self.opt['path'].get('pretrain_network_d_right_eye')\n            if load_path is not None:\n                self.load_network(self.net_d_right_eye, load_path, True, 'params')\n            # 
mouth\n            self.net_d_mouth = build_network(self.opt['network_d_mouth'])\n            self.net_d_mouth = self.model_to_device(self.net_d_mouth)\n            self.print_network(self.net_d_mouth)\n            load_path = self.opt['path'].get('pretrain_network_d_mouth')\n            if load_path is not None:\n                self.load_network(self.net_d_mouth, load_path, True, 'params')\n\n            self.net_d_left_eye.train()\n            self.net_d_right_eye.train()\n            self.net_d_mouth.train()\n\n            # ----------- define facial component gan loss ----------- #\n            self.cri_component = build_loss(train_opt['gan_component_opt']).to(self.device)\n\n        # ----------- define losses ----------- #\n        # pixel loss\n        if train_opt.get('pixel_opt'):\n            self.cri_pix = build_loss(train_opt['pixel_opt']).to(self.device)\n        else:\n            self.cri_pix = None\n\n        # perceptual loss\n        if train_opt.get('perceptual_opt'):\n            self.cri_perceptual = build_loss(train_opt['perceptual_opt']).to(self.device)\n        else:\n            self.cri_perceptual = None\n\n        # L1 loss is used in pyramid loss, component style loss and identity loss\n        self.cri_l1 = build_loss(train_opt['L1_opt']).to(self.device)\n\n        # gan loss (wgan)\n        self.cri_gan = build_loss(train_opt['gan_opt']).to(self.device)\n\n        # ----------- define identity loss ----------- #\n        if 'network_identity' in self.opt:\n            self.use_identity = True\n        else:\n            self.use_identity = False\n\n        if self.use_identity:\n            # define identity network\n            self.network_identity = build_network(self.opt['network_identity'])\n            self.network_identity = self.model_to_device(self.network_identity)\n            self.print_network(self.network_identity)\n            load_path = self.opt['path'].get('pretrain_network_identity')\n            if load_path is not 
None:\n                self.load_network(self.network_identity, load_path, True, None)\n            self.network_identity.eval()\n            for param in self.network_identity.parameters():\n                param.requires_grad = False\n\n        # regularization weights\n        self.r1_reg_weight = train_opt['r1_reg_weight']  # for discriminator\n        self.net_d_iters = train_opt.get('net_d_iters', 1)\n        self.net_d_init_iters = train_opt.get('net_d_init_iters', 0)\n        self.net_d_reg_every = train_opt['net_d_reg_every']\n\n        # set up optimizers and schedulers\n        self.setup_optimizers()\n        self.setup_schedulers()\n\n    def setup_optimizers(self):\n        train_opt = self.opt['train']\n\n        # ----------- optimizer g ----------- #\n        net_g_reg_ratio = 1\n        normal_params = []\n        for _, param in self.net_g.named_parameters():\n            normal_params.append(param)\n        optim_params_g = [{  # add normal params first\n            'params': normal_params,\n            'lr': train_opt['optim_g']['lr']\n        }]\n        optim_type = train_opt['optim_g'].pop('type')\n        lr = train_opt['optim_g']['lr'] * net_g_reg_ratio\n        betas = (0**net_g_reg_ratio, 0.99**net_g_reg_ratio)\n        self.optimizer_g = self.get_optimizer(optim_type, optim_params_g, lr, betas=betas)\n        self.optimizers.append(self.optimizer_g)\n\n        # ----------- optimizer d ----------- #\n        net_d_reg_ratio = self.net_d_reg_every / (self.net_d_reg_every + 1)\n        normal_params = []\n        for _, param in self.net_d.named_parameters():\n            normal_params.append(param)\n        optim_params_d = [{  # add normal params first\n            'params': normal_params,\n            'lr': train_opt['optim_d']['lr']\n        }]\n        optim_type = train_opt['optim_d'].pop('type')\n        lr = train_opt['optim_d']['lr'] * net_d_reg_ratio\n        betas = (0**net_d_reg_ratio, 0.99**net_d_reg_ratio)\n        
self.optimizer_d = self.get_optimizer(optim_type, optim_params_d, lr, betas=betas)\n        self.optimizers.append(self.optimizer_d)\n\n        # ----------- optimizers for facial component networks ----------- #\n        if self.use_facial_disc:\n            # setup optimizers for facial component discriminators\n            optim_type = train_opt['optim_component'].pop('type')\n            lr = train_opt['optim_component']['lr']\n            # left eye\n            self.optimizer_d_left_eye = self.get_optimizer(\n                optim_type, self.net_d_left_eye.parameters(), lr, betas=(0.9, 0.99))\n            self.optimizers.append(self.optimizer_d_left_eye)\n            # right eye\n            self.optimizer_d_right_eye = self.get_optimizer(\n                optim_type, self.net_d_right_eye.parameters(), lr, betas=(0.9, 0.99))\n            self.optimizers.append(self.optimizer_d_right_eye)\n            # mouth\n            self.optimizer_d_mouth = self.get_optimizer(\n                optim_type, self.net_d_mouth.parameters(), lr, betas=(0.9, 0.99))\n            self.optimizers.append(self.optimizer_d_mouth)\n\n    def feed_data(self, data):\n        self.lq = data['lq'].to(self.device)\n        if 'gt' in data:\n            self.gt = data['gt'].to(self.device)\n\n        if 'loc_left_eye' in data:\n            # get facial component locations, shape (batch, 4)\n            self.loc_left_eyes = data['loc_left_eye']\n            self.loc_right_eyes = data['loc_right_eye']\n            self.loc_mouths = data['loc_mouth']\n\n        # uncomment to check data\n        # import torchvision\n        # if self.opt['rank'] == 0:\n        #     import os\n        #     os.makedirs('tmp/gt', exist_ok=True)\n        #     os.makedirs('tmp/lq', exist_ok=True)\n        #     print(self.idx)\n        #     torchvision.utils.save_image(\n        #         self.gt, f'tmp/gt/gt_{self.idx}.png', nrow=4, padding=2, normalize=True, range=(-1, 1))\n        #     
torchvision.utils.save_image(\n        #         self.lq, f'tmp/lq/lq{self.idx}.png', nrow=4, padding=2, normalize=True, range=(-1, 1))\n        #     self.idx = self.idx + 1\n\n    def construct_img_pyramid(self):\n        \"\"\"Construct image pyramid for intermediate restoration loss\"\"\"\n        pyramid_gt = [self.gt]\n        down_img = self.gt\n        for _ in range(0, self.log_size - 3):\n            down_img = F.interpolate(down_img, scale_factor=0.5, mode='bilinear', align_corners=False)\n            pyramid_gt.insert(0, down_img)\n        return pyramid_gt\n\n    def get_roi_regions(self, eye_out_size=80, mouth_out_size=120):\n        face_ratio = int(self.opt['network_g']['out_size'] / 512)\n        eye_out_size *= face_ratio\n        mouth_out_size *= face_ratio\n\n        rois_eyes = []\n        rois_mouths = []\n        for b in range(self.loc_left_eyes.size(0)):  # loop for batch size\n            # left eye and right eye\n            img_inds = self.loc_left_eyes.new_full((2, 1), b)\n            bbox = torch.stack([self.loc_left_eyes[b, :], self.loc_right_eyes[b, :]], dim=0)  # shape: (2, 4)\n            rois = torch.cat([img_inds, bbox], dim=-1)  # shape: (2, 5)\n            rois_eyes.append(rois)\n            # mouth\n            img_inds = self.loc_left_eyes.new_full((1, 1), b)\n            rois = torch.cat([img_inds, self.loc_mouths[b:b + 1, :]], dim=-1)  # shape: (1, 5)\n            rois_mouths.append(rois)\n\n        rois_eyes = torch.cat(rois_eyes, 0).to(self.device)\n        rois_mouths = torch.cat(rois_mouths, 0).to(self.device)\n\n        # real images\n        all_eyes = roi_align(self.gt, boxes=rois_eyes, output_size=eye_out_size) * face_ratio\n        self.left_eyes_gt = all_eyes[0::2, :, :, :]\n        self.right_eyes_gt = all_eyes[1::2, :, :, :]\n        self.mouths_gt = roi_align(self.gt, boxes=rois_mouths, output_size=mouth_out_size) * face_ratio\n        # output\n        all_eyes = roi_align(self.output, boxes=rois_eyes, 
output_size=eye_out_size) * face_ratio\n        self.left_eyes = all_eyes[0::2, :, :, :]\n        self.right_eyes = all_eyes[1::2, :, :, :]\n        self.mouths = roi_align(self.output, boxes=rois_mouths, output_size=mouth_out_size) * face_ratio\n\n    def _gram_mat(self, x):\n        \"\"\"Calculate Gram matrix.\n\n        Args:\n            x (torch.Tensor): Tensor with shape of (n, c, h, w).\n\n        Returns:\n            torch.Tensor: Gram matrix.\n        \"\"\"\n        n, c, h, w = x.size()\n        features = x.view(n, c, w * h)\n        features_t = features.transpose(1, 2)\n        gram = features.bmm(features_t) / (c * h * w)\n        return gram\n\n    def gray_resize_for_identity(self, out, size=128):\n        out_gray = (0.2989 * out[:, 0, :, :] + 0.5870 * out[:, 1, :, :] + 0.1140 * out[:, 2, :, :])\n        out_gray = out_gray.unsqueeze(1)\n        out_gray = F.interpolate(out_gray, (size, size), mode='bilinear', align_corners=False)\n        return out_gray\n\n    def optimize_parameters(self, current_iter):\n        # optimize net_g\n        for p in self.net_d.parameters():\n            p.requires_grad = False\n        self.optimizer_g.zero_grad()\n\n        # do not update facial component net_d\n        if self.use_facial_disc:\n            for p in self.net_d_left_eye.parameters():\n                p.requires_grad = False\n            for p in self.net_d_right_eye.parameters():\n                p.requires_grad = False\n            for p in self.net_d_mouth.parameters():\n                p.requires_grad = False\n\n        # image pyramid loss weight\n        pyramid_loss_weight = self.opt['train'].get('pyramid_loss_weight', 0)\n        if pyramid_loss_weight > 0 and current_iter > self.opt['train'].get('remove_pyramid_loss', float('inf')):\n            pyramid_loss_weight = 1e-12  # very small weight to avoid unused param error\n        if pyramid_loss_weight > 0:\n            self.output, out_rgbs = self.net_g(self.lq, return_rgb=True)\n      
      pyramid_gt = self.construct_img_pyramid()\n        else:\n            self.output, out_rgbs = self.net_g(self.lq, return_rgb=False)\n\n        # get roi-align regions\n        if self.use_facial_disc:\n            self.get_roi_regions(eye_out_size=80, mouth_out_size=120)\n\n        l_g_total = 0\n        loss_dict = OrderedDict()\n        if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters):\n            # pixel loss\n            if self.cri_pix:\n                l_g_pix = self.cri_pix(self.output, self.gt)\n                l_g_total += l_g_pix\n                loss_dict['l_g_pix'] = l_g_pix\n\n            # image pyramid loss\n            if pyramid_loss_weight > 0:\n                for i in range(0, self.log_size - 2):\n                    l_pyramid = self.cri_l1(out_rgbs[i], pyramid_gt[i]) * pyramid_loss_weight\n                    l_g_total += l_pyramid\n                    loss_dict[f'l_p_{2**(i+3)}'] = l_pyramid\n\n            # perceptual loss\n            if self.cri_perceptual:\n                l_g_percep, l_g_style = self.cri_perceptual(self.output, self.gt)\n                if l_g_percep is not None:\n                    l_g_total += l_g_percep\n                    loss_dict['l_g_percep'] = l_g_percep\n                if l_g_style is not None:\n                    l_g_total += l_g_style\n                    loss_dict['l_g_style'] = l_g_style\n\n            # gan loss\n            fake_g_pred = self.net_d(self.output)\n            l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)\n            l_g_total += l_g_gan\n            loss_dict['l_g_gan'] = l_g_gan\n\n            # facial component loss\n            if self.use_facial_disc:\n                # left eye\n                fake_left_eye, fake_left_eye_feats = self.net_d_left_eye(self.left_eyes, return_feats=True)\n                l_g_gan = self.cri_component(fake_left_eye, True, is_disc=False)\n                l_g_total += l_g_gan\n                
loss_dict['l_g_gan_left_eye'] = l_g_gan\n                # right eye\n                fake_right_eye, fake_right_eye_feats = self.net_d_right_eye(self.right_eyes, return_feats=True)\n                l_g_gan = self.cri_component(fake_right_eye, True, is_disc=False)\n                l_g_total += l_g_gan\n                loss_dict['l_g_gan_right_eye'] = l_g_gan\n                # mouth\n                fake_mouth, fake_mouth_feats = self.net_d_mouth(self.mouths, return_feats=True)\n                l_g_gan = self.cri_component(fake_mouth, True, is_disc=False)\n                l_g_total += l_g_gan\n                loss_dict['l_g_gan_mouth'] = l_g_gan\n\n                if self.opt['train'].get('comp_style_weight', 0) > 0:\n                    # get gt feat\n                    _, real_left_eye_feats = self.net_d_left_eye(self.left_eyes_gt, return_feats=True)\n                    _, real_right_eye_feats = self.net_d_right_eye(self.right_eyes_gt, return_feats=True)\n                    _, real_mouth_feats = self.net_d_mouth(self.mouths_gt, return_feats=True)\n\n                    def _comp_style(feat, feat_gt, criterion):\n                        return criterion(self._gram_mat(feat[0]), self._gram_mat(\n                            feat_gt[0].detach())) * 0.5 + criterion(\n                                self._gram_mat(feat[1]), self._gram_mat(feat_gt[1].detach()))\n\n                    # facial component style loss\n                    comp_style_loss = 0\n                    comp_style_loss += _comp_style(fake_left_eye_feats, real_left_eye_feats, self.cri_l1)\n                    comp_style_loss += _comp_style(fake_right_eye_feats, real_right_eye_feats, self.cri_l1)\n                    comp_style_loss += _comp_style(fake_mouth_feats, real_mouth_feats, self.cri_l1)\n                    comp_style_loss = comp_style_loss * self.opt['train']['comp_style_weight']\n                    l_g_total += comp_style_loss\n                    loss_dict['l_g_comp_style_loss'] = 
comp_style_loss\n\n            # identity loss\n            if self.use_identity:\n                identity_weight = self.opt['train']['identity_weight']\n                # get gray images and resize\n                out_gray = self.gray_resize_for_identity(self.output)\n                gt_gray = self.gray_resize_for_identity(self.gt)\n\n                identity_gt = self.network_identity(gt_gray).detach()\n                identity_out = self.network_identity(out_gray)\n                l_identity = self.cri_l1(identity_out, identity_gt) * identity_weight\n                l_g_total += l_identity\n                loss_dict['l_identity'] = l_identity\n\n            l_g_total.backward()\n            self.optimizer_g.step()\n\n        # EMA\n        self.model_ema(decay=0.5**(32 / (10 * 1000)))\n\n        # ----------- optimize net_d ----------- #\n        for p in self.net_d.parameters():\n            p.requires_grad = True\n        self.optimizer_d.zero_grad()\n        if self.use_facial_disc:\n            for p in self.net_d_left_eye.parameters():\n                p.requires_grad = True\n            for p in self.net_d_right_eye.parameters():\n                p.requires_grad = True\n            for p in self.net_d_mouth.parameters():\n                p.requires_grad = True\n            self.optimizer_d_left_eye.zero_grad()\n            self.optimizer_d_right_eye.zero_grad()\n            self.optimizer_d_mouth.zero_grad()\n\n        fake_d_pred = self.net_d(self.output.detach())\n        real_d_pred = self.net_d(self.gt)\n        l_d = self.cri_gan(real_d_pred, True, is_disc=True) + self.cri_gan(fake_d_pred, False, is_disc=True)\n        loss_dict['l_d'] = l_d\n        # In WGAN, real_score should be positive and fake_score should be negative\n        loss_dict['real_score'] = real_d_pred.detach().mean()\n        loss_dict['fake_score'] = fake_d_pred.detach().mean()\n        l_d.backward()\n\n        # regularization loss\n        if current_iter % 
self.net_d_reg_every == 0:\n            self.gt.requires_grad = True\n            real_pred = self.net_d(self.gt)\n            l_d_r1 = r1_penalty(real_pred, self.gt)\n            l_d_r1 = (self.r1_reg_weight / 2 * l_d_r1 * self.net_d_reg_every + 0 * real_pred[0])\n            loss_dict['l_d_r1'] = l_d_r1.detach().mean()\n            l_d_r1.backward()\n\n        self.optimizer_d.step()\n\n        # optimize facial component discriminators\n        # use cri_component (vanilla GAN loss) for both real and fake predictions\n        if self.use_facial_disc:\n            # left eye\n            fake_d_pred, _ = self.net_d_left_eye(self.left_eyes.detach())\n            real_d_pred, _ = self.net_d_left_eye(self.left_eyes_gt)\n            l_d_left_eye = self.cri_component(\n                real_d_pred, True, is_disc=True) + self.cri_component(\n                    fake_d_pred, False, is_disc=True)\n            loss_dict['l_d_left_eye'] = l_d_left_eye\n            l_d_left_eye.backward()\n            # right eye\n            fake_d_pred, _ = self.net_d_right_eye(self.right_eyes.detach())\n            real_d_pred, _ = self.net_d_right_eye(self.right_eyes_gt)\n            l_d_right_eye = self.cri_component(\n                real_d_pred, True, is_disc=True) + self.cri_component(\n                    fake_d_pred, False, is_disc=True)\n            loss_dict['l_d_right_eye'] = l_d_right_eye\n            l_d_right_eye.backward()\n            # mouth\n            fake_d_pred, _ = self.net_d_mouth(self.mouths.detach())\n            real_d_pred, _ = self.net_d_mouth(self.mouths_gt)\n            l_d_mouth = self.cri_component(\n                real_d_pred, True, is_disc=True) + self.cri_component(\n                    fake_d_pred, False, is_disc=True)\n            loss_dict['l_d_mouth'] = l_d_mouth\n            l_d_mouth.backward()\n\n            self.optimizer_d_left_eye.step()\n            self.optimizer_d_right_eye.step()\n            self.optimizer_d_mouth.step()\n\n        self.log_dict = self.reduce_loss_dict(loss_dict)\n\n    def test(self):\n        with torch.no_grad():\n    
        if hasattr(self, 'net_g_ema'):\n                self.net_g_ema.eval()\n                self.output, _ = self.net_g_ema(self.lq)\n            else:\n                logger = get_root_logger()\n                logger.warning('Do not have self.net_g_ema, use self.net_g.')\n                self.net_g.eval()\n                self.output, _ = self.net_g(self.lq)\n                self.net_g.train()\n\n    def dist_validation(self, dataloader, current_iter, tb_logger, save_img):\n        if self.opt['rank'] == 0:\n            self.nondist_validation(dataloader, current_iter, tb_logger, save_img)\n\n    def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):\n        dataset_name = dataloader.dataset.opt['name']\n        with_metrics = self.opt['val'].get('metrics') is not None\n        use_pbar = self.opt['val'].get('pbar', False)\n\n        if with_metrics:\n            if not hasattr(self, 'metric_results'):  # only execute in the first run\n                self.metric_results = {metric: 0 for metric in self.opt['val']['metrics'].keys()}\n            # initialize the best metric results for each dataset_name (supporting multiple validation datasets)\n            self._initialize_best_metric_results(dataset_name)\n            # zero self.metric_results\n            self.metric_results = {metric: 0 for metric in self.metric_results}\n\n        metric_data = dict()\n        if use_pbar:\n            pbar = tqdm(total=len(dataloader), unit='image')\n\n        for idx, val_data in enumerate(dataloader):\n            img_name = osp.splitext(osp.basename(val_data['lq_path'][0]))[0]\n            self.feed_data(val_data)\n            self.test()\n\n            sr_img = tensor2img(self.output.detach().cpu(), min_max=(-1, 1))\n            metric_data['img'] = sr_img\n            if hasattr(self, 'gt'):\n                gt_img = tensor2img(self.gt.detach().cpu(), min_max=(-1, 1))\n                metric_data['img2'] = gt_img\n                del 
self.gt\n\n            # tentative for out of GPU memory\n            del self.lq\n            del self.output\n            torch.cuda.empty_cache()\n\n            if save_img:\n                if self.opt['is_train']:\n                    save_img_path = osp.join(self.opt['path']['visualization'], img_name,\n                                             f'{img_name}_{current_iter}.png')\n                else:\n                    if self.opt['val']['suffix']:\n                        save_img_path = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"val\"][\"suffix\"]}.png')\n                    else:\n                        save_img_path = osp.join(self.opt['path']['visualization'], dataset_name,\n                                                 f'{img_name}_{self.opt[\"name\"]}.png')\n                imwrite(sr_img, save_img_path)\n\n            if with_metrics:\n                # calculate metrics\n                for name, opt_ in self.opt['val']['metrics'].items():\n                    self.metric_results[name] += calculate_metric(metric_data, opt_)\n            if use_pbar:\n                pbar.update(1)\n                pbar.set_description(f'Test {img_name}')\n        if use_pbar:\n            pbar.close()\n\n        if with_metrics:\n            for metric in self.metric_results.keys():\n                self.metric_results[metric] /= (idx + 1)\n                # update the best metric result\n                self._update_best_metric_result(dataset_name, metric, self.metric_results[metric], current_iter)\n\n            self._log_validation_metric_values(current_iter, dataset_name, tb_logger)\n\n    def _log_validation_metric_values(self, current_iter, dataset_name, tb_logger):\n        log_str = f'Validation {dataset_name}\\n'\n        for metric, value in self.metric_results.items():\n            log_str += f'\\t # {metric}: {value:.4f}'\n            if hasattr(self, 
'best_metric_results'):\n                log_str += (f'\\tBest: {self.best_metric_results[dataset_name][metric][\"val\"]:.4f} @ '\n                            f'{self.best_metric_results[dataset_name][metric][\"iter\"]} iter')\n            log_str += '\\n'\n\n        logger = get_root_logger()\n        logger.info(log_str)\n        if tb_logger:\n            for metric, value in self.metric_results.items():\n                tb_logger.add_scalar(f'metrics/{dataset_name}/{metric}', value, current_iter)\n\n    def save(self, epoch, current_iter):\n        # save net_g and net_d\n        self.save_network([self.net_g, self.net_g_ema], 'net_g', current_iter, param_key=['params', 'params_ema'])\n        self.save_network(self.net_d, 'net_d', current_iter)\n        # save component discriminators\n        if self.use_facial_disc:\n            self.save_network(self.net_d_left_eye, 'net_d_left_eye', current_iter)\n            self.save_network(self.net_d_right_eye, 'net_d_right_eye', current_iter)\n            self.save_network(self.net_d_mouth, 'net_d_mouth', current_iter)\n        # save training state\n        self.save_training_state(epoch, current_iter)\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/train.py",
    "content": "# flake8: noqa\nimport os.path as osp\nfrom basicsr.train import train_pipeline\n\nimport gfpgan.archs\nimport gfpgan.data\nimport gfpgan.models\n\nif __name__ == '__main__':\n    root_path = osp.abspath(osp.join(__file__, osp.pardir, osp.pardir))\n    train_pipeline(root_path)\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/utils.py",
    "content": "import cv2\nimport os\nimport torch\nfrom basicsr.utils import img2tensor, tensor2img\nfrom basicsr.utils.download_util import load_file_from_url\nfrom facexlib.utils.face_restoration_helper import FaceRestoreHelper\nfrom torchvision.transforms.functional import normalize\n\nfrom gfpgan.archs.gfpgan_bilinear_arch import GFPGANBilinear\nfrom gfpgan.archs.gfpganv1_arch import GFPGANv1\nfrom gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean\n\nROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\nclass GFPGANer():\n    \"\"\"Helper for restoration with GFPGAN.\n\n    It will detect and crop faces, and then resize the faces to 512x512.\n    GFPGAN is used to restore the resized faces.\n    The background is upsampled with the bg_upsampler.\n    Finally, the faces will be pasted back to the upsampled background image.\n\n    Args:\n        model_path (str): The path to the GFPGAN model. It can be a url (the model will be downloaded automatically).\n        upscale (float): The upscale of the final output. Default: 2.\n        arch (str): The GFPGAN architecture. Option: clean | bilinear | original. Default: clean.\n        channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.\n        bg_upsampler (nn.Module): The upsampler for the background. 
Default: None.\n    \"\"\"\n\n    def __init__(self, model_path, upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=None):\n        self.upscale = upscale\n        self.bg_upsampler = bg_upsampler\n\n        # initialize model\n        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n        # initialize the GFP-GAN\n        if arch == 'clean':\n            self.gfpgan = GFPGANv1Clean(\n                out_size=512,\n                num_style_feat=512,\n                channel_multiplier=channel_multiplier,\n                decoder_load_path=None,\n                fix_decoder=False,\n                num_mlp=8,\n                input_is_latent=True,\n                different_w=True,\n                narrow=1,\n                sft_half=True)\n        elif arch == 'bilinear':\n            self.gfpgan = GFPGANBilinear(\n                out_size=512,\n                num_style_feat=512,\n                channel_multiplier=channel_multiplier,\n                decoder_load_path=None,\n                fix_decoder=False,\n                num_mlp=8,\n                input_is_latent=True,\n                different_w=True,\n                narrow=1,\n                sft_half=True)\n        elif arch == 'original':\n            self.gfpgan = GFPGANv1(\n                out_size=512,\n                num_style_feat=512,\n                channel_multiplier=channel_multiplier,\n                decoder_load_path=None,\n                fix_decoder=True,\n                num_mlp=8,\n                input_is_latent=True,\n                different_w=True,\n                narrow=1,\n                sft_half=True)\n        # initialize face helper\n        self.face_helper = FaceRestoreHelper(\n            upscale,\n            face_size=512,\n            crop_ratio=(1, 1),\n            det_model='retinaface_resnet50',\n            save_ext='png',\n            device=self.device)\n\n        if model_path.startswith('https://'):\n            
model_path = load_file_from_url(\n                url=model_path, model_dir=os.path.join(ROOT_DIR, 'gfpgan/weights'), progress=True, file_name=None)\n        loadnet = torch.load(model_path)\n        if 'params_ema' in loadnet:\n            keyname = 'params_ema'\n        else:\n            keyname = 'params'\n        self.gfpgan.load_state_dict(loadnet[keyname], strict=True)\n        self.gfpgan.eval()\n        self.gfpgan = self.gfpgan.to(self.device)\n\n    @torch.no_grad()\n    def enhance(self, img, has_aligned=False, only_center_face=False, paste_back=True):\n        self.face_helper.clean_all()\n\n        if has_aligned:  # the inputs are already aligned\n            img = cv2.resize(img, (512, 512))\n            self.face_helper.cropped_faces = [img]\n        else:\n            self.face_helper.read_image(img)\n            # get face landmarks for each face\n            self.face_helper.get_face_landmarks_5(only_center_face=only_center_face, eye_dist_threshold=5)\n            # eye_dist_threshold=5: skip faces whose eye distance is smaller than 5 pixels\n            # TODO: even with eye_dist_threshold, it will still introduce wrong detections and restorations.\n            # align and warp each face\n            self.face_helper.align_warp_face()\n\n        # face restoration\n        for cropped_face in self.face_helper.cropped_faces:\n            # prepare data\n            cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True)\n            normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)\n            cropped_face_t = cropped_face_t.unsqueeze(0).to(self.device)\n\n            try:\n                output = self.gfpgan(cropped_face_t, return_rgb=False)[0]\n                # convert to image\n                restored_face = tensor2img(output.squeeze(0), rgb2bgr=True, min_max=(-1, 1))\n            except RuntimeError as error:\n                print(f'\\tFailed inference for GFPGAN: {error}.')\n                
restored_face = cropped_face\n\n            restored_face = restored_face.astype('uint8')\n            self.face_helper.add_restored_face(restored_face)\n\n        if not has_aligned and paste_back:\n            # upsample the background\n            if self.bg_upsampler is not None:\n                # Now only support RealESRGAN for upsampling background\n                bg_img = self.bg_upsampler.enhance(img, outscale=self.upscale)[0]\n            else:\n                bg_img = None\n\n            self.face_helper.get_inverse_affine(None)\n            # paste each restored face to the input image\n            restored_img = self.face_helper.paste_faces_to_input_image(upsample_img=bg_img)\n            return self.face_helper.cropped_faces, self.face_helper.restored_faces, restored_img\n        else:\n            return self.face_helper.cropped_faces, self.face_helper.restored_faces, None\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/version.py",
    "content": "# GENERATED VERSION FILE\n# TIME: Wed Apr 20 14:43:06 2022\n__version__ = '1.3.2'\n__gitsha__ = '924ce47'\nversion_info = (1, 3, 2)\n"
  },
  {
    "path": "third_part/GFPGAN/gfpgan/weights/README.md",
    "content": "# Weights\n\nPut the downloaded weights to this folder.\n"
  },
  {
    "path": "third_part/GFPGAN/options/train_gfpgan_v1.yml",
    "content": "# general settings\nname: train_GFPGANv1_512\nmodel_type: GFPGANModel\nnum_gpu: auto  # officially, we use 4 GPUs\nmanual_seed: 0\n\n# dataset and data loader settings\ndatasets:\n  train:\n    name: FFHQ\n    type: FFHQDegradationDataset\n    # dataroot_gt: datasets/ffhq/ffhq_512.lmdb\n    dataroot_gt: datasets/ffhq/ffhq_512\n    io_backend:\n      # type: lmdb\n      type: disk\n\n    use_hflip: true\n    mean: [0.5, 0.5, 0.5]\n    std: [0.5, 0.5, 0.5]\n    out_size: 512\n\n    blur_kernel_size: 41\n    kernel_list: ['iso', 'aniso']\n    kernel_prob: [0.5, 0.5]\n    blur_sigma: [0.1, 10]\n    downsample_range: [0.8, 8]\n    noise_range: [0, 20]\n    jpeg_range: [60, 100]\n\n    # color jitter and gray\n    color_jitter_prob: 0.3\n    color_jitter_shift: 20\n    color_jitter_pt_prob: 0.3\n    gray_prob: 0.01\n\n    # If you do not want colorization, please set\n    # color_jitter_prob: ~\n    # color_jitter_pt_prob: ~\n    # gray_prob: 0.01\n    # gt_gray: True\n\n    crop_components: true\n    component_path: experiments/pretrained_models/FFHQ_eye_mouth_landmarks_512.pth\n    eye_enlarge_ratio: 1.4\n\n    # data loader\n    use_shuffle: true\n    num_worker_per_gpu: 6\n    batch_size_per_gpu: 3\n    dataset_enlarge_ratio: 1\n    prefetch_mode: ~\n\n  val:\n    # Please modify accordingly to use your own validation\n    # Or comment the val block if do not need validation during training\n    name: validation\n    type: PairedImageDataset\n    dataroot_lq: datasets/faces/validation/input\n    dataroot_gt: datasets/faces/validation/reference\n    io_backend:\n      type: disk\n    mean: [0.5, 0.5, 0.5]\n    std: [0.5, 0.5, 0.5]\n    scale: 1\n\n# network structures\nnetwork_g:\n  type: GFPGANv1\n  out_size: 512\n  num_style_feat: 512\n  channel_multiplier: 1\n  resample_kernel: [1, 3, 3, 1]\n  decoder_load_path: experiments/pretrained_models/StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth\n  fix_decoder: true\n  num_mlp: 8\n  lr_mlp: 0.01\n  
input_is_latent: true\n  different_w: true\n  narrow: 1\n  sft_half: true\n\nnetwork_d:\n  type: StyleGAN2Discriminator\n  out_size: 512\n  channel_multiplier: 1\n  resample_kernel: [1, 3, 3, 1]\n\nnetwork_d_left_eye:\n  type: FacialComponentDiscriminator\n\nnetwork_d_right_eye:\n  type: FacialComponentDiscriminator\n\nnetwork_d_mouth:\n  type: FacialComponentDiscriminator\n\nnetwork_identity:\n  type: ResNetArcFace\n  block: IRBlock\n  layers: [2, 2, 2, 2]\n  use_se: False\n\n# path\npath:\n  pretrain_network_g: ~\n  param_key_g: params_ema\n  strict_load_g: ~\n  pretrain_network_d: ~\n  pretrain_network_d_left_eye: ~\n  pretrain_network_d_right_eye: ~\n  pretrain_network_d_mouth: ~\n  pretrain_network_identity: experiments/pretrained_models/arcface_resnet18.pth\n  # resume\n  resume_state: ~\n  ignore_resume_networks: ['network_identity']\n\n# training settings\ntrain:\n  optim_g:\n    type: Adam\n    lr: !!float 2e-3\n  optim_d:\n    type: Adam\n    lr: !!float 2e-3\n  optim_component:\n    type: Adam\n    lr: !!float 2e-3\n\n  scheduler:\n    type: MultiStepLR\n    milestones: [600000, 700000]\n    gamma: 0.5\n\n  total_iter: 800000\n  warmup_iter: -1  # no warm up\n\n  # losses\n  # pixel loss\n  pixel_opt:\n    type: L1Loss\n    loss_weight: !!float 1e-1\n    reduction: mean\n  # L1 loss used in pyramid loss, component style loss and identity loss\n  L1_opt:\n    type: L1Loss\n    loss_weight: 1\n    reduction: mean\n\n  # image pyramid loss\n  pyramid_loss_weight: 1\n  remove_pyramid_loss: 50000\n  # perceptual loss (content and style losses)\n  perceptual_opt:\n    type: PerceptualLoss\n    layer_weights:\n      # before relu\n      'conv1_2': 0.1\n      'conv2_2': 0.1\n      'conv3_4': 1\n      'conv4_4': 1\n      'conv5_4': 1\n    vgg_type: vgg19\n    use_input_norm: true\n    perceptual_weight: !!float 1\n    style_weight: 50\n    range_norm: true\n    criterion: l1\n  # gan loss\n  gan_opt:\n    type: GANLoss\n    gan_type: wgan_softplus\n    
loss_weight: !!float 1e-1\n  # r1 regularization for discriminator\n  r1_reg_weight: 10\n  # facial component loss\n  gan_component_opt:\n    type: GANLoss\n    gan_type: vanilla\n    real_label_val: 1.0\n    fake_label_val: 0.0\n    loss_weight: !!float 1\n  comp_style_weight: 200\n  # identity loss\n  identity_weight: 10\n\n  net_d_iters: 1\n  net_d_init_iters: 0\n  net_d_reg_every: 16\n\n# validation settings\nval:\n  val_freq: !!float 5e3\n  save_img: true\n\n  metrics:\n    psnr: # metric name\n      type: calculate_psnr\n      crop_border: 0\n      test_y_channel: false\n\n# logging settings\nlogger:\n  print_freq: 100\n  save_checkpoint_freq: !!float 5e3\n  use_tb_logger: true\n  wandb:\n    project: ~\n    resume_id: ~\n\n# dist training settings\ndist_params:\n  backend: nccl\n  port: 29500\n\nfind_unused_parameters: true\n"
  },
  {
    "path": "third_part/GFPGAN/options/train_gfpgan_v1_simple.yml",
    "content": "# general settings\nname: train_GFPGANv1_512_simple\nmodel_type: GFPGANModel\nnum_gpu: auto  # officially, we use 4 GPUs\nmanual_seed: 0\n\n# dataset and data loader settings\ndatasets:\n  train:\n    name: FFHQ\n    type: FFHQDegradationDataset\n    # dataroot_gt: datasets/ffhq/ffhq_512.lmdb\n    dataroot_gt: datasets/ffhq/ffhq_512\n    io_backend:\n      # type: lmdb\n      type: disk\n\n    use_hflip: true\n    mean: [0.5, 0.5, 0.5]\n    std: [0.5, 0.5, 0.5]\n    out_size: 512\n\n    blur_kernel_size: 41\n    kernel_list: ['iso', 'aniso']\n    kernel_prob: [0.5, 0.5]\n    blur_sigma: [0.1, 10]\n    downsample_range: [0.8, 8]\n    noise_range: [0, 20]\n    jpeg_range: [60, 100]\n\n    # color jitter and gray\n    color_jitter_prob: 0.3\n    color_jitter_shift: 20\n    color_jitter_pt_prob: 0.3\n    gray_prob: 0.01\n\n    # If you do not want colorization, please set\n    # color_jitter_prob: ~\n    # color_jitter_pt_prob: ~\n    # gray_prob: 0.01\n    # gt_gray: True\n\n    # data loader\n    use_shuffle: true\n    num_worker_per_gpu: 6\n    batch_size_per_gpu: 3\n    dataset_enlarge_ratio: 1\n    prefetch_mode: ~\n\n  val:\n    # Please modify accordingly to use your own validation\n    # Or comment the val block if do not need validation during training\n    name: validation\n    type: PairedImageDataset\n    dataroot_lq: datasets/faces/validation/input\n    dataroot_gt: datasets/faces/validation/reference\n    io_backend:\n      type: disk\n    mean: [0.5, 0.5, 0.5]\n    std: [0.5, 0.5, 0.5]\n    scale: 1\n\n# network structures\nnetwork_g:\n  type: GFPGANv1\n  out_size: 512\n  num_style_feat: 512\n  channel_multiplier: 1\n  resample_kernel: [1, 3, 3, 1]\n  decoder_load_path: experiments/pretrained_models/StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth\n  fix_decoder: true\n  num_mlp: 8\n  lr_mlp: 0.01\n  input_is_latent: true\n  different_w: true\n  narrow: 1\n  sft_half: true\n\nnetwork_d:\n  type: StyleGAN2Discriminator\n  out_size: 512\n  
channel_multiplier: 1\n  resample_kernel: [1, 3, 3, 1]\n\n\n# path\npath:\n  pretrain_network_g: ~\n  param_key_g: params_ema\n  strict_load_g: ~\n  pretrain_network_d: ~\n  resume_state: ~\n\n# training settings\ntrain:\n  optim_g:\n    type: Adam\n    lr: !!float 2e-3\n  optim_d:\n    type: Adam\n    lr: !!float 2e-3\n  optim_component:\n    type: Adam\n    lr: !!float 2e-3\n\n  scheduler:\n    type: MultiStepLR\n    milestones: [600000, 700000]\n    gamma: 0.5\n\n  total_iter: 800000\n  warmup_iter: -1  # no warm up\n\n  # losses\n  # pixel loss\n  pixel_opt:\n    type: L1Loss\n    loss_weight: !!float 1e-1\n    reduction: mean\n  # L1 loss used in pyramid loss, component style loss and identity loss\n  L1_opt:\n    type: L1Loss\n    loss_weight: 1\n    reduction: mean\n\n  # image pyramid loss\n  pyramid_loss_weight: 1\n  remove_pyramid_loss: 50000\n  # perceptual loss (content and style losses)\n  perceptual_opt:\n    type: PerceptualLoss\n    layer_weights:\n      # before relu\n      'conv1_2': 0.1\n      'conv2_2': 0.1\n      'conv3_4': 1\n      'conv4_4': 1\n      'conv5_4': 1\n    vgg_type: vgg19\n    use_input_norm: true\n    perceptual_weight: !!float 1\n    style_weight: 50\n    range_norm: true\n    criterion: l1\n  # gan loss\n  gan_opt:\n    type: GANLoss\n    gan_type: wgan_softplus\n    loss_weight: !!float 1e-1\n  # r1 regularization for discriminator\n  r1_reg_weight: 10\n\n  net_d_iters: 1\n  net_d_init_iters: 0\n  net_d_reg_every: 16\n\n# validation settings\nval:\n  val_freq: !!float 5e3\n  save_img: true\n\n  metrics:\n    psnr: # metric name\n      type: calculate_psnr\n      crop_border: 0\n      test_y_channel: false\n\n# logging settings\nlogger:\n  print_freq: 100\n  save_checkpoint_freq: !!float 5e3\n  use_tb_logger: true\n  wandb:\n    project: ~\n    resume_id: ~\n\n# dist training settings\ndist_params:\n  backend: nccl\n  port: 29500\n\nfind_unused_parameters: true\n"
  },
  {
    "path": "third_part/GPEN/align_faces.py",
    "content": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Apr 24 15:43:29 2017\n@author: zhaoy\n\"\"\"\n\"\"\"\n@Modified by yangxy (yangtao9009@gmail.com)\n\"\"\"\nimport cv2\nimport numpy as np\nfrom skimage import transform as trans\n\n# reference facial points, a list of coordinates (x,y)\nREFERENCE_FACIAL_POINTS = [\n    [30.29459953, 51.69630051],\n    [65.53179932, 51.50139999],\n    [48.02519989, 71.73660278],\n    [33.54930115, 92.3655014],\n    [62.72990036, 92.20410156]\n]\n\nDEFAULT_CROP_SIZE = (96, 112)\n\n\ndef _umeyama(src, dst, estimate_scale=True, scale=1.0):\n    \"\"\"Estimate N-D similarity transformation with or without scaling.\n    Parameters\n    ----------\n    src : (M, N) array\n        Source coordinates.\n    dst : (M, N) array\n        Destination coordinates.\n    estimate_scale : bool\n        Whether to estimate scaling factor.\n    Returns\n    -------\n    T : (N + 1, N + 1)\n        The homogeneous similarity transformation matrix. The matrix contains\n        NaN values only if the problem is not well-conditioned.\n    References\n    ----------\n    .. [1] \"Least-squares estimation of transformation parameters between two\n            point patterns\", Shinji Umeyama, PAMI 1991, :DOI:`10.1109/34.88573`\n    \"\"\"\n\n    num = src.shape[0]\n    dim = src.shape[1]\n\n    # Compute mean of src and dst.\n    src_mean = src.mean(axis=0)\n    dst_mean = dst.mean(axis=0)\n\n    # Subtract mean from src and dst.\n    src_demean = src - src_mean\n    dst_demean = dst - dst_mean\n\n    # Eq. (38).\n    A = dst_demean.T @ src_demean / num\n\n    # Eq. (39).\n    d = np.ones((dim,), dtype=np.double)\n    if np.linalg.det(A) < 0:\n        d[dim - 1] = -1\n\n    T = np.eye(dim + 1, dtype=np.double)\n\n    U, S, V = np.linalg.svd(A)\n\n    # Eq. 
(40) and (43).\n    rank = np.linalg.matrix_rank(A)\n    if rank == 0:\n        # Degenerate input: keep the (T, scale) return arity expected by callers.\n        return np.nan * T, np.nan\n    elif rank == dim - 1:\n        if np.linalg.det(U) * np.linalg.det(V) > 0:\n            T[:dim, :dim] = U @ V\n        else:\n            s = d[dim - 1]\n            d[dim - 1] = -1\n            T[:dim, :dim] = U @ np.diag(d) @ V\n            d[dim - 1] = s\n    else:\n        T[:dim, :dim] = U @ np.diag(d) @ V\n\n    if estimate_scale:\n        # Eq. (41) and (42).\n        scale = 1.0 / src_demean.var(axis=0).sum() * (S @ d)\n    # otherwise keep the caller-supplied fixed scale\n\n    T[:dim, dim] = dst_mean - scale * (T[:dim, :dim] @ src_mean.T)\n    T[:dim, :dim] *= scale\n\n    return T, scale\n\n\nclass FaceWarpException(Exception):\n    def __str__(self):\n        return 'In File {}:{}'.format(\n            __file__, super().__str__())\n\n\ndef get_reference_facial_points(output_size=None,\n                                inner_padding_factor=0.0,\n                                outer_padding=(0, 0),\n                                default_square=False):\n    tmp_5pts = np.array(REFERENCE_FACIAL_POINTS)\n    tmp_crop_size = np.array(DEFAULT_CROP_SIZE)\n\n    # 0) make the inner region a square\n    if default_square:\n        size_diff = max(tmp_crop_size) - tmp_crop_size\n        tmp_5pts += size_diff / 2\n        tmp_crop_size += size_diff\n\n    if (output_size and\n            output_size[0] == tmp_crop_size[0] and\n            output_size[1] == tmp_crop_size[1]):\n        print('output_size == DEFAULT_CROP_SIZE {}: return default reference points'.format(tmp_crop_size))\n        return tmp_5pts\n\n    if (inner_padding_factor == 0 and\n            outer_padding == (0, 0)):\n        if output_size is None:\n            print('No paddings to do: return default reference points')\n            return tmp_5pts\n        else:\n            raise FaceWarpException(\n                'No paddings to do, output_size must be None or {}'.format(tmp_crop_size))\n\n    # check 
output size\n    if not (0 <= inner_padding_factor <= 1.0):\n        raise FaceWarpException('Not (0 <= inner_padding_factor <= 1.0)')\n\n    if ((inner_padding_factor > 0 or outer_padding[0] > 0 or outer_padding[1] > 0)\n            and output_size is None):\n        # astype must apply to the array product, not to the plain float factor\n        output_size = (tmp_crop_size *\n                       (1 + inner_padding_factor * 2)).astype(np.int32)\n        output_size += np.array(outer_padding)\n        print('              deduced from paddings, output_size = ', output_size)\n\n    if not (outer_padding[0] < output_size[0]\n            and outer_padding[1] < output_size[1]):\n        raise FaceWarpException('Not (outer_padding[0] < output_size[0]'\n                                ' and outer_padding[1] < output_size[1])')\n\n    # 1) pad the inner region according to inner_padding_factor\n    if inner_padding_factor > 0:\n        size_diff = tmp_crop_size * inner_padding_factor * 2\n        tmp_5pts += size_diff / 2\n        tmp_crop_size += np.round(size_diff).astype(np.int32)\n\n    # 2) resize the padded inner region\n    size_bf_outer_pad = np.array(output_size) - np.array(outer_padding) * 2\n\n    if size_bf_outer_pad[0] * tmp_crop_size[1] != size_bf_outer_pad[1] * tmp_crop_size[0]:\n        raise FaceWarpException('Must have (output_size - outer_padding)'\n                                ' = some_scale * (crop_size * (1.0 + inner_padding_factor))')\n\n    scale_factor = size_bf_outer_pad[0].astype(np.float32) / tmp_crop_size[0]\n    tmp_5pts = tmp_5pts * scale_factor\n    #    size_diff = 
tmp_crop_size * (scale_factor - min(scale_factor))\n    #    tmp_5pts = tmp_5pts + size_diff / 2\n    tmp_crop_size = size_bf_outer_pad\n\n    # 3) add outer_padding to make output_size\n    reference_5point = tmp_5pts + np.array(outer_padding)\n    tmp_crop_size = output_size\n\n    return reference_5point\n\n\ndef get_affine_transform_matrix(src_pts, dst_pts):\n    tfm = np.float32([[1, 0, 0], [0, 1, 0]])\n    n_pts = src_pts.shape[0]\n    ones = np.ones((n_pts, 1), src_pts.dtype)\n    src_pts_ = np.hstack([src_pts, ones])\n    dst_pts_ = np.hstack([dst_pts, ones])\n\n    # rcond=None keeps NumPy's current default and avoids the FutureWarning\n    A, res, rank, s = np.linalg.lstsq(src_pts_, dst_pts_, rcond=None)\n\n    if rank == 3:\n        tfm = np.float32([\n            [A[0, 0], A[1, 0], A[2, 0]],\n            [A[0, 1], A[1, 1], A[2, 1]]\n        ])\n    elif rank == 2:\n        tfm = np.float32([\n            [A[0, 0], A[1, 0], 0],\n            [A[0, 1], A[1, 1], 0]\n        ])\n\n    return tfm\n\n\ndef warp_and_crop_face(src_img,\n                       facial_pts,\n                       reference_pts=None,\n                       crop_size=(96, 112),\n                       align_type='similarity'):  # similarity | cv2_affine | cv2_rigid | affine\n    if reference_pts is None:\n        if crop_size[0] == 96 and crop_size[1] == 112:\n            reference_pts = REFERENCE_FACIAL_POINTS\n        else:\n            default_square = False\n            inner_padding_factor = 0\n            outer_padding = (0, 0)\n            output_size = crop_size\n\n            reference_pts = get_reference_facial_points(output_size,\n                                                        inner_padding_factor,\n                
                                        outer_padding,\n                                                        default_square)\n\n    ref_pts = np.float32(reference_pts)\n    ref_pts_shp = ref_pts.shape\n    if max(ref_pts_shp) < 3: #  or min(ref_pts_shp) != 2:\n        raise FaceWarpException(\n            'reference_pts.shape must be (K,2) or (2,K) and K>2')\n\n    if ref_pts_shp[0] == 2 or ref_pts_shp[0] == 3:\n        ref_pts = ref_pts.T\n\n    src_pts = np.float32(facial_pts)\n    src_pts_shp = src_pts.shape\n    if max(src_pts_shp) < 3: # or min(src_pts_shp) != 2:\n        raise FaceWarpException(\n            'facial_pts.shape must be (K,2) or (2,K) and K>2')\n\n    if src_pts_shp[0] == 2 or src_pts_shp[0] == 3:\n        src_pts = src_pts.T\n\n    if src_pts.shape != ref_pts.shape:\n        raise FaceWarpException(\n            'facial_pts and reference_pts must have the same shape')\n\n    # compare strings with ==, not identity ('is'), which is unreliable for str\n    if align_type == 'cv2_affine':\n        tfm = cv2.getAffineTransform(src_pts[0:3], ref_pts[0:3])\n        tfm_inv = cv2.getAffineTransform(ref_pts[0:3], src_pts[0:3])\n    elif align_type == 'cv2_rigid':\n        tfm, _ = cv2.estimateAffinePartial2D(src_pts[0:3], ref_pts[0:3])\n        tfm_inv, _ = cv2.estimateAffinePartial2D(ref_pts[0:3], src_pts[0:3])\n    elif align_type == 'affine':\n        tfm = get_affine_transform_matrix(src_pts, ref_pts)\n        tfm_inv = get_affine_transform_matrix(ref_pts, src_pts)\n    else:\n        params, scale = _umeyama(src_pts, ref_pts)\n        tfm = params[:2, :]\n\n        params, _ = _umeyama(ref_pts, src_pts, False, scale=1.0/scale)\n        tfm_inv = params[:2, :]\n\n    # flags=3 is cv2.INTER_AREA; use the named constant for clarity\n    face_img = cv2.warpAffine(src_img, tfm, (crop_size[0], crop_size[1]), flags=cv2.INTER_AREA)\n\n    return face_img, tfm_inv\n"
  },
  {
    "path": "third_part/GPEN/face_detect/data/FDDB/img_list.txt",
    "content": "2002/08/11/big/img_591\n2002/08/26/big/img_265\n2002/07/19/big/img_423\n2002/08/24/big/img_490\n2002/08/31/big/img_17676\n2002/07/31/big/img_228\n2002/07/24/big/img_402\n2002/08/04/big/img_769\n2002/07/19/big/img_581\n2002/08/13/big/img_723\n2002/08/12/big/img_821\n2003/01/17/big/img_610\n2002/08/13/big/img_1116\n2002/08/28/big/img_19238\n2002/08/21/big/img_660\n2002/08/14/big/img_607\n2002/08/05/big/img_3708\n2002/08/19/big/img_511\n2002/08/07/big/img_1316\n2002/07/25/big/img_1047\n2002/07/23/big/img_474\n2002/07/27/big/img_970\n2002/09/02/big/img_15752\n2002/09/01/big/img_16378\n2002/09/01/big/img_16189\n2002/08/26/big/img_276\n2002/07/24/big/img_518\n2002/08/14/big/img_1027\n2002/08/24/big/img_733\n2002/08/15/big/img_249\n2003/01/15/big/img_1371\n2002/08/07/big/img_1348\n2003/01/01/big/img_331\n2002/08/23/big/img_536\n2002/07/30/big/img_224\n2002/08/10/big/img_763\n2002/08/21/big/img_293\n2002/08/15/big/img_1211\n2002/08/15/big/img_1194\n2003/01/15/big/img_390\n2002/08/06/big/img_2893\n2002/08/17/big/img_691\n2002/08/07/big/img_1695\n2002/08/16/big/img_829\n2002/07/25/big/img_201\n2002/08/23/big/img_36\n2003/01/15/big/img_763\n2003/01/15/big/img_637\n2002/08/22/big/img_592\n2002/07/25/big/img_817\n2003/01/15/big/img_1219\n2002/08/05/big/img_3508\n2002/08/15/big/img_1108\n2002/07/19/big/img_488\n2003/01/16/big/img_704\n2003/01/13/big/img_1087\n2002/08/10/big/img_670\n2002/07/24/big/img_104\n2002/08/27/big/img_19823\n2002/09/01/big/img_16229\n2003/01/13/big/img_846\n2002/08/04/big/img_412\n2002/07/22/big/img_554\n2002/08/12/big/img_331\n2002/08/02/big/img_533\n2002/08/12/big/img_259\n2002/08/18/big/img_328\n2003/01/14/big/img_630\n2002/08/05/big/img_3541\n2002/08/06/big/img_2390\n2002/08/20/big/img_150\n2002/08/02/big/img_1231\n2002/08/16/big/img_710\n2002/08/19/big/img_591\n2002/07/22/big/img_725\n2002/07/24/big/img_820\n2003/01/13/big/img_568\n2002/08/22/big/img_853\n2002/08/09/big/img_648\n2002/08/23/big/img_528\n2003/01/14/big/img_888\n2002/08/3
0/big/img_18201\n2002/08/13/big/img_965\n2003/01/14/big/img_660\n2002/07/19/big/img_517\n2003/01/14/big/img_406\n2002/08/30/big/img_18433\n2002/08/07/big/img_1630\n2002/08/06/big/img_2717\n2002/08/21/big/img_470\n2002/07/23/big/img_633\n2002/08/20/big/img_915\n2002/08/16/big/img_893\n2002/07/29/big/img_644\n2002/08/15/big/img_529\n2002/08/16/big/img_668\n2002/08/07/big/img_1871\n2002/07/25/big/img_192\n2002/07/31/big/img_961\n2002/08/19/big/img_738\n2002/07/31/big/img_382\n2002/08/19/big/img_298\n2003/01/17/big/img_608\n2002/08/21/big/img_514\n2002/07/23/big/img_183\n2003/01/17/big/img_536\n2002/07/24/big/img_478\n2002/08/06/big/img_2997\n2002/09/02/big/img_15380\n2002/08/07/big/img_1153\n2002/07/31/big/img_967\n2002/07/31/big/img_711\n2002/08/26/big/img_664\n2003/01/01/big/img_326\n2002/08/24/big/img_775\n2002/08/08/big/img_961\n2002/08/16/big/img_77\n2002/08/12/big/img_296\n2002/07/22/big/img_905\n2003/01/13/big/img_284\n2002/08/13/big/img_887\n2002/08/24/big/img_849\n2002/07/30/big/img_345\n2002/08/18/big/img_419\n2002/08/01/big/img_1347\n2002/08/05/big/img_3670\n2002/07/21/big/img_479\n2002/08/08/big/img_913\n2002/09/02/big/img_15828\n2002/08/30/big/img_18194\n2002/08/08/big/img_471\n2002/08/22/big/img_734\n2002/08/09/big/img_586\n2002/08/09/big/img_454\n2002/07/29/big/img_47\n2002/07/19/big/img_381\n2002/07/29/big/img_733\n2002/08/20/big/img_327\n2002/07/21/big/img_96\n2002/08/06/big/img_2680\n2002/07/25/big/img_919\n2002/07/21/big/img_158\n2002/07/22/big/img_801\n2002/07/22/big/img_567\n2002/07/24/big/img_804\n2002/07/24/big/img_690\n2003/01/15/big/img_576\n2002/08/14/big/img_335\n2003/01/13/big/img_390\n2002/08/11/big/img_258\n2002/07/23/big/img_917\n2002/08/15/big/img_525\n2003/01/15/big/img_505\n2002/07/30/big/img_886\n2003/01/16/big/img_640\n2003/01/14/big/img_642\n2003/01/17/big/img_844\n2002/08/04/big/img_571\n2002/08/29/big/img_18702\n2003/01/15/big/img_240\n2002/07/29/big/img_553\n2002/08/10/big/img_354\n2002/08/18/big/img_17\n2003/01/15/big/img_782\n2
002/07/27/big/img_382\n2002/08/14/big/img_970\n2003/01/16/big/img_70\n2003/01/16/big/img_625\n2002/08/18/big/img_341\n2002/08/26/big/img_188\n2002/08/09/big/img_405\n2002/08/02/big/img_37\n2002/08/13/big/img_748\n2002/07/22/big/img_399\n2002/07/25/big/img_844\n2002/08/12/big/img_340\n2003/01/13/big/img_815\n2002/08/26/big/img_5\n2002/08/10/big/img_158\n2002/08/18/big/img_95\n2002/07/29/big/img_1297\n2003/01/13/big/img_508\n2002/09/01/big/img_16680\n2003/01/16/big/img_338\n2002/08/13/big/img_517\n2002/07/22/big/img_626\n2002/08/06/big/img_3024\n2002/07/26/big/img_499\n2003/01/13/big/img_387\n2002/08/31/big/img_18025\n2002/08/13/big/img_520\n2003/01/16/big/img_576\n2002/07/26/big/img_121\n2002/08/25/big/img_703\n2002/08/26/big/img_615\n2002/08/17/big/img_434\n2002/08/02/big/img_677\n2002/08/18/big/img_276\n2002/08/05/big/img_3672\n2002/07/26/big/img_700\n2002/07/31/big/img_277\n2003/01/14/big/img_220\n2002/08/23/big/img_232\n2002/08/31/big/img_17422\n2002/07/22/big/img_508\n2002/08/13/big/img_681\n2003/01/15/big/img_638\n2002/08/30/big/img_18408\n2003/01/14/big/img_533\n2003/01/17/big/img_12\n2002/08/28/big/img_19388\n2002/08/08/big/img_133\n2002/07/26/big/img_885\n2002/08/19/big/img_387\n2002/08/27/big/img_19976\n2002/08/26/big/img_118\n2002/08/28/big/img_19146\n2002/08/05/big/img_3259\n2002/08/15/big/img_536\n2002/07/22/big/img_279\n2002/07/22/big/img_9\n2002/08/13/big/img_301\n2002/08/15/big/img_974\n2002/08/06/big/img_2355\n2002/08/01/big/img_1526\n2002/08/03/big/img_417\n2002/08/04/big/img_407\n2002/08/15/big/img_1029\n2002/07/29/big/img_700\n2002/08/01/big/img_1463\n2002/08/31/big/img_17365\n2002/07/28/big/img_223\n2002/07/19/big/img_827\n2002/07/27/big/img_531\n2002/07/19/big/img_845\n2002/08/20/big/img_382\n2002/07/31/big/img_268\n2002/08/27/big/img_19705\n2002/08/02/big/img_830\n2002/08/23/big/img_250\n2002/07/20/big/img_777\n2002/08/21/big/img_879\n2002/08/26/big/img_20146\n2002/08/23/big/img_789\n2002/08/06/big/img_2683\n2002/08/25/big/img_576\n2002/08/09/b
ig/img_498\n2002/08/08/big/img_384\n2002/08/26/big/img_592\n2002/07/29/big/img_1470\n2002/08/21/big/img_452\n2002/08/30/big/img_18395\n2002/08/15/big/img_215\n2002/07/21/big/img_643\n2002/07/22/big/img_209\n2003/01/17/big/img_346\n2002/08/25/big/img_658\n2002/08/21/big/img_221\n2002/08/14/big/img_60\n2003/01/17/big/img_885\n2003/01/16/big/img_482\n2002/08/19/big/img_593\n2002/08/08/big/img_233\n2002/07/30/big/img_458\n2002/07/23/big/img_384\n2003/01/15/big/img_670\n2003/01/15/big/img_267\n2002/08/26/big/img_540\n2002/07/29/big/img_552\n2002/07/30/big/img_997\n2003/01/17/big/img_377\n2002/08/21/big/img_265\n2002/08/09/big/img_561\n2002/07/31/big/img_945\n2002/09/02/big/img_15252\n2002/08/11/big/img_276\n2002/07/22/big/img_491\n2002/07/26/big/img_517\n2002/08/14/big/img_726\n2002/08/08/big/img_46\n2002/08/28/big/img_19458\n2002/08/06/big/img_2935\n2002/07/29/big/img_1392\n2002/08/13/big/img_776\n2002/08/24/big/img_616\n2002/08/14/big/img_1065\n2002/07/29/big/img_889\n2002/08/18/big/img_188\n2002/08/07/big/img_1453\n2002/08/02/big/img_760\n2002/07/28/big/img_416\n2002/08/07/big/img_1393\n2002/08/26/big/img_292\n2002/08/26/big/img_301\n2003/01/13/big/img_195\n2002/07/26/big/img_532\n2002/08/20/big/img_550\n2002/08/05/big/img_3658\n2002/08/26/big/img_738\n2002/09/02/big/img_15750\n2003/01/17/big/img_451\n2002/07/23/big/img_339\n2002/08/16/big/img_637\n2002/08/14/big/img_748\n2002/08/06/big/img_2739\n2002/07/25/big/img_482\n2002/08/19/big/img_191\n2002/08/26/big/img_537\n2003/01/15/big/img_716\n2003/01/15/big/img_767\n2002/08/02/big/img_452\n2002/08/08/big/img_1011\n2002/08/10/big/img_144\n2003/01/14/big/img_122\n2002/07/24/big/img_586\n2002/07/24/big/img_762\n2002/08/20/big/img_369\n2002/07/30/big/img_146\n2002/08/23/big/img_396\n2003/01/15/big/img_200\n2002/08/15/big/img_1183\n2003/01/14/big/img_698\n2002/08/09/big/img_792\n2002/08/06/big/img_2347\n2002/07/31/big/img_911\n2002/08/26/big/img_722\n2002/08/23/big/img_621\n2002/08/05/big/img_3790\n2003/01/13/big/img_633\n20
02/08/09/big/img_224\n2002/07/24/big/img_454\n2002/07/21/big/img_202\n2002/08/02/big/img_630\n2002/08/30/big/img_18315\n2002/07/19/big/img_491\n2002/09/01/big/img_16456\n2002/08/09/big/img_242\n2002/07/25/big/img_595\n2002/07/22/big/img_522\n2002/08/01/big/img_1593\n2002/07/29/big/img_336\n2002/08/15/big/img_448\n2002/08/28/big/img_19281\n2002/07/29/big/img_342\n2002/08/12/big/img_78\n2003/01/14/big/img_525\n2002/07/28/big/img_147\n2002/08/11/big/img_353\n2002/08/22/big/img_513\n2002/08/04/big/img_721\n2002/08/17/big/img_247\n2003/01/14/big/img_891\n2002/08/20/big/img_853\n2002/07/19/big/img_414\n2002/08/01/big/img_1530\n2003/01/14/big/img_924\n2002/08/22/big/img_468\n2002/08/18/big/img_354\n2002/08/30/big/img_18193\n2002/08/23/big/img_492\n2002/08/15/big/img_871\n2002/08/12/big/img_494\n2002/08/06/big/img_2470\n2002/07/23/big/img_923\n2002/08/26/big/img_155\n2002/08/08/big/img_669\n2002/07/23/big/img_404\n2002/08/28/big/img_19421\n2002/08/29/big/img_18993\n2002/08/25/big/img_416\n2003/01/17/big/img_434\n2002/07/29/big/img_1370\n2002/07/28/big/img_483\n2002/08/11/big/img_50\n2002/08/10/big/img_404\n2002/09/02/big/img_15057\n2003/01/14/big/img_911\n2002/09/01/big/img_16697\n2003/01/16/big/img_665\n2002/09/01/big/img_16708\n2002/08/22/big/img_612\n2002/08/28/big/img_19471\n2002/08/02/big/img_198\n2003/01/16/big/img_527\n2002/08/22/big/img_209\n2002/08/30/big/img_18205\n2003/01/14/big/img_114\n2003/01/14/big/img_1028\n2003/01/16/big/img_894\n2003/01/14/big/img_837\n2002/07/30/big/img_9\n2002/08/06/big/img_2821\n2002/08/04/big/img_85\n2003/01/13/big/img_884\n2002/07/22/big/img_570\n2002/08/07/big/img_1773\n2002/07/26/big/img_208\n2003/01/17/big/img_946\n2002/07/19/big/img_930\n2003/01/01/big/img_698\n2003/01/17/big/img_612\n2002/07/19/big/img_372\n2002/07/30/big/img_721\n2003/01/14/big/img_649\n2002/08/19/big/img_4\n2002/07/25/big/img_1024\n2003/01/15/big/img_601\n2002/08/30/big/img_18470\n2002/07/22/big/img_29\n2002/08/07/big/img_1686\n2002/07/20/big/img_294\n2002/08/1
4/big/img_800\n2002/08/19/big/img_353\n2002/08/19/big/img_350\n2002/08/05/big/img_3392\n2002/08/09/big/img_622\n2003/01/15/big/img_236\n2002/08/11/big/img_643\n2002/08/05/big/img_3458\n2002/08/12/big/img_413\n2002/08/22/big/img_415\n2002/08/13/big/img_635\n2002/08/07/big/img_1198\n2002/08/04/big/img_873\n2002/08/12/big/img_407\n2003/01/15/big/img_346\n2002/08/02/big/img_275\n2002/08/17/big/img_997\n2002/08/21/big/img_958\n2002/08/20/big/img_579\n2002/07/29/big/img_142\n2003/01/14/big/img_1115\n2002/08/16/big/img_365\n2002/07/29/big/img_1414\n2002/08/17/big/img_489\n2002/08/13/big/img_1010\n2002/07/31/big/img_276\n2002/07/25/big/img_1000\n2002/08/23/big/img_524\n2002/08/28/big/img_19147\n2003/01/13/big/img_433\n2002/08/20/big/img_205\n2003/01/01/big/img_458\n2002/07/29/big/img_1449\n2003/01/16/big/img_696\n2002/08/28/big/img_19296\n2002/08/29/big/img_18688\n2002/08/21/big/img_767\n2002/08/20/big/img_532\n2002/08/26/big/img_187\n2002/07/26/big/img_183\n2002/07/27/big/img_890\n2003/01/13/big/img_576\n2002/07/30/big/img_15\n2002/07/31/big/img_889\n2002/08/31/big/img_17759\n2003/01/14/big/img_1114\n2002/07/19/big/img_445\n2002/08/03/big/img_593\n2002/07/24/big/img_750\n2002/07/30/big/img_133\n2002/08/25/big/img_671\n2002/07/20/big/img_351\n2002/08/31/big/img_17276\n2002/08/05/big/img_3231\n2002/09/02/big/img_15882\n2002/08/14/big/img_115\n2002/08/02/big/img_1148\n2002/07/25/big/img_936\n2002/07/31/big/img_639\n2002/08/04/big/img_427\n2002/08/22/big/img_843\n2003/01/17/big/img_17\n2003/01/13/big/img_690\n2002/08/13/big/img_472\n2002/08/09/big/img_425\n2002/08/05/big/img_3450\n2003/01/17/big/img_439\n2002/08/13/big/img_539\n2002/07/28/big/img_35\n2002/08/16/big/img_241\n2002/08/06/big/img_2898\n2003/01/16/big/img_429\n2002/08/05/big/img_3817\n2002/08/27/big/img_19919\n2002/07/19/big/img_422\n2002/08/15/big/img_560\n2002/07/23/big/img_750\n2002/07/30/big/img_353\n2002/08/05/big/img_43\n2002/08/23/big/img_305\n2002/08/01/big/img_2137\n2002/08/30/big/img_18097\n2002/08/01/big
/img_1389\n2002/08/02/big/img_308\n2003/01/14/big/img_652\n2002/08/01/big/img_1798\n2003/01/14/big/img_732\n2003/01/16/big/img_294\n2002/08/26/big/img_213\n2002/07/24/big/img_842\n2003/01/13/big/img_630\n2003/01/13/big/img_634\n2002/08/06/big/img_2285\n2002/08/01/big/img_2162\n2002/08/30/big/img_18134\n2002/08/02/big/img_1045\n2002/08/01/big/img_2143\n2002/07/25/big/img_135\n2002/07/20/big/img_645\n2002/08/05/big/img_3666\n2002/08/14/big/img_523\n2002/08/04/big/img_425\n2003/01/14/big/img_137\n2003/01/01/big/img_176\n2002/08/15/big/img_505\n2002/08/24/big/img_386\n2002/08/05/big/img_3187\n2002/08/15/big/img_419\n2003/01/13/big/img_520\n2002/08/04/big/img_444\n2002/08/26/big/img_483\n2002/08/05/big/img_3449\n2002/08/30/big/img_18409\n2002/08/28/big/img_19455\n2002/08/27/big/img_20090\n2002/07/23/big/img_625\n2002/08/24/big/img_205\n2002/08/08/big/img_938\n2003/01/13/big/img_527\n2002/08/07/big/img_1712\n2002/07/24/big/img_801\n2002/08/09/big/img_579\n2003/01/14/big/img_41\n2003/01/15/big/img_1130\n2002/07/21/big/img_672\n2002/08/07/big/img_1590\n2003/01/01/big/img_532\n2002/08/02/big/img_529\n2002/08/05/big/img_3591\n2002/08/23/big/img_5\n2003/01/14/big/img_882\n2002/08/28/big/img_19234\n2002/07/24/big/img_398\n2003/01/14/big/img_592\n2002/08/22/big/img_548\n2002/08/12/big/img_761\n2003/01/16/big/img_497\n2002/08/18/big/img_133\n2002/08/08/big/img_874\n2002/07/19/big/img_247\n2002/08/15/big/img_170\n2002/08/27/big/img_19679\n2002/08/20/big/img_246\n2002/08/24/big/img_358\n2002/07/29/big/img_599\n2002/08/01/big/img_1555\n2002/07/30/big/img_491\n2002/07/30/big/img_371\n2003/01/16/big/img_682\n2002/07/25/big/img_619\n2003/01/15/big/img_587\n2002/08/02/big/img_1212\n2002/08/01/big/img_2152\n2002/07/25/big/img_668\n2003/01/16/big/img_574\n2002/08/28/big/img_19464\n2002/08/11/big/img_536\n2002/07/24/big/img_201\n2002/08/05/big/img_3488\n2002/07/25/big/img_887\n2002/07/22/big/img_789\n2002/07/30/big/img_432\n2002/08/16/big/img_166\n2002/09/01/big/img_16333\n2002/07/26/big/i
mg_1010\n2002/07/21/big/img_793\n2002/07/22/big/img_720\n2002/07/31/big/img_337\n2002/07/27/big/img_185\n2002/08/23/big/img_440\n2002/07/31/big/img_801\n2002/07/25/big/img_478\n2003/01/14/big/img_171\n2002/08/07/big/img_1054\n2002/09/02/big/img_15659\n2002/07/29/big/img_1348\n2002/08/09/big/img_337\n2002/08/26/big/img_684\n2002/07/31/big/img_537\n2002/08/15/big/img_808\n2003/01/13/big/img_740\n2002/08/07/big/img_1667\n2002/08/03/big/img_404\n2002/08/06/big/img_2520\n2002/07/19/big/img_230\n2002/07/19/big/img_356\n2003/01/16/big/img_627\n2002/08/04/big/img_474\n2002/07/29/big/img_833\n2002/07/25/big/img_176\n2002/08/01/big/img_1684\n2002/08/21/big/img_643\n2002/08/27/big/img_19673\n2002/08/02/big/img_838\n2002/08/06/big/img_2378\n2003/01/15/big/img_48\n2002/07/30/big/img_470\n2002/08/15/big/img_963\n2002/08/24/big/img_444\n2002/08/16/big/img_662\n2002/08/15/big/img_1209\n2002/07/24/big/img_25\n2002/08/06/big/img_2740\n2002/07/29/big/img_996\n2002/08/31/big/img_18074\n2002/08/04/big/img_343\n2003/01/17/big/img_509\n2003/01/13/big/img_726\n2002/08/07/big/img_1466\n2002/07/26/big/img_307\n2002/08/10/big/img_598\n2002/08/13/big/img_890\n2002/08/14/big/img_997\n2002/07/19/big/img_392\n2002/08/02/big/img_475\n2002/08/29/big/img_19038\n2002/07/29/big/img_538\n2002/07/29/big/img_502\n2002/08/02/big/img_364\n2002/08/31/big/img_17353\n2002/08/08/big/img_539\n2002/08/01/big/img_1449\n2002/07/22/big/img_363\n2002/08/02/big/img_90\n2002/09/01/big/img_16867\n2002/08/05/big/img_3371\n2002/07/30/big/img_342\n2002/08/07/big/img_1363\n2002/08/22/big/img_790\n2003/01/15/big/img_404\n2002/08/05/big/img_3447\n2002/09/01/big/img_16167\n2003/01/13/big/img_840\n2002/08/22/big/img_1001\n2002/08/09/big/img_431\n2002/07/27/big/img_618\n2002/07/31/big/img_741\n2002/07/30/big/img_964\n2002/07/25/big/img_86\n2002/07/29/big/img_275\n2002/08/21/big/img_921\n2002/07/26/big/img_892\n2002/08/21/big/img_663\n2003/01/13/big/img_567\n2003/01/14/big/img_719\n2002/07/28/big/img_251\n2003/01/15/big/img_1123
\n2002/07/29/big/img_260\n2002/08/24/big/img_337\n2002/08/01/big/img_1914\n2002/08/13/big/img_373\n2003/01/15/big/img_589\n2002/08/13/big/img_906\n2002/07/26/big/img_270\n2002/08/26/big/img_313\n2002/08/25/big/img_694\n2003/01/01/big/img_327\n2002/07/23/big/img_261\n2002/08/26/big/img_642\n2002/07/29/big/img_918\n2002/07/23/big/img_455\n2002/07/24/big/img_612\n2002/07/23/big/img_534\n2002/07/19/big/img_534\n2002/07/19/big/img_726\n2002/08/01/big/img_2146\n2002/08/02/big/img_543\n2003/01/16/big/img_777\n2002/07/30/big/img_484\n2002/08/13/big/img_1161\n2002/07/21/big/img_390\n2002/08/06/big/img_2288\n2002/08/21/big/img_677\n2002/08/13/big/img_747\n2002/08/15/big/img_1248\n2002/07/31/big/img_416\n2002/09/02/big/img_15259\n2002/08/16/big/img_781\n2002/08/24/big/img_754\n2002/07/24/big/img_803\n2002/08/20/big/img_609\n2002/08/28/big/img_19571\n2002/09/01/big/img_16140\n2002/08/26/big/img_769\n2002/07/20/big/img_588\n2002/08/02/big/img_898\n2002/07/21/big/img_466\n2002/08/14/big/img_1046\n2002/07/25/big/img_212\n2002/08/26/big/img_353\n2002/08/19/big/img_810\n2002/08/31/big/img_17824\n2002/08/12/big/img_631\n2002/07/19/big/img_828\n2002/07/24/big/img_130\n2002/08/25/big/img_580\n2002/07/31/big/img_699\n2002/07/23/big/img_808\n2002/07/31/big/img_377\n2003/01/16/big/img_570\n2002/09/01/big/img_16254\n2002/07/21/big/img_471\n2002/08/01/big/img_1548\n2002/08/18/big/img_252\n2002/08/19/big/img_576\n2002/08/20/big/img_464\n2002/07/27/big/img_735\n2002/08/21/big/img_589\n2003/01/15/big/img_1192\n2002/08/09/big/img_302\n2002/07/31/big/img_594\n2002/08/23/big/img_19\n2002/08/29/big/img_18819\n2002/08/19/big/img_293\n2002/07/30/big/img_331\n2002/08/23/big/img_607\n2002/07/30/big/img_363\n2002/08/16/big/img_766\n2003/01/13/big/img_481\n2002/08/06/big/img_2515\n2002/09/02/big/img_15913\n2002/09/02/big/img_15827\n2002/09/02/big/img_15053\n2002/08/07/big/img_1576\n2002/07/23/big/img_268\n2002/08/21/big/img_152\n2003/01/15/big/img_578\n2002/07/21/big/img_589\n2002/07/20/big/img_548\n200
2/08/27/big/img_19693\n2002/08/31/big/img_17252\n2002/07/31/big/img_138\n2002/07/23/big/img_372\n2002/08/16/big/img_695\n2002/07/27/big/img_287\n2002/08/15/big/img_315\n2002/08/10/big/img_361\n2002/07/29/big/img_899\n2002/08/13/big/img_771\n2002/08/21/big/img_92\n2003/01/15/big/img_425\n2003/01/16/big/img_450\n2002/09/01/big/img_16942\n2002/08/02/big/img_51\n2002/09/02/big/img_15379\n2002/08/24/big/img_147\n2002/08/30/big/img_18122\n2002/07/26/big/img_950\n2002/08/07/big/img_1400\n2002/08/17/big/img_468\n2002/08/15/big/img_470\n2002/07/30/big/img_318\n2002/07/22/big/img_644\n2002/08/27/big/img_19732\n2002/07/23/big/img_601\n2002/08/26/big/img_398\n2002/08/21/big/img_428\n2002/08/06/big/img_2119\n2002/08/29/big/img_19103\n2003/01/14/big/img_933\n2002/08/11/big/img_674\n2002/08/28/big/img_19420\n2002/08/03/big/img_418\n2002/08/17/big/img_312\n2002/07/25/big/img_1044\n2003/01/17/big/img_671\n2002/08/30/big/img_18297\n2002/07/25/big/img_755\n2002/07/23/big/img_471\n2002/08/21/big/img_39\n2002/07/26/big/img_699\n2003/01/14/big/img_33\n2002/07/31/big/img_411\n2002/08/16/big/img_645\n2003/01/17/big/img_116\n2002/09/02/big/img_15903\n2002/08/20/big/img_120\n2002/08/22/big/img_176\n2002/07/29/big/img_1316\n2002/08/27/big/img_19914\n2002/07/22/big/img_719\n2002/08/28/big/img_19239\n2003/01/13/big/img_385\n2002/08/08/big/img_525\n2002/07/19/big/img_782\n2002/08/13/big/img_843\n2002/07/30/big/img_107\n2002/08/11/big/img_752\n2002/07/29/big/img_383\n2002/08/26/big/img_249\n2002/08/29/big/img_18860\n2002/07/30/big/img_70\n2002/07/26/big/img_194\n2002/08/15/big/img_530\n2002/08/08/big/img_816\n2002/07/31/big/img_286\n2003/01/13/big/img_294\n2002/07/31/big/img_251\n2002/07/24/big/img_13\n2002/08/31/big/img_17938\n2002/07/22/big/img_642\n2003/01/14/big/img_728\n2002/08/18/big/img_47\n2002/08/22/big/img_306\n2002/08/20/big/img_348\n2002/08/15/big/img_764\n2002/08/08/big/img_163\n2002/07/23/big/img_531\n2002/07/23/big/img_467\n2003/01/16/big/img_743\n2003/01/13/big/img_535\n2002/08/02
/big/img_523\n2002/08/22/big/img_120\n2002/08/11/big/img_496\n2002/08/29/big/img_19075\n2002/08/08/big/img_465\n2002/08/09/big/img_790\n2002/08/19/big/img_588\n2002/08/23/big/img_407\n2003/01/17/big/img_435\n2002/08/24/big/img_398\n2002/08/27/big/img_19899\n2003/01/15/big/img_335\n2002/08/13/big/img_493\n2002/09/02/big/img_15460\n2002/07/31/big/img_470\n2002/08/05/big/img_3550\n2002/07/28/big/img_123\n2002/08/01/big/img_1498\n2002/08/04/big/img_504\n2003/01/17/big/img_427\n2002/08/27/big/img_19708\n2002/07/27/big/img_861\n2002/07/25/big/img_685\n2002/07/31/big/img_207\n2003/01/14/big/img_745\n2002/08/31/big/img_17756\n2002/08/24/big/img_288\n2002/08/18/big/img_181\n2002/08/10/big/img_520\n2002/08/25/big/img_705\n2002/08/23/big/img_226\n2002/08/04/big/img_727\n2002/07/24/big/img_625\n2002/08/28/big/img_19157\n2002/08/23/big/img_586\n2002/07/31/big/img_232\n2003/01/13/big/img_240\n2003/01/14/big/img_321\n2003/01/15/big/img_533\n2002/07/23/big/img_480\n2002/07/24/big/img_371\n2002/08/21/big/img_702\n2002/08/31/big/img_17075\n2002/09/02/big/img_15278\n2002/07/29/big/img_246\n2003/01/15/big/img_829\n2003/01/15/big/img_1213\n2003/01/16/big/img_441\n2002/08/14/big/img_921\n2002/07/23/big/img_425\n2002/08/15/big/img_296\n2002/07/19/big/img_135\n2002/07/26/big/img_402\n2003/01/17/big/img_88\n2002/08/20/big/img_872\n2002/08/13/big/img_1110\n2003/01/16/big/img_1040\n2002/07/23/big/img_9\n2002/08/13/big/img_700\n2002/08/16/big/img_371\n2002/08/27/big/img_19966\n2003/01/17/big/img_391\n2002/08/18/big/img_426\n2002/08/01/big/img_1618\n2002/07/21/big/img_754\n2003/01/14/big/img_1101\n2003/01/16/big/img_1022\n2002/07/22/big/img_275\n2002/08/24/big/img_86\n2002/08/17/big/img_582\n2003/01/15/big/img_765\n2003/01/17/big/img_449\n2002/07/28/big/img_265\n2003/01/13/big/img_552\n2002/07/28/big/img_115\n2003/01/16/big/img_56\n2002/08/02/big/img_1232\n2003/01/17/big/img_925\n2002/07/22/big/img_445\n2002/07/25/big/img_957\n2002/07/20/big/img_589\n2002/08/31/big/img_17107\n2002/07/29/big/img
_483\n2002/08/14/big/img_1063\n2002/08/07/big/img_1545\n2002/08/14/big/img_680\n2002/09/01/big/img_16694\n2002/08/14/big/img_257\n2002/08/11/big/img_726\n2002/07/26/big/img_681\n2002/07/25/big/img_481\n2003/01/14/big/img_737\n2002/08/28/big/img_19480\n2003/01/16/big/img_362\n2002/08/27/big/img_19865\n2003/01/01/big/img_547\n2002/09/02/big/img_15074\n2002/08/01/big/img_1453\n2002/08/22/big/img_594\n2002/08/28/big/img_19263\n2002/08/13/big/img_478\n2002/07/29/big/img_1358\n2003/01/14/big/img_1022\n2002/08/16/big/img_450\n2002/08/02/big/img_159\n2002/07/26/big/img_781\n2003/01/13/big/img_601\n2002/08/20/big/img_407\n2002/08/15/big/img_468\n2002/08/31/big/img_17902\n2002/08/16/big/img_81\n2002/07/25/big/img_987\n2002/07/25/big/img_500\n2002/08/02/big/img_31\n2002/08/18/big/img_538\n2002/08/08/big/img_54\n2002/07/23/big/img_686\n2002/07/24/big/img_836\n2003/01/17/big/img_734\n2002/08/16/big/img_1055\n2003/01/16/big/img_521\n2002/07/25/big/img_612\n2002/08/22/big/img_778\n2002/08/03/big/img_251\n2002/08/12/big/img_436\n2002/08/23/big/img_705\n2002/07/28/big/img_243\n2002/07/25/big/img_1029\n2002/08/20/big/img_287\n2002/08/29/big/img_18739\n2002/08/05/big/img_3272\n2002/07/27/big/img_214\n2003/01/14/big/img_5\n2002/08/01/big/img_1380\n2002/08/29/big/img_19097\n2002/07/30/big/img_486\n2002/08/29/big/img_18707\n2002/08/10/big/img_559\n2002/08/15/big/img_365\n2002/08/09/big/img_525\n2002/08/10/big/img_689\n2002/07/25/big/img_502\n2002/08/03/big/img_667\n2002/08/10/big/img_855\n2002/08/10/big/img_706\n2002/08/18/big/img_603\n2003/01/16/big/img_1055\n2002/08/31/big/img_17890\n2002/08/15/big/img_761\n2003/01/15/big/img_489\n2002/08/26/big/img_351\n2002/08/01/big/img_1772\n2002/08/31/big/img_17729\n2002/07/25/big/img_609\n2003/01/13/big/img_539\n2002/07/27/big/img_686\n2002/07/31/big/img_311\n2002/08/22/big/img_799\n2003/01/16/big/img_936\n2002/08/31/big/img_17813\n2002/08/04/big/img_862\n2002/08/09/big/img_332\n2002/07/20/big/img_148\n2002/08/12/big/img_426\n2002/07/24/big/img_6
9\n2002/07/27/big/img_685\n2002/08/02/big/img_480\n2002/08/26/big/img_154\n2002/07/24/big/img_598\n2002/08/01/big/img_1881\n2002/08/20/big/img_667\n2003/01/14/big/img_495\n2002/07/21/big/img_744\n2002/07/30/big/img_150\n2002/07/23/big/img_924\n2002/08/08/big/img_272\n2002/07/23/big/img_310\n2002/07/25/big/img_1011\n2002/09/02/big/img_15725\n2002/07/19/big/img_814\n2002/08/20/big/img_936\n2002/07/25/big/img_85\n2002/08/24/big/img_662\n2002/08/09/big/img_495\n2003/01/15/big/img_196\n2002/08/16/big/img_707\n2002/08/28/big/img_19370\n2002/08/06/big/img_2366\n2002/08/06/big/img_3012\n2002/08/01/big/img_1452\n2002/07/31/big/img_742\n2002/07/27/big/img_914\n2003/01/13/big/img_290\n2002/07/31/big/img_288\n2002/08/02/big/img_171\n2002/08/22/big/img_191\n2002/07/27/big/img_1066\n2002/08/12/big/img_383\n2003/01/17/big/img_1018\n2002/08/01/big/img_1785\n2002/08/11/big/img_390\n2002/08/27/big/img_20037\n2002/08/12/big/img_38\n2003/01/15/big/img_103\n2002/08/26/big/img_31\n2002/08/18/big/img_660\n2002/07/22/big/img_694\n2002/08/15/big/img_24\n2002/07/27/big/img_1077\n2002/08/01/big/img_1943\n2002/07/22/big/img_292\n2002/09/01/big/img_16857\n2002/07/22/big/img_892\n2003/01/14/big/img_46\n2002/08/09/big/img_469\n2002/08/09/big/img_414\n2003/01/16/big/img_40\n2002/08/28/big/img_19231\n2002/07/27/big/img_978\n2002/07/23/big/img_475\n2002/07/25/big/img_92\n2002/08/09/big/img_799\n2002/07/25/big/img_491\n2002/08/03/big/img_654\n2003/01/15/big/img_687\n2002/08/11/big/img_478\n2002/08/07/big/img_1664\n2002/08/20/big/img_362\n2002/08/01/big/img_1298\n2003/01/13/big/img_500\n2002/08/06/big/img_2896\n2002/08/30/big/img_18529\n2002/08/16/big/img_1020\n2002/07/29/big/img_892\n2002/08/29/big/img_18726\n2002/07/21/big/img_453\n2002/08/17/big/img_437\n2002/07/19/big/img_665\n2002/07/22/big/img_440\n2002/07/19/big/img_582\n2002/07/21/big/img_233\n2003/01/01/big/img_82\n2002/07/25/big/img_341\n2002/07/29/big/img_864\n2002/08/02/big/img_276\n2002/08/29/big/img_18654\n2002/07/27/big/img_1024\n2002/0
8/19/big/img_373\n2003/01/15/big/img_241\n2002/07/25/big/img_84\n2002/08/13/big/img_834\n2002/08/10/big/img_511\n2002/08/01/big/img_1627\n2002/08/08/big/img_607\n2002/08/06/big/img_2083\n2002/08/01/big/img_1486\n2002/08/08/big/img_700\n2002/08/01/big/img_1954\n2002/08/21/big/img_54\n2002/07/30/big/img_847\n2002/08/28/big/img_19169\n2002/07/21/big/img_549\n2002/08/03/big/img_693\n2002/07/31/big/img_1002\n2003/01/14/big/img_1035\n2003/01/16/big/img_622\n2002/07/30/big/img_1201\n2002/08/10/big/img_444\n2002/07/31/big/img_374\n2002/08/21/big/img_301\n2002/08/13/big/img_1095\n2003/01/13/big/img_288\n2002/07/25/big/img_232\n2003/01/13/big/img_967\n2002/08/26/big/img_360\n2002/08/05/big/img_67\n2002/08/29/big/img_18969\n2002/07/28/big/img_16\n2002/08/16/big/img_515\n2002/07/20/big/img_708\n2002/08/18/big/img_178\n2003/01/15/big/img_509\n2002/07/25/big/img_430\n2002/08/21/big/img_738\n2002/08/16/big/img_886\n2002/09/02/big/img_15605\n2002/09/01/big/img_16242\n2002/08/24/big/img_711\n2002/07/25/big/img_90\n2002/08/09/big/img_491\n2002/07/30/big/img_534\n2003/01/13/big/img_474\n2002/08/25/big/img_510\n2002/08/15/big/img_555\n2002/08/02/big/img_775\n2002/07/23/big/img_975\n2002/08/19/big/img_229\n2003/01/17/big/img_860\n2003/01/02/big/img_10\n2002/07/23/big/img_542\n2002/08/06/big/img_2535\n2002/07/22/big/img_37\n2002/08/06/big/img_2342\n2002/08/25/big/img_515\n2002/08/25/big/img_336\n2002/08/18/big/img_837\n2002/08/21/big/img_616\n2003/01/17/big/img_24\n2002/07/26/big/img_936\n2002/08/14/big/img_896\n2002/07/29/big/img_465\n2002/07/31/big/img_543\n2002/08/01/big/img_1411\n2002/08/02/big/img_423\n2002/08/21/big/img_44\n2002/07/31/big/img_11\n2003/01/15/big/img_628\n2003/01/15/big/img_605\n2002/07/30/big/img_571\n2002/07/23/big/img_428\n2002/08/15/big/img_942\n2002/07/26/big/img_531\n2003/01/16/big/img_59\n2002/08/02/big/img_410\n2002/07/31/big/img_230\n2002/08/19/big/img_806\n2003/01/14/big/img_462\n2002/08/16/big/img_370\n2002/08/13/big/img_380\n2002/08/16/big/img_932\n2002/0
7/19/big/img_393\n2002/08/20/big/img_764\n2002/08/15/big/img_616\n2002/07/26/big/img_267\n2002/07/27/big/img_1069\n2002/08/14/big/img_1041\n2003/01/13/big/img_594\n2002/09/01/big/img_16845\n2002/08/09/big/img_229\n2003/01/16/big/img_639\n2002/08/19/big/img_398\n2002/08/18/big/img_978\n2002/08/24/big/img_296\n2002/07/29/big/img_415\n2002/07/30/big/img_923\n2002/08/18/big/img_575\n2002/08/22/big/img_182\n2002/07/25/big/img_806\n2002/07/22/big/img_49\n2002/07/29/big/img_989\n2003/01/17/big/img_789\n2003/01/15/big/img_503\n2002/09/01/big/img_16062\n2003/01/17/big/img_794\n2002/08/15/big/img_564\n2003/01/15/big/img_222\n2002/08/01/big/img_1656\n2003/01/13/big/img_432\n2002/07/19/big/img_426\n2002/08/17/big/img_244\n2002/08/13/big/img_805\n2002/09/02/big/img_15067\n2002/08/11/big/img_58\n2002/08/22/big/img_636\n2002/07/22/big/img_416\n2002/08/13/big/img_836\n2002/08/26/big/img_363\n2002/07/30/big/img_917\n2003/01/14/big/img_206\n2002/08/12/big/img_311\n2002/08/31/big/img_17623\n2002/07/29/big/img_661\n2003/01/13/big/img_417\n2002/08/02/big/img_463\n2002/08/02/big/img_669\n2002/08/26/big/img_670\n2002/08/02/big/img_375\n2002/07/19/big/img_209\n2002/08/08/big/img_115\n2002/08/21/big/img_399\n2002/08/20/big/img_911\n2002/08/07/big/img_1212\n2002/08/20/big/img_578\n2002/08/22/big/img_554\n2002/08/21/big/img_484\n2002/07/25/big/img_450\n2002/08/03/big/img_542\n2002/08/15/big/img_561\n2002/07/23/big/img_360\n2002/08/30/big/img_18137\n2002/07/25/big/img_250\n2002/08/03/big/img_647\n2002/08/20/big/img_375\n2002/08/14/big/img_387\n2002/09/01/big/img_16990\n2002/08/28/big/img_19341\n2003/01/15/big/img_239\n2002/08/20/big/img_528\n2002/08/12/big/img_130\n2002/09/02/big/img_15108\n2003/01/15/big/img_372\n2002/08/16/big/img_678\n2002/08/04/big/img_623\n2002/07/23/big/img_477\n2002/08/28/big/img_19590\n2003/01/17/big/img_978\n2002/09/01/big/img_16692\n2002/07/20/big/img_109\n2002/08/06/big/img_2660\n2003/01/14/big/img_464\n2002/08/09/big/img_618\n2002/07/22/big/img_722\n2002/08/25/big/
img_419\n2002/08/03/big/img_314\n2002/08/25/big/img_40\n2002/07/27/big/img_430\n2002/08/10/big/img_569\n2002/08/23/big/img_398\n2002/07/23/big/img_893\n2002/08/16/big/img_261\n2002/08/06/big/img_2668\n2002/07/22/big/img_835\n2002/09/02/big/img_15093\n2003/01/16/big/img_65\n2002/08/21/big/img_448\n2003/01/14/big/img_351\n2003/01/17/big/img_133\n2002/07/28/big/img_493\n2003/01/15/big/img_640\n2002/09/01/big/img_16880\n2002/08/15/big/img_350\n2002/08/20/big/img_624\n2002/08/25/big/img_604\n2002/08/06/big/img_2200\n2002/08/23/big/img_290\n2002/08/13/big/img_1152\n2003/01/14/big/img_251\n2002/08/02/big/img_538\n2002/08/22/big/img_613\n2003/01/13/big/img_351\n2002/08/18/big/img_368\n2002/07/23/big/img_392\n2002/07/25/big/img_198\n2002/07/25/big/img_418\n2002/08/26/big/img_614\n2002/07/23/big/img_405\n2003/01/14/big/img_445\n2002/07/25/big/img_326\n2002/08/10/big/img_734\n2003/01/14/big/img_530\n2002/08/08/big/img_561\n2002/08/29/big/img_18990\n2002/08/10/big/img_576\n2002/07/29/big/img_1494\n2002/07/19/big/img_198\n2002/08/10/big/img_562\n2002/07/22/big/img_901\n2003/01/14/big/img_37\n2002/09/02/big/img_15629\n2003/01/14/big/img_58\n2002/08/01/big/img_1364\n2002/07/27/big/img_636\n2003/01/13/big/img_241\n2002/09/01/big/img_16988\n2003/01/13/big/img_560\n2002/08/09/big/img_533\n2002/07/31/big/img_249\n2003/01/17/big/img_1007\n2002/07/21/big/img_64\n2003/01/13/big/img_537\n2003/01/15/big/img_606\n2002/08/18/big/img_651\n2002/08/24/big/img_405\n2002/07/26/big/img_837\n2002/08/09/big/img_562\n2002/08/01/big/img_1983\n2002/08/03/big/img_514\n2002/07/29/big/img_314\n2002/08/12/big/img_493\n2003/01/14/big/img_121\n2003/01/14/big/img_479\n2002/08/04/big/img_410\n2002/07/22/big/img_607\n2003/01/17/big/img_417\n2002/07/20/big/img_547\n2002/08/13/big/img_396\n2002/08/31/big/img_17538\n2002/08/13/big/img_187\n2002/08/12/big/img_328\n2003/01/14/big/img_569\n2002/07/27/big/img_1081\n2002/08/14/big/img_504\n2002/08/23/big/img_785\n2002/07/26/big/img_339\n2002/08/07/big/img_1156\n2002/08
/07/big/img_1456\n2002/08/23/big/img_378\n2002/08/27/big/img_19719\n2002/07/31/big/img_39\n2002/07/31/big/img_883\n2003/01/14/big/img_676\n2002/07/29/big/img_214\n2002/07/26/big/img_669\n2002/07/25/big/img_202\n2002/08/08/big/img_259\n2003/01/17/big/img_943\n2003/01/15/big/img_512\n2002/08/05/big/img_3295\n2002/08/27/big/img_19685\n2002/08/08/big/img_277\n2002/08/30/big/img_18154\n2002/07/22/big/img_663\n2002/08/29/big/img_18914\n2002/07/31/big/img_908\n2002/08/27/big/img_19926\n2003/01/13/big/img_791\n2003/01/15/big/img_827\n2002/08/18/big/img_878\n2002/08/14/big/img_670\n2002/07/20/big/img_182\n2002/08/15/big/img_291\n2002/08/06/big/img_2600\n2002/07/23/big/img_587\n2002/08/14/big/img_577\n2003/01/15/big/img_585\n2002/07/30/big/img_310\n2002/08/03/big/img_658\n2002/08/10/big/img_157\n2002/08/19/big/img_811\n2002/07/29/big/img_1318\n2002/08/04/big/img_104\n2002/07/30/big/img_332\n2002/07/24/big/img_789\n2002/07/29/big/img_516\n2002/07/23/big/img_843\n2002/08/01/big/img_1528\n2002/08/13/big/img_798\n2002/08/07/big/img_1729\n2002/08/28/big/img_19448\n2003/01/16/big/img_95\n2002/08/12/big/img_473\n2002/07/27/big/img_269\n2003/01/16/big/img_621\n2002/07/29/big/img_772\n2002/07/24/big/img_171\n2002/07/19/big/img_429\n2002/08/07/big/img_1933\n2002/08/27/big/img_19629\n2002/08/05/big/img_3688\n2002/08/07/big/img_1691\n2002/07/23/big/img_600\n2002/07/29/big/img_666\n2002/08/25/big/img_566\n2002/08/06/big/img_2659\n2002/08/29/big/img_18929\n2002/08/16/big/img_407\n2002/08/18/big/img_774\n2002/08/19/big/img_249\n2002/08/06/big/img_2427\n2002/08/29/big/img_18899\n2002/08/01/big/img_1818\n2002/07/31/big/img_108\n2002/07/29/big/img_500\n2002/08/11/big/img_115\n2002/07/19/big/img_521\n2002/08/02/big/img_1163\n2002/07/22/big/img_62\n2002/08/13/big/img_466\n2002/08/21/big/img_956\n2002/08/23/big/img_602\n2002/08/20/big/img_858\n2002/07/25/big/img_690\n2002/07/19/big/img_130\n2002/08/04/big/img_874\n2002/07/26/big/img_489\n2002/07/22/big/img_548\n2002/08/10/big/img_191\n2002/07/25/
big/img_1051\n2002/08/18/big/img_473\n2002/08/12/big/img_755\n2002/08/18/big/img_413\n2002/08/08/big/img_1044\n2002/08/17/big/img_680\n2002/08/26/big/img_235\n2002/08/20/big/img_330\n2002/08/22/big/img_344\n2002/08/09/big/img_593\n2002/07/31/big/img_1006\n2002/08/14/big/img_337\n2002/08/16/big/img_728\n2002/07/24/big/img_834\n2002/08/04/big/img_552\n2002/09/02/big/img_15213\n2002/07/25/big/img_725\n2002/08/30/big/img_18290\n2003/01/01/big/img_475\n2002/07/27/big/img_1083\n2002/08/29/big/img_18955\n2002/08/31/big/img_17232\n2002/08/08/big/img_480\n2002/08/01/big/img_1311\n2002/07/30/big/img_745\n2002/08/03/big/img_649\n2002/08/12/big/img_193\n2002/07/29/big/img_228\n2002/07/25/big/img_836\n2002/08/20/big/img_400\n2002/07/30/big/img_507\n2002/09/02/big/img_15072\n2002/07/26/big/img_658\n2002/07/28/big/img_503\n2002/08/05/big/img_3814\n2002/08/24/big/img_745\n2003/01/13/big/img_817\n2002/08/08/big/img_579\n2002/07/22/big/img_251\n2003/01/13/big/img_689\n2002/07/25/big/img_407\n2002/08/13/big/img_1050\n2002/08/14/big/img_733\n2002/07/24/big/img_82\n2003/01/17/big/img_288\n2003/01/15/big/img_475\n2002/08/14/big/img_620\n2002/08/21/big/img_167\n2002/07/19/big/img_300\n2002/07/26/big/img_219\n2002/08/01/big/img_1468\n2002/07/23/big/img_260\n2002/08/09/big/img_555\n2002/07/19/big/img_160\n2002/08/02/big/img_1060\n2003/01/14/big/img_149\n2002/08/15/big/img_346\n2002/08/24/big/img_597\n2002/08/22/big/img_502\n2002/08/30/big/img_18228\n2002/07/21/big/img_766\n2003/01/15/big/img_841\n2002/07/24/big/img_516\n2002/08/02/big/img_265\n2002/08/15/big/img_1243\n2003/01/15/big/img_223\n2002/08/04/big/img_236\n2002/07/22/big/img_309\n2002/07/20/big/img_656\n2002/07/31/big/img_412\n2002/09/01/big/img_16462\n2003/01/16/big/img_431\n2002/07/22/big/img_793\n2002/08/15/big/img_877\n2002/07/26/big/img_282\n2002/07/25/big/img_529\n2002/08/24/big/img_613\n2003/01/17/big/img_700\n2002/08/06/big/img_2526\n2002/08/24/big/img_394\n2002/08/21/big/img_521\n2002/08/25/big/img_560\n2002/07/29/big/img_
966\n2002/07/25/big/img_448\n2003/01/13/big/img_782\n2002/08/21/big/img_296\n2002/09/01/big/img_16755\n2002/08/05/big/img_3552\n2002/09/02/big/img_15823\n2003/01/14/big/img_193\n2002/07/21/big/img_159\n2002/08/02/big/img_564\n2002/08/16/big/img_300\n2002/07/19/big/img_269\n2002/08/13/big/img_676\n2002/07/28/big/img_57\n2002/08/05/big/img_3318\n2002/07/31/big/img_218\n2002/08/21/big/img_898\n2002/07/29/big/img_109\n2002/07/19/big/img_854\n2002/08/23/big/img_311\n2002/08/14/big/img_318\n2002/07/25/big/img_523\n2002/07/21/big/img_678\n2003/01/17/big/img_690\n2002/08/28/big/img_19503\n2002/08/18/big/img_251\n2002/08/22/big/img_672\n2002/08/20/big/img_663\n2002/08/02/big/img_148\n2002/09/02/big/img_15580\n2002/07/25/big/img_778\n2002/08/14/big/img_565\n2002/08/12/big/img_374\n2002/08/13/big/img_1018\n2002/08/20/big/img_474\n2002/08/25/big/img_33\n2002/08/02/big/img_1190\n2002/08/08/big/img_864\n2002/08/14/big/img_1071\n2002/08/30/big/img_18103\n2002/08/18/big/img_533\n2003/01/16/big/img_650\n2002/07/25/big/img_108\n2002/07/26/big/img_81\n2002/07/27/big/img_543\n2002/07/29/big/img_521\n2003/01/13/big/img_434\n2002/08/26/big/img_674\n2002/08/06/big/img_2932\n2002/08/07/big/img_1262\n2003/01/15/big/img_201\n2003/01/16/big/img_673\n2002/09/02/big/img_15988\n2002/07/29/big/img_1306\n2003/01/14/big/img_1072\n2002/08/30/big/img_18232\n2002/08/05/big/img_3711\n2002/07/23/big/img_775\n2002/08/01/big/img_16\n2003/01/16/big/img_630\n2002/08/22/big/img_695\n2002/08/14/big/img_51\n2002/08/14/big/img_782\n2002/08/24/big/img_742\n2003/01/14/big/img_512\n2003/01/15/big/img_1183\n2003/01/15/big/img_714\n2002/08/01/big/img_2078\n2002/07/31/big/img_682\n2002/09/02/big/img_15687\n2002/07/26/big/img_518\n2002/08/27/big/img_19676\n2002/09/02/big/img_15969\n2002/08/02/big/img_931\n2002/08/25/big/img_508\n2002/08/29/big/img_18616\n2002/07/22/big/img_839\n2002/07/28/big/img_313\n2003/01/14/big/img_155\n2002/08/02/big/img_1105\n2002/08/09/big/img_53\n2002/08/16/big/img_469\n2002/08/15/big/img_502
\n2002/08/20/big/img_575\n2002/07/25/big/img_138\n2003/01/16/big/img_579\n2002/07/19/big/img_352\n2003/01/14/big/img_762\n2003/01/01/big/img_588\n2002/08/02/big/img_981\n2002/08/21/big/img_447\n2002/09/01/big/img_16151\n2003/01/14/big/img_769\n2002/08/23/big/img_461\n2002/08/17/big/img_240\n2002/09/02/big/img_15220\n2002/07/19/big/img_408\n2002/09/02/big/img_15496\n2002/07/29/big/img_758\n2002/08/28/big/img_19392\n2002/08/06/big/img_2723\n2002/08/31/big/img_17752\n2002/08/23/big/img_469\n2002/08/13/big/img_515\n2002/09/02/big/img_15551\n2002/08/03/big/img_462\n2002/07/24/big/img_613\n2002/07/22/big/img_61\n2002/08/08/big/img_171\n2002/08/21/big/img_177\n2003/01/14/big/img_105\n2002/08/02/big/img_1017\n2002/08/22/big/img_106\n2002/07/27/big/img_542\n2002/07/21/big/img_665\n2002/07/23/big/img_595\n2002/08/04/big/img_657\n2002/08/29/big/img_19002\n2003/01/15/big/img_550\n2002/08/14/big/img_662\n2002/07/20/big/img_425\n2002/08/30/big/img_18528\n2002/07/26/big/img_611\n2002/07/22/big/img_849\n2002/08/07/big/img_1655\n2002/08/21/big/img_638\n2003/01/17/big/img_732\n2003/01/01/big/img_496\n2002/08/18/big/img_713\n2002/08/08/big/img_109\n2002/07/27/big/img_1008\n2002/07/20/big/img_559\n2002/08/16/big/img_699\n2002/08/31/big/img_17702\n2002/07/31/big/img_1013\n2002/08/01/big/img_2027\n2002/08/02/big/img_1001\n2002/08/03/big/img_210\n2002/08/01/big/img_2087\n2003/01/14/big/img_199\n2002/07/29/big/img_48\n2002/07/19/big/img_727\n2002/08/09/big/img_249\n2002/08/04/big/img_632\n2002/08/22/big/img_620\n2003/01/01/big/img_457\n2002/08/05/big/img_3223\n2002/07/27/big/img_240\n2002/07/25/big/img_797\n2002/08/13/big/img_430\n2002/07/25/big/img_615\n2002/08/12/big/img_28\n2002/07/30/big/img_220\n2002/07/24/big/img_89\n2002/08/21/big/img_357\n2002/08/09/big/img_590\n2003/01/13/big/img_525\n2002/08/17/big/img_818\n2003/01/02/big/img_7\n2002/07/26/big/img_636\n2003/01/13/big/img_1122\n2002/07/23/big/img_810\n2002/08/20/big/img_888\n2002/07/27/big/img_3\n2002/08/15/big/img_451\n2002/09/02
/big/img_15787\n2002/07/31/big/img_281\n2002/08/05/big/img_3274\n2002/08/07/big/img_1254\n2002/07/31/big/img_27\n2002/08/01/big/img_1366\n2002/07/30/big/img_182\n2002/08/27/big/img_19690\n2002/07/29/big/img_68\n2002/08/23/big/img_754\n2002/07/30/big/img_540\n2002/08/27/big/img_20063\n2002/08/14/big/img_471\n2002/08/02/big/img_615\n2002/07/30/big/img_186\n2002/08/25/big/img_150\n2002/07/27/big/img_626\n2002/07/20/big/img_225\n2003/01/15/big/img_1252\n2002/07/19/big/img_367\n2003/01/15/big/img_582\n2002/08/09/big/img_572\n2002/08/08/big/img_428\n2003/01/15/big/img_639\n2002/08/28/big/img_19245\n2002/07/24/big/img_321\n2002/08/02/big/img_662\n2002/08/08/big/img_1033\n2003/01/17/big/img_867\n2002/07/22/big/img_652\n2003/01/14/big/img_224\n2002/08/18/big/img_49\n2002/07/26/big/img_46\n2002/08/31/big/img_18021\n2002/07/25/big/img_151\n2002/08/23/big/img_540\n2002/08/25/big/img_693\n2002/07/23/big/img_340\n2002/07/28/big/img_117\n2002/09/02/big/img_15768\n2002/08/26/big/img_562\n2002/07/24/big/img_480\n2003/01/15/big/img_341\n2002/08/10/big/img_783\n2002/08/20/big/img_132\n2003/01/14/big/img_370\n2002/07/20/big/img_720\n2002/08/03/big/img_144\n2002/08/20/big/img_538\n2002/08/01/big/img_1745\n2002/08/11/big/img_683\n2002/08/03/big/img_328\n2002/08/10/big/img_793\n2002/08/14/big/img_689\n2002/08/02/big/img_162\n2003/01/17/big/img_411\n2002/07/31/big/img_361\n2002/08/15/big/img_289\n2002/08/08/big/img_254\n2002/08/15/big/img_996\n2002/08/20/big/img_785\n2002/07/24/big/img_511\n2002/08/06/big/img_2614\n2002/08/29/big/img_18733\n2002/08/17/big/img_78\n2002/07/30/big/img_378\n2002/08/31/big/img_17947\n2002/08/26/big/img_88\n2002/07/30/big/img_558\n2002/08/02/big/img_67\n2003/01/14/big/img_325\n2002/07/29/big/img_1357\n2002/07/19/big/img_391\n2002/07/30/big/img_307\n2003/01/13/big/img_219\n2002/07/24/big/img_807\n2002/08/23/big/img_543\n2002/08/29/big/img_18620\n2002/07/22/big/img_769\n2002/08/26/big/img_503\n2002/07/30/big/img_78\n2002/08/14/big/img_1036\n2002/08/09/big/img_58\n
2002/07/24/big/img_616\n2002/08/02/big/img_464\n2002/07/26/big/img_576\n2002/07/22/big/img_273\n2003/01/16/big/img_470\n2002/07/29/big/img_329\n2002/07/30/big/img_1086\n2002/07/31/big/img_353\n2002/09/02/big/img_15275\n2003/01/17/big/img_555\n2002/08/26/big/img_212\n2002/08/01/big/img_1692\n2003/01/15/big/img_600\n2002/07/29/big/img_825\n2002/08/08/big/img_68\n2002/08/10/big/img_719\n2002/07/31/big/img_636\n2002/07/29/big/img_325\n2002/07/21/big/img_515\n2002/07/22/big/img_705\n2003/01/13/big/img_818\n2002/08/09/big/img_486\n2002/08/22/big/img_141\n2002/07/22/big/img_303\n2002/08/09/big/img_393\n2002/07/29/big/img_963\n2002/08/02/big/img_1215\n2002/08/19/big/img_674\n2002/08/12/big/img_690\n2002/08/21/big/img_637\n2002/08/21/big/img_841\n2002/08/24/big/img_71\n2002/07/25/big/img_596\n2002/07/24/big/img_864\n2002/08/18/big/img_293\n2003/01/14/big/img_657\n2002/08/15/big/img_411\n2002/08/16/big/img_348\n2002/08/05/big/img_3157\n2002/07/20/big/img_663\n2003/01/13/big/img_654\n2003/01/16/big/img_433\n2002/08/30/big/img_18200\n2002/08/12/big/img_226\n2003/01/16/big/img_491\n2002/08/08/big/img_666\n2002/07/19/big/img_576\n2003/01/15/big/img_776\n2003/01/16/big/img_899\n2002/07/19/big/img_397\n2002/08/14/big/img_44\n2003/01/15/big/img_762\n2002/08/02/big/img_982\n2002/09/02/big/img_15234\n2002/08/17/big/img_556\n2002/08/21/big/img_410\n2002/08/21/big/img_386\n2002/07/19/big/img_690\n2002/08/05/big/img_3052\n2002/08/14/big/img_219\n2002/08/16/big/img_273\n2003/01/15/big/img_752\n2002/08/08/big/img_184\n2002/07/31/big/img_743\n2002/08/23/big/img_338\n2003/01/14/big/img_1055\n2002/08/05/big/img_3405\n2003/01/15/big/img_17\n2002/08/03/big/img_141\n2002/08/14/big/img_549\n2002/07/27/big/img_1034\n2002/07/31/big/img_932\n2002/08/30/big/img_18487\n2002/09/02/big/img_15814\n2002/08/01/big/img_2086\n2002/09/01/big/img_16535\n2002/07/22/big/img_500\n2003/01/13/big/img_400\n2002/08/25/big/img_607\n2002/08/30/big/img_18384\n2003/01/14/big/img_951\n2002/08/13/big/img_1150\n2002/08/08/b
ig/img_1022\n2002/08/10/big/img_428\n2002/08/28/big/img_19242\n2002/08/05/big/img_3098\n2002/07/23/big/img_400\n2002/08/26/big/img_365\n2002/07/20/big/img_318\n2002/08/13/big/img_740\n2003/01/16/big/img_37\n2002/08/26/big/img_274\n2002/08/02/big/img_205\n2002/08/21/big/img_695\n2002/08/06/big/img_2289\n2002/08/20/big/img_794\n2002/08/18/big/img_438\n2002/08/07/big/img_1380\n2002/08/02/big/img_737\n2002/08/07/big/img_1651\n2002/08/15/big/img_1238\n2002/08/01/big/img_1681\n2002/08/06/big/img_3017\n2002/07/23/big/img_706\n2002/07/31/big/img_392\n2002/08/09/big/img_539\n2002/07/29/big/img_835\n2002/08/26/big/img_723\n2002/08/28/big/img_19235\n2003/01/16/big/img_353\n2002/08/10/big/img_150\n2002/08/29/big/img_19025\n2002/08/21/big/img_310\n2002/08/10/big/img_823\n2002/07/26/big/img_981\n2002/08/11/big/img_288\n2002/08/19/big/img_534\n2002/08/21/big/img_300\n2002/07/31/big/img_49\n2002/07/30/big/img_469\n2002/08/28/big/img_19197\n2002/08/25/big/img_205\n2002/08/10/big/img_390\n2002/08/23/big/img_291\n2002/08/26/big/img_230\n2002/08/18/big/img_76\n2002/07/23/big/img_409\n2002/08/14/big/img_1053\n2003/01/14/big/img_291\n2002/08/10/big/img_503\n2002/08/27/big/img_19928\n2002/08/03/big/img_563\n2002/08/17/big/img_250\n2002/08/06/big/img_2381\n2002/08/17/big/img_948\n2002/08/06/big/img_2710\n2002/07/22/big/img_696\n2002/07/31/big/img_670\n2002/08/12/big/img_594\n2002/07/29/big/img_624\n2003/01/17/big/img_934\n2002/08/03/big/img_584\n2002/08/22/big/img_1003\n2002/08/05/big/img_3396\n2003/01/13/big/img_570\n2002/08/02/big/img_219\n2002/09/02/big/img_15774\n2002/08/16/big/img_818\n2002/08/23/big/img_402\n2003/01/14/big/img_552\n2002/07/29/big/img_71\n2002/08/05/big/img_3592\n2002/08/16/big/img_80\n2002/07/27/big/img_672\n2003/01/13/big/img_470\n2003/01/16/big/img_702\n2002/09/01/big/img_16130\n2002/08/08/big/img_240\n2002/09/01/big/img_16338\n2002/07/26/big/img_312\n2003/01/14/big/img_538\n2002/07/20/big/img_695\n2002/08/30/big/img_18098\n2002/08/25/big/img_259\n2002/08/16/big/im
g_1042\n2002/08/09/big/img_837\n2002/08/31/big/img_17760\n2002/07/31/big/img_14\n2002/08/09/big/img_361\n2003/01/16/big/img_107\n2002/08/14/big/img_124\n2002/07/19/big/img_463\n2003/01/15/big/img_275\n2002/07/25/big/img_1151\n2002/07/29/big/img_1501\n2002/08/27/big/img_19889\n2002/08/29/big/img_18603\n2003/01/17/big/img_601\n2002/08/25/big/img_355\n2002/08/08/big/img_297\n2002/08/20/big/img_290\n2002/07/31/big/img_195\n2003/01/01/big/img_336\n2002/08/18/big/img_369\n2002/07/25/big/img_621\n2002/08/11/big/img_508\n2003/01/14/big/img_458\n2003/01/15/big/img_795\n2002/08/12/big/img_498\n2002/08/01/big/img_1734\n2002/08/02/big/img_246\n2002/08/16/big/img_565\n2002/08/11/big/img_475\n2002/08/22/big/img_408\n2002/07/28/big/img_78\n2002/07/21/big/img_81\n2003/01/14/big/img_697\n2002/08/14/big/img_661\n2002/08/15/big/img_507\n2002/08/19/big/img_55\n2002/07/22/big/img_152\n2003/01/14/big/img_470\n2002/08/03/big/img_379\n2002/08/22/big/img_506\n2003/01/16/big/img_966\n2002/08/18/big/img_698\n2002/08/24/big/img_528\n2002/08/23/big/img_10\n2002/08/01/big/img_1655\n2002/08/22/big/img_953\n2002/07/19/big/img_630\n2002/07/22/big/img_889\n2002/08/16/big/img_351\n2003/01/16/big/img_83\n2002/07/19/big/img_805\n2002/08/14/big/img_704\n2002/07/19/big/img_389\n2002/08/31/big/img_17765\n2002/07/29/big/img_606\n2003/01/17/big/img_939\n2002/09/02/big/img_15081\n2002/08/21/big/img_181\n2002/07/29/big/img_1321\n2002/07/21/big/img_497\n2002/07/20/big/img_539\n2002/08/24/big/img_119\n2002/08/01/big/img_1281\n2002/07/26/big/img_207\n2002/07/26/big/img_432\n2002/07/27/big/img_1006\n2002/08/05/big/img_3087\n2002/08/14/big/img_252\n2002/08/14/big/img_798\n2002/07/24/big/img_538\n2002/09/02/big/img_15507\n2002/08/08/big/img_901\n2003/01/14/big/img_557\n2002/08/07/big/img_1819\n2002/08/04/big/img_470\n2002/08/01/big/img_1504\n2002/08/16/big/img_1070\n2002/08/16/big/img_372\n2002/08/23/big/img_416\n2002/08/30/big/img_18208\n2002/08/01/big/img_2043\n2002/07/22/big/img_385\n2002/08/22/big/img_466\n2002
/08/21/big/img_869\n2002/08/28/big/img_19429\n2002/08/02/big/img_770\n2002/07/23/big/img_433\n2003/01/14/big/img_13\n2002/07/27/big/img_953\n2002/09/02/big/img_15728\n2002/08/01/big/img_1361\n2002/08/29/big/img_18897\n2002/08/26/big/img_534\n2002/08/11/big/img_121\n2002/08/26/big/img_20130\n2002/07/31/big/img_363\n2002/08/13/big/img_978\n2002/07/25/big/img_835\n2002/08/02/big/img_906\n2003/01/14/big/img_548\n2002/07/30/big/img_80\n2002/07/26/big/img_982\n2003/01/16/big/img_99\n2002/08/19/big/img_362\n2002/08/24/big/img_376\n2002/08/07/big/img_1264\n2002/07/27/big/img_938\n2003/01/17/big/img_535\n2002/07/26/big/img_457\n2002/08/08/big/img_848\n2003/01/15/big/img_859\n2003/01/15/big/img_622\n2002/07/30/big/img_403\n2002/07/29/big/img_217\n2002/07/26/big/img_891\n2002/07/24/big/img_70\n2002/08/25/big/img_619\n2002/08/05/big/img_3375\n2002/08/01/big/img_2160\n2002/08/06/big/img_2227\n2003/01/14/big/img_117\n2002/08/14/big/img_227\n2002/08/13/big/img_565\n2002/08/19/big/img_625\n2002/08/03/big/img_812\n2002/07/24/big/img_41\n2002/08/16/big/img_235\n2002/07/29/big/img_759\n2002/07/21/big/img_433\n2002/07/29/big/img_190\n2003/01/16/big/img_435\n2003/01/13/big/img_708\n2002/07/30/big/img_57\n2002/08/22/big/img_162\n2003/01/01/big/img_558\n2003/01/15/big/img_604\n2002/08/16/big/img_935\n2002/08/20/big/img_394\n2002/07/28/big/img_465\n2002/09/02/big/img_15534\n2002/08/16/big/img_87\n2002/07/22/big/img_469\n2002/08/12/big/img_245\n2003/01/13/big/img_236\n2002/08/06/big/img_2736\n2002/08/03/big/img_348\n2003/01/14/big/img_218\n2002/07/26/big/img_232\n2003/01/15/big/img_244\n2002/07/25/big/img_1121\n2002/08/01/big/img_1484\n2002/07/26/big/img_541\n2002/08/07/big/img_1244\n2002/07/31/big/img_3\n2002/08/30/big/img_18437\n2002/08/29/big/img_19094\n2002/08/01/big/img_1355\n2002/08/19/big/img_338\n2002/07/19/big/img_255\n2002/07/21/big/img_76\n2002/08/25/big/img_199\n2002/08/12/big/img_740\n2002/07/30/big/img_852\n2002/08/15/big/img_599\n2002/08/23/big/img_254\n2002/08/19/big/img_125
\n2002/07/24/big/img_2\n2002/08/04/big/img_145\n2002/08/05/big/img_3137\n2002/07/28/big/img_463\n2003/01/14/big/img_801\n2002/07/23/big/img_366\n2002/08/26/big/img_600\n2002/08/26/big/img_649\n2002/09/02/big/img_15849\n2002/07/26/big/img_248\n2003/01/13/big/img_200\n2002/08/07/big/img_1794\n2002/08/31/big/img_17270\n2002/08/23/big/img_608\n2003/01/13/big/img_837\n2002/08/23/big/img_581\n2002/08/20/big/img_754\n2002/08/18/big/img_183\n2002/08/20/big/img_328\n2002/07/22/big/img_494\n2002/07/29/big/img_399\n2002/08/28/big/img_19284\n2002/08/08/big/img_566\n2002/07/25/big/img_376\n2002/07/23/big/img_138\n2002/07/25/big/img_435\n2002/08/17/big/img_685\n2002/07/19/big/img_90\n2002/07/20/big/img_716\n2002/08/31/big/img_17458\n2002/08/26/big/img_461\n2002/07/25/big/img_355\n2002/08/06/big/img_2152\n2002/07/27/big/img_932\n2002/07/23/big/img_232\n2002/08/08/big/img_1020\n2002/07/31/big/img_366\n2002/08/06/big/img_2667\n2002/08/21/big/img_465\n2002/08/15/big/img_305\n2002/08/02/big/img_247\n2002/07/28/big/img_46\n2002/08/27/big/img_19922\n2002/08/23/big/img_643\n2003/01/13/big/img_624\n2002/08/23/big/img_625\n2002/08/05/big/img_3787\n2003/01/13/big/img_627\n2002/09/01/big/img_16381\n2002/08/05/big/img_3668\n2002/07/21/big/img_535\n2002/08/27/big/img_19680\n2002/07/22/big/img_413\n2002/07/29/big/img_481\n2003/01/15/big/img_496\n2002/07/23/big/img_701\n2002/08/29/big/img_18670\n2002/07/28/big/img_319\n2003/01/14/big/img_517\n2002/07/26/big/img_256\n2003/01/16/big/img_593\n2002/07/30/big/img_956\n2002/07/30/big/img_667\n2002/07/25/big/img_100\n2002/08/11/big/img_570\n2002/07/26/big/img_745\n2002/08/04/big/img_834\n2002/08/25/big/img_521\n2002/08/01/big/img_2148\n2002/09/02/big/img_15183\n2002/08/22/big/img_514\n2002/08/23/big/img_477\n2002/07/23/big/img_336\n2002/07/26/big/img_481\n2002/08/20/big/img_409\n2002/07/23/big/img_918\n2002/08/09/big/img_474\n2002/08/02/big/img_929\n2002/08/31/big/img_17932\n2002/08/19/big/img_161\n2002/08/09/big/img_667\n2002/07/31/big/img_805\n2002/0
9/02/big/img_15678\n2002/08/31/big/img_17509\n2002/08/29/big/img_18998\n2002/07/23/big/img_301\n2002/08/07/big/img_1612\n2002/08/06/big/img_2472\n2002/07/23/big/img_466\n2002/08/27/big/img_19634\n2003/01/16/big/img_16\n2002/08/14/big/img_193\n2002/08/21/big/img_340\n2002/08/27/big/img_19799\n2002/08/01/big/img_1345\n2002/08/07/big/img_1448\n2002/08/11/big/img_324\n2003/01/16/big/img_754\n2002/08/13/big/img_418\n2003/01/16/big/img_544\n2002/08/19/big/img_135\n2002/08/10/big/img_455\n2002/08/10/big/img_693\n2002/08/31/big/img_17967\n2002/08/28/big/img_19229\n2002/08/04/big/img_811\n2002/09/01/big/img_16225\n2003/01/16/big/img_428\n2002/09/02/big/img_15295\n2002/07/26/big/img_108\n2002/07/21/big/img_477\n2002/08/07/big/img_1354\n2002/08/23/big/img_246\n2002/08/16/big/img_652\n2002/07/27/big/img_553\n2002/07/31/big/img_346\n2002/08/04/big/img_537\n2002/08/08/big/img_498\n2002/08/29/big/img_18956\n2003/01/13/big/img_922\n2002/08/31/big/img_17425\n2002/07/26/big/img_438\n2002/08/19/big/img_185\n2003/01/16/big/img_33\n2002/08/10/big/img_252\n2002/07/29/big/img_598\n2002/08/27/big/img_19820\n2002/08/06/big/img_2664\n2002/08/20/big/img_705\n2003/01/14/big/img_816\n2002/08/03/big/img_552\n2002/07/25/big/img_561\n2002/07/25/big/img_934\n2002/08/01/big/img_1893\n2003/01/14/big/img_746\n2003/01/16/big/img_519\n2002/08/03/big/img_681\n2002/07/24/big/img_808\n2002/08/14/big/img_803\n2002/08/25/big/img_155\n2002/07/30/big/img_1107\n2002/08/29/big/img_18882\n2003/01/15/big/img_598\n2002/08/19/big/img_122\n2002/07/30/big/img_428\n2002/07/24/big/img_684\n2002/08/22/big/img_192\n2002/08/22/big/img_543\n2002/08/07/big/img_1318\n2002/08/18/big/img_25\n2002/07/26/big/img_583\n2002/07/20/big/img_464\n2002/08/19/big/img_664\n2002/08/24/big/img_861\n2002/09/01/big/img_16136\n2002/08/22/big/img_400\n2002/08/12/big/img_445\n2003/01/14/big/img_174\n2002/08/27/big/img_19677\n2002/08/31/big/img_17214\n2002/08/30/big/img_18175\n2003/01/17/big/img_402\n2002/08/06/big/img_2396\n2002/08/18/big/img_44
8\n2002/08/21/big/img_165\n2002/08/31/big/img_17609\n2003/01/01/big/img_151\n2002/08/26/big/img_372\n2002/09/02/big/img_15994\n2002/07/26/big/img_660\n2002/09/02/big/img_15197\n2002/07/29/big/img_258\n2002/08/30/big/img_18525\n2003/01/13/big/img_368\n2002/07/29/big/img_1538\n2002/07/21/big/img_787\n2002/08/18/big/img_152\n2002/08/06/big/img_2379\n2003/01/17/big/img_864\n2002/08/27/big/img_19998\n2002/08/01/big/img_1634\n2002/07/25/big/img_414\n2002/08/22/big/img_627\n2002/08/07/big/img_1669\n2002/08/16/big/img_1052\n2002/08/31/big/img_17796\n2002/08/18/big/img_199\n2002/09/02/big/img_15147\n2002/08/09/big/img_460\n2002/08/14/big/img_581\n2002/08/30/big/img_18286\n2002/07/26/big/img_337\n2002/08/18/big/img_589\n2003/01/14/big/img_866\n2002/07/20/big/img_624\n2002/08/01/big/img_1801\n2002/07/24/big/img_683\n2002/08/09/big/img_725\n2003/01/14/big/img_34\n2002/07/30/big/img_144\n2002/07/30/big/img_706\n2002/08/08/big/img_394\n2002/08/19/big/img_619\n2002/08/06/big/img_2703\n2002/08/29/big/img_19034\n2002/07/24/big/img_67\n2002/08/27/big/img_19841\n2002/08/19/big/img_427\n2003/01/14/big/img_333\n2002/09/01/big/img_16406\n2002/07/19/big/img_882\n2002/08/17/big/img_238\n2003/01/14/big/img_739\n2002/07/22/big/img_151\n2002/08/21/big/img_743\n2002/07/25/big/img_1048\n2002/07/30/big/img_395\n2003/01/13/big/img_584\n2002/08/13/big/img_742\n2002/08/13/big/img_1168\n2003/01/14/big/img_147\n2002/07/26/big/img_803\n2002/08/05/big/img_3298\n2002/08/07/big/img_1451\n2002/08/16/big/img_424\n2002/07/29/big/img_1069\n2002/09/01/big/img_16735\n2002/07/21/big/img_637\n2003/01/14/big/img_585\n2002/08/02/big/img_358\n2003/01/13/big/img_358\n2002/08/14/big/img_198\n2002/08/17/big/img_935\n2002/08/04/big/img_42\n2002/08/30/big/img_18245\n2002/07/25/big/img_158\n2002/08/22/big/img_744\n2002/08/06/big/img_2291\n2002/08/05/big/img_3044\n2002/07/30/big/img_272\n2002/08/23/big/img_641\n2002/07/24/big/img_797\n2002/07/30/big/img_392\n2003/01/14/big/img_447\n2002/07/31/big/img_898\n2002/08/06/big/i
mg_2812\n2002/08/13/big/img_564\n2002/07/22/big/img_43\n2002/07/26/big/img_634\n2002/07/19/big/img_843\n2002/08/26/big/img_58\n2002/07/21/big/img_375\n2002/08/25/big/img_729\n2002/07/19/big/img_561\n2003/01/15/big/img_884\n2002/07/25/big/img_891\n2002/08/09/big/img_558\n2002/08/26/big/img_587\n2002/08/13/big/img_1146\n2002/09/02/big/img_15153\n2002/07/26/big/img_316\n2002/08/01/big/img_1940\n2002/08/26/big/img_90\n2003/01/13/big/img_347\n2002/07/25/big/img_520\n2002/08/29/big/img_18718\n2002/08/28/big/img_19219\n2002/08/13/big/img_375\n2002/07/20/big/img_719\n2002/08/31/big/img_17431\n2002/07/28/big/img_192\n2002/08/26/big/img_259\n2002/08/18/big/img_484\n2002/07/29/big/img_580\n2002/07/26/big/img_84\n2002/08/02/big/img_302\n2002/08/31/big/img_17007\n2003/01/15/big/img_543\n2002/09/01/big/img_16488\n2002/08/22/big/img_798\n2002/07/30/big/img_383\n2002/08/04/big/img_668\n2002/08/13/big/img_156\n2002/08/07/big/img_1353\n2002/07/25/big/img_281\n2003/01/14/big/img_587\n2003/01/15/big/img_524\n2002/08/19/big/img_726\n2002/08/21/big/img_709\n2002/08/26/big/img_465\n2002/07/31/big/img_658\n2002/08/28/big/img_19148\n2002/07/23/big/img_423\n2002/08/16/big/img_758\n2002/08/22/big/img_523\n2002/08/16/big/img_591\n2002/08/23/big/img_845\n2002/07/26/big/img_678\n2002/08/09/big/img_806\n2002/08/06/big/img_2369\n2002/07/29/big/img_457\n2002/07/19/big/img_278\n2002/08/30/big/img_18107\n2002/07/26/big/img_444\n2002/08/20/big/img_278\n2002/08/26/big/img_92\n2002/08/26/big/img_257\n2002/07/25/big/img_266\n2002/08/05/big/img_3829\n2002/07/26/big/img_757\n2002/07/29/big/img_1536\n2002/08/09/big/img_472\n2003/01/17/big/img_480\n2002/08/28/big/img_19355\n2002/07/26/big/img_97\n2002/08/06/big/img_2503\n2002/07/19/big/img_254\n2002/08/01/big/img_1470\n2002/08/21/big/img_42\n2002/08/20/big/img_217\n2002/08/06/big/img_2459\n2002/07/19/big/img_552\n2002/08/13/big/img_717\n2002/08/12/big/img_586\n2002/08/20/big/img_411\n2003/01/13/big/img_768\n2002/08/07/big/img_1747\n2002/08/15/big/img_385\n20
02/08/01/big/img_1648\n2002/08/15/big/img_311\n2002/08/21/big/img_95\n2002/08/09/big/img_108\n2002/08/21/big/img_398\n2002/08/17/big/img_340\n2002/08/14/big/img_474\n2002/08/13/big/img_294\n2002/08/24/big/img_840\n2002/08/09/big/img_808\n2002/08/23/big/img_491\n2002/07/28/big/img_33\n2003/01/13/big/img_664\n2002/08/02/big/img_261\n2002/08/09/big/img_591\n2002/07/26/big/img_309\n2003/01/14/big/img_372\n2002/08/19/big/img_581\n2002/08/19/big/img_168\n2002/08/26/big/img_422\n2002/07/24/big/img_106\n2002/08/01/big/img_1936\n2002/08/05/big/img_3764\n2002/08/21/big/img_266\n2002/08/31/big/img_17968\n2002/08/01/big/img_1941\n2002/08/15/big/img_550\n2002/08/14/big/img_13\n2002/07/30/big/img_171\n2003/01/13/big/img_490\n2002/07/25/big/img_427\n2002/07/19/big/img_770\n2002/08/12/big/img_759\n2003/01/15/big/img_1360\n2002/08/05/big/img_3692\n2003/01/16/big/img_30\n2002/07/25/big/img_1026\n2002/07/22/big/img_288\n2002/08/29/big/img_18801\n2002/07/24/big/img_793\n2002/08/13/big/img_178\n2002/08/06/big/img_2322\n2003/01/14/big/img_560\n2002/08/18/big/img_408\n2003/01/16/big/img_915\n2003/01/16/big/img_679\n2002/08/07/big/img_1552\n2002/08/29/big/img_19050\n2002/08/01/big/img_2172\n2002/07/31/big/img_30\n2002/07/30/big/img_1019\n2002/07/30/big/img_587\n2003/01/13/big/img_773\n2002/07/30/big/img_410\n2002/07/28/big/img_65\n2002/08/05/big/img_3138\n2002/07/23/big/img_541\n2002/08/22/big/img_963\n2002/07/27/big/img_657\n2002/07/30/big/img_1051\n2003/01/16/big/img_150\n2002/07/31/big/img_519\n2002/08/01/big/img_1961\n2002/08/05/big/img_3752\n2002/07/23/big/img_631\n2003/01/14/big/img_237\n2002/07/28/big/img_21\n2002/07/22/big/img_813\n2002/08/05/big/img_3563\n2003/01/17/big/img_620\n2002/07/19/big/img_523\n2002/07/30/big/img_904\n2002/08/29/big/img_18642\n2002/08/11/big/img_492\n2002/08/01/big/img_2130\n2002/07/25/big/img_618\n2002/08/17/big/img_305\n2003/01/16/big/img_520\n2002/07/26/big/img_495\n2002/08/17/big/img_164\n2002/08/03/big/img_440\n2002/07/24/big/img_441\n2002/08/06/big/i
mg_2146\n2002/08/11/big/img_558\n2002/08/02/big/img_545\n2002/08/31/big/img_18090\n2003/01/01/big/img_136\n2002/07/25/big/img_1099\n2003/01/13/big/img_728\n2003/01/16/big/img_197\n2002/07/26/big/img_651\n2002/08/11/big/img_676\n2003/01/15/big/img_10\n2002/08/21/big/img_250\n2002/08/14/big/img_325\n2002/08/04/big/img_390\n2002/07/24/big/img_554\n2003/01/16/big/img_333\n2002/07/31/big/img_922\n2002/09/02/big/img_15586\n2003/01/16/big/img_184\n2002/07/22/big/img_766\n2002/07/21/big/img_608\n2002/08/07/big/img_1578\n2002/08/17/big/img_961\n2002/07/27/big/img_324\n2002/08/05/big/img_3765\n2002/08/23/big/img_462\n2003/01/16/big/img_382\n2002/08/27/big/img_19838\n2002/08/01/big/img_1505\n2002/08/21/big/img_662\n2002/08/14/big/img_605\n2002/08/19/big/img_816\n2002/07/29/big/img_136\n2002/08/20/big/img_719\n2002/08/06/big/img_2826\n2002/08/10/big/img_630\n2003/01/17/big/img_973\n2002/08/14/big/img_116\n2002/08/02/big/img_666\n2002/08/21/big/img_710\n2002/08/05/big/img_55\n2002/07/31/big/img_229\n2002/08/01/big/img_1549\n2002/07/23/big/img_432\n2002/07/21/big/img_430\n2002/08/21/big/img_549\n2002/08/08/big/img_985\n2002/07/20/big/img_610\n2002/07/23/big/img_978\n2002/08/23/big/img_219\n2002/07/25/big/img_175\n2003/01/15/big/img_230\n2002/08/23/big/img_385\n2002/07/31/big/img_879\n2002/08/12/big/img_495\n2002/08/22/big/img_499\n2002/08/30/big/img_18322\n2002/08/15/big/img_795\n2002/08/13/big/img_835\n2003/01/17/big/img_930\n2002/07/30/big/img_873\n2002/08/11/big/img_257\n2002/07/31/big/img_593\n2002/08/21/big/img_916\n2003/01/13/big/img_814\n2002/07/25/big/img_722\n2002/08/16/big/img_379\n2002/07/31/big/img_497\n2002/07/22/big/img_602\n2002/08/21/big/img_642\n2002/08/21/big/img_614\n2002/08/23/big/img_482\n2002/07/29/big/img_603\n2002/08/13/big/img_705\n2002/07/23/big/img_833\n2003/01/14/big/img_511\n2002/07/24/big/img_376\n2002/08/17/big/img_1030\n2002/08/05/big/img_3576\n2002/08/16/big/img_540\n2002/07/22/big/img_630\n2002/08/10/big/img_180\n2002/08/14/big/img_905\n2002/08/2
9/big/img_18777\n2002/08/22/big/img_693\n2003/01/16/big/img_933\n2002/08/20/big/img_555\n2002/08/15/big/img_549\n2003/01/14/big/img_830\n2003/01/16/big/img_64\n2002/08/27/big/img_19670\n2002/08/22/big/img_729\n2002/07/27/big/img_981\n2002/08/09/big/img_458\n2003/01/17/big/img_884\n2002/07/25/big/img_639\n2002/08/31/big/img_18008\n2002/08/22/big/img_249\n2002/08/17/big/img_971\n2002/08/04/big/img_308\n2002/07/28/big/img_362\n2002/08/12/big/img_142\n2002/08/26/big/img_61\n2002/08/14/big/img_422\n2002/07/19/big/img_607\n2003/01/15/big/img_717\n2002/08/01/big/img_1475\n2002/08/29/big/img_19061\n2003/01/01/big/img_346\n2002/07/20/big/img_315\n2003/01/15/big/img_756\n2002/08/15/big/img_879\n2002/08/08/big/img_615\n2003/01/13/big/img_431\n2002/08/05/big/img_3233\n2002/08/24/big/img_526\n2003/01/13/big/img_717\n2002/09/01/big/img_16408\n2002/07/22/big/img_217\n2002/07/31/big/img_960\n2002/08/21/big/img_610\n2002/08/05/big/img_3753\n2002/08/03/big/img_151\n2002/08/21/big/img_267\n2002/08/01/big/img_2175\n2002/08/04/big/img_556\n2002/08/21/big/img_527\n2002/09/02/big/img_15800\n2002/07/27/big/img_156\n2002/07/20/big/img_590\n2002/08/15/big/img_700\n2002/08/08/big/img_444\n2002/07/25/big/img_94\n2002/07/24/big/img_778\n2002/08/14/big/img_694\n2002/07/20/big/img_666\n2002/08/02/big/img_200\n2002/08/02/big/img_578\n2003/01/17/big/img_332\n2002/09/01/big/img_16352\n2002/08/27/big/img_19668\n2002/07/23/big/img_823\n2002/08/13/big/img_431\n2003/01/16/big/img_463\n2002/08/27/big/img_19711\n2002/08/23/big/img_154\n2002/07/31/big/img_360\n2002/08/23/big/img_555\n2002/08/10/big/img_561\n2003/01/14/big/img_550\n2002/08/07/big/img_1370\n2002/07/30/big/img_1184\n2002/08/01/big/img_1445\n2002/08/23/big/img_22\n2002/07/30/big/img_606\n2003/01/17/big/img_271\n2002/08/31/big/img_17316\n2002/08/16/big/img_973\n2002/07/26/big/img_77\n2002/07/20/big/img_788\n2002/08/06/big/img_2426\n2002/08/07/big/img_1498\n2002/08/16/big/img_358\n2002/08/06/big/img_2851\n2002/08/12/big/img_359\n2002/08/01/big/i
mg_1521\n2002/08/02/big/img_709\n2002/08/20/big/img_935\n2002/08/12/big/img_188\n2002/08/24/big/img_411\n2002/08/22/big/img_680\n2002/08/06/big/img_2480\n2002/07/20/big/img_627\n2002/07/30/big/img_214\n2002/07/25/big/img_354\n2002/08/02/big/img_636\n2003/01/15/big/img_661\n2002/08/07/big/img_1327\n2002/08/01/big/img_2108\n2002/08/31/big/img_17919\n2002/08/29/big/img_18768\n2002/08/05/big/img_3840\n2002/07/26/big/img_242\n2003/01/14/big/img_451\n2002/08/20/big/img_923\n2002/08/27/big/img_19908\n2002/08/16/big/img_282\n2002/08/19/big/img_440\n2003/01/01/big/img_230\n2002/08/08/big/img_212\n2002/07/20/big/img_443\n2002/08/25/big/img_635\n2003/01/13/big/img_1169\n2002/07/26/big/img_998\n2002/08/15/big/img_995\n2002/08/06/big/img_3002\n2002/07/29/big/img_460\n2003/01/14/big/img_925\n2002/07/23/big/img_539\n2002/08/16/big/img_694\n2003/01/13/big/img_459\n2002/07/23/big/img_249\n2002/08/20/big/img_539\n2002/08/04/big/img_186\n2002/08/26/big/img_264\n2002/07/22/big/img_704\n2002/08/25/big/img_277\n2002/08/22/big/img_988\n2002/07/29/big/img_504\n2002/08/05/big/img_3600\n2002/08/30/big/img_18380\n2003/01/14/big/img_937\n2002/08/21/big/img_254\n2002/08/10/big/img_130\n2002/08/20/big/img_339\n2003/01/14/big/img_428\n2002/08/20/big/img_889\n2002/08/31/big/img_17637\n2002/07/26/big/img_644\n2002/09/01/big/img_16776\n2002/08/06/big/img_2239\n2002/08/06/big/img_2646\n2003/01/13/big/img_491\n2002/08/10/big/img_579\n2002/08/21/big/img_713\n2002/08/22/big/img_482\n2002/07/22/big/img_167\n2002/07/24/big/img_539\n2002/08/14/big/img_721\n2002/07/25/big/img_389\n2002/09/01/big/img_16591\n2002/08/13/big/img_543\n2003/01/14/big/img_432\n2002/08/09/big/img_287\n2002/07/26/big/img_126\n2002/08/23/big/img_412\n2002/08/15/big/img_1034\n2002/08/28/big/img_19485\n2002/07/31/big/img_236\n2002/07/30/big/img_523\n2002/07/19/big/img_141\n2003/01/17/big/img_957\n2002/08/04/big/img_81\n2002/07/25/big/img_206\n2002/08/15/big/img_716\n2002/08/13/big/img_403\n2002/08/15/big/img_685\n2002/07/26/big/img_884
\n2002/07/19/big/img_499\n2002/07/23/big/img_772\n2002/07/27/big/img_752\n2003/01/14/big/img_493\n2002/08/25/big/img_664\n2002/07/31/big/img_334\n2002/08/26/big/img_678\n2002/09/01/big/img_16541\n2003/01/14/big/img_347\n2002/07/23/big/img_187\n2002/07/30/big/img_1163\n2002/08/05/big/img_35\n2002/08/22/big/img_944\n2002/08/07/big/img_1239\n2002/07/29/big/img_1215\n2002/08/03/big/img_312\n2002/08/05/big/img_3523\n2002/07/29/big/img_218\n2002/08/13/big/img_672\n2002/08/16/big/img_205\n2002/08/17/big/img_594\n2002/07/29/big/img_1411\n2002/07/30/big/img_942\n2003/01/16/big/img_312\n2002/08/08/big/img_312\n2002/07/25/big/img_15\n2002/08/09/big/img_839\n2002/08/01/big/img_2069\n2002/08/31/big/img_17512\n2002/08/01/big/img_3\n2002/07/31/big/img_320\n2003/01/15/big/img_1265\n2002/08/14/big/img_563\n2002/07/31/big/img_167\n2002/08/20/big/img_374\n2002/08/13/big/img_406\n2002/08/08/big/img_625\n2002/08/02/big/img_314\n2002/08/27/big/img_19964\n2002/09/01/big/img_16670\n2002/07/31/big/img_599\n2002/08/29/big/img_18906\n2002/07/24/big/img_373\n2002/07/26/big/img_513\n2002/09/02/big/img_15497\n2002/08/19/big/img_117\n2003/01/01/big/img_158\n2002/08/24/big/img_178\n2003/01/13/big/img_935\n2002/08/13/big/img_609\n2002/08/30/big/img_18341\n2002/08/25/big/img_674\n2003/01/13/big/img_209\n2002/08/13/big/img_258\n2002/08/05/big/img_3543\n2002/08/07/big/img_1970\n2002/08/06/big/img_3004\n2003/01/17/big/img_487\n2002/08/24/big/img_873\n2002/08/29/big/img_18730\n2002/08/09/big/img_375\n2003/01/16/big/img_751\n2002/08/02/big/img_603\n2002/08/19/big/img_325\n2002/09/01/big/img_16420\n2002/08/05/big/img_3633\n2002/08/21/big/img_516\n2002/07/19/big/img_501\n2002/07/26/big/img_688\n2002/07/24/big/img_256\n2002/07/25/big/img_438\n2002/07/31/big/img_1017\n2002/08/22/big/img_512\n2002/07/21/big/img_543\n2002/08/08/big/img_223\n2002/08/19/big/img_189\n2002/08/12/big/img_630\n2002/07/30/big/img_958\n2002/07/28/big/img_208\n2002/08/31/big/img_17691\n2002/07/22/big/img_542\n2002/07/19/big/img_741\n20
02/07/19/big/img_158\n2002/08/15/big/img_399\n2002/08/01/big/img_2159\n2002/08/14/big/img_455\n2002/08/17/big/img_1011\n2002/08/26/big/img_744\n2002/08/12/big/img_624\n2003/01/17/big/img_821\n2002/08/16/big/img_980\n2002/07/28/big/img_281\n2002/07/25/big/img_171\n2002/08/03/big/img_116\n2002/07/22/big/img_467\n2002/07/31/big/img_750\n2002/07/26/big/img_435\n2002/07/19/big/img_822\n2002/08/13/big/img_626\n2002/08/11/big/img_344\n2002/08/02/big/img_473\n2002/09/01/big/img_16817\n2002/08/01/big/img_1275\n2002/08/28/big/img_19270\n2002/07/23/big/img_607\n2002/08/09/big/img_316\n2002/07/29/big/img_626\n2002/07/24/big/img_824\n2002/07/22/big/img_342\n2002/08/08/big/img_794\n2002/08/07/big/img_1209\n2002/07/19/big/img_18\n2002/08/25/big/img_634\n2002/07/24/big/img_730\n2003/01/17/big/img_356\n2002/07/23/big/img_305\n2002/07/30/big/img_453\n2003/01/13/big/img_972\n2002/08/06/big/img_2610\n2002/08/29/big/img_18920\n2002/07/31/big/img_123\n2002/07/26/big/img_979\n2002/08/24/big/img_635\n2002/08/05/big/img_3704\n2002/08/07/big/img_1358\n2002/07/22/big/img_306\n2002/08/13/big/img_619\n2002/08/02/big/img_366\n"
  },
  {
    "path": "third_part/GPEN/face_detect/data/__init__.py",
    "content": "from .wider_face import WiderFaceDetection, detection_collate\nfrom .data_augment import *\nfrom .config import *\n"
  },
  {
    "path": "third_part/GPEN/face_detect/data/config.py",
    "content": "# config.py\n\ncfg_mnet = {\n    'name': 'mobilenet0.25',\n    'min_sizes': [[16, 32], [64, 128], [256, 512]],\n    'steps': [8, 16, 32],\n    'variance': [0.1, 0.2],\n    'clip': False,\n    'loc_weight': 2.0,\n    'gpu_train': True,\n    'batch_size': 32,\n    'ngpu': 1,\n    'epoch': 250,\n    'decay1': 190,\n    'decay2': 220,\n    'image_size': 640,\n    'pretrain': False,\n    'return_layers': {'stage1': 1, 'stage2': 2, 'stage3': 3},\n    'in_channel': 32,\n    'out_channel': 64\n}\n\ncfg_re50 = {\n    'name': 'Resnet50',\n    'min_sizes': [[16, 32], [64, 128], [256, 512]],\n    'steps': [8, 16, 32],\n    'variance': [0.1, 0.2],\n    'clip': False,\n    'loc_weight': 2.0,\n    'gpu_train': True,\n    'batch_size': 24,\n    'ngpu': 4,\n    'epoch': 100,\n    'decay1': 70,\n    'decay2': 90,\n    'image_size': 840,\n    'pretrain': False,\n    'return_layers': {'layer2': 1, 'layer3': 2, 'layer4': 3},\n    'in_channel': 256,\n    'out_channel': 256\n}\n\n"
  },
  {
    "path": "third_part/GPEN/face_detect/data/data_augment.py",
    "content": "import cv2\nimport numpy as np\nimport random\nfrom face_detect.utils.box_utils import matrix_iof\n\n\ndef _crop(image, boxes, labels, landm, img_dim):\n    height, width, _ = image.shape\n    pad_image_flag = True\n\n    for _ in range(250):\n        \"\"\"\n        if random.uniform(0, 1) <= 0.2:\n            scale = 1.0\n        else:\n            scale = random.uniform(0.3, 1.0)\n        \"\"\"\n        PRE_SCALES = [0.3, 0.45, 0.6, 0.8, 1.0]\n        scale = random.choice(PRE_SCALES)\n        short_side = min(width, height)\n        w = int(scale * short_side)\n        h = w\n\n        if width == w:\n            l = 0\n        else:\n            l = random.randrange(width - w)\n        if height == h:\n            t = 0\n        else:\n            t = random.randrange(height - h)\n        roi = np.array((l, t, l + w, t + h))\n\n        value = matrix_iof(boxes, roi[np.newaxis])\n        flag = (value >= 1)\n        if not flag.any():\n            continue\n\n        centers = (boxes[:, :2] + boxes[:, 2:]) / 2\n        mask_a = np.logical_and(roi[:2] < centers, centers < roi[2:]).all(axis=1)\n        boxes_t = boxes[mask_a].copy()\n        labels_t = labels[mask_a].copy()\n        landms_t = landm[mask_a].copy()\n        landms_t = landms_t.reshape([-1, 5, 2])\n\n        if boxes_t.shape[0] == 0:\n            continue\n\n        image_t = image[roi[1]:roi[3], roi[0]:roi[2]]\n\n        boxes_t[:, :2] = np.maximum(boxes_t[:, :2], roi[:2])\n        boxes_t[:, :2] -= roi[:2]\n        boxes_t[:, 2:] = np.minimum(boxes_t[:, 2:], roi[2:])\n        boxes_t[:, 2:] -= roi[:2]\n\n        # landm\n        landms_t[:, :, :2] = landms_t[:, :, :2] - roi[:2]\n        landms_t[:, :, :2] = np.maximum(landms_t[:, :, :2], np.array([0, 0]))\n        landms_t[:, :, :2] = np.minimum(landms_t[:, :, :2], roi[2:] - roi[:2])\n        landms_t = landms_t.reshape([-1, 10])\n\n\n\t# make sure that the cropped image contains at least one face > 16 pixel at training image 
scale\n        b_w_t = (boxes_t[:, 2] - boxes_t[:, 0] + 1) / w * img_dim\n        b_h_t = (boxes_t[:, 3] - boxes_t[:, 1] + 1) / h * img_dim\n        mask_b = np.minimum(b_w_t, b_h_t) > 0.0\n        boxes_t = boxes_t[mask_b]\n        labels_t = labels_t[mask_b]\n        landms_t = landms_t[mask_b]\n\n        if boxes_t.shape[0] == 0:\n            continue\n\n        pad_image_flag = False\n\n        return image_t, boxes_t, labels_t, landms_t, pad_image_flag\n    return image, boxes, labels, landm, pad_image_flag\n\n\ndef _distort(image):\n\n    def _convert(image, alpha=1, beta=0):\n        tmp = image.astype(float) * alpha + beta\n        tmp[tmp < 0] = 0\n        tmp[tmp > 255] = 255\n        image[:] = tmp\n\n    image = image.copy()\n\n    if random.randrange(2):\n\n        #brightness distortion\n        if random.randrange(2):\n            _convert(image, beta=random.uniform(-32, 32))\n\n        #contrast distortion\n        if random.randrange(2):\n            _convert(image, alpha=random.uniform(0.5, 1.5))\n\n        image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)\n\n        #saturation distortion\n        if random.randrange(2):\n            _convert(image[:, :, 1], alpha=random.uniform(0.5, 1.5))\n\n        #hue distortion\n        if random.randrange(2):\n            tmp = image[:, :, 0].astype(int) + random.randint(-18, 18)\n            tmp %= 180\n            image[:, :, 0] = tmp\n\n        image = cv2.cvtColor(image, cv2.COLOR_HSV2BGR)\n\n    else:\n\n        #brightness distortion\n        if random.randrange(2):\n            _convert(image, beta=random.uniform(-32, 32))\n\n        image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)\n\n        #saturation distortion\n        if random.randrange(2):\n            _convert(image[:, :, 1], alpha=random.uniform(0.5, 1.5))\n\n        #hue distortion\n        if random.randrange(2):\n            tmp = image[:, :, 0].astype(int) + random.randint(-18, 18)\n            tmp %= 180\n            image[:, :, 0] = 
tmp\n\n        image = cv2.cvtColor(image, cv2.COLOR_HSV2BGR)\n\n        #contrast distortion\n        if random.randrange(2):\n            _convert(image, alpha=random.uniform(0.5, 1.5))\n\n    return image\n\n\ndef _expand(image, boxes, fill, p):\n    if random.randrange(2):\n        return image, boxes\n\n    height, width, depth = image.shape\n\n    scale = random.uniform(1, p)\n    w = int(scale * width)\n    h = int(scale * height)\n\n    left = random.randint(0, w - width)\n    top = random.randint(0, h - height)\n\n    boxes_t = boxes.copy()\n    boxes_t[:, :2] += (left, top)\n    boxes_t[:, 2:] += (left, top)\n    expand_image = np.empty(\n        (h, w, depth),\n        dtype=image.dtype)\n    expand_image[:, :] = fill\n    expand_image[top:top + height, left:left + width] = image\n    image = expand_image\n\n    return image, boxes_t\n\n\ndef _mirror(image, boxes, landms):\n    _, width, _ = image.shape\n    if random.randrange(2):\n        image = image[:, ::-1]\n        boxes = boxes.copy()\n        boxes[:, 0::2] = width - boxes[:, 2::-2]\n\n        # landm\n        landms = landms.copy()\n        landms = landms.reshape([-1, 5, 2])\n        landms[:, :, 0] = width - landms[:, :, 0]\n        tmp = landms[:, 1, :].copy()\n        landms[:, 1, :] = landms[:, 0, :]\n        landms[:, 0, :] = tmp\n        tmp1 = landms[:, 4, :].copy()\n        landms[:, 4, :] = landms[:, 3, :]\n        landms[:, 3, :] = tmp1\n        landms = landms.reshape([-1, 10])\n\n    return image, boxes, landms\n\n\ndef _pad_to_square(image, rgb_mean, pad_image_flag):\n    if not pad_image_flag:\n        return image\n    height, width, _ = image.shape\n    long_side = max(width, height)\n    image_t = np.empty((long_side, long_side, 3), dtype=image.dtype)\n    image_t[:, :] = rgb_mean\n    image_t[0:0 + height, 0:0 + width] = image\n    return image_t\n\n\ndef _resize_subtract_mean(image, insize, rgb_mean):\n    interp_methods = [cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, 
cv2.INTER_NEAREST, cv2.INTER_LANCZOS4]\n    interp_method = interp_methods[random.randrange(5)]\n    image = cv2.resize(image, (insize, insize), interpolation=interp_method)\n    image = image.astype(np.float32)\n    image -= rgb_mean\n    return image.transpose(2, 0, 1)\n\n\nclass preproc(object):\n\n    def __init__(self, img_dim, rgb_means):\n        self.img_dim = img_dim\n        self.rgb_means = rgb_means\n\n    def __call__(self, image, targets):\n        assert targets.shape[0] > 0, \"this image does not have gt\"\n\n        boxes = targets[:, :4].copy()\n        labels = targets[:, -1].copy()\n        landm = targets[:, 4:-1].copy()\n\n        image_t, boxes_t, labels_t, landm_t, pad_image_flag = _crop(image, boxes, labels, landm, self.img_dim)\n        image_t = _distort(image_t)\n        image_t = _pad_to_square(image_t,self.rgb_means, pad_image_flag)\n        image_t, boxes_t, landm_t = _mirror(image_t, boxes_t, landm_t)\n        height, width, _ = image_t.shape\n        image_t = _resize_subtract_mean(image_t, self.img_dim, self.rgb_means)\n        boxes_t[:, 0::2] /= width\n        boxes_t[:, 1::2] /= height\n\n        landm_t[:, 0::2] /= width\n        landm_t[:, 1::2] /= height\n\n        labels_t = np.expand_dims(labels_t, 1)\n        targets_t = np.hstack((boxes_t, landm_t, labels_t))\n\n        return image_t, targets_t\n"
  },
  {
    "path": "third_part/GPEN/face_detect/data/wider_face.py",
    "content": "import os\nimport os.path\nimport sys\nimport torch\nimport torch.utils.data as data\nimport cv2\nimport numpy as np\n\nclass WiderFaceDetection(data.Dataset):\n    def __init__(self, txt_path, preproc=None):\n        self.preproc = preproc\n        self.imgs_path = []\n        self.words = []\n        f = open(txt_path,'r')\n        lines = f.readlines()\n        isFirst = True\n        labels = []\n        for line in lines:\n            line = line.rstrip()\n            if line.startswith('#'):\n                if isFirst is True:\n                    isFirst = False\n                else:\n                    labels_copy = labels.copy()\n                    self.words.append(labels_copy)\n                    labels.clear()\n                path = line[2:]\n                path = txt_path.replace('label.txt','images/') + path\n                self.imgs_path.append(path)\n            else:\n                line = line.split(' ')\n                label = [float(x) for x in line]\n                labels.append(label)\n\n        self.words.append(labels)\n\n    def __len__(self):\n        return len(self.imgs_path)\n\n    def __getitem__(self, index):\n        img = cv2.imread(self.imgs_path[index])\n        height, width, _ = img.shape\n\n        labels = self.words[index]\n        annotations = np.zeros((0, 15))\n        if len(labels) == 0:\n            return annotations\n        for idx, label in enumerate(labels):\n            annotation = np.zeros((1, 15))\n            # bbox\n            annotation[0, 0] = label[0]  # x1\n            annotation[0, 1] = label[1]  # y1\n            annotation[0, 2] = label[0] + label[2]  # x2\n            annotation[0, 3] = label[1] + label[3]  # y2\n\n            # landmarks\n            annotation[0, 4] = label[4]    # l0_x\n            annotation[0, 5] = label[5]    # l0_y\n            annotation[0, 6] = label[7]    # l1_x\n            annotation[0, 7] = label[8]    # l1_y\n            annotation[0, 8] = 
label[10]   # l2_x\n            annotation[0, 9] = label[11]   # l2_y\n            annotation[0, 10] = label[13]  # l3_x\n            annotation[0, 11] = label[14]  # l3_y\n            annotation[0, 12] = label[16]  # l4_x\n            annotation[0, 13] = label[17]  # l4_y\n            if (annotation[0, 4]<0):\n                annotation[0, 14] = -1\n            else:\n                annotation[0, 14] = 1\n\n            annotations = np.append(annotations, annotation, axis=0)\n        target = np.array(annotations)\n        if self.preproc is not None:\n            img, target = self.preproc(img, target)\n\n        return torch.from_numpy(img), target\n\ndef detection_collate(batch):\n    \"\"\"Custom collate fn for dealing with batches of images that have a different\n    number of associated object annotations (bounding boxes).\n\n    Arguments:\n        batch: (tuple) A tuple of tensor images and lists of annotations\n\n    Return:\n        A tuple containing:\n            1) (tensor) batch of images stacked on their 0 dim\n            2) (list of tensors) annotations for a given image are stacked on 0 dim\n    \"\"\"\n    targets = []\n    imgs = []\n    for _, sample in enumerate(batch):\n        for _, tup in enumerate(sample):\n            if torch.is_tensor(tup):\n                imgs.append(tup)\n            elif isinstance(tup, type(np.empty(0))):\n                annos = torch.from_numpy(tup).float()\n                targets.append(annos)\n\n    return (torch.stack(imgs, 0), targets)\n"
  },
  {
    "path": "third_part/GPEN/face_detect/facemodels/__init__.py",
    "content": ""
  },
  {
    "path": "third_part/GPEN/face_detect/facemodels/net.py",
    "content": "import time\nimport torch\nimport torch.nn as nn\nimport torchvision.models._utils as _utils\nimport torchvision.models as models\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\ndef conv_bn(inp, oup, stride = 1, leaky = 0):\n    return nn.Sequential(\n        nn.Conv2d(inp, oup, 3, stride, 1, bias=False),\n        nn.BatchNorm2d(oup),\n        nn.LeakyReLU(negative_slope=leaky, inplace=True)\n    )\n\ndef conv_bn_no_relu(inp, oup, stride):\n    return nn.Sequential(\n        nn.Conv2d(inp, oup, 3, stride, 1, bias=False),\n        nn.BatchNorm2d(oup),\n    )\n\ndef conv_bn1X1(inp, oup, stride, leaky=0):\n    return nn.Sequential(\n        nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False),\n        nn.BatchNorm2d(oup),\n        nn.LeakyReLU(negative_slope=leaky, inplace=True)\n    )\n\ndef conv_dw(inp, oup, stride, leaky=0.1):\n    return nn.Sequential(\n        nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),\n        nn.BatchNorm2d(inp),\n        nn.LeakyReLU(negative_slope= leaky,inplace=True),\n\n        nn.Conv2d(inp, oup, 1, 1, 0, bias=False),\n        nn.BatchNorm2d(oup),\n        nn.LeakyReLU(negative_slope= leaky,inplace=True),\n    )\n\nclass SSH(nn.Module):\n    def __init__(self, in_channel, out_channel):\n        super(SSH, self).__init__()\n        assert out_channel % 4 == 0\n        leaky = 0\n        if (out_channel <= 64):\n            leaky = 0.1\n        self.conv3X3 = conv_bn_no_relu(in_channel, out_channel//2, stride=1)\n\n        self.conv5X5_1 = conv_bn(in_channel, out_channel//4, stride=1, leaky = leaky)\n        self.conv5X5_2 = conv_bn_no_relu(out_channel//4, out_channel//4, stride=1)\n\n        self.conv7X7_2 = conv_bn(out_channel//4, out_channel//4, stride=1, leaky = leaky)\n        self.conv7x7_3 = conv_bn_no_relu(out_channel//4, out_channel//4, stride=1)\n\n    def forward(self, input):\n        conv3X3 = self.conv3X3(input)\n\n        conv5X5_1 = self.conv5X5_1(input)\n        
conv5X5 = self.conv5X5_2(conv5X5_1)\n\n        conv7X7_2 = self.conv7X7_2(conv5X5_1)\n        conv7X7 = self.conv7x7_3(conv7X7_2)\n\n        out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1)\n        out = F.relu(out)\n        return out\n\nclass FPN(nn.Module):\n    def __init__(self,in_channels_list,out_channels):\n        super(FPN,self).__init__()\n        leaky = 0\n        if (out_channels <= 64):\n            leaky = 0.1\n        self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride = 1, leaky = leaky)\n        self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride = 1, leaky = leaky)\n        self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride = 1, leaky = leaky)\n\n        self.merge1 = conv_bn(out_channels, out_channels, leaky = leaky)\n        self.merge2 = conv_bn(out_channels, out_channels, leaky = leaky)\n\n    def forward(self, input):\n        # names = list(input.keys())\n        input = list(input.values())\n\n        output1 = self.output1(input[0])\n        output2 = self.output2(input[1])\n        output3 = self.output3(input[2])\n\n        up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode=\"nearest\")\n        output2 = output2 + up3\n        output2 = self.merge2(output2)\n\n        up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode=\"nearest\")\n        output1 = output1 + up2\n        output1 = self.merge1(output1)\n\n        out = [output1, output2, output3]\n        return out\n\n\n\nclass MobileNetV1(nn.Module):\n    def __init__(self):\n        super(MobileNetV1, self).__init__()\n        self.stage1 = nn.Sequential(\n            conv_bn(3, 8, 2, leaky = 0.1),    # 3\n            conv_dw(8, 16, 1),   # 7\n            conv_dw(16, 32, 2),  # 11\n            conv_dw(32, 32, 1),  # 19\n            conv_dw(32, 64, 2),  # 27\n            conv_dw(64, 64, 1),  # 43\n        )\n        self.stage2 = nn.Sequential(\n            conv_dw(64, 128, 2),  # 43 
+ 16 = 59\n            conv_dw(128, 128, 1), # 59 + 32 = 91\n            conv_dw(128, 128, 1), # 91 + 32 = 123\n            conv_dw(128, 128, 1), # 123 + 32 = 155\n            conv_dw(128, 128, 1), # 155 + 32 = 187\n            conv_dw(128, 128, 1), # 187 + 32 = 219\n        )\n        self.stage3 = nn.Sequential(\n            conv_dw(128, 256, 2), # 219 +3 2 = 241\n            conv_dw(256, 256, 1), # 241 + 64 = 301\n        )\n        self.avg = nn.AdaptiveAvgPool2d((1,1))\n        self.fc = nn.Linear(256, 1000)\n\n    def forward(self, x):\n        x = self.stage1(x)\n        x = self.stage2(x)\n        x = self.stage3(x)\n        x = self.avg(x)\n        # x = self.model(x)\n        x = x.view(-1, 256)\n        x = self.fc(x)\n        return x\n\n"
  },
  {
    "path": "third_part/GPEN/face_detect/facemodels/retinaface.py",
    "content": "import torch\nimport torch.nn as nn\nimport torchvision.models.detection.backbone_utils as backbone_utils\nimport torchvision.models._utils as _utils\nimport torch.nn.functional as F\nfrom collections import OrderedDict\n\nfrom face_detect.facemodels.net import MobileNetV1 as MobileNetV1\nfrom face_detect.facemodels.net import FPN as FPN\nfrom face_detect.facemodels.net import SSH as SSH\n\n\n\nclass ClassHead(nn.Module):\n    def __init__(self,inchannels=512,num_anchors=3):\n        super(ClassHead,self).__init__()\n        self.num_anchors = num_anchors\n        self.conv1x1 = nn.Conv2d(inchannels,self.num_anchors*2,kernel_size=(1,1),stride=1,padding=0)\n\n    def forward(self,x):\n        out = self.conv1x1(x)\n        out = out.permute(0,2,3,1).contiguous()\n        \n        return out.view(out.shape[0], -1, 2)\n\nclass BboxHead(nn.Module):\n    def __init__(self,inchannels=512,num_anchors=3):\n        super(BboxHead,self).__init__()\n        self.conv1x1 = nn.Conv2d(inchannels,num_anchors*4,kernel_size=(1,1),stride=1,padding=0)\n\n    def forward(self,x):\n        out = self.conv1x1(x)\n        out = out.permute(0,2,3,1).contiguous()\n\n        return out.view(out.shape[0], -1, 4)\n\nclass LandmarkHead(nn.Module):\n    def __init__(self,inchannels=512,num_anchors=3):\n        super(LandmarkHead,self).__init__()\n        self.conv1x1 = nn.Conv2d(inchannels,num_anchors*10,kernel_size=(1,1),stride=1,padding=0)\n\n    def forward(self,x):\n        out = self.conv1x1(x)\n        out = out.permute(0,2,3,1).contiguous()\n\n        return out.view(out.shape[0], -1, 10)\n\nclass RetinaFace(nn.Module):\n    def __init__(self, cfg = None, phase = 'train'):\n        \"\"\"\n        :param cfg:  Network related settings.\n        :param phase: train or test.\n        \"\"\"\n        super(RetinaFace,self).__init__()\n        self.phase = phase\n        backbone = None\n        if cfg['name'] == 'mobilenet0.25':\n            backbone = MobileNetV1()\n       
     if cfg['pretrain']:\n                checkpoint = torch.load(\"./weights/mobilenetV1X0.25_pretrain.tar\", map_location=torch.device('cpu'))\n                from collections import OrderedDict\n                new_state_dict = OrderedDict()\n                for k, v in checkpoint['state_dict'].items():\n                    name = k[7:]  # remove module.\n                    new_state_dict[name] = v\n                # load params\n                backbone.load_state_dict(new_state_dict)\n        elif cfg['name'] == 'Resnet50':\n            import torchvision.models as models\n            backbone = models.resnet50(pretrained=cfg['pretrain'])\n\n        self.body = _utils.IntermediateLayerGetter(backbone, cfg['return_layers'])\n        in_channels_stage2 = cfg['in_channel']\n        in_channels_list = [\n            in_channels_stage2 * 2,\n            in_channels_stage2 * 4,\n            in_channels_stage2 * 8,\n        ]\n        out_channels = cfg['out_channel']\n        self.fpn = FPN(in_channels_list,out_channels)\n        self.ssh1 = SSH(out_channels, out_channels)\n        self.ssh2 = SSH(out_channels, out_channels)\n        self.ssh3 = SSH(out_channels, out_channels)\n\n        self.ClassHead = self._make_class_head(fpn_num=3, inchannels=cfg['out_channel'])\n        self.BboxHead = self._make_bbox_head(fpn_num=3, inchannels=cfg['out_channel'])\n        self.LandmarkHead = self._make_landmark_head(fpn_num=3, inchannels=cfg['out_channel'])\n\n    def _make_class_head(self,fpn_num=3,inchannels=64,anchor_num=2):\n        classhead = nn.ModuleList()\n        for i in range(fpn_num):\n            classhead.append(ClassHead(inchannels,anchor_num))\n        return classhead\n    \n    def _make_bbox_head(self,fpn_num=3,inchannels=64,anchor_num=2):\n        bboxhead = nn.ModuleList()\n        for i in range(fpn_num):\n            bboxhead.append(BboxHead(inchannels,anchor_num))\n        return bboxhead\n\n    def 
_make_landmark_head(self,fpn_num=3,inchannels=64,anchor_num=2):\n        landmarkhead = nn.ModuleList()\n        for i in range(fpn_num):\n            landmarkhead.append(LandmarkHead(inchannels,anchor_num))\n        return landmarkhead\n\n    def forward(self,inputs):\n        out = self.body(inputs)\n\n        # FPN\n        fpn = self.fpn(out)\n\n        # SSH\n        feature1 = self.ssh1(fpn[0])\n        feature2 = self.ssh2(fpn[1])\n        feature3 = self.ssh3(fpn[2])\n        features = [feature1, feature2, feature3]\n\n        bbox_regressions = torch.cat([self.BboxHead[i](feature) for i, feature in enumerate(features)], dim=1)\n        classifications = torch.cat([self.ClassHead[i](feature) for i, feature in enumerate(features)],dim=1)\n        ldm_regressions = torch.cat([self.LandmarkHead[i](feature) for i, feature in enumerate(features)], dim=1)\n\n        if self.phase == 'train':\n            output = (bbox_regressions, classifications, ldm_regressions)\n        else:\n            output = (bbox_regressions, F.softmax(classifications, dim=-1), ldm_regressions)\n        return output"
  },
  {
    "path": "third_part/GPEN/face_detect/layers/__init__.py",
    "content": "from .functions import *\nfrom .modules import *\n"
  },
  {
    "path": "third_part/GPEN/face_detect/layers/functions/prior_box.py",
    "content": "import torch\nfrom itertools import product as product\nimport numpy as np\nfrom math import ceil\n\n\nclass PriorBox(object):\n    def __init__(self, cfg, image_size=None, phase='train'):\n        super(PriorBox, self).__init__()\n        self.min_sizes = cfg['min_sizes']\n        self.steps = cfg['steps']\n        self.clip = cfg['clip']\n        self.image_size = image_size\n        self.feature_maps = [[ceil(self.image_size[0]/step), ceil(self.image_size[1]/step)] for step in self.steps]\n        self.name = \"s\"\n\n    def forward(self):\n        anchors = []\n        for k, f in enumerate(self.feature_maps):\n            min_sizes = self.min_sizes[k]\n            for i, j in product(range(f[0]), range(f[1])):\n                for min_size in min_sizes:\n                    s_kx = min_size / self.image_size[1]\n                    s_ky = min_size / self.image_size[0]\n                    dense_cx = [x * self.steps[k] / self.image_size[1] for x in [j + 0.5]]\n                    dense_cy = [y * self.steps[k] / self.image_size[0] for y in [i + 0.5]]\n                    for cy, cx in product(dense_cy, dense_cx):\n                        anchors += [cx, cy, s_kx, s_ky]\n\n        # back to torch land\n        output = torch.Tensor(anchors).view(-1, 4)\n        if self.clip:\n            output.clamp_(max=1, min=0)\n        return output\n"
  },
  {
    "path": "third_part/GPEN/face_detect/layers/modules/__init__.py",
    "content": "from .multibox_loss import MultiBoxLoss\n\n__all__ = ['MultiBoxLoss']\n"
  },
  {
    "path": "third_part/GPEN/face_detect/layers/modules/multibox_loss.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom face_detect.utils.box_utils import match, log_sum_exp\nfrom face_detect.data import cfg_mnet\nGPU = cfg_mnet['gpu_train']\n\nclass MultiBoxLoss(nn.Module):\n    \"\"\"SSD Weighted Loss Function\n    Compute Targets:\n        1) Produce Confidence Target Indices by matching  ground truth boxes\n           with (default) 'priorboxes' that have jaccard index > threshold parameter\n           (default threshold: 0.5).\n        2) Produce localization target by 'encoding' variance into offsets of ground\n           truth boxes and their matched  'priorboxes'.\n        3) Hard negative mining to filter the excessive number of negative examples\n           that comes with using a large number of default bounding boxes.\n           (default negative:positive ratio 3:1)\n    Objective Loss:\n        L(x,c,l,g) = (Lconf(x, c) + αLloc(x,l,g)) / N\n        Where, Lconf is the CrossEntropy Loss and Lloc is the SmoothL1 Loss\n        weighted by α which is set to 1 by cross val.\n        Args:\n            c: class confidences,\n            l: predicted boxes,\n            g: ground truth boxes\n            N: number of matched default boxes\n        See: https://arxiv.org/pdf/1512.02325.pdf for more details.\n    \"\"\"\n\n    def __init__(self, num_classes, overlap_thresh, prior_for_matching, bkg_label, neg_mining, neg_pos, neg_overlap, encode_target):\n        super(MultiBoxLoss, self).__init__()\n        self.num_classes = num_classes\n        self.threshold = overlap_thresh\n        self.background_label = bkg_label\n        self.encode_target = encode_target\n        self.use_prior_for_matching = prior_for_matching\n        self.do_neg_mining = neg_mining\n        self.negpos_ratio = neg_pos\n        self.neg_overlap = neg_overlap\n        self.variance = [0.1, 0.2]\n\n    def forward(self, predictions, priors, targets):\n        \"\"\"Multibox 
Loss\n        Args:\n            predictions (tuple): A tuple containing loc preds, conf preds,\n            and prior boxes from SSD net.\n                conf shape: torch.size(batch_size,num_priors,num_classes)\n                loc shape: torch.size(batch_size,num_priors,4)\n                priors shape: torch.size(num_priors,4)\n\n            targets (tensor): Ground truth boxes and labels for a batch,\n                shape: [batch_size,num_objs,5] (last idx is the label).\n        \"\"\"\n\n        loc_data, conf_data, landm_data = predictions\n        num = loc_data.size(0)\n        num_priors = priors.size(0)\n\n        # match priors (default boxes) and ground truth boxes\n        loc_t = torch.Tensor(num, num_priors, 4)\n        landm_t = torch.Tensor(num, num_priors, 10)\n        conf_t = torch.LongTensor(num, num_priors)\n        for idx in range(num):\n            truths = targets[idx][:, :4].data\n            labels = targets[idx][:, -1].data\n            landms = targets[idx][:, 4:14].data\n            defaults = priors.data\n            match(self.threshold, truths, defaults, self.variance, labels, landms, loc_t, conf_t, landm_t, idx)\n        if GPU:\n            loc_t = loc_t.cuda()\n            conf_t = conf_t.cuda()\n            landm_t = landm_t.cuda()\n\n        zeros = torch.tensor(0, device=conf_t.device)  # match conf_t's device so CPU-only runs also work\n        # landm Loss (Smooth L1)\n        # Shape: [batch,num_priors,10]\n        pos1 = conf_t > zeros\n        num_pos_landm = pos1.long().sum(1, keepdim=True)\n        N1 = max(num_pos_landm.data.sum().float(), 1)\n        pos_idx1 = pos1.unsqueeze(pos1.dim()).expand_as(landm_data)\n        landm_p = landm_data[pos_idx1].view(-1, 10)\n        landm_t = landm_t[pos_idx1].view(-1, 10)\n        loss_landm = F.smooth_l1_loss(landm_p, landm_t, reduction='sum')\n\n\n        pos = conf_t != zeros\n        conf_t[pos] = 1\n\n        # Localization Loss (Smooth L1)\n        # Shape: [batch,num_priors,4]\n        pos_idx = 
pos.unsqueeze(pos.dim()).expand_as(loc_data)\n        loc_p = loc_data[pos_idx].view(-1, 4)\n        loc_t = loc_t[pos_idx].view(-1, 4)\n        loss_l = F.smooth_l1_loss(loc_p, loc_t, reduction='sum')\n\n        # Compute max conf across batch for hard negative mining\n        batch_conf = conf_data.view(-1, self.num_classes)\n        loss_c = log_sum_exp(batch_conf) - batch_conf.gather(1, conf_t.view(-1, 1))\n\n        # Hard Negative Mining\n        loss_c[pos.view(-1, 1)] = 0 # filter out pos boxes for now\n        loss_c = loss_c.view(num, -1)\n        _, loss_idx = loss_c.sort(1, descending=True)\n        _, idx_rank = loss_idx.sort(1)\n        num_pos = pos.long().sum(1, keepdim=True)\n        num_neg = torch.clamp(self.negpos_ratio*num_pos, max=pos.size(1)-1)\n        neg = idx_rank < num_neg.expand_as(idx_rank)\n\n        # Confidence Loss Including Positive and Negative Examples\n        pos_idx = pos.unsqueeze(2).expand_as(conf_data)\n        neg_idx = neg.unsqueeze(2).expand_as(conf_data)\n        conf_p = conf_data[(pos_idx+neg_idx).gt(0)].view(-1,self.num_classes)\n        targets_weighted = conf_t[(pos+neg).gt(0)]\n        loss_c = F.cross_entropy(conf_p, targets_weighted, reduction='sum')\n\n        # Sum of losses: L(x,c,l,g) = (Lconf(x, c) + αLloc(x,l,g)) / N\n        N = max(num_pos.data.sum().float(), 1)\n        loss_l /= N\n        loss_c /= N\n        loss_landm /= N1\n\n        return loss_l, loss_c, loss_landm\n"
  },
  {
    "path": "third_part/GPEN/face_detect/retinaface_detection.py",
    "content": "'''\n@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021)\n@author: yangxy (yangtao9009@gmail.com)\n'''\nimport os\nimport torch\nimport torch.backends.cudnn as cudnn\nimport numpy as np\nfrom face_detect.data import cfg_re50\nfrom face_detect.layers.functions.prior_box import PriorBox\nfrom face_detect.utils.nms.py_cpu_nms import py_cpu_nms\nimport cv2\nfrom face_detect.facemodels.retinaface import RetinaFace\nfrom face_detect.utils.box_utils import decode, decode_landm\nimport time\nimport torch.nn.functional as F\n\n\nclass RetinaFaceDetection(object):\n    def __init__(self, base_dir, device='cuda', network='RetinaFace-R50'):\n        torch.set_grad_enabled(False)\n        cudnn.benchmark = True\n        self.pretrained_path = os.path.join(base_dir, network+'.pth')\n        self.device = device #torch.cuda.current_device()\n        self.cfg = cfg_re50\n        self.net = RetinaFace(cfg=self.cfg, phase='test')\n        self.load_model()\n        self.net = self.net.to(device)\n\n        self.mean = torch.tensor([[[[104]], [[117]], [[123]]]]).to(device)\n\n    def check_keys(self, pretrained_state_dict):\n        ckpt_keys = set(pretrained_state_dict.keys())\n        model_keys = set(self.net.state_dict().keys())\n        used_pretrained_keys = model_keys & ckpt_keys\n        unused_pretrained_keys = ckpt_keys - model_keys\n        missing_keys = model_keys - ckpt_keys\n        assert len(used_pretrained_keys) > 0, 'load NONE from pretrained checkpoint'\n        return True\n\n    def remove_prefix(self, state_dict, prefix):\n        ''' Old style model is stored with all names of parameters sharing common prefix 'module.' 
'''\n        f = lambda x: x.split(prefix, 1)[-1] if x.startswith(prefix) else x\n        return {f(key): value for key, value in state_dict.items()}\n\n    def load_model(self, load_to_cpu=False):\n        #if load_to_cpu:\n        #    pretrained_dict = torch.load(self.pretrained_path, map_location=lambda storage, loc: storage)\n        #else:\n        #    pretrained_dict = torch.load(self.pretrained_path, map_location=lambda storage, loc: storage.cuda())\n        pretrained_dict = torch.load(self.pretrained_path, map_location=torch.device('cpu'))\n        if \"state_dict\" in pretrained_dict.keys():\n            pretrained_dict = self.remove_prefix(pretrained_dict['state_dict'], 'module.')\n        else:\n            pretrained_dict = self.remove_prefix(pretrained_dict, 'module.')\n        self.check_keys(pretrained_dict)\n        self.net.load_state_dict(pretrained_dict, strict=False)\n        self.net.eval()\n    \n    def detect(self, img_raw, resize=1, confidence_threshold=0.9, nms_threshold=0.4, top_k=5000, keep_top_k=750, save_image=False):\n        img = np.float32(img_raw)\n\n        im_height, im_width = img.shape[:2]\n        ss = 1.0\n        # tricky\n        if max(im_height, im_width) > 1500:\n            ss = 1000.0/max(im_height, im_width)\n            img = cv2.resize(img, (0,0), fx=ss, fy=ss)\n            im_height, im_width = img.shape[:2]\n\n        scale = torch.Tensor([img.shape[1], img.shape[0], img.shape[1], img.shape[0]])\n        img -= (104, 117, 123)\n        img = img.transpose(2, 0, 1)\n        img = torch.from_numpy(img).unsqueeze(0)\n        img = img.to(self.device)\n        scale = scale.to(self.device)\n        \n        with torch.no_grad():\n            loc, conf, landms = self.net(img)  # forward pass\n\n        priorbox = PriorBox(self.cfg, image_size=(im_height, im_width))\n        priors = priorbox.forward()\n        priors = priors.to(self.device)\n        prior_data = priors.data\n        boxes = 
decode(loc.data.squeeze(0), prior_data, self.cfg['variance'])\n        boxes = boxes * scale / resize\n        boxes = boxes.cpu().numpy()\n        scores = conf.squeeze(0).data.cpu().numpy()[:, 1]\n        landms = decode_landm(landms.data.squeeze(0), prior_data, self.cfg['variance'])\n        scale1 = torch.Tensor([img.shape[3], img.shape[2], img.shape[3], img.shape[2],\n                               img.shape[3], img.shape[2], img.shape[3], img.shape[2],\n                               img.shape[3], img.shape[2]])\n        scale1 = scale1.to(self.device)\n        landms = landms * scale1 / resize\n        landms = landms.cpu().numpy()\n\n        # ignore low scores\n        inds = np.where(scores > confidence_threshold)[0]\n        boxes = boxes[inds]\n        landms = landms[inds]\n        scores = scores[inds]\n\n        # keep top-K before NMS\n        order = scores.argsort()[::-1][:top_k]\n        boxes = boxes[order]\n        landms = landms[order]\n        scores = scores[order]\n\n        # do NMS\n        dets = np.hstack((boxes, scores[:, np.newaxis])).astype(np.float32, copy=False)\n        keep = py_cpu_nms(dets, nms_threshold)\n        # keep = nms(dets, nms_threshold,force_cpu=args.cpu)\n        dets = dets[keep, :]\n        landms = landms[keep]\n\n        # keep top-K faster NMS\n        dets = dets[:keep_top_k, :]\n        landms = landms[:keep_top_k, :]\n\n        # sort faces(delete)\n        '''\n        fscores = [det[4] for det in dets]\n        sorted_idx = sorted(range(len(fscores)), key=lambda k:fscores[k], reverse=False) # sort index\n        tmp = [landms[idx] for idx in sorted_idx]\n        landms = np.asarray(tmp)\n        '''\n        \n        landms = landms.reshape((-1, 5, 2))\n        landms = landms.transpose((0, 2, 1))\n        landms = landms.reshape(-1, 10, )\n        return dets/ss, landms/ss\n\n    def detect_tensor(self, img, resize=1, confidence_threshold=0.9, nms_threshold=0.4, top_k=5000, keep_top_k=750, 
save_image=False):\n        im_height, im_width = img.shape[-2:]\n        ss = 1000/max(im_height, im_width)\n        img = F.interpolate(img, scale_factor=ss)\n        im_height, im_width = img.shape[-2:]\n        scale = torch.Tensor([im_width, im_height, im_width, im_height]).to(self.device)\n        img -= self.mean\n\n        loc, conf, landms = self.net(img)  # forward pass\n\n        priorbox = PriorBox(self.cfg, image_size=(im_height, im_width))\n        priors = priorbox.forward()\n        priors = priors.to(self.device)\n        prior_data = priors.data\n        boxes = decode(loc.data.squeeze(0), prior_data, self.cfg['variance'])\n        boxes = boxes * scale / resize\n        boxes = boxes.cpu().numpy()\n        scores = conf.squeeze(0).data.cpu().numpy()[:, 1]\n        landms = decode_landm(landms.data.squeeze(0), prior_data, self.cfg['variance'])\n        scale1 = torch.Tensor([img.shape[3], img.shape[2], img.shape[3], img.shape[2],\n                               img.shape[3], img.shape[2], img.shape[3], img.shape[2],\n                               img.shape[3], img.shape[2]])\n        scale1 = scale1.to(self.device)\n        landms = landms * scale1 / resize\n        landms = landms.cpu().numpy()\n\n        # ignore low scores\n        inds = np.where(scores > confidence_threshold)[0]\n        boxes = boxes[inds]\n        landms = landms[inds]\n        scores = scores[inds]\n\n        # keep top-K before NMS\n        order = scores.argsort()[::-1][:top_k]\n        boxes = boxes[order]\n        landms = landms[order]\n        scores = scores[order]\n\n        # do NMS\n        dets = np.hstack((boxes, scores[:, np.newaxis])).astype(np.float32, copy=False)\n        keep = py_cpu_nms(dets, nms_threshold)\n        # keep = nms(dets, nms_threshold,force_cpu=args.cpu)\n        dets = dets[keep, :]\n        landms = landms[keep]\n\n        # keep top-K faster NMS\n        dets = dets[:keep_top_k, :]\n        landms = landms[:keep_top_k, :]\n\n        # 
sort faces(delete)\n        '''\n        fscores = [det[4] for det in dets]\n        sorted_idx = sorted(range(len(fscores)), key=lambda k:fscores[k], reverse=False) # sort index\n        tmp = [landms[idx] for idx in sorted_idx]\n        landms = np.asarray(tmp)\n        '''\n        \n        landms = landms.reshape((-1, 5, 2))\n        landms = landms.transpose((0, 2, 1))\n        landms = landms.reshape(-1, 10, )\n        return dets/ss, landms/ss\n"
  },
  {
    "path": "third_part/GPEN/face_detect/utils/__init__.py",
    "content": ""
  },
  {
    "path": "third_part/GPEN/face_detect/utils/box_utils.py",
    "content": "import torch\nimport numpy as np\n\n\ndef point_form(boxes):\n    \"\"\" Convert prior_boxes to (xmin, ymin, xmax, ymax)\n    representation for comparison to point form ground truth data.\n    Args:\n        boxes: (tensor) center-size default boxes from priorbox layers.\n    Return:\n        boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes.\n    \"\"\"\n    return torch.cat((boxes[:, :2] - boxes[:, 2:]/2,     # xmin, ymin\n                     boxes[:, :2] + boxes[:, 2:]/2), 1)  # xmax, ymax\n\n\ndef center_size(boxes):\n    \"\"\" Convert prior_boxes to (cx, cy, w, h)\n    representation for comparison to center-size form ground truth data.\n    Args:\n        boxes: (tensor) point_form boxes\n    Return:\n        boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes.\n    \"\"\"\n    return torch.cat((boxes[:, 2:] + boxes[:, :2])/2,  # cx, cy\n                     boxes[:, 2:] - boxes[:, :2], 1)  # w, h\n\n\ndef intersect(box_a, box_b):\n    \"\"\" We resize both tensors to [A,B,2] without new malloc:\n    [A,2] -> [A,1,2] -> [A,B,2]\n    [B,2] -> [1,B,2] -> [A,B,2]\n    Then we compute the area of intersect between box_a and box_b.\n    Args:\n      box_a: (tensor) bounding boxes, Shape: [A,4].\n      box_b: (tensor) bounding boxes, Shape: [B,4].\n    Return:\n      (tensor) intersection area, Shape: [A,B].\n    \"\"\"\n    A = box_a.size(0)\n    B = box_b.size(0)\n    max_xy = torch.min(box_a[:, 2:].unsqueeze(1).expand(A, B, 2),\n                       box_b[:, 2:].unsqueeze(0).expand(A, B, 2))\n    min_xy = torch.max(box_a[:, :2].unsqueeze(1).expand(A, B, 2),\n                       box_b[:, :2].unsqueeze(0).expand(A, B, 2))\n    inter = torch.clamp((max_xy - min_xy), min=0)\n    return inter[:, :, 0] * inter[:, :, 1]\n\n\ndef jaccard(box_a, box_b):\n    \"\"\"Compute the jaccard overlap of two sets of boxes.  The jaccard overlap\n    is simply the intersection over union of two boxes.  
Here we operate on\n    ground truth boxes and default boxes.\n    E.g.:\n        A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B)\n    Args:\n        box_a: (tensor) Ground truth bounding boxes, Shape: [num_objects,4]\n        box_b: (tensor) Prior boxes from priorbox layers, Shape: [num_priors,4]\n    Return:\n        jaccard overlap: (tensor) Shape: [box_a.size(0), box_b.size(0)]\n    \"\"\"\n    inter = intersect(box_a, box_b)\n    area_a = ((box_a[:, 2]-box_a[:, 0]) *\n              (box_a[:, 3]-box_a[:, 1])).unsqueeze(1).expand_as(inter)  # [A,B]\n    area_b = ((box_b[:, 2]-box_b[:, 0]) *\n              (box_b[:, 3]-box_b[:, 1])).unsqueeze(0).expand_as(inter)  # [A,B]\n    union = area_a + area_b - inter\n    return inter / union  # [A,B]\n\n\ndef matrix_iou(a, b):\n    \"\"\"\n    return iou of a and b, numpy version for data augmentation\n    \"\"\"\n    lt = np.maximum(a[:, np.newaxis, :2], b[:, :2])\n    rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:])\n\n    area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2)\n    area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)\n    area_b = np.prod(b[:, 2:] - b[:, :2], axis=1)\n    return area_i / (area_a[:, np.newaxis] + area_b - area_i)\n\n\ndef matrix_iof(a, b):\n    \"\"\"\n    return iof of a and b, numpy version for data augmentation\n    \"\"\"\n    lt = np.maximum(a[:, np.newaxis, :2], b[:, :2])\n    rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:])\n\n    area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2)\n    area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)\n    return area_i / np.maximum(area_a[:, np.newaxis], 1)\n\n\ndef match(threshold, truths, priors, variances, labels, landms, loc_t, conf_t, landm_t, idx):\n    \"\"\"Match each prior box with the ground truth box of the highest jaccard\n    overlap, encode the bounding boxes, then return the matched indices\n    corresponding to both confidence and location preds.\n    Args:\n        threshold: (float) The overlap threshold used when matching 
boxes.\n        truths: (tensor) Ground truth boxes, Shape: [num_obj, 4].\n        priors: (tensor) Prior boxes from priorbox layers, Shape: [n_priors,4].\n        variances: (tensor) Variances corresponding to each prior coord,\n            Shape: [num_priors, 4].\n        labels: (tensor) All the class labels for the image, Shape: [num_obj].\n        landms: (tensor) Ground truth landms, Shape [num_obj, 10].\n        loc_t: (tensor) Tensor to be filled w/ encoded location targets.\n        conf_t: (tensor) Tensor to be filled w/ matched indices for conf preds.\n        landm_t: (tensor) Tensor to be filled w/ encoded landm targets.\n        idx: (int) current batch index\n    Return:\n        The matched indices corresponding to 1)location 2)confidence 3)landm preds.\n    \"\"\"\n    # jaccard index\n    overlaps = jaccard(\n        truths,\n        point_form(priors)\n    )\n    # (Bipartite Matching)\n    # [1,num_objects] best prior for each ground truth\n    best_prior_overlap, best_prior_idx = overlaps.max(1, keepdim=True)\n\n    # ignore hard gt\n    valid_gt_idx = best_prior_overlap[:, 0] >= 0.2\n    best_prior_idx_filter = best_prior_idx[valid_gt_idx, :]\n    if best_prior_idx_filter.shape[0] <= 0:\n        loc_t[idx] = 0\n        conf_t[idx] = 0\n        return\n\n    # [1,num_priors] best ground truth for each prior\n    best_truth_overlap, best_truth_idx = overlaps.max(0, keepdim=True)\n    best_truth_idx.squeeze_(0)\n    best_truth_overlap.squeeze_(0)\n    best_prior_idx.squeeze_(1)\n    best_prior_idx_filter.squeeze_(1)\n    best_prior_overlap.squeeze_(1)\n    best_truth_overlap.index_fill_(0, best_prior_idx_filter, 2)  # ensure best prior\n    # TODO refactor: index  best_prior_idx with long tensor\n    # ensure every gt matches with its prior of max overlap\n    for j in range(best_prior_idx.size(0)):     # determine which ground-truth box each anchor predicts\n        best_truth_idx[best_prior_idx[j]] = j\n    matches = truths[best_truth_idx]            # Shape: [num_priors,4] 
gather the matched bbox for each anchor\n    conf = labels[best_truth_idx]               # Shape: [num_priors]      gather the matched label for each anchor\n    conf[best_truth_overlap < threshold] = 0    # label as background   anchors with overlap < 0.35 all become negatives\n    loc = encode(matches, priors, variances)\n\n    matches_landm = landms[best_truth_idx]\n    landm = encode_landm(matches_landm, priors, variances)\n    loc_t[idx] = loc    # [num_priors,4] encoded offsets to learn\n    conf_t[idx] = conf  # [num_priors] top class label for each prior\n    landm_t[idx] = landm\n\n\ndef encode(matched, priors, variances):\n    \"\"\"Encode the variances from the priorbox layers into the ground truth boxes\n    we have matched (based on jaccard overlap) with the prior boxes.\n    Args:\n        matched: (tensor) Coords of ground truth for each prior in point-form\n            Shape: [num_priors, 4].\n        priors: (tensor) Prior boxes in center-offset form\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        encoded boxes (tensor), Shape: [num_priors, 4]\n    \"\"\"\n\n    # dist b/t match center and prior's center\n    g_cxcy = (matched[:, :2] + matched[:, 2:])/2 - priors[:, :2]\n    # encode variance\n    g_cxcy /= (variances[0] * priors[:, 2:])\n    # match wh / prior wh\n    g_wh = (matched[:, 2:] - matched[:, :2]) / priors[:, 2:]\n    g_wh = torch.log(g_wh) / variances[1]\n    # return target for smooth_l1_loss\n    return torch.cat([g_cxcy, g_wh], 1)  # [num_priors,4]\n\ndef encode_landm(matched, priors, variances):\n    \"\"\"Encode the variances from the priorbox layers into the ground truth boxes\n    we have matched (based on jaccard overlap) with the prior boxes.\n    Args:\n        matched: (tensor) Coords of ground truth for each prior in point-form\n            Shape: [num_priors, 10].\n        priors: (tensor) Prior boxes in center-offset form\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n  
  Return:\n        encoded landm (tensor), Shape: [num_priors, 10]\n    \"\"\"\n\n    # dist b/t match center and prior's center\n    matched = torch.reshape(matched, (matched.size(0), 5, 2))\n    priors_cx = priors[:, 0].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)\n    priors_cy = priors[:, 1].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)\n    priors_w = priors[:, 2].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)\n    priors_h = priors[:, 3].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)\n    priors = torch.cat([priors_cx, priors_cy, priors_w, priors_h], dim=2)\n    g_cxcy = matched[:, :, :2] - priors[:, :, :2]\n    # encode variance\n    g_cxcy /= (variances[0] * priors[:, :, 2:])\n    # g_cxcy /= priors[:, :, 2:]\n    g_cxcy = g_cxcy.reshape(g_cxcy.size(0), -1)\n    # return target for smooth_l1_loss\n    return g_cxcy\n\n\n# Adapted from https://github.com/Hakuyume/chainer-ssd\ndef decode(loc, priors, variances):\n    \"\"\"Decode locations from predictions using priors to undo\n    the encoding we did for offset regression at train time.\n    Args:\n        loc (tensor): location predictions for loc layers,\n            Shape: [num_priors,4]\n        priors (tensor): Prior boxes in center-offset form.\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        decoded bounding box predictions\n    \"\"\"\n\n    boxes = torch.cat((\n        priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:],\n        priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])), 1)\n    boxes[:, :2] -= boxes[:, 2:] / 2\n    boxes[:, 2:] += boxes[:, :2]\n    return boxes\n\ndef decode_landm(pre, priors, variances):\n    \"\"\"Decode landm from predictions using priors to undo\n    the encoding we did for offset regression at train time.\n    Args:\n        pre (tensor): landm predictions for loc layers,\n            Shape: [num_priors,10]\n        priors (tensor): Prior boxes in center-offset 
form.\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        decoded landm predictions\n    \"\"\"\n    landms = torch.cat((priors[:, :2] + pre[:, :2] * variances[0] * priors[:, 2:],\n                        priors[:, :2] + pre[:, 2:4] * variances[0] * priors[:, 2:],\n                        priors[:, :2] + pre[:, 4:6] * variances[0] * priors[:, 2:],\n                        priors[:, :2] + pre[:, 6:8] * variances[0] * priors[:, 2:],\n                        priors[:, :2] + pre[:, 8:10] * variances[0] * priors[:, 2:],\n                        ), dim=1)\n    return landms\n\n\ndef log_sum_exp(x):\n    \"\"\"Utility function for computing log_sum_exp.\n    This will be used to determine the unaveraged confidence loss across\n    all examples in a batch.\n    Args:\n        x (Variable(tensor)): conf_preds from conf layers\n    \"\"\"\n    x_max = x.data.max()\n    return torch.log(torch.sum(torch.exp(x-x_max), 1, keepdim=True)) + x_max\n\n\n# Original author: Francisco Massa:\n# https://github.com/fmassa/object-detection.torch\n# Ported to PyTorch by Max deGroot (02/01/2017)\ndef nms(boxes, scores, overlap=0.5, top_k=200):\n    \"\"\"Apply non-maximum suppression at test time to avoid detecting too many\n    overlapping bounding boxes for a given object.\n    Args:\n        boxes: (tensor) The location preds for the img, Shape: [num_priors,4].\n        scores: (tensor) The class pred scores for the img, Shape:[num_priors].\n        overlap: (float) The overlap thresh for suppressing unnecessary boxes.\n        top_k: (int) The Maximum number of box preds to consider.\n    Return:\n        The indices of the kept boxes with respect to num_priors.\n    \"\"\"\n\n    keep = torch.Tensor(scores.size(0)).fill_(0).long()\n    if boxes.numel() == 0:\n        return keep\n    x1 = boxes[:, 0]\n    y1 = boxes[:, 1]\n    x2 = boxes[:, 2]\n    y2 = boxes[:, 3]\n    area = torch.mul(x2 - x1, y2 - y1)\n    
v, idx = scores.sort(0)  # sort in ascending order\n    # I = I[v >= 0.01]\n    idx = idx[-top_k:]  # indices of the top-k largest vals\n    xx1 = boxes.new()\n    yy1 = boxes.new()\n    xx2 = boxes.new()\n    yy2 = boxes.new()\n    w = boxes.new()\n    h = boxes.new()\n\n    # keep = torch.Tensor()\n    count = 0\n    while idx.numel() > 0:\n        i = idx[-1]  # index of current largest val\n        # keep.append(i)\n        keep[count] = i\n        count += 1\n        if idx.size(0) == 1:\n            break\n        idx = idx[:-1]  # remove kept element from view\n        # load bboxes of next highest vals\n        torch.index_select(x1, 0, idx, out=xx1)\n        torch.index_select(y1, 0, idx, out=yy1)\n        torch.index_select(x2, 0, idx, out=xx2)\n        torch.index_select(y2, 0, idx, out=yy2)\n        # store element-wise max with next highest score\n        xx1 = torch.clamp(xx1, min=x1[i])\n        yy1 = torch.clamp(yy1, min=y1[i])\n        xx2 = torch.clamp(xx2, max=x2[i])\n        yy2 = torch.clamp(yy2, max=y2[i])\n        w.resize_as_(xx2)\n        h.resize_as_(yy2)\n        w = xx2 - xx1\n        h = yy2 - yy1\n        # check sizes of xx1 and xx2.. after each iteration\n        w = torch.clamp(w, min=0.0)\n        h = torch.clamp(h, min=0.0)\n        inter = w*h\n        # IoU = i / (area(a) + area(b) - i)\n        rem_areas = torch.index_select(area, 0, idx)  # load remaining areas\n        union = (rem_areas - inter) + area[i]\n        IoU = inter/union  # store result in iou\n        # keep only elements with an IoU <= overlap\n        idx = idx[IoU.le(overlap)]\n    return keep, count\n\n\n"
  },
  {
    "path": "third_part/GPEN/face_detect/utils/nms/__init__.py",
    "content": ""
  },
  {
    "path": "third_part/GPEN/face_detect/utils/nms/py_cpu_nms.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport numpy as np\n\ndef py_cpu_nms(dets, thresh):\n    \"\"\"Pure Python NMS baseline.\"\"\"\n    x1 = dets[:, 0]\n    y1 = dets[:, 1]\n    x2 = dets[:, 2]\n    y2 = dets[:, 3]\n    scores = dets[:, 4]\n\n    areas = (x2 - x1 + 1) * (y2 - y1 + 1)\n    order = scores.argsort()[::-1]\n\n    keep = []\n    while order.size > 0:\n        i = order[0]\n        keep.append(i)\n        xx1 = np.maximum(x1[i], x1[order[1:]])\n        yy1 = np.maximum(y1[i], y1[order[1:]])\n        xx2 = np.minimum(x2[i], x2[order[1:]])\n        yy2 = np.minimum(y2[i], y2[order[1:]])\n\n        w = np.maximum(0.0, xx2 - xx1 + 1)\n        h = np.maximum(0.0, yy2 - yy1 + 1)\n        inter = w * h\n        ovr = inter / (areas[i] + areas[order[1:]] - inter)\n\n        inds = np.where(ovr <= thresh)[0]\n        order = order[inds + 1]\n\n    return keep\n"
  },
  {
    "path": "third_part/GPEN/face_detect/utils/timer.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport time\n\n\nclass Timer(object):\n    \"\"\"A simple timer.\"\"\"\n    def __init__(self):\n        self.total_time = 0.\n        self.calls = 0\n        self.start_time = 0.\n        self.diff = 0.\n        self.average_time = 0.\n\n    def tic(self):\n        # using time.time instead of time.clock because time time.clock\n        # does not normalize for multithreading\n        self.start_time = time.time()\n\n    def toc(self, average=True):\n        self.diff = time.time() - self.start_time\n        self.total_time += self.diff\n        self.calls += 1\n        self.average_time = self.total_time / self.calls\n        if average:\n            return self.average_time\n        else:\n            return self.diff\n\n    def clear(self):\n        self.total_time = 0.\n        self.calls = 0\n        self.start_time = 0.\n        self.diff = 0.\n        self.average_time = 0.\n"
  },
  {
    "path": "third_part/GPEN/face_model/face_gan.py",
    "content": "'''\n@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021)\n@author: yangxy (yangtao9009@gmail.com)\n'''\nimport torch\nimport os\nimport cv2\nimport glob\nimport numpy as np\nfrom torch import nn\nimport torch.nn.functional as F\nfrom torchvision import transforms, utils\nfrom face_model.gpen_model import FullGenerator\n\nclass FaceGAN(object):\n    def __init__(self, base_dir='./', size=512, model=None, channel_multiplier=2, narrow=1, is_norm=True, device='cuda'):\n        self.mfile = os.path.join(base_dir, model+'.pth')\n        self.n_mlp = 8\n        self.device = device\n        self.is_norm = is_norm\n        self.resolution = size\n        self.load_model(channel_multiplier, narrow)\n\n    def load_model(self, channel_multiplier=2, narrow=1):\n        self.model = FullGenerator(self.resolution, 512, self.n_mlp, channel_multiplier, narrow=narrow, device=self.device)\n        pretrained_dict = torch.load(self.mfile, map_location=torch.device('cpu'))\n        self.model.load_state_dict(pretrained_dict)\n        self.model.to(self.device)\n        self.model.eval()\n\n    def process(self, img):\n        img = cv2.resize(img, (self.resolution, self.resolution))\n        img_t = self.img2tensor(img)\n\n        with torch.no_grad():\n            out, __ = self.model(img_t)\n\n        out = self.tensor2img(out)\n\n        return out\n\n    def img2tensor(self, img):\n        img_t = torch.from_numpy(img).to(self.device)/255.\n        if self.is_norm:\n            img_t = (img_t - 0.5) / 0.5\n        img_t = img_t.permute(2, 0, 1).unsqueeze(0).flip(1) # BGR->RGB\n        return img_t\n\n    def tensor2img(self, img_t, pmax=255.0, imtype=np.uint8):\n        if self.is_norm:\n            img_t = img_t * 0.5 + 0.5\n        img_t = img_t.squeeze(0).permute(1, 2, 0).flip(2) # RGB->BGR\n        img_np = np.clip(img_t.float().cpu().numpy(), 0, 1) * pmax\n\n        return img_np.astype(imtype)\n"
  },
  {
    "path": "third_part/GPEN/face_model/gpen_model.py",
    "content": "'''\n@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021)\n@author: yangxy (yangtao9009@gmail.com)\n'''\nimport math\nimport random\nimport functools\nimport operator\nimport itertools\n\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\nfrom torch.autograd import Function\n\nfrom face_model.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d\n\nclass PixelNorm(nn.Module):\n    def __init__(self):\n        super().__init__()\n\n    def forward(self, input):\n        return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)\n\n\ndef make_kernel(k):\n    k = torch.tensor(k, dtype=torch.float32)\n\n    if k.ndim == 1:\n        k = k[None, :] * k[:, None]\n\n    k /= k.sum()\n\n    return k\n\n\nclass Upsample(nn.Module):\n    def __init__(self, kernel, factor=2, device='cpu'):\n        super().__init__()\n\n        self.factor = factor\n        kernel = make_kernel(kernel) * (factor ** 2)\n        self.register_buffer('kernel', kernel)\n\n        p = kernel.shape[0] - factor\n\n        pad0 = (p + 1) // 2 + factor - 1\n        pad1 = p // 2\n\n        self.pad = (pad0, pad1)\n        self.device = device\n\n    def forward(self, input):\n        out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad, device=self.device)\n\n        return out\n\n\nclass Downsample(nn.Module):\n    def __init__(self, kernel, factor=2, device='cpu'):\n        super().__init__()\n\n        self.factor = factor\n        kernel = make_kernel(kernel)\n        self.register_buffer('kernel', kernel)\n\n        p = kernel.shape[0] - factor\n\n        pad0 = (p + 1) // 2\n        pad1 = p // 2\n\n        self.pad = (pad0, pad1)\n        self.device = device\n\n    def forward(self, input):\n        out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad, device=self.device)\n\n        return out\n\n\nclass Blur(nn.Module):\n    def __init__(self, kernel, pad, 
upsample_factor=1, device='cpu'):\n        super().__init__()\n\n        kernel = make_kernel(kernel)\n\n        if upsample_factor > 1:\n            kernel = kernel * (upsample_factor ** 2)\n\n        self.register_buffer('kernel', kernel)\n\n        self.pad = pad\n        self.device = device\n\n    def forward(self, input):\n        out = upfirdn2d(input, self.kernel, pad=self.pad, device=self.device)\n\n        return out\n\n\nclass EqualConv2d(nn.Module):\n    def __init__(\n        self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True\n    ):\n        super().__init__()\n\n        self.weight = nn.Parameter(\n            torch.randn(out_channel, in_channel, kernel_size, kernel_size)\n        )\n        self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)\n\n        self.stride = stride\n        self.padding = padding\n\n        if bias:\n            self.bias = nn.Parameter(torch.zeros(out_channel))\n\n        else:\n            self.bias = None\n\n    def forward(self, input):\n        out = F.conv2d(\n            input,\n            self.weight * self.scale,\n            bias=self.bias,\n            stride=self.stride,\n            padding=self.padding,\n        )\n\n        return out\n\n    def __repr__(self):\n        return (\n            f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},'\n            f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'\n        )\n\n\nclass EqualLinear(nn.Module):\n    def __init__(\n        self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None, device='cpu'\n    ):\n        super().__init__()\n\n        self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))\n\n        if bias:\n            self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))\n\n        else:\n            self.bias = None\n\n        self.activation = activation\n        self.device = device\n\n        self.scale = (1 / 
math.sqrt(in_dim)) * lr_mul\n        self.lr_mul = lr_mul\n\n    def forward(self, input):\n        if self.activation:\n            out = F.linear(input, self.weight * self.scale)\n            out = fused_leaky_relu(out, self.bias * self.lr_mul, device=self.device)\n\n        else:\n            out = F.linear(input, self.weight * self.scale, bias=self.bias * self.lr_mul)\n\n        return out\n\n    def __repr__(self):\n        return (\n            f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'\n        )\n\n\nclass ScaledLeakyReLU(nn.Module):\n    def __init__(self, negative_slope=0.2):\n        super().__init__()\n\n        self.negative_slope = negative_slope\n\n    def forward(self, input):\n        out = F.leaky_relu(input, negative_slope=self.negative_slope)\n\n        return out * math.sqrt(2)\n\n\nclass ModulatedConv2d(nn.Module):\n    def __init__(\n        self,\n        in_channel,\n        out_channel,\n        kernel_size,\n        style_dim,\n        demodulate=True,\n        upsample=False,\n        downsample=False,\n        blur_kernel=[1, 3, 3, 1],\n        device='cpu'\n    ):\n        super().__init__()\n\n        self.eps = 1e-8\n        self.kernel_size = kernel_size\n        self.in_channel = in_channel\n        self.out_channel = out_channel\n        self.upsample = upsample\n        self.downsample = downsample\n\n        if upsample:\n            factor = 2\n            p = (len(blur_kernel) - factor) - (kernel_size - 1)\n            pad0 = (p + 1) // 2 + factor - 1\n            pad1 = p // 2 + 1\n\n            self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor, device=device)\n\n        if downsample:\n            factor = 2\n            p = (len(blur_kernel) - factor) + (kernel_size - 1)\n            pad0 = (p + 1) // 2\n            pad1 = p // 2\n\n            self.blur = Blur(blur_kernel, pad=(pad0, pad1), device=device)\n\n        fan_in = in_channel * kernel_size ** 2\n        
self.scale = 1 / math.sqrt(fan_in)\n        self.padding = kernel_size // 2\n\n        self.weight = nn.Parameter(\n            torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)\n        )\n\n        self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)\n\n        self.demodulate = demodulate\n\n    def __repr__(self):\n        return (\n            f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '\n            f'upsample={self.upsample}, downsample={self.downsample})'\n        )\n\n    def forward(self, input, style):\n        batch, in_channel, height, width = input.shape\n\n        style = self.modulation(style).view(batch, 1, in_channel, 1, 1)\n        weight = self.scale * self.weight * style\n\n        if self.demodulate:\n            demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)\n            weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)\n\n        weight = weight.view(\n            batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size\n        )\n\n        if self.upsample:\n            input = input.view(1, batch * in_channel, height, width)\n            weight = weight.view(\n                batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size\n            )\n            weight = weight.transpose(1, 2).reshape(\n                batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size\n            )\n            out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)\n            _, _, height, width = out.shape\n            out = out.view(batch, self.out_channel, height, width)\n            out = self.blur(out)\n\n        elif self.downsample:\n            input = self.blur(input)\n            _, _, height, width = input.shape\n            input = input.view(1, batch * in_channel, height, width)\n            out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)\n            _, _, 
height, width = out.shape\n            out = out.view(batch, self.out_channel, height, width)\n\n        else:\n            input = input.view(1, batch * in_channel, height, width)\n            out = F.conv2d(input, weight, padding=self.padding, groups=batch)\n            _, _, height, width = out.shape\n            out = out.view(batch, self.out_channel, height, width)\n\n        return out\n\n\nclass NoiseInjection(nn.Module):\n    def __init__(self, isconcat=True):\n        super().__init__()\n\n        self.isconcat = isconcat\n        self.weight = nn.Parameter(torch.zeros(1))\n\n    def forward(self, image, noise=None):\n        if noise is None:\n            batch, _, height, width = image.shape\n            noise = image.new_empty(batch, 1, height, width).normal_()\n\n        if self.isconcat:\n            return torch.cat((image, self.weight * noise), dim=1)\n        else:\n            return image + self.weight * noise\n\n\nclass ConstantInput(nn.Module):\n    def __init__(self, channel, size=4):\n        super().__init__()\n\n        self.input = nn.Parameter(torch.randn(1, channel, size, size))\n\n    def forward(self, input):\n        batch = input.shape[0]\n        out = self.input.repeat(batch, 1, 1, 1)\n\n        return out\n\n\nclass StyledConv(nn.Module):\n    def __init__(\n        self,\n        in_channel,\n        out_channel,\n        kernel_size,\n        style_dim,\n        upsample=False,\n        blur_kernel=[1, 3, 3, 1],\n        demodulate=True,\n        isconcat=True,\n        device='cpu'\n    ):\n        super().__init__()\n\n        self.conv = ModulatedConv2d(\n            in_channel,\n            out_channel,\n            kernel_size,\n            style_dim,\n            upsample=upsample,\n            blur_kernel=blur_kernel,\n            demodulate=demodulate,\n            device=device\n        )\n\n        self.noise = NoiseInjection(isconcat)\n        #self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))\n        
#self.activate = ScaledLeakyReLU(0.2)\n        feat_multiplier = 2 if isconcat else 1\n        self.activate = FusedLeakyReLU(out_channel*feat_multiplier, device=device)\n\n    def forward(self, input, style, noise=None):\n        out = self.conv(input, style)\n        out = self.noise(out, noise=noise)\n        # out = out + self.bias\n        out = self.activate(out)\n\n        return out\n\n\nclass ToRGB(nn.Module):\n    def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1], device='cpu'):\n        super().__init__()\n\n        if upsample:\n            self.upsample = Upsample(blur_kernel, device=device)\n\n        self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False, device=device)\n        self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))\n\n    def forward(self, input, style, skip=None):\n        out = self.conv(input, style)\n        out = out + self.bias\n\n        if skip is not None:\n            skip = self.upsample(skip)\n\n            out = out + skip\n\n        return out\n\nclass Generator(nn.Module):\n    def __init__(\n        self,\n        size,\n        style_dim,\n        n_mlp,\n        channel_multiplier=2,\n        blur_kernel=[1, 3, 3, 1],\n        lr_mlp=0.01,\n        isconcat=True,\n        narrow=1,\n        device='cpu'\n    ):\n        super().__init__()\n\n        self.size = size\n        self.n_mlp = n_mlp\n        self.style_dim = style_dim\n        self.feat_multiplier = 2 if isconcat else 1\n\n        layers = [PixelNorm()]\n\n        for i in range(n_mlp):\n            layers.append(\n                EqualLinear(\n                    style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu', device=device\n                )\n            )\n\n        self.style = nn.Sequential(*layers)\n\n        self.channels = {\n            4: int(512 * narrow),\n            8: int(512 * narrow),\n            16: int(512 * narrow),\n            32: int(512 * narrow),\n            64: 
int(256 * channel_multiplier * narrow),\n            128: int(128 * channel_multiplier * narrow),\n            256: int(64 * channel_multiplier * narrow),\n            512: int(32 * channel_multiplier * narrow),\n            1024: int(16 * channel_multiplier * narrow)\n        }\n\n        self.input = ConstantInput(self.channels[4])\n        self.conv1 = StyledConv(\n            self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel, isconcat=isconcat, device=device\n        )\n        self.to_rgb1 = ToRGB(self.channels[4]*self.feat_multiplier, style_dim, upsample=False, device=device)\n\n        self.log_size = int(math.log(size, 2))\n\n        self.convs = nn.ModuleList()\n        self.upsamples = nn.ModuleList()\n        self.to_rgbs = nn.ModuleList()\n\n        in_channel = self.channels[4]\n\n        for i in range(3, self.log_size + 1):\n            out_channel = self.channels[2 ** i]\n\n            self.convs.append(\n                StyledConv(\n                    in_channel*self.feat_multiplier,\n                    out_channel,\n                    3,\n                    style_dim,\n                    upsample=True,\n                    blur_kernel=blur_kernel,\n                    isconcat=isconcat,\n                    device=device\n                )\n            )\n\n            self.convs.append(\n                StyledConv(\n                    out_channel*self.feat_multiplier, out_channel, 3, style_dim, blur_kernel=blur_kernel, isconcat=isconcat, device=device\n                )\n            )\n\n            self.to_rgbs.append(ToRGB(out_channel*self.feat_multiplier, style_dim, device=device))\n\n            in_channel = out_channel\n\n        self.n_latent = self.log_size * 2 - 2\n\n    def make_noise(self):\n        device = self.input.input.device\n\n        noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]\n\n        for i in range(3, self.log_size + 1):\n            for _ in range(2):\n                
noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))\n\n        return noises\n\n    def mean_latent(self, n_latent):\n        latent_in = torch.randn(\n            n_latent, self.style_dim, device=self.input.input.device\n        )\n        latent = self.style(latent_in).mean(0, keepdim=True)\n\n        return latent\n\n    def get_latent(self, input):\n        return self.style(input)\n\n    def forward(\n        self,\n        styles,\n        return_latents=False,\n        inject_index=None,\n        truncation=1,\n        truncation_latent=None,\n        input_is_latent=False,\n        noise=None,\n    ):\n        if not input_is_latent:\n            styles = [self.style(s) for s in styles]\n\n        if noise is None:\n            '''\n            noise = [None] * (2 * (self.log_size - 2) + 1)\n            '''\n            noise = []\n            batch = styles[0].shape[0]\n            for i in range(self.n_mlp + 1):\n                size = 2 ** (i+2)\n                noise.append(torch.randn(batch, self.channels[size], size, size, device=styles[0].device))\n            \n        if truncation < 1:\n            style_t = []\n\n            for style in styles:\n                style_t.append(\n                    truncation_latent + truncation * (style - truncation_latent)\n                )\n\n            styles = style_t\n\n        if len(styles) < 2:\n            inject_index = self.n_latent\n\n            latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n\n        else:\n            if inject_index is None:\n                inject_index = random.randint(1, self.n_latent - 1)\n\n            latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)\n            latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)\n\n            latent = torch.cat([latent, latent2], 1)\n\n        out = self.input(latent)\n        out = self.conv1(out, latent[:, 0], noise=noise[0])\n\n        skip = self.to_rgb1(out, latent[:, 
1])\n\n        i = 1\n        for conv1, conv2, noise1, noise2, to_rgb in zip(\n            self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs\n        ):\n            out = conv1(out, latent[:, i], noise=noise1)\n            out = conv2(out, latent[:, i + 1], noise=noise2)\n            skip = to_rgb(out, latent[:, i + 2], skip)\n\n            i += 2\n\n        image = skip\n\n        if return_latents:\n            return image, latent\n\n        else:\n            return image, None\n\nclass ConvLayer(nn.Sequential):\n    def __init__(\n        self,\n        in_channel,\n        out_channel,\n        kernel_size,\n        downsample=False,\n        blur_kernel=[1, 3, 3, 1],\n        bias=True,\n        activate=True,\n        device='cpu'\n    ):\n        layers = []\n\n        if downsample:\n            factor = 2\n            p = (len(blur_kernel) - factor) + (kernel_size - 1)\n            pad0 = (p + 1) // 2\n            pad1 = p // 2\n\n            layers.append(Blur(blur_kernel, pad=(pad0, pad1), device=device))\n\n            stride = 2\n            self.padding = 0\n\n        else:\n            stride = 1\n            self.padding = kernel_size // 2\n\n        layers.append(\n            EqualConv2d(\n                in_channel,\n                out_channel,\n                kernel_size,\n                padding=self.padding,\n                stride=stride,\n                bias=bias and not activate,\n            )\n        )\n\n        if activate:\n            if bias:\n                layers.append(FusedLeakyReLU(out_channel, device=device))\n\n            else:\n                layers.append(ScaledLeakyReLU(0.2))\n\n        super().__init__(*layers)\n\n\nclass ResBlock(nn.Module):\n    def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1], device='cpu'):\n        super().__init__()\n\n        self.conv1 = ConvLayer(in_channel, in_channel, 3, device=device)\n        self.conv2 = ConvLayer(in_channel, out_channel, 
3, downsample=True, device=device)\n\n        self.skip = ConvLayer(\n            in_channel, out_channel, 1, downsample=True, activate=False, bias=False\n        )\n\n    def forward(self, input):\n        out = self.conv1(input)\n        out = self.conv2(out)\n\n        skip = self.skip(input)\n        out = (out + skip) / math.sqrt(2)\n\n        return out\n\nclass FullGenerator(nn.Module):\n    def __init__(\n        self,\n        size,\n        style_dim,\n        n_mlp,\n        channel_multiplier=2,\n        blur_kernel=[1, 3, 3, 1],\n        lr_mlp=0.01,\n        isconcat=True,\n        narrow=1,\n        device='cpu'\n    ):\n        super().__init__()\n        channels = {\n            4: int(512 * narrow),\n            8: int(512 * narrow),\n            16: int(512 * narrow),\n            32: int(512 * narrow),\n            64: int(256 * channel_multiplier * narrow),\n            128: int(128 * channel_multiplier * narrow),\n            256: int(64 * channel_multiplier * narrow),\n            512: int(32 * channel_multiplier * narrow),\n            1024: int(16 * channel_multiplier * narrow)\n        }\n\n        self.log_size = int(math.log(size, 2))\n        self.generator = Generator(size, style_dim, n_mlp, channel_multiplier=channel_multiplier, blur_kernel=blur_kernel, lr_mlp=lr_mlp, isconcat=isconcat, narrow=narrow, device=device)\n        \n        conv = [ConvLayer(3, channels[size], 1, device=device)]\n        self.ecd0 = nn.Sequential(*conv)\n        in_channel = channels[size]\n\n        self.names = ['ecd%d'%i for i in range(self.log_size-1)]\n        for i in range(self.log_size, 2, -1):\n            out_channel = channels[2 ** (i - 1)]\n            #conv = [ResBlock(in_channel, out_channel, blur_kernel)]\n            conv = [ConvLayer(in_channel, out_channel, 3, downsample=True, device=device)] \n            setattr(self, self.names[self.log_size-i+1], nn.Sequential(*conv))\n            in_channel = out_channel\n        self.final_linear = 
nn.Sequential(EqualLinear(channels[4] * 4 * 4, style_dim, activation='fused_lrelu', device=device))\n\n    def forward(self,\n        inputs,\n        return_latents=False,\n        inject_index=None,\n        truncation=1,\n        truncation_latent=None,\n        input_is_latent=False,\n    ):\n        noise = []\n        for i in range(self.log_size-1):\n            ecd = getattr(self, self.names[i])\n            inputs = ecd(inputs)\n            noise.append(inputs)\n            \n        inputs = inputs.view(inputs.shape[0], -1)\n        outs = self.final_linear(inputs)\n        noise = list(itertools.chain.from_iterable(itertools.repeat(x, 2) for x in noise))[::-1]\n        outs = self.generator([outs], return_latents, inject_index, truncation, truncation_latent, input_is_latent, noise=noise[1:])\n        return outs\n\nclass Discriminator(nn.Module):\n    def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], narrow=1, device='cpu'):\n        super().__init__()\n\n        channels = {\n            4: int(512 * narrow),\n            8: int(512 * narrow),\n            16: int(512 * narrow),\n            32: int(512 * narrow),\n            64: int(256 * channel_multiplier * narrow),\n            128: int(128 * channel_multiplier * narrow),\n            256: int(64 * channel_multiplier * narrow),\n            512: int(32 * channel_multiplier * narrow),\n            1024: int(16 * channel_multiplier * narrow)\n        }\n\n        convs = [ConvLayer(3, channels[size], 1, device=device)]\n\n        log_size = int(math.log(size, 2))\n\n        in_channel = channels[size]\n\n        for i in range(log_size, 2, -1):\n            out_channel = channels[2 ** (i - 1)]\n\n            convs.append(ResBlock(in_channel, out_channel, blur_kernel, device=device))\n\n            in_channel = out_channel\n\n        self.convs = nn.Sequential(*convs)\n\n        self.stddev_group = 4\n        self.stddev_feat = 1\n\n        self.final_conv = ConvLayer(in_channel 
+ 1, channels[4], 3, device=device)\n        self.final_linear = nn.Sequential(\n            EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu', device=device),\n            EqualLinear(channels[4], 1),\n        )\n\n    def forward(self, input):\n        out = self.convs(input)\n\n        batch, channel, height, width = out.shape\n        group = min(batch, self.stddev_group)\n        stddev = out.view(\n            group, -1, self.stddev_feat, channel // self.stddev_feat, height, width\n        )\n        stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)\n        stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)\n        stddev = stddev.repeat(group, 1, height, width)\n        out = torch.cat([out, stddev], 1)\n\n        out = self.final_conv(out)\n\n        out = out.view(batch, -1)\n        out = self.final_linear(out)\n        return out\n"
  },
  {
    "path": "third_part/GPEN/face_model/op/__init__.py",
    "content": "from .fused_act import FusedLeakyReLU, fused_leaky_relu\nfrom .upfirdn2d import upfirdn2d\n"
  },
  {
    "path": "third_part/GPEN/face_model/op/fused_act.py",
    "content": "import os\nimport platform\n\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom torch.autograd import Function\nfrom torch.utils.cpp_extension import load, _import_module_from_library\n\n# if running GPEN without cuda, please comment line 11-19\nif platform.system() == 'Linux' and torch.cuda.is_available():\n    module_path = os.path.dirname(__file__)\n    fused = load(\n        'fused',\n        sources=[\n            os.path.join(module_path, 'fused_bias_act.cpp'),\n            os.path.join(module_path, 'fused_bias_act_kernel.cu'),\n        ],\n    )\n\n\n#fused = _import_module_from_library('fused', '/tmp/torch_extensions/fused', True)\n\n\nclass FusedLeakyReLUFunctionBackward(Function):\n    @staticmethod\n    def forward(ctx, grad_output, out, negative_slope, scale):\n        ctx.save_for_backward(out)\n        ctx.negative_slope = negative_slope\n        ctx.scale = scale\n\n        empty = grad_output.new_empty(0)\n\n        grad_input = fused.fused_bias_act(\n            grad_output, empty, out, 3, 1, negative_slope, scale\n        )\n\n        dim = [0]\n\n        if grad_input.ndim > 2:\n            dim += list(range(2, grad_input.ndim))\n\n        grad_bias = grad_input.sum(dim).detach()\n\n        return grad_input, grad_bias\n\n    @staticmethod\n    def backward(ctx, gradgrad_input, gradgrad_bias):\n        out, = ctx.saved_tensors\n        gradgrad_out = fused.fused_bias_act(\n            gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale\n        )\n\n        return gradgrad_out, None, None, None\n\n\nclass FusedLeakyReLUFunction(Function):\n    @staticmethod\n    def forward(ctx, input, bias, negative_slope, scale):\n        empty = input.new_empty(0)\n        out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)\n        ctx.save_for_backward(out)\n        ctx.negative_slope = negative_slope\n        ctx.scale = scale\n\n        return out\n\n    @staticmethod\n    
def backward(ctx, grad_output):\n        out, = ctx.saved_tensors\n\n        grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(\n            grad_output, out, ctx.negative_slope, ctx.scale\n        )\n\n        return grad_input, grad_bias, None, None\n\n\nclass FusedLeakyReLU(nn.Module):\n    def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5, device='cpu'):\n        super().__init__()\n\n        self.bias = nn.Parameter(torch.zeros(channel))\n        self.negative_slope = negative_slope\n        self.scale = scale\n        self.device = device\n\n    def forward(self, input):\n        return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale, self.device)\n\n\ndef fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5, device='cpu'):\n    if platform.system() == 'Linux' and torch.cuda.is_available() and device != 'cpu':\n        return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)\n    else:\n        return scale * F.leaky_relu(input + bias.view((1, -1)+(1,)*(len(input.shape)-2)), negative_slope=negative_slope)\n"
  },
  {
    "path": "third_part/GPEN/face_model/op/fused_bias_act.cpp",
    "content": "#include <torch/extension.h>\n\n\ntorch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer,\n    int act, int grad, float alpha, float scale);\n\n#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x \" must be a CUDA tensor\")\n#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x \" must be contiguous\")\n#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)\n\ntorch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer,\n    int act, int grad, float alpha, float scale) {\n    CHECK_CUDA(input);\n    CHECK_CUDA(bias);\n\n    return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale);\n}\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n    m.def(\"fused_bias_act\", &fused_bias_act, \"fused bias act (CUDA)\");\n}"
  },
  {
    "path": "third_part/GPEN/face_model/op/fused_bias_act_kernel.cu",
    "content": "// Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n//\n// This work is made available under the Nvidia Source Code License-NC.\n// To view a copy of this license, visit\n// https://nvlabs.github.io/stylegan2/license.html\n\n#include <torch/types.h>\n\n#include <ATen/ATen.h>\n#include <ATen/AccumulateType.h>\n#include <ATen/cuda/CUDAContext.h>\n#include <ATen/cuda/CUDAApplyUtils.cuh>\n\n#include <cuda.h>\n#include <cuda_runtime.h>\n\n\ntemplate <typename scalar_t>\nstatic __global__ void fused_bias_act_kernel(scalar_t* out, const scalar_t* p_x, const scalar_t* p_b, const scalar_t* p_ref,\n    int act, int grad, scalar_t alpha, scalar_t scale, int loop_x, int size_x, int step_b, int size_b, int use_bias, int use_ref) {\n    int xi = blockIdx.x * loop_x * blockDim.x + threadIdx.x;\n\n    scalar_t zero = 0.0;\n\n    for (int loop_idx = 0; loop_idx < loop_x && xi < size_x; loop_idx++, xi += blockDim.x) {\n        scalar_t x = p_x[xi];\n\n        if (use_bias) {\n            x += p_b[(xi / step_b) % size_b];\n        }\n\n        scalar_t ref = use_ref ? p_ref[xi] : zero;\n\n        scalar_t y;\n\n        switch (act * 10 + grad) {\n            default:\n            case 10: y = x; break;\n            case 11: y = x; break;\n            case 12: y = 0.0; break;\n\n            case 30: y = (x > 0.0) ? x : x * alpha; break;\n            case 31: y = (ref > 0.0) ? x : x * alpha; break;\n            case 32: y = 0.0; break;\n        }\n\n        out[xi] = y * scale;\n    }\n}\n\n\ntorch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer,\n    int act, int grad, float alpha, float scale) {\n    int curDevice = -1;\n    cudaGetDevice(&curDevice);\n    cudaStream_t stream = at::cuda::getCurrentCUDAStream(curDevice);\n\n    auto x = input.contiguous();\n    auto b = bias.contiguous();\n    auto ref = refer.contiguous();\n\n    int use_bias = b.numel() ? 1 : 0;\n    int use_ref = ref.numel() ? 
1 : 0;\n\n    int size_x = x.numel();\n    int size_b = b.numel();\n    int step_b = 1;\n\n    for (int i = 1 + 1; i < x.dim(); i++) {\n        step_b *= x.size(i);\n    }\n\n    int loop_x = 4;\n    int block_size = 4 * 32;\n    int grid_size = (size_x - 1) / (loop_x * block_size) + 1;\n\n    auto y = torch::empty_like(x);\n\n    AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), \"fused_bias_act_kernel\", [&] {\n        fused_bias_act_kernel<scalar_t><<<grid_size, block_size, 0, stream>>>(\n            y.data_ptr<scalar_t>(),\n            x.data_ptr<scalar_t>(),\n            b.data_ptr<scalar_t>(),\n            ref.data_ptr<scalar_t>(),\n            act,\n            grad,\n            alpha,\n            scale,\n            loop_x,\n            size_x,\n            step_b,\n            size_b,\n            use_bias,\n            use_ref\n        );\n    });\n\n    return y;\n}"
  },
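The `switch (act * 10 + grad)` in `fused_bias_act_kernel` above packs the activation id and the gradient order into one case label: `1x` is linear, `3x` is leaky ReLU, and the second-order cases (`12`, `32`) are zero because both activations are piecewise linear. A minimal NumPy sketch of that per-element semantics, assuming a 4-D NCHW input with the bias broadcast along dim 1 (`fused_bias_act_ref` is an illustrative name, not part of the extension):

```python
import numpy as np

def fused_bias_act_ref(x, b=None, ref=None, act=3, grad=0, alpha=0.2, scale=2 ** 0.5):
    """Per-element sketch of fused_bias_act_kernel for NCHW input.

    act=1 is linear, act=3 is leaky ReLU; grad=0 evaluates the activation,
    grad=1 evaluates its first derivative at the saved forward value ``ref``.
    """
    y = x + (b.reshape(1, -1, 1, 1) if b is not None else 0)  # bias along dim 1
    r = ref if ref is not None else np.zeros_like(y)
    key = act * 10 + grad
    if key in (10, 11):                        # linear: identity in both orders
        out = y
    elif key == 30:                            # leaky ReLU, forward
        out = np.where(y > 0, y, y * alpha)
    elif key == 31:                            # leaky ReLU gradient, gated by ref
        out = np.where(r > 0, y, y * alpha)
    else:                                      # cases 12 / 32: 2nd-order grads are 0
        out = np.zeros_like(y)
    return out * scale
```

The `grad=1` path mirrors case `31`: the upstream gradient passes through where the saved reference is positive and is scaled by `alpha` elsewhere.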
  {
    "path": "third_part/GPEN/face_model/op/upfirdn2d.cpp",
    "content": "#include <torch/extension.h>\n\n\ntorch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,\n                            int up_x, int up_y, int down_x, int down_y,\n                            int pad_x0, int pad_x1, int pad_y0, int pad_y1);\n\n#define CHECK_CUDA(x) TORCH_CHECK(x.is_cuda(), #x \" must be a CUDA tensor\")\n#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x \" must be contiguous\")\n#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)\n\ntorch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel,\n                        int up_x, int up_y, int down_x, int down_y,\n                        int pad_x0, int pad_x1, int pad_y0, int pad_y1) {\n    CHECK_CUDA(input);\n    CHECK_CUDA(kernel);\n\n    return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1);\n}\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n    m.def(\"upfirdn2d\", &upfirdn2d, \"upfirdn2d (CUDA)\");\n}"
  },
  {
    "path": "third_part/GPEN/face_model/op/upfirdn2d.py",
    "content": "import os\nimport platform\n\nimport torch\nimport torch.nn.functional as F\nfrom torch.autograd import Function\nfrom torch.utils.cpp_extension import load\n\n# The CUDA extension is compiled only on Linux when CUDA is available;\n# all other setups fall back to the pure-PyTorch upfirdn2d_native below.\nif platform.system() == 'Linux' and torch.cuda.is_available():\n    module_path = os.path.dirname(__file__)\n    upfirdn2d_op = load(\n        'upfirdn2d',\n        sources=[\n            os.path.join(module_path, 'upfirdn2d.cpp'),\n            os.path.join(module_path, 'upfirdn2d_kernel.cu'),\n        ],\n    )\n\n\nclass UpFirDn2dBackward(Function):\n    @staticmethod\n    def forward(\n        ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size\n    ):\n\n        up_x, up_y = up\n        down_x, down_y = down\n        g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad\n\n        grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1)\n\n        grad_input = upfirdn2d_op.upfirdn2d(\n            grad_output,\n            grad_kernel,\n            down_x,\n            down_y,\n            up_x,\n            up_y,\n            g_pad_x0,\n            g_pad_x1,\n            g_pad_y0,\n            g_pad_y1,\n        )\n        grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3])\n\n        ctx.save_for_backward(kernel)\n\n        pad_x0, pad_x1, pad_y0, pad_y1 = pad\n\n        ctx.up_x = up_x\n        ctx.up_y = up_y\n        ctx.down_x = down_x\n        ctx.down_y = down_y\n        ctx.pad_x0 = pad_x0\n        ctx.pad_x1 = pad_x1\n        ctx.pad_y0 = pad_y0\n        ctx.pad_y1 = pad_y1\n        ctx.in_size = in_size\n        ctx.out_size = out_size\n\n        return grad_input\n\n    @staticmethod\n    def backward(ctx, gradgrad_input):\n        kernel, = ctx.saved_tensors\n\n        gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], 
ctx.in_size[3], 1)\n\n        gradgrad_out = upfirdn2d_op.upfirdn2d(\n            gradgrad_input,\n            kernel,\n            ctx.up_x,\n            ctx.up_y,\n            ctx.down_x,\n            ctx.down_y,\n            ctx.pad_x0,\n            ctx.pad_x1,\n            ctx.pad_y0,\n            ctx.pad_y1,\n        )\n        # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3])\n        gradgrad_out = gradgrad_out.view(\n            ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1]\n        )\n\n        return gradgrad_out, None, None, None, None, None, None, None, None\n\n\nclass UpFirDn2d(Function):\n    @staticmethod\n    def forward(ctx, input, kernel, up, down, pad):\n        up_x, up_y = up\n        down_x, down_y = down\n        pad_x0, pad_x1, pad_y0, pad_y1 = pad\n\n        kernel_h, kernel_w = kernel.shape\n        batch, channel, in_h, in_w = input.shape\n        ctx.in_size = input.shape\n\n        input = input.reshape(-1, in_h, in_w, 1)\n\n        ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1]))\n\n        out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1\n        out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1\n        ctx.out_size = (out_h, out_w)\n\n        ctx.up = (up_x, up_y)\n        ctx.down = (down_x, down_y)\n        ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1)\n\n        g_pad_x0 = kernel_w - pad_x0 - 1\n        g_pad_y0 = kernel_h - pad_y0 - 1\n        g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1\n        g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1\n\n        ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1)\n\n        out = upfirdn2d_op.upfirdn2d(\n            input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1\n        )\n        # out = out.view(major, out_h, out_w, minor)\n        out = out.view(-1, channel, out_h, out_w)\n\n        return out\n\n    @staticmethod\n    def 
backward(ctx, grad_output):\n        kernel, grad_kernel = ctx.saved_tensors\n\n        grad_input = UpFirDn2dBackward.apply(\n            grad_output,\n            kernel,\n            grad_kernel,\n            ctx.up,\n            ctx.down,\n            ctx.pad,\n            ctx.g_pad,\n            ctx.in_size,\n            ctx.out_size,\n        )\n\n        return grad_input, None, None, None, None\n\n\ndef upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0), device='cpu'):\n    if platform.system() == 'Linux' and torch.cuda.is_available() and device != 'cpu':\n        out = UpFirDn2d.apply(\n            input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1])\n        )\n    else:\n        out = upfirdn2d_native(input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1])\n\n    return out\n\n\ndef upfirdn2d_native(\n    input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1\n):\n    input = input.permute(0, 2, 3, 1)\n    _, in_h, in_w, minor = input.shape\n    kernel_h, kernel_w = kernel.shape\n    out = input.view(-1, in_h, 1, in_w, 1, minor)\n    out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])\n    out = out.view(-1, in_h * up_y, in_w * up_x, minor)\n\n    out = F.pad(\n        out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]\n    )\n    out = out[\n        :,\n        max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0),\n        max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0),\n        :,\n    ]\n\n    out = out.permute(0, 3, 1, 2)\n    out = out.reshape(\n        [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]\n    )\n    w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)\n    out = F.conv2d(out, w)\n    out = out.reshape(\n        -1,\n        minor,\n        in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,\n        in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,\n    )\n    # out = out.permute(0, 2, 3, 1)\n    return out[:, :, ::down_y, ::down_x]\n\n"
  },
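`upfirdn2d_native` above chains four steps: zero-stuffed upsampling, zero padding, a single `F.conv2d` with the flipped kernel (i.e. true convolution), and strided slicing for the downsample. The same pipeline can be sketched for one 2-D channel in plain NumPy (a reference sketch only; it assumes the wrapper's symmetric per-axis `up`/`down`/`pad` and non-negative padding, and `upfirdn2d_ref` is not part of the repo):

```python
import numpy as np

def upfirdn2d_ref(img, kernel, up=1, down=1, pad=(0, 0)):
    """Reference upfirdn for one 2-D channel (non-negative pads only)."""
    h, w = img.shape
    kh, kw = kernel.shape
    p0, p1 = pad
    # 1) upsample: insert up-1 zeros between neighbouring samples
    up_img = np.zeros((h * up, w * up), img.dtype)
    up_img[::up, ::up] = img
    # 2) zero-pad both axes
    padded = np.pad(up_img, ((p0, p1), (p0, p1)))
    # 3) FIR filter: correlate with the flipped kernel, i.e. true convolution
    k = kernel[::-1, ::-1]
    oh = padded.shape[0] - kh + 1
    ow = padded.shape[1] - kw + 1
    out = np.empty((oh, ow), img.dtype)
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * k)
    # 4) downsample by striding
    return out[::down, ::down]
```

With `up=down=1`, `pad=(0, 0)` and a `[[1.0]]` kernel this reduces to the identity, which is a handy sanity check against the output-size formula in the autograd `forward`.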
  {
    "path": "third_part/GPEN/face_model/op/upfirdn2d_kernel.cu",
    "content": "// Copyright (c) 2019, NVIDIA Corporation. All rights reserved.\n//\n// This work is made available under the Nvidia Source Code License-NC.\n// To view a copy of this license, visit\n// https://nvlabs.github.io/stylegan2/license.html\n\n#include <torch/types.h>\n\n#include <ATen/ATen.h>\n#include <ATen/AccumulateType.h>\n#include <ATen/cuda/CUDAContext.h>\n#include <ATen/cuda/CUDAApplyUtils.cuh>\n\n#include <cuda.h>\n#include <cuda_runtime.h>\n\n\nstatic __host__ __device__ __forceinline__ int floor_div(int a, int b) {\n    int c = a / b;\n\n    if (c * b > a) {\n        c--;\n    }\n\n    return c;\n}\n\n\nstruct UpFirDn2DKernelParams {\n    int up_x;\n    int up_y;\n    int down_x;\n    int down_y;\n    int pad_x0;\n    int pad_x1;\n    int pad_y0;\n    int pad_y1;\n\n    int major_dim;\n    int in_h;\n    int in_w;\n    int minor_dim;\n    int kernel_h;\n    int kernel_w;\n    int out_h;\n    int out_w;\n    int loop_major;\n    int loop_x;\n};\n\n\ntemplate <typename scalar_t, int up_x, int up_y, int down_x, int down_y, int kernel_h, int kernel_w, int tile_out_h, int tile_out_w>\n__global__ void upfirdn2d_kernel(scalar_t* out, const scalar_t* input, const scalar_t* kernel, const UpFirDn2DKernelParams p) {\n    const int tile_in_h = ((tile_out_h - 1) * down_y + kernel_h - 1) / up_y + 1;\n    const int tile_in_w = ((tile_out_w - 1) * down_x + kernel_w - 1) / up_x + 1;\n\n    __shared__ volatile float sk[kernel_h][kernel_w];\n    __shared__ volatile float sx[tile_in_h][tile_in_w];\n\n    int minor_idx = blockIdx.x;\n    int tile_out_y = minor_idx / p.minor_dim;\n    minor_idx -= tile_out_y * p.minor_dim;\n    tile_out_y *= tile_out_h;\n    int tile_out_x_base = blockIdx.y * p.loop_x * tile_out_w;\n    int major_idx_base = blockIdx.z * p.loop_major;\n\n    if (tile_out_x_base >= p.out_w | tile_out_y >= p.out_h | major_idx_base >= p.major_dim) {\n        return;\n    }\n\n    for (int tap_idx = threadIdx.x; tap_idx < kernel_h * kernel_w; tap_idx += 
blockDim.x) {\n        int ky = tap_idx / kernel_w;\n        int kx = tap_idx - ky * kernel_w;\n        scalar_t v = 0.0;\n\n        if (kx < p.kernel_w & ky < p.kernel_h) {\n            v = kernel[(p.kernel_h - 1 - ky) * p.kernel_w + (p.kernel_w - 1 - kx)];\n        }\n\n        sk[ky][kx] = v;\n    }\n\n    for (int loop_major = 0, major_idx = major_idx_base; loop_major < p.loop_major & major_idx < p.major_dim; loop_major++, major_idx++) {\n        for (int loop_x = 0, tile_out_x = tile_out_x_base; loop_x < p.loop_x & tile_out_x < p.out_w; loop_x++, tile_out_x += tile_out_w) {\n            int tile_mid_x = tile_out_x * down_x + up_x - 1 - p.pad_x0;\n            int tile_mid_y = tile_out_y * down_y + up_y - 1 - p.pad_y0;\n            int tile_in_x = floor_div(tile_mid_x, up_x);\n            int tile_in_y = floor_div(tile_mid_y, up_y);\n\n            __syncthreads();\n\n            for (int in_idx = threadIdx.x; in_idx < tile_in_h * tile_in_w; in_idx += blockDim.x) {\n                int rel_in_y = in_idx / tile_in_w;\n                int rel_in_x = in_idx - rel_in_y * tile_in_w;\n                int in_x = rel_in_x + tile_in_x;\n                int in_y = rel_in_y + tile_in_y;\n\n                scalar_t v = 0.0;\n\n                if (in_x >= 0 & in_y >= 0 & in_x < p.in_w & in_y < p.in_h) {\n                    v = input[((major_idx * p.in_h + in_y) * p.in_w + in_x) * p.minor_dim + minor_idx];\n                }\n\n                sx[rel_in_y][rel_in_x] = v;\n            }\n\n            __syncthreads();\n            for (int out_idx = threadIdx.x; out_idx < tile_out_h * tile_out_w; out_idx += blockDim.x) {\n                int rel_out_y = out_idx / tile_out_w;\n                int rel_out_x = out_idx - rel_out_y * tile_out_w;\n                int out_x = rel_out_x + tile_out_x;\n                int out_y = rel_out_y + tile_out_y;\n\n                int mid_x = tile_mid_x + rel_out_x * down_x;\n                int mid_y = tile_mid_y + rel_out_y * down_y;\n        
        int in_x = floor_div(mid_x, up_x);\n                int in_y = floor_div(mid_y, up_y);\n                int rel_in_x = in_x - tile_in_x;\n                int rel_in_y = in_y - tile_in_y;\n                int kernel_x = (in_x + 1) * up_x - mid_x - 1;\n                int kernel_y = (in_y + 1) * up_y - mid_y - 1;\n\n                scalar_t v = 0.0;\n\n                #pragma unroll\n                for (int y = 0; y < kernel_h / up_y; y++)\n                    #pragma unroll\n                    for (int x = 0; x < kernel_w / up_x; x++)\n                        v += sx[rel_in_y + y][rel_in_x + x] * sk[kernel_y + y * up_y][kernel_x + x * up_x];\n\n                if (out_x < p.out_w & out_y < p.out_h) {\n                    out[((major_idx * p.out_h + out_y) * p.out_w + out_x) * p.minor_dim + minor_idx] = v;\n                }\n            }\n        }\n    }\n}\n\n\ntorch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,\n    int up_x, int up_y, int down_x, int down_y,\n    int pad_x0, int pad_x1, int pad_y0, int pad_y1) {\n    int curDevice = -1;\n    cudaGetDevice(&curDevice);\n    cudaStream_t stream = at::cuda::getCurrentCUDAStream(curDevice);\n\n    UpFirDn2DKernelParams p;\n\n    auto x = input.contiguous();\n    auto k = kernel.contiguous();\n\n    p.major_dim = x.size(0);\n    p.in_h = x.size(1);\n    p.in_w = x.size(2);\n    p.minor_dim = x.size(3);\n    p.kernel_h = k.size(0);\n    p.kernel_w = k.size(1);\n    p.up_x = up_x;\n    p.up_y = up_y;\n    p.down_x = down_x;\n    p.down_y = down_y;\n    p.pad_x0 = pad_x0;\n    p.pad_x1 = pad_x1;\n    p.pad_y0 = pad_y0;\n    p.pad_y1 = pad_y1;\n\n    p.out_h = (p.in_h * p.up_y + p.pad_y0 + p.pad_y1 - p.kernel_h + p.down_y) / p.down_y;\n    p.out_w = (p.in_w * p.up_x + p.pad_x0 + p.pad_x1 - p.kernel_w + p.down_x) / p.down_x;\n\n    auto out = at::empty({p.major_dim, p.out_h, p.out_w, p.minor_dim}, x.options());\n\n    int mode = -1;\n\n    int tile_out_h;\n    int tile_out_w;\n\n 
   if (p.up_x == 1 && p.up_y == 1 && p.down_x == 1 && p.down_y == 1 && p.kernel_h <= 4 && p.kernel_w <= 4) {\n        mode = 1;\n        tile_out_h = 16;\n        tile_out_w = 64;\n    }\n\n    if (p.up_x == 1 && p.up_y == 1 && p.down_x == 1 && p.down_y == 1 && p.kernel_h <= 3 && p.kernel_w <= 3) {\n        mode = 2;\n        tile_out_h = 16;\n        tile_out_w = 64;\n    }\n\n    if (p.up_x == 2 && p.up_y == 2 && p.down_x == 1 && p.down_y == 1 && p.kernel_h <= 4 && p.kernel_w <= 4) {\n        mode = 3;\n        tile_out_h = 16;\n        tile_out_w = 64;\n    }\n\n    if (p.up_x == 2 && p.up_y == 2 && p.down_x == 1 && p.down_y == 1 && p.kernel_h <= 2 && p.kernel_w <= 2) {\n        mode = 4;\n        tile_out_h = 16;\n        tile_out_w = 64;\n    }\n\n    if (p.up_x == 1 && p.up_y == 1 && p.down_x == 2 && p.down_y == 2 && p.kernel_h <= 4 && p.kernel_w <= 4) {\n        mode = 5;\n        tile_out_h = 8;\n        tile_out_w = 32;\n    }\n\n    if (p.up_x == 1 && p.up_y == 1 && p.down_x == 2 && p.down_y == 2 && p.kernel_h <= 2 && p.kernel_w <= 2) {\n        mode = 6;\n        tile_out_h = 8;\n        tile_out_w = 32;\n    }\n\n    dim3 block_size;\n    dim3 grid_size;\n\n    if (mode > 0) {  // tile sizes are only set when a specialization was selected above\n        p.loop_major = (p.major_dim - 1) / 16384 + 1;\n        p.loop_x = 1;\n        block_size = dim3(32 * 8, 1, 1);\n        grid_size = dim3(((p.out_h - 1) / tile_out_h + 1) * p.minor_dim,\n                         (p.out_w - 1) / (p.loop_x * tile_out_w) + 1,\n                         (p.major_dim - 1) / p.loop_major + 1);\n    }\n\n    AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), \"upfirdn2d_cuda\", [&] {\n        switch (mode) {\n        case 1:\n            upfirdn2d_kernel<scalar_t, 1, 1, 1, 1, 4, 4, 16, 64><<<grid_size, block_size, 0, stream>>>(\n                out.data_ptr<scalar_t>(), x.data_ptr<scalar_t>(), k.data_ptr<scalar_t>(), p\n            );\n\n            break;\n\n        case 2:\n            upfirdn2d_kernel<scalar_t, 1, 1, 1, 1, 
3, 3, 16, 64><<<grid_size, block_size, 0, stream>>>(\n                out.data_ptr<scalar_t>(), x.data_ptr<scalar_t>(), k.data_ptr<scalar_t>(), p\n            );\n\n            break;\n\n        case 3:\n            upfirdn2d_kernel<scalar_t, 2, 2, 1, 1, 4, 4, 16, 64><<<grid_size, block_size, 0, stream>>>(\n                out.data_ptr<scalar_t>(), x.data_ptr<scalar_t>(), k.data_ptr<scalar_t>(), p\n            );\n\n            break;\n\n        case 4:\n            upfirdn2d_kernel<scalar_t, 2, 2, 1, 1, 2, 2, 16, 64><<<grid_size, block_size, 0, stream>>>(\n                out.data_ptr<scalar_t>(), x.data_ptr<scalar_t>(), k.data_ptr<scalar_t>(), p\n            );\n\n            break;\n\n        case 5:\n            upfirdn2d_kernel<scalar_t, 1, 1, 2, 2, 4, 4, 8, 32><<<grid_size, block_size, 0, stream>>>(\n                out.data_ptr<scalar_t>(), x.data_ptr<scalar_t>(), k.data_ptr<scalar_t>(), p\n            );\n\n            break;\n\n        case 6:\n            upfirdn2d_kernel<scalar_t, 1, 1, 2, 2, 4, 4, 8, 32><<<grid_size, block_size, 0, stream>>>(\n                out.data_ptr<scalar_t>(), x.data_ptr<scalar_t>(), k.data_ptr<scalar_t>(), p\n            );\n\n            break;\n        }\n    });\n\n    return out;\n}"
  },
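`upfirdn2d_op` above sizes its output as `(in * up + pad0 + pad1 - kernel + down) / down`, which for integers is the same value as the `(in * up + pad0 + pad1 - kernel) // down + 1` form used in the Python wrapper's `UpFirDn2d.forward`, since `(a + d) // d == a // d + 1` for any integer `a`. A quick sketch of that identity (the helper name is illustrative, not part of the repo):

```python
def upfirdn_out_size(n_in, up, down, pad0, pad1, kernel):
    # Output length along one axis, written as in upfirdn2d_op (CUDA host code):
    # add-then-floor-divide instead of floor-divide-then-add-one.
    return (n_in * up + pad0 + pad1 - kernel + down) // down
```

Both forms agree for every supported `up`/`down`/`kernel` combination, so the host code and the autograd wrapper allocate identically shaped tensors.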
  {
    "path": "third_part/GPEN/face_morpher/.gitignore",
    "content": "*.pyc\n*.swp\nMANIFEST\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/README.rst",
    "content": "Face Morpher\n============\n\n| Warp, average and morph human faces!\n| Scripts will automatically detect frontal faces and skip images if\n  none is detected.\n\nBuilt with Python, `OpenCV`_, Numpy, Scipy, dlib.\n\n| Supported on Python 2.7, Python 3.6+\n| Tested on macOS Mojave and 64bit Linux (dockerized).\n\nRequirements\n--------------\n-  ``pip install -r requirements.txt``\n- Download `http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2` and extract the file.\n- Export environment variable ``DLIB_DATA_DIR`` to the folder where ``shape_predictor_68_face_landmarks.dat`` is located. Default ``data``. E.g. ``export DLIB_DATA_DIR=/Downloads/data``\n\nEither:\n\n-  `Use as local command-line utility`_\n-  `Use as pip library`_\n-  `Try out in a docker container`_\n\n.. _`Use as local command-line utility`:\n\nUse as local command-line utility\n---------------------------------\n::\n\n    $ git clone https://github.com/alyssaq/face_morpher\n\nMorphing Faces\n--------------\n\nMorph from a source to destination image:\n\n::\n\n    python facemorpher/morpher.py --src=<src_imgpath> --dest=<dest_imgpath> --plot\n\nMorph through a series of images in a folder:\n\n::\n\n    python facemorpher/morpher.py --images=<folder> --out_video=out.avi\n\nAll options listed in ``morpher.py`` (pasted below):\n\n::\n\n    Morph from source to destination face or\n    Morph through all images in a folder\n\n    Usage:\n        morpher.py (--src=<src_path> --dest=<dest_path> | --images=<folder>)\n                [--width=<width>] [--height=<height>]\n                [--num=<num_frames>] [--fps=<frames_per_second>]\n                [--out_frames=<folder>] [--out_video=<filename>]\n                [--plot] [--background=(black|transparent|average)]\n\n    Options:\n        -h, --help              Show this screen.\n        --src=<src_imgpath>     Filepath to source image (.jpg, .jpeg, .png)\n        --dest=<dest_imgpath>   Filepath to destination image (.jpg, .jpeg, 
.png)\n        --images=<folder>       Folderpath to images\n        --width=<width>         Custom width of the images/video [default: 500]\n        --height=<height>       Custom height of the images/video [default: 600]\n        --num=<num_frames>      Number of morph frames [default: 20]\n        --fps=<fps>             Number frames per second for the video [default: 10]\n        --out_frames=<folder>   Folder path to save all image frames\n        --out_video=<filename>  Filename to save a video\n        --plot                  Flag to plot images to result.png [default: False]\n        --background=<bg>       Background of images to be one of (black|transparent|average) [default: black]\n        --version               Show version.\n\nAveraging Faces\n---------------\n\nAverage faces from all images in a folder:\n\n::\n\n    python facemorpher/averager.py --images=<images_folder> --out=average.png\n\nAll options listed in ``averager.py`` (pasted below):\n\n::\n\n    Face averager\n\n    Usage:\n        averager.py --images=<images_folder> [--blur] [--plot]\n                [--background=(black|transparent|average)]\n                [--width=<width>] [--height=<height>]\n                [--out=<filename>] [--destimg=<filename>]\n\n    Options:\n        -h, --help             Show this screen.\n        --images=<folder>      Folder to images (.jpg, .jpeg, .png)\n        --blur                 Flag to blur edges of image [default: False]\n        --width=<width>        Custom width of the images/video [default: 500]\n        --height=<height>      Custom height of the images/video [default: 600]\n        --out=<filename>       Filename to save the average face [default: result.png]\n        --destimg=<filename>   Destination face image to overlay average face\n        --plot                 Flag to display the average face [default: False]\n        --background=<bg>      Background of image to be one of (black|transparent|average) [default: black]\n        
--version              Show version.\n\nSteps (facemorpher folder)\n--------------------------\n\n1. Locator\n^^^^^^^^^^\n\n-  Locates face points\n-  For a different locator, return an array of (x, y) control face\n   points\n\n2. Aligner\n^^^^^^^^^^\n\n-  Align faces by resizing, centering and cropping to given size\n\n3. Warper\n^^^^^^^^^\n\n-  Given 2 images and its face points, warp one image to the other\n-  Triangulates face points\n-  Affine transforms each triangle with bilinear interpolation\n\n4a. Morpher\n^^^^^^^^^^^\n\n-  Morph between 2 or more images\n\n4b. Averager\n^^^^^^^^^^^^\n\n-  Average faces from 2 or more images\n\nBlender\n^^^^^^^\n\nOptional blending of warped image:\n\n-  Weighted average\n-  Alpha feathering\n-  Poisson blend\n\nExamples - `Being John Malkovich`_\n----------------------------------\n\nCreate a morphing video between the 2 images:\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n| ``> python facemorpher/morpher.py --src=alyssa.jpg --dest=john_malkovich.jpg``\n| ``--out_video=out.avi``\n\n(out.avi played and recorded as gif)\n\n.. figure:: https://raw.github.com/alyssaq/face_morpher/master/examples/being_john_malvokich.gif\n   :alt: gif\n\nSave the frames to a folder:\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n| ``> python facemorpher/morpher.py --src=alyssa.jpg --dest=john_malkovich.jpg``\n| ``--out_frames=out_folder --num=30``\n\nPlot the frames:\n^^^^^^^^^^^^^^^^\n\n| ``> python facemorpher/morpher.py --src=alyssa.jpg --dest=john_malkovich.jpg``\n| ``--num=12 --plot``\n\n.. figure:: https://raw.github.com/alyssaq/face_morpher/master/examples/plot.png\n   :alt: plot\n\nAverage all face images in a folder:\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n85 images used\n\n| ``> python facemorpher/averager.py --images=images --blur --background=transparent``\n| ``--width=220 --height=250``\n\n.. figure:: https://raw.github.com/alyssaq/face_morpher/master/examples/average_faces.png\n   :alt: average\\_faces\n\n.. 
_`Use as pip library`:\n\nUse as pip library\n---------------------------------\n::\n\n    $ pip install facemorpher\n\nExamples\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nAdditional options are exactly the same as the command line\n\n::\n\n    import facemorpher\n\n    # Get a list of image paths in a folder\n    imgpaths = facemorpher.list_imgpaths('imagefolder')\n\n    # To morph, supply an array of face images:\n    facemorpher.morpher(imgpaths, plot=True)\n\n    # To average, supply an array of face images:\n    facemorpher.averager(['image1.png', 'image2.png'], plot=True)\n\n\nOnce pip installed, 2 binaries are also available as a command line utility:\n\n::\n\n    $ facemorpher --src=<src_imgpath> --dest=<dest_imgpath> --plot\n    $ faceaverager --images=<images_folder> --plot\n\nTry out in a docker container\n---------------------------------\nMount local folder to `/images` in docker container, run it and enter a bash session.\n--rm removes the container when you close it.\n::\n\n    $ docker run -v  /Users/alyssa/Desktop/images:/images --name py3 --rm -it jjanzic/docker-python3-opencv bash\n\nOnce you're in the container, install ``facemorpher`` and try the examples listed above\n::\n\n    root@0dad0912ebbe:/# pip install facemorpher\n    root@0dad0912ebbe:/# facemorpher --src=<img1> --dest=<img2> --plot\n\nDocumentation\n-------------\n\nhttp://alyssaq.github.io/face_morpher\n\nBuild & publish Docs\n^^^^^^^^^^^^^^^^^^^^\n\n::\n\n    ./scripts/publish_ghpages.sh\n\nLicense\n-------\n`MIT`_\n\n.. _Being John Malkovich: http://www.rottentomatoes.com/m/being_john_malkovich\n.. _Mac installation steps: https://gist.github.com/alyssaq/f60393545173379e0f3f#file-4-opencv3-with-python3-md\n.. _MIT: http://alyssaq.github.io/mit-license\n.. _OpenCV: http://opencv.org\n.. _Homebrew: https://brew.sh\n.. _source: https://github.com/opencv/opencv\n.. _dlib: http://dlib.net\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/facemorpher/__init__.py",
    "content": "\"\"\"\nFace Morpher module init code\n\"\"\"\nfrom .morpher import morpher, list_imgpaths\nfrom .averager import averager\n\n__all__ = ['list_imgpaths',\n           'morpher',\n           'averager']\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/facemorpher/aligner.py",
    "content": "\"\"\"\nAlign face and image sizes\n\"\"\"\nimport cv2\nimport numpy as np\n\ndef positive_cap(num):\n  \"\"\" Cap a number to ensure positivity\n\n  :param num: positive or negative number\n  :returns: (capped_number, overflow)\n  \"\"\"\n  if num < 0:\n    return 0, abs(num)\n  else:\n    return num, 0\n\ndef roi_coordinates(rect, size, scale):\n  \"\"\" Align the rectangle into the center and return the top-left coordinates\n  within the new size. If rect is smaller, we add borders.\n\n  :param rect: (x, y, w, h) bounding rectangle of the face\n  :param size: (height, width) are the desired dimensions\n  :param scale: scaling factor of the rectangle to be resized\n  :returns: 4 numbers. Top-left coordinates of the aligned ROI.\n    (x, y, border_x, border_y). All values are >= 0.\n  \"\"\"\n  rectx, recty, rectw, recth = rect\n  new_height, new_width = size\n  mid_x = int((rectx + rectw/2) * scale)\n  mid_y = int((recty + recth/2) * scale)\n  roi_x = mid_x - int(new_width/2)\n  roi_y = mid_y - int(new_height/2)\n\n  roi_x, border_x = positive_cap(roi_x)\n  roi_y, border_y = positive_cap(roi_y)\n  return roi_x, roi_y, border_x, border_y\n\ndef scaling_factor(rect, size):\n  \"\"\" Calculate the scaling factor for the current image to be\n      resized to the new dimensions\n\n  :param rect: (x, y, w, h) bounding rectangle of the face\n  :param size: (height, width) are the desired dimensions\n  :returns: floating point scaling factor\n  \"\"\"\n  new_height, new_width = size\n  rect_w, rect_h = rect[2:]  # cv2.boundingRect gives (x, y, w, h)\n  height_ratio = rect_h / new_height\n  width_ratio = rect_w / new_width\n  scale = 1\n  if height_ratio > width_ratio:\n    new_recth = 0.8 * new_height\n    scale = new_recth / rect_h\n  else:\n    new_rectw = 0.8 * new_width\n    scale = new_rectw / rect_w\n  return scale\n\ndef resize_image(img, scale):\n  \"\"\" Resize image with the provided scaling factor\n\n  :param img: image to be resized\n  :param scale: scaling factor for resizing the 
image\n  \"\"\"\n  cur_height, cur_width = img.shape[:2]\n  new_scaled_height = int(scale * cur_height)\n  new_scaled_width = int(scale * cur_width)\n\n  return cv2.resize(img, (new_scaled_width, new_scaled_height))\n\ndef resize_align(img, points, size):\n  \"\"\" Resize image and associated points, align face to the center\n    and crop to the desired size\n\n  :param img: image to be resized\n  :param points: *m* x 2 array of points\n  :param size: (height, width) tuple of new desired size\n  \"\"\"\n  new_height, new_width = size\n\n  # Resize image based on bounding rectangle\n  rect = cv2.boundingRect(np.array([points], np.int32))\n  scale = scaling_factor(rect, size)\n  img = resize_image(img, scale)\n\n  # Align bounding rect to center\n  cur_height, cur_width = img.shape[:2]\n  roi_x, roi_y, border_x, border_y = roi_coordinates(rect, size, scale)\n  roi_h = np.min([new_height-border_y, cur_height-roi_y])\n  roi_w = np.min([new_width-border_x, cur_width-roi_x])\n\n  # Crop to supplied size\n  crop = np.zeros((new_height, new_width, 3), img.dtype)\n  crop[border_y:border_y+roi_h, border_x:border_x+roi_w] = (\n     img[roi_y:roi_y+roi_h, roi_x:roi_x+roi_w])\n\n  # Scale and align face points to the crop\n  points[:, 0] = (points[:, 0] * scale) + (border_x - roi_x)\n  points[:, 1] = (points[:, 1] * scale) + (border_y - roi_y)\n\n  return (crop, points)\n"
  },
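In `aligner.py` above, `roi_coordinates` and `positive_cap` together decide, per axis, whether the face-centered crop window starts inside the image or sticks out past the edge, in which case the overflow becomes a border offset inside the output crop. A hypothetical one-axis helper condensing that logic (`crop_placement` is an illustrative name, not part of the repo):

```python
def crop_placement(face_mid, crop_size):
    """One-axis placement of a crop window centered on face_mid.

    Returns (top_left, border): where the window starts in the source
    image, and how much border padding the output crop needs when the
    window would begin before pixel 0 (positive_cap's contract).
    """
    roi = face_mid - crop_size // 2
    if roi < 0:
        return 0, -roi  # clamp to the image edge; overflow becomes border
    return roi, 0
```

For example, a 100-pixel window centered at coordinate 30 starts at the edge with a 20-pixel border, while one centered at 300 fits entirely inside the image.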
  {
    "path": "third_part/GPEN/face_morpher/facemorpher/averager.py",
    "content": "\"\"\"\n::\n\n  Face averager\n\n  Usage:\n    averager.py --images=<images_folder> [--blur] [--plot]\n              [--background=(black|transparent|average)]\n              [--width=<width>] [--height=<height>]\n              [--out=<filename>] [--destimg=<filename>]\n\n  Options:\n    -h, --help             Show this screen.\n    --images=<folder>      Folder to images (.jpg, .jpeg, .png)\n    --blur                 Flag to blur edges of image [default: False]\n    --width=<width>        Custom width of the images/video [default: 500]\n    --height=<height>      Custom height of the images/video [default: 600]\n    --out=<filename>       Filename to save the average face [default: result.png]\n    --destimg=<filename>   Destination face image to overlay average face\n    --plot                 Flag to display the average face [default: False]\n    --background=<bg>      Background of image to be one of (black|transparent|average) [default: black]\n    --version              Show version.\n\"\"\"\n\nfrom docopt import docopt\nimport os\nimport cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\nfrom facemorpher import locator\nfrom facemorpher import aligner\nfrom facemorpher import warper\nfrom facemorpher import blender\nfrom facemorpher import plotter\n\ndef list_imgpaths(imgfolder):\n  for fname in os.listdir(imgfolder):\n    if (fname.lower().endswith('.jpg') or\n       fname.lower().endswith('.png') or\n       fname.lower().endswith('.jpeg')):\n      yield os.path.join(imgfolder, fname)\n\ndef sharpen(img):\n  blured = cv2.GaussianBlur(img, (0, 0), 2.5)\n  return cv2.addWeighted(img, 1.4, blured, -0.4, 0)\n\ndef load_image_points(path, size):\n  img = cv2.imread(path)\n  points = locator.face_points(img)\n\n  if len(points) == 0:\n    print('No face in %s' % path)\n    return None, None\n  else:\n    return aligner.resize_align(img, points, size)\n\ndef averager(imgpaths, dest_filename=None, 
width=500, height=600, background='black',\n             blur_edges=False, out_filename='result.png', plot=False):\n\n  size = (height, width)\n\n  images = []\n  point_set = []\n  for path in imgpaths:\n    img, points = load_image_points(path, size)\n    if img is not None:\n      images.append(img)\n      point_set.append(points)\n\n  if len(images) == 0:\n    raise FileNotFoundError('Could not find any valid images.' +\n                            ' Supported formats are .jpg, .png, .jpeg')\n\n  if dest_filename is not None:\n    dest_img, dest_points = load_image_points(dest_filename, size)\n    if dest_img is None or dest_points is None:\n      raise Exception('No face or detected face points in dest img: ' + dest_filename)\n  else:\n    dest_img = np.zeros(images[0].shape, np.uint8)\n    dest_points = locator.average_points(point_set)\n\n  num_images = len(images)\n  result_images = np.zeros(images[0].shape, np.float32)\n  for i in range(num_images):\n    result_images += warper.warp_image(images[i], point_set[i],\n                                       dest_points, size, np.float32)\n\n  result_image = np.uint8(result_images / num_images)\n  face_indexes = np.nonzero(result_image)\n  dest_img[face_indexes] = result_image[face_indexes]\n\n  mask = blender.mask_from_points(size, dest_points)\n  if blur_edges:\n    blur_radius = 10\n    mask = cv2.blur(mask, (blur_radius, blur_radius))\n\n  if background in ('transparent', 'average'):\n    dest_img = np.dstack((dest_img, mask))\n\n    if background == 'average':\n      average_background = locator.average_points(images)\n      dest_img = blender.overlay_image(dest_img, mask, average_background)\n\n  print('Averaged {} images'.format(num_images))\n  plt = plotter.Plotter(plot, num_images=1, out_filename=out_filename)\n  plt.save(dest_img)\n  plt.plot_one(dest_img)\n  plt.show()\n\ndef main():\n  args = docopt(__doc__, version='Face Averager 1.0')\n  try:\n    averager(list_imgpaths(args['--images']), 
args['--destimg'],\n             int(args['--width']), int(args['--height']),\n             args['--background'], args['--blur'], args['--out'], args['--plot'])\n  except Exception as e:\n    print(e)\n\n\nif __name__ == \"__main__\":\n  main()\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/facemorpher/blender.py",
    "content": "import cv2\nimport numpy as np\nimport scipy.sparse\n\ndef mask_from_points(size, points):\n  \"\"\" Create a mask of supplied size from supplied points\n  :param size: tuple of output mask size\n  :param points: array of [x, y] points\n  :returns: mask of values 0 and 255 where\n            255 indicates the convex hull containing the points\n  \"\"\"\n  radius = 10  # kernel size\n  kernel = np.ones((radius, radius), np.uint8)\n\n  mask = np.zeros(size, np.uint8)\n  cv2.fillConvexPoly(mask, cv2.convexHull(points), 255)\n  mask = cv2.erode(mask, kernel)\n\n  return mask\n\ndef overlay_image(foreground_image, mask, background_image):\n  \"\"\" Overlay foreground image onto the background given a mask\n  :param foreground_image: foreground image points\n  :param mask: [0-255] values in mask\n  :param background_image: background image points\n  :returns: image with foreground where mask > 0 overlaid on background image\n  \"\"\"\n  foreground_pixels = mask > 0\n  background_image[..., :3][foreground_pixels] = foreground_image[..., :3][foreground_pixels]\n  return background_image\n\ndef apply_mask(img, mask):\n  \"\"\" Apply mask to supplied image\n  :param img: max 3 channel image\n  :param mask: [0-255] values in mask\n  :returns: new image with mask applied\n  \"\"\"\n  masked_img = np.copy(img)\n  num_channels = 3\n  for c in range(num_channels):\n    masked_img[..., c] = img[..., c] * (mask / 255)\n\n  return masked_img\n\ndef weighted_average(img1, img2, percent=0.5):\n  if percent <= 0:\n    return img2\n  elif percent >= 1:\n    return img1\n  else:\n    return cv2.addWeighted(img1, percent, img2, 1-percent, 0)\n\ndef alpha_feathering(src_img, dest_img, img_mask, blur_radius=15):\n  mask = cv2.blur(img_mask, (blur_radius, blur_radius))\n  mask = mask / 255.0\n\n  result_img = np.empty(src_img.shape, np.uint8)\n  for i in range(3):\n    result_img[..., i] = src_img[..., i] * mask + dest_img[..., i] * (1-mask)\n\n  return result_img\n\ndef 
poisson_blend(img_source, dest_img, img_mask, offset=(0, 0)):\n  # http://opencv.jp/opencv2-x-samples/poisson-blending\n  img_target = np.copy(dest_img)\n  import pyamg\n  # compute regions to be blended\n  region_source = (\n    max(-offset[0], 0),\n    max(-offset[1], 0),\n    min(img_target.shape[0] - offset[0], img_source.shape[0]),\n    min(img_target.shape[1] - offset[1], img_source.shape[1]))\n  region_target = (\n    max(offset[0], 0),\n    max(offset[1], 0),\n    min(img_target.shape[0], img_source.shape[0] + offset[0]),\n    min(img_target.shape[1], img_source.shape[1] + offset[1]))\n  region_size = (region_source[2] - region_source[0],\n                 region_source[3] - region_source[1])\n\n  # clip and normalize mask image\n  img_mask = img_mask[region_source[0]:region_source[2],\n                      region_source[1]:region_source[3]]\n\n  # create coefficient matrix\n  coff_mat = scipy.sparse.identity(np.prod(region_size), format='lil')\n  for y in range(region_size[0]):\n    for x in range(region_size[1]):\n      if img_mask[y, x]:\n        index = x + y * region_size[1]\n        coff_mat[index, index] = 4\n        if index + 1 < np.prod(region_size):\n          coff_mat[index, index + 1] = -1\n        if index - 1 >= 0:\n          coff_mat[index, index - 1] = -1\n        if index + region_size[1] < np.prod(region_size):\n          coff_mat[index, index + region_size[1]] = -1\n        if index - region_size[1] >= 0:\n          coff_mat[index, index - region_size[1]] = -1\n  coff_mat = coff_mat.tocsr()\n\n  # create poisson matrix for b\n  poisson_mat = pyamg.gallery.poisson(img_mask.shape)\n  # for each layer (ex. 
RGB)\n  for num_layer in range(img_target.shape[2]):\n    # get subimages\n    t = img_target[region_target[0]:region_target[2],\n                   region_target[1]:region_target[3], num_layer]\n    s = img_source[region_source[0]:region_source[2],\n                   region_source[1]:region_source[3], num_layer]\n    t = t.flatten()\n    s = s.flatten()\n\n    # create b\n    b = poisson_mat * s\n    for y in range(region_size[0]):\n      for x in range(region_size[1]):\n        if not img_mask[y, x]:\n          index = x + y * region_size[1]\n          b[index] = t[index]\n\n    # solve Ax = b\n    x = pyamg.solve(coff_mat, b, verb=False, tol=1e-10)\n\n    # assign x to target image\n    x = np.reshape(x, region_size)\n    x[x > 255] = 255\n    x[x < 0] = 0\n    x = np.array(x, img_target.dtype)\n    img_target[region_target[0]:region_target[2],\n               region_target[1]:region_target[3], num_layer] = x\n\n  return img_target\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/facemorpher/locator.py",
    "content": "\"\"\"\nLocate face points\n\"\"\"\n\nimport cv2\nimport numpy as np\nimport os.path as path\nimport dlib\nimport os\n\n\nDATA_DIR = os.environ.get(\n  'DLIB_DATA_DIR',\n  path.join(path.dirname(path.dirname(path.realpath(__file__))), 'data')\n)\ndlib_detector = dlib.get_frontal_face_detector()\ndlib_predictor = dlib.shape_predictor(path.join(DATA_DIR, 'shape_predictor_68_face_landmarks.dat'))\n\ndef boundary_points(points, width_percent=0.1, height_percent=0.1):\n  \"\"\" Produce additional boundary points\n  :param points: *m* x 2 array of x,y points\n  :param width_percent: [-1, 1] percentage of width to taper inwards. Negative for opposite direction\n  :param height_percent: [-1, 1] percentage of height to taper downwards. Negative for opposite direction\n  :returns: 2 additional points at the top corners\n  \"\"\"\n  x, y, w, h = cv2.boundingRect(np.array([points], np.int32))\n  spacerw = int(w * width_percent)\n  spacerh = int(h * height_percent)\n  return [[x+spacerw, y+spacerh],\n          [x+w-spacerw, y+spacerh]]\n\n\ndef face_points(img, add_boundary_points=True):\n  return face_points_dlib(img, add_boundary_points)\n\ndef face_points_dlib(img, add_boundary_points=True):\n  \"\"\" Locates 68 face points using dlib (http://dlib.net)\n    Requires shape_predictor_68_face_landmarks.dat to be in face_morpher/data\n    Download at: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2\n  :param img: an image array\n  :param add_boundary_points: bool to add additional boundary points\n  :returns: Array of x,y face points. 
 Empty array if no face found\n  \"\"\"\n  try:\n    points = []\n    rgbimg = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n    rects = dlib_detector(rgbimg, 1)\n\n    if rects and len(rects) > 0:\n      # We only take the first found face\n      shapes = dlib_predictor(rgbimg, rects[0])\n      points = np.array([(shapes.part(i).x, shapes.part(i).y) for i in range(68)], np.int32)\n\n      if add_boundary_points:\n        # Add more points inwards and upwards as dlib only detects up to eyebrows\n        points = np.vstack([\n          points,\n          boundary_points(points, 0.1, -0.03),\n          boundary_points(points, 0.13, -0.05),\n          boundary_points(points, 0.15, -0.08),\n          boundary_points(points, 0.33, -0.12)])\n\n    return points\n  except Exception as e:\n    print(e)\n    return []\n\ndef face_points_stasm(img, add_boundary_points=True):\n  \"\"\" Locates 77 face points using stasm (http://www.milbo.users.sonic.net/stasm)\n\n  :param img: an image array\n  :param add_boundary_points: bool to add 2 additional points\n  :returns: Array of x,y face points. Empty array if no face found\n  \"\"\"\n  import stasm\n  try:\n    points = stasm.search_single(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))\n  except Exception as e:\n    print('Failed finding face points: ', e)\n    return []\n\n  points = points.astype(np.int32)\n  if len(points) == 0:\n    return points\n\n  if add_boundary_points:\n    return np.vstack([points, boundary_points(points)])\n\n  return points\n\ndef average_points(point_set):\n  \"\"\" Averages a set of face points from images\n\n  :param point_set: *n* x *m* x 2 array of face points. \\\\\n  *n* = number of images. 
*m* = number of face points per image\n  \"\"\"\n  return np.mean(point_set, 0).astype(np.int32)\n\ndef weighted_average_points(start_points, end_points, percent=0.5):\n  \"\"\" Weighted average of two sets of supplied points\n\n  :param start_points: *m* x 2 array of start face points.\n  :param end_points: *m* x 2 array of end face points.\n  :param percent: [0, 1] percentage weight on start_points\n  :returns: *m* x 2 array of weighted average points\n  \"\"\"\n  if percent <= 0:\n    return end_points\n  elif percent >= 1:\n    return start_points\n  else:\n    return np.asarray(start_points*percent + end_points*(1-percent), np.int32)\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/facemorpher/morpher.py",
    "content": "\"\"\"\n::\n\n  Morph from source to destination face or\n  Morph through all images in a folder\n\n  Usage:\n    morpher.py (--src=<src_path> --dest=<dest_path> | --images=<folder>)\n              [--width=<width>] [--height=<height>]\n              [--num=<num_frames>] [--fps=<frames_per_second>]\n              [--out_frames=<folder>] [--out_video=<filename>]\n              [--plot] [--background=(black|transparent|average)]\n\n  Options:\n    -h, --help              Show this screen.\n    --src=<src_imgpath>     Filepath to source image (.jpg, .jpeg, .png)\n    --dest=<dest_imgpath>   Filepath to destination image (.jpg, .jpeg, .png)\n    --images=<folder>       Folderpath to images\n    --width=<width>         Custom width of the images/video [default: 500]\n    --height=<height>       Custom height of the images/video [default: 600]\n    --num=<num_frames>      Number of morph frames [default: 20]\n    --fps=<fps>             Number frames per second for the video [default: 10]\n    --out_frames=<folder>   Folder path to save all image frames\n    --out_video=<filename>  Filename to save a video\n    --plot                  Flag to plot images to result.png [default: False]\n    --background=<bg>       Background of images to be one of (black|transparent|average) [default: black]\n    --version               Show version.\n\"\"\"\nfrom docopt import docopt\nimport os\nimport numpy as np\nimport cv2\n\nfrom facemorpher import locator\nfrom facemorpher import aligner\nfrom facemorpher import warper\nfrom facemorpher import blender\nfrom facemorpher import plotter\nfrom facemorpher import videoer\n\ndef verify_args(args):\n  if args['--images'] is None:\n    valid = os.path.isfile(args['--src']) & os.path.isfile(args['--dest'])\n    if not valid:\n      print('--src=%s or --dest=%s file does not exist. 
Double check the supplied paths' % (\n        args['--src'], args['--dest']))\n      exit(1)\n  else:\n    valid = os.path.isdir(args['--images'])\n    if not valid:\n      print('--images=%s is not a valid directory' % args['--images'])\n      exit(1)\n\ndef load_image_points(path, size):\n  img = cv2.imread(path)\n  points = locator.face_points(img)\n\n  if len(points) == 0:\n    print('No face in %s' % path)\n    return None, None\n  else:\n    return aligner.resize_align(img, points, size)\n\ndef load_valid_image_points(imgpaths, size):\n  for path in imgpaths:\n    img, points = load_image_points(path, size)\n    if img is not None:\n      print(path)\n      yield (img, points)\n\ndef list_imgpaths(images_folder=None, src_image=None, dest_image=None):\n  if images_folder is None:\n    yield src_image\n    yield dest_image\n  else:\n    for fname in os.listdir(images_folder):\n      if (fname.lower().endswith('.jpg') or\n         fname.lower().endswith('.png') or\n         fname.lower().endswith('.jpeg')):\n        yield os.path.join(images_folder, fname)\n\ndef morph(src_img, src_points, dest_img, dest_points,\n          video, width=500, height=600, num_frames=20, fps=10,\n          out_frames=None, out_video=None, plot=False, background='black'):\n  \"\"\"\n  Create a morph sequence from source to destination image\n\n  :param src_img: ndarray source image\n  :param src_points: source image array of x,y face points\n  :param dest_img: ndarray destination image\n  :param dest_points: destination image array of x,y face points\n  :param video: facemorpher.videoer.Video object\n  \"\"\"\n  size = (height, width)\n  stall_frames = np.clip(int(fps*0.15), 1, fps)  # Show first & last longer\n  plt = plotter.Plotter(plot, num_images=num_frames, out_folder=out_frames)\n  num_frames -= (stall_frames * 2)  # No need to process src and dest image\n\n  plt.plot_one(src_img)\n  video.write(src_img, 1)\n\n  # Produce morph frames!\n  for percent in np.linspace(1, 0, 
num=num_frames):\n    points = locator.weighted_average_points(src_points, dest_points, percent)\n    src_face = warper.warp_image(src_img, src_points, points, size)\n    end_face = warper.warp_image(dest_img, dest_points, points, size)\n    average_face = blender.weighted_average(src_face, end_face, percent)\n\n    if background in ('transparent', 'average'):\n      mask = blender.mask_from_points(average_face.shape[:2], points)\n      average_face = np.dstack((average_face, mask))\n\n      if background == 'average':\n        average_background = blender.weighted_average(src_img, dest_img, percent)\n        average_face = blender.overlay_image(average_face, mask, average_background)\n\n    plt.plot_one(average_face)\n    plt.save(average_face)\n    video.write(average_face)\n\n  plt.plot_one(dest_img)\n  video.write(dest_img, stall_frames)\n  plt.show()\n\ndef morpher(imgpaths, width=500, height=600, num_frames=20, fps=10,\n            out_frames=None, out_video=None, plot=False, background='black'):\n  \"\"\"\n  Create a morph sequence from multiple images in imgpaths\n\n  :param imgpaths: array or generator of image paths\n  \"\"\"\n  video = videoer.Video(out_video, fps, width, height)\n  images_points_gen = load_valid_image_points(imgpaths, (height, width))\n  src_img, src_points = next(images_points_gen)\n  for dest_img, dest_points in images_points_gen:\n    morph(src_img, src_points, dest_img, dest_points, video,\n          width, height, num_frames, fps, out_frames, out_video, plot, background)\n    src_img, src_points = dest_img, dest_points\n  video.end()\n\ndef main():\n  args = docopt(__doc__, version='Face Morpher 1.0')\n  verify_args(args)\n\n  morpher(list_imgpaths(args['--images'], args['--src'], args['--dest']),\n          int(args['--width']), int(args['--height']),\n          int(args['--num']), int(args['--fps']),\n          args['--out_frames'], args['--out_video'],\n          args['--plot'], args['--background'])\n\n\nif __name__ == 
\"__main__\":\n  main()\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/facemorpher/plotter.py",
    "content": "\"\"\"\nPlot and save images\n\"\"\"\n\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport os.path\nimport numpy as np\nimport cv2\n\ndef bgr2rgb(img):\n  # OpenCV's BGR to RGB\n  rgb = np.copy(img)\n  rgb[..., 0], rgb[..., 2] = img[..., 2], img[..., 0]\n  return rgb\n\ndef check_do_plot(func):\n  def inner(self, *args, **kwargs):\n    if self.do_plot:\n      func(self, *args, **kwargs)\n\n  return inner\n\ndef check_do_save(func):\n  def inner(self, *args, **kwargs):\n    if self.do_save:\n      func(self, *args, **kwargs)\n\n  return inner\n\nclass Plotter(object):\n  def __init__(self, plot=True, rows=0, cols=0, num_images=0, out_folder=None, out_filename=None):\n    self.save_counter = 1\n    self.plot_counter = 1\n    self.do_plot = plot\n    self.do_save = out_filename is not None\n    self.out_filename = out_filename\n    self.set_filepath(out_folder)\n\n    if (rows + cols) == 0 and num_images > 0:\n      # Auto-calculate the number of rows and cols for the figure\n      self.rows = np.ceil(np.sqrt(num_images / 2.0))\n      self.cols = np.ceil(num_images / self.rows)\n    else:\n      self.rows = rows\n      self.cols = cols\n\n  def set_filepath(self, folder):\n    if folder is None:\n      self.filepath = None\n      return\n\n    if not os.path.exists(folder):\n      os.makedirs(folder)\n    self.filepath = os.path.join(folder, 'frame{0:03d}.png')\n    self.do_save = True\n\n  @check_do_save\n  def save(self, img, filename=None):\n    if self.filepath:\n      filename = self.filepath.format(self.save_counter)\n      self.save_counter += 1\n    elif filename is None:\n      filename = self.out_filename\n\n    mpimg.imsave(filename, bgr2rgb(img))\n    print(filename + ' saved')\n\n  @check_do_plot\n  def plot_one(self, img):\n    p = plt.subplot(self.rows, self.cols, self.plot_counter)\n    p.axes.get_xaxis().set_visible(False)\n    p.axes.get_yaxis().set_visible(False)\n    plt.imshow(bgr2rgb(img))\n    
self.plot_counter += 1\n\n  @check_do_plot\n  def show(self):\n    plt.gcf().subplots_adjust(hspace=0.05, wspace=0,\n                              left=0, bottom=0, right=1, top=0.98)\n    plt.axis('off')\n    #plt.show()\n    plt.savefig('result.png')\n\n  @check_do_plot\n  def plot_mesh(self, points, tri, color='k'):\n    \"\"\" plot triangles \"\"\"\n    for tri_indices in tri.simplices:\n      t_ext = [tri_indices[0], tri_indices[1], tri_indices[2], tri_indices[0]]\n      plt.plot(points[t_ext, 0], points[t_ext, 1], color)\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/facemorpher/videoer.py",
    "content": "\"\"\"\nCreate a video with image frames\n\"\"\"\n\nimport cv2\nimport numpy as np\n\n\ndef check_write_video(func):\n  def inner(self, *args, **kwargs):\n    if self.video:\n      return func(self, *args, **kwargs)\n    else:\n      pass\n  return inner\n\n\nclass Video(object):\n  def __init__(self, filename, fps, w, h):\n    self.filename = filename\n\n    if filename is None:\n      self.video = None\n    else:\n      fourcc = cv2.VideoWriter_fourcc(*'MJPG')\n      self.video = cv2.VideoWriter(filename, fourcc, fps, (w, h), True)\n\n  @check_write_video\n  def write(self, img, num_times=1):\n    for i in range(num_times):\n      self.video.write(img[..., :3])\n\n  @check_write_video\n  def end(self):\n    print(self.filename + ' saved')\n    self.video.release()\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/facemorpher/warper.py",
    "content": "import numpy as np\nimport scipy.spatial as spatial\n\ndef bilinear_interpolate(img, coords):\n  \"\"\" Interpolates over every image channel\n  http://en.wikipedia.org/wiki/Bilinear_interpolation\n\n  :param img: max 3 channel image\n  :param coords: 2 x _m_ array. 1st row = xcoords, 2nd row = ycoords\n  :returns: array of interpolated pixels with same shape as coords\n  \"\"\"\n  int_coords = np.int32(coords)\n  x0, y0 = int_coords\n  dx, dy = coords - int_coords\n\n  # 4 Neighbour pixels\n  q11 = img[y0, x0]\n  q21 = img[y0, x0+1]\n  q12 = img[y0+1, x0]\n  q22 = img[y0+1, x0+1]\n\n  btm = q21.T * dx + q11.T * (1 - dx)\n  top = q22.T * dx + q12.T * (1 - dx)\n  inter_pixel = top * dy + btm * (1 - dy)\n\n  return inter_pixel.T\n\ndef grid_coordinates(points):\n  \"\"\" x,y grid coordinates within the ROI of supplied points\n\n  :param points: points to generate grid coordinates\n  :returns: array of (x, y) coordinates\n  \"\"\"\n  xmin = np.min(points[:, 0])\n  xmax = np.max(points[:, 0]) + 1\n  ymin = np.min(points[:, 1])\n  ymax = np.max(points[:, 1]) + 1\n  return np.asarray([(x, y) for y in range(ymin, ymax)\n                     for x in range(xmin, xmax)], np.uint32)\n\ndef process_warp(src_img, result_img, tri_affines, dst_points, delaunay):\n  \"\"\"\n  Warp each triangle from the src_image only within the\n  ROI of the destination image (points in dst_points).\n  \"\"\"\n  roi_coords = grid_coordinates(dst_points)\n  # indices to vertices. 
-1 if pixel is not in any triangle\n  roi_tri_indices = delaunay.find_simplex(roi_coords)\n\n  for simplex_index in range(len(delaunay.simplices)):\n    coords = roi_coords[roi_tri_indices == simplex_index]\n    num_coords = len(coords)\n    out_coords = np.dot(tri_affines[simplex_index],\n                        np.vstack((coords.T, np.ones(num_coords))))\n    x, y = coords.T\n    result_img[y, x] = bilinear_interpolate(src_img, out_coords)\n\n  return None\n\ndef triangular_affine_matrices(vertices, src_points, dest_points):\n  \"\"\"\n  Calculate the affine transformation matrix for each\n  triangle (x,y) vertex from dest_points to src_points\n\n  :param vertices: array of triplet indices to corners of triangle\n  :param src_points: array of [x, y] points to landmarks for source image\n  :param dest_points: array of [x, y] points to landmarks for destination image\n  :returns: 2 x 3 affine matrix transformation for a triangle\n  \"\"\"\n  ones = [1, 1, 1]\n  for tri_indices in vertices:\n    src_tri = np.vstack((src_points[tri_indices, :].T, ones))\n    dst_tri = np.vstack((dest_points[tri_indices, :].T, ones))\n    mat = np.dot(src_tri, np.linalg.inv(dst_tri))[:2, :]\n    yield mat\n\ndef warp_image(src_img, src_points, dest_points, dest_shape, dtype=np.uint8):\n  # Resultant image will not have an alpha channel\n  num_chans = 3\n  src_img = src_img[:, :, :3]\n\n  rows, cols = dest_shape[:2]\n  result_img = np.zeros((rows, cols, num_chans), dtype)\n\n  delaunay = spatial.Delaunay(dest_points)\n  tri_affines = np.asarray(list(triangular_affine_matrices(\n    delaunay.simplices, src_points, dest_points)))\n\n  process_warp(src_img, result_img, tri_affines, dest_points, delaunay)\n\n  return result_img\n\ndef test_local():\n  from functools import partial\n  import cv2\n  import scipy.misc\n  import locator\n  import aligner\n  from matplotlib import pyplot as plt\n\n  # Load source image\n  face_points_func = partial(locator.face_points, '../data')\n  base_path = 
'../females/Screenshot 2015-03-04 17.11.12.png'\n  src_path = '../females/BlDmB5QCYAAY8iw.jpg'\n  src_img = cv2.imread(src_path)\n\n  # Define control points for warps.\n  # locator.face_points expects an image array, not a file path\n  src_points = locator.face_points(src_img)\n  base_img = cv2.imread(base_path)\n  base_points = locator.face_points(base_img)\n\n  size = (600, 500)\n  src_img, src_points = aligner.resize_align(src_img, src_points, size)\n  base_img, base_points = aligner.resize_align(base_img, base_points, size)\n  result_points = locator.weighted_average_points(src_points, base_points, 0.2)\n\n  # Perform transform\n  dst_img1 = warp_image(src_img, src_points, result_points, size)\n  dst_img2 = warp_image(base_img, base_points, result_points, size)\n\n  import blender\n  ave = blender.weighted_average(dst_img1, dst_img2, 0.6)\n  mask = blender.mask_from_points(size, result_points)\n  blended_img = blender.poisson_blend(dst_img1, dst_img2, mask)\n\n  plt.subplot(2, 2, 1)\n  plt.imshow(ave)\n  plt.subplot(2, 2, 2)\n  plt.imshow(dst_img1)\n  plt.subplot(2, 2, 3)\n  plt.imshow(dst_img2)\n  plt.subplot(2, 2, 4)\n\n  plt.imshow(blended_img)\n  plt.show()\n\n\nif __name__ == \"__main__\":\n  test_local()\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/requirements.txt",
    "content": "numpy\nscipy\nmatplotlib\ndocopt\ndlib\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/scripts/make_docs.sh",
    "content": "#!/bin/bash\n\nrm -rf docs\n# reStructuredText in python files to rst. Documentation in docs folder\nsphinx-apidoc -A \"Alyssa Quek\" -f -F -o docs facemorpher/\n\ncd docs\n\n# Append module path to end of conf file\necho \"\" >> conf.py\necho \"import os\" >> conf.py\necho \"import sys\" >> conf.py\necho \"sys.path.insert(0, os.path.abspath('../'))\" >> conf.py\necho \"sys.path.insert(0, os.path.abspath('../facemorpher'))\" >> conf.py\n\n# Make sphinx documentation\nmake html\ncd ..\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/scripts/publish_ghpages.sh",
    "content": "#!/bin/bash          \n\n# delete previous gh-pages\ngit branch -D gh-pages\ngit push origin :gh-pages\n\ngit checkout -b gh-pages\ngit rebase master\ngit reset HEAD\n\n# make docs\n./scripts/make_docs.sh\n\n# Add docs\nmv docs/_build/html/*.html .\ngit add *.html\nmv docs/_build/html/*.js .\ngit add *.js\nmv docs/_build/html/_static/ _static\ngit add _static\n\ntouch .nojekyll\ngit add .nojekyll\n\n# Publish to gh-pages\ngit commit -m \"docs\"\ngit push origin gh-pages\n\ngit checkout master\n"
  },
  {
    "path": "third_part/GPEN/face_morpher/setup.cfg",
    "content": "[pep8]\nignore = E111,E114,E226,E302,E41,E121,E701\nmax-line-length = 100\n\n[flake8]\nignore = E111,E114,E226,E302,E41,E121,E701\nmax-line-length = 100"
  },
  {
    "path": "third_part/GPEN/face_morpher/setup.py",
    "content": "from setuptools import setup, find_packages\n\n# To test locally: python setup.py sdist bdist_wheel\n# To upload to pypi: twine upload dist/*\n\nsetup(\n  name='facemorpher',\n  version='5.2.dev0',\n  author='Alyssa Quek',\n  author_email='alyssaquek@gmail.com',\n  description=('Warp, morph and average human faces!'),\n  keywords='face morphing, averaging, warping',\n  url='https://github.com/alyssaq/face_morpher',\n  license='MIT',\n  packages=find_packages(),\n  install_requires=[\n    'docopt',\n    'numpy',\n    'scipy',\n    'matplotlib',\n    'dlib'\n  ],\n  entry_points={'console_scripts': [\n      'facemorpher=facemorpher.morpher:main',\n      'faceaverager=facemorpher.averager:main'\n    ]\n  },\n  data_files=[('readme', ['README.rst'])],\n  long_description=open('README.rst').read(),\n)\n"
  },
  {
    "path": "third_part/GPEN/face_parse/blocks.py",
    "content": "# -*- coding: utf-8 -*-\nimport torch\nimport torch.nn as nn\nfrom torch.nn.parameter import Parameter\nfrom torch.nn import functional as F\nimport numpy as np\n\nclass NormLayer(nn.Module):\n    \"\"\"Normalization Layers.\n    ------------\n    # Arguments\n        - channels: input channels, for batch norm and instance norm.\n        - input_size: input shape without batch size, for layer norm.\n    \"\"\"\n    def __init__(self, channels, normalize_shape=None, norm_type='bn', ref_channels=None):\n        super(NormLayer, self).__init__()\n        norm_type = norm_type.lower()\n        self.norm_type = norm_type\n        if norm_type == 'bn':\n            self.norm = nn.BatchNorm2d(channels, affine=True)\n        elif norm_type == 'in':\n            self.norm = nn.InstanceNorm2d(channels, affine=False)\n        elif norm_type == 'gn':\n            self.norm = nn.GroupNorm(32, channels, affine=True)\n        elif norm_type == 'pixel':\n            self.norm = lambda x: F.normalize(x, p=2, dim=1)\n        elif norm_type == 'layer':\n            self.norm = nn.LayerNorm(normalize_shape)\n        elif norm_type == 'none':\n            self.norm = lambda x: x*1.0\n        else:\n            assert 1==0, 'Norm type {} not support.'.format(norm_type)\n\n    def forward(self, x, ref=None):\n        if self.norm_type == 'spade':\n            return self.norm(x, ref)\n        else:\n            return self.norm(x)\n\n\nclass ReluLayer(nn.Module):\n    \"\"\"Relu Layer.\n    ------------\n    # Arguments\n        - relu type: type of relu layer, candidates are\n            - ReLU\n            - LeakyReLU: default relu slope 0.2\n            - PRelu \n            - SELU\n            - none: direct pass\n    \"\"\"\n    def __init__(self, channels, relu_type='relu'):\n        super(ReluLayer, self).__init__()\n        relu_type = relu_type.lower()\n        if relu_type == 'relu':\n            self.func = nn.ReLU(True)\n        elif relu_type == 
'leakyrelu':\n            self.func = nn.LeakyReLU(0.2, inplace=True)\n        elif relu_type == 'prelu':\n            self.func = nn.PReLU(channels)\n        elif relu_type == 'selu':\n            self.func = nn.SELU(True)\n        elif relu_type == 'none':\n            self.func = lambda x: x*1.0\n        else:\n            assert 1==0, 'Relu type {} not support.'.format(relu_type)\n\n    def forward(self, x):\n        return self.func(x)\n\n\nclass ConvLayer(nn.Module):\n    def __init__(self, in_channels, out_channels, kernel_size=3, scale='none', norm_type='none', relu_type='none', use_pad=True, bias=True):\n        super(ConvLayer, self).__init__()\n        self.use_pad = use_pad\n        self.norm_type = norm_type\n        if norm_type in ['bn']:\n            bias = False\n        \n        stride = 2 if scale == 'down' else 1\n\n        self.scale_func = lambda x: x\n        if scale == 'up':\n            self.scale_func = lambda x: nn.functional.interpolate(x, scale_factor=2, mode='nearest')\n\n        self.reflection_pad = nn.ReflectionPad2d(int(np.ceil((kernel_size - 1.)/2))) \n        self.conv2d = nn.Conv2d(in_channels, out_channels, kernel_size, stride, bias=bias)\n\n        self.relu = ReluLayer(out_channels, relu_type)\n        self.norm = NormLayer(out_channels, norm_type=norm_type)\n\n    def forward(self, x):\n        out = self.scale_func(x)\n        if self.use_pad:\n            out = self.reflection_pad(out)\n        out = self.conv2d(out)\n        out = self.norm(out)\n        out = self.relu(out)\n        return out\n\n\nclass ResidualBlock(nn.Module):\n    \"\"\"\n    Residual block recommended in: http://torch.ch/blog/2016/02/04/resnets.html\n    \"\"\"\n    def __init__(self, c_in, c_out, relu_type='prelu', norm_type='bn', scale='none'):\n        super(ResidualBlock, self).__init__()\n\n        if scale == 'none' and c_in == c_out:\n            self.shortcut_func = lambda x: x\n        else:\n            self.shortcut_func = 
ConvLayer(c_in, c_out, 3, scale)\n        \n        scale_config_dict = {'down': ['none', 'down'], 'up': ['up', 'none'], 'none': ['none', 'none']}\n        scale_conf = scale_config_dict[scale]\n\n        self.conv1 = ConvLayer(c_in, c_out, 3, scale_conf[0], norm_type=norm_type, relu_type=relu_type) \n        self.conv2 = ConvLayer(c_out, c_out, 3, scale_conf[1], norm_type=norm_type, relu_type='none')\n  \n    def forward(self, x):\n        identity = self.shortcut_func(x)\n\n        res = self.conv1(x)\n        res = self.conv2(res)\n        return identity + res\n        \n\n"
  },
  {
    "path": "third_part/GPEN/face_parse/face_parsing.py",
    "content": "'''\n@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021)\n@author: yangxy (yangtao9009@gmail.com)\n'''\nimport os\nimport cv2\nimport torch\nimport numpy as np\nfrom face_parse.parse_model import ParseNet\nimport torch.nn.functional as F\n\nfrom face_parse.model import BiSeNet\nimport torchvision.transforms as transforms\n\nclass FaceParse(object):\n    def __init__(self, base_dir='./', model='ParseNet-latest', device='cuda', mask_map = [0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0]):\n        self.mfile = os.path.join(base_dir, model+'.pth')\n        self.size = 512\n        self.device = device\n\n        '''\n        0: 'background' 1: 'skin'   2: 'nose'\n        3: 'eye_g'  4: 'l_eye'  5: 'r_eye'\n        6: 'l_brow' 7: 'r_brow' 8: 'l_ear'\n        9: 'r_ear'  10: 'mouth' 11: 'u_lip'\n        12: 'l_lip' 13: 'hair'  14: 'hat'\n        15: 'ear_r' 16: 'neck_l'    17: 'neck'\n        18: 'cloth'\n        '''\n        # self.MASK_COLORMAP = [[0, 0, 0], [204, 0, 0], [76, 153, 0], [204, 204, 0], [51, 51, 255], [204, 0, 204], [0, 255, 255], [255, 204, 204], [102, 51, 0], [255, 0, 0], [102, 204, 0], [255, 255, 0], [0, 0, 153], [0, 0, 204], [255, 51, 153], [0, 204, 204], [0, 51, 0], [255, 153, 51], [0, 204, 0]]\n        #self.#MASK_COLORMAP = [[0, 0, 0], [204, 0, 0], [76, 153, 0], [204, 204, 0], [51, 51, 255], [204, 0, 204], [0, 255, 255], [255, 204, 204], [102, 51, 0], [255, 0, 0], [102, 204, 0], [255, 255, 0], [0, 0, 153], [0, 0, 204], [255, 51, 153], [0, 204, 204], [0, 51, 0], [255, 153, 51], [0, 204, 0]] = [[0, 0, 0], [204, 0, 0], [76, 153, 0], [204, 204, 0], [51, 51, 255], [204, 0, 204], [0, 255, 255], [255, 204, 204], [102, 51, 0], [255, 0, 0], [102, 204, 0], [255, 255, 0], [0, 0, 153], [0, 0, 204], [255, 51, 153], [0, 204, 204], [0, 51, 0], [0, 0, 0], [0, 0, 0]]\n        # self.MASK_COLORMAP = [0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 255, 0, 0, 0]\n 
       self.MASK_COLORMAP = mask_map\n\n        self.load_model()\n\n    def load_model(self):\n        self.faceparse = ParseNet(self.size, self.size, 32, 64, 19, norm_type='bn', relu_type='LeakyReLU', ch_range=[32, 256])\n        self.faceparse.load_state_dict(torch.load(self.mfile, map_location=torch.device('cpu')))\n        self.faceparse.to(self.device)\n        self.faceparse.eval()\n\n    def process(self, im, masks=[0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0]):\n        im = cv2.resize(im, (self.size, self.size))\n        imt = self.img2tensor(im)\n        with torch.no_grad():\n            pred_mask, sr_img_tensor = self.faceparse(imt)  # (1, 19, 512, 512)\n        mask = self.tenor2mask(pred_mask, masks)\n\n        return mask\n\n    def process_tensor(self, imt):\n        imt = F.interpolate(imt.flip(1)*2-1, (self.size, self.size))\n        pred_mask, sr_img_tensor = self.faceparse(imt)\n\n        mask = pred_mask.argmax(dim=1)\n        for idx, color in enumerate(self.MASK_COLORMAP):\n            mask = torch.where(mask==idx, color, mask)\n        #mask = mask.repeat(3, 1, 1).unsqueeze(0) #.cpu().float().numpy()\n        mask = mask.unsqueeze(0)\n\n        return mask\n\n    def img2tensor(self, img):\n        img = img[..., ::-1] # BGR to RGB\n        img = img / 255. 
* 2 - 1\n        img_tensor = torch.from_numpy(img.transpose(2, 0, 1)).unsqueeze(0).to(self.device)\n        return img_tensor.float()\n\n    def tenor2mask(self, tensor, masks):\n        if len(tensor.shape) < 4:\n            tensor = tensor.unsqueeze(0)\n        if tensor.shape[1] > 1:\n            tensor = tensor.argmax(dim=1) \n\n        tensor = tensor.squeeze(1).data.cpu().numpy()   # (1, 512, 512)\n        color_maps = []\n        for t in tensor:\n            #tmp_img = np.zeros(tensor.shape[1:] + (3,))\n            tmp_img = np.zeros(tensor.shape[1:])\n            for idx, color in enumerate(masks):\n                tmp_img[t == idx] = color\n            color_maps.append(tmp_img.astype(np.uint8))\n        return color_maps\n\n\n\nclass FaceParse_v2(object):\n    def __init__(self, device='cuda', mask_map = [0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0]):\n        self.mfile = '/apdcephfs/private_quincheng/Expression/face-parsing.PyTorch/res/cp/79999_iter.pth'\n        self.size = 512\n        self.device = device\n\n        '''\n        0: 'background' 1: 'skin'   2: 'nose'\n        3: 'eye_g'  4: 'l_eye'  5: 'r_eye'\n        6: 'l_brow' 7: 'r_brow' 8: 'l_ear'\n        9: 'r_ear'  10: 'mouth' 11: 'u_lip'\n        12: 'l_lip' 13: 'hair'  14: 'hat'\n        15: 'ear_r' 16: 'neck_l'    17: 'neck'\n        18: 'cloth'\n        '''\n        # self.MASK_COLORMAP = [[0, 0, 0], [204, 0, 0], [76, 153, 0], [204, 204, 0], [51, 51, 255], [204, 0, 204], [0, 255, 255], [255, 204, 204], [102, 51, 0], [255, 0, 0], [102, 204, 0], [255, 255, 0], [0, 0, 153], [0, 0, 204], [255, 51, 153], [0, 204, 204], [0, 51, 0], [255, 153, 51], [0, 204, 0]]\n        #self.#MASK_COLORMAP = [[0, 0, 0], [204, 0, 0], [76, 153, 0], [204, 204, 0], [51, 51, 255], [204, 0, 204], [0, 255, 255], [255, 204, 204], [102, 51, 0], [255, 0, 0], [102, 204, 0], [255, 255, 0], [0, 0, 153], [0, 0, 204], [255, 51, 153], [0, 204, 204], [0, 51, 0], [255, 153, 51], [0, 204, 0]] 
= [[0, 0, 0], [204, 0, 0], [76, 153, 0], [204, 204, 0], [51, 51, 255], [204, 0, 204], [0, 255, 255], [255, 204, 204], [102, 51, 0], [255, 0, 0], [102, 204, 0], [255, 255, 0], [0, 0, 153], [0, 0, 204], [255, 51, 153], [0, 204, 204], [0, 51, 0], [0, 0, 0], [0, 0, 0]]\n        # self.MASK_COLORMAP = [0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 255, 0, 0, 0]\n        self.MASK_COLORMAP = mask_map\n        self.load_model()\n        self.to_tensor = transforms.Compose([\n            transforms.ToTensor(),\n            transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),\n        ])\n\n    def load_model(self):\n        self.faceparse = BiSeNet(n_classes=19)\n        self.faceparse.load_state_dict(torch.load(self.mfile))\n        self.faceparse.to(self.device)\n        self.faceparse.eval()\n\n    def process(self, im, masks=[0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0]):\n        im = cv2.resize(im[...,::-1], (self.size, self.size))\n        im = self.to_tensor(im)\n        imt = torch.unsqueeze(im, 0).to(self.device)\n        with torch.no_grad():\n            pred_mask = self.faceparse(imt)[0]\n        mask = self.tenor2mask(pred_mask, masks)\n        return mask\n\n    # def img2tensor(self, img):\n    #     img = img[..., ::-1] # BGR to RGB\n    #     img = img / 255. 
* 2 - 1\n    #     img_tensor = torch.from_numpy(img.transpose(2, 0, 1)).unsqueeze(0).to(self.device)\n    #     return img_tensor.float()\n\n    def tenor2mask(self, tensor, masks):\n        if len(tensor.shape) < 4:\n            tensor = tensor.unsqueeze(0)\n        if tensor.shape[1] > 1:\n            tensor = tensor.argmax(dim=1) \n\n        tensor = tensor.squeeze(1).data.cpu().numpy()\n        color_maps = []\n        for t in tensor:\n            #tmp_img = np.zeros(tensor.shape[1:] + (3,))\n            tmp_img = np.zeros(tensor.shape[1:])\n            for idx, color in enumerate(masks):\n                tmp_img[t == idx] = color\n            color_maps.append(tmp_img.astype(np.uint8))\n        return color_maps"
  },
  {
    "path": "third_part/GPEN/face_parse/model.py",
    "content": "#!/usr/bin/python\n# -*- encoding: utf-8 -*-\n\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision\n\nfrom .resnet import Resnet18\n# from modules.bn import InPlaceABNSync as BatchNorm2d\n\n\nclass ConvBNReLU(nn.Module):\n    def __init__(self, in_chan, out_chan, ks=3, stride=1, padding=1, *args, **kwargs):\n        super(ConvBNReLU, self).__init__()\n        self.conv = nn.Conv2d(in_chan,\n                out_chan,\n                kernel_size = ks,\n                stride = stride,\n                padding = padding,\n                bias = False)\n        self.bn = nn.BatchNorm2d(out_chan)\n        self.init_weight()\n\n    def forward(self, x):\n        x = self.conv(x)\n        x = F.relu(self.bn(x))\n        return x\n\n    def init_weight(self):\n        for ly in self.children():\n            if isinstance(ly, nn.Conv2d):\n                nn.init.kaiming_normal_(ly.weight, a=1)\n                if not ly.bias is None: nn.init.constant_(ly.bias, 0)\n\nclass BiSeNetOutput(nn.Module):\n    def __init__(self, in_chan, mid_chan, n_classes, *args, **kwargs):\n        super(BiSeNetOutput, self).__init__()\n        self.conv = ConvBNReLU(in_chan, mid_chan, ks=3, stride=1, padding=1)\n        self.conv_out = nn.Conv2d(mid_chan, n_classes, kernel_size=1, bias=False)\n        self.init_weight()\n\n    def forward(self, x):\n        x = self.conv(x)\n        x = self.conv_out(x)\n        return x\n\n    def init_weight(self):\n        for ly in self.children():\n            if isinstance(ly, nn.Conv2d):\n                nn.init.kaiming_normal_(ly.weight, a=1)\n                if not ly.bias is None: nn.init.constant_(ly.bias, 0)\n\n    def get_params(self):\n        wd_params, nowd_params = [], []\n        for name, module in self.named_modules():\n            if isinstance(module, nn.Linear) or isinstance(module, nn.Conv2d):\n                wd_params.append(module.weight)\n                if not module.bias is 
None:\n                    nowd_params.append(module.bias)\n            elif isinstance(module, nn.BatchNorm2d):\n                nowd_params += list(module.parameters())\n        return wd_params, nowd_params\n\n\nclass AttentionRefinementModule(nn.Module):\n    def __init__(self, in_chan, out_chan, *args, **kwargs):\n        super(AttentionRefinementModule, self).__init__()\n        self.conv = ConvBNReLU(in_chan, out_chan, ks=3, stride=1, padding=1)\n        self.conv_atten = nn.Conv2d(out_chan, out_chan, kernel_size= 1, bias=False)\n        self.bn_atten = nn.BatchNorm2d(out_chan)\n        self.sigmoid_atten = nn.Sigmoid()\n        self.init_weight()\n\n    def forward(self, x):\n        feat = self.conv(x)\n        atten = F.avg_pool2d(feat, feat.size()[2:])\n        atten = self.conv_atten(atten)\n        atten = self.bn_atten(atten)\n        atten = self.sigmoid_atten(atten)\n        out = torch.mul(feat, atten)\n        return out\n\n    def init_weight(self):\n        for ly in self.children():\n            if isinstance(ly, nn.Conv2d):\n                nn.init.kaiming_normal_(ly.weight, a=1)\n                if not ly.bias is None: nn.init.constant_(ly.bias, 0)\n\n\nclass ContextPath(nn.Module):\n    def __init__(self, *args, **kwargs):\n        super(ContextPath, self).__init__()\n        self.resnet = Resnet18()\n        self.arm16 = AttentionRefinementModule(256, 128)\n        self.arm32 = AttentionRefinementModule(512, 128)\n        self.conv_head32 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1)\n        self.conv_head16 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1)\n        self.conv_avg = ConvBNReLU(512, 128, ks=1, stride=1, padding=0)\n\n        self.init_weight()\n\n    def forward(self, x):\n        H0, W0 = x.size()[2:]\n        feat8, feat16, feat32 = self.resnet(x)\n        H8, W8 = feat8.size()[2:]\n        H16, W16 = feat16.size()[2:]\n        H32, W32 = feat32.size()[2:]\n\n        avg = F.avg_pool2d(feat32, feat32.size()[2:])\n      
  avg = self.conv_avg(avg)\n        avg_up = F.interpolate(avg, (H32, W32), mode='nearest')\n\n        feat32_arm = self.arm32(feat32)\n        feat32_sum = feat32_arm + avg_up\n        feat32_up = F.interpolate(feat32_sum, (H16, W16), mode='nearest')\n        feat32_up = self.conv_head32(feat32_up)\n\n        feat16_arm = self.arm16(feat16)\n        feat16_sum = feat16_arm + feat32_up\n        feat16_up = F.interpolate(feat16_sum, (H8, W8), mode='nearest')\n        feat16_up = self.conv_head16(feat16_up)\n\n        return feat8, feat16_up, feat32_up  # x8, x8, x16\n\n    def init_weight(self):\n        for ly in self.children():\n            if isinstance(ly, nn.Conv2d):\n                nn.init.kaiming_normal_(ly.weight, a=1)\n                if not ly.bias is None: nn.init.constant_(ly.bias, 0)\n\n    def get_params(self):\n        wd_params, nowd_params = [], []\n        for name, module in self.named_modules():\n            if isinstance(module, (nn.Linear, nn.Conv2d)):\n                wd_params.append(module.weight)\n                if not module.bias is None:\n                    nowd_params.append(module.bias)\n            elif isinstance(module, nn.BatchNorm2d):\n                nowd_params += list(module.parameters())\n        return wd_params, nowd_params\n\n\n### This is not used, since I replace this with the resnet feature with the same size\nclass SpatialPath(nn.Module):\n    def __init__(self, *args, **kwargs):\n        super(SpatialPath, self).__init__()\n        self.conv1 = ConvBNReLU(3, 64, ks=7, stride=2, padding=3)\n        self.conv2 = ConvBNReLU(64, 64, ks=3, stride=2, padding=1)\n        self.conv3 = ConvBNReLU(64, 64, ks=3, stride=2, padding=1)\n        self.conv_out = ConvBNReLU(64, 128, ks=1, stride=1, padding=0)\n        self.init_weight()\n\n    def forward(self, x):\n        feat = self.conv1(x)\n        feat = self.conv2(feat)\n        feat = self.conv3(feat)\n        feat = self.conv_out(feat)\n        return feat\n\n    def 
init_weight(self):\n        for ly in self.children():\n            if isinstance(ly, nn.Conv2d):\n                nn.init.kaiming_normal_(ly.weight, a=1)\n                if not ly.bias is None: nn.init.constant_(ly.bias, 0)\n\n    def get_params(self):\n        wd_params, nowd_params = [], []\n        for name, module in self.named_modules():\n            if isinstance(module, nn.Linear) or isinstance(module, nn.Conv2d):\n                wd_params.append(module.weight)\n                if not module.bias is None:\n                    nowd_params.append(module.bias)\n            elif isinstance(module, nn.BatchNorm2d):\n                nowd_params += list(module.parameters())\n        return wd_params, nowd_params\n\n\nclass FeatureFusionModule(nn.Module):\n    def __init__(self, in_chan, out_chan, *args, **kwargs):\n        super(FeatureFusionModule, self).__init__()\n        self.convblk = ConvBNReLU(in_chan, out_chan, ks=1, stride=1, padding=0)\n        self.conv1 = nn.Conv2d(out_chan,\n                out_chan//4,\n                kernel_size = 1,\n                stride = 1,\n                padding = 0,\n                bias = False)\n        self.conv2 = nn.Conv2d(out_chan//4,\n                out_chan,\n                kernel_size = 1,\n                stride = 1,\n                padding = 0,\n                bias = False)\n        self.relu = nn.ReLU(inplace=True)\n        self.sigmoid = nn.Sigmoid()\n        self.init_weight()\n\n    def forward(self, fsp, fcp):\n        fcat = torch.cat([fsp, fcp], dim=1)\n        feat = self.convblk(fcat)\n        atten = F.avg_pool2d(feat, feat.size()[2:])\n        atten = self.conv1(atten)\n        atten = self.relu(atten)\n        atten = self.conv2(atten)\n        atten = self.sigmoid(atten)\n        feat_atten = torch.mul(feat, atten)\n        feat_out = feat_atten + feat\n        return feat_out\n\n    def init_weight(self):\n        for ly in self.children():\n            if isinstance(ly, nn.Conv2d):\n         
       nn.init.kaiming_normal_(ly.weight, a=1)\n                if not ly.bias is None: nn.init.constant_(ly.bias, 0)\n\n    def get_params(self):\n        wd_params, nowd_params = [], []\n        for name, module in self.named_modules():\n            if isinstance(module, nn.Linear) or isinstance(module, nn.Conv2d):\n                wd_params.append(module.weight)\n                if not module.bias is None:\n                    nowd_params.append(module.bias)\n            elif isinstance(module, nn.BatchNorm2d):\n                nowd_params += list(module.parameters())\n        return wd_params, nowd_params\n\n\nclass BiSeNet(nn.Module):\n    def __init__(self, n_classes, *args, **kwargs):\n        super(BiSeNet, self).__init__()\n        self.cp = ContextPath()\n        ## here self.sp is deleted\n        self.ffm = FeatureFusionModule(256, 256)\n        self.conv_out = BiSeNetOutput(256, 256, n_classes)\n        self.conv_out16 = BiSeNetOutput(128, 64, n_classes)\n        self.conv_out32 = BiSeNetOutput(128, 64, n_classes)\n        self.init_weight()\n\n    def forward(self, x):\n        H, W = x.size()[2:]\n        feat_res8, feat_cp8, feat_cp16 = self.cp(x)  # here return res3b1 feature\n        feat_sp = feat_res8  # use res3b1 feature to replace spatial path feature\n        feat_fuse = self.ffm(feat_sp, feat_cp8)\n\n        feat_out = self.conv_out(feat_fuse)\n        feat_out16 = self.conv_out16(feat_cp8)\n        feat_out32 = self.conv_out32(feat_cp16)\n\n        feat_out = F.interpolate(feat_out, (H, W), mode='bilinear', align_corners=True)\n        feat_out16 = F.interpolate(feat_out16, (H, W), mode='bilinear', align_corners=True)\n        feat_out32 = F.interpolate(feat_out32, (H, W), mode='bilinear', align_corners=True)\n        return feat_out, feat_out16, feat_out32\n\n    def init_weight(self):\n        for ly in self.children():\n            if isinstance(ly, nn.Conv2d):\n                nn.init.kaiming_normal_(ly.weight, a=1)\n                if 
not ly.bias is None: nn.init.constant_(ly.bias, 0)\n\n    def get_params(self):\n        wd_params, nowd_params, lr_mul_wd_params, lr_mul_nowd_params = [], [], [], []\n        for name, child in self.named_children():\n            child_wd_params, child_nowd_params = child.get_params()\n            if isinstance(child, FeatureFusionModule) or isinstance(child, BiSeNetOutput):\n                lr_mul_wd_params += child_wd_params\n                lr_mul_nowd_params += child_nowd_params\n            else:\n                wd_params += child_wd_params\n                nowd_params += child_nowd_params\n        return wd_params, nowd_params, lr_mul_wd_params, lr_mul_nowd_params\n\n\nif __name__ == \"__main__\":\n    net = BiSeNet(19)\n    net.cuda()\n    net.eval()\n    in_ten = torch.randn(16, 3, 640, 480).cuda()\n    out, out16, out32 = net(in_ten)\n    print(out.shape)\n\n    net.get_params()\n"
  },
  {
    "path": "third_part/GPEN/face_parse/parse_model.py",
    "content": "'''\n@Created by chaofengc (chaofenghust@gmail.com)\n\n@Modified by yangxy (yangtao9009@gmail.com)\n'''\n\nfrom face_parse.blocks import *\nimport torch\nfrom torch import nn\nimport numpy as np\n\ndef define_P(in_size=512, out_size=512, min_feat_size=32, relu_type='LeakyReLU', isTrain=False, weight_path=None):\n    net = ParseNet(in_size, out_size, min_feat_size, 64, 19, norm_type='bn', relu_type=relu_type, ch_range=[32, 256])\n    if not isTrain:\n        net.eval()  \n    if weight_path is not None:\n        net.load_state_dict(torch.load(weight_path))\n    return net\n\n\nclass ParseNet(nn.Module):\n    def __init__(self,\n                in_size=128,\n                out_size=128,\n                min_feat_size=32,\n                base_ch=64,\n                parsing_ch=19,\n                res_depth=10,\n                relu_type='prelu',\n                norm_type='bn',\n                ch_range=[32, 512],\n                ):\n        super().__init__()\n        self.res_depth = res_depth\n        act_args = {'norm_type': norm_type, 'relu_type': relu_type}\n        min_ch, max_ch = ch_range\n\n        ch_clip = lambda x: max(min_ch, min(x, max_ch))\n        min_feat_size = min(in_size, min_feat_size)\n\n        down_steps = int(np.log2(in_size//min_feat_size))\n        up_steps = int(np.log2(out_size//min_feat_size))\n\n        # =============== define encoder-body-decoder ==================== \n        self.encoder = []\n        self.encoder.append(ConvLayer(3, base_ch, 3, 1))\n        head_ch = base_ch\n        for i in range(down_steps):\n            cin, cout = ch_clip(head_ch), ch_clip(head_ch * 2)\n            self.encoder.append(ResidualBlock(cin, cout, scale='down', **act_args))\n            head_ch = head_ch * 2\n\n        self.body = []\n        for i in range(res_depth):\n            self.body.append(ResidualBlock(ch_clip(head_ch), ch_clip(head_ch), **act_args))\n\n        self.decoder = []\n        for i in range(up_steps):\n     
       cin, cout = ch_clip(head_ch), ch_clip(head_ch // 2)\n            self.decoder.append(ResidualBlock(cin, cout, scale='up', **act_args))\n            head_ch = head_ch // 2\n\n        self.encoder = nn.Sequential(*self.encoder)\n        self.body = nn.Sequential(*self.body)\n        self.decoder = nn.Sequential(*self.decoder)\n        self.out_img_conv = ConvLayer(ch_clip(head_ch), 3)\n        self.out_mask_conv = ConvLayer(ch_clip(head_ch), parsing_ch)\n\n    def forward(self, x):\n        feat = self.encoder(x)\n        x = feat + self.body(feat)\n        x = self.decoder(x)\n        out_img = self.out_img_conv(x) \n        out_mask = self.out_mask_conv(x)\n        return out_mask, out_img\n\n\n"
  },
  {
    "path": "third_part/GPEN/face_parse/resnet.py",
    "content": "#!/usr/bin/python\n# -*- encoding: utf-8 -*-\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.utils.model_zoo as modelzoo\n\n# from modules.bn import InPlaceABNSync as BatchNorm2d\n\nresnet18_url = 'https://download.pytorch.org/models/resnet18-5c106cde.pth'\n\n\ndef conv3x3(in_planes, out_planes, stride=1):\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n                     padding=1, bias=False)\n\n\nclass BasicBlock(nn.Module):\n    def __init__(self, in_chan, out_chan, stride=1):\n        super(BasicBlock, self).__init__()\n        self.conv1 = conv3x3(in_chan, out_chan, stride)\n        self.bn1 = nn.BatchNorm2d(out_chan)\n        self.conv2 = conv3x3(out_chan, out_chan)\n        self.bn2 = nn.BatchNorm2d(out_chan)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = None\n        if in_chan != out_chan or stride != 1:\n            self.downsample = nn.Sequential(\n                nn.Conv2d(in_chan, out_chan,\n                          kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(out_chan),\n                )\n\n    def forward(self, x):\n        residual = self.conv1(x)\n        residual = F.relu(self.bn1(residual))\n        residual = self.conv2(residual)\n        residual = self.bn2(residual)\n\n        shortcut = x\n        if self.downsample is not None:\n            shortcut = self.downsample(x)\n\n        out = shortcut + residual\n        out = self.relu(out)\n        return out\n\n\ndef create_layer_basic(in_chan, out_chan, bnum, stride=1):\n    layers = [BasicBlock(in_chan, out_chan, stride=stride)]\n    for i in range(bnum-1):\n        layers.append(BasicBlock(out_chan, out_chan, stride=1))\n    return nn.Sequential(*layers)\n\n\nclass Resnet18(nn.Module):\n    def __init__(self):\n        super(Resnet18, self).__init__()\n        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, 
stride=2, padding=3,\n                               bias=False)\n        self.bn1 = nn.BatchNorm2d(64)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n        self.layer1 = create_layer_basic(64, 64, bnum=2, stride=1)\n        self.layer2 = create_layer_basic(64, 128, bnum=2, stride=2)\n        self.layer3 = create_layer_basic(128, 256, bnum=2, stride=2)\n        self.layer4 = create_layer_basic(256, 512, bnum=2, stride=2)\n        self.init_weight()\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = F.relu(self.bn1(x))\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n        feat8 = self.layer2(x) # 1/8\n        feat16 = self.layer3(feat8) # 1/16\n        feat32 = self.layer4(feat16) # 1/32\n        return feat8, feat16, feat32\n\n    def init_weight(self):\n        state_dict = modelzoo.load_url(resnet18_url)\n        self_state_dict = self.state_dict()\n        for k, v in state_dict.items():\n            if 'fc' in k: continue\n            self_state_dict.update({k: v})\n        self.load_state_dict(self_state_dict)\n\n    def get_params(self):\n        wd_params, nowd_params = [], []\n        for name, module in self.named_modules():\n            if isinstance(module, (nn.Linear, nn.Conv2d)):\n                wd_params.append(module.weight)\n                if not module.bias is None:\n                    nowd_params.append(module.bias)\n            elif isinstance(module,  nn.BatchNorm2d):\n                nowd_params += list(module.parameters())\n        return wd_params, nowd_params\n\n\nif __name__ == \"__main__\":\n    net = Resnet18()\n    x = torch.randn(16, 3, 224, 224)\n    out = net(x)\n    print(out[0].size())\n    print(out[1].size())\n    print(out[2].size())\n    net.get_params()\n"
  },
  {
    "path": "third_part/GPEN/gpen_face_enhancer.py",
    "content": "import cv2\nimport numpy as np\n\n######### face enhancement\nfrom face_parse.face_parsing import FaceParse\nfrom face_detect.retinaface_detection import RetinaFaceDetection\nfrom face_parse.face_parsing import FaceParse\nfrom face_model.face_gan import FaceGAN\n# from sr_model.real_esrnet import RealESRNet\nfrom align_faces import warp_and_crop_face, get_reference_facial_points\nfrom utils.inference_utils import Laplacian_Pyramid_Blending_with_mask\n\nclass FaceEnhancement(object):\n    def __init__(self, base_dir='./', size=512, model=None, use_sr=True, sr_model=None, channel_multiplier=2, narrow=1, device='cuda'):\n        self.facedetector = RetinaFaceDetection(base_dir, device)\n        self.facegan = FaceGAN(base_dir, size, model, channel_multiplier, narrow, device=device)\n        # self.srmodel =  RealESRNet(base_dir, sr_model, device=device)\n        self.srmodel=None\n        self.faceparser = FaceParse(base_dir, device=device)\n        self.use_sr = use_sr\n        self.size = size\n        self.threshold = 0.9\n\n        # the mask for pasting restored faces back\n        self.mask = np.zeros((512, 512), np.float32)\n        cv2.rectangle(self.mask, (26, 26), (486, 486), (1, 1, 1), -1, cv2.LINE_AA)\n        self.mask = cv2.GaussianBlur(self.mask, (101, 101), 11)\n        self.mask = cv2.GaussianBlur(self.mask, (101, 101), 11)\n\n        self.kernel = np.array((\n                [0.0625, 0.125, 0.0625],\n                [0.125, 0.25, 0.125],\n                [0.0625, 0.125, 0.0625]), dtype=\"float32\")\n\n        # get the reference 5 landmarks position in the crop settings\n        default_square = True\n        inner_padding_factor = 0.25\n        outer_padding = (0, 0)\n        self.reference_5pts = get_reference_facial_points(\n                (self.size, self.size), inner_padding_factor, outer_padding, default_square)\n\n    def mask_postprocess(self, mask, thres=20):\n        mask[:thres, :] = 0; mask[-thres:, :] = 0\n        
mask[:, :thres] = 0; mask[:, -thres:] = 0        \n        mask = cv2.GaussianBlur(mask, (101, 101), 11)\n        mask = cv2.GaussianBlur(mask, (101, 101), 11)\n        return mask.astype(np.float32)\n    \n    def process(self, img, ori_img, bbox=None, face_enhance=True, possion_blending=False):\n        if self.use_sr:\n            img_sr = self.srmodel.process(img)\n            if img_sr is not None:\n                img = cv2.resize(img, img_sr.shape[:2][::-1])\n\n        facebs, landms = self.facedetector.detect(img.copy())\n\n        orig_faces, enhanced_faces = [], []\n        height, width = img.shape[:2]\n        full_mask = np.zeros((height, width), dtype=np.float32)\n        full_img = np.zeros(ori_img.shape, dtype=np.uint8)\n\n        for i, (faceb, facial5points) in enumerate(zip(facebs, landms)):\n            if faceb[4]<self.threshold: continue\n            fh, fw = (faceb[3]-faceb[1]), (faceb[2]-faceb[0])\n\n            facial5points = np.reshape(facial5points, (2, 5))\n\n            of, tfm_inv = warp_and_crop_face(img, facial5points, reference_pts=self.reference_5pts, crop_size=(self.size, self.size))\n\n            # enhance the face\n            if face_enhance:\n                ef = self.facegan.process(of)\n            else:\n                ef = of\n                    \n            orig_faces.append(of)\n            enhanced_faces.append(ef)\n            \n            # print(ef.shape)\n            # tmp_mask = self.mask\n            '''\n            0: 'background' 1: 'skin'   2: 'nose'\n            3: 'eye_g'  4: 'l_eye'  5: 'r_eye'\n            6: 'l_brow' 7: 'r_brow' 8: 'l_ear'\n            9: 'r_ear'  10: 'mouth' 11: 'u_lip'\n            12: 'l_lip' 13: 'hair'  14: 'hat'\n            15: 'ear_r' 16: 'neck_l'    17: 'neck'\n            18: 'cloth'\n            '''\n\n            # no ear, no neck, no hair&hat,  only face region\n            mm = [0, 255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 255, 255, 0, 0, 0, 0, 0, 0]\n            
mask_sharp = self.faceparser.process(ef, mm)[0]/255.\n            tmp_mask = self.mask_postprocess(mask_sharp)\n            tmp_mask = cv2.resize(tmp_mask, ef.shape[:2])\n            mask_sharp = cv2.resize(mask_sharp, ef.shape[:2])\n\n            tmp_mask = cv2.warpAffine(tmp_mask, tfm_inv, (width, height), flags=3)\n            mask_sharp = cv2.warpAffine(mask_sharp, tfm_inv, (width, height), flags=3)\n\n            if min(fh, fw)<100: # gaussian filter for small faces\n                ef = cv2.filter2D(ef, -1, self.kernel)\n            \n            if face_enhance:\n                tmp_img = cv2.warpAffine(ef, tfm_inv, (width, height), flags=3)\n            else:\n                tmp_img = cv2.warpAffine(of, tfm_inv, (width, height), flags=3)\n\n            mask = tmp_mask - full_mask\n            full_mask[np.where(mask>0)] = tmp_mask[np.where(mask>0)]\n            full_img[np.where(mask>0)] = tmp_img[np.where(mask>0)]\n\n        mask_sharp = cv2.GaussianBlur(mask_sharp, (0,0), sigmaX=1, sigmaY=1, borderType = cv2.BORDER_DEFAULT)\n\n        full_mask = full_mask[:, :, np.newaxis]\n        mask_sharp = mask_sharp[:, :, np.newaxis]\n\n        if self.use_sr and img_sr is not None:\n            img = cv2.convertScaleAbs(img_sr*(1-full_mask) + full_img*full_mask)\n        \n        elif possion_blending is True:\n            if bbox is not None:\n                y1, y2, x1, x2 = bbox\n                mask_bbox = np.zeros_like(mask_sharp)\n                mask_bbox[y1:y2 - 5, x1:x2] = 1\n                full_img, ori_img, full_mask = [cv2.resize(x,(512,512)) for x in (full_img, ori_img, np.float32(mask_sharp * mask_bbox))]\n            else:\n                full_img, ori_img, full_mask = [cv2.resize(x,(512,512)) for x in (full_img, ori_img, full_mask)]\n            \n            img = Laplacian_Pyramid_Blending_with_mask(full_img, ori_img, full_mask, 6)\n            img = np.clip(img, 0 ,255)\n            img = np.uint8(cv2.resize(img, (width, height)))\n\n        
else:\n            img = cv2.convertScaleAbs(ori_img*(1-full_mask) + full_img*full_mask)\n            img = cv2.convertScaleAbs(ori_img*(1-mask_sharp) + img*mask_sharp)\n\n        return img, orig_faces, enhanced_faces"
  },
  {
    "path": "third_part/face3d/checkpoints/model_name/test_opt.txt",
    "content": "----------------- Options ---------------\n                add_image: True                          \n               bfm_folder: BFM                           \n                bfm_model: BFM_model_front.mat           \n                 camera_d: 10.0                          \n                   center: 112.0                         \n          checkpoints_dir: ./checkpoints                 \n             dataset_mode: None                          \n                 ddp_port: 12355                         \n        display_per_batch: True                          \n                    epoch: 20                            \t[default: latest]\n          eval_batch_nums: inf                           \n                    focal: 1015.0                        \n                  gpu_ids: 0                             \n     inference_batch_size: 8                             \n                init_path: checkpoints/init_model/resnet50-0676ba61.pth\n                input_dir: demo_video                    \t[default: None]\n                  isTrain: False                         \t[default: None]\n             keypoint_dir: demo_cctv                     \t[default: None]\n                    model: facerecon                     \n                     name: model_name                    \t[default: face_recon]\n                net_recon: resnet50                      \n               output_dir: demo_cctv                     \t[default: mp4]\n                    phase: test                          \n         save_split_files: False                         \n                   suffix:                               \n                  use_ddp: False                         \t[default: True]\n              use_last_fc: False                         \n                  verbose: False                         \n           vis_batch_nums: 1                             \n               world_size: 1                             \n                    z_far: 
15.0                          \n                   z_near: 5.0                           \n----------------- End -------------------\n"
  },
  {
    "path": "third_part/face3d/coeff_detector.py",
    "content": "import os\nimport glob\nimport numpy as np\nfrom os import makedirs, name\nfrom PIL import Image\nfrom tqdm import tqdm\n\nimport torch\nimport torch.nn as nn\n\nfrom face3d.options.inference_options import InferenceOptions\nfrom face3d.models import create_model\nfrom face3d.util.preprocess import align_img\nfrom face3d.util.load_mats import load_lm3d\nfrom face3d.extract_kp_videos import KeypointExtractor\n\n\nclass CoeffDetector(nn.Module):\n    def __init__(self, opt):\n        super().__init__()\n\n        self.model = create_model(opt)\n        self.model.setup(opt)\n        self.model.device = 'cuda'\n        self.model.parallelize()\n        self.model.eval()\n\n        self.lm3d_std = load_lm3d(opt.bfm_folder) \n\n    def forward(self, img, lm):\n        \n        img, trans_params = self.image_transform(img, lm)\n\n        data_input = {                \n                'imgs': img[None],\n                }        \n        self.model.set_input(data_input)  \n        self.model.test()\n        pred_coeff = {key:self.model.pred_coeffs_dict[key].cpu().numpy() for key in self.model.pred_coeffs_dict}\n        pred_coeff = np.concatenate([\n            pred_coeff['id'], \n            pred_coeff['exp'], \n            pred_coeff['tex'], \n            pred_coeff['angle'],\n            pred_coeff['gamma'],\n            pred_coeff['trans'],\n            trans_params[None],\n            ], 1)\n        \n        return {'coeff_3dmm':pred_coeff, \n                'crop_img': Image.fromarray((img.cpu().permute(1, 2, 0).numpy()*255).astype(np.uint8))}\n\n    def image_transform(self, images, lm):\n        \"\"\"\n        param:\n            images:          -- PIL image \n            lm:              -- numpy array\n        \"\"\"\n        W,H = images.size\n        if np.mean(lm) == -1:\n            lm = (self.lm3d_std[:, :2]+1)/2.\n            lm = np.concatenate(\n                [lm[:, :1]*W, lm[:, 1:2]*H], 1\n            )\n        else:\n           
 lm[:, -1] = H - 1 - lm[:, -1]\n\n        trans_params, img, lm, _ = align_img(images, lm, self.lm3d_std)        \n        img = torch.tensor(np.array(img)/255., dtype=torch.float32).permute(2, 0, 1)\n        trans_params = np.array([float(item) for item in np.hsplit(trans_params, 5)])\n        trans_params = torch.tensor(trans_params.astype(np.float32))\n        return img, trans_params        \n\ndef get_data_path(root, keypoint_root):\n    filenames = list()\n    keypoint_filenames = list()\n\n    IMAGE_EXTENSIONS_LOWERCASE = {'jpg', 'png', 'jpeg', 'webp'}\n    IMAGE_EXTENSIONS = IMAGE_EXTENSIONS_LOWERCASE.union({f.upper() for f in IMAGE_EXTENSIONS_LOWERCASE})\n    extensions = IMAGE_EXTENSIONS\n\n    for ext in extensions:\n        filenames += glob.glob(f'{root}/*.{ext}', recursive=True)\n    filenames = sorted(filenames)\n    for filename in filenames:\n        name = os.path.splitext(os.path.basename(filename))[0]\n        keypoint_filenames.append(\n            os.path.join(keypoint_root, name + '.txt')\n        )\n    return filenames, keypoint_filenames\n\n\nif __name__ == \"__main__\":\n    opt = InferenceOptions().parse() \n    coeff_detector = CoeffDetector(opt)\n    kp_extractor = KeypointExtractor()\n    image_names, keypoint_names = get_data_path(opt.input_dir, opt.keypoint_dir)\n    makedirs(opt.keypoint_dir, exist_ok=True)\n    makedirs(opt.output_dir, exist_ok=True)\n\n    for image_name, keypoint_name in tqdm(zip(image_names, keypoint_names)):\n        image = Image.open(image_name)\n        if not os.path.isfile(keypoint_name):\n            lm = kp_extractor.extract_keypoint(image, keypoint_name)\n        else:\n            lm = np.loadtxt(keypoint_name).astype(np.float32)\n            lm = lm.reshape([-1, 2]) \n        predicted = coeff_detector(image, lm)\n        name = os.path.splitext(os.path.basename(image_name))[0]\n        np.savetxt(\n            \"{}/{}_3dmm_coeff.txt\".format(opt.output_dir, name), \n            
predicted['coeff_3dmm'].reshape(-1))\n"
  },
  {
    "path": "third_part/face3d/data/__init__.py",
    "content": "\"\"\"This package includes all the modules related to data loading and preprocessing\n\n To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset.\n You need to implement four functions:\n    -- <__init__>:                      initialize the class, first call BaseDataset.__init__(self, opt).\n    -- <__len__>:                       return the size of dataset.\n    -- <__getitem__>:                   get a data point from data loader.\n    -- <modify_commandline_options>:    (optionally) add dataset-specific options and set default options.\n\nNow you can use the dataset class by specifying flag '--dataset_mode dummy'.\nSee our template dataset class 'template_dataset.py' for more details.\n\"\"\"\nimport numpy as np\nimport importlib\nimport torch.utils.data\nfrom face3d.data.base_dataset import BaseDataset\n\n\ndef find_dataset_using_name(dataset_name):\n    \"\"\"Import the module \"data/[dataset_name]_dataset.py\".\n\n    In the file, the class called DatasetNameDataset() will\n    be instantiated. 
It has to be a subclass of BaseDataset,\n    and it is case-insensitive.\n    \"\"\"\n    dataset_filename = \"data.\" + dataset_name + \"_dataset\"\n    datasetlib = importlib.import_module(dataset_filename)\n\n    dataset = None\n    target_dataset_name = dataset_name.replace('_', '') + 'dataset'\n    for name, cls in datasetlib.__dict__.items():\n        if name.lower() == target_dataset_name.lower() \\\n           and issubclass(cls, BaseDataset):\n            dataset = cls\n\n    if dataset is None:\n        raise NotImplementedError(\"In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase.\" % (dataset_filename, target_dataset_name))\n\n    return dataset\n\n\ndef get_option_setter(dataset_name):\n    \"\"\"Return the static method <modify_commandline_options> of the dataset class.\"\"\"\n    dataset_class = find_dataset_using_name(dataset_name)\n    return dataset_class.modify_commandline_options\n\n\ndef create_dataset(opt, rank=0):\n    \"\"\"Create a dataset given the option.\n\n    This function wraps the class CustomDatasetDataLoader.\n        This is the main interface between this package and 'train.py'/'test.py'\n\n    Example:\n        >>> from data import create_dataset\n        >>> dataset = create_dataset(opt)\n    \"\"\"\n    data_loader = CustomDatasetDataLoader(opt, rank=rank)\n    dataset = data_loader.load_data()\n    return dataset\n\nclass CustomDatasetDataLoader():\n    \"\"\"Wrapper class of Dataset class that performs multi-threaded data loading\"\"\"\n\n    def __init__(self, opt, rank=0):\n        \"\"\"Initialize this class\n\n        Step 1: create a dataset instance given the name [dataset_mode]\n        Step 2: create a multi-threaded data loader.\n        \"\"\"\n        self.opt = opt\n        dataset_class = find_dataset_using_name(opt.dataset_mode)\n        self.dataset = dataset_class(opt)\n        self.sampler = None\n        print(\"rank %d %s dataset [%s] was created\" % (rank, 
self.dataset.name, type(self.dataset).__name__))\n        if opt.use_ddp and opt.isTrain:\n            world_size = opt.world_size\n            self.sampler = torch.utils.data.distributed.DistributedSampler(\n                    self.dataset,\n                    num_replicas=world_size,\n                    rank=rank,\n                    shuffle=not opt.serial_batches\n                )\n            self.dataloader = torch.utils.data.DataLoader(\n                        self.dataset,\n                        sampler=self.sampler,\n                        num_workers=int(opt.num_threads / world_size), \n                        batch_size=int(opt.batch_size / world_size), \n                        drop_last=True)\n        else:\n            self.dataloader = torch.utils.data.DataLoader(\n                self.dataset,\n                batch_size=opt.batch_size,\n                shuffle=(not opt.serial_batches) and opt.isTrain,\n                num_workers=int(opt.num_threads),\n                drop_last=True\n            )\n\n    def set_epoch(self, epoch):\n        self.dataset.current_epoch = epoch\n        if self.sampler is not None:\n            self.sampler.set_epoch(epoch)\n\n    def load_data(self):\n        return self\n\n    def __len__(self):\n        \"\"\"Return the number of data in the dataset\"\"\"\n        return min(len(self.dataset), self.opt.max_dataset_size)\n\n    def __iter__(self):\n        \"\"\"Return a batch of data\"\"\"\n        for i, data in enumerate(self.dataloader):\n            if i * self.opt.batch_size >= self.opt.max_dataset_size:\n                break\n            yield data\n"
  },
  {
    "path": "third_part/face3d/data/base_dataset.py",
    "content": "\"\"\"This module implements an abstract base class (ABC) 'BaseDataset' for datasets.\n\nIt also includes common transformation functions (e.g., get_transform, __scale_width), which can be later used in subclasses.\n\"\"\"\nimport random\nimport numpy as np\nimport torch.utils.data as data\nfrom PIL import Image\nimport torchvision.transforms as transforms\nfrom abc import ABC, abstractmethod\n\n\nclass BaseDataset(data.Dataset, ABC):\n    \"\"\"This class is an abstract base class (ABC) for datasets.\n\n    To create a subclass, you need to implement the following four functions:\n    -- <__init__>:                      initialize the class, first call BaseDataset.__init__(self, opt).\n    -- <__len__>:                       return the size of dataset.\n    -- <__getitem__>:                   get a data point.\n    -- <modify_commandline_options>:    (optionally) add dataset-specific options and set default options.\n    \"\"\"\n\n    def __init__(self, opt):\n        \"\"\"Initialize the class; save the options in the class\n\n        Parameters:\n            opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions\n        \"\"\"\n        self.opt = opt\n        # self.root = opt.dataroot\n        self.current_epoch = 0\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        \"\"\"Add new dataset-specific options, and rewrite default values for existing options.\n\n        Parameters:\n            parser          -- original option parser\n            is_train (bool) -- whether training phase or test phase. 
You can use this flag to add training-specific or test-specific options.\n\n        Returns:\n            the modified parser.\n        \"\"\"\n        return parser\n\n    @abstractmethod\n    def __len__(self):\n        \"\"\"Return the total number of images in the dataset.\"\"\"\n        return 0\n\n    @abstractmethod\n    def __getitem__(self, index):\n        \"\"\"Return a data point and its metadata information.\n\n        Parameters:\n            index -- a random integer for data indexing\n\n        Returns:\n            a dictionary of data with their names. It usually contains the data itself and its metadata information.\n        \"\"\"\n        pass\n\n\ndef get_transform(grayscale=False):\n    transform_list = []\n    if grayscale:\n        transform_list.append(transforms.Grayscale(1))\n    transform_list += [transforms.ToTensor()]\n    return transforms.Compose(transform_list)\n\ndef get_affine_mat(opt, size):\n    # rot_rad must be initialized here: it is used below even when 'rot' is not in opt.preprocess\n    shift_x, shift_y, scale, rot_angle, rot_rad, flip = 0., 0., 1., 0., 0., False\n    w, h = size\n\n    if 'shift' in opt.preprocess:\n        shift_pixs = int(opt.shift_pixs)\n        shift_x = random.randint(-shift_pixs, shift_pixs)\n        shift_y = random.randint(-shift_pixs, shift_pixs)\n    if 'scale' in opt.preprocess:\n        scale = 1 + opt.scale_delta * (2 * random.random() - 1)\n    if 'rot' in opt.preprocess:\n        rot_angle = opt.rot_angle * (2 * random.random() - 1)\n        rot_rad = -rot_angle * np.pi/180\n    if 'flip' in opt.preprocess:\n        flip = random.random() > 0.5\n\n    shift_to_origin = np.array([1, 0, -w//2, 0, 1, -h//2, 0, 0, 1]).reshape([3, 3])\n    flip_mat = np.array([-1 if flip else 1, 0, 0, 0, 1, 0, 0, 0, 1]).reshape([3, 3])\n    shift_mat = np.array([1, 0, shift_x, 0, 1, shift_y, 0, 0, 1]).reshape([3, 3])\n    rot_mat = np.array([np.cos(rot_rad), np.sin(rot_rad), 0, -np.sin(rot_rad), np.cos(rot_rad), 0, 0, 0, 1]).reshape([3, 3])\n    scale_mat = np.array([scale, 0, 0, 0, scale, 0, 0, 0, 1]).reshape([3, 
3])\n    shift_to_center = np.array([1, 0, w//2, 0, 1, h//2, 0, 0, 1]).reshape([3, 3])\n    \n    affine = shift_to_center @ scale_mat @ rot_mat @ shift_mat @ flip_mat @ shift_to_origin    \n    affine_inv = np.linalg.inv(affine)\n    return affine, affine_inv, flip\n\ndef apply_img_affine(img, affine_inv, method=Image.BICUBIC):\n    # honor the requested resampling method (e.g., BILINEAR for masks) instead of hardcoding BICUBIC\n    return img.transform(img.size, Image.AFFINE, data=affine_inv.flatten()[:6], resample=method)\n\ndef apply_lm_affine(landmark, affine, flip, size):\n    _, h = size\n    lm = landmark.copy()\n    lm[:, 1] = h - 1 - lm[:, 1]\n    lm = np.concatenate((lm, np.ones([lm.shape[0], 1])), -1)\n    lm = lm @ np.transpose(affine)\n    lm[:, :2] = lm[:, :2] / lm[:, 2:]\n    lm = lm[:, :2]\n    lm[:, 1] = h - 1 - lm[:, 1]\n    if flip:\n        lm_ = lm.copy()\n        lm_[:17] = lm[16::-1]\n        lm_[17:22] = lm[26:21:-1]\n        lm_[22:27] = lm[21:16:-1]\n        lm_[31:36] = lm[35:30:-1]\n        lm_[36:40] = lm[45:41:-1]\n        lm_[40:42] = lm[47:45:-1]\n        lm_[42:46] = lm[39:35:-1]\n        lm_[46:48] = lm[41:39:-1]\n        lm_[48:55] = lm[54:47:-1]\n        lm_[55:60] = lm[59:54:-1]\n        lm_[60:65] = lm[64:59:-1]\n        lm_[65:68] = lm[67:64:-1]\n        lm = lm_\n    return lm\n"
  },
  {
    "path": "third_part/face3d/data/flist_dataset.py",
    "content": "\"\"\"This script defines the custom dataset for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport os.path\nfrom data.base_dataset import BaseDataset, get_transform, get_affine_mat, apply_img_affine, apply_lm_affine\nfrom data.image_folder import make_dataset\nfrom PIL import Image\nimport random\nimport util.util as util\nimport numpy as np\nimport json\nimport torch\nfrom scipy.io import loadmat, savemat\nimport pickle\nfrom util.preprocess import align_img, estimate_norm\nfrom util.load_mats import load_lm3d\n\n\ndef default_flist_reader(flist):\n    \"\"\"\n    flist format: impath label\\nimpath label\\n ... (same as caffe's filelist)\n    \"\"\"\n    imlist = []\n    with open(flist, 'r') as rf:\n        for line in rf.readlines():\n            impath = line.strip()\n            imlist.append(impath)\n\n    return imlist\n\ndef jason_flist_reader(flist):\n    with open(flist, 'r') as fp:\n        info = json.load(fp)\n    return info\n\ndef parse_label(label):\n    return torch.tensor(np.array(label).astype(np.float32))\n\n\nclass FlistDataset(BaseDataset):\n    \"\"\"\n    It requires one directory to host training images '/path/to/data/train'.\n    You can train the model with the dataset flag '--dataroot /path/to/data'.\n    \"\"\"\n\n    def __init__(self, opt):\n        \"\"\"Initialize this dataset class.\n\n        Parameters:\n            opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions\n        \"\"\"\n        BaseDataset.__init__(self, opt)\n        \n        self.lm3d_std = load_lm3d(opt.bfm_folder)\n        \n        msk_names = default_flist_reader(opt.flist)\n        self.msk_paths = [os.path.join(opt.data_root, i) for i in msk_names]\n\n        self.size = len(self.msk_paths)\n        self.opt = opt\n        \n        self.name = 'train' if opt.isTrain else 'val'\n        if '_' in opt.flist:\n            self.name += '_' + opt.flist.split(os.sep)[-1].split('_')[0]\n\n    def 
__getitem__(self, index):\n        \"\"\"Return a data point and its metadata information.\n\n        Parameters:\n            index (int)      -- a random integer for data indexing\n\n        Returns a dictionary that contains A, B, A_paths and B_paths\n            img (tensor)       -- an image in the input domain\n            msk (tensor)       -- its corresponding attention mask\n            lm  (tensor)       -- its corresponding 3d landmarks\n            im_paths (str)     -- image paths\n            aug_flag (bool)    -- a flag used to tell whether it is raw or augmented\n        \"\"\"\n        msk_path = self.msk_paths[index % self.size]  # make sure index is within the range\n        img_path = msk_path.replace('mask/', '')\n        lm_path = '.'.join(msk_path.replace('mask', 'landmarks').split('.')[:-1]) + '.txt'\n\n        raw_img = Image.open(img_path).convert('RGB')\n        raw_msk = Image.open(msk_path).convert('RGB')\n        raw_lm = np.loadtxt(lm_path).astype(np.float32)\n\n        _, img, lm, msk = align_img(raw_img, raw_lm, self.lm3d_std, raw_msk)\n        \n        aug_flag = self.opt.use_aug and self.opt.isTrain\n        if aug_flag:\n            img, lm, msk = self._augmentation(img, lm, self.opt, msk)\n        \n        _, H = img.size\n        M = estimate_norm(lm, H)\n        transform = get_transform()\n        img_tensor = transform(img)\n        msk_tensor = transform(msk)[:1, ...]\n        lm_tensor = parse_label(lm)\n        M_tensor = parse_label(M)\n\n        return {'imgs': img_tensor, \n                'lms': lm_tensor, \n                'msks': msk_tensor, \n                'M': M_tensor,\n                'im_paths': img_path, \n                'aug_flag': aug_flag,\n                'dataset': self.name}\n\n    def _augmentation(self, img, lm, opt, msk=None):\n        affine, affine_inv, flip = get_affine_mat(opt, img.size)\n        img = apply_img_affine(img, affine_inv)\n        lm = apply_lm_affine(lm, affine, flip, 
img.size)\n        if msk is not None:\n            msk = apply_img_affine(msk, affine_inv, method=Image.BILINEAR)\n        return img, lm, msk\n\n    def __len__(self):\n        \"\"\"Return the total number of images in the dataset.\n        \"\"\"\n        return self.size\n"
  },
  {
    "path": "third_part/face3d/data/image_folder.py",
    "content": "\"\"\"A modified image folder class\n\nWe modify the official PyTorch image folder (https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py)\nso that this class can load images from both current directory and its subdirectories.\n\"\"\"\nimport numpy as np\nimport torch.utils.data as data\n\nfrom PIL import Image\nimport os\nimport os.path\n\nIMG_EXTENSIONS = [\n    '.jpg', '.JPG', '.jpeg', '.JPEG',\n    '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP',\n    '.tif', '.TIF', '.tiff', '.TIFF',\n]\n\n\ndef is_image_file(filename):\n    return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)\n\n\ndef make_dataset(dir, max_dataset_size=float(\"inf\")):\n    images = []\n    assert os.path.isdir(dir) or os.path.islink(dir), '%s is not a valid directory' % dir\n\n    for root, _, fnames in sorted(os.walk(dir, followlinks=True)):\n        for fname in fnames:\n            if is_image_file(fname):\n                path = os.path.join(root, fname)\n                images.append(path)\n    return images[:min(max_dataset_size, len(images))]\n\n\ndef default_loader(path):\n    return Image.open(path).convert('RGB')\n\n\nclass ImageFolder(data.Dataset):\n\n    def __init__(self, root, transform=None, return_paths=False,\n                 loader=default_loader):\n        imgs = make_dataset(root)\n        if len(imgs) == 0:\n            raise(RuntimeError(\"Found 0 images in: \" + root + \"\\n\"\n                               \"Supported image extensions are: \" + \",\".join(IMG_EXTENSIONS)))\n\n        self.root = root\n        self.imgs = imgs\n        self.transform = transform\n        self.return_paths = return_paths\n        self.loader = loader\n\n    def __getitem__(self, index):\n        path = self.imgs[index]\n        img = self.loader(path)\n        if self.transform is not None:\n            img = self.transform(img)\n        if self.return_paths:\n            return img, path\n        else:\n            return 
img\n\n    def __len__(self):\n        return len(self.imgs)\n"
  },
  {
    "path": "third_part/face3d/data/template_dataset.py",
    "content": "\"\"\"Dataset class template\n\nThis module provides a template for users to implement custom datasets.\nYou can specify '--dataset_mode template' to use this dataset.\nThe class name should be consistent with both the filename and its dataset_mode option.\nThe filename should be <dataset_mode>_dataset.py\nThe class name should be <Dataset_mode>Dataset.py\nYou need to implement the following functions:\n    -- <modify_commandline_options>:　Add dataset-specific options and rewrite default values for existing options.\n    -- <__init__>: Initialize this dataset class.\n    -- <__getitem__>: Return a data point and its metadata information.\n    -- <__len__>: Return the number of images.\n\"\"\"\nfrom data.base_dataset import BaseDataset, get_transform\n# from data.image_folder import make_dataset\n# from PIL import Image\n\n\nclass TemplateDataset(BaseDataset):\n    \"\"\"A template dataset class for you to implement custom datasets.\"\"\"\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        \"\"\"Add new dataset-specific options, and rewrite default values for existing options.\n\n        Parameters:\n            parser          -- original option parser\n            is_train (bool) -- whether training phase or test phase. 
You can use this flag to add training-specific or test-specific options.\n\n        Returns:\n            the modified parser.\n        \"\"\"\n        parser.add_argument('--new_dataset_option', type=float, default=1.0, help='new dataset option')\n        parser.set_defaults(max_dataset_size=10, new_dataset_option=2.0)  # specify dataset-specific default values\n        return parser\n\n    def __init__(self, opt):\n        \"\"\"Initialize this dataset class.\n\n        Parameters:\n            opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions\n\n        A few things can be done here.\n        - save the options (have been done in BaseDataset)\n        - get image paths and meta information of the dataset.\n        - define the image transformation.\n        \"\"\"\n        # save the option and dataset root\n        BaseDataset.__init__(self, opt)\n        # get the image paths of your dataset;\n        self.image_paths = []  # You can call sorted(make_dataset(self.root, opt.max_dataset_size)) to get all the image paths under the directory self.root\n        # define the default transform function. You can use <base_dataset.get_transform>; You can also define your custom transform function\n        self.transform = get_transform()  # note: this repo's get_transform only takes a 'grayscale' flag, not opt\n\n    def __getitem__(self, index):\n        \"\"\"Return a data point and its metadata information.\n\n        Parameters:\n            index -- a random integer for data indexing\n\n        Returns:\n            a dictionary of data with their names. It usually contains the data itself and its metadata information.\n\n        Step 1: get a random image path: e.g., path = self.image_paths[index]\n        Step 2: load your data from the disk: e.g., image = Image.open(path).convert('RGB').\n        Step 3: convert your data to a PyTorch tensor. You can use helper functions such as self.transform. 
e.g., data = self.transform(image)\n        Step 4: return a data point as a dictionary.\n        \"\"\"\n        path = 'temp'    # needs to be a string\n        data_A = None    # needs to be a tensor\n        data_B = None    # needs to be a tensor\n        return {'data_A': data_A, 'data_B': data_B, 'path': path}\n\n    def __len__(self):\n        \"\"\"Return the total number of images.\"\"\"\n        return len(self.image_paths)\n"
  },
  {
    "path": "third_part/face3d/data_preparation.py",
    "content": "\"\"\"This script is the data preparation script for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport os \nimport numpy as np\nimport argparse\nfrom util.detect_lm68 import detect_68p,load_lm_graph\nfrom util.skin_mask import get_skin_mask\nfrom util.generate_list import check_list, write_list\nimport warnings\nwarnings.filterwarnings(\"ignore\") \n\nparser = argparse.ArgumentParser()\nparser.add_argument('--data_root', type=str, default='datasets', help='root directory for training data')\nparser.add_argument('--img_folder', nargs=\"+\", required=True, help='folders of training images')\nparser.add_argument('--mode', type=str, default='train', help='train or val')\nopt = parser.parse_args()\n\nos.environ['CUDA_VISIBLE_DEVICES'] = '0'\n\ndef data_prepare(folder_list,mode):\n\n    lm_sess,input_op,output_op = load_lm_graph('./checkpoints/lm_model/68lm_detector.pb') # load a tensorflow version 68-landmark detector\n\n    for img_folder in folder_list:\n        detect_68p(img_folder,lm_sess,input_op,output_op) # detect landmarks for images\n        get_skin_mask(img_folder) # generate skin attention mask for images\n\n    # create files that record path to all training data\n    msks_list = []\n    for img_folder in folder_list:\n        path = os.path.join(img_folder, 'mask')\n        msks_list += ['/'.join([img_folder, 'mask', i]) for i in sorted(os.listdir(path)) if 'jpg' in i or \n                                                    'png' in i or 'jpeg' in i or 'PNG' in i]\n\n    imgs_list = [i.replace('mask/', '') for i in msks_list]\n    lms_list = [i.replace('mask', 'landmarks') for i in msks_list]\n    lms_list = ['.'.join(i.split('.')[:-1]) + '.txt' for i in lms_list]\n    \n    lms_list_final, imgs_list_final, msks_list_final = check_list(lms_list, imgs_list, msks_list) # check if the path is valid\n    write_list(lms_list_final, imgs_list_final, msks_list_final, mode=mode) # save files\n\nif __name__ == '__main__':\n    
print('Datasets:', opt.img_folder)\n    data_prepare([os.path.join(opt.data_root, folder) for folder in opt.img_folder], opt.mode)\n"
  },
  {
    "path": "third_part/face3d/extract_kp_videos.py",
    "content": "import os\nimport cv2\nimport time\nimport glob\nimport argparse\nimport face_alignment\nimport numpy as np\nfrom PIL import Image\nimport torch\nfrom tqdm import tqdm\nfrom itertools import cycle\n\nfrom torch.multiprocessing import Pool, Process, set_start_method\n\nclass KeypointExtractor():\n    def __init__(self):\n        device = 'cuda' if torch.cuda.is_available() else 'cpu'\n        self.detector = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device=device)   \n\n    def extract_keypoint(self, images, name=None, info=True):\n        if isinstance(images, list):\n            keypoints = []\n            if info:\n                i_range = tqdm(images,desc='landmark Det:')\n            else:\n                i_range = images\n\n            for image in i_range:\n                current_kp = self.extract_keypoint(image)\n                if np.mean(current_kp) == -1 and keypoints:\n                    keypoints.append(keypoints[-1])\n                else:\n                    keypoints.append(current_kp[None])\n\n            keypoints = np.concatenate(keypoints, 0)\n            np.savetxt(os.path.splitext(name)[0]+'.txt', keypoints.reshape(-1))\n            return keypoints\n        else:\n            while True:\n                try:\n                    keypoints = self.detector.get_landmarks_from_image(np.array(images))[0]\n                    break\n                except RuntimeError as e:\n                    if str(e).startswith('CUDA'):\n                        print(\"Warning: out of memory, sleep for 1s\")\n                        time.sleep(1)\n                    else:\n                        print(e)\n                        break    \n                except TypeError:\n                    print('No face detected in this image')\n                    shape = [68, 2]\n                    keypoints = -1. 
 * np.ones(shape)                    \n                    break\n            if name is not None:\n                np.savetxt(os.path.splitext(name)[0]+'.txt', keypoints.reshape(-1))\n            return keypoints\n\ndef read_video(filename):\n    frames = []\n    cap = cv2.VideoCapture(filename)\n    while cap.isOpened():\n        ret, frame = cap.read()\n        if ret:\n            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n            frame = Image.fromarray(frame)\n            frames.append(frame)\n        else:\n            break\n    cap.release()\n    return frames\n\ndef run(data):\n    filename, opt, device = data\n    os.environ['CUDA_VISIBLE_DEVICES'] = device\n    kp_extractor = KeypointExtractor()\n    images = read_video(filename)\n    name = filename.split('/')[-2:]\n    os.makedirs(os.path.join(opt.output_dir, name[-2]), exist_ok=True)\n    kp_extractor.extract_keypoint(\n        images, \n        name=os.path.join(opt.output_dir, name[-2], name[-1])\n    )\n\nif __name__ == '__main__':\n    set_start_method('spawn')\n    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n    parser.add_argument('--input_dir', type=str, help='the folder of the input files')\n    parser.add_argument('--output_dir', type=str, help='the folder of the output files')\n    parser.add_argument('--device_ids', type=str, default='0,1')\n    parser.add_argument('--workers', type=int, default=4)\n\n    opt = parser.parse_args()\n    filenames = list()\n    VIDEO_EXTENSIONS_LOWERCASE = {'mp4'}\n    VIDEO_EXTENSIONS = VIDEO_EXTENSIONS_LOWERCASE.union({f.upper() for f in VIDEO_EXTENSIONS_LOWERCASE})\n    extensions = VIDEO_EXTENSIONS\n\n    # accumulate matches across extensions instead of overwriting filenames on each iteration\n    for ext in extensions:\n        filenames += glob.glob(f'{opt.input_dir}/*.{ext}')\n    filenames = sorted(filenames)\n    print('Total number of videos:', len(filenames))\n    pool = Pool(opt.workers)\n    args_list = cycle([opt])\n    
device_ids = opt.device_ids.split(\",\")\n    device_ids = cycle(device_ids)\n    for data in tqdm(pool.imap_unordered(run, zip(filenames, args_list, device_ids))):\n        pass\n"
  },
  {
    "path": "third_part/face3d/face_recon_videos.py",
    "content": "import os\nimport cv2\nimport glob\nimport numpy as np\nfrom PIL import Image\nfrom tqdm import tqdm\nfrom scipy.io import savemat\n\nimport torch \n\nfrom models import create_model\nfrom options.inference_options import InferenceOptions\nfrom util.preprocess import align_img\nfrom util.load_mats import load_lm3d\nfrom util.util import mkdirs, tensor2im, save_image\n\n\ndef get_data_path(root, keypoint_root):\n    filenames = list()\n    keypoint_filenames = list()\n\n    VIDEO_EXTENSIONS_LOWERCASE = {'mp4'}\n    VIDEO_EXTENSIONS = VIDEO_EXTENSIONS_LOWERCASE.union({f.upper() for f in VIDEO_EXTENSIONS_LOWERCASE})\n    extensions = VIDEO_EXTENSIONS\n\n    for ext in extensions:\n        filenames += glob.glob(f'{root}/**/*.{ext}', recursive=True)\n    filenames = sorted(filenames)\n    keypoint_filenames = sorted(glob.glob(f'{keypoint_root}/**/*.txt', recursive=True))\n    assert len(filenames) == len(keypoint_filenames)\n\n    return filenames, keypoint_filenames\n\nclass VideoPathDataset(torch.utils.data.Dataset):\n    def __init__(self, filenames, txt_filenames, bfm_folder):\n        self.filenames = filenames\n        self.txt_filenames = txt_filenames\n        self.lm3d_std = load_lm3d(bfm_folder) \n\n    def __len__(self):\n        return len(self.filenames)\n\n    def __getitem__(self, index):\n        filename = self.filenames[index]\n        txt_filename = self.txt_filenames[index]\n        frames = self.read_video(filename)\n        lm = np.loadtxt(txt_filename).astype(np.float32)\n        lm = lm.reshape([len(frames), -1, 2]) \n        out_images, out_trans_params = list(), list()\n        for i in range(len(frames)):\n            out_img, _, out_trans_param \\\n                = self.image_transform(frames[i], lm[i])\n            out_images.append(out_img[None])\n            out_trans_params.append(out_trans_param[None])\n        return {\n            'imgs': torch.cat(out_images, 0),\n            'trans_param':torch.cat(out_trans_params, 
0),\n            'filename': filename\n        }\n        \n    def read_video(self, filename):\n        frames = list()\n        cap = cv2.VideoCapture(filename)\n        while cap.isOpened():\n            ret, frame = cap.read()\n            if ret:\n                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n                frame = Image.fromarray(frame)\n                frames.append(frame)\n            else:\n                break\n        cap.release()\n        return frames\n\n    def image_transform(self, images, lm):\n        W,H = images.size\n        if np.mean(lm) == -1:\n            lm = (self.lm3d_std[:, :2]+1)/2.\n            lm = np.concatenate(\n                [lm[:, :1]*W, lm[:, 1:2]*H], 1\n            )\n        else:\n            lm[:, -1] = H - 1 - lm[:, -1]\n\n        trans_params, img, lm, _ = align_img(images, lm, self.lm3d_std)        \n        img = torch.tensor(np.array(img)/255., dtype=torch.float32).permute(2, 0, 1)\n        lm = torch.tensor(lm)\n        trans_params = np.array([float(item) for item in np.hsplit(trans_params, 5)])\n        trans_params = torch.tensor(trans_params.astype(np.float32))\n        return img, lm, trans_params        \n\ndef main(opt, model):\n    # import torch.multiprocessing\n    # torch.multiprocessing.set_sharing_strategy('file_system')\n    filenames, keypoint_filenames = get_data_path(opt.input_dir, opt.keypoint_dir)\n    dataset = VideoPathDataset(filenames, keypoint_filenames, opt.bfm_folder)\n    dataloader = torch.utils.data.DataLoader(\n        dataset,\n        batch_size=1,  # can only be set to 1 here!\n        shuffle=False,\n        drop_last=False,\n        num_workers=0,\n    )\n    batch_size = opt.inference_batch_size\n    for data in tqdm(dataloader):\n        # ceiling division avoids an empty trailing batch when the frame count divides evenly\n        num_batch = (data['imgs'][0].shape[0] + batch_size - 1) // batch_size\n        pred_coeffs = list()\n        for index in range(num_batch):\n            data_input = {                \n                'imgs': 
data['imgs'][0,index*batch_size:(index+1)*batch_size],\n            }\n            model.set_input(data_input)  \n            model.test()\n            pred_coeff = {key:model.pred_coeffs_dict[key].cpu().numpy() for key in model.pred_coeffs_dict}\n            pred_coeff = np.concatenate([\n                pred_coeff['id'], \n                pred_coeff['exp'], \n                pred_coeff['tex'], \n                pred_coeff['angle'],\n                pred_coeff['gamma'],\n                pred_coeff['trans']], 1)\n            pred_coeffs.append(pred_coeff) \n            visuals = model.get_current_visuals()  # get image results\n            if False: # debug\n                for name in visuals:\n                    images = visuals[name]\n                    for i in range(images.shape[0]):\n                        image_numpy = tensor2im(images[i])\n                        save_image(\n                            image_numpy, \n                            os.path.join(\n                                opt.output_dir,\n                                os.path.basename(data['filename'][0])+str(i).zfill(5)+'.jpg')\n                            )\n                exit()\n\n        pred_coeffs = np.concatenate(pred_coeffs, 0)\n        pred_trans_params = data['trans_param'][0].cpu().numpy()\n        name = data['filename'][0].split('/')[-2:]\n        name[-1] = os.path.splitext(name[-1])[0] + '.mat'\n        os.makedirs(os.path.join(opt.output_dir, name[-2]), exist_ok=True)\n        savemat(\n            os.path.join(opt.output_dir, name[-2], name[-1]), \n            {'coeff':pred_coeffs, 'transform_params':pred_trans_params}\n        )\n\nif __name__ == '__main__':\n    opt = InferenceOptions().parse()  # get test options\n    model = create_model(opt)\n    model.setup(opt)\n    model.device = 'cuda:0'\n    model.parallelize()\n    model.eval()\n\n    main(opt, model)\n\n\n"
  },
  {
    "path": "third_part/face3d/models/__init__.py",
    "content": "\"\"\"This package contains modules related to objective functions, optimizations, and network architectures.\n\nTo add a custom model class called 'dummy', you need to add a file called 'dummy_model.py' and define a subclass DummyModel inherited from BaseModel.\nYou need to implement the following five functions:\n    -- <__init__>:                      initialize the class; first call BaseModel.__init__(self, opt).\n    -- <set_input>:                     unpack data from dataset and apply preprocessing.\n    -- <forward>:                       produce intermediate results.\n    -- <optimize_parameters>:           calculate loss, gradients, and update network weights.\n    -- <modify_commandline_options>:    (optionally) add model-specific options and set default options.\n\nIn the function <__init__>, you need to define four lists:\n    -- self.loss_names (str list):          specify the training losses that you want to plot and save.\n    -- self.model_names (str list):         define networks used in our training.\n    -- self.visual_names (str list):        specify the images that you want to display and save.\n    -- self.optimizers (optimizer list):    define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an usage.\n\nNow you can use the model class by specifying flag '--model dummy'.\nSee our template model class 'template_model.py' for more details.\n\"\"\"\n\nimport importlib\nfrom face3d.models.base_model import BaseModel\n\n\ndef find_model_using_name(model_name):\n    \"\"\"Import the module \"models/[model_name]_model.py\".\n\n    In the file, the class called DatasetNameModel() will\n    be instantiated. 
It has to be a subclass of BaseModel,\n    and it is case-insensitive.\n    \"\"\"\n    model_filename = \"face3d.models.\" + model_name + \"_model\"\n    modellib = importlib.import_module(model_filename)\n    model = None\n    target_model_name = model_name.replace('_', '') + 'model'\n    for name, cls in modellib.__dict__.items():\n        if name.lower() == target_model_name.lower() \\\n           and issubclass(cls, BaseModel):\n            model = cls\n\n    if model is None:\n        print(\"In %s.py, there should be a subclass of BaseModel with class name that matches %s in lowercase.\" % (model_filename, target_model_name))\n        exit(0)\n\n    return model\n\n\ndef get_option_setter(model_name):\n    \"\"\"Return the static method <modify_commandline_options> of the model class.\"\"\"\n    model_class = find_model_using_name(model_name)\n    return model_class.modify_commandline_options\n\n\ndef create_model(opt):\n    \"\"\"Create a model given the option.\n\n    This function wraps the model class.\n    This is the main interface between this package and 'train.py'/'test.py'.\n\n    Example:\n        >>> from models import create_model\n        >>> model = create_model(opt)\n    \"\"\"\n    model = find_model_using_name(opt.model)\n    instance = model(opt)\n    print(\"model [%s] was created\" % type(instance).__name__)\n    return instance\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/README.md",
    "content": "# Distributed Arcface Training in Pytorch\n\nThis is a deep learning library that makes face recognition efficient, and effective, which can train tens of millions\nidentity on a single server.\n\n## Requirements\n\n- Install [pytorch](http://pytorch.org) (torch>=1.6.0), our doc for [install.md](docs/install.md).\n- `pip install -r requirements.txt`.\n- Download the dataset\n  from [https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_](https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_)\n  .\n\n## How to Training\n\nTo train a model, run `train.py` with the path to the configs:\n\n### 1. Single node, 8 GPUs:\n\n```shell\npython -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr=\"127.0.0.1\" --master_port=1234 train.py configs/ms1mv3_r50\n```\n\n### 2. Multiple nodes, each node 8 GPUs:\n\nNode 0:\n\n```shell\npython -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=\"ip1\" --master_port=1234 train.py train.py configs/ms1mv3_r50\n```\n\nNode 1:\n\n```shell\npython -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr=\"ip1\" --master_port=1234 train.py train.py configs/ms1mv3_r50\n```\n\n### 3.Training resnet2060 with 8 GPUs:\n\n```shell\npython -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr=\"127.0.0.1\" --master_port=1234 train.py configs/ms1mv3_r2060.py\n```\n\n## Model Zoo\n\n- The models are available for non-commercial research purposes only.  \n- All models can be found in here.  
\n- [Baidu Yun Pan](https://pan.baidu.com/s/1CL-l4zWqsI1oDuEEYVhj-g):   e8pw  \n- [onedrive](https://1drv.ms/u/s!AswpsDO2toNKq0lWY69vN58GR6mw?e=p9Ov5d)\n\n### Performance on [**ICCV2021-MFR**](http://iccv21-mfr.com/)\n\nThe ICCV2021-MFR test set consists of non-celebrities, so we can ensure that it has very little overlap with publicly available face \nrecognition training sets such as MS1M and CASIA, which are mostly collected from online celebrities. \nAs a result, we can fairly evaluate the performance of different algorithms.  \n\nFor the **ICCV2021-MFR-ALL** set, TAR is measured on the all-to-all 1:1 protocol, with FAR less than 0.000001 (1e-6). The \nglobalised multi-racial test set contains 242,143 identities and 1,624,305 images. \n\nFor the **ICCV2021-MFR-MASK** set, TAR is measured on the mask-to-nonmask 1:1 protocol, with FAR less than 0.0001 (1e-4). \nThe mask test set contains 6,964 identities, 6,964 masked images and 13,928 non-masked images. \nIn total, there are 13,928 positive pairs and 96,983,824 negative pairs.\n\n| Datasets | backbone  | Training throughput | Size / MB  | **ICCV2021-MFR-MASK** | **ICCV2021-MFR-ALL** |\n| :---:    | :---      | :---                | :---       |:---                   |:---                  |     \n| MS1MV3    | r18  | -              | 91   | **47.85** | **68.33** |\n| Glint360k | r18  | 8536           | 91   | **53.32** | **72.07** |\n| MS1MV3    | r34  | -              | 130  | **58.72** | **77.36** |\n| Glint360k | r34  | 6344           | 130  | **65.10** | **83.02** |\n| MS1MV3    | r50  | 5500           | 166  | **63.85** | **80.53** |\n| Glint360k | r50  | 5136           | 166  | **70.23** | **87.08** |\n| MS1MV3    | r100 | -              | 248  | **69.09** | **84.31** |\n| Glint360k | r100 | 3332           | 248  | **75.57** | **90.66** |\n| MS1MV3    | mobilefacenet | 12185 | 7.8  | **41.52** | **65.26** |\n| Glint360k | mobilefacenet | 11197 | 7.8  | **44.52** | **66.48** |\n\n### Performance on IJB-C and Verification Datasets\n\n|   
Datasets | backbone      | IJBC(1e-05) | IJBC(1e-04) | agedb30 | cfp_fp | lfw  |  log    |\n| :---:      |    :---       | :---          | :---  | :---  |:---   |:---    |:---     |  \n| MS1MV3     | r18      | 92.07 | 94.66 | 97.77 | 97.73 | 99.77 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r18_fp16/training.log)|         \n| MS1MV3     | r34      | 94.10 | 95.90 | 98.10 | 98.67 | 99.80 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r34_fp16/training.log)|        \n| MS1MV3     | r50      | 94.79 | 96.46 | 98.35 | 98.96 | 99.83 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r50_fp16/training.log)|         \n| MS1MV3     | r100     | 95.31 | 96.81 | 98.48 | 99.06 | 99.85 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r100_fp16/training.log)|        \n| MS1MV3     | **r2060**| 95.34 | 97.11 | 98.67 | 99.24 | 99.87 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r2060_fp16/training.log)|\n| Glint360k  |r18-0.1   | 93.16 | 95.33 | 97.72 | 97.73 | 99.77 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r18_fp16_0.1/training.log)| \n| Glint360k  |r34-0.1   | 95.16 | 96.56 | 98.33 | 98.78 | 99.82 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r34_fp16_0.1/training.log)| \n| Glint360k  |r50-0.1   | 95.61 | 96.97 | 98.38 | 99.20 | 99.83 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r50_fp16_0.1/training.log)| \n| Glint360k  |r100-0.1  | 95.88 | 97.32 | 98.48 | 99.29 | 99.82 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r100_fp16_0.1/training.log)|\n\n[comment]: <> (More details see 
[model.md]&#40;docs/modelzoo.md&#41; in docs.)\n\n\n## [Speed Benchmark](docs/speed_benchmark.md)\n\n**Arcface Torch** can train large-scale face recognition training sets efficiently and quickly. When the number of\nclasses in the training set is greater than 300K and training is sufficient, the partial FC sampling strategy achieves the same\naccuracy with several times faster training and a smaller GPU memory footprint. \nPartial FC is a sparse variant of the model parallel architecture for large-scale face recognition. Partial FC uses a \nsparse softmax, where each batch dynamically samples a subset of class centers for training. In each iteration, only a \nsparse part of the parameters is updated, which greatly reduces GPU memory usage and computation. With Partial FC, \nwe can scale to a training set of 29 million identities, the largest to date. Partial FC also supports multi-machine distributed \ntraining and mixed precision training.\n\n![Image text](https://github.com/anxiangsir/insightface_arcface_log/blob/master/partial_fc_v2.png)\n\nFor more details, see \n[speed_benchmark.md](docs/speed_benchmark.md) in docs.\n\n### 1. Training speed of different parallel methods (samples / second), Tesla V100 32GB * 8. (Larger is better)\n\n`-` means training failed because of GPU memory limitations.\n\n| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |\n| :---    | :--- | :--- | :--- |\n|125000   | 4681         | 4824          | 5004     |\n|1400000  | **1672**     | 3043          | 4738     |\n|5500000  | **-**        | **1389**      | 3975     |\n|8000000  | **-**        | **-**         | 3565     |\n|16000000 | **-**        | **-**         | 2679     |\n|29000000 | **-**        | **-**         | **1855** |\n\n### 2. GPU memory cost of different parallel methods (MB per GPU), Tesla V100 32GB * 8. 
(Smaller is better)\n\n| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |\n| :---    | :---      | :---      | :---  |\n|125000   | 7358      | 5306      | 4868  |\n|1400000  | 32252     | 11178     | 6056  |\n|5500000  | **-**     | 32188     | 9854  |\n|8000000  | **-**     | **-**     | 12310 |\n|16000000 | **-**     | **-**     | 19950 |\n|29000000 | **-**     | **-**     | 32324 |\n\n## Evaluation on ICCV2021-MFR and IJB-C\n\nFor more details, see [eval.md](docs/eval.md) in docs.\n\n## Test\n\nWe tested many versions of PyTorch. Please create an issue if you are having trouble.  \n\n- [x] torch 1.6.0\n- [x] torch 1.7.1\n- [x] torch 1.8.0\n- [x] torch 1.9.0\n\n## Citation\n\n```\n@inproceedings{deng2019arcface,\n  title={Arcface: Additive angular margin loss for deep face recognition},\n  author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos},\n  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},\n  pages={4690--4699},\n  year={2019}\n}\n@inproceedings{an2020partical_fc,\n  title={Partial FC: Training 10 Million Identities on a Single Machine},\n  author={An, Xiang and Zhu, Xuhan and Xiao, Yang and Wu, Lan and Zhang, Ming and Gao, Yuan and Qin, Bin and\n  Zhang, Debing and Fu, Ying},\n  booktitle={Arxiv 2010.05222},\n  year={2020}\n}\n```\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/backbones/__init__.py",
    "content": "from .iresnet import iresnet18, iresnet34, iresnet50, iresnet100, iresnet200\nfrom .mobilefacenet import get_mbf\n\n\ndef get_model(name, **kwargs):\n    # resnet\n    if name == \"r18\":\n        return iresnet18(False, **kwargs)\n    elif name == \"r34\":\n        return iresnet34(False, **kwargs)\n    elif name == \"r50\":\n        return iresnet50(False, **kwargs)\n    elif name == \"r100\":\n        return iresnet100(False, **kwargs)\n    elif name == \"r200\":\n        return iresnet200(False, **kwargs)\n    elif name == \"r2060\":\n        from .iresnet2060 import iresnet2060\n        return iresnet2060(False, **kwargs)\n    elif name == \"mbf\":\n        fp16 = kwargs.get(\"fp16\", False)\n        num_features = kwargs.get(\"num_features\", 512)\n        return get_mbf(fp16=fp16, num_features=num_features)\n    else:\n        raise ValueError()"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/backbones/iresnet.py",
    "content": "import torch\nfrom torch import nn\n\n__all__ = ['iresnet18', 'iresnet34', 'iresnet50', 'iresnet100', 'iresnet200']\n\n\ndef conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes,\n                     out_planes,\n                     kernel_size=3,\n                     stride=stride,\n                     padding=dilation,\n                     groups=groups,\n                     bias=False,\n                     dilation=dilation)\n\n\ndef conv1x1(in_planes, out_planes, stride=1):\n    \"\"\"1x1 convolution\"\"\"\n    return nn.Conv2d(in_planes,\n                     out_planes,\n                     kernel_size=1,\n                     stride=stride,\n                     bias=False)\n\n\nclass IBasicBlock(nn.Module):\n    expansion = 1\n    def __init__(self, inplanes, planes, stride=1, downsample=None,\n                 groups=1, base_width=64, dilation=1):\n        super(IBasicBlock, self).__init__()\n        if groups != 1 or base_width != 64:\n            raise ValueError('BasicBlock only supports groups=1 and base_width=64')\n        if dilation > 1:\n            raise NotImplementedError(\"Dilation > 1 not supported in BasicBlock\")\n        self.bn1 = nn.BatchNorm2d(inplanes, eps=1e-05,)\n        self.conv1 = conv3x3(inplanes, planes)\n        self.bn2 = nn.BatchNorm2d(planes, eps=1e-05,)\n        self.prelu = nn.PReLU(planes)\n        self.conv2 = conv3x3(planes, planes, stride)\n        self.bn3 = nn.BatchNorm2d(planes, eps=1e-05,)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        identity = x\n        out = self.bn1(x)\n        out = self.conv1(out)\n        out = self.bn2(out)\n        out = self.prelu(out)\n        out = self.conv2(out)\n        out = self.bn3(out)\n        if self.downsample is not None:\n            identity = self.downsample(x)\n        out += identity\n        return 
out\n\n\nclass IResNet(nn.Module):\n    fc_scale = 7 * 7\n    def __init__(self,\n                 block, layers, dropout=0, num_features=512, zero_init_residual=False,\n                 groups=1, width_per_group=64, replace_stride_with_dilation=None, fp16=False):\n        super(IResNet, self).__init__()\n        self.fp16 = fp16\n        self.inplanes = 64\n        self.dilation = 1\n        if replace_stride_with_dilation is None:\n            replace_stride_with_dilation = [False, False, False]\n        if len(replace_stride_with_dilation) != 3:\n            raise ValueError(\"replace_stride_with_dilation should be None \"\n                             \"or a 3-element tuple, got {}\".format(replace_stride_with_dilation))\n        self.groups = groups\n        self.base_width = width_per_group\n        self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(self.inplanes, eps=1e-05)\n        self.prelu = nn.PReLU(self.inplanes)\n        self.layer1 = self._make_layer(block, 64, layers[0], stride=2)\n        self.layer2 = self._make_layer(block,\n                                       128,\n                                       layers[1],\n                                       stride=2,\n                                       dilate=replace_stride_with_dilation[0])\n        self.layer3 = self._make_layer(block,\n                                       256,\n                                       layers[2],\n                                       stride=2,\n                                       dilate=replace_stride_with_dilation[1])\n        self.layer4 = self._make_layer(block,\n                                       512,\n                                       layers[3],\n                                       stride=2,\n                                       dilate=replace_stride_with_dilation[2])\n        self.bn2 = nn.BatchNorm2d(512 * block.expansion, eps=1e-05,)\n        self.dropout = 
nn.Dropout(p=dropout, inplace=True)\n        self.fc = nn.Linear(512 * block.expansion * self.fc_scale, num_features)\n        self.features = nn.BatchNorm1d(num_features, eps=1e-05)\n        nn.init.constant_(self.features.weight, 1.0)\n        self.features.weight.requires_grad = False\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.normal_(m.weight, 0, 0.1)\n            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n\n        if zero_init_residual:\n            for m in self.modules():\n                if isinstance(m, IBasicBlock):\n                    nn.init.constant_(m.bn2.weight, 0)\n\n    def _make_layer(self, block, planes, blocks, stride=1, dilate=False):\n        downsample = None\n        previous_dilation = self.dilation\n        if dilate:\n            self.dilation *= stride\n            stride = 1\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                conv1x1(self.inplanes, planes * block.expansion, stride),\n                nn.BatchNorm2d(planes * block.expansion, eps=1e-05, ),\n            )\n        layers = []\n        layers.append(\n            block(self.inplanes, planes, stride, downsample, self.groups,\n                  self.base_width, previous_dilation))\n        self.inplanes = planes * block.expansion\n        for _ in range(1, blocks):\n            layers.append(\n                block(self.inplanes,\n                      planes,\n                      groups=self.groups,\n                      base_width=self.base_width,\n                      dilation=self.dilation))\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        with torch.cuda.amp.autocast(self.fp16):\n            x = self.conv1(x)\n            x = self.bn1(x)\n            x = self.prelu(x)\n            x = self.layer1(x)\n 
           x = self.layer2(x)\n            x = self.layer3(x)\n            x = self.layer4(x)\n            x = self.bn2(x)\n            x = torch.flatten(x, 1)\n            x = self.dropout(x)\n        x = self.fc(x.float() if self.fp16 else x)\n        x = self.features(x)\n        return x\n\n\ndef _iresnet(arch, block, layers, pretrained, progress, **kwargs):\n    model = IResNet(block, layers, **kwargs)\n    if pretrained:\n        raise ValueError()\n    return model\n\n\ndef iresnet18(pretrained=False, progress=True, **kwargs):\n    return _iresnet('iresnet18', IBasicBlock, [2, 2, 2, 2], pretrained,\n                    progress, **kwargs)\n\n\ndef iresnet34(pretrained=False, progress=True, **kwargs):\n    return _iresnet('iresnet34', IBasicBlock, [3, 4, 6, 3], pretrained,\n                    progress, **kwargs)\n\n\ndef iresnet50(pretrained=False, progress=True, **kwargs):\n    return _iresnet('iresnet50', IBasicBlock, [3, 4, 14, 3], pretrained,\n                    progress, **kwargs)\n\n\ndef iresnet100(pretrained=False, progress=True, **kwargs):\n    return _iresnet('iresnet100', IBasicBlock, [3, 13, 30, 3], pretrained,\n                    progress, **kwargs)\n\n\ndef iresnet200(pretrained=False, progress=True, **kwargs):\n    return _iresnet('iresnet200', IBasicBlock, [6, 26, 60, 6], pretrained,\n                    progress, **kwargs)\n\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/backbones/iresnet2060.py",
    "content": "import torch\nfrom torch import nn\n\nassert torch.__version__ >= \"1.8.1\"\nfrom torch.utils.checkpoint import checkpoint_sequential\n\n__all__ = ['iresnet2060']\n\n\ndef conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes,\n                     out_planes,\n                     kernel_size=3,\n                     stride=stride,\n                     padding=dilation,\n                     groups=groups,\n                     bias=False,\n                     dilation=dilation)\n\n\ndef conv1x1(in_planes, out_planes, stride=1):\n    \"\"\"1x1 convolution\"\"\"\n    return nn.Conv2d(in_planes,\n                     out_planes,\n                     kernel_size=1,\n                     stride=stride,\n                     bias=False)\n\n\nclass IBasicBlock(nn.Module):\n    expansion = 1\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None,\n                 groups=1, base_width=64, dilation=1):\n        super(IBasicBlock, self).__init__()\n        if groups != 1 or base_width != 64:\n            raise ValueError('BasicBlock only supports groups=1 and base_width=64')\n        if dilation > 1:\n            raise NotImplementedError(\"Dilation > 1 not supported in BasicBlock\")\n        self.bn1 = nn.BatchNorm2d(inplanes, eps=1e-05, )\n        self.conv1 = conv3x3(inplanes, planes)\n        self.bn2 = nn.BatchNorm2d(planes, eps=1e-05, )\n        self.prelu = nn.PReLU(planes)\n        self.conv2 = conv3x3(planes, planes, stride)\n        self.bn3 = nn.BatchNorm2d(planes, eps=1e-05, )\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        identity = x\n        out = self.bn1(x)\n        out = self.conv1(out)\n        out = self.bn2(out)\n        out = self.prelu(out)\n        out = self.conv2(out)\n        out = self.bn3(out)\n        if self.downsample is not None:\n            identity = 
self.downsample(x)\n        out += identity\n        return out\n\n\nclass IResNet(nn.Module):\n    fc_scale = 7 * 7\n\n    def __init__(self,\n                 block, layers, dropout=0, num_features=512, zero_init_residual=False,\n                 groups=1, width_per_group=64, replace_stride_with_dilation=None, fp16=False):\n        super(IResNet, self).__init__()\n        self.fp16 = fp16\n        self.inplanes = 64\n        self.dilation = 1\n        if replace_stride_with_dilation is None:\n            replace_stride_with_dilation = [False, False, False]\n        if len(replace_stride_with_dilation) != 3:\n            raise ValueError(\"replace_stride_with_dilation should be None \"\n                             \"or a 3-element tuple, got {}\".format(replace_stride_with_dilation))\n        self.groups = groups\n        self.base_width = width_per_group\n        self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(self.inplanes, eps=1e-05)\n        self.prelu = nn.PReLU(self.inplanes)\n        self.layer1 = self._make_layer(block, 64, layers[0], stride=2)\n        self.layer2 = self._make_layer(block,\n                                       128,\n                                       layers[1],\n                                       stride=2,\n                                       dilate=replace_stride_with_dilation[0])\n        self.layer3 = self._make_layer(block,\n                                       256,\n                                       layers[2],\n                                       stride=2,\n                                       dilate=replace_stride_with_dilation[1])\n        self.layer4 = self._make_layer(block,\n                                       512,\n                                       layers[3],\n                                       stride=2,\n                                       dilate=replace_stride_with_dilation[2])\n        self.bn2 = 
nn.BatchNorm2d(512 * block.expansion, eps=1e-05, )\n        self.dropout = nn.Dropout(p=dropout, inplace=True)\n        self.fc = nn.Linear(512 * block.expansion * self.fc_scale, num_features)\n        self.features = nn.BatchNorm1d(num_features, eps=1e-05)\n        nn.init.constant_(self.features.weight, 1.0)\n        self.features.weight.requires_grad = False\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.normal_(m.weight, 0, 0.1)\n            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n\n        if zero_init_residual:\n            for m in self.modules():\n                if isinstance(m, IBasicBlock):\n                    nn.init.constant_(m.bn2.weight, 0)\n\n    def _make_layer(self, block, planes, blocks, stride=1, dilate=False):\n        downsample = None\n        previous_dilation = self.dilation\n        if dilate:\n            self.dilation *= stride\n            stride = 1\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                conv1x1(self.inplanes, planes * block.expansion, stride),\n                nn.BatchNorm2d(planes * block.expansion, eps=1e-05, ),\n            )\n        layers = []\n        layers.append(\n            block(self.inplanes, planes, stride, downsample, self.groups,\n                  self.base_width, previous_dilation))\n        self.inplanes = planes * block.expansion\n        for _ in range(1, blocks):\n            layers.append(\n                block(self.inplanes,\n                      planes,\n                      groups=self.groups,\n                      base_width=self.base_width,\n                      dilation=self.dilation))\n\n        return nn.Sequential(*layers)\n\n    def checkpoint(self, func, num_seg, x):\n        if self.training:\n            return checkpoint_sequential(func, 
num_seg, x)\n        else:\n            return func(x)\n\n    def forward(self, x):\n        with torch.cuda.amp.autocast(self.fp16):\n            x = self.conv1(x)\n            x = self.bn1(x)\n            x = self.prelu(x)\n            x = self.layer1(x)\n            x = self.checkpoint(self.layer2, 20, x)\n            x = self.checkpoint(self.layer3, 100, x)\n            x = self.layer4(x)\n            x = self.bn2(x)\n            x = torch.flatten(x, 1)\n            x = self.dropout(x)\n        x = self.fc(x.float() if self.fp16 else x)\n        x = self.features(x)\n        return x\n\n\ndef _iresnet(arch, block, layers, pretrained, progress, **kwargs):\n    model = IResNet(block, layers, **kwargs)\n    if pretrained:\n        raise ValueError()\n    return model\n\n\ndef iresnet2060(pretrained=False, progress=True, **kwargs):\n    return _iresnet('iresnet2060', IBasicBlock, [3, 128, 1024 - 128, 3], pretrained, progress, **kwargs)\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/backbones/mobilefacenet.py",
    "content": "'''\nAdapted from https://github.com/cavalleria/cavaface.pytorch/blob/master/backbone/mobilefacenet.py\nOriginal author cavalleria\n'''\n\nimport torch.nn as nn\nfrom torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Sequential, Module\nimport torch\n\n\nclass Flatten(Module):\n    def forward(self, x):\n        return x.view(x.size(0), -1)\n\n\nclass ConvBlock(Module):\n    def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1):\n        super(ConvBlock, self).__init__()\n        self.layers = nn.Sequential(\n            Conv2d(in_c, out_c, kernel, groups=groups, stride=stride, padding=padding, bias=False),\n            BatchNorm2d(num_features=out_c),\n            PReLU(num_parameters=out_c)\n        )\n\n    def forward(self, x):\n        return self.layers(x)\n\n\nclass LinearBlock(Module):\n    def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1):\n        super(LinearBlock, self).__init__()\n        self.layers = nn.Sequential(\n            Conv2d(in_c, out_c, kernel, stride, padding, groups=groups, bias=False),\n            BatchNorm2d(num_features=out_c)\n        )\n\n    def forward(self, x):\n        return self.layers(x)\n\n\nclass DepthWise(Module):\n    def __init__(self, in_c, out_c, residual=False, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=1):\n        super(DepthWise, self).__init__()\n        self.residual = residual\n        self.layers = nn.Sequential(\n            ConvBlock(in_c, out_c=groups, kernel=(1, 1), padding=(0, 0), stride=(1, 1)),\n            ConvBlock(groups, groups, groups=groups, kernel=kernel, padding=padding, stride=stride),\n            LinearBlock(groups, out_c, kernel=(1, 1), padding=(0, 0), stride=(1, 1))\n        )\n\n    def forward(self, x):\n        short_cut = None\n        if self.residual:\n            short_cut = x\n        x = self.layers(x)\n        if self.residual:\n            output = short_cut + x\n      
  else:\n            output = x\n        return output\n\n\nclass Residual(Module):\n    def __init__(self, c, num_block, groups, kernel=(3, 3), stride=(1, 1), padding=(1, 1)):\n        super(Residual, self).__init__()\n        modules = []\n        for _ in range(num_block):\n            modules.append(DepthWise(c, c, True, kernel, stride, padding, groups))\n        self.layers = Sequential(*modules)\n\n    def forward(self, x):\n        return self.layers(x)\n\n\nclass GDC(Module):\n    def __init__(self, embedding_size):\n        super(GDC, self).__init__()\n        self.layers = nn.Sequential(\n            LinearBlock(512, 512, groups=512, kernel=(7, 7), stride=(1, 1), padding=(0, 0)),\n            Flatten(),\n            Linear(512, embedding_size, bias=False),\n            BatchNorm1d(embedding_size))\n\n    def forward(self, x):\n        return self.layers(x)\n\n\nclass MobileFaceNet(Module):\n    def __init__(self, fp16=False, num_features=512):\n        super(MobileFaceNet, self).__init__()\n        scale = 2\n        self.fp16 = fp16\n        self.layers = nn.Sequential(\n            ConvBlock(3, 64 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1)),\n            ConvBlock(64 * scale, 64 * scale, kernel=(3, 3), stride=(1, 1), padding=(1, 1), groups=64),\n            DepthWise(64 * scale, 64 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=128),\n            Residual(64 * scale, num_block=4, groups=128, kernel=(3, 3), stride=(1, 1), padding=(1, 1)),\n            DepthWise(64 * scale, 128 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=256),\n            Residual(128 * scale, num_block=6, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)),\n            DepthWise(128 * scale, 128 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=512),\n            Residual(128 * scale, num_block=2, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)),\n        )\n        self.conv_sep = ConvBlock(128 * scale, 512, 
kernel=(1, 1), stride=(1, 1), padding=(0, 0))\n        self.features = GDC(num_features)\n        self._initialize_weights()\n\n    def _initialize_weights(self):\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')\n                if m.bias is not None:\n                    m.bias.data.zero_()\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n            elif isinstance(m, nn.Linear):\n                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')\n                if m.bias is not None:\n                    m.bias.data.zero_()\n\n    def forward(self, x):\n        with torch.cuda.amp.autocast(self.fp16):\n            x = self.layers(x)\n        x = self.conv_sep(x.float() if self.fp16 else x)\n        x = self.features(x)\n        return x\n\n\ndef get_mbf(fp16, num_features):\n    return MobileFaceNet(fp16, num_features)"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/3millions.py",
    "content": "from easydict import EasyDict as edict\n\n# configs for test speed\n\nconfig = edict()\nconfig.loss = \"arcface\"\nconfig.network = \"r50\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"synthetic\"\nconfig.num_classes = 300 * 10000\nconfig.num_epoch = 30\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [10, 16, 22]\nconfig.val_targets = []\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/3millions_pfc.py",
    "content": "from easydict import EasyDict as edict\n\n# configs for test speed\n\nconfig = edict()\nconfig.loss = \"arcface\"\nconfig.network = \"r50\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 0.1\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"synthetic\"\nconfig.num_classes = 300 * 10000\nconfig.num_epoch = 30\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [10, 16, 22]\nconfig.val_targets = []\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/__init__.py",
    "content": ""
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/base.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"arcface\"\nconfig.network = \"r50\"\nconfig.resume = False\nconfig.output = \"ms1mv3_arcface_r50\"\n\nconfig.dataset = \"ms1m-retinaface-t1\"\nconfig.embedding_size = 512\nconfig.sample_rate = 1\nconfig.fp16 = False\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nif config.dataset == \"emore\":\n    config.rec = \"/train_tmp/faces_emore\"\n    config.num_classes = 85742\n    config.num_image = 5822653\n    config.num_epoch = 16\n    config.warmup_epoch = -1\n    config.decay_epoch = [8, 14, ]\n    config.val_targets = [\"lfw\", ]\n\nelif config.dataset == \"ms1m-retinaface-t1\":\n    config.rec = \"/train_tmp/ms1m-retinaface-t1\"\n    config.num_classes = 93431\n    config.num_image = 5179510\n    config.num_epoch = 25\n    config.warmup_epoch = -1\n    config.decay_epoch = [11, 17, 22]\n    config.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n\nelif config.dataset == \"glint360k\":\n    config.rec = \"/train_tmp/glint360k\"\n    config.num_classes = 360232\n    config.num_image = 17091657\n    config.num_epoch = 20\n    config.warmup_epoch = -1\n    config.decay_epoch = [8, 12, 15, 18]\n    config.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n\nelif config.dataset == \"webface\":\n    config.rec = \"/train_tmp/faces_webface_112x112\"\n    config.num_classes = 10572\n    config.num_image = \"forget\"\n    config.num_epoch = 34\n    config.warmup_epoch = -1\n    config.decay_epoch = [20, 28, 32]\n    config.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/glint360k_mbf.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"cosface\"\nconfig.network = \"mbf\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 0.1\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 2e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"/train_tmp/glint360k\"\nconfig.num_classes = 360232\nconfig.num_image = 17091657\nconfig.num_epoch = 20\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [8, 12, 15, 18]\nconfig.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/glint360k_r100.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"cosface\"\nconfig.network = \"r100\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"/train_tmp/glint360k\"\nconfig.num_classes = 360232\nconfig.num_image = 17091657\nconfig.num_epoch = 20\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [8, 12, 15, 18]\nconfig.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/glint360k_r18.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"cosface\"\nconfig.network = \"r18\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"/train_tmp/glint360k\"\nconfig.num_classes = 360232\nconfig.num_image = 17091657\nconfig.num_epoch = 20\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [8, 12, 15, 18]\nconfig.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/glint360k_r34.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"cosface\"\nconfig.network = \"r34\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"/train_tmp/glint360k\"\nconfig.num_classes = 360232\nconfig.num_image = 17091657\nconfig.num_epoch = 20\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [8, 12, 15, 18]\nconfig.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/glint360k_r50.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"cosface\"\nconfig.network = \"r50\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"/train_tmp/glint360k\"\nconfig.num_classes = 360232\nconfig.num_image = 17091657\nconfig.num_epoch = 20\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [8, 12, 15, 18]\nconfig.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/ms1mv3_mbf.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"arcface\"\nconfig.network = \"mbf\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 2e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"/train_tmp/ms1m-retinaface-t1\"\nconfig.num_classes = 93431\nconfig.num_image = 5179510\nconfig.num_epoch = 30\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [10, 20, 25]\nconfig.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/ms1mv3_r18.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"arcface\"\nconfig.network = \"r18\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"/train_tmp/ms1m-retinaface-t1\"\nconfig.num_classes = 93431\nconfig.num_image = 5179510\nconfig.num_epoch = 25\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [10, 16, 22]\nconfig.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/ms1mv3_r2060.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"arcface\"\nconfig.network = \"r2060\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 64\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"/train_tmp/ms1m-retinaface-t1\"\nconfig.num_classes = 93431\nconfig.num_image = 5179510\nconfig.num_epoch = 25\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [10, 16, 22]\nconfig.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/ms1mv3_r34.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"arcface\"\nconfig.network = \"r34\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"/train_tmp/ms1m-retinaface-t1\"\nconfig.num_classes = 93431\nconfig.num_image = 5179510\nconfig.num_epoch = 25\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [10, 16, 22]\nconfig.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/ms1mv3_r50.py",
    "content": "from easydict import EasyDict as edict\n\n# make training faster\n# our RAM is 256G\n# mount -t tmpfs -o size=140G  tmpfs /train_tmp\n\nconfig = edict()\nconfig.loss = \"arcface\"\nconfig.network = \"r50\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"/train_tmp/ms1m-retinaface-t1\"\nconfig.num_classes = 93431\nconfig.num_image = 5179510\nconfig.num_epoch = 25\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [10, 16, 22]\nconfig.val_targets = [\"lfw\", \"cfp_fp\", \"agedb_30\"]\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/configs/speed.py",
    "content": "from easydict import EasyDict as edict\n\n# configs for test speed\n\nconfig = edict()\nconfig.loss = \"arcface\"\nconfig.network = \"r50\"\nconfig.resume = False\nconfig.output = None\nconfig.embedding_size = 512\nconfig.sample_rate = 1.0\nconfig.fp16 = True\nconfig.momentum = 0.9\nconfig.weight_decay = 5e-4\nconfig.batch_size = 128\nconfig.lr = 0.1  # batch size is 512\n\nconfig.rec = \"synthetic\"\nconfig.num_classes = 100 * 10000\nconfig.num_epoch = 30\nconfig.warmup_epoch = -1\nconfig.decay_epoch = [10, 16, 22]\nconfig.val_targets = []\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/dataset.py",
    "content": "import numbers\nimport os\nimport queue as Queue\nimport threading\n\nimport mxnet as mx\nimport numpy as np\nimport torch\nfrom torch.utils.data import DataLoader, Dataset\nfrom torchvision import transforms\n\n\nclass BackgroundGenerator(threading.Thread):\n    def __init__(self, generator, local_rank, max_prefetch=6):\n        super(BackgroundGenerator, self).__init__()\n        self.queue = Queue.Queue(max_prefetch)\n        self.generator = generator\n        self.local_rank = local_rank\n        self.daemon = True\n        self.start()\n\n    def run(self):\n        torch.cuda.set_device(self.local_rank)\n        for item in self.generator:\n            self.queue.put(item)\n        self.queue.put(None)\n\n    def next(self):\n        next_item = self.queue.get()\n        if next_item is None:\n            raise StopIteration\n        return next_item\n\n    def __next__(self):\n        return self.next()\n\n    def __iter__(self):\n        return self\n\n\nclass DataLoaderX(DataLoader):\n\n    def __init__(self, local_rank, **kwargs):\n        super(DataLoaderX, self).__init__(**kwargs)\n        self.stream = torch.cuda.Stream(local_rank)\n        self.local_rank = local_rank\n\n    def __iter__(self):\n        self.iter = super(DataLoaderX, self).__iter__()\n        self.iter = BackgroundGenerator(self.iter, self.local_rank)\n        self.preload()\n        return self\n\n    def preload(self):\n        self.batch = next(self.iter, None)\n        if self.batch is None:\n            return None\n        with torch.cuda.stream(self.stream):\n            for k in range(len(self.batch)):\n                self.batch[k] = self.batch[k].to(device=self.local_rank, non_blocking=True)\n\n    def __next__(self):\n        torch.cuda.current_stream().wait_stream(self.stream)\n        batch = self.batch\n        if batch is None:\n            raise StopIteration\n        self.preload()\n        return batch\n\n\nclass MXFaceDataset(Dataset):\n    def 
__init__(self, root_dir, local_rank):\n        super(MXFaceDataset, self).__init__()\n        self.transform = transforms.Compose(\n            [transforms.ToPILImage(),\n             transforms.RandomHorizontalFlip(),\n             transforms.ToTensor(),\n             transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),\n             ])\n        self.root_dir = root_dir\n        self.local_rank = local_rank\n        path_imgrec = os.path.join(root_dir, 'train.rec')\n        path_imgidx = os.path.join(root_dir, 'train.idx')\n        self.imgrec = mx.recordio.MXIndexedRecordIO(path_imgidx, path_imgrec, 'r')\n        s = self.imgrec.read_idx(0)\n        header, _ = mx.recordio.unpack(s)\n        if header.flag > 0:\n            self.header0 = (int(header.label[0]), int(header.label[1]))\n            self.imgidx = np.array(range(1, int(header.label[0])))\n        else:\n            self.imgidx = np.array(list(self.imgrec.keys))\n\n    def __getitem__(self, index):\n        idx = self.imgidx[index]\n        s = self.imgrec.read_idx(idx)\n        header, img = mx.recordio.unpack(s)\n        label = header.label\n        if not isinstance(label, numbers.Number):\n            label = label[0]\n        label = torch.tensor(label, dtype=torch.long)\n        sample = mx.image.imdecode(img).asnumpy()\n        if self.transform is not None:\n            sample = self.transform(sample)\n        return sample, label\n\n    def __len__(self):\n        return len(self.imgidx)\n\n\nclass SyntheticDataset(Dataset):\n    def __init__(self, local_rank):\n        super(SyntheticDataset, self).__init__()\n        img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32)\n        img = np.transpose(img, (2, 0, 1))\n        img = torch.from_numpy(img).squeeze(0).float()\n        img = ((img / 255) - 0.5) / 0.5\n        self.img = img\n        self.label = 1\n\n    def __getitem__(self, index):\n        return self.img, self.label\n\n    def __len__(self):\n        
return 1000000\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/docs/eval.md",
    "content": "## Eval on ICCV2021-MFR\n\ncoming soon.\n\n\n## Eval IJBC\nYou can eval ijbc with pytorch or onnx.\n\n\n1. Eval IJBC With Onnx\n```shell\nCUDA_VISIBLE_DEVICES=0 python onnx_ijbc.py --model-root ms1mv3_arcface_r50 --image-path IJB_release/IJBC --result-dir ms1mv3_arcface_r50\n```\n\n2. Eval IJBC With Pytorch\n```shell\nCUDA_VISIBLE_DEVICES=0,1 python eval_ijbc.py \\\n--model-prefix ms1mv3_arcface_r50/backbone.pth \\\n--image-path IJB_release/IJBC \\\n--result-dir ms1mv3_arcface_r50 \\\n--batch-size 128 \\\n--job ms1mv3_arcface_r50 \\\n--target IJBC \\\n--network iresnet50\n```\n\n## Inference\n\n```shell\npython inference.py --weight ms1mv3_arcface_r50/backbone.pth --network r50\n```\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/docs/install.md",
    "content": "## v1.8.0 \n### Linux and Windows  \n```shell\n# CUDA 11.0\npip --default-timeout=100 install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n# CUDA 10.2\npip --default-timeout=100 install torch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0\n\n# CPU only\npip --default-timeout=100 install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n```\n\n\n## v1.7.1  \n### Linux and Windows  \n```shell\n# CUDA 11.0\npip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html\n\n# CUDA 10.2\npip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2\n\n# CUDA 10.1\npip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html\n\n# CUDA 9.2\npip install torch==1.7.1+cu92 torchvision==0.8.2+cu92 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html\n\n# CPU only\npip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html\n```\n\n\n## v1.6.0  \n\n### Linux and Windows\n```shell\n# CUDA 10.2\npip install torch==1.6.0 torchvision==0.7.0\n\n# CUDA 10.1\npip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html\n\n# CUDA 9.2\npip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html\n\n# CPU only\npip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html\n```"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/docs/modelzoo.md",
    "content": ""
  },
  {
    "path": "third_part/face3d/models/arcface_torch/docs/speed_benchmark.md",
    "content": "## Test Training Speed\n\n- Test Commands\n\nYou need to use the following two commands to test the Partial FC training performance. \nThe number of identites is **3 millions** (synthetic data), turn mixed precision  training on, backbone is resnet50, \nbatch size is 1024.\n```shell\n# Model Parallel\npython -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr=\"127.0.0.1\" --master_port=1234 train.py configs/3millions\n# Partial FC 0.1\npython -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr=\"127.0.0.1\" --master_port=1234 train.py configs/3millions_pfc\n```\n\n- GPU Memory\n\n```\n# (Model Parallel) gpustat -i\n[0] Tesla V100-SXM2-32GB | 64'C,  94 % | 30338 / 32510 MB \n[1] Tesla V100-SXM2-32GB | 60'C,  99 % | 28876 / 32510 MB \n[2] Tesla V100-SXM2-32GB | 60'C,  99 % | 28872 / 32510 MB \n[3] Tesla V100-SXM2-32GB | 69'C,  99 % | 28872 / 32510 MB \n[4] Tesla V100-SXM2-32GB | 66'C,  99 % | 28888 / 32510 MB \n[5] Tesla V100-SXM2-32GB | 60'C,  99 % | 28932 / 32510 MB \n[6] Tesla V100-SXM2-32GB | 68'C, 100 % | 28916 / 32510 MB \n[7] Tesla V100-SXM2-32GB | 65'C,  99 % | 28860 / 32510 MB \n\n# (Partial FC 0.1) gpustat -i\n[0] Tesla V100-SXM2-32GB | 60'C,  95 % | 10488 / 32510 MB                                                                                                                                          │·······················\n[1] Tesla V100-SXM2-32GB | 60'C,  97 % | 10344 / 32510 MB                                                                                                                                          │·······················\n[2] Tesla V100-SXM2-32GB | 61'C,  95 % | 10340 / 32510 MB                                                                                                                                          │·······················\n[3] Tesla V100-SXM2-32GB | 66'C,  95 % | 10340 / 32510 MB                                                           
                                                                               │·······················\n[4] Tesla V100-SXM2-32GB | 65'C,  94 % | 10356 / 32510 MB                                                                                                                                          │·······················\n[5] Tesla V100-SXM2-32GB | 61'C,  95 % | 10400 / 32510 MB                                                                                                                                          │·······················\n[6] Tesla V100-SXM2-32GB | 68'C,  96 % | 10384 / 32510 MB                                                                                                                                          │·······················\n[7] Tesla V100-SXM2-32GB | 64'C,  95 % | 10328 / 32510 MB                                                                                                                                        │·······················\n```\n\n- Training Speed\n\n```python\n# (Model Parallel) trainging.log\nTraining: Speed 2271.33 samples/sec   Loss 1.1624   LearningRate 0.2000   Epoch: 0   Global Step: 100 \nTraining: Speed 2269.94 samples/sec   Loss 0.0000   LearningRate 0.2000   Epoch: 0   Global Step: 150 \nTraining: Speed 2272.67 samples/sec   Loss 0.0000   LearningRate 0.2000   Epoch: 0   Global Step: 200 \nTraining: Speed 2266.55 samples/sec   Loss 0.0000   LearningRate 0.2000   Epoch: 0   Global Step: 250 \nTraining: Speed 2272.54 samples/sec   Loss 0.0000   LearningRate 0.2000   Epoch: 0   Global Step: 300 \n\n# (Partial FC 0.1) trainging.log\nTraining: Speed 5299.56 samples/sec   Loss 1.0965   LearningRate 0.2000   Epoch: 0   Global Step: 100  \nTraining: Speed 5296.37 samples/sec   Loss 0.0000   LearningRate 0.2000   Epoch: 0   Global Step: 150  \nTraining: Speed 5304.37 samples/sec   Loss 0.0000   LearningRate 0.2000   Epoch: 0   Global Step: 200  \nTraining: Speed 5274.43 samples/sec   Loss 0.0000   LearningRate 
0.2000   Epoch: 0   Global Step: 250  \nTraining: Speed 5300.10 samples/sec   Loss 0.0000   LearningRate 0.2000   Epoch: 0   Global Step: 300   \n```\n\nIn this test case, Partial FC 0.1 only use1 1/3 of the GPU memory of the model parallel, \nand the training speed is 2.5 times faster than the model parallel.\n\n\n## Speed Benchmark\n\n1. Training speed of different parallel methods (samples/second), Tesla V100 32GB * 8. (Larger is better)\n\n| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |\n| :---    | :--- | :--- | :--- |\n|125000   | 4681 | 4824 | 5004 |\n|250000   | 4047 | 4521 | 4976 |\n|500000   | 3087 | 4013 | 4900 |\n|1000000  | 2090 | 3449 | 4803 |\n|1400000  | 1672 | 3043 | 4738 |\n|2000000  | -    | 2593 | 4626 |\n|4000000  | -    | 1748 | 4208 |\n|5500000  | -    | 1389 | 3975 |\n|8000000  | -    | -    | 3565 |\n|16000000 | -    | -    | 2679 |\n|29000000 | -    | -    | 1855 |\n\n2. GPU memory cost of different parallel methods (GB per GPU), Tesla V100 32GB * 8. (Smaller is better)\n\n| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |\n| :---    | :---  | :---  | :---  |\n|125000   | 7358  | 5306  | 4868  |\n|250000   | 9940  | 5826  | 5004  |\n|500000   | 14220 | 7114  | 5202  |\n|1000000  | 23708 | 9966  | 5620  |\n|1400000  | 32252 | 11178 | 6056  |\n|2000000  | -     | 13978 | 6472  |\n|4000000  | -     | 23238 | 8284  |\n|5500000  | -     | 32188 | 9854  |\n|8000000  | -     | -     | 12310 |\n|16000000 | -     | -     | 19950 |\n|29000000 | -     | -     | 32324 |\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/eval/__init__.py",
    "content": ""
  },
  {
    "path": "third_part/face3d/models/arcface_torch/eval/verification.py",
    "content": "\"\"\"Helper for evaluation on the Labeled Faces in the Wild dataset \n\"\"\"\n\n# MIT License\n#\n# Copyright (c) 2016 David Sandberg\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in all\n# copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\n\nimport datetime\nimport os\nimport pickle\n\nimport mxnet as mx\nimport numpy as np\nimport sklearn\nimport torch\nfrom mxnet import ndarray as nd\nfrom scipy import interpolate\nfrom sklearn.decomposition import PCA\nfrom sklearn.model_selection import KFold\n\n\nclass LFold:\n    def __init__(self, n_splits=2, shuffle=False):\n        self.n_splits = n_splits\n        if self.n_splits > 1:\n            self.k_fold = KFold(n_splits=n_splits, shuffle=shuffle)\n\n    def split(self, indices):\n        if self.n_splits > 1:\n            return self.k_fold.split(indices)\n        else:\n            return [(indices, indices)]\n\n\ndef calculate_roc(thresholds,\n                  embeddings1,\n                  embeddings2,\n                  
actual_issame,\n                  nrof_folds=10,\n                  pca=0):\n    assert (embeddings1.shape[0] == embeddings2.shape[0])\n    assert (embeddings1.shape[1] == embeddings2.shape[1])\n    nrof_pairs = min(len(actual_issame), embeddings1.shape[0])\n    nrof_thresholds = len(thresholds)\n    k_fold = LFold(n_splits=nrof_folds, shuffle=False)\n\n    tprs = np.zeros((nrof_folds, nrof_thresholds))\n    fprs = np.zeros((nrof_folds, nrof_thresholds))\n    accuracy = np.zeros((nrof_folds))\n    indices = np.arange(nrof_pairs)\n\n    if pca == 0:\n        diff = np.subtract(embeddings1, embeddings2)\n        dist = np.sum(np.square(diff), 1)\n\n    for fold_idx, (train_set, test_set) in enumerate(k_fold.split(indices)):\n        if pca > 0:\n            print('doing pca on', fold_idx)\n            embed1_train = embeddings1[train_set]\n            embed2_train = embeddings2[train_set]\n            _embed_train = np.concatenate((embed1_train, embed2_train), axis=0)\n            pca_model = PCA(n_components=pca)\n            pca_model.fit(_embed_train)\n            embed1 = pca_model.transform(embeddings1)\n            embed2 = pca_model.transform(embeddings2)\n            embed1 = sklearn.preprocessing.normalize(embed1)\n            embed2 = sklearn.preprocessing.normalize(embed2)\n            diff = np.subtract(embed1, embed2)\n            dist = np.sum(np.square(diff), 1)\n\n        # Find the best threshold for the fold\n        acc_train = np.zeros((nrof_thresholds))\n        for threshold_idx, threshold in enumerate(thresholds):\n            _, _, acc_train[threshold_idx] = calculate_accuracy(\n                threshold, dist[train_set], actual_issame[train_set])\n        best_threshold_index = np.argmax(acc_train)\n        for threshold_idx, threshold in enumerate(thresholds):\n            tprs[fold_idx, threshold_idx], fprs[fold_idx, threshold_idx], _ = calculate_accuracy(\n                threshold, dist[test_set],\n                
actual_issame[test_set])\n        _, _, accuracy[fold_idx] = calculate_accuracy(\n            thresholds[best_threshold_index], dist[test_set],\n            actual_issame[test_set])\n\n    tpr = np.mean(tprs, 0)\n    fpr = np.mean(fprs, 0)\n    return tpr, fpr, accuracy\n\n\ndef calculate_accuracy(threshold, dist, actual_issame):\n    predict_issame = np.less(dist, threshold)\n    tp = np.sum(np.logical_and(predict_issame, actual_issame))\n    fp = np.sum(np.logical_and(predict_issame, np.logical_not(actual_issame)))\n    tn = np.sum(\n        np.logical_and(np.logical_not(predict_issame),\n                       np.logical_not(actual_issame)))\n    fn = np.sum(np.logical_and(np.logical_not(predict_issame), actual_issame))\n\n    tpr = 0 if (tp + fn == 0) else float(tp) / float(tp + fn)\n    fpr = 0 if (fp + tn == 0) else float(fp) / float(fp + tn)\n    acc = float(tp + tn) / dist.size\n    return tpr, fpr, acc\n\n\ndef calculate_val(thresholds,\n                  embeddings1,\n                  embeddings2,\n                  actual_issame,\n                  far_target,\n                  nrof_folds=10):\n    assert (embeddings1.shape[0] == embeddings2.shape[0])\n    assert (embeddings1.shape[1] == embeddings2.shape[1])\n    nrof_pairs = min(len(actual_issame), embeddings1.shape[0])\n    nrof_thresholds = len(thresholds)\n    k_fold = LFold(n_splits=nrof_folds, shuffle=False)\n\n    val = np.zeros(nrof_folds)\n    far = np.zeros(nrof_folds)\n\n    diff = np.subtract(embeddings1, embeddings2)\n    dist = np.sum(np.square(diff), 1)\n    indices = np.arange(nrof_pairs)\n\n    for fold_idx, (train_set, test_set) in enumerate(k_fold.split(indices)):\n\n        # Find the threshold that gives FAR = far_target\n        far_train = np.zeros(nrof_thresholds)\n        for threshold_idx, threshold in enumerate(thresholds):\n            _, far_train[threshold_idx] = calculate_val_far(\n                threshold, dist[train_set], actual_issame[train_set])\n        if 
np.max(far_train) >= far_target:\n            f = interpolate.interp1d(far_train, thresholds, kind='slinear')\n            threshold = f(far_target)\n        else:\n            threshold = 0.0\n\n        val[fold_idx], far[fold_idx] = calculate_val_far(\n            threshold, dist[test_set], actual_issame[test_set])\n\n    val_mean = np.mean(val)\n    far_mean = np.mean(far)\n    val_std = np.std(val)\n    return val_mean, val_std, far_mean\n\n\ndef calculate_val_far(threshold, dist, actual_issame):\n    predict_issame = np.less(dist, threshold)\n    true_accept = np.sum(np.logical_and(predict_issame, actual_issame))\n    false_accept = np.sum(\n        np.logical_and(predict_issame, np.logical_not(actual_issame)))\n    n_same = np.sum(actual_issame)\n    n_diff = np.sum(np.logical_not(actual_issame))\n    # print(true_accept, false_accept)\n    # print(n_same, n_diff)\n    val = float(true_accept) / float(n_same)\n    far = float(false_accept) / float(n_diff)\n    return val, far\n\n\ndef evaluate(embeddings, actual_issame, nrof_folds=10, pca=0):\n    # Calculate evaluation metrics\n    thresholds = np.arange(0, 4, 0.01)\n    embeddings1 = embeddings[0::2]\n    embeddings2 = embeddings[1::2]\n    tpr, fpr, accuracy = calculate_roc(thresholds,\n                                       embeddings1,\n                                       embeddings2,\n                                       np.asarray(actual_issame),\n                                       nrof_folds=nrof_folds,\n                                       pca=pca)\n    thresholds = np.arange(0, 4, 0.001)\n    val, val_std, far = calculate_val(thresholds,\n                                      embeddings1,\n                                      embeddings2,\n                                      np.asarray(actual_issame),\n                                      1e-3,\n                                      nrof_folds=nrof_folds)\n    return tpr, fpr, accuracy, val, val_std, far\n\n@torch.no_grad()\ndef 
load_bin(path, image_size):\n    try:\n        with open(path, 'rb') as f:\n            bins, issame_list = pickle.load(f)  # py2\n    except UnicodeDecodeError as e:\n        with open(path, 'rb') as f:\n            bins, issame_list = pickle.load(f, encoding='bytes')  # py3\n    data_list = []\n    for flip in [0, 1]:\n        data = torch.empty((len(issame_list) * 2, 3, image_size[0], image_size[1]))\n        data_list.append(data)\n    for idx in range(len(issame_list) * 2):\n        _bin = bins[idx]\n        img = mx.image.imdecode(_bin)\n        if img.shape[1] != image_size[0]:\n            img = mx.image.resize_short(img, image_size[0])\n        img = nd.transpose(img, axes=(2, 0, 1))\n        for flip in [0, 1]:\n            if flip == 1:\n                img = mx.ndarray.flip(data=img, axis=2)\n            data_list[flip][idx][:] = torch.from_numpy(img.asnumpy())\n        if idx % 1000 == 0:\n            print('loading bin', idx)\n    print(data_list[0].shape)\n    return data_list, issame_list\n\n@torch.no_grad()\ndef test(data_set, backbone, batch_size, nfolds=10):\n    print('testing verification..')\n    data_list = data_set[0]\n    issame_list = data_set[1]\n    embeddings_list = []\n    time_consumed = 0.0\n    for i in range(len(data_list)):\n        data = data_list[i]\n        embeddings = None\n        ba = 0\n        while ba < data.shape[0]:\n            bb = min(ba + batch_size, data.shape[0])\n            count = bb - ba\n            _data = data[bb - batch_size: bb]\n            time0 = datetime.datetime.now()\n            img = ((_data / 255) - 0.5) / 0.5\n            net_out: torch.Tensor = backbone(img)\n            _embeddings = net_out.detach().cpu().numpy()\n            time_now = datetime.datetime.now()\n            diff = time_now - time0\n            time_consumed += diff.total_seconds()\n            if embeddings is None:\n                embeddings = np.zeros((data.shape[0], _embeddings.shape[1]))\n            embeddings[ba:bb, 
:] = _embeddings[(batch_size - count):, :]\n            ba = bb\n        embeddings_list.append(embeddings)\n\n    _xnorm = 0.0\n    _xnorm_cnt = 0\n    for embed in embeddings_list:\n        for i in range(embed.shape[0]):\n            _em = embed[i]\n            _norm = np.linalg.norm(_em)\n            _xnorm += _norm\n            _xnorm_cnt += 1\n    _xnorm /= _xnorm_cnt\n\n    acc1 = 0.0\n    std1 = 0.0\n    embeddings = embeddings_list[0] + embeddings_list[1]\n    embeddings = sklearn.preprocessing.normalize(embeddings)\n    print(embeddings.shape)\n    print('infer time', time_consumed)\n    _, _, accuracy, val, val_std, far = evaluate(embeddings, issame_list, nrof_folds=nfolds)\n    acc2, std2 = np.mean(accuracy), np.std(accuracy)\n    return acc1, std1, acc2, std2, _xnorm, embeddings_list\n\n\n@torch.no_grad()\ndef dumpR(data_set,\n          backbone,\n          batch_size,\n          name='',\n          data_extra=None,\n          label_shape=None):\n    print('dump verification embedding..')\n    data_list = data_set[0]\n    issame_list = data_set[1]\n    embeddings_list = []\n    time_consumed = 0.0\n    for i in range(len(data_list)):\n        data = data_list[i]\n        embeddings = None\n        ba = 0\n        while ba < data.shape[0]:\n            bb = min(ba + batch_size, data.shape[0])\n            count = bb - ba\n\n            _data = data[bb - batch_size: bb]\n            time0 = datetime.datetime.now()\n            img = ((_data / 255) - 0.5) / 0.5\n            net_out: torch.Tensor = backbone(img)\n            _embeddings = net_out.detach().cpu().numpy()\n            time_now = datetime.datetime.now()\n            diff = time_now - time0\n            time_consumed += diff.total_seconds()\n            if 
embeddings is None:\n                embeddings = np.zeros((data.shape[0], _embeddings.shape[1]))\n            embeddings[ba:bb, :] = _embeddings[(batch_size - count):, :]\n            ba = bb\n        embeddings_list.append(embeddings)\n    embeddings = embeddings_list[0] + embeddings_list[1]\n    embeddings = sklearn.preprocessing.normalize(embeddings)\n    actual_issame = np.asarray(issame_list)\n    outname = os.path.join('temp.bin')\n    with open(outname, 'wb') as f:\n        pickle.dump((embeddings, issame_list),\n                    f,\n                    protocol=pickle.HIGHEST_PROTOCOL)\n\n\n# if __name__ == '__main__':\n#\n#     parser = argparse.ArgumentParser(description='do verification')\n#     # general\n#     parser.add_argument('--data-dir', default='', help='')\n#     parser.add_argument('--model',\n#                         default='../model/softmax,50',\n#                         help='path to load model.')\n#     parser.add_argument('--target',\n#                         default='lfw,cfp_ff,cfp_fp,agedb_30',\n#                         help='test targets.')\n#     parser.add_argument('--gpu', default=0, type=int, help='gpu id')\n#     parser.add_argument('--batch-size', default=32, type=int, help='')\n#     parser.add_argument('--max', default='', type=str, help='')\n#     parser.add_argument('--mode', default=0, type=int, help='')\n#     parser.add_argument('--nfolds', default=10, type=int, help='')\n#     args = parser.parse_args()\n#     image_size = [112, 112]\n#     print('image_size', image_size)\n#     ctx = mx.gpu(args.gpu)\n#     nets = []\n#     vec = args.model.split(',')\n#     prefix = args.model.split(',')[0]\n#     epochs = []\n#     if len(vec) == 1:\n#         pdir = os.path.dirname(prefix)\n#         for fname in os.listdir(pdir):\n#             if not fname.endswith('.params'):\n#                 continue\n#             _file = os.path.join(pdir, fname)\n#             if _file.startswith(prefix):\n#                 epoch = 
int(fname.split('.')[0].split('-')[1])\n#                 epochs.append(epoch)\n#         epochs = sorted(epochs, reverse=True)\n#         if len(args.max) > 0:\n#             _max = [int(x) for x in args.max.split(',')]\n#             assert len(_max) == 2\n#             if len(epochs) > _max[1]:\n#                 epochs = epochs[_max[0]:_max[1]]\n#\n#     else:\n#         epochs = [int(x) for x in vec[1].split('|')]\n#     print('model number', len(epochs))\n#     time0 = datetime.datetime.now()\n#     for epoch in epochs:\n#         print('loading', prefix, epoch)\n#         sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, epoch)\n#         # arg_params, aux_params = ch_dev(arg_params, aux_params, ctx)\n#         all_layers = sym.get_internals()\n#         sym = all_layers['fc1_output']\n#         model = mx.mod.Module(symbol=sym, context=ctx, label_names=None)\n#         # model.bind(data_shapes=[('data', (args.batch_size, 3, image_size[0], image_size[1]))], label_shapes=[('softmax_label', (args.batch_size,))])\n#         model.bind(data_shapes=[('data', (args.batch_size, 3, image_size[0],\n#                                           image_size[1]))])\n#         model.set_params(arg_params, aux_params)\n#         nets.append(model)\n#     time_now = datetime.datetime.now()\n#     diff = time_now - time0\n#     print('model loading time', diff.total_seconds())\n#\n#     ver_list = []\n#     ver_name_list = []\n#     for name in args.target.split(','):\n#         path = os.path.join(args.data_dir, name + \".bin\")\n#         if os.path.exists(path):\n#             print('loading.. 
', name)\n#             data_set = load_bin(path, image_size)\n#             ver_list.append(data_set)\n#             ver_name_list.append(name)\n#\n#     if args.mode == 0:\n#         for i in range(len(ver_list)):\n#             results = []\n#             for model in nets:\n#                 acc1, std1, acc2, std2, xnorm, embeddings_list = test(\n#                     ver_list[i], model, args.batch_size, args.nfolds)\n#                 print('[%s]XNorm: %f' % (ver_name_list[i], xnorm))\n#                 print('[%s]Accuracy: %1.5f+-%1.5f' % (ver_name_list[i], acc1, std1))\n#                 print('[%s]Accuracy-Flip: %1.5f+-%1.5f' % (ver_name_list[i], acc2, std2))\n#                 results.append(acc2)\n#             print('Max of [%s] is %1.5f' % (ver_name_list[i], np.max(results)))\n#     elif args.mode == 1:\n#         raise ValueError\n#     else:\n#         model = nets[0]\n#         dumpR(ver_list[0], model, args.batch_size, args.target)\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/eval_ijbc.py",
    "content": "# coding: utf-8\n\nimport os\nimport pickle\n\nimport matplotlib\nimport pandas as pd\n\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\nimport timeit\nimport sklearn\nimport argparse\nimport cv2\nimport numpy as np\nimport torch\nfrom skimage import transform as trans\nfrom backbones import get_model\nfrom sklearn.metrics import roc_curve, auc\n\nfrom menpo.visualize.viewmatplotlib import sample_colours_from_colourmap\nfrom prettytable import PrettyTable\nfrom pathlib import Path\n\nimport sys\nimport warnings\n\nsys.path.insert(0, \"../\")\nwarnings.filterwarnings(\"ignore\")\n\nparser = argparse.ArgumentParser(description='do ijb test')\n# general\nparser.add_argument('--model-prefix', default='', help='path to load model.')\nparser.add_argument('--image-path', default='', type=str, help='')\nparser.add_argument('--result-dir', default='.', type=str, help='')\nparser.add_argument('--batch-size', default=128, type=int, help='')\nparser.add_argument('--network', default='iresnet50', type=str, help='')\nparser.add_argument('--job', default='insightface', type=str, help='job name')\nparser.add_argument('--target', default='IJBC', type=str, help='target, set to IJBC or IJBB')\nargs = parser.parse_args()\n\ntarget = args.target\nmodel_path = args.model_prefix\nimage_path = args.image_path\nresult_dir = args.result_dir\ngpu_id = None\nuse_norm_score = True  # if Ture, TestMode(N1)\nuse_detector_score = True  # if Ture, TestMode(D1)\nuse_flip_test = True  # if Ture, TestMode(F1)\njob = args.job\nbatch_size = args.batch_size\n\n\nclass Embedding(object):\n    def __init__(self, prefix, data_shape, batch_size=1):\n        image_size = (112, 112)\n        self.image_size = image_size\n        weight = torch.load(prefix)\n        resnet = get_model(args.network, dropout=0, fp16=False).cuda()\n        resnet.load_state_dict(weight)\n        model = torch.nn.DataParallel(resnet)\n        self.model = model\n        self.model.eval()\n        src = 
np.array([\n            [30.2946, 51.6963],\n            [65.5318, 51.5014],\n            [48.0252, 71.7366],\n            [33.5493, 92.3655],\n            [62.7299, 92.2041]], dtype=np.float32)\n        src[:, 0] += 8.0\n        self.src = src\n        self.batch_size = batch_size\n        self.data_shape = data_shape\n\n    def get(self, rimg, landmark):\n\n        assert landmark.shape[0] == 68 or landmark.shape[0] == 5\n        assert landmark.shape[1] == 2\n        if landmark.shape[0] == 68:\n            landmark5 = np.zeros((5, 2), dtype=np.float32)\n            landmark5[0] = (landmark[36] + landmark[39]) / 2\n            landmark5[1] = (landmark[42] + landmark[45]) / 2\n            landmark5[2] = landmark[30]\n            landmark5[3] = landmark[48]\n            landmark5[4] = landmark[54]\n        else:\n            landmark5 = landmark\n        tform = trans.SimilarityTransform()\n        tform.estimate(landmark5, self.src)\n        M = tform.params[0:2, :]\n        img = cv2.warpAffine(rimg,\n                             M, (self.image_size[1], self.image_size[0]),\n                             borderValue=0.0)\n        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n        img_flip = np.fliplr(img)\n        img = np.transpose(img, (2, 0, 1))  # 3*112*112, RGB\n        img_flip = np.transpose(img_flip, (2, 0, 1))\n        input_blob = np.zeros((2, 3, self.image_size[1], self.image_size[0]), dtype=np.uint8)\n        input_blob[0] = img\n        input_blob[1] = img_flip\n        return input_blob\n\n    @torch.no_grad()\n    def forward_db(self, batch_data):\n        imgs = torch.Tensor(batch_data).cuda()\n        imgs.div_(255).sub_(0.5).div_(0.5)\n        feat = self.model(imgs)\n        feat = feat.reshape([self.batch_size, 2 * feat.shape[1]])\n        return feat.cpu().numpy()\n\n\n# Split a list as evenly as possible into n sublists (len(result) == n); if n exceeds the number of elements, the surplus sublists are empty.\ndef divideIntoNstrand(listTemp, n):\n    twoList = [[] for i in range(n)]\n    for i, e in enumerate(listTemp):\n        twoList[i % n].append(e)\n    return twoList\n\n\ndef read_template_media_list(path):\n    # ijb_meta = np.loadtxt(path, dtype=str)\n    ijb_meta = pd.read_csv(path, sep=' ', header=None).values\n    templates = ijb_meta[:, 1].astype(int)\n    medias = ijb_meta[:, 2].astype(int)\n    return templates, medias\n\n\n# In[ ]:\n\n\ndef read_template_pair_list(path):\n    # pairs = np.loadtxt(path, dtype=str)\n    pairs = pd.read_csv(path, sep=' ', header=None).values\n    # print(pairs.shape)\n    # print(pairs[:, 0].astype(int))\n    t1 = pairs[:, 0].astype(int)\n    t2 = pairs[:, 1].astype(int)\n    label = pairs[:, 2].astype(int)\n    return t1, t2, label\n\n\n# In[ ]:\n\n\ndef read_image_feature(path):\n    with open(path, 'rb') as fid:\n        img_feats = pickle.load(fid)\n    return img_feats\n\n\n# In[ ]:\n\n\ndef get_image_feature(img_path, files_list, model_path, epoch, gpu_id):\n    batch_size = args.batch_size\n    data_shape = (3, 112, 112)\n\n    files = files_list\n    print('files:', len(files))\n    rare_size = len(files) % batch_size\n    faceness_scores = []\n    batch = 0\n    img_feats = np.empty((len(files), 1024), dtype=np.float32)\n\n    batch_data = np.empty((2 * batch_size, 3, 112, 112))\n    embedding = Embedding(model_path, data_shape, batch_size)\n    for img_index, each_line in enumerate(files[:len(files) - rare_size]):\n        name_lmk_score = each_line.strip().split(' ')\n        img_name = os.path.join(img_path, name_lmk_score[0])\n        img = cv2.imread(img_name)\n        lmk = np.array([float(x) for x in name_lmk_score[1:-1]],\n                       dtype=np.float32)\n        lmk = lmk.reshape((5, 2))\n        input_blob = embedding.get(img, lmk)\n\n        batch_data[2 * (img_index - batch * batch_size)][:] = input_blob[0]\n        batch_data[2 * (img_index - batch * batch_size) + 1][:] = input_blob[1]\n        if (img_index + 1) % batch_size == 0:\n            print('batch', batch)\n            
img_feats[batch * batch_size:batch * batch_size +\n                                         batch_size][:] = embedding.forward_db(batch_data)\n            batch += 1\n        faceness_scores.append(name_lmk_score[-1])\n\n    batch_data = np.empty((2 * rare_size, 3, 112, 112))\n    embedding = Embedding(model_path, data_shape, rare_size)\n    for img_index, each_line in enumerate(files[len(files) - rare_size:]):\n        name_lmk_score = each_line.strip().split(' ')\n        img_name = os.path.join(img_path, name_lmk_score[0])\n        img = cv2.imread(img_name)\n        lmk = np.array([float(x) for x in name_lmk_score[1:-1]],\n                       dtype=np.float32)\n        lmk = lmk.reshape((5, 2))\n        input_blob = embedding.get(img, lmk)\n        batch_data[2 * img_index][:] = input_blob[0]\n        batch_data[2 * img_index + 1][:] = input_blob[1]\n        if (img_index + 1) % rare_size == 0:\n            print('batch', batch)\n            img_feats[len(files) -\n                      rare_size:][:] = embedding.forward_db(batch_data)\n            batch += 1\n        faceness_scores.append(name_lmk_score[-1])\n    faceness_scores = np.array(faceness_scores).astype(np.float32)\n    # img_feats = np.ones( (len(files), 1024), dtype=np.float32) * 0.01\n    # faceness_scores = np.ones( (len(files), ), dtype=np.float32 )\n    return img_feats, faceness_scores\n\n\n# In[ ]:\n\n\ndef image2template_feature(img_feats=None, templates=None, medias=None):\n    # ==========================================================\n    # 1. face image feature l2 normalization. img_feats:[number_image x feats_dim]\n    # 2. compute media feature.\n    # 3. 
compute template feature.\n    # ==========================================================\n    unique_templates = np.unique(templates)\n    template_feats = np.zeros((len(unique_templates), img_feats.shape[1]))\n\n    for count_template, uqt in enumerate(unique_templates):\n\n        (ind_t,) = np.where(templates == uqt)\n        face_norm_feats = img_feats[ind_t]\n        face_medias = medias[ind_t]\n        unique_medias, unique_media_counts = np.unique(face_medias,\n                                                       return_counts=True)\n        media_norm_feats = []\n        for u, ct in zip(unique_medias, unique_media_counts):\n            (ind_m,) = np.where(face_medias == u)\n            if ct == 1:\n                media_norm_feats += [face_norm_feats[ind_m]]\n            else:  # image features from the same video will be aggregated into one feature\n                media_norm_feats += [\n                    np.mean(face_norm_feats[ind_m], axis=0, keepdims=True)\n                ]\n        media_norm_feats = np.array(media_norm_feats)\n        # media_norm_feats = media_norm_feats / np.sqrt(np.sum(media_norm_feats ** 2, -1, keepdims=True))\n        template_feats[count_template] = np.sum(media_norm_feats, axis=0)\n        if count_template % 2000 == 0:\n            print('Finish Calculating {} template features.'.format(\n                count_template))\n    # template_norm_feats = template_feats / np.sqrt(np.sum(template_feats ** 2, -1, keepdims=True))\n    template_norm_feats = sklearn.preprocessing.normalize(template_feats)\n    # print(template_norm_feats.shape)\n    return template_norm_feats, unique_templates\n\n\n# In[ ]:\n\n\ndef verification(template_norm_feats=None,\n                 unique_templates=None,\n                 p1=None,\n                 p2=None):\n    # ==========================================================\n    #         Compute set-to-set Similarity Score.\n    # 
==========================================================\n    template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int)\n    for count_template, uqt in enumerate(unique_templates):\n        template2id[uqt] = count_template\n\n    score = np.zeros((len(p1),))  # save cosine distance between pairs\n\n    total_pairs = np.array(range(len(p1)))\n    batchsize = 100000  # small batchsize instead of all pairs in one batch due to the memory limitation\n    sublists = [\n        total_pairs[i:i + batchsize] for i in range(0, len(p1), batchsize)\n    ]\n    total_sublists = len(sublists)\n    for c, s in enumerate(sublists):\n        feat1 = template_norm_feats[template2id[p1[s]]]\n        feat2 = template_norm_feats[template2id[p2[s]]]\n        similarity_score = np.sum(feat1 * feat2, -1)\n        score[s] = similarity_score.flatten()\n        if c % 10 == 0:\n            print('Finish {}/{} pairs.'.format(c, total_sublists))\n    return score\n\n\n# In[ ]:\ndef verification2(template_norm_feats=None,\n                  unique_templates=None,\n                  p1=None,\n                  p2=None):\n    template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int)\n    for count_template, uqt in enumerate(unique_templates):\n        template2id[uqt] = count_template\n    score = np.zeros((len(p1),))  # save cosine distance between pairs\n    total_pairs = np.array(range(len(p1)))\n    batchsize = 100000  # small batchsize instead of all pairs in one batch due to the memory limitation\n    sublists = [\n        total_pairs[i:i + batchsize] for i in range(0, len(p1), batchsize)\n    ]\n    total_sublists = len(sublists)\n    for c, s in enumerate(sublists):\n        feat1 = template_norm_feats[template2id[p1[s]]]\n        feat2 = template_norm_feats[template2id[p2[s]]]\n        similarity_score = np.sum(feat1 * feat2, -1)\n        score[s] = similarity_score.flatten()\n        if c % 10 == 0:\n            print('Finish {}/{} pairs.'.format(c, total_sublists))\n  
  return score\n\n\ndef read_score(path):\n    with open(path, 'rb') as fid:\n        img_feats = pickle.load(fid)\n    return img_feats\n\n\n# # Step1: Load Meta Data\n\n# In[ ]:\n\nassert target == 'IJBC' or target == 'IJBB'\n\n# =============================================================\n# load image and template relationships for template feature embedding\n# tid --> template id,  mid --> media id\n# format:\n#           image_name tid mid\n# =============================================================\nstart = timeit.default_timer()\ntemplates, medias = read_template_media_list(\n    os.path.join('%s/meta' % image_path,\n                 '%s_face_tid_mid.txt' % target.lower()))\nstop = timeit.default_timer()\nprint('Time: %.2f s. ' % (stop - start))\n\n# In[ ]:\n\n# =============================================================\n# load template pairs for template-to-template verification\n# tid : template id,  label : 1/0\n# format:\n#           tid_1 tid_2 label\n# =============================================================\nstart = timeit.default_timer()\np1, p2, label = read_template_pair_list(\n    os.path.join('%s/meta' % image_path,\n                 '%s_template_pair_label.txt' % target.lower()))\nstop = timeit.default_timer()\nprint('Time: %.2f s. 
' % (stop - start))\n\n# # Step 2: Get Image Features\n\n# In[ ]:\n\n# =============================================================\n# load image features\n# format:\n#           img_feats: [image_num x feats_dim] (227630, 512)\n# =============================================================\nstart = timeit.default_timer()\nimg_path = '%s/loose_crop' % image_path\nimg_list_path = '%s/meta/%s_name_5pts_score.txt' % (image_path, target.lower())\nimg_list = open(img_list_path)\nfiles = img_list.readlines()\n# files_list = divideIntoNstrand(files, rank_size)\nfiles_list = files\n\n# img_feats\n# for i in range(rank_size):\nimg_feats, faceness_scores = get_image_feature(img_path, files_list,\n                                               model_path, 0, gpu_id)\nstop = timeit.default_timer()\nprint('Time: %.2f s. ' % (stop - start))\nprint('Feature Shape: ({} , {}) .'.format(img_feats.shape[0],\n                                          img_feats.shape[1]))\n\n# # Step3: Get Template Features\n\n# In[ ]:\n\n# =============================================================\n# compute template features from image features.\n# =============================================================\nstart = timeit.default_timer()\n# ==========================================================\n# Norm feature before aggregation into template feature?\n# Feature norm from embedding network and faceness score are able to decrease weights for noise samples (not face).\n# ==========================================================\n# 1. FaceScore （Feature Norm）\n# 2. 
FaceScore （Detector）\n\nif use_flip_test:\n    # concat --- F1\n    # img_input_feats = img_feats\n    # add --- F2\n    img_input_feats = img_feats[:, 0:img_feats.shape[1] //\n                                     2] + img_feats[:, img_feats.shape[1] // 2:]\nelse:\n    img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2]\n\nif use_norm_score:\n    img_input_feats = img_input_feats\nelse:\n    # normalise features to remove norm information\n    img_input_feats = img_input_feats / np.sqrt(\n        np.sum(img_input_feats ** 2, -1, keepdims=True))\n\nif use_detector_score:\n    print(img_input_feats.shape, faceness_scores.shape)\n    img_input_feats = img_input_feats * faceness_scores[:, np.newaxis]\nelse:\n    img_input_feats = img_input_feats\n\ntemplate_norm_feats, unique_templates = image2template_feature(\n    img_input_feats, templates, medias)\nstop = timeit.default_timer()\nprint('Time: %.2f s. ' % (stop - start))\n\n# # Step 4: Get Template Similarity Scores\n\n# In[ ]:\n\n# =============================================================\n# compute verification scores between template pairs.\n# =============================================================\nstart = timeit.default_timer()\nscore = verification(template_norm_feats, unique_templates, p1, p2)\nstop = timeit.default_timer()\nprint('Time: %.2f s. 
' % (stop - start))\n\n# In[ ]:\nsave_path = os.path.join(result_dir, args.job)\n# save_path = result_dir + '/%s_result' % target\n\nif not os.path.exists(save_path):\n    os.makedirs(save_path)\n\nscore_save_file = os.path.join(save_path, \"%s.npy\" % target.lower())\nnp.save(score_save_file, score)\n\n# # Step 5: Get ROC Curves and TPR@FPR Table\n\n# In[ ]:\n\nfiles = [score_save_file]\nmethods = []\nscores = []\nfor file in files:\n    methods.append(Path(file).stem)\n    scores.append(np.load(file))\n\nmethods = np.array(methods)\nscores = dict(zip(methods, scores))\ncolours = dict(\n    zip(methods, sample_colours_from_colourmap(methods.shape[0], 'Set2')))\nx_labels = [10 ** -6, 10 ** -5, 10 ** -4, 10 ** -3, 10 ** -2, 10 ** -1]\ntpr_fpr_table = PrettyTable(['Methods'] + [str(x) for x in x_labels])\nfig = plt.figure()\nfor method in methods:\n    fpr, tpr, _ = roc_curve(label, scores[method])\n    roc_auc = auc(fpr, tpr)\n    fpr = np.flipud(fpr)\n    tpr = np.flipud(tpr)  # select largest tpr at same fpr\n    plt.plot(fpr,\n             tpr,\n             color=colours[method],\n             lw=1,\n             label=('[%s (AUC = %0.4f %%)]' %\n                    (method.split('-')[-1], roc_auc * 100)))\n    tpr_fpr_row = []\n    tpr_fpr_row.append(\"%s-%s\" % (method, target))\n    for fpr_iter in np.arange(len(x_labels)):\n        _, min_index = min(\n            list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr)))))\n        tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100))\n    tpr_fpr_table.add_row(tpr_fpr_row)\nplt.xlim([10 ** -6, 0.1])\nplt.ylim([0.3, 1.0])\nplt.grid(linestyle='--', linewidth=1)\nplt.xticks(x_labels)\nplt.yticks(np.linspace(0.3, 1.0, 8, endpoint=True))\nplt.xscale('log')\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC on IJB')\nplt.legend(loc=\"lower right\")\nfig.savefig(os.path.join(save_path, '%s.pdf' % target.lower()))\nprint(tpr_fpr_table)\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/inference.py",
    "content": "import argparse\n\nimport cv2\nimport numpy as np\nimport torch\n\nfrom backbones import get_model\n\n\n@torch.no_grad()\ndef inference(weight, name, img):\n    if img is None:\n        img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.uint8)\n    else:\n        img = cv2.imread(img)\n        img = cv2.resize(img, (112, 112))\n\n    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n    img = np.transpose(img, (2, 0, 1))\n    img = torch.from_numpy(img).unsqueeze(0).float()\n    img.div_(255).sub_(0.5).div_(0.5)\n    net = get_model(name, fp16=False)\n    net.load_state_dict(torch.load(weight))\n    net.eval()\n    feat = net(img).numpy()\n    print(feat)\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(description='PyTorch ArcFace Training')\n    parser.add_argument('--network', type=str, default='r50', help='backbone network')\n    parser.add_argument('--weight', type=str, default='')\n    parser.add_argument('--img', type=str, default=None)\n    args = parser.parse_args()\n    inference(args.weight, args.network, args.img)\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/losses.py",
    "content": "import torch\nfrom torch import nn\n\n\ndef get_loss(name):\n    if name == \"cosface\":\n        return CosFace()\n    elif name == \"arcface\":\n        return ArcFace()\n    else:\n        raise ValueError()\n\n\nclass CosFace(nn.Module):\n    def __init__(self, s=64.0, m=0.40):\n        super(CosFace, self).__init__()\n        self.s = s\n        self.m = m\n\n    def forward(self, cosine, label):\n        index = torch.where(label != -1)[0]\n        m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device)\n        m_hot.scatter_(1, label[index, None], self.m)\n        cosine[index] -= m_hot\n        ret = cosine * self.s\n        return ret\n\n\nclass ArcFace(nn.Module):\n    def __init__(self, s=64.0, m=0.5):\n        super(ArcFace, self).__init__()\n        self.s = s\n        self.m = m\n\n    def forward(self, cosine: torch.Tensor, label):\n        index = torch.where(label != -1)[0]\n        m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device)\n        m_hot.scatter_(1, label[index, None], self.m)\n        cosine.acos_()\n        cosine[index] += m_hot\n        cosine.cos_().mul_(self.s)\n        return cosine\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/onnx_helper.py",
    "content": "from __future__ import division\nimport datetime\nimport os\nimport os.path as osp\nimport glob\nimport numpy as np\nimport cv2\nimport sys\nimport onnxruntime\nimport onnx\nimport argparse\nfrom onnx import numpy_helper\nfrom insightface.data import get_image\n\nclass ArcFaceORT:\n    def __init__(self, model_path, cpu=False):\n        self.model_path = model_path\n        # providers = None will use available provider, for onnxruntime-gpu it will be \"CUDAExecutionProvider\"\n        self.providers = ['CPUExecutionProvider'] if cpu else None\n\n    #input_size is (w,h), return error message, return None if success\n    def check(self, track='cfat', test_img = None):\n        #default is cfat\n        max_model_size_mb=1024\n        max_feat_dim=512\n        max_time_cost=15\n        if track.startswith('ms1m'):\n            max_model_size_mb=1024\n            max_feat_dim=512\n            max_time_cost=10\n        elif track.startswith('glint'):\n            max_model_size_mb=1024\n            max_feat_dim=1024\n            max_time_cost=20\n        elif track.startswith('cfat'):\n            max_model_size_mb = 1024\n            max_feat_dim = 512\n            max_time_cost = 15\n        elif track.startswith('unconstrained'):\n            max_model_size_mb=1024\n            max_feat_dim=1024\n            max_time_cost=30\n        else:\n            return \"track not found\"\n\n        if not os.path.exists(self.model_path):\n            return \"model_path not exists\"\n        if not os.path.isdir(self.model_path):\n            return \"model_path should be directory\"\n        onnx_files = []\n        for _file in os.listdir(self.model_path):\n            if _file.endswith('.onnx'):\n                onnx_files.append(osp.join(self.model_path, _file))\n        if len(onnx_files)==0:\n            return \"do not have onnx files\"\n        self.model_file = sorted(onnx_files)[-1]\n        print('use onnx-model:', self.model_file)\n        try:\n 
           session = onnxruntime.InferenceSession(self.model_file, providers=self.providers)\n        except:\n            return \"load onnx failed\"\n        input_cfg = session.get_inputs()[0]\n        input_shape = input_cfg.shape\n        print('input-shape:', input_shape)\n        if len(input_shape)!=4:\n            return \"length of input_shape should be 4\"\n        if not isinstance(input_shape[0], str):\n            #return \"input_shape[0] should be str to support batch-inference\"\n            print('reset input-shape[0] to None')\n            model = onnx.load(self.model_file)\n            model.graph.input[0].type.tensor_type.shape.dim[0].dim_param = 'None'\n            new_model_file = osp.join(self.model_path, 'zzzzrefined.onnx')\n            onnx.save(model, new_model_file)\n            self.model_file = new_model_file\n            print('use new onnx-model:', self.model_file)\n            try:\n                session = onnxruntime.InferenceSession(self.model_file, providers=self.providers)\n            except:\n                return \"load onnx failed\"\n            input_cfg = session.get_inputs()[0]\n            input_shape = input_cfg.shape\n            print('new-input-shape:', input_shape)\n\n        self.image_size = tuple(input_shape[2:4][::-1])\n        #print('image_size:', self.image_size)\n        input_name = input_cfg.name\n        outputs = session.get_outputs()\n        output_names = []\n        for o in outputs:\n            output_names.append(o.name)\n            #print(o.name, o.shape)\n        if len(output_names)!=1:\n            return \"number of output nodes should be 1\"\n        self.session = session\n        self.input_name = input_name\n        self.output_names = output_names\n        #print(self.output_names)\n        model = onnx.load(self.model_file)\n        graph = model.graph\n        if len(graph.node)<8:\n            return \"too small onnx graph\"\n\n        input_size = (112,112)\n        self.crop = 
None\n        if track=='cfat':\n            crop_file = osp.join(self.model_path, 'crop.txt')\n            if osp.exists(crop_file):\n                lines = open(crop_file,'r').readlines()\n                if len(lines)!=6:\n                    return \"crop.txt should contain 6 lines\"\n                lines = [int(x) for x in lines]\n                self.crop = lines[:4]\n                input_size = tuple(lines[4:6])\n        if input_size!=self.image_size:\n            return \"input-size is inconsistent with onnx model input, %s vs %s\"%(input_size, self.image_size)\n\n        self.model_size_mb = os.path.getsize(self.model_file) / float(1024*1024)\n        if self.model_size_mb > max_model_size_mb:\n            return \"max model size exceeded, given %.3f-MB\"%self.model_size_mb\n\n        input_mean = None\n        input_std = None\n        if track=='cfat':\n            pn_file = osp.join(self.model_path, 'pixel_norm.txt')\n            if osp.exists(pn_file):\n                lines = open(pn_file,'r').readlines()\n                if len(lines)!=2:\n                    return \"pixel_norm.txt should contain 2 lines\"\n                input_mean = float(lines[0])\n                input_std = float(lines[1])\n        if input_mean is not None or input_std is not None:\n            if input_mean is None or input_std is None:\n                return \"please set input_mean and input_std simultaneously\"\n        else:\n            find_sub = False\n            find_mul = False\n            for nid, node in enumerate(graph.node[:8]):\n                print(nid, node.name)\n                if node.name.startswith('Sub') or node.name.startswith('_minus'):\n                    find_sub = True\n                if node.name.startswith('Mul') or node.name.startswith('_mul') or node.name.startswith('Div'):\n                    find_mul = True\n            if find_sub and find_mul:\n                print(\"find sub and mul\")\n                #mxnet arcface model\n                input_mean = 0.0\n                input_std = 1.0\n            else:\n                input_mean = 127.5\n                input_std = 127.5\n        self.input_mean = input_mean\n        self.input_std = input_std\n        for initn in graph.initializer:\n            weight_array = numpy_helper.to_array(initn)\n            dt = weight_array.dtype\n            if dt.itemsize<4:\n                return 'invalid weight type - (%s:%s)' % (initn.name, dt.name)\n        if test_img is None:\n            test_img = get_image('Tom_Hanks_54745')\n        test_img = cv2.resize(test_img, self.image_size)\n        feat, cost = self.benchmark(test_img)\n        batch_result = self.check_batch(test_img)\n        batch_result_sum = float(np.sum(batch_result))\n        if not np.isfinite(batch_result_sum):\n            print(batch_result)\n            print(batch_result_sum)\n            return \"batch result output contains NaN or Inf!\"\n\n        if len(feat.shape) < 2:\n            return \"the feature must have two dimensions, but got shape {}\".format(str(feat.shape))\n\n        if feat.shape[1] > max_feat_dim:\n            return \"max feat dim exceeded, given %d\"%feat.shape[1]\n        self.feat_dim = feat.shape[1]\n        cost_ms = cost*1000\n        if cost_ms>max_time_cost:\n            return \"max time cost exceeded, given %.4f\"%cost_ms\n        self.cost_ms = cost_ms\n        print('check stat: model-size-mb: %.4f, feat-dim: %d, time-cost-ms: %.4f, input-mean: %.3f, input-std: %.3f'%(self.model_size_mb, self.feat_dim, self.cost_ms, self.input_mean, self.input_std))\n        return None\n\n    def check_batch(self, img):\n        if not isinstance(img, list):\n            imgs = [img, ] * 32\n        else:\n            imgs = img\n        if self.crop is not None:\n            nimgs = []\n            for img in imgs:\n                nimg = img[self.crop[1]:self.crop[3], 
self.crop[0]:self.crop[2], :]\n                if nimg.shape[0] != self.image_size[1] or nimg.shape[1] != self.image_size[0]:\n                    nimg = cv2.resize(nimg, self.image_size)\n                nimgs.append(nimg)\n            imgs = nimgs\n        blob = cv2.dnn.blobFromImages(\n            images=imgs, scalefactor=1.0 / self.input_std, size=self.image_size,\n            mean=(self.input_mean, self.input_mean, self.input_mean), swapRB=True)\n        net_out = self.session.run(self.output_names, {self.input_name: blob})[0]\n        return net_out\n\n\n    def meta_info(self):\n        return {'model-size-mb':self.model_size_mb, 'feature-dim':self.feat_dim, 'infer': self.cost_ms}\n\n\n    def forward(self, imgs):\n        if not isinstance(imgs, list):\n            imgs = [imgs]\n        input_size = self.image_size\n        if self.crop is not None:\n            nimgs = []\n            for img in imgs:\n                nimg = img[self.crop[1]:self.crop[3],self.crop[0]:self.crop[2],:]\n                if nimg.shape[0]!=input_size[1] or nimg.shape[1]!=input_size[0]:\n                    nimg = cv2.resize(nimg, input_size)\n                nimgs.append(nimg)\n            imgs = nimgs\n        blob = cv2.dnn.blobFromImages(imgs, 1.0/self.input_std, input_size, (self.input_mean, self.input_mean, self.input_mean), swapRB=True)\n        net_out = self.session.run(self.output_names, {self.input_name : blob})[0]\n        return net_out\n\n    def benchmark(self, img):\n        input_size = self.image_size\n        if self.crop is not None:\n            nimg = img[self.crop[1]:self.crop[3],self.crop[0]:self.crop[2],:]\n            if nimg.shape[0]!=input_size[1] or nimg.shape[1]!=input_size[0]:\n                nimg = cv2.resize(nimg, input_size)\n            img = nimg\n        blob = cv2.dnn.blobFromImage(img, 1.0/self.input_std, input_size, (self.input_mean, self.input_mean, self.input_mean), swapRB=True)\n        costs = []\n        for _ in range(50):\n        
    ta = datetime.datetime.now()\n            net_out = self.session.run(self.output_names, {self.input_name : blob})[0]\n            tb = datetime.datetime.now()\n            cost = (tb-ta).total_seconds()\n            costs.append(cost)\n        costs = sorted(costs)\n        cost = costs[5]\n        return net_out, cost\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser(description='')\n    # general\n    parser.add_argument('workdir', help='submitted work dir', type=str)\n    parser.add_argument('--track', help='track name, for different challenge', type=str, default='cfat')\n    args = parser.parse_args()\n    handler = ArcFaceORT(args.workdir)\n    err = handler.check(args.track)\n    print('err:', err)\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/onnx_ijbc.py",
    "content": "import argparse\nimport os\nimport pickle\nimport timeit\n\nimport cv2\nimport mxnet as mx\nimport numpy as np\nimport pandas as pd\nimport prettytable\nimport skimage.transform\nfrom sklearn.metrics import roc_curve\nfrom sklearn.preprocessing import normalize\n\nfrom onnx_helper import ArcFaceORT\n\nSRC = np.array(\n    [\n        [30.2946, 51.6963],\n        [65.5318, 51.5014],\n        [48.0252, 71.7366],\n        [33.5493, 92.3655],\n        [62.7299, 92.2041]]\n    , dtype=np.float32)\nSRC[:, 0] += 8.0\n\n\nclass AlignedDataSet(mx.gluon.data.Dataset):\n    def __init__(self, root, lines, align=True):\n        self.lines = lines\n        self.root = root\n        self.align = align\n\n    def __len__(self):\n        return len(self.lines)\n\n    def __getitem__(self, idx):\n        each_line = self.lines[idx]\n        name_lmk_score = each_line.strip().split(' ')\n        name = os.path.join(self.root, name_lmk_score[0])\n        img = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2RGB)\n        landmark5 = np.array([float(x) for x in name_lmk_score[1:-1]], dtype=np.float32).reshape((5, 2))\n        st = skimage.transform.SimilarityTransform()\n        st.estimate(landmark5, SRC)\n        img = cv2.warpAffine(img, st.params[0:2, :], (112, 112), borderValue=0.0)\n        img_1 = np.expand_dims(img, 0)\n        img_2 = np.expand_dims(np.fliplr(img), 0)\n        output = np.concatenate((img_1, img_2), axis=0).astype(np.float32)\n        output = np.transpose(output, (0, 3, 1, 2))\n        output = mx.nd.array(output)\n        return output\n\n\ndef extract(model_root, dataset):\n    model = ArcFaceORT(model_path=model_root)\n    model.check()\n    feat_mat = np.zeros(shape=(len(dataset), 2 * model.feat_dim))\n\n    def batchify_fn(data):\n        return mx.nd.concat(*data, dim=0)\n\n    data_loader = mx.gluon.data.DataLoader(\n        dataset, 128, last_batch='keep', num_workers=4,\n        thread_pool=True, prefetch=16, batchify_fn=batchify_fn)\n   
 num_iter = 0\n    for batch in data_loader:\n        batch = batch.asnumpy()\n        batch = (batch - model.input_mean) / model.input_std\n        feat = model.session.run(model.output_names, {model.input_name: batch})[0]\n        feat = np.reshape(feat, (-1, model.feat_dim * 2))\n        feat_mat[128 * num_iter: 128 * num_iter + feat.shape[0], :] = feat\n        num_iter += 1\n        if num_iter % 50 == 0:\n            print(num_iter)\n    return feat_mat\n\n\ndef read_template_media_list(path):\n    ijb_meta = pd.read_csv(path, sep=' ', header=None).values\n    templates = ijb_meta[:, 1].astype(int)\n    medias = ijb_meta[:, 2].astype(int)\n    return templates, medias\n\n\ndef read_template_pair_list(path):\n    pairs = pd.read_csv(path, sep=' ', header=None).values\n    t1 = pairs[:, 0].astype(int)\n    t2 = pairs[:, 1].astype(int)\n    label = pairs[:, 2].astype(int)\n    return t1, t2, label\n\n\ndef read_image_feature(path):\n    with open(path, 'rb') as fid:\n        img_feats = pickle.load(fid)\n    return img_feats\n\n\ndef image2template_feature(img_feats=None,\n                           templates=None,\n                           medias=None):\n    unique_templates = np.unique(templates)\n    template_feats = np.zeros((len(unique_templates), img_feats.shape[1]))\n    for count_template, uqt in enumerate(unique_templates):\n        (ind_t,) = np.where(templates == uqt)\n        face_norm_feats = img_feats[ind_t]\n        face_medias = medias[ind_t]\n        unique_medias, unique_media_counts = np.unique(face_medias, return_counts=True)\n        media_norm_feats = []\n        for u, ct in zip(unique_medias, unique_media_counts):\n            (ind_m,) = np.where(face_medias == u)\n            if ct == 1:\n                media_norm_feats += [face_norm_feats[ind_m]]\n            else:  # image features from the same video will be aggregated into one feature\n                media_norm_feats += [np.mean(face_norm_feats[ind_m], axis=0, 
keepdims=True), ]\n        media_norm_feats = np.array(media_norm_feats)\n        template_feats[count_template] = np.sum(media_norm_feats, axis=0)\n        if count_template % 2000 == 0:\n            print('Finished calculating {} template features.'.format(\n                count_template))\n    template_norm_feats = normalize(template_feats)\n    return template_norm_feats, unique_templates\n\n\ndef verification(template_norm_feats=None,\n                 unique_templates=None,\n                 p1=None,\n                 p2=None):\n    template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int)\n    for count_template, uqt in enumerate(unique_templates):\n        template2id[uqt] = count_template\n    score = np.zeros((len(p1),))\n    total_pairs = np.array(range(len(p1)))\n    batchsize = 100000\n    sublists = [total_pairs[i: i + batchsize] for i in range(0, len(p1), batchsize)]\n    total_sublists = len(sublists)\n    for c, s in enumerate(sublists):\n        feat1 = template_norm_feats[template2id[p1[s]]]\n        feat2 = template_norm_feats[template2id[p2[s]]]\n        similarity_score = np.sum(feat1 * feat2, -1)\n        score[s] = similarity_score.flatten()\n        if c % 10 == 0:\n            print('Finished {}/{} pairs.'.format(c, total_sublists))\n    return score\n\n\ndef verification2(template_norm_feats=None,\n                  unique_templates=None,\n                  p1=None,\n                  p2=None):\n    template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int)\n    for count_template, uqt in enumerate(unique_templates):\n        template2id[uqt] = count_template\n    score = np.zeros((len(p1),))  # save cosine similarity between pairs\n    total_pairs = np.array(range(len(p1)))\n    batchsize = 100000  # small batchsize instead of all pairs in one batch due to the memory limitation\n    sublists = [total_pairs[i:i + batchsize] for i in range(0, len(p1), batchsize)]\n    total_sublists = len(sublists)\n    for c, s in enumerate(sublists):\n        feat1 = template_norm_feats[template2id[p1[s]]]\n        feat2 = template_norm_feats[template2id[p2[s]]]\n        similarity_score = np.sum(feat1 * feat2, -1)\n        score[s] = similarity_score.flatten()\n        if c % 10 == 0:\n            print('Finished {}/{} pairs.'.format(c, total_sublists))\n    return score\n\n\ndef main(args):\n    use_norm_score = True  # if True, TestMode(N1)\n    use_detector_score = True  # if True, TestMode(D1)\n    use_flip_test = True  # if True, TestMode(F1)\n    assert args.target == 'IJBC' or args.target == 'IJBB'\n\n    start = timeit.default_timer()\n    templates, medias = read_template_media_list(\n        os.path.join('%s/meta' % args.image_path, '%s_face_tid_mid.txt' % args.target.lower()))\n    stop = timeit.default_timer()\n    print('Time: %.2f s. ' % (stop - start))\n\n    start = timeit.default_timer()\n    p1, p2, label = read_template_pair_list(\n        os.path.join('%s/meta' % args.image_path,\n                     '%s_template_pair_label.txt' % args.target.lower()))\n    stop = timeit.default_timer()\n    print('Time: %.2f s. ' % (stop - start))\n\n    start = timeit.default_timer()\n    img_path = '%s/loose_crop' % args.image_path\n    img_list_path = '%s/meta/%s_name_5pts_score.txt' % (args.image_path, args.target.lower())\n    img_list = open(img_list_path)\n    files = img_list.readlines()\n    dataset = AlignedDataSet(root=img_path, lines=files, align=True)\n    img_feats = extract(args.model_root, dataset)\n\n    faceness_scores = []\n    for each_line in files:\n        name_lmk_score = each_line.split()\n        faceness_scores.append(name_lmk_score[-1])\n    faceness_scores = np.array(faceness_scores).astype(np.float32)\n    stop = timeit.default_timer()\n    print('Time: %.2f s. 
' % (stop - start))\n    print('Feature Shape: ({} , {}) .'.format(img_feats.shape[0], img_feats.shape[1]))\n    start = timeit.default_timer()\n\n    if use_flip_test:\n        img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2] + img_feats[:, img_feats.shape[1] // 2:]\n    else:\n        img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2]\n\n    if use_norm_score:\n        img_input_feats = img_input_feats\n    else:\n        img_input_feats = img_input_feats / np.sqrt(np.sum(img_input_feats ** 2, -1, keepdims=True))\n\n    if use_detector_score:\n        print(img_input_feats.shape, faceness_scores.shape)\n        img_input_feats = img_input_feats * faceness_scores[:, np.newaxis]\n    else:\n        img_input_feats = img_input_feats\n\n    template_norm_feats, unique_templates = image2template_feature(\n        img_input_feats, templates, medias)\n    stop = timeit.default_timer()\n    print('Time: %.2f s. ' % (stop - start))\n\n    start = timeit.default_timer()\n    score = verification(template_norm_feats, unique_templates, p1, p2)\n    stop = timeit.default_timer()\n    print('Time: %.2f s. 
' % (stop - start))\n    save_path = os.path.join(args.result_dir, \"{}_result\".format(args.target))\n    if not os.path.exists(save_path):\n        os.makedirs(save_path)\n    score_save_file = os.path.join(save_path, \"{}.npy\".format(args.model_root))\n    np.save(score_save_file, score)\n    files = [score_save_file]\n    methods = []\n    scores = []\n    for file in files:\n        methods.append(os.path.basename(file))\n        scores.append(np.load(file))\n    methods = np.array(methods)\n    scores = dict(zip(methods, scores))\n    x_labels = [10 ** -6, 10 ** -5, 10 ** -4, 10 ** -3, 10 ** -2, 10 ** -1]\n    tpr_fpr_table = prettytable.PrettyTable(['Methods'] + [str(x) for x in x_labels])\n    for method in methods:\n        fpr, tpr, _ = roc_curve(label, scores[method])\n        fpr = np.flipud(fpr)\n        tpr = np.flipud(tpr)\n        tpr_fpr_row = []\n        tpr_fpr_row.append(\"%s-%s\" % (method, args.target))\n        for fpr_iter in np.arange(len(x_labels)):\n            _, min_index = min(\n                list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr)))))\n            tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100))\n        tpr_fpr_table.add_row(tpr_fpr_row)\n    print(tpr_fpr_table)\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser(description='do ijb test')\n    # general\n    parser.add_argument('--model-root', default='', help='path to load model.')\n    parser.add_argument('--image-path', default='', type=str, help='')\n    parser.add_argument('--result-dir', default='.', type=str, help='')\n    parser.add_argument('--target', default='IJBC', type=str, help='target, set to IJBC or IJBB')\n    main(parser.parse_args())\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/partial_fc.py",
    "content": "import logging\nimport os\n\nimport torch\nimport torch.distributed as dist\nfrom torch.nn import Module\nfrom torch.nn.functional import normalize, linear\nfrom torch.nn.parameter import Parameter\n\n\nclass PartialFC(Module):\n    \"\"\"\n    Author: {Xiang An, Yang Xiao, XuHan Zhu} in DeepGlint,\n    Partial FC: Training 10 Million Identities on a Single Machine\n    See the original paper:\n    https://arxiv.org/abs/2010.05222\n    \"\"\"\n\n    @torch.no_grad()\n    def __init__(self, rank, local_rank, world_size, batch_size, resume,\n                 margin_softmax, num_classes, sample_rate=1.0, embedding_size=512, prefix=\"./\"):\n        \"\"\"\n        rank: int\n            Unique process(GPU) ID from 0 to world_size - 1.\n        local_rank: int\n            Unique process(GPU) ID within the server from 0 to 7.\n        world_size: int\n            Number of GPU.\n        batch_size: int\n            Batch size on current rank(GPU).\n        resume: bool\n            Select whether to restore the weight of softmax.\n        margin_softmax: callable\n            A function of margin softmax, eg: cosface, arcface.\n        num_classes: int\n            The number of class center storage in current rank(CPU/GPU), usually is total_classes // world_size,\n            required.\n        sample_rate: float\n            The partial fc sampling rate, when the number of classes increases to more than 2 millions, Sampling\n            can greatly speed up training, and reduce a lot of GPU memory, default is 1.0.\n        embedding_size: int\n            The feature dimension, default is 512.\n        prefix: str\n            Path for save checkpoint, default is './'.\n        \"\"\"\n        super(PartialFC, self).__init__()\n        #\n        self.num_classes: int = num_classes\n        self.rank: int = rank\n        self.local_rank: int = local_rank\n        self.device: torch.device = torch.device(\"cuda:{}\".format(self.local_rank))\n        
self.world_size: int = world_size\n        self.batch_size: int = batch_size\n        self.margin_softmax: callable = margin_softmax\n        self.sample_rate: float = sample_rate\n        self.embedding_size: int = embedding_size\n        self.prefix: str = prefix\n        self.num_local: int = num_classes // world_size + int(rank < num_classes % world_size)\n        self.class_start: int = num_classes // world_size * rank + min(rank, num_classes % world_size)\n        self.num_sample: int = int(self.sample_rate * self.num_local)\n\n        self.weight_name = os.path.join(self.prefix, \"rank_{}_softmax_weight.pt\".format(self.rank))\n        self.weight_mom_name = os.path.join(self.prefix, \"rank_{}_softmax_weight_mom.pt\".format(self.rank))\n\n        if resume:\n            try:\n                self.weight: torch.Tensor = torch.load(self.weight_name)\n                self.weight_mom: torch.Tensor = torch.load(self.weight_mom_name)\n                if self.weight.shape[0] != self.num_local or self.weight_mom.shape[0] != self.num_local:\n                    raise IndexError\n                logging.info(\"softmax weight resume successfully!\")\n                logging.info(\"softmax weight mom resume successfully!\")\n            except (FileNotFoundError, KeyError, IndexError):\n                self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device)\n                self.weight_mom: torch.Tensor = torch.zeros_like(self.weight)\n                logging.info(\"softmax weight init!\")\n                logging.info(\"softmax weight mom init!\")\n        else:\n            self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device)\n            self.weight_mom: torch.Tensor = torch.zeros_like(self.weight)\n            logging.info(\"softmax weight init successfully!\")\n            logging.info(\"softmax weight mom init successfully!\")\n        self.stream: torch.cuda.Stream = 
torch.cuda.Stream(local_rank)\n\n        self.index = None\n        if int(self.sample_rate) == 1:\n            self.update = lambda: 0\n            self.sub_weight = Parameter(self.weight)\n            self.sub_weight_mom = self.weight_mom\n        else:\n            self.sub_weight = Parameter(torch.empty((0, 0)).cuda(local_rank))\n\n    def save_params(self):\n        \"\"\" Save the softmax weights for each rank under prefix\n        \"\"\"\n        torch.save(self.weight.data, self.weight_name)\n        torch.save(self.weight_mom, self.weight_mom_name)\n\n    @torch.no_grad()\n    def sample(self, total_label):\n        \"\"\"\n        Sample all positive class centers on each rank, and randomly select negative class centers to fill a fixed\n        `num_sample`.\n\n        total_label: tensor\n            Labels after all-gather across all GPUs.\n        \"\"\"\n        index_positive = (self.class_start <= total_label) & (total_label < self.class_start + self.num_local)\n        total_label[~index_positive] = -1\n        total_label[index_positive] -= self.class_start\n        if int(self.sample_rate) != 1:\n            positive = torch.unique(total_label[index_positive], sorted=True)\n            if self.num_sample - positive.size(0) >= 0:\n                perm = torch.rand(size=[self.num_local], device=self.device)\n                perm[positive] = 2.0\n                index = torch.topk(perm, k=self.num_sample)[1]\n                index = index.sort()[0]\n            else:\n                index = positive\n            self.index = index\n            total_label[index_positive] = torch.searchsorted(index, total_label[index_positive])\n            self.sub_weight = Parameter(self.weight[index])\n            self.sub_weight_mom = self.weight_mom[index]\n\n    def forward(self, total_features, norm_weight):\n        \"\"\" Partial fc forward, `logits = X * sample(W)`\n        \"\"\"\n        torch.cuda.current_stream().wait_stream(self.stream)\n        logits = 
linear(total_features, norm_weight)\n        return logits\n\n    @torch.no_grad()\n    def update(self):\n        \"\"\" Set the updated weight and weight_mom to the memory bank.\n        \"\"\"\n        self.weight_mom[self.index] = self.sub_weight_mom\n        self.weight[self.index] = self.sub_weight\n\n    def prepare(self, label, optimizer):\n        \"\"\"\n        Get the sampled class centers for calculating softmax.\n\n        label: tensor\n            Label tensor on each rank.\n        optimizer: opt\n            Optimizer for partial fc, which needs access to the weight momentum.\n        \"\"\"\n        with torch.cuda.stream(self.stream):\n            total_label = torch.zeros(\n                size=[self.batch_size * self.world_size], device=self.device, dtype=torch.long)\n            dist.all_gather(list(total_label.chunk(self.world_size, dim=0)), label)\n            self.sample(total_label)\n            optimizer.state.pop(optimizer.param_groups[-1]['params'][0], None)\n            optimizer.param_groups[-1]['params'][0] = self.sub_weight\n            optimizer.state[self.sub_weight]['momentum_buffer'] = self.sub_weight_mom\n            norm_weight = normalize(self.sub_weight)\n            return total_label, norm_weight\n\n    def forward_backward(self, label, features, optimizer):\n        \"\"\"\n        Partial fc forward and backward with model parallel\n\n        label: tensor\n            Label tensor on each rank(GPU)\n        features: tensor\n            Features tensor on each rank(GPU)\n        optimizer: optimizer\n            Optimizer for partial fc\n\n        Returns:\n        --------\n        x_grad: tensor\n            The gradient of features.\n        loss_v: tensor\n            Loss value for cross entropy.\n        \"\"\"\n        total_label, norm_weight = self.prepare(label, optimizer)\n        total_features = torch.zeros(\n            size=[self.batch_size * self.world_size, self.embedding_size], device=self.device)\n        dist.all_gather(list(total_features.chunk(self.world_size, dim=0)), features.data)\n        total_features.requires_grad = True\n\n        logits = self.forward(total_features, norm_weight)\n        logits = self.margin_softmax(logits, total_label)\n\n        with torch.no_grad():\n            max_fc = torch.max(logits, dim=1, keepdim=True)[0]\n            dist.all_reduce(max_fc, dist.ReduceOp.MAX)\n\n            # calculate exp(logits) and all-reduce\n            logits_exp = torch.exp(logits - max_fc)\n            logits_sum_exp = logits_exp.sum(dim=1, keepdim=True)\n            dist.all_reduce(logits_sum_exp, dist.ReduceOp.SUM)\n\n            # calculate prob\n            logits_exp.div_(logits_sum_exp)\n\n            # get one-hot\n            grad = logits_exp\n            index = torch.where(total_label != -1)[0]\n            one_hot = torch.zeros(size=[index.size()[0], grad.size()[1]], device=grad.device)\n            one_hot.scatter_(1, total_label[index, None], 1)\n\n            # calculate loss\n            loss = torch.zeros(grad.size()[0], 1, device=grad.device)\n            loss[index] = grad[index].gather(1, total_label[index, None])\n            dist.all_reduce(loss, dist.ReduceOp.SUM)\n            loss_v = loss.clamp_min_(1e-30).log_().mean() * (-1)\n\n            # calculate grad\n            grad[index] -= one_hot\n            grad.div_(self.batch_size * self.world_size)\n\n        logits.backward(grad)\n        if total_features.grad is not None:\n            total_features.grad.detach_()\n        x_grad: torch.Tensor = torch.zeros_like(features, requires_grad=True)\n        # feature gradient all-reduce\n        dist.reduce_scatter(x_grad, list(total_features.grad.chunk(self.world_size, dim=0)))\n        x_grad = x_grad * self.world_size\n        # backward backbone\n        return x_grad, loss_v\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/requirement.txt",
    "content": "tensorboard\neasydict\nmxnet\nonnx\nsklearn\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/run.sh",
    "content": "CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr=\"127.0.0.1\" --master_port=1234 train.py configs/ms1mv3_r50\nps -ef | grep \"train\" | grep -v grep | awk '{print \"kill -9 \"$2}' | sh\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/torch2onnx.py",
    "content": "import numpy as np\nimport onnx\nimport torch\n\n\ndef convert_onnx(net, path_module, output, opset=11, simplify=False):\n    assert isinstance(net, torch.nn.Module)\n    img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32)\n    img = img.astype(np.float)\n    img = (img / 255. - 0.5) / 0.5  # torch style norm\n    img = img.transpose((2, 0, 1))\n    img = torch.from_numpy(img).unsqueeze(0).float()\n\n    weight = torch.load(path_module)\n    net.load_state_dict(weight)\n    net.eval()\n    torch.onnx.export(net, img, output, keep_initializers_as_inputs=False, verbose=False, opset_version=opset)\n    model = onnx.load(output)\n    graph = model.graph\n    graph.input[0].type.tensor_type.shape.dim[0].dim_param = 'None'\n    if simplify:\n        from onnxsim import simplify\n        model, check = simplify(model)\n        assert check, \"Simplified ONNX model could not be validated\"\n    onnx.save(model, output)\n\n    \nif __name__ == '__main__':\n    import os\n    import argparse\n    from backbones import get_model\n\n    parser = argparse.ArgumentParser(description='ArcFace PyTorch to onnx')\n    parser.add_argument('input', type=str, help='input backbone.pth file or path')\n    parser.add_argument('--output', type=str, default=None, help='output onnx path')\n    parser.add_argument('--network', type=str, default=None, help='backbone network')\n    parser.add_argument('--simplify', type=bool, default=False, help='onnx simplify')\n    args = parser.parse_args()\n    input_file = args.input\n    if os.path.isdir(input_file):\n        input_file = os.path.join(input_file, \"backbone.pth\")\n    assert os.path.exists(input_file)\n    model_name = os.path.basename(os.path.dirname(input_file)).lower()\n    params = model_name.split(\"_\")\n    if len(params) >= 3 and params[1] in ('arcface', 'cosface'):\n        if args.network is None:\n            args.network = params[2]\n    assert args.network is not None\n    print(args)\n    
backbone_onnx = get_model(args.network, dropout=0)\n\n    output_path = args.output\n    if output_path is None:\n        output_path = os.path.join(os.path.dirname(__file__), 'onnx')\n    if not os.path.exists(output_path):\n        os.makedirs(output_path)\n    assert os.path.isdir(output_path)\n    output_file = os.path.join(output_path, \"%s.onnx\" % model_name)\n    convert_onnx(backbone_onnx, input_file, output_file, simplify=args.simplify)\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/train.py",
    "content": "import argparse\nimport logging\nimport os\n\nimport torch\nimport torch.distributed as dist\nimport torch.nn.functional as F\nimport torch.utils.data.distributed\nfrom torch.nn.utils import clip_grad_norm_\n\nimport losses\nfrom backbones import get_model\nfrom dataset import MXFaceDataset, SyntheticDataset, DataLoaderX\nfrom partial_fc import PartialFC\nfrom utils.utils_amp import MaxClipGradScaler\nfrom utils.utils_callbacks import CallBackVerification, CallBackLogging, CallBackModelCheckpoint\nfrom utils.utils_config import get_config\nfrom utils.utils_logging import AverageMeter, init_logging\n\n\ndef main(args):\n    cfg = get_config(args.config)\n    try:\n        world_size = int(os.environ['WORLD_SIZE'])\n        rank = int(os.environ['RANK'])\n        dist.init_process_group('nccl')\n    except KeyError:\n        world_size = 1\n        rank = 0\n        dist.init_process_group(backend='nccl', init_method=\"tcp://127.0.0.1:12584\", rank=rank, world_size=world_size)\n\n    local_rank = args.local_rank\n    torch.cuda.set_device(local_rank)\n    os.makedirs(cfg.output, exist_ok=True)\n    init_logging(rank, cfg.output)\n\n    if cfg.rec == \"synthetic\":\n        train_set = SyntheticDataset(local_rank=local_rank)\n    else:\n        train_set = MXFaceDataset(root_dir=cfg.rec, local_rank=local_rank)\n\n    train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, shuffle=True)\n    train_loader = DataLoaderX(\n        local_rank=local_rank, dataset=train_set, batch_size=cfg.batch_size,\n        sampler=train_sampler, num_workers=2, pin_memory=True, drop_last=True)\n    backbone = get_model(cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size).to(local_rank)\n\n    if cfg.resume:\n        try:\n            backbone_pth = os.path.join(cfg.output, \"backbone.pth\")\n            backbone.load_state_dict(torch.load(backbone_pth, map_location=torch.device(local_rank)))\n            if rank == 0:\n               
 logging.info(\"backbone resume successfully!\")\n        except (FileNotFoundError, KeyError, IndexError, RuntimeError):\n            if rank == 0:\n                logging.info(\"resume fail, backbone init successfully!\")\n\n    backbone = torch.nn.parallel.DistributedDataParallel(\n        module=backbone, broadcast_buffers=False, device_ids=[local_rank])\n    backbone.train()\n    margin_softmax = losses.get_loss(cfg.loss)\n    module_partial_fc = PartialFC(\n        rank=rank, local_rank=local_rank, world_size=world_size, resume=cfg.resume,\n        batch_size=cfg.batch_size, margin_softmax=margin_softmax, num_classes=cfg.num_classes,\n        sample_rate=cfg.sample_rate, embedding_size=cfg.embedding_size, prefix=cfg.output)\n\n    opt_backbone = torch.optim.SGD(\n        params=[{'params': backbone.parameters()}],\n        lr=cfg.lr / 512 * cfg.batch_size * world_size,\n        momentum=0.9, weight_decay=cfg.weight_decay)\n    opt_pfc = torch.optim.SGD(\n        params=[{'params': module_partial_fc.parameters()}],\n        lr=cfg.lr / 512 * cfg.batch_size * world_size,\n        momentum=0.9, weight_decay=cfg.weight_decay)\n\n    num_image = len(train_set)\n    total_batch_size = cfg.batch_size * world_size\n    cfg.warmup_step = num_image // total_batch_size * cfg.warmup_epoch\n    cfg.total_step = num_image // total_batch_size * cfg.num_epoch\n\n    def lr_step_func(current_step):\n        cfg.decay_step = [x * num_image // total_batch_size for x in cfg.decay_epoch]\n        if current_step < cfg.warmup_step:\n            return current_step / cfg.warmup_step\n        else:\n            return 0.1 ** len([m for m in cfg.decay_step if m <= current_step])\n\n    scheduler_backbone = torch.optim.lr_scheduler.LambdaLR(\n        optimizer=opt_backbone, lr_lambda=lr_step_func)\n    scheduler_pfc = torch.optim.lr_scheduler.LambdaLR(\n        optimizer=opt_pfc, lr_lambda=lr_step_func)\n\n    for key, value in cfg.items():\n        num_space = 25 - len(key)\n        
logging.info(\": \" + key + \" \" * num_space + str(value))\n\n    val_target = cfg.val_targets\n    callback_verification = CallBackVerification(2000, rank, val_target, cfg.rec)\n    callback_logging = CallBackLogging(50, rank, cfg.total_step, cfg.batch_size, world_size, None)\n    callback_checkpoint = CallBackModelCheckpoint(rank, cfg.output)\n\n    loss = AverageMeter()\n    start_epoch = 0\n    global_step = 0\n    grad_amp = MaxClipGradScaler(cfg.batch_size, 128 * cfg.batch_size, growth_interval=100) if cfg.fp16 else None\n    for epoch in range(start_epoch, cfg.num_epoch):\n        train_sampler.set_epoch(epoch)\n        for step, (img, label) in enumerate(train_loader):\n            global_step += 1\n            features = F.normalize(backbone(img))\n            x_grad, loss_v = module_partial_fc.forward_backward(label, features, opt_pfc)\n            if cfg.fp16:\n                features.backward(grad_amp.scale(x_grad))\n                grad_amp.unscale_(opt_backbone)\n                clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2)\n                grad_amp.step(opt_backbone)\n                grad_amp.update()\n            else:\n                features.backward(x_grad)\n                clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2)\n                opt_backbone.step()\n\n            opt_pfc.step()\n            module_partial_fc.update()\n            opt_backbone.zero_grad()\n            opt_pfc.zero_grad()\n            loss.update(loss_v, 1)\n            callback_logging(global_step, loss, epoch, cfg.fp16, scheduler_backbone.get_last_lr()[0], grad_amp)\n            callback_verification(global_step, backbone)\n            scheduler_backbone.step()\n            scheduler_pfc.step()\n        callback_checkpoint(global_step, backbone, module_partial_fc)\n    dist.destroy_process_group()\n\n\nif __name__ == \"__main__\":\n    torch.backends.cudnn.benchmark = True\n    parser = argparse.ArgumentParser(description='PyTorch 
ArcFace Training')\n    parser.add_argument('config', type=str, help='py config file')\n    parser.add_argument('--local_rank', type=int, default=0, help='local_rank')\n    main(parser.parse_args())\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/utils/__init__.py",
    "content": ""
  },
  {
    "path": "third_part/face3d/models/arcface_torch/utils/plot.py",
    "content": "# coding: utf-8\n\nimport os\nfrom pathlib import Path\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom menpo.visualize.viewmatplotlib import sample_colours_from_colourmap\nfrom prettytable import PrettyTable\nfrom sklearn.metrics import roc_curve, auc\n\nimage_path = \"/data/anxiang/IJB_release/IJBC\"\nfiles = [\n        \"./ms1mv3_arcface_r100/ms1mv3_arcface_r100/ijbc.npy\"\n]\n\n\ndef read_template_pair_list(path):\n    pairs = pd.read_csv(path, sep=' ', header=None).values\n    t1 = pairs[:, 0].astype(np.int)\n    t2 = pairs[:, 1].astype(np.int)\n    label = pairs[:, 2].astype(np.int)\n    return t1, t2, label\n\n\np1, p2, label = read_template_pair_list(\n    os.path.join('%s/meta' % image_path,\n                 '%s_template_pair_label.txt' % 'ijbc'))\n\nmethods = []\nscores = []\nfor file in files:\n    methods.append(file.split('/')[-2])\n    scores.append(np.load(file))\n\nmethods = np.array(methods)\nscores = dict(zip(methods, scores))\ncolours = dict(\n    zip(methods, sample_colours_from_colourmap(methods.shape[0], 'Set2')))\nx_labels = [10 ** -6, 10 ** -5, 10 ** -4, 10 ** -3, 10 ** -2, 10 ** -1]\ntpr_fpr_table = PrettyTable(['Methods'] + [str(x) for x in x_labels])\nfig = plt.figure()\nfor method in methods:\n    fpr, tpr, _ = roc_curve(label, scores[method])\n    roc_auc = auc(fpr, tpr)\n    fpr = np.flipud(fpr)\n    tpr = np.flipud(tpr)  # select largest tpr at same fpr\n    plt.plot(fpr,\n             tpr,\n             color=colours[method],\n             lw=1,\n             label=('[%s (AUC = %0.4f %%)]' %\n                    (method.split('-')[-1], roc_auc * 100)))\n    tpr_fpr_row = []\n    tpr_fpr_row.append(\"%s-%s\" % (method, \"IJBC\"))\n    for fpr_iter in np.arange(len(x_labels)):\n        _, min_index = min(\n            list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr)))))\n        tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100))\n    
tpr_fpr_table.add_row(tpr_fpr_row)\nplt.xlim([10 ** -6, 0.1])\nplt.ylim([0.3, 1.0])\nplt.grid(linestyle='--', linewidth=1)\nplt.xticks(x_labels)\nplt.yticks(np.linspace(0.3, 1.0, 8, endpoint=True))\nplt.xscale('log')\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC on IJB')\nplt.legend(loc=\"lower right\")\nprint(tpr_fpr_table)\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/utils/utils_amp.py",
    "content": "from typing import Dict, List\n\nimport torch\n\nif torch.__version__ < '1.9':\n    Iterable = torch._six.container_abcs.Iterable\nelse:\n    import collections\n\n    Iterable = collections.abc.Iterable\nfrom torch.cuda.amp import GradScaler\n\n\nclass _MultiDeviceReplicator(object):\n    \"\"\"\n    Lazily serves copies of a tensor to requested devices.  Copies are cached per-device.\n    \"\"\"\n\n    def __init__(self, master_tensor: torch.Tensor) -> None:\n        assert master_tensor.is_cuda\n        self.master = master_tensor\n        self._per_device_tensors: Dict[torch.device, torch.Tensor] = {}\n\n    def get(self, device) -> torch.Tensor:\n        retval = self._per_device_tensors.get(device, None)\n        if retval is None:\n            retval = self.master.to(device=device, non_blocking=True, copy=True)\n            self._per_device_tensors[device] = retval\n        return retval\n\n\nclass MaxClipGradScaler(GradScaler):\n    def __init__(self, init_scale, max_scale: float, growth_interval=100):\n        GradScaler.__init__(self, init_scale=init_scale, growth_interval=growth_interval)\n        self.max_scale = max_scale\n\n    def scale_clip(self):\n        if self.get_scale() == self.max_scale:\n            self.set_growth_factor(1)\n        elif self.get_scale() < self.max_scale:\n            self.set_growth_factor(2)\n        elif self.get_scale() > self.max_scale:\n            self._scale.fill_(self.max_scale)\n            self.set_growth_factor(1)\n\n    def scale(self, outputs):\n        \"\"\"\n        Multiplies ('scales') a tensor or list of tensors by the scale factor.\n\n        Returns scaled outputs.  
If this instance of :class:`GradScaler` is not enabled, outputs are returned\n        unmodified.\n\n        Arguments:\n            outputs (Tensor or iterable of Tensors):  Outputs to scale.\n        \"\"\"\n        if not self._enabled:\n            return outputs\n        self.scale_clip()\n        # Short-circuit for the common case.\n        if isinstance(outputs, torch.Tensor):\n            assert outputs.is_cuda\n            if self._scale is None:\n                self._lazy_init_scale_growth_tracker(outputs.device)\n            assert self._scale is not None\n            return outputs * self._scale.to(device=outputs.device, non_blocking=True)\n\n        # Invoke the more complex machinery only if we're treating multiple outputs.\n        stash: List[_MultiDeviceReplicator] = []  # holds a reference that can be overwritten by apply_scale\n\n        def apply_scale(val):\n            if isinstance(val, torch.Tensor):\n                assert val.is_cuda\n                if len(stash) == 0:\n                    if self._scale is None:\n                        self._lazy_init_scale_growth_tracker(val.device)\n                    assert self._scale is not None\n                    stash.append(_MultiDeviceReplicator(self._scale))\n                return val * stash[0].get(val.device)\n            elif isinstance(val, Iterable):\n                iterable = map(apply_scale, val)\n                if isinstance(val, list) or isinstance(val, tuple):\n                    return type(val)(iterable)\n                else:\n                    return iterable\n            else:\n                raise ValueError(\"outputs must be a Tensor or an iterable of Tensors\")\n\n        return apply_scale(outputs)\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/utils/utils_callbacks.py",
    "content": "import logging\nimport os\nimport time\nfrom typing import List\n\nimport torch\n\nfrom eval import verification\nfrom utils.utils_logging import AverageMeter\n\n\nclass CallBackVerification(object):\n    def __init__(self, frequent, rank, val_targets, rec_prefix, image_size=(112, 112)):\n        self.frequent: int = frequent\n        self.rank: int = rank\n        self.highest_acc: float = 0.0\n        self.highest_acc_list: List[float] = [0.0] * len(val_targets)\n        self.ver_list: List[object] = []\n        self.ver_name_list: List[str] = []\n        if self.rank is 0:\n            self.init_dataset(val_targets=val_targets, data_dir=rec_prefix, image_size=image_size)\n\n    def ver_test(self, backbone: torch.nn.Module, global_step: int):\n        results = []\n        for i in range(len(self.ver_list)):\n            acc1, std1, acc2, std2, xnorm, embeddings_list = verification.test(\n                self.ver_list[i], backbone, 10, 10)\n            logging.info('[%s][%d]XNorm: %f' % (self.ver_name_list[i], global_step, xnorm))\n            logging.info('[%s][%d]Accuracy-Flip: %1.5f+-%1.5f' % (self.ver_name_list[i], global_step, acc2, std2))\n            if acc2 > self.highest_acc_list[i]:\n                self.highest_acc_list[i] = acc2\n            logging.info(\n                '[%s][%d]Accuracy-Highest: %1.5f' % (self.ver_name_list[i], global_step, self.highest_acc_list[i]))\n            results.append(acc2)\n\n    def init_dataset(self, val_targets, data_dir, image_size):\n        for name in val_targets:\n            path = os.path.join(data_dir, name + \".bin\")\n            if os.path.exists(path):\n                data_set = verification.load_bin(path, image_size)\n                self.ver_list.append(data_set)\n                self.ver_name_list.append(name)\n\n    def __call__(self, num_update, backbone: torch.nn.Module):\n        if self.rank is 0 and num_update > 0 and num_update % self.frequent == 0:\n            backbone.eval()\n 
           self.ver_test(backbone, num_update)\n            backbone.train()\n\n\nclass CallBackLogging(object):\n    def __init__(self, frequent, rank, total_step, batch_size, world_size, writer=None):\n        self.frequent: int = frequent\n        self.rank: int = rank\n        self.time_start = time.time()\n        self.total_step: int = total_step\n        self.batch_size: int = batch_size\n        self.world_size: int = world_size\n        self.writer = writer\n\n        self.init = False\n        self.tic = 0\n\n    def __call__(self,\n                 global_step: int,\n                 loss: AverageMeter,\n                 epoch: int,\n                 fp16: bool,\n                 learning_rate: float,\n                 grad_scaler: torch.cuda.amp.GradScaler):\n        if self.rank == 0 and global_step > 0 and global_step % self.frequent == 0:\n            if self.init:\n                try:\n                    speed: float = self.frequent * self.batch_size / (time.time() - self.tic)\n                    speed_total = speed * self.world_size\n                except ZeroDivisionError:\n                    speed_total = float('inf')\n\n                time_now = (time.time() - self.time_start) / 3600\n                time_total = time_now / ((global_step + 1) / self.total_step)\n                time_for_end = time_total - time_now\n                if self.writer is not None:\n                    self.writer.add_scalar('time_for_end', time_for_end, global_step)\n                    self.writer.add_scalar('learning_rate', learning_rate, global_step)\n                    self.writer.add_scalar('loss', loss.avg, global_step)\n                if fp16:\n                    msg = \"Speed %.2f samples/sec   Loss %.4f   LearningRate %.4f   Epoch: %d   Global Step: %d   \" \\\n                          \"Fp16 Grad Scale: %2.f   Required: %1.f hours\" % (\n                              speed_total, loss.avg, learning_rate, epoch, global_step,\n                        
      grad_scaler.get_scale(), time_for_end\n                          )\n                else:\n                    msg = \"Speed %.2f samples/sec   Loss %.4f   LearningRate %.4f   Epoch: %d   Global Step: %d   \" \\\n                          \"Required: %1.f hours\" % (\n                              speed_total, loss.avg, learning_rate, epoch, global_step, time_for_end\n                          )\n                logging.info(msg)\n                loss.reset()\n                self.tic = time.time()\n            else:\n                self.init = True\n                self.tic = time.time()\n\n\nclass CallBackModelCheckpoint(object):\n    def __init__(self, rank, output=\"./\"):\n        self.rank: int = rank\n        self.output: str = output\n\n    def __call__(self, global_step, backbone, partial_fc, ):\n        if global_step > 100 and self.rank == 0:\n            path_module = os.path.join(self.output, \"backbone.pth\")\n            torch.save(backbone.module.state_dict(), path_module)\n            logging.info(\"Pytorch Model Saved in '{}'\".format(path_module))\n\n        if global_step > 100 and partial_fc is not None:\n            partial_fc.save_params()\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/utils/utils_config.py",
    "content": "import importlib\nimport os.path as osp\n\n\ndef get_config(config_file):\n    assert config_file.startswith('configs/'), 'config file setting must start with configs/'\n    temp_config_name = osp.basename(config_file)\n    temp_module_name = osp.splitext(temp_config_name)[0]\n    config = importlib.import_module(\"configs.base\")\n    cfg = config.config\n    config = importlib.import_module(\"configs.%s\" % temp_module_name)\n    job_cfg = config.config\n    cfg.update(job_cfg)\n    if cfg.output is None:\n        cfg.output = osp.join('work_dirs', temp_module_name)\n    return cfg"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/utils/utils_logging.py",
    "content": "import logging\nimport os\nimport sys\n\n\nclass AverageMeter(object):\n    \"\"\"Computes and stores the average and current value\n    \"\"\"\n\n    def __init__(self):\n        self.val = None\n        self.avg = None\n        self.sum = None\n        self.count = None\n        self.reset()\n\n    def reset(self):\n        self.val = 0\n        self.avg = 0\n        self.sum = 0\n        self.count = 0\n\n    def update(self, val, n=1):\n        self.val = val\n        self.sum += val * n\n        self.count += n\n        self.avg = self.sum / self.count\n\n\ndef init_logging(rank, models_root):\n    if rank == 0:\n        log_root = logging.getLogger()\n        log_root.setLevel(logging.INFO)\n        formatter = logging.Formatter(\"Training: %(asctime)s-%(message)s\")\n        handler_file = logging.FileHandler(os.path.join(models_root, \"training.log\"))\n        handler_stream = logging.StreamHandler(sys.stdout)\n        handler_file.setFormatter(formatter)\n        handler_stream.setFormatter(formatter)\n        log_root.addHandler(handler_file)\n        log_root.addHandler(handler_stream)\n        log_root.info('rank_id: %d' % rank)\n"
  },
  {
    "path": "third_part/face3d/models/arcface_torch/utils/utils_os.py",
    "content": ""
  },
  {
    "path": "third_part/face3d/models/base_model.py",
    "content": "\"\"\"This script defines the base network model for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport os\nimport numpy as np\nimport torch\nfrom collections import OrderedDict\nfrom abc import ABC, abstractmethod\nfrom . import networks\n\n\nclass BaseModel(ABC):\n    \"\"\"This class is an abstract base class (ABC) for models.\n    To create a subclass, you need to implement the following five functions:\n        -- <__init__>:                      initialize the class; first call BaseModel.__init__(self, opt).\n        -- <set_input>:                     unpack data from dataset and apply preprocessing.\n        -- <forward>:                       produce intermediate results.\n        -- <optimize_parameters>:           calculate losses, gradients, and update network weights.\n        -- <modify_commandline_options>:    (optionally) add model-specific options and set default options.\n    \"\"\"\n\n    def __init__(self, opt):\n        \"\"\"Initialize the BaseModel class.\n\n        Parameters:\n            opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions\n\n        When creating your custom class, you need to implement your own initialization.\n        In this fucntion, you should first call <BaseModel.__init__(self, opt)>\n        Then, you need to define four lists:\n            -- self.loss_names (str list):          specify the training losses that you want to plot and save.\n            -- self.model_names (str list):         specify the images that you want to display and save.\n            -- self.visual_names (str list):        define networks used in our training.\n            -- self.optimizers (optimizer list):    define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. 
See cycle_gan_model.py for an example.\n        \"\"\"\n        self.opt = opt\n        self.isTrain = opt.isTrain\n        self.device = torch.device('cpu') \n        self.save_dir = os.path.join(opt.checkpoints_dir, opt.name)  # save all the checkpoints to save_dir\n        self.loss_names = []\n        self.model_names = []\n        self.visual_names = []\n        self.parallel_names = []\n        self.optimizers = []\n        self.image_paths = []\n        self.metric = 0  # used for learning rate policy 'plateau'\n\n    @staticmethod\n    def dict_grad_hook_factory(add_func=lambda x: x):\n        saved_dict = dict()\n\n        def hook_gen(name):\n            def grad_hook(grad):\n                saved_vals = add_func(grad)\n                saved_dict[name] = saved_vals\n            return grad_hook\n        return hook_gen, saved_dict\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        \"\"\"Add new model-specific options, and rewrite default values for existing options.\n\n        Parameters:\n            parser          -- original option parser\n            is_train (bool) -- whether training phase or test phase. 
You can use this flag to add training-specific or test-specific options.\n\n        Returns:\n            the modified parser.\n        \"\"\"\n        return parser\n\n    @abstractmethod\n    def set_input(self, input):\n        \"\"\"Unpack input data from the dataloader and perform necessary pre-processing steps.\n\n        Parameters:\n            input (dict): includes the data itself and its metadata information.\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def forward(self):\n        \"\"\"Run forward pass; called by both functions <optimize_parameters> and <test>.\"\"\"\n        pass\n\n    @abstractmethod\n    def optimize_parameters(self):\n        \"\"\"Calculate losses, gradients, and update network weights; called in every training iteration\"\"\"\n        pass\n\n    def setup(self, opt):\n        \"\"\"Load and print networks; create schedulers\n\n        Parameters:\n            opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions\n        \"\"\"\n        if self.isTrain:\n            self.schedulers = [networks.get_scheduler(optimizer, opt) for optimizer in self.optimizers]\n        \n        if not self.isTrain or opt.continue_train:\n            load_suffix = opt.epoch\n            self.load_networks(load_suffix)\n \n            \n        # self.print_networks(opt.verbose)\n\n    def parallelize(self, convert_sync_batchnorm=True):\n        if not self.opt.use_ddp:\n            for name in self.parallel_names:\n                if isinstance(name, str):\n                    module = getattr(self, name)\n                    setattr(self, name, module.to(self.device))\n        else:\n            for name in self.model_names:\n                if isinstance(name, str):\n                    module = getattr(self, name)\n                    if convert_sync_batchnorm:\n                        module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module)\n                    setattr(self, name, 
torch.nn.parallel.DistributedDataParallel(module.to(self.device),\n                        device_ids=[self.device.index], \n                        find_unused_parameters=True, broadcast_buffers=True))\n            \n            # DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient.\n            for name in self.parallel_names:\n                if isinstance(name, str) and name not in self.model_names:\n                    module = getattr(self, name)\n                    setattr(self, name, module.to(self.device))\n            \n        # put state_dict of optimizer to gpu device\n        if self.opt.phase != 'test':\n            if self.opt.continue_train:\n                for optim in self.optimizers:\n                    for state in optim.state.values():\n                        for k, v in state.items():\n                            if isinstance(v, torch.Tensor):\n                                state[k] = v.to(self.device)\n\n    def data_dependent_initialize(self, data):\n        pass\n\n    def train(self):\n        \"\"\"Make models train mode\"\"\"\n        for name in self.model_names:\n            if isinstance(name, str):\n                net = getattr(self, name)\n                net.train()\n\n    def eval(self):\n        \"\"\"Make models eval mode\"\"\"\n        for name in self.model_names:\n            if isinstance(name, str):\n                net = getattr(self, name)\n                net.eval()\n\n    def test(self):\n        \"\"\"Forward function used in test time.\n\n        This function wraps <forward> function in no_grad() so we don't save intermediate steps for backprop\n        It also calls <compute_visuals> to produce additional visualization results\n        \"\"\"\n        with torch.no_grad():\n            self.forward()\n            self.compute_visuals()\n\n    def compute_visuals(self):\n        \"\"\"Calculate additional output images for visdom and HTML 
visualization\"\"\"\n        pass\n\n    def get_image_paths(self, name='A'):\n        \"\"\" Return image paths that are used to load current data\"\"\"\n        return self.image_paths if name =='A' else self.image_paths_B\n\n    def update_learning_rate(self):\n        \"\"\"Update learning rates for all the networks; called at the end of every epoch\"\"\"\n        for scheduler in self.schedulers:\n            if self.opt.lr_policy == 'plateau':\n                scheduler.step(self.metric)\n            else:\n                scheduler.step()\n\n        lr = self.optimizers[0].param_groups[0]['lr']\n        print('learning rate = %.7f' % lr)\n\n    def get_current_visuals(self):\n        \"\"\"Return visualization images. train.py will display these images with visdom, and save the images to a HTML\"\"\"\n        visual_ret = OrderedDict()\n        for name in self.visual_names:\n            if isinstance(name, str):\n                visual_ret[name] = getattr(self, name)[:, :3, ...]\n        return visual_ret\n\n    def get_current_losses(self):\n        \"\"\"Return traning losses / errors. train.py will print out these errors on console, and save them to a file\"\"\"\n        errors_ret = OrderedDict()\n        for name in self.loss_names:\n            if isinstance(name, str):\n                errors_ret[name] = float(getattr(self, 'loss_' + name))  # float(...) 
works for both scalar tensor and float number\n        return errors_ret\n\n    def save_networks(self, epoch):\n        \"\"\"Save all the networks to the disk.\n\n        Parameters:\n            epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)\n        \"\"\"\n        if not os.path.isdir(self.save_dir):\n            os.makedirs(self.save_dir)\n\n        save_filename = 'epoch_%s.pth' % (epoch)\n        save_path = os.path.join(self.save_dir, save_filename)\n        \n        save_dict = {}\n        for name in self.model_names:\n            if isinstance(name, str):\n                net = getattr(self, name)\n                if isinstance(net, torch.nn.DataParallel) or isinstance(net,\n                        torch.nn.parallel.DistributedDataParallel):\n                    net = net.module\n                save_dict[name] = net.state_dict()\n                \n\n        for i, optim in enumerate(self.optimizers):\n            save_dict['opt_%02d'%i] = optim.state_dict()\n\n        for i, sched in enumerate(self.schedulers):\n            save_dict['sched_%02d'%i] = sched.state_dict()\n        \n        torch.save(save_dict, save_path)\n\n    def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0):\n        \"\"\"Fix InstanceNorm checkpoints incompatibility (prior to 0.4)\"\"\"\n        key = keys[i]\n        if i + 1 == len(keys):  # at the end, pointing to a parameter/buffer\n            if module.__class__.__name__.startswith('InstanceNorm') and \\\n                    (key == 'running_mean' or key == 'running_var'):\n                if getattr(module, key) is None:\n                    state_dict.pop('.'.join(keys))\n            if module.__class__.__name__.startswith('InstanceNorm') and \\\n               (key == 'num_batches_tracked'):\n                state_dict.pop('.'.join(keys))\n        else:\n            self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1)\n\n    def 
load_networks(self, epoch):\n        \"\"\"Load all the networks from the disk.\n\n        Parameters:\n            epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)\n        \"\"\"\n        if self.opt.isTrain and self.opt.pretrained_name is not None:\n            load_dir = os.path.join(self.opt.checkpoints_dir, self.opt.pretrained_name)\n        else:\n            load_dir = self.save_dir\n        load_filename = 'epoch_%s.pth' % (epoch)\n        load_path = os.path.join(load_dir, load_filename)\n        state_dict = torch.load(load_path, map_location=self.device)\n        print('loading the model from %s' % load_path)\n\n        for name in self.model_names:\n            if isinstance(name, str):\n                net = getattr(self, name)\n                if isinstance(net, torch.nn.DataParallel):\n                    net = net.module\n                net.load_state_dict(state_dict[name])\n\n        if self.opt.phase != 'test':\n            if self.opt.continue_train:\n                print('loading the optim from %s' % load_path)\n                for i, optim in enumerate(self.optimizers):\n                    optim.load_state_dict(state_dict['opt_%02d'%i])\n\n                try:\n                    print('loading the sched from %s' % load_path)\n                    for i, sched in enumerate(self.schedulers):\n                        sched.load_state_dict(state_dict['sched_%02d'%i])\n                except Exception:\n                    print('Failed to load schedulers, set schedulers according to epoch count manually')\n                    for i, sched in enumerate(self.schedulers):\n                        sched.last_epoch = self.opt.epoch_count - 1\n\n    def print_networks(self, verbose):\n        \"\"\"Print the total number of parameters in the network and (if verbose) network architecture\n\n        Parameters:\n            verbose (bool) -- if verbose: print the network 
architecture\n        \"\"\"\n        print('---------- Networks initialized -------------')\n        for name in self.model_names:\n            if isinstance(name, str):\n                net = getattr(self, name)\n                num_params = 0\n                for param in net.parameters():\n                    num_params += param.numel()\n                if verbose:\n                    print(net)\n                print('[Network %s] Total number of parameters : %.3f M' % (name, num_params / 1e6))\n        print('-----------------------------------------------')\n\n    def set_requires_grad(self, nets, requires_grad=False):\n        \"\"\"Set requies_grad=Fasle for all the networks to avoid unnecessary computations\n        Parameters:\n            nets (network list)   -- a list of networks\n            requires_grad (bool)  -- whether the networks require gradients or not\n        \"\"\"\n        if not isinstance(nets, list):\n            nets = [nets]\n        for net in nets:\n            if net is not None:\n                for param in net.parameters():\n                    param.requires_grad = requires_grad\n\n    def generate_visuals_for_evaluation(self, data, mode):\n        return {}\n"
  },
  {
    "path": "third_part/face3d/models/bfm.py",
    "content": "\"\"\"This script defines the parametric 3d face model for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport numpy as np\nimport  torch\nimport torch.nn.functional as F\nfrom scipy.io import loadmat\nfrom face3d.util.load_mats import transferBFM09\nimport os\n\ndef perspective_projection(focal, center):\n    # return p.T (N, 3) @ (3, 3) \n    return np.array([\n        focal, 0, center,\n        0, focal, center,\n        0, 0, 1\n    ]).reshape([3, 3]).astype(np.float32).transpose()\n\nclass SH:\n    def __init__(self):\n        self.a = [np.pi, 2 * np.pi / np.sqrt(3.), 2 * np.pi / np.sqrt(8.)]\n        self.c = [1/np.sqrt(4 * np.pi), np.sqrt(3.) / np.sqrt(4 * np.pi), 3 * np.sqrt(5.) / np.sqrt(12 * np.pi)]\n\n\n\nclass ParametricFaceModel:\n    def __init__(self, \n                bfm_folder='./BFM', \n                recenter=True,\n                camera_distance=10.,\n                init_lit=np.array([\n                    0.8, 0, 0, 0, 0, 0, 0, 0, 0\n                    ]),\n                focal=1015.,\n                center=112.,\n                is_train=True,\n                default_name='BFM_model_front.mat'):\n        \n        if not os.path.isfile(os.path.join(bfm_folder, default_name)):\n            transferBFM09(bfm_folder)\n        model = loadmat(os.path.join(bfm_folder, default_name))\n        # mean face shape. [3*N,1]\n        self.mean_shape = model['meanshape'].astype(np.float32)\n        # identity basis. [3*N,80]\n        self.id_base = model['idBase'].astype(np.float32)\n        # expression basis. [3*N,64]\n        self.exp_base = model['exBase'].astype(np.float32)\n        # mean face texture. [3*N,1] (0-255)\n        self.mean_tex = model['meantex'].astype(np.float32)\n        # texture basis. [3*N,80]\n        self.tex_base = model['texBase'].astype(np.float32)\n        # face indices for each vertex that lies in. starts from 0. 
[N,8]\n        self.point_buf = model['point_buf'].astype(np.int64) - 1\n        # vertex indices for each face. starts from 0. [F,3]\n        self.face_buf = model['tri'].astype(np.int64) - 1\n        # vertex indices for 68 landmarks. starts from 0. [68,1]\n        self.keypoints = np.squeeze(model['keypoints']).astype(np.int64) - 1\n\n        if is_train:\n            # vertex indices for small face region to compute photometric error. starts from 0.\n            self.front_mask = np.squeeze(model['frontmask2_idx']).astype(np.int64) - 1\n            # vertex indices for each face from small face region. starts from 0. [f,3]\n            self.front_face_buf = model['tri_mask2'].astype(np.int64) - 1\n            # vertex indices for pre-defined skin region to compute reflectance loss\n            self.skin_mask = np.squeeze(model['skinmask'])\n        \n        if recenter:\n            mean_shape = self.mean_shape.reshape([-1, 3])\n            mean_shape = mean_shape - np.mean(mean_shape, axis=0, keepdims=True)\n            self.mean_shape = mean_shape.reshape([-1, 1])\n\n        self.persc_proj = perspective_projection(focal, center)\n        self.device = 'cpu'\n        self.camera_distance = camera_distance\n        self.SH = SH()\n        self.init_lit = init_lit.reshape([1, 1, -1]).astype(np.float32)\n        \n\n    def to(self, device):\n        self.device = device\n        for key, value in self.__dict__.items():\n            if type(value).__module__ == np.__name__:\n                setattr(self, key, torch.tensor(value).to(device))\n\n    \n    def compute_shape(self, id_coeff, exp_coeff):\n        \"\"\"\n        Return:\n            face_shape       -- torch.tensor, size (B, N, 3)\n\n        Parameters:\n            id_coeff         -- torch.tensor, size (B, 80), identity coeffs\n            exp_coeff        -- torch.tensor, size (B, 64), expression coeffs\n        \"\"\"\n        batch_size = id_coeff.shape[0]\n        id_part = 
torch.einsum('ij,aj->ai', self.id_base, id_coeff)\n        exp_part = torch.einsum('ij,aj->ai', self.exp_base, exp_coeff)\n        face_shape = id_part + exp_part + self.mean_shape.reshape([1, -1])\n        return face_shape.reshape([batch_size, -1, 3])\n    \n\n    def compute_texture(self, tex_coeff, normalize=True):\n        \"\"\"\n        Return:\n            face_texture     -- torch.tensor, size (B, N, 3), in RGB order, range (0, 1.)\n\n        Parameters:\n            tex_coeff        -- torch.tensor, size (B, 80)\n        \"\"\"\n        batch_size = tex_coeff.shape[0]\n        face_texture = torch.einsum('ij,aj->ai', self.tex_base, tex_coeff) + self.mean_tex\n        if normalize:\n            face_texture = face_texture / 255.\n        return face_texture.reshape([batch_size, -1, 3])\n\n\n    def compute_norm(self, face_shape):\n        \"\"\"\n        Return:\n            vertex_norm      -- torch.tensor, size (B, N, 3)\n\n        Parameters:\n            face_shape       -- torch.tensor, size (B, N, 3)\n        \"\"\"\n\n        v1 = face_shape[:, self.face_buf[:, 0]]\n        v2 = face_shape[:, self.face_buf[:, 1]]\n        v3 = face_shape[:, self.face_buf[:, 2]]\n        e1 = v1 - v2\n        e2 = v2 - v3\n        face_norm = torch.cross(e1, e2, dim=-1)\n        face_norm = F.normalize(face_norm, dim=-1, p=2)\n        face_norm = torch.cat([face_norm, torch.zeros(face_norm.shape[0], 1, 3).to(self.device)], dim=1)\n        \n        vertex_norm = torch.sum(face_norm[:, self.point_buf], dim=2)\n        vertex_norm = F.normalize(vertex_norm, dim=-1, p=2)\n        return vertex_norm\n\n\n    def compute_color(self, face_texture, face_norm, gamma):\n        \"\"\"\n        Return:\n            face_color       -- torch.tensor, size (B, N, 3), range (0, 1.)\n\n        Parameters:\n            face_texture     -- torch.tensor, size (B, N, 3), from texture model, range (0, 1.)\n            face_norm        -- torch.tensor, size (B, N, 3), rotated face 
normal\n            gamma            -- torch.tensor, size (B, 27), SH coeffs\n        \"\"\"\n        batch_size = gamma.shape[0]\n        v_num = face_texture.shape[1]\n        a, c = self.SH.a, self.SH.c\n        gamma = gamma.reshape([batch_size, 3, 9])\n        gamma = gamma + self.init_lit\n        gamma = gamma.permute(0, 2, 1)\n        Y = torch.cat([\n             a[0] * c[0] * torch.ones_like(face_norm[..., :1]).to(self.device),\n            -a[1] * c[1] * face_norm[..., 1:2],\n             a[1] * c[1] * face_norm[..., 2:],\n            -a[1] * c[1] * face_norm[..., :1],\n             a[2] * c[2] * face_norm[..., :1] * face_norm[..., 1:2],\n            -a[2] * c[2] * face_norm[..., 1:2] * face_norm[..., 2:],\n            0.5 * a[2] * c[2] / np.sqrt(3.) * (3 * face_norm[..., 2:] ** 2 - 1),\n            -a[2] * c[2] * face_norm[..., :1] * face_norm[..., 2:],\n            0.5 * a[2] * c[2] * (face_norm[..., :1] ** 2  - face_norm[..., 1:2] ** 2)\n        ], dim=-1)\n        r = Y @ gamma[..., :1]\n        g = Y @ gamma[..., 1:2]\n        b = Y @ gamma[..., 2:]\n        face_color = torch.cat([r, g, b], dim=-1) * face_texture\n        return face_color\n\n    \n    def compute_rotation(self, angles):\n        \"\"\"\n        Return:\n            rot              -- torch.tensor, size (B, 3, 3) pts @ trans_mat\n\n        Parameters:\n            angles           -- torch.tensor, size (B, 3), radian\n        \"\"\"\n\n        batch_size = angles.shape[0]\n        ones = torch.ones([batch_size, 1]).to(self.device)\n        zeros = torch.zeros([batch_size, 1]).to(self.device)\n        x, y, z = angles[:, :1], angles[:, 1:2], angles[:, 2:],\n        \n        rot_x = torch.cat([\n            ones, zeros, zeros,\n            zeros, torch.cos(x), -torch.sin(x), \n            zeros, torch.sin(x), torch.cos(x)\n        ], dim=1).reshape([batch_size, 3, 3])\n        \n        rot_y = torch.cat([\n            torch.cos(y), zeros, torch.sin(y),\n            zeros, ones, 
zeros,\n            -torch.sin(y), zeros, torch.cos(y)\n        ], dim=1).reshape([batch_size, 3, 3])\n\n        rot_z = torch.cat([\n            torch.cos(z), -torch.sin(z), zeros,\n            torch.sin(z), torch.cos(z), zeros,\n            zeros, zeros, ones\n        ], dim=1).reshape([batch_size, 3, 3])\n\n        rot = rot_z @ rot_y @ rot_x\n        return rot.permute(0, 2, 1)\n\n\n    def to_camera(self, face_shape):\n        face_shape[..., -1] = self.camera_distance - face_shape[..., -1]\n        return face_shape\n\n    def to_image(self, face_shape):\n        \"\"\"\n        Return:\n            face_proj        -- torch.tensor, size (B, N, 2), y direction is opposite to v direction\n\n        Parameters:\n            face_shape       -- torch.tensor, size (B, N, 3)\n        \"\"\"\n        # to image_plane\n        face_proj = face_shape @ self.persc_proj\n        face_proj = face_proj[..., :2] / face_proj[..., 2:]\n\n        return face_proj\n\n\n    def transform(self, face_shape, rot, trans):\n        \"\"\"\n        Return:\n            face_shape       -- torch.tensor, size (B, N, 3) pts @ rot + trans\n\n        Parameters:\n            face_shape       -- torch.tensor, size (B, N, 3)\n            rot              -- torch.tensor, size (B, 3, 3)\n            trans            -- torch.tensor, size (B, 3)\n        \"\"\"\n        return face_shape @ rot + trans.unsqueeze(1)\n\n\n    def get_landmarks(self, face_proj):\n        \"\"\"\n        Return:\n            face_lms         -- torch.tensor, size (B, 68, 2)\n\n        Parameters:\n            face_proj       -- torch.tensor, size (B, N, 2)\n        \"\"\"\n        return face_proj[:, self.keypoints]\n\n    def split_coeff(self, coeffs):\n        \"\"\"\n        Return:\n            coeffs_dict     -- a dict of torch.tensors\n\n        Parameters:\n            coeffs          -- torch.tensor, size (B, 257)\n        \"\"\"\n        id_coeffs = coeffs[:, :80]\n        exp_coeffs = coeffs[:, 80: 
144]\n        tex_coeffs = coeffs[:, 144: 224]\n        angles = coeffs[:, 224: 227]\n        gammas = coeffs[:, 227: 254]\n        translations = coeffs[:, 254:]\n        return {\n            'id': id_coeffs,\n            'exp': exp_coeffs,\n            'tex': tex_coeffs,\n            'angle': angles,\n            'gamma': gammas,\n            'trans': translations\n        }\n\n    def compute_for_render(self, coeffs):\n        \"\"\"\n        Return:\n            face_vertex     -- torch.tensor, size (B, N, 3), in camera coordinate\n            face_texture    -- torch.tensor, size (B, N, 3), in RGB order, range (0, 1.)\n            face_color      -- torch.tensor, size (B, N, 3), in RGB order\n            landmark        -- torch.tensor, size (B, 68, 2), y direction is opposite to v direction\n        Parameters:\n            coeffs          -- torch.tensor, size (B, 257)\n        \"\"\"\n        coef_dict = self.split_coeff(coeffs)\n        face_shape = self.compute_shape(coef_dict['id'], coef_dict['exp'])\n        rotation = self.compute_rotation(coef_dict['angle'])\n\n        face_shape_transformed = self.transform(face_shape, rotation, coef_dict['trans'])\n        face_vertex = self.to_camera(face_shape_transformed)\n        \n        face_proj = self.to_image(face_vertex)\n        landmark = self.get_landmarks(face_proj)\n\n        face_texture = self.compute_texture(coef_dict['tex'])\n        face_norm = self.compute_norm(face_shape)\n        face_norm_roted = face_norm @ rotation\n        face_color = self.compute_color(face_texture, face_norm_roted, coef_dict['gamma'])\n\n        return face_vertex, face_texture, face_color, landmark\n\n\nif __name__ == '__main__':\n    transferBFM09()"
  },
  {
    "path": "third_part/face3d/models/facerecon_model.py",
"content": "\"\"\"This script defines the face reconstruction model for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport numpy as np\nimport torch\nfrom face3d.models.base_model import BaseModel\nfrom face3d.models import networks\nfrom face3d.models.bfm import ParametricFaceModel\nfrom face3d.models.losses import perceptual_loss, photo_loss, reg_loss, reflectance_loss, landmark_loss\nfrom face3d.util import util \nfrom face3d.util.nvdiffrast import MeshRenderer\nfrom face3d.util.preprocess import estimate_norm_torch\n\nimport trimesh\nfrom scipy.io import savemat\n\nclass FaceReconModel(BaseModel):\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train=True):\n        \"\"\"Configures options specific for FaceReconModel\n        \"\"\"\n        # net structure and parameters\n        parser.add_argument('--net_recon', type=str, default='resnet50', choices=['resnet18', 'resnet34', 'resnet50'], help='network structure')\n        parser.add_argument('--init_path', type=str, default='checkpoints/init_model/resnet50-0676ba61.pth')\n        parser.add_argument('--use_last_fc', type=util.str2bool, nargs='?', const=True, default=False, help='zero initialize the last fc')\n        parser.add_argument('--bfm_folder', type=str, default='BFM')\n        parser.add_argument('--bfm_model', type=str, default='BFM_model_front.mat', help='bfm model')\n\n        # renderer parameters\n        parser.add_argument('--focal', type=float, default=1015.)\n        parser.add_argument('--center', type=float, default=112.)\n        parser.add_argument('--camera_d', type=float, default=10.)\n        parser.add_argument('--z_near', type=float, default=5.)\n        parser.add_argument('--z_far', type=float, default=15.)\n\n        if is_train:\n            # training parameters\n            parser.add_argument('--net_recog', type=str, default='r50', choices=['r18', 'r34', 'r50'], help='face recog network structure')\n            parser.add_argument('--net_recog_path', type=str, 
default='checkpoints/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth')\n            parser.add_argument('--use_crop_face', type=util.str2bool, nargs='?', const=True, default=False, help='use crop mask for photo loss')\n            parser.add_argument('--use_predef_M', type=util.str2bool, nargs='?', const=True, default=False, help='use predefined M for predicted face')\n\n            # augmentation parameters\n            parser.add_argument('--shift_pixs', type=float, default=10., help='shift pixels')\n            parser.add_argument('--scale_delta', type=float, default=0.1, help='delta scale factor')\n            parser.add_argument('--rot_angle', type=float, default=10., help='rot angles, degree')\n\n            # loss weights\n            parser.add_argument('--w_feat', type=float, default=0.2, help='weight for feat loss')\n            parser.add_argument('--w_color', type=float, default=1.92, help='weight for color loss')\n            parser.add_argument('--w_reg', type=float, default=3.0e-4, help='weight for reg loss')\n            parser.add_argument('--w_id', type=float, default=1.0, help='weight for id_reg loss')\n            parser.add_argument('--w_exp', type=float, default=0.8, help='weight for exp_reg loss')\n            parser.add_argument('--w_tex', type=float, default=1.7e-2, help='weight for tex_reg loss')\n            parser.add_argument('--w_gamma', type=float, default=10.0, help='weight for gamma loss')\n            parser.add_argument('--w_lm', type=float, default=1.6e-3, help='weight for lm loss')\n            parser.add_argument('--w_reflc', type=float, default=5.0, help='weight for reflc loss')\n\n        opt, _ = parser.parse_known_args()\n        parser.set_defaults(\n                focal=1015., center=112., camera_d=10., use_last_fc=False, z_near=5., z_far=15.\n            )\n        if is_train:\n            parser.set_defaults(\n                use_crop_face=True, use_predef_M=False\n            )\n        return 
parser\n\n    def __init__(self, opt):\n        \"\"\"Initialize this model class.\n\n        Parameters:\n            opt -- training/test options\n\n        A few things can be done here.\n        - (required) call the initialization function of BaseModel\n        - define loss function, visualization images, model names, and optimizers\n        \"\"\"\n        BaseModel.__init__(self, opt)  # call the initialization method of BaseModel\n        \n        self.visual_names = ['output_vis']\n        self.model_names = ['net_recon']\n        self.parallel_names = self.model_names + ['renderer']\n\n        self.net_recon = networks.define_net_recon(\n            net_recon=opt.net_recon, use_last_fc=opt.use_last_fc, init_path=opt.init_path\n        )\n\n        self.facemodel = ParametricFaceModel(\n            bfm_folder=opt.bfm_folder, camera_distance=opt.camera_d, focal=opt.focal, center=opt.center,\n            is_train=self.isTrain, default_name=opt.bfm_model\n        )\n        \n        fov = 2 * np.arctan(opt.center / opt.focal) * 180 / np.pi\n        self.renderer = MeshRenderer(\n            rasterize_fov=fov, znear=opt.z_near, zfar=opt.z_far, rasterize_size=int(2 * opt.center)\n        )\n\n        if self.isTrain:\n            self.loss_names = ['all', 'feat', 'color', 'lm', 'reg', 'gamma', 'reflc']\n\n            self.net_recog = networks.define_net_recog(\n                net_recog=opt.net_recog, pretrained_path=opt.net_recog_path\n                )\n            # loss func name: (compute_%s_loss) % loss_name\n            self.compute_feat_loss = perceptual_loss\n            self.comupte_color_loss = photo_loss\n            self.compute_lm_loss = landmark_loss\n            self.compute_reg_loss = reg_loss\n            self.compute_reflc_loss = reflectance_loss\n\n            self.optimizer = torch.optim.Adam(self.net_recon.parameters(), lr=opt.lr)\n            self.optimizers = [self.optimizer]\n            self.parallel_names += ['net_recog']\n        
# Our program will automatically call <model.setup> to define schedulers, load networks, and print networks\n\n    def set_input(self, input):\n        \"\"\"Unpack input data from the dataloader and perform necessary pre-processing steps.\n\n        Parameters:\n            input: a dictionary that contains the data itself and its metadata information.\n        \"\"\"\n        self.input_img = input['imgs'].to(self.device) \n        self.atten_mask = input['msks'].to(self.device) if 'msks' in input else None\n        self.gt_lm = input['lms'].to(self.device)  if 'lms' in input else None\n        self.trans_m = input['M'].to(self.device) if 'M' in input else None\n        self.image_paths = input['im_paths'] if 'im_paths' in input else None\n\n    def forward(self):\n        output_coeff = self.net_recon(self.input_img)\n        self.facemodel.to(self.device)\n        self.pred_vertex, self.pred_tex, self.pred_color, self.pred_lm = \\\n            self.facemodel.compute_for_render(output_coeff)\n        self.pred_mask, _, self.pred_face = self.renderer(\n            self.pred_vertex, self.facemodel.face_buf, feat=self.pred_color)\n        \n        self.pred_coeffs_dict = self.facemodel.split_coeff(output_coeff)\n\n\n    def compute_losses(self):\n        \"\"\"Calculate losses, gradients, and update network weights; called in every training iteration\"\"\"\n\n        assert self.net_recog.training == False\n        trans_m = self.trans_m\n        if not self.opt.use_predef_M:\n            trans_m = estimate_norm_torch(self.pred_lm, self.input_img.shape[-2])\n\n        pred_feat = self.net_recog(self.pred_face, trans_m)\n        gt_feat = self.net_recog(self.input_img, self.trans_m)\n        self.loss_feat = self.opt.w_feat * self.compute_feat_loss(pred_feat, gt_feat)\n\n        face_mask = self.pred_mask\n        if self.opt.use_crop_face:\n            face_mask, _, _ = self.renderer(self.pred_vertex, self.facemodel.front_face_buf)\n        \n        face_mask = 
face_mask.detach()\n        self.loss_color = self.opt.w_color * self.comupte_color_loss(\n            self.pred_face, self.input_img, self.atten_mask * face_mask)\n        \n        loss_reg, loss_gamma = self.compute_reg_loss(self.pred_coeffs_dict, self.opt)\n        self.loss_reg = self.opt.w_reg * loss_reg\n        self.loss_gamma = self.opt.w_gamma * loss_gamma\n\n        self.loss_lm = self.opt.w_lm * self.compute_lm_loss(self.pred_lm, self.gt_lm)\n\n        self.loss_reflc = self.opt.w_reflc * self.compute_reflc_loss(self.pred_tex, self.facemodel.skin_mask)\n\n        self.loss_all = self.loss_feat + self.loss_color + self.loss_reg + self.loss_gamma \\\n                        + self.loss_lm + self.loss_reflc\n\n    def optimize_parameters(self, isTrain=True):\n        \"\"\"Update network weights; it will be called in every training iteration.\"\"\"\n        self.forward()\n        self.compute_losses()\n        if isTrain:\n            self.optimizer.zero_grad()\n            self.loss_all.backward()\n            self.optimizer.step()\n\n    def compute_visuals(self):\n        with torch.no_grad():\n            input_img_numpy = 255. * self.input_img.detach().cpu().permute(0, 2, 3, 1).numpy()\n            output_vis = self.pred_face * self.pred_mask + (1 - self.pred_mask) * self.input_img\n            output_vis_numpy_raw = 255. 
* output_vis.detach().cpu().permute(0, 2, 3, 1).numpy()\n            \n            if self.gt_lm is not None:\n                gt_lm_numpy = self.gt_lm.cpu().numpy()\n                pred_lm_numpy = self.pred_lm.detach().cpu().numpy()\n                output_vis_numpy = util.draw_landmarks(output_vis_numpy_raw, gt_lm_numpy, 'b')\n                output_vis_numpy = util.draw_landmarks(output_vis_numpy, pred_lm_numpy, 'r')\n            \n                output_vis_numpy = np.concatenate((input_img_numpy, \n                                    output_vis_numpy_raw, output_vis_numpy), axis=-2)\n            else:\n                output_vis_numpy = np.concatenate((input_img_numpy, \n                                    output_vis_numpy_raw), axis=-2)\n\n            self.output_vis = torch.tensor(\n                    output_vis_numpy / 255., dtype=torch.float32\n                ).permute(0, 3, 1, 2).to(self.device)\n\n    def save_mesh(self, name):\n\n        recon_shape = self.pred_vertex  # get reconstructed shape\n        recon_shape[..., -1] = 10 - recon_shape[..., -1] # from camera space to world space\n        recon_shape = recon_shape.cpu().numpy()[0]\n        recon_color = self.pred_color\n        recon_color = recon_color.cpu().numpy()[0]\n        tri = self.facemodel.face_buf.cpu().numpy()\n        mesh = trimesh.Trimesh(vertices=recon_shape, faces=tri, vertex_colors=np.clip(255. * recon_color, 0, 255).astype(np.uint8))\n        mesh.export(name)\n\n    def save_coeff(self,name):\n\n        pred_coeffs = {key:self.pred_coeffs_dict[key].cpu().numpy() for key in self.pred_coeffs_dict}\n        pred_lm = self.pred_lm.cpu().numpy()\n        pred_lm = np.stack([pred_lm[:,:,0],self.input_img.shape[2]-1-pred_lm[:,:,1]],axis=2) # transfer to image coordinate\n        pred_coeffs['lm68'] = pred_lm\n        savemat(name,pred_coeffs)\n\n\n\n"
  },
  {
    "path": "third_part/face3d/models/losses.py",
"content": "import numpy as np\nimport torch\nimport torch.nn as nn\nfrom kornia.geometry import warp_affine\nimport torch.nn.functional as F\n\ndef resize_n_crop(image, M, dsize=112):\n    # image: (b, c, h, w)\n    # M   :  (b, 2, 3)\n    return warp_affine(image, M, dsize=(dsize, dsize))\n\n### perceptual level loss\nclass PerceptualLoss(nn.Module):\n    def __init__(self, recog_net, input_size=112):\n        super(PerceptualLoss, self).__init__()\n        self.recog_net = recog_net\n        self.preprocess = lambda x: 2 * x - 1\n        self.input_size = input_size\n\n    def forward(self, imageA, imageB, M):\n        \"\"\"\n        1 - cosine distance\n        Parameters:\n            imageA       --torch.tensor (B, 3, H, W), range (0, 1), RGB order\n            imageB       --same as imageA\n        \"\"\"\n\n        imageA = self.preprocess(resize_n_crop(imageA, M, self.input_size))\n        imageB = self.preprocess(resize_n_crop(imageB, M, self.input_size))\n\n        # freeze bn\n        self.recog_net.eval()\n        \n        id_featureA = F.normalize(self.recog_net(imageA), dim=-1, p=2)\n        id_featureB = F.normalize(self.recog_net(imageB), dim=-1, p=2)\n        cosine_d = torch.sum(id_featureA * id_featureB, dim=-1)\n        # assert torch.sum((cosine_d > 1).float()) == 0\n        return torch.sum(1 - cosine_d) / cosine_d.shape[0]\n\ndef perceptual_loss(id_featureA, id_featureB):\n    cosine_d = torch.sum(id_featureA * id_featureB, dim=-1)\n    # assert torch.sum((cosine_d > 1).float()) == 0\n    return torch.sum(1 - cosine_d) / cosine_d.shape[0]\n\n### image level loss\ndef photo_loss(imageA, imageB, mask, eps=1e-6):\n    \"\"\"\n    l2 norm (with sqrt, to ensure backward stability; use eps, otherwise NaN may occur)\n    Parameters:\n        imageA       --torch.tensor (B, 3, H, W), range (0, 1), RGB order \n        imageB       --same as imageA\n    \"\"\"\n    loss = torch.sqrt(eps + torch.sum((imageA - imageB) ** 2, dim=1, 
keepdims=True)) * mask\n    loss = torch.sum(loss) / torch.max(torch.sum(mask), torch.tensor(1.0).to(mask.device))\n    return loss\n\ndef landmark_loss(predict_lm, gt_lm, weight=None):\n    \"\"\"\n    weighted mse loss\n    Parameters:\n        predict_lm    --torch.tensor (B, 68, 2)\n        gt_lm         --torch.tensor (B, 68, 2)\n        weight        --numpy.array (1, 68)\n    \"\"\"\n    if weight is None:\n        weight = np.ones([68])\n        weight[28:31] = 20\n        weight[-8:] = 20\n        weight = np.expand_dims(weight, 0)\n        weight = torch.tensor(weight).to(predict_lm.device)\n    loss = torch.sum((predict_lm - gt_lm)**2, dim=-1) * weight\n    loss = torch.sum(loss) / (predict_lm.shape[0] * predict_lm.shape[1])\n    return loss\n\n\n### regularization\ndef reg_loss(coeffs_dict, opt=None):\n    \"\"\"\n    l2 norm without the sqrt, from yu's implementation (mse)\n    tf.nn.l2_loss https://www.tensorflow.org/api_docs/python/tf/nn/l2_loss\n    Parameters:\n        coeffs_dict     -- a dict of torch.tensors, keys: id, exp, tex, angle, gamma, trans\n\n    \"\"\"\n    # coefficient regularization to ensure plausible 3d faces\n    if opt:\n        w_id, w_exp, w_tex = opt.w_id, opt.w_exp, opt.w_tex\n    else:\n        w_id, w_exp, w_tex = 1, 1, 1\n    creg_loss = w_id * torch.sum(coeffs_dict['id'] ** 2) +  \\\n           w_exp * torch.sum(coeffs_dict['exp'] ** 2) + \\\n           w_tex * torch.sum(coeffs_dict['tex'] ** 2)\n    creg_loss = creg_loss / coeffs_dict['id'].shape[0]\n\n    # gamma regularization to ensure a nearly-monochromatic light\n    gamma = coeffs_dict['gamma'].reshape([-1, 3, 9])\n    gamma_mean = torch.mean(gamma, dim=1, keepdims=True)\n    gamma_loss = torch.mean((gamma - gamma_mean) ** 2)\n\n    return creg_loss, gamma_loss\n\ndef reflectance_loss(texture, mask):\n    \"\"\"\n    minimize texture variance (mse), albedo regularization to ensure a uniform skin albedo\n    Parameters:\n        texture       --torch.tensor, (B, N, 
3)\n        mask          --torch.tensor, (N), 1 or 0\n\n    \"\"\"\n    mask = mask.reshape([1, mask.shape[0], 1])\n    texture_mean = torch.sum(mask * texture, dim=1, keepdims=True) / torch.sum(mask)\n    loss = torch.sum(((texture - texture_mean) * mask)**2) / (texture.shape[0] * torch.sum(mask))\n    return loss\n\n"
  },
  {
    "path": "third_part/face3d/models/networks.py",
"content": "\"\"\"This script defines deep neural networks for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport os\nimport numpy as np\nimport torch.nn.functional as F\nfrom torch.nn import init\nimport functools\nfrom torch.optim import lr_scheduler\nimport torch\nfrom torch import Tensor\nimport torch.nn as nn\ntry:\n    from torch.hub import load_state_dict_from_url\nexcept ImportError:\n    from torch.utils.model_zoo import load_url as load_state_dict_from_url\nfrom typing import Type, Any, Callable, Union, List, Optional\nfrom .arcface_torch.backbones import get_model\nfrom kornia.geometry import warp_affine\n\ndef resize_n_crop(image, M, dsize=112):\n    # image: (b, c, h, w)\n    # M   :  (b, 2, 3)\n    return warp_affine(image, M, dsize=(dsize, dsize))\n\ndef filter_state_dict(state_dict, remove_name='fc'):\n    new_state_dict = {}\n    for key in state_dict:\n        if remove_name in key:\n            continue\n        new_state_dict[key] = state_dict[key]\n    return new_state_dict\n\ndef get_scheduler(optimizer, opt):\n    \"\"\"Return a learning rate scheduler\n\n    Parameters:\n        optimizer          -- the optimizer of the network\n        opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.\n                              opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine\n\n    For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers.\n    See https://pytorch.org/docs/stable/optim.html for more details.\n    \"\"\"\n    if opt.lr_policy == 'linear':\n        def lambda_rule(epoch):\n            lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs + 1)\n            return lr_l\n        scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)\n    elif opt.lr_policy == 'step':\n        scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_epochs, gamma=0.2)\n    elif opt.lr_policy == 
'plateau':\n        scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)\n    elif opt.lr_policy == 'cosine':\n        scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0)\n    else:\n        raise NotImplementedError('learning rate policy [%s] is not implemented' % opt.lr_policy)\n    return scheduler\n\n\ndef define_net_recon(net_recon, use_last_fc=False, init_path=None):\n    return ReconNetWrapper(net_recon, use_last_fc=use_last_fc, init_path=init_path)\n\ndef define_net_recog(net_recog, pretrained_path=None):\n    net = RecogNetWrapper(net_recog=net_recog, pretrained_path=pretrained_path)\n    net.eval()\n    return net\n\nclass ReconNetWrapper(nn.Module):\n    fc_dim = 257\n    def __init__(self, net_recon, use_last_fc=False, init_path=None):\n        super(ReconNetWrapper, self).__init__()\n        self.use_last_fc = use_last_fc\n        if net_recon not in func_dict:\n            raise NotImplementedError('network [%s] is not implemented' % net_recon)\n        func, last_dim = func_dict[net_recon]\n        backbone = func(use_last_fc=use_last_fc, num_classes=self.fc_dim)\n        if init_path and os.path.isfile(init_path):\n            state_dict = filter_state_dict(torch.load(init_path, map_location='cpu'))\n            backbone.load_state_dict(state_dict)\n            print(\"loading init net_recon %s from %s\" %(net_recon, init_path))\n        self.backbone = backbone\n        if not use_last_fc:\n            self.final_layers = nn.ModuleList([\n                conv1x1(last_dim, 80, bias=True), # id layer\n                conv1x1(last_dim, 64, bias=True), # exp layer\n                conv1x1(last_dim, 80, bias=True), # tex layer\n                conv1x1(last_dim, 3, bias=True),  # angle layer\n                conv1x1(last_dim, 27, bias=True), # gamma layer\n                conv1x1(last_dim, 2, bias=True),  # tx, ty\n                conv1x1(last_dim, 1, bias=True)   # tz\n 
           ])\n            for m in self.final_layers:\n                nn.init.constant_(m.weight, 0.)\n                nn.init.constant_(m.bias, 0.)\n\n    def forward(self, x):\n        x = self.backbone(x)\n        if not self.use_last_fc:\n            output = []\n            for layer in self.final_layers:\n                output.append(layer(x))\n            x = torch.flatten(torch.cat(output, dim=1), 1)\n        return x\n\n\nclass RecogNetWrapper(nn.Module):\n    def __init__(self, net_recog, pretrained_path=None, input_size=112):\n        super(RecogNetWrapper, self).__init__()\n        net = get_model(name=net_recog, fp16=False)\n        if pretrained_path:\n            state_dict = torch.load(pretrained_path, map_location='cpu')\n            net.load_state_dict(state_dict)\n            print(\"loading pretrained net_recog %s from %s\" %(net_recog, pretrained_path))\n        for param in net.parameters():\n            param.requires_grad = False\n        self.net = net\n        self.preprocess = lambda x: 2 * x - 1\n        self.input_size=input_size\n        \n    def forward(self, image, M):\n        image = self.preprocess(resize_n_crop(image, M, self.input_size))\n        id_feature = F.normalize(self.net(image), dim=-1, p=2)\n        return id_feature\n\n\n# adapted from https://github.com/pytorch/vision/edit/master/torchvision/models/resnet.py\n__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',\n           'resnet152', 'resnext50_32x4d', 'resnext101_32x8d',\n           'wide_resnet50_2', 'wide_resnet101_2']\n\n\nmodel_urls = {\n    'resnet18': 'https://download.pytorch.org/models/resnet18-f37072fd.pth',\n    'resnet34': 'https://download.pytorch.org/models/resnet34-b627a593.pth',\n    'resnet50': 'https://download.pytorch.org/models/resnet50-0676ba61.pth',\n    'resnet101': 'https://download.pytorch.org/models/resnet101-63fe2227.pth',\n    'resnet152': 'https://download.pytorch.org/models/resnet152-394f9c45.pth',\n    
'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',\n    'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',\n    'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth',\n    'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth',\n}\n\n\ndef conv3x3(in_planes: int, out_planes: int, stride: int = 1, groups: int = 1, dilation: int = 1) -> nn.Conv2d:\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n                     padding=dilation, groups=groups, bias=False, dilation=dilation)\n\n\ndef conv1x1(in_planes: int, out_planes: int, stride: int = 1, bias: bool = False) -> nn.Conv2d:\n    \"\"\"1x1 convolution\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=bias)\n\n\nclass BasicBlock(nn.Module):\n    expansion: int = 1\n\n    def __init__(\n        self,\n        inplanes: int,\n        planes: int,\n        stride: int = 1,\n        downsample: Optional[nn.Module] = None,\n        groups: int = 1,\n        base_width: int = 64,\n        dilation: int = 1,\n        norm_layer: Optional[Callable[..., nn.Module]] = None\n    ) -> None:\n        super(BasicBlock, self).__init__()\n        if norm_layer is None:\n            norm_layer = nn.BatchNorm2d\n        if groups != 1 or base_width != 64:\n            raise ValueError('BasicBlock only supports groups=1 and base_width=64')\n        if dilation > 1:\n            raise NotImplementedError(\"Dilation > 1 not supported in BasicBlock\")\n        # Both self.conv1 and self.downsample layers downsample the input when stride != 1\n        self.conv1 = conv3x3(inplanes, planes, stride)\n        self.bn1 = norm_layer(planes)\n        self.relu = nn.ReLU(inplace=True)\n        self.conv2 = conv3x3(planes, planes)\n        self.bn2 = norm_layer(planes)\n        self.downsample = 
downsample\n        self.stride = stride\n\n    def forward(self, x: Tensor) -> Tensor:\n        identity = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n\n        if self.downsample is not None:\n            identity = self.downsample(x)\n\n        out += identity\n        out = self.relu(out)\n\n        return out\n\n\nclass Bottleneck(nn.Module):\n    # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2)\n    # while original implementation places the stride at the first 1x1 convolution(self.conv1)\n    # according to \"Deep residual learning for image recognition\"https://arxiv.org/abs/1512.03385.\n    # This variant is also known as ResNet V1.5 and improves accuracy according to\n    # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch.\n\n    expansion: int = 4\n\n    def __init__(\n        self,\n        inplanes: int,\n        planes: int,\n        stride: int = 1,\n        downsample: Optional[nn.Module] = None,\n        groups: int = 1,\n        base_width: int = 64,\n        dilation: int = 1,\n        norm_layer: Optional[Callable[..., nn.Module]] = None\n    ) -> None:\n        super(Bottleneck, self).__init__()\n        if norm_layer is None:\n            norm_layer = nn.BatchNorm2d\n        width = int(planes * (base_width / 64.)) * groups\n        # Both self.conv2 and self.downsample layers downsample the input when stride != 1\n        self.conv1 = conv1x1(inplanes, width)\n        self.bn1 = norm_layer(width)\n        self.conv2 = conv3x3(width, width, stride, groups, dilation)\n        self.bn2 = norm_layer(width)\n        self.conv3 = conv1x1(width, planes * self.expansion)\n        self.bn3 = norm_layer(planes * self.expansion)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x: 
Tensor) -> Tensor:\n        identity = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            identity = self.downsample(x)\n\n        out += identity\n        out = self.relu(out)\n\n        return out\n\n\nclass ResNet(nn.Module):\n\n    def __init__(\n        self,\n        block: Type[Union[BasicBlock, Bottleneck]],\n        layers: List[int],\n        num_classes: int = 1000,\n        zero_init_residual: bool = False,\n        use_last_fc: bool = False,\n        groups: int = 1,\n        width_per_group: int = 64,\n        replace_stride_with_dilation: Optional[List[bool]] = None,\n        norm_layer: Optional[Callable[..., nn.Module]] = None\n    ) -> None:\n        super(ResNet, self).__init__()\n        if norm_layer is None:\n            norm_layer = nn.BatchNorm2d\n        self._norm_layer = norm_layer\n\n        self.inplanes = 64\n        self.dilation = 1\n        if replace_stride_with_dilation is None:\n            # each element in the tuple indicates if we should replace\n            # the 2x2 stride with a dilated convolution instead\n            replace_stride_with_dilation = [False, False, False]\n        if len(replace_stride_with_dilation) != 3:\n            raise ValueError(\"replace_stride_with_dilation should be None \"\n                             \"or a 3-element tuple, got {}\".format(replace_stride_with_dilation))\n        self.use_last_fc = use_last_fc\n        self.groups = groups\n        self.base_width = width_per_group\n        self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3,\n                               bias=False)\n        self.bn1 = norm_layer(self.inplanes)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, 
stride=2, padding=1)\n        self.layer1 = self._make_layer(block, 64, layers[0])\n        self.layer2 = self._make_layer(block, 128, layers[1], stride=2,\n                                       dilate=replace_stride_with_dilation[0])\n        self.layer3 = self._make_layer(block, 256, layers[2], stride=2,\n                                       dilate=replace_stride_with_dilation[1])\n        self.layer4 = self._make_layer(block, 512, layers[3], stride=2,\n                                       dilate=replace_stride_with_dilation[2])\n        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))\n        \n        if self.use_last_fc:\n            self.fc = nn.Linear(512 * block.expansion, num_classes)\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')\n            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n\n\n\n        # Zero-initialize the last BN in each residual branch,\n        # so that the residual branch starts with zeros, and each residual block behaves like an identity.\n        # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677\n        if zero_init_residual:\n            for m in self.modules():\n                if isinstance(m, Bottleneck):\n                    nn.init.constant_(m.bn3.weight, 0)  # type: ignore[arg-type]\n                elif isinstance(m, BasicBlock):\n                    nn.init.constant_(m.bn2.weight, 0)  # type: ignore[arg-type]\n\n    def _make_layer(self, block: Type[Union[BasicBlock, Bottleneck]], planes: int, blocks: int,\n                    stride: int = 1, dilate: bool = False) -> nn.Sequential:\n        norm_layer = self._norm_layer\n        downsample = None\n        previous_dilation = self.dilation\n        if dilate:\n            self.dilation *= stride\n            stride = 1\n  
      if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                conv1x1(self.inplanes, planes * block.expansion, stride),\n                norm_layer(planes * block.expansion),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample, self.groups,\n                            self.base_width, previous_dilation, norm_layer))\n        self.inplanes = planes * block.expansion\n        for _ in range(1, blocks):\n            layers.append(block(self.inplanes, planes, groups=self.groups,\n                                base_width=self.base_width, dilation=self.dilation,\n                                norm_layer=norm_layer))\n\n        return nn.Sequential(*layers)\n\n    def _forward_impl(self, x: Tensor) -> Tensor:\n        # See note [TorchScript super()]\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n        x = self.layer2(x)\n        x = self.layer3(x)\n        x = self.layer4(x)\n\n        x = self.avgpool(x)\n        if self.use_last_fc:\n            x = torch.flatten(x, 1)\n            x = self.fc(x)\n        return x\n\n    def forward(self, x: Tensor) -> Tensor:\n        return self._forward_impl(x)\n\n\ndef _resnet(\n    arch: str,\n    block: Type[Union[BasicBlock, Bottleneck]],\n    layers: List[int],\n    pretrained: bool,\n    progress: bool,\n    **kwargs: Any\n) -> ResNet:\n    model = ResNet(block, layers, **kwargs)\n    if pretrained:\n        state_dict = load_state_dict_from_url(model_urls[arch],\n                                              progress=progress)\n        model.load_state_dict(state_dict)\n    return model\n\n\ndef resnet18(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:\n    r\"\"\"ResNet-18 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_.\n\n    
Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet34(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:\n    r\"\"\"ResNet-34 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet50(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:\n    r\"\"\"ResNet-50 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet101(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:\n    r\"\"\"ResNet-101 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnet152(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:\n    r\"\"\"ResNet-152 model from\n    `\"Deep 
Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnext50_32x4d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:\n    r\"\"\"ResNeXt-50 32x4d model from\n    `\"Aggregated Residual Transformations for Deep Neural Networks\" <https://arxiv.org/pdf/1611.05431.pdf>`_.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['groups'] = 32\n    kwargs['width_per_group'] = 4\n    return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef resnext101_32x8d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:\n    r\"\"\"ResNeXt-101 32x8d model from\n    `\"Aggregated Residual Transformations for Deep Neural Networks\" <https://arxiv.org/pdf/1611.05431.pdf>`_.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['groups'] = 32\n    kwargs['width_per_group'] = 8\n    return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef wide_resnet50_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:\n    r\"\"\"Wide ResNet-50-2 model from\n    `\"Wide Residual Networks\" <https://arxiv.org/pdf/1605.07146.pdf>`_.\n\n    The model is the same as ResNet except for the bottleneck number of channels\n    which is twice larger in every block. 
The number of channels in outer 1x1\n    convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048\n    channels, and in Wide ResNet-50-2 has 2048-1024-2048.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['width_per_group'] = 64 * 2\n    return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef wide_resnet101_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:\n    r\"\"\"Wide ResNet-101-2 model from\n    `\"Wide Residual Networks\" <https://arxiv.org/pdf/1605.07146.pdf>`_.\n\n    The model is the same as ResNet except for the bottleneck number of channels\n    which is twice larger in every block. The number of channels in outer 1x1\n    convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048\n    channels, and in Wide ResNet-50-2 has 2048-1024-2048.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['width_per_group'] = 64 * 2\n    return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3],\n                   pretrained, progress, **kwargs)\n\n\nfunc_dict = {\n    'resnet18': (resnet18, 512),\n    'resnet50': (resnet50, 2048)\n}\n"
  },
  {
    "path": "third_part/face3d/models/template_model.py",
    "content": "\"\"\"Model class template\n\nThis module provides a template for users to implement custom models.\nYou can specify '--model template' to use this model.\nThe class name should be consistent with both the filename and its model option.\nThe filename should be <model>_dataset.py\nThe class name should be <Model>Dataset.py\nIt implements a simple image-to-image translation baseline based on regression loss.\nGiven input-output pairs (data_A, data_B), it learns a network netG that can minimize the following L1 loss:\n    min_<netG> ||netG(data_A) - data_B||_1\nYou need to implement the following functions:\n    <modify_commandline_options>:　Add model-specific options and rewrite default values for existing options.\n    <__init__>: Initialize this model class.\n    <set_input>: Unpack input data and perform data pre-processing.\n    <forward>: Run forward pass. This will be called by both <optimize_parameters> and <test>.\n    <optimize_parameters>: Update network weights; it will be called in every training iteration.\n\"\"\"\nimport numpy as np\nimport torch\nfrom .base_model import BaseModel\nfrom . import networks\n\n\nclass TemplateModel(BaseModel):\n    @staticmethod\n    def modify_commandline_options(parser, is_train=True):\n        \"\"\"Add new model-specific options and rewrite default values for existing options.\n\n        Parameters:\n            parser -- the option parser\n            is_train -- if it is training phase or test phase. You can use this flag to add training-specific or test-specific options.\n\n        Returns:\n            the modified parser.\n        \"\"\"\n        parser.set_defaults(dataset_mode='aligned')  # You can rewrite default values for this model. 
For example, this model usually uses aligned dataset as its dataset.\n        if is_train:\n            parser.add_argument('--lambda_regression', type=float, default=1.0, help='weight for the regression loss')  # You can define new arguments for this model.\n\n        return parser\n\n    def __init__(self, opt):\n        \"\"\"Initialize this model class.\n\n        Parameters:\n            opt -- training/test options\n\n        A few things can be done here.\n        - (required) call the initialization function of BaseModel\n        - define loss function, visualization images, model names, and optimizers\n        \"\"\"\n        BaseModel.__init__(self, opt)  # call the initialization method of BaseModel\n        # specify the training losses you want to print out. The program will call base_model.get_current_losses to plot the losses to the console and save them to the disk.\n        self.loss_names = ['loss_G']\n        # specify the images you want to save and display. The program will call base_model.get_current_visuals to save and display these images.\n        self.visual_names = ['data_A', 'data_B', 'output']\n        # specify the models you want to save to the disk. The program will call base_model.save_networks and base_model.load_networks to save and load networks.\n        # you can use opt.isTrain to specify different behaviors for training and test. For example, some networks will not be used during test, and you don't need to load them.\n        self.model_names = ['G']\n        # define networks; you can use opt.isTrain to specify different behaviors for training and test.\n        self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, gpu_ids=self.gpu_ids)\n        if self.isTrain:  # only defined during training time\n            # define your loss functions. You can use losses provided by torch.nn such as torch.nn.L1Loss.\n            # We also provide a GANLoss class \"networks.GANLoss\". 
self.criterionGAN = networks.GANLoss().to(self.device)\n            self.criterionLoss = torch.nn.L1Loss()\n            # define and initialize optimizers. You can define one optimizer for each network.\n            # If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example.\n            self.optimizer = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))\n            self.optimizers = [self.optimizer]\n\n        # Our program will automatically call <model.setup> to define schedulers, load networks, and print networks\n\n    def set_input(self, input):\n        \"\"\"Unpack input data from the dataloader and perform necessary pre-processing steps.\n\n        Parameters:\n            input: a dictionary that contains the data itself and its metadata information.\n        \"\"\"\n        AtoB = self.opt.direction == 'AtoB'  # use <direction> to swap data_A and data_B\n        self.data_A = input['A' if AtoB else 'B'].to(self.device)  # get image data A\n        self.data_B = input['B' if AtoB else 'A'].to(self.device)  # get image data B\n        self.image_paths = input['A_paths' if AtoB else 'B_paths']  # get image paths\n\n    def forward(self):\n        \"\"\"Run forward pass. This will be called by both functions <optimize_parameters> and <test>.\"\"\"\n        self.output = self.netG(self.data_A)  # generate output image given the input data_A\n\n    def backward(self):\n        \"\"\"Calculate losses, gradients, and update network weights; called in every training iteration\"\"\"\n        # calculate the intermediate results if necessary; here self.output has been computed during function <forward>\n        # calculate loss given the input and intermediate results\n        self.loss_G = self.criterionLoss(self.output, self.data_B) * self.opt.lambda_regression\n        self.loss_G.backward()       # calculate gradients of network G w.r.t. 
loss_G\n\n    def optimize_parameters(self):\n        \"\"\"Update network weights; it will be called in every training iteration.\"\"\"\n        self.forward()               # first call forward to calculate intermediate results\n        self.optimizer.zero_grad()   # clear network G's existing gradients\n        self.backward()              # calculate gradients for network G\n        self.optimizer.step()        # update gradients for network G\n"
  },
  {
    "path": "third_part/face3d/options/__init__.py",
    "content": "\"\"\"This package options includes option modules: training options, test options, and basic options (used in both training and test).\"\"\"\n"
  },
  {
    "path": "third_part/face3d/options/base_options.py",
    "content": "\"\"\"This script contains base options for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport argparse\nimport os\nfrom util import util\nimport numpy as np\nimport torch\nimport face3d.models as models\nimport face3d.data as data\n\n\nclass BaseOptions():\n    \"\"\"This class defines options used during both training and test time.\n\n    It also implements several helper functions such as parsing, printing, and saving the options.\n    It also gathers additional options defined in <modify_commandline_options> functions in both dataset class and model class.\n    \"\"\"\n\n    def __init__(self, cmd_line=None):\n        \"\"\"Reset the class; indicates the class hasn't been initialized\"\"\"\n        self.initialized = False\n        self.cmd_line = None\n        if cmd_line is not None:\n            self.cmd_line = cmd_line.split()\n\n    def initialize(self, parser):\n        \"\"\"Define the common options that are used in both training and test.\"\"\"\n        # basic parameters\n        parser.add_argument('--name', type=str, default='face_recon', help='name of the experiment. It decides where to store samples and models')\n        parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0  0,1,2, 0,2. 
 use -1 for CPU')\n        parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints', help='models are saved here')\n        parser.add_argument('--vis_batch_nums', type=float, default=1, help='batch nums of images for visualization')\n        parser.add_argument('--eval_batch_nums', type=float, default=float('inf'), help='batch nums of images for evaluation')\n        parser.add_argument('--use_ddp', type=util.str2bool, nargs='?', const=True, default=True, help='whether to use distributed data parallel')\n        parser.add_argument('--ddp_port', type=str, default='12355', help='ddp port')\n        parser.add_argument('--display_per_batch', type=util.str2bool, nargs='?', const=True, default=True, help='whether to show losses per batch')\n        parser.add_argument('--add_image', type=util.str2bool, nargs='?', const=True, default=True, help='whether to add images to tensorboard')\n        parser.add_argument('--world_size', type=int, default=1, help='world size for distributed training')\n\n        # model parameters\n        parser.add_argument('--model', type=str, default='facerecon', help='chooses which model to use.')\n\n        # additional parameters\n        parser.add_argument('--epoch', type=str, default='latest', help='which epoch to load? 
set to latest to use latest cached model')\n        parser.add_argument('--verbose', action='store_true', help='if specified, print more debugging information')\n        parser.add_argument('--suffix', default='', type=str, help='customized suffix: opt.name = opt.name + suffix: e.g., {model}_{netG}_size{load_size}')\n\n        self.initialized = True\n        return parser\n\n    def gather_options(self):\n        \"\"\"Initialize our parser with basic options (only once).\n        Add additional model-specific and dataset-specific options.\n        These options are defined in the <modify_commandline_options> function\n        in model and dataset classes.\n        \"\"\"\n        if not self.initialized:  # check if it has been initialized\n            parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n            parser = self.initialize(parser)\n\n        # get the basic options\n        if self.cmd_line is None:\n            opt, _ = parser.parse_known_args()\n        else:\n            opt, _ = parser.parse_known_args(self.cmd_line)\n\n        # set cuda visible devices\n        os.environ['CUDA_VISIBLE_DEVICES'] = opt.gpu_ids\n\n        # modify model-related parser options\n        model_name = opt.model\n        model_option_setter = models.get_option_setter(model_name)\n        parser = model_option_setter(parser, self.isTrain)\n        if self.cmd_line is None:\n            opt, _ = parser.parse_known_args()  # parse again with new defaults\n        else:\n            opt, _ = parser.parse_known_args(self.cmd_line)  # parse again with new defaults\n\n        # modify dataset-related parser options\n        if opt.dataset_mode:\n            dataset_name = opt.dataset_mode\n            dataset_option_setter = data.get_option_setter(dataset_name)\n            parser = dataset_option_setter(parser, self.isTrain)\n\n        # save and return the parser\n        self.parser = parser\n        if self.cmd_line is None:\n     
       return parser.parse_args()\n        else:\n            return parser.parse_args(self.cmd_line)\n\n    def print_options(self, opt):\n        \"\"\"Print and save options\n\n        It will print both current options and default values(if different).\n        It will save options into a text file / [checkpoints_dir] / opt.txt\n        \"\"\"\n        message = ''\n        message += '----------------- Options ---------------\\n'\n        for k, v in sorted(vars(opt).items()):\n            comment = ''\n            default = self.parser.get_default(k)\n            if v != default:\n                comment = '\\t[default: %s]' % str(default)\n            message += '{:>25}: {:<30}{}\\n'.format(str(k), str(v), comment)\n        message += '----------------- End -------------------'\n        print(message)\n\n        # save to the disk\n        expr_dir = os.path.join(opt.checkpoints_dir, opt.name)\n        util.mkdirs(expr_dir)\n        file_name = os.path.join(expr_dir, '{}_opt.txt'.format(opt.phase))\n        try:\n            with open(file_name, 'wt') as opt_file:\n                opt_file.write(message)\n                opt_file.write('\\n')\n        except PermissionError as error:\n            print(\"permission error {}\".format(error))\n            pass\n\n    def parse(self):\n        \"\"\"Parse our options, create checkpoints directory suffix, and set up gpu device.\"\"\"\n        opt = self.gather_options()\n        opt.isTrain = self.isTrain   # train or test\n\n        # process opt.suffix\n        if opt.suffix:\n            suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.suffix != '' else ''\n            opt.name = opt.name + suffix\n\n\n        # set gpu ids\n        str_ids = opt.gpu_ids.split(',')\n        gpu_ids = []\n        for str_id in str_ids:\n            id = int(str_id)\n            if id >= 0:\n                gpu_ids.append(id)\n        opt.world_size = len(gpu_ids)\n        # if len(opt.gpu_ids) > 0:\n        #     
torch.cuda.set_device(gpu_ids[0])\n        if opt.world_size == 1:\n            opt.use_ddp = False\n\n        if opt.phase != 'test':\n            # set continue_train automatically\n            if opt.pretrained_name is None:\n                model_dir = os.path.join(opt.checkpoints_dir, opt.name)\n            else:\n                model_dir = os.path.join(opt.checkpoints_dir, opt.pretrained_name)\n            if os.path.isdir(model_dir):\n                model_pths = [i for i in os.listdir(model_dir) if i.endswith('pth')]\n                if os.path.isdir(model_dir) and len(model_pths) != 0:\n                    opt.continue_train= True\n        \n            # update the latest epoch count\n            if opt.continue_train:\n                if opt.epoch == 'latest':\n                    epoch_counts = [int(i.split('.')[0].split('_')[-1]) for i in model_pths if 'latest' not in i]\n                    if len(epoch_counts) != 0:\n                        opt.epoch_count = max(epoch_counts) + 1\n                else:\n                    opt.epoch_count = int(opt.epoch) + 1\n                    \n\n        self.print_options(opt)\n        self.opt = opt\n        return self.opt\n"
  },
  {
    "path": "third_part/face3d/options/inference_options.py",
    "content": "from face3d.options.base_options import BaseOptions\n\n\nclass InferenceOptions(BaseOptions):\n    \"\"\"This class includes test options.\n\n    It also includes shared options defined in BaseOptions.\n    \"\"\"\n\n    def initialize(self, parser):\n        parser = BaseOptions.initialize(self, parser)  # define shared options\n        parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc')\n        parser.add_argument('--dataset_mode', type=str, default=None, help='chooses how datasets are loaded. [None | flist]')\n\n        parser.add_argument('--input_dir', type=str, help='the folder of the input files')\n        parser.add_argument('--keypoint_dir', type=str, help='the folder of the keypoint files')\n        parser.add_argument('--output_dir', type=str, default='mp4', help='the output dir to save the extracted coefficients')\n        parser.add_argument('--save_split_files', action='store_true', help='save split files or not')\n        parser.add_argument('--inference_batch_size', type=int, default=8)\n        \n        # Dropout and Batchnorm has different behavior during training and test.\n        self.isTrain = False\n        return parser\n"
  },
  {
    "path": "third_part/face3d/options/test_options.py",
    "content": "\"\"\"This script contains the test options for Deep3DFaceRecon_pytorch\n\"\"\"\n\nfrom .base_options import BaseOptions\n\n\nclass TestOptions(BaseOptions):\n    \"\"\"This class includes test options.\n\n    It also includes shared options defined in BaseOptions.\n    \"\"\"\n\n    def initialize(self, parser):\n        parser = BaseOptions.initialize(self, parser)  # define shared options\n        parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc')\n        parser.add_argument('--dataset_mode', type=str, default=None, help='chooses how datasets are loaded. [None | flist]')\n        parser.add_argument('--img_folder', type=str, default='examples', help='folder for test images.')\n\n        # Dropout and Batchnorm has different behavior during training and test.\n        self.isTrain = False\n        return parser\n"
  },
  {
    "path": "third_part/face3d/options/train_options.py",
    "content": "\"\"\"This script contains the training options for Deep3DFaceRecon_pytorch\n\"\"\"\n\nfrom .base_options import BaseOptions\nfrom util import util\n\nclass TrainOptions(BaseOptions):\n    \"\"\"This class includes training options.\n\n    It also includes shared options defined in BaseOptions.\n    \"\"\"\n\n    def initialize(self, parser):\n        parser = BaseOptions.initialize(self, parser)\n        # dataset parameters\n        # for train\n        parser.add_argument('--data_root', type=str, default='./', help='dataset root')\n        parser.add_argument('--flist', type=str, default='datalist/train/masks.txt', help='list of mask names of training set')\n        parser.add_argument('--batch_size', type=int, default=32)\n        parser.add_argument('--dataset_mode', type=str, default='flist', help='chooses how datasets are loaded. [None | flist]')\n        parser.add_argument('--serial_batches', action='store_true', help='if true, takes images in order to make batches, otherwise takes them randomly')\n        parser.add_argument('--num_threads', default=4, type=int, help='# threads for loading data')\n        parser.add_argument('--max_dataset_size', type=int, default=float(\"inf\"), help='Maximum number of samples allowed per dataset. 
If the dataset directory contains more than max_dataset_size, only a subset is loaded.')\n        parser.add_argument('--preprocess', type=str, default='shift_scale_rot_flip', help='scaling and cropping of images at load time [shift_scale_rot_flip | shift_scale | shift | shift_rot_flip ]')\n        parser.add_argument('--use_aug', type=util.str2bool, nargs='?', const=True, default=True, help='whether to use data augmentation')\n\n        # for val\n        parser.add_argument('--flist_val', type=str, default='datalist/val/masks.txt', help='list of mask names of val set')\n        parser.add_argument('--batch_size_val', type=int, default=32)\n\n\n        # visualization parameters\n        parser.add_argument('--display_freq', type=int, default=1000, help='frequency of showing training results on screen')\n        parser.add_argument('--print_freq', type=int, default=100, help='frequency of showing training results on console')\n        \n        # network saving and loading parameters\n        parser.add_argument('--save_latest_freq', type=int, default=5000, help='frequency of saving the latest results')\n        parser.add_argument('--save_epoch_freq', type=int, default=1, help='frequency of saving checkpoints at the end of epochs')\n        parser.add_argument('--evaluation_freq', type=int, default=5000, help='evaluation freq')\n        parser.add_argument('--save_by_iter', action='store_true', help='whether to save model by iteration')\n        parser.add_argument('--continue_train', action='store_true', help='continue training: load the latest model')\n        parser.add_argument('--epoch_count', type=int, default=1, help='the starting epoch count, we save the model by <epoch_count>, <epoch_count>+<save_latest_freq>, ...')\n        parser.add_argument('--phase', type=str, default='train', help='train, val, test, etc')\n        parser.add_argument('--pretrained_name', type=str, default=None, help='resume training from another checkpoint')\n\n        # training 
parameters\n        parser.add_argument('--n_epochs', type=int, default=20, help='number of epochs with the initial learning rate')\n        parser.add_argument('--lr', type=float, default=0.0001, help='initial learning rate for adam')\n        parser.add_argument('--lr_policy', type=str, default='step', help='learning rate policy. [linear | step | plateau | cosine]')\n        parser.add_argument('--lr_decay_epochs', type=int, default=10, help='multiply by a gamma every lr_decay_epochs epochs')\n\n        self.isTrain = True\n        return parser\n"
  },
  {
    "path": "third_part/face3d/util/__init__.py",
    "content": "\"\"\"This package includes a miscellaneous collection of useful helper functions.\"\"\"\nfrom face3d.util import *\n"
  },
  {
    "path": "third_part/face3d/util/detect_lm68.py",
    "content": "import os\nimport cv2\nimport numpy as np\nfrom scipy.io import loadmat\nimport tensorflow as tf\nfrom util.preprocess import align_for_lm\nfrom shutil import move\n\nmean_face = np.loadtxt('util/test_mean_face.txt')\nmean_face = mean_face.reshape([68, 2])\n\ndef save_label(labels, save_path):\n    np.savetxt(save_path, labels)\n\ndef draw_landmarks(img, landmark, save_name):\n    landmark = landmark\n    lm_img = np.zeros([img.shape[0], img.shape[1], 3])\n    lm_img[:] = img.astype(np.float32)\n    landmark = np.round(landmark).astype(np.int32)\n\n    for i in range(len(landmark)):\n        for j in range(-1, 1):\n            for k in range(-1, 1):\n                if img.shape[0] - 1 - landmark[i, 1]+j > 0 and \\\n                        img.shape[0] - 1 - landmark[i, 1]+j < img.shape[0] and \\\n                        landmark[i, 0]+k > 0 and \\\n                        landmark[i, 0]+k < img.shape[1]:\n                    lm_img[img.shape[0] - 1 - landmark[i, 1]+j, landmark[i, 0]+k,\n                           :] = np.array([0, 0, 255])\n    lm_img = lm_img.astype(np.uint8)\n\n    cv2.imwrite(save_name, lm_img)\n\n\ndef load_data(img_name, txt_name):\n    return cv2.imread(img_name), np.loadtxt(txt_name)\n\n# create tensorflow graph for landmark detector\ndef load_lm_graph(graph_filename):\n    with tf.gfile.GFile(graph_filename, 'rb') as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n\n    with tf.Graph().as_default() as graph:\n        tf.import_graph_def(graph_def, name='net')\n        img_224 = graph.get_tensor_by_name('net/input_imgs:0')\n        output_lm = graph.get_tensor_by_name('net/lm:0')\n        lm_sess = tf.Session(graph=graph)\n\n    return lm_sess,img_224,output_lm\n\n# landmark detection\ndef detect_68p(img_path,sess,input_op,output_op):\n    print('detecting landmarks......')\n    names = [i for i in sorted(os.listdir(\n        img_path)) if 'jpg' in i or 'png' in i or 'jpeg' in i or 'PNG' 
in i]\n    vis_path = os.path.join(img_path, 'vis')\n    remove_path = os.path.join(img_path, 'remove')\n    save_path = os.path.join(img_path, 'landmarks')\n    if not os.path.isdir(vis_path):\n        os.makedirs(vis_path)\n    if not os.path.isdir(remove_path):\n        os.makedirs(remove_path)\n    if not os.path.isdir(save_path):\n        os.makedirs(save_path)\n\n    for i in range(0, len(names)):\n        name = names[i]\n        print('%05d' % (i), ' ', name)\n        full_image_name = os.path.join(img_path, name)\n        txt_name = '.'.join(name.split('.')[:-1]) + '.txt'\n        full_txt_name = os.path.join(img_path, 'detections', txt_name) # 5 facial landmark path for each image\n\n        # if an image does not have detected 5 facial landmarks, remove it from the training list\n        if not os.path.isfile(full_txt_name):\n            move(full_image_name, os.path.join(remove_path, name))\n            continue \n\n        # load data\n        img, five_points = load_data(full_image_name, full_txt_name)\n        input_img, scale, bbox = align_for_lm(img, five_points) # align for 68 landmark detection \n\n        # if the alignment fails, remove corresponding image from the training list\n        if scale == 0:\n            move(full_txt_name, os.path.join(\n                remove_path, txt_name))\n            move(full_image_name, os.path.join(remove_path, name))\n            continue\n\n        # detect landmarks\n        input_img = np.reshape(\n            input_img, [1, 224, 224, 3]).astype(np.float32)\n        landmark = sess.run(\n            output_op, feed_dict={input_op: input_img})\n\n        # transform back to original image coordinate\n        landmark = landmark.reshape([68, 2]) + mean_face\n        landmark[:, 1] = 223 - landmark[:, 1]\n        landmark = landmark / scale\n        landmark[:, 0] = landmark[:, 0] + bbox[0]\n        landmark[:, 1] = landmark[:, 1] + bbox[1]\n        landmark[:, 1] = img.shape[0] - 1 - landmark[:, 1]\n\n    
    if i % 100 == 0:\n            draw_landmarks(img, landmark, os.path.join(vis_path, name))\n        save_label(landmark, os.path.join(save_path, txt_name))\n"
  },
  {
    "path": "third_part/face3d/util/generate_list.py",
    "content": "\"\"\"This script is to generate training list files for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport os\n\n# save path to training data\ndef write_list(lms_list, imgs_list, msks_list, mode='train',save_folder='datalist', save_name=''):\n    save_path = os.path.join(save_folder, mode)\n    if not os.path.isdir(save_path):\n        os.makedirs(save_path)\n    with open(os.path.join(save_path, save_name + 'landmarks.txt'), 'w') as fd:\n        fd.writelines([i + '\\n' for i in lms_list])\n\n    with open(os.path.join(save_path, save_name + 'images.txt'), 'w') as fd:\n        fd.writelines([i + '\\n' for i in imgs_list])\n\n    with open(os.path.join(save_path, save_name + 'masks.txt'), 'w') as fd:\n        fd.writelines([i + '\\n' for i in msks_list])   \n\n# check if the path is valid\ndef check_list(rlms_list, rimgs_list, rmsks_list):\n    lms_list, imgs_list, msks_list = [], [], []\n    for i in range(len(rlms_list)):\n        flag = 'false'\n        lm_path = rlms_list[i]\n        im_path = rimgs_list[i]\n        msk_path = rmsks_list[i]\n        if os.path.isfile(lm_path) and os.path.isfile(im_path) and os.path.isfile(msk_path):\n            flag = 'true'\n            lms_list.append(rlms_list[i])\n            imgs_list.append(rimgs_list[i])\n            msks_list.append(rmsks_list[i])\n        print(i, rlms_list[i], flag)\n    return lms_list, imgs_list, msks_list\n"
  },
  {
    "path": "third_part/face3d/util/html.py",
    "content": "import dominate\nfrom dominate.tags import meta, h3, table, tr, td, p, a, img, br\nimport os\n\n\nclass HTML:\n    \"\"\"This HTML class allows us to save images and write texts into a single HTML file.\n\n     It consists of functions such as <add_header> (add a text header to the HTML file),\n     <add_images> (add a row of images to the HTML file), and <save> (save the HTML to the disk).\n     It is based on 'dominate', a Python library for creating and manipulating HTML documents using a DOM API.\n    \"\"\"\n\n    def __init__(self, web_dir, title, refresh=0):\n        \"\"\"Initialize the HTML class\n\n        Parameters:\n            web_dir (str) -- a directory that stores the webpage. HTML file will be created at <web_dir>/index.html; images will be saved at <web_dir>/images/\n            title (str)   -- the webpage name\n            refresh (int) -- how often the website refreshes itself; if 0, no refreshing\n        \"\"\"\n        self.title = title\n        self.web_dir = web_dir\n        self.img_dir = os.path.join(self.web_dir, 'images')\n        if not os.path.exists(self.web_dir):\n            os.makedirs(self.web_dir)\n        if not os.path.exists(self.img_dir):\n            os.makedirs(self.img_dir)\n\n        self.doc = dominate.document(title=title)\n        if refresh > 0:\n            with self.doc.head:\n                meta(http_equiv=\"refresh\", content=str(refresh))\n\n    def get_image_dir(self):\n        \"\"\"Return the directory that stores images\"\"\"\n        return self.img_dir\n\n    def add_header(self, text):\n        \"\"\"Insert a header to the HTML file\n\n        Parameters:\n            text (str) -- the header text\n        \"\"\"\n        with self.doc:\n            h3(text)\n\n    def add_images(self, ims, txts, links, width=400):\n        \"\"\"Add images to the HTML file\n\n        Parameters:\n            ims (str list)   -- a list of image paths\n            txts (str list)  -- a 
list of image names shown on the website\n            links (str list) --  a list of hyperref links; when you click an image, it will redirect you to a new page\n        \"\"\"\n        self.t = table(border=1, style=\"table-layout: fixed;\")  # Insert a table\n        self.doc.add(self.t)\n        with self.t:\n            with tr():\n                for im, txt, link in zip(ims, txts, links):\n                    with td(style=\"word-wrap: break-word;\", halign=\"center\", valign=\"top\"):\n                        with p():\n                            with a(href=os.path.join('images', link)):\n                                img(style=\"width:%dpx\" % width, src=os.path.join('images', im))\n                            br()\n                            p(txt)\n\n    def save(self):\n        \"\"\"save the current content to the HTML file\"\"\"\n        html_file = '%s/index.html' % self.web_dir\n        f = open(html_file, 'wt')\n        f.write(self.doc.render())\n        f.close()\n\n\nif __name__ == '__main__':  # we show an example usage here.\n    html = HTML('web/', 'test_html')\n    html.add_header('hello world')\n\n    ims, txts, links = [], [], []\n    for n in range(4):\n        ims.append('image_%d.png' % n)\n        txts.append('text_%d' % n)\n        links.append('image_%d.png' % n)\n    html.add_images(ims, txts, links)\n    html.save()\n"
  },
  {
    "path": "third_part/face3d/util/load_mats.py",
    "content": "\"\"\"This script is to load 3D face model for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport numpy as np\nfrom PIL import Image\nfrom scipy.io import loadmat, savemat\nfrom array import array\nimport os.path as osp\n\n# load expression basis\ndef LoadExpBasis(bfm_folder='BFM'):\n    n_vertex = 53215\n    Expbin = open(osp.join(bfm_folder, 'Exp_Pca.bin'), 'rb')\n    exp_dim = array('i')\n    exp_dim.fromfile(Expbin, 1)\n    expMU = array('f')\n    expPC = array('f')\n    expMU.fromfile(Expbin, 3*n_vertex)\n    expPC.fromfile(Expbin, 3*exp_dim[0]*n_vertex)\n    Expbin.close()\n\n    expPC = np.array(expPC)\n    expPC = np.reshape(expPC, [exp_dim[0], -1])\n    expPC = np.transpose(expPC)\n\n    expEV = np.loadtxt(osp.join(bfm_folder, 'std_exp.txt'))\n\n    return expPC, expEV\n\n\n# transfer original BFM09 to our face model\ndef transferBFM09(bfm_folder='BFM'):\n    print('Transfer BFM09 to BFM_model_front......')\n    original_BFM = loadmat(osp.join(bfm_folder, '01_MorphableModel.mat'))\n    shapePC = original_BFM['shapePC']  # shape basis\n    shapeEV = original_BFM['shapeEV']  # corresponding eigen value\n    shapeMU = original_BFM['shapeMU']  # mean face\n    texPC = original_BFM['texPC']  # texture basis\n    texEV = original_BFM['texEV']  # eigen value\n    texMU = original_BFM['texMU']  # mean texture\n\n    expPC, expEV = LoadExpBasis()\n\n    # transfer BFM09 to our face model\n\n    idBase = shapePC*np.reshape(shapeEV, [-1, 199])\n    idBase = idBase/1e5  # unify the scale to decimeter\n    idBase = idBase[:, :80]  # use only first 80 basis\n\n    exBase = expPC*np.reshape(expEV, [-1, 79])\n    exBase = exBase/1e5  # unify the scale to decimeter\n    exBase = exBase[:, :64]  # use only first 64 basis\n\n    texBase = texPC*np.reshape(texEV, [-1, 199])\n    texBase = texBase[:, :80]  # use only first 80 basis\n\n    # our face model is cropped along face landmarks and contains only 35709 vertex.\n    # original BFM09 contains 53490 vertex, and 
expression basis provided by Guo et al. contains 53215 vertex.\n    # thus we select corresponding vertex to get our face model.\n\n    index_exp = loadmat(osp.join(bfm_folder, 'BFM_front_idx.mat'))\n    index_exp = index_exp['idx'].astype(np.int32) - 1  # starts from 0 (to 53215)\n\n    index_shape = loadmat(osp.join(bfm_folder, 'BFM_exp_idx.mat'))\n    index_shape = index_shape['trimIndex'].astype(\n        np.int32) - 1  # starts from 0 (to 53490)\n    index_shape = index_shape[index_exp]\n\n    idBase = np.reshape(idBase, [-1, 3, 80])\n    idBase = idBase[index_shape, :, :]\n    idBase = np.reshape(idBase, [-1, 80])\n\n    texBase = np.reshape(texBase, [-1, 3, 80])\n    texBase = texBase[index_shape, :, :]\n    texBase = np.reshape(texBase, [-1, 80])\n\n    exBase = np.reshape(exBase, [-1, 3, 64])\n    exBase = exBase[index_exp, :, :]\n    exBase = np.reshape(exBase, [-1, 64])\n\n    meanshape = np.reshape(shapeMU, [-1, 3])/1e5\n    meanshape = meanshape[index_shape, :]\n    meanshape = np.reshape(meanshape, [1, -1])\n\n    meantex = np.reshape(texMU, [-1, 3])\n    meantex = meantex[index_shape, :]\n    meantex = np.reshape(meantex, [1, -1])\n\n    # other info contains triangles, region used for computing photometric loss,\n    # region used for skin texture regularization, and 68 landmarks index etc.\n    other_info = loadmat(osp.join(bfm_folder, 'facemodel_info.mat'))\n    frontmask2_idx = other_info['frontmask2_idx']\n    skinmask = other_info['skinmask']\n    keypoints = other_info['keypoints']\n    point_buf = other_info['point_buf']\n    tri = other_info['tri']\n    tri_mask2 = other_info['tri_mask2']\n\n    # save our face model\n    savemat(osp.join(bfm_folder, 'BFM_model_front.mat'), {'meanshape': meanshape, 'meantex': meantex, 'idBase': idBase, 'exBase': exBase, 'texBase': texBase,\n            'tri': tri, 'point_buf': point_buf, 'tri_mask2': tri_mask2, 'keypoints': keypoints, 'frontmask2_idx': frontmask2_idx, 'skinmask': skinmask})\n\n\n# load 
landmarks for standard face, which is used for image preprocessing\ndef load_lm3d(bfm_folder):\n\n    Lm3D = loadmat(osp.join(bfm_folder, 'similarity_Lm3D_all.mat'))\n    Lm3D = Lm3D['lm']\n\n    # calculate 5 facial landmarks using 68 landmarks\n    lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1\n    Lm3D = np.stack([Lm3D[lm_idx[0], :], np.mean(Lm3D[lm_idx[[1, 2]], :], 0), np.mean(\n        Lm3D[lm_idx[[3, 4]], :], 0), Lm3D[lm_idx[5], :], Lm3D[lm_idx[6], :]], axis=0)\n    Lm3D = Lm3D[[1, 2, 0, 3, 4], :]\n\n    return Lm3D\n\n\nif __name__ == '__main__':\n    transferBFM09()"
  },
  {
    "path": "third_part/face3d/util/nvdiffrast.py",
    "content": "\"\"\"This script is the differentiable renderer for Deep3DFaceRecon_pytorch\n    Attention, antialiasing step is missing in current version.\n\"\"\"\n\nimport torch\nimport torch.nn.functional as F\nimport kornia\nfrom kornia.geometry.camera import pixel2cam\nimport numpy as np\nfrom typing import List\nimport nvdiffrast.torch as dr\nfrom scipy.io import loadmat\nfrom torch import nn\n\ndef ndc_projection(x=0.1, n=1.0, f=50.0):\n    return np.array([[n/x,    0,            0,              0],\n                     [  0, n/-x,            0,              0],\n                     [  0,    0, -(f+n)/(f-n), -(2*f*n)/(f-n)],\n                     [  0,    0,           -1,              0]]).astype(np.float32)\n\nclass MeshRenderer(nn.Module):\n    def __init__(self,\n                rasterize_fov,\n                znear=0.1,\n                zfar=10, \n                rasterize_size=224):\n        super(MeshRenderer, self).__init__()\n\n        x = np.tan(np.deg2rad(rasterize_fov * 0.5)) * znear\n        self.ndc_proj = torch.tensor(ndc_projection(x=x, n=znear, f=zfar)).matmul(\n                torch.diag(torch.tensor([1., -1, -1, 1])))\n        self.rasterize_size = rasterize_size\n        self.glctx = None\n    \n    def forward(self, vertex, tri, feat=None):\n        \"\"\"\n        Return:\n            mask               -- torch.tensor, size (B, 1, H, W)\n            depth              -- torch.tensor, size (B, 1, H, W)\n            features(optional) -- torch.tensor, size (B, C, H, W) if feat is not None\n\n        Parameters:\n            vertex          -- torch.tensor, size (B, N, 3)\n            tri             -- torch.tensor, size (B, M, 3) or (M, 3), triangles\n            feat(optional)  -- torch.tensor, size (B, C), features\n        \"\"\"\n        device = vertex.device\n        rsize = int(self.rasterize_size)\n        ndc_proj = self.ndc_proj.to(device)\n        # trans to homogeneous coordinates of 3d vertices, the direction of y is 
the same as v\n        if vertex.shape[-1] == 3:\n            vertex = torch.cat([vertex, torch.ones([*vertex.shape[:2], 1]).to(device)], dim=-1)\n            vertex[..., 1] = -vertex[..., 1] \n\n\n        vertex_ndc = vertex @ ndc_proj.t()\n        if self.glctx is None:\n            self.glctx = dr.RasterizeGLContext(device=device)\n            print(\"create glctx on device cuda:%d\"%device.index)\n        \n        ranges = None\n        if isinstance(tri, List) or len(tri.shape) == 3:\n            vum = vertex_ndc.shape[1]\n            fnum = torch.tensor([f.shape[0] for f in tri]).unsqueeze(1).to(device) \n            fstartidx = torch.cumsum(fnum, dim=0) - fnum \n            ranges = torch.cat([fstartidx, fnum], axis=1).type(torch.int32).cpu()\n            for i in range(tri.shape[0]):\n                tri[i] = tri[i] + i*vum\n            vertex_ndc = torch.cat(vertex_ndc, dim=0)\n            tri = torch.cat(tri, dim=0)\n\n        # for range_mode vertex: [B*N, 4], tri: [B*M, 3], for instance_mode vertex: [B, N, 4], tri: [M, 3]\n        tri = tri.type(torch.int32).contiguous()\n        rast_out, _ = dr.rasterize(self.glctx, vertex_ndc.contiguous(), tri, resolution=[rsize, rsize], ranges=ranges)\n\n        depth, _ = dr.interpolate(vertex.reshape([-1,4])[...,2].unsqueeze(1).contiguous(), rast_out, tri) \n        depth = depth.permute(0, 3, 1, 2)\n        mask = (rast_out[..., 3] > 0).float().unsqueeze(1)\n        depth = mask * depth\n        \n\n        image = None\n        if feat is not None:\n            image, _ = dr.interpolate(feat, rast_out, tri)\n            image = image.permute(0, 3, 1, 2)\n            image = mask * image\n        \n        return mask, depth, image\n\n"
  },
  {
    "path": "third_part/face3d/util/preprocess.py",
    "content": "\"\"\"\nThis script contains the image preprocessing code for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport numpy as np\nfrom scipy.io import loadmat\nfrom PIL import Image\nimport cv2\nimport os\nfrom skimage import transform as trans\nimport torch\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=np.VisibleDeprecationWarning) \nwarnings.filterwarnings(\"ignore\", category=FutureWarning) \n\n\n# calculating least square problem for image alignment\ndef POS(xp, x):\n    npts = xp.shape[1]\n\n    A = np.zeros([2*npts, 8])\n\n    A[0:2*npts-1:2, 0:3] = x.transpose()\n    A[0:2*npts-1:2, 3] = 1\n\n    A[1:2*npts:2, 4:7] = x.transpose()\n    A[1:2*npts:2, 7] = 1\n\n    b = np.reshape(xp.transpose(), [2*npts, 1])\n\n    k, _, _, _ = np.linalg.lstsq(A, b)\n\n    R1 = k[0:3]\n    R2 = k[4:7]\n    sTx = k[3]\n    sTy = k[7]\n    s = (np.linalg.norm(R1) + np.linalg.norm(R2))/2\n    t = np.stack([sTx, sTy], axis=0)\n\n    return t, s\n\n# bounding box for 68 landmark detection\ndef BBRegression(points, params):\n\n    w1 = params['W1']\n    b1 = params['B1']\n    w2 = params['W2']\n    b2 = params['B2']\n    data = points.copy()\n    data = data.reshape([5, 2])\n    data_mean = np.mean(data, axis=0)\n    x_mean = data_mean[0]\n    y_mean = data_mean[1]\n    data[:, 0] = data[:, 0] - x_mean\n    data[:, 1] = data[:, 1] - y_mean\n\n    rms = np.sqrt(np.sum(data ** 2)/5)\n    data = data / rms\n    data = data.reshape([1, 10])\n    data = np.transpose(data)\n    inputs = np.matmul(w1, data) + b1\n    inputs = 2 / (1 + np.exp(-2 * inputs)) - 1\n    inputs = np.matmul(w2, inputs) + b2\n    inputs = np.transpose(inputs)\n    x = inputs[:, 0] * rms + x_mean\n    y = inputs[:, 1] * rms + y_mean\n    w = 224/inputs[:, 2] * rms\n    rects = [x, y, w, w]\n    return np.array(rects).reshape([4])\n\n# utils for landmark detection\ndef img_padding(img, box):\n    success = True\n    bbox = box.copy()\n    res = np.zeros([2*img.shape[0], 2*img.shape[1], 3])\n    
res[img.shape[0] // 2: img.shape[0] + img.shape[0] //\n        2, img.shape[1] // 2: img.shape[1] + img.shape[1]//2] = img\n\n    bbox[0] = bbox[0] + img.shape[1] // 2\n    bbox[1] = bbox[1] + img.shape[0] // 2\n    if bbox[0] < 0 or bbox[1] < 0:\n        success = False\n    return res, bbox, success\n\n# utils for landmark detection\ndef crop(img, bbox):\n    padded_img, padded_bbox, flag = img_padding(img, bbox)\n    if flag:\n        crop_img = padded_img[padded_bbox[1]: padded_bbox[1] +\n                            padded_bbox[3], padded_bbox[0]: padded_bbox[0] + padded_bbox[2]]\n        crop_img = cv2.resize(crop_img.astype(np.uint8),\n                            (224, 224), interpolation=cv2.INTER_CUBIC)\n        scale = 224 / padded_bbox[3]\n        return crop_img, scale\n    else:\n        return padded_img, 0\n\n# utils for landmark detection\ndef scale_trans(img, lm, t, s):\n    imgw = img.shape[1]\n    imgh = img.shape[0]\n    M_s = np.array([[1, 0, -t[0] + imgw//2 + 0.5], [0, 1, -imgh//2 + t[1]]],\n                   dtype=np.float32)\n    img = cv2.warpAffine(img, M_s, (imgw, imgh))\n    w = int(imgw / s * 100)\n    h = int(imgh / s * 100)\n    img = cv2.resize(img, (w, h))\n    lm = np.stack([lm[:, 0] - t[0] + imgw // 2, lm[:, 1] -\n                   t[1] + imgh // 2], axis=1) / s * 100\n\n    left = w//2 - 112\n    up = h//2 - 112\n    bbox = [left, up, 224, 224]\n    cropped_img, scale2 = crop(img, bbox)\n    assert(scale2!=0)\n    t1 = np.array([bbox[0], bbox[1]])\n\n    # back to raw img s * crop + s * t1 + t2\n    t1 = np.array([w//2 - 112, h//2 - 112])\n    scale = s / 100\n    t2 = np.array([t[0] - imgw/2, t[1] - imgh / 2])\n    inv = (scale/scale2, scale * t1 + t2.reshape([2]))\n    return cropped_img, inv\n\n# utils for landmark detection\ndef align_for_lm(img, five_points):\n    five_points = np.array(five_points).reshape([1, 10])\n    params = loadmat('util/BBRegressorParam_r.mat')\n    bbox = BBRegression(five_points, params)\n    
assert(bbox[2] != 0)\n    bbox = np.round(bbox).astype(np.int32)\n    crop_img, scale = crop(img, bbox)\n    return crop_img, scale, bbox\n\n\n# resize and crop images for face reconstruction\ndef resize_n_crop_img(img, lm, t, s, target_size=224., mask=None):\n    w0, h0 = img.size\n    w = (w0*s).astype(np.int32)\n    h = (h0*s).astype(np.int32)\n    left = (w/2 - target_size/2 + float((t[0] - w0/2)*s)).astype(np.int32)\n    right = left + target_size\n    up = (h/2 - target_size/2 + float((h0/2 - t[1])*s)).astype(np.int32)\n    below = up + target_size\n\n    img = img.resize((w, h), resample=Image.BICUBIC)\n    img = img.crop((left, up, right, below))\n\n    if mask is not None:\n        mask = mask.resize((w, h), resample=Image.BICUBIC)\n        mask = mask.crop((left, up, right, below))\n\n    lm = np.stack([lm[:, 0] - t[0] + w0/2, lm[:, 1] -\n                  t[1] + h0/2], axis=1)*s\n    lm = lm - np.reshape(\n            np.array([(w/2 - target_size/2), (h/2-target_size/2)]), [1, 2])\n\n    return img, lm, mask\n\n# utils for face reconstruction\ndef extract_5p(lm):\n    lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1\n    lm5p = np.stack([lm[lm_idx[0], :], np.mean(lm[lm_idx[[1, 2]], :], 0), np.mean(\n        lm[lm_idx[[3, 4]], :], 0), lm[lm_idx[5], :], lm[lm_idx[6], :]], axis=0)\n    lm5p = lm5p[[1, 2, 0, 3, 4], :]\n    return lm5p\n\n# utils for face reconstruction\ndef align_img(img, lm, lm3D, mask=None, target_size=224., rescale_factor=102.):\n    \"\"\"\n    Return:\n        transparams        --numpy.array  (raw_W, raw_H, scale, tx, ty)\n        img_new            --PIL.Image  (target_size, target_size, 3)\n        lm_new             --numpy.array  (68, 2), y direction is opposite to v direction\n        mask_new           --PIL.Image  (target_size, target_size)\n    \n    Parameters:\n        img                --PIL.Image  (raw_H, raw_W, 3)\n        lm                 --numpy.array  (68, 2), y direction is opposite to v direction\n        lm3D   
            --numpy.array  (5, 3)\n        mask               --PIL.Image  (raw_H, raw_W, 3)\n    \"\"\"\n\n    w0, h0 = img.size\n    if lm.shape[0] != 5:\n        lm5p = extract_5p(lm)\n    else:\n        lm5p = lm\n\n    # calculate translation and scale factors using 5 facial landmarks and standard landmarks of a 3D face\n    t, s = POS(lm5p.transpose(), lm3D.transpose())\n    s = rescale_factor/s\n\n    # processing the image\n    img_new, lm_new, mask_new = resize_n_crop_img(img, lm, t, s, target_size=target_size, mask=mask)\n    trans_params = np.array([w0, h0, s, t[0], t[1]])\n\n    return trans_params, img_new, lm_new, mask_new\n\n# utils for face recognition model\ndef estimate_norm(lm_68p, H):\n    # from https://github.com/deepinsight/insightface/blob/c61d3cd208a603dfa4a338bd743b320ce3e94730/recognition/common/face_align.py#L68\n    \"\"\"\n    Return:\n        trans_m            --numpy.array  (2, 3)\n    Parameters:\n        lm                 --numpy.array  (68, 2), y direction is opposite to v direction\n        H                  --int/float , image height\n    \"\"\"\n    lm = extract_5p(lm_68p)\n    lm[:, -1] = H - 1 - lm[:, -1]\n    tform = trans.SimilarityTransform()\n    src = np.array(\n    [[38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],\n     [41.5493, 92.3655], [70.7299, 92.2041]],\n    dtype=np.float32)\n    tform.estimate(lm, src)\n    M = tform.params\n    if np.linalg.det(M) == 0:\n        M = np.eye(3)\n\n    return M[0:2, :]\n\ndef estimate_norm_torch(lm_68p, H):\n    lm_68p_ = lm_68p.detach().cpu().numpy()\n    M = []\n    for i in range(lm_68p_.shape[0]):\n        M.append(estimate_norm(lm_68p_[i], H))\n    M = torch.tensor(np.array(M), dtype=torch.float32).to(lm_68p.device)\n    return M\n"
  },
  {
    "path": "third_part/face3d/util/skin_mask.py",
    "content": "\"\"\"This script is to generate skin attention mask for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport math\nimport numpy as np\nimport os\nimport cv2\n\nclass GMM:\n    def __init__(self, dim, num, w, mu, cov, cov_det, cov_inv):\n        self.dim = dim # feature dimension\n        self.num = num # number of Gaussian components\n        self.w = w # weights of Gaussian components (a list of scalars)\n        self.mu = mu # mean of Gaussian components (a list of 1xdim vectors)\n        self.cov = cov # covariance matrix of Gaussian components (a list of dimxdim matrices)\n        self.cov_det = cov_det # pre-computed determinant of covariance matrices (a list of scalars)\n        self.cov_inv = cov_inv # pre-computed inverse covariance matrices (a list of dimxdim matrices)\n\n        self.factor = [0]*num\n        for i in range(self.num):\n            self.factor[i] = (2*math.pi)**(self.dim/2) * self.cov_det[i]**0.5\n        \n    def likelihood(self, data):\n        assert(data.shape[1] == self.dim)\n        N = data.shape[0]\n        lh = np.zeros(N)\n\n        for i in range(self.num):\n            data_ = data - self.mu[i]\n\n            tmp = np.matmul(data_,self.cov_inv[i]) * data_\n            tmp = np.sum(tmp,axis=1)\n            power = -0.5 * tmp\n\n            p = np.array([math.exp(power[j]) for j in range(N)])\n            p = p/self.factor[i]\n            lh += p*self.w[i]\n        \n        return lh\n\n\ndef _rgb2ycbcr(rgb):\n    m = np.array([[65.481, 128.553, 24.966],\n                  [-37.797, -74.203, 112],\n                  [112, -93.786, -18.214]])\n    shape = rgb.shape\n    rgb = rgb.reshape((shape[0] * shape[1], 3))\n    ycbcr = np.dot(rgb, m.transpose() / 255.)\n    ycbcr[:, 0] += 16.\n    ycbcr[:, 1:] += 128.\n    return ycbcr.reshape(shape)\n\n\ndef _bgr2ycbcr(bgr):\n    rgb = bgr[..., ::-1]\n    return _rgb2ycbcr(rgb)\n\n\ngmm_skin_w = [0.24063933, 0.16365987, 0.26034665, 0.33535415]\ngmm_skin_mu = [np.array([113.71862, 
103.39613, 164.08226]),\n                np.array([150.19858, 105.18467, 155.51428]),\n                np.array([183.92976, 107.62468, 152.71820]),\n                np.array([114.90524, 113.59782, 151.38217])]\ngmm_skin_cov_det = [5692842.5, 5851930.5, 2329131., 1585971.]\ngmm_skin_cov_inv = [np.array([[0.0019472069, 0.0020450759, -0.00060243998],[0.0020450759, 0.017700525, 0.0051420014],[-0.00060243998, 0.0051420014, 0.0081308950]]),\n                    np.array([[0.0027110141, 0.0011036990, 0.0023122299],[0.0011036990, 0.010707724, 0.010742856],[0.0023122299, 0.010742856, 0.017481629]]),\n                    np.array([[0.0048026871, 0.00022935172, 0.0077668377],[0.00022935172, 0.011729696, 0.0081661865],[0.0077668377, 0.0081661865, 0.025374353]]),\n                    np.array([[0.0011989699, 0.0022453172, -0.0010748957],[0.0022453172, 0.047758564, 0.020332102],[-0.0010748957, 0.020332102, 0.024502251]])]\n\ngmm_skin = GMM(3, 4, gmm_skin_w, gmm_skin_mu, [], gmm_skin_cov_det, gmm_skin_cov_inv)\n\ngmm_nonskin_w = [0.12791070, 0.31130761, 0.34245777, 0.21832393]\ngmm_nonskin_mu = [np.array([99.200851, 112.07533, 140.20602]),\n                    np.array([110.91392, 125.52969, 130.19237]),\n                    np.array([129.75864, 129.96107, 126.96808]),\n                    np.array([112.29587, 128.85121, 129.05431])]\ngmm_nonskin_cov_det = [458703648., 6466488., 90611376., 133097.63]\ngmm_nonskin_cov_inv = [np.array([[0.00085371657, 0.00071197288, 0.00023958916],[0.00071197288, 0.0025935620, 0.00076557708],[0.00023958916, 0.00076557708, 0.0015042332]]),\n                    np.array([[0.00024650150, 0.00045542428, 0.00015019422],[0.00045542428, 0.026412144, 0.018419769],[0.00015019422, 0.018419769, 0.037497383]]),\n                    np.array([[0.00037054974, 0.00038146760, 0.00040408765],[0.00038146760, 0.0085505722, 0.0079136286],[0.00040408765, 0.0079136286, 0.010982352]]),\n                    np.array([[0.00013709733, 0.00051228428, 
0.00012777430],[0.00051228428, 0.28237113, 0.10528370],[0.00012777430, 0.10528370, 0.23468947]])]\n\ngmm_nonskin = GMM(3, 4, gmm_nonskin_w, gmm_nonskin_mu, [], gmm_nonskin_cov_det, gmm_nonskin_cov_inv)\n\nprior_skin = 0.8\nprior_nonskin = 1 - prior_skin\n\n\n# calculate skin attention mask\ndef skinmask(imbgr):\n    im = _bgr2ycbcr(imbgr)\n\n    data = im.reshape((-1,3))\n\n    lh_skin = gmm_skin.likelihood(data)\n    lh_nonskin = gmm_nonskin.likelihood(data)\n\n    tmp1 = prior_skin * lh_skin\n    tmp2 = prior_nonskin * lh_nonskin\n    post_skin = tmp1 / (tmp1+tmp2) # posterior probability\n\n    post_skin = post_skin.reshape((im.shape[0],im.shape[1]))\n\n    post_skin = np.round(post_skin*255)\n    post_skin = post_skin.astype(np.uint8)\n    post_skin = np.tile(np.expand_dims(post_skin,2),[1,1,3]) # reshape to H*W*3\n\n    return post_skin\n\n\ndef get_skin_mask(img_path):\n    print('generating skin masks......')\n    names = [i for i in sorted(os.listdir(\n        img_path)) if 'jpg' in i or 'png' in i or 'jpeg' in i or 'PNG' in i]\n    save_path = os.path.join(img_path, 'mask')\n    if not os.path.isdir(save_path):\n        os.makedirs(save_path)\n    \n    for i in range(0, len(names)):\n        name = names[i]\n        print('%05d' % (i), ' ', name)\n        full_image_name = os.path.join(img_path, name)\n        img = cv2.imread(full_image_name).astype(np.float32)\n        skin_img = skinmask(img)\n        cv2.imwrite(os.path.join(save_path, name), skin_img.astype(np.uint8))\n"
  },
  {
    "path": "third_part/face3d/util/test_mean_face.txt",
    "content": "-5.228591537475585938e+01\n2.078247070312500000e-01\n-5.064269638061523438e+01\n-1.315765380859375000e+01\n-4.952939224243164062e+01\n-2.592591094970703125e+01\n-4.793047332763671875e+01\n-3.832135772705078125e+01\n-4.512159729003906250e+01\n-5.059623336791992188e+01\n-3.917720794677734375e+01\n-6.043736648559570312e+01\n-2.929953765869140625e+01\n-6.861183166503906250e+01\n-1.719801330566406250e+01\n-7.572736358642578125e+01\n-1.961936950683593750e+00\n-7.862001037597656250e+01\n1.467941284179687500e+01\n-7.607844543457031250e+01\n2.744073486328125000e+01\n-6.915261840820312500e+01\n3.855677795410156250e+01\n-5.950350570678710938e+01\n4.478240966796875000e+01\n-4.867547225952148438e+01\n4.714337158203125000e+01\n-3.800830078125000000e+01\n4.940315246582031250e+01\n-2.496297454833984375e+01\n5.117234802246093750e+01\n-1.241538238525390625e+01\n5.190507507324218750e+01\n8.244247436523437500e-01\n-4.150688934326171875e+01\n2.386329650878906250e+01\n-3.570307159423828125e+01\n3.017010498046875000e+01\n-2.790358734130859375e+01\n3.212951660156250000e+01\n-1.941773223876953125e+01\n3.156523132324218750e+01\n-1.138106536865234375e+01\n2.841992187500000000e+01\n5.993263244628906250e+00\n2.895182800292968750e+01\n1.343590545654296875e+01\n3.189880371093750000e+01\n2.203153991699218750e+01\n3.302221679687500000e+01\n2.992478942871093750e+01\n3.099150085449218750e+01\n3.628388977050781250e+01\n2.765748596191406250e+01\n-1.933914184570312500e+00\n1.405374145507812500e+01\n-2.153038024902343750e+00\n5.772636413574218750e+00\n-2.270050048828125000e+00\n-2.121643066406250000e+00\n-2.218330383300781250e+00\n-1.068978118896484375e+01\n-1.187252044677734375e+01\n-1.997912597656250000e+01\n-6.879402160644531250e+00\n-2.143579864501953125e+01\n-1.227821350097656250e+00\n-2.193494415283203125e+01\n4.623237609863281250e+00\n-2.152721405029296875e+01\n9.721397399902343750e+00\n-1.953671264648437500e+01\n-3.648714447021484375e+01\n9.811126708984375000e+00\n-3.1302429199218
75000e+01\n1.422447967529296875e+01\n-2.212834930419921875e+01\n1.493019866943359375e+01\n-1.500880432128906250e+01\n1.073588562011718750e+01\n-2.095037078857421875e+01\n9.054298400878906250e+00\n-3.050099182128906250e+01\n8.704177856445312500e+00\n1.173237609863281250e+01\n1.054329681396484375e+01\n1.856353759765625000e+01\n1.535009765625000000e+01\n2.893331909179687500e+01\n1.451992797851562500e+01\n3.452944946289062500e+01\n1.065280151367187500e+01\n2.875990295410156250e+01\n8.654792785644531250e+00\n1.942100524902343750e+01\n9.422447204589843750e+00\n-2.204488372802734375e+01\n-3.983994293212890625e+01\n-1.324458312988281250e+01\n-3.467377471923828125e+01\n-6.749649047851562500e+00\n-3.092894744873046875e+01\n-9.183349609375000000e-01\n-3.196458435058593750e+01\n4.220649719238281250e+00\n-3.090406036376953125e+01\n1.089889526367187500e+01\n-3.497008514404296875e+01\n1.874589538574218750e+01\n-4.065438079833984375e+01\n1.124106597900390625e+01\n-4.438417816162109375e+01\n5.181709289550781250e+00\n-4.649170684814453125e+01\n-1.158607482910156250e+00\n-4.680406951904296875e+01\n-7.918922424316406250e+00\n-4.671575164794921875e+01\n-1.452505493164062500e+01\n-4.416526031494140625e+01\n-2.005007171630859375e+01\n-3.997841644287109375e+01\n-1.054919433593750000e+01\n-3.849683380126953125e+01\n-1.051826477050781250e+00\n-3.794863128662109375e+01\n6.412681579589843750e+00\n-3.804645538330078125e+01\n1.627674865722656250e+01\n-4.039697265625000000e+01\n6.373878479003906250e+00\n-4.087213897705078125e+01\n-8.551712036132812500e-01\n-4.157129669189453125e+01\n-1.014953613281250000e+01\n-4.128469085693359375e+01\n"
  },
  {
    "path": "third_part/face3d/util/util.py",
    "content": "\"\"\"This script contains basic utilities for Deep3DFaceRecon_pytorch\n\"\"\"\nfrom __future__ import print_function\nimport numpy as np\nimport torch\nfrom PIL import Image\nimport os\nimport importlib\nimport argparse\nfrom argparse import Namespace\nimport torchvision\n\n\ndef str2bool(v):\n    if isinstance(v, bool):\n        return v\n    if v.lower() in ('yes', 'true', 't', 'y', '1'):\n        return True\n    elif v.lower() in ('no', 'false', 'f', 'n', '0'):\n        return False\n    else:\n        raise argparse.ArgumentTypeError('Boolean value expected.')\n\n\ndef copyconf(default_opt, **kwargs):\n    conf = Namespace(**vars(default_opt))\n    for key in kwargs:\n        setattr(conf, key, kwargs[key])\n    return conf\n\ndef genvalconf(train_opt, **kwargs):\n    conf = Namespace(**vars(train_opt))\n    attr_dict = train_opt.__dict__\n    for key, value in attr_dict.items():\n        if 'val' in key and key.split('_')[0] in attr_dict:\n            setattr(conf, key.split('_')[0], value)\n\n    for key in kwargs:\n        setattr(conf, key, kwargs[key])\n\n    return conf\n        \ndef find_class_in_module(target_cls_name, module):\n    target_cls_name = target_cls_name.replace('_', '').lower()\n    clslib = importlib.import_module(module)\n    cls = None\n    for name, clsobj in clslib.__dict__.items():\n        if name.lower() == target_cls_name:\n            cls = clsobj\n\n    assert cls is not None, \"In %s, there should be a class whose name matches %s in lowercase without underscore(_)\" % (module, target_cls_name)\n\n    return cls\n\n\ndef tensor2im(input_image, imtype=np.uint8):\n    \"\"\"Converts a Tensor array into a numpy image array.\n\n    Parameters:\n        input_image (tensor) --  the input image tensor array, range(0, 1)\n        imtype (type)        --  the desired type of the converted numpy array\n    \"\"\"\n    if not isinstance(input_image, np.ndarray):\n        if isinstance(input_image, torch.Tensor):  # get 
the data from a variable\n            image_tensor = input_image.data\n        else:\n            return input_image\n        image_numpy = image_tensor.clamp(0.0, 1.0).cpu().float().numpy()  # convert it into a numpy array\n        if image_numpy.shape[0] == 1:  # grayscale to RGB\n            image_numpy = np.tile(image_numpy, (3, 1, 1))\n        image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0  # post-processing: transpose and scaling\n    else:  # if it is a numpy array, do nothing\n        image_numpy = input_image\n    return image_numpy.astype(imtype)\n\n\ndef diagnose_network(net, name='network'):\n    \"\"\"Calculate and print the mean of the average absolute gradients\n\n    Parameters:\n        net (torch network) -- Torch network\n        name (str) -- the name of the network\n    \"\"\"\n    mean = 0.0\n    count = 0\n    for param in net.parameters():\n        if param.grad is not None:\n            mean += torch.mean(torch.abs(param.grad.data))\n            count += 1\n    if count > 0:\n        mean = mean / count\n    print(name)\n    print(mean)\n\n\ndef save_image(image_numpy, image_path, aspect_ratio=1.0):\n    \"\"\"Save a numpy image to the disk\n\n    Parameters:\n        image_numpy (numpy array) -- input numpy array\n        image_path (str)          -- the path of the image\n    \"\"\"\n\n    image_pil = Image.fromarray(image_numpy)\n    h, w, _ = image_numpy.shape\n\n    if aspect_ratio is None:\n        pass\n    elif aspect_ratio > 1.0:\n        image_pil = image_pil.resize((h, int(w * aspect_ratio)), Image.BICUBIC)\n    elif aspect_ratio < 1.0:\n        image_pil = image_pil.resize((int(h / aspect_ratio), w), Image.BICUBIC)\n    image_pil.save(image_path)\n\n\ndef print_numpy(x, val=True, shp=False):\n    \"\"\"Print the mean, min, max, median, std, and size of a numpy array\n\n    Parameters:\n        val (bool) -- whether to print the values of the numpy array\n        shp (bool) -- whether to print the shape of the numpy array\n    \"\"\"\n  
  x = x.astype(np.float64)\n    if shp:\n        print('shape,', x.shape)\n    if val:\n        x = x.flatten()\n        print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % (\n            np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x)))\n\n\ndef mkdirs(paths):\n    \"\"\"create empty directories if they don't exist\n\n    Parameters:\n        paths (str list) -- a list of directory paths\n    \"\"\"\n    if isinstance(paths, list) and not isinstance(paths, str):\n        for path in paths:\n            mkdir(path)\n    else:\n        mkdir(paths)\n\n\ndef mkdir(path):\n    \"\"\"create a single empty directory if it didn't exist\n\n    Parameters:\n        path (str) -- a single directory path\n    \"\"\"\n    if not os.path.exists(path):\n        os.makedirs(path)\n\n\ndef correct_resize_label(t, size):\n    device = t.device\n    t = t.detach().cpu()\n    resized = []\n    for i in range(t.size(0)):\n        one_t = t[i, :1]\n        one_np = np.transpose(one_t.numpy().astype(np.uint8), (1, 2, 0))\n        one_np = one_np[:, :, 0]\n        one_image = Image.fromarray(one_np).resize(size, Image.NEAREST)\n        resized_t = torch.from_numpy(np.array(one_image)).long()\n        resized.append(resized_t)\n    return torch.stack(resized, dim=0).to(device)\n\n\ndef correct_resize(t, size, mode=Image.BICUBIC):\n    device = t.device\n    t = t.detach().cpu()\n    resized = []\n    for i in range(t.size(0)):\n        one_t = t[i:i + 1]\n        one_image = Image.fromarray(tensor2im(one_t)).resize(size, Image.BICUBIC)\n        resized_t = torchvision.transforms.functional.to_tensor(one_image) * 2 - 1.0\n        resized.append(resized_t)\n    return torch.stack(resized, dim=0).to(device)\n\ndef draw_landmarks(img, landmark, color='r', step=2):\n    \"\"\"\n    Return:\n        img              -- numpy.array, (B, H, W, 3) img with landmark, RGB order, range (0, 255)\n        \n\n    Parameters:\n        img              -- numpy.array, 
(B, H, W, 3), RGB order, range (0, 255)\n        landmark         -- numpy.array, (B, 68, 2), y direction is opposite to v direction\n        color            -- str, 'r' or 'b' (red or blue)\n    \"\"\"\n    if color =='r':\n        c = np.array([255., 0, 0])\n    else:\n        c = np.array([0, 0, 255.])\n\n    _, H, W, _ = img.shape\n    img, landmark = img.copy(), landmark.copy()\n    landmark[..., 1] = H - 1 - landmark[..., 1]\n    landmark = np.round(landmark).astype(np.int32)\n    for i in range(landmark.shape[1]):\n        x, y = landmark[:, i, 0], landmark[:, i, 1]\n        for j in range(-step, step):\n            for k in range(-step, step):\n                u = np.clip(x + j, 0, W - 1)\n                v = np.clip(y + k, 0, H - 1)\n                for m in range(landmark.shape[0]):\n                    img[m, v[m], u[m]] = c\n    return img\n"
  },
  {
    "path": "third_part/face3d/util/visualizer.py",
    "content": "\"\"\"This script defines the visualizer for Deep3DFaceRecon_pytorch\n\"\"\"\n\nimport numpy as np\nimport os\nimport sys\nimport ntpath\nimport time\nfrom . import util, html\nfrom subprocess import Popen, PIPE\nfrom torch.utils.tensorboard import SummaryWriter\n\ndef save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256):\n    \"\"\"Save images to the disk.\n\n    Parameters:\n        webpage (the HTML class) -- the HTML webpage class that stores these images (see html.py for more details)\n        visuals (OrderedDict)    -- an ordered dictionary that stores (name, images (either tensor or numpy) ) pairs\n        image_path (str)         -- the string is used to create image paths\n        aspect_ratio (float)     -- the aspect ratio of saved images\n        width (int)              -- the images will be resized to width x width\n\n    This function will save images stored in 'visuals' to the HTML file specified by 'webpage'.\n    \"\"\"\n    image_dir = webpage.get_image_dir()\n    short_path = ntpath.basename(image_path[0])\n    name = os.path.splitext(short_path)[0]\n\n    webpage.add_header(name)\n    ims, txts, links = [], [], []\n\n    for label, im_data in visuals.items():\n        im = util.tensor2im(im_data)\n        image_name = '%s/%s.png' % (label, name)\n        os.makedirs(os.path.join(image_dir, label), exist_ok=True)\n        save_path = os.path.join(image_dir, image_name)\n        util.save_image(im, save_path, aspect_ratio=aspect_ratio)\n        ims.append(image_name)\n        txts.append(label)\n        links.append(image_name)\n    webpage.add_images(ims, txts, links, width=width)\n\n\nclass Visualizer():\n    \"\"\"This class includes several functions that can display/save images and print/save logging information.\n\n    It uses the Python library tensorboard for display, and the Python library 'dominate' (wrapped in 'HTML') for creating HTML files with images.\n    \"\"\"\n\n    def __init__(self, opt):\n      
  \"\"\"Initialize the Visualizer class\n\n        Parameters:\n            opt -- stores all the experiment flags; needs to be a subclass of BaseOptions\n        Step 1: Cache the training/test options\n        Step 2: create a tensorboard writer\n        Step 3: create an HTML object for saving HTML files\n        Step 4: create a logging file to store training losses\n        \"\"\"\n        self.opt = opt  # cache the option\n        self.use_html = opt.isTrain and not opt.no_html\n        self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, 'logs', opt.name))\n        self.win_size = opt.display_winsize\n        self.name = opt.name\n        self.saved = False\n        if self.use_html:  # create an HTML object at <checkpoints_dir>/web/; images will be saved under <checkpoints_dir>/web/images/\n            self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web')\n            self.img_dir = os.path.join(self.web_dir, 'images')\n            print('create web directory %s...' 
% self.web_dir)\n            util.mkdirs([self.web_dir, self.img_dir])\n        # create a logging file to store training losses\n        self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt')\n        with open(self.log_name, \"a\") as log_file:\n            now = time.strftime(\"%c\")\n            log_file.write('================ Training Loss (%s) ================\\n' % now)\n\n    def reset(self):\n        \"\"\"Reset the self.saved status\"\"\"\n        self.saved = False\n\n\n    def display_current_results(self, visuals, total_iters, epoch, save_result):\n        \"\"\"Display current results on tensorboard; save current results to an HTML file.\n\n        Parameters:\n            visuals (OrderedDict) -- dictionary of images to display or save\n            total_iters (int) -- total iterations\n            epoch (int) -- the current epoch\n            save_result (bool) -- whether to save the current results to an HTML file\n        \"\"\"\n        for label, image in visuals.items():\n            self.writer.add_image(label, util.tensor2im(image), total_iters, dataformats='HWC')\n\n        if self.use_html and (save_result or not self.saved):  # save images to an HTML file if they haven't been saved.\n            self.saved = True\n            # save images to the disk\n            for label, image in visuals.items():\n                image_numpy = util.tensor2im(image)\n                img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label))\n                util.save_image(image_numpy, img_path)\n\n            # update website\n            webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=0)\n            for n in range(epoch, 0, -1):\n                webpage.add_header('epoch [%d]' % n)\n                ims, txts, links = [], [], []\n\n                for label, image_numpy in visuals.items():\n                    image_numpy = util.tensor2im(image_numpy)\n                    img_path = 
'epoch%.3d_%s.png' % (n, label)\n                    ims.append(img_path)\n                    txts.append(label)\n                    links.append(img_path)\n                webpage.add_images(ims, txts, links, width=self.win_size)\n            webpage.save()\n\n    def plot_current_losses(self, total_iters, losses):\n        # G_loss_collection = {}\n        # D_loss_collection = {}\n        # for name, value in losses.items():\n        #     if 'G' in name or 'NCE' in name or 'idt' in name:\n        #         G_loss_collection[name] = value\n        #     else:\n        #         D_loss_collection[name] = value\n        # self.writer.add_scalars('G_collec', G_loss_collection, total_iters)\n        # self.writer.add_scalars('D_collec', D_loss_collection, total_iters)\n        for name, value in losses.items():\n            self.writer.add_scalar(name, value, total_iters)\n\n    # losses: same format as |losses| of plot_current_losses\n    def print_current_losses(self, epoch, iters, losses, t_comp, t_data):\n        \"\"\"print current losses on console; also save the losses to the disk\n\n        Parameters:\n            epoch (int) -- current epoch\n            iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)\n            losses (OrderedDict) -- training losses stored in the format of (name, float) pairs\n            t_comp (float) -- computational time per data point (normalized by batch_size)\n            t_data (float) -- data loading time per data point (normalized by batch_size)\n        \"\"\"\n        message = '(epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data)\n        for k, v in losses.items():\n            message += '%s: %.3f ' % (k, v)\n\n        print(message)  # print the message\n        with open(self.log_name, \"a\") as log_file:\n            log_file.write('%s\\n' % message)  # save the message\n\n\nclass MyVisualizer:\n    def __init__(self, opt):\n        
\"\"\"Initialize the Visualizer class\n\n        Parameters:\n            opt -- stores all the experiment flags; needs to be a subclass of BaseOptions\n        Step 1: Cache the training/test options\n        Step 2: create a tensorboard writer\n        Step 3: create an HTML object for saving HTML files\n        Step 4: create a logging file to store training losses\n        \"\"\"\n        self.opt = opt  # cache the option\n        self.name = opt.name\n        self.img_dir = os.path.join(opt.checkpoints_dir, opt.name, 'results')\n        \n        if opt.phase != 'test':\n            self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, opt.name, 'logs'))\n            # create a logging file to store training losses\n            self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt')\n            with open(self.log_name, \"a\") as log_file:\n                now = time.strftime(\"%c\")\n                log_file.write('================ Training Loss (%s) ================\\n' % now)\n\n\n    def display_current_results(self, visuals, total_iters, epoch, dataset='train', save_results=False, count=0, name=None,\n            add_image=True):\n        \"\"\"Display current results on tensorboard; save current results to an HTML file.\n\n        Parameters:\n            visuals (OrderedDict) -- dictionary of images to display or save\n            total_iters (int) -- total iterations\n            epoch (int) -- the current epoch\n            dataset (str) -- 'train' or 'val' or 'test'\n        \"\"\"\n        # if (not add_image) and (not save_results): return\n        \n        for label, image in visuals.items():\n            for i in range(image.shape[0]):\n                image_numpy = util.tensor2im(image[i])\n                if add_image:\n                    self.writer.add_image(label + '%s_%02d'%(dataset, i + count),\n                            image_numpy, total_iters, dataformats='HWC')\n\n                if save_results:\n  
                  save_path = os.path.join(self.img_dir, dataset, 'epoch_%s_%06d'%(epoch, total_iters))\n                    if not os.path.isdir(save_path):\n                        os.makedirs(save_path)\n\n                    if name is not None:\n                        img_path = os.path.join(save_path, '%s.png' % name)\n                    else:\n                        img_path = os.path.join(save_path, '%s_%03d.png' % (label, i + count))\n                    util.save_image(image_numpy, img_path)\n\n\n    def plot_current_losses(self, total_iters, losses, dataset='train'):\n        for name, value in losses.items():\n            self.writer.add_scalar(name + '/%s'%dataset, value, total_iters)\n\n    # losses: same format as |losses| of plot_current_losses\n    def print_current_losses(self, epoch, iters, losses, t_comp, t_data, dataset='train'):\n        \"\"\"print current losses on console; also save the losses to the disk\n\n        Parameters:\n            epoch (int) -- current epoch\n            iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)\n            losses (OrderedDict) -- training losses stored in the format of (name, float) pairs\n            t_comp (float) -- computational time per data point (normalized by batch_size)\n            t_data (float) -- data loading time per data point (normalized by batch_size)\n        \"\"\"\n        message = '(dataset: %s, epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (\n            dataset, epoch, iters, t_comp, t_data)\n        for k, v in losses.items():\n            message += '%s: %.3f ' % (k, v)\n\n        print(message)  # print the message\n        with open(self.log_name, \"a\") as log_file:\n            log_file.write('%s\\n' % message)  # save the message\n"
  },
  {
    "path": "third_part/face_detection/README.md",
    "content": "The code for Face Detection in this folder has been taken from the wonderful [face_alignment](https://github.com/1adrianb/face-alignment) repository. This has been modified to take batches of faces at a time. "
  },
  {
    "path": "third_part/face_detection/__init__.py",
    "content": "# -*- coding: utf-8 -*-\n\n__author__ = \"\"\"Adrian Bulat\"\"\"\n__email__ = 'adrian.bulat@nottingham.ac.uk'\n__version__ = '1.0.1'\n\nfrom .api import FaceAlignment, LandmarksType, NetworkSize\n"
  },
  {
    "path": "third_part/face_detection/api.py",
    "content": "from __future__ import print_function\nimport os\nimport torch\nfrom torch.utils.model_zoo import load_url\nfrom enum import Enum\nimport numpy as np\nimport cv2\ntry:\n    import urllib.request as request_file\nexcept BaseException:\n    import urllib as request_file\n\nfrom .models import FAN, ResNetDepth\nfrom .utils import *\n\n\nclass LandmarksType(Enum):\n    \"\"\"Enum class defining the type of landmarks to detect.\n\n    ``_2D`` - the detected points ``(x,y)`` are detected in a 2D space and follow the visible contour of the face\n    ``_2halfD`` - these points represent the projection of the 3D points into a 2D space\n    ``_3D`` - detect the points ``(x,y,z)`` in a 3D space\n\n    \"\"\"\n    _2D = 1\n    _2halfD = 2\n    _3D = 3\n\n\nclass NetworkSize(Enum):\n    # TINY = 1\n    # SMALL = 2\n    # MEDIUM = 3\n    LARGE = 4\n\n    def __new__(cls, value):\n        member = object.__new__(cls)\n        member._value_ = value\n        return member\n\n    def __int__(self):\n        return self.value\n\nROOT = os.path.dirname(os.path.abspath(__file__))\n\nclass FaceAlignment:\n    def __init__(self, landmarks_type, network_size=NetworkSize.LARGE,\n                 device='cuda', flip_input=False, face_detector='sfd', verbose=False):\n        self.device = device\n        self.flip_input = flip_input\n        self.landmarks_type = landmarks_type\n        self.verbose = verbose\n\n        network_size = int(network_size)\n\n        if 'cuda' in device:\n            torch.backends.cudnn.benchmark = True\n\n        # Get the face detector\n        face_detector_module = __import__('face_detection.detection.' 
+ face_detector,\n                                          globals(), locals(), [face_detector], 0)\n        self.face_detector = face_detector_module.FaceDetector(device=device, verbose=verbose)\n\n    def get_detections_for_batch(self, images):\n        images = images[..., ::-1]\n        detected_faces = self.face_detector.detect_from_batch(images.copy())\n        results = []\n\n        for i, d in enumerate(detected_faces):\n            if len(d) == 0:\n                results.append(None)\n                continue\n            d = d[0]\n            d = np.clip(d, 0, None)\n            \n            x1, y1, x2, y2 = map(int, d[:-1])\n            results.append((x1, y1, x2, y2))\n\n        return results"
  },
  {
    "path": "third_part/face_detection/detection/__init__.py",
    "content": "from .core import FaceDetector"
  },
  {
    "path": "third_part/face_detection/detection/core.py",
    "content": "import logging\nimport glob\nfrom tqdm import tqdm\nimport numpy as np\nimport torch\nimport cv2\n\n\nclass FaceDetector(object):\n    \"\"\"An abstract class representing a face detector.\n\n    Any other face detection implementation must subclass it. All subclasses\n    must implement ``detect_from_image``, which returns a list of detected\n    bounding boxes. Optionally, detecting directly from a path may be\n    implemented for speed.\n    \"\"\"\n\n    def __init__(self, device, verbose):\n        self.device = device\n        self.verbose = verbose\n\n        if verbose:\n            logger = logging.getLogger(__name__)\n            if 'cpu' in device:\n                logger.warning(\"Detection running on CPU; this may be slow.\")\n\n        if 'cpu' not in device and 'cuda' not in device:\n            if verbose:\n                logger.error(\"Expected values for device are: {cpu, cuda} but got: %s\", device)\n            raise ValueError\n\n    def detect_from_image(self, tensor_or_path):\n        \"\"\"Detects faces in a given image.\n\n        This function detects the faces present in a provided BGR (usually)\n        image. The input can be either the image itself or the path to it.\n\n        Arguments:\n            tensor_or_path {numpy.ndarray, torch.tensor or string} -- the path\n            to an image or the image itself.\n\n        Example::\n\n            >>> path_to_image = 'data/image_01.jpg'\n            ...   detected_faces = detect_from_image(path_to_image)\n            [A list of bounding boxes (x1, y1, x2, y2)]\n            >>> image = cv2.imread(path_to_image)\n            ...   
detected_faces = detect_from_image(image)\n            [A list of bounding boxes (x1, y1, x2, y2)]\n\n        \"\"\"\n        raise NotImplementedError\n\n    def detect_from_directory(self, path, extensions=['.jpg', '.png'], recursive=False, show_progress_bar=True):\n        \"\"\"Detects faces from all the images present in a given directory.\n\n        Arguments:\n            path {string} -- a string containing a path that points to the folder containing the images\n\n        Keyword Arguments:\n            extensions {list} -- list of strings containing the extensions to\n            consider, in the following format: ``.extension_name`` (default:\n            {['.jpg', '.png']}) recursive {bool} -- whether to scan the\n            folder recursively (default: {False}) show_progress_bar {bool} --\n            display a progressbar (default: {True})\n\n        Example:\n        >>> directory = 'data'\n        ...   detected_faces = detect_from_directory(directory)\n        {A dictionary of [lists containing bounding boxes(x1, y1, x2, y2)]}\n\n        \"\"\"\n        if self.verbose:\n            logger = logging.getLogger(__name__)\n\n        if len(extensions) == 0:\n            if self.verbose:\n                logger.error(\"Expected at least one extension, but none were received.\")\n            raise ValueError\n\n        if self.verbose:\n            logger.info(\"Constructing the list of images.\")\n        additional_pattern = '/**/*' if recursive else '/*'\n        files = []\n        for extension in extensions:\n            files.extend(glob.glob(path + additional_pattern + extension, recursive=recursive))\n\n        if self.verbose:\n            logger.info(\"Finished searching for images. 
%s images found\", len(files))\n            logger.info(\"Preparing to run the detection.\")\n\n        predictions = {}\n        for image_path in tqdm(files, disable=not show_progress_bar):\n            if self.verbose:\n                logger.info(\"Running the face detector on image: %s\", image_path)\n            predictions[image_path] = self.detect_from_image(image_path)\n\n        if self.verbose:\n            logger.info(\"The detector was successfully run on all %s images\", len(files))\n\n        return predictions\n\n    @property\n    def reference_scale(self):\n        raise NotImplementedError\n\n    @property\n    def reference_x_shift(self):\n        raise NotImplementedError\n\n    @property\n    def reference_y_shift(self):\n        raise NotImplementedError\n\n    @staticmethod\n    def tensor_or_path_to_ndarray(tensor_or_path, rgb=True):\n        \"\"\"Convert path (represented as a string) or torch.tensor to a numpy.ndarray\n\n        Arguments:\n            tensor_or_path {numpy.ndarray, torch.tensor or string} -- path to the image, or the image itself\n        \"\"\"\n        if isinstance(tensor_or_path, str):\n            return cv2.imread(tensor_or_path) if not rgb else cv2.imread(tensor_or_path)[..., ::-1]\n        elif torch.is_tensor(tensor_or_path):\n            # Call cpu in case its coming from cuda\n            return tensor_or_path.cpu().numpy()[..., ::-1].copy() if not rgb else tensor_or_path.cpu().numpy()\n        elif isinstance(tensor_or_path, np.ndarray):\n            return tensor_or_path[..., ::-1].copy() if not rgb else tensor_or_path\n        else:\n            raise TypeError\n"
  },
  {
    "path": "third_part/face_detection/detection/sfd/__init__.py",
    "content": "from .sfd_detector import SFDDetector as FaceDetector"
  },
  {
    "path": "third_part/face_detection/detection/sfd/bbox.py",
    "content": "from __future__ import print_function\nimport os\nimport sys\nimport cv2\nimport random\nimport datetime\nimport time\nimport math\nimport argparse\nimport numpy as np\nimport torch\n\ntry:\n    from iou import IOU\nexcept BaseException:\n    # IOU cython speedup 10x\n    def IOU(ax1, ay1, ax2, ay2, bx1, by1, bx2, by2):\n        sa = abs((ax2 - ax1) * (ay2 - ay1))\n        sb = abs((bx2 - bx1) * (by2 - by1))\n        x1, y1 = max(ax1, bx1), max(ay1, by1)\n        x2, y2 = min(ax2, bx2), min(ay2, by2)\n        w = x2 - x1\n        h = y2 - y1\n        if w < 0 or h < 0:\n            return 0.0\n        else:\n            return 1.0 * w * h / (sa + sb - w * h)\n\n\ndef bboxlog(x1, y1, x2, y2, axc, ayc, aww, ahh):\n    xc, yc, ww, hh = (x2 + x1) / 2, (y2 + y1) / 2, x2 - x1, y2 - y1\n    dx, dy = (xc - axc) / aww, (yc - ayc) / ahh\n    dw, dh = math.log(ww / aww), math.log(hh / ahh)\n    return dx, dy, dw, dh\n\n\ndef bboxloginv(dx, dy, dw, dh, axc, ayc, aww, ahh):\n    xc, yc = dx * aww + axc, dy * ahh + ayc\n    ww, hh = math.exp(dw) * aww, math.exp(dh) * ahh\n    x1, x2, y1, y2 = xc - ww / 2, xc + ww / 2, yc - hh / 2, yc + hh / 2\n    return x1, y1, x2, y2\n\n\ndef nms(dets, thresh):\n    if 0 == len(dets):\n        return []\n    x1, y1, x2, y2, scores = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3], dets[:, 4]\n    areas = (x2 - x1 + 1) * (y2 - y1 + 1)\n    order = scores.argsort()[::-1]\n\n    keep = []\n    while order.size > 0:\n        i = order[0]\n        keep.append(i)\n        xx1, yy1 = np.maximum(x1[i], x1[order[1:]]), np.maximum(y1[i], y1[order[1:]])\n        xx2, yy2 = np.minimum(x2[i], x2[order[1:]]), np.minimum(y2[i], y2[order[1:]])\n\n        w, h = np.maximum(0.0, xx2 - xx1 + 1), np.maximum(0.0, yy2 - yy1 + 1)\n        ovr = w * h / (areas[i] + areas[order[1:]] - w * h)\n\n        inds = np.where(ovr <= thresh)[0]\n        order = order[inds + 1]\n\n    return keep\n\n\ndef encode(matched, priors, variances):\n    \"\"\"Encode the 
variances from the priorbox layers into the ground truth boxes\n    we have matched (based on jaccard overlap) with the prior boxes.\n    Args:\n        matched: (tensor) Coords of ground truth for each prior in point-form\n            Shape: [num_priors, 4].\n        priors: (tensor) Prior boxes in center-offset form\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        encoded boxes (tensor), Shape: [num_priors, 4]\n    \"\"\"\n\n    # dist b/t match center and prior's center\n    g_cxcy = (matched[:, :2] + matched[:, 2:]) / 2 - priors[:, :2]\n    # encode variance\n    g_cxcy /= (variances[0] * priors[:, 2:])\n    # match wh / prior wh\n    g_wh = (matched[:, 2:] - matched[:, :2]) / priors[:, 2:]\n    g_wh = torch.log(g_wh) / variances[1]\n    # return target for smooth_l1_loss\n    return torch.cat([g_cxcy, g_wh], 1)  # [num_priors,4]\n\n\ndef decode(loc, priors, variances):\n    \"\"\"Decode locations from predictions using priors to undo\n    the encoding we did for offset regression at train time.\n    Args:\n        loc (tensor): location predictions for loc layers,\n            Shape: [num_priors,4]\n        priors (tensor): Prior boxes in center-offset form.\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        decoded bounding box predictions\n    \"\"\"\n\n    boxes = torch.cat((\n        priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:],\n        priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])), 1)\n    boxes[:, :2] -= boxes[:, 2:] / 2\n    boxes[:, 2:] += boxes[:, :2]\n    return boxes\n\ndef batch_decode(loc, priors, variances):\n    \"\"\"Decode locations from predictions using priors to undo\n    the encoding we did for offset regression at train time.\n    Args:\n        loc (tensor): location predictions for loc layers,\n            Shape: [num_priors,4]\n        priors (tensor): Prior boxes in center-offset 
form.\n            Shape: [num_priors,4].\n        variances: (list[float]) Variances of priorboxes\n    Return:\n        decoded bounding box predictions\n    \"\"\"\n\n    boxes = torch.cat((\n        priors[:, :, :2] + loc[:, :, :2] * variances[0] * priors[:, :, 2:],\n        priors[:, :, 2:] * torch.exp(loc[:, :, 2:] * variances[1])), 2)\n    boxes[:, :, :2] -= boxes[:, :, 2:] / 2\n    boxes[:, :, 2:] += boxes[:, :, :2]\n    return boxes\n"
  },
  {
    "path": "third_part/face_detection/detection/sfd/detect.py",
    "content": "import torch\nimport torch.nn.functional as F\n\nimport os\nimport sys\nimport cv2\nimport random\nimport datetime\nimport math\nimport argparse\nimport numpy as np\n\nimport scipy.io as sio\nimport zipfile\nfrom .net_s3fd import s3fd\nfrom .bbox import *\n\n\ndef detect(net, img, device):\n    img = img - np.array([104, 117, 123])\n    img = img.transpose(2, 0, 1)\n    img = img.reshape((1,) + img.shape)\n\n    if 'cuda' in device:\n        torch.backends.cudnn.benchmark = True\n\n    img = torch.from_numpy(img).float().to(device)\n    BB, CC, HH, WW = img.size()\n    with torch.no_grad():\n        olist = net(img)\n\n    bboxlist = []\n    for i in range(len(olist) // 2):\n        olist[i * 2] = F.softmax(olist[i * 2], dim=1)\n    olist = [oelem.data.cpu() for oelem in olist]\n    for i in range(len(olist) // 2):\n        ocls, oreg = olist[i * 2], olist[i * 2 + 1]\n        FB, FC, FH, FW = ocls.size()  # feature map size\n        stride = 2**(i + 2)    # 4,8,16,32,64,128\n        anchor = stride * 4\n        poss = zip(*np.where(ocls[:, 1, :, :] > 0.05))\n        for Iindex, hindex, windex in poss:\n            axc, ayc = stride / 2 + windex * stride, stride / 2 + hindex * stride\n            score = ocls[0, 1, hindex, windex]\n            loc = oreg[0, :, hindex, windex].contiguous().view(1, 4)\n            priors = torch.Tensor([[axc / 1.0, ayc / 1.0, stride * 4 / 1.0, stride * 4 / 1.0]])\n            variances = [0.1, 0.2]\n            box = decode(loc, priors, variances)\n            x1, y1, x2, y2 = box[0] * 1.0\n            # cv2.rectangle(imgshow,(int(x1),int(y1)),(int(x2),int(y2)),(0,0,255),1)\n            bboxlist.append([x1, y1, x2, y2, score])\n    bboxlist = np.array(bboxlist)\n    if 0 == len(bboxlist):\n        bboxlist = np.zeros((1, 5))\n\n    return bboxlist\n\ndef batch_detect(net, imgs, device):\n    imgs = imgs - np.array([104, 117, 123])\n    imgs = imgs.transpose(0, 3, 1, 2)\n\n    if 'cuda' in device:\n        
torch.backends.cudnn.benchmark = True\n\n    imgs = torch.from_numpy(imgs).float().to(device)\n    BB, CC, HH, WW = imgs.size()\n    with torch.no_grad():\n        # print(type(net),type(imgs), device)\n        olist = net(imgs)\n\n    bboxlist = []\n    for i in range(len(olist) // 2):\n        olist[i * 2] = F.softmax(olist[i * 2], dim=1)\n    # print(olist)\n    # import pdb; pdb.set_trace()\n    olist = [oelem.cpu() for oelem in olist]\n    for i in range(len(olist) // 2):\n        ocls, oreg = olist[i * 2], olist[i * 2 + 1]\n        FB, FC, FH, FW = ocls.size()  # feature map size\n        stride = 2**(i + 2)    # 4,8,16,32,64,128\n        anchor = stride * 4\n        poss = zip(*np.where(ocls[:, 1, :, :] > 0.05))\n        for Iindex, hindex, windex in poss:\n            axc, ayc = stride / 2 + windex * stride, stride / 2 + hindex * stride\n            score = ocls[:, 1, hindex, windex]\n            loc = oreg[:, :, hindex, windex].contiguous().view(BB, 1, 4)\n            priors = torch.Tensor([[axc / 1.0, ayc / 1.0, stride * 4 / 1.0, stride * 4 / 1.0]]).view(1, 1, 4)\n            variances = [0.1, 0.2]\n            box = batch_decode(loc, priors, variances)\n            box = box[:, 0] * 1.0\n            # cv2.rectangle(imgshow,(int(x1),int(y1)),(int(x2),int(y2)),(0,0,255),1)\n            bboxlist.append(torch.cat([box, score.unsqueeze(1)], 1).cpu().numpy())\n    bboxlist = np.array(bboxlist)\n    if 0 == len(bboxlist):\n        bboxlist = np.zeros((1, BB, 5))\n\n    return bboxlist\n\ndef flip_detect(net, img, device):\n    img = cv2.flip(img, 1)\n    b = detect(net, img, device)\n\n    bboxlist = np.zeros(b.shape)\n    bboxlist[:, 0] = img.shape[1] - b[:, 2]\n    bboxlist[:, 1] = b[:, 1]\n    bboxlist[:, 2] = img.shape[1] - b[:, 0]\n    bboxlist[:, 3] = b[:, 3]\n    bboxlist[:, 4] = b[:, 4]\n    return bboxlist\n\n\ndef pts_to_bb(pts):\n    min_x, min_y = np.min(pts, axis=0)\n    max_x, max_y = np.max(pts, axis=0)\n    return np.array([min_x, min_y, max_x, 
max_y])\n"
  },
  {
    "path": "third_part/face_detection/detection/sfd/net_s3fd.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass L2Norm(nn.Module):\n    def __init__(self, n_channels, scale=1.0):\n        super(L2Norm, self).__init__()\n        self.n_channels = n_channels\n        self.scale = scale\n        self.eps = 1e-10\n        self.weight = nn.Parameter(torch.Tensor(self.n_channels))\n        self.weight.data *= 0.0\n        self.weight.data += self.scale\n\n    def forward(self, x):\n        norm = x.pow(2).sum(dim=1, keepdim=True).sqrt() + self.eps\n        x = x / norm * self.weight.view(1, -1, 1, 1)\n        return x\n\n\nclass s3fd(nn.Module):\n    def __init__(self):\n        super(s3fd, self).__init__()\n        self.conv1_1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)\n        self.conv1_2 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)\n\n        self.conv2_1 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)\n        self.conv2_2 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)\n\n        self.conv3_1 = nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1)\n        self.conv3_2 = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1)\n        self.conv3_3 = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1)\n\n        self.conv4_1 = nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1)\n        self.conv4_2 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)\n        self.conv4_3 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)\n\n        self.conv5_1 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)\n        self.conv5_2 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)\n        self.conv5_3 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)\n\n        self.fc6 = nn.Conv2d(512, 1024, kernel_size=3, stride=1, padding=3)\n        self.fc7 = nn.Conv2d(1024, 1024, kernel_size=1, stride=1, padding=0)\n\n        self.conv6_1 = nn.Conv2d(1024, 256, kernel_size=1, stride=1, padding=0)\n        self.conv6_2 
= nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1)\n\n        self.conv7_1 = nn.Conv2d(512, 128, kernel_size=1, stride=1, padding=0)\n        self.conv7_2 = nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1)\n\n        self.conv3_3_norm = L2Norm(256, scale=10)\n        self.conv4_3_norm = L2Norm(512, scale=8)\n        self.conv5_3_norm = L2Norm(512, scale=5)\n\n        self.conv3_3_norm_mbox_conf = nn.Conv2d(256, 4, kernel_size=3, stride=1, padding=1)\n        self.conv3_3_norm_mbox_loc = nn.Conv2d(256, 4, kernel_size=3, stride=1, padding=1)\n        self.conv4_3_norm_mbox_conf = nn.Conv2d(512, 2, kernel_size=3, stride=1, padding=1)\n        self.conv4_3_norm_mbox_loc = nn.Conv2d(512, 4, kernel_size=3, stride=1, padding=1)\n        self.conv5_3_norm_mbox_conf = nn.Conv2d(512, 2, kernel_size=3, stride=1, padding=1)\n        self.conv5_3_norm_mbox_loc = nn.Conv2d(512, 4, kernel_size=3, stride=1, padding=1)\n\n        self.fc7_mbox_conf = nn.Conv2d(1024, 2, kernel_size=3, stride=1, padding=1)\n        self.fc7_mbox_loc = nn.Conv2d(1024, 4, kernel_size=3, stride=1, padding=1)\n        self.conv6_2_mbox_conf = nn.Conv2d(512, 2, kernel_size=3, stride=1, padding=1)\n        self.conv6_2_mbox_loc = nn.Conv2d(512, 4, kernel_size=3, stride=1, padding=1)\n        self.conv7_2_mbox_conf = nn.Conv2d(256, 2, kernel_size=3, stride=1, padding=1)\n        self.conv7_2_mbox_loc = nn.Conv2d(256, 4, kernel_size=3, stride=1, padding=1)\n\n    def forward(self, x):\n        h = F.relu(self.conv1_1(x))\n        h = F.relu(self.conv1_2(h))\n        h = F.max_pool2d(h, 2, 2)\n\n        h = F.relu(self.conv2_1(h))\n        h = F.relu(self.conv2_2(h))\n        h = F.max_pool2d(h, 2, 2)\n\n        h = F.relu(self.conv3_1(h))\n        h = F.relu(self.conv3_2(h))\n        h = F.relu(self.conv3_3(h))\n        f3_3 = h\n        h = F.max_pool2d(h, 2, 2)\n\n        h = F.relu(self.conv4_1(h))\n        h = F.relu(self.conv4_2(h))\n        h = F.relu(self.conv4_3(h))\n        f4_3 = h\n  
      h = F.max_pool2d(h, 2, 2)\n\n        h = F.relu(self.conv5_1(h))\n        h = F.relu(self.conv5_2(h))\n        h = F.relu(self.conv5_3(h))\n        f5_3 = h\n        h = F.max_pool2d(h, 2, 2)\n\n        h = F.relu(self.fc6(h))\n        h = F.relu(self.fc7(h))\n        ffc7 = h\n        h = F.relu(self.conv6_1(h))\n        h = F.relu(self.conv6_2(h))\n        f6_2 = h\n        h = F.relu(self.conv7_1(h))\n        h = F.relu(self.conv7_2(h))\n        f7_2 = h\n\n        f3_3 = self.conv3_3_norm(f3_3)\n        f4_3 = self.conv4_3_norm(f4_3)\n        f5_3 = self.conv5_3_norm(f5_3)\n\n        cls1 = self.conv3_3_norm_mbox_conf(f3_3)\n        reg1 = self.conv3_3_norm_mbox_loc(f3_3)\n        cls2 = self.conv4_3_norm_mbox_conf(f4_3)\n        reg2 = self.conv4_3_norm_mbox_loc(f4_3)\n        cls3 = self.conv5_3_norm_mbox_conf(f5_3)\n        reg3 = self.conv5_3_norm_mbox_loc(f5_3)\n        cls4 = self.fc7_mbox_conf(ffc7)\n        reg4 = self.fc7_mbox_loc(ffc7)\n        cls5 = self.conv6_2_mbox_conf(f6_2)\n        reg5 = self.conv6_2_mbox_loc(f6_2)\n        cls6 = self.conv7_2_mbox_conf(f7_2)\n        reg6 = self.conv7_2_mbox_loc(f7_2)\n\n        # max-out background label\n        chunk = torch.chunk(cls1, 4, 1)\n        bmax = torch.max(torch.max(chunk[0], chunk[1]), chunk[2])\n        cls1 = torch.cat([bmax, chunk[3]], dim=1)\n\n        return [cls1, reg1, cls2, reg2, cls3, reg3, cls4, reg4, cls5, reg5, cls6, reg6]\n"
  },
  {
    "path": "third_part/face_detection/detection/sfd/sfd_detector.py",
    "content": "import os\nimport cv2\nfrom torch.utils.model_zoo import load_url\n\nfrom ..core import FaceDetector\n\nfrom .net_s3fd import s3fd\nfrom .bbox import *\nfrom .detect import *\n\nmodels_urls = {\n    's3fd': 'https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth',\n}\n\n\nclass SFDDetector(FaceDetector):\n    def __init__(self, device, path_to_detector='/apdcephfs/share_1290939/shadowcun/pretrained/s3fd.pth', verbose=False):\n        super(SFDDetector, self).__init__(device, verbose)\n\n        # Initialise the face detector\n        if not os.path.isfile(path_to_detector):\n            model_weights = load_url(models_urls['s3fd'])\n        else:\n            model_weights = torch.load(path_to_detector)\n\n        self.face_detector = s3fd()\n        self.face_detector.load_state_dict(model_weights)\n        self.face_detector.to(device)\n        self.face_detector.eval()\n\n    def detect_from_image(self, tensor_or_path):\n        image = self.tensor_or_path_to_ndarray(tensor_or_path)\n\n        bboxlist = detect(self.face_detector, image, device=self.device)\n        keep = nms(bboxlist, 0.3)\n        bboxlist = bboxlist[keep, :]\n        bboxlist = [x for x in bboxlist if x[-1] > 0.5]\n\n        return bboxlist\n\n    def detect_from_batch(self, images):\n        bboxlists = batch_detect(self.face_detector, images, device=self.device)\n        keeps = [nms(bboxlists[:, i, :], 0.3) for i in range(bboxlists.shape[1])]\n        bboxlists = [bboxlists[keep, i, :] for i, keep in enumerate(keeps)]\n        bboxlists = [[x for x in bboxlist if x[-1] > 0.5] for bboxlist in bboxlists]\n\n        return bboxlists\n\n    @property\n    def reference_scale(self):\n        return 195\n\n    @property\n    def reference_x_shift(self):\n        return 0\n\n    @property\n    def reference_y_shift(self):\n        return 0\n"
  },
  {
    "path": "third_part/face_detection/models.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport math\n\n\ndef conv3x3(in_planes, out_planes, strd=1, padding=1, bias=False):\n    \"3x3 convolution with padding\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3,\n                     stride=strd, padding=padding, bias=bias)\n\n\nclass ConvBlock(nn.Module):\n    def __init__(self, in_planes, out_planes):\n        super(ConvBlock, self).__init__()\n        self.bn1 = nn.BatchNorm2d(in_planes)\n        self.conv1 = conv3x3(in_planes, int(out_planes / 2))\n        self.bn2 = nn.BatchNorm2d(int(out_planes / 2))\n        self.conv2 = conv3x3(int(out_planes / 2), int(out_planes / 4))\n        self.bn3 = nn.BatchNorm2d(int(out_planes / 4))\n        self.conv3 = conv3x3(int(out_planes / 4), int(out_planes / 4))\n\n        if in_planes != out_planes:\n            self.downsample = nn.Sequential(\n                nn.BatchNorm2d(in_planes),\n                nn.ReLU(True),\n                nn.Conv2d(in_planes, out_planes,\n                          kernel_size=1, stride=1, bias=False),\n            )\n        else:\n            self.downsample = None\n\n    def forward(self, x):\n        residual = x\n\n        out1 = self.bn1(x)\n        out1 = F.relu(out1, True)\n        out1 = self.conv1(out1)\n\n        out2 = self.bn2(out1)\n        out2 = F.relu(out2, True)\n        out2 = self.conv2(out2)\n\n        out3 = self.bn3(out2)\n        out3 = F.relu(out3, True)\n        out3 = self.conv3(out3)\n\n        out3 = torch.cat((out1, out2, out3), 1)\n\n        if self.downsample is not None:\n            residual = self.downsample(residual)\n\n        out3 += residual\n\n        return out3\n\n\nclass Bottleneck(nn.Module):\n\n    expansion = 4\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(Bottleneck, self).__init__()\n        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(planes)\n 
       self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,\n                               padding=1, bias=False)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)\n        self.bn3 = nn.BatchNorm2d(planes * 4)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.relu(out)\n\n        return out\n\n\nclass HourGlass(nn.Module):\n    def __init__(self, num_modules, depth, num_features):\n        super(HourGlass, self).__init__()\n        self.num_modules = num_modules\n        self.depth = depth\n        self.features = num_features\n\n        self._generate_network(self.depth)\n\n    def _generate_network(self, level):\n        self.add_module('b1_' + str(level), ConvBlock(self.features, self.features))\n\n        self.add_module('b2_' + str(level), ConvBlock(self.features, self.features))\n\n        if level > 1:\n            self._generate_network(level - 1)\n        else:\n            self.add_module('b2_plus_' + str(level), ConvBlock(self.features, self.features))\n\n        self.add_module('b3_' + str(level), ConvBlock(self.features, self.features))\n\n    def _forward(self, level, inp):\n        # Upper branch\n        up1 = inp\n        up1 = self._modules['b1_' + str(level)](up1)\n\n        # Lower branch\n        low1 = F.avg_pool2d(inp, 2, stride=2)\n        low1 = self._modules['b2_' + str(level)](low1)\n\n        if level > 1:\n            low2 = self._forward(level - 1, low1)\n        
else:\n            low2 = low1\n            low2 = self._modules['b2_plus_' + str(level)](low2)\n\n        low3 = low2\n        low3 = self._modules['b3_' + str(level)](low3)\n\n        up2 = F.interpolate(low3, scale_factor=2, mode='nearest')\n\n        return up1 + up2\n\n    def forward(self, x):\n        return self._forward(self.depth, x)\n\n\nclass FAN(nn.Module):\n\n    def __init__(self, num_modules=1):\n        super(FAN, self).__init__()\n        self.num_modules = num_modules\n\n        # Base part\n        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)\n        self.bn1 = nn.BatchNorm2d(64)\n        self.conv2 = ConvBlock(64, 128)\n        self.conv3 = ConvBlock(128, 128)\n        self.conv4 = ConvBlock(128, 256)\n\n        # Stacking part\n        for hg_module in range(self.num_modules):\n            self.add_module('m' + str(hg_module), HourGlass(1, 4, 256))\n            self.add_module('top_m_' + str(hg_module), ConvBlock(256, 256))\n            self.add_module('conv_last' + str(hg_module),\n                            nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0))\n            self.add_module('bn_end' + str(hg_module), nn.BatchNorm2d(256))\n            self.add_module('l' + str(hg_module), nn.Conv2d(256,\n                                                            68, kernel_size=1, stride=1, padding=0))\n\n            if hg_module < self.num_modules - 1:\n                self.add_module(\n                    'bl' + str(hg_module), nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0))\n                self.add_module('al' + str(hg_module), nn.Conv2d(68,\n                                                                 256, kernel_size=1, stride=1, padding=0))\n\n    def forward(self, x):\n        x = F.relu(self.bn1(self.conv1(x)), True)\n        x = F.avg_pool2d(self.conv2(x), 2, stride=2)\n        x = self.conv3(x)\n        x = self.conv4(x)\n\n        previous = x\n\n        outputs = []\n        for i in 
range(self.num_modules):\n            hg = self._modules['m' + str(i)](previous)\n\n            ll = hg\n            ll = self._modules['top_m_' + str(i)](ll)\n\n            ll = F.relu(self._modules['bn_end' + str(i)]\n                        (self._modules['conv_last' + str(i)](ll)), True)\n\n            # Predict heatmaps\n            tmp_out = self._modules['l' + str(i)](ll)\n            outputs.append(tmp_out)\n\n            if i < self.num_modules - 1:\n                ll = self._modules['bl' + str(i)](ll)\n                tmp_out_ = self._modules['al' + str(i)](tmp_out)\n                previous = previous + ll + tmp_out_\n\n        return outputs\n\n\nclass ResNetDepth(nn.Module):\n\n    def __init__(self, block=Bottleneck, layers=[3, 8, 36, 3], num_classes=68):\n        self.inplanes = 64\n        super(ResNetDepth, self).__init__()\n        self.conv1 = nn.Conv2d(3 + 68, 64, kernel_size=7, stride=2, padding=3,\n                               bias=False)\n        self.bn1 = nn.BatchNorm2d(64)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n        self.layer1 = self._make_layer(block, 64, layers[0])\n        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\n        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)\n        self.avgpool = nn.AvgPool2d(7)\n        self.fc = nn.Linear(512 * block.expansion, num_classes)\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, math.sqrt(2. 
/ n))\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n    def _make_layer(self, block, planes, blocks, stride=1):\n        downsample = None\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                nn.Conv2d(self.inplanes, planes * block.expansion,\n                          kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(planes * block.expansion),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample))\n        self.inplanes = planes * block.expansion\n        for i in range(1, blocks):\n            layers.append(block(self.inplanes, planes))\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n        x = self.layer2(x)\n        x = self.layer3(x)\n        x = self.layer4(x)\n\n        x = self.avgpool(x)\n        x = x.view(x.size(0), -1)\n        x = self.fc(x)\n\n        return x\n"
  },
  {
    "path": "third_part/face_detection/utils.py",
    "content": "from __future__ import print_function\nimport os\nimport sys\nimport time\nimport torch\nimport math\nimport numpy as np\nimport cv2\n\n\ndef _gaussian(\n        size=3, sigma=0.25, amplitude=1, normalize=False, width=None,\n        height=None, sigma_horz=None, sigma_vert=None, mean_horz=0.5,\n        mean_vert=0.5):\n    # handle some defaults\n    if width is None:\n        width = size\n    if height is None:\n        height = size\n    if sigma_horz is None:\n        sigma_horz = sigma\n    if sigma_vert is None:\n        sigma_vert = sigma\n    center_x = mean_horz * width + 0.5\n    center_y = mean_vert * height + 0.5\n    gauss = np.empty((height, width), dtype=np.float32)\n    # generate kernel\n    for i in range(height):\n        for j in range(width):\n            gauss[i][j] = amplitude * math.exp(-(math.pow((j + 1 - center_x) / (\n                sigma_horz * width), 2) / 2.0 + math.pow((i + 1 - center_y) / (sigma_vert * height), 2) / 2.0))\n    if normalize:\n        gauss = gauss / np.sum(gauss)\n    return gauss\n\n\ndef draw_gaussian(image, point, sigma):\n    # Check if the gaussian is inside\n    ul = [math.floor(point[0] - 3 * sigma), math.floor(point[1] - 3 * sigma)]\n    br = [math.floor(point[0] + 3 * sigma), math.floor(point[1] + 3 * sigma)]\n    if (ul[0] > image.shape[1] or ul[1] > image.shape[0] or br[0] < 1 or br[1] < 1):\n        return image\n    size = 6 * sigma + 1\n    g = _gaussian(size)\n    g_x = [int(max(1, -ul[0])), int(min(br[0], image.shape[1])) - int(max(1, ul[0])) + int(max(1, -ul[0]))]\n    g_y = [int(max(1, -ul[1])), int(min(br[1], image.shape[0])) - int(max(1, ul[1])) + int(max(1, -ul[1]))]\n    img_x = [int(max(1, ul[0])), int(min(br[0], image.shape[1]))]\n    img_y = [int(max(1, ul[1])), int(min(br[1], image.shape[0]))]\n    assert (g_x[0] > 0 and g_y[1] > 0)\n    image[img_y[0] - 1:img_y[1], img_x[0] - 1:img_x[1]\n          ] = image[img_y[0] - 1:img_y[1], img_x[0] - 1:img_x[1]] + g[g_y[0] - 1:g_y[1], 
g_x[0] - 1:g_x[1]]\n    image[image > 1] = 1\n    return image\n\n\ndef transform(point, center, scale, resolution, invert=False):\n    \"\"\"Generate an affine transformation matrix.\n\n    Given a set of points, a center, a scale and a target resolution, the\n    function generates an affine transformation matrix. If invert is ``True``\n    it will produce the inverse transformation.\n\n    Arguments:\n        point {torch.tensor} -- the input 2D point\n        center {torch.tensor or numpy.array} -- the center around which to perform the transformations\n        scale {float} -- the scale of the face/object\n        resolution {float} -- the output resolution\n\n    Keyword Arguments:\n        invert {bool} -- whether the function should produce the direct or the\n        inverse transformation matrix (default: {False})\n    \"\"\"\n    _pt = torch.ones(3)\n    _pt[0] = point[0]\n    _pt[1] = point[1]\n\n    h = 200.0 * scale\n    t = torch.eye(3)\n    t[0, 0] = resolution / h\n    t[1, 1] = resolution / h\n    t[0, 2] = resolution * (-center[0] / h + 0.5)\n    t[1, 2] = resolution * (-center[1] / h + 0.5)\n\n    if invert:\n        t = torch.inverse(t)\n\n    new_point = (torch.matmul(t, _pt))[0:2]\n\n    return new_point.int()\n\n\ndef crop(image, center, scale, resolution=256.0):\n    \"\"\"Center crops an image or set of heatmaps\n\n    Arguments:\n        image {numpy.array} -- an rgb image\n        center {numpy.array} -- the center of the object, usually the same as of the bounding box\n        scale {float} -- scale of the face\n\n    Keyword Arguments:\n        resolution {float} -- the size of the output cropped image (default: {256.0})\n\n    Returns:\n        numpy.array -- the cropped image\n    \"\"\"  # Crop around the center point\n    \"\"\" Crops the image around the center. 
Input is expected to be an np.ndarray \"\"\"\n    ul = transform([1, 1], center, scale, resolution, True)\n    br = transform([resolution, resolution], center, scale, resolution, True)\n    # pad = math.ceil(torch.norm((ul - br).float()) / 2.0 - (br[0] - ul[0]) / 2.0)\n    if image.ndim > 2:\n        newDim = np.array([br[1] - ul[1], br[0] - ul[0],\n                           image.shape[2]], dtype=np.int32)\n        newImg = np.zeros(newDim, dtype=np.uint8)\n    else:\n        newDim = np.array([br[1] - ul[1], br[0] - ul[0]], dtype=np.int32)\n        newImg = np.zeros(newDim, dtype=np.uint8)\n    ht = image.shape[0]\n    wd = image.shape[1]\n    newX = np.array(\n        [max(1, -ul[0] + 1), min(br[0], wd) - ul[0]], dtype=np.int32)\n    newY = np.array(\n        [max(1, -ul[1] + 1), min(br[1], ht) - ul[1]], dtype=np.int32)\n    oldX = np.array([max(1, ul[0] + 1), min(br[0], wd)], dtype=np.int32)\n    oldY = np.array([max(1, ul[1] + 1), min(br[1], ht)], dtype=np.int32)\n    newImg[newY[0] - 1:newY[1], newX[0] - 1:newX[1]\n           ] = image[oldY[0] - 1:oldY[1], oldX[0] - 1:oldX[1], :]\n    newImg = cv2.resize(newImg, dsize=(int(resolution), int(resolution)),\n                        interpolation=cv2.INTER_LINEAR)\n    return newImg\n\n\ndef get_preds_fromhm(hm, center=None, scale=None):\n    \"\"\"Obtain (x,y) coordinates given a set of N heatmaps. 
If the center\n    and the scale are provided, the function will also return the points in\n    the original coordinate frame.\n\n    Arguments:\n        hm {torch.tensor} -- the predicted heatmaps, of shape [B, N, W, H]\n\n    Keyword Arguments:\n        center {torch.tensor} -- the center of the bounding box (default: {None})\n        scale {float} -- face scale (default: {None})\n    \"\"\"\n    _, idx = torch.max(\n        hm.view(hm.size(0), hm.size(1), hm.size(2) * hm.size(3)), 2)\n    idx += 1\n    preds = idx.view(idx.size(0), idx.size(1), 1).repeat(1, 1, 2).float()\n    preds[..., 0].apply_(lambda x: (x - 1) % hm.size(3) + 1)\n    preds[..., 1].add_(-1).div_(hm.size(2)).floor_().add_(1)\n\n    for i in range(preds.size(0)):\n        for j in range(preds.size(1)):\n            hm_ = hm[i, j, :]\n            pX, pY = int(preds[i, j, 0]) - 1, int(preds[i, j, 1]) - 1\n            if pX > 0 and pX < 63 and pY > 0 and pY < 63:  # assumes 64x64 heatmaps\n                diff = torch.FloatTensor(\n                    [hm_[pY, pX + 1] - hm_[pY, pX - 1],\n                     hm_[pY + 1, pX] - hm_[pY - 1, pX]])\n                preds[i, j].add_(diff.sign_().mul_(.25))\n\n    preds.add_(-.5)\n\n    preds_orig = torch.zeros(preds.size())\n    if center is not None and scale is not None:\n        for i in range(hm.size(0)):\n            for j in range(hm.size(1)):\n                preds_orig[i, j] = transform(\n                    preds[i, j], center, scale, hm.size(2), True)\n\n    return preds, preds_orig\n\ndef get_preds_fromhm_batch(hm, centers=None, scales=None):\n    \"\"\"Obtain (x,y) coordinates given a set of N heatmaps. 
If the centers\n    and the scales are provided, the function will also return the points in\n    the original coordinate frame.\n\n    Arguments:\n        hm {torch.tensor} -- the predicted heatmaps, of shape [B, N, W, H]\n\n    Keyword Arguments:\n        centers {torch.tensor} -- the centers of the bounding box (default: {None})\n        scales {float} -- face scales (default: {None})\n    \"\"\"\n    _, idx = torch.max(\n        hm.view(hm.size(0), hm.size(1), hm.size(2) * hm.size(3)), 2)\n    idx += 1\n    preds = idx.view(idx.size(0), idx.size(1), 1).repeat(1, 1, 2).float()\n    preds[..., 0].apply_(lambda x: (x - 1) % hm.size(3) + 1)\n    preds[..., 1].add_(-1).div_(hm.size(2)).floor_().add_(1)\n\n    for i in range(preds.size(0)):\n        for j in range(preds.size(1)):\n            hm_ = hm[i, j, :]\n            pX, pY = int(preds[i, j, 0]) - 1, int(preds[i, j, 1]) - 1\n            if pX > 0 and pX < 63 and pY > 0 and pY < 63:  # assumes 64x64 heatmaps\n                diff = torch.FloatTensor(\n                    [hm_[pY, pX + 1] - hm_[pY, pX - 1],\n                     hm_[pY + 1, pX] - hm_[pY - 1, pX]])\n                preds[i, j].add_(diff.sign_().mul_(.25))\n\n    preds.add_(-.5)\n\n    preds_orig = torch.zeros(preds.size())\n    if centers is not None and scales is not None:\n        for i in range(hm.size(0)):\n            for j in range(hm.size(1)):\n                preds_orig[i, j] = transform(\n                    preds[i, j], centers[i], scales[i], hm.size(2), True)\n\n    return preds, preds_orig\n\ndef shuffle_lr(parts, pairs=None):\n    \"\"\"Shuffle the points left-right according to the axis of symmetry\n    of the object.\n\n    Arguments:\n        parts {torch.tensor} -- a 3D or 4D object containing the\n        heatmaps.\n\n    Keyword Arguments:\n        pairs {list of integers} -- [order of the flipped points] (default: {None})\n    \"\"\"\n    if pairs is None:\n        pairs = [16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0,\n                 26, 
25, 24, 23, 22, 21, 20, 19, 18, 17, 27, 28, 29, 30, 35,\n                 34, 33, 32, 31, 45, 44, 43, 42, 47, 46, 39, 38, 37, 36, 41,\n                 40, 54, 53, 52, 51, 50, 49, 48, 59, 58, 57, 56, 55, 64, 63,\n                 62, 61, 60, 67, 66, 65]\n    if parts.ndimension() == 3:\n        parts = parts[pairs, ...]\n    else:\n        parts = parts[:, pairs, ...]\n\n    return parts\n\n\ndef flip(tensor, is_label=False):\n    \"\"\"Flip an image or a set of heatmaps left-right\n\n    Arguments:\n        tensor {numpy.array or torch.tensor} -- [the input image or heatmaps]\n\n    Keyword Arguments:\n        is_label {bool} -- [whether the input is a set of heatmaps rather than an image] (default: {False})\n    \"\"\"\n    if not torch.is_tensor(tensor):\n        tensor = torch.from_numpy(tensor)\n\n    if is_label:\n        tensor = shuffle_lr(tensor).flip(tensor.ndimension() - 1)\n    else:\n        tensor = tensor.flip(tensor.ndimension() - 1)\n\n    return tensor\n\n# From pyzolib/paths.py (https://bitbucket.org/pyzo/pyzolib/src/tip/paths.py)\n\n\ndef appdata_dir(appname=None, roaming=False):\n    \"\"\" appdata_dir(appname=None, roaming=False)\n\n    Get the path to the application directory, where applications are allowed\n    to write user specific files (e.g. configurations). 
For non-user specific\n    data, consider using common_appdata_dir().\n    If appname is given, a subdir is appended (and created if necessary).\n    If roaming is True, will prefer a roaming directory (Windows Vista/7).\n    \"\"\"\n\n    # Define default user directory\n    userDir = os.getenv('FACEALIGNMENT_USERDIR', None)\n    if userDir is None:\n        userDir = os.path.expanduser('~')\n        if not os.path.isdir(userDir):  # pragma: no cover\n            userDir = '/var/tmp'  # issue #54\n\n    # Get system app data dir\n    path = None\n    if sys.platform.startswith('win'):\n        path1, path2 = os.getenv('LOCALAPPDATA'), os.getenv('APPDATA')\n        path = (path2 or path1) if roaming else (path1 or path2)\n    elif sys.platform.startswith('darwin'):\n        path = os.path.join(userDir, 'Library', 'Application Support')\n    # On Linux and as fallback\n    if not (path and os.path.isdir(path)):\n        path = userDir\n\n    # Maybe we should store things local to the executable (in case of a\n    # portable distro or a frozen application that wants to be portable)\n    prefix = sys.prefix\n    if getattr(sys, 'frozen', None):\n        prefix = os.path.abspath(os.path.dirname(sys.executable))\n    for reldir in ('settings', '../settings'):\n        localpath = os.path.abspath(os.path.join(prefix, reldir))\n        if os.path.isdir(localpath):  # pragma: no cover\n            try:\n                open(os.path.join(localpath, 'test.write'), 'wb').close()\n                os.remove(os.path.join(localpath, 'test.write'))\n            except IOError:\n                pass  # We cannot write in this directory\n            else:\n                path = localpath\n                break\n\n    # Get path specific for this app\n    if appname:\n        if path == userDir:\n            appname = '.' 
+ appname.lstrip('.')  # Make it a hidden directory\n        path = os.path.join(path, appname)\n        if not os.path.isdir(path):  # pragma: no cover\n            os.mkdir(path)\n\n    # Done\n    return path\n"
  },
  {
    "path": "third_part/ganimation_replicate/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 Yuedong Chen (Donald)\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "third_part/ganimation_replicate/checkpoints/opt.txt",
    "content": "------------------- [ test][220417_224012]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints                   \t[default: ./ckpts]\n                data_root: datasets/celebA               \t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650206412                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation        
            \n                n_threads: 6                             \n                     name: 220417_224012                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False                         \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/celebA_ganimation_30  \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220417_224012]End ----------------------\n\n\n------------------- [ test][220419_184832]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: 
checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             \t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650365312                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_184832                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10             
               \n             no_aus_noise: False                         \n                  no_flip: False                         \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_184832]End ----------------------\n\n\n------------------- [ test][220419_185232]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650365552                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_185232                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_185232]End ----------------------\n\n\n------------------- [ test][220419_185252]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650365572                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_185252                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_185252]End ----------------------\n\n\n------------------- [ test][220419_185305]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650365585                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_185305                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_185305]End ----------------------\n\n\n------------------- [ test][220419_185320]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650365600                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_185320                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_185320]End ----------------------\n\n\n------------------- [ test][220419_185810]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650365890                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_185810                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_185810]End ----------------------\n\n\n------------------- [ test][220419_190338]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650366218                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_190338                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_190338]End ----------------------\n\n\n------------------- [ test][220419_190445]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650366285                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_190445                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_190445]End ----------------------\n\n\n------------------- [ test][220419_190628]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650366388                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_190628                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_190628]End ----------------------\n\n\n------------------- [ test][220419_195037]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650369037                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_195037                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_195037]End ----------------------\n\n\n------------------- [ test][220419_200348]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650369828                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_200348                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_200348]End ----------------------\n\n\n------------------- [ test][220419_200512]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650369912                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_200512                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_200512]End ----------------------\n\n\n------------------- [ test][220419_200529]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650369929                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_200529                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_200529]End ----------------------\n\n\n------------------- [ test][220419_200554]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650369954                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_200554                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_200554]End ----------------------\n\n\n------------------- [ test][220419_200622]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650369982                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_200622                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_200622]End ----------------------\n\n\n------------------- [ test][220419_200641]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650370001                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_200641                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_200641]End ----------------------\n\n\n------------------- [ test][220419_200658]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650370018                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_200658                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_200658]End ----------------------\n\n\n------------------- [ test][220419_200717]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650370037                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_200717                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_200717]End ----------------------\n\n\n------------------- [ test][220419_200740]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650370060                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_200740                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_200740]End ----------------------\n\n\n------------------- [ test][220419_200807]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650370087                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_200807                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_200807]End ----------------------\n\n\n------------------- [ test][220419_213236]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650375156                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_213236                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_213236]End ----------------------\n\n\n------------------- [ test][220419_213329]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: checkpoints/                  \t[default: ./ckpts]\n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 30                            \t[default: 0]\n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650375209                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_213329                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False 
                        \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_30       \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_213329]End ----------------------\n\n\n"
  },
  {
    "path": "third_part/ganimation_replicate/checkpoints/run_script.sh",
    "content": "[ test][220417_224012]python main.py --mode test --data_root datasets/celebA --ckpt_dir checkpoints --load_epoch 30\n[ test][220419_184832]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_185232]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_185252]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_185305]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_185320]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_185810]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_190338]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_190445]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_190628]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_195037]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_200348]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_200512]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_200529]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_200554]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_200622]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_200641]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_200658]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_200717]python test.py --data_root . 
--mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_200740]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_200807]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_213236]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n[ test][220419_213329]python test.py --data_root . --mode test --load_epoch 30 --ckpt_dir checkpoints/\n"
  },
  {
    "path": "third_part/ganimation_replicate/ckpts/ganimation/220419_183211/opt.txt",
    "content": "------------------- [train][220419_183211]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: ./ckpts/./ganimation/220419_183211\t[default: ./ckpts]\n                data_root: .                             \t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 0                             \n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650364331                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: train                         \n                    model: ganimation                    \n              
  n_threads: 6                             \n                     name: 220419_183211                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False                         \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results                       \n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 1                             \n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [train][220419_183211]End ----------------------\n\n\n"
  },
  {
    "path": "third_part/ganimation_replicate/ckpts/ganimation/220419_183211/run_script.sh",
    "content": "[train][220419_183211]python test.py --data_root .\n"
  },
  {
    "path": "third_part/ganimation_replicate/ckpts/ganimation/220419_183229/opt.txt",
    "content": "------------------- [train][220419_183229]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: ./ckpts/./ganimation/220419_183229\t[default: ./ckpts]\n                data_root: .                             \t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 0                             \n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650364349                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: train                         \n                    model: ganimation                    \n              
  n_threads: 6                             \n                     name: 220419_183229                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False                         \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results                       \n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 1                             \n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [train][220419_183229]End ----------------------\n\n\n"
  },
  {
    "path": "third_part/ganimation_replicate/ckpts/ganimation/220419_183229/run_script.sh",
    "content": "[train][220419_183229]python test.py --data_root .\n"
  },
  {
    "path": "third_part/ganimation_replicate/ckpts/opt.txt",
    "content": "------------------- [ test][220419_183311]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: ./ckpts                       \n                data_root: .                             \t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 0                             \n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650364391                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                
                n_threads: 6
                     name: 220419_183311
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
             no_test_eval: False
                     norm: instance
                 opt_file: opt.txt
         plot_losses_freq: 20000
        print_losses_freq: 100
           resize_or_crop: none
                  results: results/._ganimation_0	[default: results]
          sample_img_freq: 2000
          save_epoch_freq: 2
            save_test_gif: False
           serial_batches: False
                 test_csv: test_ids.csv
                train_csv: train_ids.csv
           train_gen_iter: 5
              use_dropout: False
        visdom_display_id: 0	[default: 1]
               visdom_env: main
              visdom_port: 8097
--------------------- [ test][220419_183311]End ----------------------


------------------- [ test][220419_183356]Options --------------------
                   aus_nc: 17
                  aus_pkl: aus_openface.pkl
               batch_size: 25
                    beta1: 0.5
                 ckpt_dir: ./ckpts
                data_root: .	[default: None]
              epoch_count: 1
               final_size: 128
                 gan_type: wgan-gp
                  gpu_ids: [0]	[default: 0]
                   img_nc: 3
                 imgs_dir: imgs
                init_gain: 0.02
                init_type: normal
          interpolate_len: 5
               lambda_aus: 160.0
               lambda_dis: 1.0
              lambda_mask: 0
               lambda_rec: 10.0
                lambda_tv: 0
           lambda_wgan_gp: 10.0
               load_epoch: 0
                load_size: 148
                 log_file: logs.txt
                       lr: 0.0001
           lr_decay_iters: 50
                lr_policy: lambda
               lucky_seed: 1650364436	[default: 0]
         max_dataset_size: inf
                     mode: test	[default: train]
                    model: ganimation
                n_threads: 6
                     name: 220419_183356
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
             no_test_eval: False
                     norm: instance
                 opt_file: opt.txt
         plot_losses_freq: 20000
        print_losses_freq: 100
           resize_or_crop: none
                  results: results/._ganimation_0	[default: results]
          sample_img_freq: 2000
          save_epoch_freq: 2
            save_test_gif: False
           serial_batches: False
                 test_csv: test_ids.csv
                train_csv: train_ids.csv
           train_gen_iter: 5
              use_dropout: False
        visdom_display_id: 0	[default: 1]
               visdom_env: main
              visdom_port: 8097
--------------------- [ test][220419_183356]End ----------------------


------------------- [ test][220419_183456]Options --------------------
                   aus_nc: 17
                  aus_pkl: aus_openface.pkl
               batch_size: 25
                    beta1: 0.5
                 ckpt_dir: ./ckpts
                data_root: .	[default: None]
              epoch_count: 1
               final_size: 128
                 gan_type: wgan-gp
                  gpu_ids: [0]	[default: 0]
                   img_nc: 3
                 imgs_dir: imgs
                init_gain: 0.02
                init_type: normal
          interpolate_len: 5
               lambda_aus: 160.0
               lambda_dis: 1.0
              lambda_mask: 0
               lambda_rec: 10.0
                lambda_tv: 0
           lambda_wgan_gp: 10.0
               load_epoch: 0
                load_size: 148
                 log_file: logs.txt
                       lr: 0.0001
           lr_decay_iters: 50
                lr_policy: lambda
               lucky_seed: 1650364496	[default: 0]
         max_dataset_size: inf
                     mode: test	[default: train]
                    model: ganimation
                n_threads: 6
                     name: 220419_183456
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
             no_test_eval: False
                     norm: instance
                 opt_file: opt.txt
         plot_losses_freq: 20000
        print_losses_freq: 100
           resize_or_crop: none
                  results: results/._ganimation_0	[default: results]
          sample_img_freq: 2000
          save_epoch_freq: 2
            save_test_gif: False
           serial_batches: False
                 test_csv: test_ids.csv
                train_csv: train_ids.csv
           train_gen_iter: 5
              use_dropout: False
        visdom_display_id: 0	[default: 1]
               visdom_env: main
              visdom_port: 8097
--------------------- [ test][220419_183456]End ----------------------


------------------- [ test][220419_183528]Options --------------------
                   aus_nc: 17
                  aus_pkl: aus_openface.pkl
               batch_size: 25
                    beta1: 0.5
                 ckpt_dir: ./ckpts
                data_root: .	[default: None]
              epoch_count: 1
               final_size: 128
                 gan_type: wgan-gp
                  gpu_ids: [0]	[default: 0]
                   img_nc: 3
                 imgs_dir: imgs
                init_gain: 0.02
                init_type: normal
          interpolate_len: 5
               lambda_aus: 160.0
               lambda_dis: 1.0
              lambda_mask: 0
               lambda_rec: 10.0
                lambda_tv: 0
           lambda_wgan_gp: 10.0
               load_epoch: 0
                load_size: 148
                 log_file: logs.txt
                       lr: 0.0001
           lr_decay_iters: 50
                lr_policy: lambda
               lucky_seed: 1650364528	[default: 0]
         max_dataset_size: inf
                     mode: test	[default: train]
                    model: ganimation
                n_threads: 6
                     name: 220419_183528
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
             no_test_eval: False
                     norm: instance
                 opt_file: opt.txt
         plot_losses_freq: 20000
        print_losses_freq: 100
           resize_or_crop: none
                  results: results/._ganimation_0	[default: results]
          sample_img_freq: 2000
          save_epoch_freq: 2
            save_test_gif: False
           serial_batches: False
                 test_csv: test_ids.csv
                train_csv: train_ids.csv
           train_gen_iter: 5
              use_dropout: False
        visdom_display_id: 0	[default: 1]
               visdom_env: main
              visdom_port: 8097
--------------------- [ test][220419_183528]End ----------------------


------------------- [ test][220419_183711]Options --------------------
                   aus_nc: 17
                  aus_pkl: aus_openface.pkl
               batch_size: 25
                    beta1: 0.5
                 ckpt_dir: ./ckpts
                data_root: .	[default: None]
              epoch_count: 1
               final_size: 128
                 gan_type: wgan-gp
                  gpu_ids: [0]	[default: 0]
                   img_nc: 3
                 imgs_dir: imgs
                init_gain: 0.02
                init_type: normal
          interpolate_len: 5
               lambda_aus: 160.0
               lambda_dis: 1.0
              lambda_mask: 0
               lambda_rec: 10.0
                lambda_tv: 0
           lambda_wgan_gp: 10.0
               load_epoch: 0
                load_size: 148
                 log_file: logs.txt
                       lr: 0.0001
           lr_decay_iters: 50
                lr_policy: lambda
               lucky_seed: 1650364631	[default: 0]
         max_dataset_size: inf
                     mode: test	[default: train]
                    model: ganimation
                n_threads: 6
                     name: 220419_183711
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
             no_test_eval: False
                     norm: instance
                 opt_file: opt.txt
         plot_losses_freq: 20000
        print_losses_freq: 100
           resize_or_crop: none
                  results: results/._ganimation_0	[default: results]
          sample_img_freq: 2000
          save_epoch_freq: 2
            save_test_gif: False
           serial_batches: False
                 test_csv: test_ids.csv
                train_csv: train_ids.csv
           train_gen_iter: 5
              use_dropout: False
        visdom_display_id: 0	[default: 1]
               visdom_env: main
              visdom_port: 8097
--------------------- [ test][220419_183711]End ----------------------


------------------- [ test][220419_183837]Options --------------------
                   aus_nc: 17
                  aus_pkl: aus_openface.pkl
               batch_size: 25
                    beta1: 0.5
                 ckpt_dir: ./ckpts
                data_root: .	[default: None]
              epoch_count: 1
               final_size: 128
                 gan_type: wgan-gp
                  gpu_ids: [0]	[default: 0]
                   img_nc: 3
                 imgs_dir: imgs
                init_gain: 0.02
                init_type: normal
          interpolate_len: 5
               lambda_aus: 160.0
               lambda_dis: 1.0
              lambda_mask: 0
               lambda_rec: 10.0
                lambda_tv: 0
           lambda_wgan_gp: 10.0
               load_epoch: 0
                load_size: 148
                 log_file: logs.txt
                       lr: 0.0001
           lr_decay_iters: 50
                lr_policy: lambda
               lucky_seed: 1650364717	[default: 0]
         max_dataset_size: inf
                     mode: test	[default: train]
                    model: ganimation
                n_threads: 6
                     name: 220419_183837
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
             no_test_eval: False
                     norm: instance
                 opt_file: opt.txt
         plot_losses_freq: 20000
        print_losses_freq: 100
           resize_or_crop: none
                  results: results/._ganimation_0	[default: results]
          sample_img_freq: 2000
          save_epoch_freq: 2
            save_test_gif: False
           serial_batches: False
                 test_csv: test_ids.csv
                train_csv: train_ids.csv
           train_gen_iter: 5
              use_dropout: False
        visdom_display_id: 0	[default: 1]
               visdom_env: main
              visdom_port: 8097
--------------------- [ test][220419_183837]End ----------------------


------------------- [ test][220419_184333]Options --------------------
                   aus_nc: 17
                  aus_pkl: aus_openface.pkl
               batch_size: 25
                    beta1: 0.5
                 ckpt_dir: ./ckpts
                data_root: .	[default: None]
              epoch_count: 1
               final_size: 128
                 gan_type: wgan-gp
                  gpu_ids: [0]	[default: 0]
                   img_nc: 3
                 imgs_dir: imgs
                init_gain: 0.02
                init_type: normal
          interpolate_len: 5
               lambda_aus: 160.0
               lambda_dis: 1.0
              lambda_mask: 0
               lambda_rec: 10.0
                lambda_tv: 0
           lambda_wgan_gp: 10.0
               load_epoch: 0
                load_size: 148
                 log_file: logs.txt
                       lr: 0.0001
           lr_decay_iters: 50
                lr_policy: lambda
               lucky_seed: 1650365013	[default: 0]
         max_dataset_size: inf
                     mode: test	[default: train]
                    model: ganimation
                n_threads: 6
                     name: 220419_184333
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
             no_test_eval: False
                     norm: instance
                 opt_file: opt.txt
         plot_losses_freq: 20000
        print_losses_freq: 100
           resize_or_crop: none
                  results: results/._ganimation_0	[default: results]
          sample_img_freq: 2000
          save_epoch_freq: 2
            save_test_gif: False
           serial_batches: False
                 test_csv: test_ids.csv
                train_csv: train_ids.csv
           train_gen_iter: 5
              use_dropout: False
        visdom_display_id: 0	[default: 1]
               visdom_env: main
              visdom_port: 8097
--------------------- [ test][220419_184333]End ----------------------


------------------- [ test][220419_184442]Options --------------------
                   aus_nc: 17
                  aus_pkl: aus_openface.pkl
               batch_size: 25
                    beta1: 0.5
                 ckpt_dir: ./ckpts
                data_root: .	[default: None]
              epoch_count: 1
               final_size: 128
                 gan_type: wgan-gp
                  gpu_ids: [0]	[default: 0]
                   img_nc: 3
                 imgs_dir: imgs
                init_gain: 0.02
                init_type: normal
          interpolate_len: 5
               lambda_aus: 160.0
               lambda_dis: 1.0
              lambda_mask: 0
               lambda_rec: 10.0
                lambda_tv: 0
           lambda_wgan_gp: 10.0
               load_epoch: 0
                load_size: 148
                 log_file: logs.txt
                       lr: 0.0001
           lr_decay_iters: 50
                lr_policy: lambda
               lucky_seed: 1650365082	[default: 0]
         max_dataset_size: inf
                     mode: test	[default: train]
                    model: ganimation
                n_threads: 6
                     name: 220419_184442
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
             no_test_eval: False
                     norm: instance
                 opt_file: opt.txt
         plot_losses_freq: 20000
        print_losses_freq: 100
           resize_or_crop: none
                  results: results/._ganimation_0	[default: results]
          sample_img_freq: 2000
          save_epoch_freq: 2
            save_test_gif: False
           serial_batches: False
                 test_csv: test_ids.csv
                train_csv: train_ids.csv
           train_gen_iter: 5
              use_dropout: False
        visdom_display_id: 0	[default: 1]
               visdom_env: main
              visdom_port: 8097
--------------------- [ test][220419_184442]End ----------------------


------------------- [ test][220419_184500]Options --------------------
                   aus_nc: 17
                  aus_pkl: aus_openface.pkl
               batch_size: 25
                    beta1: 0.5
                 ckpt_dir: ./ckpts
                data_root: .	[default: None]
              epoch_count: 1
               final_size: 128
                 gan_type: wgan-gp
                  gpu_ids: [0]	[default: 0]
                   img_nc: 3
                 imgs_dir: imgs
                init_gain: 0.02
                init_type: normal
          interpolate_len: 5
               lambda_aus: 160.0
               lambda_dis: 1.0
              lambda_mask: 0
               lambda_rec: 10.0
                lambda_tv: 0
           lambda_wgan_gp: 10.0
               load_epoch: 0
                load_size: 148
                 log_file: logs.txt
                       lr: 0.0001
           lr_decay_iters: 50
                lr_policy: lambda
               lucky_seed: 1650365100	[default: 0]
         max_dataset_size: inf
                     mode: test	[default: train]
                    model: ganimation
                n_threads: 6
                     name: 220419_184500
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
             no_test_eval: False
                     norm: instance
                 opt_file: opt.txt
         plot_losses_freq: 20000
        print_losses_freq: 100
           resize_or_crop: none
                  results: results/._ganimation_0	[default: results]
          sample_img_freq: 2000
          save_epoch_freq: 2
            save_test_gif: False
           serial_batches: False
                 test_csv: test_ids.csv
                train_csv: train_ids.csv
           train_gen_iter: 5
              use_dropout: False
        visdom_display_id: 0	[default: 1]
               visdom_env: main
              visdom_port: 8097
--------------------- [ test][220419_184500]End ----------------------


------------------- [ test][220419_184533]Options --------------------
                   aus_nc: 17
                  aus_pkl: aus_openface.pkl
               batch_size: 25
                    beta1: 0.5
                 ckpt_dir: ./ckpts
                data_root: .	[default: None]
              epoch_count: 1
               final_size: 128
                 gan_type: wgan-gp
                  gpu_ids: [0]	[default: 0]
                   img_nc: 3
                 imgs_dir: imgs
                init_gain: 0.02
                init_type: normal
          interpolate_len: 5
               lambda_aus: 160.0
               lambda_dis: 1.0
              lambda_mask: 0
               lambda_rec: 10.0
                lambda_tv: 0
           lambda_wgan_gp: 10.0
               load_epoch: 0
                load_size: 148
                 log_file: logs.txt
                       lr: 0.0001
           lr_decay_iters: 50
                lr_policy: lambda
               lucky_seed: 1650365133	[default: 0]
         max_dataset_size: inf
                     mode: test	[default: train]
                    model: ganimation
                n_threads: 6
                     name: 220419_184533
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
             no_test_eval: False
                     norm: instance
                 opt_file: opt.txt
         plot_losses_freq: 20000
        print_losses_freq: 100
           resize_or_crop: none
                  results: results/._ganimation_0	[default: results]
          sample_img_freq: 2000
          save_epoch_freq: 2
            save_test_gif: False
           serial_batches: False
                 test_csv: test_ids.csv
                train_csv: train_ids.csv
           train_gen_iter: 5
              use_dropout: False
        visdom_display_id: 0	[default: 1]
               visdom_env: main
              visdom_port: 8097
--------------------- [ test][220419_184533]End ----------------------


------------------- [ test][220419_184603]Options --------------------
                   aus_nc: 17
                  aus_pkl: aus_openface.pkl
               batch_size: 25
                    beta1: 0.5
                 ckpt_dir: ./ckpts
                data_root: .	[default: None]
              epoch_count: 1
               final_size: 128
                 gan_type: wgan-gp
                  gpu_ids: [0]	[default: 0]
                   img_nc: 3
                 imgs_dir: imgs
                init_gain: 0.02
                init_type: normal
          interpolate_len: 5
               lambda_aus: 160.0
               lambda_dis: 1.0
              lambda_mask: 0
               lambda_rec: 10.0
                lambda_tv: 0
           lambda_wgan_gp: 10.0
               load_epoch: 0
                load_size: 148
                 log_file: logs.txt
                       lr: 0.0001
           lr_decay_iters: 50
                lr_policy: lambda
               lucky_seed: 1650365163	[default: 0]
         max_dataset_size: inf
                     mode: test	[default: train]
                    model: ganimation
                n_threads: 6
                     name: 220419_184603
                      ndf: 64
                      ngf: 64
                    niter: 20
              niter_decay: 10
             no_aus_noise: False
                  no_flip: False
          \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_0        \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_184603]End ----------------------\n\n\n------------------- [ test][220419_184714]Options --------------------\n                   aus_nc: 17                            \n                  aus_pkl: aus_openface.pkl              \n               batch_size: 25                            \n                    beta1: 0.5                           \n                 ckpt_dir: ./ckpts                       \n                data_root: .                             
\t[default: None]\n              epoch_count: 1                             \n               final_size: 128                           \n                 gan_type: wgan-gp                       \n                  gpu_ids: [0]                           \t[default: 0]\n                   img_nc: 3                             \n                 imgs_dir: imgs                          \n                init_gain: 0.02                          \n                init_type: normal                        \n          interpolate_len: 5                             \n               lambda_aus: 160.0                         \n               lambda_dis: 1.0                           \n              lambda_mask: 0                             \n               lambda_rec: 10.0                          \n                lambda_tv: 0                             \n           lambda_wgan_gp: 10.0                          \n               load_epoch: 0                             \n                load_size: 148                           \n                 log_file: logs.txt                      \n                       lr: 0.0001                        \n           lr_decay_iters: 50                            \n                lr_policy: lambda                        \n               lucky_seed: 1650365234                    \t[default: 0]\n         max_dataset_size: inf                           \n                     mode: test                          \t[default: train]\n                    model: ganimation                    \n                n_threads: 6                             \n                     name: 220419_184714                 \n                      ndf: 64                            \n                      ngf: 64                            \n                    niter: 20                            \n              niter_decay: 10                            \n             no_aus_noise: False                         \n                  no_flip: False               
          \n             no_test_eval: False                         \n                     norm: instance                      \n                 opt_file: opt.txt                       \n         plot_losses_freq: 20000                         \n        print_losses_freq: 100                           \n           resize_or_crop: none                          \n                  results: results/._ganimation_0        \t[default: results]\n          sample_img_freq: 2000                          \n          save_epoch_freq: 2                             \n            save_test_gif: False                         \n           serial_batches: False                         \n                 test_csv: test_ids.csv                  \n                train_csv: train_ids.csv                 \n           train_gen_iter: 5                             \n              use_dropout: False                         \n        visdom_display_id: 0                             \t[default: 1]\n               visdom_env: main                          \n              visdom_port: 8097                          \n--------------------- [ test][220419_184714]End ----------------------\n\n\n"
  },
  {
    "path": "third_part/ganimation_replicate/ckpts/run_script.sh",
    "content": "[ test][220419_183311]python test.py --data_root . --mode test\n[ test][220419_183356]python test.py --data_root . --mode test\n[ test][220419_183456]python test.py --data_root . --mode test\n[ test][220419_183528]python test.py --data_root . --mode test\n[ test][220419_183711]python test.py --data_root . --mode test\n[ test][220419_183837]python test.py --data_root . --mode test\n[ test][220419_184333]python test.py --data_root . --mode test\n[ test][220419_184442]python test.py --data_root . --mode test\n[ test][220419_184500]python test.py --data_root . --mode test\n[ test][220419_184533]python test.py --data_root . --mode test\n[ test][220419_184603]python test.py --data_root . --mode test\n[ test][220419_184714]python test.py --data_root . --mode test\n"
  },
  {
    "path": "third_part/ganimation_replicate/data/__init__.py",
    "content": "from .data_loader import create_dataloader"
  },
  {
    "path": "third_part/ganimation_replicate/data/base_dataset.py",
    "content": "import torch\nimport os\nfrom PIL import Image\nimport random\nimport numpy as np\nimport pickle\nimport torchvision.transforms as transforms\n\n\n\nclass BaseDataset(torch.utils.data.Dataset):\n    \"\"\"docstring for BaseDataset\"\"\"\n    def __init__(self):\n        super(BaseDataset, self).__init__()\n\n    def name(self):\n        return os.path.basename(self.opt.data_root.strip('/'))\n\n    def initialize(self, opt):\n        self.opt = opt\n        self.imgs_dir = os.path.join(self.opt.data_root, self.opt.imgs_dir)\n        self.is_train = self.opt.mode == \"train\"\n\n        # load images path \n        filename = self.opt.train_csv if self.is_train else self.opt.test_csv\n        self.imgs_name_file = os.path.join(self.opt.data_root, filename)\n        self.imgs_path = self.make_dataset()\n\n        # load AUs dicitionary \n        aus_pkl = os.path.join(self.opt.data_root, self.opt.aus_pkl)\n        self.aus_dict = self.load_dict(aus_pkl)\n\n        # load image to tensor transformer\n        self.img2tensor = self.img_transformer()\n\n    def make_dataset(self):\n        return None\n\n    def load_dict(self, pkl_path):\n        saved_dict = {}\n        with open(pkl_path, 'rb') as f:\n            saved_dict = pickle.load(f, encoding='latin1')\n        return saved_dict\n\n    def get_img_by_path(self, img_path):\n        assert os.path.isfile(img_path), \"Cannot find image file: %s\" % img_path\n        img_type = 'L' if self.opt.img_nc == 1 else 'RGB'\n        return Image.open(img_path).convert(img_type)\n\n    def get_aus_by_path(self, img_path):\n        return None\n\n    def img_transformer(self):\n        transform_list = []\n        if self.opt.resize_or_crop == 'resize_and_crop':\n            transform_list.append(transforms.Resize([self.opt.load_size, self.opt.load_size], Image.BICUBIC))\n            transform_list.append(transforms.RandomCrop(self.opt.final_size))\n        elif self.opt.resize_or_crop == 'crop':\n            
transform_list.append(transforms.RandomCrop(self.opt.final_size))\n        elif self.opt.resize_or_crop == 'none':\n            transform_list.append(transforms.Lambda(lambda image: image))\n        else:\n            raise ValueError(\"--resize_or_crop %s is not a valid option.\" % self.opt.resize_or_crop)\n\n        if self.is_train and not self.opt.no_flip:\n            transform_list.append(transforms.RandomHorizontalFlip())\n\n        transform_list.append(transforms.ToTensor())\n        transform_list.append(transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)))\n\n        img2tensor = transforms.Compose(transform_list)\n\n        return img2tensor\n\n    def __len__(self):\n        return len(self.imgs_path)\n"
  },
  {
    "path": "third_part/ganimation_replicate/data/celeba.py",
    "content": "from .base_dataset import BaseDataset\nimport os\nimport random\nimport numpy as np\n\n\nclass CelebADataset(BaseDataset):\n    \"\"\"docstring for CelebADataset\"\"\"\n    def __init__(self):\n        super(CelebADataset, self).__init__()\n        \n    def initialize(self, opt):\n        super(CelebADataset, self).initialize(opt)\n\n    def get_aus_by_path(self, img_path):\n        assert os.path.isfile(img_path), \"Cannot find image file: %s\" % img_path\n        img_id = str(os.path.splitext(os.path.basename(img_path))[0])\n        return self.aus_dict[img_id] / 5.0   # norm to [0, 1]\n\n    def make_dataset(self):\n        # return all image full path in a list\n        imgs_path = []\n        assert os.path.isfile(self.imgs_name_file), \"%s does not exist.\" % self.imgs_name_file\n        with open(self.imgs_name_file, 'r') as f:\n            lines = f.readlines()\n            imgs_path = [os.path.join(self.imgs_dir, line.strip()) for line in lines]\n            imgs_path = sorted(imgs_path)\n        return imgs_path\n\n    def __getitem__(self, index):\n        img_path = self.imgs_path[index]\n\n        # load source image\n        src_img = self.get_img_by_path(img_path)\n        src_img_tensor = self.img2tensor(src_img)\n        src_aus = self.get_aus_by_path(img_path)\n\n        # load target image\n        tar_img_path = random.choice(self.imgs_path)\n        tar_img = self.get_img_by_path(tar_img_path)\n        tar_img_tensor = self.img2tensor(tar_img)\n        tar_aus = self.get_aus_by_path(tar_img_path)\n        if self.is_train and not self.opt.no_aus_noise:\n            tar_aus = tar_aus + np.random.uniform(-0.1, 0.1, tar_aus.shape)\n\n        # record paths for debug and test usage\n        data_dict = {'src_img':src_img_tensor, 'src_aus':src_aus, 'tar_img':tar_img_tensor, 'tar_aus':tar_aus, \\\n                        'src_path':img_path, 'tar_path':tar_img_path}\n\n        return data_dict\n"
  },
  {
    "path": "third_part/ganimation_replicate/data/data_loader.py",
    "content": "import torch\nimport os\n\nfrom .base_dataset import BaseDataset\nfrom .celeba import CelebADataset\n\n\ndef create_dataloader(opt):\n    data_loader = DataLoader()\n    data_loader.initialize(opt)\n    return data_loader\n\n\nclass DataLoader:\n    def name(self):\n        return self.dataset.name() + \"_Loader\"\n\n    def create_dataset(self):\n        # specify which dataset to load here\n        loaded_dataset = os.path.basename(self.opt.data_root.strip('/')).lower()\n        if 'celeba' in loaded_dataset or 'emotion' in loaded_dataset:\n            dataset = CelebADataset()\n        else:\n            dataset = BaseDataset()\n        dataset.initialize(self.opt)\n        return dataset\n\n    def initialize(self, opt):\n        self.opt = opt\n        self.dataset = self.create_dataset()\n        self.dataloader = torch.utils.data.DataLoader(\n            self.dataset,\n            batch_size=opt.batch_size,\n            shuffle=not opt.serial_batches,\n            num_workers=int(opt.n_threads)\n        )\n\n    def __len__(self):\n        return min(len(self.dataset), self.opt.max_dataset_size)\n\n    def __iter__(self):\n        for i, data in enumerate(self.dataloader):\n            if i * self.opt.batch_size >= self.opt.max_dataset_size:\n                break\n            yield data\n"
  },
  {
    "path": "third_part/ganimation_replicate/main.py",
    "content": "\"\"\"\nCreated on Dec 13, 2018\n@author: Yuedong Chen\n\"\"\"\n\nfrom options import Options\nfrom solvers import create_solver\n\n\n\n\nif __name__ == '__main__':\n    opt = Options().parse()\n\n    solver = create_solver(opt)\n    solver.run_solver()\n\n    print('[THE END]')"
  },
  {
    "path": "third_part/ganimation_replicate/model/__init__.py",
    "content": "from .base_model import BaseModel\nfrom .ganimation import GANimationModel\nfrom .stargan import StarGANModel\n\n\n\ndef create_model(opt):\n    # specify model name here\n    if opt.model == \"ganimation\":\n        instance = GANimationModel()\n    elif opt.model == \"stargan\":\n        instance = StarGANModel()\n    else:\n        instance = BaseModel()\n    instance.initialize(opt)\n    instance.setup()\n    return instance\n\n"
  },
  {
    "path": "third_part/ganimation_replicate/model/base_model.py",
    "content": "import torch\nimport os\nfrom collections import OrderedDict\nimport random\nfrom . import model_utils\n\n\nclass BaseModel:\n    \"\"\"docstring for BaseModel\"\"\"\n    def __init__(self):\n        super(BaseModel, self).__init__()\n        self.name = \"Base\"\n\n    def initialize(self, opt):\n        self.opt = opt\n        self.gpu_ids = self.opt.gpu_ids\n        self.device = torch.device('cuda:%d' % self.gpu_ids[0] if self.gpu_ids else 'cpu')\n        self.is_train = self.opt.mode == \"train\"\n        # inherit to define network model \n        self.models_name = []\n        \n    def setup(self):\n        # print(\"%s with Model [%s]\" % (self.opt.mode.capitalize(), self.name))\n        if self.is_train:\n            self.set_train()\n            # define loss function\n            self.criterionGAN = model_utils.GANLoss(gan_type=self.opt.gan_type).to(self.device)\n            self.criterionL1 = torch.nn.L1Loss().to(self.device)\n            self.criterionMSE = torch.nn.MSELoss().to(self.device)\n            self.criterionTV = model_utils.TVLoss().to(self.device)\n            torch.nn.DataParallel(self.criterionGAN, self.gpu_ids)\n            torch.nn.DataParallel(self.criterionL1, self.gpu_ids)\n            torch.nn.DataParallel(self.criterionMSE, self.gpu_ids)\n            torch.nn.DataParallel(self.criterionTV, self.gpu_ids)\n            # inherit to set up train/val/test status\n            self.losses_name = []\n            self.optims = []\n            self.schedulers = []\n        else:\n            self.set_eval()\n\n    def set_eval(self):\n        print(\"Set model to Test state.\")\n        for name in self.models_name:\n            if isinstance(name, str):\n                net = getattr(self, 'net_' + name)\n                net.eval()\n                print(\"Set net_%s to EVAL.\" % name)\n        self.is_train = False\n\n    def 
set_train(self):\n        print(\"Set model to Train state.\")\n        for name in self.models_name:\n            if isinstance(name, str):\n                net = getattr(self, 'net_' + name)\n                net.train()\n                print(\"Set net_%s to TRAIN.\" % name)\n        self.is_train = True\n\n    def set_requires_grad(self, parameters, requires_grad=False):\n        if not isinstance(parameters, list):\n            parameters = [parameters]\n        for param in parameters:\n            if param is not None:\n                param.requires_grad = requires_grad\n\n    def get_latest_visuals(self, visuals_name):\n        visual_ret = OrderedDict()\n        for name in visuals_name:\n            if isinstance(name, str) and hasattr(self, name):\n                visual_ret[name] = getattr(self, name)\n        return visual_ret\n\n    def get_latest_losses(self, losses_name):\n        errors_ret = OrderedDict()\n        for name in losses_name:\n            if isinstance(name, str):\n                cur_loss = float(getattr(self, 'loss_' + name))\n                # cur_loss_lambda = 1. 
if len(losses_name) == 1 else float(getattr(self.opt, 'lambda_' + name))\n                # errors_ret[name] = cur_loss * cur_loss_lambda\n                errors_ret[name] = cur_loss\n        return errors_ret\n\n    def feed_batch(self, batch):\n        pass \n\n    def forward(self):\n        pass\n\n    def optimize_paras(self):\n        pass\n\n    def update_learning_rate(self):\n        for scheduler in self.schedulers:\n            scheduler.step()\n        lr = self.optims[0].param_groups[0]['lr']\n        return lr\n\n    def save_ckpt(self, epoch, models_name):\n        for name in models_name:\n            if isinstance(name, str):\n                save_filename = '%s_net_%s.pth' % (epoch, name)\n                save_path = os.path.join(self.opt.ckpt_dir, save_filename)\n                net = getattr(self, 'net_' + name)\n                # save cpu params, so that it can be used in other GPU settings\n                if len(self.gpu_ids) > 0 and torch.cuda.is_available():\n                    torch.save(net.module.cpu().state_dict(), save_path)\n                    net.to(self.gpu_ids[0])\n                    net = torch.nn.DataParallel(net, self.gpu_ids)\n                else:\n                    torch.save(net.cpu().state_dict(), save_path)\n\n    def load_ckpt(self, epoch, models_name):\n        # print(models_name)\n        for name in models_name:\n            if isinstance(name, str):\n                load_filename = '%s_net_%s.pth' % (epoch, name)\n                # load_path = os.path.join(self.opt.ckpt_dir, load_filename)\n                # assert os.path.isfile(load_path), \"File '%s' does not exist.\" % load_path\n                \n                # pretrained_state_dict = torch.load(load_path, map_location=str(self.device))\n                pretrained_state_dict = torch.load('checkpoints/30_net_gen.pth', map_location=str('cuda:0'))\n                if hasattr(pretrained_state_dict, '_metadata'):\n                    del 
pretrained_state_dict._metadata\n\n                net = getattr(self, 'net_' + name)\n                if isinstance(net, torch.nn.DataParallel):\n                    net = net.module\n                # load only existing keys\n                pretrained_dict = {k: v for k, v in pretrained_state_dict.items() if k in net.state_dict()}\n                # for k, v in pretrained_state_dict.items():\n                #     print(k)\n                # assert False\n                net.load_state_dict(pretrained_dict)\n                print(\"[Info] Successfully load trained weights for net_%s.\" % name)\n\n    def clean_ckpt(self, epoch, models_name):\n        for name in models_name:\n            if isinstance(name, str):\n                load_filename = '%s_net_%s.pth' % (epoch, name)\n                load_path = os.path.join(self.opt.ckpt_dir, load_filename)\n                if os.path.isfile(load_path):\n                    os.remove(load_path)\n\n    def gradient_penalty(self, input_img, generate_img):\n        # interpolate sample\n        alpha = torch.rand(input_img.size(0), 1, 1, 1).to(self.device)\n        inter_img = (alpha * input_img.data + (1 - alpha) * generate_img.data).requires_grad_(True)\n        inter_img_prob, _ = self.net_dis(inter_img)\n\n        # computer gradient penalty: x: inter_img, y: inter_img_prob\n        # (L2_norm(dy/dx) - 1)**2\n        dydx = torch.autograd.grad(outputs=inter_img_prob,\n                                   inputs=inter_img,\n                                   grad_outputs=torch.ones(inter_img_prob.size()).to(self.device),\n                                   retain_graph=True,\n                                   create_graph=True,\n                                   only_inputs=True)[0]\n        dydx = dydx.view(dydx.size(0), -1)\n        dydx_l2norm = torch.sqrt(torch.sum(dydx ** 2, dim=1))\n        return torch.mean((dydx_l2norm - 1) ** 2) \n\n\n\n"
  },
  {
    "path": "third_part/ganimation_replicate/model/ganimation.py",
    "content": "import torch\nfrom .base_model import BaseModel\nfrom . import model_utils\n\n\nclass GANimationModel(BaseModel):\n    \"\"\"docstring for GANimationModel\"\"\"\n    def __init__(self):\n        super(GANimationModel, self).__init__()\n        self.name = \"GANimation\"\n\n    def initialize(self):\n        # super(GANimationModel, self).initialize(opt)\n        self.is_train = False\n        self.models_name = []\n        self.net_gen = model_utils.define_splitG(3, 17, 64, use_dropout=False, \n                    norm='instance', init_type='normal', init_gain=0.02, gpu_ids=[0])\n        self.models_name.append('gen')\n        self.device = 'cuda'\n        \n        # if self.is_train:\n        #     self.net_dis = model_utils.define_splitD(3, 17, self.opt.final_size, self.opt.ndf, \n        #             norm=self.opt.norm, init_type=self.opt.init_type, init_gain=self.opt.init_gain, gpu_ids=self.gpu_ids)\n        #     self.models_name.append('dis')\n\n        # if self.opt.load_epoch > 0:\n        self.load_ckpt('30')\n\n    def setup(self):\n        super(GANimationModel, self).setup()\n        if self.is_train:\n            # setup optimizer\n            self.optim_gen = torch.optim.Adam(self.net_gen.parameters(),\n                            lr=self.opt.lr, betas=(self.opt.beta1, 0.999))\n            self.optims.append(self.optim_gen)\n            self.optim_dis = torch.optim.Adam(self.net_dis.parameters(), \n                            lr=self.opt.lr, betas=(self.opt.beta1, 0.999))\n            self.optims.append(self.optim_dis)\n\n            # setup schedulers\n            self.schedulers = [model_utils.get_scheduler(optim, self.opt) for optim in self.optims]\n\n    def feed_batch(self, batch):\n        self.src_img = batch['src_img'].to(self.device)\n        self.tar_aus = batch['tar_aus'].type(torch.FloatTensor).to(self.device)\n        if self.is_train:\n            self.src_aus = batch['src_aus'].type(torch.FloatTensor).to(self.device)\n 
           self.tar_img = batch['tar_img'].to(self.device)\n\n    def forward(self):\n        # generate fake image\n        self.color_mask ,self.aus_mask, self.embed = self.net_gen(self.src_img, self.tar_aus)\n        self.fake_img = self.aus_mask * self.src_img + (1 - self.aus_mask) * self.color_mask\n\n        # reconstruct real image\n        if self.is_train:\n            self.rec_color_mask, self.rec_aus_mask, self.rec_embed = self.net_gen(self.fake_img, self.src_aus)\n            self.rec_real_img = self.rec_aus_mask * self.fake_img + (1 - self.rec_aus_mask) * self.rec_color_mask\n\n    def backward_dis(self):\n        # real image\n        pred_real, self.pred_real_aus = self.net_dis(self.src_img)\n        self.loss_dis_real = self.criterionGAN(pred_real, True)\n        self.loss_dis_real_aus = self.criterionMSE(self.pred_real_aus, self.src_aus)\n\n        # fake image, detach to stop backward to generator\n        pred_fake, _ = self.net_dis(self.fake_img.detach()) \n        self.loss_dis_fake = self.criterionGAN(pred_fake, False)\n\n        # combine dis loss\n        self.loss_dis =   self.opt.lambda_dis * (self.loss_dis_fake + self.loss_dis_real) \\\n                        + self.opt.lambda_aus * self.loss_dis_real_aus\n        if self.opt.gan_type == 'wgan-gp':\n            self.loss_dis_gp = self.gradient_penalty(self.src_img, self.fake_img)\n            self.loss_dis = self.loss_dis + self.opt.lambda_wgan_gp * self.loss_dis_gp\n        \n        # backward discriminator loss\n        self.loss_dis.backward()\n\n    def backward_gen(self):\n        # original to target domain, should fake the discriminator\n        pred_fake, self.pred_fake_aus = self.net_dis(self.fake_img)\n        self.loss_gen_GAN = self.criterionGAN(pred_fake, True)\n        self.loss_gen_fake_aus = self.criterionMSE(self.pred_fake_aus, self.tar_aus)\n\n        # target to original domain reconstruct, identity loss\n        self.loss_gen_rec = self.criterionL1(self.rec_real_img, 
self.src_img)\n\n        # constrain on AUs mask\n        self.loss_gen_mask_real_aus = torch.mean(self.aus_mask)\n        self.loss_gen_mask_fake_aus = torch.mean(self.rec_aus_mask)\n        self.loss_gen_smooth_real_aus = self.criterionTV(self.aus_mask)\n        self.loss_gen_smooth_fake_aus = self.criterionTV(self.rec_aus_mask)\n\n        # combine and backward G loss\n        self.loss_gen =   self.opt.lambda_dis * self.loss_gen_GAN \\\n                        + self.opt.lambda_aus * self.loss_gen_fake_aus \\\n                        + self.opt.lambda_rec * self.loss_gen_rec \\\n                        + self.opt.lambda_mask * (self.loss_gen_mask_real_aus + self.loss_gen_mask_fake_aus) \\\n                        + self.opt.lambda_tv * (self.loss_gen_smooth_real_aus + self.loss_gen_smooth_fake_aus)\n\n        self.loss_gen.backward()\n\n    def optimize_paras(self, train_gen):\n        self.forward()\n        # update discriminator\n        self.set_requires_grad(self.net_dis, True)\n        self.optim_dis.zero_grad()\n        self.backward_dis()\n        self.optim_dis.step()\n\n        # update G if needed\n        if train_gen:\n            self.set_requires_grad(self.net_dis, False)\n            self.optim_gen.zero_grad()\n            self.backward_gen()\n            self.optim_gen.step()\n\n    def save_ckpt(self, epoch):\n        # save the specific networks\n        save_models_name = ['gen', 'dis']\n        return super(GANimationModel, self).save_ckpt(epoch, save_models_name)\n\n    def load_ckpt(self, epoch):\n        # load the specific part of networks\n        load_models_name = ['gen']\n        if self.is_train:\n            load_models_name.extend(['dis'])\n        return super(GANimationModel, self).load_ckpt(epoch, load_models_name)\n\n    def clean_ckpt(self, epoch):\n        # load the specific part of networks\n        load_models_name = ['gen', 'dis']\n        return super(GANimationModel, self).clean_ckpt(epoch, load_models_name)\n\n    
def get_latest_losses(self):\n        get_losses_name = ['dis_fake', 'dis_real', 'dis_real_aus', 'gen_rec']\n        return super(GANimationModel, self).get_latest_losses(get_losses_name)\n\n    def get_latest_visuals(self):\n        visuals_name = ['src_img', 'tar_img', 'color_mask', 'aus_mask', 'fake_img']\n        if self.is_train:\n            visuals_name.extend(['rec_color_mask', 'rec_aus_mask', 'rec_real_img'])\n        return super(GANimationModel, self).get_latest_visuals(visuals_name)\n"
  },
  {
    "path": "third_part/ganimation_replicate/model/model_utils.py",
    "content": "import torch\nimport torch.nn as nn\nfrom torch.nn import init\nimport functools\nfrom torch.optim import lr_scheduler\nfrom collections import OrderedDict\n\n\n'''\nHelper functions for model\nBorrow tons of code from https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py\n'''\n\ndef get_norm_layer(norm_type='instance'):\n    \"\"\"Return a normalization layer\n    Parameters:\n        norm_type (str) -- the name of the normalization layer: batch | instance | none\n    For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev).\n    For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics.\n    \"\"\"\n    if norm_type == 'batch':\n        norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True)\n    elif norm_type == 'instance':   \n        # change default flag, make sure instance norm behave as the same in both train and eval\n        # https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/395\n        norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False)\n    elif norm_type == 'none':\n        norm_layer = None\n    else:\n        raise NotImplementedError('normalization layer [%s] is not found' % norm_type)\n    return norm_layer\n\n\ndef get_scheduler(optimizer, opt):\n    if opt.lr_policy == 'lambda':\n        def lambda_rule(epoch):\n            lr_l = 1.0 - max(0, epoch + 1 + opt.epoch_count - opt.niter) / float(opt.niter_decay + 1)\n            return lr_l\n        scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)\n    elif opt.lr_policy == 'step':\n        scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=0.1)\n    elif opt.lr_policy == 'plateau':\n        scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)\n    else:\n        raise 
NotImplementedError('learning rate policy [%s] is not implemented' % opt.lr_policy)\n    return scheduler\n\n\ndef init_weights(net, init_type='normal', gain=0.02):\n    def init_func(m):\n        classname = m.__class__.__name__\n        if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):\n            if init_type == 'normal':\n                init.normal_(m.weight.data, 0.0, gain)\n            elif init_type == 'xavier':\n                init.xavier_normal_(m.weight.data, gain=gain)\n            elif init_type == 'kaiming':\n                init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')\n            elif init_type == 'orthogonal':\n                init.orthogonal_(m.weight.data, gain=gain)\n            else:\n                raise NotImplementedError('initialization method [%s] is not implemented' % init_type)\n            if hasattr(m, 'bias') and m.bias is not None:\n                init.constant_(m.bias.data, 0.0)\n        elif classname.find('BatchNorm2d') != -1:\n            init.normal_(m.weight.data, 1.0, gain)\n            init.constant_(m.bias.data, 0.0)\n\n    print('initialize network with %s' % init_type)\n    net.apply(init_func)\n\n\ndef init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]):\n    if len(gpu_ids) > 0:\n        # print(\"gpu_ids,\", gpu_ids)\n        assert(torch.cuda.is_available())\n        net.to(gpu_ids[0])\n        net = torch.nn.DataParallel(net, gpu_ids)\n    init_weights(net, init_type, gain=init_gain)\n    return net\n\n\ndef define_G(input_nc, output_nc, ngf, which_model_netG, norm='batch', use_dropout=False, init_type='normal', init_gain=0.02, gpu_ids=[]):\n    netG = None\n    norm_layer = get_norm_layer(norm_type=norm)\n\n    if which_model_netG == 'resnet_9blocks':\n        netG = ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=9)\n    elif which_model_netG == 'resnet_6blocks':\n        netG = 
ResnetGenerator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=6)\n    elif which_model_netG == 'unet_128':\n        netG = UnetGenerator(input_nc, output_nc, 7, ngf, norm_layer=norm_layer, use_dropout=use_dropout)\n    elif which_model_netG == 'unet_256':\n        netG = UnetGenerator(input_nc, output_nc, 8, ngf, norm_layer=norm_layer, use_dropout=use_dropout)\n    else:\n        raise NotImplementedError('Generator model name [%s] is not recognized' % which_model_netG)\n    return init_net(netG, init_type, init_gain, gpu_ids)\n\n\ndef define_D(input_nc, ndf, which_model_netD,\n             n_layers_D=3, norm='batch', use_sigmoid=False, init_type='normal', init_gain=0.02, gpu_ids=[]):\n    netD = None\n    norm_layer = get_norm_layer(norm_type=norm)\n\n    if which_model_netD == 'basic':\n        netD = NLayerDiscriminator(input_nc, ndf, n_layers=3, norm_layer=norm_layer, use_sigmoid=use_sigmoid)\n    elif which_model_netD == 'n_layers':\n        netD = NLayerDiscriminator(input_nc, ndf, n_layers_D, norm_layer=norm_layer, use_sigmoid=use_sigmoid)\n    elif which_model_netD == 'pixel':\n        netD = PixelDiscriminator(input_nc, ndf, norm_layer=norm_layer, use_sigmoid=use_sigmoid)\n    else:\n        raise NotImplementedError('Discriminator model name [%s] is not recognized' %\n                                  which_model_netD)\n    return init_net(netD, init_type, init_gain, gpu_ids)\n\n\n##############################################################################\n# Classes\n##############################################################################\n\n\n# Defines the GAN loss, supporting WGAN-GP, LSGAN, and the regular (vanilla) GAN.\n# When LSGAN is used, it is basically the same as MSELoss,\n# but it abstracts away the need to create the target label tensor\n# that has the same size as the input\nclass GANLoss(nn.Module):\n    def __init__(self, gan_type='wgan-gp', target_real_label=1.0, target_fake_label=0.0):\n        super(GANLoss, 
self).__init__()\n        self.register_buffer('real_label', torch.tensor(target_real_label))\n        self.register_buffer('fake_label', torch.tensor(target_fake_label))\n        self.gan_type = gan_type\n        if self.gan_type == 'wgan-gp':\n            self.loss = lambda x, y: -torch.mean(x) if y else torch.mean(x)\n        elif self.gan_type == 'lsgan':\n            self.loss = nn.MSELoss()\n        elif self.gan_type == 'gan':\n            self.loss = nn.BCELoss()\n        else:\n            raise NotImplementedError('GAN loss type [%s] is not found' % gan_type)\n\n    def get_target_tensor(self, input, target_is_real):\n        if target_is_real:\n            target_tensor = self.real_label\n        else:\n            target_tensor = self.fake_label\n        return target_tensor.expand_as(input)\n\n    def __call__(self, input, target_is_real):\n        if self.gan_type == 'wgan-gp':\n            target_tensor = target_is_real\n        else:\n            target_tensor = self.get_target_tensor(input, target_is_real)\n        return self.loss(input, target_tensor)\n\n\n# Defines the generator that consists of Resnet blocks between a few\n# downsampling/upsampling operations.\n# Code and idea originally from Justin Johnson's architecture.\n# https://github.com/jcjohnson/fast-neural-style/\nclass ResnetGenerator(nn.Module):\n    def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect'):\n        assert(n_blocks >= 0)\n        super(ResnetGenerator, self).__init__()\n        self.input_nc = input_nc\n        self.output_nc = output_nc\n        self.ngf = ngf\n        if type(norm_layer) == functools.partial:\n            use_bias = norm_layer.func == nn.InstanceNorm2d\n        else:\n            use_bias = norm_layer == nn.InstanceNorm2d\n\n        model = [nn.ReflectionPad2d(3),\n                 nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0,\n                           
bias=use_bias),\n                 norm_layer(ngf),\n                 nn.ReLU(True)]\n\n        n_downsampling = 2\n        for i in range(n_downsampling):\n            mult = 2**i\n            model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3,\n                                stride=2, padding=1, bias=use_bias),\n                      norm_layer(ngf * mult * 2),\n                      nn.ReLU(True)]\n\n        mult = 2**n_downsampling\n        for i in range(n_blocks):\n            model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]\n\n        for i in range(n_downsampling):\n            mult = 2**(n_downsampling - i)\n            model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),\n                                         kernel_size=3, stride=2,\n                                         padding=1, output_padding=1,\n                                         bias=use_bias),\n                      norm_layer(int(ngf * mult / 2)),\n                      nn.ReLU(True)]\n        model += [nn.ReflectionPad2d(3)]\n        model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]\n        model += [nn.Tanh()]\n\n        self.model = nn.Sequential(*model)\n\n    def forward(self, input):\n        return self.model(input)\n\n\n# Define a resnet block\nclass ResnetBlock(nn.Module):\n    def __init__(self, dim, padding_type, norm_layer, use_dropout, use_bias):\n        super(ResnetBlock, self).__init__()\n        self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, use_dropout, use_bias)\n\n    def build_conv_block(self, dim, padding_type, norm_layer, use_dropout, use_bias):\n        conv_block = []\n        p = 0\n        if padding_type == 'reflect':\n            conv_block += [nn.ReflectionPad2d(1)]\n        elif padding_type == 'replicate':\n            conv_block += [nn.ReplicationPad2d(1)]\n        elif padding_type == 'zero':\n            p = 1\n    
    else:\n            raise NotImplementedError('padding [%s] is not implemented' % padding_type)\n\n        conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias),\n                       norm_layer(dim),\n                       nn.ReLU(True)]\n        if use_dropout:\n            conv_block += [nn.Dropout(0.5)]\n\n        p = 0\n        if padding_type == 'reflect':\n            conv_block += [nn.ReflectionPad2d(1)]\n        elif padding_type == 'replicate':\n            conv_block += [nn.ReplicationPad2d(1)]\n        elif padding_type == 'zero':\n            p = 1\n        else:\n            raise NotImplementedError('padding [%s] is not implemented' % padding_type)\n        conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias),\n                       norm_layer(dim)]\n\n        return nn.Sequential(*conv_block)\n\n    def forward(self, x):\n        out = x + self.conv_block(x)\n        return out\n\n\n# Defines the Unet generator.\n# |num_downs|: number of downsamplings in UNet. 
For example,\n# if |num_downs| == 7, image of size 128x128 will become of size 1x1\n# at the bottleneck\nclass UnetGenerator(nn.Module):\n    def __init__(self, input_nc, output_nc, num_downs, ngf=64,\n                 norm_layer=nn.BatchNorm2d, use_dropout=False):\n        super(UnetGenerator, self).__init__()\n\n        # construct unet structure\n        unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True)\n        for i in range(num_downs - 5):\n            unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout)\n        unet_block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer)\n        unet_block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer)\n        unet_block = UnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer)\n        unet_block = UnetSkipConnectionBlock(output_nc, ngf, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer)\n\n        self.model = unet_block\n\n    def forward(self, input):\n        return self.model(input)\n\n\n# Defines the submodule with skip connection.\n# X -------------------identity---------------------- X\n#   |-- downsampling -- |submodule| -- upsampling --|\nclass UnetSkipConnectionBlock(nn.Module):\n    def __init__(self, outer_nc, inner_nc, input_nc=None,\n                 submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False):\n        super(UnetSkipConnectionBlock, self).__init__()\n        self.outermost = outermost\n        if type(norm_layer) == functools.partial:\n            use_bias = norm_layer.func == nn.InstanceNorm2d\n        else:\n            use_bias = norm_layer == nn.InstanceNorm2d\n        if input_nc is None:\n            input_nc = 
outer_nc\n        downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4,\n                             stride=2, padding=1, bias=use_bias)\n        downrelu = nn.LeakyReLU(0.2, True)\n        downnorm = norm_layer(inner_nc)\n        uprelu = nn.ReLU(True)\n        upnorm = norm_layer(outer_nc)\n\n        if outermost:\n            upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,\n                                        kernel_size=4, stride=2,\n                                        padding=1)\n            down = [downconv]\n            up = [uprelu, upconv, nn.Tanh()]\n            model = down + [submodule] + up\n        elif innermost:\n            upconv = nn.ConvTranspose2d(inner_nc, outer_nc,\n                                        kernel_size=4, stride=2,\n                                        padding=1, bias=use_bias)\n            down = [downrelu, downconv]\n            up = [uprelu, upconv, upnorm]\n            model = down + up\n        else:\n            upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,\n                                        kernel_size=4, stride=2,\n                                        padding=1, bias=use_bias)\n            down = [downrelu, downconv, downnorm]\n            up = [uprelu, upconv, upnorm]\n\n            if use_dropout:\n                model = down + [submodule] + up + [nn.Dropout(0.5)]\n            else:\n                model = down + [submodule] + up\n\n        self.model = nn.Sequential(*model)\n\n    def forward(self, x):\n        if self.outermost:\n            return self.model(x)\n        else:\n            return torch.cat([x, self.model(x)], 1)\n\n\n# Defines the PatchGAN discriminator with the specified arguments.\nclass NLayerDiscriminator(nn.Module):\n    def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, use_sigmoid=False):\n        super(NLayerDiscriminator, self).__init__()\n        if type(norm_layer) == functools.partial:\n            use_bias = norm_layer.func == 
nn.InstanceNorm2d\n        else:\n            use_bias = norm_layer == nn.InstanceNorm2d\n\n        kw = 4\n        padw = 1\n        sequence = [\n            nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw),\n            nn.LeakyReLU(0.2, True)\n        ]\n\n        nf_mult = 1\n        nf_mult_prev = 1\n        for n in range(1, n_layers):\n            nf_mult_prev = nf_mult\n            nf_mult = min(2**n, 8)\n            sequence += [\n                nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult,\n                          kernel_size=kw, stride=2, padding=padw, bias=use_bias),\n                norm_layer(ndf * nf_mult),\n                nn.LeakyReLU(0.2, True)\n            ]\n\n        nf_mult_prev = nf_mult\n        nf_mult = min(2**n_layers, 8)\n        sequence += [\n            nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult,\n                      kernel_size=kw, stride=1, padding=padw, bias=use_bias),\n            norm_layer(ndf * nf_mult),\n            nn.LeakyReLU(0.2, True)\n        ]\n\n        sequence += [nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)]\n\n        if use_sigmoid:\n            sequence += [nn.Sigmoid()]\n\n        self.model = nn.Sequential(*sequence)\n\n    def forward(self, input):\n        return self.model(input)\n\n\nclass PixelDiscriminator(nn.Module):\n    def __init__(self, input_nc, ndf=64, norm_layer=nn.BatchNorm2d, use_sigmoid=False):\n        super(PixelDiscriminator, self).__init__()\n        if type(norm_layer) == functools.partial:\n            use_bias = norm_layer.func == nn.InstanceNorm2d\n        else:\n            use_bias = norm_layer == nn.InstanceNorm2d\n\n        self.net = [\n            nn.Conv2d(input_nc, ndf, kernel_size=1, stride=1, padding=0),\n            nn.LeakyReLU(0.2, True),\n            nn.Conv2d(ndf, ndf * 2, kernel_size=1, stride=1, padding=0, bias=use_bias),\n            norm_layer(ndf * 2),\n            nn.LeakyReLU(0.2, True),\n            nn.Conv2d(ndf * 2, 
1, kernel_size=1, stride=1, padding=0, bias=use_bias)]\n\n        if use_sigmoid:\n            self.net.append(nn.Sigmoid())\n\n        self.net = nn.Sequential(*self.net)\n\n    def forward(self, input):\n        return self.net(input)\n\n\n##############################################################################\n# Basic network model \n##############################################################################\ndef define_splitG(img_nc, aus_nc, ngf, use_dropout=False, norm='instance', init_type='normal', init_gain=0.02, gpu_ids=[]):\n    norm_layer = get_norm_layer(norm_type=norm)\n    net_img_au = SplitGenerator(img_nc, aus_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, n_blocks=6)\n    return init_net(net_img_au, init_type, init_gain, gpu_ids)\n\n\ndef define_splitD(input_nc, aus_nc, image_size, ndf, norm='instance', init_type='normal', init_gain=0.02, gpu_ids=[]):\n    norm_layer = get_norm_layer(norm_type=norm)\n    net_dis_aus = SplitDiscriminator(input_nc, aus_nc, image_size, ndf, n_layers=6, norm_layer=norm_layer)\n    return init_net(net_dis_aus, init_type, init_gain, gpu_ids)\n\n\nclass SplitGenerator(nn.Module):\n    def __init__(self, img_nc, aus_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='zero'):\n        assert(n_blocks >= 0)\n        super(SplitGenerator, self).__init__()\n        self.input_nc = img_nc + aus_nc\n        self.ngf = ngf\n        if type(norm_layer) == functools.partial:\n            use_bias = norm_layer.func == nn.InstanceNorm2d\n        else:\n            use_bias = norm_layer == nn.InstanceNorm2d\n\n        model = [nn.Conv2d(self.input_nc, ngf, kernel_size=7, stride=1, padding=3, \n                           bias=use_bias),\n                 norm_layer(ngf),\n                 nn.ReLU(True)]\n\n        n_downsampling = 2\n        for i in range(n_downsampling):\n            mult = 2**i\n            model += [nn.Conv2d(ngf * mult, ngf * mult * 2, \\\n                        
        kernel_size=4, stride=2, padding=1, \\\n                                bias=use_bias),\n                      norm_layer(ngf * mult * 2),\n                      nn.ReLU(True)]\n\n        mult = 2**n_downsampling\n        for i in range(n_blocks):\n            model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]\n\n        for i in range(n_downsampling):\n            mult = 2**(n_downsampling - i)\n            model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),\n                                         kernel_size=4, stride=2, padding=1,\n                                         bias=use_bias),\n                      norm_layer(int(ngf * mult / 2)),\n                      nn.ReLU(True)]\n\n        self.model = nn.Sequential(*model)\n        # color mask generator top\n        color_top = []\n        color_top += [nn.Conv2d(ngf, img_nc, kernel_size=7, stride=1, padding=3, bias=False),\n                        nn.Tanh()]\n        self.color_top = nn.Sequential(*color_top)\n        # AUs mask generator top\n        au_top = []\n        au_top += [nn.Conv2d(ngf, 1, kernel_size=7, stride=1, padding=3, bias=False),\n                    nn.Sigmoid()]\n        self.au_top = nn.Sequential(*au_top)\n\n        # from torchsummary import summary\n        # summary(self.model.to(\"cuda\"), (20, 128, 128))\n        # summary(self.color_top.to(\"cuda\"), (64, 128, 128))\n        # summary(self.au_top.to(\"cuda\"), (64, 128, 128))\n        # assert False\n\n    def forward(self, img, au):\n        # replicate the AUs vector to match the image shape, then concatenate to build the input\n        sparse_au = au.unsqueeze(2).unsqueeze(3)\n        sparse_au = sparse_au.expand(sparse_au.size(0), sparse_au.size(1), img.size(2), img.size(3))\n        self.input_img_au = torch.cat([img, sparse_au], dim=1)\n\n        embed_features = self.model(self.input_img_au)\n\n        return 
self.color_top(embed_features), self.au_top(embed_features), embed_features\n\n\nclass SplitDiscriminator(nn.Module):\n    def __init__(self, input_nc, aus_nc, image_size=128, ndf=64, n_layers=6, norm_layer=nn.BatchNorm2d):\n        super(SplitDiscriminator, self).__init__()\n        if type(norm_layer) == functools.partial:\n            use_bias = norm_layer.func == nn.InstanceNorm2d\n        else:\n            use_bias = norm_layer == nn.InstanceNorm2d\n\n        kw = 4\n        padw = 1\n        sequence = [\n            nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw),\n            nn.LeakyReLU(0.01, True)\n        ]\n\n        cur_dim = ndf\n        for n in range(1, n_layers):\n            sequence += [\n                nn.Conv2d(cur_dim, 2 * cur_dim,\n                          kernel_size=kw, stride=2, padding=padw, bias=use_bias),\n                nn.LeakyReLU(0.01, True)\n            ]\n            cur_dim = 2 * cur_dim\n\n        self.model = nn.Sequential(*sequence)\n        # patch discriminator top\n        self.dis_top = nn.Conv2d(cur_dim, 1, kernel_size=kw-1, stride=1, padding=padw, bias=False)\n        # AUs classifier top \n        k_size = int(image_size / (2 ** n_layers))\n        self.aus_top = nn.Conv2d(cur_dim, aus_nc, kernel_size=k_size, stride=1, bias=False)\n\n        # from torchsummary import summary\n        # summary(self.model.to(\"cuda\"), (3, 128, 128))\n\n    def forward(self, img):\n        embed_features = self.model(img)\n        pred_map = self.dis_top(embed_features)\n        pred_aus = self.aus_top(embed_features)\n        return pred_map.squeeze(), pred_aus.squeeze()\n\n\n# https://github.com/jxgu1016/Total_Variation_Loss.pytorch/blob/master/TVLoss.py\nclass TVLoss(nn.Module):\n    def __init__(self, TVLoss_weight=1):\n        super(TVLoss,self).__init__()\n        self.TVLoss_weight = TVLoss_weight\n\n    def forward(self,x):\n        batch_size = x.size()[0]\n        h_x = x.size()[2]\n        w_x = 
x.size()[3]\n        count_h = self._tensor_size(x[:,:,1:,:])\n        count_w = self._tensor_size(x[:,:,:,1:])\n        h_tv = torch.pow((x[:,:,1:,:]-x[:,:,:h_x-1,:]),2).sum()\n        w_tv = torch.pow((x[:,:,:,1:]-x[:,:,:,:w_x-1]),2).sum()\n        return self.TVLoss_weight*2*(h_tv/count_h+w_tv/count_w)/batch_size\n\n    def _tensor_size(self,t):\n        return t.size()[1]*t.size()[2]*t.size()[3]\n\n\n\n\n"
  },
  {
    "path": "third_part/ganimation_replicate/model/stargan.py",
    "content": "import torch\nfrom .base_model import BaseModel\nfrom . import model_utils\n\n\n\nclass StarGANModel(BaseModel):\n    \"\"\"docstring for StarGANModel\"\"\"\n    def __init__(self):\n        super(StarGANModel, self).__init__()\n        self.name = \"StarGAN\"\n\n    def initialize(self, opt):\n        super(StarGANModel, self).initialize(opt)\n\n        self.net_gen = model_utils.define_splitG(self.opt.img_nc, self.opt.aus_nc, self.opt.ngf, use_dropout=self.opt.use_dropout, \n                    norm=self.opt.norm, init_type=self.opt.init_type, init_gain=self.opt.init_gain, gpu_ids=self.gpu_ids)\n        self.models_name.append('gen')\n        \n        if self.is_train:\n            self.net_dis = model_utils.define_splitD(self.opt.img_nc, self.opt.aus_nc, self.opt.final_size, self.opt.ndf, \n                    norm=self.opt.norm, init_type=self.opt.init_type, init_gain=self.opt.init_gain, gpu_ids=self.gpu_ids)\n            self.models_name.append('dis')\n\n        if self.opt.load_epoch > 0:\n            self.load_ckpt(self.opt.load_epoch)\n\n    def setup(self):\n        super(StarGANModel, self).setup()\n        if self.is_train:\n            # setup optimizer\n            self.optim_gen = torch.optim.Adam(self.net_gen.parameters(),\n                            lr=self.opt.lr, betas=(self.opt.beta1, 0.999))\n            self.optims.append(self.optim_gen)\n            self.optim_dis = torch.optim.Adam(self.net_dis.parameters(), \n                            lr=self.opt.lr, betas=(self.opt.beta1, 0.999))\n            self.optims.append(self.optim_dis)\n\n            # setup schedulers\n            self.schedulers = [model_utils.get_scheduler(optim, self.opt) for optim in self.optims]\n\n    def feed_batch(self, batch):\n        self.src_img = batch['src_img'].to(self.device)\n        self.tar_aus = batch['tar_aus'].type(torch.FloatTensor).to(self.device)\n        if self.is_train:\n            self.src_aus = 
batch['src_aus'].type(torch.FloatTensor).to(self.device)\n            self.tar_img = batch['tar_img'].to(self.device)\n\n    def forward(self):\n        # generate fake image\n        self.fake_img, _, _ = self.net_gen(self.src_img, self.tar_aus)\n\n        # reconstruct real image\n        if self.is_train:\n            self.rec_real_img, _, _ = self.net_gen(self.fake_img, self.src_aus)\n\n    def backward_dis(self):\n        # real image\n        pred_real, self.pred_real_aus = self.net_dis(self.src_img)\n        self.loss_dis_real = self.criterionGAN(pred_real, True)\n        self.loss_dis_real_aus = self.criterionMSE(self.pred_real_aus, self.src_aus)\n\n        # fake image, detach to stop gradients from flowing back into the generator\n        pred_fake, _ = self.net_dis(self.fake_img.detach()) \n        self.loss_dis_fake = self.criterionGAN(pred_fake, False)\n\n        # combine dis loss\n        self.loss_dis =   self.opt.lambda_dis * (self.loss_dis_fake + self.loss_dis_real) \\\n                        + self.opt.lambda_aus * self.loss_dis_real_aus\n        if self.opt.gan_type == 'wgan-gp':\n            self.loss_dis_gp = self.gradient_penalty(self.src_img, self.fake_img)\n            self.loss_dis = self.loss_dis + self.opt.lambda_wgan_gp * self.loss_dis_gp\n        \n        # backward discriminator loss\n        self.loss_dis.backward()\n\n    def backward_gen(self):\n        # original to target domain, should fool the discriminator\n        pred_fake, self.pred_fake_aus = self.net_dis(self.fake_img)\n        self.loss_gen_GAN = self.criterionGAN(pred_fake, True)\n        self.loss_gen_fake_aus = self.criterionMSE(self.pred_fake_aus, self.tar_aus)\n\n        # target to original domain reconstruction (cycle-consistency loss)\n        self.loss_gen_rec = self.criterionL1(self.rec_real_img, self.src_img)\n\n        # combine and backward G loss\n        self.loss_gen =   self.opt.lambda_dis * self.loss_gen_GAN \\\n                        + self.opt.lambda_aus * self.loss_gen_fake_aus \\\n   
                     + self.opt.lambda_rec * self.loss_gen_rec \n\n        self.loss_gen.backward()\n\n    def optimize_paras(self, train_gen):\n        self.forward()\n        # update discriminator\n        self.set_requires_grad(self.net_dis, True)\n        self.optim_dis.zero_grad()\n        self.backward_dis()\n        self.optim_dis.step()\n\n        # update G if needed\n        if train_gen:\n            self.set_requires_grad(self.net_dis, False)\n            self.optim_gen.zero_grad()\n            self.backward_gen()\n            self.optim_gen.step()\n\n    def save_ckpt(self, epoch):\n        # save the specific networks\n        save_models_name = ['gen', 'dis']\n        return super(StarGANModel, self).save_ckpt(epoch, save_models_name)\n\n    def load_ckpt(self, epoch):\n        # load the specific part of networks\n        load_models_name = ['gen']\n        if self.is_train:\n            load_models_name.extend(['dis'])\n        return super(StarGANModel, self).load_ckpt(epoch, load_models_name)\n\n    def clean_ckpt(self, epoch):\n        # load the specific part of networks\n        load_models_name = ['gen', 'dis']\n        return super(StarGANModel, self).clean_ckpt(epoch, load_models_name)\n\n    def get_latest_losses(self):\n        get_losses_name = ['dis_fake', 'dis_real', 'dis_real_aus', 'gen_rec']\n        return super(StarGANModel, self).get_latest_losses(get_losses_name)\n\n    def get_latest_visuals(self):\n        visuals_name = ['src_img', 'tar_img', 'fake_img']\n        if self.is_train:\n            visuals_name.extend(['rec_real_img'])\n        return super(StarGANModel, self).get_latest_visuals(visuals_name)\n"
  },
  {
    "path": "third_part/ganimation_replicate/options.py",
    "content": "import argparse\nimport torch\nimport os\nfrom datetime import datetime\nimport time\nimport torch \nimport random\nimport numpy as np \nimport sys\n\n\n\nclass Options(object):\n    \"\"\"docstring for Options\"\"\"\n    def __init__(self):\n        super(Options, self).__init__()\n        \n    def initialize(self):\n        parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n        parser.add_argument('--mode', type=str, default='train', help='Mode of code. [train|test]')\n        parser.add_argument('--model', type=str, default='ganimation', help='[ganimation|stargan], see model.__init__ from more details.')\n        parser.add_argument('--lucky_seed', type=int, default=0, help='seed for random initialize, 0 to use current time.')\n        parser.add_argument('--visdom_env', type=str, default=\"main\", help='visdom env.')\n        parser.add_argument('--visdom_port', type=int, default=8097, help='visdom port.')\n        parser.add_argument('--visdom_display_id', type=int, default=1, help='set value larger than 0 to display with visdom.')\n        \n        parser.add_argument('--results', type=str, default=\"results\", help='save test results to this path.')\n        parser.add_argument('--interpolate_len', type=int, default=5, help='interpolate length for test.')\n        parser.add_argument('--no_test_eval', action='store_true', help='do not use eval mode during test time.')\n        parser.add_argument('--save_test_gif', action='store_true', help='save gif images instead of the concatenation of static images.')\n\n        parser.add_argument('--data_root', required=False, help='paths to data set.')\n        parser.add_argument('--imgs_dir', type=str, default=\"imgs\", help='path to image')\n        parser.add_argument('--aus_pkl', type=str, default=\"aus_openface.pkl\", help='AUs pickle dictionary.')\n        parser.add_argument('--train_csv', type=str, default=\"train_ids.csv\", help='train images 
paths')\n        parser.add_argument('--test_csv', type=str, default=\"test_ids.csv\", help='test images paths')\n\n        parser.add_argument('--batch_size', type=int, default=25, help='input batch size.')\n        parser.add_argument('--serial_batches', action='store_true', help='if specified, input images in order.')\n        parser.add_argument('--n_threads', type=int, default=6, help='number of workers to load data.')\n        parser.add_argument('--max_dataset_size', type=int, default=float(\"inf\"), help='maximum number of samples.')\n\n        parser.add_argument('--resize_or_crop', type=str, default='none', help='Preprocessing image, [resize_and_crop|crop|none]')\n        parser.add_argument('--load_size', type=int, default=148, help='scale image to this size.')\n        parser.add_argument('--final_size', type=int, default=128, help='crop image to this size.')\n        parser.add_argument('--no_flip', action='store_true', help='if specified, do not flip image.')\n        parser.add_argument('--no_aus_noise', action='store_true', help='if specified, add noise to target AUs.')\n\n        parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids, eg. 
0,1,2; -1 for cpu.')\n        parser.add_argument('--ckpt_dir', type=str, default='./ckpts', help='directory to save check points.')\n        parser.add_argument('--load_epoch', type=int, default=0, help='load epoch; 0: do not load')\n        parser.add_argument('--log_file', type=str, default=\"logs.txt\", help='log loss')\n        parser.add_argument('--opt_file', type=str, default=\"opt.txt\", help='options file')\n\n        # train options \n        parser.add_argument('--img_nc', type=int, default=3, help='image number of channel')\n        parser.add_argument('--aus_nc', type=int, default=17, help='aus number of channel')\n        parser.add_argument('--ngf', type=int, default=64, help='ngf')\n        parser.add_argument('--ndf', type=int, default=64, help='ndf')\n        parser.add_argument('--use_dropout', action='store_true', help='if specified, use dropout.')\n        \n        parser.add_argument('--gan_type', type=str, default='wgan-gp', help='GAN loss [wgan-gp|lsgan|gan]')\n        parser.add_argument('--init_type', type=str, default='normal', help='network initialization [normal|xavier|kaiming|orthogonal]')\n        parser.add_argument('--init_gain', type=float, default=0.02, help='scaling factor for normal, xavier and orthogonal.')\n        parser.add_argument('--norm', type=str, default='instance', help='instance normalization or batch normalization [batch|instance|none]')\n        parser.add_argument('--beta1', type=float, default=0.5, help='momentum term of adam')\n        parser.add_argument('--lr', type=float, default=0.0001, help='initial learning rate for adam')\n        parser.add_argument('--lr_policy', type=str, default='lambda', help='learning rate policy: lambda|step|plateau|cosine')\n        parser.add_argument('--lr_decay_iters', type=int, default=50, help='multiply by a gamma every lr_decay_iters iterations')\n\n        parser.add_argument('--epoch_count', type=int, default=1, help='the starting epoch count, we save the model by 
<epoch_count>, <epoch_count>+<save_latest_freq>, ...')\n        parser.add_argument('--niter', type=int, default=20, help='# of iter at starting learning rate')\n        parser.add_argument('--niter_decay', type=int, default=10, help='# of iter to linearly decay learning rate to zero')\n        \n        # loss options \n        parser.add_argument('--lambda_dis', type=float, default=1.0, help='discriminator weight in loss')\n        parser.add_argument('--lambda_aus', type=float, default=160.0, help='AUs weight in loss')\n        parser.add_argument('--lambda_rec', type=float, default=10.0, help='reconstruct loss weight')\n        parser.add_argument('--lambda_mask', type=float, default=0, help='mse loss weight')\n        parser.add_argument('--lambda_tv', type=float, default=0, help='total variation loss weight')\n        parser.add_argument('--lambda_wgan_gp', type=float, default=10., help='wgan gradient penalty weight')\n\n        # frequency options\n        parser.add_argument('--train_gen_iter', type=int, default=5, help='train G every n iterations.')\n        parser.add_argument('--print_losses_freq', type=int, default=100, help='print log every print_freq step.')\n        parser.add_argument('--plot_losses_freq', type=int, default=20000, help='plot log every plot_freq step.')\n        parser.add_argument('--sample_img_freq', type=int, default=2000, help='draw image every sample_img_freq step.')\n        parser.add_argument('--save_epoch_freq', type=int, default=2, help='save checkpoint every save_epoch_freq epoch.')\n        \n        return parser\n\n    def parse(self):\n        parser = self.initialize()\n        parser.set_defaults(name=datetime.now().strftime(\"%y%m%d_%H%M%S\"))\n        opt = parser.parse_args()\n\n        dataset_name = os.path.basename(opt.data_root.strip('/'))\n        # update checkpoint dir\n        if opt.mode == 'train' and opt.load_epoch == 0:\n            opt.ckpt_dir = os.path.join(opt.ckpt_dir, dataset_name, opt.model, 
opt.name)\n            if not os.path.exists(opt.ckpt_dir):\n                os.makedirs(opt.ckpt_dir)\n\n        # if test, disable visdom, update results path\n        if opt.mode == \"test\":\n            opt.visdom_display_id = 0\n            opt.results = os.path.join(opt.results, \"%s_%s_%s\" % (dataset_name, opt.model, opt.load_epoch))\n            if not os.path.exists(opt.results):\n                os.makedirs(opt.results)\n\n        # set gpu device\n        str_ids = opt.gpu_ids.split(',')\n        opt.gpu_ids = []\n        for str_id in str_ids:\n            cur_id = int(str_id)\n            if cur_id >= 0:\n                opt.gpu_ids.append(cur_id)\n        if len(opt.gpu_ids) > 0:\n            torch.cuda.set_device(opt.gpu_ids[0])\n\n        # set seed \n        if opt.lucky_seed == 0:\n            opt.lucky_seed = int(time.time())\n        random.seed(a=opt.lucky_seed)\n        np.random.seed(seed=opt.lucky_seed)\n        torch.manual_seed(opt.lucky_seed)\n        if len(opt.gpu_ids) > 0:\n            torch.backends.cudnn.deterministic = True\n            torch.backends.cudnn.benchmark = False\n            torch.cuda.manual_seed(opt.lucky_seed)\n            torch.cuda.manual_seed_all(opt.lucky_seed)\n            \n        # write command to file\n        script_dir = opt.ckpt_dir \n        with open(os.path.join(os.path.join(script_dir, \"run_script.sh\")), 'a+') as f:\n            f.write(\"[%5s][%s]python %s\\n\" % (opt.mode, opt.name, ' '.join(sys.argv)))\n\n        # print and write options file\n        msg = ''\n        msg += '------------------- [%5s][%s]Options --------------------\\n' % (opt.mode, opt.name)\n        for k, v in sorted(vars(opt).items()):\n            comment = ''\n            default_v = parser.get_default(k)\n            if v != default_v:\n                comment = '\\t[default: %s]' % str(default_v)\n            msg += '{:>25}: {:<30}{}\\n'.format(str(k), str(v), comment)\n        msg += '--------------------- 
[%5s][%s]End ----------------------\\n' % (opt.mode, opt.name)\n        print(msg)\n        with open(os.path.join(os.path.join(script_dir, \"opt.txt\")), 'a+') as f:\n            f.write(msg + '\\n\\n')\n\n        return opt\n\n\n\n\n\n\n"
  },
  {
    "path": "third_part/ganimation_replicate/solvers.py",
    "content": "\"\"\"\nCreated on Dec 13, 2018\n@author: Yuedong Chen\n\"\"\"\n\nfrom data import create_dataloader\nfrom model import create_model\nfrom visualizer import Visualizer\nimport copy\nimport time\nimport os\nimport torch\nimport numpy as np\nfrom PIL import Image\n\n\ndef create_solver(opt):\n    instance = Solver()\n    instance.initialize(opt)\n    return instance\n\n\n\nclass Solver(object):\n    \"\"\"docstring for Solver\"\"\"\n    def __init__(self):\n        super(Solver, self).__init__()\n\n    def initialize(self, opt):\n        self.opt = opt\n        self.visual = Visualizer()\n        self.visual.initialize(self.opt)\n\n    def run_solver(self):\n        if self.opt.mode == \"train\":\n            self.train_networks()\n        else:\n            self.test_networks(self.opt)\n\n    def train_networks(self):\n        # init train setting\n        self.init_train_setting()\n\n        # for every epoch\n        for epoch in range(self.opt.epoch_count, self.epoch_len + 1):\n            # train network\n            self.train_epoch(epoch)\n            # update learning rate\n            self.cur_lr = self.train_model.update_learning_rate()\n            # save checkpoint if needed\n            if epoch % self.opt.save_epoch_freq == 0:\n                self.train_model.save_ckpt(epoch)\n\n        # save the last epoch \n        self.train_model.save_ckpt(self.epoch_len)\n\n    def init_train_setting(self):\n        self.train_dataset = create_dataloader(self.opt)\n        self.train_model = create_model(self.opt)\n\n        self.train_total_steps = 0\n        self.epoch_len = self.opt.niter + self.opt.niter_decay\n        self.cur_lr = self.opt.lr\n\n    def train_epoch(self, epoch):\n        epoch_start_time = time.time()\n        epoch_steps = 0\n\n        last_print_step_t = time.time()\n        for idx, batch in enumerate(self.train_dataset):\n\n            self.train_total_steps += self.opt.batch_size\n            epoch_steps += 
self.opt.batch_size\n            # train network\n            self.train_model.feed_batch(batch)\n            self.train_model.optimize_paras(train_gen=(idx % self.opt.train_gen_iter == 0))\n            # print losses\n            if self.train_total_steps % self.opt.print_losses_freq == 0:\n                cur_losses = self.train_model.get_latest_losses()\n                avg_step_t = (time.time() - last_print_step_t) / self.opt.print_losses_freq\n                last_print_step_t = time.time()\n                # print loss info to command line\n                info_dict = {'epoch': epoch, 'epoch_len': self.epoch_len,\n                            'epoch_steps': idx * self.opt.batch_size, 'epoch_steps_len': len(self.train_dataset),\n                            'step_time': avg_step_t, 'cur_lr': self.cur_lr,\n                            'log_path': os.path.join(self.opt.ckpt_dir, self.opt.log_file),\n                            'losses': cur_losses\n                            }\n                self.visual.print_losses_info(info_dict)\n            \n            # plot loss map to visdom\n            if self.train_total_steps % self.opt.plot_losses_freq == 0 and self.visual.display_id > 0:\n                cur_losses = self.train_model.get_latest_losses()\n                epoch_steps = idx * self.opt.batch_size\n                self.visual.display_current_losses(epoch - 1, epoch_steps / len(self.train_dataset), cur_losses)\n            \n            # display image on visdom\n            if self.train_total_steps % self.opt.sample_img_freq == 0 and self.visual.display_id > 0:\n                cur_vis = self.train_model.get_latest_visuals()\n                self.visual.display_online_results(cur_vis, epoch)\n                # latest_aus = model.get_latest_aus()\n                # visual.log_aus(epoch, epoch_steps, latest_aus, opt.ckpt_dir)\n\n    def test_networks(self, opt):\n        self.init_test_setting(opt)\n        self.test_ops()\n\n    def 
init_test_setting(self, opt):\n        self.test_dataset = create_dataloader(opt)\n        self.test_model = create_model(opt)\n\n    def test_ops(self):\n        for batch_idx, batch in enumerate(self.test_dataset):\n            with torch.no_grad():\n                # interpolate several times\n                faces_list = [batch['src_img'].float().numpy()]\n                paths_list = [batch['src_path'], batch['tar_path']]\n                for idx in range(self.opt.interpolate_len):\n                    cur_alpha = (idx + 1.) / float(self.opt.interpolate_len)\n                    cur_tar_aus = cur_alpha * batch['tar_aus'] + (1 - cur_alpha) * batch['src_aus']\n                    # print(batch['src_aus'])\n                    # print(cur_tar_aus)\n                    test_batch = {'src_img': batch['src_img'], 'tar_aus': cur_tar_aus, 'src_aus':batch['src_aus'], 'tar_img':batch['tar_img']}\n\n                    self.test_model.feed_batch(test_batch)\n                    self.test_model.forward()\n\n                    cur_gen_faces = self.test_model.fake_img.cpu().float().numpy()\n                    faces_list.append(cur_gen_faces)\n                faces_list.append(batch['tar_img'].float().numpy())\n            self.test_save_imgs(faces_list, paths_list)\n\n    def test_save_imgs(self, faces_list, paths_list):\n        for idx in range(len(paths_list[0])):\n            src_name = os.path.splitext(os.path.basename(paths_list[0][idx]))[0]\n            tar_name = os.path.splitext(os.path.basename(paths_list[1][idx]))[0]\n\n            if self.opt.save_test_gif:\n                import imageio\n                imgs_numpy_list = []\n                for face_idx in range(len(faces_list) - 1):  # remove target image\n                    cur_numpy = np.array(self.visual.numpy2im(faces_list[face_idx][idx]))\n                    imgs_numpy_list.extend([cur_numpy for _ in range(3)])\n                saved_path = os.path.join(self.opt.results, \"%s_%s.gif\" % (src_name, 
tar_name))\n                imageio.mimsave(saved_path, imgs_numpy_list)\n            else:\n                # concatenate source, interpolated, and target faces\n                concate_img = np.array(self.visual.numpy2im(faces_list[0][idx]))\n                for face_idx in range(1, len(faces_list)):\n                    concate_img = np.concatenate((concate_img, np.array(self.visual.numpy2im(faces_list[face_idx][idx]))), axis=1)\n                concate_img = Image.fromarray(concate_img)\n                # save image\n                saved_path = os.path.join(self.opt.results, \"%s_%s.jpg\" % (src_name, tar_name))\n                concate_img.save(saved_path)\n\n            print(\"[Success] Saved images to %s\" % saved_path)\n\n\n\n\n\n\n"
  },
  {
    "path": "third_part/ganimation_replicate/visualizer.py",
    "content": "import os\nimport numpy as np\nimport torch\nimport math\nfrom PIL import Image\n# import matplotlib.pyplot as plt\n\n\n\nclass Visualizer(object):\n    \"\"\"docstring for Visualizer\"\"\"\n    def __init__(self):\n        super(Visualizer, self).__init__()\n\n    def initialize(self, opt):\n        self.opt = opt\n        # self.vis_saved_dir = os.path.join(self.opt.ckpt_dir, 'vis_pics')\n        # if not os.path.isdir(self.vis_saved_dir):\n        #     os.makedirs(self.vis_saved_dir)\n        # plt.switch_backend('agg')\n\n        self.display_id = self.opt.visdom_display_id\n        if self.display_id > 0:\n            import visdom \n            self.ncols = 8\n            self.vis = visdom.Visdom(server=\"http://localhost\", port=self.opt.visdom_port, env=self.opt.visdom_env)\n\n    def throw_visdom_connection_error(self):\n        print('\\n\\nno visdom server.')\n        exit(1)\n\n    def print_losses_info(self, info_dict):\n        msg = '[{}][Epoch: {:0>3}/{:0>3}; Images: {:0>4}/{:0>4}; Time: {:.3f}s/Batch({}); LR: {:.7f}] '.format(\n                self.opt.name, info_dict['epoch'], info_dict['epoch_len'], \n                info_dict['epoch_steps'], info_dict['epoch_steps_len'], \n                info_dict['step_time'], self.opt.batch_size, info_dict['cur_lr'])\n        for k, v in info_dict['losses'].items():\n            msg += '| {}: {:.4f} '.format(k, v)\n        msg += '|'\n        print(msg)\n        with open(info_dict['log_path'], 'a+') as f:\n            f.write(msg + '\\n')\n\n    def display_current_losses(self, epoch, counter_ratio, losses_dict):\n        if not hasattr(self, 'plot_data'):\n            self.plot_data = {'X': [], 'Y': [], 'legend': list(losses_dict.keys())}\n        self.plot_data['X'].append(epoch + counter_ratio)\n        self.plot_data['Y'].append([losses_dict[k] for k in self.plot_data['legend']])\n        try:\n            self.vis.line(\n                X=np.stack([np.array(self.plot_data['X'])] * 
len(self.plot_data['legend']), 1),\n                Y=np.array(self.plot_data['Y']),\n                opts={\n                    'title': self.opt.name + ' loss over time',\n                    'legend':self.plot_data['legend'],\n                    'xlabel':'epoch',\n                    'ylabel':'loss'},\n                win=self.display_id)\n        except ConnectionError:\n            self.throw_visdom_connection_error()\n\n    def display_online_results(self, visuals, epoch):\n        win_id = self.display_id + 24\n        images = []\n        labels = []\n        for label, image in visuals.items():\n            if 'mask' in label:  # or 'focus' in label:\n                image = (image - 0.5) / 0.5   # convert map from [0, 1] to [-1, 1]\n            image_numpy = self.tensor2im(image)\n            images.append(image_numpy.transpose([2, 0, 1]))\n            labels.append(label)\n        try:\n            title = ' || '.join(labels)\n            self.vis.images(images, nrow=self.ncols, win=win_id,\n                            padding=5, opts=dict(title=title))\n        except ConnectionError:\n            self.throw_visdom_connection_error()\n        \n    # utils\n    def tensor2im(self, input_image, imtype=np.uint8):\n        if isinstance(input_image, torch.Tensor):\n            image_tensor = input_image.data\n        else:\n            return input_image\n        image_numpy = image_tensor[0].cpu().float().numpy()\n        im = self.numpy2im(image_numpy, imtype).resize((80, 80), Image.ANTIALIAS)\n        return np.array(im)\n        \n    def numpy2im(self, image_numpy, imtype=np.uint8):\n        if image_numpy.shape[0] == 1:\n            image_numpy = np.tile(image_numpy, (3, 1, 1))  \n        # input should be [0, 1]\n        #image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0\n        image_numpy = (np.transpose(image_numpy, (1, 2, 0)) / 2. 
+ 0.5) * 255.0\n        # print(image_numpy.shape)\n        image_numpy = image_numpy.astype(imtype)\n        im = Image.fromarray(image_numpy)\n        # im = Image.fromarray(image_numpy).resize((64, 64), Image.ANTIALIAS)\n        return im   # np.array(im)\n\n\n\n\n\n"
  },
  {
    "path": "utils/alignment_stit.py",
    "content": "import PIL\nimport PIL.Image\nimport dlib\nimport face_alignment\nimport numpy as np\nimport scipy\nimport scipy.ndimage\nimport skimage.io as io\nimport torch\nfrom PIL import Image\nfrom scipy.ndimage import gaussian_filter1d\nfrom tqdm import tqdm\n\n# from configs import paths_config\ndef paste_image(inverse_transform, img, orig_image):\n    pasted_image = orig_image.copy().convert('RGBA')\n    projected = img.convert('RGBA').transform(orig_image.size, Image.PERSPECTIVE, inverse_transform, Image.BILINEAR)\n    pasted_image.paste(projected, (0, 0), mask=projected)\n    return pasted_image\n\ndef get_landmark(filepath, predictor, detector=None, fa=None):\n    \"\"\"get landmark with dlib\n    :return: np.array shape=(68, 2)\n    \"\"\"\n    if fa is not None:\n        image = io.imread(filepath)\n        lms, _, bboxes = fa.get_landmarks(image, return_bboxes=True)\n        if len(lms) == 0:\n            return None\n        return lms[0]\n\n    if detector is None:\n        detector = dlib.get_frontal_face_detector()\n    if isinstance(filepath, PIL.Image.Image):\n        img = np.array(filepath)\n    else:\n        img = dlib.load_rgb_image(filepath)\n    dets = detector(img)\n\n    for k, d in enumerate(dets):\n        shape = predictor(img, d)\n        break\n    else:\n        return None\n    t = list(shape.parts())\n    a = []\n    for tt in t:\n        a.append([tt.x, tt.y])\n    lm = np.array(a)\n    return lm\n\n\ndef align_face(filepath_or_image, predictor, output_size, detector=None,\n               enable_padding=False, scale=1.0):\n    \"\"\"\n    :param filepath: str\n    :return: PIL Image\n    \"\"\"\n\n    c, x, y = compute_transform(filepath_or_image, predictor, detector=detector,\n                                scale=scale)\n    quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])\n    img = crop_image(filepath_or_image, output_size, quad, enable_padding=enable_padding)\n\n    # Return aligned image.\n    return 
img\n\n\ndef crop_image(filepath, output_size, quad, enable_padding=False):\n    x = (quad[3] - quad[1]) / 2\n    qsize = np.hypot(*x) * 2\n    # read image\n    if isinstance(filepath, PIL.Image.Image):\n        img = filepath\n    else:\n        img = PIL.Image.open(filepath)\n    transform_size = output_size\n    # Shrink.\n    shrink = int(np.floor(qsize / output_size * 0.5))\n    if shrink > 1:\n        rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink)))\n        img = img.resize(rsize, PIL.Image.ANTIALIAS)\n        quad /= shrink\n        qsize /= shrink\n    # Crop.\n    border = max(int(np.rint(qsize * 0.1)), 3)\n    crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),\n            int(np.ceil(max(quad[:, 1]))))\n    crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),\n            min(crop[3] + border, img.size[1]))\n    if (crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]):\n        img = img.crop(crop)\n        quad -= crop[0:2]\n    # Pad.\n    pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),\n           int(np.ceil(max(quad[:, 1]))))\n    pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0),\n           max(pad[3] - img.size[1] + border, 0))\n    if enable_padding and max(pad) > border - 4:\n        pad = np.maximum(pad, int(np.rint(qsize * 0.3)))\n        img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')\n        h, w, _ = img.shape\n        y, x, _ = np.ogrid[:h, :w, :1]\n        mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]),\n                          1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3]))\n        blur = qsize * 0.02\n        img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - 
img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)\n        img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)\n        img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB')\n        quad += pad[:2]\n    # Transform.\n    img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR)\n    if output_size < transform_size:\n        img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS)\n    return img\n\ndef compute_transform(lm, predictor, detector=None, scale=1.0, fa=None):\n    # lm = get_landmark(filepath, predictor, detector, fa)\n    # if lm is None:\n        # raise Exception(f'Did not detect any faces in image: {filepath}')\n    lm_chin = lm[0: 17]  # left-right\n    lm_eyebrow_left = lm[17: 22]  # left-right\n    lm_eyebrow_right = lm[22: 27]  # left-right\n    lm_nose = lm[27: 31]  # top-down\n    lm_nostrils = lm[31: 36]  # top-down\n    lm_eye_left = lm[36: 42]  # left-clockwise\n    lm_eye_right = lm[42: 48]  # left-clockwise\n    lm_mouth_outer = lm[48: 60]  # left-clockwise\n    lm_mouth_inner = lm[60: 68]  # left-clockwise\n    # Calculate auxiliary vectors.\n    eye_left = np.mean(lm_eye_left, axis=0)\n    eye_right = np.mean(lm_eye_right, axis=0)\n    eye_avg = (eye_left + eye_right) * 0.5\n    eye_to_eye = eye_right - eye_left\n    mouth_left = lm_mouth_outer[0]\n    mouth_right = lm_mouth_outer[6]\n    mouth_avg = (mouth_left + mouth_right) * 0.5\n    eye_to_mouth = mouth_avg - eye_avg\n    # Choose oriented crop rectangle.\n    x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]\n    x /= np.hypot(*x)\n    x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)\n\n    x *= scale\n    y = np.flipud(x) * [-1, 1]\n    c = eye_avg + eye_to_mouth * 0.1\n    return c, x, y\n\n\ndef crop_faces(IMAGE_SIZE, files, scale, center_sigma=0.0, xy_sigma=0.0, use_fa=False, fa=None):\n    if use_fa:\n        if fa == None:\n            device = 'cuda' if 
torch.cuda.is_available() else 'cpu'\n            fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=True, device=device)\n        predictor = None\n        detector = None\n    else:\n        fa = None\n        predictor = None\n        detector = None\n        # predictor = dlib.shape_predictor(paths_config.shape_predictor_path)\n        # detector = dlib.get_frontal_face_detector()\n\n    cs, xs, ys = [], [], []\n    for lm, pil in tqdm(files):\n        c, x, y = compute_transform(lm, predictor, detector=detector,\n                                    scale=scale, fa=fa)\n        cs.append(c)\n        xs.append(x)\n        ys.append(y)\n\n    cs = np.stack(cs)\n    xs = np.stack(xs)\n    ys = np.stack(ys)\n    if center_sigma != 0:\n        cs = gaussian_filter1d(cs, sigma=center_sigma, axis=0)\n\n    if xy_sigma != 0:\n        xs = gaussian_filter1d(xs, sigma=xy_sigma, axis=0)\n        ys = gaussian_filter1d(ys, sigma=xy_sigma, axis=0)\n\n    quads = np.stack([cs - xs - ys, cs - xs + ys, cs + xs + ys, cs + xs - ys], axis=1)\n    quads = list(quads)\n\n    crops, orig_images = crop_faces_by_quads(IMAGE_SIZE, files, quads)\n\n    return crops, orig_images, quads\n\n\ndef crop_faces_by_quads(IMAGE_SIZE, files, quads):\n    orig_images = []\n    crops = []\n    for quad, (_, path) in tqdm(zip(quads, files), total=len(quads)):\n        crop = crop_image(path, IMAGE_SIZE, quad.copy())\n        orig_image = path # Image.open(path)\n        orig_images.append(orig_image)\n        crops.append(crop)\n    return crops, orig_images\n\n\ndef calc_alignment_coefficients(pa, pb):\n    matrix = []\n    for p1, p2 in zip(pa, pb):\n        matrix.append([p1[0], p1[1], 1, 0, 0, 0, -p2[0] * p1[0], -p2[0] * p1[1]])\n        matrix.append([0, 0, 0, p1[0], p1[1], 1, -p2[1] * p1[0], -p2[1] * p1[1]])\n\n    a = np.matrix(matrix, dtype=float)\n    b = np.array(pb).reshape(8)\n\n    res = np.dot(np.linalg.inv(a.T * a) * a.T, b)\n    return 
np.array(res).reshape(8)"
  },
  {
    "path": "utils/audio.py",
"content": "import librosa\nimport librosa.filters\nimport numpy as np\n# import tensorflow as tf\nfrom scipy import signal\nfrom scipy.io import wavfile\nfrom .hparams import hparams as hp\n\ndef load_wav(path, sr):\n    return librosa.core.load(path, sr=sr)[0]\n\ndef save_wav(wav, path, sr):\n    wav *= 32767 / max(0.01, np.max(np.abs(wav)))\n    #proposed by @dsmiller\n    wavfile.write(path, sr, wav.astype(np.int16))\n\ndef save_wavenet_wav(wav, path, sr):\n    librosa.output.write_wav(path, wav, sr=sr)\n\ndef preemphasis(wav, k, preemphasize=True):\n    if preemphasize:\n        return signal.lfilter([1, -k], [1], wav)\n    return wav\n\ndef inv_preemphasis(wav, k, inv_preemphasize=True):\n    if inv_preemphasize:\n        return signal.lfilter([1], [1, -k], wav)\n    return wav\n\ndef get_hop_size():\n    hop_size = hp.hop_size\n    if hop_size is None:\n        assert hp.frame_shift_ms is not None\n        hop_size = int(hp.frame_shift_ms / 1000 * hp.sample_rate)\n    return hop_size\n\ndef linearspectrogram(wav):\n    D = _stft(preemphasis(wav, hp.preemphasis, hp.preemphasize))\n    S = _amp_to_db(np.abs(D)) - hp.ref_level_db\n    \n    if hp.signal_normalization:\n        return _normalize(S)\n    return S\n\ndef melspectrogram(wav):\n    D = _stft(preemphasis(wav, hp.preemphasis, hp.preemphasize))\n    S = _amp_to_db(_linear_to_mel(np.abs(D))) - hp.ref_level_db\n    \n    if hp.signal_normalization:\n        return _normalize(S)\n    return S\n\ndef _lws_processor():\n    import lws\n    return lws.lws(hp.n_fft, get_hop_size(), fftsize=hp.win_size, mode=\"speech\")\n\ndef _stft(y):\n    if hp.use_lws:\n        return _lws_processor().stft(y).T  # _lws_processor reads hp directly and takes no arguments\n    else:\n        return librosa.stft(y=y, n_fft=hp.n_fft, hop_length=get_hop_size(), win_length=hp.win_size)\n\n##########################################################\n#Those are only correct when using lws!!! 
(This was messing with Wavenet quality for a long time!)\ndef num_frames(length, fsize, fshift):\n    \"\"\"Compute number of time frames of spectrogram\n    \"\"\"\n    pad = (fsize - fshift)\n    if length % fshift == 0:\n        M = (length + pad * 2 - fsize) // fshift + 1\n    else:\n        M = (length + pad * 2 - fsize) // fshift + 2\n    return M\n\n\ndef pad_lr(x, fsize, fshift):\n    \"\"\"Compute left and right padding\n    \"\"\"\n    M = num_frames(len(x), fsize, fshift)\n    pad = (fsize - fshift)\n    T = len(x) + 2 * pad\n    r = (M - 1) * fshift + fsize - T\n    return pad, pad + r\n##########################################################\n#Librosa correct padding\ndef librosa_pad_lr(x, fsize, fshift):\n    return 0, (x.shape[0] // fshift + 1) * fshift - x.shape[0]\n\n# Conversions\n_mel_basis = None\n\ndef _linear_to_mel(spectogram):\n    global _mel_basis\n    if _mel_basis is None:\n        _mel_basis = _build_mel_basis()\n    return np.dot(_mel_basis, spectogram)\n\ndef _build_mel_basis():\n    assert hp.fmax <= hp.sample_rate // 2\n    return librosa.filters.mel(hp.sample_rate, hp.n_fft, n_mels=hp.num_mels,\n                               fmin=hp.fmin, fmax=hp.fmax)\n\ndef _amp_to_db(x):\n    min_level = np.exp(hp.min_level_db / 20 * np.log(10))\n    return 20 * np.log10(np.maximum(min_level, x))\n\ndef _db_to_amp(x):\n    return np.power(10.0, (x) * 0.05)\n\ndef _normalize(S):\n    if hp.allow_clipping_in_normalization:\n        if hp.symmetric_mels:\n            return np.clip((2 * hp.max_abs_value) * ((S - hp.min_level_db) / (-hp.min_level_db)) - hp.max_abs_value,\n                           -hp.max_abs_value, hp.max_abs_value)\n        else:\n            return np.clip(hp.max_abs_value * ((S - hp.min_level_db) / (-hp.min_level_db)), 0, hp.max_abs_value)\n    \n    assert S.max() <= 0 and S.min() - hp.min_level_db >= 0\n    if hp.symmetric_mels:\n        return (2 * hp.max_abs_value) * ((S - hp.min_level_db) / (-hp.min_level_db)) - 
hp.max_abs_value\n    else:\n        return hp.max_abs_value * ((S - hp.min_level_db) / (-hp.min_level_db))\n\ndef _denormalize(D):\n    if hp.allow_clipping_in_normalization:\n        if hp.symmetric_mels:\n            return (((np.clip(D, -hp.max_abs_value,\n                              hp.max_abs_value) + hp.max_abs_value) * -hp.min_level_db / (2 * hp.max_abs_value))\n                    + hp.min_level_db)\n        else:\n            return ((np.clip(D, 0, hp.max_abs_value) * -hp.min_level_db / hp.max_abs_value) + hp.min_level_db)\n    \n    if hp.symmetric_mels:\n        return (((D + hp.max_abs_value) * -hp.min_level_db / (2 * hp.max_abs_value)) + hp.min_level_db)\n    else:\n        return ((D * -hp.min_level_db / hp.max_abs_value) + hp.min_level_db)\n"
  },
  {
    "path": "utils/ffhq_preprocess.py",
    "content": "import os\nimport cv2\nimport time\nimport glob\nimport argparse\nimport scipy\nimport numpy as np\nfrom PIL import Image\nfrom tqdm import tqdm\nfrom itertools import cycle\nfrom torch.multiprocessing import Pool, Process, set_start_method\n\n\n\"\"\"\nbrief: face alignment with FFHQ method (https://github.com/NVlabs/ffhq-dataset)\nauthor: lzhbrian (https://lzhbrian.me)\ndate: 2020.1.5\nnote: code is heavily borrowed from \n    https://github.com/NVlabs/ffhq-dataset\n    http://dlib.net/face_landmark_detection.py.html\nrequirements:\n    apt install cmake\n    conda install Pillow numpy scipy\n    pip install dlib\n    # download face landmark model from: \n    # http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2\n\"\"\"\n\nimport numpy as np\nfrom PIL import Image\nimport dlib\n\n\nclass Croper:\n    def __init__(self, path_of_lm):\n        # download model from: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2\n        self.predictor = dlib.shape_predictor(path_of_lm)\n\n    def get_landmark(self, img_np):\n        \"\"\"get landmark with dlib\n        :return: np.array shape=(68, 2)\n        \"\"\"\n        detector = dlib.get_frontal_face_detector()\n        dets = detector(img_np, 1)\n        if len(dets) == 0:\n            return None\n        d = dets[0]\n        # Get the landmarks/parts for the face in box d.\n        shape = self.predictor(img_np, d)\n        t = list(shape.parts())\n        a = []\n        for tt in t:\n            a.append([tt.x, tt.y])\n        lm = np.array(a)\n        return lm\n\n    def align_face(self, img, lm, output_size=1024):\n        \"\"\"\n        :param filepath: str\n        :return: PIL Image\n        \"\"\"\n        lm_chin = lm[0: 17]  # left-right\n        lm_eyebrow_left = lm[17: 22]  # left-right\n        lm_eyebrow_right = lm[22: 27]  # left-right\n        lm_nose = lm[27: 31]  # top-down\n        lm_nostrils = lm[31: 36]  # top-down\n        lm_eye_left = lm[36: 42]  # 
left-clockwise\n        lm_eye_right = lm[42: 48]  # left-clockwise\n        lm_mouth_outer = lm[48: 60]  # left-clockwise\n        lm_mouth_inner = lm[60: 68]  # left-clockwise\n\n        # Calculate auxiliary vectors.\n        eye_left = np.mean(lm_eye_left, axis=0)\n        eye_right = np.mean(lm_eye_right, axis=0)\n        eye_avg = (eye_left + eye_right) * 0.5\n        eye_to_eye = eye_right - eye_left\n        mouth_left = lm_mouth_outer[0]\n        mouth_right = lm_mouth_outer[6]\n        mouth_avg = (mouth_left + mouth_right) * 0.5\n        eye_to_mouth = mouth_avg - eye_avg\n\n        # Choose oriented crop rectangle.\n        x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]  \n        x /= np.hypot(*x)  \n        x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)   \n        y = np.flipud(x) * [-1, 1]\n        c = eye_avg + eye_to_mouth * 0.1\n        quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])   \n        qsize = np.hypot(*x) * 2   \n\n        # Shrink.\n        shrink = int(np.floor(qsize / output_size * 0.5))\n        if shrink > 1:\n            rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink)))\n            img = img.resize(rsize, Image.ANTIALIAS)\n            quad /= shrink\n            qsize /= shrink\n\n        # Crop.\n        border = max(int(np.rint(qsize * 0.1)), 3)\n        crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),\n                int(np.ceil(max(quad[:, 1]))))\n        crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),\n                min(crop[3] + border, img.size[1]))\n        if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:\n            quad -= crop[0:2]\n\n        # Transform.\n        quad = (quad + 0.5).flatten()\n        lx = max(min(quad[0], quad[2]), 0)\n        ly = max(min(quad[1], quad[7]), 0)\n        rx = 
min(max(quad[4], quad[6]), img.size[0])\n        ry = min(max(quad[3], quad[5]), img.size[1])  # clamp the y extent to the image height\n\n        # Save aligned image.\n        return crop, [lx, ly, rx, ry]\n    \n    def crop(self, img_np_list, xsize=512):    # detect on the first usable frame, then crop all frames of the video\n        idx = 0\n        lm = None\n        while idx < len(img_np_list) // 2:   # TODO \n            img_np = img_np_list[idx]\n            lm = self.get_landmark(img_np)\n            if lm is not None:\n                break   # face detected\n            idx += 1\n        if lm is None:\n            return None\n        \n        crop, quad = self.align_face(img=Image.fromarray(img_np), lm=lm, output_size=xsize)\n        clx, cly, crx, cry = crop\n        lx, ly, rx, ry = quad\n        lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry)\n        for _i in range(len(img_np_list)):\n            _inp = img_np_list[_i]\n            _inp = _inp[cly:cry, clx:crx]\n            _inp = _inp[ly:ry, lx:rx]\n            img_np_list[_i] = _inp\n        return img_np_list, crop, quad\n\n\n"
  },
  {
    "path": "utils/flow_util.py",
"content": "import torch\n\ndef convert_flow_to_deformation(flow):\n    r\"\"\"convert flow fields to deformations.\n\n    Args:\n        flow (tensor): Flow field obtained by the model\n    Returns:\n        deformation (tensor): The deformation used for warping\n    \"\"\"\n    b,c,h,w = flow.shape\n    flow_norm = 2 * torch.cat([flow[:,:1,...]/(w-1),flow[:,1:,...]/(h-1)], 1)\n    grid = make_coordinate_grid(flow)\n    deformation = grid + flow_norm.permute(0,2,3,1)\n    return deformation\n\ndef make_coordinate_grid(flow):\n    r\"\"\"obtain a coordinate grid with the same size as the flow field.\n\n    Args:\n        flow (tensor): Flow field obtained by the model\n    Returns:\n        grid (tensor): The grid with the same size as the input flow\n    \"\"\"    \n    b,c,h,w = flow.shape\n\n    x = torch.arange(w).to(flow)\n    y = torch.arange(h).to(flow)\n\n    x = (2 * (x / (w - 1)) - 1)\n    y = (2 * (y / (h - 1)) - 1)\n\n    yy = y.view(-1, 1).repeat(1, w)\n    xx = x.view(1, -1).repeat(h, 1)\n\n    meshed = torch.cat([xx.unsqueeze_(2), yy.unsqueeze_(2)], 2)\n    meshed = meshed.expand(b, -1, -1, -1)\n    return meshed    \n\n    \ndef warp_image(source_image, deformation):\n    r\"\"\"warp the input image according to the deformation\n\n    Args:\n        source_image (tensor): source images to be warped\n        deformation (tensor): deformations used to warp the images; value in range (-1, 1)\n    Returns:\n        output (tensor): the warped images\n    \"\"\" \n    _, h_old, w_old, _ = deformation.shape\n    _, _, h, w = source_image.shape\n    if h_old != h or w_old != w:\n        deformation = deformation.permute(0, 3, 1, 2)\n        deformation = torch.nn.functional.interpolate(deformation, size=(h, w), mode='bilinear')\n        deformation = deformation.permute(0, 2, 3, 1)\n    return torch.nn.functional.grid_sample(source_image, deformation) \n"
  },
  {
    "path": "utils/hparams.py",
    "content": "import os\n\nclass HParams:\n\tdef __init__(self, **kwargs):\n\t\tself.data = {}\n\n\t\tfor key, value in kwargs.items():\n\t\t\tself.data[key] = value\n\n\tdef __getattr__(self, key):\n\t\tif key not in self.data:\n\t\t\traise AttributeError(\"'HParams' object has no attribute %s\" % key)\n\t\treturn self.data[key]\n\n\tdef set_hparam(self, key, value):\n\t\tself.data[key] = value\n\n\n# Default hyperparameters\nhparams = HParams(\n\tnum_mels=80,  # Number of mel-spectrogram channels and local conditioning dimensionality\n\t#  network\n\trescale=True,  # Whether to rescale audio prior to preprocessing\n\trescaling_max=0.9,  # Rescaling value\n\t\n\t# Use LWS (https://github.com/Jonathan-LeRoux/lws) for STFT and phase reconstruction\n\t# It\"s preferred to set True to use with https://github.com/r9y9/wavenet_vocoder\n\t# Does not work if n_ffit is not multiple of hop_size!!\n\tuse_lws=False,\n\t\n\tn_fft=800,  # Extra window size is filled with 0 paddings to match this parameter\n\thop_size=200,  # For 16000Hz, 200 = 12.5 ms (0.0125 * sample_rate)\n\twin_size=800,  # For 16000Hz, 800 = 50 ms (If None, win_size = n_fft) (0.05 * sample_rate)\n\tsample_rate=16000,  # 16000Hz (corresponding to librispeech) (sox --i <filename>)\n\t\n\tframe_shift_ms=None,  # Can replace hop_size parameter. (Recommended: 12.5)\n\t\n\t# Mel and Linear spectrograms normalization/scaling and clipping\n\tsignal_normalization=True,\n\t# Whether to normalize mel spectrograms to some predefined range (following below parameters)\n\tallow_clipping_in_normalization=True,  # Only relevant if mel_normalization = True\n\tsymmetric_mels=True,\n\t# Whether to scale the data to be symmetric around 0. (Also multiplies the output range by 2, \n\t# faster and cleaner convergence)\n\tmax_abs_value=4.,\n\t# max absolute value of data. 
If symmetric, data will be [-max, max] else [0, max] (Must not \n\t# be too big to avoid gradient explosion, \n\t# not too small for fast convergence)\n\t# Contribution by @begeekmyfriend\n\t# Spectrogram Pre-Emphasis (Lfilter: reduces spectrogram noise and helps model certitude \n\t# levels. Also allows for better G&L phase reconstruction)\n\tpreemphasize=True,  # whether to apply filter\n\tpreemphasis=0.97,  # filter coefficient.\n\t\n\t# Limits\n\tmin_level_db=-100,\n\tref_level_db=20,\n\tfmin=55,\n\t# Set this to 55 if your speaker is male! If female, 95 should help taking off noise. (To \n\t# test depending on dataset. Pitch info: male~[65, 260], female~[100, 525])\n\tfmax=7600,  # To be increased/reduced depending on data.\n\n\t###################### Our training parameters #################################\n\timg_size=96,\n\tfps=25,\n\t\n\tbatch_size=8,\n\tinitial_learning_rate=1e-4,\n\tnepochs=300000,  ### Ctrl+C to stop whenever eval loss is consistently greater than train loss for ~10 epochs\n\tnum_workers=20,\n\tcheckpoint_interval=3000,\n\teval_interval=3000,\n\twriter_interval=300,\n\tsave_optimizer_state=True,\n\n\tsyncnet_wt=0.0,  # Initially zero; automatically set to 0.03 later for faster convergence.
\n\tsyncnet_batch_size=64,\n\tsyncnet_lr=1e-4,\n\tsyncnet_eval_interval=10000,\n\tsyncnet_checkpoint_interval=10000,\n\n\tdisc_wt=0.07,\n\tdisc_initial_learning_rate=1e-4,\n)\n\n\n\n# Debug hyperparameters\nhparamsdebug = HParams(\n\tnum_mels=80,  # Number of mel-spectrogram channels and local conditioning dimensionality\n\t#  network\n\trescale=True,  # Whether to rescale audio prior to preprocessing\n\trescaling_max=0.9,  # Rescaling value\n\t\n\t# Use LWS (https://github.com/Jonathan-LeRoux/lws) for STFT and phase reconstruction\n\t# It's preferred to set True to use with https://github.com/r9y9/wavenet_vocoder\n\t# Does not work if n_fft is not a multiple of hop_size!\n\tuse_lws=False,\n\t\n\tn_fft=800,  # Extra window size is filled with 0 paddings to match this parameter\n\thop_size=200,  # For 16000Hz, 200 = 12.5 ms (0.0125 * sample_rate)\n\twin_size=800,  # For 16000Hz, 800 = 50 ms (If None, win_size = n_fft) (0.05 * sample_rate)\n\tsample_rate=16000,  # 16000Hz (corresponding to librispeech) (sox --i <filename>)\n\t\n\tframe_shift_ms=None,  # Can replace hop_size parameter. (Recommended: 12.5)\n\t\n\t# Mel and Linear spectrograms normalization/scaling and clipping\n\tsignal_normalization=True,\n\t# Whether to normalize mel spectrograms to some predefined range (following below parameters)\n\tallow_clipping_in_normalization=True,  # Only relevant if mel_normalization = True\n\tsymmetric_mels=True,\n\t# Whether to scale the data to be symmetric around 0. (Also multiplies the output range by 2, \n\t# faster and cleaner convergence)\n\tmax_abs_value=4.,\n\t# max absolute value of data. If symmetric, data will be [-max, max] else [0, max] (Must not \n\t# be too big to avoid gradient explosion, \n\t# not too small for fast convergence)\n\t# Contribution by @begeekmyfriend\n\t# Spectrogram Pre-Emphasis (Lfilter: reduces spectrogram noise and helps model certitude \n\t# levels. 
Also allows for better G&L phase reconstruction)\n\tpreemphasize=True,  # whether to apply filter\n\tpreemphasis=0.97,  # filter coefficient.\n\t\n\t# Limits\n\tmin_level_db=-100,\n\tref_level_db=20,\n\tfmin=55,\n\t# Set this to 55 if your speaker is male! If female, 95 should help taking off noise. (To \n\t# test depending on dataset. Pitch info: male~[65, 260], female~[100, 525])\n\tfmax=7600,  # To be increased/reduced depending on data.\n)\n\n\ndef hparams_debug_string():\n\tvalues = hparams.data  # HParams stores its fields in the data dict; it has no values() method\n\thp = [\"  %s: %s\" % (name, values[name]) for name in sorted(values) if name != \"sentences\"]\n\treturn \"Hyperparameters:\\n\" + \"\\n\".join(hp)\n"
  },
  {
    "path": "utils/inference_utils.py",
    "content": "import numpy as np\nimport cv2, argparse, torch\nimport torchvision.transforms.functional as TF\n\nfrom models import load_network, load_DNet\nfrom tqdm import tqdm\nfrom PIL import Image\nfrom scipy.spatial import ConvexHull\nfrom third_part import face_detection\nfrom third_part.face3d.models import networks\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\ndef options():\n    parser = argparse.ArgumentParser(description='Inference code to lip-sync videos in the wild using Wav2Lip models')\n\n    parser.add_argument('--DNet_path', type=str, default='checkpoints/DNet.pt')\n    parser.add_argument('--LNet_path', type=str, default='checkpoints/LNet.pth')\n    parser.add_argument('--ENet_path', type=str, default='checkpoints/ENet.pth') \n    parser.add_argument('--face3d_net_path', type=str, default='checkpoints/face3d_pretrain_epoch_20.pth')                      \n    parser.add_argument('--face', type=str, help='Filepath of video/image that contains faces to use', required=True)\n    parser.add_argument('--audio', type=str, help='Filepath of video/audio file to use as raw audio source', required=True)\n    parser.add_argument('--exp_img', type=str, help='Expression template. neutral, smile or image path', default='neutral')\n    parser.add_argument('--outfile', type=str, help='Video path to save result')\n\n    parser.add_argument('--fps', type=float, help='Can be specified only if input is a static image (default: 25)', default=25., required=False)\n    parser.add_argument('--pads', nargs='+', type=int, default=[0, 20, 0, 0], help='Padding (top, bottom, left, right). 
Please adjust to include at least the chin')\n    parser.add_argument('--face_det_batch_size', type=int, help='Batch size for face detection', default=4)\n    parser.add_argument('--LNet_batch_size', type=int, help='Batch size for LNet', default=16)\n    parser.add_argument('--img_size', type=int, default=384)\n    parser.add_argument('--crop', nargs='+', type=int, default=[0, -1, 0, -1], \n                        help='Crop video to a smaller region (top, bottom, left, right). ' \n                        'Useful if multiple faces are present. -1 implies the value will be auto-inferred based on height, width')\n    parser.add_argument('--box', nargs='+', type=int, default=[-1, -1, -1, -1], \n                        help='Specify a constant bounding box for the face. Use only as a last resort if the face is not detected. '\n                        'Also, might work only if the face is not moving around much. Syntax: (top, bottom, left, right).')\n    parser.add_argument('--nosmooth', default=False, action='store_true', help='Prevent smoothing face detections over a short temporal window')\n    parser.add_argument('--static', default=False, action='store_true')\n\n    \n    parser.add_argument('--up_face', default='original')\n    parser.add_argument('--one_shot', action='store_true')\n    parser.add_argument('--without_rl1', default=False, action='store_true', help='Do not use the relative L1 loss')\n    parser.add_argument('--tmp_dir', type=str, default='temp', help='Folder to save tmp results')\n    parser.add_argument('--re_preprocess', action='store_true')\n    \n    args = parser.parse_args()\n    return args\n\nexp_aus_dict = {        # AU01_r, AU02_r, AU04_r, AU05_r, AU06_r, AU07_r, AU09_r, AU10_r, AU12_r, AU14_r, AU15_r, AU17_r, AU20_r, AU23_r, AU25_r, AU26_r, AU45_r.\n    'sad': torch.Tensor([[ 0,     0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0]]),\n   
 'angry':torch.Tensor([[0,     0,      0.3,    0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0]]),\n    'surprise': torch.Tensor([[0, 0,      0,      0.2,    0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0,      0]])\n}\n\ndef mask_postprocess(mask, thres=20):\n    mask[:thres, :] = 0; mask[-thres:, :] = 0\n    mask[:, :thres] = 0; mask[:, -thres:] = 0\n    mask = cv2.GaussianBlur(mask, (101, 101), 11)\n    mask = cv2.GaussianBlur(mask, (101, 101), 11)\n    return mask.astype(np.float32)\n\ndef trans_image(image):\n    image = TF.resize(\n        image, size=256, interpolation=Image.BICUBIC)\n    image = TF.to_tensor(image)\n    image = TF.normalize(image, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))\n    return image\n\ndef obtain_seq_index(index, num_frames):\n    seq = list(range(index-13, index+13))\n    seq = [min(max(item, 0), num_frames-1) for item in seq]\n    return seq\n\ndef transform_semantic(semantic, frame_index, crop_norm_ratio=None):\n    index = obtain_seq_index(frame_index, semantic.shape[0])\n    \n    coeff_3dmm = semantic[index,...]\n    ex_coeff = coeff_3dmm[:,80:144]  # expression (64 dims)\n    angles = coeff_3dmm[:,224:227]  # Euler angles for pose\n    translation = coeff_3dmm[:,254:257]  # translation\n    crop = coeff_3dmm[:,259:262]  # crop params\n\n    if crop_norm_ratio:\n        crop[:, -3] = crop[:, -3] * crop_norm_ratio\n\n    coeff_3dmm = np.concatenate([ex_coeff, angles, translation, crop], 1)\n    return torch.Tensor(coeff_3dmm).permute(1,0)\n\ndef find_crop_norm_ratio(source_coeff, target_coeffs):\n    alpha = 0.3\n    exp_diff = np.mean(np.abs(target_coeffs[:,80:144] - source_coeff[:,80:144]), 1)  # mean expression difference\n    angle_diff = np.mean(np.abs(target_coeffs[:,224:227] - source_coeff[:,224:227]), 1)  # mean angle difference\n    index = np.argmin(alpha*exp_diff + (1-alpha)*angle_diff)  # index of the smallest combined difference\n    crop_norm_ratio = 
source_coeff[:,-3] / target_coeffs[index:index+1, -3]\n    return crop_norm_ratio\n\ndef get_smoothened_boxes(boxes, T):\n    for i in range(len(boxes)):\n        if i + T > len(boxes):\n            window = boxes[len(boxes) - T:]\n        else:\n            window = boxes[i : i + T]\n        boxes[i] = np.mean(window, axis=0)\n    return boxes\n\ndef face_detect(images, args, jaw_correction=False, detector=None):\n    if detector is None:\n        device = 'cuda:0' if torch.cuda.is_available() else 'cpu'\n        detector = face_detection.FaceAlignment(face_detection.LandmarksType._2D, \n                                                flip_input=False, device=device)\n\n    batch_size = args.face_det_batch_size\n    while True:\n        predictions = []\n        try:\n            for i in tqdm(range(0, len(images), batch_size), desc='FaceDet:'):\n                predictions.extend(detector.get_detections_for_batch(np.array(images[i:i + batch_size])))\n        except RuntimeError:\n            if batch_size == 1: \n                raise RuntimeError('Image too big to run face detection on GPU, even with batch size 1. Please use a smaller input resolution.')\n            batch_size //= 2\n            print('Recovering from OOM error; New batch size: {}'.format(batch_size))\n            continue\n        break\n\n    results = []\n    pady1, pady2, padx1, padx2 = args.pads if jaw_correction else (0, 20, 0, 0)\n    for rect, image in zip(predictions, images):\n        if rect is None:\n            cv2.imwrite('temp/faulty_frame.jpg', image)  # dump the frame where no face was detected\n            raise ValueError('Face not detected! 
Ensure the video contains a face in all the frames.')\n\n        y1 = max(0, rect[1] - pady1)\n        y2 = min(image.shape[0], rect[3] + pady2)\n        x1 = max(0, rect[0] - padx1)\n        x2 = min(image.shape[1], rect[2] + padx2)\n        results.append([x1, y1, x2, y2])\n\n    boxes = np.array(results)\n    if not args.nosmooth: boxes = get_smoothened_boxes(boxes, T=5)\n    results = [[image[y1:y2, x1:x2], (y1, y2, x1, x2)] for image, (x1, y1, x2, y2) in zip(images, boxes)]\n\n    del detector\n    torch.cuda.empty_cache()\n    return results\n\ndef _load(checkpoint_path, device):\n    if device == 'cuda':\n        checkpoint = torch.load(checkpoint_path)\n    else:\n        checkpoint = torch.load(checkpoint_path,\n                                map_location=lambda storage, loc: storage)\n    return checkpoint\n\ndef split_coeff(coeffs):\n    \"\"\"\n    Return:\n        coeffs_dict     -- a dict of torch.tensors\n\n    Parameters:\n        coeffs          -- torch.tensor, size (B, 256)\n    \"\"\"\n    id_coeffs = coeffs[:, :80]\n    exp_coeffs = coeffs[:, 80: 144]\n    tex_coeffs = coeffs[:, 144: 224]\n    angles = coeffs[:, 224: 227]\n    gammas = coeffs[:, 227: 254]\n    translations = coeffs[:, 254:]\n    return {\n        'id': id_coeffs,\n        'exp': exp_coeffs,\n        'tex': tex_coeffs,\n        'angle': angles,\n        'gamma': gammas,\n        'trans': translations\n    }\n\ndef Laplacian_Pyramid_Blending_with_mask(A, B, m, num_levels=6):\n    # generate Gaussian pyramid for A,B and mask\n    GA = A.copy()\n    GB = B.copy()\n    GM = m.copy()\n    gpA = [GA]\n    gpB = [GB]\n    gpM = [GM]\n    for i in range(num_levels):\n        GA = cv2.pyrDown(GA)\n        GB = cv2.pyrDown(GB)\n        GM = cv2.pyrDown(GM)\n        gpA.append(np.float32(GA))\n        gpB.append(np.float32(GB))\n        gpM.append(np.float32(GM))\n\n    # generate Laplacian Pyramids for 
A,B and masks\n    lpA  = [gpA[num_levels-1]] # the bottom of the Lap-pyr holds the last (smallest) Gauss level\n    lpB  = [gpB[num_levels-1]]\n    gpMr = [gpM[num_levels-1]]\n    for i in range(num_levels-1,0,-1):\n        # Laplacian: subtract upscaled version of lower level from current level\n        # to get the high frequencies\n        LA = np.subtract(gpA[i-1], cv2.pyrUp(gpA[i]))\n        LB = np.subtract(gpB[i-1], cv2.pyrUp(gpB[i]))\n        lpA.append(LA)\n        lpB.append(LB)\n        gpMr.append(gpM[i-1]) # also reverse the masks\n\n    # Now blend images according to mask in each level\n    LS = []\n    for la,lb,gm in zip(lpA,lpB,gpMr):\n        gm = gm[:,:,np.newaxis]\n        ls = la * gm + lb * (1.0 - gm)\n        LS.append(ls)\n\n    # now reconstruct\n    ls_ = LS[0]\n    for i in range(1,num_levels):\n        ls_ = cv2.pyrUp(ls_)\n        ls_ = cv2.add(ls_, LS[i])\n    return ls_\n\ndef load_model(args, device):\n    D_Net = load_DNet(args).to(device)\n    model = load_network(args).to(device)\n    return D_Net, model\n\ndef normalize_kp(kp_source, kp_driving, kp_driving_initial, adapt_movement_scale=False,\n                 use_relative_movement=False, use_relative_jacobian=False):\n    if adapt_movement_scale:\n        source_area = ConvexHull(kp_source['value'][0].data.cpu().numpy()).volume\n        driving_area = ConvexHull(kp_driving_initial['value'][0].data.cpu().numpy()).volume\n        adapt_movement_scale = np.sqrt(source_area) / np.sqrt(driving_area)\n    else:\n        adapt_movement_scale = 1\n\n    kp_new = {k: v for k, v in kp_driving.items()}\n    if use_relative_movement:\n        kp_value_diff = (kp_driving['value'] - kp_driving_initial['value'])\n        kp_value_diff *= adapt_movement_scale\n        kp_new['value'] = kp_value_diff + kp_source['value']\n\n        if use_relative_jacobian:\n            jacobian_diff = torch.matmul(kp_driving['jacobian'], torch.inverse(kp_driving_initial['jacobian']))\n            
kp_new['jacobian'] = torch.matmul(jacobian_diff, kp_source['jacobian'])\n    return kp_new\n\ndef load_face3d_net(ckpt_path, device):\n    net_recon = networks.define_net_recon(net_recon='resnet50', use_last_fc=False, init_path='').to(device)\n    checkpoint = torch.load(ckpt_path, map_location=device)    \n    net_recon.load_state_dict(checkpoint['net_recon'])\n    net_recon.eval()\n    return net_recon"
  },
  {
    "path": "webUI.py",
    "content": "import random\nimport subprocess\nimport os\nimport gradio\nimport gradio as gr\nimport shutil\n\ncurrent_dir = os.path.dirname(os.path.abspath(__file__))\n\n\ndef convert(segment_length, video, audio, progress=gradio.Progress()):\n    if segment_length is None:\n        segment_length=0\n    print(video, audio)\n\n    if segment_length != 0:\n        video_segments = cut_video_segments(video, segment_length)\n        audio_segments = cut_audio_segments(audio, segment_length)\n    else:\n        video_path = os.path.join('temp/video', os.path.basename(video))\n        shutil.move(video, video_path)\n        video_segments = [video_path]\n        audio_path = os.path.join('temp/audio', os.path.basename(audio))\n        shutil.move(audio, audio_path)\n        audio_segments = [audio_path]\n\n    processed_segments = []\n    for i, (video_seg, audio_seg) in progress.tqdm(enumerate(zip(video_segments, audio_segments))):\n        processed_output = process_segment(video_seg, audio_seg, i)\n        processed_segments.append(processed_output)\n\n    output_file = f\"results/output_{random.randint(0,1000)}.mp4\"\n    concatenate_videos(processed_segments, output_file)\n\n    # Remove temporary files\n    cleanup_temp_files(video_segments + audio_segments)\n\n    # Return the concatenated video file\n    return output_file\n\n\ndef cleanup_temp_files(file_list):\n    for file_path in file_list:\n        if os.path.isfile(file_path):\n            os.remove(file_path)\n\n\ndef cut_video_segments(video_file, segment_length):\n    temp_directory = 'temp/audio'\n    shutil.rmtree(temp_directory, ignore_errors=True)\n    shutil.os.makedirs(temp_directory, exist_ok=True)\n    segment_template = f\"{temp_directory}/{random.randint(0,1000)}_%03d.mp4\"\n    command = [\"ffmpeg\", \"-i\", video_file, \"-c\", \"copy\", \"-f\",\n               \"segment\", \"-segment_time\", str(segment_length), segment_template]\n    subprocess.run(command, check=True)\n\n    
video_segments = [segment_template %\n                      i for i in range(len(os.listdir(temp_directory)))]\n    return video_segments\n\n\ndef cut_audio_segments(audio_file, segment_length):\n    temp_directory = 'temp/video'\n    shutil.rmtree(temp_directory, ignore_errors=True)\n    shutil.os.makedirs(temp_directory, exist_ok=True)\n    segment_template = f\"{temp_directory}/{random.randint(0,1000)}_%03d.mp3\"\n    command = [\"ffmpeg\", \"-i\", audio_file, \"-f\", \"segment\",\n               \"-segment_time\", str(segment_length), segment_template]\n    subprocess.run(command, check=True)\n\n    audio_segments = [segment_template %\n                      i for i in range(len(os.listdir(temp_directory)))]\n    return audio_segments\n\n\ndef process_segment(video_seg, audio_seg, i):\n    output_file = f\"results/{random.randint(10,100000)}_{i}.mp4\"\n    command = [\"python\", \"inference.py\", \"--face\", video_seg,\n               \"--audio\", audio_seg, \"--outfile\", output_file]\n    subprocess.run(command, check=True)\n\n    return output_file\n\n\ndef concatenate_videos(video_segments, output_file):\n    with open(\"segments.txt\", \"w\") as file:\n        for segment in video_segments:\n            file.write(f\"file '{segment}'\\n\")\n    command = [\"ffmpeg\", \"-f\", \"concat\", \"-i\",\n               \"segments.txt\", \"-c\", \"copy\", output_file]\n    subprocess.run(command, check=True)\n\n\nwith gradio.Blocks(\n    title=\"Audio-based Lip Synchronization\",\n    theme=gr.themes.Base(\n        primary_hue=gr.themes.colors.green,\n        font=[\"Source Sans Pro\", \"Arial\", \"sans-serif\"],\n        font_mono=['JetBrains mono', \"Consolas\", 'Courier New']\n    ),\n) as demo:\n    with gradio.Row():\n        gradio.Markdown(\"# Audio-based Lip Synchronization\")\n    with gradio.Row():\n        with gradio.Column():\n            with gradio.Row():\n                seg = gradio.Number(\n                    label=\"segment length (Second), 0 for 
no segmentation\")\n            with gradio.Row():\n                with gradio.Column():\n                    v = gradio.Video(label='SOurce Face')\n\n                with gradio.Column():\n                    a = gradio.Audio(\n                        type='filepath', label='Target Audio')\n\n            with gradio.Row():\n                btn = gradio.Button(value=\"Synthesize\",variant=\"primary\")\n            with gradio.Row():\n                gradio.Examples(\n                    label=\"Face Examples\",\n                    examples=[\n                        os.path.join(os.path.dirname(__file__),\n                                     \"examples/face/1.mp4\"),\n                        os.path.join(os.path.dirname(__file__),\n                                     \"examples/face/2.mp4\"),\n                        os.path.join(os.path.dirname(__file__),\n                                     \"examples/face/3.mp4\"),\n                        os.path.join(os.path.dirname(__file__),\n                                     \"examples/face/4.mp4\"),\n                        os.path.join(os.path.dirname(__file__),\n                                     \"examples/face/5.mp4\"),\n                    ],\n                    inputs=[v],\n                    fn=convert,\n                )\n            with gradio.Row():\n                gradio.Examples(\n                    label=\"Audio Examples\",\n                    examples=[\n                        os.path.join(os.path.dirname(__file__),\n                                     \"examples/audio/1.wav\"),\n                        os.path.join(os.path.dirname(__file__),\n                                     \"examples/audio/2.wav\"),\n                    ],\n                    inputs=[a],\n                    fn=convert,\n                )\n\n        with gradio.Column():\n            o = gradio.Video(label=\"Output Video\")\n\n    btn.click(fn=convert, inputs=[seg, v, a], outputs=[o])\n\ndemo.queue().launch()\n"
  }
]