Repository: zyxElsa/CAST_pytorch
Branch: main
Commit: 67a7d9c1e9c1
Files: 28
Total size: 169.4 KB
Directory structure:
gitextract_cjj_65gk/
├── LICENSE
├── README.md
├── data/
│ ├── __init__.py
│ ├── base_dataset.py
│ ├── image_folder.py
│ └── unaligned_dataset.py
├── experiments/
│ ├── __init__.py
│ └── __main__.py
├── models/
│ ├── MSP.py
│ ├── __init__.py
│ ├── base_model.py
│ ├── cast_model.py
│ ├── net.py
│ ├── networks.py
│ └── torch_utils.py
├── options/
│ ├── __init__.py
│ ├── base_options.py
│ ├── test_options.py
│ └── train_options.py
├── requirements.txt
├── test.py
├── train.py
└── util/
├── __init__.py
├── get_data.py
├── html.py
├── image_pool.py
├── util.py
└── visualizer.py
================================================
FILE CONTENTS
================================================
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: README.md
================================================
<div id="top"></div>
<!-- ABOUT THE PROJECT -->
## Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning (CAST) <br> A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning (UCAST)

We provide our PyTorch implementation of ''Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning'' (SIGGRAPH 2022), a simple yet powerful model for arbitrary image style transfer, and ''A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning'' (ACM Transactions on Graphics), an improved arbitrary style transfer method.
In this work, we tackle the challenging problem of arbitrary image style transfer using a novel style feature representation learning method.
A suitable style representation, as a key component in image stylization tasks, is essential to achieve satisfactory results.
Existing approaches based on deep neural networks achieve reasonable results with guidance from second-order statistics, such as the Gram matrix of content features.
However, they do not leverage sufficient style information, which results in artifacts such as local distortions and style inconsistency.
To address these issues, we propose to learn style representation directly from image features instead of their second-order statistics, by analyzing the similarities and differences between multiple styles and considering the style distribution.
For details, see the papers [CAST](http://arxiv.org/abs/2205.09542) and [UCAST](https://arxiv.org/abs/2303.12710), and the accompanying [video](https://youtu.be/3RG2yjLKTus).
<p align="right">(<a href="#top">back to top</a>)</p>
<!-- GETTING STARTED -->
## Getting Started
### Prerequisites
Python 3.6 or above and PyTorch 1.6 or above are required. For the full package list, see requirements.txt:
```sh
pip install -r requirements.txt
```
<p align="right">(<a href="#top">back to top</a>)</p>
### Installation
Clone the repo:
```sh
git clone https://github.com/zyxElsa/CAST_pytorch.git
```
<p align="right">(<a href="#top">back to top</a>)</p>
### Datasets
Put your content images in ./datasets/{dataset_name}/testA and your style images in ./datasets/{dataset_name}/testB (trainA/trainB for training).
Example directory hierarchy:
```sh
CAST_pytorch
|--- datasets
     |--- {dataset_name}
          |--- trainA
          |--- trainB
          |--- testA
          |--- testB
```
Then pass --dataroot ./datasets/{dataset_name} on the command line.
<p align="right">(<a href="#top">back to top</a>)</p>
### Train
Train the CAST model:
```sh
python train.py --dataroot ./datasets/{dataset_name} --name {model_name}
```
The pretrained style classification model should be placed at ./models/style_vgg.pth.
Google Drive: Check [here](https://drive.google.com/file/d/12JKlL6QsVWkz6Dag54K59PAZigFBS6PQ/view?usp=sharing)
The pretrained content encoder should be placed at ./models/vgg_normalised.pth.
Google Drive: Check [here](https://drive.google.com/file/d/1DKYRWJUKbmrvEba56tuihy1N6VrNZFwl/view?usp=sharing)
<p align="right">(<a href="#top">back to top</a>)</p>
### Test
Test the CAST or UCAST model:
```sh
python test.py --dataroot ./datasets/{dataset_name} --name {model_name}
```
The pretrained model should be placed at ./checkpoints/CAST_model/*.pth.
BaiduNetdisk: Check [CAST model](https://pan.baidu.com/s/12oPk3195fntMEHdlsHNwkQ) (passwd:cast)
Google Drive: Download [CAST model](https://drive.google.com/file/d/11dZqu95QfnAgkzgR1NTJfQutz8JlwRY8/view?usp=sharing) and [UCAST model](https://drive.google.com/file/d/1rU8haiPG2BDhh5BNSwngjMKBKdutDYTJ/view?usp=sharing) (for video style transfer).
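If you prefer to drive inference from Python, the following minimal sketch uses the `create_dataset`/`create_model` interfaces defined in `data/__init__.py` and `models/__init__.py`; the behavior of `TestOptions.parse()` and `model.test()` is an assumption based on the CycleGAN/CUT-style codebase this project builds on, not a verbatim copy of test.py:
```python
# Minimal inference sketch; option handling and model.test() are assumptions.
from options.test_options import TestOptions
from data import create_dataset
from models import create_model

opt = TestOptions().parse()    # e.g. --dataroot ./datasets/{dataset_name} --name {model_name}
dataset = create_dataset(opt)  # wraps CustomDatasetDataLoader (see data/__init__.py)
model = create_model(opt)      # instantiates the model chosen by --model
model.setup(opt)               # loads weights from ./checkpoints/{model_name}

for data in dataset:
    model.set_input(data)      # unpack {'A', 'B', 'A_paths', 'B_paths'}
    model.test()               # forward pass without gradients
```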
<p align="right">(<a href="#top">back to top</a>)</p>
### Citation
```bibtex
@inproceedings{zhang2020cast,
author = {Zhang, Yuxin and Tang, Fan and Dong, Weiming and Huang, Haibin and Ma, Chongyang and Lee, Tong-Yee and Xu, Changsheng},
title = {Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning},
booktitle = {ACM SIGGRAPH},
year = {2022}}
```
```bibtex
@article{zhang2023unified,
title={A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning},
author={Zhang, Yuxin and Tang, Fan and Dong, Weiming and Huang, Haibin and Ma, Chongyang and Lee, Tong-Yee and Xu, Changsheng},
journal={ACM Transactions on Graphics},
year={2023},
publisher={ACM New York, NY}
}
```
<p align="right">(<a href="#top">back to top</a>)</p>
<!-- CONTACT -->
## Contact
Please feel free to open an issue or contact us directly if you have questions, need help, or need explanations. Write to:
zhangyuxin2020@ia.ac.cn
<p align="right">(<a href="#top">back to top</a>)</p>
================================================
FILE: data/__init__.py
================================================
"""This package includes all the modules related to data loading and preprocessing
To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset.
You need to implement four functions:
-- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt).
-- <__len__>: return the size of dataset.
-- <__getitem__>: get a data point from data loader.
-- <modify_commandline_options>: (optionally) add dataset-specific options and set default options.
Now you can use the dataset class by specifying flag '--dataset_mode dummy'.
See our template dataset class 'template_dataset.py' for more details.
"""
import importlib
import torch.utils.data
from data.base_dataset import BaseDataset
def find_dataset_using_name(dataset_name):
"""Import the module "data/[dataset_name]_dataset.py".
In the file, the class called DatasetNameDataset() will
be instantiated. It has to be a subclass of BaseDataset,
and it is case-insensitive.
"""
dataset_filename = "data." + dataset_name + "_dataset"
datasetlib = importlib.import_module(dataset_filename)
dataset = None
target_dataset_name = dataset_name.replace('_', '') + 'dataset'
for name, cls in datasetlib.__dict__.items():
if name.lower() == target_dataset_name.lower() \
and issubclass(cls, BaseDataset):
dataset = cls
if dataset is None:
raise NotImplementedError("In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase." % (dataset_filename, target_dataset_name))
return dataset
def get_option_setter(dataset_name):
"""Return the static method <modify_commandline_options> of the dataset class."""
dataset_class = find_dataset_using_name(dataset_name)
return dataset_class.modify_commandline_options
def create_dataset(opt):
"""Create a dataset given the option.
This function wraps the class CustomDatasetDataLoader.
This is the main interface between this package and 'train.py'/'test.py'
Example:
>>> from data import create_dataset
>>> dataset = create_dataset(opt)
"""
data_loader = CustomDatasetDataLoader(opt)
dataset = data_loader.load_data()
return dataset
class CustomDatasetDataLoader():
"""Wrapper class of Dataset class that performs multi-threaded data loading"""
def __init__(self, opt):
"""Initialize this class
Step 1: create a dataset instance given the name [dataset_mode]
Step 2: create a multi-threaded data loader.
"""
self.opt = opt
dataset_class = find_dataset_using_name(opt.dataset_mode)
self.dataset = dataset_class(opt)
print("dataset [%s] was created" % type(self.dataset).__name__)
self.dataloader = torch.utils.data.DataLoader(
self.dataset,
batch_size=opt.batch_size,
shuffle=not opt.serial_batches,
num_workers=int(opt.num_threads),
drop_last=opt.isTrain,
)
def set_epoch(self, epoch):
self.dataset.current_epoch = epoch
def load_data(self):
return self
def __len__(self):
"""Return the number of data in the dataset"""
return min(len(self.dataset), self.opt.max_dataset_size)
def __iter__(self):
"""Return a batch of data"""
for i, data in enumerate(self.dataloader):
if i * self.opt.batch_size >= self.opt.max_dataset_size:
break
yield data
================================================
FILE: data/base_dataset.py
================================================
"""This module implements an abstract base class (ABC) 'BaseDataset' for datasets.
It also includes common transformation functions (e.g., get_transform, __scale_width), which can be later used in subclasses.
"""
import random
import numpy as np
import torch.utils.data as data
from PIL import Image
import torchvision.transforms as transforms
from abc import ABC, abstractmethod
class BaseDataset(data.Dataset, ABC):
"""This class is an abstract base class (ABC) for datasets.
To create a subclass, you need to implement the following four functions:
-- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt).
-- <__len__>: return the size of dataset.
-- <__getitem__>: get a data point.
-- <modify_commandline_options>: (optionally) add dataset-specific options and set default options.
"""
def __init__(self, opt):
"""Initialize the class; save the options in the class
Parameters:
opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
"""
self.opt = opt
self.root = opt.dataroot
self.current_epoch = 0
@staticmethod
def modify_commandline_options(parser, is_train):
"""Add new dataset-specific options, and rewrite default values for existing options.
Parameters:
parser -- original option parser
is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
Returns:
the modified parser.
"""
return parser
@abstractmethod
def __len__(self):
"""Return the total number of images in the dataset."""
return 0
@abstractmethod
def __getitem__(self, index):
"""Return a data point and its metadata information.
Parameters:
index -- a random integer for data indexing
Returns:
a dictionary of data with their names. It usually contains the data itself and its metadata information.
"""
pass
def get_params(opt, size):
w, h = size
new_h = h
new_w = w
if opt.preprocess == 'resize_and_crop':
new_h = new_w = opt.load_size
elif opt.preprocess == 'scale_width_and_crop':
new_w = opt.load_size
new_h = opt.load_size * h // w
x = random.randint(0, np.maximum(0, new_w - opt.crop_size))
y = random.randint(0, np.maximum(0, new_h - opt.crop_size))
flip = random.random() > 0.5
return {'crop_pos': (x, y), 'flip': flip}
def get_transform(opt, params=None, grayscale=False, method=Image.BICUBIC, convert=True):
transform_list = []
if grayscale:
transform_list.append(transforms.Grayscale(1))
if 'fixsize' in opt.preprocess:
transform_list.append(transforms.Resize(params["size"], method))
if 'resize' in opt.preprocess:
osize = [opt.load_size, opt.load_size]
if "gta2cityscapes" in opt.dataroot:
osize[0] = opt.load_size // 2
transform_list.append(transforms.Resize(osize, method))
elif 'scale_width' in opt.preprocess:
transform_list.append(transforms.Lambda(lambda img: __scale_width(img, opt.load_size, opt.crop_size, method)))
elif 'scale_shortside' in opt.preprocess:
transform_list.append(transforms.Lambda(lambda img: __scale_shortside(img, opt.load_size, opt.crop_size, method)))
if 'zoom' in opt.preprocess:
if params is None:
transform_list.append(transforms.Lambda(lambda img: __random_zoom(img, opt.load_size, opt.crop_size, method)))
else:
transform_list.append(transforms.Lambda(lambda img: __random_zoom(img, opt.load_size, opt.crop_size, method, factor=params["scale_factor"])))
if 'crop' in opt.preprocess:
if params is None or 'crop_pos' not in params:
transform_list.append(transforms.RandomCrop(opt.crop_size))
else:
transform_list.append(transforms.Lambda(lambda img: __crop(img, params['crop_pos'], opt.crop_size)))
if 'patch' in opt.preprocess:
transform_list.append(transforms.Lambda(lambda img: __patch(img, params['patch_index'], opt.crop_size)))
if 'trim' in opt.preprocess:
transform_list.append(transforms.Lambda(lambda img: __trim(img, opt.crop_size)))
# originally guarded by "if opt.preprocess == 'none'"; now applied unconditionally
transform_list.append(transforms.Lambda(lambda img: __make_power_2(img, base=4, method=method)))
if not opt.no_flip:
if params is None or 'flip' not in params:
transform_list.append(transforms.RandomHorizontalFlip())
elif 'flip' in params:
transform_list.append(transforms.Lambda(lambda img: __flip(img, params['flip'])))
if convert:
transform_list += [transforms.ToTensor()]
if grayscale:
transform_list += [transforms.Normalize((0.5,), (0.5,))]
else:
transform_list += [transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
return transforms.Compose(transform_list)
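# Usage sketch: to apply identical random crop/flip parameters to a pair of images
# (assuming opt.preprocess == 'resize_and_crop'), compute params once and reuse them:
#
#   params = get_params(opt, A_img.size)
#   paired_transform = get_transform(opt, params=params)
#   A, B = paired_transform(A_img), paired_transform(B_img)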
def __make_power_2(img, base, method=Image.BICUBIC):
ow, oh = img.size
h = int(round(oh / base) * base)
w = int(round(ow / base) * base)
if h == oh and w == ow:
return img
return img.resize((w, h), method)
def __random_zoom(img, target_width, crop_width, method=Image.BICUBIC, factor=None):
if factor is None:
zoom_level = np.random.uniform(0.8, 1.0, size=[2])
else:
zoom_level = (factor[0], factor[1])
iw, ih = img.size
zoomw = max(crop_width, iw * zoom_level[0])
zoomh = max(crop_width, ih * zoom_level[1])
img = img.resize((int(round(zoomw)), int(round(zoomh))), method)
return img
def __scale_shortside(img, target_width, crop_width, method=Image.BICUBIC):
ow, oh = img.size
shortside = min(ow, oh)
if shortside >= target_width:
return img
else:
scale = target_width / shortside
return img.resize((round(ow * scale), round(oh * scale)), method)
def __trim(img, trim_width):
ow, oh = img.size
if ow > trim_width:
xstart = np.random.randint(ow - trim_width)
xend = xstart + trim_width
else:
xstart = 0
xend = ow
if oh > trim_width:
ystart = np.random.randint(oh - trim_width)
yend = ystart + trim_width
else:
ystart = 0
yend = oh
return img.crop((xstart, ystart, xend, yend))
def __scale_width(img, target_width, crop_width, method=Image.BICUBIC):
ow, oh = img.size
if ow == target_width and oh >= crop_width:
return img
w = target_width
h = int(max(target_width * oh / ow, crop_width))
return img.resize((w, h), method)
def __crop(img, pos, size):
ow, oh = img.size
x1, y1 = pos
tw = th = size
if (ow > tw or oh > th):
return img.crop((x1, y1, x1 + tw, y1 + th))
return img
def __patch(img, index, size):
ow, oh = img.size
nw, nh = ow // size, oh // size
roomx = ow - nw * size
roomy = oh - nh * size
startx = np.random.randint(int(roomx) + 1)
starty = np.random.randint(int(roomy) + 1)
index = index % (nw * nh)
ix = index // nh
iy = index % nh
gridx = startx + ix * size
gridy = starty + iy * size
return img.crop((gridx, gridy, gridx + size, gridy + size))
def __flip(img, flip):
if flip:
return img.transpose(Image.FLIP_LEFT_RIGHT)
return img
def __print_size_warning(ow, oh, w, h):
"""Print warning information about image size(only print once)"""
if not hasattr(__print_size_warning, 'has_printed'):
print("The image size needs to be a multiple of 4. "
"The loaded image size was (%d, %d), so it was adjusted to "
"(%d, %d). This adjustment will be done to all images "
"whose sizes are not multiples of 4" % (ow, oh, w, h))
__print_size_warning.has_printed = True
================================================
FILE: data/image_folder.py
================================================
"""A modified image folder class
We modify the official PyTorch image folder (https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py)
so that this class can load images from both current directory and its subdirectories.
"""
import torch.utils.data as data
from PIL import Image
import os
import os.path
IMG_EXTENSIONS = [
'.jpg', '.JPG', '.jpeg', '.JPEG',
'.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP',
'.tif', '.TIF', '.tiff', '.TIFF',
]
def is_image_file(filename):
return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)
def make_dataset(dir, max_dataset_size=float("inf")):
images = []
assert os.path.isdir(dir) or os.path.islink(dir), '%s is not a valid directory' % dir
for root, _, fnames in sorted(os.walk(dir, followlinks=True)):
for fname in fnames:
if is_image_file(fname):
path = os.path.join(root, fname)
images.append(path)
return images[:min(max_dataset_size, len(images))]
def default_loader(path):
return Image.open(path).convert('RGB')
class ImageFolder(data.Dataset):
def __init__(self, root, transform=None, return_paths=False,
loader=default_loader):
imgs = make_dataset(root)
if len(imgs) == 0:
raise(RuntimeError("Found 0 images in: " + root + "\n"
"Supported image extensions are: " + ",".join(IMG_EXTENSIONS)))
self.root = root
self.imgs = imgs
self.transform = transform
self.return_paths = return_paths
self.loader = loader
def __getitem__(self, index):
path = self.imgs[index]
img = self.loader(path)
if self.transform is not None:
img = self.transform(img)
if self.return_paths:
return img, path
else:
return img
def __len__(self):
return len(self.imgs)
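# Usage sketch (the path is illustrative): list and load every image below a folder:
#
#   folder = ImageFolder('./datasets/example/testA', return_paths=True)
#   img, path = folder[0]              # PIL RGB image and its file path
#   print('%d images found' % len(folder))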
================================================
FILE: data/unaligned_dataset.py
================================================
import os.path
from data.base_dataset import BaseDataset, get_transform
from data.image_folder import make_dataset
from PIL import Image
import random
import util.util as util
class UnalignedDataset(BaseDataset):
"""
This dataset class can load unaligned/unpaired datasets.
It requires two directories to host training images from domain A '/path/to/data/trainA'
and from domain B '/path/to/data/trainB' respectively.
You can train the model with the dataset flag '--dataroot /path/to/data'.
Similarly, you need to prepare two directories:
'/path/to/data/testA' and '/path/to/data/testB' during test time.
"""
def __init__(self, opt):
"""Initialize this dataset class.
Parameters:
opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
"""
BaseDataset.__init__(self, opt)
self.dir_A = os.path.join(opt.dataroot, opt.phase + 'A') # create a path '/path/to/data/trainA'
self.dir_B = os.path.join(opt.dataroot, opt.phase + 'B') # create a path '/path/to/data/trainB'
if opt.phase == "test" and not os.path.exists(self.dir_A) \
and os.path.exists(os.path.join(opt.dataroot, "valA")):
self.dir_A = os.path.join(opt.dataroot, "valA")
self.dir_B = os.path.join(opt.dataroot, "valB")
self.A_paths = sorted(make_dataset(self.dir_A, opt.max_dataset_size)) # load images from '/path/to/data/trainA'
self.B_paths = sorted(make_dataset(self.dir_B, opt.max_dataset_size)) # load images from '/path/to/data/trainB'
self.A_size = len(self.A_paths) # get the size of dataset A
self.B_size = len(self.B_paths) # get the size of dataset B
def __getitem__(self, index):
"""Return a data point and its metadata information.
Parameters:
index (int) -- a random integer for data indexing
Returns a dictionary that contains A, B, A_paths and B_paths
A (tensor) -- an image in the input domain
B (tensor) -- its corresponding image in the target domain
A_paths (str) -- image paths
B_paths (str) -- image paths
"""
A_path = self.A_paths[index % self.A_size] # make sure index is within the range
if self.opt.serial_batches: # use a fixed pair of images from the two domains
index_B = index % self.B_size
else: # randomize the index for domain B to avoid fixed pairs.
index_B = random.randint(0, self.B_size - 1)
B_path = self.B_paths[index_B]
A_img = Image.open(A_path).convert('RGB')
B_img = Image.open(B_path).convert('RGB')
# Apply image transformation
# For FastCUT mode, if in finetuning phase (learning rate is decaying),
# do not perform resize-crop data augmentation of CycleGAN.
# print('current_epoch', self.current_epoch)
is_finetuning = self.opt.isTrain and self.current_epoch > self.opt.n_epochs
modified_opt = util.copyconf(self.opt, load_size=self.opt.crop_size if is_finetuning else self.opt.load_size)
transform = get_transform(modified_opt)
A = transform(A_img)
B = transform(B_img)
return {'A': A, 'B': B, 'A_paths': A_path, 'B_paths': B_path}
def __len__(self):
"""Return the total number of images in the dataset.
As the two datasets may have different numbers of images,
we take the maximum of the two sizes.
"""
return max(self.A_size, self.B_size)
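# This class is selected by passing '--dataset_mode unaligned'; create_dataset(opt)
# then yields batches shaped as the dict returned by __getitem__ above, e.g.:
#
#   for data in create_dataset(opt):   # with opt.dataset_mode == 'unaligned'
#       content, style = data['A'], data['B']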
================================================
FILE: experiments/__init__.py
================================================
import os
import importlib
def find_launcher_using_name(launcher_name):
# cur_dir = os.path.dirname(os.path.abspath(__file__))
# pythonfiles = glob.glob(cur_dir + '/**/*.py')
launcher_filename = "experiments.{}_launcher".format(launcher_name)
launcherlib = importlib.import_module(launcher_filename)
# In the file, the class called LauncherNameLauncher() will
# be instantiated. It has to be a subclass of BaseLauncher,
# and it is case-insensitive.
launcher = None
target_launcher_name = launcher_name.replace('_', '') + 'launcher'
for name, cls in launcherlib.__dict__.items():
if name.lower() == target_launcher_name.lower():
launcher = cls
if launcher is None:
raise ValueError("In %s.py, there should be a subclass of BaseLauncher "
"with class name that matches %s in lowercase." %
(launcher_filename, target_launcher_name))
return launcher
if __name__ == "__main__":
import sys
import pickle
assert len(sys.argv) >= 3
name = sys.argv[1]
Launcher = find_launcher_using_name(name)
cache = "/tmp/tmux_launcher/{}".format(name)
if os.path.isfile(cache):
instance = pickle.load(open(cache, 'rb'))
else:
instance = Launcher()
cmd = sys.argv[2]
if cmd == "launch":
instance.launch()
elif cmd == "stop":
instance.stop()
elif cmd == "send":
expid = int(sys.argv[3])
cmd = int(sys.argv[4])
instance.send_command(expid, cmd)
os.makedirs("/tmp/tmux_launcher/", exist_ok=True)
pickle.dump(instance, open(cache, 'wb'))
================================================
FILE: experiments/__main__.py
================================================
import os
import importlib
def find_launcher_using_name(launcher_name):
# cur_dir = os.path.dirname(os.path.abspath(__file__))
# pythonfiles = glob.glob(cur_dir + '/**/*.py')
launcher_filename = "experiments.{}_launcher".format(launcher_name)
launcherlib = importlib.import_module(launcher_filename)
# In the file, the class called LauncherNameLauncher() will
# be instantiated. It has to be a subclass of BaseLauncher,
# and it is case-insensitive.
launcher = None
# target_launcher_name = launcher_name.replace('_', '') + 'launcher'
for name, cls in launcherlib.__dict__.items():
if name.lower() == "launcher":
launcher = cls
if launcher is None:
raise ValueError("In %s.py, there should be a class named Launcher" % launcher_filename)
return launcher
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('name')
parser.add_argument('cmd')
parser.add_argument('id', nargs='+', type=str)
parser.add_argument('--mode', default=None)
parser.add_argument('--which_epoch', default=None)
parser.add_argument('--continue_train', action='store_true')
parser.add_argument('--subdir', default='')
parser.add_argument('--title', default='')
parser.add_argument('--gpu_id', default=None, type=int)
parser.add_argument('--phase', default='test')
opt = parser.parse_args()
name = opt.name
Launcher = find_launcher_using_name(name)
instance = Launcher()
cmd = opt.cmd
ids = 'all' if 'all' in opt.id else [int(i) for i in opt.id]
if cmd == "launch":
instance.launch(ids, continue_train=opt.continue_train)
elif cmd == "stop":
instance.stop()
elif cmd == "send":
assert False
elif cmd == "close":
instance.close()
elif cmd == "dry":
instance.dry()
elif cmd == "relaunch":
instance.close()
instance.launch(ids, continue_train=opt.continue_train)
elif cmd == "run" or cmd == "train":
assert len(ids) == 1, '%s is invalid for run command' % (' '.join(opt.id))
expid = ids[0]
instance.run_command(instance.commands(), expid,
continue_train=opt.continue_train,
gpu_id=opt.gpu_id)
elif cmd == 'launch_test':
instance.launch(ids, test=True)
elif cmd == "run_test" or cmd == "test":
test_commands = instance.test_commands()
if ids == "all":
ids = list(range(len(test_commands)))
for expid in ids:
instance.run_command(test_commands, expid, opt.which_epoch,
gpu_id=opt.gpu_id)
if expid < len(ids) - 1:
os.system("sleep 5s")
elif cmd == "print_names":
instance.print_names(ids, test=False)
elif cmd == "print_test_names":
instance.print_names(ids, test=True)
elif cmd == "create_comparison_html":
instance.create_comparison_html(name, ids, opt.subdir, opt.title, opt.phase)
else:
raise ValueError("Command not recognized")
================================================
FILE: models/MSP.py
================================================
import numpy as np
import torch.nn as nn
import torch
from torch.nn.parameter import Parameter
import torch.nn.functional as F
from .torch_utils import concat_all_gather, get_world_size
class StyleExtractor(nn.Module):
"""Defines a PatchGAN discriminator"""
def __init__(self, encoder, gpu_ids = []):
"""Construct a PatchGAN discriminator
Parameters:
input_nc (int) -- the number of channels in input images
ndf (int) -- the number of filters in the last conv layer
n_layers (int) -- the number of conv layers in the discriminator
norm_layer -- normalization layer
"""
super(StyleExtractor, self).__init__()
enc_layers = list(encoder.children())
self.enc_1 = nn.Sequential(*enc_layers[:6]) # input -> relu1_1
self.enc_2 = nn.Sequential(*enc_layers[6:13]) # relu1_1 -> relu2_1
self.enc_3 = nn.Sequential(*enc_layers[13:20]) # relu2_1 -> relu3_1
self.enc_4 = nn.Sequential(*enc_layers[20:33]) # relu3_1 -> relu4_1
self.enc_5 = nn.Sequential(*enc_layers[33:46]) # relu4_1 -> relu5_1
self.enc_6 = nn.Sequential(*enc_layers[46:70]) # relu5_1 -> maxpool
# keep the encoder stages trainable (requires_grad=True)
for name in ['enc_1', 'enc_2','enc_3', 'enc_4', 'enc_5', 'enc_6']:
for param in getattr(self, name).parameters():
param.requires_grad = True
# Class Activation Map
# self.gap_fc0 = nn.Linear(64, 1, bias=False)
# self.gmp_fc0 = nn.Linear(64, 1, bias=False)
# self.gap_fc1 = nn.Linear(128, 1, bias=False)
# self.gmp_fc1 = nn.Linear(128, 1, bias=False)
# self.gap_fc2 = nn.Linear(256, 1, bias=False)
# self.gmp_fc2 = nn.Linear(256, 1, bias=False)
# self.gap_fc3 = nn.Linear(512, 1, bias=False)
# self.gmp_fc3 = nn.Linear(512, 1, bias=False)
# self.gap_fc4 = nn.Linear(512, 1, bias=False)
# self.gmp_fc4 = nn.Linear(512, 1, bias=False)
# self.gap_fc5 = nn.Linear(512, 1, bias=False)
# self.gmp_fc5 = nn.Linear(512, 1, bias=False)
self.conv1x1_0 = nn.Conv2d(128, 64, kernel_size=1, stride=1, bias=True)
self.conv1x1_1 = nn.Conv2d(256, 128, kernel_size=1, stride=1, bias=True)
self.conv1x1_2 = nn.Conv2d(512, 256, kernel_size=1, stride=1, bias=True)
self.conv1x1_3 = nn.Conv2d(1024, 512, kernel_size=1, stride=1, bias=True)
self.conv1x1_4 = nn.Conv2d(1024, 512, kernel_size=1, stride=1, bias=True)
self.conv1x1_5 = nn.Conv2d(1024, 512, kernel_size=1, stride=1, bias=True)
self.relu = nn.ReLU(True)
# extract relu1_1 through relu5_1 and the final pooled features from the input image
def encode_with_intermediate(self, input):
results = [input]
for i in range(6):
func = getattr(self, 'enc_{:d}'.format(i + 1))
results.append(func(results[-1]))
return results[1:]
def forward(self, input, index):
"""Standard forward."""
feats = self.encode_with_intermediate(input)
codes = []
for x in index:
code = feats[x].clone()
gap = torch.nn.functional.adaptive_avg_pool2d(code, (1,1))
gmp = torch.nn.functional.adaptive_max_pool2d(code, (1,1))
conv1x1 = getattr(self, 'conv1x1_{:d}'.format(x))
code = torch.cat([gap, gmp], 1)
code = self.relu(conv1x1(code))
codes.append(code)
return codes
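# Usage sketch: extract style codes at all six depths from a batch of images
# (shape claims follow the conv1x1 definitions above):
#
#   extractor = StyleExtractor(vgg)                  # vgg defined below in this file
#   codes = extractor(images, [0, 1, 2, 3, 4, 5])    # images: (N, 3, H, W)
#   # codes[i]: (N, C_i, 1, 1) with C_i in (64, 128, 256, 512, 512, 512)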
class Projector(nn.Module):
def __init__(self, projector, gpu_ids = []):
super(Projector, self).__init__()
self.projector0 = nn.Sequential(
nn.Linear(64, 1024),
nn.ReLU(True),
#nn.Dropout(),
nn.Linear(1024, 2048),
nn.ReLU(True),
nn.Linear(2048, 2048),
)
self.projector1 = nn.Sequential(
#nn.Dropout(),
nn.Linear(128, 1024),
nn.ReLU(True),
#nn.Dropout(),
nn.Linear(1024, 2048),
nn.ReLU(True),
nn.Linear(2048, 2048),
)
self.projector2 = nn.Sequential(
#nn.Dropout(),
nn.Linear(256,1024),
nn.ReLU(True),
#nn.Dropout(),
nn.Linear(1024, 2048),
nn.ReLU(True),
nn.Linear(2048, 2048),
)
self.projector3 = nn.Sequential(
#nn.Dropout(),
nn.Linear(512, 1024),
nn.ReLU(True),
#nn.Dropout(),
nn.Linear(1024, 2048),
nn.ReLU(True),
nn.Linear(2048, 2048),
)
self.projector4 = nn.Sequential(
#nn.Dropout(),
nn.Linear(512, 1024),
nn.ReLU(True),
#nn.Dropout(),
nn.Linear(1024, 2048),
nn.ReLU(True),
nn.Linear(2048, 2048),
)
self.projector5 = nn.Sequential(
#nn.Dropout(),
nn.Linear(512, 1024),
nn.ReLU(True),
#nn.Dropout(),
nn.Linear(1024, 2048),
nn.ReLU(True),
nn.Linear(2048, 2048),
)
def forward(self, input, index):
"""Standard forward."""
num = 0
projections = []
for x in index:
projector = getattr(self, 'projector{:d}'.format(x))
code = input[num].view(input[num].size(0), -1)
projection = projector(code).view(code.size(0), -1)
projection = nn.functional.normalize(projection)
projections.append(projection)
num += 1
return projections
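# Usage sketch: map the pooled style codes to L2-normalized 2048-d embeddings for
# the contrastive loss (the first constructor argument is unused by this class):
#
#   projector = Projector(None)
#   projections = projector(codes, [0, 1, 2, 3, 4, 5])   # each: (N, 2048)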
def make_layers(cfg, batch_norm=True):
layers = []
in_channels = 3
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
return nn.Sequential(*layers)
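# In the cfg list consumed by make_layers, an integer adds a 3x3 conv (+ BatchNorm + ReLU)
# with that many output channels and 'M' adds a 2x2 max-pool; the leading 3 below thus
# creates an initial 3->3 conv acting directly on the RGB input.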
vgg = make_layers([3, 64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M',
512, 512, 512, 512, 'M', 512, 512, 'M', 512, 512, 'M'])
class InfoNCELoss(nn.Module):
def __init__(self, temperature, feature_dim, queue_size):
super().__init__()
self.tau = temperature
self.queue_size = queue_size
self.world_size = get_world_size()
data0 = torch.randn(2048, queue_size)
data0 = F.normalize(data0, dim=0)
data1 = torch.randn(2048, queue_size)
data1 = F.normalize(data1, dim=0)
data2 = torch.randn(2048, queue_size)
data2 = F.normalize(data2, dim=0)
data3 = torch.randn(2048, queue_size)
data3 = F.normalize(data3, dim=0)
data4 = torch.randn(2048, queue_size)
data4 = F.normalize(data4, dim=0)
data5 = torch.randn(2048, queue_size)
data5 = F.normalize(data5, dim=0)
self.register_buffer("queue_data_A0", data0)
self.register_buffer("queue_ptr_A0", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_B0", data0)
self.register_buffer("queue_ptr_B0", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_A2", data2)
self.register_buffer("queue_ptr_A2", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_B2", data2)
self.register_buffer("queue_ptr_B2", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_A4", data4)
self.register_buffer("queue_ptr_A4", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_B4", data4)
self.register_buffer("queue_ptr_B4", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_A1", data1)
self.register_buffer("queue_ptr_A1", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_B1", data1)
self.register_buffer("queue_ptr_B1", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_A3", data3)
self.register_buffer("queue_ptr_A3", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_B3", data3)
self.register_buffer("queue_ptr_B3", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_A5", data5)
self.register_buffer("queue_ptr_A5", torch.zeros(1, dtype=torch.long))
self.register_buffer("queue_data_B5", data5)
self.register_buffer("queue_ptr_B5", torch.zeros(1, dtype=torch.long))
def forward(self, query, key, style = 'real'):
# positive logits: Nx1
l_pos = torch.einsum("nc,nc->n", (query, key)).unsqueeze(-1)
# negative logits: NxK
if style == 'real_A0':
queue = self.queue_data_A0.clone().detach()
elif style == 'real_A1':
queue = self.queue_data_A1.clone().detach()
elif style == 'real_A2':
queue = self.queue_data_A2.clone().detach()
elif style == 'real_A3':
queue = self.queue_data_A3.clone().detach()
elif style == 'real_A4':
queue = self.queue_data_A4.clone().detach()
elif style == 'real_A5':
queue = self.queue_data_A5.clone().detach()
elif style == 'fake_A':
queue = self.queue_data_fake_A.clone().detach()
elif style == 'real_B0':
queue = self.queue_data_B0.clone().detach()
elif style == 'real_B1':
queue = self.queue_data_B1.clone().detach()
elif style == 'real_B2':
queue = self.queue_data_B2.clone().detach()
elif style == 'real_B3':
queue = self.queue_data_B3.clone().detach()
elif style == 'real_B4':
queue = self.queue_data_B4.clone().detach()
elif style == 'real_B5':
queue = self.queue_data_B5.clone().detach()
elif style == 'fake_B':
queue = self.queue_data_fake_B.clone().detach()
else:
raise NotImplementedError('QUEUE: style is not recognized')
l_neg = torch.einsum("nc,ck->nk", (query, queue))
# logits: Nx(1+K)
logits = torch.cat((l_pos, l_neg), dim=1)
# labels: positive key indicators
labels = torch.zeros(logits.size(0), dtype=torch.long, device=query.device)
return F.cross_entropy(logits / self.tau, labels)
@torch.no_grad()
def dequeue_and_enqueue(self, keys, style = 'real'):
# gather from all gpus
if self.world_size > 1:
keys = concat_all_gather(keys, self.world_size)
batch_size = keys.size(0)
# replace the keys at ptr (dequeue and enqueue)
if style == 'real_A0':
ptr = int(self.queue_ptr_A0)
assert self.queue_size % batch_size == 0
self.queue_data_A0[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_A0[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_A1':
ptr = int(self.queue_ptr_A1)
assert self.queue_size % batch_size == 0
self.queue_data_A1[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_A1[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_A2':
ptr = int(self.queue_ptr_A2)
assert self.queue_size % batch_size == 0
self.queue_data_A2[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_A2[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_A3':
ptr = int(self.queue_ptr_A3)
assert self.queue_size % batch_size == 0
self.queue_data_A3[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_A3[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_A4':
ptr = int(self.queue_ptr_A4)
assert self.queue_size % batch_size == 0
self.queue_data_A4[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_A4[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_A5':
ptr = int(self.queue_ptr_A5)
assert self.queue_size % batch_size == 0
self.queue_data_A5[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_A5[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_B0':
ptr = int(self.queue_ptr_B0)
assert self.queue_size % batch_size == 0
self.queue_data_B0[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_B0[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_B1':
ptr = int(self.queue_ptr_B1)
assert self.queue_size % batch_size == 0
self.queue_data_B1[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_B1[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_B2':
ptr = int(self.queue_ptr_B2)
assert self.queue_size % batch_size == 0
self.queue_data_B2[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_B2[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_B3':
ptr = int(self.queue_ptr_B3)
assert self.queue_size % batch_size == 0
self.queue_data_B3[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_B3[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_B4':
ptr = int(self.queue_ptr_B4)
assert self.queue_size % batch_size == 0
self.queue_data_B4[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_B4[0] = (ptr + batch_size) % self.queue_size
elif style == 'real_B5':
ptr = int(self.queue_ptr_B5)
assert self.queue_size % batch_size == 0
self.queue_data_B5[:, ptr:ptr + batch_size] = keys.T
self.queue_ptr_B5[0] = (ptr + batch_size) % self.queue_size
else:
raise NotImplementedError('QUEUE: style is not recognized')
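# Usage sketch (hyperparameter values are illustrative, not taken from the paper;
# note that feature_dim is unused here since the queues are hardcoded to 2048 dims):
#
#   criterion = InfoNCELoss(temperature=0.07, feature_dim=2048, queue_size=1024)
#   loss = criterion(q, k.detach(), style='real_B0')   # q, k: (N, 2048), L2-normalized
#   criterion.dequeue_and_enqueue(k.detach(), style='real_B0')
#
# queue_size must be divisible by the (gathered) batch size, per the asserts above.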
================================================
FILE: models/__init__.py
================================================
"""This package contains modules related to objective functions, optimizations, and network architectures.
To add a custom model class called 'dummy', you need to add a file called 'dummy_model.py' and define a subclass DummyModel inherited from BaseModel.
You need to implement the following five functions:
-- <__init__>: initialize the class; first call BaseModel.__init__(self, opt).
-- <set_input>: unpack data from dataset and apply preprocessing.
-- <forward>: produce intermediate results.
-- <optimize_parameters>: calculate loss, gradients, and update network weights.
-- <modify_commandline_options>: (optionally) add model-specific options and set default options.
In the function <__init__>, you need to define four lists:
-- self.loss_names (str list): specify the training losses that you want to plot and save.
-- self.model_names (str list): define networks used in our training.
-- self.visual_names (str list): specify the images that you want to display and save.
-- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example.
Now you can use the model class by specifying flag '--model dummy'.
See our template model class 'template_model.py' for more details.
"""
import importlib
from models.base_model import BaseModel
def find_model_using_name(model_name):
"""Import the module "models/[model_name]_model.py".
In the file, the class called [ModelName]Model() will
be instantiated. It has to be a subclass of BaseModel,
and it is case-insensitive.
"""
model_filename = "models." + model_name + "_model"
modellib = importlib.import_module(model_filename)
model = None
target_model_name = model_name.replace('_', '') + 'model'
for name, cls in modellib.__dict__.items():
if name.lower() == target_model_name.lower() \
and issubclass(cls, BaseModel):
model = cls
if model is None:
raise NotImplementedError("In %s.py, there should be a subclass of BaseModel with class name that matches %s in lowercase." % (model_filename, target_model_name))
return model
def get_option_setter(model_name):
"""Return the static method <modify_commandline_options> of the model class."""
model_class = find_model_using_name(model_name)
return model_class.modify_commandline_options
def create_model(opt):
"""Create a model given the option.
This function wraps the model class found by <find_model_using_name>.
This is the main interface between this package and 'train.py'/'test.py'
Example:
>>> from models import create_model
>>> model = create_model(opt)
"""
model = find_model_using_name(opt.model)
instance = model(opt)
print("model [%s] was created" % type(instance).__name__)
return instance
================================================
FILE: models/base_model.py
================================================
import os
import torch
from collections import OrderedDict
from abc import ABC, abstractmethod
from . import networks
class BaseModel(ABC):
"""This class is an abstract base class (ABC) for models.
To create a subclass, you need to implement the following five functions:
-- <__init__>: initialize the class; first call BaseModel.__init__(self, opt).
-- <set_input>: unpack data from dataset and apply preprocessing.
-- <forward>: produce intermediate results.
-- <optimize_parameters>: calculate losses, gradients, and update network weights.
-- <modify_commandline_options>: (optionally) add model-specific options and set default options.
"""
def __init__(self, opt):
"""Initialize the BaseModel class.
Parameters:
opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
When creating your custom class, you need to implement your own initialization.
In this function, you should first call <BaseModel.__init__(self, opt)>
Then, you need to define four lists:
-- self.loss_names (str list): specify the training losses that you want to plot and save.
-- self.model_names (str list): define networks used in our training.
-- self.visual_names (str list): specify the images that you want to display and save.
-- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example.
"""
self.opt = opt
self.gpu_ids = opt.gpu_ids
self.isTrain = opt.isTrain
self.device = torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') # get device name: CPU or GPU
self.save_dir = os.path.join(opt.checkpoints_dir, opt.name) # save all the checkpoints to save_dir
if opt.preprocess != 'scale_width': # with [scale_width], input images might have different sizes, which hurts the performance of cudnn.benchmark.
torch.backends.cudnn.benchmark = True
self.loss_names = []
self.model_names = []
self.visual_names = []
self.optimizers = []
self.image_paths = []
self.metric = 0 # used for learning rate policy 'plateau'
@staticmethod
def dict_grad_hook_factory(add_func=lambda x: x):
saved_dict = dict()
def hook_gen(name):
def grad_hook(grad):
saved_vals = add_func(grad)
saved_dict[name] = saved_vals
return grad_hook
return hook_gen, saved_dict
@staticmethod
def modify_commandline_options(parser, is_train):
"""Add new model-specific options, and rewrite default values for existing options.
Parameters:
parser -- original option parser
is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
Returns:
the modified parser.
"""
return parser
@abstractmethod
def set_input(self, input):
"""Unpack input data from the dataloader and perform necessary pre-processing steps.
Parameters:
input (dict): includes the data itself and its metadata information.
"""
pass
@abstractmethod
def forward(self):
"""Run forward pass; called by both functions <optimize_parameters> and <test>."""
pass
@abstractmethod
def optimize_parameters(self):
"""Calculate losses, gradients, and update network weights; called in every training iteration"""
pass
def setup(self, opt):
"""Load and print networks; create schedulers
Parameters:
opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
"""
if self.isTrain:
self.schedulers = [networks.get_scheduler(optimizer, opt) for optimizer in self.optimizers]
if not self.isTrain or opt.continue_train:
load_suffix = opt.epoch
self.load_networks(load_suffix)
self.print_networks(opt.verbose)
def parallelize(self):
for name in self.model_names:
if isinstance(name, str):
net = getattr(self, 'net' + name)
setattr(self, 'net' + name, torch.nn.DataParallel(net, self.opt.gpu_ids))
def data_dependent_initialize(self, data):
pass
def eval(self):
"""Make models eval mode during test time"""
for name in self.model_names:
if isinstance(name, str):
net = getattr(self, 'net' + name)
net.eval()
def test(self):
"""Forward function used in test time.
This function wraps <forward> function in no_grad() so we don't save intermediate steps for backprop
It also calls <compute_visuals> to produce additional visualization results
"""
with torch.no_grad():
self.forward()
self.compute_visuals()
def compute_visuals(self):
"""Calculate additional output images for visdom and HTML visualization"""
pass
def get_image_paths(self):
""" Return image paths that are used to load current data"""
return self.image_paths
def update_learning_rate(self):
"""Update learning rates for all the networks; called at the end of every epoch"""
for scheduler in self.schedulers:
if self.opt.lr_policy == 'plateau':
scheduler.step(self.metric)
else:
scheduler.step()
lr = self.optimizers[0].param_groups[0]['lr']
print('learning rate = %.7f' % lr)
def get_current_visuals(self):
"""Return visualization images. train.py will display these images with visdom, and save the images to a HTML"""
visual_ret = OrderedDict()
for name in self.visual_names:
if isinstance(name, str):
visual_ret[name] = getattr(self, name)
return visual_ret
def get_current_losses(self):
"""Return traning losses / errors. train.py will print out these errors on console, and save them to a file"""
errors_ret = OrderedDict()
for name in self.loss_names:
if isinstance(name, str):
errors_ret[name] = float(getattr(self, 'loss_' + name)) # float(...) works for both scalar tensor and float number
return errors_ret
def save_networks(self, epoch):
"""Save all the networks to the disk.
Parameters:
epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
"""
for name in self.model_names:
if isinstance(name, str):
save_filename = '%s_net_%s.pth' % (epoch, name)
save_path = os.path.join(self.save_dir, save_filename)
net = getattr(self, 'net' + name)
if len(self.gpu_ids) > 0 and torch.cuda.is_available():
torch.save(net.module.cpu().state_dict(), save_path)
net.cuda(self.gpu_ids[0])
else:
torch.save(net.cpu().state_dict(), save_path)
def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0):
"""Fix InstanceNorm checkpoints incompatibility (prior to 0.4)"""
key = keys[i]
if i + 1 == len(keys): # at the end, pointing to a parameter/buffer
if module.__class__.__name__.startswith('InstanceNorm') and \
(key == 'running_mean' or key == 'running_var'):
if getattr(module, key) is None:
state_dict.pop('.'.join(keys))
if module.__class__.__name__.startswith('InstanceNorm') and \
(key == 'num_batches_tracked'):
state_dict.pop('.'.join(keys))
else:
self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1)
def load_networks(self, epoch):
"""Load all the networks from the disk.
Parameters:
epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
"""
for name in self.model_names:
if isinstance(name, str):
load_filename = '%s_net_%s.pth' % (epoch, name)
if self.opt.isTrain and self.opt.pretrained_name is not None:
load_dir = os.path.join(self.opt.checkpoints_dir, self.opt.pretrained_name)
else:
load_dir = self.save_dir
load_path = os.path.join(load_dir, load_filename)
net = getattr(self, 'net' + name)
if isinstance(net, torch.nn.DataParallel):
net = net.module
print('loading the model from %s' % load_path)
# if you are using PyTorch newer than 0.4 (e.g., built from
# GitHub source), you can remove str() on self.device
state_dict = torch.load(load_path, map_location=str(self.device))
if hasattr(state_dict, '_metadata'):
del state_dict._metadata
# patch InstanceNorm checkpoints prior to 0.4
# for key in list(state_dict.keys()): # need to copy keys here because we mutate in loop
# self.__patch_instance_norm_state_dict(state_dict, net, key.split('.'))
net.load_state_dict(state_dict)
def print_networks(self, verbose):
"""Print the total number of parameters in the network and (if verbose) network architecture
Parameters:
verbose (bool) -- if verbose: print the network architecture
"""
print('---------- Networks initialized -------------')
for name in self.model_names:
if isinstance(name, str):
net = getattr(self, 'net' + name)
num_params = 0
for param in net.parameters():
num_params += param.numel()
if verbose:
print(net)
print('[Network %s] Total number of parameters : %.3f M' % (name, num_params / 1e6))
print('-----------------------------------------------')
def set_requires_grad(self, nets, requires_grad=False):
"""Set requies_grad=Fasle for all the networks to avoid unnecessary computations
Parameters:
nets (network list) -- a list of networks
requires_grad (bool) -- whether the networks require gradients or not
"""
if not isinstance(nets, list):
nets = [nets]
for net in nets:
if net is not None:
for param in net.parameters():
param.requires_grad = requires_grad
def generate_visuals_for_evaluation(self, data, mode):
return {}
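# --- Added illustrative sketch (not part of the original repository) ---
# How train.py drives a BaseModel subclass; method names follow this file's
# API, while `dataset` and the option names are assumed from the data/ and
# options/ packages:
#
#     model = create_model(opt)            # from models/__init__.py
#     model.setup(opt)                     # load/print networks, build schedulers
#     for epoch in range(opt.epoch_count, opt.n_epochs + opt.n_epochs_decay + 1):
#         for data in dataset:
#             model.set_input(data)        # unpack batch onto self.device
#             model.optimize_parameters()  # forward + backward + optimizer steps
#         model.update_learning_rate()     # step all schedulers once per epoch
#         model.save_networks('latest')    # writes <save_dir>/latest_net_<name>.pth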
================================================
FILE: models/cast_model.py
================================================
import itertools
import torch
from .base_model import BaseModel
from . import networks
from . import net
from . import MSP
import util.util as util
from util.image_pool import ImagePool
import torch.nn as nn
from torch.nn import init
import kornia.augmentation as K
class CASTModel(BaseModel):
""" This class implements CAST model.
This code is inspired by DCLGAN
"""
@staticmethod
def modify_commandline_options(parser, is_train=True):
""" Configures options specific for CAST """
parser.add_argument('--CAST_mode', type=str, default="CAST", choices=['CAST'])
parser.add_argument('--lambda_GAN_G_A', type=float, default=0.1, help='weight for GAN loss:GAN(G(Ic, Is))')
parser.add_argument('--lambda_GAN_G_B', type=float, default=0.1, help='weight for GAN loss:GAN(G(Is, Ic))')
parser.add_argument('--lambda_GAN_D_A', type=float, default=1.0, help='weight for GAN loss:GAN(G(Is, Ic))')
parser.add_argument('--lambda_GAN_D_B', type=float, default=1.0, help='weight for GAN loss:GAN(G(Ic, Is))')
parser.add_argument('--lambda_NCE_G', type=float, default=0.05, help='weight for NCE loss: NCE(G(Ic, Is), Is)')
parser.add_argument('--lambda_NCE_D', type=float, default=1.0, help='weight for NCE loss: NCE(I, I+, I-)')
parser.add_argument('--lambda_CYC', type=float, default=4.0, help='weight for L1 reconstruction loss: ||Ic - G(G(Ic, Is), Ic)||')
parser.add_argument('--nce_layers', type=str, default='0,1,2,3', help='compute NCE loss on which layers')
parser.set_defaults(pool_size=0) # no image pooling
opt, _ = parser.parse_known_args()
# Set default parameters for CAST.
if opt.CAST_mode.lower() == "cast":
pass
else:
raise ValueError(opt.CAST_mode)
return parser
def __init__(self, opt):
BaseModel.__init__(self, opt)
# specify the training losses you want to print out.
# The training/test scripts will call <BaseModel.get_current_losses>
self.loss_names = ['G']
self.visual_names = ['real_A', 'fake_B', 'real_B']
if self.opt.lambda_GAN_G_A > 0.0 and self.isTrain:
self.loss_names += [ 'G_A']
if self.opt.lambda_GAN_G_B > 0.0 and self.isTrain:
self.loss_names += [ 'G_B']
if self.opt.lambda_GAN_D_A > 0.0 and self.isTrain:
self.loss_names += ['D_A']
if self.opt.lambda_GAN_D_B > 0.0 and self.isTrain:
self.loss_names += ['D_B']
if self.opt.lambda_NCE_G > 0.0 and self.isTrain:
self.loss_names += [ 'G_NCE_style']
if self.opt.lambda_NCE_D > 0.0 and self.isTrain:
self.loss_names += [ 'NCE_D']
if self.opt.lambda_CYC > 0.0 and self.isTrain:
self.visual_names += ['rec_A', 'rec_B']
self.loss_names += ['cyc']
if self.isTrain:
self.model_names = ['AE','Dec_A', 'Dec_B', 'D', 'P_style', 'D_A', 'D_B']
else: # during test time, only load the encoder and decoders
self.model_names = ['AE','Dec_A', 'Dec_B']
# define networks
vgg = net.vgg
vgg.load_state_dict(torch.load('models/vgg_normalised.pth'))
vgg = nn.Sequential(*list(vgg.children())[:31])
self.netAE = net.ADAIN_Encoder(vgg, self.gpu_ids)
self.netDec_A = net.Decoder(self.gpu_ids)
self.netDec_B = net.Decoder(self.gpu_ids)
self.netAE = init_net(self.netAE, 'normal', 0.02, self.gpu_ids) # keep the returned net; on GPU it is DataParallel-wrapped, which save_networks expects
self.netDec_A = init_net(self.netDec_A, 'normal', 0.02, self.gpu_ids)
self.netDec_B = init_net(self.netDec_B, 'normal', 0.02, self.gpu_ids)
if self.isTrain:
style_vgg = MSP.vgg
style_vgg.load_state_dict(torch.load('models/style_vgg.pth'))
style_vgg = nn.Sequential(*list(style_vgg.children()))
self.netD = MSP.StyleExtractor(style_vgg, self.gpu_ids)
self.netP_style = MSP.Projector(self.gpu_ids)
self.netD = init_net(self.netD, 'normal', 0.02, self.gpu_ids)
self.netP_style = init_net(self.netP_style, 'normal', 0.02, self.gpu_ids)
self.netD_A = networks.define_D(opt.output_nc, opt.ndf, opt.netD, opt.n_layers_D,
opt.crop_size, opt.feature_dim, opt.max_conv_dim,
opt.normD, opt.init_type, opt.init_gain, opt.no_antialias,
self.gpu_ids, opt)
self.netD_B = networks.define_D(opt.output_nc, opt.ndf, opt.netD, opt.n_layers_D,
opt.crop_size, opt.feature_dim, opt.max_conv_dim,
opt.normD, opt.init_type, opt.init_gain, opt.no_antialias,
self.gpu_ids, opt)
self.fake_pool = ImagePool(opt.pool_size) # create image buffer to store previously generated images
self.criterionGAN = networks.GANLoss(opt.gan_mode).to(self.device)
self.nce_layers = [int(i) for i in self.opt.nce_layers.split(',')]
self.nce_loss = MSP.InfoNCELoss(opt.temperature, opt.hypersphere_dim,
opt.queue_size).to(self.device)
self.mse_loss = nn.MSELoss()
self.patch_sampler = K.RandomResizedCrop((256,256),scale=(0.8,1.0),ratio=(0.75,1.33)).to(self.device)
self.criterionCyc = torch.nn.L1Loss().to(self.device)
self.optimizer_G = torch.optim.Adam(itertools.chain(self.netAE.parameters(), self.netDec_A.parameters(), self.netDec_B.parameters()),
lr=opt.lr_G, betas=(opt.beta1, opt.beta2))
self.optimizer_D = torch.optim.Adam(itertools.chain(self.netD_A.parameters(), self.netD_B.parameters()),
lr=opt.lr_D, betas=(opt.beta1, opt.beta2))
self.optimizer_D_NCE = torch.optim.Adam(itertools.chain(self.netD.parameters(), self.netP_style.parameters()),
lr=opt.lr_D_NCE, betas=(opt.beta1, opt.beta2))
self.optimizers.append(self.optimizer_G)
self.optimizers.append(self.optimizer_D)
self.optimizers.append(self.optimizer_D_NCE)
def optimize_parameters(self):
# forward
self.forward()
# update D
if self.opt.lambda_GAN_D_A > 0.0 or self.opt.lambda_GAN_D_B > 0.0:
self.set_requires_grad([self.netD_A, self.netD_B], True)
self.set_requires_grad([self.netD, self.netP_style, self.netAE, self.netDec_A,self.netDec_B ], False)
self.optimizer_D.zero_grad()
self.loss_D = self.backward_D()
self.loss_D.backward(retain_graph=True)
self.optimizer_D.step()
# update MSP
if self.opt.lambda_NCE_D > 0.0:
self.set_requires_grad([self.netD, self.netP_style], True)
self.set_requires_grad([self.netAE, self.netDec_A,self.netDec_B, self.netD_A, self.netD_B ], False)
self.optimizer_D_NCE.zero_grad()
self.loss_NCE_D = self.backward_D_NCEloss()
self.loss_NCE_D.backward(retain_graph=True)
self.optimizer_D_NCE.step()
# update G
self.set_requires_grad([self.netD, self.netP_style, self.netD_A, self.netD_B], False)
self.set_requires_grad([self.netAE, self.netDec_A,self.netDec_B], True)
self.optimizer_G.zero_grad()
self.loss_G = self.compute_G_loss()
self.loss_G.backward()
self.optimizer_G.step()
def set_input(self, input):
"""Unpack input data from the dataloader and perform necessary pre-processing steps.
Parameters:
input (dict): include the data itself and its metadata information.
The option 'direction' can be used to swap domain A and domain B.
"""
AtoB = self.opt.direction == 'AtoB'
self.real_A = input['A' if AtoB else 'B'].to(self.device)
self.real_B = input['B' if AtoB else 'A'].to(self.device)
self.image_paths = input['A_paths' if AtoB else 'B_paths']
def forward(self):
"""Run forward pass; called by both functions <optimize_parameters> and <test>."""
self.real_A_feat = self.netAE(self.real_A, self.real_B) # stylize content A with the style of B
self.fake_B = self.netDec_B(self.real_A_feat)
if self.isTrain:
self.real_B_feat = self.netAE(self.real_B, self.real_A) # stylize content B with the style of A
self.fake_A = self.netDec_A(self.real_B_feat)
if self.opt.lambda_CYC > 0.0:
self.rec_A_feat = self.netAE(self.fake_B, self.real_A)
self.rec_B_feat = self.netAE(self.fake_A, self.real_B)
self.rec_A = self.netDec_A(self.rec_A_feat)
self.rec_B = self.netDec_B(self.rec_B_feat)
def backward_D_basic(self, netD, content, style, fake):
"""Calculate GAN loss for the discriminator
Parameters:
netD (network) -- the discriminator D
content (tensor) -- content images (unused here; kept for interface symmetry)
style (tensor) -- real style images
fake (tensor) -- images generated by a generator
Return the discriminator loss; the caller is responsible for calling backward().
"""
loss_D_real = loss_D_fake = 0
# Real
pred_real = netD(style)
loss_D_real = self.criterionGAN(pred_real, True)
# Fake
pred_fake = netD(fake.detach())
loss_D_fake = self.criterionGAN(pred_fake, False)
# Combined loss and calculate gradients
loss_D = (loss_D_real + loss_D_fake)*0.5
return loss_D
def backward_D_NCEloss(self):
"""
Calculate NCE loss for the discriminator
"""
#query_A = query_B =0.0
real_A = self.netD(self.patch_sampler(self.real_A), self.nce_layers)
real_B = self.netD(self.patch_sampler(self.real_B), self.nce_layers)
real_Ax = self.netD(self.patch_sampler(self.real_A), self.nce_layers)
real_Bx = self.netD(self.patch_sampler(self.real_B), self.nce_layers)
query_A = self.netP_style(real_A, self.nce_layers)
query_B = self.netP_style(real_B, self.nce_layers)
query_Ax = self.netP_style(real_Ax, self.nce_layers)
query_Bx = self.netP_style(real_Bx, self.nce_layers)
num = 0
loss_D_cont_A = 0
loss_D_cont_B = 0
for x in self.nce_layers:
#self.nce_loss.dequeue_and_enqueue(query_A[num], 'real_A{:d}'.format(x))
self.nce_loss.dequeue_and_enqueue(query_B[num], 'real_B{:d}'.format(x))
#loss_D_cont_A += self.nce_loss(query_A[num], query_Ax[num], 'real_B{:d}'.format(x))
loss_D_cont_B += self.nce_loss(query_B[num], query_Bx[num], 'real_B{:d}'.format(x))
num += 1
loss_NCE_D = (loss_D_cont_A + loss_D_cont_B) * 0.5 * self.opt.lambda_NCE_D
return loss_NCE_D
def backward_D(self):
"""Calculate GAN loss for discriminator D"""
if self.opt.lambda_GAN_D_B > 0.0:
fake_B = self.fake_pool.query(self.fake_B)
self.loss_D_B = self.backward_D_basic(self.netD_B, self.real_A, self.real_B, fake_B) * self.opt.lambda_GAN_D_B
else:
self.loss_D_B = 0
if self.opt.lambda_GAN_D_A > 0.0:
fake_A = self.fake_pool.query(self.fake_A)
self.loss_D_A = self.backward_D_basic(self.netD_A, self.real_B, self.real_A, fake_A) * self.opt.lambda_GAN_D_A
else:
self.loss_D_A = 0
self.loss_D = (self.loss_D_B + self.loss_D_A) * 0.5
return self.loss_D
def compute_G_loss(self):
"""Calculate GAN and NCE loss for the generator"""
# First, G(A) should fake the discriminator
if self.opt.lambda_GAN_G_A > 0.0:
pred_fakeB = self.netD_B(self.fake_B)
self.loss_G_A = self.criterionGAN(pred_fakeB, True).mean() * self.opt.lambda_GAN_G_A
else:
self.loss_G_A = 0.0
if self.opt.lambda_GAN_G_B > 0.0:
pred_fakeA = self.netD_A(self.fake_A)
self.loss_G_B = self.criterionGAN(pred_fakeA, True).mean() * self.opt.lambda_GAN_G_B
else:
self.loss_G_B = 0.0
# Calculate the style contrastive loss.
if self.opt.lambda_NCE_G > 0.0:
real_A = self.patch_sampler(self.real_A)
real_B = self.patch_sampler(self.real_B)
fake_A = self.patch_sampler(self.fake_A)
fake_B = self.patch_sampler(self.fake_B)
key_A = self.netP_style(self.netD(real_A, self.nce_layers),self.nce_layers)
key_B = self.netP_style(self.netD(real_B, self.nce_layers),self.nce_layers)
query_A = self.netP_style(self.netD(fake_A, self.nce_layers),self.nce_layers)
query_B = self.netP_style(self.netD(fake_B, self.nce_layers),self.nce_layers)
num = 0
self.loss_G_NCE_style_A = 0
self.loss_G_NCE_style_B = 0
for x in self.nce_layers:
#self.loss_G_NCE_style_A += self.nce_loss(query_A[num], key_A[num], 'real_B{:d}'.format(x))
self.loss_G_NCE_style_B += self.nce_loss(query_B[num], key_B[num], 'real_B{:d}'.format(x))
num += 1
else:
self.loss_G_NCE_style_A = 0
self.loss_G_NCE_style_B = 0
self.loss_G_NCE_style = (self.loss_G_NCE_style_A + self.loss_G_NCE_style_B) * 0.5 * self.opt.lambda_NCE_G
#L1 Cycle Loss
if self.opt.lambda_CYC > 0.0:
self.loss_cyc_A = self.criterionCyc(self.rec_A, self.real_A) * self.opt.lambda_CYC
self.loss_cyc_B = self.criterionCyc(self.rec_B, self.real_B) * self.opt.lambda_CYC
else:
self.loss_cyc_A = 0
self.loss_cyc_B = 0
self.loss_cyc = (self.loss_cyc_A + self.loss_cyc_B) * 0.5
self.loss_G = self.loss_cyc + self.loss_G_NCE_style + (self.loss_G_A + self.loss_G_B) * 0.5
return self.loss_G
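# Note (added for clarity): with the assembly above, the total generator
# objective is
#   L_G = 0.5 * (L_cyc_A + L_cyc_B)                          # each term already scaled by lambda_CYC
#       + 0.5 * lambda_NCE_G * (L_NCE_style_A + L_NCE_style_B)
#       + 0.5 * (lambda_GAN_G_A * L_GAN_A + lambda_GAN_G_B * L_GAN_B)
# where L_NCE_style_A is currently 0, since the A-branch contrastive term is
# commented out in the loop above.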
def init_weights(net, init_type='normal', init_gain=0.02):
"""Initialize network weights.
Parameters:
net (network) -- network to be initialized
init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
init_gain (float) -- scaling factor for normal, xavier and orthogonal.
We use 'normal' in the original pix2pix and CycleGAN paper. But xavier and kaiming might
work better for some applications. Feel free to try yourself.
"""
def init_func(m): # define the initialization function
classname = m.__class__.__name__
if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
if init_type == 'normal':
init.normal_(m.weight.data, 0.0, init_gain)
elif init_type == 'xavier':
init.xavier_normal_(m.weight.data, gain=init_gain)
elif init_type == 'kaiming':
init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
elif init_type == 'orthogonal':
init.orthogonal_(m.weight.data, gain=init_gain)
else:
raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
if hasattr(m, 'bias') and m.bias is not None:
init.constant_(m.bias.data, 0.0)
elif classname.find('BatchNorm2d') != -1: # BatchNorm Layer's weight is not a matrix; only normal distribution applies.
init.normal_(m.weight.data, 1.0, init_gain)
init.constant_(m.bias.data, 0.0)
print('initialize network with %s' % init_type)
net.apply(init_func) # apply the initialization function <init_func>
def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]):
"""Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights
Parameters:
net (network) -- the network to be initialized
init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
init_gain (float) -- scaling factor for normal, xavier and orthogonal.
gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
Return an initialized network.
"""
if len(gpu_ids) > 0:
assert(torch.cuda.is_available())
net.to(gpu_ids[0])
net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs
init_weights(net, init_type, init_gain=init_gain)
return net
================================================
FILE: models/net.py
================================================
import torch.nn as nn
import torch
vgg = nn.Sequential(
nn.Conv2d(3, 3, (1, 1)),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(3, 64, (3, 3)),
nn.ReLU(), # relu1-1
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(64, 64, (3, 3)),
nn.ReLU(), # relu1-2
nn.MaxPool2d((2, 2), (2, 2), (0, 0), ceil_mode=True),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(64, 128, (3, 3)),
nn.ReLU(), # relu2-1
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(128, 128, (3, 3)),
nn.ReLU(), # relu2-2
nn.MaxPool2d((2, 2), (2, 2), (0, 0), ceil_mode=True),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(128, 256, (3, 3)),
nn.ReLU(), # relu3-1
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 256, (3, 3)),
nn.ReLU(), # relu3-2
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 256, (3, 3)),
nn.ReLU(), # relu3-3
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 256, (3, 3)),
nn.ReLU(), # relu3-4
nn.MaxPool2d((2, 2), (2, 2), (0, 0), ceil_mode=True),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 512, (3, 3)),
nn.ReLU(), # relu4-1, this is the last layer used
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(512, 512, (3, 3)),
nn.ReLU(), # relu4-2
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(512, 512, (3, 3)),
nn.ReLU(), # relu4-3
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(512, 512, (3, 3)),
nn.ReLU(), # relu4-4
nn.MaxPool2d((2, 2), (2, 2), (0, 0), ceil_mode=True),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(512, 512, (3, 3)),
nn.ReLU(), # relu5-1
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(512, 512, (3, 3)),
nn.ReLU(), # relu5-2
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(512, 512, (3, 3)),
nn.ReLU(), # relu5-3
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(512, 512, (3, 3)),
nn.ReLU() # relu5-4
)
class ADAIN_Encoder(nn.Module):
def __init__(self, encoder, gpu_ids=[]):
super(ADAIN_Encoder, self).__init__()
enc_layers = list(encoder.children())
self.enc_1 = nn.Sequential(*enc_layers[:4]) # input -> relu1_1 64
self.enc_2 = nn.Sequential(*enc_layers[4:11]) # relu1_1 -> relu2_1 128
self.enc_3 = nn.Sequential(*enc_layers[11:18]) # relu2_1 -> relu3_1 256
self.enc_4 = nn.Sequential(*enc_layers[18:31]) # relu3_1 -> relu4_1 512
self.mse_loss = nn.MSELoss()
# fix the encoder
for name in ['enc_1', 'enc_2', 'enc_3', 'enc_4']:
for param in getattr(self, name).parameters():
param.requires_grad = False
# extract relu1_1, relu2_1, relu3_1, relu4_1 from input image
def encode_with_intermediate(self, input):
results = [input]
for i in range(4):
func = getattr(self, 'enc_{:d}'.format(i + 1))
results.append(func(results[-1]))
return results[1:]
def calc_mean_std(self, feat, eps=1e-5):
# eps is a small value added to the variance to avoid divide-by-zero.
size = feat.size()
assert (len(size) == 4)
N, C = size[:2]
feat_var = feat.view(N, C, -1).var(dim=2) + eps
feat_std = feat_var.sqrt().view(N, C, 1, 1)
feat_mean = feat.view(N, C, -1).mean(dim=2).view(N, C, 1, 1)
return feat_mean, feat_std
def adain(self, content_feat, style_feat):
assert (content_feat.size()[:2] == style_feat.size()[:2])
size = content_feat.size()
style_mean, style_std = self.calc_mean_std(style_feat)
content_mean, content_std = self.calc_mean_std(content_feat)
normalized_feat = (content_feat - content_mean.expand(
size)) / content_std.expand(size)
return normalized_feat * style_std.expand(size) + style_mean.expand(size)
def forward(self, content, style, encoded_only = False):
style_feats = self.encode_with_intermediate(style)
content_feats = self.encode_with_intermediate(content)
if encoded_only:
return content_feats[-1], style_feats[-1]
else:
adain_feat = self.adain(content_feats[-1], style_feats[-1])
return adain_feat
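# Added reference sketch (not in the original repo): the AdaIN transform above,
# written as a standalone function. It matches ADAIN_Encoder.adain and makes
# the formula explicit: AdaIN(c, s) = sigma(s) * (c - mu(c)) / sigma(c) + mu(s),
# with mu/sigma computed per sample and per channel over spatial positions.
def _adain_reference(content_feat, style_feat, eps=1e-5):
    N, C = content_feat.size()[:2]
    c = content_feat.view(N, C, -1)
    s = style_feat.view(N, C, -1)
    c_mean = c.mean(dim=2, keepdim=True)
    c_std = (c.var(dim=2, keepdim=True) + eps).sqrt()
    s_mean = s.mean(dim=2, keepdim=True)
    s_std = (s.var(dim=2, keepdim=True) + eps).sqrt()
    return ((c - c_mean) / c_std * s_std + s_mean).view_as(content_feat)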
class Decoder(nn.Module):
def __init__(self, gpu_ids=[]):
super(Decoder, self).__init__()
decoder = [
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(512, 256, (3, 3)),
nn.ReLU(), # 256
nn.Upsample(scale_factor=2, mode='nearest'),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 256, (3, 3)),
nn.ReLU(),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 256, (3, 3)),
nn.ReLU(),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 256, (3, 3)),
nn.ReLU(),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 128, (3, 3)),
nn.ReLU(),# 128
nn.Upsample(scale_factor=2, mode='nearest'),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(128, 128, (3, 3)),
nn.ReLU(),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(128, 64, (3, 3)),
nn.ReLU(),# 64
nn.Upsample(scale_factor=2, mode='nearest'),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(64, 64, (3, 3)),
nn.ReLU(),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(64, 3, (3, 3))
]
self.decoder = nn.Sequential(*decoder)
def forward(self, adain_feat):
fake_image = self.decoder(adain_feat)
return fake_image
================================================
FILE: models/networks.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import init
import functools
from torch.optim import lr_scheduler
import numpy as np
from torch.nn.parameter import Parameter
###############################################################################
# Helper Functions
###############################################################################
def get_filter(filt_size=3):
if(filt_size == 1):
a = np.array([1., ])
elif(filt_size == 2):
a = np.array([1., 1.])
elif(filt_size == 3):
a = np.array([1., 2., 1.])
elif(filt_size == 4):
a = np.array([1., 3., 3., 1.])
elif(filt_size == 5):
a = np.array([1., 4., 6., 4., 1.])
elif(filt_size == 6):
a = np.array([1., 5., 10., 10., 5., 1.])
elif(filt_size == 7):
a = np.array([1., 6., 15., 20., 15., 6., 1.])
else:
raise NotImplementedError('filter size [%d] is not implemented' % filt_size)
filt = torch.Tensor(a[:, None] * a[None, :])
filt = filt / torch.sum(filt)
return filt
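# Worked example (added for clarity): for filt_size=3 the binomial taps [1, 2, 1]
# give the outer product
#   [[1, 2, 1],
#    [2, 4, 2],
#    [1, 2, 1]]
# which, normalized by its sum (16), is the depthwise blur kernel that Downsample
# below applies (groups=channels) before subsampling with the given stride.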
class Downsample(nn.Module):
def __init__(self, channels, pad_type='reflect', filt_size=3, stride=2, pad_off=0):
super(Downsample, self).__init__()
self.filt_size = filt_size
self.pad_off = pad_off
self.pad_sizes = [int(1. * (filt_size - 1) / 2), int(np.ceil(1. * (filt_size - 1) / 2)), int(1. * (filt_size - 1) / 2), int(np.ceil(1. * (filt_size - 1) / 2))]
self.pad_sizes = [pad_size + pad_off for pad_size in self.pad_sizes]
self.stride = stride
self.off = int((self.stride - 1) / 2.)
self.channels = channels
filt = get_filter(filt_size=self.filt_size)
self.register_buffer('filt', filt[None, None, :, :].repeat((self.channels, 1, 1, 1)))
self.pad = get_pad_layer(pad_type)(self.pad_sizes)
def forward(self, inp):
if(self.filt_size == 1):
if(self.pad_off == 0):
return inp[:, :, ::self.stride, ::self.stride]
else:
return self.pad(inp)[:, :, ::self.stride, ::self.stride]
else:
return F.conv2d(self.pad(inp), self.filt, stride=self.stride, groups=inp.shape[1])
class Upsample2(nn.Module):
def __init__(self, scale_factor, mode='nearest'):
super().__init__()
self.factor = scale_factor
self.mode = mode
def forward(self, x):
return torch.nn.functional.interpolate(x, scale_factor=self.factor, mode=self.mode)
class Upsample(nn.Module):
def __init__(self, channels, pad_type='repl', filt_size=4, stride=2):
super(Upsample, self).__init__()
self.filt_size = filt_size
self.filt_odd = np.mod(filt_size, 2) == 1
self.pad_size = int((filt_size - 1) / 2)
self.stride = stride
self.off = int((self.stride - 1) / 2.)
self.channels = channels
filt = get_filter(filt_size=self.filt_size) * (stride**2)
self.register_buffer('filt', filt[None, None, :, :].repeat((self.channels, 1, 1, 1)))
self.pad = get_pad_layer(pad_type)([1, 1, 1, 1])
def forward(self, inp):
ret_val = F.conv_transpose2d(self.pad(inp), self.filt, stride=self.stride, padding=1 + self.pad_size, groups=inp.shape[1])[:, :, 1:, 1:]
if(self.filt_odd):
return ret_val
else:
return ret_val[:, :, :-1, :-1]
def get_pad_layer(pad_type):
if(pad_type in ['refl', 'reflect']):
PadLayer = nn.ReflectionPad2d
elif(pad_type in ['repl', 'replicate']):
PadLayer = nn.ReplicationPad2d
elif(pad_type == 'zero'):
PadLayer = nn.ZeroPad2d
else:
raise NotImplementedError('Pad type [%s] not recognized' % pad_type)
return PadLayer
class Identity(nn.Module):
def forward(self, x):
return x
def get_norm_layer(norm_type='instance'):
"""Return a normalization layer
Parameters:
norm_type (str) -- the name of the normalization layer: batch | instance | none
For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev).
For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics.
"""
if norm_type == 'batch':
norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True)
elif norm_type == 'instance':
norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False)
elif norm_type == 'none':
def norm_layer(x):
return Identity()
else:
raise NotImplementedError('normalization layer [%s] is not found' % norm_type)
return norm_layer
def get_scheduler(optimizer, opt):
"""Return a learning rate scheduler
Parameters:
optimizer -- the optimizer of the network
opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.
opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine
For 'linear', we keep the same learning rate for the first <opt.n_epochs> epochs
and linearly decay the rate to zero over the next <opt.n_epochs_decay> epochs.
For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers.
See https://pytorch.org/docs/stable/optim.html for more details.
"""
if opt.lr_policy == 'linear':
def lambda_rule(epoch):
lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs_decay + 1)
return lr_l
scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
elif opt.lr_policy == 'step':
scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=0.1)
elif opt.lr_policy == 'plateau':
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)
elif opt.lr_policy == 'cosine':
scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0)
else:
raise NotImplementedError('learning rate policy [%s] is not implemented' % opt.lr_policy)
return scheduler
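# Worked example (added for clarity): under the 'linear' policy with
# n_epochs=100, n_epochs_decay=100 and epoch_count=1, lambda_rule returns 1.0
# for epoch <= 99, then decays linearly: at epoch 150 the multiplier is
# 1 - 51/101 ≈ 0.495, approaching 0 as epoch nears 200.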
def init_weights(net, init_type='kaiming', init_gain=0.02, debug=False):
"""Initialize network weights.
Parameters:
net (network) -- network to be initialized
init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
init_gain (float) -- scaling factor for normal, xavier and orthogonal.
We use 'normal' in the original pix2pix and CycleGAN paper. But xavier and kaiming might
work better for some applications. Feel free to try yourself.
"""
def init_func(m): # define the initialization function
classname = m.__class__.__name__
if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
if debug:
print(classname)
if init_type == 'normal':
init.normal_(m.weight.data, 0.0, init_gain)
elif init_type == 'xavier':
init.xavier_normal_(m.weight.data, gain=init_gain)
elif init_type == 'kaiming':
init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
elif init_type == 'orthogonal':
init.orthogonal_(m.weight.data, gain=init_gain)
else:
raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
if hasattr(m, 'bias') and m.bias is not None:
init.constant_(m.bias.data, 0.0)
elif classname.find('BatchNorm2d') != -1: # BatchNorm Layer's weight is not a matrix; only normal distribution applies.
init.normal_(m.weight.data, 1.0, init_gain)
init.constant_(m.bias.data, 0.0)
net.apply(init_func) # apply the initialization function <init_func>
def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[], debug=False, initialize_weights=True):
"""Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights
Parameters:
net (network) -- the network to be initialized
init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal
init_gain (float) -- scaling factor for normal, xavier and orthogonal.
gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
Return an initialized network.
"""
if len(gpu_ids) > 0:
assert(torch.cuda.is_available())
net.to(gpu_ids[0])
# if not amp:
# net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs for non-AMP training
if initialize_weights:
init_weights(net, init_type, init_gain=init_gain, debug=debug)
return net
def define_D(input_nc, ndf, netD, n_layers_D=3, image_size = 256, feature_dim = 256, max_conv_dim = 512,norm='batch', init_type='normal', init_gain=0.02, no_antialias=False, gpu_ids=[], opt=None):
"""Create a discriminator
Parameters:
input_nc (int) -- the number of channels in input images
ndf (int) -- the number of filters in the first conv layer
netD (str) -- the architecture's name: basic | n_layers | pixel
n_layers_D (int) -- the number of conv layers in the discriminator; effective when netD=='n_layers'
norm (str) -- the type of normalization layers used in the network.
init_type (str) -- the name of the initialization method.
init_gain (float) -- scaling factor for normal, xavier and orthogonal.
gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2
Returns a discriminator
Our current implementation provides three types of discriminators:
[basic]: 'PatchGAN' classifier described in the original pix2pix paper.
It can classify whether 70×70 overlapping patches are real or fake.
Such a patch-level discriminator architecture has fewer parameters
than a full-image discriminator and can work on arbitrarily-sized images
in a fully convolutional fashion.
[n_layers]: With this mode, you can specify the number of conv layers in the discriminator
with the parameter <n_layers_D> (default=3 as used in [basic] (PatchGAN).)
[pixel]: 1x1 PixelGAN discriminator can classify whether a pixel is real or not.
It encourages greater color diversity but has no effect on spatial statistics.
The discriminator has been initialized by <init_net>. It uses LeakyReLU for its non-linearity.
"""
net = None
norm_layer = get_norm_layer(norm_type=norm)
if netD == 'basic': # default PatchGAN classifier
net = NLayerDiscriminator(input_nc, ndf, n_layers=3, image_size = image_size, norm_layer=norm_layer, no_antialias=no_antialias,)
elif netD == 'n_layers': # more options
net = NLayerDiscriminator(input_nc, ndf, n_layers_D, norm_layer=norm_layer, no_antialias=no_antialias,)
elif netD == 'pixel': # classify if each pixel is real or fake
net = PixelDiscriminator(input_nc, ndf, norm_layer=norm_layer)
elif 'stylegan2' in netD:
net = StyleGAN2Discriminator(input_nc, ndf, n_layers_D, no_antialias=no_antialias, opt=opt)
elif netD == 'NCE': # default PatchGAN classifier
net = NCEDiscriminator(input_nc, ndf, n_layers=3, image_size = image_size, feature_dim = feature_dim, max_conv_dim = max_conv_dim, norm_layer=norm_layer, no_antialias=no_antialias,)
else:
raise NotImplementedError('Discriminator model name [%s] is not recognized' % netD)
return init_net(net, init_type, init_gain, gpu_ids,
initialize_weights=('stylegan2' not in netD))
##############################################################################
# Classes
##############################################################################
class GANLoss(nn.Module):
"""Define different GAN objectives.
The GANLoss class abstracts away the need to create the target label tensor
that has the same size as the input.
"""
def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=0.0):
""" Initialize the GANLoss class.
Parameters:
gan_mode (str) - - the type of GAN objective. It currently supports vanilla, lsgan, and wgangp.
target_real_label (bool) - - label for a real image
target_fake_label (bool) - - label of a fake image
Note: Do not use sigmoid as the last layer of Discriminator.
LSGAN needs no sigmoid. vanilla GANs will handle it with BCEWithLogitsLoss.
"""
super(GANLoss, self).__init__()
self.register_buffer('real_label', torch.tensor(target_real_label))
self.register_buffer('fake_label', torch.tensor(target_fake_label))
self.gan_mode = gan_mode
if gan_mode == 'lsgan':
self.loss = nn.MSELoss()
elif gan_mode == 'vanilla':
self.loss = nn.BCEWithLogitsLoss()
elif gan_mode in ['wgangp', 'nonsaturating']:
self.loss = None
elif gan_mode == "hinge":
self.loss = None
else:
raise NotImplementedError('gan mode %s not implemented' % gan_mode)
def get_target_tensor(self, prediction, target_is_real):
"""Create label tensors with the same size as the input.
Parameters:
prediction (tensor) - - typically the prediction from a discriminator
target_is_real (bool) - - if the ground truth label is for real images or fake images
Returns:
A label tensor filled with ground truth label, and with the size of the input
"""
if target_is_real:
target_tensor = self.real_label
else:
target_tensor = self.fake_label
return target_tensor.expand_as(prediction)
def __call__(self, prediction, target_is_real):
"""Calculate loss given Discriminator's output and grount truth labels.
Parameters:
prediction (tensor) - - tpyically the prediction output from a discriminator
target_is_real (bool) - - if the ground truth label is for real images or fake images
Returns:
the calculated loss.
"""
bs = prediction.size(0)
if self.gan_mode in ['lsgan', 'vanilla']:
target_tensor = self.get_target_tensor(prediction, target_is_real)
loss = self.loss(prediction, target_tensor)
elif self.gan_mode == 'wgangp':
if target_is_real:
loss = -prediction.mean()
else:
loss = prediction.mean()
elif self.gan_mode == 'nonsaturating':
if target_is_real:
loss = F.softplus(-prediction).view(bs, -1).mean(dim=1)
else:
loss = F.softplus(prediction).view(bs, -1).mean(dim=1)
elif self.gan_mode == 'hinge':
if target_is_real:
minvalue = torch.min(prediction - 1, torch.zeros(prediction.shape).to(prediction.device))
loss = -torch.mean(minvalue)
else:
minvalue = torch.min(-prediction - 1,torch.zeros(prediction.shape).to(prediction.device))
loss = -torch.mean(minvalue)
return loss
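# Usage sketch (added; `netD`, `real`, `fake`, `device` are illustrative names):
#   criterion = GANLoss('lsgan').to(device)
#   loss_D = 0.5 * (criterion(netD(real), True) + criterion(netD(fake.detach()), False))
#   loss_G = criterion(netD(fake), True)  # G is rewarded for fooling D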
def cal_gradient_penalty(netD, real_data, fake_data, device, type='mixed', constant=1.0, lambda_gp=10.0):
"""Calculate the gradient penalty loss, used in WGAN-GP paper https://arxiv.org/abs/1704.00028
Arguments:
netD (network) -- discriminator network
real_data (tensor array) -- real images
fake_data (tensor array) -- generated images from the generator
device (str) -- GPU / CPU: from torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu')
type (str) -- if we mix real and fake data or not [real | fake | mixed].
constant (float) -- the constant used in formula ( | |gradient||_2 - constant)^2
lambda_gp (float) -- weight for this loss
Returns the gradient penalty loss
"""
if lambda_gp > 0.0:
if type == 'real': # either use real images, fake images, or a linear interpolation of two.
interpolatesv = real_data
elif type == 'fake':
interpolatesv = fake_data
elif type == 'mixed':
alpha = torch.rand(real_data.shape[0], 1, device=device)
alpha = alpha.expand(real_data.shape[0], real_data.nelement() // real_data.shape[0]).contiguous().view(*real_data.shape)
interpolatesv = alpha * real_data + ((1 - alpha) * fake_data)
else:
raise NotImplementedError('{} not implemented'.format(type))
interpolatesv.requires_grad_(True)
disc_interpolates = netD(interpolatesv)
gradients = torch.autograd.grad(outputs=disc_interpolates, inputs=interpolatesv,
grad_outputs=torch.ones(disc_interpolates.size()).to(device),
create_graph=True, retain_graph=True, only_inputs=True)
gradients = gradients[0].view(real_data.size(0), -1) # flatten the data
gradient_penalty = (((gradients + 1e-16).norm(2, dim=1) - constant) ** 2).mean() * lambda_gp # added eps
return gradient_penalty, gradients
else:
return 0.0, None
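# Illustrative call (added; variable names are assumptions): with
# gan_mode='wgangp' the critic step combines GANLoss with the penalty, e.g.
#   gp, _ = cal_gradient_penalty(netD, real, fake.detach(), device,
#                                type='mixed', constant=1.0, lambda_gp=10.0)
#   loss_D = criterion(netD(real), True) + criterion(netD(fake.detach()), False) + gp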
class Normalize(nn.Module):
def __init__(self, power=2):
super(Normalize, self).__init__()
self.power = power
def forward(self, x):
norm = x.pow(self.power).sum(1, keepdim=True).pow(1. / self.power)
out = x.div(norm + 1e-7)
return out
##################################################################################
# Sequential Models
##################################################################################
class ResBlocks(nn.Module):
def __init__(self, num_blocks, dim, norm='inst', activation='relu', pad_type='zero', nz=0):
super(ResBlocks, self).__init__()
self.model = []
for i in range(num_blocks):
self.model += [ResBlock(dim, norm=norm, activation=activation, pad_type=pad_type, nz=nz)]
self.model = nn.Sequential(*self.model)
def forward(self, x):
return self.model(x)
##################################################################################
# Basic Blocks
##################################################################################
def cat_feature(x, y):
y_expand = y.view(y.size(0), y.size(1), 1, 1).expand(
y.size(0), y.size(1), x.size(2), x.size(3))
x_cat = torch.cat([x, y_expand], 1)
return x_cat
class Conv2dBlock(nn.Module):
def __init__(self, input_dim, output_dim, kernel_size, stride,
padding=0, norm='none', activation='relu', pad_type='zero'):
super(Conv2dBlock, self).__init__()
self.use_bias = True
# initialize padding
if pad_type == 'reflect':
self.pad = nn.ReflectionPad2d(padding)
elif pad_type == 'zero':
self.pad = nn.ZeroPad2d(padding)
else:
assert 0, "Unsupported padding type: {}".format(pad_type)
# initialize normalization
norm_dim = output_dim
if norm == 'batch':
self.norm = nn.BatchNorm2d(norm_dim)
elif norm == 'inst':
self.norm = nn.InstanceNorm2d(norm_dim, track_running_stats=False)
elif norm == 'ln':
self.norm = LayerNorm(norm_dim)
elif norm == 'none':
self.norm = None
else:
assert 0, "Unsupported normalization: {}".format(norm)
# initialize activation
if activation == 'relu':
self.activation = nn.ReLU(inplace=True)
elif activation == 'lrelu':
self.activation = nn.LeakyReLU(0.2, inplace=True)
elif activation == 'prelu':
self.activation = nn.PReLU()
elif activation == 'selu':
self.activation = nn.SELU(inplace=True)
elif activation == 'tanh':
self.activation = nn.Tanh()
elif activation == 'none':
self.activation = None
else:
assert 0, "Unsupported activation: {}".format(activation)
# initialize convolution
self.conv = nn.Conv2d(input_dim, output_dim, kernel_size, stride, bias=self.use_bias)
def forward(self, x):
x = self.conv(self.pad(x))
if self.norm:
x = self.norm(x)
if self.activation:
x = self.activation(x)
return x
class LinearBlock(nn.Module):
def __init__(self, input_dim, output_dim, norm='none', activation='relu'):
super(LinearBlock, self).__init__()
use_bias = True
# initialize fully connected layer
self.fc = nn.Linear(input_dim, output_dim, bias=use_bias)
# initialize normalization
norm_dim = output_dim
if norm == 'batch':
self.norm = nn.BatchNorm1d(norm_dim)
elif norm == 'inst':
self.norm = nn.InstanceNorm1d(norm_dim)
elif norm == 'ln':
self.norm = LayerNorm(norm_dim)
elif norm == 'none':
self.norm = None
else:
assert 0, "Unsupported normalization: {}".format(norm)
# initialize activation
if activation == 'relu':
self.activation = nn.ReLU(inplace=True)
elif activation == 'lrelu':
self.activation = nn.LeakyReLU(0.2, inplace=True)
elif activation == 'prelu':
self.activation = nn.PReLU()
elif activation == 'selu':
self.activation = nn.SELU(inplace=True)
elif activation == 'tanh':
self.activation = nn.Tanh()
elif activation == 'none':
self.activation = None
else:
assert 0, "Unsupported activation: {}".format(activation)
def forward(self, x):
out = self.fc(x)
if self.norm:
out = self.norm(out)
if self.activation:
out = self.activation(out)
return out
##################################################################################
# Normalization layers
##################################################################################
class LayerNorm(nn.Module):
def __init__(self, num_features, eps=1e-5, affine=True):
super(LayerNorm, self).__init__()
self.num_features = num_features
self.affine = affine
self.eps = eps
if self.affine:
self.gamma = nn.Parameter(torch.Tensor(num_features).uniform_())
self.beta = nn.Parameter(torch.zeros(num_features))
def forward(self, x):
shape = [-1] + [1] * (x.dim() - 1)
mean = x.view(x.size(0), -1).mean(1).view(*shape)
std = x.view(x.size(0), -1).std(1).view(*shape)
x = (x - mean) / (std + self.eps)
if self.affine:
shape = [1, -1] + [1] * (x.dim() - 2)
x = x * self.gamma.view(*shape) + self.beta.view(*shape)
return x
class NLayerDiscriminator(nn.Module):
"""Defines a PatchGAN discriminator"""
def __init__(self, input_nc, ndf=64, n_layers=3, image_size = 256,norm_layer=nn.BatchNorm2d, no_antialias=False):
"""Construct a PatchGAN discriminator
Parameters:
input_nc (int) -- the number of channels in input images
ndf (int) -- the number of filters in the first conv layer
n_layers (int) -- the number of conv layers in the discriminator
norm_layer -- normalization layer
"""
super(NLayerDiscriminator, self).__init__()
if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
kw = 4
padw = 1
if(no_antialias):
sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)]
else:
sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=1, padding=padw), nn.LeakyReLU(0.2, True), Downsample(ndf)]
nf_mult = 1
nf_mult_prev = 1
for n in range(1, n_layers): # gradually increase the number of filters
nf_mult_prev = nf_mult
nf_mult = min(2 ** n, 8)
if(no_antialias):
sequence += [
nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias),
norm_layer(ndf * nf_mult),
nn.LeakyReLU(0.2, True)
]
else:
sequence += [
nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias),
norm_layer(ndf * nf_mult),
nn.LeakyReLU(0.2, True),
Downsample(ndf * nf_mult)]
nf_mult_prev = nf_mult
nf_mult = min(2 ** n_layers, 8)
sequence += [
nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias),
norm_layer(ndf * nf_mult),
nn.LeakyReLU(0.2, True)
]
sequence += [nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map
self.model = nn.Sequential(*sequence)
def forward(self, input):
"""Standard forward."""
logit = self.model(input)
return logit
class PixelDiscriminator(nn.Module):
"""Defines a 1x1 PatchGAN discriminator (pixelGAN)"""
def __init__(self, input_nc, ndf=64, norm_layer=nn.BatchNorm2d):
"""Construct a 1x1 PatchGAN discriminator
Parameters:
input_nc (int) -- the number of channels in input images
ndf (int) -- the number of filters in the first conv layer
norm_layer -- normalization layer
"""
super(PixelDiscriminator, self).__init__()
if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
self.net = [
nn.Conv2d(input_nc, ndf, kernel_size=1, stride=1, padding=0),
nn.LeakyReLU(0.2, True),
nn.Conv2d(ndf, ndf * 2, kernel_size=1, stride=1, padding=0, bias=use_bias),
norm_layer(ndf * 2),
nn.LeakyReLU(0.2, True),
nn.Conv2d(ndf * 2, 1, kernel_size=1, stride=1, padding=0, bias=use_bias)]
self.net = nn.Sequential(*self.net)
def forward(self, input):
"""Standard forward."""
return self.net(input)
class PatchDiscriminator(NLayerDiscriminator):
"""Defines a PatchGAN discriminator"""
def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, no_antialias=False):
super().__init__(input_nc, ndf, 2, norm_layer=norm_layer, no_antialias=no_antialias) # pass by keyword: the parent signature also takes image_size before norm_layer
def forward(self, input):
B, C, H, W = input.size(0), input.size(1), input.size(2), input.size(3)
size = 16
Y = H // size
X = W // size
input = input.view(B, C, Y, size, X, size)
input = input.permute(0, 2, 4, 1, 3, 5).contiguous().view(B * Y * X, C, size, size)
return super().forward(input)
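# Shape note (added for clarity): with a 256x256 input and size=16, Y = X = 16,
# so the view/permute above turns (B, C, 256, 256) into (B*256, C, 16, 16);
# every 16x16 tile is then scored independently by the 2-layer PatchGAN parent.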
class GroupedChannelNorm(nn.Module):
def __init__(self, num_groups):
super().__init__()
self.num_groups = num_groups
def forward(self, x):
shape = list(x.shape)
new_shape = [shape[0], self.num_groups, shape[1] // self.num_groups] + shape[2:]
x = x.view(*new_shape)
mean = x.mean(dim=2, keepdim=True)
std = x.std(dim=2, keepdim=True)
x_norm = (x - mean) / (std + 1e-7)
return x_norm.view(*shape)
================================================
FILE: models/torch_utils.py
================================================
import os
import random
import numpy as np
import torch
import torch.nn as nn
@torch.no_grad()
def concat_all_gather(tensor, world_size):
tensors_gather = [
torch.ones_like(tensor) for _ in range(world_size)]
torch.distributed.all_gather(tensors_gather, tensor, async_op=False)
output = torch.cat(tensors_gather, dim=0)
return output
def get_rank(group=None):
try:
return torch.distributed.get_rank(group)
except Exception:
return 0
def get_world_size(group=None):
try:
return torch.distributed.get_world_size(group)
except Exception:
return 1
def kaiming_init(mod):
if isinstance(mod, (nn.Conv2d, nn.Linear)):
if mod.weight.requires_grad:
nn.init.kaiming_normal_(mod.weight, a=0.2, mode="fan_in")
if mod.bias is not None:
nn.init.zeros_(mod.bias)
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
@torch.no_grad()
def update_average(net, net_ema, m=0.999):
net = net.module if hasattr(net, "module") else net
for p, p_ema in zip(net.parameters(), net_ema.parameters()):
p_ema.data.mul_(m).add_((1.0 - m) * p.detach().data)
def warmup_learning_rate(optimizer, lr, train_step, warmup_step):
if train_step > warmup_step or warmup_step == 0:
return lr
ratio = min(1.0, train_step/warmup_step)
lr_w = ratio * lr
for param_group in optimizer.param_groups:
param_group["lr"] = lr_w
return lr_w
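# Worked example (added for clarity): with lr=2e-4 and warmup_step=1000,
# train_step=250 gives ratio=0.25, so every param group's lr is set to 5e-5;
# once train_step exceeds warmup_step the function returns lr untouched.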
================================================
FILE: options/__init__.py
================================================
"""This package options includes option modules: training options, test options, and basic options (used in both training and test)."""
================================================
FILE: options/base_options.py
================================================
import argparse
import os
from util import util
import torch
import models
import data
class BaseOptions():
"""This class defines options used during both training and test time.
It also implements several helper functions such as parsing, printing, and saving the options.
It also gathers additional options defined in <modify_commandline_options> functions in both dataset class and model class.
"""
def __init__(self, cmd_line=None):
"""Reset the class; indicates the class hasn't been initailized"""
self.initialized = False
self.cmd_line = None
if cmd_line is not None:
self.cmd_line = cmd_line.split()
def initialize(self, parser):
"""Define the common options that are used in both training and test."""
# basic parameters
parser.add_argument('--dataroot', default='placeholder', help='path to images (should have subfolders trainA, trainB, valA, valB, etc)')
parser.add_argument('--name', type=str, default='experiment_name', help='name of the experiment. It decides where to store samples and models')
parser.add_argument('--easy_label', type=str, default='experiment_name', help='Interpretable name')
parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2, 0,2. use -1 for CPU')
parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints', help='models are saved here')
# model parameters
parser.add_argument('--model', type=str, default='cast', help='chooses which model to use.')
parser.add_argument('--input_nc', type=int, default=3, help='# of input image channels: 3 for RGB and 1 for grayscale')
parser.add_argument('--output_nc', type=int, default=3, help='# of output image channels: 3 for RGB and 1 for grayscale')
parser.add_argument('--ngf', type=int, default=64, help='# of gen filters in the last conv layer')
parser.add_argument('--ndf', type=int, default=64, help='# of discrim filters in the first conv layer')
parser.add_argument('--netD', type=str, default='basic', choices=['basic', 'n_layers', 'pixel', 'patch', 'tilestylegan2', 'stylegan2'], help='specify discriminator architecture. The basic model is a 70x70 PatchGAN. n_layers allows you to specify the layers in the discriminator')
parser.add_argument('--netG', type=str, default='resnet_9blocks', choices=['resnet_9blocks', 'resnet_6blocks', 'unet_256', 'unet_128', 'stylegan2', 'smallstylegan2', 'resnet_cat', 'cluit', 'SA2_2'], help='specify generator architecture')
parser.add_argument('--n_layers_D', type=int, default=3, help='only used if netD==n_layers')
parser.add_argument('--normG', type=str, default='instance', choices=['instance', 'batch', 'none'], help='instance normalization or batch normalization for G')
parser.add_argument('--normD', type=str, default='instance', choices=['instance', 'batch', 'none'], help='instance normalization or batch normalization for D')
parser.add_argument('--init_type', type=str, default='xavier', choices=['normal', 'xavier', 'kaiming', 'orthogonal'], help='network initialization')
parser.add_argument('--init_gain', type=float, default=0.02, help='scaling factor for normal, xavier and orthogonal.')
parser.add_argument('--no_dropout', type=util.str2bool, nargs='?', const=True, default=True,
help='no dropout for the generator')
parser.add_argument('--no_antialias', action='store_true', help='if specified, use stride=2 convs instead of antialiased-downsampling (sad)')
parser.add_argument('--no_antialias_up', action='store_true', help='if specified, use [upconv(learned filter)] instead of [upconv(hard-coded [1,3,3,1] filter), conv]')
# Model parameters.
parser.add_argument("--style-dim", default=256, type=int)
parser.add_argument("--feature_dim", default=256, type=int)
parser.add_argument("--hypersphere-dim", default=256, type=int)
parser.add_argument("--queue-size", default=4096, type=int)
parser.add_argument("--temperature", default=0.07, type=float)
parser.add_argument("--max_conv_dim", default=512, type=int)
# dataset parameters
parser.add_argument('--dataset_mode', type=str, default='unaligned', help='chooses how datasets are loaded. [unaligned | aligned | single | colorization]')
parser.add_argument('--direction', type=str, default='AtoB', help='AtoB or BtoA')
parser.add_argument('--serial_batches', action='store_true', help='if true, takes images in order to make batches, otherwise takes them randomly')
parser.add_argument('--num_threads', default=4, type=int, help='# threads for loading data')
parser.add_argument('--batch_size', type=int, default=1, help='input batch size')
parser.add_argument('--load_size', type=int, default=286, help='scale images to this size')
parser.add_argument('--crop_size', type=int, default=256, help='then crop to this size')
parser.add_argument('--max_dataset_size', type=int, default=float("inf"), help='Maximum number of samples allowed per dataset. If the dataset directory contains more than max_dataset_size, only a subset is loaded.')
parser.add_argument('--preprocess', type=str, default='resize_and_crop', help='scaling and cropping of images at load time [resize_and_crop | crop | scale_width | scale_width_and_crop | none]')
parser.add_argument('--no_flip', action='store_true', help='if specified, do not flip the images for data augmentation')
parser.add_argument('--display_winsize', type=int, default=256, help='display window size for both visdom and HTML')
parser.add_argument('--random_scale_max', type=float, default=3.0,
help='(used for single image translation) Randomly scale the image by the specified factor as data augmentation.')
# additional parameters
parser.add_argument('--epoch', type=str, default='latest', help='which epoch to load? set to latest to use latest cached model')
parser.add_argument('--verbose', action='store_true', help='if specified, print more debugging information')
parser.add_argument('--suffix', default='', type=str, help='customized suffix: opt.name = opt.name + suffix: e.g., {model}_{netG}_size{load_size}')
self.initialized = True
return parser
def gather_options(self):
"""Initialize our parser with basic options(only once).
Add additional model-specific and dataset-specific options.
These options are defined in the <modify_commandline_options> function
in model and dataset classes.
"""
if not self.initialized: # check if it has been initialized
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser = self.initialize(parser)
# get the basic options
if self.cmd_line is None:
opt, _ = parser.parse_known_args()
else:
opt, _ = parser.parse_known_args(self.cmd_line)
# modify model-related parser options
model_name = opt.model
model_option_setter = models.get_option_setter(model_name)
parser = model_option_setter(parser, self.isTrain)
if self.cmd_line is None:
opt, _ = parser.parse_known_args() # parse again with new defaults
else:
opt, _ = parser.parse_known_args(self.cmd_line) # parse again with new defaults
# modify dataset-related parser options
dataset_name = opt.dataset_mode
dataset_option_setter = data.get_option_setter(dataset_name)
parser = dataset_option_setter(parser, self.isTrain)
# save and return the parser
self.parser = parser
if self.cmd_line is None:
return parser.parse_args()
else:
return parser.parse_args(self.cmd_line)
def print_options(self, opt):
"""Print and save options
It will print both current options and default values (if different).
It will save options into a text file: [checkpoints_dir]/[name]/{phase}_opt.txt
"""
message = ''
message += '----------------- Options ---------------\n'
for k, v in sorted(vars(opt).items()):
comment = ''
default = self.parser.get_default(k)
if v != default:
comment = '\t[default: %s]' % str(default)
message += '{:>25}: {:<30}{}\n'.format(str(k), str(v), comment)
message += '----------------- End -------------------'
print(message)
# save to the disk
expr_dir = os.path.join(opt.checkpoints_dir, opt.name)
util.mkdirs(expr_dir)
file_name = os.path.join(expr_dir, '{}_opt.txt'.format(opt.phase))
try:
with open(file_name, 'wt') as opt_file:
opt_file.write(message)
opt_file.write('\n')
except PermissionError as error:
print("permission error {}".format(error))
pass
def parse(self):
"""Parse our options, create checkpoints directory suffix, and set up gpu device."""
opt = self.gather_options()
opt.isTrain = self.isTrain # train or test
# process opt.suffix
if opt.suffix:
suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.suffix != '' else ''
opt.name = opt.name + suffix
self.print_options(opt)
# set gpu ids
str_ids = opt.gpu_ids.split(',')
opt.gpu_ids = []
for str_id in str_ids:
id = int(str_id)
if id >= 0:
opt.gpu_ids.append(id)
if len(opt.gpu_ids) > 0:
torch.cuda.set_device(opt.gpu_ids[0])
self.opt = opt
return self.opt
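gather_options above relies on argparse's two-pass pattern: parse_known_args reads the basic flags first, then the chosen model and dataset classes get a chance to extend the parser and rewrite defaults before the final parse_args. A minimal self-contained sketch of the same idea (toy_setter and --depth are illustrative stand-ins, not part of this repository):

import argparse

def toy_setter(parser, is_train):
    # stands in for models.get_option_setter(...): it may add new flags
    # and override defaults inherited from the base parser
    parser.add_argument('--depth', type=int, default=4)
    parser.set_defaults(batch_size=8)
    return parser

parser = argparse.ArgumentParser()
parser.add_argument('--model', type=str, default='cast')
parser.add_argument('--batch_size', type=int, default=1)
opt, _ = parser.parse_known_args([])        # first pass: basic options only
parser = toy_setter(parser, is_train=True)  # model-specific extension
opt = parser.parse_args([])                 # final pass with the new defaults
print(opt.batch_size, opt.depth)            # -> 8 4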
================================================
FILE: options/test_options.py
================================================
from .base_options import BaseOptions
class TestOptions(BaseOptions):
"""This class includes test options.
It also includes shared options defined in BaseOptions.
"""
def initialize(self, parser):
parser = BaseOptions.initialize(self, parser) # define shared options
parser.add_argument('--results_dir', type=str, default='./results/', help='saves results here.')
parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc')
# Dropout and BatchNorm have different behavior during training and test.
parser.add_argument('--eval', action='store_true', help='use eval mode during test time.')
# Set the default to 5000 to test the whole test set.
parser.add_argument('--num_test', type=int, default=5000, help='how many test images to run')
# To avoid cropping, the load_size should be the same as crop_size
parser.set_defaults(load_size=parser.get_default('crop_size'))
self.isTrain = False
return parser
================================================
FILE: options/train_options.py
================================================
from .base_options import BaseOptions
class TrainOptions(BaseOptions):
"""This class includes training options.
It also includes shared options defined in BaseOptions.
"""
def initialize(self, parser):
parser = BaseOptions.initialize(self, parser)
# visdom and HTML visualization parameters
parser.add_argument('--display_freq', type=int, default=400, help='frequency of showing training results on screen')
parser.add_argument('--display_ncols', type=int, default=4, help='if positive, display all images in a single visdom web panel with certain number of images per row.')
parser.add_argument('--display_id', type=int, default=None, help='window id of the web display. Default is random window id')
parser.add_argument('--display_server', type=str, default="http://localhost", help='visdom server of the web display')
parser.add_argument('--display_env', type=str, default='main', help='visdom display environment name (default is "main")')
parser.add_argument('--display_port', type=int, default=8097, help='visdom port of the web display')
parser.add_argument('--update_html_freq', type=int, default=100, help='frequency of saving training results to html')
parser.add_argument('--print_freq', type=int, default=100, help='frequency of showing training results on console')
parser.add_argument('--no_html', action='store_true', help='do not save intermediate training results to [opt.checkpoints_dir]/[opt.name]/web/')
# network saving and loading parameters
parser.add_argument('--save_latest_freq', type=int, default=5000, help='frequency of saving the latest results')
parser.add_argument('--save_epoch_freq', type=int, default=50, help='frequency of saving checkpoints at the end of epochs')
parser.add_argument('--evaluation_freq', type=int, default=5000, help='evaluation freq')
parser.add_argument('--save_by_iter', action='store_true', help='whether saves model by iteration')
parser.add_argument('--continue_train', action='store_true', help='continue training: load the latest model')
parser.add_argument('--epoch_count', type=int, default=1, help='the starting epoch count, we save the model by <epoch_count>, <epoch_count>+<save_latest_freq>, ...')
parser.add_argument('--phase', type=str, default='train', help='train, val, test, etc')
parser.add_argument('--pretrained_name', type=str, default=None, help='resume training from another checkpoint')
# training parameters
parser.add_argument('--n_epochs', type=int, default=200, help='number of epochs with the initial learning rate')
parser.add_argument('--n_epochs_decay', type=int, default=200, help='number of epochs to linearly decay learning rate to zero')
parser.add_argument('--beta1', type=float, default=0.5, help='momentum term of adam')
parser.add_argument('--beta2', type=float, default=0.999, help='momentum term of adam')
parser.add_argument('--lr_G', type=float, default=0.0001, help='initial learning rate for adam')
parser.add_argument('--lr_D', type=float, default=0.0001, help='initial learning rate for adam')
parser.add_argument('--lr_D_NCE', type=float, default=0.0001, help='initial learning rate for adam')
parser.add_argument('--gan_mode', type=str, default='hinge', help='the type of GAN objective. [vanilla| lsgan | wgangp| hinge]. vanilla GAN loss is the cross-entropy objective used in the original GAN paper.')
parser.add_argument('--pool_size', type=int, default=50, help='the size of image buffer that stores previously generated images')
parser.add_argument('--lr_policy', type=str, default='linear', help='learning rate policy. [linear | step | plateau | cosine]')
parser.add_argument('--lr_decay_iters', type=int, default=50, help='multiply by a gamma every lr_decay_iters iterations')
self.isTrain = True
return parser
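With --lr_policy linear, the learning rate stays at its initial value for n_epochs epochs and then decays linearly to zero over the next n_epochs_decay epochs (the scheduler itself is built in get_scheduler in models/networks.py). A hedged sketch of that schedule using torch's LambdaLR; the lambda below follows the common CycleGAN-style rule and may differ from this repository's exact off-by-one handling:

import torch

n_epochs, n_epochs_decay, epoch_count = 200, 200, 1
net = torch.nn.Linear(4, 4)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4, betas=(0.5, 0.999))

def lambda_rule(epoch):
    # 1.0 for the first n_epochs, then a linear ramp down toward 0
    return 1.0 - max(0, epoch + epoch_count - n_epochs) / float(n_epochs_decay + 1)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
for epoch in range(3):
    optimizer.step()  # a training epoch would run here
    scheduler.step()  # stepped once per epoch, as model.update_learning_rate() does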
================================================
FILE: requirements.txt
================================================
torch>=1.6.0
torchvision>=0.7.0
dominate>=2.4.0
visdom>=0.1.8.8
packaging
GPUtil>=1.4.0
scipy
Pillow>=6.1.0
numpy>=1.16.4
kornia
================================================
FILE: test.py
================================================
import os
from options.test_options import TestOptions
from data import create_dataset
from models import create_model
from util.visualizer import save_images
from util import html
import util.util as util
if __name__ == '__main__':
opt = TestOptions().parse() # get test options
# hard-code some parameters for test
opt.num_threads = 0 # test code only supports num_threads = 0
opt.batch_size = 1 # test code only supports batch_size = 1
opt.serial_batches = True # disable data shuffling; comment this line if results on randomly chosen images are needed.
opt.no_flip = True # no flip; comment this line if results on flipped images are needed.
opt.display_id = -1 # no visdom display; the test code saves the results to an HTML file.
dataset = create_dataset(opt) # create a dataset given opt.dataset_mode and other options
# train_dataset = create_dataset(util.copyconf(opt, phase="train"))
model = create_model(opt) # create a model given opt.model and other options
# create a webpage for viewing the results
web_dir = os.path.join(opt.results_dir, opt.name, '{}_{}'.format(opt.phase, opt.epoch)) # define the website directory
print('creating web directory', web_dir)
webpage = html.HTML(web_dir, 'Experiment = %s, Phase = %s, Epoch = %s' % (opt.name, opt.phase, opt.epoch))
for i, data in enumerate(dataset):
if i == 0:
model.setup(opt) # regular setup: load and print networks; create schedulers
model.parallelize()
if opt.eval:
model.eval()
if i >= opt.num_test: # only apply our model to opt.num_test images.
break
model.set_input(data) # unpack data from data loader
model.test() # run inference
visuals = model.get_current_visuals() # get image results
img_path = model.get_image_paths() # get image paths
if i % 5 == 0: # save images to an HTML file
print('processing (%04d)-th image... %s' % (i, img_path))
save_images(webpage, visuals, img_path, width=opt.display_winsize)
webpage.save() # save the HTML
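The commented-out train_dataset line above points at util.copyconf, which clones an options namespace with selected fields overridden. A minimal sketch of that helper in isolation (the option values are placeholders):

from argparse import Namespace
from util.util import copyconf

opt = Namespace(phase='test', batch_size=1, serial_batches=True)
train_opt = copyconf(opt, phase='train')  # shallow copy with one field replaced
print(train_opt.phase, opt.phase)         # -> train test  (the original is untouched)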
================================================
FILE: train.py
================================================
import time
import torch
from options.train_options import TrainOptions
from data import create_dataset
from models import create_model
from util.visualizer import Visualizer
if __name__ == '__main__':
opt = TrainOptions().parse() # get training options
dataset = create_dataset(opt) # create a dataset given opt.dataset_mode and other options
dataset_size = len(dataset) # get the number of images in the dataset.
model = create_model(opt) # create a model given opt.model and other options
print('The number of training images = %d' % dataset_size)
visualizer = Visualizer(opt) # create a visualizer that displays/saves images and plots
opt.visualizer = visualizer
total_iters = 0 # the total number of training iterations
optimize_time = 0.1
times = []
for epoch in range(opt.epoch_count, opt.n_epochs + opt.n_epochs_decay + 1): # outer loop for different epochs; we save the model by <epoch_count>, <epoch_count>+<save_latest_freq>
epoch_start_time = time.time() # timer for entire epoch
iter_data_time = time.time() # timer for data loading per iteration
epoch_iter = 0 # the number of training iterations in current epoch, reset to 0 every epoch
visualizer.reset() # reset the visualizer: make sure it saves the results to HTML at least once every epoch
dataset.set_epoch(epoch)
for i, data in enumerate(dataset): # inner loop within one epoch
iter_start_time = time.time() # timer for computation per iteration
if total_iters % opt.print_freq == 0:
t_data = iter_start_time - iter_data_time
batch_size = data["A"].size(0)
total_iters += batch_size
epoch_iter += batch_size
if len(opt.gpu_ids) > 0:
torch.cuda.synchronize()
optimize_start_time = time.time()
if epoch == opt.epoch_count and i == 0:
model.setup(opt) # regular setup: load and print networks; create schedulers
model.parallelize()
model.set_input(data) # unpack data from dataset and apply preprocessing
model.optimize_parameters() # calculate loss functions, get gradients, update network weights
if len(opt.gpu_ids) > 0:
torch.cuda.synchronize()
optimize_time = (time.time() - optimize_start_time) / batch_size * 0.005 + 0.995 * optimize_time # exponential moving average of per-image optimization time
if total_iters % opt.display_freq == 0: # display images on visdom and save images to a HTML file
save_result = total_iters % opt.update_html_freq == 0
model.compute_visuals()
visualizer.display_current_results(model.get_current_visuals(), epoch, save_result)
if total_iters % opt.print_freq == 0: # print training losses and save logging information to the disk
losses = model.get_current_losses()
visualizer.print_current_losses(epoch, epoch_iter, losses, optimize_time, t_data)
if opt.display_id is None or opt.display_id > 0:
visualizer.plot_current_losses(epoch, float(epoch_iter) / dataset_size, losses)
if total_iters % opt.save_latest_freq == 0: # cache our latest model every <save_latest_freq> iterations
print('saving the latest model (epoch %d, total_iters %d)' % (epoch, total_iters))
print(opt.name) # it's useful to occasionally show the experiment name on console
save_suffix = 'iter_%d' % total_iters if opt.save_by_iter else 'latest'
model.save_networks(save_suffix)
iter_data_time = time.time()
if epoch % opt.save_epoch_freq == 0: # cache our model every <save_epoch_freq> epochs
print('saving the model at the end of epoch %d, iters %d' % (epoch, total_iters))
model.save_networks(str(epoch)+'_'+str(total_iters))
model.save_networks(epoch)
print('End of epoch %d / %d \t Time Taken: %d sec' % (epoch, opt.n_epochs + opt.n_epochs_decay, time.time() - epoch_start_time))
model.update_learning_rate() # update learning rates at the end of every epoch.
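The optimize_time update in the loop above is an exponential moving average with smoothing factor 0.995: each new per-image timing contributes only 0.5%, so a single slow iteration barely moves the reported figure. A small illustration of the same recurrence with made-up timings:

ema = 0.1  # initial value, as in train.py
for dt in [0.20, 0.21, 5.00, 0.19]:  # per-image seconds; one slow outlier
    ema = dt * 0.005 + 0.995 * ema
print(round(ema, 4))  # -> 0.1259; the 5 s spike moved the average by only ~0.025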
================================================
FILE: util/__init__.py
================================================
"""This package includes a miscellaneous collection of useful helper functions."""
from util import *
================================================
FILE: util/get_data.py
================================================
from __future__ import print_function
import os
import tarfile
import requests
from warnings import warn
from zipfile import ZipFile
from bs4 import BeautifulSoup
from os.path import abspath, isdir, join, basename
class GetData(object):
"""A Python script for downloading CycleGAN or pix2pix datasets.
Parameters:
technique (str) -- One of: 'cyclegan' or 'pix2pix'.
verbose (bool) -- If True, print additional information.
Examples:
>>> from util.get_data import GetData
>>> gd = GetData(technique='cyclegan')
>>> new_data_path = gd.get(save_path='./datasets') # options will be displayed.
Alternatively, you can use bash scripts: 'scripts/download_pix2pix_model.sh'
and 'scripts/download_cyclegan_model.sh'.
"""
def __init__(self, technique='cyclegan', verbose=True):
url_dict = {
'pix2pix': 'http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/',
'cyclegan': 'https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets'
}
self.url = url_dict.get(technique.lower())
self._verbose = verbose
def _print(self, text):
if self._verbose:
print(text)
@staticmethod
def _get_options(r):
soup = BeautifulSoup(r.text, 'lxml')
options = [h.text for h in soup.find_all('a', href=True)
if h.text.endswith(('.zip', 'tar.gz'))]
return options
def _present_options(self):
r = requests.get(self.url)
options = self._get_options(r)
print('Options:\n')
for i, o in enumerate(options):
print("{0}: {1}".format(i, o))
choice = input("\nPlease enter the number of the "
"dataset above you wish to download:")
return options[int(choice)]
def _download_data(self, dataset_url, save_path):
if not isdir(save_path):
os.makedirs(save_path)
base = basename(dataset_url)
temp_save_path = join(save_path, base)
with open(temp_save_path, "wb") as f:
r = requests.get(dataset_url)
f.write(r.content)
if base.endswith('.tar.gz'):
obj = tarfile.open(temp_save_path)
elif base.endswith('.zip'):
obj = ZipFile(temp_save_path, 'r')
else:
raise ValueError("Unknown File Type: {0}.".format(base))
self._print("Unpacking Data...")
obj.extractall(save_path)
obj.close()
os.remove(temp_save_path)
def get(self, save_path, dataset=None):
"""
Download a dataset.
Parameters:
save_path (str) -- A directory to save the data to.
dataset (str) -- (optional). A specific dataset to download.
Note: this must include the file extension.
If None, options will be presented for you
to choose from.
Returns:
save_path_full (str) -- the absolute path to the downloaded data.
"""
if dataset is None:
selected_dataset = self._present_options()
else:
selected_dataset = dataset
save_path_full = join(save_path, selected_dataset.split('.')[0])
if isdir(save_path_full):
warn("\n'{0}' already exists. Voiding Download.".format(
save_path_full))
else:
self._print('Downloading Data...')
url = "{0}/{1}".format(self.url, selected_dataset)
self._download_data(url, save_path=save_path)
return abspath(save_path_full)
================================================
FILE: util/html.py
================================================
import dominate
from dominate.tags import meta, h3, table, tr, td, p, a, img, br
import os
class HTML:
"""This HTML class allows us to save images and write texts into a single HTML file.
It consists of functions such as <add_header> (add a text header to the HTML file),
<add_images> (add a row of images to the HTML file), and <save> (save the HTML to the disk).
It is based on 'dominate', a Python library for creating and manipulating HTML documents using a DOM API.
"""
def __init__(self, web_dir, title, refresh=0):
"""Initialize the HTML classes
Parameters:
web_dir (str) -- a directory that stores the webpage. HTML file will be created at <web_dir>/index.html; images will be saved at <web_dir/images/
title (str) -- the webpage name
refresh (int) -- how often the website refresh itself; if 0; no refreshing
"""
self.title = title
self.web_dir = web_dir
self.img_dir = os.path.join(self.web_dir, 'images')
if not os.path.exists(self.web_dir):
os.makedirs(self.web_dir)
if not os.path.exists(self.img_dir):
os.makedirs(self.img_dir)
self.doc = dominate.document(title=title)
if refresh > 0:
with self.doc.head:
meta(http_equiv="refresh", content=str(refresh))
def get_image_dir(self):
"""Return the directory that stores images"""
return self.img_dir
def add_header(self, text):
"""Insert a header to the HTML file
Parameters:
text (str) -- the header text
"""
with self.doc:
h3(text)
def add_images(self, ims, txts, links, width=400):
"""Add images to the HTML file
Parameters:
ims (str list) -- a list of image paths
txts (str list) -- a list of image names shown on the website
links (str list) -- a list of hyperlinks; when you click an image, it will redirect you to a new page
width (int) -- the displayed width of each image, in pixels
"""
self.t = table(border=1, style="table-layout: fixed;") # Insert a table
self.doc.add(self.t)
with self.t:
with tr():
for im, txt, link in zip(ims, txts, links):
with td(style="word-wrap: break-word;", halign="center", valign="top"):
with p():
with a(href=os.path.join('images', link)):
img(style="width:%dpx" % width, src=os.path.join('images', im))
br()
p(txt)
def save(self):
"""save the current content to the HMTL file"""
html_file = '%s/index.html' % self.web_dir
f = open(html_file, 'wt')
f.write(self.doc.render())
f.close()
if __name__ == '__main__': # we show an example usage here.
html = HTML('web/', 'test_html')
html.add_header('hello world')
ims, txts, links = [], [], []
for n in range(4):
ims.append('image_%d.png' % n)
txts.append('text_%d' % n)
links.append('image_%d.png' % n)
html.add_images(ims, txts, links)
html.save()
================================================
FILE: util/image_pool.py
================================================
import random
import torch
class ImagePool():
"""This class implements an image buffer that stores previously generated images.
This buffer enables us to update discriminators using a history of generated images
rather than the ones produced by the latest generators.
"""
def __init__(self, pool_size):
"""Initialize the ImagePool class
Parameters:
pool_size (int) -- the size of image buffer, if pool_size=0, no buffer will be created
"""
self.pool_size = pool_size
if self.pool_size > 0: # create an empty pool
self.num_imgs = 0
self.images = []
def query(self, images):
"""Return an image from the pool.
Parameters:
images: the latest generated images from the generator
Returns images from the buffer.
With probability 0.5, the buffer returns the input images.
With probability 0.5, the buffer returns images previously stored in it,
and inserts the current images into the buffer.
"""
if self.pool_size == 0: # if the buffer size is 0, do nothing
return images
return_images = []
for image in images:
image = torch.unsqueeze(image.data, 0)
if self.num_imgs < self.pool_size: # if the buffer is not full; keep inserting current images to the buffer
self.num_imgs = self.num_imgs + 1
self.images.append(image)
return_images.append(image)
else:
p = random.uniform(0, 1)
if p > 0.5: # by 50% chance, the buffer will return a previously stored image, and insert the current image into the buffer
random_id = random.randint(0, self.pool_size - 1) # randint is inclusive
tmp = self.images[random_id].clone()
self.images[random_id] = image
return_images.append(tmp)
else: # by another 50% chance, the buffer will return the current image
return_images.append(image)
return_images = torch.cat(return_images, 0) # collect all the images and return
return return_images
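A minimal sketch of the conventional way such a pool feeds a discriminator update (netD here is a placeholder module; whether and where this repository's models call query is defined in the model code, with the buffer size set by --pool_size):

import torch
from util.image_pool import ImagePool

pool = ImagePool(pool_size=50)
fake = torch.randn(2, 3, 256, 256)       # freshly generated images (placeholder)
fake_for_D = pool.query(fake)            # mix of current and historical fakes
# pred_fake = netD(fake_for_D.detach())  # the discriminator scores the mixture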
================================================
FILE: util/util.py
================================================
"""This module contains simple helper functions """
from __future__ import print_function
import torch
import numpy as np
from PIL import Image
import os
import importlib
import argparse
from argparse import Namespace
import torchvision
def str2bool(v):
if isinstance(v, bool):
return v
if v.lower() in ('yes', 'true', 't', 'y', '1'):
return True
elif v.lower() in ('no', 'false', 'f', 'n', '0'):
return False
else:
raise argparse.ArgumentTypeError('Boolean value expected.')
def copyconf(default_opt, **kwargs):
conf = Namespace(**vars(default_opt))
for key in kwargs:
setattr(conf, key, kwargs[key])
return conf
def find_class_in_module(target_cls_name, module):
target_cls_name = target_cls_name.replace('_', '').lower()
clslib = importlib.import_module(module)
cls = None
for name, clsobj in clslib.__dict__.items():
if name.lower() == target_cls_name:
cls = clsobj
assert cls is not None, "In %s, there should be a class whose name matches %s in lowercase without underscore(_)" % (module, target_cls_name)
return cls
def tensor2im(input_image, imtype=np.uint8):
""""Converts a Tensor array into a numpy image array.
Parameters:
input_image (tensor) -- the input image tensor array
imtype (type) -- the desired type of the converted numpy array
"""
if not isinstance(input_image, np.ndarray):
if isinstance(input_image, torch.Tensor): # get the data from a variable
image_tensor = input_image.data
else:
return input_image
image_numpy = image_tensor[0].clamp(-1.0, 1.0).cpu().float().numpy() # convert it into a numpy array
if image_numpy.shape[0] == 1: # grayscale to RGB
image_numpy = np.tile(image_numpy, (3, 1, 1))
image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 # post-processing: transpose and scaling
else: # if it is a numpy array, do nothing
image_numpy = input_image
return image_numpy.astype(imtype)
def diagnose_network(net, name='network'):
"""Calculate and print the mean of average absolute(gradients)
Parameters:
net (torch network) -- Torch network
name (str) -- the name of the network
"""
mean = 0.0
count = 0
for param in net.parameters():
if param.grad is not None:
mean += torch.mean(torch.abs(param.grad.data))
count += 1
if count > 0:
mean = mean / count
print(name)
print(mean)
def save_image(image_numpy, image_path, aspect_ratio=1.0):
"""Save a numpy image to the disk
Parameters:
image_numpy (numpy array) -- input numpy array
image_path (str) -- the path of the image
"""
image_pil = Image.fromarray(image_numpy)
h, w, _ = image_numpy.shape
if aspect_ratio is None:
pass
elif aspect_ratio > 1.0:
image_pil = image_pil.resize((h, int(w * aspect_ratio)), Image.BICUBIC)
elif aspect_ratio < 1.0:
image_pil = image_pil.resize((int(h / aspect_ratio), w), Image.BICUBIC)
image_pil.save(image_path)
def print_numpy(x, val=True, shp=False):
"""Print the mean, min, max, median, std, and size of a numpy array
Parameters:
val (bool) -- if print the values of the numpy array
shp (bool) -- if print the shape of the numpy array
"""
x = x.astype(np.float64)
if shp:
print('shape,', x.shape)
if val:
x = x.flatten()
print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % (
np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x)))
def mkdirs(paths):
"""create empty directories if they don't exist
Parameters:
paths (str list) -- a list of directory paths
"""
if isinstance(paths, list) and not isinstance(paths, str):
for path in paths:
mkdir(path)
else:
mkdir(paths)
def mkdir(path):
"""create a single empty directory if it didn't exist
Parameters:
path (str) -- a single directory path
"""
if not os.path.exists(path):
os.makedirs(path)
def correct_resize_label(t, size):
device = t.device
t = t.detach().cpu()
resized = []
for i in range(t.size(0)):
one_t = t[i, :1]
one_np = np.transpose(one_t.numpy().astype(np.uint8), (1, 2, 0))
one_np = one_np[:, :, 0]
one_image = Image.fromarray(one_np).resize(size, Image.NEAREST)
resized_t = torch.from_numpy(np.array(one_image)).long()
resized.append(resized_t)
return torch.stack(resized, dim=0).to(device)
def correct_resize(t, size, mode=Image.BICUBIC):
device = t.device
t = t.detach().cpu()
resized = []
for i in range(t.size(0)):
one_t = t[i:i + 1]
one_image = Image.fromarray(tensor2im(one_t)).resize(size, mode) # honor the mode parameter instead of hard-coding BICUBIC
resized_t = torchvision.transforms.functional.to_tensor(one_image) * 2 - 1.0
resized.append(resized_t)
return torch.stack(resized, dim=0).to(device)
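A short round-trip sketch for tensor2im and save_image above: the input is assumed to be a network output living in [-1, 1] (the clamp enforces this), and the result is an HxWx3 uint8 array:

import torch
from util.util import tensor2im, save_image

t = torch.rand(1, 3, 64, 64) * 2 - 1  # fake network output in [-1, 1]
im = tensor2im(t)                     # -> (64, 64, 3) uint8 array in [0, 255]
save_image(im, 'demo.png')            # written to disk via PIL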
================================================
FILE: util/visualizer.py
================================================
import numpy as np
import os
import sys
import ntpath
import time
from . import util, html
from subprocess import Popen, PIPE
if sys.version_info[0] == 2:
VisdomExceptionBase = Exception
else:
VisdomExceptionBase = ConnectionError
def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256):
"""Save images to the disk.
Parameters:
webpage (the HTML class) -- the HTML webpage class that stores these images (see html.py for more details)
visuals (OrderedDict) -- an ordered dictionary that stores (name, images (either tensor or numpy) ) pairs
image_path (str) -- the string is used to create image paths
aspect_ratio (float) -- the aspect ratio of saved images
width (int) -- the images will be resized to width x width
This function will save images stored in 'visuals' to the HTML file specified by 'webpage'.
"""
image_dir = webpage.get_image_dir()
short_path = ntpath.basename(image_path[0])
name = os.path.splitext(short_path)[0]
webpage.add_header(name)
ims, txts, links = [], [], []
for label, im_data in visuals.items():
im = util.tensor2im(im_data)
image_name = '%s_%s.png' % (name, label)
os.makedirs(os.path.join(image_dir, label), exist_ok=True)
save_path = os.path.join(image_dir, image_name)
util.save_image(im, save_path, aspect_ratio=aspect_ratio)
ims.append(image_name)
txts.append(label)
links.append(image_name)
webpage.add_images(ims, txts, links, width=width)
class Visualizer():
"""This class includes several functions that can display/save images and print/save logging information.
It uses a Python library 'visdom' for display, and a Python library 'dominate' (wrapped in 'HTML') for creating HTML files with images.
"""
def __init__(self, opt):
"""Initialize the Visualizer class
Parameters:
opt -- stores all the experiment flags; needs to be a subclass of BaseOptions
Step 1: Cache the training/test options
Step 2: connect to a visdom server
Step 3: create an HTML object for saving HTML files
Step 4: create a logging file to store training losses
"""
self.opt = opt # cache the option
if opt.display_id is None:
self.display_id = np.random.randint(100000) * 10 # just a random display id
else:
self.display_id = opt.display_id
self.use_html = opt.isTrain and not opt.no_html
self.win_size = opt.display_winsize
self.name = opt.name
self.port = opt.display_port
self.saved = False
if self.display_id > 0: # connect to a visdom server given <display_port> and <display_server>
import visdom
self.plot_data = {}
self.ncols = opt.display_ncols
if "tensorboard_base_url" not in os.environ:
self.vis = visdom.Visdom(server=opt.display_server, port=opt.display_port, env=opt.display_env)
else:
self.vis = visdom.Visdom(port=2004,
base_url=os.environ['tensorboard_base_url'] + '/visdom')
if not self.vis.check_connection():
self.create_visdom_connections()
if self.use_html: # create an HTML object at <checkpoints_dir>/web/; images will be saved under <checkpoints_dir>/web/images/
self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web')
self.img_dir = os.path.join(self.web_dir, 'images')
print('create web directory %s...' % self.web_dir)
util.mkdirs([self.web_dir, self.img_dir])
# create a logging file to store training losses
self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt')
with open(self.log_name, "a") as log_file:
now = time.strftime("%c")
log_file.write('================ Training Loss (%s) ================\n' % now)
def reset(self):
"""Reset the self.saved status"""
self.saved = False
def create_visdom_connections(self):
"""If the program could not connect to Visdom server, this function will start a new server at port < self.port > """
cmd = sys.executable + ' -m visdom.server -p %d &>/dev/null &' % self.port
print('\n\nCould not connect to Visdom server. \n Trying to start a server....')
print('Command: %s' % cmd)
Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
def display_current_results(self, visuals, epoch, save_result):
"""Display current results on visdom; save current results to an HTML file.
Parameters:
visuals (OrderedDict) -- dictionary of images to display or save
epoch (int) -- the current epoch
save_result (bool) -- whether to save the current results to an HTML file
"""
if self.display_id > 0: # show images in the browser using visdom
ncols = self.ncols
if ncols > 0: # show all the images in one visdom panel
ncols = min(ncols, len(visuals))
h, w = next(iter(visuals.values())).shape[:2]
table_css = """<style>
table {border-collapse: separate; border-spacing: 4px; white-space: nowrap; text-align: center}
table td {width: %dpx; height: %dpx; padding: 4px; outline: 4px solid black}
</style>""" % (w, h) # create a table css
# create a table of images.
title = self.name
label_html = ''
label_html_row = ''
images = []
idx = 0
for label, image in visuals.items():
image_numpy = util.tensor2im(image)
label_html_row += '<td>%s</td>' % label
images.append(image_numpy.transpose([2, 0, 1]))
idx += 1
if idx % ncols == 0:
label_html += '<tr>%s</tr>' % label_html_row
label_html_row = ''
white_image = np.ones_like(image_numpy.transpose([2, 0, 1])) * 255
while idx % ncols != 0:
images.append(white_image)
label_html_row += '<td></td>'
idx += 1
if label_html_row != '':
label_html += '<tr>%s</tr>' % label_html_row
try:
self.vis.images(images, ncols, 2, self.display_id + 1,
None, dict(title=title + ' images'))
label_html = '<table>%s</table>' % label_html
self.vis.text(table_css + label_html, win=self.display_id + 2,
opts=dict(title=title + ' labels'))
except VisdomExceptionBase:
self.create_visdom_connections()
else: # show each image in a separate visdom panel;
idx = 1
try:
for label, image in visuals.items():
image_numpy = util.tensor2im(image)
self.vis.image(
image_numpy.transpose([2, 0, 1]),
self.display_id + idx,
None,
dict(title=label)
)
idx += 1
except VisdomExceptionBase:
self.create_visdom_connections()
if self.use_html and (save_result or not self.saved): # save images to an HTML file if they haven't been saved.
self.saved = True
# save images to the disk
for label, image in visuals.items():
image_numpy = util.tensor2im(image)
img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label))
util.save_image(image_numpy, img_path)
# update website
webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=0)
for n in range(epoch, 0, -1):
webpage.add_header('epoch [%d]' % n)
ims, txts, links = [], [], []
for label, image_numpy in visuals.items():
image_numpy = util.tensor2im(image_numpy) # convert the stored visual itself, not the stale loop variable `image` from above
img_path = 'epoch%.3d_%s.png' % (n, label)
ims.append(img_path)
txts.append(label)
links.append(img_path)
webpage.add_images(ims, txts, links, width=self.win_size)
webpage.save()
def plot_current_losses(self, epoch, counter_ratio, losses):
"""display the current losses on visdom display: dictionary of error labels and values
Parameters:
epoch (int) -- current epoch
counter_ratio (float) -- progress (percentage) in the current epoch, between 0 and 1
losses (OrderedDict) -- training losses stored in the format of (name, float) pairs
"""
if len(losses) == 0:
return
plot_name = '_'.join(list(losses.keys()))
if plot_name not in self.plot_data:
self.plot_data[plot_name] = {'X': [], 'Y': [], 'legend': list(losses.keys())}
plot_data = self.plot_data[plot_name]
plot_id = list(self.plot_data.keys()).index(plot_name)
plot_data['X'].append(epoch + counter_ratio)
plot_data['Y'].append([losses[k] for k in plot_data['legend']])
try:
self.vis.line(
X=np.stack([np.array(plot_data['X'])] * len(plot_data['legend']), 1),
Y=np.array(plot_data['Y']),
opts={
'title': self.name,
'legend': plot_data['legend'],
'xlabel': 'epoch',
'ylabel': 'loss'},
win=self.display_id - plot_id)
except VisdomExceptionBase:
self.create_visdom_connections()
# losses: same format as |losses| of plot_current_losses
def print_current_losses(self, epoch, iters, losses, t_comp, t_data):
"""print current losses on console; also save the losses to the disk
Parameters:
epoch (int) -- current epoch
iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)
losses (OrderedDict) -- training losses stored in the format of (name, float) pairs
t_comp (float) -- computational time per data point (normalized by batch_size)
t_data (float) -- data loading time per data point (normalized by batch_size)
"""
message = '(epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data)
for k, v in losses.items():
message += '%s: %.3f ' % (k, v)
print(message) # print the message
with open(self.log_name, "a") as log_file:
log_file.write('%s\n' % message) # save the message
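For reference, the X/Y stacking in plot_current_losses produces matching (num_points, num_lines) matrices, which is the shape visdom's line plot expects for multiple traces. A small sketch with made-up loss values:

import numpy as np

legend = ['G', 'D_real', 'D_fake']            # hypothetical loss names
X_raw = [1.00, 1.25]                          # epoch + counter_ratio samples
Y_raw = [[0.90, 0.40, 0.50],
         [0.80, 0.45, 0.48]]                  # one row of losses per sample
X = np.stack([np.array(X_raw)] * len(legend), 1)
Y = np.array(Y_raw)
print(X.shape, Y.shape)                       # -> (2, 3) (2, 3)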
================================================
SYMBOL INDEX (200 symbols across 21 files)
================================================
FILE: data/__init__.py
function find_dataset_using_name (line 18) | def find_dataset_using_name(dataset_name):
function get_option_setter (line 41) | def get_option_setter(dataset_name):
function create_dataset (line 47) | def create_dataset(opt):
class CustomDatasetDataLoader (line 62) | class CustomDatasetDataLoader():
method __init__ (line 65) | def __init__(self, opt):
method set_epoch (line 83) | def set_epoch(self, epoch):
method load_data (line 86) | def load_data(self):
method __len__ (line 89) | def __len__(self):
method __iter__ (line 93) | def __iter__(self):
FILE: data/base_dataset.py
class BaseDataset (line 13) | class BaseDataset(data.Dataset, ABC):
method __init__ (line 23) | def __init__(self, opt):
method modify_commandline_options (line 34) | def modify_commandline_options(parser, is_train):
method __len__ (line 47) | def __len__(self):
method __getitem__ (line 52) | def __getitem__(self, index):
function get_params (line 64) | def get_params(opt, size):
function get_transform (line 82) | def get_transform(opt, params=None, grayscale=False, method=Image.BICUBI...
function __make_power_2 (line 134) | def __make_power_2(img, base, method=Image.BICUBIC):
function __random_zoom (line 144) | def __random_zoom(img, target_width, crop_width, method=Image.BICUBIC, f...
function __scale_shortside (line 156) | def __scale_shortside(img, target_width, crop_width, method=Image.BICUBIC):
function __trim (line 166) | def __trim(img, trim_width):
function __scale_width (line 183) | def __scale_width(img, target_width, crop_width, method=Image.BICUBIC):
function __crop (line 192) | def __crop(img, pos, size):
function __patch (line 201) | def __patch(img, index, size):
function __flip (line 217) | def __flip(img, flip):
function __print_size_warning (line 223) | def __print_size_warning(ow, oh, w, h):
FILE: data/image_folder.py
function is_image_file (line 20) | def is_image_file(filename):
function make_dataset (line 24) | def make_dataset(dir, max_dataset_size=float("inf")):
function default_loader (line 36) | def default_loader(path):
class ImageFolder (line 40) | class ImageFolder(data.Dataset):
method __init__ (line 42) | def __init__(self, root, transform=None, return_paths=False,
method __getitem__ (line 55) | def __getitem__(self, index):
method __len__ (line 65) | def __len__(self):
FILE: data/unaligned_dataset.py
class UnalignedDataset (line 9) | class UnalignedDataset(BaseDataset):
method __init__ (line 20) | def __init__(self, opt):
method __getitem__ (line 39) | def __getitem__(self, index):
method __len__ (line 72) | def __len__(self):
FILE: experiments/__init__.py
function find_launcher_using_name (line 5) | def find_launcher_using_name(launcher_name):
FILE: experiments/__main__.py
function find_launcher_using_name (line 5) | def find_launcher_using_name(launcher_name):
FILE: models/MSP.py
class StyleExtractor (line 9) | class StyleExtractor(nn.Module):
method __init__ (line 12) | def __init__(self, encoder, gpu_ids = []):
method encode_with_intermediate (line 57) | def encode_with_intermediate(self, input):
method forward (line 64) | def forward(self, input, index):
class Projector (line 80) | class Projector(nn.Module):
method __init__ (line 81) | def __init__(self, projector, gpu_ids = []):
method forward (line 137) | def forward(self, input, index):
function make_layers (line 151) | def make_layers(cfg, batch_norm=True):
class InfoNCELoss (line 169) | class InfoNCELoss(nn.Module):
method __init__ (line 171) | def __init__(self, temperature, feature_dim, queue_size):
method forward (line 219) | def forward(self, query, key, style = 'real'):
method dequeue_and_enqueue (line 265) | def dequeue_and_enqueue(self, keys, style = 'real'):
FILE: models/__init__.py
function find_model_using_name (line 25) | def find_model_using_name(model_name):
function get_option_setter (line 48) | def get_option_setter(model_name):
function create_model (line 54) | def create_model(opt):
FILE: models/base_model.py
class BaseModel (line 8) | class BaseModel(ABC):
method __init__ (line 18) | def __init__(self, opt):
method dict_grad_hook_factory (line 47) | def dict_grad_hook_factory(add_func=lambda x: x):
method modify_commandline_options (line 58) | def modify_commandline_options(parser, is_train):
method set_input (line 71) | def set_input(self, input):
method forward (line 80) | def forward(self):
method optimize_parameters (line 85) | def optimize_parameters(self):
method setup (line 89) | def setup(self, opt):
method parallelize (line 103) | def parallelize(self):
method data_dependent_initialize (line 109) | def data_dependent_initialize(self, data):
method eval (line 112) | def eval(self):
method test (line 119) | def test(self):
method compute_visuals (line 129) | def compute_visuals(self):
method get_image_paths (line 133) | def get_image_paths(self):
method update_learning_rate (line 137) | def update_learning_rate(self):
method get_current_visuals (line 148) | def get_current_visuals(self):
method get_current_losses (line 156) | def get_current_losses(self):
method save_networks (line 164) | def save_networks(self, epoch):
method __patch_instance_norm_state_dict (line 182) | def __patch_instance_norm_state_dict(self, state_dict, module, keys, i...
method load_networks (line 196) | def load_networks(self, epoch):
method print_networks (line 226) | def print_networks(self, verbose):
method set_requires_grad (line 244) | def set_requires_grad(self, nets, requires_grad=False):
method generate_visuals_for_evaluation (line 257) | def generate_visuals_for_evaluation(self, data, mode):
FILE: models/cast_model.py
class CASTModel (line 13) | class CASTModel(BaseModel):
method modify_commandline_options (line 19) | def modify_commandline_options(parser, is_train=True):
method __init__ (line 47) | def __init__(self, opt):
method optimize_parameters (line 133) | def optimize_parameters(self):
method set_input (line 164) | def set_input(self, input):
method forward (line 175) | def forward(self):
method backward_D_basic (line 189) | def backward_D_basic(self, netD, content,style, fake):
method backward_D_NCEloss (line 212) | def backward_D_NCEloss(self):
method backward_D (line 240) | def backward_D(self):
method compute_G_loss (line 258) | def compute_G_loss(self):
function init_weights (line 311) | def init_weights(net, init_type='normal', init_gain=0.02):
function init_net (line 345) | def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]):
FILE: models/net.py
class ADAIN_Encoder (line 61) | class ADAIN_Encoder(nn.Module):
method __init__ (line 62) | def __init__(self, encoder, gpu_ids=[]):
method encode_with_intermediate (line 78) | def encode_with_intermediate(self, input):
method calc_mean_std (line 85) | def calc_mean_std(self, feat, eps=1e-5):
method adain (line 95) | def adain(self, content_feat, style_feat):
method forward (line 105) | def forward(self, content, style, encoded_only = False):
class Decoder (line 114) | class Decoder(nn.Module):
method __init__ (line 115) | def __init__(self, gpu_ids=[]):
method forward (line 150) | def forward(self, adain_feat):
FILE: models/networks.py
function get_filter (line 14) | def get_filter(filt_size=3):
class Downsample (line 36) | class Downsample(nn.Module):
method __init__ (line 37) | def __init__(self, channels, pad_type='reflect', filt_size=3, stride=2...
method forward (line 52) | def forward(self, inp):
class Upsample2 (line 62) | class Upsample2(nn.Module):
method __init__ (line 63) | def __init__(self, scale_factor, mode='nearest'):
method forward (line 68) | def forward(self, x):
class Upsample (line 72) | class Upsample(nn.Module):
method __init__ (line 73) | def __init__(self, channels, pad_type='repl', filt_size=4, stride=2):
method forward (line 87) | def forward(self, inp):
function get_pad_layer (line 95) | def get_pad_layer(pad_type):
class Identity (line 107) | class Identity(nn.Module):
method forward (line 108) | def forward(self, x):
function get_norm_layer (line 112) | def get_norm_layer(norm_type='instance'):
function get_scheduler (line 133) | def get_scheduler(optimizer, opt):
function init_weights (line 162) | def init_weights(net, init_type='kaiming', init_gain=0.02, debug=False):
function init_net (line 197) | def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[], debug=...
function define_D (line 217) | def define_D(input_nc, ndf, netD, n_layers_D=3, image_size = 256, featur...
class GANLoss (line 271) | class GANLoss(nn.Module):
method __init__ (line 278) | def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=...
method get_target_tensor (line 304) | def get_target_tensor(self, prediction, target_is_real):
method __call__ (line 321) | def __call__(self, prediction, target_is_real):
function cal_gradient_penalty (line 355) | def cal_gradient_penalty(netD, real_data, fake_data, device, type='mixed...
class Normalize (line 392) | class Normalize(nn.Module):
method __init__ (line 394) | def __init__(self, power=2):
method forward (line 398) | def forward(self, x):
class ResBlocks (line 408) | class ResBlocks(nn.Module):
method __init__ (line 409) | def __init__(self, num_blocks, dim, norm='inst', activation='relu', pa...
method forward (line 416) | def forward(self, x):
function cat_feature (line 423) | def cat_feature(x, y):
class Conv2dBlock (line 429) | class Conv2dBlock(nn.Module):
method __init__ (line 430) | def __init__(self, input_dim, output_dim, kernel_size, stride,
method forward (line 474) | def forward(self, x):
class LinearBlock (line 483) | class LinearBlock(nn.Module):
method __init__ (line 484) | def __init__(self, input_dim, output_dim, norm='none', activation='rel...
method forward (line 519) | def forward(self, x):
class LayerNorm (line 532) | class LayerNorm(nn.Module):
method __init__ (line 533) | def __init__(self, num_features, eps=1e-5, affine=True):
method forward (line 543) | def forward(self, x):
class NLayerDiscriminator (line 554) | class NLayerDiscriminator(nn.Module):
method __init__ (line 557) | def __init__(self, input_nc, ndf=64, n_layers=3, image_size = 256,norm...
method forward (line 608) | def forward(self, input):
class PixelDiscriminator (line 613) | class PixelDiscriminator(nn.Module):
method __init__ (line 616) | def __init__(self, input_nc, ndf=64, norm_layer=nn.BatchNorm2d):
method forward (line 640) | def forward(self, input):
class PatchDiscriminator (line 644) | class PatchDiscriminator(NLayerDiscriminator):
method __init__ (line 647) | def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNo...
method forward (line 650) | def forward(self, input):
class GroupedChannelNorm (line 659) | class GroupedChannelNorm(nn.Module):
method __init__ (line 660) | def __init__(self, num_groups):
method forward (line 664) | def forward(self, x):
FILE: models/torch_utils.py
function concat_all_gather (line 10) | def concat_all_gather(tensor, world_size):
function get_rank (line 19) | def get_rank(group=None):
function get_world_size (line 26) | def get_world_size(group=None):
function kaiming_init (line 33) | def kaiming_init(mod):
function set_seed (line 41) | def set_seed(seed):
function update_average (line 52) | def update_average(net, net_ema, m=0.999):
function warmup_learning_rate (line 58) | def warmup_learning_rate(optimizer, lr, train_step, warmup_step):
FILE: options/base_options.py
class BaseOptions (line 9) | class BaseOptions():
method __init__ (line 16) | def __init__(self, cmd_line=None):
method initialize (line 23) | def initialize(self, parser):
method gather_options (line 77) | def gather_options(self):
method print_options (line 114) | def print_options(self, opt):
method parse (line 143) | def parse(self):
FILE: options/test_options.py
class TestOptions (line 4) | class TestOptions(BaseOptions):
method initialize (line 10) | def initialize(self, parser):
FILE: options/train_options.py
class TrainOptions (line 4) | class TrainOptions(BaseOptions):
method initialize (line 10) | def initialize(self, parser):
FILE: util/get_data.py
class GetData (line 11) | class GetData(object):
method __init__ (line 27) | def __init__(self, technique='cyclegan', verbose=True):
method _print (line 35) | def _print(self, text):
method _get_options (line 40) | def _get_options(r):
method _present_options (line 46) | def _present_options(self):
method _download_data (line 56) | def _download_data(self, dataset_url, save_path):
method get (line 79) | def get(self, save_path, dataset=None):
FILE: util/html.py
class HTML (line 6) | class HTML:
method __init__ (line 14) | def __init__(self, web_dir, title, refresh=0):
method get_image_dir (line 35) | def get_image_dir(self):
method add_header (line 39) | def add_header(self, text):
method add_images (line 48) | def add_images(self, ims, txts, links, width=400):
method save (line 68) | def save(self):
FILE: util/image_pool.py
class ImagePool (line 5) | class ImagePool():
method __init__ (line 12) | def __init__(self, pool_size):
method query (line 23) | def query(self, images):
FILE: util/util.py
function str2bool (line 13) | def str2bool(v):
function copyconf (line 24) | def copyconf(default_opt, **kwargs):
function find_class_in_module (line 31) | def find_class_in_module(target_cls_name, module):
function tensor2im (line 44) | def tensor2im(input_image, imtype=np.uint8):
function diagnose_network (line 65) | def diagnose_network(net, name='network'):
function save_image (line 84) | def save_image(image_numpy, image_path, aspect_ratio=1.0):
function print_numpy (line 104) | def print_numpy(x, val=True, shp=False):
function mkdirs (line 120) | def mkdirs(paths):
function mkdir (line 133) | def mkdir(path):
function correct_resize_label (line 143) | def correct_resize_label(t, size):
function correct_resize (line 157) | def correct_resize(t, size, mode=Image.BICUBIC):
FILE: util/visualizer.py
function save_images (line 15) | def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256):
class Visualizer (line 46) | class Visualizer():
method __init__ (line 52) | def __init__(self, opt):
method reset (line 95) | def reset(self):
method create_visdom_connections (line 99) | def create_visdom_connections(self):
method display_current_results (line 106) | def display_current_results(self, visuals, epoch, save_result):
method plot_current_losses (line 191) | def plot_current_losses(self, epoch, counter_ratio, losses):
method print_current_losses (line 226) | def print_current_losses(self, epoch, iters, losses, t_comp, t_data):