main 320640116abd cached
185 files
655.5 KB
154.5k tokens
1259 symbols
1 requests
Download .txt
Showing preview only (709K chars total). Download the full file or copy to clipboard to get everything.
Repository: pkhungurn/talking-head-anime-4-demo
Branch: main
Commit: 320640116abd
Files: 185
Total size: 655.5 KB

Directory structure:
gitextract_k8uli292/

├── .gitignore
├── .python-version
├── LICENSE
├── README.md
├── bin/
│   ├── activate-venv.bat
│   ├── activate-venv.sh
│   ├── run
│   └── run.bat
├── distiller-ui-doc/
│   ├── index.html
│   └── params/
│       ├── body_morpher_batch_size.html
│       ├── body_morpher_random_seed_0.html
│       ├── body_morpher_random_seed_1.html
│       ├── character_image_file_name.html
│       ├── face_mask_image_file_name.html
│       ├── face_morpher_batch_size.html
│       ├── face_morpher_random_seed_0.html
│       ├── face_morpher_random_seed_1.html
│       ├── num_cpu_workers.html
│       ├── num_gpus.html
│       ├── num_training_examples_per_sample_output.html
│       └── prefix.html
├── docs/
│   ├── character_model_ifacialmocap_puppeteer.md
│   ├── character_model_manual_poser.md
│   ├── character_model_mediapipe_puppeteer.md
│   ├── distill.md
│   ├── distiller_ui.md
│   └── full_manual_poser.md
├── poetry/
│   ├── README.md
│   └── pyproject.toml
└── src/
    └── tha4/
        ├── __init__.py
        ├── app/
        │   ├── __init__.py
        │   ├── character_model_ifacialmocap_puppeteer.py
        │   ├── character_model_manual_poser.py
        │   ├── character_model_mediapipe_puppeteer.py
        │   ├── distill.py
        │   ├── distiller_ui.py
        │   └── full_manual_poser.py
        ├── charmodel/
        │   ├── __init__.py
        │   └── character_model.py
        ├── dataset/
        │   ├── __init__.py
        │   └── image_poses_and_aother_images_dataset.py
        ├── distiller/
        │   ├── __init__.py
        │   ├── config_based_training_tasks.py
        │   ├── distill_body_morpher.py
        │   ├── distill_face_morpher.py
        │   ├── distiller_config.py
        │   └── ui/
        │       ├── __init__.py
        │       ├── distiller_config_state.py
        │       └── distiller_ui_main_frame.py
        ├── image_util.py
        ├── mocap/
        │   ├── __init__.py
        │   ├── ifacialmocap_constants.py
        │   ├── ifacialmocap_pose.py
        │   ├── ifacialmocap_pose_converter.py
        │   ├── ifacialmocap_pose_converter_25.py
        │   ├── ifacialmocap_v2.py
        │   ├── mediapipe_constants.py
        │   ├── mediapipe_face_pose.py
        │   ├── mediapipe_face_pose_converter.py
        │   └── mediapipe_face_pose_converter_00.py
        ├── nn/
        │   ├── __init__.py
        │   ├── common/
        │   │   ├── __init__.py
        │   │   ├── conv_block_factory.py
        │   │   ├── poser_args.py
        │   │   ├── poser_encoder_decoder_00.py
        │   │   ├── poser_encoder_decoder_00_separable.py
        │   │   ├── resize_conv_encoder_decoder.py
        │   │   ├── resize_conv_unet.py
        │   │   └── unet.py
        │   ├── conv.py
        │   ├── eyebrow_decomposer/
        │   │   ├── __init__.py
        │   │   └── eyebrow_decomposer_00.py
        │   ├── eyebrow_morphing_combiner/
        │   │   ├── __init__.py
        │   │   └── eyebrow_morphing_combiner_00.py
        │   ├── face_morpher/
        │   │   ├── __init__.py
        │   │   └── face_morpher_08.py
        │   ├── image_processing_util.py
        │   ├── init_function.py
        │   ├── morpher/
        │   │   ├── __init__.py
        │   │   └── morpher_00.py
        │   ├── nonlinearity_factory.py
        │   ├── normalization.py
        │   ├── pass_through.py
        │   ├── resnet_block.py
        │   ├── resnet_block_seperable.py
        │   ├── separable_conv.py
        │   ├── siren/
        │   │   ├── __init__.py
        │   │   ├── face_morpher/
        │   │   │   ├── __init__.py
        │   │   │   ├── siren_face_morpher_00.py
        │   │   │   ├── siren_face_morpher_00_trainer.py
        │   │   │   └── siren_face_morpher_protocols_00.py
        │   │   ├── morpher/
        │   │   │   ├── __init__.py
        │   │   │   ├── siren_morpher_03.py
        │   │   │   ├── siren_morpher_03_trainer.py
        │   │   │   └── siren_morpher_protocols_03.py
        │   │   └── vanilla/
        │   │       ├── __init__.py
        │   │       └── siren.py
        │   ├── spectral_norm.py
        │   ├── upscaler/
        │   │   ├── __init__.py
        │   │   └── upscaler_02.py
        │   └── util.py
        ├── poser/
        │   ├── __init__.py
        │   ├── general_poser_02.py
        │   ├── modes/
        │   │   ├── __init__.py
        │   │   ├── mode_07.py
        │   │   ├── mode_12.py
        │   │   ├── mode_14.py
        │   │   └── pose_parameters.py
        │   └── poser.py
        ├── pytasuku/
        │   ├── __init__.py
        │   ├── indexed/
        │   │   ├── __init__.py
        │   │   ├── all_tasks.py
        │   │   ├── bundled_indexed_file_tasks.py
        │   │   ├── indexed_file_tasks.py
        │   │   ├── indexed_tasks.py
        │   │   ├── no_index_command_tasks.py
        │   │   ├── no_index_file_tasks.py
        │   │   ├── one_index_file_tasks.py
        │   │   ├── simple_no_index_file_tasks.py
        │   │   ├── two_indices_file_tasks.py
        │   │   └── util.py
        │   ├── task.py
        │   ├── task_selector_ui.py
        │   ├── util.py
        │   └── workspace.py
        ├── sampleoutput/
        │   ├── __init__.py
        │   ├── general_sample_output_protocol.py
        │   ├── poser_sampler_output_protocol.py
        │   └── sample_image_creator.py
        └── shion/
            ├── __init__.py
            ├── base/
            │   ├── __init__.py
            │   ├── dataset/
            │   │   ├── __init__.py
            │   │   ├── lazy_dataset.py
            │   │   ├── lazy_tensor_dataset.py
            │   │   ├── png_in_dir_dataset.py
            │   │   ├── util.py
            │   │   └── xformed_dataset.py
            │   ├── image_util.py
            │   ├── loss/
            │   │   ├── __init__.py
            │   │   ├── computed_scale_loss.py
            │   │   ├── computed_scaled_l2_loss.py
            │   │   ├── l1_loss.py
            │   │   ├── l2_loss.py
            │   │   ├── sum_loss.py
            │   │   └── time_dependently_weighted_loss.py
            │   ├── module_accumulators.py
            │   ├── optimizer_factories.py
            │   ├── protocol/
            │   │   └── single_network_from_batch_input_computation_protocol.py
            │   └── training/
            │       ├── __init__.py
            │       ├── single_network.py
            │       ├── single_network_with_minibatch.py
            │       └── two_networks_training_protocol.py
            ├── core/
            │   ├── __init__.py
            │   ├── cached_computation.py
            │   ├── load_save.py
            │   ├── loss.py
            │   ├── module_accumulator.py
            │   ├── module_factory.py
            │   ├── optimizer_factory.py
            │   └── training/
            │       ├── __init__.py
            │       ├── distrib/
            │       │   ├── __init__.py
            │       │   ├── device_mapper.py
            │       │   ├── distributed_trainer.py
            │       │   ├── distributed_training_states.py
            │       │   └── distributed_training_tasks.py
            │       ├── sample_output_protocol.py
            │       ├── single/
            │       │   ├── __init__.py
            │       │   ├── training_states.py
            │       │   └── training_tasks.py
            │       ├── swarm/
            │       │   ├── __init__.py
            │       │   ├── swarm_training_tasks.py
            │       │   └── swarm_unit_trainer.py
            │       ├── training_protocol.py
            │       ├── util.py
            │       └── validation_protocol.py
            └── nn00/
                ├── __init__.py
                ├── block_args.py
                ├── conv.py
                ├── initialization_funcs.py
                ├── linear_module_args.py
                ├── nonlinearity_factories.py
                ├── normalization_layer_factories.py
                ├── normalization_layer_factory.py
                ├── pass_through.py
                └── resnet_block.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Compiled class file
*.class

# Log file
*.log

# BlueJ files
*.ctxt

# Mobile Tools for Java (J2ME)
.mtj.tmp/

# Package Files #
*.jar
*.war
*.nar
*.ear
*.zip
*.tar.gz
*.rar

# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
hs_err_pid*

.gradle
.vscode

out/
.idea/
.gradle/
build/
/data/
.idea/*
*.iml

*/.idea/*
*/build
*/.gradle/*
*/out/*
*.pyc
*.pyd
**/.cache/*
*/bin

__pycache__/
./tools/

temp/

*/*/bin/

venv/*

# io dump to ABCI tasks
*.o*

================================================
FILE: .python-version
================================================
3.10.11

================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2024 pixiv Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy 
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

================================================
FILE: README.md
================================================
# Demo Code for "Talking Head(?) Anime from a Single Image 4: Improved Model and Its Distillation"

This repository contains demo programs for the "Talking Head(?) Anime from a Single Image 4: Improved Model and Its Distillation" project. Roughly, the project is about a machine learning model that can animate an anime character given only one image. However, the model is too slow to run in real-time. So, it also proposes an algorithm to use the model to train a small machine learning model that is specialized to a character image that can anime the character in real time.

This demo code has two parts.

* **Improved model.** This part gives a model similar to [Version 3](https://github.com/pkhungurn/talking-head-anime-3-demo) of the porject. It has one demo program:

  * The `full_manual_poser` allows the user to manipulate a character's facial expression and body rotation through a graphical user interface.

  There are no real-time demos because the new model is too slow for that.

* **Distillation.** This part allows the user to train small models (which we will refer to as **student models**) to mimic that behavior of the full system with regards to a specific character image. It also allows the user to run these models under various interfaces. The demo programs are:

  * `distill` trains a student model given a configuration file, a $512 \times 512$ RGBA character image, and a mask of facial organs.
  * `distiller_ui` provides a user-friendly interface to `distill`, allowing you to create training configurations and providing useful documentation.
  * `character_model_manual_poser` allows the user to control trained student models with a graphical user interface.
  * `character_model_ifacialmocap_puppeteer` allows the user to control trained student models with their facial movement, which is captured by the [iFacialMocap](https://www.ifacialmocap.com/) software. To run this software, you must have an iOS device and, of course, iFacialMocap.
  *  `character_model_mediapipe_puppeteer` allows the user to control trained student models with their facial movement, which is captured a web camera and processed by the [Mediapipe FaceLandmarker](https://developers.google.com/mediapipe/solutions/vision/face_landmarker) model. To run this software, you need a web camera.

## Preemptive FAQs

### What is the program to control character images with my facial movement?

There is no such program in this release. If you want one, try the `ifacialmocap_puppeteer` of [Version 3](https://github.com/pkhungurn/talking-head-anime-3-demo).

### OK. I'm confused. Isn't your work about easy VTubing? Are you saying this release cannot do it?

NO. This release does it in a more complicated way. In order to control an image, you need to create a "student model." It is a small (< 2MB) and fast machine learning model that knows how to animate that particular image. Then, the student model can be controlled with facial movement. You can find two student models in the `data/character_models` directory. The [two](https://pkhungurn.github.io/talking-head-anime-4/supplementary/webcam-demo/index.html) [demos](https://pkhungurn.github.io/talking-head-anime-4/supplementary/manual-poser-demo/index.html) on the project website feature 13 students models.

### So, for this release, you can control only these few characters in real time?

No. You can create your own student models.

### How do I create this student model then?

1. You prepare your characater image according to the "Constraint on Input Images" section below.
2. You prepare a black-and-white mask image that covers the eyes and the mouth of the character, like [this image](data/images/lambda_00_face_mask.png). You can see how I made it with [GIMP](https://www.gimp.org/) by inspecting this [GIMP file](data/images/lambda_00_face_mask.xcf).
3. You use `distiller_ui` to create a configuration file that specifies how the student model should be trained.
4. You use `distiller_ui` or `distill` to start the training process.
5. You wait several ten hours for the student model to finish training. Last time I tried, it was about 30 hours on a computer with an Nvidia RTX A6000 GPU.
6. After that, you can control the student model with `character_model_ifacialmocap_puppeteer` and `character_model_mediapipe_puppeteer`.

### Why is this release so hard to use?

[Version 3](https://github.com/pkhungurn/talking-head-anime-3-demo) is arguably easier to use because you can give it an animate and you can control it with your facial movment immediately. However, I was not satisfied with its image quality and speed. 

In this release, I explore a new way of doing things. I added a new preprocessing stage (i.e., training the student models) that has to be done one time per character image. It allows the image to be animated much faster at a higher image quality level.

In other words, it makes the user's life difficult but the engineer/researcher happy. Patient users who are willing to go through the steps, though, would be rewarded with faster animation.


### Can I use a student model from a web browser?

No. A student model created by `distill` is a [PyTorch](https://pytorch.org/) model, which cannot run directly in the browser. It needs to be converted to the appropriate format ([TensorFlow.js](https://www.tensorflow.org/js)) first, and the [web](https://pkhungurn.github.io/talking-head-anime-4/supplementary/webcam-demo/index.html) [demos](https://pkhungurn.github.io/talking-head-anime-4/supplementary/manual-poser-demo/index.html) use the converted models. However, The conversion code is not included in this repository. I will not release it unless I change my mind.

## Hardware Requirements

All programs require a recent and powerful Nvidia GPU to run. I developed the programs on a machine with an Nvidia RTX A6000. However, anything after the GeForce RTX 2080 should be fine.

The `character_model_ifacialmocap_puppeteer` program requires an iOS device that is capable of computing [blend shape parameters](https://developer.apple.com/documentation/arkit/arfaceanchor/2928251-blendshapes) from a video feed. This means that the device must be able to run iOS 11.0 or higher and must have a TrueDepth front-facing camera. (See [this page](https://developer.apple.com/documentation/arkit/content_anchors/tracking_and_visualizing_faces) for more info.) In other words, if you have the iPhone X or something better, you should be all set. Personally, I have used an iPhone 12 mini.

The `character_model_mediapipe_puppeteer` program requires a web camera.

## Software Requirements

### GPU Driver and CUDA Toolkit

Please update your GPU's device driver and install the [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit) that is compatible with your GPU and is newer than the version you will be installing in the next subsection.

### Python and Python Libraries

All programs are written in the [Python](https://www.python.org/) programming languages. The following libraries are required:

* `python` 3.10.11
* `torch` 1.13.1 with CUDA support
* `torchvision` 0.14.1
* `tensorboard` 2.15.1
* `opencv-python` 4.8.1.78
* `wxpython` 4.2.1
* `numpy-quaternion` 2022.4.2
* `pillow` 9.4.0
* `matplotlib` 3.6.3
* `einops` 0.6.0
* `mediapipe` 0.10.3
* `numpy` 1.26.3
* `scipy` 1.12.0
* `omegaconf` 2.3.0

Instead of installing these libraries yourself, you should follow the recommended method to set up a Python environment in the next section.

### iFacialMocap

If you want to use ``ifacialmocap_puppeteer``, you will also need to an iOS software called [iFacialMocap](https://www.ifacialmocap.com/) (a 980 yen purchase in the App Store). Your iOS and your computer must use the same network. For example, you may connect them to the same wireless router.

## Creating Python Environment

### Installing Python

Please install [Python 3.10.11](https://www.python.org/downloads/release/python-31011/). 

I recommend using [`pyenv`](https://github.com/pyenv/pyenv) (or [`pyenv-win`](https://github.com/pyenv-win/pyenv-win) for Windows users) to manage multiple Python versions on your system. If you use `pyenv`, this repository has a `.python-version` file that indicates it would use Python 3.10.11. So, you will be using Python 3.10.11 automatically once you `cd` into the repository's directory.

Make sure that you can run Python from the command line.

### Installing Poetry

Please install [Poetry](https://python-poetry.org/) 1.7 or later. We will use it to automatically install the required libraries. Again, make sure that you can run it from the command line.

### Cloning the Repository

Please clone the repository to an arbitrary directory in your machine.

### Instruction for Linux/OSX Users

1. Open a shell.
2. `cd` to the directory you just cloned the repository too
   ```
   cd SOMEWHERE/talking-head-anime-4-demo
   ```
3. Use Python to create a virtual environment under the `venv` directory.
   ```
   python -m venv venv --prompt talking-head-anime-4-demo
   ```
4. Activate the newly created virtual environment. You can either use the script I provide:
   ```
   source bin/activate-venv.sh
   ```
   or do it yourself:
   ```
   source venv/bin/activate   
   ```
5. Use Poetry to install libraries.
   ```
   cd poetry
   poetry install
   ```

### Instruction for Windows Users

1. Open a shell.
2. `cd` to the directory you just cloned the repository too
   ```
   cd SOMEWHERE\talking-head-anime-4-demo
   ```
3. Use Python to create a virtual environment under the `venv` directory.
   ```
   python -m venv venv --prompt talking-head-anime-4-demo
   ```
4. Activate the newly created virtual environment. You can either use the script I provide:
   ```
   bin\activate-venv.bat
   ```
   or do it yourself:
   ```
   venv\Scripts\activate   
   ```
5. Use Poetry to install libraries.
   ```
   cd poetry
   poetry install
   ```

## Download the Models/Dataset Files

### THA4 Models

Please download [this ZIP file](https://www.dropbox.com/scl/fi/7wec0sur7449iqgtlpi3n/tha4-models.zip?rlkey=0f9d1djmbvjjjn09469s1adx8&dl=0) hosted on Dropbox, and unzip it to the `data/tha4` directory the under the repository's directory. In the end, the directory tree should look like the following diagram:

```
+ talking-head-anime-4-demo
   + data
      - character_models
      - distill_examples
      + tha4
         - body_morpher.pt
         - eyebrow_decomposer.pt
         - eyebrow_morphing_combiner.pt
         - face_morpher.pt
         - upscaler.pt
     - images
     - third_party
```

### Pose Dataset

If you want to create your own student models, you also need to download a dataset of poses that are needed for the training process. Download [this `pose_dataset.pt` file](https://www.dropbox.com/scl/fi/du10e6buzr5bslbe025qu/pose_dataset.pt?rlkey=y052g4n3xb14nu2elctzouc5x&dl=0) and save it to the `data` folder. The directory tree should then look like the following diagram:

```
+ talking-head-anime-4-demo
   + data
      - character_models
      - distill_examples
      - tha4
      - images
      - third_party
      - pose_dataset.pt
```

## Running the Programs

The programs are located in the `src/tha4/app` directory. You need to run them from a shell with the provided scripts.

### Instruction for Linux/OSX Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE/talking-head-anime-4-demo
   ```
3. Run a program.
   ```
   bin/run src/tha4/app/<program-file-name>
   ```
   where `<program-file-name>` can be replaced with:
   
   * `character_model_ifacialmocap_puppeteer.py`
   * `character_model_manual_poser.py`
   * `character_model_mediapipe_puppeteer.py`
   * `distill.py`
   * `disllerer_ui.py`
   * `full_manual_poser.py`

### Instruction for Windows Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE\talking-head-anime-4-demo
   ```
3. Run a program.
   ```
   bin\run.bat src\tha4\app\<program-file-name>
   ```
   where `<program-file-name>` can be replaced with:
   
   * `character_model_ifacialmocap_puppeteer.py`
   * `character_model_manual_poser.py`
   * `character_model_mediapipe_puppeteer.py`
   * `distill.py`
   * `disllerer_ui.py`
   * `full_manual_poser.py`

## Contraints on Input Images

In order for the system to work well, the input image must obey the following constraints:

* It should be of resolution 512 x 512. (If the demo programs receives an input image of any other size, they will resize the image to this resolution and also output at this resolution.)
* It must have an alpha channel.
* It must contain only one humanoid character.
* The character should be standing upright and facing forward.
* The character's hands should be below and far from the head.
* The head of the character should roughly be contained in the 128 x 128 box in the middle of the top half of the image.
* The alpha channels of all pixels that do not belong to the character (i.e., background pixels) must be 0.

![An example of an image that conforms to the above criteria](docs/images/input_spec.png "An example of an image that conforms to the above criteria")

## Documentation for the Tools

* [`character_model_ifacial_model_puppeteer`](docs/character_model_ifacialmocap_puppeteer.md)
* [`character_model_manual_poser`](docs/character_model_manual_poser.md)
* [`character_model_mediapipe_puppeteer`](docs/character_model_mediapipe_puppeteer.md)
* [`distill`](docs/distill.md)
* [`distiller_ui`](docs/distiller_ui.md)
* [`full_manual_poser`](docs/full_manual_poser.md)

## Disclaimer

The author is an employee of [pixiv Inc.](https://www.pixiv.co.jp/) This project is a part of his work as a researcher.

However, this project is NOT a pixiv product. The company will NOT provide any support for this project. The author will try to support the project, but there are no Service Level Agreements (SLAs) that he will maintain.

The code is released under the [MIT license](https://github.com/pkhungurn/talking-head-anime-2-demo/blob/master/LICENSE).
The THA4 models and the images under the `data/images` directory are released under the [Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/deed.en).

This repository redistributes a version of the [Face landmark detection model](https://developers.google.com/mediapipe/solutions/vision/face_landmarker) from the [MediaPipe](https://developers.google.com/mediapipe) project. The model has been released under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).

================================================
FILE: bin/activate-venv.bat
================================================
venv\Scripts\activate

================================================
FILE: bin/activate-venv.sh
================================================
#! /bin/bash
source venv/bin/activate

================================================
FILE: bin/run
================================================
#! /bin/bash
export PYTHONPATH=$(pwd)/src
venv/bin/python $@


================================================
FILE: bin/run.bat
================================================
set PYTHONPATH=%cd%\src
venv\Scripts\python.exe %*


================================================
FILE: distiller-ui-doc/index.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation</title>
</head>
<body>
<h1>How to use Distiller UI</h1>

<p>This program is called <code>distiller_ui</code>. It allows you to create and modify configurations for the process of distilling the full, but slow THA4 system to a student model that can be run in real time on computers with moderately powerful GPUs.</p>

<h2>Basic Usage</h2>

<p>This program manipulates YAML files that are used as configurations for the distillation process. The menus
<ul>
    <li><b>File &rarr; New</b></li>
    <li><b>File &rarr; Open</b></li>
    <li><b>File &rarr; Save</b></li>
</ul>
do what they are supposed to do in typical application programs.
</p>

<p>You can use the UI in the middle panel to change various parameters of the configuration. If you do not understand what the meaning of a parameter, click the "Help" button for that parameter to learn more.</p>

<p>Once you have modified the parameters to your liking, click the "RUN" button at the bottom of the middle panel to carry out the distillation. This will take several ten hours, so sit back and relax.</p>

<p>The distillation process can be interrupted and resumed at any time. As a result, you do not have to worry that you may lose data if there's a blackout or if you need to free your GPU(s) to do something else. Resuming can be done through this program or through the <code>distill</code> script.</p>

<h2>Explanation of Configuration Parameters</h2>

<ul>
    <li><a href="params/prefix.html"><code>prefix</code></a></li>
    <li><a href="params/character_image_file_name.html"><code>character_image_file_name</code></a></li>
    <li><a href="params/face_mask_image_file_name.html"><code>face_mask_image_file_name</code></a></li>
    <li><a href="params/num_cpu_workers.html"><code>num_cpu_workers</code></a></li>
    <li><a href="params/num_gpus.html"><code>num_gpus</code></a></li>
    <li><a href="params/face_morpher_random_seed_0.html"><code>face_morpher_random_seed_0</code></a></li>
    <li><a href="params/face_morpher_random_seed_1.html"><code>face_morpher_random_seed_1</code></a></li>
    <li><a href="params/face_morpher_batch_size.html"><code>face_morpher_batch_size</code></a></li>
    <li><a href="params/body_morpher_random_seed_0.html"><code>body_morpher_random_seed_0</code></a></li>
    <li><a href="params/body_morpher_random_seed_1.html"><code>body_morpher_random_seed_1</code></a></li>
    <li><a href="params/body_morpher_batch_size.html"><code>body_morpher_batch_size</code></a></li>
    <li><a href="params/num_training_examples_per_sample_output.html"><code>num_training_examples_per_sample_output</code></a></li>
</ul>

</body>
</html>

================================================
FILE: distiller-ui-doc/params/body_morpher_batch_size.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: body_morpher_batch_size</title>
</head>
<body>
<h1><code>body_morpher_batch_size</code></h1>

<p>The "batch size" is the number of training examples shown to a machine learning model in one round of parameter update. This parameter is the batch size for training the student body morpher. We recommend you set it to 8. However, if your computer does not have enough GPU RAM, you can reduce the number to any smaller positive integer.</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/body_morpher_random_seed_0.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: body_morpher_random_seed_0</title>
</head>
<body>
<h1><code>body_morpher_random_seed_0</code></h1>

<p>This parameter will be used as a random seed in the process of training the student body morpher. It can be any non-negative integer from 0 to 2<sup>64</sup>-1. You can specify the number directly, or use the "Randomize" button to specify a random one.</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/body_morpher_random_seed_1.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: body_morpher_random_seed_1</title>
</head>
<body>
<h1><code>body_morpher_random_seed_1</code></h1>

<p>This parameter will be used as a random seed in the process of training the student body morpher. It can be any non-negative integer from 0 to 2<sup>64</sup>-1. You can specify the number directly, or use the "Randomize" button to specify a random one.</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/character_image_file_name.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: character_image_file_name</title>
</head>
<body>
<h1><code>character_image_file_name</code></h1>
<p>This is the name of the file of an image of a humanoid character. The image must conform to the following specifications.</p>

<p>
<ul>
    <li>It MUST in the PNG format.</li>
    <li>It MUST have an alpha channel.</li>
    <li>It MUST be 512 x 512.</li>
    <li>It MUST contain only one humanoid character.</li>
    <li>The character should be standing upright and facing forward.</li>
    <li>The character's hands should be below and far from the head.</li>
    <li>The head of the character should roughly be contained in the 128 x 128 box in the middle of the top half of the image.</li>
    <li>The alpha channels of all pixels that do not belong to the character (i.e., background pixels) must be 0.</li>
</ul>
</p>

<p>
<img src="../images/input_spec.png" alt="">
</p>

<p>Once you have chosen the image, a crop of the character face will be shown on the right side of the window. In order for the distillation process works correctly, <b>make sure that all the movable parts of the face&mdash; eyes, eyebrows, mouth, jaw line &mdash; can all be seen in this crop.</b></p>

<p>
<table border="1" cellpadding="5">
    <tr>
        <td align="center"><img src="../images/face_crop_ok.png" alt=""><br><font size="18" color="green">&#9745;</font></td>
        <td>This image is GOOD because we can see all of the eyes, eyebrows, mouth, and jaw line in the image.</td>
    </tr>
    <tr>
        <td align="center"><img src="../images/face_crop_not_ok_00.png" alt=""><br><font size="18" color="red">&#9746;</font></td>
        <td>This image is NOT GOOD because we cannot see the whole of the jaw line in the image</td>
    </tr>
    <tr>
        <td align="center"><img src="../images/face_crop_not_ok_01.png" alt=""><br><font size="18" color="red">&#9746;</font></td>
        <td>This image is NOT GOOD because we cannot see the whole of the right eye and eyebrow in the image.</td>
    </tr>
    <tr>
        <td align="center"><img src="../images/face_crop_not_ok_02.png" alt=""><br><font size="18" color="red">&#9746;</font></td>
        <td>This image is NOT GOOD because we cannot see the whole of the eyebrows in the image.</td>
    </tr>
</table>
</p>

<p>The <code>data/images</code> directory contains two example images that conform to all the above specifications: <code>data/images/lambda_00.png</code> and <code>data/images/lambda_01.png</code>. Please use them as references.</p>

<hr>

<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/face_mask_image_file_name.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: face_mask_image_file_name</title>
</head>
<body>
<h1><code>face_mask_image_file_name</code></h1>

<p>This is the name of the file containing binary masks of movable facial organs of the character. It is probably the best to see an example.</p>

<p>
    <img src="../../data/images/lambda_00_face_mask.png" alt="">
</p>

<p>A "face mask image" conforms to the following specification.</p>

<p>
    <ul>
        <li>It must be in the PNG format.</li>
        <li>It must be 512 x 512.</li>
        <li>It must be an RGB image (i.e., no alpha channel).</li>
        <li>All pixels must be either block (0,0,0) or white (255,255,255).</li>
        <li>The white pixels should cover movable parts of the face.</li>
    </ul>
</p>

<p>We recommend creating three rectangles.</p>

<p>
    <ul>
        <li>One covers the right eye and eyebrow.</li>
        <li>One covers the left eye and eyebrow.</li>
        <li>One covers the mouth and the jaw line.</li>
    </ul>
</p>

<p>The rectangles for the eyes and the eyebrows should extend above the eyes to some extent because the eyebrows can move upward.</p>

<p>Once you have specified the face mask image with the "Change..." button, a crop of the face area will show up on the left side of the window. If the character image has also been specified, an image of the face mask laid over the character's face will also show up. Use this image to check whether the masks are covering everything.</p>

<p>
    <img src="../images/left_panel.png" alt="">
</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/face_morpher_batch_size.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: face_morpher_batch_size</title>
</head>
<body>
<h1><code>face_morpher_batch_size</code></h1>

<p>The "batch size" is the number of training examples shown to a machine learning model in one round of parameter update. This parameter is the batch size for training the student face morpher. We recommend you set it to 8. However, if your computer does not have enough GPU RAM, you can reduce the number to any smaller positive integer.</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/face_morpher_random_seed_0.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: face_morpher_random_seed_0</title>
</head>
<body>
<h1><code>face_morpher_random_seed_0</code></h1>

<p>This parameter will be used as a random seed in the process of training the student face morpher. It can be any non-negative integer from 0 to 2<sup>64</sup>-1. You can specify the number directly, or use the "Randomize" button to specify a random one.</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/face_morpher_random_seed_1.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: face_morpher_random_seed_1</title>
</head>
<body>
<h1><code>face_morpher_random_seed_1</code></h1>

<p>This parameter will be used as a random seed in the process of training the student face morpher. It can be any non-negative integer from 0 to 2<sup>64</sup>-1. You can specify the number directly, or use the "Randomize" button to specify a random one.</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/num_cpu_workers.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: face_mask_image_file_name</title>
</head>
<body>
<h1><code>num_cpu_workers</code></h1>

<p>This is the number of worker threads that are used to process pose data during training of the student models. Typically, 1 would be enough, but you can specify up to the number of CPUs your computer has.</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/num_gpus.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: num_gpus</title>
</head>
<body>
<h1><code>num_gpus</code></h1>

<p>This is the number of GPUs that are used to to train the student models. Typically, 1 would be enough. However, you can specify up to the number of Nvidia GPUs that your PC has.</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/num_training_examples_per_sample_output.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: num_training_example_per_sample_output</title>
</head>
<body>
<h1><code>num_training_example_per_sample_output</code></h1>

<p>During training of a student model, the training process would periodically create "sample output" produced by the model being trained in order to allow the user to see training progress and observe whether there is any anomalies.</p>

<p>This parameter specifies how frequent the sample outputs are generated. You can indicate whether you want a sample output to be generated every time the trained model has beeen shown 10,000, 100,000 or 1,000,000 training examples. If you do not care about sample outputs, you can also make the process not generate any sample outputs at all.</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: distiller-ui-doc/params/prefix.html
================================================
<html lang="en">
<head>
    <title>Distiller UI Documentation: prefix</title>
</head>
<body>
<h1><code>prefix</code></h1>

<p><code>prefix</code> is the name of the directory under which the distillation process will store the trained models and other intermediate data. Please choose a directory that is a subdirectory of the directory that stores the <code>talking-head-anime-4-demo</code>'s repository.</p>

<hr>
<a href="../index.html">Back to main documentation</a>
</body>
</html>

================================================
FILE: docs/character_model_ifacialmocap_puppeteer.md
================================================
# `character_model_ifacialmocap_puppeteer`

This program allows the user to control trained student models with their facial movement, which is captured by the [iFacialMocap](https://www.ifacialmocap.com/) software. You can purchase the software from the App Store for 980 Japanese Yen.

## Invoking the Program

Make sure you have (1) created a Python environment and (2) downloaded model files as instruction in the [main README file](../README.md).

### Instruction for Linux/OSX Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE/talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin/run src/tha4/app/character_model_ifacialmocap_puppeteer.py
   ```   

### Instruction for Windows Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE\talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin\run.bat src\tha4\app\character_model_ifacialmocap_puppeteer.py
   ```

## Usage

1. Run iFacialMocap on your iOS device. It should show you the device's IP address. Jot it down. Keep the app open.

   ![IP address in iFacialMocap screen](images/ifacialmocap_ip.jpg "IP address in iFacialMocap screen")

2. Invoke the `character_model_ifacialmocap_puppeteer` application.

3. You will see a text box with label "Capture Device IP." Write the iOS device's IP address that you jotted down there.

   ![Write IP address of your iOS device in the 'Capture Device IP' text box.](images/ifacialmocap-puppeteer-device-ip.png "Write IP address of your iOS device in the 'Capture Device IP' text box.")

4. Click the "START CAPTURE!" button to the right.

   ![Click the 'START CAPTURE!' button.](images/ifacialmocap-puppeteer-start-capture.png "Click the 'START CAPTURE!' button.")

   If the programs are connected properly, you should see the numbers in the bottom part of the window change when you move your head.

   ![The numbers in the bottom part of the window should change when you move your head.](images/ifacialmocap-puppeteer-moving-numbers.png "The numbers in the bottom part of the window should change when you move your head.")

5. Now, you can load a student model, and the character should follow your facial movement.

================================================
FILE: docs/character_model_manual_poser.md
================================================
# `character_model_manual_poser`

This program allows the user to control trained student models with a graphical user interface, mostly sliders.

## Invoking the Program

Make sure you have (1) created a Python environment and (2) downloaded model files as instruction in the [main README file](../README.md).

### Instruction for Linux/OSX Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE/talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin/run src/tha4/app/character_model_manual_poser.py
   ```   

### Instruction for Windows Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE\talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin\run.bat src\tha4\app\character_model_manual_poser.py
   ```   


================================================
FILE: docs/character_model_mediapipe_puppeteer.md
================================================
# `character_model_mediapipe_puppeteer`

allows the user to control trained student models with their facial movement, which is captured by a web camera and processed by the [Mediapipe FaceLandmarker](https://developers.google.com/mediapipe/solutions/vision/face_landmarker) model.

## Web Camera

Please make sure that, before you invoke the program, your computer has a web camera plugged in. The program will use a web camera, but it does not allow you to specify which. In case your machine has more than one web camera, you can turn off all camera except the one that you want to use. 

You can also inspect the [source code](../src/tha4/app/character_model_mediapipe_puppeteer.py) and change the 

```
    video_capture = cv2.VideoCapture(0)
```

line to choose a particular camera that you want to use.

## Invoking the Program

Make sure you have (1) created a Python environment and (2) downloaded model files as instruction in the [main README file](../README.md).

### Instruction for Linux/OSX Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE/talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin/run src/tha4/app/character_model_mediapipe_puppeteer.py
   ```   

### Instruction for Windows Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE\talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin\run.bat src\tha4\app\character_model_mediapipe_puppeteer.py
   ```   


================================================
FILE: docs/distill.md
================================================
# `distill`

This program trains a student model given a configuration file, a $512 \times 512$ RGBA character image, and a mask of facial organs.

## Invoking the Program

Make sure you have (1) created a Python environment and (2) downloaded model files as instruction in the [main README file](../README.md).

### Instruction for Linux/OSX Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE/talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin/run src/tha4/app/distill.py <config-file>
   ```
   where `<config-file>` is a configuration file for creating a student model. More on this later.

### Instruction for Windows Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE\talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin\run.bat src\tha4\app\full_manual_poser.py <config-file>
   ```   
   where `<config-file>` is a configuration file for creating a student model. More on this later.

## Configuration File

A configuration file is a [YAML](https://yaml.org/) file that specify how to create a student model. This repository comes with two valid configuration files that you can peruse:

* [data/distill_examples/lambda_00/config.yaml](../data/distill_examples/lambda_00/config.yaml)
* [data/distill_examples/lambda_01/config.yaml](../data/distill_examples/lambda_01/config.yaml)

I recommend that you use the `distiller_ui` program to create configuration files rather than writing them yourself. Inside the program, you can see what the fields are and what they mean.

## What `distill` Outputs

Inside the configuration file, you specify a directory where the student models should be saved to in the `prefix` field. After `distill` is done with its job, the output directory will look like this:

```
+ <prefix-specified-in-config-file>
  + body_morpher
  + face_morpher
  + character_model
  - config.yaml
```

Here:

* `config.yaml` is a copy of the configuration file that you wrote. 
* The `character_model` directory contains a trained student model that can be used with `character_model_manual_poser.md`, `character_model_ifacialmocap_puppeteer.md`, and `character_model_mediapipe_puppeteer.md`. 
* `body_morpher` is a scratch directory that was used to save intermediate results during the training of a part of the student model.
* `face_morpher` is a scratch directory that was used to save intermediate results during the training of another part of the student model.

You only need what is inside the `character_model` directory. As a resulit, you can delete other files after the `character_model` directory has been filled. You can move the directory out to somewhere and rename it as long as the contents inside are not modified.

## The Training Process Is Interruptible

Invoking `distill` on a configuration will start a rather long process of training a student model. On a machine with an A6000 GPU, it takes about 30 hours to complete. As a result, it might take several days on machines with less powerful GPUs.

The training process is robust and interruptible. You can stop it any time by closing the shell window or by typing `Ctrl+C`. Intermediate results are periodically saved in the scratch directories, ready to be picked up at a later time when you are ready to train the student model again. To resume the process, just invoke `distill` again with the same configuration file that you started with, and the process will take care of itself.

================================================
FILE: docs/distiller_ui.md
================================================
# `distiller_ui`

This program provides a user-friendly interface to the [`distill`](distill.md) program, allowing you to create training configurations and providing useful documentation.

## Invoking the Program

Make sure you have (1) created a Python environment and (2) downloaded model files as instruction in the [main README file](../README.md).

### Instruction for Linux/OSX Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE/talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin/run src/tha4/app/distill_ui.py
   ```   

### Instruction for Windows Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE\talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin\run.bat src\tha4\app\distill_ui.py
   ```   

## Usage

Please consult the documentation inside the program itself. It is available on the rightmost panel.

================================================
FILE: docs/full_manual_poser.md
================================================
# `full_manual_poser`

This program uses the full version of the Talking Head(?) Anime 4 system to animate character images.

## Invoking the Program

Make sure you have (1) created a Python environment and (2) downloaded model files as instruction in the [main README file](../README.md).

### Instruction for Linux/OSX Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE/talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin/run src/tha4/app/full_manual_poser.py
   ```   

### Instruction for Windows Users

1. Open a shell.
2. `cd` to the repository's directory.
   ```
   cd SOMEWHERE\talking-head-anime-4-demo
   ```
3. Run the program.
   ```
   bin\run.bat src\tha4\app\full_manual_poser.py
   ```

================================================
FILE: poetry/README.md
================================================


================================================
FILE: poetry/pyproject.toml
================================================
[tool.poetry]
name = "talking-head-anime-4-demo"
version = "0.1.0"
description = "Demo code for Talking Head(?) Anime 4"
authors = ["Pramook Khungurn <pong@pixiv.co.jp>"]
readme = "README.md"
packages = [
    {include = "tha4", from = "../src"},
]

[tool.poetry.dependencies]
python = ">=3.10, <3.11"
torch = {version = "1.13.1", source = "torch_cu117"}
torchvision = {version = "0.14.1", source = "torch_cu117"}
tensorboard = "^2.15.1"
opencv-python = "^4.8.1.78"
wxpython = "^4.2.1"
numpy-quaternion = "^2022.4.2"
pillow = "^9.4.0"
matplotlib = "^3.6.3"
einops = "^0.6.0"
mediapipe = "^0.10.3"
numpy = "^1.26.3"
scipy = "^1.12.0"
omegaconf = "^2.3.0"

[[tool.poetry.source]]
name = "torch_cu117"
url = "https://download.pytorch.org/whl/cu117"
priority = "explicit"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

================================================
FILE: src/tha4/__init__.py
================================================


================================================
FILE: src/tha4/app/__init__.py
================================================


================================================
FILE: src/tha4/app/character_model_ifacialmocap_puppeteer.py
================================================
import os
import socket
import sys
import threading
import time
from typing import Optional

import PIL.Image

from tha4.shion.base.image_util import torch_linear_to_srgb
from tha4.image_util import convert_linear_to_srgb
from tha4.mocap.ifacialmocap_pose_converter_25 import create_ifacialmocap_pose_converter
from tha4.app.full_manual_poser import resize_PIL_image
from tha4.charmodel.character_model import CharacterModel

sys.path.append(os.getcwd())

from tha4.mocap.ifacialmocap_pose import create_default_ifacialmocap_pose
from tha4.mocap.ifacialmocap_v2 import IFACIALMOCAP_PORT, IFACIALMOCAP_START_STRING, parse_ifacialmocap_v2_pose

import torch
import wx

from tha4.mocap.ifacialmocap_constants import *
from tha4.mocap.ifacialmocap_pose_converter import IFacialMocapPoseConverter


class FpsStatistics:
    def __init__(self):
        self.count = 100
        self.fps = []

    def add_fps(self, fps):
        self.fps.append(fps)
        while len(self.fps) > self.count:
            del self.fps[0]

    def get_average_fps(self):
        if len(self.fps) == 0:
            return 0.0
        else:
            return sum(self.fps) / len(self.fps)


class MainFrame(wx.Frame):
    IMAGE_SIZE = 512

    def __init__(self, pose_converter: IFacialMocapPoseConverter, device: torch.device):
        super().__init__(None, wx.ID_ANY, "iFacialMocap Puppeteer (Fuji)")
        self.poser = None
        self.pose_converter = pose_converter
        self.device = device

        self.ifacialmocap_pose = create_default_ifacialmocap_pose()
        self.source_image_bitmap = wx.Bitmap(MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE)
        self.result_image_bitmap = wx.Bitmap(MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE)
        self.wx_source_image = None
        self.torch_source_image = None
        self.last_pose = None
        self.fps_statistics = FpsStatistics()
        self.last_update_time = None

        self.create_receiving_socket()
        self.create_ui()
        self.create_timers()
        self.Bind(wx.EVT_CLOSE, self.on_close)

        self.update_source_image_bitmap()
        self.update_result_image_bitmap()

    def create_receiving_socket(self):
        self.receiving_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.receiving_socket.bind(("", IFACIALMOCAP_PORT))
        self.receiving_socket.setblocking(False)

    def create_timers(self):
        self.capture_timer = wx.Timer(self, wx.ID_ANY)
        self.Bind(wx.EVT_TIMER, self.update_capture_panel, id=self.capture_timer.GetId())
        self.animation_timer = wx.Timer(self, wx.ID_ANY)
        self.Bind(wx.EVT_TIMER, self.update_result_image_bitmap, id=self.animation_timer.GetId())

    def on_close(self, event: wx.Event):
        # Stop the timers
        self.animation_timer.Stop()
        self.capture_timer.Stop()

        # Close receiving socket
        self.receiving_socket.close()

        # Destroy the windows
        self.Destroy()
        event.Skip()

    def on_start_capture(self, event: wx.Event):
        capture_device_ip_address = self.capture_device_ip_text_ctrl.GetValue()
        out_socket = None
        try:
            address = (capture_device_ip_address, IFACIALMOCAP_PORT)
            out_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            out_socket.sendto(IFACIALMOCAP_START_STRING, address)
        except Exception as e:
            message_dialog = wx.MessageDialog(self, str(e), "Error!", wx.OK)
            message_dialog.ShowModal()
            message_dialog.Destroy()
        finally:
            if out_socket is not None:
                out_socket.close()

    def read_ifacialmocap_pose(self):
        if not self.animation_timer.IsRunning():
            return self.ifacialmocap_pose
        socket_bytes = None
        while True:
            try:
                socket_bytes = self.receiving_socket.recv(8192)
            except socket.error as e:
                break
        if socket_bytes is not None:
            socket_string = socket_bytes.decode("utf-8")
            self.ifacialmocap_pose = parse_ifacialmocap_v2_pose(socket_string)
        return self.ifacialmocap_pose

    def on_erase_background(self, event: wx.Event):
        pass

    def create_animation_panel(self, parent):
        self.animation_panel = wx.Panel(parent, style=wx.RAISED_BORDER)
        self.animation_panel_sizer = wx.BoxSizer(wx.HORIZONTAL)
        self.animation_panel.SetSizer(self.animation_panel_sizer)
        self.animation_panel.SetAutoLayout(1)

        image_size = MainFrame.IMAGE_SIZE

        if True:
            self.input_panel = wx.Panel(self.animation_panel, size=(image_size, image_size + 128),
                                        style=wx.SIMPLE_BORDER)
            self.input_panel_sizer = wx.BoxSizer(wx.VERTICAL)
            self.input_panel.SetSizer(self.input_panel_sizer)
            self.input_panel.SetAutoLayout(1)
            self.animation_panel_sizer.Add(self.input_panel, 0, wx.FIXED_MINSIZE)

            self.source_image_panel = wx.Panel(self.input_panel, size=(image_size, image_size), style=wx.SIMPLE_BORDER)
            self.source_image_panel.Bind(wx.EVT_PAINT, self.paint_source_image_panel)
            self.source_image_panel.Bind(wx.EVT_ERASE_BACKGROUND, self.on_erase_background)
            self.input_panel_sizer.Add(self.source_image_panel, 0, wx.FIXED_MINSIZE)

            self.load_model_button = wx.Button(self.input_panel, wx.ID_ANY, "Load Model")
            self.input_panel_sizer.Add(self.load_model_button, 1, wx.EXPAND)
            self.load_model_button.Bind(wx.EVT_BUTTON, self.load_model)

            self.input_panel_sizer.Fit(self.input_panel)

        if True:
            self.pose_converter.init_pose_converter_panel(self.animation_panel)

        if True:
            self.animation_left_panel = wx.Panel(self.animation_panel, style=wx.SIMPLE_BORDER)
            self.animation_left_panel_sizer = wx.BoxSizer(wx.VERTICAL)
            self.animation_left_panel.SetSizer(self.animation_left_panel_sizer)
            self.animation_left_panel.SetAutoLayout(1)
            self.animation_panel_sizer.Add(self.animation_left_panel, 0, wx.EXPAND)

            self.result_image_panel = wx.Panel(self.animation_left_panel, size=(image_size, image_size),
                                               style=wx.SIMPLE_BORDER)
            self.result_image_panel.Bind(wx.EVT_PAINT, self.paint_result_image_panel)
            self.result_image_panel.Bind(wx.EVT_ERASE_BACKGROUND, self.on_erase_background)
            self.animation_left_panel_sizer.Add(self.result_image_panel, 0, wx.FIXED_MINSIZE)

            separator = wx.StaticLine(self.animation_left_panel, -1, size=(256, 5))
            self.animation_left_panel_sizer.Add(separator, 0, wx.EXPAND)

            background_text = wx.StaticText(self.animation_left_panel, label="--- Background ---",
                                            style=wx.ALIGN_CENTER)
            self.animation_left_panel_sizer.Add(background_text, 0, wx.EXPAND)

            self.output_background_choice = wx.Choice(
                self.animation_left_panel,
                choices=[
                    "TRANSPARENT",
                    "GREEN",
                    "BLUE",
                    "BLACK",
                    "WHITE"
                ])
            self.output_background_choice.SetSelection(0)
            self.animation_left_panel_sizer.Add(self.output_background_choice, 0, wx.EXPAND)

            separator = wx.StaticLine(self.animation_left_panel, -1, size=(256, 5))
            self.animation_left_panel_sizer.Add(separator, 0, wx.EXPAND)

            self.fps_text = wx.StaticText(self.animation_left_panel, label="")
            self.animation_left_panel_sizer.Add(self.fps_text, wx.SizerFlags().Border())

            self.animation_left_panel_sizer.Fit(self.animation_left_panel)

        self.animation_panel_sizer.Fit(self.animation_panel)

    def create_ui(self):
        self.main_sizer = wx.BoxSizer(wx.VERTICAL)
        self.SetSizer(self.main_sizer)
        self.SetAutoLayout(1)

        self.capture_pose_lock = threading.Lock()

        self.create_connection_panel(self)
        self.main_sizer.Add(self.connection_panel, wx.SizerFlags(0).Expand().Border(wx.ALL, 5))

        self.create_animation_panel(self)
        self.main_sizer.Add(self.animation_panel, wx.SizerFlags(0).Expand().Border(wx.ALL, 5))

        self.create_capture_panel(self)
        self.main_sizer.Add(self.capture_panel, wx.SizerFlags(0).Expand().Border(wx.ALL, 5))

        self.main_sizer.Fit(self)

    def create_connection_panel(self, parent):
        self.connection_panel = wx.Panel(parent, style=wx.RAISED_BORDER)
        self.connection_panel_sizer = wx.BoxSizer(wx.HORIZONTAL)
        self.connection_panel.SetSizer(self.connection_panel_sizer)
        self.connection_panel.SetAutoLayout(1)

        capture_device_ip_text = wx.StaticText(self.connection_panel, label="Capture Device IP:", style=wx.ALIGN_RIGHT)
        self.connection_panel_sizer.Add(capture_device_ip_text, wx.SizerFlags(0).FixedMinSize().Border(wx.ALL, 3))

        self.capture_device_ip_text_ctrl = wx.TextCtrl(self.connection_panel, value="192.168.0.1")
        self.connection_panel_sizer.Add(self.capture_device_ip_text_ctrl, wx.SizerFlags(1).Expand().Border(wx.ALL, 3))

        self.start_capture_button = wx.Button(self.connection_panel, label="START CAPTURE!")
        self.connection_panel_sizer.Add(self.start_capture_button, wx.SizerFlags(0).FixedMinSize().Border(wx.ALL, 3))
        self.start_capture_button.Bind(wx.EVT_BUTTON, self.on_start_capture)

    def create_capture_panel(self, parent):
        self.capture_panel = wx.Panel(parent, style=wx.RAISED_BORDER)
        self.capture_panel_sizer = wx.FlexGridSizer(cols=5)
        for i in range(5):
            self.capture_panel_sizer.AddGrowableCol(i)
        self.capture_panel.SetSizer(self.capture_panel_sizer)
        self.capture_panel.SetAutoLayout(1)

        self.rotation_labels = {}
        self.rotation_value_labels = {}
        rotation_column_0 = self.create_rotation_column(self.capture_panel, RIGHT_EYE_BONE_ROTATIONS)
        self.capture_panel_sizer.Add(rotation_column_0, wx.SizerFlags(0).Expand().Border(wx.ALL, 3))
        rotation_column_1 = self.create_rotation_column(self.capture_panel, LEFT_EYE_BONE_ROTATIONS)
        self.capture_panel_sizer.Add(rotation_column_1, wx.SizerFlags(0).Expand().Border(wx.ALL, 3))
        rotation_column_2 = self.create_rotation_column(self.capture_panel, HEAD_BONE_ROTATIONS)
        self.capture_panel_sizer.Add(rotation_column_2, wx.SizerFlags(0).Expand().Border(wx.ALL, 3))

    def create_rotation_column(self, parent, rotation_names):
        column_panel = wx.Panel(parent, style=wx.SIMPLE_BORDER)
        column_panel_sizer = wx.FlexGridSizer(cols=2)
        column_panel_sizer.AddGrowableCol(1)
        column_panel.SetSizer(column_panel_sizer)
        column_panel.SetAutoLayout(1)

        for rotation_name in rotation_names:
            self.rotation_labels[rotation_name] = wx.StaticText(
                column_panel, label=rotation_name, style=wx.ALIGN_RIGHT)
            column_panel_sizer.Add(self.rotation_labels[rotation_name],
                                   wx.SizerFlags(1).Expand().Border(wx.ALL, 3))

            self.rotation_value_labels[rotation_name] = wx.TextCtrl(
                column_panel, style=wx.TE_RIGHT)
            self.rotation_value_labels[rotation_name].SetValue("0.00")
            self.rotation_value_labels[rotation_name].Disable()
            column_panel_sizer.Add(self.rotation_value_labels[rotation_name],
                                   wx.SizerFlags(1).Expand().Border(wx.ALL, 3))

        column_panel.GetSizer().Fit(column_panel)
        return column_panel

    def paint_capture_panel(self, event: wx.Event):
        self.update_capture_panel(event)

    def update_capture_panel(self, event: wx.Event):
        data = self.ifacialmocap_pose
        for rotation_name in ROTATION_NAMES:
            value = data[rotation_name]
            self.rotation_value_labels[rotation_name].SetValue("%0.2f" % value)

    @staticmethod
    def convert_to_100(x):
        return int(max(0.0, min(1.0, x)) * 100)

    def paint_source_image_panel(self, event: wx.Event):
        wx.BufferedPaintDC(self.source_image_panel, self.source_image_bitmap)

    def update_source_image_bitmap(self):
        dc = wx.MemoryDC()
        dc.SelectObject(self.source_image_bitmap)
        if self.wx_source_image is None:
            self.draw_nothing_yet_string(dc)
        else:
            dc.Clear()
            dc.DrawBitmap(self.wx_source_image, 0, 0, True)
        del dc

    def draw_nothing_yet_string(self, dc):
        dc.Clear()
        font = wx.Font(wx.FontInfo(14).Family(wx.FONTFAMILY_SWISS))
        dc.SetFont(font)
        w, h = dc.GetTextExtent("Nothing yet!")
        dc.DrawText("Nothing yet!", (MainFrame.IMAGE_SIZE - w) // 2, (MainFrame.IMAGE_SIZE - h) // 2)

    def paint_result_image_panel(self, event: wx.Event):
        wx.BufferedPaintDC(self.result_image_panel, self.result_image_bitmap)

    def update_result_image_bitmap(self, event: Optional[wx.Event] = None):
        ifacialmocap_pose = self.read_ifacialmocap_pose()
        current_pose = self.pose_converter.convert(ifacialmocap_pose)
        if self.last_pose is not None and self.last_pose == current_pose:
            return
        self.last_pose = current_pose

        if self.torch_source_image is None or self.poser is None:
            dc = wx.MemoryDC()
            dc.SelectObject(self.result_image_bitmap)
            self.draw_nothing_yet_string(dc)
            del dc
            return

        pose = torch.tensor(current_pose, device=self.device, dtype=self.poser.get_dtype())

        with torch.no_grad():
            output_image = self.poser.pose(self.torch_source_image, pose)[0].float()
            output_image = torch.clip((output_image + 1.0) / 2.0, 0.0, 1.0)
            output_image = convert_linear_to_srgb(output_image)

            background_choice = self.output_background_choice.GetSelection()
            if background_choice == 0:
                pass
            else:
                background = torch.zeros(4, output_image.shape[1], output_image.shape[2], device=self.device)
                background[3, :, :] = 1.0
                if background_choice == 1:
                    background[1, :, :] = 1.0
                    output_image = self.blend_with_background(output_image, background)
                elif background_choice == 2:
                    background[2, :, :] = 1.0
                    output_image = self.blend_with_background(output_image, background)
                elif background_choice == 3:
                    output_image = self.blend_with_background(output_image, background)
                else:
                    background[0:3, :, :] = 1.0
                    output_image = self.blend_with_background(output_image, background)

            c, h, w = output_image.shape
            output_image = 255.0 * torch.transpose(output_image.reshape(c, h * w), 0, 1).reshape(h, w, c)
            output_image = output_image.byte()

        numpy_image = output_image.detach().cpu().numpy()
        wx_image = wx.ImageFromBuffer(numpy_image.shape[0],
                                      numpy_image.shape[1],
                                      numpy_image[:, :, 0:3].tobytes(),
                                      numpy_image[:, :, 3].tobytes())
        wx_bitmap = wx_image.ConvertToBitmap()

        dc = wx.MemoryDC()
        dc.SelectObject(self.result_image_bitmap)
        dc.Clear()
        dc.DrawBitmap(wx_bitmap,
                      (MainFrame.IMAGE_SIZE - numpy_image.shape[0]) // 2,
                      (MainFrame.IMAGE_SIZE - numpy_image.shape[1]) // 2, True)
        del dc

        time_now = time.time_ns()
        if self.last_update_time is not None:
            elapsed_time = time_now - self.last_update_time
            fps = 1.0 / (elapsed_time / 10 ** 9)
            if self.torch_source_image is not None:
                self.fps_statistics.add_fps(fps)
            self.fps_text.SetLabelText("FPS = %0.2f" % self.fps_statistics.get_average_fps())
        self.last_update_time = time_now

        self.Refresh()

    def blend_with_background(self, numpy_image, background):
        alpha = numpy_image[3:4, :, :]
        color = numpy_image[0:3, :, :]
        new_color = color * alpha + (1.0 - alpha) * background[0:3, :, :]
        return torch.cat([new_color, background[3:4, :, :]], dim=0)

    def load_model(self, event: wx.Event):
        dir_name = "data/character_models"
        file_dialog = wx.FileDialog(self, "Choose a model", dir_name, "", "*.yaml", wx.FD_OPEN)
        if file_dialog.ShowModal() == wx.ID_OK:
            character_model_json_file_name = os.path.join(file_dialog.GetDirectory(), file_dialog.GetFilename())
            try:
                self.character_model = CharacterModel.load(character_model_json_file_name)
                self.torch_source_image = self.character_model.get_character_image(self.device)
                pil_image = resize_PIL_image(
                    PIL.Image.open(self.character_model.character_image_file_name),
                    (MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE))
                w, h = pil_image.size
                self.wx_source_image = wx.Bitmap.FromBufferRGBA(w, h, pil_image.convert("RGBA").tobytes())
                self.update_source_image_bitmap()
                self.poser = self.character_model.get_poser(self.device)
            except Exception:
                message_dialog = wx.MessageDialog(
                    self, "Could not load character model " + character_model_json_file_name, "Poser", wx.OK)
                message_dialog.ShowModal()
                message_dialog.Destroy()
        file_dialog.Destroy()
        self.Refresh()


if __name__ == "__main__":
    device = torch.device('cuda:0')

    pose_converter = create_ifacialmocap_pose_converter()

    app = wx.App()
    main_frame = MainFrame(pose_converter, device)
    main_frame.Show(True)
    main_frame.capture_timer.Start(10)
    main_frame.animation_timer.Start(10)
    app.MainLoop()


================================================
FILE: src/tha4/app/character_model_manual_poser.py
================================================
import logging
import os
import sys
import time
from typing import List

from tha4.charmodel.character_model import CharacterModel
from tha4.image_util import resize_PIL_image, convert_output_image_from_torch_to_numpy
from tha4.poser.modes.mode_14 import get_pose_parameters

sys.path.append(os.getcwd())

import PIL.Image
import torch
import wx

from tha4.poser.poser import PoseParameterCategory, PoseParameterGroup


class MorphCategoryControlPanel(wx.Panel):
    def __init__(self,
                 parent,
                 title: str,
                 pose_param_category: PoseParameterCategory,
                 param_groups: List[PoseParameterGroup]):
        super().__init__(parent, style=wx.SIMPLE_BORDER)
        self.pose_param_category = pose_param_category
        self.sizer = wx.BoxSizer(wx.VERTICAL)
        self.SetSizer(self.sizer)
        self.SetAutoLayout(1)

        title_text = wx.StaticText(self, label=title, style=wx.ALIGN_CENTER)
        self.sizer.Add(title_text, 0, wx.EXPAND)

        self.param_groups = [group for group in param_groups if group.get_category() == pose_param_category]
        self.choice = wx.Choice(self, choices=[group.get_group_name() for group in self.param_groups])
        if len(self.param_groups) > 0:
            self.choice.SetSelection(0)
        self.choice.Bind(wx.EVT_CHOICE, self.on_choice_updated)
        self.sizer.Add(self.choice, 0, wx.EXPAND)

        self.left_slider = wx.Slider(self, minValue=-1000, maxValue=1000, value=-1000, style=wx.HORIZONTAL)
        self.sizer.Add(self.left_slider, 0, wx.EXPAND)

        self.right_slider = wx.Slider(self, minValue=-1000, maxValue=1000, value=-1000, style=wx.HORIZONTAL)
        self.sizer.Add(self.right_slider, 0, wx.EXPAND)

        self.checkbox = wx.CheckBox(self, label="Show")
        self.checkbox.SetValue(True)
        self.sizer.Add(self.checkbox, 0, wx.SHAPED | wx.ALIGN_CENTER)

        self.update_ui()

        self.sizer.Fit(self)

    def update_ui(self):
        param_group = self.param_groups[self.choice.GetSelection()]
        if param_group.is_discrete():
            self.left_slider.Enable(False)
            self.right_slider.Enable(False)
            self.checkbox.Enable(True)
        elif param_group.get_arity() == 1:
            self.left_slider.Enable(True)
            self.right_slider.Enable(False)
            self.checkbox.Enable(False)
        else:
            self.left_slider.Enable(True)
            self.right_slider.Enable(True)
            self.checkbox.Enable(False)

    def on_choice_updated(self, event: wx.Event):
        param_group = self.param_groups[self.choice.GetSelection()]
        if param_group.is_discrete():
            self.checkbox.SetValue(True)
        self.update_ui()

    def set_param_value(self, pose: List[float]):
        if len(self.param_groups) == 0:
            return
        selected_morph_index = self.choice.GetSelection()
        param_group = self.param_groups[selected_morph_index]
        param_index = param_group.get_parameter_index()
        if param_group.is_discrete():
            if self.checkbox.GetValue():
                for i in range(param_group.get_arity()):
                    pose[param_index + i] = 1.0
        else:
            param_range = param_group.get_range()
            alpha = (self.left_slider.GetValue() + 1000) / 2000.0
            pose[param_index] = param_range[0] + (param_range[1] - param_range[0]) * alpha
            if param_group.get_arity() == 2:
                alpha = (self.right_slider.GetValue() + 1000) / 2000.0
                pose[param_index + 1] = param_range[0] + (param_range[1] - param_range[0]) * alpha


class SimpleParamGroupsControlPanel(wx.Panel):
    def __init__(self, parent,
                 pose_param_category: PoseParameterCategory,
                 param_groups: List[PoseParameterGroup]):
        super().__init__(parent, style=wx.SIMPLE_BORDER)
        self.sizer = wx.BoxSizer(wx.VERTICAL)
        self.SetSizer(self.sizer)
        self.SetAutoLayout(1)

        self.param_groups = [group for group in param_groups if group.get_category() == pose_param_category]
        for param_group in self.param_groups:
            assert not param_group.is_discrete()
            assert param_group.get_arity() == 1

        self.sliders = []
        for param_group in self.param_groups:
            static_text = wx.StaticText(
                self,
                label="   ------------ %s ------------   " % param_group.get_group_name(), style=wx.ALIGN_CENTER)
            self.sizer.Add(static_text, 0, wx.EXPAND)
            range = param_group.get_range()
            min_value = int(range[0] * 1000)
            max_value = int(range[1] * 1000)
            slider = wx.Slider(self, minValue=min_value, maxValue=max_value, value=0, style=wx.HORIZONTAL)
            self.sizer.Add(slider, 0, wx.EXPAND)
            self.sliders.append(slider)

        self.sizer.Fit(self)

    def set_param_value(self, pose: List[float]):
        if len(self.param_groups) == 0:
            return
        for param_group_index in range(len(self.param_groups)):
            param_group = self.param_groups[param_group_index]
            slider = self.sliders[param_group_index]
            param_range = param_group.get_range()
            param_index = param_group.get_parameter_index()
            alpha = (slider.GetValue() - slider.GetMin()) * 1.0 / (slider.GetMax() - slider.GetMin())
            pose[param_index] = param_range[0] + (param_range[1] - param_range[0]) * alpha


class MainFrame(wx.Frame):
    IMAGE_SIZE = 512
    OUTPUT_LENGTH = 6
    NUM_PARAMETERS = 45

    def __init__(self, device: torch.device):
        super().__init__(None, wx.ID_ANY, "Poser")
        self.poser = None
        self.device = device

        self.wx_source_image = None
        self.torch_source_image = None

        self.main_sizer = wx.BoxSizer(wx.HORIZONTAL)
        self.SetSizer(self.main_sizer)
        self.SetAutoLayout(1)
        self.init_left_panel()
        self.init_control_panel()
        self.init_right_panel()
        self.main_sizer.Fit(self)

        self.timer = wx.Timer(self, wx.ID_ANY)
        self.Bind(wx.EVT_TIMER, self.update_images, self.timer)

        save_image_id = wx.NewIdRef()
        self.Bind(wx.EVT_MENU, self.on_save_image, id=save_image_id)
        accelerator_table = wx.AcceleratorTable([
            (wx.ACCEL_CTRL, ord('S'), save_image_id)
        ])
        self.SetAcceleratorTable(accelerator_table)

        self.last_pose = None
        self.last_output_index = self.output_index_choice.GetSelection()
        self.last_output_numpy_image = None

        self.wx_source_image = None
        self.torch_source_image = None
        self.source_image_bitmap = wx.Bitmap(MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE)
        self.result_image_bitmap = wx.Bitmap(MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE)
        self.source_image_dirty = True

    def init_left_panel(self):
        self.control_panel = wx.Panel(self, style=wx.SIMPLE_BORDER, size=(MainFrame.IMAGE_SIZE, -1))
        self.left_panel = wx.Panel(self, style=wx.SIMPLE_BORDER)
        left_panel_sizer = wx.BoxSizer(wx.VERTICAL)
        self.left_panel.SetSizer(left_panel_sizer)
        self.left_panel.SetAutoLayout(1)

        self.source_image_panel = wx.Panel(self.left_panel, size=(MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE),
                                           style=wx.SIMPLE_BORDER)
        self.source_image_panel.Bind(wx.EVT_PAINT, self.paint_source_image_panel)
        self.source_image_panel.Bind(wx.EVT_ERASE_BACKGROUND, self.on_erase_background)
        left_panel_sizer.Add(self.source_image_panel, 0, wx.FIXED_MINSIZE)

        self.load_model_button = wx.Button(self.left_panel, wx.ID_ANY, "\nLoad Model\n\n")
        left_panel_sizer.Add(self.load_model_button, 1, wx.EXPAND)
        self.load_model_button.Bind(wx.EVT_BUTTON, self.load_model)

        left_panel_sizer.Fit(self.left_panel)
        self.main_sizer.Add(self.left_panel, 0, wx.FIXED_MINSIZE)

    def on_erase_background(self, event: wx.Event):
        pass

    def init_control_panel(self):
        self.control_panel_sizer = wx.BoxSizer(wx.VERTICAL)
        self.control_panel.SetSizer(self.control_panel_sizer)
        self.control_panel.SetMinSize(wx.Size(256, 1))

        morph_categories = [
            PoseParameterCategory.EYEBROW,
            PoseParameterCategory.EYE,
            PoseParameterCategory.MOUTH,
            PoseParameterCategory.IRIS_MORPH
        ]
        morph_category_titles = {
            PoseParameterCategory.EYEBROW: "   ------------ Eyebrow ------------   ",
            PoseParameterCategory.EYE: "   ------------ Eye ------------   ",
            PoseParameterCategory.MOUTH: "   ------------ Mouth ------------   ",
            PoseParameterCategory.IRIS_MORPH: "   ------------ Iris morphs ------------   ",
        }
        self.morph_control_panels = {}
        param_groups = get_pose_parameters().get_pose_parameter_groups()
        for category in morph_categories:
            filtered_param_groups = [group for group in param_groups if group.get_category() == category]
            if len(filtered_param_groups) == 0:
                continue
            control_panel = MorphCategoryControlPanel(
                self.control_panel,
                morph_category_titles[category],
                category,
                param_groups)
            self.morph_control_panels[category] = control_panel
            self.control_panel_sizer.Add(control_panel, 0, wx.EXPAND)

        self.non_morph_control_panels = {}
        non_morph_categories = [
            PoseParameterCategory.IRIS_ROTATION,
            PoseParameterCategory.FACE_ROTATION,
            PoseParameterCategory.BODY_ROTATION,
            PoseParameterCategory.BREATHING
        ]
        for category in non_morph_categories:
            filtered_param_groups = [group for group in param_groups if group.get_category() == category]
            if len(filtered_param_groups) == 0:
                continue
            control_panel = SimpleParamGroupsControlPanel(
                self.control_panel,
                category,
                param_groups)
            self.non_morph_control_panels[category] = control_panel
            self.control_panel_sizer.Add(control_panel, 0, wx.EXPAND)

        self.control_panel_sizer.Fit(self.control_panel)
        self.main_sizer.Add(self.control_panel, 1, wx.FIXED_MINSIZE)

    def init_right_panel(self):
        self.right_panel = wx.Panel(self, style=wx.SIMPLE_BORDER)
        right_panel_sizer = wx.BoxSizer(wx.VERTICAL)
        self.right_panel.SetSizer(right_panel_sizer)
        self.right_panel.SetAutoLayout(1)

        self.result_image_panel = wx.Panel(self.right_panel,
                                           size=(MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE),
                                           style=wx.SIMPLE_BORDER)
        self.result_image_panel.Bind(wx.EVT_PAINT, self.paint_result_image_panel)
        self.result_image_panel.Bind(wx.EVT_ERASE_BACKGROUND, self.on_erase_background)
        self.output_index_choice = wx.Choice(
            self.right_panel,
            choices=[str(i) for i in range(MainFrame.OUTPUT_LENGTH)])
        self.output_index_choice.SetSelection(0)
        right_panel_sizer.Add(self.result_image_panel, 0, wx.FIXED_MINSIZE)
        right_panel_sizer.Add(self.output_index_choice, 0, wx.EXPAND)

        self.save_image_button = wx.Button(self.right_panel, wx.ID_ANY, "\nSave Image\n\n")
        right_panel_sizer.Add(self.save_image_button, 1, wx.EXPAND)
        self.save_image_button.Bind(wx.EVT_BUTTON, self.on_save_image)

        right_panel_sizer.Fit(self.right_panel)
        self.main_sizer.Add(self.right_panel, 0, wx.FIXED_MINSIZE)

    def create_param_category_choice(self, param_category: PoseParameterCategory):
        params = []
        for param_group in self.poser.get_pose_parameter_groups():
            if param_group.get_category() == param_category:
                params.append(param_group.get_group_name())
        choice = wx.Choice(self.control_panel, choices=params)
        if len(params) > 0:
            choice.SetSelection(0)
        return choice

    def load_model(self, event: wx.Event):
        dir_name = "data/character_models"
        file_dialog = wx.FileDialog(self, "Choose a model", dir_name, "", "*.yaml", wx.FD_OPEN)
        if file_dialog.ShowModal() == wx.ID_OK:
            character_model_file_name = os.path.join(file_dialog.GetDirectory(), file_dialog.GetFilename())
            try:
                self.character_model = CharacterModel.load(character_model_file_name)
                self.torch_source_image = self.character_model.get_character_image(self.device)
                pil_image = resize_PIL_image(
                    PIL.Image.open(self.character_model.character_image_file_name),
                    (MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE))
                w, h = pil_image.size
                self.wx_source_image = wx.Bitmap.FromBufferRGBA(w, h, pil_image.convert("RGBA").tobytes())
                self.poser = self.character_model.get_poser(self.device)
                self.source_image_dirty = True
                self.Refresh()
                self.Update()
            except RuntimeError as e:
                message_dialog = wx.MessageDialog(
                    self, "Could not load character model " + character_model_file_name, "Poser", wx.OK)
                message_dialog.ShowModal()
                message_dialog.Destroy()
        file_dialog.Destroy()

    def paint_source_image_panel(self, event: wx.Event):
        wx.BufferedPaintDC(self.source_image_panel, self.source_image_bitmap)

    def paint_result_image_panel(self, event: wx.Event):
        wx.BufferedPaintDC(self.result_image_panel, self.result_image_bitmap)

    def draw_nothing_yet_string_to_bitmap(self, bitmap):
        dc = wx.MemoryDC()
        dc.SelectObject(bitmap)

        dc.Clear()
        font = wx.Font(wx.FontInfo(14).Family(wx.FONTFAMILY_SWISS))
        dc.SetFont(font)
        w, h = dc.GetTextExtent("Nothing yet!")
        dc.DrawText("Nothing yet!", (MainFrame.IMAGE_SIZE - w) // 2, (MainFrame.IMAGE_SIZE - - h) // 2)

        del dc

    def get_current_pose(self):
        current_pose = [0.0 for i in range(MainFrame.NUM_PARAMETERS)]
        for morph_control_panel in self.morph_control_panels.values():
            morph_control_panel.set_param_value(current_pose)
        for rotation_control_panel in self.non_morph_control_panels.values():
            rotation_control_panel.set_param_value(current_pose)
        return current_pose

    def update_images(self, event: wx.Event):
        current_pose = self.get_current_pose()
        if not self.source_image_dirty \
                and self.last_pose is not None \
                and self.last_pose == current_pose \
                and self.last_output_index == self.output_index_choice.GetSelection():
            return
        self.last_pose = current_pose
        self.last_output_index = self.output_index_choice.GetSelection()

        if self.torch_source_image is None or self.poser is None:
            self.draw_nothing_yet_string_to_bitmap(self.source_image_bitmap)
            self.draw_nothing_yet_string_to_bitmap(self.result_image_bitmap)
            self.source_image_dirty = False
            self.Refresh()
            self.Update()
            return

        if self.source_image_dirty:
            dc = wx.MemoryDC()
            dc.SelectObject(self.source_image_bitmap)
            dc.Clear()
            dc.DrawBitmap(self.wx_source_image, 0, 0)
            self.source_image_dirty = False

        pose = torch.tensor(current_pose, device=self.device)
        output_index = self.output_index_choice.GetSelection()
        with torch.no_grad():
            start_cuda_event = torch.cuda.Event(enable_timing=True)
            end_cuda_event = torch.cuda.Event(enable_timing=True)
            start_cuda_event.record()
            start_time = time.time()

            output_image = self.poser.pose(self.torch_source_image, pose, output_index)[0].detach().cpu()

            end_time = time.time()
            end_cuda_event.record()
            torch.cuda.synchronize()
            print("cuda time (ms):", start_cuda_event.elapsed_time(end_cuda_event))
            print("elapsed time (ms):", (end_time - start_time) * 1000.0)

        numpy_image = convert_output_image_from_torch_to_numpy(output_image)
        self.last_output_numpy_image = numpy_image
        wx_image = wx.ImageFromBuffer(
            numpy_image.shape[0],
            numpy_image.shape[1],
            numpy_image[:, :, 0:3].tobytes(),
            numpy_image[:, :, 3].tobytes())
        wx_bitmap = wx_image.ConvertToBitmap()

        dc = wx.MemoryDC()
        dc.SelectObject(self.result_image_bitmap)
        dc.Clear()
        dc.DrawBitmap(wx_bitmap,
                      (MainFrame.IMAGE_SIZE - numpy_image.shape[0]) // 2,
                      (MainFrame.IMAGE_SIZE - numpy_image.shape[1]) // 2,
                      True)
        del dc

        self.Refresh()
        self.Update()

    def on_save_image(self, event: wx.Event):
        if self.last_output_numpy_image is None:
            logging.info("There is no output image to save!!!")
            return

        dir_name = "data/images"
        file_dialog = wx.FileDialog(self, "Choose an image", dir_name, "", "*.png", wx.FD_SAVE)
        if file_dialog.ShowModal() == wx.ID_OK:
            image_file_name = os.path.join(file_dialog.GetDirectory(), file_dialog.GetFilename())
            try:
                if os.path.exists(image_file_name):
                    message_dialog = wx.MessageDialog(self, f"Override {image_file_name}", "Manual Poser",
                                                      wx.YES_NO | wx.ICON_QUESTION)
                    result = message_dialog.ShowModal()
                    if result == wx.ID_YES:
                        self.save_last_numpy_image(image_file_name)
                    message_dialog.Destroy()
                else:
                    self.save_last_numpy_image(image_file_name)
            except:
                message_dialog = wx.MessageDialog(self, f"Could not save {image_file_name}", "Manual Poser", wx.OK)
                message_dialog.ShowModal()
                message_dialog.Destroy()
        file_dialog.Destroy()

    def save_last_numpy_image(self, image_file_name):
        numpy_image = self.last_output_numpy_image
        pil_image = PIL.Image.fromarray(numpy_image, mode='RGBA')
        os.makedirs(os.path.dirname(image_file_name), exist_ok=True)
        pil_image.save(image_file_name)


if __name__ == "__main__":
    device = torch.device('cuda:0')
    app = wx.App()
    main_frame = MainFrame(device)
    main_frame.Show(True)
    main_frame.timer.Start(16)
    app.MainLoop()


================================================
FILE: src/tha4/app/character_model_mediapipe_puppeteer.py
================================================
import os
import sys
import threading
import time
from typing import Optional
import PIL.Image

import cv2
import mediapipe
from scipy.spatial.transform import Rotation

from tha4.shion.base.image_util import resize_PIL_image
from tha4.charmodel.character_model import CharacterModel
from tha4.image_util import convert_linear_to_srgb
from tha4.mocap.mediapipe_constants import HEAD_ROTATIONS, HEAD_X, HEAD_Y, HEAD_Z
from tha4.mocap.mediapipe_face_pose import MediaPipeFacePose
from tha4.mocap.mediapipe_face_pose_converter_00 import MediaPoseFacePoseConverter00

sys.path.append(os.getcwd())

import torch
import wx


class FpsStatistics:
    def __init__(self):
        self.count = 100
        self.fps = []

    def add_fps(self, fps):
        self.fps.append(fps)
        while len(self.fps) > self.count:
            del self.fps[0]

    def get_average_fps(self):
        if len(self.fps) == 0:
            return 0.0
        else:
            return sum(self.fps) / len(self.fps)


class MainFrame(wx.Frame):
    IMAGE_SIZE = 512

    def __init__(self,
                 pose_converter: MediaPoseFacePoseConverter00,
                 video_capture,
                 face_landmarker,
                 device: torch.device):
        super().__init__(None, wx.ID_ANY, "THA4 Character Model MediaPipe Puppeteer")
        self.face_landmarker = face_landmarker
        self.video_capture = video_capture
        self.pose_converter = pose_converter
        self.device = device

        self.source_image_bitmap = wx.Bitmap(MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE)
        self.result_image_bitmap = wx.Bitmap(MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE)
        self.webcam_capture_bitmap = wx.Bitmap(256, 192)
        self.wx_source_image = None
        self.torch_source_image = None
        self.last_pose = None
        self.mediapipe_face_pose = None
        self.fps_statistics = FpsStatistics()
        self.last_update_time = None
        self.character_model = None
        self.poser = None

        self.create_ui()
        self.create_timers()
        self.Bind(wx.EVT_CLOSE, self.on_close)

        self.update_source_image_bitmap()
        self.update_result_image_bitmap()

    def create_timers(self):
        self.capture_timer = wx.Timer(self, wx.ID_ANY)
        self.Bind(wx.EVT_TIMER, self.update_capture_panel, id=self.capture_timer.GetId())
        self.animation_timer = wx.Timer(self, wx.ID_ANY)
        self.Bind(wx.EVT_TIMER, self.update_result_image_bitmap, id=self.animation_timer.GetId())

    def on_close(self, event: wx.Event):
        # Stop the timers
        self.animation_timer.Stop()
        self.capture_timer.Stop()

        # Destroy the windows
        self.Destroy()
        event.Skip()

    def on_erase_background(self, event: wx.Event):
        pass

    def create_animation_panel(self, parent):
        self.animation_panel = wx.Panel(parent, style=wx.RAISED_BORDER)
        self.animation_panel_sizer = wx.BoxSizer(wx.HORIZONTAL)
        self.animation_panel.SetSizer(self.animation_panel_sizer)
        self.animation_panel.SetAutoLayout(1)

        image_size = MainFrame.IMAGE_SIZE

        if True:
            self.input_panel = wx.Panel(self.animation_panel, size=(image_size, image_size + 128),
                                        style=wx.SIMPLE_BORDER)
            self.input_panel_sizer = wx.BoxSizer(wx.VERTICAL)
            self.input_panel.SetSizer(self.input_panel_sizer)
            self.input_panel.SetAutoLayout(1)
            self.animation_panel_sizer.Add(self.input_panel, 0, wx.FIXED_MINSIZE)

            self.source_image_panel = wx.Panel(self.input_panel, size=(image_size, image_size), style=wx.SIMPLE_BORDER)
            self.source_image_panel.Bind(wx.EVT_PAINT, self.paint_source_image_panel)
            self.source_image_panel.Bind(wx.EVT_ERASE_BACKGROUND, self.on_erase_background)
            self.input_panel_sizer.Add(self.source_image_panel, 0, wx.FIXED_MINSIZE)

            self.load_model_button = wx.Button(self.input_panel, wx.ID_ANY, "Load Model")
            self.input_panel_sizer.Add(self.load_model_button, 1, wx.EXPAND)
            self.load_model_button.Bind(wx.EVT_BUTTON, self.load_model)

            self.input_panel_sizer.Fit(self.input_panel)

        if True:
            def current_pose_supplier() -> Optional[MediaPipeFacePose]:
                return self.mediapipe_face_pose

            self.pose_converter.init_pose_converter_panel(self.animation_panel, current_pose_supplier)

        if True:
            self.animation_left_panel = wx.Panel(self.animation_panel, style=wx.SIMPLE_BORDER)
            self.animation_left_panel_sizer = wx.BoxSizer(wx.VERTICAL)
            self.animation_left_panel.SetSizer(self.animation_left_panel_sizer)
            self.animation_left_panel.SetAutoLayout(1)
            self.animation_panel_sizer.Add(self.animation_left_panel, 0, wx.EXPAND)

            self.result_image_panel = wx.Panel(self.animation_left_panel, size=(image_size, image_size),
                                               style=wx.SIMPLE_BORDER)
            self.result_image_panel.Bind(wx.EVT_PAINT, self.paint_result_image_panel)
            self.result_image_panel.Bind(wx.EVT_ERASE_BACKGROUND, self.on_erase_background)
            self.animation_left_panel_sizer.Add(self.result_image_panel, 0, wx.FIXED_MINSIZE)

            separator = wx.StaticLine(self.animation_left_panel, -1, size=(256, 5))
            self.animation_left_panel_sizer.Add(separator, 0, wx.EXPAND)

            background_text = wx.StaticText(self.animation_left_panel, label="--- Background ---",
                                            style=wx.ALIGN_CENTER)
            self.animation_left_panel_sizer.Add(background_text, 0, wx.EXPAND)

            self.output_background_choice = wx.Choice(
                self.animation_left_panel,
                choices=[
                    "TRANSPARENT",
                    "GREEN",
                    "BLUE",
                    "BLACK",
                    "WHITE"
                ])
            self.output_background_choice.SetSelection(0)
            self.animation_left_panel_sizer.Add(self.output_background_choice, 0, wx.EXPAND)

            separator = wx.StaticLine(self.animation_left_panel, -1, size=(256, 5))
            self.animation_left_panel_sizer.Add(separator, 0, wx.EXPAND)

            self.fps_text = wx.StaticText(self.animation_left_panel, label="")
            self.animation_left_panel_sizer.Add(self.fps_text, wx.SizerFlags().Border())

            self.animation_left_panel_sizer.Fit(self.animation_left_panel)

        self.animation_panel_sizer.Fit(self.animation_panel)

    def create_ui(self):
        self.main_sizer = wx.BoxSizer(wx.VERTICAL)
        self.SetSizer(self.main_sizer)
        self.SetAutoLayout(1)

        self.capture_pose_lock = threading.Lock()

        self.create_animation_panel(self)
        self.main_sizer.Add(self.animation_panel, wx.SizerFlags(0).Expand().Border(wx.ALL, 5))

        self.create_capture_panel(self)
        self.main_sizer.Add(self.capture_panel, wx.SizerFlags(0).Expand().Border(wx.ALL, 5))

        self.main_sizer.Fit(self)

    def create_capture_panel(self, parent):
        self.capture_panel = wx.Panel(parent, style=wx.RAISED_BORDER)
        self.capture_panel_sizer = wx.BoxSizer(wx.HORIZONTAL)
        self.capture_panel.SetSizer(self.capture_panel_sizer)
        self.capture_panel.SetAutoLayout(1)

        self.webcam_capture_panel = wx.Panel(self.capture_panel, size=(256, 192), style=wx.SIMPLE_BORDER)
        self.webcam_capture_panel.Bind(wx.EVT_PAINT, self.paint_webcam_capture_panel)
        self.webcam_capture_panel.Bind(wx.EVT_ERASE_BACKGROUND, self.on_erase_background)
        self.capture_panel_sizer.Add(self.webcam_capture_panel, wx.SizerFlags(0).FixedMinSize().Border(wx.ALL, 5))

        self.rotation_labels = {}
        self.rotation_value_labels = {}
        rotation_column = self.create_rotation_column(self.capture_panel, HEAD_ROTATIONS)
        self.capture_panel_sizer.Add(rotation_column, wx.SizerFlags(0).Expand().Border(wx.ALL, 3))

    def paint_webcam_capture_panel(self, event: wx.Event):
        wx.BufferedPaintDC(self.webcam_capture_panel, self.webcam_capture_bitmap)

    def create_rotation_column(self, parent, rotation_names):
        column_panel = wx.Panel(parent, style=wx.SIMPLE_BORDER)
        column_panel_sizer = wx.FlexGridSizer(cols=2)
        column_panel_sizer.AddGrowableCol(1)
        column_panel.SetSizer(column_panel_sizer)
        column_panel.SetAutoLayout(1)

        for rotation_name in rotation_names:
            self.rotation_labels[rotation_name] = wx.StaticText(
                column_panel, label=rotation_name, style=wx.ALIGN_RIGHT)
            column_panel_sizer.Add(self.rotation_labels[rotation_name],
                                   wx.SizerFlags(1).Expand().Border(wx.ALL, 3))

            self.rotation_value_labels[rotation_name] = wx.TextCtrl(
                column_panel, style=wx.TE_RIGHT)
            self.rotation_value_labels[rotation_name].SetValue("0.00")
            self.rotation_value_labels[rotation_name].Disable()
            column_panel_sizer.Add(self.rotation_value_labels[rotation_name],
                                   wx.SizerFlags(1).Expand().Border(wx.ALL, 3))

        column_panel.GetSizer().Fit(column_panel)
        return column_panel

    def update_capture_panel(self, event: wx.Event):
        there_is_frame, frame = self.video_capture.read()
        if not there_is_frame:
            dc = wx.MemoryDC()
            dc.SelectObject(self.webcam_capture_bitmap)
            self.draw_nothing_yet_string(dc)
            del dc
            return

        rgb_frame = cv2.flip(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), 1)
        resized_frame = cv2.resize(rgb_frame, (256, 192))
        wx_image = wx.ImageFromBuffer(256, 192, resized_frame.tobytes())
        wx_bitmap = wx_image.ConvertToBitmap()

        dc = wx.MemoryDC()
        dc.SelectObject(self.webcam_capture_bitmap)
        dc.Clear()
        dc.DrawBitmap(wx_bitmap, 0, 0, True)
        del dc

        self.webcam_capture_panel.Refresh()

        time_ms = int(time.time() * 1000)
        mediapipe_image = mediapipe.Image(image_format=mediapipe.ImageFormat.SRGB, data=rgb_frame)
        detection_result = self.face_landmarker.detect_for_video(mediapipe_image, time_ms)
        self.update_mediapipe_face_pose(detection_result)

    def update_mediapipe_face_pose(self, detection_result):
        if len(detection_result.facial_transformation_matrixes) == 0:
            return

        xform_matrix = detection_result.facial_transformation_matrixes[0]
        blendshape_params = {}
        for item in detection_result.face_blendshapes[0]:
            blendshape_params[item.category_name] = item.score
        M = xform_matrix[0:3, 0:3]
        rot = Rotation.from_matrix(M)
        euler_angles = rot.as_euler('xyz', degrees=True)

        self.rotation_value_labels[HEAD_X].SetValue("%0.2f" % euler_angles[0])
        self.rotation_value_labels[HEAD_X].Refresh()
        self.rotation_value_labels[HEAD_Y].SetValue("%0.2f" % euler_angles[1])
        self.rotation_value_labels[HEAD_Y].Refresh()
        self.rotation_value_labels[HEAD_Z].SetValue("%0.2f" % euler_angles[2])
        self.rotation_value_labels[HEAD_Z].Refresh()

        self.mediapipe_face_pose = MediaPipeFacePose(blendshape_params, xform_matrix)

    @staticmethod
    def convert_to_100(x):
        return int(max(0.0, min(1.0, x)) * 100)

    def paint_source_image_panel(self, event: wx.Event):
        wx.BufferedPaintDC(self.source_image_panel, self.source_image_bitmap)

    def update_source_image_bitmap(self):
        dc = wx.MemoryDC()
        dc.SelectObject(self.source_image_bitmap)
        if self.wx_source_image is None:
            self.draw_nothing_yet_string(dc)
        else:
            dc.Clear()
            dc.DrawBitmap(self.wx_source_image, 0, 0, True)
        del dc

    def draw_nothing_yet_string(self, dc):
        dc.Clear()
        font = wx.Font(wx.FontInfo(14).Family(wx.FONTFAMILY_SWISS))
        dc.SetFont(font)
        w, h = dc.GetTextExtent("Nothing yet!")
        dc.DrawText("Nothing yet!", (MainFrame.IMAGE_SIZE - w) // 2, (MainFrame.IMAGE_SIZE - h) // 2)

    def paint_result_image_panel(self, event: wx.Event):
        wx.BufferedPaintDC(self.result_image_panel, self.result_image_bitmap)

    def update_result_image_bitmap(self, event: Optional[wx.Event] = None):
        if self.mediapipe_face_pose is None or self.poser is None:
            dc = wx.MemoryDC()
            dc.SelectObject(self.result_image_bitmap)
            self.draw_nothing_yet_string(dc)
            del dc
            return

        current_pose = self.pose_converter.convert(self.mediapipe_face_pose)
        if self.last_pose is not None and self.last_pose == current_pose:
            return
        self.last_pose = current_pose

        if self.torch_source_image is None:
            dc = wx.MemoryDC()
            dc.SelectObject(self.result_image_bitmap)
            self.draw_nothing_yet_string(dc)
            del dc
            return

        pose = torch.tensor(current_pose, device=self.device, dtype=self.poser.get_dtype())

        with torch.no_grad():
            output_image = self.poser.pose(self.torch_source_image, pose)[0].float()
            output_image = torch.clip((output_image + 1.0) / 2.0, 0.0, 1.0)
            output_image = convert_linear_to_srgb(output_image)

            background_choice = self.output_background_choice.GetSelection()
            if background_choice == 0:
                pass
            else:
                background = torch.zeros(4, output_image.shape[1], output_image.shape[2], device=self.device)
                background[3, :, :] = 1.0
                if background_choice == 1:
                    background[1, :, :] = 1.0
                    output_image = self.blend_with_background(output_image, background)
                elif background_choice == 2:
                    background[2, :, :] = 1.0
                    output_image = self.blend_with_background(output_image, background)
                elif background_choice == 3:
                    output_image = self.blend_with_background(output_image, background)
                else:
                    background[0:3, :, :] = 1.0
                    output_image = self.blend_with_background(output_image, background)

            c, h, w = output_image.shape
            output_image = 255.0 * torch.transpose(output_image.reshape(c, h * w), 0, 1).reshape(h, w, c)
            output_image = output_image.byte()

        numpy_image = output_image.detach().cpu().numpy()
        wx_image = wx.ImageFromBuffer(numpy_image.shape[0],
                                      numpy_image.shape[1],
                                      numpy_image[:, :, 0:3].tobytes(),
                                      numpy_image[:, :, 3].tobytes())
        wx_bitmap = wx_image.ConvertToBitmap()

        dc = wx.MemoryDC()
        dc.SelectObject(self.result_image_bitmap)
        dc.Clear()
        dc.DrawBitmap(wx_bitmap,
                      (MainFrame.IMAGE_SIZE - numpy_image.shape[0]) // 2,
                      (MainFrame.IMAGE_SIZE - numpy_image.shape[1]) // 2, True)
        del dc

        time_now = time.time_ns()
        if self.last_update_time is not None:
            elapsed_time = time_now - self.last_update_time
            fps = 1.0 / (elapsed_time / 10 ** 9)
            if self.torch_source_image is not None:
                self.fps_statistics.add_fps(fps)
            self.fps_text.SetLabelText("FPS = %0.2f" % self.fps_statistics.get_average_fps())
        self.last_update_time = time_now

        self.Refresh()

    def blend_with_background(self, numpy_image, background):
        alpha = numpy_image[3:4, :, :]
        color = numpy_image[0:3, :, :]
        new_color = color * alpha + (1.0 - alpha) * background[0:3, :, :]
        return torch.cat([new_color, background[3:4, :, :]], dim=0)

    def load_model(self, event: wx.Event):
        dir_name = "data/character_models"
        file_dialog = wx.FileDialog(self, "Choose a model", dir_name, "", "*.yaml", wx.FD_OPEN)
        if file_dialog.ShowModal() == wx.ID_OK:
            character_model_json_file_name = os.path.join(file_dialog.GetDirectory(), file_dialog.GetFilename())
            try:
                self.character_model = CharacterModel.load(character_model_json_file_name)
                self.torch_source_image = self.character_model.get_character_image(self.device)
                pil_image = resize_PIL_image(
                    PIL.Image.open(self.character_model.character_image_file_name),
                    (MainFrame.IMAGE_SIZE, MainFrame.IMAGE_SIZE))
                w, h = pil_image.size
                self.wx_source_image = wx.Bitmap.FromBufferRGBA(w, h, pil_image.convert("RGBA").tobytes())
                self.update_source_image_bitmap()
                self.poser = self.character_model.get_poser(self.device)
            except Exception:
                message_dialog = wx.MessageDialog(
                    self, "Could not load character model " + character_model_json_file_name, "Poser", wx.OK)
                message_dialog.ShowModal()
                message_dialog.Destroy()
        file_dialog.Destroy()
        self.Refresh()


if __name__ == "__main__":
    device = torch.device("cuda:0")

    pose_converter = MediaPoseFacePoseConverter00()

    face_landmarker_base_options = mediapipe.tasks.BaseOptions(
        model_asset_path='data/thirdparty/mediapipe/face_landmarker_v2_with_blendshapes.task')
    options = mediapipe.tasks.vision.FaceLandmarkerOptions(
        base_options=face_landmarker_base_options,
        running_mode=mediapipe.tasks.vision.RunningMode.VIDEO,
        output_face_blendshapes=True,
        output_facial_transformation_matrixes=True,
        num_faces=1)
    face_landmarker = mediapipe.tasks.vision.FaceLandmarker.create_from_options(options)

    video_capture = cv2.VideoCapture(0)

    app = wx.App()
    main_frame = MainFrame(pose_converter, video_capture, face_landmarker, device)
    main_frame.Show(True)
    main_frame.capture_timer.Start(30)
    main_frame.animation_timer.Start(30)
    app.MainLoop()


================================================
FILE: src/tha4/app/distill.py
================================================
import argparse
import logging

from tha4.distiller.distiller_config import DistillerConfig
from tha4.pytasuku.workspace import Workspace


def run_config(config_file_name: str):
    config = DistillerConfig.load(config_file_name)

    logging.basicConfig(level=logging.INFO, force=True)
    workspace = Workspace()
    config.define_tasks(workspace)

    workspace.start_session()
    workspace.run(f"{config.prefix}/all")
    workspace.end_session()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Training script.')
    parser.add_argument("--config_file", type=str, required=True,
                        help="The name of the config file for the distillation process.")
    args = parser.parse_args()
    run_config(args.config_file)


================================================
FILE: src/tha4/app/distiller_ui.py
================================================
import wx

from tha4.app.distill import run_config
from tha4.distiller.ui.distiller_ui_main_frame import DistillerUiMainFrame

if __name__ == "__main__":
    app = wx.App()
    main_frame = DistillerUiMainFrame()
    main_frame.Show(True)
    app.MainLoop()

    if main_frame.config_file_to_run is not None:
        run_config(main_frame.config_file_to_run)


================================================
FILE: src/tha4/app/full_manual_poser.py
================================================
import logging
import os
import sys
import time
from typing import List

from tha4.shion.base.image_util import extract_pytorch_image_from_PIL_image, pytorch_rgba_to_numpy_image, \
    pytorch_rgb_to_numpy_image
from tha4.image_util import grid_change_to_numpy_image, resize_PIL_image

sys.path.append(os.getcwd())

import PIL.Image
import numpy
import torch
import wx

from tha4.poser.poser import Poser, PoseParameterCategory, PoseParameterGroup


class MorphCategoryControlPanel(wx.Panel):
    def __init__(self,
                 parent,
                 title: str,
                 pose_param_category: PoseParameterCategory,
                 param_groups: List[PoseParameterGroup]):
        super().__init__(parent, style=wx.SIMPLE_BORDER)
        self.pose_param_category = pose_param_category
        self.sizer = wx.BoxSizer(wx.VERTICAL)
        self.SetSizer(self.sizer)
        self.SetAutoLayout(1)

        title_text = wx.StaticText(self, label=title, style=wx.ALIGN_CENTER)
        self.sizer.Add(title_text, 0, wx.EXPAND)

        self.param_groups = [group for group in param_groups if group.get_category() == pose_param_category]
        self.choice = wx.Choice(self, choices=[group.get_group_name() for group in self.param_groups])
        if len(self.param_groups) > 0:
            self.choice.SetSelection(0)
        self.choice.Bind(wx.EVT_CHOICE, self.on_choice_updated)
        self.sizer.Add(self.choice, 0, wx.EXPAND)

        self.left_slider = wx.Slider(self, minValue=-1000, maxValue=1000, value=-1000, style=wx.HORIZONTAL)
        self.sizer.Add(self.left_slider, 0, wx.EXPAND)

        self.right_slider = wx.Slider(self, minValue=-1000, maxValue=1000, value=-1000, style=wx.HORIZONTAL)
        self.sizer.Add(self.right_slider, 0, wx.EXPAND)

        self.checkbox = wx.CheckBox(self, label="Show")
        self.checkbox.SetValue(True)
        self.sizer.Add(self.checkbox, 0, wx.SHAPED | wx.ALIGN_CENTER)

        self.update_ui()

        self.sizer.Fit(self)

    def update_ui(self):
        param_group = self.param_groups[self.choice.GetSelection()]
        if param_group.is_discrete():
            self.left_slider.Enable(False)
            self.right_slider.Enable(False)
            self.checkbox.Enable(True)
        elif param_group.get_arity() == 1:
            self.left_slider.Enable(True)
            self.right_slider.Enable(False)
            self.checkbox.Enable(False)
        else:
            self.left_slider.Enable(True)
            self.right_slider.Enable(True)
            self.checkbox.Enable(False)

    def on_choice_updated(self, event: wx.Event):
        param_group = self.param_groups[self.choice.GetSelection()]
        if param_group.is_discrete():
            self.checkbox.SetValue(True)
        self.update_ui()

    def set_param_value(self, pose: List[float]):
        if len(self.param_groups) == 0:
            return
        selected_morph_index = self.choice.GetSelection()
        param_group = self.param_groups[selected_morph_index]
        param_index = param_group.get_parameter_index()
        if param_group.is_discrete():
            if self.checkbox.GetValue():
                for i in range(param_group.get_arity()):
                    pose[param_index + i] = 1.0
        else:
            param_range = param_group.get_range()
            alpha = (self.left_slider.GetValue() + 1000) / 2000.0
            pose[param_index] = param_range[0] + (param_range[1] - param_range[0]) * alpha
            if param_group.get_arity() == 2:
                alpha = (self.right_slider.GetValue() + 1000) / 2000.0
                pose[param_index + 1] = param_range[0] + (param_range[1] - param_range[0]) * alpha


class SimpleParamGroupsControlPanel(wx.Panel):
    def __init__(self, parent,
                 pose_param_category: PoseParameterCategory,
                 param_groups: List[PoseParameterGroup]):
        super().__init__(parent, style=wx.SIMPLE_BORDER)
        self.sizer = wx.BoxSizer(wx.VERTICAL)
        self.SetSizer(self.sizer)
        self.SetAutoLayout(1)

        self.param_groups = [group for group in param_groups if group.get_category() == pose_param_category]
        for param_group in self.param_groups:
            assert not param_group.is_discrete()
            assert param_group.get_arity() == 1

        self.sliders = []
        for param_group in self.param_groups:
            static_text = wx.StaticText(
                self,
                label="   ------------ %s ------------   " % param_group.get_group_name(), style=wx.ALIGN_CENTER)
            self.sizer.Add(static_text, 0, wx.EXPAND)
            range = param_group.get_range()
            min_value = int(range[0] * 1000)
            max_value = int(range[1] * 1000)
            slider = wx.Slider(self, minValue=min_value, maxValue=max_value, value=0, style=wx.HORIZONTAL)
            self.sizer.Add(slider, 0, wx.EXPAND)
            self.sliders.append(slider)

        self.sizer.Fit(self)

    def set_param_value(self, pose: List[float]):
        if len(self.param_groups) == 0:
            return
        for param_group_index in range(len(self.param_groups)):
            param_group = self.param_groups[param_group_index]
            slider = self.sliders[param_group_index]
            param_range = param_group.get_range()
            param_index = param_group.get_parameter_index()
            alpha = (slider.GetValue() - slider.GetMin()) * 1.0 / (slider.GetMax() - slider.GetMin())
            pose[param_index] = param_range[0] + (param_range[1] - param_range[0]) * alpha


def convert_output_image_from_torch_to_numpy(output_image):
    if output_image.shape[2] == 2:
        h, w, c = output_image.shape
        numpy_image = torch.transpose(output_image.reshape(h * w, c), 0, 1).reshape(c, h, w)
    elif output_image.shape[0] == 4:
        numpy_image = pytorch_rgba_to_numpy_image(output_image)
    elif output_image.shape[0] == 3:
        numpy_image = pytorch_rgb_to_numpy_image(output_image)
    elif output_image.shape[0] == 1:
        c, h, w = output_image.shape
        alpha_image = torch.cat([output_image.repeat(3, 1, 1) * 2.0 - 1.0, torch.ones(1, h, w)], dim=0)
        numpy_image = pytorch_rgba_to_numpy_image(alpha_image)
    elif output_image.shape[0] == 2:
        numpy_image = grid_change_to_numpy_image(output_image, num_channels=4)
    else:
        raise RuntimeError("Unsupported # image channels: %d" % output_image.shape[0])
    numpy_image = numpy.uint8(numpy.rint(numpy_image * 255.0))
    return numpy_image


class MainFrame(wx.Frame):
    def __init__(self, poser: Poser, device: torch.device):
        super().__init__(None, wx.ID_ANY, "Poser")
        self.poser = poser
        self.dtype = self.poser.get_dtype()
        self.device = device
        self.image_size = self.poser.get_image_size()

        self.wx_source_image = None
        self.torch_source_image = None

        self.main_sizer = wx.BoxSizer(wx.HORIZONTAL)
        self.SetSizer(self.main_sizer)
        self.SetAutoLayout(1)
        self.init_left_panel()
        self.init_control_panel()
        self.init_right_panel()
        self.main_sizer.Fit(self)

        self.timer = wx.Timer(self, wx.ID_ANY)
        self.Bind(wx.EVT_TIMER, self.update_images, self.timer)

        save_image_id = wx.NewIdRef()
        self.Bind(wx.EVT_MENU, self.on_save_image, id=save_image_id)
        accelerator_table = wx.AcceleratorTable([
            (wx.ACCEL_CTRL, ord('S'), save_image_id)
        ])
        self.SetAcceleratorTable(accelerator_table)

        self.last_pose = None
        self.last_output_index = self.output_index_choice.GetSelection()
        self.last_output_numpy_image = None

        self.wx_source_image = None
        self.torch_source_image = None
        self.source_image_bitmap = wx.Bitmap(self.image_size, self.image_size)
        self.result_image_bitmap = wx.Bitmap(self.image_size, self.image_size)
        self.source_image_dirty = True

    def init_left_panel(self):
        self.control_panel = wx.Panel(self, style=wx.SIMPLE_BORDER, size=(self.image_size, -1))
        self.left_panel = wx.Panel(self, style=wx.SIMPLE_BORDER)
        left_panel_sizer = wx.BoxSizer(wx.VERTICAL)
        self.left_panel.SetSizer(left_panel_sizer)
        self.left_panel.SetAutoLayout(1)

        self.source_image_panel = wx.Panel(self.left_panel, size=(self.image_size, self.image_size),
                                           style=wx.SIMPLE_BORDER)
        self.source_image_panel.Bind(wx.EVT_PAINT, self.paint_source_image_panel)
        self.source_image_panel.Bind(wx.EVT_ERASE_BACKGROUND, self.on_erase_background)
        left_panel_sizer.Add(self.source_image_panel, 0, wx.FIXED_MINSIZE)

        self.load_image_button = wx.Button(self.left_panel, wx.ID_ANY, "\nLoad Image\n\n")
        left_panel_sizer.Add(self.load_image_button, 1, wx.EXPAND)
        self.load_image_button.Bind(wx.EVT_BUTTON, self.load_image)

        left_panel_sizer.Fit(self.left_panel)
        self.main_sizer.Add(self.left_panel, 0, wx.FIXED_MINSIZE)

    def on_erase_background(self, event: wx.Event):
        pass

    def init_control_panel(self):
        self.control_panel_sizer = wx.BoxSizer(wx.VERTICAL)
        self.control_panel.SetSizer(self.control_panel_sizer)
        self.control_panel.SetMinSize(wx.Size(256, 1))

        morph_categories = [
            PoseParameterCategory.EYEBROW,
            PoseParameterCategory.EYE,
            PoseParameterCategory.MOUTH,
            PoseParameterCategory.IRIS_MORPH
        ]
        morph_category_titles = {
            PoseParameterCategory.EYEBROW: "   ------------ Eyebrow ------------   ",
            PoseParameterCategory.EYE: "   ------------ Eye ------------   ",
            PoseParameterCategory.MOUTH: "   ------------ Mouth ------------   ",
            PoseParameterCategory.IRIS_MORPH: "   ------------ Iris morphs ------------   ",
        }
        self.morph_control_panels = {}
        for category in morph_categories:
            param_groups = self.poser.get_pose_parameter_groups()
            filtered_param_groups = [group for group in param_groups if group.get_category() == category]
            if len(filtered_param_groups) == 0:
                continue
            control_panel = MorphCategoryControlPanel(
                self.control_panel,
                morph_category_titles[category],
                category,
                self.poser.get_pose_parameter_groups())
            self.morph_control_panels[category] = control_panel
            self.control_panel_sizer.Add(control_panel, 0, wx.EXPAND)

        self.non_morph_control_panels = {}
        non_morph_categories = [
            PoseParameterCategory.IRIS_ROTATION,
            PoseParameterCategory.FACE_ROTATION,
            PoseParameterCategory.BODY_ROTATION,
            PoseParameterCategory.BREATHING
        ]
        for category in non_morph_categories:
            param_groups = self.poser.get_pose_parameter_groups()
            filtered_param_groups = [group for group in param_groups if group.get_category() == category]
            if len(filtered_param_groups) == 0:
                continue
            control_panel = SimpleParamGroupsControlPanel(
                self.control_panel,
                category,
                self.poser.get_pose_parameter_groups())
            self.non_morph_control_panels[category] = control_panel
            self.control_panel_sizer.Add(control_panel, 0, wx.EXPAND)

        self.control_panel_sizer.Fit(self.control_panel)
        self.main_sizer.Add(self.control_panel, 1, wx.FIXED_MINSIZE)

    def init_right_panel(self):
        self.right_panel = wx.Panel(self, style=wx.SIMPLE_BORDER)
        right_panel_sizer = wx.BoxSizer(wx.VERTICAL)
        self.right_panel.SetSizer(right_panel_sizer)
        self.right_panel.SetAutoLayout(1)

        self.result_image_panel = wx.Panel(self.right_panel,
                                           size=(self.image_size, self.image_size),
                                           style=wx.SIMPLE_BORDER)
        self.result_image_panel.Bind(wx.EVT_PAINT, self.paint_result_image_panel)
        self.result_image_panel.Bind(wx.EVT_ERASE_BACKGROUND, self.on_erase_background)
        self.output_index_choice = wx.Choice(
            self.right_panel,
            choices=[str(i) for i in range(self.poser.get_output_length())])
        self.output_index_choice.SetSelection(0)
        right_panel_sizer.Add(self.result_image_panel, 0, wx.FIXED_MINSIZE)
        right_panel_sizer.Add(self.output_index_choice, 0, wx.EXPAND)

        self.save_image_button = wx.Button(self.right_panel, wx.ID_ANY, "\nSave Image\n\n")
        right_panel_sizer.Add(self.save_image_button, 1, wx.EXPAND)
        self.save_image_button.Bind(wx.EVT_BUTTON, self.on_save_image)

        right_panel_sizer.Fit(self.right_panel)
        self.main_sizer.Add(self.right_panel, 0, wx.FIXED_MINSIZE)

    def create_param_category_choice(self, param_category: PoseParameterCategory):
        params = []
        for param_group in self.poser.get_pose_parameter_groups():
            if param_group.get_category() == param_category:
                params.append(param_group.get_group_name())
        choice = wx.Choice(self.control_panel, choices=params)
        if len(params) > 0:
            choice.SetSelection(0)
        return choice

    def load_image(self, event: wx.Event):
        dir_name = "data/images"
        file_dialog = wx.FileDialog(self, "Choose an image", dir_name, "", "*.png", wx.FD_OPEN)
        if file_dialog.ShowModal() == wx.ID_OK:
            image_file_name = os.path.join(file_dialog.GetDirectory(), file_dialog.GetFilename())
            try:
                pil_image = resize_PIL_image(PIL.Image.open(image_file_name),
                                             (self.poser.get_image_size(), self.poser.get_image_size()))
                w, h = pil_image.size
                if pil_image.mode != 'RGBA':
                    self.source_image_string = "Image must have alpha channel!"
                    self.wx_source_image = None
                    self.torch_source_image = None
                else:
                    self.wx_source_image = wx.Bitmap.FromBufferRGBA(w, h, pil_image.convert("RGBA").tobytes())
                    self.torch_source_image = extract_pytorch_image_from_PIL_image(pil_image) \
                        .to(self.device).to(self.dtype)
                self.source_image_dirty = True
                self.Refresh()
                self.Update()
            except:
                message_dialog = wx.MessageDialog(self, "Could not load image " + image_file_name, "Poser", wx.OK)
                message_dialog.ShowModal()
                message_dialog.Destroy()
        file_dialog.Destroy()

    def paint_source_image_panel(self, event: wx.Event):
        wx.BufferedPaintDC(self.source_image_panel, self.source_image_bitmap)

    def paint_result_image_panel(self, event: wx.Event):
        wx.BufferedPaintDC(self.result_image_panel, self.result_image_bitmap)

    def draw_nothing_yet_string_to_bitmap(self, bitmap):
        dc = wx.MemoryDC()
        dc.SelectObject(bitmap)

        dc.Clear()
        font = wx.Font(wx.FontInfo(14).Family(wx.FONTFAMILY_SWISS))
        dc.SetFont(font)
        w, h = dc.GetTextExtent("Nothing yet!")
        dc.DrawText("Nothing yet!", (self.image_size - w) // 2, (self.image_size - - h) // 2)

        del dc

    def get_current_pose(self):
        current_pose = [0.0 for i in range(self.poser.get_num_parameters())]
        for morph_control_panel in self.morph_control_panels.values():
            morph_control_panel.set_param_value(current_pose)
        for rotation_control_panel in self.non_morph_control_panels.values():
            rotation_control_panel.set_param_value(current_pose)
        return current_pose

    def update_images(self, event: wx.Event):
        current_pose = self.get_current_pose()
        if not self.source_image_dirty \
                and self.last_pose is not None \
                and self.last_pose == current_pose \
                and self.last_output_index == self.output_index_choice.GetSelection():
            return
        self.last_pose = current_pose
        self.last_output_index = self.output_index_choice.GetSelection()

        if self.torch_source_image is None:
            self.draw_nothing_yet_string_to_bitmap(self.source_image_bitmap)
            self.draw_nothing_yet_string_to_bitmap(self.result_image_bitmap)
            self.source_image_dirty = False
            self.Refresh()
            self.Update()
            return

        if self.source_image_dirty:
            dc = wx.MemoryDC()
            dc.SelectObject(self.source_image_bitmap)
            dc.Clear()
            dc.DrawBitmap(self.wx_source_image, 0, 0)
            self.source_image_dirty = False

        pose = torch.tensor(current_pose, device=self.device, dtype=self.dtype)
        output_index = self.output_index_choice.GetSelection()
        with torch.no_grad():
            start_cuda_event = torch.cuda.Event(enable_timing=True)
            end_cuda_event = torch.cuda.Event(enable_timing=True)
            start_cuda_event.record()
            start_time = time.time()

            output_image = self.poser.pose(self.torch_source_image, pose, output_index)[0].detach().cpu()

            end_time = time.time()
            end_cuda_event.record()
            torch.cuda.synchronize()
            print("cuda time (ms):", start_cuda_event.elapsed_time(end_cuda_event))
            print("elapsed time (ms):", (end_time - start_time) * 1000.0)

        numpy_image = convert_output_image_from_torch_to_numpy(output_image)
        self.last_output_numpy_image = numpy_image
        wx_image = wx.ImageFromBuffer(
            numpy_image.shape[0],
            numpy_image.shape[1],
            numpy_image[:, :, 0:3].tobytes(),
            numpy_image[:, :, 3].tobytes())
        wx_bitmap = wx_image.ConvertToBitmap()

        dc = wx.MemoryDC()
        dc.SelectObject(self.result_image_bitmap)
        dc.Clear()
        dc.DrawBitmap(wx_bitmap,
                      (self.image_size - numpy_image.shape[0]) // 2,
                      (self.image_size - numpy_image.shape[1]) // 2,
                      True)
        del dc

        self.Refresh()
        self.Update()

    def on_save_image(self, event: wx.Event):
        if self.last_output_numpy_image is None:
            logging.info("There is no output image to save!!!")
            return

        dir_name = "data/images"
        file_dialog = wx.FileDialog(self, "Choose an image", dir_name, "", "*.png", wx.FD_SAVE)
        if file_dialog.ShowModal() == wx.ID_OK:
            image_file_name = os.path.join(file_dialog.GetDirectory(), file_dialog.GetFilename())
            try:
                if os.path.exists(image_file_name):
                    message_dialog = wx.MessageDialog(self, f"Override {image_file_name}", "Manual Poser",
                                                      wx.YES_NO | wx.ICON_QUESTION)
                    result = message_dialog.ShowModal()
                    if result == wx.ID_YES:
                        self.save_last_numpy_image(image_file_name)
                    message_dialog.Destroy()
                else:
                    self.save_last_numpy_image(image_file_name)
            except:
                message_dialog = wx.MessageDialog(self, f"Could not save {image_file_name}", "Manual Poser", wx.OK)
                message_dialog.ShowModal()
                message_dialog.Destroy()
        file_dialog.Destroy()

    def save_last_numpy_image(self, image_file_name):
        numpy_image = self.last_output_numpy_image
        pil_image = PIL.Image.fromarray(numpy_image, mode='RGBA')
        os.makedirs(os.path.dirname(image_file_name), exist_ok=True)
        pil_image.save(image_file_name)


if __name__ == "__main__":
    device = torch.device('cuda:0')
    try:
        import tha4.poser.modes.mode_07

        poser = tha4.poser.modes.mode_07.create_poser(device)
    except RuntimeError as e:
        print(e)
        sys.exit()

    app = wx.App()
    main_frame = MainFrame(poser, device)
    main_frame.Show(True)
    main_frame.timer.Start(16)
    app.MainLoop()


================================================
FILE: src/tha4/charmodel/__init__.py
================================================


================================================
FILE: src/tha4/charmodel/character_model.py
================================================
import json
import os.path

import PIL.Image
import torch
from omegaconf import OmegaConf

from tha4.shion.base.image_util import extract_pytorch_image_from_PIL_image
from tha4.poser.modes.mode_14 import create_poser, KEY_FACE_MORPHER, KEY_BODY_MORPHER


class CharacterModel:
    def __init__(self,
                 character_image_file_name: str,
                 face_morpher_file_name: str,
                 body_morpher_file_name: str):
        self.body_morpher_file_name = body_morpher_file_name
        self.face_morpher_file_name = face_morpher_file_name
        self.character_image_file_name = character_image_file_name
        self.poser = None
        self.character_image = None

    def get_poser(self, device: torch.device):
        if self.poser is not None:
            self.poser.to(device)
        else:
            self.poser = create_poser(
                device,
                module_file_names={
                    KEY_FACE_MORPHER: self.face_morpher_file_name,
                    KEY_BODY_MORPHER: self.body_morpher_file_name
                })
        return self.poser

    def get_character_image(self, device: torch.device):
        if self.character_image is None:
            pil_image = PIL.Image.open(self.character_image_file_name)
            if pil_image.mode != 'RGBA':
                raise RuntimeError("Character image is not an RGBA image!")
            self.character_image = extract_pytorch_image_from_PIL_image(pil_image)
        self.character_image = self.character_image.to(device)
        return self.character_image

    def save(self, file_name: str):
        dir = os.path.dirname(file_name)
        rel_char_image_file_name = os.path.relpath(self.character_image_file_name, dir)
        rel_face_morpher_file_name = os.path.relpath(self.face_morpher_file_name, dir)
        rel_body_morpher_file_name = os.path.relpath(self.body_morpher_file_name, dir)
        data = {
            "character_image_file_name": rel_char_image_file_name,
            "face_morpher_file_name": rel_face_morpher_file_name,
            "body_morpher_file_name": rel_body_morpher_file_name,
        }
        conf = OmegaConf.create(data)
        os.makedirs(dir, exist_ok=True)
        with open(file_name, "wt") as fout:
            fout.write(OmegaConf.to_yaml(conf))

    @staticmethod
    def load(file_name: str):
        conf = OmegaConf.to_container(OmegaConf.load(file_name))
        dir = os.path.dirname(file_name)
        character_image_file_name = os.path.join(dir, conf["character_image_file_name"])
        face_morpher_file_name = os.path.join(dir, conf["face_morpher_file_name"])
        body_morpher_file_name = os.path.join(dir, conf["body_morpher_file_name"])
        return CharacterModel(
            character_image_file_name,
            face_morpher_file_name,
            body_morpher_file_name)


================================================
FILE: src/tha4/dataset/__init__.py
================================================


================================================
FILE: src/tha4/dataset/image_poses_and_aother_images_dataset.py
================================================
from typing import List, Callable

from torch import Tensor
from torch.utils.data import Dataset


class ImagePosesAndOtherImagesDataset(Dataset):
    def __init__(self,
                 main_image_func: Callable[[], Tensor],
                 pose_dataset: Dataset,
                 other_image_funcs: List[Callable[[], Tensor]]):
        self.main_image_func = main_image_func
        self.other_image_funcs = other_image_funcs
        self.pose_dataset = pose_dataset
        self.main_image = None
        self.other_images = [None for i in range(len(self.other_image_funcs))]

    def get_main_image(self):
        if self.main_image is None:
            self.main_image = self.main_image_func()
        return self.main_image

    def get_other_image(self, image_index: int):
        if self.other_images[image_index] is None:
            self.other_images[image_index] = self.other_image_funcs[image_index]()
        return self.other_images[image_index]

    def __len__(self):
        return len(self.pose_dataset)

    def __getitem__(self, index):
        main_image = self.get_main_image()
        pose = self.pose_dataset[index][0]
        other_images = [self.get_other_image(i) for i in range(len(self.other_image_funcs))]
        return [main_image, pose] + other_images


================================================
FILE: src/tha4/distiller/__init__.py
================================================


================================================
FILE: src/tha4/distiller/config_based_training_tasks.py
================================================
import logging
import os
import sys
from typing import Callable, List, Optional

from tha4.pytasuku.workspace import Workspace
from tha4.shion.core.training.distrib.distributed_trainer import DistributedTrainer
from tha4.shion.core.training.distrib.distributed_training_states import DistributedTrainingState


def get_torchrun_executable():
    return os.path.dirname(sys.executable) + os.path.sep + "torchrun"


class RdzvConfig:
    def __init__(self, id: int, port: int):
        self.port = port
        self.id = id


def run_standalone_config_based_training_script(
        training_script_file_name: str,
        config_file_name: str,
        num_proc_per_node: int,
        target_checkpoint_examples: Optional[int] = None,
        rdzv_config: Optional[RdzvConfig] = None):
    command = f"{get_torchrun_executable()} " \
              f"--nnodes=1 " \
              f"--nproc_per_node={num_proc_per_node} "
    if rdzv_config is not None:
        command += f"--rdzv_endpoint=localhost:{rdzv_config.port} "
        command += "--rdzv_backend=c10d "
        command += f"--rdzv_id={rdzv_config.id} "
    else:
        command += "--standalone "
    command += f"{training_script_file_name} "
    if target_checkpoint_examples is not None:
        command += f"--target_checkpoint_examples {target_checkpoint_examples} "
        command += f"--config_file={config_file_name} "
    logging.info(f"Executing -- {command}")
    os.system(command)


def define_standalone_config_based_training_tasks(
        workspace: Workspace,
        distributed_trainer_func: Callable[[], DistributedTrainer],
        training_script_file_name: str,
        config_file_name: str,
        num_proc_per_node: int,
        dependencies: Optional[List[str]] = None,
        rdzv_config: Optional[RdzvConfig] = None):
    trainer = distributed_trainer_func()
    checkpoint_examples = trainer.training_protocol.get_checkpoint_examples()
    assert len(checkpoint_examples) >= 1
    assert checkpoint_examples[0] > 0
    checkpoint_examples = [0] + checkpoint_examples

    if dependencies is None:
        dependencies = []
    module_file_dependencies = dependencies[:]
    for module_name in trainer.pretrained_module_file_names:
        module_file_dependencies.append(trainer.pretrained_module_file_names[module_name])

    def create_train_func(target_checkpoint_examples: int):
        return lambda: run_standalone_config_based_training_script(
            training_script_file_name,
            config_file_name,
            num_proc_per_node,
            target_checkpoint_examples,
            rdzv_config=rdzv_config)

    train_tasks = []
    for checkpoint_index in range(0, len(checkpoint_examples)):
        for module_name in trainer.module_names:
            module_file_name = DistributedTrainingState.get_module_file_name(
                trainer.get_checkpoint_prefix(checkpoint_index),
                module_name)
            workspace.create_file_task(
                module_file_name,
                module_file_dependencies,
                create_train_func(trainer.checkpoint_examples[checkpoint_index]))
        for module_name in trainer.accumulators:
            accumulated_module_file_name = DistributedTrainingState.get_accumulated_module_file_name(
                trainer.get_checkpoint_prefix(checkpoint_index),
                module_name)
            workspace.create_file_task(
                accumulated_module_file_name,
                module_file_dependencies,
                create_train_func(checkpoint_examples[checkpoint_index]))
        workspace.create_command_task(
            trainer.get_checkpoint_prefix(checkpoint_index) + "/train_standalone",
            module_file_dependencies,
            create_train_func(checkpoint_examples[checkpoint_index]))
        train_tasks.append(trainer.get_checkpoint_prefix(checkpoint_index) + "/train_standlone")
    workspace.create_file_task(
        trainer.prefix + "/train_standalone",
        module_file_dependencies,
        create_train_func(checkpoint_examples[-1]))


================================================
FILE: src/tha4/distiller/distill_body_morpher.py
================================================
import logging

from tha4.shion.core.training.distrib.distributed_trainer import DistributedTrainer
from tha4.distiller.distiller_config import DistillerConfig

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)

    parser = DistributedTrainer.get_default_arg_parser()
    parser.add_argument('--config_file', type=str)
    args = parser.parse_args()

    config_file_name = args.config_file
    config = DistillerConfig.load(config_file_name)

    DistributedTrainer.run_with_args(config.get_body_morpher_trainer, args)


================================================
FILE: src/tha4/distiller/distill_face_morpher.py
================================================
import logging

from tha4.shion.core.training.distrib.distributed_trainer import DistributedTrainer
from tha4.distiller.distiller_config import DistillerConfig

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)

    parser = DistributedTrainer.get_default_arg_parser()
    parser.add_argument('--config_file', type=str)
    args = parser.parse_args()

    config_file_name = args.config_file
    config = DistillerConfig.load(config_file_name)

    DistributedTrainer.run_with_args(config.get_face_morpher_trainer, args)


================================================
FILE: src/tha4/distiller/distiller_config.py
================================================
import os.path
import shutil
import PIL.Image
from dataclasses import dataclass
from typing import Optional

from omegaconf import OmegaConf
from tha4.charmodel.character_model import CharacterModel
from tha4.pytasuku.workspace import Workspace, file_task
from tha4.distiller.config_based_training_tasks import define_standalone_config_based_training_tasks
from tha4.nn.siren.face_morpher.siren_face_morpher_00_trainer import SirenFaceMorpher00TrainerArgs
from tha4.nn.siren.morpher.siren_morpher_03_trainer import SirenMorpher03TrainerArgs, TrainingPhases, TrainingPhase, \
    LossWeights, LossTerm
from tha4.shion.base.image_util import pil_image_has_transparency

POSE_DATASET_FILE_NAME = 'data/pose_dataset.pt'


def copy_file(source_file_name: str, dest_file_name):
    os.makedirs(os.path.dirname(dest_file_name), exist_ok=True)
    shutil.copyfile(source_file_name, dest_file_name)


@dataclass
class DistillerConfig:
    prefix: str
    character_image_file_name: str
    face_mask_image_file_name: str

    face_morpher_random_seed_0: int = 12771885812175595441
    face_morpher_random_seed_1: int = 14367217090963479175
    face_morpher_num_training_examples_per_sample_output: Optional[int] = 10_000
    face_morpher_batch_size: int = 8

    body_morpher_random_seed_0: int = 2892221210020292507
    body_morpher_random_seed_1: int = 9998918537095922080
    body_morpher_num_training_examples_per_sample_output: Optional[int] = 10_000
    body_morpher_batch_size: int = 8

    num_cpu_workers: int = 1
    num_gpus: int = 1

    def check(self):
        DistillerConfig.check_prefix(self.prefix)
        DistillerConfig.check_character_image_file_name(self.character_image_file_name)
        DistillerConfig.check_face_mask_image_file_name(self.face_mask_image_file_name)

        DistillerConfig.check_num_cpu_workers(self.num_cpu_workers)
        DistillerConfig.check_num_gpus(self.num_gpus)

        DistillerConfig.check_random_seed(self.face_morpher_random_seed_0, "face_morpher_random_seed_0")
        DistillerConfig.check_random_seed(self.face_morpher_random_seed_1, "face_morpher_random_seed_1")
        DistillerConfig.check_batch_size(self.face_morpher_batch_size, "face_morpher_batch_size")
        DistillerConfig.check_num_training_examples_per_sample_output(
            self.face_morpher_num_training_examples_per_sample_output,
            "face_morpher_num_training_examples_per_sample_output")

        DistillerConfig.check_random_seed(self.body_morpher_random_seed_0, "body_morpher_random_seed_0")
        DistillerConfig.check_random_seed(self.body_morpher_random_seed_1, "body_morpher_random_seed_1")
        DistillerConfig.check_batch_size(self.body_morpher_batch_size, "body_morpher_batch_size")
        DistillerConfig.check_num_training_examples_per_sample_output(
            self.body_morpher_num_training_examples_per_sample_output,
            "body_morpher_num_training_examples_per_sample_output")

    @staticmethod
    def check_prefix(prefix):
        assert os.path.isdir(prefix), "The 'prefix' must be a directory."
        assert os.path.exists(prefix), f"The {prefix} directory does not exist."

    @staticmethod
    def check_character_image_file_name(file_name):
        _, ext = os.path.splitext(file_name)
        assert os.path.isfile(file_name), \
            f"The specified character image file name, {file_name}, does not point to a file."
        assert ext.lower() == ".png", "The character image file name must have extension '.png'."

        image = PIL.Image.open(file_name)
        assert pil_image_has_transparency(image), "The character image must have an alpha channel."
        assert image.width == 512 and image.height == 512, "The character image must be 512x512."
        image.close()

    @staticmethod
    def check_face_mask_image_file_name(file_name):
        _, ext = os.path.splitext(file_name)
        assert os.path.isfile(file_name), \
            f"The specified face mask image file name, {file_name}, does not point to a file."
        assert ext.lower() == ".png", "The face mask image file name must have extension '.png'."

        image = PIL.Image.open(file_name)
        assert image.width == 512 and image.height == 512, "The face mask image must be 512x512."
        assert image.mode == "RGB", "The face mask image must be an RGB image."
        for x in range(512):
            for y in range(512):
                r, g, b = image.getpixel((x, y))
                assert (r == 0) or (r == 255), "The R channel of the face mask image must be 0 or 255"
                assert (g == 0) or (g == 255), "The G channel of the face mask image must be 0 or 255"
                assert (b == 0) or (b == 255), "The B channel of the face mask image must be 0 or 255"
        image.close()

    @staticmethod
    def check_batch_size(value, field_name: str):
        assert isinstance(value, int), f"The {field_name} must be an integer."
        assert value >= 1, f"The {field_name} must be at least 1."
        assert value <= 8, f"The {field_name} must be at most 8."

    @staticmethod
    def check_num_cpu_workers(value):
        assert value >= 1, "The value of 'num_cpu_workers must be at least 1."

    @staticmethod
    def check_num_gpus(value):
        assert value >= 1, "The value of 'num_gpus' must be at least 1."

    @staticmethod
    def check_random_seed(value, field_name: str):
        assert isinstance(value, int), f"The {field_name} must be an integer."
        assert value >= 0 and value <= 0x_ffff_ffff_ffff_ffff, "A random seed must be between 0 and 2**64-1."

    @staticmethod
    def check_num_training_examples_per_sample_output(value, field_name):
        assert value in [10_000, 100_000, 1_000_000,
                         None], f"The {field_name} must be 10_000, 100_00, 1_000_000_000, or None."

    def save(self, file_name: str):
        conf = OmegaConf.structured(self)
        os.makedirs(self.prefix, exist_ok=True)
        with open(file_name, "wt") as fout:
            fout.write(OmegaConf.to_yaml(conf))

    def config_yaml_file_name(self):
        return f"{self.prefix}/config.yaml"

    def create_config_yaml_file(self):
        if os.path.exists(self.config_yaml_file_name()):
            return
        self.save(self.config_yaml_file_name())

    @staticmethod
    def load(file_name: str) -> 'DistillerConfig':
        conf = OmegaConf.to_container(OmegaConf.load(file_name))
        args = DistillerConfig(**conf)
        args.check()
        return args

    def face_morpher_prefix(self):
        return f"{self.prefix}/face_morpher"

    def get_face_morpher_trainer(self, world_size: Optional[int] = None, backend: str = 'gloo'):
        if world_size is None:
            world_size = self.num_gpus
        args = SirenFaceMorpher00TrainerArgs(
            character_file_name=self.character_image_file_name,
            face_mask_file_name=self.face_mask_image_file_name,
            pose_dataset_file_name=POSE_DATASET_FILE_NAME,
            total_worker=self.num_cpu_workers,
            num_training_examples_per_sample_output=self.face_morpher_num_training_examples_per_sample_output,
            total_batch_size=self.face_morpher_batch_size,
            training_random_seed=self.face_morpher_random_seed_0,
            sample_output_random_seed=self.face_morpher_random_seed_1)
        return args.create_trainer(self.face_morpher_prefix(), world_size, backend)

    def body_morpher_prefix(self):
        return f"{self.prefix}/body_morpher"

    def get_body_morpher_trainer(self, world_size: Optional[int] = None, backend: str = 'gloo'):
        if world_size is None:
            world_size = self.num_gpus
        args = SirenMorpher03TrainerArgs(
            character_file_name=self.character_image_file_name,
            pose_dataset_file_name=POSE_DATASET_FILE_NAME,
            total_worker=self.num_cpu_workers,
            num_training_examples_per_sample_output=self.body_morpher_num_training_examples_per_sample_output,
            training_random_seed=self.body_morpher_random_seed_0,
            sample_output_random_seed=self.body_morpher_random_seed_1,
            total_batch_size=self.body_morpher_batch_size,
            sample_output_batch_size=1,
            training_phases=TrainingPhases([
                TrainingPhase(
                    num_examples_upper_bound=200_000,
                    learning_rate=1e-4,
                    loss_weights=LossWeights(weights={
                        LossTerm.full_blended: 0.25,
                        LossTerm.full_warped: 0.25,
                        LossTerm.full_grid_change: 0.5,
                        LossTerm.full_color_change: 2.0,
                    })),
                TrainingPhase(
                    num_examples_upper_bound=400_000,
                    learning_rate=3e-5,
                    loss_weights=LossWeights(weights={
                        LossTerm.full_blended: 0.25,
                        LossTerm.full_warped: 0.25,
                        LossTerm.full_grid_change: 0.5,
                        LossTerm.full_color_change: 2.0,
                    })),
                TrainingPhase(
                    num_examples_upper_bound=600_000,
                    learning_rate=3e-5,
                    loss_weights=LossWeights(weights={
                        LossTerm.full_blended: 1.0,
                        LossTerm.full_warped: 2.5,
                        LossTerm.full_grid_change: 5.0,
                        LossTerm.full_color_change: 1.0,
                    })),
                TrainingPhase(
                    num_examples_upper_bound=800_000,
                    learning_rate=1e-5,
                    loss_weights=LossWeights(weights={
                        LossTerm.full_blended: 1.0,
                        LossTerm.full_warped: 2.5,
                        LossTerm.full_grid_change: 5.0,
                        LossTerm.full_color_change: 1.0,
                    })),
                TrainingPhase(
                    num_examples_upper_bound=1_300_000,
                    learning_rate=1e-5,
                    loss_weights=LossWeights(weights={
                        LossTerm.full_blended: 10.0,
                        LossTerm.full_warped: 1.0,
                        LossTerm.full_grid_change: 1.0,
                        LossTerm.full_color_change: 1.0,
                    })),
                TrainingPhase(
                    num_examples_upper_bound=1_500_000,
                    learning_rate=3e-6,
                    loss_weights=LossWeights(weights={
                        LossTerm.full_blended: 10.0,
                        LossTerm.full_warped: 1.0,
                        LossTerm.full_grid_change: 1.0,
                        LossTerm.full_color_change: 1.0,
                    })),
            ]))
        return args.create_trainer(self.body_morpher_prefix(), world_size, backend)

    def character_model_prefix(self):
        return f"{self.prefix}/character_model"

    def character_model_face_morpher_file_name(self):
        return f"{self.character_model_prefix()}/face_morpher.pt"

    def character_model_body_morpher_file_name(self):
        return f"{self.character_model_prefix()}/body_morpher.pt"

    def character_model_character_png_file_name(self):
        return f"{self.character_model_prefix()}/character.png"

    def character_model_yaml_file_name(self):
        return f"{self.character_model_prefix()}/character_model.yaml"

    def define_tasks(self, workspace: Workspace):
        workspace.create_file_task(self.config_yaml_file_name(), [], self.create_config_yaml_file)

        define_standalone_config_based_training_tasks(
            workspace,
            self.get_face_morpher_trainer,
            "src/tha4/distiller/distill_face_morpher.py",
            self.config_yaml_file_name(),
            num_proc_per_node=self.num_gpus,
            dependencies=[
                self.config_yaml_file_name(),
            ])

        define_standalone_config_based_training_tasks(
            workspace,
            self.get_body_morpher_trainer,
            "src/tha4/distiller/distill_body_morpher.py",
            self.config_yaml_file_name(),
            num_proc_per_node=self.num_gpus,
            dependencies=[
                self.config_yaml_file_name(),
            ])

        @file_task(workspace, self.character_model_character_png_file_name(), [self.character_image_file_name])
        def copy_character_image_file_name():
            copy_file(self.character_image_file_name, self.character_model_character_png_file_name())

        @file_task(workspace, self.character_model_face_morpher_file_name(), [
            f"{self.face_morpher_prefix()}/checkpoint/0010/module_module.pt",
        ])
        def copy_face_morpher():
            copy_file(
                f"{self.face_morpher_prefix()}/checkpoint/0010/module_module.pt",
                self.character_model_face_morpher_file_name())

        @file_task(workspace, self.character_model_body_morpher_file_name(), [
            f"{self.body_morpher_prefix()}/checkpoint/0015/module_module.pt",
        ])
        def copy_face_morpher():
            copy_file(
                f"{self.body_morpher_prefix()}/checkpoint/0015/module_module.pt",
                self.character_model_body_morpher_file_name())

        @file_task(workspace, self.character_model_yaml_file_name(), [])
        def create_character_model_yaml_file():
            character_model = CharacterModel(
                self.character_model_character_png_file_name(),
                self.character_model_face_morpher_file_name(),
                self.character_model_body_morpher_file_name())
            character_model.save(self.character_model_yaml_file_name())

        workspace.create_command_task(
            f"{self.prefix}/all",
            [
                f"{self.face_morpher_prefix()}/train_standalone",
                f"{self.body_morpher_prefix()}/train_standalone",
                self.character_model_character_png_file_name(),
                self.character_model_face_morpher_file_name(),
                self.character_model_body_morpher_file_name(),
                self.character_model_yaml_file_name(),
            ])


================================================
FILE: src/tha4/distiller/ui/__init__.py
================================================


================================================
FILE: src/tha4/distiller/ui/distiller_config_state.py
================================================
import os.path
from contextlib import contextmanager
from pathlib import PurePath, Path
from typing import Callable, Any, Optional

from tha4.distiller.distiller_config import DistillerConfig


class DistillerConfigState:
    def __init__(self):
        self.config = DistillerConfig(prefix="", character_image_file_name="", face_mask_image_file_name="")
        self.last_saved_timestamp = None
        self.dirty = False

    def load(self, file_name):
        self.config = DistillerConfig.load(file_name)
        if os.path.exists(self.config.config_yaml_file_name()):
            self.last_saved_timestamp = os.path.getmtime(self.config.config_yaml_file_name())
        else:
            self.last_saved_timestamp = None
        self.dirty = False

    def need_to_check_overwrite(self):
        if self.last_saved_timestamp is None:
            return True
        if not os.path.exists(self.config.config_yaml_file_name()):
            return False
        if self.last_saved_timestamp < os.path.getmtime(self.config.config_yaml_file_name()):
            return True
        return False

    def save(self):
        self.config.save(self.config.config_yaml_file_name())
        self.dirty = False
        self.last_saved_timestamp = os.path.getmtime(self.config.config_yaml_file_name())

    @contextmanager
    def updating_value(self, value_func: Callable[[], Any]):
        old_value = value_func()
        yield
        new_value = value_func()
        if new_value != old_value:
            self.dirty = True

    def set_prefix(self, new_value):
        with self.updating_value(lambda: self.config.prefix):
            new_relative_path = self.get_relative_path_to_cwd(
                new_value,
                "The prefix directory must be a subdirectory of the talking-head-anime-4-demo's source code directory.")
            DistillerConfig.check_prefix(new_relative_path)
            self.config.prefix = new_relative_path

    def set_character_image_file_name(self, new_value):
        with self.updating_value(lambda: self.config.character_image_file_name):
            new_relative_path = self.get_relative_path_to_cwd(
                new_value,
                "The character image file must be under talking-head-anime-4-demo's source code directory.")
            DistillerConfig.check_character_image_file_name(new_relative_path)
            self.config.character_image_file_name = new_relative_path

    def set_face_mask_image_file_name(self, new_value):
        with self.updating_value(lambda: self.config.face_mask_image_file_name):
            new_relative_path = self.get_relative_path_to_cwd(
                new_value,
                "The face mask image file must be under talking-head-anime-4-demo's source code directory.")
            DistillerConfig.check_face_mask_image_file_name(new_relative_path)
            self.config.face_mask_image_file_name = new_relative_path

    def set_num_cpu_workers(self, new_value: int):
        with self.updating_value(lambda: self.config.num_cpu_workers):
            DistillerConfig.check_num_cpu_workers(new_value)
            self.config.num_cpu_workers = new_value

    def set_num_gpus(self, new_value: int):
        with self.updating_value(lambda: self.config.num_gpus):
            DistillerConfig.check_num_cpu_workers(new_value)
            self.config.num_gpus = new_value

    def set_face_morpher_random_seed_0(self, new_value: int):
        with self.updating_value(lambda: self.config.face_morpher_random_seed_0):
            DistillerConfig.check_random_seed(new_value, "face_morpher_random_seed_0")
            self.config.face_morpher_random_seed_0 = new_value

    def set_face_morpher_random_seed_1(self, new_value: int):
        with self.updating_value(lambda: self.config.face_morpher_random_seed_1):
            DistillerConfig.check_random_seed(new_value, "face_morpher_random_seed_1")
            self.config.face_morpher_random_seed_1 = new_value

    def set_face_morpher_num_training_examples_per_sample_output(self, new_value: Optional[int]):
        with self.updating_value(lambda: self.config.face_morpher_num_training_examples_per_sample_output):
            DistillerConfig.check_num_training_examples_per_sample_output(
                new_value, "face_morpher_num_training_examples_per_sample_output")
            self.config.face_morpher_num_training_examples_per_sample_output = new_value

    def set_face_morpher_batch_size(self, new_value: int):
        with self.updating_value(lambda: self.config.face_morpher_batch_size):
            DistillerConfig.check_batch_size(new_value, "face_morpher_batch_size")
            self.config.face_morpher_batch_size = new_value

    def set_body_morpher_random_seed_0(self, new_value: int):
        with self.updating_value(lambda: self.config.body_morpher_random_seed_0):
            DistillerConfig.check_random_seed(new_value, "body_morpher_random_seed_0")
            self.config.body_morpher_random_seed_0 = new_value

    def set_body_morpher_random_seed_1(self, new_value: int):
        with self.updating_value(lambda: self.config.body_morpher_random_seed_1):
            DistillerConfig.check_random_seed(new_value, "body_morpher_random_seed_1")
            self.config.body_morpher_random_seed_1 = new_value

    def set_body_morpher_num_training_examples_per_sample_output(self, new_value: Optional[int]):
        with self.updating_value(lambda: self.config.body_morpher_num_training_examples_per_sample_output):
            DistillerConfig.check_num_training_examples_per_sample_output(
                new_value, "body_morpher_num_training_examples_per_sample_output")
            self.config.body_morpher_num_training_examples_per_sample_output = new_value

    def set_body_morpher_batch_size(self, new_value: int):
        with self.updating_value(lambda: self.config.body_morpher_batch_size):
            DistillerConfig.check_batch_size(new_value, "body_morpher_batch_size")
            self.config.body_morpher_batch_size = new_value

    def get_relative_path_to_cwd(self, file_name: str, message: str):
        cwd = os.getcwd()
        assert os.path.commonprefix([cwd, file_name]) == cwd, message
        cwd_path = Path(cwd).as_posix()
        new_path = Path(file_name).as_posix()
        new_relative_path = os.path.relpath(str(new_path), cwd_path)
        new_relative_path = str(Path(new_relative_path).as_posix())
        return new_relative_path

    def can_show_character_image(self):
        return os.path.isfile(self.config.character_image_file_name)

    def can_show_face_mask_image(self):
        return os.path.isfile(self.config.face_mask_image_file_name)

    def can_show_mask_on_face_image(self):
        return self.can_show_character_image() and self.can_show_face_mask_image()

    def can_save(self):
        return os.path.isdir(self.config.prefix) \
            and os.path.isfile(self.config.character_image_file_name) \
            and os.path.isfile(self.config.face_mask_image_file_name)


================================================
FILE: src/tha4/distiller/ui/distiller_ui_main_frame.py
================================================
import multiprocessing
import random
from contextlib import contextmanager
from typing import Callable
import PIL.Image

import torch
import wx
import wx.html
import wx.lib.intctrl
from tha4.distiller.ui.distiller_config_state import DistillerConfigState
from tha4.image_util import convert_output_image_from_torch_to_numpy
from tha4.shion.base.image_util import extract_pytorch_image_from_PIL_image


def wx_bind_event(widget, evt):
    def f(handler):
        widget.Bind(evt, handler)
        return handler

    return f


class DistillerUiMainFrame(wx.Frame):
    PARAM_NAME_STATIC_TEXT_MIN_WIDTH = 400
    NUM_TRAINING_EXAMPLES_PER_SAMPLE_OUTPUT_CHOICES = [
        "10_000", "100_000", "1_000_000", "Do not generate sample outputs"]

    def __init__(self):
        super().__init__(None, wx.ID_ANY, "Distiller UI")

        self.init_ui()
        self.init_menus()
        self.init_bitmaps()
        self.Bind(wx.EVT_CLOSE, self.on_close)

        self.state = DistillerConfigState()
        self.update_ui()

        self.config_file_to_run = None

    def init_ui(self):
        main_sizer = wx.BoxSizer(wx.HORIZONTAL)

        self.SetSizer(main_sizer)
        self.SetAutoLayout(1)

        left_panel = self.init_left_panel(self)
        main_sizer.Add(left_panel, 0, wx.FIXED_MINSIZE)

        middle_panel = self.init_middle_panel(self)
        main_sizer.Add(middle_panel, 0, wx.EXPAND)

        right_panel = self.init_right_panel(self)
        main_sizer.Add(right_panel, 1, wx.EXPAND)

        main_sizer.Fit(self)

    def init_menus(self):
        self.file_menu = wx.Menu()

        self.new_menu_id = wx.Window.NewControlId()
        self.file_menu.Append(
            self.new_menu_id, item="&New\tCTRL+N", helpString="Create a new distiller configuration.")
        self.Bind(wx.EVT_MENU, self.on_new, id=self.new_menu_id)

        self.open_menu_id = wx.Window.NewControlId()
        self.file_menu.Append(
            self.open_menu_id, item="&Open\tCTRL+O", helpString="Open a distiller confuguration.")
        self.Bind(wx.EVT_MENU, self.on_open, id=self.open_menu_id)

        self.save_menu_id = wx.Window.NewControlId()
        self.save_menu_item = wx.MenuItem(
            self.file_menu, id=self.save_menu_id, text="&Save\tCTRL+S",
            helpString="Save the current distiller configuration. Error message will be shown it it is not well formed.")
        self.Bind(wx.EVT_MENU, self.on_save, id=self.save_menu_id)
        self.file_menu.Append(self.save_menu_item)

        self.file_menu.AppendSeparator()

        self.exit_menu_id = wx.ID_EXIT
        self.file_menu.Append(
            self.exit_menu_id, item="E&xit\tCTRL+Q", helpString="Exit the application.")
        self.Bind(wx.EVT_MENU, self.on_close, id=self.exit_menu_id)

        self.menu_bar = wx.MenuBar()
        self.menu_bar.Append(self.file_menu, "&File")

        self.SetMenuBar(self.menu_bar)

    def init_bitmaps(self):
        self.face_image_bitmap = wx.Bitmap(128, 128)
        self.face_image_pytorch = None
        self.face_mask_image_bitmap = wx.Bitmap(128, 128)
        self.face_mask_image_pytorch = None
        self.mask_on_face_image_bitmap = wx.Bitmap(128, 128)
        self.draw_nothing_yet_string_to_bitmap(self.face_image_bitmap, 128, 128)
        self.draw_nothing_yet_string_to_bitmap(self.face_mask_image_bitmap, 128, 128)
        self.draw_nothing_yet_string_to_bitmap(self.mask_on_face_image_bitmap, 128, 128)

    @contextmanager
    def create_panel(self, parent, sizer, *args, **kwargs):
        panel = wx.Panel(parent, *args, **kwargs)
        panel.SetSizer(sizer)
        panel.SetAutoLayout(1)

        try:
            yield panel, sizer
        finally:
            sizer.Fit(panel)

    def init_left_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, sizer):
            self.face_image_panel = wx.Panel(panel, size=(128, 128), style=wx.SIMPLE_BORDER)
            self.face_image_panel.Bind(wx.EVT_PAINT, self.on_face_image_panel_paint)
            sizer.Add(self.face_image_panel, 0, wx.EXPAND)

            static_text = wx.StaticText(panel, label="Face", style=wx.ALIGN_CENTER)
            sizer.Add(static_text, 0, wx.EXPAND)

            self.face_mask_image_panel = wx.Panel(panel, size=(128, 128), style=wx.SIMPLE_BORDER)
            self.face_mask_image_panel.Bind(wx.EVT_PAINT, self.on_face_mask_image_panel_paint)
            sizer.Add(self.face_mask_image_panel, 0, wx.EXPAND)

            static_text = wx.StaticText(panel, label="Face mask", style=wx.ALIGN_CENTER)
            sizer.Add(static_text, 0, wx.EXPAND)

            self.mask_on_face_image_panel = wx.Panel(panel, size=(128, 128), style=wx.SIMPLE_BORDER)
            self.mask_on_face_image_panel.Bind(wx.EVT_PAINT, self.on_mask_on_face_image_panel_paint)
            sizer.Add(self.mask_on_face_image_panel, 0, wx.EXPAND)

            static_text = wx.StaticText(panel, label="Mask upon face", style=wx.ALIGN_CENTER)
            sizer.Add(static_text, 0, wx.EXPAND)

        return panel

    def on_erase_background(self, event):
        pass

    def on_face_image_panel_paint(self, event):
        wx.BufferedPaintDC(self.face_image_panel, self.face_image_bitmap)

    def on_face_mask_image_panel_paint(self, event):
        wx.BufferedPaintDC(self.face_mask_image_panel, self.face_mask_image_bitmap)

    def on_mask_on_face_image_panel_paint(self, event):
        wx.BufferedPaintDC(self.mask_on_face_image_panel, self.mask_on_face_image_bitmap)

    def init_middle_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, sizer):
            sizer.Add(self.init_prefix_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_character_image_file_name_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_face_mask_image_file_name_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_num_cpu_workers_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_num_gpus_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_face_morpher_random_seed_0_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_face_morpher_random_seed_1_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_face_morpher_batch_size_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_body_morpher_random_seed_0_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_body_morpher_random_seed_1_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_body_morpher_batch_size_panel(panel), 0, wx.EXPAND)
            sizer.Add(self.init_num_training_examples_per_sample_output_panel(panel), 0, wx.EXPAND)

            self.run_button = wx.Button(panel, label="RUN")
            self.run_button.SetMinSize((-1, 64))
            self.run_button.Bind(wx.EVT_BUTTON, self.on_run)
            sizer.Add(self.run_button, 1, wx.EXPAND)

        return panel

    def init_prefix_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "prefix (i.e. project directory)",
                self.create_help_button_func("distiller-ui-doc/params/prefix.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            with self.create_panel(panel, wx.BoxSizer(wx.HORIZONTAL), style=wx.BORDER_NONE) \
                    as (prefix_panel, prefix_sizer):
                self.prefix_text_ctrl = wx.TextCtrl(prefix_panel, value="")
                self.prefix_text_ctrl.SetEditable(False)
                prefix_sizer.Add(self.prefix_text_ctrl, 1, wx.EXPAND)

                self.prefix_change_button = wx.Button(prefix_panel, label="Change...")
                self.prefix_change_button.Bind(wx.EVT_BUTTON, self.on_prefix_change_button)
                prefix_sizer.Add(self.prefix_change_button, 0, wx.EXPAND)
            panel_sizer.Add(prefix_panel, 1, wx.EXPAND)

        return panel

    def on_prefix_change_button(self, event):
        dir_dialog = wx.DirDialog(self, "Choose a directory.", style=wx.DD_DEFAULT_STYLE | wx.DD_NEW_DIR_BUTTON)
        if dir_dialog.ShowModal() != wx.ID_OK:
            return
        prefix_value = dir_dialog.GetPath()
        try:
            self.state.set_prefix(prefix_value)
            self.update_ui()
        except Exception as e:
            message_dialog = wx.MessageDialog(self, str(e), "Error", wx.OK | wx.ICON_ERROR)
            message_dialog.ShowModal()

    def init_character_image_file_name_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "character_image_file_name",
                self.create_help_button_func("distiller-ui-doc/params/character_image_file_name.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            with self.create_panel(panel, wx.BoxSizer(wx.HORIZONTAL), style=wx.BORDER_NONE) as (sub_panel, sub_sizer):
                self.character_image_file_name_text_ctrl = wx.TextCtrl(sub_panel, value="")
                self.character_image_file_name_text_ctrl.SetEditable(False)
                sub_sizer.Add(self.character_image_file_name_text_ctrl, 1, wx.EXPAND)

                self.character_image_change_button = wx.Button(sub_panel, label="Change...")
                self.character_image_change_button.Bind(wx.EVT_BUTTON, self.on_character_image_change_button)
                sub_sizer.Add(self.character_image_change_button, 0, wx.EXPAND)
            panel_sizer.Add(sub_panel, 1, wx.EXPAND)

        return panel

    def on_character_image_change_button(self, event):
        file_dialog = wx.FileDialog(self, "Choose a PNG file", wildcard="*.png", style=wx.FD_OPEN)
        if file_dialog.ShowModal() != wx.ID_OK:
            return
        file_name = file_dialog.GetPath()
        try:
            self.state.set_character_image_file_name(file_name)
            self.update_face_image_bitmap(file_name)
            self.update_ui()
        except Exception as e:
            message_dialog = wx.MessageDialog(self, str(e), "Error", wx.OK | wx.ICON_ERROR)
            message_dialog.ShowModal()

    def update_face_image_bitmap(self, new_file_name: str):
        pil_image = PIL.Image.open(new_file_name)
        subimage = pil_image.crop((256 - 64, 80, 256 + 64, 208))
        self.face_image_bitmap = wx.Bitmap.FromBufferRGBA(128, 128, subimage.convert("RGBA").tobytes())
        self.face_image_pytorch = extract_pytorch_image_from_PIL_image(subimage).to(torch.float)
        self.update_mask_on_face_image_bitmap()

    def init_face_mask_image_file_name_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "face_mask_image_file_name",
                self.create_help_button_func("distiller-ui-doc/params/face_mask_image_file_name.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            with self.create_panel(panel, wx.BoxSizer(wx.HORIZONTAL), style=wx.BORDER_NONE) as (sub_panel, sub_sizer):
                self.face_mask_image_file_name_text_ctrl = wx.TextCtrl(sub_panel, value="")
                self.face_mask_image_file_name_text_ctrl.SetEditable(False)
                sub_sizer.Add(self.face_mask_image_file_name_text_ctrl, 1, wx.EXPAND)

                self.face_mask_image_file_name_change_button = wx.Button(sub_panel, label="Change...")
                self.face_mask_image_file_name_change_button.Bind(wx.EVT_BUTTON, self.on_face_mask_image_change_button)
                sub_sizer.Add(self.face_mask_image_file_name_change_button, 0, wx.EXPAND)
            panel_sizer.Add(sub_panel, 1, wx.EXPAND)

        return panel

    def on_face_mask_image_change_button(self, event):
        file_dialog = wx.FileDialog(self, "Choose a PNG file", wildcard="*.png", style=wx.FD_OPEN)
        if file_dialog.ShowModal() != wx.ID_OK:
            return
        file_name = file_dialog.GetPath()
        try:
            self.state.set_face_mask_image_file_name(file_name)
            self.update_face_mask_image_bitmap(file_name)
            self.update_ui()
        except Exception as e:
            message_dialog = wx.MessageDialog(self, str(e), "Error", wx.OK | wx.ICON_ERROR)
            message_dialog.ShowModal()

    def update_face_mask_image_bitmap(self, new_file_name):
        pil_image = PIL.Image.open(new_file_name)
        subimage = pil_image.crop((256 - 64, 80, 256 + 64, 208))
        self.face_mask_image_bitmap = wx.Bitmap.FromBufferRGBA(128, 128, subimage.convert("RGBA").tobytes())
        self.face_mask_image_pytorch = extract_pytorch_image_from_PIL_image(subimage).to(torch.float)
        self.face_mask_image_pytorch = self.face_mask_image_pytorch[0:1, :, :]
        self.update_mask_on_face_image_bitmap()

    def update_mask_on_face_image_bitmap(self):
        if self.face_image_pytorch is None:
            return
        if self.face_mask_image_pytorch is None:
            return

        mask_on_face_image = (0.5 * self.face_image_pytorch) + (0.5 * self.face_mask_image_pytorch)
        numpy_image = convert_output_image_from_torch_to_numpy(mask_on_face_image)
        wx_image = wx.ImageFromBuffer(
            numpy_image.shape[0],
            numpy_image.shape[1],
            numpy_image[:, :, 0:3].tobytes(),
            numpy_image[:, :, 3].tobytes())
        self.mask_on_face_image_bitmap = wx_image.ConvertToBitmap()

    def init_num_cpu_workers_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "num_cpu_workers",
                self.create_help_button_func("distiller-ui-doc/params/num_cpu_workers.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            num_cpus = multiprocessing.cpu_count()
            self.num_cpu_workers_spin_ctrl = wx.SpinCtrl(panel, initial=1, min=1, max=num_cpus)

            @wx_bind_event(self.num_cpu_workers_spin_ctrl, wx.EVT_SPINCTRL)
            def on_num_cpu_workers_spin_ctrl(event):
                self.state.set_num_cpu_workers(self.num_cpu_workers_spin_ctrl.GetValue())
                self.Refresh()

            panel_sizer.Add(self.num_cpu_workers_spin_ctrl, 1, wx.EXPAND)

        return panel

    def init_num_gpus_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "num_gpus",
                self.create_help_button_func("distiller-ui-doc/params/num_gpus.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            num_gpus = torch.cuda.device_count()
            self.num_gpus_spin_ctrl = wx.SpinCtrl(panel, initial=1, min=1, max=max(1, num_gpus))

            @wx_bind_event(self.num_gpus_spin_ctrl, wx.EVT_SPINCTRL)
            def on_num_cpu_workers_spin_ctrl(event):
                self.state.set_num_gpus(self.num_gpus_spin_ctrl.GetValue())
                self.Refresh()

            panel_sizer.Add(self.num_gpus_spin_ctrl, 1, wx.EXPAND)

        return panel

    def init_face_morpher_random_seed_0_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "face_morpher_random_seed_0",
                self.create_help_button_func("distiller-ui-doc/params/face_morpher_random_seed_0.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            with self.create_panel(panel, wx.BoxSizer(wx.HORIZONTAL), style=wx.BORDER_NONE) as (sub_panel, sub_sizer):
                initial_value = random.randint(0, 2 ** 64 - 1)
                self.face_morpher_random_seed_0_int_ctrl = wx.lib.intctrl.IntCtrl(
                    sub_panel, value=initial_value, min=0, max=0x_ffff_ffff_ffff_ffff)

                @wx_bind_event(self.face_morpher_random_seed_0_int_ctrl, wx.EVT_TEXT)
                def on_face_morpher_random_seed_0_int_ctrl_text(event):
                    self.state.set_face_morpher_random_seed_0(self.face_morpher_random_seed_0_int_ctrl.GetValue())

                sub_sizer.Add(self.face_morpher_random_seed_0_int_ctrl, 1, wx.EXPAND)

                self.face_morpher_random_seed_0_randomize_button = wx.Button(sub_panel, label="Randomize")

                @wx_bind_event(self.face_morpher_random_seed_0_randomize_button, wx.EVT_BUTTON)
                def on_face_morpher_random_seed_0_randomize_button(event):
                    new_value = random.randint(0, 0x_ffff_ffff_ffff_ffff)
                    self.face_morpher_random_seed_0_int_ctrl.SetValue(new_value)
                    self.state.set_face_morpher_random_seed_0(new_value)

                sub_sizer.Add(self.face_morpher_random_seed_0_randomize_button, 0, wx.EXPAND)
            panel_sizer.Add(sub_panel, 1, wx.EXPAND)

        return panel

    def init_face_morpher_random_seed_1_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "face_morpher_random_seed_1",
                self.create_help_button_func("distiller-ui-doc/params/face_morpher_random_seed_1.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            with self.create_panel(panel, wx.BoxSizer(wx.HORIZONTAL), style=wx.BORDER_NONE) as (sub_panel, sub_sizer):
                initial_value = random.randint(0, 2 ** 64 - 1)
                self.face_morpher_random_seed_1_int_ctrl = wx.lib.intctrl.IntCtrl(
                    sub_panel, value=initial_value, min=0, max=0x_ffff_ffff_ffff_ffff)

                @wx_bind_event(self.face_morpher_random_seed_1_int_ctrl, wx.EVT_TEXT)
                def on_face_morpher_random_seed_1_int_ctrl_text(event):
                    self.state.set_face_morpher_random_seed_1(self.face_morpher_random_seed_1_int_ctrl.GetValue())

                sub_sizer.Add(self.face_morpher_random_seed_1_int_ctrl, 1, wx.EXPAND)

                self.face_morpher_random_seed_1_randomize_button = wx.Button(sub_panel, label="Randomize")

                @wx_bind_event(self.face_morpher_random_seed_1_randomize_button, wx.EVT_BUTTON)
                def on_face_morpher_random_seed_1_randomize_button(event):
                    new_value = random.randint(0, 0x_ffff_ffff_ffff_ffff)
                    self.face_morpher_random_seed_1_int_ctrl.SetValue(new_value)
                    self.state.set_face_morpher_random_seed_1(new_value)

                sub_sizer.Add(self.face_morpher_random_seed_1_randomize_button, 0, wx.EXPAND)
            panel_sizer.Add(sub_panel, 1, wx.EXPAND)

        return panel

    def init_face_morpher_batch_size_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "face_morpher_batch_size",
                self.create_help_button_func("distiller-ui-doc/params/face_morpher_batch_size.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            self.face_morpher_batch_size_spin_ctrl = wx.SpinCtrl(panel, initial=8, min=1, max=8)

            @wx_bind_event(self.face_morpher_batch_size_spin_ctrl, wx.EVT_SPINCTRL)
            def on_face_morpher_batch_size_spin_ctrl(event):
                self.state.set_face_morpher_batch_size(self.face_morpher_batch_size_spin_ctrl.GetValue())

            panel_sizer.Add(self.face_morpher_batch_size_spin_ctrl, 1, wx.EXPAND)

        return panel

    def init_body_morpher_random_seed_0_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "body_morpher_random_seed_0",
                self.create_help_button_func("distiller-ui-doc/params/body_morpher_random_seed_0.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            with self.create_panel(panel, wx.BoxSizer(wx.HORIZONTAL), style=wx.BORDER_NONE) as (sub_panel, sub_sizer):
                initial_value = random.randint(0, 2 ** 64 - 1)
                self.body_morpher_random_seed_0_int_ctrl = wx.lib.intctrl.IntCtrl(
                    sub_panel, value=initial_value, min=0, max=0x_ffff_ffff_ffff_ffff)

                @wx_bind_event(self.body_morpher_random_seed_0_int_ctrl, wx.EVT_TEXT)
                def on_body_morpher_random_seed_0_int_ctrl_text(event):
                    self.state.set_body_morpher_random_seed_0(self.body_morpher_random_seed_0_int_ctrl.GetValue())

                sub_sizer.Add(self.body_morpher_random_seed_0_int_ctrl, 1, wx.EXPAND)

                self.body_morpher_random_seed_0_randomize_button = wx.Button(sub_panel, label="Randomize")

                @wx_bind_event(self.body_morpher_random_seed_0_randomize_button, wx.EVT_BUTTON)
                def on_body_morpher_random_seed_0_randomize_button(event):
                    new_value = random.randint(0, 0x_ffff_ffff_ffff_ffff)
                    self.body_morpher_random_seed_0_int_ctrl.SetValue(new_value)
                    self.state.set_body_morpher_random_seed_0(new_value)

                sub_sizer.Add(self.body_morpher_random_seed_0_randomize_button, 0, wx.EXPAND)
            panel_sizer.Add(sub_panel, 1, wx.EXPAND)

        return panel

    def init_body_morpher_random_seed_1_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "body_morpher_random_seed_1",
                self.create_help_button_func("distiller-ui-doc/params/body_morpher_random_seed_1.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            with self.create_panel(panel, wx.BoxSizer(wx.HORIZONTAL), style=wx.BORDER_NONE) as (sub_panel, sub_sizer):
                initial_value = random.randint(0, 2 ** 64 - 1)
                self.body_morpher_random_seed_1_int_ctrl = wx.lib.intctrl.IntCtrl(
                    sub_panel, value=initial_value, min=0, max=0x_ffff_ffff_ffff_ffff)

                @wx_bind_event(self.body_morpher_random_seed_1_int_ctrl, wx.EVT_TEXT)
                def on_body_morpher_random_seed_1_int_ctrl_text(event):
                    self.state.set_body_morpher_random_seed_1(self.body_morpher_random_seed_1_int_ctrl.GetValue())

                sub_sizer.Add(self.body_morpher_random_seed_1_int_ctrl, 1, wx.EXPAND)

                self.body_morpher_random_seed_1_randomize_button = wx.Button(sub_panel, label="Randomize")

                @wx_bind_event(self.body_morpher_random_seed_1_randomize_button, wx.EVT_BUTTON)
                def on_body_morpher_random_seed_1_randomize_button(event):
                    new_value = random.randint(0, 0x_ffff_ffff_ffff_ffff)
                    self.body_morpher_random_seed_1_int_ctrl.SetValue(new_value)
                    self.state.set_body_morpher_random_seed_1(new_value)

                sub_sizer.Add(self.body_morpher_random_seed_1_randomize_button, 0, wx.EXPAND)
            panel_sizer.Add(sub_panel, 1, wx.EXPAND)

        return panel

    def init_body_morpher_batch_size_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "body_morpher_batch_size",
                self.create_help_button_func("distiller-ui-doc/params/body_morpher_batch_size.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            self.body_morpher_batch_size_spin_ctrl = wx.SpinCtrl(panel, initial=8, min=1, max=8)

            @wx_bind_event(self.body_morpher_batch_size_spin_ctrl, wx.EVT_SPINCTRL)
            def on_body_morpher_batch_size_spin_ctrl(event):
                self.state.set_body_morpher_batch_size(self.body_morpher_batch_size_spin_ctrl.GetValue())

            panel_sizer.Add(self.body_morpher_batch_size_spin_ctrl, 1, wx.EXPAND)

        return panel

    def init_num_training_examples_per_sample_output_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, panel_sizer):
            prefix_param_name_panel = self.create_param_name_panel_with_help_button(
                panel,
                "num_training_examples_per_sample_output",
                self.create_help_button_func("distiller-ui-doc/params/num_training_examples_per_sample_output.html"))
            panel_sizer.Add(prefix_param_name_panel, 1, wx.EXPAND)

            self.num_training_examples_per_sample_output_combobox = \
                wx.ComboBox(panel,
                            value="10_000",
                            choices=DistillerUiMainFrame.NUM_TRAINING_EXAMPLES_PER_SAMPLE_OUTPUT_CHOICES)

            @wx_bind_event(self.num_training_examples_per_sample_output_combobox, wx.EVT_COMBOBOX)
            def on_num_training_examples_per_sample_output_combobox(event):
                index = self.num_training_examples_per_sample_output_combobox.GetSelection()
                if index == 3:
                    self.state.set_face_morpher_num_training_examples_per_sample_output(None)
                    self.state.set_body_morpher_num_training_examples_per_sample_output(None)
                else:
                    selected = DistillerUiMainFrame.NUM_TRAINING_EXAMPLES_PER_SAMPLE_OUTPUT_CHOICES[index]
                    new_value = int(selected)
                    self.state.set_face_morpher_num_training_examples_per_sample_output(new_value)
                    self.state.set_body_morpher_num_training_examples_per_sample_output(new_value)

            panel_sizer.Add(self.num_training_examples_per_sample_output_combobox, 1, wx.EXPAND)

        return panel

    def on_close(self, event):
        if self.state.dirty:
            confirmation_dialog = wx.MessageDialog(
                parent=self,
                message=f"You have not saved your work. Do you want to exit anyway?",
                caption="Confirmation",
                style=wx.YES_NO | wx.ICON_QUESTION)
            result = confirmation_dialog.ShowModal()
            if result == wx.ID_NO:
                return

        self.Destroy()

    def create_help_button_func(self, html_file_name: str):
        def init_help_button_func(parent):
            button = wx.Button(parent, label="Help")

            @wx_bind_event(button, wx.EVT_BUTTON)
            def on_prefix_button(event):
                self.html_window.LoadPage(html_file_name)
                self.Refresh()

            return button

        return init_help_button_func

    def create_param_name_panel_with_help_button(
            self, parent, param_name: str, help_button_func: Callable[[wx.Window], wx.Button]):
        with self.create_panel(parent, wx.BoxSizer(wx.HORIZONTAL), style=wx.NO_BORDER) \
                as (panel, sizer):
            title_text_panel = self.create_vertically_centered_text_panel(
                panel, param_name, DistillerUiMainFrame.PARAM_NAME_STATIC_TEXT_MIN_WIDTH)
            sizer.Add(title_text_panel, 1, wx.EXPAND)

            help_button = help_button_func(panel)
            sizer.Add(help_button, 0, wx.EXPAND)
        return panel

    def create_vertically_centered_text_panel(self, parent, text: str, min_width: int):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.NO_BORDER) as (panel, sizer):
            sizer.AddStretchSpacer(1)
            text = wx.StaticText(
                panel,
                label=text,
                style=wx.ALIGN_CENTER)
            text.SetMinSize((min_width, -1))
            sizer.Add(text, 0, wx.EXPAND)
            sizer.AddStretchSpacer(1)
        return panel

    def init_right_panel(self, parent):
        with self.create_panel(parent, wx.BoxSizer(wx.VERTICAL), style=wx.BORDER_SIMPLE) as (panel, sizer):
            self.html_window = wx.html.HtmlWindow(panel)
            self.html_window.SetMinSize((600, 600))
            self.html_window.SetFonts("Times New Roman", "Courier New", sizes=[10, 12, 14, 16, 18, 20, 24])
            self.html_window.LoadPage("distiller-ui-doc/index.html")
            sizer.Add(self.html_window, 1, wx.EXPAND)

            go_to_main_documentation_button = wx.Button(panel, label="Go to Main Documentation")
            sizer.Add(go_to_main_documentation_button, 0, wx.EXPAND)

            @wx_bind_event(go_to_main_documentation_button, wx.EVT_BUTTON)
            def on_go_to_main_documentation_button(event):
                self.html_window.LoadPage("distiller-ui-doc/index.html")
                self.Refresh()

        return panel

    def populate_distiller_config(self):
        self.state.config.prefix = self.prefix_text_ctrl.GetValue()
        self.state.config.character_image_file_name = self.character_image_file_name_text_ctrl.GetValue()
        self.state.config.face_mask_image_file_name = self.face_mask_image_file_name_text_ctrl.GetValue()

        self.state.config.num_cpu_workers = self.num_cpu_workers_spin_ctrl.GetValue()
        self.state.config.num_gpus = self.num_gpus_spin_ctrl.GetValue()

        self.state.config.face_morpher_random_seed_0 = self.face_morpher_random_seed_0_int_ctrl.GetValue()
        self.state.config.face_morpher_random_seed_1 = self.face_morpher_random_seed_1_int_ctrl.GetValue()
        self.state.config.face_morpher_batch_size = self.face_morpher_batch_size_spin_ctrl.GetValue()

        self.state.config.body_morpher_random_seed_0 = self.body_morpher_random_seed_0_int_ctrl.GetValue()
        self.state.config.body_morpher_random_seed_1 = self.body_morpher_random_seed_1_int_ctrl.GetValue()
        self.state.config.body_morpher_batch_size = self.body_morpher_batch_size_spin_ctrl.GetValue()

        if self.num_training_examples_per_sample_output_combobox.GetValue() == \
                DistillerUiMainFrame.NUM_TRAINING_EXAMPLES_PER_SAMPLE_OUTPUT_CHOICES[-1]:
            self.state.config.face_morpher_num_training_examples_per_sample_output = None
            self.state.config.body_morpher_num_training_examples_per_sample_output = None
        else:
            value = int(self.num_training_examples_per_sample_output_combobox.GetValue())
            self.state.config.face_morpher_num_training_examples_per_sample_output = value
            self.state.config.body_morpher_num_training_examples_per_sample_output = value

    def update_ui(self):
        self.prefix_text_ctrl.SetValue(self.state.config.prefix)
        self.character_image_file_name_text_ctrl.SetValue(self.state.config.character_image_file_name)
        self.face_mask_image_file_name_text_ctrl.SetValue(self.state.config.face_mask_image_file_name)

        if not self.state.can_show_character_image():
            self.draw_nothing_yet_string_to_bitmap(self.face_image_bitmap, 128, 128)
        if not self.state.can_show_face_mask_image():
            self.draw_nothing_yet_string_to_bitmap(self.face_mask_image_bitmap, 128, 128)
        if not self.state.can_show_mask_on_face_image():
            self.draw_nothing_yet_string_to_bitmap(self.mask_on_face_image_bitmap, 128, 128)

        self.num_cpu_workers_spin_ctrl.SetValue(self.state.config.num_cpu_workers)
        self.num_gpus_spin_ctrl.SetValue(self.state.config.num_gpus)

        self.face_morpher_random_seed_0_int_ctrl.SetValue(self.state.config.face_morpher_random_seed_0)
        self.face_morpher_random_seed_1_int_ctrl.SetValue(self.state.config.face_morpher_random_seed_1)
        self.face_morpher_batch_size_spin_ctrl.SetValue(self.state.config.face_morpher_batch_size)

        self.body_morpher_random_seed_0_int_ctrl.SetValue(self.state.config.body_morpher_random_seed_0)
        self.body_morpher_random_seed_1_int_ctrl.SetValue(self.state.config.body_morpher_random_seed_1)
        self.body_morpher_batch_size_spin_ctrl.SetValue(self.state.config.body_morpher_batch_size)

        if self.state.config.body_morpher_num_training_examples_per_sample_output is None:
            self.num_training_examples_per_sample_output_combobox.SetSelection(3)
        else:
            choices = [int(x) for x in DistillerUiMainFrame.NUM_TRAINING_EXAMPLES_PER_SAMPLE_OUTPUT_CHOICES[:-1]]
            self.num_training_examples_per_sample_output_combobox.SetSelection(
                choices.index(self.state.config.body_morpher_num_training_examples_per_sample_output))

        self.save_menu_item.Enable(self.state.can_save())

        self.Refresh()

    def draw_nothing_yet_string_to_bitmap(self, bitmap, width: int, height: int):
        dc = wx.MemoryDC()
        dc.SelectObject(bitmap)

        dc.Clear()
        font = wx.Font(wx.FontInfo(14).Family(wx.FONTFAMILY_SWISS))
        dc.SetFont(font)
        w, h = dc.GetTextExtent("Nothing yet!")
        dc.DrawText("Nothing yet!", (width - w) // 2, (height - h) // 2)

        del dc

    def try_saving(self):
        if not self.state.can_save():
            message_dialog = wx.MessageDialog(
                self,
                "Cannot save yet! Please make sure you set the prefix, character_image_file_name, "
                "and face_mask_image_file_name first.",
                "Error",
                wx.OK | wx.ICON_ERROR)
            message_dialog.ShowModal()
            return False
        else:
            if self.state.need_to_check_overwrite():
                confirmation_dialog = wx.MessageDialog(
                    parent=self,
                    message=f"Overwriting {self.state.config.config_yaml_file_name()}?",
                    caption="Confirmation",
                    style=wx.YES_NO | wx.CANCEL | wx.ICON_QUESTION)
                result = confirmation_dialog.ShowModal()
                if result == wx.ID_YES:
                    self.state.save()
                    return True
                elif result == wx.ID_NO:
                    return False
                else:
                    return False
            else:
                self.state.save()
                return True

    def on_save(self, event):
        return self.try_saving()

    def on_new(self, event):
        if self.state.dirty:
            confirmation_dialog = wx.MessageDialog(
                parent=self,
                message=f"You have not saved the current config. Do you want to proceed?",
                caption="Confirmation",
                style=wx.YES_NO | wx.ICON_QUESTION)
            result = confirmation_dialog.ShowModal()
            if result == wx.ID_NO:
                return
        self.state = DistillerConfigState()
        self.update_ui()

    def on_open(self, event):
        if self.state.dirty:
            confirmation_dialog = wx.MessageDialog(
                parent=self,
                message=f"You have not saved the current config. Do you want to proceed?",
                caption="Confirmation",
                style=wx.YES_NO | wx.ICON_QUESTION)
            result = confirmation_dialog.ShowModal()
            if result == wx.ID_NO:
                return

        file_dialog = wx.FileDialog(self, "Choose a YAML file", wildcard="*.yaml", style=wx.FD_OPEN)
        if file_dialog.ShowModal() != wx.ID_OK:
            return
        file_name = file_dialog.GetPath()
        try:
            self.state.load(file_name)
            self.face_image_pytorch = None
            self.face_mask_image_pytorch = None
            self.update_face_image_bitmap(self.state.config.character_image_file_name)
            self.update_face_mask_image_bitmap(self.state.config.face_mask_image_file_name)
            self.update_ui()
        except Exception as e:
            message_dialog = wx.MessageDialog(self, str(e), "Error", wx.OK | wx.ICON_ERROR)
            message_dialog.ShowModal()

    def on_run(self, event):
        try:
            self.state.config.check()
        except Exception as e:
            message_dialog = wx.MessageDialog(self, str(e), "Error", wx.OK | wx.ICON_ERROR)
            message_dialog.ShowModal()
            return

        if self.state.dirty:
            message_dialog = wx.MessageDialog(
                self,
                "Please save the configuration first.",
                "Error",
                wx.OK | wx.ICON_ERROR)
            message_dialog.ShowModal()
            return

        self.config_file_to_run = self.state.config.config_yaml_file_name()
        self.Destroy()


================================================
FILE: src/tha4/image_util.py
================================================
import math

import PIL.Image
import numpy
import torch
from matplotlib import cm
from tha4.shion.base.image_util import numpy_linear_to_srgb, pytorch_rgba_to_numpy_image, pytorch_rgb_to_numpy_image, \
    torch_linear_to_srgb


def grid_change_to_numpy_image(torch_image, num_channels=3):
    height = torch_image.shape[1]
    width = torch_image.shape[2]
    size_image = (torch_image[0, :, :] ** 2 + torch_image[1, :, :] ** 2).sqrt().view(height, width, 1).numpy()
    hsv = cm.get_cmap('hsv')
    angle_image = hsv(((torch.atan2(
        torch_image[0, :, :].view(height * width),
        torch_image[1, :, :].view(height * width)).view(height, width) + math.pi) / (2 * math.pi)).numpy()) * 3
    numpy_image = size_image * angle_image[:, :, 0:3]
    rgb_image = numpy_linear_to_srgb(numpy_image)
    if num_channels == 3:
        return rgb_image
    elif num_channels == 4:
        return numpy.concatenate([rgb_image, numpy.ones_like(size_image)], axis=2)
    else:
        raise RuntimeError("Unsupported num_channels: " + str(num_channels))


def resize_PIL_image(pil_image, size=(256, 256)):
    w, h = pil_image.size
    d = min(w, h)
    r = ((w - d) // 2, (h - d) // 2, (w + d) // 2, (h + d) // 2)
    return pil_image.resize(size, resample=PIL.Image.LANCZOS, box=r)


def convert_output_image_from_torch_to_numpy(output_image):
    if output_image.shape[2] == 2:
        h, w, c = output_image.shape
        numpy_image = torch.transpose(output_image.reshape(h * w, c), 0, 1).reshape(c, h, w)
    elif output_image.shape[0] == 4:
        numpy_image = pytorch_rgba_to_numpy_image(output_image)
    elif output_image.shape[0] == 3:
        numpy_image = pytorch_rgb_to_numpy_image(output_image)
    elif output_image.shape[0] == 1:
        c, h, w = output_image.shape
        alpha_image = torch.cat([output_image.repeat(3, 1, 1) * 2.0 - 1.0, torch.ones(1, h, w)], dim=0)
        numpy_image = pytorch_rgba_to_numpy_image(alpha_image)
    elif output_image.shape[0] == 2:
        numpy_image = grid_change_to_numpy_image(output_image, num_channels=4)
    else:
        raise RuntimeError("Unsupported # image channels: %d" % output_image.shape[0])
    numpy_image = numpy.uint8(numpy.rint(numpy_image * 255.0))
    return numpy_image


def convert_linear_to_srgb(image: torch.Tensor) -> torch.Tensor:
    rgb_image = torch_linear_to_srgb(image[0:3, :, :])
    return torch.cat([rgb_image, image[3:4, :, :]], dim=0)


================================================
FILE: src/tha4/mocap/__init__.py
================================================


================================================
FILE: src/tha4/mocap/ifacialmocap_constants.py
================================================
EYE_LOOK_IN_LEFT = "eyeLookInLeft"
EYE_LOOK_OUT_LEFT = "eyeLookOutLeft"
EYE_LOOK_DOWN_LEFT = "eyeLookDownLeft"
EYE_LOOK_UP_LEFT = "eyeLookUpLeft"
EYE_BLINK_LEFT = "eyeBlinkLeft"
EYE_SQUINT_LEFT = "eyeSquintLeft"
EYE_WIDE_LEFT = "eyeWideLeft"
EYE_LOOK_IN_RIGHT = "eyeLookInRight"
EYE_LOOK_OUT_RIGHT = "eyeLookOutRight"
EYE_LOOK_DOWN_RIGHT = "eyeLookDownRight"
EYE_LOOK_UP_RIGHT = "eyeLookUpRight"
EYE_BLINK_RIGHT = "eyeBlinkRight"
EYE_SQUINT_RIGHT = "eyeSquintRight"
EYE_WIDE_RIGHT = "eyeWideRight"
BROW_DOWN_LEFT = "browDownLeft"
BROW_OUTER_UP_LEFT = "browOuterUpLeft"
BROW_DOWN_RIGHT = "browDownRight"
BROW_OUTER_UP_RIGHT = "browOuterUpRight"
BROW_INNER_UP = "browInnerUp"
NOSE_SNEER_LEFT = "noseSneerLeft"
NOSE_SNEER_RIGHT = "noseSneerRight"
CHEEK_SQUINT_LEFT = "cheekSquintLeft"
CHEEK_SQUINT_RIGHT = "cheekSquintRight"
CHEEK_PUFF = "cheekPuff"
MOUTH_LEFT = "mouthLeft"
MOUTH_DIMPLE_LEFT = "mouthDimpleLeft"
MOUTH_FROWN_LEFT = "mouthFrownLeft"
MOUTH_LOWER_DOWN_LEFT = "mouthLowerDownLeft"
MOUTH_PRESS_LEFT = "mouthPressLeft"
MOUTH_SMILE_LEFT = "mouthSmileLeft"
MOUTH_STRETCH_LEFT = "mouthStretchLeft"
MOUTH_UPPER_UP_LEFT = "mouthUpperUpLeft"
MOUTH_RIGHT = "mouthRight"
MOUTH_DIMPLE_RIGHT = "mouthDimpleRight"
MOUTH_FROWN_RIGHT = "mouthFrownRight"
MOUTH_LOWER_DOWN_RIGHT = "mouthLowerDownRight"
MOUTH_PRESS_RIGHT = "mouthPressRight"
MOUTH_SMILE_RIGHT = "mouthSmileRight"
MOUTH_STRETCH_RIGHT = "mouthStretchRight"
MOUTH_UPPER_UP_RIGHT = "mouthUpperUpRight"
MOUTH_CLOSE = "mouthClose"
MOUTH_FUNNEL = "mouthFunnel"
MOUTH_PUCKER = "mouthPucker"
MOUTH_ROLL_LOWER = "mouthRollLower"
MOUTH_ROLL_UPPER = "mouthRollUpper"
MOUTH_SHRUG_LOWER = "mouthShrugLower"
MOUTH_SHRUG_UPPER = "mouthShrugUpper"
JAW_LEFT = "jawLeft"
JAW_RIGHT = "jawRight"
JAW_FORWARD = "jawForward"
JAW_OPEN = "jawOpen"
TONGUE_OUT = "tongueOut"

BLENDSHAPE_NAMES = [
    EYE_LOOK_IN_LEFT,  # 0
    EYE_LOOK_OUT_LEFT,  # 1
    EYE_LOOK_DOWN_LEFT,  # 2
    EYE_LOOK_UP_LEFT,  # 3
    EYE_BLINK_LEFT,  # 4
    EYE_SQUINT_LEFT,  # 5
    EYE_WIDE_LEFT, 
Download .txt
gitextract_k8uli292/

├── .gitignore
├── .python-version
├── LICENSE
├── README.md
├── bin/
│   ├── activate-venv.bat
│   ├── activate-venv.sh
│   ├── run
│   └── run.bat
├── distiller-ui-doc/
│   ├── index.html
│   └── params/
│       ├── body_morpher_batch_size.html
│       ├── body_morpher_random_seed_0.html
│       ├── body_morpher_random_seed_1.html
│       ├── character_image_file_name.html
│       ├── face_mask_image_file_name.html
│       ├── face_morpher_batch_size.html
│       ├── face_morpher_random_seed_0.html
│       ├── face_morpher_random_seed_1.html
│       ├── num_cpu_workers.html
│       ├── num_gpus.html
│       ├── num_training_examples_per_sample_output.html
│       └── prefix.html
├── docs/
│   ├── character_model_ifacialmocap_puppeteer.md
│   ├── character_model_manual_poser.md
│   ├── character_model_mediapipe_puppeteer.md
│   ├── distill.md
│   ├── distiller_ui.md
│   └── full_manual_poser.md
├── poetry/
│   ├── README.md
│   └── pyproject.toml
└── src/
    └── tha4/
        ├── __init__.py
        ├── app/
        │   ├── __init__.py
        │   ├── character_model_ifacialmocap_puppeteer.py
        │   ├── character_model_manual_poser.py
        │   ├── character_model_mediapipe_puppeteer.py
        │   ├── distill.py
        │   ├── distiller_ui.py
        │   └── full_manual_poser.py
        ├── charmodel/
        │   ├── __init__.py
        │   └── character_model.py
        ├── dataset/
        │   ├── __init__.py
        │   └── image_poses_and_aother_images_dataset.py
        ├── distiller/
        │   ├── __init__.py
        │   ├── config_based_training_tasks.py
        │   ├── distill_body_morpher.py
        │   ├── distill_face_morpher.py
        │   ├── distiller_config.py
        │   └── ui/
        │       ├── __init__.py
        │       ├── distiller_config_state.py
        │       └── distiller_ui_main_frame.py
        ├── image_util.py
        ├── mocap/
        │   ├── __init__.py
        │   ├── ifacialmocap_constants.py
        │   ├── ifacialmocap_pose.py
        │   ├── ifacialmocap_pose_converter.py
        │   ├── ifacialmocap_pose_converter_25.py
        │   ├── ifacialmocap_v2.py
        │   ├── mediapipe_constants.py
        │   ├── mediapipe_face_pose.py
        │   ├── mediapipe_face_pose_converter.py
        │   └── mediapipe_face_pose_converter_00.py
        ├── nn/
        │   ├── __init__.py
        │   ├── common/
        │   │   ├── __init__.py
        │   │   ├── conv_block_factory.py
        │   │   ├── poser_args.py
        │   │   ├── poser_encoder_decoder_00.py
        │   │   ├── poser_encoder_decoder_00_separable.py
        │   │   ├── resize_conv_encoder_decoder.py
        │   │   ├── resize_conv_unet.py
        │   │   └── unet.py
        │   ├── conv.py
        │   ├── eyebrow_decomposer/
        │   │   ├── __init__.py
        │   │   └── eyebrow_decomposer_00.py
        │   ├── eyebrow_morphing_combiner/
        │   │   ├── __init__.py
        │   │   └── eyebrow_morphing_combiner_00.py
        │   ├── face_morpher/
        │   │   ├── __init__.py
        │   │   └── face_morpher_08.py
        │   ├── image_processing_util.py
        │   ├── init_function.py
        │   ├── morpher/
        │   │   ├── __init__.py
        │   │   └── morpher_00.py
        │   ├── nonlinearity_factory.py
        │   ├── normalization.py
        │   ├── pass_through.py
        │   ├── resnet_block.py
        │   ├── resnet_block_seperable.py
        │   ├── separable_conv.py
        │   ├── siren/
        │   │   ├── __init__.py
        │   │   ├── face_morpher/
        │   │   │   ├── __init__.py
        │   │   │   ├── siren_face_morpher_00.py
        │   │   │   ├── siren_face_morpher_00_trainer.py
        │   │   │   └── siren_face_morpher_protocols_00.py
        │   │   ├── morpher/
        │   │   │   ├── __init__.py
        │   │   │   ├── siren_morpher_03.py
        │   │   │   ├── siren_morpher_03_trainer.py
        │   │   │   └── siren_morpher_protocols_03.py
        │   │   └── vanilla/
        │   │       ├── __init__.py
        │   │       └── siren.py
        │   ├── spectral_norm.py
        │   ├── upscaler/
        │   │   ├── __init__.py
        │   │   └── upscaler_02.py
        │   └── util.py
        ├── poser/
        │   ├── __init__.py
        │   ├── general_poser_02.py
        │   ├── modes/
        │   │   ├── __init__.py
        │   │   ├── mode_07.py
        │   │   ├── mode_12.py
        │   │   ├── mode_14.py
        │   │   └── pose_parameters.py
        │   └── poser.py
        ├── pytasuku/
        │   ├── __init__.py
        │   ├── indexed/
        │   │   ├── __init__.py
        │   │   ├── all_tasks.py
        │   │   ├── bundled_indexed_file_tasks.py
        │   │   ├── indexed_file_tasks.py
        │   │   ├── indexed_tasks.py
        │   │   ├── no_index_command_tasks.py
        │   │   ├── no_index_file_tasks.py
        │   │   ├── one_index_file_tasks.py
        │   │   ├── simple_no_index_file_tasks.py
        │   │   ├── two_indices_file_tasks.py
        │   │   └── util.py
        │   ├── task.py
        │   ├── task_selector_ui.py
        │   ├── util.py
        │   └── workspace.py
        ├── sampleoutput/
        │   ├── __init__.py
        │   ├── general_sample_output_protocol.py
        │   ├── poser_sampler_output_protocol.py
        │   └── sample_image_creator.py
        └── shion/
            ├── __init__.py
            ├── base/
            │   ├── __init__.py
            │   ├── dataset/
            │   │   ├── __init__.py
            │   │   ├── lazy_dataset.py
            │   │   ├── lazy_tensor_dataset.py
            │   │   ├── png_in_dir_dataset.py
            │   │   ├── util.py
            │   │   └── xformed_dataset.py
            │   ├── image_util.py
            │   ├── loss/
            │   │   ├── __init__.py
            │   │   ├── computed_scale_loss.py
            │   │   ├── computed_scaled_l2_loss.py
            │   │   ├── l1_loss.py
            │   │   ├── l2_loss.py
            │   │   ├── sum_loss.py
            │   │   └── time_dependently_weighted_loss.py
            │   ├── module_accumulators.py
            │   ├── optimizer_factories.py
            │   ├── protocol/
            │   │   └── single_network_from_batch_input_computation_protocol.py
            │   └── training/
            │       ├── __init__.py
            │       ├── single_network.py
            │       ├── single_network_with_minibatch.py
            │       └── two_networks_training_protocol.py
            ├── core/
            │   ├── __init__.py
            │   ├── cached_computation.py
            │   ├── load_save.py
            │   ├── loss.py
            │   ├── module_accumulator.py
            │   ├── module_factory.py
            │   ├── optimizer_factory.py
            │   └── training/
            │       ├── __init__.py
            │       ├── distrib/
            │       │   ├── __init__.py
            │       │   ├── device_mapper.py
            │       │   ├── distributed_trainer.py
            │       │   ├── distributed_training_states.py
            │       │   └── distributed_training_tasks.py
            │       ├── sample_output_protocol.py
            │       ├── single/
            │       │   ├── __init__.py
            │       │   ├── training_states.py
            │       │   └── training_tasks.py
            │       ├── swarm/
            │       │   ├── __init__.py
            │       │   ├── swarm_training_tasks.py
            │       │   └── swarm_unit_trainer.py
            │       ├── training_protocol.py
            │       ├── util.py
            │       └── validation_protocol.py
            └── nn00/
                ├── __init__.py
                ├── block_args.py
                ├── conv.py
                ├── initialization_funcs.py
                ├── linear_module_args.py
                ├── nonlinearity_factories.py
                ├── normalization_layer_factories.py
                ├── normalization_layer_factory.py
                ├── pass_through.py
                └── resnet_block.py
Download .txt
SYMBOL INDEX (1259 symbols across 117 files)

FILE: src/tha4/app/character_model_ifacialmocap_puppeteer.py
  class FpsStatistics (line 28) | class FpsStatistics:
    method __init__ (line 29) | def __init__(self):
    method add_fps (line 33) | def add_fps(self, fps):
    method get_average_fps (line 38) | def get_average_fps(self):
  class MainFrame (line 45) | class MainFrame(wx.Frame):
    method __init__ (line 48) | def __init__(self, pose_converter: IFacialMocapPoseConverter, device: ...
    method create_receiving_socket (line 71) | def create_receiving_socket(self):
    method create_timers (line 76) | def create_timers(self):
    method on_close (line 82) | def on_close(self, event: wx.Event):
    method on_start_capture (line 94) | def on_start_capture(self, event: wx.Event):
    method read_ifacialmocap_pose (line 109) | def read_ifacialmocap_pose(self):
    method on_erase_background (line 123) | def on_erase_background(self, event: wx.Event):
    method create_animation_panel (line 126) | def create_animation_panel(self, parent):
    method create_ui (line 198) | def create_ui(self):
    method create_connection_panel (line 216) | def create_connection_panel(self, parent):
    method create_capture_panel (line 232) | def create_capture_panel(self, parent):
    method create_rotation_column (line 249) | def create_rotation_column(self, parent, rotation_names):
    method paint_capture_panel (line 272) | def paint_capture_panel(self, event: wx.Event):
    method update_capture_panel (line 275) | def update_capture_panel(self, event: wx.Event):
    method convert_to_100 (line 282) | def convert_to_100(x):
    method paint_source_image_panel (line 285) | def paint_source_image_panel(self, event: wx.Event):
    method update_source_image_bitmap (line 288) | def update_source_image_bitmap(self):
    method draw_nothing_yet_string (line 298) | def draw_nothing_yet_string(self, dc):
    method paint_result_image_panel (line 305) | def paint_result_image_panel(self, event: wx.Event):
    method update_result_image_bitmap (line 308) | def update_result_image_bitmap(self, event: Optional[wx.Event] = None):
    method blend_with_background (line 377) | def blend_with_background(self, numpy_image, background):
    method load_model (line 383) | def load_model(self, event: wx.Event):

FILE: src/tha4/app/character_model_manual_poser.py
  class MorphCategoryControlPanel (line 20) | class MorphCategoryControlPanel(wx.Panel):
    method __init__ (line 21) | def __init__(self,
    method update_ui (line 56) | def update_ui(self):
    method on_choice_updated (line 71) | def on_choice_updated(self, event: wx.Event):
    method set_param_value (line 77) | def set_param_value(self, pose: List[float]):
  class SimpleParamGroupsControlPanel (line 96) | class SimpleParamGroupsControlPanel(wx.Panel):
    method __init__ (line 97) | def __init__(self, parent,
    method set_param_value (line 125) | def set_param_value(self, pose: List[float]):
  class MainFrame (line 137) | class MainFrame(wx.Frame):
    method __init__ (line 142) | def __init__(self, device: torch.device):
    method init_left_panel (line 178) | def init_left_panel(self):
    method on_erase_background (line 198) | def on_erase_background(self, event: wx.Event):
    method init_control_panel (line 201) | def init_control_panel(self):
    method init_right_panel (line 253) | def init_right_panel(self):
    method create_param_category_choice (line 278) | def create_param_category_choice(self, param_category: PoseParameterCa...
    method load_model (line 288) | def load_model(self, event: wx.Event):
    method paint_source_image_panel (line 312) | def paint_source_image_panel(self, event: wx.Event):
    method paint_result_image_panel (line 315) | def paint_result_image_panel(self, event: wx.Event):
    method draw_nothing_yet_string_to_bitmap (line 318) | def draw_nothing_yet_string_to_bitmap(self, bitmap):
    method get_current_pose (line 330) | def get_current_pose(self):
    method update_images (line 338) | def update_images(self, event: wx.Event):
    method on_save_image (line 400) | def on_save_image(self, event: wx.Event):
    method save_last_numpy_image (line 425) | def save_last_numpy_image(self, image_file_name):

FILE: src/tha4/app/character_model_mediapipe_puppeteer.py
  class FpsStatistics (line 25) | class FpsStatistics:
    method __init__ (line 26) | def __init__(self):
    method add_fps (line 30) | def add_fps(self, fps):
    method get_average_fps (line 35) | def get_average_fps(self):
  class MainFrame (line 42) | class MainFrame(wx.Frame):
    method __init__ (line 45) | def __init__(self,
    method create_timers (line 75) | def create_timers(self):
    method on_close (line 81) | def on_close(self, event: wx.Event):
    method on_erase_background (line 90) | def on_erase_background(self, event: wx.Event):
    method create_animation_panel (line 93) | def create_animation_panel(self, parent):
    method create_ui (line 168) | def create_ui(self):
    method create_capture_panel (line 183) | def create_capture_panel(self, parent):
    method paint_webcam_capture_panel (line 199) | def paint_webcam_capture_panel(self, event: wx.Event):
    method create_rotation_column (line 202) | def create_rotation_column(self, parent, rotation_names):
    method update_capture_panel (line 225) | def update_capture_panel(self, event: wx.Event):
    method update_mediapipe_face_pose (line 252) | def update_mediapipe_face_pose(self, detection_result):
    method convert_to_100 (line 274) | def convert_to_100(x):
    method paint_source_image_panel (line 277) | def paint_source_image_panel(self, event: wx.Event):
    method update_source_image_bitmap (line 280) | def update_source_image_bitmap(self):
    method draw_nothing_yet_string (line 290) | def draw_nothing_yet_string(self, dc):
    method paint_result_image_panel (line 297) | def paint_result_image_panel(self, event: wx.Event):
    method update_result_image_bitmap (line 300) | def update_result_image_bitmap(self, event: Optional[wx.Event] = None):
    method blend_with_background (line 375) | def blend_with_background(self, numpy_image, background):
    method load_model (line 381) | def load_model(self, event: wx.Event):

FILE: src/tha4/app/distill.py
  function run_config (line 8) | def run_config(config_file_name: str):

FILE: src/tha4/app/full_manual_poser.py
  class MorphCategoryControlPanel (line 21) | class MorphCategoryControlPanel(wx.Panel):
    method __init__ (line 22) | def __init__(self,
    method update_ui (line 57) | def update_ui(self):
    method on_choice_updated (line 72) | def on_choice_updated(self, event: wx.Event):
    method set_param_value (line 78) | def set_param_value(self, pose: List[float]):
  class SimpleParamGroupsControlPanel (line 97) | class SimpleParamGroupsControlPanel(wx.Panel):
    method __init__ (line 98) | def __init__(self, parent,
    method set_param_value (line 126) | def set_param_value(self, pose: List[float]):
  function convert_output_image_from_torch_to_numpy (line 138) | def convert_output_image_from_torch_to_numpy(output_image):
  class MainFrame (line 158) | class MainFrame(wx.Frame):
    method __init__ (line 159) | def __init__(self, poser: Poser, device: torch.device):
    method init_left_panel (line 197) | def init_left_panel(self):
    method on_erase_background (line 217) | def on_erase_background(self, event: wx.Event):
    method init_control_panel (line 220) | def init_control_panel(self):
    method init_right_panel (line 273) | def init_right_panel(self):
    method create_param_category_choice (line 298) | def create_param_category_choice(self, param_category: PoseParameterCa...
    method load_image (line 308) | def load_image(self, event: wx.Event):
    method paint_source_image_panel (line 334) | def paint_source_image_panel(self, event: wx.Event):
    method paint_result_image_panel (line 337) | def paint_result_image_panel(self, event: wx.Event):
    method draw_nothing_yet_string_to_bitmap (line 340) | def draw_nothing_yet_string_to_bitmap(self, bitmap):
    method get_current_pose (line 352) | def get_current_pose(self):
    method update_images (line 360) | def update_images(self, event: wx.Event):
    method on_save_image (line 422) | def on_save_image(self, event: wx.Event):
    method save_last_numpy_image (line 447) | def save_last_numpy_image(self, image_file_name):

FILE: src/tha4/charmodel/character_model.py
  class CharacterModel (line 12) | class CharacterModel:
    method __init__ (line 13) | def __init__(self,
    method get_poser (line 23) | def get_poser(self, device: torch.device):
    method get_character_image (line 35) | def get_character_image(self, device: torch.device):
    method save (line 44) | def save(self, file_name: str):
    method load (line 60) | def load(file_name: str):

FILE: src/tha4/dataset/image_poses_and_aother_images_dataset.py
  class ImagePosesAndOtherImagesDataset (line 7) | class ImagePosesAndOtherImagesDataset(Dataset):
    method __init__ (line 8) | def __init__(self,
    method get_main_image (line 18) | def get_main_image(self):
    method get_other_image (line 23) | def get_other_image(self, image_index: int):
    method __len__ (line 28) | def __len__(self):
    method __getitem__ (line 31) | def __getitem__(self, index):

FILE: src/tha4/distiller/config_based_training_tasks.py
  function get_torchrun_executable (line 11) | def get_torchrun_executable():
  class RdzvConfig (line 15) | class RdzvConfig:
    method __init__ (line 16) | def __init__(self, id: int, port: int):
  function run_standalone_config_based_training_script (line 21) | def run_standalone_config_based_training_script(
  function define_standalone_config_based_training_tasks (line 44) | def define_standalone_config_based_training_tasks(

FILE: src/tha4/distiller/distiller_config.py
  function copy_file (line 19) | def copy_file(source_file_name: str, dest_file_name):
  class DistillerConfig (line 25) | class DistillerConfig:
    method check (line 43) | def check(self):
    method check_prefix (line 66) | def check_prefix(prefix):
    method check_character_image_file_name (line 71) | def check_character_image_file_name(file_name):
    method check_face_mask_image_file_name (line 83) | def check_face_mask_image_file_name(file_name):
    method check_batch_size (line 101) | def check_batch_size(value, field_name: str):
    method check_num_cpu_workers (line 107) | def check_num_cpu_workers(value):
    method check_num_gpus (line 111) | def check_num_gpus(value):
    method check_random_seed (line 115) | def check_random_seed(value, field_name: str):
    method check_num_training_examples_per_sample_output (line 120) | def check_num_training_examples_per_sample_output(value, field_name):
    method save (line 124) | def save(self, file_name: str):
    method config_yaml_file_name (line 130) | def config_yaml_file_name(self):
    method create_config_yaml_file (line 133) | def create_config_yaml_file(self):
    method load (line 139) | def load(file_name: str) -> 'DistillerConfig':
    method face_morpher_prefix (line 145) | def face_morpher_prefix(self):
    method get_face_morpher_trainer (line 148) | def get_face_morpher_trainer(self, world_size: Optional[int] = None, b...
    method body_morpher_prefix (line 162) | def body_morpher_prefix(self):
    method get_body_morpher_trainer (line 165) | def get_body_morpher_trainer(self, world_size: Optional[int] = None, b...
    method character_model_prefix (line 235) | def character_model_prefix(self):
    method character_model_face_morpher_file_name (line 238) | def character_model_face_morpher_file_name(self):
    method character_model_body_morpher_file_name (line 241) | def character_model_body_morpher_file_name(self):
    method character_model_character_png_file_name (line 244) | def character_model_character_png_file_name(self):
    method character_model_yaml_file_name (line 247) | def character_model_yaml_file_name(self):
    method define_tasks (line 250) | def define_tasks(self, workspace: Workspace):

FILE: src/tha4/distiller/ui/distiller_config_state.py
  class DistillerConfigState (line 9) | class DistillerConfigState:
    method __init__ (line 10) | def __init__(self):
    method load (line 15) | def load(self, file_name):
    method need_to_check_overwrite (line 23) | def need_to_check_overwrite(self):
    method save (line 32) | def save(self):
    method updating_value (line 38) | def updating_value(self, value_func: Callable[[], Any]):
    method set_prefix (line 45) | def set_prefix(self, new_value):
    method set_character_image_file_name (line 53) | def set_character_image_file_name(self, new_value):
    method set_face_mask_image_file_name (line 61) | def set_face_mask_image_file_name(self, new_value):
    method set_num_cpu_workers (line 69) | def set_num_cpu_workers(self, new_value: int):
    method set_num_gpus (line 74) | def set_num_gpus(self, new_value: int):
    method set_face_morpher_random_seed_0 (line 79) | def set_face_morpher_random_seed_0(self, new_value: int):
    method set_face_morpher_random_seed_1 (line 84) | def set_face_morpher_random_seed_1(self, new_value: int):
    method set_face_morpher_num_training_examples_per_sample_output (line 89) | def set_face_morpher_num_training_examples_per_sample_output(self, new...
    method set_face_morpher_batch_size (line 95) | def set_face_morpher_batch_size(self, new_value: int):
    method set_body_morpher_random_seed_0 (line 100) | def set_body_morpher_random_seed_0(self, new_value: int):
    method set_body_morpher_random_seed_1 (line 105) | def set_body_morpher_random_seed_1(self, new_value: int):
    method set_body_morpher_num_training_examples_per_sample_output (line 110) | def set_body_morpher_num_training_examples_per_sample_output(self, new...
    method set_body_morpher_batch_size (line 116) | def set_body_morpher_batch_size(self, new_value: int):
    method get_relative_path_to_cwd (line 121) | def get_relative_path_to_cwd(self, file_name: str, message: str):
    method can_show_character_image (line 130) | def can_show_character_image(self):
    method can_show_face_mask_image (line 133) | def can_show_face_mask_image(self):
    method can_show_mask_on_face_image (line 136) | def can_show_mask_on_face_image(self):
    method can_save (line 139) | def can_save(self):

FILE: src/tha4/distiller/ui/distiller_ui_main_frame.py
  function wx_bind_event (line 16) | def wx_bind_event(widget, evt):
  class DistillerUiMainFrame (line 24) | class DistillerUiMainFrame(wx.Frame):
    method __init__ (line 29) | def __init__(self):
    method init_ui (line 42) | def init_ui(self):
    method init_menus (line 59) | def init_menus(self):
    method init_bitmaps (line 91) | def init_bitmaps(self):
    method create_panel (line 102) | def create_panel(self, parent, sizer, *args, **kwargs):
    method init_left_panel (line 112) | def init_left_panel(self, parent):
    method on_erase_background (line 137) | def on_erase_background(self, event):
    method on_face_image_panel_paint (line 140) | def on_face_image_panel_paint(self, event):
    method on_face_mask_image_panel_paint (line 143) | def on_face_mask_image_panel_paint(self, event):
    method on_mask_on_face_image_panel_paint (line 146) | def on_mask_on_face_image_panel_paint(self, event):
    method init_middle_panel (line 149) | def init_middle_panel(self, parent):
    method init_prefix_panel (line 171) | def init_prefix_panel(self, parent):
    method on_prefix_change_button (line 192) | def on_prefix_change_button(self, event):
    method init_character_image_file_name_panel (line 204) | def init_character_image_file_name_panel(self, parent):
    method on_character_image_change_button (line 224) | def on_character_image_change_button(self, event):
    method update_face_image_bitmap (line 237) | def update_face_image_bitmap(self, new_file_name: str):
    method init_face_mask_image_file_name_panel (line 244) | def init_face_mask_image_file_name_panel(self, parent):
    method on_face_mask_image_change_button (line 264) | def on_face_mask_image_change_button(self, event):
    method update_face_mask_image_bitmap (line 277) | def update_face_mask_image_bitmap(self, new_file_name):
    method update_mask_on_face_image_bitmap (line 285) | def update_mask_on_face_image_bitmap(self):
    method init_num_cpu_workers_panel (line 300) | def init_num_cpu_workers_panel(self, parent):
    method init_num_gpus_panel (line 320) | def init_num_gpus_panel(self, parent):
    method init_face_morpher_random_seed_0_panel (line 340) | def init_face_morpher_random_seed_0_panel(self, parent):
    method init_face_morpher_random_seed_1_panel (line 372) | def init_face_morpher_random_seed_1_panel(self, parent):
    method init_face_morpher_batch_size_panel (line 404) | def init_face_morpher_batch_size_panel(self, parent):
    method init_body_morpher_random_seed_0_panel (line 422) | def init_body_morpher_random_seed_0_panel(self, parent):
    method init_body_morpher_random_seed_1_panel (line 454) | def init_body_morpher_random_seed_1_panel(self, parent):
    method init_body_morpher_batch_size_panel (line 486) | def init_body_morpher_batch_size_panel(self, parent):
    method init_num_training_examples_per_sample_output_panel (line 504) | def init_num_training_examples_per_sample_output_panel(self, parent):
    method on_close (line 533) | def on_close(self, event):
    method create_help_button_func (line 546) | def create_help_button_func(self, html_file_name: str):
    method create_param_name_panel_with_help_button (line 559) | def create_param_name_panel_with_help_button(
    method create_vertically_centered_text_panel (line 571) | def create_vertically_centered_text_panel(self, parent, text: str, min...
    method init_right_panel (line 583) | def init_right_panel(self, parent):
    method populate_distiller_config (line 601) | def populate_distiller_config(self):
    method update_ui (line 626) | def update_ui(self):
    method draw_nothing_yet_string_to_bitmap (line 660) | def draw_nothing_yet_string_to_bitmap(self, bitmap, width: int, height...
    method try_saving (line 672) | def try_saving(self):
    method on_save (line 701) | def on_save(self, event):
    method on_new (line 704) | def on_new(self, event):
    method on_open (line 717) | def on_open(self, event):
    method on_run (line 743) | def on_run(self, event):

FILE: src/tha4/image_util.py
  function grid_change_to_numpy_image (line 11) | def grid_change_to_numpy_image(torch_image, num_channels=3):
  function resize_PIL_image (line 29) | def resize_PIL_image(pil_image, size=(256, 256)):
  function convert_output_image_from_torch_to_numpy (line 36) | def convert_output_image_from_torch_to_numpy(output_image):
  function convert_linear_to_srgb (line 56) | def convert_linear_to_srgb(image: torch.Tensor) -> torch.Tensor:

FILE: src/tha4/mocap/ifacialmocap_pose.py
  function create_default_ifacialmocap_pose (line 6) | def create_default_ifacialmocap_pose():

FILE: src/tha4/mocap/ifacialmocap_pose_converter.py
  class IFacialMocapPoseConverter (line 5) | class IFacialMocapPoseConverter(ABC):
    method convert (line 7) | def convert(self, ifacialmocap_pose: Dict[str, float]) -> List[float]:
    method init_pose_converter_panel (line 11) | def init_pose_converter_panel(self, parent):

FILE: src/tha4/mocap/ifacialmocap_pose_converter_25.py
  class EyebrowDownMode (line 20) | class EyebrowDownMode(Enum):
  class WinkMode (line 27) | class WinkMode(Enum):
  function rad_to_deg (line 32) | def rad_to_deg(rad):
  function deg_to_rad (line 36) | def deg_to_rad(deg):
  function clamp (line 40) | def clamp(x, min_value, max_value):
  class IFacialMocapPoseConverter25Args (line 44) | class IFacialMocapPoseConverter25Args:
    method __init__ (line 45) | def __init__(self,
    method set_smile_threshold_min (line 89) | def set_smile_threshold_min(self, new_value: float):
    method set_smile_threshold_max (line 92) | def set_smile_threshold_max(self, new_value: float):
    method set_eye_surprised_max (line 95) | def set_eye_surprised_max(self, new_value: float):
    method set_eye_blink_max (line 98) | def set_eye_blink_max(self, new_value: float):
    method set_eyebrow_down_max (line 101) | def set_eyebrow_down_max(self, new_value: float):
    method set_cheek_squint_min (line 104) | def set_cheek_squint_min(self, new_value: float):
    method set_cheek_squint_max (line 107) | def set_cheek_squint_max(self, new_value: float):
    method set_jaw_open_min (line 110) | def set_jaw_open_min(self, new_value: float):
    method set_jaw_open_max (line 113) | def set_jaw_open_max(self, new_value: float):
    method set_mouth_frown_max (line 116) | def set_mouth_frown_max(self, new_value: float):
    method set_mouth_funnel_min (line 119) | def set_mouth_funnel_min(self, new_value: float):
    method set_mouth_funnel_max (line 122) | def set_mouth_funnel_max(self, new_value: float):
  class IFacialMocapPoseConverter25 (line 126) | class IFacialMocapPoseConverter25(IFacialMocapPoseConverter):
    method __init__ (line 127) | def __init__(self, args: Optional[IFacialMocapPoseConverter25Args] = N...
    method init_pose_converter_panel (line 188) | def init_pose_converter_panel(self, parent):
    method create_spin_control (line 324) | def create_spin_control(self, parent, label: str, initial_value: float...
    method restart_breathing_cycle_clicked (line 347) | def restart_breathing_cycle_clicked(self, event: wx.Event):
    method change_eyebrow_down_mode (line 350) | def change_eyebrow_down_mode(self, event: wx.Event):
    method change_wink_mode (line 361) | def change_wink_mode(self, event: wx.Event):
    method change_iris_size (line 368) | def change_iris_size(self, event: wx.Event):
    method link_left_right_irises_clicked (line 380) | def link_left_right_irises_clicked(self, event: wx.Event):
    method decompose_head_body_param (line 387) | def decompose_head_body_param(self, param, threshold=2.0 / 3):
    method convert (line 397) | def convert(self, ifacialmocap_pose: Dict[str, float]) -> List[float]:
  function create_ifacialmocap_pose_converter (line 612) | def create_ifacialmocap_pose_converter(

FILE: src/tha4/mocap/ifacialmocap_v2.py
  function parse_ifacialmocap_v2_pose (line 11) | def parse_ifacialmocap_v2_pose(ifacialmocap_output):
  function parse_ifacialmocap_v1_pose (line 51) | def parse_ifacialmocap_v1_pose(ifacialmocap_output):

FILE: src/tha4/mocap/mediapipe_face_pose.py
  class MediaPipeFacePose (line 8) | class MediaPipeFacePose:
    method __init__ (line 12) | def __init__(self, blendshape_params: Optional[Dict[str, float]], xfor...
    method get_json (line 23) | def get_json(self):
    method save (line 29) | def save(self, file_name: str):
    method load (line 35) | def load(file_name: str):

FILE: src/tha4/mocap/mediapipe_face_pose_converter.py
  class MediaPipeFacePoseConverter (line 7) | class MediaPipeFacePoseConverter(ABC):
    method convert (line 9) | def convert(self, mediapipe_face_pose: MediaPipeFacePose) -> List[float]:
    method init_pose_converter_panel (line 13) | def init_pose_converter_panel(

FILE: src/tha4/mocap/mediapipe_face_pose_converter_00.py
  class EyebrowDownMode (line 22) | class EyebrowDownMode(Enum):
  class WinkMode (line 29) | class WinkMode(Enum):
  function rad_to_deg (line 34) | def rad_to_deg(rad):
  function deg_to_rad (line 38) | def deg_to_rad(deg):
  function clamp (line 42) | def clamp(x, min_value, max_value):
  class MediaPipeFacePoseConverter00Args (line 46) | class MediaPipeFacePoseConverter00Args:
    method __init__ (line 47) | def __init__(self,
    method set_smile_threshold_min (line 99) | def set_smile_threshold_min(self, new_value: float):
    method set_smile_threshold_max (line 102) | def set_smile_threshold_max(self, new_value: float):
    method set_eye_surprised_max (line 105) | def set_eye_surprised_max(self, new_value: float):
    method set_eye_blink_max (line 108) | def set_eye_blink_max(self, new_value: float):
    method set_eyebrow_down_max (line 111) | def set_eyebrow_down_max(self, new_value: float):
    method set_cheek_squint_min (line 114) | def set_cheek_squint_min(self, new_value: float):
    method set_cheek_squint_max (line 117) | def set_cheek_squint_max(self, new_value: float):
    method set_jaw_open_min (line 120) | def set_jaw_open_min(self, new_value: float):
    method set_jaw_open_max (line 123) | def set_jaw_open_max(self, new_value: float):
    method set_mouth_frown_max (line 126) | def set_mouth_frown_max(self, new_value: float):
    method set_mouth_funnel_min (line 129) | def set_mouth_funnel_min(self, new_value: float):
    method set_mouth_funnel_max (line 132) | def set_mouth_funnel_max(self, new_value: float):
  class MediaPoseFacePoseConverter00 (line 136) | class MediaPoseFacePoseConverter00(MediaPipeFacePoseConverter):
    method __init__ (line 137) | def __init__(self, args: Optional[MediaPipeFacePoseConverter00Args] = ...
    method init_pose_converter_panel (line 199) | def init_pose_converter_panel(
    method create_spin_control (line 352) | def create_spin_control(self, parent, label: str, initial_value: float...
    method extract_euler_angles (line 375) | def extract_euler_angles(self, mediapipe_face_pose: MediaPipeFacePose):
    method calibrate_face_orientation_clicked (line 380) | def calibrate_face_orientation_clicked(self, event: wx.Event):
    method restart_breathing_cycle_clicked (line 393) | def restart_breathing_cycle_clicked(self, event: wx.Event):
    method change_eyebrow_down_mode (line 396) | def change_eyebrow_down_mode(self, event: wx.Event):
    method change_wink_mode (line 407) | def change_wink_mode(self, event: wx.Event):
    method change_iris_size (line 414) | def change_iris_size(self, event: wx.Event):
    method link_left_right_irises_clicked (line 426) | def link_left_right_irises_clicked(self, event: wx.Event):
    method decompose_head_body_param (line 433) | def decompose_head_body_param(self, param, threshold=2.0 / 3):
    method convert (line 443) | def convert(self, mediapipe_face_pose: MediaPipeFacePose) -> List[float]:

FILE: src/tha4/nn/common/conv_block_factory.py
  class ConvBlockFactory (line 12) | class ConvBlockFactory:
    method __init__ (line 13) | def __init__(self,
    method create_conv3 (line 19) | def create_conv3(self,
    method create_conv7_block (line 33) | def create_conv7_block(self, in_channels: int, out_channels: int):
    method create_conv3_block (line 39) | def create_conv3_block(self, in_channels: int, out_channels: int):
    method create_downsample_block (line 45) | def create_downsample_block(self, in_channels: int, out_channels: int,...
    method create_resnet_block (line 51) | def create_resnet_block(self, num_channels: int, is_1x1: bool):

FILE: src/tha4/nn/common/poser_args.py
  class PoserArgs00 (line 11) | class PoserArgs00:
    method __init__ (line 12) | def __init__(self,
    method create_alpha_block (line 31) | def create_alpha_block(self):
    method create_all_channel_alpha_block (line 42) | def create_all_channel_alpha_block(self):
    method create_color_change_block (line 53) | def create_color_change_block(self):
    method create_grid_change_block (line 62) | def create_grid_change_block(self):

FILE: src/tha4/nn/common/poser_encoder_decoder_00.py
  class PoserEncoderDecoder00Args (line 17) | class PoserEncoderDecoder00Args(PoserArgs00):
    method __init__ (line 18) | def __init__(self,
  class PoserEncoderDecoder00 (line 43) | class PoserEncoderDecoder00(Module):
    method __init__ (line 44) | def __init__(self, args: PoserEncoderDecoder00Args):
    method get_num_output_channels_from_level (line 93) | def get_num_output_channels_from_level(self, level: int):
    method get_num_output_channels_from_image_size (line 96) | def get_num_output_channels_from_image_size(self, image_size: int):
    method forward (line 99) | def forward(self, image: Tensor, pose: Optional[Tensor] = None) -> Lis...

FILE: src/tha4/nn/common/poser_encoder_decoder_00_separable.py
  class PoserEncoderDecoder00Separable (line 14) | class PoserEncoderDecoder00Separable(Module):
    method __init__ (line 15) | def __init__(self, args: PoserEncoderDecoder00Args):
    method get_num_output_channels_from_level (line 64) | def get_num_output_channels_from_level(self, level: int):
    method get_num_output_channels_from_image_size (line 67) | def get_num_output_channels_from_image_size(self, image_size: int):
    method forward (line 70) | def forward(self, image: Tensor, pose: Optional[Tensor] = None) -> Lis...

FILE: src/tha4/nn/common/resize_conv_encoder_decoder.py
  class ResizeConvEncoderDecoderArgs (line 14) | class ResizeConvEncoderDecoderArgs:
    method __init__ (line 15) | def __init__(self,
  class ResizeConvEncoderDecoder (line 36) | class ResizeConvEncoderDecoder(Module):
    method __init__ (line 37) | def __init__(self, args: ResizeConvEncoderDecoderArgs):
    method get_num_output_channels_from_level (line 84) | def get_num_output_channels_from_level(self, level: int):
    method get_num_output_channels_from_image_size (line 87) | def get_num_output_channels_from_image_size(self, image_size: int):
    method forward (line 90) | def forward(self, feature: Tensor) -> List[Tensor]:

FILE: src/tha4/nn/common/resize_conv_unet.py
  class ResizeConvUNetArgs (line 13) | class ResizeConvUNetArgs:
    method __init__ (line 14) | def __init__(self,
  class ResizeConvUNet (line 40) | class ResizeConvUNet(Module):
    method __init__ (line 41) | def __init__(self, args: ResizeConvUNetArgs):
    method forward (line 91) | def forward(self, feature: Tensor) -> List[Tensor]:

FILE: src/tha4/nn/common/unet.py
  class Identity (line 13) | class Identity(Module):
    method __init__ (line 14) | def __init__(self):
    method forward (line 17) | def forward(self, x):
  class IdentityFactory (line 21) | class IdentityFactory(ModuleFactory):
    method create (line 22) | def create(self) -> Module:
  function init_to_zero (line 26) | def init_to_zero(module: Module):
  class Upsample (line 33) | class Upsample(Module):
    method __init__ (line 34) | def __init__(self, in_channels: int, out_channels: Optional[int] = Non...
    method forward (line 44) | def forward(self, x):
  class Downsample (line 49) | class Downsample(Module):
    method __init__ (line 50) | def __init__(self, in_channels: int, out_channels: Optional[int] = Non...
    method forward (line 60) | def forward(self, x):
  function GroupNorm32 (line 65) | def GroupNorm32(channels):
  class SamplingMode (line 69) | class SamplingMode(Enum):
  class ResBlockArgs (line 75) | class ResBlockArgs:
    method __init__ (line 76) | def __init__(self,
  function apply_scaleshift (line 90) | def apply_scaleshift(x: Tensor, scaleshift: Tensor, condition_bias: floa...
  class ResBlock (line 100) | class ResBlock(Module):
    method __init__ (line 101) | def __init__(self,
    method forward (line 154) | def forward(self, x: Tensor, cond0: Optional[Tensor] = None, cond1: Op...
  class AttentionBlockArgs (line 168) | class AttentionBlockArgs:
    method __init__ (line 169) | def __init__(self,
  function qkv_attention_legacy (line 178) | def qkv_attention_legacy(qkv: torch.Tensor, num_heads: int):
  function qkv_attention (line 192) | def qkv_attention(qkv: torch.Tensor, num_heads: int):
  class AttentionBlock (line 205) | class AttentionBlock(Module):
    method __init__ (line 206) | def __init__(self,
    method forward (line 230) | def forward(self, x: torch.Tensor):
  class Arity3To1 (line 242) | class Arity3To1(Module):
    method __init__ (line 243) | def __init__(self, module: Module):
    method forward (line 247) | def forward(self, x: Tensor, y: Optional[Tensor] = None, z: Optional[T...
  class DownsamplingBlock (line 251) | class DownsamplingBlock(Module):
    method __init__ (line 252) | def __init__(self,
    method forward (line 296) | def forward(self, h: Tensor, cond0: Optional[Tensor] = None, cond1: Op...
  class UpsamplingBlock (line 308) | class UpsamplingBlock(Module):
    method __init__ (line 309) | def __init__(self,
    method forward (line 351) | def forward(self,
  function compute_timestep_embedding (line 365) | def compute_timestep_embedding(t: Tensor, out_channels: int):
  class TimeEmbedding (line 379) | class TimeEmbedding(Module):
    method __init__ (line 380) | def __init__(self, out_channels: int):
    method forward (line 384) | def forward(self, t: Tensor):
  class UnetArgs (line 388) | class UnetArgs:
    method __init__ (line 389) | def __init__(self,
  class Unet (line 438) | class Unet(Module):
    method __init__ (line 439) | def __init__(self, args: UnetArgs):
    method forward (line 531) | def forward(self, x: Tensor, t: Tensor, cond: Tensor):
  class UnetWithFirstConvAddition (line 549) | class UnetWithFirstConvAddition(Module):
    method __init__ (line 550) | def __init__(self, args: UnetArgs):
    method forward (line 642) | def forward(self, x: Tensor, t: Tensor, cond: Tensor, first_conv_addit...

FILE: src/tha4/nn/conv.py
  function create_conv7 (line 11) | def create_conv7(in_channels: int, out_channels: int,
  function create_conv7_from_block_args (line 21) | def create_conv7_from_block_args(in_channels: int,
  function create_conv3 (line 33) | def create_conv3(in_channels: int,
  function create_conv3_from_block_args (line 44) | def create_conv3_from_block_args(in_channels: int, out_channels: int,
  function create_conv1 (line 54) | def create_conv1(in_channels: int, out_channels: int,
  function create_conv1_from_block_args (line 64) | def create_conv1_from_block_args(in_channels: int,
  function create_conv7_block (line 78) | def create_conv7_block(in_channels: int, out_channels: int,
  function create_conv7_block_from_block_args (line 91) | def create_conv7_block_from_block_args(
  function create_conv3_block (line 103) | def create_conv3_block(in_channels: int, out_channels: int,
  function create_conv3_block_from_block_args (line 116) | def create_conv3_block_from_block_args(
  function create_downsample_block (line 127) | def create_downsample_block(in_channels: int, out_channels: int,
  function create_downsample_block_from_block_args (line 150) | def create_downsample_block_from_block_args(in_channels: int, out_channe...
  function create_upsample_block (line 164) | def create_upsample_block(in_channels: int,
  function create_upsample_block_from_block_args (line 180) | def create_upsample_block_from_block_args(in_channels: int,

FILE: src/tha4/nn/eyebrow_decomposer/eyebrow_decomposer_00.py
  class EyebrowDecomposer00Args (line 15) | class EyebrowDecomposer00Args(PoserEncoderDecoder00Args):
    method __init__ (line 16) | def __init__(self,
  class EyebrowDecomposer00 (line 36) | class EyebrowDecomposer00(Module):
    method __init__ (line 37) | def __init__(self, args: EyebrowDecomposer00Args):
    method forward (line 46) | def forward(self, image: Tensor, *args) -> List[Tensor]:
  class EyebrowDecomposer00Factory (line 75) | class EyebrowDecomposer00Factory(ModuleFactory):
    method __init__ (line 76) | def __init__(self, args: EyebrowDecomposer00Args):
    method create (line 80) | def create(self) -> Module:

FILE: src/tha4/nn/eyebrow_morphing_combiner/eyebrow_morphing_combiner_00.py
  class EyebrowMorphingCombiner00Args (line 15) | class EyebrowMorphingCombiner00Args(PoserEncoderDecoder00Args):
    method __init__ (line 16) | def __init__(self,
  class EyebrowMorphingCombiner00 (line 37) | class EyebrowMorphingCombiner00(Module):
    method __init__ (line 38) | def __init__(self, args: EyebrowMorphingCombiner00Args):
    method forward (line 47) | def forward(self, background_layer: Tensor, eyebrow_layer: Tensor, pos...
  class EyebrowMorphingCombiner00Factory (line 85) | class EyebrowMorphingCombiner00Factory(ModuleFactory):
    method __init__ (line 86) | def __init__(self, args: EyebrowMorphingCombiner00Args):
    method create (line 90) | def create(self) -> Module:

FILE: src/tha4/nn/face_morpher/face_morpher_08.py
  class FaceMorpher08Args (line 19) | class FaceMorpher08Args:
    method __init__ (line 20) | def __init__(self,
  class FaceMorpher08 (line 48) | class FaceMorpher08(Module):
    method __init__ (line 49) | def __init__(self, args: FaceMorpher08Args):
    method create_alpha_block (line 104) | def create_alpha_block(self):
    method create_color_change_block (line 114) | def create_color_change_block(self):
    method create_grid_change_block (line 123) | def create_grid_change_block(self):
    method get_num_output_channels_from_level (line 131) | def get_num_output_channels_from_level(self, level: int):
    method get_num_output_channels_from_image_size (line 134) | def get_num_output_channels_from_image_size(self, image_size: int):
    method merge_down (line 137) | def merge_down(self, top_layer: Tensor, bottom_layer: Tensor):
    method apply_grid_change (line 142) | def apply_grid_change(self, grid_change, image: Tensor) -> Tensor:
    method apply_color_change (line 155) | def apply_color_change(self, alpha, color_change, image: Tensor) -> Te...
    method forward (line 158) | def forward(self, image: Tensor, pose: Tensor, *args) -> List[Tensor]:
  class FaceMorpher08Factory (line 205) | class FaceMorpher08Factory(ModuleFactory):
    method __init__ (line 206) | def __init__(self, args: FaceMorpher08Args):
    method create (line 210) | def create(self) -> Module:

FILE: src/tha4/nn/image_processing_util.py
  function apply_rgb_change (line 6) | def apply_rgb_change(alpha: Tensor, color_change: Tensor, image: Tensor):
  function apply_grid_change (line 13) | def apply_grid_change(grid_change, image: Tensor) -> Tensor:
  class GridChangeApplier (line 27) | class GridChangeApplier:
    method __init__ (line 28) | def __init__(self):
    method apply (line 33) | def apply(self, grid_change: Tensor, image: Tensor, align_corners: boo...
  function apply_color_change (line 57) | def apply_color_change(alpha, color_change, image: Tensor) -> Tensor:

FILE: src/tha4/nn/init_function.py
  function create_init_function (line 9) | def create_init_function(method: str = 'none') -> Callable[[Module], Mod...
  class HeInitialization (line 35) | class HeInitialization:
    method __init__ (line 36) | def __init__(self, a: int = 0, mode: str = 'fan_in', nonlinearity: str...
    method __call__ (line 41) | def __call__(self, module: Module) -> Module:
  class NormalInitialization (line 47) | class NormalInitialization:
    method __init__ (line 48) | def __init__(self, mean: float = 0.0, std: float = 1.0):
    method __call__ (line 52) | def __call__(self, module: Module) -> Module:
  class XavierInitialization (line 58) | class XavierInitialization:
    method __init__ (line 59) | def __init__(self, gain: float = 1.0):
    method __call__ (line 62) | def __call__(self, module: Module) -> Module:
  class ZeroInitialization (line 68) | class ZeroInitialization:
    method __call__ (line 69) | def __call__(self, module: Module) -> Module:
  class NoInitialization (line 74) | class NoInitialization:
    method __call__ (line 75) | def __call__(self, module: Module) -> Module:

FILE: src/tha4/nn/morpher/morpher_00.py
  function apply_color_change (line 12) | def apply_color_change(alpha, color_change, image: Tensor) -> Tensor:
  class Morpher00Args (line 16) | class Morpher00Args:
    method __init__ (line 17) | def __init__(self,
  class Morpher00 (line 35) | class Morpher00(Module):
    method __init__ (line 36) | def __init__(self, args: Morpher00Args):
    method forward (line 42) | def forward(self, image: torch.Tensor, pose: torch.Tensor) -> List[Ten...
  class Morpher00Factory (line 75) | class Morpher00Factory(ModuleFactory):
    method __init__ (line 76) | def __init__(self, args: Morpher00Args):
    method create (line 79) | def create(self) -> Module:

FILE: src/tha4/nn/nonlinearity_factory.py
  class ReLUFactory (line 8) | class ReLUFactory(ModuleFactory):
    method __init__ (line 9) | def __init__(self, inplace: bool = False):
    method create (line 12) | def create(self) -> Module:
  class LeakyReLUFactory (line 16) | class LeakyReLUFactory(ModuleFactory):
    method __init__ (line 17) | def __init__(self, inplace: bool = False, negative_slope: float = 1e-2):
    method create (line 21) | def create(self) -> Module:
  class ELUFactory (line 25) | class ELUFactory(ModuleFactory):
    method __init__ (line 26) | def __init__(self, inplace: bool = False, alpha: float = 1.0):
    method create (line 30) | def create(self) -> Module:
  class ReLU6Factory (line 34) | class ReLU6Factory(ModuleFactory):
    method __init__ (line 35) | def __init__(self, inplace: bool = False):
    method create (line 38) | def create(self) -> Module:
  class SiLUFactory (line 42) | class SiLUFactory(ModuleFactory):
    method __init__ (line 43) | def __init__(self, inplace: bool = False):
    method create (line 46) | def create(self) -> Module:
  class HardswishFactory (line 50) | class HardswishFactory(ModuleFactory):
    method __init__ (line 51) | def __init__(self, inplace: bool = False):
    method create (line 54) | def create(self) -> Module:
  class TanhFactory (line 58) | class TanhFactory(ModuleFactory):
    method create (line 59) | def create(self) -> Module:
  class SigmoidFactory (line 63) | class SigmoidFactory(ModuleFactory):
    method create (line 64) | def create(self) -> Module:
  function resolve_nonlinearity_factory (line 68) | def resolve_nonlinearity_factory(nonlinearity_fatory: Optional[ModuleFac...

FILE: src/tha4/nn/normalization.py
  class PixelNormalization (line 12) | class PixelNormalization(Module):
    method __init__ (line 13) | def __init__(self, epsilon=1e-8):
    method forward (line 17) | def forward(self, x):
  class NormalizationLayerFactory (line 21) | class NormalizationLayerFactory(ABC):
    method __init__ (line 22) | def __init__(self):
    method create (line 26) | def create(self, num_features: int, affine: bool = True) -> Module:
    method resolve_2d (line 30) | def resolve_2d(factory: Optional['NormalizationLayerFactory']) -> 'Nor...
  class Bias2d (line 37) | class Bias2d(Module):
    method __init__ (line 38) | def __init__(self, num_features: int):
    method forward (line 43) | def forward(self, x):
  class NoNorm2dFactory (line 47) | class NoNorm2dFactory(NormalizationLayerFactory):
    method __init__ (line 48) | def __init__(self):
    method create (line 51) | def create(self, num_features: int, affine: bool = True) -> Module:
  class BatchNorm2dFactory (line 58) | class BatchNorm2dFactory(NormalizationLayerFactory):
    method __init__ (line 59) | def __init__(self,
    method get_weight_mean (line 68) | def get_weight_mean(self):
    method get_weight_std (line 74) | def get_weight_std(self):
    method create (line 80) | def create(self, num_features: int, affine: bool = True) -> Module:
  class InstanceNorm2dFactory (line 90) | class InstanceNorm2dFactory(NormalizationLayerFactory):
    method __init__ (line 91) | def __init__(self):
    method create (line 94) | def create(self, num_features: int, affine: bool = True) -> Module:
  class PixelNormFactory (line 98) | class PixelNormFactory(NormalizationLayerFactory):
    method __init__ (line 99) | def __init__(self):
    method create (line 102) | def create(self, num_features: int, affine: bool = True) -> Module:
  class LayerNorm2d (line 106) | class LayerNorm2d(Module):
    method __init__ (line 107) | def __init__(self, channels: int, affine: bool = True):
    method forward (line 116) | def forward(self, x):
  class LayerNorm2dFactory (line 121) | class LayerNorm2dFactory(NormalizationLayerFactory):
    method __init__ (line 122) | def __init__(self):
    method create (line 125) | def create(self, num_features: int, affine: bool = True) -> Module:

FILE: src/tha4/nn/pass_through.py
  class PassThrough (line 4) | class PassThrough(Module):
    method __init__ (line 5) | def __init__(self):
    method forward (line 8) | def forward(self, x):

FILE: src/tha4/nn/resnet_block.py
  class ResnetBlock (line 13) | class ResnetBlock(Module):
    method create (line 15) | def create(num_channels: int,
    method __init__ (line 29) | def __init__(self,
    method forward (line 63) | def forward(self, x):

FILE: src/tha4/nn/resnet_block_seperable.py
  class ResnetBlockSeparable (line 14) | class ResnetBlockSeparable(Module):
    method create (line 16) | def create(num_channels: int,
    method __init__ (line 31) | def __init__(self,
    method forward (line 67) | def forward(self, x):

FILE: src/tha4/nn/separable_conv.py
  function create_separable_conv3 (line 9) | def create_separable_conv3(in_channels: int, out_channels: int,
  function create_separable_conv7 (line 24) | def create_separable_conv7(in_channels: int, out_channels: int,
  function create_separable_conv3_block (line 39) | def create_separable_conv3_block(
  function create_separable_conv7_block (line 56) | def create_separable_conv7_block(
  function create_separable_downsample_block (line 73) | def create_separable_downsample_block(
  function create_separable_upsample_block (line 103) | def create_separable_upsample_block(

FILE: src/tha4/nn/siren/face_morpher/siren_face_morpher_00.py
  class SirenFaceMorpher00Args (line 12) | class SirenFaceMorpher00Args:
    method __init__ (line 13) | def __init__(self,
  class SirenFaceMorpher00 (line 28) | class SirenFaceMorpher00(Module):
    method __init__ (line 29) | def __init__(self, args: SirenFaceMorpher00Args):
    method forward (line 34) | def forward(self, pose: Tensor, position: Optional[Tensor] = None) -> ...
  class SirenFaceMorpher00Factory (line 54) | class SirenFaceMorpher00Factory(ModuleFactory):
    method __init__ (line 55) | def __init__(self, args: SirenFaceMorpher00Args):
    method create (line 58) | def create(self) -> Module:

FILE: src/tha4/nn/siren/face_morpher/siren_face_morpher_00_trainer.py
  function get_poser (line 23) | def get_poser():
  class SirenFaceMorpher00TrainerArgs (line 29) | class SirenFaceMorpher00TrainerArgs:
    method __init__ (line 30) | def __init__(self,
    method get_character_image (line 75) | def get_character_image(self):
    method get_face_mask_image (line 83) | def get_face_mask_image(self):
    method get_training_dataset (line 97) | def get_training_dataset(self):
    method get_module_factory (line 103) | def get_module_factory(self):
    method transform_pose_to_module_input (line 115) | def transform_pose_to_module_input(self, pose: Tensor):
    method transform_original_image_to_module_input (line 118) | def transform_original_image_to_module_input(self, image: Tensor):
    method transform_poser_posed_image_to_groundtruth (line 123) | def transform_poser_posed_image_to_groundtruth(self, image: Tensor):
    method get_training_computation_protocol (line 128) | def get_training_computation_protocol(self):
    method get_learning_rate (line 134) | def get_learning_rate(self, examples_seen_so_far) -> Dict[str, float]:
    method get_optimizer_factories (line 152) | def get_optimizer_factories(self):
    method get_poser (line 157) | def get_poser(self):
    method get_training_protocol (line 160) | def get_training_protocol(self, world_size: int):
    method get_sample_output_protocol (line 175) | def get_sample_output_protocol(self):
    method get_loss (line 185) | def get_loss(self):
    method create_trainer (line 205) | def create_trainer(self, prefix: str, world_size: int, distrib_backend...

FILE: src/tha4/nn/siren/face_morpher/siren_face_morpher_protocols_00.py
  class SirenMorpherProtocol00Keys (line 23) | class SirenMorpherProtocol00Keys:
  class SirenMorpherProtocol00Indices (line 43) | class SirenMorpherProtocol00Indices:
  class SirenFaceMorpherComputationProtocol00 (line 50) | class SirenFaceMorpherComputationProtocol00(ComposableCachedComputationP...
    method __init__ (line 51) | def __init__(self,
  class SirenFaceMorpherSampleOutputProtocol00 (line 110) | class SirenFaceMorpherSampleOutputProtocol00(SampleOutputProtocol):
    method __init__ (line 111) | def __init__(self,
    method get_examples_per_sample_output (line 132) | def get_examples_per_sample_output(self) -> int:
    method get_random_seed (line 135) | def get_random_seed(self) -> int:
    method get_sample_output_data (line 138) | def get_sample_output_data(self, validation_dataset: Dataset, device: ...
    method save_sample_output_data (line 153) | def save_sample_output_data(self,

FILE: src/tha4/nn/siren/morpher/siren_morpher_03.py
  class SirenMorpherLevelArgs (line 14) | class SirenMorpherLevelArgs:
    method __init__ (line 15) | def __init__(self,
  class SirenMorpher03Args (line 25) | class SirenMorpher03Args:
    method __init__ (line 26) | def __init__(self,
  class SirenMorpher03 (line 42) | class SirenMorpher03(Module):
    method __init__ (line 43) | def __init__(self, args: SirenMorpher03Args):
    method get_position_grid (line 92) | def get_position_grid(self, n: int, image_size: int, device: torch.dev...
    method get_pose_image (line 101) | def get_pose_image(self, pose: Tensor, image_size: int):
    method forward (line 107) | def forward(self, image: Tensor, pose: Tensor) -> List[Tensor]:
  class SirenMorpher03Factory (line 148) | class SirenMorpher03Factory(ModuleFactory):
    method __init__ (line 149) | def __init__(self, args: SirenMorpher03Args):
    method create (line 152) | def create(self):

FILE: src/tha4/nn/siren/morpher/siren_morpher_03_trainer.py
  function get_poser (line 20) | def get_poser():
  class LossTerm (line 26) | class LossTerm(Enum):
    method get_loss (line 32) | def get_loss(self, protocol: SirenMorpherComputationProtocol03):
  class LossWeights (line 53) | class LossWeights:
    method __init__ (line 54) | def __init__(self, weights: Optional[Dict[LossTerm, float]] = None):
  class TrainingPhase (line 64) | class TrainingPhase:
    method __init__ (line 65) | def __init__(self,
  class LearningRateFunc (line 74) | class LearningRateFunc:
    method __init__ (line 75) | def __init__(self, phases: List[TrainingPhase], keys: List[str]):
    method make_learning_rate_dict (line 79) | def make_learning_rate_dict(self, keys: List[str], value: float):
    method __call__ (line 85) | def __call__(self, examples_seen_so_far: int) -> Dict[str, float]:
  class LossWeightFunc (line 92) | class LossWeightFunc:
    method __init__ (line 93) | def __init__(self, phases: List[TrainingPhase], term: LossTerm):
    method __call__ (line 97) | def __call__(self, examples_seen_so_far: int) -> float:
  class TrainingPhases (line 104) | class TrainingPhases:
    method __init__ (line 105) | def __init__(self, phases: List[TrainingPhase]):
    method make_learning_rate_dict (line 112) | def make_learning_rate_dict(self, keys: List[str], value: float):
    method get_learning_rate_func (line 118) | def get_learning_rate_func(self, keys: List[str]):
    method get_loss_weight_func (line 121) | def get_loss_weight_func(self, term: LossTerm) -> Callable[[int], float]:
  class SirenMorpher03TrainerArgs (line 125) | class SirenMorpher03TrainerArgs:
    method __init__ (line 126) | def __init__(self,
    method get_character_image (line 160) | def get_character_image(self):
    method get_training_dataset (line 168) | def get_training_dataset(self):
    method get_module_factory (line 174) | def get_module_factory(self):
    method get_training_computation_protocol (line 195) | def get_training_computation_protocol(self):
    method get_optimizer_factories (line 202) | def get_optimizer_factories(self):
    method get_poser (line 207) | def get_poser(self):
    method get_training_protocol (line 210) | def get_training_protocol(self, world_size: int):
    method get_sample_output_protocol (line 225) | def get_sample_output_protocol(self):
    method get_loss (line 237) | def get_loss(self):
    method create_trainer (line 249) | def create_trainer(self, prefix: str, world_size: int, distrib_backend...

FILE: src/tha4/nn/siren/morpher/siren_morpher_protocols_03.py
  class SirenMorpherProtocol03Keys (line 27) | class SirenMorpherProtocol03Keys:
  class SirenMorpherProtocol03Indices (line 57) | class SirenMorpherProtocol03Indices:
  class SirenMorpherComputationProtocol03 (line 75) | class SirenMorpherComputationProtocol03(ComposableCachedComputationProto...
    method __init__ (line 76) | def __init__(self,
  class SirenMorpherTrainingProtocol03 (line 160) | class SirenMorpherTrainingProtocol03(AbstractTrainingProtocol):
    method __init__ (line 161) | def __init__(self,
    method run_training_iteration (line 178) | def run_training_iteration(
  class SirenMorpherSampleOutputProtocol (line 217) | class SirenMorpherSampleOutputProtocol(SampleOutputProtocol):
    method __init__ (line 218) | def __init__(self,
    method get_examples_per_sample_output (line 281) | def get_examples_per_sample_output(self) -> int:
    method get_random_seed (line 284) | def get_random_seed(self) -> int:
    method get_sample_output_data (line 287) | def get_sample_output_data(self, validation_dataset: Dataset, device: ...
    method save_sample_output_data (line 300) | def save_sample_output_data(self,

FILE: src/tha4/nn/siren/vanilla/siren.py
  class SineLinearLayer (line 12) | class SineLinearLayer(Module):
    method __init__ (line 13) | def __init__(self,
    method forward (line 38) | def forward(self, x: Tensor):
  class SirenArgs (line 42) | class SirenArgs:
    method __init__ (line 43) | def __init__(
  class Siren (line 62) | class Siren(Module):
    method __init__ (line 63) | def __init__(self, args: SirenArgs):
    method forward (line 84) | def forward(self, x: Tensor) -> Tensor:
  class SirenFactory (line 94) | class SirenFactory(ModuleFactory):
    method __init__ (line 95) | def __init__(self, args: SirenArgs):
    method create (line 99) | def create(self) -> Module:

FILE: src/tha4/nn/spectral_norm.py
  function apply_spectral_norm (line 5) | def apply_spectral_norm(module: Module, use_spectrial_norm: bool = False...

FILE: src/tha4/nn/upscaler/upscaler_02.py
  class Upscaler02Args (line 12) | class Upscaler02Args:
    method __init__ (line 13) | def __init__(self,
  function apply_color_change (line 33) | def apply_color_change(alpha, color_change, image: Tensor) -> Tensor:
  class Upscaler02 (line 37) | class Upscaler02(Module):
    method __init__ (line 38) | def __init__(self, args: Upscaler02Args):
    method check_image (line 53) | def check_image(self, image: torch.Tensor):
    method forward (line 59) | def forward(self,
  class Upscaler02Factory (line 105) | class Upscaler02Factory(ModuleFactory):
    method __init__ (line 106) | def __init__(self, args: Upscaler02Args):
    method create (line 109) | def create(self) -> Module:

FILE: src/tha4/nn/util.py
  function wrap_conv_or_linear_module (line 12) | def wrap_conv_or_linear_module(module: Module,
  class BlockArgs (line 22) | class BlockArgs:
    method __init__ (line 23) | def __init__(self,
    method wrap_module (line 33) | def wrap_module(self, module: Module) -> Module:
    method get_init_func (line 36) | def get_init_func(self) -> Callable[[Module], Module]:

FILE: src/tha4/poser/general_poser_02.py
  class GeneralPoser02 (line 10) | class GeneralPoser02(Poser):
    method __init__ (line 11) | def __init__(self,
    method get_image_size (line 38) | def get_image_size(self) -> int:
    method get_modules (line 41) | def get_modules(self):
    method get_pose_parameter_groups (line 51) | def get_pose_parameter_groups(self) -> List[PoseParameterGroup]:
    method get_num_parameters (line 54) | def get_num_parameters(self) -> int:
    method pose (line 57) | def pose(self, image: Tensor, pose: Tensor, output_index: Optional[int...
    method get_posing_outputs (line 63) | def get_posing_outputs(self, image: Tensor, pose: Tensor) -> List[Tens...
    method get_output_length (line 81) | def get_output_length(self) -> int:
    method free (line 84) | def free(self):
    method get_dtype (line 87) | def get_dtype(self) -> torch.dtype:
    method to (line 90) | def to(self, device: torch.device) -> 'GeneralPoser02':

FILE: src/tha4/poser/modes/mode_07.py
  class Network (line 24) | class Network(Enum):
    method outputs_key (line 32) | def outputs_key(self):
  class Branch (line 36) | class Branch(Enum):
  class FiveStepPoserComputationProtocol (line 47) | class FiveStepPoserComputationProtocol(CachedComputationProtocol):
    method __init__ (line 48) | def __init__(self, eyebrow_morphed_image_index: int):
    method compute_func (line 54) | def compute_func(self):
    method compute_output (line 72) | def compute_output(self, key: str, state: ComputationState) -> List[Te...
  function load_eyebrow_decomposer (line 137) | def load_eyebrow_decomposer(file_name: str):
  function load_eyebrow_morphing_combiner (line 158) | def load_eyebrow_morphing_combiner(file_name: str):
  function load_face_morpher (line 180) | def load_face_morpher(file_name: str):
  function apply_color_change (line 206) | def apply_color_change(alpha, color_change, image: Tensor) -> Tensor:
  function load_morpher_00 (line 210) | def load_morpher_00(file_name: str):
  function load_upscaler_02 (line 241) | def load_upscaler_02(file_name: str):
  function create_poser (line 272) | def create_poser(

FILE: src/tha4/poser/modes/mode_12.py
  class Network (line 20) | class Network(Enum):
    method outputs_key (line 26) | def outputs_key(self):
  class Branch (line 30) | class Branch(Enum):
  class FiveStepPoserComputationProtocol (line 41) | class FiveStepPoserComputationProtocol(CachedComputationProtocol):
    method __init__ (line 42) | def __init__(self, eyebrow_morphed_image_index: int):
    method compute_func (line 48) | def compute_func(self):
    method compute_output (line 66) | def compute_output(self, key: str, state: ComputationState) -> Any:
  function load_eyebrow_decomposer (line 99) | def load_eyebrow_decomposer(file_name: str):
  function load_eyebrow_morphing_combiner (line 120) | def load_eyebrow_morphing_combiner(file_name: str):
  function load_face_morpher (line 142) | def load_face_morpher(file_name: str):
  function apply_color_change (line 165) | def apply_color_change(alpha, color_change, image: Tensor) -> Tensor:
  function create_poser (line 169) | def create_poser(

FILE: src/tha4/poser/modes/mode_14.py
  class Keys (line 19) | class Keys:
  class Indices (line 35) | class Indices:
  class TwoStepPoserComputationProtocol (line 40) | class TwoStepPoserComputationProtocol(CachedComputationProtocol):
    method __init__ (line 41) | def __init__(self, keys: Optional[Keys] = None, indices: Optional[Indi...
    method compute_func (line 52) | def compute_func(self):
    method compute_output (line 58) | def compute_output(self, key: str, state: ComputationState) -> Any:
  function load_face_morpher (line 93) | def load_face_morpher(file_name: Optional[str] = None):
  function load_body_morpher (line 109) | def load_body_morpher(file_name: Optional[str] = None):
  function create_poser (line 134) | def create_poser(

FILE: src/tha4/poser/modes/pose_parameters.py
  function get_pose_parameters (line 4) | def get_pose_parameters():

FILE: src/tha4/poser/poser.py
  class PoseParameterCategory (line 9) | class PoseParameterCategory(Enum):
  class PoseParameterGroup (line 20) | class PoseParameterGroup:
    method __init__ (line 21) | def __init__(self,
    method get_arity (line 47) | def get_arity(self) -> int:
    method get_group_name (line 50) | def get_group_name(self) -> str:
    method get_parameter_names (line 53) | def get_parameter_names(self) -> List[str]:
    method is_discrete (line 56) | def is_discrete(self) -> bool:
    method get_range (line 59) | def get_range(self) -> Tuple[float, float]:
    method get_default_value (line 62) | def get_default_value(self):
    method get_parameter_index (line 65) | def get_parameter_index(self):
    method get_category (line 68) | def get_category(self) -> PoseParameterCategory:
  class PoseParameters (line 72) | class PoseParameters:
    method __init__ (line 73) | def __init__(self, pose_parameter_groups: List[PoseParameterGroup]):
    method get_parameter_index (line 76) | def get_parameter_index(self, name: str) -> int:
    method get_parameter_name (line 85) | def get_parameter_name(self, index: int) -> str:
    method get_pose_parameter_groups (line 95) | def get_pose_parameter_groups(self):
    method get_parameter_count (line 98) | def get_parameter_count(self):
    class Builder (line 104) | class Builder:
      method __init__ (line 105) | def __init__(self):
      method add_parameter_group (line 109) | def add_parameter_group(self,
      method build (line 128) | def build(self) -> 'PoseParameters':
  class Poser (line 132) | class Poser(ABC):
    method get_image_size (line 134) | def get_image_size(self) -> int:
    method get_output_length (line 138) | def get_output_length(self) -> int:
    method get_pose_parameter_groups (line 142) | def get_pose_parameter_groups(self) -> List[PoseParameterGroup]:
    method get_num_parameters (line 146) | def get_num_parameters(self) -> int:
    method pose (line 150) | def pose(self, image: Tensor, pose: Tensor, output_index: int = 0) -> ...
    method get_posing_outputs (line 154) | def get_posing_outputs(self, image: Tensor, pose: Tensor) -> List[Tens...
    method get_dtype (line 157) | def get_dtype(self) -> torch.dtype:
    method to (line 161) | def to(self, device: torch.device):

FILE: src/tha4/pytasuku/indexed/all_tasks.py
  class AllTasks (line 8) | class AllTasks(NoIndexCommandTasks):
    method __init__ (line 9) | def __init__(
    method execute_run_command (line 20) | def execute_run_command(self):
    method execute_clean_command (line 24) | def execute_clean_command(self):

FILE: src/tha4/pytasuku/indexed/bundled_indexed_file_tasks.py
  class BundledIndexedTasks (line 9) | class BundledIndexedTasks:
    method indexed_tasks_command_names (line 14) | def indexed_tasks_command_names(self) -> Iterable[str]:
    method get_indexed_tasks (line 18) | def get_indexed_tasks(self, command_name) -> IndexedTasks:
  function define_all_tasks_from_list (line 22) | def define_all_tasks_from_list(workspace: Workspace, prefix: str, tasks:...

FILE: src/tha4/pytasuku/indexed/indexed_file_tasks.py
  class IndexedFileTasks (line 8) | class IndexedFileTasks(IndexedTasks, abc.ABC):
    method __init__ (line 9) | def __init__(self, workspace: Workspace, prefix: str):
    method file_list (line 14) | def file_list(self) -> List[str]:
    method get_file_name (line 18) | def get_file_name(self, *indices: int) -> str:

FILE: src/tha4/pytasuku/indexed/indexed_tasks.py
  class IndexedTasks (line 7) | class IndexedTasks(abc.ABC):
    method __init__ (line 8) | def __init__(self, workspace: Workspace, prefix: str):
    method run_command (line 14) | def run_command(self) -> str:
    method clean_command (line 19) | def clean_command(self) -> str:
    method shape (line 24) | def shape(self) -> List[int]:
    method arity (line 29) | def arity(self) -> int:
    method define_tasks (line 33) | def define_tasks(self):

FILE: src/tha4/pytasuku/indexed/no_index_command_tasks.py
  class NoIndexCommandTasks (line 8) | class NoIndexCommandTasks(IndexedTasks, abc.ABC):
    method __init__ (line 9) | def __init__(self, workspace: Workspace, prefix: str, command_name: st...
    method run_command (line 16) | def run_command(self):
    method clean_command (line 20) | def clean_command(self):
    method arity (line 24) | def arity(self) -> int:
    method shape (line 28) | def shape(self) -> List[int]:
    method execute_run_command (line 32) | def execute_run_command(self):
    method execute_clean_command (line 36) | def execute_clean_command(self):
    method define_tasks (line 39) | def define_tasks(self):

FILE: src/tha4/pytasuku/indexed/no_index_file_tasks.py
  class NoIndexFileTasks (line 9) | class NoIndexFileTasks(IndexedFileTasks, abc.ABC):
    method __init__ (line 10) | def __init__(self, workspace: Workspace, prefix: str, command_name: st...
    method file_name (line 18) | def file_name(self):
    method create_file_task (line 22) | def create_file_task(self):
    method get_file_name (line 25) | def get_file_name(self, *indices: int) -> str:
    method run_command (line 31) | def run_command(self):
    method clean_command (line 35) | def clean_command(self):
    method arity (line 39) | def arity(self) -> int:
    method shape (line 43) | def shape(self) -> List[int]:
    method file_list (line 47) | def file_list(self) -> List[str]:
    method clean (line 50) | def clean(self):
    method define_tasks (line 53) | def define_tasks(self):

FILE: src/tha4/pytasuku/indexed/one_index_file_tasks.py
  class OneIndexFileTasks (line 10) | class OneIndexFileTasks(IndexedFileTasks, abc.ABC):
    method __init__ (line 11) | def __init__(self, workspace: Workspace, prefix: str, command_name: st...
    method run_command (line 21) | def run_command(self) -> str:
    method clean_command (line 25) | def clean_command(self) -> str:
    method shape (line 29) | def shape(self) -> List[int]:
    method arity (line 33) | def arity(self) -> int:
    method file_name (line 37) | def file_name(self, index):
    method create_file_tasks (line 41) | def create_file_tasks(self, index):
    method get_file_name (line 44) | def get_file_name(self, *indices: int) -> str:
    method file_list (line 51) | def file_list(self):
    method clean (line 57) | def clean(self):
    method define_tasks (line 61) | def define_tasks(self):

FILE: src/tha4/pytasuku/indexed/simple_no_index_file_tasks.py
  class SimpleNoIndexFileTasks (line 7) | class SimpleNoIndexFileTasks(NoIndexFileTasks):
    method __init__ (line 8) | def __init__(self,
    method file_name (line 24) | def file_name(self):
    method create_file_task (line 27) | def create_file_task(self):

FILE: src/tha4/pytasuku/indexed/two_indices_file_tasks.py
  class TwoIndicesFileTasks (line 9) | class TwoIndicesFileTasks(IndexedFileTasks, abc.ABC):
    method __init__ (line 10) | def __init__(self, workspace: Workspace, prefix: str, command_name: str,
    method run_command (line 21) | def run_command(self) -> str:
    method clean_command (line 25) | def clean_command(self) -> str:
    method shape (line 29) | def shape(self) -> List[int]:
    method arity (line 33) | def arity(self) -> int:
    method file_name (line 37) | def file_name(self, index0: int, index1: int) -> str:
    method file_list (line 41) | def file_list(self) -> List[str]:
    method create_file_tasks (line 49) | def create_file_tasks(self, index0: int, index1: int):
    method get_file_name (line 52) | def get_file_name(self, *indices: int) -> str:
    method clean (line 58) | def clean(self):
    method define_tasks (line 62) | def define_tasks(self):

FILE: src/tha4/pytasuku/indexed/util.py
  function delete_file (line 9) | def delete_file(file_name):
  function all_tasks_from_named_tasks_map (line 17) | def all_tasks_from_named_tasks_map(
  function create_tasks_hierarchy_helper (line 38) | def create_tasks_hierarchy_helper(
  function create_task_hierarchy (line 60) | def create_task_hierarchy(
  function write_done_file (line 68) | def write_done_file(file_name: str):

FILE: src/tha4/pytasuku/task.py
  class Task (line 6) | class Task:
    method __init__ (line 7) | def __init__(self, workspace: 'Workspace', name: str, dependencies: Li...
    method run (line 13) | def run(self):
    method can_run (line 17) | def can_run(self) -> bool:
    method needs_to_be_run (line 21) | def needs_to_be_run(self) -> bool:
    method name (line 25) | def name(self) -> str:
    method dependencies (line 29) | def dependencies(self) -> List[str]:
    method workspace (line 33) | def workspace(self) -> 'Workspace':
    method timestamp (line 37) | def timestamp(self) -> float:
  class CommandTask (line 41) | class CommandTask(Task):
    method __init__ (line 42) | def __init__(self, workspace, name, dependencies):
    method needs_to_be_run (line 46) | def needs_to_be_run(self):
  class PlaceholderTask (line 50) | class PlaceholderTask(Task):
    method __init__ (line 51) | def __init__(self, workspace, name):
    method can_run (line 55) | def can_run(self):
    method run (line 58) | def run(self):
    method needs_to_be_run (line 62) | def needs_to_be_run(self):
    method timestamp (line 66) | def timestamp(self) -> float:
  class FileTask (line 73) | class FileTask(Task):
    method __init__ (line 74) | def __init__(self, workspace, name, dependencies):
    method timestamp (line 78) | def timestamp(self):
    method needs_to_be_run (line 82) | def needs_to_be_run(self):

FILE: src/tha4/pytasuku/task_selector_ui.py
  class TaskSelectorUi (line 7) | class TaskSelectorUi(Frame):
    method __init__ (line 8) | def __init__(self, root, workspace: Workspace):
    method add_tree_nodes (line 46) | def add_tree_nodes(self):
    method run_selected_task (line 94) | def run_selected_task(self):
  function run_task_selector_ui (line 104) | def run_task_selector_ui(workspace: Workspace):

FILE: src/tha4/pytasuku/util.py
  function create_delete_all_task (line 8) | def create_delete_all_task(workspace: Workspace, name: str, files: List[...

FILE: src/tha4/pytasuku/workspace.py
  class WorkspaceState (line 8) | class WorkspaceState(Enum):
  class NodeState (line 13) | class NodeState(Enum):
  class FuncCommandTask (line 18) | class FuncCommandTask(CommandTask):
    method __init__ (line 19) | def __init__(self, workspace, name, dependencies, func):
    method run (line 23) | def run(self):
  class FuncFileTask (line 27) | class FuncFileTask(FileTask):
    method __init__ (line 28) | def __init__(self, workspace, name, dependencies, func):
    method run (line 32) | def run(self):
  function do_nothing (line 36) | def do_nothing():
  class Workspace (line 40) | class Workspace:
    method __init__ (line 41) | def __init__(self):
    method modified (line 48) | def modified(self) -> bool:
    method state (line 52) | def state(self) -> WorkspaceState:
    method in_session (line 56) | def in_session(self) -> bool:
    method task_exists (line 59) | def task_exists(self, name: str) -> bool:
    method task_exists_and_not_placeholder (line 62) | def task_exists_and_not_placeholder(self, name: str) -> bool:
    method get_task (line 65) | def get_task(self, name: str) -> Task:
    method add_task (line 68) | def add_task(self, task):
    method start_session (line 81) | def start_session(self):
    method end_session (line 90) | def end_session(self):
    method session (line 97) | def session(self):
    method check_cycle (line 104) | def check_cycle(self):
    method dfs (line 110) | def dfs(self, name, node_states):
    method run (line 122) | def run(self, name):
    method run_helper (line 129) | def run_helper(self, name):
    method needs_to_run (line 138) | def needs_to_run(self, name):
    method create_command_task (line 148) | def create_command_task(self, name, dependencies, func=do_nothing):
    method create_file_task (line 151) | def create_file_task(self, name, dependencies, func):
  function command_task (line 155) | def command_task(workspace: Workspace, name: str, dependencies: List[str]):
  function file_task (line 163) | def file_task(workspace: Workspace, name: str, dependencies: List[str]):

FILE: src/tha4/sampleoutput/general_sample_output_protocol.py
  class ImageType (line 18) | class ImageType(Enum):
  class SampleImageSpec (line 25) | class SampleImageSpec:
    method __init__ (line 26) | def __init__(self, value_func: TensorCachedComputationFunc, image_type...
  class SampleImageSaver (line 31) | class SampleImageSaver:
    method __init__ (line 32) | def __init__(self,
    method save_sample_output_data (line 41) | def save_sample_output_data(self,
    method convert_to_numpy_image (line 94) | def convert_to_numpy_image(self, image: torch.Tensor):
  class GeneralSampleOutputProtocol (line 106) | class GeneralSampleOutputProtocol(SampleOutputProtocol):
    method __init__ (line 107) | def __init__(self,
    method get_examples_per_sample_output (line 120) | def get_examples_per_sample_output(self) -> int:
    method get_random_seed (line 123) | def get_random_seed(self) -> int:
    method get_sample_output_data (line 126) | def get_sample_output_data(self, validation_dataset: Dataset, device: ...
    method save_sample_output_data (line 132) | def save_sample_output_data(self,

FILE: src/tha4/sampleoutput/poser_sampler_output_protocol.py
  class PoserSampleOutputProtocol (line 13) | class PoserSampleOutputProtocol(SampleOutputProtocol):
    method __init__ (line 14) | def __init__(self,
    method get_examples_per_sample_output (line 50) | def get_examples_per_sample_output(self) -> int:
    method get_random_seed (line 53) | def get_random_seed(self) -> int:
    method get_sample_output_data (line 56) | def get_sample_output_data(self, validation_dataset: Dataset, device: ...
    method save_sample_output_data (line 62) | def save_sample_output_data(self,

FILE: src/tha4/sampleoutput/sample_image_creator.py
  class ImageSource (line 15) | class ImageSource(Enum):
  class ImageType (line 20) | class ImageType(Enum):
  class SampleImageSpec (line 27) | class SampleImageSpec:
    method __init__ (line 28) | def __init__(self, image_source: ImageSource, index: int, image_type: ...
  function torch_rgb_to_numpy_image (line 34) | def torch_rgb_to_numpy_image(torch_image: Tensor, min_pixel_value=-1.0, ...
  function torch_rgba_to_numpy_image (line 45) | def torch_rgba_to_numpy_image(torch_image: Tensor, min_pixel_value=-1.0,...
  function torch_grid_change_to_numpy_image (line 57) | def torch_grid_change_to_numpy_image(torch_image, num_channels=3):
  class SampleImageSaver (line 74) | class SampleImageSaver:
    method __init__ (line 75) | def __init__(self,
    method save_sample_output_image (line 86) | def save_sample_output_image(self, batch: List[Tensor], outputs: List[...
    method save_sample_output_data (line 132) | def save_sample_output_data(self,
    method convert_to_numpy_image (line 140) | def convert_to_numpy_image(self, image: torch.Tensor):

FILE: src/tha4/shion/base/dataset/lazy_dataset.py
  class LazyDataset (line 6) | class LazyDataset(Dataset):
    method __init__ (line 7) | def __init__(self, source_func: Callable[[], Dataset]):
    method get_source (line 11) | def get_source(self):
    method __len__ (line 16) | def __len__(self):
    method __getitem__ (line 19) | def __getitem__(self, item):

FILE: src/tha4/shion/base/dataset/lazy_tensor_dataset.py
  class LazyTensorDataset (line 7) | class LazyTensorDataset(Dataset):
    method __init__ (line 8) | def __init__(self, file_name: str):
    method get_dataset (line 12) | def get_dataset(self):
    method __len__ (line 25) | def __len__(self):
    method __getitem__ (line 29) | def __getitem__(self, item):

FILE: src/tha4/shion/base/dataset/png_in_dir_dataset.py
  class PngInDirDataset (line 11) | class PngInDirDataset(Dataset):
    method __init__ (line 12) | def __init__(self, dir: str,
    method get_file_names (line 29) | def get_file_names(self):
    method __len__ (line 36) | def __len__(self):
    method __getitem__ (line 40) | def __getitem__(self, item):

FILE: src/tha4/shion/base/dataset/util.py
  function get_indexed_batch (line 7) | def get_indexed_batch(dataset: Dataset, example_indices: List[int], devi...

FILE: src/tha4/shion/base/dataset/xformed_dataset.py
  class XformedDataset (line 6) | class XformedDataset(Dataset):
    method __init__ (line 7) | def __init__(self, source: Dataset, xform_func: Callable[[Any], Any]):
    method __len__ (line 11) | def __len__(self):
    method __getitem__ (line 14) | def __getitem__(self, item):

FILE: src/tha4/shion/base/image_util.py
  function numpy_srgb_to_linear (line 10) | def numpy_srgb_to_linear(x):
  function numpy_linear_to_srgb (line 15) | def numpy_linear_to_srgb(x):
  function numpy_alpha_devide (line 20) | def numpy_alpha_devide(rgb, a, epsilon=1e-5):
  function torch_srgb_to_linear (line 26) | def torch_srgb_to_linear(x: torch.Tensor):
  function torch_linear_to_srgb (line 31) | def torch_linear_to_srgb(x):
  function numpy_image_linear_to_srgb (line 36) | def numpy_image_linear_to_srgb(image):
  function numpy_image_srgb_to_linear (line 47) | def numpy_image_srgb_to_linear(image):
  function pytorch_rgb_to_numpy_image (line 58) | def pytorch_rgb_to_numpy_image(torch_image: Tensor, min_pixel_value=-1.0...
  function pytorch_rgba_to_numpy_image_greenscreen (line 69) | def pytorch_rgba_to_numpy_image_greenscreen(torch_image: Tensor,
  function pytorch_rgba_to_numpy_image (line 90) | def pytorch_rgba_to_numpy_image(
  function pil_image_has_transparency (line 111) | def pil_image_has_transparency(pil_image):
  function extract_numpy_image_from_PIL_image (line 127) | def extract_numpy_image_from_PIL_image(pil_image, scale=2.0, offset=-1.0,
  function extract_numpy_image_from_PIL_image_with_pytorch_layout (line 152) | def extract_numpy_image_from_PIL_image_with_pytorch_layout(pil_image, sc...
  function extract_numpy_image_from_filelike_with_pytorch_layout (line 165) | def extract_numpy_image_from_filelike_with_pytorch_layout(file, scale=2....
  function extract_numpy_image_from_filelike (line 173) | def extract_numpy_image_from_filelike(file, scale=1.0, offset=0.0,
  function extract_pytorch_image_from_filelike (line 183) | def extract_pytorch_image_from_filelike(file, scale=2.0, offset=-1.0, pr...
  function extract_pytorch_image_from_PIL_image (line 194) | def extract_pytorch_image_from_PIL_image(pil_image, scale=2.0, offset=-1...
  function convert_pytorch_image_to_zero_to_one_numpy_image (line 201) | def convert_pytorch_image_to_zero_to_one_numpy_image(
  function convert_zero_to_one_numpy_image_to_PIL_image (line 211) | def convert_zero_to_one_numpy_image_to_PIL_image(
  function save_numpy_image (line 233) | def save_numpy_image(numpy_image, file_name: str, save_straight_alpha=Tr...
  function resize_PIL_image (line 239) | def resize_PIL_image(pil_image, size=(256, 256)):

FILE: src/tha4/shion/base/loss/computed_scale_loss.py
  class ComputedScaleLoss (line 7) | class ComputedScaleLoss(Loss):
    method __init__ (line 8) | def __init__(self,
    method compute (line 16) | def compute(self, state: ComputationState, log_func: Optional[Callable...

FILE: src/tha4/shion/base/loss/computed_scaled_l2_loss.py
  class ComputedScaledL2Loss (line 7) | class ComputedScaledL2Loss(Loss):
    method __init__ (line 8) | def __init__(self,
    method compute (line 18) | def compute(

FILE: src/tha4/shion/base/loss/l1_loss.py
  class L1Loss (line 9) | class L1Loss(Loss):
    method __init__ (line 10) | def __init__(self,
    method compute (line 18) | def compute(self, state: ComputationState, log_func: Optional[Callable...
  class ListL1Loss (line 27) | class ListL1Loss(Loss):
    method __init__ (line 28) | def __init__(self,
    method compute (line 36) | def compute(self, state: ComputationState, log_func: Optional[Callable...
  class MaskedL1Loss (line 49) | class MaskedL1Loss(Loss):
    method __init__ (line 50) | def __init__(self,
    method compute (line 60) | def compute(self, state: ComputationState, log_func: Optional[Callable...

FILE: src/tha4/shion/base/loss/l2_loss.py
  class L2Loss (line 7) | class L2Loss(Loss):
    method __init__ (line 8) | def __init__(self,
    method compute (line 16) | def compute(

FILE: src/tha4/shion/base/loss/sum_loss.py
  class SumLoss (line 10) | class SumLoss(Loss):
    method __init__ (line 11) | def __init__(self, losses: List[Tuple[str, Loss]]):
    method compute (line 14) | def compute(self,

FILE: src/tha4/shion/base/loss/time_dependently_weighted_loss.py
  class TimeDependentlyWeightedLoss (line 9) | class TimeDependentlyWeightedLoss(Loss):
    method __init__ (line 10) | def __init__(self,
    method compute (line 18) | def compute(self,

FILE: src/tha4/shion/base/module_accumulators.py
  function accumulate_modules (line 10) | def accumulate_modules(new_module: Module, accumulated_module: Module, b...
  class DecayAccumulator (line 23) | class DecayAccumulator(ModuleAccumulator):
    method __init__ (line 24) | def __init__(self, decay: float = 0.999):
    method accumulate (line 27) | def accumulate(self, module: Module, output: Module, examples_seen_so_...

FILE: src/tha4/shion/base/optimizer_factories.py
  class AdamOptimizerFactory (line 9) | class AdamOptimizerFactory(OptimizerFactory):
    method __init__ (line 10) | def __init__(self, betas: Tuple[float, float] = (0.9, 0.999), epsilon:...
    method create (line 16) | def create(self, parameters: Iterable[Parameter]) -> Optimizer:
  class AdamWOptimizerFactory (line 20) | class AdamWOptimizerFactory(OptimizerFactory):
    method __init__ (line 21) | def __init__(self, betas: Tuple[float, float] = (0.9, 0.999), epsilon:...
    method create (line 27) | def create(self, parameters: Iterable[Parameter]) -> Optimizer:
  class SparseAdamOptimizerFactory (line 31) | class SparseAdamOptimizerFactory(OptimizerFactory):
    method __init__ (line 32) | def __init__(self, betas: Tuple[float, float] = (0.9, 0.999), epsilon:...
    method create (line 37) | def create(self, parameters: Iterable[Parameter]) -> Optimizer:
  class RMSpropOptimizerFactory (line 41) | class RMSpropOptimizerFactory(OptimizerFactory):
    method __init__ (line 42) | def __init__(self):
    method create (line 45) | def create(self, parameters: Iterable[Parameter]) -> Optimizer:

FILE: src/tha4/shion/base/protocol/single_network_from_batch_input_computation_protocol.py
  class SingleNetworkBatchInputComputationProtocol (line 9) | class SingleNetworkBatchInputComputationProtocol(CachedComputationProtoc...
    method __init__ (line 10) | def __init__(self,
    method compute_output (line 21) | def compute_output(self, key: str, state: ComputationState) -> Any:

FILE: src/tha4/shion/base/training/single_network.py
  class SingleNetworkTrainingProtocol (line 18) | class SingleNetworkTrainingProtocol(TrainingProtocol):
    method __init__ (line 19) | def __init__(self,
    method get_optimizer_factories (line 36) | def get_optimizer_factories(self) -> Dict[str, OptimizerFactory]:
    method get_checkpoint_examples (line 39) | def get_checkpoint_examples(self) -> List[int]:
    method get_random_seed (line 42) | def get_random_seed(self) -> int:
    method get_batch_size (line 45) | def get_batch_size(self) -> int:
    method get_learning_rate (line 48) | def get_learning_rate(self, examples_seen_so_far: int) -> Dict[str, fl...
    method run_training_iteration (line 51) | def run_training_iteration(
  class SingleNetworkValidationProtocol (line 76) | class SingleNetworkValidationProtocol(ValidationProtocol):
    method __init__ (line 77) | def __init__(
    method get_batch_size (line 87) | def get_batch_size(self, ) -> int:
    method get_examples_per_validation_iteration (line 90) | def get_examples_per_validation_iteration(self) -> int:
    method run_validation_iteration (line 93) | def run_validation_iteration(

FILE: src/tha4/shion/base/training/single_network_with_minibatch.py
  class SingleNetworkWithMinibatchTrainingProtocol (line 18) | class SingleNetworkWithMinibatchTrainingProtocol(TrainingProtocol):
    method __init__ (line 19) | def __init__(self,
    method get_optimizer_factories (line 39) | def get_optimizer_factories(self) -> Dict[str, OptimizerFactory]:
    method get_checkpoint_examples (line 42) | def get_checkpoint_examples(self) -> List[int]:
    method get_random_seed (line 45) | def get_random_seed(self) -> int:
    method get_batch_size (line 48) | def get_batch_size(self) -> int:
    method get_learning_rate (line 51) | def get_learning_rate(self, examples_seen_so_far: int) -> Dict[str, fl...
    method run_training_iteration (line 54) | def run_training_iteration(

FILE: src/tha4/shion/base/training/two_networks_training_protocol.py
  class TwoNetworksWithMinibatchTrainingProtocol (line 14) | class TwoNetworksWithMinibatchTrainingProtocol(TrainingProtocol):
    method __init__ (line 15) | def __init__(self,
    method get_optimizer_factories (line 41) | def get_optimizer_factories(self) -> Dict[str, OptimizerFactory]:
    method get_checkpoint_examples (line 44) | def get_checkpoint_examples(self) -> List[int]:
    method get_random_seed (line 47) | def get_random_seed(self) -> int:
    method get_batch_size (line 50) | def get_batch_size(self) -> int:
    method get_learning_rate (line 53) | def get_learning_rate(self, examples_seen_so_far: int) -> Dict[str, fl...
    method run_training_iteration (line 56) | def run_training_iteration(

FILE: src/tha4/shion/core/cached_computation.py
  class ComputationState (line 9) | class ComputationState:
    method __init__ (line 10) | def __init__(self,
  function create_get_item_func (line 27) | def create_get_item_func(func: CachedComputationFunc, index):
  function create_batch_element_func (line 35) | def create_batch_element_func(index: int) -> TensorCachedComputationFunc:
  class CachedComputationProtocol (line 42) | class CachedComputationProtocol(ABC):
    method get_output (line 43) | def get_output(self, key: str, state: ComputationState) -> Any:
    method compute_output (line 52) | def compute_output(self, key: str, state: ComputationState) -> Any:
    method get_output_func (line 55) | def get_output_func(self, key: str) -> CachedComputationFunc:
  class ComposableCachedComputationProtocol (line 65) | class ComposableCachedComputationProtocol(CachedComputationProtocol):
    method __init__ (line 66) | def __init__(self, computation_steps: Optional[Dict[str, ComposableCac...
    method compute_output (line 71) | def compute_output(self, key: str, state: ComputationState) -> Any:
  function batch_indexing_func (line 78) | def batch_indexing_func(index: int):
  function proxy_func (line 85) | def proxy_func(key: str):
  function output_array_indexing_func (line 92) | def output_array_indexing_func(key: str, index: int):
  function add_step (line 99) | def add_step(step_dict: Dict[str, ComposableCachedComputationStep], name...
  function zeros_like_func (line 107) | def zeros_like_func(key: str):

FILE: src/tha4/shion/core/load_save.py
  function torch_save (line 6) | def torch_save(content, file_name):
  function torch_load (line 12) | def torch_load(file_name):

FILE: src/tha4/shion/core/loss.py
  class Loss (line 9) | class Loss(ABC):
    method compute (line 11) | def compute(

FILE: src/tha4/shion/core/module_accumulator.py
  class ModuleAccumulator (line 7) | class ModuleAccumulator(ABC):
    method accumulate (line 9) | def accumulate(self, module: Module, output: Module, examples_seen_so_...

FILE: src/tha4/shion/core/module_factory.py
  class ModuleFactory (line 6) | class ModuleFactory(ABC):
    method create (line 8) | def create(self) -> Module:

FILE: src/tha4/shion/core/optimizer_factory.py
  class OptimizerFactory (line 7) | class OptimizerFactory(ABC):
    method create (line 9) | def create(self, parameters: Iterable[Parameter]):

FILE: src/tha4/shion/core/training/distrib/device_mapper.py
  class SimpleCudaDeviceMapper (line 6) | class SimpleCudaDeviceMapper:
    method __call__ (line 7) | def __call__(self, rank, local_rank):
  class UserSpecifiedLocalRankToDeviceMapper (line 11) | class UserSpecifiedLocalRankToDeviceMapper:
    method __init__ (line 12) | def __init__(self, device_map: Dict[int, torch.device]):
    method __call__ (line 15) | def __call__(self, rank, local_rank):

FILE: src/tha4/shion/core/training/distrib/distributed_trainer.py
  class DistributedTrainer (line 31) | class DistributedTrainer:
    method __init__ (line 32) | def __init__(self,
    method get_sample_output_data_file_name (line 82) | def get_sample_output_data_file_name(self):
    method save_sample_output_data (line 85) | def save_sample_output_data(self, rank: int, device: torch.device):
    method load_sample_output_data (line 97) | def load_sample_output_data(self, rank: int, device: torch.device):
    method get_snapshot_prefix (line 104) | def get_snapshot_prefix(self) -> str:
    method can_load_training_state (line 107) | def can_load_training_state(self, prefix: str, world_size: int) -> bool:
    method load_training_state (line 115) | def load_training_state(self, prefix, rank: int, local_rank: int, devi...
    method checkpoint_prefix (line 126) | def checkpoint_prefix(prefix: str, checkpoint_index: int) -> str:
    method get_checkpoint_prefix (line 129) | def get_checkpoint_prefix(self, checkpoint_index) -> str:
    method get_initial_training_state (line 132) | def get_initial_training_state(self, rank: int, local_rank: int, devic...
    method load_previous_training_state (line 145) | def load_previous_training_state(self,
    method get_log_dir (line 171) | def get_log_dir(self):
    method get_summary_writer (line 177) | def get_summary_writer(self, rank: int) -> Optional[SummaryWriter]:
    method get_effective_training_epoch_size (line 184) | def get_effective_training_epoch_size(self, world_size: int):
    method get_training_epoch_index (line 191) | def get_training_epoch_index(self, examples_seen_so_far: int, world_si...
    method get_next_training_batch (line 196) | def get_next_training_batch(self, examples_seen_so_far: int, world_siz...
    method get_next_checkpoint_num_examples (line 226) | def get_next_checkpoint_num_examples(self, examples_seen_so_far) -> int:
    method get_next_snapshot_num_examples (line 232) | def get_next_snapshot_num_examples(self, examples_seen_so_far) -> int:
    method get_next_validation_num_examples (line 235) | def get_next_validation_num_examples(self, examples_seen_so_far) -> int:
    method get_next_sample_output_num_examples (line 241) | def get_next_sample_output_num_examples(self, examples_seen_so_far) ->...
    method get_next_num_examples (line 247) | def get_next_num_examples(self, examples_seen_so_far) -> Dict[str, int]:
    method get_next_validation_batch (line 255) | def get_next_validation_batch(self, device: torch.device):
    method get_checkpoint_index_to_save (line 274) | def get_checkpoint_index_to_save(self, examples_seen_so_far: int) -> int:
    method barrier (line 281) | def barrier(self, local_rank: int):
    method train (line 287) | def train(self,
    method get_default_arg_parser (line 392) | def get_default_arg_parser() -> argparse.ArgumentParser:
    method run_with_args (line 398) | def run_with_args(trainer_factory: Callable[[int, str], 'DistributedTr...
    method run (line 411) | def run(trainer_factory: Callable[[int, str], 'DistributedTrainer'],

FILE: src/tha4/shion/core/training/distrib/distributed_training_states.py
  class DistributedTrainingState (line 18) | class DistributedTrainingState:
    method __init__ (line 19) | def __init__(self,
    method get_examples_seen_so_far_file_name (line 30) | def get_examples_seen_so_far_file_name(prefix) -> str:
    method get_module_file_name (line 34) | def get_module_file_name(prefix, module_name) -> str:
    method get_accumulated_module_file_name (line 38) | def get_accumulated_module_file_name(prefix, module_name) -> str:
    method get_optimizer_file_name (line 42) | def get_optimizer_file_name(prefix, module_name) -> str:
    method get_rng_state_file_name (line 46) | def get_rng_state_file_name(prefix, rank: int):
    method mkdir (line 49) | def mkdir(self, prefix: str):
    method save_data (line 52) | def save_data(self, prefix: str, rank: int):
    method save (line 83) | def save(self, prefix: str, rank: int, barrier_func: Callable[[], None]):
    method get_examples_seen_so_far (line 91) | def get_examples_seen_so_far(prefix: str) -> int:
    method load (line 97) | def load(
    method new (line 155) | def new(module_factories: Dict[str, ModuleFactory],
    method can_load (line 201) | def can_load(prefix: str,

FILE: src/tha4/shion/core/training/distrib/distributed_training_tasks.py
  function get_torchrun_executable (line 11) | def get_torchrun_executable():
  function run_distributed_training_script (line 15) | def run_distributed_training_script(
  class RdzvConfig (line 33) | class RdzvConfig:
    method __init__ (line 34) | def __init__(self, id: int, port: int):
  function run_standalone_distributed_training_script (line 39) | def run_standalone_distributed_training_script(
  function define_distributed_training_tasks (line 60) | def define_distributed_training_tasks(
  function define_standalone_distributed_training_tasks (line 83) | def define_standalone_distributed_training_tasks(

FILE: src/tha4/shion/core/training/sample_output_protocol.py
  class SampleOutputProtocol (line 9) | class SampleOutputProtocol(ABC):
    method get_examples_per_sample_output (line 11) | def get_examples_per_sample_output(self) -> int:
    method get_random_seed (line 15) | def get_random_seed(self) -> int:
    method get_sample_output_data (line 19) | def get_sample_output_data(self, validation_dataset: Dataset, device: ...
    method save_sample_output_data (line 23) | def save_sample_output_data(
  class AbstractSampleOutputProtocol (line 34) | class AbstractSampleOutputProtocol(SampleOutputProtocol, ABC):
    method __init__ (line 35) | def __init__(self, examples_per_sample_output: int, random_seed: int):
    method get_examples_per_sample_output (line 39) | def get_examples_per_sample_output(self) -> int:
    method get_random_seed (line 42) | def get_random_seed(self) -> int:

FILE: src/tha4/shion/core/training/single/training_states.py
  class TrainingState (line 17) | class TrainingState:
    method __init__ (line 18) | def __init__(self,
    method get_examples_seen_so_far_file_name (line 29) | def get_examples_seen_so_far_file_name(prefix) -> str:
    method get_module_file_name (line 33) | def get_module_file_name(prefix, module_name) -> str:
    method get_accumulated_module_file_name (line 37) | def get_accumulated_module_file_name(prefix, module_name) -> str:
    method get_optimizer_file_name (line 41) | def get_optimizer_file_name(prefix, module_name) -> str:
    method get_rng_state_file_name (line 45) | def get_rng_state_file_name(prefix):
    method save (line 48) | def save(self, prefix):
    method get_examples_seen_so_far (line 71) | def get_examples_seen_so_far(prefix: str) -> int:
    method load (line 77) | def load(prefix: str,
    method new (line 126) | def new(module_factories: Dict[str, ModuleFactory],
    method can_load (line 163) | def can_load(prefix: str,

FILE: src/tha4/shion/core/training/single/training_tasks.py
  class TrainingTasks (line 27) | class TrainingTasks:
    method __init__ (line 28) | def __init__(
    method get_sample_output_data_file_name (line 120) | def get_sample_output_data_file_name(self):
    method save_sample_output_data (line 123) | def save_sample_output_data(self):
    method get_module_file_name (line 132) | def get_module_file_name(self, checkpoint_index, module_name):
    method get_last_module_file_name (line 135) | def get_last_module_file_name(self, module_name):
    method get_log_dir (line 138) | def get_log_dir(self):
    method get_summary_writer (line 144) | def get_summary_writer(self) -> SummaryWriter:
    method get_train_command_name (line 149) | def get_train_command_name(self) -> str:
    method get_snapshot_prefix (line 152) | def get_snapshot_prefix(self) -> str:
    method get_checkpoint_prefix (line 155) | def get_checkpoint_prefix(self, checkpoint_index) -> str:
    method can_load_training_state (line 158) | def can_load_training_state(self, prefix) -> bool:
    method load_training_state (line 165) | def load_training_state(self, prefix) -> TrainingState:
    method get_initial_training_state (line 173) | def get_initial_training_state(self) -> TrainingState:
    method load_previous_training_state (line 184) | def load_previous_training_state(self, target_checkpoint_examples: int...
    method get_next_checkpoint_num_examples (line 200) | def get_next_checkpoint_num_examples(self, examples_seen_so_far) -> int:
    method get_next_snapshot_num_examples (line 206) | def get_next_snapshot_num_examples(self, examples_seen_so_far) -> int:
    method get_next_validation_num_examples (line 209) | def get_next_validation_num_examples(self, examples_seen_so_far) -> int:
    method get_next_sample_output_num_examples (line 215) | def get_next_sample_output_num_examples(self, examples_seen_so_far) ->...
    method get_next_num_examples (line 221) | def get_next_num_examples(self, examples_seen_so_far) -> Dict[str, int]:
    method get_checkpoint_index_to_save (line 229) | def get_checkpoint_index_to_save(self, examples_seen_so_far: int) -> int:
    method get_next_training_batch (line 236) | def get_next_training_batch(self):
    method get_next_validation_batch (line 253) | def get_next_validation_batch(self):
    method get_checkpoint_index (line 272) | def get_checkpoint_index(self, target_checkpoint_examples: int):
    method train (line 275) | def train(self, target_checkpoint_examples: Optional[int] = None):

FILE: src/tha4/shion/core/training/swarm/swarm_training_tasks.py
  function define_standalone_swarm_training_tasks (line 10) | def define_standalone_swarm_training_tasks(

FILE: src/tha4/shion/core/training/swarm/swarm_unit_trainer.py
  class SwarmUnitTrainer (line 26) | class SwarmUnitTrainer:
    method __init__ (line 27) | def __init__(self,
    method get_sample_output_data_file_name (line 75) | def get_sample_output_data_file_name(self):
    method save_sample_output_data (line 78) | def save_sample_output_data(self, device: torch.device):
    method load_sample_output_data (line 88) | def load_sample_output_data(self, device: torch.device):
    method get_snapshot_prefix (line 92) | def get_snapshot_prefix(self) -> str:
    method can_load_training_state (line 95) | def can_load_training_state(self, prefix: str) -> bool:
    method load_training_state (line 102) | def load_training_state(self, prefix, device: torch.device) -> Trainin...
    method checkpoint_prefix (line 111) | def checkpoint_prefix(prefix: str, checkpoint_index: int) -> str:
    method get_checkpoint_prefix (line 114) | def get_checkpoint_prefix(self, checkpoint_index) -> str:
    method get_initial_training_state (line 117) | def get_initial_training_state(self, device: torch.device) -> Training...
    method load_previous_training_state (line 128) | def load_previous_training_state(self,
    method get_log_dir (line 151) | def get_log_dir(self):
    method get_summary_writer (line 157) | def get_summary_writer(self) -> Optional[SummaryWriter]:
    method get_next_training_batch (line 162) | def get_next_training_batch(self, device: torch.device):
    method get_next_checkpoint_num_examples (line 179) | def get_next_checkpoint_num_examples(self, examples_seen_so_far) -> int:
    method get_next_snapshot_num_examples (line 185) | def get_next_snapshot_num_examples(self, examples_seen_so_far) -> int:
    method get_next_validation_num_examples (line 188) | def get_next_validation_num_examples(self, examples_seen_so_far) -> int:
    method get_next_sample_output_num_examples (line 194) | def get_next_sample_output_num_examples(self, examples_seen_so_far) ->...
    method get_next_num_examples (line 200) | def get_next_num_examples(self, examples_seen_so_far) -> Dict[str, int]:
    method get_next_validation_batch (line 208) | def get_next_validation_batch(self, device: torch.device):
    method get_checkpoint_index_to_save (line 227) | def get_checkpoint_index_to_save(self, examples_seen_so_far: int) -> int:
    method train (line 234) | def train(self,
    method run (line 332) | def run(trainer_factory: Dict[int, Callable[[], 'SwarmUnitTrainer']],

FILE: src/tha4/shion/core/training/training_protocol.py
  class TrainingProtocol (line 12) | class TrainingProtocol(ABC):
    method get_optimizer_factories (line 14) | def get_optimizer_factories(self) -> Dict[str, OptimizerFactory]:
    method get_checkpoint_examples (line 18) | def get_checkpoint_examples(self) -> List[int]:
    method get_random_seed (line 22) | def get_random_seed(self) -> int:
    method get_batch_size (line 26) | def get_batch_size(self) -> int:
    method get_learning_rate (line 30) | def get_learning_rate(self, examples_seen_so_far: int) -> Dict[str, fl...
    method run_training_iteration (line 34) | def run_training_iteration(
  class AbstractTrainingProtocol (line 47) | class AbstractTrainingProtocol(TrainingProtocol, ABC):
    method __init__ (line 48) | def __init__(self,
    method get_optimizer_factories (line 60) | def get_optimizer_factories(self) -> Dict[str, OptimizerFactory]:
    method get_checkpoint_examples (line 63) | def get_checkpoint_examples(self) -> List[int]:
    method get_random_seed (line 66) | def get_random_seed(self) -> int:
    method get_batch_size (line 69) | def get_batch_size(self) -> int:
    method get_learning_rate (line 72) | def get_learning_rate(self, examples_seen_so_far: int) -> Dict[str, fl...

FILE: src/tha4/shion/core/training/util.py
  function optimizer_to_device (line 8) | def optimizer_to_device(optim: Optimizer, device: torch.device):
  function zero_module (line 15) | def zero_module(module: Module):
  function get_least_greater_multiple (line 21) | def get_least_greater_multiple(x: int, m: int) -> int:
  function create_log_func (line 32) | def create_log_func(summary_writer, prefix: str, examples_seen_so_far: i...
  function set_learning_rate (line 39) | def set_learning_rate(module, lr):

FILE: src/tha4/shion/core/training/validation_protocol.py
  class ValidationProtocol (line 10) | class ValidationProtocol(ABC):
    method get_batch_size (line 12) | def get_batch_size(self) -> int:
    method get_examples_per_validation_iteration (line 16) | def get_examples_per_validation_iteration(self) -> int:
    method run_validation_iteration (line 20) | def run_validation_iteration(
  class AbstractValidationProtocol (line 32) | class AbstractValidationProtocol(ValidationProtocol, ABC):
    method __init__ (line 33) | def __init__(self,
    method get_batch_size (line 39) | def get_batch_size(self) -> int:
    method get_examples_per_validation_iteration (line 42) | def get_examples_per_validation_iteration(self) -> int:

FILE: src/tha4/shion/nn00/block_args.py
  class BlockArgs (line 12) | class BlockArgs:
    method __init__ (line 13) | def __init__(

FILE: src/tha4/shion/nn00/conv.py
  function create_conv7 (line 9) | def create_conv7(
  function create_conv3 (line 19) | def create_conv3(in_channels: int,
  function create_conv1 (line 28) | def create_conv1(
  function create_conv7_block (line 37) | def create_conv7_block(
  function create_conv3_block (line 53) | def create_conv3_block(
  function create_downsample_block (line 69) | def create_downsample_block(
  function create_upsample_block (line 91) | def create_upsample_block(

FILE: src/tha4/shion/nn00/initialization_funcs.py
  class HeInitialization (line 9) | class HeInitialization:
    method __init__ (line 10) | def __init__(self, a: int = 0, mode: str = 'fan_in', nonlinearity: str...
    method __call__ (line 15) | def __call__(self, module: Module) -> Module:
  class NormalInitialization (line 21) | class NormalInitialization:
    method __init__ (line 22) | def __init__(self, mean: float = 0.0, std: float = 1.0):
    method __call__ (line 26) | def __call__(self, module: Module) -> Module:
  class XavierInitialization (line 32) | class XavierInitialization:
    method __init__ (line 33) | def __init__(self, gain: float = 1.0):
    method __call__ (line 36) | def __call__(self, module: Module) -> Module:
  class ZeroInitialization (line 42) | class ZeroInitialization:
    method __call__ (line 43) | def __call__(self, module: Module) -> Module:
  class NoInitialization (line 49) | class NoInitialization:
    method __call__ (line 50) | def __call__(self, module: Module) -> Module:
  function resolve_initialization_func (line 54) | def resolve_initialization_func(initialization: Optional[Callable[[Modul...

FILE: src/tha4/shion/nn00/linear_module_args.py
  class LinearModuleArgs (line 9) | class LinearModuleArgs:
    method __init__ (line 10) | def __init__(
    method wrap_linear_module (line 17) | def wrap_linear_module(self, module: Module) -> Module:
  function wrap_linear_module (line 24) | def wrap_linear_module(module: Module, linear_module_args: Optional[Line...

FILE: src/tha4/shion/nn00/nonlinearity_factories.py
  class ReLUFactory (line 10) | class ReLUFactory(ModuleFactory):
    method __init__ (line 11) | def __init__(self, inplace: bool = False):
    method create (line 14) | def create(self) -> Module:
  class LeakyReLUFactory (line 18) | class LeakyReLUFactory(ModuleFactory):
    method __init__ (line 19) | def __init__(self, inplace: bool = False, negative_slope: float = 1e-2):
    method create (line 23) | def create(self) -> Module:
  class ELUFactory (line 27) | class ELUFactory(ModuleFactory):
    method __init__ (line 28) | def __init__(self, inplace: bool = False, alpha: float = 1.0):
    method create (line 32) | def create(self) -> Module:
  class ReLU6Factory (line 36) | class ReLU6Factory(ModuleFactory):
    method __init__ (line 37) | def __init__(self, inplace: bool = False):
    method create (line 40) | def create(self) -> Module:
  class SiLUFactory (line 44) | class SiLUFactory(ModuleFactory):
    method __init__ (line 45) | def __init__(self, inplace: bool = False):
    method create (line 48) | def create(self) -> Module:
  class HardswishFactory (line 52) | class HardswishFactory(ModuleFactory):
    method __init__ (line 53) | def __init__(self, inplace: bool = False):
    method create (line 56) | def create(self) -> Module:
  class TanhFactory (line 60) | class TanhFactory(ModuleFactory):
    method create (line 61) | def create(self) -> Module:
  class SigmoidFactory (line 65) | class SigmoidFactory(ModuleFactory):
    method create (line 66) | def create(self) -> Module:
  class Swish (line 70) | class Swish(Module):
    method __init__ (line 71) | def __init__(self):
    method forward (line 74) | def forward(self, x: Tensor):
  class SwishFactory (line 78) | class SwishFactory(ModuleFactory):
    method create (line 79) | def create(self) -> Module:
  function resolve_nonlinearity_factory (line 83) | def resolve_nonlinearity_factory(nonlinearity_factory: Optional[ModuleFa...

FILE: src/tha4/shion/nn00/normalization_layer_factories.py
  class Bias2d (line 12) | class Bias2d(Module):
    method __init__ (line 13) | def __init__(self, num_features: int):
    method forward (line 18) | def forward(self, x):
  class NoNorm2dFactory (line 22) | class NoNorm2dFactory(NormalizationLayerFactory):
    method __init__ (line 23) | def __init__(self):
    method create (line 26) | def create(self, num_features: int, affine: bool = True) -> Module:
  class BatchNorm2dFactory (line 33) | class BatchNorm2dFactory(NormalizationLayerFactory):
    method __init__ (line 34) | def __init__(self,
    method get_weight_mean (line 43) | def get_weight_mean(self):
    method get_weight_std (line 49) | def get_weight_std(self):
    method create (line 55) | def create(self, num_features: int, affine: bool = True) -> Module:
  class InstanceNorm2dFactory (line 65) | class InstanceNorm2dFactory(NormalizationLayerFactory):
    method __init__ (line 66) | def __init__(self):
    method create (line 69) | def create(self, num_features: int, affine: bool = True) -> Module:
  class LayerNorm2d (line 73) | class LayerNorm2d(Module):
    method __init__ (line 74) | def __init__(self, channels: int, affine: bool = True):
    method forward (line 83) | def forward(self, x):
  class LayerNorm2dFactory (line 89) | class LayerNorm2dFactory(NormalizationLayerFactory):
    method __init__ (line 90) | def __init__(self):
    method create (line 93) | def create(self, num_features: int, affine: bool = True) -> Module:
  class GroupNormFactory (line 97) | class GroupNormFactory(NormalizationLayerFactory):
    method __init__ (line 98) | def __init__(self, num_groups: int, eps=1e-6):
    method create (line 103) | def create(self, num_features: int, affine: bool = True) -> Module:
  function resolve_normalization_layer_factory (line 107) | def resolve_normalization_layer_factory(factory: Optional['Normalization...

FILE: src/tha4/shion/nn00/normalization_layer_factory.py
  class NormalizationLayerFactory (line 6) | class NormalizationLayerFactory(ABC):
    method __init__ (line 7) | def __init__(self):
    method create (line 11) | def create(self, num_features: int, affine: bool = True) -> Module:

FILE: src/tha4/shion/nn00/pass_through.py
  class PassThrough (line 4) | class PassThrough(Module):
    method __init__ (line 5) | def __init__(self):
    method forward (line 8) | def forward(self, x):

FILE: src/tha4/shion/nn00/resnet_block.py
  class ResnetBlock (line 10) | class ResnetBlock(Module):
    method __init__ (line 11) | def __init__(self,
    method forward (line 51) | def forward(self, x):
Condensed preview — 185 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (717K chars).
[
  {
    "path": ".gitignore",
    "chars": 491,
    "preview": "# Compiled class file\n*.class\n\n# Log file\n*.log\n\n# BlueJ files\n*.ctxt\n\n# Mobile Tools for Java (J2ME)\n.mtj.tmp/\n\n# Packa"
  },
  {
    "path": ".python-version",
    "chars": 7,
    "preview": "3.10.11"
  },
  {
    "path": "LICENSE",
    "chars": 1087,
    "preview": "MIT License\r\n\r\nCopyright (c) 2024 pixiv Inc.\r\n\r\nPermission is hereby granted, free of charge, to any person obtaining a "
  },
  {
    "path": "README.md",
    "chars": 14573,
    "preview": "# Demo Code for \"Talking Head(?) Anime from a Single Image 4: Improved Model and Its Distillation\"\n\nThis repository cont"
  },
  {
    "path": "bin/activate-venv.bat",
    "chars": 21,
    "preview": "venv\\Scripts\\activate"
  },
  {
    "path": "bin/activate-venv.sh",
    "chars": 37,
    "preview": "#! /bin/bash\nsource venv/bin/activate"
  },
  {
    "path": "bin/run",
    "chars": 61,
    "preview": "#! /bin/bash\nexport PYTHONPATH=$(pwd)/src\nvenv/bin/python $@\n"
  },
  {
    "path": "bin/run.bat",
    "chars": 53,
    "preview": "set PYTHONPATH=%cd%\\src\r\nvenv\\Scripts\\python.exe %*\r\n"
  },
  {
    "path": "distiller-ui-doc/index.html",
    "chars": 2734,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation</title>\r\n</head>\r\n<body>\r\n<h1>How to use Distiller UI</h"
  },
  {
    "path": "distiller-ui-doc/params/body_morpher_batch_size.html",
    "chars": 589,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: body_morpher_batch_size</title>\r\n</head>\r\n<body>\r\n<h1><"
  },
  {
    "path": "distiller-ui-doc/params/body_morpher_random_seed_0.html",
    "chars": 511,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: body_morpher_random_seed_0</title>\r\n</head>\r\n<body>\r\n<h"
  },
  {
    "path": "distiller-ui-doc/params/body_morpher_random_seed_1.html",
    "chars": 511,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: body_morpher_random_seed_1</title>\r\n</head>\r\n<body>\r\n<h"
  },
  {
    "path": "distiller-ui-doc/params/character_image_file_name.html",
    "chars": 2693,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: character_image_file_name</title>\r\n</head>\r\n<body>\r\n<h1"
  },
  {
    "path": "distiller-ui-doc/params/face_mask_image_file_name.html",
    "chars": 1686,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: face_mask_image_file_name</title>\r\n</head>\r\n<body>\r\n<h1"
  },
  {
    "path": "distiller-ui-doc/params/face_morpher_batch_size.html",
    "chars": 589,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: face_morpher_batch_size</title>\r\n</head>\r\n<body>\r\n<h1><"
  },
  {
    "path": "distiller-ui-doc/params/face_morpher_random_seed_0.html",
    "chars": 511,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: face_morpher_random_seed_0</title>\r\n</head>\r\n<body>\r\n<h"
  },
  {
    "path": "distiller-ui-doc/params/face_morpher_random_seed_1.html",
    "chars": 511,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: face_morpher_random_seed_1</title>\r\n</head>\r\n<body>\r\n<h"
  },
  {
    "path": "distiller-ui-doc/params/num_cpu_workers.html",
    "chars": 451,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: face_mask_image_file_name</title>\r\n</head>\r\n<body>\r\n<h1"
  },
  {
    "path": "distiller-ui-doc/params/num_gpus.html",
    "chars": 400,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: num_gpus</title>\r\n</head>\r\n<body>\r\n<h1><code>num_gpus</"
  },
  {
    "path": "distiller-ui-doc/params/num_training_examples_per_sample_output.html",
    "chars": 865,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: num_training_example_per_sample_output</title>\r\n</head>"
  },
  {
    "path": "distiller-ui-doc/params/prefix.html",
    "chars": 498,
    "preview": "<html lang=\"en\">\r\n<head>\r\n    <title>Distiller UI Documentation: prefix</title>\r\n</head>\r\n<body>\r\n<h1><code>prefix</code"
  },
  {
    "path": "docs/character_model_ifacialmocap_puppeteer.md",
    "chars": 2267,
    "preview": "# `character_model_ifacialmocap_puppeteer`\r\n\r\nThis program allows the user to control trained student models with their "
  },
  {
    "path": "docs/character_model_manual_poser.md",
    "chars": 830,
    "preview": "# `character_model_manual_poser`\r\n\r\nThis program allows the user to control trained student models with a graphical user"
  },
  {
    "path": "docs/character_model_mediapipe_puppeteer.md",
    "chars": 1520,
    "preview": "# `character_model_mediapipe_puppeteer`\r\n\r\nallows the user to control trained student models with their facial movement,"
  },
  {
    "path": "docs/distill.md",
    "chars": 3552,
    "preview": "# `distill`\r\n\r\nThis program trains a student model given a configuration file, a $512 \\times 512$ RGBA character image, "
  },
  {
    "path": "docs/distiller_ui.md",
    "chars": 950,
    "preview": "# `distiller_ui`\r\n\r\nThis program provides a user-friendly interface to the [`distill`](distill.md) program, allowing you"
  },
  {
    "path": "docs/full_manual_poser.md",
    "chars": 782,
    "preview": "# `full_manual_poser`\r\n\r\nThis program uses the full version of the Talking Head(?) Anime 4 system to animate character i"
  },
  {
    "path": "poetry/README.md",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "poetry/pyproject.toml",
    "chars": 884,
    "preview": "[tool.poetry]\r\nname = \"talking-head-anime-4-demo\"\r\nversion = \"0.1.0\"\r\ndescription = \"Demo code for Talking Head(?) Anime"
  },
  {
    "path": "src/tha4/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/app/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/app/character_model_ifacialmocap_puppeteer.py",
    "chars": 18708,
    "preview": "import os\r\nimport socket\r\nimport sys\r\nimport threading\r\nimport time\r\nfrom typing import Optional\r\n\r\nimport PIL.Image\r\n\r\n"
  },
  {
    "path": "src/tha4/app/character_model_manual_poser.py",
    "chars": 19403,
    "preview": "import logging\r\nimport os\r\nimport sys\r\nimport time\r\nfrom typing import List\r\n\r\nfrom tha4.charmodel.character_model impor"
  },
  {
    "path": "src/tha4/app/character_model_mediapipe_puppeteer.py",
    "chars": 18756,
    "preview": "import os\r\nimport sys\r\nimport threading\r\nimport time\r\nfrom typing import Optional\r\nimport PIL.Image\r\n\r\nimport cv2\r\nimpor"
  },
  {
    "path": "src/tha4/app/distill.py",
    "chars": 795,
    "preview": "import argparse\r\nimport logging\r\n\r\nfrom tha4.distiller.distiller_config import DistillerConfig\r\nfrom tha4.pytasuku.works"
  },
  {
    "path": "src/tha4/app/distiller_ui.py",
    "chars": 372,
    "preview": "import wx\r\n\r\nfrom tha4.app.distill import run_config\r\nfrom tha4.distiller.ui.distiller_ui_main_frame import DistillerUiM"
  },
  {
    "path": "src/tha4/app/full_manual_poser.py",
    "chars": 20773,
    "preview": "import logging\r\nimport os\r\nimport sys\r\nimport time\r\nfrom typing import List\r\n\r\nfrom tha4.shion.base.image_util import ex"
  },
  {
    "path": "src/tha4/charmodel/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/charmodel/character_model.py",
    "chars": 2927,
    "preview": "import json\r\nimport os.path\r\n\r\nimport PIL.Image\r\nimport torch\r\nfrom omegaconf import OmegaConf\r\n\r\nfrom tha4.shion.base.i"
  },
  {
    "path": "src/tha4/dataset/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/dataset/image_poses_and_aother_images_dataset.py",
    "chars": 1321,
    "preview": "from typing import List, Callable\r\n\r\nfrom torch import Tensor\r\nfrom torch.utils.data import Dataset\r\n\r\n\r\nclass ImagePose"
  },
  {
    "path": "src/tha4/distiller/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/distiller/config_based_training_tasks.py",
    "chars": 4161,
    "preview": "import logging\r\nimport os\r\nimport sys\r\nfrom typing import Callable, List, Optional\r\n\r\nfrom tha4.pytasuku.workspace impor"
  },
  {
    "path": "src/tha4/distiller/distill_body_morpher.py",
    "chars": 558,
    "preview": "import logging\r\n\r\nfrom tha4.shion.core.training.distrib.distributed_trainer import DistributedTrainer\r\nfrom tha4.distill"
  },
  {
    "path": "src/tha4/distiller/distill_face_morpher.py",
    "chars": 558,
    "preview": "import logging\r\n\r\nfrom tha4.shion.core.training.distrib.distributed_trainer import DistributedTrainer\r\nfrom tha4.distill"
  },
  {
    "path": "src/tha4/distiller/distiller_config.py",
    "chars": 14563,
    "preview": "import os.path\r\nimport shutil\r\nimport PIL.Image\r\nfrom dataclasses import dataclass\r\nfrom typing import Optional\r\n\r\nfrom "
  },
  {
    "path": "src/tha4/distiller/ui/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/distiller/ui/distiller_config_state.py",
    "chars": 7149,
    "preview": "import os.path\r\nfrom contextlib import contextmanager\r\nfrom pathlib import PurePath, Path\r\nfrom typing import Callable, "
  },
  {
    "path": "src/tha4/distiller/ui/distiller_ui_main_frame.py",
    "chars": 38362,
    "preview": "import multiprocessing\r\nimport random\r\nfrom contextlib import contextmanager\r\nfrom typing import Callable\r\nimport PIL.Im"
  },
  {
    "path": "src/tha4/image_util.py",
    "chars": 2487,
    "preview": "import math\r\n\r\nimport PIL.Image\r\nimport numpy\r\nimport torch\r\nfrom matplotlib import cm\r\nfrom tha4.shion.base.image_util "
  },
  {
    "path": "src/tha4/mocap/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/mocap/ifacialmocap_constants.py",
    "chars": 6702,
    "preview": "EYE_LOOK_IN_LEFT = \"eyeLookInLeft\"\r\nEYE_LOOK_OUT_LEFT = \"eyeLookOutLeft\"\r\nEYE_LOOK_DOWN_LEFT = \"eyeLookDownLeft\"\r\nEYE_LO"
  },
  {
    "path": "src/tha4/mocap/ifacialmocap_pose.py",
    "chars": 891,
    "preview": "from tha4.mocap.ifacialmocap_constants import BLENDSHAPE_NAMES, HEAD_BONE_X, HEAD_BONE_Y, HEAD_BONE_Z, \\\r\n    HEAD_BONE_"
  },
  {
    "path": "src/tha4/mocap/ifacialmocap_pose_converter.py",
    "chars": 307,
    "preview": "from abc import ABC, abstractmethod\r\nfrom typing import Dict, List\r\n\r\n\r\nclass IFacialMocapPoseConverter(ABC):\r\n    @abst"
  },
  {
    "path": "src/tha4/mocap/ifacialmocap_pose_converter_25.py",
    "chars": 30209,
    "preview": "import math\r\nimport time\r\nfrom enum import Enum\r\nfrom typing import Optional, Dict, List, Callable\r\n\r\nimport numpy\r\nimpo"
  },
  {
    "path": "src/tha4/mocap/ifacialmocap_v2.py",
    "chars": 4135,
    "preview": "import math\r\n\r\nfrom tha4.mocap.ifacialmocap_constants import BLENDSHAPE_NAMES, HEAD_BONE_X, HEAD_BONE_Y, HEAD_BONE_Z, \\\r"
  },
  {
    "path": "src/tha4/mocap/mediapipe_constants.py",
    "chars": 5902,
    "preview": "EYE_LOOK_IN_LEFT = \"eyeLookInLeft\"\r\nEYE_LOOK_OUT_LEFT = \"eyeLookOutLeft\"\r\nEYE_LOOK_DOWN_LEFT = \"eyeLookDownLeft\"\r\nEYE_LO"
  },
  {
    "path": "src/tha4/mocap/mediapipe_face_pose.py",
    "chars": 1422,
    "preview": "import json\r\nimport os\r\nfrom typing import Optional, Dict\r\n\r\nimport numpy\r\n\r\n\r\nclass MediaPipeFacePose:\r\n    KEY_BLENDSH"
  },
  {
    "path": "src/tha4/mocap/mediapipe_face_pose_converter.py",
    "chars": 495,
    "preview": "from abc import ABC, abstractmethod\r\nfrom typing import List, Callable, Optional\r\n\r\nfrom tha4.mocap.mediapipe_face_pose "
  },
  {
    "path": "src/tha4/mocap/mediapipe_face_pose_converter_00.py",
    "chars": 32214,
    "preview": "import math\r\nimport time\r\nfrom enum import Enum\r\nfrom typing import Optional, List, Callable\r\n\r\nimport numpy\r\nimport sci"
  },
  {
    "path": "src/tha4/nn/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/common/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/common/conv_block_factory.py",
    "chars": 2781,
    "preview": "from typing import Optional\r\n\r\nfrom tha4.nn.conv import create_conv7_block_from_block_args, create_conv3_block_from_bloc"
  },
  {
    "path": "src/tha4/nn/common/poser_args.py",
    "chars": 2510,
    "preview": "from typing import Optional\r\n\r\nfrom torch.nn import Sigmoid, Sequential, Tanh\r\n\r\nfrom tha4.nn.conv import create_conv3, "
  },
  {
    "path": "src/tha4/nn/common/poser_encoder_decoder_00.py",
    "chars": 5285,
    "preview": "import math\r\nfrom typing import Optional, List\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import ModuleLis"
  },
  {
    "path": "src/tha4/nn/common/poser_encoder_decoder_00_separable.py",
    "chars": 4098,
    "preview": "import math\r\nfrom typing import Optional, List\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import ModuleLis"
  },
  {
    "path": "src/tha4/nn/common/resize_conv_encoder_decoder.py",
    "chars": 4543,
    "preview": "import math\r\nfrom typing import Optional, List\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import Module, M"
  },
  {
    "path": "src/tha4/nn/common/resize_conv_unet.py",
    "chars": 4396,
    "preview": "from typing import Optional, List\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import ModuleList, Module, Up"
  },
  {
    "path": "src/tha4/nn/common/unet.py",
    "chars": 28058,
    "preview": "import math\r\nfrom enum import Enum\r\nfrom typing import Optional, List\r\n\r\nimport torch\r\nfrom torch import zero_, Tensor\r\n"
  },
  {
    "path": "src/tha4/nn/conv.py",
    "chars": 9301,
    "preview": "from typing import Optional, Union, Callable\r\n\r\nfrom torch.nn import Conv2d, Module, Sequential, ConvTranspose2d\r\n\r\nfrom"
  },
  {
    "path": "src/tha4/nn/eyebrow_decomposer/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/eyebrow_decomposer/eyebrow_decomposer_00.py",
    "chars": 3049,
    "preview": "from typing import List, Optional\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import Module\r\n\r\nfrom tha4.nn"
  },
  {
    "path": "src/tha4/nn/eyebrow_morphing_combiner/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/eyebrow_morphing_combiner/eyebrow_morphing_combiner_00.py",
    "chars": 3865,
    "preview": "from typing import List, Optional\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import Module\r\n\r\nfrom tha4.nn"
  },
  {
    "path": "src/tha4/nn/face_morpher/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/face_morpher/face_morpher_08.py",
    "chars": 9087,
    "preview": "import math\r\nfrom typing import List, Optional\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import ModuleLis"
  },
  {
    "path": "src/tha4/nn/image_processing_util.py",
    "chars": 2334,
    "preview": "import torch\r\nfrom torch import Tensor\r\nfrom torch.nn.functional import affine_grid, grid_sample\r\n\r\n\r\ndef apply_rgb_chan"
  },
  {
    "path": "src/tha4/nn/init_function.py",
    "chars": 2251,
    "preview": "from typing import Callable\r\n\r\nimport torch\r\nfrom torch import zero_\r\nfrom torch.nn import Module\r\nfrom torch.nn.init im"
  },
  {
    "path": "src/tha4/nn/morpher/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/morpher/morpher_00.py",
    "chars": 2714,
    "preview": "from typing import List\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import Module\r\n\r\nfrom tha4.shion.core.m"
  },
  {
    "path": "src/tha4/nn/nonlinearity_factory.py",
    "chars": 1978,
    "preview": "from typing import Optional\r\n\r\nfrom torch.nn import Module, ReLU, LeakyReLU, ELU, ReLU6, Hardswish, SiLU, Tanh, Sigmoid\r"
  },
  {
    "path": "src/tha4/nn/normalization.py",
    "chars": 3871,
    "preview": "from abc import ABC, abstractmethod\r\nfrom typing import Optional\r\n\r\nimport torch\r\nfrom torch import layer_norm\r\nfrom tor"
  },
  {
    "path": "src/tha4/nn/pass_through.py",
    "chars": 159,
    "preview": "from torch.nn import Module\r\n\r\n\r\nclass PassThrough(Module):\r\n    def __init__(self):\r\n        super().__init__()\r\n\r\n    "
  },
  {
    "path": "src/tha4/nn/resnet_block.py",
    "chars": 3102,
    "preview": "from typing import Optional\n\nimport torch\nfrom torch.nn import Module, Sequential, Parameter\n\nfrom tha4.shion.core.modul"
  },
  {
    "path": "src/tha4/nn/resnet_block_seperable.py",
    "chars": 3184,
    "preview": "from typing import Optional\r\n\r\nimport torch\r\nfrom torch.nn import Module, Sequential, Parameter\r\n\r\nfrom tha4.shion.core."
  },
  {
    "path": "src/tha4/nn/separable_conv.py",
    "chars": 5758,
    "preview": "from typing import Optional\r\n\r\nfrom torch.nn import Sequential, Conv2d, ConvTranspose2d, Module\r\n\r\nfrom tha4.nn.normaliz"
  },
  {
    "path": "src/tha4/nn/siren/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/siren/face_morpher/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/siren/face_morpher/siren_face_morpher_00.py",
    "chars": 2036,
    "preview": "from typing import Optional, List\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import Module\r\nfrom torch.nn."
  },
  {
    "path": "src/tha4/nn/siren/face_morpher/siren_face_morpher_00_trainer.py",
    "chars": 10200,
    "preview": "from typing import Dict, List, Optional, Callable\r\n\r\nimport torch\r\nfrom tha4.shion.base.dataset.lazy_tensor_dataset impo"
  },
  {
    "path": "src/tha4/nn/siren/face_morpher/siren_face_morpher_protocols_00.py",
    "chars": 10190,
    "preview": "import os\r\nfrom dataclasses import dataclass\r\nfrom typing import Dict, Any, Optional, Callable\r\n\r\nimport PIL.Image\r\nimpo"
  },
  {
    "path": "src/tha4/nn/siren/morpher/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/siren/morpher/siren_morpher_03.py",
    "chars": 5585,
    "preview": "from typing import List, Optional, Callable\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import Module, Modu"
  },
  {
    "path": "src/tha4/nn/siren/morpher/siren_morpher_03_trainer.py",
    "chars": 12033,
    "preview": "from enum import Enum\r\nfrom typing import Dict, List, Optional, Callable\r\n\r\nimport torch\r\nfrom tha4.shion.base.dataset.l"
  },
  {
    "path": "src/tha4/nn/siren/morpher/siren_morpher_protocols_03.py",
    "chars": 15914,
    "preview": "from dataclasses import dataclass\r\nfrom typing import Optional, List, Callable, Dict, Any\r\n\r\nimport torch\r\nfrom tha4.shi"
  },
  {
    "path": "src/tha4/nn/siren/vanilla/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/siren/vanilla/siren.py",
    "chars": 3280,
    "preview": "import math\r\nfrom typing import Callable, Optional, List\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import"
  },
  {
    "path": "src/tha4/nn/spectral_norm.py",
    "chars": 261,
    "preview": "from torch.nn import Module\r\nfrom torch.nn.utils import spectral_norm\r\n\r\n\r\ndef apply_spectral_norm(module: Module, use_s"
  },
  {
    "path": "src/tha4/nn/upscaler/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/nn/upscaler/upscaler_02.py",
    "chars": 4030,
    "preview": "from typing import List\r\n\r\nimport torch\r\nfrom torch import Tensor, zero_\r\nfrom torch.nn import Module, Conv2d\r\n\r\nfrom th"
  },
  {
    "path": "src/tha4/nn/util.py",
    "chars": 1831,
    "preview": "from typing import Optional, Callable, Union\r\n\r\nfrom torch.nn import Module\r\n\r\nfrom tha4.shion.core.module_factory impor"
  },
  {
    "path": "src/tha4/poser/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/poser/general_poser_02.py",
    "chars": 3433,
    "preview": "from typing import List, Optional, Tuple, Dict, Callable\r\n\r\nimport torch\r\nfrom tha4.shion.core.cached_computation import"
  },
  {
    "path": "src/tha4/poser/modes/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/poser/modes/mode_07.py",
    "chars": 14097,
    "preview": "from enum import Enum\r\nfrom typing import List, Dict, Optional\r\n\r\nimport torch\r\nfrom tha4.shion.core.cached_computation "
  },
  {
    "path": "src/tha4/poser/modes/mode_12.py",
    "chars": 9009,
    "preview": "from enum import Enum\r\nfrom typing import List, Dict, Optional, Any\r\n\r\nimport torch\r\nfrom tha4.shion.core.cached_computa"
  },
  {
    "path": "src/tha4/poser/modes/mode_14.py",
    "chars": 6257,
    "preview": "from dataclasses import dataclass\r\nfrom typing import List, Optional, Dict, Any\r\n\r\nimport torch\r\nfrom tha4.shion.core.ca"
  },
  {
    "path": "src/tha4/poser/modes/pose_parameters.py",
    "chars": 2965,
    "preview": "from tha4.poser.poser import PoseParameters, PoseParameterCategory\r\n\r\n\r\ndef get_pose_parameters():\r\n    return PoseParam"
  },
  {
    "path": "src/tha4/poser/poser.py",
    "chars": 4815,
    "preview": "from abc import ABC, abstractmethod\r\nfrom enum import Enum\r\nfrom typing import Tuple, List, Optional\r\n\r\nimport torch\r\nfr"
  },
  {
    "path": "src/tha4/pytasuku/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/pytasuku/indexed/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/pytasuku/indexed/all_tasks.py",
    "chars": 899,
    "preview": "from typing import Iterable\n\nfrom tha4.pytasuku.workspace import Workspace\nfrom tha4.pytasuku.indexed.indexed_tasks impo"
  },
  {
    "path": "src/tha4/pytasuku/indexed/bundled_indexed_file_tasks.py",
    "chars": 1012,
    "preview": "import abc\nfrom typing import Iterable, List\n\nfrom tha4.pytasuku.workspace import Workspace\nfrom tha4.pytasuku.indexed.i"
  },
  {
    "path": "src/tha4/pytasuku/indexed/indexed_file_tasks.py",
    "chars": 474,
    "preview": "import abc\nfrom typing import List\n\nfrom tha4.pytasuku.workspace import Workspace\nfrom tha4.pytasuku.indexed.indexed_tas"
  },
  {
    "path": "src/tha4/pytasuku/indexed/indexed_tasks.py",
    "chars": 642,
    "preview": "import abc\nfrom typing import List\n\nfrom tha4.pytasuku.workspace import Workspace\n\n\nclass IndexedTasks(abc.ABC):\n    def"
  },
  {
    "path": "src/tha4/pytasuku/indexed/no_index_command_tasks.py",
    "chars": 1159,
    "preview": "import abc\nfrom typing import List\n\nfrom tha4.pytasuku.workspace import Workspace\nfrom tha4.pytasuku.indexed.indexed_tas"
  },
  {
    "path": "src/tha4/pytasuku/indexed/no_index_file_tasks.py",
    "chars": 1591,
    "preview": "import abc\nfrom typing import List\n\nfrom tha4.pytasuku.workspace import Workspace\nfrom tha4.pytasuku.indexed.indexed_fil"
  },
  {
    "path": "src/tha4/pytasuku/indexed/one_index_file_tasks.py",
    "chars": 2011,
    "preview": "import abc\n\nfrom typing import List\n\nfrom tha4.pytasuku.workspace import Workspace\nfrom tha4.pytasuku.indexed.indexed_fi"
  },
  {
    "path": "src/tha4/pytasuku/indexed/simple_no_index_file_tasks.py",
    "chars": 993,
    "preview": "from typing import Callable, List, Optional\r\n\r\nfrom tha4.pytasuku.workspace import Workspace\r\nfrom tha4.pytasuku.indexed"
  },
  {
    "path": "src/tha4/pytasuku/indexed/two_indices_file_tasks.py",
    "chars": 2216,
    "preview": "import abc\nfrom typing import List\n\nfrom tha4.pytasuku.workspace import Workspace\nfrom tha4.pytasuku.indexed.indexed_fil"
  },
  {
    "path": "src/tha4/pytasuku/indexed/util.py",
    "chars": 2383,
    "preview": "import os\nfrom typing import Iterable, Dict, Callable, List\n\nfrom tha4.pytasuku.workspace import Workspace\nfrom tha4.pyt"
  },
  {
    "path": "src/tha4/pytasuku/task.py",
    "chars": 2864,
    "preview": "import os\nimport logging\nfrom typing import List\n\n\nclass Task:\n    def __init__(self, workspace: 'Workspace', name: str,"
  },
  {
    "path": "src/tha4/pytasuku/task_selector_ui.py",
    "chars": 4108,
    "preview": "from tkinter import Tk, BOTH, Button, RIGHT, Scrollbar\nfrom tkinter.ttk import Frame, Treeview\n\nfrom tha4.pytasuku.works"
  },
  {
    "path": "src/tha4/pytasuku/util.py",
    "chars": 412,
    "preview": "import os.path\nfrom typing import List\nimport logging\n\nfrom tha4.pytasuku.workspace import Workspace\n\n\ndef create_delete"
  },
  {
    "path": "src/tha4/pytasuku/workspace.py",
    "chars": 5054,
    "preview": "from contextlib import contextmanager\nfrom enum import Enum\nfrom typing import List\n\nfrom tha4.pytasuku.task import Task"
  },
  {
    "path": "src/tha4/sampleoutput/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/sampleoutput/general_sample_output_protocol.py",
    "chars": 6246,
    "preview": "import os\r\nfrom enum import Enum\r\nfrom typing import List, Dict\r\n\r\nimport PIL.Image\r\nimport numpy\r\nimport torch\r\nfrom th"
  },
  {
    "path": "src/tha4/sampleoutput/poser_sampler_output_protocol.py",
    "chars": 3791,
    "preview": "from typing import Optional, List, Dict\r\n\r\nimport torch\r\nfrom torch.nn import Module\r\nfrom torch.utils.data import Datas"
  },
  {
    "path": "src/tha4/sampleoutput/sample_image_creator.py",
    "chars": 5972,
    "preview": "import math\r\nimport os\r\nfrom enum import Enum\r\nfrom typing import List\r\n\r\nimport numpy\r\nimport torch\r\nfrom matplotlib im"
  },
  {
    "path": "src/tha4/shion/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/base/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/base/dataset/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/base/dataset/lazy_dataset.py",
    "chars": 508,
    "preview": "from typing import Callable\r\n\r\nfrom torch.utils.data import Dataset\r\n\r\n\r\nclass LazyDataset(Dataset):\r\n    def __init__(s"
  },
  {
    "path": "src/tha4/shion/base/dataset/lazy_tensor_dataset.py",
    "chars": 1003,
    "preview": "import torch\r\nfrom torch.utils.data import Dataset, TensorDataset\r\n\r\nfrom tha4.shion.core.load_save import torch_load\r\n\r"
  },
  {
    "path": "src/tha4/shion/base/dataset/png_in_dir_dataset.py",
    "chars": 1944,
    "preview": "import os\r\n\r\nfrom torch.nn import functional\r\nfrom torch.utils.data import Dataset\r\nfrom os import listdir\r\nfrom os.path"
  },
  {
    "path": "src/tha4/shion/base/dataset/util.py",
    "chars": 1061,
    "preview": "from typing import List\r\n\r\nimport torch\r\nfrom torch.utils.data import Dataset\r\n\r\n\r\ndef get_indexed_batch(dataset: Datase"
  },
  {
    "path": "src/tha4/shion/base/dataset/xformed_dataset.py",
    "chars": 400,
    "preview": "from typing import Any, Callable\r\n\r\nfrom torch.utils.data import Dataset\r\n\r\n\r\nclass XformedDataset(Dataset):\r\n    def __"
  },
  {
    "path": "src/tha4/shion/base/image_util.py",
    "chars": 9938,
    "preview": "import os\r\n\r\nimport PIL.Image\r\nimport numpy\r\nimport torch\r\nfrom matplotlib import pyplot\r\nfrom torch import Tensor\r\n\r\n\r\n"
  },
  {
    "path": "src/tha4/shion/base/loss/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/base/loss/computed_scale_loss.py",
    "chars": 787,
    "preview": "from typing import Optional, Callable\r\n\r\nfrom tha4.shion.core.cached_computation import TensorCachedComputationFunc, Com"
  },
  {
    "path": "src/tha4/shion/base/loss/computed_scaled_l2_loss.py",
    "chars": 1125,
    "preview": "from typing import Callable, Optional\r\n\r\nfrom tha4.shion.core.cached_computation import TensorCachedComputationFunc, Com"
  },
  {
    "path": "src/tha4/shion/base/loss/l1_loss.py",
    "chars": 2560,
    "preview": "from typing import Callable, Optional\r\n\r\nimport torch\r\n\r\nfrom tha4.shion.core.cached_computation import TensorCachedComp"
  },
  {
    "path": "src/tha4/shion/base/loss/l2_loss.py",
    "chars": 897,
    "preview": "from typing import Callable, Optional\r\n\r\nfrom tha4.shion.core.cached_computation import TensorCachedComputationFunc, Com"
  },
  {
    "path": "src/tha4/shion/base/loss/sum_loss.py",
    "chars": 1060,
    "preview": "from typing import List, Tuple, Callable, Optional\r\n\r\nimport torch\r\nfrom torch import Tensor\r\n\r\nfrom tha4.shion.core.cac"
  },
  {
    "path": "src/tha4/shion/base/loss/time_dependently_weighted_loss.py",
    "chars": 1056,
    "preview": "from typing import Callable, Optional\r\n\r\nfrom torch import Tensor\r\n\r\nfrom tha4.shion.core.cached_computation import Comp"
  },
  {
    "path": "src/tha4/shion/base/module_accumulators.py",
    "chars": 1252,
    "preview": "from typing import Optional\r\n\r\nimport torch\r\nfrom torch.nn import Module\r\n\r\nfrom tha4.shion.core.module_accumulator impo"
  },
  {
    "path": "src/tha4/shion/base/optimizer_factories.py",
    "chars": 1752,
    "preview": "from typing import Tuple, Iterable\r\n\r\nfrom torch.nn import Parameter\r\nfrom torch.optim import Optimizer, Adam, AdamW, Sp"
  },
  {
    "path": "src/tha4/shion/base/protocol/single_network_from_batch_input_computation_protocol.py",
    "chars": 1226,
    "preview": "from typing import Optional, Any, List\r\n\r\nfrom tha4.shion.core.cached_computation import CachedComputationProtocol, Comp"
  },
  {
    "path": "src/tha4/shion/base/training/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/base/training/single_network.py",
    "chars": 4218,
    "preview": "import time\r\nfrom typing import List, Dict, Callable, Any, Optional\r\n\r\nimport torch\r\nfrom torch.nn import Module\r\nfrom t"
  },
  {
    "path": "src/tha4/shion/base/training/single_network_with_minibatch.py",
    "chars": 3490,
    "preview": "import time\r\nfrom typing import List, Dict, Callable, Any, Optional\r\n\r\nimport torch\r\nfrom torch.nn import Module\r\nfrom t"
  },
  {
    "path": "src/tha4/shion/base/training/two_networks_training_protocol.py",
    "chars": 4539,
    "preview": "from typing import List, Dict, Callable, Any, Optional\r\n\r\nimport torch\r\nfrom torch.nn import Module\r\nfrom torch.nn.utils"
  },
  {
    "path": "src/tha4/shion/core/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/core/cached_computation.py",
    "chars": 3365,
    "preview": "from abc import ABC, abstractmethod\r\nfrom typing import Callable, Dict, Any, Optional\r\n\r\nimport torch\r\nfrom torch import"
  },
  {
    "path": "src/tha4/shion/core/load_save.py",
    "chars": 341,
    "preview": "import os\r\n\r\nimport torch\r\n\r\n\r\ndef torch_save(content, file_name):\r\n    os.makedirs(os.path.dirname(file_name), exist_ok"
  },
  {
    "path": "src/tha4/shion/core/loss.py",
    "chars": 384,
    "preview": "from abc import ABC, abstractmethod\r\nfrom typing import Callable, Optional\r\n\r\nfrom torch import Tensor\r\n\r\nfrom tha4.shio"
  },
  {
    "path": "src/tha4/shion/core/module_accumulator.py",
    "chars": 280,
    "preview": "from abc import ABC, abstractmethod\r\nfrom typing import Optional\r\n\r\nfrom torch.nn import Module\r\n\r\n\r\nclass ModuleAccumul"
  },
  {
    "path": "src/tha4/shion/core/module_factory.py",
    "chars": 165,
    "preview": "from abc import ABC, abstractmethod\r\n\r\nfrom torch.nn import Module\r\n\r\n\r\nclass ModuleFactory(ABC):\r\n    @abstractmethod\r\n"
  },
  {
    "path": "src/tha4/shion/core/optimizer_factory.py",
    "chars": 225,
    "preview": "from abc import ABC, abstractmethod\r\nfrom typing import Iterable\r\n\r\nfrom torch.nn import Parameter\r\n\r\n\r\nclass OptimizerF"
  },
  {
    "path": "src/tha4/shion/core/training/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/core/training/distrib/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/core/training/distrib/device_mapper.py",
    "chars": 452,
    "preview": "from typing import Dict\r\n\r\nimport torch\r\n\r\n\r\nclass SimpleCudaDeviceMapper:\r\n    def __call__(self, rank, local_rank):\r\n "
  },
  {
    "path": "src/tha4/shion/core/training/distrib/distributed_trainer.py",
    "chars": 20483,
    "preview": "import argparse\r\nimport logging\r\nimport os.path\r\nimport time\r\nfrom datetime import datetime\r\nfrom typing import Dict, Op"
  },
  {
    "path": "src/tha4/shion/core/training/distrib/distributed_training_states.py",
    "chars": 10887,
    "preview": "import copy\r\nimport logging\r\nimport os\r\nfrom typing import Dict, Optional, Callable\r\n\r\nimport torch\r\nfrom torch.nn impor"
  },
  {
    "path": "src/tha4/shion/core/training/distrib/distributed_training_tasks.py",
    "chars": 5479,
    "preview": "import logging\r\nimport os\r\nimport sys\r\nfrom typing import Callable, List, Optional\r\n\r\nfrom tha4.pytasuku.workspace impor"
  },
  {
    "path": "src/tha4/shion/core/training/sample_output_protocol.py",
    "chars": 1243,
    "preview": "from abc import ABC, abstractmethod\r\nfrom typing import Dict, Any\r\n\r\nimport torch\r\nfrom torch.nn import Module\r\nfrom tor"
  },
  {
    "path": "src/tha4/shion/core/training/single/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/core/training/single/training_states.py",
    "chars": 8283,
    "preview": "import copy\r\nimport logging\r\nimport os\r\nfrom typing import Dict, Optional\r\n\r\nimport torch\r\nfrom torch.nn import Module\r\n"
  },
  {
    "path": "src/tha4/shion/core/training/single/training_tasks.py",
    "chars": 17471,
    "preview": "import logging\r\nimport time\r\nfrom datetime import datetime\r\nfrom typing import Optional, Dict, List\r\n\r\nimport torch\r\nfro"
  },
  {
    "path": "src/tha4/shion/core/training/swarm/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/core/training/swarm/swarm_training_tasks.py",
    "chars": 2953,
    "preview": "from typing import Callable, Optional, List\r\n\r\nfrom tha4.pytasuku.workspace import Workspace\r\nfrom tha4.shion.core.train"
  },
  {
    "path": "src/tha4/shion/core/training/swarm/swarm_unit_trainer.py",
    "chars": 16704,
    "preview": "import argparse\r\nimport logging\r\nimport os\r\nimport time\r\nfrom datetime import datetime\r\nfrom typing import Dict, Optiona"
  },
  {
    "path": "src/tha4/shion/core/training/training_protocol.py",
    "chars": 2324,
    "preview": "from abc import ABC, abstractmethod\r\nfrom typing import Dict, List, Callable, Any, Optional\r\n\r\nimport torch\r\nfrom torch."
  },
  {
    "path": "src/tha4/shion/core/training/util.py",
    "chars": 1155,
    "preview": "from typing import Callable\r\n\r\nimport torch\r\nfrom torch.nn import Module\r\nfrom torch.optim import Optimizer\r\n\r\n\r\ndef opt"
  },
  {
    "path": "src/tha4/shion/core/training/validation_protocol.py",
    "chars": 1266,
    "preview": "from abc import ABC, abstractmethod\r\nfrom typing import Dict, Callable, Any\r\n\r\nimport torch\r\nfrom torch.nn import Module"
  },
  {
    "path": "src/tha4/shion/nn00/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/tha4/shion/nn00/block_args.py",
    "chars": 1080,
    "preview": "from typing import Optional\r\n\r\nfrom torch.nn import Module, Sequential\r\n\r\nfrom tha4.shion.core.module_factory import Mod"
  },
  {
    "path": "src/tha4/shion/nn00/conv.py",
    "chars": 3784,
    "preview": "from typing import Optional, Union, Callable\r\n\r\nfrom torch.nn import Conv2d, Module, Sequential, ConvTranspose2d\r\n\r\nfrom"
  },
  {
    "path": "src/tha4/shion/nn00/initialization_funcs.py",
    "chars": 1641,
    "preview": "from typing import Callable, Optional\r\n\r\nimport torch\r\nfrom torch import zero_\r\nfrom torch.nn import Module\r\nfrom torch."
  },
  {
    "path": "src/tha4/shion/nn00/linear_module_args.py",
    "chars": 1090,
    "preview": "from typing import Optional, Callable\r\n\r\nfrom torch.nn import Module\r\nfrom torch.nn.utils import spectral_norm\r\n\r\nfrom t"
  },
  {
    "path": "src/tha4/shion/nn00/nonlinearity_factories.py",
    "chars": 2269,
    "preview": "from typing import Optional\r\n\r\nimport torch\r\nfrom torch import Tensor\r\nfrom torch.nn import Module, ReLU, LeakyReLU, ELU"
  },
  {
    "path": "src/tha4/shion/nn00/normalization_layer_factories.py",
    "chars": 3647,
    "preview": "from typing import Optional\r\n\r\nimport torch\r\nfrom torch.nn import Module, Parameter, BatchNorm2d, InstanceNorm2d, GroupN"
  },
  {
    "path": "src/tha4/shion/nn00/normalization_layer_factory.py",
    "chars": 274,
    "preview": "from abc import ABC, abstractmethod\r\n\r\nfrom torch.nn import Module\r\n\r\n\r\nclass NormalizationLayerFactory(ABC):\r\n    def _"
  },
  {
    "path": "src/tha4/shion/nn00/pass_through.py",
    "chars": 159,
    "preview": "from torch.nn import Module\r\n\r\n\r\nclass PassThrough(Module):\r\n    def __init__(self):\r\n        super().__init__()\r\n\r\n    "
  },
  {
    "path": "src/tha4/shion/nn00/resnet_block.py",
    "chars": 2125,
    "preview": "from typing import Optional\r\n\r\nimport torch\r\nfrom torch.nn import Module, Sequential, Parameter\r\n\r\nfrom tha4.shion.nn00."
  }
]

About this extraction

This page contains the full source code of the pkhungurn/talking-head-anime-4-demo GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 185 files (655.5 KB), approximately 154.5k tokens, and a symbol index with 1259 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.

Copied to clipboard!