Repository: Amshaker/unetr_plus_plus
Branch: main
Commit: dcba1c6e8179
Files: 155
Total size: 1.2 MB

Directory structure:
gitextract_3co428a0/

├── LICENSE
├── README.md
├── evaluation_scripts/
│   ├── run_evaluation_acdc.sh
│   ├── run_evaluation_lung.sh
│   ├── run_evaluation_synapse.sh
│   └── run_evaluation_tumor.sh
├── requirements.txt
├── training_scripts/
│   ├── run_training_acdc.sh
│   ├── run_training_lung.sh
│   ├── run_training_synapse.sh
│   └── run_training_tumor.sh
└── unetr_pp/
    ├── __init__.py
    ├── configuration.py
    ├── evaluation/
    │   ├── __init__.py
    │   ├── add_dummy_task_with_mean_over_all_tasks.py
    │   ├── add_mean_dice_to_json.py
    │   ├── collect_results_files.py
    │   ├── evaluator.py
    │   ├── metrics.py
    │   ├── model_selection/
    │   │   ├── __init__.py
    │   │   ├── collect_all_fold0_results_and_summarize_in_one_csv.py
    │   │   ├── ensemble.py
    │   │   ├── figure_out_what_to_submit.py
    │   │   ├── rank_candidates.py
    │   │   ├── rank_candidates_StructSeg.py
    │   │   ├── rank_candidates_cascade.py
    │   │   ├── summarize_results_in_one_json.py
    │   │   └── summarize_results_with_plans.py
    │   ├── region_based_evaluation.py
    │   ├── surface_dice.py
    │   ├── unetr_pp_acdc_checkpoint/
    │   │   └── unetr_pp/
    │   │       └── 3d_fullres/
    │   │           └── Task001_ACDC/
    │   │               └── unetr_pp_trainer_acdc__unetr_pp_Plansv2.1/
    │   │                   └── fold_0/
    │   │                       └── .gitignore
    │   ├── unetr_pp_lung_checkpoint/
    │   │   └── unetr_pp/
    │   │       └── 3d_fullres/
    │   │           └── Task006_Lung/
    │   │               └── unetr_pp_trainer_lung__unetr_pp_Plansv2.1/
    │   │                   └── fold_0/
    │   │                       └── .gitignore
    │   ├── unetr_pp_synapse_checkpoint/
    │   │   └── unetr_pp/
    │   │       └── 3d_fullres/
    │   │           └── Task002_Synapse/
    │   │               └── unetr_pp_trainer_synapse__unetr_pp_Plansv2.1/
    │   │                   └── fold_0/
    │   │                       └── .gitignore
    │   └── unetr_pp_tumor_checkpoint/
    │       └── unetr_pp/
    │           └── 3d_fullres/
    │               └── Task003_tumor/
    │                   └── unetr_pp_trainer_tumor__unetr_pp_Plansv2.1/
    │                       └── fold_0/
    │                           └── .gitignore
    ├── experiment_planning/
    │   ├── DatasetAnalyzer.py
    │   ├── __init__.py
    │   ├── alternative_experiment_planning/
    │   │   ├── experiment_planner_baseline_3DUNet_v21_11GB.py
    │   │   ├── experiment_planner_baseline_3DUNet_v21_16GB.py
    │   │   ├── experiment_planner_baseline_3DUNet_v21_32GB.py
    │   │   ├── experiment_planner_baseline_3DUNet_v21_3convperstage.py
    │   │   ├── experiment_planner_baseline_3DUNet_v22.py
    │   │   ├── experiment_planner_baseline_3DUNet_v23.py
    │   │   ├── experiment_planner_residual_3DUNet_v21.py
    │   │   ├── normalization/
    │   │   │   ├── experiment_planner_2DUNet_v21_RGB_scaleto_0_1.py
    │   │   │   ├── experiment_planner_3DUNet_CT2.py
    │   │   │   └── experiment_planner_3DUNet_nonCT.py
    │   │   ├── patch_size/
    │   │   │   ├── experiment_planner_3DUNet_isotropic_in_mm.py
    │   │   │   └── experiment_planner_3DUNet_isotropic_in_voxels.py
    │   │   ├── pooling_and_convs/
    │   │   │   ├── experiment_planner_baseline_3DUNet_allConv3x3.py
    │   │   │   └── experiment_planner_baseline_3DUNet_poolBasedOnSpacing.py
    │   │   └── target_spacing/
    │   │       ├── experiment_planner_baseline_3DUNet_targetSpacingForAnisoAxis.py
    │   │       ├── experiment_planner_baseline_3DUNet_v21_customTargetSpacing_2x2x2.py
    │   │       └── experiment_planner_baseline_3DUNet_v21_noResampling.py
    │   ├── change_batch_size.py
    │   ├── common_utils.py
    │   ├── experiment_planner_baseline_2DUNet.py
    │   ├── experiment_planner_baseline_2DUNet_v21.py
    │   ├── experiment_planner_baseline_3DUNet.py
    │   ├── experiment_planner_baseline_3DUNet_v21.py
    │   ├── nnFormer_convert_decathlon_task.py
    │   ├── nnFormer_plan_and_preprocess.py
    │   ├── summarize_plans.py
    │   └── utils.py
    ├── inference/
    │   ├── __init__.py
    │   ├── inferTs/
    │   │   └── swin_nomask_2/
    │   │       └── plans.pkl
    │   ├── predict.py
    │   ├── predict_simple.py
    │   └── segmentation_export.py
    ├── inference_acdc.py
    ├── inference_synapse.py
    ├── inference_tumor.py
    ├── network_architecture/
    │   ├── README.md
    │   ├── __init__.py
    │   ├── acdc/
    │   │   ├── __init__.py
    │   │   ├── model_components.py
    │   │   ├── transformerblock.py
    │   │   └── unetr_pp_acdc.py
    │   ├── dynunet_block.py
    │   ├── generic_UNet.py
    │   ├── initialization.py
    │   ├── layers.py
    │   ├── lung/
    │   │   ├── __init__.py
    │   │   ├── model_components.py
    │   │   ├── transformerblock.py
    │   │   └── unetr_pp_lung.py
    │   ├── neural_network.py
    │   ├── synapse/
    │   │   ├── __init__.py
    │   │   ├── model_components.py
    │   │   ├── transformerblock.py
    │   │   └── unetr_pp_synapse.py
    │   └── tumor/
    │       ├── __init__.py
    │       ├── model_components.py
    │       ├── transformerblock.py
    │       └── unetr_pp_tumor.py
    ├── paths.py
    ├── postprocessing/
    │   ├── connected_components.py
    │   ├── consolidate_all_for_paper.py
    │   ├── consolidate_postprocessing.py
    │   └── consolidate_postprocessing_simple.py
    ├── preprocessing/
    │   ├── cropping.py
    │   ├── custom_preprocessors/
    │   │   └── preprocessor_scale_RGB_to_0_1.py
    │   ├── preprocessing.py
    │   └── sanity_checks.py
    ├── run/
    │   ├── __init__.py
    │   ├── default_configuration.py
    │   └── run_training.py
    ├── training/
    │   ├── __init__.py
    │   ├── cascade_stuff/
    │   │   ├── __init__.py
    │   │   └── predict_next_stage.py
    │   ├── data_augmentation/
    │   │   ├── __init__.py
    │   │   ├── custom_transforms.py
    │   │   ├── data_augmentation_insaneDA.py
    │   │   ├── data_augmentation_insaneDA2.py
    │   │   ├── data_augmentation_moreDA.py
    │   │   ├── data_augmentation_noDA.py
    │   │   ├── default_data_augmentation.py
    │   │   ├── downsampling.py
    │   │   └── pyramid_augmentations.py
    │   ├── dataloading/
    │   │   ├── __init__.py
    │   │   └── dataset_loading.py
    │   ├── learning_rate/
    │   │   └── poly_lr.py
    │   ├── loss_functions/
    │   │   ├── TopK_loss.py
    │   │   ├── __init__.py
    │   │   ├── crossentropy.py
    │   │   ├── deep_supervision.py
    │   │   └── dice_loss.py
    │   ├── model_restore.py
    │   ├── network_training/
    │   │   ├── Trainer_acdc.py
    │   │   ├── Trainer_lung.py
    │   │   ├── Trainer_synapse.py
    │   │   ├── Trainer_tumor.py
    │   │   ├── network_trainer_acdc.py
    │   │   ├── network_trainer_lung.py
    │   │   ├── network_trainer_synapse.py
    │   │   ├── network_trainer_tumor.py
    │   │   ├── unetr_pp_trainer_acdc.py
    │   │   ├── unetr_pp_trainer_lung.py
    │   │   ├── unetr_pp_trainer_synapse.py
    │   │   └── unetr_pp_trainer_tumor.py
    │   └── optimizer/
    │       └── ranger.py
    └── utilities/
        ├── __init__.py
        ├── distributed.py
        ├── file_conversions.py
        ├── file_endings.py
        ├── folder_names.py
        ├── nd_softmax.py
        ├── one_hot_encoding.py
        ├── overlay_plots.py
        ├── random_stuff.py
        ├── recursive_delete_npz.py
        ├── recursive_rename_taskXX_to_taskXXX.py
        ├── sitk_stuff.py
        ├── task_name_id_conversion.py
        ├── tensor_utilities.py
        └── to_torch.py

================================================
FILE CONTENTS
================================================

================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [2024] [Abdelrahman Mohamed Shaker Youssief]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
# UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation
![](https://i.imgur.com/waxVImv.png)
[Abdelrahman Shaker](https://scholar.google.com/citations?hl=en&user=eEz4Wu4AAAAJ)<sup>*1</sup>, [Muhammad Maaz](https://scholar.google.com/citations?user=vTy9Te8AAAAJ&hl=en&authuser=1&oi=sra)<sup>1</sup>, [Hanoona Rasheed](https://scholar.google.com/citations?user=yhDdEuEAAAAJ&hl=en&authuser=1&oi=sra)<sup>1</sup>, [Salman Khan](https://salman-h-khan.github.io/)<sup>1</sup>, [Ming-Hsuan Yang](https://scholar.google.com/citations?user=p9-ohHsAAAAJ&hl=en)<sup>2,3</sup> and [Fahad Shahbaz Khan](https://scholar.google.es/citations?user=zvaeYnUAAAAJ&hl=en)<sup>1,4</sup>

Mohamed Bin Zayed University of Artificial Intelligence<sup>1</sup>, University of California Merced<sup>2</sup>, Google Research<sup>3</sup>, Linkoping University<sup>4</sup>

[![paper](https://img.shields.io/badge/Paper-<COLOR>.svg)](https://ieeexplore.ieee.org/document/10526382)
[![Website](https://img.shields.io/badge/Project-Website-87CEEB)](https://amshaker.github.io/unetr_plus_plus/)
[![slides](https://img.shields.io/badge/Presentation-Slides-B762C1)](https://docs.google.com/presentation/d/e/2PACX-1vRtrxSfA2kU1fBmPxBdMQioLfsvjcjWBoaOVf3aupqajm0mw_C4TEz05Yk4ZF_vqoMyA8iiUJE60ynm/pub?start=true&loop=false&delayms=10000)


## :rocket: News
* **(Dec 2025):** UNETR++ is now available in Keras 3 with a simple model initialization as part of [AI Toolkit for Healthcare Imaging](https://github.com/innat/medic-ai/tree/main/medicai/models/unetr_plus_plus)!
* **(Oct 2025):** UNETR++ became the **#1 most popular article** of 2025 in [IEEE Transactions on Medical Imaging (TMI)](https://ieeexplore.ieee.org/xpl/topAccessedArticles.jsp?punumber=42)! 🎉
* **(May 04, 2024):** We're thrilled to share that UNETR++ has been accepted to IEEE TMI-2024! 🎊
* **(Jun 01, 2023):** UNETR++ code & weights are released for Decathlon-Lung and BRaTs.
* **(Dec 15, 2022):** UNETR++ weights are released for Synapse & ACDC datasets.
* **(Dec 09, 2022):** UNETR++ training and evaluation codes are released.

<hr />

![main figure](media/intro_fig.png)
> **Abstract:** *Owing to the success of transformer models, recent works study their applicability in 3D medical segmentation tasks. Within the transformer models, the self-attention mechanism is one of the main building blocks that strives to capture long-range dependencies. However, the self-attention operation has quadratic complexity, which proves to be a computational bottleneck, especially in volumetric medical imaging, where the inputs are 3D with numerous slices.  In this paper, we propose a 3D medical image segmentation approach, named UNETR++, that offers both high-quality segmentation masks as well as efficiency in terms of parameters, compute cost, and inference speed. The core of our design is the introduction of a novel efficient paired attention (EPA) block that efficiently learns spatial and channel-wise discriminative features using a pair of inter-dependent branches based on spatial and channel attention.
Our spatial attention formulation is efficient, having linear complexity with respect to the input sequence length. To enable communication between spatial and channel-focused branches, we share the weights of query and key mapping functions that provide a complementary benefit (paired attention), while also reducing the overall network parameters. Our extensive evaluations on five benchmarks, Synapse, BTCV, ACDC, BRaTs, and Decathlon-Lung, reveal the effectiveness of our contributions in terms of both efficiency and accuracy. On Synapse, our UNETR++ sets a new state-of-the-art with a Dice Score of 87.2%, while being significantly efficient with a reduction of over 71% in terms of both parameters and FLOPs, compared to the best method in the literature.* 
<hr />


## Architecture overview of UNETR++
Overview of our UNETR++ approach with hierarchical encoder-decoder structure. The 3D patches are fed to the encoder, whose outputs are then connected to the decoder via skip connections followed by convolutional blocks to produce the final segmentation mask. The focus of our design is the introduction of an _efficient paired-attention_ (EPA) block. Each EPA block performs two tasks using parallel attention modules with shared keys-queries and different value layers to efficiently learn enriched spatial-channel feature representations. As illustrated in the EPA block diagram (on the right), the first (top) attention module aggregates the spatial features by a weighted sum of the projected features in a linear manner to compute the spatial attention maps, while the second (bottom) attention module emphasizes the dependencies in the channels and computes the channel attention maps. Finally, the outputs of the two attention modules are fused and passed to convolutional blocks to enhance the feature representation, leading to better segmentation masks.
![Architecture overview](media/UNETR++_Block_Diagram.jpg)
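The shared query-key idea can be illustrated with a toy example. The sketch below is a simplified NumPy illustration of the paired-attention concept only, with hypothetical names throughout; it is **not** the repository's EPA block (see `unetr_pp/network_architecture/synapse/transformerblock.py` for the real implementation), which operates on 3D feature maps and fuses the two branches through convolutional blocks rather than a plain sum.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def paired_attention(x, w_qk, w_v_spatial, w_v_channel, e_proj):
    """Toy sketch of paired attention on flattened tokens x of shape (N, d).

    w_qk:        (d, d) single projection used for BOTH queries and keys
                 (the shared "paired" weights).
    w_v_spatial, w_v_channel: (d, d) separate value projections per branch.
    e_proj:      (N, p) projection that shrinks the N keys/values down to
                 p << N tokens, keeping the spatial branch linear in N.
    """
    q = x @ w_qk                   # queries
    k = x @ w_qk                   # keys reuse the same shared weights
    v_s = x @ w_v_spatial          # spatial-branch values
    v_c = x @ w_v_channel          # channel-branch values
    d = x.shape[1]

    # Spatial branch: attend over p projected tokens instead of all N,
    # so the attention map is (N, p) rather than (N, N).
    k_p = e_proj.T @ k             # (p, d)
    v_p = e_proj.T @ v_s           # (p, d)
    attn_spatial = softmax(q @ k_p.T / np.sqrt(d))   # (N, p)
    out_spatial = attn_spatial @ v_p                 # (N, d)

    # Channel branch: a (d, d) attention map over feature channels.
    attn_channel = softmax(q.T @ k / np.sqrt(d))     # (d, d)
    out_channel = v_c @ attn_channel                 # (N, d)

    # The real block fuses the two branches via convolutional blocks;
    # here we simply sum them for illustration.
    return out_spatial + out_channel
```

Because both the `(N, p)` spatial map and the `(d, d)` channel map avoid the full `(N, N)` product, the cost grows linearly with the number of 3D patch tokens.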

<hr />


## Results

### Synapse Dataset
State-of-the-art comparison on the abdominal multi-organ Synapse dataset. We report both the segmentation performance (DSC, HD95) and model complexity (parameters and FLOPs).
Our proposed UNETR++ achieves favorable segmentation performance against existing methods while considerably reducing the model complexity.
Abbreviations stand for: Spl: _spleen_, RKid: _right kidney_, LKid: _left kidney_, Gal: _gallbladder_, Liv: _liver_, Sto: _stomach_, Aor: _aorta_, Pan: _pancreas_.
Best results are in bold.

![Synapse Results](media/synapse_results.png)

<hr />

## Qualitative Comparison

### Synapse Dataset
Qualitative comparison on multi-organ segmentation task. Here, we compare our UNETR++ with existing methods: UNETR, Swin UNETR, and nnFormer. 
The different abdominal organs are shown in the legend below the examples. Existing methods struggle to correctly segment different organs (marked in red dashed box). 
Our UNETR++ achieves promising segmentation performance by accurately segmenting the organs.
![Synapse Qual Results](media/UNETR++_results_fig_synapse.jpg)

### ACDC Dataset
Qualitative comparison on the ACDC dataset. We compare our UNETR++ with existing methods: UNETR and nnFormer. It is noticeable that the existing methods struggle to correctly segment different organs (marked in red dashed box). Our UNETR++ achieves favorable segmentation performance by accurately segmenting the organs.
![ACDC Qual Results](media/acdc_vs_unetr_suppl.jpg)


<hr />

## Installation
The code is tested with PyTorch 1.11.0 and CUDA 11.3. After cloning the repository, follow the steps below for installation:

1. Create and activate conda environment
```shell
conda create --name unetr_pp python=3.8
conda activate unetr_pp
```
2. Install PyTorch and torchvision
```shell
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
```
3. Install other dependencies
```shell
pip install -r requirements.txt
```
<hr />


## Dataset
We follow the same dataset preprocessing as in [nnFormer](https://github.com/282857341/nnFormer). We conducted extensive experiments on five benchmarks: Synapse, BTCV, ACDC, BRaTs, and Decathlon-Lung. 

The dataset folders for Synapse should be organized as follows: 

```
./DATASET_Synapse/
  └── unetr_pp_raw/
      ├── unetr_pp_raw_data/
      │   ├── Task02_Synapse/
      │   │   ├── imagesTr/
      │   │   ├── imagesTs/
      │   │   ├── labelsTr/
      │   │   ├── labelsTs/
      │   │   └── dataset.json
      │   └── Task002_Synapse/
      └── unetr_pp_cropped_data/
          └── Task002_Synapse/
```
 
 The dataset folders for ACDC should be organized as follows: 

```
./DATASET_Acdc/
  └── unetr_pp_raw/
      ├── unetr_pp_raw_data/
      │   ├── Task01_ACDC/
      │   │   ├── imagesTr/
      │   │   ├── imagesTs/
      │   │   ├── labelsTr/
      │   │   ├── labelsTs/
      │   │   └── dataset.json
      │   └── Task001_ACDC/
      └── unetr_pp_cropped_data/
          └── Task001_ACDC/
```
 
  The dataset folders for Decathlon-Lung should be organized as follows: 

```
./DATASET_Lungs/
  └── unetr_pp_raw/
      ├── unetr_pp_raw_data/
      │   ├── Task06_Lung/
      │   │   ├── imagesTr/
      │   │   ├── imagesTs/
      │   │   ├── labelsTr/
      │   │   ├── labelsTs/
      │   │   └── dataset.json
      │   └── Task006_Lung/
      └── unetr_pp_cropped_data/
          └── Task006_Lung/
```
   The dataset folders for BRaTs should be organized as follows: 

```
./DATASET_Tumor/
  └── unetr_pp_raw/
      ├── unetr_pp_raw_data/
      │   ├── Task03_tumor/
      │   │   ├── imagesTr/
      │   │   ├── imagesTs/
      │   │   ├── labelsTr/
      │   │   ├── labelsTs/
      │   │   └── dataset.json
      │   └── Task003_tumor/
      └── unetr_pp_cropped_data/
          └── Task003_tumor/
```
 
Please refer to [Setting up the datasets](https://github.com/282857341/nnFormer) in the nnFormer repository for more details.
Alternatively, you can download the preprocessed datasets for [Synapse](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/abdelrahman_youssief_mbzuai_ac_ae/EbHDhSjkQW5Ak9SMPnGCyb8BOID98wdg3uUvQ0eNvTZ8RA?e=YVhfdg), [ACDC](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/abdelrahman_youssief_mbzuai_ac_ae/EY9qieTkT3JFrhCJQiwZXdsB1hJ4ebVAtNdBNOs2HAo3CQ?e=VwfFHC), [Decathlon-Lung](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/abdelrahman_youssief_mbzuai_ac_ae/EWhU1T7c-mNKgkS2PQjFwP0B810LCiX3D2CvCES2pHDVSg?e=OqcIW3), and [BRaTs](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/abdelrahman_youssief_mbzuai_ac_ae/EaQOxpD2yE5Btl-UEBAbQa0BYFBCL4J2Ph-VF_sqZlBPSQ?e=DFY41h), and extract them under the project directory.

## Training
The following scripts can be used for training our UNETR++ model on the datasets:
```shell
bash training_scripts/run_training_synapse.sh
bash training_scripts/run_training_acdc.sh
bash training_scripts/run_training_lung.sh
bash training_scripts/run_training_tumor.sh
```

<hr />

## Evaluation

To reproduce the results of UNETR++: 

1- Download [Synapse weights](https://drive.google.com/file/d/13JuLMeDQRR_a3c3tr2V2oav6I29fJoBa) and place ```model_final_checkpoint.model``` in the following path:
```shell
unetr_pp/evaluation/unetr_pp_synapse_checkpoint/unetr_pp/3d_fullres/Task002_Synapse/unetr_pp_trainer_synapse__unetr_pp_Plansv2.1/fold_0/
```
Then, run 
```shell
bash evaluation_scripts/run_evaluation_synapse.sh
```
2- Download [ACDC weights](https://drive.google.com/file/d/15YXiHai1zLc1ycmXaiSHetYbLGum3tV5) and place ```model_final_checkpoint.model``` in the following path:
```shell
unetr_pp/evaluation/unetr_pp_acdc_checkpoint/unetr_pp/3d_fullres/Task001_ACDC/unetr_pp_trainer_acdc__unetr_pp_Plansv2.1/fold_0/
```
Then, run 
```shell
bash evaluation_scripts/run_evaluation_acdc.sh
```


3- Download [Decathlon-Lung weights](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/abdelrahman_youssief_mbzuai_ac_ae/ETAlc8WTjV1BhZx7zwFpA8UBS4og6upb1qX2UKkypMoTjw?e=KfzAiG) and place ```model_final_checkpoint.model``` in the following path:
```shell
unetr_pp/evaluation/unetr_pp_lung_checkpoint/unetr_pp/3d_fullres/Task006_Lung/unetr_pp_trainer_lung__unetr_pp_Plansv2.1/fold_0/
```
Then, run 
```shell
bash evaluation_scripts/run_evaluation_lung.sh
```

4- Download [BRaTs weights](https://drive.google.com/file/d/1LiqnVKKv3DrDKvo6J0oClhIFirhaz5PG) and place ```model_final_checkpoint.model``` in the following path:
```shell
unetr_pp/evaluation/unetr_pp_tumor_checkpoint/unetr_pp/3d_fullres/Task003_tumor/unetr_pp_trainer_tumor__unetr_pp_Plansv2.1/fold_0/
```
Then, run 
```shell
bash evaluation_scripts/run_evaluation_tumor.sh
```

<hr />

## Acknowledgement
This repository is built on top of the [nnFormer](https://github.com/282857341/nnFormer) repository.

## Citation
If you use our work, please consider citing:
```bibtex
@ARTICLE{10526382,
  title={UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation}, 
  author={Shaker, Abdelrahman M. and Maaz, Muhammad and Rasheed, Hanoona and Khan, Salman and Yang, Ming-Hsuan and Khan, Fahad Shahbaz},
  journal={IEEE Transactions on Medical Imaging}, 
  year={2024},
  doi={10.1109/TMI.2024.3398728}}

```

## Contact
Should you have any questions, please create an issue in this repository or contact me at abdelrahman.youssief@mbzuai.ac.ae.


================================================
FILE: evaluation_scripts/run_evaluation_acdc.sh
================================================
#!/bin/sh

DATASET_PATH=../DATASET_Acdc
CHECKPOINT_PATH=../unetr_pp/evaluation/unetr_pp_acdc_checkpoint

export PYTHONPATH=.././
export RESULTS_FOLDER="$CHECKPOINT_PATH"
export unetr_pp_preprocessed="$DATASET_PATH"/unetr_pp_raw/unetr_pp_raw_data/Task01_ACDC
export unetr_pp_raw_data_base="$DATASET_PATH"/unetr_pp_raw

python ../unetr_pp/run/run_training.py 3d_fullres unetr_pp_trainer_acdc 1 0 -val 


================================================
FILE: evaluation_scripts/run_evaluation_lung.sh
================================================
#!/bin/sh

DATASET_PATH=../DATASET_Lungs
CHECKPOINT_PATH=../unetr_pp/evaluation/unetr_pp_lung_checkpoint

export PYTHONPATH=.././
export RESULTS_FOLDER="$CHECKPOINT_PATH"
export unetr_pp_preprocessed="$DATASET_PATH"/unetr_pp_raw/unetr_pp_raw_data/Task06_Lung
export unetr_pp_raw_data_base="$DATASET_PATH"/unetr_pp_raw

python ../unetr_pp/run/run_training.py 3d_fullres unetr_pp_trainer_lung 6 0 -val


================================================
FILE: evaluation_scripts/run_evaluation_synapse.sh
================================================
#!/bin/sh

DATASET_PATH=../DATASET_Synapse
CHECKPOINT_PATH=../unetr_pp/evaluation/unetr_pp_synapse_checkpoint

export PYTHONPATH=.././
export RESULTS_FOLDER="$CHECKPOINT_PATH"
export unetr_pp_preprocessed="$DATASET_PATH"/unetr_pp_raw/unetr_pp_raw_data/Task02_Synapse
export unetr_pp_raw_data_base="$DATASET_PATH"/unetr_pp_raw

python ../unetr_pp/run/run_training.py 3d_fullres unetr_pp_trainer_synapse 2 0 -val


================================================
FILE: evaluation_scripts/run_evaluation_tumor.sh
================================================
#!/bin/sh

DATASET_PATH=../DATASET_Tumor

export PYTHONPATH=.././
export RESULTS_FOLDER=../unetr_pp/evaluation/unetr_pp_tumor_checkpoint
export unetr_pp_preprocessed="$DATASET_PATH"/unetr_pp_raw/unetr_pp_raw_data/Task03_tumor
export unetr_pp_raw_data_base="$DATASET_PATH"/unetr_pp_raw


# For the Tumor task only, it is recommended to train unetr_plus_plus first and then use the provided checkpoint for evaluation. Evaluating without training first may raise issues regarding the pickle files.

python ../unetr_pp/inference/predict_simple.py -i ../unetr_plus_plus/DATASET_Tumor/unetr_pp_raw/unetr_pp_raw_data/Task003_tumor/imagesTs -o ../unetr_plus_plus/unetr_pp/evaluation/unetr_pp_tumor_checkpoint/inferTs -m 3d_fullres  -t 3 -f 0 -chk model_final_checkpoint -tr unetr_pp_trainer_tumor


python ../unetr_pp/inference_tumor.py 0



================================================
FILE: requirements.txt
================================================
argparse==1.4.0
numpy==1.20.1
batchgenerators==0.21
matplotlib==3.5.1
typing==3.7.4.3
scikit-learn==1.0.2
tqdm==4.32.1
fvcore==0.1.5.post20220414
scikit-image==0.18.1
simpleitk==2.2.0
tifffile==2021.3.31
medpy==0.4.0
pandas==1.2.3
scipy==1.6.2
nibabel==4.0.1
timm==0.4.12
monai==0.7.0
einops==0.6.0
tensorboardX==2.2


================================================
FILE: training_scripts/run_training_acdc.sh
================================================
#!/bin/sh

DATASET_PATH=../DATASET_Acdc

export PYTHONPATH=.././
export RESULTS_FOLDER=../output_acdc
export unetr_pp_preprocessed="$DATASET_PATH"/unetr_pp_raw/unetr_pp_raw_data/Task01_ACDC
export unetr_pp_raw_data_base="$DATASET_PATH"/unetr_pp_raw

python ../unetr_pp/run/run_training.py 3d_fullres unetr_pp_trainer_acdc 1 0


================================================
FILE: training_scripts/run_training_lung.sh
================================================
#!/bin/sh

DATASET_PATH=../DATASET_Lungs

export PYTHONPATH=.././
export RESULTS_FOLDER=../output_lung
export unetr_pp_preprocessed="$DATASET_PATH"/unetr_pp_raw/unetr_pp_raw_data/Task06_Lung
export unetr_pp_raw_data_base="$DATASET_PATH"/unetr_pp_raw

python ../unetr_pp/run/run_training.py 3d_fullres unetr_pp_trainer_lung 6 0


================================================
FILE: training_scripts/run_training_synapse.sh
================================================
#!/bin/sh

DATASET_PATH=../DATASET_Synapse

export PYTHONPATH=.././
export RESULTS_FOLDER=../output_synapse
export unetr_pp_preprocessed="$DATASET_PATH"/unetr_pp_raw/unetr_pp_raw_data/Task02_Synapse
export unetr_pp_raw_data_base="$DATASET_PATH"/unetr_pp_raw

python ../unetr_pp/run/run_training.py 3d_fullres unetr_pp_trainer_synapse 2 0


================================================
FILE: training_scripts/run_training_tumor.sh
================================================
#!/bin/sh

DATASET_PATH=../DATASET_Tumor

export PYTHONPATH=.././
export RESULTS_FOLDER=../output_tumor
export unetr_pp_preprocessed="$DATASET_PATH"/unetr_pp_raw/unetr_pp_raw_data/Task03_tumor
export unetr_pp_raw_data_base="$DATASET_PATH"/unetr_pp_raw

python ../unetr_pp/run/run_training.py 3d_fullres unetr_pp_trainer_tumor 3 0


================================================
FILE: unetr_pp/__init__.py
================================================
from __future__ import absolute_import
from . import *


================================================
FILE: unetr_pp/configuration.py
================================================
import os

default_num_threads = 8 if 'nnFormer_def_n_proc' not in os.environ else int(os.environ['nnFormer_def_n_proc'])
RESAMPLING_SEPARATE_Z_ANISO_THRESHOLD = 3  # determines what threshold to use for resampling the low resolution axis
# separately (with NN)
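The module above resolves its thread count at import time: 8 workers by default, overridable through the `nnFormer_def_n_proc` environment variable. A minimal standalone sketch of that lookup pattern (the function name is hypothetical, not part of the repo):

```python
import os

def get_default_num_threads(env=None):
    """Return the worker-thread count, mirroring unetr_pp/configuration.py:
    default 8, overridable via the nnFormer_def_n_proc environment variable."""
    env = os.environ if env is None else env
    return int(env.get('nnFormer_def_n_proc', 8))

# Setting os.environ['nnFormer_def_n_proc'] = '4' before importing
# unetr_pp.configuration would make default_num_threads equal to 4.
```

Note that because the repo evaluates the expression at import time, the environment variable must be set before `unetr_pp.configuration` is first imported.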

================================================
FILE: unetr_pp/evaluation/__init__.py
================================================
from __future__ import absolute_import
from . import *

================================================
FILE: unetr_pp/evaluation/add_dummy_task_with_mean_over_all_tasks.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

import json
import numpy as np
from batchgenerators.utilities.file_and_folder_operations import subfiles
import os
from collections import OrderedDict

folder = "/home/fabian/drives/E132-Projekte/Projects/2018_MedicalDecathlon/Leaderboard"
task_descriptors = ['2D final 2',
                    '2D final, less pool, dc and topK, fold0',
                    '2D final pseudo3d 7, fold0',
                    '2D final, less pool, dc and ce, fold0',
                    '3D stage0 final 2, fold0',
                    '3D fullres final 2, fold0']
task_ids_with_no_stage0 = ["Task001_BrainTumour", "Task004_Hippocampus", "Task005_Prostate"]

mean_scores = OrderedDict()
for t in task_descriptors:
    mean_scores[t] = OrderedDict()

json_files = subfiles(folder, True, None, ".json", True)
json_files = [i for i in json_files if not i.split("/")[-1].startswith(".")]  # stupid mac
for j in json_files:
    with open(j, 'r') as f:
        res = json.load(f)
    task = res['task']
    if task != "Task999_ALL":
        name = res['name']
        if name in task_descriptors:
            if task not in list(mean_scores[name].keys()):
                mean_scores[name][task] = res['results']['mean']['mean']
            else:
                raise RuntimeError("duplicate task %s for description %s" % (task, name))

for t in task_ids_with_no_stage0:
    mean_scores["3D stage0 final 2, fold0"][t] = mean_scores["3D fullres final 2, fold0"][t]

a = set()
for i in mean_scores.keys():
    a = a.union(list(mean_scores[i].keys()))

for i in mean_scores.keys():
    try:
        for t in list(a):
            assert t in mean_scores[i].keys(), "did not find task %s for experiment %s" % (t, i)
        new_res = OrderedDict()
        new_res['name'] = i
        new_res['author'] = "Fabian"
        new_res['task'] = "Task999_ALL"
        new_res['results'] = OrderedDict()
        new_res['results']['mean'] = OrderedDict()
        new_res['results']['mean']['mean'] = OrderedDict()
        tasks = list(mean_scores[i].keys())
        metrics = mean_scores[i][tasks[0]].keys()
        for m in metrics:
            foreground_values = [mean_scores[i][n][m] for n in tasks]
            new_res['results']['mean']["mean"][m] = np.nanmean(foreground_values)
        output_fname = i.replace(" ", "_") + "_globalMean.json"
        with open(os.path.join(folder, output_fname), 'w') as f:
            json.dump(new_res, f)
    except AssertionError:
        print("could not process experiment %s" % i)
        print("did not find task %s for experiment %s" % (t, i))



================================================
FILE: unetr_pp/evaluation/add_mean_dice_to_json.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

import json
import numpy as np
from batchgenerators.utilities.file_and_folder_operations import subfiles
from collections import OrderedDict


def foreground_mean(filename):
    with open(filename, 'r') as f:
        res = json.load(f)
    class_ids = np.array([int(i) for i in res['results']['mean'].keys() if (i != 'mean')])
    class_ids = class_ids[class_ids != 0]
    class_ids = class_ids[class_ids != -1]
    class_ids = class_ids[class_ids != 99]

    tmp = res['results']['mean'].get('99')
    if tmp is not None:
        _ = res['results']['mean'].pop('99')

    metrics = res['results']['mean']['1'].keys()
    res['results']['mean']["mean"] = OrderedDict()
    for m in metrics:
        foreground_values = [res['results']['mean'][str(i)][m] for i in class_ids]
        res['results']['mean']["mean"][m] = np.nanmean(foreground_values)
    with open(filename, 'w') as f:
        json.dump(res, f, indent=4, sort_keys=True)


def run_in_folder(folder):
    json_files = subfiles(folder, True, None, ".json", True)
    json_files = [i for i in json_files if not i.split("/")[-1].startswith(".") and not i.endswith("_globalMean.json")] # stupid mac
    for j in json_files:
        foreground_mean(j)


if __name__ == "__main__":
    folder = "/media/fabian/Results/nnFormerOutput_final/summary_jsons"
    run_in_folder(folder)
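`foreground_mean` above averages each metric over the foreground class ids only, dropping background (0), the ignore label (-1), and the all-foreground label (99), and using `nanmean` so classes absent from a case do not distort the average. A toy standalone sketch of that aggregation, without the file I/O (the function name and input shape are illustrative, not from the repo):

```python
import numpy as np

def foreground_mean_dict(per_class):
    """per_class: {class_id_as_str: {metric: value}} -> {metric: mean over foreground classes}.
    Mirrors the aggregation in foreground_mean: skip classes 0, -1 and 99, ignore NaNs."""
    class_ids = [int(i) for i in per_class if i != 'mean']
    class_ids = [i for i in class_ids if i not in (0, -1, 99)]
    metrics = per_class[str(class_ids[0])].keys()
    return {m: float(np.nanmean([per_class[str(i)][m] for i in class_ids]))
            for m in metrics}
```

For example, with Dice scores `{'0': 0.99, '1': 0.8, '2': 0.6, '99': 0.7}`, only classes 1 and 2 contribute, giving a foreground mean Dice of 0.7.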


================================================
FILE: unetr_pp/evaluation/collect_results_files.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

import os
import shutil
from batchgenerators.utilities.file_and_folder_operations import subdirs, subfiles


def crawl_and_copy(current_folder, out_folder, prefix="fabian_", suffix="ummary.json"):
    """
    Recursively walks all subfolders of current_folder and copies every file whose name ends with
    suffix into out_folder, with an automatically generated prefix derived from the folder path
    :param current_folder:
    :param out_folder:
    :param prefix:
    :param suffix:
    :return:
    """
    s = subdirs(current_folder, join=False)
    f = subfiles(current_folder, join=False)
    f = [i for i in f if i.endswith(suffix)]
    if current_folder.find("fold0") != -1:
        for fl in f:
            shutil.copy(os.path.join(current_folder, fl), os.path.join(out_folder, prefix+fl))
    for su in s:
        if prefix == "":
            add = su
        else:
            add = "__" + su
        crawl_and_copy(os.path.join(current_folder, su), out_folder, prefix=prefix+add)


if __name__ == "__main__":
    from unetr_pp.paths import network_training_output_dir
    output_folder = "/home/fabian/PhD/results/nnFormerV2/leaderboard"
    crawl_and_copy(network_training_output_dir, output_folder)
    from unetr_pp.evaluation.add_mean_dice_to_json import run_in_folder
    run_in_folder(output_folder)


================================================
FILE: unetr_pp/evaluation/evaluator.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.


import collections
import inspect
import json
import hashlib
from datetime import datetime
from multiprocessing.pool import Pool
import numpy as np
import pandas as pd
import SimpleITK as sitk
from unetr_pp.evaluation.metrics import ConfusionMatrix, ALL_METRICS
from batchgenerators.utilities.file_and_folder_operations import save_json, subfiles, join
from collections import OrderedDict


class Evaluator:
    """Object that holds test and reference segmentations with label information
    and computes a number of metrics on the two. 'labels' must either be an
    iterable of numeric values (or tuples thereof) or a dictionary with string
    names and numeric values.
    """

    default_metrics = [
        "False Positive Rate",
        "Dice",
        "Jaccard",
        "Precision",
        "Recall",
        "Accuracy",
        "False Omission Rate",
        "Negative Predictive Value",
        "False Negative Rate",
        "True Negative Rate",
        "False Discovery Rate",
        "Total Positives Test",
        "Total Positives Reference"
    ]

    default_advanced_metrics = [
        #"Hausdorff Distance",
        "Hausdorff Distance 95",
        #"Avg. Surface Distance",
        #"Avg. Symmetric Surface Distance"
    ]

    def __init__(self,
                 test=None,
                 reference=None,
                 labels=None,
                 metrics=None,
                 advanced_metrics=None,
                 nan_for_nonexisting=True):

        self.test = None
        self.reference = None
        self.confusion_matrix = ConfusionMatrix()
        self.labels = None
        self.nan_for_nonexisting = nan_for_nonexisting
        self.result = None

        self.metrics = []
        if metrics is None:
            for m in self.default_metrics:
                self.metrics.append(m)
        else:
            for m in metrics:
                self.metrics.append(m)

        self.advanced_metrics = []
        if advanced_metrics is None:
            for m in self.default_advanced_metrics:
                self.advanced_metrics.append(m)
        else:
            for m in advanced_metrics:
                self.advanced_metrics.append(m)

        self.set_reference(reference)
        self.set_test(test)
        if labels is not None:
            self.set_labels(labels)
        else:
            if test is not None and reference is not None:
                self.construct_labels()

    def set_test(self, test):
        """Set the test segmentation."""

        self.test = test

    def set_reference(self, reference):
        """Set the reference segmentation."""

        self.reference = reference

    def set_labels(self, labels):
        """Set the labels.
        :param labels= may be a dictionary (int->str), a set (of ints), a tuple (of ints) or a list (of ints). Labels
        will only have names if you pass a dictionary"""

        if isinstance(labels, dict):
            self.labels = collections.OrderedDict(labels)
        elif isinstance(labels, set):
            self.labels = list(labels)
        elif isinstance(labels, np.ndarray):
            self.labels = [i for i in labels]
        elif isinstance(labels, (list, tuple)):
            self.labels = labels
        else:
            raise TypeError("Can only handle dict, list, tuple, set & numpy array, but input is of type {}".format(type(labels)))

    def construct_labels(self):
        """Construct label set from unique entries in segmentations."""

        if self.test is None and self.reference is None:
            raise ValueError("No test or reference segmentations.")
        elif self.test is None:
            labels = np.unique(self.reference)
        else:
            labels = np.union1d(np.unique(self.test),
                                np.unique(self.reference))
        self.labels = list(map(lambda x: int(x), labels))

    def set_metrics(self, metrics):
        """Set evaluation metrics"""

        if isinstance(metrics, set):
            self.metrics = list(metrics)
        elif isinstance(metrics, (list, tuple, np.ndarray)):
            self.metrics = metrics
        else:
            raise TypeError("Can only handle list, tuple, set & numpy array, but input is of type {}".format(type(metrics)))

    def add_metric(self, metric):

        if metric not in self.metrics:
            self.metrics.append(metric)

    def evaluate(self, test=None, reference=None, advanced=False, **metric_kwargs):
        """Compute metrics for segmentations."""
        if test is not None:
            self.set_test(test)

        if reference is not None:
            self.set_reference(reference)

        if self.test is None or self.reference is None:
            raise ValueError("Need both test and reference segmentations.")

        if self.labels is None:
            self.construct_labels()

        self.metrics.sort()

        # get functions for evaluation
        # somewhat convoluted, but allows users to define additional metrics
        # on the fly, e.g. inside an IPython console
        _funcs = {m: ALL_METRICS[m] for m in self.metrics + self.advanced_metrics}
        frames = inspect.getouterframes(inspect.currentframe())
        for metric in self.metrics:
            for f in frames:
                if metric in f[0].f_locals:
                    _funcs[metric] = f[0].f_locals[metric]
                    break
            else:
                if metric in _funcs:
                    continue
                else:
                    raise NotImplementedError(
                        "Metric {} not implemented.".format(metric))

        # get results
        self.result = OrderedDict()

        eval_metrics = list(self.metrics)  # copy so that advanced metrics do not accumulate in self.metrics
        if advanced:
            eval_metrics += self.advanced_metrics

        if isinstance(self.labels, dict):

            for label, name in self.labels.items():
                k = str(name)
                self.result[k] = OrderedDict()
                if not hasattr(label, "__iter__"):
                    self.confusion_matrix.set_test(self.test == label)
                    self.confusion_matrix.set_reference(self.reference == label)
                else:
                    current_test = 0
                    current_reference = 0
                    for l in label:
                        current_test += (self.test == l)
                        current_reference += (self.reference == l)
                    self.confusion_matrix.set_test(current_test)
                    self.confusion_matrix.set_reference(current_reference)
                for metric in eval_metrics:
                    self.result[k][metric] = _funcs[metric](confusion_matrix=self.confusion_matrix,
                                                               nan_for_nonexisting=self.nan_for_nonexisting,
                                                               **metric_kwargs)

        else:

            for i, l in enumerate(self.labels):
                k = str(l)
                self.result[k] = OrderedDict()
                self.confusion_matrix.set_test(self.test == l)
                self.confusion_matrix.set_reference(self.reference == l)
                for metric in eval_metrics:
                    self.result[k][metric] = _funcs[metric](confusion_matrix=self.confusion_matrix,
                                                            nan_for_nonexisting=self.nan_for_nonexisting,
                                                            **metric_kwargs)

        return self.result

    def to_dict(self):

        if self.result is None:
            self.evaluate()
        return self.result

    def to_array(self):
        """Return result as numpy array (labels x metrics)."""

        if self.result is None:
            self.evaluate()

        result_metrics = sorted(self.result[list(self.result.keys())[0]].keys())

        a = np.zeros((len(self.labels), len(result_metrics)), dtype=np.float32)

        if isinstance(self.labels, dict):
            for i, label in enumerate(self.labels.keys()):
                for j, metric in enumerate(result_metrics):
                    a[i][j] = self.result[self.labels[label]][metric]
        else:
            for i, label in enumerate(self.labels):
                for j, metric in enumerate(result_metrics):
                    a[i][j] = self.result[label][metric]

        return a

    def to_pandas(self):
        """Return result as pandas DataFrame."""

        a = self.to_array()

        if isinstance(self.labels, dict):
            labels = list(self.labels.values())
        else:
            labels = self.labels

        result_metrics = sorted(self.result[list(self.result.keys())[0]].keys())

        return pd.DataFrame(a, index=labels, columns=result_metrics)


class NiftiEvaluator(Evaluator):

    def __init__(self, *args, **kwargs):

        self.test_nifti = None
        self.reference_nifti = None
        super(NiftiEvaluator, self).__init__(*args, **kwargs)

    def set_test(self, test):
        """Set the test segmentation."""

        if test is not None:
            self.test_nifti = sitk.ReadImage(test)
            super(NiftiEvaluator, self).set_test(sitk.GetArrayFromImage(self.test_nifti))
        else:
            self.test_nifti = None
            super(NiftiEvaluator, self).set_test(test)

    def set_reference(self, reference):
        """Set the reference segmentation."""

        if reference is not None:
            self.reference_nifti = sitk.ReadImage(reference)
            super(NiftiEvaluator, self).set_reference(sitk.GetArrayFromImage(self.reference_nifti))
        else:
            self.reference_nifti = None
            super(NiftiEvaluator, self).set_reference(reference)

    def evaluate(self, test=None, reference=None, voxel_spacing=None, **metric_kwargs):

        if voxel_spacing is None:
            voxel_spacing = np.array(self.test_nifti.GetSpacing())[::-1]
        metric_kwargs["voxel_spacing"] = voxel_spacing

        return super(NiftiEvaluator, self).evaluate(test, reference, **metric_kwargs)


def run_evaluation(args):
    test, ref, evaluator, metric_kwargs = args
    # evaluate
    evaluator.set_test(test)
    evaluator.set_reference(ref)
    if evaluator.labels is None:
        evaluator.construct_labels()
    current_scores = evaluator.evaluate(**metric_kwargs)
    if type(test) == str:
        current_scores["test"] = test
    if type(ref) == str:
        current_scores["reference"] = ref
    return current_scores


def aggregate_scores(test_ref_pairs,
                     evaluator=NiftiEvaluator,
                     labels=None,
                     nanmean=True,
                     json_output_file=None,
                     json_name="",
                     json_description="",
                     json_author="Fabian",
                     json_task="",
                     num_threads=2,
                     **metric_kwargs):
    """
    :param test_ref_pairs: list of (test, reference) pairs, where test is the predicted segmentation
    :param evaluator:
    :param labels: must be a dict of int-> str or a list of int
    :param nanmean:
    :param json_output_file:
    :param json_name:
    :param json_description:
    :param json_author:
    :param json_task:
    :param metric_kwargs:
    :return:
    """

    if type(evaluator) == type:
        evaluator = evaluator()

    if labels is not None:
        evaluator.set_labels(labels)

    all_scores = OrderedDict()
    all_scores["all"] = []
    all_scores["mean"] = OrderedDict()

    test = [i[0] for i in test_ref_pairs]
    ref = [i[1] for i in test_ref_pairs]
    p = Pool(num_threads)
    all_res = p.map(run_evaluation, zip(test, ref, [evaluator]*len(ref), [metric_kwargs]*len(ref)))
    p.close()
    p.join()

    for i in range(len(all_res)):
        all_scores["all"].append(all_res[i])

        # append score list for mean
        for label, score_dict in all_res[i].items():
            if label in ("test", "reference"):
                continue
            if label not in all_scores["mean"]:
                all_scores["mean"][label] = OrderedDict()
            for score, value in score_dict.items():
                if score not in all_scores["mean"][label]:
                    all_scores["mean"][label][score] = []
                all_scores["mean"][label][score].append(value)

    for label in all_scores["mean"]:
        for score in all_scores["mean"][label]:
            if nanmean:
                all_scores["mean"][label][score] = float(np.nanmean(all_scores["mean"][label][score]))
            else:
                all_scores["mean"][label][score] = float(np.mean(all_scores["mean"][label][score]))

    # save to file if desired
    # we create a hopefully unique id by hashing the entire output dictionary
    if json_output_file is not None:
        json_dict = OrderedDict()
        json_dict["name"] = json_name
        json_dict["description"] = json_description
        timestamp = datetime.today()
        json_dict["timestamp"] = str(timestamp)
        json_dict["task"] = json_task
        json_dict["author"] = json_author
        json_dict["results"] = all_scores
        json_dict["id"] = hashlib.md5(json.dumps(json_dict).encode("utf-8")).hexdigest()[:12]
        save_json(json_dict, json_output_file)


    return all_scores


def aggregate_scores_for_experiment(score_file,
                                    labels=None,
                                    metrics=Evaluator.default_metrics,
                                    nanmean=True,
                                    json_output_file=None,
                                    json_name="",
                                    json_description="",
                                    json_author="Fabian",
                                    json_task=""):

    scores = np.load(score_file)
    scores_mean = scores.mean(0)
    if labels is None:
        labels = list(map(str, range(scores.shape[1])))

    results = []
    results_mean = OrderedDict()
    for i in range(scores.shape[0]):
        results.append(OrderedDict())
        for l, label in enumerate(labels):
            results[-1][label] = OrderedDict()
            results_mean[label] = OrderedDict()
            for m, metric in enumerate(metrics):
                results[-1][label][metric] = float(scores[i][l][m])
                results_mean[label][metric] = float(scores_mean[l][m])

    json_dict = OrderedDict()
    json_dict["name"] = json_name
    json_dict["description"] = json_description
    timestamp = datetime.today()
    json_dict["timestamp"] = str(timestamp)
    json_dict["task"] = json_task
    json_dict["author"] = json_author
    json_dict["results"] = {"all": results, "mean": results_mean}
    json_dict["id"] = hashlib.md5(json.dumps(json_dict).encode("utf-8")).hexdigest()[:12]
    if json_output_file is not None:
        json_output_file = open(json_output_file, "w")
        json.dump(json_dict, json_output_file, indent=4, separators=(",", ": "))
        json_output_file.close()

    return json_dict


def evaluate_folder(folder_with_gts: str, folder_with_predictions: str, labels: tuple, **metric_kwargs):
    """
    writes a summary.json to folder_with_predictions
    :param folder_with_gts: folder where the ground truth segmentations are saved. Must be nifti files.
    :param folder_with_predictions: folder where the predicted segmentations are saved. Must be nifti files.
    :param labels: tuple of int with the labels in the dataset. For example (0, 1, 2, 3) for Task001_BrainTumour.
    :return:
    """
    files_gt = subfiles(folder_with_gts, suffix=".nii.gz", join=False)
    files_pred = subfiles(folder_with_predictions, suffix=".nii.gz", join=False)
    assert all([i in files_pred for i in files_gt]), "files missing in folder_with_predictions"
    assert all([i in files_gt for i in files_pred]), "files missing in folder_with_gts"
    test_ref_pairs = [(join(folder_with_predictions, i), join(folder_with_gts, i)) for i in files_pred]
    res = aggregate_scores(test_ref_pairs, json_output_file=join(folder_with_predictions, "summary.json"),
                           num_threads=8, labels=labels, **metric_kwargs)
    return res


def nnformer_evaluate_folder():
    import argparse
    parser = argparse.ArgumentParser("Evaluates the segmentations located in the folder pred. Output of this script is "
                                     "a json file. At the very bottom of the json file is going to be a 'mean' "
                                     "entry with averages metrics across all cases")
    parser.add_argument('-ref', required=True, type=str, help="Folder containing the reference segmentations in nifti "
                                                              "format.")
    parser.add_argument('-pred', required=True, type=str, help="Folder containing the predicted segmentations in nifti "
                                                               "format. File names must match between the folders!")
    parser.add_argument('-l', nargs='+', type=int, required=True, help="List of label IDs (integer values) that should "
                                                                       "be evaluated. Best practice is to use all int "
                                                                       "values present in the dataset, so for example "
                                                                       "for LiTS the labels are 0: background, 1: "
                                                                       "liver, 2: tumor. So this argument "
                                                                       "should be -l 1 2. You can also evaluate the "
                                                                       "background label (0), but that would not "
                                                                       "give any useful information.")
    args = parser.parse_args()
    return evaluate_folder(args.ref, args.pred, args.l)

================================================
FILE: unetr_pp/evaluation/metrics.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

import numpy as np
from medpy import metric


def assert_shape(test, reference):

    assert test.shape == reference.shape, "Shape mismatch: {} and {}".format(
        test.shape, reference.shape)


class ConfusionMatrix:

    def __init__(self, test=None, reference=None):

        self.tp = None
        self.fp = None
        self.tn = None
        self.fn = None
        self.size = None
        self.reference_empty = None
        self.reference_full = None
        self.test_empty = None
        self.test_full = None
        self.set_reference(reference)
        self.set_test(test)

    def set_test(self, test):

        self.test = test
        self.reset()

    def set_reference(self, reference):

        self.reference = reference
        self.reset()

    def reset(self):

        self.tp = None
        self.fp = None
        self.tn = None
        self.fn = None
        self.size = None
        self.test_empty = None
        self.test_full = None
        self.reference_empty = None
        self.reference_full = None

    def compute(self):

        if self.test is None or self.reference is None:
            raise ValueError("'test' and 'reference' must both be set to compute confusion matrix.")

        assert_shape(self.test, self.reference)

        self.tp = int(((self.test != 0) * (self.reference != 0)).sum())
        self.fp = int(((self.test != 0) * (self.reference == 0)).sum())
        self.tn = int(((self.test == 0) * (self.reference == 0)).sum())
        self.fn = int(((self.test == 0) * (self.reference != 0)).sum())
        self.size = int(np.prod(self.reference.shape, dtype=np.int64))
        self.test_empty = not np.any(self.test)
        self.test_full = np.all(self.test)
        self.reference_empty = not np.any(self.reference)
        self.reference_full = np.all(self.reference)

    def get_matrix(self):

        for entry in (self.tp, self.fp, self.tn, self.fn):
            if entry is None:
                self.compute()
                break

        return self.tp, self.fp, self.tn, self.fn

    def get_size(self):

        if self.size is None:
            self.compute()
        return self.size

    def get_existence(self):

        for case in (self.test_empty, self.test_full, self.reference_empty, self.reference_full):
            if case is None:
                self.compute()
                break

        return self.test_empty, self.test_full, self.reference_empty, self.reference_full


def dice(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """2TP / (2TP + FP + FN)"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()
    test_empty, test_full, reference_empty, reference_full = confusion_matrix.get_existence()

    if test_empty and reference_empty:
        if nan_for_nonexisting:
            return float("NaN")
        else:
            return 0.

    return float(2. * tp / (2 * tp + fp + fn))
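
As a sanity check, here is a hedged toy sketch (arrays of my own, assuming numpy is available) that reproduces the foreground/background counting from `ConfusionMatrix.compute()` and the resulting Dice score, without touching the library classes:

```python
# Toy sanity check, not part of the library API.
import numpy as np

test = np.array([[1, 1], [0, 0]])       # hypothetical prediction mask
reference = np.array([[1, 0], [0, 0]])  # hypothetical ground-truth mask

tp = int(((test != 0) & (reference != 0)).sum())  # foreground overlap
fp = int(((test != 0) & (reference == 0)).sum())  # predicted but not true
tn = int(((test == 0) & (reference == 0)).sum())  # background overlap
fn = int(((test == 0) & (reference != 0)).sum())  # true but not predicted

dice_score = 2.0 * tp / (2 * tp + fp + fn)  # 2TP / (2TP + FP + FN)
```
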


def jaccard(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """TP / (TP + FP + FN)"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()
    test_empty, test_full, reference_empty, reference_full = confusion_matrix.get_existence()

    if test_empty and reference_empty:
        if nan_for_nonexisting:
            return float("NaN")
        else:
            return 0.

    return float(tp / (tp + fp + fn))


def precision(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """TP / (TP + FP)"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()
    test_empty, test_full, reference_empty, reference_full = confusion_matrix.get_existence()

    if test_empty:
        if nan_for_nonexisting:
            return float("NaN")
        else:
            return 0.

    return float(tp / (tp + fp))


def sensitivity(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """TP / (TP + FN)"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()
    test_empty, test_full, reference_empty, reference_full = confusion_matrix.get_existence()

    if reference_empty:
        if nan_for_nonexisting:
            return float("NaN")
        else:
            return 0.

    return float(tp / (tp + fn))


def recall(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """TP / (TP + FN)"""

    return sensitivity(test, reference, confusion_matrix, nan_for_nonexisting, **kwargs)


def specificity(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """TN / (TN + FP)"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()
    test_empty, test_full, reference_empty, reference_full = confusion_matrix.get_existence()

    if reference_full:
        if nan_for_nonexisting:
            return float("NaN")
        else:
            return 0.

    return float(tn / (tn + fp))


def accuracy(test=None, reference=None, confusion_matrix=None, **kwargs):
    """(TP + TN) / (TP + FP + FN + TN)"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()

    return float((tp + tn) / (tp + fp + tn + fn))


def fscore(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, beta=1., **kwargs):
    """(1 + b^2) * TP / ((1 + b^2) * TP + b^2 * FN + FP)"""

    precision_ = precision(test, reference, confusion_matrix, nan_for_nonexisting)
    recall_ = recall(test, reference, confusion_matrix, nan_for_nonexisting)

    return (1 + beta*beta) * precision_ * recall_ /\
        ((beta*beta * precision_) + recall_)


def false_positive_rate(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """FP / (FP + TN)"""

    return 1 - specificity(test, reference, confusion_matrix, nan_for_nonexisting)


def false_omission_rate(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """FN / (TN + FN)"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()
    test_empty, test_full, reference_empty, reference_full = confusion_matrix.get_existence()

    if test_full:
        if nan_for_nonexisting:
            return float("NaN")
        else:
            return 0.

    return float(fn / (fn + tn))


def false_negative_rate(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """FN / (TP + FN)"""

    return 1 - sensitivity(test, reference, confusion_matrix, nan_for_nonexisting)


def true_negative_rate(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """TN / (TN + FP)"""

    return specificity(test, reference, confusion_matrix, nan_for_nonexisting)


def false_discovery_rate(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """FP / (TP + FP)"""

    return 1 - precision(test, reference, confusion_matrix, nan_for_nonexisting)


def negative_predictive_value(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, **kwargs):
    """TN / (TN + FN)"""

    return 1 - false_omission_rate(test, reference, confusion_matrix, nan_for_nonexisting)


def total_positives_test(test=None, reference=None, confusion_matrix=None, **kwargs):
    """TP + FP"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()

    return tp + fp


def total_negatives_test(test=None, reference=None, confusion_matrix=None, **kwargs):
    """TN + FN"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()

    return tn + fn


def total_positives_reference(test=None, reference=None, confusion_matrix=None, **kwargs):
    """TP + FN"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()

    return tp + fn


def total_negatives_reference(test=None, reference=None, confusion_matrix=None, **kwargs):
    """TN + FP"""

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    tp, fp, tn, fn = confusion_matrix.get_matrix()

    return tn + fp


def hausdorff_distance(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, voxel_spacing=None, connectivity=1, **kwargs):

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    test_empty, test_full, reference_empty, reference_full = confusion_matrix.get_existence()

    if test_empty or test_full or reference_empty or reference_full:
        if nan_for_nonexisting:
            return float("NaN")
        else:
            return 0

    test, reference = confusion_matrix.test, confusion_matrix.reference

    return metric.hd(test, reference, voxel_spacing, connectivity)


def hausdorff_distance_95(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, voxel_spacing=None, connectivity=1, **kwargs):

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    test_empty, test_full, reference_empty, reference_full = confusion_matrix.get_existence()

    if test_empty or test_full or reference_empty or reference_full:
        if nan_for_nonexisting:
            return float("NaN")
        else:
            return 0

    test, reference = confusion_matrix.test, confusion_matrix.reference

    return metric.hd95(test, reference, voxel_spacing, connectivity)


def avg_surface_distance(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, voxel_spacing=None, connectivity=1, **kwargs):

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    test_empty, test_full, reference_empty, reference_full = confusion_matrix.get_existence()

    if test_empty or test_full or reference_empty or reference_full:
        if nan_for_nonexisting:
            return float("NaN")
        else:
            return 0

    test, reference = confusion_matrix.test, confusion_matrix.reference

    return metric.asd(test, reference, voxel_spacing, connectivity)


def avg_surface_distance_symmetric(test=None, reference=None, confusion_matrix=None, nan_for_nonexisting=True, voxel_spacing=None, connectivity=1, **kwargs):

    if confusion_matrix is None:
        confusion_matrix = ConfusionMatrix(test, reference)

    test_empty, test_full, reference_empty, reference_full = confusion_matrix.get_existence()

    if test_empty or test_full or reference_empty or reference_full:
        if nan_for_nonexisting:
            return float("NaN")
        else:
            return 0

    test, reference = confusion_matrix.test, confusion_matrix.reference

    return metric.assd(test, reference, voxel_spacing, connectivity)


ALL_METRICS = {
    "False Positive Rate": false_positive_rate,
    "Dice": dice,
    "Jaccard": jaccard,
    "Hausdorff Distance": hausdorff_distance,
    "Hausdorff Distance 95": hausdorff_distance_95,
    "Precision": precision,
    "Recall": recall,
    "Avg. Symmetric Surface Distance": avg_surface_distance_symmetric,
    "Avg. Surface Distance": avg_surface_distance,
    "Accuracy": accuracy,
    "False Omission Rate": false_omission_rate,
    "Negative Predictive Value": negative_predictive_value,
    "False Negative Rate": false_negative_rate,
    "True Negative Rate": true_negative_rate,
    "False Discovery Rate": false_discovery_rate,
    "Total Positives Test": total_positives_test,
    "Total Negatives Test": total_negatives_test,
    "Total Positives Reference": total_positives_reference,
    "Total Negatives Reference": total_negatives_reference
}
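
A hedged sketch of the `nan_for_nonexisting` convention shared by the metric functions above: when neither mask contains foreground, overlap metrics are undefined, so they return NaN rather than a misleading 0. The `toy_dice` helper below is my own minimal stand-in, not a library function.

```python
# Minimal stand-in illustrating the nan_for_nonexisting behaviour (assumes numpy).
import math
import numpy as np

def toy_dice(test, reference, nan_for_nonexisting=True):
    # Dice is undefined when both masks are empty.
    if not test.any() and not reference.any():
        return float("nan") if nan_for_nonexisting else 0.0
    tp = int(((test != 0) & (reference != 0)).sum())
    fp = int(((test != 0) & (reference == 0)).sum())
    fn = int(((test == 0) & (reference != 0)).sum())
    return 2.0 * tp / (2 * tp + fp + fn)

empty = np.zeros(4, dtype=int)
assert math.isnan(toy_dice(empty, empty))
assert toy_dice(empty, empty, nan_for_nonexisting=False) == 0.0
```
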


================================================
FILE: unetr_pp/evaluation/model_selection/__init__.py
================================================
from __future__ import absolute_import
from . import *

================================================
FILE: unetr_pp/evaluation/model_selection/collect_all_fold0_results_and_summarize_in_one_csv.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

from unetr_pp.evaluation.model_selection.summarize_results_in_one_json import summarize2
from unetr_pp.paths import network_training_output_dir
from batchgenerators.utilities.file_and_folder_operations import *

if __name__ == "__main__":
    summary_output_folder = join(network_training_output_dir, "summary_jsons_fold0_new")
    maybe_mkdir_p(summary_output_folder)
    summarize2(['all'], output_dir=summary_output_folder, folds=(0,))

    results_csv = join(network_training_output_dir, "summary_fold0.csv")

    summary_files = subfiles(summary_output_folder, suffix='.json', join=False)

    with open(results_csv, 'w') as f:
        for s in summary_files:
            if s.find("ensemble") == -1:
                task, network, trainer, plans, validation_folder, folds = s.split("__")
            else:
                n1, n2 = s.split("--")
                n1 = n1[n1.find("ensemble_") + len("ensemble_"):]
                task = s.split("__")[0]
                network = "ensemble"
                trainer = n1
                plans = n2
                validation_folder = "none"
                folds = s.split("__")[-1]  # otherwise 'folds' would be stale from a previous iteration
            folds = folds[:-len('.json')]
            results = load_json(join(summary_output_folder, s))
            results_mean = results['results']['mean']['mean']['Dice']
            results_median = results['results']['median']['mean']['Dice']
            f.write("%s,%s,%s,%s,%s,%02.4f,%02.4f\n" % (task,
                                            network, trainer, validation_folder, plans, results_mean, results_median))

    summary_output_folder = join(network_training_output_dir, "summary_jsons_new")
    maybe_mkdir_p(summary_output_folder)
    summarize2(['all'], output_dir=summary_output_folder)

    results_csv = join(network_training_output_dir, "summary_allFolds.csv")

    summary_files = subfiles(summary_output_folder, suffix='.json', join=False)

    with open(results_csv, 'w') as f:
        for s in summary_files:
            if s.find("ensemble") == -1:
                task, network, trainer, plans, validation_folder, folds = s.split("__")
            else:
                n1, n2 = s.split("--")
                n1 = n1[n1.find("ensemble_") + len("ensemble_"):]
                task = s.split("__")[0]
                network = "ensemble"
                trainer = n1
                plans = n2
                validation_folder = "none"
                folds = s.split("__")[-1]  # otherwise 'folds' would be stale from a previous iteration
            folds = folds[:-len('.json')]
            results = load_json(join(summary_output_folder, s))
            results_mean = results['results']['mean']['mean']['Dice']
            results_median = results['results']['median']['mean']['Dice']
            f.write("%s,%s,%s,%s,%s,%02.4f,%02.4f\n" % (task,
                                            network, trainer, validation_folder, plans, results_mean, results_median))



================================================
FILE: unetr_pp/evaluation/model_selection/ensemble.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.
import shutil
from multiprocessing.pool import Pool

import numpy as np
from batchgenerators.utilities.file_and_folder_operations import *
from unetr_pp.configuration import default_num_threads
from unetr_pp.evaluation.evaluator import aggregate_scores
from unetr_pp.inference.segmentation_export import save_segmentation_nifti_from_softmax
from unetr_pp.paths import network_training_output_dir, preprocessing_output_dir
from unetr_pp.postprocessing.connected_components import determine_postprocessing


def merge(args):
    file1, file2, properties_file, out_file = args
    if not isfile(out_file):
        res1 = np.load(file1)['softmax']
        res2 = np.load(file2)['softmax']
        props = load_pickle(properties_file)
        mn = np.mean((res1, res2), 0)
        # Softmax probabilities are already at target spacing so this will not do any resampling (resampling parameters
        # don't matter here)
        save_segmentation_nifti_from_softmax(mn, out_file, props, 3, None, None, None, force_separate_z=None,
                                             interpolation_order_z=0)
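
A hedged toy sketch (invented probabilities, assuming numpy) of what `merge` does numerically: it ensembles two models by averaging their per-class softmax maps, after which the final label per voxel would be the argmax over classes.

```python
# Toy version of the averaging step in merge(), on a (classes, voxels) array.
import numpy as np

res1 = np.array([[0.8, 0.4],   # hypothetical model 1: P(class 0) per voxel
                 [0.2, 0.6]])  #                       P(class 1) per voxel
res2 = np.array([[0.6, 0.2],   # hypothetical model 2
                 [0.4, 0.8]])

mn = np.mean((res1, res2), 0)  # element-wise average, same shape as inputs
labels = mn.argmax(0)          # most probable class per voxel
```
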


def ensemble(training_output_folder1, training_output_folder2, output_folder, task, validation_folder, folds, allow_ensembling: bool = True):
    print("\nEnsembling folders\n", training_output_folder1, "\n", training_output_folder2)

    output_folder_base = output_folder
    output_folder = join(output_folder_base, "ensembled_raw")

    # only_keep_largest_connected_component is the same for all stages
    dataset_directory = join(preprocessing_output_dir, task)
    plans = load_pickle(join(training_output_folder1, "plans.pkl"))  # we need this only for the labels

    files1 = []
    files2 = []
    property_files = []
    out_files = []
    gt_segmentations = []

    folder_with_gt_segs = join(dataset_directory, "gt_segmentations")
    # the ground truth segmentations are already in the correct shape and we need the original geometry to restore the niftis

    for f in folds:
        validation_folder_net1 = join(training_output_folder1, "fold_%d" % f, validation_folder)
        validation_folder_net2 = join(training_output_folder2, "fold_%d" % f, validation_folder)

        if not isdir(validation_folder_net1):
            raise AssertionError("Validation directory missing: %s. Please rerun validation with `nnFormer_train CONFIG TRAINER TASK FOLD -val --npz`" % validation_folder_net1)
        if not isdir(validation_folder_net2):
            raise AssertionError("Validation directory missing: %s. Please rerun validation with `nnFormer_train CONFIG TRAINER TASK FOLD -val --npz`" % validation_folder_net2)

        # we need to ensure the validation was successful. We can verify this via the presence of the summary.json file
        if not isfile(join(validation_folder_net1, 'summary.json')):
            raise AssertionError("Validation directory incomplete: %s. Please rerun validation with `nnFormer_train CONFIG TRAINER TASK FOLD -val --npz`" % validation_folder_net1)
        if not isfile(join(validation_folder_net2, 'summary.json')):
            raise AssertionError("Validation directory incomplete: %s. Please rerun validation with `nnFormer_train CONFIG TRAINER TASK FOLD -val --npz`" % validation_folder_net2)

        patient_identifiers1_npz = [i[:-4] for i in subfiles(validation_folder_net1, False, None, 'npz', True)]
        patient_identifiers2_npz = [i[:-4] for i in subfiles(validation_folder_net2, False, None, 'npz', True)]

        # we don't do postprocessing anymore, so there should not be any noPostProcess files left over
        patient_identifiers1_nii = [i[:-7] for i in subfiles(validation_folder_net1, False, None, suffix='nii.gz', sort=True) if not i.endswith("noPostProcess.nii.gz") and not i.endswith('_postprocessed.nii.gz')]
        patient_identifiers2_nii = [i[:-7] for i in subfiles(validation_folder_net2, False, None, suffix='nii.gz', sort=True) if not i.endswith("noPostProcess.nii.gz") and not i.endswith('_postprocessed.nii.gz')]

        if not all([i in patient_identifiers1_npz for i in patient_identifiers1_nii]):
            raise AssertionError("Missing npz files in folder %s. Please run the validation for all models and folds with the '--npz' flag." % (validation_folder_net1))
        if not all([i in patient_identifiers2_npz for i in patient_identifiers2_nii]):
            raise AssertionError("Missing npz files in folder %s. Please run the validation for all models and folds with the '--npz' flag." % (validation_folder_net2))

        patient_identifiers1_npz.sort()
        patient_identifiers2_npz.sort()

        assert all([i == j for i, j in zip(patient_identifiers1_npz, patient_identifiers2_npz)]), "npz filenames do not match. This should not happen."

        maybe_mkdir_p(output_folder)

        for p in patient_identifiers1_npz:
            files1.append(join(validation_folder_net1, p + '.npz'))
            files2.append(join(validation_folder_net2, p + '.npz'))
            property_files.append(join(validation_folder_net1, p) + ".pkl")
            out_files.append(join(output_folder, p + ".nii.gz"))
            gt_segmentations.append(join(folder_with_gt_segs, p + ".nii.gz"))

    p = Pool(default_num_threads)
    p.map(merge, zip(files1, files2, property_files, out_files))
    p.close()
    p.join()

    if not isfile(join(output_folder, "summary.json")) and len(out_files) > 0:
        aggregate_scores(tuple(zip(out_files, gt_segmentations)), labels=plans['all_classes'],
                     json_output_file=join(output_folder, "summary.json"), json_task=task,
                     json_name=task + "__" + output_folder_base.split("/")[-1], num_threads=default_num_threads)

    if allow_ensembling and not isfile(join(output_folder_base, "postprocessing.json")):
        # now let's also look at postprocessing. We cannot just take what we determined in cross-validation and apply
        # it here because things may have changed and may also be too inconsistent between the two networks
        determine_postprocessing(output_folder_base, folder_with_gt_segs, "ensembled_raw", "temp",
                                 "ensembled_postprocessed", default_num_threads, dice_threshold=0)

        out_dir_all_json = join(network_training_output_dir, "summary_jsons")
        json_out = load_json(join(output_folder_base, "ensembled_postprocessed", "summary.json"))

        json_out["experiment_name"] = output_folder_base.split("/")[-1]
        save_json(json_out, join(output_folder_base, "ensembled_postprocessed", "summary.json"))

        maybe_mkdir_p(out_dir_all_json)
        shutil.copy(join(output_folder_base, "ensembled_postprocessed", "summary.json"),
                    join(out_dir_all_json, "%s__%s.json" % (task, output_folder_base.split("/")[-1])))


================================================
FILE: unetr_pp/evaluation/model_selection/figure_out_what_to_submit.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.
import shutil
from itertools import combinations
import unetr_pp
from batchgenerators.utilities.file_and_folder_operations import *
from unetr_pp.evaluation.add_mean_dice_to_json import foreground_mean
from unetr_pp.evaluation.evaluator import evaluate_folder
from unetr_pp.evaluation.model_selection.ensemble import ensemble
from unetr_pp.paths import network_training_output_dir
import numpy as np
from subprocess import call
from unetr_pp.postprocessing.consolidate_postprocessing import consolidate_folds, collect_cv_niftis
from unetr_pp.utilities.folder_names import get_output_folder_name
from unetr_pp.paths import default_cascade_trainer, default_trainer, default_plans_identifier


def find_task_name(folder, task_id):
    candidates = subdirs(folder, prefix="Task%03.0d_" % task_id, join=False)
    assert len(candidates) > 0, "no candidate for Task id %d found in folder %s" % (task_id, folder)
    assert len(candidates) == 1, "more than one candidate for Task id %d found in folder %s" % (task_id, folder)
    return candidates[0]


def get_mean_foreground_dice(json_file):
    results = load_json(json_file)
    return get_foreground_mean(results)


def get_foreground_mean(results):
    results_mean = results['results']['mean']
    dice_scores = [results_mean[i]['Dice'] for i in results_mean.keys() if i != "0" and i != 'mean']
    return np.mean(dice_scores)
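
A hedged sketch with made-up numbers (assuming numpy) of what `get_foreground_mean` computes: it averages the per-class Dice scores from a summary dict, skipping the background entry `"0"` and the aggregate `"mean"` entry.

```python
# Toy stand-in for the 'results'->'mean' section of a summary.json (invented scores).
import numpy as np

results_mean = {
    "0": {"Dice": 0.99},     # background, excluded from the average
    "1": {"Dice": 0.80},
    "2": {"Dice": 0.60},
    "mean": {"Dice": 0.80},  # aggregate entry, also excluded
}

dice_scores = [results_mean[i]["Dice"] for i in results_mean.keys()
               if i != "0" and i != "mean"]
fg_mean = float(np.mean(dice_scores))  # (0.80 + 0.60) / 2
```
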


def main():
    import argparse
    parser = argparse.ArgumentParser(description="This is intended to identify the best model based on the five fold "
                                                 "cross-validation. Running this script requires all models to have "
                                                 "been run already. This script will summarize the results of the "
                                                 "five folds of all models in one json each for easy interpretability")

    parser.add_argument("-m", '--models', nargs="+", required=False, default=['2d', '3d_lowres', '3d_fullres',
                                                                              '3d_cascade_fullres'])
    parser.add_argument("-t", '--task_ids', nargs="+", required=True)

    parser.add_argument("-tr", type=str, required=False, default=default_trainer,
                        help="nnFormerTrainer class. Default: %s" % default_trainer)
    parser.add_argument("-ctr", type=str, required=False, default=default_cascade_trainer,
                        help="nnFormerTrainer class for cascade model. Default: %s" % default_cascade_trainer)
    parser.add_argument("-pl", type=str, required=False, default=default_plans_identifier,
                        help="plans name, Default: %s" % default_plans_identifier)
    parser.add_argument('-f', '--folds', nargs='+', default=(0, 1, 2, 3, 4), help="Use this if you have non-standard "
                                                                                  "folds. Experienced users only.")
    parser.add_argument('--disable_ensembling', required=False, default=False, action='store_true',
                        help='Set this flag to disable the use of ensembling. This will find the best single '
                             'configuration for each task.')
    parser.add_argument("--disable_postprocessing", required=False, default=False, action="store_true",
                        help="Set this flag if you want to disable the use of postprocessing")

    args = parser.parse_args()
    tasks = [int(i) for i in args.task_ids]

    models = args.models
    tr = args.tr
    trc = args.ctr
    pl = args.pl
    disable_ensembling = args.disable_ensembling
    disable_postprocessing = args.disable_postprocessing
    folds = tuple(int(i) for i in args.folds)

    validation_folder = "validation_raw"

    # this script now acts independently from the summary jsons. That was unnecessary
    id_task_mapping = {}

    for t in tasks:
        # first collect pure model performance
        results = {}
        all_results = {}
        valid_models = []
        for m in models:
            if m == "3d_cascade_fullres":
                trainer = trc
            else:
                trainer = tr

            if t not in id_task_mapping.keys():
                task_name = find_task_name(get_output_folder_name(m), t)
                id_task_mapping[t] = task_name

            output_folder = get_output_folder_name(m, id_task_mapping[t], trainer, pl)
            if not isdir(output_folder):
                raise RuntimeError("Output folder for model %s is missing, expected: %s" % (m, output_folder))

            if disable_postprocessing:
                # we need to collect the predicted niftis from the 5-fold cv and evaluate them against the ground truth
                cv_niftis_folder = join(output_folder, 'cv_niftis_raw')

                if not isfile(join(cv_niftis_folder, 'summary.json')):
                    print(t, m, ': collecting niftis from 5-fold cv')
                    if isdir(cv_niftis_folder):
                        shutil.rmtree(cv_niftis_folder)

                    collect_cv_niftis(output_folder, cv_niftis_folder, validation_folder, folds)

                    niftis_gt = subfiles(join(output_folder, "gt_niftis"), suffix='.nii.gz', join=False)
                    niftis_cv = subfiles(cv_niftis_folder, suffix='.nii.gz', join=False)
                    if not all([i in niftis_gt for i in niftis_cv]):
                        raise AssertionError("It does not seem like you trained all the folds! Train "
                                             "all folds first! There are %d gt niftis in %s but only "
                                             "%d predicted niftis in %s" % (len(niftis_gt),
                                                                            join(output_folder, "gt_niftis"),
                                                                            len(niftis_cv), cv_niftis_folder))

                    # load a summary file so that we can know what class labels to expect
                    summary_fold0 = load_json(join(output_folder, "fold_%d" % folds[0], validation_folder,
                                                   "summary.json"))['results']['mean']
                    # read classes from summary.json
                    classes = tuple((int(i) for i in summary_fold0.keys()))

                    # evaluate the cv niftis
                    print(t, m, ': evaluating 5-fold cv results')
                    evaluate_folder(join(output_folder, "gt_niftis"), cv_niftis_folder, classes)

            else:
                postprocessing_json = join(output_folder, "postprocessing.json")
                cv_niftis_folder = join(output_folder, "cv_niftis_raw")

                # we need the cross-validation niftis (cv_niftis_raw) to measure single-model
                # performance, and we need the postprocessing_json. If either is missing, rerun consolidate_folds
                if not isfile(postprocessing_json) or not isdir(cv_niftis_folder):
                    print("running missing postprocessing for %s and model %s" % (id_task_mapping[t], m))
                    consolidate_folds(output_folder, folds=folds)

                assert isfile(postprocessing_json), "Postprocessing json missing, expected: %s" % postprocessing_json
                assert isdir(cv_niftis_folder), "Folder with niftis from CV missing, expected: %s" % cv_niftis_folder

            # obtain mean foreground dice
            summary_file = join(cv_niftis_folder, "summary.json")
            results[m] = get_mean_foreground_dice(summary_file)
            foreground_mean(summary_file)
            all_results[m] = load_json(summary_file)['results']['mean']
            valid_models.append(m)

        if not disable_ensembling:
            # now run ensembling and add ensembling to results
            print("\nI will now ensemble combinations of the following models:\n", valid_models)
            if len(valid_models) > 1:
                for m1, m2 in combinations(valid_models, 2):

                    trainer_m1 = trc if m1 == "3d_cascade_fullres" else tr
                    trainer_m2 = trc if m2 == "3d_cascade_fullres" else tr

                    ensemble_name = "ensemble_" + m1 + "__" + trainer_m1 + "__" + pl + "--" + m2 + "__" + trainer_m2 + "__" + pl
                    output_folder_base = join(network_training_output_dir, "ensembles", id_task_mapping[t], ensemble_name)
                    maybe_mkdir_p(output_folder_base)

                    network1_folder = get_output_folder_name(m1, id_task_mapping[t], trainer_m1, pl)
                    network2_folder = get_output_folder_name(m2, id_task_mapping[t], trainer_m2, pl)

                    print("ensembling", network1_folder, network2_folder)
                    ensemble(network1_folder, network2_folder, output_folder_base, id_task_mapping[t], validation_folder, folds, allow_ensembling=not disable_postprocessing)
                    # ensembling will automatically run postprocessing and foreground_mean

                    # now get result of ensemble
                    summary_file = join(output_folder_base, "ensembled_raw", "summary.json")
                    results[ensemble_name] = get_mean_foreground_dice(summary_file)
                    foreground_mean(summary_file)
                    all_results[ensemble_name] = load_json(summary_file)['results']['mean']

        # now print all mean foreground dice and highlight the best
        foreground_dices = list(results.values())
        best = np.max(foreground_dices)
        for k, v in results.items():
            print(k, v)

        predict_str = ""
        best_model = None
        for k, v in results.items():
            if v == best:
                print("%s submit model %s" % (id_task_mapping[t], k), v)
                best_model = k
                print("\nHere is how you should predict test cases. Run in sequential order and replace all input and output folder names with your personalized ones\n")
                if k.startswith("ensemble"):
                    tmp = k[len("ensemble_"):]
                    model1, model2 = tmp.split("--")
                    m1, t1, pl1 = model1.split("__")
                    m2, t2, pl2 = model2.split("__")
                    predict_str += "nnFormer_predict -i FOLDER_WITH_TEST_CASES -o OUTPUT_FOLDER_MODEL1 -tr " + tr + " -ctr " + trc + " -m " + m1 + " -p " + pl + " -t " + \
                                   id_task_mapping[t] + "\n"
                    predict_str += "nnFormer_predict -i FOLDER_WITH_TEST_CASES -o OUTPUT_FOLDER_MODEL2 -tr " + tr + " -ctr " + trc + " -m " + m2 + " -p " + pl + " -t " + \
                                   id_task_mapping[t] + "\n"

                    if not disable_postprocessing:
                        predict_str += "nnFormer_ensemble -f OUTPUT_FOLDER_MODEL1 OUTPUT_FOLDER_MODEL2 -o OUTPUT_FOLDER -pp " + join(network_training_output_dir, "ensembles", id_task_mapping[t], k, "postprocessing.json") + "\n"
                    else:
                        predict_str += "nnFormer_ensemble -f OUTPUT_FOLDER_MODEL1 OUTPUT_FOLDER_MODEL2 -o OUTPUT_FOLDER\n"
                else:
                    predict_str += "nnFormer_predict -i FOLDER_WITH_TEST_CASES -o OUTPUT_FOLDER_MODEL1 -tr " + tr + " -ctr " + trc + " -m " + k + " -p " + pl + " -t " + \
                                   id_task_mapping[t] + "\n"
                print(predict_str)

        summary_folder = join(network_training_output_dir, "ensembles", id_task_mapping[t])
        maybe_mkdir_p(summary_folder)
        with open(join(summary_folder, "prediction_commands.txt"), 'w') as f:
            f.write(predict_str)

        num_classes = len([i for i in all_results[best_model].keys() if i != 'mean' and i != '0'])
        with open(join(summary_folder, "summary.csv"), 'w') as f:
            f.write("model")
            for c in range(1, num_classes + 1):
                f.write(",class%d" % c)
            f.write(",average")
            f.write("\n")
            for m in all_results.keys():
                f.write(m)
                for c in range(1, num_classes + 1):
                    f.write(",%01.4f" % all_results[m][str(c)]["Dice"])
                f.write(",%01.4f" % all_results[m]['mean']["Dice"])
                f.write("\n")


if __name__ == "__main__":
    main()
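
# The script above encodes two-model ensembles as
# "ensemble_<m1>__<tr1>__<plans>--<m2>__<tr2>__<plans>" and later splits that
# string back apart to print prediction commands. A minimal, self-contained
# sketch of that round trip; the concrete model/trainer/plans names are
# illustrative placeholders, not values the script mandates.

```python
# Round trip of the ensemble naming convention used in figure_out_what_to_submit.
# All concrete names below are illustrative placeholders.
def build_ensemble_name(m1, tr1, m2, tr2, plans):
    return "ensemble_" + m1 + "__" + tr1 + "__" + plans + "--" + m2 + "__" + tr2 + "__" + plans

def parse_ensemble_name(name):
    # inverse of build_ensemble_name: strip the prefix, split the two halves,
    # then split each half into (model, trainer, plans)
    assert name.startswith("ensemble_")
    model1, model2 = name[len("ensemble_"):].split("--")
    return model1.split("__"), model2.split("__")

name = build_ensemble_name("3d_fullres", "nnFormerTrainerV2",
                           "3d_lowres", "nnFormerTrainerV2", "nnFormerPlansv2.1")
(m1, t1, p1), (m2, t2, p2) = parse_ensemble_name(name)
```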


================================================
FILE: unetr_pp/evaluation/model_selection/rank_candidates.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.


import numpy as np
from batchgenerators.utilities.file_and_folder_operations import *
from unetr_pp.paths import network_training_output_dir

if __name__ == "__main__":
    # run collect_all_fold0_results_and_summarize_in_one_csv.py first
    summary_files_dir = join(network_training_output_dir, "summary_jsons_fold0_new")
    output_file = join(network_training_output_dir, "summary.csv")

    folds = (0, )
    folds_str = ""
    for f in folds:
        folds_str += str(f)

    plans = "nnFormerPlans"

    overwrite_plans = {
        'nnFormerTrainerV2_2': ["nnFormerPlans", "nnFormerPlansisoPatchesInVoxels"], # r
        'nnFormerTrainerV2': ["nnFormerPlansnonCT", "nnFormerPlansCT2", "nnFormerPlansallConv3x3",
                            "nnFormerPlansfixedisoPatchesInVoxels", "nnFormerPlanstargetSpacingForAnisoAxis",
                            "nnFormerPlanspoolBasedOnSpacing", "nnFormerPlansfixedisoPatchesInmm", "nnFormerPlansv2.1"],
        'nnFormerTrainerV2_warmup': ["nnFormerPlans", "nnFormerPlansv2.1", "nnFormerPlansv2.1_big", "nnFormerPlansv2.1_verybig"],
        'nnFormerTrainerV2_cycleAtEnd': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_cycleAtEnd2': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_reduceMomentumDuringTraining': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_graduallyTransitionFromCEToDice': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_independentScalePerAxis': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_Mish': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_Ranger_lr3en4': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_fp32': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_GN': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_momentum098': ["nnFormerPlans", "nnFormerPlansv2.1"],
        'nnFormerTrainerV2_momentum09': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_DP': ["nnFormerPlansv2.1_verybig"],
        'nnFormerTrainerV2_DDP': ["nnFormerPlansv2.1_verybig"],
        'nnFormerTrainerV2_FRN': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_resample33': ["nnFormerPlansv2.3"],
        'nnFormerTrainerV2_O2': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_ResencUNet': ["nnFormerPlans_FabiansResUNet_v2.1"],
        'nnFormerTrainerV2_DA2': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_allConv3x3': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_ForceBD': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_ForceSD': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_LReLU_slope_2en1': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_lReLU_convReLUIN': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_ReLU': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_ReLU_biasInSegOutput': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_ReLU_convReLUIN': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_lReLU_biasInSegOutput': ["nnFormerPlansv2.1"],
        #'nnFormerTrainerV2_Loss_MCC': ["nnFormerPlansv2.1"],
        #'nnFormerTrainerV2_Loss_MCCnoBG': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_Loss_DicewithBG': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_Loss_Dice_LR1en3': ["nnFormerPlansv2.1"],
        'nnFormerTrainerV2_Loss_Dice': ["nnFormerPlans", "nnFormerPlansv2.1"],
        'nnFormerTrainerV2_Loss_DicewithBG_LR1en3': ["nnFormerPlansv2.1"],

    }

    trainers = ['nnFormerTrainer'] + ['nnFormerTrainerNewCandidate%d' % i for i in range(1, 28)] + [
        'nnFormerTrainerNewCandidate24_2',
        'nnFormerTrainerNewCandidate24_3',
        'nnFormerTrainerNewCandidate26_2',
        'nnFormerTrainerNewCandidate27_2',
        'nnFormerTrainerNewCandidate23_always3DDA',
        'nnFormerTrainerNewCandidate23_corrInit',
        'nnFormerTrainerNewCandidate23_noOversampling',
        'nnFormerTrainerNewCandidate23_softDS',
        'nnFormerTrainerNewCandidate23_softDS2',
        'nnFormerTrainerNewCandidate23_softDS3',
        'nnFormerTrainerNewCandidate23_softDS4',
        'nnFormerTrainerNewCandidate23_2_fp16',
        'nnFormerTrainerNewCandidate23_2',
        'nnFormerTrainerVer2',
        'nnFormerTrainerV2_2',
        'nnFormerTrainerV2_3',
        'nnFormerTrainerV2_3_CE_GDL',
        'nnFormerTrainerV2_3_dcTopk10',
        'nnFormerTrainerV2_3_dcTopk20',
        'nnFormerTrainerV2_3_fp16',
        'nnFormerTrainerV2_3_softDS4',
        'nnFormerTrainerV2_3_softDS4_clean',
        'nnFormerTrainerV2_3_softDS4_clean_improvedDA',
        'nnFormerTrainerV2_3_softDS4_clean_improvedDA_newElDef',
        'nnFormerTrainerV2_3_softDS4_radam',
        'nnFormerTrainerV2_3_softDS4_radam_lowerLR',

        'nnFormerTrainerV2_2_schedule',
        'nnFormerTrainerV2_2_schedule2',
        'nnFormerTrainerV2_2_clean',
        'nnFormerTrainerV2_2_clean_improvedDA_newElDef',

        'nnFormerTrainerV2_2_fixes', # running
        'nnFormerTrainerV2_BN', # running
        'nnFormerTrainerV2_noDeepSupervision', # running
        'nnFormerTrainerV2_softDeepSupervision', # running
        'nnFormerTrainerV2_noDataAugmentation', # running
        'nnFormerTrainerV2_Loss_CE', # running
        'nnFormerTrainerV2_Loss_CEGDL',
        'nnFormerTrainerV2_Loss_Dice',
        'nnFormerTrainerV2_Loss_DiceTopK10',
        'nnFormerTrainerV2_Loss_TopK10',
        'nnFormerTrainerV2_Adam', # running
        'nnFormerTrainerV2_Adam_nnFormerTrainerlr', # running
        'nnFormerTrainerV2_SGD_ReduceOnPlateau', # running
        'nnFormerTrainerV2_SGD_lr1en1', # running
        'nnFormerTrainerV2_SGD_lr1en3', # running
        'nnFormerTrainerV2_fixedNonlin', # running
        'nnFormerTrainerV2_GeLU', # running
        'nnFormerTrainerV2_3ConvPerStage',
        'nnFormerTrainerV2_NoNormalization',
        'nnFormerTrainerV2_Adam_ReduceOnPlateau',
        'nnFormerTrainerV2_fp16',
        'nnFormerTrainerV2', # see overwrite_plans
        'nnFormerTrainerV2_noMirroring',
        'nnFormerTrainerV2_momentum09',
        'nnFormerTrainerV2_momentum095',
        'nnFormerTrainerV2_momentum098',
        'nnFormerTrainerV2_warmup',
        'nnFormerTrainerV2_Loss_Dice_LR1en3',
        'nnFormerTrainerV2_NoNormalization_lr1en3',
        'nnFormerTrainerV2_Loss_Dice_squared',
        'nnFormerTrainerV2_newElDef',
        'nnFormerTrainerV2_fp32',
        'nnFormerTrainerV2_cycleAtEnd',
        'nnFormerTrainerV2_reduceMomentumDuringTraining',
        'nnFormerTrainerV2_graduallyTransitionFromCEToDice',
        'nnFormerTrainerV2_insaneDA',
        'nnFormerTrainerV2_independentScalePerAxis',
        'nnFormerTrainerV2_Mish',
        'nnFormerTrainerV2_Ranger_lr3en4',
        'nnFormerTrainerV2_cycleAtEnd2',
        'nnFormerTrainerV2_GN',
        'nnFormerTrainerV2_DP',
        'nnFormerTrainerV2_FRN',
        'nnFormerTrainerV2_resample33',
        'nnFormerTrainerV2_O2',
        'nnFormerTrainerV2_ResencUNet',
        'nnFormerTrainerV2_DA2',
        'nnFormerTrainerV2_allConv3x3',
        'nnFormerTrainerV2_ForceBD',
        'nnFormerTrainerV2_ForceSD',
        'nnFormerTrainerV2_ReLU',
        'nnFormerTrainerV2_LReLU_slope_2en1',
        'nnFormerTrainerV2_lReLU_convReLUIN',
        'nnFormerTrainerV2_ReLU_biasInSegOutput',
        'nnFormerTrainerV2_ReLU_convReLUIN',
        'nnFormerTrainerV2_lReLU_biasInSegOutput',
        'nnFormerTrainerV2_Loss_DicewithBG_LR1en3',
        #'nnFormerTrainerV2_Loss_MCCnoBG',
        'nnFormerTrainerV2_Loss_DicewithBG',
    ]

    datasets = \
        {"Task001_BrainTumour": ("3d_fullres", ),
        "Task002_Heart": ("3d_fullres",),
        #"Task024_Promise": ("3d_fullres",),
        #"Task027_ACDC": ("3d_fullres",),
        "Task003_Liver": ("3d_fullres", "3d_lowres"),
        "Task004_Hippocampus": ("3d_fullres",),
        "Task005_Prostate": ("3d_fullres",),
        "Task006_Lung": ("3d_fullres", "3d_lowres"),
        "Task007_Pancreas": ("3d_fullres", "3d_lowres"),
        "Task008_HepaticVessel": ("3d_fullres", "3d_lowres"),
        "Task009_Spleen": ("3d_fullres", "3d_lowres"),
        "Task010_Colon": ("3d_fullres", "3d_lowres"),}

    expected_validation_folder = "validation_raw"
    alternative_validation_folder = "validation"
    alternative_alternative_validation_folder = "validation_tiledTrue_doMirror_True"

    interested_in = "mean"

    result_per_dataset = {}
    for d in datasets:
        result_per_dataset[d] = {}
        for c in datasets[d]:
            result_per_dataset[d][c] = []

    valid_trainers = []
    all_trainers = []

    with open(output_file, 'w') as f:
        f.write("trainer,")
        for t in datasets.keys():
            s = t[4:7]
            for c in datasets[t]:
                s1 = s + "_" + c[3]
                f.write("%s," % s1)
        f.write("\n")

        for trainer in trainers:
            trainer_plans = [plans]
            if trainer in overwrite_plans.keys():
                trainer_plans = overwrite_plans[trainer]

            result_per_dataset_here = {}
            for d in datasets:
                result_per_dataset_here[d] = {}

            for p in trainer_plans:
                name = "%s__%s" % (trainer, p)
                all_present = True
                all_trainers.append(name)

                f.write("%s," % name)
                for dataset in datasets.keys():
                    for configuration in datasets[dataset]:
                        summary_file = join(summary_files_dir, "%s__%s__%s__%s__%s__%s.json" % (dataset, configuration, trainer, p, expected_validation_folder, folds_str))
                        if not isfile(summary_file):
                            summary_file = join(summary_files_dir, "%s__%s__%s__%s__%s__%s.json" % (dataset, configuration, trainer, p, alternative_validation_folder, folds_str))
                            if not isfile(summary_file):
                                summary_file = join(summary_files_dir, "%s__%s__%s__%s__%s__%s.json" % (
                                dataset, configuration, trainer, p, alternative_alternative_validation_folder, folds_str))
                                if not isfile(summary_file):
                                    all_present = False
                                    print(name, dataset, configuration, "has missing summary file")
                        if isfile(summary_file):
                            result = load_json(summary_file)['results'][interested_in]['mean']['Dice']
                            result_per_dataset_here[dataset][configuration] = result
                            f.write("%02.4f," % result)
                        else:
                            f.write("NA,")
                            result_per_dataset_here[dataset][configuration] = 0

                f.write("\n")

                if True:  # keep every trainer in the ranking; missing results were written as 0 above
                    valid_trainers.append(name)
                    for d in datasets:
                        for c in datasets[d]:
                            result_per_dataset[d][c].append(result_per_dataset_here[d][c])

    invalid_trainers = [i for i in all_trainers if i not in valid_trainers]

    num_valid = len(valid_trainers)
    num_datasets = len(datasets.keys())
    # create an array that is trainer x dataset. If more than one configuration is there then use the best metric across the two
    all_res = np.zeros((num_valid, num_datasets))
    for j, d in enumerate(datasets.keys()):
        ks = list(result_per_dataset[d].keys())
        tmp = result_per_dataset[d][ks[0]]
        for k in ks[1:]:
            for i in range(len(tmp)):
                tmp[i] = max(tmp[i], result_per_dataset[d][k][i])
        all_res[:, j] = tmp

    ranks_arr = np.zeros_like(all_res)
    for d in range(ranks_arr.shape[1]):
        temp = np.argsort(all_res[:, d])[::-1] # inverse because we want the highest dice to be rank0
        ranks = np.empty_like(temp)
        ranks[temp] = np.arange(len(temp))

        ranks_arr[:, d] = ranks

    mn = np.mean(ranks_arr, 1)
    for i in np.argsort(mn):
        print(mn[i], valid_trainers[i])

    print()
    print(valid_trainers[np.argmin(mn)])
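
# The rank aggregation above converts each dataset's Dice column into per-trainer
# ranks (rank 0 = best) by inverting an argsort permutation, then averages the
# ranks across datasets and picks the trainer with the lowest mean rank. A toy,
# self-contained sketch of that computation with made-up scores:

```python
import numpy as np

# Toy scores: 3 trainers x 2 datasets (higher Dice is better).
all_res = np.array([[0.80, 0.70],
                    [0.90, 0.60],
                    [0.85, 0.75]])

ranks_arr = np.zeros_like(all_res)
for d in range(ranks_arr.shape[1]):
    order = np.argsort(all_res[:, d])[::-1]   # indices of trainers, best first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order))      # invert the permutation -> rank per trainer
    ranks_arr[:, d] = ranks

mean_rank = ranks_arr.mean(1)                 # average rank of each trainer across datasets
best = int(np.argmin(mean_rank))              # trainer with the lowest mean rank wins
```

Here trainer 1 wins dataset 0 and trainer 2 wins dataset 1, but trainer 2's
consistent top-two placement gives it the lowest mean rank overall.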


================================================
FILE: unetr_pp/evaluation/model_selection/rank_candidates_StructSeg.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.


import numpy as np
from batchgenerators.utilities.file_and_folder_operations import *
from unetr_pp.paths import network_training_output_dir

if __name__ == "__main__":
    # run collect_all_fold0_results_and_summarize_in_one_csv.py first
    summary_files_dir = join(network_training_output_dir, "summary_jsons_new")
    output_file = join(network_training_output_dir, "summary_structseg_5folds.csv")

    folds = (0, 1, 2, 3, 4)
    folds_str = ""
    for f in folds:
        folds_str += str(f)

    plans = "nnFormerPlans"

    overwrite_plans = {
        'nnFormerTrainerV2_2': ["nnFormerPlans", "nnFormerPlans_customClip"], # r
        'nnFormerTrainerV2_2_noMirror': ["nnFormerPlans", "nnFormerPlans_customClip"],  # r
        'nnFormerTrainerV2_lessMomentum_noMirror': ["nnFormerPlans", "nnFormerPlans_customClip"],  # r
        'nnFormerTrainerV2_2_structSeg_noMirror': ["nnFormerPlans", "nnFormerPlans_customClip"],  # r
        'nnFormerTrainerV2_2_structSeg': ["nnFormerPlans", "nnFormerPlans_customClip"],  # r
        'nnFormerTrainerV2_lessMomentum_noMirror_structSeg': ["nnFormerPlans", "nnFormerPlans_customClip"],  # r
        'nnFormerTrainerV2_FabiansResUNet_structSet_NoMirror_leakyDecoder': ["nnFormerPlans", "nnFormerPlans_customClip"],  # r
        'nnFormerTrainerV2_FabiansResUNet_structSet_NoMirror': ["nnFormerPlans", "nnFormerPlans_customClip"],  # r
        'nnFormerTrainerV2_FabiansResUNet_structSet': ["nnFormerPlans", "nnFormerPlans_customClip"],  # r

    }

    trainers = ['nnFormerTrainer'] + [
        'nnFormerTrainerV2_2',
        'nnFormerTrainerV2_lessMomentum_noMirror',
        'nnFormerTrainerV2_2_noMirror',
        'nnFormerTrainerV2_2_structSeg_noMirror',
        'nnFormerTrainerV2_2_structSeg',
        'nnFormerTrainerV2_lessMomentum_noMirror_structSeg',
        'nnFormerTrainerV2_FabiansResUNet_structSet_NoMirror_leakyDecoder',
        'nnFormerTrainerV2_FabiansResUNet_structSet_NoMirror',
        'nnFormerTrainerV2_FabiansResUNet_structSet',
    ]

    datasets = \
        {"Task049_StructSeg2019_Task1_HaN_OAR": ("3d_fullres",  "3d_lowres", "2d"),
        "Task050_StructSeg2019_Task2_Naso_GTV": ("3d_fullres", "3d_lowres", "2d"),
        "Task051_StructSeg2019_Task3_Thoracic_OAR": ("3d_fullres", "3d_lowres", "2d"),
        "Task052_StructSeg2019_Task4_Lung_GTV": ("3d_fullres", "3d_lowres", "2d"),
}

    expected_validation_folder = "validation_raw"
    alternative_validation_folder = "validation"
    alternative_alternative_validation_folder = "validation_tiledTrue_doMirror_True"

    interested_in = "mean"

    result_per_dataset = {}
    for d in datasets:
        result_per_dataset[d] = {}
        for c in datasets[d]:
            result_per_dataset[d][c] = []

    valid_trainers = []
    all_trainers = []

    with open(output_file, 'w') as f:
        f.write("trainer,")
        for t in datasets.keys():
            s = t[4:7]
            for c in datasets[t]:
                if len(c) > 3:
                    n = c[3]
                else:
                    n = "2"
                s1 = s + "_" + n
                f.write("%s," % s1)
        f.write("\n")

        for trainer in trainers:
            trainer_plans = [plans]
            if trainer in overwrite_plans.keys():
                trainer_plans = overwrite_plans[trainer]

            result_per_dataset_here = {}
            for d in datasets:
                result_per_dataset_here[d] = {}

            for p in trainer_plans:
                name = "%s__%s" % (trainer, p)
                all_present = True
                all_trainers.append(name)

                f.write("%s," % name)
                for dataset in datasets.keys():
                    for configuration in datasets[dataset]:
                        summary_file = join(summary_files_dir, "%s__%s__%s__%s__%s__%s.json" % (dataset, configuration, trainer, p, expected_validation_folder, folds_str))
                        if not isfile(summary_file):
                            summary_file = join(summary_files_dir, "%s__%s__%s__%s__%s__%s.json" % (dataset, configuration, trainer, p, alternative_validation_folder, folds_str))
                            if not isfile(summary_file):
                                summary_file = join(summary_files_dir, "%s__%s__%s__%s__%s__%s.json" % (
                                dataset, configuration, trainer, p, alternative_alternative_validation_folder, folds_str))
                                if not isfile(summary_file):
                                    all_present = False
                                    print(name, dataset, configuration, "has missing summary file")
                        if isfile(summary_file):
                            result = load_json(summary_file)['results'][interested_in]['mean']['Dice']
                            result_per_dataset_here[dataset][configuration] = result
                            f.write("%02.4f," % result)
                        else:
                            f.write("NA,")
                f.write("\n")

                if all_present:
                    valid_trainers.append(name)
                    for d in datasets:
                        for c in datasets[d]:
                            result_per_dataset[d][c].append(result_per_dataset_here[d][c])

    invalid_trainers = [i for i in all_trainers if i not in valid_trainers]

    num_valid = len(valid_trainers)
    num_datasets = len(datasets.keys())
    # create an array that is trainer x dataset. If more than one configuration is there then use the best metric across the two
    all_res = np.zeros((num_valid, num_datasets))
    for j, d in enumerate(datasets.keys()):
        ks = list(result_per_dataset[d].keys())
        tmp = result_per_dataset[d][ks[0]]
        for k in ks[1:]:
            for i in range(len(tmp)):
                tmp[i] = max(tmp[i], result_per_dataset[d][k][i])
        all_res[:, j] = tmp

    ranks_arr = np.zeros_like(all_res)
    for d in range(ranks_arr.shape[1]):
        temp = np.argsort(all_res[:, d])[::-1] # inverse because we want the highest dice to be rank0
        ranks = np.empty_like(temp)
        ranks[temp] = np.arange(len(temp))

        ranks_arr[:, d] = ranks

    mn = np.mean(ranks_arr, 1)
    for i in np.argsort(mn):
        print(mn[i], valid_trainers[i])

    print()
    print(valid_trainers[np.argmin(mn)])
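
# Both ranking scripts look up summary files named
# "<dataset>__<configuration>__<trainer>__<plans>__<validation_folder>__<folds>.json",
# falling back through three validation folder names in a fixed order. A small
# sketch of that lookup order; the directory and task names used below are
# hypothetical, chosen only for illustration.

```python
import os

def candidate_summary_files(base_dir, dataset, configuration, trainer, plans, folds_str):
    # mirrors the fallback order used above: validation_raw first, then
    # validation, then validation_tiledTrue_doMirror_True
    for validation_folder in ("validation_raw", "validation",
                              "validation_tiledTrue_doMirror_True"):
        yield os.path.join(base_dir, "%s__%s__%s__%s__%s__%s.json" % (
            dataset, configuration, trainer, plans, validation_folder, folds_str))

def find_summary_file(base_dir, *args):
    # return the first candidate that exists on disk, or None
    for candidate in candidate_summary_files(base_dir, *args):
        if os.path.isfile(candidate):
            return candidate
    return None
```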


================================================
FILE: unetr_pp/evaluation/model_selection/rank_candidates_cascade.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.


import numpy as np
from batchgenerators.utilities.file_and_folder_operations import *
from unetr_pp.paths import network_training_output_dir

if __name__ == "__main__":
    # run collect_all_fold0_results_and_summarize_in_one_csv.py first
    summary_files_dir = join(network_training_output_dir, "summary_jsons_fold0_new")
    output_file = join(network_training_output_dir, "summary_cascade.csv")

    folds = (0, )
    folds_str = ""
    for f in folds:
        folds_str += str(f)

    plans = "nnFormerPlansv2.1"

    overwrite_plans = {
        'nnFormerTrainerCascadeFullRes': ['nnFormerPlans'],
    }

    trainers = [
        'nnFormerTrainerCascadeFullRes',
        'nnFormerTrainerV2CascadeFullRes_EducatedGuess',
        'nnFormerTrainerV2CascadeFullRes_EducatedGuess2',
        'nnFormerTrainerV2CascadeFullRes_EducatedGuess3',
        'nnFormerTrainerV2CascadeFullRes_lowerLR',
        'nnFormerTrainerV2CascadeFullRes',
        'nnFormerTrainerV2CascadeFullRes_noConnComp',
        'nnFormerTrainerV2CascadeFullRes_shorter_lowerLR',
        'nnFormerTrainerV2CascadeFullRes_shorter',
        'nnFormerTrainerV2CascadeFullRes_smallerBinStrel',
    ]

    datasets = \
        {
        "Task003_Liver": ("3d_cascade_fullres", ),
        "Task006_Lung": ("3d_cascade_fullres", ),
        "Task007_Pancreas": ("3d_cascade_fullres", ),
        "Task008_HepaticVessel": ("3d_cascade_fullres", ),
        "Task009_Spleen": ("3d_cascade_fullres", ),
        "Task010_Colon": ("3d_cascade_fullres", ),
        "Task017_AbdominalOrganSegmentation": ("3d_cascade_fullres", ),
        #"Task029_LITS": ("3d_cascade_fullres", ),
        "Task048_KiTS_clean": ("3d_cascade_fullres", ),
        "Task055_SegTHOR": ("3d_cascade_fullres", ),
        "Task056_VerSe": ("3d_cascade_fullres", ),
        #"": ("3d_cascade_fullres", ),
        }

    expected_validation_folder = "validation_raw"
    alternative_validation_folder = "validation"
    alternative_alternative_validation_folder = "validation_tiledTrue_doMirror_True"

    interested_in = "mean"

    result_per_dataset = {}
    for d in datasets:
        result_per_dataset[d] = {}
        for c in datasets[d]:
            result_per_dataset[d][c] = []

    valid_trainers = []
    all_trainers = []

    with open(output_file, 'w') as f:
        f.write("trainer,")
        for t in datasets.keys():
            s = t[4:7]
            for c in datasets[t]:
                s1 = s + "_" + c[3]
                f.write("%s," % s1)
        f.write("\n")

        for trainer in trainers:
            trainer_plans = [plans]
            if trainer in overwrite_plans.keys():
                trainer_plans = overwrite_plans[trainer]

            result_per_dataset_here = {}
            for d in datasets:
                result_per_dataset_here[d] = {}

            for p in trainer_plans:
                name = "%s__%s" % (trainer, p)
                all_present = True
                all_trainers.append(name)

                f.write("%s," % name)
                for dataset in datasets.keys():
                    for configuration in datasets[dataset]:
                        summary_file = None
                        for vf in (expected_validation_folder, alternative_validation_folder,
                                   alternative_alternative_validation_folder):
                            candidate = join(summary_files_dir, "%s__%s__%s__%s__%s__%s.json" % (
                                dataset, configuration, trainer, p, vf, folds_str))
                            if isfile(candidate):
                                summary_file = candidate
                                break
                        if summary_file is not None:
                            result = load_json(summary_file)['results'][interested_in]['mean']['Dice']
                            result_per_dataset_here[dataset][configuration] = result
                            f.write("%02.4f," % result)
                        else:
                            all_present = False
                            print(name, dataset, configuration, "is missing a summary file")
                            f.write("NA,")
                            result_per_dataset_here[dataset][configuration] = 0

                f.write("\n")

                if True:  # all trainers are kept for the ranking; all_present is only used for the warning above
                    valid_trainers.append(name)
                    for d in datasets:
                        for c in datasets[d]:
                            result_per_dataset[d][c].append(result_per_dataset_here[d][c])

    invalid_trainers = [i for i in all_trainers if i not in valid_trainers]

    num_valid = len(valid_trainers)
    num_datasets = len(datasets.keys())
    # create an array that is trainer x dataset. If more than one configuration is present, use the best metric across the configurations
    all_res = np.zeros((num_valid, num_datasets))
    for j, d in enumerate(datasets.keys()):
        ks = list(result_per_dataset[d].keys())
        tmp = result_per_dataset[d][ks[0]]
        for k in ks[1:]:
            for i in range(len(tmp)):
                tmp[i] = max(tmp[i], result_per_dataset[d][k][i])
        all_res[:, j] = tmp

    ranks_arr = np.zeros_like(all_res)
    for d in range(ranks_arr.shape[1]):
        temp = np.argsort(all_res[:, d])[::-1]  # reversed so that the highest Dice gets rank 0
        ranks = np.empty_like(temp)
        ranks[temp] = np.arange(len(temp))

        ranks_arr[:, d] = ranks

    mn = np.mean(ranks_arr, 1)
    for i in np.argsort(mn):
        print(mn[i], valid_trainers[i])

    print()
    print(valid_trainers[np.argmin(mn)])
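The rank computation above (reversed argsort, then scattering positions back into the original order) can be illustrated in isolation; the scores below are made-up Dice values, not taken from any experiment:

```python
import numpy as np

# Hypothetical mean Dice per trainer on one dataset (made-up values)
scores = np.array([0.70, 0.85, 0.60, 0.85])

# Same trick as in the script: reversed argsort so the highest Dice gets
# rank 0, then write each trainer's position back at its original index
order = np.argsort(scores)[::-1]
ranks = np.empty_like(order)
ranks[order] = np.arange(len(order))
```

The two 0.85 trainers end up with ranks 0 and 1 (ties broken arbitrarily), and the 0.70 and 0.60 trainers get ranks 2 and 3; averaging such per-dataset ranks and taking `np.argmin` yields the best overall trainer.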


================================================
FILE: unetr_pp/evaluation/model_selection/summarize_results_in_one_json.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

from collections import OrderedDict
from unetr_pp.evaluation.add_mean_dice_to_json import foreground_mean
from batchgenerators.utilities.file_and_folder_operations import *
from unetr_pp.paths import network_training_output_dir
import numpy as np


def summarize(tasks, models=('2d', '3d_lowres', '3d_fullres', '3d_cascade_fullres'),
              output_dir=join(network_training_output_dir, "summary_jsons"), folds=(0, 1, 2, 3, 4)):
    maybe_mkdir_p(output_dir)

    if len(tasks) == 1 and tasks[0] == "all":
        tasks = list(range(999))
    else:
        tasks = [int(i) for i in tasks]

    for model in models:
        for t in tasks:
            t = int(t)
            if not isdir(join(network_training_output_dir, model)):
                continue
            task_name = subfolders(join(network_training_output_dir, model), prefix="Task%03.0d" % t, join=False)
            if len(task_name) != 1:
                print("did not find unique output folder for network %s and task %s" % (model, t))
                continue
            task_name = task_name[0]
            out_dir_task = join(network_training_output_dir, model, task_name)

            model_trainers = subdirs(out_dir_task, join=False)
            for trainer in model_trainers:
                if trainer.startswith("fold"):
                    continue
                out_dir = join(out_dir_task, trainer)

                validation_folders = []
                for fld in folds:
                    d = join(out_dir, "fold%d"%fld)
                    if not isdir(d):
                        d = join(out_dir, "fold_%d"%fld)
                        if not isdir(d):
                            break
                    validation_folders += subfolders(d, prefix="validation", join=False)

                for v in validation_folders:
                    ok = True
                    metrics = OrderedDict()
                    for fld in folds:
                        d = join(out_dir, "fold%d"%fld)
                        if not isdir(d):
                            d = join(out_dir, "fold_%d"%fld)
                            if not isdir(d):
                                ok = False
                                break
                        validation_folder = join(d, v)

                        if not isfile(join(validation_folder, "summary.json")):
                            print("summary.json missing for net %s task %s fold %d" % (model, task_name, fld))
                            ok = False
                            break

                        metrics_tmp = load_json(join(validation_folder, "summary.json"))["results"]["mean"]
                        for l in metrics_tmp.keys():
                            if metrics.get(l) is None:
                                metrics[l] = OrderedDict()
                            for m in metrics_tmp[l].keys():
                                if metrics[l].get(m) is None:
                                    metrics[l][m] = []
                                metrics[l][m].append(metrics_tmp[l][m])
                    if ok:
                        for l in metrics.keys():
                            for m in metrics[l].keys():
                                assert len(metrics[l][m]) == len(folds)
                                metrics[l][m] = np.mean(metrics[l][m])
                        json_out = OrderedDict()
                        json_out["results"] = OrderedDict()
                        json_out["results"]["mean"] = metrics
                        json_out["task"] = task_name
                        json_out["description"] = model + " " + task_name + " all folds summary"
                        json_out["name"] = model + " " + task_name + " all folds summary"
                        json_out["experiment_name"] = model
                        save_json(json_out, join(out_dir, "summary_allFolds__%s.json" % v))
                        save_json(json_out, join(output_dir, "%s__%s__%s__%s.json" % (task_name, model, trainer, v)))
                        foreground_mean(join(out_dir, "summary_allFolds__%s.json" % v))
                        foreground_mean(join(output_dir, "%s__%s__%s__%s.json" % (task_name, model, trainer, v)))
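The cross-fold aggregation in summarize() boils down to collecting each metric into a per-label list and averaging; a toy sketch with two made-up folds (the dict shapes mimic the "mean" section of a fold's summary.json):

```python
import numpy as np
from collections import OrderedDict

# Hypothetical "mean" sections from two folds' summary.json files
fold_means = [
    {'1': {'Dice': 0.80}, '2': {'Dice': 0.90}},
    {'1': {'Dice': 0.84}, '2': {'Dice': 0.88}},
]

# Collect per-fold values into lists, as the loop over folds does
metrics = OrderedDict()
for fold in fold_means:
    for label, vals in fold.items():
        metrics.setdefault(label, OrderedDict())
        for m, v in vals.items():
            metrics[label].setdefault(m, []).append(v)

# Average across folds, as summarize() does before writing the json
for label in metrics:
    for m in metrics[label]:
        metrics[label][m] = float(np.mean(metrics[label][m]))
```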


def summarize2(task_ids, models=('2d', '3d_lowres', '3d_fullres', '3d_cascade_fullres'),
               output_dir=join(network_training_output_dir, "summary_jsons"), folds=(0, 1, 2, 3, 4)):
    maybe_mkdir_p(output_dir)

    if len(task_ids) == 1 and task_ids[0] == "all":
        task_ids = list(range(999))
    else:
        task_ids = [int(i) for i in task_ids]

    for model in models:
        for t in task_ids:
            if not isdir(join(network_training_output_dir, model)):
                continue
            task_name = subfolders(join(network_training_output_dir, model), prefix="Task%03.0d" % t, join=False)
            if len(task_name) != 1:
                print("did not find unique output folder for network %s and task %s" % (model, t))
                continue
            task_name = task_name[0]
            out_dir_task = join(network_training_output_dir, model, task_name)

            model_trainers = subdirs(out_dir_task, join=False)
            for trainer in model_trainers:
                if trainer.startswith("fold"):
                    continue
                out_dir = join(out_dir_task, trainer)

                validation_folders = []
                for fld in folds:
                    fold_output_dir = join(out_dir, "fold_%d"%fld)
                    if not isdir(fold_output_dir):
                        continue
                    validation_folders += subfolders(fold_output_dir, prefix="validation", join=False)

                validation_folders = np.unique(validation_folders)

                for v in validation_folders:
                    ok = True
                    metrics = OrderedDict()
                    metrics['mean'] = OrderedDict()
                    metrics['median'] = OrderedDict()
                    metrics['all'] = OrderedDict()
                    for fld in folds:
                        fold_output_dir = join(out_dir, "fold_%d"%fld)

                        if not isdir(fold_output_dir):
                            print("fold missing", model, task_name, trainer, fld)
                            ok = False
                            break
                        validation_folder = join(fold_output_dir, v)

                        if not isdir(validation_folder):
                            print("validation folder missing", model, task_name, trainer, fld, v)
                            ok = False
                            break

                        if not isfile(join(validation_folder, "summary.json")):
                            print("summary.json missing", model, task_name, trainer, fld, v)
                            ok = False
                            break

                        all_metrics = load_json(join(validation_folder, "summary.json"))["results"]
                        # we now need to get the mean and median metrics. We use the mean metrics just to get the
                        # names of the computed metrics; we ignore the precomputed mean and recompute it ourselves
                        mean_metrics = all_metrics["mean"]
                        all_labels = [i for i in list(mean_metrics.keys()) if i != "mean"]

                        if len(all_labels) == 0:
                            print(v, fld)
                            ok = False
                            break

                        all_metrics_names = list(mean_metrics[all_labels[0]].keys())
                        for l in all_labels:
                            # initialize the data structure, no values are copied yet
                            for k in ['mean', 'median', 'all']:
                                if metrics[k].get(l) is None:
                                    metrics[k][l] = OrderedDict()
                            for m in all_metrics_names:
                                if metrics['all'][l].get(m) is None:
                                    metrics['all'][l][m] = []
                        for entry in all_metrics['all']:
                            for l in all_labels:
                                for m in all_metrics_names:
                                    metrics['all'][l][m].append(entry[l][m])
                    # now compute mean and median
                    for l in metrics['all'].keys():
                        for m in metrics['all'][l].keys():
                            metrics['mean'][l][m] = np.nanmean(metrics['all'][l][m])
                            metrics['median'][l][m] = np.nanmedian(metrics['all'][l][m])
                    if ok:
                        fold_string = ""
                        for f in folds:
                            fold_string += str(f)
                        json_out = OrderedDict()
                        json_out["results"] = OrderedDict()
                        json_out["results"]["mean"] = metrics['mean']
                        json_out["results"]["median"] = metrics['median']
                        json_out["task"] = task_name
                        json_out["description"] = model + " " + task_name + " summary folds " + str(folds)
                        json_out["name"] = model + " " + task_name + " summary folds " + str(folds)
                        json_out["experiment_name"] = model
                        save_json(json_out, join(output_dir, "%s__%s__%s__%s__%s.json" % (task_name, model, trainer, v, fold_string)))
                        foreground_mean2(join(output_dir, "%s__%s__%s__%s__%s.json" % (task_name, model, trainer, v, fold_string)))


def foreground_mean2(filename):
    with open(filename, 'r') as f:
        res = json.load(f)
    class_ids = np.array([int(i) for i in res['results']['mean'].keys() if (i != 'mean') and i != '0'])

    metric_names = res['results']['mean']['1'].keys()
    res['results']['mean']["mean"] = OrderedDict()
    res['results']['median']["mean"] = OrderedDict()
    for m in metric_names:
        foreground_values = [res['results']['mean'][str(i)][m] for i in class_ids]
        res['results']['mean']["mean"][m] = np.nanmean(foreground_values)
        foreground_values = [res['results']['median'][str(i)][m] for i in class_ids]
        res['results']['median']["mean"][m] = np.nanmean(foreground_values)
    with open(filename, 'w') as f:
        json.dump(res, f, indent=4, sort_keys=True)
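The foreground averaging in foreground_mean2 reduces to a nanmean over the non-background class entries; a toy sketch with made-up Dice values, where key '0' is the background class that gets excluded:

```python
import numpy as np

# Hypothetical per-class mean results, keyed by label id as in summary.json
mean_results = {'0': {'Dice': 0.99}, '1': {'Dice': 0.80}, '2': {'Dice': 0.90}}

# Same selection as foreground_mean2: drop 'mean' and background '0'
class_ids = [int(i) for i in mean_results if i not in ('mean', '0')]
foreground_dice = np.nanmean([mean_results[str(i)]['Dice'] for i in class_ids])
```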


if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser(usage="This is intended to identify the best model based on the five fold "
                                           "cross-validation. Running this script requires all models to have been "
                                           "run already. This script will summarize the results of the five folds of "
                                           "all models in one json each for easy interpretability")
    parser.add_argument("-t", '--task_ids', nargs="+", required=True, help="task id. can be 'all'")
    parser.add_argument("-f", '--folds', nargs="+", required=False, type=int, default=[0, 1, 2, 3, 4])
    parser.add_argument("-m", '--models', nargs="+", required=False, default=['2d', '3d_lowres', '3d_fullres', '3d_cascade_fullres'])

    args = parser.parse_args()
    tasks = args.task_ids
    models = args.models

    folds = args.folds
    summarize2(tasks, models, folds=folds, output_dir=join(network_training_output_dir, "summary_jsons_new"))



================================================
FILE: unetr_pp/evaluation/model_selection/summarize_results_with_plans.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.


from batchgenerators.utilities.file_and_folder_operations import *
import os
from unetr_pp.evaluation.model_selection.summarize_results_in_one_json import summarize
from unetr_pp.paths import network_training_output_dir
import numpy as np


def list_to_string(l, delim=","):
    st = "%03.3f" % l[0]
    for i in l[1:]:
        st += delim + "%03.3f" % i
    return st


def write_plans_to_file(f, plans_file, stage=0, do_linebreak_at_end=True, override_name=None):
    a = load_pickle(plans_file)
    stages = sorted(a['plans_per_stage'].keys())
    props = a['plans_per_stage'][stages[stage]]
    patch_size_in_mm = [i * j for i, j in zip(props['patch_size'], props['current_spacing'])]
    median_patient_size_in_mm = [i * j for i, j in zip(props['median_patient_size_in_voxels'],
                                                       props['current_spacing'])]
    if override_name is None:
        f.write(plans_file.split("/")[-2] + "__" + plans_file.split("/")[-1])
    else:
        f.write(override_name)
    f.write(";%d" % stage)
    f.write(";%s" % str(props['batch_size']))
    f.write(";%s" % str(props['num_pool_per_axis']))
    f.write(";%s" % str(props['patch_size']))
    f.write(";%s" % list_to_string(patch_size_in_mm))
    f.write(";%s" % str(props['median_patient_size_in_voxels']))
    f.write(";%s" % list_to_string(median_patient_size_in_mm))
    f.write(";%s" % list_to_string(props['current_spacing']))
    f.write(";%s" % list_to_string(props['original_spacing']))
    f.write(";%s" % str(props['pool_op_kernel_sizes']))
    f.write(";%s" % str(props['conv_kernel_sizes']))
    if do_linebreak_at_end:
        f.write("\n")
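The physical patch size written by write_plans_to_file is just the voxel count times the spacing per axis; a toy sketch with made-up plan values (not from any real plans.pkl):

```python
# Hypothetical plan entries: a 3D patch size in voxels and the spacing
# (mm per voxel) it is resampled to
patch_size = [64, 128, 128]
current_spacing = [3.0, 0.75, 0.75]

# Same computation as patch_size_in_mm above: physical extent of the patch
patch_size_in_mm = [i * j for i, j in zip(patch_size, current_spacing)]
```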


if __name__ == "__main__":
    summarize((1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 24, 27), output_dir=join(network_training_output_dir, "summary_fold0"), folds=(0,))
    base_dir = os.environ['RESULTS_FOLDER']
    nnformers = ['nnFormerV2', 'nnFormerV2_zspacing']
    task_ids = list(range(99))
    with open("summary.csv", 'w') as f:
        f.write("identifier;stage;batch_size;num_pool_per_axis;patch_size;patch_size(mm);median_patient_size_in_voxels;median_patient_size_in_mm;current_spacing;original_spacing;pool_op_kernel_sizes;conv_kernel_sizes;patient_dc;global_dc\n")
        for i in task_ids:
            for nnformer in nnformers:
                try:
                    summary_folder = join(base_dir, nnformer, "summary_fold0")
                    if isdir(summary_folder):
                        summary_files = subfiles(summary_folder, join=False, prefix="Task%03.0d_" % i, suffix=".json", sort=True)
                        for s in summary_files:
                            tmp = s.split("__")
                            trainer = tmp[2]

                            expected_output_folder = join(base_dir, nnformer, tmp[1], tmp[0], tmp[2].split(".")[0])
                            name = tmp[0] + "__" + nnformer + "__" + tmp[1] + "__" + tmp[2].split(".")[0]
                            global_dice_json = join(base_dir, nnformer, tmp[1], tmp[0], tmp[2].split(".")[0], "fold_0", "validation_tiledTrue_doMirror_True", "global_dice.json")

                            if not isdir(expected_output_folder) or len(tmp) > 3:
                                if len(tmp) == 2:
                                    continue
                                expected_output_folder = join(base_dir, nnformer, tmp[1], tmp[0], tmp[2] + "__" + tmp[3].split(".")[0])
                                name = tmp[0] + "__" + nnformer + "__" + tmp[1] + "__" + tmp[2] + "__" + tmp[3].split(".")[0]
                                global_dice_json = join(base_dir, nnformer, tmp[1], tmp[0], tmp[2] + "__" + tmp[3].split(".")[0], "fold_0", "validation_tiledTrue_doMirror_True", "global_dice.json")

                            assert isdir(expected_output_folder), "expected output dir not found"
                            plans_file = join(expected_output_folder, "plans.pkl")
                            assert isfile(plans_file)

                            plans = load_pickle(plans_file)
                            num_stages = len(plans['plans_per_stage'])
                            if num_stages > 1 and tmp[1] == "3d_fullres":
                                stage = 1
                            elif (num_stages == 1 and tmp[1] == "3d_fullres") or tmp[1] == "3d_lowres":
                                stage = 0
                            else:
                                print("skipping", s)
                                continue

                            g_dc = load_json(global_dice_json)
                            mn_glob_dc = np.mean(list(g_dc.values()))

                            write_plans_to_file(f, plans_file, stage, False, name)
                            # now read and add result to end of line
                            results = load_json(join(summary_folder, s))
                            mean_dc = results['results']['mean']['mean']['Dice']
                            f.write(";%03.3f" % mean_dc)
                            f.write(";%03.3f\n" % mn_glob_dc)
                            print(name, mean_dc)
                except Exception as e:
                    print(e)


================================================
FILE: unetr_pp/evaluation/region_based_evaluation.py
================================================
from copy import deepcopy
from multiprocessing.pool import Pool

from batchgenerators.utilities.file_and_folder_operations import *
from medpy import metric
import SimpleITK as sitk
import numpy as np
from unetr_pp.configuration import default_num_threads
from unetr_pp.postprocessing.consolidate_postprocessing import collect_cv_niftis


def get_brats_regions():
    """
    This is only valid for the BraTS data used here, where the labels are 1, 2, and 3. The original BraTS data use a
    different labeling convention!
    :return:
    """
    regions = {
        "whole tumor": (1, 2, 3),
        "tumor core": (2, 3),
        "enhancing tumor": (3,)
    }
    return regions


def get_KiTS_regions():
    regions = {
        "kidney incl tumor": (1, 2),
        "tumor": (2,)
    }
    return regions


def create_region_from_mask(mask, join_labels: tuple):
    mask_new = np.zeros_like(mask, dtype=np.uint8)
    for l in join_labels:
        mask_new[mask == l] = 1
    return mask_new
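A standalone sketch of how create_region_from_mask merges several labels into one binary region, using a toy 3x3 label map (made-up values; cf. the "tumor core" region (2, 3) from get_brats_regions):

```python
import numpy as np

# Toy label map: 0 = background, 1/2/3 = structures
mask = np.array([[0, 1, 1],
                 [2, 2, 3],
                 [0, 0, 3]])

# Same logic as create_region_from_mask: set every voxel whose label is
# in the region tuple to 1, everything else stays 0
region = np.zeros_like(mask, dtype=np.uint8)
for l in (2, 3):  # e.g. the "tumor core" region
    region[mask == l] = 1
```

evaluate_case then computes a Dice score between such binary region masks of prediction and ground truth.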


def evaluate_case(file_pred: str, file_gt: str, regions):
    image_gt = sitk.GetArrayFromImage(sitk.ReadImage(file_gt))
    image_pred = sitk.GetArrayFromImage(sitk.ReadImage(file_pred))
    results = []
    for r in regions:
        mask_pred = create_region_from_mask(image_pred, r)
        mask_gt = create_region_from_mask(image_gt, r)
        dc = np.nan if np.sum(mask_gt) == 0 and np.sum(mask_pred) == 0 else metric.dc(mask_pred, mask_gt)
        results.append(dc)
    return results


def evaluate_regions(folder_predicted: str, folder_gt: str, regions: dict, processes=default_num_threads):
    region_names = list(regions.keys())
    files_in_pred = subfiles(folder_predicted, suffix='.nii.gz', join=False)
    files_in_gt = subfiles(folder_gt, suffix='.nii.gz', join=False)
    have_no_gt = [i for i in files_in_pred if i not in files_in_gt]
    assert len(have_no_gt) == 0, "Some files in folder_predicted have no ground truth in folder_gt"
    have_no_pred = [i for i in files_in_gt if i not in files_in_pred]
    if len(have_no_pred) > 0:
        print("WARNING! Some files in folder_gt were not predicted (not present in folder_predicted)!")

    files_in_gt.sort()
    files_in_pred.sort()

    # run for all cases
    full_filenames_gt = [join(folder_gt, i) for i in files_in_pred]
    full_filenames_pred = [join(folder_predicted, i) for i in files_in_pred]

    p = Pool(processes)
    res = p.starmap(evaluate_case, zip(full_filenames_pred, full_filenames_gt, [list(regions.values())] * len(files_in_pred)))
    p.close()
    p.join()

    all_results = {r: [] for r in region_names}
    with open(join(folder_predicted, 'summary.csv'), 'w') as f:
        f.write("casename")
        for r in region_names:
            f.write(",%s" % r)
        f.write("\n")
        for i in range(len(files_in_pred)):
            f.write(files_in_pred[i][:-7])
            result_here = res[i]
            for k, r in enumerate(region_names):
                dc = result_here[k]
                f.write(",%02.4f" % dc)
                all_results[r].append(dc)
            f.write("\n")

        f.write('mean')
        for r in region_names:
            f.write(",%02.4f" % np.nanmean(all_results[r]))
        f.write("\n")
        f.write('median')
        for r in region_names:
            f.write(",%02.4f" % np.nanmedian(all_results[r]))
        f.write("\n")

        f.write('mean (nan is 1)')
        for r in region_names:
            tmp = np.array(all_results[r])
            tmp[np.isnan(tmp)] = 1
            f.write(",%02.4f" % np.mean(tmp))
        f.write("\n")
        f.write('median (nan is 1)')
        for r in region_names:
            tmp = np.array(all_results[r])
            tmp[np.isnan(tmp)] = 1
            f.write(",%02.4f" % np.median(tmp))
        f.write("\n")


if __name__ == '__main__':
    collect_cv_niftis('./', './cv_niftis')
    evaluate_regions('./cv_niftis/', './gt_niftis/', get_brats_regions())


================================================
FILE: unetr_pp/evaluation/surface_dice.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.


import numpy as np
from medpy.metric.binary import __surface_distances


def normalized_surface_dice(a: np.ndarray, b: np.ndarray, threshold: float, spacing: tuple = None, connectivity=1):
    """
    This implementation differs from the official surface dice implementation! These two are not comparable!!!!!

    The normalized surface dice is symmetric, so it should not matter whether a or b is the reference image

    This implementation natively supports 2D and 3D images. Whether other dimensions are supported depends on the
    __surface_distances implementation in medpy

    :param a: image 1, must have the same shape as b
    :param b: image 2, must have the same shape as a
    :param threshold: distances below this threshold will be counted as true positives. The threshold is in mm, not
    voxels! (if spacing = (1, 1(, 1)) then one voxel = 1 mm, so the threshold is effectively in voxels)
    :param spacing: how many mm one voxel measures in reality; must be a tuple of length len(a.shape). Can be left at
    None, in which case an isotropic spacing of 1 mm is assumed
    :param connectivity: see scipy.ndimage.generate_binary_structure for more information. I suggest you leave that
    one alone
    :return:
    """
    assert all([i == j for i, j in zip(a.shape, b.shape)]), "a and b must have the same shape. a.shape= %s, " \
                                                            "b.shape= %s" % (str(a.shape), str(b.shape))
    if spacing is None:
        spacing = tuple([1 for _ in range(len(a.shape))])
    a_to_b = __surface_distances(a, b, spacing, connectivity)
    b_to_a = __surface_distances(b, a, spacing, connectivity)

    numel_a = len(a_to_b)
    numel_b = len(b_to_a)

    tp_a = np.sum(a_to_b <= threshold) / numel_a
    tp_b = np.sum(b_to_a <= threshold) / numel_b

    fp = np.sum(a_to_b > threshold) / numel_a
    fn = np.sum(b_to_a > threshold) / numel_b

    dc = (tp_a + tp_b) / (tp_a + tp_b + fp + fn + 1e-8)  # 1e-8 just so that we don't get div by 0
    return dc
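The tp/fp/fn arithmetic above can be checked on made-up surface distances (in practice these come from medpy's __surface_distances; the arrays below are hypothetical):

```python
import numpy as np

# Hypothetical surface-to-surface distances in mm
a_to_b = np.array([0.0, 0.5, 1.0, 3.0])  # from each surface voxel of a to b
b_to_a = np.array([0.0, 0.5, 2.5])       # from each surface voxel of b to a
threshold = 1.0                          # tolerance in mm

# Same formula as normalized_surface_dice
tp_a = np.sum(a_to_b <= threshold) / len(a_to_b)  # fraction of a's surface within tolerance
tp_b = np.sum(b_to_a <= threshold) / len(b_to_a)  # fraction of b's surface within tolerance
fp = np.sum(a_to_b > threshold) / len(a_to_b)
fn = np.sum(b_to_a > threshold) / len(b_to_a)
dc = (tp_a + tp_b) / (tp_a + tp_b + fp + fn + 1e-8)
```

With these values tp_a = 3/4, tp_b = 2/3, fp = 1/4, fn = 1/3, giving dc = 17/24, i.e. roughly 0.708.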



================================================
FILE: unetr_pp/evaluation/unetr_pp_acdc_checkpoint/unetr_pp/3d_fullres/Task001_ACDC/unetr_pp_trainer_acdc__unetr_pp_Plansv2.1/fold_0/.gitignore
================================================
# Ignore everything in this directory
*
# Except this file
!.gitignore


================================================
FILE: unetr_pp/evaluation/unetr_pp_lung_checkpoint/unetr_pp/3d_fullres/Task006_Lung/unetr_pp_trainer_lung__unetr_pp_Plansv2.1/fold_0/.gitignore
================================================
# Ignore everything in this directory
*
# Except this file
!.gitignore


================================================
FILE: unetr_pp/evaluation/unetr_pp_synapse_checkpoint/unetr_pp/3d_fullres/Task002_Synapse/unetr_pp_trainer_synapse__unetr_pp_Plansv2.1/fold_0/.gitignore
================================================
# Ignore everything in this directory
*
# Except this file
!.gitignore


================================================
FILE: unetr_pp/evaluation/unetr_pp_tumor_checkpoint/unetr_pp/3d_fullres/Task003_tumor/unetr_pp_trainer_tumor__unetr_pp_Plansv2.1/fold_0/.gitignore
================================================
# Ignore everything in this directory
*
# Except this file
!.gitignore


================================================
FILE: unetr_pp/experiment_planning/DatasetAnalyzer.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

from batchgenerators.utilities.file_and_folder_operations import *
from multiprocessing import Pool

from unetr_pp.configuration import default_num_threads
from unetr_pp.paths import nnFormer_raw_data, nnFormer_cropped_data
import numpy as np
import pickle
from unetr_pp.preprocessing.cropping import get_patient_identifiers_from_cropped_files
from skimage.morphology import label
from collections import OrderedDict


class DatasetAnalyzer(object):
    def __init__(self, folder_with_cropped_data, overwrite=True, num_processes=default_num_threads):
        """
        :param folder_with_cropped_data:
        :param overwrite: If True, precomputed values will not be used and will instead be recomputed from the data.
        False allows loading of precomputed values. This can be dangerous if the code of this class has changed in
        the meantime, which is why the default is True.
        """
        self.num_processes = num_processes
        self.overwrite = overwrite
        self.folder_with_cropped_data = folder_with_cropped_data
        self.sizes = self.spacings = None
        self.patient_identifiers = get_patient_identifiers_from_cropped_files(self.folder_with_cropped_data)
        assert isfile(join(self.folder_with_cropped_data, "dataset.json")), \
            "dataset.json needs to be in folder_with_cropped_data"
        self.props_per_case_file = join(self.folder_with_cropped_data, "props_per_case.pkl")
        self.intensityproperties_file = join(self.folder_with_cropped_data, "intensityproperties.pkl")

    def load_properties_of_cropped(self, case_identifier):
        with open(join(self.folder_with_cropped_data, "%s.pkl" % case_identifier), 'rb') as f:
            properties = pickle.load(f)
        return properties

    @staticmethod
    def _check_if_all_in_one_region(seg, regions):
        res = OrderedDict()
        for r in regions:
            new_seg = np.zeros(seg.shape)
            for c in r:
                new_seg[seg == c] = 1
            labelmap, numlabels = label(new_seg, return_num=True)
            if numlabels != 1:
                res[tuple(r)] = False
            else:
                res[tuple(r)] = True
        return res

    @staticmethod
    def _collect_class_and_region_sizes(seg, all_classes, vol_per_voxel):
        volume_per_class = OrderedDict()
        region_volume_per_class = OrderedDict()
        for c in all_classes:
            region_volume_per_class[c] = []
            volume_per_class[c] = np.sum(seg == c) * vol_per_voxel
            labelmap, numregions = label(seg == c, return_num=True)
            for l in range(1, numregions + 1):
                region_volume_per_class[c].append(np.sum(labelmap == l) * vol_per_voxel)
        return volume_per_class, region_volume_per_class

    def _get_unique_labels(self, patient_identifier):
        seg = np.load(join(self.folder_with_cropped_data, patient_identifier) + ".npz")['data'][-1]
        unique_classes = np.unique(seg)
        return unique_classes

    def _load_seg_analyze_classes(self, patient_identifier, all_classes):
        """
        1) what class is in this training case?
        2) what is the size distribution for each class?
        3) what is the region size of each class?
        4) check if all in one region
        :return:
        """
        seg = np.load(join(self.folder_with_cropped_data, patient_identifier) + ".npz")['data'][-1]
        pkl = load_pickle(join(self.folder_with_cropped_data, patient_identifier) + ".pkl")
        vol_per_voxel = np.prod(pkl['itk_spacing'])

        # ad 1)
        unique_classes = np.unique(seg)

        # 4) check if all in one region
        regions = list()
        regions.append(list(all_classes))
        for c in all_classes:
            regions.append((c, ))

        all_in_one_region = self._check_if_all_in_one_region(seg, regions)

        # 2 & 3) region sizes
        volume_per_class, region_sizes = self._collect_class_and_region_sizes(seg, all_classes, vol_per_voxel)

        return unique_classes, all_in_one_region, volume_per_class, region_sizes

    def get_classes(self):
        datasetjson = load_json(join(self.folder_with_cropped_data, "dataset.json"))
        return datasetjson['labels']

    def analyse_segmentations(self):
        class_dct = self.get_classes()

        if self.overwrite or not isfile(self.props_per_case_file):
            p = Pool(self.num_processes)
            res = p.map(self._get_unique_labels, self.patient_identifiers)
            p.close()
            p.join()

            props_per_patient = OrderedDict()
            for pat, unique_classes in zip(self.patient_identifiers, res):
                props = dict()
                props['has_classes'] = unique_classes
                props_per_patient[pat] = props

            save_pickle(props_per_patient, self.props_per_case_file)
        else:
            props_per_patient = load_pickle(self.props_per_case_file)
        return class_dct, props_per_patient

    def get_sizes_and_spacings_after_cropping(self):
        sizes = []
        spacings = []
        for c in self.patient_identifiers:
            properties = self.load_properties_of_cropped(c)
            sizes.append(properties["size_after_cropping"])
            spacings.append(properties["original_spacing"])

        return sizes, spacings

    def get_modalities(self):
        datasetjson = load_json(join(self.folder_with_cropped_data, "dataset.json"))
        modalities = datasetjson["modality"]
        modalities = {int(k): modalities[k] for k in modalities.keys()}
        return modalities

    def get_size_reduction_by_cropping(self):
        size_reduction = OrderedDict()
        for p in self.patient_identifiers:
            props = self.load_properties_of_cropped(p)
            shape_before_crop = props["original_size_of_raw_data"]
            shape_after_crop = props['size_after_cropping']
            size_red = np.prod(shape_after_crop) / np.prod(shape_before_crop)
            size_reduction[p] = size_red
        return size_reduction

    def _get_voxels_in_foreground(self, patient_identifier, modality_id):
        all_data = np.load(join(self.folder_with_cropped_data, patient_identifier) + ".npz")['data']
        modality = all_data[modality_id]
        mask = all_data[-1] > 0
        voxels = list(modality[mask][::10])  # subsampling: no need to take every voxel
        return voxels

    @staticmethod
    def _compute_stats(voxels):
        if len(voxels) == 0:
            return np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan
        median = np.median(voxels)
        mean = np.mean(voxels)
        sd = np.std(voxels)
        mn = np.min(voxels)
        mx = np.max(voxels)
        percentile_99_5 = np.percentile(voxels, 99.5)
        percentile_00_5 = np.percentile(voxels, 0.5)
        return median, mean, sd, mn, mx, percentile_99_5, percentile_00_5

    def collect_intensity_properties(self, num_modalities):
        if self.overwrite or not isfile(self.intensityproperties_file):
            p = Pool(self.num_processes)

            results = OrderedDict()
            for mod_id in range(num_modalities):
                results[mod_id] = OrderedDict()
                v = p.starmap(self._get_voxels_in_foreground,
                              zip(self.patient_identifiers, [mod_id] * len(self.patient_identifiers)))

                w = []
                for iv in v:
                    w += iv

                median, mean, sd, mn, mx, percentile_99_5, percentile_00_5 = self._compute_stats(w)

                local_props = p.map(self._compute_stats, v)
                props_per_case = OrderedDict()
                for i, pat in enumerate(self.patient_identifiers):
                    props_per_case[pat] = OrderedDict()
                    props_per_case[pat]['median'] = local_props[i][0]
                    props_per_case[pat]['mean'] = local_props[i][1]
                    props_per_case[pat]['sd'] = local_props[i][2]
                    props_per_case[pat]['mn'] = local_props[i][3]
                    props_per_case[pat]['mx'] = local_props[i][4]
                    props_per_case[pat]['percentile_99_5'] = local_props[i][5]
                    props_per_case[pat]['percentile_00_5'] = local_props[i][6]

                results[mod_id]['local_props'] = props_per_case
                results[mod_id]['median'] = median
                results[mod_id]['mean'] = mean
                results[mod_id]['sd'] = sd
                results[mod_id]['mn'] = mn
                results[mod_id]['mx'] = mx
                results[mod_id]['percentile_99_5'] = percentile_99_5
                results[mod_id]['percentile_00_5'] = percentile_00_5

            p.close()
            p.join()
            save_pickle(results, self.intensityproperties_file)
        else:
            results = load_pickle(self.intensityproperties_file)
        return results

    def analyze_dataset(self, collect_intensityproperties=True):
        # get all spacings and sizes
        sizes, spacings = self.get_sizes_and_spacings_after_cropping()

        # get all classes and what classes are in what patients
        # class min size
        # region size per class
        classes = self.get_classes()
        all_classes = [int(i) for i in classes.keys() if int(i) > 0]

        # modalities
        modalities = self.get_modalities()

        # collect intensity information
        if collect_intensityproperties:
            intensityproperties = self.collect_intensity_properties(len(modalities))
        else:
            intensityproperties = None

        # size reduction by cropping
        size_reductions = self.get_size_reduction_by_cropping()

        dataset_properties = dict()
        dataset_properties['all_sizes'] = sizes
        dataset_properties['all_spacings'] = spacings
        dataset_properties['all_classes'] = all_classes
        dataset_properties['modalities'] = modalities  # {idx: modality name}
        dataset_properties['intensityproperties'] = intensityproperties
        dataset_properties['size_reductions'] = size_reductions  # {patient_id: size_reduction}

        save_pickle(dataset_properties, join(self.folder_with_cropped_data, "dataset_properties.pkl"))
        return dataset_properties
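
The intensity collection above reduces to: mask the image with the segmentation, subsample the foreground voxels, and take robust summary statistics. A minimal standalone sketch of that computation (the function name `foreground_intensity_stats` and the synthetic arrays are illustrative, not part of this repository):

```python
import numpy as np


def foreground_intensity_stats(image, seg, subsample=10):
    """Summary statistics over foreground voxels, mirroring
    DatasetAnalyzer._get_voxels_in_foreground + _compute_stats.

    image: ndarray of intensities; seg: ndarray of labels (>0 = foreground).
    Taking every `subsample`-th voxel keeps the estimate cheap on large volumes.
    """
    voxels = image[seg > 0][::subsample]
    if voxels.size == 0:
        return None  # the real code returns a tuple of NaNs here
    return {
        'median': np.median(voxels),
        'mean': np.mean(voxels),
        'sd': np.std(voxels),
        'mn': np.min(voxels),
        'mx': np.max(voxels),
        'percentile_99_5': np.percentile(voxels, 99.5),
        'percentile_00_5': np.percentile(voxels, 0.5),
    }
```

These per-modality statistics are what nnUNet-style pipelines later use for intensity normalization (e.g. clipping CT values to the 0.5/99.5 percentiles before z-scoring).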


================================================
FILE: unetr_pp/experiment_planning/__init__.py
================================================
from __future__ import absolute_import
from . import *

================================================
FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_11GB.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

from copy import deepcopy

import numpy as np
from unetr_pp.experiment_planning.experiment_planner_baseline_3DUNet_v21 import \
    ExperimentPlanner3D_v21
from unetr_pp.experiment_planning.common_utils import get_pool_and_conv_props
from unetr_pp.network_architecture.generic_UNet import Generic_UNet
from unetr_pp.paths import *


class ExperimentPlanner3D_v21_11GB(ExperimentPlanner3D_v21):
    """
    Same as ExperimentPlanner3D_v21, but designed to fill an RTX 2080 Ti (11GB) in fp16
    """
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super(ExperimentPlanner3D_v21_11GB, self).__init__(folder_with_cropped_data, preprocessed_output_folder)
        self.data_identifier = "nnFormerData_plans_v2.1_big"
        self.plans_fname = join(self.preprocessed_output_folder,
                                "nnFormerPlansv2.1_big_plans_3D.pkl")

    def get_properties_for_stage(self, current_spacing, original_spacing, original_shape, num_cases,
                                 num_modalities, num_classes):
        """
        We need to adapt ref
        """
        new_median_shape = np.round(original_spacing / current_spacing * original_shape).astype(int)
        dataset_num_voxels = np.prod(new_median_shape) * num_cases

        # the next line is what we had before as a default. The patch size had the same aspect ratio as the median shape of a patient. We swapped t
        # input_patch_size = new_median_shape

        # compute how many voxels are one mm
        input_patch_size = 1 / np.array(current_spacing)

        # normalize voxels per mm
        input_patch_size /= input_patch_size.mean()

        # create an isotropic patch of size 512x512x512mm
        input_patch_size *= 1 / min(input_patch_size) * 512  # to get a starting value
        input_patch_size = np.round(input_patch_size).astype(int)

        # clip it to the median shape of the dataset because patches larger than that don't make much sense
        input_patch_size = [min(i, j) for i, j in zip(input_patch_size, new_median_shape)]

        network_num_pool_per_axis, pool_op_kernel_sizes, conv_kernel_sizes, new_shp, \
        shape_must_be_divisible_by = get_pool_and_conv_props(current_spacing, input_patch_size,
                                                             self.unet_featuremap_min_edge_length,
                                                             self.unet_max_numpool)
        #     use_this_for_batch_size_computation_3D = 520000000 # 505789440
        # typical ExperimentPlanner3D_v21 configurations use 7.5GB, but on a 2080ti we have 11. Allow for more space
        # to be used
        ref = Generic_UNet.use_this_for_batch_size_computation_3D * 11 / 8
        here = Generic_UNet.compute_approx_vram_consumption(new_shp, network_num_pool_per_axis,
                                                            self.unet_base_num_features,
                                                            self.unet_max_num_filters, num_modalities,
                                                            num_classes,
                                                            pool_op_kernel_sizes, conv_per_stage=self.conv_per_stage)
        while here > ref:
            axis_to_be_reduced = np.argsort(new_shp / new_median_shape)[-1]

            tmp = deepcopy(new_shp)
            tmp[axis_to_be_reduced] -= shape_must_be_divisible_by[axis_to_be_reduced]
            _, _, _, _, shape_must_be_divisible_by_new = \
                get_pool_and_conv_props(current_spacing, tmp,
                                        self.unet_featuremap_min_edge_length,
                                        self.unet_max_numpool,
                                        )
            new_shp[axis_to_be_reduced] -= shape_must_be_divisible_by_new[axis_to_be_reduced]

            # we have to recompute numpool now:
            network_num_pool_per_axis, pool_op_kernel_sizes, conv_kernel_sizes, new_shp, \
            shape_must_be_divisible_by = get_pool_and_conv_props(current_spacing, new_shp,
                                                                 self.unet_featuremap_min_edge_length,
                                                                 self.unet_max_numpool,
                                                                 )

            here = Generic_UNet.compute_approx_vram_consumption(new_shp, network_num_pool_per_axis,
                                                                self.unet_base_num_features,
                                                                self.unet_max_num_filters, num_modalities,
                                                                num_classes, pool_op_kernel_sizes,
                                                                conv_per_stage=self.conv_per_stage)
            # print(new_shp)

        input_patch_size = new_shp

        batch_size = Generic_UNet.DEFAULT_BATCH_SIZE_3D  # This is what works with 128**3
        batch_size = int(np.floor(max(ref / here, 1) * batch_size))

        # check if batch size is too large
        max_batch_size = np.round(self.batch_size_covers_max_percent_of_dataset * dataset_num_voxels /
                                  np.prod(input_patch_size, dtype=np.int64)).astype(int)
        max_batch_size = max(max_batch_size, self.unet_min_batch_size)
        batch_size = max(1, min(batch_size, max_batch_size))

        do_dummy_2D_data_aug = (max(input_patch_size) / input_patch_size[0]) > self.anisotropy_threshold

        plan = {
            'batch_size': batch_size,
            'num_pool_per_axis': network_num_pool_per_axis,
            'patch_size': input_patch_size,
            'median_patient_size_in_voxels': new_median_shape,
            'current_spacing': current_spacing,
            'original_spacing': original_spacing,
            'do_dummy_2D_data_aug': do_dummy_2D_data_aug,
            'pool_op_kernel_sizes': pool_op_kernel_sizes,
            'conv_kernel_sizes': conv_kernel_sizes,
        }
        return plan
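
The while-loop above shrinks the patch size axis by axis until the estimated VRAM consumption fits the budget: each iteration picks the axis that is largest relative to the dataset's median shape and reduces it by that axis's divisibility constraint. A toy, self-contained version of that control flow (using plain voxel count as a stand-in for `compute_approx_vram_consumption`, and keeping the divisibility step fixed, whereas the real planner recomputes it each iteration; the name `shrink_patch_to_budget` is illustrative):

```python
import numpy as np


def shrink_patch_to_budget(patch_size, median_shape, divisible_by, budget_voxels):
    """Reduce the patch axis-by-axis until its cost fits the budget.

    Mirrors the planner's while-loop: the axis largest relative to the median
    patient shape is reduced by its divisibility step. Cost here is simply
    np.prod(patch), standing in for the planner's VRAM estimate.
    """
    patch = np.array(patch_size, dtype=np.int64)
    median = np.array(median_shape, dtype=np.float64)
    step = np.array(divisible_by, dtype=np.int64)
    while np.prod(patch) > budget_voxels:
        # axis with the largest patch extent relative to the median shape
        axis = int(np.argsort(patch / median)[-1])
        patch[axis] -= step[axis]
        assert patch[axis] > 0, "budget too small for the divisibility constraints"
    return patch
```

For example, with a 128x128x128 patch, a median shape of 160x160x80 and steps of 16, the third axis (proportionally the largest) is reduced first, just as in the planner.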



================================================
FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_16GB.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

from copy import deepcopy

import numpy as np
from unetr_pp.experiment_planning.experiment_planner_baseline_3DUNet_v21 import \
    ExperimentPlanner3D_v21
from unetr_pp.experiment_planning.common_utils import get_pool_and_conv_props
from unetr_pp.network_architecture.generic_UNet import Generic_UNet
from unetr_pp.paths import *


class ExperimentPlanner3D_v21_16GB(ExperimentPlanner3D_v21):
    """
    Same as ExperimentPlanner3D_v21, but designed to fill 16GB in fp16
    """
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super(ExperimentPlanner3D_v21_16GB, self).__init__(folder_with_cropped_data, preprocessed_output_folder)
        self.data_identifier = "nnFormerData_plans_v2.1_16GB"
        self.plans_fname = join(self.preprocessed_output_folder,
                                "nnFormerPlansv2.1_16GB_plans_3D.pkl")

    def get_properties_for_stage(self, current_spacing, original_spacing, original_shape, num_cases,
                                 num_modalities, num_classes):
        """
        We need to adapt ref
        """
        new_median_shape = np.round(original_spacing / current_spacing * original_shape).astype(int)
        dataset_num_voxels = np.prod(new_median_shape) * num_cases

        # the next line is what we had before as a default. The patch size had the same aspect ratio as the median shape of a patient. We swapped t
        # input_patch_size = new_median_shape

        # compute how many voxels are one mm
        input_patch_size = 1 / np.array(current_spacing)

        # normalize voxels per mm
        input_patch_size /= input_patch_size.mean()

        # create an isotropic patch of size 512x512x512mm
        input_patch_size *= 1 / min(input_patch_size) * 512  # to get a starting value
        input_patch_size = np.round(input_patch_size).astype(int)

        # clip it to the median shape of the dataset because patches larger than that don't make much sense
        input_patch_size = [min(i, j) for i, j in zip(input_patch_size, new_median_shape)]

        network_num_pool_per_axis, pool_op_kernel_sizes, conv_kernel_sizes, new_shp, \
        shape_must_be_divisible_by = get_pool_and_conv_props(current_spacing, input_patch_size,
                                                             self.unet_featuremap_min_edge_length,
                                                             self.unet_max_numpool)
        #     use_this_for_batch_size_computation_3D = 520000000 # 505789440
        # typical ExperimentPlanner3D_v21 configurations use about 8.5GB, but here we target a 16GB card.
        # Allow for more space to be used
        ref = Generic_UNet.use_this_for_batch_size_computation_3D * 16 / 8.5
        here = Generic_UNet.compute_approx_vram_consumption(new_shp, network_num_pool_per_axis,
                                                            self.unet_base_num_features,
                                                            self.unet_max_num_filters, num_modalities,
                                                            num_classes,
                                                            pool_op_kernel_sizes, conv_per_stage=self.conv_per_stage)
        while here > ref:
            axis_to_be_reduced = np.argsort(new_shp / new_median_shape)[-1]

            tmp = deepcopy(new_shp)
            tmp[axis_to_be_reduced] -= shape_must_be_divisible_by[axis_to_be_reduced]
            _, _, _, _, shape_must_be_divisible_by_new = \
                get_pool_and_conv_props(current_spacing, tmp,
                                        self.unet_featuremap_min_edge_length,
                                        self.unet_max_numpool,
                                        )
            new_shp[axis_to_be_reduced] -= shape_must_be_divisible_by_new[axis_to_be_reduced]

            # we have to recompute numpool now:
            network_num_pool_per_axis, pool_op_kernel_sizes, conv_kernel_sizes, new_shp, \
            shape_must_be_divisible_by = get_pool_and_conv_props(current_spacing, new_shp,
                                                                 self.unet_featuremap_min_edge_length,
                                                                 self.unet_max_numpool,
                                                                 )

            here = Generic_UNet.compute_approx_vram_consumption(new_shp, network_num_pool_per_axis,
                                                                self.unet_base_num_features,
                                                                self.unet_max_num_filters, num_modalities,
                                                                num_classes, pool_op_kernel_sizes,
                                                                conv_per_stage=self.conv_per_stage)
            # print(new_shp)

        input_patch_size = new_shp

        batch_size = Generic_UNet.DEFAULT_BATCH_SIZE_3D  # This is what works with 128**3
        batch_size = int(np.floor(max(ref / here, 1) * batch_size))

        # check if batch size is too large
        max_batch_size = np.round(self.batch_size_covers_max_percent_of_dataset * dataset_num_voxels /
                                  np.prod(input_patch_size, dtype=np.int64)).astype(int)
        max_batch_size = max(max_batch_size, self.unet_min_batch_size)
        batch_size = max(1, min(batch_size, max_batch_size))

        do_dummy_2D_data_aug = (max(input_patch_size) / input_patch_size[0]) > self.anisotropy_threshold

        plan = {
            'batch_size': batch_size,
            'num_pool_per_axis': network_num_pool_per_axis,
            'patch_size': input_patch_size,
            'median_patient_size_in_voxels': new_median_shape,
            'current_spacing': current_spacing,
            'original_spacing': original_spacing,
            'do_dummy_2D_data_aug': do_dummy_2D_data_aug,
            'pool_op_kernel_sizes': pool_op_kernel_sizes,
            'conv_kernel_sizes': conv_kernel_sizes,
        }
        return plan



================================================
FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_32GB.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

from copy import deepcopy

import numpy as np
from unetr_pp.experiment_planning.experiment_planner_baseline_3DUNet_v21 import \
    ExperimentPlanner3D_v21
from unetr_pp.experiment_planning.common_utils import get_pool_and_conv_props
from unetr_pp.network_architecture.generic_UNet import Generic_UNet
from unetr_pp.paths import *


class ExperimentPlanner3D_v21_32GB(ExperimentPlanner3D_v21):
    """
    Same as ExperimentPlanner3D_v21, but designed to fill a V100 (32GB) in fp16
    """
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super(ExperimentPlanner3D_v21_32GB, self).__init__(folder_with_cropped_data, preprocessed_output_folder)
        self.data_identifier = "nnFormerData_plans_v2.1_verybig"
        self.plans_fname = join(self.preprocessed_output_folder,
                                "nnFormerPlansv2.1_verybig_plans_3D.pkl")

    def get_properties_for_stage(self, current_spacing, original_spacing, original_shape, num_cases,
                                 num_modalities, num_classes):
        """
        We need to adapt ref
        """
        new_median_shape = np.round(original_spacing / current_spacing * original_shape).astype(int)
        dataset_num_voxels = np.prod(new_median_shape) * num_cases

        # the next line is what we had before as a default. The patch size had the same aspect ratio as the median shape of a patient. We swapped t
        # input_patch_size = new_median_shape

        # compute how many voxels are one mm
        input_patch_size = 1 / np.array(current_spacing)

        # normalize voxels per mm
        input_patch_size /= input_patch_size.mean()

        # create an isotropic patch of size 512x512x512mm
        input_patch_size *= 1 / min(input_patch_size) * 512  # to get a starting value
        input_patch_size = np.round(input_patch_size).astype(int)

        # clip it to the median shape of the dataset because patches larger than that don't make much sense
        input_patch_size = [min(i, j) for i, j in zip(input_patch_size, new_median_shape)]

        network_num_pool_per_axis, pool_op_kernel_sizes, conv_kernel_sizes, new_shp, \
        shape_must_be_divisible_by = get_pool_and_conv_props(current_spacing, input_patch_size,
                                                             self.unet_featuremap_min_edge_length,
                                                             self.unet_max_numpool)
        #     use_this_for_batch_size_computation_3D = 520000000 # 505789440
        # typical ExperimentPlanner3D_v21 configurations use 7.5GB, but on a V100 we have 32. Allow for more space
        # to be used
        ref = Generic_UNet.use_this_for_batch_size_computation_3D * 32 / 8
        here = Generic_UNet.compute_approx_vram_consumption(new_shp, network_num_pool_per_axis,
                                                            self.unet_base_num_features,
                                                            self.unet_max_num_filters, num_modalities,
                                                            num_classes,
                                                            pool_op_kernel_sizes, conv_per_stage=self.conv_per_stage)
        while here > ref:
            axis_to_be_reduced = np.argsort(new_shp / new_median_shape)[-1]

            tmp = deepcopy(new_shp)
            tmp[axis_to_be_reduced] -= shape_must_be_divisible_by[axis_to_be_reduced]
            _, _, _, _, shape_must_be_divisible_by_new = \
                get_pool_and_conv_props(current_spacing, tmp,
                                        self.unet_featuremap_min_edge_length,
                                        self.unet_max_numpool,
                                        )
            new_shp[axis_to_be_reduced] -= shape_must_be_divisible_by_new[axis_to_be_reduced]

            # we have to recompute numpool now:
            network_num_pool_per_axis, pool_op_kernel_sizes, conv_kernel_sizes, new_shp, \
            shape_must_be_divisible_by = get_pool_and_conv_props(current_spacing, new_shp,
                                                                 self.unet_featuremap_min_edge_length,
                                                                 self.unet_max_numpool,
                                                                 )

            here = Generic_UNet.compute_approx_vram_consumption(new_shp, network_num_pool_per_axis,
                                                                self.unet_base_num_features,
                                                                self.unet_max_num_filters, num_modalities,
                                                                num_classes, pool_op_kernel_sizes,
                                                                conv_per_stage=self.conv_per_stage)
            # print(new_shp)
        input_patch_size = new_shp

        batch_size = Generic_UNet.DEFAULT_BATCH_SIZE_3D  # This is what works with 128**3
        batch_size = int(np.floor(max(ref / here, 1) * batch_size))

        # check if batch size is too large
        max_batch_size = np.round(self.batch_size_covers_max_percent_of_dataset * dataset_num_voxels /
                                  np.prod(input_patch_size, dtype=np.int64)).astype(int)
        max_batch_size = max(max_batch_size, self.unet_min_batch_size)
        batch_size = max(1, min(batch_size, max_batch_size))

        do_dummy_2D_data_aug = (max(input_patch_size) / input_patch_size[0]) > self.anisotropy_threshold

        plan = {
            'batch_size': batch_size,
            'num_pool_per_axis': network_num_pool_per_axis,
            'patch_size': input_patch_size,
            'median_patient_size_in_voxels': new_median_shape,
            'current_spacing': current_spacing,
            'original_spacing': original_spacing,
            'do_dummy_2D_data_aug': do_dummy_2D_data_aug,
            'pool_op_kernel_sizes': pool_op_kernel_sizes,
            'conv_kernel_sizes': conv_kernel_sizes,
        }
        return plan
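
After the patch size is fixed, the batch size is scaled by the leftover VRAM headroom (`ref / here`) and then capped so that a single batch never covers more than a fixed fraction of the dataset. A minimal sketch of that arithmetic (the defaults `default_bs`/`min_bs`/`max_dataset_fraction` stand in for, but are not literally, `Generic_UNet.DEFAULT_BATCH_SIZE_3D`, `self.unet_min_batch_size` and `self.batch_size_covers_max_percent_of_dataset`):

```python
import numpy as np


def plan_batch_size(ref, here, patch_size, dataset_num_voxels,
                    default_bs=2, min_bs=2, max_dataset_fraction=0.05):
    """Scale the default batch size by VRAM headroom, then cap it.

    ref / here is the VRAM budget divided by the estimated consumption of one
    batch; max_dataset_fraction caps how much of the dataset a single batch
    may cover, so large patches on small datasets don't get huge batches.
    """
    bs = int(np.floor(max(ref / here, 1) * default_bs))
    max_bs = int(np.round(max_dataset_fraction * dataset_num_voxels /
                          np.prod(patch_size, dtype=np.int64)))
    max_bs = max(max_bs, min_bs)
    return max(1, min(bs, max_bs))
```

The dataset-fraction cap is why these planners can report a batch size smaller than the headroom alone would allow.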


================================================
FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_3convperstage.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

from copy import deepcopy

import numpy as np
from unetr_pp.experiment_planning.common_utils import get_pool_and_conv_props
from unetr_pp.experiment_planning.experiment_planner_baseline_3DUNet import ExperimentPlanner
from unetr_pp.experiment_planning.experiment_planner_baseline_3DUNet_v21 import ExperimentPlanner3D_v21
from unetr_pp.network_architecture.generic_UNet import Generic_UNet
from unetr_pp.paths import *


class ExperimentPlanner3D_v21_3cps(ExperimentPlanner3D_v21):
    """
    Uses three conv-in-lrelu blocks per resolution instead of two, while staying within the same memory budget.

    This only works with 3d fullres because we reuse the same data as ExperimentPlanner3D_v21. Lowres would
    require rerunning preprocessing (different patch size = different 3d lowres target spacing)
    """
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super(ExperimentPlanner3D_v21_3cps, self).__init__(folder_with_cropped_data, preprocessed_output_folder)
        self.plans_fname = join(self.preprocessed_output_folder,
                                "nnFormerPlansv2.1_3cps_plans_3D.pkl")
        self.unet_base_num_features = 32
        self.conv_per_stage = 3

    def run_preprocessing(self, num_threads):
        pass


================================================
FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v22.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

import numpy as np
from unetr_pp.experiment_planning.experiment_planner_baseline_3DUNet_v21 import \
    ExperimentPlanner3D_v21
from unetr_pp.paths import *


class ExperimentPlanner3D_v22(ExperimentPlanner3D_v21):
    """
    """
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super().__init__(folder_with_cropped_data, preprocessed_output_folder)
        self.data_identifier = "nnFormerData_plans_v2.2"
        self.plans_fname = join(self.preprocessed_output_folder,
                                "nnFormerPlansv2.2_plans_3D.pkl")

    def get_target_spacing(self):
        spacings = self.dataset_properties['all_spacings']
        sizes = self.dataset_properties['all_sizes']

        target = np.percentile(np.vstack(spacings), self.target_spacing_percentile, 0)
        target_size = np.percentile(np.vstack(sizes), self.target_spacing_percentile, 0)
        target_size_mm = np.array(target) * np.array(target_size)
        # we need to identify datasets for which a different target spacing could be beneficial. These datasets have
        # the following properties:
        # - one axis with much lower resolution than the others
        # - the low-res axis has far fewer voxels than the others
        # - (the size in mm of the low-res axis is also reduced)
        worst_spacing_axis = np.argmax(target)
        other_axes = [i for i in range(len(target)) if i != worst_spacing_axis]
        other_spacings = [target[i] for i in other_axes]
        other_sizes = [target_size[i] for i in other_axes]

        has_aniso_spacing = target[worst_spacing_axis] > (self.anisotropy_threshold * max(other_spacings))
        has_aniso_voxels = target_size[worst_spacing_axis] * self.anisotropy_threshold < min(other_sizes)
        # we don't use the last one for now
        #median_size_in_mm = target[target_size_mm] * RESAMPLING_SEPARATE_Z_ANISOTROPY_THRESHOLD < max(target_size_mm)

        if has_aniso_spacing and has_aniso_voxels:
            spacings_of_that_axis = np.vstack(spacings)[:, worst_spacing_axis]
            target_spacing_of_that_axis = np.percentile(spacings_of_that_axis, 10)
            # don't let the target spacing of that axis drop below self.anisotropy_threshold * the largest of the other spacings
            target_spacing_of_that_axis = max(max(other_spacings) * self.anisotropy_threshold, target_spacing_of_that_axis)
            target[worst_spacing_axis] = target_spacing_of_that_axis
        return target
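The heuristic above can be exercised in isolation. Below is a minimal sketch under stated assumptions: `pick_target_spacing` is a hypothetical name, and the voxel-count criterion (`has_aniso_voxels`) is omitted for brevity, so it is not a drop-in replacement for the method:

```python
import numpy as np

def pick_target_spacing(spacings, percentile=50, anisotropy_threshold=3):
    # Percentile spacing over the dataset, then relax the most anisotropic
    # axis to its optimistic 10th-percentile spacing, floored at
    # anisotropy_threshold * the largest of the other axes.
    spacings = np.vstack(spacings)
    target = np.percentile(spacings, percentile, 0)
    worst = int(np.argmax(target))
    others = np.delete(target, worst)
    if target[worst] > anisotropy_threshold * others.max():
        optimistic = np.percentile(spacings[:, worst], 10)
        target[worst] = max(others.max() * anisotropy_threshold, optimistic)
    return target
```

For spacings [[5,1,1],[4,1,1],[6,1,1]] the first axis is relaxed from the median 5 mm to the 10th-percentile 4.2 mm, while the isotropic axes keep their median of 1 mm.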



================================================
FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v23.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

from unetr_pp.experiment_planning.experiment_planner_baseline_3DUNet_v21 import \
    ExperimentPlanner3D_v21
from unetr_pp.paths import *


class ExperimentPlanner3D_v23(ExperimentPlanner3D_v21):
    """
    """
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super(ExperimentPlanner3D_v23, self).__init__(folder_with_cropped_data, preprocessed_output_folder)
        self.data_identifier = "nnFormerData_plans_v2.3"
        self.plans_fname = join(self.preprocessed_output_folder,
                                "nnFormerPlansv2.3_plans_3D.pkl")
        self.preprocessor_name = "Preprocessor3DDifferentResampling"


================================================
FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_residual_3DUNet_v21.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.


from copy import deepcopy

import numpy as np
from unetr_pp.experiment_planning.experiment_planner_baseline_3DUNet_v21 import \
    ExperimentPlanner3D_v21
from unetr_pp.experiment_planning.common_utils import get_pool_and_conv_props
from unetr_pp.paths import *
from unetr_pp.network_architecture.generic_modular_residual_UNet import FabiansUNet


class ExperimentPlanner3DFabiansResUNet_v21(ExperimentPlanner3D_v21):
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super(ExperimentPlanner3DFabiansResUNet_v21, self).__init__(folder_with_cropped_data, preprocessed_output_folder)
        self.data_identifier = "nnFormerData_plans_v2.1"  # "nnFormerData_FabiansResUNet_v2.1"
        self.plans_fname = join(self.preprocessed_output_folder,
                                "nnFormerPlans_FabiansResUNet_v2.1_plans_3D.pkl")

    def get_properties_for_stage(self, current_spacing, original_spacing, original_shape, num_cases,
                                 num_modalities, num_classes):
        """
        We use FabiansUNet instead of Generic_UNet
        """
        new_median_shape = np.round(original_spacing / current_spacing * original_shape).astype(int)
        dataset_num_voxels = np.prod(new_median_shape) * num_cases

        # the next line is what we had before as a default. The patch size had the same aspect ratio as the median shape of a patient. We swapped t
        # input_patch_size = new_median_shape

        # compute how many voxels are one mm
        input_patch_size = 1 / np.array(current_spacing)

        # normalize voxels per mm
        input_patch_size /= input_patch_size.mean()

        # create an isotropic patch of size 512x512x512mm
        input_patch_size *= 1 / min(input_patch_size) * 512  # to get a starting value
        input_patch_size = np.round(input_patch_size).astype(int)

        # clip it to the median shape of the dataset because patches larger than that do not make much sense
        input_patch_size = [min(i, j) for i, j in zip(input_patch_size, new_median_shape)]

        network_num_pool_per_axis, pool_op_kernel_sizes, conv_kernel_sizes, new_shp, \
        shape_must_be_divisible_by = get_pool_and_conv_props(current_spacing, input_patch_size,
                                                             self.unet_featuremap_min_edge_length,
                                                             self.unet_max_numpool)
        pool_op_kernel_sizes = [[1, 1, 1]] + pool_op_kernel_sizes
        blocks_per_stage_encoder = FabiansUNet.default_blocks_per_stage_encoder[:len(pool_op_kernel_sizes)]
        blocks_per_stage_decoder = FabiansUNet.default_blocks_per_stage_decoder[:len(pool_op_kernel_sizes) - 1]

        ref = FabiansUNet.use_this_for_3D_configuration
        here = FabiansUNet.compute_approx_vram_consumption(input_patch_size, self.unet_base_num_features,
                                                           self.unet_max_num_filters, num_modalities, num_classes,
                                                           pool_op_kernel_sizes, blocks_per_stage_encoder,
                                                           blocks_per_stage_decoder, 2, self.unet_min_batch_size,)
        while here > ref:
            axis_to_be_reduced = np.argsort(new_shp / new_median_shape)[-1]

            tmp = deepcopy(new_shp)
            tmp[axis_to_be_reduced] -= shape_must_be_divisible_by[axis_to_be_reduced]
            _, _, _, _, shape_must_be_divisible_by_new = \
                get_pool_and_conv_props(current_spacing, tmp,
                                        self.unet_featuremap_min_edge_length,
                                        self.unet_max_numpool,
                                        )
            new_shp[axis_to_be_reduced] -= shape_must_be_divisible_by_new[axis_to_be_reduced]

            # we have to recompute numpool now:
            network_num_pool_per_axis, pool_op_kernel_sizes, conv_kernel_sizes, new_shp, \
            shape_must_be_divisible_by = get_pool_and_conv_props(current_spacing, new_shp,
                                                                 self.unet_featuremap_min_edge_length,
                                                                 self.unet_max_numpool,
                                                                 )
            pool_op_kernel_sizes = [[1, 1, 1]] + pool_op_kernel_sizes
            blocks_per_stage_encoder = FabiansUNet.default_blocks_per_stage_encoder[:len(pool_op_kernel_sizes)]
            blocks_per_stage_decoder = FabiansUNet.default_blocks_per_stage_decoder[:len(pool_op_kernel_sizes) - 1]
            here = FabiansUNet.compute_approx_vram_consumption(new_shp, self.unet_base_num_features,
                                                               self.unet_max_num_filters, num_modalities, num_classes,
                                                               pool_op_kernel_sizes, blocks_per_stage_encoder,
                                                               blocks_per_stage_decoder, 2, self.unet_min_batch_size)
        input_patch_size = new_shp

        batch_size = FabiansUNet.default_min_batch_size
        batch_size = int(np.floor(max(ref / here, 1) * batch_size))

        # check if batch size is too large
        max_batch_size = np.round(self.batch_size_covers_max_percent_of_dataset * dataset_num_voxels /
                                  np.prod(input_patch_size, dtype=np.int64)).astype(int)
        max_batch_size = max(max_batch_size, self.unet_min_batch_size)
        batch_size = max(1, min(batch_size, max_batch_size))

        do_dummy_2D_data_aug = (max(input_patch_size) / input_patch_size[0]) > self.anisotropy_threshold

        plan = {
            'batch_size': batch_size,
            'num_pool_per_axis': network_num_pool_per_axis,
            'patch_size': input_patch_size,
            'median_patient_size_in_voxels': new_median_shape,
            'current_spacing': current_spacing,
            'original_spacing': original_spacing,
            'do_dummy_2D_data_aug': do_dummy_2D_data_aug,
            'pool_op_kernel_sizes': pool_op_kernel_sizes,
            'conv_kernel_sizes': conv_kernel_sizes,
            'num_blocks_encoder': blocks_per_stage_encoder,
            'num_blocks_decoder': blocks_per_stage_decoder
        }
        return plan

    def run_preprocessing(self, num_threads):
        """
        On all datasets except 3d_fullres on Spleen, the preprocessed data would look identical to
        ExperimentPlanner3D_v21 (I tested decathlon data only). Therefore we just reuse the preprocessed data of
        that planner.
        :param num_threads:
        :return:
        """
        pass
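The while-loop in get_properties_for_stage above can be summarized as: shrink the patch axis that is largest relative to the median patient shape until the estimated VRAM cost fits the reference budget. A miniature sketch, with hypothetical names and a fixed shrink step instead of the per-axis shape_must_be_divisible_by values, and a caller-supplied cost_fn standing in for FabiansUNet.compute_approx_vram_consumption:

```python
import numpy as np

def shrink_patch_to_budget(patch, median_shape, cost_fn, budget, step=16):
    # While the estimated cost exceeds the budget, shrink the axis that is
    # largest relative to the median patient shape (same axis-selection rule
    # as np.argsort(new_shp / new_median_shape)[-1] above).
    patch = np.array(patch, dtype=np.int64)
    median_shape = np.array(median_shape, dtype=np.float64)
    while cost_fn(patch) > budget:
        axis = int(np.argsort(patch / median_shape)[-1])
        if patch[axis] <= step:
            raise RuntimeError("patch cannot be shrunk to fit the budget")
        patch[axis] -= step
    return patch
```

With cost_fn=np.prod (voxel count as a crude memory proxy), a [64, 32] patch against a [32, 32] median shape shrinks its oversized first axis until the budget is met.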


================================================
FILE: unetr_pp/experiment_planning/alternative_experiment_planning/normalization/experiment_planner_2DUNet_v21_RGB_scaleto_0_1.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.


from unetr_pp.experiment_planning.experiment_planner_baseline_2DUNet_v21 import ExperimentPlanner2D_v21
from unetr_pp.paths import *


class ExperimentPlanner2D_v21_RGB_scaleTo_0_1(ExperimentPlanner2D_v21):
    """
    used by tutorial unetr_pp.tutorials.custom_preprocessing
    """
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super().__init__(folder_with_cropped_data, preprocessed_output_folder)
        self.data_identifier = "nnFormer_RGB_scaleTo_0_1"
        self.plans_fname = join(self.preprocessed_output_folder, "nnFormer_RGB_scaleTo_0_1" + "_plans_2D.pkl")

        # The custom preprocessor class we intend to use is GenericPreprocessor_scale_uint8_to_0_1. It must be located
        # in unetr_pp.preprocessing (any file and submodule) and will be found by its name. Make sure to always define
        # unique names!
        self.preprocessor_name = 'GenericPreprocessor_scale_uint8_to_0_1'
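The comment above says the preprocessor is found by its class name anywhere under unetr_pp.preprocessing, which is why the names must be unique. An illustrative sketch of such a name-based lookup (this is not the repo's actual helper, just the general mechanism it relies on):

```python
import importlib
import pkgutil

def find_class_by_name(package_name, class_name):
    # Resolve a class by name: check the package itself, then walk all of
    # its submodules and return the first attribute matching class_name.
    pkg = importlib.import_module(package_name)
    if hasattr(pkg, class_name):
        return getattr(pkg, class_name)
    for _, mod_name, _ in pkgutil.walk_packages(pkg.__path__, pkg.__name__ + "."):
        mod = importlib.import_module(mod_name)
        if hasattr(mod, class_name):
            return getattr(mod, class_name)
    return None
```

Because the search returns the first match, two preprocessors with the same class name in different files would silently shadow each other.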


================================================
FILE: unetr_pp/experiment_planning/alternative_experiment_planning/normalization/experiment_planner_3DUNet_CT2.py
================================================
#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
#
#    Licensed under the Apache License, Version 2.0 (the "License");
#    you may not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.


from collections import OrderedDict

from unetr_pp.experiment_planning.experiment_planner_baseline_3DUNet import ExperimentPlanner
from unetr_pp.paths import *


class ExperimentPlannerCT2(ExperimentPlanner):
    """
    preprocesses CT data with the "CT2" normalization.

    (the clip range comes from the training set and is the 0.5 and 99.5 percentile of foreground intensities)
    CT  = clip to range, then normalize with the global mean and std (computed on the training-set foreground)
    CT2 = clip to range, then normalize each case separately with its own mean and std (computed within the area that was inside the clip range)
    """
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super(ExperimentPlannerCT2, self).__init__(folder_with_cropped_data, preprocessed_output_folder)
        self.data_identifier = "nnFormer_CT2"
        self.plans_fname = join(self.preprocessed_output_folder, "nnFormerPlans" + "CT2_plans_3D.pkl")

    def determine_normalization_scheme(self):
        schemes = OrderedDict()
        modalities = self.dataset_properties['modalities']
        num_modalities = len(list(modalities.keys()))

        for i in range(num_modalities):
            if modalities[i] == "CT":
                schemes[i] = "CT2"
            else:
                schemes[i] = "nonCT"
        return schemes
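The "CT2" scheme from the class docstring can be sketched as a standalone function. This is a hypothetical helper for illustration, not the repo's preprocessor: the per-case mean/std are computed only over voxels that lie inside the clip range:

```python
import numpy as np

def normalize_ct2(img, lower, upper):
    # Mask the voxels inside the clip range *before* clipping, clip the
    # image, then z-score the whole case with the masked mean/std.
    img = img.astype(np.float32)
    mask = (img > lower) & (img < upper)
    img = np.clip(img, lower, upper)
    mn, sd = img[mask].mean(), img[mask].std()
    return (img - mn) / max(sd, 1e-8)
```

Unlike plain "CT" normalization, the statistics here are per case, so two scans of the same patient taken on different scanners are normalized independently.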


SYMBOL INDEX (815 symbols across 112 files)

FILE: unetr_pp/evaluation/add_mean_dice_to_json.py
  function foreground_mean (line 21) | def foreground_mean(filename):
  function run_in_folder (line 42) | def run_in_folder(folder):

FILE: unetr_pp/evaluation/collect_results_files.py
  function crawl_and_copy (line 20) | def crawl_and_copy(current_folder, out_folder, prefix="fabian_", suffix=...

FILE: unetr_pp/evaluation/evaluator.py
  class Evaluator (line 30) | class Evaluator:
    method __init__ (line 60) | def __init__(self,
    method set_test (line 99) | def set_test(self, test):
    method set_reference (line 104) | def set_reference(self, reference):
    method set_labels (line 109) | def set_labels(self, labels):
    method construct_labels (line 125) | def construct_labels(self):
    method set_metrics (line 137) | def set_metrics(self, metrics):
    method add_metric (line 147) | def add_metric(self, metric):
    method evaluate (line 152) | def evaluate(self, test=None, reference=None, advanced=False, **metric...
    method to_dict (line 227) | def to_dict(self):
    method to_array (line 233) | def to_array(self):
    method to_pandas (line 254) | def to_pandas(self):
  class NiftiEvaluator (line 269) | class NiftiEvaluator(Evaluator):
    method __init__ (line 271) | def __init__(self, *args, **kwargs):
    method set_test (line 277) | def set_test(self, test):
    method set_reference (line 287) | def set_reference(self, reference):
    method evaluate (line 297) | def evaluate(self, test=None, reference=None, voxel_spacing=None, **me...
  function run_evaluation (line 306) | def run_evaluation(args):
  function aggregate_scores (line 321) | def aggregate_scores(test_ref_pairs,
  function aggregate_scores_for_experiment (line 403) | def aggregate_scores_for_experiment(score_file,
  function evaluate_folder (line 446) | def evaluate_folder(folder_with_gts: str, folder_with_predictions: str, ...
  function nnformer_evaluate_folder (line 464) | def nnformer_evaluate_folder():

FILE: unetr_pp/evaluation/metrics.py
  function assert_shape (line 19) | def assert_shape(test, reference):
  class ConfusionMatrix (line 25) | class ConfusionMatrix:
    method __init__ (line 27) | def __init__(self, test=None, reference=None):
    method set_test (line 41) | def set_test(self, test):
    method set_reference (line 46) | def set_reference(self, reference):
    method reset (line 51) | def reset(self):
    method compute (line 63) | def compute(self):
    method get_matrix (line 80) | def get_matrix(self):
    method get_size (line 89) | def get_size(self):
    method get_existence (line 95) | def get_existence(self):
  function dice (line 105) | def dice(test=None, reference=None, confusion_matrix=None, nan_for_nonex...
  function jaccard (line 123) | def jaccard(test=None, reference=None, confusion_matrix=None, nan_for_no...
  function precision (line 141) | def precision(test=None, reference=None, confusion_matrix=None, nan_for_...
  function sensitivity (line 159) | def sensitivity(test=None, reference=None, confusion_matrix=None, nan_fo...
  function recall (line 177) | def recall(test=None, reference=None, confusion_matrix=None, nan_for_non...
  function specificity (line 183) | def specificity(test=None, reference=None, confusion_matrix=None, nan_fo...
  function accuracy (line 201) | def accuracy(test=None, reference=None, confusion_matrix=None, **kwargs):
  function fscore (line 212) | def fscore(test=None, reference=None, confusion_matrix=None, nan_for_non...
  function false_positive_rate (line 222) | def false_positive_rate(test=None, reference=None, confusion_matrix=None...
  function false_omission_rate (line 228) | def false_omission_rate(test=None, reference=None, confusion_matrix=None...
  function false_negative_rate (line 246) | def false_negative_rate(test=None, reference=None, confusion_matrix=None...
  function true_negative_rate (line 252) | def true_negative_rate(test=None, reference=None, confusion_matrix=None,...
  function false_discovery_rate (line 258) | def false_discovery_rate(test=None, reference=None, confusion_matrix=Non...
  function negative_predictive_value (line 264) | def negative_predictive_value(test=None, reference=None, confusion_matri...
  function total_positives_test (line 270) | def total_positives_test(test=None, reference=None, confusion_matrix=Non...
  function total_negatives_test (line 281) | def total_negatives_test(test=None, reference=None, confusion_matrix=Non...
  function total_positives_reference (line 292) | def total_positives_reference(test=None, reference=None, confusion_matri...
  function total_negatives_reference (line 303) | def total_negatives_reference(test=None, reference=None, confusion_matri...
  function hausdorff_distance (line 314) | def hausdorff_distance(test=None, reference=None, confusion_matrix=None,...
  function hausdorff_distance_95 (line 332) | def hausdorff_distance_95(test=None, reference=None, confusion_matrix=No...
  function avg_surface_distance (line 350) | def avg_surface_distance(test=None, reference=None, confusion_matrix=Non...
  function avg_surface_distance_symmetric (line 368) | def avg_surface_distance_symmetric(test=None, reference=None, confusion_...

FILE: unetr_pp/evaluation/model_selection/ensemble.py
  function merge (line 26) | def merge(args):
  function ensemble (line 39) | def ensemble(training_output_folder1, training_output_folder2, output_fo...

FILE: unetr_pp/evaluation/model_selection/figure_out_what_to_submit.py
  function find_task_name (line 29) | def find_task_name(folder, task_id):
  function get_mean_foreground_dice (line 36) | def get_mean_foreground_dice(json_file):
  function get_foreground_mean (line 41) | def get_foreground_mean(results):
  function main (line 47) | def main():

FILE: unetr_pp/evaluation/model_selection/summarize_results_in_one_json.py
  function summarize (line 22) | def summarize(tasks, models=('2d', '3d_lowres', '3d_fullres', '3d_cascad...
  function summarize2 (line 101) | def summarize2(task_ids, models=('2d', '3d_lowres', '3d_fullres', '3d_ca...
  function foreground_mean2 (line 203) | def foreground_mean2(filename):

FILE: unetr_pp/evaluation/model_selection/summarize_results_with_plans.py
  function list_to_string (line 23) | def list_to_string(l, delim=","):
  function write_plans_to_file (line 30) | def write_plans_to_file(f, plans_file, stage=0, do_linebreak_at_end=True...

FILE: unetr_pp/evaluation/region_based_evaluation.py
  function get_brats_regions (line 12) | def get_brats_regions():
  function get_KiTS_regions (line 26) | def get_KiTS_regions():
  function create_region_from_mask (line 34) | def create_region_from_mask(mask, join_labels: tuple):
  function evaluate_case (line 41) | def evaluate_case(file_pred: str, file_gt: str, regions):
  function evaluate_regions (line 53) | def evaluate_regions(folder_predicted: str, folder_gt: str, regions: dic...

FILE: unetr_pp/evaluation/surface_dice.py
  function normalized_surface_dice (line 20) | def normalized_surface_dice(a: np.ndarray, b: np.ndarray, threshold: flo...

FILE: unetr_pp/experiment_planning/DatasetAnalyzer.py
  class DatasetAnalyzer (line 27) | class DatasetAnalyzer(object):
    method __init__ (line 28) | def __init__(self, folder_with_cropped_data, overwrite=True, num_proce...
    method load_properties_of_cropped (line 45) | def load_properties_of_cropped(self, case_identifier):
    method _check_if_all_in_one_region (line 51) | def _check_if_all_in_one_region(seg, regions):
    method _collect_class_and_region_sizes (line 65) | def _collect_class_and_region_sizes(seg, all_classes, vol_per_voxel):
    method _get_unique_labels (line 76) | def _get_unique_labels(self, patient_identifier):
    method _load_seg_analyze_classes (line 81) | def _load_seg_analyze_classes(self, patient_identifier, all_classes):
    method get_classes (line 109) | def get_classes(self):
    method analyse_segmentations (line 113) | def analyse_segmentations(self):
    method get_sizes_and_spacings_after_cropping (line 134) | def get_sizes_and_spacings_after_cropping(self):
    method get_modalities (line 145) | def get_modalities(self):
    method get_size_reduction_by_cropping (line 151) | def get_size_reduction_by_cropping(self):
    method _get_voxels_in_foreground (line 161) | def _get_voxels_in_foreground(self, patient_identifier, modality_id):
    method _compute_stats (line 169) | def _compute_stats(voxels):
    method collect_intensity_properties (line 181) | def collect_intensity_properties(self, num_modalities):
    method analyze_dataset (line 225) | def analyze_dataset(self, collect_intensityproperties=True):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_11GB.py
  class ExperimentPlanner3D_v21_11GB (line 25) | class ExperimentPlanner3D_v21_11GB(ExperimentPlanner3D_v21):
    method __init__ (line 29) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_properties_for_stage (line 35) | def get_properties_for_stage(self, current_spacing, original_spacing, ...

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_16GB.py
  class ExperimentPlanner3D_v21_16GB (line 25) | class ExperimentPlanner3D_v21_16GB(ExperimentPlanner3D_v21):
    method __init__ (line 29) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_properties_for_stage (line 35) | def get_properties_for_stage(self, current_spacing, original_spacing, ...

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_32GB.py
  class ExperimentPlanner3D_v21_32GB (line 25) | class ExperimentPlanner3D_v21_32GB(ExperimentPlanner3D_v21):
    method __init__ (line 29) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_properties_for_stage (line 35) | def get_properties_for_stage(self, current_spacing, original_spacing, ...

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_3convperstage.py
  class ExperimentPlanner3D_v21_3cps (line 25) | class ExperimentPlanner3D_v21_3cps(ExperimentPlanner3D_v21):
    method __init__ (line 32) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method run_preprocessing (line 39) | def run_preprocessing(self, num_threads):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v22.py
  class ExperimentPlanner3D_v22 (line 21) | class ExperimentPlanner3D_v22(ExperimentPlanner3D_v21):
    method __init__ (line 24) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_target_spacing (line 30) | def get_target_spacing(self):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v23.py
  class ExperimentPlanner3D_v23 (line 20) | class ExperimentPlanner3D_v23(ExperimentPlanner3D_v21):
    method __init__ (line 23) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_residual_3DUNet_v21.py
  class ExperimentPlanner3DFabiansResUNet_v21 (line 26) | class ExperimentPlanner3DFabiansResUNet_v21(ExperimentPlanner3D_v21):
    method __init__ (line 27) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_properties_for_stage (line 33) | def get_properties_for_stage(self, current_spacing, original_spacing, ...
    method run_preprocessing (line 124) | def run_preprocessing(self, num_threads):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/normalization/experiment_planner_2DUNet_v21_RGB_scaleto_0_1.py
  class ExperimentPlanner2D_v21_RGB_scaleTo_0_1 (line 20) | class ExperimentPlanner2D_v21_RGB_scaleTo_0_1(ExperimentPlanner2D_v21):
    method __init__ (line 24) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/normalization/experiment_planner_3DUNet_CT2.py
  class ExperimentPlannerCT2 (line 22) | class ExperimentPlannerCT2(ExperimentPlanner):
    method __init__ (line 30) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method determine_normalization_scheme (line 35) | def determine_normalization_scheme(self):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/normalization/experiment_planner_3DUNet_nonCT.py
  class ExperimentPlannernonCT (line 22) | class ExperimentPlannernonCT(ExperimentPlanner):
    method __init__ (line 27) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method determine_normalization_scheme (line 32) | def determine_normalization_scheme(self):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/patch_size/experiment_planner_3DUNet_isotropic_in_mm.py
  class ExperimentPlannerIso (line 25) | class ExperimentPlannerIso(ExperimentPlanner):
    method __init__ (line 32) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_properties_for_stage (line 37) | def get_properties_for_stage(self, current_spacing, original_spacing, ...

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/patch_size/experiment_planner_3DUNet_isotropic_in_voxels.py
  class ExperimentPlanner3D_IsoPatchesInVoxels (line 25) | class ExperimentPlanner3D_IsoPatchesInVoxels(ExperimentPlanner):
    method __init__ (line 33) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_properties_for_stage (line 38) | def get_properties_for_stage(self, current_spacing, original_spacing, ...

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/pooling_and_convs/experiment_planner_baseline_3DUNet_allConv3x3.py
  class ExperimentPlannerAllConv3x3 (line 24) | class ExperimentPlannerAllConv3x3(ExperimentPlanner):
    method __init__ (line 25) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_properties_for_stage (line 30) | def get_properties_for_stage(self, current_spacing, original_spacing, ...
    method run_preprocessing (line 136) | def run_preprocessing(self, num_threads):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/pooling_and_convs/experiment_planner_baseline_3DUNet_poolBasedOnSpacing.py
  class ExperimentPlannerPoolBasedOnSpacing (line 24) | class ExperimentPlannerPoolBasedOnSpacing(ExperimentPlanner):
    method __init__ (line 25) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_properties_for_stage (line 31) | def get_properties_for_stage(self, current_spacing, original_spacing, ...

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/target_spacing/experiment_planner_baseline_3DUNet_targetSpacingForAnisoAxis.py
  class ExperimentPlannerTargetSpacingForAnisoAxis (line 20) | class ExperimentPlannerTargetSpacingForAnisoAxis(ExperimentPlanner):
    method __init__ (line 21) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_target_spacing (line 27) | def get_target_spacing(self):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/target_spacing/experiment_planner_baseline_3DUNet_v21_customTargetSpacing_2x2x2.py
  class ExperimentPlanner3D_v21_customTargetSpacing_2x2x2 (line 20) | class ExperimentPlanner3D_v21_customTargetSpacing_2x2x2(ExperimentPlanne...
    method __init__ (line 21) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_target_spacing (line 30) | def get_target_spacing(self):

FILE: unetr_pp/experiment_planning/alternative_experiment_planning/target_spacing/experiment_planner_baseline_3DUNet_v21_noResampling.py
  class ExperimentPlanner3D_v21_noResampling (line 23) | class ExperimentPlanner3D_v21_noResampling(ExperimentPlanner3D_v21):
    method __init__ (line 24) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method plan_experiment (line 31) | def plan_experiment(self):
  class ExperimentPlanner3D_v21_noResampling_16GB (line 121) | class ExperimentPlanner3D_v21_noResampling_16GB(ExperimentPlanner3D_v21_...
    method __init__ (line 122) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method plan_experiment (line 129) | def plan_experiment(self):

FILE: unetr_pp/experiment_planning/common_utils.py
  function split_4d_nifti (line 23) | def split_4d_nifti(filename, output_folder):
  function get_pool_and_conv_props_poolLateV2 (line 50) | def get_pool_and_conv_props_poolLateV2(patch_size, min_feature_map_size,...
  function get_pool_and_conv_props (line 89) | def get_pool_and_conv_props(spacing, patch_size, min_feature_map_size, m...
  function get_pool_and_conv_props_v2 (line 157) | def get_pool_and_conv_props_v2(spacing, patch_size, min_feature_map_size...
  function get_shape_must_be_divisible_by (line 232) | def get_shape_must_be_divisible_by(net_numpool_per_axis):
  function pad_shape (line 236) | def pad_shape(shape, must_be_divisible_by):
  function get_network_numpool (line 257) | def get_network_numpool(patch_size, maxpool_cap=999, min_feature_map_siz...
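
`get_shape_must_be_divisible_by` and `pad_shape` encode a standard nnU-Net-style constraint: each pooling step halves an axis, so an axis pooled p times must be divisible by 2**p, and input shapes are padded up to the next such multiple. A simplified sketch of that arithmetic (argument shapes simplified to plain lists; not the repository's exact code):

```python
def shape_must_be_divisible_by(num_pool_per_axis):
    # each pooling halves an axis, so it must be divisible by 2**num_pool
    return [2 ** p for p in num_pool_per_axis]

def pad_shape(shape, must_be_divisible_by):
    # round each axis up to the next multiple of its divisor
    return [s + (d - s % d) % d for s, d in zip(shape, must_be_divisible_by)]

div = shape_must_be_divisible_by([5, 5, 3])  # -> [32, 32, 8]
print(pad_shape([130, 130, 20], div))        # -> [160, 160, 24]
```

Axes that are already divisible are left unchanged by the `(d - s % d) % d` term.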

FILE: unetr_pp/experiment_planning/experiment_planner_baseline_2DUNet.py
  class ExperimentPlanner2D (line 32) | class ExperimentPlanner2D(ExperimentPlanner):
    method __init__ (line 33) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_properties_for_stage (line 45) | def get_properties_for_stage(self, current_spacing, original_spacing, ...
    method plan_experiment (line 90) | def plan_experiment(self):

FILE: unetr_pp/experiment_planning/experiment_planner_baseline_2DUNet_v21.py
  class ExperimentPlanner2D_v21 (line 23) | class ExperimentPlanner2D_v21(ExperimentPlanner2D):
    method __init__ (line 24) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_properties_for_stage (line 31) | def get_properties_for_stage(self, current_spacing, original_spacing, ...

FILE: unetr_pp/experiment_planning/experiment_planner_baseline_3DUNet.py
  class ExperimentPlanner (line 32) | class ExperimentPlanner(object):
    method __init__ (line 33) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_target_spacing (line 66) | def get_target_spacing(self):
    method save_my_plans (line 81) | def save_my_plans(self):
    method load_my_plans (line 85) | def load_my_plans(self):
    method determine_postprocessing (line 94) | def determine_postprocessing(self):
    method get_properties_for_stage (line 144) | def get_properties_for_stage(self, current_spacing, original_spacing, ...
    method plan_experiment (line 247) | def plan_experiment(self):
    method determine_normalization_scheme (line 359) | def determine_normalization_scheme(self):
    method save_properties_of_cropped (line 371) | def save_properties_of_cropped(self, case_identifier, properties):
    method load_properties_of_cropped (line 375) | def load_properties_of_cropped(self, case_identifier):
    method determine_whether_to_use_mask_for_norm (line 380) | def determine_whether_to_use_mask_for_norm(self):
    method write_normalization_scheme_to_patients (line 411) | def write_normalization_scheme_to_patients(self):
    method run_preprocessing (line 422) | def run_preprocessing(self, num_threads):

FILE: unetr_pp/experiment_planning/experiment_planner_baseline_3DUNet_v21.py
  class ExperimentPlanner3D_v21 (line 24) | class ExperimentPlanner3D_v21(ExperimentPlanner):
    method __init__ (line 31) | def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
    method get_target_spacing (line 38) | def get_target_spacing(self):
    method get_properties_for_stage (line 83) | def get_properties_for_stage(self, current_spacing, original_spacing, ...
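
The `get_target_spacing` override in the v21 planner family chooses the voxel spacing to which all cases are resampled. The common core is the per-axis median spacing over the training set; v21 additionally relaxes the target on a strongly anisotropic axis toward a percentile, which is omitted here. A stdlib-only sketch of the median step (function name and tuple layout are illustrative):

```python
import statistics

def target_spacing(spacings):
    """Median voxel spacing per axis across the training cases."""
    return [statistics.median(axis) for axis in zip(*spacings)]

cases = [(3.0, 0.8, 0.8), (5.0, 0.7, 0.7), (3.0, 0.9, 0.9)]
print(target_spacing(cases))  # [3.0, 0.8, 0.8]
```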

FILE: unetr_pp/experiment_planning/nnFormer_convert_decathlon_task.py
  function crawl_and_remove_hidden_from_decathlon (line 20) | def crawl_and_remove_hidden_from_decathlon(folder):
  function main (line 41) | def main():

FILE: unetr_pp/experiment_planning/nnFormer_plan_and_preprocess.py
  function main (line 27) | def main():

FILE: unetr_pp/experiment_planning/summarize_plans.py
  function summarize_plans (line 20) | def summarize_plans(file):
  function write_plans_to_file (line 37) | def write_plans_to_file(f, plans_file):

FILE: unetr_pp/experiment_planning/utils.py
  function split_4d (line 31) | def split_4d(input_folder, num_processes=default_num_threads, overwrite_...
  function create_lists_from_splitted_dataset (line 82) | def create_lists_from_splitted_dataset(base_folder_splitted):
  function create_lists_from_splitted_dataset_folder (line 100) | def create_lists_from_splitted_dataset_folder(folder):
  function get_caseIDs_from_splitted_dataset_folder (line 113) | def get_caseIDs_from_splitted_dataset_folder(folder):
  function crop (line 122) | def crop(task_string, override=False, num_threads=default_num_threads):
  function analyze_dataset (line 138) | def analyze_dataset(task_string, override=False, collect_intensityproper...
  function plan_and_preprocess (line 144) | def plan_and_preprocess(task_string, processes_lowres=default_num_thread...
  function add_classes_in_slice_info (line 190) | def add_classes_in_slice_info(args):

FILE: unetr_pp/inference/predict.py
  function preprocess_save_to_queue (line 37) | def preprocess_save_to_queue(preprocess_fn, q, list_of_lists, output_fil...
  function preprocess_multithreaded (line 95) | def preprocess_multithreaded(trainer, list_of_lists, output_files, num_p...
  function predict_cases (line 133) | def predict_cases(model, list_of_lists, output_filenames, folds, save_np...
  function predict_cases_fast (line 289) | def predict_cases_fast(model, list_of_lists, output_filenames, folds, nu...
  function predict_cases_fastest (line 427) | def predict_cases_fastest(model, list_of_lists, output_filenames, folds,...
  function check_input_folder_and_return_caseIDs (line 543) | def check_input_folder_and_return_caseIDs(input_folder, expected_num_mod...
  function predict_from_folder (line 579) | def predict_from_folder(model: str, input_folder: str, output_folder: st...

FILE: unetr_pp/inference/predict_simple.py
  function main (line 27) | def main():

FILE: unetr_pp/inference/segmentation_export.py
  function save_segmentation_nifti_from_softmax (line 27) | def save_segmentation_nifti_from_softmax(segmentation_softmax: Union[str...
  function save_segmentation_nifti (line 158) | def save_segmentation_nifti(segmentation, out_fname, dct, order=1, force...

FILE: unetr_pp/inference_acdc.py
  function read_nii (line 11) | def read_nii(path):
  function dice (line 16) | def dice(pred, label):
  function process_label (line 22) | def process_label(label):
  function hd (line 42) | def hd(pred,gt):
  function test (line 55) | def test(fold):
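
The `dice(pred, label)` helpers that recur across the `inference_*.py` scripts compute the standard Dice coefficient, 2|P ∩ L| / (|P| + |L|). A minimal sketch on flat binary lists (the scripts operate on NumPy arrays loaded from NIfTI files; the empty-mask convention of returning 1.0 is an assumption):

```python
def dice(pred, label):
    """Dice = 2|P ∩ L| / (|P| + |L|) for binary masks (flat lists here)."""
    inter = sum(p and l for p, l in zip(pred, label))
    total = sum(pred) + sum(label)
    return 2.0 * inter / total if total else 1.0

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```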

FILE: unetr_pp/inference_synapse.py
  function read_nii (line 8) | def read_nii(path):
  function dice (line 11) | def dice(pred, label):
  function hd (line 16) | def hd(pred,gt):
  function process_label (line 23) | def process_label(label):
  function test (line 35) | def test(fold):

FILE: unetr_pp/inference_tumor.py
  function read_nii (line 8) | def read_nii(path):
  function new_dice (line 11) | def new_dice(pred,label):
  function dice (line 17) | def dice(pred, label):
  function hd (line 23) | def hd(pred,gt):
  function process_label (line 30) | def process_label(label):
  function test (line 41) | def test(fold):

FILE: unetr_pp/network_architecture/acdc/model_components.py
  class UnetrPPEncoder (line 13) | class UnetrPPEncoder(nn.Module):
    method __init__ (line 14) | def __init__(self, input_size=[16 * 40 * 40, 8 * 20 * 20, 4 * 10 * 10,...
    method _init_weights (line 45) | def _init_weights(self, m):
    method forward_features (line 54) | def forward_features(self, x):
    method forward (line 70) | def forward(self, x):
  class UnetrUpBlock (line 75) | class UnetrUpBlock(nn.Module):
    method __init__ (line 76) | def __init__(
    method _init_weights (line 133) | def _init_weights(self, m):
    method forward (line 142) | def forward(self, inp, skip):

FILE: unetr_pp/network_architecture/acdc/transformerblock.py
  class TransformerBlock (line 6) | class TransformerBlock(nn.Module):
    method __init__ (line 12) | def __init__(
    method forward (line 54) | def forward(self, x):
  class EPA (line 69) | class EPA(nn.Module):
    method __init__ (line 74) | def __init__(self, input_size, hidden_size, proj_size, num_heads=4, qk...
    method forward (line 93) | def forward(self, x):
    method no_weight_decay (line 135) | def no_weight_decay(self):

FILE: unetr_pp/network_architecture/acdc/unetr_pp_acdc.py
  class UNETR_PP (line 8) | class UNETR_PP(SegmentationNetwork):
    method __init__ (line 13) | def __init__(
    method proj_feat (line 113) | def proj_feat(self, x, hidden_size, feat_size):
    method forward (line 118) | def forward(self, x_in):

FILE: unetr_pp/network_architecture/dynunet_block.py
  class UnetResBlock (line 12) | class UnetResBlock(nn.Module):
    method __init__ (line 30) | def __init__(
    method forward (line 67) | def forward(self, inp):
  class UnetBasicBlock (line 83) | class UnetBasicBlock(nn.Module):
    method __init__ (line 101) | def __init__(
    method forward (line 129) | def forward(self, inp):
  class UnetUpBlock (line 139) | class UnetUpBlock(nn.Module):
    method __init__ (line 159) | def __init__(
    method forward (line 196) | def forward(self, inp, skip):
  class UnetOutBlock (line 204) | class UnetOutBlock(nn.Module):
    method __init__ (line 205) | def __init__(
    method forward (line 213) | def forward(self, inp):
  function get_conv_layer (line 217) | def get_conv_layer(
  function get_padding (line 251) | def get_padding(
  function get_output_padding (line 265) | def get_output_padding(

FILE: unetr_pp/network_architecture/generic_UNet.py
  class ConvDropoutNormNonlin (line 26) | class ConvDropoutNormNonlin(nn.Module):
    method __init__ (line 31) | def __init__(self, input_channels, output_channels,
    method forward (line 64) | def forward(self, x):
  class ConvDropoutNonlinNorm (line 71) | class ConvDropoutNonlinNorm(ConvDropoutNormNonlin):
    method forward (line 72) | def forward(self, x):
  class StackedConvLayers (line 79) | class StackedConvLayers(nn.Module):
    method __init__ (line 80) | def __init__(self, input_feature_channels, output_feature_channels, nu...
    method forward (line 141) | def forward(self, x):
  function print_module_training_status (line 145) | def print_module_training_status(module):
  class Upsample (line 154) | class Upsample(nn.Module):
    method __init__ (line 155) | def __init__(self, size=None, scale_factor=None, mode='nearest', align...
    method forward (line 162) | def forward(self, x):
  class Generic_UNet (line 167) | class Generic_UNet(SegmentationNetwork):
    method __init__ (line 184) | def __init__(self, input_channels, base_num_features, num_classes, num...
    method forward (line 388) | def forward(self, x):
    method compute_approx_vram_consumption (line 417) | def compute_approx_vram_consumption(patch_size, num_pool_per_axis, bas...

FILE: unetr_pp/network_architecture/initialization.py
  class InitWeights_He (line 19) | class InitWeights_He(object):
    method __init__ (line 20) | def __init__(self, neg_slope=1e-2):
    method __call__ (line 23) | def __call__(self, module):
  class InitWeights_XavierUniform (line 30) | class InitWeights_XavierUniform(object):
    method __init__ (line 31) | def __init__(self, gain=1):
    method __call__ (line 34) | def __call__(self, module):

FILE: unetr_pp/network_architecture/layers.py
  class LayerNorm (line 7) | class LayerNorm(nn.Module):
    method __init__ (line 8) | def __init__(self, normalized_shape, eps=1e-6, data_format="channels_l...
    method forward (line 18) | def forward(self, x):
  class PositionalEncodingFourier (line 29) | class PositionalEncodingFourier(nn.Module):
    method __init__ (line 30) | def __init__(self, hidden_dim=32, dim=768, temperature=10000):
    method forward (line 38) | def forward(self, B, H, W):
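
The custom `LayerNorm` here exists because `nn.LayerNorm` assumes a channels-last layout, while 3D feature maps are usually channels-first; the per-position computation is the same either way. A torch-free sketch of that computation on a single feature vector, without the learned scale/shift parameters (illustrative, not the module's code):

```python
import math

def layer_norm(x, eps=1e-6):
    """Normalize a feature vector to zero mean / unit variance (no affine params)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

out = layer_norm([1.0, 2.0, 3.0])  # approx [-1.2247, 0.0, 1.2247]
```

In the channels-first case the module applies exactly this over the channel dimension at every spatial position.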

FILE: unetr_pp/network_architecture/lung/model_components.py
  class UnetrPPEncoder (line 15) | class UnetrPPEncoder(nn.Module):
    method __init__ (line 16) | def __init__(self, input_size=[32*48*48, 16 * 24 * 24, 8 * 12 * 12, 4*...
    method _init_weights (line 49) | def _init_weights(self, m):
    method forward_features (line 58) | def forward_features(self, x):
    method forward (line 74) | def forward(self, x):
  class UnetrUpBlock (line 79) | class UnetrUpBlock(nn.Module):
    method __init__ (line 80) | def __init__(
    method _init_weights (line 137) | def _init_weights(self, m):
    method forward (line 146) | def forward(self, inp, skip):

FILE: unetr_pp/network_architecture/lung/transformerblock.py
  class TransformerBlock (line 6) | class TransformerBlock(nn.Module):
    method __init__ (line 12) | def __init__(
    method forward (line 55) | def forward(self, x):
  class EPA (line 72) | class EPA(nn.Module):
    method __init__ (line 77) | def __init__(self, input_size, hidden_size, proj_size, num_heads=4, qk...
    method forward (line 96) | def forward(self, x):
    method no_weight_decay (line 139) | def no_weight_decay(self):

FILE: unetr_pp/network_architecture/lung/unetr_pp_lung.py
  class UNETR_PP (line 8) | class UNETR_PP(SegmentationNetwork):
    method __init__ (line 13) | def __init__(
    method proj_feat (line 113) | def proj_feat(self, x, hidden_size, feat_size):
    method forward (line 118) | def forward(self, x_in):

FILE: unetr_pp/network_architecture/neural_network.py
  class NeuralNetwork (line 28) | class NeuralNetwork(nn.Module):
    method __init__ (line 29) | def __init__(self):
    method get_device (line 32) | def get_device(self):
    method set_device (line 38) | def set_device(self, device):
    method forward (line 44) | def forward(self, x):
  class SegmentationNetwork (line 48) | class SegmentationNetwork(NeuralNetwork):
    method __init__ (line 49) | def __init__(self):
    method predict_3D (line 73) | def predict_3D(self, x: np.ndarray, do_mirroring: bool, mirror_axes: T...
    method predict_2D (line 168) | def predict_2D(self, x, do_mirroring: bool, mirror_axes: tuple = (0, 1...
    method _get_gaussian (line 251) | def _get_gaussian(patch_size, sigma_scale=1. / 8) -> np.ndarray:
    method _compute_steps_for_sliding_window (line 267) | def _compute_steps_for_sliding_window(patch_size: Tuple[int, ...], ima...
    method _internal_predict_3D_3Dconv_tiled (line 292) | def _internal_predict_3D_3Dconv_tiled(self, x: np.ndarray, step_size: ...
    method _internal_predict_2D_2Dconv (line 430) | def _internal_predict_2D_2Dconv(self, x: np.ndarray, min_size: Tuple[i...
    method _internal_predict_3D_3Dconv (line 466) | def _internal_predict_3D_3Dconv(self, x: np.ndarray, min_size: Tuple[i...
    method _internal_maybe_mirror_and_pred_3D (line 502) | def _internal_maybe_mirror_and_pred_3D(self, x: Union[np.ndarray, torc...
    method _internal_maybe_mirror_and_pred_2D (line 561) | def _internal_maybe_mirror_and_pred_2D(self, x: Union[np.ndarray, torc...
    method _internal_predict_2D_2Dconv_tiled (line 604) | def _internal_predict_2D_2Dconv_tiled(self, x: np.ndarray, step_size: ...
    method _internal_predict_3D_2Dconv (line 736) | def _internal_predict_3D_2Dconv(self, x: np.ndarray, min_size: Tuple[i...
    method predict_3D_pseudo3D_2Dconv (line 754) | def predict_3D_pseudo3D_2Dconv(self, x: np.ndarray, min_size: Tuple[in...
    method _internal_predict_3D_2Dconv_tiled (line 786) | def _internal_predict_3D_2Dconv_tiled(self, x: np.ndarray, patch_size:...
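
`_compute_steps_for_sliding_window` decides where tiled-inference patches start: the stride is capped at `patch_size * step_size`, and the resulting windows are spread evenly so the last one ends exactly at the image border. A sketch of that scheme, which this repository inherits from nnU-Net (simplified to plain tuples; not guaranteed identical line-for-line):

```python
import math

def compute_steps_for_sliding_window(patch_size, image_size, step_size):
    """Evenly spread window start positions so that stride <= patch * step_size."""
    steps = []
    for patch, image in zip(patch_size, image_size):
        target_stride = patch * step_size
        num_steps = int(math.ceil((image - patch) / target_stride)) + 1
        # distribute the leftover evenly instead of cramming it into the last step
        actual_stride = (image - patch) / (num_steps - 1) if num_steps > 1 else 0
        steps.append([int(round(actual_stride * i)) for i in range(num_steps)])
    return steps

print(compute_steps_for_sliding_window((4,), (10,), 0.5))  # [[0, 2, 4, 6]]
```

When the image is no larger than the patch, a single window at offset 0 is returned.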

FILE: unetr_pp/network_architecture/synapse/model_components.py
  class UnetrPPEncoder (line 13) | class UnetrPPEncoder(nn.Module):
    method __init__ (line 14) | def __init__(self, input_size=[32 * 32 * 32, 16 * 16 * 16, 8 * 8 * 8, ...
    method _init_weights (line 43) | def _init_weights(self, m):
    method forward_features (line 52) | def forward_features(self, x):
    method forward (line 68) | def forward(self, x):
  class UnetrUpBlock (line 73) | class UnetrUpBlock(nn.Module):
    method __init__ (line 74) | def __init__(
    method _init_weights (line 129) | def _init_weights(self, m):
    method forward (line 138) | def forward(self, inp, skip):

FILE: unetr_pp/network_architecture/synapse/transformerblock.py
  class TransformerBlock (line 6) | class TransformerBlock(nn.Module):
    method __init__ (line 12) | def __init__(
    method forward (line 52) | def forward(self, x):
  class EPA (line 68) | class EPA(nn.Module):
    method __init__ (line 73) | def __init__(self, input_size, hidden_size, proj_size, num_heads=4, qk...
    method forward (line 93) | def forward(self, x):
    method no_weight_decay (line 135) | def no_weight_decay(self):
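
The `EPA` (efficient paired attention) module runs a spatial and a channel attention branch with shared queries/keys and projects keys/values down to `proj_size` to keep complexity linear in the number of tokens. Those specifics aside, both branches reduce to scaled dot-product attention; a dependency-free sketch of that core (toy matrices as nested lists; this is the generic operation, not the EPA-specific variant):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Each output row is a convex combination of the value rows, weighted by query-key similarity.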

FILE: unetr_pp/network_architecture/synapse/unetr_pp_synapse.py
  class UNETR_PP (line 8) | class UNETR_PP(SegmentationNetwork):
    method __init__ (line 14) | def __init__(
    method proj_feat (line 128) | def proj_feat(self, x, hidden_size, feat_size):
    method forward (line 133) | def forward(self, x_in):

FILE: unetr_pp/network_architecture/tumor/model_components.py
  class UnetrPPEncoder (line 13) | class UnetrPPEncoder(nn.Module):
    method __init__ (line 14) | def __init__(self, input_size=[32 * 32 * 32, 16 * 16 * 16, 8 * 8 * 8, ...
    method _init_weights (line 45) | def _init_weights(self, m):
    method forward_features (line 54) | def forward_features(self, x):
    method forward (line 70) | def forward(self, x):
  class UnetrUpBlock (line 75) | class UnetrUpBlock(nn.Module):
    method __init__ (line 76) | def __init__(
    method _init_weights (line 133) | def _init_weights(self, m):
    method forward (line 142) | def forward(self, inp, skip):

FILE: unetr_pp/network_architecture/tumor/transformerblock.py
  class TransformerBlock (line 7) | class TransformerBlock(nn.Module):
    method __init__ (line 13) | def __init__(
    method forward (line 53) | def forward(self, x):
  function init_ (line 69) | def init_(tensor):
  class EPA (line 76) | class EPA(nn.Module):
    method __init__ (line 81) | def __init__(self, input_size, hidden_size, proj_size, num_heads=4, qk...
    method forward (line 98) | def forward(self, x):
    method no_weight_decay (line 129) | def no_weight_decay(self):

FILE: unetr_pp/network_architecture/tumor/unetr_pp_tumor.py
  class UNETR_PP (line 8) | class UNETR_PP(SegmentationNetwork):
    method __init__ (line 13) | def __init__(
    method proj_feat (line 113) | def proj_feat(self, x, hidden_size, feat_size):
    method forward (line 118) | def forward(self, x_in):

FILE: unetr_pp/postprocessing/connected_components.py
  function load_remove_save (line 30) | def load_remove_save(input_file: str, output_file: str, for_which_classe...
  function remove_all_but_the_largest_connected_component (line 48) | def remove_all_but_the_largest_connected_component(image: np.ndarray, fo...
  function load_postprocessing (line 108) | def load_postprocessing(json_file):
  function determine_postprocessing (line 122) | def determine_postprocessing(base, gt_labels_folder, raw_subfolder_name=...
  function apply_postprocessing_to_folder (line 400) | def apply_postprocessing_to_folder(input_folder: str, output_folder: str...
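
`remove_all_but_the_largest_connected_component` is the classic nnU-Net postprocessing step: if a class is known to form one region, drop every foreground component except the biggest. Production code would use `scipy.ndimage.label`; a stdlib BFS sketch of the same idea on a 2D grid (4-connectivity assumed):

```python
from collections import deque

def keep_largest_component(mask):
    """Zero out all foreground components except the largest (4-connectivity)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    components = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:  # flood-fill one component
                    a, b = queue.popleft()
                    comp.append((a, b))
                    for na, nb in ((a - 1, b), (a + 1, b), (a, b - 1), (a, b + 1)):
                        if 0 <= na < h and 0 <= nb < w and mask[na][nb] and not seen[na][nb]:
                            seen[na][nb] = True
                            queue.append((na, nb))
                components.append(comp)
    keep = set(max(components, key=len, default=[]))
    return [[1 if (i, j) in keep else 0 for j in range(w)] for i in range(h)]

grid = [[1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 0, 0, 1]]
print(keep_largest_component(grid))  # [[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```

`determine_postprocessing` decides per class whether applying this actually improves the cross-validation Dice before enabling it.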

FILE: unetr_pp/postprocessing/consolidate_all_for_paper.py
  function get_datasets (line 19) | def get_datasets():
  function get_commands (line 44) | def get_commands(configurations, regular_trainer="nnFormerTrainerV2", ca...

FILE: unetr_pp/postprocessing/consolidate_postprocessing.py
  function collect_cv_niftis (line 25) | def collect_cv_niftis(cv_folder: str, output_folder: str, validation_fol...
  function consolidate_folds (line 43) | def consolidate_folds(output_folder_base, validation_folder_name: str = ...

FILE: unetr_pp/postprocessing/consolidate_postprocessing_simple.py
  function main (line 23) | def main():

FILE: unetr_pp/preprocessing/cropping.py
  function create_nonzero_mask (line 23) | def create_nonzero_mask(data):
  function get_bbox_from_mask (line 34) | def get_bbox_from_mask(mask, outside_value=0):
  function crop_to_bbox (line 45) | def crop_to_bbox(image, bbox):
  function get_case_identifier (line 51) | def get_case_identifier(case):
  function get_case_identifier_from_npz (line 56) | def get_case_identifier_from_npz(case):
  function load_case_from_list_of_files (line 61) | def load_case_from_list_of_files(data_files, seg_file=None):
  function crop_to_nonzero (line 84) | def crop_to_nonzero(data, seg=None, nonzero_label=-1):
  function get_patient_identifiers_from_cropped_files (line 119) | def get_patient_identifiers_from_cropped_files(folder):
  class ImageCropper (line 123) | class ImageCropper(object):
    method __init__ (line 124) | def __init__(self, num_threads, output_folder=None):
    method crop (line 139) | def crop(data, properties, seg=None):
    method crop_from_list_of_files (line 153) | def crop_from_list_of_files(data_files, seg_file=None):
    method load_crop_save (line 157) | def load_crop_save(self, case, case_identifier, overwrite_existing=Fal...
    method get_list_of_cropped_files (line 175) | def get_list_of_cropped_files(self):
    method get_patient_identifiers_from_cropped_files (line 178) | def get_patient_identifiers_from_cropped_files(self):
    method run_cropping (line 181) | def run_cropping(self, list_of_files, overwrite_existing=False, output...
    method load_properties (line 209) | def load_properties(self, case_identifier):
    method save_properties (line 214) | def save_properties(self, case_identifier, properties):
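
`create_nonzero_mask`, `get_bbox_from_mask` and `crop_to_bbox` together crop each case to its nonzero region before preprocessing. A 2D nested-list sketch of the bbox pair (the repository works on 3D NumPy arrays and records the bbox in the case properties; return format assumed to be half-open `[min, max+1]` ranges):

```python
def get_bbox_from_mask(mask):
    """[[rmin, rmax+1], [cmin, cmax+1]] around nonzero entries of a 2D mask."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    return [[rows[0], rows[-1] + 1], [cols[0], cols[-1] + 1]]

def crop_to_bbox(image, bbox):
    (r0, r1), (c0, c1) = bbox
    return [row[c0:c1] for row in image[r0:r1]]

m = [[0, 0, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 0]]
bbox = get_bbox_from_mask(m)      # [[1, 3], [1, 3]]
print(crop_to_bbox(m, bbox))      # [[1, 1], [0, 1]]
```

Storing the bbox lets `save_segmentation_nifti_from_softmax` paste predictions back into the original geometry later.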

FILE: unetr_pp/preprocessing/custom_preprocessors/preprocessor_scale_RGB_to_0_1.py
  class GenericPreprocessor_scale_uint8_to_0_1 (line 19) | class GenericPreprocessor_scale_uint8_to_0_1(PreprocessorFor2D):
    method resample_and_normalize (line 27) | def resample_and_normalize(self, data, target_spacing, properties, seg...

FILE: unetr_pp/preprocessing/preprocessing.py
  function get_do_separate_z (line 28) | def get_do_separate_z(spacing, anisotropy_threshold=RESAMPLING_SEPARATE_...
  function get_lowres_axis (line 33) | def get_lowres_axis(new_spacing):
  function resample_patient (line 38) | def resample_patient(data, seg, original_spacing, target_spacing, order_...
  function resample_data_or_seg (line 112) | def resample_data_or_seg(data, new_shape, is_seg, axis=None, order=3, do...
  class GenericPreprocessor (line 204) | class GenericPreprocessor(object):
    method __init__ (line 205) | def __init__(self, normalization_scheme_per_modality, use_nonzero_mask...
    method load_cropped (line 220) | def load_cropped(cropped_output_dir, case_identifier):
    method resample_and_normalize (line 228) | def resample_and_normalize(self, data, target_spacing, properties, seg...
    method preprocess_test_case (line 308) | def preprocess_test_case(self, data_files, target_spacing, seg_file=No...
    method _run_internal (line 318) | def _run_internal(self, target_spacing, case_identifier, output_folder...
    method run (line 356) | def run(self, target_spacings, input_folder_with_cropped_npz, output_f...
  class Preprocessor3DDifferentResampling (line 397) | class Preprocessor3DDifferentResampling(GenericPreprocessor):
    method resample_and_normalize (line 398) | def resample_and_normalize(self, data, target_spacing, properties, seg...
  class Preprocessor3DBetterResampling (line 479) | class Preprocessor3DBetterResampling(GenericPreprocessor):
    method resample_and_normalize (line 485) | def resample_and_normalize(self, data, target_spacing, properties, seg...
  class PreprocessorFor2D (line 573) | class PreprocessorFor2D(GenericPreprocessor):
    method __init__ (line 574) | def __init__(self, normalization_scheme_per_modality, use_nonzero_mask...
    method run (line 578) | def run(self, target_spacings, input_folder_with_cropped_npz, output_f...
    method resample_and_normalize (line 606) | def resample_and_normalize(self, data, target_spacing, properties, seg...
  class PreprocessorFor3D_NoResampling (line 674) | class PreprocessorFor3D_NoResampling(GenericPreprocessor):
    method resample_and_normalize (line 675) | def resample_and_normalize(self, data, target_spacing, properties, seg...
  class PreprocessorFor2D_noNormalization (line 754) | class PreprocessorFor2D_noNormalization(GenericPreprocessor):
    method resample_and_normalize (line 755) | def resample_and_normalize(self, data, target_spacing, properties, seg...
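  The preprocessor variants above differ mainly in how `resample_and_normalize` treats each modality; in nnUNet-style pipelines such as this one, CT intensities are typically clipped to foreground percentiles and then z-scored. A minimal numpy sketch of that idea (the function name, percentile bounds, and epsilon are illustrative, not taken from the file; the real scheme per modality is read from the plans file):

```python
import numpy as np

def normalize_ct_like(data: np.ndarray,
                      lower_pct: float = 0.5,
                      upper_pct: float = 99.5) -> np.ndarray:
    """Clip to percentile bounds, then z-score.
    Illustrative sketch only -- not the repo's exact implementation."""
    lo, hi = np.percentile(data, [lower_pct, upper_pct])
    clipped = np.clip(data, lo, hi)
    return (clipped - clipped.mean()) / (clipped.std() + 1e-8)
```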

FILE: unetr_pp/preprocessing/sanity_checks.py
  function verify_all_same_orientation (line 25) | def verify_all_same_orientation(folder):
  function verify_same_geometry (line 45) | def verify_same_geometry(img_1: sitk.Image, img_2: sitk.Image):
  function verify_contains_only_expected_labels (line 79) | def verify_contains_only_expected_labels(itk_img: str, valid_labels: (tu...
  function verify_dataset_integrity (line 90) | def verify_dataset_integrity(folder):
  function reorient_to_RAS (line 237) | def reorient_to_RAS(img_fname: str, output_fname: str = None):

FILE: unetr_pp/run/default_configuration.py
  function get_configuration_from_output_folder (line 25) | def get_configuration_from_output_folder(folder):
  function get_default_configuration (line 36) | def get_default_configuration(network, task, network_trainer, plans_iden...

FILE: unetr_pp/run/run_training.py
  function main (line 38) | def main():

FILE: unetr_pp/training/cascade_stuff/predict_next_stage.py
  function resample_and_save (line 31) | def resample_and_save(predicted, target_shape, output_file, force_separa...
  function predict_next_stage (line 46) | def predict_next_stage(trainer, stage_to_be_predicted_folder):

FILE: unetr_pp/training/data_augmentation/custom_transforms.py
  class RemoveKeyTransform (line 19) | class RemoveKeyTransform(AbstractTransform):
    method __init__ (line 20) | def __init__(self, key_to_remove):
    method __call__ (line 23) | def __call__(self, **data_dict):
  class MaskTransform (line 28) | class MaskTransform(AbstractTransform):
    method __init__ (line 29) | def __init__(self, dct_for_where_it_was_used, mask_idx_in_seg=1, set_o...
    method __call__ (line 46) | def __call__(self, **data_dict):
  function convert_3d_to_2d_generator (line 60) | def convert_3d_to_2d_generator(data_dict):
  function convert_2d_to_3d_generator (line 70) | def convert_2d_to_3d_generator(data_dict):
  class Convert3DTo2DTransform (line 80) | class Convert3DTo2DTransform(AbstractTransform):
    method __init__ (line 81) | def __init__(self):
    method __call__ (line 84) | def __call__(self, **data_dict):
  class Convert2DTo3DTransform (line 88) | class Convert2DTo3DTransform(AbstractTransform):
    method __init__ (line 89) | def __init__(self):
    method __call__ (line 92) | def __call__(self, **data_dict):
  class ConvertSegmentationToRegionsTransform (line 96) | class ConvertSegmentationToRegionsTransform(AbstractTransform):
    method __init__ (line 97) | def __init__(self, regions: dict, seg_key: str = "seg", output_key: st...
    method __call__ (line 110) | def __call__(self, **data_dict):

FILE: unetr_pp/training/data_augmentation/data_augmentation_insaneDA.py
  function get_insaneDA_augmentation (line 37) | def get_insaneDA_augmentation(dataloader_train, dataloader_val, patch_si...

FILE: unetr_pp/training/data_augmentation/data_augmentation_insaneDA2.py
  function get_insaneDA_augmentation2 (line 38) | def get_insaneDA_augmentation2(dataloader_train, dataloader_val, patch_s...

FILE: unetr_pp/training/data_augmentation/data_augmentation_moreDA.py
  function get_moreDA_augmentation (line 37) | def get_moreDA_augmentation(dataloader_train, dataloader_val, patch_size...

FILE: unetr_pp/training/data_augmentation/data_augmentation_noDA.py
  function get_no_augmentation (line 28) | def get_no_augmentation(dataloader_train, dataloader_val, params=default...

FILE: unetr_pp/training/data_augmentation/default_data_augmentation.py
  function get_patch_size (line 107) | def get_patch_size(final_patch_size, rot_x, rot_y, rot_z, scale_range):
  function get_default_augmentation (line 130) | def get_default_augmentation(dataloader_train, dataloader_val, patch_siz...

FILE: unetr_pp/training/data_augmentation/downsampling.py
  class DownsampleSegForDSTransform3 (line 23) | class DownsampleSegForDSTransform3(AbstractTransform):
    method __init__ (line 34) | def __init__(self, ds_scales=(1, 0.5, 0.25), input_key="seg", output_k...
    method __call__ (line 40) | def __call__(self, **data_dict):
  function downsample_seg_for_ds_transform3 (line 45) | def downsample_seg_for_ds_transform3(seg, ds_scales=((1, 1, 1), (0.5, 0....
  class DownsampleSegForDSTransform2 (line 70) | class DownsampleSegForDSTransform2(AbstractTransform):
    method __init__ (line 74) | def __init__(self, ds_scales=(1, 0.5, 0.25), order=0, cval=0, input_ke...
    method __call__ (line 82) | def __call__(self, **data_dict):
  function downsample_seg_for_ds_transform2 (line 88) | def downsample_seg_for_ds_transform2(seg, ds_scales=((1, 1, 1), (0.5, 0....
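  Deep supervision needs the segmentation target at every decoder resolution, which is what the `downsample_seg_for_ds_transform*` helpers produce: one nearest-neighbour-downsampled copy of the label map per scale. A simplified sketch, assuming the scales are exact reciprocal-integer factors like 0.5 so stride slicing suffices (the real transforms use generic order-0 interpolation and handle arbitrary scales):

```python
import numpy as np

def downsample_seg_nearest(seg: np.ndarray,
                           ds_scales=((1, 1, 1), (0.5, 0.5, 0.5))):
    """Return one nearest-neighbour-downsampled copy of `seg` per scale.
    Stride slicing only works for exact 1/n factors -- a sketch, not the
    repo's implementation."""
    out = []
    for scale in ds_scales:
        strides = tuple(int(round(1 / s)) for s in scale)
        out.append(seg[::strides[0], ::strides[1], ::strides[2]])
    return out
```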

FILE: unetr_pp/training/data_augmentation/pyramid_augmentations.py
  class RemoveRandomConnectedComponentFromOneHotEncodingTransform (line 22) | class RemoveRandomConnectedComponentFromOneHotEncodingTransform(Abstract...
    method __init__ (line 23) | def __init__(self, channel_idx, key="data", p_per_sample=0.2, fill_wit...
    method __call__ (line 39) | def __call__(self, **data_dict):
  class MoveSegAsOneHotToData (line 70) | class MoveSegAsOneHotToData(AbstractTransform):
    method __init__ (line 71) | def __init__(self, channel_id, all_seg_labels, key_origin="seg", key_t...
    method __call__ (line 78) | def __call__(self, **data_dict):
  class ApplyRandomBinaryOperatorTransform (line 95) | class ApplyRandomBinaryOperatorTransform(AbstractTransform):
    method __init__ (line 96) | def __init__(self, channel_idx, p_per_sample=0.3, any_of_these=(binary...
    method __call__ (line 111) | def __call__(self, **data_dict):
  class ApplyRandomBinaryOperatorTransform2 (line 138) | class ApplyRandomBinaryOperatorTransform2(AbstractTransform):
    method __init__ (line 139) | def __init__(self, channel_idx, p_per_sample=0.3, p_per_label=0.3, any...
    method __call__ (line 164) | def __call__(self, **data_dict):

FILE: unetr_pp/training/dataloading/dataset_loading.py
  function get_case_identifiers (line 26) | def get_case_identifiers(folder):
  function get_case_identifiers_from_raw_folder (line 31) | def get_case_identifiers_from_raw_folder(folder):
  function convert_to_npy (line 37) | def convert_to_npy(args):
  function save_as_npz (line 48) | def save_as_npz(args):
  function unpack_dataset (line 58) | def unpack_dataset(folder, threads=default_num_threads, key="data"):
  function pack_dataset (line 73) | def pack_dataset(folder, threads=default_num_threads, key="data"):
  function delete_npy (line 81) | def delete_npy(folder):
  function load_dataset (line 89) | def load_dataset(folder, num_cases_properties_loading_threshold=1000):
  function crop_2D_image_force_fg (line 113) | def crop_2D_image_force_fg(img, crop_size, valid_voxels):
  class DataLoader3D (line 155) | class DataLoader3D(SlimDataLoaderBase):
    method __init__ (line 156) | def __init__(self, data, patch_size, final_patch_size, batch_size, has...
    method get_do_oversample (line 204) | def get_do_oversample(self, batch_idx):
    method determine_shapes (line 207) | def determine_shapes(self):
    method generate_train_batch (line 223) | def generate_train_batch(self):
  class DataLoader2D (line 382) | class DataLoader2D(SlimDataLoaderBase):
    method __init__ (line 383) | def __init__(self, data, patch_size, final_patch_size, batch_size, ove...
    method determine_shapes (line 429) | def determine_shapes(self):
    method get_do_oversample (line 442) | def get_do_oversample(self, batch_idx):
    method generate_train_batch (line 445) | def generate_train_batch(self):

FILE: unetr_pp/training/learning_rate/poly_lr.py
  function poly_lr (line 16) | def poly_lr(epoch, max_epochs, initial_lr, exponent=0.9):
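  The signature above matches the conventional polynomial learning-rate decay used by nnUNet-derived trainers: the rate starts at `initial_lr` and decays smoothly toward zero at `max_epochs`. A sketch of that standard formula (assumed, not copied from the file):

```python
def poly_lr(epoch: int, max_epochs: int,
            initial_lr: float, exponent: float = 0.9) -> float:
    """Polynomial decay: initial_lr * (1 - epoch/max_epochs)^exponent."""
    return initial_lr * (1 - epoch / max_epochs) ** exponent
```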

FILE: unetr_pp/training/loss_functions/TopK_loss.py
  class TopKLoss (line 20) | class TopKLoss(RobustCrossEntropyLoss):
    method __init__ (line 24) | def __init__(self, weight=None, ignore_index=-100, k=10):
    method forward (line 28) | def forward(self, inp, target):
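  `TopKLoss` extends `RobustCrossEntropyLoss` with a hard-example focus: compute the per-voxel cross-entropy without reduction, then average only the hardest `k` percent of voxels. The selection step can be sketched in numpy (the helper name is illustrative; the torch version would use `torch.topk` on the unreduced loss):

```python
import numpy as np

def topk_mean(per_voxel_loss: np.ndarray, k: int = 10) -> float:
    """Average only the hardest k% of per-voxel losses -- the idea
    behind a top-k cross-entropy loss."""
    flat = per_voxel_loss.ravel()
    num_kept = max(1, int(flat.size * k / 100))
    hardest = np.sort(flat)[-num_kept:]  # largest losses = hardest voxels
    return float(hardest.mean())
```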

FILE: unetr_pp/training/loss_functions/crossentropy.py
  class RobustCrossEntropyLoss (line 4) | class RobustCrossEntropyLoss(nn.CrossEntropyLoss):
    method forward (line 8) | def forward(self, input: Tensor, target: Tensor) -> Tensor:

FILE: unetr_pp/training/loss_functions/deep_supervision.py
  class MultipleOutputLoss2 (line 19) | class MultipleOutputLoss2(nn.Module):
    method __init__ (line 20) | def __init__(self, loss, weight_factors=None):
    method forward (line 31) | def forward(self, x, y):
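  `MultipleOutputLoss2` wraps a base loss for deep supervision: given matched lists of network outputs and targets, it returns a weighted sum of the base loss over all resolutions. A plain-Python sketch of that reduction (assumed from the signature, with zero-weight terms skipped):

```python
def multiple_output_loss(loss_fn, outputs, targets, weight_factors=None):
    """Weighted sum over paired (output, target) lists:
    total = sum_i w_i * loss_fn(outputs[i], targets[i])."""
    weights = weight_factors if weight_factors is not None else [1.0] * len(outputs)
    total = weights[0] * loss_fn(outputs[0], targets[0])
    for i in range(1, len(outputs)):
        if weights[i] != 0:
            total = total + weights[i] * loss_fn(outputs[i], targets[i])
    return total
```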

FILE: unetr_pp/training/loss_functions/dice_loss.py
  class GDL (line 25) | class GDL(nn.Module):
    method __init__ (line 26) | def __init__(self, apply_nonlin=None, batch_dice=False, do_bg=True, sm...
    method forward (line 40) | def forward(self, x, y, loss_mask=None):
  function get_tp_fp_fn_tn (line 100) | def get_tp_fp_fn_tn(net_output, gt, axes=None, mask=None, square=False):
  class SoftDiceLoss (line 158) | class SoftDiceLoss(nn.Module):
    method __init__ (line 159) | def __init__(self, apply_nonlin=None, batch_dice=False, do_bg=True, sm...
    method forward (line 169) | def forward(self, x, y, loss_mask=None):
  class MCCLoss (line 197) | class MCCLoss(nn.Module):
    method __init__ (line 198) | def __init__(self, apply_nonlin=None, batch_mcc=False, do_bg=True, smo...
    method forward (line 212) | def forward(self, x, y, loss_mask=None):
  class SoftDiceLossSquared (line 245) | class SoftDiceLossSquared(nn.Module):
    method __init__ (line 246) | def __init__(self, apply_nonlin=None, batch_dice=False, do_bg=True, sm...
    method forward (line 257) | def forward(self, x, y, loss_mask=None):
  class DC_and_CE_loss (line 304) | class DC_and_CE_loss(nn.Module):
    method __init__ (line 305) | def __init__(self, soft_dice_kwargs, ce_kwargs, aggregate="sum", squar...
    method forward (line 333) | def forward(self, net_output, target):
  class DC_and_BCE_loss (line 364) | class DC_and_BCE_loss(nn.Module):
    method __init__ (line 365) | def __init__(self, bce_kwargs, soft_dice_kwargs, aggregate="sum"):
    method forward (line 380) | def forward(self, net_output, target):
  class GDL_and_CE_loss (line 392) | class GDL_and_CE_loss(nn.Module):
    method __init__ (line 393) | def __init__(self, gdl_dice_kwargs, ce_kwargs, aggregate="sum"):
    method forward (line 399) | def forward(self, net_output, target):
  class DC_and_topk_loss (line 409) | class DC_and_topk_loss(nn.Module):
    method __init__ (line 410) | def __init__(self, soft_dice_kwargs, ce_kwargs, aggregate="sum", squar...
    method forward (line 419) | def forward(self, net_output, target):
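  The losses in this file are built on the soft-Dice overlap that `get_tp_fp_fn_tn` computes: true positives, false positives, and false negatives accumulated over soft (post-nonlinearity) predictions. A binary-case numpy sketch of that quantity (function name and smoothing value are illustrative; `SoftDiceLoss` would return `1 - dice` or `-dice` as the loss):

```python
import numpy as np

def soft_dice(pred: np.ndarray, gt: np.ndarray, smooth: float = 1e-5) -> float:
    """Soft Dice from tp/fp/fn (binary case; pred holds probabilities
    in [0, 1])."""
    tp = (pred * gt).sum()
    fp = (pred * (1 - gt)).sum()
    fn = ((1 - pred) * gt).sum()
    return (2 * tp + smooth) / (2 * tp + fp + fn + smooth)
```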

FILE: unetr_pp/training/model_restore.py
  function recursive_find_python_class (line 22) | def recursive_find_python_class(folder, trainer_name, current_module):
  function restore_model (line 43) | def restore_model(pkl_file, checkpoint=None, train=False, fp16=None,fold...
  function load_best_model_for_inference (line 112) | def load_best_model_for_inference(folder):
  function load_model_and_checkpoint_files (line 118) | def load_model_and_checkpoint_files(folder, folds=None, mixed_precision=...

FILE: unetr_pp/training/network_training/Trainer_acdc.py
  class Trainer_acdc (line 48) | class Trainer_acdc(NetworkTrainer_acdc):
    method __init__ (line 49) | def __init__(self, plans_file, fold, output_folder=None, dataset_direc...
    method update_fold (line 134) | def update_fold(self, fold):
    method setup_DA_params (line 153) | def setup_DA_params(self):
    method initialize (line 189) | def initialize(self, training=True, force_load_plans=False):
    method initialize_network (line 234) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 267) | def initialize_optimizer_and_scheduler(self):
    method plot_network_architecture (line 276) | def plot_network_architecture(self):
    method save_debug_information (line 299) | def save_debug_information(self):
    method run_training (line 317) | def run_training(self):
    method load_plans_file (line 321) | def load_plans_file(self):
    method process_plans (line 328) | def process_plans(self, plans):
    method load_dataset (line 396) | def load_dataset(self):
    method get_basic_generators (line 399) | def get_basic_generators(self):
    method preprocess_patient (line 419) | def preprocess_patient(self, input_files):
    method preprocess_predict_nifti (line 447) | def preprocess_predict_nifti(self, input_files: List[str], output_file...
    method predict_preprocessed_data_return_seg_and_softmax (line 485) | def predict_preprocessed_data_return_seg_and_softmax(self, data: np.nd...
    method validate (line 528) | def validate(self, do_mirroring: bool = True, use_sliding_window: bool...
    method run_online_evaluation (line 685) | def run_online_evaluation(self, output, target):
    method finish_online_evaluation (line 709) | def finish_online_evaluation(self):
    method save_checkpoint (line 728) | def save_checkpoint(self, fname, save_optimizer=True):

FILE: unetr_pp/training/network_training/Trainer_lung.py
  class Trainer_lung (line 49) | class Trainer_lung(NetworkTrainer_lung):
    method __init__ (line 50) | def __init__(self, plans_file, fold, output_folder=None, dataset_direc...
    method update_fold (line 135) | def update_fold(self, fold):
    method setup_DA_params (line 154) | def setup_DA_params(self):
    method initialize (line 190) | def initialize(self, training=True, force_load_plans=False):
    method initialize_network (line 235) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 268) | def initialize_optimizer_and_scheduler(self):
    method plot_network_architecture (line 277) | def plot_network_architecture(self):
    method save_debug_information (line 300) | def save_debug_information(self):
    method run_training (line 318) | def run_training(self):
    method load_plans_file (line 322) | def load_plans_file(self):
    method process_plans (line 329) | def process_plans(self, plans):
    method load_dataset (line 397) | def load_dataset(self):
    method get_basic_generators (line 400) | def get_basic_generators(self):
    method preprocess_patient (line 420) | def preprocess_patient(self, input_files):
    method preprocess_predict_nifti (line 448) | def preprocess_predict_nifti(self, input_files: List[str], output_file...
    method predict_preprocessed_data_return_seg_and_softmax (line 486) | def predict_preprocessed_data_return_seg_and_softmax(self, data: np.nd...
    method validate (line 529) | def validate(self, do_mirroring: bool = True, use_sliding_window: bool...
    method run_online_evaluation (line 686) | def run_online_evaluation(self, output, target):
    method finish_online_evaluation (line 710) | def finish_online_evaluation(self):
    method save_checkpoint (line 729) | def save_checkpoint(self, fname, save_optimizer=True):

FILE: unetr_pp/training/network_training/Trainer_synapse.py
  class Trainer_synapse (line 49) | class Trainer_synapse(NetworkTrainer_synapse):
    method __init__ (line 50) | def __init__(self, plans_file, fold, output_folder=None, dataset_direc...
    method update_fold (line 135) | def update_fold(self, fold):
    method setup_DA_params (line 154) | def setup_DA_params(self):
    method initialize (line 190) | def initialize(self, training=True, force_load_plans=False):
    method initialize_network (line 235) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 268) | def initialize_optimizer_and_scheduler(self):
    method plot_network_architecture (line 277) | def plot_network_architecture(self):
    method save_debug_information (line 300) | def save_debug_information(self):
    method run_training (line 318) | def run_training(self):
    method load_plans_file (line 322) | def load_plans_file(self):
    method process_plans (line 329) | def process_plans(self, plans):
    method load_dataset (line 397) | def load_dataset(self):
    method get_basic_generators (line 400) | def get_basic_generators(self):
    method preprocess_patient (line 420) | def preprocess_patient(self, input_files):
    method preprocess_predict_nifti (line 448) | def preprocess_predict_nifti(self, input_files: List[str], output_file...
    method predict_preprocessed_data_return_seg_and_softmax (line 486) | def predict_preprocessed_data_return_seg_and_softmax(self, data: np.nd...
    method validate (line 529) | def validate(self, do_mirroring: bool = True, use_sliding_window: bool...
    method run_online_evaluation (line 693) | def run_online_evaluation(self, output, target):
    method finish_online_evaluation (line 724) | def finish_online_evaluation(self):
    method save_checkpoint (line 743) | def save_checkpoint(self, fname, save_optimizer=True):

FILE: unetr_pp/training/network_training/Trainer_tumor.py
  class Trainer_tumor (line 49) | class Trainer_tumor(NetworkTrainer_tumor):
    method __init__ (line 50) | def __init__(self, plans_file, fold, output_folder=None, dataset_direc...
    method update_fold (line 134) | def update_fold(self, fold):
    method setup_DA_params (line 153) | def setup_DA_params(self):
    method initialize (line 189) | def initialize(self, training=True, force_load_plans=False):
    method initialize_network (line 234) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 267) | def initialize_optimizer_and_scheduler(self):
    method plot_network_architecture (line 276) | def plot_network_architecture(self):
    method save_debug_information (line 299) | def save_debug_information(self):
    method run_training (line 317) | def run_training(self):
    method load_plans_file (line 321) | def load_plans_file(self):
    method process_plans (line 329) | def process_plans(self, plans):
    method load_dataset (line 397) | def load_dataset(self):
    method get_basic_generators (line 400) | def get_basic_generators(self):
    method preprocess_patient (line 420) | def preprocess_patient(self, input_files):
    method preprocess_predict_nifti (line 448) | def preprocess_predict_nifti(self, input_files: List[str], output_file...
    method predict_preprocessed_data_return_seg_and_softmax (line 486) | def predict_preprocessed_data_return_seg_and_softmax(self, data: np.nd...
    method validate (line 529) | def validate(self, do_mirroring: bool = True, use_sliding_window: bool...
    method run_online_evaluation (line 693) | def run_online_evaluation(self, output, target):
    method finish_online_evaluation (line 717) | def finish_online_evaluation(self):
    method save_checkpoint (line 736) | def save_checkpoint(self, fname, save_optimizer=True):

FILE: unetr_pp/training/network_training/network_trainer_acdc.py
  class NetworkTrainer_acdc (line 42) | class NetworkTrainer_acdc(object):
    method __init__ (line 43) | def __init__(self, deterministic=True, fp16=False):
    method initialize (line 130) | def initialize(self, training=True):
    method load_dataset (line 146) | def load_dataset(self):
    method do_split (line 149) | def do_split(self):
    method plot_progress (line 187) | def plot_progress(self):
    method print_to_log_file (line 248) | def print_to_log_file(self, *args, also_print_to_console=True, add_tim...
    method save_checkpoint (line 282) | def save_checkpoint(self, fname, save_optimizer=True):
    method load_best_checkpoint (line 314) | def load_best_checkpoint(self, train=True):
    method load_latest_checkpoint (line 324) | def load_latest_checkpoint(self, train=True):
    method load_final_checkpoint (line 333) | def load_final_checkpoint(self, train=False):
    method load_checkpoint (line 339) | def load_checkpoint(self, fname, train=True):
    method initialize_network (line 348) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 356) | def initialize_optimizer_and_scheduler(self):
    method load_checkpoint_ram (line 363) | def load_checkpoint_ram(self, checkpoint, train=True):
    method _maybe_init_amp (line 427) | def _maybe_init_amp(self):
    method plot_network_architecture (line 431) | def plot_network_architecture(self):
    method run_training (line 439) | def run_training(self):
    method maybe_update_lr (line 529) | def maybe_update_lr(self):
    method maybe_save_checkpoint (line 542) | def maybe_save_checkpoint(self):
    method update_eval_criterion_MA (line 554) | def update_eval_criterion_MA(self):
    method manage_patience (line 580) | def manage_patience(self):
    method on_epoch_end (line 633) | def on_epoch_end(self):
    method update_train_loss_MA (line 648) | def update_train_loss_MA(self):
    method run_iteration (line 655) | def run_iteration(self, data_generator, do_backprop=True, run_online_e...
    method run_online_evaluation (line 695) | def run_online_evaluation(self, *args, **kwargs):
    method finish_online_evaluation (line 704) | def finish_online_evaluation(self):
    method validate (line 712) | def validate(self, *args, **kwargs):
    method find_lr (line 715) | def find_lr(self, num_iters=1000, init_value=1e-6, final_value=10., be...

FILE: unetr_pp/training/network_training/network_trainer_lung.py
  class NetworkTrainer_lung (line 42) | class NetworkTrainer_lung(object):
    method __init__ (line 43) | def __init__(self, deterministic=True, fp16=False):
    method initialize (line 130) | def initialize(self, training=True):
    method load_dataset (line 146) | def load_dataset(self):
    method do_split (line 149) | def do_split(self):
    method plot_progress (line 187) | def plot_progress(self):
    method print_to_log_file (line 248) | def print_to_log_file(self, *args, also_print_to_console=True, add_tim...
    method save_checkpoint (line 282) | def save_checkpoint(self, fname, save_optimizer=True):
    method load_best_checkpoint (line 314) | def load_best_checkpoint(self, train=True):
    method load_latest_checkpoint (line 324) | def load_latest_checkpoint(self, train=True):
    method load_final_checkpoint (line 333) | def load_final_checkpoint(self, train=False):
    method load_checkpoint (line 339) | def load_checkpoint(self, fname, train=True):
    method initialize_network (line 348) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 356) | def initialize_optimizer_and_scheduler(self):
    method load_checkpoint_ram (line 363) | def load_checkpoint_ram(self, checkpoint, train=True):
    method _maybe_init_amp (line 427) | def _maybe_init_amp(self):
    method plot_network_architecture (line 431) | def plot_network_architecture(self):
    method run_training (line 439) | def run_training(self):
    method maybe_update_lr (line 529) | def maybe_update_lr(self):
    method maybe_save_checkpoint (line 542) | def maybe_save_checkpoint(self):
    method update_eval_criterion_MA (line 554) | def update_eval_criterion_MA(self):
    method manage_patience (line 580) | def manage_patience(self):
    method on_epoch_end (line 633) | def on_epoch_end(self):
    method update_train_loss_MA (line 648) | def update_train_loss_MA(self):
    method run_iteration (line 655) | def run_iteration(self, data_generator, do_backprop=True, run_online_e...
    method run_online_evaluation (line 695) | def run_online_evaluation(self, *args, **kwargs):
    method finish_online_evaluation (line 704) | def finish_online_evaluation(self):
    method validate (line 712) | def validate(self, *args, **kwargs):
    method find_lr (line 715) | def find_lr(self, num_iters=1000, init_value=1e-6, final_value=10., be...

FILE: unetr_pp/training/network_training/network_trainer_synapse.py
  class NetworkTrainer_synapse (line 43) | class NetworkTrainer_synapse(object):
    method __init__ (line 44) | def __init__(self, deterministic=True, fp16=False):
    method initialize (line 131) | def initialize(self, training=True):
    method load_dataset (line 147) | def load_dataset(self):
    method do_split (line 150) | def do_split(self):
    method plot_progress (line 188) | def plot_progress(self):
    method print_to_log_file (line 249) | def print_to_log_file(self, *args, also_print_to_console=True, add_tim...
    method save_checkpoint (line 283) | def save_checkpoint(self, fname, save_optimizer=True):
    method load_best_checkpoint (line 315) | def load_best_checkpoint(self, train=True):
    method load_latest_checkpoint (line 325) | def load_latest_checkpoint(self, train=True):
    method load_final_checkpoint (line 334) | def load_final_checkpoint(self, train=False):
    method load_checkpoint (line 340) | def load_checkpoint(self, fname, train=True):
    method initialize_network (line 349) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 357) | def initialize_optimizer_and_scheduler(self):
    method load_checkpoint_ram (line 364) | def load_checkpoint_ram(self, checkpoint, train=True):
    method _maybe_init_amp (line 428) | def _maybe_init_amp(self):
    method plot_network_architecture (line 432) | def plot_network_architecture(self):
    method run_training (line 440) | def run_training(self):
    method maybe_update_lr (line 530) | def maybe_update_lr(self):
    method maybe_save_checkpoint (line 543) | def maybe_save_checkpoint(self):
    method update_eval_criterion_MA (line 555) | def update_eval_criterion_MA(self):
    method manage_patience (line 581) | def manage_patience(self):
    method on_epoch_end (line 634) | def on_epoch_end(self):
    method update_train_loss_MA (line 649) | def update_train_loss_MA(self):
    method run_iteration (line 656) | def run_iteration(self, data_generator, do_backprop=True, run_online_e...
    method run_online_evaluation (line 696) | def run_online_evaluation(self, *args, **kwargs):
    method finish_online_evaluation (line 705) | def finish_online_evaluation(self):
    method validate (line 713) | def validate(self, *args, **kwargs):
    method find_lr (line 716) | def find_lr(self, num_iters=1000, init_value=1e-6, final_value=10., be...

FILE: unetr_pp/training/network_training/network_trainer_tumor.py
  class NetworkTrainer_tumor (line 42) | class NetworkTrainer_tumor(object):
    method __init__ (line 43) | def __init__(self, deterministic=True, fp16=False):
    method initialize (line 129) | def initialize(self, training=True):
    method load_dataset (line 141) | def load_dataset(self):
    method do_split (line 144) | def do_split(self):
    method plot_progress (line 182) | def plot_progress(self):
    method print_to_log_file (line 238) | def print_to_log_file(self, *args, also_print_to_console=True, add_tim...
    method save_checkpoint (line 272) | def save_checkpoint(self, fname, save_optimizer=True):
    method load_best_checkpoint (line 304) | def load_best_checkpoint(self, train=True):
    method load_latest_checkpoint (line 314) | def load_latest_checkpoint(self, train=True):
    method load_final_checkpoint (line 323) | def load_final_checkpoint(self, train=False):
    method load_checkpoint (line 329) | def load_checkpoint(self, fname, train=True):
    method initialize_network (line 338) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 346) | def initialize_optimizer_and_scheduler(self):
    method load_checkpoint_ram (line 353) | def load_checkpoint_ram(self, checkpoint, train=True):
    method _maybe_init_amp (line 417) | def _maybe_init_amp(self):
    method plot_network_architecture (line 421) | def plot_network_architecture(self):
    method run_training (line 429) | def run_training(self):
    method maybe_update_lr (line 519) | def maybe_update_lr(self):
    method maybe_save_checkpoint (line 532) | def maybe_save_checkpoint(self):
    method update_eval_criterion_MA (line 544) | def update_eval_criterion_MA(self):
    method manage_patience (line 570) | def manage_patience(self):
    method on_epoch_end (line 623) | def on_epoch_end(self):
    method update_train_loss_MA (line 638) | def update_train_loss_MA(self):
    method run_iteration (line 645) | def run_iteration(self, data_generator, do_backprop=True, run_online_e...
    method run_online_evaluation (line 685) | def run_online_evaluation(self, *args, **kwargs):
    method finish_online_evaluation (line 694) | def finish_online_evaluation(self):
    method validate (line 702) | def validate(self, *args, **kwargs):
    method find_lr (line 705) | def find_lr(self, num_iters=1000, init_value=1e-6, final_value=10., be...

FILE: unetr_pp/training/network_training/unetr_pp_trainer_acdc.py
  class unetr_pp_trainer_acdc (line 40) | class unetr_pp_trainer_acdc(Trainer_acdc):
    method __init__ (line 45) | def __init__(self, plans_file, fold, output_folder=None, dataset_direc...
    method initialize (line 76) | def initialize(self, training=True, force_load_plans=False):
    method initialize_network (line 159) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 193) | def initialize_optimizer_and_scheduler(self):
    method run_online_evaluation (line 199) | def run_online_evaluation(self, output, target):
    method validate (line 215) | def validate(self, do_mirroring: bool = True, use_sliding_window: bool...
    method predict_preprocessed_data_return_seg_and_softmax (line 233) | def predict_preprocessed_data_return_seg_and_softmax(self, data: np.nd...
    method run_iteration (line 257) | def run_iteration(self, data_generator, do_backprop=True, run_online_e...
    method do_split (line 309) | def do_split(self):
    method setup_DA_params (line 431) | def setup_DA_params(self):
    method maybe_update_lr (line 485) | def maybe_update_lr(self, epoch=None):
    method on_epoch_end (line 502) | def on_epoch_end(self):
    method run_training (line 522) | def run_training(self):

FILE: unetr_pp/training/network_training/unetr_pp_trainer_lung.py
  class unetr_pp_trainer_lung (line 40) | class unetr_pp_trainer_lung(Trainer_lung):
    method __init__ (line 45) | def __init__(self, plans_file, fold, output_folder=None, dataset_direc...
    method initialize (line 77) | def initialize(self, training=True, force_load_plans=False):
    method initialize_network (line 160) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 194) | def initialize_optimizer_and_scheduler(self):
    method run_online_evaluation (line 200) | def run_online_evaluation(self, output, target):
    method validate (line 216) | def validate(self, do_mirroring: bool = True, use_sliding_window: bool...
    method predict_preprocessed_data_return_seg_and_softmax (line 234) | def predict_preprocessed_data_return_seg_and_softmax(self, data: np.nd...
    method run_iteration (line 258) | def run_iteration(self, data_generator, do_backprop=True, run_online_e...
    method do_split (line 311) | def do_split(self):
    method setup_DA_params (line 391) | def setup_DA_params(self):
    method maybe_update_lr (line 445) | def maybe_update_lr(self, epoch=None):
    method on_epoch_end (line 462) | def on_epoch_end(self):
    method run_training (line 482) | def run_training(self):

FILE: unetr_pp/training/network_training/unetr_pp_trainer_synapse.py
  class unetr_pp_trainer_synapse (line 40) | class unetr_pp_trainer_synapse(Trainer_synapse):
    method __init__ (line 45) | def __init__(self, plans_file, fold, output_folder=None, dataset_direc...
    method initialize (line 70) | def initialize(self, training=True, force_load_plans=False):
    method initialize_network (line 150) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 185) | def initialize_optimizer_and_scheduler(self):
    method run_online_evaluation (line 191) | def run_online_evaluation(self, output, target):
    method validate (line 207) | def validate(self, do_mirroring: bool = True, use_sliding_window: bool...
    method predict_preprocessed_data_return_seg_and_softmax (line 225) | def predict_preprocessed_data_return_seg_and_softmax(self, data: np.nd...
    method run_iteration (line 249) | def run_iteration(self, data_generator, do_backprop=True, run_online_e...
    method do_split (line 301) | def do_split(self):
    method setup_DA_params (line 373) | def setup_DA_params(self):
    method maybe_update_lr (line 427) | def maybe_update_lr(self, epoch=None):
    method on_epoch_end (line 444) | def on_epoch_end(self):
    method run_training (line 464) | def run_training(self):

FILE: unetr_pp/training/network_training/unetr_pp_trainer_tumor.py
  class unetr_pp_trainer_tumor (line 40) | class unetr_pp_trainer_tumor(Trainer_tumor):
    method __init__ (line 45) | def __init__(self, plans_file, fold, output_folder=None, dataset_direc...
    method initialize (line 78) | def initialize(self, training=True, force_load_plans=False):
    method initialize_network (line 158) | def initialize_network(self):
    method initialize_optimizer_and_scheduler (line 193) | def initialize_optimizer_and_scheduler(self):
    method run_online_evaluation (line 199) | def run_online_evaluation(self, output, target):
    method validate (line 215) | def validate(self, do_mirroring: bool = True, use_sliding_window: bool...
    method predict_preprocessed_data_return_seg_and_softmax (line 233) | def predict_preprocessed_data_return_seg_and_softmax(self, data: np.nd...
    method run_iteration (line 257) | def run_iteration(self, data_generator, do_backprop=True, run_online_e...
    method do_split (line 309) | def do_split(self):
    method setup_DA_params (line 468) | def setup_DA_params(self):
    method maybe_update_lr (line 522) | def maybe_update_lr(self, epoch=None):
    method on_epoch_end (line 539) | def on_epoch_end(self):
    method run_training (line 559) | def run_training(self):

FILE: unetr_pp/training/optimizer/ranger.py
  class Ranger (line 11) | class Ranger(Optimizer):
    method __init__ (line 13) | def __init__(self, params, lr=1e-3, alpha=0.5, k=6, N_sma_threshhold=5...
    method __setstate__ (line 64) | def __setstate__(self, state):
    method step (line 68) | def step(self, closure=None):
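
The Ranger optimizer combines RAdam with Lookahead: fast weights are updated every step, and every `k` steps the slow weights interpolate toward them with factor `alpha` (the `alpha=0.5, k=6` defaults appear in the constructor signature above). A minimal, torch-free sketch of the Lookahead half only, with the inner RAdam step replaced by plain scalar SGD for illustration:

```python
class LookaheadSGD:
    """Toy scalar Lookahead wrapper (illustration only, not the repo's Ranger)."""

    def __init__(self, w, lr=1e-3, alpha=0.5, k=6):
        self.fast = w          # fast weights, updated every step
        self.slow = w          # slow weights, updated every k steps
        self.lr, self.alpha, self.k = lr, alpha, k
        self.steps = 0

    def step(self, grad):
        self.fast -= self.lr * grad          # inner (fast) update; Ranger uses RAdam here
        self.steps += 1
        if self.steps % self.k == 0:
            # slow weights move a fraction alpha toward the fast weights,
            # then the fast weights restart from the slow ones
            self.slow += self.alpha * (self.fast - self.slow)
            self.fast = self.slow
        return self.fast
```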

FILE: unetr_pp/utilities/distributed.py
  function print_if_rank0 (line 22) | def print_if_rank0(*args):
  class awesome_allgather_function (line 27) | class awesome_allgather_function(autograd.Function):
    method forward (line 29) | def forward(ctx, input):
    method backward (line 39) | def backward(ctx, grad_output):

FILE: unetr_pp/utilities/file_conversions.py
  function convert_2d_image_to_nifti (line 8) | def convert_2d_image_to_nifti(input_filename: str, output_filename_trunc...
  function convert_3d_tiff_to_nifti (line 63) | def convert_3d_tiff_to_nifti(filenames: List[str], output_name: str, spa...
  function convert_2d_segmentation_nifti_to_img (line 99) | def convert_2d_segmentation_nifti_to_img(nifti_file: str, output_filenam...
  function convert_3d_segmentation_nifti_to_tiff (line 109) | def convert_3d_segmentation_nifti_to_tiff(nifti_file: str, output_filena...

FILE: unetr_pp/utilities/file_endings.py
  function remove_trailing_slash (line 19) | def remove_trailing_slash(filename: str):
  function maybe_add_0000_to_all_niigz (line 25) | def maybe_add_0000_to_all_niigz(folder):
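
`maybe_add_0000_to_all_niigz` follows the nnU-Net convention that every image file carries a 4-digit modality suffix (`_0000` for the first modality). A hedged sketch of that renaming logic (a standalone reimplementation of the idea, not the repository's exact code):

```python
import os


def maybe_add_0000_to_all_niigz(folder: str) -> None:
    """Rename every .nii.gz in `folder` that lacks a modality suffix.

    nnU-Net-style loaders expect e.g. case_0000.nii.gz; files that already
    end in an underscore plus four digits are left untouched.
    """
    for name in os.listdir(folder):
        if not name.endswith(".nii.gz"):
            continue
        stem = name[: -len(".nii.gz")]
        if len(stem) >= 5 and stem[-5] == "_" and stem[-4:].isdigit():
            continue  # already suffixed
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, stem + "_0000.nii.gz"))
```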

FILE: unetr_pp/utilities/folder_names.py
  function get_output_folder_name (line 20) | def get_output_folder_name(model: str, task: str = None, trainer: str = ...

FILE: unetr_pp/utilities/one_hot_encoding.py
  function to_one_hot (line 18) | def to_one_hot(seg, all_seg_labels=None):
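
`to_one_hot` expands an integer label map into one binary channel per label, which is how region-based losses consume segmentations. A minimal NumPy sketch of the idea (the `all_seg_labels` argument mirrors the signature above; this is an assumed reimplementation, not the repository's code):

```python
import numpy as np


def to_one_hot(seg: np.ndarray, all_seg_labels=None) -> np.ndarray:
    """Turn an integer segmentation into a (num_labels, *seg.shape) stack."""
    if all_seg_labels is None:
        all_seg_labels = np.unique(seg)
    out = np.zeros((len(all_seg_labels),) + seg.shape, dtype=seg.dtype)
    for i, lab in enumerate(all_seg_labels):
        out[i][seg == lab] = 1
    return out
```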

FILE: unetr_pp/utilities/overlay_plots.py
  function hex_to_rgb (line 41) | def hex_to_rgb(hex: str):
  function generate_overlay (line 46) | def generate_overlay(input_image: np.ndarray, segmentation: np.ndarray, ...
  function plot_overlay (line 89) | def plot_overlay(image_file: str, segmentation_file: str, output_file: s...
  function plot_overlay_preprocessed (line 108) | def plot_overlay_preprocessed(case_file: str, output_file: str, overlay_...
  function multiprocessing_plot_overlay (line 127) | def multiprocessing_plot_overlay(list_of_image_files, list_of_seg_files,...
  function multiprocessing_plot_overlay_preprocessed (line 138) | def multiprocessing_plot_overlay_preprocessed(list_of_case_files, list_o...
  function generate_overlays_for_task (line 150) | def generate_overlays_for_task(task_name_or_id, output_folder, num_proce...
  function entry_point_generate_overlay (line 191) | def entry_point_generate_overlay():
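
`hex_to_rgb` in `overlay_plots.py` maps a hex colour string to an RGB triple used when blending the segmentation over the image. A hedged sketch of such a conversion (normalised to 0–1 floats here; the repository version may return integers instead):

```python
def hex_to_rgb(hex_colour: str):
    """'#ff8000' (with or without the leading '#') -> (r, g, b) floats in [0, 1]."""
    h = hex_colour.lstrip("#")
    return tuple(int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
```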

FILE: unetr_pp/utilities/random_stuff.py
  class no_op (line 16) | class no_op(object):
    method __enter__ (line 17) | def __enter__(self):
    method __exit__ (line 20) | def __exit__(self, *args):
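
`no_op` is a do-nothing context manager, handy as a drop-in where a real context (e.g. an autocast scope) is optional. A sketch matching the two methods listed above:

```python
class no_op(object):
    """Context manager that does nothing; exceptions propagate normally."""

    def __enter__(self):
        return self

    def __exit__(self, *args):
        return False  # falsy: never swallow exceptions
```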

FILE: unetr_pp/utilities/recursive_delete_npz.py
  function recursive_delete_npz (line 21) | def recursive_delete_npz(current_directory: str):
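
`recursive_delete_npz` frees disk space by removing saved `.npz` softmax outputs from a results tree. A sketch of the idea using `os.walk` (the repository's implementation may recurse differently):

```python
import os


def recursive_delete_npz(current_directory: str) -> None:
    """Walk the tree under `current_directory` and delete every .npz file."""
    for root, _dirs, files in os.walk(current_directory):
        for f in files:
            if f.endswith(".npz"):
                os.remove(os.path.join(root, f))
```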

FILE: unetr_pp/utilities/recursive_rename_taskXX_to_taskXXX.py
  function recursive_rename (line 20) | def recursive_rename(folder):

FILE: unetr_pp/utilities/sitk_stuff.py
  function copy_geometry (line 19) | def copy_geometry(image: sitk.Image, ref: sitk.Image):

FILE: unetr_pp/utilities/task_name_id_conversion.py
  function convert_id_to_task_name (line 21) | def convert_id_to_task_name(task_id: int):
  function convert_task_name_to_id (line 64) | def convert_task_name_to_id(task_name: str):
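
Tasks follow the `TaskXXX_Name` scheme visible in the checkpoint paths above (e.g. `Task002_Synapse`), so the id side of the conversion is string parsing; the real `convert_id_to_task_name` additionally scans the raw-data folders to resolve an id to its full name. A hedged sketch of the parsing direction only (`make_task_name` is a hypothetical helper, not in the repository):

```python
def convert_task_name_to_id(task_name: str) -> int:
    """'Task002_Synapse' -> 2. Assumes the 'TaskXXX_...' naming scheme."""
    assert task_name.startswith("Task"), "expected 'TaskXXX_...' format"
    return int(task_name[4:7])


def make_task_name(task_id: int, name: str) -> str:
    """Inverse direction for a known dataset name (hypothetical helper)."""
    return "Task%03d_%s" % (task_id, name)
```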

FILE: unetr_pp/utilities/tensor_utilities.py
  function sum_tensor (line 20) | def sum_tensor(inp, axes, keepdim=False):
  function mean_tensor (line 31) | def mean_tensor(inp, axes, keepdim=False):
  function flip (line 42) | def flip(x, dim):
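
`sum_tensor` / `mean_tensor` reduce over several axes at once while optionally keeping the reduced dimensions, which older PyTorch versions did not support for tuples of axes. The same behaviour in NumPy terms (a sketch of the semantics; the repository code operates on torch tensors):

```python
import numpy as np


def sum_tensor(inp: np.ndarray, axes, keepdim=False) -> np.ndarray:
    """Sum over every axis in `axes`, optionally keeping them as size-1 dims."""
    # reduce the highest axes first so earlier axis indices stay valid
    for ax in sorted(set(axes), reverse=True):
        inp = inp.sum(axis=ax, keepdims=keepdim)
    return inp
```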

FILE: unetr_pp/utilities/to_torch.py
  function maybe_to_torch (line 18) | def maybe_to_torch(d):
  function to_cuda (line 26) | def to_cuda(data, non_blocking=True, gpu_id=0):

Condensed preview — 155 files, each showing path, character count, and a content snippet.
[
  {
    "path": "LICENSE",
    "chars": 11369,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "README.md",
    "chars": 12467,
    "preview": "# UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation\n![](https://i.imgur.com/waxVImv.png)\n[Abdel"
  },
  {
    "path": "evaluation_scripts/run_evaluation_acdc.sh",
    "chars": 400,
    "preview": "#!/bin/sh\n\nDATASET_PATH=../DATASET_Acdc\nCHECKPOINT_PATH=../unetr_pp/evaluation/unetr_pp_acdc_checkpoint\n\nexport PYTHONPA"
  },
  {
    "path": "evaluation_scripts/run_evaluation_lung.sh",
    "chars": 400,
    "preview": "#!/bin/sh\n\nDATASET_PATH=../DATASET_Lungs\nCHECKPOINT_PATH=../unetr_pp/evaluation/unetr_pp_lung_checkpoint\n\nexport PYTHONP"
  },
  {
    "path": "evaluation_scripts/run_evaluation_synapse.sh",
    "chars": 411,
    "preview": "#!/bin/sh\n\nDATASET_PATH=../DATASET_Synapse\nCHECKPOINT_PATH=../unetr_pp/evaluation/unetr_pp_synapse_checkpoint\n\nexport PY"
  },
  {
    "path": "evaluation_scripts/run_evaluation_tumor.sh",
    "chars": 825,
    "preview": "#!/bin/sh\n\nDATASET_PATH=../DATASET_Tumor\n\nexport PYTHONPATH=.././\nexport RESULTS_FOLDER=../unetr_pp/evaluation/unetr_pp_"
  },
  {
    "path": "requirements.txt",
    "chars": 330,
    "preview": "argparse==1.4.0\nnumpy==1.20.1\nbatchgenerators==0.21\nmatplotlib==3.5.1\ntyping==3.7.4.3\nsklearn==0.0\nscikit-learn==1.0.2\nt"
  },
  {
    "path": "training_scripts/run_training_acdc.sh",
    "chars": 326,
    "preview": "#!/bin/sh\n\nDATASET_PATH=../DATASET_Acdc\n\nexport PYTHONPATH=.././\nexport RESULTS_FOLDER=../output_acdc\nexport unetr_pp_pr"
  },
  {
    "path": "training_scripts/run_training_lung.sh",
    "chars": 327,
    "preview": "#!/bin/sh\n\nDATASET_PATH=../DATASET_Lungs\n\nexport PYTHONPATH=.././\nexport RESULTS_FOLDER=../output_lung\nexport unetr_pp_p"
  },
  {
    "path": "training_scripts/run_training_synapse.sh",
    "chars": 338,
    "preview": "#!/bin/sh\n\nDATASET_PATH=../DATASET_Synapse\n\nexport PYTHONPATH=.././\nexport RESULTS_FOLDER=../output_synapse\nexport unetr"
  },
  {
    "path": "training_scripts/run_training_tumor.sh",
    "chars": 330,
    "preview": "#!/bin/sh\n\nDATASET_PATH=../DATASET_Tumor\n\nexport PYTHONPATH=.././\nexport RESULTS_FOLDER=../output_tumor\nexport unetr_pp_"
  },
  {
    "path": "unetr_pp/__init__.py",
    "chars": 55,
    "preview": "from __future__ import absolute_import\nfrom . import *\n"
  },
  {
    "path": "unetr_pp/configuration.py",
    "chars": 261,
    "preview": "import os\n\ndefault_num_threads = 8 if 'nnFormer_def_n_proc' not in os.environ else int(os.environ['nnFormer_def_n_proc']"
  },
  {
    "path": "unetr_pp/evaluation/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/evaluation/add_dummy_task_with_mean_over_all_tasks.py",
    "chars": 3246,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/add_mean_dice_to_json.py",
    "chars": 2026,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/collect_results_files.py",
    "chars": 1975,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/evaluator.py",
    "chars": 18782,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/metrics.py",
    "chars": 13031,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/model_selection/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/evaluation/model_selection/collect_all_fold0_results_and_summarize_in_one_csv.py",
    "chars": 3487,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/model_selection/ensemble.py",
    "chars": 7423,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/model_selection/figure_out_what_to_submit.py",
    "chars": 12965,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/model_selection/rank_candidates.py",
    "chars": 13571,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/model_selection/rank_candidates_StructSeg.py",
    "chars": 7070,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/model_selection/rank_candidates_cascade.py",
    "chars": 6593,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/model_selection/summarize_results_in_one_json.py",
    "chars": 12145,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/model_selection/summarize_results_with_plans.py",
    "chars": 6235,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/region_based_evaluation.py",
    "chars": 3942,
    "preview": "from copy import deepcopy\nfrom multiprocessing.pool import Pool\n\nfrom batchgenerators.utilities.file_and_folder_operatio"
  },
  {
    "path": "unetr_pp/evaluation/surface_dice.py",
    "chars": 2686,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/evaluation/unetr_pp_acdc_checkpoint/unetr_pp/3d_fullres/Task001_ACDC/unetr_pp_trainer_acdc__unetr_pp_Plansv2.1/fold_0/.gitignore",
    "chars": 71,
    "preview": "# Ignore everything in this directory\n*\n# Except this file\n!.gitignore\n"
  },
  {
    "path": "unetr_pp/evaluation/unetr_pp_lung_checkpoint/unetr_pp/3d_fullres/Task006_Lung/unetr_pp_trainer_lung__unetr_pp_Plansv2.1/fold_0/.gitignore",
    "chars": 71,
    "preview": "# Ignore everything in this directory\n*\n# Except this file\n!.gitignore\n"
  },
  {
    "path": "unetr_pp/evaluation/unetr_pp_synapse_checkpoint/unetr_pp/3d_fullres/Task002_Synapse/unetr_pp_trainer_synapse__unetr_pp_Plansv2.1/fold_0/.gitignore",
    "chars": 71,
    "preview": "# Ignore everything in this directory\n*\n# Except this file\n!.gitignore\n"
  },
  {
    "path": "unetr_pp/evaluation/unetr_pp_tumor_checkpoint/unetr_pp/3d_fullres/Task003_tumor/unetr_pp_trainer_tumor__unetr_pp_Plansv2.1/fold_0/.gitignore",
    "chars": 71,
    "preview": "# Ignore everything in this directory\n*\n# Except this file\n!.gitignore\n"
  },
  {
    "path": "unetr_pp/experiment_planning/DatasetAnalyzer.py",
    "chars": 11061,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_11GB.py",
    "chars": 6732,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_16GB.py",
    "chars": 6721,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_32GB.py",
    "chars": 6730,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v21_3convperstage.py",
    "chars": 1944,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v22.py",
    "chars": 3159,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_baseline_3DUNet_v23.py",
    "chars": 1345,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/experiment_planner_residual_3DUNet_v21.py",
    "chars": 7416,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/normalization/experiment_planner_2DUNet_v21_RGB_scaleto_0_1.py",
    "chars": 1637,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/normalization/experiment_planner_3DUNet_CT2.py",
    "chars": 2024,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/normalization/experiment_planner_3DUNet_nonCT.py",
    "chars": 1773,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/patch_size/experiment_planner_3DUNet_isotropic_in_mm.py",
    "chars": 7027,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/patch_size/experiment_planner_3DUNet_isotropic_in_voxels.py",
    "chars": 6320,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/pooling_and_convs/experiment_planner_baseline_3DUNet_allConv3x3.py",
    "chars": 7707,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/pooling_and_convs/experiment_planner_baseline_3DUNet_poolBasedOnSpacing.py",
    "chars": 6778,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/target_spacing/experiment_planner_baseline_3DUNet_targetSpacingForAnisoAxis.py",
    "chars": 3632,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/target_spacing/experiment_planner_baseline_3DUNet_v21_customTargetSpacing_2x2x2.py",
    "chars": 1795,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/alternative_experiment_planning/target_spacing/experiment_planner_baseline_3DUNet_v21_noResampling.py",
    "chars": 12219,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/change_batch_size.py",
    "chars": 606,
    "preview": "from batchgenerators.utilities.file_and_folder_operations import *\nimport numpy as np\n\nif __name__ == '__main__':\n    in"
  },
  {
    "path": "unetr_pp/experiment_planning/common_utils.py",
    "chars": 10549,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/experiment_planner_baseline_2DUNet.py",
    "chars": 8792,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/experiment_planner_baseline_2DUNet_v21.py",
    "chars": 5796,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/experiment_planner_baseline_3DUNet.py",
    "chars": 26475,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/experiment_planner_baseline_3DUNet_v21.py",
    "chars": 10274,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/nnFormer_convert_decathlon_task.py",
    "chars": 4182,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/nnFormer_plan_and_preprocess.py",
    "chars": 7041,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/summarize_plans.py",
    "chars": 4193,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/experiment_planning/utils.py",
    "chars": 9667,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/inference/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/inference/predict.py",
    "chars": 42793,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/inference/predict_simple.py",
    "chars": 14127,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/inference/segmentation_export.py",
    "chars": 11976,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/inference_acdc.py",
    "chars": 4691,
    "preview": "import glob\r\nimport os\r\nimport SimpleITK as sitk\r\nimport numpy as np\r\nfrom medpy.metric import binary\r\nfrom sklearn.neig"
  },
  {
    "path": "unetr_pp/inference_synapse.py",
    "chars": 7516,
    "preview": "import glob\r\nimport os\r\nimport SimpleITK as sitk\r\nimport numpy as np\r\nimport argparse\r\nfrom medpy import metric\r\n\r\ndef r"
  },
  {
    "path": "unetr_pp/inference_tumor.py",
    "chars": 4732,
    "preview": "import glob\nimport os\nimport SimpleITK as sitk\nimport numpy as np\nimport argparse\nfrom medpy.metric import binary\n\ndef r"
  },
  {
    "path": "unetr_pp/network_architecture/README.md",
    "chars": 181,
    "preview": "You can change batch size, input data size\n```\nhttps://github.com/282857341/nnFormer/blob/6e36d76f9b7d0bea522e1cd05adf50"
  },
  {
    "path": "unetr_pp/network_architecture/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/network_architecture/acdc/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "unetr_pp/network_architecture/acdc/model_components.py",
    "chars": 6231,
    "preview": "from torch import nn\nfrom timm.models.layers import trunc_normal_\nfrom typing import Sequence, Tuple, Union\nfrom monai.n"
  },
  {
    "path": "unetr_pp/network_architecture/acdc/transformerblock.py",
    "chars": 5195,
    "preview": "import torch.nn as nn\nimport torch\nfrom unetr_pp.network_architecture.dynunet_block import UnetResBlock\n\n\nclass Transfor"
  },
  {
    "path": "unetr_pp/network_architecture/acdc/unetr_pp_acdc.py",
    "chars": 5018,
    "preview": "from torch import nn\nfrom typing import Tuple, Union\nfrom unetr_pp.network_architecture.neural_network import Segmentati"
  },
  {
    "path": "unetr_pp/network_architecture/dynunet_block.py",
    "chars": 10003,
    "preview": "from typing import Optional, Sequence, Tuple, Union\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\n\nfrom monai.n"
  },
  {
    "path": "unetr_pp/network_architecture/generic_UNet.py",
    "chars": 21050,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/network_architecture/initialization.py",
    "chars": 1673,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/network_architecture/layers.py",
    "chars": 2438,
    "preview": "import torch\nimport torch.nn.functional as F\nfrom torch import nn\nimport math\n\n\nclass LayerNorm(nn.Module):\n    def __in"
  },
  {
    "path": "unetr_pp/network_architecture/lung/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "unetr_pp/network_architecture/lung/model_components.py",
    "chars": 6390,
    "preview": "from torch import nn\nfrom timm.models.layers import trunc_normal_\nfrom typing import Sequence, Tuple, Union\nfrom monai.n"
  },
  {
    "path": "unetr_pp/network_architecture/lung/transformerblock.py",
    "chars": 5410,
    "preview": "import torch.nn as nn\nimport torch\nfrom unetr_pp.network_architecture.dynunet_block import UnetResBlock\n\n\nclass Transfor"
  },
  {
    "path": "unetr_pp/network_architecture/lung/unetr_pp_lung.py",
    "chars": 5207,
    "preview": "from torch import nn\nfrom typing import Tuple, Union\nfrom unetr_pp.network_architecture.neural_network import Segmentati"
  },
  {
    "path": "unetr_pp/network_architecture/neural_network.py",
    "chars": 43868,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/network_architecture/synapse/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "unetr_pp/network_architecture/synapse/model_components.py",
    "chars": 6087,
    "preview": "from torch import nn\nfrom timm.models.layers import trunc_normal_\nfrom typing import Sequence, Tuple, Union\nfrom monai.n"
  },
  {
    "path": "unetr_pp/network_architecture/synapse/transformerblock.py",
    "chars": 5033,
    "preview": "import torch.nn as nn\nimport torch\nfrom unetr_pp.network_architecture.dynunet_block import UnetResBlock\n\n\nclass Transfor"
  },
  {
    "path": "unetr_pp/network_architecture/synapse/unetr_pp_synapse.py",
    "chars": 6006,
    "preview": "from torch import nn\nfrom typing import Tuple, Union\nfrom unetr_pp.network_architecture.neural_network import Segmentati"
  },
  {
    "path": "unetr_pp/network_architecture/tumor/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "unetr_pp/network_architecture/tumor/model_components.py",
    "chars": 6231,
    "preview": "from torch import nn\nfrom timm.models.layers import trunc_normal_\nfrom typing import Sequence, Tuple, Union\nfrom monai.n"
  },
  {
    "path": "unetr_pp/network_architecture/tumor/transformerblock.py",
    "chars": 4994,
    "preview": "import torch.nn as nn\nimport torch\nfrom unetr_pp.network_architecture.dynunet_block import UnetResBlock\nimport math\n\n\ncl"
  },
  {
    "path": "unetr_pp/network_architecture/tumor/unetr_pp_tumor.py",
    "chars": 5104,
    "preview": "from torch import nn\nfrom typing import Tuple, Union\nfrom unetr_pp.network_architecture.neural_network import Segmentati"
  },
  {
    "path": "unetr_pp/paths.py",
    "chars": 2978,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/postprocessing/connected_components.py",
    "chars": 19128,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/postprocessing/consolidate_all_for_paper.py",
    "chars": 3172,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/postprocessing/consolidate_postprocessing.py",
    "chars": 4855,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/postprocessing/consolidate_postprocessing_simple.py",
    "chars": 2743,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/preprocessing/cropping.py",
    "chars": 8571,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/preprocessing/custom_preprocessors/preprocessor_scale_RGB_to_0_1.py",
    "chars": 3477,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/preprocessing/preprocessing.py",
    "chars": 40026,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/preprocessing/sanity_checks.py",
    "chars": 12234,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/run/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/run/default_configuration.py",
    "chars": 4775,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/run/run_training.py",
    "chars": 9001,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/training/cascade_stuff/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/training/cascade_stuff/predict_next_stage.py",
    "chars": 6260,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/data_augmentation/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/training/data_augmentation/custom_transforms.py",
    "chars": 4821,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/data_augmentation/data_augmentation_insaneDA.py",
    "chars": 10853,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/data_augmentation/data_augmentation_insaneDA2.py",
    "chars": 10977,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/data_augmentation/data_augmentation_moreDA.py",
    "chars": 12318,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/data_augmentation/data_augmentation_noDA.py",
    "chars": 4905,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/data_augmentation/default_data_augmentation.py",
    "chars": 12571,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/data_augmentation/downsampling.py",
    "chars": 4164,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/data_augmentation/pyramid_augmentations.py",
    "chars": 9007,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/dataloading/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/training/dataloading/dataset_loading.py",
    "chars": 33739,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/learning_rate/poly_lr.py",
    "chars": 807,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/loss_functions/TopK_loss.py",
    "chars": 1366,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/loss_functions/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/training/loss_functions/crossentropy.py",
    "chars": 438,
    "preview": "from torch import nn, Tensor\n\n\nclass RobustCrossEntropyLoss(nn.CrossEntropyLoss):\n    \"\"\"\n    this is just a compatibili"
  },
  {
    "path": "unetr_pp/training/loss_functions/deep_supervision.py",
    "chars": 1679,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/loss_functions/dice_loss.py",
    "chars": 14057,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/model_restore.py",
    "chars": 7325,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/Trainer_acdc.py",
    "chars": 39935,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/Trainer_lung.py",
    "chars": 39937,
    "preview": "\n#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n# "
  },
  {
    "path": "unetr_pp/training/network_training/Trainer_synapse.py",
    "chars": 40763,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/Trainer_tumor.py",
    "chars": 40549,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/network_trainer_acdc.py",
    "chars": 32298,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/network_trainer_lung.py",
    "chars": 32299,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/network_trainer_synapse.py",
    "chars": 32424,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/network_trainer_tumor.py",
    "chars": 32288,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/unetr_pp_trainer_acdc.py",
    "chars": 31140,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/unetr_pp_trainer_lung.py",
    "chars": 25530,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/unetr_pp_trainer_synapse.py",
    "chars": 24085,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/network_training/unetr_pp_trainer_tumor.py",
    "chars": 30653,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/training/optimizer/ranger.py",
    "chars": 6465,
    "preview": "############\n# https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer\n# This code was taken from the repo above and"
  },
  {
    "path": "unetr_pp/utilities/__init__.py",
    "chars": 54,
    "preview": "from __future__ import absolute_import\nfrom . import *"
  },
  {
    "path": "unetr_pp/utilities/distributed.py",
    "chars": 3172,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/file_conversions.py",
    "chars": 4533,
    "preview": "from typing import Tuple, List, Union\nfrom skimage import io\nimport SimpleITK as sitk\nimport numpy as np\nimport tifffile"
  },
  {
    "path": "unetr_pp/utilities/file_endings.py",
    "chars": 1130,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/folder_names.py",
    "chars": 1812,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/nd_softmax.py",
    "chars": 801,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/one_hot_encoding.py",
    "chars": 990,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/overlay_plots.py",
    "chars": 8440,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/random_stuff.py",
    "chars": 794,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/recursive_delete_npz.py",
    "chars": 1554,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/recursive_rename_taskXX_to_taskXXX.py",
    "chars": 1784,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/sitk_stuff.py",
    "chars": 908,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/task_name_id_conversion.py",
    "chars": 3544,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/tensor_utilities.py",
    "chars": 1624,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  },
  {
    "path": "unetr_pp/utilities/to_torch.py",
    "chars": 1175,
    "preview": "#    Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany\n#\n#  "
  }
]

// ... and 1 more file (download for full content)

About this extraction

This page contains the full source code of the Amshaker/unetr_plus_plus GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 155 files (1.2 MB), approximately 284.2k tokens, and a symbol index with 815 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
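Each entry in the manifest above has the shape `{path, chars, preview}`. As a minimal sketch, such a manifest can be loaded and queried with Python's standard `json` module; the sample below copies three entries from the listing (previews shortened here), so the totals apply only to this sample, not the full 155-file extraction:

```python
import json

# Three sample entries in the same {path, chars, preview} shape as the
# manifest above (previews abbreviated for the example).
manifest_json = """
[
  {"path": "unetr_pp/run/__init__.py", "chars": 54,
   "preview": "from __future__ import absolute_import"},
  {"path": "unetr_pp/training/loss_functions/dice_loss.py", "chars": 14057,
   "preview": "#    Copyright 2020 Division of Medical Image Computing ..."},
  {"path": "unetr_pp/utilities/to_torch.py", "chars": 1175,
   "preview": "#    Copyright 2020 Division of Medical Image Computing ..."}
]
"""

entries = json.loads(manifest_json)

# Total size in characters across the sampled files.
total_chars = sum(e["chars"] for e in entries)

# Files under the training/ subtree, largest first.
training = sorted(
    (e for e in entries if e["path"].startswith("unetr_pp/training/")),
    key=lambda e: e["chars"],
    reverse=True,
)

print(total_chars)                      # 15286
print([e["path"] for e in training])
```

The same pattern scales to the full manifest: filter on `path` prefixes to inspect one subpackage at a time rather than feeding all 284k tokens to a model at once.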

Extracted by GitExtract, a free GitHub-repo-to-text converter for AI, built by Nikandr Surkov.
