[
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "# SwiftFormer\n### **SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications**\n\n![](https://i.imgur.com/waxVImv.png)\n[Abdelrahman Shaker](https://scholar.google.com/citations?hl=en&user=eEz4Wu4AAAAJ)<sup>*1</sup>, [Muhammad Maaz](https://scholar.google.com/citations?user=vTy9Te8AAAAJ&hl=en&authuser=1&oi=sra)<sup>1</sup>, [Hanoona Rasheed](https://scholar.google.com/citations?user=yhDdEuEAAAAJ&hl=en&authuser=1&oi=sra)<sup>1</sup>, [Salman Khan](https://salman-h-khan.github.io/)<sup>1</sup>, [Ming-Hsuan Yang](https://scholar.google.com/citations?user=p9-ohHsAAAAJ&hl=en)<sup>2,3</sup> and [Fahad Shahbaz Khan](https://scholar.google.es/citations?user=zvaeYnUAAAAJ&hl=en)<sup>1,4</sup>\n\nMohamed Bin Zayed University of Artificial Intelligence<sup>1</sup>, University of California Merced<sup>2</sup>, Google Research<sup>3</sup>, Linkoping University<sup>4</sup>\n<!-- [![Website](https://img.shields.io/badge/Project-Website-87CEEB)](site_url) -->\n[![paper](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://openaccess.thecvf.com/content/ICCV2023/papers/Shaker_SwiftFormer_Efficient_Additive_Attention_for_Transformer-based_Real-time_Mobile_Vision_Applications_ICCV_2023_paper.pdf)\n<!-- [![video](https://img.shields.io/badge/Video-Presentation-F9D371)](youtube_link) -->\n<!-- [![slides](https://img.shields.io/badge/Presentation-Slides-B762C1)](presentation) -->\n\n## :rocket: News\n* **(Jul 14, 2023):** SwiftFormer has been accepted at ICCV 2023. :fire::fire:\n* **(Mar 27, 2023):** Classification training and evaluation codes along with pre-trained models are released.\n\n<hr />\n\n<p align=\"center\">\n  <img src=\"images/Swiftformer_performance.png\" width=60%> <br>\n  Comparison of our SwiftFormer Models with state-of-the-art on ImgeNet-1K. The latency is measured on iPhone 14 Neural Engine (iOS 16).\n</p>\n\n<p align=\"center\">\n  <img src=\"images/attentions_comparison.png\" width=99%> <br>\n</p>\n<p align=\"left\">\n  Comparison with different self-attention modules. (a) is a typical self-attention. (b) is the transpose self-attention, where the self-attention operation is applied across channel feature dimensions (d×d) instead of the spatial dimension (n×n). (c) is the separable self-attention of MobileViT-v2, it uses element-wise operations to compute the context vector from the interactions of Q and K matrices. Then, the context vector is multiplied by V matrix to produce the final output. (d) Our proposed efficient additive self-attention. Here, the query matrix is multiplied by learnable weights and pooled to produce global queries. Then, the matrix K is element-wise multiplied by the broadcasted global queries, resulting the global context representation.\n</p>\n\n<details>\n  <summary>\n  <font size=\"+1\">Abstract</font>\n  </summary>\nSelf-attention has become a defacto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. 
In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called \"SwiftFormer\" which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8~ms latency on iPhone 14, which is more accurate and 2x faster compared to MobileViT-v2.\n</details>\n\n<br>\n\n\n\n## Classification on ImageNet-1K\n\n### Models\n\n| Model | Top-1 accuracy | #params | GMACs | Latency | Ckpt | CoreML|\n|:---------------|:----:|:---:|:--:|:--:|:--:|:--:|\n| SwiftFormer-XS |   75.7%    |     3.5M    |   0.6G   |      0.7ms     |  [XS](https://drive.google.com/file/d/12RchxzyiJrtZS-2Bur9k4wcRQMItA43S/view?usp=sharing)    |   [XS](https://drive.google.com/file/d/1bkAP_BD6CdDqlbQsStZhLa0ST2NZTIvH/view?usp=sharing)    |\n| SwiftFormer-S  |   78.5%    |     6.1M    |   1.0G   |      0.8ms     |   [S](https://drive.google.com/file/d/1awpcXAaHH38WaHrOmUM8updxQazUZ3Nb/view?usp=sharing)   |   [S](https://drive.google.com/file/d/1qNAhecWIeQ1YJotWhbnLTCR5Uv1zBaf1/view?usp=sharing)    |\n| SwiftFormer-L1 |   80.9%   |    12.1M   |   1.6G   |      1.1ms     |   [L1](https://drive.google.com/file/d/1SDzauVmpR5uExkOv3ajxdwFnP-Buj9Uo/view?usp=sharing)   |   [L1](https://drive.google.com/file/d/1CowZE7-lbxz93uwXqefe-HxGOHUdvX_a/view?usp=sharing)    |\n| SwiftFormer-L3 |   83.0%   |    28.5M    |   4.0G   |      1.9ms     |  [L3](https://drive.google.com/file/d/1DAxMe6FlnZBBIpR-HYIDfFLWJzIgiF0Y/view?usp=sharing)    |   [L3](https://drive.google.com/file/d/1SO3bRWd9oWJemy-gpYUcwP-B4bJ-dsdg/view?usp=sharing)   |\n\n\n## Detection and Segmentation Qualitative Results\n\n<p align=\"center\">\n  <img src=\"images/detection_seg.png\" width=100%> <br>\n</p>\n<p align=\"center\">\n  <img src=\"images/semantic_seg.png\" width=100%> <br>\n</p>\n\n## Latency Measurement\n\nThe latency reported in SwiftFormer for iPhone 14 (iOS 16) uses the benchmark tool from [XCode 14](https://developer.apple.com/videos/play/wwdc2022/10027/).\n\n### SwiftFormer meets Android\n\nCommunity-driven results with [Samsung Galaxy S23 Ultra, with Qualcomm Snapdragon 8 Gen 2](https://www.qualcomm.com/snapdragon/device-finder/samsung-galaxy-s23-ultra):\n\n1. [Export](https://github.com/escorciav/SwiftFormer/blob/main-v/export.py) & profiler results of [`SwiftFormer_L1`](./models/swiftformer.py):\n\n    | QNN            | 2.16 | 2.17  | 2.18   |\n    | -------------- | -----| ----- | ------ |\n    | Latency (msec) | 2.63 | 2.26  | 2.43   |\n\n2. 
[Export](https://github.com/escorciav/SwiftFormer/blob/main-v/export_block.py) & profiler results of SwiftFormerEncoder block:\n\n    | QNN            | 2.16 | 2.17  | 2.18   |\n    | -------------- | -----| ----- | ------ |\n    | Latency (msec) | 2.17 | 1.69  | 1.7    |\n\n    Refer to the script above for details of the input & block parameters.\n\n❓ _Interested in reproducing the results above?_\n\nRefer to [Issue #14](https://github.com/Amshaker/SwiftFormer/issues/14) for details about [exporting & profiling](https://github.com/Amshaker/SwiftFormer/issues/14#issuecomment-1883351728).\n\n## ImageNet\n\n### Prerequisites\nA `conda` virtual environment is recommended.\n\n```shell\nconda create --name=swiftformer python=3.9\nconda activate swiftformer\n\npip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113\npip install timm\npip install coremltools==5.2.0\n```\n\n### Data Preparation\n\nDownload and extract ImageNet train and val images from http://image-net.org. The training and validation data are expected to be in the `train` folder and `val` folder respectively:\n```\n|-- /path/to/imagenet/\n    |-- train\n    |-- val\n```\n\n### Single-machine multi-GPU training\n\nWe provide a training script for all models in `dist_train.sh` using PyTorch distributed data parallel (DDP).\n\nTo train SwiftFormer models on an 8-GPU machine:\n\n```\nsh dist_train.sh /path/to/imagenet 8\n```\n\nNote: specify which model command you want to run in the script. To reproduce the results of the paper, use a 16-GPU machine with a batch size of 128 or an 8-GPU machine with a batch size of 256. AutoAugment, CutMix, and MixUp are disabled for SwiftFormer-XS, and CutMix and MixUp are disabled for SwiftFormer-S.\n\n### Multi-node training\n\nOn a Slurm-managed cluster, multi-node training can be launched as:\n\n```\nsbatch slurm_train.sh /path/to/imagenet SwiftFormer_XS\n```\n\nNote: specify Slurm-specific parameters in the `slurm_train.sh` script.\n\n### Testing\n\nWe provide an example test script `dist_test.sh` using PyTorch distributed data parallel (DDP).\nFor example, to test SwiftFormer-XS on an 8-GPU machine:\n\n```\nsh dist_test.sh /path/to/imagenet SwiftFormer_XS weights/SwiftFormer_XS_ckpt.pth 8\n```\n\n## Citation\nIf you use our work, please consider citing us:\n```BibTeX\n@InProceedings{Shaker_2023_ICCV,\n    author    = {Shaker, Abdelrahman and Maaz, Muhammad and Rasheed, Hanoona and Khan, Salman and Yang, Ming-Hsuan and Khan, Fahad Shahbaz},\n    title     = {SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications},\n    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},\n    year      = {2023},\n}\n```\n\n## Contact\nIf you have any questions, please create an issue on this repository or contact us at abdelrahman.youssief@mbzuai.ac.ae.\n\n\n## Acknowledgement\nOur code base is based on the [LeViT](https://github.com/facebookresearch/LeViT) and [EfficientFormer](https://github.com/snap-research/EfficientFormer) repositories. We thank the authors for their open-source implementation.\n\nI'd like to express my sincere appreciation to [Victor Escorcia](https://github.com/escorciav) for measuring & reporting the latency of SwiftFormer on Android (Samsung Galaxy S23 Ultra, with Qualcomm Snapdragon 8 Gen 2). 
Check [SwiftFormer Meets Android](https://github.com/escorciav/SwiftFormer) for more details!\n\n## Our Related Works\n\n- EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications, CADL'22, ECCV. [Paper](https://arxiv.org/abs/2206.10589) | [Code](https://github.com/mmaaz60/EdgeNeXt).\n"
  },
  {
    "path": "dist_test.sh",
    "content": "#!/usr/bin/env bash\n\nIMAGENET_PATH=$1\nMODEL=$2\nCHECKPOINT=$3\nnGPUs=$4\n\npython -m torch.distributed.launch --master_addr=\"127.0.0.1\" --master_port=1234 --nproc_per_node=$nGPUs --use_env main.py --model \"$MODEL\" \\\n--resume $CHECKPOINT --eval \\\n--data-path \"$IMAGENET_PATH\" \\\n--output_dir SwiftFormer_test\n"
  },
  {
    "path": "dist_train.sh",
    "content": "\n#!/usr/bin/env bash\n\nIMAGENET_PATH=$1\nnGPUs=$2\n\n## SwiftFormer-XS training\npython -m torch.distributed.launch --nproc_per_node=$nGPUs --use_env main.py --model SwiftFormer_XS --aa=\"\" --mixup 0 --cutmix 0 --data-path \"$IMAGENET_PATH\" \\\n--output_dir SwiftFormer_XS_results\n\n## SwiftFormer-S training\npython -m torch.distributed.launch --nproc_per_node=$nGPUs --use_env main.py --model SwiftFormer_S --mixup 0 --cutmix 0 --data-path \"$IMAGENET_PATH\" \\\n--output_dir SwiftFormer_S_results\n\n## SwiftFormer-L1 training\npython -m torch.distributed.launch --nproc_per_node=$nGPUs --use_env main.py --model SwiftFormer_L1 --data-path \"$IMAGENET_PATH\" \\\n--output_dir SwiftFormer_L1_results\n\n## SwiftFormer-L3 training\npython -m torch.distributed.launch --nproc_per_node=$nGPUs --use_env main.py --model SwiftFormer_L3 --data-path \"$IMAGENET_PATH\" \\\n--output_dir SwiftFormer_L3_results\n"
  },
  {
    "path": "main.py",
    "content": "import argparse\nimport datetime\nimport numpy as np\nimport time\nimport torch\nimport torch.backends.cudnn as cudnn\nimport json\nfrom pathlib import Path\n\nfrom timm.data import Mixup\nfrom timm.models import create_model\nfrom timm.loss import LabelSmoothingCrossEntropy, SoftTargetCrossEntropy\nfrom timm.scheduler import create_scheduler\nfrom timm.optim import create_optimizer\nfrom timm.utils import NativeScaler, get_state_dict, ModelEma\n\nfrom util import *\nfrom models import *\n\n\ndef get_args_parser():\n    parser = argparse.ArgumentParser(\n        'SwiftFormer training and evaluation script', add_help=False)\n    parser.add_argument('--batch-size', default=128, type=int)\n    parser.add_argument('--epochs', default=300, type=int)\n\n    # Model parameters\n    parser.add_argument('--model', default='SwiftFormer_XS', type=str, metavar='MODEL',\n                        help='Name of model to train')\n    parser.add_argument('--input-size', default=224,\n                        type=int, help='images input size')\n\n    parser.add_argument('--model-ema', action='store_true')\n    parser.add_argument(\n        '--no-model-ema', action='store_false', dest='model_ema')\n    parser.set_defaults(model_ema=True)\n    parser.add_argument('--model-ema-decay', type=float,\n                        default=0.99996, help='')\n    parser.add_argument('--model-ema-force-cpu',\n                        action='store_true', default=False, help='')\n\n    # Optimizer parameters\n    parser.add_argument('--opt', default='adamw', type=str, metavar='OPTIMIZER',\n                        help='Optimizer (default: \"adamw\"')\n    parser.add_argument('--opt-eps', default=1e-8, type=float, metavar='EPSILON',\n                        help='Optimizer Epsilon (default: 1e-8)')\n    parser.add_argument('--opt-betas', default=None, type=float, nargs='+', metavar='BETA',\n                        help='Optimizer Betas (default: None, use opt default)')\n    parser.add_argument('--clip-grad', type=float, default=0.01, metavar='NORM',\n                        help='Clip gradient norm (default: None, no clipping)')\n    parser.add_argument('--clip-mode', type=str, default='agc',\n                        help='Gradient clipping mode. 
One of (\"norm\", \"value\", \"agc\")')\n    parser.add_argument('--momentum', type=float, default=0.9, metavar='M',\n                        help='SGD momentum (default: 0.9)')\n    parser.add_argument('--weight-decay', type=float, default=0.025,\n                        help='weight decay (default: 0.025)')\n    # Learning rate schedule parameters\n    parser.add_argument('--sched', default='cosine', type=str, metavar='SCHEDULER',\n                        help='LR scheduler (default: \"cosine\"')\n    parser.add_argument('--lr', type=float, default=2e-3, metavar='LR',\n                        help='learning rate (default: 2e-3)')\n    parser.add_argument('--lr-noise', type=float, nargs='+', default=None, metavar='pct, pct',\n                        help='learning rate noise on/off epoch percentages')\n    parser.add_argument('--lr-noise-pct', type=float, default=0.67, metavar='PERCENT',\n                        help='learning rate noise limit percent (default: 0.67)')\n    parser.add_argument('--lr-noise-std', type=float, default=1.0, metavar='STDDEV',\n                        help='learning rate noise std-dev (default: 1.0)')\n    parser.add_argument('--warmup-lr', type=float, default=1e-6, metavar='LR',\n                        help='warmup learning rate (default: 1e-6)')\n    parser.add_argument('--min-lr', type=float, default=1e-5, metavar='LR',\n                        help='lower lr bound for cyclic schedulers that hit 0 (1e-5)')\n\n    parser.add_argument('--decay-epochs', type=float, default=30, metavar='N',\n                        help='epoch interval to decay LR')\n    parser.add_argument('--warmup-epochs', type=int, default=5, metavar='N',\n                        help='epochs to warmup LR, if scheduler supports')\n    parser.add_argument('--cooldown-epochs', type=int, default=10, metavar='N',\n                        help='epochs to cooldown LR at min_lr, after cyclic schedule ends')\n    parser.add_argument('--patience-epochs', type=int, default=10, metavar='N',\n                        help='patience epochs for Plateau LR scheduler (default: 10')\n    parser.add_argument('--decay-rate', '--dr', type=float, default=0.1, metavar='RATE',\n                        help='LR decay rate (default: 0.1)')\n\n    # Augmentation parameters\n    parser.add_argument('--color-jitter', type=float, default=0.4, metavar='PCT',\n                        help='Color jitter factor (default: 0.4)')\n    parser.add_argument('--aa', type=str, default='rand-m9-mstd0.5-inc1', metavar='NAME',\n                        help='Use AutoAugment policy. \"v0\" or \"original\". 
\" + \\\n                             \"(default: rand-m9-mstd0.5-inc1)'),\n    parser.add_argument('--smoothing', type=float, default=0.1,\n                        help='Label smoothing (default: 0.1)')\n    parser.add_argument('--train-interpolation', type=str, default='bicubic',\n                        help='Training interpolation (random, bilinear, bicubic default: \"bicubic\")')\n\n    parser.add_argument('--repeated-aug', action='store_true')\n    parser.add_argument('--no-repeated-aug',\n                        action='store_false', dest='repeated_aug')\n    parser.set_defaults(repeated_aug=True)\n\n    # * Random Erase params\n    parser.add_argument('--reprob', type=float, default=0.25, metavar='PCT',\n                        help='Random erase prob (default: 0.25)')\n    parser.add_argument('--remode', type=str, default='pixel',\n                        help='Random erase mode (default: \"pixel\")')\n    parser.add_argument('--recount', type=int, default=1,\n                        help='Random erase count (default: 1)')\n    parser.add_argument('--resplit', action='store_true', default=False,\n                        help='Do not random erase first (clean) augmentation split')\n\n    # * Mixup params\n    parser.add_argument('--mixup', type=float, default=0.8,\n                        help='mixup alpha, mixup enabled if > 0. (default: 0.8)')\n    parser.add_argument('--cutmix', type=float, default=1.0,\n                        help='cutmix alpha, cutmix enabled if > 0. (default: 1.0)')\n    parser.add_argument('--cutmix-minmax', type=float, nargs='+', default=None,\n                        help='cutmix min/max ratio, overrides alpha and enables cutmix if set (default: None)')\n    parser.add_argument('--mixup-prob', type=float, default=1.0,\n                        help='Probability of performing mixup or cutmix when either/both is enabled')\n    parser.add_argument('--mixup-switch-prob', type=float, default=0.5,\n                        help='Probability of switching to cutmix when both mixup and cutmix enabled')\n    parser.add_argument('--mixup-mode', type=str, default='batch',\n                        help='How to apply mixup/cutmix params. 
Per \"batch\", \"pair\", or \"elem\"')\n\n    # Distillation parameters\n    parser.add_argument('--teacher-model', default='regnety_160', type=str, metavar='MODEL',\n                        help='Name of teacher model to train (default: \"regnety_160\"')\n    parser.add_argument('--teacher-path', type=str,\n                        default='https://dl.fbaipublicfiles.com/deit/regnety_160-a5fe301d.pth')\n    parser.add_argument('--distillation-type', default='hard',\n                        choices=['none', 'soft', 'hard'], type=str, help=\"\")\n    parser.add_argument('--distillation-alpha',\n                        default=0.5, type=float, help=\"\")\n    parser.add_argument('--distillation-tau', default=1.0, type=float, help=\"\")\n\n    # * Finetuning params\n    parser.add_argument('--finetune', default='',\n                        help='finetune from checkpoint')\n\n    # Dataset parameters\n    parser.add_argument('--data-path', default='./imagenet', type=str,\n                        help='dataset path')\n    parser.add_argument('--data-set', default='IMNET', choices=['CIFAR', 'IMNET', 'INAT', 'INAT19'],\n                        type=str, help='Image Net dataset path')\n    parser.add_argument('--inat-category', default='name',\n                        choices=['kingdom', 'phylum', 'class', 'order',\n                                 'supercategory', 'family', 'genus', 'name'],\n                        type=str, help='semantic granularity')\n\n    parser.add_argument('--output_dir', default='',\n                        help='path where to save, empty for no saving')\n    parser.add_argument('--device', default='cuda',\n                        help='device to use for training / testing')\n    parser.add_argument('--seed', default=0, type=int)\n    parser.add_argument('--resume', default='', help='resume from checkpoint')\n    parser.add_argument('--start_epoch', default=0, type=int, metavar='N',\n                        help='start epoch')\n    parser.add_argument('--eval', action='store_true',\n                        help='Perform evaluation only')\n    parser.add_argument('--dist-eval', action='store_true',\n                        default=False, help='Enabling distributed evaluation')\n    parser.add_argument('--num_workers', default=10, type=int)\n    parser.add_argument('--pin-mem', action='store_true',\n                        help='Pin CPU memory in DataLoader for more efficient (sometimes) transfer to GPU.')\n    parser.add_argument('--no-pin-mem', action='store_false', dest='pin_mem',\n                        help='')\n    parser.set_defaults(pin_mem=True)\n\n    # distributed training parameters\n    parser.add_argument('--world_size', default=1, type=int,\n                        help='number of distributed processes')\n    parser.add_argument('--dist_url', default='env://',\n                        help='url used to set up distributed training')\n    return parser\n\n\ndef main(args):\n    utils.init_distributed_mode(args)\n\n    print(args)\n\n    if args.distillation_type != 'none' and args.finetune and not args.eval:\n        raise NotImplementedError(\n            \"Finetuning with distillation not yet supported\")\n\n    device = torch.device(args.device)\n\n    # Fix the seed for reproducibility\n    seed = args.seed + utils.get_rank()\n    torch.manual_seed(seed)\n    np.random.seed(seed)\n\n    cudnn.benchmark = True\n\n    dataset_train, args.nb_classes = build_dataset(is_train=True, args=args)\n    dataset_val, _ = build_dataset(is_train=False, args=args)\n\n    
num_tasks = utils.get_world_size()\n    global_rank = utils.get_rank()\n    if args.repeated_aug:\n        sampler_train = RASampler(\n            dataset_train, num_replicas=num_tasks, rank=global_rank, shuffle=True\n        )\n    else:\n        sampler_train = torch.utils.data.DistributedSampler(\n            dataset_train, num_replicas=num_tasks, rank=global_rank, shuffle=True\n        )\n    if args.dist_eval:\n        if len(dataset_val) % num_tasks != 0:\n            print('Warning: Enabling distributed evaluation with an eval dataset not divisible by process number. '\n                  'This will slightly alter validation results as extra duplicate entries are added to achieve '\n                  'equal num of samples per-process.')\n        sampler_val = torch.utils.data.DistributedSampler(\n            dataset_val, num_replicas=num_tasks, rank=global_rank, shuffle=False)\n    else:\n        sampler_val = torch.utils.data.SequentialSampler(dataset_val)\n\n    data_loader_train = torch.utils.data.DataLoader(\n        dataset_train, sampler=sampler_train,\n        batch_size=args.batch_size,\n        num_workers=args.num_workers,\n        pin_memory=args.pin_mem,\n        drop_last=True,\n    )\n\n    data_loader_val = torch.utils.data.DataLoader(\n        dataset_val, sampler=sampler_val,\n        batch_size=int(1.5 * args.batch_size),\n        num_workers=args.num_workers,\n        pin_memory=args.pin_mem,\n        drop_last=False\n    )\n\n    mixup_fn = None\n    mixup_active = args.mixup > 0 or args.cutmix > 0. or args.cutmix_minmax is not None\n    if mixup_active:\n        mixup_fn = Mixup(\n            mixup_alpha=args.mixup, cutmix_alpha=args.cutmix, cutmix_minmax=args.cutmix_minmax,\n            prob=args.mixup_prob, switch_prob=args.mixup_switch_prob, mode=args.mixup_mode,\n            label_smoothing=args.smoothing, num_classes=args.nb_classes)\n\n    print(f\"Creating model: {args.model}\")\n    model = create_model(\n        args.model,\n        num_classes=args.nb_classes,\n        distillation=(args.distillation_type != 'none'),\n        pretrained=args.eval,\n        fuse=args.eval,\n    )\n\n    if args.finetune:\n        if args.finetune.startswith('https'):\n            checkpoint = torch.hub.load_state_dict_from_url(\n                args.finetune, map_location='cpu', check_hash=True)\n        else:\n            checkpoint = torch.load(args.finetune, map_location='cpu')\n\n        checkpoint_model = checkpoint['model']\n        state_dict = model.state_dict()\n        for k in ['head.weight', 'head.bias',\n                  'head_dist.weight', 'head_dist.bias']:\n            if k in checkpoint_model and checkpoint_model[k].shape != state_dict[k].shape:\n                print(f\"Removing key {k} from pretrained checkpoint\")\n                del checkpoint_model[k]\n\n        model.load_state_dict(checkpoint_model, strict=False)\n\n    model.to(device)\n\n    model_ema = None\n    if args.model_ema:\n        # Important to create EMA model after cuda(), DP wrapper, and AMP but\n        # before SyncBN and DDP wrapper\n        model_ema = ModelEma(\n            model,\n            decay=args.model_ema_decay,\n            device='cpu' if args.model_ema_force_cpu else '',\n            resume='')\n\n    model_without_ddp = model\n    if args.distributed:\n        model = torch.nn.parallel.DistributedDataParallel(\n            model, device_ids=[args.gpu])\n        model_without_ddp = model.module\n    n_parameters = sum(p.numel()\n                       for p in 
model.parameters() if p.requires_grad)\n    print('number of params:', n_parameters)\n\n    # better not to scale up lr for AdamW optimizer\n    # linear_scaled_lr = args.lr * args.batch_size * utils.get_world_size() / 512.0\n    # args.lr = linear_scaled_lr\n\n    optimizer = create_optimizer(args, model_without_ddp)\n    loss_scaler = NativeScaler()\n\n    lr_scheduler, _ = create_scheduler(args, optimizer)\n\n    if args.mixup > 0.:\n        # smoothing is handled with mixup label transform\n        criterion = SoftTargetCrossEntropy()\n    elif args.smoothing:\n        criterion = LabelSmoothingCrossEntropy(smoothing=args.smoothing)\n    else:\n        criterion = torch.nn.CrossEntropyLoss()\n\n    teacher_model = None\n    if args.distillation_type != 'none':\n        assert args.teacher_path, 'need to specify teacher-path when using distillation'\n        print(f\"Creating teacher model: {args.teacher_model}\")\n        teacher_model = create_model(\n            args.teacher_model,\n            pretrained=False,\n            num_classes=args.nb_classes,\n            global_pool='avg',\n        )\n        if args.teacher_path.startswith('https'):\n            checkpoint = torch.hub.load_state_dict_from_url(\n                args.teacher_path, map_location='cpu', check_hash=True)\n        else:\n            checkpoint = torch.load(args.teacher_path, map_location='cpu')\n        teacher_model.load_state_dict(checkpoint['model'])\n        teacher_model.to(device)\n        teacher_model.eval()\n\n    # Wrap the criterion in our custom DistillationLoss, which\n    # just dispatches to the original criterion if args.distillation_type is\n    # 'none'\n    criterion = DistillationLoss(\n        criterion, teacher_model, args.distillation_type, args.distillation_alpha, args.distillation_tau\n    )\n\n    output_dir = Path(args.output_dir)\n    if args.resume:\n        if args.resume.startswith('https'):\n            checkpoint = torch.hub.load_state_dict_from_url(\n                args.resume, map_location='cpu', check_hash=True)\n        else:\n            checkpoint = torch.load(args.resume, map_location='cpu')\n        model_without_ddp.load_state_dict(checkpoint['model'])\n        if not args.eval and 'optimizer' in checkpoint and 'lr_scheduler' in checkpoint and 'epoch' in checkpoint:\n            optimizer.load_state_dict(checkpoint['optimizer'])\n            lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])\n            args.start_epoch = checkpoint['epoch'] + 1\n            if args.model_ema:\n                utils._load_checkpoint_for_ema(\n                    model_ema, checkpoint['model_ema'])\n            if 'scaler' in checkpoint:\n                loss_scaler.load_state_dict(checkpoint['scaler'])\n    if args.eval:\n        test_stats = evaluate(data_loader_val, model, device)\n        print(\n            f\"Accuracy of the network on the {len(dataset_val)} test images: {test_stats['acc1']:.1f}%\")\n        return\n\n    print(f\"Start training for {args.epochs} epochs\")\n    start_time = time.time()\n    max_accuracy = 0.0\n    for epoch in range(args.start_epoch, args.epochs):\n        if args.distributed:\n            data_loader_train.sampler.set_epoch(epoch)\n\n        train_stats = train_one_epoch(\n            model, criterion, data_loader_train,\n            optimizer, device, epoch, loss_scaler,\n            args.clip_grad, args.clip_mode, model_ema, mixup_fn,\n            set_training_mode=args.finetune == ''  # keep in eval mode during finetuning\n        )\n\n 
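       # Per-epoch bookkeeping: advance the LR schedule, save a checkpoint, and\n        # evaluate on the validation set every 20 epochs.\n 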
       lr_scheduler.step(epoch)\n        if args.output_dir:\n            checkpoint_paths = [output_dir / 'checkpoint.pth']\n            for checkpoint_path in checkpoint_paths:\n                utils.save_on_master({\n                    'model': model_without_ddp.state_dict(),\n                    'optimizer': optimizer.state_dict(),\n                    'lr_scheduler': lr_scheduler.state_dict(),\n                    'epoch': epoch,\n                    'model_ema': get_state_dict(model_ema),\n                    'scaler': loss_scaler.state_dict(),\n                    'args': args,\n                }, checkpoint_path)\n\n        if epoch % 20 == 19:\n            test_stats = evaluate(data_loader_val, model, device)\n            print(\n                f\"Accuracy of the network on the {len(dataset_val)} test images: {test_stats['acc1']:.1f}%\")\n            max_accuracy = max(max_accuracy, test_stats[\"acc1\"])\n            print(f'Max accuracy: {max_accuracy:.2f}%')\n            log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},\n                         **{f'test_{k}': v for k, v in test_stats.items()},\n                         'epoch': epoch,\n                         'n_parameters': n_parameters}\n        else:\n            log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},\n                         'epoch': epoch,\n                         'n_parameters': n_parameters}\n\n        if args.output_dir and utils.is_main_process():\n            with (output_dir / \"log.txt\").open(\"a\") as f:\n                f.write(json.dumps(log_stats) + \"\\n\")\n\n    total_time = time.time() - start_time\n    total_time_str = str(datetime.timedelta(seconds=int(total_time)))\n    print('Training time {}'.format(total_time_str))\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser(\n        'SwiftFormer training and evaluation script', parents=[get_args_parser()])\n    args = parser.parse_args()\n    if args.output_dir:\n        Path(args.output_dir).mkdir(parents=True, exist_ok=True)\n    main(args)\n"
  },
  {
    "path": "models/__init__.py",
    "content": "from .swiftformer import SwiftFormer_XS, SwiftFormer_S, SwiftFormer_L1, SwiftFormer_L3\n"
  },
  {
    "path": "models/swiftformer.py",
    "content": "\"\"\"\nSwiftFormer\n\"\"\"\nimport os\nimport copy\nimport torch\nimport torch.nn as nn\nfrom timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD\nfrom timm.models.layers import DropPath, trunc_normal_\nfrom timm.models.registry import register_model\nfrom timm.models.layers.helpers import to_2tuple\nimport einops\n\nSwiftFormer_width = {\n    'XS': [48, 56, 112, 220],\n    'S': [48, 64, 168, 224],\n    'l1': [48, 96, 192, 384],\n    'l3': [64, 128, 320, 512],\n}\n\nSwiftFormer_depth = {\n    'XS': [3, 3, 6, 4],\n    'S': [3, 3, 9, 6],\n    'l1': [4, 3, 10, 5],\n    'l3': [4, 4, 12, 6],\n}\n\ndef stem(in_chs, out_chs):\n    \"\"\"\n    Stem Layer that is implemented by two layers of conv.\n    Output: sequence of layers with final shape of [B, C, H/4, W/4]\n    \"\"\"\n    return nn.Sequential(\n        nn.Conv2d(in_chs, out_chs // 2, kernel_size=3, stride=2, padding=1),\n        nn.BatchNorm2d(out_chs // 2),\n        nn.ReLU(),\n        nn.Conv2d(out_chs // 2, out_chs, kernel_size=3, stride=2, padding=1),\n        nn.BatchNorm2d(out_chs),\n        nn.ReLU(), )\n\n\nclass Embedding(nn.Module):\n    \"\"\"\n    Patch Embedding that is implemented by a layer of conv.\n    Input: tensor in shape [B, C, H, W]\n    Output: tensor in shape [B, C, H/stride, W/stride]\n    \"\"\"\n\n    def __init__(self, patch_size=16, stride=16, padding=0,\n                 in_chans=3, embed_dim=768, norm_layer=nn.BatchNorm2d):\n        super().__init__()\n        patch_size = to_2tuple(patch_size)\n        stride = to_2tuple(stride)\n        padding = to_2tuple(padding)\n        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size,\n                              stride=stride, padding=padding)\n        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()\n\n    def forward(self, x):\n        x = self.proj(x)\n        x = self.norm(x)\n        return x\n\n\nclass ConvEncoder(nn.Module):\n    \"\"\"\n    Implementation of ConvEncoder with 3*3 and 1*1 convolutions.\n    Input: tensor with shape [B, C, H, W]\n    Output: tensor with shape [B, C, H, W]\n    \"\"\"\n\n    def __init__(self, dim, hidden_dim=64, kernel_size=3, drop_path=0., use_layer_scale=True):\n        super().__init__()\n        self.dwconv = nn.Conv2d(dim, dim, kernel_size=kernel_size, padding=kernel_size // 2, groups=dim)\n        self.norm = nn.BatchNorm2d(dim)\n        self.pwconv1 = nn.Conv2d(dim, hidden_dim, kernel_size=1)\n        self.act = nn.GELU()\n        self.pwconv2 = nn.Conv2d(hidden_dim, dim, kernel_size=1)\n        self.drop_path = DropPath(drop_path) if drop_path > 0. 
\\\n            else nn.Identity()\n        self.use_layer_scale = use_layer_scale\n        if use_layer_scale:\n            self.layer_scale = nn.Parameter(torch.ones(dim).unsqueeze(-1).unsqueeze(-1), requires_grad=True)\n        self.apply(self._init_weights)\n\n    def _init_weights(self, m):\n        if isinstance(m, nn.Conv2d):\n            trunc_normal_(m.weight, std=.02)\n            if m.bias is not None:\n                nn.init.constant_(m.bias, 0)\n\n    def forward(self, x):\n        input = x\n        x = self.dwconv(x)\n        x = self.norm(x)\n        x = self.pwconv1(x)\n        x = self.act(x)\n        x = self.pwconv2(x)\n        if self.use_layer_scale:\n            x = input + self.drop_path(self.layer_scale * x)\n        else:\n            x = input + self.drop_path(x)\n        return x\n\n\nclass Mlp(nn.Module):\n    \"\"\"\n    Implementation of MLP layer with 1*1 convolutions.\n    Input: tensor with shape [B, C, H, W]\n    Output: tensor with shape [B, C, H, W]\n    \"\"\"\n\n    def __init__(self, in_features, hidden_features=None,\n                 out_features=None, act_layer=nn.GELU, drop=0.):\n        super().__init__()\n        out_features = out_features or in_features\n        hidden_features = hidden_features or in_features\n        self.norm1 = nn.BatchNorm2d(in_features)\n        self.fc1 = nn.Conv2d(in_features, hidden_features, 1)\n        self.act = act_layer()\n        self.fc2 = nn.Conv2d(hidden_features, out_features, 1)\n        self.drop = nn.Dropout(drop)\n        self.apply(self._init_weights)\n\n    def _init_weights(self, m):\n        if isinstance(m, nn.Conv2d):\n            trunc_normal_(m.weight, std=.02)\n            if m.bias is not None:\n                nn.init.constant_(m.bias, 0)\n\n    def forward(self, x):\n        x = self.norm1(x)\n        x = self.fc1(x)\n        x = self.act(x)\n        x = self.drop(x)\n        x = self.fc2(x)\n        x = self.drop(x)\n        return x\n\n\nclass EfficientAdditiveAttnetion(nn.Module):\n    \"\"\"\n    Efficient Additive Attention module for SwiftFormer.\n    Input: tensor in shape [B, N, D]\n    Output: tensor in shape [B, N, D]\n    \"\"\"\n\n    def __init__(self, in_dims=512, token_dim=256, num_heads=2):\n        super().__init__()\n\n        self.to_query = nn.Linear(in_dims, token_dim * num_heads)\n        self.to_key = nn.Linear(in_dims, token_dim * num_heads)\n\n        self.w_g = nn.Parameter(torch.randn(token_dim * num_heads, 1))\n        self.scale_factor = token_dim ** -0.5\n        self.Proj = nn.Linear(token_dim * num_heads, token_dim * num_heads)\n        self.final = nn.Linear(token_dim * num_heads, token_dim)\n\n    def forward(self, x):\n        query = self.to_query(x)\n        key = self.to_key(x)\n\n        query = torch.nn.functional.normalize(query, dim=-1) #BxNxD\n        key = torch.nn.functional.normalize(key, dim=-1) #BxNxD\n\n        query_weight = query @ self.w_g # BxNx1 (BxNxD @ Dx1)\n        A = query_weight * self.scale_factor # BxNx1\n\n        A = torch.nn.functional.normalize(A, dim=1) # BxNx1\n\n        G = torch.sum(A * query, dim=1) # BxD\n\n        G = einops.repeat(\n            G, \"b d -> b repeat d\", repeat=key.shape[1]\n        ) # BxNxD\n\n        out = self.Proj(G * key) + query #BxNxD\n\n        out = self.final(out) # BxNxD\n\n        return out\n\n\nclass SwiftFormerLocalRepresentation(nn.Module):\n    \"\"\"\n    Local Representation module for SwiftFormer that is implemented by 3*3 depth-wise and point-wise convolutions.\n    Input: tensor 
in shape [B, C, H, W]\n    Output: tensor in shape [B, C, H, W]\n    \"\"\"\n\n    def __init__(self, dim, kernel_size=3, drop_path=0., use_layer_scale=True):\n        super().__init__()\n        self.dwconv = nn.Conv2d(dim, dim, kernel_size=kernel_size, padding=kernel_size // 2, groups=dim)\n        self.norm = nn.BatchNorm2d(dim)\n        self.pwconv1 = nn.Conv2d(dim, dim, kernel_size=1)\n        self.act = nn.GELU()\n        self.pwconv2 = nn.Conv2d(dim, dim, kernel_size=1)\n        self.drop_path = DropPath(drop_path) if drop_path > 0. \\\n            else nn.Identity()\n        self.use_layer_scale = use_layer_scale\n        if use_layer_scale:\n            self.layer_scale = nn.Parameter(torch.ones(dim).unsqueeze(-1).unsqueeze(-1), requires_grad=True)\n        self.apply(self._init_weights)\n\n    def _init_weights(self, m):\n        if isinstance(m, nn.Conv2d):\n            trunc_normal_(m.weight, std=.02)\n            if m.bias is not None:\n                nn.init.constant_(m.bias, 0)\n\n    def forward(self, x):\n        input = x\n        x = self.dwconv(x)\n        x = self.norm(x)\n        x = self.pwconv1(x)\n        x = self.act(x)\n        x = self.pwconv2(x)\n        if self.use_layer_scale:\n            x = input + self.drop_path(self.layer_scale * x)\n        else:\n            x = input + self.drop_path(x)\n        return x\n\n\nclass SwiftFormerEncoder(nn.Module):\n    \"\"\"\n    SwiftFormer Encoder Block for SwiftFormer. It consists of (1) Local representation module, (2) EfficientAdditiveAttention, and (3) MLP block.\n    Input: tensor in shape [B, C, H, W]\n    Output: tensor in shape [B, C, H, W]\n    \"\"\"\n\n    def __init__(self, dim, mlp_ratio=4.,\n                 act_layer=nn.GELU,\n                 drop=0., drop_path=0.,\n                 use_layer_scale=True, layer_scale_init_value=1e-5):\n\n        super().__init__()\n\n        self.local_representation = SwiftFormerLocalRepresentation(dim=dim, kernel_size=3, drop_path=0.,\n                                                                   use_layer_scale=True)\n        self.attn = EfficientAdditiveAttnetion(in_dims=dim, token_dim=dim, num_heads=1)\n        self.linear = Mlp(in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, drop=drop)\n        self.drop_path = DropPath(drop_path) if drop_path > 0. 
\\\n            else nn.Identity()\n        self.use_layer_scale = use_layer_scale\n        if use_layer_scale:\n            self.layer_scale_1 = nn.Parameter(\n                layer_scale_init_value * torch.ones(dim).unsqueeze(-1).unsqueeze(-1), requires_grad=True)\n            self.layer_scale_2 = nn.Parameter(\n                layer_scale_init_value * torch.ones(dim).unsqueeze(-1).unsqueeze(-1), requires_grad=True)\n\n    def forward(self, x):\n        x = self.local_representation(x)\n        B, C, H, W = x.shape\n        if self.use_layer_scale:\n            x = x + self.drop_path(\n                self.layer_scale_1 * self.attn(x.permute(0, 2, 3, 1).reshape(B, H * W, C)).reshape(B, H, W, C).permute(\n                    0, 3, 1, 2))\n            x = x + self.drop_path(self.layer_scale_2 * self.linear(x))\n\n        else:\n            x = x + self.drop_path(\n                self.attn(x.permute(0, 2, 3, 1).reshape(B, H * W, C)).reshape(B, H, W, C).permute(0, 3, 1, 2))\n            x = x + self.drop_path(self.linear(x))\n        return x\n\n\ndef Stage(dim, index, layers, mlp_ratio=4.,\n          act_layer=nn.GELU,\n          drop_rate=.0, drop_path_rate=0.,\n          use_layer_scale=True, layer_scale_init_value=1e-5, vit_num=1):\n    \"\"\"\n    Implementation of each SwiftFormer stages. Here, SwiftFormerEncoder used as the last block in all stages, while ConvEncoder used in the rest of the blocks.\n    Input: tensor in shape [B, C, H, W]\n    Output: tensor in shape [B, C, H, W]\n    \"\"\"\n    blocks = []\n\n    for block_idx in range(layers[index]):\n        block_dpr = drop_path_rate * (block_idx + sum(layers[:index])) / (sum(layers) - 1)\n\n        if layers[index] - block_idx <= vit_num:\n            blocks.append(SwiftFormerEncoder(\n                dim, mlp_ratio=mlp_ratio,\n                act_layer=act_layer, drop_path=block_dpr,\n                use_layer_scale=use_layer_scale,\n                layer_scale_init_value=layer_scale_init_value))\n\n        else:\n            blocks.append(ConvEncoder(dim=dim, hidden_dim=int(mlp_ratio * dim), kernel_size=3))\n\n    blocks = nn.Sequential(*blocks)\n    return blocks\n\n\nclass SwiftFormer(nn.Module):\n\n    def __init__(self, layers, embed_dims=None,\n                 mlp_ratios=4, downsamples=None,\n                 act_layer=nn.GELU,\n                 num_classes=1000,\n                 down_patch_size=3, down_stride=2, down_pad=1,\n                 drop_rate=0., drop_path_rate=0.,\n                 use_layer_scale=True, layer_scale_init_value=1e-5,\n                 fork_feat=False,\n                 init_cfg=None,\n                 pretrained=None,\n                 vit_num=1,\n                 distillation=True,\n                 **kwargs):\n        super().__init__()\n\n        if not fork_feat:\n            self.num_classes = num_classes\n        self.fork_feat = fork_feat\n\n        self.patch_embed = stem(3, embed_dims[0])\n\n        network = []\n        for i in range(len(layers)):\n            stage = Stage(embed_dims[i], i, layers, mlp_ratio=mlp_ratios,\n                          act_layer=act_layer,\n                          drop_rate=drop_rate,\n                          drop_path_rate=drop_path_rate,\n                          use_layer_scale=use_layer_scale,\n                          layer_scale_init_value=layer_scale_init_value,\n                          vit_num=vit_num)\n            network.append(stage)\n            if i >= len(layers) - 1:\n                break\n            if downsamples[i] or 
embed_dims[i] != embed_dims[i + 1]:\n                # downsampling between two stages\n                network.append(\n                    Embedding(\n                        patch_size=down_patch_size, stride=down_stride,\n                        padding=down_pad,\n                        in_chans=embed_dims[i], embed_dim=embed_dims[i + 1]\n                    )\n                )\n\n        self.network = nn.ModuleList(network)\n\n        if self.fork_feat:\n            # add a norm layer for each output\n            self.out_indices = [0, 2, 4, 6]\n            for i_emb, i_layer in enumerate(self.out_indices):\n                if i_emb == 0 and os.environ.get('FORK_LAST3', None):\n                    layer = nn.Identity()\n                else:\n                    layer = nn.BatchNorm2d(embed_dims[i_emb])\n                layer_name = f'norm{i_layer}'\n                self.add_module(layer_name, layer)\n        else:\n            # Classifier head\n            self.norm = nn.BatchNorm2d(embed_dims[-1])\n            self.head = nn.Linear(\n                embed_dims[-1], num_classes) if num_classes > 0 \\\n                else nn.Identity()\n            self.dist = distillation\n            if self.dist:\n                self.dist_head = nn.Linear(\n                    embed_dims[-1], num_classes) if num_classes > 0 \\\n                    else nn.Identity()\n\n        # self.apply(self.cls_init_weights)\n        self.apply(self._init_weights)\n\n        self.init_cfg = copy.deepcopy(init_cfg)\n        # load pre-trained model\n        if self.fork_feat and (\n                self.init_cfg is not None or pretrained is not None):\n            self.init_weights()\n\n    # init for mmdetection or mmsegmentation by loading\n    # imagenet pre-trained weights\n    def init_weights(self, pretrained=None):\n        logger = get_root_logger()\n        if self.init_cfg is None and pretrained is None:\n            logger.warn(f'No pre-trained weights for '\n                        f'{self.__class__.__name__}, '\n                        f'training start from scratch')\n            pass\n        else:\n            assert 'checkpoint' in self.init_cfg, f'Only support ' \\\n                                                  f'specify `Pretrained` in ' \\\n                                                  f'`init_cfg` in ' \\\n                                                  f'{self.__class__.__name__} '\n            if self.init_cfg is not None:\n                ckpt_path = self.init_cfg['checkpoint']\n            elif pretrained is not None:\n                ckpt_path = pretrained\n\n            ckpt = _load_checkpoint(\n                ckpt_path, logger=logger, map_location='cpu')\n            if 'state_dict' in ckpt:\n                _state_dict = ckpt['state_dict']\n            elif 'model' in ckpt:\n                _state_dict = ckpt['model']\n            else:\n                _state_dict = ckpt\n\n            state_dict = _state_dict\n            missing_keys, unexpected_keys = \\\n                self.load_state_dict(state_dict, False)\n\n    def _init_weights(self, m):\n        if isinstance(m, (nn.Conv2d, nn.Linear)):\n            trunc_normal_(m.weight, std=.02)\n            if m.bias is not None:\n                nn.init.constant_(m.bias, 0)\n        elif isinstance(m, (nn.LayerNorm)):\n            nn.init.constant_(m.bias, 0)\n            nn.init.constant_(m.weight, 1.0)\n\n    def forward_tokens(self, x):\n        outs = []\n        for idx, block in enumerate(self.network):\n         
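   # self.network interleaves SwiftFormer stages with patch-embedding (downsampling) layers.\n         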
   x = block(x)\n            if self.fork_feat and idx in self.out_indices:\n                norm_layer = getattr(self, f'norm{idx}')\n                x_out = norm_layer(x)\n                outs.append(x_out)\n        if self.fork_feat:\n            return outs\n        return x\n\n    def forward(self, x):\n        x = self.patch_embed(x)\n        x = self.forward_tokens(x)\n        if self.fork_feat:\n            # Output features of four stages for dense prediction\n            return x\n\n        x = self.norm(x)\n        if self.dist:\n            cls_out = self.head(x.flatten(2).mean(-1)), self.dist_head(x.flatten(2).mean(-1))\n            if not self.training:\n                cls_out = (cls_out[0] + cls_out[1]) / 2\n        else:\n            cls_out = self.head(x.flatten(2).mean(-1))\n        # For image classification\n        return cls_out\n\n\ndef _cfg(url='', **kwargs):\n    return {\n        'url': url,\n        'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,\n        'crop_pct': .95, 'interpolation': 'bicubic',\n        'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,\n        'classifier': 'head',\n        **kwargs\n    }\n\n\n@register_model\ndef SwiftFormer_XS(pretrained=False, **kwargs):\n    model = SwiftFormer(\n        layers=SwiftFormer_depth['XS'],\n        embed_dims=SwiftFormer_width['XS'],\n        downsamples=[True, True, True, True],\n        vit_num=1,\n        **kwargs)\n    model.default_cfg = _cfg(crop_pct=0.9)\n    return model\n\n\n@register_model\ndef SwiftFormer_S(pretrained=False, **kwargs):\n    model = SwiftFormer(\n        layers=SwiftFormer_depth['S'],\n        embed_dims=SwiftFormer_width['S'],\n        downsamples=[True, True, True, True],\n        vit_num=1,\n        **kwargs)\n    model.default_cfg = _cfg(crop_pct=0.9)\n    return model\n\n\n@register_model\ndef SwiftFormer_L1(pretrained=False, **kwargs):\n    model = SwiftFormer(\n        layers=SwiftFormer_depth['l1'],\n        embed_dims=SwiftFormer_width['l1'],\n        downsamples=[True, True, True, True],\n        vit_num=1,\n        **kwargs)\n    model.default_cfg = _cfg(crop_pct=0.9)\n    return model\n\n\n@register_model\ndef SwiftFormer_L3(pretrained=False, **kwargs):\n    model = SwiftFormer(\n        layers=SwiftFormer_depth['l3'],\n        embed_dims=SwiftFormer_width['l3'],\n        downsamples=[True, True, True, True],\n        vit_num=1,\n        **kwargs)\n    model.default_cfg = _cfg(crop_pct=0.9)\n    return model\n\n"
  },
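  {
    "path": "examples/swiftformer_forward_sketch.py",
    "content": "\"\"\"\nIllustrative usage sketch, not part of the original training code: builds one of the\nregistered SwiftFormer classification variants and runs a forward pass on a random\ntensor. The import path below is an assumption; adjust it to wherever the SwiftFormer\nmodel definitions live in this repository.\n\"\"\"\nimport torch\n\nfrom models.swiftformer import SwiftFormer_XS  # assumed module path\n\n\ndef main():\n    # SwiftFormer_XS is one of the @register_model factories defined in the model file.\n    model = SwiftFormer_XS()\n    model.eval()\n\n    # One 224x224 RGB image, matching the default _cfg input_size of (3, 224, 224).\n    dummy = torch.randn(1, 3, 224, 224)\n    with torch.no_grad():\n        logits = model(dummy)\n    # If distillation is enabled, eval mode averages the two heads into a single tensor,\n    # so either way this prints a single logits tensor.\n    print(logits.shape)  # expected: torch.Size([1, 1000]) with the default 1000 classes\n\n\nif __name__ == '__main__':\n    main()\n"
  },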
  {
    "path": "requirements.txt",
    "content": "torch==1.11.0+cu113\ntorchvision==0.12.0+cu113\ntimm==0.5.4\n"
  },
  {
    "path": "slurm_train.sh",
    "content": "#!/bin/sh\n#SBATCH --job-name=swiftformer\n#SBATCH --partition=your_partition\n#SBATCH --time=48:00:00\n#SBATCH --nodes=4\n#SBATCH --ntasks=16\n#SBATCH --cpus-per-task=16\n#SBATCH --gres=gpu:4\n#SBATCH --mem-per-cpu=8000\n\nIMAGENET_PATH=$1\nMODEL=$2\n\nsrun python main.py --model \"$MODEL\" \\\n--data-path \"$IMAGENET_PATH\" \\\n--batch-size 128 \\\n--epochs 300 \\\n\n\n## Note: Disable aa, mixup, and cutmix for SwiftFormer-XS, and disable mixup, and cutmix for SwiftFormer-S.\n## By default, this script requests total 16 GPUs on 4 nodes. The batch size per gpu is set to 128,\n## tha sums to 128*16=2048 in total.\n"
  },
  {
    "path": "util/__init__.py",
    "content": "import util.utils as utils\nfrom .datasets import build_dataset\nfrom .engine import train_one_epoch, evaluate\nfrom .losses import DistillationLoss\nfrom .samplers import RASampler\n\n"
  },
  {
    "path": "util/datasets.py",
    "content": "import os\nimport json\n\nfrom torchvision import datasets, transforms\nfrom torchvision.datasets.folder import ImageFolder, default_loader\nimport torch\n\nfrom timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD\nfrom timm.data import create_transform\n\n\nclass INatDataset(ImageFolder):\n    def __init__(self, root, train=True, year=2018, transform=None, target_transform=None, category='name',\n                 loader=default_loader):\n        super().__init__(root, transform, target_transform, loader)\n        self.transform = transform\n        self.loader = loader\n        self.target_transform = target_transform\n        self.year = year\n        # assert category in ['kingdom','phylum','class','order','supercategory','family','genus','name']\n        path_json = os.path.join(\n            root, f'{\"train\" if train else \"val\"}{year}.json')\n        with open(path_json) as json_file:\n            data = json.load(json_file)\n\n        with open(os.path.join(root, 'categories.json')) as json_file:\n            data_catg = json.load(json_file)\n\n        path_json_for_targeter = os.path.join(root, f\"train{year}.json\")\n\n        with open(path_json_for_targeter) as json_file:\n            data_for_targeter = json.load(json_file)\n\n        targeter = {}\n        indexer = 0\n        for elem in data_for_targeter['annotations']:\n            king = []\n            king.append(data_catg[int(elem['category_id'])][category])\n            if king[0] not in targeter.keys():\n                targeter[king[0]] = indexer\n                indexer += 1\n        self.nb_classes = len(targeter)\n\n        self.samples = []\n        for elem in data['images']:\n            cut = elem['file_name'].split('/')\n            target_current = int(cut[2])\n            path_current = os.path.join(root, cut[0], cut[2], cut[3])\n\n            categors = data_catg[target_current]\n            target_current_true = targeter[categors[category]]\n            self.samples.append((path_current, target_current_true))\n\n    # __getitem__ and __len__ inherited from ImageFolder\n\n\ndef build_dataset(is_train, args):\n    transform = build_transform(is_train, args)\n\n    if args.data_set == 'CIFAR':\n        dataset = datasets.CIFAR100(\n            args.data_path, train=is_train, transform=transform)\n        nb_classes = 100\n    elif args.data_set == 'IMNET':\n        root = os.path.join(args.data_path, 'train' if is_train else 'val')\n        dataset = datasets.ImageFolder(root, transform=transform)\n        nb_classes = 1000\n    elif args.data_set == 'FLOWERS':\n        root = os.path.join(args.data_path, 'train' if is_train else 'test')\n        dataset = datasets.ImageFolder(root, transform=transform)\n        if is_train:\n            dataset = torch.utils.data.ConcatDataset(\n                [dataset for _ in range(100)])\n        nb_classes = 102\n    elif args.data_set == 'INAT':\n        dataset = INatDataset(args.data_path, train=is_train, year=2018,\n                              category=args.inat_category, transform=transform)\n        nb_classes = dataset.nb_classes\n    elif args.data_set == 'INAT19':\n        dataset = INatDataset(args.data_path, train=is_train, year=2019,\n                              category=args.inat_category, transform=transform)\n        nb_classes = dataset.nb_classes\n    else:\n        raise NotImplementedError\n\n    return dataset, nb_classes\n\n\ndef build_transform(is_train, args):\n    resize_im = args.input_size > 32\n    if 
is_train:\n        # This should always dispatch to transforms_imagenet_train\n        transform = create_transform(\n            input_size=args.input_size,\n            is_training=True,\n            color_jitter=args.color_jitter,\n            auto_augment=args.aa,\n            interpolation=args.train_interpolation,\n            re_prob=args.reprob,\n            re_mode=args.remode,\n            re_count=args.recount,\n        )\n        if not resize_im:\n            # Replace RandomResizedCropAndInterpolation with RandomCrop\n            transform.transforms[0] = transforms.RandomCrop(\n                args.input_size, padding=4)\n        return transform\n\n    t = []\n    if resize_im:\n        size = int((256 / 224) * args.input_size)\n        t.append(\n            # to maintain same ratio w.r.t. 224 images\n            transforms.Resize(size, interpolation=3),\n        )\n        t.append(transforms.CenterCrop(args.input_size))\n\n    t.append(transforms.ToTensor())\n    t.append(transforms.Normalize(IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD))\n    return transforms.Compose(t)\n"
  },
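  {
    "path": "examples/build_dataset_sketch.py",
    "content": "\"\"\"\nIllustrative sketch, not part of the original code: shows which fields\nutil.datasets.build_dataset and build_transform read from the `args` namespace.\nThe concrete values below are assumptions in the spirit of DeiT-style defaults;\nsee main.py for the arguments actually used in training.\n\"\"\"\nfrom argparse import Namespace\n\nfrom util.datasets import build_dataset\n\nargs = Namespace(\n    data_set='IMNET',            # CIFAR / IMNET / FLOWERS / INAT / INAT19\n    data_path='/path/to/imagenet',\n    input_size=224,\n    # The fields below are only consumed by the training transform.\n    color_jitter=0.4,\n    aa='rand-m9-mstd0.5-inc1',\n    train_interpolation='bicubic',\n    reprob=0.25,\n    remode='pixel',\n    recount=1,\n    inat_category='name',        # only used by the INAT / INAT19 branches\n)\n\ndataset_train, nb_classes = build_dataset(is_train=True, args=args)\ndataset_val, _ = build_dataset(is_train=False, args=args)\nprint(len(dataset_train), len(dataset_val), nb_classes)\n"
  },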
  {
    "path": "util/engine.py",
    "content": "\"\"\"\nTrain and eval functions used in main.py\n\"\"\"\nimport math\nimport sys\nfrom typing import Iterable, Optional\n\nimport torch\n\nfrom timm.data import Mixup\nfrom timm.utils import accuracy, ModelEma\n\nfrom .losses import DistillationLoss\nimport util.utils as utils\n\n\ndef train_one_epoch(model: torch.nn.Module, criterion: DistillationLoss,\n                    data_loader: Iterable, optimizer: torch.optim.Optimizer,\n                    device: torch.device, epoch: int, loss_scaler,\n                    clip_grad: float = 0,\n                    clip_mode: str = 'norm',\n                    model_ema: Optional[ModelEma] = None, mixup_fn: Optional[Mixup] = None,\n                    set_training_mode=True):\n    model.train(set_training_mode)\n    metric_logger = utils.MetricLogger(delimiter=\"  \")\n    metric_logger.add_meter('lr', utils.SmoothedValue(\n        window_size=1, fmt='{value:.6f}'))\n    header = 'Epoch: [{}]'.format(epoch)\n    print_freq = 100\n\n    for samples, targets in metric_logger.log_every(\n            data_loader, print_freq, header):\n        samples = samples.to(device, non_blocking=True)\n        targets = targets.to(device, non_blocking=True)\n\n        if mixup_fn is not None:\n            samples, targets = mixup_fn(samples, targets)\n\n        if True:  # with torch.cuda.amp.autocast():\n            outputs = model(samples)\n            loss = criterion(samples, outputs, targets)\n\n        loss_value = loss.item()\n\n        if not math.isfinite(loss_value):\n            print(\"Loss is {}, stopping training\".format(loss_value))\n            sys.exit(1)\n\n        optimizer.zero_grad()\n\n        # This attribute is added by timm on one optimizer (adahessian)\n        is_second_order = hasattr(\n            optimizer, 'is_second_order') and optimizer.is_second_order\n        loss_scaler(loss, optimizer, clip_grad=clip_grad, clip_mode=clip_mode,\n                    parameters=model.parameters(), create_graph=is_second_order)\n\n        torch.cuda.synchronize()\n        if model_ema is not None:\n            model_ema.update(model)\n\n        metric_logger.update(loss=loss_value)\n        metric_logger.update(lr=optimizer.param_groups[0][\"lr\"])\n    # gather the stats from all processes\n    metric_logger.synchronize_between_processes()\n    print(\"Averaged stats:\", metric_logger)\n    return {k: meter.global_avg for k, meter in metric_logger.meters.items()}\n\n\n@torch.no_grad()\ndef evaluate(data_loader, model, device):\n    criterion = torch.nn.CrossEntropyLoss()\n\n    metric_logger = utils.MetricLogger(delimiter=\"  \")\n    header = 'Test:'\n\n    # Switch to evaluation mode\n    model.eval()\n\n    for images, target in metric_logger.log_every(data_loader, 10, header):\n        images = images.to(device, non_blocking=True)\n        target = target.to(device, non_blocking=True)\n\n        # Compute output\n        with torch.cuda.amp.autocast():\n            output = model(images)\n            loss = criterion(output, target)\n\n        acc1, acc5 = accuracy(output, target, topk=(1, 5))\n\n        batch_size = images.shape[0]\n        metric_logger.update(loss=loss.item())\n        metric_logger.meters['acc1'].update(acc1.item(), n=batch_size)\n        metric_logger.meters['acc5'].update(acc5.item(), n=batch_size)\n\n    # Gather the stats from all processes\n    metric_logger.synchronize_between_processes()\n    print('* Acc@1 {top1.global_avg:.3f} Acc@5 {top5.global_avg:.3f} loss {losses.global_avg:.3f}'\n         
 .format(top1=metric_logger.acc1, top5=metric_logger.acc5, losses=metric_logger.loss))\n    print(output.mean().item(), output.std().item())\n\n    return {k: meter.global_avg for k, meter in metric_logger.meters.items()}\n"
  },
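  {
    "path": "examples/evaluate_sketch.py",
    "content": "\"\"\"\nIllustrative sketch, not part of the original code: calls util.engine.evaluate on a\ntiny random dataset with an untrained model, just to show the expected call signature\nand the dict of averaged meters it returns. The accuracy numbers are meaningless.\n\"\"\"\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\n\nfrom util.engine import evaluate\n\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n\n# Any classifier mapping images to logits works here; 10 classes keeps topk=(1, 5) valid.\nmodel = torch.nn.Sequential(\n    torch.nn.Flatten(),\n    torch.nn.Linear(3 * 32 * 32, 10),\n).to(device)\n\ndata = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))\nloader = DataLoader(data, batch_size=16)\n\nstats = evaluate(loader, model, device)\nprint(stats)  # e.g. {'loss': ..., 'acc1': ..., 'acc5': ...}\n"
  },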
  {
    "path": "util/losses.py",
    "content": "\"\"\"\nImplements the knowledge distillation loss\n\"\"\"\nimport torch\nfrom torch.nn import functional as F\n\n\nclass DistillationLoss(torch.nn.Module):\n    \"\"\"\n    This module wraps a standard criterion and adds an extra knowledge distillation loss by\n    taking a teacher model prediction and using it as additional supervision.\n    \"\"\"\n\n    def __init__(self, base_criterion: torch.nn.Module, teacher_model: torch.nn.Module,\n                 distillation_type: str, alpha: float, tau: float):\n        super().__init__()\n        self.base_criterion = base_criterion\n        self.teacher_model = teacher_model\n        assert distillation_type in ['none', 'soft', 'hard']\n        self.distillation_type = distillation_type\n        self.alpha = alpha\n        self.tau = tau\n\n    def forward(self, inputs, outputs, labels):\n        \"\"\"\n        Args:\n            inputs: The original inputs that are feed to the teacher model\n            outputs: the outputs of the model to be trained. It is expected to be\n                either a Tensor, or a Tuple[Tensor, Tensor], with the original output\n                in the first position and the distillation predictions as the second output\n            labels: the labels for the base criterion\n        \"\"\"\n        outputs_kd = None\n        if not isinstance(outputs, torch.Tensor):\n            # assume that the model outputs a tuple of [outputs, outputs_kd]\n            outputs, outputs_kd = outputs\n        base_loss = self.base_criterion(outputs, labels)\n        if self.distillation_type == 'none':\n            return base_loss\n\n        if outputs_kd is None:\n            raise ValueError(\"When knowledge distillation is enabled, the model is \"\n                             \"expected to return a Tuple[Tensor, Tensor] with the output of the \"\n                             \"class_token and the dist_token\")\n        # Don't backprop throught the teacher\n        with torch.no_grad():\n            teacher_outputs = self.teacher_model(inputs)\n\n        if self.distillation_type == 'soft':\n            T = self.tau\n            # taken from https://github.com/peterliht/knowledge-distillation-pytorch/blob/master/model/net.py#L100\n            # with slight modifications\n            distillation_loss = F.kl_div(\n                F.log_softmax(outputs_kd / T, dim=1),\n                F.log_softmax(teacher_outputs / T, dim=1),\n                reduction='sum',\n                log_target=True\n            ) * (T * T) / outputs_kd.numel()\n        elif self.distillation_type == 'hard':\n            distillation_loss = F.cross_entropy(\n                outputs_kd, teacher_outputs.argmax(dim=1))\n\n        loss = base_loss * (1 - self.alpha) + distillation_loss * self.alpha\n        return loss\n"
  },
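  {
    "path": "examples/distillation_loss_sketch.py",
    "content": "\"\"\"\nIllustrative sketch, not part of the original code: wraps a base criterion in\nutil.losses.DistillationLoss with a frozen timm teacher. The teacher architecture and\nthe alpha/tau values below are assumptions, not the settings used for the released models.\n\"\"\"\nimport torch\nimport timm\n\nfrom util.losses import DistillationLoss\n\n# Any timm classifier can stand in as the teacher for this sketch.\nteacher = timm.create_model('regnety_160', pretrained=False, num_classes=1000)\nteacher.eval()\n\nbase_criterion = torch.nn.CrossEntropyLoss()\ncriterion = DistillationLoss(base_criterion, teacher,\n                             distillation_type='hard', alpha=0.5, tau=1.0)\n\nimages = torch.randn(2, 3, 224, 224)\nlabels = torch.randint(0, 1000, (2,))\n# A distillation-enabled student returns (class_logits, distillation_logits);\n# random tensors stand in for both heads here.\noutputs = (torch.randn(2, 1000), torch.randn(2, 1000))\n\nloss = criterion(images, outputs, labels)\nprint(loss.item())\n"
  },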
  {
    "path": "util/samplers.py",
    "content": "import torch\nimport torch.distributed as dist\nimport math\n\n\nclass RASampler(torch.utils.data.Sampler):\n    \"\"\"Sampler that restricts data loading to a subset of the dataset for distributed,\n    with repeated augmentation.\n    It ensures that different each augmented version of a sample will be visible to a\n    different process (GPU)\n    Heavily based on torch.utils.data.DistributedSampler\n    \"\"\"\n\n    def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):\n        if num_replicas is None:\n            if not dist.is_available():\n                raise RuntimeError(\n                    \"Requires distributed package to be available\")\n            num_replicas = dist.get_world_size()\n        if rank is None:\n            if not dist.is_available():\n                raise RuntimeError(\n                    \"Requires distributed package to be available\")\n            rank = dist.get_rank()\n        self.dataset = dataset\n        self.num_replicas = num_replicas\n        self.rank = rank\n        self.epoch = 0\n        self.num_samples = int(\n            math.ceil(len(self.dataset) * 3.0 / self.num_replicas))\n        self.total_size = self.num_samples * self.num_replicas\n        self.num_selected_samples = int(math.floor(\n            len(self.dataset) // 256 * 256 / self.num_replicas))\n        self.shuffle = shuffle\n\n    def __iter__(self):\n        # Deterministically shuffle based on epoch\n        g = torch.Generator()\n        g.manual_seed(self.epoch)\n        if self.shuffle:\n            indices = torch.randperm(len(self.dataset), generator=g).tolist()\n        else:\n            indices = list(range(len(self.dataset)))\n\n        # Add extra samples to make it evenly divisible\n        indices = [ele for ele in indices for i in range(3)]\n        indices += indices[:(self.total_size - len(indices))]\n        assert len(indices) == self.total_size\n\n        # Subsample\n        indices = indices[self.rank:self.total_size:self.num_replicas]\n        assert len(indices) == self.num_samples\n\n        return iter(indices[:self.num_selected_samples])\n\n    def __len__(self):\n        return self.num_selected_samples\n\n    def set_epoch(self, epoch):\n        self.epoch = epoch\n"
  },
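  {
    "path": "examples/ra_sampler_sketch.py",
    "content": "\"\"\"\nIllustrative sketch, not part of the original code: constructs RASampler without an\ninitialized process group by passing num_replicas/rank explicitly. In real training these\ncome from the distributed environment set up in util/utils.py.\n\"\"\"\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\n\nfrom util.samplers import RASampler\n\ndataset = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))\n\n# Pretend to be rank 0 of 4 processes; each sample is repeated 3 times and the copies\n# are spread across replicas before the per-epoch subset is selected.\nsampler = RASampler(dataset, num_replicas=4, rank=0, shuffle=True)\nsampler.set_epoch(0)\n\nloader = DataLoader(dataset, sampler=sampler, batch_size=64)\nprint(len(sampler), len(loader))  # 256 selected samples -> 4 batches of 64 for this rank\n"
  },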
  {
    "path": "util/utils.py",
    "content": "\"\"\"\nMisc functions, including distributed helpers.\n\nMostly copy-paste from torchvision references.\n\"\"\"\nimport io\nimport os\nimport time\nfrom collections import defaultdict, deque\nimport datetime\n\nimport torch\nimport torch.distributed as dist\nimport subprocess\n\n\nclass SmoothedValue(object):\n    \"\"\"Track a series of values and provide access to smoothed values over a\n    window or the global series average.\n    \"\"\"\n\n    def __init__(self, window_size=20, fmt=None):\n        if fmt is None:\n            fmt = \"{median:.4f} ({global_avg:.4f})\"\n        self.deque = deque(maxlen=window_size)\n        self.total = 0.0\n        self.count = 0\n        self.fmt = fmt\n\n    def update(self, value, n=1):\n        self.deque.append(value)\n        self.count += n\n        self.total += value * n\n\n    def synchronize_between_processes(self):\n        \"\"\"\n        Warning: does not synchronize the deque!\n        \"\"\"\n        if not is_dist_avail_and_initialized():\n            return\n        t = torch.tensor([self.count, self.total],\n                         dtype=torch.float64, device='cuda')\n        dist.barrier()\n        dist.all_reduce(t)\n        t = t.tolist()\n        self.count = int(t[0])\n        self.total = t[1]\n\n    @property\n    def median(self):\n        d = torch.tensor(list(self.deque))\n        return d.median().item()\n\n    @property\n    def avg(self):\n        d = torch.tensor(list(self.deque), dtype=torch.float32)\n        return d.mean().item()\n\n    @property\n    def global_avg(self):\n        return self.total / self.count\n\n    @property\n    def max(self):\n        return max(self.deque)\n\n    @property\n    def value(self):\n        return self.deque[-1]\n\n    def __str__(self):\n        return self.fmt.format(\n            median=self.median,\n            avg=self.avg,\n            global_avg=self.global_avg,\n            max=self.max,\n            value=self.value)\n\n\nclass MetricLogger(object):\n    def __init__(self, delimiter=\"\\t\"):\n        self.meters = defaultdict(SmoothedValue)\n        self.delimiter = delimiter\n\n    def update(self, **kwargs):\n        for k, v in kwargs.items():\n            if isinstance(v, torch.Tensor):\n                v = v.item()\n            assert isinstance(v, (float, int))\n            self.meters[k].update(v)\n\n    def __getattr__(self, attr):\n        if attr in self.meters:\n            return self.meters[attr]\n        if attr in self.__dict__:\n            return self.__dict__[attr]\n        raise AttributeError(\"'{}' object has no attribute '{}'\".format(\n            type(self).__name__, attr))\n\n    def __str__(self):\n        loss_str = []\n        for name, meter in self.meters.items():\n            loss_str.append(\n                \"{}: {}\".format(name, str(meter))\n            )\n        return self.delimiter.join(loss_str)\n\n    def synchronize_between_processes(self):\n        for meter in self.meters.values():\n            meter.synchronize_between_processes()\n\n    def add_meter(self, name, meter):\n        self.meters[name] = meter\n\n    def log_every(self, iterable, print_freq, header=None):\n        i = 0\n        if not header:\n            header = ''\n        start_time = time.time()\n        end = time.time()\n        iter_time = SmoothedValue(fmt='{avg:.4f}')\n        data_time = SmoothedValue(fmt='{avg:.4f}')\n        space_fmt = ':' + str(len(str(len(iterable)))) + 'd'\n        log_msg = [\n            header,\n            
'[{0' + space_fmt + '}/{1}]',\n            'eta: {eta}',\n            '{meters}',\n            'time: {time}',\n            'data: {data}'\n        ]\n        if torch.cuda.is_available():\n            log_msg.append('max mem: {memory:.0f}')\n        log_msg = self.delimiter.join(log_msg)\n        MB = 1024.0 * 1024.0\n        for obj in iterable:\n            data_time.update(time.time() - end)\n            yield obj\n            iter_time.update(time.time() - end)\n            if i % print_freq == 0 or i == len(iterable) - 1:\n                eta_seconds = iter_time.global_avg * (len(iterable) - i)\n                eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))\n                if torch.cuda.is_available():\n                    print(log_msg.format(\n                        i, len(iterable), eta=eta_string,\n                        meters=str(self),\n                        time=str(iter_time), data=str(data_time),\n                        memory=torch.cuda.max_memory_allocated() / MB))\n                else:\n                    print(log_msg.format(\n                        i, len(iterable), eta=eta_string,\n                        meters=str(self),\n                        time=str(iter_time), data=str(data_time)))\n            i += 1\n            end = time.time()\n        total_time = time.time() - start_time\n        total_time_str = str(datetime.timedelta(seconds=int(total_time)))\n        print('{} Total time: {} ({:.4f} s / it)'.format(\n            header, total_time_str, total_time / len(iterable)))\n\n\ndef _load_checkpoint_for_ema(model_ema, checkpoint):\n    \"\"\"\n    Workaround for ModelEma._load_checkpoint to accept an already-loaded object\n    \"\"\"\n    mem_file = io.BytesIO()\n    torch.save(checkpoint, mem_file)\n    mem_file.seek(0)\n    model_ema._load_checkpoint(mem_file)\n\n\ndef setup_for_distributed(is_master):\n    \"\"\"\n    This function disables printing when not in master process\n    \"\"\"\n    import builtins as __builtin__\n    builtin_print = __builtin__.print\n\n    def print(*args, **kwargs):\n        force = kwargs.pop('force', False)\n        if is_master or force:\n            builtin_print(*args, **kwargs)\n\n    __builtin__.print = print\n\n\ndef is_dist_avail_and_initialized():\n    if not dist.is_available():\n        return False\n    if not dist.is_initialized():\n        return False\n    return True\n\n\ndef get_world_size():\n    if not is_dist_avail_and_initialized():\n        return 1\n    return dist.get_world_size()\n\n\ndef get_rank():\n    if not is_dist_avail_and_initialized():\n        return 0\n    return dist.get_rank()\n\n\ndef is_main_process():\n    return get_rank() == 0\n\n\ndef save_on_master(*args, **kwargs):\n    if is_main_process():\n        torch.save(*args, **kwargs)\n\n\ndef init_distributed_mode(args):\n    if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:\n        args.rank = int(os.environ[\"RANK\"])\n        args.world_size = int(os.environ['WORLD_SIZE'])\n        args.gpu = int(os.environ['LOCAL_RANK'])\n        args.dist_url = 'env://'\n        os.environ['LOCAL_SIZE'] = str(torch.cuda.device_count())\n        print('Using distributed mode: 1')\n    elif 'SLURM_PROCID' in os.environ:\n        proc_id = int(os.environ['SLURM_PROCID'])\n        ntasks = int(os.environ['SLURM_NTASKS'])\n        node_list = os.environ['SLURM_NODELIST']\n        num_gpus = torch.cuda.device_count()\n        addr = subprocess.getoutput(\n            'scontrol show hostname {} | head 
-n1'.format(node_list))\n        os.environ['MASTER_PORT'] = os.environ.get('MASTER_PORT', '29500')\n        os.environ['MASTER_ADDR'] = addr\n        os.environ['WORLD_SIZE'] = str(ntasks)\n        os.environ['RANK'] = str(proc_id)\n        os.environ['LOCAL_RANK'] = str(proc_id % num_gpus)\n        os.environ['LOCAL_SIZE'] = str(num_gpus)\n        args.dist_url = 'env://'\n        args.world_size = ntasks\n        args.rank = proc_id\n        args.gpu = proc_id % num_gpus\n        print('Using distributed mode: slurm')\n        print(f\"world: {os.environ['WORLD_SIZE']}, rank:{os.environ['RANK']},\"\n              f\" local_rank{os.environ['LOCAL_RANK']}, local_size{os.environ['LOCAL_SIZE']}\")\n    else:\n        print('Not using distributed mode')\n        args.distributed = False\n        return\n\n    args.distributed = True\n\n    torch.cuda.set_device(args.gpu)\n    args.dist_backend = 'nccl'\n    print('| distributed init (rank {}): {}'.format(\n        args.rank, args.dist_url), flush=True)\n    torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url,\n                                         world_size=args.world_size, rank=args.rank)\n    torch.distributed.barrier()\n    setup_for_distributed(args.rank == 0)\n\n\ndef replace_batchnorm(net):\n    for child_name, child in net.named_children():\n        if hasattr(child, 'fuse'):\n            setattr(net, child_name, child.fuse())\n        elif isinstance(child, torch.nn.Conv2d):\n            child.bias = torch.nn.Parameter(torch.zeros(child.weight.size(0)))\n        elif isinstance(child, torch.nn.BatchNorm2d):\n            setattr(net, child_name, torch.nn.Identity())\n        else:\n            replace_batchnorm(child)\n\n\ndef replace_layernorm(net):\n    import apex\n    for child_name, child in net.named_children():\n        if isinstance(child, torch.nn.LayerNorm):\n            setattr(net, child_name, apex.normalization.FusedLayerNorm(\n                child.weight.size(0)))\n        else:\n            replace_layernorm(child)\n"
  }
]