[
  {
    "path": ".gitignore",
    "content": "# Created by https://www.toptal.com/developers/gitignore/api/python,jupyternotebooks\n# Edit at https://www.toptal.com/developers/gitignore?templates=python,jupyternotebooks\n\n### JupyterNotebooks ###\n# gitignore template for Jupyter Notebooks\n# website: http://jupyter.org/\n\n.ipynb_checkpoints\n*/.ipynb_checkpoints/*\n\n# IPython\nprofile_default/\nipython_config.py\n\n# Remove previous ipynb_checkpoints\n#   git rm -r .ipynb_checkpoints/\n\n### Python ###\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\ncover/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\n.pybuilder/\ntarget/\n\n# Jupyter Notebook\n\n# IPython\n\n# pyenv\n#   For a library or package, you might want to ignore these files since the code is\n#   intended to run in multiple environments; otherwise, check them in:\n# .python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't 
work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# poetry\n#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.\n#   This is especially recommended for binary packages to ensure reproducibility, and is more\n#   commonly ignored for libraries.\n#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control\n#poetry.lock\n\n# pdm\n#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.\n#pdm.lock\n#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it\n#   in version control.\n#   https://pdm.fming.dev/#use-with-ide\n.pdm.toml\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# pytype static type analyzer\n.pytype/\n\n# Cython debug symbols\ncython_debug/\n\n# PyCharm\n#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can\n#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore\n#  and can be added to the global gitignore or merged into this file.  
For a more nuclear\n#  option (not recommended) you can uncomment the following to ignore the entire idea folder.\n#.idea/\n\n### Python Patch ###\n# Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration\npoetry.toml\n\n# ruff\n.ruff_cache/\n\n# LSP config files\npyrightconfig.json\n\n# End of https://www.toptal.com/developers/gitignore/api/python,jupyternotebooks\n\n# Project artifacts\nresults/\ndata/\ncheckpoints/\nprocessed_data/\n*.tar.gz\n"
  },
  {
    "path": "README.md",
    "content": "# DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024)\n\n<a href=\"https://arxiv.org/abs/2402.03898\"><img src=\"https://img.shields.io/badge/Paper-arXiv:2402.03898-Green\"></a>\n<a href=\"#bibtex\"><img src=\"https://img.shields.io/badge/Paper-BibTex-yellow\"></a>\n\nOfficial PyTorch implementation of **DistiLLM**, as presented in our paper: \\\n\\\n**DistiLLM: Towards Streamlined Distillation for Large Language Models** \\\n*[Jongwoo Ko](https://sites.google.com/view/jongwooko), [Sungnyun Kim](https://sungnyunkim.notion.site/Sungnyun-Kim-4770a0182c47469ebdcd357cde97bd32), Tianyi Chen, Se-Young Yun* \\\nKAIST AI and Microsoft\n\n## 🚀 Updates\n- [x] (25.03.11) The DistiLLM-2 paper is out! The preliminary code will be available in this repo, and the final code will be available [here](https://github.com/jongwooko/distillm-2) soon.\n- [x] (24.08.12) Removed the dependency on the outdated local transformers. You can now work with various types of recent LLMs!\n- [x] (24.05.01) Our paper has been accepted at **ICML 2024**. We are open to any discussion and will reflect it in the camera-ready version. 
Looking forward to seeing you in Vienna!\n- [x] (24.03.13) Released [**LoRA checkpoints for OpenLLaMa2-3B**](https://drive.google.com/drive/folders/1Yun1aNpn-mz2h-IVH_VdJ1Jhzm0K55Bo?usp=sharing)\n\n## Environment\n```bash\nbash install.sh\n```\n\nOur code is based on [this commit](https://github.com/huggingface/transformers/commit/85fde09c97213bf7e8625f83096bb2a9e183f987) of HuggingFace Transformers, **following MiniLLM**.\n\n## Data\n### Resources\n+ The training/evaluation instruction-response data before processing can be downloaded from this [link](https://conversationhub.blob.core.windows.net/beit-share-public/MiniLLM/data.tar?sv=2021-10-04&st=2023-06-08T11%3A16%3A02Z&se=2033-06-09T11%3A16%3A00Z&sr=c&sp=r&sig=N4pfCVmSeq4L4tS8QbrFVsX6f6q844eft8xSuXdxU48%3D).\n+ The plain-text corpus $\\mathcal{D}_\\text{PT}$ can be downloaded from the HuggingFace datasets [repository](https://huggingface.co/datasets/openwebtext).\n\n\n### Data Processing\nGet the plain-text corpus $\\mathcal{D}_\\text{PT}$:\n```bash\npython3 tools/get_openwebtext.py\n```\nThis script replaces consecutive `\\n` characters in each document with the special token \"<@x(x!>\" and writes each OpenWebText document on a single line, which is convenient for parallel processing. In `data/openwebtext/data.txt`, we give an example of the resulting format. 
You can follow this format to prepare other corpora beyond OpenWebText.\n\nTokenize the data and store them in binary files:\n```bash\nbash scripts/gpt2/tools/process_data_dolly.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM} # Process Dolly Train / Validation Data\nbash scripts/gpt2/tools/process_data_pretrain.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM} # Process OpenWebText Train / Validation Data\n\nbash scripts/opt/tools/process_data_dolly.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM} # Process Dolly Train / Validation Data\nbash scripts/opt/tools/process_data_pretrain.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM} # Process OpenWebText Corpus Train / Validation Data\n\nbash scripts/llama/tools/process_data_dolly.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM} # Process Dolly Train / Validation Data\nbash scripts/llama/tools/process_data_pretrain.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM} # Process OpenWebText Corpus Train / Validation Data\n```\n\n## Base Pre-trained Models\nTo run fine-tuning or standard KD baselines, you need to download the model checkpoints from the HuggingFace Model Hub and put them in `checkpoints/`. For example, for gpt2-large, you can download the model from this [link](https://huggingface.co/gpt2-large/tree/main) and put it in `checkpoints/gpt2-large`.\n\nAlternatively, you can change the `CKPT` variable in each script to the corresponding model name so that Transformers downloads the base models automatically. For example, setting `CKPT=\"gpt2-large\"` in `scripts/gpt2/sft/sft_large.sh` causes the gpt2-large base model to be downloaded from the HuggingFace model hub.\n\n## Train\nWe provide example commands for GPT-2 models. Similar scripts for other model families can be found in `scripts/opt` and `scripts/openllama2`. 
All our experiments are conducted on 4 \\* A100 (40GB) GPUs; the number of GPUs can be reduced for small models.\n\n### Baselines\nThe final checkpoints are selected by the **ROUGE-L** scores.\n\n#### Fine-tune the teacher models\n```bash\nbash scripts/gpt2/sft/sft_xlarge.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\n#### SFT Baselines\n```bash\nbash scripts/gpt2/sft/sft_base.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/sft/sft_medium.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/sft/sft_large.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\n\n#### KD Baselines\n```bash\nbash scripts/gpt2/kd/kd_base.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/kd/kd_medium.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/kd/kd_large.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\n\n#### SeqKD Baselines\nGenerate and process responses with the teacher:\n```bash\nbash scripts/gpt2/tools/generate_data_seqkd.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/tools/process_pseudo_data_seqkd.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\nFine-tune the model with SeqKD:\n```bash\nbash scripts/gpt2/seqkd/seqkd_base.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/seqkd/seqkd_medium.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/seqkd/seqkd_large.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\n\n#### Student Initialization\nThe final checkpoints are selected by the **validation loss**.\n```bash\nbash scripts/gpt2/init/init_base.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/init/init_medium.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/init/init_large.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\n\n#### ImitKD Baselines\n```bash\nbash scripts/gpt2/imitkd/imitkd_base_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/imitkd/imitkd_medium_xl.sh 
${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/imitkd/imitkd_large_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\n\n#### MiniLLM Baselines\n```bash\nbash scripts/gpt2/minillm/train_base_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/minillm/train_medium_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/minillm/train_large_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\n\n#### GKD Baselines\n```bash\nbash scripts/gpt2/gkd/gkd_base_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/gkd/gkd_medium_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/gkd/gkd_large_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\n\n### DistiLLM\nThe final checkpoints are selected by the **validation loss**.\n```bash\nbash scripts/gpt2/init/init_base.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/init/init_medium.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/init/init_large.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\n\nThe final checkpoints are selected by the **ROUGE-L** scores.\n```bash\nbash scripts/gpt2/distillm/train_base_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/distillm/train_medium_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\nbash scripts/gpt2/distillm/train_large_xl.sh ${/PATH/TO/DistiLLM} ${MASTER_PORT} ${GPU_NUM}\n```\n\n## Run Evaluation\n```bash\nbash scripts/gpt2/eval/run_eval.sh ${GPU_IDX} ${/PATH/TO/DistiLLM}\nbash scripts/opt/eval/run_eval.sh ${GPU_IDX} ${/PATH/TO/DistiLLM} \nbash scripts/openllama2/eval/run_eval.sh ${GPU_IDX} ${/PATH/TO/DistiLLM} \n```\n\n## Results\nDistiLLM outperforms other KD baselines in terms of both generation performance and training speed for various model families such as GPT-2, OPT, and OpenLLaMA.\n<p align=\"center\">\n<img width=\"1394\" 
src=\"https://github.com/jongwooko/distillm/assets/59277369/19ddac5c-4cd6-4d81-99d8-32723a8e60d8\">\n</p>\n\n## Checkpoints (OpenLLaMA-3B)\nWe share the LoRA weights for OpenLLaMA-3B on [Google Drive](https://drive.google.com/drive/folders/1Yun1aNpn-mz2h-IVH_VdJ1Jhzm0K55Bo?usp=sharing).\n\n## Acknowledgement\nOur code is based on the code of the ICLR 2024 paper [MiniLLM: Knowledge Distillation of Large Language Models](https://arxiv.org/pdf/2306.08543.pdf).\n\n## Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=jongwooko/distillm&type=Date)](https://star-history.com/#jongwooko/distillm&Date)\n\n## BibTeX\nIf you find this repo useful for your research, please consider citing our papers:\n\n```\n@inproceedings{kodistillm,\n  title={DistiLLM: Towards Streamlined Distillation for Large Language Models},\n  author={Ko, Jongwoo and Kim, Sungnyun and Chen, Tianyi and Yun, Se-Young},\n  booktitle={Forty-first International Conference on Machine Learning},\n  year={2024}\n}\n\n@article{ko2025distillm2,\n  title={DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs},\n  author={Jongwoo Ko and Tianyi Chen and Sungnyun Kim and Tianyu Ding and Luming Liang and Ilya Zharkov and Se-Young Yun},\n  year={2025},\n  journal={arXiv preprint arXiv:2503.07067}\n}\n```\n\n## Contact\n- Jongwoo Ko: jongwoo.ko@kaist.ac.kr\n"
  },
  {
    "path": "arguments.py",
    "content": "# coding=utf-8\n# Copyright 2020 The OpenBMB team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport os\nimport deepspeed\nimport numpy as np\n\n\ndef add_model_args(parser: argparse.ArgumentParser):\n    \"\"\"Model arguments\"\"\"\n\n    group = parser.add_argument_group('model', 'model configuration')\n    group.add_argument('--model-path', type=str, help='model path')\n    group.add_argument(\"--ckpt-name\", type=str)\n    group.add_argument(\"--model-type\", type=str, default=\"gpt2\")\n    group.add_argument(\"--teacher-model-type\", type=str, default=None)\n    group.add_argument(\"--n-gpu\", type=int, default=1)\n    group.add_argument(\"--n-nodes\", type=int, default=1)\n    group.add_argument(\"--teacher-model-path\", type=str)\n    group.add_argument(\"--teacher-ckpt-name\", type=str)\n    group.add_argument(\"--teacher-model-fp16\", action=\"store_true\")\n    group.add_argument(\"--model-parallel\", action=\"store_true\")\n    group.add_argument(\"--model-parallel-size\", type=int, default=None)\n    group.add_argument(\"--no-value\", action=\"store_true\")\n    group.add_argument(\"--dropout-path-rate\", type=float, default=None)\n    group.add_argument(\"--fp32\", action=\"store_true\")\n    return parser\n\n\ndef add_runtime_args(parser: argparse.ArgumentParser):\n    group = parser.add_argument_group('runtime', 'runtime configurations')\n\n    group.add_argument(\"--type\", type=str, 
default=None)\n    group.add_argument(\"--do-train\", action=\"store_true\")\n    group.add_argument(\"--do-valid\", action=\"store_true\")\n    group.add_argument(\"--do-eval\", action=\"store_true\")\n    group.add_argument('--base-path', type=str, default=None, help='Path to the project base directory.')\n    group.add_argument('--load', type=str, default=None,\n                       help='Path to a directory containing a model checkpoint.')\n    group.add_argument('--save', type=str, default=None,\n                       help='Output directory to save checkpoints to.')\n    group.add_argument(\"--log-interval\", type=int, default=10)\n    group.add_argument(\"--mid-log-num\", type=int, default=4)\n    group.add_argument('--save-interval', type=int, default=1000,\n                       help='number of iterations between saves')\n    group.add_argument(\"--eval-interval\", type=int, default=1000)\n    group.add_argument('--local_rank', type=int, default=None,\n                       help='local rank passed from distributed launcher')\n    group.add_argument(\"--save-additional-suffix\", type=str, default=\"\")\n    group.add_argument(\"--save-rollout\", action=\"store_true\")\n    group.add_argument(\"--eb-sample-times\", type=int, default=3)\n    return parser\n\n\ndef add_data_args(parser: argparse.ArgumentParser):\n    group = parser.add_argument_group('data', 'data configurations')\n    group.add_argument(\"--data-dir\", type=str, default=None)\n    group.add_argument(\"--processed-data-dir\", type=str, default=None)\n    group.add_argument(\"--force-process\", action=\"store_true\")\n    group.add_argument(\"--force-process-demo\", action=\"store_true\")\n    group.add_argument(\"--data-process-workers\", type=int, default=-1)\n    group.add_argument(\"--train-num\", type=int, default=-1)\n    group.add_argument(\"--train-ratio\", type=float, default=1)\n    group.add_argument(\"--dev-num\", type=int, default=-1)\n    group.add_argument(\"--dev-ratio\", 
type=float, default=1)\n    group.add_argument(\"--gen-num\", type=int, default=-1)\n    group.add_argument(\"--data-names\", type=str, default=None)\n    group.add_argument(\"--prompt-type\", type=str, default=None)\n    group.add_argument(\"--num-workers\", type=int, default=1)\n    group.add_argument(\"--max-prompt-length\", type=int, default=512)\n    group.add_argument(\"--min-prompt-length\", type=int, default=128)\n    group.add_argument(\"--json-data\", action=\"store_true\")\n    group.add_argument(\"--bin-data\", action=\"store_true\")\n    group.add_argument(\"--txt-data\", action=\"store_true\")\n    \n    group.add_argument(\"--prompt-data-dir\", type=str)\n    group.add_argument(\"--lm-data-dir\", type=str)\n    group.add_argument(\"--eval-ppl\", action=\"store_true\")\n    group.add_argument(\"--eval-rw\", action=\"store_true\")\n    group.add_argument(\"--eval-gen\", action=\"store_true\")\n    \n    group.add_argument(\"--only-prompt\", action=\"store_true\")\n    return parser\n\n\ndef add_hp_args(parser: argparse.ArgumentParser):\n    group = parser.add_argument_group(\"hp\", \"hyper parameter configurations\")\n    group.add_argument('--batch-size', type=int, default=32,\n                       help='Data Loader batch size')\n    group.add_argument('--eval-batch-size', type=int, default=32,\n                       help='Data Loader batch size')\n    group.add_argument('--clip-grad', type=float, default=1.0,\n                       help='gradient clipping')\n    group.add_argument('--total-iters', type=int, default=None,\n                       help='total number of iterations')\n    group.add_argument('--train-iters-per-epoch', type=int, default=-1,\n                       help='total number of iterations per epoch')\n    group.add_argument('--max-length', type=int, default=1024,\n                       help='max length of input')\n    group.add_argument('--seed', type=int, default=1234,\n                       help='random seed for 
reproducibility')\n    group.add_argument(\"--seed-order\", type=int, default=42)\n    group.add_argument(\"--seed-data\", type=int, default=42)\n    group.add_argument(\"--seed-ppo\", type=int, default=42)\n    group.add_argument(\"--seed-lm\", type=int, default=7)\n    group.add_argument('--epochs', type=int, default=None,\n                       help='total number of epochs to train over all training runs')\n    group.add_argument('--training-epochs', type=int, default=10000)\n    group.add_argument(\"--gradient-accumulation-steps\", type=int, default=1)\n    group.add_argument(\"--gradient-checkpointing\", action=\"store_true\")\n    group.add_argument(\"--attn-dtype\", default=None)\n    \n    group.add_argument('--lr', type=float, help='initial learning rate')\n    group.add_argument(\"--lr-min\", type=float, default=0.0000001)\n    group.add_argument('--weight-decay', type=float, default=1.0e-2,\n                       help='weight-decay')\n    group.add_argument('--loss-scale', type=float, default=65536,\n                       help='loss scale')\n    group.add_argument(\"--kd-ratio\", type=float, default=None)\n\n    group.add_argument('--warmup-iters', type=int, default=0,\n                       help='number of warmup iterations (default: 0)')\n    group.add_argument('--lr-decay-iters', type=int, default=None,\n                       help='number of iterations to decay LR over;'\n                       ' if None, defaults to `--total-iters`')\n    group.add_argument('--lr-decay-style', type=str, default='noam',\n                       choices=['constant', 'linear', 'cosine', 'exponential', 'noam'],\n                       help='learning rate decay function')\n    group.add_argument(\"--scheduler-name\", type=str, default=\"constant_trm\")\n\n    return parser\n\n\ndef add_ppo_args(parser: argparse.ArgumentParser):\n    group = parser.add_argument_group('ppo', 'ppo configurations')\n    \n    group.add_argument(\"--reward-scaling\", type=float, default=None)\n    group.add_argument(\"--cliprange-reward\", type=float, default=1)\n    group.add_argument(\"--ppo-epochs\", type=int, default=None)\n    group.add_argument(\"--num-rollouts\", type=int, default=256)\n    group.add_argument(\"--num-rollouts-per-device\", type=int, default=None)\n    group.add_argument(\"--cliprange\", type=float, default=0.2)\n    group.add_argument(\"--chunk-size\", type=int, default=None)\n    group.add_argument(\"--gamma\", type=float, default=0.95)\n    \n    return parser\n\n\ndef add_minillm_args(parser: argparse.ArgumentParser):\n    group = parser.add_argument_group('minillm', 'minillm configurations')\n    \n    group.add_argument(\"--length-norm\", action=\"store_true\")\n    group.add_argument(\"--single-step-reg\", action=\"store_true\")\n    group.add_argument(\"--teacher-mixed-alpha\", type=float, default=None)\n    group.add_argument(\"--lm-coef\", type=float, default=1)\n    \n    return parser\n\n\ndef add_distillm_args(parser: argparse.ArgumentParser):\n    group = parser.add_argument_group('distillm', 'distillm configurations')\n\n    # skew kld\n    group.add_argument(\"--skew-alpha\", type=float, default=0.1)\n    \n    # student generation\n    group.add_argument(\"--student-gen\", 
action=\"store_true\")\n    group.add_argument(\"--gen-top-p\", type=float, default=1.0)\n    group.add_argument(\"--gen-num-beams\", type=int, default=2)\n    \n    # adaptive threshold\n    group.add_argument(\"--mixed-alpha\", type=float, default=0.5)\n    group.add_argument(\"--loss-eps\", type=float, default=0.1)\n    group.add_argument(\"--init-threshold\", type=float, default=0.0)\n    \n    # off-policy\n    group.add_argument(\"--capacity\", type=int, default=1000)\n    group.add_argument(\"--replay-ratio\", type=str, default=\"decreasing\")\n    # group.add_argument(\"--time\", action=\"store_true\")\n    return parser\n\n\ndef add_gen_args(parser: argparse.ArgumentParser):\n    group = parser.add_argument_group('generation', 'generation configurations')\n    \n    group.add_argument(\"--top-k\", type=int, default=0)\n    group.add_argument(\"--top-p\", type=float, default=1.0)\n    group.add_argument(\"--do-sample\", action=\"store_true\")\n    group.add_argument(\"--no-repeat-ngram-size\", type=int, default=6)\n    group.add_argument(\"--repetition-penalty\", type=float, default=None)\n    group.add_argument(\"--num-beams\", type=int, default=1)\n    group.add_argument(\"--temperature\", type=float, default=1)\n    \n    return parser\n\n\ndef add_peft_args(parser: argparse.ArgumentParser):\n    group = parser.add_argument_group('peft', 'peft configurations')\n    \n    group.add_argument(\"--peft\", type=str, default=None)\n    group.add_argument(\"--peft-lora-r\", type=int, default=16)\n    group.add_argument(\"--peft-lora-alpha\", type=int, default=64)\n    group.add_argument(\"--peft-lora-dropout\", type=float, default=0.1)\n    group.add_argument(\"--peft-name\", type=str, default=None)\n    group.add_argument(\"--peft-path\", type=str, default=None)\n    group.add_argument(\"--teacher-peft-name\", type=str, default=None)\n    group.add_argument(\"--teacher-peft-path\", type=str, default=None)\n    return parser\n\n\ndef get_args():\n   
 parser = argparse.ArgumentParser()\n    parser = add_model_args(parser)\n    parser = add_runtime_args(parser)\n    parser = add_data_args(parser)\n    parser = add_hp_args(parser)\n    parser = add_ppo_args(parser)\n    parser = add_minillm_args(parser)\n    parser = add_distillm_args(parser)\n    parser = add_gen_args(parser)\n    parser = add_peft_args(parser)\n    parser = deepspeed.add_config_arguments(parser)\n    \n    args, unknown = parser.parse_known_args()\n    \n    assert all([\"--\" not in x for x in unknown]), unknown\n    \n    args.local_rank = int(os.getenv(\"LOCAL_RANK\", \"0\"))\n        \n    args.n_gpu = args.n_gpu * args.n_nodes\n        \n    if args.type == \"eval_main\":\n        ckpt_name = None\n        if args.ckpt_name is not None:\n            ckpt_name = args.ckpt_name\n        if args.peft_name is not None:\n            ckpt_name = args.peft_name\n\n        if ckpt_name is not None:\n            tmp = ckpt_name.split(\"/\")\n            if tmp[-1].isdigit():\n                ckpt_name = \"_\".join(tmp[:-1]) + \"/\" + tmp[-1]\n            else:\n                ckpt_name = \"_\".join(tmp)\n\n        save_path = os.path.join(\n            args.save,\n            f\"{args.data_names}-{args.max_length}\" + (f\"-mp{args.model_parallel_size}\" if args.model_parallel > 0 else \"\"),\n            ckpt_name,\n            f\"{args.seed}\",\n        )\n        args.save = save_path\n    elif args.type == \"lm\":\n        save_path = os.path.join(\n            args.save,\n            f\"{args.ckpt_name}\" + (f\"-{args.peft_name}\" if args.peft_name is not None else \"\"),\n            (f\"e{args.epochs}-bs{args.batch_size}-lr{args.lr}-G{args.gradient_accumulation_steps}-N{args.n_gpu}-NN{args.n_nodes}\") + \\\n            (f\"-mp{args.model_parallel_size}\" if args.model_parallel > 0 else \"\") + \\\n            (f\"-lora-{args.peft_lora_r}-{args.peft_lora_alpha}-{args.peft_lora_dropout}\" if args.peft == \"lora\" else \"\") + \\\n            
args.save_additional_suffix\n        )\n        args.save = save_path\n    elif args.type == \"kd\":\n        save_path = os.path.join(\n            args.save,\n            (f\"{args.ckpt_name}\" + (f\"-{args.peft_name}\" if args.peft_name is not None else \"\") + \\\n             f\"-{args.teacher_ckpt_name}\" + (f\"-{args.teacher_peft_name}\" if args.teacher_peft_name is not None else \"\")),\n            (f\"e{args.epochs}-bs{args.batch_size}-lr{args.lr}-G{args.gradient_accumulation_steps}-N{args.n_gpu}-NN{args.n_nodes}-kd{args.kd_ratio}\") + \\\n            (f\"-mp{args.model_parallel_size}\" if args.model_parallel > 0 else \"\") + \\\n            (f\"-lora-{args.peft_lora_r}-{args.peft_lora_alpha}-{args.peft_lora_dropout}\" if args.peft == \"lora\" else \"\") + \\\n            args.save_additional_suffix\n        )\n        args.save = save_path\n    elif args.type == \"gen\":\n        save_path = os.path.join(\n            args.save,\n            (f\"{args.ckpt_name}\"),\n            (f\"t{args.temperature}-l{args.max_length}\"),\n        )\n        args.save = save_path\n    elif args.type == \"minillm\":\n        ppo_prefix = f\"pe{args.ppo_epochs}\" + \\\n                     (f\"_rs{args.reward_scaling}\" if args.reward_scaling is not None else \"\") + \\\n                     (f\"_nr{args.num_rollouts}\" if args.num_rollouts is not None else \"\") + \\\n                     (f\"_ln\" if args.length_norm else \"\") + \\\n                     (f\"_sr\" if args.single_step_reg else \"\") + \\\n                     (f\"_tm{args.teacher_mixed_alpha}\" if args.teacher_mixed_alpha is not None else \"\")\n        save_path = os.path.join(\n            args.save,\n            (f\"{args.ckpt_name}\" + (f\"-{args.peft_name}\" if args.peft_name is not None else \"\") + \\\n             f\"-{args.teacher_ckpt_name}\" + (f\"-{args.teacher_peft_name}\" if args.teacher_peft_name is not None else \"\")),\n            
(f\"bs{args.batch_size}-lr{args.lr}-G{args.gradient_accumulation_steps}-N{args.n_gpu}-NN{args.n_nodes}-lm{args.lm_coef}-len{args.max_length}\" + \\\n                (f\"-mp{args.model_parallel_size}\" if args.model_parallel > 0 else \"\")) + \\\n            (f\"-lora-{args.peft_lora_r}-{args.peft_lora_alpha}-{args.peft_lora_dropout}\" if args.peft == \"lora\" else \"\"),\n            ppo_prefix + args.save_additional_suffix\n        )\n        args.save = save_path\n        args.num_rollouts_per_device = args.num_rollouts // args.n_gpu\n        \n        if args.warmup_iters > 0:\n            assert args.scheduler_name is not None\n\n    return args\n"
  },
  {
    "path": "configs/deepspeed/ds_config.json",
    "content": "{\n    \"train_micro_batch_size_per_gpu\": 1,\n    \"gradient_accumulation_steps\": 1,\n    \"zero_optimization\": {\n        \"stage\": 1\n    },\n    \"zero_allow_untested_optimizer\": true,\n    \"fp16\": {\n        \"enabled\": true,\n        \"loss_scale\": 0,\n        \"initial_scale_power\": 11,\n        \"loss_scale_window\": 2000,\n        \"hysteresis\": 4\n    },\n    \"wall_clock_breakdown\": false\n}"
  },
  {
    "path": "configs/deepspeed/ds_config_fp32.json",
    "content": "{\n    \"train_micro_batch_size_per_gpu\": 1,\n    \"gradient_accumulation_steps\": 1,\n    \"zero_optimization\": {\n        \"stage\": 1\n    },\n    \"zero_allow_untested_optimizer\": true,\n    \"fp16\": {\n        \"enabled\": false\n    },\n    \"wall_clock_breakdown\": false\n}"
  },
  {
    "path": "configs/deepspeed/ds_config_zero2.json",
    "content": "{\n    \"train_micro_batch_size_per_gpu\": 1,\n    \"gradient_accumulation_steps\": 1,\n    \"zero_optimization\": {\n        \"stage\": 2,\n        \"offload_optimizer\": {\n            \"device\": \"none\"\n        },\n        \"allgather_partitions\": true,\n        \"allgather_bucket_size\": 2e8,\n        \"overlap_comm\": true,\n        \"reduce_scatter\": true,\n        \"reduce_bucket_size\": 2e8,\n        \"contiguous_gradients\": true\n    },\n    \"zero_allow_untested_optimizer\": true,\n    \"fp16\": {\n        \"enabled\": true,\n        \"loss_scale\": 0,\n        \"initial_scale_power\": 11,\n        \"loss_scale_window\": 5000,\n        \"hysteresis\": 4\n    },\n    \"wall_clock_breakdown\": false\n}"
  },
  {
    "path": "configs/deepspeed/ds_config_zero2_offload.json",
    "content": "{\n    \"train_micro_batch_size_per_gpu\": 1,\n    \"gradient_accumulation_steps\": 1,\n    \"zero_optimization\": {\n        \"stage\": 2,\n        \"offload_optimizer\": {\n            \"device\": \"cpu\"\n        },\n        \"allgather_partitions\": true,\n        \"allgather_bucket_size\": 2e8,\n        \"overlap_comm\": true,\n        \"reduce_scatter\": true,\n        \"reduce_bucket_size\": 2e8,\n        \"contiguous_gradients\": true\n    },\n    \"zero_force_ds_cpu_optimizer\": false,\n    \"zero_allow_untested_optimizer\": true,\n    \"fp16\": {\n        \"enabled\": true,\n        \"loss_scale\": 0,\n        \"initial_scale_power\": 11,\n        \"loss_scale_window\": 5000,\n        \"hysteresis\": 4\n    },\n    \"wall_clock_breakdown\": false\n}"
  },
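The three DeepSpeed configs above differ only in ZeRO stage, optimizer offload, and fp16 settings. As a sketch of that relationship (the `make_ds_config` helper is hypothetical, not part of this repo), the files can be reproduced from one function whose branches mirror the JSON fields:

```python
# Hypothetical helper: reconstructs the repo's ds_config_*.json variants.
# Field names and values are taken directly from the JSON files above.
def make_ds_config(stage: int, offload: bool = False, fp16: bool = True) -> dict:
    cfg = {
        "train_micro_batch_size_per_gpu": 1,
        "gradient_accumulation_steps": 1,
        "zero_optimization": {"stage": stage},
        "zero_allow_untested_optimizer": True,
        "fp16": {"enabled": fp16},
        "wall_clock_breakdown": False,
    }
    if stage >= 2:
        # ZeRO-2 partitions optimizer states and gradients; these bucket
        # settings match ds_config_zero2.json / ds_config_zero2_offload.json.
        cfg["zero_optimization"].update({
            "offload_optimizer": {"device": "cpu" if offload else "none"},
            "allgather_partitions": True,
            "allgather_bucket_size": 2e8,
            "overlap_comm": True,
            "reduce_scatter": True,
            "reduce_bucket_size": 2e8,
            "contiguous_gradients": True,
        })
    if offload:
        # Allows a non-DeepSpeed optimizer to be used with CPU offload.
        cfg["zero_force_ds_cpu_optimizer"] = False
    if fp16:
        # Dynamic loss scaling (loss_scale=0) starting at 2**11.
        cfg["fp16"].update({
            "loss_scale": 0,
            "initial_scale_power": 11,
            "loss_scale_window": 5000,
            "hysteresis": 4,
        })
    return cfg
```

`make_ds_config(1, fp16=False)` corresponds to `ds_config_fp32.json`, `make_ds_config(2)` to `ds_config_zero2.json`, and `make_ds_config(2, offload=True)` to `ds_config_zero2_offload.json`.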
  {
    "path": "configs/hostfiles/node_0_1",
    "content": "node-0 slots=8\nnode-1 slots=8"
  },
  {
    "path": "configs/hostfiles/node_0_1_2_3",
    "content": "node-0 slots=8\nnode-1 slots=8\nnode-2 slots=8\nnode-3 slots=8"
  },
  {
    "path": "configs/hostfiles/node_1_2",
    "content": "node-1 slots=8\nnode-2 slots=8"
  },
  {
    "path": "configs/hostfiles/node_2_3",
    "content": "node-2 slots=8\nnode-3 slots=8"
  },
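The hostfiles above use the DeepSpeed launcher's `hostname slots=N` format, one node per line, where `slots` is the number of GPUs per node. A minimal sketch for generating them (the `write_hostfile` helper is illustrative, not part of the repo):

```python
# Illustrative helper: renders a DeepSpeed hostfile body from node names.
def write_hostfile(nodes, slots: int = 8) -> str:
    """One 'hostname slots=N' line per node, matching configs/hostfiles/."""
    return "\n".join(f"{node} slots={slots}" for node in nodes)
```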
  {
    "path": "data_utils/distributed_indexed.py",
    "content": "# coding=utf-8\n# Copyright 2020 The OpenBMB team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport struct\nimport shutil\n\nfrom itertools import accumulate\n\nimport numpy as np\nimport torch\nimport torch.distributed as dist\nfrom utils import print_rank, save_rank\n\n\ndtypes = {\n    1: np.uint8,\n    2: np.int8,\n    3: np.int16,\n    4: np.int32,\n    5: np.int64,\n    6: np.float32,\n    7: np.double,\n    8: np.uint16,\n    9: np.uint32\n}\n\n\ndef code(dtype):\n    for k in dtypes.keys():\n        if dtypes[k] == dtype:\n            return k\n    raise ValueError(dtype)\n\n\ndef index_file_path(prefix_path):\n    return prefix_path + '.idx'\n\n\ndef data_file_path(prefix_path):\n    return prefix_path + '.bin'\n\n\nclass DistributedMMapIndexedDataset(torch.utils.data.Dataset):\n    class Index(object):\n        _HDR_MAGIC = b'MMIDIDX\\x00\\x00'\n        def __init__(self, path):\n            with open(path, 'rb') as stream:\n                magic_test = stream.read(9)\n                assert self._HDR_MAGIC == magic_test, (\n                    'Index file doesn\\'t match expected format. 
'\n                    'Make sure that --dataset-impl is configured properly.'\n                )\n                version = struct.unpack('<Q', stream.read(8))\n                assert (1,) == version\n\n                dtype_code, = struct.unpack('<B', stream.read(1))\n                self._dtype = dtypes[dtype_code]\n                self._dtype_size = self._dtype().itemsize\n\n                self._len = struct.unpack('<Q', stream.read(8))[0]\n                self._doc_count = struct.unpack('<Q', stream.read(8))[0]\n                offset = stream.tell()\n\n            self._bin_buffer_mmap = np.memmap(path, mode='r', order='C')\n            self._bin_buffer = memoryview(self._bin_buffer_mmap)\n            self._sizes = np.frombuffer(\n                self._bin_buffer,\n                dtype=np.int32,\n                count=self._len,\n                offset=offset)\n            self._pointers = np.frombuffer(self._bin_buffer, dtype=np.int64, count=self._len,\n                                           offset=offset + self._sizes.nbytes)\n            self._doc_idx = np.frombuffer(self._bin_buffer, dtype=np.int64, count=self._doc_count,\n                                          offset=offset + self._sizes.nbytes + self._pointers.nbytes)\n\n        def __del__(self):\n            self._bin_buffer_mmap._mmap.close()\n            del self._bin_buffer_mmap\n\n        @property\n        def dtype(self):\n            return self._dtype\n\n        @property\n        def sizes(self):\n            return self._sizes\n\n        @property\n        def doc_idx(self):\n            return self._doc_idx\n\n        def __getitem__(self, i):\n            return self._pointers[i], self._sizes[i]\n\n        def __len__(self):\n            return self._len\n\n    def __init__(self, path, name, rank_number, rank_total, cache = None):\n        \n        super().__init__()\n\n        self._path = path\n        self._name = name\n        self._state = 0\n        if cache is not None:\n  
          self._cache = cache\n            os.makedirs(self._cache, exist_ok=True)\n        else:\n            self._cache = None\n        self._rank_total = rank_total\n        self._rank_number = rank_number\n        self._index = None\n        self._bin_buffer = None\n        self._bin_buffer_mmap = None\n        self.max_state, self.history = self._probe_data_path(self._path, self._name, self._rank_total)\n        self.total_length = self.history[self.max_state-1][1]\n\n        self._do_init(self._path, self._name, self._cache, self._state)\n\n    def _probe_data_path(self, path, name, rank_total):\n        print_rank(\"Probing Dataset\")\n            \n        state = 0\n        history = {-1:(0, 0)}\n        for state in range(np.iinfo(np.int32).max):\n            source_file = path + name + f\"_{state}\"\n            if self.exists(source_file):\n                index = self.Index(index_file_path(source_file))\n                history[state] = (history[state-1][1], history[state-1][1] + len(index))\n            else:\n                break\n            \n        print_rank(f\"Probing end. 
Max data state {state}, total length {history[state-1][1]}\")\n        \n        return state, history\n\n    def __getstate__(self):\n        return self._path + self._name + \"_%d\"%(self._state)\n\n    def __setstate__(self, state):\n        self._state = state\n        self._do_init(self._path, self._name, self._cache, self._state)\n\n    def _do_init(self, path, name, cache, state):\n        if self._bin_buffer_mmap is not None:\n            self._bin_buffer_mmap._mmap.close()\n            del self._bin_buffer_mmap\n        if self._index is not None:\n            del self._index\n\n        self._state = state\n\n        source_file = path + name + f\"_{self._state}\"\n        self._index = self.Index(index_file_path(source_file))\n        self._bin_buffer_mmap = np.memmap(data_file_path(source_file), mode='r', order='C')\n        self._bin_buffer = memoryview(self._bin_buffer_mmap)\n\n    def __del__(self):\n        if self._bin_buffer_mmap is not None:\n            self._bin_buffer_mmap._mmap.close()\n            del self._bin_buffer_mmap\n        if self._index is not None:\n            del self._index\n\n    def __len__(self):\n        return self.total_length\n\n    def _next_file(self):\n        self._state += 1\n        if self._state >= self.max_state:\n            self._state = 0\n        # print_rank(f\"next_file: {self._state}\")\n        self._do_init(self._path, self._name, self._cache, self._state)\n    \n    def __relative_idx(self, idx):\n        res = idx - self.history[self._state][0]\n        return res\n\n    def __slice_item(self, start, stop):\n        ptr = self._index._pointers[self.__relative_idx(start)]\n        sizes = self._index._sizes[self.__relative_idx(start):self.__relative_idx(stop)]\n        offsets = list(accumulate(sizes))\n        np_array = np.frombuffer(self._bin_buffer, dtype=self._index.dtype, count=sum(sizes), offset=ptr)\n        return np.split(np_array, offsets[:-1])\n\n    def __getitem__(self, idx):\n        if 
isinstance(idx, int):\n            while idx >= self.history[self._state][1] or idx < self.history[self._state][0]:\n                self._next_file()\n            ptr, size = self._index[self.__relative_idx(idx)]\n            return np.frombuffer(self._bin_buffer, dtype=self._index.dtype, count=size, offset=ptr)\n        elif isinstance(idx, slice):\n            raise NotImplementedError()\n\n    @property\n    def sizes(self):\n        return self._index.sizes\n        \n    def exists(self, path):\n        return (\n            os.path.exists(index_file_path(path)) and os.path.exists(data_file_path(path))\n        )\n"
  },
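`DistributedMMapIndexedDataset.Index` expects the `MMIDIDX` on-disk layout: a 9-byte magic, a little-endian uint64 version, a uint8 dtype code, uint64 item count, uint64 document count, then three packed arrays (int32 sizes, int64 byte pointers, int64 document index). The following standalone sketch writes and reads a `.bin`/`.idx` pair in that layout; it mirrors the header fields `Index.__init__` parses, but `write_pair`/`read_item` are illustrative helpers, not the repo's builder:

```python
import struct
import numpy as np

MAGIC = b'MMIDIDX\x00\x00'   # 9-byte header magic checked by Index.__init__
DTYPE_CODE = 4               # np.int32 in the module's dtypes table

def write_pair(prefix, docs):
    """Write <prefix>.bin and <prefix>.idx for a list of int32 token arrays."""
    sizes, pointers, addr = [], [], 0
    with open(prefix + '.bin', 'wb') as f:
        for d in docs:
            arr = np.asarray(d, dtype=np.int32)
            f.write(arr.tobytes(order='C'))
            sizes.append(arr.size)
            pointers.append(addr)   # byte offset of each item in the .bin
            addr += arr.nbytes
    doc_idx = np.arange(len(docs) + 1, dtype=np.int64)  # one doc per item here
    with open(prefix + '.idx', 'wb') as f:
        f.write(MAGIC)
        f.write(struct.pack('<Q', 1))             # version
        f.write(struct.pack('<B', DTYPE_CODE))    # dtype code
        f.write(struct.pack('<Q', len(sizes)))    # number of items
        f.write(struct.pack('<Q', len(doc_idx)))  # document count
        f.write(np.asarray(sizes, dtype=np.int32).tobytes())
        f.write(np.asarray(pointers, dtype=np.int64).tobytes())
        f.write(doc_idx.tobytes())

def read_item(prefix, i):
    """Read item i back, mirroring how Index.__init__ parses the header."""
    with open(prefix + '.idx', 'rb') as f:
        assert f.read(9) == MAGIC
        struct.unpack('<Q', f.read(8))            # version (unused here)
        struct.unpack('<B', f.read(1))            # dtype code (int32 assumed)
        (n,) = struct.unpack('<Q', f.read(8))
        struct.unpack('<Q', f.read(8))            # doc count (unused here)
        sizes = np.frombuffer(f.read(4 * n), dtype=np.int32)
        pointers = np.frombuffer(f.read(8 * n), dtype=np.int64)
    data = np.memmap(prefix + '.bin', mode='r', dtype=np.int32)
    start = int(pointers[i]) // 4                 # byte offset -> element index
    return np.asarray(data[start:start + int(sizes[i])])
```

A sharded dataset as probed by `_probe_data_path` would consist of such pairs named `{name}_0`, `{name}_1`, … under the data path.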
  {
    "path": "data_utils/indexed_dataset.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# copied from fairseq/fairseq/data/indexed_dataset.py\n# Removed IndexedRawTextDataset since it relied on Fairseq dictionary\n# other slight modifications to remove fairseq dependencies\n# Added document index to index file and made it accessible.\n#    An empty sentence no longer separates documents.\n\nfrom functools import lru_cache\nimport os\nimport shutil\nimport struct\nfrom itertools import accumulate\n\nimport numpy as np\nimport torch\n\n\ndef __best_fitting_dtype(vocab_size=None):\n    if vocab_size is not None and vocab_size < 65500:\n        return np.uint16\n    else:\n        return np.int32\n\n\ndef get_available_dataset_impl():\n    return ['lazy', 'cached', 'mmap']\n\n\ndef infer_dataset_impl(path):\n    if IndexedDataset.exists(path):\n        with open(index_file_path(path), 'rb') as f:\n            magic = f.read(8)\n            if magic == IndexedDataset._HDR_MAGIC:\n                return 'cached'\n            elif magic == MMapIndexedDataset.Index._HDR_MAGIC[:8]:\n                return 'mmap'\n            else:\n                return None\n    else:\n        print(f\"Dataset does not exist: {path}\")\n        print(\"Path should be a basename that both .idx and .bin can be appended to get full filenames.\")\n        return None\n\n\ndef make_builder(out_file, impl, dtype):\n    if impl == 'mmap':\n        return MMapIndexedDatasetBuilder(out_file, dtype=dtype)\n    else:\n        return IndexedDatasetBuilder(out_file)\n\n\ndef make_dataset(path, impl, skip_warmup=False):\n    if not IndexedDataset.exists(path):\n        print(f\"Dataset does not exist: {path}\")\n        print(\"Path should be a basename that both .idx and .bin can be appended to get full filenames.\")\n        return None\n    if impl == 'infer':\n        impl = 
infer_dataset_impl(path)\n    if impl == 'lazy' and IndexedDataset.exists(path):\n        return IndexedDataset(path)\n    elif impl == 'cached' and IndexedDataset.exists(path):\n        return IndexedCachedDataset(path)\n    elif impl == 'mmap' and MMapIndexedDataset.exists(path):\n        return MMapIndexedDataset(path, skip_warmup)\n    print(f\"Unknown dataset implementation: {impl}\")\n    return None\n\n\ndef dataset_exists(path, impl):\n    if impl == 'mmap':\n        return MMapIndexedDataset.exists(path)\n    else:\n        return IndexedDataset.exists(path)\n\n\ndef read_longs(f, n):\n    a = np.empty(n, dtype=np.int64)\n    f.readinto(a)\n    return a\n\n\ndef write_longs(f, a):\n    f.write(np.array(a, dtype=np.int64))\n\n\ndtypes = {\n    1: np.uint8,\n    2: np.int8,\n    3: np.int16,\n    4: np.int32,\n    5: np.int64,\n    6: np.float32,\n    7: np.double,\n    8: np.uint16,\n    9: np.uint32\n}\n\n\ndef code(dtype):\n    for k in dtypes.keys():\n        if dtypes[k] == dtype:\n            return k\n    raise ValueError(dtype)\n\n\ndef index_file_path(prefix_path):\n    return prefix_path + '.idx'\n\n\ndef data_file_path(prefix_path):\n    return prefix_path + '.bin'\n\n\ndef create_doc_idx(sizes):\n    doc_idx = [0]\n    for i, s in enumerate(sizes):\n        if s == 0:\n            doc_idx.append(i + 1)\n    return doc_idx\n\n\nclass IndexedDataset(torch.utils.data.Dataset):\n    \"\"\"Loader for IndexedDataset\"\"\"\n    _HDR_MAGIC = b'TNTIDX\\x00\\x00'\n\n    def __init__(self, path):\n        super().__init__()\n        self.path = path\n        self.data_file = None\n        self.read_index(path)\n\n    def read_index(self, path):\n        with open(index_file_path(path), 'rb') as f:\n            magic = f.read(8)\n            assert magic == self._HDR_MAGIC, (\n                'Index file doesn\\'t match expected format. 
'\n                'Make sure that --dataset-impl is configured properly.'\n            )\n            version = f.read(8)\n            assert struct.unpack('<Q', version) == (1,)\n            code, self.element_size = struct.unpack('<QQ', f.read(16))\n            self.dtype = dtypes[code]\n            self._len, self.s = struct.unpack('<QQ', f.read(16))\n            self.doc_count = struct.unpack('<Q', f.read(8))\n            self.dim_offsets = read_longs(f, self._len + 1)\n            self.data_offsets = read_longs(f, self._len + 1)\n            self.sizes = read_longs(f, self.s)\n            self.doc_idx = read_longs(f, self.doc_count)\n\n    def read_data(self, path):\n        self.data_file = open(data_file_path(path), 'rb', buffering=0)\n\n    def check_index(self, i):\n        if i < 0 or i >= self._len:\n            raise IndexError('index out of range')\n\n    def __del__(self):\n        if self.data_file:\n            self.data_file.close()\n\n    # @lru_cache(maxsize=8)\n    def __getitem__(self, idx):\n        if not self.data_file:\n            self.read_data(self.path)\n        if isinstance(idx, int):\n            i = idx\n            self.check_index(i)\n            tensor_size = self.sizes[self.dim_offsets[i]:self.dim_offsets[i + 1]]\n            a = np.empty(tensor_size, dtype=self.dtype)\n            self.data_file.seek(self.data_offsets[i] * self.element_size)\n            self.data_file.readinto(a)\n            return a\n        elif isinstance(idx, slice):\n            start, stop, step = idx.indices(len(self))\n            if step != 1:\n                raise ValueError(\"Slices into indexed_dataset must be contiguous\")\n            sizes = self.sizes[self.dim_offsets[start]:self.dim_offsets[stop]]\n            size = sum(sizes)\n            a = np.empty(size, dtype=self.dtype)\n            self.data_file.seek(self.data_offsets[start] * self.element_size)\n            self.data_file.readinto(a)\n            offsets = 
list(accumulate(sizes))\n            sents = np.split(a, offsets[:-1])\n            return sents\n\n    def __len__(self):\n        return self._len\n\n    def num_tokens(self, index):\n        return self.sizes[index]\n\n    def size(self, index):\n        return self.sizes[index]\n\n    @staticmethod\n    def exists(path):\n        return (\n            os.path.exists(index_file_path(path)) and os.path.exists(data_file_path(path))\n        )\n\n    @property\n    def supports_prefetch(self):\n        return False  # avoid prefetching to save memory\n\n\nclass IndexedCachedDataset(IndexedDataset):\n\n    def __init__(self, path):\n        super().__init__(path)\n        self.cache = None\n        self.cache_index = {}\n\n    @property\n    def supports_prefetch(self):\n        return True\n\n    def prefetch(self, indices):\n        if all(i in self.cache_index for i in indices):\n            return\n        if not self.data_file:\n            self.read_data(self.path)\n        indices = sorted(set(indices))\n        total_size = 0\n        for i in indices:\n            total_size += self.data_offsets[i + 1] - self.data_offsets[i]\n        self.cache = np.empty(total_size, dtype=self.dtype)\n        ptx = 0\n        self.cache_index.clear()\n        for i in indices:\n            self.cache_index[i] = ptx\n            size = self.data_offsets[i + 1] - self.data_offsets[i]\n            a = self.cache[ptx: ptx + size]\n            self.data_file.seek(self.data_offsets[i] * self.element_size)\n            self.data_file.readinto(a)\n            ptx += size\n        if self.data_file:\n            # close and delete data file after prefetch so we can pickle\n            self.data_file.close()\n            self.data_file = None\n\n    # @lru_cache(maxsize=8)\n    def __getitem__(self, idx):\n        if isinstance(idx, int):\n            i = idx\n            self.check_index(i)\n            tensor_size = self.sizes[self.dim_offsets[i]:self.dim_offsets[i + 1]]\n         
   a = np.empty(tensor_size, dtype=self.dtype)\n            ptx = self.cache_index[i]\n            np.copyto(a, self.cache[ptx: ptx + a.size])\n            return a\n        elif isinstance(idx, slice):\n            # Hack just to make this work, can optimizer later if necessary\n            sents = []\n            for i in range(*idx.indices(len(self))):\n                sents.append(self[i])\n            return sents\n\n\nclass IndexedDatasetBuilder(object):\n    element_sizes = {\n        np.uint8: 1,\n        np.int8: 1,\n        np.int16: 2,\n        np.int32: 4,\n        np.int64: 8,\n        np.float32: 4,\n        np.double: 8\n    }\n\n    def __init__(self, out_file, dtype=np.int32):\n        self.out_file = open(out_file, 'wb')\n        self.dtype = dtype\n        self.data_offsets = [0]\n        self.dim_offsets = [0]\n        self.sizes = []\n        self.element_size = self.element_sizes[self.dtype]\n        self.doc_idx = [0]\n\n    def add_item(self, tensor):\n        bytes = self.out_file.write(np.array(tensor.numpy(), dtype=self.dtype))\n        self.data_offsets.append(self.data_offsets[-1] + bytes / self.element_size)\n        for s in tensor.size():\n            self.sizes.append(s)\n        self.dim_offsets.append(self.dim_offsets[-1] + len(tensor.size()))\n\n    def end_document(self):\n        self.doc_idx.append(len(self.sizes))\n\n    def merge_file_(self, another_file):\n        index = IndexedDataset(another_file)\n        assert index.dtype == self.dtype\n\n        begin = self.data_offsets[-1]\n        for offset in index.data_offsets[1:]:\n            self.data_offsets.append(begin + offset)\n        self.sizes.extend(index.sizes)\n        begin = self.dim_offsets[-1]\n        for dim_offset in index.dim_offsets[1:]:\n            self.dim_offsets.append(begin + dim_offset)\n\n        with open(data_file_path(another_file), 'rb') as f:\n            while True:\n                data = f.read(1024)\n                if data:\n             
       self.out_file.write(data)\n                else:\n                    break\n\n    def finalize(self, index_file):\n        self.out_file.close()\n        index = open(index_file, 'wb')\n        index.write(b'TNTIDX\\x00\\x00')\n        index.write(struct.pack('<Q', 1))\n        index.write(struct.pack('<QQ', code(self.dtype), self.element_size))\n        index.write(struct.pack('<QQ', len(self.data_offsets) - 1, len(self.sizes)))\n        index.write(struct.pack('<Q', len(self.doc_idx)))\n        write_longs(index, self.dim_offsets)\n        write_longs(index, self.data_offsets)\n        write_longs(index, self.sizes)\n        write_longs(index, self.doc_idx)\n        index.close()\n\n\ndef _warmup_mmap_file(path):\n    with open(path, 'rb') as stream:\n        while stream.read(100 * 1024 * 1024):\n            pass\n\n\nclass MMapIndexedDataset(torch.utils.data.Dataset):\n    class Index(object):\n        _HDR_MAGIC = b'MMIDIDX\\x00\\x00'\n\n        @classmethod\n        def writer(cls, path, dtype):\n            class _Writer(object):\n                def __enter__(self):\n                    self._file = open(path, 'wb')\n\n                    self._file.write(cls._HDR_MAGIC)\n                    self._file.write(struct.pack('<Q', 1))\n                    self._file.write(struct.pack('<B', code(dtype)))\n\n                    return self\n\n                @staticmethod\n                def _get_pointers(sizes):\n                    dtype_size = dtype().itemsize\n                    address = 0\n                    pointers = []\n\n                    for size in sizes:\n                        pointers.append(address)\n                        address += size * dtype_size\n\n                    return pointers\n\n                def write(self, sizes, doc_idx):\n                    pointers = self._get_pointers(sizes)\n\n                    self._file.write(struct.pack('<Q', len(sizes)))\n                    self._file.write(struct.pack('<Q', 
len(doc_idx)))\n\n                    sizes = np.array(sizes, dtype=np.int32)\n                    self._file.write(sizes.tobytes(order='C'))\n                    del sizes\n\n                    pointers = np.array(pointers, dtype=np.int64)\n                    self._file.write(pointers.tobytes(order='C'))\n                    del pointers\n\n                    doc_idx = np.array(doc_idx, dtype=np.int64)\n                    self._file.write(doc_idx.tobytes(order='C'))\n\n                def __exit__(self, exc_type, exc_val, exc_tb):\n                    self._file.close()\n\n            return _Writer()\n\n        def __init__(self, path, skip_warmup=False):\n            with open(path, 'rb') as stream:\n                magic_test = stream.read(9)\n                assert self._HDR_MAGIC == magic_test, (\n                    'Index file doesn\\'t match expected format. '\n                    'Make sure that --dataset-impl is configured properly.'\n                )\n                version = struct.unpack('<Q', stream.read(8))\n                assert (1,) == version\n\n                dtype_code, = struct.unpack('<B', stream.read(1))\n                self._dtype = dtypes[dtype_code]\n                self._dtype_size = self._dtype().itemsize\n\n                self._len = struct.unpack('<Q', stream.read(8))[0]\n                self._doc_count = struct.unpack('<Q', stream.read(8))[0]\n                offset = stream.tell()\n\n            if not skip_warmup:\n                print(\"    warming up index mmap file...\")\n                _warmup_mmap_file(path)\n\n            self._bin_buffer_mmap = np.memmap(path, mode='r', order='C')\n            self._bin_buffer = memoryview(self._bin_buffer_mmap)\n            print(\"    reading sizes...\")\n            self._sizes = np.frombuffer(\n                self._bin_buffer,\n                dtype=np.int32,\n                count=self._len,\n                offset=offset)\n            print(\"    reading pointers...\")\n   
         self._pointers = np.frombuffer(self._bin_buffer, dtype=np.int64, count=self._len,\n                                           offset=offset + self._sizes.nbytes)\n            print(\"    reading document index...\")\n            self._doc_idx = np.frombuffer(self._bin_buffer, dtype=np.int64, count=self._doc_count,\n                                          offset=offset + self._sizes.nbytes + self._pointers.nbytes)\n\n        def __del__(self):\n            self._bin_buffer_mmap._mmap.close()\n            del self._bin_buffer_mmap\n\n        @property\n        def dtype(self):\n            return self._dtype\n\n        @property\n        def sizes(self):\n            return self._sizes\n\n        @property\n        def doc_idx(self):\n            return self._doc_idx\n\n        @lru_cache(maxsize=8)\n        def __getitem__(self, i):\n            return self._pointers[i], self._sizes[i]\n\n        def __len__(self):\n            return self._len\n\n    def __init__(self, path, skip_warmup=False):\n        super().__init__()\n\n        self._path = None\n        self._index = None\n        self._bin_buffer = None\n\n        self._do_init(path, skip_warmup)\n\n    def __getstate__(self):\n        return self._path\n\n    def __setstate__(self, state):\n        self._do_init(state)\n\n    def _do_init(self, path, skip_warmup):\n        self._path = path\n        self._index = self.Index(index_file_path(self._path), skip_warmup)\n\n        if not skip_warmup:\n            print(\"    warming up data mmap file...\")\n            _warmup_mmap_file(data_file_path(self._path))\n        print(\"    creating numpy buffer of mmap...\")\n        self._bin_buffer_mmap = np.memmap(data_file_path(self._path), mode='r', order='C')\n        print(\"    creating memory view of numpy buffer...\")\n        self._bin_buffer = memoryview(self._bin_buffer_mmap)\n\n    def __del__(self):\n        self._bin_buffer_mmap._mmap.close()\n        del self._bin_buffer_mmap\n        del 
self._index\n\n    def __len__(self):\n        return len(self._index)\n\n    # @lru_cache(maxsize=8)\n    def __getitem__(self, idx):\n        if isinstance(idx, int):\n            assert idx < len(self._index), \"Index {} out of range: {}\".format(idx, len(self._index))\n            ptr, size = self._index[idx]\n            np_array = np.frombuffer(self._bin_buffer, dtype=self._index.dtype,\n                                     count=size, offset=ptr)\n            return np_array\n        elif isinstance(idx, slice):\n            start, stop, step = idx.indices(len(self))\n            if step != 1:\n                raise ValueError(\"Slices into indexed_dataset must be contiguous\")\n            ptr = self._index._pointers[start]\n            sizes = self._index._sizes[idx]\n            offsets = list(accumulate(sizes))\n            total_size = sum(sizes)\n            np_array = np.frombuffer(self._bin_buffer, dtype=self._index.dtype,\n                                     count=total_size, offset=ptr)\n            sents = np.split(np_array, offsets[:-1])\n            return sents\n\n    def get(self, idx, offset=0, length=None):\n        \"\"\" Retrieves a single item from the dataset with the option to only\n        return a portion of the item.\n\n        get(idx) is the same as [idx] but get() does not support slicing.\n        \"\"\"\n        ptr, size = self._index[idx]\n        if length is None:\n            length = size - offset\n        ptr += offset * np.dtype(self._index.dtype).itemsize\n        np_array = np.frombuffer(self._bin_buffer, dtype=self._index.dtype,\n                                 count=length, offset=ptr)\n        return np_array\n\n    @property\n    def sizes(self):\n        return self._index.sizes\n\n    # @property\n    # def doc_idx(self):\n    #     return self._index.doc_idx\n\n    # def get_doc_idx(self):\n    #     return self._index._doc_idx\n\n    # def set_doc_idx(self, doc_idx_):\n    #     self._index._doc_idx = 
doc_idx_\n\n    @property\n    def supports_prefetch(self):\n        return False\n\n    @staticmethod\n    def exists(path):\n        return (\n            os.path.exists(index_file_path(path)) and os.path.exists(data_file_path(path))\n        )\n\n\nclass MMapIndexedDatasetBuilder(object):\n    def __init__(self, out_file, dtype=np.int64):\n        self._data_file = open(out_file, 'wb')\n        self._dtype = dtype\n        self._sizes = []\n        self._doc_idx = [0]\n\n    def add_item(self, tensor):\n        np_array = np.array(tensor.numpy(), dtype=self._dtype)\n        self._data_file.write(np_array.tobytes(order='C'))\n        self._sizes.append(np_array.size)\n\n    def end_document(self):\n        self._doc_idx.append(len(self._sizes))\n\n    def merge_file_(self, another_file):\n        # Concatenate index\n        index = MMapIndexedDataset.Index(index_file_path(another_file))\n        assert index.dtype == self._dtype\n\n        for size in index.sizes:\n            self._sizes.append(size)\n\n        # Concatenate data\n        with open(data_file_path(another_file), 'rb') as f:\n            shutil.copyfileobj(f, self._data_file)\n\n    def finalize(self, index_file):\n        self._data_file.close()\n\n        with MMapIndexedDataset.Index.writer(index_file, self._dtype) as index:\n            index.write(self._sizes, self._doc_idx)\n"
  },
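`MMapIndexedDataset.get(idx, offset, length)` reads a window of a single item: `offset` and `length` are in elements, and the element offset is converted to a byte offset via the dtype's itemsize before `np.frombuffer`. A standalone sketch of that arithmetic on a flat buffer (the `get_slice` helper is illustrative; the real method looks up `ptr, size` from the index first):

```python
import numpy as np

def get_slice(buf: np.ndarray, ptr: int, size: int, offset: int = 0, length=None):
    """Mirror of MMapIndexedDataset.get(): read `length` elements of one item
    starting `offset` elements past its byte pointer `ptr`."""
    if length is None:
        length = size - offset                  # read to the end of the item
    ptr += offset * buf.itemsize                # element offset -> byte offset
    return np.frombuffer(memoryview(buf), dtype=buf.dtype, count=length, offset=ptr)
```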
  {
    "path": "data_utils/lm_datasets.py",
    "content": "import random\nimport torch\nimport os\nimport json\nimport pickle\nimport numpy as np\nfrom torch.utils.data import Dataset\nfrom .distributed_indexed import DistributedMMapIndexedDataset\n\nfrom torch.distributed import get_rank, get_world_size, barrier\nfrom utils import print_rank\nfrom utils import save_rank\n\n\nclass LMTrainDataset(Dataset):\n    def __init__(self, args, tokenizer, path, split, num, ratio, rng_sample: random.Random):\n        self.args = args\n        self.tokenizer = tokenizer\n        self.split = split\n        self.pad_id = self.tokenizer.eos_token_id\n        self.ratio = ratio\n        self.max_length = args.max_length\n        self.max_prompt_length = args.max_prompt_length\n        self.rng_sample = rng_sample\n        self.lm_ctx = DistributedMMapIndexedDataset(path, f\"{split}\", get_rank(), get_world_size())\n\n        if os.path.exists(os.path.join(path, f\"{split}.jsonl\")):\n            with open(os.path.join(path, f\"{split}.jsonl\")) as f:\n                self.raw = [json.loads(line) for line in f.readlines()]\n                self.answers = [x[\"output\"] if isinstance(x[\"output\"], list) else [x[\"output\"]] for x in self.raw]\n        \n        print_rank(len(self.lm_ctx))\n        if num == -1:\n            self.num = len(self.lm_ctx)\n        else:\n            self.num = num\n\n        print_rank(f\"Num LM instances: {len(self.lm_ctx)}\")\n\n    def __len__(self):\n        return self.num\n   \n    def __getitem__(self, index):\n        return self._get_lm(index)\n    \n    def _get_lm(self, index):\n        data = self.lm_ctx[index]\n        input_ids = data.astype(int)\n        return {\n            \"input_ids\": input_ids\n        }\n\n    def _process_lm(self, i, samp, model_data, no_model_data, gen_data):\n        input_ids = samp[\"input_ids\"]\n        source_len = 1\n        \n        prompt = None\n        if 65535 in input_ids:\n            source_len = np.where(input_ids==65535)[0][0]\n     
       prompt = input_ids[:source_len]\n            input_ids = np.concatenate([input_ids[:source_len], input_ids[source_len+1:]], axis=0)\n        input_ids = input_ids[:self.max_length]\n        input_len = len(input_ids)\n        model_data[\"input_ids\"][i][:input_len-1] = torch.tensor(input_ids[:-1], dtype=torch.long)\n        model_data[\"attention_mask\"][i][:input_len-1] = 1.0\n        if self.args.model_type in [\"gpt2\"]:\n            model_data[\"position_ids\"][i][:input_len-1] = torch.arange(0, input_len-1, dtype=torch.long)\n        no_model_data[\"label\"][i][:input_len-1] = torch.tensor(input_ids[1:], dtype=torch.long)\n        no_model_data[\"label\"][i][:source_len-1] = -100\n        no_model_data[\"loss_mask\"][i][:input_len-1] = 1.0\n        no_model_data[\"loss_mask\"][i][:source_len-1] = 0\n        \n        if prompt is not None:\n            gen_data[\"input_ids\"][i][-len(prompt):] = torch.tensor(prompt, dtype=torch.long)\n            gen_data[\"attention_mask\"][i][-len(prompt):] = 1.0\n\n    def move_to_device(self, model_data, no_model_data, gen_data, device):\n        for k in model_data:\n            model_data[k] = model_data[k].to(device)\n\n        for k in no_model_data:\n            no_model_data[k] = no_model_data[k].to(device)\n\n        for k in gen_data:\n            gen_data[k] = gen_data[k].to(device)\n\n        return model_data, no_model_data, gen_data\n\n    def collate(self, samples):\n        bs = len(samples)\n\n        max_length = self.max_length\n        \n        model_data = {\n            \"input_ids\": torch.ones(bs, max_length, dtype=torch.long) * self.pad_id,\n            \"attention_mask\": torch.zeros(bs, max_length),\n        }\n        \n        if self.args.model_type in [\"gpt2\"]:\n            model_data[\"position_ids\"] = torch.zeros(bs, max_length, dtype=torch.long)\n            \n        no_model_data = {\n            \"label\": torch.ones(bs, max_length, dtype=torch.long) * -100,\n            
\"loss_mask\": torch.zeros(bs, max_length)\n        }\n        \n        gen_data = {\n            \"input_ids\": torch.ones(bs, self.max_prompt_length, dtype=torch.long) * self.pad_id,\n            \"attention_mask\": torch.zeros(bs, self.max_prompt_length, dtype=torch.long),\n        }\n\n        for i, samp in enumerate(samples):\n            self._process_lm(i, samp, model_data, no_model_data, gen_data)\n        \n        return model_data, no_model_data, gen_data\n"
  },
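In `LMTrainDataset._process_lm`, the token id 65535 acts as a sentinel separating the prompt from the response in a packed sequence: the sentinel is removed, labels are the input shifted by one position, and prompt positions are masked to -100 so no loss is computed on them. A simplified standalone version of that label construction (the `build_labels` helper is illustrative; the real method writes into preallocated batch tensors):

```python
import numpy as np

def build_labels(input_ids, max_length: int = 8):
    """Drop the 65535 prompt/response sentinel, build shifted next-token
    labels, and mask prompt positions with -100 (ignored by the loss)."""
    ids = np.asarray(input_ids)
    source_len = 1
    if 65535 in ids:
        source_len = int(np.where(ids == 65535)[0][0])
        ids = np.concatenate([ids[:source_len], ids[source_len + 1:]])
    ids = ids[:max_length]
    labels = np.full(max_length, -100, dtype=np.int64)
    n = len(ids)
    labels[:n - 1] = ids[1:]          # next-token targets, shifted by one
    labels[:source_len - 1] = -100    # no loss on prompt tokens
    return ids[:-1], labels
```

For `[10, 11, 65535, 12, 13]` the prompt is `[10, 11]`, the model inputs become `[10, 11, 12]`, and only the response tokens contribute targets.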
  {
    "path": "data_utils/prompt_datasets.py",
    "content": "import random\nimport torch\nimport os\nfrom torch.utils.data import Dataset\nfrom .distributed_indexed import DistributedMMapIndexedDataset\n\nfrom torch.distributed import get_rank, get_world_size\nfrom utils import print_rank\nfrom tqdm import tqdm\nimport json\n\n\nclass PromptDataset(Dataset):\n    def __init__(self, args, tokenizer, split, data_path=None, num=-1):\n        super().__init__()\n        self.tokenizer = tokenizer\n\n        self.args = args\n        self.tokenizer = tokenizer\n        self.split = split\n        self.pad_id = self.tokenizer.eos_token_id\n        self.max_length = args.max_length\n        self.min_prompt_length = args.min_prompt_length\n        self.max_prompt_length = args.max_prompt_length\n\n        if args.bin_data:\n            self.data = DistributedMMapIndexedDataset(data_path, f\"{split}\", get_rank(), get_world_size())\n        elif args.json_data:\n            self.data, self.origin_data = self.load_data_json(data_path)\n        else:\n            # txt data\n            self.data = self.load_data_txt(data_path)\n        \n        if os.path.exists(os.path.join(data_path, f\"{self.split}_{self.args.model_type}.jsonl\")):\n            with open(os.path.join(data_path, f\"{self.split}_{self.args.model_type}.jsonl\")) as f:\n                self.raw = [json.loads(line) for line in f.readlines()]\n                self.answers = [x[\"output\"] if isinstance(x[\"output\"], list) else [x[\"output\"]] for x in self.raw]\n        elif os.path.exists(os.path.join(data_path, f\"{split}.jsonl\")):\n            with open(os.path.join(data_path, f\"{split}.jsonl\")) as f:\n                self.raw = [json.loads(line) for line in f.readlines()]\n                self.answers = [x[\"output\"] if isinstance(x[\"output\"], list) else [x[\"output\"]] for x in self.raw]\n        else:\n            print_rank(\"WARNING: No answers exist\")\n            \n        self.label_map = {tokenizer.encode(x[0], 
add_special_tokens=False)[0]: x[0] for x in self.answers}\n            \n        self.num = min(num, len(self.data)) if num > 0 else len(self.data)\n        print_rank(f\"Num instances: {len(self.data)}\")\n            \n    def __len__(self):\n        return self.num\n\n    def load_data_json(self, data_path):\n        if os.path.exists(os.path.join(data_path, f\"{self.split}_{self.args.model_type}.jsonl\")):\n            data_path = os.path.join(data_path, f\"{self.split}_{self.args.model_type}.jsonl\")\n        else:\n            data_path = os.path.join(data_path, f\"{self.split}.jsonl\")\n        \n        with open(data_path) as f:\n            lines = f.readlines()\n        data_origin = [json.loads(line) for line in lines]\n        data = []\n        print_rank(\"Loading Data\")\n        for d in tqdm(data_origin, disable=(get_rank() != 0)):\n            prompt = d[\"prompt\"].replace(\"<n>\", \"\\n\")\n            prompt_ids = self.tokenizer.encode(prompt)\n            output_ids = None\n            if \"output\" in d:\n                if isinstance(d[\"output\"], list):\n                    output_ids = self.tokenizer.encode(d[\"output\"][0])\n                else:\n                    output_ids = self.tokenizer.encode(d[\"output\"])\n            data.append({\n                \"prompt_ids\": prompt_ids,\n                \"output_ids\": output_ids[:self.max_length - self.max_prompt_length]\n            })\n        print_rank(\"Load End\")\n        return data, data_origin\n\n    def load_data_txt(self, data_path):\n        with open(os.path.join(data_path, f\"{self.split}.txt\")) as f:\n            lines = f.readlines()\n        data = []\n        print_rank(\"Loading Data\")\n        for line in lines:\n            line = line.strip()\n            line = line.replace(\"<n>\", \"\\n\")\n            prompt = self.tokenizer.encode(line)\n            data.append(prompt)\n        print_rank(\"Load End\")\n        return data\n\n    def verbalizer(self):\n    
    return self.label_map\n\n    def __getitem__(self, index: int):\n        data = self.data[index]\n        if self.args.bin_data:\n            data = data.astype(int)\n        elif self.args.json_data:\n            output_ids = data[\"output_ids\"]\n            data = data[\"prompt_ids\"]\n        \n        prompt_length = self.max_prompt_length\n\n        prompt = data[:prompt_length]\n        rest = data[prompt_length:]  \n        if self.args.json_data:\n            if output_ids is not None:\n                rest = output_ids  \n    \n        return index, prompt, rest\n    \n    def collate(self, samples):\n        bs = len(samples)\n        \n        max_prompt_length = self.max_prompt_length\n        max_rest_length = max([len(samp[2]) for samp in samples])\n        \n        model_batch = {\n            \"input_ids\": torch.ones(bs, max_prompt_length, dtype=torch.long) * self.pad_id,\n            \"attention_mask\": torch.zeros(bs, max_prompt_length, dtype=torch.long),\n            # \"position_ids\": torch.zeros(bs, max_prompt_length, dtype=torch.long)\n        }\n        \n        no_model_batch = {\n            \"idx\": torch.zeros(bs, dtype=torch.long),\n            \"rest_ids\": torch.ones(bs, max_rest_length, dtype=torch.long) * self.pad_id\n        }\n        \n        for i, (idx, prompt, rest) in enumerate(samples):\n            # left padding\n            model_batch[\"input_ids\"][i][-len(prompt):] = torch.tensor(prompt, dtype=torch.long)\n            model_batch[\"attention_mask\"][i][-len(prompt):] = 1\n            # model_batch[\"position_ids\"][i][-len(prompt):] = torch.arange(len(prompt))\n            no_model_batch[\"idx\"][i] = idx\n            no_model_batch[\"rest_ids\"][i][:len(rest)] = torch.tensor(rest, dtype=torch.long)\n        \n        return model_batch, no_model_batch\n\n    def move_to_device(self, model_batch, no_model_batch, device):\n        for k in model_batch:\n            model_batch[k] = model_batch[k].to(device)     
   \n        for k in no_model_batch:\n            no_model_batch[k] = no_model_batch[k].to(device)    \n        \n        return model_batch, no_model_batch\n"
  },
  {
    "path": "distillm/__init__.py",
    "content": "from .losses import forward_kl, reverse_kl, symmetric_kl, js_distance, tv_distance\nfrom .losses import skewed_forward_kl, skewed_reverse_kl\nfrom .sampler import SampleGenerator\nfrom .buffer import ReplayBuffer\n"
  },
  {
    "path": "distillm/buffer.py",
    "content": "import random\nimport torch\nimport os\nimport json\nimport pickle\nimport numpy as np\nfrom torch.utils.data import Dataset\n\nfrom torch.distributed import get_rank, get_world_size, barrier\nfrom utils import print_rank\nfrom utils import save_rank\n\nfrom collections import namedtuple, deque\n\n\nclass ReplayBuffer:\n    def __init__(self, args):\n        self.args = args\n        self.replay_memory = deque(maxlen=args.capacity)\n        self.bs = args.batch_size\n        if args.model_type in [\"gpt2\", \"llama\"]:\n            self.data = namedtuple(\"Generation\", \\\n               field_names=[\"input_ids\", \"attention_mask\", \"position_ids\", \"label\", \"loss_mask\"])\n        else:\n            self.data = namedtuple(\"Generation\", \\\n               field_names=[\"input_ids\", \"attention_mask\", \"label\", \"loss_mask\"])\n            \n    def __len__(self):\n        return len(self.replay_memory)\n    \n    def sample(self):\n        data = random.sample(self.replay_memory, k=self.bs)\n        input_ids = torch.stack([d.input_ids for d in data], dim=0)\n        attention_mask = torch.stack([d.attention_mask for d in data], dim=0)\n        label = torch.stack([d.label for d in data], dim=0)\n        loss_mask = torch.stack([d.loss_mask for d in data], dim=0)\n        \n        if self.args.model_type in [\"gpt2\", \"llama\"]:\n            position_ids = torch.stack([d.position_ids for d in data], dim=0)\n            model_data = {\n                \"input_ids\": input_ids, \"attention_mask\": attention_mask, \"position_ids\": position_ids\n            }\n        else:\n            model_data = {\n                \"input_ids\": input_ids, \"attention_mask\": attention_mask\n            }\n            \n        no_model_data = {\n            \"label\": label, \"loss_mask\": loss_mask\n        }\n        return model_data, no_model_data\n        \n    \n    def move_to_device(self, model_data, no_model_data, device):\n        for k in 
model_data:\n            model_data[k] = model_data[k].to(device)\n\n        for k in no_model_data:\n            no_model_data[k] = no_model_data[k].to(device)\n\n        return model_data, no_model_data\n    \n    def move_to_memory(self, model_data, no_model_data):\n        device = torch.device(\"cpu\")\n        model_data_cpu, no_model_data_cpu = {}, {}\n        for k in model_data:\n            model_data_cpu[k] = model_data[k].to(device)\n        \n        for k in no_model_data:\n            no_model_data_cpu[k] = no_model_data[k].to(device)\n        \n        for idx in range(model_data_cpu[\"input_ids\"].size(0)):\n            if self.args.model_type in [\"gpt2\", \"llama\"]:\n                e = self.data(model_data_cpu[\"input_ids\"][idx], model_data_cpu[\"attention_mask\"][idx], model_data_cpu[\"position_ids\"][idx],\n                              no_model_data_cpu[\"label\"][idx], no_model_data_cpu[\"loss_mask\"][idx])\n            else:\n                e = self.data(model_data_cpu[\"input_ids\"][idx], model_data_cpu[\"attention_mask\"][idx],\n                              no_model_data_cpu[\"label\"][idx], no_model_data_cpu[\"loss_mask\"][idx])\n            self.replay_memory.append(e)"
  },
  {
    "path": "distillm/losses.py",
    "content": "import torch\nimport torch.nn.functional as F\n\ndef forward_kl(logits, teacher_logits, no_model_batch):\n    teacher_probs = F.softmax(teacher_logits, dim=-1, dtype=torch.float32)\n    inf_mask = torch.isinf(logits)\n    student_logprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)\n    prod_probs = torch.masked_fill(teacher_probs * student_logprobs, inf_mask, 0)\n    x = torch.sum(prod_probs, dim=-1).view(-1)\n    mask = (no_model_batch[\"label\"] != -100).int()\n    distil_loss = -torch.sum(x * mask.view(-1), dim=0) / torch.sum(mask.view(-1), dim=0)\n    return distil_loss\n\ndef reverse_kl(logits, teacher_logits, no_model_batch):\n    student_probs = F.softmax(logits, dim=-1, dtype=torch.float32)\n    student_logprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)\n    teacher_logprobs = F.log_softmax(teacher_logits, dim=-1, dtype=torch.float32)\n    inf_mask = torch.isinf(teacher_logits) | torch.isinf(logits)\n    prod_probs = torch.masked_fill(student_probs * teacher_logprobs, inf_mask, 0)\n    prod_probs -= torch.masked_fill(student_probs * student_logprobs, inf_mask, 0)\n    x = torch.sum(prod_probs, dim=-1).view(-1)\n    mask = (no_model_batch[\"label\"] != -100).int()\n    distil_loss = -torch.sum(x * mask.view(-1), dim=0) / torch.sum(mask.view(-1), dim=0)\n    return distil_loss\n\ndef symmetric_kl(logits, teacher_logits, no_model_batch, lam=0.9):\n    for_kl = forward_kl(logits, teacher_logits, no_model_batch)\n    rev_kl = reverse_kl(logits, teacher_logits, no_model_batch)\n    distil_loss = (1-lam) * for_kl + lam * rev_kl\n    return distil_loss\n    \ndef js_distance(logits, teacher_logits, no_model_batch, lam=0.9):\n    teacher_probs = F.softmax(teacher_logits, dim=-1, dtype=torch.float32)\n    student_probs = F.softmax(logits, dim=-1, dtype=torch.float32)\n    mixed_probs = (1-lam) * teacher_probs + lam * student_probs\n\n    teacher_logprobs = F.log_softmax(teacher_logits, dim=-1, dtype=torch.float32)\n    
student_logprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)\n    mixed_logprobs = torch.log(mixed_probs)\n\n    mask = (no_model_batch[\"label\"] != -100).int()\n    inf_mask = torch.isinf(logits) | torch.isinf(teacher_logits)\n\n    prod_probs = torch.masked_fill(student_probs * mixed_logprobs, inf_mask, 0)\n    prod_probs -= torch.masked_fill(student_probs * student_logprobs, inf_mask, 0)\n    x = torch.sum(prod_probs, dim=-1).view(-1)\n    distil_loss = lam * -torch.sum(x * mask.view(-1), dim=0) / torch.sum(mask.view(-1), dim=0)\n\n    prod_probs = torch.masked_fill(teacher_probs * mixed_logprobs, inf_mask, 0)\n    prod_probs -= torch.masked_fill(teacher_probs * teacher_logprobs, inf_mask, 0)\n    x = torch.sum(prod_probs, dim=-1).view(-1)\n    distil_loss += (1-lam) * -torch.sum(x * mask.view(-1), dim=0) / torch.sum(mask.view(-1), dim=0)\n    return distil_loss\n    \ndef tv_distance(logits, teacher_logits, no_model_batch):\n    teacher_probs = F.softmax(teacher_logits, dim=-1, dtype=torch.float32)\n    student_probs = F.softmax(logits, dim=-1, dtype=torch.float32)\n    \n    mask = (no_model_batch[\"label\"] != -100).int()\n    inf_mask = torch.isinf(logits) | torch.isinf(teacher_logits)\n    prod_probs = 0.5 * torch.masked_fill(torch.abs(teacher_probs - student_probs), inf_mask, 0)\n    x = torch.sum(prod_probs, dim=-1).view(-1)\n    distil_loss = torch.sum(x * mask.view(-1), dim=0) / torch.sum(mask.view(-1), dim=0)\n    return distil_loss\n\ndef skewed_forward_kl(logits, teacher_logits, no_model_batch, lam=0.1):\n    teacher_probs = F.softmax(teacher_logits, dim=-1, dtype=torch.float32)\n    student_probs = F.softmax(logits, dim=-1, dtype=torch.float32)\n    mixed_probs = lam * teacher_probs + (1-lam) * student_probs\n    mixed_logprobs = torch.log(mixed_probs)\n    \n    mask = (no_model_batch[\"label\"] != -100).int()\n    inf_mask = torch.isinf(logits) | torch.isinf(teacher_logits)\n\n    prod_probs = torch.masked_fill(teacher_probs * 
mixed_logprobs, inf_mask, 0)\n    x = torch.sum(prod_probs, dim=-1).view(-1)\n    distil_loss = -torch.sum(x * mask.view(-1), dim=0) / torch.sum(mask.view(-1), dim=0)\n    return distil_loss\n\ndef skewed_reverse_kl(logits, teacher_logits, no_model_batch, lam=0.1):\n    teacher_probs = F.softmax(teacher_logits, dim=-1, dtype=torch.float32)\n    student_probs = F.softmax(logits, dim=-1, dtype=torch.float32)\n    mixed_probs = (1-lam) * teacher_probs + lam * student_probs\n    \n    student_logprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)\n    mixed_logprobs = torch.log(mixed_probs)\n\n    mask = (no_model_batch[\"label\"] != -100).int()\n    inf_mask = torch.isinf(logits) | torch.isinf(teacher_logits)\n\n    prod_probs = torch.masked_fill(student_probs * mixed_logprobs, inf_mask, 0)\n    prod_probs -= torch.masked_fill(student_probs * student_logprobs, inf_mask, 0)\n    x = torch.sum(prod_probs, dim=-1).view(-1)\n    distil_loss = -torch.sum(x * mask.view(-1), dim=0) / torch.sum(mask.view(-1), dim=0)\n    return distil_loss"
  },
  {
    "path": "distillm/sampler.py",
    "content": "import torch\nimport os\nfrom transformers import GenerationConfig\n\n\nclass SampleGenerator():\n    def __init__(self, args, tokenizer):\n        self.args = args\n        self.tokenizer = tokenizer\n        self.max_new_token = self.args.max_length - self.args.max_prompt_length\n        self.pad_id = tokenizer.pad_token_id\n        self.generation_config = GenerationConfig(\n            do_sample=args.do_sample,\n            top_p=args.gen_top_p,\n            top_k=args.top_k,\n            temperature=args.temperature,\n            repetition_penalty=args.repetition_penalty,\n            max_length=args.max_length,\n            min_length=None,\n            eos_token_id=tokenizer.eos_token_id,\n            pad_token_id=tokenizer.eos_token_id,\n            return_dict_in_generate=True,\n            output_scores=False\n        )\n        \n    def run_sample(self, model, gen_data):\n        bs = gen_data[\"input_ids\"].size(0)\n        results = {\n            \"input_ids\": torch.ones(bs, self.args.max_length, dtype=torch.long, device=gen_data[\"input_ids\"].device) * self.pad_id,\n            \"attention_mask\": torch.zeros(bs, self.args.max_length, dtype=torch.float,  device=gen_data[\"input_ids\"].device),\n            \"position_ids\": torch.zeros(bs, self.args.max_length, dtype=torch.long,  device=gen_data[\"input_ids\"].device),\n            \"no_model_batch\": torch.ones(bs, self.args.max_length, dtype=torch.long, device=gen_data[\"input_ids\"].device) * -100,\n        }\n        \n        model.eval()\n        with torch.no_grad():\n            gen_out = model.generate(\n                **gen_data,\n                generation_config=self.generation_config,\n                max_new_tokens=self.max_new_token,\n            )\n            \n            full_ids = gen_out.sequences\n            input_ids = full_ids[:, :gen_data[\"input_ids\"].size(1)]\n            response_ids = full_ids[:, gen_data[\"input_ids\"].size(1):]\n            \n     
       for i in range(len(input_ids)):\n                result_id = torch.cat(\n                    (input_ids[i][input_ids[i] != self.pad_id],\n                     response_ids[i][response_ids[i] != self.pad_id]),\n                )\n                input_id = input_ids[i][input_ids[i] != self.pad_id]\n                response_id = response_ids[i][response_ids[i] != self.pad_id]\n                \n                results[\"input_ids\"][i, :len(result_id)] = result_id\n                results[\"position_ids\"][i, :len(result_id)] = torch.arange(len(result_id))\n                results[\"no_model_batch\"][i, len(input_id):len(result_id)] = response_id\n        results[\"attention_mask\"] = torch.where(results[\"input_ids\"] != self.pad_id, 1, 0)\n        results[\"attention_mask\"] = results[\"attention_mask\"].float()\n        results[\"no_model_batch\"] = results[\"no_model_batch\"].long()\n        return results"
  },
  {
    "path": "evaluate.py",
    "content": "import time\nimport os\n\nimport torch\nimport torch.distributed as dist\nimport deepspeed\n\nimport json\n\nfrom arguments import get_args\n\nfrom utils import initialize, print_args\nfrom utils import print_rank\nfrom utils import save_rank\nfrom utils import get_tokenizer, get_model\n\nfrom evaluate_main import evaluate_main, prepare_dataset_main\n\n\ntorch.set_num_threads(4)\n\n\ndef setup_model(args, ds_config, device):\n    # get the model\n    model = get_model(args, device)\n    # get the optimizer and lr_scheduler\n\n    optimizer, lr_scheduler = None, None\n        \n    model, _, _, _ = deepspeed.initialize(\n        model=model,\n        optimizer=optimizer,\n        args=args,\n        lr_scheduler=lr_scheduler,\n        mpu=None,\n        config_params=ds_config\n    )\n    \n    # get the memory usage\n    print_rank(\"Model mem\\n\", torch.cuda.memory_summary())\n    return model\n\n\ndef main():\n    torch.backends.cudnn.enabled = False\n    \n    args = get_args()\n    initialize(args)\n    \n    if dist.get_rank() == 0:\n        print_args(args)\n        with open(os.path.join(args.save, \"args.json\"), \"w\") as f:\n            json.dump(vars(args), f)\n    \n    device = torch.cuda.current_device()\n    cur_time = time.strftime(\"%Y-%m-%d %H:%M:%S\", time.localtime())\n    save_rank(\"\\n\\n\" + \"=\"*30 + f\" EXP at {cur_time} \" + \"=\"*30, os.path.join(args.save, \"log.txt\"))\n    print(\"OK\")\n    with open(args.deepspeed_config, \"r\") as f:\n        ds_config = json.load(f)\n\n    ds_config[\"gradient_accumulation_steps\"] = args.gradient_accumulation_steps\n    ds_config[\"train_micro_batch_size_per_gpu\"] = args.batch_size\n    ds_config[\"gradient_clipping\"] = args.clip_grad\n    ds_config[\"steps_per_print\"] = args.gradient_accumulation_steps\n    \n    if not args.do_train:\n        ds_config[\"zero_optimization\"][\"stage\"] = 0\n\n    args.fp32 = not ds_config[\"fp16\"][\"enabled\"] \n    args.deepspeed_config = 
None\n\n    # get the tokenizer\n    tokenizer = get_tokenizer(args)\n    if args.type == \"eval_main\":\n        dataset = prepare_dataset_main(\n            args,\n            tokenizer,\n        )\n    else:\n        raise NotImplementedError\n    model = setup_model(args, ds_config, device)\n    \n    if args.type == \"eval_main\":\n        evaluate_main(args, tokenizer, model, dataset[\"test\"], \"test\", 0, device)\n    else:\n        raise NotImplementedError\n    \n    \nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "evaluate_main.py",
    "content": "from data_utils.prompt_datasets import PromptDataset\nfrom transformers import GenerationConfig\nimport os\nimport nltk\nnltk.download(\"punkt\")\n\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\nfrom torch.utils.data import DataLoader, DistributedSampler\nimport torch.nn.functional as F\nfrom tqdm import tqdm\nimport numpy as np\nimport json\nfrom utils import print_rank, save_rank, all_gather\n\nfrom rouge_metric import compute_metrics\n\ntorch.set_num_threads(4)\n\n\ndef prepare_dataset_main(args, tokenizer):\n    data = {}\n    data[\"test\"] = PromptDataset(args, tokenizer, \"valid\", args.data_dir, args.dev_num)\n\n    return data\n\n\ndef run_model(args, tokenizer, model, dataset: PromptDataset, epoch, device):\n    \n    collate_fn = dataset.collate\n    dp_world_size = dist.get_world_size()\n    dp_rank = dist.get_rank()\n    dp_group = None\n    \n    sampler = DistributedSampler(dataset, shuffle=False, drop_last=False, rank=dp_rank, num_replicas=dp_world_size)\n    dataloader = DataLoader(\n        dataset, sampler=sampler, batch_size=args.eval_batch_size, num_workers=args.num_workers, collate_fn=collate_fn)\n    model.eval()\n    \n    all_query_ids = []\n    all_response_ids = []\n    all_lm_losses = []\n    \n    generation_config = GenerationConfig (\n        do_sample=args.do_sample,\n        top_p=args.top_p,\n        top_k=args.top_k,\n        temperature=args.temperature,\n        no_repeat_ngram_size=args.no_repeat_ngram_size,\n        repetition_penalty=args.repetition_penalty,\n        max_length=args.max_length,\n        min_length=None,\n        eos_token_id=tokenizer.eos_token_id,\n        pad_token_id=tokenizer.pad_token_id,\n        return_dict_in_generate=True,\n        output_scores=True\n    )\n\n    with torch.no_grad():\n        for it, (model_batch, no_model_batch) in enumerate(tqdm(dataloader, desc=f\"Evaluating {args.data_names} \", disable=(dist.get_rank() != 0))):\n            if it == 
0:\n                print_rank(\"############### Example ###############\")\n                print_rank(tokenizer.decode(model_batch[\"input_ids\"][0], skip_special_tokens=True))\n                print_rank(\"############### End ###############\")\n            \n            dataset.move_to_device(model_batch, no_model_batch, device)\n\n            all_ids = torch.cat([model_batch[\"input_ids\"], no_model_batch[\"rest_ids\"]], dim=-1)\n            input_ids = all_ids[:, :-1]\n            attention_mask = (input_ids != tokenizer.pad_token_id).long()\n            label_ids = all_ids[:, 1:]\n            label_ids = torch.masked_fill(label_ids, label_ids==tokenizer.pad_token_id, -100)\n            label_ids[:, :model_batch[\"input_ids\"].size(1)-1] = -100  \n            if args.model_type in [\"gpt2\"]:\n                position_ids = (torch.cumsum(attention_mask, dim=-1) - 1) * attention_mask\n                out = model(input_ids=input_ids, position_ids=position_ids, attention_mask=attention_mask, return_dict=True)\n            else:\n                out = model(input_ids=input_ids, attention_mask=attention_mask, return_dict=True)\n            logits = out.logits\n            loss_mask = (label_ids != -100).float()\n            loss_func = nn.CrossEntropyLoss(reduction=\"none\")\n            lm_loss = loss_func(logits.view(-1, logits.size(-1)), label_ids.view(-1)).view(label_ids.size())\n            lm_loss = torch.sum(lm_loss * loss_mask, -1) / torch.sum(loss_mask, -1)\n            all_lm_losses.append(lm_loss)\n\n            query_ids = model_batch[\"input_ids\"]\n            max_new_tokens = args.max_length - query_ids.size(1)\n            gen_out = model.generate(\n                **model_batch,\n                generation_config=generation_config,\n                max_new_tokens=max_new_tokens\n            )\n            full_ids = gen_out.sequences\n            response_ids = full_ids[:, query_ids.size(1):] # remove prompt (may include start token)\n            
\n            query_ids = F.pad(query_ids, (args.max_prompt_length-query_ids.size(1), 0, 0, 0), value=tokenizer.pad_token_id)\n            response_ids = F.pad(response_ids, (0, args.max_length-args.max_prompt_length-response_ids.size(1), 0, 0), value=tokenizer.pad_token_id)\n            \n            all_query_ids.append(query_ids)\n            all_response_ids.append(response_ids)\n\n    all_lm_losses = torch.cat(all_lm_losses)\n    mean_lm_loss = all_lm_losses.mean()\n    dist.all_reduce(mean_lm_loss, dist.ReduceOp.SUM, group=dp_group)\n    mean_lm_loss = mean_lm_loss.item() / dp_world_size\n        \n    all_query_ids = torch.cat(all_query_ids)\n    all_query_ids = all_gather(all_query_ids, dim=1, group=dp_group, world_size=dp_world_size, op=\"stack\")\n    all_query_ids = all_query_ids.view(-1, all_query_ids.size(-1))\n    all_query_ids = all_query_ids[:len(dataset)]\n    \n    all_response_ids = torch.cat(all_response_ids)\n    all_response_ids = all_gather(all_response_ids, dim=1, group=dp_group, world_size=dp_world_size, op=\"stack\")\n    all_response_ids = all_response_ids.view(-1, all_response_ids.size(-1))\n    all_response_ids = all_response_ids[:len(dataset)]\n        \n    return (\n        mean_lm_loss,\n        all_query_ids,\n        all_response_ids)\n\n\ndef evaluate_main(args, tokenizer, model, dataset: PromptDataset, split, epoch, device):\n        \n    lm_loss, query_ids, response_ids = run_model(args, tokenizer, model, dataset, epoch, device)\n    query_strs = tokenizer.batch_decode(query_ids, skip_special_tokens=True)\n    response_strs = tokenizer.batch_decode(response_ids, skip_special_tokens=True)\n    \n    with open(os.path.join(args.save, \"preds.txt\"), \"w\") as f:\n        for q, r in zip(query_strs, response_strs):\n            f.write(q.replace(\"\\n\", \"<n>\") + \"\\t\\t\" + r.replace(\"\\n\", \"<n>\") + \"\\n\")\n\n    all_preds = [[]]\n    for q, r in zip(query_strs, response_strs):\n        all_preds[0].append((q, q + r))\n 
   torch.save(all_preds, os.path.join(args.save, \"preds.pt\"))\n\n    all_responses = []\n    with open(os.path.join(args.save, \"answers.jsonl\"), \"w\") as f:\n        for p in all_preds[0]:\n            q, r = p\n            r = r[len(q):]\n            idx = r.find(\"<|endoftext|>\")\n            if idx >= 0:\n                r = r[:idx]\n            f.write(json.dumps({\n                \"text\": r.replace(\"<n>\", \"\\n\").strip()\n            }) + \"\\n\")\n            all_responses.append(r.replace(\"<n>\", \"\\n\").strip())\n\n    gen_res = compute_metrics(all_responses, dataset.answers)\n\n    mean_gen_length = np.mean([len(tokenizer.encode(s)) for s in response_strs])\n\n    log_str = f\"{split} | name: {args.data_names} | {gen_res} | lm_loss {round(lm_loss, 4)} | avg. gen length: {mean_gen_length}\"\n    print_rank(log_str)\n    save_rank(log_str, os.path.join(args.save, \"log.txt\"))\n"
  },
  {
    "path": "finetune.py",
    "content": "import time\nimport os\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nimport torch.distributed as dist\nfrom torch.utils.data import DataLoader, DistributedSampler\nfrom torch.optim import AdamW\nimport deepspeed\n\nimport random\nimport json\nfrom tqdm import tqdm\nimport math\nimport datetime\n\nfrom transformers import (\n    AutoModelForCausalLM,\n    AutoTokenizer,\n    AutoConfig,\n    GenerationConfig)\n\nfrom transformers import get_constant_schedule_with_warmup, get_polynomial_decay_schedule_with_warmup\nfrom torch.optim.lr_scheduler import CosineAnnealingLR\n\nfrom arguments import get_args\n\nfrom data_utils.lm_datasets import LMTrainDataset\nfrom utils import get_optimizer_params, get_optimizer_params_peft, print_args, initialize\nfrom utils import print_rank, get_rank\nfrom utils import save_rank\nfrom utils import all_gather\nfrom utils import load_parallel, save_parallel\nfrom utils import get_tokenizer, get_model\n\nfrom distillm import forward_kl, reverse_kl, js_distance, tv_distance\nfrom distillm import skewed_forward_kl, skewed_reverse_kl\nfrom distillm import SampleGenerator, ReplayBuffer\n\nfrom rouge_metric import compute_metrics\n\nfrom peft import PeftModel\n\ntorch.set_num_threads(4)\n\n\ndef get_teacher_model(args, device):\n    config = AutoConfig.from_pretrained(args.teacher_model_path)\n    if args.model_parallel:\n        raise NotImplementedError\n    else:\n        config.is_model_parallel = False\n        try: model = AutoModelForCausalLM.from_pretrained(args.teacher_model_path, config=config, device_map={\"\": device}, torch_dtype=torch.float16)\n        except:\n            model = AutoModelForCausalLM.from_pretrained(args.teacher_model_path, config=config, device_map={\"\": device}, torch_dtype=torch.float32)\n            model = model.half()\n        \n        if args.peft is not None and 
args.teacher_peft_path is not None:\n            if args.peft == \"lora\":\n                model = PeftModel.from_pretrained(model, args.teacher_peft_path)\n                model = model.merge_and_unload()\n            else:\n                raise NotImplementedError\n        else:\n            if dist.get_rank() == 0:\n                print(' > number of parameters: {}'.format(\n                    sum([p.nelement() for p in model.parameters()])), flush=True)\n\n    model.eval()\n    \n    return model\n\n\ndef get_optimizer(args, model):\n    \"\"\"Set up the optimizer.\"\"\"\n\n    # Build parameter groups (weight decay and non-decay).\n    while isinstance(model, DDP):\n        model = model.module\n\n    if args.peft is not None:\n        param_groups = get_optimizer_params_peft(args, model)\n    else:\n        param_groups = get_optimizer_params(args, model)\n\n    # Use AdamW.\n    optimizer = AdamW(param_groups, lr=args.lr, weight_decay=args.weight_decay)\n    print_rank(f'Optimizer = {optimizer.__class__.__name__}')\n    return optimizer\n\n\ndef get_learning_rate_scheduler(args, optimizer):\n    if args.total_iters is None:\n        args.total_iters = args.train_iters_per_epoch * args.epochs\n    if args.lr_decay_style == \"constant\":\n        lr_scheduler = get_constant_schedule_with_warmup(\n            optimizer,\n            num_warmup_steps=args.warmup_iters)\n    elif args.lr_decay_style == \"cosine\":\n        lr_scheduler = CosineAnnealingLR(\n            optimizer,\n            T_max=args.total_iters,\n            eta_min=args.lr_min)\n    elif args.lr_decay_style == \"noam\":\n        lr_scheduler = get_polynomial_decay_schedule_with_warmup(\n            optimizer,\n            num_warmup_steps=args.warmup_iters,\n            num_training_steps=args.total_iters,\n            power=0.5)\n    else:\n        raise ValueError(f\"lr_scheduler of type {args.lr_decay_style} is not supported yet.\")\n\n    return lr_scheduler\n\n\ndef 
setup_model_and_optimizer(args, ds_config, device, set_optim=True):\n    # get the model\n    model = get_model(args, device)\n    # get the optimizer and lr_scheduler\n    if set_optim:\n        optimizer = get_optimizer(args, model)\n        lr_scheduler = get_learning_rate_scheduler(args, optimizer)\n    else:\n        optimizer, lr_scheduler = None, None\n        \n    model, optimizer, _, lr_scheduler = deepspeed.initialize(\n        model=model,\n        optimizer=optimizer,\n        args=args,\n        lr_scheduler=lr_scheduler,\n        mpu=None,\n        config_params=ds_config\n    )\n    \n    # get the memory usage\n    print_rank(\"Model mem\\n\", torch.cuda.memory_summary())\n    return model, optimizer, lr_scheduler\n\n\ndef prepare_dataset(args, tokenizer):\n    data = {}\n    rng_sample = random.Random(args.seed)\n    if args.do_train:\n        data[\"train\"] = LMTrainDataset(args, tokenizer, args.data_dir, \"train\", args.train_num, args.train_ratio, rng_sample)\n        print_rank(\"train num\", len(data[\"train\"]))\n        data[\"dev\"] = LMTrainDataset(args, tokenizer, args.data_dir, \"valid\", args.dev_num, args.dev_ratio, rng_sample)\n    elif args.do_eval:\n        data[\"test\"] = LMTrainDataset(args, tokenizer, args.data_dir, \"valid\", args.dev_num, args.dev_ratio, rng_sample)\n    else:\n        raise ValueError(\"Do train and do eval must set one\")\n        \n    # pre-trained dataset\n    if args.do_train and args.lm_data_dir is not None:\n        data[\"pt_train\"] = LMTrainDataset(args, tokenizer, args.lm_data_dir, \"train\", args.train_num, args.train_ratio, rng_sample)\n        print_rank(\"train num\", len(data[\"pt_train\"]))\n    return data\n\n\ndef pt_loss(args, model, model_batch, no_model_batch):\n    loss_mask = (no_model_batch[\"label\"] != -100).int()\n    outputs = model(**model_batch, return_dict=True, use_cache=False)\n    logits = outputs.logits\n    loss_fn = nn.CrossEntropyLoss(ignore_index=-100)\n    lm_loss = 
loss_fn(logits.view(-1, logits.size(-1)), no_model_batch[\"label\"].view(-1))\n    return lm_loss\n\n\ndef get_distil_loss(args, tokenizer, model, teacher_model, model_batch, no_model_batch, logits):\n    with torch.no_grad():\n        teacher_model.eval()\n        teacher_outputs = teacher_model(**model_batch, use_cache=False)\n        teacher_logits = teacher_outputs.logits\n    if args.model_parallel:\n        raise NotImplementedError\n    else:\n        if \"sfkl\" in args.type:\n            distil_loss = skewed_forward_kl(logits, teacher_logits, no_model_batch, lam=args.skew_alpha)\n        elif \"srkl\" in args.type:\n            distil_loss = skewed_reverse_kl(logits, teacher_logits, no_model_batch, lam=args.skew_alpha)\n        elif \"jsd\" in args.type:\n            distil_loss = js_distance(logits, teacher_logits, no_model_batch)\n        elif \"tvd\" in args.type:\n            distil_loss = tv_distance(logits, teacher_logits, no_model_batch)\n        elif \"fkl\" in args.type or args.type == \"kd\":\n            distil_loss = forward_kl(logits, teacher_logits, no_model_batch)\n        elif \"rkl\" in args.type:\n            distil_loss = reverse_kl(logits, teacher_logits, no_model_batch)\n        else:\n            raise NotImplementedError\n    return distil_loss\n\n\ndef get_teacher_lm_loss(args, tokenizer, model, teacher_model, model_batch):\n    with torch.no_grad():\n        t_gen_out = teacher_model.generate(\n            **model_batch,\n            pad_token_id=tokenizer.pad_token_id,\n            eos_token_id=tokenizer.eos_token_id,\n            max_length=args.max_length,\n            top_k=0,\n            top_p=1,\n            temperature=1.0,\n            do_sample=True,\n            return_dict_in_generate=True,\n            output_scores=False)\n    \n    full_ids = t_gen_out.sequences\n    \n    input_ids = full_ids[:, :-1]\n    mask = (input_ids != tokenizer.pad_token_id).long()\n    labels = full_ids[:, 1:]    \n    labels = 
torch.masked_fill(labels, mask==0, -100)\n    labels[:, :model_batch[\"input_ids\"].size(1)-1] = -100\n    loss_mask = (labels != -100).float()\n    \n    new_batch = {\n        \"input_ids\": input_ids,\n        \"attention_mask\": mask,\n    }\n    \n    if args.model_type in [\"gpt2\"]:\n        position_ids = torch.cumsum(mask, dim=-1) - 1\n        position_ids = torch.masked_fill(position_ids, mask==0, 0)    \n        new_batch[\"position_ids\"] = position_ids    \n    \n    loss_fn = nn.CrossEntropyLoss(ignore_index=-100)\n\n    outputs = model(**new_batch, return_dict=True, use_cache=False)\n    logits = outputs.logits\n    lm_loss = loss_fn(logits.view(-1, logits.size(-1)), labels.view(-1))\n\n    return lm_loss\n\n\ndef finetune(args, tokenizer: AutoTokenizer, model: deepspeed.DeepSpeedEngine, optimizer: AdamW, lr_scheduler, dataset, device, teacher_model=None):\n    print_rank(\"Start Fine-tuning\")\n\n    # print_inspect(model, '*')\n    if args.model_parallel:\n        raise NotImplementedError\n    else:\n        dp_world_size = dist.get_world_size()\n        dp_rank = dist.get_rank()\n        dp_group = None\n        loss_func = nn.CrossEntropyLoss()\n\n    sampler = DistributedSampler(dataset[\"train\"], shuffle=True, drop_last=True, rank=dp_rank, num_replicas=dp_world_size)\n    train_dataloader = DataLoader(\n        dataset['train'], sampler=sampler, batch_size=args.batch_size, num_workers=args.num_workers, collate_fn=dataset[\"train\"].collate)\n    \n    if \"pt_train\" in dataset:\n        pt_sampler = DistributedSampler(dataset[\"pt_train\"], shuffle=True, drop_last=True, rank=dp_rank, num_replicas=dp_world_size)\n        pt_train_dataloader = DataLoader(\n        dataset['pt_train'], sampler=pt_sampler, batch_size=args.batch_size, num_workers=args.num_workers, collate_fn=dataset[\"pt_train\"].collate)\n        pt_train_iter = iter(pt_train_dataloader)\n        \n    student_generator = SampleGenerator(args, tokenizer)\n\n    step, global_step 
= 1, 1\n    total_loss, total_distil_loss, total_time = 0.0, 0.0, 0.0\n    \n    adaptive_threshold = args.init_threshold if \"adaptive\" in args.type else None\n    prev_avg_loss = evaluate(args, tokenizer, model, dataset[\"dev\"], \"dev\", 0, device, adaptive_threshold)\n    replay_buffer = ReplayBuffer(args)\n    \n    for epoch in range(args.epochs):\n        sampler.set_epoch(epoch)\n\n        model.train()\n        for it, (model_batch, no_model_batch, gen_data) in enumerate(train_dataloader):\n            dataset[\"train\"].move_to_device(model_batch, no_model_batch, gen_data, device)\n            \n            if args.lm_data_dir is not None:\n                try:\n                    pt_model_batch, pt_no_model_batch, pt_gen_data = next(pt_train_iter)\n                except StopIteration:\n                    # restart the pre-training data iterator once it is exhausted\n                    pt_train_iter = iter(pt_train_dataloader)\n                    pt_model_batch, pt_no_model_batch, pt_gen_data = next(pt_train_iter)\n                    \n                dataset[\"pt_train\"].move_to_device(pt_model_batch, pt_no_model_batch, pt_gen_data, device)\n            \n            torch.cuda.synchronize()\n            st_time = time.time()\n            \n            # replay sampling threshold (only meaningful for the \"adaptive\" types)\n            if \"adaptive\" in args.type:\n                if args.replay_ratio == \"constant\":\n                    samp_threshold = adaptive_threshold * 0.5\n                elif args.replay_ratio == \"increasing\":\n                    samp_threshold = adaptive_threshold * global_step / args.total_iters\n                else:\n                    samp_threshold = adaptive_threshold * (1 - global_step / args.total_iters)\n            else:\n                samp_threshold = None\n            \n            # data generation\n            if args.student_gen:\n  
              r = np.random.uniform(0, 1)\n                if \"mixed\" in args.type and r < args.mixed_alpha:\n                    model_batch = student_generator.run_sample(model, gen_data)\n                    no_model_batch[\"label\"] = model_batch.pop(\"no_model_batch\")\n                    \n                    replay_buffer.move_to_memory(model_batch, no_model_batch)\n                    model_batch, no_model_batch = replay_buffer.sample()\n                    model_batch, no_model_batch = replay_buffer.move_to_device(model_batch, no_model_batch, device)\n                    \n                elif \"adaptive\" in args.type and (r < samp_threshold or (r < adaptive_threshold and len(replay_buffer) < args.capacity)):\n\n                    model_batch = student_generator.run_sample(model, gen_data)\n                    no_model_batch[\"label\"] = model_batch.pop(\"no_model_batch\")\n                    \n                    if args.model_type in [\"opt\"]:\n                        model_batch.pop('position_ids')\n                        \n                    replay_buffer.move_to_memory(model_batch, no_model_batch)\n                    \n                elif \"adaptive\" in args.type and r < adaptive_threshold:\n                    model_batch, no_model_batch = replay_buffer.sample()\n                    model_batch, no_model_batch = replay_buffer.move_to_device(model_batch, no_model_batch, device)\n                    \n                model.train()\n\n            outputs = model(**model_batch, use_cache=False)\n            \n            logits = outputs.logits\n            if args.model_parallel:\n                raise NotImplementedError\n            else:\n                lm_loss = loss_func(logits.float().view(-1, logits.shape[-1]), no_model_batch[\"label\"].view(-1))\n            \n            if teacher_model is not None:\n                distil_loss = get_distil_loss(args, tokenizer, model, teacher_model, model_batch, no_model_batch, logits)\n          
      loss = (1 - args.kd_ratio) * lm_loss + args.kd_ratio * distil_loss\n            else:\n                loss = lm_loss\n                \n            if args.lm_data_dir is not None:\n                assert args.lm_coef is not None\n                loss += args.lm_coef * pt_loss(args, model, pt_model_batch, pt_no_model_batch)\n                \n            model.backward(loss)\n            model.step()\n             \n            dist.all_reduce(loss, dist.ReduceOp.SUM, group=dp_group)\n            global_loss = loss.item() / dp_world_size\n\n            global_distil_loss = 0\n            if teacher_model is not None:\n                dist.all_reduce(distil_loss, dist.ReduceOp.SUM, group=dp_group)\n                global_distil_loss = distil_loss.item() / dp_world_size\n                total_distil_loss += global_distil_loss\n    \n            torch.cuda.synchronize()\n            elapsed_time = time.time() - st_time\n\n            total_loss += global_loss\n            total_time += elapsed_time\n\n            # Logging\n            def get_log(log_loss, log_distil_loss, log_time):\n                return \"train | epoch {:3d} | Iter: {:6d}/{:6d} | global iter: {:6d}/{:6d} | loss: {:.4f} | ds_loss: {:.4f} | lr: {:.4e} | scale: {:10.4f} | micro time: {:.3f} | step time: {:.3f}\".format(\n                    epoch,\n                    step,\n                    args.total_iters * args.gradient_accumulation_steps,\n                    global_step,\n                    args.total_iters,\n                    log_loss,\n                    log_distil_loss,\n                    lr_scheduler.get_last_lr()[0],\n                    optimizer.cur_scale if hasattr(optimizer, \"cur_scale\") else 0,\n                    elapsed_time,\n                    log_time,\n                )\n\n            if args.mid_log_num > 0:\n                mid_log_step = args.gradient_accumulation_steps // args.mid_log_num\n                mid_log_step = 1 if mid_log_step == 0 else 
mid_log_step\n                if step % mid_log_step == 0:\n                    print_rank(get_log(global_loss, global_distil_loss, 0))\n\n            if global_step % args.log_interval == 0 and step % args.gradient_accumulation_steps == 0:\n                log_str = get_log(\n                    total_loss / (args.log_interval * args.gradient_accumulation_steps),\n                    total_distil_loss / (args.log_interval * args.gradient_accumulation_steps),\n                    total_time / (args.log_interval))\n                print_rank(\"*\" * 100)\n                print_rank(log_str)\n                print_rank(args.save)\n                print_rank(\"*\" * 100)\n                save_rank(log_str, os.path.join(args.save, \"log.txt\"))\n                total_loss, total_distil_loss, total_time = 0.0, 0.0, 0.0\n            \n            # Checkpointing\n            if args.save and args.save_interval and global_step % args.save_interval == 0 and step % args.gradient_accumulation_steps == 0:\n                save_dir_path = os.path.join(args.save, str(global_step))\n                if args.model_parallel:\n                    raise NotImplementedError\n                else:\n                    if dist.get_rank() == 0:\n                        os.makedirs(save_dir_path, exist_ok=True)\n                        print_rank(f\"Model save to {save_dir_path}\")\n                        tokenizer.save_pretrained(save_dir_path)\n                        model.module.save_pretrained(save_dir_path, safe_serialization=False)\n                dist.barrier()\n\n            # Evaluation\n            if args.eval_interval and global_step % args.eval_interval == 0 and step % args.gradient_accumulation_steps == 0:\n                curr_avg_loss = evaluate(args, tokenizer, model, dataset[\"dev\"], \"dev\", epoch, device, adaptive_threshold)\n                if \"adaptive\" in args.type:\n                    if curr_avg_loss >= prev_avg_loss + args.loss_eps:\n                       
 adaptive_threshold += 0.1\n                        adaptive_threshold = min(adaptive_threshold, 1.0)\n                        prev_avg_loss = curr_avg_loss\n                    \n                model.train()\n                \n            step += 1\n            if step % args.gradient_accumulation_steps == 0:\n                global_step += 1\n            \n            if global_step > args.total_iters:\n                break\n        \n        # also leave the epoch loop once the iteration budget is reached\n        if global_step > args.total_iters:\n            break\n            \n    return model\n\n\ndef evaluate(args, tokenizer, model, dataset: LMTrainDataset, split, epoch, device, adaptive_threshold=None):\n    \n    collate_fn = dataset.collate\n\n    if args.model_parallel:\n        raise NotImplementedError\n    else:\n        dp_world_size = dist.get_world_size()\n        dp_rank = dist.get_rank()\n        dp_group = None\n        loss_func = nn.CrossEntropyLoss()\n\n    print_rank(\"dp size\", dp_world_size)\n\n    generation_config = GenerationConfig(\n        do_sample=args.do_sample,\n        top_p=args.top_p,\n        top_k=args.top_k,\n        temperature=args.temperature,\n        repetition_penalty=args.repetition_penalty,\n        max_length=args.max_length,\n        min_length=None,\n        eos_token_id=tokenizer.eos_token_id,\n        pad_token_id=tokenizer.eos_token_id,\n        return_dict_in_generate=True,\n        output_scores=False\n    )\n\n    sampler = DistributedSampler(dataset, shuffle=False, drop_last=False, rank=dp_rank, num_replicas=dp_world_size)\n    dataloader = DataLoader(\n        dataset, sampler=sampler, batch_size=args.eval_batch_size, num_workers=args.num_workers, collate_fn=collate_fn)\n\n    model.eval()\n    all_loss = 0.0\n    step = 0\n    \n    all_response_ids = []\n    \n    with torch.no_grad():\n        for it, (model_batch, no_model_batch, gen_data) in enumerate(tqdm(dataloader, desc=\"Evaluating\", disable=(dist.get_rank() != 0))):\n            print_rank(f\"{it}/{len(dataloader)}\")\n            dataset.move_to_device(model_batch, no_model_batch, 
gen_data, device)\n            logits = model(**model_batch).logits\n            if args.model_parallel:\n                raise NotImplementedError\n            else:\n                loss = loss_func(logits.view(-1, logits.shape[-1]), no_model_batch[\"label\"].view(-1))\n            \n            max_new_tokens = args.max_length - gen_data[\"input_ids\"].size(1)\n            \n            if args.eval_gen:            \n                gen_out = model.generate(\n                    **gen_data,\n                    generation_config=generation_config,\n                    max_new_tokens=max_new_tokens)\n                \n                full_ids = gen_out.sequences\n                \n                full_ids = F.pad(\n                    full_ids,\n                    (0, args.max_length - full_ids.shape[1]),\n                    value=tokenizer.pad_token_id,\n                )\n                \n                response_ids = full_ids[:, gen_data[\"input_ids\"].size(1):]\n                all_response_ids.append(response_ids)\n                    \n            dist.all_reduce(loss, dist.ReduceOp.SUM, group=dp_group)\n            loss = loss / dp_world_size\n            all_loss += loss.item()\n            step += 1\n    \n    if args.eval_gen:\n        all_response_ids = torch.cat(all_response_ids, dim=0)\n        all_response_ids = all_gather(all_response_ids, dim=1, world_size=dp_world_size, group=dp_group, op=\"stack\")\n        all_response_ids = all_response_ids.view(-1, all_response_ids.size(-1))\n        \n        responses = tokenizer.batch_decode(all_response_ids, skip_special_tokens=True)\n    \n    if get_rank() == 0:\n        if args.eval_gen:\n            references = dataset.answers\n            responses = responses[:len(references)]\n            \n            res = compute_metrics(responses, references)\n        \n            eval_dir = os.path.join(args.save, \"eval\", str(epoch))\n            print_rank(eval_dir)\n            os.makedirs(eval_dir, 
exist_ok=True)\n            with open(os.path.join(eval_dir, \"answers.jsonl\"), \"w\") as f:\n                for resp in responses:\n                    f.write(json.dumps({\"text\": resp}) + \"\\n\")\n        else:\n            res = {}\n    \n        avg_loss = all_loss / step\n        \n        if \"adaptive\" in args.type:\n            log_str = f\"{split} | avg_loss: {avg_loss} | {res} | threshold: {adaptive_threshold}\"\n        else:\n            log_str = f\"{split} | avg_loss: {avg_loss} | {res}\"\n        print_rank(log_str)\n        save_rank(log_str, os.path.join(args.save, \"log.txt\"))\n        \n    return all_loss / step\n\n\ndef main():\n    torch.backends.cudnn.enabled = False\n    \n    args = get_args()\n    initialize(args)\n    \n    if dist.get_rank() == 0:\n        print_args(args)\n        with open(os.path.join(args.save, \"args.json\"), \"w\") as f:\n            json.dump(vars(args), f)\n    \n    device = torch.cuda.current_device()\n    cur_time = time.strftime(\"%Y-%m-%d %H:%M:%S\", time.localtime())\n    save_rank(\"\\n\\n\" + \"=\"*30 + f\" EXP at {cur_time} \" + \"=\"*30, os.path.join(args.save, \"log.txt\"))\n    \n    with open(args.deepspeed_config, \"r\") as f:\n        ds_config = json.load(f)\n\n    ds_config[\"gradient_accumulation_steps\"] = args.gradient_accumulation_steps\n    ds_config[\"train_micro_batch_size_per_gpu\"] = args.batch_size\n    ds_config[\"gradient_clipping\"] = args.clip_grad\n    ds_config[\"steps_per_print\"] = 10000000\n    \n    if not args.do_train:\n        ds_config[\"zero_optimization\"][\"stage\"] = 0\n    \n    args.fp32 = not ds_config[\"fp16\"][\"enabled\"]    \n    args.deepspeed_config = None\n    \n    # get the tokenizer\n    tokenizer = get_tokenizer(args)\n    dataset = prepare_dataset(\n        args,\n        tokenizer,\n    )\n    \n    dp_world_size = dist.get_world_size()\n    \n    if args.do_train:\n        args.train_iters_per_epoch = int(len(dataset[\"train\"]) / 
(args.batch_size * dp_world_size * args.gradient_accumulation_steps))\n        print_rank(\"Train iters per epoch\", args.train_iters_per_epoch)\n        if args.total_iters is None:\n            args.total_iters = args.train_iters_per_epoch * args.epochs\n        if args.epochs is None:\n            args.epochs = math.ceil(args.total_iters / args.train_iters_per_epoch)\n        print_rank(\"total_iters\", args.total_iters)\n        \n        if args.save_interval == -1:\n            args.save_interval = args.train_iters_per_epoch\n        \n        if args.eval_interval == -1:\n            args.eval_interval = args.train_iters_per_epoch\n    \n    model, optimizer, lr_scheduler = setup_model_and_optimizer(args, ds_config, device, set_optim=args.do_train)\n    \n    if args.teacher_model_type is None:\n        args.teacher_model_type = args.model_type\n    \n    if args.teacher_model_path is not None:\n        teacher_model = get_teacher_model(args, device)\n    else:\n        teacher_model = None\n    \n    if args.do_train:\n        model = finetune(args, tokenizer, model, optimizer, lr_scheduler, dataset, device, teacher_model=teacher_model)\n   \n    if args.do_eval:\n        evaluate(args, tokenizer, model, dataset[\"test\"], \"test\", 0, device)\n        \n    \nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "generate.py",
    "content": "import time\nimport os\n\nimport torch\nimport torch.distributed as dist\nfrom torch.utils.data import DataLoader, DistributedSampler\nimport deepspeed\nimport numpy as np\n\nimport json\nfrom tqdm import tqdm\n\nfrom transformers import mpu\n\nfrom arguments import get_args\n\nfrom data_utils.prompt_datasets import PromptDataset\nfrom utils import print_args, initialize\nfrom utils import print_rank, get_rank\nfrom utils import save_rank\nfrom utils import all_gather\nfrom utils import get_tokenizer, get_model\n\n\ntorch.set_num_threads(4)\n\n\ndef setup_model(args, ds_config, device):\n    # get the model\n    model = get_model(args, device)\n    # get the optimizer and lr_scheduler\n    optimizer, lr_scheduler = None, None\n        \n    model, _, _, _ = deepspeed.initialize(\n        model=model,\n        optimizer=optimizer,\n        args=args,\n        lr_scheduler=lr_scheduler,\n        mpu=mpu if args.model_parallel else None,\n        config_params=ds_config\n    )\n    \n    # get the memory usage\n    print_rank(\"Model mem\\n\", torch.cuda.memory_summary())\n    return model\n\n\ndef prepare_dataset(args, tokenizer):\n    data = {}\n    data = PromptDataset(args, tokenizer, \"train\", data_path=args.data_dir, num=args.gen_num)\n    print_rank(\"gen num\", len(data))\n    return data\n\n\ndef generate(args, tokenizer, model, dataset, device):\n    \n    collate_fn = dataset.collate\n\n    if args.model_parallel:\n        dp_world_size = mpu.get_data_parallel_world_size()\n        dp_rank = mpu.get_data_parallel_rank()\n        dp_group = mpu.get_data_parallel_group()\n    else:\n        dp_world_size = dist.get_world_size()\n        dp_rank = dist.get_rank()\n        dp_group = None\n\n    sampler = DistributedSampler(dataset, shuffle=False, drop_last=False, rank=dp_rank, num_replicas=dp_world_size)\n    dataloader = DataLoader(\n        dataset, sampler=sampler, batch_size=args.eval_batch_size, num_workers=args.num_workers, 
collate_fn=collate_fn)\n\n    model.eval()\n    all_gen_ids = []\n    all_idxs = []\n    max_new_tokens = args.max_length - args.max_prompt_length\n\n    with torch.no_grad():\n        for it, (model_batch, no_model_batch) in enumerate(tqdm(dataloader, desc=\"Generating\", disable=(dist.get_rank() != 0))):\n            dataset.move_to_device(model_batch, no_model_batch, device)\n            t_gen_out = model.generate(\n                **model_batch,\n                pad_token_id=tokenizer.pad_token_id,\n                eos_token_id=tokenizer.eos_token_id,\n                max_new_tokens=max_new_tokens,\n                top_k=args.top_k,\n                top_p=args.top_p,\n                temperature=args.temperature,\n                do_sample=True,\n                return_dict_in_generate=True,\n                output_scores=False)\n    \n            full_ids = t_gen_out.sequences\n            gen_ids = full_ids[:, model_batch[\"input_ids\"].size(1):]\n            buffer = torch.ones(gen_ids.size(0), max_new_tokens, dtype=torch.long, device=gen_ids.device) * tokenizer.pad_token_id\n            buffer[:, :gen_ids.size(1)] = gen_ids\n            all_gen_ids.append(buffer)\n            all_idxs.append(no_model_batch[\"idx\"])            \n\n    all_idxs = all_gather(torch.cat(all_idxs, dim=0), dim=0, world_size=dp_world_size, group=dp_group).cpu().tolist()\n    all_gen_ids = all_gather(torch.cat(all_gen_ids, dim=0), dim=0, world_size=dp_world_size, group=dp_group).cpu().tolist()\n    \n    if get_rank() == 0:\n        all_gen_strs = tokenizer.batch_decode(all_gen_ids, skip_special_tokens=True)\n        mean_lens = np.mean([len(tokenizer.encode(x)) for x in all_gen_strs[:100]])\n        \n        log_str = f\"gen | avg. 
lens: {mean_lens}\"\n        print_rank(log_str)\n        save_rank(log_str, os.path.join(args.save, \"log.txt\"))\n        \n        assert len(all_idxs) == len(all_gen_strs)\n\n        for idx, g in zip(all_idxs, all_gen_strs):\n            dataset.origin_data[idx][\"gen_answer\"] = g\n        \n        with open(os.path.join(args.save, \"raw.jsonl\"), \"w\") as f:\n            for d in dataset.origin_data:\n                if \"gen_answer\" in d:\n                    f.write(json.dumps(d) + \"\\n\")\n\n    dist.barrier()\n\n\ndef main():\n    torch.backends.cudnn.enabled = False\n    \n    args = get_args()\n    initialize(args)\n    \n    if dist.get_rank() == 0:\n        print_args(args)\n        with open(os.path.join(args.save, \"args.json\"), \"w\") as f:\n            json.dump(vars(args), f)\n    \n    device = torch.cuda.current_device()\n    cur_time = time.strftime(\"%Y-%m-%d %H:%M:%S\", time.localtime())\n    save_rank(\"\\n\\n\" + \"=\"*30 + f\" EXP at {cur_time} \" + \"=\"*30, os.path.join(args.save, \"log.txt\"))\n    \n    with open(args.deepspeed_config, \"r\") as f:\n        ds_config = json.load(f)\n\n    ds_config[\"steps_per_print\"] = args.gradient_accumulation_steps\n    ds_config[\"zero_optimization\"][\"stage\"] = 0\n\n    args.fp32 = not ds_config[\"fp16\"][\"enabled\"]\n    args.deepspeed_config = None\n    \n    # get the tokenizer\n    tokenizer = get_tokenizer(args)\n    dataset = prepare_dataset(\n        args,\n        tokenizer,\n    )\n    \n    model = setup_model(args, ds_config, device)\n    \n    generate(args, tokenizer, model, dataset, device)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "install.sh",
    "content": "export NCCL_DEBUG=\"\"\n# conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia\n# pip install transformers==4.42.4\npip install vllm==0.5.0\npip install deepspeed\npip install nltk\npip install numerize\npip install rouge-score\npip install torchtyping\npip install rich\npip install accelerate\npip install datasets\npip install sentencepiece\npip install protobuf\npip install peft"
  },
  {
    "path": "minillm/__init__.py",
    "content": "from deepspeed import DeepSpeedConfig\nfrom typing import Optional\n\n# from trlx.utils.loading import get_orchestrator, get_pipeline, get_trainer\nfrom .sampler import PPOSampler\nfrom .pipelines import PPOPipeline, LMPipeline\nfrom .trainer import PPOTrainer\nfrom .reward import Reward\n\ndef train(\n    args,\n    tokenizer,\n    reward_fn = None,\n    teacher_model=None,\n    prompt_data: Optional[str] = None,\n    eval_prompt_data: Optional[str] = None,\n    lm_data: Optional[str] = None,\n    eval_lm_data: Optional[str] = None,\n    ds_config: Optional[DeepSpeedConfig] = None,\n):\n\n    trainer = PPOTrainer(\n        args=args,\n        tokenizer=tokenizer,\n        reward_fn=reward_fn,\n        ds_config=ds_config,\n    )\n    trainer.set_teacher_model(teacher_model)\n\n    ppo_pipeline = PPOPipeline(\n        args, tokenizer, \"train\", prompt_data, num=args.train_num\n    )\n\n    sampler = PPOSampler(\n        args, trainer, ppo_pipeline, chunk_size=args.chunk_size\n    )\n    sampler.run_sample(args.num_rollouts_per_device)\n    \n    eval_ppo_pipeline = PPOPipeline(\n        args, trainer.tokenizer, \"valid\", eval_prompt_data, fix_prompts=True, num=args.dev_num\n    )\n    trainer.add_eval_pipeline(eval_ppo_pipeline)\n\n    lm_pipeline = LMPipeline(\n        args, trainer.tokenizer, \"train\", lm_data, num=args.train_num) if lm_data is not None else None\n    eval_lm_pipeline = LMPipeline(\n        args, trainer.tokenizer, \"valid\", eval_lm_data, num=args.dev_num) if eval_lm_data is not None else None\n\n    trainer.add_lm_pipeline(lm_pipeline, eval_lm_pipeline)\n\n    trainer.train()\n    return trainer\n"
  },
  {
    "path": "minillm/data_types.py",
    "content": "from dataclasses import dataclass\nfrom typing import Iterable\nfrom torchtyping import TensorType\n\n\n@dataclass\nclass PromptElement:\n    \"\"\"\n    Dataclass for a single prompt, containing its string and tokenized form.\n\n    :param text: The prompt text.\n    :type text: str\n\n    :param tokens: The prompt tokens. Should be a long tensor\n    :type tokens: torch.Tensor\n    \"\"\"\n\n    text: str\n    tokens: TensorType[\"num_tokens\"]\n\n\n@dataclass\nclass PromptBatch:\n    \"\"\"\n    Batched PromptElement\n\n    :param text: An iterable of prompt texts.\n    :type text: Iterable[str]\n\n    :param tokens: A long tensor batch of prompt tokens.\n    :type tokens: torch.Tensor\n    \"\"\"\n\n    text: Iterable[str]\n    tokens: TensorType[\"batch_size\", \"num_tokens\"]\n\n\n@dataclass\nclass PPORLElement:\n    \"\"\"\n    :param query_tensor: The query tensor i.e. the prompt tokens.\n                         Should be a long tensor.\n    :type query_tensor: torch.Tensor\n\n    :param response_tensor: The response tensor i.e. the output tokens.\n                            Should be a long tensor.\n    :type response_tensor: torch.Tensor\n\n    :param logprobs: The log probabilities over all tokens in the vocabulary for\n                    each token generated from the policy network\n                    (i.e. 
the autoregressive model).\n                    Should be a float tensor of same size as tokens.\n    :type logprobs: torch.Tensor\n\n    :param rewards: The rewards for each token outputted in response.\n                    Should be a float tensor of same size as tokens.\n    :type rewards: torch.Tensor\n    \"\"\"\n\n    query_tensor: TensorType[\"query_size\"]\n    response_tensor: TensorType[\"response_size\"]\n    lens: int\n    s_lens: int\n    mask: TensorType[\"response_size\"]\n    logprobs: TensorType[\"response_size\"]\n    rewards: TensorType[\"response_size\"]\n    rev_kl: TensorType[\"response_size\"]\n    w: TensorType[\"response_size\"]\n    inf_mask: TensorType[\"response_size\", \"vocab_size\"]\n    t_rewards: TensorType[\"response_size\"]\n    ent_rewards: TensorType[\"response_size\"]\n\n\n@dataclass\nclass PPORLBatch:\n    \"\"\"\n    A batched version of the PPORLElement. See PPORLElement for more details on individual fields.\n\n    :param query_tensors: A batch of query tensors. Should be a long tensor.\n    :type query_tensors: torch.Tensor\n\n    :param response_tensors: A batch of response tensors. 
Should be a long tensor.\n    :type response_tensors: torch.Tensor\n\n    :param logprobs: A batch of log probabilities from the policy\n    :type logprobs: torch.Tensor\n\n    :param rewards: A batch of rewards\n    :type rewards: torch.Tensor\n    \"\"\"\n\n    query_tensors: TensorType[\"batch_size\", \"query_size\"]\n    response_tensors: TensorType[\"batch_size\", \"response_size\"]\n    lens: TensorType[\"batch_size\"]\n    s_lens: TensorType[\"batch_size\"]\n    mask: TensorType[\"batch_size\", \"response_size\"]\n    logprobs: TensorType[\"batch_size\", \"response_size\"]\n    rewards: TensorType[\"batch_size\", \"response_size\"]\n    rev_kl: TensorType[\"batch_size\", \"response_size\"]\n    w: TensorType[\"batch_size\", \"response_size\"]\n    inf_mask: TensorType[\"batch_size\", \"response_size\", \"vocab_size\"]\n    t_rewards: TensorType[\"batch_size\", \"response_size\"]\n    ent_rewards: TensorType[\"batch_size\", \"response_size\"]"
  },
  {
    "path": "minillm/losses.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom typing import Optional, Tuple\nfrom torchtyping import TensorType\n\nfrom .data_types import PPORLBatch\nfrom .utils import whiten, get_entropy, get_x_entropy, get_log_probs\n\nfrom transformers import mpu\n\nfrom utils import all_gather, print_rank\n\n\nclass Loss():\n    def __init__(self, args, trainer):\n        self.args = args\n        self.trainer = trainer\n\n    def _get_cumsum_rewards(self, rewards):          \n        full_rewards = torch.zeros_like(rewards[:, 0])\n        for t in reversed(range(rewards.size(1))):\n            full_rewards = self.args.gamma * full_rewards + rewards[:, t]\n            \n        return full_rewards\n\n    def _get_advantages_and_returns(\n        self,\n        rewards: TensorType[\"batch_size\", \"response_size\"],\n        response_length: int,\n        mask: TensorType[\"batch_size\", \"response_size\"],\n        use_whitening: Optional[bool] = True,\n    ) -> Tuple[torch.Tensor, torch.Tensor]:\n        last_rw = 0\n        rw_reversed = []\n        \n        rewards = rewards.float()\n        mask = mask.float()\n        lens = torch.cumsum(mask, dim=-1)      # faster way        \n        lens = mask - lens + lens[:, -1:None]  # faster way\n        lens = torch.masked_fill(lens, lens==0, 1)\n\n        for t in reversed(range(response_length)):\n            rw_delta = rewards[:, t]\n            last_rw = rw_delta + self.args.gamma * last_rw\n            rw_reversed.append(last_rw)\n\n        rw = torch.stack(rw_reversed[::-1], dim=1)\n        rw = rw / lens\n\n        advantages = rw\n\n        if use_whitening:\n            advantages = whiten(advantages)\n        \n        return advantages.detach()\n\n    def _pg_loss(\n        self,\n        logprobs: TensorType[\"batch_size\", \"response_size\"],\n        old_logprobs: TensorType[\"batch_size\", \"response_size\"],\n        advantages: TensorType[\"batch_size\", 
\"response_size\"],\n        mask: TensorType[\"batch_size\", \"response_size\"],\n        w: TensorType[\"batch_size\", \"response_size\"],\n    ):\n        \"\"\"PPO objective function.\n        References:\n        - https://stable-baselines.readthedocs.io/en/master/modules/ppo2.html\n        \"\"\"\n        n = mask.sum()\n        \n        log_ratio = (logprobs - old_logprobs) * mask\n        ratio = torch.exp(log_ratio.float())            \n        ratio = ratio * w\n\n        if any(torch.isinf(advantages).view(-1)):\n            print(\"[ERROR] advantage inf\")\n        \n        if any(torch.isinf(ratio).view(-1)):\n            print(\"[ERROR] ratio inf\")\n\n        if any(torch.isnan(advantages).view(-1)):\n            print(\"[ERROR] advantage nan\")\n        \n        if any(torch.isnan(ratio).view(-1)):\n            print(\"[ERROR] ratio nan\")\n        \n        pg_loss1 = -advantages * ratio\n        pg_loss2 = -advantages * torch.clamp(\n            ratio,\n            1.0 - self.args.cliprange,\n            1.0 + self.args.cliprange,\n        )\n        pg_loss = torch.sum(torch.max(pg_loss1, pg_loss2).float() * mask) / n\n\n        return pg_loss\n\n    def _reg_loss(self, query_ids, response_ids, mask, logits, inf_mask, stats):\n        with torch.no_grad():\n            t_logits = self.trainer.compute_logits_and_log_probs(query_ids, response_ids, inf_mask, base=\"teacher\", return_logprobs=False)\n        \n        loss_exp_ent = 0\n        xent = get_x_entropy(logits, t_logits, inf_mask, mask, model_parallel=self.args.model_parallel)\n        s_ent = get_entropy(logits, inf_mask, mask, model_parallel=self.args.model_parallel)\n        loss_exp_ent = torch.sum((xent - s_ent) * mask) / mask.sum()\n        stats[\"reg_loss\"] = loss_exp_ent.item()\n        \n        return loss_exp_ent\n\n    def get_input_batch(self, ppo_batch: PPORLBatch, pt_batch):\n        query_tensors = ppo_batch.query_tensors\n        response_tensors = 
ppo_batch.response_tensors\n        ppo_input_batch = self.trainer.get_model_inputs(query_tensors, response_tensors)\n        pt_input_batch, _ = pt_batch\n        # merge batch\n        assert len(ppo_input_batch) == len(pt_input_batch), list(ppo_input_batch.keys())\n        input_batch = {}\n        for k in ppo_input_batch:\n            input_batch[k] = torch.cat([ppo_input_batch[k], pt_input_batch[k]], dim=0)\n        return input_batch\n\n    def ppo_loss(self, batch: PPORLBatch, logits):\n        stats = {}\n        query_tensors = batch.query_tensors\n        response_tensors = batch.response_tensors\n        lens = batch.lens\n        s_lens = batch.s_lens\n        mask = batch.mask\n        old_logprobs = batch.logprobs\n        old_rewards = batch.rewards\n        rev_kl = batch.rev_kl\n        w = batch.w\n        inf_mask = batch.inf_mask\n        \n        response_length = response_tensors.shape[-1]\n\n        start = query_tensors.size(1) - 1 # \"-1\" for the first generated token AS TARGET\n        end = query_tensors.size(1) + response_tensors.size(1) - 1 # \"remove the last token that does not have target\"\n\n        logits = logits / self.args.temperature\n        logits = logits[:, start:end]\n        if inf_mask is not None:\n            logits = logits.masked_fill(inf_mask, -float(\"inf\"))\n            \n        tokens = torch.cat((query_tensors, response_tensors), dim=1)[\n            :, -self.trainer.max_length :\n        ]\n        mask = self.trainer.get_mask(tokens)[:, start:end]\n        \n        logprobs = get_log_probs(logits, response_tensors, mask, inf_mask, model_parallel=self.args.model_parallel)\n\n        advantages = self._get_advantages_and_returns(\n            old_rewards, response_length, mask\n        )\n        \n        loss = self._pg_loss(\n            logprobs=logprobs,\n            old_logprobs=old_logprobs,\n            advantages=advantages,\n            mask=mask,\n            w=w,\n        )\n        
stats[\"pg_loss\"] = loss.item()\n        \n        single_step_reg_loss = self._reg_loss(query_tensors, response_tensors, mask, logits, inf_mask, stats)\n        stats[\"reg_loss\"] = single_step_reg_loss.item()\n        \n        if self.args.single_step_reg:\n            loss += single_step_reg_loss\n        \n        stats[\"rl_loss\"] = loss.item()\n        \n        with torch.no_grad():\n            # generation values for reward\n            cumsum_rewards = self._get_cumsum_rewards(old_rewards)\n            rev_kl = torch.sum(rev_kl, dim=-1)\n            \n            if self.args.length_norm:\n                cumsum_rewards = cumsum_rewards / lens\n                rev_kl = rev_kl / s_lens\n                        \n            cumsum_rewards = all_gather(cumsum_rewards, dim=0, world_size=self.trainer.dp_world_size, group=self.trainer.dp_group).mean(dim=0).item()\n            rev_kl = all_gather(rev_kl, dim=0, world_size=self.trainer.dp_world_size, group=self.trainer.dp_group).mean(dim=0).item()\n            lens = all_gather(lens, dim=0, world_size=self.trainer.dp_world_size, group=self.trainer.dp_group).float().mean(dim=0).item()\n            s_lens = all_gather(s_lens, dim=0, world_size=self.trainer.dp_world_size, group=self.trainer.dp_group).float().mean(dim=0).item()\n        \n        stats[\"reward\"] = cumsum_rewards\n        stats[\"rev_kl\"] = rev_kl\n        stats[\"mixed_lens\"] = lens\n        stats[\"stu_lens\"] = s_lens\n        \n        return loss, stats\n\n    def pt_loss(self, batch, logits):\n        stats = {}\n        model_batch, no_model_batch = batch\n        loss_mask = (no_model_batch[\"label\"] != -100).int()\n        if self.args.model_parallel:\n            lm_losses = mpu.parallel_cross_entropy(logits.contiguous().float(), no_model_batch[\"label\"]).view(-1)\n            lm_loss = (lm_losses * loss_mask.view(-1)).sum(-1) / loss_mask.view(-1).sum(-1)\n        else:\n            loss_fn = 
nn.CrossEntropyLoss(ignore_index=-100)\n            lm_loss = loss_fn(logits.view(-1, logits.size(-1)), no_model_batch[\"label\"].view(-1))\n        \n        loss = lm_loss # fall back to the pure LM loss when no teacher / kd_ratio is configured\n        distil_loss = 0\n        if self.trainer.teacher_model is not None and self.args.kd_ratio is not None:\n            with torch.no_grad():\n                teacher_outputs = self.trainer.teacher_model(**model_batch, return_dict=True, use_cache=False)\n                teacher_logits = teacher_outputs.logits\n            if self.args.model_parallel:\n                distil_losses = mpu.parallel_soft_cross_entropy_loss(logits.float(), teacher_logits.float())\n                distil_losses = distil_losses.view(-1)\n                distil_loss = (distil_losses * loss_mask.view(-1)).sum(-1) / loss_mask.view(-1).sum(-1)\n            else:\n                teacher_probs = F.softmax(teacher_logits, dim=-1, dtype=torch.float32)\n                inf_mask = torch.isinf(logits)\n                logprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)\n                prod_probs = torch.masked_fill(teacher_probs * logprobs, inf_mask, 0)\n                x = torch.sum(prod_probs, dim=-1).view(-1)\n                distil_loss = -torch.sum(x * loss_mask.view(-1), dim=0) / torch.sum(loss_mask.view(-1), dim=0)\n            \n            loss = (1 - self.args.kd_ratio) * lm_loss + self.args.kd_ratio * distil_loss\n\n        stats[\"pt_loss\"] = loss.item()\n        stats[\"lm_loss\"] = lm_loss.item()\n        stats[\"ds_loss\"] = distil_loss.item() if torch.is_tensor(distil_loss) else 0.0\n\n        return loss, stats"
  },
  {
    "path": "minillm/model.py",
    "content": "import torch.nn as nn\nfrom transformers import (\n    AutoConfig,)\n\nfrom utils import get_model\n\n\nclass PPOModel(nn.Module):\n    def __init__(self, args, device):\n        super().__init__()\n        self.model_parallel = args.model_parallel\n        self.config = AutoConfig.from_pretrained(args.model_path)\n        self.base_model = get_model(args, device)\n        self.base_model.eval() # no dropout for RL\n\n    def forward(self, **x):\n        base_model_outputs = self.base_model(**x)\n        return base_model_outputs\n    \n    def generate(self, **x):\n        return self.base_model.generate(**x)\n    \n    def set_force_gradient_checkpointing(self, value):\n        self.base_model.set_force_gradient_checkpointing(value)\n"
  },
  {
    "path": "minillm/pipelines.py",
    "content": "import os\nimport json\nimport torch\nimport random\nimport numpy as np\nfrom torch.utils.data import DataLoader, DistributedSampler\nfrom transformers import mpu\nimport torch.distributed as dist\n\nfrom data_utils.distributed_indexed import DistributedMMapIndexedDataset\nfrom torch.distributed import get_rank, get_world_size\nfrom utils import print_rank\n\n\nclass PPOPipeline():\n    def __init__(self, args, tokenizer, split, ppo_data_path=None, fix_prompts=False, num=-1):\n        super().__init__()\n        self.tokenizer = tokenizer\n\n        self.args = args\n        self.tokenizer = tokenizer\n        self.split = split\n        self.pad_id = self.tokenizer.eos_token_id\n        self.max_length = args.max_length\n        self.rng_ppo = random.Random(args.seed_ppo)\n        self.min_prompt_length = args.min_prompt_length\n        self.max_prompt_length = args.max_prompt_length\n\n        self.ppo_ctx = DistributedMMapIndexedDataset(ppo_data_path, f\"{split}\", get_rank(), get_world_size())\n        self.ppo_raw, self.ppo_answers = None, None\n        if os.path.exists(os.path.join(ppo_data_path, f\"{split}.jsonl\")):\n            with open(os.path.join(ppo_data_path, f\"{split}.jsonl\")) as f:\n                self.ppo_raw = [json.loads(line) for line in f.readlines()]\n                self.ppo_answers = [x[\"output\"] if isinstance(x[\"output\"], list) else [x[\"output\"]] for x in self.ppo_raw]\n\n        self.num = min(num, len(self.ppo_ctx)) if num > 0 else len(self.ppo_ctx)\n        self.fix_prompts = fix_prompts\n        self.prompt_lengths = [None for _ in range(num)]\n        print_rank(f\"Num PPO instances: {len(self.ppo_ctx)}\")\n            \n    def __len__(self):\n        return self.num\n\n    def __getitem__(self, index: int):\n        data = self.ppo_ctx[index].astype(int)\n        \n        assert len(data) <= self.max_prompt_length\n        \n        if self.args.model_type!=\"qwen\" and 65535 in data:\n            
source_len = np.where(data==65535)[0][0]\n            prompt = data[:source_len]\n            response = data[source_len+1:]\n        else:\n            prompt = data\n            response = None\n        \n        # return prompt, rest\n        return prompt, response\n    \n    def collate(self, samples):\n        bs = len(samples)\n        \n        max_prompt_length = self.max_prompt_length\n        \n        model_batch = {\n            \"input_ids\": torch.ones(bs, max_prompt_length, dtype=torch.long) * self.pad_id,\n            \"attention_mask\": torch.zeros(bs, max_prompt_length, dtype=torch.long),\n        }\n        \n        no_model_batch = {\n            \"full_ids\": torch.ones(bs, self.max_length, dtype=torch.long) * self.pad_id,\n            \"full_attention_mask\": torch.zeros(bs, self.max_length, dtype=torch.long),\n            \"full_label_ids\": torch.ones(bs, self.max_length, dtype=torch.long) * -100,\n        }\n        \n        for i, (prompt, response) in enumerate(samples):\n            # left padding\n            model_batch[\"input_ids\"][i][-len(prompt):] = torch.tensor(prompt, dtype=torch.long)\n            model_batch[\"attention_mask\"][i][-len(prompt):] = 1\n            if response is not None:\n                full_ids = np.concatenate([prompt, response], axis=0)\n                no_model_batch[\"full_ids\"][i][:len(full_ids)-1] = torch.tensor(full_ids[:-1], dtype=torch.long)\n                no_model_batch[\"full_attention_mask\"][i][:len(full_ids)-1] = 1.0\n                no_model_batch[\"full_label_ids\"][i][len(prompt)-1:len(full_ids)-1] = torch.tensor(response, dtype=torch.long)\n        \n        return model_batch, no_model_batch\n\n    def move_to_device(self, model_batch, no_model_batch, device):\n        for k in model_batch:\n            model_batch[k] = model_batch[k].to(device)        \n        for k in no_model_batch:\n            no_model_batch[k] = no_model_batch[k].to(device)    \n        \n        return 
model_batch, no_model_batch\n\n    def create_loader(self, batch_size: int, shuffle=False, drop_last: bool = False, num_workers: int = 0) -> DataLoader:\n        if self.args.model_parallel:\n            dp_world_size = mpu.get_data_parallel_world_size()\n            dp_rank = mpu.get_data_parallel_rank()\n        else:\n            dp_world_size = dist.get_world_size()\n            dp_rank = dist.get_rank()\n        \n        sampler = DistributedSampler(self, shuffle=shuffle, drop_last=drop_last, rank=dp_rank, num_replicas=dp_world_size)\n        return DataLoader(\n            self, sampler=sampler, batch_size=batch_size, collate_fn=self.collate, num_workers=num_workers\n        )\n\n\nclass LMPipeline():\n    def __init__(self, args, tokenizer, split, lm_data_path=None, num=-1):\n        super().__init__()\n        self.tokenizer = tokenizer\n\n        self.args = args\n        self.tokenizer = tokenizer\n        self.split = split\n        self.pad_id = self.tokenizer.eos_token_id\n        self.max_length = args.max_length\n        self.rng_lm = random.Random(args.seed_lm)\n\n        self.lm_ctx = DistributedMMapIndexedDataset(lm_data_path, f\"{split}\", get_rank(), get_world_size())\n        self.num = min(num, len(self.lm_ctx)) if num > 0 else len(self.lm_ctx)\n        print_rank(f\"Num LM instances: {len(self.lm_ctx)}\")\n            \n    def __len__(self):\n        return self.num\n\n    def __getitem__(self, index):\n        return self._get_lm(index)\n\n    def _get_lm(self, index):\n        data = self.lm_ctx[index]\n        input_ids = data.astype(int)\n        return {\n            \"input_ids\": input_ids[:self.max_length]\n        }\n\n    def _process_lm(self, i, samp, model_data, no_model_data):\n        input_ids = samp[\"input_ids\"]\n        source_len = 1\n        \n        if self.args.model_type!=\"qwen\" and 65535 in input_ids:\n            source_len = np.where(input_ids==65535)[0][0]\n            input_ids = 
np.concatenate([input_ids[:source_len], input_ids[source_len+1:]], axis=0)\n        input_ids = input_ids[:self.max_length]\n        input_len = len(input_ids)\n        model_data[\"input_ids\"][i][:input_len-1] = torch.tensor(input_ids[:-1], dtype=torch.long)\n        model_data[\"attention_mask\"][i][:input_len-1] = 1.0\n        if self.args.model_type in [\"gpt2\"]:\n            model_data[\"position_ids\"][i][:input_len-1] = torch.arange(0, input_len-1, dtype=torch.long)\n        no_model_data[\"label\"][i][:input_len-1] = torch.tensor(input_ids[1:], dtype=torch.long)\n        no_model_data[\"label\"][i][:source_len-1] = -100\n        no_model_data[\"loss_mask\"][i][:input_len-1] = 1.0\n        no_model_data[\"loss_mask\"][i][:source_len-1] = 0\n\n    def move_to_device(self, model_batch, no_model_batch, device):\n        for k in model_batch:\n            model_batch[k] = model_batch[k].to(device)\n\n        for k in no_model_batch:\n            no_model_batch[k] = no_model_batch[k].to(device)    \n        \n        return model_batch, no_model_batch\n\n    def collate(self, samples):\n        bs = len(samples)\n        \n        max_length = self.max_length\n        \n        model_data = {\n            \"input_ids\": torch.ones(bs, max_length, dtype=torch.long) * self.pad_id,\n            \"attention_mask\": torch.zeros(bs, max_length, dtype=torch.long)\n        }\n\n        if self.args.model_type in [\"gpt2\"]:\n            model_data[\"position_ids\"] = torch.zeros(bs, max_length, dtype=torch.long)\n\n        no_model_data = {\n            \"label\": torch.ones(bs, self.max_length, dtype=torch.long) * -100,\n            \"loss_mask\": torch.zeros(bs, max_length)\n        }\n        \n        for i, samp in enumerate(samples):        \n            self._process_lm(i, samp, model_data, no_model_data)\n            \n        return model_data, no_model_data\n\n    def create_loader(self, batch_size: int, shuffle=False, drop_last: bool = False, num_workers: 
int = 0) -> DataLoader:\n        if self.args.model_parallel:\n            dp_world_size = mpu.get_data_parallel_world_size()\n            dp_rank = mpu.get_data_parallel_rank()\n        else:\n            dp_world_size = dist.get_world_size()\n            dp_rank = dist.get_rank()\n        \n        sampler = DistributedSampler(self, shuffle=shuffle, drop_last=drop_last, rank=dp_rank, num_replicas=dp_world_size)\n        return DataLoader(\n            self, sampler=sampler, batch_size=batch_size, collate_fn=self.collate, num_workers=num_workers\n        )\n"
  },
  {
    "path": "minillm/reward.py",
    "content": "import torch\nfrom transformers import (\n    AutoModelForCausalLM,\n    AutoTokenizer,\n    mpu)\n\n\nclass Reward():\n    def __init__(self, args, tokenizer: AutoTokenizer, model: AutoModelForCausalLM):\n        self.args = args\n        self.tokenizer = tokenizer\n        self.model = model\n        self.pad_token_id = tokenizer.pad_token_id\n        self.eos_token_id = tokenizer.eos_token_id\n\n    def get_input_batch(self, input_ids, gen_ids, output_pos=True):\n        full_ids = torch.cat([input_ids, gen_ids], dim=-1)\n        attention_mask = (full_ids != self.pad_token_id)\n\n        model_inputs = {\n            \"input_ids\": full_ids,\n            \"attention_mask\": attention_mask,\n            \"use_cache\": False\n        }\n        \n        if (self.args.model_type in [\"gpt2\"]) and output_pos:\n            position_ids = torch.cumsum(attention_mask, dim=-1) - 1\n            position_ids.masked_fill_(~attention_mask, 0)\n            model_inputs[\"position_ids\"] = position_ids\n        \n        return model_inputs\n\n    def reward_fn(self, input_ids, gen_ids, inf_mask=None, output_pos=True):\n        # not include eos token\n        \n        self.model.eval()\n        # input_ids = input_ids.repeat(1, 1)\n        \n        model_inputs = self.get_input_batch(input_ids, gen_ids, output_pos=output_pos)\n\n        with torch.no_grad():\n            outputs = self.model(**model_inputs)\n        \n        logits = outputs.logits # (B, L, V)\n        if self.args.model_parallel:\n            logits = logits - mpu.parallel_mean(logits.float(), dim=-1).unsqueeze(-1)\n        else:\n            logits = logits - torch.mean(logits, dim=-1, keepdim=True)\n        \n        mask = model_inputs[\"attention_mask\"]\n        logits = logits * mask.unsqueeze(-1) # set logits output by padding to 0\n        \n        logits = logits[:, input_ids.size(-1)-1:, :]\n        mask = mask[:, input_ids.size(-1)-1:]\n\n        if 
self.args.model_parallel:\n            selection_value = mpu.parallel_gather(logits[:, :-1, :], -1, model_inputs[\"input_ids\"][:, input_ids.size(-1):, None]).squeeze(-1)\n        else:\n            selection_value = torch.gather(logits[:, :-1, :], -1, model_inputs[\"input_ids\"][:, input_ids.size(-1):, None]).squeeze(-1)\n\n        current_logits = logits[:, :-1, :]\n        if self.args.model_parallel:\n            next_state_value = mpu.parallel_logsumexp(current_logits.float(), dim=-1)\n        else:\n            next_state_value = torch.logsumexp(current_logits, dim=-1)\n        next_state_value = next_state_value * mask[:, :-1]\n\n        scores = selection_value - next_state_value\n        \n        assert torch.all((~torch.isinf(scores)) & (~torch.isnan(scores)))\n        \n        assert scores.size() == gen_ids.size()\n        \n        return {\n            \"rewards\": scores,\n            \"inf_mask\": inf_mask\n        }\n"
  },
  {
    "path": "minillm/sampler.py",
    "content": "import torch\nimport os\n\nfrom .data_types import PromptBatch, PPORLElement\nfrom .pipelines import PPOPipeline\nfrom .trainer import PPOTrainer\n\nfrom utils import get_rank, print_rank, all_gather, save_rank\nfrom .utils import get_rev_kl\nfrom transformers import mpu\n\nclass PPOSampler():\n    \"\"\"\n    Orchestrator prepares data for PPO training.\n    Transforms samples from `pipeline` into `PPOBatch` and pushes them into trainer's `store`\n    \"\"\"\n\n    def __init__(\n        self,\n        args,\n        trainer: PPOTrainer,\n        pipeline: PPOPipeline,\n        chunk_size: int = 512,\n    ):\n        self.args = args\n        self.pipeline = pipeline\n        self.trainer = trainer\n        self.chunk_size = chunk_size\n\n        self.pipeline_loader = self.pipeline.create_loader(\n            self.chunk_size, shuffle=True, drop_last=True, num_workers=self.args.num_workers\n        )\n        self.pipeline_iterator = iter(self.pipeline_loader)\n\n        self.trainer.set_sampler(self)\n\n        self.epochs = 0\n\n    def run_sample(self, num_rollouts_per_device: int = 1024, iter_count: int = 0):\n        \"\"\"\n        Takes `num_rollouts_per_device` prompts from `pipeline`, samples model and computes the\n        KL againts a reference model. 
It then appends `PPORLElement`s to the trainer's `store`\n        \"\"\"\n        ppo_rl_elements = []\n\n        while len(ppo_rl_elements) < num_rollouts_per_device:\n            if (not self.args.model_parallel) or mpu.get_model_parallel_rank() == 0:\n                print(f\"Rank {get_rank()}: Number Sampling Elements {len(ppo_rl_elements)} / {num_rollouts_per_device}\")\n            try:\n                batch: PromptBatch = next(self.pipeline_iterator)\n            except StopIteration:\n                self.epochs += 1\n                print_rank(f\"Another outer ppo epoch, outer ppo epoch: {self.epochs}\")\n                save_rank(f\"Another outer ppo epoch, outer ppo epoch: {self.epochs}\", os.path.join(self.args.save, \"log.txt\"))\n                \n                self.pipeline_loader.sampler.set_epoch(self.epochs)\n                self.pipeline_iterator = iter(self.pipeline_loader)\n                batch = next(self.pipeline_iterator)\n\n            batch, no_model_batch = batch\n            n = batch[\"input_ids\"].size(0)\n            \n            batch, no_model_batch = self.pipeline.move_to_device(batch, no_model_batch, self.trainer.device)\n            \n            query_ids = batch[\"input_ids\"]\n            \n            # generate and compute rollout scores\n            with torch.no_grad():\n                mode = \"base\"\n                gen_out = self.trainer.generate(**batch, return_dict_in_generate=True, mode=mode, teacher_mixed_sample=(self.args.teacher_mixed_alpha is not None), output_scores=True)\n                full_ids = gen_out.sequences\n                response_ids = full_ids[:, query_ids.size(1):] # remove prompt (may include start token)\n                mask = (full_ids != self.trainer.tokenizer.pad_token_id)[:, query_ids.size(-1)-1:query_ids.size(-1)+response_ids.size(-1)-1]\n                lens = torch.sum(mask, dim=-1)\n                gen_logits = gen_out.scores # NOTE: [b, s, h_p]\n                inf_mask = 
torch.isinf(gen_logits)\n                scores = self.trainer.reward_fn(query_ids, response_ids, inf_mask=inf_mask)\n                t_rewards = scores[\"rewards\"]\n                inf_mask = scores[\"inf_mask\"]\n                _, rollout_logprobs = self.trainer.compute_logits_and_log_probs(query_ids, response_ids, inf_mask=inf_mask, base=mode)\n\n                # student generation features\n                if self.args.teacher_mixed_alpha is not None:\n                    s_gen_out = self.trainer.generate(**batch, return_dict_in_generate=True, mode=mode, output_scores=True)\n                    s_full_ids = s_gen_out.sequences\n                    s_inf_mask = torch.isinf(s_gen_out.scores)\n                    s_response_ids = s_full_ids[:, query_ids.size(1):] # remove prompt (may include start token)\n                    s_scores = self.trainer.reward_fn(query_ids, s_response_ids, inf_mask=s_inf_mask)\n                    s_t_rewards = s_scores[\"rewards\"]\n                    s_inf_mask = s_scores[\"inf_mask\"]\n                    _, s_rollout_logprobs = self.trainer.compute_logits_and_log_probs(query_ids, s_response_ids, inf_mask=s_inf_mask, base=mode)\n                    s_mask = (s_full_ids != self.trainer.tokenizer.pad_token_id)[:, query_ids.size(-1)-1:query_ids.size(-1)+s_response_ids.size(-1)-1]\n                    s_lens = torch.sum(s_mask, dim=-1)\n                else:\n                    s_t_rewards = t_rewards\n                    s_rollout_logprobs = rollout_logprobs\n                    s_mask = mask\n                    s_lens = lens\n\n            rev_kl = get_rev_kl(s_t_rewards, s_rollout_logprobs, s_mask)\n\n            if self.args.teacher_mixed_alpha is not None:\n                with torch.no_grad():\n                    _, t_rollout_logprobs = self.trainer.compute_logits_and_log_probs(query_ids, response_ids, inf_mask=inf_mask, base=\"teacher\") # recompute 
because of the fp16 loss\n\n            # get logprobs and the importance sampling weight w\n            with torch.no_grad():\n                if self.args.teacher_mixed_alpha is not None:\n                    _, raw_logprobs = self.trainer.compute_logits_and_log_probs(query_ids, response_ids, inf_mask=inf_mask, base=\"base\") # raw_logprobs: compute using the new model\n                    logprobs = raw_logprobs\n                    mix_probs = (1 - self.args.teacher_mixed_alpha) * torch.exp(rollout_logprobs.float()) + self.args.teacher_mixed_alpha * torch.exp(t_rollout_logprobs.float())\n                    mix_logprobs = torch.log(mix_probs)\n                    log_w = logprobs - mix_logprobs\n                    w = torch.exp(log_w) # importance sampling weight\n                else:\n                    raw_logprobs = rollout_logprobs\n                    logprobs = rollout_logprobs\n                    w = torch.ones_like(logprobs)\n                        \n                # get ent_rewards\n                ent_rewards = -logprobs\n\n            rewards = t_rewards + ent_rewards\n\n            if self.args.reward_scaling is not None:\n                rewards = rewards / self.args.reward_scaling\n\n            clip_reward = self.args.cliprange_reward\n            if clip_reward:\n                rewards = torch.clip(rewards, -clip_reward, clip_reward)\n\n            query_ids = query_ids.cpu()\n            response_ids = response_ids.cpu()\n            lens = lens.cpu()\n            s_lens = s_lens.cpu()\n            mask = mask.cpu()\n            logprobs = logprobs.cpu()\n            rewards = rewards.cpu()\n            rev_kl = rev_kl.cpu()\n            w = w.cpu()\n            inf_mask = inf_mask.cpu()\n            \n            new_ppo_rl_elements = [\n                PPORLElement(\n                    query_tensor=query_ids[i],\n                    response_tensor=response_ids[i],\n                    lens=lens[i],\n                    
s_lens=s_lens[i],\n                    mask=mask[i],\n                    logprobs=logprobs[i],\n                    rewards=rewards[i],\n                    rev_kl=rev_kl[i],\n                    w=w[i],\n                    inf_mask=inf_mask[i],\n                    t_rewards=t_rewards[i],\n                    ent_rewards=ent_rewards[i]\n                )\n                for i in range(n)\n            ]\n            ppo_rl_elements.extend(new_ppo_rl_elements)\n\n        ppo_rl_elements = ppo_rl_elements[:num_rollouts_per_device]\n        # Push samples and rewards to trainer's rollout storage\n        self.trainer.push_to_store(ppo_rl_elements)\n        \n        if self.args.save_rollout:\n            all_query_ids = all_gather(torch.stack([e.query_tensor for e in ppo_rl_elements], dim=0).to(self.trainer.device))\n            all_response_ids = all_gather(torch.stack([e.response_tensor for e in ppo_rl_elements], dim=0).to(self.trainer.device))\n            # PPORLElement has no `entropy` field; gather the entropy rewards instead\n            all_ent_rewards = all_gather(torch.stack([e.ent_rewards for e in ppo_rl_elements], dim=0).to(self.trainer.device))\n            rollout_save_path = os.path.join(self.args.save, \"rollout_history\", str(iter_count))\n            if get_rank() == 0:\n                os.makedirs(rollout_save_path, exist_ok=True)\n                torch.save((all_query_ids, all_response_ids, all_ent_rewards), os.path.join(rollout_save_path, \"all.pt\"))\n"
  },
  {
    "path": "minillm/storages.py",
    "content": "import json\nimport os\nimport time\nfrom abc import abstractmethod\nfrom typing import Any, Callable, Iterable\n\nimport torch\nfrom torch.nn.utils.rnn import pad_sequence\nfrom torch.utils.data import Dataset, DataLoader\nimport torch.distributed as dist\n\nfrom .data_types import PPORLElement, PPORLBatch\n\nfrom utils import get_rank\n\n\nclass BaseRolloutStore(Dataset):\n    def __init__(self, capacity=-1):\n        self.history: Iterable[Any] = None\n        self.capacity = capacity\n\n    @abstractmethod\n    def push(self, exps: Iterable[Any]):\n        \"\"\"\n        Push experiences to rollout storage\n        \"\"\"\n        pass\n\n    def __getitem__(self, index: int) -> PPORLElement:\n        return self.history[index]\n\n    def __len__(self) -> int:\n        return len(self.history)\n\n    @abstractmethod\n    def create_loader(\n        self,\n        batch_size: int,\n        shuffle: bool,\n        prep_fn: Callable = None,\n        num_workers: int = 0,\n        drop_last: bool = False\n    ) -> DataLoader:\n        \"\"\"\n        Create a dataloader for the rollout store\n\n        :param prep_fn: Applied to RLElement after collation (typically tokenizer)\n        :type prep_fn: Callable\n        \"\"\"\n        pass\n    \n    @abstractmethod\n    def broadcast(self, batch, src=0, group=None):\n        pass\n    \n    @abstractmethod\n    def move_to_device(self, batch, device):\n        pass\n\n\nclass PPORolloutStorage(BaseRolloutStore):\n    \"\"\"\n    Rollout storage for training PPO\n    \"\"\"\n\n    def __init__(self, pad_token_id, seed):\n        super().__init__()\n\n        self.pad_token_id = pad_token_id\n        self.history: Iterable[PPORLElement] = [None]\n        self.rng = torch.Generator()\n        self.rng.manual_seed(seed)\n\n    def push(self, exps: Iterable[PPORLElement]):\n        self.history += exps\n\n    def save(self, path):\n        def exp_to_dict(exp):\n            return {k: v for k, v in 
exp.__dict__.items()}\n\n        data = [exp_to_dict(exp) for exp in self.history]\n        \n        torch.save(data, os.path.join(path, f\"{get_rank()}.pkl\"))\n            \n    def load(self, path):\n        # filename must match the one written by save()\n        data = torch.load(os.path.join(path, f\"{get_rank()}.pkl\"), map_location=\"cpu\")\n        self.history = [PPORLElement(**d) for d in data]\n\n    def clear_history(self):\n        self.history = []\n\n    def export_history(self, location: str):\n        assert os.path.exists(location)\n\n        fpath = os.path.join(location, f\"epoch-{str(time.time())}.json\")\n\n        def exp_to_dict(exp):\n            return {k: v.cpu().tolist() for k, v in exp.__dict__.items()}\n\n        data = [exp_to_dict(exp) for exp in self.history]\n        with open(fpath, \"w\") as f:\n            f.write(json.dumps(data, indent=2))\n\n    def __getitem__(self, index: int) -> PPORLElement:\n        return self.history[index]\n\n    def __len__(self) -> int:\n        return len(self.history)\n\n    def collate(self, elems: Iterable[PPORLElement]):\n        if any([e is None for e in elems]):\n            print(elems)\n        return PPORLBatch(\n            # Left padding of already left-padded queries\n            pad_sequence(\n                [elem.query_tensor.flip(0) for elem in elems],\n                padding_value=self.pad_token_id,\n                batch_first=True,\n            ).flip(1),\n            # Right pad the rest, to have a single horizontal query/response split\n            pad_sequence(\n                [elem.response_tensor for elem in elems],\n                padding_value=self.pad_token_id,\n                batch_first=True,\n            ),\n            torch.tensor([elem.lens for elem in elems], dtype=torch.long),\n            torch.tensor([elem.s_lens for elem in elems], dtype=torch.long),\n            pad_sequence(\n                [elem.mask for elem in elems],\n                padding_value=0.0,\n                batch_first=True,\n        
    ),            \n            pad_sequence(\n                [elem.logprobs for elem in elems],\n                padding_value=0.0,\n                batch_first=True,\n            ),\n            pad_sequence(\n                [elem.rewards for elem in elems],\n                padding_value=0.0,\n                batch_first=True,\n            ),\n            pad_sequence(\n                [elem.rev_kl for elem in elems],\n                padding_value=0.0,\n                batch_first=True,\n            ),\n            pad_sequence(\n                [elem.w for elem in elems],\n                padding_value=0.0,\n                batch_first=True,\n            ),\n            pad_sequence(\n                [elem.inf_mask for elem in elems],\n                padding_value=0,\n                batch_first=True,\n            ),\n            pad_sequence(\n                [elem.t_rewards for elem in elems],\n                padding_value=0.0,\n                batch_first=True,\n            ),\n            pad_sequence(\n                [elem.ent_rewards for elem in elems],\n                padding_value=0.0,\n                batch_first=True,\n            ),\n        )\n\n    def create_loader(self, batch_size: int, shuffle=False, drop_last: bool = False, num_workers: int = 0) -> DataLoader:\n        # sampler = DistributedSampler(self, shuffle=shuffle, drop_last=drop_last)\n        # we don't use distributed sampler because the dataset on each device is different\n        return DataLoader(\n            self, batch_size=batch_size, collate_fn=self.collate, num_workers=num_workers, shuffle=shuffle, drop_last=drop_last, generator=self.rng\n        )\n        \n    def broadcast(self, batch: PPORLBatch, src=0, group=None):\n        for k, v in batch.__dict__.items():\n            dist.broadcast(batch.__dict__[k], src=src, group=group)\n            \n    def move_to_device(self, batch: PPORLBatch, device):\n        for k, v in batch.__dict__.items():\n            
batch.__dict__[k] = batch.__dict__[k].to(device)"
  },
  {
    "path": "minillm/trainer.py",
"content": "import json\nimport os\nimport deepspeed\nfrom time import time\nfrom typing import Optional, Tuple\nfrom collections import defaultdict\n\nimport torch\nimport torch.nn.functional as F\nimport torch.distributed as dist\nfrom torch.optim import AdamW\nfrom rich.console import Console\nfrom rich.table import Table\nfrom tqdm import tqdm\nfrom transformers import (\n    AutoTokenizer,\n    GenerationConfig,\n    mpu)\n\nfrom transformers import get_constant_schedule_with_warmup, get_cosine_schedule_with_warmup\n\nfrom .utils import (\n    get_scheduler_class,\n    get_log_probs,\n    get_rev_kl,\n    significant\n)\n\nfrom .model import (\n    PPOModel\n)\n\nfrom .pipelines import PPOPipeline, LMPipeline\n\n\nfrom .storages import PPORolloutStorage\nfrom .losses import Loss\n\nfrom utils import print_rank, save_rank, get_rank, all_gather, save_parallel\nfrom rouge_metric import compute_metrics\n\n\nclass PPOTrainer():\n    \"\"\"\n    RL model trainer with a DeepSpeed-based backend\n    \"\"\"\n\n    def __init__(self, args, tokenizer: AutoTokenizer, reward_fn, ds_config):\n        self.args = args\n        self.max_length = args.max_length\n        self.ds_config = ds_config\n        self.reward_fn = reward_fn\n        self.device = torch.cuda.current_device()\n\n        if int(os.environ.get(\"WORLD_SIZE\", 1)) > 1:\n            dist.barrier(device_ids=[int(os.environ.get(\"LOCAL_RANK\", 0))])\n\n        if args.model_parallel:\n            raise NotImplementedError\n        else:\n            self.dp_world_size = dist.get_world_size()\n            self.dp_rank = dist.get_rank()\n            self.dp_group = None\n\n        self.model = PPOModel(args, self.device)\n        if args.model_parallel:\n            raise NotImplementedError\n        else:\n            if dist.get_rank() == 0:\n                print(' > number of parameters: {}M'.format(\n                    int(sum([p.nelement() for p in self.model.parameters()]) / 1e6)), 
flush=True)\n\n        self.sampler = None\n        self.teacher_model = None\n        self.opt = self.setup_optimizer()\n        self.scheduler = self.setup_scheduler()\n        self.model, self.opt, self.scheduler = self.setup_ds(self.model, self.opt, self.scheduler)\n        \n        self.tokenizer = tokenizer\n        self.store = PPORolloutStorage(self.tokenizer.pad_token_id, self.args.seed_ppo + self.dp_rank)\n        self.store.clear_history()\n        \n        self.losses = Loss(args, self)\n        self.generate_kwargs = dict(\n            do_sample=args.do_sample,\n            top_p=args.top_p,\n            top_k=args.top_k,\n            temperature=args.temperature,\n            max_length=args.max_length,\n            eos_token_id=self.tokenizer.eos_token_id,\n            pad_token_id=self.tokenizer.pad_token_id,\n        )\n\n    def set_teacher_model(self, model):\n        self.teacher_model = model\n\n    def set_sampler(self, sampler):\n        self.sampler = sampler\n\n    def setup_optimizer(self):\n        \"\"\"\n        Returns an AdamW optimizer configured from the training arguments\n        \"\"\"\n        optimizer = AdamW(\n            self.model.parameters(),\n            lr=self.args.lr,\n            betas=[0.9, 0.95],\n            eps=1.0e-8,\n            weight_decay=1.0e-6\n        )\n\n        return optimizer\n\n    def setup_scheduler(self):\n        \"\"\"\n        Returns a learning rate scheduler configured from the training arguments\n        \"\"\"\n        if self.args.scheduler_name == \"constant_trm\":\n            scheduler = get_constant_schedule_with_warmup(self.opt, num_warmup_steps=self.args.warmup_iters)\n        elif self.args.scheduler_name == \"cosine_trm\":\n            scheduler = get_cosine_schedule_with_warmup(self.opt, num_warmup_steps=self.args.warmup_iters, num_training_steps=self.args.total_iters)\n        else:\n            scheduler_class = get_scheduler_class(self.args.scheduler_name)\n            scheduler = 
scheduler_class(self.opt, eta_min=self.args.lr_min, T_max=self.args.total_iters)\n        \n        return scheduler\n\n    def setup_ds(self, model, optimizer=None, scheduler=None):\n        model, optimizer, _, scheduler = deepspeed.initialize(\n            model=model,\n            optimizer=optimizer,\n            args=self.args,\n            lr_scheduler=scheduler,\n            mpu=mpu if self.args.model_parallel else None,\n            config_params=self.ds_config\n        )\n        return model, optimizer, scheduler\n\n    def add_eval_pipeline(self, eval_pipeline: PPOPipeline):\n        \"\"\"Adds a pipeline with validation prompts\"\"\"\n        self.eval_pipeline = eval_pipeline\n\n    def add_lm_pipeline(self, lm_pipeline: LMPipeline, eval_lm_pipeline: LMPipeline):\n        self.lm_pipeline = lm_pipeline\n        self.eval_lm_pipeline = eval_lm_pipeline\n\n    def get_model_inputs(\n        self,\n        query_tensors,\n        response_tensors,\n    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n        tokens = torch.cat((query_tensors, response_tensors), dim=1)[\n            :, -self.max_length :\n        ]\n        attention_mask = self.get_mask(tokens)\n  \n        batch = {\n            \"input_ids\": tokens,\n            \"attention_mask\": attention_mask\n        }\n        \n        if self.args.model_type in [\"gpt2\"]:  \n            # For a proper positional encoding in case of left padding\n            position_ids = attention_mask.cumsum(-1) - 1\n            position_ids.masked_fill_(attention_mask.eq(0), 0)\n            batch[\"position_ids\"] = position_ids\n        \n        return batch\n\n    def get_mask(self, tokens):\n        attention_mask = (\n            tokens.not_equal(self.tokenizer.pad_token_id).long()\n        )\n        return attention_mask\n\n    def forward_model(self, batch):\n        outputs = self.model(\n            **batch,\n            return_dict=True,\n            use_cache=False,\n        )\n        
return outputs\n\n    def compute_logits_and_log_probs(self, query_ids, response_ids, inf_mask=None, base=\"base\", return_logprobs=True):\n        batch = self.get_model_inputs(\n            query_ids, response_ids\n        )\n        \n        if base == \"base\":\n            model_cls = self.model.module.forward\n        elif base == \"teacher\":\n            model_cls = self.teacher_model\n        else:\n            raise NotImplementedError\n\n        outputs = model_cls(\n            **batch,\n            return_dict=True,\n            use_cache=False\n        )\n\n        logits = outputs.logits\n        logits = logits / self.args.temperature\n\n        start = query_ids.size(1) - 1\n        end = query_ids.size(1) + response_ids.size(1) - 1\n        logits = logits[:, start:end]\n\n        if inf_mask is not None:\n            logits = logits.masked_fill(inf_mask, -float(\"inf\"))\n\n        mask = batch[\"attention_mask\"][:, start:end]\n                \n        if return_logprobs:\n            logprobs = get_log_probs(logits, response_ids, mask, inf_mask, model_parallel=self.args.model_parallel)\n            return logits, logprobs\n\n        return logits\n\n    def train(self):\n        \"\"\"\n        Samples batches from `self.store`, updates model and periodically evaluates it on `self.eval_dataloader`\n        \"\"\"\n\n        self.prepare_learning()\n        self.iter_count = 1\n        self.global_iter_count = 1\n        self.nth_evaluation = 0\n\n        self.evaluate()\n\n        print_rank(\"Total Steps:\", self.total_steps, \"Data Epochs:\", self.args.epochs)\n        lm_epochs = 0        \n        logging_stats = defaultdict(float)\n\n        for training_epoch in range(self.args.training_epochs):\n            for ppo_epoch in range(self.n_updates_per_batch):\n                for it, batch in enumerate(self.train_dataloader):\n                    if self.lm_pipeline is not None:\n                        try:\n                            
lm_batch = next(self.lm_iterator)\n                        except StopIteration:\n                            lm_epochs += 1\n                            print_rank(f\"Another lm epoch, lm epochs: {lm_epochs}\")\n                            save_rank(f\"Another lm epoch, lm epochs: {lm_epochs}\", os.path.join(self.args.save, \"log.txt\"))\n                            self.lm_dataloader.sampler.set_epoch(lm_epochs)\n                            self.lm_iterator = iter(self.lm_dataloader)\n                            lm_batch = next(self.lm_iterator)\n\n                    self.store.move_to_device(batch, self.device)\n                    self.lm_pipeline.move_to_device(*lm_batch, self.device)\n                    stats = {}\n\n                    if self.args.model_parallel:\n                        raise NotImplementedError\n\n                    if self.args.gradient_checkpointing:\n                        try: self.model.module.set_force_gradient_checkpointing(True)\n                        except AttributeError: self.model.module.base_model.set_force_gradient_checkpointing(True)\n                    \n                    input_batch = self.losses.get_input_batch(batch, lm_batch)\n                    logits = self.forward_model(input_batch).logits\n                    ppo_logits = logits[:batch.query_tensors.size(0)]\n                    lm_logits = logits[batch.query_tensors.size(0):]\n\n                    # forward\n                    forward_time = time()\n                    # compute rl-related loss on explored data\n                    rl_loss, rl_loss_stats = self.losses.ppo_loss(batch, ppo_logits)\n                    stats.update(rl_loss_stats)\n                    # compute lm-related loss on pre-training data\n                    pt_loss, pt_loss_stats = self.losses.pt_loss(lm_batch, lm_logits)\n                    stats.update(pt_loss_stats)\n                    \n                    loss = rl_loss + self.args.lm_coef * pt_loss\n                    
stats[\"tot_loss\"] = loss.item()\n\n                    forward_time = time() - forward_time\n                    \n                    # backward\n                    backward_time = time()\n                    self.model.backward(loss)\n                    backward_time = time() - backward_time\n\n                    # step\n                    step_time = time()\n                    self.model.step()\n                    step_time = time() - step_time\n\n                    if self.args.gradient_checkpointing:\n                        try: self.model.module.set_force_gradient_checkpointing(False)\n                        except AttributeError: self.model.module.base_model.set_force_gradient_checkpointing(False)\n\n                    if self.iter_count % self.args.gradient_accumulation_steps == 0 and \\\n                        ((self.global_iter_count < 10000 and (self.global_iter_count % 1000 == 0)) or \\\n                        self.global_iter_count % self.args.save_interval == 0):\n                        self.save()\n\n                    # eval\n                    if self.iter_count % self.args.gradient_accumulation_steps == 0 and \\\n                        ((self.global_iter_count < 1000 and (self.global_iter_count % 100 == 0)) or \\\n                        (self.global_iter_count % self.args.eval_interval == 0)):\n                        self.evaluate()\n\n                    elapsed_time = forward_time + backward_time + step_time\n                    \n                    stats[\"elapsed_time\"] = elapsed_time\n                    \n                    for k in stats:\n                        logging_stats[k] += stats[k]\n\n                    # Logging\n                    def get_log(log_stats, one_step_time):\n                        keys = [\"tot_loss\", \"rl_loss\", \"pt_loss\", \"pg_loss\", \"reg_loss\", \"reward\", \"rev_kl\", \"stu_lens\", \"mixed_lens\"]\n                        prefix = \"train | data_epochs {:2d}/{:2d} | inner iter: {:3d}/{:3d} | ppo 
epoch: {:2d}/{:2d} | global iter: {:6d}/{:6d}\".format(\n                            self.sampler.epochs,\n                            self.args.epochs,\n                            it,\n                            len(self.train_dataloader),\n                            ppo_epoch,\n                            self.n_updates_per_batch,\n                            self.global_iter_count,\n                            self.total_steps\n                        )\n                        suffix = \"| lr: {:.4e} | scale: {:6.2f} | time: {:.3f} | step time: {:.3f}\".format(\n                            self.scheduler.get_last_lr()[0],\n                            self.opt.cur_scale if hasattr(self.opt, \"cur_scale\") else 0,\n                            elapsed_time,\n                            one_step_time\n                        )\n                        for key in keys:\n                            prefix += \"| {}: {:.4f} \".format(key, log_stats.get(key, 0))\n                        return prefix + suffix\n\n                    mid_log_step = self.args.gradient_accumulation_steps // self.args.mid_log_num\n                    mid_log_step = 1 if mid_log_step == 0 else mid_log_step\n                    if self.iter_count % mid_log_step == 0:\n                        print_rank(get_log(stats, 0))\n\n                    if self.global_iter_count % self.args.log_interval == 0 and self.iter_count % self.args.gradient_accumulation_steps == 0:\n                        logging_stats = {k:v/(self.args.log_interval*self.args.gradient_accumulation_steps) for k,v in logging_stats.items()}\n                        log_str = get_log(logging_stats, logging_stats.get(\"elapsed_time\", 0) * self.args.gradient_accumulation_steps)\n                        print_rank(\"*\" * 100)\n                        print_rank(log_str)\n                        print_rank(self.args.save)\n                        print_rank(\"*\" * 100)\n                        save_rank(log_str, 
os.path.join(self.args.save, \"log.txt\"))\n                        logging_stats = {k:0 for k in logging_stats}\n\n                    # end\n                    if (self.global_iter_count >= self.total_steps or self.sampler.epochs >= self.args.epochs):\n                        if self.global_iter_count >= self.total_steps:\n                            print_rank(\"Reached total steps {}/{}\".format(self.global_iter_count, self.total_steps))\n                        else:\n                            print_rank(\"Reached data epochs {}/{}\".format(self.sampler.epochs, self.args.epochs)) \n                        self.save()\n                        results, preds, response_texts = self.evaluate_ppo()\n                        if self.eval_lm_pipeline is not None:\n                            eval_pt_results = self.evaluate_pt()\n                            results.update(eval_pt_results)\n                        self.save_evals(preds, results, response_texts)\n                        return results\n                    \n                    self.iter_count += 1\n                    if self.iter_count % self.args.gradient_accumulation_steps == 0:\n                        self.global_iter_count += 1\n\n                self.post_backward_callback()\n\n            self.post_epoch_callback(training_epoch)\n\n    def post_backward_callback(self):\n        pass\n        \n    def post_epoch_callback(self, epoch):\n        self.store.clear_history()\n        # self.store.load(self.args.save)\n        self.sampler.run_sample(\n            self.args.num_rollouts_per_device, self.global_iter_count\n        )  # Collect more rollouts for training\n\n    def prepare_learning(self):\n        self.train_dataloader = self.store.create_loader(\n            self.args.batch_size, shuffle=True, num_workers=self.args.num_workers, drop_last=True\n        )\n        \n        self.eval_dataloader = self.eval_pipeline.create_loader(\n            self.args.batch_size, shuffle=False, 
num_workers=self.args.num_workers, drop_last=False)\n\n        self.lm_dataloader = self.lm_pipeline.create_loader(\n            self.args.batch_size, shuffle=True, num_workers=self.args.num_workers, drop_last=True)\n        self.lm_iterator = iter(self.lm_dataloader)\n        \n        self.eval_lm_dataloader = self.eval_lm_pipeline.create_loader(\n            self.args.batch_size, shuffle=False, num_workers=self.args.num_workers, drop_last=False)\n\n        self.n_updates_per_batch = self.args.ppo_epochs\n        self.total_steps = int(\n            self.args.training_epochs\n            * self.n_updates_per_batch\n            * len(self.train_dataloader)\n            / self.args.gradient_accumulation_steps\n        )\n        self.total_steps = min(self.total_steps, self.args.total_iters)\n\n    def evaluate(self):\n        eval_results = {}\n        eval_rl_results, preds, response_texts = self.evaluate_ppo()\n        eval_results.update(eval_rl_results)\n        eval_pt_results = self.evaluate_pt()\n        eval_results.update(eval_pt_results)\n        \n        response_texts = response_texts[:len(self.eval_pipeline.ppo_answers)]            \n        self.save_evals(preds, eval_results, response_texts)\n        \n        if get_rank() == 0:\n            res = compute_metrics(response_texts, self.eval_pipeline.ppo_answers)\n            eval_results.update(res)\n            keys = [\"rougeL\", \"exact_match\", \"rev_kl\", \"lens\", \"pt_loss\", \"lm_loss\", \"kd_loss\"]\n            eval_log_str = \"eval \"\n            for key in keys:\n                eval_log_str += \"| {}: {:.3f} \".format(key, eval_results[key])\n            print_rank(eval_log_str)\n            save_rank(eval_log_str, os.path.join(self.args.save, \"log.txt\"))\n\n    def evaluate_ppo(self):  # noqa: C901\n        # self.model.eval()\n        \"\"\"Samples model on `eval_prompts`, logs stats with `reward_fn` or `metric_fn` if provided\"\"\"\n        stats = {}\n        all_full_ids = []\n  
      all_rev_kl = []\n        all_lens = []\n        \n        table = []\n\n        with torch.no_grad():\n            for batch in tqdm(self.eval_dataloader, \"Generation Evaluation\", disable=(not get_rank() == 0)):\n                batch, no_model_batch = batch\n                batch, _ = self.eval_pipeline.move_to_device(batch, no_model_batch, self.device)\n                gen_out = self.generate(\n                    **batch,\n                    return_dict_in_generate=True,\n                    output_scores=True\n                )\n                full_ids = gen_out.sequences\n                gen_logits = gen_out.scores # NOTE: [b, s, h_p]\n                inf_mask = torch.isinf(gen_logits)\n\n                all_full_ids.append(full_ids)\n                \n                input_ids = batch[\"input_ids\"]\n                gen_ids = full_ids[:, input_ids.size(1):]\n                mask = self.get_mask(full_ids)\n                mask = mask[:, input_ids.size(1)-1:input_ids.size(1)+gen_ids.size(1)-1]\n                lens = torch.sum(mask, dim=-1)\n                \n                teacher_rewards = self.reward_fn(input_ids, gen_ids)[\"rewards\"] # \\log p(y_t | y_{<t}, x)\n                _, logprobs = self.compute_logits_and_log_probs(input_ids, gen_ids, inf_mask=inf_mask, base=\"base\") # \\log q_{\\theta}(y_t | y_{<t}, x)\n                \n                kl = get_rev_kl(teacher_rewards, logprobs, mask)\n                kl = kl.sum(-1)\n                \n                if self.args.length_norm:\n                    kl = kl / lens\n\n                all_rev_kl.append(kl)\n                all_lens.append(lens)\n\n            all_full_ids = torch.cat(all_full_ids, dim=0)\n            all_rev_kl = torch.cat(all_rev_kl, dim=0)\n            all_lens = torch.cat(all_lens, dim=0)\n\n            full_ids = all_gather(all_full_ids, dim=1, world_size=self.dp_world_size, group=self.dp_group, op=\"stack\")\n            full_ids = full_ids.view(-1, 
full_ids.size(-1))\n\n            prompt_ids = full_ids[:, :self.eval_pipeline.max_prompt_length]\n            all_rev_kl = all_gather(all_rev_kl, dim=0, world_size=self.dp_world_size, group=self.dp_group)\n            stats[\"rev_kl\"] = all_rev_kl.mean()\n            all_lens = all_gather(all_lens, dim=0, world_size=self.dp_world_size, group=self.dp_group)\n            stats[\"lens\"] = all_lens.float().mean()\n\n            response_texts = []\n            if get_rank() == 0:\n                prompt_texts = self.tokenizer.batch_decode(prompt_ids, skip_special_tokens=True)\n                response_texts = self.tokenizer.batch_decode(full_ids[:, self.eval_pipeline.max_prompt_length:], skip_special_tokens=True)\n                gen_texts = [p + g for p, g in zip(prompt_texts, response_texts)]\n\n                columns = [\"prompts\"]\n                columns_data = [prompt_texts]\n                # in online setting, compute the reward for validation\n                columns.append(\"samples\")\n                if isinstance(gen_texts[0], str):\n                    columns_data.append(gen_texts)\n                else:\n                    columns_data.append(gen_texts.tolist())\n\n                table.append(list(zip(*columns_data)))\n\n        # Log and display evaluation metrics\n        if get_rank() == 0:\n            rows = sum(list(map(list, zip(*table))), [])\n\n            # Add metrics/rewards to the table's title\n            table_title = f\"Evaluation #{self.nth_evaluation}\"\n            for k, x in stats.items():\n                if k.startswith(\"reward\") or k.startswith(\"metrics\"):\n                    table_title += f\" {k}: {significant(x)}\"\n\n            rich_table = Table(*columns, title=table_title, show_lines=True)\n\n            for ix in range(min(3, len(rows))):\n                rich_table.add_row(*[str(significant(x)) for x in rows[ix]])\n\n            try:\n                Console().print(rich_table)\n            except:\n         
       pass\n\n        self.nth_evaluation += 1\n        return stats, table, response_texts\n\n    def evaluate_pt(self):\n        all_pt_losses = []\n        all_lm_losses = []\n        all_kd_losses = []\n        for batch in tqdm(self.eval_lm_dataloader, desc=\"LM Evaluation\", disable=(not get_rank() == 0)):\n            self.eval_lm_pipeline.move_to_device(*batch, self.device)\n            model_batch, _ = batch\n            outputs = self.model(**model_batch, return_dict=True, use_cache=False)\n            logits = outputs.logits\n            with torch.no_grad():\n                _, stats = self.losses.pt_loss(batch, logits)\n                all_pt_losses.append(stats[\"pt_loss\"])\n                all_lm_losses.append(stats[\"lm_loss\"])\n                all_kd_losses.append(stats[\"ds_loss\"])\n        \n        all_pt_losses = torch.tensor(all_pt_losses, device=self.device)\n        eval_pt_loss = all_gather(all_pt_losses, dim=0, world_size=self.dp_world_size, group=self.dp_group).mean().item()\n        \n        all_lm_losses = torch.tensor(all_lm_losses, device=self.device)\n        eval_lm_loss = all_gather(all_lm_losses, dim=0, world_size=self.dp_world_size, group=self.dp_group).mean().item()\n        \n        all_kd_losses = torch.tensor(all_kd_losses, device=self.device)\n        eval_kd_loss = all_gather(all_kd_losses, dim=0, world_size=self.dp_world_size, group=self.dp_group).mean().item()\n        \n        results = {\"pt_loss\": eval_pt_loss, \"lm_loss\": eval_lm_loss, \"kd_loss\": eval_kd_loss}\n        \n        return results\n    \n    def save(self, directory: Optional[str] = None):\n        \"\"\"Saves the current model and tokenizer to a numbered checkpoint directory\"\"\"\n        base_ckpt_path = directory or self.args.save\n        ckpt_dir = os.path.join(base_ckpt_path, f\"{self.global_iter_count}\")\n        os.makedirs(ckpt_dir, exist_ok=True)\n        if 
self.args.model_parallel:\n            raise NotImplementedError\n        else:\n            if get_rank() == 0:\n                self.model.module.base_model.save_pretrained(ckpt_dir, safe_serialization=False)\n                # torch.save(self.model.module.value_model.state_dict(), os.path.join(ckpt_dir, \"value_model.ckpt\"))\n                print(f\"Model saved to {ckpt_dir}\")\n                self.tokenizer.save_pretrained(ckpt_dir)\n\n    def save_evals(self, preds, results, response_texts, directory: Optional[str] = None):\n        \"\"\"Saves evaluation predictions, metrics, and generated response texts\"\"\"\n        base_ckpt_path = directory or self.args.save\n        save_dir = os.path.join(base_ckpt_path, \"eval\", f\"{self.global_iter_count}\")\n        os.makedirs(save_dir, exist_ok=True)\n        \n        if get_rank() == 0:\n            torch.save(preds, os.path.join(save_dir, \"preds.pt\"))\n            torch.save(results, os.path.join(save_dir, \"results.pt\"))\n            with open(os.path.join(save_dir, \"answers.jsonl\"), \"w\") as f:\n                for resp in response_texts:\n                    f.write(json.dumps({\"text\": resp}) + \"\\n\")\n\n    def push_to_store(self, data):\n        self.store.push(data)\n         \n    def generate(self, input_ids, attention_mask=None, mode=\"base\", teacher_mixed_sample=False, **kwargs):\n        \"\"\"Wraps Hugging Face's `generate` with this trainer's default settings\"\"\"\n        input_ids = input_ids.to(self.device)\n        if attention_mask is not None:\n            attention_mask = attention_mask.to(self.device)\n\n        kwargs = dict(self.generate_kwargs, **kwargs)\n\n        if mode == \"base\":\n            model = self.model.module\n        elif mode == \"teacher\":\n            model = self.teacher_model\n        else:\n            raise NotImplementedError\n\n        mix_in_model, mix_in_alpha = None, None\n        if 
teacher_mixed_sample:\n            mix_in_model = self.teacher_model\n            mix_in_alpha = self.args.teacher_mixed_alpha\n\n        with torch.no_grad():\n            \n            generation_config = GenerationConfig(**kwargs)\n            \n            max_new_tokens = generation_config.max_length - input_ids.size(1)\n            gen = model.generate(\n                input_ids=input_ids,\n                attention_mask=attention_mask,\n                generation_config=generation_config,\n                max_new_tokens=max_new_tokens,\n                mix_in_model=mix_in_model,\n                mix_in_alpha=mix_in_alpha\n            )\n            \n            gen.sequences = F.pad(\n                gen.sequences,\n                (0, self.max_length - gen.sequences.shape[1]),\n                value=self.tokenizer.pad_token_id,\n            )\n            \n            if gen.scores is not None:\n                gen.scores = torch.stack(gen.scores, dim=1)\n                gen.scores = torch.cat([\n                    gen.scores, \n                    torch.zeros(\n                        gen.scores.size(0),\n                        self.max_length - self.args.max_prompt_length - gen.scores.size(1),\n                        gen.scores.size(2),\n                        device=gen.scores.device)],\n                    dim=1)\n                \n            # NOTE: scores: [b, s, h_p]\n\n        return gen"
  },
  {
    "path": "minillm/utils.py",
    "content": "import math\nfrom enum import Enum\nfrom numbers import Number\nfrom typing import Tuple\n\nimport torch\nimport torch.nn.functional as F\nimport torch.distributed as dist\nfrom torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR\nfrom accelerate import init_empty_weights\n\n\nfrom transformers import (\n    AutoModelForCausalLM,\n    AutoConfig,\n)\n\n\ndef get_entropy(gen_logits, inf_mask, mask, model_parallel=False):\n    inf_mask = torch.isinf(gen_logits) | inf_mask\n    if model_parallel:\n        raise NotImplementedError\n    else:\n        full_probs = F.softmax(gen_logits, dim=-1, dtype=torch.float32)\n        full_logprobs = F.log_softmax(gen_logits, dim=-1, dtype=torch.float32)\n        full_logprobs = full_logprobs.masked_fill(inf_mask, 0)        \n        ent = -torch.sum(full_probs * full_logprobs, dim=-1)\n    ent = ent * mask    \n    return ent\n\n\ndef get_log_probs(logits, ids, mask, inf_mask=None, model_parallel=False):\n    if model_parallel:\n        raise NotImplementedError\n    else:\n        logprobs = F.log_softmax(logits, dim=-1)\n        if inf_mask is not None:\n            logprobs = logprobs.masked_fill(inf_mask, -float(\"inf\"))\n        logprobs = torch.gather(logprobs, dim=-1, index=ids.unsqueeze(-1)).squeeze(-1)\n    logprobs = logprobs.masked_fill(~(mask.bool()), 0)\n    \n    # we ensure that the selected logprobs are not inf or nan\n    assert all((~torch.isinf(logprobs.view(-1))) & (~torch.isnan(logprobs.view(-1))))\n    \n    return logprobs\n\n\ndef get_x_entropy(logits_1, logits_2, inf_mask, mask, model_parallel=False):\n    inf_mask = torch.isinf(logits_1) | torch.isinf(logits_2) | inf_mask\n    if model_parallel:\n        raise NotImplementedError\n    else:\n        full_probs = F.softmax(logits_1, dim=-1, dtype=torch.float32)\n        full_logprobs = F.log_softmax(logits_2, dim=-1, dtype=torch.float32)\n        full_logprobs = full_logprobs.masked_fill(inf_mask, 0)\n        xent = 
-torch.sum(full_probs * full_logprobs, dim=-1)\n    xent = xent * mask\n    return xent\n\n\ndef get_rev_kl(log_p, log_q, mask):\n    log_ratio = (log_p - log_q) * mask\n    kl = log_ratio.float().exp() - 1 - log_ratio\n    return kl\n\n\ndef get_global_statistics(xs: torch.Tensor) -> Tuple[float, float, int]:\n    \"\"\"\n    Computes element-wise mean and variance of the tensor across processes\n    \"\"\"\n    sum_and_count = torch.tensor([xs.sum(), xs.numel()], device=xs.device)\n    dist.all_reduce(sum_and_count, dist.ReduceOp.SUM)\n    global_sum, count = sum_and_count\n    global_mean = global_sum / count\n\n    sum_var = torch.sum((xs - global_mean) ** 2)\n    dist.all_reduce(sum_var, dist.ReduceOp.SUM)\n    global_var = sum_var / count\n    return global_mean, global_var, count\n\n\ndef whiten(xs: torch.Tensor, shift_mean=True, distributed=True) -> torch.Tensor:\n    \"\"\"Whitens values\"\"\"\n    if distributed and dist.is_initialized():\n        mean, var, _ = get_global_statistics(xs)\n    else:\n        var, mean = torch.var_mean(xs)\n\n    whitened = (xs - mean) * torch.rsqrt(var + 1e-8)\n    if not shift_mean:\n        whitened += mean\n    return whitened\n\n\ndef significant(x: Number, ndigits=2) -> Number:\n    \"\"\"\n    Cut the number up to its `ndigits` after the most significant\n    \"\"\"\n    if isinstance(x, torch.Tensor):\n        x = x.item()\n\n    if not isinstance(x, Number) or x == 0:\n        return x\n\n    return round(x, ndigits - int(math.floor(math.log10(abs(x)))))\n\n\nclass OptimizerName(str, Enum):\n    \"\"\"Supported optimizer names\"\"\"\n\n    ADAM: str = \"adam\"\n    ADAMW: str = \"adamw\"\n    ADAM_8BIT_BNB: str = \"adam_8bit_bnb\"\n    ADAMW_8BIT_BNB: str = \"adamw_8bit_bnb\"\n    SGD: str = \"sgd\"\n\n\ndef get_optimizer_class(name: OptimizerName):\n    \"\"\"\n    Returns the optimizer class with the given name\n\n    Args:\n        name (str): Name of the optimizer as found in `OptimizerNames`\n    \"\"\"\n    
if name == OptimizerName.ADAM:\n        return torch.optim.Adam\n    if name == OptimizerName.ADAMW:\n        return torch.optim.AdamW\n    if name == OptimizerName.SGD:\n        return torch.optim.SGD\n    supported_optimizers = [o.value for o in OptimizerName]\n    raise ValueError(\n        f\"`{name}` is not a supported optimizer. \"\n        f\"Supported optimizers are: {supported_optimizers}\"\n    )\n\n\nclass SchedulerName(str, Enum):\n    \"\"\"Supported scheduler names\"\"\"\n\n    COSINE_ANNEALING = \"cosine_annealing\"\n    LINEAR = \"linear\"\n\n\ndef get_scheduler_class(name: SchedulerName):\n    \"\"\"\n    Returns the scheduler class with the given name\n    \"\"\"\n    if name == SchedulerName.COSINE_ANNEALING:\n        return CosineAnnealingLR\n    if name == SchedulerName.LINEAR:\n        return LinearLR\n    supported_schedulers = [s.value for s in SchedulerName]\n    raise ValueError(\n        f\"`{name}` is not a supported scheduler. \"\n        f\"Supported schedulers are: {supported_schedulers}\"\n    )"
  },
  {
    "path": "rouge_metric.py",
"content": "import string\nimport json\nimport os\nimport argparse\nfrom rouge_score import rouge_scorer\n\n\ndefault_rouge_scorer = rouge_scorer.RougeScorer(['rougeL'], use_stemmer=True)\n\n# adapted the following from SQuAD v1.1 evaluation, without removing the articles.\ndef normalize_answer(s):\n    \"\"\"Lowercase text and remove punctuation and extra whitespace.\"\"\"\n\n    def white_space_fix(text):\n        return ' '.join(text.split())\n\n    def remove_punc(text):\n        exclude = set(string.punctuation)\n        return ''.join(ch for ch in text if ch not in exclude)\n\n    def lower(text):\n        return text.lower()\n\n    return white_space_fix(remove_punc(lower(s)))\n\n\ndef exact_match(prediction, ground_truth, xlingual=False):\n    return normalize_answer(prediction) == normalize_answer(ground_truth)\n\n\ndef rouge(prediction, ground_truth, xlingual=False):\n    scorer = default_rouge_scorer\n    scores = scorer.score(prediction=prediction, target=ground_truth)\n    return scores[\"rougeL\"].fmeasure\n\n\ndef metric_max_over_ground_truths(metric_fn, prediction, ground_truths, xlingual=False):\n    scores_for_ground_truths = []\n    for ground_truth in ground_truths:\n        score = metric_fn(prediction, ground_truth, xlingual=xlingual)\n        scores_for_ground_truths.append(score)\n    return max(scores_for_ground_truths)\n\n\ndef compute_metrics(predictions, references, xlingual=False):\n    # assert len(predictions) == len(references), f\"# of predictions {len(predictions)} doesn't match # of references {len(references)}.\"\n    \n    min_length = min(len(predictions), len(references))\n    predictions = predictions[:min_length]\n    references = references[:min_length]\n    \n    em, rougeL = 0, 0\n    for pred, gold in zip(predictions, references):\n        assert isinstance(gold, list)\n        em += metric_max_over_ground_truths(\n            exact_match, prediction=pred, ground_truths=gold, 
xlingual=xlingual\n        )\n        rougeL += metric_max_over_ground_truths(\n            rouge, prediction=pred, ground_truths=gold, xlingual=xlingual\n        )\n    em = 100.0 * em / len(references)\n    rougeL = 100.0 * rougeL / len(references)\n    metrics = {\"exact_match\": em, \"rougeL\": rougeL}\n    metrics = {k: round(v, 4) for k, v in metrics.items()}\n    return metrics\n\n\ndef compute_grouped_metrics(predictions, references, groups, xlingual=False):\n    assert len(predictions) == len(references) == len(groups)\n\n    examples_by_group = {}\n    for pred, gold, group in zip(predictions, references, groups):\n        if group not in examples_by_group:\n            examples_by_group[group] = []\n        examples_by_group[group].append((pred, gold))\n    \n    results = {}\n    for group, group_examples in examples_by_group.items():\n        task_predictions, task_references = zip(*group_examples)\n        group_metrics = compute_metrics(task_predictions, task_references, xlingual=xlingual)\n        for metric, value in group_metrics.items():\n            results[f\"{metric}_for_{group}\"] = value\n    return results\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\n        \"--prediction_file\", required=True,\n        help=\"Jsonl file with each line corresponding to a prediction. \"\n             \"Each json object should have an `id` and a `text` key.\")\n    parser.add_argument(\n        \"--reference_file\", required=True,\n        help=\"Jsonl file with each line corresponding to a reference. \"\n             \"Each json object should have an `id` and an `output` key. \"\n             \"`task_id`, `task_category` and `task_track` are optional; they are used to \"\n             \"compute per-task and per-category performance, and performance on the default (English) / xlingual tracks.\")\n    parser.add_argument(\n        \"--output_file\",\n        help=\"Directory to write the result json file to.\")\n    parser.add_argument(\n        \"--model_name\",\n        help=\"Model name used to name the output json file.\")\n    return parser.parse_args()\n\n\nif __name__ == \"__main__\":\n    args = parse_args()\n\n    references = []\n    with open(args.reference_file) as fin:\n        for line in fin:\n            instance = json.loads(line)\n            if isinstance(instance[\"output\"], list):\n                references.append(instance[\"output\"])\n            else:\n                references.append([instance[\"output\"]])\n\n    predictions = []\n    with open(args.prediction_file) as fin:\n        for line in fin:\n            prediction = json.loads(line)\n            predictions.append(prediction[\"text\"])\n\n    predictions = predictions[:1000]\n\n    references = references[:len(predictions)]\n\n    results = compute_metrics(predictions, references, xlingual=False)\n\n    print(results)\n\n    if args.output_file:\n        os.makedirs(args.output_file, exist_ok=True)\n        with open(os.path.join(args.output_file, f\"{args.model_name}.json\"), \"w\") as fout:\n            json.dump(results, fout, indent=2)\n"
  },
  {
    "path": "scripts/gpt2/distillm/train_0.1B_1.5B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-base\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/gpt2/512/10M/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=64\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/distill_0.1B_1.5B_final2\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num 
-1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type adaptive-sfkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# distillm\nOPTS+=\" --student-gen\"\nOPTS+=\" --gen-num-beams 1\"\nOPTS+=\" --gen-top-p 1.0\"\nOPTS+=\" --init-threshold 0.0\"\nOPTS+=\" --loss-eps 0.1\"\nOPTS+=\" --capacity 1000\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\nCODE_BASE=HF ${CMD}\n"
  },
  {
    "path": "scripts/gpt2/distillm/train_0.3B_1.5B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-medium\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/gpt2/512/10M/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=32\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/distillm/distill_0.3B_1.5B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" 
--log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type adaptive-srkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# distillm\nOPTS+=\" --student-gen\"\nOPTS+=\" --gen-num-beams 1\"\nOPTS+=\" --gen-top-p 1.0\"\nOPTS+=\" --init-threshold 0.0\"\nOPTS+=\" --loss-eps 0.1\"\nOPTS+=\" --capacity 1000\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\nCODE_BASE=HF ${CMD}\n"
  },
  {
    "path": "scripts/gpt2/distillm/train_0.7B_1.5B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-large\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/gpt2/512/10M/\"\n# hp\nBATCH_SIZE=4\nLR=0.0001\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/distillm/distillm_0.7B_1.5B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" 
--log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type adaptive-srkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# distillm\nOPTS+=\" --student-gen\"\nOPTS+=\" --gen-num-beams 1\"\nOPTS+=\" --gen-top-p 1.0\"\nOPTS+=\" --init-threshold 0.0\"\nOPTS+=\" --loss-eps 0.1\"\nOPTS+=\" --capacity 1000\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\nCODE_BASE=HF ${CMD}\n"
  },
  {
    "path": "scripts/gpt2/eval/eval_main_dolly.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"gpt2-base\"}\nCKPT=\"${BASE_PATH}/results/gpt2/train/${CKPT_NAME}/\"\n# data\nDATA_NAMES=\"dolly\"\nDATA_DIR=\"${BASE_PATH}/data/dolly\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type gpt2\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/eval/eval_main_self_inst.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"gpt2-base\"}\nCKPT=\"${BASE_PATH}/results/gpt2/train/${CKPT_NAME}/\"\n# data\nDATA_NAMES=\"self_inst\"\nDATA_DIR=\"${BASE_PATH}/data/self-inst\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type gpt2\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/eval/eval_main_sinst.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"gpt2-base\"}\nCKPT=\"${BASE_PATH}/results/gpt2/train/${CKPT_NAME}/\"\n# data\nSPLIT=\"11_\"\nDATA_NAMES=\"sinst_${SPLIT}\"\nDATA_DIR=\"${BASE_PATH}/data/sinst/${SPLIT}\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type gpt2\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/eval/eval_main_uinst.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"gpt2-base\"}\nCKPT=\"${BASE_PATH}/results/gpt2/train/${CKPT_NAME}/\"\n# data\nSPLIT=\"11_\"\nDATA_NAMES=\"uinst_${SPLIT}\"\nDATA_DIR=\"${BASE_PATH}/data/uinst/${SPLIT}\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type gpt2\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 10000\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/eval/eval_main_vicuna.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"gpt2-base\"}\nCKPT=\"${BASE_PATH}/results/gpt2/train/${CKPT_NAME}/\"\n# data\nDATA_NAMES=\"vicuna\"\nDATA_DIR=\"${BASE_PATH}/data/vicuna\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type gpt2\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/eval/run_eval.sh",
    "content": "#!/bin/bash\n\nMASTER_PORT=2040\nDEVICE=${1}\nckpt=${2}\n\nfor seed in 10 20 30 40 50\ndo\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/gpt2/eval/eval_main_dolly.sh ./ ${MASTER_PORT} 1 ${ckpt} --seed $seed  --eval-batch-size 16\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/gpt2/eval/eval_main_self_inst.sh ./ ${MASTER_PORT} 1 ${ckpt} --seed $seed  --eval-batch-size 16\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/gpt2/eval/eval_main_vicuna.sh ./ ${MASTER_PORT} 1 ${ckpt} --seed $seed  --eval-batch-size 16\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/gpt2/eval/eval_main_sinst.sh ./ ${MASTER_PORT} 1 ${ckpt} --seed $seed  --eval-batch-size 16\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/gpt2/eval/eval_main_uinst.sh ./ ${MASTER_PORT} 1 ${ckpt} --seed $seed  --eval-batch-size 16\ndone"
  },
  {
    "path": "scripts/gpt2/gkd/gkd_base.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-base\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=2\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/gkd/base_xl\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" 
--deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-jsd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 0.5\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/gkd/gkd_large.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-large\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/kd/large_xlarge\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# 
deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-jsd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 0.5\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/gkd/gkd_medium.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-medium\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/kd/medium_xlarge\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# 
deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-jsd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 0.5\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/imitkd/imitkd_base.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-base\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/gpt2/512/10M/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/imitkd/base_xlarge\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" 
--save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-fkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# ImitKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/imitkd/imitkd_large.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-large\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/gpt2/512/10M/\"\n# hp\nBATCH_SIZE=4\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/imitkd/large_xlarge\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" 
--save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-fkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# ImitKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/imitkd/imitkd_medium.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-medium\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/gpt2/512/10M/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/imitkd/medium_xlarge\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" 
--save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-fkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# ImitKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/init/init_base.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-base\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2\" # download automatically\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=32\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/sft\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 3\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport 
NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/init/init_large.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-large\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2-large\" # download automatically\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/sft\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 3\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 
1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/init/init_medium.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-medium\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2-medium\" # download automatically\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/sft\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 3\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 
1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/kd/kd_base.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-base\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2\" # download automatically\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=2\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/kd\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# 
deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/kd/kd_large.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-large\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2-large\" # download automatically\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/kd/large_xlarge\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" 
--seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/kd/kd_medium.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-medium\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2-medium\" # download automatically\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/kd/medium_xlarge\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# 
seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/minillm/train_base_xl.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"base-init\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/gpt2-base\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nPROMPT_DATA_DIR=\"${BASE_PATH}/processed_data/dolly/prompt/gpt2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/gpt2/512/10M/\"\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/minillm/\"\n# hp\nGRAD_ACC=1\nBATCH_SIZE=8\nCHUNK_SIZE=16\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --n-nodes ${NNODES}\"\nOPTS+=\" --teacher-model-fp16\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --prompt-data-dir ${PROMPT_DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --dev-num 1000\"\nOPTS+=\" --num-workers 0\"\n# hp\nOPTS+=\" --epochs 10\"\nOPTS+=\" --total-iters 5000\"\nOPTS+=\" --kd-ratio 0.5\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --lr 5e-6\"\nOPTS+=\" --lr-min 5e-6\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\nOPTS+=\" --warmup-iters 100\"\n# runtime\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\nOPTS+=\" --seed-ppo 42\"\nOPTS+=\" --seed-lm 7\"\nOPTS+=\" --save-interval 500\"\nOPTS+=\" --eval-interval 100\"\nOPTS+=\" --log-interval 16\"\nOPTS+=\" --mid-log-num 1\"\n# ppo\nOPTS+=\" --type minillm\"\nOPTS+=\" --ppo-epochs 
4\"\nOPTS+=\" --num-rollouts 256\"\nOPTS+=\" --chunk-size ${CHUNK_SIZE}\"\n# minillm\nOPTS+=\" --length-norm\"\nOPTS+=\" --single-step-reg\"\nOPTS+=\" --teacher-mixed-alpha 0.2\"\n# reward\nOPTS+=\" --reward-scaling 0.5\"\nOPTS+=\" --cliprange-reward 100\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/train_minillm.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/minillm/train_large_xl.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"large-init\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/gpt2-large\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nPROMPT_DATA_DIR=\"${BASE_PATH}/processed_data/dolly/prompt/gpt2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/gpt2/512/10M/\"\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/minillm/\"\n# hp\nGRAD_ACC=2\nBATCH_SIZE=2\nCHUNK_SIZE=4\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --n-nodes ${NNODES}\"\nOPTS+=\" --teacher-model-fp16\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --prompt-data-dir ${PROMPT_DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --dev-num 1000\"\nOPTS+=\" --num-workers 0\"\n# hp\nOPTS+=\" --epochs 10\"\nOPTS+=\" --total-iters 5000\"\nOPTS+=\" --kd-ratio 0.5\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --lr 5e-6\"\nOPTS+=\" --lr-min 5e-6\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\nOPTS+=\" --warmup-iters 100\"\n# runtime\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\nOPTS+=\" --seed-ppo 42\"\nOPTS+=\" --seed-lm 7\"\nOPTS+=\" --save-interval 500\"\nOPTS+=\" --eval-interval 100\"\nOPTS+=\" --log-interval 16\"\nOPTS+=\" --mid-log-num 1\"\n# ppo\nOPTS+=\" --type minillm\"\nOPTS+=\" --ppo-epochs 
4\"\nOPTS+=\" --num-rollouts 256\"\nOPTS+=\" --chunk-size ${CHUNK_SIZE}\"\n# minillm\nOPTS+=\" --length-norm\"\nOPTS+=\" --single-step-reg\"\nOPTS+=\" --teacher-mixed-alpha 0.2\"\n# reward\nOPTS+=\" --reward-scaling 0.5\"\nOPTS+=\" --cliprange-reward 100\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/train_minillm.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/minillm/train_medium_xl.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"medium-init\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/init/gpt2-medium\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nPROMPT_DATA_DIR=\"${BASE_PATH}/processed_data/dolly/prompt/gpt2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/gpt2/512/10M/\"\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/minillm/\"\n# hp\nGRAD_ACC=2\nBATCH_SIZE=2\nCHUNK_SIZE=8\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --n-nodes ${NNODES}\"\nOPTS+=\" --teacher-model-fp16\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --prompt-data-dir ${PROMPT_DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --dev-num 1000\"\nOPTS+=\" --num-workers 0\"\n# hp\nOPTS+=\" --epochs 10\"\nOPTS+=\" --total-iters 5000\"\nOPTS+=\" --kd-ratio 0.5\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --lr 5e-6\"\nOPTS+=\" --lr-min 5e-6\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\nOPTS+=\" --warmup-iters 100\"\n# runtime\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\nOPTS+=\" --seed-ppo 42\"\nOPTS+=\" --seed-lm 7\"\nOPTS+=\" --save-interval 500\"\nOPTS+=\" --eval-interval 100\"\nOPTS+=\" --log-interval 16\"\nOPTS+=\" --mid-log-num 1\"\n# ppo\nOPTS+=\" --type minillm\"\nOPTS+=\" --ppo-epochs 
4\"\nOPTS+=\" --num-rollouts 256\"\nOPTS+=\" --chunk-size ${CHUNK_SIZE}\"\n# minillm\nOPTS+=\" --length-norm\"\nOPTS+=\" --single-step-reg\"\nOPTS+=\" --teacher-mixed-alpha 0.2\"\n# reward\nOPTS+=\" --reward-scaling 0.5\"\nOPTS+=\" --cliprange-reward 100\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/train_minillm.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/seqkd/seqkd_base.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-base\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/pseudo/gpt2-xlarge-sft/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/gpt2/512/10M/\"\n# hp\nBATCH_SIZE=2\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/seqkd/base_xlarge\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" 
--save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/seqkd/seqkd_large.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-large\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2-large\" # download automatically\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/pseudo/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/seqkd/large_xlarge\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# 
seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/seqkd/seqkd_medium.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-medium\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2-medium\" # download automatically\nTEACHER_CKPT_NAME=\"xlarge-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/pseudo/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/seqkd/medium_xlarge\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# 
seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/sft/sft_base.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-base\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2\" # download automatically\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=32\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/sft\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport 
NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/sft/sft_large.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-large\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2-large\" # download automatically\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0001\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/sft\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 
1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/sft/sft_medium.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-medium\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2-medium\" # download automatically\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0001\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/sft\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 
1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/sft/sft_xlarge.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-xlarge\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"gpt2-xl\" # download automatically\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nBATCH_SIZE=2\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/train/sft\"\n# seed\nSEED=10\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" 
--top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/tools/generate_data_seqkd.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"gpt2-xlarge-sft\"\nCKPT=\"${BASE_PATH}/results/gpt2/train/sft/gpt2-xlarge/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/gpt2/\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/gpt2/gen/\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names dolly\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --gen-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed-ppo 42\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type gen\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/generate.py ${OPTS} $@\"\n\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/gpt2/tools/process_data_dolly.sh",
    "content": "BASE_PATH=${1}\n\nexport TF_CPP_MIN_LOG_LEVEL=3\n\n# only prompt for MiniLLM train\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_dolly.py \\\n    --data-dir ${BASE_PATH}/data/dolly/ \\\n    --processed-data-dir ${BASE_PATH}/processed_data/dolly/prompt \\\n    --model-path ${BASE_PATH}/checkpoints/gpt2-large \\\n    --data-process-workers 32 \\\n    --max-prompt-length 256 \\\n    --dev-num 1000 \\\n    --only-prompt \\\n    --model-type gpt2\n\n# prompt and response for baselines\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_dolly.py \\\n    --data-dir ${BASE_PATH}/data/dolly/ \\\n    --processed-data-dir ${BASE_PATH}/processed_data/dolly/full \\\n    --model-path ${BASE_PATH}/checkpoints/gpt2-large \\\n    --data-process-workers 32 \\\n    --max-prompt-length 256 \\\n    --dev-num 1000 \\\n    --model-type gpt2\n"
  },
  {
    "path": "scripts/gpt2/tools/process_data_pretrain.sh",
    "content": "BASE_PATH=${1}\n\nMAX_LENGTH=512\n\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_pretrain.py \\\n    --data-dir ${BASE_PATH}/data/openwebtext \\\n    --processed-data-dir ${BASE_PATH}/processed_data/openwebtext/gpt2/${MAX_LENGTH}/ \\\n    --model-path ${BASE_PATH}/checkpoints/gpt2-large \\\n    --max-length ${MAX_LENGTH} \\\n    --train-num 10000000 \\\n    --data-process-workers 32 \\\n    --dev-num 10000 \\"
  },
  {
    "path": "scripts/gpt2/tools/process_pseudo_data_seqkd.sh",
    "content": "BASE_PATH=${1}\n\nexport TF_CPP_MIN_LOG_LEVEL=3\n\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_dolly.py \\\n    --data-dir ${BASE_PATH}/results/gpt2/gen/gpt2-xlarge-sft/t1.0-l512 \\\n    --processed-data-dir ${BASE_PATH}/processed_data/dolly/pseudo \\\n    --model-path ${BASE_PATH}/checkpoints/gpt2-large \\\n    --data-process-workers 32 \\\n    --max-prompt-length 256 \\\n    --dev-num -1 \\\n    --model-type gpt2\n\ncp ${BASE_PATH}/processed_data/dolly/full/gpt2/valid_0.bin ${BASE_PATH}/processed_data/dolly/pseudo/gpt2/\ncp ${BASE_PATH}/processed_data/dolly/full/gpt2/valid_0.idx ${BASE_PATH}/processed_data/dolly/pseudo/gpt2/\ncp ${BASE_PATH}/processed_data/dolly/full/gpt2/valid.jsonl ${BASE_PATH}/processed_data/dolly/pseudo/gpt2/\n"
  },
  {
    "path": "scripts/openllama2/distillm/train_3B_7B_teacher_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"openllama2-3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nPEFT_CKPT_NAME=\"openllama2-3B\"\nPEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/init/${PEFT_CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"openllama2-7B\"\nTEACHER_CKPT=\"${BASE_PATH}/checkpoints/${TEACHER_CKPT_NAME}/\"\nTEACHER_PEFT_CKPT_NAME=\"sft_7B\"\nTEACHER_PEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/sft/${TEACHER_PEFT_CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/openllama2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/openllama2/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/train/distillm/3B_7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" 
--kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --do-train\"\nOPTS+=\" --peft-name ${PEFT_CKPT_NAME}\"\nOPTS+=\" --peft-path ${PEFT_CKPT}\"\nOPTS+=\" --teacher-peft-name ${TEACHER_PEFT_CKPT_NAME}\"\nOPTS+=\" --teacher-peft-path ${TEACHER_PEFT_CKPT}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type srkl-aesop\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --init-threshold 0.2\"\nOPTS+=\" --loss-eps 0.2\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/eval/eval_main_dolly_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"openllama2-3B\"}\n# CKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nCKPT=\"/home/jongwoo/.cache/huggingface/hub/models--openlm-research--open_llama_7b_v2/snapshots/e5961def23172a2384543940e773ab676033c963/\"\nPEFT_CKPT_NAME=${5-\"lora\"}\nPEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/${PEFT_CKPT_NAME}/\"\n# data\nDATA_NAMES=\"dolly\"\nDATA_DIR=\"${BASE_PATH}/data/dolly\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --peft-name ${PEFT_CKPT_NAME}\"\nOPTS+=\" --peft-path ${PEFT_CKPT}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport 
PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/eval/eval_main_self_inst_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"openllama2-3B\"}\n# CKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nCKPT=\"/home/jongwoo/.cache/huggingface/hub/models--openlm-research--open_llama_7b_v2/snapshots/e5961def23172a2384543940e773ab676033c963/\"\nPEFT_CKPT_NAME=${5-\"lora\"}\nPEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/${PEFT_CKPT_NAME}/\"\n# data\nDATA_NAMES=\"self_inst\"\nDATA_DIR=\"${BASE_PATH}/data/self-inst\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --peft-name ${PEFT_CKPT_NAME}\"\nOPTS+=\" --peft-path ${PEFT_CKPT}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport 
PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/eval/eval_main_sinst_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"openllama2-3B\"}\n# CKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nCKPT=\"/home/jongwoo/.cache/huggingface/hub/models--openlm-research--open_llama_7b_v2/snapshots/e5961def23172a2384543940e773ab676033c963/\"\nPEFT_CKPT_NAME=${5-\"lora\"}\nPEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/${PEFT_CKPT_NAME}/\"\n# data\nSPLIT=\"11_\"\nDATA_NAMES=\"sinst_${SPLIT}\"\nDATA_DIR=\"${BASE_PATH}/data/sinst/${SPLIT}\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --peft-name ${PEFT_CKPT_NAME}\"\nOPTS+=\" --peft-path ${PEFT_CKPT}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport 
PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/eval/eval_main_uinst_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"openllama2-3B\"}\n# CKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nCKPT=\"/home/jongwoo/.cache/huggingface/hub/models--openlm-research--open_llama_7b_v2/snapshots/e5961def23172a2384543940e773ab676033c963/\"\nPEFT_CKPT_NAME=${5-\"lora\"}\nPEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/${PEFT_CKPT_NAME}/\"\n# data\nSPLIT=\"11_\"\nDATA_NAMES=\"uinst_${SPLIT}\"\nDATA_DIR=\"${BASE_PATH}/data/uinst/${SPLIT}\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 10000\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --peft-name ${PEFT_CKPT_NAME}\"\nOPTS+=\" --peft-path ${PEFT_CKPT}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport 
PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/eval/eval_main_vicuna_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"openllama2-3B\"}\n# CKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nCKPT=\"/home/jongwoo/.cache/huggingface/hub/models--openlm-research--open_llama_7b_v2/snapshots/e5961def23172a2384543940e773ab676033c963/\"\nPEFT_CKPT_NAME=${5-\"lora\"}\nPEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/${PEFT_CKPT_NAME}/\"\n# data\nDATA_NAMES=\"vicuna\"\nDATA_DIR=\"${BASE_PATH}/data/vicuna\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --peft-name ${PEFT_CKPT_NAME}\"\nOPTS+=\" --peft-path ${PEFT_CKPT}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport 
PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/eval/run_eval.sh",
    "content": "#!/bin/bash\n\nMASTER_PORT=2040\nDEVICE=${1}\nckpt=${2}\n\n# dolly eval\nfor seed in 10 20 30 40 50\ndo\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/openllama2/eval/eval_main_dolly_lora.sh ./ ${MASTER_PORT} 1 openllama2-3B ${ckpt} --seed $seed  --eval-batch-size 4\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/openllama2/eval/eval_main_self_inst_lora.sh ./ ${MASTER_PORT} 1 openllama2-3B ${ckpt} --seed $seed  --eval-batch-size 4\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/openllama2/eval/eval_main_vicuna_lora.sh ./ ${MASTER_PORT} 1 openllama2-3B ${ckpt} --seed $seed  --eval-batch-size 4\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/openllama2/eval/eval_main_sinst_lora.sh ./ ${MASTER_PORT} 1 openllama2-3B ${ckpt} --seed $seed  --eval-batch-size 4\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/openllama2/eval/eval_main_uinst_lora.sh ./ ${MASTER_PORT} 1 openllama2-3B ${ckpt} --seed $seed  --eval-batch-size 4\ndone"
  },
  {
    "path": "scripts/openllama2/gkd/gkd_3B_7B_teacher_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"openllama2-3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nPEFT_CKPT_NAME=\"openllama2-3B\"\nPEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/init/${PEFT_CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"openllama2-7B\"\nTEACHER_CKPT=\"${BASE_PATH}/checkpoints/${TEACHER_CKPT_NAME}/\"\nTEACHER_PEFT_CKPT_NAME=\"sft_7B\"\nTEACHER_PEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/sft/${TEACHER_PEFT_CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/openllama2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/openllama2/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/train/gkd/3B_7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" 
--kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --do-train\"\nOPTS+=\" --peft-name ${PEFT_CKPT_NAME}\"\nOPTS+=\" --peft-path ${PEFT_CKPT}\"\nOPTS+=\" --teacher-peft-name ${TEACHER_PEFT_CKPT_NAME}\"\nOPTS+=\" --teacher-peft-path ${TEACHER_PEFT_CKPT}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type jsd-mixed\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 0.5\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/imitkd/imitkd_3B_7B_teacher_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"openllama2-3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nPEFT_CKPT_NAME=\"openllama2-3B\"\nPEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/init/${PEFT_CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"openllama2-7B\"\nTEACHER_CKPT=\"${BASE_PATH}/checkpoints/${TEACHER_CKPT_NAME}/\"\nTEACHER_PEFT_CKPT_NAME=\"sft_7B\"\nTEACHER_PEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/sft/${TEACHER_PEFT_CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/openllama2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/openllama2/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/train/imitkd/3B_7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" 
--kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --do-train\"\nOPTS+=\" --peft-name ${PEFT_CKPT_NAME}\"\nOPTS+=\" --peft-path ${PEFT_CKPT}\"\nOPTS+=\" --teacher-peft-name ${TEACHER_PEFT_CKPT_NAME}\"\nOPTS+=\" --teacher-peft-path ${TEACHER_PEFT_CKPT}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-fkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# ImitKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/init/sft_3B_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"openllama2-3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/openllama2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/train/minillm_init/openllama2-3B\"\n# seed\nSEED=20\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 3\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num 1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# lora\nOPTS+=\" --peft lora\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config_zero2.json\"\n# type\nOPTS+=\" --type lm\"\n# 
gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/kd/kd_3B_7B_teacher_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"openllama2-3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"openllama2-7B\"\nTEACHER_CKPT=\"${BASE_PATH}/checkpoints/${TEACHER_CKPT_NAME}/\"\nTEACHER_PEFT_CKPT_NAME=\"sft_7B\"\nTEACHER_PEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/sft/${TEACHER_PEFT_CKPT_NAME}/\"\nMP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/openllama2/\"\n# hp\nBATCH_SIZE=4\nLR=0.00001\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/train/kd/kd_3B_7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\nOPTS+=\" --gradient-checkpointing\"\n# OPTS+=\" --model-parallel\"\n# OPTS+=\" --model-parallel-size ${MP_SIZE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 0.5\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" 
--do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --teacher-peft-name ${TEACHER_PEFT_CKPT_NAME}\"\nOPTS+=\" --teacher-peft-path ${TEACHER_PEFT_CKPT}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config_zero2.json\"\n# type\nOPTS+=\" --type fkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/minillm/train_3B_7B_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"openllama2-3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nPEFT_CKPT_NAME=\"openllama2-3B\"\nPEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/init/${PEFT_CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"openllama2-7B\"\nTEACHER_CKPT=\"${BASE_PATH}/checkpoints/${TEACHER_CKPT_NAME}/\"\nTEACHER_PEFT_CKPT_NAME=\"sft_7B\"\nTEACHER_PEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/sft/${TEACHER_PEFT_CKPT_NAME}/\"\n# data\nPROMPT_DATA_DIR=\"${BASE_PATH}/processed_data/dolly/prompt/openllama2/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/openllama2/512/1M/\"\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/train/minillm/3B_7B\"\n# hp\nGRAD_ACC=4\nBATCH_SIZE=2\nCHUNK_SIZE=8\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --prompt-data-dir ${PROMPT_DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --dev-num 1000\"\nOPTS+=\" --num-workers 0\"\n# hp\nOPTS+=\" --epochs 10\"\nOPTS+=\" --total-iters 5000\"\nOPTS+=\" --kd-ratio 0.5\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --lr 5e-6\"\nOPTS+=\" --lr-min 5e-6\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\nOPTS+=\" --warmup-iters 100\"\nOPTS+=\" --scheduler-name cosine_trm\"\n# 
runtime\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\nOPTS+=\" --seed-ppo 42\"\nOPTS+=\" --seed-lm 7\"\nOPTS+=\" --save-interval 500\"\nOPTS+=\" --eval-interval 500\"\nOPTS+=\" --log-interval 16\"\nOPTS+=\" --mid-log-num 1\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --do-train\"\nOPTS+=\" --peft-name ${PEFT_CKPT_NAME}\"\nOPTS+=\" --peft-path ${PEFT_CKPT}\"\nOPTS+=\" --teacher-peft-name ${TEACHER_PEFT_CKPT_NAME}\"\nOPTS+=\" --teacher-peft-path ${TEACHER_PEFT_CKPT}\"\n# ppo\nOPTS+=\" --type minillm\"\nOPTS+=\" --ppo-epochs 4\"\nOPTS+=\" --num-rollouts 256\"\nOPTS+=\" --chunk-size ${CHUNK_SIZE}\"\n# minillm\nOPTS+=\" --length-norm\"\nOPTS+=\" --single-step-reg\"\nOPTS+=\" --teacher-mixed-alpha 0.2\"\n# reward\nOPTS+=\" --reward-scaling 0.5\"\nOPTS+=\" --cliprange-reward 100\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config_zero2.json\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/train_minillm.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/seqkd/seqkd_3B_7B_teacher_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"openllama2-3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"openllama2-7B\"\nTEACHER_CKPT=\"${BASE_PATH}/checkpoints/${TEACHER_CKPT_NAME}/\"\nTEACHER_PEFT_CKPT_NAME=\"sft_7B\"\nTEACHER_PEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/sft/${TEACHER_PEFT_CKPT_NAME}/\"\nMP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/pseudo/openllama2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/train/seqkd/3B_7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\nOPTS+=\" --gradient-checkpointing\"\n# OPTS+=\" --model-parallel\"\n# OPTS+=\" --model-parallel-size ${MP_SIZE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 0.5\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" 
--do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --teacher-peft-name ${TEACHER_PEFT_CKPT_NAME}\"\nOPTS+=\" --teacher-peft-path ${TEACHER_PEFT_CKPT}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config_zero2.json\"\n# type\nOPTS+=\" --type fkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/sft/sft_3B_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"openllama2-3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/openllama2/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/train/sft/sft_3B\"\n# seed\nSEED=20\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num 1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# lora\nOPTS+=\" --peft lora\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config_zero2.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" 
--do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/sft/sft_7B_lora.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"openllama2-7B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/openllama2/\"\n# hp\nBATCH_SIZE=4\nLR=0.0005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/train/sft/sft_7B\"\n# seed\nSEED=20\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num 1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# lora\nOPTS+=\" --peft lora\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config_zero2.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" 
--do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/tools/generate_data_seqkd.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"openllama2-7B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nPEFT_CKPT_NAME=\"sft_7B\"\nPEFT_CKPT=\"${BASE_PATH}/results/openllama2/train/sft/${PEFT_CKPT_NAME}/\"\nMP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/openllama2/\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/openllama2/gen/\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type llama\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names dolly\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --gen-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# lora\nOPTS+=\" --peft lora\"\nOPTS+=\" --peft-name ${PEFT_CKPT_NAME}\"\nOPTS+=\" --peft-path ${PEFT_CKPT}\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed-ppo 42\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type gen\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/generate.py ${OPTS} $@\"\n\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/openllama2/tools/process_data_dolly.sh",
    "content": "BASE_PATH=${1}\n\nexport TF_CPP_MIN_LOG_LEVEL=3\n\n# only prompt for MiniLLM train\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_dolly.py \\\n    --data-dir ${BASE_PATH}/data/dolly/ \\\n    --processed-data-dir ${BASE_PATH}/processed_data/dolly/prompt \\\n    --model-path ${BASE_PATH}/checkpoints/openllama2-3B \\\n    --data-process-workers 32 \\\n    --max-prompt-length 256 \\\n    --dev-num 1000 \\\n    --only-prompt \\\n    --model-type openllama2\n\n# prompt and response for baselines\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_dolly.py \\\n    --data-dir ${BASE_PATH}/data/dolly/ \\\n    --processed-data-dir ${BASE_PATH}/processed_data/dolly/full \\\n    --model-path ${BASE_PATH}/checkpoints/openllama2-3B \\\n    --data-process-workers 32 \\\n    --max-prompt-length 256 \\\n    --dev-num 1000 \\\n    --model-type openllama2\n"
  },
  {
    "path": "scripts/openllama2/tools/process_data_pretrain.sh",
    "content": "BASE_PATH=${1}\n\nMAX_LENGTH=512\n\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_pretrain.py \\\n    --data-dir ${BASE_PATH}/data/openwebtext \\\n    --processed-data-dir ${BASE_PATH}/processed_data/openwebtext/openllama2/${MAX_LENGTH}/ \\\n    --model-path ${BASE_PATH}/checkpoints/openllama2-3B \\\n    --max-length ${MAX_LENGTH} \\\n    --train-num 1000000 \\\n    --data-process-workers 32 \\\n    --dev-num 10000 \\\n"
  },
  {
    "path": "scripts/openllama2/tools/process_pseudo_data_seqkd.sh",
    "content": "BASE_PATH=${1}\n\nexport TF_CPP_MIN_LOG_LEVEL=3\n\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_dolly.py \\\n    --data-dir ${BASE_PATH}/results/openllama2/gen/openllama2-7B/t1.0-l512 \\\n    --processed-data-dir ${BASE_PATH}/processed_data/dolly/pseudo \\\n    --model-path ${BASE_PATH}/checkpoints/openllama2-7B \\\n    --data-process-workers 32 \\\n    --max-prompt-length 256 \\\n    --dev-num -1 \\\n    --model-type openllama2\n\ncp ${BASE_PATH}/processed_data/dolly/full/openllama2/valid_0.bin ${BASE_PATH}/processed_data/dolly/pseudo/openllama2/\ncp ${BASE_PATH}/processed_data/dolly/full/openllama2/valid_0.idx ${BASE_PATH}/processed_data/dolly/pseudo/openllama2/\ncp ${BASE_PATH}/processed_data/dolly/full/openllama2/valid.jsonl ${BASE_PATH}/processed_data/dolly/pseudo/openllama2/\n"
  },
  {
    "path": "scripts/opt/distillm/train_0.1B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.1B\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/distillm/0.1B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# OPTS+=\" --model-parallel\"\n# OPTS+=\" --model-parallel-size ${MP_SIZE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" 
--eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num 1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type adaptive-srkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n# distillm\nOPTS+=\" --student-gen\"\nOPTS+=\" --gen-top-p 1.0\"\nOPTS+=\" --init-threshold 0.0\"\nOPTS+=\" --loss-eps 0.1\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\nCODE_BASE=HF ${CMD}\n"
  },
  {
    "path": "scripts/opt/distillm/train_0.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.3B\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/distillm/0.3B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# OPTS+=\" --model-parallel\"\n# OPTS+=\" --model-parallel-size ${MP_SIZE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" 
--eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num 1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type adaptive-srkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n# distillm\nOPTS+=\" --student-gen\"\nOPTS+=\" --gen-top-p 1.0\"\nOPTS+=\" --init-threshold 0.0\"\nOPTS+=\" --loss-eps 0.1\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\nCODE_BASE=HF ${CMD}\n"
  },
  {
    "path": "scripts/opt/distillm/train_1.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-1.3B\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/distillm/1.3B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# OPTS+=\" --model-parallel\"\n# OPTS+=\" --model-parallel-size ${MP_SIZE}\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" 
--eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num 1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type adaptive-srkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n# distillm\nOPTS+=\" --student-gen\"\nOPTS+=\" --gen-top-p 1.0\"\nOPTS+=\" --init-threshold 0.0\"\nOPTS+=\" --loss-eps 0.1\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\nCODE_BASE=HF ${CMD}\n"
  },
  {
    "path": "scripts/opt/eval/eval_main_dolly.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"opt-1.3B\"}\nCKPT=\"${BASE_PATH}/results/opt/train/${CKPT_NAME}/\"\nMP_SIZE=4\n# data\nDATA_NAMES=\"dolly\"\nDATA_DIR=\"${BASE_PATH}/data/dolly\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --model-parallel\"\n# OPTS+=\" --model-parallel-size ${MP_SIZE}\"\nOPTS+=\" --model-type opt\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/eval/eval_main_self_inst.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"opt-1.3B\"}\nCKPT=\"${BASE_PATH}/results/opt/train/${CKPT_NAME}/\"\nMP_SIZE=4\n# data\nDATA_NAMES=\"self_inst\"\nDATA_DIR=\"${BASE_PATH}/data/self-inst\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --model-parallel\"\n# OPTS+=\" --model-parallel-size ${MP_SIZE}\"\nOPTS+=\" --model-type opt\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/eval/eval_main_sinst.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"opt-1.3B\"}\nCKPT=\"${BASE_PATH}/results/opt/train/${CKPT_NAME}/\"\nMP_SIZE=4\n# data\nSPLIT=\"11_\"\nDATA_NAMES=\"sinst_new_${SPLIT}\"\nDATA_DIR=\"${BASE_PATH}/data/sinst/${SPLIT}\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --model-parallel\"\n# OPTS+=\" --model-parallel-size ${MP_SIZE}\"\nOPTS+=\" --model-type opt\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/eval/eval_main_uinst.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"opt-1.3B\"}\nCKPT=\"${BASE_PATH}/results/opt/train/${CKPT_NAME}/\"\nMP_SIZE=4\n# data\nSPLIT=\"11_\"\nDATA_NAMES=\"uinst_${SPLIT}\"\nDATA_DIR=\"${BASE_PATH}/data/uinst/${SPLIT}\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --model-parallel\"\n# OPTS+=\" --model-parallel-size ${MP_SIZE}\"\nOPTS+=\" --model-type opt\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 10000\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/eval/eval_main_vicuna.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-1}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=${4-\"opt-1.3B\"}\nCKPT=\"${BASE_PATH}/results/opt/train/${CKPT_NAME}/\"\nMP_SIZE=4\n# data\nDATA_NAMES=\"vicuna\"\nDATA_DIR=\"${BASE_PATH}/data/vicuna\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/eval_main/\"\nTYPE=\"eval_main\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\n# OPTS+=\" --model-parallel\"\n# OPTS+=\" --model-parallel-size ${MP_SIZE}\"\nOPTS+=\" --model-type opt\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names ${DATA_NAMES}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-eval\"\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type ${TYPE}\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/evaluate.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/eval/run_eval.sh",
    "content": "#!/bin/bash\n\nMASTER_PORT=2040\nDEVICE=${1}\nckpt=${2}\n\n# dolly eval\nfor seed in $SEED\ndo\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/opt/eval/eval_main_dolly.sh ./ ${MASTER_PORT} 1 ${ckpt} --seed $seed  --eval-batch-size 16\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/opt/eval/eval_main_self_inst.sh ./ ${MASTER_PORT} 1 ${ckpt} --seed $seed  --eval-batch-size 16\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/opt/eval/eval_main_vicuna.sh ./ ${MASTER_PORT} 1 ${ckpt} --seed $seed  --eval-batch-size 16\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/opt/eval/eval_main_sinst.sh ./ ${MASTER_PORT} 1 ${ckpt} --seed $seed  --eval-batch-size 16\n    CUDA_VISIBLE_DEVICES=${DEVICE} bash ./scripts/opt/eval/eval_main_uinst.sh ./ ${MASTER_PORT} 1 ${ckpt} --seed $seed  --eval-batch-size 16\ndone"
  },
  {
    "path": "scripts/opt/gkd/gkd_0.1B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.1B\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/gkd/0.1B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval 
-1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-jsd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 0.5\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/gkd/gkd_0.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.3B\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/gkd/0.3B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval 
-1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-jsd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 0.5\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/gkd/gkd_1.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-1.3B\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/gkd/1.3B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval 
-1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-jsd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 0.5\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/imitkd/imitkd_0.1B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.1B\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/imitkd/0.1B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval 
-1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-fkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 1.0\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/imitkd/imitkd_0.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.3B\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/imitkd/0.3B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval 
-1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-fkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/imitkd/imitkd_1.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-1.3B\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/${CKPT_NAME}\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/imitkd/1.3B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval 
-1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type mixed-fkl\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# GKD\nOPTS+=\" --student-gen\"\nOPTS+=\" --mixed-alpha 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/init/init_0.1B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.1B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=16\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/init/opt-0.1B\"\n# seed\nSEED=10\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 3\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/init/init_0.3B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=16\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/init/opt-0.3B\"\n# seed\nSEED=10\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 3\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/init/init_1.3B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-1.3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\n# hp\nBATCH_SIZE=2\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/init/opt-1.3B\"\n# seed\nSEED=10\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 3\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/kd/kd_0.1B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.1B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/kd/0.1B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/kd/kd_0.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/kd/0.3B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/kd/kd_1.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-1.3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/kd/1.3B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/minillm/train_0.1B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"0.1B-init\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/opt-0.1B\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# data\nPROMPT_DATA_DIR=\"${BASE_PATH}/processed_data/dolly/prompt/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/minillm/\"\n# hp\nGRAD_ACC=2\nBATCH_SIZE=4\nCHUNK_SIZE=16\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --n-nodes ${NNODES}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --prompt-data-dir ${PROMPT_DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --dev-num 1000\"\nOPTS+=\" --num-workers 0\"\n# hp\nOPTS+=\" --epochs 10\"\nOPTS+=\" --total-iters 5000\"\nOPTS+=\" --kd-ratio 0.5\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --lr 5e-6\"\nOPTS+=\" --lr-min 5e-6\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\nOPTS+=\" --warmup-iters 100\"\n# runtime\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\nOPTS+=\" --seed-ppo 42\"\nOPTS+=\" --seed-lm 7\"\nOPTS+=\" --save-interval 500\"\nOPTS+=\" --eval-interval 100\"\nOPTS+=\" --log-interval 16\"\nOPTS+=\" --mid-log-num 1\"\n# ppo\nOPTS+=\" --type minillm\"\nOPTS+=\" --ppo-epochs 4\"\nOPTS+=\" --num-rollouts 256\"\nOPTS+=\" --chunk-size ${CHUNK_SIZE}\"\n# minillm\nOPTS+=\" --length-norm\"\nOPTS+=\" --single-step-reg\"\nOPTS+=\" --teacher-mixed-alpha 0.2\"\n# reward\nOPTS+=\" --reward-scaling 0.5\"\nOPTS+=\" --cliprange-reward 100\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/train_minillm.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/minillm/train_0.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"0.3B-init\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/opt-0.3B\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# data\nPROMPT_DATA_DIR=\"${BASE_PATH}/processed_data/dolly/prompt/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/minillm/0.3B_2.7B\"\n# hp\nGRAD_ACC=1\nBATCH_SIZE=8\nCHUNK_SIZE=8\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --n-nodes ${NNODES}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --prompt-data-dir ${PROMPT_DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --dev-num 1000\"\nOPTS+=\" --num-workers 0\"\n# hp\nOPTS+=\" --epochs 10\"\nOPTS+=\" --total-iters 5000\"\nOPTS+=\" --kd-ratio 0.5\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --lr 5e-6\"\nOPTS+=\" --lr-min 5e-6\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\nOPTS+=\" --warmup-iters 100\"\n# runtime\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\nOPTS+=\" --seed-ppo 42\"\nOPTS+=\" --seed-lm 7\"\nOPTS+=\" --save-interval 500\"\nOPTS+=\" --eval-interval 100\"\nOPTS+=\" --log-interval 16\"\nOPTS+=\" --mid-log-num 1\"\n# ppo\nOPTS+=\" --type minillm\"\nOPTS+=\" --ppo-epochs 4\"\nOPTS+=\" --num-rollouts 256\"\nOPTS+=\" --chunk-size ${CHUNK_SIZE}\"\n# minillm\nOPTS+=\" --length-norm\"\nOPTS+=\" --single-step-reg\"\nOPTS+=\" --teacher-mixed-alpha 0.2\"\n# reward\nOPTS+=\" --reward-scaling 0.5\"\nOPTS+=\" --cliprange-reward 100\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/train_minillm.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/minillm/train_1.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"1.3B-init\"\nCKPT=\"${BASE_PATH}/results/opt/train/init/opt-1.3B\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# data\nPROMPT_DATA_DIR=\"${BASE_PATH}/processed_data/dolly/prompt/opt/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/minillm/1.3B_2.7B\"\n# hp\nGRAD_ACC=2\nBATCH_SIZE=4\nCHUNK_SIZE=8\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --n-nodes ${NNODES}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --prompt-data-dir ${PROMPT_DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --dev-num 1000\"\nOPTS+=\" --num-workers 0\"\n# hp\nOPTS+=\" --epochs 10\"\nOPTS+=\" --total-iters 5000\"\nOPTS+=\" --kd-ratio 0.5\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --lr 5e-6\"\nOPTS+=\" --lr-min 5e-6\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\nOPTS+=\" --warmup-iters 100\"\n# runtime\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed 10\"\nOPTS+=\" --seed-ppo 42\"\nOPTS+=\" --seed-lm 7\"\nOPTS+=\" --save-interval 500\"\nOPTS+=\" --eval-interval 100\"\nOPTS+=\" --log-interval 16\"\nOPTS+=\" --mid-log-num 1\"\n# ppo\nOPTS+=\" --type minillm\"\nOPTS+=\" --ppo-epochs 4\"\nOPTS+=\" --num-rollouts 256\"\nOPTS+=\" --chunk-size ${CHUNK_SIZE}\"\n# minillm\nOPTS+=\" --length-norm\"\nOPTS+=\" --single-step-reg\"\nOPTS+=\" --teacher-mixed-alpha 0.2\"\n# reward\nOPTS+=\" --reward-scaling 0.5\"\nOPTS+=\" --cliprange-reward 100\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/train_minillm.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/seqkd/seqkd_0.1B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.1B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/pseudo/opt-2.7B/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/seqkd/0.1B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/seqkd/seqkd_0.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/pseudo/opt-2.7B/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/seqkd/0.3B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/seqkd/seqkd_1.3B_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-1.3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\nTEACHER_CKPT_NAME=\"2.7B-sft\"\nTEACHER_CKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\n# MP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/pseudo/opt-2.7B/\"\nLM_DATA_DIR=\"${BASE_PATH}/processed_data/openwebtext/opt/512/1M/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/seqkd/1.3B_2.7B\"\n# seed\nSEED=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --teacher-model-path ${TEACHER_CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --teacher-ckpt-name ${TEACHER_CKPT_NAME}\"\nOPTS+=\" --teacher-model-fp16\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\nOPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --lm-data-dir ${LM_DATA_DIR}\"\nOPTS+=\" --num-workers 4\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\nOPTS+=\" --kd-ratio 1.0\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type kd\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/sft/sft_0.1B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.1B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\n# hp\nBATCH_SIZE=8\nLR=0.0005\nGRAD_ACC=4\nEVAL_BATCH_SIZE=16\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/sft/opt-0.1B\"\n# seed\nSEED=10\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 2\"\nOPTS+=\" --mid-log-num 1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/sft/sft_0.3B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-0.3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=4\nEVAL_BATCH_SIZE=16\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/sft/opt-0.3B\"\n# seed\nSEED=10\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 20\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 
1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/sft/sft_1.3B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-1.3B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# CKPT=\"facebook/opt-1.3b\" # download automatically\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\n# hp\nBATCH_SIZE=8\nLR=0.00005\nGRAD_ACC=4\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/sft\"\n# seed\nSEED=10\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" 
--do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/sft/sft_2.7B.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2012}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-2.7B\"\nCKPT=\"${BASE_PATH}/checkpoints/${CKPT_NAME}/\"\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\n# hp\nBATCH_SIZE=4\nLR=0.00005\nGRAD_ACC=1\nEVAL_BATCH_SIZE=8\n# length\nMAX_LENGTH=512\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B\"\n# seed\nSEED=10\nSEED_ORDER=10\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\n# OPTS+=\" --gradient-checkpointing\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --dev-num 1000\"\n# hp\nOPTS+=\" --lr ${LR}\"\nOPTS+=\" --batch-size ${BATCH_SIZE}\"\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --gradient-accumulation-steps ${GRAD_ACC}\"\nOPTS+=\" --warmup-iters 0\"\nOPTS+=\" --lr-decay-style cosine\"\nOPTS+=\" --weight-decay 1e-2\"\nOPTS+=\" --clip-grad 1.0\"\nOPTS+=\" --epochs 10\"\n# length\nOPTS+=\" --max-length ${MAX_LENGTH}\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --do-train\"\nOPTS+=\" --do-valid\"\nOPTS+=\" --eval-gen\"\nOPTS+=\" --save-interval -1\"\nOPTS+=\" --eval-interval -1\"\nOPTS+=\" --log-interval 4\"\nOPTS+=\" --mid-log-num -1\"\nOPTS+=\" --save ${SAVE_PATH}\"\n# seed\nOPTS+=\" --seed ${SEED}\"\nOPTS+=\" --seed-order ${SEED_ORDER}\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\n# type\nOPTS+=\" --type lm\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 
1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport NCCL_DEBUG=\"\"\nexport WANDB_DISABLED=True\nexport TF_CPP_MIN_LOG_LEVEL=3\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/finetune.py ${OPTS} $@\"\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/tools/generate_data_seqkd.sh",
    "content": "#! /bin/bash\n\nMASTER_ADDR=localhost\nMASTER_PORT=${2-2113}\nNNODES=1\nNODE_RANK=0\nGPUS_PER_NODE=${3-16}\n\nDISTRIBUTED_ARGS=\"--nproc_per_node $GPUS_PER_NODE \\\n                  --nnodes $NNODES \\\n                  --node_rank $NODE_RANK \\\n                  --master_addr $MASTER_ADDR \\\n                  --master_port $MASTER_PORT\"\n\n# model\nBASE_PATH=${1-\"/home/MiniLLM\"}\nCKPT_NAME=\"opt-2.7B-sft\"\nCKPT=\"${BASE_PATH}/results/opt/train/sft/opt-2.7B/\"\nMP_SIZE=4\n# data\nDATA_DIR=\"${BASE_PATH}/processed_data/dolly/full/opt/\"\n# hp\nEVAL_BATCH_SIZE=16\n# runtime\nSAVE_PATH=\"${BASE_PATH}/results/opt/gen/\"\n\n\nOPTS=\"\"\n# model\nOPTS+=\" --base-path ${BASE_PATH}\"\nOPTS+=\" --model-path ${CKPT}\"\nOPTS+=\" --ckpt-name ${CKPT_NAME}\"\nOPTS+=\" --n-gpu ${GPUS_PER_NODE}\"\nOPTS+=\" --model-type opt\"\n# data\nOPTS+=\" --data-dir ${DATA_DIR}\"\nOPTS+=\" --data-names dolly\"\nOPTS+=\" --num-workers 0\"\nOPTS+=\" --gen-num -1\"\nOPTS+=\" --data-process-workers -1\"\nOPTS+=\" --json-data\"\n# hp\nOPTS+=\" --eval-batch-size ${EVAL_BATCH_SIZE}\"\nOPTS+=\" --max-length 512\"\nOPTS+=\" --max-prompt-length 256\"\n# runtime\nOPTS+=\" --save ${SAVE_PATH}\"\nOPTS+=\" --seed-ppo 42\"\nOPTS+=\" --seed 10\"\n# deepspeed\nOPTS+=\" --deepspeed\"\nOPTS+=\" --deepspeed_config ${BASE_PATH}/configs/deepspeed/ds_config.json\"\nOPTS+=\" --type gen\"\n# gen\nOPTS+=\" --do-sample\"\nOPTS+=\" --top-k 0\"\nOPTS+=\" --top-p 1.0\"\nOPTS+=\" --temperature 1.0\"\n\n\nexport TOKENIZERS_PARALLELISM=false\nexport PYTHONIOENCODING=utf-8\nexport PYTHONPATH=${BASE_PATH}\nCMD=\"torchrun ${DISTRIBUTED_ARGS} ${BASE_PATH}/generate.py ${OPTS} $@\"\n\n\necho ${CMD}\necho \"PYTHONPATH=${PYTHONPATH}\"\nmkdir -p ${SAVE_PATH}\n${CMD}\n"
  },
  {
    "path": "scripts/opt/tools/process_data_dolly.sh",
    "content": "BASE_PATH=${1}\n\nexport TF_CPP_MIN_LOG_LEVEL=3\n\n# only prompt for MiniLLM train\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_dolly.py \\\n    --data-dir ${BASE_PATH}/data/dolly/ \\\n    --processed-data-dir ${BASE_PATH}/processed_data/dolly/prompt \\\n    --model-path ${BASE_PATH}/checkpoints/opt-1.3B \\\n    --data-process-workers 32 \\\n    --max-prompt-length 256 \\\n    --dev-num 1000 \\\n    --only-prompt \\\n    --model-type opt\n\n# prompt and response for baselines\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_dolly.py \\\n    --data-dir ${BASE_PATH}/data/dolly/ \\\n    --processed-data-dir ${BASE_PATH}/processed_data/dolly/full \\\n    --model-path ${BASE_PATH}/checkpoints/opt-1.3B \\\n    --data-process-workers 32 \\\n    --max-prompt-length 256 \\\n    --dev-num 1000 \\\n    --model-type opt\n"
  },
  {
    "path": "scripts/opt/tools/process_data_pretrain.sh",
    "content": "BASE_PATH=${1}\n\nMAX_LENGTH=512\n\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_pretrain.py \\\n    --data-dir ${BASE_PATH}/data/openwebtext \\\n    --processed-data-dir ${BASE_PATH}/processed_data/openwebtext/opt/${MAX_LENGTH}/ \\\n    --model-path ${BASE_PATH}/checkpoints/opt-0.3B \\\n    --max-length ${MAX_LENGTH} \\\n    --train-num 1000000 \\\n    --data-process-workers 32 \\\n    --dev-num 10000 \\"
  },
  {
    "path": "scripts/opt/tools/process_pseudo_data_seqkd.sh",
    "content": "BASE_PATH=${1}\n\nexport TF_CPP_MIN_LOG_LEVEL=3\n\nPYTHONPATH=${BASE_PATH} python3 ${BASE_PATH}/tools/process_data_dolly.py \\\n    --data-dir ${BASE_PATH}/results/opt/gen/opt-2.7B/t1.0-l512 \\\n    --processed-data-dir ${BASE_PATH}/processed_data/dolly/pseudo \\\n    --model-path ${BASE_PATH}/checkpoints/opt-0.3B \\\n    --data-process-workers 32 \\\n    --max-prompt-length 256 \\\n    --dev-num -1 \\\n    --model-type opt-2.7B\n\ncp ${BASE_PATH}/processed_data/dolly/full/opt/valid_0.bin ${BASE_PATH}/processed_data/dolly/pseudo/opt-2.7B/\ncp ${BASE_PATH}/processed_data/dolly/full/opt/valid_0.idx ${BASE_PATH}/processed_data/dolly/pseudo/opt-2.7B/\ncp ${BASE_PATH}/processed_data/dolly/full/opt/valid.jsonl ${BASE_PATH}/processed_data/dolly/pseudo/opt-2.7B/\n"
  },
  {
    "path": "tools/convert_mp.py",
    "content": "#coding:utf-8\nimport torch\nimport argparse\nimport os\nfrom transformers import AutoModelForCausalLM\nfrom transformers import (\n    decrease_mp_opt, increase_mp_opt,\n    decrease_mp_gptj, increase_mp_gptj,\n    decrease_mp_llama, increase_mp_llama,\n    decrease_mp_mistral, increase_mp_mistral,\n    decrease_mp_qwen, increase_mp_qwen,\n)\n\nfunc_map = {\n    \"opt\": (decrease_mp_opt, increase_mp_opt),\n    \"gptj\": (decrease_mp_gptj, increase_mp_gptj),\n    \"llama\": (decrease_mp_llama, increase_mp_llama),\n    \"llama2\": (decrease_mp_llama, increase_mp_llama),\n    \"mistral\": (decrease_mp_mistral, increase_mp_mistral),\n    \"qwen\": (decrease_mp_qwen, increase_mp_qwen),\n}\n\n\ndef main():\n    parser = argparse.ArgumentParser(\"Change the tensor parallel of a model.\")\n\n    parser.add_argument(\"--input_path\", type=str)\n    parser.add_argument(\"--model_type\", type=str, default=\"opt\")\n    parser.add_argument(\"--source_mp_size\", type=int, default=1)\n    parser.add_argument(\"--target_mp_size\", type=int, default=2)\n    # parser.add_argument(\"--save_path\", type=str)\n    parser.add_argument(\"--half\", action=\"store_true\")\n    parser.add_argument(\"--exist_ok\", action=\"store_true\")\n\n    args = parser.parse_args()\n    \n    decrease_mp, increase_mp = func_map[args.model_type]\n\n    if args.source_mp_size == 1:\n        assert args.target_mp_size > args.source_mp_size\n        args.save_path = os.path.join(args.input_path, f\"mp{args.target_mp_size}\")\n        assert args.exist_ok or not any([os.path.exists(os.path.join(args.save_path, f\"pytorch_model_{i}.bin\")) for i in range(args.target_mp_size)])\n        os.makedirs(args.save_path, exist_ok=True)\n        if args.model_type=='qwen':\n            model_hf =  AutoModelForCausalLM.from_pretrained(\n                args.input_path,\n                use_flash_attn=False,\n                fp16=True if args.half else False,\n                fp32=True if not 
args.half else False,\n                bf16=False,\n            ).state_dict()\n        else:\n            model_hf = AutoModelForCausalLM.from_pretrained(args.input_path, torch_dtype=torch.float16).state_dict()\n        d_list = increase_mp(model_hf, args.target_mp_size, half=args.half)\n        for i, d in enumerate(d_list):\n            torch.save(d, os.path.join(args.save_path, f\"pytorch_model_{i}.bin\"))\n    elif args.target_mp_size == 1:\n        assert args.source_mp_size > args.target_mp_size\n        args.save_path = args.input_path\n        assert args.exist_ok or not os.path.exists(os.path.join(args.save_path, \"pytorch_model.bin\"))\n        ckpt_path = os.path.join(args.input_path, f\"mp{args.source_mp_size}\")\n        d_list = [torch.load(os.path.join(ckpt_path, f\"pytorch_model_{i}.bin\"), map_location=\"cpu\") for i in range(args.source_mp_size)]\n        d = decrease_mp(d_list, half=args.half)\n        torch.save(d, os.path.join(args.save_path, \"pytorch_model.bin\"))\n    else:\n        args.save_path = os.path.join(args.input_path, f\"mp{args.target_mp_size}\")\n        assert args.exist_ok or not any([os.path.exists(os.path.join(args.save_path, f\"pytorch_model_{i}.bin\")) for i in range(args.target_mp_size)])\n        \n        ckpt_path = os.path.join(args.input_path, f\"mp{args.source_mp_size}\")\n        d_list = [torch.load(os.path.join(ckpt_path, f\"pytorch_model_{i}.bin\"), map_location=\"cpu\") for i in range(args.source_mp_size)]\n        d = decrease_mp(d_list, half=args.half)\n        \n        torch.save(d, os.path.join(args.input_path, \"pytorch_model.bin\"))\n        \n        os.makedirs(args.save_path, exist_ok=True)\n        if args.model_type=='qwen':\n            model_hf =  AutoModelForCausalLM.from_pretrained(\n                args.input_path,\n                use_flash_attn=False,\n                fp16=True if args.half else False,\n                fp32=True if not args.half else False,\n                bf16=False,\n     
       ).state_dict()\n        else:\n            model_hf = AutoModelForCausalLM.from_pretrained(args.input_path, torch_dtype=torch.float16).state_dict()\n        d_list = increase_mp(model_hf, args.target_mp_size, half=args.half)\n        for i, d in enumerate(d_list):\n            torch.save(d, os.path.join(args.save_path, f\"pytorch_model_{i}.bin\"))\n        \n        \n        \n    \nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "tools/get_openwebtext.py",
    "content": "import datasets\nimport os\nimport re\n\ndataset = datasets.load_dataset('openwebtext', split='train')\n\nos.makedirs(\"data/openwebtext\", exist_ok=True)\n\nnum = 0\nwith open(\"data/openwebtext/data.txt\", \"w\") as f:\n    for data in dataset:\n        f.write(re.sub(r\"\\n+\", \"<@x(x!>\", data['text']) + \"\\n\")\n        num += 1\n\nprint(\"Number of lines:\", num)"
  },
  {
    "path": "tools/process_data_dolly.py",
    "content": "import multiprocessing\nimport os\nimport time\nimport torch\nimport json\nimport sys\nfrom numerize.numerize import numerize\nimport numpy as np\nfrom data_utils.indexed_dataset import make_builder\nfrom transformers import AutoTokenizer\nfrom arguments import get_args\n\n\n# 1. Implement an Encoder, which gives it a line of input data and it returns you the tokenized result.\nclass Encoder(object): \n    def __init__(self, args):\n        self.args = args\n\n    def initializer(self):\n        Encoder.tokenizer = AutoTokenizer.from_pretrained(self.args.model_path)\n\n    def encode(self, line):\n        line = json.loads(line)\n        if \"input\" not in line or len(line[\"input\"]) == 0:\n            if self.args.model_type!=\"qwen\":\n                template = (\n                    \"Below is an instruction that describes a task. \"\n                    \"Write a response that appropriately completes the request.\\n\\n\"\n                    \"### Instruction:\\n{instruction}\\n\\n### Response:\\n\"\n                )\n            else:\n                template = (\n                    \"<|im_start|>Below is an instruction that describes a task. \"\n                    \"Write a response that appropriately completes the request.\\n\\n\"\n                    \"### Instruction:\\n{instruction}\\n\\n### Response:\\n<|im_end|><|im_start|>Assistant:\"\n                )\n            prompt = template.format(instruction=line[\"instruction\"])\n        else:\n            if self.args.model_type!=\"qwen\":\n                template = (\n                    \"Below is an instruction that describes a task, paired with an input that provides further context. 
\"\n                    \"Write a response that appropriately completes the request.\\n\\n\"\n                    \"### Instruction:\\n{instruction}\\n\\n### Input:\\n{input}\\n\\n### Response:\\n\"\n                )\n            else:\n                template = (\n                    \"<|im_start|>Below is an instruction that describes a task, paired with an input that provides further context. \"\n                    \"Write a response that appropriately completes the request.\\n\\n\"\n                    \"### Instruction:\\n{instruction}\\n\\n### Input:\\n{input}\\n\\n### Response:\\n<|im_end|><|im_start|>Assistant:\"\n                )\n            prompt = template.format(instruction=line[\"instruction\"], input=line[\"input\"])\n            \n        response = line[\"output\"]\n        prompt_tokens = Encoder.tokenizer.encode(prompt, add_special_tokens=False)\n        full_tokens = Encoder.tokenizer.encode(prompt + response, add_special_tokens=False) + [Encoder.tokenizer.eos_token_id]\n        response_tokens = full_tokens[len(prompt_tokens):]\n        \n        if len(prompt_tokens) > self.args.max_prompt_length:\n            prompt_tokens = prompt_tokens[:self.args.max_prompt_length]\n            # return None, None, None, None, len(line)\n        \n        return line, prompt, prompt_tokens, response_tokens, len(line)\n\n\ndef main():\n    print(\"OK\")\n    args = get_args()\n        \n    if 'generated' not in args.processed_data_dir:\n        args.processed_data_dir = os.path.join(args.processed_data_dir, args.model_type)\n\n    os.makedirs(args.processed_data_dir, exist_ok=True)\n    \n    with open(os.path.join(args.data_dir, \"raw.jsonl\")) as f:\n        raw_data = f.readlines()\n\n    if args.dev_num > 0:\n        all_data = {\n            \"valid\": raw_data[:args.dev_num],\n            \"train\": raw_data[args.dev_num:]\n        }\n    else:\n        all_data = {\n            \"train\": raw_data\n        }\n    \n    for split in all_data:\n  
      \n        # encoder use the tokenizer to encode data\n        encoder = Encoder(args)\n\n        # 2. Mapping all datas with Encoder, with the help of multiprocessing\n        pool = multiprocessing.Pool(processes=args.data_process_workers, initializer=encoder.initializer)\n        encoded_docs = pool.imap_unordered(encoder.encode, all_data[split], chunksize=50)\n        proc_start = time.time()\n        total_bytes_processed = 0\n        \n        bin_file = os.path.join(args.processed_data_dir, f\"{split}_{0}.bin\")\n        idx_file = os.path.join(args.processed_data_dir, f\"{split}_{0}.idx\")\n\n        if args.model_type!=\"qwen\":\n            binary_builder = make_builder(bin_file, impl=\"mmap\", dtype=np.uint16)\n        else:\n            binary_builder = make_builder(bin_file, impl=\"mmap\", dtype=np.uint32)\n\n        # put tokenized data into binary_builder\n        inst_num = 0\n        print(\"#\"*10, split, \"#\"*10)\n        \n        prompt_lens = []\n        response_lens = []\n        \n        json_file = open(os.path.join(args.processed_data_dir, f\"{split}.jsonl\"), \"w\")\n        \n        for lid, (line, prompt_str, prompt, response, bytes_processed) in enumerate(encoded_docs):\n            total_bytes_processed += bytes_processed\n            if prompt is None:\n                continue\n            \n            if args.only_prompt:\n                if len(prompt) < args.max_length:\n                    binary_builder.add_item(torch.IntTensor(prompt))\n                else:\n                    continue\n            else:\n                binary_builder.add_item(torch.IntTensor(prompt + [-1] + response))\n\n            json_file.write(json.dumps({\n                \"instruction\": line[\"instruction\"],\n                \"prompt\": prompt_str,\n                \"input\": line[\"input\"],\n                \"output\": line[\"output\"],\n            }) + \"\\n\")\n\n            prompt_lens.append(len(prompt))\n            
response_lens.append(len(response))\n\n            inst_num += 1\n            if lid % 1000 == 0:\n                current = time.time()\n                elapsed = current - proc_start\n                mbs = total_bytes_processed / elapsed / 1024 / 1024\n                print(f\"Processed {lid} documents. {inst_num} instances.\",\n                    f\"({lid/elapsed} docs/s, {mbs} MB/s).\",\n                    file=sys.stderr)\n\n        # finish compressing tokenized data into `bin_file`, and generate meta information into `idx_file`\n        binary_builder.finalize(idx_file)\n\n        # close multiproceessing mapping\n        pool.close()\n        json_file.close()\n                \n        print(\"Data num\", len(prompt_lens))\n        print(\"Prompt lengths.\", \"Mean:\", np.mean(prompt_lens), \"Max:\", np.max(prompt_lens), \"Min:\", np.min(prompt_lens))\n        print(\"Response\", \"Mean:\", np.mean(response_lens), \"Max:\", np.max(response_lens), \"Min:\", np.min(response_lens))\n\n\nif __name__ == '__main__':\n    main()"
  },
  {
    "path": "tools/process_data_pretrain.py",
    "content": "import multiprocessing\nimport os\nimport time\nimport torch\nimport sys\nfrom numerize.numerize import numerize\nimport numpy as np\nfrom data_utils.indexed_dataset import make_builder\nfrom transformers import AutoTokenizer\nfrom arguments import get_args\n\n\n# 1. Implement an Encoder, which gives it a line of input data and it returns you the tokenized result.\nclass Encoder(object): \n    def __init__(self, args):\n        self.args = args\n        \n    def initializer(self):\n        Encoder.tokenizer = AutoTokenizer.from_pretrained(self.args.model_path)\n\n    def encode(self, line):\n        line = line.replace(\"<@x(x!>\", \"\\n\")\n        token_ids = Encoder.tokenizer.encode(line, add_special_tokens=False) + [Encoder.tokenizer.eos_token_id]\n        \n        return token_ids, len(line)\n\n\ndef main():\n    args = get_args()\n        \n    args.processed_data_dir = os.path.join(args.processed_data_dir, numerize(args.train_num))\n\n    os.makedirs(args.processed_data_dir, exist_ok=True)\n        \n    file_name = os.path.join(args.data_dir, \"data.txt\")\n    fin = open(file_name, \"r\", encoding=\"utf-8\")\n    # encoder use the tokenizer to encode data\n    encoder = Encoder(args)\n\n    # 2. Mapping all datas with Encoder, with the help of multiprocessing\n    pool = multiprocessing.Pool(processes=args.data_process_workers, initializer=encoder.initializer)\n    encoded_docs = pool.imap_unordered(encoder.encode, fin, chunksize=50)\n    proc_start = time.time()\n    total_bytes_processed = 0\n\n    # 3. 
tool `indexed_dataset` compress the tokenized data into binary format `bin_file`\n    # it will also generate another small `idx_file` for saving meta information in order to decode `bin_file`.\n    train_bin_file = os.path.join(args.processed_data_dir, f\"train_{0}.bin\")\n    train_idx_file = os.path.join(args.processed_data_dir, f\"train_{0}.idx\")\n\n    valid_bin_file = os.path.join(args.processed_data_dir, f\"valid_{0}.bin\")\n    valid_idx_file = os.path.join(args.processed_data_dir, f\"valid_{0}.idx\")\n\n    if args.model_type!=\"qwen\":\n        train_binary_builder = make_builder(train_bin_file, impl=\"mmap\", dtype=np.uint16)\n        valid_binary_builder = make_builder(valid_bin_file, impl=\"mmap\", dtype=np.uint16)\n    else:\n        train_binary_builder = make_builder(train_bin_file, impl=\"mmap\", dtype=np.uint32)\n        valid_binary_builder = make_builder(valid_bin_file, impl=\"mmap\", dtype=np.uint32)\n\n    # put tokenized data into binary_builder\n    buffer = []\n    inst_num = 0\n    for lid, (input_ids, bytes_processed) in enumerate(encoded_docs):\n        total_bytes_processed += bytes_processed\n        if input_ids is None:\n            continue\n        \n        buffer.extend(input_ids)\n        while len(buffer) >= args.max_length:\n            inst = buffer[:args.max_length]\n            buffer = buffer[args.max_length:]\n        \n            if inst_num < args.dev_num:\n                valid_binary_builder.add_item(torch.IntTensor(inst))\n            else:\n                train_binary_builder.add_item(torch.IntTensor(inst))\n            \n            inst_num += 1\n            \n        if lid % 10000 == 0:\n            current = time.time()\n            elapsed = current - proc_start\n            mbs = total_bytes_processed / elapsed / 1024 / 1024\n            print(f\"Processed {lid} documents. 
{inst_num} instances.\",\n                f\"({lid/elapsed} docs/s, {mbs} MB/s).\",\n                file=sys.stderr)\n        \n        if inst_num - args.dev_num >= args.train_num:\n            break\n\n    # finish compressing tokenized data into `bin_file`, and generate meta information into `idx_file`\n    train_binary_builder.finalize(train_idx_file)\n    valid_binary_builder.finalize(valid_idx_file)\n\n    # close multiproceessing mapping\n    pool.close()\n\n\nif __name__ == '__main__':\n    main()"
  },
  {
    "path": "train_minillm.py",
    "content": "import torch\nimport os\nimport json\nimport torch.distributed as dist\nfrom accelerate import init_empty_weights\n\nfrom transformers import (\n    AutoModelForCausalLM,\n    AutoConfig,)\n\nfrom arguments import get_args\nfrom utils import print_args, initialize, load_parallel, get_tokenizer\n\nfrom minillm import train, Reward\n\nfrom peft import PeftModel\n\n\ndef get_teacher_model(args, device):\n    config = AutoConfig.from_pretrained(args.teacher_model_path)\n    if args.model_parallel:\n        raise NotImplementedError\n    else:\n        config.is_model_parallel = False\n        model = AutoModelForCausalLM.from_pretrained(args.teacher_model_path, config=config, device_map={\"\": device}, torch_dtype=torch.float16)\n\n        if args.peft is not None:\n            if args.peft == \"lora\":\n                assert args.teacher_peft_path is not None\n                model = PeftModel.from_pretrained(model, args.teacher_peft_path)\n                model = model.merge_and_unload()\n            else:\n                raise NotImplementedError\n        else:\n            if dist.get_rank() == 0:\n                print(' > number of parameters: {}'.format(\n                    sum([p.nelement() for p in model.parameters()])), flush=True)\n\n    model.eval()\n\n    return model\n\n\ndef main():\n    \n    args = get_args()\n    initialize(args)\n\n    device = torch.cuda.current_device()\n    \n    os.makedirs(args.save, exist_ok=True)\n    if dist.get_rank() == 0:\n        print_args(args)\n        with open(os.path.join(args.save, \"args.json\"), \"w\") as f:\n            json.dump(vars(args), f)\n            \n    with open(args.deepspeed_config, \"r\") as f:\n        ds_config = json.load(f)\n\n    ds_config[\"gradient_accumulation_steps\"] = args.gradient_accumulation_steps\n    ds_config[\"train_micro_batch_size_per_gpu\"] = args.batch_size\n    ds_config[\"gradient_clipping\"] = args.clip_grad\n    ds_config[\"steps_per_print\"] = 
10000000\n    \n    args.fp32 = not ds_config[\"fp16\"][\"enabled\"]\n    args.deepspeed_config = None\n    \n    if args.teacher_model_type is None:\n        args.teacher_model_type = args.model_type\n    \n    teacher_model = get_teacher_model(args, device)\n    tokenizer = get_tokenizer(args)\n    \n    reward = Reward(args, tokenizer, teacher_model)\n    \n    train(\n        args=args,\n        tokenizer=tokenizer,\n        reward_fn=reward.reward_fn,\n        teacher_model=teacher_model,\n        ds_config=ds_config,\n        prompt_data=args.prompt_data_dir,\n        eval_prompt_data=args.prompt_data_dir,\n        lm_data=args.lm_data_dir,\n        eval_lm_data=args.lm_data_dir,\n    )\n\n\nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "utils.py",
    "content": "from typing import Dict\nimport numpy as np\nimport os\nimport time\nimport torch.distributed as dist\nfrom torch.distributed import get_rank\nimport random\nimport torch\nimport torch.nn as nn\nfrom datetime import timedelta\nimport deepspeed\nfrom accelerate import load_checkpoint_and_dispatch, init_empty_weights\nfrom peft import get_peft_model, LoraConfig, TaskType, PeftModel\n\n\nfrom transformers import (\n    AutoModelForCausalLM,\n    AutoTokenizer,\n    AutoConfig,\n)\n\n\n# Logging\ndef print_args(args):\n    \"\"\"Print arguments.\"\"\"\n\n    print('arguments:', flush=True)\n    for arg in vars(args):\n        dots = '.' * (29 - len(arg))\n        print('  {} {} {}'.format(arg, dots, getattr(args, arg)), flush=True)\n\n\ndef save_rank(log_str, save_path, rank=0):\n    if not dist.is_initialized() or dist.get_rank() == rank:\n        with open(save_path, \"a\") as f:\n            f.write(log_str + \"\\n\")\n\n\ndef print_rank(*args, rank=0, **kwargs):\n    if not dist.is_initialized() or dist.get_rank() == rank:\n        print(*args, **kwargs)\n\n\n# Distributed\ndef all_gather(t, dim=0, world_size=None, group=None, op=\"cat\"):\n    if world_size is None:\n        world_size = dist.get_world_size()\n    all_t = [torch.zeros_like(t) for _ in range(world_size)]\n    dist.all_gather(all_t, t, group=group)\n    if op == \"cat\":\n        all_t = torch.cat(all_t, dim=dim)\n    elif op == \"stack\":\n        all_t = torch.stack(all_t, dim=dim)\n    return all_t\n\n\n# Initialize\ndef set_random_seed(seed, mp=False):\n    \"\"\"Set random seed for reproducability.\"\"\"\n    seed = dist.get_rank() + seed\n    if seed is not None and seed > 0:\n        random.seed(seed)\n        np.random.seed(seed)\n        torch.manual_seed(seed)\n        if mp:\n            mpu.model_parallel_cuda_manual_seed(seed)\n\n\ndef init_distributed(args):\n    args.rank = int(os.getenv(\"RANK\", \"0\"))\n    args.world_size = int(os.getenv(\"WORLD_SIZE\", \"1\"))\n   
 args.local_rank = int(os.getenv(\"LOCAL_RANK\", \"0\"))\n\n    if args.rank == 0:\n        print(f\"using world size: {args.world_size}\")\n\n    # Manually set the device ids.\n    device = args.rank % torch.cuda.device_count()\n    if args.local_rank is not None:\n        device = args.local_rank\n    torch.cuda.set_device(device)\n\n    dist.init_process_group(backend=\"nccl\", timeout=timedelta(minutes=300))\n\n\ndef init_distributed_ds(args):\n    args.rank = int(os.getenv(\"RANK\", \"0\"))\n    args.world_size = int(os.getenv(\"WORLD_SIZE\", \"1\"))\n    args.local_rank = int(os.getenv(\"LOCAL_RANK\", \"0\"))\n\n    if args.rank == 0:\n        print(f\"using world size: {args.world_size}\")\n\n    # Manually set the device ids.\n    device = args.rank % torch.cuda.device_count()\n    if args.local_rank is not None:\n        device = args.local_rank\n    torch.cuda.set_device(device)\n\n    deepspeed.init_distributed(timeout=timedelta(minutes=300))\n\n\ndef initialize(args):\n    # init bmt\n    if args.deepspeed:\n        init_distributed_ds(args)\n    else:\n        init_distributed(args)\n\n    if args.model_parallel:\n        raise NotImplementedError\n\n    set_random_seed(args.seed, args.model_parallel)\n    # init save folder\n    if args.save != None:\n        os.makedirs(args.save, exist_ok=True)\n\n\n# Load and save model\ndef get_model(args, device):\n    config = AutoConfig.from_pretrained(args.model_path)\n    \n    st_time = time.time()\n    if args.model_parallel:\n        raise NotImplementedError\n    else:\n        config.is_model_parallel = False\n        dtype = torch.float32 if args.fp32 else torch.float16\n        try:\n            model = AutoModelForCausalLM.from_pretrained(args.model_path, config=config, device_map={\"\": device}, torch_dtype=dtype)\n        except:\n            model = AutoModelForCausalLM.from_pretrained(args.model_path, config=config, device_map={\"\": device}, torch_dtype=torch.float32)\n            model = 
model.half()\n        \n        if args.peft is not None:\n            if args.peft == \"lora\":\n                model.enable_input_require_grads()\n                if args.peft_path is not None:\n                    if args.do_train:\n                        # Load LoRA weights from peft_path into a freshly built PEFT model\n                        # so training can resume with this run's LoRA config.\n                        _model = PeftModel.from_pretrained(model, args.peft_path)\n                        state_dict = dict(_model.state_dict())\n                        peft_config = LoraConfig(\n                            task_type=TaskType.CAUSAL_LM, inference_mode=(not args.do_train), r=args.peft_lora_r, lora_alpha=args.peft_lora_alpha, lora_dropout=args.peft_lora_dropout\n                        )\n                        model = get_peft_model(model, peft_config)\n                        model.load_state_dict(state_dict)\n                        \n                        del _model\n                        del state_dict\n                    else:\n                        model = PeftModel.from_pretrained(model, args.peft_path)\n                else:\n                    peft_config = LoraConfig(\n                        task_type=TaskType.CAUSAL_LM, inference_mode=(not args.do_train), r=args.peft_lora_r, lora_alpha=args.peft_lora_alpha, lora_dropout=args.peft_lora_dropout\n                    )\n                    model = get_peft_model(model, peft_config)\n                model.print_trainable_parameters()\n            else:\n                raise NotImplementedError\n        else:\n            if dist.get_rank() == 0:\n                print(' > number of parameters: {}'.format(\n                    sum([p.nelement() for p in model.parameters()])), flush=True)\n        # NOTE: no DDP wrapper needed; DeepSpeed handles gradient synchronization.\n    if args.gradient_checkpointing:\n        model.gradient_checkpointing_enable()\n    \n    ed_time = time.time()\n    \n    print_rank(f\"Model load time: {ed_time - st_time:.2f}s\")\n    \n    return model\n\n\ndef get_optimizer_params(args, model: nn.Module):\n    # adapted from https://github.com/facebookresearch/SpanBERT/blob/0670d8b6a38f6714b85ea7a033f16bd8cc162676/code/run_tacred.py\n    # Biases and layer-norm weights get no weight decay.\n    param_optimizer = list(model.named_parameters())\n    no_decay = ['bias', 'ln_f.weight', 'ln_1.weight', 'ln_2.weight', 'ln_cross_attn']\n    optimizer_grouped_parameters = [\n        {'params': [p for n, p in param_optimizer\n                    if not any(nd in n for nd in no_decay)]},\n        {'params': [p for n, p in param_optimizer\n                    if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}\n    ]\n\n    return optimizer_grouped_parameters\n\n\ndef get_optimizer_params_peft(args, model: nn.Module):\n    # Only the trainable (PEFT/LoRA) parameters are passed to the optimizer.\n    param_optimizer = list(model.named_parameters())\n    optimizer_grouped_parameters = [\n        {'params': [p for n, p in param_optimizer if p.requires_grad]},\n    ]\n\n    return optimizer_grouped_parameters\n\n\ndef get_tokenizer(args):\n    tokenizer = AutoTokenizer.from_pretrained(args.model_path)\n    if args.model_type in [\"gpt2\", \"opt\", \"llama\", \"gptj\", \"llama2\", \"mistral\"]:\n        tokenizer.pad_token_id = tokenizer.eos_token_id\n    elif args.model_type == \"qwen\":\n        # 151643 is Qwen's <|endoftext|>; use it as both eos and pad.\n        tokenizer.eos_token_id = 151643\n        tokenizer.pad_token_id = tokenizer.eos_token_id\n    \n    return tokenizer\n\n\ndef load_parallel(model, load_dir):\n    mp_rank = mpu.get_model_parallel_rank()\n    assert mpu.get_model_parallel_world_size() != 1\n    checkpoint_name = os.path.join(load_dir, f\"mp{mpu.get_model_parallel_world_size()}\", f\"pytorch_model_{mp_rank}.bin\")\n    assert os.path.exists(checkpoint_name), f\"{checkpoint_name} does not exist.\"\n    model = load_checkpoint_and_dispatch(model=model, checkpoint=checkpoint_name, device_map={\"\": torch.cuda.current_device()}, dtype=torch.float16)\n    dist.barrier()\n    print(f\"Rank {get_rank()}: 
{checkpoint_name} loaded.\")\n\n\ndef save_parallel(model, save_dir):\n    mp_rank = mpu.get_model_parallel_rank()\n    os.makedirs(os.path.join(save_dir, f\"mp{mpu.get_model_parallel_world_size()}\"), exist_ok=True)\n    checkpoint_name = os.path.join(save_dir, f\"mp{mpu.get_model_parallel_world_size()}\", f\"pytorch_model_{mp_rank}.bin\")\n    torch.save(model.state_dict(), checkpoint_name)\n    print(f\"Rank {get_rank()}: {checkpoint_name} saved.\")\n"
  }
]