[
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing Guide\n\nThank you for your interest in the project! We welcome all forms of contributions, including but not limited to:\n\n- Bug reports\n- Feature suggestions\n- Documentation improvements\n- Code fixes\n- New features\n\n## Development Process\n\n1. Fork the repository\n2. Create your feature branch (`git checkout -b feature/AmazingFeature`)\n3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)\n4. Push to the branch (`git push origin feature/AmazingFeature`)\n5. Open a Pull Request\n\n## Developer Certificate of Origin\n\nVersion 1.1\n\nCopyright (C) 2004, 2006 The Linux Foundation and its contributors.\n\nEveryone is permitted to copy and distribute verbatim copies of this\nlicense document, but changing it is not allowed.\n\n### Developer's Certificate of Origin 1.1\n\nBy making a contribution to this project, I certify that:\n\n(a) The contribution was created in whole or in part by me and I\n    have the right to submit it under the open source license\n    indicated in the file; or\n\n(b) The contribution is based upon previous work that, to the best\n    of my knowledge, is covered under an appropriate open source\n    license and I have the right under that license to submit that\n    work with modifications, whether created in whole or in part\n    by me, under the same open source license (unless I am\n    permitted to submit under a different license), as indicated\n    in the file; or\n\n(c) The contribution was provided directly to me by some other\n    person who certified (a), (b) or (c) and I have not modified\n    it.\n\n(d) I understand and agree that this project and the contribution\n    are public and that a record of the contribution (including all\n    personal information I submit with it, including my sign-off) is\n    maintained indefinitely and may be redistributed consistent with\n    this project or the open source license(s) involved.\n\n## Code Style\n\nPlease ensure your code follows the project's code style guidelines. We use the following tools to maintain code quality:\n\n- Code formatting tools\n- Code linting tools\n- Unit tests\n\n## Submitting Pull Requests\n\nBefore submitting a Pull Request, please ensure:\n\n1. Your code passes all tests\n2. You have updated relevant documentation\n3. Your commit messages are clear and descriptive\n4. Your code follows the project's code style guidelines\n\n## Issue Reporting\n\nIf you find any issues or have suggestions, please submit them through GitHub Issues. Before submitting an issue, please ensure:\n\n1. The issue hasn't been reported already\n2. You have provided sufficient information to reproduce the issue\n3. You have attempted to resolve the issue yourself\n\nThank you for contributing! "
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2025 Hanrong Ye\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\" width=\"100%\">\n<img src=\"assets/logo.png\" alt=\"Stanford-Alpaca\" style=\"width: 70%; min-width: 300px; display: block; margin: auto;\">\n</p>\n\n# <span style=\"background: linear-gradient(45deg, #667eea 0%, #764ba2 25%, #f093fb 50%, #f5576c 75%, #4facfe 100%); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; font-weight: bold; font-size: 1.1em;\">**OmniVinci: Enhancing Architecture and Data for Omni-Modal Understanding LLM (ICLR 2026)**</span> <br />\n\n[![Paper](https://img.shields.io/badge/ArXiv-Paper-brown)](https://arxiv.org/abs/2510.15870)\n[![Code](https://img.shields.io/badge/GitHub-Link-blue)](https://github.com/NVlabs/OmniVinci)\n[![Model](https://img.shields.io/badge/HuggingFace-Model-yellow)](https://huggingface.co/nvidia/omnivinci)\n[![Website](https://img.shields.io/badge/Web-Page-orange)](https://nvlabs.github.io/OmniVinci)\n[![Video](https://img.shields.io/badge/Video-Demo-white)](https://youtu.be/w84pPuGFH4o?si=OUFhhiXeQbzil7gN)\n\n\n<div align=\"center\">\n\n</div>\n\n[Hanrong Ye*†](https://sites.google.com/site/yhrspace/home), [Chao-Han Huck Yang†](https://huckiyang.github.io/), [Arushi Goel†](https://scholar.google.com/citations?user=tj08PZcAAAAJ&hl=en), [Wei Huang†](https://aaron-weihuang.com/), [Ligeng Zhu†](https://lzhu.me/), [Yuanhang Su†](https://scholar.google.com/citations?user=n335GwUAAAAJ&hl=en), [Sean Lin†](https://www.nvidia.com/en-us/), [An-Chieh Cheng†](https://www.anjiecheng.me/), [Zhen Wan†](https://scholar.google.com/citations?user=OH_1qwMAAAAJ&hl=en), [Jinchuan Tian†](https://jctian98.github.io/), [Yuming Lou†](https://github.com/Louym), [Dong Yang†](https://scholar.google.com/citations?user=PHvliUgAAAAJ&hl=en), [Zhijian Liu](https://zhijianliu.com/), [Yukang Chen](https://yukangchen.com/), [Ambrish Dantrey](https://www.nvidia.com/en-us/), [Ehsan Jahangiri](https://www.nvidia.com/en-us/), [Sreyan Ghosh](https://sreyan88.github.io/), [Daguang Xu](https://scholar.google.com/citations?user=r_VHYHAAAAAJ&hl=en), [Ehsan Hosseini Asl](https://scholar.google.com/citations?user=I9w3ON4AAAAJ&hl=en), [Danial Mohseni Taheri](https://danialtaheri.github.io/), [Vidya Murali](https://www.linkedin.com/in/vidya-n-murali/), [Sifei Liu](https://sifeiliu.net/), [Yao Lu](https://www.linkedin.com/in/yao-jason-lu-a0291938/), [Oluwatobi Olabiyi](https://www.linkedin.com/in/oluwatobi-olabiyi-08955123/), [Yu-Chiang Frank Wang](https://scholar.google.com/citations?user=HSGvdtoAAAAJ&hl=en), [Rafael Valle](https://rafaelvalle.github.io/), [Bryan Catanzaro](https://www.linkedin.com/in/bryancatanzaro/), [Andrew Tao](https://scholar.google.com/citations?user=Wel9l1wAAAAJ&hl=en), [Song Han](https://hanlab.mit.edu/songhan), [Jan Kautz](https://jankautz.com/), [Hongxu Yin*^†](https://hongxu-yin.github.io/), [Pavlo Molchanov^](https://www.pmolchanov.com/)  \n\n<span style=\"color: rgb(133, 184, 55);\">**NVIDIA**</span>  \n*Corresponding Author | †Core Contribution | ^Equal Advisory\n\n<p align=\"center\" width=\"100%\">\n<img src=\"assets/performance.png\" alt=\"Stanford-Alpaca\" style=\"width: 100%; min-width: 300px; display: block; margin: auto;\">\n</p>\n\nAdvancing machine intelligence requires developing the ability to perceive across multiple modalities, much as humans sense the world.\nWe introduce OmniVinci, an initiative to build a strong, open-source, omni-modal LLM.\nWe carefully study the design choices across model architecture and data curation.\nFor model architecture, we present three key 
\n| Model        | Omni - DailyOmni | Omni - WorldSense | Audio - MMAU | Audio - MMAR | Vision - MVBench | Vision - Video-MME (w/o sub) |\n|--------------|------------------|-------------------|--------------|--------------|------------------|------------------------------|\n| Qwen2.5-Omni | 47.5             | 45.4              | 71.0         | 56.7         | 70.3             | 64.3                         |\n| **Ours**     | **66.5**         | **48.2**          | **71.6**     | **58.4**     | **70.6**         | **68.2**                     |\n\n## News\n- [x] **[2025 Oct 19] OmniVinci-9B** is released! It supports joint understanding of **vision, audio, and text**.\n\n## Model Usage\n<p align=\"center\" width=\"100%\">\n<img src=\"assets/arch.png\" alt=\"OmniVinci architecture\" style=\"width: 100%; min-width: 300px; display: block; margin: auto;\">\n</p>\n\n### Environment Setup\n\n1. Download the Hugging Face repo and `cd` into it:\n```\nhuggingface-cli download nvidia/omnivinci --local-dir ./omnivinci --local-dir-use-symlinks False\ncd ./omnivinci\n```\n\n2. Install the Python environment (based on the NVILA codebase):\n```\nbash ./environment_setup.sh omnivinci\n```\n
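\nTo sanity-check the installation (an optional step we suggest; expected versions follow the pins in `pyproject.toml`):\n\n```python\nimport torch\nimport transformers\n\n# Expect torch 2.3.0 with CUDA available, and transformers 4.46.0.\nprint(torch.__version__, torch.cuda.is_available())\nprint(transformers.__version__)\n```\n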
\n### 🤗 Transformers Usage\n\n#### Video (with audio) Inference Example:\n```python\nfrom transformers import AutoProcessor, AutoModel, AutoConfig\nimport torch\n\n# default: Load the model on the available device(s)\nmodel_path = \"./\"\nvideo_path = \"xxx.mp4\"\ngeneration_kwargs = {\"max_new_tokens\": 1024, \"max_length\": 99999999}\nload_audio_in_video = True\nnum_video_frames = 128\naudio_length = \"max_3600\"\n\nconfig = AutoConfig.from_pretrained(model_path, trust_remote_code=True)\n\nmodel = AutoModel.from_pretrained(\n    model_path,\n    trust_remote_code=True,\n    torch_dtype=torch.float16,\n    device_map=\"auto\"\n)\n\nprocessor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)\ngeneration_config = model.default_generation_config\ngeneration_config.update(**generation_kwargs)\n\nmodel.config.load_audio_in_video = load_audio_in_video\nprocessor.config.load_audio_in_video = load_audio_in_video\nif num_video_frames > 0:\n    model.config.num_video_frames = num_video_frames\n    processor.config.num_video_frames = num_video_frames\nif audio_length != -1:\n    model.config.audio_chunk_length = audio_length\n    processor.config.audio_chunk_length = audio_length\n\nconversation = [{\n    \"role\": \"user\",\n    \"content\": [\n        {\"type\": \"video\", \"video\": video_path},\n        {\"type\": \"text\", \"text\": \"Assess the video, followed by a detailed description of its video and audio contents.\"}\n    ]\n}]\ntext = processor.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)\n\ninputs = processor([text])\n\noutput_ids = model.generate(\n    input_ids=inputs.input_ids,\n    media=getattr(inputs, 'media', None),\n    media_config=getattr(inputs, 'media_config', None),\n    generation_config=generation_config,\n)\nprint(processor.tokenizer.batch_decode(output_ids, skip_special_tokens=True))\n```\n\n- **For audio and image inference examples, please refer to ```example_mini_audio.py``` and ```example_mini_image.py```.**\n
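\nThe same chat-template pattern covers the other modalities; only the media entry of the conversation changes. For example, an audio-only request (mirroring `example_mini_audio.py`; `xxx.wav` is a placeholder path) looks like this:\n\n```python\nconversation = [{\n    \"role\": \"user\",\n    \"content\": [\n        {\"type\": \"audio\", \"audio\": \"xxx.wav\"},\n        {\"type\": \"text\", \"text\": \"Transcribe the whole speech.\"}\n    ]\n}]\n```\n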
\n## Examples\nhttps://github.com/user-attachments/assets/276d2735-2cdc-47c8-8fdb-787f77c5dddb\n\n\n\n<span style=\"color: rgb(133, 184, 55);\">**Question**</span>: \"What are they talking about in detail?\"  \n<span style=\"color: rgb(133, 184, 55);\">**Answer**</span>: The video features Jensen Huang with gray hair and glasses, dressed in a black leather jacket, speaking into a microphone in a modern office setting. The background includes a glass wall and a green panel, with a blurred view of desks and chairs. Jensen Huang discusses the evolution of artificial intelligence, focusing on the impact of deep learning and the development of AI frameworks. He emphasizes the importance of data, computing power, and software methodologies in driving AI advancements. The conversation highlights the transformative potential of AI, particularly in perception and generative capabilities, and the need for a new approach to software development. Jensen Huang also mentions the creation of a new AI supercomputer, DGX-1, and its impact on the industry. The video concludes with Jensen Huang discussing the second phase of AI, generative AI, and its potential to revolutionize the way we understand and create information.\n\n\nhttps://github.com/user-attachments/assets/19a03604-f6f8-4407-a93d-c078b621e940\n\n\n<span style=\"color: rgb(133, 184, 55);\">**Question**</span>: \"Describe the video in detail.\"  \n<span style=\"color: rgb(133, 184, 55);\">**Answer**</span>: The video begins with Jensen Huang in a modern, well-lit room with large windows and a view of greenery outside. He is dressed in a black jacket and white pants, seated at a table, writing a message on a black card with a gold pen. The message reads, 'To Robot, Enjoy Your New Brain!' followed by a signature. He then places the card on the table and begins to open a large black gift box with a gold ribbon and bow. The scene transitions to a close-up of the gift box on the table, with the person's hand visible. The focus then shifts to a robot wearing a white hard hat with the 'NVIDIA' logo, standing in a workshop or industrial setting. The robot holds the same black gift box with the gold ribbon and bow, and it opens the box to reveal the black card with the message. The robot examines the card closely. The narrative continues with the robot, still in the workshop setting, holding the black gift box. The robot opens the box, revealing a sleek, white device with a black screen, nestled in crumpled black paper. The robot examines the device closely, then places it back into the box and closes it. The scene transitions to a different setting, where the robot is now in a modern office environment with green walls and multiple computer monitors. The robot stands behind the closed gift box, gesturing with its hands as if explaining or presenting something. The video wraps up with the robot in the modern office environment, gesturing with its hands. The scene transitions to a close-up of the robot's face, showing its detailed features and expressive eyes.\n\n\n\n## Citation\nPlease consider citing our paper and this framework if they are helpful in your research.\n\n```bibtex\n@article{ye2025omnivinci,\n  title={OmniVinci: Enhancing Architecture and Data for Omni-Modal Understanding LLM},\n  author={Ye, Hanrong and Yang, Chao-Han Huck and Goel, Arushi and Huang, Wei and Zhu, Ligeng and Su, Yuanhang and Lin, Sean and Cheng, An-Chieh and Wan, Zhen and Tian, Jinchuan and others},\n  journal={arXiv preprint arXiv:2510.15870},\n  year={2025}\n}\n```\n"
  },
  {
    "path": "environment_setup.sh",
    "content": "#!/usr/bin/env bash\nset -e\n\nCONDA_ENV=${1:-\"\"}\nif [ -n \"$CONDA_ENV\" ]; then\n    # This is required to activate conda environment\n    eval \"$(conda shell.bash hook)\"\n\n    conda create -n $CONDA_ENV python=3.10.14 -y\n    conda activate $CONDA_ENV\n    # This is optional if you prefer to use built-in nvcc\n    conda install -c nvidia cuda-toolkit=12.2 -y\nelse\n    echo \"Skipping conda environment creation. Make sure you have the correct environment activated.\"\nfi\n\n# Using uv to speedup installations\npip install uv\nalias uvp=\"uv pip\"\n\necho \"[INFO] Using python $(which python)\"\necho \"[INFO] Using pip $(which pip)\"\necho \"[INFO] Using uv $(which uv)\"\n\n# This is required to enable PEP 660 support\nuv pip install --upgrade pip setuptools\n\n# Install FlashAttention2\nuv pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu122torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl\n\n# Install VILA\nuv pip install -e \".[train,eval]\"\n\n# numpy introduce a lot dependencies issues, separate from pyproject.yaml\npip install numpy==1.26.4\n\n# audio\nuv pip install soundfile librosa openai-whisper ftfy\nconda install -c conda-forge ffmpeg\nuv pip install jiwer\n\n# Downgrade protobuf to 3.20 for backward compatibility\nuv pip install protobuf==3.20.*\n\n# Replace transformers and deepspeed files\nsite_pkg_path=$(python -c 'import site; print(site.getsitepackages()[0])')\ncp -rv ./transformers/modeling_utils.py $site_pkg_path/transformers/modeling_utils.py # for using qwen 2.5 omni checkpoint\n\n# for benchmark adoption\nuv pip install faiss-gpu-cu12\n\n# Quantization requires the newest triton version, and introduce dependency issue\nuv pip install triton==3.1.0 # we don't need this version if we do not use FP8LinearQwen2Config, QLlavaLlamaConfig, etc. It is not compatible with mamba-ssm.\n\nuv pip install kaldiio\n\n# for rotary embedding\nuv pip install beartype\n\nuv pip install pydantic==1.10.22\n"
  },
  {
    "path": "example_infer.py",
    "content": "# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.\n# SPDX-License-Identifier: Apache-2.0\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom transformers import AutoProcessor, AutoModel, AutoConfig, GenerationConfig\nimport torch\nimport os\nimport time\nfrom pathlib import Path\nfrom typing import List, Dict, Any, Optional, Union\nimport logging\nimport sys\nos.environ[\"HF_HUB_OFFLINE\"] = \"1\"  # Use local cache for models\n\n# Set up logging\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')\nlogger = logging.getLogger(__name__)\n\ndef add_to_sys_path_direct(model_path):\n    \"\"\"Add model path directly to sys.path\"\"\"\n    if model_path not in sys.path:\n        sys.path.insert(0, model_path)  # Insert at beginning for priority\n        print(f\"✓ Added to sys.path: {model_path}\")\n    else:\n        print(f\"Already in sys.path: {model_path}\")\n\nclass NVOmniVideoInference:\n    \"\"\"A class to handle NVOmni video model inference with improved error handling and flexibility.\"\"\"\n    \n    def __init__(self, model_path: str, torch_dtype=\"torch.float16\", device_map=\"auto\"):\n        \"\"\"\n        Initialize the NVOmni model for video inference.\n        \n        Args:\n            model_path (str): Path to the model directory\n            torch_dtype: PyTorch data type for model weights\n            device_map (str): Device mapping strategy for model loading\n        \"\"\"\n        self.model_path = model_path\n        self.torch_dtype = torch_dtype\n        self.device_map = device_map\n        self.model = None\n        self.processor = None\n        self.config = None\n        self.device = None\n        \n        self.load_model()\n        \n    def validate_paths(self, model_path: str, video_path: str = None) -> bool:\n        \"\"\"Validate that required paths exist.\"\"\"\n        if not Path(model_path).exists():\n            logger.error(f\"Model path does not exist: {model_path}\")\n            return False\n            \n        if video_path and not Path(video_path).exists():\n            logger.error(f\"Video path does not exist: {video_path}\")\n            return False\n            \n        return True\n    \n    def load_model(self) -> bool:\n        \"\"\"Load the model, processor, and config with error handling.\"\"\"\n        if not self.validate_paths(self.model_path):\n            return False\n            \n        if True:\n            logger.info(\"Loading model configuration...\")\n            self.config = AutoConfig.from_pretrained(self.model_path, trust_remote_code=True)\n            \n            logger.info(\"Loading model...\")\n            start_time = time.time()\n            self.model = AutoModel.from_pretrained(\n                self.model_path,\n                trust_remote_code=True,\n                torch_dtype=self.torch_dtype,\n                device_map=self.device_map,\n                low_cpu_mem_usage=True  # 
More memory efficient loading\n            )#.to(eval(self.torch_dtype))\n            load_time = time.time() - start_time\n            logger.info(f\"Model loaded in {load_time:.2f} seconds\")\n            \n            logger.info(\"Loading processor...\")\n            self.processor = AutoProcessor.from_pretrained(self.model_path, trust_remote_code=True)\n\n            # Set device for single-device setups\n            if hasattr(self.model, 'device'):\n                self.device = self.model.device\n            else:\n                self.device = next(self.model.parameters()).device if self.model.parameters() else torch.device('cpu')\n            \n            logger.info(f\"Model successfully loaded on device: {self.device}\")\n            self._print_model_info()\n            return True\n            \n    def _print_model_info(self):\n        \"\"\"Print useful information about the loaded model.\"\"\"\n        logger.info(\"=\" * 50)\n        logger.info(\"MODEL INFORMATION\")\n        logger.info(\"=\" * 50)\n        \n        if self.config:\n            logger.info(f\"Model type: {getattr(self.config, 'model_type', 'Unknown')}\")\n            logger.info(f\"Hidden size: {getattr(self.config, 'hidden_size', 'Unknown')}\")\n            \n        if self.model and torch.cuda.is_available():\n            logger.info(f\"GPU memory allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GB\")\n            logger.info(f\"GPU memory reserved: {torch.cuda.memory_reserved() / 1024**3:.2f} GB\")\n    \n    def create_conversation(self, video_path: str, text_prompt: str) -> List[Dict[str, Any]]:\n        \"\"\"\n        Create a conversation format for the model.\n        \n        Args:\n            video_path (str): Path to the video file\n            text_prompt (str): Text prompt for the model\n            \n        Returns:\n            List[Dict]: Conversation in the expected format\n        \"\"\"\n        return [{\n            \"role\": \"user\",\n            \"content\": [\n                {\"type\": \"video\", \"video\": video_path},\n                {\"type\": \"text\", \"text\": text_prompt}\n            ]\n        }]\n\n    @torch.inference_mode()    \n    def generate_response(\n        self, \n        video_path: str, \n        text_prompt: str,\n        max_new_tokens: int = 256,\n        temperature: float = None,\n        top_p: float = None,\n        do_sample: bool = None,\n        num_video_frames: int = -1,\n        load_audio_in_video: bool = True,\n        audio_length: Union[int, str] = \"max_3600\",\n    ) -> Optional[str]:\n        \"\"\"\n        Generate a response from the model given a video and text prompt.\n        \n        Args:\n            video_path (str): Path to the video file\n            text_prompt (str): Text prompt for the model\n            max_new_tokens (int): Maximum number of new tokens to generate\n            temperature (float): Sampling temperature\n            top_p (float): Top-p sampling parameter\n            do_sample (bool): Whether to use sampling\n            custom_generation_config (GenerationConfig): Custom generation configuration\n            \n        Returns:\n            Optional[str]: Generated response or None if failed\n        \"\"\"\n        if not self.model or not self.processor:\n            logger.error(\"Model or processor not loaded. 
Please initialize the model first.\")\n            return None\n            \n        if not self.validate_paths(self.model_path, video_path):\n            return None\n        \n        # try:\n        if True:\n        \n            logger.info(f\"Processing video: {video_path}\")\n            logger.info(f\"Text prompt: {text_prompt}\")\n            \n            # Create conversation\n            conversation = self.create_conversation(video_path, text_prompt)\n            \n            # Apply chat template\n            text = self.processor.apply_chat_template(\n                conversation, \n                tokenize=False, \n                add_generation_prompt=True\n            )\n            logger.info(f\"Chat template applied\")\n\n            # set model params\n            self.model.config.load_audio_in_video = load_audio_in_video\n            self.processor.config.load_audio_in_video = load_audio_in_video\n            if num_video_frames > 0:\n                self.model.config.num_video_frames = num_video_frames\n                self.processor.config.num_video_frames = num_video_frames\n            if audio_length != -1:\n                self.model.config.audio_chunk_length = audio_length\n                self.processor.config.audio_chunk_length = audio_length\n            logger.info(f\"Model config - load_audio_in_video: {self.model.config.load_audio_in_video}, num_video_frames: {self.model.config.num_video_frames}, audio_chunk_length: {self.model.config.audio_chunk_length}\")\n            \n            # Process inputs\n            start_time = time.time()\n            inputs = self.processor([text])\n            \n            # Move inputs to the correct device if needed\n            if hasattr(inputs, 'input_ids') and inputs.input_ids is not None:\n                inputs.input_ids = inputs.input_ids.to(self.device)\n            \n            processing_time = time.time() - start_time\n            logger.info(f\"Input processing completed in {processing_time:.2f} seconds\")\n            \n            logger.info(\"Generating response...\")\n            start_time = time.time()\n\n            generation_kwargs = {\"max_new_tokens\": max_new_tokens, \"max_length\": 99999999}\n            if top_p is not None:\n                generation_kwargs[\"top_p\"] = top_p\n            if do_sample is not None:\n                generation_kwargs[\"do_sample\"] = do_sample\n            if temperature is not None:\n                generation_kwargs[\"temperature\"] = temperature\n\n            generation_config = self.model.default_generation_config\n            generation_config.update(**generation_kwargs)\n\n            logger.info(f\"Generation config: {generation_config.to_dict()}\")\n\n\n            with torch.no_grad():\n                output_ids = self.model.generate(\n                    input_ids=inputs.input_ids,\n                    media=getattr(inputs, 'media', None),\n                    media_config=getattr(inputs, 'media_config', None),\n                    generation_config=generation_config,\n                )\n            \n            generation_time = time.time() - start_time\n            logger.info(f\"Generation completed in {generation_time:.2f} seconds\")\n            \n            # Decode response\n            response = self.processor.tokenizer.batch_decode(\n                output_ids, \n                skip_special_tokens=True\n            )[0]\n\n            return response\n            \n    def batch_generate(\n        self, \n        
video_text_pairs: List[tuple], \n        **generation_kwargs\n    ) -> List[Optional[str]]:\n        \"\"\"\n        Generate responses for multiple video-text pairs.\n        \n        Args:\n            video_text_pairs (List[tuple]): List of (video_path, text_prompt) tuples\n            **generation_kwargs: Arguments passed to generate_response\n            \n        Returns:\n            List[Optional[str]]: List of generated responses\n        \"\"\"\n        responses = []\n        for i, (video_path, text_prompt) in enumerate(video_text_pairs):\n            logger.info(f\"Processing batch item {i+1}/{len(video_text_pairs)}\")\n            response = self.generate_response(video_path, text_prompt, **generation_kwargs)\n            responses.append(response)\n            \n            # Clear cache between generations to manage memory\n            if torch.cuda.is_available():\n                torch.cuda.empty_cache()\n                \n        return responses\n\ndef main():\n    \"\"\"Main function demonstrating usage of the NVOmni model.\"\"\"\n    \n    # Configuration\n    MODEL_PATH = \"./\"\n    VIDEO_PATH = \"xxx.mp4\"\n    TEXT_PROMPT = \"Assess the video, followed by a detailed description of it's video and audio contents.\"\n\n    num_video_frames=128\n    audio_length=\"max_3600\"\n    load_audio_in_video=True\n\n    add_to_sys_path_direct(MODEL_PATH)\n    \n    # Initialize the inference class\n    logger.info(\"Initializing NVOmni Video Inference...\")\n    inferencer = NVOmniVideoInference(MODEL_PATH, torch_dtype=\"torch.float16\")\n    \n    if inferencer.model is None:\n        logger.error(\"Failed to initialize model. Exiting.\")\n        return\n    \n    # Generate response\n    logger.info(\"Starting inference...\")\n    response = inferencer.generate_response(\n        video_path=VIDEO_PATH,\n        text_prompt=TEXT_PROMPT,\n        num_video_frames=num_video_frames,\n        load_audio_in_video=load_audio_in_video,\n        audio_length=audio_length,\n        max_new_tokens=1024,\n    )\n    \n    if response:\n        print(\"\\n\" + \"=\"*60)\n        print(\"GENERATED RESPONSE\")\n        print(\"=\"*60)\n        print(response)\n        print(\"=\"*60)\n    else:\n        logger.error(\"Failed to generate response\")\n    \n    # Example of batch processing\n    if False:\n        logger.info(\"\\nExample: Batch processing\")\n        batch_pairs = [\n            (VIDEO_PATH, \"What is happening in this video?\"),\n            (VIDEO_PATH, \"Describe the audio content of this video.\"),\n        ]\n        \n        batch_responses = inferencer.batch_generate(batch_pairs, max_new_tokens=128)\n        \n        for i, (pair, response) in enumerate(zip(batch_pairs, batch_responses)):\n            print(f\"\\n--- Batch Response {i+1} ---\")\n            print(f\"Prompt: {pair[1]}\")\n            print(f\"Response: {response}\")\n\nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "example_mini_audio.py",
    "content": "# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.\n# SPDX-License-Identifier: Apache-2.0\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nExample script for audio transcription using the model.\n\nThis script demonstrates how to:\n1. Load the model and processor\n2. Configure audio processing parameters\n3. Process audio input\n4. Generate transcription output\n\nUsage:\n    python example_mini_audio.py --model_path <path_to_model> --audio_path <path_to_audio>\n\"\"\"\n\nfrom transformers import AutoProcessor, AutoModel, AutoConfig, AutoModelForCausalLM\nimport torch\nimport os\nimport argparse\n\n# Configuration\nparser = argparse.ArgumentParser(description=\"Audio transcription example\")\nparser.add_argument(\"--model_path\", type=str, default=\"./\", help=\"Path to the model\")\nparser.add_argument(\"--audio_path\", type=str, required=True, help=\"Path to the audio file\")\nparser.add_argument(\"--max_new_tokens\", type=int, default=1024, help=\"Maximum number of tokens to generate\")\nparser.add_argument(\"--num_video_frames\", type=int, default=128, help=\"Number of video frames to process\")\nparser.add_argument(\"--audio_length\", type=str, default=\"max_3600\", help=\"Maximum audio length\")\n\nargs = parser.parse_args()\n\nmodel_path = args.model_path\naudio_path = args.audio_path\ngeneration_kwargs = {\"max_new_tokens\": args.max_new_tokens, \"max_length\": 99999999}\nload_audio_in_video = True\nnum_video_frames = args.num_video_frames\naudio_length = args.audio_length\n\nconfig = AutoConfig.from_pretrained(model_path, trust_remote_code=True)\n\nmodel = AutoModel.from_pretrained(model_path,\n                                  trust_remote_code=True,\n                                  torch_dtype=\"torch.float16\",\n                                  device_map=\"auto\")\n\nprocessor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)\ngeneration_config = model.default_generation_config\ngeneration_config.update(**generation_kwargs)\n\nmodel.config.load_audio_in_video = load_audio_in_video\nprocessor.config.load_audio_in_video = load_audio_in_video\nif num_video_frames > 0:\n    model.config.num_video_frames = num_video_frames\n    processor.config.num_video_frames = num_video_frames\nif audio_length != -1:\n    model.config.audio_chunk_length = audio_length\n    processor.config.audio_chunk_length = audio_length\n\n\nconversation = [{\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"audio\", \"audio\": audio_path},\n            {\"type\": \"text\", \"text\": \"Transcribe the whole speech.\"}\n        ]\n}]\ntext = processor.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)\n\ninputs = processor([text])\n\noutput_ids = model.generate(\n    input_ids=inputs.input_ids,\n    media=getattr(inputs, 'media', None),\n    media_config=getattr(inputs, 'media_config', None),\n    
generation_config=generation_config,\n)\nprint(processor.tokenizer.batch_decode(output_ids, skip_special_tokens=True))"
  },
  {
    "path": "example_mini_image.py",
    "content": "# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.\n# SPDX-License-Identifier: Apache-2.0\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nExample script for image understanding using the model.\n\nThis script demonstrates how to:\n1. Load the model and processor\n2. Process image input\n3. Generate description output\n\nUsage:\n    python example_mini_image.py --model_path <path_to_model> --image_path <path_to_image>\n\"\"\"\n\nfrom transformers import AutoProcessor, AutoModel, AutoConfig, AutoModelForCausalLM\nimport torch\nimport os\nimport argparse\n\n# Configuration\nparser = argparse.ArgumentParser(description=\"Image understanding example\")\nparser.add_argument(\"--model_path\", type=str, default=\"./\", help=\"Path to the model\")\nparser.add_argument(\"--image_path\", type=str, required=True, help=\"Path to the image file\")\nparser.add_argument(\"--max_new_tokens\", type=int, default=1024, help=\"Maximum number of tokens to generate\")\nparser.add_argument(\"--prompt\", type=str, default=\"Describe the image in detail.\", help=\"Text prompt for the model\")\n\nargs = parser.parse_args()\n\nmodel_path = args.model_path\nimage_path = args.image_path\ngeneration_kwargs = {\"max_new_tokens\": args.max_new_tokens, \"max_length\": 99999999}\n\nconfig = AutoConfig.from_pretrained(model_path, trust_remote_code=True)\n\nmodel = AutoModel.from_pretrained(\n    model_path,\n    trust_remote_code=True,\n    torch_dtype=torch.float16,\n    device_map=\"auto\"\n)\n\nprocessor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)\ngeneration_config = model.default_generation_config\ngeneration_config.update(**generation_kwargs)\n\nconversation = [{\n    \"role\": \"user\",\n    \"content\": [\n        {\"type\": \"image\", \"image\": image_path},\n        {\"type\": \"text\", \"text\": args.prompt}\n    ]\n}]\ntext = processor.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)\n\ninputs = processor([text])\n\noutput_ids = model.generate(\n    input_ids=inputs.input_ids,\n    media=getattr(inputs, 'media', None),\n    media_config=getattr(inputs, 'media_config', None),\n    generation_config=generation_config,\n)\nprint(processor.tokenizer.batch_decode(output_ids, skip_special_tokens=True))\n"
  },
  {
    "path": "example_mini_video.py",
    "content": "# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.\n# SPDX-License-Identifier: Apache-2.0\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nExample script for video understanding using the model.\n\nThis script demonstrates how to:\n1. Load the model and processor\n2. Configure video and audio processing parameters\n3. Process video input with optional audio\n4. Generate description output\n\nUsage:\n    python example_mini_video.py --model_path <path_to_model> --video_path <path_to_video>\n\"\"\"\n\nfrom transformers import AutoProcessor, AutoModel, AutoConfig, AutoModelForCausalLM\nimport torch\nimport os\nimport argparse\n\n# Configuration\nparser = argparse.ArgumentParser(description=\"Video understanding example\")\nparser.add_argument(\"--model_path\", type=str, default=\"./\", help=\"Path to the model\")\nparser.add_argument(\"--video_path\", type=str, required=True, help=\"Path to the video file\")\nparser.add_argument(\"--max_new_tokens\", type=int, default=1024, help=\"Maximum number of tokens to generate\")\nparser.add_argument(\"--num_video_frames\", type=int, default=128, help=\"Number of video frames to process\")\nparser.add_argument(\"--audio_length\", type=str, default=\"max_3600\", help=\"Maximum audio length\")\nparser.add_argument(\"--prompt\", type=str, default=\"What are they talking about in detail?\", help=\"Text prompt for the model\")\nparser.add_argument(\"--load_audio\", action=\"store_true\", default=True, help=\"Load audio from video\")\n\nargs = parser.parse_args()\n\nmodel_path = args.model_path\nvideo_path = args.video_path\ngeneration_kwargs = {\"max_new_tokens\": args.max_new_tokens, \"max_length\": 99999999}\nload_audio_in_video = args.load_audio\nnum_video_frames = args.num_video_frames\naudio_length = args.audio_length\ntext_prompt = args.prompt\n\nassert os.path.exists(video_path), f\"Video path {video_path} does not exist.\"\n\nconfig = AutoConfig.from_pretrained(model_path, trust_remote_code=True)\n\nmodel = AutoModel.from_pretrained(\n    model_path,\n    trust_remote_code=True,\n    torch_dtype=torch.float16,\n    device_map=\"auto\"\n)\n\nprocessor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)\ngeneration_config = model.default_generation_config\ngeneration_config.update(**generation_kwargs)\n\nmodel.config.load_audio_in_video = load_audio_in_video\nprocessor.config.load_audio_in_video = load_audio_in_video\nif num_video_frames > 0:\n    model.config.num_video_frames = num_video_frames\n    processor.config.num_video_frames = num_video_frames\nif audio_length != -1:\n    model.config.audio_chunk_length = audio_length\n    processor.config.audio_chunk_length = audio_length\n\ndef forward_inference(video_path, text_prompt):\n    \"\"\"Run inference on video with text prompt.\"\"\"\n    print(f\"Text prompt: {text_prompt}\")\n    print(f\"Video path: {video_path}\")\n    conversation = [{\n        \"role\": \"user\",\n        \"content\": [\n            
{\"type\": \"video\", \"video\": video_path},\n            {\"type\": \"text\", \"text\": text_prompt}\n        ]\n    }]\n    text = processor.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)\n\n    inputs = processor([text])\n\n    output_ids = model.generate(\n        input_ids=inputs.input_ids,\n        media=getattr(inputs, 'media', None),\n        media_config=getattr(inputs, 'media_config', None),\n        generation_config=generation_config,\n    )\n    print(processor.tokenizer.batch_decode(output_ids, skip_special_tokens=True))\n\nforward_inference(video_path, text_prompt)\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[build-system]\nrequires = [\"setuptools>=61.0\"]\nbuild-backend = \"setuptools.build_meta\"\n\n[project]\nname = \"vila\"\nversion = \"1.5.0\"\ndescription = \"nvOmni\"\nreadme = \"README.md\"\nrequires-python = \">=3.8\"\nclassifiers = [\n    \"Programming Language :: Python :: 3\",\n    \"License :: OSI Approved :: Apache Software License\",\n]\ndependencies = [\n    \"torch==2.3.0\", \"torchvision==0.18.0\",\n    \"transformers==4.46.0\", \"tokenizers>=0.15.2\", \"sentencepiece==0.1.99\", \"shortuuid\",\n    \"accelerate==0.34.2\", \"peft>=0.9.0\", \"bitsandbytes==0.43.2\",\n    \"pydantic<2,>=1\", \"markdown2[all]\", \"numpy==1.26.4\", \"scikit-learn==1.2.2\",\n    \"gradio==3.35.2\", \"gradio_client==0.2.9\",\n    \"requests\", \"httpx\", \"uvicorn\", \"fastapi\", \"fire\", \"seaborn\", \"ring_flash_attn==0.1.1\",\n    \"einops==0.6.1\", \"einops-exts==0.0.4\", \"timm==0.9.12\",\n    \"openpyxl==3.1.2\", \"pytorchvideo==0.1.5\", \"decord==0.6.0\",\n    \"datasets==2.16.1\", \"openai==1.8.0\", \"webdataset==0.2.86\",\n    \"nltk==3.3\", \"pywsd==1.2.4\", \"opencv-python-headless==4.8.0.76\",\n    \"s2wrapper@git+https://github.com/bfshi/scaling_on_scales\",\n    \"tyro\", \"pytest\", \"pre-commit\", \"loguru\", \"hydra-core\", \"xgrammar\"\n]\n\n[project.scripts]\nvila-run = \"llava.cli.run:main\"\nvila-eval = \"llava.cli.eval:main\"\nvila-infer = \"llava.cli.infer:main\"\nvila-upload = \"llava.cli.upload2hf:main\"\n\n[project.optional-dependencies]\ntrain = [\"deepspeed==0.9.5\", \"ninja\", \"wandb\"]\neval = [\"word2number\", \"Levenshtein\", \"nltk\", \"pywsd\"]\n\n[project.urls]\n\"Homepage\" = \"https://hanlab.mit.edu/projects/vila\"\n\"Bug Tracker\" = \"https://github.com/NVlabs/VILA/issues\"\n\n[tool.triton]\ntriton = {version = \"3.0.0.post20240610003544\", file = \"https://aiinfra.pkgs.visualstudio.com/2692857e-05ef-43b4-ba9c-ccf1c22c437c/_packaging/07c94329-d4c3-4ad4-9e6b-f904a60032ec/pypi/download/triton-nightly/3.post20240610003544/triton_nightly-3.0.0.post20240610003544-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl\", sha256 = \"ac2c36a49bf9c2bb780909b38096fb718f17efd78b88a1ca1d649f6d063cdc2c\"}\n\n[tool.black]\nline-length = 120\n\n[tool.isort]\nprofile = \"black\"\nmulti_line_output = 3\ninclude_trailing_comma = true\nforce_grid_wrap = 0\nuse_parentheses = true\nensure_newline_before_comments = true\nline_length = 120\n\n[tool.setuptools.packages.find]\nexclude = [\"assets*\", \"benchmark*\", \"docs\", \"dist*\", \"playground*\", \"scripts*\", \"tests*\"]\n\n[tool.wheel]\nexclude = [\"assets*\", \"benchmark*\", \"docs\", \"dist*\", \"playground*\", \"scripts*\", \"tests*\"]\n"
  }
]