Full Code of NVlabs/OmniVinci for AI

Repository: NVlabs/OmniVinci
Branch: main
Commit: 9307faa70176
Files: 9
Total size: 51.3 KB

Directory structure:
gitextract_43fwejau/

├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── environment_setup.sh
├── example_infer.py
├── example_mini_audio.py
├── example_mini_image.py
├── example_mini_video.py
└── pyproject.toml

================================================
FILE CONTENTS
================================================

================================================
FILE: CONTRIBUTING.md
================================================
# Contributing Guide

Thank you for your interest in the project! We welcome all forms of contributions, including but not limited to:

- Bug reports
- Feature suggestions
- Documentation improvements
- Code fixes
- New features

## Development Process

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

## Developer Certificate of Origin

Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.

### Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.

## Code Style

Please ensure your code follows the project's code style guidelines. We use the following tools to maintain code quality:

- Code formatting tools
- Code linting tools
- Unit tests

## Submitting Pull Requests

Before submitting a Pull Request, please ensure:

1. Your code passes all tests
2. You have updated relevant documentation
3. Your commit messages are clear and descriptive
4. Your code follows the project's code style guidelines

## Issue Reporting

If you find any issues or have suggestions, please submit them through GitHub Issues. Before submitting an issue, please ensure:

1. The issue hasn't been reported already
2. You have provided sufficient information to reproduce the issue
3. You have attempted to resolve the issue yourself

Thank you for contributing! 

================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2025 Hanrong Ye

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
<p align="center" width="100%">
<img src="assets/logo.png" alt="Stanford-Alpaca" style="width: 70%; min-width: 300px; display: block; margin: auto;">
</p>

# <span style="background: linear-gradient(45deg, #667eea 0%, #764ba2 25%, #f093fb 50%, #f5576c 75%, #4facfe 100%); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; font-weight: bold; font-size: 1.1em;">**OmniVinci: Enhancing Architecture and Data for Omni-Modal Understanding LLM (ICLR 2026)**</span> <br />

[![Paper](https://img.shields.io/badge/ArXiv-Paper-brown)](https://arxiv.org/abs/2510.15870)
[![Code](https://img.shields.io/badge/GitHub-Link-blue)](https://github.com/NVlabs/OmniVinci)
[![Model](https://img.shields.io/badge/HuggingFace-Model-yellow)](https://huggingface.co/nvidia/omnivinci)
[![Website](https://img.shields.io/badge/Web-Page-orange)](https://nvlabs.github.io/OmniVinci)
[![Video](https://img.shields.io/badge/Video-Demo-white)](https://youtu.be/w84pPuGFH4o?si=OUFhhiXeQbzil7gN)


<div align="center">

</div>

[Hanrong Ye*†](https://sites.google.com/site/yhrspace/home), [Chao-Han Huck Yang†](https://huckiyang.github.io/), [Arushi Goel†](https://scholar.google.com/citations?user=tj08PZcAAAAJ&hl=en), [Wei Huang†](https://aaron-weihuang.com/), [Ligeng Zhu†](https://lzhu.me/), [Yuanhang Su†](https://scholar.google.com/citations?user=n335GwUAAAAJ&hl=en), [Sean Lin†](https://www.nvidia.com/en-us/), [An-Chieh Cheng†](https://www.anjiecheng.me/), [Zhen Wan†](https://scholar.google.com/citations?user=OH_1qwMAAAAJ&hl=en), [Jinchuan Tian†](https://jctian98.github.io/), [Yuming Lou†](https://github.com/Louym), [Dong Yang†](https://scholar.google.com/citations?user=PHvliUgAAAAJ&hl=en), [Zhijian Liu](https://zhijianliu.com/), [Yukang Chen](https://yukangchen.com/), [Ambrish Dantrey](https://www.nvidia.com/en-us/), [Ehsan Jahangiri](https://www.nvidia.com/en-us/), [Sreyan Ghosh](https://sreyan88.github.io/), [Daguang Xu](https://scholar.google.com/citations?user=r_VHYHAAAAAJ&hl=en), [Ehsan Hosseini Asl](https://scholar.google.com/citations?user=I9w3ON4AAAAJ&hl=en), [Danial Mohseni Taheri](https://danialtaheri.github.io/), [Vidya Murali](https://www.linkedin.com/in/vidya-n-murali/), [Sifei Liu](https://sifeiliu.net/), [Yao Lu](https://www.linkedin.com/in/yao-jason-lu-a0291938/), [Oluwatobi Olabiyi](https://www.linkedin.com/in/oluwatobi-olabiyi-08955123/), [Yu-Chiang Frank Wang](https://scholar.google.com/citations?user=HSGvdtoAAAAJ&hl=en), [Rafael Valle](https://rafaelvalle.github.io/), [Bryan Catanzaro](https://www.linkedin.com/in/bryancatanzaro/), [Andrew Tao](https://scholar.google.com/citations?user=Wel9l1wAAAAJ&hl=en), [Song Han](https://hanlab.mit.edu/songhan), [Jan Kautz](https://jankautz.com/), [Hongxu Yin*^†](https://hongxu-yin.github.io/), [Pavlo Molchanov^](https://www.pmolchanov.com/)  

<span style="color: rgb(133, 184, 55);">**NVIDIA**</span>  
*Corresponding Author | †Core Contribution | ^Equal Advisory

<p align="center" width="100%">
<img src="assets/performance.png" alt="Stanford-Alpaca" style="width: 100%; min-width: 300px; display: block; margin: auto;">
</p>

Advancing machine intelligence requires developing the ability to perceive across multiple modalities, much as humans sense the world.
We introduce OmniVinci, an initiative to build a strong, open-source, omni-modal LLM.
We carefully study the design choices across model architecture and data curation.
For model architecture, we present three key innovations:
**(i)** OmniAlignNet for strengthening alignment between vision and audio embeddings in a shared omni-modal latent space;
**(ii)** Temporal Embedding Grouping for capturing relative temporal alignment between vision and audio signals; and
**(iii)** Constrained Rotary Time Embedding for encoding absolute temporal information in omni-modal embeddings. 
We introduce a curation and synthesis pipeline that generates 24M single-modal and omni-modal conversations. We find that modalities reinforce one another in both perception and reasoning. Our model outperforms Qwen2.5-Omni with +19.05 on DailyOmni (cross-modal understanding), +1.7 on MMAR (audio), and +3.9 on Video-MME (vision), while using just 0.2T training tokens - a 6 times reduction compared to Qwen2.5-Omni’s 1.2T.
We finally demonstrate omni-modal advantages in downstream applications spanning robotics, medical AI, and smart factory. 

| Model        | Omni - Dailyomni | Omni - Worldsense | Audio - MMAU | Audio - MMAR | Vision - MVBench | Vision - Video-MME (w/o sub) |
|--------------|------------------|-------------------|--------------------------|--------------|------------------|------------------------------|
| Qwen2.5-Omni | 47.5            | 45.4              | 71.0                       | 56.7         | 70.3             | 64.3                         |
| **Ours**         | **66.5**         | **48.2**         | **71.6**                 | **58.4**     | **70.6**         | **68.2**                     |
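
The modeling code ships with the Hugging Face checkpoint rather than this repository, but the temporal-grouping idea in (ii) can be pictured as bucketing vision and audio tokens into shared time bins, so that signals that co-occur in time sit together before reaching the LLM. The snippet below is only a minimal sketch of that intuition, with assumed tensor shapes and hypothetical names; it is not the OmniVinci implementation.

```python
import torch

def group_by_time(vision_emb, vision_ts, audio_emb, audio_ts, num_groups=8):
    """Toy illustration of temporal grouping (hypothetical, not the released code).

    vision_emb: (Nv, D) frame embeddings,  vision_ts: (Nv,) timestamps in seconds
    audio_emb:  (Na, D) audio embeddings,  audio_ts:  (Na,) timestamps in seconds
    Returns num_groups tensors, each mixing the vision and audio tokens that
    fall into the same time bin.
    """
    t_max = torch.maximum(vision_ts.max(), audio_ts.max()) + 1e-6
    edges = torch.linspace(0.0, float(t_max), num_groups + 1)
    groups = []
    for g in range(num_groups):
        v_mask = (vision_ts >= edges[g]) & (vision_ts < edges[g + 1])
        a_mask = (audio_ts >= edges[g]) & (audio_ts < edges[g + 1])
        # co-occurring vision and audio tokens land in the same temporal group
        groups.append(torch.cat([vision_emb[v_mask], audio_emb[a_mask]], dim=0))
    return groups

# toy usage with random embeddings spread over an 8-second clip
vision, audio = torch.randn(16, 64), torch.randn(100, 64)
v_ts, a_ts = torch.linspace(0, 8.0, 16), torch.linspace(0, 8.0, 100)
print([g.shape[0] for g in group_by_time(vision, v_ts, audio, a_ts, num_groups=4)])
```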



## News
- [x] **[2025 Oct 19] OmniVinci-9B** is released! It supports joint understanding of **vision, audio, and text**.

## Model Usage
<p align="center" width="100%">
<img src="assets/arch.png" alt="Stanford-Alpaca" style="width: 100%; min-width: 300px; display: block; margin: auto;">
</p>

### Inference

### Environment setup


1. Download the Hugging Face model repo and cd into it
```
huggingface-cli download nvidia/omnivinci --local-dir ./omnivinci --local-dir-use-symlinks False
cd ./omnivinci
```

2. Install the Python environment (based on the NVILA codebase)
```
bash ./environment_setup.sh omnivinci
```
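
Optionally, you can sanity-check the resulting environment. The expected versions below come from the pins in `pyproject.toml` and `environment_setup.sh`; adjust if you customized the setup.

```python
# optional sanity check of the installed environment
import torch, transformers, numpy

print(torch.__version__)          # pinned to 2.3.0 in pyproject.toml
print(transformers.__version__)   # pinned to 4.46.0 in pyproject.toml
print(numpy.__version__)          # pinned to 1.26.4 in environment_setup.sh
print(torch.cuda.is_available())  # FlashAttention2 wheel targets CUDA 12.2
```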

### 🤗 Transformers Usage

#### Video (with audio) Inference Example:
```python
from transformers import AutoProcessor, AutoModel, AutoConfig,AutoModelForCausalLM
import torch
import os

# default: Load the model on the available device(s)
model_path = "./"
video_path = "xxx.mp4"
generation_kwargs = {"max_new_tokens": 1024, "max_length": 99999999}
load_audio_in_video = True
num_video_frames = 128
audio_length = "max_3600"

config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)

model = AutoModel.from_pretrained(model_path,
                                  trust_remote_code=True,
                                  torch_dtype="torch.float16",
                                  device_map="auto")

processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
generation_config = model.default_generation_config
generation_config.update(**generation_kwargs)

model.config.load_audio_in_video = load_audio_in_video
processor.config.load_audio_in_video = load_audio_in_video
if num_video_frames > 0:
    model.config.num_video_frames = num_video_frames
    processor.config.num_video_frames = num_video_frames
if audio_length != -1:
    model.config.audio_chunk_length = audio_length
    processor.config.audio_chunk_length = audio_length


conversation = [{
        "role": "user",
        "content": [
            {"type": "video", "video":video_path},
            {"type": "text", "text": "Assess the video, followed by a detailed description of it's video and audio contents."}
        ]
}]
text = processor.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)

inputs = processor([text])

output_ids = model.generate(
    input_ids=inputs.input_ids,
    media=getattr(inputs, 'media', None),
    media_config=getattr(inputs, 'media_config', None),
    generation_config=generation_config,
)
print(processor.tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```

- **For audio and image inference examples, please refer to ```example_mini_audio.py``` and ```example_mini_image.py```.**

## Examples
https://github.com/user-attachments/assets/276d2735-2cdc-47c8-8fdb-787f77c5dddb



<span style="color: rgb(133, 184, 55);">**Question**</span>: "What are they talking about in detail?"  
<span style="color: rgb(133, 184, 55);">**Answer**</span>: The video features Jensen Huang with gray hair and glasses, dressed in a black leather jacket, speaking into a microphone in a modern office setting. The background includes a glass wall and a green panel, with a blurred view of desks and chairs. Jensen Huang discusses the evolution of artificial intelligence, focusing on the impact of deep learning and the development of AI frameworks. He emphasizes the importance of data, computing power, and software methodologies in driving AI advancements. The conversation highlights the transformative potential of AI, particularly in perception and generative capabilities, and the need for a new approach to software development. Jensen Huang also mentions the creation of a new AI supercomputer, DGX-1, and its impact on the industry. The video concludes with Jensen Huang discussing the second phase of AI, generative AI, and its potential to revolutionize the way we understand and create information.


https://github.com/user-attachments/assets/19a03604-f6f8-4407-a93d-c078b621e940


<span style="color: rgb(133, 184, 55);">**Question**</span>: "Describe the video in detail."  
<span style="color: rgb(133, 184, 55);">**Answer**</span>: The video begins with Jensen Huang in a modern, well-lit room with large windows and a view of greenery outside. He dressed in a black jacket and white pants, is seated at a table, writing a message on a black card with a gold pen. The message reads, 'To Robot, Enjoy Your New Brain!' followed by a signature. He then places the card on the table rand begins to open a large black gift box with a gold ribbon and bow. The scene transitions to a close-up of the gift box on the table, with the person's hand visible. The focus then shifts to a robot wearing a white hard hat with the 'NVIDIA' logo, standing in a workshop or industrial setting. The robot holds the same black gift box with the gold ribbon and bow, and it opens the box to reveal the black card with the message. The robot examines the card closely. The narrative continues with the robot, still in the workshop setting, holding the black gift box. The robot opens the box, revealing a sleek, white device with a black screen, nestled in crumpled black paper. The robot examines the device closely, then places it back into the box and closes it. The scene transitions to a different setting, where the robot is now in a modern office environment with green walls and multiple computer monitors. The robot stands behind the closed gift box, gesturing with its hands as if explaining or presenting something. The video wraps up with the robot in the modern office environment, gesturing with its hands. The scene transitions to a close-up of the robot's face, showing its detailed features and expressive eyes. 



## Citation
Please consider citing our paper and this framework if they are helpful in your research.

```bibtex
@article{ye2025omnivinci,
  title={OmniVinci: Enhancing Architecture and Data for Omni-Modal Understanding LLM},
  author={Ye, Hanrong and Yang, Chao-Han Huck and Goel, Arushi and Huang, Wei and Zhu, Ligeng and Su, Yuanhang and Lin, Sean and Cheng, An-Chieh and Wan, Zhen and Tian, Jinchuan and others},
  journal={arXiv preprint arXiv:2510.15870},
  year={2025}
}
```


================================================
FILE: environment_setup.sh
================================================
#!/usr/bin/env bash
set -e

CONDA_ENV=${1:-""}
if [ -n "$CONDA_ENV" ]; then
    # This is required to activate conda environment
    eval "$(conda shell.bash hook)"

    conda create -n $CONDA_ENV python=3.10.14 -y
    conda activate $CONDA_ENV
    # This is optional if you prefer to use built-in nvcc
    conda install -c nvidia cuda-toolkit=12.2 -y
else
    echo "Skipping conda environment creation. Make sure you have the correct environment activated."
fi

# Using uv to speedup installations
pip install uv
alias uvp="uv pip"

echo "[INFO] Using python $(which python)"
echo "[INFO] Using pip $(which pip)"
echo "[INFO] Using uv $(which uv)"

# This is required to enable PEP 660 support
uv pip install --upgrade pip setuptools

# Install FlashAttention2
uv pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu122torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl

# Install VILA
uv pip install -e ".[train,eval]"

# numpy introduces a lot of dependency issues, so it is kept separate from pyproject.toml
pip install numpy==1.26.4

# audio
uv pip install soundfile librosa openai-whisper ftfy
conda install -c conda-forge ffmpeg
uv pip install jiwer

# Downgrade protobuf to 3.20 for backward compatibility
uv pip install protobuf==3.20.*

# Replace transformers and deepspeed files
site_pkg_path=$(python -c 'import site; print(site.getsitepackages()[0])')
cp -rv ./transformers/modeling_utils.py $site_pkg_path/transformers/modeling_utils.py # for using qwen 2.5 omni checkpoint

# for benchmark adoption
uv pip install faiss-gpu-cu12

# Quantization requires the newest triton version and introduces dependency issues
uv pip install triton==3.1.0 # we don't need this version if we do not use FP8LinearQwen2Config, QLlavaLlamaConfig, etc. It is not compatible with mamba-ssm.

uv pip install kaldiio

# for rotary embedding
uv pip install beartype

uv pip install pydantic==1.10.22


================================================
FILE: example_infer.py
================================================
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


from transformers import AutoProcessor, AutoModel, AutoConfig, GenerationConfig
import torch
import os
import time
from pathlib import Path
from typing import List, Dict, Any, Optional, Union
import logging
import sys
os.environ["HF_HUB_OFFLINE"] = "1"  # Use local cache for models

# Set up logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

def add_to_sys_path_direct(model_path):
    """Add model path directly to sys.path"""
    if model_path not in sys.path:
        sys.path.insert(0, model_path)  # Insert at beginning for priority
        print(f"✓ Added to sys.path: {model_path}")
    else:
        print(f"Already in sys.path: {model_path}")

class NVOmniVideoInference:
    """A class to handle NVOmni video model inference with improved error handling and flexibility."""
    
    def __init__(self, model_path: str, torch_dtype=torch.float16, device_map="auto"):
        """
        Initialize the NVOmni model for video inference.
        
        Args:
            model_path (str): Path to the model directory
            torch_dtype: PyTorch data type for model weights
            device_map (str): Device mapping strategy for model loading
        """
        self.model_path = model_path
        self.torch_dtype = torch_dtype
        self.device_map = device_map
        self.model = None
        self.processor = None
        self.config = None
        self.device = None
        
        self.load_model()
        
    def validate_paths(self, model_path: str, video_path: str = None) -> bool:
        """Validate that required paths exist."""
        if not Path(model_path).exists():
            logger.error(f"Model path does not exist: {model_path}")
            return False
            
        if video_path and not Path(video_path).exists():
            logger.error(f"Video path does not exist: {video_path}")
            return False
            
        return True
    
    def load_model(self) -> bool:
        """Load the model, processor, and config with error handling."""
        if not self.validate_paths(self.model_path):
            return False
            
        try:
            logger.info("Loading model configuration...")
            self.config = AutoConfig.from_pretrained(self.model_path, trust_remote_code=True)
            
            logger.info("Loading model...")
            start_time = time.time()
            self.model = AutoModel.from_pretrained(
                self.model_path,
                trust_remote_code=True,
                torch_dtype=self.torch_dtype,
                device_map=self.device_map,
                low_cpu_mem_usage=True  # More memory efficient loading
            )#.to(eval(self.torch_dtype))
            load_time = time.time() - start_time
            logger.info(f"Model loaded in {load_time:.2f} seconds")
            
            logger.info("Loading processor...")
            self.processor = AutoProcessor.from_pretrained(self.model_path, trust_remote_code=True)

            # Set device for single-device setups
            if hasattr(self.model, 'device'):
                self.device = self.model.device
            else:
                self.device = next(self.model.parameters()).device if self.model.parameters() else torch.device('cpu')
            
            logger.info(f"Model successfully loaded on device: {self.device}")
            self._print_model_info()
            return True
        except Exception as e:
            logger.error(f"Failed to load model: {e}")
            return False

    def _print_model_info(self):
        """Print useful information about the loaded model."""
        logger.info("=" * 50)
        logger.info("MODEL INFORMATION")
        logger.info("=" * 50)
        
        if self.config:
            logger.info(f"Model type: {getattr(self.config, 'model_type', 'Unknown')}")
            logger.info(f"Hidden size: {getattr(self.config, 'hidden_size', 'Unknown')}")
            
        if self.model and torch.cuda.is_available():
            logger.info(f"GPU memory allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GB")
            logger.info(f"GPU memory reserved: {torch.cuda.memory_reserved() / 1024**3:.2f} GB")
    
    def create_conversation(self, video_path: str, text_prompt: str) -> List[Dict[str, Any]]:
        """
        Create a conversation format for the model.
        
        Args:
            video_path (str): Path to the video file
            text_prompt (str): Text prompt for the model
            
        Returns:
            List[Dict]: Conversation in the expected format
        """
        return [{
            "role": "user",
            "content": [
                {"type": "video", "video": video_path},
                {"type": "text", "text": text_prompt}
            ]
        }]

    @torch.inference_mode()    
    def generate_response(
        self, 
        video_path: str, 
        text_prompt: str,
        max_new_tokens: int = 256,
        temperature: float = None,
        top_p: float = None,
        do_sample: bool = None,
        num_video_frames: int = -1,
        load_audio_in_video: bool = True,
        audio_length: Union[int, str] = "max_3600",
    ) -> Optional[str]:
        """
        Generate a response from the model given a video and text prompt.
        
        Args:
            video_path (str): Path to the video file
            text_prompt (str): Text prompt for the model
            max_new_tokens (int): Maximum number of new tokens to generate
            temperature (float): Sampling temperature
            top_p (float): Top-p sampling parameter
            do_sample (bool): Whether to use sampling
            num_video_frames (int): Number of video frames to process (-1 keeps the model default)
            load_audio_in_video (bool): Whether to load the audio track from the video
            audio_length (Union[int, str]): Maximum audio length (e.g. "max_3600")
            
        Returns:
            Optional[str]: Generated response or None if failed
        """
        if not self.model or not self.processor:
            logger.error("Model or processor not loaded. Please initialize the model first.")
            return None
            
        if not self.validate_paths(self.model_path, video_path):
            return None
        
        try:
        
            logger.info(f"Processing video: {video_path}")
            logger.info(f"Text prompt: {text_prompt}")
            
            # Create conversation
            conversation = self.create_conversation(video_path, text_prompt)
            
            # Apply chat template
            text = self.processor.apply_chat_template(
                conversation, 
                tokenize=False, 
                add_generation_prompt=True
            )
            logger.info(f"Chat template applied")

            # set model params
            self.model.config.load_audio_in_video = load_audio_in_video
            self.processor.config.load_audio_in_video = load_audio_in_video
            if num_video_frames > 0:
                self.model.config.num_video_frames = num_video_frames
                self.processor.config.num_video_frames = num_video_frames
            if audio_length != -1:
                self.model.config.audio_chunk_length = audio_length
                self.processor.config.audio_chunk_length = audio_length
            logger.info(f"Model config - load_audio_in_video: {self.model.config.load_audio_in_video}, num_video_frames: {self.model.config.num_video_frames}, audio_chunk_length: {self.model.config.audio_chunk_length}")
            
            # Process inputs
            start_time = time.time()
            inputs = self.processor([text])
            
            # Move inputs to the correct device if needed
            if hasattr(inputs, 'input_ids') and inputs.input_ids is not None:
                inputs.input_ids = inputs.input_ids.to(self.device)
            
            processing_time = time.time() - start_time
            logger.info(f"Input processing completed in {processing_time:.2f} seconds")
            
            logger.info("Generating response...")
            start_time = time.time()

            generation_kwargs = {"max_new_tokens": max_new_tokens, "max_length": 99999999}
            if top_p is not None:
                generation_kwargs["top_p"] = top_p
            if do_sample is not None:
                generation_kwargs["do_sample"] = do_sample
            if temperature is not None:
                generation_kwargs["temperature"] = temperature

            generation_config = self.model.default_generation_config
            generation_config.update(**generation_kwargs)

            logger.info(f"Generation config: {generation_config.to_dict()}")


            with torch.no_grad():
                output_ids = self.model.generate(
                    input_ids=inputs.input_ids,
                    media=getattr(inputs, 'media', None),
                    media_config=getattr(inputs, 'media_config', None),
                    generation_config=generation_config,
                )
            
            generation_time = time.time() - start_time
            logger.info(f"Generation completed in {generation_time:.2f} seconds")
            
            # Decode response
            response = self.processor.tokenizer.batch_decode(
                output_ids, 
                skip_special_tokens=True
            )[0]

            return response
        except Exception as e:
            logger.error(f"Failed to generate response: {e}")
            return None

    def batch_generate(
        self, 
        video_text_pairs: List[tuple], 
        **generation_kwargs
    ) -> List[Optional[str]]:
        """
        Generate responses for multiple video-text pairs.
        
        Args:
            video_text_pairs (List[tuple]): List of (video_path, text_prompt) tuples
            **generation_kwargs: Arguments passed to generate_response
            
        Returns:
            List[Optional[str]]: List of generated responses
        """
        responses = []
        for i, (video_path, text_prompt) in enumerate(video_text_pairs):
            logger.info(f"Processing batch item {i+1}/{len(video_text_pairs)}")
            response = self.generate_response(video_path, text_prompt, **generation_kwargs)
            responses.append(response)
            
            # Clear cache between generations to manage memory
            if torch.cuda.is_available():
                torch.cuda.empty_cache()
                
        return responses

def main():
    """Main function demonstrating usage of the NVOmni model."""
    
    # Configuration
    MODEL_PATH = "./"
    VIDEO_PATH = "xxx.mp4"
    TEXT_PROMPT = "Assess the video, followed by a detailed description of its video and audio contents."

    num_video_frames=128
    audio_length="max_3600"
    load_audio_in_video=True

    add_to_sys_path_direct(MODEL_PATH)
    
    # Initialize the inference class
    logger.info("Initializing NVOmni Video Inference...")
    inferencer = NVOmniVideoInference(MODEL_PATH, torch_dtype=torch.float16)
    
    if inferencer.model is None:
        logger.error("Failed to initialize model. Exiting.")
        return
    
    # Generate response
    logger.info("Starting inference...")
    response = inferencer.generate_response(
        video_path=VIDEO_PATH,
        text_prompt=TEXT_PROMPT,
        num_video_frames=num_video_frames,
        load_audio_in_video=load_audio_in_video,
        audio_length=audio_length,
        max_new_tokens=1024,
    )
    
    if response:
        print("\n" + "="*60)
        print("GENERATED RESPONSE")
        print("="*60)
        print(response)
        print("="*60)
    else:
        logger.error("Failed to generate response")
    
    # Example of batch processing
    if False:
        logger.info("\nExample: Batch processing")
        batch_pairs = [
            (VIDEO_PATH, "What is happening in this video?"),
            (VIDEO_PATH, "Describe the audio content of this video."),
        ]
        
        batch_responses = inferencer.batch_generate(batch_pairs, max_new_tokens=128)
        
        for i, (pair, response) in enumerate(zip(batch_pairs, batch_responses)):
            print(f"\n--- Batch Response {i+1} ---")
            print(f"Prompt: {pair[1]}")
            print(f"Response: {response}")

if __name__ == "__main__":
    main()

================================================
FILE: example_mini_audio.py
================================================
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Example script for audio transcription using the model.

This script demonstrates how to:
1. Load the model and processor
2. Configure audio processing parameters
3. Process audio input
4. Generate transcription output

Usage:
    python example_mini_audio.py --model_path <path_to_model> --audio_path <path_to_audio>
"""

from transformers import AutoProcessor, AutoModel, AutoConfig, AutoModelForCausalLM
import torch
import os
import argparse

# Configuration
parser = argparse.ArgumentParser(description="Audio transcription example")
parser.add_argument("--model_path", type=str, default="./", help="Path to the model")
parser.add_argument("--audio_path", type=str, required=True, help="Path to the audio file")
parser.add_argument("--max_new_tokens", type=int, default=1024, help="Maximum number of tokens to generate")
parser.add_argument("--num_video_frames", type=int, default=128, help="Number of video frames to process")
parser.add_argument("--audio_length", type=str, default="max_3600", help="Maximum audio length")

args = parser.parse_args()

model_path = args.model_path
audio_path = args.audio_path
generation_kwargs = {"max_new_tokens": args.max_new_tokens, "max_length": 99999999}
load_audio_in_video = True
num_video_frames = args.num_video_frames
audio_length = args.audio_length

config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)

model = AutoModel.from_pretrained(model_path,
                                  trust_remote_code=True,
                                  torch_dtype="torch.float16",
                                  device_map="auto")

processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
generation_config = model.default_generation_config
generation_config.update(**generation_kwargs)

model.config.load_audio_in_video = load_audio_in_video
processor.config.load_audio_in_video = load_audio_in_video
if num_video_frames > 0:
    model.config.num_video_frames = num_video_frames
    processor.config.num_video_frames = num_video_frames
if audio_length != -1:
    model.config.audio_chunk_length = audio_length
    processor.config.audio_chunk_length = audio_length


conversation = [{
        "role": "user",
        "content": [
            {"type": "audio", "audio": audio_path},
            {"type": "text", "text": "Transcribe the whole speech."}
        ]
}]
text = processor.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)

inputs = processor([text])

output_ids = model.generate(
    input_ids=inputs.input_ids,
    media=getattr(inputs, 'media', None),
    media_config=getattr(inputs, 'media_config', None),
    generation_config=generation_config,
)
print(processor.tokenizer.batch_decode(output_ids, skip_special_tokens=True))

================================================
FILE: example_mini_image.py
================================================
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Example script for image understanding using the model.

This script demonstrates how to:
1. Load the model and processor
2. Process image input
3. Generate description output

Usage:
    python example_mini_image.py --model_path <path_to_model> --image_path <path_to_image>
"""

from transformers import AutoProcessor, AutoModel, AutoConfig, AutoModelForCausalLM
import torch
import os
import argparse

# Configuration
parser = argparse.ArgumentParser(description="Image understanding example")
parser.add_argument("--model_path", type=str, default="./", help="Path to the model")
parser.add_argument("--image_path", type=str, required=True, help="Path to the image file")
parser.add_argument("--max_new_tokens", type=int, default=1024, help="Maximum number of tokens to generate")
parser.add_argument("--prompt", type=str, default="Describe the image in detail.", help="Text prompt for the model")

args = parser.parse_args()

model_path = args.model_path
image_path = args.image_path
generation_kwargs = {"max_new_tokens": args.max_new_tokens, "max_length": 99999999}

config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)

model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
generation_config = model.default_generation_config
generation_config.update(**generation_kwargs)

conversation = [{
    "role": "user",
    "content": [
        {"type": "image", "image": image_path},
        {"type": "text", "text": args.prompt}
    ]
}]
text = processor.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)

inputs = processor([text])

output_ids = model.generate(
    input_ids=inputs.input_ids,
    media=getattr(inputs, 'media', None),
    media_config=getattr(inputs, 'media_config', None),
    generation_config=generation_config,
)
print(processor.tokenizer.batch_decode(output_ids, skip_special_tokens=True))


================================================
FILE: example_mini_video.py
================================================
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Example script for video understanding using the model.

This script demonstrates how to:
1. Load the model and processor
2. Configure video and audio processing parameters
3. Process video input with optional audio
4. Generate description output

Usage:
    python example_mini_video.py --model_path <path_to_model> --video_path <path_to_video>
"""

from transformers import AutoProcessor, AutoModel, AutoConfig, AutoModelForCausalLM
import torch
import os
import argparse

# Configuration
parser = argparse.ArgumentParser(description="Video understanding example")
parser.add_argument("--model_path", type=str, default="./", help="Path to the model")
parser.add_argument("--video_path", type=str, required=True, help="Path to the video file")
parser.add_argument("--max_new_tokens", type=int, default=1024, help="Maximum number of tokens to generate")
parser.add_argument("--num_video_frames", type=int, default=128, help="Number of video frames to process")
parser.add_argument("--audio_length", type=str, default="max_3600", help="Maximum audio length")
parser.add_argument("--prompt", type=str, default="What are they talking about in detail?", help="Text prompt for the model")
parser.add_argument("--load_audio", action="store_true", default=True, help="Load audio from video")

args = parser.parse_args()

model_path = args.model_path
video_path = args.video_path
generation_kwargs = {"max_new_tokens": args.max_new_tokens, "max_length": 99999999}
load_audio_in_video = args.load_audio
num_video_frames = args.num_video_frames
audio_length = args.audio_length
text_prompt = args.prompt

assert os.path.exists(video_path), f"Video path {video_path} does not exist."

config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)

model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
generation_config = model.default_generation_config
generation_config.update(**generation_kwargs)

model.config.load_audio_in_video = load_audio_in_video
processor.config.load_audio_in_video = load_audio_in_video
if num_video_frames > 0:
    model.config.num_video_frames = num_video_frames
    processor.config.num_video_frames = num_video_frames
if audio_length != -1:
    model.config.audio_chunk_length = audio_length
    processor.config.audio_chunk_length = audio_length

def forward_inference(video_path, text_prompt):
    """Run inference on video with text prompt."""
    print(f"Text prompt: {text_prompt}")
    print(f"Video path: {video_path}")
    conversation = [{
        "role": "user",
        "content": [
            {"type": "video", "video": video_path},
            {"type": "text", "text": text_prompt}
        ]
    }]
    text = processor.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)

    inputs = processor([text])

    output_ids = model.generate(
        input_ids=inputs.input_ids,
        media=getattr(inputs, 'media', None),
        media_config=getattr(inputs, 'media_config', None),
        generation_config=generation_config,
    )
    print(processor.tokenizer.batch_decode(output_ids, skip_special_tokens=True))

forward_inference(video_path, text_prompt)


================================================
FILE: pyproject.toml
================================================
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "vila"
version = "1.5.0"
description = "nvOmni"
readme = "README.md"
requires-python = ">=3.8"
classifiers = [
    "Programming Language :: Python :: 3",
    "License :: OSI Approved :: Apache Software License",
]
dependencies = [
    "torch==2.3.0", "torchvision==0.18.0",
    "transformers==4.46.0", "tokenizers>=0.15.2", "sentencepiece==0.1.99", "shortuuid",
    "accelerate==0.34.2", "peft>=0.9.0", "bitsandbytes==0.43.2",
    "pydantic<2,>=1", "markdown2[all]", "numpy==1.26.4", "scikit-learn==1.2.2",
    "gradio==3.35.2", "gradio_client==0.2.9",
    "requests", "httpx", "uvicorn", "fastapi", "fire", "seaborn", "ring_flash_attn==0.1.1",
    "einops==0.6.1", "einops-exts==0.0.4", "timm==0.9.12",
    "openpyxl==3.1.2", "pytorchvideo==0.1.5", "decord==0.6.0",
    "datasets==2.16.1", "openai==1.8.0", "webdataset==0.2.86",
    "nltk==3.3", "pywsd==1.2.4", "opencv-python-headless==4.8.0.76",
    "s2wrapper@git+https://github.com/bfshi/scaling_on_scales",
    "tyro", "pytest", "pre-commit", "loguru", "hydra-core", "xgrammar"
]

[project.scripts]
vila-run = "llava.cli.run:main"
vila-eval = "llava.cli.eval:main"
vila-infer = "llava.cli.infer:main"
vila-upload = "llava.cli.upload2hf:main"

[project.optional-dependencies]
train = ["deepspeed==0.9.5", "ninja", "wandb"]
eval = ["word2number", "Levenshtein", "nltk", "pywsd"]

[project.urls]
"Homepage" = "https://hanlab.mit.edu/projects/vila"
"Bug Tracker" = "https://github.com/NVlabs/VILA/issues"

[tool.triton]
triton = {version = "3.0.0.post20240610003544", file = "https://aiinfra.pkgs.visualstudio.com/2692857e-05ef-43b4-ba9c-ccf1c22c437c/_packaging/07c94329-d4c3-4ad4-9e6b-f904a60032ec/pypi/download/triton-nightly/3.post20240610003544/triton_nightly-3.0.0.post20240610003544-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", sha256 = "ac2c36a49bf9c2bb780909b38096fb718f17efd78b88a1ca1d649f6d063cdc2c"}

[tool.black]
line-length = 120

[tool.isort]
profile = "black"
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
use_parentheses = true
ensure_newline_before_comments = true
line_length = 120

[tool.setuptools.packages.find]
exclude = ["assets*", "benchmark*", "docs", "dist*", "playground*", "scripts*", "tests*"]

[tool.wheel]
exclude = ["assets*", "benchmark*", "docs", "dist*", "playground*", "scripts*", "tests*"]
SYMBOL INDEX (11 symbols across 2 files)

FILE: example_infer.py
  function add_to_sys_path_direct (line 31) | def add_to_sys_path_direct(model_path):
  class NVOmniVideoInference (line 39) | class NVOmniVideoInference:
    method __init__ (line 42) | def __init__(self, model_path: str, torch_dtype="torch.float16", devic...
    method validate_paths (line 61) | def validate_paths(self, model_path: str, video_path: str = None) -> b...
    method load_model (line 73) | def load_model(self) -> bool:
    method _print_model_info (line 107) | def _print_model_info(self):
    method create_conversation (line 121) | def create_conversation(self, video_path: str, text_prompt: str) -> Li...
    method generate_response (line 141) | def generate_response(
    method batch_generate (line 250) | def batch_generate(
  function main (line 277) | def main():

FILE: example_mini_video.py
  function forward_inference (line 78) | def forward_inference(video_path, text_prompt):
