[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\ncover/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\n.pybuilder/\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\nPipfile.lock\n\n# poetry\npoetry.lock\n\n# pdm\n.pdm.toml\n\n# PEP 582\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n.conda/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# pytype static type analyzer\n.pytype/\n\n# Cython debug symbols\ncython_debug/\n\n# IDEs\n.vscode/\n.idea/\n*.swp\n*.swo\n*~\n.DS_Store\n*.sublime-project\n*.sublime-workspace\n\n# Model checkpoints and outputs\ncheckpoints/\noutputs/\nruns/\n*.pt\n*.pth\n*.ckpt\n*.safetensors\n*.bin\n*.h5\n*.hdf5\n*.onnx\n*.pb\n*.tflite\n*.pkl\n*.pickle\n\n# Training logs and outputs\nlogs/\nwandb/\ntensorboard/\ntb_logs/\n*.log\n*.out\n*.err\n\n# Data files (uncomment if you don't want to track large data files)\n# data/\n# datasets/\n# *.csv\n# *.json\n# *.jsonl\n# *.parquet\n# *.arrow\n\n# Image files (uncomment if you don't want to track large image datasets)\n# *.jpg\n# *.jpeg\n# *.gif\n# *.bmp\n# *.tiff\n# *.webp\n# Allow PNG files in assets directory\n!assets/**/*.png\n!assets/**/*.jpg\n!assets/**/*.jpeg\n\n# Temporary files\n*.tmp\n*.temp\n*.bak\n*.swp\n*.swo\n*~\n\n# OS files\n.DS_Store\n.DS_Store?\n._*\n.Spotlight-V100\n.Trashes\nehthumbs.db\nThumbs.db\ndesktop.ini\n\n# DeepSpeed\ndeepspeed_logs/\ndeepspeed_info/\n\n# Hugging Face cache\n.cache/\n.huggingface/\ntransformers_cache/\n\n# Local configuration files\nconfig.local.yaml\nconfig.local.yml\n*.local.*\n\n# Evaluation results\neval_results/\nresults/\n\n# JSON files (but keep config files in specific directories)\n*.json\n!**/local_scripts/*.json\n!**/config*.json\n!package.json\n!tsconfig.json\n!requirements.txt\n\n# Large files\n*.zip\n*.tar\n*.tar.gz\n*.rar\n*.7z\n\n# Node modules (if any)\nnode_modules/\n\n# Compiled models\n*.model\n*.weights\n\n"
  },
  {
    "path": "README.md",
    "content": "<div align=\"center\">\n\n# [AAAI 2026 Oral] Robust-R1: Degradation-Aware Reasoning for Robust Visual Understanding\nThis is the official repository for Robust-R1.\n\n[Jiaqi Tang^](https://jqt.me/), \n[Jianmin Chen^](https://github.com/Ch921-cell), \n\\\n[Wei Wei**](https://scholar.google.com/citations?hl=zh-CN&user=v8KMYlwAAAAJ), \n[Xiaogang Xu](https://xuxiaogang.com/), \n[Runtao Liu](https://scholar.google.com/citations?hl=zh-CN&user=YHTvXF4AAAAJ), \n[Xiangyu Wu](https://scholar.google.com/citations?user=R0GjVWIAAAAJ&hl=en), \n[Qipeng Xie](), \n[Jiafei Wu](), \n[Lei Zhang](https://scholar.google.com/citations?hl=zh-CN&user=0Kg6Gi4AAAAJ) and \n\\\n[Qifeng Chen*](https://cqf.io)\n\n^: Equal contribution. *: Corresponding Author. **: Co-corresponding Author.\n\n[![Paper](https://img.shields.io/badge/cs.CV-Paper-b31b1b?style=flat&logo=arxiv&logoColor=white)](https://huggingface.co/papers/2512.17532)\n[![HuggingFace](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-ffd21e)](https://huggingface.co/Jiaqi-hkust/Robust-R1)\n[![HuggingFace](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Data-ffd21e)](https://huggingface.co/datasets/Jiaqi-hkust/Robust-R1)\n[![made-for-VSCode](https://img.shields.io/badge/Made%20for-VSCode-1f425f.svg)](https://code.visualstudio.com/)\n[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)\n\n</div>\n\n## 📰 **News**\n- **[2025-12-23]** 🔥 Online demo is now available at [HF Space](https://huggingface.co/spaces/Jiaqi-hkust/Robust-R1).\n- **[2025-12-23]** 🔥 We release the [Code](https://github.com/jqtangust/Robust-R1), [Models](https://huggingface.co/Jiaqi-hkust/Robust-R1), and [Dataset](https://huggingface.co/datasets/Jiaqi-hkust/Robust-R1) on HuggingFace.\n- **[2025-12-22]** ✅ Our paper is now available on [arXiv](https://arxiv.org/abs/your-paper-id).\n- **[2025-11-08]** 🚀 Our paper is accepted by **AAAI 2026 Oral**.\n\n\n---\n\n## 🔭 **Motivation**\n\n- 🚩 **Limited Interpretability**: Lack of explicit mechanisms to diagnose degradation impacts on original semantic information.\n- 🚩 **Isolated Optimization**: Neglect of the degradation propagation relation between the visual encoder and large language model.\n\n<div align=\"center\">\n  <img src=\"assets/moti.png\" width=\"85%\" alt=\"Method Overview\">\n  <br>\n</div>\n\n---\n\n## 🛠️ **Installation**\n\n- **Clone the repository:**\n   ```bash\n   git clone https://github.com/jqtangust/Robust-R1.git\n   cd Robust-R1\n   ```\n\n- **Create environment:**\n   ```bash\n   conda create -n robust_r1 python=3.10\n   conda activate robust_r1\n   bash setup.sh\n   ```\n---\n\n### 🏰 **Pretrained and Fine-tuned Model**\n\n- The following checkpoints are utilized to run Robust-R1:\n\n  | Checkpoint | Link | Note |\n  |:---------:|:----:|:----:|\n  | Qwen2.5-VL-Base | [link](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) | Used as initial weights for training. 
|\n  | **Robust-R1-SFT** | [link](https://huggingface.co/Jiaqi-hkust/Robust-R1-SFT) | Fine-tuned on [Robust-R1 dataset](https://huggingface.co/datasets/Jiaqi-hkust/Robust-R1) |\n  | **Robust-R1-RL** | [link](https://huggingface.co/Jiaqi-hkust/Robust-R1-RL) | Fine-tuned with reinforcement learning on [Robust-R1 dataset](https://huggingface.co/datasets/Jiaqi-hkust/Robust-R1) |\n\n---\n\n## ⏳ **Demo**\n\n### 🖥️ CLI Demo\n\n- Run the command-line demo with a question:\n\n  ```bash\n  # if you use local weights\n  export MODEL_PATH=\"your_model_name_or_path\"\n\n  python demo.py \"What type of vehicles are the people riding?\\n0. trucks\\n1. wagons\\n2. jeeps\\n3. cars\\n\"\n  ```\n\n### 🌐 GUI Demo\n\n- Set the model path as an environment variable and run the demo:\n\n  ```bash\n  # if you use local weights\n  export MODEL_PATH=\"your_model_name_or_path\"\n  \n  python app.py\n  ```\n\n- The demo will be available at `http://localhost:7862` by default.\n\n- You can also try the GUI in the [Online Demo](https://huggingface.co/spaces/Jiaqi-hkust/Robust-R1).\n\n  <div align=\"center\">\n    <img src=\"assets/demo.png\" alt=\"Robust-R1 Demo\">\n  </div>\n\n---\n\n## 🧠 **Training**\n\n### 🎓 Supervised Fine-Tuning\n\nWe employ [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for supervised fine-tuning of the base model.\n\n1. Clone the repository and install required dependencies:\n\n   ```bash\n   git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git\n   cd LLaMA-Factory\n   pip install -e \".[torch,metrics]\"\n   ```\n\n2. Download the base model [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).\n\n3. Prepare the training data and configuration files:\n\n   - Download the [Robust images](https://huggingface.co/datasets/Jiaqi-hkust/Robust-R1) archive and unzip it.\n   - Modify the configuration files in the `LLaMA-Factory/data` directory.\n\n4. Configure the training YAML file with your local paths (model path, data path, and output directory).\n\n5. Run the following command to train the SFT model:\n\n   ```bash\n   llamafactory-cli train examples/train_full/qwen2_5_vl_full_sft.yaml\n   ```\n\n### 🎓 Reinforcement Learning\n\n1. Download the [Robust images](https://huggingface.co/datasets/Jiaqi-hkust/Robust-R1) archive and unzip it into `Robust-R1/dataset`.\n\n2. Prepare the training data file (`train.jsonl`) and organize the image folders.\n\n3. Download the SFT model checkpoint from [Robust-R1-SFT](https://huggingface.co/Jiaqi-hkust/Robust-R1-SFT) or use your own trained SFT model.\n\n4. Replace the following part in the [run_scripts/run_grpo_robust.sh](run_scripts/run_grpo_robust.sh) file with your own paths:\n\n   ```bash\n   data_paths=\"Robust-R1/data/train.jsonl\"\n   image_folders=\"Robust-R1/data/train_images\"\n   model_path=\"your_model_name_or_path\"\n   ```\n\n5. Run the script:\n\n   ```bash\n   bash run_scripts/run_grpo_robust.sh\n   ```\n\n---\n\n## 📊 **Evaluation**\n\nWe use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) for anti-degradation evaluation.\n\n1. Clone the VLMEvalKit repository and install dependencies:\n\n   ```bash\n   git clone https://github.com/open-compass/VLMEvalKit.git\n   cd VLMEvalKit\n   pip install -e .\n   ```\n\n2. Prepare the evaluation datasets according to VLMEvalKit requirements.\n\n3. 
**Image Degradation Pipeline**: Generate corrupted images for robustness evaluation.\n\n   We provide an image degradation pipeline for generating corrupted images to evaluate model robustness.\n\n   Navigate to the degradation pipeline directory and process images:\n\n   ```bash\n   cd add_degradation\n   python generate_pipeline_open_source.py --input_dir <input_dir> --output_base_dir <output_base_dir> --dataset_name <dataset_name> --verbose\n   ```\n\n   The script will generate three output directories with different degradation intensities for each image.\n\n4. Configure the model path and evaluation settings in the VLMEvalKit configuration file.\n\n5. Run the evaluation command:\n\n   ```bash\n   python run.py --model <your_model_name_or_path> --data <dataset_name>\n   ```\n\n### 🔬 R-Bench Evaluation\n\nFor R-Bench evaluation, we use [R-Bench](https://github.com/Q-Future/R-Bench) to assess model performance under real-world corruptions.\n\n1. Clone the R-Bench repository:\n\n   ```bash\n   git clone https://github.com/Q-Future/R-Bench.git\n   ```\n\n2. Evaluate using VLMEvalKit with R-Bench dataset:\n\n   ```bash\n   cd VLMEvalKit\n   python run.py --data R-Bench-Dis --model <your_model_name_or_path> --verbose\n   ```\n\n3. For full dataset evaluation, follow the R-Bench pipeline as described in the [R-Bench repository](https://github.com/Q-Future/R-Bench).\n\n---\n\n## ⭐️ Citation\n\nIf you find Robust-R1 useful for your research and applications, please cite using this BibTeX:\n   ``` latex\n   @inproceedings{tang2025robustr1,\n     title={Robust-R1: Degradation-Aware Reasoning for Robust Visual Understanding},\n     author={Tang, Jiaqi and Chen, Jianmin and Wei, Wei and Xu, Xiaogang and Liu, Runtao and Wu, Xiangyu and Xie, Qipeng and Wu, Jiafei and Zhang, Lei and Chen, Qifeng},\n     booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},\n     year={2026}\n   }\n   ```\n\n## 🤝 Acknowledgements\nThe work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Reference Number: AoE/E-601/24-N).\n\nWe also thank the authors of [VLM-R1](https://github.com/om-ai-lab/VLM-R1?tab=readme-ov-file), [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), and [R-Bench](https://github.com/Q-Future/R-Bench) for their contributions.\n\n"
  },
  {
    "path": "add_degradation/add_degradation.py",
    "content": "import cv2\nimport numpy as np\nimport random\n\n\ndef motion_blur(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    degree = max(5, int(intensity * 30))\n    angle = random.uniform(0, 360)\n    \n    M = cv2.getRotationMatrix2D((degree/2, degree/2), angle, 1)\n    kernel = np.diag(np.ones(degree))\n    kernel = cv2.warpAffine(kernel, M, (degree, degree))\n    kernel /= np.sum(kernel)\n    \n    return cv2.filter2D(img, -1, kernel)\n\n\ndef lens_blur(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    kernel_size = int(3 + intensity * 300) | 1\n    sigma = intensity * 20\n    \n    kernel = cv2.getGaussianKernel(kernel_size, sigma)\n    kernel = kernel @ kernel.T\n    \n    blurred = np.zeros_like(img, dtype=np.float32)\n    for c in range(3):\n        blurred[..., c] = cv2.filter2D(img[..., c].astype(np.float32), -1, kernel)\n    \n    result = cv2.addWeighted(\n        img, 1 - intensity * 0.7,\n        blurred.astype(np.uint8), intensity * 0.9, 0\n    )\n    \n    return result\n\n\ndef gaussian_noise(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    noise_std = intensity * 75\n    noise = np.random.normal(0, noise_std, img.shape)\n    result = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)\n    \n    return result\n\n\ndef block_exchange(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    h, w = img.shape[:2]\n    block_size = min(32, int(5 + intensity * 30))\n    noisy_img = img.copy()\n    \n    num_exchanges = int(intensity * 35)\n    for _ in range(num_exchanges):\n        i1 = random.randint(0, h // block_size - 1)\n        j1 = random.randint(0, w // block_size - 1)\n        i2 = random.randint(0, h // block_size - 1)\n        j2 = random.randint(0, w // block_size - 1)\n        \n        y1, x1 = i1 * block_size, j1 * block_size\n        y2, x2 = i2 * block_size, j2 * block_size\n        \n        block1 = noisy_img[y1:y1+block_size, x1:x1+block_size].copy()\n        noisy_img[y1:y1+block_size, x1:x1+block_size] = \\\n            noisy_img[y2:y2+block_size, x2:x2+block_size]\n        noisy_img[y2:y2+block_size, x2:x2+block_size] = block1\n    \n    return noisy_img\n\n\ndef jpeg_compression(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if not 0 <= intensity <= 1:\n        raise ValueError(\"Intensity must be in range [0.0, 1.0]\")\n    \n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    quality = int(100 - intensity * 95)\n    quality = max(5, min(100, quality))\n    \n    encode_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality]\n    _, encimg = cv2.imencode('.jpg', img, encode_params)\n    compressed_img = cv2.imdecode(encimg, cv2.IMREAD_COLOR)\n    \n    return compressed_img\n\n\ndef mean_shift(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    spatial_radius = int(intensity * 40)\n    color_radius = int(intensity * 40)\n    \n    return cv2.pyrMeanShiftFiltering(img, spatial_radius, color_radius)\n\n\ndef color_diffusion(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    
kernel_size = 3 + 2 * int(intensity * 20)\n    sigma = intensity * 50\n    \n    kernel = cv2.getGaussianKernel(kernel_size, sigma)\n    kernel = kernel @ kernel.T * (intensity ** 2)\n    \n    diffused = np.zeros_like(img, dtype=np.float32)\n    for c in range(3):\n        diffused[..., c] = cv2.filter2D(img[..., c].astype(np.float32), -1, kernel)\n    \n    if intensity > 0.9:\n        h, w = img.shape[:2]\n        for _ in range(int(100 * intensity)):\n            x, y = np.random.randint(0, w), np.random.randint(0, h)\n            radius = np.random.randint(5, 20)\n            cv2.circle(diffused, (x, y), radius,\n                      (np.random.randint(0, 255),) * 3, -1)\n    \n    result = cv2.addWeighted(\n        img, max(0.1, 1 - intensity * 0.9),\n        diffused.astype(np.uint8), min(0.9, intensity * 0.9), 0\n    )\n    \n    return np.clip(result, 0, 255).astype(np.uint8)\n\n\ndef sharpness_change(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    if intensity > 0:\n        kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]]) * (intensity * 80)\n        result = cv2.filter2D(img, -1, kernel)\n    else:\n        ksize = int(3 + abs(intensity) * 5) | 1\n        result = cv2.GaussianBlur(img, (ksize, ksize), 0)\n    \n    result = cv2.addWeighted(img, 0.7, result, 0.3, 0)\n    return result\n\n\ndef dark_illumination(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    result = (img * (1 - intensity ** 2)).clip(0, 255).astype(np.uint8)\n    return result\n\n\ndef hsv_saturation(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)\n    hsv[..., 1] *= (1 - intensity)\n    result = cv2.cvtColor(hsv.clip(0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)\n    \n    return result\n\n\ndef atmospheric_turbulence(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    h, w = img.shape[:2]\n    x, y = np.meshgrid(np.arange(w), np.arange(h))\n    distortion = intensity * 40 * np.sin(y / 30 + intensity * 5)\n    x_new = np.clip(x + distortion, 0, w - 1).astype(np.float32)\n    y_new = np.clip(y + distortion * 0.7, 0, h - 1).astype(np.float32)\n    \n    return cv2.remap(img, x_new, y_new, cv2.INTER_LINEAR)\n\n\ndef dirty_lens(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    h, w = img.shape[:2]\n    dirt = np.zeros((h, w, 3), dtype=np.float32)\n    \n    if intensity > 0.1:\n        for _ in range(int(10 * intensity)):\n            center_x = random.randint(0, w)\n            center_y = random.randint(0, h)\n            cv2.ellipse(dirt, (center_x, center_y),\n                       (random.randint(150, 300), random.randint(100, 200)),\n                       angle=random.randint(0, 180),\n                       startAngle=0, endAngle=360,\n                       color=(50, 50, 50), thickness=-1)\n    \n    for _ in range(int(300 * intensity)):\n        x = random.randint(0, w)\n        y = random.randint(0, h)\n        cv2.circle(dirt, (x, y), random.randint(4, 20),\n                  (random.randint(50, 100),) * 3, -1)\n    \n    if intensity > 0.5:\n        for _ in range(int(5 * intensity)):\n         
   x = random.randint(0, w)\n            y = random.randint(0, h)\n            cv2.circle(dirt, (x, y), random.randint(20, 50),\n                      (80, 80, 80), -1)\n            cv2.circle(dirt, (x, y), random.randint(10, 30),\n                      (120, 120, 120), -1)\n    \n    dirt = cv2.GaussianBlur(dirt, (0, 0), 30)\n    dirt = dirt.astype(np.uint8)\n    \n    result = cv2.addWeighted(img, 1 - 0.7 * intensity, dirt, 0.8 * intensity, 0)\n    return np.clip(result, 0, 255).astype(np.uint8)\n\n\ndef scan_lines(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    line_interval = max(3, int(20 / (intensity + 0.1)))\n    line_width = max(5, int(7 * intensity))\n    \n    result = img.copy()\n    for i in range(0, img.shape[0], line_interval):\n        end_line = min(i + line_width, img.shape[0])\n        result[i:end_line] = result[i:end_line] * 0.01\n    \n    return result\n\n\ndef graffiti(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if not 0 <= intensity <= 1:\n        raise ValueError(\"Intensity must be in range [0.0, 1.0]\")\n    \n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    h, w = img.shape[:2]\n    result = img.copy()\n    \n    for _ in range(int(10 * intensity)):\n        color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))\n        pt1 = (random.randint(0, w - 1), random.randint(0, h - 1))\n        pt2 = (random.randint(0, w - 1), random.randint(0, h - 1))\n        thickness = random.randint(1, max(1, int(5 * intensity)))\n        cv2.line(result, pt1, pt2, color, thickness)\n    \n    if intensity > 0.55:\n        texts = [\"X\", \"FAKE\", \"COPY\", \"VOID\", \"COPYRIGHT\", str(random.randint(1, 100))]\n        text = random.choice(texts)\n        \n        font_scale = max(0.5, intensity * 5)\n        thickness = max(1, int(font_scale))\n        \n        text_size = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, font_scale, thickness)[0]\n        text_width, text_height = text_size\n        \n        if h - 10 > text_height + 10:\n            text_x = random.randint(0, max(1, w - text_width - 10))\n            text_y = random.randint(text_height + 10, h - 10)\n            \n            cv2.putText(result, text,\n                       (text_x, text_y),\n                       cv2.FONT_HERSHEY_SIMPLEX,\n                       font_scale,\n                       (0, 0, 255),\n                       thickness)\n    \n    return result\n\n\ndef watermark_damage(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    h, w = img.shape[:2]\n    mask = np.zeros((h, w), dtype=np.float32)\n    \n    for _ in range(int(1 + intensity * 15)):\n        x = random.randint(0, w - 50)\n        y = random.randint(0, h - 50)\n        cv2.rectangle(mask, (x, y),\n                     (x + random.randint(50, 200), y + random.randint(20, 80)), 1, -1)\n    \n    repaired = cv2.inpaint(img, (mask * 255).astype(np.uint8), 3, cv2.INPAINT_TELEA)\n    result = cv2.addWeighted(img, 1 - intensity, repaired, intensity, 0)\n    \n    if intensity > 0.5:\n        edges = cv2.Canny((mask * 255).astype(np.uint8), 50, 150)\n        result[edges > 0] = result[edges > 0] * 0.8\n    \n    return result\n\n\ndef lens_flare(img: np.ndarray, intensity: float = 0.5) -> np.ndarray:\n    if img is None:\n        raise ValueError(\"Input image is None\")\n    \n    h, 
w = img.shape[:2]\n    flare = np.zeros((h, w, 3), dtype=np.float32)\n    \n    num_flares = 3 + int(30 * intensity)\n    for _ in range(num_flares):\n        x = random.randint(0, w)\n        y = random.randint(0, h)\n        radius = random.randint(10, 50)\n        color = np.array([255, 255, 235])\n        \n        cv2.circle(flare, (x, y), radius, color.tolist(), -1)\n        \n        angle = random.uniform(0, 2 * np.pi)\n        length = random.randint(30, 150)\n        end_x = int(x + length * np.cos(angle))\n        end_y = int(y + length * np.sin(angle))\n        cv2.line(flare, (x, y), (end_x, end_y), color.tolist(), 2)\n    \n    flare = cv2.GaussianBlur(flare, (3, 3), 20 * intensity)\n    \n    result = cv2.addWeighted(img.astype(np.float32), 1, flare, 0.9 * intensity, 0)\n    return np.clip(result, 0, 255).astype(np.uint8)\n"
  },
  {
    "path": "add_degradation/generate_degradation.py",
    "content": "import add_degradation\nimport cv2\nimport os\nimport numpy as np\nimport argparse\n\nDEGRADATION_CONFIG = {\n    'capture': {\n        'lens_blur': {'weight': 20},\n        'lens_flare': {'weight': 20},\n        'motion_blur': {'weight': 20},\n        'dirty_lens': {'weight': 20},\n        'hsv_saturation': {'weight': 20}\n    },\n    'transmission': {\n        'jpeg_compression': {'weight': 25},\n        'block_exchange': {'weight': 25},\n        'mean_shift': {'weight': 25},\n        'scan_lines': {'weight': 25}\n    },\n    'environment': {\n        'dark_illumination': {'weight': 25},\n        'atmospheric_turbulence': {'weight': 25},\n        'gaussian_noise': {'weight': 25},\n        'color_diffusion': {'weight': 25}\n    },\n    'postprocessing': {\n        'sharpness_change': {'weight': 33},\n        'graffiti': {'weight': 33},\n        'watermark_damage': {'weight': 34}\n    }\n}\n\ndef apply_degradation_Benchmark(image, method_name, intensity):\n    degradation_func = getattr(add_degradation, method_name)\n    degraded_img = degradation_func(image, intensity)\n    return degraded_img\n\ndef main():\n    parser = argparse.ArgumentParser(description='Image degradation pipeline for robustness evaluation')\n    parser.add_argument('--input_dir', type=str, \n                       default=os.getenv('INPUT_DIR', './data/images'),\n                       help='Input image directory path (can be set via INPUT_DIR environment variable)')\n    parser.add_argument('--output_base_dir', type=str,\n                       default=os.getenv('OUTPUT_BASE_DIR', './data/output'),\n                       help='Base directory for output images (can be set via OUTPUT_BASE_DIR environment variable)')\n    parser.add_argument('--dataset_name', type=str,\n                       default=os.getenv('DATASET_NAME', 'RealWorldQA'),\n                       help='Dataset name (used to generate output directory names)')\n    \n    args = parser.parse_args()\n    \n    folder_path = args.input_dir\n    output_base_dir = args.output_base_dir\n    dataset_name = args.dataset_name\n    \n    output_dirs = {\n        0.9: os.path.join(output_base_dir, f'{dataset_name}_Robust_100'),\n        0.45: os.path.join(output_base_dir, f'{dataset_name}_Robust_50'),\n        0.23: os.path.join(output_base_dir, f'{dataset_name}_Robust_25')\n    }\n\n    if not os.path.exists(folder_path):\n        raise ValueError(f\"Input directory does not exist: {folder_path}\")\n    \n    for path in output_dirs.values():\n        os.makedirs(path, exist_ok=True)\n\n    all_methods_with_weights = []\n    for category, methods in DEGRADATION_CONFIG.items():\n        for method_name, details in methods.items():\n            all_methods_with_weights.append((method_name, details['weight']))\n\n    method_names = [item[0] for item in all_methods_with_weights]\n    weights = [item[1] for item in all_methods_with_weights]\n\n    total_weight = sum(weights)\n    probabilities = [w / total_weight for w in weights]\n\n    num = 0\n    for filename in os.listdir(folder_path):\n        if filename.lower().endswith(('.png', '.jpg', '.jpeg', '.bmp', '.tiff')):\n            image_path = os.path.join(folder_path, filename)\n            image = cv2.imread(image_path)\n            \n            if image is None:\n                print(f\"Warning: Could not read image {image_path}, skipping\")\n                num += 1\n                continue\n\n            selected_method_name = np.random.choice(method_names, p=probabilities)\n\n            
for intensity, output_dir in output_dirs.items():\n                degraded_img = apply_degradation_Benchmark(image, selected_method_name, intensity)\n                save_path = os.path.join(output_dir, filename)\n                cv2.imwrite(save_path, degraded_img)\n            \n            num += 1\n            if num % 100 == 0:\n                print(f\"Processed {num} images\")\n        \n    print(\"Processing completed!\")\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "app.py",
    "content": "import gradio as gr\nimport os\nimport torch\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\nimport html\n\nsys_prompt = \"\"\"First output the the types of degradations in image briefly in <TYPE> <TYPE_END> tags, \n        and then output what effects do these degradation have on the image in <INFLUENCE> <INFLUENCE_END> tags, \n        then based on the strength of degradation, output an APPROPRIATE length for the reasoning process in <REASONING> <REASONING_END> tags, \n        and then summarize the content of reasoning and the give the answer in <CONCLUSION> <CONCLUSION_END> tags,\n        provides the user with the answer briefly in <ANSWER> <ANSWER_END>.\"\"\"\n\nproject_dir = os.path.dirname(os.path.abspath(__file__))\ntemp_dir = os.path.join(project_dir, \".gradio_temp\")\nos.makedirs(temp_dir, exist_ok=True)\nos.environ[\"GRADIO_TEMP_DIR\"] = temp_dir\n\nMODEL_PATH = os.getenv(\"MODEL_PATH\", \"\")\n\nif not MODEL_PATH:\n    raise ValueError(\"MODEL_PATH environment variable must be set. Please set it to your model path.\")\n\nprint(f\"==========================================\")\nprint(f\"Initializing application...\")\nprint(f\"==========================================\")\n\nclass ModelHandler:\n    def __init__(self, model_path):\n        self.model_path = model_path\n        self.model = None\n        self.processor = None\n        self._load_model()\n\n    def _load_model(self):\n        try:\n            print(f\"⏳ Loading model weights, this may take a few minutes...\")\n            \n            self.processor = AutoProcessor.from_pretrained(self.model_path)\n            \n            self.model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n                self.model_path,\n                torch_dtype=torch.bfloat16,\n                device_map=\"auto\",\n                attn_implementation=\"flash_attention_2\" if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8 else \"eager\"\n            )\n            print(\"✅ Model loaded successfully!\")\n        except Exception as e:\n            print(f\"❌ Model loading failed: {e}\")\n            raise e\n\n    def predict(self, message_dict, history, temperature, max_tokens):\n        text = message_dict.get(\"text\", \"\")\n        files = message_dict.get(\"files\", [])\n\n        messages = []\n        \n        if history:\n            print(f\"Processing {len(history)} previous messages from history\")\n            for msg in history:\n                role = msg.get(\"role\", \"\")\n                content = msg.get(\"content\", \"\")\n                \n                if role == \"user\":\n                    user_content = []\n                    \n                    if isinstance(content, list):\n                        for item in content:\n                            if isinstance(item, str):\n                                if os.path.exists(item) or any(item.lower().endswith(ext) for ext in ['.jpg', '.jpeg', '.png', '.gif', '.bmp', '.webp']):\n                                    user_content.append({\"type\": \"image\", \"image\": item})\n                                else:\n                                    user_content.append({\"type\": \"text\", \"text\": item})\n                            elif isinstance(item, dict):\n                                user_content.append(item)\n                    elif isinstance(content, str):\n                        if content:\n                    
        user_content.append({\"type\": \"text\", \"text\": content})\n                    \n                    if user_content:\n                        messages.append({\"role\": \"user\", \"content\": user_content})\n                        \n                elif role == \"assistant\":\n                    if isinstance(content, str) and content:\n                        messages.append({\"role\": \"assistant\", \"content\": content})\n        \n        current_content = []\n        if files:\n            for file_path in files:\n                current_content.append({\"type\": \"image\", \"image\": file_path})\n        \n        if text:\n            sys_prompt_formatted = \" \".join(sys_prompt.split())\n            full_text = f\"{text}\\n{sys_prompt_formatted}\"\n            current_content.append({\"type\": \"text\", \"text\": full_text})\n        \n        if current_content:\n            messages.append({\"role\": \"user\", \"content\": current_content})\n        \n        print(f\"Total messages for model: {len(messages)}\")\n        print(f\"Message roles: {[m['role'] for m in messages]}\")\n\n        text_prompt = self.processor.apply_chat_template(\n            messages, tokenize=False, add_generation_prompt=True\n        )\n        \n        image_inputs, video_inputs = process_vision_info(messages)\n        \n        inputs = self.processor(\n            text=[text_prompt],\n            images=image_inputs,\n            videos=video_inputs,\n            padding=True,\n            return_tensors=\"pt\"\n        )\n        \n        inputs = inputs.to(self.model.device)\n\n        generation_kwargs = dict(\n            **inputs,\n            max_new_tokens=max_tokens,\n            temperature=temperature,\n            do_sample=True if temperature > 0 else False,\n        )\n\n        try:\n            print(\"Starting model generation...\")\n            with torch.no_grad():\n                generated_ids = self.model.generate(**generation_kwargs)\n            \n            input_length = inputs['input_ids'].shape[1]\n            generated_ids = generated_ids[0][input_length:]\n            \n            print(f\"Input length: {input_length}, Generated token count: {len(generated_ids)}\")\n            \n            generated_text = self.processor.tokenizer.decode(\n                generated_ids, \n                skip_special_tokens=True\n            )\n            \n            print(f\"Generation completed. Output length: {len(generated_text)}, Content preview: {repr(generated_text[:200])}\")\n            \n            if generated_text and generated_text.strip():\n                print(f\"Yielding generated text: {generated_text[:100]}...\")\n                yield generated_text\n            else:\n                warning_msg = \"⚠️ No output generated. 
The model may not have produced any response.\"\n                print(warning_msg)\n                yield warning_msg\n                \n        except Exception as e:\n            import traceback\n            error_details = traceback.format_exc()\n            print(f\"Error in model.generate: {error_details}\")\n            yield f\"❌ Generation error: {str(e)}\"\n            return\n\nmodel_handler = ModelHandler(MODEL_PATH)\n\ndef create_chat_ui():\n    custom_css = \"\"\"\n    .gradio-container { font-family: 'Inter', sans-serif; }\n    #chatbot { height: 650px !important; overflow-y: auto; }\n    \"\"\"\n\n    with gr.Blocks(theme=gr.themes.Soft(), css=custom_css, title=\"Robust-R1\") as demo:\n        \n        with gr.Row():\n            gr.Markdown(\"# 🤖Robust-R1:Degradation-Aware Reasoning for Robust Visual Understanding\")\n\n        with gr.Row():\n            with gr.Column(scale=4):\n                chatbot = gr.Chatbot(\n                    elem_id=\"chatbot\",\n                    label=\"Chat\",\n                    type=\"messages\",\n                    avatar_images=(None, \"https://api.dicebear.com/7.x/bottts/svg?seed=Qwen\"),\n                    height=650\n                )\n                \n                chat_input = gr.MultimodalTextbox(\n                    interactive=True,\n                    file_types=[\"image\"],\n                    placeholder=\"Enter your question or upload an image...\",\n                    show_label=False\n                )\n\n            with gr.Column(scale=1):\n                with gr.Group():\n                    gr.Markdown(\"### ⚙️ Generation Config\")\n                    temperature = gr.Slider(\n                        minimum=0.01, maximum=1.0, value=0.6, step=0.05, \n                        label=\"Temperature\"\n                    )\n                    max_tokens = gr.Slider(\n                        minimum=128, maximum=4096, value=1024, step=128, \n                        label=\"Max New Tokens\"\n                    )\n                \n                clear_btn = gr.Button(\"🗑️ Clear Context\", variant=\"stop\")\n\n        gr.Markdown(\"---\")\n        gr.Markdown(\"### 📚 Examples\")\n        gr.Markdown(\"Click the examples below to quickly fill the input box and start a conversation\")\n        \n        example_images_dir = os.path.join(project_dir, \"assets\")\n        \n        examples_config = [\n            (\"What type of vehicles are the people riding?\\n0. trucks\\n1. wagons\\n2. jeeps\\n3. cars\\n\", os.path.join(example_images_dir, \"1.jpg\")),\n            (\"What is the giant fish in the air?\\n0. blimp\\n1. balloon\\n2. kite\\n3. 
sculpture\\n\", os.path.join(example_images_dir, \"2.jpg\")),\n        ]\n        \n        example_data = []\n        for text, img_path in examples_config:\n            if os.path.exists(img_path):\n                example_data.append({\"text\": text, \"files\": [img_path]})\n        \n        if example_data:\n            gr.Examples(\n                examples=example_data,\n                inputs=chat_input,\n                label=\"\",\n                examples_per_page=3\n            )\n        else:\n            gr.Markdown(\"*No example images available, please manually upload images for testing*\")\n        \n        async def respond(user_msg, history, temp, tokens):\n            text = user_msg.get(\"text\", \"\").strip()\n            files = user_msg.get(\"files\", [])\n            user_content = list(files)\n            if text: user_content.append(text)\n            \n            if not files and text: user_message = {\"role\": \"user\", \"content\": text}\n            else: user_message = {\"role\": \"user\", \"content\": user_content}\n            \n            history.append(user_message)\n            yield history, gr.MultimodalTextbox(value=None, interactive=False)\n\n            history.append({\"role\": \"assistant\", \"content\": \"\"})\n            \n            try:\n                previous_history = history[:-2] if len(history) >= 2 else []\n                \n                generated_text = \"\"\n                for chunk in model_handler.predict(user_msg, previous_history, temp, tokens):\n                    generated_text = chunk\n                    \n                    safe_text = html.escape(generated_text)\n                    safe_text = generated_text.replace(\"<\", \"&lt;\").replace(\">\", \"&gt;\")\n                    \n                    history[-1][\"content\"] = safe_text\n                    yield history, gr.MultimodalTextbox(interactive=False)\n                    \n            except Exception as e:\n                import traceback\n                traceback.print_exc()\n                history[-1][\"content\"] = f\"❌ Inference error: {str(e)}\"\n                yield history, gr.MultimodalTextbox(interactive=True)\n            \n            yield history, gr.MultimodalTextbox(value=None, interactive=True)\n            \n        chat_input.submit(\n            respond,\n            inputs=[chat_input, chatbot, temperature, max_tokens],\n            outputs=[chatbot, chat_input]\n        )\n\n        def clear_history(): return [], None\n        clear_btn.click(clear_history, outputs=[chatbot, chat_input])\n\n    return demo\n\nif __name__ == \"__main__\":\n    demo = create_chat_ui()\n    \n    print(f\"🚀 Service is starting, please visit: http://localhost:7862\")\n    demo.launch(\n        server_name=\"0.0.0.0\",\n        server_port=7862,\n        share=False,\n        show_error=True,\n        allowed_paths=[project_dir]\n    )\n"
  },
  {
    "path": "demo.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nCLI Demo for Robust-R1: Visual Question Answering with Degradation-Aware Reasoning.\n\"\"\"\n\nimport os\nimport sys\nimport torch\nimport argparse\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\n\n# Default model path - can be overridden by MODEL_PATH environment variable\n# Users can set MODEL_PATH to their local model path or HuggingFace model name\nDEFAULT_MODEL_PATH = \"Jiaqi-hkust/Robust-R1-RL\"  # HuggingFace model name\nMODEL_PATH = os.getenv(\"MODEL_PATH\", DEFAULT_MODEL_PATH)\n\n# Fixed image path for demo\nFIXED_IMAGE_PATH = \"assets/1.jpg\"\n\nSYS_PROMPT = \"\"\"First output the the types of degradations in image briefly in <TYPE> <TYPE_END> tags, \nand then output what effects do these degradation have on the image in <INFLUENCE> <INFLUENCE_END> tags, \nthen based on the strength of degradation, output an APPROPRIATE length for the reasoning process in <REASONING> <REASONING_END> tags, \nand then summarize the content of reasoning and the give the answer in <CONCLUSION> <CONCLUSION_END> tags,\nprovides the user with the answer briefly in <ANSWER> <ANSWER_END>.\"\"\"\n\nDEFAULT_TEMPERATURE = 0.6\nDEFAULT_MAX_TOKENS = 1024\n\n\nclass ModelHandler:\n    def __init__(self, model_path):\n        self.model_path = model_path\n        self.model = None\n        self.processor = None\n        self._load_model()\n\n    def _load_model(self):\n        try:\n            print(\"Loading model, this may take a few minutes...\")\n            self.processor = AutoProcessor.from_pretrained(self.model_path)\n            self.model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n                self.model_path,\n                torch_dtype=torch.bfloat16,\n                device_map=\"auto\",\n                attn_implementation=\"flash_attention_2\" if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8 else \"eager\"\n            )\n            print(\"Model loaded successfully!\")\n        except Exception as e:\n            print(f\"Model loading failed: {e}\")\n            raise e\n\n    def predict(self, question, image_path, temperature=DEFAULT_TEMPERATURE, max_tokens=DEFAULT_MAX_TOKENS):\n        \"\"\"\n        Generate response for the given question and image.\n        \n        Args:\n            question: User question\n            image_path: Path to the image\n            temperature: Generation temperature\n            max_tokens: Maximum number of tokens to generate\n        \n        Returns:\n            Generated text response\n        \"\"\"\n        sys_prompt_formatted = \" \".join(SYS_PROMPT.split())\n        full_text = f\"{question}\\n{sys_prompt_formatted}\"\n        \n        messages = [\n            {\n                \"role\": \"user\",\n                \"content\": [\n                    {\"type\": \"text\", \"text\": full_text},\n                    {\"type\": \"image\", \"image\": image_path},\n                ],\n            }\n        ]\n        \n        text_prompt = self.processor.apply_chat_template(\n            messages, tokenize=False, add_generation_prompt=True\n        )\n        \n        image_inputs, video_inputs = process_vision_info(messages)\n        \n        inputs = self.processor(\n            text=[text_prompt],\n            images=image_inputs,\n            videos=video_inputs,\n            padding=True,\n            return_tensors=\"pt\"\n        )\n        \n        inputs = 
inputs.to(self.model.device)\n        \n        generation_kwargs = dict(\n            **inputs,\n            max_new_tokens=max_tokens,\n            temperature=temperature,\n            do_sample=True if temperature > 0 else False,\n        )\n        \n        try:\n            print(\"Generating response...\")\n            with torch.no_grad():\n                generated_ids = self.model.generate(**generation_kwargs)\n            \n            input_length = inputs['input_ids'].shape[1]\n            generated_ids = generated_ids[0][input_length:]\n            \n            generated_text = self.processor.tokenizer.decode(\n                generated_ids, \n                skip_special_tokens=True\n            )\n            \n            return generated_text\n                \n        except Exception as e:\n            import traceback\n            error_details = traceback.format_exc()\n            print(f\"Generation error: {error_details}\")\n            raise e\n\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description=\"CLI Demo for Robust-R1: Visual Question Answering with Degradation-Aware Reasoning\",\n        formatter_class=argparse.RawDescriptionHelpFormatter,\n        epilog=\"\"\"\nExamples:\n    python demo.py \"What type of vehicles are the people riding?\"\n    python demo.py \"What is in the image?\" --temperature 0.7 --max-tokens 2048\n    python demo.py \"Your question\" --image /path/to/image.jpg\n        \"\"\"\n    )\n    \n    parser.add_argument(\n        \"question\",\n        type=str,\n        help=\"Question to ask about the image\"\n    )\n    \n    parser.add_argument(\n        \"--image\", \"-i\",\n        type=str,\n        default=FIXED_IMAGE_PATH,\n        help=f\"Path to the input image (default: {FIXED_IMAGE_PATH})\"\n    )\n    \n    parser.add_argument(\n        \"--temperature\", \"-t\",\n        type=float,\n        default=DEFAULT_TEMPERATURE,\n        help=f\"Generation temperature (default: {DEFAULT_TEMPERATURE})\"\n    )\n    \n    parser.add_argument(\n        \"--max-tokens\", \"-m\",\n        type=int,\n        default=DEFAULT_MAX_TOKENS,\n        help=f\"Maximum number of tokens to generate (default: {DEFAULT_MAX_TOKENS})\"\n    )\n    \n    parser.add_argument(\n        \"--model-path\",\n        type=str,\n        default=MODEL_PATH,\n        help=f\"Model path or HuggingFace model name (default: {MODEL_PATH}). 
Can also be set via MODEL_PATH environment variable.\"\n    )\n    \n    args = parser.parse_args()\n    \n    if not os.path.exists(args.image):\n        print(f\"Error: Image file does not exist: {args.image}\")\n        sys.exit(1)\n    \n    print(f\"Model path: {args.model_path}\")\n    print(f\"Image path: {args.image}\")\n    print(f\"Question: {args.question}\")\n    print(f\"Temperature: {args.temperature}, Max tokens: {args.max_tokens}\")\n    print(\"-\" * 80)\n    \n    model_handler = ModelHandler(args.model_path)\n    \n    try:\n        response = model_handler.predict(\n            question=args.question,\n            image_path=args.image,\n            temperature=args.temperature,\n            max_tokens=args.max_tokens\n        )\n        \n        print(\"\\n\" + \"=\" * 80)\n        print(\"Model Response:\")\n        print(\"=\" * 80)\n        print(response)\n        print(\"=\" * 80)\n        \n    except KeyboardInterrupt:\n        print(\"\\n\\nUser interrupted\")\n        sys.exit(0)\n    except Exception as e:\n        print(f\"\\nError: {e}\")\n        import traceback\n        traceback.print_exc()\n        sys.exit(1)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "requirements.txt",
    "content": "torch>=2.5.1\ntransformers==4.49.0\ngradio>=4.0.0\nqwen-vl-utils>=0.0.1\naccelerate>=1.2.1\nsentencepiece>=0.1.99\npillow\nsafetensors>=0.3.3\nhuggingface-hub>=0.19.2,<1.0\neinops>=0.8.0\npackaging>=23.0\nnumpy>=1.21.0\n"
  },
  {
    "path": "run_scripts/run_grpo_robust.sh",
    "content": "PROJECT_ROOT=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )/..\" && pwd )\"\nexport REPO_HOME=\"${PROJECT_ROOT}\"\necho \"REPO_HOME: $REPO_HOME\"\n# Change the data_paths and image_folders to your own data\ndata_paths=\"your_data_path\" \nimage_folders=\"your_images_folder\"\nmodel_path=\"your_model_name_or_path\"\nis_reward_customized_from_vlm_module=True\necho \"data_paths: $data_paths\"\necho \"image_folders: $image_folders\"\n\n\nexport EXP_NAME=\"your_experiment_name\" # TODO: change this to your own experiment name\nTASK_TYPE=\"robust\"\ncd ${REPO_HOME}/src/open-r1-multimodal\n\nexport DEBUG_MODE=\"true\" # Enable Debug if you want to see the rollout of model during RL\n# create the run directory and log file\nmkdir -p ${REPO_HOME}/runs/${EXP_NAME}/log\nexport LOG_PATH=\"${REPO_HOME}/runs/${EXP_NAME}/log/debug_log.$(date +%Y-%m-%d-%H-%M-%S).txt\"\n# MAX_STEPS=1200 # TODO: change this to your own max steps\n\n# export WANDB_DISABLED=true     \ntorchrun --nproc_per_node=\"8\" \\\n    --nnodes=\"1\" \\\n    --node_rank=\"0\" \\\n    --master_addr=\"127.0.0.1\" \\\n    --master_port=\"12352\" \\\n  src/open_r1/grpo_jsonl.py \\\n    --use_vllm False \\\n    --output_dir ${REPO_HOME}/checkpoints/rl/${EXP_NAME} \\\n    --resume_from_checkpoint True \\\n    --model_name_or_path $model_path \\\n    --data_file_paths $data_paths \\\n    --image_folders $image_folders \\\n    --is_reward_customized_from_vlm_module $is_reward_customized_from_vlm_module \\\n    --task_type $TASK_TYPE \\\n    --per_device_train_batch_size 8 \\\n    --gradient_accumulation_steps 2\\\n    --gradient_checkpointing true \\\n    --logging_steps 1 \\\n    --num_train_epochs 1 \\\n    --bf16 \\\n    --attn_implementation flash_attention_2 \\\n    --run_name ${EXP_NAME} \\\n    --data_seed 42 \\\n    --save_steps 100 \\\n    --num_generations 8 \\\n    --max_completion_length 2048 \\\n    --reward_funcs accuracy format type length\\\n    --beta 0.04 \\\n    --report_to none \\\n    --dataset-name this_is_not_used \\\n    --deepspeed ${REPO_HOME}/src/open-r1-multimodal/local_scripts/zero3.json \\\n    --freeze_vision_modules true\n    \n\necho \"Training completed for ${EXP_NAME}\"\n"
  },
  {
    "path": "setup.sh",
    "content": "# conda create -n vlm-r1 python=3.11 \n# conda activate vlm-r1\n\n# Install the packages in open-r1-multimodal .\ncd src/open-r1-multimodal # We edit the grpo.py and grpo_trainer.py in open-r1 repo.\n\n# Install torch first (required for flash-attn)\npip install torch>=2.5.1 torchvision\n\n# Install open-r1 package with dev dependencies\npip install -e \".[dev]\"\n\n# Additional modules\npip install wandb==0.18.3\npip install tensorboardx\npip install qwen_vl_utils\npip install babel\npip install python-Levenshtein\npip install matplotlib\npip install pycocotools\npip install openai\npip install httpx[socks]\n\n# Install flash-attn last (requires torch to be already installed)\npip install flash-attn --no-build-isolation"
  },
  {
    "path": "src/eval/test_od_r1.py",
    "content": "import re\nimport os\nimport json\nimport torch\nimport random\n\nfrom tqdm import tqdm\nfrom pprint import pprint\nfrom qwen_vl_utils import process_vision_info\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\n\n\ndef extract_bbox_answer(content):\n    pattern = r'```json(.*?)```'\n    json_match = re.search(pattern, content, re.DOTALL)\n    bbox_json = json_match.group(1).strip() if json_match else None\n\n    if bbox_json:\n        try:\n            bbox = json.loads(bbox_json)[0]['bbox_2d']\n            return bbox, False\n        except:\n            return [0, 0, 0, 0], False\n    else:\n        return [0, 0, 0, 0], False\n\n\ndef iou(box1, box2):\n    inter_x1 = max(box1[0], box2[0])\n    inter_y1 = max(box1[1], box2[1])\n    inter_x2 = min(box1[2] - 1, box2[2] - 1)\n    inter_y2 = min(box1[3] - 1, box2[3] - 1)\n    if inter_x1 < inter_x2 and inter_y1 < inter_y2:\n        inter = (inter_x2 - inter_x1 + 1) * (inter_y2 - inter_y1 + 1)\n    else:\n        inter = 0\n    union = (box1[2] - box1[0]) * (box1[3] - box1[1]) + (box2[2] - box2[0]) * (box2[3] - box2[1]) - inter\n    return float(inter) / union\n\n\ndef load_model(model_path, device_map):\n    #We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\n    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n        model_path,\n        torch_dtype=torch.bfloat16,\n        attn_implementation=\"flash_attention_2\",\n        device_map=device_map,\n    )\n\n    # default processer\n    processor = AutoProcessor.from_pretrained(model_path)\n\n    return model, processor\n\n\ndef eval_od_r1(\n    model_path, test_datasets, data_root, image_root, question_template, output_dir, batch_size=32, sample_num=500, seed=42, device_map=\"cuda:0\"\n):\n    random.seed(seed)\n    model, processor = load_model(model_path, device_map)\n\n    for ds in test_datasets:\n        print(f\"Processing {ds}...\")\n\n        ds_path = os.path.join(data_root, f\"{ds}.json\")\n        data = json.load(open(ds_path, \"r\"))\n        random.shuffle(data)\n        data = data[:sample_num]\n        messages = []\n\n        for x in data:\n            image_path = os.path.join(image_root, x['image'])\n            messages.append(\n                [\n                    {\n                        \"role\":\n                            \"user\",\n                        \"content\":\n                            [\n                                {\n                                    \"type\": \"image\",\n                                    \"image\": f\"file://{image_path}\"\n                                }, {\n                                    \"type\": \"text\",\n                                    \"text\": question_template.format(Question=x['normal_caption'])\n                                }\n                            ]\n                    }\n                ]\n            )\n\n        all_outputs = []  # List to store all answers\n\n        # Process data\n        for i in tqdm(range(0, len(messages), batch_size)):\n            batch_messages = messages[i:i + batch_size]\n\n            # Preparation for inference\n            text = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in batch_messages]\n\n            image_inputs, video_inputs = process_vision_info(batch_messages)\n            inputs = processor(\n                text=text,\n                
images=image_inputs,\n                videos=video_inputs,\n                padding=True,\n                return_tensors=\"pt\",\n            )\n            inputs = inputs.to(device_map)\n\n            # Inference: Generation of the output\n            generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=256, do_sample=False)\n\n            generated_ids_trimmed = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]\n            batch_output_text = processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)\n            all_outputs.extend(batch_output_text)\n\n        final_output = []\n        correct_number = 0\n\n        for input_example, model_output in zip(data, all_outputs):\n            original_output = model_output\n            ground_truth = input_example['solution']\n            ground_truth_normalized = input_example['normalized_solution']\n            model_answer, normalized = extract_bbox_answer(original_output)\n\n            # Count correct answers\n            correct = 0\n            if model_answer is not None:\n                iou_value = iou(model_answer, ground_truth_normalized if normalized else ground_truth)\n                if iou_value > 0.5:\n                    correct = 1\n            correct_number += correct\n\n            # Create a result dictionary for this example\n            result = {\n                \"question\": question_template.format(Question=input_example['normal_caption']),\n                \"ground_truth\": ground_truth if not normalized else ground_truth_normalized,\n                \"model_output\": original_output,\n                \"extracted_answer\": model_answer,\n                \"correct\": correct,\n                \"iou\": iou_value\n            }\n            final_output.append(result)\n\n        # Calculate and print accuracy\n        accuracy = correct_number / len(data) * 100\n        print(f\"\\nAccuracy of {ds}: {accuracy:.2f}%\")\n\n        # Save results to a JSON file\n        result_path = os.path.join(output_dir, f\"{os.path.basename(model_path)}\", f\"{ds}_od_r1.json\")\n        os.makedirs(os.path.dirname(result_path), exist_ok=True)\n        with open(result_path, \"w\") as f:\n            json.dump({\"accuracy\": accuracy, \"results\": final_output}, f, indent=2)\n\n        print(f\"Results saved to {result_path}\")\n        print('-' * 100)\n\n\nif __name__ == \"__main__\":\n    model_path = ''  # Add the path to the model\n    data_root = ''  # Add the data root\n    test_datasets = ['refcoco_val', 'refcocop_val', 'refcocog_val']  # modify the datasets\n    image_root = ''  # Add the image root\n    output_dir = 'logs'  # Add the output directory, default is logs\n    device_map = 'cuda:0'  # select the device, default is cuda:0\n\n    question_template = '{Question} First output the thinking process in <think> </think> tags and then output the final answer in <answer> </answer> tags. Output the final answer in JSON format.'  # modify the question template which must contain {Question}, {Question} will be replaced by the caption\n\n    eval_od_r1(\n        model_path=model_path,\n        data_root=data_root,\n        test_datasets=test_datasets,\n        image_root=image_root,\n        question_template=question_template,\n        output_dir=output_dir,\n        device_map=device_map\n    )\n"
  },
  {
    "path": "src/eval/test_rec_baseline.py",
    "content": "from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\nimport torch\nimport json\nfrom tqdm import tqdm\nimport re\nimport os\nfrom pprint import pprint\nimport random\n\n\nimport torch.distributed as dist\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nimport argparse\n\nimport warnings\n\nwarnings.filterwarnings(\"ignore\", category=UserWarning, module=\"transformers\")\n\ndef setup_distributed():\n    local_rank = int(os.environ.get(\"LOCAL_RANK\", 0))\n    torch.cuda.set_device(local_rank) \n    \n    dist.init_process_group(backend=\"nccl\")\n    \n    world_size = dist.get_world_size()\n    rank = dist.get_rank()\n    \n    print(f\"Process {rank}/{world_size} initialized on cuda:{local_rank}\")\n    return local_rank, world_size, rank\n\nlocal_rank, world_size, rank = setup_distributed()\ndevice = f\"cuda:{local_rank}\"\n\nsteps = 100\nMODEL_PATH=f\"/data10/shz/project/LLaMA-Factory/saves/qwen2_5_vl-3b/full/sft/checkpoint-{steps}\" \nOUTPUT_PATH=\"./logs/rec_results_{DATASET}_qwen2_5vl_3b_instruct_sft_{STEPS}.json\"\n\n# MODEL_PATH = \"/data10/shz/ckpt/vlm-r1-related/Qwen2.5-VL-3B-Instruct\"\n# OUTPUT_PATH = \"./logs/rec_results_{DATASET}_qwen2_5vl_3b_instruct_baseline_{STEPS}.json\"\n\nBSZ=4\nDATA_ROOT = \"/data10/shz/dataset/rec/rec_jsons_processed\"\n\nTEST_DATASETS = ['refcoco_val', 'refcocop_val', 'refcocog_val']\nIMAGE_ROOT = \"/data10/shz/dataset/coco\"\n\n# TEST_DATASETS = ['lisa_test']\n# IMAGE_ROOT = \"/data10/shz/dataset/lisa\"\n\n#We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n    MODEL_PATH,\n    torch_dtype=torch.bfloat16,\n    attn_implementation=\"flash_attention_2\",\n    device_map={\"\": local_rank}, \n)\n\n# default processer\nprocessor = AutoProcessor.from_pretrained(MODEL_PATH)\n\ndef extract_bbox_answer(content):\n    bbox_pattern = r'\\[(\\s*-?\\d*\\.?\\d+\\s*),\\s*(\\s*-?\\d*\\.?\\d+\\s*),\\s*(\\s*-?\\d*\\.?\\d+\\s*),\\s*(\\s*-?\\d*\\.?\\d+\\s*)\\]'\n    # bbox_pattern = r'\\[(-?\\d*\\.?\\d+),\\s*(-?\\d*\\.?\\d+),\\s*(-?\\d*\\.?\\d+),\\s*(-?\\d*\\.?\\d+)\\]'\n    bbox_match = re.search(bbox_pattern, content)\n\n    if bbox_match:\n        bbox = [float(bbox_match.group(1)), float(bbox_match.group(2)), float(bbox_match.group(3)), float(bbox_match.group(4))]\n        return bbox\n    return [0, 0, 0, 0]\n\ndef iou(box1, box2):\n    inter_x1 = max(box1[0], box2[0])\n    inter_y1 = max(box1[1], box2[1])\n    inter_x2 = min(box1[2]-1, box2[2]-1)\n    inter_y2 = min(box1[3]-1, box2[3]-1)\n    if inter_x1 < inter_x2 and inter_y1 < inter_y2:\n        inter = (inter_x2-inter_x1+1)*(inter_y2-inter_y1+1)\n    else:\n        inter = 0\n    union = (box1[2]-box1[0])*(box1[3]-box1[1]) + (box2[2]-box2[0])*(box2[3]-box2[1]) - inter\n    return float(inter)/union\n\nnum_samples = 2000\nfor ds in TEST_DATASETS:\n    if rank == 0:\n        print(f\"Processing {ds}...\")\n    ds_path = os.path.join(DATA_ROOT, f\"{ds}.json\")\n    data = json.load(open(ds_path, \"r\"))\n    random.seed(42)\n    random.shuffle(data)\n    data = data[:num_samples]\n    # QUESTION_TEMPLATE = \"{Question}\" if steps > 0 else \"{Question} Please provide the bounding box coordinate in JSON format.\"\n    QUESTION_TEMPLATE = \"{Question} Please provide the bounding box coordinate in JSON format.\"\n    \n    # Split data for distributed 
evaluation\n    per_rank_data = len(data) // world_size\n    start_idx = rank * per_rank_data\n    end_idx = start_idx + per_rank_data if rank < world_size - 1 else len(data)\n    rank_data = data[start_idx:end_idx]\n    \n    messages = []\n\n    for x in rank_data:\n        image_path = os.path.join(IMAGE_ROOT, x['image'])\n        message = [\n            # {\"role\": \"system\", \"content\": [{\"type\": \"text\", \"text\": SYSTEM_PROMPT}]},\n            {\n            \"role\": \"user\",\n            \"content\": [\n                {\n                    \"type\": \"image\", \n                    \"image\": f\"file://{image_path}\"\n                },\n                {\n                    \"type\": \"text\",\n                    \"text\": QUESTION_TEMPLATE.format(Question=x['problem'])\n                }\n            ]\n        }]\n        messages.append(message)\n\n    rank_outputs = [] # List to store answers for this rank\n    all_outputs = []  # List to store all answers\n\n    # Process data\n    for i in tqdm(range(0, len(messages), BSZ), disable=rank != 0):\n        batch_messages = messages[i:i + BSZ]\n    \n        # Preparation for inference\n        text = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in batch_messages]\n        \n        image_inputs, video_inputs = process_vision_info(batch_messages)\n        inputs = processor(\n            text=text,\n            images=image_inputs,\n            videos=video_inputs,\n            padding=True,\n            padding_side=\"left\",\n            return_tensors=\"pt\",\n        )\n        inputs = inputs.to(device)\n\n        # Inference: Generation of the output\n        generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=256, do_sample=False)\n        \n        generated_ids_trimmed = [\n            out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n        ]\n        batch_output_text = processor.batch_decode(\n            generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n        )\n        \n        rank_outputs.extend(batch_output_text)\n\n    print(f\"Rank {rank} has finished processing {len(rank_outputs)} examples\")\n\n    # Gather all outputs from all ranks\n    all_outputs = [None] * len(data)\n    rank_results = [(start_idx + i, output) for i, output in enumerate(rank_outputs)]\n\n    gathered_results = [None] * world_size\n    dist.all_gather_object(gathered_results, rank_results)\n    \n    assert gathered_results[-1][-1][0] == len(data) - 1\n\n    # The main process will collect all results\n    if rank == 0:\n        for results in gathered_results:\n            for idx, output in results:\n                assert idx < len(all_outputs)\n                all_outputs[idx] = output\n        assert all_outputs[-1] is not None\n\n        final_output = []\n        correct_number = 0\n\n        for input_example, model_output in zip(data, all_outputs):\n            original_output = model_output\n            ground_truth = input_example['solution']\n            model_answer = extract_bbox_answer(original_output)\n            \n            # Count correct answers\n            correct = 0\n            if model_answer is not None:\n                if iou(model_answer, ground_truth) > 0.5:\n                    correct = 1\n            correct_number += correct\n            \n            # Create a result dictionary for this example\n            result = {\n                'image': 
input_example['image'],\n                'question': input_example['problem'],\n                'ground_truth': ground_truth,\n                'model_output': original_output,\n                'extracted_answer': model_answer,\n                'correct': correct\n            }\n            final_output.append(result)\n\n        # Calculate and print accuracy\n        accuracy = correct_number / len(data) * 100\n        print(f\"\\nAccuracy of {ds}: {accuracy:.2f}%\")\n\n        # Save results to a JSON file\n        output_path = OUTPUT_PATH.format(DATASET=ds, STEPS=steps)\n        output_dir = os.path.dirname(output_path)\n        if not os.path.exists(output_dir):\n            os.makedirs(output_dir)\n        with open(output_path, \"w\") as f:\n            json.dump({\n                'accuracy': accuracy,\n                'results': final_output\n            }, f, indent=2)\n\n        print(f\"Results saved to {output_path}\")\n        print(\"-\"*100)\n    \n    # Synchronize all processes\n    dist.barrier()\n\n\n\n\n\n"
  },
  {
    "path": "src/eval/test_rec_r1.py",
    "content": "from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\nimport torch\nimport json\nfrom tqdm import tqdm\nimport re\nimport os\nfrom pprint import pprint\nimport random\n\n\nimport torch.distributed as dist\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nimport argparse\n\nimport warnings\n\nwarnings.filterwarnings(\"ignore\", category=UserWarning, module=\"transformers\")\n\ndef setup_distributed():\n    local_rank = int(os.environ.get(\"LOCAL_RANK\", 0))\n    torch.cuda.set_device(local_rank) \n    \n    dist.init_process_group(backend=\"nccl\")\n    \n    world_size = dist.get_world_size()\n    rank = dist.get_rank()\n    \n    return local_rank, world_size, rank\n\nlocal_rank, world_size, rank = setup_distributed()\ndevice = f\"cuda:{local_rank}\"\nprint(f\"Process {rank} using {device}\")\n\nmain_rank = 0\nsteps = 100\nif rank == main_rank:\n    print(\"Steps: \", steps)\n\nRUN_NAME = \"Qwen2.5-VL-3B-Instruct-rec\"\n\nMODEL_PATH=f\"/training/shz/project/vlm-r1/VLM-R1/checkpoints/rl/{RUN_NAME}/checkpoint-{steps}\"\nOUTPUT_PATH=\"./logs/rec_results_{DATASET}_{RUN_NAME}_{STEPS}.json\"\n\nBSZ=2   \nDATA_ROOT = \"/training/shz/dataset/vlm-r1/rec_jsons_processed\"\n\n# TEST_DATASETS = ['refcoco_val', 'refcocop_val', 'refcocog_val']\n# IMAGE_ROOT = \"/training/shz/dataset/coco\"\n\n\nTEST_DATASETS = ['lisa_test']\nIMAGE_ROOT = \"/training/shz/dataset/lisa\"\n\n\n#We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n    MODEL_PATH,\n    torch_dtype=torch.bfloat16,\n    attn_implementation=\"flash_attention_2\",\n    device_map={\"\": local_rank}, \n)\n\n# default processer\nprocessor = AutoProcessor.from_pretrained(MODEL_PATH)\n\ndef extract_bbox_answer(content):\n    # Try to find the bbox within <answer> tags, if can not find, return [0, 0, 0, 0]\n    answer_tag_pattern = r'<answer>(.*?)</answer>'\n    bbox_pattern = r'\\{.*\\[(\\d+),\\s*(\\d+),\\s*(\\d+),\\s*(\\d+)]\\s*.*\\}'\n    content_answer_match = re.search(answer_tag_pattern, content, re.DOTALL)\n    if content_answer_match:\n        content_answer = content_answer_match.group(1).strip()\n        bbox_match = re.search(bbox_pattern, content_answer, re.DOTALL)\n        if bbox_match:\n            bbox = [int(bbox_match.group(1)), int(bbox_match.group(2)), int(bbox_match.group(3)), int(bbox_match.group(4))]\n            return bbox\n    return [0, 0, 0, 0]\n\ndef iou(box1, box2):\n    inter_x1 = max(box1[0], box2[0])\n    inter_y1 = max(box1[1], box2[1])\n    inter_x2 = min(box1[2]-1, box2[2]-1)\n    inter_y2 = min(box1[3]-1, box2[3]-1)\n    if inter_x1 < inter_x2 and inter_y1 < inter_y2:\n        inter = (inter_x2-inter_x1+1)*(inter_y2-inter_y1+1)\n    else:\n        inter = 0\n    union = (box1[2]-box1[0])*(box1[3]-box1[1]) + (box2[2]-box2[0])*(box2[3]-box2[1]) - inter\n    return float(inter)/union\n\nnum_samples = 2000\nfor ds in TEST_DATASETS:\n    if rank == 0:\n        print(f\"Processing {ds}...\")\n    ds_path = os.path.join(DATA_ROOT, f\"{ds}.json\")\n    data = json.load(open(ds_path, \"r\"))\n    random.seed(42)\n    random.shuffle(data)\n    data = data[:num_samples]\n\n    QUESTION_TEMPLATE = \"{Question} First output the thinking process in <think> </think> tags and then output the final answer in <answer> </answer> tags. 
Output the final answer in JSON format.\"\n\n    # Split data for distributed evaluation\n    per_rank_data = len(data) // world_size\n    start_idx = rank * per_rank_data\n    end_idx = start_idx + per_rank_data if rank < world_size - 1 else len(data)\n    rank_data = data[start_idx:end_idx]\n\n    messages = []\n\n    for x in rank_data:\n        image_path = os.path.join(IMAGE_ROOT, x['image'])\n        message = [\n            # {\"role\": \"system\", \"content\": [{\"type\": \"text\", \"text\": SYSTEM_PROMPT}]},\n            {\n            \"role\": \"user\",\n            \"content\": [\n                {\n                    \"type\": \"image\", \n                    \"image\": f\"file://{image_path}\"\n                },\n                {\n                    \"type\": \"text\",\n                    \"text\": QUESTION_TEMPLATE.format(Question=x['problem'])\n                }\n            ]\n        }]\n        messages.append(message)\n\n    rank_outputs = [] # List to store answers for this rank\n    all_outputs = []  # List to store all answers\n\n    # Process data\n    for i in tqdm(range(0, len(messages), BSZ), disable=rank != main_rank):\n        batch_messages = messages[i:i + BSZ]\n    \n        # Preparation for inference\n        text = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in batch_messages]\n        \n        image_inputs, video_inputs = process_vision_info(batch_messages)\n        inputs = processor(\n            text=text,\n            images=image_inputs,\n            videos=video_inputs,\n            padding=True,\n            padding_side=\"left\",\n            return_tensors=\"pt\",\n        )\n        inputs = inputs.to(device)\n\n        # Inference: Generation of the output\n        generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=256, do_sample=False)\n        \n        generated_ids_trimmed = [\n            out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n        ]\n        batch_output_text = processor.batch_decode(\n            generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n        )\n        \n        rank_outputs.extend(batch_output_text)\n\n    print(f\"Rank {rank} has finished processing {len(rank_outputs)} examples\")\n\n    # Gather all outputs from all ranks\n    all_outputs = [None] * len(data)\n    rank_results = [(start_idx + i, output) for i, output in enumerate(rank_outputs)]\n\n    gathered_results = [None] * world_size\n    dist.all_gather_object(gathered_results, rank_results)\n    \n    assert gathered_results[-1][-1][0] == len(data) - 1\n\n    # The main process will collect all results\n    if rank == main_rank:\n        for results in gathered_results:\n            for idx, output in results:\n                assert idx < len(all_outputs)\n                all_outputs[idx] = output\n        assert all_outputs[-1] is not None\n\n        final_output = []\n        correct_number = 0\n\n        for input_example, model_output in zip(data, all_outputs):\n            original_output = model_output\n            ground_truth = input_example['solution']\n            model_answer = extract_bbox_answer(original_output)\n            \n            # Count correct answers\n            correct = 0\n            if model_answer is not None:\n                if iou(model_answer, ground_truth) > 0.5:\n                    correct = 1\n            correct_number += correct\n            \n            # Create a result 
dictionary for this example\n            result = {\n                'image': input_example['image'],\n                'question': input_example['problem'],\n                'ground_truth': ground_truth,\n                'model_output': original_output,\n                'extracted_answer': model_answer,\n                'correct': correct\n            }\n            final_output.append(result)\n\n        # Calculate and print accuracy\n        accuracy = correct_number / len(data) * 100\n        print(f\"\\nAccuracy of {ds}: {accuracy:.2f}%\")\n\n        # Save results to a JSON file\n        output_path = OUTPUT_PATH.format(DATASET=ds, RUN_NAME=RUN_NAME, STEPS=steps)\n        output_dir = os.path.dirname(output_path)\n        if not os.path.exists(output_dir):\n            os.makedirs(output_dir)\n        with open(output_path, \"w\") as f:\n            json.dump({\n                'accuracy': accuracy,\n                'results': final_output\n            }, f, indent=2)\n\n        print(f\"Results saved to {output_path}\")\n        print(\"-\"*100)\n\n    # Synchronize all processes\n    dist.barrier()\n\n\n\n\n\n"
  },
  {
    "path": "src/eval/test_rec_r1_internvl.py",
    "content": "import torch\nimport json\nfrom tqdm import tqdm\nimport re\nimport os\nfrom pprint import pprint\nimport random\nfrom transformers import AutoTokenizer, AutoProcessor, AutoModelForCausalLM\nfrom open_r1.vlm_modules.internvl_module import InvernVLModule\n\nimport torch.distributed as dist\nfrom torch.nn.parallel import DistributedDataParallel as DDP\n\nimport warnings\n\nwarnings.filterwarnings(\"ignore\", category=UserWarning, module=\"transformers\")\n\ndef setup_distributed():\n    local_rank = int(os.environ.get(\"LOCAL_RANK\", 0))\n    torch.cuda.set_device(local_rank) \n    \n    dist.init_process_group(backend=\"nccl\")\n    \n    world_size = dist.get_world_size()\n    rank = dist.get_rank()\n    \n    return local_rank, world_size, rank\n\nlocal_rank, world_size, rank = setup_distributed()\ndevice = f\"cuda:{local_rank}\"\nprint(f\"Process {rank} using {device}\")\n\nmain_rank = 0\nsteps = 300\nif rank == main_rank:\n    print(\"Steps: \", steps)\n\nRUN_NAME = \"InternVL2_5-4B_MPO-rec\"\n\nMODEL_PATH=f\"/training/shz/project/vlm-r1/VLM-R1/checkpoints/rl/{RUN_NAME}/checkpoint-{steps}\" \nOUTPUT_PATH=\"./logs/rec_results_{DATASET}_{RUN_NAME}_{STEPS}.json\"\n\nBSZ=4\nDATA_ROOT = \"/training/shz/dataset/vlm-r1/rec_jsons_internvl\"\n\n# TEST_DATASETS = ['refcoco_val', 'refcocop_val', 'refcocog_val']\n# IMAGE_ROOT = \"/training/shz/dataset/coco\"\n\nTEST_DATASETS = ['lisa_test']\nIMAGE_ROOT = \"/training/shz/dataset/lisa\"\n\nrandom.seed(42)\n\nvlm_module = InvernVLModule()\n\nmodel = vlm_module.get_model_class(MODEL_PATH, {}).from_pretrained(\n    MODEL_PATH,\n    torch_dtype=torch.bfloat16,\n    device_map={\"\": local_rank},\n    trust_remote_code=True,\n    use_flash_attn=True,\n)\n\ntokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)\ntokenizer.pad_token_id = tokenizer.eos_token_id\nmodel.generation_config.pad_token_id = tokenizer.pad_token_id\nvlm_module.post_model_init(model, tokenizer)\n\n\ndef extract_bbox_answer(content):\n    # Try to find the bbox within <answer> tags, if can not find, return [0, 0, 0, 0]\n    answer_tag_pattern = r'<answer>(.*?)</answer>'\n    bbox_pattern = r'\\[(\\d+),\\s*(\\d+),\\s*(\\d+),\\s*(\\d+)]'\n    content_answer_match = re.search(answer_tag_pattern, content, re.DOTALL)\n    if content_answer_match:\n        content_answer = content_answer_match.group(1).strip()\n        bbox_match = re.search(bbox_pattern, content_answer, re.DOTALL)\n        if bbox_match:\n            bbox = [int(bbox_match.group(1)), int(bbox_match.group(2)), int(bbox_match.group(3)), int(bbox_match.group(4))]\n            return bbox\n    return [0, 0, 0, 0]\n\ndef iou(box1, box2):\n    inter_x1 = max(box1[0], box2[0])\n    inter_y1 = max(box1[1], box2[1])\n    inter_x2 = min(box1[2]-1, box2[2]-1)\n    inter_y2 = min(box1[3]-1, box2[3]-1)\n    if inter_x1 < inter_x2 and inter_y1 < inter_y2:\n        inter = (inter_x2-inter_x1+1)*(inter_y2-inter_y1+1)\n    else:\n        inter = 0\n    union = (box1[2]-box1[0])*(box1[3]-box1[1]) + (box2[2]-box2[0])*(box2[3]-box2[1]) - inter\n    return float(inter)/union\n\nfrom PIL import Image\ndef process_vision_info(batch_messages):\n    images = []\n    for msg in batch_messages:\n        image_path = msg[0]['content'][0]['image'].replace(\"file://\", \"\")\n        image = Image.open(image_path)\n        images.append(image)\n    return images\n\n\nsample_num = 2000\ntokenizer.max_anyres_num = 12\nfor ds in TEST_DATASETS:\n    if rank == main_rank:\n        print(f\"Processing {ds}...\")\n    ds_path = 
os.path.join(DATA_ROOT, f\"{ds}.json\")\n    data = json.load(open(ds_path, \"r\"))\n    random.seed(42)\n    random.shuffle(data)\n    data = data[:sample_num]\n    QUESTION_TEMPLATE = \"{Question} First output the thinking process in <think> </think> tags and then output the final answer in <answer> </answer> tags.\"\n\n    # Split data for distributed evaluation\n    per_rank_data = len(data) // world_size\n    start_idx = rank * per_rank_data\n    end_idx = start_idx + per_rank_data if rank < world_size - 1 else len(data)\n    rank_data = data[start_idx:end_idx]\n\n    messages = []\n    for x in rank_data:\n        image_path = os.path.join(IMAGE_ROOT, x['image'])\n        message = [\n            # {\"role\": \"system\", \"content\": [{\"type\": \"text\", \"text\": SYSTEM_PROMPT}]},\n            {\n            \"role\": \"user\",\n            \"content\": [\n                {\n                    \"type\": \"image\", \n                    \"image\": f\"file://{image_path}\"\n                },\n                {\n                    \"type\": \"text\",\n                    \"text\": QUESTION_TEMPLATE.format(Question=x['problem'])\n                }\n            ]\n        }]\n        messages.append(message)\n    \n    rank_outputs = [] # List to store answers for this rank\n    all_outputs = []  # List to store all answers\n\n    # Process data\n    for i in tqdm(range(0, len(messages), BSZ), disable=rank != main_rank):\n        batch_messages = messages[i:i + BSZ]\n        prompts = vlm_module.prepare_prompt(None, [{\"prompt\": msg} for msg in batch_messages])\n\n        images = process_vision_info(batch_messages)\n\n        model_inputs = vlm_module.prepare_model_inputs(tokenizer, prompts, images)\n        model_inputs['pixel_values'] = model_inputs['pixel_values'].to(torch.bfloat16)\n        model_inputs = model_inputs.to(device)\n\n        outputs = model.generate(**{k:v for k,v in model_inputs.items() if k not in vlm_module.get_non_generate_params()}, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id)\n        batch_output_text = tokenizer.batch_decode(\n            outputs, skip_special_tokens=True, clean_up_tokenization_spaces=False\n        )\n        rank_outputs.extend(batch_output_text)\n    \n    print(f\"Rank {rank} has finished processing {len(rank_outputs)} examples\")\n\n    # Gather all outputs from all ranks\n    all_outputs = [None] * len(data)\n    rank_results = [(start_idx + i, output) for i, output in enumerate(rank_outputs)]\n\n    gathered_results = [None] * world_size\n    dist.all_gather_object(gathered_results, rank_results)\n    \n    assert gathered_results[-1][-1][0] == len(data) - 1\n\n    # The main process will collect all results\n    if rank == main_rank:\n        for results in gathered_results:\n            for idx, output in results:\n                assert idx < len(all_outputs)\n                all_outputs[idx] = output\n        assert all_outputs[-1] is not None\n\n        final_output = []\n        correct_number = 0\n\n        for input_example, model_output in zip(data, all_outputs):\n            original_output = model_output\n            ground_truth = input_example['solution']\n            model_answer = extract_bbox_answer(original_output)\n            \n            # Count correct answers\n            correct = 0\n            if model_answer is not None and iou(model_answer, ground_truth) > 0.5:\n                correct = 1\n            correct_number += correct\n            \n            # Create a result 
dictionary for this example\n            result = {\n                'image': input_example['image'],\n                'question': input_example['problem'],\n                'ground_truth': ground_truth,\n                'model_output': original_output,\n                'extracted_answer': model_answer,\n                'correct': correct\n            }\n            final_output.append(result)\n\n        # Calculate and print accuracy\n        accuracy = correct_number / len(data) * 100\n        print(f\"\\nAccuracy of {ds}: {accuracy:.2f}%\")\n\n        # Save results to a JSON file\n        output_path = OUTPUT_PATH.format(DATASET=ds, RUN_NAME=RUN_NAME, STEPS=steps)\n        output_dir = os.path.dirname(output_path)\n        if not os.path.exists(output_dir):\n            os.makedirs(output_dir)\n        with open(output_path, \"w\") as f:\n            json.dump({\n                'accuracy': accuracy,\n                'results': final_output\n            }, f, indent=4)\n\n        print(f\"Results saved to {output_path}\")\n        print(\"-\"*100)\n\n    # Synchronize all processes\n    dist.barrier()\n\n\n\n\n\n"
  },
  {
    "path": "src/open-r1-multimodal/.gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\ncover/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\n.pybuilder/\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n#   For a library or package, you might want to ignore these files since the code is\n#   intended to run in multiple environments; otherwise, check them in:\n# .python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# UV\n#   Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.\n#   This is especially recommended for binary packages to ensure reproducibility, and is more\n#   commonly ignored for libraries.\n#uv.lock\n\n# poetry\n#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.\n#   This is especially recommended for binary packages to ensure reproducibility, and is more\n#   commonly ignored for libraries.\n#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control\n#poetry.lock\n\n# pdm\n#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.\n#pdm.lock\n#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it\n#   in version control.\n#   https://pdm.fming.dev/latest/usage/project/#working-with-version-control\n.pdm.toml\n.pdm-python\n.pdm-build/\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# pytype static type analyzer\n.pytype/\n\n# Cython debug symbols\ncython_debug/\n\n# PyCharm\n#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can\n#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore\n#  and can be added to the global gitignore or merged into this file.  
For a more nuclear\n#  option (not recommended) you can uncomment the following to ignore the entire idea folder.\n#.idea/\n\n# PyPI configuration file\n.pypirc\n\n# Temp folders\ndata/\nwandb/\nscripts/\ncheckpoints/\n.vscode/"
  },
  {
    "path": "src/open-r1-multimodal/LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "src/open-r1-multimodal/Makefile",
    "content": ".PHONY: style quality\n\n# make sure to test the local checkout in scripts and not the pre-installed one (don't use quotes!)\nexport PYTHONPATH = src\n\ncheck_dirs := src\n\nstyle:\n\tblack --line-length 119 --target-version py310 $(check_dirs) setup.py\n\tisort $(check_dirs) setup.py\n\nquality:\n\tblack --check --line-length 119 --target-version py310 $(check_dirs) setup.py\n\tisort --check-only $(check_dirs) setup.py\n\tflake8 --max-line-length 119 $(check_dirs) setup.py\n\n\n# Evaluation\n\nevaluate:\n"
  },
  {
    "path": "src/open-r1-multimodal/configs/ddp.yaml",
    "content": "compute_environment: LOCAL_MACHINE\ndebug: false\ndistributed_type: MULTI_GPU\ndowncast_bf16: 'no'\ngpu_ids: all\nmachine_rank: 0\nmain_training_function: main\nmixed_precision: bf16\nnum_machines: 1\nnum_processes: 8\nrdzv_backend: static\nsame_network: true\ntpu_env: []\ntpu_use_cluster: false\ntpu_use_sudo: false\nuse_cpu: false\n"
  },
  {
    "path": "src/open-r1-multimodal/configs/zero2.yaml",
    "content": "compute_environment: LOCAL_MACHINE\ndebug: false\ndeepspeed_config:\n  deepspeed_multinode_launcher: standard\n  offload_optimizer_device: none\n  offload_param_device: none\n  zero3_init_flag: false\n  zero_stage: 2\ndistributed_type: DEEPSPEED\ndowncast_bf16: 'no'\nmachine_rank: 0\nmain_training_function: main\nmixed_precision: bf16\nnum_machines: 1\nnum_processes: 8\nrdzv_backend: static\nsame_network: true\ntpu_env: []\ntpu_use_cluster: false\ntpu_use_sudo: false\nuse_cpu: false"
  },
  {
    "path": "src/open-r1-multimodal/configs/zero3.yaml",
    "content": "compute_environment: LOCAL_MACHINE\ndebug: false\ndeepspeed_config:\n  deepspeed_multinode_launcher: standard\n  offload_optimizer_device: cpu\n  offload_param_device: cpu\n  zero3_init_flag: true\n  zero3_save_16bit_model: true\n  zero_stage: 3\ndistributed_type: DEEPSPEED\ndowncast_bf16: 'no'\nmachine_rank: 0\nmain_training_function: main\nmixed_precision: bf16\nnum_machines: 1\nnum_processes: 2\nrdzv_backend: static\nsame_network: true\ntpu_env: []\ntpu_use_cluster: false\ntpu_use_sudo: false\nuse_cpu: false\n"
  },
  {
    "path": "src/open-r1-multimodal/local_scripts/zero2.json",
    "content": "{\n    \"fp16\": {\n        \"enabled\": \"auto\",\n        \"loss_scale\": 0,\n        \"loss_scale_window\": 1000,\n        \"initial_scale_power\": 16,\n        \"hysteresis\": 2,\n        \"min_loss_scale\": 1\n    },\n    \"bf16\": {\n        \"enabled\": \"auto\"\n    },\n    \"optimizer\": {\n        \"type\": \"AdamW\",\n        \"params\": {\n            \"lr\": \"auto\",\n            \"betas\": \"auto\",\n            \"eps\": \"auto\",\n            \"weight_decay\": \"auto\"\n        }\n    },\n    \"zero_optimization\": {\n        \"stage\": 2,\n        \"offload_optimizer\": {\n            \"device\": \"none\",\n            \"pin_memory\": true\n        },\n        \"allgather_partitions\": true,\n        \"allgather_bucket_size\": 2e8,\n        \"overlap_comm\": false,\n        \"reduce_scatter\": true,\n        \"reduce_bucket_size\": 2e8,\n        \"contiguous_gradients\": true\n    },\n    \"gradient_accumulation_steps\": \"auto\",\n    \"gradient_clipping\": \"auto\",\n    \"steps_per_print\": 100,\n    \"train_batch_size\": \"auto\",\n    \"train_micro_batch_size_per_gpu\": \"auto\",\n    \"wall_clock_breakdown\": false\n}"
  },
  {
    "path": "src/open-r1-multimodal/local_scripts/zero3.json",
    "content": "{\n    \"fp16\": {\n        \"enabled\": \"auto\",\n        \"loss_scale\": 0,\n        \"loss_scale_window\": 1000,\n        \"initial_scale_power\": 16,\n        \"hysteresis\": 2,\n        \"min_loss_scale\": 1\n    },\n    \"bf16\": {\n        \"enabled\": \"auto\"\n    },\n\n    \"zero_optimization\": {\n        \"stage\": 3,\n        \"offload_optimizer\": {\n            \"device\": \"cpu\",\n            \"pin_memory\": true\n        },\n        \"offload_param\": {\n            \"device\": \"cpu\",\n            \"pin_memory\": true\n        },\n        \"overlap_comm\": true,\n        \"contiguous_gradients\": true,\n        \"sub_group_size\": 1e9,\n        \"reduce_bucket_size\": \"auto\",\n        \"stage3_prefetch_bucket_size\": \"auto\",\n        \"stage3_param_persistence_threshold\": \"auto\",\n        \"stage3_max_live_parameters\": 1e9,\n        \"stage3_max_reuse_distance\": 1e9,\n        \"stage3_gather_16bit_weights_on_model_save\": true\n    },\n\n    \"gradient_accumulation_steps\": \"auto\",\n    \"gradient_clipping\": \"auto\",\n    \"steps_per_print\": 100,\n    \"train_batch_size\": \"auto\",\n    \"train_micro_batch_size_per_gpu\": \"auto\",\n    \"wall_clock_breakdown\": false\n}"
  },
  {
    "path": "src/open-r1-multimodal/local_scripts/zero3.yaml",
    "content": "compute_environment: LOCAL_MACHINE\ndebug: false\ndeepspeed_config:\n  deepspeed_multinode_launcher: standard\n  offload_optimizer_device: none\n  offload_param_device: none\n  zero3_init_flag: true\n  zero3_save_16bit_model: true\n  zero_stage: 3\ndistributed_type: DEEPSPEED\ndowncast_bf16: 'no'\nmachine_rank: 0\nmain_training_function: main\nmixed_precision: bf16\nnum_machines: 1\nnum_processes: 8\nrdzv_backend: static\nsame_network: true\ntpu_env: []\ntpu_use_cluster: false\ntpu_use_sudo: false\nuse_cpu: false\n"
  },
  {
    "path": "src/open-r1-multimodal/local_scripts/zero3_offload.json",
    "content": "{\n    \"fp16\": {\n        \"enabled\": \"auto\",\n        \"loss_scale\": 0,\n        \"loss_scale_window\": 1000,\n        \"initial_scale_power\": 16,\n        \"hysteresis\": 2,\n        \"min_loss_scale\": 1\n    },\n    \"bf16\": {\n        \"enabled\": \"auto\"\n    },\n    \"optimizer\": {\n        \"type\": \"AdamW\",\n        \"params\": {\n            \"lr\": \"auto\",\n            \"betas\": \"auto\",\n            \"eps\": \"auto\",\n            \"weight_decay\": \"auto\"\n        }\n    },\n    \"zero_optimization\": {\n        \"stage\": 3,\n        \"offload_optimizer\": {\n            \"device\": \"cpu\",\n            \"pin_memory\": true\n        },\n        \"offload_param\": {\n            \"device\": \"cpu\",\n            \"pin_memory\": true\n        },\n        \"overlap_comm\": true,\n        \"contiguous_gradients\": true,\n        \"sub_group_size\": 1e9,\n        \"reduce_bucket_size\": \"auto\",\n        \"stage3_prefetch_bucket_size\": \"auto\",\n        \"stage3_param_persistence_threshold\": \"auto\",\n        \"stage3_max_live_parameters\": 1e9,\n        \"stage3_max_reuse_distance\": 1e9,\n        \"gather_16bit_weights_on_model_save\": true\n    },\n    \"gradient_accumulation_steps\": \"auto\",\n    \"gradient_clipping\": \"auto\",\n    \"train_batch_size\": \"auto\",\n    \"train_micro_batch_size_per_gpu\": \"auto\",\n    \"steps_per_print\": 1e5,\n    \"wall_clock_breakdown\": false\n}"
  },
  {
    "path": "src/open-r1-multimodal/local_scripts/zero_stage2_config.json",
    "content": "{\n  \"zero_optimization\": {\n    \"stage\": 2,\n    \"allgather_partitions\": true,\n    \"allgather_bucket_size\": 1e8,\n    \"overlap_comm\": true,\n    \"reduce_scatter\": true,\n    \"reduce_bucket_size\": 1e8,\n    \"contiguous_gradients\": true\n  },\n  \"fp16\": {\n    \"enabled\": \"auto\",\n    \"auto_cast\": true,\n    \"loss_scale\": 0,\n    \"initial_scale_power\": 32,\n    \"loss_scale_window\": 1000,\n    \"hysteresis\": 2,\n    \"min_loss_scale\": 1\n  },\n  \"bf16\": {\n    \"enabled\": \"auto\"\n  },\n  \"gradient_accumulation_steps\": \"auto\",\n  \"gradient_clipping\": \"auto\",\n  \"steps_per_print\": 2000,\n  \"train_batch_size\": \"auto\",\n  \"train_micro_batch_size_per_gpu\": \"auto\",\n  \"wall_clock_breakdown\": false\n}\n"
  },
  {
    "path": "src/open-r1-multimodal/setup.cfg",
    "content": "[isort]\ndefault_section = FIRSTPARTY\nensure_newline_before_comments = True\nforce_grid_wrap = 0\ninclude_trailing_comma = True\nknown_first_party = open_r1\nknown_third_party =\n    transformers\n    datasets\n    fugashi\n    git\n    h5py\n    matplotlib\n    nltk\n    numpy\n    packaging\n    pandas\n    psutil\n    pytest\n    rouge_score\n    sacrebleu\n    seqeval\n    sklearn\n    streamlit\n    torch\n    tqdm\n\nline_length = 119\nlines_after_imports = 2\nmulti_line_output = 3\nuse_parentheses = True\n\n[flake8]\nignore = E203, E501, E741, W503, W605\nmax-line-length = 119\nper-file-ignores =\n    # imported but unused\n    __init__.py: F401\n\n[tool:pytest]\ndoctest_optionflags=NUMBER NORMALIZE_WHITESPACE ELLIPSIS"
  },
  {
    "path": "src/open-r1-multimodal/setup.py",
    "content": "# Copyright 2025 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Adapted from huggingface/transformers: https://github.com/huggingface/transformers/blob/21a2d900eceeded7be9edc445b56877b95eda4ca/setup.py\n\n\nimport re\nimport shutil\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\n\n# Remove stale open_r1.egg-info directory to avoid https://github.com/pypa/pip/issues/5466\nstale_egg_info = Path(__file__).parent / \"open_r1.egg-info\"\nif stale_egg_info.exists():\n    print(\n        (\n            \"Warning: {} exists.\\n\\n\"\n            \"If you recently updated open_r1, this is expected,\\n\"\n            \"but it may prevent open_r1 from installing in editable mode.\\n\\n\"\n            \"This directory is automatically generated by Python's packaging tools.\\n\"\n            \"I will remove it now.\\n\\n\"\n            \"See https://github.com/pypa/pip/issues/5466 for details.\\n\"\n        ).format(stale_egg_info)\n    )\n    shutil.rmtree(stale_egg_info)\n\n\n# IMPORTANT: all dependencies should be listed here with their version requirements, if any.\n#   * If a dependency is fast-moving (e.g. transformers), pin to the exact version\n_deps = [\n    \"accelerate>=1.2.1\",\n    \"bitsandbytes>=0.43.0\",\n    \"black>=24.4.2\",\n    \"datasets>=3.2.0\",\n    \"deepspeed==0.15.4\",\n    \"distilabel[vllm,ray,openai]>=1.5.2\",\n    \"einops>=0.8.0\",\n    \"flake8>=6.0.0\",\n    \"hf_transfer>=0.1.4\",\n    \"huggingface-hub[cli]>=0.19.2,<1.0\",\n    \"isort>=5.12.0\",\n    \"liger_kernel==0.5.2\",\n    # \"lighteval @ git+https://github.com/huggingface/lighteval.git@4f381b352c0e467b5870a97d41cb66b487a2c503#egg=lighteval[math]\",\n    \"math-verify\",  # Used for math verification in grpo\n    \"packaging>=23.0\",\n    \"parameterized>=0.9.0\",\n    \"pytest\",\n    \"safetensors>=0.3.3\",\n    \"sentencepiece>=0.1.99\",\n    \"torch>=2.5.1\",\n    \"transformers==4.49.0\",\n    \"trl @ git+https://github.com/huggingface/trl.git@main\",\n    \"vllm==0.6.6.post1\",\n    \"wandb>=0.19.1\",\n    \"pillow\",\n]\n\n# this is a lookup table with items like:\n#\n# tokenizers: \"tokenizers==0.9.4\"\n# packaging: \"packaging\"\n#\n# some of the values are versioned whereas others aren't.\ndeps = {b: a for a, b in (re.findall(r\"^(([^!=<>~ \\[\\]]+)(?:\\[[^\\]]+\\])?(?:[!=<>~ ].*)?$)\", x)[0] for x in _deps)}\n\n\ndef deps_list(*pkgs):\n    return [deps[pkg] for pkg in pkgs]\n\n\nextras = {}\nextras[\"tests\"] = deps_list(\"pytest\", \"parameterized\")\nextras[\"torch\"] = deps_list(\"torch\")\nextras[\"quality\"] = deps_list(\"black\", \"isort\", \"flake8\")\n# extras[\"eval\"] = deps_list(\"lighteval\", \"math-verify\")\nextras[\"eval\"] = deps_list(\"math-verify\")\nextras[\"dev\"] = extras[\"quality\"] + extras[\"tests\"] + extras[\"eval\"]\n\n# core dependencies shared across the whole project - keep this to a bare minimum :)\ninstall_requires = [\n    deps[\"accelerate\"],\n    
deps[\"bitsandbytes\"],\n    deps[\"einops\"],\n    deps[\"datasets\"],\n    deps[\"deepspeed\"],\n    deps[\"hf_transfer\"],\n    deps[\"huggingface-hub\"],\n    deps[\"liger_kernel\"],\n    deps[\"packaging\"],  # utilities from PyPA to e.g., compare versions\n    deps[\"safetensors\"],\n    deps[\"sentencepiece\"],\n    deps[\"transformers\"],\n    deps[\"trl\"],\n]\n\nsetup(\n    name=\"open-r1\",\n    version=\"0.1.0.dev0\",  # expected format is one of x.y.z.dev0, or x.y.z.rc1 or x.y.z (no to dashes, yes to dots)\n    author=\"The Hugging Face team (past and future)\",\n    author_email=\"lewis@huggingface.co\",\n    description=\"Open R1\",\n    # long_description=open(\"README.md\", \"r\", encoding=\"utf-8\").read(),\n    long_description_content_type=\"text/markdown\",\n    keywords=\"llm inference-time compute reasoning\",\n    license=\"Apache\",\n    url=\"https://github.com/huggingface/open-r1\",\n    package_dir={\"\": \"src\"},\n    packages=find_packages(\"src\"),\n    zip_safe=False,\n    extras_require=extras,\n    python_requires=\">=3.10.9\",\n    install_requires=install_requires,\n    classifiers=[\n        \"Development Status :: 3 - Alpha\",\n        \"Intended Audience :: Developers\",\n        \"Intended Audience :: Education\",\n        \"Intended Audience :: Science/Research\",\n        \"License :: OSI Approved :: Apache Software License\",\n        \"Operating System :: OS Independent\",\n        \"Programming Language :: Python :: 3\",\n        \"Programming Language :: Python :: 3.10\",\n        \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n    ],\n)\n"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/__init__.py",
    "content": ""
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/configs.py",
    "content": "# coding=utf-8\n# Copyright 2025 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nimport trl\n\n\n# TODO: add the shared options with a mixin to reduce code duplication\n@dataclass\nclass GRPOConfig(trl.GRPOConfig):\n    \"\"\"\n    args for callbacks, benchmarks etc\n    \"\"\"\n\n    benchmarks: list[str] = field(\n        default_factory=lambda: [], metadata={\"help\": \"The benchmarks to run after training.\"}\n    )\n    callbacks: list[str] = field(\n        default_factory=lambda: [], metadata={\"help\": \"The callbacks to run during training.\"}\n    )\n    system_prompt: Optional[str] = field(\n        default=None, metadata={\"help\": \"The optional system prompt to use for benchmarking.\"}\n    )\n    hub_model_revision: Optional[str] = field(\n        default=\"main\", metadata={\"help\": \"The Hub model branch to push the model to.\"}\n    )\n    overwrite_hub_revision: bool = field(default=False, metadata={\"help\": \"Whether to overwrite the Hub revision.\"})\n    push_to_hub_revision: bool = field(default=False, metadata={\"help\": \"Whether to push to a Hub revision/branch.\"})\n    wandb_entity: Optional[str] = field(\n        default=None,\n        metadata={\"help\": (\"The entity to store runs under.\")},\n    )\n    wandb_project: Optional[str] = field(\n        default=None,\n        metadata={\"help\": (\"The project to store runs under.\")},\n    )\n\n\n@dataclass\nclass SFTConfig(trl.SFTConfig):\n    \"\"\"\n    args for callbacks, benchmarks etc\n    \"\"\"\n\n    benchmarks: list[str] = field(\n        default_factory=lambda: [], metadata={\"help\": \"The benchmarks to run after training.\"}\n    )\n    callbacks: list[str] = field(\n        default_factory=lambda: [], metadata={\"help\": \"The callbacks to run during training.\"}\n    )\n    system_prompt: Optional[str] = field(\n        default=None,\n        metadata={\"help\": \"The optional system prompt to use for benchmarking.\"},\n    )\n    hub_model_revision: Optional[str] = field(\n        default=\"main\",\n        metadata={\"help\": \"The Hub model branch to push the model to.\"},\n    )\n    overwrite_hub_revision: bool = field(default=False, metadata={\"help\": \"Whether to overwrite the Hub revision.\"})\n    push_to_hub_revision: bool = field(default=False, metadata={\"help\": \"Whether to push to a Hub revision/branch.\"})\n    wandb_entity: Optional[str] = field(\n        default=None,\n        metadata={\"help\": (\"The entity to store runs under.\")},\n    )\n    wandb_project: Optional[str] = field(\n        default=None,\n        metadata={\"help\": (\"The project to store runs under.\")},\n    )"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/evaluate.py",
    "content": "# Copyright 2025 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Custom evaluation tasks for LightEval.\"\"\"\n\nfrom lighteval.metrics.dynamic_metrics import (\n    ExprExtractionConfig,\n    LatexExtractionConfig,\n    multilingual_extractive_match_metric,\n)\nfrom lighteval.tasks.lighteval_task import LightevalTaskConfig\nfrom lighteval.tasks.requests import Doc\nfrom lighteval.utils.language import Language\n\n\nmetric = multilingual_extractive_match_metric(\n    language=Language.ENGLISH,\n    fallback_mode=\"first_match\",\n    precision=5,\n    gold_extraction_target=(LatexExtractionConfig(),),\n    pred_extraction_target=(ExprExtractionConfig(), LatexExtractionConfig()),\n    aggregation_function=max,\n)\n\n\ndef prompt_fn(line, task_name: str = None):\n    \"\"\"Assumes the model is either prompted to emit \\\\boxed{answer} or does so automatically\"\"\"\n    return Doc(\n        task_name=task_name,\n        query=line[\"problem\"],\n        choices=[line[\"solution\"]],\n        gold_index=0,\n    )\n\n\n# Define tasks\naime24 = LightevalTaskConfig(\n    name=\"aime24\",\n    suite=[\"custom\"],\n    prompt_function=prompt_fn,\n    hf_repo=\"HuggingFaceH4/aime_2024\",\n    hf_subset=\"default\",\n    hf_avail_splits=[\"train\"],\n    evaluation_splits=[\"train\"],\n    few_shots_split=None,\n    few_shots_select=None,\n    generation_size=32768,\n    metric=[metric],\n    version=1,\n)\nmath_500 = LightevalTaskConfig(\n    name=\"math_500\",\n    suite=[\"custom\"],\n    prompt_function=prompt_fn,\n    hf_repo=\"HuggingFaceH4/MATH-500\",\n    hf_subset=\"default\",\n    hf_avail_splits=[\"test\"],\n    evaluation_splits=[\"test\"],\n    few_shots_split=None,\n    few_shots_select=None,\n    generation_size=32768,\n    metric=[metric],\n    version=1,\n)\n\n# Add tasks to the table\nTASKS_TABLE = []\nTASKS_TABLE.append(aime24)\nTASKS_TABLE.append(math_500)\n\n# MODULE LOGIC\nif __name__ == \"__main__\":\n    print([t[\"name\"] for t in TASKS_TABLE])\n    print(len(TASKS_TABLE))\n"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/generate.py",
    "content": "# Copyright 2025 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional\n\nfrom distilabel.llms import OpenAILLM\nfrom distilabel.pipeline import Pipeline\nfrom distilabel.steps.tasks import TextGeneration\n\n\ndef build_distilabel_pipeline(\n    model: str,\n    base_url: str = \"http://localhost:8000/v1\",\n    prompt_column: Optional[str] = None,\n    temperature: Optional[float] = None,\n    top_p: Optional[float] = None,\n    max_new_tokens: int = 8192,\n    num_generations: int = 1,\n) -> Pipeline:\n    generation_kwargs = {\"max_new_tokens\": max_new_tokens}\n\n    if temperature is not None:\n        generation_kwargs[\"temperature\"] = temperature\n\n    if top_p is not None:\n        generation_kwargs[\"top_p\"] = top_p\n\n    with Pipeline().ray() as pipeline:\n        TextGeneration(\n            llm=OpenAILLM(\n                base_url=base_url,\n                api_key=\"something\",\n                model=model,\n                # thinking can take some time...\n                timeout=10 * 60,\n                generation_kwargs=generation_kwargs,\n            ),\n            input_mappings={\"instruction\": prompt_column} if prompt_column is not None else {},\n            input_batch_size=64,  # on 4 nodes bs ~60+ leads to preemption due to KV cache exhaustion\n            num_generations=num_generations,\n        )\n\n    return pipeline\n\n\nif __name__ == \"__main__\":\n    import argparse\n\n    from datasets import load_dataset\n\n    parser = argparse.ArgumentParser(description=\"Run distilabel pipeline for generating responses with DeepSeek R1\")\n    parser.add_argument(\n        \"--hf-dataset\",\n        type=str,\n        required=True,\n        help=\"HuggingFace dataset to load\",\n    )\n    parser.add_argument(\n        \"--hf-dataset-config\",\n        type=str,\n        required=False,\n        help=\"Dataset config to use\",\n    )\n    parser.add_argument(\n        \"--hf-dataset-split\",\n        type=str,\n        default=\"train\",\n        help=\"Dataset split to use\",\n    )\n    parser.add_argument(\"--prompt-column\", type=str, default=\"prompt\")\n    parser.add_argument(\n        \"--model\",\n        type=str,\n        required=True,\n        help=\"Model name to use for generation\",\n    )\n    parser.add_argument(\n        \"--vllm-server-url\",\n        type=str,\n        default=\"http://localhost:8000/v1\",\n        help=\"URL of the vLLM server\",\n    )\n    parser.add_argument(\n        \"--temperature\",\n        type=float,\n        help=\"Temperature for generation\",\n    )\n    parser.add_argument(\n        \"--top-p\",\n        type=float,\n        help=\"Top-p value for generation\",\n    )\n    parser.add_argument(\n        \"--max-new-tokens\",\n        type=int,\n        default=8192,\n        help=\"Maximum number of new tokens to generate\",\n    )\n    parser.add_argument(\n        \"--num-generations\",\n        type=int,\n        
default=1,\n        help=\"Number of generations per problem\",\n    )\n    parser.add_argument(\n        \"--hf-output-dataset\",\n        type=str,\n        required=False,\n        help=\"HuggingFace repo to push results to\",\n    )\n    parser.add_argument(\n        \"--private\",\n        action=\"store_true\",\n        help=\"Whether to make the output dataset private when pushing to HF Hub\",\n    )\n\n    args = parser.parse_args()\n\n    print(\"\\nRunning with arguments:\")\n    for arg, value in vars(args).items():\n        print(f\"  {arg}: {value}\")\n    print()\n\n    print(f\"Loading '{args.hf_dataset}' (config: {args.hf_dataset_config}, split: {args.hf_dataset_split}) dataset...\")\n    # Forward the optional dataset config so that --hf-dataset-config is actually honored\n    dataset = load_dataset(args.hf_dataset, args.hf_dataset_config, split=args.hf_dataset_split)\n    print(\"Dataset loaded!\")\n\n    pipeline = build_distilabel_pipeline(\n        model=args.model,\n        base_url=args.vllm_server_url,\n        prompt_column=args.prompt_column,\n        temperature=args.temperature,\n        top_p=args.top_p,\n        max_new_tokens=args.max_new_tokens,\n        num_generations=args.num_generations,\n    )\n\n    print(\"Running generation pipeline...\")\n    distiset = pipeline.run(dataset=dataset, use_cache=False)\n    print(\"Generation pipeline finished!\")\n\n    if args.hf_output_dataset:\n        print(f\"Pushing resulting dataset to '{args.hf_output_dataset}'...\")\n        distiset.push_to_hub(args.hf_output_dataset, private=args.private)\n        print(\"Dataset pushed!\")\n"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/grpo_jsonl.py",
    "content": "# Copyright 2025 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nimport pathlib\nfrom datetime import datetime\nfrom dataclasses import dataclass, field\nfrom typing import Optional\nfrom babel.numbers import parse_decimal\nfrom utils.math import compute_score\nfrom datasets import load_dataset, load_from_disk\nfrom transformers import Qwen2VLForConditionalGeneration\n\nfrom math_verify import parse, verify\nfrom trainer import VLMGRPOTrainer, GRPOConfig\n# from trainer import VLMGRPOTrainer, GRPOConfig\nfrom trl import ModelConfig, ScriptArguments, TrlParser, get_peft_config\nimport PIL\nfrom Levenshtein import ratio\nfrom utils.pycocotools.coco import COCO\nfrom utils.pycocotools.cocoeval import COCOeval\nimport json\nimport math\nfrom json_repair import repair_json\n\nfrom vlm_modules import *\nfrom typing import Tuple\nfrom transformers.utils import logging\nfrom transformers import AutoProcessor, AutoTokenizer\n\nfrom openai import OpenAI\n\n\nlogger = logging.get_logger(__name__)\n\nclient = OpenAI(\n    api_key=os.getenv(\"OPENAI_API_KEY\", \"\"),  # Must be set via environment variable\n    base_url=os.getenv(\"OPENAI_API_BASE\", \"https://api.openai.com/v1\")\n)\n\nfrom qwen2_5vl_monkey_patch import monkey_patch_qwen2_5vl_flash_attn, monkey_patch_qwen2_5vl_forward, monkey_patch_torch_load\n\nmonkey_patch_qwen2_5vl_flash_attn()    \nmonkey_patch_torch_load()\n\ntokenizer = None\n\ndef initialize_tokenizer(model_path):\n    global tokenizer\n    if tokenizer is None:\n        tokenizer = AutoTokenizer.from_pretrained(model_path,local_files_only=True)\n        print(f\"Is Fast Tokenizer? {tokenizer.is_fast}\")\n    return tokenizer\n\n@dataclass\nclass GRPOScriptArguments(ScriptArguments):\n    \"\"\"\n    Script arguments for the GRPO training script.\n    \"\"\"\n    data_file_paths: str = field(\n        default=None,\n        metadata={\"help\": \"Paths to data files, separated by ':'\"},\n    )\n    image_folders: str = field(\n        default=None,\n        metadata={\"help\": \"Paths to image folders, separated by ':'\"},\n    )\n    arrow_cache_dir: str = field(\n        default=None,\n        metadata={\"help\": \"Path to arrow cache directory\"},\n    )\n    val_split_ratio: float = field(\n        default=0.0,\n        metadata={\"help\": \"Ratio of validation split, default 0.0\"},\n    )\n    reward_funcs: list[str] = field(\n        default_factory=lambda: [\"accuracy\", \"format\"],\n        metadata={\"help\": \"List of reward functions. 
Possible values: 'accuracy', 'format'\"},\n    )\n    max_pixels: Optional[int] = field(\n        default=12845056,\n        metadata={\"help\": \"Maximum number of pixels for the image (for QwenVL)\"},\n    )\n    min_pixels: Optional[int] = field(\n        default=3136,\n        metadata={\"help\": \"Minimum number of pixels for the image (for QwenVL)\"},\n    )\n    max_anyres_num: Optional[int] = field(\n        default=12,\n        metadata={\"help\": \"Maximum number of anyres blocks for the image (for InternVL)\"},\n    )\n    reward_method: Optional[str] = field(\n        default=None,\n        metadata={\n            \"help\": \"Choose reward method: 'default', 'mcp', ...\"\n        },\n    )\n    task_type: Optional[str] = field(\n        default=None,\n        metadata={\"help\": \"Choose task type: 'default', 'gui', ...\"},\n    )\n    is_reward_customized_from_vlm_module: bool = field(\n        default=False,\n        metadata={\"help\": \"Whether to use a customized reward from vlm module\"},\n    )\n\ndef extract_choice(text):\n    # 1. Clean and normalize text\n    text = text.upper()  # Convert to uppercase\n    text = re.sub(r'\\s+', ' ', text)  # Normalize spaces\n\n    # 2. Choice should not have uppercase letters before or after\n    choices = re.findall(r'(?<![A-Z])([A-Z])(?=[\\.\\,\\?\\!\\:\\;]|$)', text)\n\n    if not choices:\n        return None\n\n    # 3. If only one choice, return it directly\n    if len(choices) == 1:\n        return choices[0]\n\n    # 4. If multiple choices, use heuristic rules\n    choice_scores = {choice: 0 for choice in choices}\n\n    # 4.1 Keywords around choices get points\n    keywords = [\n        '答案', '选择', '正确', '是', '对',\n        'answer', 'correct', 'choose', 'select', 'right',\n        '认为', '应该', '觉得', 'think', 'believe', 'should'\n    ]\n\n    # Get context for each choice (20 chars before and after)\n    for choice in choices:\n        pos = text.find(choice)\n        context = text[max(0, pos-20):min(len(text), pos+20)]\n\n        # Add points for keywords\n        for keyword in keywords:\n            if keyword.upper() in context:\n                choice_scores[choice] += 1\n\n        # Add points if choice is near the end (usually final answer)\n        if pos > len(text) * 0.7:  # In last 30% of text\n            choice_scores[choice] += 2\n\n        # Add points if followed by punctuation\n        if pos < len(text) - 1 and text[pos+1] in '。.!！,，':\n            choice_scores[choice] += 1\n\n    # Return highest scoring choice\n    return max(choice_scores.items(), key=lambda x: x[1])[0]\n\ndef evaluate_answer_similarity(student_answer, ground_truth):\n    \"\"\"Use llm to evaluate answer similarity.\"\"\"\n    try:\n        response = client.chat.completions.create(\n            model=\"qwen2.5:7b\",\n            messages=[\n                {\n                    \"role\": \"user\",\n                    \"content\": \"You are a evaluation expert. First, analyze the student's response to identify and extract their final answer. Then, compare the extracted answer with the correct solution. Output ONLY '1.0' if the extracted answer matches the correct solution in meaning, or '0.0' if the student's response does not contain a clear or correct answer. 
No other output is allowed.\"\n                },\n                {\n                    \"role\": \"user\",\n                    \"content\": f\"Student's response: {student_answer}\\nCorrect solution: {ground_truth}\\nOutput only 1.0 or 0.0:\"\n                }\n            ],\n            temperature=0\n        )\n        result = response.choices[0].message.content.strip()\n        return float(result)\n    \n    except Exception as e:\n        print(f\"Error in GPT evaluation: {e}\")\n        # If API call fails, fall back to simple text matching\n        return 1.0 if student_answer ==ground_truth else 0.0\n\ndef llm_reward(content, sol, **kwargs):\n    # Extract answer from content if it has think/answer tags\n    sol_match = re.search(r'<answer>(.*?)</answer>', sol)\n    ground_truth = sol_match.group(1).strip() if sol_match else sol.strip()\n    \n    # Extract answer from content if it has think/answer tags\n    content_matches = re.findall(r'<answer>(.*?)</answer>', content, re.DOTALL)\n    student_answer = content_matches[-1].strip() if content_matches else content.strip()\n    return evaluate_answer_similarity(student_answer, ground_truth)\n\ndef mcq_reward(content, sol, **kwargs):\n    # For multiple choice, extract and compare choices\n    sol_match = re.search(r'<answer>(.*?)</answer>', sol)\n    ground_truth = sol_match.group(1).strip() if sol_match else sol.strip()\n    has_choices = extract_choice(ground_truth)\n    correct_choice = has_choices.upper() if has_choices else sol.strip()\n\n    # Extract answer from content if it has think/answer tags\n    content_match = re.search(r'<answer>(.*?)</answer>', content, re.DOTALL)\n    student_answer = content_match.group(1).strip() if content_match else content.strip()\n    student_choice = extract_choice(student_answer)\n    if student_choice:\n        reward = 1.0 if student_choice == correct_choice else 0.0\n    else:\n        reward = 0.0\n\n    return reward\n\n\ndef yes_no_reward(content, sol, **kwargs):\n    content = content.lower()\n    sol = sol.lower()\n\n    # Extract answer from solution if it has think/answer tags\n    sol_match = re.search(r'<answer>(.*?)</answer>', sol)\n    ground_truth = sol_match.group(1).strip() if sol_match else sol.strip()\n\n    # Extract answer from content if it has think/answer tags\n    content_match = re.search(r'<answer>(.*?)</answer>', content, re.DOTALL)\n    student_answer = content_match.group(1).strip() if content_match else content.strip()\n\n    ground_yes_no = re.search(r'(yes|no)', ground_truth)\n    ground_yes_no = ground_yes_no.group(1) if ground_yes_no else ''\n    student_yes_no = re.search(r'(yes|no)', student_answer)\n    student_yes_no = student_yes_no.group(1) if student_yes_no else ''\n\n    reward = 1.0 if ground_yes_no == student_yes_no else 0.0\n\n    return reward\n\n# score_type: 0 for mAP, 1 for mAP 50\ndef calculate_map(pred_bbox_list, gt_bbox_list, score_type=0):\n    # Calculate mAP\n\n    # Initialize COCO object for ground truth\n    gt_json = {\"annotations\": [], \"images\": [], \"categories\": []}\n    gt_json[\"images\"] = [{\n        \"id\": 0,\n        \"width\": 2048,\n        \"height\": 2048,\n        \"file_name\": \"image_0.jpg\"\n    }]\n\n    gt_json[\"categories\"] = []\n\n    cats2id = {}\n    cat_count = 0\n    for idx, gt_bbox in enumerate(gt_bbox_list):\n        if gt_bbox[\"label\"] not in cats2id:\n            cats2id[gt_bbox[\"label\"]] = cat_count\n            gt_json[\"categories\"].append({\n                \"id\": cat_count,\n 
               \"name\": gt_bbox[\"label\"]\n            })\n            cat_count += 1\n        \n        gt_json[\"annotations\"].append({\n            \"id\": idx+1,\n            \"image_id\": 0,\n            \"category_id\": cats2id[gt_bbox[\"label\"]],\n            \"bbox\": [gt_bbox[\"bbox_2d\"][0], gt_bbox[\"bbox_2d\"][1], gt_bbox[\"bbox_2d\"][2] - gt_bbox[\"bbox_2d\"][0], gt_bbox[\"bbox_2d\"][3] - gt_bbox[\"bbox_2d\"][1]],\n            \"area\": (gt_bbox[\"bbox_2d\"][2] - gt_bbox[\"bbox_2d\"][0]) * (gt_bbox[\"bbox_2d\"][3] - gt_bbox[\"bbox_2d\"][1]),\n            \"iscrowd\": 0\n        })\n    coco_gt = COCO(gt_json)\n\n    dt_json = []\n    for idx, pred_bbox in enumerate(pred_bbox_list):\n        try:\n            dt_json.append({\n                \"image_id\": 0,\n                \"category_id\": cats2id[pred_bbox[\"label\"]],\n                \"bbox\": [pred_bbox[\"bbox_2d\"][0], pred_bbox[\"bbox_2d\"][1], pred_bbox[\"bbox_2d\"][2] - pred_bbox[\"bbox_2d\"][0], pred_bbox[\"bbox_2d\"][3] - pred_bbox[\"bbox_2d\"][1]],\n                \"score\": 1.0,\n                \"area\": (pred_bbox[\"bbox_2d\"][2] - pred_bbox[\"bbox_2d\"][0]) * (pred_bbox[\"bbox_2d\"][3] - pred_bbox[\"bbox_2d\"][1])\n            })\n        except:\n            pass\n    \n    if len(dt_json) == 0:\n        return 0.0\n    \n    coco_dt = coco_gt.loadRes(dt_json)\n    coco_eval = COCOeval(coco_gt, coco_dt, \"bbox\")\n\n    coco_eval.evaluate()\n    coco_eval.accumulate()\n    coco_eval.summarize()\n    return coco_eval.stats[score_type]\n\ndef map_reward(content, sol, length_reward=False, score_type=0, **kwargs):\n    \"\"\"\n    Calculate mean average precision (mAP) reward between predicted and ground truth bounding boxes.\n    \n    Args:\n        content (str): String containing predicted bounding boxes in JSON format\n        sol (str): String containing ground truth bounding boxes in JSON format\n        length_reward (bool, optional): Whether to include length penalty in reward calculation. Defaults to False.\n        score_type (int, optional): Type of COCO evaluation metric to use. Defaults to 0 (mAP).\n        **kwargs: Additional keyword arguments\n        \n    Returns:\n        float: mAP reward score between 0 and 1. 
If length_reward is True, the score is multiplied by a length penalty factor.\n    \"\"\"\n    # Extract JSON content between ```json tags\n    pattern = r'```json(.*?)```'\n    json_match = re.findall(pattern, sol, re.DOTALL)\n    bbox_json = json_match[-1].strip() if json_match else None\n\n    # Parse ground truth JSON to get bbox list\n    gt_bbox_list = []\n    if bbox_json:\n        bbox_data = json.loads(bbox_json)\n        gt_bbox_list = [item for item in bbox_data]\n    \n    # Parse predicted JSON to get bbox list\n    pred_bbox_list = []\n    json_match = re.findall(pattern, content, re.DOTALL)\n    if json_match:\n        try:\n            bbox_data = json.loads(json_match[-1].strip())\n            pred_bbox_list = [item for item in bbox_data]\n        except:\n            # Return empty list if JSON parsing fails\n            pred_bbox_list = []\n\n    # Calculate mAP if both prediction and ground truth exist\n    if len(pred_bbox_list) > 0 and len(gt_bbox_list) > 0:\n        bbox_reward = calculate_map(pred_bbox_list, gt_bbox_list, score_type=score_type)\n    elif len(pred_bbox_list) == 0 and len(gt_bbox_list) == 0:\n        bbox_reward = 1.0\n    else:\n        bbox_reward = 0.0\n    \n    if length_reward:\n        # Calculate length penalty based on ratio of ground truth to predicted bounding boxes\n        gt_length = len(gt_bbox_list)\n        pred_length = len(pred_bbox_list)\n        # Full score if prediction has fewer boxes than ground truth, otherwise penalize proportionally\n        length_score = 1.0 if gt_length >= pred_length else gt_length/pred_length\n        return bbox_reward * length_score\n    else:\n        return bbox_reward\n\ndef od_reward(content, sol, score_type=0, **kwargs):\n    \"\"\"\n    Calculate reward for object detection task by comparing predicted and ground truth answers.\n    \n    Args:\n        content (str): Model's predicted answer containing bounding box annotations\n        sol (str): Ground truth answer containing bounding box annotations \n        score_type (int): Type of COCO evaluation metric to use (default: 0 for mAP)\n        **kwargs: Additional keyword arguments\n        \n    Returns:\n        float: Reward score between 0 and 1 based on mAP between predicted and ground truth boxes\n    \"\"\"\n    # Pattern to extract content between <answer> tags\n    match_pattern = r'<answer>(.*?)</answer>'\n\n    # Extract ground truth answer\n    sol_match = re.search(match_pattern, sol, re.DOTALL)\n    ground_truth = sol_match.group(1).strip() if sol_match else None\n\n    # Extract predicted answer (using last match if multiple)\n    content_match = re.findall(match_pattern, content, re.DOTALL)\n    student_answer = content_match[-1].strip() if content_match else None\n\n    # Return 0 if no prediction\n    if student_answer is None:\n        return 0.0\n    # Return 1 if both prediction and ground truth are None\n    elif ground_truth == \"None\" and student_answer == \"None\":\n        return 1.0\n    # Otherwise calculate mAP between prediction and ground truth\n    else:\n        return map_reward(student_answer, ground_truth, score_type=score_type)\n\ndef odLength_reward(content, sol, **kwargs):\n    \"\"\"\n    Calculate reward for object detection task with length penalty.\n    \n    Args:\n        content (str): Model's predicted answer containing bounding box annotations\n        sol (str): Ground truth answer containing bounding box annotations\n        **kwargs: Additional keyword arguments\n        \n    Returns:\n     
   float: Reward score between 0 and 1 based on mAP and length penalty\n    \"\"\"\n    # Pattern to extract content between <answer> tags\n    match_pattern = r'<answer>(.*?)</answer>'\n\n    # Extract ground truth answer\n    sol_match = re.search(match_pattern, sol, re.DOTALL)\n    ground_truth = sol_match.group(1).strip() if sol_match else None\n    # Extract predicted answer (using last match if multiple)\n    content_match = re.findall(match_pattern, content, re.DOTALL)\n    student_answer = content_match[-1].strip() if content_match else None\n\n    # Return 0 if no prediction\n    if student_answer is None:\n        return 0.0\n    # Return 1 if both prediction and ground truth are None\n    elif ground_truth == \"None\" and student_answer == \"None\":\n        return 1.0\n    # Calculate mAP with length penalty\n    else:\n        bbox_reward = map_reward(student_answer, ground_truth, length_reward=True, score_type=0)\n        return bbox_reward\n\ndef iou(box1, box2):\n    inter_x1 = max(box1[0], box2[0])\n    inter_y1 = max(box1[1], box2[1])\n    inter_x2 = min(box1[2]-1, box2[2]-1)\n    inter_y2 = min(box1[3]-1, box2[3]-1)\n    if inter_x1 < inter_x2 and inter_y1 < inter_y2:\n        inter = (inter_x2-inter_x1+1)*(inter_y2-inter_y1+1)\n    else:\n        inter = 0\n    union = (box1[2]-box1[0])*(box1[3]-box1[1]) + (box2[2]-box2[0])*(box2[3]-box2[1]) - inter\n    return float(inter)/union\n\ndef detection_score(content, sol, iou_threshold=0.5, alpha=0.7, beta=0.0, gamma=0.3):\n    pattern = r'```json(.*?)```'\n    json_match = re.search(pattern, clean_text(content), re.DOTALL)\n    content_bbox_json = json_match.group(1).strip() if json_match else None\n    if content_bbox_json:\n        try:\n            bbox_data = json.loads(content_bbox_json)\n            pred_boxes = [item for item in bbox_data]\n        except:\n            pred_boxes = []\n\n    else:\n        pred_boxes = []\n\n    pattern = r'```json(.*?)```'\n    json_match = re.search(pattern, clean_text(sol), re.DOTALL)\n    sol_bbox_json = json_match.group(1).strip() if json_match else None\n    if sol_bbox_json:\n        bbox_data = json.loads(sol_bbox_json)\n        gt_boxes = [item for item in bbox_data]\n    else:\n        gt_boxes = []\n\n    \"\"\"\n    Calculate the comprehensive score for object detection\n    \n    Parameters:\n        pred_boxes: List of predicted boxes, each element is in the format {\"bbox_2d\": [x1, y1, x2, y2], \"label\": \"category name\"}\n        gt_boxes: List of ground truth boxes, each element is in the format {\"bbox_2d\": [x1, y1, x2, y2], \"label\": \"category name\"}\n        iou_threshold: IoU threshold, default is 0.5\n        alpha: Position accuracy weight, default is 0.7\n        beta: Label accuracy weight, default is 0.0\n        gamma: Completeness weight (penalty for missed/false detections), default is 0.3\n        \n    Returns:\n        Comprehensive score, ranging from [0.0, 1.0]\n    \"\"\"\n    # Handle edge cases\n    if len(gt_boxes) == 0:\n        return 1.0 if not pred_boxes else 0.0\n    \n    if len(pred_boxes) == 0:\n        return 0.0\n    \n    # Initialize matching results\n    matches = []  # Store matched pairs of predicted and ground truth boxes\n    unmatched_preds = list(range(len(pred_boxes)))  # Indices of unmatched predicted boxes\n    unmatched_gts = list(range(len(gt_boxes)))  # Indices of unmatched ground truth boxes\n    \n    # Calculate IoU matrix between all predicted and ground truth boxes\n    iou_matrix = []\n    for pred_idx, 
pred_box in enumerate(pred_boxes):\n        iou_row = []\n        for gt_idx, gt_box in enumerate(gt_boxes):\n            try:\n                curr_iou = iou(pred_box[\"bbox_2d\"], gt_box[\"bbox_2d\"])\n            except:\n                curr_iou = 0.0\n            iou_row.append(curr_iou)\n        iou_matrix.append(iou_row)\n    \n    # Greedy matching: find the best match for each predicted box\n    while unmatched_preds and unmatched_gts:\n        # Find the maximum IoU\n        max_iou = -1\n        max_pred_idx = -1\n        max_gt_idx = -1\n        \n        for pred_idx in unmatched_preds:\n            for gt_idx in unmatched_gts:\n                curr_iou = iou_matrix[pred_idx][gt_idx]\n                if curr_iou > max_iou:\n                    max_iou = curr_iou\n                    max_pred_idx = pred_idx\n                    max_gt_idx = gt_idx\n        \n        # Stop matching if the maximum IoU is below the threshold\n        if max_iou < iou_threshold:\n            break\n        \n        # Record matching results (fall back to an empty label if the box has no \"label\" field)\n        try:\n            pred_label = pred_boxes[max_pred_idx][\"label\"].lower()\n        except:\n            pred_label = \"\"\n        try:\n            gt_label = gt_boxes[max_gt_idx][\"label\"].lower()\n        except:\n            gt_label = \"\"\n        label_correct = (pred_label == gt_label)\n        \n        if label_correct:\n            matches.append({\n                \"pred_idx\": max_pred_idx,\n                \"gt_idx\": max_gt_idx,\n                \"iou\": max_iou,\n                \"label_correct\": label_correct\n            })\n        else:\n            matches.append({\n                \"pred_idx\": max_pred_idx,\n                \"gt_idx\": max_gt_idx,\n                \"iou\": 0,\n                \"label_correct\": label_correct\n            })\n        \n        # Remove matched boxes from the unmatched list\n        unmatched_preds.remove(max_pred_idx)\n        unmatched_gts.remove(max_gt_idx)\n    \n    # Calculate position accuracy score (average IoU)\n    position_score = sum(m[\"iou\"] for m in matches) / len(gt_boxes) if matches else 0.0\n    \n    # Calculate label accuracy score\n    label_score = sum(1.0 for m in matches if m[\"label_correct\"]) / len(gt_boxes) if matches else 0.0\n    \n    # Calculate completeness score (considering missed and false detections)\n    # Miss rate = number of unmatched ground truth boxes / total number of ground truth boxes\n    # False alarm rate = number of unmatched predicted boxes / total number of predicted boxes\n    miss_rate = len(unmatched_gts) / len(gt_boxes)\n    false_alarm_rate = len(unmatched_preds) / len(pred_boxes) if pred_boxes else 0.0\n    \n    # Completeness score = 1 - (miss rate + false alarm rate) / 2\n    completeness_score = 1.0 - (miss_rate + false_alarm_rate) / 2.0\n    \n    # Calculate the final comprehensive score\n    final_score = (\n        alpha * position_score + \n        beta * label_score + \n        gamma * completeness_score\n    ) / (alpha + beta + gamma)\n\n    return final_score\n\ndef cosine_reward(content, tokenizer, acc_reward, **kwargs):\n    #https://arxiv.org/abs/2502.03373\n    min_len_value_wrong = 0.0\n    max_len_value_wrong = -0.5\n    min_len_value_correct = 1.0\n    max_len_value_correct = 0.5\n    cosine_max_len = 1024\n\n    # processing_class = AutoProcessor.from_pretrained(model_path)\n    # tokenizer = processing_class.tokenizer\n    \n    gen_len = len(tokenizer.encode(content))\n    # NOTE: acc_reward is overridden to 1.0 here, so the cosine scaling below always follows the correct-answer branch\n    acc_reward = 1.0\n    is_correct = 
acc_reward >= 0.7\n    \n    if is_correct:\n        # Swap min/max for correct answers\n        min_value = max_len_value_correct\n        max_value = min_len_value_correct\n    else:\n        min_value = min_len_value_wrong\n        max_value = max_len_value_wrong\n\n    reward = max_value - (max_value - min_value) * (1 - math.cos(gen_len * math.pi / cosine_max_len)) / 2\n\n    return reward\n\ndef repetition_reward(content, **kwargs):\n    max_penalty = -1.0\n\n    if content == '':\n        return 0.0\n\n    # First, try to extract explicitly marked JSON sections\n    pattern = r'```json(.*?)```'\n    json_match = re.search(pattern, content, re.DOTALL)\n    \n    if json_match:\n        bbox_json = json_match.group(1).strip()\n    else:\n        # If no explicitly marked JSON is found, try to find any possible JSON sections\n        pattern = r'```(.*?)```'\n        json_match = re.search(pattern, content, re.DOTALL)\n        bbox_json = json_match.group(1).strip() if json_match else None\n        \n        # If still not found, try to find possible JSON array sections\n        if not bbox_json:\n            pattern = r'\\[\\s*{.*?\"bbox_2d\".*?\"label\".*?}\\s*\\]'\n            json_match = re.search(pattern, content, re.DOTALL)\n            bbox_json = json_match.group(0) if json_match else None\n    \n    # Try to parse JSON data\n    if bbox_json:\n        try:\n            # Try direct parsing\n            data = json.loads(bbox_json)\n        except json.JSONDecodeError:\n            try:\n                # If direct parsing fails, try using json_repair to repair\n                repaired_json = repair_json(bbox_json)\n                data = json.loads(repaired_json)\n            except:\n                # If repair also fails, switch to plain text processing\n                data = None\n        if data and isinstance(data, list):\n            # Ensure data is in list format\n            try:\n                # For JSON data, set ngram_size to 1\n                ngram_size = 1\n                # Combine 'bbox_2d' and 'label' of each object into a string\n                items = []\n                for item in data:\n                    if 'bbox_2d' in item and 'label' in item:\n                        items.append(f\"{item['bbox_2d']}_{item['label']}\")\n                \n                @staticmethod\n                def zipngram(text: list, ngram_size: int):\n                    return zip(*[text[i:] for i in range(ngram_size)])\n                \n                ngrams = set()\n                total = 0\n\n                for ng in zipngram(items, ngram_size):\n                    ngrams.add(ng)\n                    total += 1\n\n                if total == 0:\n                    return 0.0\n\n                scaling = 1 - len(ngrams) / total\n                reward = scaling * max_penalty\n\n                return reward\n            except KeyError:\n                # If necessary keys are missing, switch to plain text processing\n                pass\n    \n    # If no JSON section is found or JSON processing fails, treat as plain text\n    ngram_size = 6\n    \n    if len(content.split()) < ngram_size:\n        return 0.0\n    \n    @staticmethod\n    def zipngram(text: str, ngram_size: int):\n        words = text.lower().split()\n        return zip(*[words[i:] for i in range(ngram_size)])\n    \n    ngrams = set()\n    total = 0\n\n    for ng in zipngram(content, ngram_size):\n        ngrams.add(ng)\n        total += 1\n\n    scaling = 1 - len(ngrams) / total\n    
reward = scaling * max_penalty\n\n    return reward\n\n\ndef repetition_rewards(completions, solution, **kwargs):\n    contents = [completion[0][\"content\"] for completion in completions]\n    rewards = []\n\n    for content, sol in zip(contents, solution):\n        reward = repetition_reward(content)\n        rewards.append(reward)\n\n\n        if os.getenv(\"DEBUG_MODE\") == \"true\":\n            log_path = os.getenv(\"LOG_PATH\")\n            current_time = datetime.now().strftime(\"%d-%H-%M-%S-%f\")\n            image_path = kwargs.get(\"image_path\")[0] if \"image_path\" in kwargs else None\n            problem = kwargs.get(\"problem\")[0]\n            if reward <= 0.0:  # this condition can be changed for debug\n                with open(log_path+\"_repetition.txt\", \"a\", encoding='utf-8') as f:\n                    f.write(f\"------------- {current_time} Accuracy reward: {reward} -------------\\n\")\n                    f.write(f\"image_path: {image_path}\\n\")\n                    f.write(f\"problem: {problem}\\n\")\n                    f.write(f\"Content: {content}\\n\")\n                    f.write(f\"Solution: {sol}\\n\")     \n\n\n\n    return rewards\n\n\ndef cosine_rewards(completions, solution, **kwargs):\n    contents = [completion[0][\"content\"] for completion in completions]\n    rewards = []\n\n    for content, sol in zip(contents, solution):\n        clean_content = clean_text(content)\n        sol = clean_text(sol)\n        if sol == \"none\":\n            if clean_content == \"none\":\n                acc_reward = 1.0\n            else:\n                acc_reward = 0.0\n        else:\n            acc_reward = detection_score(clean_content, sol)\n        reward = cosine_reward(content, tokenizer, acc_reward)\n        rewards.append(reward)\n\n        if os.getenv(\"DEBUG_MODE\") == \"true\":\n            log_path = os.getenv(\"LOG_PATH\")\n            current_time = datetime.now().strftime(\"%d-%H-%M-%S-%f\")\n            image_path = kwargs.get(\"image_path\")[0] if \"image_path\" in kwargs else None\n            problem = kwargs.get(\"problem\")[0]\n            if reward <=1.0:  # this condition can be changed for debug\n                with open(log_path+\"_cosine.txt\", \"a\", encoding='utf-8') as f:\n                    f.write(f\"------------- {current_time} Accuracy reward: {reward} -------------\\n\")\n                    f.write(f\"image_path: {image_path}\\n\")\n                    f.write(f\"problem: {problem}\\n\")\n                    f.write(f\"Content: {content}\\n\")\n                    f.write(f\"Solution: {sol}\\n\")   \n\n    return rewards\n\ndef numeric_reward(content, sol, **kwargs):\n    content = clean_text(content)\n    sol = clean_text(sol)\n    try:\n        content, sol = float(content), float(sol)\n        return 1.0 if content == sol else 0.0\n    except:\n        return None\ndef math_reward(content, sol, **kwargs):\n    content = clean_text(content)\n    sol = clean_text(sol)\n    return compute_score(content, sol)\ndef clean_text(text, exclue_chars=['\\n', '\\r']):\n    # Extract content between <answer> and </answer> if present\n    answer_matches = re.findall(r'<answer>(.*?)</answer>', text, re.DOTALL)\n    if answer_matches:\n        # Use the last match\n        text = answer_matches[-1]\n    \n    for char in exclue_chars:\n        if char in ['\\n', '\\r']:\n            # If there is a space before the newline, remove the newline\n            text = re.sub(r'(?<=\\s)' + re.escape(char), '', text)\n            # If there is 
no space before the newline, replace it with a space\n            text = re.sub(r'(?<!\\s)' + re.escape(char), ' ', text)\n        else:\n            text = text.replace(char, ' ')\n    \n    # Remove leading and trailing spaces and convert to lowercase\n    return text.strip().rstrip('.').lower()\n\ndef all_match_reward(content, sol, **kwargs):\n    content = clean_text(content)\n    sol = clean_text(sol)\n    return 1.0 if content == sol else 0.0\n\ndef default_accuracy_reward(content, sol, **kwargs):\n    reward = 0.0\n        # Extract answer from solution if it has think/answer tags\n    sol_match = re.search(r'<answer>(.*?)</answer>', sol)\n    ground_truth = sol_match.group(1).strip() if sol_match else sol.strip()\n    \n    # Extract answer from content if it has think/answer tags\n    content_matches = re.findall(r'<answer>(.*?)</answer>', content, re.DOTALL)\n    student_answer = content_matches[-1].strip() if content_matches else content.strip()\n    \n    # Try symbolic verification first for numeric answers\n    try:\n        answer = parse(student_answer)\n        if float(verify(answer, parse(ground_truth))) > 0:\n            reward = 1.0\n    except Exception:\n        pass  # Continue to next verification method if this fails\n\n    # If symbolic verification failed, try string matching or fuzzy matching\n    if reward == 0.0:\n        try: \n            # Check if ground truth contains numbers\n            has_numbers = bool(re.search(r'\\d', ground_truth))\n            # Check if it's a multiple choice question\n            has_choices = extract_choice(ground_truth)\n            \n            if has_numbers:\n                # For numeric answers, use exact matching\n                reward = numeric_reward(student_answer, ground_truth)\n                if reward is None:\n                    reward = ratio(clean_text(student_answer), clean_text(ground_truth))\n            elif has_choices:\n                # For multiple choice, extract and compare choices\n                correct_choice = has_choices.upper()\n                student_choice = extract_choice(student_answer)\n                if student_choice:\n                    reward = 1.0 if student_choice == correct_choice else 0.0\n            else:\n                # For text answers, use fuzzy matching\n                reward = ratio(clean_text(student_answer), clean_text(ground_truth))\n        except Exception:\n            pass  # Keep reward as 0.0 if all methods fail\n\n    return reward\n\ndef accuracy_reward(completions, solution, **kwargs):\n    \"\"\"Reward function that checks if the completion is correct using symbolic verification, exact string matching, or fuzzy matching.\"\"\"\n    contents = [completion[0][\"content\"] for completion in completions]\n    rewards = []\n    for content, sol, accu_reward_method in zip(contents, solution, kwargs.get(\"accu_reward_method\")):\n        # if accu_reward_method is defined, use the corresponding reward function, otherwise use the default reward function\n        if accu_reward_method == \"mcq\":\n            reward = mcq_reward(content, sol)\n        elif accu_reward_method == 'yes_no':\n            reward = yes_no_reward(content, sol)\n        elif accu_reward_method == 'llm':\n            reward = llm_reward(content, sol)\n        elif accu_reward_method == 'map':\n            reward = map_reward(content, sol)\n        elif accu_reward_method == 'math':\n            reward = math_reward(content, sol)\n        elif accu_reward_method == 'weighted_sum':\n    
        clean_content = clean_text(content)\n            sol = clean_text(sol)\n            if sol == \"none\":\n                if clean_content == \"none\":\n                    reward = 1.0\n                else:\n                    reward = 0.0\n            else:\n                reward = detection_score(clean_content, sol)\n        elif accu_reward_method == 'od_ap':\n            reward = od_reward(content, sol)\n        elif accu_reward_method == 'od_ap50':\n            reward = od_reward(content, sol, score_type=1)\n        elif accu_reward_method == 'odLength':\n            reward = odLength_reward(content, sol)\n        elif accu_reward_method == 'all_match':\n            reward = all_match_reward(content, sol)\n        else:\n            reward = default_accuracy_reward(content, sol)  \n        rewards.append(reward)\n        \n        if os.getenv(\"DEBUG_MODE\") == \"true\":\n            log_path = os.getenv(\"LOG_PATH\")\n            current_time = datetime.now().strftime(\"%d-%H-%M-%S-%f\")\n            image_path = kwargs.get(\"image_path\")[0] if \"image_path\" in kwargs else None\n            problem = kwargs.get(\"problem\")[0]\n            if reward <= 1.0:  # this condition can be changed for debug\n                with open(log_path, \"a\", encoding='utf-8') as f:\n                    f.write(f\"------------- {current_time} Accuracy reward: {reward} -------------\\n\")\n                    f.write(f\"accu_reward_method: {accu_reward_method}\\n\")\n                    f.write(f\"image_path: {image_path}\\n\")\n                    f.write(f\"problem: {problem}\\n\")\n                    f.write(f\"Content: {content}\\n\")\n                    f.write(f\"Solution: {sol}\\n\")     \n\n        \n    return rewards\n\ndef format_reward(completions, **kwargs):\n    \"\"\"Reward function that checks if the completion has a specific format.\"\"\"\n    pattern = r\"<think>.*?</think>\\s*<answer>.*?</answer>\"\n    completion_contents = [completion[0][\"content\"] for completion in completions]\n    matches = [re.fullmatch(pattern, content, re.DOTALL) for content in completion_contents]\n\n    current_time = datetime.now().strftime(\"%d-%H-%M-%S-%f\")\n    if os.getenv(\"DEBUG_MODE\") == \"true\":\n        log_path = os.getenv(\"LOG_PATH\")\n        with open(log_path.replace(\".txt\", \"_format.txt\"), \"a\", encoding='utf-8') as f:\n            f.write(f\"------------- {current_time} Format reward -------------\\n\")\n            for content, match in zip(completion_contents, matches):\n                f.write(f\"Content: {content}\\n\")\n                f.write(f\"Has format: {bool(match)}\\n\")\n\n    return [1.0 if match else 0.0 for match in matches]\n\n\nreward_funcs_registry = {\n    \"accuracy\": accuracy_reward,\n    \"format\": format_reward,\n    \"length\": cosine_rewards,\n    \"repetition\": repetition_rewards,\n}\n\n@dataclass\nclass GRPOModelConfig(ModelConfig):\n    freeze_vision_modules: bool = False\n\nSYSTEM_PROMPT = (\n    \"A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant \"\n    \"first thinks about the reasoning process in the mind and then provides the user with the answer. 
The reasoning \"\n    \"process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., \"\n    \"<think> reasoning process here </think><answer> answer here </answer>\"\n)\n\n\ndef get_vlm_module(model_name_or_path):\n    if \"qwen\" in model_name_or_path.lower():\n        return Qwen2VLModule\n    else:\n        raise ValueError(f\"Unsupported model: {model_name_or_path}\")\n\ndef main(script_args, training_args, model_args):\n    # Load the VLM module\n    vlm_module_cls = get_vlm_module(model_args.model_name_or_path)\n    print(\"using vlm module:\", vlm_module_cls.__name__)\n    question_prompt = vlm_module_cls.get_question_template(task_type=script_args.task_type)\n\n    # Get reward functions \n    if script_args.is_reward_customized_from_vlm_module:\n        reward_funcs = [vlm_module_cls.select_reward_func(func, script_args.task_type) for func in script_args.reward_funcs]\n    else:\n        reward_funcs = [reward_funcs_registry[func] for func in script_args.reward_funcs]\n    print(\"reward_funcs:\", reward_funcs)\n\n    # Load the JSONL datasets\n    import json\n    from datasets import Dataset\n    \n    data_files = script_args.data_file_paths.split(\":\")\n    image_folders = script_args.image_folders.split(\":\")\n    \n    if len(data_files) != len(image_folders):\n        raise ValueError(\"Number of data files must match number of image folders\")\n    \n    if script_args.reward_method is None:\n        accu_reward_methods = [\"default\"] * len(data_files)\n    else:\n        accu_reward_methods = script_args.reward_method.split(\":\")\n        assert len(accu_reward_methods) == len(data_files), f\"Number of reward methods must match number of data files: {len(accu_reward_methods)} != {len(data_files)}\"\n\n    \n    if len(data_files) != len(image_folders):\n        raise ValueError(\"Number of data files must match number of image folders\")\n    \n    all_data = []\n    for data_file, image_folder, accu_reward_method in zip(data_files, image_folders, accu_reward_methods):\n        with open(data_file, 'r') as f:\n            for line in f:\n                item = json.loads(line)\n                if 'image' in item:\n                    if isinstance(item['image'], str):\n                        # Store image path instead of loading the image\n                        item['image_path'] = [os.path.join(image_folder, item['image'])]\n                        del item['image'] # remove the image column so that it can be loaded later\n                    elif isinstance(item['image'], list):\n                        # if the image is a list, then it is a list of images (for multi-image input)\n                        item['image_path'] = [os.path.join(image_folder, image) for image in item['image']]\n                        del item['image'] # remove the image column so that it can be loaded later\n                    else:\n                        raise ValueError(f\"Unsupported image type: {type(item['image'])}\")\n                # Remove immediate image loading\n                item['problem'] = item['conversations'][0]['value'].replace('<image>', '')\n                \n                # Handle solution that could be a float or string\n                solution_value = item['conversations'][1]['value']\n                if isinstance(solution_value, str):\n                    item['solution'] = solution_value.replace('<answer>', '').replace('</answer>', '').strip()\n                else:\n                    # If it's a float or 
other non-string type, keep it as is\n                    item['solution'] = str(solution_value)\n                \n                del item['conversations']\n                item['accu_reward_method'] = item.get('accu_reward_method', accu_reward_method) # if accu_reward_method is in the data jsonl, use the value in the data jsonl, otherwise use the defined value\n                all_data.append(item)\n\n    dataset = Dataset.from_list(all_data)\n\n    def make_conversation_from_jsonl(example):\n        if 'image_path' in example and example['image_path'] is not None:\n            assert all(os.path.exists(p) for p in example['image_path']), f\"Image paths do not exist: {example['image_path']}\"\n            # Don't load image here, just store the path\n            return {\n                'image_path': [p for p in example['image_path']],  # Store path instead of loaded image\n                'problem': example['problem'],\n                'solution': f\"<answer> {example['solution']} </answer>\",\n                'accu_reward_method': example['accu_reward_method'],\n                'prompt': [{\n                    'role': 'user',\n                    'content': [\n                        *({'type': 'image', 'text': None} for _ in range(len(example['image_path']))),\n                        {'type': 'text', 'text': question_prompt.format(Question=example['problem'])}\n                    ]\n                }]\n            }\n        else:\n            return {\n                'problem': example['problem'],\n                'solution': f\"<answer> {example['solution']} </answer>\",\n                'accu_reward_method': example['accu_reward_method'],\n                'prompt': [{\n                    'role': 'user',\n                    'content': [\n                        {'type': 'text', 'text': question_prompt.format(Question=example['problem'])}\n                    ]\n                }]\n            }\n\n    # Map the conversations\n    dataset = dataset.map(make_conversation_from_jsonl, num_proc=8)\n    # print(dataset[0])\n    # Split dataset for validation if requested\n    splits = {'train': dataset}\n    if script_args.val_split_ratio > 0:\n        train_val_split = dataset.train_test_split(\n            test_size=script_args.val_split_ratio\n        )\n        splits['train'] = train_val_split['train']\n        splits['validation'] = train_val_split['test']\n\n    # Select trainer class based on vlm_trainer argument\n    trainer_cls = VLMGRPOTrainer\n    print(\"using trainer:\", trainer_cls.__name__)\n    initialize_tokenizer(model_args.model_name_or_path)\n    \n    # Initialize the GRPO trainer\n    trainer = trainer_cls(\n        model=model_args.model_name_or_path,\n        reward_funcs=reward_funcs,\n        args=training_args,\n        vlm_module=vlm_module_cls(),\n        train_dataset=splits['train'],\n        eval_dataset=splits.get('validation') if training_args.eval_strategy != \"no\" else None,\n        peft_config=get_peft_config(model_args),\n        freeze_vision_modules=model_args.freeze_vision_modules,\n        attn_implementation=model_args.attn_implementation,\n        max_pixels=script_args.max_pixels,\n        min_pixels=script_args.min_pixels,\n        max_anyres_num=script_args.max_anyres_num,\n    )\n\n    # Train and push the model to the Hub\n    if list(pathlib.Path(training_args.output_dir).glob(\"checkpoint-*\")):\n        trainer.train(resume_from_checkpoint=True)\n    else:\n        trainer.train()\n\n    # Save and push to hub\n    
trainer.save_model(training_args.output_dir)\n    if training_args.push_to_hub:\n        trainer.push_to_hub()\n\n\nif __name__ == \"__main__\":\n    parser = TrlParser((GRPOScriptArguments, GRPOConfig, GRPOModelConfig))\n    script_args, training_args, model_args = parser.parse_args_and_config()\n    if training_args.deepspeed and \"zero3\" in training_args.deepspeed:\n        print(\"zero3 is used, qwen2_5vl forward monkey patch is applied\")\n        monkey_patch_qwen2_5vl_forward()\n    main(script_args, training_args, model_args)\n"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/qwen2_5vl_monkey_patch.py",
    "content": "\n# ----------------------- Fix the flash attention bug in the current version of transformers -----------------------\nfrom transformers.models.qwen2_5_vl.modeling_qwen2_5_vl import Qwen2_5_VLVisionFlashAttention2, apply_rotary_pos_emb_flashatt, flash_attn_varlen_func\nimport torch\nfrom typing import Tuple, Optional\ndef qwen2_5vl_vision_flash_attn_forward(\n        self,\n        hidden_states: torch.Tensor,\n        cu_seqlens: torch.Tensor,\n        rotary_pos_emb: Optional[torch.Tensor] = None,\n        position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,\n    ) -> torch.Tensor:\n        seq_length = hidden_states.shape[0]\n        q, k, v = self.qkv(hidden_states).reshape(seq_length, 3, self.num_heads, -1).permute(1, 0, 2, 3).unbind(0)\n        # print(111, 222, 333, 444, 555, 666, 777, 888, 999)\n        if position_embeddings is None:\n            logger.warning_once(\n                \"The attention layers in this model are transitioning from computing the RoPE embeddings internally \"\n                \"through `rotary_pos_emb` (2D tensor of RoPE theta values), to using externally computed \"\n                \"`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.54 `rotary_pos_emb` will be \"\n                \"removed and `position_embeddings` will be mandatory.\"\n            )\n            emb = torch.cat((rotary_pos_emb, rotary_pos_emb), dim=-1)\n            cos = emb.cos().float()\n            sin = emb.sin().float()\n        else:\n            cos, sin = position_embeddings\n            # Add this\n            cos = cos.to(torch.float)\n            sin = sin.to(torch.float)\n        q, k = apply_rotary_pos_emb_flashatt(q.unsqueeze(0), k.unsqueeze(0), cos, sin)\n        q = q.squeeze(0)\n        k = k.squeeze(0)\n\n        max_seqlen = (cu_seqlens[1:] - cu_seqlens[:-1]).max().item()\n        attn_output = flash_attn_varlen_func(q, k, v, cu_seqlens, cu_seqlens, max_seqlen, max_seqlen).reshape(\n            seq_length, -1\n        )\n        attn_output = self.proj(attn_output)\n        return attn_output\n\n\ndef monkey_patch_qwen2_5vl_flash_attn():\n    Qwen2_5_VLVisionFlashAttention2.forward = qwen2_5vl_vision_flash_attn_forward\n\n\n# ----------------------- Fix the process pending bug when using data mixture of image-text data and pure-text under deepseed zero3-----------------------\nfrom transformers.models.qwen2_5_vl.modeling_qwen2_5_vl import Qwen2_5_VLCausalLMOutputWithPast\nfrom typing import List, Union\nfrom torch.nn import CrossEntropyLoss\nfrom transformers.models.qwen2_5_vl.modeling_qwen2_5_vl import Qwen2_5_VLForConditionalGeneration\ndef qwen2_5vl_forward(\n        self,\n        input_ids: torch.LongTensor = None,\n        attention_mask: Optional[torch.Tensor] = None,\n        position_ids: Optional[torch.LongTensor] = None,\n        past_key_values: Optional[List[torch.FloatTensor]] = None,\n        inputs_embeds: Optional[torch.FloatTensor] = None,\n        labels: Optional[torch.LongTensor] = None,\n        use_cache: Optional[bool] = None,\n        output_attentions: Optional[bool] = None,\n        output_hidden_states: Optional[bool] = None,\n        return_dict: Optional[bool] = None,\n        pixel_values: Optional[torch.Tensor] = None,\n        pixel_values_videos: Optional[torch.FloatTensor] = None,\n        image_grid_thw: Optional[torch.LongTensor] = None,\n        video_grid_thw: Optional[torch.LongTensor] = None,\n        rope_deltas: Optional[torch.LongTensor] = None,\n        
cache_position: Optional[torch.LongTensor] = None,\n        second_per_grid_ts: Optional[torch.Tensor] = None,\n    ) -> Union[Tuple, Qwen2_5_VLCausalLMOutputWithPast]:\n        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\n        output_hidden_states = (\n            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\n        )\n        return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n        if inputs_embeds is None:\n            inputs_embeds = self.model.embed_tokens(input_ids)\n\n            has_images_global = False\n            if pixel_values is not None:\n                has_images_local = torch.tensor(1, device=input_ids.device)\n            else:\n                has_images_local = torch.tensor(0, device=input_ids.device)\n            # Use all_reduce to ensure all GPUs know if there are images to process\n            torch.distributed.all_reduce(has_images_local, op=torch.distributed.ReduceOp.MAX)\n            has_images_global = has_images_local.item() > 0\n\n            # If there are image inputs globally, ensure all GPUs call the visual model\n            if has_images_global:\n                if pixel_values is not None:   \n                    pixel_values = pixel_values.type(self.visual.dtype)\n                    image_embeds = self.visual(pixel_values, grid_thw=image_grid_thw)\n                    n_image_tokens = (input_ids == self.config.image_token_id).sum().item()\n                    n_image_features = image_embeds.shape[0]\n                    if n_image_tokens != n_image_features:\n                        raise ValueError(\n                            f\"Image features and image tokens do not match: tokens: {n_image_tokens}, features {n_image_features}\"\n                        )\n                    \n                    mask = input_ids == self.config.image_token_id\n                    mask_unsqueezed = mask.unsqueeze(-1)\n                    mask_expanded = mask_unsqueezed.expand_as(inputs_embeds)\n                    image_mask = mask_expanded.to(inputs_embeds.device)\n                    \n                    image_embeds = image_embeds.to(inputs_embeds.device, inputs_embeds.dtype)\n                    inputs_embeds = inputs_embeds.masked_scatter(image_mask, image_embeds)\n                else:\n                    with torch.no_grad():\n                        # Create a dummy image data for triggering parameter synchronization\n                        dummy_pixel_values = torch.zeros((4, 1176), device=input_ids.device, dtype=self.visual.dtype)\n                        dummy_grid_thw = torch.tensor([[1, 2, 2]], device=input_ids.device)\n                        _ = self.visual(dummy_pixel_values, grid_thw=dummy_grid_thw)\n\n            # Currently, video processing is not handled.\n            if pixel_values_videos is not None:\n                pixel_values_videos = pixel_values_videos.type(self.visual.dtype)\n                video_embeds = self.visual(pixel_values_videos, grid_thw=video_grid_thw)\n                n_video_tokens = (input_ids == self.config.video_token_id).sum().item()\n                n_video_features = video_embeds.shape[0]\n                if n_video_tokens != n_video_features:\n                    raise ValueError(\n                        f\"Video features and video tokens do not match: tokens: {n_video_tokens}, features {n_video_features}\"\n                    )\n\n                mask = 
input_ids == self.config.video_token_id\n                mask_unsqueezed = mask.unsqueeze(-1)\n                mask_expanded = mask_unsqueezed.expand_as(inputs_embeds)\n                video_mask = mask_expanded.to(inputs_embeds.device)\n\n                video_embeds = video_embeds.to(inputs_embeds.device, inputs_embeds.dtype)\n                inputs_embeds = inputs_embeds.masked_scatter(video_mask, video_embeds)\n\n            if attention_mask is not None:\n                attention_mask = attention_mask.to(inputs_embeds.device)\n\n        # if we get 4D attention mask we cannot calculate rope deltas anymore. TODO @raushan fixme\n        if position_ids is None and (attention_mask is None or attention_mask.ndim == 2):\n            # calculate RoPE index once per generation in the pre-fill stage only\n            if (\n                (cache_position is not None and cache_position[0] == 0)\n                or self.rope_deltas is None\n                or (past_key_values is None or past_key_values.get_seq_length() == 0)\n            ):\n                position_ids, rope_deltas = self.get_rope_index(\n                    input_ids,\n                    image_grid_thw,\n                    video_grid_thw,\n                    second_per_grid_ts,\n                    attention_mask,\n                )\n                self.rope_deltas = rope_deltas\n            # then use the prev pre-calculated rope-deltas to get the correct position ids\n            else:\n                batch_size, seq_length, _ = inputs_embeds.shape\n                delta = (\n                    (cache_position[0] + self.rope_deltas).to(inputs_embeds.device)\n                    if cache_position is not None\n                    else 0\n                )\n                position_ids = torch.arange(seq_length, device=inputs_embeds.device)\n                position_ids = position_ids.view(1, -1).expand(batch_size, -1)\n                if cache_position is not None:  # otherwise `deltas` is an int `0`\n                    delta = delta.repeat_interleave(batch_size // delta.shape[0], dim=0)\n                position_ids = position_ids.add(delta)\n                position_ids = position_ids.unsqueeze(0).expand(3, -1, -1)\n\n        outputs = self.model(\n            input_ids=None,\n            position_ids=position_ids,\n            attention_mask=attention_mask,\n            past_key_values=past_key_values,\n            inputs_embeds=inputs_embeds,\n            use_cache=use_cache,\n            output_attentions=output_attentions,\n            output_hidden_states=output_hidden_states,\n            return_dict=return_dict,\n            cache_position=cache_position,\n        )\n\n        hidden_states = outputs[0]\n        logits = self.lm_head(hidden_states)\n\n        loss = None\n        if labels is not None:\n            # Upcast to float if we need to compute the loss to avoid potential precision issues\n            logits = logits.float()\n            # Shift so that tokens < n predict n\n            shift_logits = logits[..., :-1, :].contiguous()\n            shift_labels = labels[..., 1:].contiguous()\n            # Flatten the tokens\n            loss_fct = CrossEntropyLoss()\n            shift_logits = shift_logits.view(-1, self.config.vocab_size)\n            shift_labels = shift_labels.view(-1)\n            # Enable model parallelism\n            shift_labels = shift_labels.to(shift_logits.device)\n            loss = loss_fct(shift_logits, shift_labels)\n\n        if not return_dict:\n            output = 
(logits,) + outputs[1:]\n            return (loss,) + output if loss is not None else output\n\n        return Qwen2_5_VLCausalLMOutputWithPast(\n            loss=loss,\n            logits=logits,\n            past_key_values=outputs.past_key_values,\n            hidden_states=outputs.hidden_states,\n            attentions=outputs.attentions,\n            rope_deltas=self.rope_deltas,\n        )\n\ndef monkey_patch_qwen2_5vl_forward():\n    Qwen2_5_VLForConditionalGeneration.forward = qwen2_5vl_forward\n\n# ----------------------- Force weights_only=False in torch.load (PyTorch 2.6 changed the default to True) -----------------------\nfrom deepspeed.runtime.checkpoint_engine.torch_checkpoint_engine import TorchCheckpointEngine\nfrom deepspeed.utils import logger, log_dist\ndef weights_only_load(self, path: str, map_location=None):\n    logger.info(f\"[Torch] Loading checkpoint from {path}...\")\n    partition = torch.load(path, map_location=map_location, weights_only=False)\n    logger.info(f\"[Torch] Loaded checkpoint from {path}.\")\n    return partition\n\ndef monkey_patch_torch_load():\n    TorchCheckpointEngine.load = weights_only_load\n\n\n\n"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/trainer/__init__.py",
    "content": "from .grpo_trainer import VLMGRPOTrainer\nfrom .grpo_config import GRPOConfig\n\n__all__ = [\"VLMGRPOTrainer\"]"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/trainer/grpo_config.py",
    "content": "# Copyright 2025 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nfrom transformers import TrainingArguments\n\n\n@dataclass\nclass GRPOConfig(TrainingArguments):\n    r\"\"\"\n    Configuration class for the [`GRPOTrainer`].\n\n    Only the parameters specific to GRPO training are listed here. For details on other parameters, refer to the\n    [`~transformers.TrainingArguments`] documentation.\n\n    Using [`~transformers.HfArgumentParser`] we can turn this class into\n    [argparse](https://docs.python.org/3/library/argparse#module-argparse) arguments that can be specified on the\n    command line.\n\n    Parameters:\n        > Parameters that control the model and reference model\n\n        model_init_kwargs (`dict[str, Any]` or `None`, *optional*, defaults to `None`):\n            Keyword arguments for [`~transformers.AutoModelForCausalLM.from_pretrained`], used when the `model`\n            argument of the [`GRPOTrainer`] is provided as a string.\n\n        > Parameters that control the data preprocessing\n\n        remove_unused_columns (`bool`, *optional*, defaults to `False`):\n            Whether to only keep the column `\"prompt\"` in the dataset. If you use a custom reward function that\n            requires any column other than `\"prompts\"` and `\"completions\"`, you should keep this to `False`.\n        max_prompt_length (`int` or `None`, *optional*, defaults to `512`):\n            Maximum length of the prompt. If the prompt is longer than this value, it will be truncated left.\n        num_generations (`int` or `None`, *optional*, defaults to `8`):\n            Number of generations per prompt to sample. The global batch size (num_processes * per_device_batch_size)\n            must be divisible by this value.\n        max_completion_length (`int` or `None`, *optional*, defaults to `256`):\n            Maximum length of the generated completion.\n        ds3_gather_for_generation (`bool`, *optional*, defaults to `True`):\n            This setting applies to DeepSpeed ZeRO-3. If enabled, the policy model weights are gathered for generation,\n            improving generation speed. However, disabling this option allows training models that exceed the VRAM\n            capacity of a single GPU, albeit at the cost of slower generation. Disabling this option is not compatible\n            with vLLM generation.\n\n        > Parameters that control generation\n\n        temperature (`float`, defaults to `0.9`):\n            Temperature for sampling. The higher the temperature, the more random the completions.\n        top_p (`float`, *optional*, defaults to `1.0`):\n            Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. 
Set to\n            `1.0` to consider all tokens.\n        top_k (`int` or `None`, *optional*, defaults to `50`):\n            Number of highest probability vocabulary tokens to keep for top-k-filtering. If `None`, top-k-filtering is\n            disabled.\n        min_p (`float` or `None`, *optional*, defaults to `None`):\n            Minimum token probability, which will be scaled by the probability of the most likely token. It must be a\n            value between `0.0` and `1.0`. Typical values are in the `0.01-0.2` range.\n        repetition_penalty (`float`, *optional*, defaults to `1.0`):\n            Float that penalizes new tokens based on whether they appear in the prompt and the generated text so far.\n            Values > `1.0` encourage the model to use new tokens, while values < `1.0` encourage the model to repeat\n            tokens.\n        cache_implementation (`str` or `None`, *optional*, defaults to `None`):\n            Implementation of the cache method for faster generation when use_vllm is set to False.\n\n        > Parameters that control generation acceleration powered by vLLM\n\n        use_vllm (`bool`, *optional*, defaults to `False`):\n            Whether to use vLLM for generating completions. If set to `True`, ensure that a GPU is kept unused for\n            training, as vLLM will require one for generation. vLLM must be installed (`pip install vllm`).\n        vllm_device (`str`, *optional*, defaults to `\"auto\"`):\n            Device where vLLM generation will run, e.g. `\"cuda:1\"`. If set to `\"auto\"` (default), the system will\n            automatically select the next available GPU after the last one used for training. This assumes that\n            training has not already occupied all available GPUs. If only one device is available, the device will be\n            shared between both training and vLLM.\n        vllm_gpu_memory_utilization (`float`, *optional*, defaults to `0.9`):\n            Ratio (between 0 and 1) of GPU memory to reserve for the model weights, activations, and KV cache on the\n            device dedicated to generation powered by vLLM. Higher values will increase the KV cache size and thus\n            improve the model's throughput. However, if the value is too high, it may cause out-of-memory (OOM) errors\n            during initialization.\n        vllm_dtype (`str`, *optional*, defaults to `\"auto\"`):\n            Data type to use for vLLM generation. If set to `\"auto\"`, the data type will be automatically determined\n            based on the model configuration. Find the supported values in the vLLM documentation.\n        vllm_max_model_len (`int` or `None`, *optional*, defaults to `None`):\n            If set, the `max_model_len` to use for vLLM. This could be useful when running with reduced\n            `vllm_gpu_memory_utilization`, leading to a reduced KV cache size. If not set, vLLM will use the model\n            context size, which might be much larger than the KV cache, leading to inefficiencies.\n        vllm_enable_prefix_caching (`bool`, *optional*, defaults to `True`):\n            Whether to enable prefix caching in vLLM. If set to `True` (default), ensure that the model and the hardware\n            support this feature.\n        vllm_guided_decoding_regex (`str` or `None`, *optional*, defaults to `None`):\n            Regex for vLLM guided decoding. 
If `None` (default), guided decoding is disabled.\n\n        > Parameters that control the training\n\n        learning_rate (`float`, *optional*, defaults to `1e-6`):\n            Initial learning rate for [`AdamW`] optimizer. The default value replaces that of\n            [`~transformers.TrainingArguments`].\n        beta (`float`, *optional*, defaults to `0.04`):\n            KL coefficient. If `0.0`, the reference model is not loaded, reducing memory usage and improving training\n            speed, but may be numerically unstable for long training runs.\n        num_iterations (`int`, *optional*, defaults to `1`):\n            Number of iterations per batch (denoted as μ in the algorithm).\n        epsilon (`float`, *optional*, defaults to `0.2`):\n            Epsilon value for clipping.\n        epsilon_high (`float` or `None`, *optional*, defaults to `None`):\n            Upper-bound epsilon value for clipping. If not specified, it defaults to the same value as the lower-bound\n            specified in argument `epsilon`. Paper [DAPO](https://huggingface.co/papers/2503.14476) recommends `0.28`.\n        reward_weights (`list[float]` or `None`, *optional*, defaults to `None`):\n            Weights for each reward function. Must match the number of reward functions. If `None`, all rewards are\n            weighted equally with weight `1.0`.\n        sync_ref_model (`bool`, *optional*, defaults to `False`):\n            Whether to synchronize the reference model with the active model every `ref_model_sync_steps` steps, using\n            the `ref_model_mixup_alpha` parameter. This synchronization originates from the\n            [TR-DPO](https://huggingface.co/papers/2404.09656) paper.\n        ref_model_mixup_alpha (`float`, *optional*, defaults to `0.6`):\n            α parameter from the [TR-DPO](https://huggingface.co/papers/2404.09656) paper, which controls the mix\n            between the current policy and the previous reference policy during updates. The reference policy is\n            updated according to the equation: `π_ref = α * π_θ + (1 - α) * π_ref_prev`. To use this parameter, you\n            must set `sync_ref_model=True`.\n        ref_model_sync_steps (`int`, *optional*, defaults to `512`):\n            τ parameter from the [TR-DPO](https://huggingface.co/papers/2404.09656) paper, which determines how\n            frequently the current policy is synchronized with the reference policy. To use this parameter, you must\n            set `sync_ref_model=True`.\n\n        > Parameters that control the logging\n\n        log_completions (`bool`, *optional*, defaults to `False`):\n            Whether to log a sample of (prompt, completion) pairs every `logging_steps` steps. If `rich` is\n            installed, it prints the sample. 
If `wandb` logging is enabled, it logs it to `wandb`.\n    \"\"\"\n\n    # Parameters that control the model and reference model\n    model_init_kwargs: Optional[dict] = field(\n        default=None,\n        metadata={\n            \"help\": \"Keyword arguments for `transformers.AutoModelForCausalLM.from_pretrained`, used when the `model` \"\n            \"argument of the `GRPOTrainer` is provided as a string.\"\n        },\n    )\n\n    # Parameters that control the data preprocessing\n    # The default value remove_unused_columns is overwritten from the parent class, because in GRPO we usually rely on\n    # additional columns to compute the reward\n    remove_unused_columns: Optional[bool] = field(\n        default=False,\n        metadata={\n            \"help\": \"Whether to only keep the column 'prompt' in the dataset. If you use a custom reward function \"\n            \"that requires any column other than 'prompts' and 'completions', you should keep this to `False`.\"\n        },\n    )\n    max_prompt_length: Optional[int] = field(\n        default=512,\n        metadata={\n            \"help\": \"Maximum length of the prompt. If the prompt is longer than this value, it will be truncated left.\"\n        },\n    )\n    num_generations: Optional[int] = field(\n        default=8,\n        metadata={\n            \"help\": \"Number of generations to sample. The global batch size (num_processes * per_device_batch_size) \"\n            \"must be divisible by this value.\"\n        },\n    )\n    max_completion_length: Optional[int] = field(\n        default=256,\n        metadata={\"help\": \"Maximum length of the generated completion.\"},\n    )\n    ds3_gather_for_generation: bool = field(\n        default=True,\n        metadata={\n            \"help\": \"This setting applies to DeepSpeed ZeRO-3. If enabled, the policy model weights are gathered for \"\n            \"generation, improving generation speed. However, disabling this option allows training models that \"\n            \"exceed the VRAM capacity of a single GPU, albeit at the cost of slower generation. Disabling this option \"\n            \"is not compatible with vLLM generation.\"\n        },\n    )\n\n    # Parameters that control generation\n    temperature: float = field(\n        default=0.9,\n        metadata={\"help\": \"Temperature for sampling. The higher the temperature, the more random the completions.\"},\n    )\n    top_p: float = field(\n        default=1.0,\n        metadata={\n            \"help\": \"Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. \"\n            \"Set to 1.0 to consider all tokens.\"\n        },\n    )\n    top_k: Optional[int] = field(\n        default=50,\n        metadata={\n            \"help\": \"Number of highest probability vocabulary tokens to keep for top-k-filtering. If `None`, \"\n            \"top-k-filtering is disabled.\"\n        },\n    )\n    min_p: Optional[float] = field(\n        default=None,\n        metadata={\n            \"help\": \"Minimum token probability, which will be scaled by the probability of the most likely token. It \"\n            \"must be a value between 0.0 and 1.0. Typical values are in the 0.01-0.2 range.\"\n        },\n    )\n    repetition_penalty: float = field(\n        default=1.0,\n        metadata={\n            \"help\": \"Float that penalizes new tokens based on whether they appear in the prompt and the generated \"\n            \"text so far. 
Values > 1.0 encourage the model to use new tokens, while values < 1.0 encourage the model \"\n            \"to repeat tokens.\"\n        },\n    )\n    cache_implementation: Optional[str] = field(\n        default=None,\n        metadata={\"help\": \"Implementation of the cache method for faster generation when use_vllm is set to False.\"},\n    )\n\n    # Parameters that control generation acceleration powered by vLLM\n    use_vllm: Optional[bool] = field(\n        default=False,\n        metadata={\n            \"help\": \"Whether to use vLLM for generating completions. If set to `True`, ensure that a GPU is kept \"\n            \"unused for training, as vLLM will require one for generation. vLLM must be installed \"\n            \"(`pip install vllm`).\"\n        },\n    )\n    vllm_device: Optional[str] = field(\n        default=\"auto\",\n        metadata={\n            \"help\": \"Device where vLLM generation will run, e.g. 'cuda:1'. If set to 'auto' (default), the system \"\n            \"will automatically select the next available GPU after the last one used for training. This assumes \"\n            \"that training has not already occupied all available GPUs.\"\n        },\n    )\n    vllm_gpu_memory_utilization: float = field(\n        default=0.9,\n        metadata={\n            \"help\": \"Ratio (between 0 and 1) of GPU memory to reserve for the model weights, activations, and KV \"\n            \"cache on the device dedicated to generation powered by vLLM. Higher values will increase the KV cache \"\n            \"size and thus improve the model's throughput. However, if the value is too high, it may cause \"\n            \"out-of-memory (OOM) errors during initialization.\"\n        },\n    )\n    vllm_dtype: Optional[str] = field(\n        default=\"auto\",\n        metadata={\n            \"help\": \"Data type to use for vLLM generation. If set to 'auto', the data type will be automatically \"\n            \"determined based on the model configuration. Find the supported values in the vLLM documentation.\"\n        },\n    )\n    vllm_max_model_len: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"If set, the `max_model_len` to use for vLLM. This could be useful when running with reduced \"\n            \"`vllm_gpu_memory_utilization`, leading to a reduced KV cache size. If not set, vLLM will use the model \"\n            \"context size, which might be much larger than the KV cache, leading to inefficiencies.\"\n        },\n    )\n    vllm_enable_prefix_caching: Optional[bool] = field(\n        default=True,\n        metadata={\n            \"help\": \"Whether to enable prefix caching in vLLM. If set to `True` (default), ensure that the model and \"\n            \"the hardware support this feature.\"\n        },\n    )\n    vllm_guided_decoding_regex: Optional[str] = field(\n        default=None,\n        metadata={\"help\": \"Regex for vLLM guided decoding. If `None` (default), guided decoding is disabled.\"},\n    )\n\n    # Parameters that control the training\n    learning_rate: float = field(\n        default=1e-6,\n        metadata={\n            \"help\": \"Initial learning rate for `AdamW` optimizer. The default value replaces that of \"\n            \"`transformers.TrainingArguments`.\"\n        },\n    )\n    beta: float = field(\n        default=0.04,\n        metadata={\n            \"help\": \"KL coefficient. 
If `0.0`, the reference model is not loaded, reducing memory usage and improving \"\n            \"training speed, but may be numerically unstable for long training runs.\"\n        },\n    )\n    num_iterations: int = field(\n        default=1,\n        metadata={\"help\": \"Number of iterations per batch (denoted as μ in the algorithm).\"},\n    )\n    epsilon: float = field(\n        default=0.2,\n        metadata={\"help\": \"Epsilon value for clipping.\"},\n    )\n    epsilon_high: Optional[float] = field(\n        default=None,\n        metadata={\n            \"help\": \"Upper-bound epsilon value for clipping. If not specified, it defaults to the same value as the \"\n            \"lower-bound specified in argument `epsilon`. Paper DAPO recommends `0.28`.\"\n        },\n    )\n    reward_weights: Optional[list[float]] = field(\n        default=None,\n        metadata={\n            \"help\": \"Weights for each reward function. Must match the number of reward functions. If `None`, all \"\n            \"rewards are weighted equally with weight `1.0`.\"\n        },\n    )\n    sync_ref_model: bool = field(\n        default=False,\n        metadata={\n            \"help\": \"Whether to synchronize the reference model with the active model every `ref_model_sync_steps` \"\n            \"steps, using the `ref_model_mixup_alpha` parameter.\"\n        },\n    )\n    ref_model_mixup_alpha: float = field(\n        default=0.6,\n        metadata={\n            \"help\": \"α parameter from the TR-DPO paper, which controls the mix between the current policy and the \"\n            \"previous reference policy during updates. The reference policy is updated according to the equation: \"\n            \"`π_ref = α * π_θ + (1 - α) * π_ref_prev`. To use this parameter, you must set `sync_ref_model=True`.\"\n        },\n    )\n    ref_model_sync_steps: int = field(\n        default=512,\n        metadata={\n            \"help\": \"τ parameter from the TR-DPO paper, which determines how frequently the current policy is \"\n            \"synchronized with the reference policy. To use this parameter, you must set `sync_ref_model=True`.\"\n        },\n    )\n\n    # Parameters that control the logging\n    log_completions: bool = field(\n        default=False,\n        metadata={\n            \"help\": \"Whether to log a sample of (prompt, completion) pairs every `logging_steps` steps. If `rich` is \"\n            \"installed, it prints the sample. If `wandb` logging is enabled, it logs it to `wandb`.\"\n        },\n    )"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/trainer/grpo_trainer.py",
    "content": "# Copyright 2025 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport textwrap\nfrom collections import defaultdict\nfrom typing import Any, Callable, Optional, Union, Sized\nfrom qwen_vl_utils import process_vision_info\nimport torch\nimport torch.utils.data\nimport transformers\nfrom datasets import Dataset, IterableDataset\nfrom packaging import version\nfrom transformers import (\n    AriaForConditionalGeneration,\n    AriaProcessor,\n    AutoModelForCausalLM,\n    AutoModelForSequenceClassification,\n    AutoProcessor,\n    AutoTokenizer,\n    GenerationConfig,\n    PreTrainedModel,\n    PreTrainedTokenizerBase,\n    Qwen2VLForConditionalGeneration,\n    Qwen2_5_VLForConditionalGeneration,\n    Trainer,\n    TrainerCallback,\n    is_wandb_available,\n)\nfrom transformers.integrations.deepspeed import is_deepspeed_zero3_enabled\nfrom transformers.utils import is_peft_available\n\nfrom trl.data_utils import apply_chat_template, is_conversational, maybe_apply_chat_template\nfrom trl.models import create_reference_model, prepare_deepspeed, unwrap_model_for_generation\nfrom trl.trainer.grpo_config import GRPOConfig\nfrom trl.trainer.utils import generate_model_card, get_comet_experiment_url\n# from trl import GRPOTrainer\n\nfrom accelerate.utils import is_peft_model, set_seed\nimport PIL.Image\n\nimport copy\nfrom torch.utils.data import Sampler\nimport warnings\n\nif is_peft_available():\n    from peft import PeftConfig, get_peft_model\n\nif is_wandb_available():\n    import wandb\n\nfrom vlm_modules.vlm_module import VLMBaseModule\n# What we call a reward function is a callable that takes a list of prompts and completions and returns a list of\n# rewards. 
When it's a string, it's a model ID, so it's loaded as a pretrained model.\nRewardFunc = Union[str, PreTrainedModel, Callable[[list, list], list[float]]]\n\n\nclass RepeatRandomSampler(Sampler):\n    \"\"\"\n    Sampler that repeats the indices of a dataset in a structured manner.\n\n    Args:\n        data_source (`Sized`):\n            Dataset to sample from.\n        mini_repeat_count (`int`):\n            Number of times to repeat each index per batch.\n        batch_size (`int`, *optional*, defaults to `1`):\n            Number of unique indices per batch.\n        repeat_count (`int`, *optional*, defaults to `1`):\n            Number of times to repeat the full sampling process.\n        seed (`int` or `None`, *optional*, defaults to `None`):\n            Random seed for reproducibility.\n    \"\"\"\n\n    def __init__(\n        self,\n        data_source: Sized,\n        mini_repeat_count: int,\n        batch_size: int = 1,\n        repeat_count: int = 1,\n        seed: Optional[int] = None,\n    ):\n        self.data_source = data_source\n        self.mini_repeat_count = mini_repeat_count\n        self.batch_size = batch_size\n        self.repeat_count = repeat_count\n        self.num_samples = len(data_source)\n        self.seed = seed\n        self.generator = torch.Generator()\n        if seed is not None:\n            self.generator.manual_seed(seed)\n\n    def __iter__(self):\n        indexes = torch.randperm(self.num_samples, generator=self.generator).tolist()\n        indexes = [indexes[i : i + self.batch_size] for i in range(0, len(indexes), self.batch_size)]\n        indexes = [chunk for chunk in indexes if len(chunk) == self.batch_size]\n\n        for chunk in indexes:\n            for _ in range(self.repeat_count):\n                for index in chunk:\n                    for _ in range(self.mini_repeat_count):\n                        yield index\n\n    def __len__(self) -> int:\n        return self.num_samples * self.mini_repeat_count * self.repeat_count\n\n\nclass VLMGRPOTrainer(Trainer):\n    \"\"\"\n    Trainer for the Group Relative Policy Optimization (GRPO) method. This algorithm was initially proposed in the\n    paper [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).\n\n    Example:\n\n    ```python\n    from datasets import load_dataset\n    from trl import GRPOTrainer\n\n    dataset = load_dataset(\"trl-lib/tldr\", split=\"train\")\n\n    trainer = GRPOTrainer(\n        model=\"Qwen/Qwen2-0.5B-Instruct\",\n        reward_funcs=\"weqweasdas/RM-Gemma-2B\",\n        train_dataset=dataset,\n    )\n\n    trainer.train()\n    ```\n\n    Args:\n        model (`Union[str, PreTrainedModel]`):\n            Model to be trained. Can be either:\n\n            - A string, being the *model id* of a pretrained model hosted inside a model repo on huggingface.co, or\n              a path to a *directory* containing model weights saved using\n              [`~transformers.PreTrainedModel.save_pretrained`], e.g., `'./my_model_directory/'`. The model is\n              loaded using [`~transformers.AutoModelForCausalLM.from_pretrained`] with the keywork arguments\n              in `args.model_init_kwargs`.\n            - A [`~transformers.PreTrainedModel`] object. Only causal language models are supported.\n        reward_funcs (`Union[RewardFunc, list[RewardFunc]]`):\n            Reward functions to be used for computing the rewards. 
To compute the rewards, we call all the reward\n            functions with the prompts and completions and sum the rewards. Can be either:\n\n            - A single reward function, such as:\n                - A string: The *model ID* of a pretrained model hosted inside a model repo on huggingface.co, or a\n                path to a *directory* containing model weights saved using\n                [`~transformers.PreTrainedModel.save_pretrained`], e.g., `'./my_model_directory/'`. The model is loaded\n                using [`~transformers.AutoModelForSequenceClassification.from_pretrained`] with `num_labels=1` and the\n                keyword arguments in `args.model_init_kwargs`.\n                - A [`~transformers.PreTrainedModel`] object: Only sequence classification models are supported.\n                - A custom reward function: The function is provided with the prompts and the generated completions,\n                  plus any additional columns in the dataset. It should return a list of rewards. For more details, see\n                  [Using a custom reward function](#using-a-custom-reward-function).\n            - A list of reward functions, where each item can independently be any of the above types. Mixing different\n            types within the list (e.g., a string model ID and a custom reward function) is allowed.\n        args ([`GRPOConfig`], *optional*, defaults to `None`):\n            Configuration for this trainer. If `None`, a default configuration is used.\n        train_dataset ([`~datasets.Dataset`] or [`~datasets.IterableDataset`]):\n            Dataset to use for training. It must include a column `\"prompt\"`. Any additional columns in the dataset is\n            ignored. The format of the samples can be either:\n\n            - [Standard](dataset_formats#standard): Each sample contains plain text.\n            - [Conversational](dataset_formats#conversational): Each sample contains structured messages (e.g., role\n              and content).\n        eval_dataset ([`~datasets.Dataset`], [`~datasets.IterableDataset`] or `dict[str, Union[Dataset, IterableDataset]]`):\n            Dataset to use for evaluation. It must meet the same requirements as `train_dataset`.\n        processing_class ([`~transformers.PreTrainedTokenizerBase`], *optional*, defaults to `None`):\n            Processing class used to process the data. The padding side must be set to \"left\". If `None`, the\n            processing class is loaded from the model's name with [`~transformers.AutoTokenizer.from_pretrained`].\n        reward_processing_classes (`Union[PreTrainedTokenizerBase, list[PreTrainedTokenizerBase]]`, *optional*, defaults to `None`):\n            Processing classes corresponding to the reward functions specified in `reward_funcs`. 
Can be either:\n\n            - A single processing class: Used when `reward_funcs` contains only one reward function.\n            - A list of processing classes: Must match the order and length of the reward functions in `reward_funcs`.\n            If set to `None`, or if an element of the list corresponding to a [`~transformers.PreTrainedModel`] is\n            `None`, the tokenizer for the model is automatically loaded using [`~transformers.AutoTokenizer.from_pretrained`].\n            For elements in `reward_funcs` that are custom reward functions (not [`~transformers.PreTrainedModel`]),\n            the corresponding entries in `reward_processing_classes` are ignored.\n        callbacks (list of [`~transformers.TrainerCallback`], *optional*, defaults to `None`):\n            List of callbacks to customize the training loop. Will add those to the list of default callbacks\n            detailed in [here](https://huggingface.co/docs/transformers/main_classes/callback).\n\n            If you want to remove one of the default callbacks used, use the [`~transformers.Trainer.remove_callback`]\n            method.\n        optimizers (`tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]`, *optional*, defaults to `(None, None)`):\n            A tuple containing the optimizer and the scheduler to use. Will default to an instance of [`AdamW`] on your\n            model and a scheduler given by [`get_linear_schedule_with_warmup`] controlled by `args`.\n        peft_config ([`~peft.PeftConfig`], *optional*, defaults to `None`):\n            PEFT configuration used to wrap the model. If `None`, the model is not wrapped.\n    \"\"\"\n\n    def __init__(\n        self,\n        model: Union[str, PreTrainedModel],\n        reward_funcs: Union[RewardFunc, list[RewardFunc]],\n        args: GRPOConfig = None,\n        vlm_module: VLMBaseModule = None,\n        train_dataset: Optional[Union[Dataset, IterableDataset]] = None,\n        eval_dataset: Optional[Union[Dataset, IterableDataset, dict[str, Union[Dataset, IterableDataset]]]] = None,\n        processing_class: Optional[PreTrainedTokenizerBase] = None,\n        reward_processing_classes: Optional[Union[PreTrainedTokenizerBase, list[PreTrainedTokenizerBase]]] = None,\n        callbacks: Optional[list[TrainerCallback]] = None,\n        optimizers: tuple[Optional[torch.optim.Optimizer], Optional[torch.optim.lr_scheduler.LambdaLR]] = (None, None),\n        peft_config: Optional[\"PeftConfig\"] = None,\n        freeze_vision_modules: Optional[bool] = False,\n        attn_implementation: str = \"flash_attention_2\",\n        torch_dtype: str = \"bfloat16\",\n        **kwargs,\n    ):\n        # Args\n        if args is None:\n            model_name = model if isinstance(model, str) else model.config._name_or_path\n            model_name = model_name.split(\"/\")[-1]\n            args = GRPOConfig(f\"{model_name}-GRPO\")\n        \n        self.vlm_module = vlm_module\n\n        # Models\n        # Trained model\n        model_init_kwargs = args.model_init_kwargs or {}\n        # FIXME\n        # Remember to modify it in the invernvl\n        model_init_kwargs[\"attn_implementation\"] = attn_implementation\n        if model_init_kwargs.get(\"torch_dtype\") is None:\n            model_init_kwargs[\"torch_dtype\"] = torch_dtype\n        \n        assert isinstance(model, str), \"model must be a string in the current implementation\"\n        model_id = model\n        torch_dtype = model_init_kwargs.get(\"torch_dtype\")\n        if 
isinstance(torch_dtype, torch.dtype) or torch_dtype == \"auto\" or torch_dtype is None:\n            pass  # torch_dtype is already a torch.dtype or \"auto\" or None\n        elif isinstance(torch_dtype, str):  # it's a str, but not \"auto\"\n            torch_dtype = getattr(torch, torch_dtype)\n        else:\n            raise ValueError(\n                \"Invalid `torch_dtype` passed to `GRPOConfig`. Expected either 'auto' or a string representing \"\n                f\"a `torch.dtype` (e.g., 'float32'), but got {torch_dtype}.\"\n            )\n        # Disable caching if gradient checkpointing is enabled (not supported)\n        model_init_kwargs[\"use_cache\"] = (\n            False if args.gradient_checkpointing else model_init_kwargs.get(\"use_cache\")\n        )\n\n        model_cls = self.vlm_module.get_model_class(model_id, model_init_kwargs)        \n        model = model_cls.from_pretrained(model_id, **model_init_kwargs)\n        \n        # for name, param in model.named_parameters():\n        #     if not param.requires_grad:\n        #         print(f\"Frozen: {name}\")\n        #     else:\n        #         print(f\"trainable:{name}\")\n        \n        # LoRA  \n        self.vision_modules_keywords = self.vlm_module.get_vision_modules_keywords()\n        if peft_config is not None:\n            print(\"Applying LoRA...\")\n            def find_all_linear_names(model, multimodal_keywords):\n                cls = torch.nn.Linear\n                lora_module_names = set()\n                for name, module in model.named_modules():\n                    # LoRA is not applied to the vision modules\n                    if any(mm_keyword in name for mm_keyword in multimodal_keywords):\n                        continue\n                    if isinstance(module, cls):\n                        lora_module_names.add(name)\n                for m in lora_module_names:  # needed for 16-bit\n                    if \"embed_tokens\" in m:\n                        lora_module_names.remove(m)\n                return list(lora_module_names)\n            target_modules = find_all_linear_names(model, self.vision_modules_keywords)\n            peft_config.target_modules = target_modules\n            model = get_peft_model(model, peft_config)\n\n        # Freeze vision modules\n        if freeze_vision_modules:\n            print(\"Freezing vision modules...\")\n            for n, p in model.named_parameters():\n                if any(keyword in n for keyword in self.vision_modules_keywords):\n                    p.requires_grad = False\n        # Compute the number of trainable parameters and print the parameter that is trainable\n        # for name, param in model.named_parameters():\n        #     print(name, param.requires_grad)\n        trainable_params = [p for p in model.parameters() if p.requires_grad]\n        total_params = sum(p.numel() for p in trainable_params)\n        # for n, p in model.named_parameters():\n        #     if p.requires_grad:\n        #         print(n, p.shape)\n        print(f\"Total trainable parameters: {total_params}\")\n\n        # Enable gradient checkpointing if requested\n        if args.gradient_checkpointing:\n            model = self._enable_gradient_checkpointing(model, args)\n\n        # Reference model\n        self.beta = args.beta\n        if self.beta == 0.0:\n            # If beta is 0.0, the reference model is not needed\n            self.ref_model = None\n        elif is_deepspeed_zero3_enabled():\n            self.ref_model = 
model_cls.from_pretrained(model_id, **model_init_kwargs)\n        elif is_peft_model(model):\n            # If PEFT is used, the reference model is not needed since the adapter can be disabled\n            # to revert to the initial model.\n            self.ref_model = None\n        else:\n            # If PEFT configuration is not provided, create a reference model based on the initial model.\n            self.ref_model = create_reference_model(model)\n\n        if processing_class is None:\n            tokenizer = AutoTokenizer.from_pretrained(\n            model_id,\n            local_files_only=False,\n            use_fast=True,\n            trust_remote_code=True\n        )\n            processing_cls = self.vlm_module.get_processing_class()\n            processing_class = processing_cls.from_pretrained(model_id, trust_remote_code=model_init_kwargs.get(\"trust_remote_code\", None))\n            processing_class.tokenizer = tokenizer\n            for component, processing_keyword in self.vlm_module.get_custom_processing_keywords():\n                if processing_keyword in kwargs:\n                    # If we cannot find component in processing_class, return the processing_class itself\n                    processing_component = getattr(processing_class, component, processing_class)\n                    setattr(processing_component, processing_keyword, kwargs[processing_keyword])\n            if getattr(processing_class, \"tokenizer\",  None) is not None:\n                pad_token_id = processing_class.tokenizer.pad_token_id\n                processing_class.pad_token_id = pad_token_id\n                processing_class.eos_token_id = processing_class.tokenizer.eos_token_id\n            else:\n                assert isinstance(processing_class, PreTrainedTokenizerBase), \"processing_class must be an instance of PreTrainedTokenizerBase if it has no tokenizer attribute\"\n                pad_token_id = processing_class.pad_token_id\n\n        self.vlm_module.post_model_init(model, processing_class)\n        self.vlm_module.post_model_init(self.ref_model, processing_class)\n\n        # Reward functions\n        if not isinstance(reward_funcs, list):\n            reward_funcs = [reward_funcs]\n        for i, reward_func in enumerate(reward_funcs):\n            if isinstance(reward_func, str):\n                reward_funcs[i] = AutoModelForSequenceClassification.from_pretrained(\n                    reward_func, num_labels=1, **model_init_kwargs\n                )\n        self.reward_funcs = reward_funcs\n\n        # Reward processing class\n        if reward_processing_classes is None:\n            reward_processing_classes = [None] * len(reward_funcs)\n        elif not isinstance(reward_processing_classes, list):\n            reward_processing_classes = [reward_processing_classes]\n        else:\n            if len(reward_processing_classes) != len(reward_funcs):\n                raise ValueError(\"The number of reward processing classes must match the number of reward functions.\")\n\n        for i, (reward_processing_class, reward_func) in enumerate(zip(reward_processing_classes, reward_funcs)):\n            if isinstance(reward_func, PreTrainedModel):\n                if reward_processing_class is None:\n                    reward_processing_class = AutoTokenizer.from_pretrained(reward_func.config._name_or_path)\n                if reward_processing_class.pad_token_id is None:\n                    reward_processing_class.pad_token = reward_processing_class.eos_token\n                # 
The reward model computes the reward for the latest non-padded token in the input sequence.\n                # So it's important to set the pad token ID to the padding token ID of the processing class.\n                reward_func.config.pad_token_id = reward_processing_class.pad_token_id\n                reward_processing_classes[i] = reward_processing_class\n        self.reward_processing_classes = reward_processing_classes\n\n        # Data collator\n        def data_collator(features):  # No data collation is needed in GRPO\n            return features\n\n        # Training arguments\n        self.max_prompt_length = args.max_prompt_length\n        self.max_prompt_length = None\n        if args.max_prompt_length is not None:\n            warnings.warn(\"Setting max_prompt_length is currently not supported, it has been set to None\")\n\n        self.max_completion_length = args.max_completion_length  # = |o_i| in the GRPO paper\n        self.num_generations = args.num_generations  # = G in the GRPO paper\n        self.generation_config = GenerationConfig(\n            max_new_tokens=self.max_completion_length,\n            do_sample=True,  \n            temperature=1,\n            pad_token_id=pad_token_id,\n        )\n        if hasattr(self.vlm_module, \"get_eos_token_id\"): # For InternVL\n            self.generation_config.eos_token_id = self.vlm_module.get_eos_token_id(processing_class)\n        self.beta = args.beta\n        self.epsilon_low = args.epsilon\n        self.epsilon_high = args.epsilon_high if args.epsilon_high is not None else args.epsilon\n\n        # Multi-step\n        self.num_iterations = args.num_iterations  # = 𝜇 in the GRPO paper\n        # Tracks the number of iterations (forward + backward passes), including those within a gradient accumulation cycle\n        self._step = 0\n        # Buffer the batch to reuse generated outputs across multiple updates\n        self._buffered_inputs = [None] * args.gradient_accumulation_steps\n\n        # The trainer estimates the number of FLOPs (floating-point operations) using the number of elements in the\n        # input tensor associated with the key \"input_ids\". However, in GRPO, the sampled data does not include the\n        # \"input_ids\" key. Instead, the available keys is \"prompt\". 
As a result, the trainer issues the warning:\n        # \"Could not estimate the number of tokens of the input, floating-point operations will not be computed.\" To\n        # suppress this warning, we set the \"estimate_tokens\" key in the model's \"warnings_issued\" dictionary to True.\n        # This acts as a flag to indicate that the warning has already been issued.\n        model.warnings_issued[\"estimate_tokens\"] = True\n\n        # Initialize the metrics\n        self._metrics = defaultdict(list)\n\n        super().__init__(\n            model=model,\n            args=args,\n            data_collator=data_collator,\n            train_dataset=train_dataset,\n            eval_dataset=eval_dataset,\n            processing_class=processing_class,\n            callbacks=callbacks,\n            optimizers=optimizers,\n        )\n\n        # Check if the per_device_train/eval_batch_size * num processes can be divided by the number of generations\n        num_processes = self.accelerator.num_processes\n        global_batch_size = args.per_device_train_batch_size * num_processes\n        possible_values = [n_gen for n_gen in range(2, global_batch_size + 1) if (global_batch_size) % n_gen == 0]\n        if self.num_generations not in possible_values:\n            raise ValueError(\n                f\"The global train batch size ({num_processes} x {args.per_device_train_batch_size}) must be evenly \"\n                f\"divisible by the number of generations per prompt ({self.num_generations}). Given the current train \"\n                f\"batch size, the valid values for the number of generations are: {possible_values}.\"\n            )\n        if self.args.eval_strategy != \"no\":\n            global_batch_size = args.per_device_eval_batch_size * num_processes\n            possible_values = [n_gen for n_gen in range(2, global_batch_size + 1) if (global_batch_size) % n_gen == 0]\n            if self.num_generations not in possible_values:\n                raise ValueError(\n                    f\"The global eval batch size ({num_processes} x {args.per_device_eval_batch_size}) must be evenly \"\n                    f\"divisible by the number of generations per prompt ({self.num_generations}). Given the current \"\n                    f\"eval batch size, the valid values for the number of generations are: {possible_values}.\"\n                )\n\n        # Ensure each process receives a unique seed to prevent duplicate completions when generating with\n        # transformers if num_generations exceeds per_device_train_batch_size. We could skip it if we use vLLM, but\n        # it's safer to set it in all cases.\n        set_seed(args.seed, device_specific=True)\n\n        # Gradient accumulation requires scaled loss. Normally, loss scaling in the parent class depends on whether the\n        # model accepts loss-related kwargs. Since we compute our own loss, this check is irrelevant. 
We set\n        # self.model_accepts_loss_kwargs to False to enable scaling.\n        self.model_accepts_loss_kwargs = False\n\n        if self.ref_model is not None:\n            # if self.is_deepspeed_enabled:\n            if is_deepspeed_zero3_enabled():\n                self.ref_model = prepare_deepspeed(self.ref_model, self.accelerator)\n            else:\n                self.ref_model = self.accelerator.prepare_model(self.ref_model, evaluation_mode=True)\n\n        for i, reward_func in enumerate(self.reward_funcs):\n            if isinstance(reward_func, PreTrainedModel):\n                self.reward_funcs[i] = self.accelerator.prepare_model(reward_func, evaluation_mode=True)\n\n    def _enable_gradient_checkpointing(self, model: PreTrainedModel, args: GRPOConfig) -> PreTrainedModel:\n        \"\"\"Enables gradient checkpointing for the model.\"\"\"\n        # Ensure use_cache is disabled\n        model.config.use_cache = False\n\n        # Enable gradient checkpointing on the base model for PEFT\n        if is_peft_model(model):\n            model.base_model.gradient_checkpointing_enable()\n        # Enable gradient checkpointing for non-PEFT models\n        else:\n            if getattr(model, \"language_model\", None) is not None:\n                # For InternVL; these operations are copied from the original training script of InternVL\n                model.language_model.config.use_cache = False\n                model.vision_model.gradient_checkpointing = True\n                model.vision_model.encoder.gradient_checkpointing = True\n                model.language_model._set_gradient_checkpointing()\n                # This line is necessary, otherwise the `model.gradient_checkpointing_enable()` will be executed during the training process, leading to an error since InternVL does not support this operation.\n                args.gradient_checkpointing = False\n            else:\n                model.gradient_checkpointing_enable()\n\n        gradient_checkpointing_kwargs = args.gradient_checkpointing_kwargs or {}\n        use_reentrant = (\n            \"use_reentrant\" not in gradient_checkpointing_kwargs or gradient_checkpointing_kwargs[\"use_reentrant\"]\n        )\n\n        if use_reentrant:\n            model.enable_input_require_grads()\n\n        return model\n    \n    def _set_signature_columns_if_needed(self):\n        # If `self.args.remove_unused_columns` is True, non-signature columns are removed.\n        # By default, this method sets `self._signature_columns` to the model's expected inputs.\n        # In GRPOTrainer, we preprocess data, so using the model's signature columns doesn't work.\n        # Instead, we set them to the columns expected by the `training_step` method, hence the override.\n        if self._signature_columns is None:\n            self._signature_columns = [\"prompt\"]\n\n\n    # Get the per-token log probabilities for the completions for the model and the reference model\n    def _get_per_token_logps(self, model, input_ids, attention_mask, **custom_multimodal_inputs):\n        logits = model(input_ids=input_ids, attention_mask=attention_mask, **custom_multimodal_inputs).logits  # (B, L, V)\n        logits = logits[:, :-1, :]  # (B, L-1, V), exclude the last logit: it corresponds to the next token pred\n        input_ids = input_ids[:, 1:]  # (B, L-1), exclude the first input ID since we don't have logits for it\n        # Compute the log probabilities for the input tokens. 
Use a loop to reduce memory peak.\n        per_token_logps = []\n        for logits_row, input_ids_row in zip(logits, input_ids):\n            log_probs = logits_row.log_softmax(dim=-1)\n            token_log_prob = torch.gather(log_probs, dim=1, index=input_ids_row.unsqueeze(1)).squeeze(1)\n            per_token_logps.append(token_log_prob)\n        return torch.stack(per_token_logps)\n\n\n    def _prepare_inputs(self, inputs):\n        # Simple pass-through, just like original\n        return inputs\n\n    def _get_key_from_inputs(self, x, key):\n        ele = x.get(key, None)\n        assert ele is not None, f\"The key {key} is not found in the input\"\n        if isinstance(ele, list):\n            return [e for e in ele]\n        else:\n            return [ele]\n\n    def _generate_and_score_completions(self, inputs: dict[str, Union[torch.Tensor, Any]], model) -> dict[str, Union[torch.Tensor, Any]]:\n        device = self.accelerator.device\n        prompts = [x[\"prompt\"] for x in inputs]\n        prompts_text = self.vlm_module.prepare_prompt(self.processing_class, inputs)\n        # Handle both pre-loaded images and image paths\n        images = []\n        for x in inputs:\n            if \"image\" in x:\n                imgs = self._get_key_from_inputs(x, \"image\")\n            elif \"image_path\" in x and x[\"image_path\"] is not None:\n                imgs = [PIL.Image.open(p) for p in self._get_key_from_inputs(x, \"image_path\")]\n            else:\n                imgs = []\n\n            for img in imgs:\n                try:\n                    # Ensure minimum dimensions of 28 pixels\n                    w, h = img.size\n                    if w < 28 or h < 28:\n                        # Calculate new dimensions maintaining aspect ratio\n                        if w < h:\n                            new_w = 28\n                            new_h = int(h * (28/w))\n                        else:\n                            new_h = 28\n                            new_w = int(w * (28/h))\n                        # Resize only when the image is below the minimum size\n                        img = img.resize((new_w, new_h), PIL.Image.Resampling.LANCZOS)\n                except Exception:\n                    pass\n                images.append(img)\n                \n        prompt_inputs = self.vlm_module.prepare_model_inputs(\n            self.processing_class,\n            prompts_text,\n            images,\n            return_tensors=\"pt\",\n            padding=True,\n            padding_side=\"left\",\n            add_special_tokens=False,\n        )\n        prompt_inputs = super()._prepare_inputs(prompt_inputs)\n        prompt_ids, prompt_mask = prompt_inputs[\"input_ids\"], prompt_inputs[\"attention_mask\"]\n        \n        # Generate completions\n        with unwrap_model_for_generation(model, self.accelerator) as unwrapped_model:\n            generate_returned_result = unwrapped_model.generate(\n                **{k: v for k, v in prompt_inputs.items() if k not in self.vlm_module.get_non_generate_params()}, \n                generation_config=self.generation_config,\n            )\n            prompt_length = prompt_ids.size(1)\n            if not self.vlm_module.is_embeds_input():\n                prompt_completion_ids = generate_returned_result\n                prompt_ids = prompt_completion_ids[:, :prompt_length]\n                completion_ids = prompt_completion_ids[:, prompt_length:]\n            else:\n                # In this case, the input of the LLM backbone is the embedding of the combination of the image and text prompt\n                # So the 
returned result of the `generate` method only contains the completion ids\n                completion_ids = generate_returned_result\n                prompt_completion_ids = torch.cat([prompt_ids, completion_ids], dim=1)\n            \n        # Mask everything after the first EOS token\n        is_eos = completion_ids == self.processing_class.eos_token_id\n        eos_idx = torch.full((is_eos.size(0),), is_eos.size(1), dtype=torch.long, device=device)\n        eos_idx[is_eos.any(dim=1)] = is_eos.int().argmax(dim=1)[is_eos.any(dim=1)]\n        sequence_indices = torch.arange(is_eos.size(1), device=device).expand(is_eos.size(0), -1)\n        completion_mask = (sequence_indices <= eos_idx.unsqueeze(1)).int()\n\n        # Concatenate prompt_mask with completion_mask for logit computation\n        attention_mask = torch.cat([prompt_mask, completion_mask], dim=1)  # (B, P+C)\n\n        # Get the multimodal inputs\n        multimodal_keywords = self.vlm_module.get_custom_multimodal_keywords()\n        multimodal_inputs = {k: prompt_inputs[k] if k in prompt_inputs else None for k in multimodal_keywords}\n        with torch.no_grad():\n            # When using num_iterations == 1, old_per_token_logps == per_token_logps, so we can skip its\n            # computation here, and use per_token_logps.detach() instead.\n            if self.num_iterations > 1:\n                old_per_token_logps = self._get_per_token_logps(\n                    model, prompt_completion_ids, attention_mask, **multimodal_inputs\n                )\n                old_per_token_logps = old_per_token_logps[:, prompt_length - 1:]\n            else:\n                old_per_token_logps = None\n\n            if self.beta == 0.0:\n                ref_per_token_logps = None\n            elif self.ref_model is not None:\n                ref_per_token_logps = self._get_per_token_logps(\n                    self.ref_model, prompt_completion_ids, attention_mask, **multimodal_inputs\n                )\n            else:\n                with self.accelerator.unwrap_model(model).disable_adapter():\n                    ref_per_token_logps = self._get_per_token_logps(\n                        model, prompt_completion_ids, attention_mask, **multimodal_inputs\n                    )\n        if ref_per_token_logps is not None:\n            ref_per_token_logps = ref_per_token_logps[:, prompt_length - 1:]\n\n        # Decode the generated completions\n        completions = self.processing_class.batch_decode(completion_ids, skip_special_tokens=True)\n        if is_conversational(inputs[0]):\n            completions = [[{\"role\": \"assistant\", \"content\": completion}] for completion in completions]\n\n        # Compute the rewards\n        # No need to duplicate prompts as we're not generating multiple completions per prompt\n\n        rewards_per_func = torch.zeros(len(prompts), len(self.reward_funcs), device=device)\n        for i, (reward_func, reward_processing_class) in enumerate(\n            zip(self.reward_funcs, self.reward_processing_classes)\n        ):\n            if isinstance(reward_func, PreTrainedModel):\n                if is_conversational(inputs[0]):\n                    messages = [{\"messages\": p + c} for p, c in zip(prompts, completions)]\n                    texts = [apply_chat_template(x, reward_processing_class)[\"text\"] for x in messages]\n                else:\n                    texts = [p + c for p, c in zip(prompts, completions)]\n                reward_inputs = reward_processing_class(\n                    
texts, return_tensors=\"pt\", padding=True, padding_side=\"right\", add_special_tokens=False\n                )\n                reward_inputs = super()._prepare_inputs(reward_inputs)\n                with torch.inference_mode():\n                    rewards_per_func[:, i] = reward_func(**reward_inputs).logits[:, 0]  # Shape (B*G,)\n            else:\n                # Repeat all input columns (but \"prompt\" and \"completion\") to match the number of generations\n                reward_kwargs = {key: [] for key in inputs[0].keys() if key not in [\"prompt\", \"completion\"]}\n                for key in reward_kwargs:\n                    for example in inputs:\n                        # No need to duplicate prompts as we're not generating multiple completions per prompt\n                        # reward_kwargs[key].extend([example[key]] * self.num_generations)\n                        reward_kwargs[key].extend([example[key]])\n                output_reward_func = reward_func(prompts=prompts, completions=completions,**reward_kwargs)\n                rewards_per_func[:, i] = torch.tensor(output_reward_func, dtype=torch.float32, device=device)\n\n        # Gather rewards across processes\n        rewards_per_func = self.accelerator.gather(rewards_per_func)\n        \n        # Sum the rewards from all reward functions\n        rewards = rewards_per_func.sum(dim=1)\n        \n        # Compute grouped-wise rewards\n        # Each group consists of num_generations completions for the same prompt\n        mean_grouped_rewards = rewards.view(-1, self.num_generations).mean(dim=1)\n        std_grouped_rewards = rewards.view(-1, self.num_generations).std(dim=1)\n        \n        # Normalize the rewards to compute the advantages\n        mean_grouped_rewards = mean_grouped_rewards.repeat_interleave(self.num_generations, dim=0)\n        std_grouped_rewards = std_grouped_rewards.repeat_interleave(self.num_generations, dim=0)\n        advantages = (rewards - mean_grouped_rewards) / (std_grouped_rewards + 1e-4)\n        \n        # Get only the local slice of advantages\n        process_slice = slice(\n            self.accelerator.process_index * len(prompts),\n            (self.accelerator.process_index + 1) * len(prompts),\n        )\n        advantages = advantages[process_slice]\n\n        # Log the metrics\n        completion_length = self.accelerator.gather_for_metrics(completion_mask.sum(1)).float().mean().item()\n        self._metrics[\"completion_length\"].append(completion_length)\n\n        reward_per_func = self.accelerator.gather_for_metrics(rewards_per_func).mean(0)\n        for i, reward_func in enumerate(self.reward_funcs):\n            if isinstance(reward_func, PreTrainedModel):\n                reward_func_name = reward_func.config._name_or_path.split(\"/\")[-1]\n            else:\n                reward_func_name = reward_func.__name__\n            self._metrics[f\"rewards/{reward_func_name}\"].append(reward_per_func[i].item())\n\n        self._metrics[\"reward\"].append(self.accelerator.gather_for_metrics(rewards).mean().item())\n\n        self._metrics[\"reward_std\"].append(self.accelerator.gather_for_metrics(std_grouped_rewards).mean().item())\n\n        return {\n            \"prompt_ids\": prompt_ids,\n            \"prompt_mask\": prompt_mask,\n            \"completion_ids\": completion_ids,\n            \"completion_mask\": completion_mask,\n            \"old_per_token_logps\": old_per_token_logps,\n            \"ref_per_token_logps\": ref_per_token_logps,\n            
\"advantages\": advantages,\n            \"multimodal_inputs\": multimodal_inputs\n        }\n\n    def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):\n        if return_outputs:\n            raise ValueError(\"The GRPOTrainer does not support returning outputs\")\n    \n        # Check if we need to generate new completions or use buffered ones\n        if self.state.global_step % self.num_iterations == 0:\n            inputs = self._generate_and_score_completions(inputs, model)\n            self._buffered_inputs[self._step % self.args.gradient_accumulation_steps] = inputs\n        else:\n            inputs = self._buffered_inputs[self._step % self.args.gradient_accumulation_steps]\n        self._step += 1\n\n        # Get the prepared inputs\n        prompt_ids, prompt_mask = inputs[\"prompt_ids\"], inputs[\"prompt_mask\"]\n        completion_ids, completion_mask = inputs[\"completion_ids\"], inputs[\"completion_mask\"]\n        multimodal_inputs = inputs[\"multimodal_inputs\"]\n        \n        # Concatenate for full sequence\n        input_ids = torch.cat([prompt_ids, completion_ids], dim=1)\n        attention_mask = torch.cat([prompt_mask, completion_mask], dim=1)\n\n        # Get the current policy's log probabilities\n        per_token_logps = self._get_per_token_logps(model, input_ids, attention_mask, **multimodal_inputs)\n        # Get rid of the prompt (-1 because of the shift done in get_per_token_logps)\n        per_token_logps = per_token_logps[:, prompt_ids.size(1) - 1:]\n\n        # Get the advantages from inputs\n        advantages = inputs[\"advantages\"]\n\n        # When using num_iterations == 1, old_per_token_logps == per_token_logps, so we can skip its computation\n        # and use per_token_logps.detach() instead\n        old_per_token_logps = inputs[\"old_per_token_logps\"] if self.num_iterations > 1 else per_token_logps.detach()\n\n        # Compute the policy ratio and clipped version\n        coef_1 = torch.exp(per_token_logps - old_per_token_logps)\n        coef_2 = torch.clamp(coef_1, 1 - self.epsilon_low, 1 + self.epsilon_high)\n        per_token_loss1 = coef_1 * advantages.unsqueeze(1)\n        per_token_loss2 = coef_2 * advantages.unsqueeze(1)\n        per_token_loss = -torch.min(per_token_loss1, per_token_loss2)\n\n        # Add KL penalty if beta > 0\n        if self.beta > 0:\n            ref_per_token_logps = inputs[\"ref_per_token_logps\"]\n            per_token_kl = torch.exp(ref_per_token_logps - per_token_logps) - (ref_per_token_logps - per_token_logps) - 1\n            per_token_loss = per_token_loss + self.beta * per_token_kl\n\n            # Log KL divergence\n            mean_kl = ((per_token_kl * completion_mask).sum(dim=1) / completion_mask.sum(dim=1)).mean()\n            self._metrics[\"kl\"].append(self.accelerator.gather_for_metrics(mean_kl).mean().item())\n\n        # Compute final loss\n        loss = ((per_token_loss * completion_mask).sum(dim=1) / completion_mask.sum(dim=1)).mean()\n\n        # Log clip ratio\n        is_clipped = (per_token_loss1 < per_token_loss2).float()\n        clip_ratio = (is_clipped * completion_mask).sum() / completion_mask.sum()\n        self._metrics[\"clip_ratio\"].append(self.accelerator.gather_for_metrics(clip_ratio).mean().item())\n\n        return loss\n\n    def log(self, logs: dict[str, float], start_time: Optional[float] = None) -> None:\n        metrics = {key: sum(val) / len(val) for key, val in self._metrics.items()}  # average the metrics\n        logs = 
{**logs, **metrics}\n        if version.parse(transformers.__version__) >= version.parse(\"4.47.0.dev0\"):\n            super().log(logs, start_time)\n        else:  # transformers<=4.46\n            super().log(logs)\n        self._metrics.clear()\n\n    def create_model_card(\n        self,\n        model_name: Optional[str] = None,\n        dataset_name: Optional[str] = None,\n        tags: Union[str, list[str], None] = None,\n    ):\n        \"\"\"\n        Creates a draft of a model card using the information available to the `Trainer`.\n\n        Args:\n            model_name (`str` or `None`, *optional*, defaults to `None`):\n                Name of the model.\n            dataset_name (`str` or `None`, *optional*, defaults to `None`):\n                Name of the dataset used for training.\n            tags (`str`, `list[str]` or `None`, *optional*, defaults to `None`):\n                Tags to be associated with the model card.\n        \"\"\"\n        if not self.is_world_process_zero():\n            return\n\n        if hasattr(self.model.config, \"_name_or_path\") and not os.path.isdir(self.model.config._name_or_path):\n            base_model = self.model.config._name_or_path\n        else:\n            base_model = None\n\n        tags = tags or []\n        if isinstance(tags, str):\n            tags = [tags]\n\n        if hasattr(self.model.config, \"unsloth_version\"):\n            tags.append(\"unsloth\")\n\n        citation = textwrap.dedent(\n            \"\"\"\\\n            @article{zhihong2024deepseekmath,\n                title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},\n                author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo},\n                year         = 2024,\n                eprint       = {arXiv:2402.03300},\n            }\n            \"\"\"\n        )\n\n        model_card = generate_model_card(\n            base_model=base_model,\n            model_name=model_name,\n            hub_model_id=self.hub_model_id,\n            dataset_name=dataset_name,\n            tags=tags,\n            wandb_url=wandb.run.get_url() if is_wandb_available() and wandb.run is not None else None,\n            comet_url=get_comet_experiment_url(),\n            trainer_name=\"GRPO\",\n            trainer_citation=citation,\n            paper_title=\"DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models\",\n            paper_id=\"2402.03300\",\n        )\n\n        model_card.save(os.path.join(self.args.output_dir, \"README.md\"))\n\n    def _get_train_sampler(self) -> Sampler:\n        \"\"\"Returns a sampler that ensures proper data sampling for GRPO training.\"\"\"\n        effective_batch_size = (\n            self.args.per_device_train_batch_size\n            * self.accelerator.num_processes\n            * self.args.gradient_accumulation_steps\n        )\n        \n        return RepeatRandomSampler(\n            data_source=self.train_dataset,\n            mini_repeat_count=self.num_generations,\n            batch_size=effective_batch_size // self.num_generations,\n            repeat_count=self.num_iterations,\n            seed=self.args.seed,\n        )\n\n    def _get_eval_sampler(self, eval_dataset) -> Sampler:\n        \"\"\"Returns a sampler for evaluation.\"\"\"\n        return RepeatRandomSampler(\n            data_source=eval_dataset,\n            mini_repeat_count=self.num_generations,\n            seed=self.args.seed,\n        )"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/utils/__init__.py",
    "content": ""
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/utils/callbacks.py",
    "content": "#!/usr/bin/env python\n# coding=utf-8\n# Copyright 2025 The HuggingFace Inc. team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport subprocess\nfrom typing import List\n\nfrom transformers import TrainerCallback\nfrom transformers.trainer_callback import TrainerControl, TrainerState\nfrom transformers.training_args import TrainingArguments\n\nfrom .evaluation import run_benchmark_jobs\nfrom .hub import push_to_hub_revision\n\n\ndef is_slurm_available() -> bool:\n    # returns true if a slurm queueing system is available\n    try:\n        subprocess.run([\"sinfo\"], check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        return True\n    except FileNotFoundError:\n        return False\n\n\nclass DummyConfig:\n    def __init__(self, **kwargs):\n        for k, v in kwargs.items():\n            setattr(self, k, v)\n\n\nclass PushToHubRevisionCallback(TrainerCallback):\n    def __init__(self, model_config) -> None:\n        self.model_config = model_config\n\n    def on_save(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):\n        if state.is_world_process_zero:\n            global_step = state.global_step\n\n            # WARNING: if you use dataclasses.replace(args, ...) the accelerator dist state will be broken, so I do this workaround\n            # Also if you instantiate a new SFTConfig, the accelerator dist state will be broken\n            dummy_config = DummyConfig(\n                hub_model_id=args.hub_model_id,\n                hub_model_revision=f\"{args.hub_model_revision}-step-{global_step:09d}\",\n                output_dir=f\"{args.output_dir}/checkpoint-{global_step}\",\n                system_prompt=args.system_prompt,\n            )\n\n            future = push_to_hub_revision(\n                dummy_config, extra_ignore_patterns=[\"*.pt\"]\n            )  # don't push the optimizer states\n\n            if is_slurm_available():\n                dummy_config.benchmarks = args.benchmarks\n\n                def run_benchmark_callback(_):\n                    print(f\"Checkpoint {global_step} pushed to hub.\")\n                    run_benchmark_jobs(dummy_config, self.model_config)\n\n                future.add_done_callback(run_benchmark_callback)\n\n\nCALLBACKS = {\n    \"push_to_hub_revision\": PushToHubRevisionCallback,\n}\n\n\ndef get_callbacks(train_config, model_config) -> List[TrainerCallback]:\n    callbacks = []\n    for callback_name in train_config.callbacks:\n        if callback_name not in CALLBACKS:\n            raise ValueError(f\"Callback {callback_name} not found in CALLBACKS.\")\n        callbacks.append(CALLBACKS[callback_name](model_config))\n\n    return callbacks"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/utils/evaluation.py",
    "content": "import subprocess\nfrom typing import TYPE_CHECKING, Dict, Union\n\nfrom .hub import get_gpu_count_for_vllm, get_param_count_from_repo_id\n\n\nif TYPE_CHECKING:\n    from trl import GRPOConfig, SFTConfig, ModelConfig\n\nimport os\n\n\n# We need a special environment setup to launch vLLM from within Slurm training jobs.\n# - Reference code: https://github.com/huggingface/brrr/blob/c55ba3505686d690de24c7ace6487a5c1426c0fd/brrr/lighteval/one_job_runner.py#L105\n# - Slack thread: https://huggingface.slack.com/archives/C043JTYE1MJ/p1726566494958269\nuser_home_directory = os.path.expanduser(\"~\")\nVLLM_SLURM_PREFIX = [\n    \"env\",\n    \"-i\",\n    \"bash\",\n    \"-c\",\n    f\"for f in /etc/profile.d/*.sh; do source $f; done; export HOME={user_home_directory}; sbatch \",\n]\n\n\ndef register_lighteval_task(\n    configs: Dict[str, str], eval_suite: str, task_name: str, task_list: str, num_fewshot: int = 0\n):\n    \"\"\"Registers a LightEval task configuration.\n\n    - Core tasks can be added from this table: https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/tasks_table.jsonl\n    - Custom tasks that require their own metrics / scripts, should be stored in scripts/evaluation/extended_lighteval_tasks\n\n    Args:\n        configs (Dict[str, str]): The dictionary to store the task configuration.\n        eval_suite (str, optional): The evaluation suite.\n        task_name (str): The name of the task.\n        task_list (str): The comma-separated list of tasks in the format \"extended|{task_name}|{num_fewshot}|0\" or \"lighteval|{task_name}|{num_fewshot}|0\".\n        num_fewshot (int, optional): The number of few-shot examples. Defaults to 0.\n        is_custom_task (bool, optional): Whether the task is a custom task. 
Defaults to False.\n    \"\"\"\n    # Format task list in lighteval format\n    task_list = \",\".join(f\"{eval_suite}|{task}|{num_fewshot}|0\" for task in task_list.split(\",\"))\n    configs[task_name] = task_list\n\n\nLIGHTEVAL_TASKS = {}\n\nregister_lighteval_task(LIGHTEVAL_TASKS, \"custom\", \"math_500\", \"math_500\", 0)\nregister_lighteval_task(LIGHTEVAL_TASKS, \"custom\", \"aime24\", \"aime24\", 0)\nregister_lighteval_task(LIGHTEVAL_TASKS, \"custom\", \"aime25_part1\", \"aime25:part1\", 0)\nregister_lighteval_task(LIGHTEVAL_TASKS, \"custom\", \"gpqa\", \"gpqa:diamond\", 0)\n\n\ndef get_lighteval_tasks():\n    return list(LIGHTEVAL_TASKS.keys())\n\n\nSUPPORTED_BENCHMARKS = get_lighteval_tasks()\n\n\ndef run_lighteval_job(\n    benchmark: str, training_args: Union[\"SFTConfig\", \"GRPOConfig\"], model_args: \"ModelConfig\"\n) -> None:\n    task_list = LIGHTEVAL_TASKS[benchmark]\n    model_name = training_args.hub_model_id\n    model_revision = training_args.hub_model_revision\n    # For large models >= 30b params or those running the MATH benchmark, we need to shard them across the GPUs to avoid OOM\n    num_gpus = get_gpu_count_for_vllm(model_name, model_revision)\n    if get_param_count_from_repo_id(model_name) >= 30_000_000_000:\n        tensor_parallel = True\n    else:\n        tensor_parallel = False\n\n    cmd = VLLM_SLURM_PREFIX.copy()\n    cmd_args = [\n        f\"--gres=gpu:{num_gpus}\",\n        f\"--job-name=or1_{benchmark}_{model_name.split('/')[-1]}_{model_revision}\",\n        \"slurm/evaluate.slurm\",\n        benchmark,\n        f'\"{task_list}\"',\n        model_name,\n        model_revision,\n        f\"{tensor_parallel}\",\n        f\"{model_args.trust_remote_code}\",\n    ]\n    if training_args.system_prompt is not None:\n        cmd_args.append(f\"--system_prompt={training_args.system_prompt}\")\n    cmd[-1] += \" \" + \" \".join(cmd_args)\n    subprocess.run(cmd, check=True)\n\n\ndef run_benchmark_jobs(training_args: Union[\"SFTConfig\", \"GRPOConfig\"], model_args: \"ModelConfig\") -> None:\n    benchmarks = training_args.benchmarks\n    if len(benchmarks) == 1 and benchmarks[0] == \"all\":\n        benchmarks = get_lighteval_tasks()\n        # Evaluate on all supported benchmarks. Later we may want to include a `chat` option\n        # that just evaluates on `ifeval` and `mt_bench` etc.\n\n    for benchmark in benchmarks:\n        print(f\"Launching benchmark `{benchmark}`\")\n        if benchmark in get_lighteval_tasks():\n            run_lighteval_job(benchmark, training_args, model_args)\n        else:\n            raise ValueError(f\"Unknown benchmark {benchmark}\")"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/utils/hub.py",
    "content": "#!/usr/bin/env python\n# coding=utf-8\n# Copyright 2025 The HuggingFace Inc. team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nimport re\nfrom concurrent.futures import Future\n\nfrom transformers import AutoConfig\n\nfrom huggingface_hub import (\n    create_branch,\n    create_repo,\n    get_safetensors_metadata,\n    list_repo_commits,\n    list_repo_files,\n    list_repo_refs,\n    repo_exists,\n    upload_folder,\n)\nfrom trl import GRPOConfig, SFTConfig\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef push_to_hub_revision(training_args: SFTConfig | GRPOConfig, extra_ignore_patterns=[]) -> Future:\n    \"\"\"Pushes the model to branch on a Hub repo.\"\"\"\n\n    # Create a repo if it doesn't exist yet\n    repo_url = create_repo(repo_id=training_args.hub_model_id, private=True, exist_ok=True)\n    # Get initial commit to branch from\n    initial_commit = list_repo_commits(training_args.hub_model_id)[-1]\n    # Now create the branch we'll be pushing to\n    create_branch(\n        repo_id=training_args.hub_model_id,\n        branch=training_args.hub_model_revision,\n        revision=initial_commit.commit_id,\n        exist_ok=True,\n    )\n    logger.info(f\"Created target repo at {repo_url}\")\n    logger.info(f\"Pushing to the Hub revision {training_args.hub_model_revision}...\")\n    ignore_patterns = [\"checkpoint-*\", \"*.pth\"]\n    ignore_patterns.extend(extra_ignore_patterns)\n    future = upload_folder(\n        repo_id=training_args.hub_model_id,\n        folder_path=training_args.output_dir,\n        revision=training_args.hub_model_revision,\n        commit_message=f\"Add {training_args.hub_model_revision} checkpoint\",\n        ignore_patterns=ignore_patterns,\n        run_as_future=True,\n    )\n    logger.info(f\"Pushed to {repo_url} revision {training_args.hub_model_revision} successfully!\")\n\n    return future\n\n\ndef check_hub_revision_exists(training_args: SFTConfig | GRPOConfig):\n    \"\"\"Checks if a given Hub revision exists.\"\"\"\n    if repo_exists(training_args.hub_model_id):\n        if training_args.push_to_hub_revision is True:\n            # First check if the revision exists\n            revisions = [rev.name for rev in list_repo_refs(training_args.hub_model_id).branches]\n            # If the revision exists, we next check it has a README file\n            if training_args.hub_model_revision in revisions:\n                repo_files = list_repo_files(\n                    repo_id=training_args.hub_model_id, revision=training_args.hub_model_revision\n                )\n                if \"README.md\" in repo_files and training_args.overwrite_hub_revision is False:\n                    raise ValueError(\n                        f\"Revision {training_args.hub_model_revision} already exists. 
\"\n                        \"Use --overwrite_hub_revision to overwrite it.\"\n                    )\n\n\ndef get_param_count_from_repo_id(repo_id: str) -> int:\n    \"\"\"Function to get model param counts from safetensors metadata or find patterns like 42m, 1.5b, 0.5m or products like 8x7b in a repo ID.\"\"\"\n    try:\n        metadata = get_safetensors_metadata(repo_id)\n        return list(metadata.parameter_count.values())[0]\n    except Exception:\n        # Pattern to match products (like 8x7b) and single values (like 42m)\n        pattern = r\"((\\d+(\\.\\d+)?)(x(\\d+(\\.\\d+)?))?)([bm])\"\n        matches = re.findall(pattern, repo_id.lower())\n\n        param_counts = []\n        for full_match, number1, _, _, number2, _, unit in matches:\n            if number2:  # If there's a second number, it's a product\n                number = float(number1) * float(number2)\n            else:  # Otherwise, it's a single value\n                number = float(number1)\n\n            if unit == \"b\":\n                number *= 1_000_000_000  # Convert to billion\n            elif unit == \"m\":\n                number *= 1_000_000  # Convert to million\n\n            param_counts.append(number)\n\n        if len(param_counts) > 0:\n            # Return the largest number\n            return int(max(param_counts))\n        else:\n            # Return -1 if no match found\n            return -1\n\n\ndef get_gpu_count_for_vllm(model_name: str, revision: str = \"main\", num_gpus: int = 8) -> int:\n    \"\"\"vLLM enforces a constraint that the number of attention heads must be divisible by the number of GPUs and 64 must be divisible by the number of GPUs.\n    This function calculates the number of GPUs to use for decoding based on the number of attention heads in the model.\n    \"\"\"\n    config = AutoConfig.from_pretrained(model_name, revision=revision, trust_remote_code=True)\n    # Get number of attention heads\n    num_heads = config.num_attention_heads\n    # Reduce num_gpus so that num_heads is divisible by num_gpus and 64 is divisible by num_gpus\n    while num_heads % num_gpus != 0 or 64 % num_gpus != 0:\n        logger.info(f\"Reducing num_gpus from {num_gpus} to {num_gpus - 1} to make num_heads divisible by num_gpus\")\n        num_gpus -= 1\n    return num_gpus"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/utils/math.py",
    "content": "from math_verify import parse, verify\ndef compute_score(solution_str, ground_truth) -> float:\n    retval = 0.\n    \n    if solution_str == ground_truth:\n        return 1.0 \n\n    if float(verify(parse(solution_str), parse(ground_truth))) > 0:\n        return 1.0 \n\n    try:\n        answer = solution_str\n        string_in_last_boxed = last_boxed_only_string(solution_str)\n        if string_in_last_boxed is not None:\n            answer = remove_boxed(string_in_last_boxed) \n\n        if is_equiv(answer, ground_truth):\n            return 1.0 \n    except Exception as e:\n        print(e)\n\n    return retval\n\n\ndef remove_boxed(s):\n    if \"\\\\boxed \" in s:\n        left = \"\\\\boxed \"\n        assert s[:len(left)] == left\n        return s[len(left):]\n\n    left = \"\\\\boxed{\"\n\n    assert s[:len(left)] == left\n    assert s[-1] == \"}\"\n\n    return s[len(left):-1]\n\ndef last_boxed_only_string(string):\n    idx = string.rfind(\"\\\\boxed\")\n    if \"\\\\boxed \" in string:\n        return \"\\\\boxed \" + string.split(\"\\\\boxed \")[-1].split(\"$\")[0]\n    if idx < 0:\n        idx = string.rfind(\"\\\\fbox\")\n        if idx < 0:\n            return None\n\n    i = idx\n    right_brace_idx = None\n    num_left_braces_open = 0\n    while i < len(string):\n        if string[i] == \"{\":\n            num_left_braces_open += 1\n        if string[i] == \"}\":\n            num_left_braces_open -= 1\n            if num_left_braces_open == 0:\n                right_brace_idx = i\n                break\n        i += 1\n\n    if right_brace_idx is None:\n        retval = None\n    else:\n        retval = string[idx:right_brace_idx + 1]\n\n    return retval\n\n# string normalization from https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/hendrycks_math.py\ndef is_equiv(str1, str2, verbose=False):\n    if str1 is None and str2 is None:\n        print(\"WARNING: Both None\")\n        return True\n    if str1 is None or str2 is None:\n        return False\n\n    try:\n        ss1 = strip_string(str1)\n        ss2 = strip_string(str2)\n        if verbose:\n            print(ss1, ss2)\n        return ss1 == ss2\n    except Exception:\n        return str1 == str2\n\n\n\ndef fix_fracs(string):\n    substrs = string.split(\"\\\\frac\")\n    new_str = substrs[0]\n    if len(substrs) > 1:\n        substrs = substrs[1:]\n        for substr in substrs:\n            new_str += \"\\\\frac\"\n            if substr[0] == \"{\":\n                new_str += substr\n            else:\n                try:\n                    assert len(substr) >= 2\n                except AssertionError:\n                    return string\n                a = substr[0]\n                b = substr[1]\n                if b != \"{\":\n                    if len(substr) > 2:\n                        post_substr = substr[2:]\n                        new_str += \"{\" + a + \"}{\" + b + \"}\" + post_substr\n                    else:\n                        new_str += \"{\" + a + \"}{\" + b + \"}\"\n                else:\n                    if len(substr) > 2:\n                        post_substr = substr[2:]\n                        new_str += \"{\" + a + \"}\" + b + post_substr\n                    else:\n                        new_str += \"{\" + a + \"}\" + b\n    string = new_str\n    return string\n\n\ndef fix_a_slash_b(string):\n    if len(string.split(\"/\")) != 2:\n        return string\n    a = string.split(\"/\")[0]\n    b = string.split(\"/\")[1]\n    try:\n     
   a = int(a)\n        b = int(b)\n        assert string == \"{}/{}\".format(a, b)\n        new_string = \"\\\\frac{\" + str(a) + \"}{\" + str(b) + \"}\"\n        return new_string\n    except AssertionError:\n        return string\n\n\ndef remove_right_units(string):\n    # \"\\\\text{ \" only ever occurs (at least in the val set) when describing units\n    if \"\\\\text{ \" in string:\n        splits = string.split(\"\\\\text{ \")\n        assert len(splits) == 2\n        return splits[0]\n    else:\n        return string\n\n\ndef fix_sqrt(string):\n    if \"\\\\sqrt\" not in string:\n        return string\n    splits = string.split(\"\\\\sqrt\")\n    new_string = splits[0]\n    for split in splits[1:]:\n        if split[0] != \"{\":\n            a = split[0]\n            new_substr = \"\\\\sqrt{\" + a + \"}\" + split[1:]\n        else:\n            new_substr = \"\\\\sqrt\" + split\n        new_string += new_substr\n    return new_string\n\n\ndef strip_string(string):\n    # linebreaks\n    string = string.replace(\"\\n\", \"\")\n\n    # remove inverse spaces\n    string = string.replace(\"\\\\!\", \"\")\n\n    # replace \\\\ with \\\n    string = string.replace(\"\\\\\\\\\", \"\\\\\")\n\n    # replace tfrac and dfrac with frac\n    string = string.replace(\"tfrac\", \"frac\")\n    string = string.replace(\"dfrac\", \"frac\")\n\n    # remove \\left and \\right\n    string = string.replace(\"\\\\left\", \"\")\n    string = string.replace(\"\\\\right\", \"\")\n\n    # Remove circ (degrees)\n    string = string.replace(\"^{\\\\circ}\", \"\")\n    string = string.replace(\"^\\\\circ\", \"\")\n\n    # remove dollar signs\n    string = string.replace(\"\\\\$\", \"\")\n\n    # remove units (on the right)\n    string = remove_right_units(string)\n\n    # remove percentage\n    string = string.replace(\"\\\\%\", \"\")\n    string = string.replace(\"\\%\", \"\")  # noqa: W605\n\n    # \" 0.\" equivalent to \" .\" and \"{0.\" equivalent to \"{.\" Alternatively, add \"0\" if \".\" is the start of the string\n    string = string.replace(\" .\", \" 0.\")\n    string = string.replace(\"{.\", \"{0.\")\n    # if empty, return empty string\n    if len(string) == 0:\n        return string\n    if string[0] == \".\":\n        string = \"0\" + string\n\n    # to consider: get rid of e.g. \"k = \" or \"q = \" at beginning\n    if len(string.split(\"=\")) == 2:\n        if len(string.split(\"=\")[0]) <= 2:\n            string = string.split(\"=\")[1]\n\n    # fix sqrt3 --> sqrt{3}\n    string = fix_sqrt(string)\n\n    # remove spaces\n    string = string.replace(\" \", \"\")\n\n    # \\frac1b or \\frac12 --> \\frac{1}{b} and \\frac{1}{2}, etc. Even works with \\frac1{72} (but not \\frac{72}1). Also does a/b --> \\\\frac{a}{b}\n    string = fix_fracs(string)\n\n    # manually change 0.5 --> \\frac{1}{2}\n    if string == \"0.5\":\n        string = \"\\\\frac{1}{2}\"\n\n    # NOTE: X/Y changed to \\frac{X}{Y} in dataset, but in simple cases fix in case the model output is X/Y\n    string = fix_a_slash_b(string)\n\n    return string"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/utils/pycocotools/coco.py",
    "content": "import json\nimport time\nimport matplotlib.pyplot as plt\nfrom matplotlib.collections import PatchCollection\nfrom matplotlib.patches import Polygon\nimport numpy as np\nimport copy\nimport itertools\n#from . import mask as maskUtils\nimport os\nfrom collections import defaultdict\nimport sys\nPYTHON_VERSION = sys.version_info[0]\nif PYTHON_VERSION == 2:\n    from urllib import urlretrieve\nelif PYTHON_VERSION == 3:\n    from urllib.request import urlretrieve\n\n\ndef _isArrayLike(obj):\n    return hasattr(obj, '__iter__') and hasattr(obj, '__len__')\n\n\nclass COCO:\n    def __init__(self, annotation_file=None):\n        \"\"\"\n        Constructor of Microsoft COCO helper class for reading and visualizing annotations.\n        :param annotation_file (str): location of annotation file\n        :param image_folder (str): location to the folder that hosts images.\n        :return:\n        \"\"\"\n        # load dataset\n        self.dataset,self.anns,self.cats,self.imgs = dict(),dict(),dict(),dict()\n        self.imgToAnns, self.catToImgs = defaultdict(list), defaultdict(list)\n        if not annotation_file == None:\n            # print('loading annotations into memory...')\n            tic = time.time()\n            if type(annotation_file) == dict:\n                dataset = annotation_file\n            else:\n                dataset = json.load(open(annotation_file, 'r'))\n            assert type(dataset)==dict, 'annotation file format {} not supported'.format(type(dataset))\n            # print('Done (t={:0.2f}s)'.format(time.time()- tic))\n            self.dataset = dataset\n            self.createIndex()\n\n    def createIndex(self):\n        # create index\n        # print('creating index...')\n        anns, cats, imgs = {}, {}, {}\n        imgToAnns,catToImgs = defaultdict(list),defaultdict(list)\n        if 'annotations' in self.dataset:\n            for ann in self.dataset['annotations']:\n                imgToAnns[ann['image_id']].append(ann)\n                anns[ann['id']] = ann\n\n        if 'images' in self.dataset:\n            for img in self.dataset['images']:\n                imgs[img['id']] = img\n\n        if 'categories' in self.dataset:\n            for cat in self.dataset['categories']:\n                cats[cat['id']] = cat\n\n        if 'annotations' in self.dataset and 'categories' in self.dataset:\n            for ann in self.dataset['annotations']:\n                catToImgs[ann['category_id']].append(ann['image_id'])\n\n        # print('index created!')\n\n        # create class members\n        self.anns = anns\n        self.imgToAnns = imgToAnns\n        self.catToImgs = catToImgs\n        self.imgs = imgs\n        self.cats = cats\n\n    def info(self):\n        \"\"\"\n        Print information about the annotation file.\n        :return:\n        \"\"\"\n        for key, value in self.dataset['info'].items():\n            print('{}: {}'.format(key, value))\n\n    def getAnnIds(self, imgIds=[], catIds=[], areaRng=[], iscrowd=None):\n        \"\"\"\n        Get ann ids that satisfy given filter conditions. default skips that filter\n        :param imgIds  (int array)     : get anns for given imgs\n               catIds  (int array)     : get anns for given cats\n               areaRng (float array)   : get anns for given area range (e.g. 
[0 inf])\n               iscrowd (boolean)       : get anns for given crowd label (False or True)\n        :return: ids (int array)       : integer array of ann ids\n        \"\"\"\n        imgIds = imgIds if _isArrayLike(imgIds) else [imgIds]\n        catIds = catIds if _isArrayLike(catIds) else [catIds]\n\n        if len(imgIds) == len(catIds) == len(areaRng) == 0:\n            anns = self.dataset['annotations']\n        else:\n            if not len(imgIds) == 0:\n                lists = [self.imgToAnns[imgId] for imgId in imgIds if imgId in self.imgToAnns]\n                anns = list(itertools.chain.from_iterable(lists))\n            else:\n                anns = self.dataset['annotations']\n            anns = anns if len(catIds)  == 0 else [ann for ann in anns if ann['category_id'] in catIds]\n            anns = anns if len(areaRng) == 0 else [ann for ann in anns if ann['area'] > areaRng[0] and ann['area'] < areaRng[1]]\n        if not iscrowd == None:\n            ids = [ann['id'] for ann in anns if ann['iscrowd'] == iscrowd]\n        else:\n            ids = [ann['id'] for ann in anns]\n        return ids\n\n    def getCatIds(self, catNms=[], supNms=[], catIds=[]):\n        \"\"\"\n        filtering parameters. default skips that filter.\n        :param catNms (str array)  : get cats for given cat names\n        :param supNms (str array)  : get cats for given supercategory names\n        :param catIds (int array)  : get cats for given cat ids\n        :return: ids (int array)   : integer array of cat ids\n        \"\"\"\n        catNms = catNms if _isArrayLike(catNms) else [catNms]\n        supNms = supNms if _isArrayLike(supNms) else [supNms]\n        catIds = catIds if _isArrayLike(catIds) else [catIds]\n\n        if len(catNms) == len(supNms) == len(catIds) == 0:\n            cats = self.dataset['categories']\n        else:\n            cats = self.dataset['categories']\n            cats = cats if len(catNms) == 0 else [cat for cat in cats if cat['name']          in catNms]\n            cats = cats if len(supNms) == 0 else [cat for cat in cats if cat['supercategory'] in supNms]\n            cats = cats if len(catIds) == 0 else [cat for cat in cats if cat['id']            in catIds]\n        ids = [cat['id'] for cat in cats]\n        return ids\n\n    def getImgIds(self, imgIds=[], catIds=[]):\n        '''\n        Get img ids that satisfy given filter conditions.\n        :param imgIds (int array) : get imgs for given ids\n        :param catIds (int array) : get imgs with all given cats\n        :return: ids (int array)  : integer array of img ids\n        '''\n        imgIds = imgIds if _isArrayLike(imgIds) else [imgIds]\n        catIds = catIds if _isArrayLike(catIds) else [catIds]\n\n        if len(imgIds) == len(catIds) == 0:\n            ids = self.imgs.keys()\n        else:\n            ids = set(imgIds)\n            for i, catId in enumerate(catIds):\n                if i == 0 and len(ids) == 0:\n                    ids = set(self.catToImgs[catId])\n                else:\n                    ids &= set(self.catToImgs[catId])\n        return list(ids)\n\n    def loadAnns(self, ids=[]):\n        \"\"\"\n        Load anns with the specified ids.\n        :param ids (int array)       : integer ids specifying anns\n        :return: anns (object array) : loaded ann objects\n        \"\"\"\n        if _isArrayLike(ids):\n            return [self.anns[id] for id in ids]\n        elif type(ids) == int:\n            return [self.anns[ids]]\n\n    def loadCats(self, ids=[]):\n       
 \"\"\"\n        Load cats with the specified ids.\n        :param ids (int array)       : integer ids specifying cats\n        :return: cats (object array) : loaded cat objects\n        \"\"\"\n        if _isArrayLike(ids):\n            return [self.cats[id] for id in ids]\n        elif type(ids) == int:\n            return [self.cats[ids]]\n\n    def loadImgs(self, ids=[]):\n        \"\"\"\n        Load anns with the specified ids.\n        :param ids (int array)       : integer ids specifying img\n        :return: imgs (object array) : loaded img objects\n        \"\"\"\n        if _isArrayLike(ids):\n            return [self.imgs[id] for id in ids]\n        elif type(ids) == int:\n            return [self.imgs[ids]]\n\n    def showAnns(self, anns, draw_bbox=False):\n        \"\"\"\n        Display the specified annotations.\n        :param anns (array of object): annotations to display\n        :return: None\n        \"\"\"\n        if len(anns) == 0:\n            return 0\n        if 'segmentation' in anns[0] or 'keypoints' in anns[0]:\n            datasetType = 'instances'\n        elif 'caption' in anns[0]:\n            datasetType = 'captions'\n        else:\n            raise Exception('datasetType not supported')\n        if datasetType == 'instances':\n            ax = plt.gca()\n            ax.set_autoscale_on(False)\n            polygons = []\n            color = []\n            for ann in anns:\n                c = (np.random.random((1, 3))*0.6+0.4).tolist()[0]\n                if 'segmentation' in ann:\n                    if type(ann['segmentation']) == list:\n                        # polygon\n                        for seg in ann['segmentation']:\n                            poly = np.array(seg).reshape((int(len(seg)/2), 2))\n                            polygons.append(Polygon(poly))\n                            color.append(c)\n                    else:\n                        # mask\n                        t = self.imgs[ann['image_id']]\n                        if type(ann['segmentation']['counts']) == list:\n                            rle = maskUtils.frPyObjects([ann['segmentation']], t['height'], t['width'])\n                        else:\n                            rle = [ann['segmentation']]\n                        m = maskUtils.decode(rle)\n                        img = np.ones( (m.shape[0], m.shape[1], 3) )\n                        if ann['iscrowd'] == 1:\n                            color_mask = np.array([2.0,166.0,101.0])/255\n                        if ann['iscrowd'] == 0:\n                            color_mask = np.random.random((1, 3)).tolist()[0]\n                        for i in range(3):\n                            img[:,:,i] = color_mask[i]\n                        ax.imshow(np.dstack( (img, m*0.5) ))\n                if 'keypoints' in ann and type(ann['keypoints']) == list:\n                    # turn skeleton into zero-based index\n                    sks = np.array(self.loadCats(ann['category_id'])[0]['skeleton'])-1\n                    kp = np.array(ann['keypoints'])\n                    x = kp[0::3]\n                    y = kp[1::3]\n                    v = kp[2::3]\n                    for sk in sks:\n                        if np.all(v[sk]>0):\n                            plt.plot(x[sk],y[sk], linewidth=3, color=c)\n                    plt.plot(x[v>0], y[v>0],'o',markersize=8, markerfacecolor=c, markeredgecolor='k',markeredgewidth=2)\n                    plt.plot(x[v>1], y[v>1],'o',markersize=8, markerfacecolor=c, markeredgecolor=c, 
markeredgewidth=2)\n\n                if draw_bbox:\n                    [bbox_x, bbox_y, bbox_w, bbox_h] = ann['bbox']\n                    poly = [[bbox_x, bbox_y], [bbox_x, bbox_y+bbox_h], [bbox_x+bbox_w, bbox_y+bbox_h], [bbox_x+bbox_w, bbox_y]]\n                    np_poly = np.array(poly).reshape((4,2))\n                    polygons.append(Polygon(np_poly))\n                    color.append(c)\n\n            p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.4)\n            ax.add_collection(p)\n            p = PatchCollection(polygons, facecolor='none', edgecolors=color, linewidths=2)\n            ax.add_collection(p)\n        elif datasetType == 'captions':\n            for ann in anns:\n                print(ann['caption'])\n\n    def loadRes(self, resFile):\n        \"\"\"\n        Load result file and return a result api object.\n        :param   resFile (str)     : file name of result file\n        :return: res (obj)         : result api object\n        \"\"\"\n        res = COCO()\n        res.dataset['images'] = [img for img in self.dataset['images']]\n\n        # print('Loading and preparing results...')\n        tic = time.time()\n        if type(resFile) == str or (PYTHON_VERSION == 2 and type(resFile) == unicode):\n            anns = json.load(open(resFile))\n        elif type(resFile) == np.ndarray:\n            anns = self.loadNumpyAnnotations(resFile)\n        else:\n            anns = resFile\n        assert type(anns) == list, 'results in not an array of objects'\n        annsImgIds = [ann['image_id'] for ann in anns]\n        assert set(annsImgIds) == (set(annsImgIds) & set(self.getImgIds())), \\\n               'Results do not correspond to current coco set'\n        if 'caption' in anns[0]:\n            imgIds = set([img['id'] for img in res.dataset['images']]) & set([ann['image_id'] for ann in anns])\n            res.dataset['images'] = [img for img in res.dataset['images'] if img['id'] in imgIds]\n            for id, ann in enumerate(anns):\n                ann['id'] = id+1\n        elif 'bbox' in anns[0] and not anns[0]['bbox'] == []:\n            res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])\n            for id, ann in enumerate(anns):\n                bb = ann['bbox']\n                x1, x2, y1, y2 = [bb[0], bb[0]+bb[2], bb[1], bb[1]+bb[3]]\n                if not 'segmentation' in ann:\n                    ann['segmentation'] = [[x1, y1, x1, y2, x2, y2, x2, y1]]\n                ann['area'] = bb[2]*bb[3]\n                ann['id'] = id+1\n                ann['iscrowd'] = 0\n        elif 'segmentation' in anns[0]:\n            res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])\n            for id, ann in enumerate(anns):\n                # now only support compressed RLE format as segmentation results\n                ann['area'] = maskUtils.area(ann['segmentation'])\n                if not 'bbox' in ann:\n                    ann['bbox'] = maskUtils.toBbox(ann['segmentation'])\n                ann['id'] = id+1\n                ann['iscrowd'] = 0\n        elif 'keypoints' in anns[0]:\n            res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])\n            for id, ann in enumerate(anns):\n                s = ann['keypoints']\n                x = s[0::3]\n                y = s[1::3]\n                x0,x1,y0,y1 = np.min(x), np.max(x), np.min(y), np.max(y)\n                ann['area'] = (x1-x0)*(y1-y0)\n                ann['id'] = id + 1\n                ann['bbox'] = 
[x0,y0,x1-x0,y1-y0]\n        # print('DONE (t={:0.2f}s)'.format(time.time()- tic))\n\n        res.dataset['annotations'] = anns\n        res.createIndex()\n        return res\n\n    def download(self, tarDir = None, imgIds = [] ):\n        '''\n        Download COCO images from mscoco.org server.\n        :param tarDir (str): COCO results directory name\n               imgIds (list): images to be downloaded\n        :return:\n        '''\n        if tarDir is None:\n            print('Please specify target directory')\n            return -1\n        if len(imgIds) == 0:\n            imgs = self.imgs.values()\n        else:\n            imgs = self.loadImgs(imgIds)\n        N = len(imgs)\n        if not os.path.exists(tarDir):\n            os.makedirs(tarDir)\n        for i, img in enumerate(imgs):\n            tic = time.time()\n            fname = os.path.join(tarDir, img['file_name'])\n            if not os.path.exists(fname):\n                urlretrieve(img['coco_url'], fname)\n            print('downloaded {}/{} images (t={:0.1f}s)'.format(i, N, time.time()- tic))\n\n    def loadNumpyAnnotations(self, data):\n        \"\"\"\n        Convert result data from a numpy array [Nx7] where each row contains {imageID,x1,y1,w,h,score,class}\n        :param  data (numpy.ndarray)\n        :return: annotations (python nested list)\n        \"\"\"\n        print('Converting ndarray to lists...')\n        assert(type(data) == np.ndarray)\n        print(data.shape)\n        assert(data.shape[1] == 7)\n        N = data.shape[0]\n        ann = []\n        for i in range(N):\n            if i % 1000000 == 0:\n                print('{}/{}'.format(i,N))\n            ann += [{\n                'image_id'  : int(data[i, 0]),\n                'bbox'  : [ data[i, 1], data[i, 2], data[i, 3], data[i, 4] ],\n                'score' : data[i, 5],\n                'category_id': int(data[i, 6]),\n                }]\n        return ann\n\n    def annToRLE(self, ann):\n        \"\"\"\n        Convert annotation which can be polygons, uncompressed RLE to RLE.\n        :return: binary mask (numpy 2D array)\n        \"\"\"\n        t = self.imgs[ann['image_id']]\n        h, w = t['height'], t['width']\n        segm = ann['segmentation']\n        if type(segm) == list:\n            # polygon -- a single object might consist of multiple parts\n            # we merge all parts into one mask rle code\n            rles = maskUtils.frPyObjects(segm, h, w)\n            rle = maskUtils.merge(rles)\n        elif type(segm['counts']) == list:\n            # uncompressed RLE\n            rle = maskUtils.frPyObjects(segm, h, w)\n        else:\n            # rle\n            rle = ann['segmentation']\n        return rle\n\n    def annToMask(self, ann):\n        \"\"\"\n        Convert annotation which can be polygons, uncompressed RLE, or RLE to binary mask.\n        :return: binary mask (numpy 2D array)\n        \"\"\"\n        rle = self.annToRLE(ann)\n        m = maskUtils.decode(rle)\n        return m"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/utils/pycocotools/cocoeval.py",
    "content": "import numpy as np\nimport datetime\nimport time\nfrom collections import defaultdict\nfrom pycocotools import mask as maskUtils\nimport copy\n\nclass COCOeval:\n    # Interface for evaluating detection on the Microsoft COCO dataset.\n    #\n    # The usage for CocoEval is as follows:\n    #  cocoGt=..., cocoDt=...       # load dataset and results\n    #  E = CocoEval(cocoGt,cocoDt); # initialize CocoEval object\n    #  E.params.recThrs = ...;      # set parameters as desired\n    #  E.evaluate();                # run per image evaluation\n    #  E.accumulate();              # accumulate per image results\n    #  E.summarize();               # display summary metrics of results\n    # For example usage see evalDemo.m and http://mscoco.org/.\n    #\n    # The evaluation parameters are as follows (defaults in brackets):\n    #  imgIds     - [all] N img ids to use for evaluation\n    #  catIds     - [all] K cat ids to use for evaluation\n    #  iouThrs    - [.5:.05:.95] T=10 IoU thresholds for evaluation\n    #  recThrs    - [0:.01:1] R=101 recall thresholds for evaluation\n    #  areaRng    - [...] A=4 object area ranges for evaluation\n    #  maxDets    - [1 10 100] M=3 thresholds on max detections per image\n    #  iouType    - ['segm'] set iouType to 'segm', 'bbox' or 'keypoints'\n    #  iouType replaced the now DEPRECATED useSegm parameter.\n    #  useCats    - [1] if true use category labels for evaluation\n    # Note: if useCats=0 category labels are ignored as in proposal scoring.\n    # Note: multiple areaRngs [Ax2] and maxDets [Mx1] can be specified.\n    #\n    # evaluate(): evaluates detections on every image and every category and\n    # concats the results into the \"evalImgs\" with fields:\n    #  dtIds      - [1xD] id for each of the D detections (dt)\n    #  gtIds      - [1xG] id for each of the G ground truths (gt)\n    #  dtMatches  - [TxD] matching gt id at each IoU or 0\n    #  gtMatches  - [TxG] matching dt id at each IoU or 0\n    #  dtScores   - [1xD] confidence of each dt\n    #  gtIgnore   - [1xG] ignore flag for each gt\n    #  dtIgnore   - [TxD] ignore flag for each dt at each IoU\n    #\n    # accumulate(): accumulates the per-image, per-category evaluation\n    # results in \"evalImgs\" into the dictionary \"eval\" with fields:\n    #  params     - parameters used for evaluation\n    #  date       - date evaluation was performed\n    #  counts     - [T,R,K,A,M] parameter dimensions (see above)\n    #  precision  - [TxRxKxAxM] precision for every evaluation setting\n    #  recall     - [TxKxAxM] max recall for every evaluation setting\n    # Note: precision and recall==-1 for settings with no gt objects.\n    #\n    # See also coco, mask, pycocoDemo, pycocoEvalDemo\n    #\n    # Microsoft COCO Toolbox.      version 2.0\n    # Data, paper, and tutorials available at:  http://mscoco.org/\n    # Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n    # Licensed under the Simplified BSD License [see coco/license.txt]\n    def __init__(self, cocoGt=None, cocoDt=None, iouType='segm'):\n        '''\n        Initialize CocoEval using coco APIs for gt and dt\n        :param cocoGt: coco object with ground truth annotations\n        :param cocoDt: coco object with detection results\n        :return: None\n        '''\n        if not iouType:\n            print('iouType not specified. 
use default iouType segm')\n        self.cocoGt   = cocoGt              # ground truth COCO API\n        self.cocoDt   = cocoDt              # detections COCO API\n        self.evalImgs = defaultdict(list)   # per-image per-category evaluation results [KxAxI] elements\n        self.eval     = {}                  # accumulated evaluation results\n        self._gts = defaultdict(list)       # gt for evaluation\n        self._dts = defaultdict(list)       # dt for evaluation\n        self.params = Params(iouType=iouType) # parameters\n        self._paramsEval = {}               # parameters for evaluation\n        self.stats = []                     # result summarization\n        self.ious = {}                      # ious between all gts and dts\n        if not cocoGt is None:\n            self.params.imgIds = sorted(cocoGt.getImgIds())\n            self.params.catIds = sorted(cocoGt.getCatIds())\n\n\n    def _prepare(self):\n        '''\n        Prepare ._gts and ._dts for evaluation based on params\n        :return: None\n        '''\n        def _toMask(anns, coco):\n            # modify ann['segmentation'] by reference\n            for ann in anns:\n                rle = coco.annToRLE(ann)\n                ann['segmentation'] = rle\n        p = self.params\n        if p.useCats:\n            gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds))\n            dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds))\n        else:\n            gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds))\n            dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds))\n\n        # convert ground truth to mask if iouType == 'segm'\n        if p.iouType == 'segm':\n            _toMask(gts, self.cocoGt)\n            _toMask(dts, self.cocoDt)\n        # set ignore flag\n        for gt in gts:\n            gt['ignore'] = gt['ignore'] if 'ignore' in gt else 0\n            gt['ignore'] = 'iscrowd' in gt and gt['iscrowd']\n            if p.iouType == 'keypoints':\n                gt['ignore'] = (gt['num_keypoints'] == 0) or gt['ignore']\n        self._gts = defaultdict(list)       # gt for evaluation\n        self._dts = defaultdict(list)       # dt for evaluation\n        for gt in gts:\n            self._gts[gt['image_id'], gt['category_id']].append(gt)\n        for dt in dts:\n            self._dts[dt['image_id'], dt['category_id']].append(dt)\n        self.evalImgs = defaultdict(list)   # per-image per-category evaluation results\n        self.eval     = {}                  # accumulated evaluation results\n\n    def evaluate(self):\n        '''\n        Run per image evaluation on given images and store results (a list of dict) in self.evalImgs\n        :return: None\n        '''\n        tic = time.time()\n        #('Running per image evaluation...')\n        p = self.params\n        # add backward compatibility if useSegm is specified in params\n        if not p.useSegm is None:\n            p.iouType = 'segm' if p.useSegm == 1 else 'bbox'\n            print('useSegm (deprecated) is not None. 
Running {} evaluation'.format(p.iouType))\n       # print('Evaluate annotation type *{}*'.format(p.iouType))\n        p.imgIds = list(np.unique(p.imgIds))\n        if p.useCats:\n            p.catIds = list(np.unique(p.catIds))\n        p.maxDets = sorted(p.maxDets)\n        self.params=p\n\n        self._prepare()\n        # loop through images, area range, max detection number\n        catIds = p.catIds if p.useCats else [-1]\n\n        if p.iouType == 'segm' or p.iouType == 'bbox':\n            computeIoU = self.computeIoU\n        elif p.iouType == 'keypoints':\n            computeIoU = self.computeOks\n        self.ious = {(imgId, catId): computeIoU(imgId, catId) \\\n                        for imgId in p.imgIds\n                        for catId in catIds}\n\n        evaluateImg = self.evaluateImg\n        maxDet = p.maxDets[-1]\n        self.evalImgs = [evaluateImg(imgId, catId, areaRng, maxDet)\n                 for catId in catIds\n                 for areaRng in p.areaRng\n                 for imgId in p.imgIds\n             ]\n        self._paramsEval = copy.deepcopy(self.params)\n        toc = time.time()\n        #print('DONE (t={:0.2f}s).'.format(toc-tic))\n\n    def computeIoU(self, imgId, catId):\n        p = self.params\n        if p.useCats:\n            gt = self._gts[imgId,catId]\n            dt = self._dts[imgId,catId]\n        else:\n            gt = [_ for cId in p.catIds for _ in self._gts[imgId,cId]]\n            dt = [_ for cId in p.catIds for _ in self._dts[imgId,cId]]\n        if len(gt) == 0 and len(dt) ==0:\n            return []\n        inds = np.argsort([-d['score'] for d in dt], kind='mergesort')\n        dt = [dt[i] for i in inds]\n        if len(dt) > p.maxDets[-1]:\n            dt=dt[0:p.maxDets[-1]]\n\n        if p.iouType == 'segm':\n            g = [g['segmentation'] for g in gt]\n            d = [d['segmentation'] for d in dt]\n        elif p.iouType == 'bbox':\n            g = [g['bbox'] for g in gt]\n            d = [d['bbox'] for d in dt]\n        else:\n            raise Exception('unknown iouType for iou computation')\n\n        # compute iou between each dt and gt region\n        iscrowd = [int(o['iscrowd']) for o in gt]\n        ious = maskUtils.iou(d,g,iscrowd)\n        return ious\n\n    def computeOks(self, imgId, catId):\n        p = self.params\n        # dimention here should be Nxm\n        gts = self._gts[imgId, catId]\n        dts = self._dts[imgId, catId]\n        inds = np.argsort([-d['score'] for d in dts], kind='mergesort')\n        dts = [dts[i] for i in inds]\n        if len(dts) > p.maxDets[-1]:\n            dts = dts[0:p.maxDets[-1]]\n        # if len(gts) == 0 and len(dts) == 0:\n        if len(gts) == 0 or len(dts) == 0:\n            return []\n        ious = np.zeros((len(dts), len(gts)))\n        sigmas = p.kpt_oks_sigmas\n        vars = (sigmas * 2)**2\n        k = len(sigmas)\n        # compute oks between each detection and ground truth object\n        for j, gt in enumerate(gts):\n            # create bounds for ignore regions(double the gt bbox)\n            g = np.array(gt['keypoints'])\n            xg = g[0::3]; yg = g[1::3]; vg = g[2::3]\n            k1 = np.count_nonzero(vg > 0)\n            bb = gt['bbox']\n            x0 = bb[0] - bb[2]; x1 = bb[0] + bb[2] * 2\n            y0 = bb[1] - bb[3]; y1 = bb[1] + bb[3] * 2\n            for i, dt in enumerate(dts):\n                d = np.array(dt['keypoints'])\n                xd = d[0::3]; yd = d[1::3]\n                if k1>0:\n                    # measure the 
per-keypoint distance if keypoints visible\n                    dx = xd - xg\n                    dy = yd - yg\n                else:\n                    # measure minimum distance to keypoints in (x0,y0) & (x1,y1)\n                    z = np.zeros((k))\n                    dx = np.max((z, x0-xd),axis=0)+np.max((z, xd-x1),axis=0)\n                    dy = np.max((z, y0-yd),axis=0)+np.max((z, yd-y1),axis=0)\n                e = (dx**2 + dy**2) / vars / (gt['area']+np.spacing(1)) / 2\n                if k1 > 0:\n                    e=e[vg > 0]\n                ious[i, j] = np.sum(np.exp(-e)) / e.shape[0]\n        return ious\n\n    def evaluateImg(self, imgId, catId, aRng, maxDet):\n        '''\n        perform evaluation for single category and image\n        :return: dict (single image results)\n        '''\n        p = self.params\n        if p.useCats:\n            gt = self._gts[imgId,catId]\n            dt = self._dts[imgId,catId]\n        else:\n            gt = [_ for cId in p.catIds for _ in self._gts[imgId,cId]]\n            dt = [_ for cId in p.catIds for _ in self._dts[imgId,cId]]\n        if len(gt) == 0 and len(dt) ==0:\n            return None\n\n        for g in gt:\n            if g['ignore'] or (g['area']<aRng[0] or g['area']>aRng[1]):\n                g['_ignore'] = 1\n            else:\n                g['_ignore'] = 0\n\n        # sort dt highest score first, sort gt ignore last\n        gtind = np.argsort([g['_ignore'] for g in gt], kind='mergesort')\n        gt = [gt[i] for i in gtind]\n        dtind = np.argsort([-d['score'] for d in dt], kind='mergesort')\n        dt = [dt[i] for i in dtind[0:maxDet]]\n        iscrowd = [int(o['iscrowd']) for o in gt]\n        # load computed ious\n        ious = self.ious[imgId, catId][:, gtind] if len(self.ious[imgId, catId]) > 0 else self.ious[imgId, catId]\n\n        T = len(p.iouThrs)\n        G = len(gt)\n        D = len(dt)\n        gtm  = np.zeros((T,G))\n        dtm  = np.zeros((T,D))\n        gtIg = np.array([g['_ignore'] for g in gt])\n        dtIg = np.zeros((T,D))\n        if not len(ious)==0:\n            for tind, t in enumerate(p.iouThrs):\n                for dind, d in enumerate(dt):\n                    # information about best match so far (m=-1 -> unmatched)\n                    iou = min([t,1-1e-10])\n                    m   = -1\n                    for gind, g in enumerate(gt):\n                        # if this gt already matched, and not a crowd, continue\n                        if gtm[tind,gind]>0 and not iscrowd[gind]:\n                            continue\n                        # if dt matched to reg gt, and on ignore gt, stop\n                        if m>-1 and gtIg[m]==0 and gtIg[gind]==1:\n                            break\n                        # continue to next gt unless better match made\n                        if ious[dind,gind] < iou:\n                            continue\n                        # if match successful and best so far, store appropriately\n                        iou=ious[dind,gind]\n                        m=gind\n                    # if match made store id of match for both dt and gt\n                    if m ==-1:\n                        continue\n                    dtIg[tind,dind] = gtIg[m]\n                    dtm[tind,dind]  = gt[m]['id']\n                    gtm[tind,m]     = d['id']\n        # set unmatched detections outside of area range to ignore\n        a = np.array([d['area']<aRng[0] or d['area']>aRng[1] for d in dt]).reshape((1, len(dt)))\n        dtIg = 
np.logical_or(dtIg, np.logical_and(dtm==0, np.repeat(a,T,0)))\n        # store results for given image and category\n        return {\n                'image_id':     imgId,\n                'category_id':  catId,\n                'aRng':         aRng,\n                'maxDet':       maxDet,\n                'dtIds':        [d['id'] for d in dt],\n                'gtIds':        [g['id'] for g in gt],\n                'dtMatches':    dtm,\n                'gtMatches':    gtm,\n                'dtScores':     [d['score'] for d in dt],\n                'gtIgnore':     gtIg,\n                'dtIgnore':     dtIg,\n            }\n\n    def accumulate(self, p = None):\n        '''\n        Accumulate per image evaluation results and store the result in self.eval\n        :param p: input params for evaluation\n        :return: None\n        '''\n        #print('Accumulating evaluation results...')\n        tic = time.time()\n        if not self.evalImgs:\n            print('Please run evaluate() first')\n        # allows input customized parameters\n        if p is None:\n            p = self.params\n        p.catIds = p.catIds if p.useCats == 1 else [-1]\n        T           = len(p.iouThrs)\n        R           = len(p.recThrs)\n        K           = len(p.catIds) if p.useCats else 1\n        A           = len(p.areaRng)\n        M           = len(p.maxDets)\n        precision   = -np.ones((T,R,K,A,M)) # -1 for the precision of absent categories\n        recall      = -np.ones((T,K,A,M))\n        scores      = -np.ones((T,R,K,A,M))\n\n        # create dictionary for future indexing\n        _pe = self._paramsEval\n        catIds = _pe.catIds if _pe.useCats else [-1]\n        setK = set(catIds)\n        setA = set(map(tuple, _pe.areaRng))\n        setM = set(_pe.maxDets)\n        setI = set(_pe.imgIds)\n        # get inds to evaluate\n        k_list = [n for n, k in enumerate(p.catIds)  if k in setK]\n        m_list = [m for n, m in enumerate(p.maxDets) if m in setM]\n        a_list = [n for n, a in enumerate(map(lambda x: tuple(x), p.areaRng)) if a in setA]\n        i_list = [n for n, i in enumerate(p.imgIds)  if i in setI]\n        I0 = len(_pe.imgIds)\n        A0 = len(_pe.areaRng)\n        # retrieve E at each category, area range, and max number of detections\n        for k, k0 in enumerate(k_list):\n            Nk = k0*A0*I0\n            for a, a0 in enumerate(a_list):\n                Na = a0*I0\n                for m, maxDet in enumerate(m_list):\n                    E = [self.evalImgs[Nk + Na + i] for i in i_list]\n                    E = [e for e in E if not e is None]\n                    if len(E) == 0:\n                        continue\n                    dtScores = np.concatenate([e['dtScores'][0:maxDet] for e in E])\n\n                    # different sorting method generates slightly different results.\n                    # mergesort is used to be consistent as Matlab implementation.\n                    inds = np.argsort(-dtScores, kind='mergesort')\n                    dtScoresSorted = dtScores[inds]\n\n                    dtm  = np.concatenate([e['dtMatches'][:,0:maxDet] for e in E], axis=1)[:,inds]\n                    dtIg = np.concatenate([e['dtIgnore'][:,0:maxDet]  for e in E], axis=1)[:,inds]\n                    gtIg = np.concatenate([e['gtIgnore'] for e in E])\n                    npig = np.count_nonzero(gtIg==0 )\n                    if npig == 0:\n                        continue\n                    tps = np.logical_and(               dtm,  np.logical_not(dtIg) 
)\n                    fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg) )\n\n                    tp_sum = np.cumsum(tps, axis=1).astype(dtype=float)\n                    fp_sum = np.cumsum(fps, axis=1).astype(dtype=float)\n                    for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)):\n                        tp = np.array(tp)\n                        fp = np.array(fp)\n                        nd = len(tp)\n                        rc = tp / npig\n                        pr = tp / (fp+tp+np.spacing(1))\n                        q  = np.zeros((R,))\n                        ss = np.zeros((R,))\n\n                        if nd:\n                            recall[t,k,a,m] = rc[-1]\n                        else:\n                            recall[t,k,a,m] = 0\n\n                        # numpy is slow without cython optimization for accessing elements\n                        # use python array gets significant speed improvement\n                        pr = pr.tolist(); q = q.tolist()\n\n                        for i in range(nd-1, 0, -1):\n                            if pr[i] > pr[i-1]:\n                                pr[i-1] = pr[i]\n\n                        inds = np.searchsorted(rc, p.recThrs, side='left')\n                        try:\n                            for ri, pi in enumerate(inds):\n                                q[ri] = pr[pi]\n                                ss[ri] = dtScoresSorted[pi]\n                        except:\n                            pass\n                        precision[t,:,k,a,m] = np.array(q)\n                        scores[t,:,k,a,m] = np.array(ss)\n        self.eval = {\n            'params': p,\n            'counts': [T, R, K, A, M],\n            'date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),\n            'precision': precision,\n            'recall':   recall,\n            'scores': scores,\n        }\n        toc = time.time()\n        # print('DONE (t={:0.2f}s).'.format( toc-tic))\n\n    def summarize(self):\n        '''\n        Compute and display summary metrics for evaluation results.\n        Note this functin can *only* be applied on the default parameter setting\n        '''\n        def _summarize( ap=1, iouThr=None, areaRng='all', maxDets=100 ):\n            p = self.params\n            iStr = ' {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}'\n            titleStr = 'Average Precision' if ap == 1 else 'Average Recall'\n            typeStr = '(AP)' if ap==1 else '(AR)'\n            iouStr = '{:0.2f}:{:0.2f}'.format(p.iouThrs[0], p.iouThrs[-1]) \\\n                if iouThr is None else '{:0.2f}'.format(iouThr)\n\n            aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng]\n            mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets]\n            if ap == 1:\n                # dimension of precision: [TxRxKxAxM]\n                s = self.eval['precision']\n                # IoU\n                if iouThr is not None:\n                    t = np.where(iouThr == p.iouThrs)[0]\n                    s = s[t]\n                s = s[:,:,:,aind,mind]\n            else:\n                # dimension of recall: [TxKxAxM]\n                s = self.eval['recall']\n                if iouThr is not None:\n                    t = np.where(iouThr == p.iouThrs)[0]\n                    s = s[t]\n                s = s[:,:,aind,mind]\n            if len(s[s>-1])==0:\n                mean_s = -1\n            else:\n                mean_s = np.mean(s[s>-1])\n            
#print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s))\n            return mean_s\n        def _summarizeDets():\n            stats = np.zeros((12,))\n            stats[0] = _summarize(1)\n            stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[2])\n            stats[2] = _summarize(1, iouThr=.75, maxDets=self.params.maxDets[2])\n            stats[3] = _summarize(1, areaRng='small', maxDets=self.params.maxDets[2])\n            stats[4] = _summarize(1, areaRng='medium', maxDets=self.params.maxDets[2])\n            stats[5] = _summarize(1, areaRng='large', maxDets=self.params.maxDets[2])\n            stats[6] = _summarize(0, maxDets=self.params.maxDets[0])\n            stats[7] = _summarize(0, maxDets=self.params.maxDets[1])\n            stats[8] = _summarize(0, maxDets=self.params.maxDets[2])\n            stats[9] = _summarize(0, areaRng='small', maxDets=self.params.maxDets[2])\n            stats[10] = _summarize(0, areaRng='medium', maxDets=self.params.maxDets[2])\n            stats[11] = _summarize(0, areaRng='large', maxDets=self.params.maxDets[2])\n            return stats\n        def _summarizeKps():\n            stats = np.zeros((10,))\n            stats[0] = _summarize(1, maxDets=20)\n            stats[1] = _summarize(1, maxDets=20, iouThr=.5)\n            stats[2] = _summarize(1, maxDets=20, iouThr=.75)\n            stats[3] = _summarize(1, maxDets=20, areaRng='medium')\n            stats[4] = _summarize(1, maxDets=20, areaRng='large')\n            stats[5] = _summarize(0, maxDets=20)\n            stats[6] = _summarize(0, maxDets=20, iouThr=.5)\n            stats[7] = _summarize(0, maxDets=20, iouThr=.75)\n            stats[8] = _summarize(0, maxDets=20, areaRng='medium')\n            stats[9] = _summarize(0, maxDets=20, areaRng='large')\n            return stats\n        if not self.eval:\n            raise Exception('Please run accumulate() first')\n        iouType = self.params.iouType\n        if iouType == 'segm' or iouType == 'bbox':\n            summarize = _summarizeDets\n        elif iouType == 'keypoints':\n            summarize = _summarizeKps\n        self.stats = summarize()\n\n    def __str__(self):\n        self.summarize()\n\nclass Params:\n    '''\n    Params for coco evaluation api\n    '''\n    def setDetParams(self):\n        self.imgIds = []\n        self.catIds = []\n        # np.arange causes trouble.  the data point on arange is slightly larger than the true value\n        self.iouThrs = np.linspace(.5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True)\n        self.recThrs = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True)\n        self.maxDets = [1, 10, 100]\n        self.areaRng = [[0 ** 2, 1e5 ** 2], [0 ** 2, 32 ** 2], [32 ** 2, 96 ** 2], [96 ** 2, 1e5 ** 2]]\n        self.areaRngLbl = ['all', 'small', 'medium', 'large']\n        self.useCats = 1\n\n    def setKpParams(self):\n        self.imgIds = []\n        self.catIds = []\n        # np.arange causes trouble.  
the data point on arange is slightly larger than the true value\n        self.iouThrs = np.linspace(.5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True)\n        self.recThrs = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True)\n        self.maxDets = [20]\n        self.areaRng = [[0 ** 2, 1e5 ** 2], [32 ** 2, 96 ** 2], [96 ** 2, 1e5 ** 2]]\n        self.areaRngLbl = ['all', 'medium', 'large']\n        self.useCats = 1\n        self.kpt_oks_sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62,.62, 1.07, 1.07, .87, .87, .89, .89])/10.0\n\n    def __init__(self, iouType='segm'):\n        if iouType == 'segm' or iouType == 'bbox':\n            self.setDetParams()\n        elif iouType == 'keypoints':\n            self.setKpParams()\n        else:\n            raise Exception('iouType not supported')\n        self.iouType = iouType\n        # useSegm is deprecated\n        self.useSegm = None"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/vlm_modules/__init__.py",
    "content": "from .vlm_module import VLMBaseModule\nfrom .qwen_module import Qwen2VLModule\n\n__all__ = [\"VLMBaseModule\", \"Qwen2VLModule\"]"
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/vlm_modules/qwen_module.py",
    "content": "from transformers import Qwen2_5_VLForConditionalGeneration, Qwen2VLForConditionalGeneration, AutoProcessor\nfrom typing import Dict, Any, Union\nfrom trl.data_utils import maybe_apply_chat_template\nimport torch\nimport re\nfrom transformers import AutoTokenizer\nfrom vlm_modules.vlm_module import VLMBaseModule\nimport math\nimport numpy as np\n\nclass Qwen2VLModule(VLMBaseModule):\n    def __init__(self):\n        super().__init__()\n\n    def get_vlm_key(self):\n        return \"qwen\"\n\n    def get_model_class(self, model_id: str, model_init_kwargs: dict):\n        if \"Qwen2-VL\" in model_id:\n            model_cls = Qwen2VLForConditionalGeneration\n        elif \"Qwen2.5-VL\" in model_id:\n            model_cls = Qwen2_5_VLForConditionalGeneration\n        else:\n            raise ValueError(f\"Unsupported model: {model_id}\")\n        return model_cls\n    \n    def post_model_init(self, model, processing_class):\n        pass\n    \n    def get_processing_class(self):\n        return AutoProcessor\n    \n    def get_vision_modules_keywords(self):  \n        return ['visual']\n    \n    def get_custom_multimodal_keywords(self):\n        return ['pixel_values', 'image_grid_thw']\n\n    def get_non_generate_params(self):\n        return []\n    \n    def get_custom_processing_keywords(self):\n        return [('image_processor', 'max_pixels'), ('image_processor', 'min_pixels')]\n    \n    def prepare_prompt(self, processing_class, inputs: dict[str, Union[torch.Tensor, Any]]):\n        prompts_text = [maybe_apply_chat_template(example, processing_class)[\"prompt\"] for example in inputs]\n        return prompts_text\n    \n    def prepare_model_inputs(self, processing_class, prompts_text, images, return_tensors=\"pt\", padding=True, padding_side=\"left\", add_special_tokens=False):\n        # FIXME\n        # print(type(prompts_text))\n        # This could only process pure-multimodal or pure-text inputs\n        if len(images) > 0:\n            prompt_inputs = processing_class(\n                text=prompts_text,\n                images=images,\n                return_tensors=return_tensors,\n                padding=padding,\n                padding_side=padding_side,\n                add_special_tokens=add_special_tokens)\n        else:\n            prompt_inputs = processing_class(\n                text=prompts_text,\n                return_tensors=return_tensors,\n                padding=padding,\n                padding_side=padding_side,\n                add_special_tokens=add_special_tokens)\n        return prompt_inputs\n    \n    @staticmethod\n    def get_question_template(task_type: str):\n        match task_type:\n            case \"robust\":\n                return \"{Question}First output the types of degradations in image briefly in <TYPE> <TYPE_END> tags, and then output what effects do these degradation have on the image in <INFLUENCE> <INFLUENCE_END> tags, then based on the strength of degradation, output an APPROPRIATE length for the reasoning process in <REASONING> <REASONING_END> tags, and then summarize the content of reasoning and the give the answer in <CONCLUSION> <CONCLUSION_END> tags,provides the user with the answer briefly in <ANSWER> <ANSWER_END>.i.e., <TYPE> degradation type here <TYPE_END>\\n<INFLUENCE> influence here<INFLUENCE_END>\\n<REASONING> reasoning process here<REASONING_END>\\n<CONCLUSION>summary here<CONCLUSION_END>\\n<ANSWER>final answer<ANSWER_END>\"\n            case \"rec\":\n                return \"{Question} First output 
the thinking process in <think> </think> tags and then output the final answer in <answer> </answer> tags. Output the final answer in JSON format.\"\n            case \"ic\":\n                return \"{Question} First thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think><answer> json format answer here </answer>\"\n            case \"odLength\":\n                SYSTEM_PROMPT = (\n                    #\"A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant \"\n                    \"First thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning \"\n                    \"process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., \"\n                    \"<think> reasoning process here </think><answer> answer here </answer>\"\n                )\n                return SYSTEM_PROMPT + '\\n' + \"{Question}\"\n            case _:\n                return \"{Question} First output the thinking process in <think> </think> tags and then output the final answer in <answer> </answer> tags.\"\n    \n    @staticmethod\n    def format_reward_rec(completions, **kwargs):\n        \"\"\"Check if the Qwen model output matches a specific format.\"\"\"\n        import re\n        import os\n        from datetime import datetime\n        pattern = r\"<think>.*?</think>\\s*<answer>.*?\\{.*\\[\\d+,\\s*\\d+,\\s*\\d+,\\s*\\d+\\].*\\}.*?</answer>\"\n        completion_contents = [completion[0][\"content\"] for completion in completions]\n        matches = [re.search(pattern, content, re.DOTALL) is not None for content in completion_contents]\n\n        current_time = datetime.now().strftime(\"%d-%H-%M-%S-%f\")\n        if os.getenv(\"DEBUG_MODE\") == \"true\":\n            log_path = os.getenv(\"LOG_PATH\")\n            with open(log_path.replace(\".txt\", \"_format.txt\"), \"a\", encoding='utf-8') as f:\n                f.write(f\"------------- {current_time} Format reward -------------\\n\")\n                for content, match in zip(completion_contents, matches):\n                    f.write(f\"Content: {content}\\n\")\n                    f.write(f\"Has format: {bool(match)}\\n\")\n        return [1.0 if match else 0.0 for match in matches]\n    \n    @staticmethod\n    def format_reward_robust(completions, **kwargs):\n        import re\n        import os\n        from datetime import datetime\n\n        pattern = r\"<TYPE>.*?<TYPE_END>\\s*<INFLUENCE>.*?<INFLUENCE_END>\\s*<REASONING>.*?<REASONING_END>\\s*<CONCLUSION>.*?<CONCLUSION_END>\\s*<ANSWER>.*?<ANSWER_END>\"\n        completion_contents = [completion[0][\"content\"] for completion in completions]\n        matches = [re.search(pattern, content, re.DOTALL) is not None for content in completion_contents]\n\n        current_time = datetime.now().strftime(\"%d-%H-%M-%S-%f\")\n        if os.getenv(\"DEBUG_MODE\") == \"true\":\n            log_path = os.getenv(\"LOG_PATH\")\n            with open(log_path.replace(\".txt\", \"_format.txt\"), \"a\", encoding='utf-8') as f:\n                f.write(f\"------------- {current_time} Format reward -------------\\n\")\n                for content, match in zip(completion_contents, matches):\n                    f.write(f\"Content: {content}\\n\")\n                    
f.write(f\"Has format: {bool(match)}\\n\")\n        return [1.0 if match else 0.0 for match in matches]\n    \n    @staticmethod\n    def type_reward(completions, solution, **kwargs):\n        def custom_normalize_reward(score, k_positive=1.0, k_negative=2.0, x0=0.0):\n            sigmoid_output = 0.0\n            if score >= x0:\n                sigmoid_output = 1 / (1 + math.exp(-k_positive * (score - x0)))\n            else:\n                sigmoid_output = 1 / (1 + math.exp(-k_negative * (score - x0)))\n                \n            normalized_score = 2 * sigmoid_output - 1\n            \n            return normalized_score\n            \n        def extract_image_degradations(text):\n            match = re.search(r'<TYPE>(.*?)<TYPE_END>', text, re.DOTALL)\n            if not match:\n                return []\n\n            types_string = match.group(1)\n            degradations = re.findall(r'(\\w+(?:\\s+\\w+)*)\\(([\\d.]+)\\)', types_string)\n\n            result = []\n            for degradation, strength in degradations:\n                result.append((degradation.strip(), float(strength)))\n            \n            return result\n\n        def calculate_reward(A, B):\n            reward = 0.0\n            \n            B_dict = dict(B)\n            matched_keys = set()\n            \n            for degradation_A, strength_A in A:\n                if degradation_A in B_dict:\n                    reward += 1\n                    strength_B = B_dict[degradation_A]\n                    diff = abs(strength_A - strength_B)\n                    reward += (0.5 - diff)\n                    matched_keys.add(degradation_A)\n                else:\n                    reward -= 1\n                    \n            for degradation_B in B_dict:\n                if degradation_B not in matched_keys:\n                    reward -= 1\n                    \n            return reward\n        \n        contents = [completion[0][\"content\"] for completion in completions]\n        rewards = []\n\n        for i in range(len(contents)):\n            content_single = extract_image_degradations(contents[i])\n            solution_single = extract_image_degradations(solution[i])\n            score = calculate_reward(content_single, solution_single)\n            rewards.append(score)\n        \n        return rewards\n    \n    @staticmethod\n    def accuracy_reward(completions, solution, **kwargs):\n        def extract_answer(text):\n            match = re.search(r'<ANSWER>(.*?)<ANSWER_END>', text, re.DOTALL)\n            if match:\n                return match.group(1).strip()\n            return None\n        \n        contents = [completion[0][\"content\"] for completion in completions]\n        \n        if len(contents) != len(solution):\n            print(\"Warning: Input list lengths do not match.\")\n            return []\n\n        rewards = []\n        for i in range(len(contents)):\n            model_answer = extract_answer(contents[i])\n            gt_answer = extract_answer(solution[i])\n            if model_answer == gt_answer:\n                rewards.append(1)\n            else:\n                rewards.append(0)\n            \n        return rewards\n    \n    @staticmethod\n    def length_reward(completions, solution, **kwargs):\n        \n        processor = AutoProcessor.from_pretrained(\"your_model_path\",user_fast=False)\n        tokenizer =processor.tokenizer\n\n        responses = [completion[0][\"content\"] for completion in completions]\n        \n        if len(responses) != 
len(solution):\n            print(\"Warning: Input list lengths do not match.\")\n            return []\n        \n        rewards = []\n        for resp, sol in zip(responses, solution):\n            resp_len = len(tokenizer.encode(resp))\n            sol_len = len(tokenizer.encode(sol))\n            \n            length_diff = abs(resp_len - sol_len)\n            \n            reward = 1 - (length_diff/sol_len)\n            \n            rewards.append(reward)\n        \n        return rewards\n    \n    @staticmethod\n    def iou_reward(completions, solution, **kwargs):\n        import re\n        import os\n        from datetime import datetime\n        import json\n        def iou(box1, box2):\n            inter_x1 = max(box1[0], box2[0])\n            inter_y1 = max(box1[1], box2[1])\n            inter_x2 = min(box1[2]-1, box2[2]-1)\n            inter_y2 = min(box1[3]-1, box2[3]-1)\n            if inter_x1 < inter_x2 and inter_y1 < inter_y2:\n                inter = (inter_x2-inter_x1+1)*(inter_y2-inter_y1+1)\n            else:\n                inter = 0\n            union = (box1[2]-box1[0])*(box1[3]-box1[1]) + (box2[2]-box2[0])*(box2[3]-box2[1]) - inter\n            return float(inter)/union\n        contents = [completion[0][\"content\"] for completion in completions]\n        rewards = []\n        current_time = datetime.now().strftime(\"%d-%H-%M-%S-%f\")\n        answer_tag_pattern = r'<answer>(.*?)</answer>'\n        bbox_pattern = r'\\[(\\d+),\\s*(\\d+),\\s*(\\d+),\\s*(\\d+)]'\n        for content, sol in zip(contents, solution):\n            sol = re.findall(answer_tag_pattern, sol, re.DOTALL)[-1]\n            sol = json.loads(sol.strip())\n            reward = 0.0\n            try:\n                content_answer_match = re.search(answer_tag_pattern, content, re.DOTALL)\n                if content_answer_match:\n                    content_answer = content_answer_match.group(1).strip()\n                    bbox_match = re.search(bbox_pattern, content_answer)\n                    if bbox_match:\n                        bbox = [int(bbox_match.group(1)), int(bbox_match.group(2)), int(bbox_match.group(3)), int(bbox_match.group(4))]\n                        reward = iou(bbox, sol)\n            except Exception:\n                pass\n                    \n            rewards.append(reward)\n            if os.getenv(\"DEBUG_MODE\") == \"true\":\n                log_path = os.getenv(\"LOG_PATH\")\n                current_time = datetime.now().strftime(\"%d-%H-%M-%S-%f\")\n                image_path = kwargs.get(\"image_path\")[0] if \"image_path\" in kwargs else None\n                problem = kwargs.get(\"problem\")[0]\n                if reward <= 1.0:\n                    with open(log_path, \"a\", encoding='utf-8') as f:\n                        f.write(f\"------------- {current_time} Accuracy reward: {reward} -------------\\n\")\n                        f.write(f\"image_path: {image_path}\\n\")\n                        f.write(f\"problem: {problem}\\n\")\n                        f.write(f\"Content: {content}\\n\")\n                        f.write(f\"Solution: {sol}\\n\") \n        return rewards\n\n    @staticmethod\n    def select_reward_func(func: str, task_type: str):\n        if func == \"accuracy\":\n            match task_type:\n                case \"robust\":\n                    return Qwen2VLModule.accuracy_reward\n                case \"rec\":\n                    return Qwen2VLModule.iou_reward\n                case _:\n                    raise 
ValueError(f\"Unsupported reward function: {func}\")\n        \n        elif func == \"format\":\n            match task_type:\n                case \"robust\":\n                    return Qwen2VLModule.format_reward_robust\n                case \"rec\":\n                    return Qwen2VLModule.format_reward_rec\n                case _:\n                    raise ValueError(f\"Unsupported reward function: {func}\")\n        \n        elif func == \"type\":\n            match task_type:\n                case \"robust\":\n                    return Qwen2VLModule.type_reward\n                case \"rec\":\n                    return Qwen2VLModule.format_reward_rec\n                case _:\n                    raise ValueError(f\"Unsupported reward function: {func}\")\n        \n        elif func == \"length\":\n            match task_type:\n                case \"robust\":\n                    return Qwen2VLModule.length_reward\n                case \"rec\":\n                    return Qwen2VLModule.format_reward_rec\n                case _:\n                    raise ValueError(f\"Unsupported reward function: {func}\")\n        else:\n            raise ValueError(f\"Unsupported reward function: {func}\")\n\n\n    "
  },
  {
    "path": "src/open-r1-multimodal/src/open_r1/vlm_modules/vlm_module.py",
    "content": "from abc import ABC, abstractmethod\nfrom typing import Dict, Any, Union\nimport torch\n\n\nclass VLMBaseModule(ABC):\n    def __init__(self):\n        super().__init__()\n    \n    @abstractmethod\n    def get_vlm_key(self):\n        pass\n\n    @abstractmethod\n    def get_model_class(self, model_id: str, model_init_kwargs: dict):\n        pass\n\n    def post_model_init(self, model, processing_class):\n        pass\n\n    def is_embeds_input(self):\n        return False\n    \n    @abstractmethod\n    def get_processing_class(self):\n        pass\n\n    @abstractmethod\n    def get_vision_modules_keywords(self):\n        pass\n\n    @abstractmethod\n    def get_custom_multimodal_keywords(self):\n        pass\n\n    @abstractmethod\n    def get_non_generate_params(self):\n        pass\n\n    @abstractmethod\n    def get_custom_processing_keywords(self):\n        pass\n\n    @abstractmethod\n    def prepare_prompt(self, processing_class, inputs: dict[str, Union[torch.Tensor, Any]]):\n        pass\n    \n    @abstractmethod\n    def prepare_model_inputs(self, processing_class, prompts_text, images, return_tensors, padding, padding_side, add_special_tokens):\n        pass"
  }
]