Repository: MiscellaneousStuff/openai-whisper-cpu
Branch: main
Commit: 5b92fd64645e
Files: 7
Total size: 41.3 KB

Directory structure:
gitextract_s0h6l0yq/

├── .gitignore
├── .gitmodules
├── Dockerfile
├── LICENSE
├── README.md
├── main.ipynb
└── script/
    └── custom_whisper.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/


================================================
FILE: .gitmodules
================================================
[submodule "whisper"]
	path = whisper
	url = https://github.com/MiscellaneousStuff/whisper

================================================
FILE: Dockerfile
================================================
FROM python:3.9.14-bullseye

# Install dependencies
RUN apt-get update && apt-get install -y \
    ffmpeg \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/*

# Install Whisper
RUN git clone https://github.com/MiscellaneousStuff/openai-whisper-cpu.git \
 && cd openai-whisper-cpu \
 && git submodule init \
 && git submodule update \
 && pip install -e ./whisper

# Pre-download model weights: each run fails on the missing dummy.wav,
# but the checkpoint is downloaded and cached first ("; exit 0" keeps
# the build going despite the non-zero exit)
RUN whisper --model tiny dummy.wav; exit 0
RUN whisper --model base dummy.wav; exit 0
RUN whisper --model small dummy.wav; exit 0
RUN whisper --model medium dummy.wav; exit 0
RUN whisper --model large dummy.wav; exit 0
RUN whisper --model tiny.en dummy.wav; exit 0
RUN whisper --model base.en dummy.wav; exit 0
RUN whisper --model small.en dummy.wav; exit 0
RUN whisper --model medium.en dummy.wav; exit 0

WORKDIR /usr/src/app

CMD ["whisper","python3"]


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2022 MiscellaneousStuff

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

================================================
FILE: README.md
================================================
# OpenAI Whisper - CPU

## About

Experiments applying quantization methods to the OpenAI Whisper ASR
model to improve inference speed and throughput on CPU-based
deployments. Although Whisper makes SOTA ASR widely accessible and
does not depend on the cloud for high-quality transcription, many end
users cannot run the model out of the box: most consumer computers
have only a CPU, not a high-performance GPU.

Quantization could allow the larger Whisper models to run usably fast
on laptops without a GPU.

Hardware for experiments: \
CPU - AMD Ryzen 5 5600X \
RAM - 32GB DDR4 \
GPU - Nvidia GeForce RTX 3060 Ti \
Storage - M.2 SSD

## Usage

First, fetch the fork of the OpenAI Whisper repo that carries the
modifications needed for CPU dynamic quantization:

```bash
git submodule init
git submodule update
```

Then install the module in editable mode:

```bash
pip install -e ./whisper
```
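
If the editable install worked, the package should resolve to the submodule checkout. A quick sanity check (not part of the original instructions):

```bash
python -c "import whisper; print(whisper.__file__)"
```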

### Explanation

Quantization of the Whisper model requires changing the `Linear()`
layers within the model to `nn.Linear()`, because you must specify
which layer types to dynamically quantize, and they are matched by
type:

```python
quantized_model = torch.quantization.quantize_dynamic(
    model_fp32, {torch.nn.Linear}, dtype=torch.qint8
)
```
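
As background: dynamic quantization converts the weights of the listed module types (here `torch.nn.Linear`) to int8 ahead of time, while activations are quantized on the fly at inference time, so no calibration dataset is required.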

However, the Whisper model is designed to be adaptable, i.e. it can
run at different precisions, so its `Linear()` layer contains custom
code to account for this. That flexibility is not needed for the
quantized model. You can either change the `Linear()` layers in
`/whisper/whisper/model.py` yourself, or simply use the installation
instructions above.
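
For reference, here is a minimal sketch of the kind of edit involved (the fork may differ in detail). Upstream `whisper/model.py` wraps `nn.Linear` in a custom subclass that casts its weights to the input's dtype so the model can run at fp16 or fp32; for CPU-only int8 quantization that cast is unnecessary, and a plain `nn.Linear` lets `quantize_dynamic` match the layers by type:

```python
import torch.nn as nn
import torch.nn.functional as F
from torch import Tensor

# Approximately what upstream whisper/model.py defines, so that the same
# weights can be used at different precisions (fp16 on GPU, fp32 on CPU):
class Linear(nn.Linear):
    def forward(self, x: Tensor) -> Tensor:
        return F.linear(
            x,
            self.weight.to(x.dtype),
            None if self.bias is None else self.bias.to(x.dtype),
        )

# The CPU-quantization fork effectively reduces this back to the standard
# layer, which torch.quantization.quantize_dynamic can then replace:
Linear = nn.Linear
```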

## Results

Test audio is the first 30 seconds of: \
https://www.youtube.com/watch?v=oKOtzIo-uYw

| Device | Whisper Model | Data Type | Linear Layer | Inference Time (s) |
| --- | --- | --- | --- | --- |
| GPU | tiny | fp32 | Linear | 0.5 |
| CPU | tiny | fp32 | nn.Linear | 2.3 |
| CPU | tiny | qint8 (quant) | nn.Linear | 3.1 (0.74x slowdown) |

Tiny quantized model is 9.67x faster than real time \
(real-time factor = 30 s of audio ÷ 3.1 s of inference). \
Tiny quantized model runs at 0.74x the speed of the original model, i.e. slightly slower.

| Device | Whisper Model | Data Type | Linear Layer | Inference Time (s) |
| --- | --- | --- | --- | --- |
| GPU | base | fp32 | Linear | 0.6 |
| CPU | base | fp32 | nn.Linear | 5.2 |
| CPU | base | qint8 (quant) | nn.Linear | 3.2 (1.62x speedup) |

Base quantized model is 9.37x faster than real time. \
Base quantized model is 1.62x faster than the original model.

| Device | Whisper Model | Data Type | Linear Layer | Inference Time (s) |
| --- | --- | --- | --- | --- |
| GPU | small | fp32 | Linear | 0.7 |
| CPU | small | fp32 | nn.Linear | 19.1 |
| CPU | small | qint8 (quant) | nn.Linear | 6.9 (2.76x speedup) |

Small quantized model is 4.34x faster than real time. \
Small quantized model is 2.76x faster than the original model.

| Device | Whisper Model | Data Type | Linear Layer | Inference Time (s) |
| --- | --- | --- | --- | --- |
| GPU | medium | fp32 | Linear | 1.7 |
| CPU | medium | fp32 | nn.Linear | 60.7 |
| CPU | medium | qint8 (quant) | nn.Linear | 23.1 (2.62x speedup) |

Medium quantized model is 1.29x faster than real time. \
Medium quantized model is 2.62x faster than the original model.

## Docker

Build the Docker image:

```bash
docker build -t whisper-cpu .
```

Run the quantized model:

```bash
docker run --rm -v "$(pwd)/audio":/usr/src/app/audio -v "$(pwd)/script":/usr/src/app/script whisper-cpu python3 ./script/custom_whisper.py audio/path_to_dir_or_audio_file --language English --model medium.en
```

- `-v "$(pwd)/audio":/usr/src/app/audio` mounts your audio files into the container.
- `-v "$(pwd)/script":/usr/src/app/script` mounts the custom start script; transcription results are also written here.
- Note: you may want to adjust `./script/custom_whisper.py` for your own needs. A way to inspect the image interactively is shown below.
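
To poke around inside the image (for example, to confirm the model weights were cached during the build), a plain interactive shell also works; this is standard Docker usage rather than something the repo documents:

```bash
docker run --rm -it whisper-cpu bash
```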



================================================
FILE: main.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# OpenAI Whisper - CPU\n",
    "Improving CPU-deployment performance of OpenAI Whisper model, following this procedure:\n",
    "https://pytorch.org/assets/images/quantization-practice/quantization-flowchart2.png"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import whisper\n",
    "import torch\n",
    "\n",
    "test_path  = \"C:\\\\Users\\\\win8t\\\\Music\\\\\"\n",
    "test_path += \"Fugees - Killing Me Softly With His Song (Official Video).mp3\"\n",
    "\n",
    "model_fp32 = whisper.load_model(\n",
    "    name=\"base\",\n",
    "    device=\"cpu\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Dynamically Quantize Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "quantized_model = torch.quantization.quantize_dynamic(\n",
    "    model_fp32, {torch.nn.Linear}, dtype=torch.qint8\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Size (MB): 290.459479\n",
      "Size (MB): 158.410839\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "158.410839"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import os\n",
    "\n",
    "def print_size_of_model(model):\n",
    "    torch.save(model.state_dict(), \"temp.p\")\n",
    "    size = os.path.getsize(\"temp.p\")/1e6\n",
    "    print('Size (MB):', size)\n",
    "    os.remove('temp.p')\n",
    "    return size\n",
    "\n",
    "print_size_of_model(model_fp32)\n",
    "print_size_of_model(quantized_model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Run Dynamically Quantized Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "audio = whisper.load_audio(test_path)\n",
    "audio = whisper.pad_or_trim(audio)\n",
    "\n",
    "mel   = whisper.log_mel_spectrogram(audio).to(model_fp32.device)\n",
    "options = whisper.DecodingOptions(fp16=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Detected language: en\n"
     ]
    }
   ],
   "source": [
    "# regular\n",
    "_, probs = model_fp32.detect_language(mel)\n",
    "print(f\"Detected language: {max(probs, key=probs.get)}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Detected language: en\n"
     ]
    }
   ],
   "source": [
    "# quantized\n",
    "_, probs = quantized_model.detect_language(mel)\n",
    "print(f\"Detected language: {max(probs, key=probs.get)}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "c:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\transcribe.py:76: UserWarning: Performing inference on CPU when CUDA is available\n",
      "  warnings.warn(\"Performing inference on CPU when CUDA is available\")\n",
      "c:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead\n",
      "  warnings.warn(\"FP16 is not supported on CPU; using FP32 instead\")\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " Strum in my pain with his fingers, singing my life with his words. Killing me softly with his song, killing me softly with his song, telling my whole life. With his words killing me softly with his song. This is why I clap for refuge. I'll help you up in the prize where you sit on the base, sit on the beat. While I'm on this road, I got my girl, El. One time, one time, pay your El. You know you got the lyrics. I heard he sang a good song. I heard he had a style. And so I came to see him and listen for a while. And there he was, this young boy, straightened to my eyes. Strumming my pain with his finger, singing my life with his words. Killing me softly with his song, killing me softly with his song, telling my whole life. With his words killing me softly with his song. I felt all flush with the rust, and merrised by the crown. I felt he found my letter, and read each one out loud. I prayed that he would finish, but he just kept writing on. Strumming my pain with his finger, singing my life with his words. Killing me softly with his song, killing me softly with his song, telling my whole life. With his words killing me softly with his song, taking to the best of the world. La la la la la la la la la low, low. I'm alive He's throwing a pain with his finger Yes, he was singing my line with his wife He let me softly with his soul He let me softly hear his song telling my whole life\n",
      "Evaluate total time (seconds): 129.8\n"
     ]
    },
    {
     "ename": "KeyboardInterrupt",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
      "\u001b[1;32m~\\AppData\\Local\\Temp/ipykernel_9024/408565071.py\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m     13\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     14\u001b[0m \u001b[1;31m# Evaluate the INT8 BERT model after the dynamic quantization\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 15\u001b[1;33m \u001b[0mtime_model_evaluation\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mquantized_model\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mmel\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0moptions\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[1;32m~\\AppData\\Local\\Temp/ipykernel_9024/408565071.py\u001b[0m in \u001b[0;36mtime_model_evaluation\u001b[1;34m(model, mel, options)\u001b[0m\n\u001b[0;32m      3\u001b[0m     \u001b[0meval_start_time\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mtime\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtime\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      4\u001b[0m     \u001b[1;31m# result = whisper.decode(model, mel, options)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 5\u001b[1;33m     \u001b[0mresult\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mwhisper\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtranscribe\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mmodel\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtest_path\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;31m# , options)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m      6\u001b[0m     \u001b[0meval_end_time\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mtime\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtime\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      7\u001b[0m     \u001b[0meval_duration_time\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0meval_end_time\u001b[0m \u001b[1;33m-\u001b[0m \u001b[0meval_start_time\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\transcribe.py\u001b[0m in \u001b[0;36mtranscribe\u001b[1;34m(model, audio, verbose, temperature, compression_ratio_threshold, logprob_threshold, no_speech_threshold, condition_on_previous_text, **decode_options)\u001b[0m\n\u001b[0;32m    180\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    181\u001b[0m             \u001b[0mdecode_options\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m\"prompt\"\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mall_tokens\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0mprompt_reset_since\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 182\u001b[1;33m             \u001b[0mresult\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mdecode_with_fallback\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0msegment\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    183\u001b[0m             \u001b[0mtokens\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtensor\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mresult\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtokens\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    184\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\transcribe.py\u001b[0m in \u001b[0;36mdecode_with_fallback\u001b[1;34m(segment)\u001b[0m\n\u001b[0;32m    123\u001b[0m             \u001b[1;32mif\u001b[0m \u001b[0many\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mneeds_fallback\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    124\u001b[0m                 \u001b[0moptions\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mDecodingOptions\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtemperature\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mt\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 125\u001b[1;33m                 \u001b[0mretries\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mmodel\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mdecode\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0msegment\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0mneeds_fallback\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0moptions\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    126\u001b[0m                 \u001b[1;32mfor\u001b[0m \u001b[0mretry_index\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0moriginal_index\u001b[0m \u001b[1;32min\u001b[0m \u001b[0menumerate\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mnp\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mnonzero\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mneeds_fallback\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    127\u001b[0m                     \u001b[0mresults\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0moriginal_index\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mretries\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0mretry_index\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\Users\\win8t\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\torch\\autograd\\grad_mode.py\u001b[0m in \u001b[0;36mdecorate_context\u001b[1;34m(*args, **kwargs)\u001b[0m\n\u001b[0;32m     26\u001b[0m         \u001b[1;32mdef\u001b[0m \u001b[0mdecorate_context\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0margs\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     27\u001b[0m             \u001b[1;32mwith\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m__class__\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 28\u001b[1;33m                 \u001b[1;32mreturn\u001b[0m \u001b[0mfunc\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0margs\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     29\u001b[0m         \u001b[1;32mreturn\u001b[0m \u001b[0mcast\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mF\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mdecorate_context\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     30\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\decoding.py\u001b[0m in \u001b[0;36mdecode\u001b[1;34m(model, mel, options)\u001b[0m\n\u001b[0;32m    697\u001b[0m         \u001b[0mmel\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mmel\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0munsqueeze\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    698\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 699\u001b[1;33m     \u001b[0mresult\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mDecodingTask\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mmodel\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0moptions\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mmel\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    700\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    701\u001b[0m     \u001b[1;32mif\u001b[0m \u001b[0msingle\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\Users\\win8t\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\torch\\autograd\\grad_mode.py\u001b[0m in \u001b[0;36mdecorate_context\u001b[1;34m(*args, **kwargs)\u001b[0m\n\u001b[0;32m     26\u001b[0m         \u001b[1;32mdef\u001b[0m \u001b[0mdecorate_context\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0margs\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     27\u001b[0m             \u001b[1;32mwith\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m__class__\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 28\u001b[1;33m                 \u001b[1;32mreturn\u001b[0m \u001b[0mfunc\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0margs\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     29\u001b[0m         \u001b[1;32mreturn\u001b[0m \u001b[0mcast\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mF\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mdecorate_context\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     30\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\decoding.py\u001b[0m in \u001b[0;36mrun\u001b[1;34m(self, mel)\u001b[0m\n\u001b[0;32m    629\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    630\u001b[0m         \u001b[1;31m# call the main sampling loop\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 631\u001b[1;33m         \u001b[0mtokens\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0msum_logprobs\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mno_speech_probs\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_main_loop\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0maudio_features\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtokens\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    632\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    633\u001b[0m         \u001b[1;31m# reshape the tensors to have (n_audio, n_group) as the first two dimensions\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\decoding.py\u001b[0m in \u001b[0;36m_main_loop\u001b[1;34m(self, audio_features, tokens)\u001b[0m\n\u001b[0;32m    584\u001b[0m         \u001b[1;32mtry\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    585\u001b[0m             \u001b[1;32mfor\u001b[0m \u001b[0mi\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mrange\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0msample_len\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 586\u001b[1;33m                 \u001b[0mlogits\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0minference\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mlogits\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mtokens\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0maudio_features\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    587\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    588\u001b[0m                 \u001b[1;32mif\u001b[0m \u001b[0mi\u001b[0m \u001b[1;33m==\u001b[0m \u001b[1;36m0\u001b[0m \u001b[1;32mand\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtokenizer\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mno_speech\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[1;32mNone\u001b[0m\u001b[1;33m:\u001b[0m  \u001b[1;31m# save no_speech_probs\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\decoding.py\u001b[0m in \u001b[0;36mlogits\u001b[1;34m(self, tokens, audio_features)\u001b[0m\n\u001b[0;32m    143\u001b[0m             \u001b[0mtokens\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mtokens\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m-\u001b[0m\u001b[1;36m1\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    144\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 145\u001b[1;33m         \u001b[1;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmodel\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mdecoder\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mtokens\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0maudio_features\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mkv_cache\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mkv_cache\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    146\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    147\u001b[0m     \u001b[1;32mdef\u001b[0m \u001b[0mcleanup_caching\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\Users\\win8t\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\torch\\nn\\modules\\module.py\u001b[0m in \u001b[0;36m_call_impl\u001b[1;34m(self, *input, **kwargs)\u001b[0m\n\u001b[0;32m   1100\u001b[0m         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n\u001b[0;32m   1101\u001b[0m                 or _global_forward_hooks or _global_forward_pre_hooks):\n\u001b[1;32m-> 1102\u001b[1;33m             \u001b[1;32mreturn\u001b[0m \u001b[0mforward_call\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m   1103\u001b[0m         \u001b[1;31m# Do not call functions when jit is used\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   1104\u001b[0m         \u001b[0mfull_backward_hooks\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mnon_full_backward_hooks\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\model.py\u001b[0m in \u001b[0;36mforward\u001b[1;34m(self, x, xa, kv_cache)\u001b[0m\n\u001b[0;32m    187\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    188\u001b[0m         \u001b[1;32mfor\u001b[0m \u001b[0mblock\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mblocks\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 189\u001b[1;33m             \u001b[0mx\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mblock\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mx\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mxa\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mmask\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmask\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mkv_cache\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mkv_cache\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    190\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    191\u001b[0m         \u001b[0mx\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mln\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mx\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\Users\\win8t\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\torch\\nn\\modules\\module.py\u001b[0m in \u001b[0;36m_call_impl\u001b[1;34m(self, *input, **kwargs)\u001b[0m\n\u001b[0;32m   1100\u001b[0m         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n\u001b[0;32m   1101\u001b[0m                 or _global_forward_hooks or _global_forward_pre_hooks):\n\u001b[1;32m-> 1102\u001b[1;33m             \u001b[1;32mreturn\u001b[0m \u001b[0mforward_call\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m   1103\u001b[0m         \u001b[1;31m# Do not call functions when jit is used\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   1104\u001b[0m         \u001b[0mfull_backward_hooks\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mnon_full_backward_hooks\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\model.py\u001b[0m in \u001b[0;36mforward\u001b[1;34m(self, x, xa, mask, kv_cache)\u001b[0m\n\u001b[0;32m    124\u001b[0m         \u001b[0mx\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mx\u001b[0m \u001b[1;33m+\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mattn\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mattn_ln\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mx\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mmask\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mmask\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mkv_cache\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mkv_cache\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    125\u001b[0m         \u001b[1;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mcross_attn\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 126\u001b[1;33m             \u001b[0mx\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mx\u001b[0m \u001b[1;33m+\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mcross_attn\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mcross_attn_ln\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mx\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mxa\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mkv_cache\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mkv_cache\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    127\u001b[0m         \u001b[0mx\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mx\u001b[0m \u001b[1;33m+\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmlp\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmlp_ln\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mx\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    128\u001b[0m         \u001b[1;32mreturn\u001b[0m \u001b[0mx\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\Users\\win8t\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\torch\\nn\\modules\\module.py\u001b[0m in \u001b[0;36m_call_impl\u001b[1;34m(self, *input, **kwargs)\u001b[0m\n\u001b[0;32m   1100\u001b[0m         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n\u001b[0;32m   1101\u001b[0m                 or _global_forward_hooks or _global_forward_pre_hooks):\n\u001b[1;32m-> 1102\u001b[1;33m             \u001b[1;32mreturn\u001b[0m \u001b[0mforward_call\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m   1103\u001b[0m         \u001b[1;31m# Do not call functions when jit is used\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   1104\u001b[0m         \u001b[0mfull_backward_hooks\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mnon_full_backward_hooks\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\users\\win8t\\onedrive\\desktop\\projects\\openai-whisper-cpu\\whisper\\whisper\\model.py\u001b[0m in \u001b[0;36mforward\u001b[1;34m(self, x, xa, mask, kv_cache)\u001b[0m\n\u001b[0;32m     71\u001b[0m         \u001b[0mkv_cache\u001b[0m\u001b[1;33m:\u001b[0m \u001b[0mOptional\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0mdict\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;32mNone\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     72\u001b[0m     ):\n\u001b[1;32m---> 73\u001b[1;33m         \u001b[0mq\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mquery\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mx\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     74\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     75\u001b[0m         \u001b[1;32mif\u001b[0m \u001b[0mkv_cache\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mNone\u001b[0m \u001b[1;32mor\u001b[0m \u001b[0mxa\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mNone\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\Users\\win8t\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\torch\\nn\\modules\\module.py\u001b[0m in \u001b[0;36m_call_impl\u001b[1;34m(self, *input, **kwargs)\u001b[0m\n\u001b[0;32m   1100\u001b[0m         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n\u001b[0;32m   1101\u001b[0m                 or _global_forward_hooks or _global_forward_pre_hooks):\n\u001b[1;32m-> 1102\u001b[1;33m             \u001b[1;32mreturn\u001b[0m \u001b[0mforward_call\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0minput\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m   1103\u001b[0m         \u001b[1;31m# Do not call functions when jit is used\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m   1104\u001b[0m         \u001b[0mfull_backward_hooks\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mnon_full_backward_hooks\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mc:\\Users\\win8t\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\torch\\nn\\quantized\\dynamic\\modules\\linear.py\u001b[0m in \u001b[0;36mforward\u001b[1;34m(self, x)\u001b[0m\n\u001b[0;32m     46\u001b[0m                     x, self._packed_params._packed_params)\n\u001b[0;32m     47\u001b[0m             \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 48\u001b[1;33m                 Y = torch.ops.quantized.linear_dynamic(\n\u001b[0m\u001b[0;32m     49\u001b[0m                     x, self._packed_params._packed_params, reduce_range=True)\n\u001b[0;32m     50\u001b[0m         \u001b[1;32melif\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_packed_params\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mdtype\u001b[0m \u001b[1;33m==\u001b[0m \u001b[0mtorch\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfloat16\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mKeyboardInterrupt\u001b[0m: "
     ]
    }
   ],
   "source": [
    "import time\n",
    "def time_model_evaluation(model, mel, options):\n",
    "    eval_start_time = time.time()\n",
    "    # result = whisper.decode(model, mel, options)\n",
    "    result = whisper.transcribe(model, test_path) # , options)\n",
    "    eval_end_time = time.time()\n",
    "    eval_duration_time = eval_end_time - eval_start_time\n",
    "    print(result[\"text\"])\n",
    "    print(\"Evaluate total time (seconds): {0:.1f}\".format(eval_duration_time))\n",
    "\n",
    "# Evaluate the original FP32 BERT model\n",
    "time_model_evaluation(model_fp32, mel, options)\n",
    "\n",
    "# Evaluate the INT8 BERT model after the dynamic quantization\n",
    "time_model_evaluation(quantized_model, mel, options)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.9.9 64-bit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.9"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "28453d1081d3c550fce4dd227bac61cebcdf565b50505afc80cae3c0cf61cf22"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}


================================================
FILE: script/custom_whisper.py
================================================
#!/usr/bin/python3

import sys

# Arguments are parsed positionally; expected invocation:
#   custom_whisper.py <audio_dir_or_file> --language <Language> --model <model_name>
# e.g. audio/Byron_Katie_Podcast/Byron_Katie_KICK_OFF_FINAL_MIX.mp3 --language English --model large
audio_path = str(sys.argv[1])
print('Audio:', audio_path)
print('Language Tag:', str(sys.argv[2]))
language = str(sys.argv[3])
print('Language:', language)
print('Model Tag:', str(sys.argv[4]))
model_name = str(sys.argv[5])
print('Model:', model_name)

import whisper
import torch


model_fp32 = whisper.load_model(
    name=model_name,
    device="cpu"
#   ,in_memory=True
)

print(torch.__version__)

quantized_model = torch.quantization.quantize_dynamic(
    model_fp32, {torch.nn.Linear}, dtype=torch.qint8
)

#print(quantized_model)
#print(model_fp32)

import os

def print_size_of_model(model):
    path = "temp.p"
    torch.save(model.state_dict(), path)
    size = os.path.getsize(path)/1e6
    print('Size (MB):', size)
    os.remove(path)
    return size

print_size_of_model(model_fp32)
print_size_of_model(quantized_model)

#audio = whisper.load_audio(audio_file)
#audio = whisper.pad_or_trim(audio)

#mel   = whisper.log_mel_spectrogram(audio).to(model_fp32.device)
#options = whisper.DecodingOptions(language=language,fp16=False)

# regular
#_, probs = model_fp32.detect_language(mel)
#print(f"Detected language: {max(probs, key=probs.get)}")

# quantized
#_, probs = quantized_model.detect_language(mel)
#print(f"Detected language: {max(probs, key=probs.get)}")

from pathlib import Path
from whisper.utils import write_srt
import json

import time
def time_model_evaluation(model,audio_file):
    eval_start_time = time.time()
    # result = whisper.decode(model, mel, options)
    result = whisper.transcribe(model, audio_file)
    eval_end_time = time.time()
    eval_duration_time = eval_end_time - eval_start_time

    # save SRT
    audio_basename = Path(audio_file).stem
    with open(Path("./script") / (audio_basename + ".srt"), "w", encoding="utf-8") as srt:
        write_srt(result["segments"], file=srt)
    # save JSON
    json_object = json.dumps(result, indent=4)
    with open(Path("./script") / (audio_basename + ".json"), "w", encoding="utf-8") as output:
        output.write(json_object)

    print("Evaluate total time (seconds): {0:.1f}".format(eval_duration_time))


# check if audio_path is a dir or a file
if os.path.isdir(audio_path):
    # is dir
    files = [f for f in os.listdir(audio_path) if os.path.isfile(os.path.join(audio_path, f))]
    for audio_file in files:
        time_model_evaluation(quantized_model,os.path.join(audio_path, audio_file))
else:
    # is file
    time_model_evaluation(quantized_model,audio_path)




# Evaluate the original FP32 Whisper model
# time_model_evaluation(model_fp32, audio_path)

# Evaluate the INT8 Whisper model after the dynamic quantization
# time_model_evaluation(quantized_model, audio_path)

#torch.save(quantized_model.state_dict(), "./script/quantized_model.p")