Repository: ochen1/insanely-fast-whisper-cli
Branch: main
Commit: ab934dd330df
Files: 7
Total size: 11.1 KB
Directory structure:
gitextract_2amsqsvz/
├── .gitignore
├── LICENSE
├── README.md
├── insanely-fast-whisper.py
├── install-gfx1010.sh
├── requirements-gfx1010.txt
└── requirements.txt
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2023 ochen1
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# Insanely Fast Whisper (CLI)
Powered by 🤗 *Transformers* & *Optimum* and based on **[Vaibhavs10/insanely-fast-whisper](https://github.com/Vaibhavs10/insanely-fast-whisper)**.
**TL;DR** - 🎙️ Transcribe **300** minutes (5 hours) of audio in less than **10** minutes - with [OpenAI's Whisper Large v2](https://huggingface.co/openai/whisper-large-v2). Blazingly fast transcription is now a reality!⚡️
## Features
✨ **ASR Model**: Choose from different 🤗 Hugging Face ASR models, including all sizes of [openai/whisper](https://github.com/openai/whisper) and even use an English-only variant (for non-large models).
🚀 **Performance**: Customizable optimizations for ASR processing, with options for batch size, data type, and BetterTransformer, all from the comfort of your terminal! 😎
📝 **Timestamps**: Get an SRT output file with accurate timestamps, allowing you to create subtitles for your audio or video content.
## Installation
- Clone git repository with `git clone https://github.com/ochen1/insanely-fast-whisper-cli`
- Switch to that folder with `cd insanely-fast-whisper-cli/`
- (optional) Create a new Python environment with `python -m venv venv`
- (optional) Activate environment with `source venv/bin/activate`
- Install packages from requirements with `pip install -r requirements.txt`
- Run the program with `python insanely-fast-whisper.py your_audio_file.wav`
## Usage
```bash
python insanely-fast-whisper.py --model openai/whisper-base --device cuda:0 --dtype float32 --batch-size 8 --better-transformer --chunk-length 30 your_audio_file.wav
```
- `--model`: ASR model to use (default: `openai/whisper-base`).
- `--device`: Computation device (default: `cuda:0`; use `cpu` to run on CPU).
- `--dtype`: Data type for computation, `float32` or `float16` (default: `float32`).
- `--batch-size`: Number of audio chunks processed at once (default: 8).
- `--better-transformer`: Flag; if set, BetterTransformer is used for faster inference.
- `--chunk-length`: Audio chunk length in seconds (default: 30).
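To get a rough sense of how `--chunk-length` and `--batch-size` interact: the audio is split into chunks of `chunk-length` seconds and transcribed `batch-size` chunks at a time (the real pipeline overlaps adjacent chunks, so exact counts differ slightly). A back-of-the-envelope sketch, with a hypothetical helper name:

```python
import math

def chunk_counts(audio_seconds: float, chunk_length: int = 30, batch_size: int = 8):
    # Approximate number of chunks and batches for a given audio length.
    chunks = math.ceil(audio_seconds / chunk_length)
    return chunks, math.ceil(chunks / batch_size)

# 300 minutes of audio at the defaults:
print(chunk_counts(300 * 60))  # (600, 75)
```

With the defaults, the 300-minute example from the TL;DR works out to 600 chunks processed in 75 batches.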
## Example
Transcribing an audio file with an English-only Whisper model and returning timestamps:
```bash
python insanely-fast-whisper.py --model openai/whisper-base.en your_audio_file.wav
```
```
## Output
The tool will save an SRT transcription of your audio file in the current working directory.
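Each SRT entry pairs a sequential index, a `start --> end` timestamp line, and the chunk text. The timestamp conversion can be sketched as a standalone helper (hypothetical name `to_srt_time`; the script's own `seconds_to_srt_time_format` additionally carries a `prev` fallback for chunks whose end timestamp is `None`):

```python
def to_srt_time(seconds: float) -> str:
    # Render seconds as the SRT timestamp format "HH:MM:SS,mmm".
    hours, rem = divmod(int(seconds), 3600)
    minutes, secs = divmod(rem, 60)
    millis = int((seconds - int(seconds)) * 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{millis:03d}"

print(to_srt_time(3725.5))  # 01:02:05,500
```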
## License
This project is licensed under the [MIT License](https://github.com/ochen1/insanely-fast-whisper-cli/blob/main/LICENSE).
## Acknowledgments
- This tool is powered by Hugging Face's ASR models, primarily Whisper by OpenAI.
- Optimizations are developed by [Vaibhavs10/insanely-fast-whisper](https://github.com/Vaibhavs10/insanely-fast-whisper).
- Developed by [@ochen1](https://github.com/ochen1).
## 📞 Contact
Have questions or feedback? Feel free to create an issue!
🌟 **Star this repository if you find it helpful!**
---
🚀 Happy transcribing with Insanely Fast Whisper! 🚀
================================================
FILE: insanely-fast-whisper.py
================================================
#!/usr/bin/env python3
import click
import os
import time
@click.command()
@click.option('--model', default='openai/whisper-base', help='ASR model to use for speech recognition. Default is "openai/whisper-base". Model sizes include base, small, medium, large, large-v2. Additionally, try appending ".en" to model names for English-only applications (not available for large).')
@click.option('--device', default='cuda:0', help='Device to use for computation. Default is "cuda:0". If you want to use CPU, specify "cpu".')
@click.option('--dtype', default='float32', help='Data type for computation. Can be either "float32" or "float16". Default is "float32".')
@click.option('--batch-size', type=int, default=8, help='Batch size for processing. This is the number of audio files processed at once. Default is 8.')
@click.option('--better-transformer', is_flag=True, help='Flag to use BetterTransformer for processing. If set, BetterTransformer will be used.')
@click.option('--chunk-length', type=int, default=30, help='Length of audio chunks to process at once, in seconds. Default is 30 seconds.')
@click.argument('audio_file', type=str)
def asr_cli(model, device, dtype, batch_size, better_transformer, chunk_length, audio_file):
    from transformers import pipeline
    import torch

    # Initialize the ASR pipeline
    pipe = pipeline("automatic-speech-recognition",
                    model=model,
                    device=device,
                    torch_dtype=torch.float16 if dtype == "float16" else torch.float32)

    if better_transformer:
        pipe.model = pipe.model.to_bettertransformer()

    # Perform ASR
    click.echo("Model loaded.")
    start_time = time.perf_counter()
    outputs = pipe(audio_file, chunk_length_s=chunk_length, batch_size=batch_size, return_timestamps=True)

    # Output the results
    click.echo(outputs)
    click.echo("Transcription complete.")
    end_time = time.perf_counter()
    elapsed_time = end_time - start_time
    click.echo(f"ASR took {elapsed_time:.2f} seconds.")

    # Save ASR chunks to an SRT file
    audio_file_name = os.path.splitext(os.path.basename(audio_file))[0]
    srt_filename = f"{audio_file_name}.srt"
    with open(srt_filename, 'w', encoding="utf-8") as srt_file:
        prev = 0
        for index, chunk in enumerate(outputs['chunks']):
            prev, start_time = seconds_to_srt_time_format(prev, chunk['timestamp'][0])
            prev, end_time = seconds_to_srt_time_format(prev, chunk['timestamp'][1])
            srt_file.write(f"{index + 1}\n")
            srt_file.write(f"{start_time} --> {end_time}\n")
            srt_file.write(f"{chunk['text'].strip()}\n\n")

def seconds_to_srt_time_format(prev, seconds):
    # Fall back to the previous timestamp when the model returns None
    # (e.g. for the final chunk's end timestamp).
    if not isinstance(seconds, (int, float)):
        seconds = prev
    else:
        prev = seconds
    hours = int(seconds // 3600)
    seconds %= 3600
    minutes = int(seconds // 60)
    seconds %= 60
    milliseconds = int((seconds - int(seconds)) * 1000)
    seconds = int(seconds)
    return (prev, f"{hours:02d}:{minutes:02d}:{seconds:02d},{milliseconds:03d}")
if __name__ == '__main__':
asr_cli()
================================================
FILE: install-gfx1010.sh
================================================
#!/bin/sh
echo "Python <= 3.10 only!"
uv pip install -r requirements-gfx1010.txt --extra-index-url https://download.pytorch.org/whl/rocm5.2
echo
echo "Example usage:"
echo "HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 insanely-fast-whisper.py --model distil-whisper/distil-medium.en audio.mp3"
================================================
FILE: requirements-gfx1010.txt
================================================
#accelerate
#optimum
click
transformers
torch==1.13.1+rocm5.2
================================================
FILE: requirements.txt
================================================
torch
torchvision
torchaudio
git+https://github.com/huggingface/transformers
accelerate
optimum
click