[
  {
    "path": ".github/FUNDING.yml",
    "content": "github: daanzu\ncustom: \"https://paypal.me/daanzu\"\n"
  },
  {
    "path": ".github/RELEASING.md",
    "content": "# Release Process\n\n## Quick Summary\n\n1. **Prepare**: Update version in `__init__.py`, update `CHANGELOG.md`, run tests\n2. **Tag Fork First**: Push all changes and tag `kag-vX.Y.Z` in kaldi-fork repo (MUST be done first!)\n3. **Tag Main**: Create and push git tag `vX.Y.Z` in this repo to trigger builds\n4. **Build**: Automated GitHub Actions builds wheels for all platforms\n5. **Release**: Create GitHub release with changelog, upload wheel artifacts and models\n6. **Publish**: Download wheels from artifacts, upload to PyPI with twine\n7. **Finalize**: Bump to next dev version, announce release\n\n---\n\n## Detailed Release Process\n\n### Overview\n\nThis is a **duorepo** (2 separate repos used together):\n- Main repo: `daanzu/kaldi-active-grammar` (Python package)\n- Native binaries repo: `daanzu/kaldi-fork-active-grammar` (Kaldi C++ fork)\n\n**⚠️ IMPORTANT: The fork repo MUST always be pushed first before this repo for any changes!** The build process in this repo checks out code from the fork repo, so the fork must contain all necessary changes before triggering builds here.\n\n### 1. Pre-Release Preparation\n\n#### Update Version Number\n\nEdit `kaldi_active_grammar/__init__.py`:\n```python\n__version__ = 'X.Y.Z'  # Change from previous version\n```\n\nOptionally update `REQUIRED_MODEL_VERSION` if model changes.\n\n#### Update CHANGELOG.md\n\nAdd new version section following Keep a Changelog format:\n```markdown\n## [X.Y.Z](release-url) - YYYY-MM-DD - Changes: [KaldiAG](compare-url) [KaldiFork](compare-url)\n\n### Added\n- New features\n\n### Changed\n- Changes to existing functionality\n\n### Fixed\n- Bug fixes\n\n### Removed\n- Removed features\n```\n\nInclude comparison links for both repos (KaldiAG and KaldiFork).\n\n#### Run Tests\n\n```bash\njust test\n```\n\n#### Commit Changes\n\n```bash\ngit add kaldi_active_grammar/__init__.py CHANGELOG.md\ngit commit -m \"Release vX.Y.Z\"\n```\n\n### 2. 
Create Git Tags\n\n**⚠️ CRITICAL ORDER: Tag and push the fork repo FIRST, then this repo!**\n\n#### Tag the Kaldi Fork Repo (DO THIS FIRST!)\n\nIn the `daanzu/kaldi-fork-active-grammar` repo:\n\n1. Ensure all native code changes are committed and pushed\n2. Create and push the tag:\n   ```bash\n   git tag kag-vX.Y.Z  # Note the 'kag-' prefix matching the version\n   git push origin kag-vX.Y.Z\n   ```\n\nThis is crucial because the build process in this repo will check out code from the fork using this tag.\n\n#### Tag This Repo (DO THIS SECOND!)\n\nOnly after the fork repo tag is pushed:\n\n```bash\ngit tag vX.Y.Z\ngit push origin vX.Y.Z\n```\n\nPushing this tag will trigger the GitHub Actions build workflow, which will pull the native code from the fork repo using the `kag-vX.Y.Z` tag.\n\n### 3. Automated Build Process (GitHub Actions)\n\nWhen you push a tag, the CI automatically:\n\n#### Detects the Tag\n\nSets `KALDI_BRANCH=kag-vX.Y.Z` for tagged commits, or uses the current branch name for non-tagged commits.\n\n#### Builds Native Binaries\n\n**Linux** (`build-linux` job):\n- Uses dockcross/manylinux2010 for compatibility\n- Compiles Kaldi C++ code with CMake\n- Runs auditwheel for wheel repair\n\n**Windows** (`build-windows` job):\n- Uses the Visual Studio 2022 toolset (v143) on a `windows-2025` runner\n- Installs Intel MKL\n- Compiles OpenFST and Kaldi with MSBuild\n\n**macOS ARM** (`build-macos-arm` job):\n- For Apple Silicon (M1/M2/etc)\n- Uses delocate for wheel repair\n\n**macOS Intel** (`build-macos-intel` job):\n- For x86_64 Macs\n- Uses delocate for wheel repair\n\n#### Caches Native Binaries\n\nCaches compiled Kaldi binaries by commit hash to speed up rebuilds.\n\n#### Creates Python Wheels\n\n- Packages include all native binaries\n- Platform-specific wheels: `py3-none-{platform}`\n- Uses `setup.py` with scikit-build (or `KALDIAG_BUILD_SKIP_NATIVE=1` for packaging only)\n\n#### Tests All Wheels\n\n`test-wheels` job runs on multiple OS/Python version combinations.\n\n#### Merges Wheel 
Artifacts\n\nThe `merge-wheels` job combines all platform wheels into a single artifact named `wheels`.\n\n### 4. Manual Workflow Trigger (Optional)\n\nYou can also trigger builds manually:\n\n```bash\n# Using GitHub CLI\ngh workflow run build.yml --ref master\n# Or specific ref:\ngh workflow run build.yml --ref vX.Y.Z\n\n# Or via Justfile:\njust trigger-build master\n```\n\n### 5. Create GitHub Release\n\n1. **Navigate to GitHub Releases page**\n   - https://github.com/daanzu/kaldi-active-grammar/releases\n\n2. **Create new release**:\n   - Tag: `vX.Y.Z` (select existing tag)\n   - Title: `vX.Y.Z` or descriptive name\n   - Description: Copy relevant section from `CHANGELOG.md` or use template from `.github/release_notes.md`\n\n3. **Download wheel artifacts** from the successful build workflow:\n   - Go to Actions → Build workflow → successful run\n   - Download artifact named `wheels` (merged) or individual `wheels-{platform}`\n\n4. **Upload wheels to release**:\n   - Upload all `.whl` files from the artifacts\n\n5. **Upload additional assets** (if applicable):\n   - Pre-trained Kaldi models (if updated)\n   - WinPython distributions (if prepared):\n     - `kaldi-dragonfly-winpython` (stable)\n     - `kaldi-dragonfly-winpython-dev` (development)\n     - `kaldi-caster-winpython-dev` (with Caster)\n\n6. **Publish the release**\n\n### 6. 
Publish to PyPI\n\nThe process is currently **manual** (not automated in workflow).\n\n#### Download Wheel Artifacts\n\nDownload the `wheels` artifact from the successful GitHub Actions build.\n\n#### Upload to PyPI\n\nYou'll need PyPI credentials (entered interactively or configured in `~/.pypirc` or via environment variables).\n\n```bash\n# Test PyPI first (recommended):\nuvx twine upload --repository testpypi wheels/*\n# Production PyPI:\nuvx twine upload wheels/*\n```\n\nOr:\n\n```bash\npip install twine\n# Test PyPI first (recommended):\ntwine upload --repository testpypi wheels/*\n# Production PyPI:\ntwine upload wheels/*\n```\n\n#### Verify on PyPI\n\n- Check https://pypi.org/project/kaldi-active-grammar/\n- Verify all platforms are present\n- Test installation:\n  ```bash\n  pip install kaldi-active-grammar==X.Y.Z\n  ```\n\n### 7. Post-Release Tasks\n\n#### Bump Version for Development\n\nUpdate `__version__` in `kaldi_active_grammar/__init__.py` to next dev version:\n```python\n__version__ = 'X.Y.Z.dev0'  # or 'X.Y+1.0.dev0'\n```\n\nCommit the change:\n```bash\ngit add kaldi_active_grammar/__init__.py\ngit commit -m \"Bump version to X.Y.Z.dev0\"\ngit push\n```\n\n#### Announce Release\n\n- Update documentation/README if needed\n- Update Dragonfly documentation if relevant\n- Post on relevant forums/communities\n    - Notify on Gitter:\n        - https://app.gitter.im/#/room/#dragonfly2:matrix.org\n        - https://gitter.im/kaldi-active-grammar/community\n\n---\n\n## Key Files in Release Process\n\n| File | Purpose |\n|------|---------|\n| `kaldi_active_grammar/__init__.py:8` | Version source |\n| `kaldi_active_grammar/__init__.py:10` | Required model version |\n| `CHANGELOG.md` | Release notes and history |\n| `.github/workflows/build.yml` | CI build configuration |\n| `.github/workflows/tests.yml` | CI test configuration |\n| `setup.py` | Package build configuration |\n| `pyproject.toml` | Build system requirements |\n| `Justfile` | Build and test 
tasks |\n| `.github/release_notes.md` | Release notes template |\n\n---\n\n## Environment Variables\n\n| Variable | Purpose |\n|----------|---------|\n| `KALDIAG_BUILD_SKIP_NATIVE=1` | Skip native compilation, just package |\n| `KALDI_BRANCH` | Which Kaldi fork branch/tag to build from (auto-detected from git tag) |\n| `MKL_URL` | Optional Intel MKL download URL (mostly disabled now) |\n\n---\n\n## Development vs Release Versions\n\n- **Dev versions**: `setup.py` auto-appends timestamp to `X.Y.Z.dev0` versions\n  - Example: `3.1.0.dev20251031123456`\n- **Release versions**: Clean `X.Y.Z` semantic version\n  - Example: `3.1.0`\n- The build process differentiates based on git tags\n\n---\n\n## Troubleshooting\n\n### Build fails on one platform\n\n- Check the GitHub Actions logs for that specific job\n- Native binaries are cached, so you may need to invalidate the cache if the Kaldi fork changed\n- Ensure the `kag-vX.Y.Z` tag exists in the kaldi-fork-active-grammar repo\n\n### Tests fail\n\n- Run tests locally: `just test`\n- Check if the model needs to be updated\n- Verify test data is downloaded: `just setup-tests`\n\n### PyPI upload fails\n\n- Verify credentials in `~/.pypirc`\n- Check wheel filenames are correct\n- Ensure version doesn't already exist on PyPI\n- Try test PyPI first\n\n### Wheels missing for a platform\n\n- Check if that build job completed successfully\n- Look for cache issues\n- Verify platform is included in build matrix\n\n### Version mismatch\n\n- Ensure git tag matches `__version__` in `__init__.py`\n- Check that both repos are tagged (main repo: `vX.Y.Z`, fork: `kag-vX.Y.Z`)\n- Verify `CHANGELOG.md` has correct version\n"
  },
  {
    "path": ".github/release_notes.md",
    "content": "v0.5.0: User Lexicon! Compilation Optimizations! Better Model!\n\n### Notes\n\n* **User Lexicon**: you can add new words/pronunciations to the model's lexicon to be recognized & used in grammars, and the pronunciations can be either specified explicitly or inferred automatically.\n* **Compilation Optimizations**: compilation while loading grammars uses the disk much less, and far fewer passes are made over the graphs, as separate modules have been customized & combined.\n* **Better Model**: 50% more training data.\n\n### Artifacts\n\n* **`kaldi_model_zamia`**: [*new model version required!*] A compatible general English Kaldi nnet3 chain model.\n* **`kaldi-dragonfly-winpython`**: A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-dragonfly-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. Just unzip and run!\n\n### **Donations are appreciated to encourage development.**\n\n[![Donate](https://img.shields.io/badge/donate-PayPal-green.svg)](https://paypal.me/daanzu)\n[![Donate](https://img.shields.io/badge/donate-Patreon-orange.svg)](https://www.patreon.com/daanzu)\n\n\nv0.6.0: Big Fixes And Optimizations To Get Caster Running\n\n### Artifacts\n\n* **`kaldi_model_zamia`**: A compatible general English Kaldi nnet3 chain model.\n* **`kaldi-dragonfly-winpython`**: [*stable release version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. 
Just unzip and run!\n* **`kaldi-dragonfly-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. Just unzip and run!\n\nIf you have trouble downloading, try using `wget --continue`.\n\n### **Donations are appreciated to encourage development.**\n\n[![Donate](https://img.shields.io/badge/donate-PayPal-green.svg)](https://paypal.me/daanzu)\n[![Donate](https://img.shields.io/badge/donate-Patreon-orange.svg)](https://www.patreon.com/daanzu)\n\n\nv0.7.1: Partial Decoding, Parallel Compilation, & Various Optimizations for 15-50% Speedup\n\nSupport is now included in dragonfly2 v0.17.0! You can try a self-contained distribution available below, of either stable or development versions.\n\n### Notes\n\n* **Partial Decoding**: support for having **separate Voice Activity Detection timeout values** based on whether the current utterance is complex (dictation) or not.\n* **Parallel Compilation**: when compiling grammars/rules that are not cached, multiple can be compiled at once (up to your core count).\n    * Example: loading Caster without cache is ~40% faster (in addition to optimizations below).\n* **Various Optimizations**: loading even while cached sped up 15%.\n* Refactored temporary/cache file handling\n* Various bug fixes\n\n### Artifacts\n\n* **`kaldi_model_zamia`**: A compatible general English Kaldi nnet3 chain model.\n* **`kaldi-dragonfly-winpython`**: [*stable release version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. 
Just unzip and run!\n* **`kaldi-dragonfly-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. Just unzip and run!\n\nIf you have trouble downloading, try using `wget --continue`.\n\n### **Donations are appreciated to encourage development.**\n\n[![Donate](https://img.shields.io/badge/donate-PayPal-green.svg)](https://paypal.me/daanzu)\n[![Donate](https://img.shields.io/badge/donate-Patreon-orange.svg)](https://www.patreon.com/daanzu)\n\n\nv1.0.0: Faster Loading, Python3, Grammar/Rule Weights, and more\n\nSupport is now included in dragonfly2 v0.18.0! You can try a self-contained distribution available below, of either stable or development versions.\n\n### Notes\n\n* **Direct Parsing**: parse recognitions directly on the FST, removing the (slow) `pyparsing` dependency.\n    * Caster example: Loading is now **~50%** faster when cached, and the Kaldi backend accounts for only ~15% of loading time.\n* **Python3**: both python 2 and 3 should be fully supported now.\n    * **Unicode**: this should also fix unicode issues in various places in both python2/3.\n* **Grammar/Rule Weights**: can specify weight, where grammars/rules with higher weight value are more likely to be recognized, compared to their peers, for an ambiguous recognition.\n* **Generalized Alternative Dictation**: the cloud dictation feature has been generalized to make it easier to add other alternatives in the future.\n* Various bug fixes & optimizations\n\n### Artifacts\n\n* **`kaldi_model_zamia`**: A compatible general English Kaldi nnet3 chain model.\n* **`kaldi-dragonfly-winpython`**: [*stable release version*] A self-contained, portable, 
batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-dragonfly-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. Just unzip and run!\n\nIf you have trouble downloading, try using `wget --continue`.\n\n### Donations are appreciated to encourage development.\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-pink.svg)](https://github.com/sponsors/daanzu) [![Donate](https://img.shields.io/badge/donate-Patreon-orange.svg)](https://www.patreon.com/daanzu) [![Donate](https://img.shields.io/badge/donate-PayPal-green.svg)](https://paypal.me/daanzu) [![Donate](https://img.shields.io/badge/preferred-GitHub-black.svg)](https://github.com/sponsors/daanzu)\n[**GitHub** is currently matching all my donations $-for-$.]\n\n\n\n\nv1.2.0: Improved Recognition, Weights on Any Elements, Pluggable Alternative Dictation, Stand-alone Plain Dictation Interface, and More\n\nSupport is now included in dragonfly2 v0.20.0! 
You can try a self-contained distribution available below, of either stable or development versions.\n\n### Notes\n\n* **Improved Recognition**: better graph construction/compilation should give significantly better overall recognition.\n* **Weights on Any Elements**: you can now easily add weights to any element (including compound elements in `MappingRule`s), in addition to any rule/grammar.\n* **Pluggable Alternative Dictation**: you can optionally pass a `callable` as `alternative_dictation` to define your own, external dictation engine.\n* **Stand-alone Plain Dictation Interface**: the library now provides a simple interface for recognizing plain dictation without fancy active grammar features.\n* **NOTE**: the default model directory is now `kaldi_model`.\n* Various bug fixes & optimizations\n\n### Artifacts\n\n* **`kaldi_model_daanzu`**: A better overall compatible general English Kaldi nnet3 chain model than below.\n* **`kaldi_model_zamia_daanzu_mediumlm`**: A compatible general English Kaldi nnet3 chain model, with a larger/better dictation language model than below.\n* **`kaldi_model_zamia`**: A compatible general English Kaldi nnet3 chain model.\n* **`kaldi-dragonfly-winpython`**: [*stable release version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-dragonfly-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. 
Just unzip and run!\n\nIf you have trouble downloading, try using `wget --continue`.\n\n### Donations are appreciated to encourage development.\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-pink.svg)](https://github.com/sponsors/daanzu) [![Donate](https://img.shields.io/badge/donate-Patreon-orange.svg)](https://www.patreon.com/daanzu) [![Donate](https://img.shields.io/badge/donate-PayPal-green.svg)](https://paypal.me/daanzu) [![Donate](https://img.shields.io/badge/preferred-GitHub-black.svg)](https://github.com/sponsors/daanzu)\n[**GitHub** is currently matching all my donations $-for-$.]\n\n\n\nv1.3.0: Preparation and Fixes for Next Generation of Models\n\nThis should be included in the next dragonfly version, or you can try a self-contained distribution available below.\n\nYou can subscribe to announcements on Gitter: see [instructions](https://gitlab.com/gitlab-org/gitter/webapp/blob/master/docs/notifications.md#announcements). [![Gitter](https://badges.gitter.im/kaldi-active-grammar/community.svg)](https://gitter.im/kaldi-active-grammar/community)\n\n### Notes\n\n* **Next Generation of Models**: support for a new generation of models, trained on more data, and with hopefully better accuracy.\n* **User Lexicon**: if there is a ``user_lexicon.txt`` file in the current working directory of your initial loader script, its contents will be automatically added to the ``user_lexicon.txt`` in the active model when it is loaded.\n* Various bug fixes & optimizations\n\n### Artifacts\n\n* **`kaldi_model_daanzu*`**: A better acoustic model, and varying levels of language model for dictation (bigger is generally better).\n* **`kaldi_model_zamia`**: A compatible general English Kaldi nnet3 chain model.\n* **`kaldi-dragonfly-winpython`**: [*stable release version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. 
Just unzip and run!\n* **`kaldi-dragonfly-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. Just unzip and run!\n\nIf you have trouble downloading, try using `wget --continue`.\n\n### Donations are appreciated to encourage development.\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-pink.svg)](https://github.com/sponsors/daanzu) [![Donate](https://img.shields.io/badge/donate-Patreon-orange.svg)](https://www.patreon.com/daanzu) [![Donate](https://img.shields.io/badge/donate-PayPal-green.svg)](https://paypal.me/daanzu) [![Donate](https://img.shields.io/badge/preferred-GitHub-black.svg)](https://github.com/sponsors/daanzu)\n[**GitHub** is matching (only) my **GitHub Sponsors** donations.]\n\n\n\n\n\nv1.4.0: MacOS Support, And Faster Graph Compilation\n\nSupport is now included in dragonfly2 v0.22.0! You can try a self-contained distribution available below.\n\nYou can subscribe to announcements on Gitter: see [instructions](https://gitlab.com/gitlab-org/gitter/webapp/blob/master/docs/notifications.md#announcements). 
[![Gitter](https://badges.gitter.im/kaldi-active-grammar/community.svg)](https://gitter.im/kaldi-active-grammar/community)\n\n### Notes\n\n* **MacOS Support**\n* **Faster Graph Compilation**\n* **Dictation**: the dictation model now does not recognize a zero-word sequence\n* Various bug fixes & optimizations\n\n### Artifacts\n\n* **`kaldi_model_daanzu*`**: A better acoustic model, and varying levels of language model for dictation (bigger is generally better).\n* **`kaldi_model_zamia`**: A compatible general English Kaldi nnet3 chain model.\n* **`kaldi-dragonfly-winpython`**: [*stable release version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. Just unzip and run!\n\nIf you have trouble downloading, try using `wget --continue`.\n\n### Donations are appreciated to encourage development.\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-pink.svg)](https://github.com/sponsors/daanzu) [![Donate](https://img.shields.io/badge/donate-Patreon-orange.svg)](https://www.patreon.com/daanzu) [![Donate](https://img.shields.io/badge/donate-PayPal-green.svg)](https://paypal.me/daanzu) [![Donate](https://img.shields.io/badge/preferred-GitHub-black.svg)](https://github.com/sponsors/daanzu)\n[**GitHub** is matching (only) my **GitHub Sponsors** donations.]\n\n\n\n\n\nv1.5.0: Improved Recognition Confidence Estimation\n\nYou can subscribe to announcements on Gitter: see [instructions](https://gitlab.com/gitlab-org/gitter/webapp/blob/master/docs/notifications.md#announcements). 
[![Gitter](https://badges.gitter.im/kaldi-active-grammar/community.svg)](https://gitter.im/kaldi-active-grammar/community)\n\n### Notes\n\n* **Improved Recognition Confidence Estimation**: two new, different measures:\n    * `confidence`: basically the difference in how much \"better\" the returned recognition was, compared to the second best guess (`>0`)\n    * `expected_error_rate`: an estimate of how often similar utterances are incorrect (roughly out of `1.0`, but can be greater)\n* Refactoring in preparation for future improvements\n* Various bug fixes & optimizations\n\n### Artifacts\n\n* **Models are available [here](https://github.com/daanzu/kaldi-active-grammar/blob/master/docs/models.md)**\n* **`kaldi-dragonfly-winpython`**: [*stable release version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. 
Just unzip and run!\n\nIf you have trouble downloading, try using `wget --continue`.\n\n### Donations are appreciated to encourage development.\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-pink.svg)](https://github.com/sponsors/daanzu) [![Donate](https://img.shields.io/badge/donate-Patreon-orange.svg)](https://www.patreon.com/daanzu) [![Donate](https://img.shields.io/badge/donate-PayPal-green.svg)](https://paypal.me/daanzu) [![Donate](https://img.shields.io/badge/preferred-GitHub-black.svg)](https://github.com/sponsors/daanzu)\n[**GitHub** is matching (only) my **GitHub Sponsors** donations.]\n\n\n\n\nv1.6.0: Easier Configuration; Public Automated Builds\n\nYou can subscribe to announcements on GitHub (see Watch panel above), or on Gitter (see [instructions](https://gitlab.com/gitlab-org/gitter/webapp/blob/master/docs/notifications.md#announcements) [![Gitter](https://badges.gitter.im/kaldi-active-grammar/community.svg)](https://gitter.im/kaldi-active-grammar/community))\n\n### Added\n* Can now pass configuration dict to `KaldiAgfNNet3Decoder`, `PlainDictationRecognizer` (without `HCLG.fst`).\n* Continuous Integration builds run on GitHub Actions for Windows (x64), MacOS (x64), Linux (x64).\n\n### Changed\n* Refactor of passing configuration to initialization.\n* `PlainDictationRecognizer.decode_utterance` can take `chunk_size` parameter.\n* Smaller binaries: MacOS 11MB -> 7.6MB, Linux 21MB -> 18MB.\n\n### Fixed\n* Confidence measurement in the presence of multiple, redundant rules.\n* Python3 int division bug for cloud dictation.\n\n### Artifacts\n\n* **Models are available [here](https://github.com/daanzu/kaldi-active-grammar/blob/master/docs/models.md)**\n* **`kaldi-dragonfly-winpython`**: [*stable release version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. 
Just unzip and run!\n* **`kaldi-caster-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. Just unzip and run!\n\nIf you have trouble downloading, try using `wget --continue`.\n\n### Donations are appreciated to encourage development.\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-pink.svg)](https://github.com/sponsors/daanzu) [![Donate](https://img.shields.io/badge/donate-Patreon-orange.svg)](https://www.patreon.com/daanzu) [![Donate](https://img.shields.io/badge/donate-PayPal-green.svg)](https://paypal.me/daanzu) [![Donate](https://img.shields.io/badge/preferred-GitHub-black.svg)](https://github.com/sponsors/daanzu)\n[**GitHub** is matching (only) my **GitHub Sponsors** donations.]\n\n\n\nv2.0.0: Faster Grammar Compilation; Cleaner Codebase; Preparation For New Features\n\n\nv2.1.0\n\nMinor fix for OpenBLAS compilation for some architectures on linux/mac.\n\nSee [major changes introduced in v2.0.0 and associated downloads](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v2.0.0).\n\n\nv3.2.0\n\nFunctionality-wise, only one small bug-fix: to the broken alternative dictation interface. But extensive build and infrastructure changes lead me to make this a minor release rather than just a patch release, out of an abundance of caution.\n\nActive development has resumed after a long break! (While development paused, the project was continuously maintained and actively used in production.) 
Look forward to more frequent releases in the hopefully-near future.\n\n\n\nYou can subscribe to announcements on GitHub (see Watch panel above), or on Gitter (see [instructions](https://gitlab.com/gitlab-org/gitter/webapp/blob/master/docs/notifications.md#announcements) [![Gitter](https://badges.gitter.im/kaldi-active-grammar/community.svg)](https://gitter.im/kaldi-active-grammar/community))\n\n#### Donations are appreciated to encourage development.\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-EA4AAA.svg?logo=githubsponsors)](https://github.com/sponsors/daanzu) [![Donate](https://img.shields.io/badge/donate-PayPal-002991.svg?logo=paypal)](https://paypal.me/daanzu)\n\n\n### Artifacts\n\n* **Models are available [here](https://github.com/daanzu/kaldi-active-grammar/blob/master/docs/models.md)** and below.\n* **`kaldi-dragonfly-winpython`**: A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython`**: A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. Just unzip and run!\n\nIf you have trouble downloading, try using `wget --continue`.\n\n\n\n\n* **`kaldi-dragonfly-winpython`**: [*stable release version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. Just unzip and run!\n\nThis should be included in the next dragonfly version, or you can try a self-contained distribution available below.\n"
  },
  {
    "path": ".github/workflows/build.yml",
    "content": "name: Build\n\non:\n  push:\n  pull_request:\n  workflow_dispatch:\n    # gh api repos/:owner/:repo/actions/workflows/build.yml/dispatches -F ref=master\n    # gh workflow run build.yml --ref develop\n    # gh workflow run build.yml\n\njobs:\n\n  build-linux:\n    runs-on: ubuntu-latest\n    if: true\n    env:\n      MKL_URL: \"\"\n      # MKL_URL: \"https://registrationcenter-download.intel.com/akdlm/irc_nas/tec/16917/l_mkl_2020.4.304.tgz\"\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v5\n\n      - name: Get KALDI_BRANCH (kag-$TAG tag if commit is tagged; current branch name if not)\n        run: |\n          # Fetch tags on the one fetched commit (shallow clone)\n          git fetch --depth=1 origin \"+refs/tags/*:refs/tags/*\"\n          export TAG=$(git tag --points-at HEAD)\n          echo \"TAG: $TAG\"\n          if [[ $TAG ]]; then\n            echo \"KALDI_BRANCH: kag-$TAG\"\n            echo \"KALDI_BRANCH=kag-$TAG\" >> $GITHUB_ENV\n            echo \"KALDI_BRANCH=kag-$TAG\" >> $GITHUB_OUTPUT\n          else\n            echo \"KALDI_BRANCH: ${GITHUB_REF/refs\\/heads\\//}\"\n            echo \"KALDI_BRANCH=${GITHUB_REF/refs\\/heads\\//}\" >> $GITHUB_ENV\n            echo \"KALDI_BRANCH=${GITHUB_REF/refs\\/heads\\//}\" >> $GITHUB_OUTPUT\n          fi\n\n      - name: Get Kaldi commit hash\n        id: get-kaldi-commit\n        run: |\n          KALDI_COMMIT=$(git ls-remote https://github.com/daanzu/kaldi-fork-active-grammar.git $KALDI_BRANCH | cut -f1)\n          echo \"KALDI_COMMIT: $KALDI_COMMIT\"\n          echo \"KALDI_COMMIT=$KALDI_COMMIT\" >> $GITHUB_OUTPUT\n\n      - name: Restore cached native binaries\n        id: cache-native-binaries-restore\n        uses: actions/cache/restore@v4\n        with:\n          key: native-${{ runner.os }}-${{ steps.get-kaldi-commit.outputs.KALDI_COMMIT }}-${{ env.MKL_URL }}-v1\n          path: |\n            kaldi_active_grammar/exec/linux\n            
kaldi_active_grammar.libs\n\n      - name: Install just\n        uses: taiki-e/install-action@just\n\n      - name: Build with dockcross (native binaries & python wheel)\n        run: |\n          shopt -s nullglob\n          echo \"KALDI_BRANCH: $KALDI_BRANCH\"\n          echo \"MKL_URL: $MKL_URL\"\n          just build-dockcross ${{ steps.cache-native-binaries-restore.outputs.cache-hit == 'true' && '--skip-native' || '' }} $KALDI_BRANCH $MKL_URL\n          ls -l wheelhouse/\n          for whl in wheelhouse/*.whl; do\n            unzip -l $whl\n          done\n\n      - name: Extract native binaries from wheel after auditwheel repair, to save to cache\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        # We do this rather than manually caching all of the various kaldi/openfst libraries in their build locations\n        run: |\n          shopt -s nullglob\n          # Assert there is only one wheel\n          WHEEL_COUNT=$(ls wheelhouse/*.whl | wc -l)\n          if [ \"$WHEEL_COUNT\" -ne 1 ]; then\n            echo \"Error: Expected exactly 1 wheel, found $WHEEL_COUNT\"\n            ls -l wheelhouse/\n            exit 1\n          fi\n          WHEEL_FILE=$(ls wheelhouse/*.whl)\n          echo \"Extracting from wheel: $WHEEL_FILE\"\n          unzip -o $WHEEL_FILE 'kaldi_active_grammar/exec/linux/*'\n          unzip -o $WHEEL_FILE 'kaldi_active_grammar.libs/*'\n          ls -l kaldi_active_grammar/exec/linux/ kaldi_active_grammar.libs/\n          readelf -d kaldi_active_grammar/exec/linux/libkaldi-dragonfly.so | egrep 'NEEDED|RUNPATH|RPATH'\n\n      - name: Save cached native binaries\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        uses: actions/cache/save@v4\n        with:\n          key: ${{ steps.cache-native-binaries-restore.outputs.cache-primary-key }}\n          path: |\n            kaldi_active_grammar/exec/linux\n            kaldi_active_grammar.libs\n\n      - name: Upload native 
binaries to artifacts\n        uses: actions/upload-artifact@v4\n        with:\n          name: native-linux\n          path: |\n            kaldi_active_grammar/exec/linux\n            kaldi_active_grammar.libs\n\n      - name: Upload Linux wheels\n        uses: actions/upload-artifact@v4\n        with:\n          name: wheels-linux\n          path: wheelhouse/*\n\n      - name: Examine results\n        run: |\n          shopt -s nullglob\n          for whl in wheelhouse/*.whl; do\n            echo \"::notice title=Built wheel::$(basename $whl)\"\n            unzip -l $whl\n          done\n\n  build-windows:\n    runs-on: windows-2025\n    if: true\n    env:\n      VS_VERSION: vs2022\n      PLATFORM_TOOLSET: v143\n      WINDOWS_TARGET_PLATFORM_VERSION: 10.0\n      MKL_VERSION: 2025.1.0\n    defaults:\n      run:\n        shell: bash\n    steps:\n\n      - name: Checkout main repository\n        uses: actions/checkout@v5\n        with:\n          path: main\n\n      - name: Get KALDI_BRANCH (kag-$TAG tag if commit is tagged; current branch name if not)\n        id: get-kaldi-branch\n        working-directory: main\n        run: |\n          # Fetch tags on the one fetched commit (shallow clone)\n          git fetch --depth=1 origin \"+refs/tags/*:refs/tags/*\"\n          export TAG=$(git tag --points-at HEAD)\n          echo \"TAG: $TAG\"\n          if [[ $TAG ]]; then\n            echo \"KALDI_BRANCH: kag-$TAG\"\n            echo \"KALDI_BRANCH=kag-$TAG\" >> $GITHUB_ENV\n            echo \"KALDI_BRANCH=kag-$TAG\" >> $GITHUB_OUTPUT\n          else\n            echo \"KALDI_BRANCH: ${GITHUB_REF/refs\\/heads\\//}\"\n            echo \"KALDI_BRANCH=${GITHUB_REF/refs\\/heads\\//}\" >> $GITHUB_ENV\n            echo \"KALDI_BRANCH=${GITHUB_REF/refs\\/heads\\//}\" >> $GITHUB_OUTPUT\n          fi\n\n      - name: Get Kaldi commit hash\n        id: get-kaldi-commit\n        run: |\n          KALDI_COMMIT=$(git ls-remote 
https://github.com/daanzu/kaldi-fork-active-grammar.git $KALDI_BRANCH | cut -f1)\n          echo \"KALDI_COMMIT: $KALDI_COMMIT\"\n          echo \"KALDI_COMMIT=$KALDI_COMMIT\" >> $GITHUB_OUTPUT\n\n      - name: Restore cached native binaries\n        id: cache-native-binaries-restore\n        uses: actions/cache/restore@v4\n        with:\n          key: native-${{ runner.os }}-${{ steps.get-kaldi-commit.outputs.KALDI_COMMIT }}-${{ env.VS_VERSION }}-${{ env.PLATFORM_TOOLSET }}-${{ env.WINDOWS_TARGET_PLATFORM_VERSION }}-${{ env.MKL_VERSION }}-v1\n          path: main/kaldi_active_grammar/exec/windows\n\n      - name: Checkout OpenFST repository\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        uses: actions/checkout@v5\n        with:\n          repository: daanzu/openfst\n          path: openfst\n\n      - name: Checkout Kaldi repository\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        uses: actions/checkout@v5\n        with:\n          repository: daanzu/kaldi-fork-active-grammar\n          path: kaldi\n          ref: ${{ steps.get-kaldi-branch.outputs.KALDI_BRANCH }}\n\n      - name: Gather system information\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        run: |\n          echo $GITHUB_WORKSPACE\n          df -h\n          echo \"Windows SDK Versions:\"\n          ls -l '/c/Program Files (x86)/Windows Kits/10/Include/'\n          echo \"Visual Studio Redistributables:\"\n          # ls -l '/c/Program Files (x86)/Microsoft Visual Studio/2022/Enterprise/VC/Redist/MSVC/'\n          # ls -l '/c/Program Files (x86)/Microsoft Visual Studio/2022/Enterprise/VC/Redist/MSVC/14.26.28720'\n          # ls -l '/c/Program Files (x86)/Microsoft Visual Studio/2022/Enterprise/VC/Redist/MSVC/v142'\n          # ls -l '/c/Program Files (x86)/Microsoft Visual Studio/2022/Enterprise/VC/Redist/MSVC/14.16.27012/x64/Microsoft.VC141.CRT'\n          # ls -l '/c/Program Files 
(x86)/Microsoft Visual Studio/2022/Enterprise/VC/Redist/MSVC/'*/x64/Microsoft.*.CRT\n          # ls -lR /c/Program\\ Files\\ \\(x86\\)/Microsoft Visual Studio/2022/Enterprise/VC/Redist/MSVC/\n          # ls -lR '/c/Program Files (x86)/Microsoft Visual Studio/'2022/Enterprise/VC/Redist/MSVC/\n          vswhere\n          vswhere -find 'VC\\Redist\\**\\VC_redist.x64.exe'\n\n      - name: Setup Kaldi build configuration\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        run: |\n          cd kaldi/windows\n          cp kaldiwin_mkl.props kaldiwin.props\n          cp variables.props.dev variables.props\n          # Set openfst location\n          perl -pi -e 's/<OPENFST>.*<\\/OPENFST>/<OPENFST>$ENV{GITHUB_WORKSPACE}\\\\openfst<\\/OPENFST>/g' variables.props\n          perl -pi -e 's/<OPENFSTLIB>.*<\\/OPENFSTLIB>/<OPENFSTLIB>$ENV{GITHUB_WORKSPACE}\\\\openfst\\\\build_output<\\/OPENFSTLIB>/g' variables.props\n          perl generate_solution.pl --vsver ${VS_VERSION} --enable-mkl --noportaudio\n          # Add additional libfstscript library to dragonfly build file\n          sed -i.bak '$i\\\n            <ItemDefinitionGroup>\\\n              <Link>\\\n                <AdditionalDependencies>libfstscript.lib;%(AdditionalDependencies)</AdditionalDependencies>\\\n              </Link>\\\n            </ItemDefinitionGroup>' ../kaldiwin_${VS_VERSION}_MKL/kaldiwin/kaldi-dragonfly/kaldi-dragonfly.vcxproj\n          perl get_version.pl\n\n      - name: Add msbuild to PATH\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        uses: microsoft/setup-msbuild@v2\n\n      - name: Install MKL\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        run: winget install --id=Intel.oneMKL -v \"${MKL_VERSION}\" -e --accept-package-agreements --accept-source-agreements --disable-interactivity\n\n      - name: Build OpenFST\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 
'true'\n        shell: cmd\n        run: msbuild -t:Build -p:Configuration=Release -p:Platform=x64 -p:PlatformToolset=%PLATFORM_TOOLSET% -maxCpuCount -verbosity:minimal openfst/openfst.sln\n\n      - name: Build Kaldi\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        shell: cmd\n        run: msbuild -t:Build -p:Configuration=Release -p:Platform=x64 -p:PlatformToolset=%PLATFORM_TOOLSET% -p:WindowsTargetPlatformVersion=%WINDOWS_TARGET_PLATFORM_VERSION% -maxCpuCount -verbosity:minimal kaldi/kaldiwin_%VS_VERSION%_MKL/kaldiwin/kaldi-dragonfly/kaldi-dragonfly.vcxproj\n\n      - name: Copy native binaries\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        run: |\n          mkdir -p main/kaldi_active_grammar/exec/windows\n          cp kaldi/kaldiwin_${VS_VERSION}_MKL/kaldiwin/kaldi-dragonfly/x64/Release/kaldi-dragonfly.dll main/kaldi_active_grammar/exec/windows/\n\n      - name: Save cached native binaries\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        uses: actions/cache/save@v4\n        with:\n          key: ${{ steps.cache-native-binaries-restore.outputs.cache-primary-key }}\n          path: main/kaldi_active_grammar/exec/windows\n\n      - name: Upload native binaries to artifacts\n        uses: actions/upload-artifact@v4\n        with:\n          name: native-windows\n          path: main/kaldi_active_grammar/exec/windows\n\n      - name: Build Python wheel\n        working-directory: main\n        run: |\n          python -m pip -V\n          python -m pip install --upgrade setuptools wheel\n          env KALDIAG_BUILD_SKIP_NATIVE=1 python setup.py bdist_wheel\n          ls -l dist/\n\n      - name: Upload Windows wheels\n        uses: actions/upload-artifact@v4\n        with:\n          name: wheels-windows\n          path: main/dist/*\n\n      - name: Examine results\n        run: |\n          shopt -s nullglob\n          for whl in main/dist/*.whl; do\n    
        echo \"::notice title=Built wheel::$(basename $whl)\"\n            unzip -l $whl\n          done\n\n      # - name: Copy Windows vc_redist\n      #   run: |\n      #     mkdir -p vc_redist\n      #     cp '/c/Program Files (x86)/Microsoft Visual Studio/2022/Enterprise/VC/Redist/MSVC/14.26.28720'/vc_redist.x64.exe vc_redist/\n      #     cp '/c/Program Files (x86)/Microsoft Visual Studio/2022/Enterprise/VC/Redist/MSVC/14.26.28720'/x64/Microsoft.*.CRT/* vc_redist/\n      # - uses: actions/upload-artifact@v4\n      #   with:\n      #     name: windows_vc_redist\n      #     path: vc_redist/*\n\n  build-macos-arm:\n    runs-on: macos-15\n    if: true\n    env:\n      MACOSX_DEPLOYMENT_TARGET: \"11.0\"\n      MKL_URL: \"\"\n      # MKL_URL: https://registrationcenter-download.intel.com/akdlm/irc_nas/tec/17172/m_mkl_2020.4.301.dmg\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v5\n\n      - name: Get KALDI_BRANCH (kag-$TAG tag if commit is tagged; current branch name if not)\n        id: get-kaldi-branch\n        run: |\n          # Fetch tags on the one fetched commit (shallow clone)\n          git fetch --depth=1 origin \"+refs/tags/*:refs/tags/*\"\n          export TAG=$(git tag --points-at HEAD)\n          echo \"TAG: $TAG\"\n          if [[ $TAG ]]; then\n            echo \"KALDI_BRANCH: kag-$TAG\"\n            echo \"KALDI_BRANCH=kag-$TAG\" >> $GITHUB_ENV\n            echo \"KALDI_BRANCH=kag-$TAG\" >> $GITHUB_OUTPUT\n          else\n            echo \"KALDI_BRANCH: ${GITHUB_REF/refs\\/heads\\//}\"\n            echo \"KALDI_BRANCH=${GITHUB_REF/refs\\/heads\\//}\" >> $GITHUB_ENV\n            echo \"KALDI_BRANCH=${GITHUB_REF/refs\\/heads\\//}\" >> $GITHUB_OUTPUT\n          fi\n\n      - name: Get Kaldi commit hash\n        id: get-kaldi-commit\n        run: |\n          KALDI_COMMIT=$(git ls-remote https://github.com/daanzu/kaldi-fork-active-grammar.git $KALDI_BRANCH | cut -f1)\n          echo \"KALDI_COMMIT: $KALDI_COMMIT\"\n   
       echo \"KALDI_COMMIT=$KALDI_COMMIT\" >> $GITHUB_OUTPUT\n\n      - name: Restore cached native binaries\n        id: cache-native-binaries-restore\n        uses: actions/cache/restore@v4\n        with:\n          key: native-${{ runner.os }}-arm-${{ steps.get-kaldi-commit.outputs.KALDI_COMMIT }}-${{ env.MACOSX_DEPLOYMENT_TARGET }}-${{ env.MKL_URL }}-v1\n          path: kaldi_active_grammar/exec/macos\n\n      - name: Install MKL (if enabled)\n        if: ${{ env.MKL_URL != '' && steps.cache-native-binaries-restore.outputs.cache-hit != 'true' }}\n        run: |\n          echo \"Installing MKL from: $MKL_URL\"\n          export MKL_FILE=${MKL_URL##*/}\n          export MKL_FILE=${MKL_FILE%\\.dmg}\n          wget --no-verbose $MKL_URL\n          hdiutil attach ${MKL_FILE}.dmg\n          cp /Volumes/${MKL_FILE}/${MKL_FILE}.app/Contents/MacOS/silent.cfg .\n          sed -i.bak -e 's/decline/accept/g' silent.cfg\n          sudo /Volumes/${MKL_FILE}/${MKL_FILE}.app/Contents/MacOS/install.sh --silent silent.cfg\n\n      - name: Install dependencies for native build\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        run: |\n          python3 -m pip install --break-system-packages --user --upgrade scikit-build>=0.10.0 cmake ninja\n          brew install automake sox libtool\n          brew reinstall gfortran  # For openblas\n          # brew install autoconf\n\n      - name: Install dependencies for python build\n        run: |\n          python3 -m pip install --break-system-packages --user --upgrade setuptools wheel delocate\n\n      - name: Build Python wheel\n        run: |\n          shopt -s nullglob\n          echo \"KALDI_BRANCH: $KALDI_BRANCH\"\n          echo \"MKL_URL: $MKL_URL\"\n          ${{ steps.cache-native-binaries-restore.outputs.cache-hit == 'true' && 'KALDIAG_BUILD_SKIP_NATIVE=1' || '' }} python3 setup.py bdist_wheel\n          ls -l dist/\n          for whl in dist/*.whl; do\n            unzip -l $whl\n          
done\n\n      - name: Repair wheel with delocate\n        run: |\n          shopt -s nullglob\n          for whl in dist/*.whl; do\n            echo \"Examining wheel before delocate: $whl\"\n            python3 -m delocate.cmd.delocate_listdeps -d $whl\n            echo \"Repairing wheel: $whl\"\n            python3 -m delocate.cmd.delocate_wheel -v -w wheelhouse -L exec/macos/libs --require-archs arm64 $whl\n          done\n          # NOTE: This also downgrades the required MacOS version to the minimum possible\n          ls -l wheelhouse/\n          for whl in wheelhouse/*.whl; do\n            echo \"Examining repaired wheel: $whl\"\n            python3 -m delocate.cmd.delocate_listdeps -d $whl\n          done\n\n      - name: Extract native binaries from wheel after delocate repair, to save to cache\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        # We do this rather than manually caching all of the various kaldi/openfst libraries in their build locations\n        run: |\n          shopt -s nullglob\n          # Assert there is only one wheel\n          WHEEL_COUNT=$(ls wheelhouse/*.whl | wc -l)\n          if [ \"$WHEEL_COUNT\" -ne 1 ]; then\n            echo \"Error: Expected exactly 1 wheel, found $WHEEL_COUNT\"\n            ls -l wheelhouse/\n            exit 1\n          fi\n          WHEEL_FILE=$(ls wheelhouse/*.whl)\n          echo \"Extracting from wheel: $WHEEL_FILE\"\n          unzip -o $WHEEL_FILE 'kaldi_active_grammar/exec/macos/*'\n          ls -l kaldi_active_grammar/exec/macos/\n          otool -l kaldi_active_grammar/exec/macos/libkaldi-dragonfly.dylib | egrep -A2 'LC_RPATH|cmd LC_LOAD_DYLIB'\n          otool -L kaldi_active_grammar/exec/macos/libkaldi-dragonfly.dylib\n          lipo -archs kaldi_active_grammar/exec/macos/libkaldi-dragonfly.dylib\n\n      - name: Save cached native binaries\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        uses: actions/cache/save@v4\n      
  with:\n          key: ${{ steps.cache-native-binaries-restore.outputs.cache-primary-key }}\n          path: kaldi_active_grammar/exec/macos\n\n      - name: Upload native binaries to artifacts\n        uses: actions/upload-artifact@v4\n        with:\n          name: native-macos-arm\n          path: kaldi_active_grammar/exec/macos\n\n      - name: Upload MacOS ARM wheels\n        uses: actions/upload-artifact@v4\n        with:\n          name: wheels-macos-arm\n          path: wheelhouse/*\n\n      - name: Examine results\n        run: |\n          shopt -s nullglob\n          for whl in wheelhouse/*.whl; do\n            echo \"::notice title=Built wheel::$(basename $whl)\"\n            unzip -l $whl\n          done\n\n  build-macos-intel:\n    runs-on: macos-15-intel\n    if: true\n    env:\n      MACOSX_DEPLOYMENT_TARGET: \"10.9\"\n      MKL_URL: \"\"\n      # MKL_URL: https://registrationcenter-download.intel.com/akdlm/irc_nas/tec/17172/m_mkl_2020.4.301.dmg\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v5\n\n      - name: Get KALDI_BRANCH (kag-$TAG tag if commit is tagged; current branch name if not)\n        id: get-kaldi-branch\n        run: |\n          # Fetch tags on the one fetched commit (shallow clone)\n          git fetch --depth=1 origin \"+refs/tags/*:refs/tags/*\"\n          export TAG=$(git tag --points-at HEAD)\n          echo \"TAG: $TAG\"\n          if [[ $TAG ]]; then\n            echo \"KALDI_BRANCH: kag-$TAG\"\n            echo \"KALDI_BRANCH=kag-$TAG\" >> $GITHUB_ENV\n            echo \"KALDI_BRANCH=kag-$TAG\" >> $GITHUB_OUTPUT\n          else\n            echo \"KALDI_BRANCH: ${GITHUB_REF/refs\\/heads\\//}\"\n            echo \"KALDI_BRANCH=${GITHUB_REF/refs\\/heads\\//}\" >> $GITHUB_ENV\n            echo \"KALDI_BRANCH=${GITHUB_REF/refs\\/heads\\//}\" >> $GITHUB_OUTPUT\n          fi\n\n      - name: Get Kaldi commit hash\n        id: get-kaldi-commit\n        run: |\n          KALDI_COMMIT=$(git ls-remote 
https://github.com/daanzu/kaldi-fork-active-grammar.git $KALDI_BRANCH | cut -f1)\n          echo \"KALDI_COMMIT: $KALDI_COMMIT\"\n          echo \"KALDI_COMMIT=$KALDI_COMMIT\" >> $GITHUB_OUTPUT\n\n      - name: Restore cached native binaries\n        id: cache-native-binaries-restore\n        uses: actions/cache/restore@v4\n        with:\n          key: native-${{ runner.os }}-intel-${{ steps.get-kaldi-commit.outputs.KALDI_COMMIT }}-${{ env.MACOSX_DEPLOYMENT_TARGET }}-${{ env.MKL_URL }}-v1\n          path: kaldi_active_grammar/exec/macos\n\n      - name: Install MKL (if enabled)\n        if: ${{ env.MKL_URL != '' && steps.cache-native-binaries-restore.outputs.cache-hit != 'true' }}\n        run: |\n          echo \"Installing MKL from: $MKL_URL\"\n          export MKL_FILE=${MKL_URL##*/}\n          export MKL_FILE=${MKL_FILE%\\.dmg}\n          wget --no-verbose $MKL_URL\n          hdiutil attach ${MKL_FILE}.dmg\n          cp /Volumes/${MKL_FILE}/${MKL_FILE}.app/Contents/MacOS/silent.cfg .\n          sed -i.bak -e 's/decline/accept/g' silent.cfg\n          sudo /Volumes/${MKL_FILE}/${MKL_FILE}.app/Contents/MacOS/install.sh --silent silent.cfg\n\n      - name: Install dependencies for native build\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        run: |\n          python3 -m pip install --break-system-packages --user --upgrade scikit-build>=0.10.0 cmake ninja\n          brew install automake sox\n          brew reinstall gfortran  # For openblas\n          # brew install autoconf libtool\n\n      - name: Install dependencies for python build\n        run: |\n          python3 -m pip install --break-system-packages --user --upgrade setuptools wheel delocate\n\n      - name: Build Python wheel\n        run: |\n          shopt -s nullglob\n          echo \"KALDI_BRANCH: $KALDI_BRANCH\"\n          echo \"MKL_URL: $MKL_URL\"\n          ${{ steps.cache-native-binaries-restore.outputs.cache-hit == 'true' && 'KALDIAG_BUILD_SKIP_NATIVE=1' 
|| '' }} python3 setup.py bdist_wheel\n          ls -l dist/\n          for whl in dist/*.whl; do\n            unzip -l $whl\n          done\n\n      - name: Repair wheel with delocate\n        run: |\n          shopt -s nullglob\n          for whl in dist/*.whl; do\n            echo \"Examining wheel before delocate: $whl\"\n            python3 -m delocate.cmd.delocate_listdeps -d $whl\n            echo \"Repairing wheel: $whl\"\n            python3 -m delocate.cmd.delocate_wheel -v -w wheelhouse -L exec/macos/libs --require-archs x86_64 $whl\n          done\n          # NOTE: This also downgrades the required MacOS version to the minimum possible\n          ls -l wheelhouse/\n          for whl in wheelhouse/*.whl; do\n            echo \"Examining repaired wheel: $whl\"\n            python3 -m delocate.cmd.delocate_listdeps -d $whl\n          done\n\n      - name: Extract native binaries from wheel after delocate repair, to save to cache\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        # We do this rather than manually caching all of the various kaldi/openfst libraries in their build locations\n        run: |\n          shopt -s nullglob\n          # Assert there is only one wheel\n          WHEEL_COUNT=$(ls wheelhouse/*.whl | wc -l)\n          if [ \"$WHEEL_COUNT\" -ne 1 ]; then\n            echo \"Error: Expected exactly 1 wheel, found $WHEEL_COUNT\"\n            ls -l wheelhouse/\n            exit 1\n          fi\n          WHEEL_FILE=$(ls wheelhouse/*.whl)\n          echo \"Extracting from wheel: $WHEEL_FILE\"\n          unzip -o $WHEEL_FILE 'kaldi_active_grammar/exec/macos/*'\n          ls -l kaldi_active_grammar/exec/macos/\n          otool -l kaldi_active_grammar/exec/macos/libkaldi-dragonfly.dylib | egrep -A2 'LC_RPATH|cmd LC_LOAD_DYLIB'\n          otool -L kaldi_active_grammar/exec/macos/libkaldi-dragonfly.dylib\n          lipo -archs kaldi_active_grammar/exec/macos/libkaldi-dragonfly.dylib\n\n      - name: Save cached 
native binaries\n        if: steps.cache-native-binaries-restore.outputs.cache-hit != 'true'\n        uses: actions/cache/save@v4\n        with:\n          key: ${{ steps.cache-native-binaries-restore.outputs.cache-primary-key }}\n          path: kaldi_active_grammar/exec/macos\n\n      - name: Upload native binaries to artifacts\n        uses: actions/upload-artifact@v4\n        with:\n          name: native-macos-intel\n          path: kaldi_active_grammar/exec/macos\n\n      - name: Upload MacOS Intel wheels\n        uses: actions/upload-artifact@v4\n        with:\n          name: wheels-macos-intel\n          path: wheelhouse/*\n\n      - name: Examine results\n        run: |\n          shopt -s nullglob\n          for whl in wheelhouse/*.whl; do\n            echo \"::notice title=Built wheel::$(basename $whl)\"\n            unzip -l $whl\n          done\n\n  merge-wheels:\n    runs-on: ubuntu-latest\n    needs: [\n      build-linux,\n      build-windows,\n      build-macos-arm,\n      build-macos-intel,\n    ]\n    steps:\n      - name: Merge all wheel artifacts\n        uses: actions/upload-artifact/merge@v4\n        with:\n          name: wheels\n          pattern: wheels-*\n          delete-merged: false  # Optional: if true, deletes the individual artifacts after merging\n\n  setup-tests-cache:\n    runs-on: ${{ matrix.os }}\n    strategy:\n      matrix:\n        os: [ubuntu-latest, windows-latest]\n    steps:\n      - uses: actions/checkout@v5\n\n      - name: Install just\n        uses: taiki-e/install-action@just\n\n      - name: Install uv\n        uses: astral-sh/setup-uv@v6\n\n      - name: Restore cached tests setup\n        id: cache-tests-setup-restore\n        uses: actions/cache/restore@v4\n        with:\n          key: tests-setup-${{ hashFiles('Justfile') }}-v1\n          path: |\n            tests/*.onnx\n            tests/*.onnx.json\n            tests/kaldi_model\n\n      - name: Setup tests\n        if: steps.cache-tests-setup-restore.outputs.cache-hit != 
'true'\n        run: |\n          just setup-tests\n\n      - name: Save cached tests setup\n        if: steps.cache-tests-setup-restore.outputs.cache-hit != 'true'\n        uses: actions/cache/save@v4\n        with:\n          key: ${{ steps.cache-tests-setup-restore.outputs.cache-primary-key }}\n          path: |\n            tests/*.onnx\n            tests/*.onnx.json\n            tests/kaldi_model\n\n  test-wheels:\n    runs-on: ${{ matrix.os }}\n    needs:\n      - build-linux\n      - build-windows\n      - build-macos-arm\n      - build-macos-intel\n      - setup-tests-cache\n    if: ${{ always() && true }}  # Run even if some build jobs failed\n    defaults:\n      run:\n        shell: bash\n    strategy:\n      fail-fast: false\n      matrix:\n        # https://docs.github.com/en/actions/reference/runners/github-hosted-runners#standard-github-hosted-runners-for-public-repositories\n        # https://github.com/actions/runner-images\n        os: [ubuntu-22.04, ubuntu-24.04, windows-2022, windows-2025, macos-14, macos-15, macos-15-intel, macos-26]\n        # Status of Python versions (https://devguide.python.org/versions/)\n        python-version: [\"3.9\", \"3.10\", \"3.11\", \"3.12\", \"3.13\"]  # 3.14 doesn't have piper-tts wheels yet\n    steps:\n      - uses: actions/checkout@v5\n\n      - name: Install just\n        uses: taiki-e/install-action@just\n\n      - name: Install uv and set the python version\n        uses: astral-sh/setup-uv@v6\n        with:\n          python-version: ${{ matrix.python-version }}\n\n      - name: Set artifact name\n        id: artifact-name\n        run: |\n          case \"${{ matrix.os }}\" in\n            ubuntu-*) echo \"name=wheels-linux\" >> $GITHUB_OUTPUT ;;\n            windows-*) echo \"name=wheels-windows\" >> $GITHUB_OUTPUT ;;\n            macos-15-intel) echo \"name=wheels-macos-intel\" >> $GITHUB_OUTPUT ;;\n            macos-*) echo \"name=wheels-macos-arm\" >> $GITHUB_OUTPUT ;;\n            *) echo 
\"Unexpected OS: ${{ matrix.os }}\" 1>&2; exit 1 ;;\n          esac\n\n      - name: Download wheel artifacts\n        uses: actions/download-artifact@v4\n        with:\n          name: ${{ steps.artifact-name.outputs.name }}\n          path: wheels/\n        continue-on-error: true  # Continue even if no artifact found\n\n      - name: Check wheels presence\n        id: wheels-presence\n        run: |\n          shopt -s nullglob\n          files=(wheels/*.whl)\n          if (( ${#files[@]} > 0 )); then\n            echo \"found=true\" >> $GITHUB_OUTPUT\n            ls -l wheels/\n          else\n            echo \"found=false\" >> $GITHUB_OUTPUT\n            echo \"No wheel artifacts found for ${{ matrix.os }}\" 1>&2\n          fi\n\n      - name: Restore cached tests setup\n        uses: actions/cache/restore@v4\n        with:\n          key: tests-setup-${{ hashFiles('Justfile') }}-v1\n          path: |\n            tests/*.onnx\n            tests/*.onnx.json\n            tests/kaldi_model\n          fail-on-cache-miss: true\n\n      - name: Run tests\n        if: steps.wheels-presence.outputs.found == 'true'\n        # Must ignore warnings for piper-tts on Github Actions Windows runners\n        run: |\n          # just test-package -v -W 'ignore:Unsupported Windows version (2022server). ONNX Runtime supports Windows 10 and above, only.:UserWarning' -W 'ignore:Unsupported Windows version (2025server). ONNX Runtime supports Windows 10 and above, only.:UserWarning'\n          just test-package-separately -W 'ignore:Unsupported Windows version (2022server). ONNX Runtime supports Windows 10 and above, only.:UserWarning' -W 'ignore:Unsupported Windows version (2025server). ONNX Runtime supports Windows 10 and above, only.:UserWarning'\n"
  },
  {
    "path": ".gitignore",
    "content": "_cmake_test_compile/\nkaldi_active_grammar/exec\nexamples/*.fst\nportable/\ntests/*.onnx\ntests/*.onnx.json\ntests/**/*.wav\ntmp/\nwheels\n*/kaldi_model*/\n*/kaldi_model*.zip\n\n# general things to ignore\nvenv*/\nbuild/\n_skbuild/\ndist/\nwheelhouse/\nwheels/\n*.egg-info/\n*.egg\n*.py[cod]\n__pycache__/\n*.so\n*~\n.vscode/\npip-wheel-metadata/\n\n# due to using tox and pytest\n.tox\n.cache\n.coverage\n"
  },
  {
    "path": "AGENTS.md",
    "content": "# Kaldi Active Grammar - Agent Information\n\nThis document provides technical architectural information for AI coding agents (or humans!) working with the kaldi-active-grammar project.\n\nWARNING: This file may be auto-generated and/or out of date!\n\n## Project Overview\n\n**Kaldi Active Grammar** is a Python package that enables context-based command and control using the Kaldi automatic speech recognition engine with dynamically manageable grammars.\n\n### Key Technologies\n\n- **Speech Recognition**: Kaldi ASR engine\n- **Language**: Python 3.6+\n- **Supported Platforms**: Windows, Linux, macOS (64-bit)\n- **Primary Integration**: Dragonfly speech recognition framework\n- **Model Architecture**: Kaldi nnet3 chain models\n\n### Version Information\n\n- **Current Version**: See [`kaldi_active_grammar/__init__.py:8`](kaldi_active_grammar/__init__.py:8)\n- **Required Model Version**: See [`kaldi_active_grammar/__init__.py:10`](kaldi_active_grammar/__init__.py:10)\n- **Version History**: See [`CHANGELOG.md`](CHANGELOG.md:1)\n\n## Core Modules\n\n### Compiler (`kaldi_active_grammar/compiler.py`)\n\nThe **Compiler** module is responsible for compiling grammar rules into FST (Finite State Transducer) format for use by the Kaldi decoder.\n\n**Key Classes:**\n- `Compiler`: Main compilation engine that manages grammar compilation and FST generation\n- `KaldiRule`: Represents an individual grammar rule with associated FST representation\n\n**Responsibilities:**\n- Grammar-to-FST compilation\n- Rule caching and management\n- Dynamic grammar loading/unloading\n- Lexicon handling and pronunciation generation\n\n### Model (`kaldi_active_grammar/model.py`)\n\nThe **Model** module manages the Kaldi acoustic model and lexicon operations.\n\n**Key Classes:**\n- `Lexicon`: Manages phoneme sets and phoneme conversion (CMU to XSAMPA)\n- `Model`: Orchestrates the acoustic model and lexicon\n\n**Responsibilities:**\n- Loading and validating Kaldi nnet3 chain models\n- 
Lexicon management (CMU, XSAMPA phoneme sets)\n- Word pronunciation generation (local via g2p_en or online)\n- Model version verification\n\n### Wrapper (`kaldi_active_grammar/wrapper.py`)\n\nThe **Wrapper** module provides the FFI (Foreign Function Interface) to native Kaldi binaries.\n\n**Key Classes:**\n- `KaldiAgfNNet3Decoder`: Main decoder for active grammar FSTs\n- `KaldiLafNNet3Decoder`: Alternative LAF (Linear Alignment Filter) decoder\n- `KaldiPlainNNet3Decoder`: Decoder for plain dictation\n\n**Responsibilities:**\n- Native library binding\n- Audio decoding\n- Hypothesis generation and lattice manipulation\n\n### WFST (Weighted Finite State Transducer) (`kaldi_active_grammar/wfst.py`)\n\nThe **WFST** module handles FST representation and manipulation.\n\n**Key Classes:**\n- `WFST`: Python-based FST implementation\n- `NativeWFST`: Native (C++-based) FST wrapper\n- `SymbolTable`: Maps symbols to numeric IDs for FST operations\n\n**Responsibilities:**\n- FST construction and modification\n- Symbol table management\n- FST serialization and caching\n\n### Plain Dictation (`kaldi_active_grammar/plain_dictation.py`)\n\nThe **PlainDictationRecognizer** module provides simple dictation recognition without grammar rules.\n\n**Features:**\n- Works with standard Kaldi HCLG.fst files\n- Fallback option for dictation-only use cases\n- Compatible with both pre-trained models and custom models\n\n### Utilities (`kaldi_active_grammar/utils.py`)\n\nUtility functions for:\n- File discovery and path handling\n- Symbol table loading\n- External process management\n- Cross-platform compatibility\n\n## Architecture Overview\n\n```\n┌─────────────────────────────────────────┐\n│     Dragonfly / User Application        │\n└────────────────┬────────────────────────┘\n                 │\n┌─────────────────▼────────────────────────┐\n│  Compiler (Grammar Rules → FSTs)         │\n├──────────────────────────────────────────┤\n│ • Grammar compilation                    │\n│ • FST 
generation & caching               │\n│ • Rule management                        │\n└────────────────┬────────────────────────┘\n                 │\n┌─────────────────▼────────────────────────┐\n│  Model (Acoustic Model + Lexicon)        │\n├──────────────────────────────────────────┤\n│ • Kaldi nnet3 chain model loading        │\n│ • Pronunciation generation               │\n│ • Lexicon management                     │\n└────────────────┬────────────────────────┘\n                 │\n┌─────────────────▼────────────────────────┐\n│  Wrapper (FFI to Native Kaldi)           │\n├──────────────────────────────────────────┤\n│ • KaldiAgfNNet3Decoder                   │\n│ • KaldiLafNNet3Decoder                   │\n│ • Audio decoding & lattice generation    │\n└────────────────┬────────────────────────┘\n                 │\n┌─────────────────▼────────────────────────┐\n│  Native Kaldi Binaries (C++)             │\n├──────────────────────────────────────────┤\n│ • Acoustic model decoding                │\n│ • FST operations                         │\n│ • Lattice operations                     │\n└──────────────────────────────────────────┘\n```\n\n## Key Features & Capabilities\n\n### Dynamic Grammar Management\n- Grammars can be marked active/inactive on a per-utterance basis\n- Enables context-aware command recognition\n- Improves accuracy by reducing possible recognitions\n\n### Grammar Compilation\n- Multiple independent grammars with nonterminals\n- Separate compilation and dynamic stitching at decode-time\n- Shared dictation grammar between command grammars\n\n### Performance & Accuracy\n- Context-based activation reduces vocabulary scope\n- Efficient FST-based representation\n- Support for weighted grammar rules\n\n### Dictation Support\n- Integrated dictation grammar\n- Plain dictation interface (HCLG.fst compatible)\n- Pronunciation generation via g2p_en or online service\n\n## Development Integration\n\n### Testing\n- Test suite in `tests/` directory\n- Pytest 
configuration in `pyproject.toml`\n- Coverage reporting can be enabled\n- Integration tests for grammar compilation and decoding\n- Run tests with `just test`\n- To set up a virtual environment for tests: `uv venv && uv pip install -r requirements-test.txt -r requirements-editable.txt`\n\n### Examples\n- `examples/plain_dictation.py`: Plain dictation usage\n- `examples/mix_dictation.py`: Mixed command+dictation\n- `examples/full_example.py`: Comprehensive example\n- `examples/audio.py`: Audio handling utilities\n\n### Build System\n- CMake-based native compilation\n- Scikit-build integration for wheel generation\n- Multi-platform support (Windows/Linux/macOS)\n- GitHub Actions CI/CD pipeline\n\n## System Requirements\n\n- **Python**: 3.6+, 64-bit\n- **RAM**: 1GB+ (model + grammars)\n- **Disk Space**: 1GB+ (model + temporary files)\n- **Model Type**: Kaldi left-biphone nnet3 chain (specific modifications required)\n- **Audio Input**: Microphone or audio file\n\n## Workflow\n\n1. **Model Initialization**: Load Kaldi nnet3 chain model\n2. **Grammar Definition**: Define command/rule grammar\n3. **Compilation**: Compiler converts grammar rules to FSTs\n4. **Activation**: Set which grammars are active for current utterance\n5. **Decoding**: Wrapper processes audio through Kaldi decoder\n6. **Recognition**: Return recognized utterance and associated action\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "# Changelog\n\nAll notable changes to this project will be documented in this file.\nNote that the project (and python wheel) is built from a duorepo (2 separate repos used together), so changes from both will be reflected here, but the commits are spread between both.\n\nThe format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),\nand this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html) since v1.0.0.\n\n<!-- ## [Unreleased] - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v3.2.0...master) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v3.2.0...master) -->\n\n## [3.2.0](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v3.2.0) - 2025-11-02 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v3.1.0...v3.2.0) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v3.1.0...kag-v3.2.0)\n\n### Added\n\n* Comprehensive test suite with 80+ tests covering grammar compilation, plain dictation, and alternative dictation\n* Test infrastructure using pytest with TTS-generated test audio (Piper)\n* `AGENTS.md` documentation for AI coding agents with project architecture and development guidance\n* Exposed `NativeWFST` at package top-level for easier importing\n* Support for testing with multiple platforms and Python versions (3.9-3.13)\n\n### Changed\n\n* **CI/CD Improvements**:\n  * Implemented comprehensive caching of native binaries by commit hash\n  * Added caching of test setup data\n  * Updated build workflow to run on all pushes and PRs\n  * Modified macOS wheel builds to use delocate instead of ad-hoc manual library handling\n  * Improved Linux wheel build with cleaner output and better caching\n  * Updated CI to support latest GitHub Actions runners (Ubuntu 24.04, Windows 2025, macOS 13/15/26)\n  * Moved tests into main build workflow for faster feedback\n  * Added notices for built wheels in 
CI output\n* Relaxed Python package requirements version specifiers for better compatibility\n* Updated setup.py classifiers to include Python 3.11, 3.12, 3.13, 3.14\n* Dropped Python 2 from wheel tag (py3 instead of py2.py3), as Python 2 is no longer supported\n* Improved comments and cleanup in Justfile\n\n### Fixed\n\n* Updated CI workflows to properly handle latest runner environments\n* Fixed Linux build configuration and wrapper script\n* Cleaned up and standardized build processes across all platforms\n\n### Development\n\n* Refactored test structure for better organization and maintainability\n* Added test generators for creating synthetic speech using Piper TTS and Google TTS\n* Added helper utilities for test fixtures and audio generation\n* Improved test coverage for edge cases (empty audio, garbage audio, very short/long audio)\n* Added tests for complex grammar patterns (diamond, cascade, hub-and-spoke, etc.)\n* Added comprehensive alternative dictation tests with mocking\n\n## [3.1.0](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v3.1.0) - 2021-11-24 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v3.0.0...v3.1.0) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v3.0.0...kag-v3.1.0)\n\n### Fixed\n\n* Fix updating of SymbolTable multiple times for new words, so that there is only one instance for a single Model.\n\n### Changed\n\n* Only mark lexicon stale if it was successfully modified.\n* Removed deprecated CLI binaries from Windows build, reducing wheel size by ~65%.\n\n## [3.0.0](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v3.0.0) - 2021-10-31 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v2.1.0...v3.0.0) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v2.1.0...kag-v3.0.0)\n\n### Changed\n\n* Pronunciation generation for lexicon now better supports local mode (using the `g2p_en` package), which is now also 
the default mode. It is also preferred over the online mode (using CMU's web service), which is now disabled by default. See the Setup section of the README for details. The new models now include the data files for `g2p_en`.\n* `PlainDictation` output now discards any silence words from transcript.\n* `lattice_beam` default value reduced from `6.0` to `5.0`, to hopefully avoid occasional errors.\n* Removed deprecated CLI binaries from build for linux/mac.\n\n### Fixed\n\n* Whitespace in the model path is once again handled properly (thanks [@matthewmcintire](https://github.com/matthewmcintire)).\n* `NativeWFST.has_path()` now handles loops.\n* Linux/Mac binaries are now more stripped.\n\n## [2.1.0](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v2.1.0) - 2021-04-04 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v2.0.2...v2.1.0) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v2.0.2...kag-v2.1.0)\n\n### Added\n\n* NativeWFST support for checking for impossible graphs (no successful path), which can then fail to compile.\n* Debugging info for NativeWFST.\n\n### Changed\n\n* `lattice_beam` default value reduced from `8.0` to `6.0`, to hopefully avoid occasional errors.\n\n### Fixed\n\n* Reloading grammars with NativeWFST.\n\n## [2.0.2](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v2.0.2) - 2021-03-30 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v2.0.0...v2.0.2) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v2.0.0...kag-v2.0.2)\n\n### Changed\n\n* Minor fix for OpenBLAS compilation for some architectures on linux/mac\n\n## [2.0.0](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v2.0.0) - 2021-03-21 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v1.8.0...v2.0.0) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v1.8.0...kag-v2.0.0)\n\n### Added\n\n* Native FST 
support, via direct wrapping of OpenFST, rather than Python text-format implementation\n    * Eliminates grammar (G) FST compilation step\n* Internalized many graph construction steps, via direct use of native Kaldi/OpenFST functions, rather than invoking separate CLI processes\n    * Eliminates need for many temporary files (FSTs, `.conf`s, etc) and pipes\n* Example usage for allowing mixing of free dictation with strict command phrases\n* Experimental support for \"look ahead\" graphs, as an alternative to full HCLG compilation\n* Experimental support for rescoring with CARPA LMs\n* Experimental support for rescoring with RNN LMs\n* Experimental support for \"priming\" RNNLM previous left context for each utterance\n\n### Changed\n\n* OpenBLAS is now the default linear algebra library (rather than Intel MKL) on Linux/MacOS\n    * Because it is open source and provides good performance on all hardware (including AMD)\n    * Windows is more difficult for this, and will be implemented soon in a later release\n* Default `tmp_dir` is now set to `[model_dir]/cache.tmp`\n* `tmp_dir` is now optional, and only needed if caching compiled FSTs (or for certain framework/option combinations)\n* File cache is now stored at `[model_dir]/file_cache.json`\n* Optimized adding many new words to the lexicon, in many different grammars, all in one loading session: only rebuild `L_disambig.fst` once at the end.\n* External interfaces: `Compiler.__init__()`, decoding setup, etc.\n* Internal interfaces: wrappers, etc.\n* Major refactoring of C++ components, with a new inheritance hierarchy and configuration mechanism, making it easier to use and test features with and without \"activity\"\n* Many build changes\n\n### Removed\n\n* Python 2.7 support: it may still work, but will not be a focus.\n* Google cloud speech-to-text removed, as an unneeded dependency. 
Alternative dictation is still supported as an option, via a callback to an external provider.\n\n### Deprecated\n\n* Separate CLI Kaldi/OpenFST executables\n* Indirect AGF graph compilation (framework==`agf-indirect`)\n* Non-native FSTs\n* parsing_framework==`text`\n\n## [1.8.0](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v1.8.0) - 2020-09-05 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v1.7.0...v1.8.0) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v1.7.0...kag-v1.8.0)\n\n### Added\n* New speech models (should be better in general, and support new noise resistance)\n* Make failed AGF graph compilation save and output stderr upon failure automatically\n* Example of complete usage with a grammar and microphone audio\n* Various documentation\n\n### Changed\n* Top FST now accepts various noise phones (if present in speech model), making it more resistant to noise\n* Cleanup error handling in compiler, supporting Dragonfly backend automatically printing excerpt of the Rule that failed\n\n### Fixed\n* Mysterious windows newline bug in some environments\n\n## [1.7.0](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v1.7.0) - 2020-08-01 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v1.6.2...v1.7.0) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v1.6.2...kag-v1.7.0)\n\n### Added\n* Add automatic saving of text FST & compiled FST files with log level 5\n\n### Changed\n* Miscellaneous naming\n\n### Fixed\n* Support compiling some complex grammars (Caster text manipulation), by simplifying during compilation (remove epsilons, and determinize)\n\n## [1.6.2](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v1.6.2) - 2020-07-20 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v1.6.1...v1.6.2) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v1.6.1...kag-v1.6.2)\n\n### 
Fixed\n* Add missing rnnlm library file in MacOS build\n\n## [1.6.1](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v1.6.1) - 2020-07-19 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v1.6.0...v1.6.1) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v1.6.0...kag-v1.6.1)\n\n### Changed\n* Windows wheels now only require the VS2017 (not VS2019) redistributables to be installed\n\n## [1.6.0](https://github.com/daanzu/kaldi-active-grammar/releases/tag/v1.6.0) - 2020-07-11 - Changes: [KaldiAG](https://github.com/daanzu/kaldi-active-grammar/compare/v1.5.0...v1.6.0) [KaldiFork](https://github.com/daanzu/kaldi-fork-active-grammar/compare/kag-v1.5.0...kag-v1.6.0)\n\n### Added\n* Can now pass configuration dict to `KaldiAgfNNet3Decoder`, `PlainDictationRecognizer` (without `HCLG.fst`).\n* Continuous Integration builds run on GitHub Actions for Windows (x64), MacOS (x64), Linux (x64).\n\n### Changed\n* Refactor of passing configuration to initialization.\n* `PlainDictationRecognizer.decode_utterance` can take `chunk_size` parameter.\n* Smaller binaries: MacOS 11MB -> 7.6MB, Linux 21MB -> 18MB.\n\n### Fixed\n* Confidence measurement in the presence of multiple, redundant rules.\n* Python3 int division bug for cloud dictation.\n\n## Earlier versions\n\nSee [GitHub releases notes](https://github.com/daanzu/kaldi-active-grammar/releases).\n"
  },
  {
    "path": "CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 3.13.0)\nproject(kaldi_binaries)\n\ninclude(ExternalProject)\ninclude(ProcessorCount)\n\nProcessorCount(NCPU)\nif(NOT NCPU EQUAL 0)\n  set(MAKE_FLAGS -j${NCPU})\nendif()\n\nset(DST ${PROJECT_SOURCE_DIR}/kaldi_active_grammar/exec)\nif (\"${CMAKE_HOST_SYSTEM_NAME}\" STREQUAL \"Darwin\")\n  set(DST ${DST}/macos/)\nelseif(\"${CMAKE_HOST_SYSTEM_NAME}\" STREQUAL \"Linux\")\n  set(DST ${DST}/linux/)\nelse()\n  set(DST ${DST}/windows/)\nendif()\n\nset(BINARIES\n  )\nset(LIBRARIES\n  src/lib/libkaldi-dragonfly${CMAKE_SHARED_LIBRARY_SUFFIX}\n  )\n\nif(\"${CMAKE_HOST_SYSTEM_NAME}\" STREQUAL \"Windows\")\n  message(FATAL_ERROR \"CMake build not supported on Windows\")\n  # FIXME: copy files?\n  # https://cmake.org/cmake/help/latest/command/foreach.html\n  # https://stackoverflow.com/questions/34799916/copy-file-from-source-directory-to-binary-directory-using-cmake\nendif()\n\nfind_program(MAKE_EXE NAMES make gmake nmake)\n\nif(DEFINED ENV{INTEL_MKL_DIR})\n  # Default: INTEL_MKL_DIR=/opt/intel/mkl/\n  message(\"Compiling with MKL in: $ENV{INTEL_MKL_DIR}\")\n  set(KALDI_CONFIG_FLAGS --shared --static-math --use-cuda=no --mathlib=MKL --mkl-root=$ENV{INTEL_MKL_DIR})\n  set(MATHLIB_BUILD_COMMAND true)\nelse()\n  if(NOT DEFINED OPENBLAS_REF)\n    if(\"${CMAKE_HOST_SYSTEM_NAME}\" STREQUAL \"Darwin\")\n      # Need newer version to build on macOS\n      set(OPENBLAS_REF \"v0.3.30\")\n    else()\n      set(OPENBLAS_REF \"v0.3.13\")\n    endif()\n  endif()\n  message(\"Compiling with OpenBLAS git ref: ${OPENBLAS_REF}\")\n  set(KALDI_CONFIG_FLAGS --shared --static-math --use-cuda=no --mathlib=OPENBLAS)\n  set(MATHLIB_BUILD_COMMAND cd tools\n    && git clone -b ${OPENBLAS_REF} --single-branch https://github.com/OpenMathLib/OpenBLAS\n    && ${MAKE_EXE} ${MAKE_FLAGS} -C OpenBLAS DYNAMIC_ARCH=1 TARGET=GENERIC USE_LOCKING=1 USE_THREAD=0 all\n    && ${MAKE_EXE} ${MAKE_FLAGS} -C OpenBLAS PREFIX=install install\n    && cd ..)\nendif()\n\nif(DEFINED 
ENV{KALDI_BRANCH})\n  set(KALDI_BRANCH $ENV{KALDI_BRANCH})\nelse()\n  message(FATAL_ERROR \"KALDI_BRANCH not set! Use 'origin/master'?\")\n  # set(KALDI_BRANCH \"origin/master\")\nendif()\n\nmessage(\"MAKE_EXE                  = ${MAKE_EXE}\")\nmessage(\"PYTHON_EXECUTABLE         = ${PYTHON_EXECUTABLE}\")\nmessage(\"PYTHON_INCLUDE_DIR        = ${PYTHON_INCLUDE_DIR}\")\nmessage(\"PYTHON_LIBRARY            = ${PYTHON_LIBRARY}\")\nmessage(\"PYTHON_VERSION_STRING     = ${PYTHON_VERSION_STRING}\")\nmessage(\"SKBUILD                   = ${SKBUILD}\")\nmessage(\"KALDI_BRANCH              = ${KALDI_BRANCH}\")\nmessage(\"CMAKE_CURRENT_SOURCE_DIR  = ${CMAKE_CURRENT_SOURCE_DIR}\")\nmessage(\"CMAKE_CURRENT_BINARY_DIR  = ${CMAKE_CURRENT_BINARY_DIR}\")\n\n# CXXFLAGS are set and exported in kaldi-configure-wrapper.sh\n\nif(NOT \"${CMAKE_HOST_SYSTEM_NAME}\" STREQUAL \"Windows\")\n  set(STRIP_LIBS_COMMAND find src/lib tools/openfst/lib -name *${CMAKE_SHARED_LIBRARY_SUFFIX} | xargs strip)\n  # set(STRIP_DST_COMMAND find ${DST} [[[other specifiers]]] | xargs strip)\n  if(\"${CMAKE_HOST_SYSTEM_NAME}\" STREQUAL \"Darwin\")\n    list(APPEND STRIP_LIBS_COMMAND -x)\n    # list(APPEND STRIP_DST_COMMAND -x)\n  endif()\n  # set(STRIP_LIBS_COMMAND true)\n  set(STRIP_DST_COMMAND true)\n  ExternalProject_Add(kaldi\n    GIT_CONFIG        advice.detachedHead=false\n    GIT_REPOSITORY    https://github.com/daanzu/kaldi-fork-active-grammar.git\n    GIT_TAG           ${KALDI_BRANCH}\n    GIT_SHALLOW       TRUE\n    CONFIGURE_COMMAND sed -i.bak -e \"s/status=0/exit 0/g\" tools/extras/check_dependencies.sh && sed -i.bak -e \"s/openfst_add_CXXFLAGS = -g -O2/openfst_add_CXXFLAGS = -g0 -O3/g\" tools/Makefile && cp ${PROJECT_SOURCE_DIR}/building/kaldi-configure-wrapper.sh src/\n    BUILD_IN_SOURCE   TRUE\n    BUILD_COMMAND     ${MATHLIB_BUILD_COMMAND} && cd tools && ${MAKE_EXE} && cd openfst && autoreconf && cd ../../src && bash ./kaldi-configure-wrapper.sh ./configure ${KALDI_CONFIG_FLAGS} && ${MAKE_EXE} 
${MAKE_FLAGS} depend && ${MAKE_EXE} ${MAKE_FLAGS} dragonfly\n    LIST_SEPARATOR    \" \"\n    INSTALL_COMMAND   ${STRIP_LIBS_COMMAND} && mkdir -p ${DST} && cp ${BINARIES} ${LIBRARIES} ${DST} && ${STRIP_DST_COMMAND}\n    )\nendif()\n\ninstall(CODE \"MESSAGE(\\\"Installed kaldi engine binaries.\\\")\")\n"
  },
  {
    "path": "Justfile",
    "content": "\nset ignore-comments\nset positional-arguments\n\ndocker_repo := 'daanzu/kaldi-fork-active-grammar-manylinux'\npiper_voice := 'en_US-ryan-low'\nkaldi_model_url := 'https://github.com/daanzu/kaldi-active-grammar/releases/download/v3.0.0/kaldi_model_daanzu_20211030-smalllm.zip'\n\n_default:\n\tjust --list\n\tjust --summary\n\nbuild-linux python='python3':\n\tmkdir -p _skbuild\n\trm -rf kaldi_active_grammar/exec\n\trm -rf _skbuild/*/cmake-build/ _skbuild/*/cmake-install/ _skbuild/*/setuptools/\n\t# {{python}} -m pip install -r requirements-build.txt\n\t# MKL with INTEL_MKL_DIR=/opt/intel/mkl/\n\t{{python}} setup.py bdist_wheel\n\nbuild-dockcross *args='':\n\tbuilding/dockcross-manylinux2010-x64 bash building/build-wheel-dockcross.sh manylinux2010_x86_64 {{args}}\n\nsetup-dockcross:\n\tdocker run --rm dockcross/manylinux2010-x64:20210127-72b83fc > building/dockcross-manylinux2010-x64 && chmod +x building/dockcross-manylinux2010-x64\n\t@# [ ! -e building/dockcross-manylinux2010-x64 ] && docker run --rm dockcross/manylinux2010-x64 > building/dockcross-manylinux2010-x64 && chmod +x building/dockcross-manylinux2010-x64 || true\n\npip-install-develop:\n\tKALDIAG_BUILD_SKIP_NATIVE=1 pip3 install --user -e .\n\n# Setup an editable development environment on linux\nsetup-linux-develop kaldi_root_dir:\n\t# Compile kaldi_root_dir with: env CXXFLAGS=-O2 ./configure --mkl-root=/home/daanzu/intel/mkl/ --shared --static-math\n\tmkdir -p kaldi_active_grammar/exec/linux/\n\tln -sr {{kaldi_root_dir}}/tools/openfst/bin/fstarcsort kaldi_active_grammar/exec/linux/\n\tln -sr {{kaldi_root_dir}}/tools/openfst/bin/fstcompile kaldi_active_grammar/exec/linux/\n\tln -sr {{kaldi_root_dir}}/tools/openfst/bin/fstinfo kaldi_active_grammar/exec/linux/\n\tln -sr {{kaldi_root_dir}}/src/fstbin/fstaddselfloops kaldi_active_grammar/exec/linux/\n\tln -sr {{kaldi_root_dir}}/src/dragonfly/libkaldi-dragonfly.so kaldi_active_grammar/exec/linux/\n\tln -sr 
{{kaldi_root_dir}}/src/dragonflybin/compile-graph-agf kaldi_active_grammar/exec/linux/\n\nwatch-windows-develop config='Release':\n\tbash -c \"watchexec -v --no-ignore -w /mnt/c/Work/Speech/kaldi/kaldi-windows/kaldiwin_vs2019_MKL/x64/ cp /mnt/c/Work/Speech/kaldi/kaldi-windows/kaldiwin_vs2019_MKL/x64/{{config}}/kaldi-dragonfly.dll /mnt/c/Work/Speech/kaldi/kaldi-active-grammar/kaldi_active_grammar/exec/windows/\"\n\ntest-model model_dir:\n\tcd {{invocation_directory()}} && rm -rf kaldi_model kaldi_model.tmp && cp -rp {{model_dir}} kaldi_model\n\ntrigger-build ref='master':\n\tgh workflow run build.yml --ref {{ref}}\n\nsetup-tests:\n\tuv run --no-project --with-requirements requirements-test.txt -m piper.download_voices --debug --download-dir tests/ '{{piper_voice}}'\n\tcd tests && [ ! -e kaldi_model ] && curl -L -C - -o kaldi_model.zip '{{kaldi_model_url}}' && unzip -o kaldi_model.zip || true\n\n# Common args: --lf -k\ntest *args='':\n    uv run --no-project --with-requirements requirements-test.txt --with-requirements requirements-editable.txt -m pytest \"$@\"\n\n# Test package after building wheel into wheels/ directory. Runs tests from within tests/ directory to prevent importing kaldi_active_grammar from source tree\ntest-package *args='':\n\tuv run -v --no-project --isolated --with-requirements ../requirements-test.txt --with kaldi-active-grammar --find-links wheels/ --directory tests/ -m pytest \"$@\"\n\ntest-package-separately *args='':\n\tuv run -v --no-project --isolated --with-requirements ../requirements-test.txt --with kaldi-active-grammar --find-links wheels/ --directory tests/ run_each_test_separately.py \"$@\"\n"
  },
  {
    "path": "LICENSE.txt",
    "content": "                    GNU AFFERO GENERAL PUBLIC LICENSE\n                       Version 3, 19 November 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU Affero General Public License is a free, copyleft license for\nsoftware and other kinds of works, specifically designed to ensure\ncooperation with the community in the case of network server software.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nour General Public Licenses are intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  Developers that use our General Public Licenses protect your rights\nwith two steps: (1) assert copyright on the software, and (2) offer\nyou this License which gives you legal permission to copy, distribute\nand/or modify the software.\n\n  A secondary benefit of defending all users' freedom is that\nimprovements made in alternate versions of the program, if they\nreceive widespread use, become available for other developers to\nincorporate.  Many developers of free software are heartened and\nencouraged by the resulting cooperation.  
However, in the case of\nsoftware used on network servers, this result may fail to come about.\nThe GNU General Public License permits making a modified version and\nletting the public access it on a server without ever releasing its\nsource code to the public.\n\n  The GNU Affero General Public License is designed specifically to\nensure that, in such cases, the modified source code becomes available\nto the community.  It requires the operator of a network server to\nprovide the source code of the modified version running there to the\nusers of that server.  Therefore, public use of a modified version, on\na publicly accessible server, gives the public access to the source\ncode of the modified version.\n\n  An older license, called the Affero General Public License and\npublished by Affero, was designed to accomplish similar goals.  This is\na different license, not a version of the Affero GPL, but Affero has\nreleased a new version of the Affero GPL which permits relicensing under\nthis license.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. Definitions.\n\n  \"This License\" refers to version 3 of the GNU Affero General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  
The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  
\"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  
For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. 
Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  
This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  
If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  
A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  
Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  
You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  
If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  
If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  
For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  
To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  \"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  
You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. 
Remote Network Interaction; Use with the GNU General Public License.\n\n  Notwithstanding any other provision of this License, if you modify the\nProgram, your modified version must prominently offer all users\ninteracting with it remotely through a computer network (if your version\nsupports such interaction) an opportunity to receive the Corresponding\nSource of your version by providing access to the Corresponding Source\nfrom a network server at no charge, through some standard or customary\nmeans of facilitating copying of software.  This Corresponding Source\nshall include the Corresponding Source for any work covered by version 3\nof the GNU General Public License that is incorporated pursuant to the\nfollowing paragraph.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU General Public License into a single\ncombined work, and to convey the resulting work.  The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the work with which it is combined will remain governed by version\n3 of the GNU General Public License.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU Affero General Public License from time to time.  Such new versions\nwill be similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU Affero General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  
If the Program does not specify a version number of the\nGNU Affero General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU Affero General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. 
Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU Affero General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU Affero General Public License for more details.\n\n    You should have received a copy of the GNU Affero General Public License\n    along with this program.  
If not, see <https://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If your software can interact with users remotely through a computer\nnetwork, you should also make sure that it provides a way for users to\nget its source.  For example, if your program is a web application, its\ninterface could display a \"Source\" link that leads users to an archive\nof the code.  There are many ways you could offer source, and different\nsolutions will be better for different programs; see section 13 for the\nspecific requirements.\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU AGPL, see\n<https://www.gnu.org/licenses/>.\n"
  },
  {
    "path": "README.md",
    "content": "# Kaldi Active Grammar\n\n> **Python Kaldi speech recognition with grammars that can be set active/inactive dynamically at decode-time**\n\n> Python package developed to enable context-based command & control of computer applications, as in the [Dragonfly](https://github.com/dictation-toolbox/dragonfly) speech recognition framework, using the [Kaldi](https://github.com/kaldi-asr/kaldi) automatic speech recognition engine.\n\n[![PyPI - Version](https://img.shields.io/pypi/v/kaldi-active-grammar.svg)](https://pypi.python.org/pypi/kaldi-active-grammar/)\n[![PyPI - Wheel](https://img.shields.io/pypi/wheel/kaldi-active-grammar.svg)](https://pypi.python.org/pypi/kaldi-active-grammar/)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/kaldi-active-grammar.svg)](https://pypi.python.org/pypi/kaldi-active-grammar/)\n[![PyPI - Downloads](https://img.shields.io/pypi/dm/kaldi-active-grammar.svg?logo=python)](https://pypi.python.org/pypi/kaldi-active-grammar/)\n[![GitHub - Downloads](https://img.shields.io/github/downloads/daanzu/kaldi-active-grammar/total?logo=github)](https://github.com/daanzu/kaldi-active-grammar/releases)\n<!-- [![GitHub - Downloads](https://img.shields.io/github/downloads/daanzu/kaldi-active-grammar/latest/total?logo=github)](https://github.com/daanzu/kaldi-active-grammar/releases/latest) -->\n\n![Maintenance](https://img.shields.io/maintenance/yes/2026)\n![PyPI - Status](https://img.shields.io/pypi/status/kaldi-active-grammar)\n[![Build](https://github.com/daanzu/kaldi-active-grammar/actions/workflows/build.yml/badge.svg)](https://github.com/daanzu/kaldi-active-grammar/actions/workflows/build.yml)\n[![PyPI - License](https://img.shields.io/pypi/l/kaldi-active-grammar)](https://github.com/daanzu/kaldi-active-grammar?tab=AGPL-3.0-1-ov-file#readme)\n[![Gitter](https://img.shields.io/gitter/room/daanzu/kaldi-active-grammar)](https://app.gitter.im/#/room/#kaldi-active-grammar_community:gitter.im)\n<!-- 
[![Gitter](https://badges.gitter.im/kaldi-active-grammar/community.svg)](https://app.gitter.im/#/room/#kaldi-active-grammar_community:gitter.im) -->\n<!-- [![Gitter](https://badges.gitter.im/kaldi-active-grammar/community.svg)](https://app.gitter.im/#/room/#dragonfly2:matrix.org) -->\n<!-- [![Batteries-Included](https://img.shields.io/badge/batteries-included-green.svg)](https://github.com/daanzu/kaldi-active-grammar/releases) -->\n<!-- ![GitHub Sponsors](https://img.shields.io/github/sponsors/daanzu) -->\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-EA4AAA.svg?logo=githubsponsors)](https://github.com/sponsors/daanzu)\n[![Donate](https://img.shields.io/badge/donate-PayPal-002991.svg?logo=paypal)](https://paypal.me/daanzu)\n[![Donate](https://img.shields.io/badge/donate-GitHub-EA4AAA.svg?logo=githubsponsors)](https://github.com/sponsors/daanzu)\n\nNormally, Kaldi decoding graphs are **monolithic**, require **expensive up-front off-line** compilation, and are **static during decoding**. Kaldi's new grammar framework allows **multiple independent** grammars with nonterminals, to be compiled separately and **stitched together dynamically** at decode-time, but all the grammars are **always active** and capable of being recognized.\n\nThis project extends that to allow each grammar/rule to be **independently marked** as active/inactive **dynamically** on a **per-utterance** basis (set at the beginning of each utterance). Dragonfly is then capable of activating **only the appropriate grammars for the current environment**, resulting in increased accuracy due to fewer possible recognitions. Furthermore, the dictation grammar can be **shared** between all the command grammars, which can be **compiled quickly** without needing to include large-vocabulary dictation directly.\n\nSee the [Changelog](CHANGELOG.md) for the latest updates.\n\n### Features\n\n* **Binaries:** The Python package **includes all necessary binaries** for decoding on **Windows/Linux/MacOS**. 
Available on [PyPI](https://pypi.org/project/kaldi-active-grammar/#files).\n    * Binaries are generated from my [fork of Kaldi](https://github.com/daanzu/kaldi-fork-active-grammar), which is only intended to be used by kaldi-active-grammar directly, and not as a stand-alone library.\n* **Pre-trained model:** A compatible **general English Kaldi nnet3 chain model** is trained on **~3000** hours of open audio. Available under [project releases](https://github.com/daanzu/kaldi-active-grammar/releases).\n    * [**Model info and comparison**](docs/models.md)\n    * Improved models are under development.\n* **Plain dictation:** Do you just want to recognize plain dictation? Seems kind of boring, but okay! There is an [**interface for plain dictation** (see below)](#plain-dictation-interface), using either your specified `HCLG.fst` file, or KaldiAG's included pre-trained dictation model.\n* **Dragonfly/Caster:** A compatible [**backend for Dragonfly**](https://github.com/daanzu/dragonfly/tree/kaldi/dragonfly/engines/backend_kaldi) is under development in the `kaldi` branch of my fork, and has been merged as of Dragonfly **v0.15.0**.\n    * See its [documentation](https://dragonfly2.readthedocs.io/en/latest/kaldi_engine.html), try out a [demo](https://github.com/dictation-toolbox/dragonfly/blob/master/dragonfly/examples/kaldi_demo.py), or use the [loader](https://github.com/dictation-toolbox/dragonfly/blob/master/dragonfly/examples/kaldi_module_loader_plus.py) to run all normal dragonfly scripts.\n    * You can try it out easily on Windows using a **simple no-install package**: see [Getting Started](#getting-started) below.\n    * [Caster](https://github.com/dictation-toolbox/Caster) is supported as of KaldiAG **v0.6.0** and Dragonfly **v0.16.1**.\n* **Bootstrapped** since v0.2: development of KaldiAG is done entirely using KaldiAG.\n\n### Demo Video\n\n<div align=\"center\">\n\n[![Demo Video](docs/demo_video.png)](https://youtu.be/Qk1mGbIJx3s)\n\n</div>\n\n### Donations 
are appreciated to encourage development.\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-EA4AAA.svg?logo=githubsponsors)](https://github.com/sponsors/daanzu)\n[![Donate](https://img.shields.io/badge/donate-PayPal-002991.svg?logo=paypal)](https://paypal.me/daanzu)\n\n### Related Repositories\n\n* [daanzu/kaldi-grammar-simple](https://github.com/daanzu/kaldi-grammar-simple)\n* [daanzu/speech-training-recorder](https://github.com/daanzu/speech-training-recorder)\n* [daanzu/dragonfly_daanzu_tools](https://github.com/daanzu/dragonfly_daanzu_tools)\n* [kmdouglass/caster-kaldi](https://github.com/kmdouglass/homelab/tree/master/speech-recognition/toolchains/caster-kaldi): Docker image to run KaldiAG + Dragonfly + Caster inside a container on Linux, using the host's microphone.\n\n## Getting Started\n\nWant to get started **quickly & easily on Windows**?\nAvailable under [project releases](https://github.com/daanzu/kaldi-active-grammar/releases):\n\n* **`kaldi-dragonfly-winpython`**: A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-dragonfly-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2. Just unzip and run!\n* **`kaldi-caster-winpython-dev`**: [*more recent development version*] A self-contained, portable, batteries-included (python & libraries & model) distribution of kaldi-active-grammar + dragonfly2 + caster. Just unzip and run!\n\nOtherwise...\n\n### Setup\n\n**Requirements**:\n* Python 3.6+; *64-bit required!*\n* OS: Windows/Linux/MacOS all supported\n* Only supports Kaldi left-biphone models, specifically *nnet3 chain* models, with specific modifications\n* ~1GB+ disk space for model plus temporary storage and cache, depending on your grammar complexity\n* ~1GB+ RAM for model and grammars, depending on your model and grammar complexity\n\n**Installation**:\n1. Download a compatible generic English Kaldi nnet3 chain model from [project releases](https://github.com/daanzu/kaldi-active-grammar/releases). Unzip the model and pass the directory path to the kaldi-active-grammar constructor.\n    * Or use your own model. Standard Kaldi models must be converted to be usable. Conversion can be performed automatically, but this hasn't been fully implemented yet.\n1. Install the Python package, which includes the necessary Kaldi binaries:\n    * The easiest way to use kaldi-active-grammar is as a backend to dragonfly, which simplifies defining grammars and their resulting actions.\n        * For this, simply run `pip install 'dragonfly2[kaldi]'` to install all necessary packages. See the [dragonfly documentation for details on installation](https://dragonfly2.readthedocs.io/en/latest/kaldi_engine.html#setup), plus how to define grammars and actions.\n    * Alternatively, if you only want to use it directly (via a lower-level interface), you can just run `pip install kaldi-active-grammar`.\n1. To support automatic generation of pronunciations for unknown words (not in the lexicon), you have two choices:\n    * Local generation: Install the `g2p_en` package with `pip install 'kaldi-active-grammar[g2p_en]'`\n        * The necessary data files are now included in the latest speech models I released with `v3.0.0`.\n    * Online/cloud generation: Install the `requests` package with `pip install 'kaldi-active-grammar[online]'` **AND** pass `allow_online_pronunciations=True` to `Compiler.add_word()` or `Model.add_word()`\n    * If both are available, local generation is preferred.\n\n### Troubleshooting\n\n* Errors installing\n    * Make sure you're using a 64-bit Python.\n    * You should install via `pip install kaldi-active-grammar` (directly or indirectly), *not* `python setup.py install`, in order to get the required binaries.\n    * Update your `pip` (to at least `19.0`) by executing `python -m pip install --upgrade pip`, to support the required binary wheel package format.\n* Errors running\n    * Windows: `The code execution cannot proceed because VCRUNTIME140.dll was not found.` (or similar)\n        * You must install the VC2017+ redistributable from Microsoft: [download page](https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads), [direct link](https://aka.ms/vs/16/release/vc_redist.x64.exe). (This is usually already installed globally by other programs.)\n    * Try deleting the Kaldi model `.tmp` directory, and re-running.\n    * Try deleting the Kaldi model directory itself, re-downloading and/or re-extracting it, and re-running. (Note: You may want to make a copy of your `user_lexicon.txt` file before deleting, to put in the new model directory.)\n* For reporting issues, try running with `import logging; logging.basicConfig(level=1)` at the top of your main/loader file to enable full debug logging.\n\n## Documentation\n\nFormal documentation is somewhat lacking currently. To see example usage, examine:\n\n* [**Plain dictation interface**](examples/plain_dictation.py): Set up a recognizer for plain dictation; perform decoding on a given `wav` file.\n* [**Full example**](examples/full_example.py): Set up a grammar compiler & decoder; set up a rule; perform decoding on live, real-time audio from the microphone.\n* [**Backend for Dragonfly**](https://github.com/daanzu/dragonfly/tree/kaldi/dragonfly/engines/backend_kaldi): Many advanced features and complex interactions.\n\nThe KaldiAG API is fairly low level, but basically: you define a set of grammar rules, then send in audio data, along with a bit mask of which rules are active at the beginning of each utterance, and receive back the recognized rule and text. The easiest way is to go through Dragonfly, which simplifies defining the rules, contexts, and actions.\n\n### Building\n\n* Recommendation: use the binary wheels distributed for all major platforms.\n    * Significant work has gone into allowing you to avoid the many repo/dependency downloads, GBs of disk space, and vCPU-hours needed for building from scratch.\n    * They are built in public by automated Continuous Integration run on GitHub Actions: [see manifest](.github/workflows/build.yml).\n* Alternatively, to build for use locally:\n    * Linux/MacOS:\n        1. `python -m pip install -r requirements-build.txt`\n        1. `python setup.py bdist_wheel` (see [`CMakeLists.txt`](CMakeLists.txt) for details)\n    * Windows:\n        * Not as easily automated\n        * You can follow the steps for Continuous Integration run on GitHub Actions: see the `build-windows` section of [the manifest](.github/workflows/build.yml).\n* Note: the project (and python wheel) is built from a duorepo (2 separate repos used together):\n    1. This repo, containing the external interface and higher-level logic, written in Python.\n    1. [My fork of Kaldi](https://github.com/daanzu/kaldi-fork-active-grammar), containing the lower-level code, written in C++.\n\n## Contributing\n\nIssues, suggestions, and feature requests are welcome & encouraged. Pull requests are considered, but project structure is in flux.\n\nDonations are appreciated to encourage development.\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-EA4AAA.svg?logo=githubsponsors)](https://github.com/sponsors/daanzu)\n[![Donate](https://img.shields.io/badge/donate-PayPal-002991.svg?logo=paypal)](https://paypal.me/daanzu)\n\n## Author\n\n* David Zurow ([@daanzu](https://github.com/daanzu))\n\n## License\n\nThis project is licensed under the GNU Affero General Public License v3 (AGPL-3.0-or-later). See the [LICENSE.txt file](LICENSE.txt) for details. If this license is problematic for you, please contact me.\n\n## Acknowledgments\n\n* Based on and including code from [Kaldi ASR](https://github.com/kaldi-asr/kaldi), under the Apache-2.0 license.\n* Code from [OpenFST](http://www.openfst.org/) and [OpenFST port for Windows](https://github.com/kkm000/openfst), under the Apache-2.0 license.\n* [Intel Math Kernel Library](https://software.intel.com/en-us/mkl), copyright (c) 2018 Intel Corporation, under the [Intel Simplified Software License](https://software.intel.com/en-us/license/intel-simplified-software-license), currently only used for Windows build.\n"
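The bit-mask convention described in the Documentation section (a mask of which rules are active at the start of each utterance) can be sketched in pure Python. This is an illustration only: the rule numbers here are hypothetical, and in KaldiAG the rule indices come from your compiled grammars.

```python
# Illustration of the active-rules bit mask convention described above.
# Rule numbering is hypothetical; real indices come from compiled grammars.

def rules_to_bitmask(active_rule_numbers):
    """Pack a set of active rule numbers into an integer bit mask."""
    mask = 0
    for n in active_rule_numbers:
        mask |= (1 << n)
    return mask

def bitmask_to_rules(mask):
    """Unpack a bit mask back into the sorted list of active rule numbers."""
    return [n for n in range(mask.bit_length()) if mask & (1 << n)]

# At the start of an utterance, suppose only rules 0 and 2 are active:
mask = rules_to_bitmask({0, 2})
assert mask == 0b101
assert bitmask_to_rules(mask) == [0, 2]
```

Because the mask is just an integer, arbitrarily many rules can be toggled per utterance without changing the API shape.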
  },
  {
    "path": "building/build-wheel-dockcross.sh",
    "content": "#!/usr/bin/env bash\n\n# This script builds a Python wheel for kaldi-active-grammar using dockcross,\n# and is to be RUN WITHIN THE DOCKCROSS CONTAINER. It optionally installs Intel\n# MKL if MKL_URL is provided, then builds the wheel and repairs it for the\n# specified platform using auditwheel.\n#\n# Usage: ./build-wheel-dockcross.sh [--skip-native] <WHEEL_PLAT> <KALDI_BRANCH> [MKL_URL]\n# - --skip-native: Skip the native build step\n# - WHEEL_PLAT: The platform tag for the wheel (e.g., manylinux2014_x86_64)\n# - KALDI_BRANCH: The Kaldi branch to use for building\n# - MKL_URL: Optional URL to download and install Intel MKL\n\nset -e -x\n\nPYTHON_EXE=/opt/python/cp38-cp38/bin/python\n\n# Parse optional arguments and filter them out\nSKIP_NATIVE=false\nARGS=()\nwhile [[ $# -gt 0 ]]; do\n  case $1 in\n    --skip-native)\n      SKIP_NATIVE=true\n      shift\n      ;;\n    *)\n      ARGS+=(\"$1\")\n      shift\n      ;;\n  esac\ndone\n# Set positional arguments from filtered array\nset -- \"${ARGS[@]}\"\n\n# Parse required arguments\nWHEEL_PLAT=$1\nKALDI_BRANCH=$2\nMKL_URL=$3\n\nif [ -z \"$PYTHON_EXE\" ] || [ -z \"$WHEEL_PLAT\" ] || [ -z \"$KALDI_BRANCH\" ]; then\n    echo \"ERROR: variable not set!\"\n    exit 1\nfi\n\nif [ -n \"$MKL_URL\" ]; then\n    pushd _skbuild\n    wget --no-verbose --no-clobber $MKL_URL\n    mkdir -p /tmp/mkl\n    MKL_FILE=$(basename $MKL_URL)\n    tar zxf $MKL_FILE -C /tmp/mkl --strip-components=1\n    sed -i.bak -e 's/ACCEPT_EULA=decline/ACCEPT_EULA=accept/g' -e 's/ARCH_SELECTED=ALL/ARCH_SELECTED=INTEL64/g' /tmp/mkl/silent.cfg\n    sudo /tmp/mkl/install.sh --silent /tmp/mkl/silent.cfg\n    rm -rf /tmp/mkl\n    export INTEL_MKL_DIR=\"/opt/intel/mkl/\"\n    popd\nfi\n\nif [ \"$SKIP_NATIVE\" = true ]; then\n    export KALDIAG_BUILD_SKIP_NATIVE=1\n    # Patch the native binaries restored from cache to work with auditwheel repair below; final result should be idempotent\n    patchelf --force-rpath --set-rpath 
\"$(pwd)/kaldi_active_grammar.libs\" kaldi_active_grammar/exec/linux/libkaldi-dragonfly.so\n    readelf -d kaldi_active_grammar/exec/linux/libkaldi-dragonfly.so | egrep 'NEEDED|RUNPATH|RPATH'\n    # ldd kaldi_active_grammar/exec/linux/libkaldi-dragonfly.so\n    # LD_DEBUG=libs ldd kaldi_active_grammar/exec/linux/libkaldi-dragonfly.so\nelse\n    # Clean in preparation for native build\n    mkdir -p _skbuild\n    rm -rf _skbuild/*/cmake-install/ _skbuild/*/setuptools/\n    rm -rf kaldi_active_grammar/exec\nfi\n\nKALDI_BRANCH=$KALDI_BRANCH $PYTHON_EXE setup.py bdist_wheel\n\n# ls -lR kaldi_active_grammar/exec/linux\n\nmkdir -p wheelhouse\nfor whl in dist/*.whl; do\n    unzip -l $whl\n    auditwheel show $whl\n    auditwheel repair $whl --plat $WHEEL_PLAT -w wheelhouse/\n    # auditwheel -v repair $whl --plat $WHEEL_PLAT -w wheelhouse/\ndone\n"
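The flag-filtering loop at the top of the script (pull out `--skip-native`, keep the remaining positional arguments in order) is a common pattern; a rough Python equivalent, for illustration only:

```python
def parse_args(argv):
    """Split argv into (skip_native, positionals), mirroring the script's
    while-loop that filters out --skip-native and keeps everything else."""
    skip_native = False
    positionals = []
    for arg in argv:
        if arg == "--skip-native":
            skip_native = True
        else:
            positionals.append(arg)
    return skip_native, positionals

# WHEEL_PLAT and KALDI_BRANCH survive as positionals, in order:
skip, args = parse_args(["--skip-native", "manylinux2014_x86_64", "some-branch"])
assert skip and args == ["manylinux2014_x86_64", "some-branch"]
```

As in the bash version, the flag may appear anywhere in the argument list without disturbing the positional order.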
  },
  {
    "path": "building/dockcross-manylinux2010-x64",
    "content": "#!/usr/bin/env bash\n\nDEFAULT_DOCKCROSS_IMAGE=dockcross/manylinux2010-x64:latest\n\n#------------------------------------------------------------------------------\n# Helpers\n#\nerr() {\n    echo -e >&2 ERROR: $@\\\\n\n}\n\ndie() {\n    err $@\n    exit 1\n}\n\nhas() {\n    # eg. has command update\n    local kind=$1\n    local name=$2\n\n    type -t $kind:$name | grep -q function\n}\n\n#------------------------------------------------------------------------------\n# Command handlers\n#\ncommand:update-image() {\n    docker pull $FINAL_IMAGE\n}\n\nhelp:update-image() {\n    echo Pull the latest $FINAL_IMAGE .\n}\n\ncommand:update-script() {\n    if cmp -s <( docker run --rm $FINAL_IMAGE ) $0; then\n        echo $0 is up to date\n    else\n        echo -n Updating $0 '... '\n        docker run --rm $FINAL_IMAGE > $0 && echo ok\n    fi\n}\n\nhelp:update-script() {\n    echo Update $0 from $FINAL_IMAGE .\n}\n\ncommand:update() {\n    command:update-image\n    command:update-script\n}\n\nhelp:update() {\n    echo Pull the latest $FINAL_IMAGE, and then update $0 from that.\n}\n\ncommand:help() {\n    if [[ $# != 0 ]]; then\n        if ! has command $1; then\n            err \\\"$1\\\" is not a dockcross command\n            command:help\n        elif ! has help $1; then\n            err No help found for \\\"$1\\\"\n        else\n            help:$1\n        fi\n    else\n        cat >&2 <<ENDHELP\nUsage: dockcross [options] [--] command [args]\n\nBy default, run the given *command* in a dockcross Docker container.\n\nThe *options* can be one of:\n\n    --args|-a           Extra args to the *docker run* command\n    --image|-i          Docker cross-compiler image to use\n    --config|-c         Bash script to source before running this script\n\n\nAdditionally, there are special update commands:\n\n    update-image\n    update-script\n    update\n\nFor update command help use: $0 help <command>\nENDHELP\n        exit 1\n    fi\n}\n\n#------------------------------------------------------------------------------\n# Option processing\n#\nspecial_update_command=''\nwhile [[ $# != 0 ]]; do\n    case $1 in\n\n        --)\n            shift\n            break\n            ;;\n\n        --args|-a)\n            ARG_ARGS=\"$2\"\n            shift 2\n            ;;\n\n        --config|-c)\n            ARG_CONFIG=\"$2\"\n            shift 2\n            ;;\n\n        --image|-i)\n            ARG_IMAGE=\"$2\"\n            shift 2\n            ;;\n        update|update-image|update-script)\n            special_update_command=$1\n            break\n            ;;\n        -*)\n            err Unknown option \\\"$1\\\"\n            command:help\n            exit\n            ;;\n\n        *)\n            break\n            ;;\n\n    esac\ndone\n\n# The precedence for options is:\n# 1. command-line arguments\n# 2. environment variables\n# 3. 
defaults\n\n# Source the config file if it exists\nDEFAULT_DOCKCROSS_CONFIG=~/.dockcross\nFINAL_CONFIG=${ARG_CONFIG-${DOCKCROSS_CONFIG-$DEFAULT_DOCKCROSS_CONFIG}}\n\n[[ -f \"$FINAL_CONFIG\" ]] && source \"$FINAL_CONFIG\"\n\n# Set the docker image\nFINAL_IMAGE=${ARG_IMAGE-${DOCKCROSS_IMAGE-$DEFAULT_DOCKCROSS_IMAGE}}\n\n# Handle special update command\nif [ \"$special_update_command\" != \"\" ]; then\n    case $special_update_command in\n\n        update)\n            command:update\n            exit $?\n            ;;\n\n        update-image)\n            command:update-image\n            exit $?\n            ;;\n\n        update-script)\n            command:update-script\n            exit $?\n            ;;\n\n    esac\nfi\n\n# Set the docker run extra args (if any)\nFINAL_ARGS=${ARG_ARGS-${DOCKCROSS_ARGS}}\n\n# Bash on Ubuntu on Windows\nUBUNTU_ON_WINDOWS=$([ -e /proc/version ] && grep -l Microsoft /proc/version || echo \"\")\n# MSYS, Git Bash, etc.\nMSYS=$([ -e /proc/version ] && grep -l MINGW /proc/version || echo \"\")\n\nif [ -z \"$UBUNTU_ON_WINDOWS\" -a -z \"$MSYS\" ]; then\n    USER_IDS=(-e BUILDER_UID=\"$( id -u )\" -e BUILDER_GID=\"$( id -g )\" -e BUILDER_USER=\"$( id -un )\" -e BUILDER_GROUP=\"$( id -gn )\")\nfi\n\n# Change the PWD when working in Docker on Windows\nif [ -n \"$UBUNTU_ON_WINDOWS\" ]; then\n    WSL_ROOT=\"/mnt/\"\n    CFG_FILE=/etc/wsl.conf\n\tif [ -f \"$CFG_FILE\" ]; then\n\t\tCFG_CONTENT=$(cat $CFG_FILE | sed -r '/[^=]+=[^=]+/!d' | sed -r 's/\\s+=\\s/=/g')\n\t\teval \"$CFG_CONTENT\"\n\t\tif [ -n \"$root\" ]; then\n\t\t\tWSL_ROOT=$root\n\t\tfi\n\tfi\n    HOST_PWD=`pwd -P`\n    HOST_PWD=${HOST_PWD/$WSL_ROOT//}\nelif [ -n \"$MSYS\" ]; then\n    HOST_PWD=$PWD\n    HOST_PWD=${HOST_PWD/\\//}\n    HOST_PWD=${HOST_PWD/\\//:\\/}\nelse\n    HOST_PWD=$PWD\n    [ -L $HOST_PWD ] && HOST_PWD=$(readlink $HOST_PWD)\nfi\n\n# Mount Additional Volumes\nif [ -z \"$SSH_DIR\" ]; then\n    SSH_DIR=\"$HOME/.ssh\"\nfi\n\nHOST_VOLUMES=\nif [ -e \"$SSH_DIR\" -a -z 
\"$MSYS\" ]; then\n    HOST_VOLUMES+=\"-v $SSH_DIR:/home/$(id -un)/.ssh\"\nfi\n\n#------------------------------------------------------------------------------\n# Now, finally, run the command in a container\n#\nTTY_ARGS=\ntty -s && [ -z \"$MSYS\" ] && TTY_ARGS=-ti\nCONTAINER_NAME=dockcross_$RANDOM\ndocker run $TTY_ARGS --name $CONTAINER_NAME \\\n    -v \"$HOST_PWD\":/work \\\n    $HOST_VOLUMES \\\n    \"${USER_IDS[@]}\" \\\n    $FINAL_ARGS \\\n    $FINAL_IMAGE \"$@\"\nrun_exit_code=$?\n\n# Attempt to delete container\nrm_output=$(docker rm -f $CONTAINER_NAME 2>&1)\nrm_exit_code=$?\nif [[ $rm_exit_code != 0 ]]; then\n  if [[ \"$CIRCLECI\" == \"true\" ]] && [[ $rm_output == *\"Driver btrfs failed to remove\"* ]]; then\n    : # Ignore error because of https://circleci.com/docs/docker-btrfs-error/\n  else\n    echo \"$rm_output\"\n    exit $rm_exit_code\n  fi\nfi\n\nexit $run_exit_code\n\n################################################################################\n#\n# This image is not intended to be run manually.\n#\n# To create a dockcross helper script for the\n# dockcross/manylinux2010-x64:latest image, run:\n#\n# docker run --rm dockcross/manylinux2010-x64:latest > dockcross-manylinux2010-x64-latest\n# chmod +x dockcross-manylinux2010-x64-latest\n#\n# You may then wish to move the dockcross script to your PATH.\n#\n################################################################################\n"
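The `${ARG_IMAGE-${DOCKCROSS_IMAGE-$DEFAULT_DOCKCROSS_IMAGE}}` idiom in the script above resolves each option with command-line > environment > default precedence. A small Python sketch of that resolution order (function name is illustrative, not part of any real API):

```python
import os

def resolve(cli_value, env_name, default, environ=None):
    """Return the first set value: CLI argument, then environment variable,
    then the built-in default (mirroring bash's ${A-${B-$C}} chain)."""
    environ = os.environ if environ is None else environ
    if cli_value is not None:
        return cli_value
    if env_name in environ:  # like ${VAR-...}: set (even if empty) wins
        return environ[env_name]
    return default

env = {"DOCKCROSS_IMAGE": "dockcross/manylinux2010-x64:tag"}
assert resolve(None, "DOCKCROSS_IMAGE", "img:default", env) == "dockcross/manylinux2010-x64:tag"
assert resolve("img:cli", "DOCKCROSS_IMAGE", "img:default", env) == "img:cli"
assert resolve(None, "MISSING", "img:default", env) == "img:default"
```

Note that `${VAR-default}` (unlike `${VAR:-default}`) treats an empty-but-set variable as set, which the `in environ` test reproduces.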
  },
  {
    "path": "building/kaldi-configure-wrapper.sh",
    "content": "#!/usr/bin/env bash\n\n# We use this wrapper script to set CXXFLAGS in the environment before calling\n# kaldi configure, to avoid issues with setting environment variables in\n# commands called from cmake.\n\nset -e -x\n\nexport CXXFLAGS=\"-O3 -g0 -ftree-vectorize\"\n# -g0: Produce no debug information at all (debug level 0); -g0 thus negates any earlier -g.\n\n# Execute the given command with its arguments\nexec \"$@\"\n"
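The wrapper's pattern (export a variable, then `exec "$@"`) sets the flag only for the child process. When driving a build step from Python instead of bash, the same effect can be had with the standard library; a minimal sketch:

```python
import os
import subprocess
import sys

def run_with_cxxflags(cmd, cxxflags="-O3 -g0 -ftree-vectorize"):
    """Run cmd with CXXFLAGS set in the child's environment only,
    like the wrapper script's `export CXXFLAGS=...; exec "$@"`."""
    env = dict(os.environ, CXXFLAGS=cxxflags)  # copy, don't mutate os.environ
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Demonstration: the child sees CXXFLAGS; the parent environment is untouched.
result = run_with_cxxflags(
    [sys.executable, "-c", "import os; print(os.environ['CXXFLAGS'])"])
assert result.stdout.strip() == "-O3 -g0 -ftree-vectorize"
```

Passing a copied `env` avoids the leakage problem the comment at the top of the script alludes to: nothing persists after the child exits.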
  },
  {
    "path": "docs/models.md",
    "content": "# Speech Recognition Models\n\n[![Donate](https://img.shields.io/badge/donate-GitHub-pink.svg)](https://github.com/sponsors/daanzu)\n[![Donate](https://img.shields.io/badge/donate-Patreon-orange.svg)](https://www.patreon.com/daanzu)\n[![Donate](https://img.shields.io/badge/donate-PayPal-green.svg)](https://paypal.me/daanzu)\n[![Donate](https://img.shields.io/badge/preferred-GitHub-black.svg)](https://github.com/sponsors/daanzu)\n\n## Available Models\n\n* For **kaldi-active-grammar**\n    * [kaldi_model_daanzu_20211030-biglm](https://github.com/daanzu/kaldi-active-grammar/releases/download/v3.0.0/kaldi_model_daanzu_20211030-biglm.zip) (1.05 GB)\n    * [kaldi_model_daanzu_20211030-mediumlm](https://github.com/daanzu/kaldi-active-grammar/releases/download/v3.0.0/kaldi_model_daanzu_20211030-mediumlm.zip) (651 MB)\n    * [kaldi_model_daanzu_20211030-smalllm](https://github.com/daanzu/kaldi-active-grammar/releases/download/v3.0.0/kaldi_model_daanzu_20211030-smalllm.zip) (400 MB)\n    * [kaldi_model_daanzu_20200905_1ep-biglm](https://github.com/daanzu/kaldi-active-grammar/releases/download/v1.8.0/kaldi_model_daanzu_20200905_1ep-biglm.zip) (1.05 GB)\n    * [kaldi_model_daanzu_20200905_1ep-mediumlm](https://github.com/daanzu/kaldi-active-grammar/releases/download/v1.8.0/kaldi_model_daanzu_20200905_1ep-mediumlm.zip) (651 MB)\n    * [kaldi_model_daanzu_20200905_1ep-smalllm](https://github.com/daanzu/kaldi-active-grammar/releases/download/v1.8.0/kaldi_model_daanzu_20200905_1ep-smalllm.zip) (400 MB)\n    * [kaldi_model_daanzu_20200328_1ep-mediumlm](https://github.com/daanzu/kaldi-active-grammar/releases/download/v1.4.0/kaldi_model_daanzu_20200328_1ep-mediumlm.zip) (322 MB)\n* For **generic kaldi**, or [**vosk**](https://github.com/alphacep/vosk-api)\n    * [vosk-model-en-us-daanzu-20200328](https://github.com/daanzu/kaldi-active-grammar/releases/download/v1.4.0/vosk-model-en-us-daanzu-20200328.zip)\n    * 
[vosk-model-en-us-daanzu-20200328-lgraph](https://github.com/daanzu/kaldi-active-grammar/releases/download/v1.4.0/vosk-model-en-us-daanzu-20200328-lgraph.zip)\n* If you have trouble downloading, try using `wget --continue`\n\n## Basic info for KaldiAG models\n\n* **Latency**: I have yet to do formal latency testing, but for command grammars, the latency between the end of the utterance (as determined by the Voice Activity Detector) and receiving the final recognition results is in the range of 10-20ms.\n\n## General Comparison\n\n* Metric: [Word Error Rate (WER)](https://en.wikipedia.org/wiki/Word_error_rate)\n* Data sets:\n    * [LibriSpeech](http://www.openslr.org/12) Test Clean\n    * [Mozilla Common Voice](https://voice.mozilla.org/en/datasets) English Test\n    * [TED-LIUM Release 3](https://www.openslr.org/51/) Legacy Test\n    * TestSet1: my test set combining multiple sources\n    * Speech Comm: test set from [Google's Speech Commands Dataset](http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz), consisting of short single-word commands\n\n**Note**: The tests on commands are not necessarily fair, because they were performed using a full dictation grammar, rather than a reduced command-specific grammar. 
This is a worst case scenario for accuracy; in practice, speaking commands would perform much more accurately.\n\n|                       Engine                       | LS Test Clean | CV4 Test  | Ted3 Test | TestSet1  | Speech Comm |\n|:--------------------------------------------------:|:-------------:|:---------:|:---------:|:---------:|:-----------:|\n|        KaldiAG dgesr2-f-1ep LibriSpeech LM         |     4.77      | **30.91** | **12.98** | **10.16** |  **11.67**  |\n|        vosk-model-en-us-aspire-0.2 [carpa]         |     17.90     |   69.76   |           |           |    55.90    |\n|             vosk-model-small-en-us-0.3             |     19.30     |           |           |           |    45.57    |\n|                Zamia LibriSpeech LM                |   **4.56**    |   34.28   |           |   10.34   |    30.16    |\n|             Amazon Transcribe **\\*\\***             |     8.21%     |           |           |           |             |\n|         CMU PocketSphinx (0.1.15) **\\*\\***         |    31.82%     |           |           |           |             |\n|           Google Speech-to-Text **\\*\\***           |    12.23%     |           |           |           |             |\n|        Mozilla DeepSpeech (0.6.1) **\\*\\***         |     7.55%     |           |           |           |             |\n|        Picovoice Cheetah (v1.2.0) **\\*\\***         |    10.49%     |           |           |           |             |\n| Picovoice Cheetah LibriSpeech LM (v1.2.0) **\\*\\*** |     8.25%     |           |           |           |             |\n|        Picovoice Leopard (v1.0.0) **\\*\\***         |     8.34%     |           |           |           |             |\n| Picovoice Leopard LibriSpeech LM (v1.0.0) **\\*\\*** |     6.58%     |           |           |           |             |\n\n**\\*\\***: not tested by me; from [Picovoice speech-to-text-benchmark](https://github.com/Picovoice/speech-to-text-benchmark#results)\n\n## Fine tuning for 
individual speakers\n\nFine tuning a generic model for an individual speaker can greatly increase accuracy, at the small cost of recording some training data from the speaker themselves. This training data can be recorded specifically for training purposes, or it can be retained from normal use while using another model (or even another engine).\n\n### David\n\n* Very difficult speech.\n\n|                                 Model                                 | David Commands (test set) | David Dictation (test set) |\n|:---------------------------------------------------------------------:|:-------------------------:|:--------------------------:|\n|                      KaldiAG dgesr-f-1ep generic                      |           84.94           |           70.59            |\n| KaldiAG dgesr-f-1ep fine tuned on ~34hr of mixed commands + dictation |           7.11            |           14.46            |\n|   Custom model trained only on ~34hr of mixed commands + dictation    |           10.04           |           10.29            |\n\n### Shervin\n\n* Accented speech.\n* Shervin Commands: ~1 hour, ~4000 utterances.\n* Shervin Dictation: ~20 minutes, ~250 utterances.\n\n|                              Model                              | Shervin Commands | Shervin Dictation |\n|:---------------------------------------------------------------:|:----------------:|:-----------------:|\n|                  KaldiAG dgesr2-f-1ep generic                   |      46.98       |       9.21        |\n| KaldiAG dgesr2-f-1ep fine tuned on Shervin Commands + Dictation |       9.76       |       2.40        |\n"
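For reference, the WER figures in the tables above are the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words; a minimal implementation:

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level Levenshtein distance between the
    reference and hypothesis, divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] = edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            curr[j] = min(prev[j] + 1,             # deletion
                          curr[j - 1] + 1,         # insertion
                          prev[j - 1] + (r != h))  # substitution (0 if match)
        prev = curr
    return prev[len(hyp)] / len(ref)

assert wer("turn the lamp off", "turn lamp off") == 0.25  # 1 deletion / 4 words
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why dictation-grammar tests on short commands (as cautioned above) can look disproportionately bad.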
  },
  {
    "path": "examples/audio.py",
    "content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\nfrom __future__ import division, print_function\nimport collections, itertools, logging, time, threading, wave\n\nfrom six import binary_type, print_\nfrom six.moves import queue\nimport sounddevice\nimport webrtcvad\n\n_log = logging.getLogger(\"audio\")\n\n\nclass MicAudio(object):\n    \"\"\"Streams raw audio from microphone. Data is received in a separate thread, and stored in a buffer, to be read from.\"\"\"\n\n    FORMAT = 'int16'\n    SAMPLE_WIDTH = 2\n    SAMPLE_RATE = 16000\n    CHANNELS = 1\n    BLOCKS_PER_SECOND = 100\n    BLOCK_SIZE_SAMPLES = int(SAMPLE_RATE / float(BLOCKS_PER_SECOND))  # Block size in number of samples\n    BLOCK_DURATION_MS = int(1000 * BLOCK_SIZE_SAMPLES // SAMPLE_RATE)  # Block duration in milliseconds\n\n    def __init__(self, callback=None, buffer_s=0, flush_queue=True, start=True, input_device=None, self_threaded=None, reconnect_callback=None):\n        self.callback = callback if callback is not None else lambda in_data: self.buffer_queue.put(in_data, block=False)\n        self.flush_queue = bool(flush_queue)\n        self.input_device = input_device\n        self.self_threaded = bool(self_threaded)\n        if reconnect_callback is not None and not callable(reconnect_callback):\n            _log.error(\"Invalid reconnect_callback not callable: %r\", reconnect_callback)\n            reconnect_callback = None\n        self.reconnect_callback = reconnect_callback\n\n        self.buffer_queue = queue.Queue(maxsize=(buffer_s * 1000 // self.BLOCK_DURATION_MS))\n        self.stream = None\n        self.thread = None\n        self.thread_cancelled = False\n        self.device_info = None\n        self._connect(start=start)\n\n    def _connect(self, start=None):\n        callback = self.callback\n        def proxy_callback(in_data, frame_count, time_info, status):\n            
callback(bytes(in_data))  # Must copy data from temporary C buffer!\n\n        self.stream = sounddevice.RawInputStream(\n            samplerate=self.SAMPLE_RATE,\n            channels=self.CHANNELS,\n            dtype=self.FORMAT,\n            blocksize=self.BLOCK_SIZE_SAMPLES,\n            # latency=80,\n            device=self.input_device,\n            callback=proxy_callback if not self.self_threaded else None,\n        )\n\n        if self.self_threaded:\n            self.thread_cancelled = False\n            self.thread = threading.Thread(target=self._reader_thread, args=(callback,))\n            self.thread.daemon = True\n            self.thread.start()\n\n        if start:\n            self.start()\n\n        device_info = sounddevice.query_devices(self.stream.device)\n        hostapi_info = sounddevice.query_hostapis(device_info['hostapi'])\n        _log.info(\"streaming audio from '%s' using %s: %i sample_rate, %i block_duration_ms, %i latency_ms\",\n            device_info['name'], hostapi_info['name'], self.stream.samplerate, self.BLOCK_DURATION_MS, int(self.stream.latency*1000))\n        self.device_info = device_info\n\n    def _reader_thread(self, callback):\n        while not self.thread_cancelled and self.stream and not self.stream.closed:\n            if self.stream.active and self.stream.read_available >= self.stream.blocksize:\n                in_data, overflowed = self.stream.read(self.stream.blocksize)\n                # print('_reader_thread', read_available, len(in_data), overflowed, self.stream.blocksize)\n                if overflowed:\n                    _log.warning(\"audio stream overflow\")\n                callback(bytes(in_data))  # Must copy data from temporary C buffer!\n            else:\n                time.sleep(0.001)\n\n    def destroy(self):\n        self.stream.close()\n\n    def reconnect(self):\n        # FIXME: flapping\n        old_device_info = self.device_info\n        self.thread_cancelled = True\n        if 
self.thread:\n            self.thread.join()\n            self.thread = None\n        self.stream.close()\n        self._connect(start=True)\n        if self.reconnect_callback is not None:\n            self.reconnect_callback(self)\n        if self.device_info != old_device_info:\n            raise Exception(\"Audio reconnect could not reconnect to the same device\")\n\n    def start(self):\n        self.stream.start()\n\n    def stop(self):\n        self.stream.stop()\n\n    def read(self, nowait=False):\n        \"\"\"Return a block of audio data. If nowait==False, waits for a block if necessary; else, returns False immediately if no block is available.\"\"\"\n        if self.stream or (self.flush_queue and not self.buffer_queue.empty()):\n            if nowait:\n                try:\n                    return self.buffer_queue.get_nowait()  # Return good block if available\n                except queue.Empty as e:\n                    return False  # Queue is empty for now\n            else:\n                return self.buffer_queue.get()  # Wait for a good block and return it\n        else:\n            return None  # We are done\n\n    def read_loop(self, callback):\n        \"\"\"Block looping reading, repeatedly passing a block of audio data to callback.\"\"\"\n        for block in iter(self):\n            callback(block)\n\n    def iter(self, nowait=False):\n        \"\"\"Generator that yields all audio blocks from microphone.\"\"\"\n        while True:\n            block = self.read(nowait=nowait)\n            if block is None:\n                break\n            yield block\n\n    def __iter__(self):\n        \"\"\"Generator that yields all audio blocks from microphone.\"\"\"\n        return self.iter()\n\n    def get_wav_length_s(self, data):\n        assert isinstance(data, binary_type)\n        length_bytes = len(data)\n        assert self.FORMAT == 'int16'\n        length_samples = length_bytes / self.SAMPLE_WIDTH\n        return 
(float(length_samples) / self.SAMPLE_RATE)\n\n    def write_wav(self, filename, data):\n        # _log.debug(\"write wav %s\", filename)\n        wf = wave.open(filename, 'wb')\n        wf.setnchannels(self.CHANNELS)\n        # wf.setsampwidth(self.pa.get_sample_size(FORMAT))\n        assert self.FORMAT == 'int16'\n        wf.setsampwidth(self.SAMPLE_WIDTH)\n        wf.setframerate(self.SAMPLE_RATE)\n        wf.writeframes(data)\n        wf.close()\n\n    @staticmethod\n    def print_list():\n        print_(\"\")\n        print_(\"LISTING ALL INPUT DEVICES SUPPORTED BY PORTAUDIO\")\n        print_(\"(any device numbers not shown are for output only)\")\n        print_(\"\")\n        devices = sounddevice.query_devices()\n        print_(devices)\n\n        # for i in range(0, pa.get_device_count()):\n        #     info = pa.get_device_info_by_index(i)\n\n        #     if info['maxInputChannels'] > 0:  # microphone? or just speakers\n        #         print_(\"DEVICE #%d\" % info['index'])\n        #         print_(\"    %s\" % info['name'])\n        #         print_(\"    input channels = %d, output channels = %d, defaultSampleRate = %d\" %\n        #             (info['maxInputChannels'], info['maxOutputChannels'], info['defaultSampleRate']))\n        #         # print_(info)\n        #         try:\n        #           supports16k = pa.is_format_supported(16000,  # sample rate\n        #               input_device = info['index'],\n        #               input_channels = info['maxInputChannels'],\n        #               input_format = pyaudio.paInt16)\n        #         except ValueError:\n        #           print_(\"    NOTE: 16k sampling not supported, configure pulseaudio to use this device\")\n\n        print_(\"\")\n\n\nclass VADAudio(MicAudio):\n    \"\"\"Filter & segment audio with voice activity detection.\"\"\"\n\n    def __init__(self, aggressiveness=3, **kwargs):\n        super(VADAudio, self).__init__(**kwargs)\n        self.vad = 
webrtcvad.Vad(aggressiveness)\n\n    def vad_collector(self, start_window_ms=150, start_padding_ms=100,\n        end_window_ms=150, end_padding_ms=None, complex_end_window_ms=None,\n        ratio=0.8, blocks=None, nowait=False,\n        ):\n        \"\"\"Generator/coroutine that yields series of consecutive audio blocks comprising each phrase, separated by yielding a single None.\n            Determines voice activity by ratio of blocks in window_ms. Uses a buffer to include window_ms prior to being triggered.\n            Example: (block, ..., block, None, block, ..., block, None, ...)\n                      |----phrase-----|        |----phrase-----|\n        \"\"\"\n        assert end_padding_ms == None, \"end_padding_ms not supported yet\"\n        num_start_window_blocks = max(1, int(start_window_ms // self.BLOCK_DURATION_MS))\n        num_start_padding_blocks = max(0, int((start_padding_ms or 0) // self.BLOCK_DURATION_MS))\n        num_end_window_blocks = max(1, int(end_window_ms // self.BLOCK_DURATION_MS))\n        num_complex_end_window_blocks = max(1, int((complex_end_window_ms or end_window_ms) // self.BLOCK_DURATION_MS))\n        num_end_padding_blocks = max(0, int((end_padding_ms or 0) // self.BLOCK_DURATION_MS))\n        _log.debug(\"%s: vad_collector: num_start_window_blocks=%s num_end_window_blocks=%s num_complex_end_window_blocks=%s\",\n            self, num_start_window_blocks, num_end_window_blocks, num_complex_end_window_blocks)\n        audio_reconnect_threshold_blocks = 5\n        audio_reconnect_threshold_time = 50 * self.BLOCK_DURATION_MS / 1000\n\n        ring_buffer = collections.deque(maxlen=max(\n            (num_start_window_blocks + num_start_padding_blocks),\n            (num_end_window_blocks + num_end_padding_blocks),\n            (num_complex_end_window_blocks + num_end_padding_blocks),\n        ))\n        ring_buffer_recent_slice = lambda num_blocks: itertools.islice(ring_buffer, max(0, (len(ring_buffer) - num_blocks)), None)\n\n   
     triggered = False\n        in_complex_phrase = False\n        num_empty_blocks = 0\n        last_good_block_time = time.time()\n\n        if blocks is None: blocks = self.iter(nowait=nowait)\n        for block in blocks:\n            if block is False or block is None:\n                # Bad/empty block\n                num_empty_blocks += 1\n                if (num_empty_blocks >= audio_reconnect_threshold_blocks) and (time.time() - last_good_block_time >= audio_reconnect_threshold_time):\n                    _log.warning(\"%s: no good block received recently, so reconnecting audio\", self)\n                    self.reconnect()\n                    num_empty_blocks = 0\n                    last_good_block_time = time.time()\n                in_complex_phrase = yield block\n\n            else:\n                # Good block\n                num_empty_blocks = 0\n                last_good_block_time = time.time()\n                is_speech = self.vad.is_speech(block, self.SAMPLE_RATE)\n\n                if not triggered:\n                    # Between phrases\n                    ring_buffer.append((block, is_speech))\n                    num_voiced = len([1 for (_, speech) in ring_buffer_recent_slice(num_start_window_blocks) if speech])\n                    if num_voiced >= (num_start_window_blocks * ratio):\n                        # Start of phrase\n                        triggered = True\n                        for block, _ in ring_buffer_recent_slice(num_start_padding_blocks + num_start_window_blocks):\n                            # print('|' if is_speech else '.', end='')\n                            # print('|' if in_complex_phrase else '.', end='')\n                            in_complex_phrase = yield block\n                        # print('#', end='')\n                        ring_buffer.clear()\n\n                else:\n                    # Ongoing phrase\n                    in_complex_phrase = yield block\n                    # print('|' if 
is_speech else '.', end='')\n                    # print('|' if in_complex_phrase else '.', end='')\n                    ring_buffer.append((block, is_speech))\n                    num_unvoiced = len([1 for (_, speech) in ring_buffer_recent_slice(num_end_window_blocks) if not speech])\n                    num_complex_unvoiced = len([1 for (_, speech) in ring_buffer_recent_slice(num_complex_end_window_blocks) if not speech])\n                    if (not in_complex_phrase and num_unvoiced >= (num_end_window_blocks * ratio)) or \\\n                        (in_complex_phrase and num_complex_unvoiced >= (num_complex_end_window_blocks * ratio)):\n                        # End of phrase\n                        triggered = False\n                        in_complex_phrase = yield None\n                        # print('*')\n                        ring_buffer.clear()\n\n    def debug_print_simple(self):\n        print(\"block_duration_ms=%s\" % self.BLOCK_DURATION_MS)\n        for block in self.iter(nowait=False):\n            is_speech = self.vad.is_speech(block, self.SAMPLE_RATE)\n            print('|' if is_speech else '.', end='')\n\n    def debug_loop(self, *args, **kwargs):\n        audio_iter = self.vad_collector(*args, **kwargs)\n        next(audio_iter)\n        while True:\n            block = audio_iter.send(False)\n"
  },
  {
    "path": "examples/full_example.py",
    "content": "import logging, time\nimport kaldi_active_grammar\n\nlogging.basicConfig(level=20)\nmodel_dir = None  # Default\ntmp_dir = None  # Default\n\n##### Set up grammar compiler & decoder\n\ncompiler = kaldi_active_grammar.Compiler(model_dir=model_dir, tmp_dir=tmp_dir)\n# compiler.fst_cache.invalidate()\ndecoder = compiler.init_decoder()\n\n##### Set up a rule\n\nrule = kaldi_active_grammar.KaldiRule(compiler, 'TestRule')\nfst = rule.fst\n\n# Construct grammar in a FST\nprevious_state = fst.add_state(initial=True)\nfor word in \"i will order the\".split():\n    state = fst.add_state()\n    fst.add_arc(previous_state, state, word)\n    if word == 'the':\n        # 'the' is optional, so we also allow an epsilon (silent) arc\n        fst.add_arc(previous_state, state, None)\n    previous_state = state\nfinal_state = fst.add_state(final=True)\nfor word in ['egg', 'bacon', 'sausage']: fst.add_arc(previous_state, final_state, word)\nfst.add_arc(previous_state, final_state, 'spam', weight=8)  # 'spam' is much more likely\nfst.add_arc(final_state, previous_state, None)  # Loop back, with an epsilon (silent) arc\n\nrule.compile()\nrule.load()\n\n##### You could add many more rules...\n\n##### Perform decoding on live, real-time audio from microphone\n\nfrom audio import VADAudio\naudio = VADAudio()\naudio_iterator = audio.vad_collector(nowait=True)\nprint(\"Listening...\")\n\nin_phrase = False\nfor block in audio_iterator:\n\n    if block is False:\n        # No audio block available\n        time.sleep(0.001)\n\n    elif block is not None:\n        if not in_phrase:\n            # Start of phrase\n            kaldi_rules_activity = [True]  # A bool for each rule\n            in_phrase = True\n        else:\n            # Ongoing phrase\n            kaldi_rules_activity = None  # Irrelevant\n\n        decoder.decode(block, False, kaldi_rules_activity)\n        output, info = decoder.get_output()\n        print(\"Partial phrase: %r\" % (output,))\n        
recognized_rule, words, words_are_dictation_mask, in_dictation = compiler.parse_partial_output(output)\n\n    else:\n        # End of phrase\n        decoder.decode(b'', True)\n        output, info = decoder.get_output()\n        expected_error_rate = info.get('expected_error_rate', float('nan'))\n        confidence = info.get('confidence', float('nan'))\n\n        recognized_rule, words, words_are_dictation_mask = compiler.parse_output(output)\n        is_acceptable_recognition = bool(recognized_rule)\n        parsed_output = ' '.join(words)\n        print(\"End of phrase: eer=%.2f conf=%.2f%s, rule %s, %r\" %\n            (expected_error_rate, confidence, (\" [BAD]\" if not is_acceptable_recognition else \"\"), recognized_rule, parsed_output))\n\n        in_phrase = False\n"
  },
  {
    "path": "examples/mix_dictation.py",
    "content": "import kaldi_active_grammar\n\nif __name__ == '__main__':\n    import util\n    compiler, decoder = util.initialize()\n\n##### Set up a rule mixing strict commands with free dictation\n\nrule = kaldi_active_grammar.KaldiRule(compiler, 'TestRule')\nfst = rule.fst\n\ndictation_nonterm = '#nonterm:dictation'\nend_nonterm = '#nonterm:end'\n\n# Optional preface\nprevious_state = fst.add_state(initial=True)\nnext_state = fst.add_state()\nfst.add_arc(previous_state, next_state, 'cap')\nfst.add_arc(previous_state, next_state, None)  # Optionally skip, with an epsilon (silent) arc\n\n# Required free dictation\nprevious_state = next_state\nextra_state = fst.add_state()\nnext_state = fst.add_state()\n# These two arcs together (always use together) will recognize one or more words of free dictation (but not zero):\nfst.add_arc(previous_state, extra_state, dictation_nonterm)\nfst.add_arc(extra_state, next_state, None, end_nonterm)\n\n# Loop repetition, alternating between a group of alternatives and more free dictation\nprevious_state = next_state\nnext_state = fst.add_state()\nfor word in ['period', 'comma', 'colon']:\n    fst.add_arc(previous_state, next_state, word)\n# Same two-arc dictation pattern as above, following each alternative:\nextra_state = fst.add_state()\nfst.add_arc(next_state, extra_state, dictation_nonterm)\nnext_state = fst.add_state()\nfst.add_arc(extra_state, next_state, None, end_nonterm)\nfst.add_arc(next_state, previous_state, None)  # Loop back, with an epsilon (silent) arc\nfst.add_arc(previous_state, next_state, None)  # Optionally skip, with an epsilon (silent) arc\n\n# Finish up\nfinal_state = fst.add_state(final=True)\nfst.add_arc(next_state, final_state, None)\n\nrule.compile()\nrule.load()\n\n# Decode\nif __name__ == '__main__':\n    util.do_recognition(compiler, decoder)\n"
  },
  {
    "path": "examples/plain_dictation.py",
    "content": "import logging, sys, wave\nfrom kaldi_active_grammar import PlainDictationRecognizer\n\n# logging.basicConfig(level=10)\nrecognizer = PlainDictationRecognizer()  # Or supply non-default model_dir, tmp_dir, or fst_file\nfilename = sys.argv[1] if len(sys.argv) > 1 else 'test.wav'\nwave_file = wave.open(filename, 'rb')\ndata = wave_file.readframes(wave_file.getnframes())\noutput_str, info = recognizer.decode_utterance(data)\nprint(repr(output_str), info)  # -> 'it depends on the context'\n"
  },
  {
    "path": "examples/requirements_audio.txt",
    "content": "sounddevice==0.3.*\nwebrtcvad-wheels==2.0.*\n"
  },
  {
    "path": "examples/util.py",
    "content": "import time\nimport kaldi_active_grammar\nfrom audio import VADAudio\n\ndef initialize(model_dir=None, tmp_dir=None, config={}):\n    compiler = kaldi_active_grammar.Compiler(model_dir=model_dir, tmp_dir=tmp_dir)\n    decoder = compiler.init_decoder(config=config)\n    return (compiler, decoder)\n\ndef do_recognition(compiler, decoder, print_partial=True, cap_dictation=True):\n    audio = VADAudio()\n    audio_iterator = audio.vad_collector(nowait=True)\n    print(\"Listening...\")\n\n    in_phrase = False\n    for block in audio_iterator:\n\n        if block is False:\n            # No audio block available\n            time.sleep(0.001)\n\n        elif block is not None:\n            if not in_phrase:\n                # Start of phrase\n                kaldi_rules_activity = [True]  # A bool for each rule\n                in_phrase = True\n            else:\n                # Ongoing phrase\n                kaldi_rules_activity = None  # Irrelevant\n\n            decoder.decode(block, False, kaldi_rules_activity)\n            output, info = decoder.get_output()\n            if print_partial:\n                print(\"Partial phrase: %r\" % (output,))\n            recognized_rule, words, words_are_dictation_mask, in_dictation = compiler.parse_partial_output(output)\n\n        else:\n            # End of phrase\n            decoder.decode(b'', True)\n            output, info = decoder.get_output()\n            expected_error_rate = info.get('expected_error_rate', float('nan'))\n            confidence = info.get('confidence', float('nan'))\n\n            recognized_rule, words, words_are_dictation_mask = compiler.parse_output(output)\n            is_acceptable_recognition = bool(recognized_rule)\n            if cap_dictation:\n                words = [(word.upper() if word_in_dictation else word) for (word, word_in_dictation) in zip(words, words_are_dictation_mask)]\n            parsed_output = ' '.join(words)\n            print(\"End of phrase: 
eer=%.2f conf=%.2f%s, rule %s, %r\" %\n                (expected_error_rate, confidence, (\" [BAD]\" if not is_acceptable_recognition else \"\"), recognized_rule, parsed_output))\n\n            in_phrase = False\n"
  },
  {
    "path": "kaldi_active_grammar/LICENSE.txt",
    "content": "                    GNU AFFERO GENERAL PUBLIC LICENSE\n                       Version 3, 19 November 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU Affero General Public License is a free, copyleft license for\nsoftware and other kinds of works, specifically designed to ensure\ncooperation with the community in the case of network server software.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nour General Public Licenses are intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  Developers that use our General Public Licenses protect your rights\nwith two steps: (1) assert copyright on the software, and (2) offer\nyou this License which gives you legal permission to copy, distribute\nand/or modify the software.\n\n  A secondary benefit of defending all users' freedom is that\nimprovements made in alternate versions of the program, if they\nreceive widespread use, become available for other developers to\nincorporate.  Many developers of free software are heartened and\nencouraged by the resulting cooperation.  
However, in the case of\nsoftware used on network servers, this result may fail to come about.\nThe GNU General Public License permits making a modified version and\nletting the public access it on a server without ever releasing its\nsource code to the public.\n\n  The GNU Affero General Public License is designed specifically to\nensure that, in such cases, the modified source code becomes available\nto the community.  It requires the operator of a network server to\nprovide the source code of the modified version running there to the\nusers of that server.  Therefore, public use of a modified version, on\na publicly accessible server, gives the public access to the source\ncode of the modified version.\n\n  An older license, called the Affero General Public License and\npublished by Affero, was designed to accomplish similar goals.  This is\na different license, not a version of the Affero GPL, but Affero has\nreleased a new version of the Affero GPL which permits relicensing under\nthis license.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. Definitions.\n\n  \"This License\" refers to version 3 of the GNU Affero General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  
The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  
\"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  
For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. 
Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  
This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  
If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  
A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  
Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  
You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  
If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  
If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  
For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  
To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  \"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  
You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. 
Remote Network Interaction; Use with the GNU General Public License.\n\n  Notwithstanding any other provision of this License, if you modify the\nProgram, your modified version must prominently offer all users\ninteracting with it remotely through a computer network (if your version\nsupports such interaction) an opportunity to receive the Corresponding\nSource of your version by providing access to the Corresponding Source\nfrom a network server at no charge, through some standard or customary\nmeans of facilitating copying of software.  This Corresponding Source\nshall include the Corresponding Source for any work covered by version 3\nof the GNU General Public License that is incorporated pursuant to the\nfollowing paragraph.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU General Public License into a single\ncombined work, and to convey the resulting work.  The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the work with which it is combined will remain governed by version\n3 of the GNU General Public License.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU Affero General Public License from time to time.  Such new versions\nwill be similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU Affero General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  
If the Program does not specify a version number of the\nGNU Affero General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU Affero General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. 
Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU Affero General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU Affero General Public License for more details.\n\n    You should have received a copy of the GNU Affero General Public License\n    along with this program.  
If not, see <https://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If your software can interact with users remotely through a computer\nnetwork, you should also make sure that it provides a way for users to\nget its source.  For example, if your program is a web application, its\ninterface could display a \"Source\" link that leads users to an archive\nof the code.  There are many ways you could offer source, and different\nsolutions will be better for different programs; see section 13 for the\nspecific requirements.\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU AGPL, see\n<https://www.gnu.org/licenses/>.\n"
  },
  {
    "path": "kaldi_active_grammar/__init__.py",
    "content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\n_name = 'kaldi_active_grammar'\n__version__ = '3.2.0'\n# __dev_version__ = __version__ + '.dev0'\nREQUIRED_MODEL_VERSION = '0.5.0'\n\nimport logging\n_log = logging.getLogger('kaldi')\n# _handler = logging.NullHandler()\n# _log.addHandler(_handler)\n\nclass KaldiError(Exception):\n    pass\n\nfrom .compiler import Compiler, KaldiRule\nfrom .wrapper import KaldiAgfNNet3Decoder, KaldiLafNNet3Decoder, KaldiPlainNNet3Decoder\nfrom .wfst import NativeWFST, WFST\nfrom .plain_dictation import PlainDictationRecognizer\nfrom .utils import disable_donation_message\n"
  },
  {
    "path": "kaldi_active_grammar/__main__.py",
    "content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\nimport logging, os.path, shutil\n\nfrom six import print_\n\nfrom . import _name\nfrom .utils import debug_timer\nfrom .compiler import Compiler\nfrom .model import Model, convert_generic_model_to_agf\n\ndef main():\n    import argparse\n    parser = argparse.ArgumentParser(prog='python -m %s' % _name)\n    parser.add_argument('-v', '--verbose', action='store_true')\n    parser.add_argument('-m', '--model_dir')\n    parser.add_argument('-t', '--tmp_dir')\n    parser.add_argument('command', choices=[\n        'compile_agf_dictation_graph',\n        'compile_plain_dictation_graph',\n        'convert_generic_model_to_agf',\n        'add_word',\n        'generate_lexicon_files',\n        'reset_user_lexicon',\n        'generate_words_relabeled_file',\n    ])\n    # FIXME: helps\n    # FIXME: subparsers?\n    args, unknown = parser.parse_known_args()\n\n    logging.basicConfig(level=5 if args.verbose else logging.INFO)\n\n    if args.command == 'compile_agf_dictation_graph':\n        compiler = Compiler(args.model_dir, args.tmp_dir)\n        g_filename = unknown.pop(0) if unknown else None\n        print_(\"Compiling dictation graph...\")\n        compiler.compile_agf_dictation_fst(g_filename=g_filename)\n\n    if args.command == 'compile_plain_dictation_graph':\n        compiler = Compiler(args.model_dir, args.tmp_dir)\n        g_filename = unknown.pop(0) if unknown else None\n        output_filename = unknown.pop(0) if unknown else None\n        print_(\"Compiling plain dictation graph...\")\n        compiler.compile_plain_dictation_fst(g_filename=g_filename, output_filename=output_filename)\n\n    if args.command == 'convert_generic_model_to_agf':\n        # if not args.model_dir: parser.error(\"MODEL_DIR required for %s\" % args.command)\n        file = unknown[0]\n        convert_generic_model_to_agf(file, 
args.model_dir)\n\n    if args.command == 'add_word':\n        word = unknown[0]\n        phones = unknown[1].split() if len(unknown) >= 2 else None\n        pronunciations = Model(args.model_dir).add_word(word, phones)\n        for phones in pronunciations:\n            print_(\"Added word %r: %r\" % (word, ' '.join(phones)))\n\n    if args.command == 'generate_lexicon_files':\n        Model(args.model_dir).generate_lexicon_files()\n        print_(\"Generated lexicon files\")\n\n    if args.command == 'reset_user_lexicon':\n        Model(args.model_dir).reset_user_lexicon()\n        print_(\"Reset user lexicon\")\n\n    if args.command == 'generate_words_relabeled_file':\n        Model.generate_words_relabeled_file(*unknown)\n        print_(\"Generated words_relabeled file\")\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "kaldi_active_grammar/compiler.py",
    "content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\nimport collections, copy, logging, multiprocessing, os, re, shlex, shutil, subprocess, threading\nimport concurrent.futures\nfrom contextlib import contextmanager\nfrom io import open\n\nfrom six.moves import range, zip\n\nfrom . import _log, KaldiError\nfrom .utils import ExternalProcess, debug_timer, platform, show_donation_message\nfrom .wfst import WFST, NativeWFST, SymbolTable\nfrom .model import Model\nfrom .wrapper import KaldiAgfCompiler, KaldiAgfNNet3Decoder, KaldiLafNNet3Decoder\nimport kaldi_active_grammar.defaults as defaults\n\n_log = _log.getChild('compiler')\n\n\n########################################################################################################################\n\nclass KaldiRule(object):\n\n    cls_lock = threading.Lock()\n\n    def __init__(self, compiler, name, nonterm=True, has_dictation=None, is_complex=None):\n        \"\"\"\n        :param nonterm: bool whether rule represents a nonterminal in the active-grammar-fst (only False for the top FST?)\n        \"\"\"\n        self.compiler = compiler\n        self.name = name\n        self.nonterm = nonterm\n        self.has_dictation = has_dictation\n        self.is_complex = is_complex\n\n        # id: matches \"nonterm:rule__\"; 0-based; can/will change due to rule unloading!\n        self.id = int(self.compiler.alloc_rule_id() if nonterm else -1)\n        if self.id > self.compiler._max_rule_id: raise KaldiError(\"KaldiRule id > compiler._max_rule_id\")\n        if self.id in self.compiler.kaldi_rule_by_id_dict: raise KaldiError(\"KaldiRule id already in use\")\n        if self.id >= 0:\n            self.compiler.kaldi_rule_by_id_dict[self.id] = self\n\n        # Private/protected\n        self._fst_text = None\n        self.compiled = False\n        self.loaded = False\n        self.reloading = False  # KaldiRule is 
in the process of the reload contextmanager\n        self.has_been_loaded = False  # KaldiRule was loaded, then reload() was called & completed, and now it is not currently loaded, and load() we need to call the decoder's reload\n        self.destroyed = False  # KaldiRule must not be used/referenced anymore\n\n        # Public\n        self.fst = WFST() if not self.compiler.native_fst else NativeWFST()\n        self.matcher = None\n        self.active = True\n\n    def __repr__(self):\n        return \"%s(%s, %s)\" % (self.__class__.__name__, self.id, self.name)\n\n    fst_cache = property(lambda self: self.compiler.fst_cache)\n    decoder = property(lambda self: self.compiler.decoder)\n\n    pending_compile = property(lambda self: (self in self.compiler.compile_queue) or (self in self.compiler.compile_duplicate_filename_queue))\n    pending_load = property(lambda self: self in self.compiler.load_queue)\n\n    fst_wrapper = property(lambda self: self.fst if self.fst.native else self.filepath)\n    filename = property(lambda self: self.fst.filename)\n\n    @property\n    def filepath(self):\n        assert self.filename\n        assert self.compiler.tmp_dir is not None\n        return os.path.join(self.compiler.tmp_dir, self.filename)\n\n    def compile(self, lazy=False, duplicate=None):\n        if self.destroyed: raise KaldiError(\"Cannot use a KaldiRule after calling destroy()\")\n        if self.compiled: return self\n\n        if self.fst.native:\n            if not self.filename:\n                self.fst.compute_hash(self.fst_cache.dependencies_hash)\n                assert self.filename\n\n        else:\n            # Handle compiling text WFST to binary\n            if not self._fst_text:\n                # self.fst.normalize_weights()\n                self._fst_text = self.fst.get_fst_text(fst_cache=self.fst_cache)\n                assert self.filename\n            # if 'dictation' in self._fst_text: _log.log(50, '\\n    '.join([\"%s: FST text:\" % self] 
+ self._fst_text.splitlines()))  # log _fst_text\n\n        if self.compiler.cache_fsts and self.fst_cache.fst_is_current(self.filepath, touch=True):\n            _log.debug(\"%s: Skipped FST compilation thanks to FileCache\" % self)\n            if self.compiler.decoding_framework == 'agf' and self.fst.native:\n                self.fst.compiled_native_obj = NativeWFST.load_file(self.filepath)\n            self.compiled = True\n            return self\n        else:\n            if duplicate:\n                _log.warning(\"%s was supposed to be a duplicate compile, but was not found in FileCache\")\n\n        if lazy:\n            if not self.pending_compile:\n                # Special handling for rules that are an exact content match (and hence hash/name) with another (different) rule already in the compile_queue\n                if not any(self.filename == kaldi_rule.filename for kaldi_rule in self.compiler.compile_queue if self != kaldi_rule):\n                    self.compiler.compile_queue.add(self)\n                else:\n                    self.compiler.compile_duplicate_filename_queue.add(self)\n            return self\n\n        return self.finish_compile()\n\n    def finish_compile(self):\n        # Must be thread-safe!\n        with self.cls_lock:\n            self.compiler.prepare_for_compilation()\n        _log.log(15, \"%s: Compiling %sstate/%sarc FST%s%s\" % (self, self.fst.num_states, self.fst.num_arcs,\n                (\" (%dbyte)\" % len(self._fst_text)) if self._fst_text else \"\",\n                (\" to \" + self.filename) if self.filename else \"\"))\n        assert self.fst.native or self._fst_text\n        if _log.isEnabledFor(3):\n            if self.fst.native: self.fst.write_file('tmp_G.fst')\n            if _log.isEnabledFor(2):\n                if self._fst_text: _log.log(2, '\\n    '.join([\"%s: FST text:\" % self] + self._fst_text.splitlines()))  # log _fst_text\n                elif self.fst.native: self.fst.print()\n\n        
try:\n            if self.compiler.decoding_framework == 'agf':\n                if self.fst.native:\n                    self.fst.compiled_native_obj = self.compiler._compile_agf_graph(compile=True, nonterm=self.nonterm, input_fst=self.fst, return_output_fst=True,\n                        output_filename=(self.filepath if self.compiler.cache_fsts else None))\n                else:\n                    self.compiler._compile_agf_graph(compile=True, nonterm=self.nonterm, input_text=self._fst_text, output_filename=self.filepath)\n                    self._fst_text = None  # Free memory\n\n            elif self.compiler.decoding_framework == 'laf':\n                # self.compiler._compile_laf_graph(compile=True, nonterm=self.nonterm, input_text=self._fst_text, output_filename=self.filepath)\n                # Keep self._fst_text, for adding directly later\n                pass\n\n            else: raise KaldiError(\"unknown compiler.decoding_framework\")\n        except Exception as e:\n            raise KaldiError(\"Exception while compiling\", self)  # Return this KaldiRule inside exception\n\n        self.compiled = True\n        return self\n\n    def load(self, lazy=False):\n        if self.destroyed: raise KaldiError(\"Cannot use a KaldiRule after calling destroy()\")\n        if lazy or self.pending_compile:\n            self.compiler.load_queue.add(self)\n            return self\n        assert self.compiled\n\n        if self.has_been_loaded:\n            # FIXME: why is this necessary?\n            self._do_reloading()\n        else:\n            if self.compiler.decoding_framework == 'agf':\n                grammar_fst_index = self.decoder.add_grammar_fst(self.fst if self.fst.native else self.filepath)\n            elif self.compiler.decoding_framework == 'laf':\n                grammar_fst_index = self.decoder.add_grammar_fst(self.fst) if self.fst.native else self.decoder.add_grammar_fst_text(self._fst_text)\n            else: raise KaldiError(\"unknown 
compiler decoding_framework\")\n            assert self.id == grammar_fst_index, \"add_grammar_fst allocated invalid grammar_fst_index %d != %d for %s\" % (grammar_fst_index, self.id, self)\n\n        self.loaded = True\n        self.has_been_loaded = True\n        return self\n\n    def _do_reloading(self):\n        if self.compiler.decoding_framework == 'agf':\n            return self.decoder.reload_grammar_fst(self.id, (self.fst if self.fst.native else self.filepath))\n        elif self.compiler.decoding_framework == 'laf':\n            assert self.fst.native\n            return self.decoder.reload_grammar_fst(self.id, self.fst)\n            if not self.fst.native: return self.decoder.reload_grammar_fst_text(self.id, self._fst_text)  # FIXME: not implemented\n        else: raise KaldiError(\"unknown compiler decoding_framework\")\n\n    @contextmanager\n    def reload(self):\n        \"\"\" Used for modifying a rule in place, e.g. ListRef. \"\"\"\n        if self.destroyed: raise KaldiError(\"Cannot use a KaldiRule after calling destroy()\")\n\n        was_loaded = self.loaded\n        self.reloading = True\n        self.fst.clear()\n        self._fst_text = None\n        self.compiled = False\n        self.loaded = False\n\n        yield\n\n        if self.compiled and was_loaded:\n            if not self.loaded:\n                # FIXME: how is this different from the branch of the if above in load()?\n                self._do_reloading()\n                self.loaded = True\n        elif was_loaded:  # must be not self.compiled (i.e. the compile during reloading was lazy)\n            self.compiler.load_queue.add(self)\n        self.reloading = False\n\n    def destroy(self):\n        \"\"\" Destructor. Unloads rule. The rule should not be used/referenced anymore after calling! 
\"\"\"\n        if self.destroyed:\n            return\n\n        if self.loaded:\n            self.decoder.remove_grammar_fst(self.id)\n            assert self not in self.compiler.compile_queue\n            assert self not in self.compiler.compile_duplicate_filename_queue\n            assert self not in self.compiler.load_queue\n        else:\n            if self in self.compiler.compile_queue: self.compiler.compile_queue.remove(self)\n            if self in self.compiler.compile_duplicate_filename_queue: self.compiler.compile_duplicate_filename_queue.remove(self)\n            if self in self.compiler.load_queue: self.compiler.load_queue.remove(self)\n\n        # Adjust other kaldi_rules ids down, if above self.id, then rebuild dict\n        other_kaldi_rules = list(self.compiler.kaldi_rule_by_id_dict.values())\n        other_kaldi_rules.remove(self)\n        for kaldi_rule in other_kaldi_rules:\n            if kaldi_rule.id > self.id:\n                kaldi_rule.id -= 1\n        self.compiler.kaldi_rule_by_id_dict = { kaldi_rule.id: kaldi_rule for kaldi_rule in other_kaldi_rules }\n\n        self.compiler.free_rule_id()\n        self.destroyed = True\n\n\n########################################################################################################################\n\nclass Compiler(object):\n\n    def __init__(self, model_dir=None, tmp_dir=None, alternative_dictation=None,\n            framework='agf-direct', native_fst=True, cache_fsts=True):\n        # Supported parameter combinations:\n        #   framework='agf-indirect' native_fst=False (original method)\n        #   framework='agf-direct' native_fst=False (no external CLI programs needed)\n        #   framework='agf-direct' native_fst=True (no external CLI programs needed; no cache/temp files used)\n        #   framework='laf' native_fst=False (no reloading supported)\n        #   framework='laf' native_fst=True (no reloading supported)\n\n        show_donation_message()\n        self._log = 
_log\n\n        AGF_INTERNAL_COMPILATION = True\n        if framework == 'agf-direct':\n            framework = 'agf'\n            AGF_INTERNAL_COMPILATION = True\n        if framework == 'agf-indirect':\n            framework = 'agf'\n            AGF_INTERNAL_COMPILATION = False\n            assert not native_fst, \"AGF with NativeWFST not supported\"\n            assert cache_fsts, \"AGF must cache FSTs\"\n        self.decoding_framework = framework\n        assert self.decoding_framework in ('agf', 'laf')\n        self.parsing_framework = 'token'\n        assert self.parsing_framework in ('token', 'text')\n        self.native_fst = bool(native_fst)\n        self.cache_fsts = bool(cache_fsts)\n        self.alternative_dictation = alternative_dictation\n\n        tmp_dir_needed = bool(self.cache_fsts)\n        self.model = Model(model_dir, tmp_dir, tmp_dir_needed=tmp_dir_needed)\n        self._lexicon_files_stale = False\n\n        if self.native_fst:\n            NativeWFST.init_class(\n                osymbol_table=self.model.words_table,\n                isymbol_table=self.model.words_table if self.decoding_framework != 'laf' else SymbolTable(self.files_dict['words.relabeled.txt']),\n                wildcard_nonterms=self.wildcard_nonterms)\n        self._agf_compiler = self._init_agf_compiler() if AGF_INTERNAL_COMPILATION else None\n        self.decoder = None\n\n        self._num_kaldi_rules = 0\n        self._max_rule_id = 999\n        self.nonterminals = tuple(['#nonterm:dictation'] + ['#nonterm:rule%i' % i for i in range(self._max_rule_id + 1)])\n        words_set = frozenset(self.model.words_table.words)\n        self._oov_word = '<unk>' if ('<unk>' in self.model.words_table) else None  # FIXME: make this configurable, for different models\n        self._silence_words = frozenset(['!SIL']) & words_set  # FIXME: make this configurable, for different models\n        self._noise_words = frozenset(['<unk>', '!SIL']) & words_set  # FIXME: make this 
configurable, for different models\n\n        self.kaldi_rule_by_id_dict = collections.OrderedDict()  # maps KaldiRule.id -> KaldiRule\n        self.compile_queue = set()  # KaldiRule\n        self.compile_duplicate_filename_queue = set()  # KaldiRule; queued KaldiRules with a duplicate filename (and thus contents), so can skip compilation\n        self.load_queue = set()  # KaldiRule; must maintain same order as order of instantiation!\n\n    def init_decoder(self, config=None, dictation_fst_file=None):\n        if self.decoder: raise KaldiError(\"Decoder already initialized\")\n        if dictation_fst_file is None: dictation_fst_file = self.dictation_fst_filepath\n        decoder_kwargs = dict(model_dir=self.model_dir, tmp_dir=self.tmp_dir, dictation_fst_file=dictation_fst_file, max_num_rules=self._max_rule_id+1, config=config)\n        if self.decoding_framework == 'agf':\n            top_fst_rule = self.compile_top_fst()\n            decoder_kwargs.update(top_fst=top_fst_rule.fst_wrapper)\n            self.decoder = KaldiAgfNNet3Decoder(**decoder_kwargs)\n        elif self.decoding_framework == 'laf':\n            self.decoder = KaldiLafNNet3Decoder(**decoder_kwargs)\n        else:\n            raise KaldiError(\"Invalid Compiler.decoding_framework: %r\" % self.decoding_framework)\n        return self.decoder\n\n    exec_dir = property(lambda self: self.model.exec_dir)\n    model_dir = property(lambda self: self.model.model_dir)\n    tmp_dir = property(lambda self: self.model.tmp_dir)\n    files_dict = property(lambda self: self.model.files_dict)\n    fst_cache = property(lambda self: self.model.fst_cache)\n\n    num_kaldi_rules = property(lambda self: self._num_kaldi_rules)\n    lexicon_words = property(lambda self: self.model.words_table.word_to_id_map)\n    _longest_word = property(lambda self: self.model.longest_word)\n\n    _default_dictation_g_filepath = property(lambda self: os.path.join(self.model_dir, defaults.DEFAULT_DICTATION_G_FILENAME))\n    
_dictation_fst_filepath = property(lambda self: os.path.join(self.model_dir,\n
        (defaults.DEFAULT_DICTATION_FST_FILENAME if self.decoding_framework == 'agf' else 'Gr.fst')))  # FIXME: generalize\n
    _plain_dictation_hclg_fst_filepath = property(lambda self: os.path.join(self.model_dir, defaults.DEFAULT_PLAIN_DICTATION_HCLG_FST_FILENAME))\n\n
    def alloc_rule_id(self):\n        id = self._num_kaldi_rules\n        self._num_kaldi_rules += 1\n        return id\n\n
    def free_rule_id(self):\n        # Decrement before reading, so the id returned is the one actually freed (the inverse of alloc_rule_id)\n        self._num_kaldi_rules -= 1\n        id = self._num_kaldi_rules\n        return id\n\n\n
    ####################################################################################################################\n    # Methods for compiling graphs.\n\n
    def add_word(self, word, phones=None, lazy_compilation=False, allow_online_pronunciations=False):\n        pronunciations = self.model.add_word(word, phones=phones, lazy_compilation=lazy_compilation, allow_online_pronunciations=allow_online_pronunciations)\n        self._lexicon_files_stale = True  # Only mark lexicon stale if it was successfully modified (not an exception)\n        return pronunciations\n\n
    def prepare_for_compilation(self):\n        if self._lexicon_files_stale:\n            self.model.generate_lexicon_files()\n            self.model.load_words()  # FIXME: This re-loading from the words.txt file may be unnecessary now that we have/use NativeWFST + SymbolTable, but it's not clear if it's safe to remove it.\n            self.decoder.load_lexicon()\n            if self._agf_compiler:\n                # TODO: Just update the necessary files in the config\n                self._agf_compiler.destroy()\n                self._agf_compiler = self._init_agf_compiler()\n            self._lexicon_files_stale = False\n\n
    def _compile_laf_graph(self, input_text=None, input_filename=None, output_filename=None, **kwargs):\n        # FIXME: documentation\n        with debug_timer(self._log.debug, \"laf graph 
compilation\"):\n            format_kwargs = dict(self.files_dict, **kwargs)\n\n            if input_text and input_filename: raise KaldiError(\"_compile_laf_graph passed both input_text and input_filename\")\n            elif input_text: input = ExternalProcess.shell.echo(input_text.encode('utf-8'))\n            elif input_filename: input = input_filename\n            else: raise KaldiError(\"_compile_laf_graph passed neither input_text nor input_filename\")\n            compile_command = input\n            format = ExternalProcess.get_list_formatter(format_kwargs)\n\n            compile_command |= ExternalProcess.fstcompile(*format('--isymbols={words_txt}', '--osymbols={words_txt}'))\n            # g_filename = output_filename.replace('.fst', '.G.fst')\n            compile_command |= output_filename\n            compile_command()\n            # fstrelabel --relabel_ipairs=relabel G.fst | fstarcsort --sort_type=ilabel | fstconvert --fst_type=const > Gr.fst\n\n    def _init_agf_compiler(self):\n        format_kwargs = dict(self.files_dict)\n        config = dict(\n            tree_rxfilename = '{tree}',\n            model_rxfilename = '{final_mdl}',\n            lex_rxfilename = '{L_disambig_fst}',\n            disambig_rxfilename = '{disambig_int}',\n            word_syms_filename = '{words_txt}',\n            )\n        config = { key: value.format(**format_kwargs) for (key, value) in config.items() }\n        return KaldiAgfCompiler(config)\n\n    def _compile_agf_graph(self, compile=False, nonterm=False, simplify_lg=True,\n            input_text=None, input_filename=None, input_fst=None,\n            output_filename=None, return_output_fst=False, **kwargs):\n        \"\"\"\n        :param compile: bool whether to compile FST (False if it has already been compiled, like importing dictation FST)\n        :param nonterm: bool whether rule represents a nonterminal in the active-grammar-fst (only False for the top FST?)\n        :param simplify_lg: bool whether to 
simplify LG (disambiguate, and more) (do for command grammars, but not for dictation graph!)\n        \"\"\"\n        # Must be thread-safe!\n        # Possible combinations of (compile,nonterm): (True,True) (True,False) (False,True)\n        # FIXME: documentation\n        verbose_level = 3 if self._log.isEnabledFor(5) else 0\n        format_kwargs = dict(self.files_dict, input_filename=input_filename, output_filename=output_filename, verbose=verbose_level, **kwargs)\n        format_kwargs.update(nonterm_phones_offset=self.model.nonterm_phones_offset)\n        format_kwargs.update(words_nonterm_begin=self.model.nonterm_words_offset, words_nonterm_end=self.model.nonterm_words_offset+1)\n        format_kwargs.update(simplify_lg=str(bool(simplify_lg)).lower())\n\n        if self._agf_compiler:\n            # Internal-style (no external CLI programs)\n            config = dict(\n                nonterm_phones_offset = self.model.nonterm_phones_offset,\n                disambig_rxfilename = '{disambig_int}',\n                simplify_lg = simplify_lg,\n                verbose = verbose_level,\n                tree_rxfilename = '{tree}',\n                model_rxfilename = '{final_mdl}',\n                lex_rxfilename = '{L_disambig_fst}',\n                word_syms_filename = '{words_txt}',\n                )\n            if output_filename:\n                config.update(hclg_wxfilename=output_filename)\n            elif self._log.isEnabledFor(3):\n                import datetime\n                config.update(hclg_wxfilename=os.path.join(self.tmp_dir, datetime.datetime.now().isoformat().replace(':', '') + '.fst'))\n            if nonterm:\n                config.update(grammar_prepend_nonterm=self.model.nonterm_words_offset, grammar_append_nonterm=self.model.nonterm_words_offset+1)\n            config = { key: value.format(**format_kwargs) if isinstance(value, str) else value for (key, value) in config.items() }\n\n            if 1 != sum(int(i is not None) for i in 
[input_text, input_filename, input_fst]):\n                raise KaldiError(\"must pass exactly one input\")\n            if input_text:\n                return self._agf_compiler.compile_graph(config, grammar_fst_text=input_text, return_graph=return_output_fst)\n            if input_filename:\n                return self._agf_compiler.compile_graph(config, grammar_fst_file=input_filename, return_graph=return_output_fst)\n            if input_fst:\n                return self._agf_compiler.compile_graph(config, grammar_fst=input_fst, return_graph=return_output_fst)\n\n        elif True:\n            # Pipeline-style\n            assert not input_fst\n            if input_text and input_filename: raise KaldiError(\"_compile_agf_graph passed both input_text and input_filename\")\n            elif input_text: input = ExternalProcess.shell.echo(input_text.encode('utf-8'))\n            elif input_filename: input = input_filename\n            else: raise KaldiError(\"_compile_agf_graph passed neither input_text nor input_filename\")\n            compile_command = input\n            format = ExternalProcess.get_list_formatter(format_kwargs)\n            args = []\n\n            # if True: (input | ExternalProcess.fstcompile(*format('--isymbols={words_txt}', '--osymbols={words_txt}')) | ExternalProcess.fstinfo | 'stats.log+')()\n            # if True: (ExternalProcess.shell.echo(input_text) | ExternalProcess.fstcompile(*format('--isymbols={words_txt}', '--osymbols={words_txt}')) | (output_filename+'-G'))()\n\n            if compile:\n                compile_command |= ExternalProcess.fstcompile(*format('--isymbols={words_txt}', '--osymbols={words_txt}'))\n                if self._log.isEnabledFor(5):\n                    g_txt_filename = output_filename.replace('.fst', '.G.fst.txt')\n                    self._log.log(5, \"Saving text grammar FST to %s\", g_txt_filename)\n                    with open(g_txt_filename, 'wb') as f: 
shutil.copyfileobj(copy.deepcopy(compile_command.commands[0].get_opt('stdin')), f)\n                    g_filename = output_filename.replace('.fst', '.G.fst')\n                    self._log.log(5, \"Saving compiled grammar FST to %s\", g_filename)\n                    (copy.deepcopy(compile_command) | g_filename)()\n                args.extend(['--arcsort-grammar'])\n            if nonterm:\n                args.extend(format('--grammar-prepend-nonterm={words_nonterm_begin}', '--grammar-append-nonterm={words_nonterm_end}'))\n            args.extend(format(\n                '--nonterm-phones-offset={nonterm_phones_offset}',\n                '--read-disambig-syms={disambig_int}',\n                '--simplify-lg={simplify_lg}',\n                '--verbose={verbose}',\n                '{tree}', '{final_mdl}', '{L_disambig_fst}', '-', '{output_filename}'))\n            compile_command |= ExternalProcess.compile_graph_agf(*args, **ExternalProcess.get_debug_stderr_kwargs(self._log))\n            ExternalProcess.execute_command_safely(compile_command, self._log)\n\n            # if True: (ExternalProcess.shell.echo('%s -> %s\\n' % (len(input_text), get_time_spent())) | ExternalProcess.shell('cat') | 'stats.log+')()\n\n        else:\n            # CLI-style (deprecated!)\n            assert not input_fst\n            run = lambda cmd, **kwargs: run_subprocess(cmd, format_kwargs, \"agf graph compilation step\", format_kwargs_update=dict(input_filename=output_filename), **kwargs)\n            if compile: run(\"{exec_dir}fstcompile --isymbols={words_txt} --osymbols={words_txt} {input_filename}.txt {output_filename}\")\n            # run(\"cp {input_filename} {output_filename}-G\")\n            if compile: run(\"{exec_dir}fstarcsort --sort_type=ilabel {input_filename} {output_filename}\")\n            if nonterm: run(\"{exec_dir}fstconcat {tmp_dir}nonterm_begin.fst {input_filename} {output_filename}\")\n            if nonterm: run(\"{exec_dir}fstconcat {input_filename} 
{tmp_dir}nonterm_end.fst {output_filename}\")\n
            # run(\"cp {input_filename} {output_filename}-G\")\n
            run(\"{exec_dir}compile-graph --nonterm-phones-offset={nonterm_phones_offset} --read-disambig-syms={disambig_int} --verbose={verbose}\"\n
                + \" {tree} {final_mdl} {L_disambig_fst} {input_filename} {output_filename}\")\n\n
    def compile_plain_dictation_fst(self, g_filename=None, output_filename=None):\n        if g_filename is None: g_filename = self._default_dictation_g_filepath\n        if output_filename is None: output_filename = self._plain_dictation_hclg_fst_filepath\n        verbose_level = 5 if self._log.isEnabledFor(5) else 0\n
        format_kwargs = dict(self.files_dict, g_filename=g_filename, output_filename=output_filename, verbose=verbose_level)\n        format = ExternalProcess.get_list_formatter(format_kwargs)\n
        args = format('--read-disambig-syms={disambig_int}', '--simplify-lg=false', '--verbose={verbose}',\n            '{tree}', '{final_mdl}', '{L_disambig_fst}', '{g_filename}', '{output_filename}')\n
        compile_command = ExternalProcess.compile_graph_agf(*args, **ExternalProcess.get_debug_stderr_kwargs(self._log))\n        compile_command()\n\n
    def compile_agf_dictation_fst(self, g_filename=None):\n        if g_filename is None: g_filename = self._default_dictation_g_filepath\n        self._compile_agf_graph(input_filename=g_filename, output_filename=self._dictation_fst_filepath, nonterm=True, simplify_lg=False)\n\n
    # def _compile_base_fsts(self):\n    #     filepaths = [self.tmp_dir + filename for filename in ['nonterm_begin.fst', 'nonterm_end.fst']]\n    #     if all(self.fst_cache.is_current(filepath) for filepath in filepaths):\n    #         return\n
    #     format_kwargs = dict(self.files_dict)\n    #     def run(cmd): subprocess.check_call(cmd.format(**format_kwargs), shell=True)  # FIXME: unsafe shell?\n
    #     if platform == 'windows':\n
    #         run(\"(echo 0 1 #nonterm_begin 0^& echo 1) | {exec_dir}fstcompile.exe --isymbols={words_txt} > {tmp_dir}nonterm_begin.fst\")\n
    #         run(\"(echo 0 1 #nonterm_end 0^& echo 1) | {exec_dir}fstcompile.exe --isymbols={words_txt} > {tmp_dir}nonterm_end.fst\")\n
    #     else:\n
    #         run(\"(echo 0 1 \\\\#nonterm_begin 0; echo 1) | {exec_dir}fstcompile --isymbols={words_txt} > {tmp_dir}nonterm_begin.fst\")\n
    #         run(\"(echo 0 1 \\\\#nonterm_end 0; echo 1) | {exec_dir}fstcompile --isymbols={words_txt} > {tmp_dir}nonterm_end.fst\")\n
    #     for filepath in filepaths:\n    #         self.fst_cache.add(filepath)\n\n
    def compile_top_fst(self):\n        return self._build_top_fst(nonterms=['#nonterm:rule'+str(i) for i in range(self._max_rule_id + 1)], noise_words=self._noise_words).compile()\n\n
    def compile_top_fst_dictation_only(self):\n        return self._build_top_fst(nonterms=['#nonterm:dictation'], noise_words=self._noise_words).compile()\n\n
    def _build_top_fst(self, nonterms, noise_words):\n        kaldi_rule = KaldiRule(self, 'top', nonterm=False)\n        fst = kaldi_rule.fst\n        state_initial = fst.add_state(initial=True)\n        state_final = fst.add_state(final=True)\n\n
        state_return = fst.add_state()\n        for nonterm in nonterms:\n            fst.add_arc(state_initial, state_return, nonterm)\n        fst.add_arc(state_return, state_final, None, '#nonterm:end')\n\n
        if noise_words:\n            for (state_from, state_to) in [\n                    (state_initial, state_final),\n                    # (state_initial, state_initial),  # FIXME: test this\n                    # (state_final, state_final),\n                    ]:\n                for word in noise_words:\n                    fst.add_arc(state_from, state_to, word)\n\n
        return kaldi_rule\n\n
    def _get_dictation_fst_filepath(self):\n        if os.path.exists(self._dictation_fst_filepath):\n            return self._dictation_fst_filepath\n        self._log.error(\"cannot find 
dictation fst: %s\", self._dictation_fst_filepath)\n        # FIXME: Fall back to universal dictation?\n    dictation_fst_filepath = property(_get_dictation_fst_filepath)\n\n    # def _construct_dictation_states(self, fst, src_state, dst_state, number=(1,None), words=None, start_weight=None):\n    #     \"\"\"\n    #     Matches `number` words.\n    #     :param number: (0,None) or (1,None) or (1,1), where None is infinity.\n    #     \"\"\"\n    #     # unweighted=0.01\n    #     if words is None: words = self.lexicon_words\n    #     word_probs = self._lexicon_word_probs\n    #     backoff_state = fst.add_state()\n    #     fst.add_arc(src_state, backoff_state, None, weight=start_weight)\n    #     if number[0] == 0:\n    #         fst.add_arc(backoff_state, dst_state, None)\n    #     for word, prob in word_probs.items():\n    #         state = fst.add_state()\n    #         fst.add_arc(backoff_state, state, word, weight=prob)\n    #         if number[1] == None:\n    #             fst.add_arc(state, backoff_state, None)\n    #         fst.add_arc(state, dst_state, None)\n\n    def compile_universal_grammar(self, words=None):\n        \"\"\"recognizes any sequence of words\"\"\"\n        kaldi_rule = KaldiRule(self, 'universal', nonterm=False)\n        if words is None: words = self.lexicon_words\n        fst = kaldi_rule.fst\n        backoff_state = fst.add_state(initial=True, final=True)\n        for word in words:\n            # state = fst.add_state()\n            # fst.add_arc(backoff_state, state, word)\n            # fst.add_arc(state, backoff_state, None)\n            fst.add_arc(backoff_state, backoff_state, word)\n        kaldi_rule.compile()\n        return kaldi_rule\n\n    def process_compile_and_load_queues(self):\n        # Allowing this gives us leeway elsewhere\n        # for kaldi_rule in self.compile_queue:\n        #     if kaldi_rule.compiled:\n        #         self._log.warning(\"compile_queue has %s but it is already compiled\", 
kaldi_rule)\n        # for kaldi_rule in self.compile_duplicate_filename_queue:\n        #     if kaldi_rule.compiled:\n        #         self._log.warning(\"compile_duplicate_filename_queue has %s but it is already compiled\", kaldi_rule)\n        # for kaldi_rule in self.load_queue:\n        #     if kaldi_rule.loaded:\n        #         self._log.warning(\"load_queue has %s but it is already loaded\", kaldi_rule)\n\n        # Clean out obsolete entries\n        self.compile_queue.difference_update([kaldi_rule for kaldi_rule in self.compile_queue if kaldi_rule.compiled])\n        self.compile_duplicate_filename_queue.difference_update([kaldi_rule for kaldi_rule in self.compile_duplicate_filename_queue if kaldi_rule.compiled])\n        self.load_queue.difference_update([kaldi_rule for kaldi_rule in self.load_queue if kaldi_rule.loaded])\n\n        if self.compile_queue or self.compile_duplicate_filename_queue or self.load_queue:\n            with concurrent.futures.ThreadPoolExecutor(max_workers=multiprocessing.cpu_count()) as executor:\n                results = executor.map(lambda kaldi_rule: kaldi_rule.finish_compile(), self.compile_queue)\n                # Load pending rules that have already been compiled\n                # for kaldi_rule in (self.load_queue - self.compile_queue - self.compile_duplicate_filename_queue):\n                #     kaldi_rule.load()\n                #     self.load_queue.remove(kaldi_rule)\n                # Handle rules as they are completed (have been compiled)\n                for kaldi_rule in results:\n                    assert kaldi_rule.compiled\n                    self.compile_queue.remove(kaldi_rule)\n                    # if kaldi_rule in self.load_queue:\n                    #     kaldi_rule.load()\n                    #     self.load_queue.remove(kaldi_rule)\n                # Handle rules that were pending compile but were duplicate and so compiled by/for another rule. 
They should be in the cache now\n                for kaldi_rule in list(self.compile_duplicate_filename_queue):\n                    kaldi_rule.compile(duplicate=True)\n                    assert kaldi_rule.compiled\n                    self.compile_duplicate_filename_queue.remove(kaldi_rule)\n                    # if kaldi_rule in self.load_queue:\n                    #     kaldi_rule.load()\n                    #     self.load_queue.remove(kaldi_rule)\n                # Load rules in correct order\n                for kaldi_rule in sorted(self.load_queue, key=lambda kr: kr.id):\n                    kaldi_rule.load()\n                    assert kaldi_rule.loaded\n                    self.load_queue.remove(kaldi_rule)\n\n\n    ####################################################################################################################\n    # Methods for recognition.\n\n    def prepare_for_recognition(self):\n        try:\n            if self.compile_queue or self.compile_duplicate_filename_queue or self.load_queue:\n                self.process_compile_and_load_queues()\n        except KaldiError:\n            raise\n        except Exception:\n            raise KaldiError(\"Exception while compiling/loading rules in prepare_for_recognition\")\n        finally:\n            if self.fst_cache.dirty:\n                self.fst_cache.save()\n\n    wildcard_nonterms = ('#nonterm:dictation', '#nonterm:dictation_cloud')\n\n    def parse_output_for_rule(self, kaldi_rule, output):\n        \"\"\"Can be used even when self.parsing_framework == 'token', only for mimic (which contains no nonterms).\"\"\"\n        labels = kaldi_rule.fst.does_match(output.split(), wildcard_nonterms=self.wildcard_nonterms)\n        self._log.log(5, \"parse_output_for_rule(%s, %r) got %r\", kaldi_rule, output, labels)\n        if labels is False:\n            return None\n        words = [label for label in labels if not label.startswith('#nonterm:')]\n        parsed_output = ' 
'.join(words)\n        if parsed_output.lower() != output:\n            self._log.error(\"parsed_output(%r).lower() != output(%r)\" % (parsed_output, output))\n        return words\n\n    alternative_dictation_regex = re.compile(r'(?<=#nonterm:dictation_cloud )(.*?)(?= #nonterm:end)')  # lookbehind & lookahead assertions\n\n    def parse_output(self, output, dictation_info_func=None):\n        \"\"\"\n        dictation_info_func: Optional but required for using alternative_dictation; expected to be (audio_data, wrapper::KaldiNNet3Decoder.get_word_align output).\n        \"\"\"\n        assert self.parsing_framework == 'token'\n        self._log.debug(\"parse_output(%r)\" % output)\n        if (output == '') or (output in self._noise_words):\n            return None, [], []\n\n        nonterm_token, _, parsed_output = output.partition(' ')\n        assert nonterm_token.startswith('#nonterm:rule')\n        kaldi_rule_id = int(nonterm_token[len('#nonterm:rule'):])\n        kaldi_rule = self.kaldi_rule_by_id_dict[kaldi_rule_id]\n\n        if self.alternative_dictation and dictation_info_func and kaldi_rule.has_dictation and '#nonterm:dictation_cloud' in parsed_output:\n            try:\n                if callable(self.alternative_dictation):\n                    alternative_text_func = self.alternative_dictation\n                else:\n                    raise TypeError(\"Invalid alternative_dictation value: %r\" % self.alternative_dictation)\n\n                audio_data, word_align = dictation_info_func()\n                self._log.log(5, \"alternative_dictation word_align: %s\", word_align)\n                words, times, lengths = list(zip(*word_align))\n                # Find start & end word-index & byte-offset of each alternative dictation span\n                dictation_spans = [{\n                        'index_start': index,\n                        'offset_start': time,\n                        'index_end': words.index('#nonterm:end', index),\n              
          'offset_end': times[words.index('#nonterm:end', index)],\n                    }\n                    for index, (word, time, length) in enumerate(word_align)\n                    if word.startswith('#nonterm:dictation_cloud')]\n\n                # If last dictation is at end of utterance, it should include rest of audio_data; else, it should include half of audio_data between dictation end and start of next word\n                dictation_span = dictation_spans[-1]\n                if dictation_span['index_end'] == len(word_align) - 1:\n                    dictation_span['offset_end'] = len(audio_data)\n                else:\n                    next_word_time = times[dictation_span['index_end'] + 1]\n                    dictation_span['offset_end'] = (dictation_span['offset_end'] + next_word_time) // 2\n\n                def replace_dictation(matchobj: re.Match) -> str:\n                    orig_text = matchobj.group(1)\n                    dictation_span = dictation_spans.pop(0)\n                    dictation_audio = audio_data[dictation_span['offset_start'] : dictation_span['offset_end']]\n                    with debug_timer(self._log.debug, 'alternative_dictation call'):\n                        alternative_text = alternative_text_func(dictation_audio)\n                        self._log.debug(\"alternative_dictation: %.2fs audio -> %r\", (0.5 * len(dictation_audio) / 16000), alternative_text)  # FIXME: hardcoded sample_rate!\n                    # alternative_dictation.write_wav('test.wav', dictation_audio)\n                    return (alternative_text or orig_text)\n\n                parsed_output = self.alternative_dictation_regex.sub(replace_dictation, parsed_output)\n            except Exception as e:\n                self._log.exception(\"Exception performing alternative dictation\")\n\n        words = []\n        words_are_dictation_mask = []\n        in_dictation = False\n        for word in parsed_output.split():\n            if 
word.startswith('#nonterm:'):\n                if word.startswith('#nonterm:dictation'):\n                    in_dictation = True\n                elif in_dictation and word == '#nonterm:end':\n                    in_dictation = False\n            else:\n                words.append(word)\n                words_are_dictation_mask.append(in_dictation)\n\n        return kaldi_rule, words, words_are_dictation_mask\n\n    def parse_partial_output(self, output):\n        assert self.parsing_framework == 'token'\n        self._log.log(3, \"parse_partial_output(%r)\", output)\n        if (output == '') or (output in self._noise_words):\n            return None, [], [], False\n\n        nonterm_token, _, parsed_output = output.partition(' ')\n        assert nonterm_token.startswith('#nonterm:rule')\n        kaldi_rule_id = int(nonterm_token[len('#nonterm:rule'):])\n        kaldi_rule = self.kaldi_rule_by_id_dict[kaldi_rule_id]\n\n        words = []\n        words_are_dictation_mask = []\n        in_dictation = False\n        for word in parsed_output.split():\n            if word.startswith('#nonterm:'):\n                if word.startswith('#nonterm:dictation'):\n                    in_dictation = True\n                elif in_dictation and word == '#nonterm:end':\n                    in_dictation = False\n            else:\n                words.append(word)\n                words_are_dictation_mask.append(in_dictation)\n\n        return kaldi_rule, words, words_are_dictation_mask, in_dictation\n\n########################################################################################################################\n# Utility functions.\n\ndef remove_words_in_words(words, remove_words_func):\n    return [word for word in words if not remove_words_func(word)]\n\ndef remove_words_in_text(text, remove_words_func):\n    return ' '.join(word for word in text.split() if not remove_words_func(word))\n\ndef remove_nonterms_in_words(words):\n    return 
remove_words_in_words(words, lambda word: word.startswith('#nonterm:'))\n\ndef remove_nonterms_in_text(text):\n    return remove_words_in_text(text, lambda word: word.startswith('#nonterm:'))\n\ndef run_subprocess(cmd, format_kwargs, description=None, format_kwargs_update=None, **kwargs):\n    with debug_timer(_log.debug, description or \"description\", False), open(os.devnull, 'wb') as devnull:\n        output = None if _log.isEnabledFor(logging.DEBUG) else devnull\n        args = shlex.split(cmd.format(**format_kwargs), posix=(platform != 'windows'))\n        _log.log(5, \"subprocess.check_call(%r)\", args)\n        subprocess.check_call(args, stdout=output, stderr=output, **kwargs)\n        if format_kwargs_update:\n            format_kwargs.update(format_kwargs_update)\n"
  },
  {
    "path": "kaldi_active_grammar/defaults.py",
    "content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\nDEFAULT_MODEL_DIR = 'kaldi_model'\nFILE_CACHE_FILENAME = 'file_cache.json'\n\nDEFAULT_DICTATION_G_FILENAME = 'G.fst'\nDEFAULT_DICTATION_FST_FILENAME = 'Dictation.fst'\nDEFAULT_PLAIN_DICTATION_HCLG_FST_FILENAME = 'HCLG.fst'\n"
  },
  {
    "path": "kaldi_active_grammar/ffi.py",
    "content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\n\"\"\"\nFFI classes for Kaldi\n\"\"\"\n\nimport os, re\n\nfrom cffi import FFI\n\nfrom .utils import exec_dir, platform\n\n_ffi = FFI()\n_library_binary_path = os.path.join(exec_dir, dict(windows='kaldi-dragonfly.dll', linux='libkaldi-dragonfly.so', macos='libkaldi-dragonfly.dylib')[platform])\n_c_source_ignore_regex = re.compile(r'(\\b(extern|DRAGONFLY_API)\\b)|(\"C\")|(//.*$)', re.MULTILINE)  # Pattern for extraneous stuff to be removed\n\ndef encode(text):\n    \"\"\" For C interop: encode unicode text -> binary utf-8. \"\"\"\n    return text.encode('utf-8')\ndef decode(binary):\n    \"\"\" For C interop: decode binary utf-8 -> unicode text. \"\"\"\n    return binary.decode('utf-8')\n\nclass FFIObject(object):\n\n    def __init__(self):\n        self.init_ffi()\n\n    @classmethod\n    def init_ffi(cls):\n        cls._lib = _ffi.init_once(cls._init_ffi, cls.__name__ + '._init_ffi')\n\n    @classmethod\n    def _init_ffi(cls):\n        _ffi.cdef(_c_source_ignore_regex.sub(' ', cls._library_header_text))\n        return _ffi.dlopen(_library_binary_path)\n"
  },
  {
    "path": "kaldi_active_grammar/kaldi/COPYING",
    "content": "\n Update to legal notice, made Feb 2012, modified Sep 2013.  We would like to\n clarify that we are using a convention where multiple names in the Apache\n copyright headers, for example\n\n  // Copyright 2009-2012  Yanmin Qian  Arnab Ghoshal\n  //                2013  Vassil Panayotov\n\n does not signify joint ownership of copyright of that file, except in cases\n where all those names were present in the original release made in March 2011--\n you can use the version history to work this out, if this matters to you.\n Instead, we intend that those contributors who later modified the file, agree\n to release their changes under the Apache license.  The conventional way of\n signifying this is to duplicate the Apache headers at the top of each file each\n time a change is made by a different author, but this would quickly become\n impractical.\n\n Where the copyright header says something like:\n\n // Copyright    2013   Johns Hopkins University (author: Daniel Povey)\n\n it is because the individual who wrote the code was at that institution as an\n employee, so the copyright is owned by the university (and we will have checked\n that the contributions were in accordance with the open-source policies of the\n institutions concerned, including getting them vetted individually where\n necessary).  From a legal point of view the copyright ownership is that of the\n institution concerned, and the (author: xxx) in parentheses is just\n informational, to identify the actual person who wrote the code, and is not\n intended to have any legal implications.  In some cases, however, particularly\n early on, we just wrote the name of the university or company concerned,\n without the actual author's name in parentheses.  
If you see something like\n\n //  Copyright  2009-2012   Arnab Ghoshal  Microsoft Corporation\n\n it does not imply that Arnab was working for Microsoft, it is because someone\n else contributed to the file while working at Microsoft (this would be Daniel\n Povey, in fact, who was working at Microsoft Research at the outset of the\n project).\n\n The list of authors of each file is in an essentially arbitrary order, but is\n often chronological if they contributed in different years.\n\n The original legal notice is below.  Note: we are continuing to modify it by\n adding the names of new contributors, but at any given time, the list may\n be out of date.\n\n---\n                          Legal Notices\n\nEach of the files comprising Kaldi v1.0 have been separately licensed by\ntheir respective author(s) under the terms of the Apache License v 2.0 (set\nforth below).  The source code headers for each file specifies the individual\nauthors and source material for that file as well the corresponding copyright\nnotice.  
For reference purposes only: A cumulative list of all individual\ncontributors and original source material as well as the full text of the Apache\nLicense v 2.0 are set forth below.\n\nIndividual Contributors (in alphabetical order)\n\n      Mohit Agarwal\n      Tanel Alumae\n      Gilles Boulianne\n      Lukas Burget\n      Dogan Can\n      Guoguo Chen\n      Gaofeng Cheng\n      Cisco Corporation\n      Pavel Denisov\n      Ilya Edrenkin\n      Ewald Enzinger\n      Joachim Fainberg\n      Daniel Galvez\n      Pegah Ghahremani\n      Arnab Ghoshal\n      Ondrej Glembek\n      Go Vivace Inc.\n      Allen Guo\n      Hossein Hadian\n      Lv Hang\n      Mirko Hannemann\n      Hendy Irawan\n      Navdeep Jaitly\n      Johns Hopkins University\n      Shiyin Kang\n      Kirill Katsnelson\n      Tom Ko\n      Danijel Korzinek\n      Gaurav Kumar\n      Ke Li\n      Matthew Maciejewski\n      Vimal Manohar\n      Yajie Miao\n      Microsoft Corporation\n      Petr Motlicek\n      Xingyu Na\n      Vincent Nguyen\n      Lucas Ondel\n      Vassil Panayotov\n      Vijayaditya Peddinti\n      Phonexia s.r.o.\n      Ondrej Platek\n      Daniel Povey\n      Yanmin Qian\n      Ariya Rastrow\n      Saarland University\n      Omid Sadjadi\n      Petr Schwarz\n      Yiwen Shao\n      Nickolay V. 
Shmyrev\n      Jan Silovsky\n      Eduardo Silva\n      Peter Smit\n      David Snyder\n      Alexander Solovets\n      Georg Stemmer\n      Pawel Swietojanski\n      Jan \"Yenda\" Trmal\n      Albert Vernon\n      Karel Vesely\n      Yiming Wang\n      Shinji Watanabe\n      Minhua Wu\n      Haihua Xu\n      Hainan Xu\n      Xiaohui Zhang\n\nOther Source Material\n\n    This project includes a port and modification of materials from JAMA: A Java\n  Matrix Package under the following notice: \"This software is a cooperative\n  product of The MathWorks and the National Institute of Standards and Technology\n  (NIST) which has been released to the public domain.\" This notice and the\n  original code is available at http://math.nist.gov/javanumerics/jama/\n\n   This project includes a modified version of code published in Malvar, H.,\n  \"Signal processing with lapped transforms,\" Artech House, Inc., 1992.  The\n  current copyright holder, Henrique S. Malvar, has given his permission for the\n  release of this modified version under the Apache License 2.0.\n\n  This project includes material from the OpenFST Library v1.2.7 available at\n  http://www.openfst.org and released under the Apache License v. 
2.0.\n\n  [OpenFst COPYING file begins here]\n\n    Licensed under the Apache License, Version 2.0 (the \"License\");\n    you may not use these files except in compliance with the License.\n    You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n    Unless required by applicable law or agreed to in writing, software\n    distributed under the License is distributed on an \"AS IS\" BASIS,\n    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n    See the License for the specific language governing permissions and\n    limitations under the License.\n\n    Copyright 2005-2010 Google, Inc.\n\n  [OpenFst COPYING file ends here]\n\n\n -------------------------------------------------------------------------\n\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. 
For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. 
For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "kaldi_active_grammar/kaldi/__init__.py",
    "content": ""
  },
  {
    "path": "kaldi_active_grammar/kaldi/augment_phones_txt.py",
    "content": "#!/usr/bin/env python3\n\n\nimport argparse\nimport re\nimport os\nimport sys\n\ndef get_args():\n    parser = argparse.ArgumentParser(description=\"\"\"This script augments a phones.txt\n       file (a phone-level symbol table) by adding certain special symbols\n       relating to grammar support.  See ../add_nonterminals.sh for context.\"\"\")\n\n    parser.add_argument('input_phones_txt', type=str,\n                        help='Filename of input phones.txt file, to be augmented')\n    parser.add_argument('nonterminal_symbols_list', type=str,\n                        help='Filename of a file containing a list of nonterminal '\n                        'symbols, one per line.  E.g. #nonterm:contact_list')\n    parser.add_argument('output_phones_txt', type=str, help='Filename of output '\n                        'phones.txt file.  May be the same as input-phones-txt.')\n    args = parser.parse_args()\n    return args\n\n\n\n\ndef read_phones_txt(filename):\n    \"\"\"Reads the phones.txt file in 'filename', returns a 2-tuple (lines, highest_symbol)\n       where 'lines' is all the lines the phones.txt as a list of strings,\n       and 'highest_symbol' is the integer value of the highest-numbered symbol\n       in the symbol table.  It is an error if the phones.txt is empty or mis-formatted.\"\"\"\n\n    # The use of latin-1 encoding does not preclude reading utf-8.  latin-1\n    # encoding means \"treat words as sequences of bytes\", and it is compatible\n    # with utf-8 encoding as well as other encodings such as gbk, as long as the\n    # spaces are also spaces in ascii (which we check).  
It is basically how we\n    # emulate the behavior of python before python3.\n    whitespace = re.compile(\"[ \\t]+\")\n    with open(filename, 'r', encoding='latin-1') as f:\n        lines = [line.strip(\" \\t\\r\\n\") for line in f]\n        highest_numbered_symbol = 0\n        for line in lines:\n            s = whitespace.split(line)\n            try:\n                i = int(s[1])\n                if i > highest_numbered_symbol:\n                    highest_numbered_symbol = i\n            except Exception:\n                raise RuntimeError(\"Could not interpret line '{0}' in file '{1}'\".format(\n                line, filename))\n            if s[0] == '#nonterm_bos':\n                raise RuntimeError(\"It looks like the symbol table {0} already has nonterminals \"\n                                   \"in it.\".format(filename))\n        return lines, highest_numbered_symbol\n\n\ndef read_nonterminals(filename):\n    \"\"\"Reads the user-defined nonterminal symbols in 'filename', checks that\n       it has the expected format and has no duplicates, and returns the nonterminal\n       symbols as a list of strings, e.g.\n       ['#nonterm:contact_list', '#nonterm:phone_number', ... ]. \"\"\"\n    ans = [line.strip(\" \\t\\r\\n\") for line in open(filename, 'r', encoding='latin-1')]\n    if len(ans) == 0:\n        raise RuntimeError(\"The file {0} contains no nonterminal symbols.\".format(filename))\n    for nonterm in ans:\n        if nonterm[:9] != '#nonterm:':\n            raise RuntimeError(\"In file '{0}', expected nonterminal symbols to start with '#nonterm:', found '{1}'\"\n                               .format(filename, nonterm))\n    if len(set(ans)) != len(ans):\n        raise RuntimeError(\"Duplicate nonterminal symbols are present in file {0}\".format(filename))\n    return ans\n\ndef write_phones_txt(orig_lines, highest_numbered_symbol, nonterminals, filename):\n    \"\"\"Writes updated phones.txt to 'filename'.  
'orig_lines' is the original lines\n       in the phones.txt file as a list of strings (without the newlines);\n       highest_numbered_symbol is the highest numbered symbol in the original\n       phones.txt; nonterminals is a list of strings like '#nonterm:foo'.\"\"\"\n    with open(filename, 'w', encoding='latin-1') as f:\n        for l in orig_lines:\n            print(l, file=f)\n        cur_symbol = highest_numbered_symbol + 1\n        for n in ['#nonterm_bos', '#nonterm_begin', '#nonterm_end', '#nonterm_reenter' ] + nonterminals:\n            print(\"{0} {1}\".format(n, cur_symbol), file=f)\n            cur_symbol = cur_symbol + 1\n\n\n\ndef main():\n    args = get_args()\n    (lines, highest_symbol) = read_phones_txt(args.input_phones_txt)\n    nonterminals = read_nonterminals(args.nonterminal_symbols_list)\n    write_phones_txt(lines, highest_symbol, nonterminals, args.output_phones_txt)\n\n\nif __name__ == '__main__':\n      main()\n"
  },
  {
    "path": "kaldi_active_grammar/kaldi/augment_phones_txt_py2.py",
    "content": "#!/usr/bin/env python2\n\n\nfrom __future__ import print_function\nimport argparse\nimport re\nimport os\nimport sys\n\ndef get_args():\n    parser = argparse.ArgumentParser(description=\"\"\"This script augments a phones.txt\n       file (a phone-level symbol table) by adding certain special symbols\n       relating to grammar support.  See ../add_nonterminals.sh for context.\"\"\")\n\n    parser.add_argument('input_phones_txt', type=str,\n                        help='Filename of input phones.txt file, to be augmented')\n    parser.add_argument('nonterminal_symbols_list', type=str,\n                        help='Filename of a file containing a list of nonterminal '\n                        'symbols, one per line.  E.g. #nonterm:contact_list')\n    parser.add_argument('output_phones_txt', type=str, help='Filename of output '\n                        'phones.txt file.  May be the same as input-phones-txt.')\n    args = parser.parse_args()\n    return args\n\n\n\n\ndef read_phones_txt(filename):\n    \"\"\"Reads the phones.txt file in 'filename', returns a 2-tuple (lines, highest_symbol)\n       where 'lines' is all the lines the phones.txt as a list of strings,\n       and 'highest_symbol' is the integer value of the highest-numbered symbol\n       in the symbol table.  It is an error if the phones.txt is empty or mis-formatted.\"\"\"\n\n    # The use of latin-1 encoding does not preclude reading utf-8.  latin-1\n    # encoding means \"treat words as sequences of bytes\", and it is compatible\n    # with utf-8 encoding as well as other encodings such as gbk, as long as the\n    # spaces are also spaces in ascii (which we check).  
It is basically how we\n    # emulate the behavior of python before python3.\n    whitespace = re.compile(\"[ \\t]+\")\n    with open(filename, 'r') as f:\n        lines = [line.strip(\" \\t\\r\\n\") for line in f]\n        highest_numbered_symbol = 0\n        for line in lines:\n            s = whitespace.split(line)\n            try:\n                i = int(s[1])\n                if i > highest_numbered_symbol:\n                    highest_numbered_symbol = i\n            except Exception:\n                raise RuntimeError(\"Could not interpret line '{0}' in file '{1}'\".format(\n                line, filename))\n            if s[0] == '#nonterm_bos':\n                raise RuntimeError(\"It looks like the symbol table {0} already has nonterminals \"\n                                   \"in it.\".format(filename))\n        return lines, highest_numbered_symbol\n\n\ndef read_nonterminals(filename):\n    \"\"\"Reads the user-defined nonterminal symbols in 'filename', checks that\n       it has the expected format and has no duplicates, and returns the nonterminal\n       symbols as a list of strings, e.g.\n       ['#nonterm:contact_list', '#nonterm:phone_number', ... ]. \"\"\"\n    ans = [line.strip(\" \\t\\r\\n\") for line in open(filename, 'r')]\n    if len(ans) == 0:\n        raise RuntimeError(\"The file {0} contains no nonterminal symbols.\".format(filename))\n    for nonterm in ans:\n        if nonterm[:9] != '#nonterm:':\n            raise RuntimeError(\"In file '{0}', expected nonterminal symbols to start with '#nonterm:', found '{1}'\"\n                               .format(filename, nonterm))\n    if len(set(ans)) != len(ans):\n        raise RuntimeError(\"Duplicate nonterminal symbols are present in file {0}\".format(filename))\n    return ans\n\ndef write_phones_txt(orig_lines, highest_numbered_symbol, nonterminals, filename):\n    \"\"\"Writes updated phones.txt to 'filename'.  
'orig_lines' is the original lines\n       in the phones.txt file as a list of strings (without the newlines);\n       highest_numbered_symbol is the highest numbered symbol in the original\n       phones.txt; nonterminals is a list of strings like '#nonterm:foo'.\"\"\"\n    with open(filename, 'wb') as f:\n        for l in orig_lines:\n            print(l, file=f)\n        cur_symbol = highest_numbered_symbol + 1\n        for n in ['#nonterm_bos', '#nonterm_begin', '#nonterm_end', '#nonterm_reenter' ] + nonterminals:\n            print(\"{0} {1}\".format(n, cur_symbol), file=f)\n            cur_symbol = cur_symbol + 1\n\n\n\ndef main():\n    args = get_args()\n    (lines, highest_symbol) = read_phones_txt(args.input_phones_txt)\n    nonterminals = read_nonterminals(args.nonterminal_symbols_list)\n    write_phones_txt(lines, highest_symbol, nonterminals, args.output_phones_txt)\n\n\nif __name__ == '__main__':\n      main()\n"
  },
  {
    "path": "kaldi_active_grammar/kaldi/augment_words_txt.py",
    "content": "#!/usr/bin/env python3\n\n\nimport argparse\nimport os\nimport sys\nimport re\n\ndef get_args():\n    parser = argparse.ArgumentParser(description=\"\"\"This script augments a words.txt\n       file (a word-level symbol table) by adding certain special symbols\n       relating to grammar support.  See ../add_nonterminals.sh for context,\n       and augment_phones_txt.py.\"\"\")\n\n    parser.add_argument('input_words_txt', type=str,\n                        help='Filename of input words.txt file, to be augmented')\n    parser.add_argument('nonterminal_symbols_list', type=str,\n                        help='Filename of a file containing a list of nonterminal '\n                        'symbols, one per line.  E.g. #nonterm:contact_list')\n    parser.add_argument('output_words_txt', type=str, help='Filename of output '\n                        'words.txt file.  May be the same as input-words-txt.')\n    args = parser.parse_args()\n    return args\n\n\n\n\ndef read_words_txt(filename):\n    \"\"\"Reads the words.txt file in 'filename', returns a 2-tuple (lines, highest_symbol)\n       where 'lines' is all the lines the words.txt as a list of strings,\n       and 'highest_symbol' is the integer value of the highest-numbered symbol\n       in the symbol table.  It is an error if the words.txt is empty or mis-formatted.\"\"\"\n\n    # The use of latin-1 encoding does not preclude reading utf-8.  latin-1\n    # encoding means \"treat words as sequences of bytes\", and it is compatible\n    # with utf-8 encoding as well as other encodings such as gbk, as long as the\n    # spaces are also spaces in ascii (which we check).  
It is basically how we\n    # emulate the behavior of python before python3.\n    whitespace = re.compile(\"[ \\t]+\")\n    with open(filename, 'r', encoding='latin-1') as f:\n        lines = [line.strip(\" \\t\\r\\n\") for line in f]\n        highest_numbered_symbol = 0\n        for line in lines:\n            s = whitespace.split(line)\n            try:\n                i = int(s[1])\n                if i > highest_numbered_symbol:\n                    highest_numbered_symbol = i\n            except Exception:\n                raise RuntimeError(\"Could not interpret line '{0}' in file '{1}'\".format(\n                line, filename))\n            if s[0] in [ '#nonterm_begin', '#nonterm_end' ]:\n                raise RuntimeError(\"It looks like the symbol table {0} already has nonterminals \"\n                                   \"in it.\".format(filename))\n        return lines, highest_numbered_symbol\n\n\ndef read_nonterminals(filename):\n    \"\"\"Reads the user-defined nonterminal symbols in 'filename', checks that\n       it has the expected format and has no duplicates, and returns the nonterminal\n       symbols as a list of strings, e.g.\n       ['#nonterm:contact_list', '#nonterm:phone_number', ... ]. \"\"\"\n    ans = [line.strip(\" \\t\\r\\n\") for line in open(filename, 'r', encoding='latin-1')]\n    if len(ans) == 0:\n        raise RuntimeError(\"The file {0} contains no nonterminal symbols.\".format(filename))\n    for nonterm in ans:\n        if nonterm[:9] != '#nonterm:':\n            raise RuntimeError(\"In file '{0}', expected nonterminal symbols to start with '#nonterm:', found '{1}'\"\n                               .format(filename, nonterm))\n    if len(set(ans)) != len(ans):\n        raise RuntimeError(\"Duplicate nonterminal symbols are present in file {0}\".format(filename))\n    return ans\n\ndef write_words_txt(orig_lines, highest_numbered_symbol, nonterminals, filename):\n    \"\"\"Writes updated words.txt to 'filename'.  
'orig_lines' is the original lines\n       in the words.txt file as a list of strings (without the newlines);\n       highest_numbered_symbol is the highest numbered symbol in the original\n       words.txt; nonterminals is a list of strings like '#nonterm:foo'.\"\"\"\n    with open(filename, 'w', encoding='latin-1') as f:\n        for l in orig_lines:\n            print(l, file=f)\n        cur_symbol = highest_numbered_symbol + 1\n        for n in [ '#nonterm_begin', '#nonterm_end' ] + nonterminals:\n            print(\"{0} {1}\".format(n, cur_symbol), file=f)\n            cur_symbol = cur_symbol + 1\n\n\ndef main():\n    args = get_args()\n    (lines, highest_symbol) = read_words_txt(args.input_words_txt)\n    nonterminals = read_nonterminals(args.nonterminal_symbols_list)\n    write_words_txt(lines, highest_symbol, nonterminals, args.output_words_txt)\n\n\nif __name__ == '__main__':\n      main()\n"
  },
  {
    "path": "kaldi_active_grammar/kaldi/augment_words_txt_py2.py",
    "content": "#!/usr/bin/env python2\n\n\nfrom __future__ import print_function\nimport argparse\nimport os\nimport sys\nimport re\n\ndef get_args():\n    parser = argparse.ArgumentParser(description=\"\"\"This script augments a words.txt\n       file (a word-level symbol table) by adding certain special symbols\n       relating to grammar support.  See ../add_nonterminals.sh for context,\n       and augment_phones_txt.py.\"\"\")\n\n    parser.add_argument('input_words_txt', type=str,\n                        help='Filename of input words.txt file, to be augmented')\n    parser.add_argument('nonterminal_symbols_list', type=str,\n                        help='Filename of a file containing a list of nonterminal '\n                        'symbols, one per line.  E.g. #nonterm:contact_list')\n    parser.add_argument('output_words_txt', type=str, help='Filename of output '\n                        'words.txt file.  May be the same as input-words-txt.')\n    args = parser.parse_args()\n    return args\n\n\n\n\ndef read_words_txt(filename):\n    \"\"\"Reads the words.txt file in 'filename', returns a 2-tuple (lines, highest_symbol)\n       where 'lines' is all the lines the words.txt as a list of strings,\n       and 'highest_symbol' is the integer value of the highest-numbered symbol\n       in the symbol table.  It is an error if the words.txt is empty or mis-formatted.\"\"\"\n\n    # The use of latin-1 encoding does not preclude reading utf-8.  latin-1\n    # encoding means \"treat words as sequences of bytes\", and it is compatible\n    # with utf-8 encoding as well as other encodings such as gbk, as long as the\n    # spaces are also spaces in ascii (which we check).  
It is basically how we\n    # emulate the behavior of python before python3.\n    whitespace = re.compile(\"[ \\t]+\")\n    with open(filename, 'r') as f:\n        lines = [line.strip(\" \\t\\r\\n\") for line in f]\n        highest_numbered_symbol = 0\n        for line in lines:\n            s = whitespace.split(line)\n            try:\n                i = int(s[1])\n                if i > highest_numbered_symbol:\n                    highest_numbered_symbol = i\n            except Exception:\n                raise RuntimeError(\"Could not interpret line '{0}' in file '{1}'\".format(\n                line, filename))\n            if s[0] in [ '#nonterm_begin', '#nonterm_end' ]:\n                raise RuntimeError(\"It looks like the symbol table {0} already has nonterminals \"\n                                   \"in it.\".format(filename))\n        return lines, highest_numbered_symbol\n\n\ndef read_nonterminals(filename):\n    \"\"\"Reads the user-defined nonterminal symbols in 'filename', checks that\n       it has the expected format and has no duplicates, and returns the nonterminal\n       symbols as a list of strings, e.g.\n       ['#nonterm:contact_list', '#nonterm:phone_number', ... ]. \"\"\"\n    ans = [line.strip(\" \\t\\r\\n\") for line in open(filename, 'r')]\n    if len(ans) == 0:\n        raise RuntimeError(\"The file {0} contains no nonterminal symbols.\".format(filename))\n    for nonterm in ans:\n        if nonterm[:9] != '#nonterm:':\n            raise RuntimeError(\"In file '{0}', expected nonterminal symbols to start with '#nonterm:', found '{1}'\"\n                               .format(filename, nonterm))\n    if len(set(ans)) != len(ans):\n        raise RuntimeError(\"Duplicate nonterminal symbols are present in file {0}\".format(filename))\n    return ans\n\ndef write_words_txt(orig_lines, highest_numbered_symbol, nonterminals, filename):\n    \"\"\"Writes updated words.txt to 'filename'.  
'orig_lines' is the original lines\n       in the words.txt file as a list of strings (without the newlines);\n       highest_numbered_symbol is the highest numbered symbol in the original\n       words.txt; nonterminals is a list of strings like '#nonterm:foo'.\"\"\"\n    with open(filename, 'wb') as f:\n        for l in orig_lines:\n            print(l, file=f)\n        cur_symbol = highest_numbered_symbol + 1\n        for n in [ '#nonterm_begin', '#nonterm_end' ] + nonterminals:\n            print(\"{0} {1}\".format(n, cur_symbol), file=f)\n            cur_symbol = cur_symbol + 1\n\n\ndef main():\n    args = get_args()\n    (lines, highest_symbol) = read_words_txt(args.input_words_txt)\n    nonterminals = read_nonterminals(args.nonterminal_symbols_list)\n    write_words_txt(lines, highest_symbol, nonterminals, args.output_words_txt)\n\n\nif __name__ == '__main__':\n      main()\n"
  },
  {
    "path": "kaldi_active_grammar/kaldi/make_lexicon_fst.py",
    "content": "#!/usr/bin/env python3\n\n# Copyright   2018  Johns Hopkins University (author: Daniel Povey)\n# Apache 2.0.\n\n# see get_args() below for usage message.\nimport argparse\nimport os\nimport sys\nimport math\nimport re\n\n# The use of latin-1 encoding does not preclude reading utf-8.  latin-1\n# encoding means \"treat words as sequences of bytes\", and it is compatible\n# with utf-8 encoding as well as other encodings such as gbk, as long as the\n# spaces are also spaces in ascii (which we check).  It is basically how we\n# emulate the behavior of python before python3.\n# sys.stdout = open(1, 'w', encoding='latin-1', newline='\\n', closefd=False)\n# sys.stderr = open(2, 'w', encoding='latin-1', newline='\\n', closefd=False)\n\ndef get_args():\n    parser = argparse.ArgumentParser(description=\"\"\"This script creates the\n       text form of a lexicon FST, to be compiled by fstcompile using the\n       appropriate symbol tables (phones.txt and words.txt) .  It will mostly\n       be invoked indirectly via utils/prepare_lang.sh.  The output goes to\n       the stdout.\"\"\")\n\n    parser.add_argument('--sil-phone', dest='sil_phone', type=str,\n                        help=\"\"\"Text form of optional-silence phone, e.g. 'SIL'.  See also\n                        the --silprob option.\"\"\")\n    parser.add_argument('--sil-prob', dest='sil_prob', type=float, default=0.0,\n                        help=\"\"\"Probability of silence between words (including at the\n                        beginning and end of word sequences).  Must be in the range [0.0, 1.0].\n                        This refers to the optional silence inserted by the lexicon; see\n                        the --silphone option.\"\"\")\n    parser.add_argument('--sil-disambig', dest='sil_disambig', type=str,\n                        help=\"\"\"Disambiguation symbol to disambiguate silence, e.g. 
#5.\n                        Will only be supplied if you are creating the version of L.fst\n                        with disambiguation symbols, intended for use with cyclic G.fst.\n                        This symbol was introduced to fix a rather obscure source of\n                        nondeterminism of CLG.fst, that has to do with reordering of\n                        disambiguation symbols and phone symbols.\"\"\")\n    parser.add_argument('--left-context-phones', dest='left_context_phones', type=str,\n                        help=\"\"\"Only relevant if --nonterminals is also supplied; this relates\n                        to grammar decoding (see http://kaldi-asr.org/doc/grammar.html or\n                        src/doc/grammar.dox).  Format is a list of left-context phones,\n                        in text form, one per line.  E.g. data/lang/phones/left_context_phones.txt\"\"\")\n    parser.add_argument('--nonterminals', type=str,\n                        help=\"\"\"If supplied, --left-context-phones must also be supplied.\n                        List of user-defined nonterminal symbols such as #nonterm:contact_list,\n                        one per line.  E.g. data/local/dict/nonterminals.txt.\"\"\")\n    parser.add_argument('lexiconp', type=str,\n                        help=\"\"\"Filename of lexicon with pronunciation probabilities\n                        (normally lexiconp.txt), with lines of the form 'word prob p1 p2...',\n                        e.g. 
'a   1.0    ay'\"\"\")\n    args = parser.parse_args()\n    return args\n\n\ndef read_lexiconp(filename):\n    \"\"\"Reads the lexiconp.txt file in 'filename', with lines like 'word pron p1 p2 ...'.\n    Returns a list of tuples (word, pron_prob, pron), where 'word' is a string,\n   'pron_prob', a float, is the pronunciation probability (which must be >0.0\n    and would normally be <=1.0),  and 'pron' is a list of strings representing phones.\n    An element in the returned list might be ('hello', 1.0, ['h', 'eh', 'l', 'ow']).\n    \"\"\"\n\n    ans = []\n    found_empty_prons = False\n    found_large_pronprobs = False\n    # See the comment near the top of this file, RE why we use latin-1.\n    with open(filename, 'r', encoding='latin-1') as f:\n        whitespace = re.compile(\"[ \\t]+\")\n        for line in f:\n            a = whitespace.split(line.strip(\" \\t\\r\\n\"))\n            if len(a) < 2:\n                print(\"{0}: error: found bad line '{1}' in lexicon file {2} \".format(\n                    sys.argv[0], line.strip(\" \\t\\r\\n\"), filename), file=sys.stderr)\n                sys.exit(1)\n            word = a[0]\n            if word == \"<eps>\":\n                # This would clash with the epsilon symbol normally used in OpenFst.\n                print(\"{0}: error: found <eps> as a word in lexicon file \"\n                      \"{1}\".format(sys.argv[0], filename), file=sys.stderr)\n                sys.exit(1)\n            try:\n                pron_prob = float(a[1])\n            except ValueError:\n                print(\"{0}: error: found bad line '{1}' in lexicon file {2}, 2nd field \"\n                      \"should be pron-prob\".format(sys.argv[0], line.strip(\" \\t\\r\\n\"), filename),\n                      file=sys.stderr)\n                sys.exit(1)\n            prons = a[2:]\n            if pron_prob <= 0.0:\n                print(\"{0}: error: invalid pron-prob in line '{1}' of lexicon file {2} \".format(\n                   
 sys.argv[0], line.strip(\" \\t\\r\\n\"), filename), file=sys.stderr)\n                sys.exit(1)\n            if len(prons) == 0:\n                found_empty_prons = True\n            ans.append( (word, pron_prob, prons) )\n            if pron_prob > 1.0:\n                found_large_pronprobs = True\n    if found_empty_prons:\n        print(\"{0}: warning: found at least one word with an empty pronunciation \"\n              \"in lexicon file {1}.\".format(sys.argv[0], filename),\n              file=sys.stderr)\n    if found_large_pronprobs:\n        print(\"{0}: warning: found at least one word with pron-prob >1.0 \"\n              \"in {1}\".format(sys.argv[0], filename), file=sys.stderr)\n\n\n    if len(ans) == 0:\n        print(\"{0}: error: found no pronunciations in lexicon file {1}\".format(\n            sys.argv[0], filename), file=sys.stderr)\n        sys.exit(1)\n    return ans\n\n\ndef write_nonterminal_arcs(start_state, loop_state, next_state,\n                           nonterminals, left_context_phones):\n    \"\"\"This function relates to the grammar-decoding setup, see\n    kaldi-asr.org/doc/grammar.html.  It is called from write_fst_no_silence\n    and write_fst_silence, and writes to the stdout some extra arcs\n    in the lexicon FST that relate to nonterminal symbols.\n    See the section \"Special symbols in L.fst,\n    kaldi-asr.org/doc/grammar.html#grammar_special_l.\n       start_state: the start-state of L.fst.\n       loop_state:  the state of high out-degree in L.fst where words leave\n                  and enter.\n       next_state: the number from which this function can start allocating its\n                  own states.  the updated value of next_state will be returned.\n       nonterminals: the user-defined nonterminal symbols as a list of\n          strings, e.g. ['#nonterm:contact_list', ... ].\n       left_context_phones: a list of phones that may appear as left-context,\n          e.g. ['a', 'ah', ... 
'#nonterm_bos'].\n    \"\"\"\n    shared_state = next_state\n    next_state += 1\n    final_state = next_state\n    next_state += 1\n\n    print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n        src=start_state, dest=shared_state,\n        phone='#nonterm_begin', word='#nonterm_begin',\n        cost=0.0))\n\n    for nonterminal in nonterminals:\n        print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n            src=loop_state, dest=shared_state,\n            phone=nonterminal, word=nonterminal,\n            cost=0.0))\n    # this_cost equals log(len(left_context_phones)) but the expression below\n    # better captures the meaning.  Applying this cost to arcs keeps the FST\n    # stochastic (sum-to-one, like an HMM), so that if we do weight pushing\n    # things won't get weird.  In the grammar-FST code when we splice things\n    # together we will cancel out this cost, see the function CombineArcs().\n    this_cost = -math.log(1.0 / len(left_context_phones))\n\n    for left_context_phone in left_context_phones:\n        print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n            src=shared_state, dest=loop_state,\n            phone=left_context_phone, word='<eps>', cost=this_cost))\n    # arc from loop-state to a final-state with #nonterm_end as ilabel and olabel\n    print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n        src=loop_state, dest=final_state,\n        phone='#nonterm_end', word='#nonterm_end', cost=0.0))\n    print(\"{state}\\t{final_cost}\".format(\n        state=final_state, final_cost=0.0))\n    return next_state\n\n\n\ndef write_fst_no_silence(lexicon, nonterminals=None, left_context_phones=None):\n    \"\"\"Writes the text format of L.fst to the standard output.  
This version is for\n    when --sil-prob=0.0, meaning there is no optional silence allowed.\n\n      'lexicon' is a list of 3-tuples (word, pron-prob, prons) as returned by\n        read_lexiconp().\n     'nonterminals', which relates to grammar decoding (see kaldi-asr.org/doc/grammar.html),\n        is either None, or the user-defined nonterminal symbols as a list of\n        strings, e.g. ['#nonterm:contact_list', ... ].\n     'left_context_phones', which also relates to grammar decoding, and must be\n        supplied if 'nonterminals' is supplied is either None or a list of\n        phones that may appear as left-context, e.g. ['a', 'ah', ... '#nonterm_bos'].\n    \"\"\"\n\n    loop_state = 0\n    next_state = 1  # the next un-allocated state, will be incremented as we go.\n    for (word, pronprob, pron) in lexicon:\n        cost = -math.log(pronprob)\n        cur_state = loop_state\n        for i in range(len(pron) - 1):\n            print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n                src=cur_state,\n                dest=next_state,\n                phone=pron[i],\n                word=(word if i == 0 else '<eps>'),\n                cost=(cost if i == 0 else 0.0)))\n            cur_state = next_state\n            next_state += 1\n\n        i = len(pron) - 1  # note: i == -1 if pron is empty.\n        print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n            src=cur_state,\n            dest=loop_state,\n            phone=(pron[i] if i >= 0 else '<eps>'),\n            word=(word if i <= 0 else '<eps>'),\n            cost=(cost if i <= 0 else 0.0)))\n\n    if nonterminals is not None:\n        next_state = write_nonterminal_arcs(\n            loop_state, loop_state, next_state,\n            nonterminals, left_context_phones)\n\n    print(\"{state}\\t{final_cost}\".format(\n        state=loop_state,\n        final_cost=0.0))\n\n\ndef write_fst_with_silence(lexicon, sil_prob, sil_phone, sil_disambig,\n                        
   nonterminals=None, left_context_phones=None):\n    \"\"\"Writes the text format of L.fst to the standard output.  This version is for\n       when --sil-prob != 0.0, meaning there is optional silence\n     'lexicon' is a list of 3-tuples (word, pron-prob, prons)\n         as returned by read_lexiconp().\n     'sil_prob', which is expected to be strictly between 0.. and 1.0, is the\n         probability of silence\n     'sil_phone' is the silence phone, e.g. \"SIL\".\n     'sil_disambig' is either None, or the silence disambiguation symbol, e.g. \"#5\".\n     'nonterminals', which relates to grammar decoding (see kaldi-asr.org/doc/grammar.html),\n        is either None, or the user-defined nonterminal symbols as a list of\n        strings, e.g. ['#nonterm:contact_list', ... ].\n     'left_context_phones', which also relates to grammar decoding, and must be\n        supplied if 'nonterminals' is supplied is either None or a list of\n        phones that may appear as left-context, e.g. ['a', 'ah', ... 
'#nonterm_bos'].\n    \"\"\"\n\n    assert sil_prob > 0.0 and sil_prob < 1.0\n    sil_cost = -math.log(sil_prob)\n    no_sil_cost = -math.log(1.0 - sil_prob);\n\n    start_state = 0\n    loop_state = 1  # words enter and leave from here\n    sil_state = 2   # words terminate here when followed by silence; this state\n                    # has a silence transition to loop_state.\n    next_state = 3  # the next un-allocated state, will be incremented as we go.\n\n\n    print('{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}'.format(\n        src=start_state, dest=loop_state,\n        phone='<eps>', word='<eps>', cost=no_sil_cost))\n    print('{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}'.format(\n        src=start_state, dest=sil_state,\n        phone='<eps>', word='<eps>', cost=sil_cost))\n    if sil_disambig is None:\n        print('{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}'.format(\n            src=sil_state, dest=loop_state,\n            phone=sil_phone, word='<eps>', cost=0.0))\n    else:\n        sil_disambig_state = next_state\n        next_state += 1\n        print('{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}'.format(\n            src=sil_state, dest=sil_disambig_state,\n            phone=sil_phone, word='<eps>', cost=0.0))\n        print('{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}'.format(\n            src=sil_disambig_state, dest=loop_state,\n            phone=sil_disambig, word='<eps>', cost=0.0))\n\n\n    for (word, pronprob, pron) in lexicon:\n        pron_cost = -math.log(pronprob)\n        cur_state = loop_state\n        for i in range(len(pron) - 1):\n            print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n                src=cur_state, dest=next_state,\n                phone=pron[i],\n                word=(word if i == 0 else '<eps>'),\n                cost=(pron_cost if i == 0 else 0.0)))\n            cur_state = next_state\n            next_state += 1\n\n        i = len(pron) - 1  # note: i == -1 if pron is empty.\n        
print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n            src=cur_state,\n            dest=loop_state,\n            phone=(pron[i] if i >= 0 else '<eps>'),\n            word=(word if i <= 0 else '<eps>'),\n            cost=no_sil_cost + (pron_cost if i <= 0 else 0.0)))\n        print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n            src=cur_state,\n            dest=sil_state,\n            phone=(pron[i] if i >= 0 else '<eps>'),\n            word=(word if i <= 0 else '<eps>'),\n            cost=sil_cost + (pron_cost if i <= 0 else 0.0)))\n\n    if nonterminals is not None:\n        next_state = write_nonterminal_arcs(\n            start_state, loop_state, next_state,\n            nonterminals, left_context_phones)\n\n    print(\"{state}\\t{final_cost}\".format(\n        state=loop_state,\n        final_cost=0.0))\n\n\n\n\ndef write_words_txt(orig_lines, highest_numbered_symbol, nonterminals, filename):\n    \"\"\"Writes updated words.txt to 'filename'.  'orig_lines' is the original lines\n       in the words.txt file as a list of strings (without the newlines);\n       highest_numbered_symbol is the highest numbered symbol in the original\n       words.txt; nonterminals is a list of strings like '#nonterm:foo'.\"\"\"\n    with open(filename, 'w', encoding='latin-1') as f:\n        for l in orig_lines:\n            print(l, file=f)\n        cur_symbol = highest_numbered_symbol + 1\n        for n in [ '#nonterm_begin', '#nonterm_end' ] + nonterminals:\n            print(\"{0} {1}\".format(n, cur_symbol), file=f)\n            cur_symbol = cur_symbol + 1\n\n\ndef read_nonterminals(filename):\n    \"\"\"Reads the user-defined nonterminal symbols in 'filename', checks that\n       it has the expected format and has no duplicates, and returns the nonterminal\n       symbols as a list of strings, e.g.\n       ['#nonterm:contact_list', '#nonterm:phone_number', ... ]. 
\"\"\"\n    ans = [line.strip(\" \\t\\r\\n\") for line in open(filename, 'r', encoding='latin-1')]\n    if len(ans) == 0:\n        raise RuntimeError(\"The file {0} contains no nonterminal symbols.\".format(filename))\n    for nonterm in ans:\n        if nonterm[:9] != '#nonterm:':\n            raise RuntimeError(\"In file '{0}', expected nonterminal symbols to start with '#nonterm:', found '{1}'\"\n                               .format(filename, nonterm))\n    if len(set(ans)) != len(ans):\n        raise RuntimeError(\"Duplicate nonterminal symbols are present in file {0}\".format(filename))\n    return ans\n\ndef read_left_context_phones(filename):\n    \"\"\"Reads, checks, and returns a list of left-context phones, in text form, one\n       per line.  Returns a list of strings, e.g. ['a', 'ah', ..., '#nonterm_bos' ]\"\"\"\n    ans = [line.strip(\" \\t\\r\\n\") for line in open(filename, 'r', encoding='latin-1')]\n    if len(ans) == 0:\n        raise RuntimeError(\"The file {0} contains no left-context phones.\".format(filename))\n    whitespace = re.compile(\"[ \\t]+\")\n    for s in ans:\n        if len(whitespace.split(s)) != 1:\n            raise RuntimeError(\"The file {0} contains an invalid line '{1}'\".format(filename, s))\n\n    if len(set(ans)) != len(ans):\n        raise RuntimeError(\"Duplicate left-context phones are present in file {0}\".format(filename))\n    return ans\n\n\ndef is_token(s):\n    \"\"\"Returns true if s is a string and is space-free.\"\"\"\n    if not isinstance(s, str):\n        return False\n    whitespace = re.compile(\"[ \\t\\r\\n]+\")\n    split_str = whitespace.split(s)\n    return len(split_str) == 1 and s == split_str[0]\n\n\ndef main(args=None):\n    if args is None: args = get_args()\n\n    lexicon = read_lexiconp(args.lexiconp)\n\n    if args.nonterminals is None:\n        nonterminals, left_context_phones = None, None\n    else:\n        if args.left_context_phones is None:\n            print(\"{0}: if 
--nonterminals is specified, --left-context-phones must also \"\n                  \"be specified\".format(sys.argv[0]), file=sys.stderr)\n            sys.exit(1)\n        nonterminals = read_nonterminals(args.nonterminals)\n        left_context_phones = read_left_context_phones(args.left_context_phones)\n\n    if args.sil_prob == 0.0:\n        write_fst_no_silence(lexicon,\n                             nonterminals=nonterminals,\n                             left_context_phones=left_context_phones)\n    else:\n        # Do some checking that the options make sense.\n        if args.sil_prob < 0.0 or args.sil_prob >= 1.0:\n            print(\"{0}: invalid value specified --sil-prob={1}\".format(\n                sys.argv[0], args.sil_prob), file=sys.stderr)\n            sys.exit(1)\n\n        if not is_token(args.sil_phone):\n            print(\"{0}: you specified --sil-prob={1} but --sil-phone is set \"\n                  \"to '{2}'\".format(sys.argv[0], args.sil_prob, args.sil_phone),\n                  file=sys.stderr)\n            sys.exit(1)\n        if args.sil_disambig is not None and not is_token(args.sil_disambig):\n            print(\"{0}: invalid value --sil-disambig='{1}' was specified.\"\n                  \"\".format(sys.argv[0], args.sil_disambig), file=sys.stderr)\n            sys.exit(1)\n        write_fst_with_silence(lexicon, args.sil_prob, args.sil_phone,\n                               args.sil_disambig,\n                               nonterminals=nonterminals,\n                               left_context_phones=left_context_phones)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "kaldi_active_grammar/kaldi/make_lexicon_fst_py2.py",
    "content": "#!/usr/bin/env python3\n\n# Copyright   2018  Johns Hopkins University (author: Daniel Povey)\n# Apache 2.0.\n\n# see get_args() below for usage message.\nfrom __future__ import print_function\nimport argparse\nimport os\nimport sys\nimport math\nimport re\n\n# The use of latin-1 encoding does not preclude reading utf-8.  latin-1\n# encoding means \"treat words as sequences of bytes\", and it is compatible\n# with utf-8 encoding as well as other encodings such as gbk, as long as the\n# spaces are also spaces in ascii (which we check).  It is basically how we\n# emulate the behavior of python before python3.\n# sys.stdout = open(1, 'w', encoding='latin-1', closefd=False)\n# sys.stderr = open(2, 'w', encoding='latin-1', closefd=False)\n\ndef get_args():\n    parser = argparse.ArgumentParser(description=\"\"\"This script creates the\n       text form of a lexicon FST, to be compiled by fstcompile using the\n       appropriate symbol tables (phones.txt and words.txt) .  It will mostly\n       be invoked indirectly via utils/prepare_lang.sh.  The output goes to\n       the stdout.\"\"\")\n\n    parser.add_argument('--sil-phone', dest='sil_phone', type=str,\n                        help=\"\"\"Text form of optional-silence phone, e.g. 'SIL'.  See also\n                        the --silprob option.\"\"\")\n    parser.add_argument('--sil-prob', dest='sil_prob', type=float, default=0.0,\n                        help=\"\"\"Probability of silence between words (including at the\n                        beginning and end of word sequences).  Must be in the range [0.0, 1.0].\n                        This refers to the optional silence inserted by the lexicon; see\n                        the --silphone option.\"\"\")\n    parser.add_argument('--sil-disambig', dest='sil_disambig', type=str,\n                        help=\"\"\"Disambiguation symbol to disambiguate silence, e.g. 
#5.\n                        Will only be supplied if you are creating the version of L.fst\n                        with disambiguation symbols, intended for use with cyclic G.fst.\n                        This symbol was introduced to fix a rather obscure source of\n                        nondeterminism of CLG.fst, that has to do with reordering of\n                        disambiguation symbols and phone symbols.\"\"\")\n    parser.add_argument('--left-context-phones', dest='left_context_phones', type=str,\n                        help=\"\"\"Only relevant if --nonterminals is also supplied; this relates\n                        to grammar decoding (see http://kaldi-asr.org/doc/grammar.html or\n                        src/doc/grammar.dox).  Format is a list of left-context phones,\n                        in text form, one per line.  E.g. data/lang/phones/left_context_phones.txt\"\"\")\n    parser.add_argument('--nonterminals', type=str,\n                        help=\"\"\"If supplied, --left-context-phones must also be supplied.\n                        List of user-defined nonterminal symbols such as #nonterm:contact_list,\n                        one per line.  E.g. data/local/dict/nonterminals.txt.\"\"\")\n    parser.add_argument('lexiconp', type=str,\n                        help=\"\"\"Filename of lexicon with pronunciation probabilities\n                        (normally lexiconp.txt), with lines of the form 'word prob p1 p2...',\n                        e.g. 
'a   1.0    ay'\"\"\")\n    args = parser.parse_args()\n    return args\n\n\ndef read_lexiconp(filename):\n    \"\"\"Reads the lexiconp.txt file in 'filename', with lines like 'word pron p1 p2 ...'.\n    Returns a list of tuples (word, pron_prob, pron), where 'word' is a string,\n   'pron_prob', a float, is the pronunciation probability (which must be >0.0\n    and would normally be <=1.0),  and 'pron' is a list of strings representing phones.\n    An element in the returned list might be ('hello', 1.0, ['h', 'eh', 'l', 'ow']).\n    \"\"\"\n\n    ans = []\n    found_empty_prons = False\n    found_large_pronprobs = False\n    # See the comment near the top of this file, RE why we use latin-1.\n    with open(filename, 'r') as f:\n        whitespace = re.compile(\"[ \\t]+\")\n        for line in f:\n            a = whitespace.split(line.strip(\" \\t\\r\\n\"))\n            if len(a) < 2:\n                print(\"{0}: error: found bad line '{1}' in lexicon file {2} \".format(\n                    sys.argv[0], line.strip(\" \\t\\r\\n\"), filename), file=sys.stderr)\n                sys.exit(1)\n            word = a[0]\n            if word == \"<eps>\":\n                # This would clash with the epsilon symbol normally used in OpenFst.\n                print(\"{0}: error: found <eps> as a word in lexicon file \"\n                      \"{1}\".format(sys.argv[0], filename), file=sys.stderr)\n                sys.exit(1)\n            try:\n                pron_prob = float(a[1])\n            except ValueError:\n                print(\"{0}: error: found bad line '{1}' in lexicon file {2}, 2nd field \"\n                      \"should be pron-prob\".format(sys.argv[0], line.strip(\" \\t\\r\\n\"), filename),\n                      file=sys.stderr)\n                sys.exit(1)\n            prons = a[2:]\n            if pron_prob <= 0.0:\n                print(\"{0}: error: invalid pron-prob in line '{1}' of lexicon file {2} \".format(\n                    sys.argv[0], 
line.strip(\" \\t\\r\\n\"), filename), file=sys.stderr)\n                sys.exit(1)\n            if len(prons) == 0:\n                found_empty_prons = True\n            ans.append( (word, pron_prob, prons) )\n            if pron_prob > 1.0:\n                found_large_pronprobs = True\n    if found_empty_prons:\n        print(\"{0}: warning: found at least one word with an empty pronunciation \"\n              \"in lexicon file {1}.\".format(sys.argv[0], filename),\n              file=sys.stderr)\n    if found_large_pronprobs:\n        print(\"{0}: warning: found at least one word with pron-prob >1.0 \"\n              \"in {1}\".format(sys.argv[0], filename), file=sys.stderr)\n\n\n    if len(ans) == 0:\n        print(\"{0}: error: found no pronunciations in lexicon file {1}\".format(\n            sys.argv[0], filename), file=sys.stderr)\n        sys.exit(1)\n    return ans\n\n\ndef write_nonterminal_arcs(start_state, loop_state, next_state,\n                           nonterminals, left_context_phones):\n    \"\"\"This function relates to the grammar-decoding setup, see\n    kaldi-asr.org/doc/grammar.html.  It is called from write_fst_no_silence\n    and write_fst_silence, and writes to the stdout some extra arcs\n    in the lexicon FST that relate to nonterminal symbols.\n    See the section \"Special symbols in L.fst,\n    kaldi-asr.org/doc/grammar.html#grammar_special_l.\n       start_state: the start-state of L.fst.\n       loop_state:  the state of high out-degree in L.fst where words leave\n                  and enter.\n       next_state: the number from which this function can start allocating its\n                  own states.  the updated value of next_state will be returned.\n       nonterminals: the user-defined nonterminal symbols as a list of\n          strings, e.g. ['#nonterm:contact_list', ... ].\n       left_context_phones: a list of phones that may appear as left-context,\n          e.g. ['a', 'ah', ... 
'#nonterm_bos'].\n    \"\"\"\n    shared_state = next_state\n    next_state += 1\n    final_state = next_state\n    next_state += 1\n\n    print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n        src=start_state, dest=shared_state,\n        phone='#nonterm_begin', word='#nonterm_begin',\n        cost=0.0))\n\n    for nonterminal in nonterminals:\n        print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n            src=loop_state, dest=shared_state,\n            phone=nonterminal, word=nonterminal,\n            cost=0.0))\n    # this_cost equals log(len(left_context_phones)) but the expression below\n    # better captures the meaning.  Applying this cost to arcs keeps the FST\n    # stochastic (sum-to-one, like an HMM), so that if we do weight pushing\n    # things won't get weird.  In the grammar-FST code when we splice things\n    # together we will cancel out this cost, see the function CombineArcs().\n    this_cost = -math.log(1.0 / len(left_context_phones))\n\n    for left_context_phone in left_context_phones:\n        print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n            src=shared_state, dest=loop_state,\n            phone=left_context_phone, word='<eps>', cost=this_cost))\n    # arc from loop-state to a final-state with #nonterm_end as ilabel and olabel\n    print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n        src=loop_state, dest=final_state,\n        phone='#nonterm_end', word='#nonterm_end', cost=0.0))\n    print(\"{state}\\t{final_cost}\".format(\n        state=final_state, final_cost=0.0))\n    return next_state\n\n\n\ndef write_fst_no_silence(lexicon, nonterminals=None, left_context_phones=None):\n    \"\"\"Writes the text format of L.fst to the standard output.  
This version is for\n    when --sil-prob=0.0, meaning there is no optional silence allowed.\n\n      'lexicon' is a list of 3-tuples (word, pron-prob, prons) as returned by\n        read_lexiconp().\n     'nonterminals', which relates to grammar decoding (see kaldi-asr.org/doc/grammar.html),\n        is either None, or the user-defined nonterminal symbols as a list of\n        strings, e.g. ['#nonterm:contact_list', ... ].\n     'left_context_phones', which also relates to grammar decoding, and must be\n        supplied if 'nonterminals' is supplied is either None or a list of\n        phones that may appear as left-context, e.g. ['a', 'ah', ... '#nonterm_bos'].\n    \"\"\"\n\n    loop_state = 0\n    next_state = 1  # the next un-allocated state, will be incremented as we go.\n    for (word, pronprob, pron) in lexicon:\n        cost = -math.log(pronprob)\n        cur_state = loop_state\n        for i in range(len(pron) - 1):\n            print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n                src=cur_state,\n                dest=next_state,\n                phone=pron[i],\n                word=(word if i == 0 else '<eps>'),\n                cost=(cost if i == 0 else 0.0)))\n            cur_state = next_state\n            next_state += 1\n\n        i = len(pron) - 1  # note: i == -1 if pron is empty.\n        print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n            src=cur_state,\n            dest=loop_state,\n            phone=(pron[i] if i >= 0 else '<eps>'),\n            word=(word if i <= 0 else '<eps>'),\n            cost=(cost if i <= 0 else 0.0)))\n\n    if nonterminals is not None:\n        next_state = write_nonterminal_arcs(\n            loop_state, loop_state, next_state,\n            nonterminals, left_context_phones)\n\n    print(\"{state}\\t{final_cost}\".format(\n        state=loop_state,\n        final_cost=0.0))\n\n\ndef write_fst_with_silence(lexicon, sil_prob, sil_phone, sil_disambig,\n                       
    nonterminals=None, left_context_phones=None):\n    \"\"\"Writes the text format of L.fst to the standard output.  This version is for\n       when --sil-prob != 0.0, meaning there is optional silence\n     'lexicon' is a list of 3-tuples (word, pron-prob, prons)\n         as returned by read_lexiconp().\n     'sil_prob', which is expected to be strictly between 0.. and 1.0, is the\n         probability of silence\n     'sil_phone' is the silence phone, e.g. \"SIL\".\n     'sil_disambig' is either None, or the silence disambiguation symbol, e.g. \"#5\".\n     'nonterminals', which relates to grammar decoding (see kaldi-asr.org/doc/grammar.html),\n        is either None, or the user-defined nonterminal symbols as a list of\n        strings, e.g. ['#nonterm:contact_list', ... ].\n     'left_context_phones', which also relates to grammar decoding, and must be\n        supplied if 'nonterminals' is supplied is either None or a list of\n        phones that may appear as left-context, e.g. ['a', 'ah', ... 
'#nonterm_bos'].\n    \"\"\"\n\n    assert sil_prob > 0.0 and sil_prob < 1.0\n    sil_cost = -math.log(sil_prob)\n    no_sil_cost = -math.log(1.0 - sil_prob)\n\n    start_state = 0\n    loop_state = 1  # words enter and leave from here\n    sil_state = 2   # words terminate here when followed by silence; this state\n                    # has a silence transition to loop_state.\n    next_state = 3  # the next un-allocated state, will be incremented as we go.\n\n\n    print('{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}'.format(\n        src=start_state, dest=loop_state,\n        phone='<eps>', word='<eps>', cost=no_sil_cost))\n    print('{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}'.format(\n        src=start_state, dest=sil_state,\n        phone='<eps>', word='<eps>', cost=sil_cost))\n    if sil_disambig is None:\n        print('{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}'.format(\n            src=sil_state, dest=loop_state,\n            phone=sil_phone, word='<eps>', cost=0.0))\n    else:\n        sil_disambig_state = next_state\n        next_state += 1\n        print('{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}'.format(\n            src=sil_state, dest=sil_disambig_state,\n            phone=sil_phone, word='<eps>', cost=0.0))\n        print('{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}'.format(\n            src=sil_disambig_state, dest=loop_state,\n            phone=sil_disambig, word='<eps>', cost=0.0))\n\n\n    for (word, pronprob, pron) in lexicon:\n        pron_cost = -math.log(pronprob)\n        cur_state = loop_state\n        for i in range(len(pron) - 1):\n            print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n                src=cur_state, dest=next_state,\n                phone=pron[i],\n                word=(word if i == 0 else '<eps>'),\n                cost=(pron_cost if i == 0 else 0.0)))\n            cur_state = next_state\n            next_state += 1\n\n        i = len(pron) - 1  # note: i == -1 if pron is empty.\n        
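# The final phone of each pronunciation is emitted twice: one arc back to\n        # loop_state adding no_sil_cost (no following silence), and one arc to\n        # sil_state adding sil_cost (optional silence follows):\n        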
print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n            src=cur_state,\n            dest=loop_state,\n            phone=(pron[i] if i >= 0 else '<eps>'),\n            word=(word if i <= 0 else '<eps>'),\n            cost=no_sil_cost + (pron_cost if i <= 0 else 0.0)))\n        print(\"{src}\\t{dest}\\t{phone}\\t{word}\\t{cost}\".format(\n            src=cur_state,\n            dest=sil_state,\n            phone=(pron[i] if i >= 0 else '<eps>'),\n            word=(word if i <= 0 else '<eps>'),\n            cost=sil_cost + (pron_cost if i <= 0 else 0.0)))\n\n    if nonterminals is not None:\n        next_state = write_nonterminal_arcs(\n            start_state, loop_state, next_state,\n            nonterminals, left_context_phones)\n\n    print(\"{state}\\t{final_cost}\".format(\n        state=loop_state,\n        final_cost=0.0))\n\n\n\n\ndef write_words_txt(orig_lines, highest_numbered_symbol, nonterminals, filename):\n    \"\"\"Writes updated words.txt to 'filename'.  'orig_lines' is the original lines\n       in the words.txt file as a list of strings (without the newlines);\n       highest_numbered_symbol is the highest numbered symbol in the original\n       words.txt; nonterminals is a list of strings like '#nonterm:foo'.\"\"\"\n    with open(filename, 'w') as f:\n        for l in orig_lines:\n            print(l, file=f)\n        cur_symbol = highest_numbered_symbol + 1\n        for n in [ '#nonterm_begin', '#nonterm_end' ] + nonterminals:\n            print(\"{0} {1}\".format(n, cur_symbol), file=f)\n            cur_symbol = cur_symbol + 1\n\n\ndef read_nonterminals(filename):\n    \"\"\"Reads the user-defined nonterminal symbols in 'filename', checks that\n       it has the expected format and has no duplicates, and returns the nonterminal\n       symbols as a list of strings, e.g.\n       ['#nonterm:contact_list', '#nonterm:phone_number', ... ]. 
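\n\n       Example 'nonterminals.txt' contents (one symbol per line), which would\n       yield the two-element list above:\n           #nonterm:contact_list\n           #nonterm:phone_number\n       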
\"\"\"\n    ans = [line.strip(\" \\t\\r\\n\") for line in open(filename, 'r')]\n    if len(ans) == 0:\n        raise RuntimeError(\"The file {0} contains no nonterminal symbols.\".format(filename))\n    for nonterm in ans:\n        if nonterm[:9] != '#nonterm:':\n            raise RuntimeError(\"In file '{0}', expected nonterminal symbols to start with '#nonterm:', found '{1}'\"\n                               .format(filename, nonterm))\n    if len(set(ans)) != len(ans):\n        raise RuntimeError(\"Duplicate nonterminal symbols are present in file {0}\".format(filename))\n    return ans\n\ndef read_left_context_phones(filename):\n    \"\"\"Reads, checks, and returns a list of left-context phones, in text form, one\n       per line.  Returns a list of strings, e.g. ['a', 'ah', ..., '#nonterm_bos' ]\"\"\"\n    ans = [line.strip(\" \\t\\r\\n\") for line in open(filename, 'r')]\n    if len(ans) == 0:\n        raise RuntimeError(\"The file {0} contains no left-context phones.\".format(filename))\n    whitespace = re.compile(\"[ \\t]+\")\n    for s in ans:\n        if len(whitespace.split(s)) != 1:\n            raise RuntimeError(\"The file {0} contains an invalid line '{1}'\".format(filename, s))\n\n    if len(set(ans)) != len(ans):\n        raise RuntimeError(\"Duplicate left-context phones are present in file {0}\".format(filename))\n    return ans\n\n\ndef is_token(s):\n    \"\"\"Returns true if s is a string and is space-free.\"\"\"\n    if not isinstance(s, str):\n        return False\n    whitespace = re.compile(\"[ \\t\\r\\n]+\")\n    split_str = whitespace.split(s)\n    return len(split_str) == 1 and s == split_str[0]\n\n\ndef main(args=None):\n    if args is None: args = get_args()\n\n    lexicon = read_lexiconp(args.lexiconp)\n\n    if args.nonterminals is None:\n        nonterminals, left_context_phones = None, None\n    else:\n        if args.left_context_phones is None:\n            print(\"{0}: if --nonterminals is specified, --left-context-phones 
must also \"\n                  \"be specified\".format(sys.argv[0]), file=sys.stderr)\n            sys.exit(1)\n        nonterminals = read_nonterminals(args.nonterminals)\n        left_context_phones = read_left_context_phones(args.left_context_phones)\n\n    if args.sil_prob == 0.0:\n        write_fst_no_silence(lexicon,\n                             nonterminals=nonterminals,\n                             left_context_phones=left_context_phones)\n    else:\n        # Do some checking that the options make sense.\n        if args.sil_prob < 0.0 or args.sil_prob >= 1.0:\n            print(\"{0}: invalid value specified --sil-prob={1}\".format(\n                sys.argv[0], args.sil_prob), file=sys.stderr)\n            sys.exit(1)\n\n        if not is_token(args.sil_phone):\n            print(\"{0}: you specified --sil-prob={1} but --sil-phone is set \"\n                  \"to '{2}'\".format(sys.argv[0], args.sil_prob, args.sil_phone),\n                  file=sys.stderr)\n            sys.exit(1)\n        if args.sil_disambig is not None and not is_token(args.sil_disambig):\n            print(\"{0}: invalid value --sil-disambig='{1}' was specified.\"\n                  \"\".format(sys.argv[0], args.sil_disambig), file=sys.stderr)\n            sys.exit(1)\n        write_fst_with_silence(lexicon, args.sil_prob, args.sil_phone,\n                               args.sil_disambig,\n                               nonterminals=nonterminals,\n                               left_context_phones=left_context_phones)\n\n\n\n#    (lines, highest_symbol) = read_words_txt(args.input_words_txt)\n#    nonterminals = read_nonterminals(args.nonterminal_symbols_list)\n#    write_words_txt(lines, highest_symbol, nonterminals, args.output_words_txt)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "kaldi_active_grammar/model.py",
    "content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\nimport os, re, shutil\nfrom io import open\n\nfrom six import PY2, text_type\n\nfrom . import _log, KaldiError, REQUIRED_MODEL_VERSION\nfrom .wfst import SymbolTable\nfrom .wrapper import KaldiModelBuildUtils\nfrom .utils import ExternalProcess, find_file, load_symbol_table, show_donation_message, symbol_table_lookup\nimport kaldi_active_grammar.defaults as defaults\nimport kaldi_active_grammar.utils as utils\n\n_log = _log.getChild('model')\n\n\n########################################################################################################################\n\nclass Lexicon(object):\n\n    def __init__(self, phones):\n        \"\"\" phones: list of strings, each being a phone \"\"\"\n        self.phone_set = set(self.make_position_independent(phones))\n\n    # XSAMPA phones are 1-letter each, so 2-letter below represent 2 separate phones.\n    CMU_to_XSAMPA_dict = {\n        \"'\"   : \"'\",\n        'AA'  : 'A',\n        'AE'  : '{',\n        'AH'  : 'V',  ##\n        'AO'  : 'O',  ##\n        'AW'  : 'aU',\n        'AY'  : 'aI',\n        'B'   : 'b',\n        'CH'  : 'tS',\n        'D'   : 'd',\n        'DH'  : 'D',\n        'EH'  : 'E',\n        'ER'  : '3',\n        'EY'  : 'eI',\n        'F'   : 'f',\n        'G'   : 'g',\n        'HH'  : 'h',\n        'IH'  : 'I',\n        'IY'  : 'i',\n        'JH'  : 'dZ',\n        'K'   : 'k',\n        'L'   : 'l',\n        'M'   : 'm',\n        'NG'  : 'N',\n        'N'   : 'n',\n        'OW'  : 'oU',\n        'OY'  : 'OI', ##\n        'P'   : 'p',\n        'R'   : 'r',\n        'SH'  : 'S',\n        'S'   : 's',\n        'TH'  : 'T',\n        'T'   : 't',\n        'UH'  : 'U',\n        'UW'  : 'u',\n        'V'   : 'v',\n        'W'   : 'w',\n        'Y'   : 'j',\n        'ZH'  : 'Z',\n        'Z'   : 'z',\n    }\n    CMU_to_XSAMPA_dict.update({'AX': 
'@'})\n    del CMU_to_XSAMPA_dict[\"'\"]\n    XSAMPA_to_CMU_dict = { v: k for k,v in CMU_to_XSAMPA_dict.items() }  # FIXME: handle double-entries\n\n    @classmethod\n    def phones_cmu_to_xsampa_generic(cls, phones, lexicon_phones=None):\n        new_phones = []\n        for phone in phones:\n            stress = False\n            if phone.endswith('1'):\n                phone = phone[:-1]\n                stress = True\n            elif phone.endswith(('0', '2')):\n                phone = phone[:-1]\n            phone = cls.CMU_to_XSAMPA_dict[phone]\n            assert 1 <= len(phone) <= 2\n\n            new_phone = (\"'\" if stress else '') + phone\n            if (lexicon_phones is not None) and (new_phone in lexicon_phones):\n                # Add entire possibly-2-letter phone\n                new_phones.append(new_phone)\n            else:\n                # Add each individual 1-letter phone\n                for match in re.finditer(r\"('?).\", new_phone):\n                    new_phones.append(match.group(0))\n\n        return new_phones\n\n    def phones_cmu_to_xsampa(self, phones):\n        return self.phones_cmu_to_xsampa_generic(phones, self.phone_set)\n\n    @classmethod\n    def make_position_dependent(cls, phones):\n        if len(phones) == 0: return []\n        elif len(phones) == 1: return [phones[0]+'_S']\n        else: return [phones[0]+'_B'] + [phone+'_I' for phone in phones[1:-1]] + [phones[-1]+'_E']\n\n    @classmethod\n    def make_position_independent(cls, phones):\n        return [re.sub(r'_[SBIE]', '', phone) for phone in phones]\n\n    @classmethod\n    def generate_pronunciations_cmu_online(cls, word):\n        try:\n            import requests\n            files = {'wordfile': ('wordfile', word)}\n            req = requests.post('http://www.speech.cs.cmu.edu/cgi-bin/tools/logios/lextool.pl', files=files)\n            req.raise_for_status()\n            # FIXME: handle network failures\n            match = re.search(r'<!-- DICT (.*)  
-->', req.text)\n            if match:\n                url = match.group(1)\n                req = requests.get(url)\n                req.raise_for_status()\n                entries = req.text.strip().split('\\n')\n                pronunciations = []\n                for entry in entries:\n                    tokens = entry.strip().split()\n                    assert re.match(word + r'(\\(\\d\\))?', tokens[0], re.I)  # 'SEMI-COLON' or 'SEMI-COLON(2)'\n                    phones = tokens[1:]\n                    _log.debug(\"generated pronunciation with cloud-cmudict for %r: CMU phones are %r\" % (word, phones))\n                    pronunciations.append(phones)\n                return pronunciations\n            raise KaldiError(\"received bad response from www.speech.cs.cmu.edu: %r\" % req.text)\n        except Exception as e:\n            _log.exception(\"generate_pronunciations exception accessing www.speech.cs.cmu.edu\")\n            raise e\n\n    g2p_en = None\n\n    @classmethod\n    def attempt_load_g2p_en(cls, model_dir=None):\n        try:\n            if model_dir:\n                import nltk\n                nltk.data.path.insert(0, os.path.abspath(os.path.join(model_dir, 'g2p')))\n            # g2p_en>=2.1.0\n            import g2p_en\n            cls.g2p_en = g2p_en.G2p()\n            assert all(re.sub(r'[012]$', '', phone) in cls.CMU_to_XSAMPA_dict for phone in cls.g2p_en.phonemes if not phone.startswith('<'))\n        except Exception:  # including ImportError\n            cls.g2p_en = False  # Don't try anymore.\n            _log.debug(\"failed to load g2p_en\")\n\n    @classmethod\n    def generate_pronunciations_g2p_en(cls, word):\n        try:\n            phones = cls.g2p_en(word)\n            _log.debug(\"generated pronunciation with g2p_en for %r: %r\" % (word, phones))\n            return [phones]\n        except Exception as e:\n            _log.exception(\"generate_pronunciations exception using g2p_en\")\n            raise e\n\n    
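# Example: CMU/ARPAbet phones map to XSAMPA via CMU_to_XSAMPA_dict; without a\n    # lexicon phone set, multi-letter XSAMPA phones are split into single-letter\n    # phones:\n    #   Lexicon.phones_cmu_to_xsampa_generic(['HH', 'AH0', 'L'])  # -> ['h', 'V', 'l']\n\n    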
@classmethod\n    def generate_pronunciations(cls, word, model_dir=None, allow_online_pronunciations=False):\n        \"\"\"returns CMU/arpabet phones\"\"\"\n        if cls.g2p_en is None:\n            cls.attempt_load_g2p_en(model_dir)\n        if cls.g2p_en:\n            return cls.generate_pronunciations_g2p_en(word)\n        if allow_online_pronunciations:\n            return cls.generate_pronunciations_cmu_online(word)\n        raise KaldiError(\"cannot generate word pronunciation: no generators available\")\n\n\n########################################################################################################################\n\nclass Model(object):\n    def __init__(self, model_dir=None, tmp_dir=None, tmp_dir_needed=False):\n        show_donation_message()\n\n        self.model_dir = os.path.join(model_dir or defaults.DEFAULT_MODEL_DIR, '')\n        self.tmp_dir = None\n        if tmp_dir_needed:\n            self.tmp_dir = os.path.join(tmp_dir or os.path.join(self.model_dir, 'cache.tmp'), '')\n        self.exec_dir = os.path.join(utils.exec_dir, '')\n\n        if not os.path.isdir(self.model_dir):\n            raise KaldiError(\"cannot find model_dir: %r\" % self.model_dir)\n        if self.tmp_dir:\n            if os.path.isfile(self.tmp_dir): raise KaldiError(\"please specify an available tmp_dir, or remove %r\" % self.tmp_dir)\n            if not os.path.exists(self.tmp_dir):\n                _log.warning(\"%s: creating tmp dir: %r\" % (self, self.tmp_dir))\n                os.mkdir(self.tmp_dir)\n                utils.touch_file(os.path.join(self.tmp_dir, \"FILES_ARE_SAFE_TO_DELETE\"))\n        if not os.path.isdir(self.exec_dir):\n            raise KaldiError(\"cannot find exec_dir: %r\" % self.exec_dir,\n                \"are you sure you installed kaldi-active-grammar correctly?\")\n\n        version_file = os.path.join(self.model_dir, 'KAG_VERSION')\n        if os.path.isfile(version_file):\n            with open(version_file, 'r', 
encoding='utf-8') as f:\n                model_version = f.read().strip()\n                if model_version != REQUIRED_MODEL_VERSION:\n                    raise KaldiError(\"invalid model_dir version! please download a compatible model\")\n        else:\n            _log.warning(\"model_dir has no version information; errors below may indicate an incompatible model\")\n\n        self.create_missing_files()\n        self.check_user_lexicon()\n\n        self.files_dict = {\n            'exec_dir': self.exec_dir,\n            'model_dir': self.model_dir,\n            'tmp_dir': self.tmp_dir,\n            'words.txt': find_file(self.model_dir, 'words.txt', default=True),\n            'words.base.txt': find_file(self.model_dir, 'words.base.txt', default=True),\n            'phones.txt': find_file(self.model_dir, 'phones.txt', default=True),\n            'align_lexicon.int': find_file(self.model_dir, 'align_lexicon.int', default=True),\n            'align_lexicon.base.int': find_file(self.model_dir, 'align_lexicon.base.int', default=True),\n            'disambig.int': find_file(self.model_dir, 'disambig.int', default=True),\n            'L_disambig.fst': find_file(self.model_dir, 'L_disambig.fst', default=True),\n            'tree': find_file(self.model_dir, 'tree', default=True),\n            'final.mdl': find_file(self.model_dir, 'final.mdl', default=True),\n            # 'g.irelabel': find_file(self.model_dir, 'g.irelabel', default=True),  # otf\n            'user_lexicon.txt': find_file(self.model_dir, 'user_lexicon.txt', default=True),\n            'left_context_phones.txt': find_file(self.model_dir, 'left_context_phones.txt', default=True),\n            'nonterminals.txt': find_file(self.model_dir, 'nonterminals.txt', default=True),\n            'wdisambig_phones.int': find_file(self.model_dir, 'wdisambig_phones.int', default=True),\n            'wdisambig_words.int': find_file(self.model_dir, 'wdisambig_words.int', default=True),\n            
'lexiconp_disambig.txt': find_file(self.model_dir, 'lexiconp_disambig.txt', default=True),\n            'lexiconp_disambig.base.txt': find_file(self.model_dir, 'lexiconp_disambig.base.txt', default=True),\n            'words.relabeled.txt': find_file(self.model_dir, 'words.relabeled.txt', default=True),\n        }\n        self.files_dict.update({ k.replace('.', '_'): v for (k, v) in self.files_dict.items() })  # For named placeholder access in str.format()\n        self.fst_cache = utils.FSTFileCache(os.path.join(self.model_dir, defaults.FILE_CACHE_FILENAME), dependencies_dict=self.files_dict, tmp_dir=self.tmp_dir)\n\n        self.phone_to_int_dict = { phone: i for phone, i in load_symbol_table(self.files_dict['phones.txt']) }\n        self.lexicon = Lexicon(self.phone_to_int_dict.keys())\n        self.nonterm_phones_offset = self.phone_to_int_dict.get('#nonterm_bos')\n        if self.nonterm_phones_offset is None: raise KaldiError(\"missing nonterms in 'phones.txt'\")\n        self.nonterm_words_offset = symbol_table_lookup(self.files_dict['words.base.txt'], '#nonterm_begin')\n        if self.nonterm_words_offset is None: raise KaldiError(\"missing nonterms in 'words.base.txt'\")\n\n        # Update files if needed, before loading words\n        necessary_files = ['user_lexicon.txt', 'words.txt',]\n        non_lazy_files = ['align_lexicon.int', 'lexiconp_disambig.txt', 'L_disambig.fst',]\n        files_are_not_current = lambda files: any(not self.fst_cache.file_is_current(self.files_dict[file]) for file in files)\n        if self.fst_cache.cache_is_new or files_are_not_current(necessary_files + non_lazy_files):\n            self.generate_lexicon_files()\n\n        self.words_table = SymbolTable()\n        self.load_words()\n\n    def load_words(self, words_file=None):\n        if words_file is None: words_file = self.files_dict['words.txt']\n        _log.debug(\"loading words from %r\", words_file)\n        invalid_words = \"<eps> !SIL <UNK> #0 <s> 
</s>\".lower().split()\n        self.words_table.load_text_file(words_file)\n        self.longest_word = max(self.words_table.word_to_id_map.keys(), key=len)\n        return self.words_table\n\n    def read_user_lexicon(self, filename=None):\n        if filename is None: filename = self.files_dict['user_lexicon.txt']\n        with open(filename, 'r', encoding='utf-8') as file:\n            entries = [line.split() for line in file if line.split()]\n            for tokens in entries:\n                # word lowercase\n                tokens[0] = tokens[0].lower()\n        return entries\n\n    def write_user_lexicon(self, entries, filename=None):\n        if filename is None: filename = self.files_dict['user_lexicon.txt']\n        lines = [' '.join(tokens) + '\\n' for tokens in entries]\n        with open(filename, 'w', encoding='utf-8', newline='\\n') as file:\n            file.writelines(lines)\n\n    def add_word(self, word, phones=None, lazy_compilation=False, allow_online_pronunciations=False):\n        word = word.strip().lower()\n\n        if phones is None:\n            # Not given pronunciation(s), so generate pronunciation(s), then call ourselves recursively for each individual pronunciation\n            pronunciations = Lexicon.generate_pronunciations(word, model_dir=self.model_dir, allow_online_pronunciations=allow_online_pronunciations)\n            pronunciations = sum([\n                self.add_word(word, phones, lazy_compilation=True)\n                for phones in pronunciations], [])\n            if not lazy_compilation:\n                self.generate_lexicon_files()\n            return pronunciations\n            # FIXME: refactor this function\n\n        # Now just handle single-pronunciation case...\n        phones = self.lexicon.phones_cmu_to_xsampa(phones)\n        new_entry = [word] + phones\n\n        entries = self.read_user_lexicon()\n        if any(new_entry == entry for entry in entries):\n            _log.warning(\"word & pronunciation 
already in user_lexicon\")\n            return [phones]\n        for tokens in entries:\n            if word == tokens[0]:\n                _log.warning(\"word (with different pronunciation) already in user_lexicon: %s\" % tokens[1:])\n\n        entries.append(new_entry)\n        self.write_user_lexicon(entries)\n\n        if lazy_compilation:\n            self.words_table.add_word(word)\n        else:\n            self.generate_lexicon_files()\n\n        return [phones]\n\n    def create_missing_files(self):\n        utils.touch_file(os.path.join(self.model_dir, 'user_lexicon.txt'))\n        def check_file(filename, src_filename):\n            # Create missing file from its base file\n            if not find_file(self.model_dir, filename):\n                src = find_file(self.model_dir, src_filename)\n                dst = src.replace(src_filename, filename)\n                shutil.copyfile(src, dst)\n        check_file('words.txt', 'words.base.txt')\n        check_file('align_lexicon.int', 'align_lexicon.base.int')\n        check_file('lexiconp_disambig.txt', 'lexiconp_disambig.base.txt')\n\n    def check_user_lexicon(self):\n        \"\"\" Checks for a user lexicon file in the CWD, and if found and different than the model's user lexicon, extends the model's. 
\"\"\"\n        cwd_user_lexicon_filename = os.path.abspath('user_lexicon.txt')\n        model_user_lexicon_filename = os.path.abspath(os.path.join(self.model_dir, 'user_lexicon.txt'))\n        if (cwd_user_lexicon_filename != model_user_lexicon_filename) and os.path.isfile(cwd_user_lexicon_filename):\n            cwd_user_lexicon_entries = [tuple(tokens) for tokens in self.read_user_lexicon(filename=cwd_user_lexicon_filename)]\n            model_user_lexicon_entries = [tuple(tokens) for tokens in self.read_user_lexicon(filename=model_user_lexicon_filename)]\n            model_user_lexicon_entries_set = set(model_user_lexicon_entries)\n            new_user_lexicon_entries = [tokens for tokens in cwd_user_lexicon_entries if tokens not in model_user_lexicon_entries_set]\n            if new_user_lexicon_entries:\n                _log.info(\"adding new user lexicon entries from %r\", cwd_user_lexicon_filename)\n                entries = model_user_lexicon_entries + new_user_lexicon_entries\n                self.write_user_lexicon(entries, filename=model_user_lexicon_filename)\n\n    def generate_lexicon_files(self):\n        \"\"\" Generates: words.txt, align_lexicon.int, lexiconp_disambig.txt, L_disambig.fst \"\"\"\n        _log.info(\"generating lexicon files\")\n        self.fst_cache.invalidate()\n\n        # FIXME: refactor this to use words_table/SymbolTable\n        max_word_id = max(word_id for word, word_id in load_symbol_table(base_filepath(self.files_dict['words.txt'])) if word_id < self.nonterm_words_offset)\n\n        user_lexicon_entries = []\n        with open(self.files_dict['user_lexicon.txt'], 'r', encoding='utf-8') as user_lexicon:\n            for line in user_lexicon:\n                tokens = line.split()\n                if len(tokens) >= 2:\n                    word, phones = tokens[0], tokens[1:]\n                    phones = Lexicon.make_position_dependent(phones)\n                    unknown_phones = [phone for phone in phones if phone not in 
self.phone_to_int_dict]\n                    if unknown_phones:\n                        raise KaldiError(\"word %r has unknown phone(s) %r\" % (word, unknown_phones))\n                        # _log.critical(\"word %r has unknown phone(s) %r so using junk phones!!!\", word, unknown_phones)\n                        # phones = [phone if phone not in self.phone_to_int_dict else self.noise_phone for phone in phones]\n                        # continue\n                    max_word_id += 1\n                    user_lexicon_entries.append((word, max_word_id, phones))\n\n        def generate_file_from_base_with_user_lexicon(filename, write_func):\n            filepath = self.files_dict[filename]\n            with open(base_filepath(filepath), 'r', encoding='utf-8') as file:\n                base_data = file.read()\n            with open(filepath, 'w', encoding='utf-8', newline='\\n') as file:\n                file.write(base_data)\n                for word, word_id, phones in user_lexicon_entries:\n                    file.write(write_func(word, word_id, phones) + '\\n')\n\n        generate_file_from_base_with_user_lexicon('words.txt', lambda word, word_id, phones:\n            str_space_join([word, word_id]))\n        generate_file_from_base_with_user_lexicon('align_lexicon.int', lambda word, word_id, phones:\n            str_space_join([word_id, word_id] + [self.phone_to_int_dict[phone] for phone in phones]))\n        generate_file_from_base_with_user_lexicon('lexiconp_disambig.txt', lambda word, word_id, phones:\n            '%s\\t1.0 %s' % (word, ' '.join(phones)))\n\n        if True:\n            lexicon_fst_text = KaldiModelBuildUtils.make_lexicon_fst(\n                left_context_phones=self.files_dict['left_context_phones_txt'],\n                nonterminals=self.files_dict['nonterminals_txt'],\n                sil_prob=0.5,\n                sil_phone='SIL',\n                sil_disambig='#14',  # FIXME: lookup correct value\n                
lexiconp=self.files_dict['lexiconp_disambig_txt'],\n            )\n            KaldiModelBuildUtils.build_L_disambig(\n                lexicon_fst_text.encode(encoding='latin-1'),\n                phones_file=self.files_dict['phones_txt'], words_file=self.files_dict['words_txt'],\n                wdisambig_phones_file=self.files_dict['wdisambig_phones_int'], wdisambig_words_file=self.files_dict['wdisambig_words_int'],\n                fst_out_file=self.files_dict['L_disambig_fst'])\n\n        else:\n            format = ExternalProcess.get_list_formatter(self.files_dict)\n            command = ExternalProcess.make_lexicon_fst(*format(\n                '--left-context-phones={left_context_phones_txt}',\n                '--nonterminals={nonterminals_txt}',\n                '--sil-prob=0.5',\n                '--sil-phone=SIL',\n                '--sil-disambig=#14',  # FIXME: lookup correct value\n                '{lexiconp_disambig_txt}',\n            ))\n            command |= ExternalProcess.fstcompile(*format(\n                '--isymbols={phones_txt}',\n                '--osymbols={words_txt}',\n                '--keep_isymbols=false',\n                '--keep_osymbols=false',\n            ))\n            command |= ExternalProcess.fstaddselfloops(*format('{wdisambig_phones_int}', '{wdisambig_words_int}'), **ExternalProcess.get_debug_stderr_kwargs(_log))\n            command |= ExternalProcess.fstarcsort(*format('--sort_type=olabel'))\n            command |= self.files_dict['L_disambig.fst']\n            command()\n\n        # FIXME: generate_words_relabeled_file(self.files_dict['words.txt'], self.files_dict['relabel_ilabels.int'], self.files_dict['words.relabeled.txt'])\n\n        self.fst_cache.update_dependencies()\n        self.fst_cache.save()\n\n    def reset_user_lexicon(self):\n        utils.clear_file(self.files_dict['user_lexicon.txt'])\n        self.generate_lexicon_files()\n\n    @staticmethod\n    def generate_words_relabeled_file(words_filename, 
relabel_filename, words_relabel_filename):\n        \"\"\" generate a version of the words file, that has already been relabeled with the given relabel file \"\"\"\n        with open(words_filename, 'r', encoding='utf-8') as file:\n            word_id_pairs = [(word, id) for (word, id) in [line.strip().split() for line in file]]\n        with open(relabel_filename, 'r', encoding='utf-8') as file:\n            relabel_map = {from_id: to_id for (from_id, to_id) in [line.strip().split() for line in file]}\n        word_ids = frozenset(id for (word, id) in word_id_pairs)\n        relabel_from_ids = frozenset(from_id for from_id in relabel_map.keys())\n        if word_ids < relabel_from_ids:\n            _log.warning(\"generate_words_relabeled_file: word_ids < relabel_from_ids\")\n        # if word_ids > relabel_from_ids:\n        #     _log.warning(\"generate_words_relabeled_file: word_ids > relabel_from_ids\")\n        with open(words_relabel_filename, 'w', encoding='utf-8') as file:\n            for (word, id) in word_id_pairs:\n                file.write(\"%s %s\\n\" % (word, (relabel_map.get(id, id))))\n\n\n########################################################################################################################\n\ndef convert_generic_model_to_agf(src_dir, model_dir):\n    from .compiler import Compiler\n    if PY2:\n        from .kaldi import augment_phones_txt_py2 as augment_phones_txt, augment_words_txt_py2 as augment_words_txt\n    else:\n        from .kaldi import augment_phones_txt, augment_words_txt\n\n    filenames = [\n        'words.txt',\n        'phones.txt',\n        'align_lexicon.int',\n        'disambig.int',\n        # 'L_disambig.fst',\n        'tree',\n        'final.mdl',\n        'lexiconp.txt',\n        'word_boundary.txt',\n        'optional_silence.txt',\n        'silence.txt',\n        'nonsilence.txt',\n        'wdisambig_phones.int',\n        'wdisambig_words.int',\n        'mfcc_hires.conf',\n        'mfcc.conf',\n        
'ivector_extractor.conf',\n        'splice.conf',\n        'online_cmvn.conf',\n        'final.mat',\n        'global_cmvn.stats',\n        'final.dubm',\n        'final.ie',\n    ]\n    nonterminals = list(Compiler.nonterminals)\n\n    for filename in filenames:\n        path = find_file(src_dir, filename)\n        if path is None:\n            _log.error(\"cannot find %r in %r\", filename, model_dir)\n            continue\n        _log.info(\"copying %r to %r\", path, model_dir)\n        shutil.copy(path, model_dir)\n\n    _log.info(\"converting %r in %r\", 'phones.txt', model_dir)\n    lines, highest_symbol = augment_phones_txt.read_phones_txt(os.path.join(model_dir, 'phones.txt'))\n    augment_phones_txt.write_phones_txt(lines, highest_symbol, nonterminals, os.path.join(model_dir, 'phones.txt'))\n\n    _log.info(\"converting %r in %r\", 'words.txt', model_dir)\n    lines, highest_symbol = augment_words_txt.read_words_txt(os.path.join(model_dir, 'words.txt'))\n    # FIXME: leave space for adding words later\n    augment_words_txt.write_words_txt(lines, highest_symbol, nonterminals, os.path.join(model_dir, 'words.txt'))\n\n    with open(os.path.join(model_dir, 'nonterminals.txt'), 'w', encoding='utf-8', newline='\\n') as f:\n        f.writelines(nonterm + '\\n' for nonterm in nonterminals)\n\n    # add nonterminals to align_lexicon.int\n    \n    # fix L_disambig.fst: construct lexiconp_disambig.txt ...\n\n\n########################################################################################################################\n\ndef str_space_join(iterable):\n    return u' '.join(text_type(elem) for elem in iterable)\n\ndef base_filepath(filepath):\n    root, ext = os.path.splitext(filepath)\n    return root + '.base' + ext\n\ndef verify_files_exist(*filenames):\n    return False\n"
  },
  {
    "path": "kaldi_active_grammar/plain_dictation.py",
    "content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\nfrom . import _log, KaldiError\nfrom .model import Model\nfrom .compiler import Compiler, remove_nonterms_in_text, remove_words_in_text\nfrom .wrapper import KaldiPlainNNet3Decoder, KaldiAgfNNet3Decoder\nfrom .utils import show_donation_message\n\n_log = _log.getChild('plain_dictation')\n\n\nclass PlainDictationRecognizer(object):\n\n    def __init__(self, model_dir=None, tmp_dir=None, fst_file=None, config=None):\n        \"\"\"\n        Recognizes plain dictation only. If `fst_file` is specified, uses that\n        HCLG.fst file; otherwise, uses KaldiAG but dictation only.\n\n        Args:\n            model_dir (str): optional path to model directory\n            tmp_dir (str): optional path to temporary directory\n            fst_file (str): optional path to model's HCLG.fst file to use\n            config (dict): optional configuration for initialization of decoder\n        \"\"\"\n        show_donation_message()\n\n        kwargs = {}\n        if config: kwargs['config'] = dict(config)\n\n        if fst_file:\n            self._model = Model(model_dir, tmp_dir)\n            self.decoder = KaldiPlainNNet3Decoder(model_dir=self._model.model_dir, tmp_dir=self._model.tmp_dir,\n                fst_file=fst_file, **kwargs)\n\n        else:\n            self._compiler = Compiler(model_dir, tmp_dir, cache_fsts=False)\n            top_fst_rule = self._compiler.compile_top_fst_dictation_only()\n            dictation_fst_file = self._compiler.dictation_fst_filepath\n            self.decoder = KaldiAgfNNet3Decoder(model_dir=self._compiler.model_dir, tmp_dir=self._compiler.tmp_dir,\n                top_fst=top_fst_rule.fst_wrapper, dictation_fst_file=dictation_fst_file, **kwargs)\n\n\n    def decode_utterance(self, samples_data, chunk_size=None):\n        \"\"\"\n        Decodes an entire utterance at once,\n      
  taking as input *samples_data* (*bytes-like* in `int16` format),\n        and returning a tuple of (output (*text*), likelihood (*float*)).\n        Optionally takes *chunk_size* (*int* in number of samples) for decoding.\n        \"\"\"\n        if chunk_size:\n            chunk_size *= 2  # Compensate for int16 format\n            for i in range(0, len(samples_data), chunk_size):\n                self.decoder.decode(samples_data[i : i + chunk_size], False)\n            self.decoder.decode(bytes(), True)\n        else:\n            self.decoder.decode(samples_data, True)\n        output_str, info = self.decoder.get_output()\n        output_str = remove_nonterms_in_text(output_str)\n        if hasattr(self, '_compiler'):  # self._compiler is only set when not decoding with a precompiled fst_file\n            output_str = remove_words_in_text(output_str, lambda word: word in self._compiler._silence_words)\n        return (output_str, info)\n"
  },
  {
    "path": "kaldi_active_grammar/utils.py",
    "content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\nimport logging, sys, time\nimport fnmatch, glob, os\nimport functools\nimport hashlib, json\nimport threading\nfrom contextlib import contextmanager\nfrom io import open\n\nimport six\nfrom six import PY2, binary_type, text_type, print_\n\nfrom . import _log, _name, __version__\n\n\n########################################################################################################################\n\n_donation_message_enabled = True\n_donation_message = (\"Kaldi-Active-Grammar v%s: \\n\"\n    \"    If this free, open source engine is valuable to you, please consider donating \\n\"\n    \"    https://github.com/daanzu/kaldi-active-grammar \\n\"\n    \"    Disable message by calling `kaldi_active_grammar.disable_donation_message()`\") % __version__\n\ndef show_donation_message():\n    if _donation_message_enabled:\n        print_(_donation_message)\n        disable_donation_message()\n\ndef disable_donation_message():\n    global _donation_message_enabled\n    _donation_message_enabled = False\n\n\n########################################################################################################################\n\ndebug_timer_enabled = True\n\nclass ThreadLocalData(threading.local):\n    def __init__(self):\n        self._debug_timer_stack = []\nthread_local_data = ThreadLocalData()\n\n@contextmanager\ndef debug_timer(log, desc, enabled=True, independent=False):\n    \"\"\"\n    Contextmanager that outputs timing to ``log`` with ``desc``.\n    :param independent: if True, tracks entire time spent inside context, rather than subtracting time within inner ``debug_timer`` instances\n    \"\"\"\n    _debug_timer_stack = thread_local_data._debug_timer_stack\n    start_time = time.time()\n    if not independent: _debug_timer_stack.append(start_time)\n    spent_time_func = lambda: time.time() - 
start_time\n    yield spent_time_func\n    if not independent: start_time_adjusted = _debug_timer_stack.pop()\n    else: start_time_adjusted = 0\n    if enabled:\n        if debug_timer_enabled:\n            log(\"%s %d ms\" % (desc, (time.time() - start_time_adjusted) * 1000))\n        if _debug_timer_stack and not independent:\n            _debug_timer_stack[-1] += spent_time_func()\n\nif not PY2:\n    def clock():\n        return time.perf_counter()\nelse:\n    def clock():\n        return time.clock()\n\n\n########################################################################################################################\n\nif sys.platform.startswith('win'): platform = 'windows'\nelif sys.platform.startswith('linux'): platform = 'linux'\nelif sys.platform.startswith('darwin'): platform = 'macos'\nelse: raise KaldiError(\"unknown sys.platform\")\n\nexec_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'exec', platform)\n\nimport ush\n\nclass ExternalProcess(object):\n\n    shell = ush.Shell(raise_on_error=True)\n\n    fstcompile = shell(os.path.join(exec_dir, 'fstcompile'))\n    fstarcsort = shell(os.path.join(exec_dir, 'fstarcsort'))\n    fstaddselfloops = shell(os.path.join(exec_dir, 'fstaddselfloops'))\n    fstinfo = shell(os.path.join(exec_dir, 'fstinfo'))\n    # compile_graph = shell(os.path.join(exec_dir, 'compile-graph'))\n    compile_graph_agf = shell(os.path.join(exec_dir, 'compile-graph-agf'))\n    # compile_graph_agf_debug = shell(os.path.join(exec_dir, 'compile-graph-agf-debug'))\n\n    make_lexicon_fst = shell([sys.executable, os.path.join(os.path.dirname(os.path.abspath(__file__)), 'kaldi', 'make_lexicon_fst%s.py' % ('_py2' if PY2 else ''))])\n\n    @staticmethod\n    def get_dict_formatter(format_kwargs):\n        return lambda **kwargs: { key: value.format(**format_kwargs) for (key, value) in kwargs.items() }\n    @staticmethod\n    def get_list_formatter(format_kwargs):\n        return lambda *args: [arg.format(**format_kwargs) 
for arg in args]\n\n    @staticmethod\n    def get_debug_stderr_kwargs(log):\n        return (dict() if log.isEnabledFor(logging.DEBUG) else dict(stderr=six.BytesIO()))\n\n    @staticmethod\n    def execute_command_safely(commands, log):\n        \"\"\" Executes given `ush` command, redirecting stderr appropriately: either logging, or storing to output upon error. \"\"\"\n        stderr = six.BytesIO()\n        for command in commands.commands:\n            command.opts['stderr'] = stderr\n        try:\n            result = commands()\n        except Exception as e:\n            log.error(\"Error running command. Printing stderr as follows...\\n%s\", stderr.getvalue().decode('utf-8'))\n            raise e\n        return result\n\n\n########################################################################################################################\n\ndef lazy_readonly_property(func):\n    # From https://stackoverflow.com/questions/3012421/python-memoising-deferred-lookup-property-decorator\n    attr_name = '_lazy_' + func.__name__\n\n    @property\n    @functools.wraps(func)\n    def _lazyprop(self):\n        if not hasattr(self, attr_name):\n            setattr(self, attr_name, func(self))\n        return getattr(self, attr_name)\n\n    return _lazyprop\n\nclass lazy_settable_property(object):\n    '''\n    meant to be used for lazy evaluation of an object attribute.\n    property should represent non-mutable data, as it replaces itself.\n    '''\n    # From https://stackoverflow.com/questions/3012421/python-memoising-deferred-lookup-property-decorator\n\n    def __init__(self, fget):\n        self.fget = fget\n        # copy the getter function's docstring and other attributes\n        functools.update_wrapper(self, fget)\n\n    def __get__(self, obj, cls):\n        if obj is None:\n            return self\n        value = self.fget(obj)\n        setattr(obj, self.fget.__name__, value)\n        return 
value\n\n\n########################################################################################################################\n\ndef touch_file(filename):\n    with open(filename, 'ab'):\n        os.utime(filename, None)  # Update timestamps\n\ndef clear_file(filename):\n    with open(filename, 'wb'):\n        pass\n\nsymbol_table_lookup_cache = dict()\n\ndef symbol_table_lookup(filename, input):\n    \"\"\"\n    Returns the RHS corresponding to LHS == ``input`` in symbol table in ``filename``.\n    \"\"\"\n    cached = symbol_table_lookup_cache.get((filename, input))\n    if cached is not None:\n        return cached\n    with open(filename, 'r', encoding='utf-8') as f:\n        for line in f:\n            tokens = line.strip().split()\n            if len(tokens) >= 2 and input == tokens[0]:\n                try:\n                    symbol_table_lookup_cache[(filename, input)] = int(tokens[1])\n                    return int(tokens[1])\n                except Exception as e:\n                    symbol_table_lookup_cache[(filename, input)] = tokens[1]\n                    return tokens[1]\n        return None\n\ndef load_symbol_table(filename):\n    with open(filename, 'r', encoding='utf-8') as f:\n        return [[int(token) if token.isdigit() else token for token in line.strip().split()] for line in f]\n\ndef find_file(directory, filename, required=False, default=False):\n    matches = []\n    for root, dirnames, filenames in os.walk(directory):\n        for match in fnmatch.filter(filenames, filename):  # Don't shadow the filename pattern parameter\n            matches.append(os.path.join(root, match))\n    if matches:\n        matches.sort(key=len)\n        _log.log(8, \"%s: find_file found file %r\", _name, matches[0])\n        return matches[0]\n    else:\n        _log.log(8, \"%s: find_file cannot find file %r in %r (or subdirectories)\", _name, filename, directory)\n        if required:\n            raise IOError(\"cannot find file %r in %r\" % (filename, directory))\n        if default == 
True:\n            return os.path.join(directory, filename)\n        return None\n\ndef is_file_up_to_date(filename, *parent_filenames):\n    if not os.path.exists(filename): return False\n    for parent_filename in parent_filenames:\n        if not os.path.exists(parent_filename): return False\n        if os.path.getmtime(filename) < os.path.getmtime(parent_filename): return False\n    return True\n\n\n########################################################################################################################\n\nclass FSTFileCache(object):\n\n    def __init__(self, cache_filename, tmp_dir=None, dependencies_dict=None, invalidate=False):\n        \"\"\"\n        Stores mapping filename -> hash of its contents/data, to detect when recalculation is necessary. Assumes file is in model_dir.\n        Also stores an entry ``dependencies_list`` listing filenames of all dependencies.\n        FST files are a special case: they aren't stored in the cache object, because each FST's filename is itself a hash of its content mixed with a hash of its dependencies.\n        If ``invalidate``, then initialize a fresh cache.\n        \"\"\"\n\n        self.cache_filename = cache_filename\n        self.tmp_dir = tmp_dir\n        if dependencies_dict is None: dependencies_dict = dict()\n        self.dependencies_dict = dependencies_dict\n        self.lock = threading.Lock()\n\n        try:\n            self._load()\n        except Exception as e:\n            _log.info(\"%s: failed to load cache from %r\", self, cache_filename)\n            self.cache = None\n\n        must_reset_cache = False\n        if invalidate:\n            _log.debug(\"%s: forced invalidate\", self)\n            must_reset_cache = True\n        elif self.cache is None:\n            _log.debug(\"%s: could not load cache\", self)\n            must_reset_cache = True\n        elif self.cache.get('version') != __version__:\n            _log.debug(\"%s: version changed\", self)\n            must_reset_cache = 
True\n        elif sorted(self.cache.get('dependencies_list', list())) != sorted(dependencies_dict.keys()):\n            _log.debug(\"%s: list of dependencies has changed\", self)\n            must_reset_cache = True\n        elif any(not self.file_is_current(path)\n                for (name, path) in dependencies_dict.items()\n                if path and os.path.isfile(path)):\n            _log.debug(\"%s: any of the dependencies files' contents (as stored in cache) has changed\", self)\n            must_reset_cache = True\n\n        if must_reset_cache:\n            # Then reset cache\n            _log.info(\"%s: version or dependencies did not match cache from %r; initializing empty\", self, cache_filename)\n            self.cache = dict({ 'version': text_type(__version__) })\n            self.cache_is_new = True\n            self.update_dependencies()\n            self.save()\n\n    def _load(self):\n        with open(self.cache_filename, 'r', encoding='utf-8') as f:\n            self.cache = json.load(f)\n        self.cache_is_new = False\n        self.dirty = False\n\n    dependencies_hash = property(lambda self: self.cache['dependencies_hash'])\n\n    def save(self):\n        with open(self.cache_filename, 'w', encoding='utf-8') as f:\n            # https://stackoverflow.com/a/14870531\n            f.write(json.dumps(self.cache, ensure_ascii=False))\n        self.dirty = False\n\n    def update_dependencies(self):\n        dependencies_dict = self.dependencies_dict\n        for (name, path) in dependencies_dict.items():\n            if path and os.path.isfile(path):\n                self.add_file(path)\n        self.cache['dependencies_list'] = sorted(dependencies_dict.keys())  # list\n        self.cache['dependencies_hash'] = self.hash_data([self.cache.get(os.path.basename(path)) for (key, path) in sorted(dependencies_dict.items())])  # add_file() keys entries by basename\n\n    def invalidate(self, filename=None):\n        if filename is None:\n            _log.info(\"%s: invalidating all file entries in 
cache\", self)\n            # Does not invalidate dependencies!\n            self.cache = { key: self.cache[key]\n                for key in ['version', 'dependencies_list', 'dependencies_hash'] + self.cache['dependencies_list']\n                if key in self.cache }\n            self.dirty = True\n            if self.tmp_dir is not None:\n                for filename in glob.glob(os.path.join(self.tmp_dir, '*.fst')):\n                    os.remove(filename)\n        elif filename in self.cache:\n            _log.info(\"%s: invalidating cache entry for %r\", self, filename)\n            del self.cache[filename]\n            self.dirty = True\n\n    def hash_data(self, data, mix_dependencies=False):\n        if not isinstance(data, binary_type):\n            if not isinstance(data, text_type):\n                data = text_type(data)\n            data = data.encode('utf-8')\n        hasher = hashlib.md5()\n        if mix_dependencies:\n            hasher.update(self.dependencies_hash.encode('utf-8'))\n        hasher.update(data)\n        return text_type(hasher.hexdigest())\n\n    def add_file(self, filepath, data=None):\n        # Assumes file is a root dependency\n        if data is None:\n            with open(filepath, 'rb') as f:\n                data = f.read()\n        filename = os.path.basename(filepath)\n        self.cache[filename] = self.hash_data(data)\n        self.dirty = True\n\n    def contains(self, filename, data):\n        return (filename in self.cache) and (self.cache[filename] == self.hash_data(data))\n\n    def file_is_current(self, filepath, data=None):\n        \"\"\"Returns bool whether generic filepath file exists and the cache contains the given data (or the file's current data if none given).\"\"\"\n        filename = os.path.basename(filepath)\n        if self.cache_is_new and filename in self.cache.get('dependencies_list', list()):\n            return False\n        if not os.path.isfile(filepath):\n            return False\n        
if data is None:\n            with open(filepath, 'rb') as f:\n                data = f.read()\n        return self.contains(filename, data)\n\n    def fst_is_current(self, filepath, touch=True):\n        \"\"\"Returns bool whether FST file in directory path exists.\"\"\"\n        result = os.path.isfile(filepath)\n        if result and touch:\n            touch_file(filepath)\n        return result\n"
  },
  {
    "path": "kaldi_active_grammar/wfst.py",
    "content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\nimport collections, itertools, math\n\nfrom six import iteritems, itervalues, text_type\n\nfrom . import KaldiError\nfrom .utils import FSTFileCache\n\n\nclass WFST(object):\n    \"\"\"\n    WFST class.\n    Notes:\n        * Weight (arc & state) is stored as raw probability, then normalized and converted to negative log likelihood/probability before export.\n    \"\"\"\n\n    zero = float('inf')  # Weight of non-final states; a state is final if and only if its weight is not equal to self.zero\n    one = 0.0\n    eps = u'<eps>'\n    eps_disambig = u'#0'\n    silent_labels = frozenset((eps, eps_disambig, u'!SIL'))\n    native = property(lambda self: False)\n\n    def __init__(self):\n        self.clear()\n\n    def clear(self):\n        self._arc_table_dict = collections.defaultdict(list)  # { src_state: [[src_state, dst_state, label, olabel, weight], ...] }  # list of its outgoing arcs\n        self._state_table = dict()  # { id: weight }\n        self._next_state_id = 0\n        self.start_state = self.add_state()\n        self.filename = None\n\n    num_arcs = property(lambda self: sum(len(arc_list) for arc_list in itervalues(self._arc_table_dict)))\n    num_states = property(lambda self: len(self._state_table))\n\n    def iter_arcs(self):\n        return itertools.chain.from_iterable(itervalues(self._arc_table_dict))\n\n    def is_state_final(self, state):\n        return (self._state_table[state] != 0)\n\n    def add_state(self, weight=None, initial=False, final=False):\n        \"\"\" Default weight is 1. 
\"\"\"\n        self.filename = None\n        id = int(self._next_state_id)\n        self._next_state_id += 1\n        if weight is None:\n            weight = 1 if final else 0\n        else:\n            assert final\n        self._state_table[id] = float(weight)\n        if initial:\n            self.add_arc(self.start_state, id, None)\n        return id\n\n    def add_arc(self, src_state, dst_state, label, olabel=None, weight=None):\n        \"\"\" Default weight is 1. None label is replaced by eps. Default olabel of None is replaced by label. \"\"\"\n        self.filename = None\n        if label is None: label = self.eps\n        if olabel is None: olabel = label\n        if weight is None: weight = 1\n        self._arc_table_dict[src_state].append(\n            [int(src_state), int(dst_state), text_type(label), text_type(olabel), float(weight)])\n\n    def get_fst_text(self, fst_cache, eps2disambig=False):\n        eps_replacement = self.eps_disambig if eps2disambig else self.eps\n        arcs_text = u''.join(\"%d %d %s %s %f\\n\" % (\n                src_state,\n                dst_state,\n                ilabel if ilabel != self.eps else eps_replacement,\n                olabel,\n                -math.log(weight) if weight != 0 else self.zero,\n            )\n            for (src_state, dst_state, ilabel, olabel, weight) in self.iter_arcs())\n        states_text = u''.join(\"%d %f\\n\" % (\n                id,\n                -math.log(weight) if weight != 0 else self.zero,\n            )\n            for (id, weight) in iteritems(self._state_table)\n            if weight != 0)\n        text = arcs_text + states_text\n        self.filename = fst_cache.hash_data(text, mix_dependencies=True) + '.fst'\n        return text\n\n    ####################################################################################################################\n\n    def label_is_silent(self, label):\n        return ((label in self.silent_labels) or 
(label.startswith('#nonterm')))\n\n    def scale_weights(self, factor):\n        # Unused\n        factor = float(factor)\n        for arcs in itervalues(self._arc_table_dict):\n            for arc in arcs:\n                arc[4] = arc[4] * factor\n\n    def normalize_weights(self, stochasticity=False):\n        # Unused\n        for arcs in itervalues(self._arc_table_dict):\n            num_weights = len(arcs)\n            sum_weights = sum(arc[4] for arc in arcs)\n            divisor = float(sum_weights if stochasticity else num_weights)\n            for arc in arcs:\n                arc[4] = arc[4] / divisor\n\n    def has_eps_path(self, path_src_state, path_dst_state, eps_like_labels=frozenset()):\n        \"\"\" Returns True iff there is a epsilon path from src_state to dst_state. Uses BFS. Does not follow nonterminals! Used by Dragonfly compiler. \"\"\"\n        eps_like_labels = frozenset((self.eps, self.eps_disambig)) | frozenset(eps_like_labels)\n        state_queue = collections.deque([path_src_state])\n        queued = set(state_queue)\n        while state_queue:\n            state = state_queue.pop()\n            if state == path_dst_state:\n                return True\n            next_states = [dst_state\n                for (src_state, dst_state, label, olabel, weight) in self._arc_table_dict[state]\n                if (label in eps_like_labels) and (dst_state not in queued)]\n            state_queue.extendleft(next_states)\n            queued.update(next_states)\n        return False\n\n    def does_match(self, target_words, wildcard_nonterms=(), include_silent=False):\n        \"\"\" Returns the olabels on a matching path if there is one, False if not. Uses BFS. Wildcard accepts zero or more words. Used for parsing by KaldiAG.compiler. 
\"\"\"\n        queue = collections.deque()  # entries: (state, path of olabels of arcs to state, index into target_words of remaining words)\n        queue.append((self.start_state, (), 0))\n        while queue:\n            state, path, target_word_index = queue.popleft()\n            target_word = target_words[target_word_index] if target_word_index < len(target_words) else None\n            if (target_word is None) and self.is_state_final(state):\n                return tuple(olabel for olabel in path\n                    if include_silent or not self.label_is_silent(olabel))\n            for arc in self._arc_table_dict[state]:\n                src_state, dst_state, ilabel, olabel, weight = arc\n                if (target_word is not None) and (ilabel == target_word):\n                    queue.append((dst_state, path+(olabel,), target_word_index+1))\n                elif ilabel in wildcard_nonterms:\n                    if olabel not in path:\n                        path += (olabel,)  # FIXME: Is this right? 
shouldn't we only check for olabel at end of path?\n                    if target_word is not None:\n                        queue.append((src_state, path+(target_word,), target_word_index+1))  # accept word and stay\n                    queue.append((dst_state, path, target_word_index))  # epsilon transition; already added olabel above or previously\n                elif self.label_is_silent(ilabel):\n                    queue.append((dst_state, path+(olabel,), target_word_index))  # epsilon transition\n        return False\n\n\n########################################################################################################################\n\nfrom .ffi import FFIObject, _ffi, decode, encode\n\nclass NativeWFST(FFIObject):\n    \"\"\"\n    WFST class, implemented in native code.\n    Notes:\n        * Weight (arc & state) is stored as raw probability, then normalized and converted to negative log likelihood/probability before export.\n    \"\"\"\n\n    _library_header_text = \"\"\"\n        DRAGONFLY_API bool fst__init(int32_t eps_like_ilabels_len, int32_t eps_like_ilabels_cp[], int32_t silent_olabels_len, int32_t silent_olabels_cp[], int32_t wildcard_olabels_len, int32_t wildcard_olabels_cp[]);\n        DRAGONFLY_API void* fst__construct();\n        DRAGONFLY_API bool fst__destruct(void* fst_vp);\n        DRAGONFLY_API int32_t fst__add_state(void* fst_vp, float weight, bool initial);\n        DRAGONFLY_API bool fst__add_arc(void* fst_vp, int32_t src_state_id, int32_t dst_state_id, int32_t ilabel, int32_t olabel, float weight);\n        DRAGONFLY_API bool fst__compute_md5(void* fst_vp, char* md5_cp, char* dependencies_seed_md5_cp);\n        DRAGONFLY_API bool fst__has_path(void* fst_vp);\n        DRAGONFLY_API bool fst__has_eps_path(void* fst_vp, int32_t path_src_state, int32_t path_dst_state);\n        DRAGONFLY_API bool fst__does_match(void* fst_vp, int32_t target_labels_len, int32_t target_labels_cp[], int32_t output_labels_cp[], int32_t* 
output_labels_len);\n        DRAGONFLY_API void* fst__load_file(char* filename_cp);\n        DRAGONFLY_API bool fst__write_file(void* fst_vp, char* filename_cp);\n        DRAGONFLY_API bool fst__write_file_const(void* fst_vp, char* filename_cp);\n        DRAGONFLY_API bool fst__print(void* fst_vp, char* filename_cp);\n        DRAGONFLY_API void* fst__compile_text(char* fst_text_cp, char* isymbols_file_cp, char* osymbols_file_cp);\n    \"\"\"\n\n    zero = float('inf')  # Weight of non-final states; a state is final if and only if its weight is not equal to self.zero\n    one = 0.0\n    eps = u'<eps>'\n    eps_disambig = u'#0'\n    silent_words = frozenset((eps, eps_disambig, u'!SIL'))\n    native = property(lambda self: True)\n\n    @classmethod\n    def init_class(cls, isymbol_table, wildcard_nonterms, osymbol_table=None):\n        if osymbol_table is None: osymbol_table = isymbol_table\n        cls.word_to_ilabel_map = isymbol_table.word_to_id_map\n        cls.word_to_olabel_map = osymbol_table.word_to_id_map\n        cls.olabel_to_word_map = osymbol_table.id_to_word_map\n        cls.eps_like_ilabels = tuple(cls.word_to_ilabel_map[word] for word in (cls.eps, cls.eps_disambig))\n        cls.silent_olabels = tuple(\n            frozenset(cls.word_to_olabel_map[word] for word in cls.silent_words)\n            | frozenset(symbol for (word, symbol) in cls.word_to_olabel_map.items() if word.startswith('#nonterm')))\n        cls.wildcard_nonterms = frozenset(wildcard_nonterms)\n        cls.wildcard_olabels = tuple(cls.word_to_olabel_map[word] for word in cls.wildcard_nonterms)\n        assert cls.word_to_ilabel_map[cls.eps] == 0\n\n        cls.init_ffi()\n        result = cls._lib.fst__init(len(cls.eps_like_ilabels), cls.eps_like_ilabels,\n            len(cls.silent_olabels), cls.silent_olabels,\n            len(cls.wildcard_olabels), cls.wildcard_olabels)\n        if not result:\n            raise KaldiError(\"Failed fst__init\")\n\n    def __init__(self):\n        
super(NativeWFST, self).__init__()\n        self._construct()\n\n    def _construct(self):\n        self.native_obj = self._lib.fst__construct()\n        if self.native_obj == _ffi.NULL:\n            raise KaldiError(\"Failed fst__construct\")\n\n        self.num_states = 1  # Is initialized with a start state\n        self.num_arcs = 0\n        self.filename = None\n        self._compiled_native_obj = None\n\n    def __del__(self):\n        self.destruct()\n\n    def destruct(self):\n        del self.compiled_native_obj\n        if self.native_obj is not None:\n            native_obj, self.native_obj = self.native_obj, None\n            result = self._lib.fst__destruct(native_obj)\n            if not result:\n                raise KaldiError(\"Failed fst__destruct on %r\" % native_obj)\n\n    compiled_native_obj = property(lambda self: self._compiled_native_obj)\n    @compiled_native_obj.setter\n    def compiled_native_obj(self, value):\n        del self.compiled_native_obj\n        self._compiled_native_obj = value\n    @compiled_native_obj.deleter\n    def compiled_native_obj(self):\n        if self._compiled_native_obj is not None:\n            native_obj, self._compiled_native_obj = self._compiled_native_obj, None\n            result = self._lib.fst__destruct(native_obj)\n            if not result:\n                raise KaldiError(\"Failed fst__destruct on %r\" % native_obj)\n\n    def clear(self):\n        self.destruct()\n        self._construct()\n\n    def add_state(self, weight=None, initial=False, final=False):\n        \"\"\" Default weight is 1. 
\"\"\"\n        self.filename = None\n        if weight is None:\n            weight = 1 if final else 0\n        else:\n            assert final\n        weight = -math.log(weight) if weight != 0 else self.zero\n        id = self._lib.fst__add_state(self.native_obj, float(weight), bool(initial))\n        if id < 0:\n            raise KaldiError(\"Failed fst__add_state\")\n        self.num_states += 1\n        if initial:\n            self.num_arcs += 1\n        return id\n\n    def add_arc(self, src_state, dst_state, label, olabel=None, weight=None):\n        \"\"\" Default weight is 1. None label is replaced by eps. Default olabel of None is replaced by label. \"\"\"\n        self.filename = None\n        if label is None: label = self.eps\n        if olabel is None: olabel = label\n        if weight is None: weight = 1\n        weight = -math.log(weight) if weight != 0 else self.zero\n        label_id = self.word_to_ilabel_map[label]\n        olabel_id = self.word_to_olabel_map[olabel]\n        result = self._lib.fst__add_arc(self.native_obj, int(src_state), int(dst_state), int(label_id), int(olabel_id), float(weight))\n        if not result:\n            raise KaldiError(\"Failed fst__add_arc\")\n        self.num_arcs += 1\n\n    def compute_hash(self, dependencies_seed_hash_str='0'*32):\n        hash_p = _ffi.new('char[]', 33)  # Length of MD5 hex string + null terminator\n        result = self._lib.fst__compute_md5(self.native_obj, hash_p, encode(dependencies_seed_hash_str))\n        if not result:\n            raise KaldiError(\"Failed fst__compute_md5\")\n        hash_str = decode(_ffi.string(hash_p))\n        self.filename = hash_str + '.fst'\n        return hash_str\n\n    ####################################################################################################################\n\n    def has_path(self):\n        \"\"\" Returns True iff there is a path (from start state to a final state). Uses BFS. Assumes can nonterminals succeed. 
\"\"\"\n        result = self._lib.fst__has_path(self.native_obj)\n        return result\n\n    def has_eps_path(self, path_src_state, path_dst_state, eps_like_labels=frozenset()):\n        \"\"\" Returns True iff there is an epsilon-like-only path from src_state to dst_state. Uses BFS. Does not follow nonterminals! \"\"\"\n        assert not eps_like_labels\n        result = self._lib.fst__has_eps_path(self.native_obj, path_src_state, path_dst_state)\n        return result\n
\n    def does_match(self, target_words, wildcard_nonterms=(), include_silent=False, output_max_length=1024):\n        \"\"\" Returns the olabels on a matching path if there is one, False if not. Uses BFS. Wildcard accepts zero or more words. \"\"\"\n        # FIXME: do in decoder!\n        assert frozenset(wildcard_nonterms) == self.wildcard_nonterms\n        output_p = _ffi.new('int32_t[]', output_max_length)\n        output_len_p = _ffi.new('int32_t*', output_max_length)\n        target_labels = [self.word_to_ilabel_map[word] for word in target_words]\n        result = self._lib.fst__does_match(self.native_obj, len(target_labels), target_labels, output_p, output_len_p)\n        if output_len_p[0] > output_max_length:\n            raise KaldiError(\"fst__does_match needed too much output length\")\n        if result:\n            return tuple(self.olabel_to_word_map[symbol]\n                for symbol in output_p[0:output_len_p[0]]\n                if include_silent or symbol not in self.silent_olabels)\n        return False\n
\n    ####################################################################################################################\n\n    def write_file(self, fst_filename):\n        result = self._lib.fst__write_file(self.native_obj, encode(fst_filename))\n        if not result:\n            raise KaldiError(\"Failed fst__write_file\")\n\n    def write_file_const(self, fst_filename):\n        result = self._lib.fst__write_file_const(self.native_obj, encode(fst_filename))\n        if not result:\n            raise KaldiError(\"Failed fst__write_file_const\")\n
\n    def print(self, fst_filename=None):\n        result = self._lib.fst__print(self.native_obj, (encode(fst_filename) if fst_filename is not None else _ffi.NULL))\n        if not result:\n            raise KaldiError(\"Failed fst__print\")\n\n    @classmethod\n    def load_file(cls, fst_filename):\n        cls.init_ffi()\n        native_obj = cls._lib.fst__load_file(encode(fst_filename))\n        if not native_obj:\n            raise KaldiError(\"Failed fst__load_file\")\n        # FIXME: memory leak possible?\n        return native_obj\n
\n    @classmethod\n    def compile_text(cls, fst_text, isymbols_filename, osymbols_filename):\n        cls.init_ffi()\n        native_obj = cls._lib.fst__compile_text(encode(fst_text), encode(isymbols_filename), encode(osymbols_filename))\n        if not native_obj:\n            raise KaldiError(\"Failed fst__compile_text\")\n        # FIXME: memory leak possible?\n        return native_obj\n
\n\n########################################################################################################################\n\nclass SymbolTable(object):\n\n    def __init__(self, filename=None):\n        self.word_to_id_map = dict()\n        self.id_to_word_map = dict()\n        self.max_term_word_id = -1\n        if filename is not None:\n            self.load_text_file(filename)\n
\n    def load_text_file(self, filename):\n        with open(filename, 'r', encoding='utf-8') as file:\n            word_id_pairs = [line.strip().split() for line in file]\n        self.word_to_id_map.clear()\n        self.id_to_word_map.clear()\n        self.word_to_id_map.update({ word: int(id) for (word, id) in word_id_pairs })\n        self.id_to_word_map.update({ id: word for (word, id) in self.word_to_id_map.items() })\n        self.max_term_word_id = max(id for (word, id) in self.word_to_id_map.items() if not word.startswith('#nonterm'))\n
\n    def add_word(self, word, id=None):\n        if id is None:\n            self.max_term_word_id += 1\n            id = self.max_term_word_id\n        else:\n            id = int(id)\n        self.word_to_id_map[word] = id\n        self.id_to_word_map[id] = word\n\n    words = property(lambda self: self.word_to_id_map.keys())\n\n    def __contains__(self, word):\n        return (word in self.word_to_id_map)\n"
  },
  {
    "path": "kaldi_active_grammar/wrapper.py",
"content": "#\n# This file is part of kaldi-active-grammar.\n# (c) Copyright 2019 by David Zurow\n# Licensed under the AGPL-3.0; see LICENSE.txt file.\n#\n\n\"\"\"\nWrapper classes for Kaldi\n\"\"\"\n\nimport argparse, json, os.path, sys\nfrom io import open, StringIO\n\nfrom six.moves import zip\nimport numpy as np\n\nfrom . import _log, KaldiError\nfrom .ffi import FFIObject, _ffi, decode as de, encode as en\nfrom .utils import clock, find_file, show_donation_message, symbol_table_lookup\nfrom .wfst import NativeWFST\nimport kaldi_active_grammar.defaults as defaults\n\n_log = _log.getChild('wrapper')\n_log_library = _log.getChild('library')\n
\n\n########################################################################################################################\n\nclass KaldiDecoderBase(FFIObject):\n    \"\"\" Abstract base class for Kaldi decoders: audio frame/byte conversions plus decode-time statistics. \"\"\"\n\n    def __init__(self):\n        super(KaldiDecoderBase, self).__init__()\n\n        show_donation_message()\n\n        self.sample_rate = 16000\n        self.num_channels = 1\n        self.bytes_per_kaldi_frame = self.kaldi_frame_num_to_audio_bytes(1)\n\n        self._reset_decode_time()\n
\n    def _reset_decode_time(self):\n        self._decode_time = 0\n        self._decode_real_time = 0\n        self._decode_times = []\n\n    def _start_decode_time(self, num_frames):\n        self.decode_start_time = clock()\n        self._decode_real_time += 1000.0 * num_frames / self.sample_rate\n
\n    def _stop_decode_time(self, finalize=False):\n        this = (clock() - self.decode_start_time) * 1000.0\n        self._decode_time += this\n        self._decode_times.append(this)\n        if finalize:\n            rtf = 1.0 * self._decode_time / self._decode_real_time if self._decode_real_time != 0 else float('nan')\n            pct = 100.0 * this / self._decode_time if self._decode_time != 0 else 100\n            _log.log(15, \"decoded at %.2f RTF, for %d ms audio, spending %d ms, of which %d ms (%.0f%%) in 
finalization\",\n                rtf, self._decode_real_time, self._decode_time, this, pct)\n            _log.log(13, \"    decode times: %s\", ' '.join(\"%d\" % t for t in self._decode_times))\n            self._reset_decode_time()\n
\n    def kaldi_frame_num_to_audio_bytes(self, kaldi_frame_num):\n        kaldi_frame_length_ms = 30\n        sample_size_bytes = 2 * self.num_channels\n        return int(kaldi_frame_num * kaldi_frame_length_ms * self.sample_rate / 1000 * sample_size_bytes)\n\n    def audio_bytes_to_s(self, audio_bytes):\n        sample_size_bytes = 2 * self.num_channels\n        return 1.0 * audio_bytes // sample_size_bytes / self.sample_rate\n
\n\n########################################################################################################################\n\nclass KaldiGmmDecoder(KaldiDecoderBase):\n    \"\"\" Legacy GMM decoder, decoding against a single precompiled HCLG.fst graph. \"\"\"\n\n    _library_header_text = \"\"\"\n        void* gmm__init(float beam, int32_t max_active, int32_t min_active, float lattice_beam,\n            char* word_syms_filename_cp, char* fst_in_str_cp, char* config_cp);\n        bool gmm__decode(void* model_vp, float samp_freq, int32_t num_frames, float* frames, bool finalize);\n        bool gmm__get_output(void* model_vp, char* output, int32_t output_length, double* likelihood_p);\n    \"\"\"\n
\n    def __init__(self, graph_dir=None, words_file=None, graph_file=None, model_conf_file=None):\n        super(KaldiGmmDecoder, self).__init__()\n\n        if words_file is None and graph_dir is not None: words_file = os.path.join(graph_dir, 'graph', 'words.txt')\n        if graph_file is None and graph_dir is not None: graph_file = os.path.join(graph_dir, 'graph', 'HCLG.fst')\n        self.words_file = os.path.normpath(words_file)\n        self.graph_file = os.path.normpath(graph_file)\n        self.model_conf_file = os.path.normpath(model_conf_file)\n        self._model = self._lib.gmm__init(7.0, 7000, 200, 8.0, en(self.words_file), en(self.graph_file), en(self.model_conf_file))\n        if not self._model: 
raise KaldiError(\"failed gmm__init\")\n        self.sample_rate = 16000\n
\n    def decode(self, frames, finalize, grammars_activity=None):\n        if not isinstance(frames, np.ndarray): frames = np.frombuffer(frames, np.int16)\n        frames = frames.astype(np.float32)\n        frames_char = _ffi.from_buffer(frames)\n        frames_float = _ffi.cast('float *', frames_char)\n\n        self._start_decode_time(len(frames))\n        result = self._lib.gmm__decode(self._model, self.sample_rate, len(frames), frames_float, finalize)\n        self._stop_decode_time(finalize)\n\n        if not result:\n            raise KaldiError(\"decoding error\")\n        return finalize\n
\n    def get_output(self, output_max_length=4*1024):\n        output_p = _ffi.new('char[]', output_max_length)\n        likelihood_p = _ffi.new('double *')\n        result = self._lib.gmm__get_output(self._model, output_p, output_max_length, likelihood_p)\n        output_str = de(_ffi.string(output_p))\n        info = {\n            'likelihood': likelihood_p[0],\n        }\n        return output_str, info\n
\n\n########################################################################################################################\n\nclass KaldiOtfGmmDecoder(KaldiDecoderBase):\n    \"\"\" Legacy GMM decoder using on-the-fly composition of HCLr.fst with one or more Gr.fst grammars. \"\"\"\n
\n    _library_header_text = \"\"\"\n        void* gmm_otf__init(float beam, int32_t max_active, int32_t min_active, float lattice_beam,\n            char* word_syms_filename_cp, char* config_cp,\n            char* hcl_fst_filename_cp, char** grammar_fst_filenames_cp, int32_t grammar_fst_filenames_len);\n        bool gmm_otf__add_grammar_fst(void* model_vp, char* grammar_fst_filename_cp);\n        bool gmm_otf__decode(void* model_vp, float samp_freq, int32_t num_frames, float* frames, bool finalize,\n            bool* grammars_activity, int32_t grammars_activity_size);\n        bool gmm_otf__get_output(void* model_vp, char* output, int32_t output_length, double* likelihood_p);\n 
   \"\"\"\n
\n    def __init__(self, graph_dir=None, words_file=None, model_conf_file=None, hcl_fst_file=None, grammar_fst_files=None):\n        super(KaldiOtfGmmDecoder, self).__init__()\n\n        if words_file is None and graph_dir is not None: words_file = os.path.join(graph_dir, 'graph', 'words.txt')\n        if hcl_fst_file is None and graph_dir is not None: hcl_fst_file = os.path.join(graph_dir, 'graph', 'HCLr.fst')\n        if grammar_fst_files is None and graph_dir is not None: grammar_fst_files = [os.path.join(graph_dir, 'graph', 'Gr.fst')]\n        self.words_file = os.path.normpath(words_file)\n        self.model_conf_file = os.path.normpath(model_conf_file)\n        self.hcl_fst_file = os.path.normpath(hcl_fst_file)\n        grammar_fst_filenames_cps = [_ffi.new('char[]', en(os.path.normpath(f))) for f in grammar_fst_files]\n        grammar_fst_filenames_cp = _ffi.new('char*[]', grammar_fst_filenames_cps)\n        self._model = self._lib.gmm_otf__init(7.0, 7000, 200, 8.0, en(self.words_file), en(self.model_conf_file),\n            en(self.hcl_fst_file), _ffi.cast('char**', grammar_fst_filenames_cp), len(grammar_fst_files))\n        if not self._model: raise KaldiError(\"failed gmm_otf__init\")\n        self.sample_rate = 16000\n        self.num_grammars = len(grammar_fst_files)\n
\n    def add_grammar_fst(self, grammar_fst_file):\n        grammar_fst_file = os.path.normpath(grammar_fst_file)\n        _log.log(8, \"%s: adding grammar_fst_file: %s\", self, grammar_fst_file)\n        result = self._lib.gmm_otf__add_grammar_fst(self._model, en(grammar_fst_file))\n        if not result:\n            raise KaldiError(\"error adding grammar\")\n        self.num_grammars += 1\n
\n    def decode(self, frames, finalize, grammars_activity=None):\n        # grammars_activity = [True] * self.num_grammars\n        # grammars_activity = np.random.choice([True, False], len(grammars_activity)).tolist(); print grammars_activity; time.sleep(5)\n        if grammars_activity is None: grammars_activity = []\n        else: _log.debug(\"decode: 
grammars_activity = %s\", ''.join('1' if a else '0' for a in grammars_activity))\n        # if len(grammars_activity) != self.num_grammars:\n        #     raise KaldiError(\"wrong len(grammars_activity)\")\n\n        if not isinstance(frames, np.ndarray): frames = np.frombuffer(frames, np.int16)\n        frames = frames.astype(np.float32)\n        frames_char = _ffi.from_buffer(frames)\n        frames_float = _ffi.cast('float *', frames_char)\n\n        self._start_decode_time(len(frames))\n        result = self._lib.gmm_otf__decode(self._model, self.sample_rate, len(frames), frames_float, finalize,\n            grammars_activity, len(grammars_activity))\n        self._stop_decode_time(finalize)\n\n        if not result:\n            raise KaldiError(\"decoding error\")\n        return finalize\n\n    def get_output(self, output_max_length=4*1024):\n        output_p = _ffi.new('char[]', output_max_length)\n        likelihood_p = _ffi.new('double *')\n        result = self._lib.gmm_otf__get_output(self._model, output_p, output_max_length, likelihood_p)\n        output_str = _ffi.string(output_p)\n        info = {\n            'likelihood': likelihood_p[0],\n        }\n        return output_str, info\n\n\n########################################################################################################################\n\nclass KaldiNNet3Decoder(KaldiDecoderBase):\n    \"\"\" Abstract base class for nnet3 decoders. 
\"\"\"\n\n    _library_header_text = \"\"\"\n        DRAGONFLY_API bool nnet3_base__load_lexicon(void* model_vp, char* word_syms_filename_cp, char* word_align_lexicon_filename_cp);\n        DRAGONFLY_API bool nnet3_base__save_adaptation_state(void* model_vp);\n        DRAGONFLY_API bool nnet3_base__reset_adaptation_state(void* model_vp);\n        DRAGONFLY_API bool nnet3_base__get_word_align(void* model_vp, int32_t* times_cp, int32_t* lengths_cp, int32_t num_words);\n        DRAGONFLY_API bool nnet3_base__decode(void* model_vp, float samp_freq, int32_t num_samples, float* samples, bool finalize, bool save_adaptation_state);\n        DRAGONFLY_API bool nnet3_base__get_output(void* model_vp, char* output, int32_t output_max_length,\n                float* likelihood_p, float* am_score_p, float* lm_score_p, float* confidence_p, float* expected_error_rate_p);\n        DRAGONFLY_API bool nnet3_base__set_lm_prime_text(void* model_vp, char* prime_cp);\n    \"\"\"\n\n    def __init__(self, model_dir, tmp_dir, words_file=None, word_align_lexicon_file=None, max_num_rules=None, save_adaptation_state=False):\n        super(KaldiNNet3Decoder, self).__init__()\n\n        model_dir = os.path.normpath(model_dir)\n        if words_file is None: words_file = find_file(model_dir, 'words.txt')\n        if word_align_lexicon_file is None: word_align_lexicon_file = find_file(model_dir, 'align_lexicon.int', required=False)\n        mfcc_conf_file = find_file(model_dir, 'mfcc_hires.conf')\n        if mfcc_conf_file is None: mfcc_conf_file = find_file(model_dir, 'mfcc.conf')  # FIXME: warning?\n        model_file = find_file(model_dir, 'final.mdl')\n\n        self.model_dir = model_dir\n        self.words_file = os.path.normpath(words_file)\n        self.word_align_lexicon_file = os.path.normpath(word_align_lexicon_file) if word_align_lexicon_file is not None else None\n        self.mfcc_conf_file = os.path.normpath(mfcc_conf_file)\n        self.model_file = os.path.normpath(model_file)\n  
      self.ie_config = self._read_ie_conf_file(model_dir, find_file(model_dir, 'ivector_extractor.conf'))\n        self.verbosity = (10 - _log_library.getEffectiveLevel()) if _log_library.isEnabledFor(10) else -1\n        self.max_num_rules = int(max_num_rules) if max_num_rules is not None else None\n        self._saving_adaptation_state = save_adaptation_state\n\n        self.config_dict = {\n            'model_dir': self.model_dir,\n            'mfcc_config_filename': self.mfcc_conf_file,\n            'ivector_extraction_config_json': self.ie_config,\n            'model_filename': self.model_file,\n            'word_syms_filename': self.words_file,\n            'word_align_lexicon_filename': self.word_align_lexicon_file or '',\n            }\n        if self.max_num_rules is not None: self.config_dict.update(max_num_rules=self.max_num_rules)\n\n    def _read_ie_conf_file(self, model_dir, old_filename, search=True):\n        \"\"\" Read ivector_extractor.conf file, converting relative paths to absolute paths for current configuration, returning dict of config. 
\"\"\"\n        options_with_path = {\n            '--splice-config':      'conf/splice.conf',\n            '--cmvn-config':        'conf/online_cmvn.conf',\n            '--lda-matrix':         'ivector_extractor/final.mat',\n            '--global-cmvn-stats':  'ivector_extractor/global_cmvn.stats',\n            '--diag-ubm':           'ivector_extractor/final.dubm',\n            '--ivector-extractor':  'ivector_extractor/final.ie',\n        }\n        def convert_path(key, value):\n            if not search:\n                return os.path.join(model_dir, options_with_path[key])\n            else:\n                return find_file(model_dir, os.path.basename(options_with_path[key]), required=True)\n        options_converters = {\n            '--splice-config':          convert_path,\n            '--cmvn-config':            convert_path,\n            '--lda-matrix':             convert_path,\n            '--global-cmvn-stats':      convert_path,\n            '--diag-ubm':               convert_path,\n            '--ivector-extractor':      convert_path,\n            '--ivector-period':         lambda key, value: (float(value) if '.' in value else int(value)),\n            '--max-count':              lambda key, value: (float(value) if '.' in value else int(value)),\n            '--max-remembered-frames':  lambda key, value: (float(value) if '.' in value else int(value)),\n            '--min-post':               lambda key, value: (float(value) if '.' in value else int(value)),\n            '--num-gselect':            lambda key, value: (float(value) if '.' in value else int(value)),\n            '--posterior-scale':        lambda key, value: (float(value) if '.' 
in value else int(value)),\n            '--online-cmvn-iextractor': lambda key, value: (True if value in ['true'] else False),\n        }\n        config = dict()\n        with open(old_filename, 'r', encoding='utf-8') as old_file:\n            for line in old_file:\n                key, value = line.strip().split('=', 1)\n                value = options_converters[key](key, value)\n                assert key.startswith('--')\n                key = key[2:]\n                config[key] = value\n        return config\n\n    saving_adaptation_state = property(lambda self: self._saving_adaptation_state, doc=\"Whether currently to save updated adaptation state at end of utterance\")\n    @saving_adaptation_state.setter\n    def saving_adaptation_state(self, value): self._saving_adaptation_state = value\n\n    def load_lexicon(self, words_file=None, word_align_lexicon_file=None):\n        \"\"\" Only necessary when you update the lexicon after initialization. \"\"\"\n        if words_file is None: words_file = self.words_file\n        if word_align_lexicon_file is None: word_align_lexicon_file = self.word_align_lexicon_file\n        result = self._lib.nnet3_base__load_lexicon(self._model, en(words_file), en(word_align_lexicon_file))\n        if not result:\n            raise KaldiError(\"error loading lexicon (%r, %r)\" % (words_file, word_align_lexicon_file))\n\n    def save_adaptation_state(self):\n        result = self._lib.nnet3_base__save_adaptation_state(self._model)\n        if not result:\n            raise KaldiError(\"save_adaptation_state error\")\n\n    def reset_adaptation_state(self):\n        result = self._lib.nnet3_base__reset_adaptation_state(self._model)\n        if not result:\n            raise KaldiError(\"reset_adaptation_state error\")\n\n    def get_output(self, output_max_length=4*1024):\n        output_p = _ffi.new('char[]', output_max_length)\n        likelihood_p = _ffi.new('float *')\n        am_score_p = _ffi.new('float *')\n        
lm_score_p = _ffi.new('float *')\n        confidence_p = _ffi.new('float *')\n        expected_error_rate_p = _ffi.new('float *')\n        result = self._lib.nnet3_base__get_output(self._model, output_p, output_max_length, likelihood_p, am_score_p, lm_score_p, confidence_p, expected_error_rate_p)\n        if not result:\n            raise KaldiError(\"get_output error\")\n        output_str = de(_ffi.string(output_p))\n        info = {\n            'likelihood': likelihood_p[0],\n            'am_score': am_score_p[0],\n            'lm_score': lm_score_p[0],\n            'confidence': confidence_p[0],\n            'expected_error_rate': expected_error_rate_p[0],\n        }\n        _log.log(7, \"get_output: %r %s\", output_str, info)\n        return output_str, info\n\n    def get_word_align(self, output):\n        \"\"\"Returns tuple of tuples: words (including nonterminals but not eps), each's time (in bytes), and each's length (in bytes).\"\"\"\n        words = output.split()\n        num_words = len(words)\n        kaldi_frame_times_p = _ffi.new('int32_t[]', num_words)\n        kaldi_frame_lengths_p = _ffi.new('int32_t[]', num_words)\n        result = self._lib.nnet3_base__get_word_align(self._model, kaldi_frame_times_p, kaldi_frame_lengths_p, num_words)\n        if not result:\n            raise KaldiError(\"get_word_align error\")\n        times = [kaldi_frame_num * self.bytes_per_kaldi_frame for kaldi_frame_num in kaldi_frame_times_p]\n        lengths = [kaldi_frame_num * self.bytes_per_kaldi_frame for kaldi_frame_num in kaldi_frame_lengths_p]\n        return tuple(zip(words, times, lengths))\n\n    def set_lm_prime_text(self, prime_text):\n        prime_text = prime_text.strip()\n        result = self._lib.nnet3_base__set_lm_prime_text(self._model, en(prime_text))\n        if not result:\n            raise KaldiError(\"error setting prime text %r\" % 
prime_text)\n
\n\n########################################################################################################################\n\nclass KaldiPlainNNet3Decoder(KaldiNNet3Decoder):\n    \"\"\" NNet3 decoder for plain dictation against a single precompiled HCLG.fst graph. \"\"\"\n
\n    _library_header_text = KaldiNNet3Decoder._library_header_text + \"\"\"\n        DRAGONFLY_API void* nnet3_plain__construct(char* model_dir_cp, char* config_str_cp, int32_t verbosity);\n        DRAGONFLY_API bool nnet3_plain__destruct(void* model_vp);\n        DRAGONFLY_API bool nnet3_plain__decode(void* model_vp, float samp_freq, int32_t num_samples, float* samples, bool finalize, bool save_adaptation_state);\n    \"\"\"\n
\n    def __init__(self, fst_file=None, config=None, **kwargs):\n        super(KaldiPlainNNet3Decoder, self).__init__(**kwargs)\n\n        if fst_file is None: fst_file = find_file(self.model_dir, defaults.DEFAULT_PLAIN_DICTATION_HCLG_FST_FILENAME, required=True)\n        fst_file = os.path.normpath(fst_file)\n\n        self.config_dict.update({\n            'decode_fst_filename': fst_file,\n            })\n        if config: self.config_dict.update(config)\n\n        _log.debug(\"config_dict: %s\", self.config_dict)\n        self._model = self._lib.nnet3_plain__construct(en(self.model_dir), en(json.dumps(self.config_dict)), self.verbosity)\n        if not self._model: raise KaldiError(\"failed nnet3_plain__construct\")\n
\n    def destroy(self):\n        if self._model:\n            result = self._lib.nnet3_plain__destruct(self._model)\n            if not result:\n                raise KaldiError(\"failed nnet3_plain__destruct\")\n            self._model = None\n
\n    def decode(self, frames, finalize):\n        \"\"\"Continue decoding with given new audio data.\"\"\"\n        if not isinstance(frames, np.ndarray): frames = np.frombuffer(frames, np.int16)\n        frames = frames.astype(np.float32)\n        frames_char = _ffi.from_buffer(frames)\n        frames_float = _ffi.cast('float *', 
frames_char)\n\n        self._start_decode_time(len(frames))\n        result = self._lib.nnet3_plain__decode(self._model, self.sample_rate, len(frames), frames_float, finalize, self._saving_adaptation_state)\n        self._stop_decode_time(finalize)\n\n        if not result:\n            raise KaldiError(\"decoding error\")\n        return finalize\n
\n\n########################################################################################################################\n\nclass KaldiAgfNNet3Decoder(KaldiNNet3Decoder):\n    \"\"\" NNet3 decoder supporting multiple active-grammar FSTs, which can be added, reloaded, and removed dynamically. \"\"\"\n
\n    _library_header_text = KaldiNNet3Decoder._library_header_text + \"\"\"\n        DRAGONFLY_API void* nnet3_agf__construct(char* model_dir_cp, char* config_str_cp, int32_t verbosity);\n        DRAGONFLY_API bool nnet3_agf__destruct(void* model_vp);\n        DRAGONFLY_API int32_t nnet3_agf__add_grammar_fst(void* model_vp, void* grammar_fst_cp);\n        DRAGONFLY_API int32_t nnet3_agf__add_grammar_fst_file(void* model_vp, char* grammar_fst_filename_cp);\n        DRAGONFLY_API bool nnet3_agf__reload_grammar_fst(void* model_vp, int32_t grammar_fst_index, void* grammar_fst_cp);\n        DRAGONFLY_API bool nnet3_agf__reload_grammar_fst_file(void* model_vp, int32_t grammar_fst_index, char* grammar_fst_filename_cp);\n        DRAGONFLY_API bool nnet3_agf__remove_grammar_fst(void* model_vp, int32_t grammar_fst_index);\n        DRAGONFLY_API bool nnet3_agf__decode(void* model_vp, float samp_freq, int32_t num_frames, float* frames, bool finalize,\n            bool* grammars_activity_cp, int32_t grammars_activity_cp_size, bool save_adaptation_state);\n    \"\"\"\n
\n    def __init__(self, *, top_fst=None, dictation_fst_file=None, config=None, **kwargs):\n        super(KaldiAgfNNet3Decoder, self).__init__(**kwargs)\n\n        phones_file = find_file(self.model_dir, 'phones.txt')\n        nonterm_phones_offset = symbol_table_lookup(phones_file, '#nonterm_bos')\n        if nonterm_phones_offset is None:\n            raise KaldiError(\"cannot find #nonterm_bos symbol in phones.txt\")\n        rules_phones_offset = symbol_table_lookup(phones_file, '#nonterm:rule0')\n        if rules_phones_offset is None:\n            raise KaldiError(\"cannot find #nonterm:rule0 symbol in phones.txt\")\n        dictation_phones_offset = symbol_table_lookup(phones_file, '#nonterm:dictation')\n        if dictation_phones_offset is None:\n            raise KaldiError(\"cannot find #nonterm:dictation symbol in phones.txt\")\n
\n        self.config_dict.update({\n            'nonterm_phones_offset': nonterm_phones_offset,\n            'rules_phones_offset': rules_phones_offset,\n            'dictation_phones_offset': dictation_phones_offset,\n            'dictation_fst_filename': os.path.normpath(dictation_fst_file) if dictation_fst_file is not None else '',\n            })\n        if isinstance(top_fst, NativeWFST): self.config_dict.update({'top_fst': int(_ffi.cast(\"uint64_t\", top_fst.compiled_native_obj))})\n        elif isinstance(top_fst, str): self.config_dict.update({'top_fst_filename': os.path.normpath(top_fst)})\n        else: raise KaldiError(\"unrecognized top_fst type\")\n        if config: self.config_dict.update(config)\n
\n        _log.debug(\"config_dict: %s\", self.config_dict)\n        self._model = self._lib.nnet3_agf__construct(en(self.model_dir), en(json.dumps(self.config_dict)), self.verbosity)\n        if not self._model: raise KaldiError(\"failed nnet3_agf__construct\")\n        self.num_grammars = 0\n
\n    def destroy(self):\n        if self._model:\n            result = self._lib.nnet3_agf__destruct(self._model)\n            if not result:\n                raise KaldiError(\"failed nnet3_agf__destruct\")\n            self._model = None\n
\n    def add_grammar_fst(self, grammar_fst):\n        _log.log(8, \"%s: adding grammar_fst: %r\", self, grammar_fst)\n        if isinstance(grammar_fst, NativeWFST):\n            grammar_fst_index = 
self._lib.nnet3_agf__add_grammar_fst(self._model, grammar_fst.compiled_native_obj)\n        elif isinstance(grammar_fst, str):\n            grammar_fst_index = self._lib.nnet3_agf__add_grammar_fst_file(self._model, en(os.path.normpath(grammar_fst)))\n        else: raise KaldiError(\"unrecognized grammar_fst type\")\n        if grammar_fst_index < 0:\n            raise KaldiError(\"error adding grammar %r\" % grammar_fst)\n        assert grammar_fst_index == self.num_grammars, \"add_grammar_fst allocated invalid grammar_fst_index\"\n        self.num_grammars += 1\n        return grammar_fst_index\n\n    def reload_grammar_fst(self, grammar_fst_index, grammar_fst):\n        _log.debug(\"%s: reloading grammar_fst_index: #%s %r\", self, grammar_fst_index, grammar_fst)\n        if isinstance(grammar_fst, NativeWFST):\n            result = self._lib.nnet3_agf__reload_grammar_fst(self._model, grammar_fst_index, grammar_fst.compiled_native_obj)\n        elif isinstance(grammar_fst, str):\n            result = self._lib.nnet3_agf__reload_grammar_fst_file(self._model, grammar_fst_index, en(os.path.normpath(grammar_fst)))\n        else: raise KaldiError(\"unrecognized grammar_fst type\")\n        if not result:\n            raise KaldiError(\"error reloading grammar #%s %r\" % (grammar_fst_index, grammar_fst))\n\n    def remove_grammar_fst(self, grammar_fst_index):\n        _log.debug(\"%s: removing grammar_fst_index: %s\", self, grammar_fst_index)\n        result = self._lib.nnet3_agf__remove_grammar_fst(self._model, grammar_fst_index)\n        if not result:\n            raise KaldiError(\"error removing grammar #%s\" % grammar_fst_index)\n        self.num_grammars -= 1\n\n    def decode(self, frames, finalize, grammars_activity=None):\n        \"\"\"Continue decoding with given new audio data.\"\"\"\n        # grammars_activity = [True] * self.num_grammars\n        # grammars_activity = np.random.choice([True, False], len(grammars_activity)).tolist(); print 
grammars_activity; time.sleep(5)\n        if grammars_activity is None:\n            grammars_activity = []\n        else:\n            # Start of utterance\n            _log.log(5, \"decode: grammars_activity = %s\", ''.join('1' if a else '0' for a in grammars_activity))\n            if len(grammars_activity) != self.num_grammars:\n                _log.error(\"wrong len(grammars_activity) = %d != %d = num_grammars\" % (len(grammars_activity), self.num_grammars))\n\n        if not isinstance(frames, np.ndarray): frames = np.frombuffer(frames, np.int16)\n        frames = frames.astype(np.float32)\n        frames_char = _ffi.from_buffer(frames)\n        frames_float = _ffi.cast('float *', frames_char)\n\n        self._start_decode_time(len(frames))\n        result = self._lib.nnet3_agf__decode(self._model, self.sample_rate, len(frames), frames_float, finalize,\n            grammars_activity, len(grammars_activity), self._saving_adaptation_state)\n        self._stop_decode_time(finalize)\n\n        if not result:\n            raise KaldiError(\"decoding error\")\n        return finalize\n\n\n########################################################################################################################\n\nclass KaldiAgfCompiler(FFIObject):\n\n    _library_header_text = \"\"\"\n        DRAGONFLY_API void* nnet3_agf__construct_compiler(char* config_str_cp);\n        DRAGONFLY_API bool nnet3_agf__destruct_compiler(void* compiler_vp);\n        DRAGONFLY_API void* nnet3_agf__compile_graph(void* compiler_vp, char* config_str_cp, void* grammar_fst_cp, bool return_graph);\n        DRAGONFLY_API void* nnet3_agf__compile_graph_text(void* compiler_vp, char* config_str_cp, char* grammar_fst_text_cp, bool return_graph);\n        DRAGONFLY_API void* nnet3_agf__compile_graph_file(void* compiler_vp, char* config_str_cp, char* grammar_fst_filename_cp, bool return_graph);\n    \"\"\"\n\n    def __init__(self, config):\n        super(KaldiAgfCompiler, self).__init__()\n        
self._compiler = self._lib.nnet3_agf__construct_compiler(en(json.dumps(config)))\n        if not self._compiler: raise KaldiError(\"failed nnet3_agf__construct_compiler\")\n
\n    def destroy(self):\n        if self._compiler:\n            result = self._lib.nnet3_agf__destruct_compiler(self._compiler)\n            if not result:\n                raise KaldiError(\"failed nnet3_agf__destruct_compiler\")\n            self._compiler = None\n
\n    def compile_graph(self, config, grammar_fst=None, grammar_fst_text=None, grammar_fst_file=None, return_graph=False):\n        if 1 != sum(int(g is not None) for g in [grammar_fst, grammar_fst_text, grammar_fst_file]):\n            raise ValueError(\"must pass exactly one grammar\")\n        if grammar_fst is not None:\n            _log.log(5, \"compile_graph:\\n    config=%r\\n    grammar_fst=%r\", config, grammar_fst)\n            result = self._lib.nnet3_agf__compile_graph(self._compiler, en(json.dumps(config)), grammar_fst.native_obj, return_graph)\n            return result\n        if grammar_fst_text is not None:\n            _log.log(5, \"compile_graph:\\n    config=%r\\n    grammar_fst_text:\\n%s\", config, grammar_fst_text)\n            result = self._lib.nnet3_agf__compile_graph_text(self._compiler, en(json.dumps(config)), en(grammar_fst_text), return_graph)\n            return result\n        if grammar_fst_file is not None:\n            _log.log(5, \"compile_graph:\\n    config=%r\\n    grammar_fst_file=%r\", config, grammar_fst_file)\n            result = self._lib.nnet3_agf__compile_graph_file(self._compiler, en(json.dumps(config)), en(grammar_fst_file), return_graph)\n            return result\n
\n\n########################################################################################################################\n\nclass KaldiLafNNet3Decoder(KaldiNNet3Decoder):\n    \"\"\" NNet3 decoder using a lookahead (HCLr.fst) graph, with grammar FSTs added, reloaded, and removed dynamically. \"\"\"\n
\n    _library_header_text = KaldiNNet3Decoder._library_header_text + \"\"\"\n        
DRAGONFLY_API void* nnet3_laf__construct(char* model_dir_cp, char* config_str_cp, int32_t verbosity);\n        DRAGONFLY_API bool nnet3_laf__destruct(void* model_vp);\n        DRAGONFLY_API int32_t nnet3_laf__add_grammar_fst(void* model_vp, void* grammar_fst_cp);\n        DRAGONFLY_API int32_t nnet3_laf__add_grammar_fst_text(void* model_vp, char* grammar_fst_cp);\n        DRAGONFLY_API bool nnet3_laf__reload_grammar_fst(void* model_vp, int32_t grammar_fst_index, void* grammar_fst_cp);\n        DRAGONFLY_API bool nnet3_laf__remove_grammar_fst(void* model_vp, int32_t grammar_fst_index);\n        DRAGONFLY_API bool nnet3_laf__decode(void* model_vp, float samp_freq, int32_t num_frames, float* frames, bool finalize,\n            bool* grammars_activity_cp, int32_t grammars_activity_cp_size, bool save_adaptation_state);\n    \"\"\"\n\n    def __init__(self, dictation_fst_file=None, config=None, **kwargs):\n        super(KaldiLafNNet3Decoder, self).__init__(**kwargs)\n\n        self.config_dict.update({\n            'hcl_fst_filename': find_file(self.model_dir, 'HCLr.fst'),\n            'disambig_tids_filename': find_file(self.model_dir, 'disambig_tid.int'),\n            'relabel_ilabels_filename': find_file(self.model_dir, 'relabel_ilabels.int'),\n            'word_syms_relabeled_filename': find_file(self.model_dir, 'words.relabeled.txt', required=True),\n            'dictation_fst_filename': dictation_fst_file or '',\n            })\n        if config: self.config_dict.update(config)\n\n        _log.debug(\"config_dict: %s\", self.config_dict)\n        self._model = self._lib.nnet3_laf__construct(en(self.model_dir), en(json.dumps(self.config_dict)), self.verbosity)\n        if not self._model: raise KaldiError(\"failed nnet3_laf__construct\")\n        self.num_grammars = 0\n\n    def destroy(self):\n        if self._model:\n            result = self._lib.nnet3_laf__destruct(self._model)\n            if not result:\n                raise KaldiError(\"failed 
nnet3_laf__destruct\")\n            self._model = None\n\n    def add_grammar_fst(self, grammar_fst):\n        _log.log(8, \"%s: adding grammar_fst: %r\", self, grammar_fst)\n        grammar_fst_index = self._lib.nnet3_laf__add_grammar_fst(self._model, grammar_fst.native_obj)\n        if grammar_fst_index < 0:\n            raise KaldiError(\"error adding grammar %r\" % grammar_fst)\n        assert grammar_fst_index == self.num_grammars, \"add_grammar_fst allocated invalid grammar_fst_index\"\n        self.num_grammars += 1\n        return grammar_fst_index\n\n    def add_grammar_fst_text(self, grammar_fst_text):\n        assert grammar_fst_text\n        _log.log(8, \"%s: adding grammar_fst_text: %r\", self, grammar_fst_text[:512])\n        grammar_fst_index = self._lib.nnet3_laf__add_grammar_fst_text(self._model, en(grammar_fst_text))\n        if grammar_fst_index < 0:\n            raise KaldiError(\"error adding grammar %r\" % grammar_fst_text[:512])\n        assert grammar_fst_index == self.num_grammars, \"add_grammar_fst allocated invalid grammar_fst_index\"\n        self.num_grammars += 1\n        return grammar_fst_index\n\n    def reload_grammar_fst(self, grammar_fst_index, grammar_fst):\n        _log.debug(\"%s: reloading grammar_fst_index: #%s %r\", self, grammar_fst_index, grammar_fst)\n        result = self._lib.nnet3_laf__reload_grammar_fst(self._model, grammar_fst_index, grammar_fst.native_obj)\n        if not result:\n            raise KaldiError(\"error reloading grammar #%s %r\" % (grammar_fst_index, grammar_fst))\n\n    def remove_grammar_fst(self, grammar_fst_index):\n        _log.debug(\"%s: removing grammar_fst_index: %s\", self, grammar_fst_index)\n        result = self._lib.nnet3_laf__remove_grammar_fst(self._model, grammar_fst_index)\n        if not result:\n            raise KaldiError(\"error removing grammar #%s\" % grammar_fst_index)\n        self.num_grammars -= 1\n\n    def decode(self, frames, finalize, grammars_activity=None):\n        
\"\"\"Continue decoding with given new audio data.\"\"\"\n        # grammars_activity = [True] * self.num_grammars\n        # grammars_activity = np.random.choice([True, False], len(grammars_activity)).tolist(); print grammars_activity; time.sleep(5)\n        if grammars_activity is None:\n            grammars_activity = []\n        else:\n            # Start of utterance\n            _log.log(5, \"decode: grammars_activity = %s\", ''.join('1' if a else '0' for a in grammars_activity))\n            if len(grammars_activity) != self.num_grammars:\n                _log.error(\"wrong len(grammars_activity) = %d != %d = num_grammars\" % (len(grammars_activity), self.num_grammars))\n\n        if not isinstance(frames, np.ndarray): frames = np.frombuffer(frames, np.int16)\n        frames = frames.astype(np.float32)\n        frames_char = _ffi.from_buffer(frames)\n        frames_float = _ffi.cast('float *', frames_char)\n\n        self._start_decode_time(len(frames))\n        result = self._lib.nnet3_laf__decode(self._model, self.sample_rate, len(frames), frames_float, finalize,\n            grammars_activity, len(grammars_activity), self._saving_adaptation_state)\n        self._stop_decode_time(finalize)\n\n        if not result:\n            raise KaldiError(\"decoding error\")\n        return finalize\n\n\n########################################################################################################################\n\nclass KaldiModelBuildUtils(FFIObject):\n\n    _library_header_text = \"\"\"\n        DRAGONFLY_API bool utils__build_L_disambig(char* lexicon_fst_text_cp, char* isymbols_file_cp, char* osymbols_file_cp, char* wdisambig_phones_file_cp, char* wdisambig_words_file_cp, char* fst_out_file_cp);\n    \"\"\"\n\n    @classmethod\n    def build_L_disambig(cls, lexicon_fst_text_bytes, phones_file, words_file, wdisambig_phones_file, wdisambig_words_file, fst_out_file):\n        cls.init_ffi()\n        result = 
cls._lib.utils__build_L_disambig(lexicon_fst_text_bytes, en(phones_file), en(words_file), en(wdisambig_phones_file), en(wdisambig_words_file), en(fst_out_file))\n        if not result: raise KaldiError(\"failed utils__build_L_disambig\")\n\n    @staticmethod\n    def make_lexicon_fst(**kwargs):\n        try:\n            from .kaldi.make_lexicon_fst import main\n            old_stdout = sys.stdout\n            sys.stdout = output = StringIO()\n            main(argparse.Namespace(**kwargs))\n            return output.getvalue()\n        finally:\n            sys.stdout = old_stdout\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[build-system]\nrequires = [\"setuptools\", \"wheel\", \"scikit-build\", \"cmake\", \"ninja\"]\n\n[tool.pytest.ini_options]\nminversion = \"8.0\"\ntestpaths = [\"tests\"]\naddopts = [\n    \"--import-mode=importlib\",\n    \"-ra\",  # extra summary for skips/xfails\n    \"--strict-markers\",  # fail on unknown markers\n    \"--strict-config\",  # fail on bad config\n    # \"--cov=kaldi_active_grammar\", \"--no-cov-on-fail\", \"--cov-branch\", \"--cov-report=term-missing\",\n]\nxfail_strict = true\n"
  },
  {
    "path": "requirements-build.txt",
    "content": "cmake\nninja\nscikit-build>=0.10.0\nsetuptools\nwheel\n"
  },
  {
    "path": "requirements-editable.txt",
    "content": "-e .\n"
  },
  {
    "path": "requirements-test.txt",
    "content": "piper-tts~=1.0\npytest>=8.0\npytest-cov\n"
  },
  {
    "path": "setup.cfg",
    "content": "[metadata]\n# This includes the license file(s) in the wheel.\n# https://wheel.readthedocs.io/en/stable/user_guide.html#including-license-files-in-the-generated-wheel-file\nlicense_files = LICENSE.txt\n\n[bdist_wheel]\n# This flag says to generate wheels that support both Python 2 and Python\n# 3. If your code will not run unchanged on both Python 2 and 3, you will\n# need to generate separate wheels for each Python version that you\n# support. Removing this line (or setting universal to 0) will prevent\n# bdist_wheel from trying to make a universal wheel. For more see:\n# https://packaging.python.org/guides/distributing-packages-using-setuptools/#wheels\n# universal=1\n"
  },
  {
    "path": "setup.py",
    "content": "\"\"\"A setuptools based setup module.\n\nSee:\nhttps://packaging.python.org/guides/distributing-packages-using-setuptools/\nhttps://github.com/pypa/sampleproject\n\"\"\"\n\n# Always prefer setuptools over distutils\nfrom setuptools import find_packages\nimport datetime\nimport os\nimport platform\nimport re\n\n# Optionally skip native code build (expecting libraries to be manually/externally placed correctly prior) by using standard setuptools; otherwise build native code with scikit-build\nif os.environ.get('KALDIAG_BUILD_SKIP_NATIVE') or os.environ.get('KALDIAG_SETUP_RAW'):\n    from setuptools import setup\nelse:\n    from skbuild import setup\n\nimport site, sys\nsite.ENABLE_USER_SITE = bool(\"--user\" in sys.argv[1:])  # Fix pip bug breaking editable install to user directory: https://github.com/pypa/pip/issues/7953\n\n\n# Force wheel to be platform-specific (needed due to manually-loaded native libraries)\n# https://stackoverflow.com/questions/45150304/how-to-force-a-python-wheel-to-be-platform-specific-when-building-it\n# https://github.com/Yelp/dumb-init/blob/48db0c0d0ecb4598d1a6400710445b85d67616bf/setup.py#L11-L27\n# https://github.com/google/or-tools/issues/616#issuecomment-371480314\nif True:\n    from wheel.bdist_wheel import bdist_wheel as bdist_wheel\n    class bdist_wheel_impure(bdist_wheel):\n\n        def finalize_options(self):\n            super().finalize_options()\n            # Mark us as not a pure python package: we contain platform-specific native libraries, even though no CPython extensions\n            self.root_is_pure = False\n\n        def get_tag(self):\n            python, abi, plat = super().get_tag()\n            # Mark us as python-version-agnostic (py3), and python-ABI-agnostic (none), since we contain no CPython extensions\n            python, abi = 'py3', 'none'\n            # For MacOS, prevent mistakenly marking as universal2 wheel (since we compile our native libraries as either x86_64 or arm64, not both)\n 
           if plat.startswith(\"macosx_\") and plat.endswith(\"_universal2\"):\n                want = \"x86_64\" if platform.machine() == \"x86_64\" else \"arm64\"\n                plat = re.sub(r\"_universal2$\", f\"_{want}\", plat)\n            return python, abi, plat\n\n    from setuptools.command.install import install\n    class install_platlib(install):\n        def finalize_options(self):\n            super().finalize_options()\n            self.install_lib = self.install_platlib\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# https://packaging.python.org/guides/single-sourcing-package-version/\ndef read(*parts):\n    with open(os.path.join(here, *parts), 'r') as fp:\n        return fp.read()\n\ndef find_version(*file_paths):\n    version_file = read(*file_paths)\n    version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n    if version_match:\n        return version_match.group(1)\n    raise RuntimeError(\"Unable to find version string.\")\n\nversion = find_version('kaldi_active_grammar', '__init__.py')\nif version.endswith('dev0'):\n    version = version[:-1] + datetime.datetime.now().strftime('%Y%m%d%H%M%S')\n\n# Set branch for Kaldi source repository (maybe we should use commits instead?)\nif not os.environ.get('KALDI_BRANCH'):\n    os.environ['KALDI_BRANCH'] = ('kag-v' + version) if ('dev' not in version) else 'origin/master'\n\n# Get the long description from the README file\nwith open(os.path.join(here, 'README.md'), encoding='utf-8') as f:\n    long_description = f.read()\n\n\n# Arguments marked as \"Required\" below must be included for upload to PyPI.\n# Fields marked as \"Optional\" may be commented out.\n\nsetup(\n    cmdclass={\n        'bdist_wheel': bdist_wheel_impure,\n        'install': install_platlib,\n    },\n\n    # This is the name of your project. The first time you publish this\n    # package, this name will be registered for you. 
It will determine how\n    # users can install this project, e.g.:\n    #\n    # $ pip install sampleproject\n    #\n    # And where it will live on PyPI: https://pypi.org/project/sampleproject/\n    #\n    # There are some restrictions on what makes a valid project name\n    # specification here:\n    # https://packaging.python.org/specifications/core-metadata/#name\n    name='kaldi-active-grammar',  # Required\n\n    # Versions should comply with PEP 440:\n    # https://www.python.org/dev/peps/pep-0440/\n    #\n    # For a discussion on single-sourcing the version across setup.py and the\n    # project code, see\n    # https://packaging.python.org/en/latest/single_source_version.html\n    # version='0.2.0',  # Required\n    # version=open('VERSION').read().strip(),\n    version=version,\n\n    # This is a one-line description or tagline of what your project does. This\n    # corresponds to the \"Summary\" metadata field:\n    # https://packaging.python.org/specifications/core-metadata/#summary\n    description='Kaldi speech recognition with grammars that can be set active/inactive dynamically at decode-time',  # Optional\n\n    # This is an optional longer description of your project that represents\n    # the body of text which users will see when they visit PyPI.\n    #\n    # Often, this is the same as your README, so you can just read it in from\n    # that file directly (as we have already done above)\n    #\n    # This field corresponds to the \"Description\" metadata field:\n    # https://packaging.python.org/specifications/core-metadata/#description-optional\n    long_description=long_description,  # Optional\n\n    # Denotes that our long_description is in Markdown; valid values are\n    # text/plain, text/x-rst, and text/markdown\n    #\n    # Optional if long_description is written in reStructuredText (rst) but\n    # required for plain-text or Markdown; if unspecified, \"applications should\n    # attempt to render [the long_description] as 
text/x-rst; charset=UTF-8 and\n    # fall back to text/plain if it is not valid rst\" (see link below)\n    #\n    # This field corresponds to the \"Description-Content-Type\" metadata field:\n    # https://packaging.python.org/specifications/core-metadata/#description-content-type-optional\n    long_description_content_type='text/markdown',  # Optional (see note above)\n\n    # This should be a valid link to your project's main homepage.\n    #\n    # This field corresponds to the \"Home-Page\" metadata field:\n    # https://packaging.python.org/specifications/core-metadata/#home-page-optional\n    url='https://github.com/daanzu/kaldi-active-grammar',  # Optional\n\n    # This should be your name or the name of the organization which owns the\n    # project.\n    author='David Zurow',  # Optional\n\n    # This should be a valid email address corresponding to the author listed\n    # above.\n    author_email='daanzu@gmail.com',  # Optional\n\n    license='AGPL-3.0-or-later',\n\n    # Classifiers help users find your project by categorizing it.\n    #\n    # For a list of valid classifiers, see https://pypi.org/classifiers/\n    classifiers=[  # Optional\n        # How mature is this project? Common values are\n        #   3 - Alpha\n        #   4 - Beta\n        #   5 - Production/Stable\n        'Development Status :: 5 - Production/Stable',\n\n        # Indicate who your project is intended for\n        'Intended Audience :: Developers',\n        # 'Topic :: Software Development :: Build Tools',\n\n        # Specify the Python versions you support here. In particular, ensure\n        # that you indicate whether you support Python 2, Python 3 or both.\n        # These classifiers are *not* checked by 'pip install'. 
See instead\n        # 'python_requires' below.\n        'Programming Language :: Python :: 3',\n        'Programming Language :: Python :: 3.9',\n        'Programming Language :: Python :: 3.10',\n        'Programming Language :: Python :: 3.11',\n        'Programming Language :: Python :: 3.12',\n        'Programming Language :: Python :: 3.13',\n        'Programming Language :: Python :: 3.14',\n        'Programming Language :: Python :: Implementation :: CPython',\n    ],\n\n    # This field adds keywords for your project which will appear on the\n    # project page. What does your project relate to?\n    #\n    # Note that this is a string of words separated by whitespace, not a list.\n    keywords='kaldi speech recognition grammar dragonfly',  # Optional\n\n    # You can just specify package directories manually here if your project is\n    # simple. Or you can use find_packages().\n    #\n    # Alternatively, if you just want to distribute a single Python file, use\n    # the `py_modules` argument instead as follows, which will expect a file\n    # called `my_module.py` to exist:\n    #\n    #   py_modules=[\"my_module\"],\n    #\n    packages=find_packages(exclude=['contrib', 'docs', 'tests']),  # Required\n\n    # Specify which Python versions you support. In contrast to the\n    # 'Programming Language' classifiers above, 'pip install' will check this\n    # and refuse to install the project if the version does not match. 
If you\n    # do not support Python 2, you can simplify this to '>=3.5' or similar, see\n    # https://packaging.python.org/guides/distributing-packages-using-setuptools/#python-requires\n    python_requires='>=3.6, <4',  # NOTE: Allowing earlier unsupported versions, even if not tested, unless we know they break\n\n    # This field lists other packages that your project depends on to run.\n    # Any package you put here will be installed by pip when your project is\n    # installed, so they must be valid existing projects.\n    #\n    # For an analysis of \"install_requires\" vs pip's requirements files see:\n    # https://packaging.python.org/en/latest/requirements.html\n    install_requires=[\n        'cffi >= 1.12',\n        'numpy >= 1.16, != 1.19.4',\n        'ush >= 3.1',\n        'six',\n        'futures; python_version == \"2.7\"',\n    ],  # Optional\n\n    # List additional groups of dependencies here (e.g. development\n    # dependencies). Users will be able to install these using the \"extras\"\n    # syntax, for example:\n    #\n    #   $ pip install sampleproject[dev]\n    #\n    # Similar to `install_requires` above, these must be valid existing\n    # projects.\n    extras_require={  # Optional\n        'g2p_en': ['g2p_en >= 2.1.0'],\n        'online': ['requests >= 2.18'],\n        # 'dev': ['check-manifest'],\n        # \"test\": [\n        #     # See requirements-test.txt\n        # ]\n    },\n\n    # package_dir={\n    #     'kaldi_active_grammar': 'package'\n    # },\n\n    # If there are data files included in your packages that need to be\n    # installed, specify them here.\n    #\n    # If using Python 2.6 or earlier, then these have to be included in\n    # MANIFEST.in as well.\n    package_data={  # Optional\n        'kaldi_active_grammar': ['exec/*/*', 'exec/*/*/*'],\n        '': ['LICENSE.txt'],\n    },\n\n    # Although 'package_data' is the preferred approach, in some case you may\n    # need to place data files outside of your 
packages. See:\n    # http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files\n    #\n    # In this case, 'data_file' will be installed into '<sys.prefix>/my_data'\n    # data_files=[('my_data', ['data/data_file'])],  # Optional\n    # data_files=[('my_data', ['exec/windows/dragonfly.dll'])],  # Optional\n    # data_files=[('', ['LICENSE.txt'])],\n\n    # To provide executable scripts, use entry points in preference to the\n    # \"scripts\" keyword. Entry points provide cross-platform support and allow\n    # `pip` to create the appropriate form of executable for the target\n    # platform.\n    #\n    # For example, the following would provide a command called `sample` which\n    # executes the function `main` from this package when invoked:\n    # entry_points={  # Optional\n    #     'console_scripts': [\n    #         'sample=sample:main',\n    #     ],\n    # },\n\n    # List additional URLs that are relevant to your project as a dict.\n    #\n    # This field corresponds to the \"Project-URL\" metadata fields:\n    # https://packaging.python.org/specifications/core-metadata/#project-url-multiple-use\n    #\n    # Examples listed include a pattern for specifying where the package tracks\n    # issues, where the source is hosted, where to say thanks to the package\n    # maintainers, and where to support the project financially. The key is\n    # what's used to render the link text on PyPI.\n    project_urls={  # Optional\n        'Bug Reports': 'https://github.com/daanzu/kaldi-active-grammar/issues',\n        'Funding': 'https://github.com/sponsors/daanzu',\n        # 'Say Thanks!': 'http://saythanks.io/to/example',\n        'Source': 'https://github.com/daanzu/kaldi-active-grammar/',\n    },\n)\n"
  },
  {
    "path": "tests/conftest.py",
    "content": "\nimport os\nfrom pathlib import Path\n\nimport piper\nimport pytest\n\n\n@pytest.fixture\ndef change_to_test_dir(monkeypatch):\n        monkeypatch.chdir(Path(__file__).parent)  # Where model is\n\ndef get_piper_model_path():\n    \"\"\" Get Piper model path from environment or use default. \"\"\"\n    model_name = os.environ.get('PIPER_MODEL', 'en_US-ryan-low.onnx')\n    model_path = Path(__file__).parent / model_name\n    if not model_path.is_file():\n        raise FileNotFoundError(f\"Piper model file '{model_path}' not found.\")\n        # from piper.download_voices import download_voice\n        # download_voice(model_name, model_path.parent)\n    return model_path\n\n@pytest.fixture(scope=\"session\")\ndef piper_voice():\n    \"\"\" Load Piper TTS voice model once per test session. \"\"\"\n    piper_model_path = get_piper_model_path()\n    return piper.PiperVoice.load(piper_model_path)\n\n@pytest.fixture\ndef audio_generator(piper_voice):\n    \"\"\" Generate audio data from text using Piper TTS. \"\"\"\n    def _generate_audio(text, syn_config=None):\n        if syn_config is None:\n            syn_config = piper.SynthesisConfig(\n                length_scale=1.5,  # Slow down\n                noise_scale=0.0,  # No audio variation, for repeatable testing\n                noise_w_scale=0.0,  # No speaking variation, for repeatable testing\n            )\n        audio_chunks = []\n        # Chunk size is variable and determined by Piper internals\n        for chunk in piper_voice.synthesize(text, syn_config=syn_config):\n            audio_chunks.append(chunk.audio_int16_bytes)\n        return b''.join(audio_chunks)\n    return _generate_audio\n"
  },
  {
    "path": "tests/generate_google_tts.py",
    "content": "#!/usr/bin/env -S uv run --script\n# /// script\n# requires-python = \">=3.12\"\n# dependencies = [\n#     \"google-cloud-texttospeech\",\n#     \"fire\",\n# ]\n# ///\n\nfrom google.cloud import texttospeech\n\n# pip install google-cloud-texttospeech google-auth\n# from google.oauth2 import service_account\n# creds = service_account.Credentials.from_service_account_file(\"service-account.json\", scopes=[\"https://www.googleapis.com/auth/cloud-platform\"])\n# client = texttospeech.TextToSpeechClient(credentials=creds)\n# OR GOOGLE_APPLICATION_CREDENTIALS=/full/path/service-account.json\nclient = texttospeech.TextToSpeechClient()\n\ndef generate(text, out=None, voice=\"en-US-Studio-Q\", lang=\"en-US\", format=\"wav\", play=False):\n    audio_encodings = {\n        \"wav\": texttospeech.AudioEncoding.LINEAR16,\n        \"mp3\": texttospeech.AudioEncoding.MP3,\n    }\n    assert format in audio_encodings, f\"Unsupported format: {format}. Supported formats: {list(audio_encodings.keys())}\"\n    if out is None:\n        out = f\"{text.replace(' ', '_')}.{format}\"\n    out = str(out)\n    response = client.synthesize_speech(\n        input=texttospeech.SynthesisInput(text=text),\n        voice=texttospeech.VoiceSelectionParams(language_code=lang, name=voice),\n        audio_config=texttospeech.AudioConfig(audio_encoding=audio_encodings[format], sample_rate_hertz=16000)\n    )\n    with open(out, \"wb\") as f:\n        f.write(response.audio_content)\n    if play:\n        import winsound\n        winsound.PlaySound(out, winsound.SND_FILENAME)\n\ndef list_voices():\n    for v in client.list_voices().voices:\n        print(v.name, \"-\", v.language_codes)\n\nif __name__ == \"__main__\":\n    import fire\n    fire.Fire(generate)\n"
  },
  {
    "path": "tests/generate_piper_tts.py",
    "content": "#!/usr/bin/env -S uv run --script\n# /// script\n# requires-python = \">=3.12\"\n# dependencies = [\n#     \"piper-tts\",\n#     \"kaldi-active-grammar\",\n#     \"fire\",\n# ]\n# ///\n\nimport os\n\nimport piper\n\npiper_model_path = os.path.join(os.path.dirname(__file__), 'en_US-ryan-low.onnx')\nvoice = piper.PiperVoice.load(piper_model_path)\n\n# with wave.open(\"test.wav\", \"wb\") as wav_file:\n#     voice.synthesize_wav(\"Welcome to the world of speech synthesis!\", wav_file)\n\nsyn_config = piper.SynthesisConfig(\n    # volume=0.5,  # half as loud\n    # length_scale=2.0,  # twice as slow\n    # noise_scale=1.0,  # more audio variation\n    # noise_w_scale=1.0,  # more speaking variation\n    length_scale=1.5,\n    noise_scale=0.0,\n    noise_w_scale=0.0,\n    # normalize_audio=False, # use raw audio from voice\n)\n\n# voice.synthesize_wav(..., syn_config=syn_config)\n\ntext = \"Welcome to the world of speech synthesis!\"\ntext = \"it depends on the context\"\ntext = \"up down left right\"\n\nfor chunk in voice.synthesize(text, syn_config=syn_config):\n    print(chunk.sample_rate, chunk.sample_width, chunk.sample_channels)\n    print(\"audio\", len(chunk.audio_int16_bytes))\n    audio_data = chunk.audio_int16_bytes\n\n    if True:\n        from io import BytesIO\n        import wave\n        import winsound\n        audio_buffer = BytesIO()\n        with wave.open(audio_buffer, 'wb') as wav_file:\n            wav_file.setnchannels(chunk.sample_channels)\n            wav_file.setsampwidth(chunk.sample_width)\n            wav_file.setframerate(chunk.sample_rate)\n            wav_file.writeframes(chunk.audio_int16_bytes)\n        audio_buffer.seek(0)\n        winsound.PlaySound(audio_buffer.getvalue(), winsound.SND_MEMORY)\n\nif True:\n    import kaldi_active_grammar as kag\n    recognizer = kag.PlainDictationRecognizer()\n    output_str, info = recognizer.decode_utterance(audio_data)\n    print(repr(output_str), info)\n"
  },
  {
    "path": "tests/helpers.py",
    "content": "\nexpected_info_keys_and_types = {\n    'likelihood': float,\n    'am_score': float,\n    'lm_score': float,\n    'confidence': float,\n    'expected_error_rate': float,\n}\n\ndef assert_info_shape(info):\n    assert isinstance(info, dict)\n    for key, expected_type in expected_info_keys_and_types.items():\n        assert key in info, f\"Missing key: {key}\"\n        assert isinstance(info[key], expected_type), f\"Incorrect type for {key}: expected {expected_type}, got {type(info[key])}\"\n\ndef play_audio_on_windows(audio_bytes: bytes, sample_rate: int = 16000):\n    \"\"\" Play raw PCM audio bytes on Windows using winsound. For interactive debugging only. \"\"\"\n    import io\n    import wave\n    import winsound\n    with io.BytesIO() as buf:\n        with wave.open(buf, 'wb') as wf:\n            wf.setnchannels(1)\n            wf.setsampwidth(2)\n            wf.setframerate(sample_rate)\n            wf.writeframes(audio_bytes)\n        wav_data = buf.getvalue()\n    winsound.PlaySound(wav_data, winsound.SND_MEMORY)\n\n"
  },
  {
    "path": "tests/run_each_test_separately.py",
    "content": "\"\"\"\nRun each test in a separate process.\nThis is the only reasonable cross-platform way to do this with pytest.\n\"\"\"\n\nimport sys, subprocess\n\ndef collect_nodeids(extra):\n    # -q with --collect-only prints one nodeid per line\n    r = subprocess.run(\n        [sys.executable, \"-m\", \"pytest\", \"-q\", \"--collect-only\", *extra],\n        capture_output=True, text=True, check=True\n    )\n    return [\n        ln.strip().split(\"/\")[-1]  # Discard the \"tests/\" prefix\n        for ln in r.stdout.splitlines()\n        if ln.strip() and not ln.startswith((\"=\", \"<\", \"[\")) and not \"collected in\" in ln\n    ]\n\ndef main():\n    extra = sys.argv[1:]   # e.g. [\"tests\", \"-k\", \"not slow\"]\n    nodeids = collect_nodeids(extra)\n    print(f\"Collected {len(nodeids)} nodeids: {nodeids}\")\n    failed = []\n    for nid in nodeids:\n        print(f\"\\n=== {nid} ===\")\n        rc = subprocess.call([sys.executable, \"-m\", \"pytest\", \"-q\", nid, *extra])\n        if rc != 0:\n            failed.append(nid)\n    print(\"\\n========= DONE =========\")\n    if failed:\n        print(f\"\\nFailures in {len(failed)} tests:\")\n        for nid in failed: print(\" -\", nid)\n        sys.exit(1)\n    else:\n        print(\"\\nAll tests passed.\")\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "tests/test_grammar.py",
    "content": "\nfrom typing import Callable, Optional, Union\n\nimport pytest\n\nfrom kaldi_active_grammar import Compiler, KaldiRule, NativeWFST, WFST\nfrom tests.helpers import *\n\n\nclass TestGrammar:\n\n    @pytest.fixture(autouse=True)\n    def setup(self, change_to_test_dir, audio_generator):\n        self.compiler = Compiler()\n        self.decoder = self.compiler.init_decoder()\n        self.audio_generator = audio_generator\n\n    def make_rule(self, name: str, build_func: Callable[[Union[NativeWFST, WFST]], None], **kwargs) -> KaldiRule:\n        rule = KaldiRule(self.compiler, name, **kwargs)\n        assert rule.name == name\n        assert rule.fst is not None\n        build_func(rule.fst)\n        rule.compile()\n        assert rule.compiled\n        rule.load()\n        assert rule.loaded\n        return rule\n\n    def decode(self, text_or_audio: Union[str, bytes], kaldi_rules_activity: list[bool], expected_rule: Optional[KaldiRule], expected_words: Optional[list[str]] = None, expected_words_are_dictation_mask: Optional[list[bool]] = None):\n        if isinstance(text_or_audio, str):\n            text = text_or_audio\n            audio_data = self.audio_generator(text)\n            if expected_words is None:\n                expected_words = text.split() if text else []\n        else:\n            text = None\n            audio_data = text_or_audio\n            if expected_words is None:\n                expected_words = []\n\n        self.decoder.decode(audio_data, True, kaldi_rules_activity)\n\n        output, info = self.decoder.get_output()\n        assert isinstance(output, str)\n        assert len(output) > 0 or expected_words == []\n        assert_info_shape(info)\n\n        recognized_rule, words, words_are_dictation_mask = self.compiler.parse_output(output)\n        if expected_rule is None:\n            assert recognized_rule is None\n            assert words == []\n            assert words_are_dictation_mask == []\n        else:\n      
      assert recognized_rule == expected_rule\n            assert words == expected_words\n            if expected_words_are_dictation_mask is None:\n                expected_words_are_dictation_mask = [False] * len(words)\n            assert words_are_dictation_mask == expected_words_are_dictation_mask\n\n    def test_simple_rule(self):\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            final_state = fst.add_state(final=True)\n            fst.add_arc(initial_state, final_state, 'hello')\n        rule = self.make_rule('TestRule', _build)\n        self.decode(\"hello\", [True], rule)\n\n    def test_epsilon_transition(self):\n        \"\"\"Test epsilon transitions between states.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            middle_state = fst.add_state()\n            final_state = fst.add_state(final=True)\n            fst.add_arc(initial_state, middle_state, None)  # epsilon transition\n            fst.add_arc(middle_state, final_state, 'world')\n        rule = self.make_rule('EpsilonRule', _build)\n        self.decode(\"world\", [True], rule)\n\n    def test_multiple_paths(self):\n        \"\"\"Test grammar with multiple alternative paths (choice).\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            final_state = fst.add_state(final=True)\n            # Create three alternative paths\n            fst.add_arc(initial_state, final_state, 'hello')\n            fst.add_arc(initial_state, final_state, 'hi')\n            fst.add_arc(initial_state, final_state, 'greetings')\n        rule = self.make_rule('MultiPathRule', _build)\n        # Test each alternative path\n        self.decode(\"hello\", [True], rule)\n\n    def test_multiple_paths_hi(self):\n        \"\"\"Test second alternative in multiple path grammar.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            
final_state = fst.add_state(final=True)\n            fst.add_arc(initial_state, final_state, 'hello')\n            fst.add_arc(initial_state, final_state, 'hi')\n            fst.add_arc(initial_state, final_state, 'greetings')\n        rule = self.make_rule('MultiPathRule2', _build)\n        self.decode(\"hi\", [True], rule)\n\n    def test_sequential_chain(self):\n        \"\"\"Test long sequential chain of states.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            state1 = fst.add_state()\n            state2 = fst.add_state()\n            state3 = fst.add_state()\n            final_state = fst.add_state(final=True)\n            fst.add_arc(initial_state, state1, 'the')\n            fst.add_arc(state1, state2, 'quick')\n            fst.add_arc(state2, state3, 'brown')\n            fst.add_arc(state3, final_state, 'fox')\n        rule = self.make_rule('SequentialRule', _build)\n        self.decode(\"the quick brown fox\", [True], rule)\n\n    def test_diamond_pattern(self):\n        \"\"\"Test diamond pattern with branch and merge.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            branch1 = fst.add_state()\n            branch2 = fst.add_state()\n            merge_state = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            # Initial arc\n            fst.add_arc(initial_state, branch1, 'start')\n\n            # Two branches with different paths\n            fst.add_arc(branch1, merge_state, 'left')\n            fst.add_arc(branch1, branch2, 'right')\n            fst.add_arc(branch2, merge_state, 'path')\n\n            # Merge and continue\n            fst.add_arc(merge_state, final_state, 'end')\n        rule = self.make_rule('DiamondRule', _build)\n        self.decode(\"start left end\", [True], rule)\n\n    def test_diamond_pattern_alt(self):\n        \"\"\"Test alternative path through diamond pattern.\"\"\"\n        def 
_build(fst):\n            initial_state = fst.add_state(initial=True)\n            branch1 = fst.add_state()\n            branch2 = fst.add_state()\n            merge_state = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            fst.add_arc(initial_state, branch1, 'start')\n            fst.add_arc(branch1, merge_state, 'left')\n            fst.add_arc(branch1, branch2, 'right')\n            fst.add_arc(branch2, merge_state, 'path')\n            fst.add_arc(merge_state, final_state, 'end')\n        rule = self.make_rule('DiamondRule2', _build)\n        self.decode(\"start right path end\", [True], rule)\n\n    def test_self_loop(self):\n        \"\"\"Test self-loop for optional repetition.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            loop_state = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            fst.add_arc(initial_state, loop_state, 'repeat')\n            fst.add_arc(loop_state, loop_state, 'again')  # Self-loop\n            fst.add_arc(loop_state, final_state, 'done')\n        rule = self.make_rule('LoopRule', _build)\n        self.decode(\"repeat again again done\", [True], rule)\n\n    def test_optional_path_with_epsilon(self):\n        \"\"\"Test optional path using epsilon transition.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            optional_state = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            # Direct path (skipping optional)\n            fst.add_arc(initial_state, final_state, 'hello')\n\n            # Optional path with epsilon\n            fst.add_arc(initial_state, optional_state, None)  # epsilon\n            fst.add_arc(optional_state, final_state, 'optional')\n        rule = self.make_rule('OptionalRule', _build)\n        self.decode(\"hello\", [True], rule)\n\n    def test_complex_branching(self):\n        \"\"\"Test complex branching 
structure with multiple decision points.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            branch_a = fst.add_state()\n            branch_b = fst.add_state()\n            sub_branch_a1 = fst.add_state()\n            sub_branch_a2 = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            # First level branching\n            fst.add_arc(initial_state, branch_a, 'go')\n            fst.add_arc(initial_state, branch_b, 'move')\n\n            # Branch A has sub-branches\n            fst.add_arc(branch_a, sub_branch_a1, 'left')\n            fst.add_arc(branch_a, sub_branch_a2, 'right')\n            fst.add_arc(sub_branch_a1, final_state, 'side')\n            fst.add_arc(sub_branch_a2, final_state, 'side')\n\n            # Branch B goes directly to final\n            fst.add_arc(branch_b, final_state, 'forward')\n        rule = self.make_rule('ComplexBranchRule', _build)\n        self.decode(\"go left side\", [True], rule)\n\n    def test_complex_branching_alt1(self):\n        \"\"\"Test alternative path in complex branching.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            branch_a = fst.add_state()\n            branch_b = fst.add_state()\n            sub_branch_a1 = fst.add_state()\n            sub_branch_a2 = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            fst.add_arc(initial_state, branch_a, 'go')\n            fst.add_arc(initial_state, branch_b, 'move')\n            fst.add_arc(branch_a, sub_branch_a1, 'left')\n            fst.add_arc(branch_a, sub_branch_a2, 'right')\n            fst.add_arc(sub_branch_a1, final_state, 'side')\n            fst.add_arc(sub_branch_a2, final_state, 'side')\n            fst.add_arc(branch_b, final_state, 'forward')\n        rule = self.make_rule('ComplexBranchRule2', _build)\n        self.decode(\"go right side\", [True], rule)\n\n    def 
test_complex_branching_alt2(self):\n        \"\"\"Test third alternative path in complex branching.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            branch_a = fst.add_state()\n            branch_b = fst.add_state()\n            sub_branch_a1 = fst.add_state()\n            sub_branch_a2 = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            fst.add_arc(initial_state, branch_a, 'go')\n            fst.add_arc(initial_state, branch_b, 'move')\n            fst.add_arc(branch_a, sub_branch_a1, 'left')\n            fst.add_arc(branch_a, sub_branch_a2, 'right')\n            fst.add_arc(sub_branch_a1, final_state, 'side')\n            fst.add_arc(sub_branch_a2, final_state, 'side')\n            fst.add_arc(branch_b, final_state, 'forward')\n        rule = self.make_rule('ComplexBranchRule3', _build)\n        self.decode(\"move forward\", [True], rule)\n\n    def test_multiple_epsilon_transitions(self):\n        \"\"\"Test multiple consecutive epsilon transitions.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            eps1 = fst.add_state()\n            eps2 = fst.add_state()\n            eps3 = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            fst.add_arc(initial_state, eps1, None)  # epsilon\n            fst.add_arc(eps1, eps2, None)  # epsilon\n            fst.add_arc(eps2, eps3, None)  # epsilon\n            fst.add_arc(eps3, final_state, 'finish')\n        rule = self.make_rule('MultiEpsilonRule', _build)\n        self.decode(\"finish\", [True], rule)\n\n    def test_weighted_alternatives(self):\n        \"\"\"Test weighted alternative paths (higher weight should be preferred).\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            final_state = fst.add_state(final=True)\n\n            # Add alternatives with different weights\n            
fst.add_arc(initial_state, final_state, 'test', weight=0.9)  # Higher probability\n            fst.add_arc(initial_state, final_state, 'test', weight=0.1)  # Lower probability\n        rule = self.make_rule('WeightedRule', _build)\n        self.decode(\"test\", [True], rule)\n\n    def test_parallel_sequences(self):\n        \"\"\"Test parallel sequences that merge at the end.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            seq1_s1 = fst.add_state()\n            seq1_s2 = fst.add_state()\n            seq2_s1 = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            # Sequence 1: long path\n            fst.add_arc(initial_state, seq1_s1, 'long')\n            fst.add_arc(seq1_s1, seq1_s2, 'path')\n            fst.add_arc(seq1_s2, final_state, 'here')\n\n            # Sequence 2: short path\n            fst.add_arc(initial_state, seq2_s1, 'short')\n            fst.add_arc(seq2_s1, final_state, 'way')\n        rule = self.make_rule('ParallelRule', _build)\n        self.decode(\"long path here\", [True], rule)\n\n    def test_parallel_sequences_alt(self):\n        \"\"\"Test alternative parallel sequence.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            seq1_s1 = fst.add_state()\n            seq1_s2 = fst.add_state()\n            seq2_s1 = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            fst.add_arc(initial_state, seq1_s1, 'long')\n            fst.add_arc(seq1_s1, seq1_s2, 'path')\n            fst.add_arc(seq1_s2, final_state, 'here')\n            fst.add_arc(initial_state, seq2_s1, 'short')\n            fst.add_arc(seq2_s1, final_state, 'way')\n        rule = self.make_rule('ParallelRule2', _build)\n        self.decode(\"short way\", [True], rule)\n\n    def test_nested_loops(self):\n        \"\"\"Test nested loop structures.\"\"\"\n        def _build(fst):\n            initial_state = 
fst.add_state(initial=True)\n            outer_loop = fst.add_state()\n            inner_loop = fst.add_state()\n            exit_state = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            fst.add_arc(initial_state, outer_loop, 'start')\n            fst.add_arc(outer_loop, inner_loop, 'inner')\n            fst.add_arc(inner_loop, inner_loop, 'repeat')  # Inner self-loop\n            fst.add_arc(inner_loop, outer_loop, 'outer')  # Back to outer\n            fst.add_arc(outer_loop, exit_state, 'exit')\n            fst.add_arc(exit_state, final_state, 'done')\n        rule = self.make_rule('NestedLoopRule', _build)\n        self.decode(\"start inner repeat outer exit done\", [True], rule)\n\n    def test_multiple_entry_points(self):\n        \"\"\"Test graph with multiple entry points via epsilon.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            entry1 = fst.add_state()\n            entry2 = fst.add_state()\n            merge = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            # Multiple epsilon transitions to different entry points\n            fst.add_arc(initial_state, entry1, None)  # epsilon to entry1\n            fst.add_arc(initial_state, entry2, None)  # epsilon to entry2\n\n            # Each entry has its own word\n            fst.add_arc(entry1, merge, 'alpha')\n            fst.add_arc(entry2, merge, 'beta')\n\n            # Merge to final\n            fst.add_arc(merge, final_state, 'end')\n        rule = self.make_rule('MultiEntryRule', _build)\n        self.decode(\"alpha end\", [True], rule)\n\n    def test_cascade_pattern(self):\n        \"\"\"Test cascading pattern with multiple stages.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            stage1 = fst.add_state()\n            stage2 = fst.add_state()\n            stage3 = fst.add_state()\n            final_state = 
fst.add_state(final=True)\n\n            # Stage 1: two options\n            fst.add_arc(initial_state, stage1, 'one')\n            fst.add_arc(initial_state, stage1, 'two')\n\n            # Stage 2: connects to stage1\n            fst.add_arc(stage1, stage2, 'and')\n\n            # Stage 3: two options from stage2\n            fst.add_arc(stage2, stage3, 'three')\n            fst.add_arc(stage2, stage3, 'four')\n\n            # Final\n            fst.add_arc(stage3, final_state, 'done')\n        rule = self.make_rule('CascadeRule', _build)\n        self.decode(\"one and three done\", [True], rule)\n\n    def test_backtracking_pattern(self):\n        \"\"\"Test pattern that requires backtracking in search.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            trap = fst.add_state()  # Dead end\n            good_path = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            # First arc is ambiguous\n            fst.add_arc(initial_state, trap, 'start')\n            fst.add_arc(initial_state, good_path, 'start')\n\n            # Trap has no continuation matching our test\n            fst.add_arc(trap, trap, 'wrong')\n\n            # Good path continues\n            fst.add_arc(good_path, final_state, 'right')\n        rule = self.make_rule('BacktrackRule', _build)\n        self.decode(\"start right\", [True], rule)\n\n    def test_very_long_sequence(self):\n        \"\"\"Test very long sequential chain to stress test.\"\"\"\n        def _build(fst):\n            words = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten']\n            states = [fst.add_state(initial=(i == 0), final=(i == len(words))) for i in range(len(words) + 1)]\n            for i, word in enumerate(words):\n                fst.add_arc(states[i], states[i + 1], word)\n        rule = self.make_rule('LongSequenceRule', _build)\n        self.decode(\"one two three four five six seven eight nine 
ten\", [True], rule)\n\n    def test_hub_and_spoke(self):\n        \"\"\"Test hub-and-spoke pattern with central node.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            hub = fst.add_state()\n            spoke1 = fst.add_state()\n            spoke2 = fst.add_state()\n            spoke3 = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            # All paths go through hub\n            fst.add_arc(initial_state, hub, 'center')\n\n            # Spokes from hub\n            fst.add_arc(hub, spoke1, 'north')\n            fst.add_arc(hub, spoke2, 'south')\n            fst.add_arc(hub, spoke3, 'east')\n\n            # All spokes lead to final\n            fst.add_arc(spoke1, final_state, 'end')\n            fst.add_arc(spoke2, final_state, 'end')\n            fst.add_arc(spoke3, final_state, 'end')\n        rule = self.make_rule('HubSpokeRule', _build)\n        self.decode(\"center north end\", [True], rule)\n\n    @pytest.mark.parametrize('dictation_words,expected_mask', [\n        (\"\", [False]),\n        (\"hello\", [False, True]),\n        (\"hello world\", [False, True, True]),\n    ], ids=['zero_words', 'one_word', 'two_words'])\n    def test_rule_with_dictation(self, dictation_words, expected_mask):\n        \"\"\"Test rule with dictation element: 'dictate <dictation>' with varying dictation content.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            write_state = fst.add_state()\n            dictation_state = fst.add_state()\n            end_state = fst.add_state()\n            final_state = fst.add_state(final=True)\n\n            fst.add_arc(initial_state, write_state, 'dictate')\n            fst.add_arc(write_state, dictation_state, '#nonterm:dictation')\n            fst.add_arc(dictation_state, end_state, None, '#nonterm:end')\n            fst.add_arc(end_state, final_state, None)\n\n        rule = self.make_rule('DictationRule', 
_build, has_dictation=True)\n        text = f\"dictate {dictation_words}\".strip()\n        self.decode(text, [True], rule, expected_words_are_dictation_mask=expected_mask)\n\n    def test_no_rules(self):\n        \"\"\"Test decoding when no rules are defined.\"\"\"\n        self.decode(\"hello\", [], None)\n\n    def test_no_active_rules(self):\n        \"\"\"Test decoding when no rules are active.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            final_state = fst.add_state(final=True)\n            fst.add_arc(initial_state, final_state, 'hello')\n        rule = self.make_rule('InactiveRule', _build)\n        self.decode(\"hello\", [False], None)\n\n    def test_garbage_audio(self):\n        \"\"\"Test decoder with random noise/garbage audio.\"\"\"\n        import random\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            final_state = fst.add_state(final=True)\n            fst.add_arc(initial_state, final_state, 'hello')\n        rule = self.make_rule('NoiseRule', _build)\n\n        random.seed(42)\n        audio_data = bytes(random.randint(0, 255) for _ in range(32768))\n        self.decode(audio_data, [True], None)\n\n    def test_empty_audio(self):\n        \"\"\"Test decoder with empty audio data.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            final_state = fst.add_state(final=True)\n            fst.add_arc(initial_state, final_state, 'hello')\n        rule = self.make_rule('EmptyAudioRule', _build)\n        self.decode(b'', [True], None)\n\n    def test_very_short_audio(self):\n        \"\"\"Test decoder with very short utterance.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            final_state = fst.add_state(final=True)\n            fst.add_arc(initial_state, final_state, 'hi')\n        rule = self.make_rule('ShortRule', _build)\n        self.decode(\"hi\", 
[True], rule)\n\n    def test_multiple_utterances_sequence(self):\n        \"\"\"Test decoding multiple utterances in sequence.\"\"\"\n        def _build(fst):\n            initial_state = fst.add_state(initial=True)\n            final_state = fst.add_state(final=True)\n            fst.add_arc(initial_state, final_state, 'hello')\n            fst.add_arc(initial_state, final_state, 'world')\n            fst.add_arc(initial_state, final_state, 'test')\n        rule = self.make_rule('MultiUtteranceRule', _build)\n\n        for text in ['hello', 'world', 'test']:\n            self.decode(text, [True], rule)\n\n\nclass TestAlternativeDictation:\n    \"\"\"Tests for alternative dictation feature.\"\"\"\n\n    @pytest.fixture(autouse=True)\n    def setup(self, change_to_test_dir, audio_generator):\n        self.audio_generator = audio_generator\n        self.alternative_dictation_calls = []\n\n    @pytest.fixture\n    def compiler_with_mock(self):\n        \"\"\"Fixture providing compiler with mock alternative dictation.\"\"\"\n        def mock_alternative_dictation_func(audio_data):\n            self.alternative_dictation_calls.append(audio_data)\n            return 'ALTERNATIVE_TEXT'\n        return Compiler(alternative_dictation=mock_alternative_dictation_func)\n\n    def create_mock_rule(self, compiler, has_dictation=True):\n        \"\"\"Helper to create a mock KaldiRule for testing.\"\"\"\n        return KaldiRule(compiler, 'mock_rule', has_dictation=has_dictation)\n\n    def parse_with_dictation_info(self, compiler, output_text, audio_data, word_align):\n        \"\"\"Helper to parse output with dictation info.\"\"\"\n        def dictation_info_func():\n            return audio_data, word_align\n        return compiler.parse_output(output_text, dictation_info_func=dictation_info_func)\n\n    def test_alternative_dictation_callable_check(self):\n        \"\"\"Test that alternative_dictation must be callable.\"\"\"\n        compiler = 
Compiler(alternative_dictation=lambda x: 'text')\n        assert compiler.alternative_dictation is not None\n\n        compiler = Compiler(alternative_dictation=None)\n        assert compiler.alternative_dictation is None\n\n    def test_alternative_dictation_not_called_without_dictation(self, compiler_with_mock):\n        \"\"\"Test alternative dictation is not called for rules without dictation.\"\"\"\n        decoder = compiler_with_mock.init_decoder()\n\n        rule = KaldiRule(compiler_with_mock, 'no_dictation_rule', has_dictation=False)\n        fst = rule.fst\n        initial_state = fst.add_state(initial=True)\n        final_state = fst.add_state(final=True)\n        fst.add_arc(initial_state, final_state, 'hello')\n        rule.compile().load()\n\n        decoder.decode(self.audio_generator('hello'), True, [True])\n        output, info = decoder.get_output()\n\n        kaldi_rule, words, words_are_dictation_mask = compiler_with_mock.parse_output(output, dictation_info_func=None)\n\n        assert len(self.alternative_dictation_calls) == 0\n        assert kaldi_rule == rule\n        assert words == ['hello']\n\n    def test_alternative_dictation_integration_full_decode(self, change_to_test_dir):\n        \"\"\"Full integration test: rule with dictation, decode audio, alternative dictation called and replaces text.\"\"\"\n        from kaldi_active_grammar import PlainDictationRecognizer\n\n        alternative_calls = []\n        alternative_audio_received = []\n        alternative_recognized_texts = []\n\n        def alternative_dictation_func(audio_data):\n            \"\"\"Uses an independent PlainDictationRecognizer to decode the audio.\"\"\"\n            alternative_calls.append(True)\n            alternative_audio_received.append(len(audio_data))\n            alt_recognizer = PlainDictationRecognizer()\n            alt_text, alt_info = alt_recognizer.decode_utterance(audio_data)\n            alternative_recognized_texts.append(alt_text)\n            
return alt_text\n\n        compiler = Compiler(alternative_dictation=alternative_dictation_func)\n        decoder = compiler.init_decoder()\n\n        # Create rule with dictation: \"hello <dictation>\"\n        rule = KaldiRule(compiler, 'dictation_rule', has_dictation=True)\n        fst = rule.fst\n\n        initial_state = fst.add_state(initial=True)\n        hello_state = fst.add_state()\n        dictation_state = fst.add_state()\n        end_state = fst.add_state()\n        final_state = fst.add_state(final=True)\n\n        # Pattern: \"hello\" followed by dictation\n        fst.add_arc(initial_state, hello_state, 'hello')\n        fst.add_arc(hello_state, dictation_state, '#nonterm:dictation', '#nonterm:dictation_cloud')  # #nonterm:dictation must be on ilabel; cloud variant on olabel\n        fst.add_arc(dictation_state, end_state, None, '#nonterm:end')\n        fst.add_arc(end_state, final_state, None)\n\n        rule.compile().load()\n\n        # Generate audio for \"hello world\"\n        audio_data = self.audio_generator('hello world')\n\n        # Decode\n        decoder.decode(audio_data, True, [True])\n        output, info = decoder.get_output()\n\n        # Get word alignment for alternative dictation\n        word_align = decoder.get_word_align(output)\n\n        # Create dictation_info_func that returns audio and word_align\n        def dictation_info_func():\n            return audio_data, word_align\n\n        # Parse with alternative dictation\n        kaldi_rule, words, words_are_dictation_mask = compiler.parse_output(\n            output, dictation_info_func=dictation_info_func)\n\n        # Verify alternative dictation was called\n        assert len(alternative_calls) > 0, \"Alternative dictation should have been called\"\n        assert len(alternative_audio_received) > 0, \"Alternative dictation should have received audio\"\n        assert len(alternative_recognized_texts) > 0, \"Alternative dictation should have recognized text\"\n\n       
 # Verify the alternative recognizer produced some output\n        alt_text = alternative_recognized_texts[0]\n        assert alt_text, f\"Alternative recognizer should produce text, got: {alt_text}\"\n\n        # The alternative text should be in the final words (replacing original dictation)\n        words_str = ' '.join(words)\n        assert alt_text in words_str or any(word in words for word in alt_text.split()), \\\n            f\"Alternative text '{alt_text}' should be in words: {words}\"\n\n        # Verify 'hello' is still there (not part of dictation)\n        assert 'hello' in words, f\"Hello word should be preserved: {words}\"\n\n        # Verify rule was recognized\n        assert kaldi_rule == rule\n\n        # Verify dictation mask is correct\n        assert len(words) == len(words_are_dictation_mask)\n        assert words_are_dictation_mask[words.index('hello')] == False, \"Hello should not be marked as dictation\"\n\n    def test_alternative_dictation_not_called_without_cloud_nonterm(self, compiler_with_mock):\n        \"\"\"Test alternative dictation not called when #nonterm:dictation_cloud not in output.\"\"\"\n        decoder = compiler_with_mock.init_decoder()\n\n        rule = KaldiRule(compiler_with_mock, 'no_cloud_rule', has_dictation=True)\n        fst = rule.fst\n        initial_state = fst.add_state(initial=True)\n        final_state = fst.add_state(final=True)\n        fst.add_arc(initial_state, final_state, 'test')\n        rule.compile().load()\n\n        decoder.decode(self.audio_generator('test'), True, [True])\n        output, info = decoder.get_output()\n\n        mock_audio = b'mock_audio_data'\n        mock_word_align = [('test', 0, 1000)]\n        kaldi_rule, words, words_are_dictation_mask = self.parse_with_dictation_info(compiler_with_mock, output, mock_audio, mock_word_align)\n\n        assert len(self.alternative_dictation_calls) == 0\n\n    def test_alternative_dictation_word_align_parsing(self, compiler_with_mock):\n       
 \"\"\"Test parsing of word_align data for dictation spans.\"\"\"\n        output_text = '#nonterm:rule0 start #nonterm:dictation_cloud original text #nonterm:end finish'\n        mock_audio = b'\\x00' * 32000\n        mock_word_align = [\n            ('#nonterm:rule0', 0, 0),\n            ('start', 0, 8000),\n            ('#nonterm:dictation_cloud', 8000, 0),\n            ('original', 8000, 4000),\n            ('text', 12000, 4000),\n            ('#nonterm:end', 16000, 0),\n            ('finish', 16000, 8000),\n        ]\n\n        self.create_mock_rule(compiler_with_mock)\n        kaldi_rule, words, words_are_dictation_mask = self.parse_with_dictation_info(compiler_with_mock, output_text, mock_audio, mock_word_align)\n\n        assert len(self.alternative_dictation_calls) == 1\n        assert len(self.alternative_dictation_calls[0]) > 0\n        assert 'ALTERNATIVE_TEXT' in words or words == ['start', 'finish']\n\n    @pytest.mark.parametrize('output_text,word_align,audio_size,expected_audio_size', [\n        (\n            '#nonterm:rule0 start #nonterm:dictation_cloud final words #nonterm:end',\n            [\n                ('#nonterm:rule0', 0, 0),\n                ('start', 0, 8000),\n                ('#nonterm:dictation_cloud', 8000, 0),\n                ('final', 8000, 4000),\n                ('words', 12000, 4000),\n                ('#nonterm:end', 16000, 0),\n            ],\n            32000,\n            24000,  # 32000 - 8000\n        ),\n        (\n            '#nonterm:rule0 start #nonterm:dictation_cloud middle text #nonterm:end finish',\n            [\n                ('#nonterm:rule0', 0, 0),\n                ('start', 0, 4000),\n                ('#nonterm:dictation_cloud', 4000, 0),\n                ('middle', 4000, 4000),\n                ('text', 8000, 4000),\n                ('#nonterm:end', 12000, 0),\n                ('finish', 16000, 4000),\n            ],\n            32000,\n            10000,  # 14000 - 4000 (half gap to next word)\n   
     ),\n    ], ids=['end_of_utterance', 'middle_of_utterance'])\n    def test_alternative_dictation_span_calculation(self, compiler_with_mock, output_text, word_align, audio_size, expected_audio_size):\n        \"\"\"Test dictation span calculation for various positions.\"\"\"\n        mock_audio = b'\\x00' * audio_size\n\n        self.create_mock_rule(compiler_with_mock)\n        kaldi_rule, words, words_are_dictation_mask = self.parse_with_dictation_info(compiler_with_mock, output_text, mock_audio, word_align)\n\n        assert len(self.alternative_dictation_calls) == 1\n        assert len(self.alternative_dictation_calls[0]) == expected_audio_size\n\n    def test_alternative_dictation_multiple_spans(self):\n        \"\"\"Test handling multiple dictation spans in single utterance.\"\"\"\n        call_count = [0]\n\n        def multi_alternative_func(audio_data):\n            call_count[0] += 1\n            return f'ALT_{call_count[0]}'\n\n        compiler = Compiler(alternative_dictation=multi_alternative_func)\n\n        output_text = '#nonterm:rule0 start #nonterm:dictation_cloud first #nonterm:end middle #nonterm:dictation_cloud second #nonterm:end finish'\n        mock_audio = b'\\x00' * 48000\n        mock_word_align = [\n            ('#nonterm:rule0', 0, 0),\n            ('start', 0, 4000),\n            ('#nonterm:dictation_cloud', 4000, 0),\n            ('first', 4000, 4000),\n            ('#nonterm:end', 8000, 0),\n            ('middle', 12000, 4000),\n            ('#nonterm:dictation_cloud', 16000, 0),\n            ('second', 16000, 4000),\n            ('#nonterm:end', 20000, 0),\n            ('finish', 24000, 4000),\n        ]\n\n        self.create_mock_rule(compiler)\n        kaldi_rule, words, words_are_dictation_mask = self.parse_with_dictation_info(compiler, output_text, mock_audio, mock_word_align)\n\n        assert call_count[0] == 2\n\n    @pytest.mark.parametrize('alternative_func,expected_words', [\n        (lambda x: None, ['original', 
'text']),\n        (lambda x: '', ['original', 'text']),\n    ], ids=['returns_none', 'returns_empty_string'])\n    def test_alternative_dictation_fallback(self, alternative_func, expected_words):\n        \"\"\"Test fallback to original text when alternative_dictation returns falsy value.\"\"\"\n        compiler = Compiler(alternative_dictation=alternative_func)\n\n        output_text = '#nonterm:rule0 #nonterm:dictation_cloud original text #nonterm:end'\n        mock_audio = b'\\x00' * 16000\n        mock_word_align = [\n            ('#nonterm:rule0', 0, 0),\n            ('#nonterm:dictation_cloud', 0, 0),\n            ('original', 0, 4000),\n            ('text', 4000, 4000),\n            ('#nonterm:end', 8000, 0),\n        ]\n\n        self.create_mock_rule(compiler)\n        kaldi_rule, words, words_are_dictation_mask = self.parse_with_dictation_info(compiler, output_text, mock_audio, mock_word_align)\n\n        for expected_word in expected_words:\n            assert expected_word in words\n\n    def test_alternative_dictation_exception_handling(self):\n        \"\"\"Test that exceptions in alternative_dictation are caught and logged.\"\"\"\n        def failing_func(audio_data):\n            raise ValueError('Test exception')\n\n        compiler = Compiler(alternative_dictation=failing_func)\n\n        output_text = '#nonterm:rule0 #nonterm:dictation_cloud original #nonterm:end'\n        mock_audio = b'\\x00' * 8000\n        mock_word_align = [\n            ('#nonterm:rule0', 0, 0),\n            ('#nonterm:dictation_cloud', 0, 0),\n            ('original', 0, 4000),\n            ('#nonterm:end', 4000, 0),\n        ]\n\n        self.create_mock_rule(compiler)\n        kaldi_rule, words, words_are_dictation_mask = self.parse_with_dictation_info(compiler, output_text, mock_audio, mock_word_align)\n\n        assert 'original' in words\n\n    def test_alternative_dictation_invalid_type_raises(self):\n        \"\"\"Test that invalid alternative_dictation type raises 
 no uncaught exception; parse_output still returns output.\"\"\"\n        compiler = Compiler(alternative_dictation='not_callable')\n\n        output_text = '#nonterm:rule0 #nonterm:dictation_cloud text #nonterm:end'\n        mock_audio = b'\\x00' * 8000\n        mock_word_align = [\n            ('#nonterm:rule0', 0, 0),\n            ('#nonterm:dictation_cloud', 0, 0),\n            ('text', 0, 4000),\n            ('#nonterm:end', 4000, 0),\n        ]\n\n        self.create_mock_rule(compiler)\n        kaldi_rule, words, words_are_dictation_mask = self.parse_with_dictation_info(compiler, output_text, mock_audio, mock_word_align)\n\n        assert words is not None\n\n    def test_alternative_dictation_audio_slice_accuracy(self):\n        \"\"\"Test that correct audio slice is passed to alternative_dictation.\"\"\"\n        received_audio = []\n\n        def capture_audio_func(audio_data):\n            received_audio.append(audio_data)\n            return 'replaced'\n\n        compiler = Compiler(alternative_dictation=capture_audio_func)\n\n        output_text = '#nonterm:rule0 #nonterm:dictation_cloud test #nonterm:end'\n        mock_audio = b'\\x01' * 4000 + b'\\x02' * 4000 + b'\\x03' * 4000\n        mock_word_align = [\n            ('#nonterm:rule0', 0, 0),\n            ('#nonterm:dictation_cloud', 4000, 0),\n            ('test', 4000, 4000),\n            ('#nonterm:end', 8000, 0),\n        ]\n\n        self.create_mock_rule(compiler)\n        kaldi_rule, words, words_are_dictation_mask = self.parse_with_dictation_info(compiler, output_text, mock_audio, mock_word_align)\n\n        assert len(received_audio) == 1\n        assert received_audio[0] == b'\\x02' * 4000 + b'\\x03' * 4000\n"
  },
  {
    "path": "tests/test_package.py",
    "content": "\nimport re\n\ndef test_import_and_version():\n    import kaldi_active_grammar as kag\n    assert isinstance(kag.__version__, str)\n    assert kag.__version__.strip() != \"\"\n\n    version_pattern = r'^\\d+\\.\\d+\\.\\d+(?:[-+].+)?$'\n    assert re.match(version_pattern, kag.__version__), f\"Version '{kag.__version__}' does not match semantic versioning format\"\n"
  },
  {
    "path": "tests/test_plain_dictation.py",
    "content": "\nimport math\nimport random\n\nimport pytest\n\nfrom kaldi_active_grammar import PlainDictationRecognizer\nfrom tests.helpers import *\n\n\n@pytest.fixture\ndef recognizer(change_to_test_dir):\n    return PlainDictationRecognizer()\n\n\ndef test_initialization(recognizer):\n    assert isinstance(recognizer, PlainDictationRecognizer)\n    assert hasattr(recognizer, 'decoder')\n    assert hasattr(recognizer, '_compiler')\n\n\n@pytest.mark.parametrize(\"test_text\", [\n    \"hello world\",\n    \"this is a longer sentence to test the speech recognition capabilities\",\n    \"testing active grammar framework\",\n    \"hello there\",\n    \"one two three four five\",\n    \"testing numbers and words\",\n])\ndef test_basic_dictation(recognizer, audio_generator, test_text):\n    audio_data = audio_generator(test_text)\n    output_str, info = recognizer.decode_utterance(audio_data)\n    assert isinstance(output_str, str)\n    assert output_str == test_text\n    assert_info_shape(info)\n\n\ndef test_empty_audio(recognizer):\n    output_str, info = recognizer.decode_utterance(b'')\n    assert isinstance(output_str, str)\n    assert output_str == \"\"\n    assert_info_shape(info)\n\n\ndef test_garbage_audio(recognizer):\n    random.seed(42)\n    audio_data = bytes(random.randint(0, 255) for _ in range(32768))\n    output_str, info = recognizer.decode_utterance(audio_data)\n    assert isinstance(output_str, str)\n    assert output_str == \"\"\n    assert_info_shape(info)\n\n\ndef test_multiple_utterances(recognizer, audio_generator):\n    test_utterances = [\n        \"first utterance\",\n        \"second utterance here\",\n        \"and a third one\",\n    ]\n    for test_text in test_utterances:\n        audio_data = audio_generator(test_text)\n        output_str, info = recognizer.decode_utterance(audio_data)\n        assert isinstance(output_str, str)\n        assert output_str == test_text\n        assert_info_shape(info)\n\n\nclass 
TestPlainDictationWithFST:\n    \"\"\"Test PlainDictationRecognizer using an HCLG.fst file\"\"\"\n\n    @pytest.fixture(autouse=True)\n    def setup(self, change_to_test_dir):\n        # pytest.skip() during setup skips every test in the class, so no\n        # per-test availability flag or guard is needed.\n        try:\n            self.recognizer = PlainDictationRecognizer(fst_file='HCLG.fst')\n        except Exception:\n            pytest.skip(\"HCLG.fst not available for testing\")\n\n    def test_initialization(self):\n        assert isinstance(self.recognizer, PlainDictationRecognizer)\n        assert hasattr(self.recognizer, 'decoder')\n        assert hasattr(self.recognizer, '_model')\n\n    def test_basic_dictation(self, audio_generator):\n        test_text = \"testing plain decoder\"\n        audio_data = audio_generator(test_text)\n        output_str, info = self.recognizer.decode_utterance(audio_data)\n        assert isinstance(output_str, str)\n        assert_info_shape(info)\n\n\n@pytest.mark.parametrize(\"chunk_size\", [512, 1024, 2048, 4096, 8192, 16384])\n@pytest.mark.parametrize(\"test_text\", [\n    \"testing small chunk size\",\n    \"medium chunk size testing\",\n    \"large chunk size for testing\",\n])\ndef test_chunked_decode(recognizer, audio_generator, chunk_size, test_text):\n    audio_data = audio_generator(test_text)\n    output_str, info = recognizer.decode_utterance(audio_data, chunk_size=chunk_size)\n    assert isinstance(output_str, str)\n    assert output_str == test_text\n    assert_info_shape(info)\n\n\ndef test_custom_tmp_dir(change_to_test_dir, audio_generator, tmp_path):\n    recognizer = PlainDictationRecognizer(tmp_dir=str(tmp_path))\n    test_text = \"custom temporary directory\"\n    audio_data = audio_generator(test_text)\n    output_str, info = recognizer.decode_utterance(audio_data)\n    assert isinstance(output_str, 
str)\n    assert output_str == test_text\n    assert_info_shape(info)\n\n\ndef test_custom_config(change_to_test_dir, audio_generator):\n    config = {\n        'beam': 13.0,\n        'max_active': 7000,\n    }\n    recognizer = PlainDictationRecognizer(config=config)\n    test_text = \"custom configuration test\"\n    audio_data = audio_generator(test_text)\n    output_str, info = recognizer.decode_utterance(audio_data)\n    assert isinstance(output_str, str)\n    assert output_str == test_text\n    assert_info_shape(info)\n\n\ndef test_very_short_audio(recognizer, audio_generator):\n    test_text = \"hi\"\n    audio_data = audio_generator(test_text)\n    output_str, info = recognizer.decode_utterance(audio_data)\n    assert isinstance(output_str, str)\n    assert_info_shape(info)\n\n\ndef test_very_long_audio(recognizer, audio_generator):\n    test_text = \" \".join([\n        \"this is a very long sentence that goes on and on\",\n        \"with many words to test the handling of extended audio\",\n        \"and ensure that the decoder can process lengthy utterances\",\n        \"without any issues or errors occurring during processing\",\n    ])\n    audio_data = audio_generator(test_text)\n    output_str, info = recognizer.decode_utterance(audio_data)\n    assert isinstance(output_str, str)\n    assert_info_shape(info)\n\n\ndef test_repeated_words(recognizer, audio_generator):\n    test_text = \"repeat repeat repeat the words\"\n    audio_data = audio_generator(test_text)\n    output_str, info = recognizer.decode_utterance(audio_data)\n    assert isinstance(output_str, str)\n    assert_info_shape(info)\n\n\ndef test_sequential_empty_audio(recognizer):\n    for _ in range(3):\n        output_str, info = recognizer.decode_utterance(b'')\n        assert isinstance(output_str, str)\n        assert output_str == \"\"\n        assert_info_shape(info)\n\n\ndef test_alternating_empty_and_valid(recognizer, audio_generator):\n    test_text = \"valid audio\"\n\n    
output_str, info = recognizer.decode_utterance(b'')\n    assert output_str == \"\"\n    assert_info_shape(info)\n\n    audio_data = audio_generator(test_text)\n    output_str, info = recognizer.decode_utterance(audio_data)\n    assert output_str == test_text\n    assert_info_shape(info)\n\n    output_str, info = recognizer.decode_utterance(b'')\n    assert output_str == \"\"\n    assert_info_shape(info)\n\n\ndef test_info_structure(recognizer, audio_generator):\n    test_text = \"check info dictionary\"\n    audio_data = audio_generator(test_text)\n    output_str, info = recognizer.decode_utterance(audio_data)\n\n    assert_info_shape(info)\n\n    assert 0.0 <= info['confidence'] <= 1.0 or math.isnan(info['confidence'])\n    assert 0.0 <= info['expected_error_rate'] <= 1.0 or math.isnan(info['expected_error_rate'])\n\n\ndef test_info_consistency(change_to_test_dir, audio_generator):\n    test_text = \"consistent info values\"\n    audio_data = audio_generator(test_text)\n\n    recognizer1 = PlainDictationRecognizer()\n    output_str1, info1 = recognizer1.decode_utterance(audio_data)\n\n    recognizer2 = PlainDictationRecognizer()\n    output_str2, info2 = recognizer2.decode_utterance(audio_data)\n\n    assert output_str1 == output_str2\n\n    assert_info_shape(info1)\n    assert_info_shape(info2)\n    # pytest.approx with a relative tolerance avoids a ZeroDivisionError\n    # when a score happens to be exactly zero.\n    relative_tolerance = 0.01\n    assert info1['likelihood'] == pytest.approx(info2['likelihood'], rel=relative_tolerance)\n    assert info1['am_score'] == pytest.approx(info2['am_score'], rel=relative_tolerance)\n    assert info1['lm_score'] == pytest.approx(info2['lm_score'], rel=relative_tolerance)\n"
  }
]