[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n"
  },
  {
    "path": "AUTHORS",
    "content": "WakaQ is written and maintained by Alan Hamlett and various contributors:\n\nDevelopment Lead\n----------------\n\n- Alan Hamlett <alan.hamlett@gmail.com>\n\n"
  },
  {
    "path": "CHANGES.md",
    "content": "# CHANGES\n\n## 4.0.5 (2026-02-03) [commits](https://github.com/wakatime/wakaq/compare/4.0.4...4.0.5)\n\n#### Bugfix\n\n- Handle BlockingIOError when parent process writing to log file.\n- Handle missing process when checking ram usage.\n- Fix required python version 3.14.\n\n## 4.0.4 (2026-02-01) [commits](https://github.com/wakatime/wakaq/compare/4.0.3...4.0.4)\n\n#### Bugfix\n\n- Fix import path.\n\n## 4.0.3 (2026-02-01) [commits](https://github.com/wakatime/wakaq/compare/4.0.2...4.0.3)\n\n#### Bugfix\n\n- Handle redis pubsub connection errors and attempt to reconnect.\n\n## 4.0.2 (2026-02-01) [commits](https://github.com/wakatime/wakaq/compare/4.0.1...4.0.2)\n\n#### Bugfix\n\n- Fix picking child process using most RAM.\n- Handle corrupted ping messages from child processes when server out of available RAM.\n\n## 4.0.1 (2025-12-02) [commits](https://github.com/wakatime/wakaq/compare/4.0.0...4.0.1)\n\n#### Bugfix\n\n- Fix accessing constant value.\n\n## 4.0.0 (2025-12-02) [commits](https://github.com/wakatime/wakaq/compare/3.0.7...4.0.0)\n\n#### Breaking\n\n- Only support Python 3.14 and newer.\n\n## 3.0.7 (2025-05-01) [commits](https://github.com/wakatime/wakaq/compare/3.0.6...3.0.7)\n\n#### Bugfix\n\n- Ignore SoftTimeout in parent process.\n- Remove deprecated charset redis config option.\n\n## 3.0.6 (2025-01-04) [commits](https://github.com/wakatime/wakaq/compare/3.0.5...3.0.6)\n\n#### Bugfix\n\n- Remove noisy debug log.\n\n## 3.0.5 (2024-11-21) [commits](https://github.com/wakatime/wakaq/compare/3.0.4...3.0.5)\n\n#### Bugfix\n\n- Stop waiting for async tasks after exception raised.\n[#7](https://github.com/wakatime/wakaq/issues/7)\n\n## 3.0.4 (2024-11-20) [commits](https://github.com/wakatime/wakaq/compare/3.0.3...3.0.4)\n\n#### Bugfix\n\n- Fix raising asyncio.exceptions.CancelledError when async task hits soft timeout.\n\n## 3.0.3 (2024-11-13) [commits](https://github.com/wakatime/wakaq/compare/3.0.2...3.0.3)\n\n#### Bugfix\n\n- Fix 
UnboundLocalError for async task context\n[#15](https://github.com/wakatime/wakaq/pull/15)\n\n## 3.0.2 (2024-11-13) [commits](https://github.com/wakatime/wakaq/compare/3.0.1...3.0.2)\n\n#### Bugfix\n\n- Support Python versions older than 3.11.\n\n## 3.0.1 (2024-11-13) [commits](https://github.com/wakatime/wakaq/compare/3.0.0...3.0.1)\n\n#### Bugfix\n\n- Cancel async tasks when soft_timeout reached.\n[#14](https://github.com/wakatime/wakaq/pull/14)\n- Fix setting current async task for logging.\n[#14](https://github.com/wakatime/wakaq/pull/14)\n\n## 3.0.0 (2024-11-12) [commits](https://github.com/wakatime/wakaq/compare/2.1.25...3.0.0)\n\n#### Breaking\n\n- Custom task wrapper should no longer use an inner function.\n\n#### Feature\n\n- Support for async concurrent tasks on the same worker process.\n[#2](https://github.com/wakatime/wakaq/issues/2)\n\n## 2.1.25 (2024-11-07) [commits](https://github.com/wakatime/wakaq/compare/2.1.24...2.1.25)\n\n#### Misc\n\n- Synchronous mode for easier debugging.\n[#10](https://github.com/wakatime/wakaq/pull/10)\n\n## 2.1.24 (2024-08-19) [commits](https://github.com/wakatime/wakaq/compare/2.1.23...2.1.24)\n\n#### Bugfix\n\n- Fix number of cores not resolving to integer.\n\n## 2.1.23 (2023-12-16) [commits](https://github.com/wakatime/wakaq/compare/2.1.22...2.1.23)\n\n#### Bugfix\n\n- Fix catching out of memory errors when executing broadcast tasks.\n\n## 2.1.22 (2023-12-16) [commits](https://github.com/wakatime/wakaq/compare/2.1.21...2.1.22)\n\n#### Bugfix\n\n- Fix logging memory errors and treat BrokenPipeError as mem error.\n\n## 2.1.21 (2023-12-15) [commits](https://github.com/wakatime/wakaq/compare/2.1.20...2.1.21)\n\n#### Bugfix\n\n- Log memory errors at debug level.\n\n## 2.1.20 (2023-12-14) [commits](https://github.com/wakatime/wakaq/compare/2.1.19...2.1.20)\n\n#### Bugfix\n\n- Silently exit child worker(s) when parent process missing.\n\n## 2.1.19 (2023-11-11) 
[commits](https://github.com/wakatime/wakaq/compare/2.1.18...2.1.19)\n\n#### Bugfix\n\n- Further prevent lost tasks when OOM kills parent worker process.\n\n## 2.1.18 (2023-11-11) [commits](https://github.com/wakatime/wakaq/compare/2.1.17...2.1.18)\n\n#### Bugfix\n\n- Prevent lost tasks when OOM kills parent worker process.\n\n## 2.1.17 (2023-11-10) [commits](https://github.com/wakatime/wakaq/compare/2.1.16...2.1.17)\n\n#### Bugfix\n\n- Fix typo.\n\n## 2.1.16 (2023-11-10) [commits](https://github.com/wakatime/wakaq/compare/2.1.15...2.1.16)\n\n#### Bugfix\n\n- Fix function decorator callbacks after_worker_started, etc.\n\n## 2.1.15 (2023-11-10) [commits](https://github.com/wakatime/wakaq/compare/2.1.14...2.1.15)\n\n#### Bugfix\n\n- Unsubscribe from pubsub while worker processes are waiting to exit.\n\n## 2.1.14 (2023-11-09) [commits](https://github.com/wakatime/wakaq/compare/2.1.13...2.1.14)\n\n#### Bugfix\n\n- Add missing wrapped log.handlers property.\n\n## 2.1.13 (2023-11-09) [commits](https://github.com/wakatime/wakaq/compare/2.1.12...2.1.13)\n\n#### Misc\n\n- Ignore logging errors.\n\n## 2.1.12 (2023-11-09) [commits](https://github.com/wakatime/wakaq/compare/2.1.11...2.1.12)\n\n#### Bugfix\n\n- Only refork crashed workers once per wait_timeout.\n\n## 2.1.11 (2023-11-09) [commits](https://github.com/wakatime/wakaq/compare/2.1.10...2.1.11)\n\n#### Bugfix\n\n- Postpone forking workers at startup until RAM usage below max threshold.\n\n## 2.1.10 (2023-11-09) [commits](https://github.com/wakatime/wakaq/compare/2.1.9...2.1.10)\n\n#### Bugfix\n\n- Postpone forking missing child workers while using too much RAM.\n\n## 2.1.9 (2023-11-08) [commits](https://github.com/wakatime/wakaq/compare/2.1.8...2.1.9)\n\n#### Bugfix\n\n- Prevent UnboundLocalError from using task_name var before assignment.\n\n## 2.1.8 (2023-11-08) [commits](https://github.com/wakatime/wakaq/compare/2.1.7...2.1.8)\n\n#### Bugfix\n\n- Prevent ValueError when unpacking invalid message from child 
process.\n\n## 2.1.7 (2023-11-08) [commits](https://github.com/wakatime/wakaq/compare/2.1.6...2.1.7)\n\n#### Bugfix\n\n- Prevent ValueError if no worker processes spawned when checking max mem usage.\n\n## 2.1.6 (2023-10-11) [commits](https://github.com/wakatime/wakaq/compare/2.1.5...2.1.6)\n\n#### Misc\n\n- Log how long a task ran before exiting when killed because max_mem_percent reached.\n\n## 2.1.5 (2023-10-11) [commits](https://github.com/wakatime/wakaq/compare/2.1.4...2.1.5)\n\n#### Bugfix\n\n- Prevent resetting max_mem_reached_at when unrelated child process exits.\n\n## 2.1.4 (2023-10-11) [commits](https://github.com/wakatime/wakaq/compare/2.1.3...2.1.4)\n\n#### Misc\n\n- Less noisy logs when max_mem_percent reached.\n- Allow restarting child worker processes more than once per soft timeout.\n\n## 2.1.3 (2023-10-11) [commits](https://github.com/wakatime/wakaq/compare/2.1.2...2.1.3)\n\n#### Bugfix\n\n- Wait for child worker to finish processing current task when max_mem_percent reached.\n\n## 2.1.2 (2023-10-09) [commits](https://github.com/wakatime/wakaq/compare/2.1.1...2.1.2)\n\n#### Misc\n\n- Log mem usage and current task when max_mem_percent threshold reached.\n\n## 2.1.1 (2023-10-09) [commits](https://github.com/wakatime/wakaq/compare/2.1.0...2.1.1)\n\n#### Bugfix\n\n- Fix setting max_mem_percent on WakaQ.\n\n## 2.1.0 (2023-09-22) [commits](https://github.com/wakatime/wakaq/compare/2.0.2...2.1.0)\n\n#### Feature\n\n- Include number of workers connected when inspecting queues.\n- Log queue params on startup.\n\n#### Misc\n\n- Improve docs in readme.\n\n## 2.0.2 (2022-12-09) [commits](https://github.com/wakatime/wakaq/compare/2.0.1...2.0.2)\n\n#### Bugfix\n\n- Make sure to catch system exceptions to prevent worker infinite loops.\n\n## 2.0.1 (2022-12-09) [commits](https://github.com/wakatime/wakaq/compare/2.0.0...2.0.1)\n\n#### Bugfix\n\n- Always catch SoftTimeout even when nested in exception context.\n\n## 2.0.0 (2022-11-18) 
[commits](https://github.com/wakatime/wakaq/compare/1.2.0...2.0.0)\n\n#### Feature\n\n- Support bytes in task arguments.\n\n#### Misc\n\n- Tasks always receive datetimes in UTC without tzinfo.\n\n## 1.3.0 (2022-10-05) [commits](https://github.com/wakatime/wakaq/compare/1.2.1...1.3.0)\n\n#### Feature\n\n- Add username, password, and db redis connection options.\n[#6](https://github.com/wakatime/wakaq/issues/6)\n\n## 1.2.1 (2022-09-20) [commits](https://github.com/wakatime/wakaq/compare/1.2.0...1.2.1)\n\n#### Bugfix\n\n- Prevent reading from Redis when no queues defined.\n\n#### Misc\n\n- Improve error message when app path not WakaQ instance.\n\n## 1.2.0 (2022-09-17) [commits](https://github.com/wakatime/wakaq/compare/1.1.8...1.2.0)\n\n#### Feature\n\n- Util functions to peek at tasks in queues.\n\n## 1.1.8 (2022-09-15) [commits](https://github.com/wakatime/wakaq/compare/1.1.7...1.1.8)\n\n#### Bugfix\n\n- Ignore SoftTimeout in child when not processing any task.\n\n## 1.1.7 (2022-09-15) [commits](https://github.com/wakatime/wakaq/compare/1.1.6...1.1.7)\n\n#### Bugfix\n\n- Allow custom timeouts defined on task decorator.\n\n## 1.1.6 (2022-09-15) [commits](https://github.com/wakatime/wakaq/compare/1.1.5...1.1.6)\n\n#### Bugfix\n\n- All timeouts should accept timedelta or int seconds.\n\n## 1.1.5 (2022-09-15) [commits](https://github.com/wakatime/wakaq/compare/1.1.4...1.1.5)\n\n#### Bugfix\n\n- Fix typo.\n\n## 1.1.4 (2022-09-15) [commits](https://github.com/wakatime/wakaq/compare/1.1.3...1.1.4)\n\n#### Bugfix\n\n- Fix setting task and queue on child from ping.\n\n## 1.1.3 (2022-09-15) [commits](https://github.com/wakatime/wakaq/compare/1.1.2...1.1.3)\n\n#### Bugfix\n\n- Fix sending task and queue to parent process.\n\n## 1.1.2 (2022-09-14) [commits](https://github.com/wakatime/wakaq/compare/1.1.1...1.1.2)\n\n#### Bugfix\n\n- Fix getattr.\n\n## 1.1.1 (2022-09-14) [commits](https://github.com/wakatime/wakaq/compare/1.1.0...1.1.1)\n\n#### Bugfix\n\n- Add missing child 
timeout class attributes.\n\n## 1.1.0 (2022-09-14) [commits](https://github.com/wakatime/wakaq/compare/1.0.6...1.1.0)\n\n#### Feature\n\n- Ability to overwrite timeouts per task or queue.\n\n## 1.0.6 (2022-09-08) [commits](https://github.com/wakatime/wakaq/compare/1.0.5...1.0.6)\n\n#### Bugfix\n\n- Prevent unknown task crashing worker process.\n\n## 1.0.5 (2022-09-08) [commits](https://github.com/wakatime/wakaq/compare/1.0.4...1.0.5)\n\n#### Bugfix\n\n- Make sure logging has current task set.\n\n## 1.0.4 (2022-09-07) [commits](https://github.com/wakatime/wakaq/compare/1.0.3...1.0.4)\n\n#### Bugfix\n\n- Fix auto retrying tasks on soft timeout.\n\n## 1.0.3 (2022-09-07) [commits](https://github.com/wakatime/wakaq/compare/1.0.2...1.0.3)\n\n#### Bugfix\n\n- Ignore SoftTimeout when waiting on BLPOP from Redis list.\n\n## 1.0.2 (2022-09-05) [commits](https://github.com/wakatime/wakaq/compare/1.0.1...1.0.2)\n\n#### Bugfix\n\n- Ping parent before blocking dequeue in case wait timeout is near soft timeout.\n\n## 1.0.1 (2022-09-05) [commits](https://github.com/wakatime/wakaq/compare/1.0.0...1.0.1)\n\n#### Bugfix\n\n- All logger vars should be strings.\n\n## 1.0.0 (2022-09-05) [commits](https://github.com/wakatime/wakaq/compare/0.0.11...1.0.0)\n\n- First major release.\n\n## 0.0.11 (2022-09-05) [commits](https://github.com/wakatime/wakaq/compare/0.0.10...0.0.11)\n\n#### Feature\n\n- Add task payload to logger variables.\n\n## 0.0.10 (2022-09-05) [commits](https://github.com/wakatime/wakaq/compare/0.0.9...0.0.10)\n\n#### Bugfix\n\n- Prevent logging error from crashing parent process.\n\n## 0.0.9 (2022-09-05) [commits](https://github.com/wakatime/wakaq/compare/0.0.8...0.0.9)\n\n#### Bugfix\n\n- Prevent parent process looping forever while stopping children.\n\n## 0.0.8 (2022-09-05) [commits](https://github.com/wakatime/wakaq/compare/0.0.7...0.0.8)\n\n#### Bugfix\n\n- Prevent parent process crash leaving zombie child processes.\n\n## 0.0.7 (2022-09-05) 
[commits](https://github.com/wakatime/wakaq/compare/0.0.6...0.0.7)\n\n#### Feature\n\n- Ability to retry tasks when they soft timeout.\n\n#### Bugfix\n\n- Ping parent process at start of task to make sure soft timeout timer is reset.\n\n## 0.0.6 (2022-09-03) [commits](https://github.com/wakatime/wakaq/compare/0.0.5...0.0.6)\n\n#### Feature\n\n- Implement exclude_queues option.\n\n#### Bugfix\n\n- Prevent parent process crash if write to child broadcast pipe fails.\n\n## 0.0.5 (2022-09-01) [commits](https://github.com/wakatime/wakaq/compare/0.0.4...0.0.5)\n\n#### Bugfix\n\n- Run broadcast tasks once per worker instead of randomly.\n\n## 0.0.4 (2022-09-01) [commits](https://github.com/wakatime/wakaq/compare/0.0.3...0.0.4)\n\n#### Feature\n\n- Allow defining schedules as tuple of cron and task name, without args.\n\n## 0.0.3 (2022-09-01) [commits](https://github.com/wakatime/wakaq/compare/0.0.2...0.0.3)\n\n#### Bugfix\n\n- Prevent worker process crashing on any exception.\n\n#### Feature\n\n- Ability to wrap tasks with custom decorator function.\n\n## 0.0.2 (2022-09-01) [commits](https://github.com/wakatime/wakaq/compare/0.0.1...0.0.2)\n\n#### Breaking\n\n- Run in foreground by default.\n- Separate log files and levels for worker and scheduler.\n- Decorators for after worker started, before task, and after task callbacks.\n\n#### Bugfix\n\n- Keep processing tasks after SoftTimeout.\n- Scheduler should sleep until scheduled time.\n\n## 0.0.1 (2022-08-30)\n\n- Initial release.\n"
  },
  {
    "path": "LICENSE",
    "content": "BSD 3-Clause License\n\nCopyright (c) 2022, WakaTime\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this\n   list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice,\n   this list of conditions and the following disclaimer in the documentation\n   and/or other materials provided with the distribution.\n\n3. Neither the name of the copyright holder nor the names of its\n   contributors may be used to endorse or promote products derived from\n   this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "MANIFEST.in",
    "content": "include README.md LICENSE CHANGES.md requirements.txt\nrecursive-include src/wakaq *.py\n"
  },
  {
    "path": "Makefile",
    "content": ".PHONY: all test clean build upload\n\nall:\n\t@echo 'test     run the unit tests with the current default python'\n\t@echo 'clean    remove builds at dist/*'\n\t@echo 'build    run setup.py dist'\n\t@echo 'upload   upload all builds in dist folder to pypi'\n\t@echo 'release  publish the current version to pypi'\n\ntest:\n\t@pytest\n\nrelease: clean build upload\n\nclean:\n\t@rm -f dist/*\n\nbuild:\n\t@python ./setup.py sdist\n\nupload:\n\t@twine upload ./dist/*\n"
  },
  {
    "path": "README.md",
    "content": "# ![logo](https://raw.githubusercontent.com/wakatime/wakaq/main/wakatime-logo.png \"WakaQ\") WakaQ\n\n[![wakatime](https://wakatime.com/badge/github/wakatime/wakaq.svg)](https://wakatime.com/badge/github/wakatime/wakaq)\n\nBackground task queue for Python backed by Redis, a super minimal Celery.\nRead about the motivation behind this project on [this blog post][blog launch] and the accompanying [Hacker News discussion][hacker news].\nWakaQ is currently used in production at [WakaTime.com][wakatime].\nWakaQ is also available in [TypeScript][wakaq-ts].\n\n## Features\n\n* Queue priority\n* Delayed tasks (run tasks after a timedelta eta)\n* Scheduled periodic tasks\n* Tasks can be [async][asyncio] or normal synchronous functions\n* [Broadcast][broadcast] a task to all workers\n* Task [soft][soft timeout] and [hard][hard timeout] timeout limits\n* Optionally retry tasks on soft timeout\n* Combat memory leaks with `max_mem_percent` or `max_tasks_per_worker`\n* Super minimal\n\nWant more features like rate limiting, task deduplication, etc? Too bad, feature PRs are not accepted. Maximal features belong in your app’s worker tasks.\n\n## Installing\n\n    pip install wakaq\n\n## Using\n\n```python\nimport logging\nfrom datetime import timedelta\nfrom wakaq import WakaQ, Queue, CronTask\n\n\n# use constants to prevent misspelling queue names\nQ_HIGH = 'a-high-priority-queue'\nQ_MED = 'a-medium-priority-queue'\nQ_LOW = 'a-low-priority-queue'\nQ_OTHER = 'another-queue'\nQ_DEFAULT = 'default-lowest-priority-queue'\n\n\nwakaq = WakaQ(\n\n    # List your queues and their priorities.\n    # Queues can be defined as Queue instances, tuples, or just a str.\n    queues=[\n        (0, Q_HIGH),\n        (1, Q_MED),\n        (2, Q_LOW),\n        Queue(Q_OTHER, priority=3, max_retries=5, soft_timeout=300, hard_timeout=360),\n        Q_DEFAULT,\n    ],\n\n    # Number of worker processes. Must be an int or str which evaluates to an\n    # int. 
The variable \"cores\" is replaced with the number of processors on\n    # the current machine.\n    concurrency=\"cores*4\",\n\n    # Number of concurrent asyncio tasks per worker process. Must be an int or\n    # str which evaluates to an int. The variable \"cores\" is replaced with the\n    # number of processors on the current machine. Default is zero for no limit.\n    async_concurrency=0,\n\n    # Raise SoftTimeout or asyncio.CancelledError in a task if it runs longer\n    # than 30 seconds. Can also be set per task or queue. If no soft timeout\n    # set, tasks can run forever.\n    soft_timeout=30,  # seconds\n\n    # SIGKILL a task if it runs longer than 1 minute. Can be set per task or queue.\n    hard_timeout=timedelta(minutes=1),\n\n    # If the task soft timeouts, retry up to 3 times. Max retries comes first\n    # from the task decorator if set, next from the Queue's max_retries,\n    # lastly from the option below. If No max_retries is found, the task\n    # is not retried on a soft timeout.\n    max_retries=3,\n\n    # Combat memory leaks by reloading a worker (the one using the most RAM),\n    # when the total machine RAM usage is at or greater than 98%.\n    max_mem_percent=98,\n\n    # Combat memory leaks by reloading a worker after it's processed 5000 tasks.\n    max_tasks_per_worker=5000,\n\n    # Schedule two tasks, the first runs every minute, the second once every ten minutes.\n    # Scheduled tasks can be passed as CronTask instances or tuples. 
To run scheduled\n    # tasks you must keep a wakaq scheduler running as a daemon.\n    schedules=[\n\n        # Runs mytask on the queue with priority 1.\n        CronTask('* * * * *', 'mytask', queue=Q_MED, args=[2, 2], kwargs={}),\n\n        # Runs mytask once every 5 minutes.\n        ('*/5 * * * *', 'mytask', [1, 1], {}),\n\n        # Runs anothertask on the default lowest priority queue.\n        ('*/10 * * * *', 'anothertask'),\n    ],\n)\n\n\n# timeouts can be customized per task with a timedelta or integer seconds\n@wakaq.task(queue=Q_MED, max_retries=7, soft_timeout=420, hard_timeout=480)\ndef mytask(x, y):\n    print(x + y)\n\n\n@wakaq.task\ndef anothertask():\n    print(\"hello world\")\n\n\n@wakaq.task\ndef a_cpu_intensive_task():\n    print(\"hello world\")\n\n\n@wakaq.task\nasync def an_io_intensive_task():\n    print(\"hello world\")\n\n\n@wakaq.wrap_tasks_with\nasync def custom_task_decorator(fn, args, kwargs):\n    # do something before each task runs, for ex: `with app.app_context():`\n    if inspect.iscoroutinefunction(fn):\n        await fn(*args, **kwargs)\n    else:\n        fn(*args, **kwargs)\n    # do something after each task runs\n\n\nif __name__ == '__main__':\n\n    # add 1 plus 1 on a worker somewhere\n    mytask.delay(1, 1)\n\n    # add 1 plus 1 on a worker somewhere, overwriting the task's queue from medium to high\n    mytask.delay(1, 1, queue=Q_HIGH)\n\n    # print hello world on a worker somewhere, running on the default lowest priority queue\n    anothertask.delay()\n\n    # print hello world on a worker somewhere, after 10 seconds from now\n    anothertask.delay(eta=timedelta(seconds=10))\n\n    # print hello world on a worker concurrently, even if you only have 1 worker process\n    an_io_intensive_task.delay()\n```\n\n## Deploying\n\n#### Optimizing\n\nSee the [WakaQ init params][wakaq init] for a full list of options, like Redis host and Redis socket timeout values.\n\nWhen using in production, make sure to [increase the max open ports][max open ports] allowed for your Redis server 
process.\n\nWhen using eta tasks a Redis sorted set is used, so eta tasks are automatically deduped based on task name, args, and kwargs.\nIf you want multiple pending eta tasks with the same arguments, just add a throwaway random string to the task’s kwargs for ex: `str(uuid.uuid1())`.\n\n#### Running as a Daemon\n\nHere’s an example systemd config to run `wakaq-worker` as a daemon:\n\n```systemd\n[Unit]\nDescription=WakaQ Worker Service\n\n[Service]\nWorkingDirectory=/opt/yourapp\nExecStart=/opt/yourapp/venv/bin/python /opt/yourapp/venv/bin/wakaq-worker --app=yourapp.wakaq\nRemainAfterExit=no\nRestart=always\nRestartSec=30s\nKillSignal=SIGINT\nLimitNOFILE=99999\n\n[Install]\nWantedBy=multi-user.target\n```\n\nCreate a file at `/etc/systemd/system/wakaqworker.service` with the above contents, then run:\n\n    systemctl daemon-reload && systemctl enable wakaqworker\n\n## Running synchronously in tests or local dev environment\n\nIn dev and test environments, it’s easier to run tasks synchronously so you don’t need Redis or any worker processes.\nThe recommended way is mocking WakaQ:\n\n```python\nclass WakaQMock:\n    def __init__(self):\n        self.task = TaskMock\n\n    def wrap_tasks_with(self, fn):\n        return fn\n\n\nclass TaskMock(object):\n    fn = None\n    name = None\n    args = ()\n    kwargs = {}\n\n    def __init__(self, *args, **kwargs):\n        if len(args) == 1 and len(kwargs) == 0:\n            self.fn = args[0]\n            self.name = args[0].__name__\n        else:\n            self.args = args\n            self.kwargs = kwargs\n\n    def delay(self, *args, **kwargs):\n        kwargs.pop(\"queue\", None)\n        kwargs.pop(\"eta\", None)\n        return self.fn(*args, **kwargs)\n\n    def broadcast(self, *args, **kwargs):\n        return\n\n    def __call__(self, *args, **kwargs):\n        if not self.fn:\n            task = TaskMock(args[0])\n            task.args = self.args\n            task.kwargs = self.kwargs\n            return 
task\n        else:\n            return self.fn(*args, **kwargs)\n```\n\nThen in dev and test environments instead of using `wakaq.WakaQ` use `WakaQMock`.\n\n\n[wakatime]: https://wakatime.com\n[broadcast]: https://github.com/wakatime/wakaq/blob/58a7e4ce29d9be928b16ffbf5c00c7106aab9360/wakaq/task.py#L65\n[soft timeout]: https://github.com/wakatime/wakaq/blob/58a7e4ce29d9be928b16ffbf5c00c7106aab9360/wakaq/exceptions.py#L5\n[hard timeout]: https://github.com/wakatime/wakaq/blob/58a7e4ce29d9be928b16ffbf5c00c7106aab9360/wakaq/worker.py#L590\n[wakaq init]: https://github.com/wakatime/wakaq/blob/58a7e4ce29d9be928b16ffbf5c00c7106aab9360/wakaq/__init__.py#L47\n[max open ports]: https://wakatime.com/blog/47-maximize-your-concurrent-web-server-connections\n[blog launch]: https://wakatime.com/blog/56-building-a-distributed-task-queue-in-python\n[hacker news]: https://news.ycombinator.com/item?id=32730038\n[wakaq-ts]: https://github.com/wakatime/wakaq-ts\n[asyncio]: https://docs.python.org/3/library/asyncio.html\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[tool.black]\nline-length = 120\n[tool.isort]\nprofile = \"black\"\n"
  },
  {
    "path": "requirements.txt",
    "content": "click\ncroniter\npsutil\nredis\n"
  },
  {
    "path": "setup.py",
    "content": "from setuptools import setup\n\nabout = {}\nwith open(\"wakaq/__about__.py\") as f:\n    exec(f.read(), about)\n\ninstall_requires = [x.strip() for x in open(\"requirements.txt\").readlines()]\n\nsetup(\n    name=about[\"__title__\"],\n    version=about[\"__version__\"],\n    license=about[\"__license__\"],\n    description=about[\"__description__\"],\n    long_description=open(\"README.md\").read(),\n    long_description_content_type=\"text/markdown\",\n    author=about[\"__author__\"],\n    author_email=about[\"__author_email__\"],\n    url=about[\"__url__\"],\n    packages=[\"wakaq\"],\n    package_dir={\"wakaq\": \"wakaq\"},\n    python_requires=\">= 3.14\",\n    include_package_data=True,\n    platforms=\"any\",\n    install_requires=install_requires,\n    entry_points={\n        \"console_scripts\": [\n            \"wakaq-worker = wakaq.cli:worker\",\n            \"wakaq-scheduler = wakaq.cli:scheduler\",\n            \"wakaq-info = wakaq.cli:info\",\n            \"wakaq-purge = wakaq.cli:purge\",\n        ],\n    },\n    classifiers=[\n        \"Development Status :: 5 - Production/Stable\",\n        \"Intended Audience :: Developers\",\n        \"Natural Language :: English\",\n        \"Programming Language :: Python\",\n        \"Programming Language :: Python :: 3 :: Only\",\n        \"Programming Language :: Python :: 3\",\n        \"Programming Language :: Python :: 3.14\",\n        \"Topic :: Software Development :: Libraries :: Python Modules\",\n        \"Topic :: System :: Distributed Computing\",\n        \"Topic :: Software Development :: Object Brokering\",\n        \"Operating System :: OS Independent\",\n    ],\n)\n"
  },
  {
    "path": "wakaq/__about__.py",
    "content": "__title__ = \"WakaQ\"\n__description__ = \"Background task queue for Python backed by Redis, a minimal Celery.\"\n__url__ = \"https://github.com/wakatime/wakaq\"\n__version_info__ = (\"4\", \"0\", \"5\")\n__version__ = \".\".join(__version_info__)\n__author__ = \"Alan Hamlett\"\n__author_email__ = \"alan.hamlett@gmail.com\"\n__license__ = \"BSD\"\n__copyright__ = \"Copyright 2022 Alan Hamlett\"\n"
  },
  {
    "path": "wakaq/__init__.py",
    "content": "import calendar\nimport logging\nimport multiprocessing\nfrom datetime import datetime, timedelta\n\nimport redis\n\nfrom .queue import Queue\nfrom .scheduler import CronTask\nfrom .serializer import serialize\nfrom .task import Task\nfrom .utils import safe_eval\n\n__all__ = [\n    \"WakaQ\",\n    \"Queue\",\n    \"CronTask\",\n]\n\n\nclass WakaQ:\n    queues = []\n    soft_timeout = None\n    hard_timeout = None\n    concurrency = 0\n    async_concurrency = 0\n    schedules = []\n    exclude_queues = []\n    max_retries = None\n    wait_timeout = None\n    max_mem_percent = None\n    max_tasks_per_worker = None\n    worker_log_file = None\n    scheduler_log_file = None\n    worker_log_level = None\n    scheduler_log_level = None\n\n    after_worker_started_callback = None\n    before_task_started_callback = None\n    after_task_finished_callback = None\n    wrap_tasks_function = None\n\n    broadcast_key = \"wakaq-broadcast\"\n    log_format = \"[%(asctime)s] %(levelname)s: %(message)s\"\n    task_log_format = \"[%(asctime)s] %(levelname)s in %(task)s: %(message)s\"\n\n    def __init__(\n        self,\n        queues=[],\n        schedules=[],\n        host=\"localhost\",\n        port=6379,\n        username=None,\n        password=None,\n        db=0,\n        concurrency=0,\n        async_concurrency=0,\n        exclude_queues=[],\n        max_retries=None,\n        soft_timeout=None,\n        hard_timeout=None,\n        max_mem_percent=None,\n        max_tasks_per_worker=None,\n        worker_log_file=None,\n        scheduler_log_file=None,\n        worker_log_level=None,\n        scheduler_log_level=None,\n        socket_timeout=15,\n        socket_connect_timeout=15,\n        health_check_interval=30,\n        wait_timeout=1,\n    ):\n        self.queues = [Queue.create(x) for x in queues]\n        if len(self.queues) == 0:\n            raise Exception(\"Missing queues.\")\n        lowest_priority = max(self.queues, key=lambda q: 
q.priority)\n        self.queues = list(map(lambda q: self._default_priority(q, lowest_priority.priority), self.queues))\n        self.queues.sort(key=lambda q: q.priority)\n        self.queues_by_name = dict([(x.name, x) for x in self.queues])\n        self.queues_by_key = dict([(x.broker_key, x) for x in self.queues])\n        self.exclude_queues = self._validate_queue_names(exclude_queues)\n        self.max_retries = int(max_retries or 0)\n        self.broker_keys = [x.broker_key for x in self.queues if x.name not in self.exclude_queues]\n        self.schedules = [CronTask.create(x) for x in schedules]\n        self.concurrency = self._format_concurrency(concurrency)\n        self.async_concurrency = self._format_concurrency(async_concurrency, is_async=True)\n        self.soft_timeout = soft_timeout.total_seconds() if isinstance(soft_timeout, timedelta) else soft_timeout\n        self.hard_timeout = hard_timeout.total_seconds() if isinstance(hard_timeout, timedelta) else hard_timeout\n        self.wait_timeout = wait_timeout.total_seconds() if isinstance(wait_timeout, timedelta) else wait_timeout\n\n        if self.soft_timeout and self.soft_timeout <= self.wait_timeout:\n            raise Exception(\n                f\"Soft timeout ({self.soft_timeout}) can not be less than or equal to wait timeout ({self.wait_timeout}).\"\n            )\n        if self.hard_timeout and self.hard_timeout <= self.wait_timeout:\n            raise Exception(\n                f\"Hard timeout ({self.hard_timeout}) can not be less than or equal to wait timeout ({self.wait_timeout}).\"\n            )\n        if self.soft_timeout and self.hard_timeout and self.hard_timeout <= self.soft_timeout:\n            raise Exception(\n                f\"Hard timeout ({self.hard_timeout}) can not be less than or equal to soft timeout ({self.soft_timeout}).\"\n            )\n\n        if max_mem_percent:\n            self.max_mem_percent = int(max_mem_percent)\n            if self.max_mem_percent < 1 
or self.max_mem_percent > 99:\n                raise Exception(f\"Max memory percent must be between 1 and 99: {self.max_mem_percent}\")\n        else:\n            self.max_mem_percent = None\n\n        self.max_tasks_per_worker = abs(int(max_tasks_per_worker)) if max_tasks_per_worker else None\n        self.worker_log_file = worker_log_file if isinstance(worker_log_file, str) else None\n        self.scheduler_log_file = scheduler_log_file if isinstance(scheduler_log_file, str) else None\n        self.worker_log_level = worker_log_level if isinstance(worker_log_level, int) else logging.INFO\n        self.scheduler_log_level = scheduler_log_level if isinstance(scheduler_log_level, int) else logging.DEBUG\n\n        self.tasks = {}\n        self.broker = redis.Redis(\n            host=host,\n            port=port,\n            username=username,\n            password=password,\n            db=db,\n            decode_responses=True,\n            health_check_interval=health_check_interval,\n            socket_timeout=socket_timeout,\n            socket_connect_timeout=socket_connect_timeout,\n        )\n\n    def task(self, fn=None, queue=None, max_retries=None, soft_timeout=None, hard_timeout=None):\n        def wrap(f):\n            t = Task(\n                fn=f,\n                wakaq=self,\n                queue=queue,\n                max_retries=max_retries,\n                soft_timeout=soft_timeout,\n                hard_timeout=hard_timeout,\n            )\n            if t.name in self.tasks:\n                raise Exception(f\"Duplicate task name: {t.name}\")\n            self.tasks[t.name] = t\n            return t.fn\n\n        return wrap(fn) if fn else wrap\n\n    def after_worker_started(self, fn=None):\n        def wrap(f):\n            if not callable(self.after_worker_started_callback):\n                self.after_worker_started_callback = f\n            return f\n\n        return wrap(fn) if fn else wrap\n\n    def before_task_started(self, 
fn=None):\n        def wrap(f):\n            if not callable(self.before_task_started_callback):\n                self.before_task_started_callback = f\n            return f\n\n        return wrap(fn) if fn else wrap\n\n    def after_task_finished(self, fn=None):\n        def wrap(f):\n            if not callable(self.after_task_finished_callback):\n                self.after_task_finished_callback = f\n            return f\n\n        return wrap(fn) if fn else wrap\n\n    def wrap_tasks_with(self, fn=None):\n        def wrap(f):\n            if not callable(self.wrap_tasks_function):\n                self.wrap_tasks_function = f\n            return f\n\n        return wrap(fn) if fn else wrap\n\n    def _validate_queue_names(self, queue_names: list) -> list:\n        try:\n            queue_names = [x for x in queue_names]\n        except:\n            return []\n        for queue_name in queue_names:\n            if queue_name not in self.queues_by_name:\n                raise Exception(f\"Invalid queue: {queue_name}\")\n        return queue_names\n\n    def _enqueue_at_front(self, task_name: str, queue: str, args: list, kwargs: dict):\n        queue = self._queue_or_default(queue)\n        payload = serialize(\n            {\n                \"name\": task_name,\n                \"args\": args,\n                \"kwargs\": kwargs,\n            }\n        )\n        self.broker.lpush(queue.broker_key, payload)\n\n    def _enqueue_at_end(self, task_name: str, queue: str, args: list, kwargs: dict, retry=0):\n        queue = self._queue_or_default(queue)\n        payload = serialize(\n            {\n                \"name\": task_name,\n                \"args\": args,\n                \"kwargs\": kwargs,\n                \"retry\": retry,\n            }\n        )\n        self.broker.rpush(queue.broker_key, payload)\n\n    def _enqueue_with_eta(self, task_name: str, queue: str, args: list, kwargs: dict, eta):\n        queue = self._queue_or_default(queue)\n        
payload = serialize(\n            {\n                \"name\": task_name,\n                \"args\": args,\n                \"kwargs\": kwargs,\n            }\n        )\n        if isinstance(eta, timedelta):\n            eta = datetime.utcnow() + eta\n        timestamp = calendar.timegm(eta.utctimetuple())\n        self.broker.zadd(queue.broker_eta_key, {payload: timestamp}, nx=True)\n\n    def _broadcast(self, task_name: str, args: list, kwargs: dict):\n        payload = serialize(\n            {\n                \"name\": task_name,\n                \"args\": args,\n                \"kwargs\": kwargs,\n            }\n        )\n        return self.broker.publish(self.broadcast_key, payload)\n\n    def _queue_or_default(self, queue_name: str):\n        if queue_name:\n            return Queue.create(queue_name, queues_by_name=self.queues_by_name)\n\n        # return lowest priority queue by default\n        return self.queues[-1]\n\n    def _default_priority(self, queue, lowest_priority):\n        if queue.priority < 0:\n            queue.priority = lowest_priority + 1\n        return queue\n\n    def _format_concurrency(self, concurrency, is_async=None):\n        if not concurrency:\n            return 0\n        try:\n            return int(safe_eval(str(concurrency), {\"cores\": multiprocessing.cpu_count()}))\n        except Exception as e:\n            raise Exception(f\"Error parsing {'async_' if is_async else ''}concurrency: {e}\")\n"
  },
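The `_enqueue_with_eta` method above stores delayed tasks in a Redis sorted set, scored by a UTC Unix timestamp. A minimal stdlib sketch of just that score computation (the `eta_score` helper name is mine, not part of wakaq):

```python
import calendar
from datetime import datetime, timedelta


def eta_score(eta, now=None):
    # Mirrors WakaQ._enqueue_with_eta: a timedelta is resolved relative to
    # "now" in UTC, then converted via calendar.timegm to the Unix timestamp
    # Redis uses as the sorted-set score.
    if isinstance(eta, timedelta):
        eta = (now or datetime.utcnow()) + eta
    return calendar.timegm(eta.utctimetuple())


print(eta_score(datetime(2024, 1, 1)))  # 1704067200
print(eta_score(timedelta(hours=1), now=datetime(2024, 1, 1)))  # 1704070800
```

Because `zadd` is called with `nx=True`, re-enqueueing an identical payload never moves an already-scheduled task to a later eta.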
  {
    "path": "wakaq/cli.py",
    "content": "import json\n\nimport click\n\nfrom .scheduler import Scheduler\nfrom .utils import (\n    import_app,\n    inspect,\n    num_pending_eta_tasks_in_queue,\n    num_pending_tasks_in_queue,\n    purge_eta_queue,\n    purge_queue,\n)\nfrom .worker import Worker\n\n\n@click.command()\n@click.option(\"--app\", required=True, help=\"Import path of the WakaQ instance.\")\ndef worker(**options):\n    \"\"\"Run worker(s) to process tasks from queue(s) defined in your app.\"\"\"\n    wakaq = import_app(options.pop(\"app\"))\n    worker = Worker(wakaq=wakaq)\n    worker.start()\n\n\n@click.command()\n@click.option(\"--app\", required=True, help=\"Import path of the WakaQ instance.\")\ndef scheduler(**options):\n    \"\"\"Run a scheduler to enqueue periodic tasks based on a schedule defined in your app.\"\"\"\n    wakaq = import_app(options.pop(\"app\"))\n    scheduler = Scheduler(wakaq=wakaq)\n    scheduler.start()\n\n\n@click.command()\n@click.option(\"--app\", required=True, help=\"Import path of the WakaQ instance.\")\ndef info(**options):\n    \"\"\"Inspect and print info about your queues.\"\"\"\n    wakaq = import_app(options.pop(\"app\"))\n    click.echo(json.dumps(inspect(wakaq), indent=2, sort_keys=True))\n\n\n@click.command()\n@click.option(\"--app\", required=True, help=\"Import path of the WakaQ instance.\")\n@click.option(\"--queue\", required=True, help=\"Name of queue to purge.\")\ndef purge(**options):\n    \"\"\"Remove and empty all pending tasks in a queue.\"\"\"\n    wakaq = import_app(options.pop(\"app\"))\n    queue_name = options.pop(\"queue\")\n    count = num_pending_tasks_in_queue(wakaq, queue_name=queue_name)\n    purge_queue(wakaq, queue_name=queue_name)\n    count += num_pending_eta_tasks_in_queue(wakaq, queue_name=queue_name)\n    purge_eta_queue(wakaq, queue_name=queue_name)\n    click.echo(f\"Purged {count} tasks from {queue_name}\")\n"
  },
  {
    "path": "wakaq/exceptions.py",
    "content": "class WakaQError(Exception):\n    pass\n\n\nclass SoftTimeout(WakaQError):\n    pass\n"
  },
  {
    "path": "wakaq/logger.py",
    "content": "import sys\nfrom logging import Formatter as FormatterBase\nfrom logging import StreamHandler, captureWarnings, getLogger\nfrom logging.handlers import WatchedFileHandler\n\nfrom .serializer import serialize\nfrom .utils import current_task\n\nlogger = getLogger(\"wakaq\")\n\n\nclass SafeLogger:\n    def setLevel(self, *args, **kwargs):\n        logger.setLevel(*args, **kwargs)\n\n    def debug(self, *args, **kwargs):\n        try:\n            logger.debug(*args, **kwargs)\n        except:\n            pass\n\n    def info(self, *args, **kwargs):\n        try:\n            logger.info(*args, **kwargs)\n        except:\n            pass\n\n    def warning(self, *args, **kwargs):\n        try:\n            logger.warning(*args, **kwargs)\n        except:\n            pass\n\n    def error(self, *args, **kwargs):\n        try:\n            logger.error(*args, **kwargs)\n        except:\n            pass\n\n    def exception(self, *args, **kwargs):\n        try:\n            logger.exception(*args, **kwargs)\n        except:\n            pass\n\n    def critical(self, *args, **kwargs):\n        try:\n            logger.critical(*args, **kwargs)\n        except:\n            pass\n\n    def fatal(self, *args, **kwargs):\n        try:\n            logger.fatal(*args, **kwargs)\n        except:\n            pass\n\n    def log(self, *args, **kwargs):\n        try:\n            logger.log(*args, **kwargs)\n        except:\n            pass\n\n    @property\n    def handlers(self):\n        return logger.handlers\n\n\nlog = SafeLogger()\n\n\nclass Formatter(FormatterBase):\n    def __init__(self, wakaq):\n        self.wakaq = wakaq\n        super().__init__(wakaq.log_format)\n\n    def format(self, record):\n        task = current_task.get()\n        if task is not None:\n            task, payload = task[0], task[1]\n            self._fmt = self.wakaq.task_log_format\n            self._style._fmt = self.wakaq.task_log_format\n            
record.__dict__.update(task=task.name)\n            record.__dict__.update(task_args=serialize(payload[\"args\"]))\n            record.__dict__.update(task_kwargs=serialize(payload[\"kwargs\"]))\n            record.__dict__.update(task_retry=serialize(payload.get(\"retry\")))\n        else:\n            self._fmt = self.wakaq.log_format\n            self._style._fmt = self.wakaq.log_format\n            record.__dict__.setdefault(\"task\", None)\n            record.__dict__.setdefault(\"task_args\", None)\n            record.__dict__.setdefault(\"task_kwargs\", None)\n            record.__dict__.setdefault(\"task_retry\", None)\n        return super().format(record)\n\n\ndef setup_logging(wakaq, is_child=None, is_scheduler=None):\n    logger = getLogger(\"wakaq\")\n\n    for handler in logger.handlers:\n        logger.removeHandler(handler)\n\n    log_file = wakaq.scheduler_log_file if is_scheduler else wakaq.worker_log_file\n    log_level = wakaq.scheduler_log_level if is_scheduler else wakaq.worker_log_level\n\n    logger.setLevel(log_level)\n    captureWarnings(True)\n\n    out = sys.stdout if is_child or not log_file else log_file\n    options = {}\n    if not is_child and log_file:\n        options[\"encoding\"] = \"utf8\"\n    handler = (StreamHandler if is_child or not log_file else WatchedFileHandler)(out, **options)\n    handler.setLevel(log_level)\n\n    formatter = Formatter(wakaq)\n    handler.setFormatter(formatter)\n\n    logger.addHandler(handler)\n"
  },
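`Formatter.format` above swaps between `log_format` and `task_log_format` per record, injecting the active task's fields so `%(task)s` resolves inside task context. A minimal stdlib sketch of that context-driven formatter (the format strings and `current_task` dict here are illustrative, not wakaq's defaults):

```python
import io
import logging

current_task = {"name": None}  # stand-in for wakaq's Context object


class TaskFormatter(logging.Formatter):
    def format(self, record):
        # Like wakaq's Formatter: choose a format string per record and
        # attach the active task name so "%(task)s" can resolve.
        task = current_task["name"]
        fmt = "%(levelname)s task=%(task)s %(message)s" if task else "%(levelname)s %(message)s"
        self._style._fmt = fmt
        record.task = task
        return super().format(record)


stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(TaskFormatter())
logger = logging.getLogger("sketch")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("outside task")
current_task["name"] = "mytask"
logger.info("inside task")
print(stream.getvalue())  # second line includes "task=mytask"
```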
  {
    "path": "wakaq/queue.py",
    "content": "import re\nfrom datetime import timedelta\n\n\nclass Queue:\n    __slots__ = [\n        \"name\",\n        \"priority\",\n        \"prefix\",\n        \"soft_timeout\",\n        \"hard_timeout\",\n        \"max_retries\",\n    ]\n\n    def __init__(self, name=None, priority=-1, prefix=None, soft_timeout=None, hard_timeout=None, max_retries=None):\n        self.prefix = re.sub(r\"[^a-zA-Z0-9_.-]\", \"\", prefix or \"wakaq\")\n        self.name = re.sub(r\"[^a-zA-Z0-9_.-]\", \"\", name)\n\n        try:\n            self.priority = int(priority)\n        except:\n            raise Exception(f\"Invalid queue priority: {priority}\")\n\n        self.soft_timeout = soft_timeout.total_seconds() if isinstance(soft_timeout, timedelta) else soft_timeout\n        self.hard_timeout = hard_timeout.total_seconds() if isinstance(hard_timeout, timedelta) else hard_timeout\n\n        if self.soft_timeout and self.hard_timeout and self.hard_timeout <= self.soft_timeout:\n            raise Exception(\n                f\"Queue hard timeout ({self.hard_timeout}) can not be less than or equal to soft timeout ({self.soft_timeout}).\"\n            )\n\n        if max_retries:\n            try:\n                self.max_retries = int(max_retries)\n            except:\n                raise Exception(f\"Invalid queue max retries: {max_retries}\")\n        else:\n            self.max_retries = None\n\n    @classmethod\n    def create(cls, obj, queues_by_name=None):\n        if isinstance(obj, cls):\n            if queues_by_name is not None and obj.name not in queues_by_name:\n                raise Exception(f\"Unknown queue: {obj.name}\")\n            return obj\n        elif isinstance(obj, (list, tuple)) and len(obj) == 2:\n            if isinstance(obj[0], int):\n                if queues_by_name is not None and obj[1] not in queues_by_name:\n                    raise Exception(f\"Unknown queue: {obj[1]}\")\n                return cls(priority=obj[0], name=obj[1])\n        
    else:\n                if queues_by_name is not None and obj[0] not in queues_by_name:\n                    raise Exception(f\"Unknown queue: {obj[0]}\")\n                return cls(name=obj[0], priority=obj[1])\n        else:\n            if queues_by_name is not None and obj not in queues_by_name:\n                raise Exception(f\"Unknown queue: {obj}\")\n            return cls(name=obj)\n\n    @property\n    def broker_key(self):\n        return f\"{self.prefix}:{self.name}\"\n\n    @property\n    def broker_eta_key(self):\n        return f\"{self.prefix}:eta:{self.name}\"\n"
  },
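Queue names and prefixes are sanitized before being used as Redis keys, and each queue maps to two keys: a list of pending tasks and a sorted set of eta tasks. A small sketch of the resulting key layout (the `broker_keys` helper name is illustrative, not from wakaq):

```python
import re


def broker_keys(name, prefix="wakaq"):
    # Same character whitelist queue.py applies to both prefix and name.
    def clean(s):
        return re.sub(r"[^a-zA-Z0-9_.-]", "", s)

    # broker_key holds pending tasks (a Redis list); broker_eta_key holds
    # scheduled tasks (a Redis sorted set scored by eta timestamp).
    return f"{clean(prefix)}:{clean(name)}", f"{clean(prefix)}:eta:{clean(name)}"


print(broker_keys("high priority!"))  # ('wakaq:highpriority', 'wakaq:eta:highpriority')
```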
  {
    "path": "wakaq/scheduler.py",
    "content": "import time\nfrom datetime import datetime, timedelta\n\nfrom croniter import croniter\n\nfrom .logger import log, setup_logging\nfrom .serializer import serialize\n\n\nclass CronTask:\n    __slots__ = [\n        \"schedule\",\n        \"task_name\",\n        \"queue\",\n        \"args\",\n        \"kwargs\",\n    ]\n\n    def __init__(self, schedule=None, task_name=None, queue=None, args=None, kwargs=None):\n        if not croniter.is_valid(schedule):\n            log.error(f\"Invalid cron schedule (min hour dom month dow): {schedule}\")\n            raise Exception(f\"Invalid cron schedule (min hour dom month dow): {schedule}\")\n\n        self.schedule = schedule\n        self.task_name = task_name\n        self.queue = queue\n        self.args = args\n        self.kwargs = kwargs\n\n    @classmethod\n    def create(cls, obj, queues_by_name=None):\n        if isinstance(obj, cls):\n            if queues_by_name is not None and obj.queue and obj.queue not in queues_by_name:\n                log.error(f\"Unknown queue: {obj.queue}\")\n                raise Exception(f\"Unknown queue: {obj.queue}\")\n            return obj\n        elif isinstance(obj, (list, tuple)) and len(obj) == 2:\n            return cls(schedule=obj[0], task_name=obj[1])\n        elif isinstance(obj, (list, tuple)) and len(obj) == 4:\n            return cls(schedule=obj[0], task_name=obj[1], args=obj[2], kwargs=obj[3])\n        else:\n            log.error(f\"Invalid schedule: {obj}\")\n            raise Exception(f\"Invalid schedule: {obj}\")\n\n    @property\n    def payload(self):\n        return serialize(\n            {\n                \"name\": self.task_name,\n                \"args\": self.args if self.args is not None else [],\n                \"kwargs\": self.kwargs if self.kwargs is not None else {},\n            }\n        )\n\n\nclass Scheduler:\n    __slots__ = [\n        \"wakaq\",\n        \"schedules\",\n    ]\n\n    def __init__(self, wakaq=None):\n        
self.wakaq = wakaq\n\n    def start(self):\n        setup_logging(self.wakaq, is_scheduler=True)\n        log.info(\"starting scheduler\")\n\n        if len(self.wakaq.schedules) == 0:\n            log.error(\"no scheduled tasks found\")\n            raise Exception(\"No scheduled tasks found.\")\n\n        self.schedules = []\n        for schedule in self.wakaq.schedules:\n            self.schedules.append(CronTask.create(schedule, queues_by_name=self.wakaq.queues_by_name))\n\n        self._run()\n\n    def _run(self):\n        base = datetime.utcnow()\n        upcoming_tasks = []\n\n        while True:\n            for cron_task in upcoming_tasks:\n                task = self.wakaq.tasks[cron_task.task_name]\n                if cron_task.queue:\n                    queue = self.wakaq.queues_by_name[cron_task.queue]\n                elif task.queue:\n                    queue = task.queue\n                else:\n                    queue = self.wakaq.queues[-1]\n                log.debug(f\"run scheduled task on queue {queue.name}: {task.name}\")\n                self.wakaq.broker.lpush(queue.broker_key, cron_task.payload)\n\n            upcoming_tasks = []\n            crons = [(croniter(x.schedule, base).get_next(datetime), x) for x in self.schedules]\n            sleep_until = base + timedelta(days=1)\n\n            for dt, cron_task in crons:\n                if self._is_same_minute_precision(dt, sleep_until):\n                    upcoming_tasks.append(cron_task)\n                elif dt < sleep_until:\n                    sleep_until = dt\n                    upcoming_tasks = [cron_task]\n\n            # sleep until the next scheduled task\n            time.sleep((sleep_until - base).total_seconds())\n\n            base = sleep_until\n\n    def _is_same_minute_precision(self, a, b):\n        return a.strftime(\"%Y%m%d%H%M\") == b.strftime(\"%Y%m%d%H%M\")\n"
  },
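The `_run` loop above finds the earliest upcoming run time at minute precision and batches every cron task due in that same minute before sleeping. The selection logic can be exercised without croniter, with plain datetimes standing in for computed next-run times:

```python
from datetime import datetime, timedelta


def next_batch(base, next_runs):
    # Mirrors Scheduler._run: sleep_until starts a day out, then shrinks to
    # the earliest next-run time; tasks due in that same minute are batched.
    def same_minute(a, b):
        return a.strftime("%Y%m%d%H%M") == b.strftime("%Y%m%d%H%M")

    sleep_until = base + timedelta(days=1)
    upcoming = []
    for dt, name in next_runs:
        if same_minute(dt, sleep_until):
            upcoming.append(name)
        elif dt < sleep_until:
            sleep_until = dt
            upcoming = [name]
    return sleep_until, upcoming


base = datetime(2024, 1, 1, 12, 0)
runs = [
    (datetime(2024, 1, 1, 12, 5), "cleanup"),
    (datetime(2024, 1, 1, 12, 5, 30), "report"),
    (datetime(2024, 1, 1, 12, 30), "backup"),
]
print(next_batch(base, runs))  # 12:05 batches 'cleanup' and 'report'; 'backup' waits
```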
  {
    "path": "wakaq/serializer.py",
    "content": "from base64 import b64decode, b64encode\nfrom datetime import date, datetime, timedelta, timezone\nfrom decimal import Decimal\nfrom json import JSONEncoder, dumps, loads\n\n\nclass CustomJSONEncoder(JSONEncoder):\n    def __init__(self, *args, allow_nan=False, **kwargs):\n        kwargs[\"allow_nan\"] = allow_nan\n        super().__init__(*args, **kwargs)\n\n    def default(self, o):\n        if isinstance(o, set):\n            return list(o)\n        if isinstance(o, Decimal):\n            return {\n                \"__class__\": \"Decimal\",\n                \"value\": str(o),\n            }\n        if isinstance(o, datetime):\n            if o.tzinfo is not None:\n                # tasks always receive datetimes in utc without tzinfo\n                o = o.astimezone(timezone.utc)\n            return {\n                \"__class__\": \"datetime\",\n                \"year\": o.year,\n                \"month\": o.month,\n                \"day\": o.day,\n                \"hour\": o.hour,\n                \"minute\": o.minute,\n                \"second\": o.second,\n                \"microsecond\": o.microsecond,\n                \"fold\": o.fold,\n            }\n        if isinstance(o, date):\n            return {\n                \"__class__\": \"date\",\n                \"year\": o.year,\n                \"month\": o.month,\n                \"day\": o.day,\n            }\n        if isinstance(o, timedelta):\n            return {\n                \"__class__\": \"timedelta\",\n                \"kwargs\": {\n                    \"days\": o.days,\n                    \"seconds\": o.seconds,\n                    \"microseconds\": o.microseconds,\n                },\n            }\n        if isinstance(o, bytes):\n            return {\n                \"__class__\": \"bytes\",\n                \"value\": b64encode(o).decode(\"ascii\"),\n            }\n        return str(o)\n\n\ndef object_hook(obj):\n    cls = obj.get(\"__class__\")\n    if not 
cls:\n        return obj\n\n    if cls == \"Decimal\":\n        return Decimal(obj[\"value\"])\n    if cls == \"datetime\":\n        return datetime(\n            year=obj[\"year\"],\n            month=obj[\"month\"],\n            day=obj[\"day\"],\n            hour=obj[\"hour\"],\n            minute=obj[\"minute\"],\n            second=obj[\"second\"],\n            microsecond=obj[\"microsecond\"],\n            fold=obj[\"fold\"],\n        )\n    if cls == \"date\":\n        return date(\n            year=obj[\"year\"],\n            month=obj[\"month\"],\n            day=obj[\"day\"],\n        )\n    if cls == \"timedelta\":\n        return timedelta(**obj[\"kwargs\"])\n    if cls == \"bytes\":\n        return b64decode(obj[\"value\"])\n\n    return obj\n\n\ndef serialize(*args, **kwargs):\n    kwargs[\"cls\"] = CustomJSONEncoder\n    return dumps(*args, **kwargs)\n\n\ndef deserialize(*args, **kwargs):\n    return loads(*args, object_hook=object_hook, **kwargs)\n"
  },
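A round-trip through this serializer normalizes timezone-aware datetimes to naive UTC, which is why task arguments always arrive in UTC. A self-contained sketch of just the datetime subset of the encoder and object hook:

```python
import json
from datetime import datetime, timedelta, timezone


class DatetimeEncoder(json.JSONEncoder):
    # Datetime subset of wakaq's CustomJSONEncoder: aware values are first
    # converted to UTC, then emitted as a tagged dict without tzinfo.
    def default(self, o):
        if isinstance(o, datetime):
            if o.tzinfo is not None:
                o = o.astimezone(timezone.utc)
            return {
                "__class__": "datetime",
                "year": o.year, "month": o.month, "day": o.day,
                "hour": o.hour, "minute": o.minute, "second": o.second,
                "microsecond": o.microsecond, "fold": o.fold,
            }
        return str(o)


def object_hook(obj):
    # Datetime subset of wakaq's object_hook: rebuild a naive datetime.
    if obj.get("__class__") == "datetime":
        return datetime(
            obj["year"], obj["month"], obj["day"], obj["hour"],
            obj["minute"], obj["second"], obj["microsecond"], fold=obj["fold"],
        )
    return obj


aware = datetime(2024, 1, 1, 7, 0, tzinfo=timezone(timedelta(hours=2)))
restored = json.loads(json.dumps(aware, cls=DatetimeEncoder), object_hook=object_hook)
print(restored, restored.tzinfo)  # 2024-01-01 05:00:00 None
```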
  {
    "path": "wakaq/task.py",
    "content": "import asyncio\nimport threading\nfrom datetime import timedelta\n\nfrom .queue import Queue\n\n\nclass Task:\n    __slots__ = [\n        \"name\",\n        \"fn\",\n        \"wakaq\",\n        \"queue\",\n        \"soft_timeout\",\n        \"hard_timeout\",\n        \"max_retries\",\n    ]\n\n    def __init__(self, fn=None, wakaq=None, queue=None, soft_timeout=None, hard_timeout=None, max_retries=None):\n        self.fn = fn\n        self.name = fn.__name__\n        self.wakaq = wakaq\n        if queue:\n            self.queue = Queue.create(queue, queues_by_name=self.wakaq.queues_by_name)\n        else:\n            self.queue = None\n\n        self.soft_timeout = soft_timeout.total_seconds() if isinstance(soft_timeout, timedelta) else soft_timeout\n        self.hard_timeout = hard_timeout.total_seconds() if isinstance(hard_timeout, timedelta) else hard_timeout\n\n        if self.soft_timeout and self.hard_timeout and self.hard_timeout <= self.soft_timeout:\n            raise Exception(\n                f\"Task hard timeout ({self.hard_timeout}) can not be less than or equal to soft timeout ({self.soft_timeout}).\"\n            )\n\n        self.max_retries = int(max_retries) if max_retries else None\n\n        self.fn.delay = self._delay\n        self.fn.broadcast = self._broadcast\n\n    def _delay(self, *args, **kwargs):\n        \"\"\"Run task in the background.\"\"\"\n\n        queue = kwargs.pop(\"queue\", None) or self.queue\n        eta = kwargs.pop(\"eta\", None)\n\n        if eta:\n            self.wakaq._enqueue_with_eta(self.name, queue, args, kwargs, eta)\n        else:\n            self.wakaq._enqueue_at_end(self.name, queue, args, kwargs)\n\n    def get_event_loop(self):\n        try:\n            loop = asyncio.get_event_loop()\n        except RuntimeError:\n            loop = asyncio.new_event_loop()\n            asyncio.set_event_loop(loop)\n\n        if not loop.is_running():\n            thread = 
threading.Thread(target=loop.run_forever, daemon=True)\n            thread.start()\n\n        return loop\n\n    def _broadcast(self, *args, **kwargs) -> int:\n        \"\"\"Run task in the background on all workers.\n\n        Only runs the task once per worker parent daemon, no matter the worker's concurrency.\n\n        Returns the number of workers the task was sent to.\n        \"\"\"\n\n        return self.wakaq._broadcast(self.name, args, kwargs)\n"
  },
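Because the `task` decorator in `wakaq/__init__.py` returns `t.fn`, decorated functions stay directly callable while gaining `.delay` and `.broadcast`. A standalone sketch of that attachment pattern (no Redis; enqueues are recorded in a list instead):

```python
enqueued = []


def task(fn):
    # Sketch of the pattern Task.__init__ uses: attach helpers to the
    # original function object and hand the function back unchanged.
    def _delay(*args, **kwargs):
        # The real Task._delay serializes the payload and rpushes it to
        # Redis; here we just record what would be enqueued.
        enqueued.append({"name": fn.__name__, "args": args, "kwargs": kwargs})

    fn.delay = _delay
    return fn


@task
def add(a, b):
    return a + b


print(add(1, 2))  # 3 -- still a plain, synchronous function call
add.delay(1, 2)
print(enqueued)   # one recorded payload for 'add'
```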
  {
    "path": "wakaq/utils.py",
    "content": "import ast\nimport calendar\nimport operator\nimport os\nimport sys\nfrom datetime import datetime, timedelta\nfrom importlib import import_module\nfrom typing import Union\n\nimport psutil\n\nfrom .serializer import deserialize\n\n\ndef import_app(app):\n    \"\"\"Import and return the WakaQ instance from the specified module path.\"\"\"\n\n    cwd = os.getcwd()\n    if cwd not in sys.path:\n        sys.path.insert(0, cwd)\n\n    try:\n        module_path, class_name = app.rsplit(\".\", 1)\n    except ValueError:\n        raise Exception(\n            f\"Invalid app path: {app}. App must point to a WakaQ instance. For ex: yourapp.background.wakaq\"\n        )\n\n    module = import_module(module_path)\n    wakaq = getattr(module, class_name)\n    from . import WakaQ\n\n    if not isinstance(wakaq, WakaQ):\n        raise Exception(f\"Invalid app path: {app}. App must point to a WakaQ instance.\")\n    return wakaq\n\n\ndef inspect(app):\n    \"\"\"Return the queues and their respective pending task counts, and the number of workers connected.\"\"\"\n\n    queues = {}\n    for queue in app.queues:\n        queues[queue.name] = {\n            \"name\": queue.name,\n            \"priority\": queue.priority,\n            \"broker_key\": queue.broker_key,\n            \"broker_eta_key\": queue.broker_eta_key,\n            \"pending_tasks\": num_pending_tasks_in_queue(app, queue),\n            \"pending_eta_tasks\": num_pending_eta_tasks_in_queue(app, queue),\n        }\n    return {\n        \"queues\": queues,\n        \"workers\": num_workers_connected(app),\n    }\n\n\ndef pending_tasks_in_queue(app, queue=None, queue_name: str = None, limit: int = 20) -> list:\n    \"\"\"Retrieve a list of pending tasks from a queue, without removing them from the queue.\"\"\"\n\n    if not queue:\n        if queue_name is None:\n            return []\n        queue = app.queues_by_name.get(queue_name)\n        if not queue:\n            return []\n\n    if not 
limit:\n        limit = 0\n\n    tasks = app.broker.lrange(queue.broker_key, 0, limit - 1)\n    return [deserialize(task) for task in tasks]\n\n\ndef pending_eta_tasks_in_queue(\n    app,\n    queue=None,\n    queue_name: str = None,\n    before: Union[datetime, timedelta, int] = None,\n    limit: int = 20,\n) -> list:\n    \"\"\"Retrieve a list of pending eta tasks from a queue, without removing them from the queue.\"\"\"\n\n    if not queue:\n        if queue_name is None:\n            return []\n        queue = app.queues_by_name.get(queue_name)\n        if not queue:\n            return []\n    params = []\n    if before:\n        cmd = \"ZRANGEBYSCORE\"\n        if isinstance(before, timedelta):\n            before = datetime.utcnow() + before\n        if isinstance(before, datetime):\n            before = calendar.timegm(before.utctimetuple())\n        params.extend([0, before])\n        params.append(\"WITHSCORES\")\n        if limit:\n            params.extend([\"LIMIT\", 0, limit])\n    else:\n        cmd = \"ZRANGE\"\n        if not limit:\n            limit = 0\n        params.extend([0, limit - 1])\n        params.append(\"WITHSCORES\")\n    tasks = app.broker.execute_command(cmd, queue.broker_eta_key, *params)\n    payloads = []\n    for n in range(0, len(tasks), 2):\n        payload = deserialize(tasks[n])\n        payload[\"eta\"] = datetime.utcfromtimestamp(int(tasks[n + 1]))\n        payloads.append(payload)\n    return payloads\n\n\ndef num_pending_tasks_in_queue(app, queue=None, queue_name: str = None) -> int:\n    \"\"\"Count and return the number of pending tasks in a queue.\"\"\"\n\n    if not queue:\n        if queue_name is None:\n            return 0\n        queue = app.queues_by_name.get(queue_name)\n        if not queue:\n            return 0\n    return app.broker.llen(queue.broker_key)\n\n\ndef num_pending_eta_tasks_in_queue(app, queue=None, queue_name: str = None) -> int:\n    \"\"\"Count and return the number of pending eta tasks in 
a queue.\"\"\"\n\n    if not queue:\n        if queue_name is None:\n            return 0\n        queue = app.queues_by_name.get(queue_name)\n        if not queue:\n            return 0\n    return app.broker.zcount(queue.broker_eta_key, \"-inf\", \"+inf\")\n\n\ndef num_workers_connected(app) -> int:\n    \"\"\"Count and return the number of connected workers.\"\"\"\n\n    return app.broker.pubsub_numsub(app.broadcast_key)[0][1]\n\n\ndef purge_queue(app, queue_name: str):\n    \"\"\"Empty a queue, discarding any pending tasks.\"\"\"\n\n    if queue_name is None:\n        return\n    queue = app.queues_by_name.get(queue_name)\n    if not queue:\n        return\n    app.broker.delete(queue.broker_key)\n\n\ndef purge_eta_queue(app, queue_name: str):\n    \"\"\"Empty a queue of any pending eta tasks.\"\"\"\n\n    if queue_name is None:\n        return\n    queue = app.queues_by_name.get(queue_name)\n    if not queue:\n        return\n    app.broker.delete(queue.broker_eta_key)\n\n\ndef kill(pid, signum):\n    try:\n        os.kill(pid, signum)\n    except IOError:\n        pass\n\n\ndef read_fd(fd):\n    try:\n        return os.read(fd, 64000).decode(\"utf8\")\n    except OSError:\n        return \"\"\n\n\ndef write_fd_or_raise(fd, s):\n    os.write(fd, s.encode(\"utf8\"))\n\n\ndef write_fd(fd, s):\n    try:\n        write_fd_or_raise(fd, s)\n    except:\n        pass\n\n\ndef close_fd(fd):\n    try:\n        os.close(fd)\n    except:\n        pass\n\n\ndef flush_fh(fh):\n    try:\n        fh.flush()\n    except:\n        pass\n\n\ndef mem_usage_percent():\n    return int(round(psutil.virtual_memory().percent))\n\n\n_operations = {\n    ast.Add: operator.add,\n    ast.Sub: operator.sub,\n    ast.Mult: operator.mul,\n    ast.Div: operator.truediv,\n    ast.FloorDiv: operator.floordiv,\n    ast.Pow: operator.pow,\n}\n\n\ndef _safe_eval(node, variables, functions):\n    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):\n        return 
node.value\n    elif isinstance(node, ast.Name):\n        try:\n            return variables[node.id]\n        except KeyError:\n            raise Exception(f\"Unknown variable: {node.id}\")\n    elif isinstance(node, ast.BinOp):\n        try:\n            op = _operations[node.op.__class__]\n        except KeyError:\n            raise Exception(f\"Unknown operation: {node.op.__class__}\")\n        left = _safe_eval(node.left, variables, functions)\n        right = _safe_eval(node.right, variables, functions)\n        if isinstance(node.op, ast.Pow):\n            assert right < 100\n        return op(left, right)\n    elif isinstance(node, ast.Call):\n        # ast.Call lost its .starargs/.kwargs attributes in Python 3.5;\n        # starred arguments now appear as ast.Starred nodes inside args.\n        assert not node.keywords and not any(isinstance(arg, ast.Starred) for arg in node.args)\n        assert isinstance(node.func, ast.Name), \"Unsafe function derivation\"\n        try:\n            func = functions[node.func.id]\n        except KeyError:\n            raise Exception(f\"Unknown function: {node.func.id}\")\n        args = [_safe_eval(arg, variables, functions) for arg in node.args]\n        return func(*args)\n    assert False, \"Unsafe operation\"\n\n\ndef safe_eval(expr, variables={}, functions={}):\n    node = ast.parse(expr, \"<string>\", \"eval\").body\n    return _safe_eval(node, variables, functions)\n\n\nclass Context:\n    __slots__ = [\"value\"]\n\n    def __init__(self):\n        self.value = None\n\n    def get(self):\n        return self.value\n\n    def set(self, val):\n        self.value = val\n\n\ncurrent_task = Context()\n\n\ndef exception_in_chain(e, exception_type):\n    if isinstance(e, exception_type):\n        return True\n    while (e.__cause__ or e.__context__) is not None:\n        if isinstance((e.__cause__ or e.__context__), exception_type):\n            return True\n        e = e.__cause__ or e.__context__\n    return False\n\n\ndef get_timeouts(app, task=None, queue=None):\n    soft_timeout = app.soft_timeout\n    hard_timeout = app.hard_timeout\n    if task and task.soft_timeout:\n        
soft_timeout = task.soft_timeout\n    elif queue and queue.soft_timeout:\n        soft_timeout = queue.soft_timeout\n    if task and task.hard_timeout:\n        hard_timeout = task.hard_timeout\n    elif queue and queue.hard_timeout:\n        hard_timeout = queue.hard_timeout\n    return soft_timeout, hard_timeout\n"
  },
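The `exception_in_chain` helper is how the worker can detect a `SoftTimeout` even when user code caught it and raised something else. The same chain-walk can be exercised standalone (logic copied from `wakaq/utils.py`; the `SoftTimeout` class below is a stand-in for `wakaq.exceptions.SoftTimeout`):

```python
def exception_in_chain(e, exception_type):
    # Copied from wakaq/utils.py: follow __cause__ (explicit "raise ... from")
    # and __context__ (implicit chaining) links until a match or the end.
    if isinstance(e, exception_type):
        return True
    while (e.__cause__ or e.__context__) is not None:
        if isinstance((e.__cause__ or e.__context__), exception_type):
            return True
        e = e.__cause__ or e.__context__
    return False


class SoftTimeout(Exception):  # stand-in for wakaq.exceptions.SoftTimeout
    pass


try:
    try:
        raise SoftTimeout()
    except SoftTimeout as inner:
        raise ValueError("wrapped") from inner
except ValueError as outer:
    caught = outer

print(exception_in_chain(caught, SoftTimeout))  # True
print(exception_in_chain(caught, KeyError))     # False
```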
  {
    "path": "wakaq/worker.py",
    "content": "import asyncio\nimport inspect\nimport logging\nimport os\nimport signal\nimport sys\nimport time\nimport traceback\n\nimport psutil\nfrom redis.exceptions import ConnectionError\n\nfrom .exceptions import SoftTimeout, WakaQError\nfrom .logger import log, setup_logging\nfrom .serializer import deserialize, serialize\nfrom .utils import (\n    close_fd,\n    current_task,\n    exception_in_chain,\n    flush_fh,\n    get_timeouts,\n    kill,\n    mem_usage_percent,\n    read_fd,\n    write_fd,\n    write_fd_or_raise,\n)\n\nZRANGEPOP = \"\"\"\nlocal results = redis.call('ZRANGEBYSCORE', KEYS[1], 0, ARGV[1])\nredis.call('ZREMRANGEBYSCORE', KEYS[1], 0, ARGV[1])\nreturn results\n\"\"\"\n\n\nclass Child:\n    __slots__ = [\n        \"pid\",\n        \"stdin\",\n        \"pingin\",\n        \"ping_buffer\",\n        \"log_buffer\",\n        \"broadcastout\",\n        \"last_ping\",\n        \"soft_timeout_reached\",\n        \"max_mem_reached_at\",\n        \"done\",\n        \"soft_timeout\",\n        \"hard_timeout\",\n        \"current_task\",\n    ]\n\n    def __init__(self, pid, stdin, pingin, broadcastout):\n        os.set_blocking(stdin, False)\n        os.set_blocking(pingin, False)\n        os.set_blocking(broadcastout, False)\n        self.current_task = None\n        self.pid = pid\n        self.stdin = stdin\n        self.pingin = pingin\n        self.ping_buffer = \"\"\n        self.log_buffer = b\"\"\n        self.broadcastout = broadcastout\n        self.soft_timeout_reached = False\n        self.last_ping = time.time()\n        self.done = False\n        self.soft_timeout = None\n        self.hard_timeout = None\n        self.max_mem_reached_at = 0\n\n    def close(self):\n        close_fd(self.pingin)\n        close_fd(self.stdin)\n        close_fd(self.broadcastout)\n\n    def set_timeouts(self, wakaq, task=None, queue=None):\n        self.current_task = task\n        soft_timeout, hard_timeout = get_timeouts(wakaq, task=task, 
queue=queue)\n        self.soft_timeout = soft_timeout\n        self.hard_timeout = hard_timeout\n\n    @property\n    def mem_usage_percent(self):\n        try:\n            return psutil.Process(self.pid).memory_percent()\n        except psutil.NoSuchProcess:\n            return 0\n\n\nclass Worker:\n    __slots__ = [\n        \"wakaq\",\n        \"children\",\n        \"_stop_processing\",\n        \"_pubsub\",\n        \"_pingout\",\n        \"_broadcastin\",\n        \"_num_tasks_processed\",\n        \"_loop\",\n        \"_active_async_tasks\",\n        \"_async_task_context\",\n    ]\n\n    def __init__(self, wakaq=None):\n        self.wakaq = wakaq\n\n    def start(self):\n        setup_logging(self.wakaq)\n        log.info(f\"concurrency={self.wakaq.concurrency}\")\n        log.info(f\"async_concurrency={self.wakaq.async_concurrency}\")\n        log.info(f\"soft_timeout={self.wakaq.soft_timeout}\")\n        log.info(f\"hard_timeout={self.wakaq.hard_timeout}\")\n        log.info(f\"wait_timeout={self.wakaq.wait_timeout}\")\n        log.info(f\"exclude_queues={self.wakaq.exclude_queues}\")\n        log.info(f\"max_retries={self.wakaq.max_retries}\")\n        log.info(f\"max_mem_percent={self.wakaq.max_mem_percent}\")\n        log.info(f\"max_tasks_per_worker={self.wakaq.max_tasks_per_worker}\")\n        log.info(f\"worker_log_file={self.wakaq.worker_log_file}\")\n        log.info(f\"scheduler_log_file={self.wakaq.scheduler_log_file}\")\n        log.info(f\"worker_log_level={self.wakaq.worker_log_level}\")\n        log.info(f\"scheduler_log_level={self.wakaq.scheduler_log_level}\")\n        log.info(f\"starting {self.wakaq.concurrency} workers...\")\n        self._run()\n\n    def _stop(self):\n        self._stop_processing = True\n        for child in self.children:\n            kill(child.pid, signal.SIGTERM)\n        try:\n            if self._pubsub:\n                self._pubsub.unsubscribe()\n        except:\n            
log.debug(traceback.format_exc())\n\n    def _run(self):\n        self.children = []\n        self._stop_processing = False\n\n        pid = None\n        for i in range(self.wakaq.concurrency):\n\n            # postpone fork if using too much ram\n            if self.wakaq.max_mem_percent:\n                percent_used = mem_usage_percent()\n                if percent_used >= self.wakaq.max_mem_percent:\n                    log.info(\n                        f\"postpone forking workers... mem usage {percent_used}% is more than max_mem_percent threshold ({self.wakaq.max_mem_percent}%)\"\n                    )\n                    break\n\n            pid = self._fork()\n            if pid == 0:\n                break\n\n        if pid != 0:  # parent\n            self._parent()\n\n    def _fork(self) -> int:\n        pingin, pingout = os.pipe()\n        broadcastin, broadcastout = os.pipe()\n        stdin, stdout = os.pipe()\n        pid = os.fork()\n        if pid == 0:  # child worker process\n            close_fd(stdin)\n            close_fd(pingin)\n            close_fd(broadcastout)\n            self._child(stdout, pingout, broadcastin)\n        else:  # parent process\n            close_fd(stdout)\n            close_fd(pingout)\n            close_fd(broadcastin)\n            self._add_child(pid, stdin, pingin, broadcastout)\n        return pid\n\n    def _parent(self):\n        signal.signal(signal.SIGCHLD, self._on_child_exited)\n        signal.signal(signal.SIGINT, self._on_exit_parent)\n        signal.signal(signal.SIGTERM, self._on_exit_parent)\n        signal.signal(signal.SIGQUIT, self._on_exit_parent)\n\n        log.info(\"finished forking all workers\")\n\n        try:\n            self._setup_pubsub()\n\n            try:\n                while not self._stop_processing:\n                    self._read_child_logs()\n                    self._check_max_mem_percent()\n                    self._refork_missing_children()\n                    
self._enqueue_ready_eta_tasks()\n                    self._cleanup_children()\n                    self._check_child_runtimes()\n                    self._listen_for_broadcast_task()\n            except SoftTimeout:\n                pass\n\n            if len(self.children) > 0:\n                log.info(\"shutting down...\")\n                while len(self.children) > 0:\n                    self._cleanup_children()\n                    self._check_child_runtimes()\n                    time.sleep(0.05)\n            log.info(\"all workers stopped\")\n\n        except:\n            try:\n                log.error(traceback.format_exc())\n            except:\n                print(traceback.format_exc())\n            self._stop()\n\n    def _child(self, stdout, pingout, broadcastin):\n        os.dup2(stdout, sys.stdout.fileno())\n        os.dup2(stdout, sys.stderr.fileno())\n        close_fd(stdout)\n        os.set_blocking(pingout, False)\n        os.set_blocking(broadcastin, False)\n        os.set_blocking(sys.stdout.fileno(), False)\n        os.set_blocking(sys.stderr.fileno(), False)\n        self._pingout = pingout\n        self._broadcastin = broadcastin\n\n        # reset sigchld\n        signal.signal(signal.SIGCHLD, signal.SIG_DFL)\n\n        # stop processing and gracefully shutdown\n        signal.signal(signal.SIGTERM, self._on_exit_child)\n\n        # ignore ctrl-c sent to process group from terminal\n        signal.signal(signal.SIGINT, signal.SIG_IGN)\n\n        # raise SoftTimeout\n        signal.signal(signal.SIGQUIT, self._on_soft_timeout_child)\n\n        setup_logging(self.wakaq, is_child=True)\n\n        try:\n\n            # redis should eventually detect pid change and reset, but we force it\n            self.wakaq.broker.connection_pool.reset()\n\n            # cleanup file descriptors opened by parent process\n            self._remove_all_children()\n            self._num_tasks_processed = 0\n\n            log.debug(\"started worker 
process\")\n\n            if callable(self.wakaq.after_worker_started_callback):\n                self.wakaq.after_worker_started_callback()\n\n            self._active_async_tasks = set()\n            self._async_task_context = {}\n            self._loop = asyncio.new_event_loop()\n            asyncio.set_event_loop(self._loop)\n            self._loop.run_until_complete(self._event_loop())\n\n        except (MemoryError, BlockingIOError, BrokenPipeError):\n            if current_task.get():\n                raise\n            log.debug(traceback.format_exc())\n\n        except Exception as e:\n            if exception_in_chain(e, SoftTimeout):\n                if current_task.get():\n                    raise\n                # a soft timeout with no task running is expected control flow, not an error\n                log.debug(traceback.format_exc())\n            else:\n                log.error(traceback.format_exc())\n\n        except:  # catch BaseException, SystemExit, KeyboardInterrupt, and GeneratorExit\n            log.error(traceback.format_exc())\n\n        finally:\n            if hasattr(self, \"_loop\"):\n                self._loop.close()\n\n            flush_fh(sys.stdout)\n            flush_fh(sys.stderr)\n            close_fd(self._broadcastin)\n            close_fd(self._pingout)\n            close_fd(sys.stdout)\n            close_fd(sys.stderr)\n\n    async def _event_loop(self):\n        while not self._stop_processing and (\n            not self.wakaq.async_concurrency or len(self._active_async_tasks) < self.wakaq.async_concurrency\n        ):\n            self._send_ping_to_parent()\n\n            queue_broker_key, payload = await self._blocking_dequeue()\n            if payload is not None:\n                try:\n                    task = self.wakaq.tasks[payload[\"name\"]]\n                except KeyError:\n                    log.error(f'Task not found: {payload[\"name\"]}')\n                    task = None\n\n                if task is not None:\n                    queue = self.wakaq.queues_by_key[queue_broker_key]\n
                    current_task.set((task, payload))\n\n                    # make sure parent process is still around (OOM killer may have stopped it without sending child signal)\n                    try:\n                        self._send_ping_to_parent(task_name=task.name, queue_name=queue.name if queue else None)\n                    except:\n                        # give task back to queue so it's not lost\n                        self.wakaq.broker.lpush(queue_broker_key, serialize(payload))\n                        current_task.set(None)\n                        raise\n\n                    async_task = self._loop.create_task(self._execute_task(task, payload, queue=queue))\n                    self._active_async_tasks.add(async_task)\n                    self._async_task_context[async_task] = {\n                        \"task\": task,\n                        \"payload\": payload,\n                        \"queue\": queue,\n                        \"start_time\": time.time(),\n                    }\n\n            try:\n                if self._active_async_tasks:\n                    done, pending = await asyncio.wait(\n                        self._active_async_tasks, timeout=0.01, return_when=asyncio.FIRST_COMPLETED\n                    )\n\n                    for async_task in done:\n                        self._active_async_tasks.remove(async_task)\n                        context = self._async_task_context.pop(async_task)\n                        try:\n                            await async_task\n                        except (MemoryError, BlockingIOError, BrokenPipeError):\n                            current_task.set((context[\"task\"], context[\"payload\"]))\n                            raise\n                        except asyncio.exceptions.CancelledError:\n                            current_task.set((context[\"task\"], context[\"payload\"]))\n                            # retry using the finished task's own queue and retry count, not the most recently dequeued one\n                            queue = context[\"queue\"]\n                            retry = (context[\"payload\"].get(\"retry\") or 0) + 1\n
                            max_retries = context[\"task\"].max_retries\n                            if max_retries is None:\n                                max_retries = (\n                                    queue.max_retries if queue.max_retries is not None else self.wakaq.max_retries\n                                )\n                            if retry > max_retries:\n                                log.error(traceback.format_exc())\n                            else:\n                                log.warning(traceback.format_exc())\n                                self.wakaq._enqueue_at_end(\n                                    context[\"task\"].name,\n                                    queue.name,\n                                    context[\"payload\"][\"args\"],\n                                    context[\"payload\"][\"kwargs\"],\n                                    retry=retry,\n                                )\n                        except Exception as e:\n                            current_task.set((context[\"task\"], context[\"payload\"]))\n                            if exception_in_chain(e, SoftTimeout):\n                                # retry using the finished task's own queue and retry count, not the most recently dequeued one\n                                queue = context[\"queue\"]\n                                retry = (context[\"payload\"].get(\"retry\") or 0) + 1\n                                max_retries = context[\"task\"].max_retries\n                                if max_retries is None:\n                                    max_retries = (\n                                        queue.max_retries if queue.max_retries is not None else self.wakaq.max_retries\n                                    )\n                                if retry > max_retries:\n                                    log.error(traceback.format_exc())\n                                else:\n                                    log.warning(traceback.format_exc())\n                                    self.wakaq._enqueue_at_end(\n                                        context[\"task\"].name,\n                                        queue.name,\n                                        
context[\"payload\"][\"args\"],\n                                        context[\"payload\"][\"kwargs\"],\n                                        retry=retry,\n                                    )\n                            else:\n                                log.error(traceback.format_exc())\n\n                    for async_task in pending:\n                        context = self._async_task_context.get(async_task)\n                        if not context:\n                            continue\n                        soft_timeout, _ = get_timeouts(self.wakaq, task=context[\"task\"], queue=context[\"queue\"])\n                        if not soft_timeout:\n                            continue\n                        runtime = time.time() - context[\"start_time\"]\n                        if runtime - 0.1 > soft_timeout and not async_task.cancelled():\n                            current_task.set((context[\"task\"], context[\"payload\"]))\n                            log.debug(\n                                f\"async task {context['task'].name} runtime {runtime} reached soft timeout, raising asyncio.CancelledError\"\n                            )\n                            async_task.cancel()\n\n                current_task.set(None)\n                self._send_ping_to_parent()\n\n            except (MemoryError, BlockingIOError, BrokenPipeError):\n                raise\n\n            # catch BaseException, SystemExit, KeyboardInterrupt, and GeneratorExit\n            except:\n                log.error(traceback.format_exc())\n\n            flush_fh(sys.stdout)\n            flush_fh(sys.stderr)\n            await self._execute_broadcast_tasks()\n            if self.wakaq.max_tasks_per_worker and self._num_tasks_processed >= self.wakaq.max_tasks_per_worker:\n                log.info(f\"restarting worker after {self._num_tasks_processed} tasks\")\n                self._stop_processing = True\n            flush_fh(sys.stdout)\n            
flush_fh(sys.stderr)\n\n    def _send_ping_to_parent(self, task_name=None, queue_name=None):\n        msg = task_name or \"\"\n        if msg:\n            msg = f\"{msg}:{queue_name or ''}\"\n        write_fd_or_raise(self._pingout, f\"{msg}\\n\")\n\n    def _add_child(self, pid, stdin, pingin, broadcastout):\n        self.children.append(Child(pid, stdin, pingin, broadcastout))\n\n    def _remove_all_children(self):\n        for child in self.children:\n            self._remove_child(child)\n\n    def _cleanup_children(self):\n        for child in self.children:\n            if child.done:\n                self._remove_child(child)\n\n    def _remove_child(self, child):\n        child.close()\n        self.children = [c for c in self.children if c.pid != child.pid]\n\n    def _on_exit_parent(self, signum, frame):\n        log.debug(f\"Received signal {signum}\")\n        self._stop()\n\n    def _on_exit_child(self, signum, frame):\n        self._stop_processing = True\n\n    def _on_soft_timeout_child(self, signum, frame):\n        raise SoftTimeout(\"SoftTimeout\")\n\n    def _on_child_exited(self, signum, frame):\n        for child in self.children:\n            if child.done:\n                continue\n            try:\n                pid, _ = os.waitpid(child.pid, os.WNOHANG)\n                if pid != 0:  # child exited\n                    child.done = True\n            except InterruptedError:  # child exited while calling os.waitpid\n                child.done = True\n            except ChildProcessError:  # child pid no longer valid\n                child.done = True\n            if child.done and child.max_mem_reached_at:\n                after = round(time.time() - child.max_mem_reached_at, 2)\n                log.info(f\"Stopped {child.pid} after {after} seconds\")\n\n    def _enqueue_ready_eta_tasks(self):\n        script = self.wakaq.broker.register_script(ZRANGEPOP)\n        for queue in self.wakaq.queues:\n            results = 
script(keys=[queue.broker_eta_key], args=[int(round(time.time()))])\n            for payload in results:\n                payload = deserialize(payload)\n                task_name = payload.pop(\"name\")\n                args = payload.pop(\"args\")\n                kwargs = payload.pop(\"kwargs\")\n                self.wakaq._enqueue_at_front(task_name, queue.name, args, kwargs)\n\n    async def _execute_task(self, task, payload, queue=None):\n        log.debug(f\"running with payload {payload}\")\n        if callable(self.wakaq.before_task_started_callback):\n            if inspect.iscoroutinefunction(self.wakaq.before_task_started_callback):\n                await self.wakaq.before_task_started_callback()\n            else:\n                self.wakaq.before_task_started_callback()\n\n        try:\n            if callable(self.wakaq.wrap_tasks_function):\n                if inspect.iscoroutinefunction(task.fn) and not inspect.iscoroutinefunction(\n                    self.wakaq.wrap_tasks_function\n                ):\n                    raise WakaQError(\n                        \"Unable to execute sync wrap_tasks_with when task is async. 
Make your wrap_tasks_with function async.\"\n                    )\n                if inspect.iscoroutinefunction(self.wakaq.wrap_tasks_function):\n                    await self.wakaq.wrap_tasks_function(task.fn, payload[\"args\"], payload[\"kwargs\"])\n                else:\n                    self.wakaq.wrap_tasks_function(task.fn, payload[\"args\"], payload[\"kwargs\"])\n\n            else:\n                if inspect.iscoroutinefunction(task.fn):\n                    await task.fn(*payload[\"args\"], **payload[\"kwargs\"])\n                else:\n                    task.fn(*payload[\"args\"], **payload[\"kwargs\"])\n\n        finally:\n            self._num_tasks_processed += 1\n            if callable(self.wakaq.after_task_finished_callback):\n                if inspect.iscoroutinefunction(self.wakaq.after_task_finished_callback):\n                    await self.wakaq.after_task_finished_callback()\n                else:\n                    self.wakaq.after_task_finished_callback()\n\n    async def _execute_broadcast_tasks(self):\n        payloads = read_fd(self._broadcastin)\n        if payloads == \"\":\n            return\n        for payload in payloads.splitlines():\n            payload = deserialize(payload)\n            try:\n                task = self.wakaq.tasks[payload[\"name\"]]\n            except KeyError:\n                log.error(f'Task not found: {payload[\"name\"]}')\n                continue\n            retry = 0\n            current_task.set((task, payload))\n            while True:\n                try:\n                    self._send_ping_to_parent(task_name=task.name)\n                    await self._execute_task(task, payload)\n                    current_task.set(None)\n                    self._send_ping_to_parent()\n                    break\n\n                except (MemoryError, BlockingIOError, BrokenPipeError):\n                    raise\n\n                except Exception as e:\n                    if 
exception_in_chain(e, SoftTimeout):\n                        retry += 1\n                        max_retries = task.max_retries\n                        if max_retries is None:\n                            max_retries = self.wakaq.max_retries\n                        if retry > max_retries:\n                            log.error(traceback.format_exc())\n                            break\n                        else:\n                            log.warning(traceback.format_exc())\n                    else:\n                        log.error(traceback.format_exc())\n                        break\n\n                except:  # catch BaseException, SystemExit, KeyboardInterrupt, and GeneratorExit\n                    log.error(traceback.format_exc())\n                    break\n\n    def _read_child_logs(self):\n        for child in self.children:\n            logs = read_fd(child.stdin)\n            if logs:\n                child.log_buffer += logs.encode(\"utf8\")\n\n            if not child.log_buffer:\n                continue\n\n            handler = log.handlers[0]\n            stream = handler.stream\n            if stream is None:  # filehandle can disappear if we run out of RAM\n                print(child.log_buffer.decode(\"utf8\"))\n                self._stop()\n                return\n\n            pending = child.log_buffer\n            did_write = False\n\n            try:\n                fd = stream.fileno()\n            except:\n                try:\n                    decoded = pending.decode(\"utf8\")\n                    chars_written = stream.write(decoded)\n                    if isinstance(chars_written, int) and chars_written > 0:\n                        bytes_written = len(decoded[:chars_written].encode(\"utf8\"))\n                        pending = pending[bytes_written:]\n                        did_write = True\n                    else:\n                        pending = b\"\"\n                except BlockingIOError as e:\n              
      if hasattr(e, \"characters_written\") and e.characters_written:\n                        decoded = pending.decode(\"utf8\")\n                        bytes_written = len(decoded[: e.characters_written].encode(\"utf8\"))\n                        pending = pending[bytes_written:]\n                        did_write = True\n                except BrokenPipeError:\n                    pending = b\"\"\n            else:\n                while pending:\n                    try:\n                        chars_written = os.write(fd, pending)\n                        pending = pending[chars_written:]\n                        did_write = True\n                    except BlockingIOError:\n                        break\n                    except BrokenPipeError:\n                        pending = b\"\"\n                        break\n\n            child.log_buffer = pending\n            if did_write:\n                flush_fh(handler)\n\n    def _check_max_mem_percent(self):\n        if not self.wakaq.max_mem_percent:\n            return\n        max_mem_reached_at = max([c.max_mem_reached_at for c in self.children if not c.done], default=0)\n        task_timeout = self.wakaq.hard_timeout or self.wakaq.soft_timeout or 120\n        now = time.time()\n        if now - max_mem_reached_at < task_timeout:\n            return\n        if len(self.children) == 0:\n            return\n        percent_used = mem_usage_percent()\n        if percent_used < self.wakaq.max_mem_percent:\n            return\n        log.info(f\"Mem usage {percent_used}% is more than max_mem_percent threshold ({self.wakaq.max_mem_percent}%)\")\n        self._log_mem_usage_of_all_children()\n        child = self._child_using_most_mem()\n        if child:\n            task = \"\"\n            if child.current_task:\n                task = f\" while processing task {child.current_task.name}\"\n            log.info(f\"Stopping child process {child.pid}{task}...\")\n            child.soft_timeout_reached = 
True  # prevent raising SoftTimeout twice for same child\n            child.max_mem_reached_at = now\n            kill(child.pid, signal.SIGTERM)\n\n    def _log_mem_usage_of_all_children(self):\n        if self.wakaq.worker_log_level != logging.DEBUG:\n            return\n        for child in self.children:\n            task = \"\"\n            if child.current_task:\n                task = f\" while processing task {child.current_task.name}\"\n            try:\n                log.debug(f\"Child process {child.pid} using {round(child.mem_usage_percent, 2)}% ram{task}\")\n            except:\n                log.warning(f\"Unable to get ram usage of child process {child.pid}{task}\")\n                log.warning(traceback.format_exc())\n\n    def _child_using_most_mem(self):\n        try:\n            return max(self.children, key=lambda c: c.mem_usage_percent)\n        except ValueError:\n            return None\n\n    def _check_child_runtimes(self):\n        for child in self.children:\n            child.ping_buffer += read_fd(child.pingin)\n            if child.ping_buffer[-1:] == \"\\n\":\n                child.last_ping = time.time()\n                child.soft_timeout_reached = False\n                ping = child.ping_buffer[:-1]\n                child.ping_buffer = \"\"\n                ping = ping.rsplit(\"\\n\", 1)[-1]\n                task, queue = None, None\n                if ping != \"\":\n                    try:\n                        task_name, queue_name = ping.split(\":\", 1)\n                        task = self.wakaq.tasks[task_name]\n                        queue = self.wakaq.queues_by_name.get(queue_name)\n                    except ValueError:\n                        log.error(f\"Unable to unpack message from child process {child.pid}: {ping}\")\n                child.set_timeouts(self.wakaq, task=task, queue=queue)\n            else:\n                soft_timeout = child.soft_timeout or self.wakaq.soft_timeout\n                
hard_timeout = child.hard_timeout or self.wakaq.hard_timeout\n                if soft_timeout or hard_timeout:\n                    runtime = time.time() - child.last_ping\n                    if hard_timeout and runtime > hard_timeout:\n                        log.debug(f\"child process {child.pid} runtime {runtime} reached hard timeout, sending sigkill\")\n                        kill(child.pid, signal.SIGKILL)\n                    elif not child.soft_timeout_reached and soft_timeout and runtime > soft_timeout:\n                        log.debug(f\"child process {child.pid} runtime {runtime} reached soft timeout, sending sigquit\")\n                        child.soft_timeout_reached = True  # prevent raising SoftTimeout twice for same child\n                        kill(child.pid, signal.SIGQUIT)\n\n    def _listen_for_broadcast_task(self):\n        if not self._pubsub:\n            self._setup_pubsub()\n            return\n        try:\n            msg = self._pubsub.get_message(ignore_subscribe_messages=True, timeout=self.wakaq.wait_timeout)\n        except (ConnectionError, BrokenPipeError, OSError):\n            log.warning(\"redis pubsub disconnected, reconnecting...\")\n            self._setup_pubsub()\n            return\n        if msg:\n            payload = msg[\"data\"]\n            for child in self.children:\n                if child.done:\n                    continue\n                log.debug(f\"run broadcast task: {payload}\")\n                write_fd(child.broadcastout, f\"{payload}\\n\")\n                break\n\n    def _setup_pubsub(self):\n        try:\n            if getattr(self, \"_pubsub\", None):\n                try:\n                    self._pubsub.close()\n                except:\n                    log.debug(traceback.format_exc())\n            try:\n                self.wakaq.broker.connection_pool.reset()\n            except:\n                log.debug(traceback.format_exc())\n            self._pubsub = 
self.wakaq.broker.pubsub()\n            self._pubsub.subscribe(self.wakaq.broadcast_key)\n            log.info(\"listening for broadcast tasks\")\n        except:\n            self._pubsub = None\n            log.warning(\"redis pubsub connection failure\")\n            log.debug(traceback.format_exc())\n\n    async def _blocking_dequeue(self):\n        if len(self.wakaq.broker_keys) == 0:\n            await asyncio.sleep(self.wakaq.wait_timeout)\n            return None, None\n        data = self.wakaq.broker.blpop(self.wakaq.broker_keys, self.wakaq.wait_timeout)\n        if data is None:\n            return None, None\n        return data[0], deserialize(data[1])\n\n    def _refork_missing_children(self):\n        if self._stop_processing:\n            return\n\n        if len(self.children) >= self.wakaq.concurrency:\n            return\n\n        # postpone fork missing children if using too much ram\n        if self.wakaq.max_mem_percent:\n            percent_used = mem_usage_percent()\n            if percent_used >= self.wakaq.max_mem_percent:\n                log.debug(\n                    f\"postpone forking missing workers... mem usage {percent_used}% is more than max_mem_percent threshold ({self.wakaq.max_mem_percent}%)\"\n                )\n                return\n\n        log.debug(\"restarting a crashed worker\")\n        self._fork()\n"
  }
]