[
  {
    "path": ".github/workflows/doc-deploy.yml",
    "content": "name: Document Deploy\non:\n  workflow_dispatch: { }\n  push:\n    paths:\n      - 'docs/**'\n    branches:\n      - master\njobs:\n  deploy:\n    runs-on: ubuntu-latest\n    permissions:\n      pages: write\n      id-token: write\n    environment:\n      name: github-pages\n      url: ${{ steps.deployment.outputs.page_url }}\n    steps:\n      - uses: actions/checkout@v3\n        with:\n          fetch-depth: 0\n      - uses: actions/setup-node@v3\n        with:\n          node-version: 16\n          cache: 'npm'\n          cache-dependency-path: docs/package-lock.json\n      - name: Install dependencies and build\n        run: |\n          npm ci\n          npm run docs:build\n        working-directory: docs\n      - uses: actions/configure-pages@v2\n      - uses: actions/upload-pages-artifact@v1\n        with:\n          path: docs/.vitepress/dist\n      - name: Deploy\n        id: deployment\n        uses: actions/deploy-pages@v1\n"
  },
  {
    "path": ".github/workflows/python-app.yml",
    "content": "# This workflow will install Python dependencies, run tests and lint with a single version of Python\n# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions\n\nname: Python application\n\non:\n  push:\n    paths:\n      - '.github/workflows/python-app.yml'\n      - 'bilix/**'\n      - 'pyproject.toml'\n    branches: [ \"master\" ]\n  pull_request:\n    paths:\n      - '.github/workflows/python-app.yml'\n      - 'bilix/**'\n      - 'pyproject.toml'\n    branches: [ \"master\" ]\n\npermissions:\n  contents: read\n\njobs:\n  build:\n    runs-on: ubuntu-latest\n    strategy:\n      # You can use PyPy versions in python-version.\n      # For example, pypy-2.7 and pypy-3.8\n      matrix:\n        python-version: [ \"3.8\", \"3.9\", \"3.10\", \"3.11\", \"3.12\" ]\n\n    steps:\n      - uses: actions/checkout@v3\n      - name: Set up Python ${{ matrix.python-version }}\n        uses: actions/setup-python@v4\n        with:\n          python-version: ${{ matrix.python-version }}\n      - name: Install dependencies\n        run: |\n          python -m pip install --upgrade pip\n          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi\n          pip install -e .\n"
  },
  {
    "path": ".github/workflows/python-publish.yml",
    "content": "# This workflow will upload a Python Package using Twine when a release is created\n# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries\n\n# This workflow uses actions that are not certified by GitHub.\n# They are provided by a third-party and are governed by\n# separate terms of service, privacy policy, and support\n# documentation.\n\nname: Upload Python Package\n\non:\n  release:\n    types: [published]\n\npermissions:\n  contents: read\n\njobs:\n  deploy:\n\n    runs-on: ubuntu-latest\n\n    steps:\n    - uses: actions/checkout@v3\n    - name: Set up Python\n      uses: actions/setup-python@v3\n      with:\n        python-version: '3.x'\n    - name: Install dependencies\n      run: |\n        python -m pip install --upgrade pip\n        pip install build\n    - name: Build package\n      run: python -m build\n    - name: Publish package\n      uses: pypa/gh-action-pypi-publish@27b31702a0e7fc50959f5ad993c78deac1bdfc29\n      with:\n        user: __token__\n        password: ${{ secrets.PYPI_API_TOKEN }}\n"
  },
  {
    "path": ".gitignore",
    "content": ".idea\n.vscode\n.fleet\n.pytest_cache\nvideos\n__pycache__/\n*.egg-info/\n*.pyc\nvenv*/\nbuild/\ndist/\ndocs/.vitepress/dist\ndocs/.vitepress/cache\nnode_modules\n.venv\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# bilix 开发指南\n\n感谢你对贡献bilix有所兴趣，在你开始之前可以阅读下面的一些提示。请注意，bilix正快速迭代，\n如果你在阅读本文档时发现有些内容已经过时，请以master分支的代码为准。\n\n# 开始之前\n\n在一切开始之前，你需要先 **fork** 本仓库，然后clone你fork的仓库到你的本地：\n\n```shell\ngit clone https://github.com/your_user_name/bilix\n```\n\n拉取至本地后，我**建议**你在独立的python环境中进行测试和开发，确认后进行本地源码可编辑安装：\n\n```shell\npip install -e .\n```\n\n试试bilix命令能否正常执行。通过测试了？至此，你可以在本地开发bilix了🍻\n\n# bilix 结构\n\n在动手改动代码之前你需要对bilix的结构有一定的了解，下面是bilix的大致目录和各模块相应功能：\n\n```text\nbilix\n├── __init__.py\n├── __main__.py\n├── _process.py  # 多进程相关\n├── cli\n│   ├── assign.py  # 分配任务，动态导入相关\n│   └── main.py    # 命令行入口\n├── download\n│   ├── base_downloader.py\n│   ├── base_downloader_m3u8.py  # 基础m3u8下载器\n│   ├── base_downloader_part.py  # 基础分段文件下载器\n│   └── utils.py                 # 下载相关的一些工具函数\n├── exception.py\n├── log.py\n├── progress\n│   ├── abc.py            # 进度条抽象类\n│   ├── cli_progress.py   # 命令行进度条\n│   └── ws_progress.py\n├── serve\n│   ├── __init__.py\n│   ├── app.py\n│   ├── auth.py\n│   ├── serve.py\n│   └── user.py\n├── sites     # 站点扩展目录，稍后介绍\n└── utils.py  # 通用工具函数\n```\n\n## 基础下载器\n\nbilix在`bilix.download`中提供了两种基础下载器，m3u8下载器和分段文件下载器。\n它们基于`httpx`乃至更底层的`asyncio`及IO多路复用，并且集成了速度控制，并发控制，断点续传，时间段切片，进度条显示等许多实用功能。\nbilix的站点扩展下载功能都将基于这些基础下载器完成，基础下载器本身也提供cli服务\n\n## 下载器是如何提供cli服务的\n\n在bilix中，一个类只要实现了`handle`方法，就可以被注册到命令行（cli）中，`handle`方法的函数签名为\n\n```python\n@classmethod\ndef handle(cls, method: str, keys: Tuple[str, ...], options: dict):\n    ...\n```\n\nhandle函数的实现应该满足下面三个原则：\n\n1. 如果类根据`method` `keys` `options`认为自己不应该承担下载任务，`handle`函数应该返回`None`\n2. 如果类可以承担任务，但发现`method`不在自己的可接受范围内，应该抛出`HandleMethodError`异常\n3. 
如果类可以承担任务，且`method`在自己的可接受范围内，应该返回两个值，第一个值为下载器实例，第二个值为下载coroutine\n\nQ：🙋为什么我看到有的下载器返回的是类本身，以及下载函数对象？\n\n```python\n@classmethod\ndef handle(cls, method: str, keys: Tuple[str, ...], options: dict):\n    if method == 'f' or method == 'get_file':\n        return cls, cls.get_file\n```\n\nA：为了偷懒，如果返回值是类以及下载函数对象，将根据命令行参数及type hint自动组装为实例和coroutine，\n适用于当命令行options的名字和方法，类参数名字、类型一致的情况\n\n其实`handle`函数给了你较大的自由，你可以根据自己的需求，自由地组合出适合你的下载器的cli服务\n\n## 如何快速添加一个站点的支持\n\n在`bilix/sites`下，已经有一些站点的支持，如果你想要添加一个新的站点支持，可以按照下面的步骤进行：\n\n1. 在`sites`文件夹下新建一个站点文件夹，例如`example`\n2. 在`example`文件夹下添加站点的api模块`api.py`，仿照其他站点的格式实现从输入网页url到输出视频url，视频title的各种api\n3. 在`example`文件夹下添加站点api模块的测试`api_test.py`，让大家随时测试站点是否可用\n4. 在`example`文件夹下添加站点的下载器`downloader.py`，定义`DownloaderExample`\n   类，根据该站点使用的传输方法选择相应的`BaseDownloader`进行继承，然后在类中定义好下载视频的方法，并实现`handle`\n   方法。另外你还可以添加`downloader_test.py`来验证你的下载器是否可用\n5. 在`example`文件夹下添加`__init__.py`，将`DownloaderExample`类导入，并且在`__all__`中添加`DownloaderExample`以方便bilix找到你的下载器\n\n搞定，使用bilix命令测试一下吧\n\n当前已经有其他开发者为bilix对其他站点的适配做出了贡献🎉，\n或许被接受的[New site PR](https://github.com/HFrost0/bilix/pulls?q=is%3Apr+is%3Aclosed+label%3A%22New+site%22)也能为你提供帮助\n\n"
  },
  {
    "path": "CONTRIBUTING_EN.md",
    "content": "# Development guide of bilix\n\nThank you for your interest in contributing to bilix. Before you start, you can read some tips below.\nPlease note that bilix is rapidly iterating, if you find some content outdated while reading this document,\nplease refer to the code of the master branch.\n\n# Before starting\n\nBefore everything starts, you need to first **fork** this repository, and then clone your fork:\n\n```shell\ngit clone https://github.com/your_user_name/bilix\n```\n\nAfter clone, I **recommend** you to test and develop in an independent python environment,\nand then perform local source editable installation after that:\n\n```shell\npip install -e .\n```\n\nTry whether the `bilix` command can be executed normally. Passed the test? At this point,\nyou can develop bilix locally🍻\n\n# Structure of bilix\n\nBefore making any changes to the code, you need to have some understanding of the structure of bilix.\n\n```text\nbilix\n├── __init__.py\n├── __main__.py\n├── _process.py  # related to multiprocessing\n├── cli\n│   ├── assign.py  # assign tasks, dynamically import related\n│   └── main.py    # command line entry\n├── download\n│   ├── base_downloader.py\n│   ├── base_downloader_m3u8.py  # basic m3u8 downloader\n│   ├── base_downloader_part.py  # basic segmented file downloader\n│   └── utils.py                 # some utils for download\n├── exception.py\n├── log.py\n├── progress\n│   ├── abc.py            # abstract class of progress\n│   ├── cli_progress.py   # progress for cli\n│   └── ws_progress.py\n├── serve\n│   ├── __init__.py\n│   ├── app.py\n│   ├── auth.py\n│   ├── serve.py\n│   └── user.py\n├── sites     # site support\n└── utils.py  # some utils\n```\n\n# BaseDownloader\n\nbilix provides two basic downloaders in `bilix.download`, m3u8 downloader and content range file downloader.\nThey are based on `httpx` and even lower-level `asyncio` and IO multiplexing, and integrate many practical functions\nsuch as speed control, 
concurrency control, download resumption, time-range clipping, and progress bar display.\nbilix's site extensions are built on these basic downloaders, and the basic downloaders\nthemselves also provide cli services.\n\n\n# How does a downloader provide a cli service\n\nIn bilix, as long as a class implements the `handle` method, it can be registered in the command line interface (cli).\nThe signature of the `handle` method is\n\n```python\n@classmethod\ndef handle(cls, method: str, keys: Tuple[str, ...], options: dict):\n    ...\n```\n\nThe implementation of the `handle` function should follow three principles:\n\n1. If, based on `method`, `keys`, and `options`, the class decides it should not take the download task, the `handle` function should return `None`\n2. If the class can take the task, but finds that the `method` is not within its acceptable range, it should raise a `HandleMethodError` exception\n3. If the class can handle the task, and `method` is within its acceptable range, it should return two values: the first is the downloader instance, and the second is the download coroutine\n\nQ: 🙋Why do I see that some downloaders return the class itself and the download function object?\n\n```python\n@classmethod\ndef handle(cls, method: str, keys: Tuple[str, ...], options: dict):\n    if method == 'f' or method == 'get_file':\n        return cls, cls.get_file\n```\n\nA: For convenience: if the return value is a class and a function object, they will be automatically assembled into an\ninstance and a coroutine according to the command line arguments, options, and type hints.\nThis works when the names and types of the command line options match the parameters of the method and class.\n\n\n# How to add support for a site\n\nUnder `bilix/sites`, there are already some supported sites. If you want to add support for a new site, you can follow the steps below:\n\n1. Create a new site folder under the `sites` folder, such as `example`\n2. 
Add the site's api module `api.py` under the `example` folder, following the format of the other sites to implement the APIs that go from an input webpage url to the output video url and video title\n3. Add the site api module test `api_test.py` under the `example` folder, so that everyone can check at any time whether the site is still available\n4. Add the site downloader `downloader.py` under the `example` folder. Define a `DownloaderExample`\n   class that inherits from the `BaseDownloader` matching the transfer method the site uses, define the video download methods in the class, and implement the `handle`\n   method. You can also add `downloader_test.py` to verify that your downloader works\n5. Add `__init__.py` under the `example` folder, import the `DownloaderExample` class, and add `DownloaderExample` to `__all__` so that bilix can find your downloader\n\nDone! Test it with the `bilix` command\n\nOther developers have already contributed site support to bilix🎉,\nand perhaps the accepted [New site PRs](https://github.com/HFrost0/bilix/pulls?q=is%3Apr+is%3Aclosed+label%3A%22New+site%22) can also help you\n"
  },
  {
    "path": "LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [HFrost0] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "# bilix\n\n[![GitHub license](https://img.shields.io/github/license/HFrost0/bilix?style=flat-square)](https://github.com/HFrost0/bilix/blob/master/LICENSE)\n![PyPI](https://img.shields.io/pypi/v/bilix?style=flat-square&color=blue)\n![GitHub commit activity](https://img.shields.io/github/commit-activity/m/HFrost0/bilix)\n![PyPI - Downloads](https://img.shields.io/pypi/dm/bilix?label=pypi%20downloads&style=flat-square)\n\n⚡️Lightning-fast asynchronous download tool for bilibili and more\n\n\n## Features\n\n### ⚡️ Fast & Async\n\nAsynchronous high concurrency support, controllable concurrency and speed settings.\n\n### 😉 Lightweight & User-friendly\n\nLightweight user-friendly CLI with progress notification, focusing on core functionality.\n\n### 📝 Fully-featured\n\nSubmissions, anime, TV Series, video clip, audio, favourite, danmaku ,cover...\n\n### 🔨 Extensible\n\nExtensible Python module suitable for more download scenarios.\n\n## Install\n\n```shell\npip install bilix\n```\n\nfor macOS, you can also install `bilix` by `brew`\n\n```shell\nbrew install bilix\n```\n\n## Usage Example\n\n* If you prefer to use command line interface (cli)\n\n```shell\nbilix v 'url'\n```\n\n> `v` is a method short alias for `get_video`\n\n* If you prefer to code with python\n\n```python\nfrom bilix.sites.bilibili import DownloaderBilibili\nimport asyncio\n\n\nasync def main():\n    async with DownloaderBilibili() as d:\n        await d.get_video('url')\n\n\nasyncio.run(main())\n```\n\n## Community\n\nIf you find any bugs or other issues, feel free to raise an [Issue](https://github.com/HFrost0/bilix/issues).\n\nIf you have new ideas or new feature requests👍，welcome to participate in\nthe [Discussion](https://github.com/HFrost0/bilix/discussions)\n\nIf you find this project helpful, you can support the author by [Star](https://github.com/HFrost0/bilix/stargazers)🌟\n\n## Contribute\n\n❤️ Welcome! 
Details can be found in [Contributing](https://github.com/HFrost0/bilix/blob/master/CONTRIBUTING_EN.md)\n"
  },
  {
    "path": "bilix/__init__.py",
    "content": "\"\"\"\nLighting-fast async download tool inspired by w\n\"\"\"\n\n__version__ = \"0.18.9\"\n__url__ = \"https://github.com/HFrost0/bilix\"\n"
  },
  {
    "path": "bilix/__main__.py",
    "content": "from bilix.cli.main import main\n\nmain()\n"
  },
  {
    "path": "bilix/_process.py",
    "content": "import signal\nimport sys\nfrom concurrent.futures import ProcessPoolExecutor\nfrom functools import partial\n\n\ndef _init():\n    def shutdown(*args):\n        sys.exit(0)\n\n    signal.signal(signal.SIGINT, shutdown)\n\n\ndef singleton(cls):\n    _instance = {}\n\n    def inner(*args, **kwargs):\n        if cls not in _instance:\n            _instance[cls] = cls(*args, **kwargs)\n        return _instance[cls]\n\n    return inner\n\n\n# singleton ProcessPoolExecutor to avoid recreation in spawn process\nSingletonPPE = singleton(partial(ProcessPoolExecutor, initializer=_init))\n\nif __name__ == '__main__':\n    p = SingletonPPE(max_workers=5)\n    p.shutdown()\n"
  },
  {
    "path": "bilix/cli/assign.py",
    "content": "import asyncio\nimport inspect\nimport re\nimport time\nfrom functools import wraps\nfrom pathlib import Path\nfrom typing import Callable, Union, Tuple\nfrom importlib import import_module\n\nfrom bilix.exception import HandleMethodError, HandleError\nfrom bilix.log import logger\n\n\ndef kwargs_filter(obj: Union[type, Callable], kwargs: dict):\n    \"\"\"\n\n    :param obj:\n    :param kwargs:\n    :return:\n    \"\"\"\n    sig = inspect.signature(obj)\n    obj_require = set(sig.parameters.keys())\n\n    def check(k):\n        if k in obj_require:\n            p = sig.parameters[k]\n            # check type hint\n            try:\n                if p.annotation is inspect.Signature.empty or \\\n                        isinstance(kwargs[k], p.annotation):\n                    return True\n                else:\n                    logger.debug(f\"kwarg {k}:{kwargs[k]} has been drop due to type hint missmatch\")\n                    return False\n            except TypeError:  # https://peps.python.org/pep-0604/#isinstance-and-issubclass\n                # lower than 3.10, Union\n                # TypeError: Subscripted generics cannot be used with class and instance checks\n                return True\n        return False\n\n    kwargs = {k: kwargs[k] for k in filter(check, kwargs)}\n    return kwargs\n\n\ndef module_handle_funcs(module):\n    \"\"\"find and yield all handle func in module\"\"\"\n    attrs = getattr(module, '__all__', None)\n    attrs = attrs or dir(module)\n    for attr_name in attrs:\n        if attr_name.startswith('__'):\n            continue\n        executor_cls = getattr(module, attr_name)\n        if not inspect.isclass(executor_cls):\n            continue\n        handle_func = getattr(executor_cls, 'handle', None)\n        if handle_func is None:\n            continue\n        yield handle_func\n\n\ndef auto_assemble(handle_func):\n    @wraps(handle_func)\n    def wrapped(cls, method: str, keys: Tuple[str, ...], 
options: dict):\n        res = handle_func(cls, method, keys, options)\n        if res is NotImplemented or res is None:\n            return res\n        executor, cor = res\n        # handle func returned a class instead of an instance\n        if inspect.isclass(executor):\n            kwargs = kwargs_filter(executor, options)\n            executor = executor(**kwargs)\n            logger.debug(f\"auto assemble {executor} by {kwargs}\")\n        # handle func returned an async function instead of a coroutine\n        if inspect.iscoroutinefunction(cor):\n            kwargs = kwargs_filter(cor, options)\n            cors = []\n            for key in keys:\n                if not hasattr(cor, '__self__'):  # coroutine function is not bound to an instance\n                    cors.append(cor(executor, key, **kwargs))  # bind executor as self\n                else:\n                    cors.append(cor(key, **kwargs))\n                logger.debug(f\"auto assemble {cor} by {kwargs}\")\n            cor = asyncio.gather(*cors)\n        return executor, cor\n\n    return wrapped\n\n\ndef longest_common_len(str1, str2):\n    \"\"\"length of the longest common substring of str1 and str2\"\"\"\n    m, n = len(str1), len(str2)\n    dp = [[0] * (n + 1) for _ in range(m + 1)]\n    max_length = 0\n    for i in range(1, m + 1):\n        for j in range(1, n + 1):\n            if str1[i - 1] == str2[j - 1]:\n                dp[i][j] = dp[i - 1][j - 1] + 1\n                max_length = max(max_length, dp[i][j])\n    return max_length\n\n\ndef find_sites():\n    sites_path = Path(__file__).parent.parent / 'sites'\n    for site in sites_path.iterdir():\n        if not site.is_dir() or not (site / '__init__.py').exists():\n            continue\n        yield site\n\n\ndef assign(cli_kwargs):\n    method = cli_kwargs.pop('method')\n    keys = cli_kwargs.pop('keys')\n    options = cli_kwargs\n    modules = [\n        # path, cmp_key\n        ('download.base_downloader_m3u8', 'm3u8'),\n        ('download.base_downloader_part', 'file'),\n    ]\n    for site in 
find_sites():\n        modules.append((f\"sites.{site.name}\", site.name))\n    pattern = re.compile(r\"https?://(?:[\\w-]*\\.)?([\\w-]+)\\.([\\w-]+)\")\n    if g := pattern.search(keys[0]):\n        cmp_base = g.group(1)\n    else:\n        cmp_base = keys[0]\n\n    def key(x: Tuple[str, str]):\n        if x[0].startswith(\"sites\"):\n            return longest_common_len(cmp_base, x[-1])\n        else:  # base_downloader\n            return longest_common_len(method, x[-1])\n\n    for module, _ in sorted(modules, key=key, reverse=True):\n        a = time.time()\n        try:\n            module = import_module(f\"bilix.{module}\")\n        except ImportError as e:\n            logger.debug(f\"due to ImportError <{e}>, skip <module 'bilix.{module}'>\")\n            continue\n        logger.debug(f\"import cost {time.time() - a:.6f} s <module '{module.__name__}'>\")\n        exc = None\n        for handle_func in module_handle_funcs(module):\n            try:\n                res = handle_func(method, keys, options)\n            except HandleMethodError as e:\n                exc = e\n                continue\n            if res is NotImplemented or res is None:\n                continue\n            executor, cor = res\n            logger.debug(f\"Assign to {executor.__class__.__name__}\")\n            return executor, cor\n        if exc is not None:  # some handler in the module could take the task, but the method didn't match\n            raise exc\n    raise HandleError(f\"Can't find any handler for method: '{method}' keys: {keys}\")\n"
  },
  {
    "path": "bilix/cli/main.py",
    "content": "import asyncio\nimport typing\nfrom pathlib import Path\nimport click\nimport rich\nfrom rich.panel import Panel\nfrom rich.table import Table\n\nfrom .. import __version__\nfrom ..log import logger\nfrom .assign import assign\nfrom ..progress.cli_progress import CLIProgress\nfrom ..utils import parse_bytes_str, s2t\nfrom ..exception import HandleError\n\n\ndef handle_help(ctx: click.Context, param: typing.Union[click.Option, click.Parameter], value: typing.Any, ) -> None:\n    if not value or ctx.resilient_parsing:\n        return\n    print_help()\n    ctx.exit()\n\n\ndef handle_version(ctx: click.Context, param: typing.Union[click.Option, click.Parameter], value: typing.Any, ) -> None:\n    if not value or ctx.resilient_parsing:\n        return\n    print(f\"Version {__version__}\")\n    ctx.exit()\n\n\ndef handle_debug(ctx: click.Context, param: typing.Union[click.Option, click.Parameter], value: typing.Any, ):\n    if not value or ctx.resilient_parsing:\n        return\n    from rich.traceback import install\n    install()\n    logger.setLevel('DEBUG')\n    logger.debug(\"Debug on, more information will be shown\")\n\n\ndef print_help():\n    console = rich.console.Console()\n    console.print(f\"\\n[bold]bilix {__version__}\", justify=\"center\")\n    console.print(\"⚡️快如闪电的bilibili下载工具，基于Python现代Async特性，高速批量下载整部动漫，电视剧，up投稿等\\n\",\n                  justify=\"center\")\n    console.print(\"使用方法： bilix [cyan]<method> <key1, key2...> [OPTIONS][/cyan] \", justify=\"left\")\n    table = Table.grid(padding=1, pad_edge=False)\n    table.add_column(\"Parameter\", no_wrap=True, justify=\"left\", style=\"bold\")\n    table.add_column(\"Description\")\n\n    table.add_row(\n        \"[cyan]<method>\",\n        'get_series 或 s：   获取整个系列的视频（包括多p投稿，动漫，电视剧，电影，纪录片），也可以下载单个视频\\n'\n        'get_video 或 v：    获取特定的单个视频，在用户不希望下载系列其他视频的时候可以使用\\n'\n        'get_up 或 up：      获取某个up的所有投稿视频，支持数量选择，关键词搜索，排序\\n'\n        'get_cate 或 cate：  获取分区视频，支持数量选择，关键词搜索，排序\\n'\n 
       'get_favour or fav:  download videos from a favourites folder, with count selection and keyword search\\n'\n        'get_collect or col: download videos from a collection or video list\\n'\n        'info:               print details of the resource behind the url (e.g. likes, quality, codec)'\n    )\n    table.add_row(\n        \"[cyan]<key>[/cyan]\",\n        'for get_video/get_series, the url of the video\\n'\n        'for get_up, the url of the user space page or the user id\\n'\n        'for get_cate, the category name\\n'\n        'for get_favour, the url of the favourites page or the favourites id\\n'\n        'for get_collect, the url of the collection or video list detail page\\n'\n        'for info, any resource url'\n    )\n    console.print(table)\n    # console.rule(\"OPTIONS\")\n    table = Table(highlight=True, box=None, show_header=False)\n    table.add_column(\"OPTIONS\", no_wrap=True, justify=\"left\", style=\"bold\")\n    table.add_column(\"type\", no_wrap=True, justify=\"left\", style=\"bold\")\n    table.add_column(\"Description\", )\n    table.add_row(\n        \"-d --dir\",\n        '[dark_cyan]str',\n        \"download directory, defaults to the videos folder under the current path, created automatically if missing\"\n    )\n    table.add_row(\n        \"-q --quality\",\n        '[dark_cyan]int | str',\n        \"video quality, default 0 is the highest, larger numbers mean lower quality, out-of-range values fall back to the lowest, or give a name like '1080p' directly\"\n    )\n    table.add_row(\n        \"-vc --video-con\",\n        '[dark_cyan]int',\n        \"max number of videos downloaded at the same time, higher bandwidth allows higher values, default 3\",\n    )\n    table.add_row(\n        \"-pc --part-con\",\n        '[dark_cyan]int',\n        \"number of concurrent parts for each media, default 10\",\n    )\n    table.add_row(\n        '--cookie',\n        '[dark_cyan]str',\n        'users with a premium account can provide its SESSDATA to download member-only videos'\n    )\n    table.add_row(\n        \"-fb --from-browser\", '[dark_cyan]str',\n        'which browser to import cookies from, e.g. safari, chrome, edge... default none',\n    )\n    table.add_row(\n        '--days',\n        '[dark_cyan]int',\n        'results from the last days days, default 7, only effective for get_up and get_cate'\n    )\n    table.add_row(\n        \"-n --num\",\n        '[dark_cyan]int',\n        \"how many videos to download, only effective for get_up, get_cate and get_favour\",\n    )\n    table.add_row(\n        \"--order\",\n        '[dark_cyan]str',\n        'sort order: pubdate publish time (default), click views, scores comments, stow favourites, coin coins, dm danmaku, only effective for get_up and get_cate',\n    
)\n    table.add_row(\n        \"--keyword\",\n        '[dark_cyan]str',\n        'search keyword, only effective for get_up, get_cate and get_favour',\n    )\n    table.add_row(\n        \"-ns --no-series\", '',\n        'only download the first p of each video in the results, only effective for get_up, get_cate and get_favour',\n    )\n    table.add_row(\n        \"-nh --no-hierarchy\", '',\n        'do not use hierarchical directories, save all videos directly under the download directory'\n    )\n    table.add_row(\n        \"--image\", '',\n        'download video cover'\n    )\n    table.add_row(\n        \"--subtitle\", '',\n        'download srt subtitles',\n    )\n    table.add_row(\n        \"--dm\", '',\n        'download danmaku',\n    )\n    table.add_row(\n        \"-oa --only-audio\", '',\n        'audio only, always downloaded at the highest audio quality',\n    )\n    table.add_row(\n        \"-p\", '[dark_cyan]int, int',\n        'episode range, e.g. -p 1 3 downloads only P1 to P3, only effective for get_series',\n    )\n    table.add_row(\n        \"--codec\", '[dark_cyan]str',\n        'video and audio codec (check with info first, separated by :), full names (e.g. avc1.640032, fLaC) or partial names (e.g. avc, hev)',\n    )\n    table.add_row(\n        \"-sl --speed-limit\", '[dark_cyan]str',\n        'max download speed, unlimited by default, e.g. -sl 1.5MB',\n    )\n    table.add_row(\n        \"-sr --stream-retry\", '[dark_cyan]int',\n        'max retries after a network error during download, default 5',\n    )\n    table.add_row(\n        \"-tr --time-range\", '[dark_cyan]str',\n        r'time range to download, like h:m:s-h:m:s or s-s, default none, only effective for get_video',\n    )\n    table.add_row(\"-h --help\", '', \"show help\")\n    table.add_row(\"-v --version\", '', \"show version\")\n    table.add_row(\"--debug\", '', \"show debug information\")\n    console.print(Panel(table, border_style=\"dim\", title=\"Options\", title_align=\"left\"))\n\n\nclass BasedQualityType(click.ParamType):\n    name = \"quality\"\n\n    def convert(self, value, param, ctx):\n        try:\n            value = int(value)\n        except ValueError:\n            return value  # str\n        if value in {1080, 720, 480, 360}:\n            return str(value)\n        else:\n            return value  # relative choice like 0, 1, 2, 999...\n\n\nclass BasedSpeedLimit(click.ParamType):\n    name = \"speed_limit\"\n\n    def 
convert(self, value, param, ctx):\n        if value is not None:\n            return parse_bytes_str(value)\n\n\nclass BasedTimeRange(click.ParamType):\n    name = \"time_range\"\n\n    def convert(self, value, param, ctx):\n        start_time, end_time = map(s2t, value.split('-'))\n        return start_time, end_time\n\n\n@click.command(add_help_option=False)\n@click.argument(\"method\", type=str)\n@click.argument(\"keys\", type=str, nargs=-1, required=True)\n@click.option(\n    \"-d\",\n    \"--dir\",\n    \"path\",\n    type=Path,\n    default='videos',\n)\n@click.option(\n    '-q',\n    '--quality',\n    'quality',\n    type=BasedQualityType(),\n    default=0,  # default relatively choice\n)\n@click.option(\n    '-vc',\n    '--video-con',\n    'video_concurrency',\n    type=int,\n    default=3,\n)\n@click.option(\n    '-pc',\n    \"--part-con\",\n    \"part_concurrency\",\n    type=int,\n    default=10,\n)\n@click.option(\n    '--cookie',\n    'cookie',\n    type=str,\n)\n@click.option(\n    '--days',\n    'days',\n    type=int,\n    default=7,\n)\n@click.option(\n    '-n',\n    '--num',\n    type=int,\n    default=10,\n)\n@click.option(\n    '--order',\n    'order',\n    type=str,\n    default='pubdate',\n)\n@click.option(\n    '--keyword',\n    'keyword',\n    type=str\n)\n@click.option(\n    '-ns',\n    '--no-series',\n    'series',\n    is_flag=True,\n    default=True,\n)\n@click.option(\n    '-nh',\n    '--no-hierarchy',\n    'hierarchy',\n    is_flag=True,\n    default=True,\n)\n@click.option(\n    '--image',\n    'image',\n    is_flag=True,\n    default=False,\n)\n@click.option(\n    '--subtitle',\n    'subtitle',\n    is_flag=True,\n    default=False,\n)\n@click.option(\n    '--dm',\n    'dm',\n    is_flag=True,\n    default=False,\n)\n@click.option(\n    '-oa',\n    '--only-audio',\n    'only_audio',\n    is_flag=True,\n    default=False,\n)\n@click.option(\n    '-p',\n    'p_range',\n    type=(int, int),\n)\n@click.option(\n    '--codec',\n    
'codec',\n    type=str,\n    default=''\n)\n@click.option(\n    '--speed-limit',\n    '-sl',\n    'speed_limit',\n    type=BasedSpeedLimit(),\n    default=None,\n)\n@click.option(\n    '--stream-retry',\n    '-sr',\n    'stream_retry',\n    type=int,\n    default=5\n)\n@click.option(\n    '--from-browser',\n    '-fb',\n    'browser',\n    type=str,\n)\n@click.option(\n    '--time-range',\n    '-tr',\n    'time_range',\n    type=BasedTimeRange(),\n    default=None,\n)\n@click.option(\n    '-h',\n    \"--help\",\n    is_flag=True,\n    is_eager=True,\n    expose_value=False,\n    callback=handle_help,\n)\n@click.option(\n    '-v',\n    \"--version\",\n    is_flag=True,\n    is_eager=True,\n    expose_value=False,\n    callback=handle_version,\n)\n@click.option(\n    \"--debug\",\n    is_flag=True,\n    is_eager=True,\n    expose_value=False,\n    callback=handle_debug,\n)\ndef main(**kwargs):\n    loop = asyncio.new_event_loop()  # avoid deprecated warning in 3.11\n    asyncio.set_event_loop(loop)\n    logger.debug(f'CLI KEY METHOD and OPTIONS: {kwargs}')\n    try:\n        # CLIProgress.switch_theme(gs=\"cyan\", bs=\"dark_cyan\")\n        CLIProgress.start()  # start progress\n        if not kwargs['path'].exists():\n            kwargs['path'].mkdir(parents=True)\n            logger.info(f'Directory {kwargs[\"path\"]} does not exist, auto created')\n        executor, cor = assign(kwargs)\n        loop.run_until_complete(cor)\n    except HandleError as e:  # method no match\n        logger.error(e)\n    except KeyboardInterrupt:\n        logger.info('[cyan]Hint: interrupted by user, rerun the command to resume the download')\n    finally:\n        CLIProgress.stop()  # stop rich progress to ensure cursor is repositioned\n"
  },
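The CLI above converts `-sl 1.5MB` via `parse_bytes_str` and `h:m:s`-style time strings via `s2t`, both imported from `bilix.utils`, whose source is not part of this dump. A minimal sketch of what such helpers might look like — the accepted formats and the binary-unit table are assumptions for illustration, not the library's actual behavior:

```python
import re

# Hypothetical re-implementations of the two helpers the CLI imports from
# bilix.utils (their real definitions are not shown in this dump).

_UNITS = {'': 1, 'B': 1, 'KB': 1024, 'MB': 1024 ** 2, 'GB': 1024 ** 3}


def parse_bytes_str(s: str) -> float:
    """Parse a human speed string like '1.5MB' into bytes (binary units assumed)."""
    m = re.fullmatch(r'([\d.]+)\s*([A-Za-z]*)', s.strip())
    if not m:
        raise ValueError(f'invalid bytes string: {s}')
    num, unit = m.groups()
    return float(num) * _UNITS[unit.upper()]


def s2t(s: str) -> int:
    """Convert 'h:m:s', 'm:s' or plain seconds into total seconds."""
    total = 0
    for part in s.split(':'):
        total = total * 60 + int(part)
    return total
```

With helpers shaped like these, `BasedSpeedLimit.convert` and `BasedTimeRange.convert` above would turn `-sl 1.5MB` into a byte count and `-tr 0:1:30-0:2:0` into a `(90, 120)` tuple.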
  {
    "path": "bilix/download/base_downloader.py",
    "content": "import asyncio\nimport inspect\nimport logging\nimport re\nimport time\nfrom functools import wraps\nfrom typing import Union, Optional, Tuple\nfrom contextlib import asynccontextmanager\nfrom urllib.parse import urlparse\nimport aiofiles\nimport httpx\n\nfrom bilix.cli.assign import auto_assemble\nfrom bilix.log import logger as dft_logger\nfrom bilix.download.utils import req_retry, path_check\nfrom bilix.progress.abc import Progress\nfrom bilix.progress.cli_progress import CLIProgress\nfrom bilix.exception import HandleMethodError\nfrom pathlib import Path, PurePath\n\n__all__ = ['BaseDownloader']\n\n\nclass BaseDownloaderMeta(type):\n    def __new__(cls, name, bases, dct):\n        dct['_cli_info'] = {}\n        dct['_cli_map'] = {}\n        for method_name, method in dct.items():\n            if not method_name.startswith('_') and asyncio.iscoroutinefunction(method):\n                if 'path' in (sig := inspect.signature(method)).parameters:\n                    dct[method_name] = cls.ensure_path(method, sig)\n\n                if cls.check_unique_method(method, bases):\n                    cli_info = cls.parse_cli_doc(method)\n                    if cli_info:\n                        dct['_cli_info'][method] = cli_info\n                        dct['_cli_map'][method_name] = method\n                        if cli_info['short']:\n                            dct['_cli_map'][cli_info['short']] = method\n\n        return super().__new__(cls, name, bases, dct)\n\n    @staticmethod\n    def check_unique_method(method_name: str, bases: Tuple[type, ...]):\n        for base in bases:\n            if method_name in base.__dict__:\n                return False\n        return True\n\n    @staticmethod\n    def parse_cli_doc(func) -> Optional[dict]:\n        docstring = func.__doc__\n        if not docstring or ':cli:' not in docstring:\n            return\n        params_matches = re.findall(r\":param (\\w+): (.+)\", docstring)\n        params = {param: 
description for param, description in params_matches}\n\n        cli_short_match = re.search(r\":cli: short: (\\w+)\", docstring)\n        short_name = cli_short_match.group(1) if cli_short_match else None\n\n        return {\"short\": short_name, \"params\": params}\n\n    @staticmethod\n    def ensure_path(func, sig):\n        path_index = next(i for i, name in enumerate(sig.parameters) if name == 'path')\n\n        @wraps(func)\n        async def wrapper(*args, **kwargs):\n            new_args = list(args)\n            if path_index < len(args) and isinstance(args[path_index], str):\n                new_args[path_index] = Path(args[path_index])\n            elif 'path' in kwargs and isinstance(kwargs['path'], str):\n                kwargs['path'] = Path(kwargs['path'])\n\n            return await func(*new_args, **kwargs)\n\n        wrapper.__annotations__['path'] = Union[Path, str]\n        return wrapper\n\n\nclass BaseDownloader(metaclass=BaseDownloaderMeta):\n    pattern: re.Pattern = None\n    cookie_domain: str = \"\"\n    _cli_info: dict\n    _cli_map: dict\n\n    def __init__(\n            self,\n            *,\n            client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int] = None,\n            stream_retry: int = 5,\n            progress: Progress = None,\n            logger: logging.Logger = None,\n    ):\n        \"\"\"\n\n        :param client: client used for http request\n        :param browser: load cookies from which browser\n        :param speed_limit: global download rate for the downloader, should be a number (Byte/s unit)\n        :param progress: progress obj\n        \"\"\"\n        # use cli progress by default\n        self.progress = progress or CLIProgress()\n        self.logger = logger or dft_logger\n        self.client = client if client else httpx.AsyncClient(headers={'user-agent': 'PostmanRuntime/7.29.0'})\n        if browser:  # load cookies from browser, may need auth\n 
           self.update_cookies_from_browser(browser)\n        assert speed_limit is None or speed_limit > 0\n        self.speed_limit = speed_limit\n        self.stream_retry = stream_retry\n        # active stream number\n        self._stream_num = 0\n\n    async def __aenter__(self):\n        await self.client.__aenter__()\n        return self\n\n    async def __aexit__(self, exc_type, exc_val, exc_tb):\n        await self.client.__aexit__(exc_type, exc_val, exc_tb)\n\n    async def aclose(self):\n        \"\"\"Close transport and proxies for httpx client\"\"\"\n        await self.client.aclose()\n\n    async def get_static(self, url: str, path: Union[str, Path], convert_func=None) -> Path:\n        \"\"\"\n\n        :param url:\n        :param path: file path without suffix\n        :param convert_func: function used to convert http bytes content, must be named like ...2...\n        :return: downloaded file path\n        \"\"\"\n        # use suffix from convert_func's name\n        if convert_func:\n            suffix = '.' 
+ convert_func.__name__.split('2')[-1]\n        # try to find suffix from url\n        else:\n            suffix = PurePath(urlparse(url).path).suffix\n        path = path.with_name(path.name + suffix)\n        exist, path = path_check(path)\n        if exist:\n            self.logger.info(f'[green]exists[/green] {path.name}')\n            return path\n        res = await req_retry(self.client, url)\n        content = convert_func(res.content) if convert_func else res.content\n        async with aiofiles.open(path, 'wb') as f:\n            await f.write(content)\n        self.logger.info(f'[cyan]done[/cyan] {path.name}')\n        return path\n\n    @asynccontextmanager\n    async def _stream_context(self, times: int):\n        \"\"\"\n        contextmanager to print log, slow down streaming and count active stream number\n\n        :param times: number of errors so far, used to scale the sleep time\n        :return:\n        \"\"\"\n        self._stream_num += 1\n        try:\n            yield\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 403:\n                self.logger.warning(f\"STREAM slowing down since 403 forbidden {e}\")\n                await asyncio.sleep(10. 
* (times + 1))\n            else:\n                self.logger.warning(f\"STREAM {e}\")\n                await asyncio.sleep(.5 * (times + 1))\n            raise\n        except httpx.TransportError as e:\n            msg = f'STREAM {e.__class__.__name__} may be caused by poor network or too much concurrency, consider lowering the concurrency if it keeps happening'\n            self.logger.warning(msg) if times > 2 else self.logger.debug(msg)\n            await asyncio.sleep(.1 * (times + 1))\n            raise\n        except Exception as e:\n            self.logger.warning(f'STREAM Unexpected Exception class:{e.__class__.__name__} {e}')\n            raise\n        finally:\n            self._stream_num -= 1\n\n    @property\n    def stream_num(self):\n        \"\"\"current active network stream number\"\"\"\n        return self._stream_num\n\n    @property\n    def chunk_size(self) -> Optional[int]:\n        if self.speed_limit and self.speed_limit < 1e5:  # 1e5 limit bound\n            # only restrict chunk_size when speed_limit is too low\n            return int(self.speed_limit * 0.1)  # 0.1 delay slope\n        # default to None setup\n        return None\n\n    async def _check_speed(self, content_size):\n        if self.speed_limit and (cur_speed := self.progress.active_speed) > self.speed_limit:\n            t_tgt = content_size / self.speed_limit * self.stream_num\n            t_real = content_size / cur_speed\n            t = t_tgt - t_real\n            await asyncio.sleep(t)\n\n    def update_cookies_from_browser(self, browser: str):\n        try:\n            a = time.time()\n            import browser_cookie3\n            f = getattr(browser_cookie3, browser.lower())\n            self.logger.debug(f\"trying to load cookies from {browser}: {self.cookie_domain}, may need auth\")\n            self.client.cookies.update(f(domain_name=self.cookie_domain))\n            self.logger.debug(f\"load complete, consumed time: {time.time() - a} s\")\n        except AttributeError:\n            raise AttributeError(f\"Invalid Browser 
{browser}\")\n\n    @classmethod\n    def _decide_handle(cls, method: str, keys: Tuple[str, ...], options: dict) -> bool:\n        \"\"\"check if the cls can be handled by this downloader\"\"\"\n        if cls.pattern:\n            return cls.pattern.match(keys[0]) is not None\n        else:\n            return method in cls._cli_map\n\n    @classmethod\n    @auto_assemble\n    def handle(cls, method: str, keys: Tuple[str, ...], options: dict):\n        if cls._decide_handle(method, keys, options):\n            try:\n                method = cls._cli_map[method]\n            except KeyError:\n                raise HandleMethodError(cls, method)\n            return cls, method\n"
  },
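`BaseDownloaderMeta.parse_cli_doc` above is what decides which coroutine methods become CLI commands: a method is exposed only if its docstring carries a `:cli:` marker, its `:param` lines become option descriptions, and an optional `:cli: short:` line registers an alias. A stand-alone sketch of that extraction, reusing the same two regexes on a sample docstring (the sample text is invented for illustration):

```python
import re
from typing import Optional


def parse_cli_doc(docstring: str) -> Optional[dict]:
    """Stand-alone variant of BaseDownloaderMeta.parse_cli_doc that takes the
    docstring directly instead of a function object (same regexes as above)."""
    if not docstring or ':cli:' not in docstring:
        return None
    # ":param name: description" lines become the per-option help texts
    params = dict(re.findall(r":param (\w+): (.+)", docstring))
    # ":cli: short: x" registers a short alias in _cli_map
    short = re.search(r":cli: short: (\w+)", docstring)
    return {"short": short.group(1) if short else None, "params": params}


sample_doc = """
download file by http content-range
:cli: short: f
:param url_or_urls: file url or urls with backups
:param path: file path or dir path
"""
info = parse_cli_doc(sample_doc)
```

Run against `sample_doc`, this yields `{"short": "f", "params": {...}}`, which is why `get_file` is reachable as both `bilix get_file` and `bilix f`.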
  {
    "path": "bilix/download/base_downloader_m3u8.py",
    "content": "import asyncio\nimport uuid\nfrom pathlib import Path, PurePath\nfrom typing import Tuple, Union\nfrom urllib.parse import urlparse\nimport aiofiles\nimport httpx\nimport os\nimport m3u8\nfrom Crypto.Cipher import AES\nfrom m3u8 import Segment\nfrom bilix.download.base_downloader import BaseDownloader\nfrom bilix.download.utils import path_check, merge_files\nfrom bilix import ffmpeg\nfrom .utils import req_retry\n\n__all__ = ['BaseDownloaderM3u8']\n\n\nclass BaseDownloaderM3u8(BaseDownloader):\n    \"\"\"Base Async http m3u8 Downloader\"\"\"\n\n    def __init__(\n            self,\n            *,\n            client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            # unique params\n            part_concurrency: int = 10,\n            video_concurrency: Union[int, asyncio.Semaphore] = 3,\n    ):\n        super(BaseDownloaderM3u8, self).__init__(\n            client=client,\n            browser=browser,\n            stream_retry=stream_retry,\n            speed_limit=speed_limit,\n            progress=progress,\n            logger=logger\n        )\n        self.v_sema = asyncio.Semaphore(video_concurrency) if isinstance(video_concurrency, int) else video_concurrency\n        self.part_concurrency = part_concurrency\n        self.decrypt_cache = {}\n\n    async def _decrypt(self, seg: m3u8.Segment, content: bytearray):\n        async def get_key():\n            key_bytes = (await req_retry(self.client, uri)).content\n            iv = bytes.fromhex(seg.key.iv.replace('0x', '')) if seg.key.iv is not None else \\\n                seg.custom_parser_values['iv']\n            return AES.new(key_bytes, AES.MODE_CBC, iv)\n\n        uri = seg.key.absolute_uri\n        if uri not in self.decrypt_cache:\n            self.decrypt_cache[uri] = asyncio.ensure_future(get_key())\n            
self.decrypt_cache[uri] = await self.decrypt_cache[uri]\n        elif asyncio.isfuture(self.decrypt_cache[uri]):\n            await self.decrypt_cache[uri]\n        cipher = self.decrypt_cache[uri]\n        return cipher.decrypt(content)\n\n    async def to_invariant_m3u8(self, m3u8_url: str) -> m3u8.M3U8:\n        res = await req_retry(self.client, m3u8_url, follow_redirects=True)\n        m3u8_info = m3u8.loads(res.text)\n        if not m3u8_info.base_uri:\n            m3u8_info.base_uri = m3u8_url\n        if m3u8_info.is_variant:\n            self.logger.debug(f\"m3u8 is variant, use first playlist: {m3u8_info.playlists[0].absolute_uri}\")\n            return await self.to_invariant_m3u8(m3u8_info.playlists[0].absolute_uri)\n        return m3u8_info\n\n    async def get_m3u8_video(self, m3u8_url: str, path: Union[str, Path], time_range: Tuple[int, int] = None) -> Path:\n        \"\"\"\n        download video from m3u8 url\n        :cli: short: m3u8\n        :param m3u8_url:\n        :param path: file path or file dir, if dir, filename will be set according to m3u8_url\n        :param time_range: (start, end) in seconds, if provided, only download the clip and add start-end to filename\n        :return: downloaded file path\n        \"\"\"\n        if path.is_dir():\n            path = (path / PurePath(urlparse(m3u8_url).path).stem).with_suffix('.mp4')\n        if time_range:\n            path = path.with_stem(f\"{path.stem}-{time_range[0]}-{time_range[1]}\")\n        exist, path = path_check(path)\n        if exist:\n            self.logger.info(f\"[green]exists[/green] {path.name}\")\n            return path\n        async with self.v_sema:\n            task_id = await self.progress.add_task(total=None, description=path.name)\n            m3u8_info = await self.to_invariant_m3u8(m3u8_url)\n            cors = []\n            p_sema = asyncio.Semaphore(self.part_concurrency)\n            total_time = 0\n            if time_range:\n                current_time = 0\n 
               start_time, end_time = time_range\n                inside = False\n            else:\n                inside = True\n            for idx, seg in enumerate(m3u8_info.segments):\n                if time_range:\n                    current_time += seg.duration\n                    if not inside and current_time > start_time:\n                        inside = True\n                        s = seg.duration - (current_time - start_time)\n                    elif current_time > end_time:\n                        break\n                if inside:\n                    total_time += seg.duration\n                    # https://stackoverflow.com/questions/50628791/decrypt-m3u8-playlist-encrypted-with-aes-128-without-iv\n                    if seg.key and seg.key.iv is None:\n                        seg.custom_parser_values['iv'] = idx.to_bytes(16, 'big')\n                    cors.append(self._get_seg(seg, path.with_name(f\"{path.stem}-{idx}.ts\"), task_id, p_sema))\n            if len(cors) == 0 and time_range:\n                raise Exception(f\"time range <{start_time}-{end_time}> invalid for <{path.name}>\")\n            if init_sec := m3u8_info.segments[0].init_section:\n                async def _get_init():\n                    r = await req_retry(self.client, init_sec.absolute_uri)\n                    async with aiofiles.open(fn := path.with_name(f\"{path.stem}-init\"), 'wb') as f:\n                        await f.write(r.content)\n                        return fn\n\n                cors.insert(0, _get_init())\n                merge_fn = merge_files\n            else:\n                merge_fn = ffmpeg.concat\n            await self.progress.update(task_id, total_time=total_time)\n            file_list = await asyncio.gather(*cors)\n\n        await merge_fn(file_list, path)\n        if time_range:\n            path_tmp = path.with_stem(str(uuid.uuid4()))\n            # to save key frame, use 0 as start time instead of s, clip will be a little longer 
than expected\n            await ffmpeg.time_range_clip(path, 0, end_time - start_time + s, path_tmp)\n            os.rename(path_tmp, path)\n        self.logger.info(f\"[cyan]done[/cyan] {path.name}\")\n        await self.progress.update(task_id, visible=False)\n        return path\n\n    async def _update_task_total(self, task_id, time_part: float, update_size: int):\n        task = self.progress.tasks[task_id]\n        if task.total is None:\n            confirmed_t = time_part\n            confirmed_b = update_size\n        else:\n            confirmed_t = time_part + task.fields['confirmed_t']\n            confirmed_b = update_size + task.fields['confirmed_b']\n        predicted_total = task.fields['total_time'] * confirmed_b / confirmed_t\n        await self.progress.update(task_id, total=predicted_total, confirmed_t=confirmed_t, confirmed_b=confirmed_b)\n\n    async def _get_seg(self, seg: Segment, path: Path, task_id, p_sema: asyncio.Semaphore) -> Path:\n        exists, path = path_check(path)\n        if exists:\n            downloaded = os.path.getsize(path)\n            await self._update_task_total(task_id, time_part=seg.duration, update_size=downloaded)\n            await self.progress.update(task_id, advance=downloaded)\n            return path\n        seg_url = seg.absolute_uri\n        async with p_sema:\n            content = None\n            for times in range(1 + self.stream_retry):\n                content = bytearray()\n                try:\n                    async with self.client.stream(\"GET\", seg_url,\n                                                  follow_redirects=True) as r, self._stream_context(times):\n                        r.raise_for_status()\n                        # pre-update total if content-length is provided and first time to get content\n                        if 'content-length' in r.headers and not content:\n                            await self._update_task_total(\n                                task_id, 
time_part=seg.duration, update_size=int(r.headers['content-length']))\n                        async for chunk in r.aiter_bytes(chunk_size=self.chunk_size):\n                            content.extend(chunk)\n                            await self.progress.update(task_id, advance=len(chunk))\n                            await self._check_speed(len(chunk))\n                    if 'content-length' not in r.headers:  # after-update total if content-length is not provided\n                        await self._update_task_total(task_id, time_part=seg.duration, update_size=len(content))\n                    break\n                except (httpx.HTTPStatusError, httpx.TransportError):\n                    continue\n            else:\n                raise Exception(f\"STREAM max retries exceeded {seg_url}\")\n        content = self._after_seg(seg, content)\n        # in case encrypted\n        if seg.key:\n            content = await self._decrypt(seg, content)\n        async with aiofiles.open(path, 'wb') as f:\n            await f.write(content)\n        return path\n\n    def _after_seg(self, seg: Segment, content: bytearray) -> bytearray:\n        \"\"\"hook for subclass to modify segment content, applied before decryption\"\"\"\n        return content\n"
  },
  {
    "path": "bilix/download/base_downloader_part.py",
    "content": "import asyncio\nfrom pathlib import Path, PurePath\nfrom typing import Union, List, Iterable, Tuple\nfrom urllib.parse import urlparse\nimport aiofiles\nimport httpx\nimport uuid\nimport random\nimport os\nfrom email.message import Message\nfrom pymp4.parser import Box\nfrom bilix.download.base_downloader import BaseDownloader\nfrom bilix.download.utils import path_check, merge_files\nfrom bilix import ffmpeg\nfrom .utils import req_retry\n\n__all__ = ['BaseDownloaderPart']\n\n\nclass BaseDownloaderPart(BaseDownloader):\n    \"\"\"Base Async http Content-Range Downloader\"\"\"\n\n    def __init__(\n            self,\n            *,\n            client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int, None] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            # unique params\n            part_concurrency: int = 10,\n    ):\n        super(BaseDownloaderPart, self).__init__(\n            client=client,\n            browser=browser,\n            stream_retry=stream_retry,\n            speed_limit=speed_limit,\n            progress=progress,\n            logger=logger\n        )\n        self.part_concurrency = part_concurrency\n\n    async def _pre_req(self, urls: List[str]) -> Tuple[int, str]:\n        # use GET instead of HEAD due to 404 bug https://github.com/HFrost0/bilix/issues/16\n        res = await req_retry(self.client, urls[0], follow_redirects=True, headers={'Range': 'bytes=0-1'})\n        total = int(res.headers['Content-Range'].split('/')[-1])\n        # get filename\n        if content_disposition := res.headers.get('Content-Disposition', None):\n            m = Message()\n            m['content-type'] = content_disposition\n            filename = m.get_param('filename', '')\n        else:\n            filename = ''\n        # change origin url to redirected position to avoid twice redirect\n        if res.history:\n            
urls[0] = str(res.url)\n        return total, filename\n\n    async def get_media_clip(\n            self,\n            url_or_urls: Union[str, Iterable[str]],\n            path: Union[Path, str],\n            time_range: Tuple[int, int],\n            init_range: str,\n            seg_range: str,\n            get_s: asyncio.Future = None,\n            set_s: asyncio.Future = None,\n            task_id=None,\n    ):\n        \"\"\"\n\n        :param url_or_urls:\n        :param path:\n        :param time_range: (start_time, end_time)\n        :param init_range: xxx-xxx\n        :param seg_range: xxx-xxx\n        :param get_s:\n        :param set_s:\n        :param task_id:\n        :return:\n        \"\"\"\n        upper = task_id is not None and self.progress.tasks[task_id].fields.get('upper', None)\n        exist, path = path_check(path)\n        if exist:\n            if not upper:\n                self.logger.info(f'[green]exists[/green] {path.name}')\n            return path\n\n        urls = [url_or_urls] if isinstance(url_or_urls, str) else [url for url in url_or_urls]\n        init_start, init_end = map(int, init_range.split('-'))\n        seg_start, seg_end = map(int, seg_range.split('-'))\n        res = await req_retry(self.client, urls[0], follow_redirects=True,\n                              headers={'Range': f'bytes={seg_start}-{seg_end}'})\n        container = Box.parse(res.content)\n        assert container.type == b'sidx'\n        if get_s:\n            start_time = await get_s\n            end_time = time_range[1]\n        else:\n            start_time, end_time = time_range\n        pre_time, pre_byte = 0, seg_end + 1\n        inside = False\n        parts = [(init_start, init_end)]\n        total = init_end - init_start + 1\n        s = 0\n        for idx, ref in enumerate(container.references):\n            if ref.reference_type != \"MEDIA\":\n                self.logger.debug(f\"not a media {ref}\")\n                continue\n            seg_duration 
= ref.segment_duration / container.timescale\n            if not inside and start_time < pre_time + seg_duration:\n                s = start_time - pre_time\n                inside = True\n            if inside and end_time < pre_time:\n                break\n            if inside:\n                total += ref.referenced_size\n                parts.append((pre_byte, pre_byte + ref.referenced_size - 1))\n            pre_time += seg_duration\n            pre_byte += ref.referenced_size\n        if len(parts) == 1:\n            raise Exception(f\"time range <{start_time}-{end_time}> invalid for <{path.name}>\")\n        if set_s:\n            set_s.set_result(start_time - s)\n        if task_id is not None:\n            await self.progress.update(\n                task_id,\n                total=self.progress.tasks[task_id].total + total if self.progress.tasks[task_id].total else total)\n        else:\n            task_id = await self.progress.add_task(description=path.name, total=total)\n        p_sema = asyncio.Semaphore(self.part_concurrency)\n\n        async def get_seg(part_range: Tuple[int, int]):\n            async with p_sema:\n                return await self._get_file_part(urls, path=path, part_range=part_range, task_id=task_id)\n\n        file_list = await asyncio.gather(*[get_seg(part_range) for part_range in parts])\n        path_tmp = path.with_name(str(uuid.uuid4()))\n        await merge_files(file_list, path_tmp)\n        if set_s:\n            await ffmpeg.time_range_clip(path_tmp, start=0, t=end_time - start_time + s, output_path=path)\n        else:\n            await ffmpeg.time_range_clip(path_tmp, start=s, t=end_time - start_time, output_path=path)\n        if not upper:  # no upstream task\n            await self.progress.update(task_id, visible=False)\n            self.logger.info(f\"[cyan]done[/cyan] {path.name}\")\n        return path\n\n    async def get_file(self, url_or_urls: Union[str, Iterable[str]], path: Union[Path, str], 
task_id=None) -> Path:\n        \"\"\"\n        download file by http content-range\n        :cli: short: f\n        :param url_or_urls: file url or urls with backups\n        :param path: file path or dir path, if dir path, filename will be extracted from url\n        :param task_id: if not provided, a new progress task will be created\n        :return: downloaded file path\n        \"\"\"\n        urls = [url_or_urls] if isinstance(url_or_urls, str) else list(url_or_urls)\n        path = Path(path)  # ensure Path, since str is also accepted\n        upper = task_id is not None and self.progress.tasks[task_id].fields.get('upper', None)\n\n        if not path.is_dir():\n            exist, path = path_check(path)\n            if exist:\n                if not upper:\n                    self.logger.info(f'[green]已存在[/green] {path.name}')\n                return path\n\n        total, req_filename = await self._pre_req(urls)\n\n        if path.is_dir():\n            file_name = req_filename if req_filename else PurePath(urlparse(urls[0]).path).name\n            path /= file_name\n            exist, path = path_check(path)\n            if exist:\n                if not upper:\n                    self.logger.info(f'[green]已存在[/green] {path.name}')\n                return path\n\n        if task_id is not None:\n            await self.progress.update(\n                task_id,\n                total=self.progress.tasks[task_id].total + total if self.progress.tasks[task_id].total else total)\n        else:\n            task_id = await self.progress.add_task(description=path.name, total=total)\n        part_length = total // self.part_concurrency\n        cors = []\n        for i in range(self.part_concurrency):\n            start = i * part_length\n            end = (i + 1) * part_length - 1 if i < self.part_concurrency - 1 else total - 1\n            cors.append(self._get_file_part(urls, path=path, part_range=(start, end), task_id=task_id))\n        file_list = await asyncio.gather(*cors)\n        await merge_files(file_list, 
new_path=path)\n        if not upper:\n            await self.progress.update(task_id, visible=False)\n            self.logger.info(f\"[cyan]已完成[/cyan] {path.name}\")\n        return path\n\n    async def _get_file_part(self, urls: List[str], path: Path, part_range: Tuple[int, int],\n                             task_id) -> Path:\n        start, end = part_range\n        part_path = path.with_name(f'{path.name}.{start}-{end}')\n        exist, part_path = path_check(part_path)\n        if exist:\n            downloaded = os.path.getsize(part_path)\n            start += downloaded\n            await self.progress.update(task_id, advance=downloaded)\n        if start > end:\n            return part_path  # skip already finished\n        url_idx = random.randint(0, len(urls) - 1)\n\n        for times in range(1 + self.stream_retry):\n            try:\n                async with \\\n                        self.client.stream(\"GET\", urls[url_idx], follow_redirects=True,\n                                           headers={'Range': f'bytes={start}-{end}'}) as r, \\\n                        self._stream_context(times), \\\n                        aiofiles.open(part_path, 'ab') as f:\n                    r.raise_for_status()\n                    if r.history:  # update the url to avoid redirecting twice\n                        urls[url_idx] = str(r.url)\n                    async for chunk in r.aiter_bytes(chunk_size=self.chunk_size):\n                        await f.write(chunk)\n                        start += len(chunk)\n                        await self.progress.update(task_id, advance=len(chunk))\n                        await self._check_speed(len(chunk))\n                break\n            except (httpx.HTTPStatusError, httpx.TransportError):\n                continue\n        else:\n            raise Exception(f\"STREAM 超过重复次数 {part_path.name}\")\n        return part_path\n"
  },
  {
    "path": "bilix/download/utils.py",
    "content": "import asyncio\nimport errno\nimport os\nimport random\nfrom functools import wraps\nfrom pathlib import Path\n\nimport aiofiles\nimport httpx\nfrom typing import Union, Sequence, Tuple, List\nfrom bilix.exception import APIError, APIParseError\nfrom bilix.log import logger\n\n\nasync def merge_files(file_list: List[Path], new_path: Path):\n    \"\"\"append the remaining files to the first one, remove them, then rename the first to new_path\"\"\"\n    first_file = file_list[0]\n    async with aiofiles.open(first_file, 'ab') as f:\n        for idx in range(1, len(file_list)):\n            async with aiofiles.open(file_list[idx], 'rb') as fa:\n                await f.write(await fa.read())\n            os.remove(file_list[idx])\n    os.rename(first_file, new_path)\n\n\nasync def req_retry(client: httpx.AsyncClient, url_or_urls: Union[str, Sequence[str]], method='GET',\n                    follow_redirects=False, retry=5, **kwargs) -> httpx.Response:\n    \"\"\"Client request with multiple backup urls and retry\"\"\"\n    pre_exc = None  # predefine to avoid warning\n    for times in range(1 + retry):\n        url = url_or_urls if isinstance(url_or_urls, str) else random.choice(url_or_urls)\n        try:\n            res = await client.request(method, url, follow_redirects=follow_redirects, **kwargs)\n            res.raise_for_status()\n        except httpx.TransportError as e:\n            msg = f'{method} {e.__class__.__name__} url: {url}'\n            if times > 0:\n                logger.warning(msg)\n            else:\n                logger.debug(msg)\n            pre_exc = e\n            await asyncio.sleep(.1 * (times + 1))\n        except httpx.HTTPStatusError as e:\n            logger.warning(f'{method} {e.response.status_code} {url}')\n            pre_exc = e\n            await asyncio.sleep(1. 
* (times + 1))\n        except Exception as e:\n            logger.warning(f'{method} {e.__class__.__name__} 未知异常 url: {url}')\n            raise\n        else:\n            return res\n    logger.error(f\"{method} 超过重复次数 {url_or_urls}\")\n    raise pre_exc\n\n\ndef eclipse_str(s: str, max_len: int = 100):\n    \"\"\"shorten a string to at most max_len characters, keeping head and tail around an ellipsis\n\n    >>> eclipse_str('abcdefgh', max_len=5)\n    'ab…gh'\n    \"\"\"\n    if len(s) <= max_len:\n        return s\n    else:\n        half_len = (max_len - 1) // 2\n        return f\"{s[:half_len]}…{s[-half_len:]}\"\n\n\ndef path_check(path: Path, retry: int = 100) -> Tuple[bool, Path]:\n    \"\"\"\n    check whether the path exists; if the filename is too long for the os, truncate it and return a valid path\n\n    :param path: path to check\n    :param retry: max truncate retry times\n    :return: exist, path\n    \"\"\"\n    for times in range(retry):\n        try:\n            exist = path.exists()\n            return exist, path\n        except OSError as e:\n            if e.errno == errno.ENAMETOOLONG:  # filename too long for os\n                if times == 0:\n                    logger.warning(f\"filename too long for os, truncate will be applied. filename: {path.name}\")\n                else:\n                    logger.debug(f\"filename too long for os {path.name}\")\n                path = path.with_stem(eclipse_str(path.stem, int(len(path.stem) * .8)))\n            else:\n                raise e\n    raise OSError(f\"filename too long for os {path.name}\")\n\n\ndef raise_api_error(func):\n    \"\"\"Decorator that re-raises any exception other than APIError and httpx.HTTPError as APIParseError\"\"\"\n\n    @wraps(func)\n    async def wrapped(client: httpx.AsyncClient, *args, **kwargs):\n        try:\n            return await func(client, *args, **kwargs)\n        except (APIError, httpx.HTTPError):\n            raise\n        except Exception as e:\n            raise APIParseError(e, func) from e\n\n    return wrapped\n"
  },
  {
    "path": "bilix/exception.py",
    "content": "class APIError(Exception):\n    \"\"\"API error during a request to the website\"\"\"\n\n    def __init__(self, msg: str, resource):\n        self.msg = msg\n        self.resource = resource\n\n    def __str__(self):\n        return f\"{self.msg} resource: {self.resource}\"\n\n\nclass APIParseError(APIError):\n    \"\"\"API parse error, maybe caused by a website interface change; raised by the raise_api_error decorator\"\"\"\n\n    def __init__(self, e, func):\n        self.e = e\n        self.func = func\n\n    def __str__(self):\n        return f\"APIParseError Caused by {self.e.__class__.__name__} in <{self.func.__module__}:{self.func.__name__}>\"\n\n\nclass APIResourceError(APIError):\n    \"\"\"API error raised when the resource is not available (e.g. deleted by the uploader)\"\"\"\n\n\nclass APIUnsupportedError(APIError):\n    \"\"\"parsing of the resource is not supported yet\"\"\"\n\n\nclass APIInvalidError(APIError):\n    \"\"\"API request is invalid\"\"\"\n\n\nclass HandleError(Exception):\n    \"\"\"error related to bilix cli handling\"\"\"\n\n\nclass HandleMethodError(HandleError):\n    \"\"\"error raised when the handler can not recognize the method\"\"\"\n\n    def __init__(self, executor_cls, method):\n        self.executor_cls = executor_cls\n        self.method = method\n\n    def __str__(self):\n        return f\"For {self.executor_cls.__name__} method '{self.method}' is not available\"\n"
  },
  {
    "path": "bilix/ffmpeg.py",
    "content": "\"\"\"\njust some useful ffmpeg commands wrapped in python\n\"\"\"\nimport os\nfrom anyio import run_process\nfrom typing import List\nfrom pathlib import Path\nimport tempfile\n\n\nasync def concat(path_lst: List[Path], output_path: Path, remove=True):\n    \"\"\"concatenate media files losslessly with the ffmpeg concat demuxer\"\"\"\n    with tempfile.NamedTemporaryFile('w', dir=output_path.parent, delete=False) as fp:\n        for path in path_lst:\n            fp.write(f\"file '{path.name}'\\n\")\n        cmd = ['ffmpeg', '-f', 'concat', '-safe', '0', '-i', fp.name, '-c', 'copy', '-loglevel', 'quiet',\n               str(output_path)]\n        # print(' '.join(map(lambda x: f'\"{x}\"', cmd)))\n    await run_process(cmd)\n    os.remove(fp.name)\n    if remove:\n        for path in path_lst:\n            os.remove(path)\n\n\nasync def combine(path_lst: List[Path], output_path: Path, remove=True):\n    \"\"\"mux separate streams (e.g. video and audio) into one container without re-encoding\"\"\"\n    cmd = ['ffmpeg']\n    for path in path_lst:\n        cmd.extend(['-i', str(path)])\n    # for flac, use -strict -2\n    cmd.extend(['-c', 'copy', '-strict', '-2', '-loglevel', 'quiet', str(output_path)])\n    # print(' '.join(map(lambda x: f'\"{x}\"', cmd)))\n    await run_process(cmd)\n    if remove:\n        for path in path_lst:\n            os.remove(path)\n\n\nasync def time_range_clip(input_path: Path, start: float, t: float, output_path: Path, remove=True):\n    \"\"\"cut the range [start, start + t] in seconds from input_path without re-encoding\"\"\"\n    # for flac, use -strict -2\n    cmd = ['ffmpeg', '-ss', f'{start:.1f}', '-t', f'{t:.1f}', '-i', str(input_path), '-codec', 'copy', '-strict', '-2',\n           '-loglevel', 'quiet', '-f', 'mp4', str(output_path)]\n    # print(' '.join(map(lambda x: f'\"{x}\"', cmd)))\n    await run_process(cmd)\n    if remove:\n        os.remove(input_path)\n"
  },
  {
    "path": "bilix/log.py",
    "content": "import logging\nfrom rich.logging import RichHandler\n\n\ndef get_logger():\n    bilix_logger = logging.getLogger(\"bilix\")\n    # if the logger already has handlers configured, return the instance directly\n    if bilix_logger.hasHandlers():\n        return bilix_logger\n    bilix_logger.setLevel(logging.INFO)\n    # create a customized RichHandler\n    custom_rich_handler = RichHandler(\n        show_time=False,\n        show_path=False,\n        markup=True,\n        keywords=RichHandler.KEYWORDS + ['STREAM'],\n        rich_tracebacks=True\n    )\n    # set the log format\n    formatter = logging.Formatter(\"{message}\", style=\"{\", datefmt=\"[%X]\")\n    custom_rich_handler.setFormatter(formatter)\n    # attach the customized RichHandler to the logger\n    bilix_logger.addHandler(custom_rich_handler)\n    return bilix_logger\n\n\nlogger = get_logger()\n"
  },
  {
    "path": "bilix/progress/abc.py",
    "content": "from abc import ABC, abstractmethod\nfrom typing import Optional, Any\n\n\nclass Progress(ABC):\n    \"\"\"Abstract class for bilix download progress; subclass it to design your own progress\"\"\"\n\n    @classmethod\n    @abstractmethod\n    def start(cls):\n        \"\"\"start to show the progress\"\"\"\n\n    @classmethod\n    @abstractmethod\n    def stop(cls):\n        \"\"\"stop showing the progress\"\"\"\n\n    @property\n    @abstractmethod\n    def tasks(self):\n        \"\"\"return the tasks\"\"\"\n\n    @property\n    @abstractmethod\n    def active_speed(self) -> Optional[float]:\n        \"\"\"return current active speed (byte/s)\"\"\"\n\n    @abstractmethod\n    async def add_task(\n            self,\n            description: str,\n            start: bool = True,\n            total: Optional[float] = None,\n            completed: int = 0,\n            visible: bool = True,\n            **fields,\n    ):\n        \"\"\"async add a task to progress\"\"\"\n\n    @abstractmethod\n    async def update(\n            self,\n            task_id,\n            *,\n            total: Optional[float] = None,\n            completed: Optional[float] = None,\n            advance: Optional[float] = None,\n            description: Optional[str] = None,\n            visible: Optional[bool] = None,\n            refresh: bool = False,\n            **fields: Any\n    ):\n        \"\"\"async update a task status\"\"\"\n"
  },
  {
    "path": "bilix/progress/cli_progress.py",
    "content": "from bilix.progress.abc import Progress\nfrom typing import Optional, Any, Set\nfrom rich.theme import Theme\nfrom rich.style import Style\nfrom rich.spinner import Spinner\nfrom rich.progress import Progress as RichProgress, TaskID, \\\n    TextColumn, BarColumn, DownloadColumn, TransferSpeedColumn, TimeRemainingColumn, ProgressColumn\n\n\nclass SpinnerColumn(ProgressColumn):\n    def __init__(self, style=\"progress.spinner\", speed: float = 1.0):\n        self.waiting = Spinner(\"dqpb\", style=style)\n        self.downloading = Spinner(\"dots\", style=style, speed=speed)\n        self.merging = Spinner(\"line\", style=style, speed=speed)\n        super().__init__()\n\n    def render(self, task):\n        t = task.get_time()\n        if task.total is None:\n            return self.waiting.render(t)\n        elif task.finished:\n            return self.merging.render(t)\n        else:\n            return self.downloading.render(t)\n\n\nclass CLIProgress(Progress):\n    # Only one live display may be active at once\n    _progress = RichProgress(\n        SpinnerColumn(speed=2.),\n        TextColumn(\"[progress.description]{task.description}\"),\n        TextColumn(\"[progress.percentage]{task.percentage:>4.1f}%\"),\n        BarColumn(),\n        DownloadColumn(),\n        TransferSpeedColumn(),\n        TextColumn('ETA'),\n        TimeRemainingColumn(),\n        transient=True,\n    )\n\n    def __init__(self):\n        self._active_ids: Set[TaskID] = set()\n\n    @classmethod\n    def start(cls):\n        cls._progress.start()\n\n    @classmethod\n    def stop(cls):\n        cls._progress.stop()\n\n    @property\n    def tasks(self):\n        return self._progress.tasks\n\n    @staticmethod\n    def _cat_description(description, max_length=33):\n        mid = (max_length - 3) // 2\n        return description if len(description) < max_length else f'{description[:mid]}...{description[-mid:]}'\n\n    async def add_task(\n            self,\n            
description: str,\n            start: bool = True,\n            total: Optional[float] = None,\n            completed: int = 0,\n            visible: bool = True,\n            **fields: Any,\n    ) -> TaskID:\n        task_id = self._progress.add_task(description=self._cat_description(description),\n                                          start=start, total=total, completed=completed, visible=visible, **fields)\n        self._active_ids.add(task_id)\n        return task_id\n\n    @property\n    def active_speed(self):\n        return sum(self._progress.tasks[task_id].speed for task_id in self._active_ids\n                   if self._progress.tasks[task_id].speed)\n\n    async def update(\n            self,\n            task_id: TaskID,\n            *,\n            total: Optional[float] = None,\n            completed: Optional[float] = None,\n            advance: Optional[float] = None,\n            description: Optional[str] = None,\n            visible: Optional[bool] = None,\n            refresh: bool = False,\n            **fields: Any,\n    ) -> None:\n        if description:\n            description = self._cat_description(description)\n        self._progress.update(task_id, total=total, completed=completed, advance=advance,\n                              description=description, visible=visible, refresh=refresh, **fields)\n        if self._progress.tasks[task_id].finished and task_id in self._active_ids:\n            self._active_ids.remove(task_id)\n\n    @classmethod\n    def switch_theme(cls, bs=\"rgb(95,138,239)\", gs=\"rgb(65,165,189)\"):\n        cls._progress.console.push_theme(Theme({\n            # \"progress.data.speed\": Style(color=bs),\n            \"progress.download\": Style(color=gs),\n            \"progress.percentage\": Style(color=gs),\n            \"progress.spinner\": Style(color=bs),\n            \"progress.remaining\": Style(color=gs),\n            # \"bar.back\": Style(color=\"grey23\"),\n            \"bar.complete\": 
Style(color=bs),\n            \"bar.finished\": Style(color=gs),\n            \"bar.pulse\": Style(color=bs),\n        }))\n"
  },
  {
    "path": "bilix/progress/ws_progress.py",
    "content": "import asyncio\nimport json\n\nfrom bilix.progress.cli_progress import CLIProgress\n\n\nclass WebSocketProgress(CLIProgress):\n    def __init__(self, sockets):\n        super().__init__()\n        self._sockets = sockets\n\n    async def broadcast(self, msg: str):\n        cors = [s.send_text(msg) for s in self._sockets]\n        await asyncio.gather(*cors)\n\n    async def add_task(self, **kwargs):\n        task_id = await super().add_task(**kwargs)\n        asyncio.create_task(\n            self.broadcast(json.dumps({'method': 'add_task', 'task_id': task_id, **kwargs}))\n        )\n        return task_id\n\n    async def update(self, task_id, **kwargs) -> None:\n        await super().update(task_id, **kwargs)\n        asyncio.create_task(\n            self.broadcast(json.dumps({'method': 'update', \"task_id\": task_id, **kwargs}))\n        )\n"
  },
  {
    "path": "bilix/sites/bilibili/__init__.py",
    "content": "from .downloader import DownloaderBilibili\nfrom .informer import InformerBilibili\n\n__all__ = ['DownloaderBilibili', 'InformerBilibili']\n"
  },
  {
    "path": "bilix/sites/bilibili/api.py",
    "content": "import asyncio\nimport json\nimport re\nfrom urllib.parse import quote\nimport httpx\nfrom pydantic import field_validator, BaseModel, Field\nfrom typing import Union, List, Tuple, Dict, Optional\nimport json5\nfrom danmakuC.bilibili import parse_view\nfrom bilix.download.utils import req_retry, raise_api_error\nfrom bilix.sites.bilibili.utils import parse_ids_from_url\nfrom bilix.utils import legal_title\nfrom bilix.exception import APIInvalidError, APIError, APIResourceError, APIUnsupportedError\nimport hashlib\nimport time\n\ndft_client_settings = {\n    'headers': {'user-agent': 'PostmanRuntime/7.29.0', 'referer': 'https://www.bilibili.com'},\n    'cookies': {'CURRENT_FNVAL': '4048'},\n    'http2': True\n}\n\n\n@raise_api_error\nasync def get_cate_meta(client: httpx.AsyncClient) -> dict:\n    \"\"\"\n    fetch bilibili category metadata\n\n    :param client:\n    :return:\n    \"\"\"\n    cate_info = {}\n    res = await req_retry(client, 'https://s1.hdslb.com/bfs/static/laputa-channel/client/assets/index.c0ea30e6.js')\n    cate_data = re.search('Za=([^;]*);', res.text).groups()[0]\n    cate_data = json5.loads(cate_data)['channelList']\n    for i in cate_data:\n        if 'sub' in i:\n            for j in i['sub']:\n                cate_info[j['name']] = j\n            cate_info[i['name']] = i\n    return cate_info\n\n\n@raise_api_error\nasync def get_list_info(client: httpx.AsyncClient, url_or_sid: str):\n    \"\"\"\n    fetch video series list info\n\n    :param url_or_sid:\n    :param client:\n    :return:\n    \"\"\"\n    if url_or_sid.startswith('http'):\n        sid = re.search(r'sid=(\\d+)', url_or_sid).groups()[0]\n    else:\n        sid = url_or_sid\n    res = await req_retry(client, f'https://api.bilibili.com/x/series/series?series_id={sid}')  # meta api\n    meta = json.loads(res.text)\n    mid = meta['data']['meta']['mid']\n    params = {'mid': mid, 'series_id': sid, 'ps': meta['data']['meta']['total']}\n    list_res, up_info = await asyncio.gather(\n        
req_retry(client, 'https://api.bilibili.com/x/series/archives', params=params),\n        get_up_info(client, str(mid)),\n    )\n    list_info = json.loads(list_res.text)\n    list_name = meta['data']['meta']['name']\n    up_name = up_info.get('name', '')\n    bvids = [i['bvid'] for i in list_info['data']['archives']]\n    return list_name, up_name, bvids\n\n\n@raise_api_error\nasync def get_collect_info(client: httpx.AsyncClient, url_or_sid: str):\n    \"\"\"\n    fetch collection info\n\n    :param url_or_sid:\n    :param client:\n    :return:\n    \"\"\"\n    sid = re.search(r'sid=(\\d+)', url_or_sid).groups()[0] if url_or_sid.startswith('http') else url_or_sid\n    params = {'season_id': sid}\n    res = await req_retry(client, 'https://api.bilibili.com/x/space/fav/season/list', params=params)\n    data = json.loads(res.text)\n    medias = data['data']['medias']\n    info = data['data']['info']\n    col_name, up_name = info['title'], medias[0]['upper']['name']\n    bvids = [i['bvid'] for i in medias]\n    return col_name, up_name, bvids\n\n\n@raise_api_error\nasync def get_favour_page_info(client: httpx.AsyncClient, url_or_fid: str, pn=1, ps=20, keyword=''):\n    \"\"\"\n    fetch favourite folder info (paginated)\n\n    :param url_or_fid:\n    :param pn:\n    :param ps:\n    :param keyword:\n    :param client:\n    :return:\n    \"\"\"\n    if url_or_fid.startswith('http'):\n        fid = re.findall(r'fid=(\\d+)', url_or_fid)[0]\n    else:\n        fid = url_or_fid\n    params = {'media_id': fid, 'pn': pn, 'ps': ps, 'keyword': keyword, 'order': 'mtime'}\n    res = await req_retry(client, 'https://api.bilibili.com/x/v3/fav/resource/list', params=params)\n    data = json.loads(res.text)['data']\n    fav_name, up_name = data['info']['title'], data['info']['upper']['name']\n    bvids = [i['bvid'] for i in data['medias'] if i['title'] != '已失效视频']\n    total_size = data['info']['media_count']\n    return fav_name, up_name, total_size, bvids\n\n\n@raise_api_error\nasync def 
get_cate_page_info(client: httpx.AsyncClient, cate_id, time_from, time_to, pn=1, ps=30,\n                             order='click', keyword=''):\n    \"\"\"\n    fetch category video info (paginated)\n\n    :param cate_id:\n    :param pn:\n    :param ps:\n    :param order:\n    :param keyword:\n    :param time_from:\n    :param time_to:\n    :param client:\n    :return:\n    \"\"\"\n    params = {'search_type': 'video', 'view_type': 'hot_rank', 'cate_id': cate_id, 'pagesize': ps,\n              'keyword': keyword, 'page': pn, 'order': order, 'time_from': time_from, 'time_to': time_to}\n    res = await req_retry(client, 'https://s.search.bilibili.com/cate/search', params=params)\n    info = json.loads(res.text)\n    bvids = [i['bvid'] for i in info['result']]\n    return bvids\n\n\nasync def _add_sign(client: httpx.AsyncClient, params: dict):\n    \"\"\"add the bilibili wbi sign (wts and w_rid) to params\n\n    :param client:\n    :param params:\n    :return:\n    \"\"\"\n    OE = [46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45,\n          35, 27, 43, 5, 49, 33, 9, 42, 19, 29, 28, 14, 39, 12, 38,\n          41, 13, 37, 48, 7, 16, 24, 55, 40, 61, 26, 17, 0, 1, 60,\n          51, 30, 4, 22, 25, 54, 21, 56, 59, 6, 63, 57, 62, 11, 36,\n          20, 34, 44, 52]\n    res = await req_retry(\n        client, \"https://api.bilibili.com/x/web-interface/nav\"\n    )\n    info = json.loads(res.text)\n    img_val = info['data']['wbi_img']['img_url'].split('/')[-1].split('.')[0]\n    sub_val = info['data']['wbi_img']['sub_url'].split('/')[-1].split('.')[0]\n    val = img_val + sub_val\n    request_token = ''.join([val[v] for v in OE])[:32]\n\n    wts = int(time.time())\n    params[\"wts\"] = wts\n    data = dict(sorted(params.items()))\n    data_str = \"&\".join([f\"{k}={v}\" for k, v in data.items()]) + request_token\n    md5 = hashlib.md5(data_str.encode(\"utf-8\")).hexdigest()\n    params[\"w_rid\"] = md5\n    return params\n\n\ndef _find_mid(space_url: str):\n    return re.search(r'^https://space\\.bilibili\\.com/(\\d+)/?', 
space_url).group(1)\n\n\n@raise_api_error\nasync def get_up_video_info(client: httpx.AsyncClient, url_or_mid: str, pn=1, ps=30, order=\"pubdate\", keyword=\"\"):\n    \"\"\"\n    fetch the uploader's video list info (paginated)\n\n    :param url_or_mid:\n    :param pn:\n    :param ps:\n    :param order:\n    :param keyword:\n    :param client:\n    :return:\n    \"\"\"\n    if url_or_mid.startswith(\"http\"):\n        mid = re.findall(r\"/(\\d+)\", url_or_mid)[0]\n    else:\n        mid = url_or_mid\n\n    params = {\"mid\": mid, \"order\": order, \"ps\": ps, \"pn\": pn, \"keyword\": quote(keyword or \"\")}\n    await _add_sign(client, params)\n\n    res = await req_retry(client, \"https://api.bilibili.com/x/space/wbi/arc/search\", params=params)\n    info = json.loads(res.text)\n    up_name = info[\"data\"][\"list\"][\"vlist\"][0][\"author\"]\n    total_size = info[\"data\"][\"page\"][\"count\"]\n    bv_ids = [i[\"bvid\"] for i in info[\"data\"][\"list\"][\"vlist\"]]\n    return up_name, total_size, bv_ids\n\n\nasync def get_up_info(client: httpx.AsyncClient, url_or_mid: str):\n    if url_or_mid.startswith(\"http\"):\n        mid = _find_mid(url_or_mid)\n    else:\n        mid = url_or_mid\n    params = {\"mid\": mid}\n    await _add_sign(client, params)\n    res = await req_retry(client, \"https://api.bilibili.com/x/space/wbi/acc/info\", params=params)\n    data = json.loads(res.text)['data']\n    return data\n\n\nclass Media(BaseModel):\n    base_url: str\n    backup_url: Optional[List[str]] = None\n    size: Optional[int] = None\n    width: Optional[int] = None\n    height: Optional[int] = None\n    suffix: Optional[str] = None\n    quality: Optional[str] = None\n    codec: Optional[str] = None\n    segment_base: Optional[dict] = None\n\n    @property\n    def urls(self):\n        \"\"\"a copy of all urls including backups\"\"\"\n        return [self.base_url, *self.backup_url] if self.backup_url else [self.base_url]\n\n\nclass Dash(BaseModel):\n    duration: int\n    videos: List[Media]\n    audios: 
List[Media]\n    video_formats: Dict[str, Dict[str, Media]]\n    audio_formats: Dict[str, Optional[Media]]\n\n    @classmethod\n    def from_dict(cls, play_info: dict):\n        dash = play_info['dash']  # may raise KeyError\n        video_formats = {}\n        quality_map = {}\n        for d in play_info['support_formats']:\n            quality_map[d['quality']] = d['new_description']\n            video_formats[d['new_description']] = {}\n        videos = []\n        for d in dash['video']:\n            if d['id'] not in quality_map:\n                continue  # https://github.com/HFrost0/bilix/issues/93\n            quality = quality_map[d['id']]\n            m = Media(quality=quality, codec=d['codecs'], **d)\n            video_formats[quality][m.codec] = m\n            videos.append(m)\n\n        audios = []\n        audio_formats = {}\n        if dash.get('audio', None):  # some videos have no audio\n            d = dash['audio'][0]\n            m = Media(quality=\"default\", suffix='.aac', codec=d['codecs'], **d)\n            audios.append(m)\n            audio_formats[m.quality] = m\n        if dash['dolby']['type'] != 0:\n            quality = \"dolby\"\n            audio_formats[quality] = None\n            if dash['dolby'].get('audio', None):\n                d = dash['dolby']['audio'][0]\n                m = Media(quality=quality, suffix='.eac3', codec=d['codecs'], **d)\n                audios.append(m)\n                audio_formats[m.quality] = m\n        if dash.get('flac', None):\n            quality = \"flac\"\n            audio_formats[quality] = None\n            if d := dash['flac']['audio']:\n                m = Media(quality=quality, suffix='.flac', codec=d['codecs'], **d)\n                audios.append(m)\n                audio_formats[m.quality] = m\n        return cls(duration=dash['duration'], videos=videos, audios=audios,\n                   video_formats=video_formats, audio_formats=audio_formats)\n\n    def choose_video(self, quality: 
Union[int, str], video_codec: str) -> Media:\n        # 1. absolute choice with a quality name like 4k, 1080p, '1080p 60帧'\n        if isinstance(quality, str):\n            for k in self.video_formats:\n                if k.upper().startswith(quality.upper()):  # in case of 1080P vs 1080p\n                    for c in self.video_formats[k]:\n                        if c.startswith(video_codec):\n                            return self.video_formats[k][c]\n        # 2. relative choice\n        else:\n            keys = [k for k in self.video_formats.keys() if self.video_formats[k]]\n            quality = min(quality, len(keys) - 1)\n            k = keys[quality]\n            for c in self.video_formats[k]:\n                if c.startswith(video_codec):\n                    return self.video_formats[k][c]\n        raise KeyError(f\"no match for video quality: {quality} codec: {video_codec}\")\n\n    def choose_audio(self, audio_codec: str) -> Optional[Media]:\n        if len(self.audios) == 0:  # some videos have no audio\n            return\n        for k in self.audio_formats:\n            if self.audio_formats[k] and self.audio_formats[k].codec.startswith(audio_codec):\n                return self.audio_formats[k]\n        raise KeyError(f'no match for audio codec: {audio_codec}')\n\n    def choose_quality(self, quality: Union[str, int], codec: str = '') -> Tuple[Media, Optional[Media]]:\n        v_codec, a_codec, *_ = codec.split(':') + [\"\"]\n        video, audio = self.choose_video(quality, v_codec), self.choose_audio(a_codec)\n        return video, audio\n\n\nclass Status(BaseModel):\n    view: int = Field(description=\"view count\")\n    danmaku: int = Field(description=\"danmaku count\")\n    coin: int = Field(description=\"coin count\")\n    like: int = Field(description=\"like count\")\n    reply: int = Field(description=\"reply count\")\n    favorite: int = Field(description=\"favorite count\")\n    share: int = Field(description=\"share count\")\n    follow: Optional[int] = Field(default=None, 
description=\"follow count (series/bangumi)\")\n\n    @field_validator('view', mode=\"before\")\n    @classmethod\n    def no_view(cls, v):\n        return 0 if v == '--' else v\n\n\nclass Page(BaseModel):\n    p_name: str\n    p_url: str\n\n\nclass VideoInfo(BaseModel):\n    title: str\n    aid: int\n    cid: int\n    ep_id: Optional[int] = None\n    p: int\n    pages: List[Page]  # all pages of the video\n    img_url: str\n    status: Status\n    bvid: Optional[str] = None\n    dash: Optional[Dash] = None\n    other: Optional[List[Media]] = None  # durl resource: flv, mp4.\n    desc: Optional[str] = None\n    tags: Optional[List[str]] = None\n\n\ndef _parse_bv_html(url, html: str) -> VideoInfo:\n    init_info = re.search(r'<script>window.__INITIAL_STATE__=({.*?});\\(', html).groups()[0]  # this line may raise\n    init_info = json.loads(init_info)\n    if len(init_info.get('error', {})) > 0:\n        raise APIResourceError(\"视频已失效\", url)  # oops, the video is gone; may happen when downloading by category\n    # extract meta\n    pages = []\n    h1_title = legal_title(re.search('<h1[^>]*title=\"([^\"]*)\"', html).groups()[0])\n    status = Status(**init_info['videoData']['stat'])\n    bvid = init_info['bvid']\n    desc = init_info['videoData'].get('desc', '')\n    tags = [i['tag_name'] for i in init_info['tags']]\n    aid = init_info['aid']\n    (p, cid), = init_info['cidMap'][bvid]['cids'].items()\n    p = int(p) - 1\n    title = legal_title(init_info['videoData']['title'])\n    base_url = url.split('?')[0]\n    for idx, i in enumerate(init_info['videoData']['pages']):\n        p_url = f\"{base_url}?p={idx + 1}\"\n        p_name = f\"P{idx + 1}-{i['part']}\" if len(init_info['videoData']['pages']) > 1 else ''\n        pages.append(Page(p_name=p_name, p_url=p_url))\n    # extract dash and flv_url\n    dash, other = None, []\n    play_info = re.search('<script>window.__playinfo__=({.*?})</script><script>', html).groups()[0]\n    play_info = json.loads(play_info)['data']\n    try:\n        dash = Dash.from_dict(play_info)\n    
except KeyError:\n        pass\n    try:\n        for i in play_info['durl']:\n            suffix = re.search(r'\\.([a-zA-Z0-9]+)\\?', i['url']).group(1)\n            other.append(Media(base_url=i['url'], backup_url=i['backup_url'], suffix=suffix))\n    except KeyError:\n        pass\n    # extract img url\n    img_url = re.search('property=\"og:image\" content=\"([^\"]*)\"', html).groups()[0]\n    if not img_url.startswith('http'):  # https://github.com/HFrost0/bilix/issues/52 just for some video\n        img_url = 'http:' + img_url.split('@')[0]\n    # construct data\n    video_info = VideoInfo(title=title, aid=aid, cid=cid, status=status,\n                           p=p, pages=pages, img_url=img_url, bvid=bvid, dash=dash, other=other,\n                           desc=desc, tags=tags)\n    return video_info\n\n\ndef _parse_ep_html(url, html: str) -> VideoInfo:\n    data = re.search(r'<script id=\"__NEXT_DATA__\" type=\"application/json\">({.*})</script>', html).groups()[0]\n    data = json.loads(data)\n    queries = data['props']['pageProps']['dehydratedState']['queries']\n    season_info = queries[0]['state']['data']['seasonInfo']\n    media_info = season_info['mediaInfo']\n    stat = media_info['stat']\n    status = Status(coin=stat['coins'], view=stat['views'], danmaku=stat['danmakus'], share=stat['share'],\n                    like=stat['likes'], reply=stat['reply'], favorite=stat['favorite'], follow=stat['favorites'])\n    title = legal_title(media_info['title'])\n    desc = media_info['evaluate']\n    episodes = media_info['episodes']\n    path: str = url.split('?')[0].split('/')[-1]\n    ep_id = path[2:] if path.startswith('ep') else str(episodes[0][\"ep_id\"])\n    p = 0\n    aid, cid, bvid = 0, 0, \"\"\n    pages = []\n    img_url = ''\n    for i, ep in enumerate(episodes):\n        if str(ep[\"ep_id\"]) == ep_id:\n            p = i\n            aid, cid, bvid = ep[\"aid\"], ep[\"cid\"], ep[\"bvid\"]\n            img_url = ep[\"cover\"]\n        
pages.append(Page(p_name=legal_title(ep[\"playerEpTitle\"]), p_url=ep[\"link\"]))\n    video_info = VideoInfo(\n        title=title, status=status, desc=desc,\n        aid=aid, cid=cid, bvid=bvid, p=p, pages=pages,\n        img_url=img_url, ep_id=ep_id,\n    )\n    return video_info\n\n\n@raise_api_error\nasync def get_video_info(client: httpx.AsyncClient, url: str) -> VideoInfo:\n    try:\n        # try to get video info from web front-end first\n        return await _get_video_info_from_html(client, url)\n    except APIInvalidError:\n        # try to get video info from api if web front-end is banned\n        return await _get_video_info_from_api(client, url)\n\n\nasync def _get_video_info_from_html(client: httpx.AsyncClient, url: str) -> VideoInfo:\n    res = await req_retry(client, url, follow_redirects=True)\n    if str(res.url).startswith(\"https://www.bilibili.com/festival\"):\n        raise APIInvalidError(\"特殊节日页面\", url)\n    html = res.text\n    if \"window._riskdata_\" in html:\n        raise APIInvalidError(\"web 前端访问被风控\", url)\n    if \"window.__INITIAL_STATE__\" in html:\n        return _parse_bv_html(url, html)\n    elif \"__NEXT_DATA__\" in html:\n        video_info = _parse_ep_html(url, html)\n        await _attach_ep_dash(client, video_info)\n        return video_info\n    else:\n        raise APIUnsupportedError(\"未知页面类型\", url)\n\n\nasync def _get_video_info_from_api(client: httpx.AsyncClient, url: str) -> VideoInfo:\n    assert '/av' in url or '/BV' in url  # TODO: only support BV or av url\n    video_info = await _get_video_basic_info_from_api(client, url)\n    # can not be parallelized since we need to get cid first\n    await _attach_dash_and_durl_from_api(client, video_info)\n    return video_info\n\n\nasync def _attach_ep_dash(client: httpx.AsyncClient, video_info: VideoInfo):\n    params = {\n        'support_multi_audio': True,\n        'avid': video_info.aid,\n        'cid': video_info.cid,\n        'fnver': 0,\n        'fnval': 
4048,\n        'fourk': 1,\n        'ep_id': video_info.ep_id,\n    }\n    res = await req_retry(client, 'https://api.bilibili.com/pgc/player/web/v2/playurl', params=params)\n    res = json.loads(res.text)\n    data = res['result']['video_info']\n    if \"dash\" in data:\n        video_info.dash = Dash.from_dict(data)\n    if \"durl\" in data:\n        other = []\n        for i in data['durl']:\n            suffix = re.search(r'\\.([a-zA-Z0-9]+)\\?', i['url']).group(1)\n            other.append(Media(base_url=i['url'], backup_url=i['backup_url'], size=i['size'], suffix=suffix))\n        video_info.other = other\n\n\nasync def _attach_dash_and_durl_from_api(client: httpx.AsyncClient, video_info: VideoInfo):\n    params = {'cid': video_info.cid, 'bvid': video_info.bvid,\n              'qn': 120,  # 如无 dash 资源（少数老视频），fallback 到 4K 超清 durl\n              'fnval': 4048,  # 如 dash 资源可用，请求 dash 格式的全部可用流\n              'fourk': 1,  # 请求 4k 资源\n              'fnver': 0, 'platform': 'pc', 'otype': 'json'}\n    dash_response = await req_retry(client, 'https://api.bilibili.com/x/player/playurl',\n                                    params=params, follow_redirects=True)\n    dash_json = json.loads(dash_response.text)\n    if dash_json['code'] != 0:\n        raise APIResourceError(dash_json['message'], video_info.bvid)\n    dash, other = None, []\n    if 'dash' in dash_json['data']:\n        dash = Dash.from_dict(dash_json['data'])\n    if 'durl' in dash_json['data']:\n        for i in dash_json['data']['durl']:\n            suffix = re.search(r'\\.([a-zA-Z0-9]+)\\?', i['url']).group(1)\n            other.append(Media(base_url=i['url'], backup_url=i['backup_url'], size=i['size'], suffix=suffix))\n    video_info.dash, video_info.other = dash, other\n\n\nasync def _get_video_basic_info_from_api(client: httpx.AsyncClient, url) -> VideoInfo:\n    \"\"\"通过 view api 获取视频的基本信息，不包括 dash 或 durl(other) 视频流资源\"\"\"\n    aid, bvid, selected_page_num = parse_ids_from_url(url)\n    params = 
{'bvid': bvid} if bvid else {'aid': aid}\n    r = await req_retry(client, 'https://api.bilibili.com/x/web-interface/view',\n                        params=params, follow_redirects=True)\n    raw_json = json.loads(r.text)\n    if raw_json['code'] != 0:\n        raise APIResourceError(raw_json['message'], url)\n    title = legal_title(raw_json['data']['title'])\n    h1_title = title  # TODO: 根据视频类型，使 h1_title 与实际网页标题的格式一致\n    aid = raw_json['data']['aid']\n    bvid = raw_json['data']['bvid']\n    base_url = f\"https://www.bilibili.com/video/{bvid}/\"\n    status = Status(**raw_json['data']['stat'])\n    pages = []\n    p = None\n    cid = None\n    for idx, i in enumerate(raw_json['data']['pages']):\n        page_num = int(i['page'])\n        if page_num == selected_page_num:\n            p = idx  # selected_page_num 的分p 在 pages 列表中的 index 位置\n            cid = int(i['cid'])  # selected_page_num 的分p 的 cid\n        p_url = f\"{base_url}?p={page_num}\"\n        p_name = f\"P{page_num}-{i['part']}\"\n        pages.append(Page(p_name=p_name, p_url=p_url))\n    assert p is not None, f\"没有找到分P: p{selected_page_num}，请检查输入\"  # cid 也会是 None\n    img_url = raw_json['data']['pic']\n    basic_video_info = VideoInfo(title=title, h1_title=h1_title, aid=aid, cid=cid, status=status,\n                                 p=p, pages=pages, img_url=img_url, bvid=bvid, dash=None, other=None)\n    return basic_video_info\n\n\n@raise_api_error\nasync def get_subtitle_info(client: httpx.AsyncClient, bvid, cid):\n    params = {'bvid': bvid, 'cid': cid}\n    res = await req_retry(client, 'https://api.bilibili.com/x/player/v2', params=params)\n    info = json.loads(res.text)\n    if info['code'] == -400:\n        raise APIError('未找到字幕信息', params)\n    return [[f'http:{i[\"subtitle_url\"]}', i['lan_doc']] for i in info['data']['subtitle']['subtitles']]\n\n\n@raise_api_error\nasync def get_dm_urls(client: httpx.AsyncClient, aid, cid) -> List[str]:\n    params = {'oid': cid, 
'pid': aid, 'type': 1}\n    res = await req_retry(client, 'https://api.bilibili.com/x/v2/dm/web/view', params=params)\n    view = parse_view(res.content)\n    total = int(view['dmSge']['total'])\n    return [f'https://api.bilibili.com/x/v2/dm/web/seg.so?oid={cid}&type=1&segment_index={i + 1}' for i in range(total)]\n"
  },
  {
    "path": "bilix/sites/bilibili/api_test.py",
    "content": "import httpx\nimport pytest\nimport asyncio\nfrom datetime import datetime, timedelta\nfrom bilix.sites.bilibili import api\n\nclient = httpx.AsyncClient(**api.dft_client_settings)\n\n\n# https://stackoverflow.com/questions/61022713/pytest-asyncio-has-a-closed-event-loop-but-only-when-running-all-tests\n@pytest.fixture(scope=\"session\")\ndef event_loop():\n    try:\n        loop = asyncio.get_running_loop()\n    except RuntimeError:\n        loop = asyncio.new_event_loop()\n    yield loop\n    loop.close()\n\n\n@pytest.mark.asyncio\nasync def test_get_cate_meta():\n    data = await api.get_cate_meta(client)\n    assert '舞蹈' in data and \"sub\" in data[\"舞蹈\"]\n    assert \"宅舞\" in data and 'tid' in data['宅舞']\n\n\n@pytest.mark.asyncio\nasync def test_get_list_info():\n    list_name, up_name, bvids = await api.get_list_info(\n        client,\n        \"https://space.bilibili.com/369750017/channel/seriesdetail?sid=2458228\")\n    assert list_name == '瘦腰腹跟练'\n    assert len(bvids) > 0 and bvids[0].startswith('BV')\n\n\n@pytest.mark.asyncio\nasync def test_get_collect_info():\n    list_name, up_name, bvids = await api.get_collect_info(\n        client,\n        \"https://space.bilibili.com/54296062/channel/collectiondetail?sid=412818&ctype=0\")\n    assert list_name == 'asyncio协程'\n    assert len(bvids) > 0 and bvids[0].startswith('BV')\n\n\n@pytest.mark.asyncio\nasync def test_get_favour_page_info():\n    fav_name, up_name, total_size, bvids = await api.get_favour_page_info(client, \"69072721\")\n    assert fav_name == '默认收藏夹'\n    assert len(bvids) > 0 and bvids[0].startswith('BV')\n\n\n@pytest.mark.asyncio\nasync def test_get_cate_page_info():\n    time_to = datetime.now()\n    time_from = time_to - timedelta(days=7)\n    time_from, time_to = time_from.strftime('%Y%m%d'), time_to.strftime('%Y%m%d')\n    meta = await api.get_cate_meta(client)\n    bvids = await api.get_cate_page_info(client, cate_id=meta['宅舞']['tid'], time_from=time_from, 
time_to=time_to)\n    assert len(bvids) > 0 and bvids[0].startswith('BV')\n\n\n@pytest.mark.asyncio\nasync def test_get_up_video_info():\n    up_name, total_size, bvids = await api.get_up_video_info(client, \"316568752\", keyword=\"什么\")\n    assert len(bvids) > 0 and bvids[0].startswith('BV')\n\n\n# GitHub actions problem...\n# @pytest.mark.asyncio\n# async def test_get_special_audio():\n#     # Dolby\n#     data = await api.get_video_info(client, 'https://www.bilibili.com/video/BV13L4y1K7th')\n#     assert data.dash['dolby']['type'] != 0\n#     # Hi-Res\n#     data = await api.get_video_info(client, 'https://www.bilibili.com/video/BV16K411S7sk')\n#     assert data.dash['flac']['display']\n\n\n@pytest.mark.asyncio\nasync def test_get_video_info():\n    methods = (api._get_video_info_from_html, api._get_video_info_from_api)\n    for method in methods:\n        # 单个bv视频\n        data = await method(client, \"https://www.bilibili.com/video/BV1sS4y1b7qb?spm_id_from=333.999.0.0\")\n        assert len(data.pages) == 1\n        assert data.p == 0\n        assert data.bvid\n        assert data.img_url.startswith('http://') or data.img_url.startswith('https://')\n        assert data.dash\n        # 多个bv视频\n        data = await method(client, \"https://www.bilibili.com/video/BV1jK4y1N7ST?p=5\")\n        assert len(data.pages) > 1\n        assert data.p == 4\n        assert data.bvid\n        if method is api._get_video_info_from_api:\n            continue\n        # 电视剧\n        data = await method(client, \"https://www.bilibili.com/bangumi/play/ss24053?spm_id_from=333.337.0.0\")\n        assert len(data.pages) > 1\n        assert data.status.follow\n        # 动漫\n        data = await method(client, \"https://www.bilibili.com/bangumi/play/ss5043?spm_id_from=333.337.0.0\")\n        assert len(data.pages) > 1\n        assert data.status.follow\n        # 电影\n        data = await method(client,\n                            
\"https://www.bilibili.com/bangumi/play/ss33343?theme=movie&spm_id_from=333.337.0.0\")\n        assert data.title == '天气之子'\n        assert data.status.follow\n        # 纪录片\n        data = await method(client, \"https://www.bilibili.com/bangumi/play/ss40509?from_spmid=666.9.hotlist.3\")\n        assert len(data.pages) > 1\n        assert data.status.follow\n\n\n@pytest.mark.asyncio\nasync def test_get_subtitle_info():\n    data = await api.get_video_info(client, \"https://www.bilibili.com/video/BV1hS4y1m7Ma\")\n    data = await api.get_subtitle_info(client, data.bvid, data.cid)\n    assert data[0][0].startswith('http')\n    assert data[0][1]\n\n\n@pytest.mark.asyncio\nasync def test_get_dm_info():\n    data = await api.get_video_info(client,\n                                    \"https://www.bilibili.com/bangumi/play/ss33343?theme=movie&spm_id_from=333.337.0.0\")\n    data = await api.get_dm_urls(client, data.aid, data.cid)\n    assert len(data) > 0\n"
  },
  {
    "path": "bilix/sites/bilibili/downloader.py",
    "content": "import asyncio\nimport functools\nimport re\nfrom pathlib import Path\nfrom typing import Union, Sequence, Tuple, List\nimport aiofiles\nimport httpx\nfrom datetime import datetime, timedelta\nfrom . import api\nfrom bilix.download.base_downloader_part import BaseDownloaderPart\nfrom bilix._process import SingletonPPE\nfrom bilix.utils import legal_title, cors_slice, valid_sess_data, t2s, json2srt\nfrom bilix.download.utils import req_retry, path_check\nfrom bilix.exception import HandleMethodError, APIUnsupportedError, APIResourceError, APIError\nfrom bilix.cli.assign import kwargs_filter, auto_assemble\nfrom bilix import ffmpeg\n\nfrom danmakuC.bilibili import proto2ass\n\n\nclass DownloaderBilibili(BaseDownloaderPart):\n    cookie_domain = \"bilibili.com\"  # for load cookies quickly\n    pattern = re.compile(r\"^https?://([A-Za-z0-9-]+\\.)*(bilibili\\.com|b23\\.tv)\")\n\n    def __init__(\n            self,\n            *,\n            client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int, None] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            part_concurrency: int = 10,\n            # unique params\n            sess_data: str = None,\n            video_concurrency: Union[int, asyncio.Semaphore] = 3,\n            hierarchy: bool = True,\n    ):\n        \"\"\"\n\n        :param client:\n        :param browser:\n        :param speed_limit:\n        :param stream_retry:\n        :param progress:\n        :param logger:\n        :param sess_data: bilibili SESSDATA cookie\n        :param part_concurrency: 媒体分段并发数\n        :param video_concurrency: 视频并发数\n        :param hierarchy: 是否使用层级目录\n        \"\"\"\n        client = client or httpx.AsyncClient(**api.dft_client_settings)\n        super(DownloaderBilibili, self).__init__(\n            client=client,\n            browser=browser,\n            speed_limit=speed_limit,\n            
stream_retry=stream_retry,\n            progress=progress,\n            logger=logger,\n            part_concurrency=part_concurrency,\n        )\n        client.cookies.set('SESSDATA', valid_sess_data(sess_data))\n        self._cate_meta = None\n        self.v_sema = asyncio.Semaphore(video_concurrency)\n        self.api_sema = asyncio.Semaphore(video_concurrency)\n        self.hierarchy = hierarchy\n        self.title_overflow = 50\n\n    @classmethod\n    def parse_url(cls, url: str):\n        if re.match(r'https://space\\.bilibili\\.com/\\d+/favlist\\?fid=\\d+', url):\n            return cls.get_favour\n        elif re.match(r'https://space\\.bilibili\\.com/\\d+/channel/seriesdetail\\?sid=\\d+', url):\n            return cls.get_collect_or_list\n        elif re.match(r'https://space\\.bilibili\\.com/\\d+/channel/collectiondetail\\?sid=\\d+', url):\n            return cls.get_collect_or_list\n        elif re.match(r'https://space\\.bilibili\\.com/\\d+', url):  # up space url\n            return cls.get_up\n        elif re.search(r'(www\\.bilibili\\.com)|(b23\\.tv)', url):\n            return cls.get_video\n        raise ValueError(f'{url} no match for bilibili')\n\n    async def get_collect_or_list(self, url, path=Path('.'),\n                                  quality=0, image=False, subtitle=False, dm=False, only_audio=False, codec: str = ''):\n        \"\"\"\n        下载合集或视频列表\n        :cli: short: col\n        :param url: 合集或视频列表详情页url\n        :param path: 保存路径\n        :param quality:\n        :param image:\n        :param subtitle:\n        :param dm:\n        :param only_audio:\n        :param codec:\n        :return:\n        \"\"\"\n        if 'series' in url:\n            list_name, up_name, bvids = await api.get_list_info(self.client, url)\n            name = legal_title(f\"【视频列表】{up_name}\", list_name)\n        elif 'collection' in url:\n            col_name, up_name, bvids = await api.get_collect_info(self.client, url)\n            name = 
legal_title(f\"【合集】{up_name}\", col_name)\n        else:\n            raise ValueError(f'{url} invalid for get_collect_or_list')\n        if self.hierarchy:\n            path /= name\n            path.mkdir(parents=True, exist_ok=True)\n        await asyncio.gather(\n            *[self.get_series(f\"https://www.bilibili.com/video/{i}\", path=path, quality=quality, codec=codec,\n                              image=image, subtitle=subtitle, dm=dm, only_audio=only_audio)\n              for i in bvids])\n\n    async def get_favour(self, url_or_fid, path=Path('.'),\n                         num=20, keyword='', quality=0, series=True, image=False, subtitle=False,\n                         dm=False, only_audio=False, codec: str = ''):\n        \"\"\"\n        下载收藏夹内的视频\n        :cli: short: fav\n        :param url_or_fid: 收藏夹url或收藏夹id\n        :param path: 保存路径\n        :param num: 下载数量\n        :param keyword: 搜索关键词\n        :param quality: 画面质量，0为可以观看的最高画质，越大质量越低，超过范围时自动选择最低画质，或者直接使用字符串指定'1080p'等名称\n        :param series: 每个视频是否下载所有p，False时仅下载系列中的第一个视频\n        :param image: 是否下载封面\n        :param subtitle: 是否下载字幕\n        :param dm: 是否下载弹幕\n        :param only_audio: 是否仅下载音频\n        :param codec:\n        :return:\n        \"\"\"\n        fav_name, up_name, total_size, bvids = await api.get_favour_page_info(self.client, url_or_fid, keyword=keyword)\n        if self.hierarchy:\n            name = legal_title(f\"【收藏夹】{up_name}-{fav_name}\")\n            path /= name\n            path.mkdir(parents=True, exist_ok=True)\n        total = min(total_size, num)\n        ps = 20\n        page_nums = total // ps + min(1, total % ps)\n        cors = []\n        for i in range(page_nums):\n            if i + 1 == page_nums:\n                num = total - (page_nums - 1) * ps\n            else:\n                num = ps\n            cors.append(self._get_favor_by_page(\n                url_or_fid, path, i + 1, num, keyword, quality, series, image, subtitle, dm, only_audio, 
codec=codec))\n        await asyncio.gather(*cors)\n\n    async def _get_favor_by_page(self, url_or_fid, path: Path, pn=1, num=20, keyword='', quality=0,\n                                 series=True, image=False, subtitle=False, dm=False, only_audio=False, codec=''):\n        ps = 20\n        num = min(ps, num)\n        _, _, _, bvids = await api.get_favour_page_info(self.client, url_or_fid, pn, ps, keyword)\n        cors = []\n        for i in bvids[:num]:\n            func = self.get_series if series else self.get_video\n            # noinspection PyArgumentList\n            cors.append(func(f'https://www.bilibili.com/video/{i}', path=path, quality=quality, codec=codec,\n                             image=image, subtitle=subtitle, dm=dm, only_audio=only_audio))\n        await asyncio.gather(*cors)\n\n    @property\n    async def cate_meta(self):\n        if not self._cate_meta:\n            self._cate_meta = asyncio.ensure_future(api.get_cate_meta(self.client))\n            self._cate_meta = await self._cate_meta\n        elif asyncio.isfuture(self._cate_meta):\n            await self._cate_meta\n        return self._cate_meta\n\n    async def get_cate(self, cate_name: str, path=Path('.'), num=10, order='click', keyword='', days=7,\n                       quality=0, series=True, image=False, subtitle=False, dm=False, only_audio=False, codec='', ):\n        \"\"\"\n        下载分区视频\n        :cli: short: cate\n        :param cate_name: 分区名称\n        :param path: 保存路径\n        :param num: 下载数量\n        :param order: 何种排序，click播放数，scores评论数，stow收藏数，coin硬币数，dm弹幕数\n        :param keyword: 搜索关键词\n        :param days: 过去days天中的结果\n        :param quality: 画面质量，0为可以观看的最高画质，越大质量越低，超过范围时自动选择最低画质，或者直接使用字符串指定'1080p'等名称\n        :param series: 每个视频是否下载所有p，False时仅下载系列中的第一个视频\n        :param image: 是否下载封面\n        :param subtitle: 是否下载字幕\n        :param dm: 是否下载弹幕\n        :param only_audio: 是否仅下载音频\n        :param codec:\n        :return:\n        \"\"\"\n        cate_meta = 
await self.cate_meta\n        if cate_name not in cate_meta:\n            return self.logger.error(f'未找到分区 {cate_name}')\n        if 'subChannelId' not in cate_meta[cate_name]:\n            sub_names = [i['name'] for i in cate_meta[cate_name]['sub']]\n            return self.logger.error(f'{cate_name} 是主分区，仅支持子分区，试试 {sub_names}')\n        if self.hierarchy:\n            path /= legal_title(f\"【分区】{cate_name}\")\n            path.mkdir(parents=True, exist_ok=True)\n        cate_id = cate_meta[cate_name]['tid']\n        time_to = datetime.now()\n        time_from = time_to - timedelta(days=days)\n        time_from, time_to = time_from.strftime('%Y%m%d'), time_to.strftime('%Y%m%d')\n        pagesize = 30\n        page = 1\n        cors = []\n        while num > 0:\n            cors.append(self._get_cate_by_page(\n                cate_id, path, time_from, time_to, page, min(pagesize, num), order, keyword, quality,\n                series, image=image, subtitle=subtitle, dm=dm, only_audio=only_audio, codec=codec))\n            num -= pagesize\n            page += 1\n        await asyncio.gather(*cors)\n\n    async def _get_cate_by_page(\n            self, cate_id, path: Path, time_from, time_to, pn=1, num=30, order='click', keyword='',\n            quality=0, series=True, image=False, subtitle=False, dm=False, only_audio=False, codec=''):\n        bvids = await api.get_cate_page_info(self.client, cate_id, time_from, time_to, pn, 30, order, keyword)\n        bvids = bvids[:num]\n        func = self.get_series if series else self.get_video\n        # noinspection PyArgumentList\n        cors = [func(f\"https://www.bilibili.com/video/{i}\", path=path, quality=quality, codec=codec,\n                     image=image, subtitle=subtitle, dm=dm, only_audio=only_audio)\n                for i in bvids]\n        await asyncio.gather(*cors)\n\n    async def get_up(\n            self, url_or_mid: str, path=Path('.'), num=10, order='pubdate', keyword='', quality=0,\n            
series=True, image=False, subtitle=False, dm=False, only_audio=False, codec='', ):\n        \"\"\"\n        下载up主视频\n        :cli: short: up\n        :param url_or_mid: b站用户空间页面url 或b站用户id，在空间页面的url中可以找到\n        :param path: 保存路径\n        :param num: 下载总数\n        :param order: 何种排序，b站支持：最新发布pubdate，最多播放click，最多收藏stow\n        :param keyword: 过滤关键词\n        :param quality: 画面质量，0为可以观看的最高画质，越大质量越低，超过范围时自动选择最低画质，或者直接使用字符串指定'1080p'等名称\n        :param series: 每个视频是否下载所有p，False时仅下载系列中的第一个视频\n        :param image: 是否下载封面\n        :param subtitle: 是否下载字幕\n        :param dm: 是否下载弹幕\n        :param only_audio: 是否仅下载音频\n        :param codec:\n        :return:\n        \"\"\"\n        ps = 30\n        up_name, total_size, bv_ids = await api.get_up_video_info(self.client, url_or_mid, 1, ps, order, keyword)\n        if self.hierarchy:\n            path /= legal_title(f\"【up】{up_name}\")\n            path.mkdir(parents=True, exist_ok=True)\n        num = min(total_size, num)\n        page_nums = num // ps + min(1, num % ps)\n        cors = []\n        for i in range(page_nums):\n            if i + 1 == page_nums:\n                p_num = num - (page_nums - 1) * ps\n            else:\n                p_num = ps\n            cors.append(self._get_up_by_page(\n                url_or_mid, path, i + 1, p_num, order, keyword, quality, series, image=image,\n                subtitle=subtitle, dm=dm, only_audio=only_audio, codec=codec))\n        await asyncio.gather(*cors)\n\n    async def _get_up_by_page(self, url_or_mid, path: Path, pn=1, num=30, order='pubdate', keyword='', quality=0,\n                              series=True, image=False, subtitle=False, dm=False, only_audio=False, codec='', ):\n        ps = 30\n        num = min(ps, num)\n        _, _, bvids = await api.get_up_video_info(self.client, url_or_mid, pn, ps, order, keyword)\n        bvids = bvids[:num]\n        func = self.get_series if series else self.get_video\n        # noinspection PyArgumentList\n        await 
asyncio.gather(\n            *[func(f'https://www.bilibili.com/video/{bv}', path=path, quality=quality, codec=codec,\n                   image=image, subtitle=subtitle, dm=dm, only_audio=only_audio) for bv in bvids])\n\n    async def get_series(self, url: str, path=Path('.'),\n                         quality: Union[str, int] = 0, image=False, subtitle=False,\n                         dm=False, only_audio=False, p_range: Sequence[int] = None, codec: str = ''):\n        \"\"\"\n        下载某个系列（包括up发布的多p投稿，动画，电视剧，电影等）的所有视频。只有一个视频的情况下仍然可用该方法\n        :cli: short: s\n        :param url: 系列中任意一个视频的url\n        :param path: 保存路径\n        :param quality: 画面质量，0为可以观看的最高画质，越大质量越低，超过范围时自动选择最低画质，或者直接使用字符串指定'1080p'等名称\n        :param image: 是否下载封面\n        :param subtitle: 是否下载字幕\n        :param dm: 是否下载弹幕\n        :param only_audio: 是否仅下载音频\n        :param p_range: 下载集数范围，例如(1, 3)：P1至P3\n        :param codec: 视频编码（可通过info获取）\n        :return:\n        \"\"\"\n        try:\n            async with self.api_sema:\n                video_info = await api.get_video_info(self.client, url)\n        except (APIResourceError, APIUnsupportedError) as e:\n            return self.logger.warning(e)\n        if self.hierarchy and len(video_info.pages) > 1:\n            path /= video_info.title\n            path.mkdir(parents=True, exist_ok=True)\n        cors = [self.get_video(p.p_url, path=path,\n                               quality=quality, image=image, subtitle=subtitle, dm=dm,\n                               only_audio=only_audio, codec=codec,\n                               video_info=video_info if idx == video_info.p else None)\n                for idx, p in enumerate(video_info.pages)]\n        if p_range:\n            cors = cors_slice(cors, p_range)\n        await asyncio.gather(*cors)\n\n    async def get_video(self, url: str, path=Path('.'),\n                        quality: Union[str, int] = 0, image=False, subtitle=False, dm=False, only_audio=False,\n                        
codec: str = '', time_range: Tuple[int, int] = None, video_info: api.VideoInfo = None):\n        \"\"\"\n        下载单个视频\n        :cli: short: v\n        :param url: 视频的url\n        :param path: 保存路径\n        :param quality: 画面质量，0为可以观看的最高画质，越大质量越低，超过范围时自动选择最低画质，或者直接使用字符串指定'1080p'等名称\n        :param image: 是否下载封面\n        :param subtitle: 是否下载字幕\n        :param dm: 是否下载弹幕\n        :param only_audio: 是否仅下载音频\n        :param codec: 视频编码（可通过codec获取）\n        :param time_range: 切片的时间范围\n        :param video_info: 额外数据，提供时不用再次请求页面\n        :return:\n        \"\"\"\n        async with self.v_sema:\n            if not video_info:\n                try:\n                    video_info = await api.get_video_info(self.client, url)\n                except (APIResourceError, APIUnsupportedError) as e:\n                    return self.logger.warning(e)\n            p_name = legal_title(video_info.pages[video_info.p].p_name)\n            task_name = legal_title(video_info.title, p_name)\n            # if title is too long, use p_name as base_name\n            base_name = p_name if len(video_info.title) > self.title_overflow and self.hierarchy and p_name else \\\n                task_name\n            media_name = base_name if not time_range else legal_title(base_name, *map(t2s, time_range))\n            media_cors = []\n            task_id = await self.progress.add_task(total=None, description=task_name)\n            if video_info.dash:\n                try:  # choose video quality\n                    video, audio = video_info.dash.choose_quality(quality, codec)\n                except KeyError:\n                    self.logger.warning(\n                        f\"{task_name} 清晰度<{quality}> 编码<{codec}>不可用，请检查输入是否正确或是否需要大会员\")\n                else:\n                    tmp: List[Tuple[api.Media, Path]] = []\n                    # 1. 
only video\n                    if not audio and not only_audio:\n                        tmp.append((video, path / f'{media_name}.mp4'))\n                    # 2. video and audio\n                    elif audio and not only_audio:\n                        exists, media_path = path_check(path / f'{media_name}.mp4')\n                        if exists:\n                            self.logger.info(f'[green]已存在[/green] {media_path.name}')\n                        else:\n                            tmp.append((video, path / f'{media_name}-v'))\n                            tmp.append((audio, path / f'{media_name}-a'))\n                            # task need to be merged\n                            await self.progress.update(task_id=task_id, upper=ffmpeg.combine)\n                    # 3. only audio\n                    elif audio and only_audio:\n                        tmp.append((audio, path / f'{media_name}{audio.suffix}'))\n                    else:\n                        self.logger.warning(f\"No audio for {task_name}\")\n                    # convert to coroutines\n                    if not time_range:\n                        media_cors.extend(self.get_file(t[0].urls, path=t[1], task_id=task_id) for t in tmp)\n                    else:\n                        if len(tmp) > 0:\n                            fut = asyncio.Future()  # to fix key frame\n                            v = tmp[0]\n                            media_cors.append(self.get_media_clip(v[0].urls, v[1], time_range,\n                                                                  init_range=v[0].segment_base['initialization'],\n                                                                  seg_range=v[0].segment_base['index_range'],\n                                                                  set_s=fut,\n                                                                  task_id=task_id))\n                        if len(tmp) > 1:  # with audio\n                            a = tmp[1]\n 
                           media_cors.append(self.get_media_clip(a[0].urls, a[1], time_range,\n                                                                  init_range=a[0].segment_base['initialization'],\n                                                                  seg_range=a[0].segment_base['index_range'],\n                                                                  get_s=fut,\n                                                                  task_id=task_id))\n\n            elif video_info.other:\n                self.logger.warning(\n                    f\"{task_name} 未解析到dash资源，转入durl mp4/flv下载（不需要会员的电影/番剧预览，不支持dash的视频）\")\n                media_name = base_name\n                if len(video_info.other) == 1:\n                    m = video_info.other[0]\n                    media_cors.append(\n                        self.get_file(m.urls, path=path / f'{media_name}.{m.suffix}', task_id=task_id))\n                else:\n                    exist, media_path = path_check(path / f'{media_name}.mp4')\n                    if exist:\n                        self.logger.info(f'[green]已存在[/green] {media_path.name}')\n                    else:\n                        p_sema = asyncio.Semaphore(self.part_concurrency)\n\n                        async def _get_file(media: api.Media, p: Path) -> Path:\n                            async with p_sema:\n                                return await self.get_file(media.urls, path=p, task_id=task_id)\n\n                        for i, m in enumerate(video_info.other):\n                            f = f'{media_name}-{i}.{m.suffix}'\n                            media_cors.append(_get_file(m, path / f))\n                        await self.progress.update(task_id=task_id, upper=ffmpeg.concat)\n            else:\n                self.logger.warning(f'{task_name} 需要大会员或该地区不支持')\n            # additional task\n            add_cors = []\n            if image or subtitle or dm:\n                extra_path = path / 
\"extra\" if self.hierarchy else path\n                extra_path.mkdir(exist_ok=True)\n                if image:\n                    add_cors.append(self.get_static(video_info.img_url, path=extra_path / base_name))\n                if subtitle:\n                    add_cors.append(self.get_subtitle(url, path=extra_path, video_info=video_info))\n                if dm:\n                    try:\n                        width, height = video.width, video.height\n                    except UnboundLocalError:\n                        width, height = 1920, 1080\n                    add_cors.append(self.get_dm(\n                        url, path=extra_path, convert_func=self._dm2ass_factory(width, height), video_info=video_info))\n            path_lst, _ = await asyncio.gather(asyncio.gather(*media_cors), asyncio.gather(*add_cors))\n\n        if upper := self.progress.tasks[task_id].fields.get('upper', None):\n            await upper(path_lst, media_path)\n            self.logger.info(f'[cyan]已完成[/cyan] {media_path.name}')\n        await self.progress.update(task_id, visible=False)\n\n    @staticmethod\n    def _dm2ass_factory(width: int, height: int):\n        async def dm2ass(protobuf_bytes: bytes) -> bytes:\n            loop = asyncio.get_event_loop()\n            f = functools.partial(proto2ass, protobuf_bytes, width, height, font_size=width / 40, )\n            content = await loop.run_in_executor(SingletonPPE(), f)\n            return content.encode('utf-8')\n\n        return dm2ass\n\n    async def get_dm(self, url, path=Path('.'), update=False, convert_func=None, video_info=None):\n        \"\"\"\n        下载视频的弹幕\n        :cli: short: dm\n        :param url: 视频url\n        :param path: 保存路径\n        :param update: 是否更新覆盖之前下载的弹幕文件\n        :param convert_func:\n        :param video_info: 额外数据，提供则不再访问前端\n        :return:\n        \"\"\"\n        if not video_info:\n            video_info = await api.get_video_info(self.client, url)\n        aid, cid = 
video_info.aid, video_info.cid\n        file_type = '.' + ('pb' if not convert_func else convert_func.__name__.split('2')[-1])\n        p_name = video_info.pages[video_info.p].p_name\n        # to avoid file name too long bug\n        if len(video_info.title) > self.title_overflow and self.hierarchy and p_name:\n            file_name = legal_title(p_name, \"弹幕\") + file_type\n        else:\n            file_name = legal_title(video_info.title, p_name, \"弹幕\") + file_type\n        file_path = path / file_name\n        exist, file_path = path_check(file_path)\n        if not update and exist:\n            self.logger.info(f\"[green]已存在[/green] {file_name}\")\n            return file_path\n        dm_urls = await api.get_dm_urls(self.client, aid, cid)\n        cors = [req_retry(self.client, dm_url) for dm_url in dm_urls]\n        results = await asyncio.gather(*cors)\n        content = b''.join(res.content for res in results)\n        content = convert_func(content) if convert_func else content\n        if asyncio.iscoroutine(content):\n            content = await content\n        async with aiofiles.open(file_path, 'wb') as f:\n            await f.write(content)\n        self.logger.info(f\"[cyan]已完成[/cyan] {file_name}\")\n        return file_path\n\n    async def get_subtitle(self, url, path=Path('.'), convert_func=json2srt, video_info=None):\n        \"\"\"\n        下载视频的字幕文件\n        :cli: short: sub\n        :param url: 视频url\n        :param path: 字幕文件保存路径\n        :param convert_func: function used to convert original subtitle text\n        :param video_info: 额外数据，提供则不再访问前端\n        :return:\n        \"\"\"\n        if not video_info:\n            video_info = await api.get_video_info(self.client, url)\n        p, cid = video_info.p, video_info.cid\n        p_name = video_info.pages[p].p_name\n        try:\n            subtitles = await api.get_subtitle_info(self.client, video_info.bvid, cid)\n        except APIError as e:\n            return 
self.logger.warning(e)\n        cors = []\n\n        for sub_url, sub_name in subtitles:\n            if len(video_info.title) > self.title_overflow and self.hierarchy and p_name:\n                file_name = legal_title(p_name, sub_name)\n            else:\n                file_name = legal_title(video_info.title, p_name, sub_name)\n            cors.append(self.get_static(sub_url, path / file_name, convert_func=convert_func))\n        paths = await asyncio.gather(*cors)\n        return paths\n\n    @classmethod\n    @auto_assemble\n    def handle(cls, method: str, keys: Tuple[str, ...], options: dict):\n        if cls.pattern.match(keys[0]) or method == 'cate' or method == 'get_cate':\n            if method in {'auto', 'a'}:\n                m = cls.parse_url(keys[0])\n            elif method in cls._cli_map:\n                m = cls._cli_map[method]\n            else:\n                raise HandleMethodError(cls, method=method)\n            d = cls(sess_data=options['cookie'], **kwargs_filter(cls, options))\n            return d, m\n"
  },
  {
    "path": "bilix/sites/bilibili/downloader_test.py",
    "content": "import httpx\nimport pytest\nfrom bilix.sites.bilibili import DownloaderBilibili\n\n\n@pytest.mark.asyncio\nasync def test_get_collect_or_list():\n    d = DownloaderBilibili()\n    await d.get_collect_or_list('https://space.bilibili.com/54296062/channel/collectiondetail?sid=412818&ctype=0',\n                                quality=999)\n    await d.get_collect_or_list('https://space.bilibili.com/8251621/channel/seriesdetail?sid=2323334&ctype=0',\n                                quality=999)\n    await d.aclose()\n\n\n@pytest.mark.asyncio\nasync def test_get_favour():\n    d = DownloaderBilibili()\n    await d.get_favour(\"69072721\", num=1, quality=999)\n    await d.aclose()\n\n\n@pytest.mark.asyncio\nasync def test_get_cate():\n    d = DownloaderBilibili()\n    await d.get_cate(\"宅舞\", num=1, order=\"click\", keyword=\"jk\", quality=1)\n    await d.aclose()\n\n\n@pytest.mark.asyncio\nasync def test_get_up():\n    d = DownloaderBilibili()\n    await d.get_up(\"455511061\", num=1, order=\"pubdate\", quality=1)\n    await d.aclose()\n\n\n@pytest.mark.asyncio\nasync def test_get_series():\n    d = DownloaderBilibili()\n    await d.get_series(\"https://www.bilibili.com/video/BV1jK4y1N7ST?p=3\", p_range=(5, 5), quality=999)\n    # only audio\n    await d.get_series(\"https://www.bilibili.com/video/BV1jK4y1N7ST?p=3\", p_range=(5, 5), only_audio=True)\n    # image\n    await d.get_series(\"https://www.bilibili.com/video/BV1jK4y1N7ST?p=3\", p_range=(1, 1), image=True, quality=999)\n    # 单个视频\n    await d.get_series(\"https://www.bilibili.com/video/BV1sS4y1b7qb?spm_id_from=333.999.0.0\", quality=999)\n    await d.aclose()\n\n\n@pytest.mark.asyncio\nasync def test_get_dm():\n    d = DownloaderBilibili()\n    await d.get_dm('https://www.bilibili.com/video/BV11Z4y1z7s8?spm_id_from=333.337.search-card.all.click')\n    await d.aclose()\n\n\n@pytest.mark.asyncio\nasync def test_get_subtitle():\n    d = DownloaderBilibili()\n    await 
d.get_subtitle(\"https://www.bilibili.com/video/BV1hS4y1m7Ma\")\n    await d.aclose()\n\n\n@pytest.mark.asyncio\nasync def test_choose_quality():\n    import os\n    from bilix.sites.bilibili import api\n\n    client = httpx.AsyncClient()\n    client.cookies.set('SESSDATA', os.getenv('BILI_TOKEN'))\n    # dolby\n    data = await api.get_video_info(client, \"https://www.bilibili.com/video/BV13L4y1K7th\")\n    try:\n        video, audio = data.dash.choose_quality(quality=999, codec=\":ec-3\")\n    except KeyError:\n        assert not os.getenv(\"BILI_TOKEN\")\n    # normal\n    data.dash.choose_quality(quality=\"360P\", codec=\"hev\")\n    # hi-res\n    data = await api.get_video_info(client, \"https://www.bilibili.com/video/BV16K411S7sk\")\n    try:\n        video, audio = data.dash.choose_quality(quality='1080P', codec=\"hev:fLaC\")\n    except KeyError:\n        assert not os.getenv(\"BILI_TOKEN\")\n"
  },
  {
    "path": "bilix/sites/bilibili/informer.py",
    "content": "import asyncio\nfrom typing import Tuple\nfrom rich.tree import Tree\nfrom .downloader import DownloaderBilibili\nfrom . import api\nfrom bilix.log import logger\nfrom rich import print as rprint\nfrom bilix.utils import convert_size\nfrom bilix.download.utils import req_retry\nfrom bilix.cli.assign import kwargs_filter\n\n\nclass InformerBilibili(DownloaderBilibili):\n    \"\"\"A special downloader with functionality to log info of bilibili resources\"\"\"\n\n    @classmethod\n    def parse_url(cls, url: str):\n        res = super().parse_url(url)\n        func_name = res.__name__.replace(\"get_\", \"info_\")\n        return getattr(cls, func_name)\n\n    async def info_key(self, key):\n        await self.parse_url(key)(self, key)\n\n    async def info_up(self, url: str):\n        up_name, total_size, bvids = await api.get_up_video_info(self.client, url)\n        rprint(up_name)\n\n    async def info_favour(self, url: str):\n        pass\n\n    async def info_collect_or_list(self, url: str):\n        pass\n\n    async def info_video(self, url: str):\n        video_info = await api.get_video_info(self.client, url)\n        if video_info.dash is None and video_info.other is None:\n            return logger.warning(f'{video_info.title} 需要大会员或该地区不支持')\n        elif video_info.other and video_info.dash is None:\n            return rprint(video_info.other)  # todo: beautify durl info\n\n        async def ensure_size(m: api.Media):\n            if m.size is None:\n                res = await req_retry(self.client, m.base_url, method='GET', headers={'Range': 'bytes=0-1'})\n                m.size = int(res.headers['Content-Range'].split('/')[-1])\n\n        dash = video_info.dash\n        cors = [ensure_size(m) for m in dash.videos] + [ensure_size(m) for m in dash.audios]\n        await asyncio.gather(*cors)\n\n        tree = Tree(\n            f\"[bold reverse] {video_info.title}-{video_info.pages[video_info.p].p_name} [/]\"\n            f\" 
{video_info.status.view:,}👀 {video_info.status.like:,}👍 {video_info.status.coin:,}🪙\",\n            guide_style=\"bold cyan\")\n        video_tree = tree.add(\"[bold]画面 Video\")\n        audio_tree = tree.add(\"[bold]声音 Audio\")\n        leaf_fmt = \"codec: {codec:32} size: {size}\"\n        # for video\n        for quality in dash.video_formats:\n            p_tree = video_tree.add(quality)\n            for c in dash.video_formats[quality]:\n                m = dash.video_formats[quality][c]\n                p_tree.add(leaf_fmt.format(codec=m.codec, size=convert_size(m.size)))\n            if len(p_tree.children) == 0:\n                p_tree.style = \"rgb(242,93,142)\"\n                p_tree.add(\"需要登录或大会员\")\n        # for audio\n        name_map = {\"default\": \"默认音质\", \"dolby\": \"杜比全景声 Dolby\", \"flac\": \"Hi-Res无损\"}\n        for k in dash.audio_formats:\n            sub_tree = audio_tree.add(name_map[k])\n            if m := dash.audio_formats[k]:\n                sub_tree.add(leaf_fmt.format(codec=m.codec, size=convert_size(m.size)))\n            else:\n                sub_tree.style = \"rgb(242,93,142)\"\n                sub_tree.add(\"需要登录或大会员\")\n        rprint(tree)\n\n    @classmethod\n    def handle(cls, method: str, keys: Tuple[str, ...], options: dict):\n        if cls.pattern.match(keys[0]) and 'info' == method:\n            informer = InformerBilibili(sess_data=options['cookie'], **kwargs_filter(cls, options))\n\n            # in order to maintain order\n            async def temp():\n                for key in keys:\n                    if len(keys) > 1:\n                        logger.info(f\"For {key}\")\n                    await informer.info_key(key)\n\n            return informer, temp()\n"
  },
  {
    "path": "bilix/sites/bilibili/informer_test.py",
    "content": "import pytest\nfrom bilix.sites.bilibili import InformerBilibili\n\ninformer = InformerBilibili()\n\n\n@pytest.mark.asyncio\nasync def test_bilibili_informer():\n    await informer.info_video('https://www.bilibili.com/video/BV1sG411A7r3')\n    await informer.info_video('https://www.bilibili.com/video/BV1oG4y1Z7fx')\n    await informer.info_video('https://www.bilibili.com/video/BV1eV411W7tt')\n    await informer.info_video(\"https://www.bilibili.com/bangumi/play/ep508404/\")\n"
  },
  {
    "path": "bilix/sites/bilibili/utils.py",
    "content": "import re\n\n\ndef parse_ids_from_url(url_or_string: str):\n    bvid, aid, page_num = None, None, 1\n    if re.match(r'https?://www.bilibili.com/video/BV\\w+', url_or_string) or re.match(r'BV\\w+', url_or_string):\n        bvid = re.search(r'(BV\\w+)', url_or_string).groups()[0]\n        assert bvid.isalnum()\n    elif re.match(r'https?://www.bilibili.com/video/av\\d+', url_or_string) or re.match(r'av\\d+', url_or_string):\n        aid = re.search(r'av(\\d+)', url_or_string).groups()[0]\n        assert aid.isdigit()\n        aid = int(aid)\n    else:\n        raise ValueError(f\"{url_or_string} is not a valid bilibili video url\")\n    # ?p=123 or &p=123\n    if m := re.match(r'.*[?&]p=(\\d+)', url_or_string):\n        page_num = int(m.groups()[0])\n        assert page_num >= 1\n    return aid, bvid, page_num\n"
  },
  {
    "path": "bilix/sites/bilibili/utils_test.py",
    "content": "from bilix.sites.bilibili.utils import parse_ids_from_url\n\n\ndef test_parse_ids_from_url():\n    strings = [\n        \"https://www.bilibili.com/video/av170001\",\n        \"http://www.bilibili.com/video/BV1Xx41117Tz/?ba=labala&p=3#time=1234\",\n        \"av170001\",\n        \"BV1sE411w7tQ?p=2&from=search\",\n        \"https://www.bilibili.com/video/BV1xx411c7HW?p=1\"\n    ]\n    results = [\n        (170001, None, 1),\n        (None, 'BV1Xx41117Tz', 3),\n        (170001, None, 1),\n        (None, 'BV1sE411w7tQ', 2),\n        (None, 'BV1xx411c7HW', 1)\n    ]\n    for index, string in enumerate(strings):\n        assert parse_ids_from_url(string) == results[index]\n"
  },
  {
    "path": "bilix/sites/cctv/__init__.py",
    "content": "from .downloader import DownloaderCctv\n\n__all__ = ['DownloaderCctv']\n"
  },
  {
    "path": "bilix/sites/cctv/api.py",
    "content": "import asyncio\nimport re\nimport json\nfrom typing import Sequence, Tuple\n\nimport httpx\nimport m3u8\n\nfrom bilix.download.utils import req_retry, raise_api_error\nfrom bilix.utils import legal_title\n\ndft_client_settings = {\n    'headers': {'user-agent': 'PostmanRuntime/7.29.0'},\n    'http2': True\n}\n\n\n@raise_api_error\nasync def get_id(client: httpx.AsyncClient, url: str) -> Tuple[str, str, str]:\n    res_web = await req_retry(client, url)\n    pid = re.findall(r'guid ?= ?\"(\\w+)\"', res_web.text)[0]\n    vide = re.findall(r'/(VIDE\\w+)\\.', url)[0]\n    try:\n        vida = re.findall(r'videotvCodes ?= ?\"(\\w+)\"', res_web.text)[0]\n    except IndexError:\n        vida = None\n    return pid, vide, vida\n\n\n@raise_api_error\nasync def get_media_info(client: httpx.AsyncClient, pid: str) -> Tuple[str, Sequence[str]]:\n    \"\"\"\n\n    :param pid:\n    :param client:\n    :return: title and m3u8 urls sorted by quality\n    \"\"\"\n    res = await req_retry(client, f'https://vdn.apps.cntv.cn/api/getHttpVideoInfo.do?pid={pid}')\n    info_data = json.loads(res.text)\n    # extract\n    title = legal_title(info_data['title'])\n    m3u8_main_url = info_data['hls_url']\n    res = await req_retry(client, m3u8_main_url)\n    m3u8_info = m3u8.loads(res.text)\n    if m3u8_info.base_uri is None:\n        m3u8_info.base_uri = re.match(r'(https?://[^/]*)/', m3u8_main_url).groups()[0]\n    m3u8_urls = list(sorted((i.absolute_uri for i in m3u8_info.playlists), reverse=True,\n                            key=lambda s: int(re.findall(r'/(\\d+).m3u8', s)[0])))\n    return title, m3u8_urls\n\n\n@raise_api_error\nasync def get_series_info(client: httpx.AsyncClient, vide: str, vida: str) -> Tuple[str, Sequence[str]]:\n    \"\"\"\n\n    :param vide:\n    :param vida:\n    :param client:\n    :return: title and list of guid(pid)\n    \"\"\"\n    params = {'mode': 0, 'id': vida, 'serviceId': 'tvcctv', 'p': 1, 'n': 999}\n    res_meta, res_list = await 
asyncio.gather(\n        req_retry(client, f\"https://api.cntv.cn/NewVideoset/getVideoAlbumInfoByVideoId?id={vide}&serviceId=tvcctv\"),\n        req_retry(client, f'https://api.cntv.cn/NewVideo/getVideoListByAlbumIdNew', params=params)\n    )\n    meta_data = json.loads(res_meta.text)\n    list_data = json.loads(res_list.text)\n    # extract\n    title = legal_title(meta_data['data']['title'])\n    pids = [i['guid'] for i in list_data['data']['list']]\n    return title, pids\n"
  },
  {
    "path": "bilix/sites/cctv/api_test.py",
    "content": "import httpx\nimport pytest\nfrom bilix.sites.cctv import api\n\nclient = httpx.AsyncClient(**api.dft_client_settings)\n\n\n@pytest.mark.asyncio\nasync def test_get_video_info():\n    pid, vide, vida = await api.get_id(client, \"https://tv.cctv.com/2012/05/02/VIDE1355968282695723.shtml\")\n    data = await api.get_media_info(client, pid)\n    data = await api.get_series_info(client, vide, vida)\n    pass\n"
  },
  {
    "path": "bilix/sites/cctv/downloader.py",
    "content": "import asyncio\nimport re\nfrom pathlib import Path\nfrom typing import Union, Tuple\nimport httpx\n\nfrom . import api\nfrom bilix.download.base_downloader_m3u8 import BaseDownloaderM3u8\n\n\nclass DownloaderCctv(BaseDownloaderM3u8):\n    pattern = re.compile(r'https?://(?:tv\\.cctv\\.com|tv\\.cctv\\.cn)/?[?/](?:pid=)?(\\d+)(?:&vid=(\\d+))?(?:&v=(\\d+))?')\n\n    def __init__(\n            self,\n            *,\n            client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            part_concurrency: int = 10,\n            video_concurrency: Union[int, asyncio.Semaphore] = 3,\n            # unique params\n            hierarchy: bool = True,\n    ):\n        client = client or httpx.AsyncClient(**api.dft_client_settings)\n        super(DownloaderCctv, self).__init__(\n            client=client,\n            browser=browser,\n            speed_limit=speed_limit,\n            stream_retry=stream_retry,\n            progress=progress,\n            logger=logger,\n            part_concurrency=part_concurrency,\n            video_concurrency=video_concurrency,\n        )\n        self.hierarchy = hierarchy\n\n    async def get_series(self, url: str, path=Path('.'), quality: int = 0):\n        \"\"\"\n        :cli: short: s\n        :param url:\n        :param path:\n        :param quality:\n        :return:\n        \"\"\"\n        pid, vide, vida = await api.get_id(self.client, url)\n        if vida is None:  # 单个视频\n            await self.get_video(pid, quality=quality)\n        else:  # 剧集\n            title, pids = await api.get_series_info(self.client, vide, vida)\n            if self.hierarchy:\n                path /= title\n                path.mkdir(parents=True, exist_ok=True)\n            await asyncio.gather(*[self.get_video(pid, path, quality) for pid in pids])\n\n    async def 
get_video(self, url_or_pid: str, path=Path('.'), quality: int = 0, time_range: Tuple[int, int] = None):\n        \"\"\"\n        :cli: short: v\n        :param url_or_pid:\n        :param path:\n        :param quality:\n        :param time_range:\n        :return:\n        \"\"\"\n        if url_or_pid.startswith('http'):\n            pid, _, _ = await api.get_id(self.client, url_or_pid)\n        else:\n            pid = url_or_pid\n        title, m3u8_urls = await api.get_media_info(self.client, pid)\n        m3u8_url = m3u8_urls[min(quality, len(m3u8_urls) - 1)]\n        file_path = await self.get_m3u8_video(m3u8_url, path / f\"{title}.mp4\", time_range=time_range)\n        return file_path\n"
  },
  {
    "path": "bilix/sites/douyin/__init__.py",
    "content": "from .downloader import DownloaderDouyin\n\n__all__ = ['DownloaderDouyin']\n"
  },
  {
    "path": "bilix/sites/douyin/api.py",
    "content": "\"\"\"\nOriginally From\n@Author: https://github.com/Evil0ctal/\nhttps://github.com/Evil0ctal/Douyin_TikTok_Download_API\n\nModified by\n@Author: https://github.com/HFrost0/\n\"\"\"\nimport asyncio\nimport re\nimport json\nfrom typing import List\nimport httpx\nfrom pydantic import BaseModel\nfrom bilix.utils import legal_title\nfrom bilix.download.utils import req_retry, raise_api_error\n\ndft_client_settings = {\n    'headers': {'user-agent': 'Mozilla/5.0 (Linux; Android 8.0; Pixel 2 Build/OPD3.170816.012)'\n                              ' AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Mobile'\n                              ' Safari/537.36 Edg/87.0.664.66'},\n    'http2': True\n}\n\n\nclass VideoInfo(BaseModel):\n    title: str\n    author_name: str\n    wm_urls: List[str]\n    nwm_urls: List[str]\n    cover: str\n    dynamic_cover: str\n    origin_cover: str\n\n\n@raise_api_error\nasync def get_video_info(client: httpx.AsyncClient, url: str) -> VideoInfo:\n    if short_url := re.findall(r'https://v.douyin.com/\\w+/', url):\n        res = await req_retry(client, short_url[0], follow_redirects=True)\n        url = str(res.url)\n    if key := re.search(r'/video/(\\d+)', url):\n        key = key.groups()[0]\n    else:\n        key = re.search(r\"modal_id=(\\d+)\", url).groups()[0]\n    res = await req_retry(client, f'https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids={key}')\n    data = json.loads(res.text)\n    data = data['item_list'][0]\n    # 视频标题\n    title = legal_title(data['desc'])\n    # 视频作者昵称\n    author_name = data['author']['nickname']\n    # 有水印视频链接\n    wm_urls = data['video']['play_addr']['url_list']\n    # 无水印视频链接 (在回执JSON中将关键字'playwm'替换为'play'即可获得无水印地址)\n    nwm_urls = list(map(lambda x: x.replace('playwm', 'play'), wm_urls))\n    # 视频封面\n    cover = data['video']['cover']['url_list'][0]\n    # 视频动态封面\n    dynamic_cover = data['video']['dynamic_cover']['url_list'][0]\n    # 视频原始封面\n    origin_cover = 
data['video']['origin_cover']['url_list'][0]\n    video_info = VideoInfo(title=title, author_name=author_name, wm_urls=wm_urls, nwm_urls=nwm_urls, cover=cover,\n                           dynamic_cover=dynamic_cover, origin_cover=origin_cover)\n    return video_info\n\n\nif __name__ == '__main__':\n    async def main():\n        client = httpx.AsyncClient(**dft_client_settings)\n        data = await get_video_info(client, 'https://www.douyin.com/video/7132430286415252773')\n        print(data)\n\n\n    asyncio.run(main())\n"
  },
  {
    "path": "bilix/sites/douyin/api_test.py",
    "content": "import httpx\nimport pytest\nfrom bilix.sites.douyin import api\n\nclient = httpx.AsyncClient(**api.dft_client_settings)\n\n\n@pytest.mark.asyncio\nasync def test_get_video_info():\n    data = await api.get_video_info(client, \"https://www.douyin.com/video/7132430286415252773\")\n    pass\n"
  },
  {
    "path": "bilix/sites/douyin/downloader.py",
    "content": "import asyncio\nimport re\nfrom pathlib import Path\nfrom typing import Union\nimport httpx\nfrom . import api\nfrom bilix.download.base_downloader_part import BaseDownloaderPart\nfrom bilix.utils import legal_title\n\n\nclass DownloaderDouyin(BaseDownloaderPart):\n    pattern = re.compile(r\"^https?://([A-Za-z0-9-]+\\.)*(douyin\\.com)\")\n\n    def __init__(\n            self,\n            *,\n            client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int, None] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            part_concurrency: int = 10,\n    ):\n        client = client or httpx.AsyncClient(**api.dft_client_settings)\n        super(DownloaderDouyin, self).__init__(\n            client=client,\n            browser=browser,\n            speed_limit=speed_limit,\n            stream_retry=stream_retry,\n            progress=progress,\n            logger=logger,\n            part_concurrency=part_concurrency,\n        )\n\n    async def get_video(self, url: str, path=Path('.'), image=False):\n        \"\"\"\n        :cli: short: v\n        :param url:\n        :param path:\n        :param image:\n        :return:\n        \"\"\"\n        video_info = await api.get_video_info(self.client, url)\n        title = legal_title(video_info.author_name, video_info.title)\n        cors = [self.get_file(video_info.nwm_urls, path=path / f\"{title}.mp4\")]\n        if image:\n            cors.append(self.get_static(video_info.cover, path / title))\n        await asyncio.gather(*cors)\n"
  },
  {
    "path": "bilix/sites/douyin/downloader_test.py",
    "content": "import pytest\nfrom bilix.sites.douyin import DownloaderDouyin\n\n\n@pytest.mark.asyncio\nasync def test_get_video():\n    async with DownloaderDouyin() as d:\n        await d.get_video('https://v.douyin.com/r4tm4Pe/')\n\n"
  },
  {
    "path": "bilix/sites/hanime1/__init__.py",
    "content": "from .downloader import DownloaderHanime1\n\n__all__ = ['DownloaderHanime1']\n"
  },
  {
    "path": "bilix/sites/hanime1/api.py",
    "content": "from pydantic import BaseModel\nimport httpx\nfrom bilix.utils import legal_title\nfrom bilix.download.utils import req_retry, raise_api_error\nfrom bs4 import BeautifulSoup\n\nBASE_URL = \"https://hanime1.me\"\ndft_client_settings = {\n    'headers': {'user-agent': 'Mozilla/5.0 (Linux; Android 8.0; Pixel 2 Build/OPD3.170816.012)'\n                              ' AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Mobile'\n                              ' Safari/537.36 Edg/87.0.664.66', \"Referer\": BASE_URL},\n    'http2': False\n}\n\n\nclass VideoInfo(BaseModel):\n    url: str\n    avid: str\n    title: str\n    video_url: str\n    img_url: str\n\n\n@raise_api_error\nasync def get_video_info(client: httpx.AsyncClient, url_or_avid: str) -> VideoInfo:\n    if url_or_avid.startswith('http'):\n        url = url_or_avid\n        avid = url.split('=')[-1]\n    else:\n        url = f'{BASE_URL}/watch?v={url_or_avid}'\n        avid = url_or_avid\n    res = await req_retry(client, url)\n    soup = BeautifulSoup(res.text, \"html.parser\")\n    title = soup.find('meta', property=\"og:title\")['content']\n    title = legal_title(title)\n    img_url = soup.find('meta', property=\"og:image\")['content']\n    video_url = soup.find('input', {'id': 'video-sd'})['value']\n    video_info = VideoInfo(url=url, avid=avid, title=title, img_url=img_url, video_url=video_url)\n    return video_info\n"
  },
  {
    "path": "bilix/sites/hanime1/api_test.py",
    "content": "import httpx\nimport pytest\nfrom bilix.sites.hanime1 import api\n\nclient = httpx.AsyncClient(**api.dft_client_settings)\n\n\n@pytest.mark.asyncio\nasync def test_get_video_info():\n    data = await api.get_video_info(client, \"https://hanime1.me/watch?v=39123\")\n    assert data.title\n    data = await api.get_video_info(client, \"https://hanime1.me/watch?v=13658\")\n    assert data.title\n"
  },
  {
    "path": "bilix/sites/hanime1/downloader.py",
    "content": "import asyncio\nimport re\nfrom pathlib import Path\nfrom typing import Union, Tuple\nimport httpx\nfrom . import api\nfrom bilix.download.base_downloader_part import BaseDownloaderPart\nfrom bilix.download.base_downloader_m3u8 import BaseDownloaderM3u8\n\n\nclass DownloaderHanime1(BaseDownloaderM3u8, BaseDownloaderPart):\n    pattern = re.compile(r\"^https?://([A-Za-z0-9-]+\\.)*(hanime1\\.me)\")\n\n    def __init__(\n            self,\n            *,\n            client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            part_concurrency: int = 10,\n            video_concurrency: Union[int, asyncio.Semaphore] = 3,\n    ):\n        self.client = client or httpx.AsyncClient(**api.dft_client_settings)\n        super().__init__(\n            client=self.client,\n            browser=browser,\n            speed_limit=speed_limit,\n            stream_retry=stream_retry,\n            progress=progress,\n            logger=logger,\n            part_concurrency=part_concurrency,\n            video_concurrency=video_concurrency,\n        )\n\n    async def get_video(self, url: str, path=Path('.'), image=False, time_range: Tuple[int, int] = None):\n        \"\"\"\n        :cli: short: v\n        :param url:\n        :param path:\n        :param image:\n        :param time_range:\n        :return:\n        \"\"\"\n        video_info = await api.get_video_info(self.client, url)\n        video_url = video_info.video_url\n        cors = [\n            self.get_m3u8_video(\n                video_url, path=path / f'{video_info.title}.mp4', time_range=time_range) if '.m3u8' in video_url else\n            self.get_file(video_url, path=path / f'{video_info.title}.mp4')]\n        if image:\n            cors.append(self.get_static(video_info.img_url, path=path / video_info.title))\n        await 
asyncio.gather(*cors)\n"
  },
  {
    "path": "bilix/sites/jable/__init__.py",
    "content": "from .downloader import DownloaderJable\n\n__all__ = ['DownloaderJable']\n"
  },
  {
    "path": "bilix/sites/jable/api.py",
    "content": "import re\nfrom pydantic import BaseModel\nimport httpx\nfrom bs4 import BeautifulSoup\nfrom bilix.utils import legal_title\nfrom bilix.download.utils import raise_api_error, req_retry\n\nBASE_URL = \"https://jable.tv\"\ndft_client_settings = {\n    'headers': {'user-agent': 'PostmanRuntime/7.29.0', \"Referer\": BASE_URL},\n    'http2': False\n}\n\n\nclass VideoInfo(BaseModel):\n    url: str\n    avid: str\n    title: str\n    actor_name: str\n    m3u8_url: str\n    img_url: str\n\n\n@raise_api_error\nasync def get_actor_info(client: httpx.AsyncClient, url: str):\n    res = await req_retry(client, url)\n    soup = BeautifulSoup(res.text, \"html.parser\")\n    actor_name = soup.find('h2', class_='h3-md mb-1').text\n    urls = [h6.a['href'] for h6 in soup.find('section', class_='pb-3 pb-e-lg-40').find_all('h6')]\n    return {'actor_name': actor_name, 'urls': urls}\n\n\n@raise_api_error\nasync def get_video_info(client: httpx.AsyncClient, url_or_avid: str) -> VideoInfo:\n    if url_or_avid.startswith('http'):\n        url = url_or_avid\n        avid = url.split('/')[-2]\n    else:\n        url = f'{BASE_URL}/videos/{url_or_avid}/'\n        avid = url_or_avid\n    avid = avid.upper()\n    res = await req_retry(client, url)  # proxies default global in httpx\n    soup = BeautifulSoup(res.text, \"html.parser\")\n    title = soup.find('meta', property=\"og:title\")['content']\n    title = legal_title(title)\n    if span := soup.find(\"span\", class_=\"placeholder rounded-circle\"):\n        actor_name = span['title']\n    else:  # https://github.com/HFrost0/bilix/issues/45  for some video actor name in different place\n        actor_name = soup.find(\"img\", class_=\"avatar rounded-circle\")['title']\n    img_url = soup.find('meta', property=\"og:image\")['content']\n    m3u8_url = re.findall(r'http.*m3u8', res.text)[0]\n    video_info = VideoInfo(url=url, avid=avid, title=title, img_url=img_url, m3u8_url=m3u8_url, actor_name=actor_name)\n    return 
video_info\n"
  },
  {
    "path": "bilix/sites/jable/api_test.py",
    "content": "import httpx\nimport pytest\nfrom bilix.sites.jable import api\n\nclient = httpx.AsyncClient(**api.dft_client_settings)\n\n\n@pytest.mark.asyncio\nasync def test_get_video_info():\n    data = await api.get_video_info(client, \"https://jable.tv/videos/ssis-533/\")\n    assert data.actor_name\n    data = await api.get_video_info(client, \"https://jable.tv/videos/ssis-448/\")\n    assert data.actor_name\n\n\n@pytest.mark.asyncio\nasync def test_get_actor_info():\n    data = await api.get_actor_info(client, 'https://jable.tv/models/393ec3548aecc34004d54e03becd2ea9/')\n    assert data['actor_name'].encode('utf8') == b'\\xe4\\xbd\\x90\\xe4\\xb9\\x85\\xe8\\x89\\xaf\\xe5\\x92\\xb2\\xe5\\xb8\\x8c'\n    assert data['urls']\n"
  },
  {
    "path": "bilix/sites/jable/downloader.py",
    "content": "import asyncio\nimport re\nfrom pathlib import Path\nfrom typing import Union, Tuple\nimport httpx\nfrom . import api\nfrom bilix.download.base_downloader_m3u8 import BaseDownloaderM3u8\n\n\nclass DownloaderJable(BaseDownloaderM3u8):\n    pattern = re.compile(r\"^https?://([A-Za-z0-9-]+\\.)*(jable\\.tv)\")\n\n    def __init__(\n            self,\n            *,\n            client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            part_concurrency: int = 10,\n            video_concurrency: Union[int, asyncio.Semaphore] = 3,\n            # unique params\n            hierarchy: bool = True,\n\n    ):\n        client = client or httpx.AsyncClient(**api.dft_client_settings)\n        super(DownloaderJable, self).__init__(\n            client=client,\n            browser=browser,\n            speed_limit=speed_limit,\n            stream_retry=stream_retry,\n            progress=progress,\n            logger=logger,\n            part_concurrency=part_concurrency,\n            video_concurrency=video_concurrency,\n        )\n        self.hierarchy = hierarchy\n\n    async def get_actor(self, url: str, path=Path(\".\"), image=True):\n        \"\"\"\n        download videos of an actor\n        :cli: short: a\n        :param url: actor page url\n        :param path: save path\n        :param image: download cover\n        :return:\n        \"\"\"\n        data = await api.get_actor_info(self.client, url)\n        if self.hierarchy:\n            path /= data['actor_name']\n            path.mkdir(parents=True, exist_ok=True)\n        await asyncio.gather(*[self.get_video(url, path, image) for url in data['urls']])\n\n    async def get_video(self, url: str, path=Path(\".\"), image=True, time_range: Tuple[int, int] = None):\n        \"\"\"\n        :cli: short: v\n        :param url:\n        :param path:\n        :param image:\n        :param time_range:\n        :return:\n        \"\"\"\n        video_info = await api.get_video_info(self.client, url)\n        if self.hierarchy:\n            path /= f\"{video_info.avid} {video_info.actor_name}\"\n            path.mkdir(parents=True, exist_ok=True)\n        cors = [self.get_m3u8_video(m3u8_url=video_info.m3u8_url, path=path / f\"{video_info.title}.mp4\",\n                                    time_range=time_range)]\n        if image:\n            cors.append(self.get_static(video_info.img_url, path=path / video_info.title, ))\n        await asyncio.gather(*cors)\n"
  },
  {
    "path": "bilix/sites/tiktok/__init__.py",
    "content": "from .downloader import DownloaderTiktok\n\n__all__ = ['DownloaderTiktok']\n"
  },
  {
    "path": "bilix/sites/tiktok/api.py",
    "content": "\"\"\"\nOriginally From\n@Author: https://github.com/Evil0ctal/\nhttps://github.com/Evil0ctal/Douyin_TikTok_Download_API\n\"\"\"\n\nimport re\nimport json\nimport random\nfrom typing import List\nimport httpx\nfrom pydantic import BaseModel\nfrom bilix.utils import legal_title\nfrom bilix.download.utils import req_retry, raise_api_error\n\ndft_client_settings = {\n    'headers': {'user-agent': 'com.ss.android.ugc.trill/494+Mozilla/5.0+(Linux;+Android+12;'\n                              '+2112123G+Build/SKQ1.211006.001;+wv)+AppleWebKit/537.36+'\n                              '(KHTML,+like+Gecko)+Version/4.0+Chrome/107.0.5304.105+Mobile+Safari/537.36'},\n    'http2': True\n}\n\n\nclass VideoInfo(BaseModel):\n    title: str\n    author_name: str\n    wm_urls: List[str]\n    nwm_urls: List[str]\n    cover: str\n    dynamic_cover: str\n    origin_cover: str\n\n\n@raise_api_error\nasync def get_video_info(client: httpx.AsyncClient, url: str) -> VideoInfo:\n    if short_url := re.findall(r'https://www.tiktok.com/t/\\w+/', url):\n        res = await req_retry(client, short_url[0], follow_redirects=True)\n        url = str(res.url)\n    if key := re.search(r'/video/(\\d+)', url):\n        key = key.groups()[0]\n    else:\n        key = re.search(r\"/v/(\\d+)\", url).groups()[0]\n    params = {'aweme_id': key, 'aid': 1180, 'iid': 6165993682518218889,\n              'device_id': random.randint(10 * 10 * 10, 9 * 10 ** 10)}\n    res = await req_retry(client, 'https://api16-normal-c-useast1a.tiktokv.com/aweme/v1/feed/', params=params)\n    data = json.loads(res.text)\n    data = data['aweme_list'][0]\n    # video title (fall back to the share title if empty)\n    title = legal_title(data['desc'] if data['desc'] != '' else data['share_info']['share_title'])\n    # author nickname\n    author_name = data['author']['nickname']\n    # watermarked video urls\n    wm_urls = data['video']['download_addr']['url_list']\n    # watermark-free video urls\n    nwm_urls = data['video']['bit_rate'][0]['play_addr']['url_list']\n    # video cover\n    cover = data['video']['cover']['url_list'][0]\n    # dynamic video cover\n    dynamic_cover = data['video']['dynamic_cover']['url_list'][0]\n    # original video cover\n    origin_cover = data['video']['origin_cover']['url_list'][0]\n    video_info = VideoInfo(title=title, author_name=author_name, wm_urls=wm_urls, nwm_urls=nwm_urls, cover=cover,\n                           dynamic_cover=dynamic_cover, origin_cover=origin_cover)\n    return video_info\n"
  },
  {
    "path": "bilix/sites/tiktok/api_test.py",
    "content": "import httpx\nimport pytest\nfrom bilix.sites.tiktok import api\n\nclient = httpx.AsyncClient(**api.dft_client_settings)\n\n\n@pytest.mark.asyncio\nasync def test_get_video_info():\n    data = await api.get_video_info(client, \"https://www.tiktok.com/@lindaselection/video/7171715528124271877\")\n    assert data.nwm_urls\n"
  },
  {
    "path": "bilix/sites/tiktok/downloader.py",
    "content": "import asyncio\nimport re\nfrom pathlib import Path\nfrom typing import Union\nimport httpx\nfrom . import api\nfrom bilix.download.base_downloader_part import BaseDownloaderPart\nfrom bilix.utils import legal_title\n\n\nclass DownloaderTiktok(BaseDownloaderPart):\n    pattern = re.compile(r\"^https?://([A-Za-z0-9-]+\\.)*(tiktok\\.com)\")\n\n    def __init__(\n            self,\n            *,\n            client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int, None] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            part_concurrency: int = 10,\n    ):\n        client = client or httpx.AsyncClient(**api.dft_client_settings)\n        super(DownloaderTiktok, self).__init__(\n            client=client,\n            browser=browser,\n            speed_limit=speed_limit,\n            stream_retry=stream_retry,\n            progress=progress,\n            logger=logger,\n            part_concurrency=part_concurrency,\n        )\n\n    async def get_video(self, url: str, path=Path('.'), image=False):\n        \"\"\"\n        :cli: short: v\n        :param url:\n        :param path:\n        :param image:\n        :return:\n        \"\"\"\n        video_info = await api.get_video_info(self.client, url)\n        title = legal_title(video_info.author_name, video_info.title)\n        # TikTok backup urls can be slow at times, so use the first one\n        cors = [self.get_file(video_info.nwm_urls[0], path / f'{title}.mp4')]\n        if image:\n            cors.append(self.get_static(video_info.cover, path=path / title, ))\n        await asyncio.gather(*cors)\n"
  },
  {
    "path": "bilix/sites/tiktok/downloader_test.py",
    "content": "import pytest\nfrom bilix.sites.tiktok import DownloaderTiktok\n\n\n@pytest.mark.asyncio\nasync def test_get_video():\n    async with DownloaderTiktok() as d:\n        await d.get_video('https://www.tiktok.com/@evil0ctal/video/7168978761973550378')\n\n"
  },
  {
    "path": "bilix/sites/yhdmp/__init__.py",
    "content": "from .downloader import DownloaderYhdmp\n\n__all__ = ['DownloaderYhdmp']\n"
  },
  {
    "path": "bilix/sites/yhdmp/api.py",
    "content": "import asyncio\nimport json\nimport random\nimport re\nfrom pathlib import Path\nfrom pydantic import BaseModel\nfrom typing import Union, List\nimport httpx\nimport execjs\nfrom bs4 import BeautifulSoup\nfrom bilix.utils import legal_title\nfrom bilix.download.utils import req_retry as rr, raise_api_error\n\nBASE_URL = \"https://www.yhdmp.cc\"\ndft_client_settings = {\n    'headers': {'user-agent': 'PostmanRuntime/7.29.0', \"Referer\": BASE_URL},\n    'http2': False\n}\n_js = None\n\n\ndef _get_js():\n    global _js\n    if _js is None:\n        with open(Path(__file__).parent / 'yhdmp.js', 'r') as f:\n            _js = execjs.compile(f.read())\n    return _js\n\n\ndef _get_t2_k2(t1: str, k1: str) -> dict:\n    new_cookies = _get_js().call(\"get_t2_k2\", t1, k1)\n    return new_cookies\n\n\ndef _decode(data: str) -> str:\n    return _get_js().call('__getplay_rev_data', data)\n\n\nasync def req_retry(client: httpx.AsyncClient, url_or_urls: Union[str, List[str]],\n                    method: str = 'GET',\n                    follow_redirects: bool = False,\n                    **kwargs):\n    if 't1' in client.cookies and 'k1' in client.cookies:\n        new_cookies = _get_t2_k2(client.cookies['t1'], client.cookies['k1'])\n        if 't2' in client.cookies:\n            client.cookies.delete('t2')\n        if 'k2' in client.cookies:\n            client.cookies.delete('k2')\n        client.cookies.update(new_cookies)\n\n    res = await rr(client, url_or_urls, method, follow_redirects, **kwargs)\n    return res\n\n\nclass VideoInfo(BaseModel):\n    aid: Union[str, int]\n    play_idx: int\n    ep_idx: int\n    title: str\n    sub_title: str\n    play_info: List[Union[List[str], List]]  # may be empty\n    m3u8_url: str\n\n\n@raise_api_error\nasync def get_video_info(client: httpx.AsyncClient, url: str) -> VideoInfo:\n    aid, play_idx, ep_idx = url.split('/')[-1].split('.')[0].split('-')\n    play_idx, ep_idx = int(play_idx), int(ep_idx)\n    # 
request\n    res_web = req_retry(client, url)\n    m3u8_url = get_m3u8_url(url=url, client=client)\n    if 't1' in client.cookies and 'k1' in client.cookies:\n        res_web, m3u8_url = await asyncio.gather(res_web, m3u8_url)\n    else:\n        res_web, m3u8_url = await res_web, await m3u8_url\n    # extract\n    title, sub_title = map(legal_title,\n                           re.search(r'target=\"_self\">([^<]+)</a><span>:([^<]+)</span>', res_web.text).groups())\n    soup = BeautifulSoup(res_web.text, 'html.parser')\n    divs = soup.find_all('div', class_=\"movurl\")\n    play_info = []\n    for div in divs:\n        play_info.append([[legal_title(a[\"title\"]), f\"{BASE_URL}/{a['href']}\"] for a in div.find_all(\"a\")])\n    video_info = VideoInfo(aid=aid, play_idx=play_idx, ep_idx=ep_idx, title=title, sub_title=sub_title,\n                           play_info=play_info, m3u8_url=m3u8_url)\n    return video_info\n\n\n@raise_api_error\nasync def get_m3u8_url(client: httpx.AsyncClient, url):\n    aid, play_idx, ep_idx = url.split('/')[-1].split('.')[0].split('-')\n    params = {\"aid\": aid, \"playindex\": play_idx, \"epindex\": ep_idx, \"r\": random.random()}\n    res_play = await req_retry(client, f\"{BASE_URL}/_getplay\", params=params)\n    if res_play.text.startswith(\"err\"):  # maybe first time\n        res_play = await req_retry(client, f\"{BASE_URL}/_getplay\", params=params)\n    data = json.loads(res_play.text)\n    purl, vurl = _decode(data['purl']), _decode(data['vurl'])\n    m3u8_url = purl.split(\"url=\")[-1] + vurl\n    return m3u8_url\n"
  },
  {
    "path": "bilix/sites/yhdmp/api_test.py",
    "content": "import httpx\nimport pytest\nfrom bilix.sites.yhdmp import api\n\nclient = httpx.AsyncClient(**api.dft_client_settings)\n\n\n@pytest.mark.asyncio\nasync def test_get_video_info():\n    data = await api.get_video_info(client, \"https://www.yhdmp.cc/vp/22224-1-0.html\")\n    assert data.m3u8_url\n    m3u8_url = await api.get_m3u8_url(client, \"https://www.yhdmp.cc/vp/22224-1-0.html\")\n    assert m3u8_url\n"
  },
  {
    "path": "bilix/sites/yhdmp/downloader.py",
    "content": "import asyncio\nfrom pathlib import Path\nimport httpx\nfrom typing import Sequence, Union, Tuple\nfrom . import api\nfrom bilix.utils import legal_title, cors_slice\nfrom bilix.download.base_downloader_m3u8 import BaseDownloaderM3u8\n\n\nclass DownloaderYhdmp(BaseDownloaderM3u8):\n    def __init__(\n            self,\n            *,\n            api_client: httpx.AsyncClient = None,\n            stream_client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            part_concurrency: int = 10,\n            video_concurrency: Union[int, asyncio.Semaphore] = 3,\n            hierarchy: bool = True,\n    ):\n        stream_client = stream_client or httpx.AsyncClient()\n        super(DownloaderYhdmp, self).__init__(\n            client=stream_client,\n            browser=browser,\n            speed_limit=speed_limit,\n            stream_retry=stream_retry,\n            progress=progress,\n            logger=logger,\n            part_concurrency=part_concurrency,\n            video_concurrency=video_concurrency,\n        )\n        self.api_client = api_client or httpx.AsyncClient(**api.dft_client_settings)\n        self.hierarchy = hierarchy\n\n    async def get_series(self, url: str, path=Path('.'), p_range: Sequence[int] = None):\n        \"\"\"\n        :cli: short: s\n        :param url:\n        :param path:\n        :param p_range:\n        :return:\n        \"\"\"\n        video_info = await api.get_video_info(self.api_client, url)\n        ep_idx = video_info.ep_idx\n        play_idx = video_info.play_idx\n        title = video_info.title\n        if self.hierarchy:\n            path = path / title\n            path.mkdir(parents=True, exist_ok=True)\n\n        # no need to reuse get_video since we only need m3u8_url\n        async def get_video(page_url, name):\n            m3u8_url = 
await api.get_m3u8_url(self.api_client, page_url)\n            await self.get_m3u8_video(m3u8_url=m3u8_url, path=path / name)\n\n        cors = []\n        for idx, (sub_title, url) in enumerate(video_info.play_info[play_idx]):\n            if ep_idx == idx:\n                cors.append(self.get_m3u8_video(m3u8_url=video_info.m3u8_url,\n                                                path=path / f'{legal_title(title, sub_title)}.mp4'))\n            else:\n                cors.append(get_video(url, legal_title(title, sub_title)))\n        if p_range:\n            cors = cors_slice(cors, p_range)\n        await asyncio.gather(*cors)\n\n    async def get_video(self, url: str, path=Path('.'), time_range=None):\n        \"\"\"\n        :cli: short: v\n        :param url:\n        :param path:\n        :param time_range:\n        :return:\n        \"\"\"\n        video_info = await api.get_video_info(self.api_client, url)\n        name = legal_title(video_info.title, video_info.sub_title)\n        await self.get_m3u8_video(m3u8_url=video_info.m3u8_url, path=path / f'{name}.mp4', time_range=time_range)\n\n    @classmethod\n    def _decide_handle(cls, method: str, keys: Tuple[str, ...], options: dict) -> bool:\n        return 'yhdmp' in keys[0]\n"
  },
  {
    "path": "bilix/sites/yhdmp/yhdmp.js",
    "content": "function __getplay_rev_data(_in_data) {\n    if (_in_data.indexOf('{') < 0) {\n        ;var encode_version = 'jsjiami.com.v5', unthu = '__0xb5aef',\n            __0xb5aef = ['wohHHQdR', 'dyXDlMOIw5M=', 'dA9wwoRS', 'U8K2w7FvETZ9csKtEFTCjQ==', 'wo7ChVE=', 'VRrDhMOnw6I=', 'wr5LwoQkKBbDkcKwwqk='];\n        (function (_0x22b97e, _0x2474ca) {\n            var _0x5b074e = function (_0x5864d0) {\n                while (--_0x5864d0) {\n                    _0x22b97e['push'](_0x22b97e['shift']());\n                }\n            };\n            _0x5b074e(++_0x2474ca);\n        }(__0xb5aef, 0x1ae));\n        var _0x2c0f = function (_0x19a33a, _0x9a1ebf) {\n            _0x19a33a = _0x19a33a - 0x0;\n            var _0x40a3ce = __0xb5aef[_0x19a33a];\n            if (_0x2c0f['initialized'] === undefined) {\n                (function () {\n                    var _0x4d044c = typeof window !== 'undefined' ? window : typeof process === 'object' && typeof require === 'function' && typeof global === 'object' ? global : this;\n                    var _0x1268d6 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=';\n                    _0x4d044c['atob'] || (_0x4d044c['atob'] = function (_0x2993de) {\n                        var _0x467e1d = String(_0x2993de)['replace'](/=+$/, '');\n                        for (var _0x22a01d = 0x0, _0x1ee2a1, _0x2cf5ea, _0x3a84f7 = 0x0, _0x5c0e64 = ''; _0x2cf5ea = _0x467e1d['charAt'](_0x3a84f7++); ~_0x2cf5ea && (_0x1ee2a1 = _0x22a01d % 0x4 ? _0x1ee2a1 * 0x40 + _0x2cf5ea : _0x2cf5ea, _0x22a01d++ % 0x4) ? 
_0x5c0e64 += String['fromCharCode'](0xff & _0x1ee2a1 >> (-0x2 * _0x22a01d & 0x6)) : 0x0) {\n                            _0x2cf5ea = _0x1268d6['indexOf'](_0x2cf5ea);\n                        }\n                        return _0x5c0e64;\n                    });\n                }());\n                var _0x3c81da = function (_0x457f21, _0x6cb980) {\n                    var _0x133a9b = [], _0x749ec5 = 0x0, _0x3ceeee, _0x1df5a4 = '', _0x35a2a6 = '';\n                    _0x457f21 = atob(_0x457f21);\n                    for (var _0x9a0e47 = 0x0, _0x4a71aa = _0x457f21['length']; _0x9a0e47 < _0x4a71aa; _0x9a0e47++) {\n                        _0x35a2a6 += '%' + ('00' + _0x457f21['charCodeAt'](_0x9a0e47)['toString'](0x10))['slice'](-0x2);\n                    }\n                    _0x457f21 = decodeURIComponent(_0x35a2a6);\n                    for (var _0x2ef02e = 0x0; _0x2ef02e < 0x100; _0x2ef02e++) {\n                        _0x133a9b[_0x2ef02e] = _0x2ef02e;\n                    }\n                    for (_0x2ef02e = 0x0; _0x2ef02e < 0x100; _0x2ef02e++) {\n                        _0x749ec5 = (_0x749ec5 + _0x133a9b[_0x2ef02e] + _0x6cb980['charCodeAt'](_0x2ef02e % _0x6cb980['length'])) % 0x100;\n                        _0x3ceeee = _0x133a9b[_0x2ef02e];\n                        _0x133a9b[_0x2ef02e] = _0x133a9b[_0x749ec5];\n                        _0x133a9b[_0x749ec5] = _0x3ceeee;\n                    }\n                    _0x2ef02e = 0x0;\n                    _0x749ec5 = 0x0;\n                    for (var _0xa5d5ef = 0x0; _0xa5d5ef < _0x457f21['length']; _0xa5d5ef++) {\n                        _0x2ef02e = (_0x2ef02e + 0x1) % 0x100;\n                        _0x749ec5 = (_0x749ec5 + _0x133a9b[_0x2ef02e]) % 0x100;\n                        _0x3ceeee = _0x133a9b[_0x2ef02e];\n                        _0x133a9b[_0x2ef02e] = _0x133a9b[_0x749ec5];\n                        _0x133a9b[_0x749ec5] = _0x3ceeee;\n                        _0x1df5a4 += 
String['fromCharCode'](_0x457f21['charCodeAt'](_0xa5d5ef) ^ _0x133a9b[(_0x133a9b[_0x2ef02e] + _0x133a9b[_0x749ec5]) % 0x100]);\n                    }\n                    return _0x1df5a4;\n                };\n                _0x2c0f['rc4'] = _0x3c81da;\n                _0x2c0f['data'] = {};\n                _0x2c0f['initialized'] = !![];\n            }\n            var _0x4222af = _0x2c0f['data'][_0x19a33a];\n            if (_0x4222af === undefined) {\n                if (_0x2c0f['once'] === undefined) {\n                    _0x2c0f['once'] = !![];\n                }\n                _0x40a3ce = _0x2c0f['rc4'](_0x40a3ce, _0x9a1ebf);\n                _0x2c0f['data'][_0x19a33a] = _0x40a3ce;\n            } else {\n                _0x40a3ce = _0x4222af;\n            }\n            return _0x40a3ce;\n        };\n        var panurl = _in_data;\n        var hf_panurl = '';\n        const keyMP = 0x100000;\n        const panurl_len = panurl['length'];\n        for (var i = 0x0; i < panurl_len; i += 0x2) {\n            var mn = parseInt(panurl[i] + panurl[i + 0x1], 0x10);\n            mn = (mn + keyMP - (panurl_len / 0x2 - 0x1 - i / 0x2)) % 0x100;\n            hf_panurl = String[_0x2c0f('0x0', '1JYE')](mn) + hf_panurl;\n        }\n        _in_data = hf_panurl;\n        ;(function (_0x5be96b, _0x58d96a, _0x2d2c35) {\n            var _0x13ecbc = {\n                'luTaD': function _0x478551(_0x58d2f3, _0x3c17c5) {\n                    return _0x58d2f3 !== _0x3c17c5;\n                }, 'dkPfD': function _0x52a07f(_0x5999d5, _0x5de375) {\n                    return _0x5999d5 === _0x5de375;\n                }, 'NJDNu': function _0x386503(_0x39f385, _0x251b7b) {\n                    return _0x39f385 + _0x251b7b;\n                }, 'mNqKE': '版本号，js会定期弹窗，还请支持我们的工作', 'GllzR': '删除版本号，js会定期弹窗'\n            };\n            _0x2d2c35 = 'al';\n            try {\n                _0x2d2c35 += _0x2c0f('0x1', 's^Zc');\n                _0x58d96a = encode_version;\n                if 
(!(_0x13ecbc[_0x2c0f('0x2', '(fbB')](typeof _0x58d96a, _0x2c0f('0x3', '*OI!')) && _0x13ecbc[_0x2c0f('0x4', '8iw%')](_0x58d96a, 'jsjiami.com.v5'))) {\n                    _0x5be96b[_0x2d2c35](_0x13ecbc[_0x2c0f('0x5', '(fbB')]('删除', _0x13ecbc['mNqKE']));\n                }\n            } catch (_0x57623d) {\n                _0x5be96b[_0x2d2c35](_0x13ecbc[_0x2c0f('0x6', '126j')]);\n            }\n        }(\"undefined\"));\n        ;encode_version = 'jsjiami.com.v5';\n    }\n    return decodeURIComponent(_in_data);\n}\n\n\nfunction __getplay_pck() {\n    ;var encode_version = 'sojson.v5', yqpcz = '__0x6d4a1',\n        __0x6d4a1 = ['wq4mw7/CmF4=', 'w6XDrMOmwprCgg==', 'eRfDo8OoZQ==', 'IUnCmSzDgyfDjw==', 'S0pEJ8KxUMOSwqlq', 'asOow5tBwqk=', '5Lqc6ICk5Yi16ZuCw7A4wqEAwqHCisKHwr0/', 'TjpSwqZ3WMOmG8Oz', 'MhvDm8OOwqk=', 'XsKOwrAgwrFzwoU=', 'UyHCmcOyREsv', 'N2DDnXUC', 'BcOIwowrdgc=', 'GcOwNxbDqg==', 'JcKMw4ZORw==', 'Jm/ChVfDhw==', 'w7U3w4PCksKm', 'w7jDnHDCpcOF', 'wrgOw5PDlcO7', 'w4HDkMODYcK/D8O0PMKjShFZcw==', 'F8KFT8Ktwp3Ckw/CqXI=', 'M8O0dUFY', 'e1zDtMOGZg==', 'w6LChsKLCBo=', 'EMKJXSbDjQ==', 'T8KPWMK2wp3ChA==', 'wpRjw5BEZQ==', 'JHsWwq3DoQ==', 'HsKKUAvDqw==', 'wopnw5BzZA3DgQ==', 'wqAkw5PCpmw=', 'w68MBSvDow==', 'MljDsVQq', 'FMKIw6xETQ=='];\n    (function (_0x3aee46, _0x59ba69) {\n        var _0x3ea520 = function (_0x1dd9c6) {\n            while (--_0x1dd9c6) {\n                _0x3aee46['push'](_0x3aee46['shift']());\n            }\n        };\n        _0x3ea520(++_0x59ba69);\n    }(__0x6d4a1, 0x15b));\n    var _0x15f5 = function (_0x36bc78, _0xbd2420) {\n        _0x36bc78 = _0x36bc78 - 0x0;\n        var _0xfd0a5f = __0x6d4a1[_0x36bc78];\n        if (_0x15f5['initialized'] === undefined) {\n            (function () {\n                var _0x4b7bb1 = typeof window !== 'undefined' ? window : typeof process === 'object' && typeof require === 'function' && typeof global === 'object' ? 
global : this;\n                var _0x531bb8 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=';\n                _0x4b7bb1['atob'] || (_0x4b7bb1['atob'] = function (_0x1870ad) {\n                    var _0x576c80 = String(_0x1870ad)['replace'](/=+$/, '');\n                    for (var _0x44d56e = 0x0, _0x1a3ebb, _0x42d2dc, _0x1cf4b1 = 0x0, _0x2af9b7 = ''; _0x42d2dc = _0x576c80['charAt'](_0x1cf4b1++); ~_0x42d2dc && (_0x1a3ebb = _0x44d56e % 0x4 ? _0x1a3ebb * 0x40 + _0x42d2dc : _0x42d2dc, _0x44d56e++ % 0x4) ? _0x2af9b7 += String['fromCharCode'](0xff & _0x1a3ebb >> (-0x2 * _0x44d56e & 0x6)) : 0x0) {\n                        _0x42d2dc = _0x531bb8['indexOf'](_0x42d2dc);\n                    }\n                    return _0x2af9b7;\n                });\n            }());\n            var _0x1897b8 = function (_0x3c0b9b, _0x2579f3) {\n                var _0x5a0327 = [], _0x330679 = 0x0, _0x12b19f, _0x3ebfbf = '', _0x20630f = '';\n                _0x3c0b9b = atob(_0x3c0b9b);\n                for (var _0x514228 = 0x0, _0x4f7f74 = _0x3c0b9b['length']; _0x514228 < _0x4f7f74; _0x514228++) {\n                    _0x20630f += '%' + ('00' + _0x3c0b9b['charCodeAt'](_0x514228)['toString'](0x10))['slice'](-0x2);\n                }\n                _0x3c0b9b = decodeURIComponent(_0x20630f);\n                for (var _0x53cc80 = 0x0; _0x53cc80 < 0x100; _0x53cc80++) {\n                    _0x5a0327[_0x53cc80] = _0x53cc80;\n                }\n                for (_0x53cc80 = 0x0; _0x53cc80 < 0x100; _0x53cc80++) {\n                    _0x330679 = (_0x330679 + _0x5a0327[_0x53cc80] + _0x2579f3['charCodeAt'](_0x53cc80 % _0x2579f3['length'])) % 0x100;\n                    _0x12b19f = _0x5a0327[_0x53cc80];\n                    _0x5a0327[_0x53cc80] = _0x5a0327[_0x330679];\n                    _0x5a0327[_0x330679] = _0x12b19f;\n                }\n                _0x53cc80 = 0x0;\n                _0x330679 = 0x0;\n                for (var _0x25c772 = 0x0; _0x25c772 < 
_0x3c0b9b['length']; _0x25c772++) {\n                    _0x53cc80 = (_0x53cc80 + 0x1) % 0x100;\n                    _0x330679 = (_0x330679 + _0x5a0327[_0x53cc80]) % 0x100;\n                    _0x12b19f = _0x5a0327[_0x53cc80];\n                    _0x5a0327[_0x53cc80] = _0x5a0327[_0x330679];\n                    _0x5a0327[_0x330679] = _0x12b19f;\n                    _0x3ebfbf += String['fromCharCode'](_0x3c0b9b['charCodeAt'](_0x25c772) ^ _0x5a0327[(_0x5a0327[_0x53cc80] + _0x5a0327[_0x330679]) % 0x100]);\n                }\n                return _0x3ebfbf;\n            };\n            _0x15f5['rc4'] = _0x1897b8;\n            _0x15f5['data'] = {};\n            _0x15f5['initialized'] = !![];\n        }\n        var _0x597ef6 = _0x15f5['data'][_0x36bc78];\n        if (_0x597ef6 === undefined) {\n            if (_0x15f5['once'] === undefined) {\n                _0x15f5['once'] = !![];\n            }\n            _0xfd0a5f = _0x15f5['rc4'](_0xfd0a5f, _0xbd2420);\n            _0x15f5['data'][_0x36bc78] = _0xfd0a5f;\n        } else {\n            _0xfd0a5f = _0x597ef6;\n        }\n        return _0xfd0a5f;\n    };\n    if (!![]) {\n        var _0x36d031 = _0x15f5('0x0', 'CuZW')[_0x15f5('0x1', '^Ou5')]('|'), _0x5a77e0 = 0x0;\n        while (!![]) {\n            switch (_0x36d031[_0x5a77e0++]) {\n                case'0':\n                    f2 = function (_0x369589, _0x22305e) {\n                        var _0x3df411 = {\n                            'DUWem': function _0x172fb9(_0x5ec61c, _0x564208) {\n                                return _0x5ec61c + _0x564208;\n                            }, 'chgqL': function _0xdabcda(_0x221552, _0x9f16bb) {\n                                return _0x221552 * _0x9f16bb;\n                            }, 'ueYPD': function _0x42de89(_0x168663, _0x45775b) {\n                                return _0x168663 + _0x45775b;\n                            }, 'FyVON': function _0x132543(_0x14cf95, _0x5f0613) {\n                                return 
_0x14cf95 + _0x5f0613;\n                            }, 'rImkg': function _0x3ee8de(_0x50917a, _0x5aa05b) {\n                                return _0x50917a + _0x5aa05b;\n                            }, 'EhXgt': ';expires=', 'eglgt': _0x15f5('0x2', 'y4Vs')\n                        };\n                        var _0x355c8f = 0x1e;\n                        var _0x36f590 = new Date();\n                        _0x36f590['setTime'](_0x3df411['DUWem'](_0x36f590[_0x15f5('0x3', 'wmgi')](), _0x3df411[_0x15f5('0x4', 'Put*')](_0x3df411['chgqL'](_0x3df411['chgqL'](_0x355c8f, 0x18), 0x3c) * 0x3c, 0x3e8)));\n                        var cookie = _0x3df411['DUWem'](_0x3df411[_0x15f5('0x6', 'PIK)')](_0x3df411['FyVON'](_0x3df411['rImkg'](_0x3df411[_0x15f5('0x7', 'MDzc')](_0x369589, '='), escape(_0x22305e)), _0x3df411[_0x15f5('0x8', 'bDPL')]), _0x36f590['toGMTString']()), _0x3df411[_0x15f5('0x9', 'Doro')])\n                        updateDoc(cookie)\n                    };\n                    continue;\n                case'1':\n                    t1 = Math[_0x15f5('0xa', 'Q5gT')](Number(f('t1')) / 0x3e8) >> 0x5;\n                    continue;\n                case'2':\n                    f = function (_0x30755b) {\n                        var _0x2061a3 = {\n                            'JwcjB': function _0x4d63cc(_0x53138c, _0x57679f) {\n                                return _0x53138c + _0x57679f;\n                            },\n                            'zWwUP': _0x15f5('0xb', 'Doro'),\n                            'zMNwJ': _0x15f5('0xc', 'mu(g'),\n                            'QLLCz': function _0xcf9e5b(_0x22b423, _0x4bb2df) {\n                                return _0x22b423(_0x4bb2df);\n                            },\n                            'tNCZl': 'BSp',\n                            'fPKPd': function _0x1e8a5f(_0x1b5aa9, _0x4db818) {\n                                return _0x1b5aa9 + _0x4db818;\n                            },\n                            'BbKyG': 
function _0x1758f2(_0x471863, _0x128f5e) {\n                                return _0x471863 * _0x128f5e;\n                            },\n                            'xIvIx': function _0x25258e(_0xf7b32b, _0x717bc1) {\n                                return _0xf7b32b * _0x717bc1;\n                            },\n                            'CMGam': function _0x5cb526(_0x32dc57, _0x589dad) {\n                                return _0x32dc57 + _0x589dad;\n                            },\n                            'hRgnV': function _0x30a4e5(_0x401fb4, _0x49024c) {\n                                return _0x401fb4 + _0x49024c;\n                            },\n                            'QNctg': _0x15f5('0xd', 'KvKZ')\n                        };\n                        var _0x583897,\n                            _0x3a66ce = new RegExp(_0x2061a3[_0x15f5('0xe', 'Ox#l')](_0x2061a3[_0x15f5('0xf', 'v78#')](_0x2061a3[_0x15f5('0x10', '7jQL')], _0x30755b), _0x2061a3[_0x15f5('0x11', '6O7p')]));\n                        if (_0x583897 = document[_0x15f5('0x12', 'KvKZ')][_0x15f5('0x13', 'Z@&Q')](_0x3a66ce)) {\n                            return _0x2061a3[_0x15f5('0x14', 'g#CQ')](unescape, _0x583897[0x2]);\n                        } else {\n                            if (_0x2061a3['tNCZl'] !== _0x2061a3[_0x15f5('0x15', '6O7p')]) {\n                                var _0x2856c4 = 0x1e;\n                                var _0x412bd3 = new Date();\n                                _0x412bd3[_0x15f5('0x16', 'Z@&Q')](_0x2061a3[_0x15f5('0x17', '0USv')](_0x412bd3['getTime'](), _0x2061a3['BbKyG'](_0x2061a3[_0x15f5('0x18', 'x]l]')](_0x2856c4, 0x18) * 0x3c * 0x3c, 0x3e8)));\n                                var key = _0x2061a3[_0x15f5('0x19', 'Put*')](_0x2061a3['fPKPd'](_0x2061a3[_0x15f5('0x1a', 'MDzc')](_0x2061a3[_0x15f5('0x1b', '0USv')](_0x30755b + '=', _0x2061a3[_0x15f5('0x1c', 'd$Fs')](escape, value)), _0x2061a3[_0x15f5('0x1d', 's1ve')]), _0x412bd3['toGMTString']()), ';path=/')\n      
                          updateDoc(key)\n                            } else {\n                                return null;\n                            }\n                        }\n                    };\n                    continue;\n                case'3':\n                    f2('t2', new Date()[_0x15f5('0x1e', '9k4F')]());\n                    continue;\n                case'4':\n                    f2('k2', (t1 * (t1 % 0x1000) + 0x99d6) * (t1 % 0x1000) + t1);\n                    continue;\n            }\n            break;\n        }\n    }\n    ;\n    if (!(typeof encode_version !== 'undefined' && encode_version === _0x15f5('0x1f', 'wZ(I'))) {\n        window[_0x15f5('0x20', 'KbZ5')](_0x15f5('0x21', 'YAu4'));\n    }\n    ;encode_version = 'sojson.v5';\n}\n\n\nfunction __getplay_pck2() {\n    ;var encode_version = 'sojson.v5', woaew = '__0x6d4a2',\n        __0x6d4a2 = ['w4TCkxtLwofCuBE=', 'YsKYwok/w5M=', 'DWwZJDPDksOi', 'wocjwrkSXQ==', 'XG5tw6Y2', 'OMOpSErDhw==', 'AA7DksO/w4gM', 'w5prw6vCrFI=', 'w7U3L8K1bQ==', 'Z8K5wrJIwrE=', 'L8OKZcKaGcOoTcOUwqIFYw==', 'YCPDs1bDrQPDvg==', 'dcOrVsOlwoA=', 'OcORb2nDtg==', 'FcKQdxtY', 'dsKSQz8V', 'McKZVzd2Xg==', 'VyEpUy4=', 'ASUlQC97HGdz', 'wqzDryzCjMKSWAE='];\n    (function (_0x57c88f, _0x2383d8) {\n        var _0x4b2391 = function (_0x58c926) {\n            while (--_0x58c926) {\n                _0x57c88f['push'](_0x57c88f['shift']());\n            }\n        };\n        _0x4b2391(++_0x2383d8);\n    }(__0x6d4a2, 0xad));\n    var _0x1691 = function (_0x3c08d1, _0xc096f) {\n        _0x3c08d1 = _0x3c08d1 - 0x0;\n        var _0x2babb8 = __0x6d4a2[_0x3c08d1];\n        if (_0x1691['initialized'] === undefined) {\n            (function () {\n                var _0x2f1e69 = typeof window !== 'undefined' ? window : typeof process === 'object' && typeof require === 'function' && typeof global === 'object' ? 
global : this;\n                var _0x4f603c = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=';\n                _0x2f1e69['atob'] || (_0x2f1e69['atob'] = function (_0x2c68bb) {\n                    var _0x492998 = String(_0x2c68bb)['replace'](/=+$/, '');\n                    for (var _0x5ee61a = 0x0, _0x2ac634, _0x1d1013, _0x6f4d80 = 0x0, _0x4a006d = ''; _0x1d1013 = _0x492998['charAt'](_0x6f4d80++); ~_0x1d1013 && (_0x2ac634 = _0x5ee61a % 0x4 ? _0x2ac634 * 0x40 + _0x1d1013 : _0x1d1013, _0x5ee61a++ % 0x4) ? _0x4a006d += String['fromCharCode'](0xff & _0x2ac634 >> (-0x2 * _0x5ee61a & 0x6)) : 0x0) {\n                        _0x1d1013 = _0x4f603c['indexOf'](_0x1d1013);\n                    }\n                    return _0x4a006d;\n                });\n            }());\n            var _0xa0b1f0 = function (_0x2fa32b, _0x4608dc) {\n                var _0x4f2019 = [], _0x4a28e8 = 0x0, _0x19767d, _0x4cf800 = '', _0x4bb512 = '';\n                _0x2fa32b = atob(_0x2fa32b);\n                for (var _0x36c759 = 0x0, _0x20d6ad = _0x2fa32b['length']; _0x36c759 < _0x20d6ad; _0x36c759++) {\n                    _0x4bb512 += '%' + ('00' + _0x2fa32b['charCodeAt'](_0x36c759)['toString'](0x10))['slice'](-0x2);\n                }\n                _0x2fa32b = decodeURIComponent(_0x4bb512);\n                for (var _0x3ac32b = 0x0; _0x3ac32b < 0x100; _0x3ac32b++) {\n                    _0x4f2019[_0x3ac32b] = _0x3ac32b;\n                }\n                for (_0x3ac32b = 0x0; _0x3ac32b < 0x100; _0x3ac32b++) {\n                    _0x4a28e8 = (_0x4a28e8 + _0x4f2019[_0x3ac32b] + _0x4608dc['charCodeAt'](_0x3ac32b % _0x4608dc['length'])) % 0x100;\n                    _0x19767d = _0x4f2019[_0x3ac32b];\n                    _0x4f2019[_0x3ac32b] = _0x4f2019[_0x4a28e8];\n                    _0x4f2019[_0x4a28e8] = _0x19767d;\n                }\n                _0x3ac32b = 0x0;\n                _0x4a28e8 = 0x0;\n                for (var _0x3b73f2 = 0x0; _0x3b73f2 < 
_0x2fa32b['length']; _0x3b73f2++) {\n                    _0x3ac32b = (_0x3ac32b + 0x1) % 0x100;\n                    _0x4a28e8 = (_0x4a28e8 + _0x4f2019[_0x3ac32b]) % 0x100;\n                    _0x19767d = _0x4f2019[_0x3ac32b];\n                    _0x4f2019[_0x3ac32b] = _0x4f2019[_0x4a28e8];\n                    _0x4f2019[_0x4a28e8] = _0x19767d;\n                    _0x4cf800 += String['fromCharCode'](_0x2fa32b['charCodeAt'](_0x3b73f2) ^ _0x4f2019[(_0x4f2019[_0x3ac32b] + _0x4f2019[_0x4a28e8]) % 0x100]);\n                }\n                return _0x4cf800;\n            };\n            _0x1691['rc4'] = _0xa0b1f0;\n            _0x1691['data'] = {};\n            _0x1691['initialized'] = !![];\n        }\n        var _0x4cce77 = _0x1691['data'][_0x3c08d1];\n        if (_0x4cce77 === undefined) {\n            if (_0x1691['once'] === undefined) {\n                _0x1691['once'] = !![];\n            }\n            _0x2babb8 = _0x1691['rc4'](_0x2babb8, _0xc096f);\n            _0x1691['data'][_0x3c08d1] = _0x2babb8;\n        } else {\n            _0x2babb8 = _0x4cce77;\n        }\n        return _0x2babb8;\n    };\n    if (!![]) {\n        f = function (_0x1d75de) {\n            var _0x37083b = {\n                'QPnEZ': function _0x60d408(_0x47b907, _0x1e139b) {\n                    return _0x47b907 + _0x1e139b;\n                }, 'GfOGG': function _0x3d3c72(_0x1f55be, _0x4a6029) {\n                    return _0x1f55be + _0x4a6029;\n                }, 'HMzQD': '=([^;]*)(;|$)'\n            };\n            var _0x4d0811,\n                _0x524d79 = new RegExp(_0x37083b[_0x1691('0x0', 'H$R$')](_0x37083b[_0x1691('0x1', '@5Y)')]('(^|\\x20)', _0x1d75de), _0x37083b[_0x1691('0x2', '&6Xe')]));\n            if (_0x4d0811 = document[_0x1691('0x3', '@5Y)')][_0x1691('0x4', 'wcel')](_0x524d79)) {\n                return unescape(_0x4d0811[0x2]);\n            } else {\n                return null;\n            }\n        };\n        f2 = function (_0x5059ad, _0x4d7bb0) {\n           
 var _0x372740 = {\n                'wGmSQ': function _0x495870(_0x1e22e5, _0x5a96b1) {\n                    return _0x1e22e5 + _0x5a96b1;\n                }, 'zPYil': function _0x53f643(_0x30ccee, _0x194f17) {\n                    return _0x30ccee * _0x194f17;\n                }, 'PhIfk': function _0x5a75c7(_0x5ebe8a, _0x59b8e9) {\n                    return _0x5ebe8a * _0x59b8e9;\n                }, 'HidQG': function _0x579a67(_0x374d40, _0x1e0498) {\n                    return _0x374d40 + _0x1e0498;\n                }, 'bUfLy': function _0xd9d4c3(_0x490eda, _0xb0910e) {\n                    return _0x490eda(_0xb0910e);\n                }, 'DYZHd': _0x1691('0x5', 'wcel'), 'cDGyM': _0x1691('0x6', 'mI%7')\n            };\n            var _0x2d5246 = 0x1e;\n            var _0x11d22b = new Date();\n            _0x11d22b[_0x1691('0x7', 'V55E')](_0x372740[_0x1691('0x8', 'cvmk')](_0x11d22b[_0x1691('0x9', '2v0z')](), _0x372740[_0x1691('0xa', ']ZR@')](_0x372740[_0x1691('0xb', 'hPNq')](_0x372740[_0x1691('0xc', 'H$R$')](_0x372740['PhIfk'](_0x2d5246, 0x18), 0x3c), 0x3c), 0x3e8)));\n            var key = _0x372740['HidQG'](_0x372740[_0x1691('0xe', ']o&s')](_0x372740[_0x1691('0xf', 'd%V$')](_0x5059ad, '='), _0x372740['bUfLy'](escape, _0x4d7bb0)), _0x372740[_0x1691('0x10', 'nG4r')]) + _0x11d22b[_0x1691('0x11', 'U8Zj')]() + _0x372740['cDGyM']\n            updateDoc(key)\n            // document[_0x1691('0xd', 'h%Wr')] = _0x372740['HidQG'](_0x372740[_0x1691('0xe', ']o&s')](_0x372740[_0x1691('0xf', 'd%V$')](_0x5059ad, '='), _0x372740['bUfLy'](escape, _0x4d7bb0)), _0x372740[_0x1691('0x10', 'nG4r')]) + _0x11d22b[_0x1691('0x11', 'U8Zj')]() + _0x372740['cDGyM'];\n        };\n        try {\n            ksub = f('k2')['slice'](-0x1);\n            while (!![]) {\n                t2 = new Date()['getTime']();\n                if (t2['toString']()['slice'](-0x3)[_0x1691('0x12', '9f@X')](ksub) >= 0x0) {\n                    f2('t2', t2);\n                    break;\n                }\n      
      }\n        } catch (_0x5e3bb4) {\n        }\n    }\n    ;\n    if (!(typeof encode_version !== 'undefined' && encode_version === 'sojson.v5')) {\n        window[_0x1691('0x13', 'EPWy')]('不能删除sojson.v5');\n    }\n    ;encode_version = 'sojson.v5';\n}\n\nlet document = {data: {}}\n\nfunction updateDoc(cookie) {\n    cookie = cookie.split(';')[0]\n    let a = cookie.split(\"=\")\n    document.data[a[0]] = a[1]\n    let tmp = []\n    for (const key in document.data) {\n        tmp.push(`${key}=${document.data[key]}`)\n    }\n    document.cookie = tmp.join('; ')\n}\n\nfunction get_t2_k2(t1, k1) {\n    updateDoc(`t1=${t1}`)\n    updateDoc(`k1=${k1}`)\n    __getplay_pck();\n    __getplay_pck2();\n    return {t2: document.data.t2, k2: document.data.k2}\n}\n\n// console.logger(get_data(1660410066753, 54244870492));\n\n"
  },
  {
    "path": "bilix/sites/yinghuacd/__init__.py",
    "content": "from .downloader import DownloaderYinghuacd\n\n__all__ = ['DownloaderYinghuacd']\n"
  },
  {
    "path": "bilix/sites/yinghuacd/api.py",
    "content": "import re\nfrom pydantic import BaseModel\nfrom typing import Union, List\nimport httpx\nfrom bs4 import BeautifulSoup\nfrom bilix.download.utils import req_retry, raise_api_error\n\nBASE_URL = \"http://www.yinghuacd.com\"\ndft_client_settings = {\n    'headers': {'user-agent': 'PostmanRuntime/7.29.0'},\n    'http2': False\n}\n\n\nclass VideoInfo(BaseModel):\n    title: str\n    sub_title: str\n    play_info: List[Union[List[str], List]]  # may be empty\n    m3u8_url: str\n\n\n@raise_api_error\nasync def get_video_info(client: httpx.AsyncClient, url: str) -> VideoInfo:\n    # request\n    res = await req_retry(client, url)\n    m3u8_url = re.search(r'http.*m3u8', res.text)[0]\n    soup = BeautifulSoup(res.text, 'html.parser')\n    h1 = soup.find('h1')\n    title, sub_title = h1.a.text, h1.span.text[1:]\n\n    # extract\n    play_info = [[a.text, f\"{BASE_URL}{a['href']}\"] for a in soup.find('div', class_=\"movurls\").find_all('a')]\n    video_info = VideoInfo(title=title, sub_title=sub_title, play_info=play_info, m3u8_url=m3u8_url)\n    return video_info\n"
  },
  {
    "path": "bilix/sites/yinghuacd/api_test.py",
    "content": "import httpx\nimport pytest\nfrom bilix.sites.yinghuacd import api\n\nclient = httpx.AsyncClient(**api.dft_client_settings)\n\n\n@pytest.mark.asyncio\nasync def test_get_video_info():\n    data = await api.get_video_info(client, \"http://www.yinghuacd.com/v/5606-7.html\")\n    assert data.title and data.m3u8_url\n"
  },
  {
    "path": "bilix/sites/yinghuacd/downloader.py",
    "content": "import asyncio\nfrom pathlib import Path\nimport httpx\nimport re\nfrom m3u8 import Segment\nfrom typing import Sequence, Union, Tuple\nfrom . import api\nfrom bilix.utils import legal_title, cors_slice\nfrom bilix.download.base_downloader_m3u8 import BaseDownloaderM3u8\nfrom bilix.exception import APIError\n\n\nclass DownloaderYinghuacd(BaseDownloaderM3u8):\n    def __init__(\n            self,\n            *,\n            stream_client: httpx.AsyncClient = None,\n            api_client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            part_concurrency: int = 10,\n            video_concurrency: Union[int, asyncio.Semaphore] = 3,\n            hierarchy: bool = True,\n    ):\n        stream_client = stream_client or httpx.AsyncClient()\n        super(DownloaderYinghuacd, self).__init__(\n            client=stream_client,\n            browser=browser,\n            speed_limit=speed_limit,\n            stream_retry=stream_retry,\n            progress=progress,\n            logger=logger,\n            part_concurrency=part_concurrency,\n            video_concurrency=video_concurrency,\n        )\n        self.api_client = api_client or httpx.AsyncClient(**api.dft_client_settings)\n        self.hierarchy = hierarchy\n\n    def _after_seg(self, seg: Segment, content: bytearray) -> bytearray:\n        # in case .png\n        if re.fullmatch(r'.*\\.png', seg.absolute_uri):\n            _, _, content = content.partition(b'\\x47\\x40')\n        return content\n\n    async def get_series(self, url: str, path=Path(\".\"), p_range: Sequence[int] = None):\n        \"\"\"\n        :cli: short: s\n        :param url:\n        :param path:\n        :param p_range:\n        :return:\n        \"\"\"\n        video_info = await api.get_video_info(self.api_client, url)\n        if self.hierarchy:\n          
  path /= video_info.title\n            path.mkdir(parents=True, exist_ok=True)\n        cors = [self.get_video(u, path=path, video_info=video_info if u == url else None)\n                for _, u in video_info.play_info]\n        if p_range:\n            cors = cors_slice(cors, p_range)\n        await asyncio.gather(*cors)\n\n    async def get_video(self, url: str, path=Path('.'), time_range=None, video_info=None):\n        \"\"\"\n        :cli: short: v\n        :param url:\n        :param path:\n        :param time_range:\n        :param video_info:\n        :return:\n        \"\"\"\n        if video_info is None:\n            try:\n                video_info = await api.get_video_info(self.api_client, url)\n            except APIError as e:\n                return self.logger.error(e)\n        name = legal_title(video_info.title, video_info.sub_title)\n        await self.get_m3u8_video(m3u8_url=video_info.m3u8_url, path=path / f'{name}.mp4', time_range=time_range)\n\n    @classmethod\n    def _decide_handle(cls, method: str, keys: Tuple[str, ...], options: dict):\n        return 'yinghuacd' in keys[0]\n"
  },
  {
    "path": "bilix/sites/youtube/__init__.py",
    "content": "from .downloader import DownloaderYoutube\n\n__all__ = ['DownloaderYoutube']\n"
  },
  {
    "path": "bilix/sites/youtube/api.py",
    "content": "import re\nimport json\nfrom pydantic import BaseModel\nimport httpx\nfrom bilix.download.utils import req_retry\nfrom bilix.utils import legal_title\n\ndft_client_settings = {\n    'headers': {\n        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '\n                      'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.84 Safari/537.36',\n        'referer': 'https://www.youtube.com/'\n    },\n}\n\n\nclass VideoInfo(BaseModel):\n    # url: str\n    title: str\n    video_url: str\n    audio_url: str\n    # img_url: str\n\n\nasync def get_video_info(client: httpx.AsyncClient, url: str) -> VideoInfo:\n    response = await req_retry(client=client, url_or_urls=url)\n    # parse the player response json embedded in the page\n    json_str = re.findall('var ytInitialPlayerResponse = (.*?);var', response.text)[0]\n    json_data = json.loads(json_str)\n    # pick streams by fixed positions in adaptiveFormats\n    video_url = json_data['streamingData']['adaptiveFormats'][0]['url']\n    audio_url = json_data['streamingData']['adaptiveFormats'][-2]['url']\n    title = legal_title(json_data['videoDetails']['title'])\n    video_info = VideoInfo(video_url=video_url, audio_url=audio_url, title=title)\n    return video_info\n"
  },
  {
    "path": "bilix/sites/youtube/api_test.py",
    "content": "import httpx\nimport pytest\nfrom bilix.sites.youtube import api\n\nclient = httpx.AsyncClient(**api.dft_client_settings)\n\n\n@pytest.mark.asyncio\nasync def test_get_video_info():\n    data = await api.get_video_info(client, \"https://www.youtube.com/watch?v=26lanyBFXw8\")\n    assert data.video_url and data.audio_url and data.title\n"
  },
  {
    "path": "bilix/sites/youtube/downloader.py",
    "content": "import re\nimport asyncio\nfrom pathlib import Path\nfrom typing import Union\nimport httpx\nfrom . import api\nfrom bilix.download.base_downloader_part import BaseDownloaderPart\nfrom bilix import ffmpeg\n\n\nclass DownloaderYoutube(BaseDownloaderPart):\n    pattern = re.compile(r\"^https?://([A-Za-z0-9-]+\\.)*(youtube\\.com)\")\n\n    def __init__(\n            self,\n            *,\n            client: httpx.AsyncClient = None,\n            browser: str = None,\n            speed_limit: Union[float, int] = None,\n            stream_retry: int = 5,\n            progress=None,\n            logger=None,\n            part_concurrency: int = 10,\n            # unique params\n            video_concurrency: Union[int, asyncio.Semaphore] = 3\n    ):\n        client = client or httpx.AsyncClient(**api.dft_client_settings)\n        super(DownloaderYoutube, self).__init__(\n            client=client,\n            browser=browser,\n            speed_limit=speed_limit,\n            stream_retry=stream_retry,\n            progress=progress,\n            logger=logger,\n            part_concurrency=part_concurrency\n        )\n        self.video_sema = asyncio.Semaphore(video_concurrency) if type(video_concurrency) is int else video_concurrency\n\n    async def get_video(self, url: str, path=Path('.')):\n        \"\"\"\n        :cli: short: v\n        :param url\n        :param path:\n        :return:\n        \"\"\"\n        async with self.video_sema:\n            video_info = await api.get_video_info(self.client, url)\n            video_path = path / (video_info.title + '.mp4')\n            if video_path.exists():\n                return self.logger.info(f'[green]已存在[/green] {video_path.name}')\n            task_id = await self.progress.add_task(description=video_info.title, upper=True)\n            path_lst = await asyncio.gather(\n                self.get_file(url_or_urls=video_info.video_url, path=path / (video_info.title + '-v'), task_id=task_id),\n      
          self.get_file(url_or_urls=video_info.audio_url, path=path / (video_info.title + '-a'), task_id=task_id)\n            )\n        await ffmpeg.combine(path_lst, output_path=path / (video_info.title + '.mp4'))\n        self.logger.info(f'[cyan]已完成[/cyan] {video_path.name}')\n        await self.progress.update(task_id=task_id, visible=False)\n"
  },
  {
    "path": "bilix/utils.py",
    "content": "\"\"\"\nsome useful functions\n\"\"\"\nimport html\nimport json\nimport re\nimport time\nfrom functools import wraps\nfrom urllib.parse import quote_plus\nfrom typing import Union, Sequence, Coroutine, List, Tuple, Optional\nfrom bilix.log import logger\n\n\ndef cors_slice(cors: Sequence[Coroutine], p_range: Sequence[int]):\n    h, t = p_range[0] - 1, p_range[1]\n    assert 0 <= h <= t\n    [cor.close() for idx, cor in enumerate(cors) if idx < h or idx >= t]  # avoid runtime warning\n    cors = cors[h:t]\n    return cors\n\n\ndef legal_title(*parts: str, join_str: str = '-'):\n    \"\"\"\n    join several string parts into an OS-legal file/dir name (no illegal characters, not too long).\n    empty parts are skipped automatically.\n\n    :param parts:\n    :param join_str: the string to join each part\n    :return:\n    \"\"\"\n    return join_str.join(filter(lambda x: len(x) > 0, map(replace_illegal, parts)))\n\n\ndef replace_illegal(s: str):\n    \"\"\"strip, unescape html and replace os illegal characters in s\"\"\"\n    s = s.strip()\n    s = html.unescape(s)  # handle html escapes like & \"...\n    s = re.sub(r\"[/\\\\:*?\\\"<>|\\n\\t]\", '', s)  # replace illegal filename characters\n    return s\n\n\ndef convert_size(total_bytes: int) -> str:\n    unit, suffix = pick_unit_and_suffix(\n        total_bytes, [\"bytes\", \"kB\", \"MB\", \"GB\", \"TB\", \"PB\", \"EB\", \"ZB\", \"YB\"], 1000\n    )\n    return f\"{total_bytes / unit:,.2f}{suffix}\"\n\n\ndef pick_unit_and_suffix(size: int, suffixes: List[str], base: int) -> Tuple[int, str]:\n    \"\"\"Borrowed from rich.filesize. 
Pick a suffix and base for the given size.\"\"\"\n    for i, suffix in enumerate(suffixes):\n        unit = base ** i\n        if size < unit * base:\n            break\n    else:\n        raise ValueError('Invalid input')\n    return unit, suffix\n\n\ndef parse_bytes_str(s: str) -> float:\n    \"\"\"Parse a string byte quantity (e.g. '1.5MB') into a number of bytes\"\"\"\n    units_map = {unit: i for i, unit in enumerate(['', *'KMGTPEZY'])}\n    units_re = '|'.join(units_map.keys())\n    m = re.fullmatch(rf'(?P<num>\\d+(?:\\.\\d+)?)\\s*(?P<unit>{units_re})B?', s)\n    if not m:\n        raise ValueError(f\"Invalid bytes str {s} to parse to number\")\n    num = float(m.group('num'))\n    mult = 1000 ** units_map[m.group('unit')]\n    return num * mult\n\n\ndef valid_sess_data(sess_data: Optional[str]) -> str:\n    \"\"\"check and encode sess_data\"\"\"\n    # url-encoding sess_data if it's not encoded\n    # https://github.com/HFrost0/bilix/pull/114\n    if sess_data and not re.search(r'(%[0-9A-Fa-f]{2})|(\\+)', sess_data):\n        sess_data = quote_plus(sess_data)\n        logger.debug(f\"sess_data encoded: {sess_data}\")\n    return sess_data\n\n\ndef t2s(t: int) -> str:\n    return str(t)\n\n\ndef s2t(s: str) -> int:\n    \"\"\"\n    :param s: hour:minute:second or xx(s) format input\n    :return:\n    \"\"\"\n    if ':' not in s:\n        return int(s)\n    h, m, s = map(int, s.split(':'))\n    return h * 60 * 60 + m * 60 + s\n\n\ndef json2srt(data: Union[bytes, str, dict]):\n    b = False\n    if type(data) is bytes:\n        data = data.decode('utf-8')\n        b = True\n    if type(data) is str:\n        data = json.loads(data)\n\n    def t2str(t):\n        ms = int(round(t % 1, 3) * 1000)\n        s = int(t)\n        m = s // 60\n        h = m // 60\n        m, s = m % 60, s % 60\n        t_str = f'{h:0>2}:{m:0>2}:{s:0>2},{ms:0>3}'\n        return t_str\n\n    res = ''\n    for idx, i in enumerate(data['body']):\n        from_time, 
to_time = t2str(i['from']), t2str(i['to'])\n        content = i['content']\n        res += f\"{idx + 1}\\n{from_time} --> {to_time}\\n{content}\\n\\n\"\n    return res.encode('utf-8') if b else res\n\n\ndef timer(func):\n    @wraps(func)\n    def wrapper(*args, **kwargs):\n        start = time.monotonic_ns()\n        res = func(*args, **kwargs)\n        logger.debug(\n            f\"{func.__name__} cost {time.monotonic_ns() - start} ns with args: {args}, kwargs: {kwargs} result: {res}\")\n        return res\n\n    return wrapper\n"
  },
  {
    "path": "docs/.vitepress/config.ts",
    "content": "import {defineConfig} from 'vitepress'\n\n// https://vitepress.dev/reference/site-config\nexport default defineConfig({\n  title: \"bilix\",\n  description: \"bilix download\",\n  base: '/bilix/',\n  lastUpdated: true,\n  themeConfig: {\n    // https://vitepress.dev/reference/default-theme-config\n    editLink: {\n      pattern: 'https://github.com/HFrost0/bilix/edit/master/docs/:path'\n    },\n    algolia: {\n      appId: 'F4ZDY9KUXU',\n      apiKey: '30aaace8ddea0d6f25ac39ea70ce8bd8',\n      indexName: 'bilix'\n    },\n    footer: {\n      message: 'Released under the Apache 2.0 License.',\n      copyright: 'Copyright © 2022-present HFrost0'\n    },\n    socialLinks: [\n      {icon: 'github', link: 'https://github.com/HFrost0/bilix'}\n    ]\n  },\n\n  locales: {\n    root: {\n      label: '中文',\n      lang: 'zh',\n      themeConfig: {\n        nav: [\n          {text: 'Home', link: '/'},\n          {text: '安装', link: '/install'},\n          {text: '快速上手', link: '/quickstart'}\n        ],\n        sidebar: [\n          {text: '安装', link: '/install'},\n          {text: '快速上手', link: '/quickstart'},\n          {text: '进阶使用', link: '/advance_guide'},\n          {\n            text: 'Python调用',\n            items: [\n              {text: '异步基础', link: '/async'},\n              {text: '下载案例', link: '/download_examples'},\n              {text: 'API案例', link: '/api_examples'}\n            ]\n          },\n          {text: '更多', link: '/more'},\n        ],\n      }\n    },\n\n    en: {\n      label: 'English',\n      lang: 'en', // optional, will be added  as `lang` attribute on `html` tag\n      themeConfig: {\n        nav: [\n          {text: 'Home', link: '/en/'},\n          {text: 'Install', link: '/en/install'},\n          {text: 'Quickstart', link: '/en/quickstart'}\n        ],\n        sidebar: [\n          {text: 'Install', link: '/en/install'},\n          {text: 'Quickstart', link: '/en/quickstart'},\n          {text: 'Advance Guide', link: 
'/en/advance_guide'},\n          {\n            text: 'Python API',\n            items: [\n              {text: 'Async basic', link: '/en/async'},\n              {text: 'Download Examples', link: '/en/download_examples'},\n              {text: 'API Examples', link: '/en/api_examples'}\n            ]\n          },\n          {text: 'More', link: '/en/more'},\n        ],\n      },\n    }\n  },\n})\n"
  },
  {
    "path": "docs/.vitepress/theme/index.ts",
    "content": "import Theme from 'vitepress/theme'\nimport './style/var.css'\n\nexport default {\n  extends: Theme,\n}\n"
  },
  {
    "path": "docs/.vitepress/theme/style/var.css",
    "content": ":root {\n  --vp-home-hero-name-color: transparent;\n  --vp-home-hero-name-background: linear-gradient(135deg, #79F1A4 10%, #0E5CAD 100%);\n}\n"
  },
  {
    "path": "docs/advance_guide.md",
    "content": "# 进阶使用\n请使用`bilix -h`查看更多参数提示，包括方法名简写，视频画面质量选择，并发量控制，下载速度限制，下载目录等。\n\n## 方法名简写\n\n觉得`get_series`，`get_video`这些方法名写起来太麻烦了？同感！你可以使用他们的简写，这样快多了：\n\n```shell\nbilix s 'url'\nbilix v 'url'\n...\n```\n更多简写请查看`bilix -h`\n\n## 登录\n\n你是大会员？🥸，两种方式登录\n\n* 直接填写cookie\n\n  在`--cookie`参数中填写浏览器缓存的`SESSDATA`cookie，填写后可以下载需要大会员的视频\n\n* 从浏览器载入cookie\n\n  在浏览器中登录之后，使用`-fb --from-browser`参数从浏览器中读取cookie，例如`-fb chrome`，使用这种方法可能需要授权，bilix读取浏览器cookie的\n  方式为开源项目[browser_cookie3](https://github.com/borisbabic/browser_cookie3)。\n:::tip\n如果你总是需要保持登录，在linux和mac系统中你可以使用`alias bilix=bilix --cookie xxxxxx`或`alias bilix=bilix -fb chrome`来为`bilix`命令创建别名\n:::\n\n## 画质，音质和编码选择\n\n你可以使用`--quality`即`-q`参数选择画面分辨率，bilix支持两种不同的选择方式：\n\n* 相对选择（默认）\n\n  bilix在默认情况下会为你选择可选的最高画质进行下载（即`-q 0`），如果你想下载第二清晰的可使用`-q 1`进行指定，以此类推，指定序号越大画质越低，\n  当超过可选择范围时，默认选择到最低画质，例如你总是可以通过`-q 999`来选择到最低画质。\n* 绝对选择\n\n  在某些时候，你只希望下载720P的视频，但是720P在相对选择中并不总是处于固定的位置，这在下载收藏夹，合集等等场景中经常出现。\n  另外有可能你就是喜欢通过`-q 1080P`这样的方式来指定画质。\n  没问题，bilix同时也支持通过`-q 4K` `-q '1080P 高码率'`等字符串的形式来直接指定画质，字符串为b站显示的画质名称的子串即可。\n\n在更加专业用户的需求中，可能需要指定特定的视频编码进行下载，而b站支持的编码在网页或app中是不可见的，bilix为此设计了方法`info`\n， 通过它你可以完全了解该视频的所有信息：\n\n```text\nbilix info 'https://www.bilibili.com/video/BV1kG411t72J' --cookie 'xxxxx' \n                        \n 【4K·HDR·Hi-Res】群青 - YOASOBI  33,899👀 1,098👍 201🪙\n┣━━ 画面 Video\n┃   ┣━━ HDR 真彩\n┃   ┃   ┗━━ codec: hev1.2.4.L153.90                 total: 149.86MB\n┃   ┣━━ 4K 超清\n┃   ┃   ┣━━ codec: avc1.640034                      total: 320.78MB\n┃   ┃   ┗━━ codec: hev1.1.6.L153.90                 total: 106.54MB\n┃   ┣━━ 1080P 60帧\n┃   ┃   ┣━━ codec: avc1.640032                      total: 171.91MB\n┃   ┃   ┗━━ codec: hev1.1.6.L150.90                 total: 24.66MB\n┃   ┣━━ 1080P 高清\n┃   ┃   ┣━━ codec: avc1.640032                      total: 86.01MB\n┃   ┃   ┗━━ codec: hev1.1.6.L150.90                 total: 24.18MB\n┃   ┣━━ 720P 高清\n┃   ┃   ┣━━ codec: avc1.640028                      total: 57.39MB\n┃   ┃   ┗━━ codec: 
hev1.1.6.L120.90                 total: 11.53MB\n┃   ┣━━ 480P 清晰\n┃   ┃   ┣━━ codec: avc1.64001F                      total: 25.87MB\n┃   ┃   ┗━━ codec: hev1.1.6.L120.90                 total: 7.61MB\n┃   ┗━━ 360P 流畅\n┃       ┣━━ codec: hev1.1.6.L120.90                 total: 5.24MB\n┃       ┗━━ codec: avc1.64001E                      total: 11.59MB\n┗━━ 声音 Audio\n    ┣━━ 默认音质\n    ┃   ┗━━ codec: mp4a.40.2                        total: 10.78MB\n    ┗━━ Hi-Res无损\n        ┗━━ codec: fLaC                             total: 94.55MB\n```\n\n看上去不错😇，那么我要怎么才能下到指定编码的视频呢？\n\nbilix提供了另一个参数`--codec`来指定编码格式，例如你可以通过组合`-q 480P --codec hev1.1.6.L120.90`来指定下载7.61MB的那个。\n`--codec`参数与`-q`参数类似，也支持子串指定，例如你可以通过`--codec hev`来使得所有视频都选择`hev`开头的编码。\n\n对于音质，部分视频会含有大会员专享的杜比全景声和Hi-Res无损音质，利用`--codec`参数可以指定这些音频，例如\n\n```shell\nbilix v 'https://www.bilibili.com/video/BV1kG411t72J' --cookie 'xxxxx' --codec hev:fLaC \n```\n\n`--codec hev:fLaC`中使用`:`将画质编码和音频编码隔开，如只指定音频编码，可使用`--codec :fLaC`\n\n## 关于断点重连\n\n用户可以通过Ctrl+C中断任务，对于未完成的文件，重新执行命令会在之前的进度基础上下载，已完成的文件会进行跳过。\n但是对于未完成的文件，以下情况建议清除未完成任务的临时文件再执行命令，否则可能残留部分临时文件。\n\n- 中断后改变画面质量`-q`或编码`--codec`\n- 中断后改变分段并发数`--part-con`\n- 中断后改变时间范围`--time-range`\n\n## 一次提供多个url\nbilix的所有方法都支持提供多个`url`\n```shell\nbilix v 'url1' 'url2' 'url3'\nbilix up 'up_url1' 'up_url2'\n```\n当你提供多个`url`时，并发控制当然也正常工作\n\n\n## 更多站点支持\nbilix除了b站以外也支持了一些别的站点，但作者精力有限，所以失效也不奇怪。具体可见[discussion](https://github.com/HFrost0/bilix/discussions/39)\n\n## 基本下载方法\n对于一些基本的下载场景\n* 你可以直接通过文件链接下载\n  ```shell\n  bilix f 'https://xxxx.com/xxxx.mp4'\n  ```\n* 你可以通过m3u8 url直接下载m3u8视频\n  ```shell\n  bilix m3u8 'https://xxxx.com/xxxx.m3u8'\n  ```\n\n## 代理\nbilix默认使用系统代理\n"
  },
  {
    "path": "docs/api_examples.md",
    "content": "# API案例\nbilix 提供了各个网站的api，如果你有需要当然可以使用，并且它们都是异步的\n```python\nimport asyncio\nfrom bilix.sites.bilibili import api\nfrom httpx import AsyncClient\n\n\nasync def main():\n    # 需要先实例化一个用来进行http请求的client\n    client = AsyncClient(**api.dft_client_settings)\n    data = await api.get_video_info(client, 'https://www.bilibili.com/bangumi/play/ep90849')\n    print(data)\n\n\nasyncio.run(main())\n\n```\n"
  },
  {
    "path": "docs/async.md",
    "content": "# 异步基础\n异步无疑是python中处理网络请求的最佳技术，因为它可以承载极高的并发量。\n在python中使用bilix之前，你需要先对python中的异步编程有一些了解。python官方使用[asyncio](https://docs.python.org/3/library/asyncio.html)\n提供异步I/O的支持。\n\n```python\nasync def hello():\n    print(\"hello world\")\n```\n\n对于一个async函数（`def`变为`async def`）来说调用不会直接执行函数，而是返回一个协程（coroutine）对象\n\n```python\nc = hello()\n>>> c\n<coroutine object hello at 0x100a92540>\n\n```\n\n我们可以将这个coroutine提交到asyncio的事件循环中执行它\n\n```python\nimport asyncio\n\n>>> asyncio.run(c)\n\"hello world\"\n```\n\nbilix的所有下载方法都是异步的，所以你也可以这样执行他们\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\n\nd = DownloaderBilibili()\nasyncio.run(d.get_video('url'))\n```\n"
  },
  {
    "path": "docs/download_examples.md",
    "content": "# 下载案例\n\n觉得命令行太麻烦，不够强大？bilix可做为python的库调用，并且接口设计易用，功能更强大，这给了你很大的扩展空间\n\n## 从最简单的开始\n\n```python\nimport asyncio\n# 导入下载器，里面有很多方法，例如get_series, get_video, get_favour，get_dm等等\nfrom bilix.sites.bilibili import DownloaderBilibili\n\n\nasync def main():\n    # 你可以使用async with上下文管理器来开启和关闭一个下载器\n    async with DownloaderBilibili() as d:\n        # 然后用await异步等待下载完成\n        await d.get_series(\"https://www.bilibili.com/video/BV1jK4y1N7ST?p=5\")\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n\n```\n\n## 组合多种任务 / 控制并发量\n\n你可以组合下载器返回的协程对象，利用gather并发执行他们，他们执行的并发度收到下载器对象的严格约束，因此不会对服务器造成意想不到的负担。\n\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\n\n\nasync def main():\n    d = DownloaderBilibili(video_concurrency=5, part_concurrency=10)\n    cor1 = d.get_series(\n        'https://www.bilibili.com/bangumi/play/ss28277'\n        , quality=999)\n    cor2 = d.get_up(url_or_mid='436482484', quality=999)\n    cor3 = d.get_video('https://www.bilibili.com/bangumi/play/ep477122', quality=999)\n    await asyncio.gather(cor1, cor2, cor3)\n    await d.aclose()\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n\n\n```\n\n## 下载切片\n\n你可以只下视频的一小段\n\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\n\n\nasync def main():\n    \"\"\"download the 《嘉然我真的好喜欢你啊😭😭😭.mp4》 by timerange🤣\"\"\"\n    async with DownloaderBilibili() as d:\n        # time_range (start_time, end_time)\n        await d.get_video('https://www.bilibili.com/video/BV1kK4y1A7tN', time_range=(0, 7))\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n\n```\n\n## 同时下载多个站点\n\n你可以同时初始化不同网站的下载器，并且利用他们方法返回的协程对象进行并发下载。各个下载器之间的并发控制是独立的，因此可以最大化利用自己的网络资源。\n\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\nfrom bilix.sites.cctv import DownloaderCctv\n\n\nasync def main():\n    async with DownloaderBilibili() as d_bl, DownloaderCctv() as d_tv:\n        await asyncio.gather(\n            
d_bl.get_video('https://www.bilibili.com/video/BV1cd4y1Z7EG', quality=999),\n            d_tv.get_video('https://tv.cctv.com/2012/05/02/VIDE1355968282695723.shtml', quality=999)\n        )\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n\n```\n\n## 限制下载速度\n\n限制下载速度很简单，下面的例子限制了b站点总下载速度在1MB/s以下\n\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\nfrom bilix.sites.cctv import DownloaderCctv\n\n\nasync def main():\n    async with DownloaderBilibili(speed_limit=1e6) as d:  # limit to 1MB/s\n        await d.get_series(\"https://www.bilibili.com/video/BV1jK4y1N7ST?p=5\")\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n\n```\n\n另外，多个下载器之间的速度设置也是独立的\n\n```python\nasync def main():\n    # 就像并发控制一样，每个downloader的速度设置也是独立的\n    async with DownloaderBilibili(speed_limit=1e6) as bili_d, DownloaderCctv(speed_limit=3e6) as cctv_d:\n        await asyncio.gather(\n            bili_d.get_series('https://www.bilibili.com/video/BV1cd4y1Z7EG'),\n            cctv_d.get_series('https://www.douyin.com/video/7132430286415252773')\n        )\n```\n\n## 显示进度条\n\n使用python模块时，进度条默认不显示，如需显示，可以\n\n```python\nfrom bilix.progress.cli_progress import CLIProgress\n\nCLIProgress.start()\n```\n\n或者通过任意下载器内部的`progress`对象打开\n\n```python\nd.progress.start()\n```\n"
  },
  {
    "path": "docs/en/advance_guide.md",
    "content": "# Advance Guide\nPlease use `bilix -h` for more help, including method short aliases, video quality selection, concurrency control,\ndownload speed control, download directory...\n\n## Method short alias\n\nMethod names like `get_series` and `get_video` are too cumbersome to write? Agreed! You can use their\nshort aliases for faster access:\n\n```shell\nbilix s 'url'\nbilix v 'url'\n...\n```\nPlease check `bilix -h` for all short aliases.\n\n## Login\n\nThere are two ways to log in:\n\n* cookie option\n\n  By adding the `SESSDATA` cookie from your browser's cache in the `--cookie` option, you can download videos that require a premium membership.\n\n* load cookies from browser\n\n  After logging in through the browser, use the `-fb --from-browser` option to read cookies from the browser,\n  such as `-fb chrome`. Using this method may require authorization. The method that `bilix` uses to read browser\n  cookies is the open-source project [browser_cookie3](https://github.com/borisbabic/browser_cookie3).\n\n:::tip\nIf you want to stay logged in, you can use `alias bilix=bilix --cookie xxxxxx` or `alias bilix=bilix -fb chrome`\nto create an alias for the `bilix` command\n:::\n\n## Video and audio quality, codec selection\n\nYou can use the `--quality` (`-q`) option to choose the video resolution. bilix supports two different selection modes:\n\n* relative selection (default)\n\n  By default, bilix will select the highest accessible quality for you (that is, `-q 0`); for the second highest, use `-q 1`, and so on: the larger the number, the lower the resolution.\n  When the number is out of range, the lowest quality is selected. 
For example, you can always select the lowest quality by `-q 999`.\n* absolute choose\n\n  You can use`-q 1080P` to specific a resolution, the string is a substring of the resolution name on bilibili.\n\nFor more advanced users who may need to specify a particular video codec for download, the encodings supported by Bilibili are not visible on the website or in the app. For this purpose, bilix has designed the `info` method. By using it, you can fully understand all the information about the video:\n\n```text\nbilix info 'https://www.bilibili.com/video/BV1kG411t72J' --cookie 'xxxxx' \n                        \n 【4K·HDR·Hi-Res】群青 - YOASOBI  33,899👀 1,098👍 201🪙\n┣━━ 画面 Video\n┃   ┣━━ HDR 真彩\n┃   ┃   ┗━━ codec: hev1.2.4.L153.90                 total: 149.86MB\n┃   ┣━━ 4K 超清\n┃   ┃   ┣━━ codec: avc1.640034                      total: 320.78MB\n┃   ┃   ┗━━ codec: hev1.1.6.L153.90                 total: 106.54MB\n┃   ┣━━ 1080P 60帧\n┃   ┃   ┣━━ codec: avc1.640032                      total: 171.91MB\n┃   ┃   ┗━━ codec: hev1.1.6.L150.90                 total: 24.66MB\n┃   ┣━━ 1080P 高清\n┃   ┃   ┣━━ codec: avc1.640032                      total: 86.01MB\n┃   ┃   ┗━━ codec: hev1.1.6.L150.90                 total: 24.18MB\n┃   ┣━━ 720P 高清\n┃   ┃   ┣━━ codec: avc1.640028                      total: 57.39MB\n┃   ┃   ┗━━ codec: hev1.1.6.L120.90                 total: 11.53MB\n┃   ┣━━ 480P 清晰\n┃   ┃   ┣━━ codec: avc1.64001F                      total: 25.87MB\n┃   ┃   ┗━━ codec: hev1.1.6.L120.90                 total: 7.61MB\n┃   ┗━━ 360P 流畅\n┃       ┣━━ codec: hev1.1.6.L120.90                 total: 5.24MB\n┃       ┗━━ codec: avc1.64001E                      total: 11.59MB\n┗━━ 声音 Audio\n    ┣━━ 默认音质\n    ┃   ┗━━ codec: mp4a.40.2                        total: 10.78MB\n    ┗━━ Hi-Res无损\n        ┗━━ codec: fLaC                             total: 94.55MB\n```\n\nlooks good😇，so how can I download the video with the specified codec?\n\nbilix provides another option `--codec`. 
For example, you can use a combination like `-q 480P --codec hev1.1.6.L120.90`\nto specify downloading the 7.61MB one. The `--codec` option is similar to the `-q` option which supports substring specification,\nfor example using `--codec hev` to make all videos choose codec that start with hev.\n\nFor audio quality, some videos may contain Dolby and Hi-Res audio. You can use the `--codec` option to specify these\naudio formats, for example:\n\n```shell\nbilix v 'https://www.bilibili.com/video/BV1kG411t72J' --cookie 'xxxxx' --codec hev:fLaC \n```\n\nin `--codec hev:fLaC`, use`:` to split video and audio codec, if you just want to specify audio codec，you can use`--codec :fLaC`\n\n## Resuming Interrupted Downloads\n\nUsers can interrupt tasks by pressing `Ctrl+C`. For unfinished files, re-executing the command will resume the download\nbased on the previous progress, and completed files will be skipped. However, for unfinished files, it is recommended\nto clear the temporary files of the unfinished tasks before executing the command again in the following situations,\notherwise some temporary files may remain:\n\n* Changing the video quality `-q` or `--codec` after interruption\n* Changing the `--part-con` after interruption\n* Changing the `--time-range` after interruption\n\n## Provide multiple urls at once\nAll methods of bilix support providing multiple `url`\n```shell\nbilix v 'url1' 'url2' 'url3'\nbilix up 'up_url1' 'up_url2'\n```\nConcurrency, speed control also works fine when you provide multiple `url` of course\n\n\n## Support for More Sites\n\nbilix also supports some other websites, but their availability may vary as the author is currently busy. 
\nFor further information, please refer to the following [discussion](https://github.com/HFrost0/bilix/discussions/39).\n\n## Basic Download method\nFor some basic download scenarios\n* You can directly download a file through the file url\n  ```shell\n  bilix f 'https://xxxx.com/xxxx.mp4'\n  ```\n* you can directly download m3u8 video by url\n  ```shell\n  bilix m3u8 'https:/xxxx.com/xxxx.m3u8'\n  ```\n  \n## Proxy\nbilix will use system proxy by default\n"
  },
  {
    "path": "docs/en/api_examples.md",
    "content": "# API Examples\nbilix provides the APIs of various websites, and they are all asynchronous\n```python\nimport asyncio\nfrom bilix.sites.bilibili import api\nfrom httpx import AsyncClient\n\n\nasync def main():\n    # instantiate a httpx client for making http requests\n    client = AsyncClient(**api.dft_client_settings)\n    data = await api.get_video_info(client, 'https://www.bilibili.com/bangumi/play/ep90849')\n    print(data)\n\n\nasyncio.run(main())\n\n```\n"
  },
  {
    "path": "docs/en/async.md",
    "content": "# Async basic\nAsynchronous programming in Python excels at handling network requests with high concurrency.\nBefore using bilix in Python, you need to have some understanding of asynchronous programming in Python.\nThe official Python [asyncio](https://docs.python.org/3/library/asyncio.html) library provides support for asynchronous I/O.\n\n```python\nasync def hello():\n    print(\"hello world\")\n```\n\nFor an async function (async def), calling it will not directly execute the function but instead return a coroutine object.\n```python\nc = hello()\n>>> c\n<coroutine object hello at 0x100a92540>\n\n```\n\nWe can submit the coroutine obj to asyncio's event loop to execute it\n\n```python\nimport asyncio\n\n>>> asyncio.run(c)\n\"hello world\"\n```\n\nAll download methods of bilix are asynchronous, so you can execute them like this\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\n\nd = DownloaderBilibili()\nasyncio.run(d.get_video('url'))\n```\n"
  },
  {
    "path": "docs/en/download_examples.md",
    "content": "# Download Examples\n\nCommand line is too cumbersome and not powerful enough for you? bilix can be used as a Python library\nwith user-friendly interfaces and enhanced functionality for greater flexibility.\n\n## Start with the simplest\n\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\n\n\nasync def main():\n    # you can use async with context manager to open and close a downloader\n    async with DownloaderBilibili() as d:\n        await d.get_series(\"https://www.bilibili.com/video/BV1jK4y1N7ST?p=5\")\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n\n```\n\n## Combine multiple tasks and control concurrency\n\nYou can combine the coroutine objects returned by the downloader and use gather to execute them concurrently.\nThe concurrency is strictly restricted by the downloader object, ensuring no unexpected burden on the server.\n\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\n\n\nasync def main():\n    d = DownloaderBilibili(video_concurrency=5, part_concurrency=10)\n    cor1 = d.get_series(\n        'https://www.bilibili.com/bangumi/play/ss28277'\n        , quality=999)\n    cor2 = d.get_up(url_or_mid='436482484', quality=999)\n    cor3 = d.get_video('https://www.bilibili.com/bangumi/play/ep477122', quality=999)\n    await asyncio.gather(cor1, cor2, cor3)\n    await d.aclose()\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n\n\n```\n\n## Download a clip\n\nYou can download just a clip of the video\n\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\n\n\nasync def main():\n    \"\"\"download the 《嘉然我真的好喜欢你啊😭😭😭.mp4》 by timerange🤣\"\"\"\n    async with DownloaderBilibili() as d:\n        # time_range (start_time, end_time)\n        await d.get_video('https://www.bilibili.com/video/BV1kK4y1A7tN', time_range=(0, 7))\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n\n```\n\n## Download from multiple sites simultaneously\n\nYou 
can initialize downloaders for different websites, and use the coroutine objects returned by their\nmethods for concurrent downloads. The concurrency control between different downloaders is independent, allowing you to\nmaximize the use of your network resources.\n\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\nfrom bilix.sites.cctv import DownloaderCctv\n\n\nasync def main():\n    async with DownloaderBilibili() as d_bl, DownloaderCctv() as d_tv:\n        await asyncio.gather(\n            d_bl.get_video('https://www.bilibili.com/video/BV1cd4y1Z7EG', quality=999),\n            d_tv.get_video('https://tv.cctv.com/2012/05/02/VIDE1355968282695723.shtml', quality=999)\n        )\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n\n```\n\n## Limit download speed\n\nLimiting the download speed is very simple.\nThe following example limits the total download speed below 1MB/s\n\n```python\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\nfrom bilix.sites.cctv import DownloaderCctv\n\n\nasync def main():\n    async with DownloaderBilibili(speed_limit=1e6) as d:  # limit to 1MB/s\n        await d.get_series(\"https://www.bilibili.com/video/BV1jK4y1N7ST?p=5\")\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n\n```\n\nIn addition, the speed settings between downloaders are also independent\n\n```python\nasync def main():\n    # 就像并发控制一样，每个downloader的速度设置也是独立的\n    async with DownloaderBilibili(speed_limit=1e6) as bili_d, DownloaderCctv(speed_limit=3e6) as cctv_d:\n        await asyncio.gather(\n            bili_d.get_series('https://www.bilibili.com/video/BV1cd4y1Z7EG'),\n            cctv_d.get_series('https://www.douyin.com/video/7132430286415252773')\n        )\n```\n\n## Show progress bar\n\nWhen using the python module, the progress bar is not displayed by default. 
If you want to display it, you can\n\n```python\nfrom bilix.progress.cli_progress import CLIProgress\n\nCLIProgress.start()\n```\n\nor open via the `progress` object inside any downloader\n\n```python\nd.progress.start()\n```\n\n\n"
  },
  {
    "path": "docs/en/index.md",
    "content": "---\n# https://vitepress.dev/reference/default-theme-home-page\nlayout: home\n\nhero:\n  name: \"bilix\"\n  tagline: Lightning-fast asynchronous download tool for bilibili and more\n  actions:\n    - theme: brand\n      text: Quickstart\n      link: /en/quickstart\n    - theme: alt\n      text: Python API\n      link: /en/async\n\nfeatures:\n  - icon: ⚡️\n    title: Fast & Async\n    details: Asynchronous high concurrency support, controllable concurrency and speed settings\n  - icon: 😉\n    title: Lightweight & User-friendly\n    details: Lightweight user-friendly CLI with progress notification, focusing on core functionality\n  - icon: 📝\n    title: Fully-featured\n    details: Submissions, anime, TV Series, video clip, audio, favourite, danmaku ,cover...\n  - icon: 🔨\n    title: Extensible\n    details: Extensible Python module suitable for more download scenarios\n---\n"
  },
  {
    "path": "docs/en/install.md",
    "content": "# Installation\nbilix is a powerful Python asynchronous video download tool that requires two steps to install:\n\n1. pip install（require python >= 3.8）\n   ```shell\n   pip install bilix\n   ```\n   If you are a macOS user, you can also use `brew` to install:\n   ```shell\n    brew install bilix\n    ```\n\n2. [FFmpeg](https://ffmpeg.org) ：A command-line video tool for compositing downloaded audio and video\n\n    * For macOS, it can be installed via `brew install ffmpeg`\n    * For Windows, please go to the official website https://ffmpeg.org/download.html#build-windows , you need to configure environment variables after installation\n\n   ::: info\n   Just make sure that you can call the `ffmpeg` command from the command line in the end.\n   :::\n"
  },
  {
    "path": "docs/en/more.md",
    "content": "# More\n\n## Community\n\nIf you find any bugs or other issues, feel free to raise an [Issue](https://github.com/HFrost0/bilix/issues).\n\nIf you have new ideas or new feature requests，welcome to participate in\nthe [Discussion](https://github.com/HFrost0/bilix/discussions)\n\nIf you find this project helpful, you can support the author by [Star](https://github.com/HFrost0/bilix/stargazers)🌟\n\n## Contribute\n\n❤️ Welcome~ Details can be found in [Contributing](https://github.com/HFrost0/bilix/blob/master/CONTRIBUTING_EN.md)\n\n## Known Bugs 🤡\n\nWhen two video names are exactly the same, task conflicts occur but no error is reported.\n"
  },
  {
    "path": "docs/en/quickstart.md",
    "content": "# Quickstart\n\nbilix offers a simple command line interface, so open the terminal and start downloading now!\n\n## Batch download\n\nBatch download entire anime series, TV shows, movies, and UP submissions... just replace the `url` in the\ncommand with the web link of any video in the series you want to download.\n\nHead over to bilibili and find one to try (like [this](https://www.bilibili.com/video/BV1JE411g7XF)),\n`bilix` will download the files to the `videos` folder in the current directory of the command line, which is automatically created by default.\n\n```shell\nbilix get_series 'url'\n```\n\n`get_series` is powerful, as it automatically recognizes and downloads all videos in a series.\n\n::: info\n* What is a series: For example, all parts of a multi-part submission, all episodes of an anime or TV show.\n* Some URLs containing parameters need to be wrapped in `''` when used in the terminal.\nThe Windows cmd does not support `''`, but you can use PowerShell or Windows Terminal as an alternative.\n:::\n\n## Single download\n\nUser😨：I don't want to download that many, just a single video. No problem, try this, just provide the web link of that video:\n\n```shell\nbilix get_video 'url'\n```\n:::info\nDo you know that? 
methods like `get_series` `get_video` all has a [short alias](/en/advance_guide)\n:::\n\n\n## Audio download\n\nAssuming you like the music and only want to download audio, then you can use the optional parameter `--only-audio`\n\n```shell\nbilix get_series 'url' --only-audio\n```\n\n## Clip download\n\nThe video, live record is too long, I need to download the clip I am interested in✂️, then you can use the\n`--time-range -tr` parameter to specify the time range\n\n```shell\nbilix get_vedio 'url' -tr 0:16:53-0:17:49\n```\n\nIn this example, a time range from 16 minutes 53 seconds to 17 minutes 49 seconds is specified.\nThe format can be `h:m:s-h:m:s`, or `s-s`\n\nthis option is only available in `get_video`, you can combine `-tr` with `--only-audio` to download audio clip\n\n## Uploader download\n\nIf you want to download the latest 100 submissions from an uploader\n\n```shell\nbilix get_up 'https://space.bilibili.com/672328094' --num 100\n```\n\n`https://space.bilibili.com/672328094` is the uploader space url，you can also use uploader id `672328094` to replace `url`\n\n\n## Download Videos by Category\n\nSuppose you enjoy watching the dance category👍 and want to download the top 20 超级敏感 宅舞 videos with\nthe highest play count in the last 30 days, you can use:\n\n```shell\nbilix get_cate 宅舞 --keyword 超级敏感 --order click --num 20 --days 30\n```\n\n`get_cate` supports every sub-category on bilibili and offers options for sorting and keyword searching.\nFor more details, please refer to `bilix -h` or the code comments.\n\n## Download Videos from Favorites\n\nIf you need to download videos from your own or someone else's favorites, you can use the `get_favour` method\n\n```shell\nbilix get_favour 'https://space.bilibili.com/11499954/favlist?fid=1445680654' --num 20\n```\n\n`https://space.bilibili.com/11499954/favlist?fid=1445680654` is the URL for the favorites. 
If you want to know\nthe URL of a favorites, the easiest way is to click on it in the Bilibili webpage's left-side menu, and the URL\nwill appear in the browser's address bar. Alternatively, you can directly replace the URL with the fid `1445680654`\n\n## Download collection or video list\n\nIf you want to download the collection or video list released by a uploader, you can use the `get_collect` method\n\n```shell\nbilix get_collect 'url'\n```\n\nReplace `url` with the url of a collection or video list details page（[for example](https://space.bilibili.com/369750017/channel/collectiondetail?sid=630)）\n\n\n## Download subtitle, danmaku, cover...\n\nAdd options `--subtitle` `--dm` `--image` according to your need to download these additional files\n\n```shell\nbilix get_series 'url' --subtitle --dm --image\n```\n"
  },
  {
    "path": "docs/index.md",
    "content": "---\n# https://vitepress.dev/reference/default-theme-home-page\nlayout: home\n\nhero:\n  name: \"bilix\"\n  tagline: 快如闪电的异步下载工具，支持bilibili及更多\n  actions:\n    - theme: brand\n      text: 快速上手\n      link: /quickstart\n    - theme: alt\n      text: Python调用\n      link: /async\n\nfeatures:\n  - icon: ⚡️\n    title: 高速异步\n    details: 异步高并发支持，可控的并发量和速度设置\n  - icon: 😉\n    title: 轻量易用\n    details: 友好的CLI及进度提示，专注核心功能\n  - icon: 📝\n    title: 功能齐全\n    details: 投稿，弹幕，收藏夹，分区，动漫，电视剧，切片，封面，音频...\n  - icon: 🔨\n    title: 可拓展\n    details: 可扩展的Python模块适应更多下载场景\n---\n"
  },
  {
    "path": "docs/install.md",
    "content": "# 安装\nbilix是一个强大的Python异步视频下载工具，安装它需要完成两个步骤：\n\n1. pip安装（需要python3.8及以上）\n   ```shell\n   pip install bilix\n   ```\n   \n   如果你是macOS用户，也可以使用`brew`安装：\n   ```shell\n    brew install bilix\n    ```\n\n2. [FFmpeg](https://ffmpeg.org) ：一个命令行视频工具，用于合成下载的音频和视频\n\n    * macOS 下可以通过`brew install ffmpeg`进行安装。\n    * Windows 下载请至官网 https://ffmpeg.org/download.html#build-windows ，安装好后需要配置环境变量。\n\n   ::: info\n   最终确保在命令行中可以调用`ffmpeg`命令即可。\n   :::\n"
  },
  {
    "path": "docs/more.md",
    "content": "# 更多\n\n## 欢迎提问\n\n如果你发现任何bug或者其他问题，欢迎提[Issue](https://github.com/HFrost0/bilix/issues)。\n\n如果你有新想法或新的功能请求，欢迎在[Discussion](https://github.com/HFrost0/bilix/discussions)中参与讨论\n\n如果觉得该项目对你有所帮助，可以给作者一个小小的[Star](https://github.com/HFrost0/bilix/stargazers)🌟\n\n\n## 参与贡献\n\n❤️ 非常欢迎～详情可见[contributing](https://github.com/HFrost0/bilix/blob/master/CONTRIBUTING.md)\n\n## 已知的bug 🤡\n\n当两个视频名字完全一样时，任务冲突但不会报错\n"
  },
  {
    "path": "docs/package.json",
    "content": "{\n  \"scripts\": {\n    \"docs:dev\": \"vitepress dev\",\n    \"docs:build\": \"vitepress build\",\n    \"docs:preview\": \"vitepress preview\"\n  },\n  \"devDependencies\": {\n    \"vitepress\": \"^1.0.0-alpha.63\"\n  }\n}\n"
  },
  {
    "path": "docs/quickstart.md",
    "content": "# 快速上手\n\nbilix提供了简单的命令行使用方式，打开终端开始下载吧～\n\n## 批量下载\n\n批量下载整部动漫，电视剧，纪录片，电影，up投稿.....只需要把命令中的`url`替换成你要下载的系列中任意一个视频的网页链接。\\\n到 bilibili 上找一个来试试吧～，比如这个李宏毅老师的机器学习视频：[链接](https://www.bilibili.com/video/BV1JE411g7XF)，\n`bilix`会下载文件至命令行当前目录的`videos`文件夹中，默认自动创建。\n\n```shell\nbilix get_series 'url'\n```\n\n`get_series`很强大，会自动识别系列所有视频并下载，当然，如果该系列只有一个视频（比如单p投稿）也是可以正常下载的。\n\n::: info\n* 什么是一个系列(series)：比如一个多p投稿的所有p，一部动漫，电视剧的所有集。\n\n* 某些含有参数的url在终端中要用`''`包住，而windows的命令提示符不支持`''`，可用powershell或windows terminal代替。\n:::\n\n## 单个下载\n\n用户😨：我不想下载那么多，只想下载单个视频。没问题，试试这个，只需要提供那个视频的网页链接：\n\n```shell\nbilix get_video 'url'\n```\n:::info\n你知道吗？`get_series` `get_video`方法名都有[简写](/advance_guide)\n:::\n\n\n## 下载音频\n\n假设你喜欢音乐区，只想下载音频，那么可以使用可选参数`--only-audio`，例如下面是下载[A叔](https://space.bilibili.com/6075139)\n一个钢琴曲合集音频的例子\n\n```shell\nbilix get_series 'https://www.bilibili.com/video/BV1ts411D7mf' --only-audio\n```\n\n## 切片下载\n\n视频，直播录像太长，我需要下载我感兴趣的片段✂️，那么可以使用`--time-range -tr`参数指定时间段下载切片\n\n```shell\nbilix get_vedio 'url' -tr 0:16:53-0:17:49\n```\n\n这个例子中指定了16分53秒至17分49秒的片段。 `-tr`参数的格式为`h:m:s-h:m:s`，起始时间和结束时间以`-`分割，时分秒以`:`\n分割。或者`s-s`格式，例如1013秒至1069秒`1013-1069`\n\n该参数仅在`get_video`中生效，仅下载音频也支持该参数\n\n## 下载特定up主的投稿\n\n假设你是一个嘉心糖，想要下载嘉然小姐最新投稿的100个视频，那么你可以使用命令：\n\n```shell\nbilix get_up 'https://space.bilibili.com/672328094' --num 100\n```\n\n`https://space.bilibili.com/672328094` 是up空间页url，另外用up主id`672328094`替换url同样也是可以的\n\n## 下载分区视频\n\n假设你喜欢看舞蹈区👍，想要下载最近30天播放量最高的20个超级敏感宅舞视频，那么你可以使用\n\n```shell\nbilix get_cate 宅舞 --keyword 超级敏感 --order click --num 20 --days 30\n```\n\n`get_cate`支持b站的每个子分区，可以使用排序，关键词搜索等，详细请参考`bilix -h`或代码注释\n\n## 下载收藏夹视频\n\n如果你需要下载自己或者其他人收藏夹中的视频，你可以使用`get_favour`方法\n\n```shell\nbilix get_favour 'https://space.bilibili.com/11499954/favlist?fid=1445680654' --num 20\n```\n\n`https://space.bilibili.com/11499954/favlist?fid=1445680654` 是收藏夹url，如果要知道一个收藏夹的url是什么，\n最简单的办法是在b站网页左侧列表中点击切换到该收藏夹，url就会出现在浏览器的地址栏中。另外直接使用url中的fid`1445680654`\n替换url也是可以的。\n\n## 
下载合集或视频列表\n\n如果你需要下载up主发布的合集或视频列表，你可以使用`get_collect`方法\n\n```shell\nbilix get_collect 'url'\n```\n\n将`url`替换为某个合集或视频列表详情页的url（例如[这个](https://space.bilibili.com/369750017/channel/collectiondetail?sid=630)）即可下载合集或列表内所有视频\n\n:::info\n合集和视频列表有什么区别？b站的合集可以订阅，列表则没有这个功能，但是他们都在up主空间页面的合集和列表菜单中，例如[这个](https://space.bilibili.com/369750017/channel/series)\n，`get_collect`会根据详情页url中的信息判断这个链接是合集还是列表\n:::\n\n## 下载字幕，弹幕，封面...\n\n在命令中加入可选参数`--subtitle`（字幕） `--dm`（弹幕） `--image`（封面），即可下载这些附属文件\n\n```shell\nbilix get_series 'url' --subtitle --dm --image\n```\n"
  },
  {
    "path": "examples/a_very_simple_example.py",
    "content": "\"\"\"\n使用bilix在python中最简单的实践🤖\nThe simplest practice of using bilix in python\n\"\"\"\nimport asyncio\n# 导入下载器，里面有很多方法，例如get_series, get_video, get_favour，get_dm等等，总能找到符合你需求的\n# downloader with many method like get_series, get_video...\nfrom bilix.sites.bilibili import DownloaderBilibili\n\n\nasync def main():\n    # 你可以使用with上下文管理器来开启和关闭一个下载器\n    # you can use with to open and close a downloader\n    async with DownloaderBilibili() as d:\n        # 然后用await等待下载完成\n        # and use await to download\n        await d.get_series(\"https://www.bilibili.com/video/BV1jK4y1N7ST?p=5\")\n\n\nasync def main2():\n    d = DownloaderBilibili()\n    await d.get_series(\"https://www.bilibili.com/video/BV1jK4y1N7ST?p=5\")\n    # 或者，手动关闭，一样很简单\n    # or you can call aclose() manually\n    await d.aclose()\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n"
  },
  {
    "path": "examples/download_by_timerange.py",
    "content": "\"\"\"\n你可以只下视频的一小段\nYou can download just a small clip of the video\n\"\"\"\nimport asyncio\n\nfrom bilix.sites.bilibili import DownloaderBilibili\n\n\nasync def main():\n    \"\"\"download the 《嘉然我真的好喜欢你啊😭😭😭.mp4》 by timerange🤣\"\"\"\n    async with DownloaderBilibili() as d:\n        # time_range (start_time, end_time)\n        await d.get_video('https://www.bilibili.com/video/BV1kK4y1A7tN', time_range=(0, 7))\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n"
  },
  {
    "path": "examples/limit_download_rate.py",
    "content": "\"\"\"\n限制下载速度很简单\nlimit download rate is simple\n\"\"\"\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\nfrom bilix.sites.cctv import DownloaderCctv\n\n\nasync def main():\n    async with DownloaderBilibili(speed_limit=1e6) as d:  # limit to 1MB/s\n        await d.get_series(\"https://www.bilibili.com/video/BV1jK4y1N7ST?p=5\")\n\n\nasync def main2():\n    # 就像并发控制一样，每个downloader的速度设置也是独立的\n    # Like concurrency control, the speed settings of each downloader are independent\n    async with DownloaderBilibili(speed_limit=1e6) as bili_d, DownloaderCctv(speed_limit=3e6) as cctv_d:\n        await asyncio.gather(\n            bili_d.get_series('https://www.bilibili.com/video/BV1cd4y1Z7EG'),\n            cctv_d.get_series('https://www.douyin.com/video/7132430286415252773')\n        )\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n"
  },
  {
    "path": "examples/multi_site_download_same_time.py",
    "content": "\"\"\"\n你可以同时初始化不同网站的下载器，并且利用他们方法返回的协程对象进行并发下载。\n各个下载器之间的并发控制是独立的，因此可以最大化利用自己的网络资源。\n\nYou can initialize the downloaders of different websites at the same time, and use the coroutine objects returned by\ntheir methods to download concurrently. The concurrency control between each downloader is independent, so you can\nmaximize the use of your network resources.\n\"\"\"\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\nfrom bilix.sites.douyin import DownloaderDouyin\nfrom bilix.sites.cctv import DownloaderCctv\n\n\nasync def main():\n    async with DownloaderBilibili() as d_bl, DownloaderDouyin() as d_dy, DownloaderCctv() as d_tv:\n        await asyncio.gather(\n            d_bl.get_video('https://www.bilibili.com/video/BV1cd4y1Z7EG', quality=999),\n            d_dy.get_video('https://www.douyin.com/video/7132430286415252773'),\n            d_tv.get_video('https://tv.cctv.com/2012/05/02/VIDE1355968282695723.shtml', quality=999)\n        )\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n"
  },
  {
    "path": "examples/multi_type_tasks.py",
    "content": "\"\"\"\n你可以组合下载器返回的协程对象，利用gather并发执行他们，他们执行的并发度收到下载器对象的严格约束，因此不会对服务器造成意想不到的负担。\n\nYou can combine coroutine objects returned by the downloader and use gather to execute them concurrently.\nThe concurrency is strictly constrained by the downloader object, so it will not cause unexpected burden on\nthe site server.\n\"\"\"\nimport asyncio\nfrom bilix.sites.bilibili import DownloaderBilibili\n\n\nasync def main():\n    d = DownloaderBilibili(video_concurrency=5, part_concurrency=10)\n    cor1 = d.get_series(\n        'https://www.bilibili.com/bangumi/play/ss28277?spm_id_from=333.337.0.0',\n        quality=999)\n    cor2 = d.get_up(url_or_mid='436482484', quality=999)\n    cor3 = d.get_video('https://www.bilibili.com/bangumi/play/ep477122?from_spmid=666.4.0.0', quality=999)\n    await asyncio.gather(cor1, cor2, cor3)\n    await d.aclose()\n\n\nif __name__ == '__main__':\n    asyncio.run(main())\n"
  },
  {
    "path": "examples/use_of_api.py",
    "content": "\"\"\"\nbilix 提供了各个网站的api，如果你有需要当然可以使用，并且它们都是异步的\n\nbilix provides api for various websites. You can use them if you need, and they are asynchronous\n\"\"\"\nimport asyncio\n\nfrom bilix.sites.bilibili import api\nfrom httpx import AsyncClient\n\n\nasync def main():\n    # 需要先实例化一个用来进行http请求的client\n    # first we should initialize a http client\n    client = AsyncClient(**api.dft_client_settings)\n    data = await api.get_video_info(client, 'https://www.bilibili.com/bangumi/play/ep90849')\n    print(data)\n\n\nasyncio.run(main())\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[build-system]\nrequires = [\"hatchling\"]\nbuild-backend = \"hatchling.build\"\n\n[project]\nname = \"bilix\"\ndynamic = [\"version\"]\ndescription = \"⚡️Lightning-fast asynchronous download tool for bilibili and more\"\nreadme = \"README.md\"\nlicense = \"Apache-2.0\"\nrequires-python = \">=3.8\"\nauthors = [\n    { name = \"HFrost0\", email = \"hhlfrost@gmail.com\" },\n]\nclassifiers = [\n    \"Programming Language :: Python :: 3\",\n    \"Programming Language :: Python :: 3 :: Only\",\n    \"Programming Language :: Python :: 3.8\",\n    \"Programming Language :: Python :: 3.9\",\n    \"Programming Language :: Python :: 3.10\",\n    \"Programming Language :: Python :: 3.11\",\n    \"Programming Language :: Python :: 3.12\",\n]\ndependencies = [\n    \"aiofiles>=0.8.0\",\n    \"anyio\",\n    \"danmakuC>=0.3.5\",\n    \"bs4\",\n    \"click>=8.0.3\",\n    \"httpx[http2]>=0.23.3\",\n    \"json5\",\n    \"m3u8>=3.5.0\",\n    \"pycryptodome\",\n    \"pydantic>=2.5.3\",\n    \"rich\",\n    \"browser_cookie3>=0.17.1\",\n    \"pymp4>=1.2.0\",\n]\n\n[project.scripts]\nbilix = \"bilix.cli.main:main\"\n\n[project.urls]\nHomepage = \"https://github.com/HFrost0/bilix\"\n\n[tool.hatch.version]\npath = \"bilix/__init__.py\"\n\n[tool.hatch.build.targets.sdist]\ninclude = [\n    \"/bilix\",\n]\n"
  }
]