Repository: Evil0ctal/Douyin_TikTok_Download_API
Branch: main
Commit: 42784ffc83a7
Files: 75
Total size: 467.3 KB
Directory structure:
gitextract_6fzjbc2k/
├── .github/
│ ├── ISSUE_TEMPLATE/
│ │ ├── bug_report.md
│ │ ├── bug_report_CN.md
│ │ ├── feature_request.md
│ │ └── feature_request_CN.md
│ └── workflows/
│ ├── codeql-analysis.yml
│ ├── docker-image.yml
│ └── readme.yml
├── .gitignore
├── Dockerfile
├── LICENSE
├── Procfile
├── README.en.md
├── README.md
├── Screenshots/
│ ├── benchmarks/
│ │ └── info
│ └── v3_screenshots/
│ └── info
├── app/
│ ├── api/
│ │ ├── endpoints/
│ │ │ ├── bilibili_web.py
│ │ │ ├── douyin_web.py
│ │ │ ├── download.py
│ │ │ ├── hybrid_parsing.py
│ │ │ ├── ios_shortcut.py
│ │ │ ├── tiktok_app.py
│ │ │ └── tiktok_web.py
│ │ ├── models/
│ │ │ └── APIResponseModel.py
│ │ └── router.py
│ ├── main.py
│ └── web/
│ ├── app.py
│ └── views/
│ ├── About.py
│ ├── Document.py
│ ├── Downloader.py
│ ├── EasterEgg.py
│ ├── ParseVideo.py
│ ├── Shortcuts.py
│ └── ViewsUtils.py
├── bash/
│ ├── install.sh
│ └── update.sh
├── chrome-cookie-sniffer/
│ ├── README.md
│ ├── background.js
│ ├── manifest.json
│ ├── popup.html
│ └── popup.js
├── config.yaml
├── crawlers/
│ ├── base_crawler.py
│ ├── bilibili/
│ │ └── web/
│ │ ├── config.yaml
│ │ ├── endpoints.py
│ │ ├── models.py
│ │ ├── utils.py
│ │ ├── web_crawler.py
│ │ └── wrid.py
│ ├── douyin/
│ │ └── web/
│ │ ├── abogus.py
│ │ ├── config.yaml
│ │ ├── endpoints.py
│ │ ├── models.py
│ │ ├── utils.py
│ │ ├── web_crawler.py
│ │ └── xbogus.py
│ ├── hybrid/
│ │ └── hybrid_crawler.py
│ ├── tiktok/
│ │ ├── app/
│ │ │ ├── app_crawler.py
│ │ │ ├── config.yaml
│ │ │ ├── endpoints.py
│ │ │ └── models.py
│ │ └── web/
│ │ ├── config.yaml
│ │ ├── endpoints.py
│ │ ├── models.py
│ │ ├── utils.py
│ │ └── web_crawler.py
│ └── utils/
│ ├── api_exceptions.py
│ ├── deprecated.py
│ ├── logger.py
│ └── utils.py
├── daemon/
│ └── Douyin_TikTok_Download_API.service
├── docker-compose.yml
├── logo/
│ └── logo.txt
├── requirements.txt
├── start.py
└── start.sh
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Please describe your problem in as much detail as possible so that it can be
solved faster
title: "[BUG] Brief and clear description of the problem"
labels: BUG, enhancement
assignees: Evil0ctal
---
***Platform where the error occurred?***
Such as: Douyin/TikTok
***The endpoint where the error occurred?***
Such as: API-V1/API-V2/Web APP
***Submitted input value?***
Such as: video link
***Have you tried again?***
Such as: Yes, the error still persisted when I retried X hours later.
***Have you checked the readme or API documentation of this project?***
Such as: Yes, and I am quite sure the problem is caused by the program.
================================================
FILE: .github/ISSUE_TEMPLATE/bug_report_CN.md
================================================
---
name: Bug反馈
about: 请尽量详细的描述你的问题以便更快的解决它
title: "[BUG] 简短明了的描述问题"
labels: BUG
assignees: Evil0ctal
---
***发生错误的平台?***
如:抖音/TikTok
***发生错误的端点?***
如:API-V1/API-V2/Web APP
***提交的输入值?***
如:短视频链接
***是否有再次尝试?***
如:是,发生错误后X时间后错误依旧存在。
***你有查看本项目的自述文件或接口文档吗?***
如:有,并且很确定该问题是程序导致的。
================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for this project
title: "[Feature request] Brief and clear description of the problem"
labels: enhancement
assignees: Evil0ctal
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
================================================
FILE: .github/ISSUE_TEMPLATE/feature_request_CN.md
================================================
---
name: 新功能需求
about: 为本项目提出一个新需求或想法
title: "[Feature request] 简短明了的描述问题"
labels: enhancement
assignees: Evil0ctal
---
**您的功能请求是否与问题相关? 如有,请描述。**
如:我在使用xxx时觉得如果可以改进xxx的话会更好。
**描述您想要的解决方案**
如:对您想要发生的事情的清晰简洁的描述。
**描述您考虑过的替代方案**
如:对您考虑过的任何替代解决方案或功能的清晰简洁的描述。
**附加上下文**
在此处添加有关功能请求的任何其他上下文或屏幕截图。
================================================
FILE: .github/workflows/codeql-analysis.yml
================================================
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
  push:
    branches: [ main ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ main ]
  schedule:
    - cron: '22 7 * * 3'
jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write
    strategy:
      fail-fast: false
      matrix:
        language: [ 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
        # Learn more about CodeQL language support at https://git.io/codeql-language-support
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v2
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: ./path/to/local/query, your-org/your-repo/queries@main
      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v2
      # ℹ️ Command-line programs to run using the OS shell.
      # 📚 https://git.io/JvXDl
      # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
      #    and modify them (or add more) to build your code if your project
      #    uses a compiled language
      #- run: |
      #    make bootstrap
      #    make release
      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v2
================================================
FILE: .github/workflows/docker-image.yml
================================================
# docker-image.yml
name: Publish Docker image  # Workflow name; all workflows appear under the "Actions" tab on the GitHub project page
on:  # Events that trigger this workflow
  push:
    branches:  # Trigger on pushes to the main branch
      - 'main'
    tags:  # Trigger on tag updates
      - '*'
  workflow_dispatch:
    inputs:
      name:
        description: 'Person to greet'
        required: true
        default: 'Mona the Octocat'
      home:
        description: 'location'
        required: false
        default: 'The Octoverse'
# Environment variables used below:
# APP_NAME is passed to the docker build-args
# DOCKERHUB_REPO is the Docker Hub repo name
env:
  APP_NAME: douyin_tiktok_download_api
  DOCKERHUB_REPO: evil0ctal/douyin_tiktok_download_api
jobs:
  main:
    # Run on Ubuntu
    runs-on: ubuntu-latest
    steps:
      # git checkout the code
      - name: Checkout
        uses: actions/checkout@v2
      # Set up QEMU; docker buildx below depends on it
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      # Set up Docker Buildx for building multi-platform images
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      # Log in to Docker Hub
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          # Add the Docker Hub credentials under GitHub Repo => Settings => Secrets
          # DOCKERHUB_USERNAME is the Docker Hub account name
          # DOCKERHUB_TOKEN is created at Docker Hub => Account Settings => Security
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Read the current tag via git and store it in the APP_VERSION environment variable
      - name: Generate App Version
        run: echo APP_VERSION=`git describe --tags --always` >> $GITHUB_ENV
      # Build the Docker image and push it to Docker Hub
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          # Whether to docker push
          push: true
          # Build multi-platform images, see https://github.com/docker-library/bashbrew/blob/v0.1.1/architecture/oci-platform.go
          platforms: |
            linux/amd64
            linux/arm64
          # docker build args injecting APP_NAME/APP_VERSION
          build-args: |
            APP_NAME=${{ env.APP_NAME }}
            APP_VERSION=${{ env.APP_VERSION }}
          # Produce two docker tags: ${APP_VERSION} and latest
          tags: |
            ${{ env.DOCKERHUB_REPO }}:latest
            ${{ env.DOCKERHUB_REPO }}:${{ env.APP_VERSION }}
================================================
FILE: .github/workflows/readme.yml
================================================
name: Translate README
on:
  push:
    branches:
      - main
      - Dev
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup Node.js
        uses: actions/setup-node@v1
        with:
          node-version: 12.x
      # ISO Language Codes: https://cloud.google.com/translate/docs/languages
      - name: Adding README - English
        uses: dephraiim/translate-readme@main
        with:
          LANG: en
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pycharm
.idea
/app/api/endpoints/download/
/download/
================================================
FILE: Dockerfile
================================================
# Use the official slim Python 3.11 image
FROM python:3.11-slim
LABEL maintainer="Evil0ctal"
# Set non-interactive mode to avoid prompts during the Docker build
ENV DEBIAN_FRONTEND=noninteractive
# Set the working directory
WORKDIR /app
# Copy the application code into the container
COPY . /app
# Use the Aliyun mirror to speed up pip
RUN pip install -i https://mirrors.aliyun.com/pypi/simple/ -U pip \
    && pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Make sure the startup script is executable
RUN chmod +x start.sh
# Set the container start command
CMD ["./start.sh"]
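The repository also ships a `docker-compose.yml`; below is a minimal, hypothetical sketch of running the image built by this Dockerfile. The image name matches the `DOCKERHUB_REPO` in the publish workflow; the host port mapping of 80 is an assumption, so adjust it to whatever port the project's `config.yaml` actually configures, and see the repository's real `docker-compose.yml` for the authoritative version.

```yaml
# Hypothetical compose sketch; the repository's docker-compose.yml is authoritative
services:
  douyin_tiktok_download_api:
    image: evil0ctal/douyin_tiktok_download_api:latest
    container_name: douyin_tiktok_download_api
    ports:
      - "80:80"   # assumed port; match the port configured in config.yaml
    restart: always
```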
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: Procfile
================================================
web: python3 start.py
================================================
FILE: README.en.md
================================================
<div align="center">
<a href="https://douyin.wtf/" alt="logo" ><img src="https://raw.githubusercontent.com/Evil0ctal/Douyin_TikTok_Download_API/main/logo/logo192.png" width="120"/></a>
</div>
<h1 align="center">Douyin_TikTok_Download_API(抖音/TikTok API)</h1>
<div align="center">
[English](./README.en.md) | [Simplified Chinese](./README.md)
🚀"Douyin_TikTok_Download_API" is an out-of-the-box, high-performance asynchronous [Douyin](https://www.douyin.com)|[TikTok](https://www.tiktok.com)|[Bilibili](https://www.bilibili.com) data crawling tool that supports API calls and online batch parsing and downloading.
[](LICENSE)[](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/releases/latest)[](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/stargazers)[](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/network/members)[](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues)[](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues?q=is%3Aissue+is%3Aclosed)<br>[](https://pypi.org/project/douyin-tiktok-scraper/)[](https://pypi.org/project/douyin-tiktok-scraper/#files)[](https://pypi.org/project/douyin-tiktok-scraper/)[](https://pypi.org/project/douyin-tiktok-scraper/)<br>[](https://api.douyin.wtf/docs)[](https://api.tikhub.io/docs)<br>[](https://afdian.net/@evil0ctal)[](https://ko-fi.com/evil0ctal)[](https://www.patreon.com/evil0ctal)
</div>
## Sponsor
These sponsors have paid to be placed here; the **Douyin_TikTok_Download_API** project will always be free and open source. If you would like to become a sponsor of this project, please check out my [GitHub Sponsor Page](https://github.com/sponsors/evil0ctal).
<p align="center">
<a href="https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad">
<img style="border-radius:20px" width="845" height="845" alt="TikHub IO_Banner zh" src="https://github.com/user-attachments/assets/18ce4674-83b3-4312-a5d8-a45cf7cef7b2">
</a>
</p>
[TikHub](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) provides more than 700 endpoints for fetching and analyzing data from 14+ social media platforms, including videos, users, comments, shops, products, trends and more, covering all data access and analysis in one stop.
You can earn free quota by checking in every day. Use my registration invitation link: [https://user.tikhub.io/users/signup?referral_code=1wRL8eQk](https://user.tikhub.io/users/signup?referral_code=1wRL8eQk&utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) or the invitation code `1wRL8eQk` to receive `$2` of quota after registering and recharging.
[TikHub](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) provides the following services:
- Rich data interface
- Get free quota by signing in every day
- High-quality API services
- Official website:[https://tikhub.io/](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad)
- GitHub address:<https://github.com/TikHubIO/>
## 👻Introduction
> 🚨 If you need to run this project on your own server, please refer to: [Pre-deployment preparation](./README.md#%EF%B8%8F%E9%83%A8%E7%BD%B2%E5%89%8D%E7%9A%84%E5%87%86%E5%A4%87%E5%B7%A5%E4%BD%9C%E8%AF%B7%E4%BB%94%E7%BB%86%E9%98%85%E8%AF%BB), [Docker deployment](./README.md#%E9%83%A8%E7%BD%B2%E6%96%B9%E5%BC%8F%E4%BA%8C-docker), [One-click deployment](./README.md#%E9%83%A8%E7%BD%B2%E6%96%B9%E5%BC%8F%E4%B8%80-linux)
This project is based on [PyWebIO](https://github.com/pywebio/PyWebIO), [FastAPI](https://fastapi.tiangolo.com/), and [HTTPX](https://www.python-httpx.org/). It is a fast, asynchronous [Douyin](https://www.douyin.com/)/[TikTok](https://www.tiktok.com/) data crawling tool that offers watermark-free online batch parsing and downloading of videos and image galleries, a data crawling API, and watermark-free downloads via iOS Shortcuts. You can deploy or modify this project yourself to add more features, call [scraper.py](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/Stable/scraper.py) directly in your own project, or install the existing [pip package](https://pypi.org/project/douyin-tiktok-scraper/) as a parsing library to crawl data with ease...
_Some simple application scenarios:_
_Download blocked videos, perform data analysis, download without watermarks on iOS (use the [built-in iOS Shortcuts app](https://apps.apple.com/cn/app/%E5%BF%AB%E6%8D%B7%E6%8C%87%E4%BB%A4/id915249334) together with this project's API to download in-app or from the clipboard), etc..._
## 🔊 V4 version notes
- If you are interested in developing this project together, please add me on WeChat: `Evil0ctal` with the note "GitHub project refactoring". Everyone can communicate and learn in the group; advertising and illegal content are not allowed, it is purely for making friends and technical exchange.
- This project uses the `X-Bogus` and `A_Bogus` algorithms to request the Douyin and TikTok Web APIs.
- Due to Douyin's risk control, after deploying this project please **obtain the cookie of the Douyin website in your browser and replace it in config.yaml.**
- Please read the documentation below before opening an issue; solutions to most problems are covered there.
- This project is completely free, but when using it please comply with the [Apache-2.0 license](https://github.com/Evil0ctal/Douyin_TikTok_Download_API?tab=Apache-2.0-1-ov-file#readme)
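For the cookie replacement step mentioned above, the sketch below shows what the relevant part of the Douyin web crawler's `config.yaml` might look like. The key names and structure here are illustrative assumptions, not the file's actual schema; open `crawlers/douyin/web/config.yaml` in the repository to see the real fields to edit.

```yaml
# Hypothetical sketch: key names are assumptions, check crawlers/douyin/web/config.yaml
douyin:
  headers:
    User-Agent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
    # Paste the Cookie value copied from a logged-in douyin.com browser session
    Cookie: "your_douyin_cookie_here"
```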
## 🔖TikHub.io API
[TikHub.io](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) provides more than 700 endpoints for fetching and analyzing data from 14+ social media platforms, including videos, users, comments, shops, products, trends and more, covering all data access and analysis in one stop.
If you want to support the development of [Douyin_TikTok_Download_API](https://github.com/Evil0ctal/Douyin_TikTok_Download_API), we strongly recommend choosing [TikHub.io](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad).
#### Features:
> 📦 Ready to use right out of the box
Simplify your workflow and start developing quickly with the packaged SDK. All API interfaces follow a RESTful design and are described and documented using the OpenAPI specification, with sample parameters included to make calls easier.
> 💰 Cost advantage
There are no preset package restrictions and no monthly usage thresholds. All consumption is billed immediately based on actual usage, with tiered pricing based on the user's daily request volume. Free quota can also be earned through daily check-ins in the user dashboard, and it never expires.
> ⚡️ Fast support
We have a large Discord community server where administrators and other users reply quickly and help you solve problems.
> 🎉 Embracing open source
Part of TikHub's source code is open sourced on GitHub, and TikHub sponsors the authors of some open source projects.
#### Registration and use:
You can earn free quota by checking in every day. Use my registration invitation link: [https://user.tikhub.io/users/signup?referral_code=1wRL8eQk](https://user.tikhub.io/users/signup?referral_code=1wRL8eQk&utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) or the invitation code `1wRL8eQk` to receive `$2` of quota after registering and recharging.
#### Related links:
- Official website:[https://tikhub.io/](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad)
- API documentation:<https://api.tikhub.io/docs>
- GitHub: <https://github.com/TikHubIO/>
- Discord:<https://discord.com/invite/aMEAS8Xsvz>
## 🖥 Demo site: it is very fragile... please do not stress test it (·•᷄ࡇ•᷅ )
> 😾 The online download feature of the demo site has been disabled, and because of cookie issues, the availability of Douyin parsing and API services on the demo site cannot be guaranteed.
🍔 Web APP: <https://douyin.wtf/>
🍟 API Documentation: <https://douyin.wtf/docs>
🌭 TikHub API Documentation: <https://api.tikhub.io/docs>
💾 iOS Shortcut: [Shortcut release](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/discussions/104?sort=top)
📦️ Desktop downloaders (recommended repositories):
- [Johnserf-Seed/Tiktokdownload](https://github.com/Johnserf-Seed/TikTokDownload)
- [HFrost0/bilix](https://github.com/HFrost0/bilix)
- [Tairraos/TikDown - \[needs update\]](https://github.com/Tairraos/TikDown/)
## ⚗️Technology stack
- [/app/web](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/main/app/web)-[PyWebIO](https://www.pyweb.io/)
- [/app/api](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/main/app/api)-[FastAPI](https://fastapi.tiangolo.com/)
- [/crawlers](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/main/crawlers)-[HTTPX](https://www.python-httpx.org/)
> **_/crawlers_**
- Submit requests to APIs on different platforms and retrieve data. After processing, a dictionary (dict) is returned, and asynchronous support is supported.
> **_/app/api_**
- Get request parameters and use`Crawlers`The related classes process the data and return it in JSON form, download the video, and cooperate with iOS shortcut commands to achieve fast calling and support asynchronous.
> **_/app/web_**
- use`PyWebIO`A simple web program created to process the values entered on the web page and then use them`Crawlers`The related class processing interface outputs related data on the web page.
**_Most of the parameters of the above files can be found in the corresponding`config.yaml`Modify in_**
## 💡Project file structure
```
./Douyin_TikTok_Download_API
├─app
│  ├─api
│  │  ├─endpoints
│  │  └─models
│  ├─download
│  └─web
│     └─views
└─crawlers
   ├─bilibili
   │  └─web
   ├─douyin
   │  └─web
   ├─hybrid
   ├─tiktok
   │  ├─app
   │  └─web
   └─utils
```
## ✨Supported functions:
- Batch parsing on the web page (supports mixed Douyin/TikTok parsing)
- Download videos or image galleries online
- Packaged as a [pip package](https://pypi.org/project/douyin-tiktok-scraper/) for quick integration into your own projects
- [Call the API quickly via iOS Shortcuts](https://apps.apple.com/cn/app/%E5%BF%AB%E6%8D%B7%E6%8C%87%E4%BB%A4/id915249334) to download watermark-free videos/image galleries in-app
- Complete API documentation ([Demo](https://api.douyin.wtf/docs))
- Rich API interfaces:
  - Douyin Web API
    - [x] Video data parsing
    - [x] Get user homepage works data
    - [x] Get user's liked works data
    - [x] Get user's favorited works data
    - [x] Get user homepage information
    - [x] Get user mix works data
    - [x] Get user live stream data
    - [x] Get the live stream data of a specified user
    - [x] Get the live room gift-giver ranking
    - [x] Get comment data for a single video
    - [x] Get comment replies for a specified video
    - [x] Generate msToken
    - [x] Generate verify_fp
    - [x] Generate s_v_web_id
    - [x] Generate X-Bogus parameter from an API URL
    - [x] Generate A_Bogus parameter from an API URL
    - [x] Extract a single user ID
    - [x] Extract user IDs from a list
    - [x] Extract a single work ID
    - [x] Extract work IDs from a list
    - [x] Extract live room IDs from a list
  - TikTok Web API
    - [x] Video data parsing
    - [x] Get user homepage works data
    - [x] Get user's liked works data
    - [x] Get user homepage information
    - [x] Get user followers data
    - [x] Get user following data
    - [x] Get user mix works data
    - [x] Get user favorites data
    - [x] Get user playlist data
    - [x] Get comment data for a single video
    - [x] Get comment replies for a specified video
    - [x] Generate msToken
    - [x] Generate ttwid
    - [x] Generate X-Bogus parameter from an API URL
    - [x] Extract a single user sec_user_id
    - [x] Extract user sec_user_ids from a list
    - [x] Extract a single work ID
    - [x] Extract work IDs from a list
    - [x] Get user unique_id
    - [x] Get unique_ids from a list
  - Bilibili Web API
    - [x] Get details of a single video
    - [x] Get video stream URL
    - [x] Get user's published video data
    - [x] Get all of a user's favorites folders
    - [x] Get video data from a specified favorites folder
    - [x] Get information about a specified user
    - [x] Get trending video information
    - [x] Get comments for a specified video
    - [x] Get replies to a specified comment under a video
    - [x] Get a specified user's feed updates
    - [x] Get real-time video danmaku (bullet comments)
    - [x] Get information on a specified live room
    - [x] Get the live room video stream
    - [x] Get streamers currently live in a specified category
    - [x] Get the list of all live categories
    - [x] Get a video's part (分P) information via BV ID
* * *
## 📦Using the parsing library (deprecated, needs updating):
> 💡PyPI: <https://pypi.org/project/douyin-tiktok-scraper/>
Install the parsing library:`pip install douyin-tiktok-scraper`
```python
import asyncio
from douyin_tiktok_scraper.scraper import Scraper

api = Scraper()

async def hybrid_parsing(url: str) -> dict:
    # Hybrid parsing (Douyin/TikTok URL)
    result = await api.hybrid_parsing(url)
    print(f"The hybrid parsing result:\n {result}")
    return result

asyncio.run(hybrid_parsing(url=input("Paste Douyin/TikTok/Bilibili share URL here: ")))
```
## 🗺️Supported submission formats:
> 💡Tip: including but not limited to the examples below; if a link fails to parse, please open a new [issue](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues)
- Douyin share text (copied in the app)
```text
7.43 pda:/ 让你在几秒钟之内记住我 https://v.douyin.com/L5pbfdP/ 复制此链接,打开Dou音搜索,直接观看视频!
```
- Douyin short URL (copy within APP)
```text
https://v.douyin.com/L4FJNR3/
```
- Douyin normal URL (copy from web version)
```text
https://www.douyin.com/video/6914948781100338440
```
- Douyin discover page URL (copied from the app)
```text
https://www.douyin.com/discover?modal_id=7069543727328398622
```
- TikTok short URL (copy within APP)
```text
https://www.tiktok.com/t/ZTR9nDNWq/
```
- TikTok normal URL (copy from web version)
```text
https://www.tiktok.com/@evil0ctal/video/7156033831819037994
```
- Douyin/TikTok batch URLs (no separator needed between links)
```text
https://v.douyin.com/L4NpDJ6/
https://www.douyin.com/video/7126745726494821640
2.84 nqe:/ 骑白马的也可以是公主%%百万转场变身https://v.douyin.com/L4FJNR3/ 复制此链接,打开Dou音搜索,直接观看视频!
https://www.tiktok.com/t/ZTR9nkkmL/
https://www.tiktok.com/t/ZTR9nDNWq/
https://www.tiktok.com/@evil0ctal/video/7156033831819037994
```
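The batch parser accepts raw mixed text like the block above. Conceptually, pulling the links out of such text comes down to a single URL regex; the following is a minimal sketch of that idea (not the project's actual extraction code):

```python
import re

# Matches http(s) URLs embedded in arbitrary share text; stops at
# whitespace and common CJK/ASCII punctuation that follows a pasted link.
URL_PATTERN = re.compile(r"https?://[^\s,，!！]+")

def extract_urls(text: str) -> list[str]:
    """Return all URLs found in a blob of mixed share text."""
    return URL_PATTERN.findall(text)

sample = (
    "2.84 nqe:/ 骑白马的也可以是公主%%百万转场变身"
    "https://v.douyin.com/L4FJNR3/ 复制此链接,打开Dou音搜索,直接观看视频!\n"
    "https://www.tiktok.com/@evil0ctal/video/7156033831819037994"
)
print(extract_urls(sample))
# → ['https://v.douyin.com/L4FJNR3/',
#    'https://www.tiktok.com/@evil0ctal/video/7156033831819037994']
```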
## 🛰️API documentation
**_API documentation:_**
Local: <http://localhost/docs>
Online: <https://api.douyin.wtf/docs>
**_API demo:_**
- Crawl video data (TikTok/Douyin hybrid parsing): `https://api.douyin.wtf/api/hybrid/video_data?url=[视频链接/Video URL]&minimal=false`
- Download videos/image galleries (TikTok/Douyin hybrid parsing): `https://api.douyin.wtf/api/download?url=[视频链接/Video URL]&prefix=true&with_watermark=false`
**_For more demonstrations, please see the documentation..._**
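As a sketch of calling the hybrid endpoint from your own Python code (the base URL and `minimal` flag come from the demo URLs above; the helper name is illustrative only):

```python
from urllib.parse import urlencode

# Demo instance; point this at http://localhost for a self-hosted deployment.
BASE = "https://api.douyin.wtf"

def video_data_url(share_url: str, minimal: bool = False) -> str:
    """Build the query URL for the hybrid video-data endpoint."""
    qs = urlencode({"url": share_url, "minimal": str(minimal).lower()})
    return f"{BASE}/api/hybrid/video_data?{qs}"

url = video_data_url("https://v.douyin.com/L4FJNR3/", minimal=True)
print(url)
# Fetch it with any HTTP client, e.g.:
#   import httpx; data = httpx.get(url).json()
```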
## ⚠️Preparation work before deployment (please read carefully):
- You need to solve the crawler cookie risk-control problem yourself, otherwise the endpoints may become unusable. After modifying the configuration file, you must restart the service for the changes to take effect, and it is best to use cookies from an account you have already logged in with.
- Douyin web cookie (obtain and replace the cookie in the configuration file below):
- <https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/30e56e5a7f97f87d60b1045befb1f6db147f8590/crawlers/douyin/web/config.yaml#L7>
- TikTok web-side cookies (obtain and replace the cookies in the configuration file below):
- <https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/30e56e5a7f97f87d60b1045befb1f6db147f8590/crawlers/tiktok/web/config.yaml#L6>
- I turned off the demo site's online download function; someone downloaded a video so huge it crashed the server. You can right-click the video on the web parsing results page to save it...
- The demo site's cookies are my own and are not guaranteed to stay valid; they only serve as a demonstration. If you deploy the project yourself, please obtain your own cookies.
- Video links returned by the TikTok Web API produce an HTTP 403 error when accessed directly. Use this project's `/api/download` endpoint to download TikTok videos; this endpoint is manually disabled on the demo site, so you need to deploy the project yourself.
- A **video tutorial** is available for reference: **_<https://www.bilibili.com/video/BV1vE421j7NR/>_**
## 💻Deployment (Method 1 Linux)
> 💡Tip: it is best to deploy this project on a server in the United States, otherwise strange bugs may appear.
It is recommended to use a [Digitalocean](https://www.digitalocean.com/) server, because you can get free credit.
Use my invitation link to sign up and you can get a $200 credit, and when you spend $25 on it, I can also get a $25 reward.
My invitation link:
<https://m.do.co/c/9f72a27dec35>
> Use script to deploy this project with one click
- This project provides a one-click deployment script that can quickly deploy this project on the server.
- The script was tested on Ubuntu 20.04 LTS. Other systems may have problems. If there are any problems, please solve them yourself.
- Download [install.sh](https://raw.githubusercontent.com/Evil0ctal/Douyin_TikTok_Download_API/main/bash/install.sh) to the server with wget and run it:
```bash
wget -O install.sh https://raw.githubusercontent.com/Evil0ctal/Douyin_TikTok_Download_API/main/bash/install.sh && sudo bash install.sh
```
> Start/stop service
- Use the following commands to control running or stopping the service:
- `sudo systemctl start Douyin_TikTok_Download_API.service`
- `sudo systemctl stop Douyin_TikTok_Download_API.service`
> Turn on/off automatic operation at startup
- Use the following commands to set the service to run automatically at boot or cancel automatic run at boot:
- `sudo systemctl enable Douyin_TikTok_Download_API.service`
- `sudo systemctl disable Douyin_TikTok_Download_API.service`
> Update project
- When the project is updated, ensure that the update script is executed in the virtual environment and all dependencies are updated. Enter the project bash directory and run update.sh:
- `cd /www/wwwroot/Douyin_TikTok_Download_API/bash && sudo bash update.sh`
## 💽Deployment (Method 2 Docker)
> 💡Tip: Docker deployment is the simplest method and suits users unfamiliar with Linux; it provides environment consistency, isolation, and quick setup.
> Please use a server that can normally access Douyin or TikTok, otherwise strange bugs may occur.
### Preparation
Before you begin, make sure Docker is installed on your system. If you haven't installed Docker yet, download and install it from the [Docker official website](https://www.docker.com/products/docker-desktop/).
### Step 1: Pull the Docker image
First, pull the latest Douyin_TikTok_Download_API image from Docker Hub.
```bash
docker pull evil0ctal/douyin_tiktok_download_api:latest
```
If needed, replace `latest` with the specific version tag you want to deploy.
### Step 2: Run the Docker container
After pulling the image, you can start a container from this image. Here are the commands to run the container, including basic configuration:
```bash
docker run -d --name douyin_tiktok_api -p 80:80 evil0ctal/douyin_tiktok_download_api
```
Each part of this command does the following:
- `-d`: Runs the container in the background (detached mode).
- `--name douyin_tiktok_api`: Names the container `douyin_tiktok_api`.
- `-p 80:80`: Maps port 80 on the host to port 80 in the container. Adjust the port numbers based on your configuration or port availability.
- `evil0ctal/douyin_tiktok_download_api`: The name of the Docker image to use.
### Step 3: Verify the container is running
Check if your container is running using the following command:
```bash
docker ps
```
This will list all active containers. Look for `douyin_tiktok_api` to confirm it is running properly.
### Step 4: Access the App
Once the container is running, you should be able to access Douyin_TikTok_Download_API via `http://localhost` or an API client. Adjust the URL if you configured a different port or are accessing it from a remote location.
### Optional: Custom Docker commands
For more advanced deployments, you may wish to customize Docker commands to include environment variables, volume mounts for persistent data, or other Docker parameters. Here is an example:
```bash
docker run -d --name douyin_tiktok_api -p 80:80 \
  -v /path/to/your/data:/data \
  -e MY_ENV_VAR=my_value \
  evil0ctal/douyin_tiktok_download_api
```
- `-v /path/to/your/data:/data`: Mounts the host directory `/path/to/your/data` to the container's `/data` directory, for persisting or sharing data.
- `-e MY_ENV_VAR=my_value`: Sets the environment variable `MY_ENV_VAR` inside the container to `my_value`.
### Configuration file modification
Most of the project's configuration can be modified in the `config.yaml` files in the following directories:
- `/crawlers/douyin/web/config.yaml`
- `/crawlers/tiktok/web/config.yaml`
- `/crawlers/tiktok/app/config.yaml`
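Swapping in a fresh cookie can also be scripted. A minimal sketch of that idea, as a plain text substitution; the `Cookie:` key name is an assumption about the config layout, so check your `config.yaml` before relying on it:

```python
import re

def replace_cookie(config_text: str, new_cookie: str) -> str:
    """Replace the value of any 'Cookie:' entry in a YAML config's text.

    NOTE: 'Cookie' as the key name is a hypothetical example; adapt the
    pattern to the actual keys used in the project's config files.
    """
    return re.sub(r"(?m)^(\s*Cookie:\s*).*$",
                  lambda m: m.group(1) + new_cookie,
                  config_text)

sample = "headers:\n  User-Agent: Mozilla/5.0\n  Cookie: old_value\n"
print(replace_cookie(sample, "sessionid=abc123"))
# → headers:
#     User-Agent: Mozilla/5.0
#     Cookie: sessionid=abc123
```

Remember that, per the notes above, the service must be restarted after the config file changes for the new cookie to take effect.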
### Step 5: Stop and remove the container
When you need to stop and remove a container, use the following commands:
```bash
# Stop
docker stop douyin_tiktok_api
# Remove
docker rm douyin_tiktok_api
```
## 📸Screenshot
**_API speed test (compared to official API)_**
<details><summary>🔎Click to expand screenshots</summary>
Douyin official API:
API of this project:
TikTok official API:
API of this project:
</details>
<hr>
**_Project interface_**
<details><summary>🔎Click to expand screenshots</summary>
Web main interface:

Web main interface:

</details>
<hr>
## 📜 Star History
[](https://star-history.com/#Evil0ctal/Douyin_TikTok_Download_API&Timeline)
[Apache-2.0 license](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/Stable/LICENSE)
> Start: 2021/11/06
> GitHub: [@Evil0ctal](https://github.com/Evil0ctal)
================================================
FILE: README.md
================================================
<div align="center">
<a href="https://douyin.wtf/" alt="logo" ><img src="https://raw.githubusercontent.com/Evil0ctal/Douyin_TikTok_Download_API/main/logo/logo192.png" width="120"/></a>
</div>
<h1 align="center">Douyin_TikTok_Download_API(抖音/TikTok API)</h1>
<div align="center">
[English](./README.en.md) | [简体中文](./README.md)
🚀「Douyin_TikTok_Download_API」是一个开箱即用的高性能异步[抖音](https://www.douyin.com)|[TikTok](https://www.tiktok.com)|[Bilibili](https://www.bilibili.com)数据爬取工具,支持API调用,在线批量解析及下载。
[](LICENSE)
[](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/releases/latest)
[](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/stargazers)
[](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/network/members)
[](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues)
[](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues?q=is%3Aissue+is%3Aclosed)

<br>
[](https://pypi.org/project/douyin-tiktok-scraper/)
[](https://pypi.org/project/douyin-tiktok-scraper/#files)
[](https://pypi.org/project/douyin-tiktok-scraper/)
[](https://pypi.org/project/douyin-tiktok-scraper/)
<br>
[](https://api.douyin.wtf/docs)
[](https://api.tikhub.io/docs)
<br>
[](https://afdian.net/@evil0ctal)
[](https://ko-fi.com/evil0ctal)
[](https://www.patreon.com/evil0ctal)
</div>
## 赞助商
这些赞助商已付费放置在这里,**Douyin_TikTok_Download_API** 项目将永远免费且开源。如果您希望成为该项目的赞助商,请查看我的 [GitHub 赞助商页面](https://github.com/sponsors/evil0ctal)。
<p align="center">
<a href="https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad">
<img style="border-radius:20px" width="845" height="845" alt="TikHub IO_Banner zh" src="https://github.com/user-attachments/assets/18ce4674-83b3-4312-a5d8-a45cf7cef7b2">
</a>
</p>
[TikHub](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) 提供超过 700 个端点,可用于从 14+ 个社交媒体平台获取与分析数据 —— 包括视频、用户、评论、商店、商品与趋势等,一站式完成所有数据访问与分析。
通过每日签到,可以获取免费额度。可以使用我的注册邀请链接:[https://user.tikhub.io/users/signup?referral_code=1wRL8eQk](https://user.tikhub.io/users/signup?referral_code=1wRL8eQk&utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) 或 邀请码:`1wRL8eQk`,注册并充值即可获得`$2`额度。
[TikHub](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) 提供以下服务:
- 丰富的数据接口
- 每日签到免费获取额度
- 高质量的API服务
- 官网:[https://tikhub.io/](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad)
- GitHub地址:[https://github.com/TikHubIO/](https://github.com/TikHubIO/)
## 👻介绍
> 🚨如需使用私有服务器运行本项目,请参考:[部署准备工作](./README.md#%EF%B8%8F%E9%83%A8%E7%BD%B2%E5%89%8D%E7%9A%84%E5%87%86%E5%A4%87%E5%B7%A5%E4%BD%9C%E8%AF%B7%E4%BB%94%E7%BB%86%E9%98%85%E8%AF%BB), [Docker部署](./README.md#%E9%83%A8%E7%BD%B2%E6%96%B9%E5%BC%8F%E4%BA%8C-docker), [一键部署](./README.md#%E9%83%A8%E7%BD%B2%E6%96%B9%E5%BC%8F%E4%B8%80-linux)
本项目是基于 [PyWebIO](https://github.com/pywebio/PyWebIO),[FastAPI](https://fastapi.tiangolo.com/),[HTTPX](https://www.python-httpx.org/),快速异步的[抖音](https://www.douyin.com/)/[TikTok](https://www.tiktok.com/)数据爬取工具,并通过Web端实现在线批量解析以及下载无水印视频或图集,数据爬取API,iOS快捷指令无水印下载等功能。你可以自己部署或改造本项目实现更多功能,也可以在你的项目中直接调用[scraper.py](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/Stable/scraper.py)或安装现有的[pip包](https://pypi.org/project/douyin-tiktok-scraper/)作为解析库轻松爬取数据等.....
*一些简单的运用场景:*
*下载禁止下载的视频,进行数据分析,iOS无水印下载(搭配[iOS自带的快捷指令APP](https://apps.apple.com/cn/app/%E5%BF%AB%E6%8D%B7%E6%8C%87%E4%BB%A4/id915249334)
配合本项目API实现应用内下载或读取剪贴板下载)等.....*
## 🔊 V4 版本备注
- 感兴趣一起写这个项目的给请加微信`Evil0ctal`备注github项目重构,大家可以在群里互相交流学习,不允许发广告以及违法的东西,纯粹交朋友和技术交流。
- 本项目使用`X-Bogus`算法以及`A_Bogus`算法请求抖音和TikTok的Web API。
- 由于Douyin的风控,部署完本项目后请在**浏览器中获取Douyin网站的Cookie然后在config.yaml中进行替换。**
- 请在提出issue之前先阅读下方的文档,大多数问题的解决方法都会包含在文档中。
- 本项目是完全免费的,但使用时请遵守:[Apache-2.0 license](https://github.com/Evil0ctal/Douyin_TikTok_Download_API?tab=Apache-2.0-1-ov-file#readme)
## 🔖TikHub.io API
[TikHub.io](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) 提供超过 700 个端点,可用于从 14+ 个社交媒体平台获取与分析数据 —— 包括视频、用户、评论、商店、商品与趋势等,一站式完成所有数据访问与分析。
如果您想支持 [Douyin_TikTok_Download_API](https://github.com/Evil0ctal/Douyin_TikTok_Download_API) 项目的开发,我们强烈建议您选择 [TikHub.io](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad)。
#### 特点:
> 📦 开箱即用
简化使用流程,利用封装好的SDK迅速开展开发工作。所有API接口均依据RESTful架构设计,并使用OpenAPI规范进行描述和文档化,附带示例参数,确保调用更加简便。
> 💰 成本优势
不预设套餐限制,没有月度使用门槛,所有消费按实际使用量即时计费,并且根据用户每日的请求量进行阶梯式计费,同时可以通过每日签到在用户后台获取免费的额度,并且这些免费额度不会过期。
> ⚡️ 快速支持
我们有一个庞大的Discord社区服务器,管理员和其他用户会在服务器中快速的回复你,帮助你快速解决当前的问题。
> 🎉 拥抱开源
TikHub的部分源代码会开源在Github上,并且会赞助一些开源项目的作者。
#### 注册与使用:
通过每日签到,可以获取免费额度。可以使用我的注册邀请链接:[https://user.tikhub.io/users/signup?referral_code=1wRL8eQk](https://user.tikhub.io/users/signup?referral_code=1wRL8eQk&utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) 或 邀请码:`1wRL8eQk`,注册并充值即可获得`$2`额度。
#### 相关链接:
- 官网:[https://tikhub.io/](https://tikhub.io/?utm_source=github.com/Evil0ctal/Douyin_TikTok_Download_API&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad)
- API 文档:[https://api.tikhub.io/docs](https://api.tikhub.io/docs)
- GitHub:[https://github.com/TikHubIO/](https://github.com/TikHubIO/)
- Discord:[https://discord.com/invite/aMEAS8Xsvz](https://discord.com/invite/aMEAS8Xsvz)
## 🖥演示站点: 我很脆弱...请勿压测(·•᷄ࡇ•᷅ )
> 😾演示站点的在线下载功能已关闭,并且由于Cookie原因,Douyin的解析以及API服务在Demo站点无法保证可用性。
🍔Web APP: [https://douyin.wtf/](https://douyin.wtf/)
🍟API Document: [https://douyin.wtf/docs](https://douyin.wtf/docs)
🌭TikHub API Document: [https://api.tikhub.io/docs](https://api.tikhub.io/docs)
💾iOS Shortcut(快捷指令): [Shortcut release](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/discussions/104?sort=top)
📦️桌面端下载器(仓库推荐):
- [Johnserf-Seed/TikTokDownload](https://github.com/Johnserf-Seed/TikTokDownload)
- [HFrost0/bilix](https://github.com/HFrost0/bilix)
- [Tairraos/TikDown - [需更新]](https://github.com/Tairraos/TikDown/)
## ⚗️技术栈
* [/app/web](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/main/app/web) - [PyWebIO](https://www.pyweb.io/)
* [/app/api](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/main/app/api) - [FastAPI](https://fastapi.tiangolo.com/)
* [/crawlers](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/main/crawlers) - [HTTPX](https://www.python-httpx.org/)
> ***/crawlers***
- 向不同平台的API提交请求并取回数据,处理后返回字典(dict),支持异步。
> ***/app/api***
- 获得请求参数并使用`Crawlers`相关类处理数据后以JSON形式返回,视频下载,配合iOS快捷指令实现快速调用,支持异步。
> ***/app/web***
- 使用`PyWebIO`制作的简易Web程序,将网页输入的值进行处理后使用`Crawlers`相关类处理接口输出相关数据在网页上。
***以上文件的参数大多可在对应的`config.yaml`中进行修改***
## 💡项目文件结构
```
./Douyin_TikTok_Download_API
├─app
│ ├─api
│ │ ├─endpoints
│ │ └─models
│ ├─download
│ └─web
│ └─views
└─crawlers
├─bilibili
│ └─web
├─douyin
│ └─web
├─hybrid
├─tiktok
│ ├─app
│ └─web
└─utils
```
## ✨支持功能:
- 网页端批量解析(支持抖音/TikTok混合解析)
- 在线下载视频或图集。
- 制作[pip包](https://pypi.org/project/douyin-tiktok-scraper/)方便快速导入你的项目
- [iOS快捷指令快速调用API](https://apps.apple.com/cn/app/%E5%BF%AB%E6%8D%B7%E6%8C%87%E4%BB%A4/id915249334)实现应用内下载无水印视频/图集
- 完善的API文档([Demo/演示](https://api.douyin.wtf/docs))
- 丰富的API接口:
- 抖音网页版API
- [x] 视频数据解析
- [x] 获取用户主页作品数据
- [x] 获取用户主页喜欢作品数据
- [x] 获取用户主页收藏作品数据
- [x] 获取用户主页信息
- [x] 获取用户合辑作品数据
- [x] 获取用户直播流数据
- [x] 获取指定用户的直播流数据
- [x] 获取直播间送礼用户排行榜
- [x] 获取单个视频评论数据
- [x] 获取指定视频的评论回复数据
- [x] 生成msToken
- [x] 生成verify_fp
- [x] 生成s_v_web_id
- [x] 使用接口网址生成X-Bogus参数
- [x] 使用接口网址生成A_Bogus参数
- [x] 提取单个用户id
- [x] 提取列表用户id
- [x] 提取单个作品id
- [x] 提取列表作品id
- [x] 提取列表直播间号
- [x] 提取列表直播间号
- TikTok网页版API
- [x] 视频数据解析
- [x] 获取用户主页作品数据
- [x] 获取用户主页喜欢作品数据
- [x] 获取用户主页信息
- [x] 获取用户主页粉丝数据
- [x] 获取用户主页关注数据
- [x] 获取用户主页合辑作品数据
- [x] 获取用户主页收藏数据
- [x] 获取用户主页播放列表数据
- [x] 获取单个视频评论数据
- [x] 获取指定视频的评论回复数据
- [x] 生成msToken
- [x] 生成ttwid
- [x] 使用接口网址生成X-Bogus参数
- [x] 提取单个用户sec_user_id
- [x] 提取列表用户sec_user_id
- [x] 提取单个作品id
- [x] 提取列表作品id
- [x] 获取用户unique_id
- [x] 获取列表unique_id
- 哔哩哔哩网页版API
- [x] 获取单个视频详情信息
- [x] 获取视频流地址
- [x] 获取用户发布视频作品数据
- [x] 获取用户所有收藏夹信息
- [x] 获取指定收藏夹内视频数据
- [x] 获取指定用户的信息
- [x] 获取综合热门视频信息
- [x] 获取指定视频的评论
- [x] 获取视频下指定评论的回复
- [x] 获取指定用户动态
- [x] 获取视频实时弹幕
- [x] 获取指定直播间信息
- [x] 获取直播间视频流
- [x] 获取指定分区正在直播的主播
- [x] 获取所有直播分区列表
- [x] 通过bv号获得视频分p信息
---
## 📦调用解析库(已废弃需要更新):
> 💡PyPi:[https://pypi.org/project/douyin-tiktok-scraper/](https://pypi.org/project/douyin-tiktok-scraper/)
安装解析库:`pip install douyin-tiktok-scraper`
```python
import asyncio
from douyin_tiktok_scraper.scraper import Scraper

api = Scraper()

async def hybrid_parsing(url: str) -> dict:
    # Hybrid parsing (Douyin/TikTok URL)
    result = await api.hybrid_parsing(url)
    print(f"The hybrid parsing result:\n {result}")
    return result

asyncio.run(hybrid_parsing(url=input("Paste Douyin/TikTok/Bilibili share URL here: ")))
```
## 🗺️支持的提交格式:
> 💡提示:包含但不仅限于以下例子,如果遇到链接解析失败请开启一个新 [issue](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues)
- 抖音分享口令 (APP内复制)
```text
7.43 pda:/ 让你在几秒钟之内记住我 https://v.douyin.com/L5pbfdP/ 复制此链接,打开Dou音搜索,直接观看视频!
```
- 抖音短网址 (APP内复制)
```text
https://v.douyin.com/L4FJNR3/
```
- 抖音正常网址 (网页版复制)
```text
https://www.douyin.com/video/6914948781100338440
```
- 抖音发现页网址 (APP复制)
```text
https://www.douyin.com/discover?modal_id=7069543727328398622
```
- TikTok短网址 (APP内复制)
```text
https://www.tiktok.com/t/ZTR9nDNWq/
```
- TikTok正常网址 (网页版复制)
```text
https://www.tiktok.com/@evil0ctal/video/7156033831819037994
```
- 抖音/TikTok批量网址(无需使用符号隔开)
```text
https://v.douyin.com/L4NpDJ6/
https://www.douyin.com/video/7126745726494821640
2.84 nqe:/ 骑白马的也可以是公主%%百万转场变身https://v.douyin.com/L4FJNR3/ 复制此链接,打开Dou音搜索,直接观看视频!
https://www.tiktok.com/t/ZTR9nkkmL/
https://www.tiktok.com/t/ZTR9nDNWq/
https://www.tiktok.com/@evil0ctal/video/7156033831819037994
```
## 🛰️API文档
***API文档:***
本地:[http://localhost/docs](http://localhost/docs)
在线:[https://api.douyin.wtf/docs](https://api.douyin.wtf/docs)
***API演示:***
- 爬取视频数据(TikTok或Douyin混合解析)
`https://api.douyin.wtf/api/hybrid/video_data?url=[视频链接/Video URL]&minimal=false`
- 下载视频/图集(TikTok或Douyin混合解析)
`https://api.douyin.wtf/api/download?url=[视频链接/Video URL]&prefix=true&with_watermark=false`
***更多演示请查看文档内容......***
## ⚠️部署前的准备工作(请仔细阅读):
- 你需要自行解决爬虫Cookie风控问题,否则可能会导致接口无法使用,修改完配置文件后需要重启服务才能生效,并且最好使用已经登录过的账号的Cookie。
- 抖音网页端Cookie(自行获取并替换下面配置文件中的Cookie):
- https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/30e56e5a7f97f87d60b1045befb1f6db147f8590/crawlers/douyin/web/config.yaml#L7
- TikTok网页端Cookie(自行获取并替换下面配置文件中的Cookie):
- https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/30e56e5a7f97f87d60b1045befb1f6db147f8590/crawlers/tiktok/web/config.yaml#L6
- 演示站点的在线下载功能被我关掉了,有人下的视频巨大无比直接给我服务器干崩了,你可以在网页解析结果页面右键保存视频...
- 演示站点的Cookie是我自己的,不保证长期有效,只起到演示作用,自己部署的话请自行获取Cookie。
- 需要TikTok Web API返回的视频链接直接访问会发生HTTP 403错误,请使用本项目API中的`/api/download`接口对TikTok 视频进行下载,这个接口在演示站点中已经被手动关闭了,需要你自行部署本项目。
- 这里有一个**视频教程**可以参考:***[https://www.bilibili.com/video/BV1vE421j7NR/](https://www.bilibili.com/video/BV1vE421j7NR/)***
## 💻部署(方式一 Linux)
> 💡提示:最好将本项目部署至美国地区的服务器,否则可能会出现奇怪的BUG。
推荐大家使用[Digitalocean](https://www.digitalocean.com/)的服务器,因为可以白嫖。
使用我的邀请链接注册,你可以获得$200的credit,当你在上面消费$25时,我也可以获得$25的奖励。
我的邀请链接:
[https://m.do.co/c/9f72a27dec35](https://m.do.co/c/9f72a27dec35)
> 使用脚本一键部署本项目
- 本项目提供了一键部署脚本,可以在服务器上快速部署本项目。
- 脚本是在Ubuntu 20.04 LTS上测试的,其他系统可能会有问题,如果有问题请自行解决。
- 使用wget命令下载[install.sh](https://raw.githubusercontent.com/Evil0ctal/Douyin_TikTok_Download_API/main/bash/install.sh)至服务器并运行
```
wget -O install.sh https://raw.githubusercontent.com/Evil0ctal/Douyin_TikTok_Download_API/main/bash/install.sh && sudo bash install.sh
```
> 开启/停止服务
- 使用以下命令来控制服务的运行或停止:
- `sudo systemctl start Douyin_TikTok_Download_API.service`
- `sudo systemctl stop Douyin_TikTok_Download_API.service`
> 开启/关闭开机自动运行
- 使用以下命令来设置服务开机自动运行或取消开机自动运行:
- `sudo systemctl enable Douyin_TikTok_Download_API.service`
- `sudo systemctl disable Douyin_TikTok_Download_API.service`
> 更新项目
- 项目更新时,确保更新脚本在虚拟环境中执行,更新所有依赖。进入项目bash目录并运行update.sh:
- `cd /www/wwwroot/Douyin_TikTok_Download_API/bash && sudo bash update.sh`
## 💽部署(方式二 Docker)
> 💡提示:Docker部署是最简单的部署方式,适合不熟悉Linux的用户,这种方法适合保证环境一致性、隔离性和快速设置。
> 请使用能正常访问Douyin或TikTok的服务器,否则可能会出现奇怪的BUG。
### 准备工作
开始之前,请确保您的系统已安装Docker。如果还未安装Docker,可以从[Docker官方网站](https://www.docker.com/products/docker-desktop/)下载并安装。
### 步骤1:拉取Docker镜像
首先,从Docker Hub拉取最新的Douyin_TikTok_Download_API镜像。
```bash
docker pull evil0ctal/douyin_tiktok_download_api:latest
```
如果需要,可以替换`latest`为你需要部署的具体版本标签。
### 步骤2:运行Docker容器
拉取镜像后,您可以从此镜像启动一个容器。以下是运行容器的命令,包括基本配置:
```bash
docker run -d --name douyin_tiktok_api -p 80:80 evil0ctal/douyin_tiktok_download_api
```
这个命令的每个部分作用如下:
* `-d`:在后台运行容器(分离模式)。
* `--name douyin_tiktok_api `:将容器命名为`douyin_tiktok_api `。
* `-p 80:80`:将主机上的80端口映射到容器的80端口。根据您的配置或端口可用性调整端口号。
* `evil0ctal/douyin_tiktok_download_api`:要使用的Docker镜像名称。
### 步骤3:验证容器是否运行
使用以下命令检查您的容器是否正在运行:
```bash
docker ps
```
这将列出所有活动容器。查找`douyin_tiktok_api `以确认其正常运行。
### 步骤4:访问应用程序
容器运行后,您应该能够通过`http://localhost`或API客户端访问Douyin_TikTok_Download_API。如果配置了不同的端口或从远程位置访问,请调整URL。
### 可选:自定义Docker命令
对于更高级的部署,您可能希望自定义Docker命令,包括环境变量、持久数据的卷挂载或其他Docker参数。这是一个示例:
```bash
docker run -d --name douyin_tiktok_api -p 80:80 \
  -v /path/to/your/data:/data \
  -e MY_ENV_VAR=my_value \
  evil0ctal/douyin_tiktok_download_api
```
* `-v /path/to/your/data:/data`:将主机上的`/path/to/your/data`目录挂载到容器的`/data`目录,用于持久化或共享数据。
* `-e MY_ENV_VAR=my_value`:在容器内设置环境变量`MY_ENV_VAR`,其值为`my_value`。
### 配置文件修改
项目的大部分配置可以在以下几个目录中的`config.yaml`文件进行修改:
* `/crawlers/douyin/web/config.yaml`
* `/crawlers/tiktok/web/config.yaml`
* `/crawlers/tiktok/app/config.yaml`
### 步骤5:停止并移除容器
需要停止和移除容器时,使用以下命令:
```bash
# Stop
docker stop douyin_tiktok_api
# Remove
docker rm douyin_tiktok_api
```
## 📸截图
***API速度测试(对比官方API)***
<details><summary>🔎点击展开截图</summary>
抖音官方API:

本项目API:

TikTok官方API:

本项目API:

</details>
<hr>
***项目界面***
<details><summary>🔎点击展开截图</summary>
Web主界面:

Web main interface:

</details>
<hr>
## 📜 Star历史
[](https://star-history.com/#Evil0ctal/Douyin_TikTok_Download_API&Timeline)
[Apache-2.0 license](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/blob/Stable/LICENSE)
> Start: 2021/11/06
> GitHub: [@Evil0ctal](https://github.com/Evil0ctal)
================================================
FILE: Screenshots/benchmarks/info
================================================
API benchmarks screenshots
================================================
FILE: Screenshots/v3_screenshots/info
================================================
V3.0 Screenshots
================================================
FILE: app/api/endpoints/bilibili_web.py
================================================
from fastapi import APIRouter, Body, Query, Request, HTTPException # 导入FastAPI组件
from app.api.models.APIResponseModel import ResponseModel, ErrorResponseModel # 导入响应模型
from crawlers.bilibili.web.web_crawler import BilibiliWebCrawler # 导入哔哩哔哩web爬虫
router = APIRouter()
BilibiliWebCrawler = BilibiliWebCrawler()
# 获取单个视频详情信息
@router.get("/fetch_one_video", response_model=ResponseModel, summary="获取单个视频详情信息/Get single video data")
async def fetch_one_video(request: Request,
                          bv_id: str = Query(example="BV1M1421t7hT", description="作品id/Video id")):
    """
    # [中文]
    ### 用途:
    - 获取单个视频详情信息
    ### 参数:
    - bv_id: 作品id
    ### 返回:
    - 视频详情信息
    # [English]
    ### Purpose:
    - Get single video data
    ### Parameters:
    - bv_id: Video id
    ### Return:
    - Video data
    # [示例/Example]
    bv_id = "BV1M1421t7hT"
    """
    try:
        data = await BilibiliWebCrawler.fetch_one_video(bv_id)
        return ResponseModel(code=200,
                             router=request.url.path,
                             data=data)
    except Exception as e:
        status_code = 400
        detail = ErrorResponseModel(code=status_code,
                                    router=request.url.path,
                                    params=dict(request.query_params))
        raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取视频流地址
@router.get("/fetch_video_playurl", response_model=ResponseModel, summary="获取视频流地址/Get video playurl")
async def fetch_video_playurl(request: Request,
                              bv_id: str = Query(example="BV1y7411Q7Eq", description="作品id/Video id"),
                              cid: str = Query(example="171776208", description="作品cid/Video cid")):
    """
    # [中文]
    ### 用途:
    - 获取视频流地址
    ### 参数:
    - bv_id: 作品id
    - cid: 作品cid
    ### 返回:
    - 视频流地址
    # [English]
    ### Purpose:
    - Get video playurl
    ### Parameters:
    - bv_id: Video id
    - cid: Video cid
    ### Return:
    - Video playurl
    # [示例/Example]
    bv_id = "BV1y7411Q7Eq"
    cid = "171776208"
    """
    try:
        data = await BilibiliWebCrawler.fetch_video_playurl(bv_id, cid)
        return ResponseModel(code=200,
                             router=request.url.path,
                             data=data)
    except Exception as e:
        status_code = 400
        detail = ErrorResponseModel(code=status_code,
                                    router=request.url.path,
                                    params=dict(request.query_params))
        raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户发布视频作品数据
@router.get("/fetch_user_post_videos", response_model=ResponseModel,
            summary="获取用户主页作品数据/Get user homepage video data")
async def fetch_user_post_videos(request: Request,
                                 uid: str = Query(example="178360345", description="用户UID"),
                                 pn: int = Query(default=1, description="页码/Page number")):
    """
    # [中文]
    ### 用途:
    - 获取用户发布的视频数据
    ### 参数:
    - uid: 用户UID
    - pn: 页码
    ### 返回:
    - 用户发布的视频数据
    # [English]
    ### Purpose:
    - Get user post video data
    ### Parameters:
    - uid: User UID
    - pn: Page number
    ### Return:
    - User posted video data
    # [示例/Example]
    uid = "178360345"
    pn = 1
    """
    try:
        data = await BilibiliWebCrawler.fetch_user_post_videos(uid, pn)
        return ResponseModel(code=200,
                             router=request.url.path,
                             data=data)
    except Exception as e:
        status_code = 400
        detail = ErrorResponseModel(code=status_code,
                                    router=request.url.path,
                                    params=dict(request.query_params))
        raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户所有收藏夹信息
@router.get("/fetch_collect_folders", response_model=ResponseModel,
            summary="获取用户所有收藏夹信息/Get user collection folders")
async def fetch_collect_folders(request: Request,
                                uid: str = Query(example="178360345", description="用户UID")):
    """
    # [中文]
    ### 用途:
    - 获取用户收藏作品数据
    ### 参数:
    - uid: 用户UID
    ### 返回:
    - 用户收藏夹信息
    # [English]
    ### Purpose:
    - Get user collection folders
    ### Parameters:
    - uid: User UID
    ### Return:
    - user collection folders
    # [示例/Example]
    uid = "178360345"
    """
    try:
        data = await BilibiliWebCrawler.fetch_collect_folders(uid)
        return ResponseModel(code=200,
                             router=request.url.path,
                             data=data)
    except Exception as e:
        status_code = 400
        detail = ErrorResponseModel(code=status_code,
                                    router=request.url.path,
                                    params=dict(request.query_params))
        raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取指定收藏夹内视频数据
@router.get("/fetch_user_collection_videos", response_model=ResponseModel,
summary="获取指定收藏夹内视频数据/Gets video data from a collection folder")
async def fetch_user_collection_videos(request: Request,
folder_id: str = Query(example="1756059545",
description="收藏夹id/collection folder id"),
pn: int = Query(default=1, description="页码/Page number")
):
"""
# [中文]
### 用途:
- 获取指定收藏夹内视频数据
### 参数:
- folder_id: 收藏夹id
- pn: 页码
### 返回:
- 指定收藏夹内视频数据
# [English]
### Purpose:
- Gets video data from a collection folder
### Parameters:
- folder_id: collection folder id
- pn: Page number
### Return:
- video data from collection folder
# [示例/Example]
folder_id = "1756059545"
pn = 1
"""
try:
data = await BilibiliWebCrawler.fetch_folder_videos(folder_id, pn)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取指定用户的信息
@router.get("/fetch_user_profile", response_model=ResponseModel,
summary="获取指定用户的信息/Get information of specified user")
async def fetch_user_profile(request: Request,
uid: str = Query(example="178360345", description="用户UID")):
"""
# [中文]
### 用途:
- 获取指定用户的信息
### 参数:
- uid: 用户UID
### 返回:
- 指定用户的个人信息
# [English]
### Purpose:
- Get information of specified user
### Parameters:
- uid: User UID
### Return:
- information of specified user
# [示例/Example]
uid = "178360345"
"""
try:
data = await BilibiliWebCrawler.fetch_user_profile(uid)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取综合热门视频信息
@router.get("/fetch_com_popular", response_model=ResponseModel,
summary="获取综合热门视频信息/Get comprehensive popular video information")
async def fetch_com_popular(request: Request,
pn: int = Query(default=1, description="页码/Page number")):
"""
# [中文]
### 用途:
- 获取综合热门视频信息
### 参数:
- pn: 页码
### 返回:
- 综合热门视频信息
# [English]
### Purpose:
- Get comprehensive popular video information
### Parameters:
- pn: Page number
### Return:
- comprehensive popular video information
# [示例/Example]
pn = 1
"""
try:
data = await BilibiliWebCrawler.fetch_com_popular(pn)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取指定视频的评论
@router.get("/fetch_video_comments", response_model=ResponseModel,
summary="获取指定视频的评论/Get comments on the specified video")
async def fetch_video_comments(request: Request,
bv_id: str = Query(example="BV1M1421t7hT", description="作品id/Video id"),
pn: int = Query(default=1, description="页码/Page number")):
"""
# [中文]
### 用途:
- 获取指定视频的评论
### 参数:
- bv_id: 作品id
- pn: 页码
### 返回:
- 指定视频的评论数据
# [English]
### Purpose:
- Get comments on the specified video
### Parameters:
- bv_id: Video id
- pn: Page number
### Return:
- comments of the specified video
# [示例/Example]
bv_id = "BV1M1421t7hT"
pn = 1
"""
try:
data = await BilibiliWebCrawler.fetch_video_comments(bv_id, pn)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取视频下指定评论的回复
@router.get("/fetch_comment_reply", response_model=ResponseModel,
summary="获取视频下指定评论的回复/Get reply to the specified comment")
async def fetch_comment_reply(request: Request,
bv_id: str = Query(example="BV1M1421t7hT", description="作品id/Video id"),
pn: int = Query(default=1, description="页码/Page number"),
rpid: str = Query(example="237109455120", description="回复id/Reply id")):
"""
# [中文]
### 用途:
- 获取视频下指定评论的回复
### 参数:
- bv_id: 作品id
- pn: 页码
- rpid: 回复id
### 返回:
- 指定评论的回复数据
# [English]
### Purpose:
- Get reply to the specified comment
### Parameters:
- bv_id: Video id
- pn: Page number
- rpid: Reply id
### Return:
- Reply of the specified comment
# [示例/Example]
bv_id = "BV1M1421t7hT"
pn = 1
rpid = "237109455120"
"""
try:
data = await BilibiliWebCrawler.fetch_comment_reply(bv_id, pn, rpid)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取指定用户动态
@router.get("/fetch_user_dynamic", response_model=ResponseModel,
summary="获取指定用户动态/Get dynamic information of specified user")
async def fetch_user_dynamic(request: Request,
uid: str = Query(example="16015678", description="用户UID"),
offset: str = Query(default="", example="953154282154098691",
description="开始索引/offset")):
"""
# [中文]
### 用途:
- 获取指定用户动态
### 参数:
- uid: 用户UID
- offset: 开始索引
### 返回:
- 指定用户动态数据
# [English]
### Purpose:
- Get dynamic information of specified user
### Parameters:
- uid: User UID
- offset: offset
### Return:
- dynamic information of specified user
# [示例/Example]
uid = "16015678"
offset = "953154282154098691"
"""
try:
data = await BilibiliWebCrawler.fetch_user_dynamic(uid, offset)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取视频实时弹幕
@router.get("/fetch_video_danmaku", response_model=ResponseModel, summary="获取视频实时弹幕/Get Video Danmaku")
async def fetch_video_danmaku(request: Request,
cid: str = Query(example="1639235405", description="作品cid/Video cid")):
"""
# [中文]
### 用途:
- 获取视频实时弹幕
### 参数:
- cid: 作品cid
### 返回:
- 视频实时弹幕
# [English]
### Purpose:
- Get Video Danmaku
### Parameters:
- cid: Video cid
### Return:
- Video Danmaku
# [示例/Example]
cid = "1639235405"
"""
try:
data = await BilibiliWebCrawler.fetch_video_danmaku(cid)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取指定直播间信息
@router.get("/fetch_live_room_detail", response_model=ResponseModel,
summary="获取指定直播间信息/Get information of specified live room")
async def fetch_live_room_detail(request: Request,
room_id: str = Query(example="22816111", description="直播间ID/Live room ID")):
"""
# [中文]
### 用途:
- 获取指定直播间信息
### 参数:
- room_id: 直播间ID
### 返回:
- 指定直播间信息
# [English]
### Purpose:
- Get information of specified live room
### Parameters:
- room_id: Live room ID
### Return:
- information of specified live room
# [示例/Example]
room_id = "22816111"
"""
try:
data = await BilibiliWebCrawler.fetch_live_room_detail(room_id)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取指定直播间视频流
@router.get("/fetch_live_videos", response_model=ResponseModel,
summary="获取直播间视频流/Get live video data of specified room")
async def fetch_live_videos(request: Request,
room_id: str = Query(example="1815229528", description="直播间ID/Live room ID")):
"""
# [中文]
### 用途:
- 获取指定直播间视频流
### 参数:
- room_id: 直播间ID
### 返回:
- 指定直播间视频流
# [English]
### Purpose:
- Get live video data of specified room
### Parameters:
- room_id: Live room ID
### Return:
- live video data of specified room
# [示例/Example]
room_id = "1815229528"
"""
try:
data = await BilibiliWebCrawler.fetch_live_videos(room_id)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取指定分区正在直播的主播
@router.get("/fetch_live_streamers", response_model=ResponseModel,
summary="获取指定分区正在直播的主播/Get live streamers of specified live area")
async def fetch_live_streamers(request: Request,
area_id: str = Query(example="9", description="直播分区id/Live area ID"),
pn: int = Query(default=1, description="页码/Page number")):
"""
# [中文]
### 用途:
- 获取指定分区正在直播的主播
### 参数:
- area_id: 直播分区id
- pn: 页码
### 返回:
- 指定分区正在直播的主播
# [English]
### Purpose:
- Get live streamers of specified live area
### Parameters:
- area_id: Live area ID
- pn: Page number
### Return:
- live streamers of specified live area
# [示例/Example]
area_id = "9"
pn = 1
"""
try:
data = await BilibiliWebCrawler.fetch_live_streamers(area_id, pn)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取所有直播分区列表
@router.get("/fetch_all_live_areas", response_model=ResponseModel,
summary="获取所有直播分区列表/Get a list of all live areas")
async def fetch_all_live_areas(request: Request):
"""
# [中文]
### 用途:
- 获取所有直播分区列表
### 参数:
### 返回:
- 所有直播分区列表
# [English]
### Purpose:
- Get a list of all live areas
### Parameters:
### Return:
- list of all live areas
# [示例/Example]
"""
try:
data = await BilibiliWebCrawler.fetch_all_live_areas()
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 通过bv号获得视频aid号
@router.get("/bv_to_aid", response_model=ResponseModel, summary="通过bv号获得视频aid号/Generate aid by bvid")
async def bv_to_aid(request: Request,
bv_id: str = Query(example="BV1M1421t7hT", description="作品id/Video id")):
"""
# [中文]
### 用途:
- 通过bv号获得视频aid号
### 参数:
- bv_id: 作品id
### 返回:
- 视频aid号
# [English]
### Purpose:
- Generate aid by bvid
### Parameters:
- bv_id: Video id
### Return:
- Video aid
# [示例/Example]
bv_id = "BV1M1421t7hT"
"""
try:
data = await BilibiliWebCrawler.bv_to_aid(bv_id)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 通过bv号获得视频分p信息
@router.get("/fetch_video_parts", response_model=ResponseModel, summary="通过bv号获得视频分p信息/Get Video Parts By bvid")
async def fetch_video_parts(request: Request,
bv_id: str = Query(example="BV1vf421i7hV", description="作品id/Video id")):
"""
# [中文]
### 用途:
- 通过bv号获得视频分p信息
### 参数:
- bv_id: 作品id
### 返回:
- 视频分p信息
# [English]
### Purpose:
- Get Video Parts By bvid
### Parameters:
- bv_id: Video id
### Return:
- Video Parts
# [示例/Example]
bv_id = "BV1vf421i7hV"
"""
try:
data = await BilibiliWebCrawler.fetch_video_parts(bv_id)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
================================================
FILE: app/api/endpoints/douyin_web.py
================================================
from typing import List
from fastapi import APIRouter, Body, Query, Request, HTTPException # 导入FastAPI组件
from app.api.models.APIResponseModel import ResponseModel, ErrorResponseModel # 导入响应模型
from crawlers.douyin.web.web_crawler import DouyinWebCrawler # 导入抖音Web爬虫
router = APIRouter()
DouyinWebCrawler = DouyinWebCrawler()
# 获取单个作品数据
@router.get("/fetch_one_video", response_model=ResponseModel, summary="获取单个作品数据/Get single video data")
async def fetch_one_video(request: Request,
aweme_id: str = Query(example="7372484719365098803", description="作品id/Video id")):
"""
# [中文]
### 用途:
- 获取单个作品数据
### 参数:
- aweme_id: 作品id
### 返回:
- 作品数据
# [English]
### Purpose:
- Get single video data
### Parameters:
- aweme_id: Video id
### Return:
- Video data
# [示例/Example]
aweme_id = "7372484719365098803"
"""
try:
data = await DouyinWebCrawler.fetch_one_video(aweme_id)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
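# Every endpoint in this module repeats the same try/except wrapper around a
# crawler call. A hypothetical helper like the one below could factor that
# pattern out; `ApiError` and the plain dicts stand in for fastapi.HTTPException
# and ResponseModel/ErrorResponseModel, simplified so the sketch is
# self-contained.

```python
import asyncio


class ApiError(Exception):
    """Stand-in for fastapi.HTTPException in this sketch."""

    def __init__(self, status_code: int, detail: dict):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail


async def respond(router: str, params: dict, coro):
    """Await a crawler coroutine, wrapping success and failure uniformly."""
    try:
        # Success path: mirror ResponseModel(code=200, router=..., data=...)
        return {"code": 200, "router": router, "data": await coro}
    except Exception:
        # Failure path: mirror ErrorResponseModel + HTTPException(400)
        raise ApiError(400, {"code": 400, "router": router, "params": params})
```

With the real models, an endpoint body would then shrink to something like `return await respond(request.url.path, dict(request.query_params), DouyinWebCrawler.fetch_one_video(aweme_id))`.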
# 获取用户作品集合数据
@router.get("/fetch_user_post_videos", response_model=ResponseModel,
summary="获取用户主页作品数据/Get user homepage video data")
async def fetch_user_post_videos(request: Request,
sec_user_id: str = Query(
example="MS4wLjABAAAANXSltcLCzDGmdNFI2Q_QixVTr67NiYzjKOIP5s03CAE",
description="用户sec_user_id/User sec_user_id"),
max_cursor: int = Query(default=0, description="最大游标/Maximum cursor"),
count: int = Query(default=20, description="每页数量/Number per page")):
"""
# [中文]
### 用途:
- 获取用户主页作品数据
### 参数:
- sec_user_id: 用户sec_user_id
- max_cursor: 最大游标
- count: 最大数量
### 返回:
- 用户作品数据
# [English]
### Purpose:
- Get user homepage video data
### Parameters:
- sec_user_id: User sec_user_id
- max_cursor: Maximum cursor
- count: Maximum count number
### Return:
- User video data
# [示例/Example]
sec_user_id = "MS4wLjABAAAANXSltcLCzDGmdNFI2Q_QixVTr67NiYzjKOIP5s03CAE"
max_cursor = 0
count = 20
"""
try:
data = await DouyinWebCrawler.fetch_user_post_videos(sec_user_id, max_cursor, count)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
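# max_cursor pagination is typically consumed by feeding the cursor returned
# with each page into the next request until the server reports no more data.
# A minimal sketch, assuming the usual Douyin response field names
# ("aweme_list", "max_cursor", "has_more") — those names are an assumption
# here, not read from this module.

```python
async def fetch_all_posts(fetch_page, sec_user_id: str, count: int = 20):
    """Collect every page of a user's posts by following max_cursor."""
    videos, cursor = [], 0
    while True:
        page = await fetch_page(sec_user_id, cursor, count)
        videos.extend(page.get("aweme_list") or [])
        if not page.get("has_more"):
            return videos
        cursor = page["max_cursor"]  # resume from where this page ended
```

With the real crawler, `fetch_page` would be `DouyinWebCrawler.fetch_user_post_videos`; the stub in a test can fake the same response shape.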
# 获取用户喜欢作品数据
@router.get("/fetch_user_like_videos", response_model=ResponseModel,
summary="获取用户喜欢作品数据/Get user like video data")
async def fetch_user_like_videos(request: Request,
sec_user_id: str = Query(
example="MS4wLjABAAAAW9FWcqS7RdQAWPd2AA5fL_ilmqsIFUCQ_Iym6Yh9_cUa6ZRqVLjVQSUjlHrfXY1Y",
description="用户sec_user_id/User sec_user_id"),
max_cursor: int = Query(default=0, description="最大游标/Maximum cursor"),
counts: int = Query(default=20, description="每页数量/Number per page")):
"""
# [中文]
### 用途:
- 获取用户喜欢作品数据
### 参数:
- sec_user_id: 用户sec_user_id
- max_cursor: 最大游标
- counts: 每页数量
### 返回:
- 用户作品数据
# [English]
### Purpose:
- Get user like video data
### Parameters:
- sec_user_id: User sec_user_id
- max_cursor: Maximum cursor
- counts: Number per page
### Return:
- User video data
# [示例/Example]
sec_user_id = "MS4wLjABAAAAW9FWcqS7RdQAWPd2AA5fL_ilmqsIFUCQ_Iym6Yh9_cUa6ZRqVLjVQSUjlHrfXY1Y"
max_cursor = 0
counts = 20
"""
try:
data = await DouyinWebCrawler.fetch_user_like_videos(sec_user_id, max_cursor, counts)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户收藏作品数据(用户提供自己的Cookie)
@router.get("/fetch_user_collection_videos", response_model=ResponseModel,
summary="获取用户收藏作品数据/Get user collection video data")
async def fetch_user_collection_videos(request: Request,
cookie: str = Query(example="YOUR_COOKIE",
description="用户网页版抖音Cookie/Your web version of Douyin Cookie"),
max_cursor: int = Query(default=0, description="最大游标/Maximum cursor"),
counts: int = Query(default=20, description="每页数量/Number per page")):
"""
# [中文]
### 用途:
- 获取用户收藏作品数据
### 参数:
- cookie: 用户网页版抖音Cookie(此接口需要用户提供自己的Cookie)
- max_cursor: 最大游标
- counts: 每页数量
### 返回:
- 用户作品数据
# [English]
### Purpose:
- Get user collection video data
### Parameters:
- cookie: User's web version of Douyin Cookie (This interface requires users to provide their own Cookie)
- max_cursor: Maximum cursor
- counts: Number per page
### Return:
- User video data
# [示例/Example]
cookie = "YOUR_COOKIE"
max_cursor = 0
counts = 20
"""
try:
data = await DouyinWebCrawler.fetch_user_collection_videos(cookie, max_cursor, counts)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户合辑作品数据
@router.get("/fetch_user_mix_videos", response_model=ResponseModel,
summary="获取用户合辑作品数据/Get user mix video data")
async def fetch_user_mix_videos(request: Request,
mix_id: str = Query(example="7348687990509553679", description="合辑id/Mix id"),
max_cursor: int = Query(default=0, description="最大游标/Maximum cursor"),
counts: int = Query(default=20, description="每页数量/Number per page")):
"""
# [中文]
### 用途:
- 获取用户合辑作品数据
### 参数:
- mix_id: 合辑id
- max_cursor: 最大游标
- counts: 每页数量
### 返回:
- 用户作品数据
# [English]
### Purpose:
- Get user mix video data
### Parameters:
- mix_id: Mix id
- max_cursor: Maximum cursor
- counts: Number per page
### Return:
- User video data
# [示例/Example]
url = "https://www.douyin.com/collection/7348687990509553679"
mix_id = "7348687990509553679"
max_cursor = 0
counts = 20
"""
try:
data = await DouyinWebCrawler.fetch_user_mix_videos(mix_id, max_cursor, counts)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户直播流数据
@router.get("/fetch_user_live_videos", response_model=ResponseModel,
summary="获取用户直播流数据/Get user live video data")
async def fetch_user_live_videos(request: Request,
webcast_id: str = Query(example="285520721194",
description="直播间webcast_id/Room webcast_id")):
"""
# [中文]
### 用途:
- 获取用户直播流数据
### 参数:
- webcast_id: 直播间webcast_id
### 返回:
- 直播流数据
# [English]
### Purpose:
- Get user live video data
### Parameters:
- webcast_id: Room webcast_id
### Return:
- Live stream data
# [示例/Example]
webcast_id = "285520721194"
"""
try:
data = await DouyinWebCrawler.fetch_user_live_videos(webcast_id)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取指定用户的直播流数据
@router.get("/fetch_user_live_videos_by_room_id",
response_model=ResponseModel,
summary="获取指定用户的直播流数据/Get live video data of specified user")
async def fetch_user_live_videos_by_room_id(request: Request,
room_id: str = Query(example="7318296342189919011",
description="直播间room_id/Room room_id")):
"""
# [中文]
### 用途:
- 获取指定用户的直播流数据
### 参数:
- room_id: 直播间room_id
### 返回:
- 直播流数据
# [English]
### Purpose:
- Get live video data of specified user
### Parameters:
- room_id: Room room_id
### Return:
- Live stream data
# [示例/Example]
room_id = "7318296342189919011"
"""
try:
data = await DouyinWebCrawler.fetch_user_live_videos_by_room_id(room_id)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取直播间送礼用户排行榜
@router.get("/fetch_live_gift_ranking",
response_model=ResponseModel,
summary="获取直播间送礼用户排行榜/Get live room gift user ranking")
async def fetch_live_gift_ranking(request: Request,
room_id: str = Query(example="7356585666190461731",
description="直播间room_id/Room room_id"),
rank_type: int = Query(default=30, description="排行类型/Leaderboard type")):
"""
# [中文]
### 用途:
- 获取直播间送礼用户排行榜
### 参数:
- room_id: 直播间room_id
- rank_type: 排行类型,默认为30不用修改。
### 返回:
- 排行榜数据
# [English]
### Purpose:
- Get live room gift user ranking
### Parameters:
- room_id: Room room_id
- rank_type: Leaderboard type, default is 30, no need to modify.
### Return:
- Leaderboard data
# [示例/Example]
room_id = "7356585666190461731"
rank_type = 30
"""
try:
data = await DouyinWebCrawler.fetch_live_gift_ranking(room_id, rank_type)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 抖音直播间商品信息
@router.get("/fetch_live_room_product_result",
response_model=ResponseModel,
summary="抖音直播间商品信息/Douyin live room product information")
async def fetch_live_room_product_result(request: Request,
cookie: str = Query(example="YOUR_COOKIE",
description="用户网页版抖音Cookie/Your web version of Douyin Cookie"),
room_id: str = Query(example="7356742011975715619",
description="直播间room_id/Room room_id"),
author_id: str = Query(example="2207432981615527",
description="作者id/Author id"),
limit: int = Query(default=20, description="数量/Number")):
"""
# [中文]
### 用途:
- 抖音直播间商品信息
### 参数:
- cookie: 用户网页版抖音Cookie(此接口需要用户提供自己的Cookie,如获取失败请手动过一次验证码)
- room_id: 直播间room_id
- author_id: 作者id
- limit: 数量
### 返回:
- 商品信息
# [English]
### Purpose:
- Douyin live room product information
### Parameters:
- cookie: User's web version of Douyin Cookie (This interface requires users to provide their own Cookie, if the acquisition fails, please manually pass the captcha code once)
- room_id: Room room_id
- author_id: Author id
- limit: Number
### Return:
- Product information
# [示例/Example]
cookie = "YOUR_COOKIE"
room_id = "7356742011975715619"
author_id = "2207432981615527"
limit = 20
"""
try:
data = await DouyinWebCrawler.fetch_live_room_product_result(cookie, room_id, author_id, limit)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取指定用户的信息
@router.get("/handler_user_profile",
response_model=ResponseModel,
summary="获取指定用户的信息/Get information of specified user")
async def handler_user_profile(request: Request,
sec_user_id: str = Query(
example="MS4wLjABAAAAW9FWcqS7RdQAWPd2AA5fL_ilmqsIFUCQ_Iym6Yh9_cUa6ZRqVLjVQSUjlHrfXY1Y",
description="用户sec_user_id/User sec_user_id")):
"""
# [中文]
### 用途:
- 获取指定用户的信息
### 参数:
- sec_user_id: 用户sec_user_id
### 返回:
- 用户信息
# [English]
### Purpose:
- Get information of specified user
### Parameters:
- sec_user_id: User sec_user_id
### Return:
- User information
# [示例/Example]
sec_user_id = "MS4wLjABAAAAW9FWcqS7RdQAWPd2AA5fL_ilmqsIFUCQ_Iym6Yh9_cUa6ZRqVLjVQSUjlHrfXY1Y"
"""
try:
data = await DouyinWebCrawler.handler_user_profile(sec_user_id)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取单个视频评论数据
@router.get("/fetch_video_comments",
response_model=ResponseModel,
summary="获取单个视频评论数据/Get single video comments data")
async def fetch_video_comments(request: Request,
aweme_id: str = Query(example="7372484719365098803", description="作品id/Video id"),
cursor: int = Query(default=0, description="游标/Cursor"),
count: int = Query(default=20, description="数量/Number")):
"""
# [中文]
### 用途:
- 获取单个视频评论数据
### 参数:
- aweme_id: 作品id
- cursor: 游标
- count: 数量
### 返回:
- 评论数据
# [English]
### Purpose:
- Get single video comments data
### Parameters:
- aweme_id: Video id
- cursor: Cursor
- count: Number
### Return:
- Comments data
# [示例/Example]
aweme_id = "7372484719365098803"
cursor = 0
count = 20
"""
try:
data = await DouyinWebCrawler.fetch_video_comments(aweme_id, cursor, count)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取指定视频的评论回复数据
@router.get("/fetch_video_comment_replies",
response_model=ResponseModel,
summary="获取指定视频的评论回复数据/Get comment replies data of specified video")
async def fetch_video_comments_reply(request: Request,
item_id: str = Query(example="7354666303006723354", description="作品id/Video id"),
comment_id: str = Query(example="7354669356632638218",
description="评论id/Comment id"),
cursor: int = Query(default=0, description="游标/Cursor"),
count: int = Query(default=20, description="数量/Number")):
"""
# [中文]
### 用途:
- 获取指定视频的评论回复数据
### 参数:
- item_id: 作品id
- comment_id: 评论id
- cursor: 游标
- count: 数量
### 返回:
- 评论回复数据
# [English]
### Purpose:
- Get comment replies data of specified video
### Parameters:
- item_id: Video id
- comment_id: Comment id
- cursor: Cursor
- count: Number
### Return:
- Comment replies data
# [示例/Example]
item_id = "7354666303006723354"
comment_id = "7354669356632638218"
cursor = 0
count = 20
"""
try:
data = await DouyinWebCrawler.fetch_video_comments_reply(item_id, comment_id, cursor, count)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 生成真实msToken
@router.get("/generate_real_msToken",
response_model=ResponseModel,
summary="生成真实msToken/Generate real msToken")
async def generate_real_msToken(request: Request):
"""
# [中文]
### 用途:
- 生成真实msToken
### 返回:
- msToken
# [English]
### Purpose:
- Generate real msToken
### Return:
- msToken
"""
try:
data = await DouyinWebCrawler.gen_real_msToken()
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 生成ttwid
@router.get("/generate_ttwid",
response_model=ResponseModel,
summary="生成ttwid/Generate ttwid")
async def generate_ttwid(request: Request):
"""
# [中文]
### 用途:
- 生成ttwid
### 返回:
- ttwid
# [English]
### Purpose:
- Generate ttwid
### Return:
- ttwid
"""
try:
data = await DouyinWebCrawler.gen_ttwid()
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 生成verify_fp
@router.get("/generate_verify_fp",
response_model=ResponseModel,
summary="生成verify_fp/Generate verify_fp")
async def generate_verify_fp(request: Request):
"""
# [中文]
### 用途:
- 生成verify_fp
### 返回:
- verify_fp
# [English]
### Purpose:
- Generate verify_fp
### Return:
- verify_fp
"""
try:
data = await DouyinWebCrawler.gen_verify_fp()
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 生成s_v_web_id
@router.get("/generate_s_v_web_id",
response_model=ResponseModel,
summary="生成s_v_web_id/Generate s_v_web_id")
async def generate_s_v_web_id(request: Request):
"""
# [中文]
### 用途:
- 生成s_v_web_id
### 返回:
- s_v_web_id
# [English]
### Purpose:
- Generate s_v_web_id
### Return:
- s_v_web_id
"""
try:
data = await DouyinWebCrawler.gen_s_v_web_id()
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 使用接口地址生成Xbogus参数
@router.get("/generate_x_bogus",
response_model=ResponseModel,
summary="使用接口网址生成X-Bogus参数/Generate X-Bogus parameter using API URL")
async def generate_x_bogus(request: Request,
url: str = Query(
example="https://www.douyin.com/aweme/v1/web/aweme/detail/?aweme_id=7148736076176215311&device_platform=webapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=170400&version_name=17.4.0&cookie_enabled=true&screen_width=1920&screen_height=1080&browser_language=zh-CN&browser_platform=Win32&browser_name=Edge&browser_version=117.0.2045.47&browser_online=true&engine_name=Blink&engine_version="),
user_agent: str = Query(
example="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36")):
"""
# [中文]
### 用途:
- 使用接口网址生成X-Bogus参数
### 参数:
- url: 接口网址
# [English]
### Purpose:
- Generate X-Bogus parameter using API URL
### Parameters:
- url: API URL
# [示例/Example]
url = "https://www.douyin.com/aweme/v1/web/aweme/detail/?aweme_id=7148736076176215311&device_platform=webapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=170400&version_name=17.4.0&cookie_enabled=true&screen_width=1920&screen_height=1080&browser_language=zh-CN&browser_platform=Win32&browser_name=Edge&browser_version=117.0.2045.47&browser_online=true&engine_name=Blink&engine_version=117.0.0.0&os_name=Windows&os_version=10&cpu_core_num=128&device_memory=10240&platform=PC&downlink=10&effective_type=4g&round_trip_time=100"
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
"""
try:
x_bogus = await DouyinWebCrawler.get_x_bogus(url, user_agent)
return ResponseModel(code=200,
router=request.url.path,
data=x_bogus)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
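# The endpoint above only returns the signature; attaching it to the request
# URL is left to the caller. A small sketch of that step with urllib — the
# X-Bogus value in the example is a placeholder, not a real signature.

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse


def append_x_bogus(url: str, x_bogus: str) -> str:
    """Append X-Bogus as the last query parameter of a signed API URL."""
    parts = urlparse(url)
    query = parse_qsl(parts.query, keep_blank_values=True)
    query.append(("X-Bogus", x_bogus))
    return urlunparse(parts._replace(query=urlencode(query)))
```

`append_x_bogus("https://example.com/api?a=1", "DFSzsw")` returns `"https://example.com/api?a=1&X-Bogus=DFSzsw"`; `keep_blank_values=True` preserves empty parameters such as the trailing `engine_version=` in the example URL.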
# 使用接口地址生成Abogus参数
@router.get("/generate_a_bogus",
response_model=ResponseModel,
summary="使用接口网址生成A-Bogus参数/Generate A-Bogus parameter using API URL")
async def generate_a_bogus(request: Request,
url: str = Query(
example="https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_online=true&engine_name=Gecko&os_name=Windows&os_version=10&platform=PC&screen_width=1920&screen_height=1080&browser_version=124.0&engine_version=122.0.0.0&cpu_core_num=12&device_memory=8&aweme_id=7372484719365098803"),
user_agent: str = Query(
example="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36")):
"""
# [中文]
### 用途:
- 使用接口网址生成A-Bogus参数
### 参数:
- url: 接口网址
- user_agent: 用户代理,暂时不支持自定义,直接使用默认值即可。
# [English]
### Purpose:
- Generate A-Bogus parameter using API URL
### Parameters:
- url: API URL
- user_agent: User agent, temporarily does not support customization, just use the default value.
# [示例/Example]
url = "https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_online=true&engine_name=Gecko&os_name=Windows&os_version=10&platform=PC&screen_width=1920&screen_height=1080&browser_version=124.0&engine_version=122.0.0.0&cpu_core_num=12&device_memory=8&aweme_id=7372484719365098803"
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36"
"""
try:
a_bogus = await DouyinWebCrawler.get_a_bogus(url, user_agent)
return ResponseModel(code=200,
router=request.url.path,
data=a_bogus)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 提取单个用户id/Extract a single user id
@router.get("/get_sec_user_id",
response_model=ResponseModel,
summary="提取单个用户id/Extract single user id")
async def get_sec_user_id(request: Request,
url: str = Query(
example="https://www.douyin.com/user/MS4wLjABAAAANXSltcLCzDGmdNFI2Q_QixVTr67NiYzjKOIP5s03CAE")):
"""
# [中文]
### 用途:
- 提取单个用户id
### 参数:
- url: 用户主页链接
### 返回:
- 用户sec_user_id
# [English]
### Purpose:
- Extract single user id
### Parameters:
- url: User homepage link
### Return:
- User sec_user_id
# [示例/Example]
url = "https://www.douyin.com/user/MS4wLjABAAAANXSltcLCzDGmdNFI2Q_QixVTr67NiYzjKOIP5s03CAE"
"""
try:
data = await DouyinWebCrawler.get_sec_user_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 提取列表用户id/Extract user ids from a list of links
@router.post("/get_all_sec_user_id",
response_model=ResponseModel,
summary="提取列表用户id/Extract list user id")
async def get_all_sec_user_id(request: Request,
url: List[str] = Body(
example=[
"https://www.douyin.com/user/MS4wLjABAAAANXSltcLCzDGmdNFI2Q_QixVTr67NiYzjKOIP5s03CAE?vid=7285950278132616463",
"https://www.douyin.com/user/MS4wLjABAAAAVsneOf144eGDFf8Xp9QNb1VW6ovXnNT5SqJBhJfe8KQBKWKDTWK5Hh-_i9mJzb8C",
"长按复制此条消息,打开抖音搜索,查看TA的更多作品。 https://v.douyin.com/idFqvUms/",
"https://v.douyin.com/idFqvUms/",
],
description="用户主页链接列表/User homepage link list"
)):
"""
# [中文]
### 用途:
- 提取列表用户id
### 参数:
- url: 用户主页链接列表
### 返回:
- 用户sec_user_id列表
# [English]
### Purpose:
- Extract list user id
### Parameters:
- url: User homepage link list
### Return:
- User sec_user_id list
# [示例/Example]
    ```json
    [
        "https://www.douyin.com/user/MS4wLjABAAAANXSltcLCzDGmdNFI2Q_QixVTr67NiYzjKOIP5s03CAE?vid=7285950278132616463",
        "https://www.douyin.com/user/MS4wLjABAAAAVsneOf144eGDFf8Xp9QNb1VW6ovXnNT5SqJBhJfe8KQBKWKDTWK5Hh-_i9mJzb8C",
        "长按复制此条消息,打开抖音搜索,查看TA的更多作品。 https://v.douyin.com/idFqvUms/",
        "https://v.douyin.com/idFqvUms/"
    ]
    ```
"""
try:
data = await DouyinWebCrawler.get_all_sec_user_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 提取单个作品id/Extract a single video id
@router.get("/get_aweme_id",
response_model=ResponseModel,
summary="提取单个作品id/Extract single video id")
async def get_aweme_id(request: Request,
url: str = Query(example="https://www.douyin.com/video/7298145681699622182")):
"""
# [中文]
### 用途:
- 提取单个作品id
### 参数:
- url: 作品链接
### 返回:
- 作品id
# [English]
### Purpose:
- Extract single video id
### Parameters:
- url: Video link
### Return:
- Video id
# [示例/Example]
url = "https://www.douyin.com/video/7298145681699622182"
"""
try:
data = await DouyinWebCrawler.get_aweme_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 提取列表作品id/Extract video ids from a list of links
@router.post("/get_all_aweme_id",
response_model=ResponseModel,
summary="提取列表作品id/Extract list video id")
async def get_all_aweme_id(request: Request,
url: List[str] = Body(
example=[
"0.53 02/26 I@v.sE Fus:/ 你别太帅了郑润泽# 现场版live # 音乐节 # 郑润泽 https://v.douyin.com/iRNBho6u/ 复制此链接,打开Dou音搜索,直接观看视频!",
"https://v.douyin.com/iRNBho6u/",
"https://www.iesdouyin.com/share/video/7298145681699622182/?region=CN&mid=7298145762238565171&u_code=l1j9bkbd&did=MS4wLjABAAAAtqpCx0hpOERbdSzQdjRZw-wFPxaqdbAzsKDmbJMUI3KWlMGQHC-n6dXAqa-dM2EP&iid=MS4wLjABAAAANwkJuWIRFOzg5uCpDRpMj4OX-QryoDgn-yYlXQnRwQQ&with_sec_did=1&titleType=title&share_sign=05kGlqGmR4_IwCX.ZGk6xuL0osNA..5ur7b0jbOx6cc-&share_version=170400&ts=1699262937&from_aid=6383&from_ssr=1&from=web_code_link",
"https://www.douyin.com/video/7298145681699622182?previous_page=web_code_link",
"https://www.douyin.com/video/7298145681699622182",
],
description="作品链接列表/Video link list")):
"""
# [中文]
### 用途:
- 提取列表作品id
### 参数:
- url: 作品链接列表
### 返回:
- 作品id列表
# [English]
### Purpose:
- Extract list video id
### Parameters:
- url: Video link list
### Return:
- Video id list
# [示例/Example]
    ```json
    [
        "0.53 02/26 I@v.sE Fus:/ 你别太帅了郑润泽# 现场版live # 音乐节 # 郑润泽 https://v.douyin.com/iRNBho6u/ 复制此链接,打开Dou音搜索,直接观看视频!",
        "https://v.douyin.com/iRNBho6u/",
        "https://www.iesdouyin.com/share/video/7298145681699622182/?region=CN&mid=7298145762238565171&u_code=l1j9bkbd&did=MS4wLjABAAAAtqpCx0hpOERbdSzQdjRZw-wFPxaqdbAzsKDmbJMUI3KWlMGQHC-n6dXAqa-dM2EP&iid=MS4wLjABAAAANwkJuWIRFOzg5uCpDRpMj4OX-QryoDgn-yYlXQnRwQQ&with_sec_did=1&titleType=title&share_sign=05kGlqGmR4_IwCX.ZGk6xuL0osNA..5ur7b0jbOx6cc-&share_version=170400&ts=1699262937&from_aid=6383&from_ssr=1&from=web_code_link",
        "https://www.douyin.com/video/7298145681699622182?previous_page=web_code_link",
        "https://www.douyin.com/video/7298145681699622182"
    ]
    ```
"""
try:
data = await DouyinWebCrawler.get_all_aweme_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 提取单个直播间号/Extract a single webcast id
@router.get("/get_webcast_id",
response_model=ResponseModel,
            summary="提取单个直播间号/Extract single webcast id")
async def get_webcast_id(request: Request,
url: str = Query(example="https://live.douyin.com/775841227732")):
"""
# [中文]
### 用途:
    - 提取单个直播间号
### 参数:
- url: 直播间链接
### 返回:
- 直播间号
# [English]
### Purpose:
    - Extract single webcast id
### Parameters:
- url: Room link
### Return:
- Room id
# [示例/Example]
url = "https://live.douyin.com/775841227732"
"""
try:
data = await DouyinWebCrawler.get_webcast_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 提取列表直播间号/Extract webcast ids from a list of links
@router.post("/get_all_webcast_id",
response_model=ResponseModel,
summary="提取列表直播间号/Extract list webcast id")
async def get_all_webcast_id(request: Request,
url: List[str] = Body(
example=[
"https://live.douyin.com/775841227732",
"https://live.douyin.com/775841227732?room_id=7318296342189919011&enter_from_merge=web_share_link&enter_method=web_share_link&previous_page=app_code_link",
'https://webcast.amemv.com/douyin/webcast/reflow/7318296342189919011?u_code=l1j9bkbd&did=MS4wLjABAAAAEs86TBQPNwAo-RGrcxWyCdwKhI66AK3Pqf3ieo6HaxI&iid=MS4wLjABAAAA0ptpM-zzoliLEeyvWOCUt-_dQza4uSjlIvbtIazXnCY&with_sec_did=1&use_link_command=1&ecom_share_track_params=&extra_params={"from_request_id":"20231230162057EC005772A8EAA0199906","im_channel_invite_id":"0"}&user_id=3644207898042206&liveId=7318296342189919011&from=share&style=share&enter_method=click_share&roomId=7318296342189919011&activity_info={}',
"6i- Q@x.Sl 03/23 【醒子8ke的直播间】 点击打开👉https://v.douyin.com/i8tBR7hX/ 或长按复制此条消息,打开抖音,看TA直播",
"https://v.douyin.com/i8tBR7hX/",
],
description="直播间链接列表/Room link list")):
"""
# [中文]
### 用途:
- 提取列表直播间号
### 参数:
- url: 直播间链接列表
### 返回:
- 直播间号列表
# [English]
### Purpose:
- Extract list webcast id
### Parameters:
- url: Room link list
### Return:
- Room id list
# [示例/Example]
    ```json
    [
        "https://live.douyin.com/775841227732",
        "https://live.douyin.com/775841227732?room_id=7318296342189919011&enter_from_merge=web_share_link&enter_method=web_share_link&previous_page=app_code_link",
        "https://webcast.amemv.com/douyin/webcast/reflow/7318296342189919011?u_code=l1j9bkbd&did=MS4wLjABAAAAEs86TBQPNwAo-RGrcxWyCdwKhI66AK3Pqf3ieo6HaxI&iid=MS4wLjABAAAA0ptpM-zzoliLEeyvWOCUt-_dQza4uSjlIvbtIazXnCY&with_sec_did=1&use_link_command=1&ecom_share_track_params=&extra_params={\"from_request_id\":\"20231230162057EC005772A8EAA0199906\",\"im_channel_invite_id\":\"0\"}&user_id=3644207898042206&liveId=7318296342189919011&from=share&style=share&enter_method=click_share&roomId=7318296342189919011&activity_info={}",
        "6i- Q@x.Sl 03/23 【醒子8ke的直播间】 点击打开👉https://v.douyin.com/i8tBR7hX/ 或长按复制此条消息,打开抖音,看TA直播",
        "https://v.douyin.com/i8tBR7hX/"
    ]
    ```
"""
try:
data = await DouyinWebCrawler.get_all_webcast_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
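The id-extraction endpoints above accept full web URLs as well as share text that embeds a short link. A minimal sketch of the canonical-URL cases (the regex patterns and function names here are illustrative assumptions, not the project's code; the actual parsing lives in the crawler utilities and handles `v.douyin.com` short links as well, presumably by expanding them first, which this sketch does not attempt):

```python
import re

def extract_aweme_id(url: str):
    # 匹配 /video/<id> 或 /share/video/<id> 形式的链接
    # Match links of the form /video/<id> or /share/video/<id>
    match = re.search(r"/(?:video|share/video)/(\d+)", url)
    return match.group(1) if match else None

def extract_webcast_id(url: str):
    # 匹配 live.douyin.com/<room_number> 形式的链接
    # Match links of the form live.douyin.com/<room_number>
    match = re.search(r"live\.douyin\.com/(\d+)", url)
    return match.group(1) if match else None
```

Short links such as `https://v.douyin.com/iRNBho6u/` carry no id in their path, so they return `None` here and would first need to be resolved to their full URL.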
================================================
FILE: app/api/endpoints/download.py
================================================
import os
import zipfile
import subprocess
import tempfile
import aiofiles
import httpx
import yaml
from fastapi import APIRouter, Request, Query, HTTPException  # 导入FastAPI组件/Import FastAPI components
from starlette.responses import FileResponse
from app.api.models.APIResponseModel import ErrorResponseModel  # 导入响应模型/Import the error response model
from crawlers.hybrid.hybrid_crawler import HybridCrawler  # 导入混合数据爬虫/Import the hybrid crawler
router = APIRouter()
HybridCrawler = HybridCrawler()
# 读取位于项目根目录的配置文件/Read the config file from the project root directory
config_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(__file__)))), 'config.yaml')
with open(config_path, 'r', encoding='utf-8') as file:
config = yaml.safe_load(file)
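The nested `os.path.dirname` calls above climb four directory levels from this file up to the repository root, where `config.yaml` lives. An equivalent `pathlib` formulation, shown for comparison only (`repo_config_path` is a hypothetical helper, not a project function):

```python
from pathlib import Path

def repo_config_path(module_file: str) -> Path:
    # parents[0] 是文件所在目录,parents[3] 即向上四级的仓库根目录
    # parents[0] is the file's directory; parents[3] is the repo root four levels up
    return Path(module_file).resolve().parents[3] / "config.yaml"
```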
async def fetch_data(url: str, headers: dict = None):
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
} if headers is None else headers.get('headers')
async with httpx.AsyncClient() as client:
response = await client.get(url, headers=headers)
        response.raise_for_status()  # 确保响应是成功的/Ensure the response succeeded
return response
# 下载视频专用/Streaming download helper for videos
async def fetch_data_stream(url: str, request: Request, headers: dict = None, file_path: str = None):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
    } if headers is None else headers.get('headers')
    async with httpx.AsyncClient() as client:
        # 启用流式请求/Enable a streaming request
        async with client.stream("GET", url, headers=headers) as response:
            response.raise_for_status()
            # 流式保存文件/Save the file chunk by chunk
            client_disconnected = False
            async with aiofiles.open(file_path, 'wb') as out_file:
                async for chunk in response.aiter_bytes():
                    if await request.is_disconnected():
                        print("客户端断开连接,清理未完成的文件/Client disconnected, cleaning up the incomplete file")
                        client_disconnected = True
                        break
                    await out_file.write(chunk)
            # 文件已由上下文管理器关闭,此时删除未完成的文件才安全
            # The context manager has closed the file, so it is now safe to remove it
            if client_disconnected:
                os.remove(file_path)
                return False
            return True
async def merge_bilibili_video_audio(video_url: str, audio_url: str, request: Request, output_path: str, headers: dict) -> bool:
    """
    下载并合并Bilibili的视频流和音频流/Download and merge Bilibili video and audio streams
    """
    video_temp_path = None
    audio_temp_path = None
    try:
        # 创建临时文件/Create temporary files
        with tempfile.NamedTemporaryFile(suffix='.m4v', delete=False) as video_temp:
            video_temp_path = video_temp.name
        with tempfile.NamedTemporaryFile(suffix='.m4a', delete=False) as audio_temp:
            audio_temp_path = audio_temp.name
        # 下载视频流/Download the video stream
        video_success = await fetch_data_stream(video_url, request, headers=headers, file_path=video_temp_path)
        # 下载音频流/Download the audio stream
        audio_success = await fetch_data_stream(audio_url, request, headers=headers, file_path=audio_temp_path)
        if not video_success or not audio_success:
            print("Failed to download video or audio stream")
            return False
        # 使用FFmpeg合并视频和音频/Merge the video and audio with FFmpeg
        ffmpeg_cmd = [
            'ffmpeg', '-y',         # -y 覆盖输出文件/overwrite the output file
            '-i', video_temp_path,  # 视频输入/video input
            '-i', audio_temp_path,  # 音频输入/audio input
            '-c:v', 'copy',         # 复制视频流,不重新编码/copy the video stream without re-encoding
            '-c:a', 'copy',         # 复制音频流,不重新编码,保持原始质量/copy the audio stream without re-encoding, keeping the original quality
            '-f', 'mp4',            # 确保输出格式为MP4/force MP4 as the output container
            output_path
        ]
        print(f"FFmpeg command: {' '.join(ffmpeg_cmd)}")
        result = subprocess.run(ffmpeg_cmd, capture_output=True, text=True)
        print(f"FFmpeg return code: {result.returncode}")
        if result.stderr:
            print(f"FFmpeg stderr: {result.stderr}")
        if result.stdout:
            print(f"FFmpeg stdout: {result.stdout}")
        return result.returncode == 0
    except Exception as e:
        print(f"Error merging video and audio: {e}")
        return False
    finally:
        # 清理临时文件/Clean up the temporary files
        for temp_path in (video_temp_path, audio_temp_path):
            if temp_path and os.path.exists(temp_path):
                try:
                    os.unlink(temp_path)
                except OSError:
                    pass
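The FFmpeg invocation above remuxes the two streams with `-c:v copy`/`-c:a copy`, so nothing is re-encoded and the merge is fast and lossless. Factored out as a standalone sketch (`build_merge_cmd` is a hypothetical helper, not part of this module):

```python
def build_merge_cmd(video_path: str, audio_path: str, output_path: str) -> list:
    # 与上面的合并命令一致:流复制,不重新编码
    # Mirrors the merge command above: stream copy, no re-encoding
    return [
        'ffmpeg', '-y',          # overwrite the output file if it exists
        '-i', video_path,        # video input
        '-i', audio_path,        # audio input
        '-c:v', 'copy',          # copy the video stream as-is
        '-c:a', 'copy',          # copy the audio stream as-is
        '-f', 'mp4',             # force the MP4 container
        output_path,
    ]
```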
@router.get("/download", summary="在线下载抖音|TikTok|Bilibili视频/图片/Online download Douyin|TikTok|Bilibili video/image")
async def download_file_hybrid(request: Request,
url: str = Query(
example="https://www.douyin.com/video/7372484719365098803",
description="视频或图片的URL地址,支持抖音|TikTok|Bilibili的分享链接,例如:https://v.douyin.com/e4J8Q7A/ 或 https://www.bilibili.com/video/BV1xxxxxxxxx"),
prefix: bool = True,
with_watermark: bool = False):
"""
# [中文]
### 用途:
- 在线下载抖音|TikTok|Bilibili 无水印或有水印的视频/图片
- 通过传入的视频URL参数,获取对应的视频或图片数据,然后下载到本地。
- 如果你在尝试直接访问TikTok单一视频接口的JSON数据中的视频播放地址时遇到HTTP403错误,那么你可以使用此接口来下载视频。
- Bilibili视频会自动合并视频流和音频流,确保下载的视频有声音。
- 这个接口会占用一定的服务器资源,所以在Demo站点是默认关闭的,你可以在本地部署后调用此接口。
### 参数:
- url: 视频或图片的URL地址,支持抖音|TikTok|Bilibili的分享链接,例如:https://v.douyin.com/e4J8Q7A/ 或 https://www.bilibili.com/video/BV1xxxxxxxxx
- prefix: 下载文件的前缀,默认为True,可以在配置文件中修改。
- with_watermark: 是否下载带水印的视频或图片,默认为False。(注意:Bilibili没有水印概念)
### 返回:
- 返回下载的视频或图片文件响应。
# [English]
### Purpose:
- Download Douyin|TikTok|Bilibili video/image with or without watermark online.
- By passing the video URL parameter, get the corresponding video or image data, and then download it to the local.
- If you encounter an HTTP403 error when trying to access the video playback address in the JSON data of the TikTok single video interface directly, you can use this interface to download the video.
- Bilibili videos will automatically merge video and audio streams to ensure downloaded videos have sound.
- This interface will occupy a certain amount of server resources, so it is disabled by default on the Demo site, you can call this interface after deploying it locally.
### Parameters:
- url: The URL address of the video or image, supports Douyin|TikTok|Bilibili sharing links, for example: https://v.douyin.com/e4J8Q7A/ or https://www.bilibili.com/video/BV1xxxxxxxxx
- prefix: The prefix of the downloaded file, the default is True, and can be modified in the configuration file.
- with_watermark: Whether to download videos or images with watermarks, the default is False. (Note: Bilibili has no watermark concept)
### Returns:
- Return the response of the downloaded video or image file.
# [示例/Example]
url: https://www.bilibili.com/video/BV1U5efz2Egn
"""
# 是否开启此端点/Whether to enable this endpoint
if not config["API"]["Download_Switch"]:
code = 400
message = "Download endpoint is disabled in the configuration file. | 配置文件中已禁用下载端点。"
return ErrorResponseModel(code=code, message=message, router=request.url.path,
params=dict(request.query_params))
# 开始解析数据/Start parsing data
try:
data = await HybridCrawler.hybrid_parsing_single_video(url, minimal=True)
except Exception as e:
code = 400
return ErrorResponseModel(code=code, message=str(e), router=request.url.path, params=dict(request.query_params))
# 开始下载文件/Start downloading files
try:
data_type = data.get('type')
platform = data.get('platform')
        video_id = data.get('video_id')
file_prefix = config.get("API").get("Download_File_Prefix") if prefix else ''
download_path = os.path.join(config.get("API").get("Download_Path"), f"{platform}_{data_type}")
# 确保目录存在/Ensure the directory exists
os.makedirs(download_path, exist_ok=True)
# 下载视频文件/Download video file
if data_type == 'video':
file_name = f"{file_prefix}{platform}_{video_id}.mp4" if not with_watermark else f"{file_prefix}{platform}_{video_id}_watermark.mp4"
file_path = os.path.join(download_path, file_name)
            # 判断文件是否存在,存在就直接返回/If the file already exists, return it directly
if os.path.exists(file_path):
return FileResponse(path=file_path, media_type='video/mp4', filename=file_name)
            # 获取对应平台的headers/Get the headers for the corresponding platform
if platform == 'tiktok':
__headers = await HybridCrawler.TikTokWebCrawler.get_tiktok_headers()
elif platform == 'bilibili':
__headers = await HybridCrawler.BilibiliWebCrawler.get_bilibili_headers()
else: # douyin
__headers = await HybridCrawler.DouyinWebCrawler.get_douyin_headers()
            # Bilibili特殊处理:音视频分离/Special case for Bilibili: separate video and audio streams
if platform == 'bilibili':
video_data = data.get('video_data', {})
video_url = video_data.get('nwm_video_url_HQ') if not with_watermark else video_data.get('wm_video_url_HQ')
audio_url = video_data.get('audio_url')
if not video_url or not audio_url:
raise HTTPException(
status_code=500,
detail="Failed to get video or audio URL from Bilibili"
)
                # 使用专门的函数合并音视频/Merge the streams with the dedicated helper
success = await merge_bilibili_video_audio(video_url, audio_url, request, file_path, __headers.get('headers'))
if not success:
raise HTTPException(
status_code=500,
detail="Failed to merge Bilibili video and audio streams"
)
else:
                # 其他平台的常规处理/Regular handling for the other platforms
url = data.get('video_data').get('nwm_video_url_HQ') if not with_watermark else data.get('video_data').get('wm_video_url_HQ')
success = await fetch_data_stream(url, request, headers=__headers, file_path=file_path)
if not success:
raise HTTPException(
status_code=500,
detail="An error occurred while fetching data"
)
            # 返回文件内容/Return the file contents
return FileResponse(path=file_path, filename=file_name, media_type="video/mp4")
# 下载图片文件/Download image file
elif data_type == 'image':
# 压缩文件属性/Compress file properties
zip_file_name = f"{file_prefix}{platform}_{video_id}_images.zip" if not with_watermark else f"{file_prefix}{platform}_{video_id}_images_watermark.zip"
zip_file_path = os.path.join(download_path, zip_file_name)
            # 判断文件是否存在,存在就直接返回/If the archive already exists, return it directly
if os.path.exists(zip_file_path):
return FileResponse(path=zip_file_path, filename=zip_file_name, media_type="application/zip")
# 获取图片文件/Get image file
urls = data.get('image_data').get('no_watermark_image_list') if not with_watermark else data.get(
'image_data').get('watermark_image_list')
image_file_list = []
            for index, url in enumerate(urls):
                # 请求图片文件/Request the image file
                response = await fetch_data(url)
content_type = response.headers.get('content-type')
file_format = content_type.split('/')[1]
file_name = f"{file_prefix}{platform}_{video_id}_{index + 1}.{file_format}" if not with_watermark else f"{file_prefix}{platform}_{video_id}_{index + 1}_watermark.{file_format}"
file_path = os.path.join(download_path, file_name)
image_file_list.append(file_path)
# 保存文件/Save file
async with aiofiles.open(file_path, 'wb') as out_file:
await out_file.write(response.content)
# 压缩文件/Compress file
with zipfile.ZipFile(zip_file_path, 'w') as zip_file:
for image_file in image_file_list:
zip_file.write(image_file, os.path.basename(image_file))
# 返回压缩文件/Return compressed file
return FileResponse(path=zip_file_path, filename=zip_file_name, media_type="application/zip")
# 异常处理/Exception handling
except Exception as e:
print(e)
code = 400
return ErrorResponseModel(code=code, message=str(e), router=request.url.path, params=dict(request.query_params))
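The image branch above saves each downloaded picture to disk and then bundles the set into a single archive with `zipfile`. That packaging step in isolation (a self-contained sketch; `zip_files` is not a project function):

```python
import os
import tempfile
import zipfile

def zip_files(file_paths: list, zip_path: str) -> list:
    # 将文件以其basename写入压缩包,与端点中的
    # zip_file.write(image_file, os.path.basename(image_file)) 调用一致
    # Store each file under its basename, mirroring the endpoint's call
    with zipfile.ZipFile(zip_path, 'w') as zf:
        for path in file_paths:
            zf.write(path, os.path.basename(path))
    # 返回压缩包中的文件名列表以便校验/Return the archive's name list for verification
    with zipfile.ZipFile(zip_path) as zf:
        return zf.namelist()
```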
================================================
FILE: app/api/endpoints/hybrid_parsing.py
================================================
from fastapi import APIRouter, Body, Query, Request, HTTPException  # 导入FastAPI组件/Import FastAPI components
from app.api.models.APIResponseModel import ResponseModel, ErrorResponseModel  # 导入响应模型/Import the response models
# 爬虫/Crawler
from crawlers.hybrid.hybrid_crawler import HybridCrawler  # 导入混合爬虫/Import the hybrid crawler
HybridCrawler = HybridCrawler()  # 实例化混合爬虫/Instantiate the hybrid crawler
router = APIRouter()
@router.get("/video_data", response_model=ResponseModel, tags=["Hybrid-API"],
summary="混合解析单一视频接口/Hybrid parsing single video endpoint")
async def hybrid_parsing_single_video(request: Request,
url: str = Query(example="https://v.douyin.com/L4FJNR3/"),
minimal: bool = Query(default=False)):
"""
# [中文]
### 用途:
- 该接口用于解析抖音/TikTok单一视频的数据。
### 参数:
- `url`: 视频链接、分享链接、分享文本。
### 返回:
- `data`: 视频数据。
# [English]
### Purpose:
- This endpoint is used to parse data of a single Douyin/TikTok video.
### Parameters:
- `url`: Video link, share link, or share text.
### Returns:
- `data`: Video data.
# [Example]
url = "https://v.douyin.com/L4FJNR3/"
"""
try:
# 解析视频/Parse video
data = await HybridCrawler.hybrid_parsing_single_video(url=url, minimal=minimal)
# 返回数据/Return data
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 更新Cookie/Update Cookie
@router.post("/update_cookie",
response_model=ResponseModel,
summary="更新Cookie/Update Cookie")
async def update_cookie_api(request: Request,
service: str = Body(example="douyin", description="服务名称/Service name"),
cookie: str = Body(example="YOUR_NEW_COOKIE", description="新的Cookie值/New Cookie value")):
"""
# [中文]
### 用途:
- 更新指定服务的Cookie
### 参数:
    - service: 服务名称 (如: douyin)
- cookie: 新的Cookie值
### 返回:
- 更新结果
# [English]
### Purpose:
- Update Cookie for specified service
### Parameters:
    - service: Service name (e.g.: douyin)
- cookie: New Cookie value
### Return:
- Update result
# [示例/Example]
    service = "douyin"
cookie = "YOUR_NEW_COOKIE"
"""
try:
if service == "douyin":
from crawlers.douyin.web.web_crawler import DouyinWebCrawler
douyin_crawler = DouyinWebCrawler()
await douyin_crawler.update_cookie(cookie)
return ResponseModel(code=200,
router=request.url.path,
data={"message": f"Cookie for {service} updated successfully"})
elif service == "tiktok":
            # 这里可以添加TikTok的cookie更新逻辑/TikTok cookie update logic can be added here
# from crawlers.tiktok.web.web_crawler import TikTokWebCrawler
# tiktok_crawler = TikTokWebCrawler()
# await tiktok_crawler.update_cookie(cookie)
return ResponseModel(code=200,
router=request.url.path,
data={"message": f"Cookie for {service} will be updated (not implemented yet)"})
elif service == "bilibili":
            # 这里可以添加Bilibili的cookie更新逻辑/Bilibili cookie update logic can be added here
# from crawlers.bilibili.web.web_crawler import BilibiliWebCrawler
# bilibili_crawler = BilibiliWebCrawler()
# await bilibili_crawler.update_cookie(cookie)
return ResponseModel(code=200,
router=request.url.path,
data={"message": f"Cookie for {service} will be updated (not implemented yet)"})
else:
raise ValueError(f"Service '{service}' is not supported. Supported services: douyin, tiktok, bilibili")
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
================================================
FILE: app/api/endpoints/ios_shortcut.py
================================================
import os
import yaml
from fastapi import APIRouter
from app.api.models.APIResponseModel import iOS_Shortcut
# 读取位于项目根目录的配置文件/Read the config file from the project root directory
config_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(__file__)))), 'config.yaml')
with open(config_path, 'r', encoding='utf-8') as file:
config = yaml.safe_load(file)
router = APIRouter()
@router.get("/shortcut", response_model=iOS_Shortcut, summary="用于iOS快捷指令的版本更新信息/Version update information for iOS shortcuts")
async def get_shortcut():
shortcut_config = config["iOS_Shortcut"]
version = shortcut_config["iOS_Shortcut_Version"]
update = shortcut_config['iOS_Shortcut_Update_Time']
link = shortcut_config['iOS_Shortcut_Link']
link_en = shortcut_config['iOS_Shortcut_Link_EN']
note = shortcut_config['iOS_Shortcut_Update_Note']
note_en = shortcut_config['iOS_Shortcut_Update_Note_EN']
return iOS_Shortcut(version=str(version), update=update, link=link, link_en=link_en, note=note, note_en=note_en)
================================================
FILE: app/api/endpoints/tiktok_app.py
================================================
from fastapi import APIRouter, Query, Request, HTTPException  # 导入FastAPI组件/Import FastAPI components
from app.api.models.APIResponseModel import ResponseModel, ErrorResponseModel  # 导入响应模型/Import the response models
from crawlers.tiktok.app.app_crawler import TikTokAPPCrawler  # 导入APP爬虫/Import the APP crawler
router = APIRouter()
TikTokAPPCrawler = TikTokAPPCrawler()
# 获取单个作品数据/Get single video data
@router.get("/fetch_one_video",
response_model=ResponseModel,
summary="获取单个作品数据/Get single video data"
)
async def fetch_one_video(request: Request,
aweme_id: str = Query(example="7350810998023949599", description="作品id/Video id")):
"""
# [中文]
### 用途:
- 获取单个作品数据
### 参数:
- aweme_id: 作品id
### 返回:
- 作品数据
# [English]
### Purpose:
- Get single video data
### Parameters:
- aweme_id: Video id
### Return:
- Video data
# [示例/Example]
aweme_id = "7350810998023949599"
"""
try:
data = await TikTokAPPCrawler.fetch_one_video(aweme_id)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
================================================
FILE: app/api/endpoints/tiktok_web.py
================================================
from typing import List
from fastapi import APIRouter, Query, Body, Request, HTTPException  # 导入FastAPI组件/Import FastAPI components
from app.api.models.APIResponseModel import ResponseModel, ErrorResponseModel  # 导入响应模型/Import the response models
from crawlers.tiktok.web.web_crawler import TikTokWebCrawler  # 导入TikTokWebCrawler类/Import the TikTokWebCrawler class
router = APIRouter()
TikTokWebCrawler = TikTokWebCrawler()
# 获取单个作品数据/Get single video data
@router.get("/fetch_one_video",
response_model=ResponseModel,
summary="获取单个作品数据/Get single video data")
async def fetch_one_video(request: Request,
itemId: str = Query(example="7339393672959757570", description="作品id/Video id")):
"""
# [中文]
### 用途:
- 获取单个作品数据
### 参数:
- itemId: 作品id
### 返回:
- 作品数据
# [English]
### Purpose:
- Get single video data
### Parameters:
- itemId: Video id
### Return:
- Video data
# [示例/Example]
itemId = "7339393672959757570"
"""
try:
data = await TikTokWebCrawler.fetch_one_video(itemId)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户的个人信息/Get user profile
@router.get("/fetch_user_profile",
response_model=ResponseModel,
summary="获取用户的个人信息/Get user profile")
async def fetch_user_profile(request: Request,
uniqueId: str = Query(default="tiktok", description="用户uniqueId/User uniqueId"),
                             secUid: str = Query(default="", description="用户secUid/User secUid")):
"""
# [中文]
### 用途:
- 获取用户的个人信息
### 参数:
- secUid: 用户secUid
- uniqueId: 用户uniqueId
- secUid和uniqueId至少提供一个, 优先使用uniqueId, 也就是用户主页的链接中的用户名。
### 返回:
- 用户的个人信息
# [English]
### Purpose:
- Get user profile
### Parameters:
- secUid: User secUid
- uniqueId: User uniqueId
- At least one of secUid and uniqueId is provided, and uniqueId is preferred, that is, the username in the user's homepage link.
### Return:
- User profile
# [示例/Example]
secUid = "MS4wLjABAAAAv7iSuuXDJGDvJkmH_vz1qkDZYo1apxgzaxdBSeIuPiM"
uniqueId = "tiktok"
"""
try:
data = await TikTokWebCrawler.fetch_user_profile(secUid, uniqueId)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户的作品列表/Get user posts
@router.get("/fetch_user_post",
response_model=ResponseModel,
summary="获取用户的作品列表/Get user posts")
async def fetch_user_post(request: Request,
secUid: str = Query(example="MS4wLjABAAAAv7iSuuXDJGDvJkmH_vz1qkDZYo1apxgzaxdBSeIuPiM",
description="用户secUid/User secUid"),
cursor: int = Query(default=0, description="翻页游标/Page cursor"),
count: int = Query(default=35, description="每页数量/Number per page"),
coverFormat: int = Query(default=2, description="封面格式/Cover format")):
"""
# [中文]
### 用途:
- 获取用户的作品列表
### 参数:
- secUid: 用户secUid
- cursor: 翻页游标
- count: 每页数量
- coverFormat: 封面格式
### 返回:
- 用户的作品列表
# [English]
### Purpose:
- Get user posts
### Parameters:
- secUid: User secUid
- cursor: Page cursor
- count: Number per page
- coverFormat: Cover format
### Return:
- User posts
# [示例/Example]
secUid = "MS4wLjABAAAAv7iSuuXDJGDvJkmH_vz1qkDZYo1apxgzaxdBSeIuPiM"
cursor = 0
count = 35
coverFormat = 2
"""
try:
data = await TikTokWebCrawler.fetch_user_post(secUid, cursor, count, coverFormat)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户的点赞列表/Get user likes
@router.get("/fetch_user_like",
response_model=ResponseModel,
summary="获取用户的点赞列表/Get user likes")
async def fetch_user_like(request: Request,
secUid: str = Query(
example="MS4wLjABAAAAq1iRXNduFZpY301UkVpJ1eQT60_NiWS9QQSeNqmNQEDJp0pOF8cpleNEdiJx5_IU",
description="用户secUid/User secUid"),
cursor: int = Query(default=0, description="翻页游标/Page cursor"),
count: int = Query(default=35, description="每页数量/Number per page"),
coverFormat: int = Query(default=2, description="封面格式/Cover format")):
"""
# [中文]
### 用途:
- 获取用户的点赞列表
- 注意: 该接口需要用户点赞列表为公开状态
### 参数:
- secUid: 用户secUid
- cursor: 翻页游标
- count: 每页数量
- coverFormat: 封面格式
### 返回:
- 用户的点赞列表
# [English]
### Purpose:
- Get user likes
- Note: This interface requires that the user's like list be public
### Parameters:
- secUid: User secUid
- cursor: Page cursor
- count: Number per page
- coverFormat: Cover format
### Return:
- User likes
# [示例/Example]
secUid = "MS4wLjABAAAAq1iRXNduFZpY301UkVpJ1eQT60_NiWS9QQSeNqmNQEDJp0pOF8cpleNEdiJx5_IU"
cursor = 0
count = 35
coverFormat = 2
"""
try:
data = await TikTokWebCrawler.fetch_user_like(secUid, cursor, count, coverFormat)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户的收藏列表/Get user favorites
@router.get("/fetch_user_collect",
response_model=ResponseModel,
summary="获取用户的收藏列表/Get user favorites")
async def fetch_user_collect(request: Request,
cookie: str = Query(example="Your_Cookie", description="用户cookie/User cookie"),
secUid: str = Query(example="Your_SecUid", description="用户secUid/User secUid"),
cursor: int = Query(default=0, description="翻页游标/Page cursor"),
count: int = Query(default=30, description="每页数量/Number per page"),
coverFormat: int = Query(default=2, description="封面格式/Cover format")):
"""
# [中文]
### 用途:
- 获取用户的收藏列表
- 注意: 该接口目前只能获取自己的收藏列表,需要提供自己账号的cookie。
### 参数:
- cookie: 用户cookie
- secUid: 用户secUid
- cursor: 翻页游标
- count: 每页数量
- coverFormat: 封面格式
### 返回:
- 用户的收藏列表
# [English]
### Purpose:
- Get user favorites
- Note: This interface can currently only get your own favorites list, you need to provide your account cookie.
### Parameters:
- cookie: User cookie
- secUid: User secUid
- cursor: Page cursor
- count: Number per page
- coverFormat: Cover format
### Return:
- User favorites
# [示例/Example]
cookie = "Your_Cookie"
secUid = "Your_SecUid"
cursor = 0
count = 30
coverFormat = 2
"""
try:
data = await TikTokWebCrawler.fetch_user_collect(cookie, secUid, cursor, count, coverFormat)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户的播放列表/Get user play list
@router.get("/fetch_user_play_list",
response_model=ResponseModel,
summary="获取用户的播放列表/Get user play list")
async def fetch_user_play_list(request: Request,
secUid: str = Query(example="MS4wLjABAAAAv7iSuuXDJGDvJkmH_vz1qkDZYo1apxgzaxdBSeIuPiM",
description="用户secUid/User secUid"),
cursor: int = Query(default=0, description="翻页游标/Page cursor"),
count: int = Query(default=30, description="每页数量/Number per page")):
"""
# [中文]
### 用途:
- 获取用户的播放列表
### 参数:
- secUid: 用户secUid
- cursor: 翻页游标
- count: 每页数量
### 返回:
- 用户的播放列表
# [English]
### Purpose:
    - Get user playlist
### Parameters:
- secUid: User secUid
- cursor: Page cursor
- count: Number per page
### Return:
    - User playlist
    # [示例/Example]
secUid = "MS4wLjABAAAAv7iSuuXDJGDvJkmH_vz1qkDZYo1apxgzaxdBSeIuPiM"
cursor = 0
count = 30
"""
try:
data = await TikTokWebCrawler.fetch_user_play_list(secUid, cursor, count)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户的合辑列表
@router.get("/fetch_user_mix",
response_model=ResponseModel,
summary="获取用户的合辑列表/Get user mix list")
async def fetch_user_mix(request: Request,
mixId: str = Query(example="7101538765474106158",
description="合辑id/Mix id"),
cursor: int = Query(default=0, description="翻页游标/Page cursor"),
count: int = Query(default=30, description="每页数量/Number per page")):
"""
# [中文]
### 用途:
- 获取用户的合辑列表
### 参数:
- mixId: 合辑id
- cursor: 翻页游标
- count: 每页数量
### 返回:
- 用户的合辑列表
# [English]
### Purpose:
- Get user mix list
### Parameters:
- mixId: Mix id
- cursor: Page cursor
- count: Number per page
### Return:
- User mix list
    # [示例/Example]
mixId = "7101538765474106158"
cursor = 0
count = 30
"""
try:
data = await TikTokWebCrawler.fetch_user_mix(mixId, cursor, count)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取作品的评论列表
@router.get("/fetch_post_comment",
response_model=ResponseModel,
summary="获取作品的评论列表/Get video comments")
async def fetch_post_comment(request: Request,
aweme_id: str = Query(example="7304809083817774382", description="作品id/Video id"),
cursor: int = Query(default=0, description="翻页游标/Page cursor"),
count: int = Query(default=20, description="每页数量/Number per page"),
current_region: str = Query(default="", description="当前地区/Current region")):
"""
# [中文]
### 用途:
- 获取作品的评论列表
### 参数:
- aweme_id: 作品id
- cursor: 翻页游标
- count: 每页数量
- current_region: 当前地区,默认为空。
### 返回:
- 作品的评论列表
# [English]
### Purpose:
- Get video comments
### Parameters:
- aweme_id: Video id
- cursor: Page cursor
- count: Number per page
- current_region: Current region, default is empty.
### Return:
- Video comments
    # [示例/Example]
aweme_id = "7304809083817774382"
cursor = 0
count = 20
current_region = ""
"""
try:
data = await TikTokWebCrawler.fetch_post_comment(aweme_id, cursor, count, current_region)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取作品的评论回复列表
@router.get("/fetch_post_comment_reply",
response_model=ResponseModel,
summary="获取作品的评论回复列表/Get video comment replies")
async def fetch_post_comment_reply(request: Request,
item_id: str = Query(example="7304809083817774382", description="作品id/Video id"),
comment_id: str = Query(example="7304877760886588191",
description="评论id/Comment id"),
cursor: int = Query(default=0, description="翻页游标/Page cursor"),
count: int = Query(default=20, description="每页数量/Number per page"),
current_region: str = Query(default="", description="当前地区/Current region")):
"""
# [中文]
### 用途:
- 获取作品的评论回复列表
### 参数:
- item_id: 作品id
- comment_id: 评论id
- cursor: 翻页游标
- count: 每页数量
- current_region: 当前地区,默认为空。
### 返回:
- 作品的评论回复列表
# [English]
### Purpose:
- Get video comment replies
### Parameters:
- item_id: Video id
- comment_id: Comment id
- cursor: Page cursor
- count: Number per page
- current_region: Current region, default is empty.
### Return:
- Video comment replies
    # [示例/Example]
item_id = "7304809083817774382"
comment_id = "7304877760886588191"
cursor = 0
count = 20
current_region = ""
"""
try:
data = await TikTokWebCrawler.fetch_post_comment_reply(item_id, comment_id, cursor, count, current_region)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户的粉丝列表
@router.get("/fetch_user_fans",
response_model=ResponseModel,
summary="获取用户的粉丝列表/Get user followers")
async def fetch_user_fans(request: Request,
secUid: str = Query(example="MS4wLjABAAAAv7iSuuXDJGDvJkmH_vz1qkDZYo1apxgzaxdBSeIuPiM",
description="用户secUid/User secUid"),
count: int = Query(default=30, description="每页数量/Number per page"),
maxCursor: int = Query(default=0, description="最大游标/Max cursor"),
minCursor: int = Query(default=0, description="最小游标/Min cursor")):
"""
# [中文]
### 用途:
- 获取用户的粉丝列表
### 参数:
- secUid: 用户secUid
- count: 每页数量
- maxCursor: 最大游标
- minCursor: 最小游标
### 返回:
- 用户的粉丝列表
# [English]
### Purpose:
- Get user followers
### Parameters:
- secUid: User secUid
- count: Number per page
- maxCursor: Max cursor
- minCursor: Min cursor
### Return:
- User followers
# [示例/Example]
secUid = "MS4wLjABAAAAv7iSuuXDJGDvJkmH_vz1qkDZYo1apxgzaxdBSeIuPiM"
count = 30
maxCursor = 0
minCursor = 0
"""
try:
data = await TikTokWebCrawler.fetch_user_fans(secUid, count, maxCursor, minCursor)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户的关注列表
@router.get("/fetch_user_follow",
response_model=ResponseModel,
summary="获取用户的关注列表/Get user followings")
async def fetch_user_follow(request: Request,
secUid: str = Query(example="MS4wLjABAAAAv7iSuuXDJGDvJkmH_vz1qkDZYo1apxgzaxdBSeIuPiM",
description="用户secUid/User secUid"),
count: int = Query(default=30, description="每页数量/Number per page"),
maxCursor: int = Query(default=0, description="最大游标/Max cursor"),
minCursor: int = Query(default=0, description="最小游标/Min cursor")):
"""
# [中文]
### 用途:
- 获取用户的关注列表
### 参数:
- secUid: 用户secUid
- count: 每页数量
- maxCursor: 最大游标
- minCursor: 最小游标
### 返回:
- 用户的关注列表
# [English]
### Purpose:
- Get user followings
### Parameters:
- secUid: User secUid
- count: Number per page
- maxCursor: Max cursor
- minCursor: Min cursor
### Return:
- User followings
# [示例/Example]
secUid = "MS4wLjABAAAAv7iSuuXDJGDvJkmH_vz1qkDZYo1apxgzaxdBSeIuPiM"
count = 30
maxCursor = 0
minCursor = 0
"""
try:
data = await TikTokWebCrawler.fetch_user_follow(secUid, count, maxCursor, minCursor)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
"""-------------------------------------------------------utils接口列表-------------------------------------------------------"""
# 生成真实msToken
@router.get("/generate_real_msToken",
response_model=ResponseModel,
summary="生成真实msToken/Generate real msToken")
async def generate_real_msToken(request: Request):
"""
# [中文]
### 用途:
- 生成真实msToken
### 返回:
- 真实msToken
# [English]
### Purpose:
- Generate real msToken
### Return:
- Real msToken
"""
try:
data = await TikTokWebCrawler.fetch_real_msToken()
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 生成ttwid
@router.get("/generate_ttwid",
response_model=ResponseModel,
summary="生成ttwid/Generate ttwid")
async def generate_ttwid(request: Request,
cookie: str = Query(example="Your_Cookie", description="用户cookie/User cookie")):
"""
# [中文]
### 用途:
- 生成ttwid
### 参数:
- cookie: 用户cookie
### 返回:
- ttwid
# [English]
### Purpose:
- Generate ttwid
### Parameters:
- cookie: User cookie
### Return:
- ttwid
# [示例/Example]
cookie = "Your_Cookie"
"""
try:
data = await TikTokWebCrawler.fetch_ttwid(cookie)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 生成xbogus
@router.get("/generate_xbogus",
response_model=ResponseModel,
summary="生成xbogus/Generate xbogus")
async def generate_xbogus(request: Request,
url: str = Query(
example="https://www.tiktok.com/api/item/detail/?WebIdLastTime=1712665533&aid=1988&app_language=en&app_name=tiktok_web&browser_language=en-US&browser_name=Mozilla&browser_online=true&browser_platform=Win32&browser_version=5.0%20%28Windows%29&channel=tiktok_web&cookie_enabled=true&device_id=7349090360347690538&device_platform=web_pc&focus_state=true&from_page=user&history_len=4&is_fullscreen=false&is_page_visible=true&language=en&os=windows&priority_region=US&referer=®ion=US&root_referer=https%3A%2F%2Fwww.tiktok.com%2F&screen_height=1080&screen_width=1920&webcast_language=en&tz_name=America%2FTijuana&msToken=AYFCEapCLbMrS8uTLBoYdUMeeVLbCdFQ_QF_-OcjzJw1CPr4JQhWUtagy0k4a9IITAqi5Qxr2Vdh9mgCbyGxTnvWLa4ZVY6IiSf6lcST-tr0IXfl-r_ZTpzvWDoQfqOVsWCTlSNkhAwB-tap5g==&itemId=7339393672959757570",
description="未签名的API URL/Unsigned API URL"),
user_agent: str = Query(
example="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3",
description="用户浏览器User-Agent/User browser User-Agent")):
"""
# [中文]
### 用途:
- 生成xbogus
### 参数:
- url: 未签名的API URL
- user_agent: 用户浏览器User-Agent
### 返回:
- xbogus
# [English]
### Purpose:
- Generate xbogus
### Parameters:
- url: Unsigned API URL
- user_agent: User browser User-Agent
### Return:
- xbogus
# [示例/Example]
url = "https://www.tiktok.com/api/item/detail/?WebIdLastTime=1712665533&aid=1988&app_language=en&app_name=tiktok_web&browser_language=en-US&browser_name=Mozilla&browser_online=true&browser_platform=Win32&browser_version=5.0%20%28Windows%29&channel=tiktok_web&cookie_enabled=true&device_id=7349090360347690538&device_platform=web_pc&focus_state=true&from_page=user&history_len=4&is_fullscreen=false&is_page_visible=true&language=en&os=windows&priority_region=US&referer=®ion=US&root_referer=https%3A%2F%2Fwww.tiktok.com%2F&screen_height=1080&screen_width=1920&webcast_language=en&tz_name=America%2FTijuana&msToken=AYFCEapCLbMrS8uTLBoYdUMeeVLbCdFQ_QF_-OcjzJw1CPr4JQhWUtagy0k4a9IITAqi5Qxr2Vdh9mgCbyGxTnvWLa4ZVY6IiSf6lcST-tr0IXfl-r_ZTpzvWDoQfqOVsWCTlSNkhAwB-tap5g==&itemId=7339393672959757570"
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
"""
try:
data = await TikTokWebCrawler.gen_xbogus(url, user_agent)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
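A caller passes both `url` and `user_agent` as query parameters to this endpoint. A small stdlib sketch that only builds the request URL, without sending anything (the host is a placeholder; the `/api/tiktok/web` mount point follows `app/api/router.py` and `app/main.py`):

```python
from urllib.parse import urlencode

# Hypothetical client-side sketch: assemble the GET request for /generate_xbogus.
# The host is an assumption; no HTTP request is actually sent here.
base = "http://127.0.0.1/api/tiktok/web/generate_xbogus"
params = {
    "url": "https://www.tiktok.com/api/item/detail/?itemId=7339393672959757570",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
}
# urlencode percent-escapes both values so they survive as single query parameters.
request_url = f"{base}?{urlencode(params)}"
print(request_url.startswith(base))  # True
```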
# 提取单个用户id
@router.get("/get_sec_user_id",
            response_model=ResponseModel,
            summary="提取单个用户id/Extract single user id")
async def get_sec_user_id(request: Request,
url: str = Query(
example="https://www.tiktok.com/@tiktok",
description="用户主页链接/User homepage link")):
"""
# [中文]
### 用途:
    - 提取单个用户id
### 参数:
- url: 用户主页链接
### 返回:
- 用户id
# [English]
### Purpose:
    - Extract single user id
### Parameters:
- url: User homepage link
### Return:
- User id
# [示例/Example]
url = "https://www.tiktok.com/@tiktok"
"""
try:
data = await TikTokWebCrawler.get_sec_user_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 提取列表用户id
@router.post("/get_all_sec_user_id",
response_model=ResponseModel,
summary="提取列表用户id/Extract list user id")
async def get_all_sec_user_id(request: Request,
url: List[str] = Body(
example=["https://www.tiktok.com/@tiktok"],
description="用户主页链接/User homepage link")):
"""
# [中文]
### 用途:
- 提取列表用户id
### 参数:
- url: 用户主页链接
### 返回:
- 用户id
# [English]
### Purpose:
- Extract list user id
### Parameters:
- url: User homepage link
### Return:
- User id
# [示例/Example]
url = ["https://www.tiktok.com/@tiktok"]
"""
try:
data = await TikTokWebCrawler.get_all_sec_user_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 提取单个作品id
@router.get("/get_aweme_id",
response_model=ResponseModel,
summary="提取单个作品id/Extract single video id")
async def get_aweme_id(request: Request,
url: str = Query(
example="https://www.tiktok.com/@owlcitymusic/video/7218694761253735723",
description="作品链接/Video link")):
"""
# [中文]
### 用途:
- 提取单个作品id
### 参数:
- url: 作品链接
### 返回:
- 作品id
# [English]
### Purpose:
- Extract single video id
### Parameters:
- url: Video link
### Return:
- Video id
# [示例/Example]
url = "https://www.tiktok.com/@owlcitymusic/video/7218694761253735723"
"""
try:
data = await TikTokWebCrawler.get_aweme_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 提取列表作品id
@router.post("/get_all_aweme_id",
response_model=ResponseModel,
summary="提取列表作品id/Extract list video id")
async def get_all_aweme_id(request: Request,
url: List[str] = Body(
example=["https://www.tiktok.com/@owlcitymusic/video/7218694761253735723"],
description="作品链接/Video link")):
"""
# [中文]
### 用途:
- 提取列表作品id
### 参数:
- url: 作品链接
### 返回:
- 作品id
# [English]
### Purpose:
- Extract list video id
### Parameters:
- url: Video link
### Return:
- Video id
# [示例/Example]
url = ["https://www.tiktok.com/@owlcitymusic/video/7218694761253735723"]
"""
try:
data = await TikTokWebCrawler.get_all_aweme_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取用户unique_id
@router.get("/get_unique_id",
response_model=ResponseModel,
summary="获取用户unique_id/Get user unique_id")
async def get_unique_id(request: Request,
url: str = Query(
example="https://www.tiktok.com/@tiktok",
description="用户主页链接/User homepage link")):
"""
# [中文]
### 用途:
- 获取用户unique_id
### 参数:
- url: 用户主页链接
### 返回:
- unique_id
# [English]
### Purpose:
- Get user unique_id
### Parameters:
- url: User homepage link
### Return:
- unique_id
# [示例/Example]
url = "https://www.tiktok.com/@tiktok"
"""
try:
data = await TikTokWebCrawler.get_unique_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
# 获取列表unique_id
@router.post("/get_all_unique_id",
response_model=ResponseModel,
summary="获取列表unique_id/Get list unique_id")
async def get_all_unique_id(request: Request,
url: List[str] = Body(
example=["https://www.tiktok.com/@tiktok"],
description="用户主页链接/User homepage link")):
"""
# [中文]
### 用途:
- 获取列表unique_id
### 参数:
- url: 用户主页链接
### 返回:
- unique_id
# [English]
### Purpose:
- Get list unique_id
### Parameters:
- url: User homepage link
### Return:
- unique_id
# [示例/Example]
url = ["https://www.tiktok.com/@tiktok"]
"""
try:
data = await TikTokWebCrawler.get_all_unique_id(url)
return ResponseModel(code=200,
router=request.url.path,
data=data)
except Exception as e:
status_code = 400
detail = ErrorResponseModel(code=status_code,
router=request.url.path,
params=dict(request.query_params),
)
raise HTTPException(status_code=status_code, detail=detail.dict())
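Every endpoint in this file repeats the same try/except boilerplate: on any error, build an `ErrorResponseModel` and raise `HTTPException(400)`. That pattern could be factored into a decorator. A self-contained sketch (the decorator name and the `HTTPException` stand-in are hypothetical, so the example runs without FastAPI):

```python
import asyncio
from functools import wraps

# Stand-in for fastapi.HTTPException, used only so this sketch is self-contained.
class HTTPException(Exception):
    def __init__(self, status_code, detail):
        self.status_code = status_code
        self.detail = detail

def catch_as_400(func):
    """Wrap an async endpoint; any exception becomes HTTPException(400) with a detail dict."""
    @wraps(func)
    async def wrapper(*args, **kwargs):
        try:
            return await func(*args, **kwargs)
        except Exception as e:
            detail = {"code": 400, "message": str(e), "router": func.__name__}
            raise HTTPException(status_code=400, detail=detail)
    return wrapper

@catch_as_400
async def fetch_something(ok: bool):
    if not ok:
        raise ValueError("upstream failure")
    return {"code": 200}

# Success passes through; failure surfaces as HTTPException(400).
print(asyncio.run(fetch_something(True)))  # {'code': 200}
try:
    asyncio.run(fetch_something(False))
except HTTPException as e:
    print(e.status_code)  # 400
```

One benefit of this shape is that the exception message is preserved in the detail dict, whereas the repeated inline blocks above discard `e` entirely.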
================================================
FILE: app/api/models/APIResponseModel.py
================================================
from pydantic import BaseModel, Field
from typing import Any, Optional
import datetime
# 定义响应模型
class ResponseModel(BaseModel):
    code: int = 200
    router: str = "Endpoint path"
    data: Optional[Any] = {}
# 定义错误响应模型
class ErrorResponseModel(BaseModel):
    code: int = 400
    message: str = "An error occurred."
    support: str = "Please contact us on Github: https://github.com/Evil0ctal/Douyin_TikTok_Download_API"
    # 使用default_factory在每次实例化时生成时间戳,直接赋值只会在导入时求值一次
    # Use default_factory so the timestamp is generated per instance; a plain default is evaluated only once at import time
    time: str = Field(default_factory=lambda: datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    router: str
    params: dict = {}
# 混合解析响应模型
class HybridResponseModel(BaseModel):
code: int = 200
router: str = "Hybrid parsing single video endpoint"
data: Optional[Any] = {}
# iOS_Shortcut响应模型
class iOS_Shortcut(BaseModel):
version: str
update: str
link: str
link_en: str
note: str
note_en: str
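A note on the `time` field of `ErrorResponseModel`: a plain `datetime.now()` default in a class body is evaluated once, when the module is imported, so every instance would share the same stamp; pydantic's `Field(default_factory=...)` defers the call to instantiation. The evaluated-once behavior can be demonstrated with stdlib function defaults (the names below are made up, and a counter stands in for the clock so the result is deterministic):

```python
import itertools

_counter = itertools.count()  # deterministic stand-in for datetime.now()

# Default expression evaluated ONCE, at `def` time -- like a plain class-attribute default.
def stamp_eager(stamp=next(_counter)):
    return stamp

# Default produced per call -- the behavior Field(default_factory=...) gives a pydantic model.
def stamp_lazy(stamp=None):
    return next(_counter) if stamp is None else stamp

print(stamp_eager(), stamp_eager())  # 0 0  -> frozen at definition time
print(stamp_lazy(), stamp_lazy())    # 1 2  -> fresh value each call
```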
================================================
FILE: app/api/router.py
================================================
from fastapi import APIRouter
from app.api.endpoints import (
tiktok_web,
tiktok_app,
douyin_web,
bilibili_web,
hybrid_parsing, ios_shortcut, download,
)
router = APIRouter()
# TikTok routers
router.include_router(tiktok_web.router, prefix="/tiktok/web", tags=["TikTok-Web-API"])
router.include_router(tiktok_app.router, prefix="/tiktok/app", tags=["TikTok-App-API"])
# Douyin routers
router.include_router(douyin_web.router, prefix="/douyin/web", tags=["Douyin-Web-API"])
# Bilibili routers
router.include_router(bilibili_web.router, prefix="/bilibili/web", tags=["Bilibili-Web-API"])
# Hybrid routers
router.include_router(hybrid_parsing.router, prefix="/hybrid", tags=["Hybrid-API"])
# iOS_Shortcut routers
router.include_router(ios_shortcut.router, prefix="/ios", tags=["iOS-Shortcut"])
# Download routers
router.include_router(download.router, tags=["Download"])
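The prefixes compose: a route registered on `tiktok_web.router` is served under `/api/tiktok/web/...` once `app/main.py` mounts this `api_router` at `/api`. A minimal FastAPI-free sketch of that nesting (class and variable names are illustrative):

```python
# Hypothetical stand-in mirroring how include_router concatenates prefixes.
class MiniRouter:
    def __init__(self):
        self.routes = []  # (full_path, endpoint_name) pairs

    def get(self, path, name):
        self.routes.append((path, name))

    def include_router(self, other, prefix=""):
        # The parent prefix is prepended to every child route.
        for path, name in other.routes:
            self.routes.append((prefix + path, name))

demo_tiktok_web = MiniRouter()
demo_tiktok_web.get("/fetch_user_collect", "fetch_user_collect")

demo_api_router = MiniRouter()
demo_api_router.include_router(demo_tiktok_web, prefix="/tiktok/web")

demo_app = MiniRouter()
demo_app.include_router(demo_api_router, prefix="/api")

print(demo_app.routes[0][0])  # /api/tiktok/web/fetch_user_collect
```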
================================================
FILE: app/main.py
================================================
# ==============================================================================
# Copyright (C) 2021 Evil0ctal
#
# This file is part of the Douyin_TikTok_Download_API project.
#
# This project is licensed under the Apache License 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# __
# /> フ
# | _ _ l
# /` ミ_xノ
# / | Feed me Stars ⭐ ️
# / ヽ ノ
# │ | | |
# / ̄| | | |
# | ( ̄ヽ__ヽ_)__)
# \二つ
# ==============================================================================
#
# Contributor Link:
# - https://github.com/Evil0ctal
# - https://github.com/Johnserf-Seed
#
# ==============================================================================
# FastAPI APP
import uvicorn
from fastapi import FastAPI
from app.api.router import router as api_router
# PyWebIO APP
from app.web.app import MainView
from pywebio.platform.fastapi import asgi_app
# OS
import os
# YAML
import yaml
# Load Config
# 读取上级再上级目录的配置文件
config_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'config.yaml')
with open(config_path, 'r', encoding='utf-8') as file:
config = yaml.safe_load(file)
Host_IP = config['API']['Host_IP']
Host_Port = config['API']['Host_Port']
# API Tags
tags_metadata = [
{
"name": "Hybrid-API",
"description": "**(混合数据接口/Hybrid-API data endpoints)**",
},
{
"name": "Douyin-Web-API",
"description": "**(抖音Web数据接口/Douyin-Web-API data endpoints)**",
},
{
"name": "TikTok-Web-API",
"description": "**(TikTok-Web-API数据接口/TikTok-Web-API data endpoints)**",
},
{
"name": "TikTok-App-API",
"description": "**(TikTok-App-API数据接口/TikTok-App-API data endpoints)**",
},
{
"name": "Bilibili-Web-API",
"description": "**(Bilibili-Web-API数据接口/Bilibili-Web-API data endpoints)**",
},
{
"name": "iOS-Shortcut",
"description": "**(iOS快捷指令数据接口/iOS-Shortcut data endpoints)**",
},
{
"name": "Download",
"description": "**(下载数据接口/Download data endpoints)**",
},
]
version = config['API']['Version']
update_time = config['API']['Update_Time']
environment = config['API']['Environment']
description = f"""
### [中文]
#### 关于
- **Github**: [Douyin_TikTok_Download_API](https://github.com/Evil0ctal/Douyin_TikTok_Download_API)
- **版本**: `{version}`
- **更新时间**: `{update_time}`
- **环境**: `{environment}`
- **文档**: [API Documentation](https://douyin.wtf/docs)
#### 备注
- 本项目仅供学习交流使用,不得用于违法用途,否则后果自负。
- 如果你不想自己部署,可以直接使用我们的在线API服务:[Douyin_TikTok_Download_API](https://douyin.wtf/docs)
- 如果你需要更稳定以及更多功能的API服务,可以使用付费API服务:[TikHub API](https://api.tikhub.io/)
### [English]
#### About
- **Github**: [Douyin_TikTok_Download_API](https://github.com/Evil0ctal/Douyin_TikTok_Download_API)
- **Version**: `{version}`
- **Last Updated**: `{update_time}`
- **Environment**: `{environment}`
- **Documentation**: [API Documentation](https://douyin.wtf/docs)
#### Note
- This project is for learning and communication only, and shall not be used for illegal purposes, otherwise the consequences shall be borne by yourself.
- If you do not want to deploy it yourself, you can directly use our online API service: [Douyin_TikTok_Download_API](https://douyin.wtf/docs)
- If you need a more stable and feature-rich API service, you can use the paid API service: [TikHub API](https://api.tikhub.io)
"""
docs_url = config['API']['Docs_URL']
redoc_url = config['API']['Redoc_URL']
app = FastAPI(
title="Douyin TikTok Download API",
description=description,
version=version,
openapi_tags=tags_metadata,
docs_url=docs_url, # 文档路径
redoc_url=redoc_url, # redoc文档路径
)
# API router
app.include_router(api_router, prefix="/api")
# PyWebIO APP
if config['Web']['PyWebIO_Enable']:
webapp = asgi_app(lambda: MainView().main_view())
app.mount("/", webapp)
if __name__ == '__main__':
uvicorn.run(app, host=Host_IP, port=Host_Port)
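After `yaml.safe_load`, `config` is a plain nested dict, and the code above assumes at least the keys shown below. A sketch of that minimal shape (values are placeholders; the repository's real `config.yaml` defines more):

```python
# Illustrative minimal shape of config.yaml as app/main.py reads it.
config = {
    "API": {
        "Host_IP": "0.0.0.0",
        "Host_Port": 80,
        "Version": "1.0.0",
        "Update_Time": "2024-01-01",
        "Environment": "Demo",
        "Docs_URL": "/docs",
        "Redoc_URL": "/redoc",
    },
    "Web": {"PyWebIO_Enable": True},
}

# The same accesses main.py performs after loading the YAML file.
host, port = config["API"]["Host_IP"], config["API"]["Host_Port"]
print(host, port)  # 0.0.0.0 80
```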
================================================
FILE: app/web/app.py
================================================
# PyWebIO组件/PyWebIO components
import os
import yaml
from pywebio import session, config as pywebio_config
from pywebio.input import *
from pywebio.output import *
from app.web.views.About import about_pop_window
from app.web.views.Document import api_document_pop_window
from app.web.views.Downloader import downloader_pop_window
from app.web.views.EasterEgg import a
from app.web.views.ParseVideo import parse_video
from app.web.views.Shortcuts import ios_pop_window
# PyWebIO的各个视图/Views of PyWebIO
from app.web.views.ViewsUtils import ViewsUtils
# 读取上级再上级目录的配置文件
config_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'config.yaml')
with open(config_path, 'r', encoding='utf-8') as file:
_config = yaml.safe_load(file)
pywebio_config(theme=_config['Web']['PyWebIO_Theme'],
title=_config['Web']['Tab_Title'],
description=_config['Web']['Description'],
js_file=[
# 整一个看板娘,二次元浓度++
_config['Web']['Live2D_JS'] if _config['Web']['Live2D_Enable'] else None,
])
class MainView:
def __init__(self):
self.utils = ViewsUtils()
# 主界面/Main view
def main_view(self):
# 左侧导航栏/Left navbar
with use_scope('main'):
# 设置favicon/Set favicon
favicon_url = _config['Web']['Favicon']
session.run_js(f"""
$('head').append('<link rel="icon" type="image/png" href="{favicon_url}">')
""")
            # 移除footer/Remove footer
session.run_js("""$('footer').remove()""")
# 设置不允许referrer/Set no referrer
session.run_js("""$('head').append('<meta name=referrer content=no-referrer>');""")
# 设置标题/Set title
title = self.utils.t("TikTok/抖音无水印在线解析下载",
"Douyin/TikTok online parsing and download without watermark")
put_html(f"""
<div align="center">
<a href="/" alt="logo" ><img src="{favicon_url}" width="100"/></a>
<h1 align="center">{title}</h1>
</div>
""")
# 设置导航栏/Navbar
put_row(
[
put_button(self.utils.t("快捷指令", 'iOS Shortcut'),
onclick=lambda: ios_pop_window(), link_style=True, small=True),
put_button(self.utils.t("开放接口", 'Open API'),
onclick=lambda: api_document_pop_window(), link_style=True, small=True),
put_button(self.utils.t("下载器", "Downloader"),
onclick=lambda: downloader_pop_window(), link_style=True, small=True),
put_button(self.utils.t("关于", 'About'),
onclick=lambda: about_pop_window(), link_style=True, small=True),
])
# 设置功能选择/Function selection
options = [
# Index: 0
self.utils.t('🔍批量解析视频', '🔍Batch Parse Video'),
# Index: 1
self.utils.t('🔍解析用户主页视频', '🔍Parse User Homepage Video'),
# Index: 2
self.utils.t('🥚小彩蛋', '🥚Easter Egg'),
]
select_options = select(
self.utils.t('请在这里选择一个你想要的功能吧 ~', 'Please select a function you want here ~'),
required=True,
options=options,
help_text=self.utils.t('📎选上面的选项然后点击提交', '📎Select the options above and click Submit')
)
# 根据输入运行不同的函数
if select_options == options[0]:
parse_video()
elif select_options == options[1]:
put_markdown(self.utils.t('暂未开放,敬请期待~', 'Not yet open, please look forward to it~'))
elif select_options == options[2]:
a() if _config['Web']['Easter_Egg'] else put_markdown(self.utils.t('没有小彩蛋哦~', 'No Easter Egg~'))
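Throughout these views, `self.utils.t(zh, en)` selects the Chinese or English string. The `ViewsUtils` source is not shown here, so the following is only a hypothetical sketch of that bilingual-helper pattern (the real implementation likely derives the language from the browser or session rather than a constructor flag):

```python
# Hypothetical sketch of a t(zh, en) bilingual helper; DemoViewsUtils is a made-up name.
class DemoViewsUtils:
    def __init__(self, language: str = "zh"):
        self.language = language

    def t(self, zh: str, en: str) -> str:
        # Return the Chinese string for zh sessions, the English one otherwise.
        return zh if self.language == "zh" else en

utils = DemoViewsUtils(language="en")
print(utils.t("关于", "About"))  # About
```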
================================================
FILE: app/web/views/About.py
================================================
from pywebio.output import popup, put_markdown, put_html, put_text, put_link, put_image
from app.web.views.ViewsUtils import ViewsUtils
t = ViewsUtils().t
# 关于弹窗/About pop-up
def about_pop_window():
with popup(t('更多信息', 'More Information')):
put_html('<h3>👀{}</h3>'.format(t('访问记录', 'Visit Record')))
put_image('https://views.whatilearened.today/views/github/evil0ctal/TikTokDownload_PyWebIO.svg',
title='访问记录')
put_html('<hr>')
put_html('<h3>⭐Github</h3>')
put_markdown('[Douyin_TikTok_Download_API](https://github.com/Evil0ctal/Douyin_TikTok_Download_API)')
put_html('<hr>')
put_html('<h3>🎯{}</h3>'.format(t('反馈', 'Feedback')))
put_markdown('{}:[issues](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues)'.format(
t('Bug反馈', 'Bug Feedback')))
put_html('<hr>')
put_html('<h3>💖WeChat</h3>')
put_markdown('WeChat:[Evil0ctal](https://mycyberpunk.com/)')
put_html('<hr>')
================================================
FILE: app/web/views/Document.py
================================================
from pywebio.output import popup, put_markdown, put_html, put_text, put_link
from app.web.views.ViewsUtils import ViewsUtils
t = ViewsUtils().t
# API文档弹窗/API documentation pop-up
def api_document_pop_window():
with popup(t("📑API文档", "📑API Document")):
put_markdown(t("> 介绍",
"> Introduction"))
put_markdown(t("你可以利用本项目提供的API接口来获取抖音/TikTok的数据,具体接口文档请参考下方链接。",
"You can use the API provided by this project to obtain Douyin/TikTok data. For specific API documentation, please refer to the link below."))
put_markdown(t("如果API不可用,请尝试自己部署本项目,然后再配置文件中修改cookie的值。",
"If the API is not available, please try to deploy this project by yourself, and then modify the value of the cookie in the configuration file."))
put_link('[API Docs]', '/docs', new_window=True)
put_markdown("----")
put_markdown(t("> 更多接口",
"> More APIs"))
put_markdown(
t("[TikHub.io](https://beta-web.tikhub.io/en-us/users/signin)是一个API平台,提供包括Douyin、TikTok在内的各种公开数据接口,如果您想支持 [Douyin_TikTok_Download_API](https://github.com/Evil0ctal/Douyin_TikTok_Download_API) 项目的开发,我们强烈建议您选择[TikHub.io](https://beta-web.tikhub.io/en-us/users/signin)。",
"[TikHub.io](https://beta-web.tikhub.io/en-us/users/signin) is an API platform that provides various public data interfaces including Douyin and TikTok. If you want to support the development of the [Douyin_TikTok_Download_API](https://github.com/Evil0ctal/Douyin_TikTok_Download_API) project, we strongly recommend that you choose [TikHub.io](https://beta-web.tikhub.io/en-us/users/signin)."))
put_markdown(
t("#### 特点:",
"#### Features:"))
put_markdown(
t("> 📦 开箱即用",
"> 📦 Ready to use"))
put_markdown(
t("简化使用流程,利用封装好的SDK迅速开展开发工作。所有API接口均依据RESTful架构设计,并使用OpenAPI规范进行描述和文档化,附带示例参数,确保调用更加简便。",
"Simplify the use process and quickly carry out development work using the encapsulated SDK. All API interfaces are designed based on the RESTful architecture and described and documented using the OpenAPI specification, with example parameters attached to ensure easier calls."))
put_markdown(
t("> 💰 成本优势",
"> 💰 Cost advantage"))
put_markdown(
t("不预设套餐限制,没有月度使用门槛,所有消费按实际使用量即时计费,并且根据用户每日的请求量进行阶梯式计费,同时可以通过每日签到在用户后台进行签到获取免费的额度,并且这些免费额度不会过期。",
"There is no preset package limit, no monthly usage threshold, all consumption is billed in real time according to the actual usage, and billed in a step-by-step manner according to the user's daily request volume. At the same time, you can sign in daily in the user background to get free quotas, and these free quotas will not expire."))
put_markdown(
t("> ⚡️ 快速支持",
"> ⚡️ Quick support"))
put_markdown(
t("我们有一个庞大的Discord社区服务器,管理员和其他用户会在服务器中快速的回复你,帮助你快速解决当前的问题。",
"We have a huge Discord community server, where administrators and other users will quickly reply to you in the server and help you quickly solve the current problem."))
put_markdown(
t("> 🎉 拥抱开源",
"> 🎉 Embrace open source"))
put_markdown(
t("TikHub的部分源代码会开源在Github上,并且会赞助一些开源项目的作者。",
"Some of TikHub's source code will be open sourced on Github, and will sponsor some open source project authors."))
put_markdown(
t("#### 链接:",
"#### Links:"))
put_markdown(
t("- Github: [TikHub Github](https://github.com/TikHubIO)",
"- Github: [TikHub Github](https://github.com/TikHubIO)"))
put_markdown(
t("- Discord: [TikHub Discord](https://discord.com/invite/aMEAS8Xsvz)",
"- Discord: [TikHub Discord](https://discord.com/invite/aMEAS8Xsvz)"))
put_markdown(
t("- Register: [TikHub signup](https://beta-web.tikhub.io/en-us/users/signup)",
"- Register: [TikHub signup](https://beta-web.tikhub.io/en-us/users/signup)"))
put_markdown(
t("- API Docs: [TikHub API Docs](https://api.tikhub.io/)",
"- API Docs: [TikHub API Docs](https://api.tikhub.io/)"))
put_markdown("----")
================================================
FILE: app/web/views/Downloader.py
================================================
from pywebio.output import popup, put_markdown, put_html, put_text, put_link
from app.web.views.ViewsUtils import ViewsUtils
t = ViewsUtils().t
# 下载器弹窗/Downloader pop-up
def downloader_pop_window():
with popup(t("💾 下载器", "💾 Downloader")):
put_markdown(t("> 桌面端下载器", "> Desktop Downloader"))
put_markdown(t("你可以使用下面的开源项目在桌面端下载视频:",
"You can use the following open source projects to download videos on the desktop:"))
put_markdown("1. [TikTokDownload](https://github.com/Johnserf-Seed/TikTokDownload)")
put_markdown(t("> 备注", "> Note"))
put_markdown(t("1. 请注意下载器的使用规范,不要用于违法用途。",
"1. Please pay attention to the use specifications of the downloader and do not use it for illegal purposes."))
put_markdown(t("2. 下载器相关问题请咨询对应项目的开发者。",
"2. For issues related to the downloader, please consult the developer of the corresponding project."))
================================================
Directory structure:
================================================
gitextract_6fzjbc2k/
├── .github/
│ ├── ISSUE_TEMPLATE/
│ │ ├── bug_report.md
│ │ ├── bug_report_CN.md
│ │ ├── feature_request.md
│ │ └── feature_request_CN.md
│ └── workflows/
│ ├── codeql-analysis.yml
│ ├── docker-image.yml
│ └── readme.yml
├── .gitignore
├── Dockerfile
├── LICENSE
├── Procfile
├── README.en.md
├── README.md
├── Screenshots/
│ ├── benchmarks/
│ │ └── info
│ └── v3_screenshots/
│ └── info
├── app/
│ ├── api/
│ │ ├── endpoints/
│ │ │ ├── bilibili_web.py
│ │ │ ├── douyin_web.py
│ │ │ ├── download.py
│ │ │ ├── hybrid_parsing.py
│ │ │ ├── ios_shortcut.py
│ │ │ ├── tiktok_app.py
│ │ │ └── tiktok_web.py
│ │ ├── models/
│ │ │ └── APIResponseModel.py
│ │ └── router.py
│ ├── main.py
│ └── web/
│ ├── app.py
│ └── views/
│ ├── About.py
│ ├── Document.py
│ ├── Downloader.py
│ ├── EasterEgg.py
│ ├── ParseVideo.py
│ ├── Shortcuts.py
│ └── ViewsUtils.py
├── bash/
│ ├── install.sh
│ └── update.sh
├── chrome-cookie-sniffer/
│ ├── README.md
│ ├── background.js
│ ├── manifest.json
│ ├── popup.html
│ └── popup.js
├── config.yaml
├── crawlers/
│ ├── base_crawler.py
│ ├── bilibili/
│ │ └── web/
│ │ ├── config.yaml
│ │ ├── endpoints.py
│ │ ├── models.py
│ │ ├── utils.py
│ │ ├── web_crawler.py
│ │ └── wrid.py
│ ├── douyin/
│ │ └── web/
│ │ ├── abogus.py
│ │ ├── config.yaml
│ │ ├── endpoints.py
│ │ ├── models.py
│ │ ├── utils.py
│ │ ├── web_crawler.py
│ │ └── xbogus.py
│ ├── hybrid/
│ │ └── hybrid_crawler.py
│ ├── tiktok/
│ │ ├── app/
│ │ │ ├── app_crawler.py
│ │ │ ├── config.yaml
│ │ │ ├── endpoints.py
│ │ │ └── models.py
│ │ └── web/
│ │ ├── config.yaml
│ │ ├── endpoints.py
│ │ ├── models.py
│ │ ├── utils.py
│ │ └── web_crawler.py
│ └── utils/
│ ├── api_exceptions.py
│ ├── deprecated.py
│ ├── logger.py
│ └── utils.py
├── daemon/
│ └── Douyin_TikTok_Download_API.service
├── docker-compose.yml
├── logo/
│ └── logo.txt
├── requirements.txt
├── start.py
└── start.sh
SYMBOL INDEX (425 symbols across 42 files)
FILE: app/api/endpoints/bilibili_web.py
function fetch_one_video (line 13) | async def fetch_one_video(request: Request,
function fetch_one_video (line 51) | async def fetch_one_video(request: Request,
function fetch_user_post_videos (line 94) | async def fetch_user_post_videos(request: Request,
function fetch_collect_folders (line 137) | async def fetch_collect_folders(request: Request,
function fetch_user_collection_videos (line 176) | async def fetch_user_collection_videos(request: Request,
function fetch_collect_folders (line 221) | async def fetch_collect_folders(request: Request,
function fetch_collect_folders (line 260) | async def fetch_collect_folders(request: Request,
function fetch_collect_folders (line 299) | async def fetch_collect_folders(request: Request,
function fetch_collect_folders (line 342) | async def fetch_collect_folders(request: Request,
function fetch_collect_folders (line 389) | async def fetch_collect_folders(request: Request,
function fetch_one_video (line 432) | async def fetch_one_video(request: Request,
function fetch_collect_folders (line 471) | async def fetch_collect_folders(request: Request,
function fetch_collect_folders (line 510) | async def fetch_collect_folders(request: Request,
function fetch_collect_folders (line 549) | async def fetch_collect_folders(request: Request,
function fetch_collect_folders (line 592) | async def fetch_collect_folders(request: Request,):
function fetch_one_video (line 626) | async def fetch_one_video(request: Request,
function fetch_one_video (line 664) | async def fetch_one_video(request: Request,
FILE: app/api/endpoints/douyin_web.py
function fetch_one_video (line 15) | async def fetch_one_video(request: Request,
function fetch_user_post_videos (line 54) | async def fetch_user_post_videos(request: Request,
function fetch_user_like_videos (line 103) | async def fetch_user_like_videos(request: Request,
function fetch_user_collection_videos (line 152) | async def fetch_user_collection_videos(request: Request,
function fetch_user_mix_videos (line 200) | async def fetch_user_mix_videos(request: Request,
function fetch_user_live_videos (line 248) | async def fetch_user_live_videos(request: Request,
function fetch_user_live_videos_by_room_id (line 289) | async def fetch_user_live_videos_by_room_id(request: Request,
function fetch_live_gift_ranking (line 330) | async def fetch_live_gift_ranking(request: Request,
function fetch_live_room_product_result (line 375) | async def fetch_live_room_product_result(request: Request,
function handler_user_profile (line 430) | async def handler_user_profile(request: Request,
function fetch_video_comments (line 472) | async def fetch_video_comments(request: Request,
function fetch_video_comments_reply (line 520) | async def fetch_video_comments_reply(request: Request,
function generate_real_msToken (line 573) | async def generate_real_msToken(request: Request):
function generate_ttwid (line 605) | async def generate_ttwid(request: Request):
function generate_verify_fp (line 637) | async def generate_verify_fp(request: Request):
function generate_s_v_web_id (line 669) | async def generate_s_v_web_id(request: Request):
function generate_x_bogus (line 701) | async def generate_x_bogus(request: Request,
function generate_a_bogus (line 741) | async def generate_a_bogus(request: Request,
function get_sec_user_id (line 783) | async def get_sec_user_id(request: Request,
function get_all_sec_user_id (line 824) | async def get_all_sec_user_id(request: Request,
function get_aweme_id (line 881) | async def get_aweme_id(request: Request,
function get_all_aweme_id (line 921) | async def get_all_aweme_id(request: Request,
function get_webcast_id (line 979) | async def get_webcast_id(request: Request,
function get_all_webcast_id (line 1019) | async def get_all_webcast_id(request: Request,
FILE: app/api/endpoints/download.py
function fetch_data (line 23) | async def fetch_data(url: str, headers: dict = None):
function fetch_data_stream (line 33) | async def fetch_data_stream(url: str, request:Request , headers: dict = ...
function merge_bilibili_video_audio (line 53) | async def merge_bilibili_video_audio(video_url: str, audio_url: str, req...
function download_file_hybrid (line 112) | async def download_file_hybrid(request: Request,
FILE: app/api/endpoints/hybrid_parsing.py
function hybrid_parsing_single_video (line 17) | async def hybrid_parsing_single_video(request: Request,
function update_cookie_api (line 59) | async def update_cookie_api(request: Request,
FILE: app/api/endpoints/ios_shortcut.py
function get_shortcut (line 16) | async def get_shortcut():
FILE: app/api/endpoints/tiktok_app.py
function fetch_one_video (line 15) | async def fetch_one_video(request: Request,
FILE: app/api/endpoints/tiktok_web.py
function fetch_one_video (line 17) | async def fetch_one_video(request: Request,
function fetch_user_profile (line 57) | async def fetch_user_profile(request: Request,
function fetch_user_post (line 103) | async def fetch_user_post(request: Request,
function fetch_user_like (line 156) | async def fetch_user_like(request: Request,
function fetch_user_collect (line 212) | async def fetch_user_collect(request: Request,
function fetch_user_play_list (line 270) | async def fetch_user_play_list(request: Request,
function fetch_user_mix (line 319) | async def fetch_user_mix(request: Request,
function fetch_post_comment (line 368) | async def fetch_post_comment(request: Request,
function fetch_post_comment_reply (line 420) | async def fetch_post_comment_reply(request: Request,
function fetch_user_fans (line 477) | async def fetch_user_fans(request: Request,
function fetch_user_follow (line 530) | async def fetch_user_follow(request: Request,
function generate_real_msToken (line 586) | async def generate_real_msToken(request: Request):
function generate_ttwid (line 618) | async def generate_ttwid(request: Request,
function generate_xbogus (line 658) | async def generate_xbogus(request: Request,
function get_sec_user_id (line 706) | async def get_sec_user_id(request: Request,
function get_all_sec_user_id (line 748) | async def get_all_sec_user_id(request: Request,
function get_aweme_id (line 790) | async def get_aweme_id(request: Request,
function get_all_aweme_id (line 832) | async def get_all_aweme_id(request: Request,
function get_unique_id (line 874) | async def get_unique_id(request: Request,
function get_all_unique_id (line 916) | async def get_all_unique_id(request: Request,
FILE: app/api/models/APIResponseModel.py
class ResponseModel (line 11) | class ResponseModel(BaseModel):
class ErrorResponseModel (line 18) | class ErrorResponseModel(BaseModel):
class HybridResponseModel (line 28) | class HybridResponseModel(BaseModel):
class iOS_Shortcut (line 35) | class iOS_Shortcut(BaseModel):
FILE: app/web/app.py
class MainView (line 32) | class MainView:
method __init__ (line 33) | def __init__(self):
method main_view (line 37) | def main_view(self):
FILE: app/web/views/About.py
function about_pop_window (line 8) | def about_pop_window():
FILE: app/web/views/Document.py
function api_document_pop_window (line 8) | def api_document_pop_window():
FILE: app/web/views/Downloader.py
function downloader_pop_window (line 8) | def downloader_pop_window():
FILE: app/web/views/EasterEgg.py
function a (line 9) | def a():
FILE: app/web/views/ParseVideo.py
function valid_check (line 23) | def valid_check(input_data: str):
function error_do (line 42) | def error_do(reason: str, value: str) -> None:
function parse_video (line 76) | def parse_video():
FILE: app/web/views/Shortcuts.py
function ios_pop_window (line 16) | def ios_pop_window():
FILE: app/web/views/ViewsUtils.py
class ViewsUtils (line 7) | class ViewsUtils:
method t (line 11) | def t(zh: str, en: str) -> str:
method clear_previous_scope (line 16) | def clear_previous_scope():
method find_url (line 22) | def find_url(string: str) -> list:
FILE: chrome-cookie-sniffer/background.js
constant SERVICES (line 5) | const SERVICES = {
function getServiceFromUrl (line 15) | function getServiceFromUrl(url) {
function shouldSkipCapture (line 25) | async function shouldSkipCapture(serviceName) {
function isCookieChanged (line 47) | async function isCookieChanged(serviceName, newCookie) {
function saveCookieData (line 62) | async function saveCookieData(serviceName, url, cookie, source = 'header...
function sendWebhook (line 85) | async function sendWebhook(serviceName, cookie) {
FILE: chrome-cookie-sniffer/popup.js
function loadWebhookConfig (line 18) | function loadWebhookConfig() {
function saveWebhookConfig (line 28) | function saveWebhookConfig() {
function updateTestButtonState (line 36) | function updateTestButtonState() {
function isValidUrl (line 42) | function isValidUrl(string) {
function testWebhook (line 52) | async function testWebhook() {
function showStatusInfo (line 127) | function showStatusInfo(message) {
function loadServiceData (line 136) | function loadServiceData() {
function createServiceCard (line 162) | function createServiceCard(service, data) {
function copyCookie (line 193) | async function copyCookie(serviceName) {
function deleteService (line 215) | function deleteService(serviceName) {
function clearAllData (line 228) | function clearAllData() {
function exportData (line 244) | function exportData() {
FILE: crawlers/base_crawler.py
class BaseCrawler (line 56) | class BaseCrawler:
method __init__ (line 61) | def __init__(
method fetch_response (line 104) | async def fetch_response(self, endpoint: str) -> Response:
method fetch_get_json (line 115) | async def fetch_get_json(self, endpoint: str) -> dict:
method fetch_post_json (line 127) | async def fetch_post_json(self, endpoint: str, params: dict = {}, data...
method parse_json (line 139) | def parse_json(self, response: Response) -> dict:
method get_fetch_data (line 174) | async def get_fetch_data(self, url: str):
method post_fetch_data (line 217) | async def post_fetch_data(self, url: str, params: dict = {}, data=None):
method head_fetch_data (line 267) | async def head_fetch_data(self, url: str):
method handle_http_status_error (line 295) | def handle_http_status_error(self, http_error, url: str, attempt):
method close (line 342) | async def close(self):
method __aenter__ (line 345) | async def __aenter__(self):
method __aexit__ (line 348) | async def __aexit__(self, exc_type, exc_val, exc_tb):
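`BaseCrawler` implements `__aenter__`/`__aexit__`, so callers can tie the underlying HTTP client's lifetime to an `async with` block and have `close()` run automatically on exit. A toy sketch of that pattern; the class and its fake fetch method are illustrative, not the repository's code:

```python
import asyncio


class ToyCrawler:
    """Illustrates the async-context-manager shape used by BaseCrawler."""

    def __init__(self):
        self.closed = False

    async def fetch_get_json(self, endpoint: str) -> dict:
        # A real crawler would perform an HTTP GET here.
        return {"endpoint": endpoint, "ok": True}

    async def close(self):
        # Release the underlying HTTP client / connection pool.
        self.closed = True

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self.close()


async def main():
    async with ToyCrawler() as crawler:
        data = await crawler.fetch_get_json("/api/demo")
    # close() has already run by the time the block exits.
    return data, crawler.closed


print(asyncio.run(main()))  # ({'endpoint': '/api/demo', 'ok': True}, True)
```

Because `__aexit__` runs even when the body raises, the connection pool is released on both success and error paths.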
FILE: crawlers/bilibili/web/endpoints.py
class BilibiliAPIEndpoints (line 1) | class BilibiliAPIEndpoints:
FILE: crawlers/bilibili/web/models.py
class BaseRequestsModel (line 5) | class BaseRequestsModel(BaseModel):
class UserPostVideos (line 9) | class UserPostVideos(BaseRequestsModel):
class UserProfile (line 17) | class UserProfile(BaseRequestsModel):
class UserDynamic (line 21) | class UserDynamic(BaseRequestsModel):
class ComPopular (line 27) | class ComPopular(BaseRequestsModel):
class PlayUrl (line 33) | class PlayUrl(BaseRequestsModel):
FILE: crawlers/bilibili/web/utils.py
class EndpointGenerator (line 7) | class EndpointGenerator:
method __init__ (line 8) | def __init__(self, params: dict):
method user_post_videos_endpoint (line 12) | async def user_post_videos_endpoint(self) -> str:
method video_playurl_endpoint (line 20) | async def video_playurl_endpoint(self) -> str:
method user_profile_endpoint (line 28) | async def user_profile_endpoint(self) -> str:
method com_popular_endpoint (line 36) | async def com_popular_endpoint(self) -> str:
method user_dynamic_endpoint (line 44) | async def user_dynamic_endpoint(self):
class WridManager (line 52) | class WridManager:
method get_encode_query (line 54) | async def get_encode_query(cls, params: dict) -> str:
method wrid_model_endpoint (line 67) | async def wrid_model_endpoint(cls, params: dict) -> str:
function bv2av (line 77) | async def bv2av(bv_id: str) -> int:
class ResponseAnalyzer (line 96) | class ResponseAnalyzer:
method collect_folders_analyze (line 99) | async def collect_folders_analyze(cls, response: dict) -> dict:
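`bv2av` converts a Bilibili `BV` id to the older numeric `av` id. One widely documented version of that mapping (a base-58 decode over shuffled character positions, then a subtraction and an XOR) is sketched below for orientation; it is not guaranteed to match this repository's implementation, and Bilibili has since revised the encoding for newer ids:

```python
# Widely circulated BV -> av conversion (pre-revision scheme). Shown for
# illustration only; newer BV ids use a different encoding.
TABLE = "fZodR9XQDSUm21yCkr6zBqiveYah8bt4xsWpHnJE7jL5VG3guMTKNPAwcF"
TR = {c: i for i, c in enumerate(TABLE)}
POS = [11, 10, 3, 8, 4, 6]   # character positions read from the BV id
XOR = 177451812
ADD = 8728348608


def bv2av(bv_id: str) -> int:
    # Base-58 decode the six significant characters, then undo the mask.
    r = sum(TR[bv_id[POS[i]]] * 58 ** i for i in range(6))
    return (r - ADD) ^ XOR


print(bv2av("BV17x411w7KC"))  # 170001
```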
FILE: crawlers/bilibili/web/web_crawler.py
class BilibiliWebCrawler (line 56) | class BilibiliWebCrawler:
method get_bilibili_headers (line 59) | async def get_bilibili_headers(self):
method fetch_one_video (line 76) | async def fetch_one_video(self, bv_id: str) -> dict:
method fetch_video_playurl (line 89) | async def fetch_video_playurl(self, bv_id: str, cid: str, qn: str = "6...
method fetch_user_post_videos (line 105) | async def fetch_user_post_videos(self, uid: str, pn: int) -> dict:
method fetch_collect_folders (line 126) | async def fetch_collect_folders(self, uid: str) -> dict:
method fetch_folder_videos (line 141) | async def fetch_folder_videos(self, folder_id: str, pn: int) -> dict:
method fetch_user_profile (line 158) | async def fetch_user_profile(self, uid: str) -> dict:
method fetch_com_popular (line 174) | async def fetch_com_popular(self, pn: int) -> dict:
method fetch_video_comments (line 190) | async def fetch_video_comments(self, bv_id: str, pn: int) -> dict:
method fetch_comment_reply (line 205) | async def fetch_comment_reply(self, bv_id: str, pn: int, rpid: str) ->...
method fetch_user_dynamic (line 224) | async def fetch_user_dynamic(self, uid: str, offset: str) -> dict:
method fetch_video_danmaku (line 241) | async def fetch_video_danmaku(self, cid: str):
method fetch_live_room_detail (line 254) | async def fetch_live_room_detail(self, room_id: str) -> dict:
method fetch_live_videos (line 267) | async def fetch_live_videos(self, room_id: str) -> dict:
method fetch_live_streamers (line 280) | async def fetch_live_streamers(self, area_id: str, pn: int):
method bv_to_aid (line 295) | async def bv_to_aid(self, bv_id: str) -> int:
method fetch_video_parts (line 300) | async def fetch_video_parts(self, bv_id: str) -> str:
method fetch_all_live_areas (line 313) | async def fetch_all_live_areas(self) -> dict:
method main (line 327) | async def main(self):
FILE: crawlers/bilibili/web/wrid.py
function srotl (line 3) | def srotl(t, e):
function tendian (line 6) | def tendian(t):
function tbytes_to_words (line 14) | def tbytes_to_words(t):
function jbinstring_to_bytes (line 24) | def jbinstring_to_bytes(t):
function estring_to_bytes (line 31) | def estring_to_bytes(t):
function _ff (line 34) | def _ff(t, e, n, r, o, i, a):
function _gg (line 44) | def _gg(t, e, n, r, o, i, a):
function _hh (line 54) | def _hh(t, e, n, r, o, i, a):
function _ii (line 64) | def _ii(t, e, n, r, o, i, a):
function o (line 74) | def o(i, a):
function twords_to_bytes (line 170) | def twords_to_bytes(t):
function tbytes_to_hex (line 176) | def tbytes_to_hex(t):
function get_wrid (line 183) | def get_wrid(e):
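The helpers in `wrid.py` (`_ff`/`_gg`/`_hh`/`_ii` are the four MD5 round functions) are a hand-port of a JavaScript MD5 routine, so the `w_rid` signature can be reproduced byte-for-byte. When checking such a port, Python's standard `hashlib` provides the reference digest to compare against:

```python
import hashlib


# Reference MD5 digest to validate a hand-ported implementation against.
def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode("utf-8")).hexdigest()


print(md5_hex("abc"))  # 900150983cd24fb0d6963f7d28e17f72
```

If a ported routine disagrees with `hashlib.md5` on a known vector like this one, the port (usually the byte/word-order conversion) is where to look first.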
FILE: crawlers/douyin/web/abogus.py
class ABogus (line 30) | class ABogus:
method __init__ (line 55) | def __init__(self,
method list_1 (line 102) | def list_1(cls, random_num=None, a=170, b=85, c=45, ) -> list:
method list_2 (line 114) | def list_2(cls, random_num=None, a=170, b=85, ) -> list:
method list_3 (line 126) | def list_3(cls, random_num=None, a=170, b=85, ) -> list:
method random_list (line 138) | def random_list(
method from_char_code (line 164) | def from_char_code(*args):
method generate_string_1 (line 168) | def generate_string_1(
method generate_string_2 (line 177) | def generate_string_2(
method generate_string_2_list (line 195) | def generate_string_2_list(
method reg_to_array (line 227) | def reg_to_array(a):
method compress (line 241) | def compress(self, a):
method generate_f (line 270) | def generate_f(cls, e):
method pad_array (line 289) | def pad_array(arr, length=60):
method fill (line 294) | def fill(self, length=60):
method list_4 (line 302) | def list_4(
method end_check_num (line 368) | def end_check_num(a: list):
method decode_string (line 375) | def decode_string(cls, url_string, ):
method replace_func (line 380) | def replace_func(match):
method de (line 384) | def de(e, r):
method pe (line 389) | def pe(e):
method he (line 393) | def he(e, r, t, n):
method ve (line 401) | def ve(e, r, t, n):
method convert_to_char_code (line 409) | def convert_to_char_code(a):
method split_array (line 416) | def split_array(arr, chunk_size=64):
method char_code_at (line 423) | def char_code_at(s):
method write (line 426) | def write(self, e, ):
method reset (line 439) | def reset(self, ):
method sum (line 444) | def sum(self, e, length=60):
method generate_result_unit (line 452) | def generate_result_unit(cls, n, s):
method generate_result_end (line 459) | def generate_result_end(cls, s, e="s4"):
method generate_result (line 468) | def generate_result(cls, s, e="s4"):
method generate_args_code (line 504) | def generate_args_code(cls):
method generate_method_code (line 516) | def generate_method_code(self, method: str = "GET") -> list[int]:
method generate_params_code (line 520) | def generate_params_code(self, params: str) -> list[int]:
method sm3_to_array (line 525) | def sm3_to_array(cls, data: str | list) -> list[int]:
method generate_browser_info (line 551) | def generate_browser_info(cls, platform: str = "Win32") -> str:
method rc4_encrypt (line 580) | def rc4_encrypt(plaintext, key):
method get_value (line 601) | def get_value(self,
FILE: crawlers/douyin/web/endpoints.py
class DouyinAPIEndpoints (line 1) | class DouyinAPIEndpoints:
FILE: crawlers/douyin/web/models.py
class BaseRequestModel (line 8) | class BaseRequestModel(BaseModel):
class BaseLiveModel (line 45) | class BaseLiveModel(BaseModel):
class BaseLiveModel2 (line 64) | class BaseLiveModel2(BaseModel):
class BaseLoginModel (line 74) | class BaseLoginModel(BaseModel):
class UserProfile (line 86) | class UserProfile(BaseRequestModel):
class UserPost (line 90) | class UserPost(BaseRequestModel):
class PostDanmaku (line 97) | class PostDanmaku(BaseRequestModel):
class UserLike (line 104) | class UserLike(BaseRequestModel):
class UserCollection (line 110) | class UserCollection(BaseRequestModel):
class UserCollects (line 116) | class UserCollects(BaseRequestModel):
class UserCollectsVideo (line 122) | class UserCollectsVideo(BaseRequestModel):
class UserMusicCollection (line 129) | class UserMusicCollection(BaseRequestModel):
class UserMix (line 135) | class UserMix(BaseRequestModel):
class FriendFeed (line 141) | class FriendFeed(BaseRequestModel):
class PostFeed (line 152) | class PostFeed(BaseRequestModel):
class FollowFeed (line 168) | class FollowFeed(BaseRequestModel):
class PostRelated (line 175) | class PostRelated(BaseRequestModel):
class PostDetail (line 184) | class PostDetail(BaseRequestModel):
class PostComments (line 188) | class PostComments(BaseRequestModel):
class PostCommentsReply (line 199) | class PostCommentsReply(BaseRequestModel):
class PostLocate (line 207) | class PostLocate(BaseRequestModel):
class UserLive (line 217) | class UserLive(BaseLiveModel):
class LiveRoomRanking (line 223) | class LiveRoomRanking(BaseRequestModel):
class UserLive2 (line 231) | class UserLive2(BaseLiveModel2):
class FollowUserLive (line 235) | class FollowUserLive(BaseRequestModel):
class SuggestWord (line 239) | class SuggestWord(BaseRequestModel):
class LoginGetQr (line 248) | class LoginGetQr(BaseLoginModel):
class LoginCheckQr (line 254) | class LoginCheckQr(BaseLoginModel):
class UserFollowing (line 261) | class UserFollowing(BaseRequestModel):
class UserFollower (line 275) | class UserFollower(BaseRequestModel):
class URL_List (line 290) | class URL_List(BaseModel):
FILE: crawlers/douyin/web/utils.py
class TokenManager (line 78) | class TokenManager:
method gen_real_msToken (line 89) | def gen_real_msToken(cls) -> str:
method gen_false_msToken (line 154) | def gen_false_msToken(cls) -> str:
method gen_ttwid (line 159) | def gen_ttwid(cls) -> str:
class VerifyFpManager (line 200) | class VerifyFpManager:
method gen_verify_fp (line 202) | def gen_verify_fp(cls) -> str:
method gen_s_v_web_id (line 232) | def gen_s_v_web_id(cls) -> str:
class BogusManager (line 236) | class BogusManager:
method xb_str_2_endpoint (line 240) | def xb_str_2_endpoint(cls, endpoint: str, user_agent: str) -> str:
method xb_model_2_endpoint (line 250) | def xb_model_2_endpoint(cls, base_endpoint: str, params: dict, user_ag...
method ab_model_2_endpoint (line 295) | def ab_model_2_endpoint(cls, params: dict, user_agent: str) -> str:
class SecUserIdFetcher (line 307) | class SecUserIdFetcher:
method get_sec_user_id (line 313) | async def get_sec_user_id(cls, url: str) -> str:
method get_all_sec_user_id (line 379) | async def get_all_sec_user_id(cls, urls: list) -> list:
class AwemeIdFetcher (line 406) | class AwemeIdFetcher:
method get_aweme_id (line 414) | async def get_aweme_id(cls, url: str) -> str:
method get_all_aweme_id (line 463) | async def get_all_aweme_id(cls, urls: list) -> list:
class MixIdFetcher (line 490) | class MixIdFetcher:
method get_mix_id (line 493) | async def get_mix_id(cls, url: str) -> str:
class WebCastIdFetcher (line 497) | class WebCastIdFetcher:
method get_webcast_id (line 507) | async def get_webcast_id(cls, url: str) -> str:
method get_all_webcast_id (line 570) | async def get_all_webcast_id(cls, urls: list) -> list:
function format_file_name (line 597) | def format_file_name(
function create_user_folder (line 651) | def create_user_folder(kwargs: dict, nickname: Union[str, int]) -> Path:
function rename_user_folder (line 692) | def rename_user_folder(old_path: Path, new_nickname: str) -> Path:
function create_or_rename_user_folder (line 712) | def create_or_rename_user_folder(
function show_qrcode (line 738) | def show_qrcode(qrcode_url: str, show_image: bool = False) -> None:
function json_2_lrc (line 760) | def json_2_lrc(data: Union[str, list, dict]) -> str:
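`json_2_lrc` converts timed-lyric JSON into the LRC subtitle format, whose lines are `[mm:ss.xx]text`. A minimal sketch of that conversion under an assumed input shape; the field names `"time"` and `"text"` are assumptions for illustration, and the real Douyin payload may differ:

```python
# Minimal JSON -> LRC sketch. The input field names ("time" in seconds,
# "text") are assumptions; the real lyric payload may use other keys.
def json_to_lrc(entries: list[dict]) -> str:
    lines = []
    for item in entries:
        minutes, secs = divmod(float(item["time"]), 60)
        # LRC timestamps are [mm:ss.xx], zero-padded on both fields.
        lines.append(f"[{int(minutes):02d}:{secs:05.2f}]{item['text']}")
    return "\n".join(lines)


print(json_to_lrc([{"time": 1.5, "text": "hello"},
                   {"time": 61.0, "text": "world"}]))
```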
FILE: crawlers/douyin/web/web_crawler.py
class DouyinWebCrawler (line 70) | class DouyinWebCrawler:
method get_douyin_headers (line 73) | async def get_douyin_headers(self):
method fetch_one_video (line 89) | async def fetch_one_video(self, aweme_id: str):
method fetch_user_post_videos (line 113) | async def fetch_user_post_videos(self, sec_user_id: str, max_cursor: i...
method fetch_user_like_videos (line 133) | async def fetch_user_like_videos(self, sec_user_id: str, max_cursor: i...
method fetch_user_collection_videos (line 152) | async def fetch_user_collection_videos(self, cookie: str, cursor: int ...
method fetch_user_mix_videos (line 165) | async def fetch_user_mix_videos(self, mix_id: str, cursor: int = 0, co...
method fetch_user_live_videos (line 177) | async def fetch_user_live_videos(self, webcast_id: str, room_id_str=""):
method fetch_user_live_videos_by_room_id (line 189) | async def fetch_user_live_videos_by_room_id(self, room_id: str):
method fetch_live_gift_ranking (line 201) | async def fetch_live_gift_ranking(self, room_id: str, rank_type: int =...
method handler_user_profile (line 213) | async def handler_user_profile(self, sec_user_id: str):
method fetch_video_comments (line 225) | async def fetch_video_comments(self, aweme_id: str, cursor: int = 0, c...
method fetch_video_comments_reply (line 237) | async def fetch_video_comments_reply(self, item_id: str, comment_id: s...
method fetch_hot_search_result (line 249) | async def fetch_hot_search_result(self):
method gen_real_msToken (line 263) | async def gen_real_msToken(self, ):
method gen_ttwid (line 270) | async def gen_ttwid(self, ):
method gen_verify_fp (line 277) | async def gen_verify_fp(self, ):
method gen_s_v_web_id (line 284) | async def gen_s_v_web_id(self, ):
method get_x_bogus (line 291) | async def get_x_bogus(self, url: str, user_agent: str):
method get_a_bogus (line 301) | async def get_a_bogus(self, url: str, user_agent: str):
method get_sec_user_id (line 316) | async def get_sec_user_id(self, url: str):
method get_all_sec_user_id (line 320) | async def get_all_sec_user_id(self, urls: list):
method get_aweme_id (line 328) | async def get_aweme_id(self, url: str):
method get_all_aweme_id (line 332) | async def get_all_aweme_id(self, urls: list):
method get_webcast_id (line 340) | async def get_webcast_id(self, url: str):
method get_all_webcast_id (line 344) | async def get_all_webcast_id(self, urls: list):
method update_cookie (line 351) | async def update_cookie(self, cookie: str):
method main (line 371) | async def main(self):
FILE: crawlers/douyin/web/xbogus.py
class XBogus (line 41) | class XBogus:
method __init__ (line 42) | def __init__(self, user_agent: str = None) -> None:
method md5_str_to_array (line 61) | def md5_str_to_array(self, md5_str):
method md5_encrypt (line 79) | def md5_encrypt(self, url_path):
method md5 (line 89) | def md5(self, input_data):
method encoding_conversion (line 105) | def encoding_conversion(
method encoding_conversion2 (line 118) | def encoding_conversion2(self, a, b, c):
method rc4_encrypt (line 125) | def rc4_encrypt(self, key, data):
method calculation (line 152) | def calculation(self, a1, a2, a3):
method getXBogus (line 167) | def getXBogus(self, url_path):
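Both `XBogus` and `ABogus` include an `rc4_encrypt` step in their signing pipelines. Textbook RC4 (key scheduling followed by the keystream-generation loop) is sketched below; whether the repository's variant follows the textbook algorithm exactly is not visible from the index:

```python
# Textbook RC4: key-scheduling algorithm (KSA), then keystream
# generation (PRGA) XORed against the data. Encryption and decryption
# are the same operation.
def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                       # KSA: shuffle S using the key
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # PRGA: XOR data with keystream
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)


print(rc4(b"Key", b"Plaintext").hex())  # bbf316e8d940af0ad3
```

RC4 is used here for obfuscation, not security; its symmetry (applying it twice with the same key returns the original bytes) is what makes the signatures reproducible client-side.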
FILE: crawlers/hybrid/hybrid_crawler.py
class HybridCrawler (line 44) | class HybridCrawler:
method __init__ (line 45) | def __init__(self):
method get_bilibili_bv_id (line 51) | async def get_bilibili_bv_id(self, url: str) -> str:
method hybrid_parsing_single_video (line 69) | async def hybrid_parsing_single_video(self, url: str, minimal: bool = ...
method main (line 302) | async def main(self):
FILE: crawlers/tiktok/app/app_crawler.py
class TikTokAPPCrawler (line 65) | class TikTokAPPCrawler:
method get_tiktok_headers (line 68) | async def get_tiktok_headers(self):
method fetch_one_video (line 87) | async def fetch_one_video(self, aweme_id: str):
method main (line 104) | async def main(self):
FILE: crawlers/tiktok/app/endpoints.py
class TikTokAPIEndpoints (line 1) | class TikTokAPIEndpoints:
FILE: crawlers/tiktok/app/models.py
class BaseRequestModel (line 8) | class BaseRequestModel(BaseModel):
class FeedVideoDetail (line 23) | class FeedVideoDetail(BaseRequestModel):
FILE: crawlers/tiktok/web/endpoints.py
class TikTokAPIEndpoints (line 1) | class TikTokAPIEndpoints:
FILE: crawlers/tiktok/web/models.py
class BaseRequestModel (line 10) | class BaseRequestModel(BaseModel):
class UserProfile (line 48) | class UserProfile(BaseRequestModel):
class UserPost (line 53) | class UserPost(BaseModel):
class UserLike (line 96) | class UserLike(BaseRequestModel):
class UserCollect (line 103) | class UserCollect(BaseRequestModel):
class UserPlayList (line 111) | class UserPlayList(BaseRequestModel):
class UserMix (line 117) | class UserMix(BaseRequestModel):
class PostDetail (line 123) | class PostDetail(BaseRequestModel):
class PostComment (line 127) | class PostComment(BaseRequestModel):
class PostCommentReply (line 135) | class PostCommentReply(BaseRequestModel):
class UserFans (line 144) | class UserFans(BaseRequestModel):
class UserFollow (line 153) | class UserFollow(BaseRequestModel):
FILE: crawlers/tiktok/web/utils.py
class TokenManager (line 36) | class TokenManager:
method gen_real_msToken (line 48) | def gen_real_msToken(cls) -> str:
method gen_false_msToken (line 111) | def gen_false_msToken(cls) -> str:
method gen_ttwid (line 116) | def gen_ttwid(cls, cookie: str) -> str:
method gen_odin_tt (line 164) | def gen_odin_tt(cls):
class BogusManager (line 203) | class BogusManager:
method xb_str_2_endpoint (line 205) | def xb_str_2_endpoint(
method model_2_endpoint (line 218) | def model_2_endpoint(
class SecUserIdFetcher (line 243) | class SecUserIdFetcher:
method get_secuid (line 252) | async def get_secuid(cls, url: str) -> str:
method get_all_secuid (line 315) | async def get_all_secuid(cls, urls: list) -> list:
method get_uniqueid (line 343) | async def get_uniqueid(cls, url: str) -> str:
method get_all_uniqueid (line 402) | async def get_all_uniqueid(cls, urls: list) -> list:
class AwemeIdFetcher (line 430) | class AwemeIdFetcher:
method get_aweme_id (line 441) | async def get_aweme_id(cls, url: str) -> str:
method get_all_aweme_id (line 513) | async def get_all_aweme_id(cls, urls: list) -> list:
function format_file_name (line 541) | def format_file_name(
function create_user_folder (line 595) | def create_user_folder(kwargs: dict, nickname: Union[str, int]) -> Path:
function rename_user_folder (line 636) | def rename_user_folder(old_path: Path, new_nickname: str) -> Path:
function create_or_rename_user_folder (line 656) | def create_or_rename_user_folder(
FILE: crawlers/tiktok/web/web_crawler.py
class TikTokWebCrawler (line 78) | class TikTokWebCrawler:
method __init__ (line 80) | def __init__(self):
method get_tiktok_headers (line 84) | async def get_tiktok_headers(self):
method fetch_one_video (line 100) | async def fetch_one_video(self, itemId: str):
method fetch_user_profile (line 116) | async def fetch_user_profile(self, secUid: str, uniqueId: str):
method fetch_user_post (line 132) | async def fetch_user_post(self, secUid: str, cursor: int = 0, count: i...
method fetch_user_like (line 149) | async def fetch_user_like(self, secUid: str, cursor: int = 0, count: i...
method fetch_user_collect (line 165) | async def fetch_user_collect(self, cookie: str, secUid: str, cursor: i...
method fetch_user_play_list (line 183) | async def fetch_user_play_list(self, secUid: str, cursor: int = 0, cou...
method fetch_user_mix (line 199) | async def fetch_user_mix(self, mixId: str, cursor: int = 0, count: int...
method fetch_post_comment (line 215) | async def fetch_post_comment(self, aweme_id: str, cursor: int = 0, cou...
method fetch_post_comment_reply (line 232) | async def fetch_post_comment_reply(self, item_id: str, comment_id: str...
method fetch_user_fans (line 250) | async def fetch_user_fans(self, secUid: str, count: int = 30, maxCurso...
method fetch_user_follow (line 266) | async def fetch_user_follow(self, secUid: str, count: int = 30, maxCur...
method fetch_real_msToken (line 284) | async def fetch_real_msToken(self):
method gen_ttwid (line 291) | async def gen_ttwid(self, cookie: str):
method gen_xbogus (line 298) | async def gen_xbogus(self, url: str, user_agent: str):
method get_sec_user_id (line 308) | async def get_sec_user_id(self, url: str):
method get_all_sec_user_id (line 312) | async def get_all_sec_user_id(self, urls: list):
method get_aweme_id (line 320) | async def get_aweme_id(self, url: str):
method get_all_aweme_id (line 324) | async def get_all_aweme_id(self, urls: list):
method get_unique_id (line 332) | async def get_unique_id(self, url: str):
method get_all_unique_id (line 336) | async def get_all_unique_id(self, urls: list):
method main (line 345) | async def main(self):
FILE: crawlers/utils/api_exceptions.py
class APIError (line 36) | class APIError(Exception):
method __init__ (line 39) | def __init__(self, status_code=None):
method display_error (line 45) | def display_error(self):
class APIConnectionError (line 52) | class APIConnectionError(APIError):
method display_error (line 55) | def display_error(self):
class APIUnavailableError (line 59) | class APIUnavailableError(APIError):
method display_error (line 62) | def display_error(self):
class APINotFoundError (line 66) | class APINotFoundError(APIError):
method display_error (line 69) | def display_error(self):
class APIResponseError (line 73) | class APIResponseError(APIError):
method display_error (line 76) | def display_error(self):
class APIRateLimitError (line 80) | class APIRateLimitError(APIError):
method display_error (line 83) | def display_error(self):
class APITimeoutError (line 87) | class APITimeoutError(APIError):
method display_error (line 90) | def display_error(self):
class APIUnauthorizedError (line 94) | class APIUnauthorizedError(APIError):
method display_error (line 97) | def display_error(self):
class APIRetryExhaustedError (line 101) | class APIRetryExhaustedError(APIError):
method display_error (line 104) | def display_error(self):
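The exception hierarchy indexed above can be sketched as follows. This is a hedged reconstruction from the listed class and method names only; the repository's actual error messages and any extra subclasses may differ.

```python
# Sketch of the crawlers/utils/api_exceptions.py hierarchy, reconstructed
# from the symbol index: a base APIError carrying an optional status code,
# with subclasses overriding display_error() for specific failure modes.
class APIError(Exception):
    """Base API exception carrying an optional HTTP status code."""

    def __init__(self, status_code=None):
        super().__init__()
        self.status_code = status_code

    def display_error(self):
        return f"API error (status code: {self.status_code})"


class APITimeoutError(APIError):
    def display_error(self):
        return f"API request timed out (status code: {self.status_code})"


class APIRateLimitError(APIError):
    def display_error(self):
        return f"API rate limit reached (status code: {self.status_code})"
```

Callers can catch the base class and log `display_error()` without caring which concrete subclass was raised.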
FILE: crawlers/utils/deprecated.py
function deprecated (line 5) | def deprecated(message):
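The `deprecated(message)` helper is a parameterized decorator; the preview of `crawlers/utils/deprecated.py` later in this extraction shows its opening lines (`def decorator(func):` with `@functools.wraps(func)`). A sketch consistent with that snippet:

```python
import warnings
import functools


# Sketch of crawlers/utils/deprecated.py, following the preview snippet:
# deprecated(message) returns a decorator that emits a DeprecationWarning
# with the given message each time the wrapped function is called.
def deprecated(message):
    def decorator(func):
        @functools.wraps(func)  # preserve __name__, __doc__, etc.
        def wrapper(*args, **kwargs):
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@deprecated("use the newer fetch API instead")
def legacy_fetch(url):
    return f"fetched {url}"
```

`stacklevel=2` makes the warning point at the caller's line rather than at the wrapper itself.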
FILE: crawlers/utils/logger.py
class Singleton (line 46) | class Singleton(type):
method __init__ (line 50) | def __init__(self, *args, **kwargs):
method __call__ (line 53) | def __call__(cls, *args, **kwargs):
method reset_instance (line 66) | def reset_instance(cls, *args, **kwargs):
class LogManager (line 77) | class LogManager(metaclass=Singleton):
method __init__ (line 78) | def __init__(self):
method setup_logging (line 87) | def setup_logging(self, level=logging.INFO, log_to_console=False, log_...
method ensure_log_dir_exists (line 118) | def ensure_log_dir_exists(log_path: Path):
method clean_logs (line 121) | def clean_logs(self, keep_last_n=10):
method shutdown (line 139) | def shutdown(self):
function log_setup (line 147) | def log_setup(log_to_console=True):
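The `Singleton` metaclass indexed above gives `LogManager` process-wide single-instance semantics, with `reset_instance` to discard a cached instance. A minimal sketch of that pattern, assuming a per-argument instance cache; the repository's `LogManager` internals are simplified here:

```python
import logging


# Sketch of the Singleton metaclass pattern from crawlers/utils/logger.py:
# each class using this metaclass caches one instance per constructor
# argument tuple; reset_instance() evicts a cached instance.
class Singleton(type):
    def __init__(cls, *args, **kwargs):
        super().__init__(*args, **kwargs)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cls._instances:
            cls._instances[key] = super().__call__(*args, **kwargs)
        return cls._instances[key]

    def reset_instance(cls, *args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        cls._instances.pop(key, None)


class LogManager(metaclass=Singleton):
    def __init__(self):
        self.logger = logging.getLogger("crawler")
```

Because `reset_instance` lives on the metaclass, it is called on the class itself (`LogManager.reset_instance()`), which is how the index shows it at class scope.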
FILE: crawlers/utils/utils.py
function model_to_query_string (line 61) | def model_to_query_string(model: BaseModel) -> str:
function gen_random_str (line 68) | def gen_random_str(randomlength: int) -> str:
function get_timestamp (line 83) | def get_timestamp(unit: str = "milli"):
function timestamp_2_str (line 106) | def timestamp_2_str(
function num_to_base36 (line 132) | def num_to_base36(num: int) -> str:
function split_set_cookie (line 148) | def split_set_cookie(cookie_str: str) -> str:
function split_dict_cookie (line 171) | def split_dict_cookie(cookie_dict: dict) -> str:
function extract_valid_urls (line 175) | def extract_valid_urls(inputs: Union[str, List[str]]) -> Union[str, List...
function _get_first_item_from_list (line 203) | def _get_first_item_from_list(_list) -> list:
function get_resource_path (line 217) | def get_resource_path(filepath: str):
function replaceT (line 227) | def replaceT(obj: Union[str, Any]) -> Union[str, Any]:
function split_filename (line 250) | def split_filename(text: str, os_limit: dict) -> str:
function ensure_path (line 284) | def ensure_path(path: Union[str, Path]) -> Path:
function get_cookie_from_browser (line 289) | def get_cookie_from_browser(browser_choice: str, domain: str = "") -> dict:
function check_invalid_naming (line 321) | def check_invalid_naming(
function merge_config (line 363) | def merge_config(
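Two of the utility signatures above are simple enough to sketch from name and type hints alone. These are hedged reconstructions, not the repository's exact code — in particular the base-36 digit order and the cookie separator are assumptions:

```python
import string


# Sketch of num_to_base36 from crawlers/utils/utils.py: repeated divmod
# by 36, emitting digits 0-9 then a-z, least significant digit first.
def num_to_base36(num: int) -> str:
    digits = string.digits + string.ascii_lowercase
    if num == 0:
        return "0"
    out = []
    while num > 0:
        num, rem = divmod(num, 36)
        out.append(digits[rem])
    return "".join(reversed(out))


# Sketch of split_dict_cookie: join a cookie dict into a single
# "key=value; key=value" header string.
def split_dict_cookie(cookie_dict: dict) -> str:
    return "; ".join(f"{k}={v}" for k, v in cookie_dict.items())
```

Base-36 strings like these commonly appear in client-side token generation, which fits this module's role alongside `gen_random_str` and the timestamp helpers.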
Condensed preview — 75 files, each entry showing the file path, character count, and a content snippet. Download the .json file or copy the output for the full structured content (502K chars).
[
{
"path": ".github/ISSUE_TEMPLATE/bug_report.md",
"chars": 689,
"preview": "---\nname: Bug report\nabout: Please describe your problem in as much detail as possible so that it can be\n solved faster"
},
{
"path": ".github/ISSUE_TEMPLATE/bug_report_CN.md",
"chars": 287,
"preview": "---\nname: Bug反馈\nabout: 请尽量详细的描述你的问题以便更快的解决它\ntitle: \"[BUG] 简短明了的描述问题\"\nlabels: BUG\nassignees: Evil0ctal\n\n---\n\n***发生错误的平台?*"
},
{
"path": ".github/ISSUE_TEMPLATE/feature_request.md",
"chars": 671,
"preview": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: \"[Feature request] Brief and clear description "
},
{
"path": ".github/ISSUE_TEMPLATE/feature_request_CN.md",
"chars": 301,
"preview": "---\nname: 新功能需求\nabout: 为本项目提出一个新需求或想法\ntitle: \"[Feature request] 简短明了的描述问题\"\nlabels: enhancement\nassignees: Evil0ctal\n\n---"
},
{
"path": ".github/workflows/codeql-analysis.yml",
"chars": 2322,
"preview": "# For most projects, this workflow file will not need changing; you simply need\n# to commit it to your repository.\n#\n# Y"
},
{
"path": ".github/workflows/docker-image.yml",
"chars": 2436,
"preview": "# docker-image.yml\nname: Publish Docker image # workflow名称,可以在Github项目主页的【Actions】中看到所有的workflow\n\non: # 配置触发workflow"
},
{
"path": ".github/workflows/readme.yml",
"chars": 466,
"preview": "name: Translate README\n\non:\n push:\n branches:\n - main\n - Dev\njobs:\n build:\n runs-on: ubuntu-latest\n "
},
{
"path": ".gitignore",
"chars": 1856,
"preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
},
{
"path": "Dockerfile",
"chars": 486,
"preview": "# 使用官方 Python 3.11 的轻量版镜像\nFROM python:3.11-slim\n\nLABEL maintainer=\"Evil0ctal\"\n\n# 设置非交互模式,避免 Docker 构建时的交互问题\nENV DEBIAN_F"
},
{
"path": "LICENSE",
"chars": 11357,
"preview": " Apache License\n Version 2.0, January 2004\n "
},
{
"path": "Procfile",
"chars": 22,
"preview": "web: python3 start.py\n"
},
{
"path": "README.en.md",
"chars": 26604,
"preview": "<div align=\"center\">\n<a href=\"https://douyin.wtf/\" alt=\"logo\" ><img src=\"https://raw.githubusercontent.com/Evil0ctal/Dou"
},
{
"path": "README.md",
"chars": 17777,
"preview": "<div align=\"center\">\n<a href=\"https://douyin.wtf/\" alt=\"logo\" ><img src=\"https://raw.githubusercontent.com/Evil0ctal/Dou"
},
{
"path": "Screenshots/benchmarks/info",
"chars": 28,
"preview": "API benchmarks screenshots \n"
},
{
"path": "Screenshots/v3_screenshots/info",
"chars": 17,
"preview": "V3.0 Screenshots\n"
},
{
"path": "app/api/endpoints/bilibili_web.py",
"chars": 22208,
"preview": "from fastapi import APIRouter, Body, Query, Request, HTTPException # 导入FastAPI组件\r\nfrom app.api.models.APIResponseModel "
},
{
"path": "app/api/endpoints/douyin_web.py",
"chars": 40123,
"preview": "from typing import List\n\nfrom fastapi import APIRouter, Body, Query, Request, HTTPException # 导入FastAPI组件\nfrom app.api."
},
{
"path": "app/api/endpoints/download.py",
"chars": 12389,
"preview": "import os\nimport zipfile\nimport subprocess\nimport tempfile\n\nimport aiofiles\nimport httpx\nimport yaml\nfrom fastapi import"
},
{
"path": "app/api/endpoints/hybrid_parsing.py",
"chars": 4457,
"preview": "import asyncio\n\nfrom fastapi import APIRouter, Body, Query, Request, HTTPException # 导入FastAPI组件\n\nfrom app.api.models.A"
},
{
"path": "app/api/endpoints/ios_shortcut.py",
"chars": 1004,
"preview": "import os\nimport yaml\nfrom fastapi import APIRouter\nfrom app.api.models.APIResponseModel import iOS_Shortcut\n\n\n# 读取上级再上级"
},
{
"path": "app/api/endpoints/tiktok_app.py",
"chars": 1474,
"preview": "from fastapi import APIRouter, Query, Request, HTTPException # 导入FastAPI组件\nfrom app.api.models.APIResponseModel import "
},
{
"path": "app/api/endpoints/tiktok_web.py",
"chars": 32436,
"preview": "from typing import List\n\nfrom fastapi import APIRouter, Query, Body, Request, HTTPException # 导入FastAPI组件\n\nfrom app.api"
},
{
"path": "app/api/models/APIResponseModel.py",
"chars": 966,
"preview": "from fastapi import Body, FastAPI, Query, Request, HTTPException\nfrom pydantic import BaseModel\nfrom typing import Any, "
},
{
"path": "app/api/router.py",
"chars": 895,
"preview": "from fastapi import APIRouter\nfrom app.api.endpoints import (\n tiktok_web,\n tiktok_app,\n douyin_web,\n bilibi"
},
{
"path": "app/main.py",
"chars": 4497,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "app/web/app.py",
"chars": 4019,
"preview": "# PyWebIO组件/PyWebIO components\nimport os\n\nimport yaml\nfrom pywebio import session, config as pywebio_config\nfrom pywebio"
},
{
"path": "app/web/views/About.py",
"chars": 1015,
"preview": "from pywebio.output import popup, put_markdown, put_html, put_text, put_link, put_image\nfrom app.web.views.ViewsUtils im"
},
{
"path": "app/web/views/Document.py",
"chars": 4336,
"preview": "from pywebio.output import popup, put_markdown, put_html, put_text, put_link\nfrom app.web.views.ViewsUtils import ViewsU"
},
{
"path": "app/web/views/Downloader.py",
"chars": 960,
"preview": "from pywebio.output import popup, put_markdown, put_html, put_text, put_link\nfrom app.web.views.ViewsUtils import ViewsU"
},
{
"path": "app/web/views/EasterEgg.py",
"chars": 2313,
"preview": "import numpy as np\nimport time\n\nimport pyfiglet\nfrom pywebio import start_server\nfrom pywebio.output import put_text, cl"
},
{
"path": "app/web/views/ParseVideo.py",
"chars": 11395,
"preview": "import asyncio\nimport os\nimport time\n\nimport yaml\nfrom pywebio.input import *\nfrom pywebio.output import *\nfrom pywebio_"
},
{
"path": "app/web/views/Shortcuts.py",
"chars": 3189,
"preview": "import os\nimport yaml\nfrom pywebio.output import popup, put_markdown, put_html, put_text, put_link\nfrom app.web.views.Vi"
},
{
"path": "app/web/views/ViewsUtils.py",
"chars": 726,
"preview": "import re\n\nfrom pywebio.output import get_scope, clear\nfrom pywebio.session import info as session_info\n\n\nclass ViewsUti"
},
{
"path": "bash/install.sh",
"chars": 2536,
"preview": "#!/bin/bash\n\n# Set script to exit on any errors.\nset -e\n\necho 'Updating package lists... | 正在更新软件包列表...'\nsudo apt-get up"
},
{
"path": "bash/update.sh",
"chars": 943,
"preview": "#!/bin/bash\n\n# Ask for confirmation to proceed with the update\nread -r -p \"Do you want to update Douyin_TikTok_Download_"
},
{
"path": "chrome-cookie-sniffer/README.md",
"chars": 2694,
"preview": "# Chrome Cookie Sniffer\n\n一个用于自动嗅探和提取网站Cookie的Chrome扩展程序。支持抖音等主流平台,具备智能去重、时间控制和Webhook回调等功能。\n\n## 功能特性\n\n- 🎯 **智能Cookie抓取**"
},
{
"path": "chrome-cookie-sniffer/background.js",
"chars": 5034,
"preview": "// 启动时记录\nconsole.log('Cookie Sniffer service worker 已启动');\n\n// 服务配置\nconst SERVICES = {\n douyin: {\n name: 'douyin',\n "
},
{
"path": "chrome-cookie-sniffer/manifest.json",
"chars": 489,
"preview": "{\n \"manifest_version\": 3,\n \"name\": \"Cookie Sniffer\",\n \"version\": \"1.0\",\n \"description\": \"监听并获取指定网站的请求 Cookie"
},
{
"path": "chrome-cookie-sniffer/popup.html",
"chars": 4842,
"preview": "<!DOCTYPE html>\n<html>\n<head>\n <meta charset=\"utf-8\">\n <style>\n body {\n width: 400px;\n "
},
{
"path": "chrome-cookie-sniffer/popup.js",
"chars": 10603,
"preview": "document.addEventListener('DOMContentLoaded', function() {\n const refreshBtn = document.getElementById('refresh');\n "
},
{
"path": "config.yaml",
"chars": 2042,
"preview": "# Web\nWeb:\n # APP Switch\n PyWebIO_Enable: true # Enable APP | 启用APP\n\n # APP Information\n Domain: https://douyin.w"
},
{
"path": "crawlers/base_crawler.py",
"chars": 12043,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "crawlers/bilibili/web/config.yaml",
"chars": 1010,
"preview": "TokenManager:\r\n bilibili:\r\n headers:\r\n 'accept-language': zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6\r\n "
},
{
"path": "crawlers/bilibili/web/endpoints.py",
"chars": 1789,
"preview": "class BilibiliAPIEndpoints:\r\n\r\n \"-------------------------------------------------------域名-domain--------------------"
},
{
"path": "crawlers/bilibili/web/models.py",
"chars": 687,
"preview": "import time\nfrom pydantic import BaseModel\n\n\nclass BaseRequestsModel(BaseModel):\n wts: str = str(round(time.time()))\n"
},
{
"path": "crawlers/bilibili/web/utils.py",
"chars": 3507,
"preview": "from urllib.parse import urlencode\r\nfrom crawlers.bilibili.web import wrid\r\nfrom crawlers.utils.logger import logger\r\nfr"
},
{
"path": "crawlers/bilibili/web/web_crawler.py",
"chars": 16293,
"preview": "# ==============================================================================\r\n# Copyright (C) 2021 Evil0ctal\r\n#\r\n# T"
},
{
"path": "crawlers/bilibili/web/wrid.py",
"chars": 6517,
"preview": "import urllib.parse\r\n\r\ndef srotl(t, e):\r\n return (t << e) | (t >> (32 - e))\r\n\r\ndef tendian(t):\r\n if isinstance(t, "
},
{
"path": "crawlers/douyin/web/abogus.py",
"chars": 17784,
"preview": "\"\"\"\nOriginal Author:\nThis file is from https://github.com/JoeanAmier/TikTokDownloader\nAnd is licensed under the GNU Gene"
},
{
"path": "crawlers/douyin/web/config.yaml",
"chars": 8690,
"preview": "TokenManager:\n douyin:\n headers:\n Accept-Language: zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2\n"
},
{
"path": "crawlers/douyin/web/endpoints.py",
"chars": 4617,
"preview": "class DouyinAPIEndpoints:\n \"\"\"\n API Endpoints for Douyin\n \"\"\"\n\n # 抖音域名 (Douyin Domain)\n DOUYIN_DOMAIN = \""
},
{
"path": "crawlers/douyin/web/models.py",
"chars": 6667,
"preview": "from typing import Any, List\nfrom pydantic import BaseModel, Field\n\nfrom crawlers.douyin.web.utils import TokenManager, "
},
{
"path": "crawlers/douyin/web/utils.py",
"chars": 27042,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "crawlers/douyin/web/web_crawler.py",
"chars": 23135,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "crawlers/douyin/web/xbogus.py",
"chars": 9186,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "crawlers/hybrid/hybrid_crawler.py",
"chars": 13587,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "crawlers/tiktok/app/app_crawler.py",
"chars": 4135,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "crawlers/tiktok/app/config.yaml",
"chars": 275,
"preview": "TokenManager:\n tiktok:\n headers:\n User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHT"
},
{
"path": "crawlers/tiktok/app/endpoints.py",
"chars": 248,
"preview": "class TikTokAPIEndpoints:\n \"\"\"\n API Endpoints for TikTok APP\n \"\"\"\n\n # Tiktok域名 (Tiktok Domain)\n TIKTOK_DO"
},
{
"path": "crawlers/tiktok/app/models.py",
"chars": 626,
"preview": "import time\nfrom typing import List\n\nfrom pydantic import BaseModel\n\n\n# API基础请求模型/Base Request Model\nclass BaseRequestMo"
},
{
"path": "crawlers/tiktok/web/config.yaml",
"chars": 5843,
"preview": "TokenManager:\n tiktok:\n headers:\n User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHT"
},
{
"path": "crawlers/tiktok/web/endpoints.py",
"chars": 1400,
"preview": "class TikTokAPIEndpoints:\n \"\"\"\n API Endpoints for TikTok\n \"\"\"\n\n # 抖音域名 (Tiktok Domain)\n TIKTOK_DOMAIN = \""
},
{
"path": "crawlers/tiktok/web/models.py",
"chars": 4092,
"preview": "from typing import Any\nfrom pydantic import BaseModel\nfrom urllib.parse import quote, unquote\n\nfrom crawlers.tiktok.web."
},
{
"path": "crawlers/tiktok/web/utils.py",
"chars": 23590,
"preview": "import os\nimport re\nimport json\nimport yaml\nimport httpx\nimport asyncio\n\nfrom typing import Union\nfrom pathlib import Pa"
},
{
"path": "crawlers/tiktok/web/web_crawler.py",
"chars": 19293,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "crawlers/utils/api_exceptions.py",
"chars": 2834,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "crawlers/utils/deprecated.py",
"chars": 419,
"preview": "import warnings\nimport functools\n\n\ndef deprecated(message):\n def decorator(func):\n @functools.wraps(func)\n "
},
{
"path": "crawlers/utils/logger.py",
"chars": 5302,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "crawlers/utils/utils.py",
"chars": 11616,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "daemon/Douyin_TikTok_Download_API.service",
"chars": 304,
"preview": "[Unit]\nDescription=Douyin_TikTok_Download_API daemon\nAfter=network.target\n\n[Service]\nType=simple\nUser=root\nGroup=root\nWo"
},
{
"path": "docker-compose.yml",
"chars": 773,
"preview": "version: \"3.9\" # Docker Compose 文件版本\n\nservices: # 定义服务列表\n douyin_tiktok_download_api: # 服务名称\n image: evil0ctal/do"
},
{
"path": "logo/logo.txt",
"chars": 46,
"preview": "Free logo, Bad design by Evil0ctal\n2022/09/05\n"
},
{
"path": "requirements.txt",
"chars": 634,
"preview": "aiofiles==23.2.1\nannotated-types==0.6.0\nanyio==4.3.0\nbrowser-cookie3==0.19.1\ncertifi==2024.2.2\nclick==8.1.7\ncolorama==0."
},
{
"path": "start.py",
"chars": 1522,
"preview": "# ==============================================================================\n# Copyright (C) 2021 Evil0ctal\n#\n# This"
},
{
"path": "start.sh",
"chars": 85,
"preview": "#!/bin/sh\n\n# Starting the Python application directly using python3\npython3 start.py\n"
}
]
About this extraction
This page contains the full source code of the Evil0ctal/Douyin_TikTok_Download_API GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 75 files (467.3 KB), approximately 135.5k tokens, and a symbol index with 425 extracted functions, classes, methods, constants, and types. The full output can be copied to the clipboard or downloaded as a .txt file.
Extracted by GitExtract, a GitHub repository-to-text converter built by Nikandr Surkov.