Full Code of soimort/you-get for AI

develop 049548f3f3f3 cached
139 files
574.3 KB
161.3k tokens
659 symbols
1 requests
Download .txt
Showing preview only (611K chars total). Download the full file or copy to clipboard to get everything.
Repository: soimort/you-get
Branch: develop
Commit: 049548f3f3f3
Files: 139
Total size: 574.3 KB

Directory structure:
gitextract_fe0wl05y/

├── .github/
│   └── workflows/
│       └── python-package.yml
├── .gitignore
├── CHANGELOG.rst
├── CONTRIBUTING.md
├── MANIFEST.in
├── Makefile
├── README.md
├── README.rst
├── SECURITY.md
├── contrib/
│   └── completion/
│       ├── you-get-completion.bash
│       └── you-get.fish
├── setup.cfg
├── setup.py
├── src/
│   └── you_get/
│       ├── cli_wrapper/
│       │   ├── player/
│       │   │   ├── dragonplayer.py
│       │   │   ├── gnome_mplayer.py
│       │   │   ├── mplayer.py
│       │   │   ├── vlc.py
│       │   │   └── wmp.py
│       │   └── transcoder/
│       │       ├── ffmpeg.py
│       │       ├── libav.py
│       │       └── mencoder.py
│       ├── common.py
│       ├── extractor.py
│       ├── extractors/
│       │   ├── acfun.py
│       │   ├── alive.py
│       │   ├── archive.py
│       │   ├── baidu.py
│       │   ├── bandcamp.py
│       │   ├── baomihua.py
│       │   ├── bigthink.py
│       │   ├── bilibili.py
│       │   ├── bokecc.py
│       │   ├── cbs.py
│       │   ├── ckplayer.py
│       │   ├── cntv.py
│       │   ├── coub.py
│       │   ├── dailymotion.py
│       │   ├── douban.py
│       │   ├── douyin.py
│       │   ├── douyutv.py
│       │   ├── ehow.py
│       │   ├── embed.py
│       │   ├── facebook.py
│       │   ├── fc2video.py
│       │   ├── flickr.py
│       │   ├── freesound.py
│       │   ├── funshion.py
│       │   ├── giphy.py
│       │   ├── google.py
│       │   ├── heavymusic.py
│       │   ├── huomaotv.py
│       │   ├── icourses.py
│       │   ├── ifeng.py
│       │   ├── imgur.py
│       │   ├── infoq.py
│       │   ├── instagram.py
│       │   ├── interest.py
│       │   ├── iqilu.py
│       │   ├── iqiyi.py
│       │   ├── iwara.py
│       │   ├── ixigua.py
│       │   ├── joy.py
│       │   ├── kakao.py
│       │   ├── khan.py
│       │   ├── ku6.py
│       │   ├── kuaishou.py
│       │   ├── kugou.py
│       │   ├── kuwo.py
│       │   ├── le.py
│       │   ├── lizhi.py
│       │   ├── longzhu.py
│       │   ├── lrts.py
│       │   ├── magisto.py
│       │   ├── metacafe.py
│       │   ├── mgtv.py
│       │   ├── miaopai.py
│       │   ├── miomio.py
│       │   ├── missevan.py
│       │   ├── mixcloud.py
│       │   ├── mtv81.py
│       │   ├── nanagogo.py
│       │   ├── naver.py
│       │   ├── netease.py
│       │   ├── nicovideo.py
│       │   ├── pinterest.py
│       │   ├── pixnet.py
│       │   ├── pptv.py
│       │   ├── qie.py
│       │   ├── qie_video.py
│       │   ├── qingting.py
│       │   ├── qq.py
│       │   ├── qq_egame.py
│       │   ├── showroom.py
│       │   ├── sina.py
│       │   ├── sohu.py
│       │   ├── soundcloud.py
│       │   ├── suntv.py
│       │   ├── ted.py
│       │   ├── theplatform.py
│       │   ├── tiktok.py
│       │   ├── toutiao.py
│       │   ├── tucao.py
│       │   ├── tudou.py
│       │   ├── tumblr.py
│       │   ├── twitter.py
│       │   ├── ucas.py
│       │   ├── universal.py
│       │   ├── veoh.py
│       │   ├── vimeo.py
│       │   ├── vk.py
│       │   ├── w56.py
│       │   ├── wanmen.py
│       │   ├── ximalaya.py
│       │   ├── xinpianchang.py
│       │   ├── yixia.py
│       │   ├── yizhibo.py
│       │   ├── youku.py
│       │   ├── youtube.py
│       │   ├── zhanqi.py
│       │   ├── zhibo.py
│       │   └── zhihu.py
│       ├── json_output.py
│       ├── processor/
│       │   ├── ffmpeg.py
│       │   ├── join_flv.py
│       │   ├── join_mp4.py
│       │   ├── join_ts.py
│       │   └── rtmpdump.py
│       ├── util/
│       │   ├── fs.py
│       │   ├── git.py
│       │   ├── log.py
│       │   ├── os.py
│       │   ├── strings.py
│       │   └── term.py
│       └── version.py
├── tests/
│   ├── test.py
│   ├── test_common.py
│   └── test_util.py
├── you-get
└── you-get.plugin.zsh

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/workflows/python-package.yml
================================================
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions

name: develop

on:
  push:
    branches: [ develop ]
  pull_request:
    branches: [ develop ]

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: [3.8, 3.9, '3.10', '3.11', '3.12', '3.13', pypy-3.8, pypy-3.9, pypy-3.10]

    steps:
    - uses: actions/checkout@v4
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v5
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip setuptools
        pip install flake8
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
    - name: Lint with flake8
      run: |
        # stop the build if there are Python syntax errors or undefined names
        flake8 . --count --select=E9,F63,F7,F82 --ignore=F824 --show-source --statistics
        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
    - name: Test with unittest
      run: |
        make test


================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Misc
_*
*_
*.3gp
*.asf
*.download
*.f4v
*.flv
*.gif
*.html
*.jpg
*.lrc
*.mkv
*.mp3
*.mp4
*.mpg
*.png
*.srt
*.ts
*.webm
*.xml
*.json
/.env
/.idea
*.m4a
*.DS_Store
*.txt
*.sw[a-p]

*.zip

.emacs*
.vscode


================================================
FILE: CHANGELOG.rst
================================================
Changelog
=========

0.3.36
------

*Date: 2015-10-05*

* New command-line option: --json
* New site support:
    - Internet Archive
* Bug fixes:
    - iQIYI
    - SoundCloud

0.3.35
------

*Date: 2015-09-21*

* New site support:
    - 755 http://7gogo.jp/ (via #659 by @soimort)
    - Funshion http://www.fun.tv/ (via #619 by @cnbeining)
    - iQilu http://v.iqilu.com/ (via #636 by @cnbeining)
    - Metacafe http://www.metacafe.com/ (via #620 by @cnbeining)
    - Qianmo http://qianmo.com/ (via #600 by @cnbeining)
    - Weibo Miaopai http://weibo.com/ (via #605 by @cnbeining)
* Bug fixes:
    - 163 (by @lilydjwg)
    - CNTV (by @Red54)
    - Dailymotion (by @jackyzy823 and @ddumitran)
    - iQIYI (by @jackyzy823 and others)
    - QQ (by @soimort)
    - SoundCloud (by @soimort)
    - Tudou (by @CzBiX)
    - Vimeo channel (by @cnbeining)
    - YinYueTai (by @soimort)
    - Youku (by @junzh0u)
    - Embedded Youku/Tudou player (by @zhangn1985)

0.3.34
------

*Date: 2015-07-12*

* Bug fix release

0.3.33
------

*Date: 2015-06-10*

* Many bug fixes by our awesome contributors

0.3.32
------

*Date: 2014-12-10*

* New site support:
    - baomihua.com
    - zhanqi.tv
* Bug fixes:
    - DouyuTV
    - Tudou
    - Tumblr
    - Vine
    - Youku

0.3.31
------

*Date: 2014-11-01*

* New site support:
    - Dongting (by @lilydjwg)
    - DouyuTV (by @0x00-pl)
    - LeTV cloud (by @cnbeining)
* Bug fixes:
    - AcFun
    - Bilibili
    - Niconico
    - iQIYI

0.3.30
------

*Date: 2014-09-21*

* First Alpha release
* Support PyPy3
* Bug fixes:
    - YouTube
    - Youku
    - Tudou
    - Niconico
    - AcFun

0.3.30dev-20140907
------------------

*Date: 2014-09-07*

* Bug fixes:
    - AcFun
    - iQIYI
    - MioMio
    - QQ

0.3.30dev-20140820
------------------

*Date: 2014-08-20*

* Bug fix release

0.3.30dev-20140812
------------------

*Date: 2014-08-12*

* Bug fixes:
    - Youku
* New site support:
    - VideoBam (by @cnbeining)

0.3.30dev-20140806
------------------

*Date: 2014-08-06*

* Bug fixes:
    - Youku
    - Nicovideo
    - Bilibili
    - Letv
* New site support:
    - Tucao.cc
* Use FFmpeg concat demuxer to join video segments (ffmpeg>=1.1)

0.3.30dev-20140730
------------------

*Date: 2014-07-30*

* YouTube: support fixed
* Youku: password-protected video support

0.3.30dev-20140723
------------------

*Date: 2014-07-23*

* YouTube: (experimental) video format selection
* Youku: playlist support
* NetEase Music: high quality download (by @farseer90718)
* PPTV: support fixed (by @jackyzy823)
* Catfun.tv: new site support (by @jackyzy823)
* AcFun.tv: domain name fixed

0.3.30dev-20140716
------------------

*Date: 2014-07-16*

* Bug fix release for:
    - YouTube
    - Youku

* New site support: (by @jackyzy823)
    - MTV 81 http://www.mtv81.com
    - Kugou (酷狗音乐) http://www.kugou.com
    - Kuwo (酷我音乐) http://www.kuwo.cn
    - NetEase Music (网易云音乐) http://music.163.com

0.3.30dev-20140629
------------------

*Date: 2014-06-29*

* Bug fix release for:
    - Youku
    - YouTube
    - TED
    - Bilibili
* (Experimental) Video format selection (for Youku only)

0.3.29
------

*Date: 2014-05-29*

* Bug fix release

0.3.28.3
--------

*Date: 2014-05-18*

* New site support:
    - CBS.com

0.3.28.2
--------

*Date: 2014-04-13*

* Bug fix release

0.3.28.1
--------

*Date: 2014-02-28*

* Bug fix release

0.3.28
------

*Date: 2014-02-21*

* New site support:
    - Magisto.com
    - VK.com

0.3.27
------

*Date: 2014-02-14*

* Bug fix release

0.3.26
------

*Date: 2014-02-08*

* New features:
    - Play video in players (#286)
    - LeTV support (#289)
    - Youku 1080P support
* Bug fixes:
    - YouTube (#282, #292)
    - Sina (#246, #280)
    - Mixcloud
    - NetEase
    - QQ
    - Vine

0.3.25
------

*Date: 2013-12-20*

* Bug fix release

0.3.24
------

*Date: 2013-10-30*

* Experimental: Sogou proxy server
* Fix issues for:
    - Vimeo

0.3.23
------

*Date: 2013-10-23*

* Support YouTube playlists
* Support general short URLs
* Fix issues for:
    - Sina

0.3.22
------

*Date: 2013-10-18*

* Fix issues for:
    - Baidu
    - Bilibili
    - JPopsuki TV
    - Niconico
    - PPTV
    - TED
    - Tumblr
    - YinYueTai
    - YouTube
    - ...

0.3.21
------

*Date: 2013-08-17*

* Fix issues for:
    - YouTube
    - YinYueTai
    - pan.baidu.com

0.3.20
------

*Date: 2013-08-16*

* Add support for:
    - eHow
    - Khan Academy
    - TED
    - 5sing
* Fix issues for:
    - Tudou

0.3.18
------

*Date: 2013-07-19*

* Fix issues for:
    - Dailymotion
    - Youku
    - Sina
    - AcFun
    - bilibili

0.3.17
------

*Date: 2013-07-12*

* Fix issues for:
    - YouTube
    - 163
    - bilibili
* Code cleanup.

0.3.16
------

*Date: 2013-06-28*

* Fix issues for:
    - YouTube
    - Sohu
    - Google+ (enable HTTPS proxy)

0.3.15
------

*Date: 2013-06-21*

* Add support for:
    - Instagram

0.3.14
------

*Date: 2013-06-14*

* Add support for:
    - Alive.in.th
* Remove support of:
    - JPopsuki
* Fix issues for:
    - AcFun
    - iQIYI

0.3.13
------

*Date: 2013-06-07*

* Add support for:
    - Baidu Wangpan (video only)
* Fix issue for:
    - Google+

0.3.12
------

*Date: 2013-05-19*

* Fix issues for:
    - Google+
    - Mixcloud
    - Tudou

0.3.11
------

*Date: 2013-04-26*

* Add support for:
    - Google Drive (Google Docs)

0.3.10
------

*Date: 2013-04-19*

* Add support for:
    - SongTaste
* Support Libav as well as FFmpeg.

0.3.9
-----

*Date: 2013-04-12*

* Add support for:
    - Freesound

0.3.8
-----

*Date: 2013-04-05*

* Add support for:
    - Coursera

0.3.7
-----

*Date: 2013-03-29*

* Add support for:
    - Baidu

0.3.6
-----

*Date: 2013-03-22*

* Add support for:
    - Vine
* Fix issue for:
    - YouTube

0.3.5
-----

*Date: 2013-03-15*

* Default to use FFmpeg for merging .flv files.

0.3.4
-----

*Date: 2013-03-08*

* Add support for:
    - Blip
    - VID48

0.3.3
-----

*Date: 2013-03-01*

* Add support for:
    - Douban
    - MioMio
* Fix issues for:
    - Tudou
    - Vimeo

0.3.2
-----

*Date: 2013-02-22*

* Add support for:
    - JPopsuki
* Fix issue for Xiami.

0.3.1
-----

*Date: 2013-02-15*

* Fix issues for Google+ and Mixcloud.
* API changed.

0.3.0
-----

*Date: 2013-02-08*

* Add support for:
    - Niconico

0.3dev-20130201
---------------

*Date: 2013-02-01*

* Add support for:
    - Mixcloud
    - Facebook
    - Joy.cn

0.3dev-20130125
---------------

*Date: 2013-01-25*

* Dailymotion: downloading best quality available now.
* iQIYI: fix `#77 <https://github.com/soimort/you-get/issues/77>`_.

0.3dev-20130118
---------------

*Date: 2013-01-18*

* YinYueTai: downloading best quality available now.
* Sohu: fix `#69 <https://github.com/soimort/you-get/issues/69>`_.

0.3dev-20130111
---------------

*Date: 2013-01-11*

* Add support for:
    - NetEase (v.163.com)
    - YouTube short URLs
* Vimeo: downloading best quality available now.

0.3dev-20130104
---------------

*Date: 2013-01-04*

* Sohu:
    - fix `#53 <https://github.com/soimort/you-get/issues/53>`_.
    - merge pull request `#54 <https://github.com/soimort/you-get/pull/54>`_; downloading best quality available now.

0.3dev-20121228
---------------

*Date: 2012-12-28*

* Add support for:
    - Xiami
    - Tumblr audios

0.3dev-20121221
---------------

*Date: 2012-12-21*

* YouTube: fix `#45 <https://github.com/soimort/you-get/issues/45>`_.
* Merge pull request `#46 <https://github.com/soimort/you-get/pull/46>`_; fix title parsing issue on Tudou.

0.3dev-20121220
---------------

*Date: 2012-12-20*

* YouTube: quick dirty fix to `#45 <https://github.com/soimort/you-get/issues/45>`_.

0.3dev-20121219
---------------

*Date: 2012-12-19*

* Add support for:
    - Tumblr

0.3dev-20121217
---------------

*Date: 2012-12-17*

* Google+: downloading best quality available now.
* Fix issues `#42 <https://github.com/soimort/you-get/issues/42>`_, `#43 <https://github.com/soimort/you-get/issues/43>`_ for Google+.
* Merge pull request `#40 <https://github.com/soimort/you-get/pull/40>`_; fix some issues for Ku6, Sina and 56.

0.3dev-20121212
---------------

*Date: 2012-12-12*

* YouTube: fix some major issues on parsing video titles.

0.3dev-20121210
---------------

*Date: 2012-12-10*

* YouTube: downloading best quality available now.
* Add support for:
    - SoundCloud

0.2.16
------

*Date: 2012-12-01*

* Add support for:
    - QQ
* Small fixes merged from youku-lixian.

0.2.15
------

*Date: 2012-11-30*

* Fix issue `#30 <https://github.com/soimort/you-get/issues/30>`_ for bilibili.

0.2.14
------

*Date: 2012-11-29*

* Fix issue `#28 <https://github.com/soimort/you-get/issues/28>`_ for Tudou.
* Better support for AcFun.

0.2.13
------

*Date: 2012-10-30*

* Nothing new.

0.2.12
------

*Date: 2012-10-30*

* Fix issue `#20 <https://github.com/soimort/you-get/issues/20>`_ for AcFun.

0.2.11
------

*Date: 2012-10-23*

* Move on to Python 3.3!
* Fix issues:
    - `#17 <https://github.com/soimort/you-get/issues/17>`_
    - `#18 <https://github.com/soimort/you-get/issues/18>`_
    - `#19 <https://github.com/soimort/you-get/issues/19>`_

0.2.10
------

*Date: 2012-10-16*

* Add support for:
    - Google+

0.2.9
-----

*Date: 2012-10-09*

* Fix issue `#16 <https://github.com/soimort/you-get/issues/16>`_.

0.2.8
-----

*Date: 2012-10-02*

* Fix issue `#15 <https://github.com/soimort/you-get/issues/15>`_ for AcFun.

0.2.7
-----

*Date: 2012-09-28*

* Fix issue `#6 <https://github.com/soimort/you-get/issues/6>`_ for YouTube.

0.2.6
-----

*Date: 2012-09-26*

* Fix issue `#5 <https://github.com/soimort/you-get/issues/5>`_ for YinYueTai.

0.2.5
-----

*Date: 2012-09-25*

* Add support for:
    - Dailymotion

0.2.4
-----

*Date: 2012-09-18*

* Use FFmpeg for converting and joining video files.
* Add '--url' and '--debug' options.

0.2.2
-----

*Date: 2012-09-17*

* Add danmaku support for AcFun and bilibili.
* Fix issue `#2 <https://github.com/soimort/you-get/issues/2>`_ and `#4 <https://github.com/soimort/you-get/issues/4>`_ for YouTube.
* Temporarily fix issue for iQIYI (use .ts instead of .f4v).

0.2.1
-----

*Date: 2012-09-02*

* Add support for:
    - ifeng

0.2
---

*Date: 2012-09-02*

* Add support for:
    - Vimeo
    - AcFun
    - bilibili
    - CNTV
    - iQIYI
    - Ku6
    - PPTV
    - Sina
    - Sohu
    - 56

0.1.3
-----

*Date: 2012-09-01*

* Playlist URLs are now automatically handled. ('--playlist' option is no longer needed)
* Handle KeyboardInterrupt silently.
* Fix Unicode character display on code pages.

0.1
---

*Date: 2012-09-01*

* First PyPI release.
* Fix issue `#1 <https://github.com/soimort/you-get/issues/1>`_.

0.0.1
-----

*Date: 2012-08-21*

* Initial release, forked from `iambus/youku-lixian <https://github.com/iambus/youku-lixian>`_; add:
    - YouTube support.
    - Pausing and resuming of downloads.
    - HTTP proxy settings.


================================================
FILE: CONTRIBUTING.md
================================================
# How to Report an Issue

If you would like to report a problem you find when using `you-get`, please open a [Pull Request](https://github.com/soimort/you-get/pulls), which should include:

1. A detailed description of the encountered problem;
2. At least one commit, addressing the problem through some unit test(s).
   * Examples of good commits: [#2675](https://github.com/soimort/you-get/pull/2675/files), [#2680](https://github.com/soimort/you-get/pull/2680/files), [#2685](https://github.com/soimort/you-get/pull/2685/files)

PRs that fail to meet the above criteria may be closed summarily with no further action.

A valid PR will remain open until its addressed problem is fixed.



# 如何汇报问题

为了防止对 GitHub Issues 的滥用,本项目不接受一般的 Issue。

如您在使用 `you-get` 的过程中发现任何问题,请开启一个 [Pull Request](https://github.com/soimort/you-get/pulls)。该 PR 应当包含:

1. 详细的问题描述;
2. 至少一个 commit,其内容是**与问题相关的**单元测试。**不要通过随意修改无关文件的方式来提交 PR!**
   * 有效的 commit 示例:[#2675](https://github.com/soimort/you-get/pull/2675/files), [#2680](https://github.com/soimort/you-get/pull/2680/files), [#2685](https://github.com/soimort/you-get/pull/2685/files)

不符合以上条件的 PR 可能被直接关闭。

有效的 PR 将会被一直保留,直至相应的问题得以修复。


================================================
FILE: MANIFEST.in
================================================
include *.rst
include *.txt
include Makefile
include CONTRIBUTING.md
include README.md
include you-get
include you-get.json
include you-get.plugin.zsh
recursive-include contrib *


================================================
FILE: Makefile
================================================
.PHONY: default i test clean all html rst build install release

default: i

i:
	@(cd src; python -i -c 'import you_get; print("You-Get %s\n>>> import you_get" % you_get.version.__version__)')

test:
	(cd src; python -m unittest discover -s ../tests)

clean:
	zenity --question
	rm -fr build/ dist/ src/*.egg-info/
	find . | grep __pycache__ | xargs rm -fr
	find . | grep .pyc | xargs rm -f

all: build

html:
	pandoc README.md > README.html

rst:
	pandoc -s -t rst README.md > README.rst

build:
	python -m build

install:
	python -m pip install .

release: build
	@echo 'Upload new version to PyPI using:'
	@echo '	twine upload --sign dist/you_get-VERSION*'


================================================
FILE: README.md
================================================
# You-Get

[![Build Status](https://github.com/soimort/you-get/workflows/develop/badge.svg)](https://github.com/soimort/you-get/actions)
[![PyPI version](https://img.shields.io/pypi/v/you-get.svg)](https://pypi.python.org/pypi/you-get/)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/soimort/you-get?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

**NOTICE (30 May 2022): Support for Python 3.5, 3.6 and 3.7 will eventually be dropped. ([see details here](https://github.com/soimort/you-get/wiki/TLS-1.3-post-handshake-authentication-(PHA)))**

**NOTICE (8 Mar 2019): Read [this](https://github.com/soimort/you-get/blob/develop/CONTRIBUTING.md) if you are looking for the conventional "Issues" tab.**

---

[You-Get](https://you-get.org/) is a tiny command-line utility to download media contents (videos, audios, images) from the Web, in case there is no other handy way to do it.

Here's how you use `you-get` to download a video from [YouTube](https://www.youtube.com/watch?v=jNQXAC9IVRw):

```console
$ you-get 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
site:                YouTube
title:               Me at the zoo
stream:
    - itag:          43
      container:     webm
      quality:       medium
      size:          0.5 MiB (564215 bytes)
    # download-with: you-get --itag=43 [URL]

Downloading Me at the zoo.webm ...
 100% (  0.5/  0.5MB) ├██████████████████████████████████┤[1/1]    6 MB/s

Saving Me at the zoo.en.srt ... Done.
```

And here's why you might want to use it:

* You enjoyed something on the Internet, and just want to download them for your own pleasure.
* You watch your favorite videos online from your computer, but you are prohibited from saving them. You feel that you have no control over your own computer. (And it's not how an open Web is supposed to work.)
* You want to get rid of any closed-source technology or proprietary JavaScript code, and disallow things like Flash running on your computer.
* You are an adherent of hacker culture and free software.

What `you-get` can do for you:

* Download videos / audios from popular websites such as YouTube, Youku, Niconico, and a bunch more. (See the [full list of supported sites](#supported-sites))
* Stream an online video in your media player. No web browser, no more ads.
* Download images (of interest) by scraping a web page.
* Download arbitrary non-HTML contents, i.e., binary files.

Interested? [Install it](#installation) now and [get started by examples](#getting-started).

Are you a Python programmer? Then check out [the source](https://github.com/soimort/you-get) and fork it!

![](https://i.imgur.com/GfthFAz.png)

## Installation

### Prerequisites

The following dependencies are recommended:

* **[Python](https://www.python.org/downloads/)**  3.7.4 or above
* **[FFmpeg](https://www.ffmpeg.org/)** 1.0 or above
* (Optional) [RTMPDump](https://rtmpdump.mplayerhq.hu/)

### Option 1: Install via pip

The official release of `you-get` is distributed on [PyPI](https://pypi.python.org/pypi/you-get), and can be installed easily from a PyPI mirror via the [pip](https://en.wikipedia.org/wiki/Pip_\(package_manager\)) package manager: (Note that you must use the Python 3 version of `pip`)

    $ pip install you-get

### Option 2: Install via [Antigen](https://github.com/zsh-users/antigen) (for Zsh users)

Add the following line to your `.zshrc`:

    antigen bundle soimort/you-get

### Option 3: Download from GitHub

You may either download the [stable](https://github.com/soimort/you-get/archive/master.zip) (identical with the latest release on PyPI) or the [develop](https://github.com/soimort/you-get/archive/develop.zip) (more hotfixes, unstable features) branch of `you-get`. Unzip it, and put the directory containing the `you-get` script into your `PATH`.

Alternatively, run

```
$ cd path/to/you-get
$ [sudo] python -m pip install .
```

Or

```
$ cd path/to/you-get
$ python -m pip install . --user
```

to install `you-get` to a permanent path. (And don't omit the dot `.` representing the current directory)

You can also use the [pipenv](https://pipenv.pypa.io/en/latest) to install the `you-get` in the Python virtual environment.

```
$ pipenv install -e .
$ pipenv run you-get --version
you-get: version 0.4.1555, a tiny downloader that scrapes the web.
```

### Option 4: Git clone

This is the recommended way for all developers, even if you don't often code in Python.

```
$ git clone git://github.com/soimort/you-get.git
```

Then put the cloned directory into your `PATH`, or run `python -m pip install path/to/you-get` to install `you-get` to a permanent path.

### Option 5: Homebrew (Mac only)

You can install `you-get` easily via:

```
$ brew install you-get
```

### Option 6: pkg (FreeBSD only)

You can install `you-get` easily via:

```
# pkg install you-get
```

### Option 7: Flox (Mac, Linux, and Windows WSL)

You can install `you-get` easily via:

```
$ flox install you-get
```

### Shell completion

Completion definitions for Bash, Fish and Zsh can be found in [`contrib/completion`](https://github.com/soimort/you-get/tree/develop/contrib/completion). Please consult your shell's manual for how to take advantage of them.

## Upgrading

Based on which option you chose to install `you-get`, you may upgrade it via:

```
$ pip install --upgrade you-get
```

or download the latest release via:

```
$ you-get https://github.com/soimort/you-get/archive/master.zip
```

In order to get the latest ```develop``` branch without messing up the PIP, you can try:

```
$ pip install --upgrade --force-reinstall git+https://github.com/soimort/you-get@develop
```

## Getting Started

### Download a video

When you get a video of interest, you might want to use the `--info`/`-i` option to see all available quality and formats:

```
$ you-get -i 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
site:                YouTube
title:               Me at the zoo
streams:             # Available quality and codecs
    [ DASH ] ____________________________________
    - itag:          242
      container:     webm
      quality:       320x240
      size:          0.6 MiB (618358 bytes)
    # download-with: you-get --itag=242 [URL]

    - itag:          395
      container:     mp4
      quality:       320x240
      size:          0.5 MiB (550743 bytes)
    # download-with: you-get --itag=395 [URL]

    - itag:          133
      container:     mp4
      quality:       320x240
      size:          0.5 MiB (498558 bytes)
    # download-with: you-get --itag=133 [URL]

    - itag:          278
      container:     webm
      quality:       192x144
      size:          0.4 MiB (392857 bytes)
    # download-with: you-get --itag=278 [URL]

    - itag:          160
      container:     mp4
      quality:       192x144
      size:          0.4 MiB (370882 bytes)
    # download-with: you-get --itag=160 [URL]

    - itag:          394
      container:     mp4
      quality:       192x144
      size:          0.4 MiB (367261 bytes)
    # download-with: you-get --itag=394 [URL]

    [ DEFAULT ] _________________________________
    - itag:          43
      container:     webm
      quality:       medium
      size:          0.5 MiB (568748 bytes)
    # download-with: you-get --itag=43 [URL]

    - itag:          18
      container:     mp4
      quality:       small
    # download-with: you-get --itag=18 [URL]

    - itag:          36
      container:     3gp
      quality:       small
    # download-with: you-get --itag=36 [URL]

    - itag:          17
      container:     3gp
      quality:       small
    # download-with: you-get --itag=17 [URL]
```

By default, the one on the top is the one you will get. If that looks cool to you, download it:

```
$ you-get 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
site:                YouTube
title:               Me at the zoo
stream:
    - itag:          242
      container:     webm
      quality:       320x240
      size:          0.6 MiB (618358 bytes)
    # download-with: you-get --itag=242 [URL]

Downloading Me at the zoo.webm ...
 100% (  0.6/  0.6MB) ├██████████████████████████████████████████████████████████████████████████████┤[2/2]    2 MB/s
Merging video parts... Merged into Me at the zoo.webm

Saving Me at the zoo.en.srt ... Done.
```

(If a YouTube video has any closed captions, they will be downloaded together with the video file, in SubRip subtitle format.)

Or, if you prefer another format (mp4), just use whatever the option `you-get` shows to you:

```
$ you-get --itag=18 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
```

**Note:**

* At this point, format selection has not been generally implemented for most of our supported sites; in that case, the default format to download is the one with the highest quality.
* `ffmpeg` is a required dependency, for downloading and joining videos streamed in multiple parts (e.g. on some sites like Youku), and for YouTube videos of 1080p or high resolution.
* If you don't want `you-get` to join video parts after downloading them, use the `--no-merge`/`-n` option.

### Download anything else

If you already have the URL of the exact resource you want, you can download it directly with:

```
$ you-get https://stallman.org/rms.jpg
Site:       stallman.org
Title:      rms
Type:       JPEG Image (image/jpeg)
Size:       0.06 MiB (66482 Bytes)

Downloading rms.jpg ...
 100% (  0.1/  0.1MB) ├████████████████████████████████████████┤[1/1]  127 kB/s
```

Otherwise, `you-get` will scrape the web page and try to figure out if there's anything interesting to you:

```
$ you-get https://kopasas.tumblr.com/post/69361932517
Site:       Tumblr.com
Title:      [tumblr] tumblr_mxhg13jx4n1sftq6do1_640
Type:       Portable Network Graphics (image/png)
Size:       0.11 MiB (118484 Bytes)

Downloading [tumblr] tumblr_mxhg13jx4n1sftq6do1_640.png ...
 100% (  0.1/  0.1MB) ├████████████████████████████████████████┤[1/1]   22 MB/s
```

**Note:**

* This feature is an experimental one and far from perfect. It works best on scraping large-sized images from popular websites like Tumblr and Blogger, but there is really no universal pattern that can apply to any site on the Internet.

### Search on Google Videos and download

You can pass literally anything to `you-get`. If it isn't a valid URL, `you-get` will do a Google search and download the most relevant video for you. (It might not be exactly the thing you wish to see, but still very likely.)

```
$ you-get "Richard Stallman eats"
```

### Pause and resume a download

You may use <kbd>Ctrl</kbd>+<kbd>C</kbd> to interrupt a download.

A temporary `.download` file is kept in the output directory. Next time you run `you-get` with the same arguments, the download progress will resume from the last session. In case the file is completely downloaded (the temporary `.download` extension is gone), `you-get` will just skip the download.

To enforce re-downloading, use the `--force`/`-f` option. (**Warning:** doing so will overwrite any existing file or temporary file with the same name!)

### Set the path and name of downloaded file

Use the `--output-dir`/`-o` option to set the path, and `--output-filename`/`-O` to set the name of the downloaded file:

```
$ you-get -o ~/Videos -O zoo.webm 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
```

**Tips:**

* These options are helpful if you encounter problems with the default video titles, which may contain special characters that do not play well with your current shell / operating system / filesystem.
* These options are also helpful if you write a script to batch download files and put them into designated folders with designated names.

### Proxy settings

You may specify an HTTP proxy for `you-get` to use, via the `--http-proxy`/`-x` option:

```
$ you-get -x 127.0.0.1:8087 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
```

However, the system proxy setting (i.e. the environment variable `http_proxy`) is applied by default. To disable any proxy, use the `--no-proxy` option.

**Tips:**

* If you need to use proxies a lot (in case your network is blocking certain sites), you might want to use `you-get` with [proxychains](https://github.com/rofl0r/proxychains-ng) and set `alias you-get="proxychains -q you-get"` (in Bash).
* For some websites (e.g. Youku), if you need access to some videos that are only available in mainland China, there is an option of using a specific proxy to extract video information from the site: `--extractor-proxy`/`-y`.

### Watch a video

Use the `--player`/`-p` option to feed the video into your media player of choice, e.g. `mpv` or `vlc`, instead of downloading it:

```
$ you-get -p vlc 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
```

Or, if you prefer to watch the video in a browser, just without ads or comment section:

```
$ you-get -p chromium 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
```

**Tips:**

* It is possible to use the `-p` option to start another download manager, e.g., `you-get -p uget-gtk 'https://www.youtube.com/watch?v=jNQXAC9IVRw'`, though they may not play together very well.

### Load cookies

Not all videos are publicly available to anyone. If you need to log in your account to access something (e.g., a private video), it would be unavoidable to feed the browser cookies to `you-get` via the `--cookies`/`-c` option.

**Note:**

* As of now, we are supporting two formats of browser cookies: Mozilla `cookies.sqlite` and Netscape `cookies.txt`.

### Reuse extracted data

Use `--url`/`-u` to get a list of downloadable resource URLs extracted from the page. Use `--json` to get an abstract of extracted data in the JSON format.

**Warning:**

* For the time being, this feature has **NOT** been stabilized and the JSON schema may have breaking changes in the future.

## Supported Sites

| Site | URL | Videos? | Images? | Audios? |
| :--: | :-- | :-----: | :-----: | :-----: |
| **YouTube** | <https://www.youtube.com/>    |✓| | |
| **X (Twitter)** | <https://x.com/>        |✓|✓| |
| VK          | <https://vk.com/>              |✓|✓| |
| Vimeo       | <https://vimeo.com/>          |✓| | |
| Veoh        | <https://www.veoh.com/>        |✓| | |
| **Tumblr**  | <https://www.tumblr.com/>     |✓|✓|✓|
| TED         | <https://www.ted.com/>         |✓| | |
| SoundCloud  | <https://soundcloud.com/>     | | |✓|
| SHOWROOM    | <https://www.showroom-live.com/> |✓| | |
| Pinterest   | <https://www.pinterest.com/>  | |✓| |
| MTV81       | <https://www.mtv81.com/>       |✓| | |
| Mixcloud    | <https://www.mixcloud.com/>   | | |✓|
| Metacafe    | <https://www.metacafe.com/>    |✓| | |
| Magisto     | <https://www.magisto.com/>     |✓| | |
| Khan Academy | <https://www.khanacademy.org/> |✓| | |
| Internet Archive | <https://archive.org/>   |✓| | |
| **Instagram** | <https://instagram.com/>    |✓|✓| |
| InfoQ       | <https://www.infoq.com/presentations/> |✓| | |
| Imgur       | <https://imgur.com/>           | |✓| |
| Heavy Music Archive | <https://www.heavy-music.ru/> | | |✓|
| Freesound   | <https://www.freesound.org/>   | | |✓|
| Flickr      | <https://www.flickr.com/>     |✓|✓| |
| FC2 Video   | <https://video.fc2.com/>       |✓| | |
| Facebook    | <https://www.facebook.com/>   |✓| | |
| eHow        | <https://www.ehow.com/>        |✓| | |
| Dailymotion | <https://www.dailymotion.com/> |✓| | |
| Coub        | <https://coub.com/>            |✓| | |
| CBS         | <https://www.cbs.com/>         |✓| | |
| Bandcamp    | <https://bandcamp.com/>        | | |✓|
| AliveThai   | <https://alive.in.th/>         |✓| | |
| interest.me | <https://ch.interest.me/tvn>   |✓| | |
| **755<br/>ナナゴーゴー** | <https://7gogo.jp/> |✓|✓| |
| **niconico<br/>ニコニコ動画** | <https://www.nicovideo.jp/> |✓| | |
| **163<br/>网易视频<br/>网易云音乐** | <https://v.163.com/><br/><https://music.163.com/> |✓| |✓|
| 56网     | <https://www.56.com/>           |✓| | |
| **AcFun** | <https://www.acfun.cn/>        |✓| | |
| **Baidu<br/>百度贴吧** | <https://tieba.baidu.com/> |✓|✓| |
| 爆米花网 | <https://www.baomihua.com/>     |✓| | |
| **bilibili<br/>哔哩哔哩** | <https://www.bilibili.com/> |✓|✓|✓|
| 豆瓣     | <https://www.douban.com/>       |✓| |✓|
| 斗鱼     | <https://www.douyutv.com/>      |✓| | |
| 凤凰视频 | <https://v.ifeng.com/>          |✓| | |
| 风行网   | <https://www.fun.tv/>           |✓| | |
| iQIYI<br/>爱奇艺 | <https://www.iqiyi.com/> |✓| | |
| 激动网   | <https://www.joy.cn/>           |✓| | |
| 酷6网    | <https://www.ku6.com/>          |✓| | |
| 酷狗音乐 | <https://www.kugou.com/>        | | |✓|
| 酷我音乐 | <https://www.kuwo.cn/>          | | |✓|
| 乐视网   | <https://www.le.com/>           |✓| | |
| 荔枝FM   | <https://www.lizhi.fm/>         | | |✓|
| 懒人听书 | <https://www.lrts.me/>          | | |✓|
| 秒拍     | <https://www.miaopai.com/>      |✓| | |
| MioMio弹幕网 | <https://www.miomio.tv/>    |✓| | |
| MissEvan<br/>猫耳FM | <https://www.missevan.com/> | | |✓|
| 痞客邦   | <https://www.pixnet.net/>      |✓| | |
| PPTV聚力 | <https://www.pptv.com/>         |✓| | |
| 齐鲁网   | <https://v.iqilu.com/>          |✓| | |
| QQ<br/>腾讯视频 | <https://v.qq.com/>      |✓| | |
| 企鹅直播 | <https://live.qq.com/>          |✓| | |
| Sina<br/>新浪视频<br/>微博秒拍视频 | <https://video.sina.com.cn/><br/><https://video.weibo.com/> |✓| | |
| Sohu<br/>搜狐视频 | <https://tv.sohu.com/> |✓| | |
| **Tudou<br/>土豆** | <https://www.tudou.com/> |✓| | |
| 阳光卫视 | <https://www.isuntv.com/>       |✓| | |
| **Youku<br/>优酷** | <https://www.youku.com/> |✓| | |
| 战旗TV   | <https://www.zhanqi.tv/lives>   |✓| | |
| 央视网   | <https://www.cntv.cn/>          |✓| | |
| Naver<br/>네이버 | <https://tvcast.naver.com/>     |✓| | |
| 芒果TV   | <https://www.mgtv.com/>         |✓| | |
| 火猫TV   | <https://www.huomao.com/>       |✓| | |
| 阳光宽频网 | <https://www.365yg.com/>      |✓| | |
| 西瓜视频 | <https://www.ixigua.com/>      |✓| | |
| 新片场 | <https://www.xinpianchang.com/>      |✓| | |
| 快手 | <https://www.kuaishou.com/>      |✓|✓| |
| 抖音 | <https://www.douyin.com/>      |✓| | |
| TikTok | <https://www.tiktok.com/>      |✓| | |
| 中国体育(TV) | <https://v.zhibo.tv/> </br><https://video.zhibo.tv/>    |✓| | |
| 知乎 | <https://www.zhihu.com/>      |✓| | |

For all other sites not on the list, the universal extractor will take care of finding and downloading interesting resources from the page.

### Known bugs

If something is broken and `you-get` can't get you things you want, don't panic. (Yes, this happens all the time!)

Check if it's already a known problem on <https://github.com/soimort/you-get/wiki/Known-Bugs>. If not, follow the guidelines on [how to report an issue](https://github.com/soimort/you-get/blob/develop/CONTRIBUTING.md).

## Getting Involved

You can reach us on the Gitter channel [#soimort/you-get](https://gitter.im/soimort/you-get) (here's how you [set up your IRC client](https://irc.gitter.im) for Gitter). If you have a quick question regarding `you-get`, ask it there.

If you are seeking to report an issue or contribute, please make sure to read [the guidelines](https://github.com/soimort/you-get/blob/develop/CONTRIBUTING.md) first.

## Legal Issues

This software is distributed under the [MIT license](https://raw.github.com/soimort/you-get/master/LICENSE.txt).

In particular, please be aware that

> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Translated to human words:

*In case your use of the software forms the basis of copyright infringement, or you use the software for any other illegal purposes, the authors cannot take any responsibility for you.*

We only ship the code here, and how you are going to use it is left to your own discretion.

## Authors

Made by [@soimort](https://github.com/soimort), who is in turn powered by :coffee:, :beer: and :ramen:.

You can find the [list of all contributors](https://github.com/soimort/you-get/graphs/contributors) here.


================================================
FILE: README.rst
================================================
You-Get
=======

|PyPI version| |Build Status| |Gitter|

`You-Get <https://you-get.org/>`__ is a tiny command-line utility to
download media contents (videos, audios, images) from the Web, in case
there is no other handy way to do it.

Here's how you use ``you-get`` to download a video from `this web
page <http://www.fsf.org/blogs/rms/20140407-geneva-tedx-talk-free-software-free-society>`__:

.. code:: console

    $ you-get http://www.fsf.org/blogs/rms/20140407-geneva-tedx-talk-free-software-free-society
    Site:       fsf.org
    Title:      TEDxGE2014_Stallman05_LQ
    Type:       WebM video (video/webm)
    Size:       27.12 MiB (28435804 Bytes)

    Downloading TEDxGE2014_Stallman05_LQ.webm ...
    100.0% ( 27.1/27.1 MB) ├████████████████████████████████████████┤[1/1]   12 MB/s

And here's why you might want to use it:

-  You enjoyed something on the Internet, and just want to download them
   for your own pleasure.
-  You watch your favorite videos online from your computer, but you are
   prohibited from saving them. You feel that you have no control over
   your own computer. (And it's not how an open Web is supposed to
   work.)
-  You want to get rid of any closed-source technology or proprietary
   JavaScript code, and disallow things like Flash running on your
   computer.
-  You are an adherent of hacker culture and free software.

What ``you-get`` can do for you:

-  Download videos / audios from popular websites such as YouTube,
   Youku, Niconico, and a bunch more. (See the `full list of supported
   sites <#supported-sites>`__)
-  Stream an online video in your media player. No web browser, no more
   ads.
-  Download images (of interest) by scraping a web page.
-  Download arbitrary non-HTML contents, i.e., binary files.

Interested? `Install it <#installation>`__ now and `get started by
examples <#getting-started>`__.

Are you a Python programmer? Then check out `the
source <https://github.com/soimort/you-get>`__ and fork it!

.. |PyPI version| image:: https://badge.fury.io/py/you-get.png
   :target: http://badge.fury.io/py/you-get
.. |Build Status| image:: https://github.com/soimort/you-get/workflows/develop/badge.svg
   :target: https://github.com/soimort/you-get/actions
.. |Gitter| image:: https://badges.gitter.im/Join%20Chat.svg
   :target: https://gitter.im/soimort/you-get?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge


================================================
FILE: SECURITY.md
================================================
# Security Policy

## Reporting a Vulnerability

Please report security issues to <mort.yao+you-get@gmail.com>.


================================================
FILE: contrib/completion/you-get-completion.bash
================================================
# Bash completion definition for you-get.

_you-get () {
    COMPREPLY=()
    local IFS=$' \n'
    local cur=$2 prev=$3
    local -a opts_without_arg opts_with_arg
    opts_without_arg=(
        -V --version -h --help -i --info -u --url --json -n --no-merge
        --no-caption -f --force --no-proxy -d --debug
    )
    opts_with_arg=(
        -F --format -O --output-filename -o --output-dir -p --player
        -c --cookies -x --http-proxy -y --extractor-proxy -t --timeout
    )

    # Do not complete non option names
    [[ $cur == -* ]] || return 1

    # Do not complete when the previous arg is an option expecting an argument
    for opt in "${opts_with_arg[@]}"; do
        [[ $opt == $prev ]] && return 1
    done

    # Complete option names
    COMPREPLY=( $(compgen -W "${opts_without_arg[*]} ${opts_with_arg[*]}" \
                          -- "$cur") )
    return 0
}

complete -F _you-get you-get


================================================
FILE: contrib/completion/you-get.fish
================================================
# Fish completion definition for you-get.

complete -c you-get -s V -l version -d 'print version and exit'
complete -c you-get -s h -l help -d 'print help and exit'
complete -c you-get -s i -l info -d 'print extracted information'
complete -c you-get -s u -l url -d 'print extracted information'
complete -c you-get -l json -d 'print extracted URLs in JSON format'
complete -c you-get -s n -l no-merge -d 'do not merge video parts'
complete -c you-get -l no-caption -d 'do not download captions'
complete -c you-get -s f -l force -d 'force overwrite existing files'
complete -c you-get -s F -l format -x -d 'set video format to the specified stream id'
complete -c you-get -s O -l output-filename -d 'set output filename' \
         -x -a '(__fish_complete_path (commandline -ct) "output filename")'
complete -c you-get -s o -l output-dir  -d 'set output directory' \
         -x -a '(__fish_complete_directories (commandline -ct) "output directory")'
complete -c you-get -s p -l player -x -d 'stream extracted URL to the specified player'
complete -c you-get -s c -l cookies -d 'load cookies.txt or cookies.sqlite' \
         -x -a '(__fish_complete_path (commandline -ct) "cookies.txt or cookies.sqlite")'
complete -c you-get -s x -l http-proxy -x -d 'use the specified HTTP proxy for downloading'
complete -c you-get -s y -l extractor-proxy -x -d 'use the specified HTTP proxy for extraction only'
complete -c you-get -l no-proxy -d 'do not use a proxy'
complete -c you-get -s t -l timeout -x -d 'set socket timeout'
complete -c you-get -s d -l debug -d 'show traceback and other debug info'


================================================
FILE: setup.cfg
================================================
[build]
force = 0

[global]
verbose = 0

[egg_info]
tag_build = 
tag_date = 0
tag_svn_revision = 0


================================================
FILE: setup.py
================================================
#!/usr/bin/env python3

PROJ_NAME = 'you-get'
PACKAGE_NAME = 'you_get'

PROJ_METADATA = '%s.json' % PROJ_NAME

import importlib.util
import importlib.machinery

def load_source(modname, filename):
    loader = importlib.machinery.SourceFileLoader(modname, filename)
    spec = importlib.util.spec_from_file_location(modname, filename, loader=loader)
    module = importlib.util.module_from_spec(spec)
    # The module is always executed and not cached in sys.modules.
    # Uncomment the following line to cache the module.
    # sys.modules[module.__name__] = module
    loader.exec_module(module)
    return module

import os, json
here = os.path.abspath(os.path.dirname(__file__))
proj_info = json.loads(open(os.path.join(here, PROJ_METADATA), encoding='utf-8').read())
try:
    README = open(os.path.join(here, 'README.rst'), encoding='utf-8').read()
except:
    README = ""
CHANGELOG = open(os.path.join(here, 'CHANGELOG.rst'), encoding='utf-8').read()
VERSION = load_source('version', os.path.join(here, 'src/%s/version.py' % PACKAGE_NAME)).__version__

from setuptools import setup, find_packages
setup(
    name = proj_info['name'],
    version = VERSION,

    author = proj_info['author'],
    author_email = proj_info['author_email'],
    url = proj_info['url'],
    license = proj_info['license'],

    description = proj_info['description'],
    keywords = proj_info['keywords'],

    long_description = README,

    packages = find_packages('src'),
    package_dir = {'' : 'src'},

    test_suite = 'tests',

    platforms = 'any',
    zip_safe = True,
    include_package_data = True,

    classifiers = proj_info['classifiers'],

    entry_points = {'console_scripts': proj_info['console_scripts']},

    install_requires = ['dukpy'],
    extras_require = {
        'socks': ['PySocks'],
    }
)


================================================
FILE: src/you_get/cli_wrapper/player/dragonplayer.py
================================================


================================================
FILE: src/you_get/cli_wrapper/player/gnome_mplayer.py
================================================


================================================
FILE: src/you_get/cli_wrapper/player/mplayer.py
================================================


================================================
FILE: src/you_get/cli_wrapper/player/vlc.py
================================================
#!/usr/bin/env python


================================================
FILE: src/you_get/cli_wrapper/player/wmp.py
================================================


================================================
FILE: src/you_get/cli_wrapper/transcoder/ffmpeg.py
================================================


================================================
FILE: src/you_get/cli_wrapper/transcoder/libav.py
================================================


================================================
FILE: src/you_get/cli_wrapper/transcoder/mencoder.py
================================================


================================================
FILE: src/you_get/common.py
================================================
#!/usr/bin/env python

import io
import os
import re
import sys
import time
import json
import socket
import locale
import logging
import argparse
import ssl
from http import cookiejar
from importlib import import_module
from urllib import request, parse, error

from .version import __version__
from .util import log, term
from .util.git import get_version
from .util.strings import get_filename, unescape_html
from . import json_output as json_output_
sys.stdout = io.TextIOWrapper(sys.stdout.buffer,encoding='utf8')

SITES = {
    '163'              : 'netease',
    '56'               : 'w56',
    '365yg'            : 'toutiao',
    'acfun'            : 'acfun',
    'archive'          : 'archive',
    'baidu'            : 'baidu',
    'bandcamp'         : 'bandcamp',
    'baomihua'         : 'baomihua',
    'bigthink'         : 'bigthink',
    'bilibili'         : 'bilibili',
    'cctv'             : 'cntv',
    'cntv'             : 'cntv',
    'cbs'              : 'cbs',
    'coub'             : 'coub',
    'dailymotion'      : 'dailymotion',
    'douban'           : 'douban',
    'douyin'           : 'douyin',
    'douyu'            : 'douyutv',
    'ehow'             : 'ehow',
    'facebook'         : 'facebook',
    'fc2'              : 'fc2video',
    'flickr'           : 'flickr',
    'freesound'        : 'freesound',
    'fun'              : 'funshion',
    'google'           : 'google',
    'giphy'            : 'giphy',
    'heavy-music'      : 'heavymusic',
    'huomao'           : 'huomaotv',
    'iask'             : 'sina',
    'icourses'         : 'icourses',
    'ifeng'            : 'ifeng',
    'imgur'            : 'imgur',
    'in'               : 'alive',
    'infoq'            : 'infoq',
    'instagram'        : 'instagram',
    'interest'         : 'interest',
    'iqilu'            : 'iqilu',
    'iqiyi'            : 'iqiyi',
    'ixigua'           : 'ixigua',
    'isuntv'           : 'suntv',
    'iwara'            : 'iwara',
    'joy'              : 'joy',
    'kankanews'        : 'bilibili',
    'kakao'            : 'kakao',
    'khanacademy'      : 'khan',
    'ku6'              : 'ku6',
    'kuaishou'         : 'kuaishou',
    'kugou'            : 'kugou',
    'kuwo'             : 'kuwo',
    'le'               : 'le',
    'letv'             : 'le',
    'lizhi'            : 'lizhi',
    'longzhu'          : 'longzhu',
    'lrts'             : 'lrts',
    'magisto'          : 'magisto',
    'metacafe'         : 'metacafe',
    'mgtv'             : 'mgtv',
    'miomio'           : 'miomio',
    'missevan'         : 'missevan',
    'mixcloud'         : 'mixcloud',
    'mtv81'            : 'mtv81',
    'miaopai'          : 'yixia',
    'naver'            : 'naver',
    '7gogo'            : 'nanagogo',
    'nicovideo'        : 'nicovideo',
    'pinterest'        : 'pinterest',
    'pixnet'           : 'pixnet',
    'pptv'             : 'pptv',
    'qingting'         : 'qingting',
    'qq'               : 'qq',
    'showroom-live'    : 'showroom',
    'sina'             : 'sina',
    'smgbb'            : 'bilibili',
    'sohu'             : 'sohu',
    'soundcloud'       : 'soundcloud',
    'ted'              : 'ted',
    'theplatform'      : 'theplatform',
    'tiktok'           : 'tiktok',
    'tucao'            : 'tucao',
    'tudou'            : 'tudou',
    'tumblr'           : 'tumblr',
    'twimg'            : 'twitter',
    'twitter'          : 'twitter',
    'ucas'             : 'ucas',
    'vimeo'            : 'vimeo',
    'wanmen'           : 'wanmen',
    'weibo'            : 'miaopai',
    'veoh'             : 'veoh',
    'vk'               : 'vk',
    'x'                : 'twitter',
    'xiaokaxiu'        : 'yixia',
    'xiaojiadianvideo' : 'fc2video',
    'ximalaya'         : 'ximalaya',
    'xinpianchang'     : 'xinpianchang',
    'yizhibo'          : 'yizhibo',
    'youku'            : 'youku',
    'youtu'            : 'youtube',
    'youtube'          : 'youtube',
    'zhanqi'           : 'zhanqi',
    'zhibo'            : 'zhibo',
    'zhihu'            : 'zhihu',
}

dry_run = False
json_output = False
force = False
skip_existing_file_size_check = False
player = None
extractor_proxy = None
cookies = None
output_filename = None
auto_rename = False
insecure = False
m3u8 = False
postfix = False
prefix = None

fake_headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Charset': 'UTF-8,*;q=0.5',
    'Accept-Encoding': 'gzip,deflate,sdch',
    'Accept-Language': 'en-US,en;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36 Edg/126.0.2592.113'  # Latest Edge
}

if sys.stdout.isatty():
    default_encoding = sys.stdout.encoding.lower()
else:
    default_encoding = locale.getpreferredencoding().lower()


def rc4(key, data):
    # all encryption algo should work on bytes
    assert type(key) == type(data) and type(key) == type(b'')
    state = list(range(256))
    j = 0
    for i in range(256):
        j += state[i] + key[i % len(key)]
        j &= 0xff
        state[i], state[j] = state[j], state[i]

    i = 0
    j = 0
    out_list = []
    for char in data:
        i += 1
        i &= 0xff
        j += state[i]
        j &= 0xff
        state[i], state[j] = state[j], state[i]
        prn = state[(state[i] + state[j]) & 0xff]
        out_list.append(char ^ prn)

    return bytes(out_list)


def general_m3u8_extractor(url, headers={}):
    m3u8_list = get_content(url, headers=headers).split('\n')
    urls = []
    for line in m3u8_list:
        line = line.strip()
        if line and not line.startswith('#'):
            if line.startswith('http'):
                urls.append(line)
            else:
                seg_url = parse.urljoin(url, line)
                urls.append(seg_url)
    return urls


def maybe_print(*s):
    try:
        print(*s)
    except:
        pass


def tr(s):
    if default_encoding == 'utf-8':
        return s
    else:
        return s
        # return str(s.encode('utf-8'))[2:-1]


# DEPRECATED in favor of match1()
def r1(pattern, text):
    m = re.search(pattern, text)
    if m:
        return m.group(1)


# DEPRECATED in favor of match1()
def r1_of(patterns, text):
    for p in patterns:
        x = r1(p, text)
        if x:
            return x


def match1(text, *patterns):
    """Scans through a string for substrings matched some patterns (first-subgroups only).

    Args:
        text: A string to be scanned.
        patterns: Arbitrary number of regex patterns.

    Returns:
        When only one pattern is given, returns a string (None if no match found).
        When more than one pattern are given, returns a list of strings ([] if no match found).
    """

    if len(patterns) == 1:
        pattern = patterns[0]
        match = re.search(pattern, text)
        if match:
            return match.group(1)
        else:
            return None
    else:
        ret = []
        for pattern in patterns:
            match = re.search(pattern, text)
            if match:
                ret.append(match.group(1))
        return ret


def matchall(text, patterns):
    """Scans through a string for substrings matched some patterns.

    Args:
        text: A string to be scanned.
        patterns: a list of regex pattern.

    Returns:
        a list if matched. empty if not.
    """

    ret = []
    for pattern in patterns:
        match = re.findall(pattern, text)
        ret += match

    return ret


def launch_player(player, urls):
    import subprocess
    import shlex
    urls = list(urls)
    for url in urls.copy():
        if type(url) is list:
            urls.extend(url)
    urls = [url for url in urls if type(url) is str]
    assert urls
    if (sys.version_info >= (3, 3)):
        import shutil
        exefile=shlex.split(player)[0]
        if shutil.which(exefile) is not None:
            subprocess.call(shlex.split(player) + urls)
        else:
            log.wtf('[Failed] Cannot find player "%s"' % exefile)
    else:
        subprocess.call(shlex.split(player) + urls)


def parse_query_param(url, param):
    """Parses the query string of a URL and returns the value of a parameter.

    Args:
        url: A URL.
        param: A string representing the name of the parameter.

    Returns:
        The value of the parameter.
    """

    try:
        return parse.parse_qs(parse.urlparse(url).query)[param][0]
    except:
        return None


def unicodize(text):
    return re.sub(
        r'\\u([0-9A-Fa-f][0-9A-Fa-f][0-9A-Fa-f][0-9A-Fa-f])',
        lambda x: chr(int(x.group(0)[2:], 16)),
        text
    )


# DEPRECATED in favor of util.legitimize()
def escape_file_path(path):
    path = path.replace('/', '-')
    path = path.replace('\\', '-')
    path = path.replace('*', '-')
    path = path.replace('?', '-')
    return path


def ungzip(data):
    """Decompresses data for Content-Encoding: gzip.
    """
    from io import BytesIO
    import gzip
    buffer = BytesIO(data)
    f = gzip.GzipFile(fileobj=buffer)
    return f.read()


def undeflate(data):
    """Decompresses data for Content-Encoding: deflate.
    (the zlib compression is used.)
    """
    import zlib
    decompressobj = zlib.decompressobj(-zlib.MAX_WBITS)
    return decompressobj.decompress(data)+decompressobj.flush()


# an http.client implementation of get_content()
# because urllib does not support "Connection: keep-alive"
def getHttps(host, url, headers, debuglevel=0):
    import http.client

    conn = http.client.HTTPSConnection(host)
    conn.set_debuglevel(debuglevel)
    conn.request("GET", url, headers=headers)
    resp = conn.getresponse()
    logging.debug('getHttps: %s' % resp.getheaders())
    set_cookie = resp.getheader('set-cookie')

    data = resp.read()
    try:
        data = ungzip(data)  # gzip
        data = undeflate(data)  # deflate
    except:
        pass

    conn.close()
    return str(data, encoding='utf-8'), set_cookie  # TODO: support raw data


# DEPRECATED in favor of get_content()
def get_response(url, faker=False):
    logging.debug('get_response: %s' % url)
    ctx = None
    if insecure:
        # ignore ssl errors
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    # install cookies
    if cookies:
        opener = request.build_opener(request.HTTPCookieProcessor(cookies))
        request.install_opener(opener)

    if faker:
        response = request.urlopen(
            request.Request(url, headers=fake_headers), None, context=ctx,
        )
    else:
        response = request.urlopen(url, context=ctx)

    data = response.read()
    if response.info().get('Content-Encoding') == 'gzip':
        data = ungzip(data)
    elif response.info().get('Content-Encoding') == 'deflate':
        data = undeflate(data)
    response.data = data
    return response


# DEPRECATED in favor of get_content()
def get_html(url, encoding=None, faker=False):
    content = get_response(url, faker).data
    return str(content, 'utf-8', 'ignore')


# DEPRECATED in favor of get_content()
def get_decoded_html(url, faker=False):
    response = get_response(url, faker)
    data = response.data
    charset = r1(r'charset=([\w-]+)', response.headers['content-type'])
    if charset:
        return data.decode(charset, 'ignore')
    else:
        return data


def get_location(url, headers=None, get_method='HEAD'):
    logging.debug('get_location: %s' % url)

    if headers:
        req = request.Request(url, headers=headers)
    else:
        req = request.Request(url)
    req.get_method = lambda: get_method
    res = urlopen_with_retry(req)
    return res.geturl()


def urlopen_with_retry(*args, **kwargs):
    retry_time = 3
    for i in range(retry_time):
        try:
            if insecure:
                # ignore ssl errors
                ctx = ssl.create_default_context()
                ctx.check_hostname = False
                ctx.verify_mode = ssl.CERT_NONE
                return request.urlopen(*args, context=ctx, **kwargs)
            else:
                return request.urlopen(*args, **kwargs)
        except socket.timeout as e:
            logging.debug('request attempt %s timeout' % str(i + 1))
            if i + 1 == retry_time:
                raise e
        # try to tackle youku CDN fails
        except error.HTTPError as http_error:
            logging.debug('HTTP Error with code{}'.format(http_error.code))
            if i + 1 == retry_time:
                raise http_error


def get_content(url, headers={}, decoded=True):
    """Gets the content of a URL via sending a HTTP GET request.

    Args:
        url: A URL.
        headers: Request headers used by the client.
        decoded: Whether decode the response body using UTF-8 or the charset specified in Content-Type.

    Returns:
        The content as a string.
    """

    logging.debug('get_content: %s' % url)

    req = request.Request(url, headers=headers)
    if cookies:
        # NOTE: Do not use cookies.add_cookie_header(req)
        # #HttpOnly_ cookies were not supported by CookieJar and MozillaCookieJar properly until python 3.10
        # See also:
        # - https://github.com/python/cpython/pull/17471
        # - https://bugs.python.org/issue2190
        # Here we add cookies to the request headers manually
        cookie_strings = []
        for cookie in list(cookies):
            cookie_strings.append(cookie.name + '=' + cookie.value)
        cookie_headers = {'Cookie': '; '.join(cookie_strings)}
        req.headers.update(cookie_headers)

    response = urlopen_with_retry(req)
    data = response.read()

    # Handle HTTP compression for gzip and deflate (zlib)
    content_encoding = response.getheader('Content-Encoding')
    if content_encoding == 'gzip':
        data = ungzip(data)
    elif content_encoding == 'deflate':
        data = undeflate(data)

    # Decode the response body
    if decoded:
        charset = match1(
            response.getheader('Content-Type', ''), r'charset=([\w-]+)'
        )
        if charset is not None:
            data = data.decode(charset, 'ignore')
        else:
            data = data.decode('utf-8', 'ignore')

    return data


def post_content(url, headers={}, post_data={}, decoded=True, **kwargs):
    """Post the content of a URL via sending a HTTP POST request.

    Args:
        url: A URL.
        headers: Request headers used by the client.
        decoded: Whether decode the response body using UTF-8 or the charset specified in Content-Type.

    Returns:
        The content as a string.
    """
    if kwargs.get('post_data_raw'):
        logging.debug('post_content: %s\npost_data_raw: %s' % (url, kwargs['post_data_raw']))
    else:
        logging.debug('post_content: %s\npost_data: %s' % (url, post_data))

    req = request.Request(url, headers=headers)
    if cookies:
        # NOTE: Do not use cookies.add_cookie_header(req)
        # #HttpOnly_ cookies were not supported by CookieJar and MozillaCookieJar properly until python 3.10
        # See also:
        # - https://github.com/python/cpython/pull/17471
        # - https://bugs.python.org/issue2190
        # Here we add cookies to the request headers manually
        cookie_strings = []
        for cookie in list(cookies):
            cookie_strings.append(cookie.name + '=' + cookie.value)
        cookie_headers = {'Cookie': '; '.join(cookie_strings)}
        req.headers.update(cookie_headers)
    if kwargs.get('post_data_raw'):
        post_data_enc = bytes(kwargs['post_data_raw'], 'utf-8')
    else:
        post_data_enc = bytes(parse.urlencode(post_data), 'utf-8')
    response = urlopen_with_retry(req, data=post_data_enc)
    data = response.read()

    # Handle HTTP compression for gzip and deflate (zlib)
    content_encoding = response.getheader('Content-Encoding')
    if content_encoding == 'gzip':
        data = ungzip(data)
    elif content_encoding == 'deflate':
        data = undeflate(data)

    # Decode the response body
    if decoded:
        charset = match1(
            response.getheader('Content-Type'), r'charset=([\w-]+)'
        )
        if charset is not None:
            data = data.decode(charset)
        else:
            data = data.decode('utf-8')

    return data


def url_size(url, faker=False, headers={}):
    if faker:
        response = urlopen_with_retry(
            request.Request(url, headers=fake_headers)
        )
    elif headers:
        response = urlopen_with_retry(request.Request(url, headers=headers))
    else:
        response = urlopen_with_retry(url)

    size = response.headers['content-length']
    return int(size) if size is not None else float('inf')


def urls_size(urls, faker=False, headers={}):
    return sum([url_size(url, faker=faker, headers=headers) for url in urls])


def get_head(url, headers=None, get_method='HEAD'):
    logging.debug('get_head: %s' % url)

    if headers:
        req = request.Request(url, headers=headers)
    else:
        req = request.Request(url)
    req.get_method = lambda: get_method
    res = urlopen_with_retry(req)
    return res.headers


def url_info(url, faker=False, headers={}):
    logging.debug('url_info: %s' % url)

    if faker:
        response = urlopen_with_retry(
            request.Request(url, headers=fake_headers)
        )
    elif headers:
        response = urlopen_with_retry(request.Request(url, headers=headers))
    else:
        response = urlopen_with_retry(request.Request(url))

    headers = response.headers

    type = headers['content-type']
    if type == 'image/jpg; charset=UTF-8' or type == 'image/jpg':
        type = 'audio/mpeg'  # fix for netease
    mapping = {
        'video/3gpp': '3gp',
        'video/f4v': 'flv',
        'video/mp4': 'mp4',
        'video/MP2T': 'ts',
        'video/quicktime': 'mov',
        'video/webm': 'webm',
        'video/x-flv': 'flv',
        'video/x-ms-asf': 'asf',
        'audio/mp4': 'mp4',
        'audio/mpeg': 'mp3',
        'audio/wav': 'wav',
        'audio/x-wav': 'wav',
        'audio/wave': 'wav',
        'image/jpeg': 'jpg',
        'image/png': 'png',
        'image/gif': 'gif',
        'application/pdf': 'pdf',
    }
    if type in mapping:
        ext = mapping[type]
    else:
        type = None
        if headers['content-disposition']:
            try:
                filename = parse.unquote(
                    r1(r'filename="?([^"]+)"?', headers['content-disposition'])
                )
                if len(filename.split('.')) > 1:
                    ext = filename.split('.')[-1]
                else:
                    ext = None
            except:
                ext = None
        else:
            ext = None

    if headers['transfer-encoding'] != 'chunked':
        size = headers['content-length'] and int(headers['content-length'])
    else:
        size = None

    return type, ext, size


def url_locations(urls, faker=False, headers={}):
    locations = []
    for url in urls:
        logging.debug('url_locations: %s' % url)

        if faker:
            response = urlopen_with_retry(
                request.Request(url, headers=fake_headers)
            )
        elif headers:
            response = urlopen_with_retry(
                request.Request(url, headers=headers)
            )
        else:
            response = urlopen_with_retry(request.Request(url))

        locations.append(response.url)
    return locations


def url_save(
    url, filepath, bar, refer=None, is_part=False, faker=False,
    headers=None, timeout=None, **kwargs
):
    tmp_headers = headers.copy() if headers is not None else {}
    # When a referer specified with param refer,
    # the key must be 'Referer' for the hack here
    if refer is not None:
        tmp_headers['Referer'] = refer
    if type(url) is list:
        chunk_sizes = [url_size(url, faker=faker, headers=tmp_headers) for url in url]
        file_size = sum(chunk_sizes)
        is_chunked, urls = True, url
    else:
        file_size = url_size(url, faker=faker, headers=tmp_headers)
        chunk_sizes = [file_size]
        is_chunked, urls = False, [url]

    continue_renameing = True
    while continue_renameing:
        continue_renameing = False
        if os.path.exists(filepath):
            if not force and (file_size == os.path.getsize(filepath) or skip_existing_file_size_check):
                if not is_part:
                    if bar:
                        bar.done()
                    if skip_existing_file_size_check:
                        log.w(
                            'Skipping {} without checking size: file already exists'.format(
                                tr(os.path.basename(filepath))
                            )
                        )
                    else:
                        log.w(
                            'Skipping {}: file already exists'.format(
                                tr(os.path.basename(filepath))
                            )
                        )
                else:
                    if bar:
                        bar.update_received(file_size)
                return
            else:
                if not is_part:
                    if bar:
                        bar.done()
                    if not force and auto_rename:
                        path, ext = os.path.basename(filepath).rsplit('.', 1)
                        finder = re.compile(r' \([1-9]\d*?\)$')
                        if (finder.search(path) is None):
                            thisfile = path + ' (1).' + ext
                        else:
                            def numreturn(a):
                                return ' (' + str(int(a.group()[2:-1]) + 1) + ').'
                            thisfile = finder.sub(numreturn, path) + ext
                        filepath = os.path.join(os.path.dirname(filepath), thisfile)
                        print('Changing name to %s' % tr(os.path.basename(filepath)), '...')
                        continue_renameing = True
                        continue
                    if log.yes_or_no('File with this name already exists. Overwrite?'):
                        log.w('Overwriting %s ...' % tr(os.path.basename(filepath)))
                    else:
                        return
        elif not os.path.exists(os.path.dirname(filepath)):
            os.mkdir(os.path.dirname(filepath))

    temp_filepath = filepath + '.download' if file_size != float('inf') \
        else filepath
    received = 0
    if not force:
        open_mode = 'ab'

        if os.path.exists(temp_filepath):
            received += os.path.getsize(temp_filepath)
            if bar:
                bar.update_received(os.path.getsize(temp_filepath))
    else:
        open_mode = 'wb'

    chunk_start = 0
    chunk_end = 0
    for i, url in enumerate(urls):
        received_chunk = 0
        chunk_start += 0 if i == 0 else chunk_sizes[i - 1]
        chunk_end += chunk_sizes[i]
        if received < file_size and received < chunk_end:
            if faker:
                tmp_headers = fake_headers
            '''
            if parameter headers passed in, we have it copied as tmp_header
            elif headers:
                headers = headers
            else:
                headers = {}
            '''
            if received:
                # chunk_start will always be 0 if not chunked
                tmp_headers['Range'] = 'bytes=' + str(received - chunk_start) + '-'
            if refer:
                tmp_headers['Referer'] = refer

            if timeout:
                response = urlopen_with_retry(
                    request.Request(url, headers=tmp_headers), timeout=timeout
                )
            else:
                response = urlopen_with_retry(
                    request.Request(url, headers=tmp_headers)
                )
            try:
                range_start = int(
                    response.headers[
                        'content-range'
                    ][6:].split('/')[0].split('-')[0]
                )
                end_length = int(
                    response.headers['content-range'][6:].split('/')[1]
                )
                range_length = end_length - range_start
            except:
                content_length = response.headers['content-length']
                range_length = int(content_length) if content_length is not None \
                    else float('inf')

            if is_chunked:  # always append if chunked
                open_mode = 'ab'
            elif file_size != received + range_length:  # is it ever necessary?
                received = 0
                if bar:
                    bar.received = 0
                open_mode = 'wb'

            with open(temp_filepath, open_mode) as output:
                while True:
                    buffer = None
                    try:
                        buffer = response.read(1024 * 256)
                    except socket.timeout:
                        pass
                    if not buffer:
                        if file_size == float('+inf'):  # Prevent infinite downloading
                            break
                        if is_chunked and received_chunk == range_length:
                            break
                        elif not is_chunked and received == file_size:  # Download finished
                            break
                        # Unexpected termination. Retry request
                        tmp_headers['Range'] = 'bytes=' + str(received - chunk_start) + '-'
                        response = urlopen_with_retry(
                            request.Request(url, headers=tmp_headers)
                        )
                        continue
                    output.write(buffer)
                    received += len(buffer)
                    received_chunk += len(buffer)
                    if bar:
                        bar.update_received(len(buffer))

    assert received == os.path.getsize(temp_filepath), '%s == %s == %s' % (
        received, os.path.getsize(temp_filepath), temp_filepath
    )

    if os.access(filepath, os.W_OK) and file_size != float('inf'):
        # on Windows rename could fail if destination filepath exists
        # we should simply choose a new name instead of brutal os.remove(filepath)
        filepath = filepath + " (2)"
    os.rename(temp_filepath, filepath)


class SimpleProgressBar:
    term_size = term.get_terminal_size()[1]

    def __init__(self, total_size, total_pieces=1):
        self.displayed = False
        self.total_size = total_size
        self.total_pieces = total_pieces
        self.current_piece = 1
        self.received = 0
        self.speed = ''
        self.last_updated = time.time()

        total_pieces_len = len(str(total_pieces))
        # 38 is the size of all statically known size in self.bar
        total_str = '%5s' % round(self.total_size / 1048576, 1)
        total_str_width = max(len(total_str), 5)
        self.bar_size = self.term_size - 28 - 2 * total_pieces_len \
            - 2 * total_str_width
        self.bar = '{:>4}%% ({:>%s}/%sMB) ├{:─<%s}┤[{:>%s}/{:>%s}] {}' % (
            total_str_width, total_str, self.bar_size, total_pieces_len,
            total_pieces_len
        )

    def update(self):
        self.displayed = True
        bar_size = self.bar_size
        percent = round(self.received * 100 / self.total_size, 1)
        if percent >= 100:
            percent = 100
        dots = bar_size * int(percent) // 100
        plus = int(percent) - dots // bar_size * 100
        if plus > 0.8:
            plus = '█'
        elif plus > 0.4:
            plus = '>'
        else:
            plus = ''
        bar = '█' * dots + plus
        bar = self.bar.format(
            percent, round(self.received / 1048576, 1), bar,
            self.current_piece, self.total_pieces, self.speed
        )
        sys.stdout.write('\r' + bar)
        sys.stdout.flush()

    def update_received(self, n):
        self.received += n
        time_diff = time.time() - self.last_updated
        bytes_ps = n / time_diff if time_diff else 0
        if bytes_ps >= 1024 ** 3:
            self.speed = '{:4.0f} GB/s'.format(bytes_ps / 1024 ** 3)
        elif bytes_ps >= 1024 ** 2:
            self.speed = '{:4.0f} MB/s'.format(bytes_ps / 1024 ** 2)
        elif bytes_ps >= 1024:
            self.speed = '{:4.0f} kB/s'.format(bytes_ps / 1024)
        else:
            self.speed = '{:4.0f}  B/s'.format(bytes_ps)
        self.last_updated = time.time()
        self.update()

    def update_piece(self, n):
        self.current_piece = n

    def done(self):
        if self.displayed:
            print()
            self.displayed = False


class PiecesProgressBar:
    def __init__(self, total_size, total_pieces=1):
        self.displayed = False
        self.total_size = total_size
        self.total_pieces = total_pieces
        self.current_piece = 1
        self.received = 0

    def update(self):
        self.displayed = True
        bar = '{0:>5}%[{1:<40}] {2}/{3}'.format(
            '', '=' * 40, self.current_piece, self.total_pieces
        )
        sys.stdout.write('\r' + bar)
        sys.stdout.flush()

    def update_received(self, n):
        self.received += n
        self.update()

    def update_piece(self, n):
        self.current_piece = n

    def done(self):
        if self.displayed:
            print()
            self.displayed = False


class DummyProgressBar:
    def __init__(self, *args):
        pass

    def update_received(self, n):
        pass

    def update_piece(self, n):
        pass

    def done(self):
        pass


def get_output_filename(urls, title, ext, output_dir, merge, **kwargs):
    # lame hack for the --output-filename option
    global output_filename
    if output_filename:
        result = output_filename
        if kwargs.get('part', -1) >= 0:
            result = '%s[%02d]' % (result, kwargs.get('part'))
        if ext:
            result = '%s.%s' % (result, ext)
        return result

    merged_ext = ext
    if (len(urls) > 1) and merge:
        from .processor.ffmpeg import has_ffmpeg_installed
        if ext in ['flv', 'f4v']:
            if has_ffmpeg_installed():
                merged_ext = 'mp4'
            else:
                merged_ext = 'flv'
        elif ext == 'mp4':
            merged_ext = 'mp4'
        elif ext == 'ts':
            if has_ffmpeg_installed():
                merged_ext = 'mkv'
            else:
                merged_ext = 'ts'
    result = title
    if kwargs.get('part', -1) >= 0:
        result = '%s[%02d]' % (result, kwargs.get('part'))
    result = '%s.%s' % (result, merged_ext)
    return result.replace("'", "_")

def print_user_agent(faker=False):
    urllib_default_user_agent = 'Python-urllib/%d.%d' % sys.version_info[:2]
    user_agent = fake_headers['User-Agent'] if faker else urllib_default_user_agent
    print('User Agent: %s' % user_agent)

def download_urls(
    urls, title, ext, total_size, output_dir='.', refer=None, merge=True,
    faker=False, headers={}, **kwargs
):
    assert urls
    if json_output:
        json_output_.download_urls(
            urls=urls, title=title, ext=ext, total_size=total_size,
            refer=refer
        )
        return
    if dry_run:
        print_user_agent(faker=faker)
        try:
            print('Real URLs:\n%s' % '\n'.join(urls))
        except:
            print('Real URLs:\n%s' % '\n'.join([j for i in urls for j in i]))
        return

    if player:
        launch_player(player, urls)
        return

    if not total_size:
        try:
            total_size = urls_size(urls, faker=faker, headers=headers)
        except:
            import traceback
            traceback.print_exc(file=sys.stdout)
            pass

    title = tr(get_filename(title))
    if postfix and 'vid' in kwargs:
        title = "%s [%s]" % (title, kwargs['vid'])
    if prefix is not None:
        title = "[%s] %s" % (prefix, title)
    output_filename = get_output_filename(urls, title, ext, output_dir, merge)
    output_filepath = os.path.join(output_dir, output_filename)

    if total_size:
        if not force and os.path.exists(output_filepath) and not auto_rename\
                and (os.path.getsize(output_filepath) >= total_size * 0.9\
                or skip_existing_file_size_check):
            if skip_existing_file_size_check:
                log.w('Skipping %s without checking size: file already exists' % output_filepath)
            else:
                log.w('Skipping %s: file already exists' % output_filepath)
            print()
            return
        bar = SimpleProgressBar(total_size, len(urls))
    else:
        bar = PiecesProgressBar(total_size, len(urls))

    if len(urls) == 1:
        url = urls[0]
        print('Downloading %s ...' % tr(output_filename))
        bar.update()
        url_save(
            url, output_filepath, bar, refer=refer, faker=faker,
            headers=headers, **kwargs
        )
        bar.done()
    else:
        parts = []
        print('Downloading %s ...' % tr(output_filename))
        bar.update()
        for i, url in enumerate(urls):
            output_filename_i = get_output_filename(urls, title, ext, output_dir, merge, part=i)
            output_filepath_i = os.path.join(output_dir, output_filename_i)
            parts.append(output_filepath_i)
            # print 'Downloading %s [%s/%s]...' % (tr(filename), i + 1, len(urls))
            bar.update_piece(i + 1)
            url_save(
                url, output_filepath_i, bar, refer=refer, is_part=True, faker=faker,
                headers=headers, **kwargs
            )
        bar.done()

        if not merge:
            print()
            return

        if 'av' in kwargs and kwargs['av']:
            from .processor.ffmpeg import has_ffmpeg_installed
            if has_ffmpeg_installed():
                from .processor.ffmpeg import ffmpeg_concat_av
                ret = ffmpeg_concat_av(parts, output_filepath, ext)
                print('Merged into %s' % output_filename)
                if ret == 0:
                    for part in parts:
                        os.remove(part)

        elif ext in ['flv', 'f4v']:
            try:
                from .processor.ffmpeg import has_ffmpeg_installed
                if has_ffmpeg_installed():
                    from .processor.ffmpeg import ffmpeg_concat_flv_to_mp4
                    ffmpeg_concat_flv_to_mp4(parts, output_filepath)
                else:
                    from .processor.join_flv import concat_flv
                    concat_flv(parts, output_filepath)
                print('Merged into %s' % output_filename)
            except:
                raise
            else:
                for part in parts:
                    os.remove(part)

        elif ext == 'mp4':
            try:
                from .processor.ffmpeg import has_ffmpeg_installed
                if has_ffmpeg_installed():
                    from .processor.ffmpeg import ffmpeg_concat_mp4_to_mp4
                    ffmpeg_concat_mp4_to_mp4(parts, output_filepath)
                else:
                    from .processor.join_mp4 import concat_mp4
                    concat_mp4(parts, output_filepath)
                print('Merged into %s' % output_filename)
            except:
                raise
            else:
                for part in parts:
                    os.remove(part)

        elif ext == 'ts':
            try:
                from .processor.ffmpeg import has_ffmpeg_installed
                if has_ffmpeg_installed():
                    from .processor.ffmpeg import ffmpeg_concat_ts_to_mkv
                    ffmpeg_concat_ts_to_mkv(parts, output_filepath)
                else:
                    from .processor.join_ts import concat_ts
                    concat_ts(parts, output_filepath)
                print('Merged into %s' % output_filename)
            except:
                raise
            else:
                for part in parts:
                    os.remove(part)

        elif ext == 'mp3':
            try:
                from .processor.ffmpeg import has_ffmpeg_installed

                assert has_ffmpeg_installed()
                from .processor.ffmpeg import ffmpeg_concat_mp3_to_mp3
                ffmpeg_concat_mp3_to_mp3(parts, output_filepath)
                print('Merged into %s' % output_filename)
            except:
                raise
            else:
                for part in parts:
                    os.remove(part)

        else:
            print("Can't merge %s files" % ext)

    print()


def download_rtmp_url(
    url, title, ext, params={}, total_size=0, output_dir='.', refer=None,
    merge=True, faker=False
):
    assert url
    if dry_run:
        print_user_agent(faker=faker)
        print('Real URL:\n%s\n' % [url])
        if params.get('-y', False):  # None or unset -> False
            print('Real Playpath:\n%s\n' % [params.get('-y')])
        return

    if player:
        from .processor.rtmpdump import play_rtmpdump_stream
        play_rtmpdump_stream(player, url, params)
        return

    from .processor.rtmpdump import (
        has_rtmpdump_installed, download_rtmpdump_stream
    )
    assert has_rtmpdump_installed(), 'RTMPDump not installed.'
    download_rtmpdump_stream(url,  title, ext, params, output_dir)


def download_url_ffmpeg(
    url, title, ext, params={}, total_size=0, output_dir='.', refer=None,
    merge=True, faker=False, stream=True
):
    assert url
    if dry_run:
        print_user_agent(faker=faker)
        print('Real URL:\n%s\n' % [url])
        if params.get('-y', False):  # None or unset ->False
            print('Real Playpath:\n%s\n' % [params.get('-y')])
        return

    if player:
        launch_player(player, [url])
        return

    from .processor.ffmpeg import has_ffmpeg_installed, ffmpeg_download_stream
    assert has_ffmpeg_installed(), 'FFmpeg not installed.'

    global output_filename
    if output_filename:
        dotPos = output_filename.rfind('.')
        if dotPos > 0:
            title = output_filename[:dotPos]
            ext = output_filename[dotPos+1:]
        else:
            title = output_filename

    title = tr(get_filename(title))

    ffmpeg_download_stream(url, title, ext, params, output_dir, stream=stream)


def playlist_not_supported(name):
    def f(*args, **kwargs):
        raise NotImplementedError('Playlist is not supported for ' + name)
    return f


def print_info(site_info, title, type, size, **kwargs):
    if json_output:
        json_output_.print_info(
            site_info=site_info, title=title, type=type, size=size
        )
        return
    if type:
        type = type.lower()
    if type in ['3gp']:
        type = 'video/3gpp'
    elif type in ['asf', 'wmv']:
        type = 'video/x-ms-asf'
    elif type in ['flv', 'f4v']:
        type = 'video/x-flv'
    elif type in ['mkv']:
        type = 'video/x-matroska'
    elif type in ['mp3']:
        type = 'audio/mpeg'
    elif type in ['mp4']:
        type = 'video/mp4'
    elif type in ['mov']:
        type = 'video/quicktime'
    elif type in ['ts']:
        type = 'video/MP2T'
    elif type in ['webm']:
        type = 'video/webm'

    elif type in ['jpg']:
        type = 'image/jpeg'
    elif type in ['png']:
        type = 'image/png'
    elif type in ['gif']:
        type = 'image/gif'

    if type in ['video/3gpp']:
        type_info = '3GPP multimedia file (%s)' % type
    elif type in ['video/x-flv', 'video/f4v']:
        type_info = 'Flash video (%s)' % type
    elif type in ['video/mp4', 'video/x-m4v']:
        type_info = 'MPEG-4 video (%s)' % type
    elif type in ['video/MP2T']:
        type_info = 'MPEG-2 transport stream (%s)' % type
    elif type in ['video/webm']:
        type_info = 'WebM video (%s)' % type
    # elif type in ['video/ogg']:
    #    type_info = 'Ogg video (%s)' % type
    elif type in ['video/quicktime']:
        type_info = 'QuickTime video (%s)' % type
    elif type in ['video/x-matroska']:
        type_info = 'Matroska video (%s)' % type
    # elif type in ['video/x-ms-wmv']:
    #    type_info = 'Windows Media video (%s)' % type
    elif type in ['video/x-ms-asf']:
        type_info = 'Advanced Systems Format (%s)' % type
    # elif type in ['video/mpeg']:
    #    type_info = 'MPEG video (%s)' % type
    elif type in ['audio/mp4', 'audio/m4a']:
        type_info = 'MPEG-4 audio (%s)' % type
    elif type in ['audio/mpeg']:
        type_info = 'MP3 (%s)' % type
    elif type in ['audio/wav', 'audio/wave', 'audio/x-wav']:
        type_info = 'Waveform Audio File Format ({})'.format(type)

    elif type in ['image/jpeg']:
        type_info = 'JPEG Image (%s)' % type
    elif type in ['image/png']:
        type_info = 'Portable Network Graphics (%s)' % type
    elif type in ['image/gif']:
        type_info = 'Graphics Interchange Format (%s)' % type
    elif type in ['m3u8']:
        if 'm3u8_type' in kwargs:
            if kwargs['m3u8_type'] == 'master':
                type_info = 'M3U8 Master {}'.format(type)
        else:
            type_info = 'M3U8 Playlist {}'.format(type)
    else:
        type_info = 'Unknown type (%s)' % type

    maybe_print('Site:      ', site_info)
    maybe_print('Title:     ', unescape_html(tr(title)))
    print('Type:      ', type_info)
    if type != 'm3u8':
        print(
            'Size:      ', round(size / 1048576, 2),
            'MiB (' + str(size) + ' Bytes)'
        )
    if type == 'm3u8' and 'm3u8_url' in kwargs:
        print('M3U8 Url:   {}'.format(kwargs['m3u8_url']))
    print()


def mime_to_container(mime):
    mapping = {
        'video/3gpp': '3gp',
        'video/mp4': 'mp4',
        'video/webm': 'webm',
        'video/x-flv': 'flv',
    }
    if mime in mapping:
        return mapping[mime]
    else:
        return mime.split('/')[1]


def parse_host(host):
    """Parses host name and port number from a string.
    """
    if re.match(r'^(\d+)$', host) is not None:
        return ("0.0.0.0", int(host))
    if re.match(r'^(\w+)://', host) is None:
        host = "//" + host
    o = parse.urlparse(host)
    hostname = o.hostname or "0.0.0.0"
    port = o.port or 0
    return (hostname, port)


def set_proxy(proxy):
    proxy_handler = request.ProxyHandler({
        'http': '%s:%s' % proxy,
        'https': '%s:%s' % proxy,
    })
    opener = request.build_opener(proxy_handler)
    request.install_opener(opener)


def unset_proxy():
    proxy_handler = request.ProxyHandler({})
    opener = request.build_opener(proxy_handler)
    request.install_opener(opener)


# DEPRECATED in favor of set_proxy() and unset_proxy()
def set_http_proxy(proxy):
    if proxy is None:  # Use system default setting
        proxy_support = request.ProxyHandler()
    elif proxy == '':  # Don't use any proxy
        proxy_support = request.ProxyHandler({})
    else:  # Use proxy
        proxy_support = request.ProxyHandler(
            {'http': '%s' % proxy, 'https': '%s' % proxy}
        )
    opener = request.build_opener(proxy_support)
    request.install_opener(opener)


def print_more_compatible(*args, **kwargs):
    import builtins as __builtin__
    """Overload default print function as py (<3.3) does not support 'flush' keyword.
    Although the function name can be same as print to get itself overloaded automatically,
    I'd rather leave it with a different name and only overload it when importing to make less confusion.
    """
    # nothing happens on py3.3 and later
    if sys.version_info[:2] >= (3, 3):
        return __builtin__.print(*args, **kwargs)

    # in lower pyver (e.g. 3.2.x), remove 'flush' keyword and flush it as requested
    doFlush = kwargs.pop('flush', False)
    ret = __builtin__.print(*args, **kwargs)
    if doFlush:
        kwargs.get('file', sys.stdout).flush()
    return ret


def download_main(download, download_playlist, urls, playlist, **kwargs):
    for url in urls:
        if re.match(r'https?://', url) is None:
            url = 'http://' + url

        if m3u8:
            if output_filename:
                title = output_filename
            else:
                title = "m3u8file"
            download_url_ffmpeg(url=url, title=title,ext = 'mp4',output_dir = '.')
        elif playlist:
            download_playlist(url, **kwargs)
        else:
            download(url, **kwargs)


def load_cookies(cookiefile):
    global cookies
    if cookiefile.endswith('.txt'):
        # MozillaCookieJar treats prefix '#HttpOnly_' as comments incorrectly!
        # do not use its load()
        # see also:
        #   - https://docs.python.org/3/library/http.cookiejar.html#http.cookiejar.MozillaCookieJar
        #   - https://github.com/python/cpython/blob/4b219ce/Lib/http/cookiejar.py#L2014
        #   - https://curl.haxx.se/libcurl/c/CURLOPT_COOKIELIST.html#EXAMPLE
        #cookies = cookiejar.MozillaCookieJar(cookiefile)
        #cookies.load()
        from http.cookiejar import Cookie
        cookies = cookiejar.MozillaCookieJar()
        now = time.time()
        ignore_discard, ignore_expires = False, False
        with open(cookiefile, 'r', encoding='utf-8') as f:
            for line in f:
                # last field may be absent, so keep any trailing tab
                if line.endswith("\n"): line = line[:-1]

                # skip comments and blank lines XXX what is $ for?
                if (line.strip().startswith(("#", "$")) or
                    line.strip() == ""):
                    if not line.strip().startswith('#HttpOnly_'):  # skip for #HttpOnly_
                        continue

                domain, domain_specified, path, secure, expires, name, value = \
                        line.split("\t")
                secure = (secure == "TRUE")
                domain_specified = (domain_specified == "TRUE")
                if name == "":
                    # cookies.txt regards 'Set-Cookie: foo' as a cookie
                    # with no name, whereas http.cookiejar regards it as a
                    # cookie with no value.
                    name = value
                    value = None

                initial_dot = domain.startswith(".")
                if not line.strip().startswith('#HttpOnly_'):  # skip for #HttpOnly_
                    assert domain_specified == initial_dot

                discard = False
                if expires == "":
                    expires = None
                    discard = True

                # assume path_specified is false
                c = Cookie(0, name, value,
                           None, False,
                           domain, domain_specified, initial_dot,
                           path, False,
                           secure,
                           expires,
                           discard,
                           None,
                           None,
                           {})
                if not ignore_discard and c.discard:
                    continue
                if not ignore_expires and c.is_expired(now):
                    continue
                cookies.set_cookie(c)

    elif cookiefile.endswith(('.sqlite', '.sqlite3')):
        import sqlite3, shutil, tempfile
        temp_dir = tempfile.gettempdir()
        temp_cookiefile = os.path.join(temp_dir, 'temp_cookiefile.sqlite')
        shutil.copy2(cookiefile, temp_cookiefile)

        cookies = cookiejar.MozillaCookieJar()
        con = sqlite3.connect(temp_cookiefile)
        cur = con.cursor()
        cur.execute("""SELECT host, path, isSecure, expiry, name, value
        FROM moz_cookies""")
        for item in cur.fetchall():
            c = cookiejar.Cookie(
                0, item[4], item[5], None, False, item[0],
                item[0].startswith('.'), item[0].startswith('.'),
                item[1], False, item[2], item[3], item[3] == '', None,
                None, {},
            )
            cookies.set_cookie(c)

    else:
        log.e('[error] unsupported cookies format')
        # TODO: Chromium Cookies
        # SELECT host_key, path, secure, expires_utc, name, encrypted_value
        # FROM cookies
        # http://n8henrie.com/2013/11/use-chromes-cookies-for-easier-downloading-with-python-requests/


def set_socks_proxy(proxy):
    try:
        import socks
        if '@' in proxy:
            proxy_info = proxy.split("@")
            socks_proxy_addrs = proxy_info[1].split(':')
            socks_proxy_auth = proxy_info[0].split(":")
            socks.set_default_proxy(
                socks.SOCKS5,
                socks_proxy_addrs[0],
                int(socks_proxy_addrs[1]),
                True,
                socks_proxy_auth[0],
                socks_proxy_auth[1]
            )
        else:
           socks_proxy_addrs = proxy.split(':')
           socks.set_default_proxy(
               socks.SOCKS5,
               socks_proxy_addrs[0],
               int(socks_proxy_addrs[1]),
           )
        socket.socket = socks.socksocket

        def getaddrinfo(*args):
            return [
                (socket.AF_INET, socket.SOCK_STREAM, 6, '', (args[0], args[1]))
            ]
        socket.getaddrinfo = getaddrinfo
    except ImportError:
        log.w(
            'Error importing PySocks library, socks proxy ignored.'
            'In order to use use socks proxy, please install PySocks.'
        )


def script_main(download, download_playlist, **kwargs):
    logging.basicConfig(format='[%(levelname)s] %(message)s')

    def print_version():
        version = get_version(
            kwargs['repo_path'] if 'repo_path' in kwargs else __version__
        )
        log.i(
            'version {}, a tiny downloader that scrapes the web.'.format(
                version
            )
        )

    parser = argparse.ArgumentParser(
        prog='you-get',
        usage='you-get [OPTION]... URL...',
        description='A tiny downloader that scrapes the web',
        add_help=False,
    )
    parser.add_argument(
        '-V', '--version', action='store_true',
        help='Print version and exit'
    )
    parser.add_argument(
        '-h', '--help', action='store_true',
        help='Print this help message and exit'
    )

    dry_run_grp = parser.add_argument_group(
        'Dry-run options', '(no actual downloading)'
    )
    dry_run_grp = dry_run_grp.add_mutually_exclusive_group()
    dry_run_grp.add_argument(
        '-i', '--info', action='store_true', help='Print extracted information'
    )
    dry_run_grp.add_argument(
        '-u', '--url', action='store_true',
        help='Print extracted information with URLs'
    )
    dry_run_grp.add_argument(
        '--json', action='store_true',
        help='Print extracted URLs in JSON format'
    )

    download_grp = parser.add_argument_group('Download options')
    download_grp.add_argument(
        '-n', '--no-merge', action='store_true', default=False,
        help='Do not merge video parts'
    )
    download_grp.add_argument(
        '--no-caption', action='store_true',
        help='Do not download captions (subtitles, lyrics, danmaku, ...)'
    )
    download_grp.add_argument(
        '--post', '--postfix', dest='postfix', action='store_true', default=False,
        help='Postfix downloaded files with unique identifiers'
    )
    download_grp.add_argument(
        '--pre', '--prefix', dest='prefix', metavar='PREFIX', default=None,
        help='Prefix downloaded files with string'
    )
    download_grp.add_argument(
        '-f', '--force', action='store_true', default=False,
        help='Force overwriting existing files'
    )
    download_grp.add_argument(
        '--skip-existing-file-size-check', action='store_true', default=False,
        help='Skip existing file without checking file size'
    )
    download_grp.add_argument(
        '-F', '--format', metavar='STREAM_ID',
        help='Set video format to STREAM_ID'
    )
    download_grp.add_argument(
        '-O', '--output-filename', metavar='FILE', help='Set output filename'
    )
    download_grp.add_argument(
        '-o', '--output-dir', metavar='DIR', default='.',
        help='Set output directory'
    )
    download_grp.add_argument(
        '-p', '--player', metavar='PLAYER',
        help='Stream extracted URL to a PLAYER'
    )
    download_grp.add_argument(
        '-c', '--cookies', metavar='COOKIES_FILE',
        help='Load cookies.txt or cookies.sqlite'
    )
    download_grp.add_argument(
        '-t', '--timeout', metavar='SECONDS', type=int, default=600,
        help='Set socket timeout'
    )
    download_grp.add_argument(
        '-d', '--debug', action='store_true',
        help='Show traceback and other debug info'
    )
    download_grp.add_argument(
        '-I', '--input-file', metavar='FILE', type=argparse.FileType('r'),
        help='Read non-playlist URLs from FILE'
    )
    download_grp.add_argument(
        '-P', '--password', help='Set video visit password to PASSWORD'
    )
    download_grp.add_argument(
        '-l', '--playlist', action='store_true',
        help='Prefer to download a playlist'
    )

    playlist_grp = parser.add_argument_group('Playlist optional options')
    playlist_grp.add_argument(
        '--first', metavar='FIRST',
        help='the first number'
    )
    playlist_grp.add_argument(
        '--last', metavar='LAST',
        help='the last number'
    )
    playlist_grp.add_argument(
        '--size', '--page-size', metavar='PAGE_SIZE',
        help='the page size number'
    )

    download_grp.add_argument(
        '-a', '--auto-rename', action='store_true', default=False,
        help='Auto rename same name different files'
    )

    download_grp.add_argument(
        '-k', '--insecure', action='store_true', default=False,
        help='ignore ssl errors'
    )

    proxy_grp = parser.add_argument_group('Proxy options')
    proxy_grp = proxy_grp.add_mutually_exclusive_group()
    proxy_grp.add_argument(
        '-x', '--http-proxy', metavar='HOST:PORT',
        help='Use an HTTP proxy for downloading'
    )
    proxy_grp.add_argument(
        '-y', '--extractor-proxy', metavar='HOST:PORT',
        help='Use an HTTP proxy for extracting only'
    )
    proxy_grp.add_argument(
        '--no-proxy', action='store_true', help='Never use a proxy'
    )
    proxy_grp.add_argument(
        '-s', '--socks-proxy', metavar='HOST:PORT or USERNAME:PASSWORD@HOST:PORT',
        help='Use an SOCKS5 proxy for downloading'
    )

    download_grp.add_argument('--stream', help=argparse.SUPPRESS)
    download_grp.add_argument('--itag', help=argparse.SUPPRESS)

    download_grp.add_argument('-m', '--m3u8', action='store_true', default=False,
        help = 'download video using an m3u8 url')


    parser.add_argument('URL', nargs='*', help=argparse.SUPPRESS)

    args = parser.parse_args()

    if args.help:
        print_version()
        parser.print_help()
        sys.exit()
    if args.version:
        print_version()
        sys.exit()

    if args.debug:
        # Set level of root logger to DEBUG
        logging.getLogger().setLevel(logging.DEBUG)

    global force
    global skip_existing_file_size_check
    global dry_run
    global json_output
    global player
    global extractor_proxy
    global output_filename
    global auto_rename
    global insecure
    global m3u8
    global postfix
    global prefix
    output_filename = args.output_filename
    extractor_proxy = args.extractor_proxy

    info_only = args.info
    if args.force:
        force = True
    if args.skip_existing_file_size_check:
        skip_existing_file_size_check = True
    if args.auto_rename:
        auto_rename = True
    if args.url:
        dry_run = True
    if args.json:
        json_output = True
        # to fix extractors not use VideoExtractor
        dry_run = True
        info_only = False

    if args.cookies:
        load_cookies(args.cookies)

    if args.m3u8:
        m3u8 = True

    caption = True
    stream_id = args.format or args.stream or args.itag
    if args.no_caption:
        caption = False
    if args.player:
        player = args.player
        caption = False

    if args.insecure:
        # ignore ssl
        insecure = True

    postfix = args.postfix
    prefix = args.prefix

    if args.no_proxy:
        set_http_proxy('')
    else:
        set_http_proxy(args.http_proxy)
    if args.socks_proxy:
        set_socks_proxy(args.socks_proxy)

    URLs = []
    if args.input_file:
        logging.debug('you are trying to load urls from %s', args.input_file)
        if args.playlist:
            log.e(
                "reading playlist from a file is unsupported "
                "and won't make your life easier"
            )
            sys.exit(2)
        URLs.extend(args.input_file.read().splitlines())
        args.input_file.close()
    URLs.extend(args.URL)

    if not URLs:
        parser.print_help()
        sys.exit()

    socket.setdefaulttimeout(args.timeout)

    try:
        extra = {'args': args}
        if extractor_proxy:
            extra['extractor_proxy'] = extractor_proxy
        if stream_id:
            extra['stream_id'] = stream_id
        download_main(
            download, download_playlist,
            URLs, args.playlist,
            output_dir=args.output_dir, merge=not args.no_merge,
            info_only=info_only, json_output=json_output, caption=caption,
            password=args.password,
            **extra
        )
    except KeyboardInterrupt:
        if args.debug:
            raise
        else:
            sys.exit(1)
    except UnicodeEncodeError:
        if args.debug:
            raise
        log.e(
            '[error] oops, the current environment does not seem to support '
            'Unicode.'
        )
        log.e('please set it to a UTF-8-aware locale first,')
        log.e(
            'so as to save the video (with some Unicode characters) correctly.'
        )
        log.e('you can do it like this:')
        log.e('    (Windows)    % chcp 65001 ')
        log.e('    (Linux)      $ LC_CTYPE=en_US.UTF-8')
        sys.exit(1)
    except Exception:
        if not args.debug:
            log.e('[error] oops, something went wrong.')
            log.e(
                'don\'t panic, c\'est la vie. please try the following steps:'
            )
            log.e('  (1) Rule out any network problem.')
            log.e('  (2) Make sure you-get is up-to-date.')
            log.e('  (3) Check if the issue is already known, on')
            log.e('        https://github.com/soimort/you-get/wiki/Known-Bugs')
            log.e('        https://github.com/soimort/you-get/issues')
            log.e('  (4) Run the command with \'--debug\' option,')
            log.e('      and report this issue with the full output.')
        else:
            print_version()
            log.i(args)
            raise
        sys.exit(1)


def google_search(url):
    keywords = r1(r'https?://(.*)', url)
    url = 'https://www.google.com/search?tbm=vid&q=%s' % parse.quote(keywords)
    page = get_content(url, headers=fake_headers)
    videos = re.findall(
        r'(https://www\.youtube\.com/watch\?v=[\w-]+)', page
    )
    print('Best matched result:')
    return(videos[0])


def url_to_module(url):
    try:
        video_host = r1(r'https?://([^/]+)/', url)
        video_url = r1(r'https?://[^/]+(.*)', url)
        assert video_host and video_url
    except AssertionError:
        url = google_search(url)
        video_host = r1(r'https?://([^/]+)/', url)
        video_url = r1(r'https?://[^/]+(.*)', url)

    if video_host.endswith('.com.cn') or video_host.endswith('.ac.cn'):
        video_host = video_host[:-3]
    domain = r1(r'(\.[^.]+\.[^.]+)$', video_host) or video_host
    assert domain, 'unsupported url: ' + url

    # all non-ASCII code points must be quoted (percent-encoded UTF-8)
    url = ''.join([ch if ord(ch) in range(128) else parse.quote(ch) for ch in url])
    video_host = r1(r'https?://([^/]+)/', url)
    video_url = r1(r'https?://[^/]+(.*)', url)

    k = r1(r'([^.]+)', domain)
    if k in SITES:
        return (
            import_module('.'.join(['you_get', 'extractors', SITES[k]])),
            url
        )
    else:
        try:
            try:
                location = get_location(url) # t.co isn't happy with fake_headers
            except:
                location = get_location(url, headers=fake_headers)
        except:
            location = get_location(url, headers=fake_headers, get_method='GET')

        if location and location != url and not location.startswith('/'):
            return url_to_module(location)
        else:
            return import_module('you_get.extractors.universal'), url


def any_download(url, **kwargs):
    m, url = url_to_module(url)
    m.download(url, **kwargs)


def any_download_playlist(url, **kwargs):
    m, url = url_to_module(url)
    m.download_playlist(url, **kwargs)


def main(**kwargs):
    script_main(any_download, any_download_playlist, **kwargs)


================================================
FILE: src/you_get/extractor.py
================================================
#!/usr/bin/env python

from .common import match1, maybe_print, download_urls, get_filename, parse_host, set_proxy, unset_proxy, get_content, dry_run, player
from .common import print_more_compatible as print
from .util import log
from . import json_output
import os
import sys

class Extractor():
    def __init__(self, *args):
        self.url = None
        self.title = None
        self.vid = None
        self.streams = {}
        self.streams_sorted = []

        if args:
            self.url = args[0]

class VideoExtractor():
    def __init__(self, *args):
        self.url = None
        self.title = None
        self.vid = None
        self.m3u8_url = None
        self.streams = {}
        self.streams_sorted = []
        self.audiolang = None
        self.password_protected = False
        self.dash_streams = {}
        self.caption_tracks = {}
        self.out = False
        self.ua = None
        self.referer = None
        self.danmaku = None
        self.lyrics = None

        if args:
            self.url = args[0]

    def download_by_url(self, url, **kwargs):
        self.url = url
        self.vid = None

        if 'extractor_proxy' in kwargs and kwargs['extractor_proxy']:
            set_proxy(parse_host(kwargs['extractor_proxy']))
        self.prepare(**kwargs)
        if self.out:
            return
        if 'extractor_proxy' in kwargs and kwargs['extractor_proxy']:
            unset_proxy()

        try:
            self.streams_sorted = [dict([('id', stream_type['id'])] + list(self.streams[stream_type['id']].items())) for stream_type in self.__class__.stream_types if stream_type['id'] in self.streams]
        except:
            self.streams_sorted = [dict([('itag', stream_type['itag'])] + list(self.streams[stream_type['itag']].items())) for stream_type in self.__class__.stream_types if stream_type['itag'] in self.streams]

        self.extract(**kwargs)

        self.download(**kwargs)

    def download_by_vid(self, vid, **kwargs):
        self.url = None
        self.vid = vid

        if 'extractor_proxy' in kwargs and kwargs['extractor_proxy']:
            set_proxy(parse_host(kwargs['extractor_proxy']))
        self.prepare(**kwargs)
        if 'extractor_proxy' in kwargs and kwargs['extractor_proxy']:
            unset_proxy()

        try:
            self.streams_sorted = [dict([('id', stream_type['id'])] + list(self.streams[stream_type['id']].items())) for stream_type in self.__class__.stream_types if stream_type['id'] in self.streams]
        except:
            self.streams_sorted = [dict([('itag', stream_type['itag'])] + list(self.streams[stream_type['itag']].items())) for stream_type in self.__class__.stream_types if stream_type['itag'] in self.streams]

        self.extract(**kwargs)

        self.download(**kwargs)

    def prepare(self, **kwargs):
        pass
        #raise NotImplementedError()

    def extract(self, **kwargs):
        pass
        #raise NotImplementedError()

    def p_stream(self, stream_id):
        if stream_id in self.streams:
            stream = self.streams[stream_id]
        else:
            stream = self.dash_streams[stream_id]

        if 'itag' in stream:
            print("    - itag:          %s" % log.sprint(stream_id, log.NEGATIVE))
        else:
            print("    - format:        %s" % log.sprint(stream_id, log.NEGATIVE))

        if 'container' in stream:
            print("      container:     %s" % stream['container'])

        if 'video_profile' in stream:
            maybe_print("      video-profile: %s" % stream['video_profile'])

        if 'quality' in stream:
            print("      quality:       %s" % stream['quality'])

        if 'size' in stream and 'container' in stream and stream['container'].lower() != 'm3u8':
            if stream['size'] != float('inf')  and stream['size'] != 0:
                print("      size:          %s MiB (%s bytes)" % (round(stream['size'] / 1048576, 1), stream['size']))

        if 'm3u8_url' in stream:
            print("      m3u8_url:      {}".format(stream['m3u8_url']))

        if 'itag' in stream:
            print("    # download-with: %s" % log.sprint("you-get --itag=%s [URL]" % stream_id, log.UNDERLINE))
        else:
            print("    # download-with: %s" % log.sprint("you-get --format=%s [URL]" % stream_id, log.UNDERLINE))

        print()

    def p_i(self, stream_id):
        if stream_id in self.streams:
            stream = self.streams[stream_id]
        else:
            stream = self.dash_streams[stream_id]

        maybe_print("    - title:         %s" % self.title)
        print("       size:         %s MiB (%s bytes)" % (round(stream['size'] / 1048576, 1), stream['size']))
        print("        url:         %s" % self.url)
        print()

        sys.stdout.flush()

    def p(self, stream_id=None):
        maybe_print("site:                %s" % self.__class__.name)
        maybe_print("title:               %s" % self.title)
        if stream_id:
            # Print the stream
            print("stream:")
            self.p_stream(stream_id)

        elif stream_id is None:
            # Print stream with best quality
            print("stream:              # Best quality")
            stream_id = self.streams_sorted[0]['id'] if 'id' in self.streams_sorted[0] else self.streams_sorted[0]['itag']
            self.p_stream(stream_id)

        elif stream_id == []:
            print("streams:             # Available quality and codecs")
            # Print DASH streams
            if self.dash_streams:
                print("    [ DASH ] %s" % ('_' * 36))
                itags = sorted(self.dash_streams,
                               key=lambda i: -self.dash_streams[i]['size'])
                for stream in itags:
                    self.p_stream(stream)
            # Print all other available streams
            if self.streams_sorted:
                print("    [ DEFAULT ] %s" % ('_' * 33))
                for stream in self.streams_sorted:
                    self.p_stream(stream['id'] if 'id' in stream else stream['itag'])

        if self.audiolang:
            print("audio-languages:")
            for i in self.audiolang:
                print("    - lang:          {}".format(i['lang']))
                print("      download-url:  {}\n".format(i['url']))

        sys.stdout.flush()

    def p_playlist(self, stream_id=None):
        maybe_print("site:                %s" % self.__class__.name)
        print("playlist:            %s" % self.title)
        print("videos:")

    def download(self, **kwargs):
        if 'json_output' in kwargs and kwargs['json_output']:
            json_output.output(self)
        elif 'info_only' in kwargs and kwargs['info_only']:
            if 'stream_id' in kwargs and kwargs['stream_id']:
                # Display the stream
                stream_id = kwargs['stream_id']
                if 'index' not in kwargs:
                    self.p(stream_id)
                else:
                    self.p_i(stream_id)
            else:
                # Display all available streams
                if 'index' not in kwargs:
                    self.p([])
                else:
                    stream_id = self.streams_sorted[0]['id'] if 'id' in self.streams_sorted[0] else self.streams_sorted[0]['itag']
                    self.p_i(stream_id)

        else:
            if 'stream_id' in kwargs and kwargs['stream_id']:
                # Download the stream
                stream_id = kwargs['stream_id']
            else:
                # Download stream with the best quality
                from .processor.ffmpeg import has_ffmpeg_installed
                if has_ffmpeg_installed() and player is None and self.dash_streams or not self.streams_sorted:
                    #stream_id = list(self.dash_streams)[-1]
                    itags = sorted(self.dash_streams,
                                   key=lambda i: -self.dash_streams[i]['size'])
                    stream_id = itags[0]
                else:
                    stream_id = self.streams_sorted[0]['id'] if 'id' in self.streams_sorted[0] else self.streams_sorted[0]['itag']

            if 'index' not in kwargs:
                self.p(stream_id)
            else:
                self.p_i(stream_id)

            if stream_id in self.streams:
                urls = self.streams[stream_id]['src']
                ext = self.streams[stream_id]['container']
                total_size = self.streams[stream_id]['size']
            else:
                urls = self.dash_streams[stream_id]['src']
                ext = self.dash_streams[stream_id]['container']
                total_size = self.dash_streams[stream_id]['size']

            if ext == 'm3u8' or ext == 'm4a':
                ext = 'mp4'

            if not urls:
                log.wtf('[Failed] Cannot extract video source.')
            # For legacy main()
            headers = {}
            if self.ua is not None:
                headers['User-Agent'] = self.ua
            if self.referer is not None:
                headers['Referer'] = self.referer
            download_urls(urls, self.title, ext, total_size, headers=headers,
                          output_dir=kwargs['output_dir'],
                          merge=kwargs['merge'],
                          av=stream_id in self.dash_streams,
                          vid=self.vid)

            if 'caption' not in kwargs or not kwargs['caption']:
                print('Skipping captions or danmaku.')
                return

            for lang in self.caption_tracks:
                filename = '%s.%s.srt' % (get_filename(self.title), lang)
                print('Saving %s ... ' % filename, end="", flush=True)
                srt = self.caption_tracks[lang]
                with open(os.path.join(kwargs['output_dir'], filename),
                          'w', encoding='utf-8') as x:
                    x.write(srt)
                print('Done.')

            if self.danmaku is not None and not dry_run:
                filename = '{}.cmt.xml'.format(get_filename(self.title))
                print('Downloading {} ...\n'.format(filename))
                with open(os.path.join(kwargs['output_dir'], filename), 'w', encoding='utf8') as fp:
                    fp.write(self.danmaku)

            if self.lyrics is not None and not dry_run:
                filename = '{}.lrc'.format(get_filename(self.title))
                print('Downloading {} ...\n'.format(filename))
                with open(os.path.join(kwargs['output_dir'], filename), 'w', encoding='utf8') as fp:
                    fp.write(self.lyrics)

            # For main_dev()
            #download_urls(urls, self.title, self.streams[stream_id]['container'], self.streams[stream_id]['size'])
        keep_obj = kwargs.get('keep_obj', False)
        if not keep_obj:
            self.__init__()


================================================
FILE: src/you_get/extractors/acfun.py
================================================
#!/usr/bin/env python

from ..common import *
from ..extractor import VideoExtractor

class AcFun(VideoExtractor):
    name = "AcFun"

    stream_types = [
        {'id': '2160P', 'qualityType': '2160p'},
        {'id': '1080P60', 'qualityType': '1080p60'},
        {'id': '720P60', 'qualityType': '720p60'},
        {'id': '1080P+', 'qualityType': '1080p+'},
        {'id': '1080P', 'qualityType': '1080p'},
        {'id': '720P', 'qualityType': '720p'},
        {'id': '540P', 'qualityType': '540p'},
        {'id': '360P', 'qualityType': '360p'}
    ]    

    def prepare(self, **kwargs):
        assert re.match(r'https?://[^\.]*\.*acfun\.[^\.]+/(\D|bangumi)/\D\D(\d+)', self.url)

        if re.match(r'https?://[^\.]*\.*acfun\.[^\.]+/\D/\D\D(\d+)', self.url):
            html = get_content(self.url, headers=fake_headers)
            json_text = match1(html, r"(?s)videoInfo\s*=\s*(\{.*?\});")
            json_data = json.loads(json_text)
            vid = json_data.get('currentVideoInfo').get('id')
            up = json_data.get('user').get('name')
            self.title = json_data.get('title')
            video_list = json_data.get('videoList')
            if len(video_list) > 1:
                self.title += " - " + [p.get('title') for p in video_list if p.get('id') == vid][0]
            currentVideoInfo = json_data.get('currentVideoInfo')

        elif re.match(r"https?://[^\.]*\.*acfun\.[^\.]+/bangumi/aa(\d+)", self.url):
            html = get_content(self.url, headers=fake_headers)
            tag_script = match1(html, r'<script>\s*window\.pageInfo([^<]+)</script>')
            json_text = tag_script[tag_script.find('{') : tag_script.find('};') + 1]
            json_data = json.loads(json_text)
            self.title = json_data['bangumiTitle'] + " " + json_data['episodeName'] + " " + json_data['title']
            vid = str(json_data['videoId'])
            up = "acfun"
            currentVideoInfo = json_data.get('currentVideoInfo')

        else:
            raise NotImplementedError()         

        if 'ksPlayJson' in currentVideoInfo:
            durationMillis = currentVideoInfo['durationMillis']
            ksPlayJson = ksPlayJson = json.loads( currentVideoInfo['ksPlayJson'] )
            representation = ksPlayJson.get('adaptationSet')[0].get('representation')
            stream_list = representation

        for stream in stream_list:
            m3u8_url = stream["url"]
            size = durationMillis * stream["avgBitrate"] / 8
            # size = float('inf')
            container = 'mp4'
            stream_id = stream["qualityLabel"]
            quality = stream["qualityType"]
            
            stream_data = dict(src=m3u8_url, size=size, container=container, quality=quality)
            self.streams[stream_id] = stream_data

        assert self.title and m3u8_url
        self.title = unescape_html(self.title)
        self.title = escape_file_path(self.title)
        p_title = r1('active">([^<]+)', html)
        self.title = '%s (%s)' % (self.title, up)
        if p_title:
            self.title = '%s - %s' % (self.title, p_title)       


    def download(self, **kwargs):
        if 'json_output' in kwargs and kwargs['json_output']:
            json_output.output(self)
        elif 'info_only' in kwargs and kwargs['info_only']:
            if 'stream_id' in kwargs and kwargs['stream_id']:
                # Display the stream
                stream_id = kwargs['stream_id']
                if 'index' not in kwargs:
                    self.p(stream_id)
                else:
                    self.p_i(stream_id)
            else:
                # Display all available streams
                if 'index' not in kwargs:
                    self.p([])
                else:
                    stream_id = self.streams_sorted[0]['id'] if 'id' in self.streams_sorted[0] else self.streams_sorted[0]['itag']
                    self.p_i(stream_id)

        else:
            if 'stream_id' in kwargs and kwargs['stream_id']:
                # Download the stream
                stream_id = kwargs['stream_id']
            else:
                stream_id = self.streams_sorted[0]['id'] if 'id' in self.streams_sorted[0] else self.streams_sorted[0]['itag']

            if 'index' not in kwargs:
                self.p(stream_id)
            else:
                self.p_i(stream_id)
            if stream_id in self.streams:
                url = self.streams[stream_id]['src']
                ext = self.streams[stream_id]['container']
                total_size = self.streams[stream_id]['size']


            if ext == 'm3u8' or ext == 'm4a':
                ext = 'mp4'

            if not url:
                log.wtf('[Failed] Cannot extract video source.')
            # For legacy main()
            headers = {}
            if self.ua is not None:
                headers['User-Agent'] = self.ua
            if self.referer is not None:
                headers['Referer'] = self.referer

            download_url_ffmpeg(url, self.title, ext, output_dir=kwargs['output_dir'], merge=kwargs['merge'])                           

            if 'caption' not in kwargs or not kwargs['caption']:
                print('Skipping captions or danmaku.')
                return

            for lang in self.caption_tracks:
                filename = '%s.%s.srt' % (get_filename(self.title), lang)
                print('Saving %s ... ' % filename, end="", flush=True)
                srt = self.caption_tracks[lang]
                with open(os.path.join(kwargs['output_dir'], filename),
                          'w', encoding='utf-8') as x:
                    x.write(srt)
                print('Done.')

            if self.danmaku is not None and not dry_run:
                filename = '{}.cmt.xml'.format(get_filename(self.title))
                print('Downloading {} ...\n'.format(filename))
                with open(os.path.join(kwargs['output_dir'], filename), 'w', encoding='utf8') as fp:
                    fp.write(self.danmaku)

            if self.lyrics is not None and not dry_run:
                filename = '{}.lrc'.format(get_filename(self.title))
                print('Downloading {} ...\n'.format(filename))
                with open(os.path.join(kwargs['output_dir'], filename), 'w', encoding='utf8') as fp:
                    fp.write(self.lyrics)

            # For main_dev()
            #download_urls(urls, self.title, self.streams[stream_id]['container'], self.streams[stream_id]['size'])
        keep_obj = kwargs.get('keep_obj', False)
        if not keep_obj:
            self.__init__()


    def acfun_download(self, url, output_dir='.', merge=True, info_only=False, **kwargs):
        assert re.match(r'https?://[^\.]*\.*acfun\.[^\.]+/(\D|bangumi)/\D\D(\d+)', url)

        def getM3u8UrlFromCurrentVideoInfo(currentVideoInfo):
            if 'playInfos' in currentVideoInfo:
                return currentVideoInfo['playInfos'][0]['playUrls'][0]
            elif 'ksPlayJson' in currentVideoInfo:
                ksPlayJson = json.loads( currentVideoInfo['ksPlayJson'] )
                representation = ksPlayJson.get('adaptationSet')[0].get('representation')
                reps = []
                for one in representation:
                    reps.append( (one['width']* one['height'], one['url'], one['backupUrl']) )
                return max(reps)[1]


        if re.match(r'https?://[^\.]*\.*acfun\.[^\.]+/\D/\D\D(\d+)', url):
            html = get_content(url, headers=fake_headers)
            json_text = match1(html, r"(?s)videoInfo\s*=\s*(\{.*?\});")
            json_data = json.loads(json_text)
            vid = json_data.get('currentVideoInfo').get('id')
            up = json_data.get('user').get('name')
            title = json_data.get('title')
            video_list = json_data.get('videoList')
            if len(video_list) > 1:
                title += " - " + [p.get('title') for p in video_list if p.get('id') == vid][0]
            currentVideoInfo = json_data.get('currentVideoInfo')
            m3u8_url = getM3u8UrlFromCurrentVideoInfo(currentVideoInfo)
        elif re.match(r'https?://[^\.]*\.*acfun\.[^\.]+/bangumi/aa(\d+)', url):
            html = get_content(url, headers=fake_headers)
            tag_script = match1(html, r'<script>\s*window\.pageInfo([^<]+)</script>')
            json_text = tag_script[tag_script.find('{') : tag_script.find('};') + 1]
            json_data = json.loads(json_text)
            title = json_data['bangumiTitle'] + " " + json_data['episodeName'] + " " + json_data['title']
            vid = str(json_data['videoId'])
            up = "acfun"

            currentVideoInfo = json_data.get('currentVideoInfo')
            m3u8_url = getM3u8UrlFromCurrentVideoInfo(currentVideoInfo)

        else:
            raise NotImplementedError()

        assert title and m3u8_url
        title = unescape_html(title)
        title = escape_file_path(title)
        p_title = r1('active">([^<]+)', html)
        title = '%s (%s)' % (title, up)
        if p_title:
            title = '%s - %s' % (title, p_title)

        print_info(site_info, title, 'm3u8', float('inf'))
        if not info_only:
            download_url_ffmpeg(m3u8_url, title, 'mp4', output_dir=output_dir, merge=merge)

site = AcFun()
site_info = "AcFun.cn"
download = site.download_by_url
download_playlist = playlist_not_supported('acfun')


================================================
FILE: src/you_get/extractors/alive.py
================================================
#!/usr/bin/env python

__all__ = ['alive_download']

from ..common import *

def alive_download(url, output_dir = '.', merge = True, info_only = False, **kwargs):
    html = get_html(url)
    
    title = r1(r'<meta property="og:title" content="([^"]+)"', html)
    
    url = r1(r'file: "(http://alive[^"]+)"', html)
    type, ext, size = url_info(url)
    
    print_info(site_info, title, type, size)
    if not info_only:
        download_urls([url], title, ext, size, output_dir, merge = merge)

site_info = "Alive.in.th"
download = alive_download
download_playlist = playlist_not_supported('alive')


================================================
FILE: src/you_get/extractors/archive.py
================================================
#!/usr/bin/env python

__all__ = ['archive_download']

from ..common import *

def archive_download(url, output_dir='.', merge=True, info_only=False, **kwargs):
    html = get_html(url)
    title = r1(r'<meta property="og:title" content="([^"]*)"', html)
    source = r1(r'<meta property="og:video" content="([^"]*)"', html)
    mime, ext, size = url_info(source)

    print_info(site_info, title, mime, size)
    if not info_only:
        download_urls([source], title, ext, size, output_dir, merge=merge)

site_info = "Archive.org"
download = archive_download
download_playlist = playlist_not_supported('archive')


================================================
FILE: src/you_get/extractors/baidu.py
================================================
#!/usr/bin/env python
# -*- coding: utf-8 -*-

__all__ = ['baidu_download']

from ..common import *
from .embed import *
from .universal import *


def baidu_get_song_data(sid):
    data = json.loads(get_html(
        'http://music.baidu.com/data/music/fmlink?songIds=%s' % sid, faker=True))['data']

    if data['xcode'] != '':
        # inside china mainland
        return data['songList'][0]
    else:
        # outside china mainland
        return None


def baidu_get_song_url(data):
    return data['songLink']


def baidu_get_song_artist(data):
    return data['artistName']


def baidu_get_song_album(data):
    return data['albumName']


def baidu_get_song_title(data):
    return data['songName']


def baidu_get_song_lyric(data):
    lrc = data['lrcLink']
    return "http://music.baidu.com%s" % lrc if lrc else None


def baidu_download_song(sid, output_dir='.', merge=True, info_only=False):
    data = baidu_get_song_data(sid)
    if data is not None:
        url = baidu_get_song_url(data)
        title = baidu_get_song_title(data)
        artist = baidu_get_song_artist(data)
        album = baidu_get_song_album(data)
        lrc = baidu_get_song_lyric(data)
        file_name = "%s - %s - %s" % (title, album, artist)
    else:
        html = get_html("http://music.baidu.com/song/%s" % sid)
        url = r1(r'data_url="([^"]+)"', html)
        title = r1(r'data_name="([^"]+)"', html)
        file_name = title

    type, ext, size = url_info(url, faker=True)
    print_info(site_info, title, type, size)
    if not info_only:
        download_urls([url], file_name, ext, size,
                      output_dir, merge=merge, faker=True)

    try:
        type, ext, size = url_info(lrc, faker=True)
        print_info(site_info, title, type, size)
        if not info_only:
            download_urls([lrc], file_name, ext, size, output_dir, faker=True)
    except:
        pass


def baidu_download_album(aid, output_dir='.', merge=True, info_only=False):
    html = get_html('http://music.baidu.com/album/%s' % aid, faker=True)
    album_name = r1(r'<h2 class="album-name">(.+?)<\/h2>', html)
    artist = r1(r'<span class="author_list" title="(.+?)">', html)
    output_dir = '%s/%s - %s' % (output_dir, artist, album_name)
    ids = json.loads(r1(r'<span class="album-add" data-adddata=\'(.+?)\'>',
                        html).replace('&quot', '').replace(';', '"'))['ids']
    track_nr = 1
    for id in ids:
        song_data = baidu_get_song_data(id)
        song_url = baidu_get_song_url(song_data)
        song_title = baidu_get_song_title(song_data)
        song_lrc = baidu_get_song_lyric(song_data)
        file_name = '%02d.%s' % (track_nr, song_title)

        type, ext, size = url_info(song_url, faker=True)
        print_info(site_info, song_title, type, size)
        if not info_only:
            download_urls([song_url], file_name, ext, size,
                          output_dir, merge=merge, faker=True)

        if song_lrc:
            type, ext, size = url_info(song_lrc, faker=True)
            print_info(site_info, song_title, type, size)
            if not info_only:
                download_urls([song_lrc], file_name, ext,
                              size, output_dir, faker=True)

        track_nr += 1


def baidu_download(url, output_dir='.', stream_type=None, merge=True, info_only=False, **kwargs):

    if re.match(r'https?://pan.baidu.com', url):
        real_url, title, ext, size = baidu_pan_download(url)
        print_info('BaiduPan', title, ext, size)
        if not info_only:
            print('Hold on...')
            time.sleep(5)
            download_urls([real_url], title, ext, size,
                          output_dir, url, merge=merge, faker=True)
    elif re.match(r'https?://music.baidu.com/album/\d+', url):
        id = r1(r'https?://music.baidu.com/album/(\d+)', url)
        baidu_download_album(id, output_dir, merge, info_only)

    elif re.match(r'https?://music.baidu.com/song/\d+', url):
        id = r1(r'https?://music.baidu.com/song/(\d+)', url)
        baidu_download_song(id, output_dir, merge, info_only)

    elif re.match('https?://tieba.baidu.com/', url):
        try:
            # embedded videos
            embed_download(url, output_dir, merge=merge, info_only=info_only, **kwargs)
        except:
            # images
            html = get_html(url)
            title = r1(r'title:"([^"]+)"', html)

            vhsrc = re.findall(r'"BDE_Image"[^>]+src="([^"]+\.mp4)"', html) or \
                re.findall(r'vhsrc="([^"]+)"', html)
            if len(vhsrc) > 0:
                ext = 'mp4'
                size = url_size(vhsrc[0])
                print_info(site_info, title, ext, size)
                if not info_only:
                    download_urls(vhsrc, title, ext, size,
                                  output_dir=output_dir, merge=False)

            items = re.findall(
                r'//tiebapic.baidu.com/forum/w[^"]+/([^/"]+)', html)
            urls = ['http://tiebapic.baidu.com/forum/pic/item/' + i
                    for i in set(items)]

            # handle albums
            kw = r1(r'kw=([^&]+)', html) or r1(r"kw:'([^']+)'", html)
            tid = r1(r'tid=(\d+)', html) or r1(r"tid:'([^']+)'", html)
            album_url = 'http://tieba.baidu.com/photo/g/bw/picture/list?kw=%s&tid=%s&pe=%s' % (kw, tid, 1000)
            album_info = json.loads(get_content(album_url))
            for i in album_info['data']['pic_list']:
                urls.append(
                    'http://tiebapic.baidu.com/forum/pic/item/' + i['pic_id'] + '.jpg')

            ext = 'jpg'
            size = float('Inf')
            print_info(site_info, title, ext, size)

            if not info_only:
                download_urls(urls, title, ext, size,
                              output_dir=output_dir, merge=False)


def baidu_pan_download(url):
    errno_patt = r'errno":([^"]+),'
    refer_url = ""
    fake_headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Charset': 'UTF-8,*;q=0.5',
        'Accept-Encoding': 'gzip,deflate,sdch',
        'Accept-Language': 'en-US,en;q=0.8',
        'Host': 'pan.baidu.com',
        'Origin': 'http://pan.baidu.com',
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:13.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2500.0 Safari/537.36',
        'Referer': refer_url
    }
    if cookies:
        print('Use user specified cookies')
    else:
        print('Generating cookies...')
        fake_headers['Cookie'] = baidu_pan_gen_cookies(url)
    refer_url = "http://pan.baidu.com"
    html = get_content(url, fake_headers, decoded=True)
    isprotected = False
    sign, timestamp, bdstoken, appid, primary_id, fs_id, uk = baidu_pan_parse(
        html)
    if sign is None:
        if re.findall(r'\baccess-code\b', html):
            isprotected = True
            sign, timestamp, bdstoken, appid, primary_id, fs_id, uk, fake_headers, psk = baidu_pan_protected_share(
                url)
            # raise NotImplementedError("Password required!")
        if isprotected != True:
            raise AssertionError("Share not found or canceled: %s" % url)
    if bdstoken is None:
        bdstoken = ""
    if isprotected != True:
        sign, timestamp, bdstoken, appid, primary_id, fs_id, uk = baidu_pan_parse(
            html)
    request_url = "http://pan.baidu.com/api/sharedownload?sign=%s&timestamp=%s&bdstoken=%s&channel=chunlei&clienttype=0&web=1&app_id=%s" % (
        sign, timestamp, bdstoken, appid)
    refer_url = url
    post_data = {
        'encrypt': 0,
        'product': 'share',
        'uk': uk,
        'primaryid': primary_id,
        'fid_list': '[' + fs_id + ']'
    }
    if isprotected == True:
        post_data['sekey'] = psk
    response_content = post_content(request_url, fake_headers, post_data, True)
    errno = match1(response_content, errno_patt)
    if errno != "0":
        raise AssertionError(
            "Server refused to provide download link! (Errno:%s)" % errno)
    real_url = r1(r'dlink":"([^"]+)"', response_content).replace('\\/', '/')
    title = r1(r'server_filename":"([^"]+)"', response_content)
    assert real_url
    type, ext, size = url_info(real_url, faker=True)
    title_wrapped = json.loads('{"wrapper":"%s"}' % title)
    title = title_wrapped['wrapper']
    logging.debug(real_url)
    return real_url, title, ext, size


def baidu_pan_parse(html):
    sign_patt = r'sign":"([^"]+)"'
    timestamp_patt = r'timestamp":([^"]+),'
    appid_patt = r'app_id":"([^"]+)"'
    bdstoken_patt = r'bdstoken":"([^"]+)"'
    fs_id_patt = r'fs_id":([^"]+),'
    uk_patt = r'uk":([^"]+),'
    errno_patt = r'errno":([^"]+),'
    primary_id_patt = r'shareid":([^"]+),'
    sign = match1(html, sign_patt)
    timestamp = match1(html, timestamp_patt)
    appid = match1(html, appid_patt)
    bdstoken = match1(html, bdstoken_patt)
    fs_id = match1(html, fs_id_patt)
    uk = match1(html, uk_patt)
    primary_id = match1(html, primary_id_patt)
    return sign, timestamp, bdstoken, appid, primary_id, fs_id, uk


def baidu_pan_gen_cookies(url, post_data=None):
    from http import cookiejar
    cookiejar = cookiejar.CookieJar()
    opener = request.build_opener(request.HTTPCookieProcessor(cookiejar))
    resp = opener.open('http://pan.baidu.com')
    if post_data != None:
        resp = opener.open(url, bytes(parse.urlencode(post_data), 'utf-8'))
    return cookjar2hdr(cookiejar)


def baidu_pan_protected_share(url):
    print('This share is protected by password!')
    inpwd = input('Please provide unlock password: ')
    inpwd = inpwd.replace(' ', '').replace('\t', '')
    print('Please wait...')
    post_pwd = {
        'pwd': inpwd,
        'vcode': None,
        'vstr': None
    }
    from http import cookiejar
    import time
    cookiejar = cookiejar.CookieJar()
    opener = request.build_opener(request.HTTPCookieProcessor(cookiejar))
    resp = opener.open('http://pan.baidu.com')
    resp = opener.open(url)
    init_url = resp.geturl()
    verify_url = 'http://pan.baidu.com/share/verify?%s&t=%s&channel=chunlei&clienttype=0&web=1' % (
        init_url.split('?', 1)[1], int(time.time()))
    refer_url = init_url
    fake_headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Charset': 'UTF-8,*;q=0.5',
        'Accept-Encoding': 'gzip,deflate,sdch',
        'Accept-Language': 'en-US,en;q=0.8',
        'Host': 'pan.baidu.com',
        'Origin': 'http://pan.baidu.com',
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:13.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2500.0 Safari/537.36',
        'Referer': refer_url
    }
    opener.addheaders = dict2triplet(fake_headers)
    pwd_resp = opener.open(verify_url, bytes(
        parse.urlencode(post_pwd), 'utf-8'))
    pwd_resp_str = ungzip(pwd_resp.read()).decode('utf-8')
    pwd_res = json.loads(pwd_resp_str)
    if pwd_res['errno'] != 0:
        raise AssertionError(
            'Server returned an error: %s (Incorrect password?)' % pwd_res['errno'])
    pg_resp = opener.open('http://pan.baidu.com/share/link?%s' %
                          init_url.split('?', 1)[1])
    content = ungzip(pg_resp.read()).decode('utf-8')
    sign, timestamp, bdstoken, appid, primary_id, fs_id, uk = baidu_pan_parse(
        content)
    psk = query_cookiejar(cookiejar, 'BDCLND')
    psk = parse.unquote(psk)
    fake_headers['Cookie'] = cookjar2hdr(cookiejar)
    return sign, timestamp, bdstoken, appid, primary_id, fs_id, uk, fake_headers, psk


def cookjar2hdr(cookiejar):
    cookie_str = ''
    for i in cookiejar:
        cookie_str = cookie_str + i.name + '=' + i.value + ';'
    return cookie_str[:-1]


def query_cookiejar(cookiejar, name):
    for i in cookiejar:
        if i.name == name:
            return i.value


def dict2triplet(dictin):
    out_triplet = []
    for i in dictin:
        out_triplet.append((i, dictin[i]))
    return out_triplet

site_info = "Baidu.com"
download = baidu_download
download_playlist = playlist_not_supported("baidu")


================================================
FILE: src/you_get/extractors/bandcamp.py
================================================
#!/usr/bin/env python

__all__ = ['bandcamp_download']

from ..common import *

def bandcamp_download(url, output_dir='.', merge=True, info_only=False, **kwargs):
    html = get_html(url)
    trackinfo = json.loads(r1(r'(\[{"(video_poster_url|video_caption)".*}\]),', html))
    for track in trackinfo:
        track_num = track['track_num']
        title = '%s. %s' % (track_num, track['title'])
        file_url = 'http:' + track['file']['mp3-128']
        mime, ext, size = url_info(file_url)

        print_info(site_info, title, mime, size)
        if not info_only:
            download_urls([file_url], title, ext, size, output_dir, merge=merge)

site_info = "Bandcamp.com"
download = bandcamp_download
download_playlist = bandcamp_download


================================================
FILE: src/you_get/extractors/baomihua.py
================================================
#!/usr/bin/env python

__all__ = ['baomihua_download', 'baomihua_download_by_id']

from ..common import *

import urllib

def baomihua_headers(referer=None, cookie=None):
	# a reasonable UA
	ua = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36'
	headers = {'Accept': '*/*', 'Accept-Language': 'en-US,en;q=0.5', 'User-Agent': ua}
	if referer is not None:
		headers.update({'Referer': referer})
	if cookie is not None:
		headers.update({'Cookie': cookie})
	return headers
	
def baomihua_download_by_id(id, title=None, output_dir='.', merge=True, info_only=False, **kwargs):
    html = get_html('http://play.baomihua.com/getvideourl.aspx?flvid=%s&devicetype=phone_app' % id)
    host = r1(r'host=([^&]*)', html)
    assert host
    type = r1(r'videofiletype=([^&]*)', html)
    assert type
    vid = r1(r'&stream_name=([^&]*)', html)
    assert vid
    dir_str = r1(r'&dir=([^&]*)', html).strip()
    url = "http://%s/%s/%s.%s" % (host, dir_str, vid, type)
    _, ext, size = url_info(url, headers=baomihua_headers())
    print_info(site_info, title, type, size)
    if not info_only:
        download_urls([url], title, ext, size, output_dir, merge = merge, headers=baomihua_headers())

def baomihua_download(url, output_dir='.', merge=True, info_only=False, **kwargs):
    html = get_html(url)
    title = r1(r'<title>(.*)</title>', html)
    assert title
    id = r1(r'flvid\s*=\s*(\d+)', html)
    assert id
    baomihua_download_by_id(id, title, output_dir=output_dir, merge=merge, info_only=info_only)

site_info = "baomihua.com"
download = baomihua_download
download_playlist = playlist_not_supported('baomihua')


================================================
FILE: src/you_get/extractors/bigthink.py
================================================
#!/usr/bin/env python

from ..common import *
from ..extractor import VideoExtractor

import json

class Bigthink(VideoExtractor):
    name = "Bigthink"

    stream_types = [  #this is just a sample. Will make it in prepare()
        # {'id': '1080'},
        # {'id': '720'},
        # {'id': '360'},
        # {'id': '288'},
        # {'id': '190'},
        # {'id': '180'},
        
    ]

    @staticmethod
    def get_streams_by_id(account_number, video_id):
        """
        int, int->list
        
        Get the height of the videos.
        
        Since brightcove is using 3 kinds of links: rtmp, http and https,
        we will be using the HTTPS one to make it secure.
        
        If somehow akamaihd.net is blocked by the Great Fucking Wall,
        change the "startswith https" to http.
        """
        endpoint = 'https://edge.api.brightcove.com/playback/v1/accounts/{account_number}/videos/{video_id}'.format(account_number = account_number, video_id = video_id)
        fake_header_id = fake_headers
        #is this somehow related to the time? Magic....
        fake_header_id['Accept'] ='application/json;pk=BCpkADawqM1cc6wmJQC2tvoXZt4mrB7bFfi6zGt9QnOzprPZcGLE9OMGJwspQwKfuFYuCjAAJ53JdjI8zGFx1ll4rxhYJ255AXH1BQ10rnm34weknpfG-sippyQ'

        html = get_content(endpoint, headers= fake_header_id)
        html_json = json.loads(html)

        link_list = []

        for i in html_json['sources']:
            if 'src' in i:  #to avoid KeyError
                if i['src'].startswith('https'):
                    link_list.append((str(i['height']), i['src']))

        return link_list

    def prepare(self, **kwargs):

        html = get_content(self.url)

        self.title = match1(html, r'<meta property="og:title" content="([^"]*)"')

        account_number = match1(html, r'data-account="(\d+)"')

        video_id = match1(html, r'data-brightcove-id="(\d+)"')
        
        assert account_number, video_id

        link_list = self.get_streams_by_id(account_number, video_id)

        for i in link_list:
            self.stream_types.append({'id': str(i[0])})
            self.streams[i[0]] = {'url': i[1]}

    def extract(self, **kwargs):
        for i in self.streams:
            s = self.streams[i]
            _, s['container'], s['size'] = url_info(s['url'])
            s['src'] = [s['url']]

site = Bigthink()
download = site.download_by_url


================================================
FILE: src/you_get/extractors/bilibili.py
================================================
#!/usr/bin/env python

from ..common import *
from ..extractor import VideoExtractor

import hashlib
import math


class Bilibili(VideoExtractor):
    name = "Bilibili"

    # Bilibili media encoding options, in descending quality order.
    stream_types = [
        {'id': 'hdflv2_8k', 'quality': 127, 'audio_quality': 30280,
         'container': 'FLV', 'video_resolution': '4320p', 'desc': '超高清 8K'},
        {'id': 'hdflv2_dolby', 'quality': 126, 'audio_quality': 30280,
         'container': 'FLV', 'video_resolution': '3840p', 'desc': '杜比视界'},
        {'id': 'hdflv2_hdr', 'quality': 125, 'audio_quality': 30280,
         'container': 'FLV', 'video_resolution': '2160p', 'desc': '真彩 HDR'},
        {'id': 'hdflv2_4k', 'quality': 120, 'audio_quality': 30280,
         'container': 'FLV', 'video_resolution': '2160p', 'desc': '超清 4K'},
        {'id': 'flv_p60', 'quality': 116, 'audio_quality': 30280,
         'container': 'FLV', 'video_resolution': '1080p', 'desc': '高清 1080P60'},
        {'id': 'hdflv2', 'quality': 112, 'audio_quality': 30280,
         'container': 'FLV', 'video_resolution': '1080p', 'desc': '高清 1080P+'},
        {'id': 'flv', 'quality': 80, 'audio_quality': 30280,
         'container': 'FLV', 'video_resolution': '1080p', 'desc': '高清 1080P'},
        {'id': 'flv720_p60', 'quality': 74, 'audio_quality': 30280,
         'container': 'FLV', 'video_resolution': '720p', 'desc': '高清 720P60'},
        {'id': 'flv720', 'quality': 64, 'audio_quality': 30280,
         'container': 'FLV', 'video_resolution': '720p', 'desc': '高清 720P'},
        {'id': 'hdmp4', 'quality': 48, 'audio_quality': 30280,
         'container': 'MP4', 'video_resolution': '720p', 'desc': '高清 720P (MP4)'},
        {'id': 'flv480', 'quality': 32, 'audio_quality': 30280,
         'container': 'FLV', 'video_resolution': '480p', 'desc': '清晰 480P'},
        {'id': 'flv360', 'quality': 16, 'audio_quality': 30216,
         'container': 'FLV', 'video_resolution': '360p', 'desc': '流畅 360P'},
        # 'quality': 15?
        {'id': 'mp4', 'quality': 0},

        {'id': 'jpg', 'quality': 0},
    ]

    codecids = {7: 'AVC', 12: 'HEVC', 13: 'AV1'}

    @staticmethod
    def height_to_quality(height, qn):
        if height <= 360 and qn <= 16:
            return 16
        elif height <= 480 and qn <= 32:
            return 32
        elif height <= 720 and qn <= 64:
            return 64
        elif height <= 1080 and qn <= 80:
            return 80
        elif height <= 1080 and qn <= 112:
            return 112
        else:
            return 120

    @staticmethod
    def bilibili_headers(referer=None, cookie=None):
        # a reasonable UA
        ua = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36'
        headers = {'Accept': '*/*', 'Accept-Language': 'en-US,en;q=0.5', 'User-Agent': ua}
        if referer is not None:
            headers.update({'Referer': referer})
        if cookie is not None:
            headers.update({'Cookie': cookie})
        return headers

    @staticmethod
    def bilibili_api(avid, cid, qn=0):
        return 'https://api.bilibili.com/x/player/playurl?avid=%s&cid=%s&qn=%s&type=&otype=json&fnver=0&fnval=4048&fourk=1' % (avid, cid, qn)

    @staticmethod
    def bilibili_audio_api(sid):
        return 'https://www.bilibili.com/audio/music-service-c/web/url?sid=%s' % sid

    @staticmethod
    def bilibili_audio_info_api(sid):
        return 'https://www.bilibili.com/audio/music-service-c/web/song/info?sid=%s' % sid

    @staticmethod
    def bilibili_audio_menu_info_api(sid):
        return 'https://www.bilibili.com/audio/music-service-c/web/menu/info?sid=%s' % sid

    @staticmethod
    def bilibili_audio_menu_song_api(sid, ps=100):
        return 'https://www.bilibili.com/audio/music-service-c/web/song/of-menu?sid=%s&pn=1&ps=%s' % (sid, ps)

    @staticmethod
    def bilibili_bangumi_api(avid, cid, ep_id, qn=0, fnval=16):
        return 'https://api.bilibili.com/pgc/player/web/playurl?avid=%s&cid=%s&qn=%s&type=&otype=json&ep_id=%s&fnver=0&fnval=%s' % (avid, cid, qn, ep_id, fnval)

    @staticmethod
    def bilibili_interface_api(cid, qn=0):
        entropy = 'rbMCKn@KuamXWlPMoJGsKcbiJKUfkPF_8dABscJntvqhRSETg'
        appkey, sec = ''.join([chr(ord(i) + 2) for i in entropy[::-1]]).split(':')
        params = 'appkey=%s&cid=%s&otype=json&qn=%s&quality=%s&type=' % (appkey, cid, qn, qn)
        chksum = hashlib.md5(bytes(params + sec, 'utf8')).hexdigest()
        return 'https://api.bilibili.com/x/player/wbi/v2?%s&sign=%s' % (params, chksum)


    @staticmethod
    def bilibili_live_api(cid):
        return 'https://api.live.bilibili.com/room/v1/Room/playUrl?cid=%s&quality=0&platform=web' % cid

    @staticmethod
    def bilibili_live_room_info_api(room_id):
        return 'https://api.live.bilibili.com/room/v1/Room/get_info?room_id=%s' % room_id

    @staticmethod
    def bilibili_live_room_init_api(room_id):
        return 'https://api.live.bilibili.com/room/v1/Room/room_init?id=%s' % room_id

    @staticmethod
    def bilibili_space_channel_api(mid, cid, pn=1, ps=100):
        return 'https://api.bilibili.com/x/space/channel/video?mid=%s&cid=%s&pn=%s&ps=%s&order=0&jsonp=jsonp' % (mid, cid, pn, ps)

    @staticmethod
    def bilibili_space_collection_api(mid, cid, pn=1, ps=30):
        return 'https://api.bilibili.com/x/polymer/space/seasons_archives_list?mid=%s&season_id=%s&sort_reverse=false&page_num=%s&page_size=%s' % (mid, cid, pn, ps)

    @staticmethod
    def bilibili_series_archives_api(mid, sid, pn=1, ps=100):
        return 'https://api.bilibili.com/x/series/archives?mid=%s&series_id=%s&pn=%s&ps=%s&only_normal=true&sort=asc&jsonp=jsonp' % (mid, sid, pn, ps)

    @staticmethod
    def bilibili_space_favlist_api(fid, pn=1, ps=20):
        return 'https://api.bilibili.com/x/v3/fav/resource/list?media_id=%s&pn=%s&ps=%s&order=mtime&type=0&tid=0&jsonp=jsonp' % (fid, pn, ps)

    @staticmethod
    def bilibili_space_video_api(mid, pn=1, ps=50):
        return "https://api.bilibili.com/x/space/arc/search?mid=%s&pn=%s&ps=%s&tid=0&keyword=&order=pubdate&jsonp=jsonp" % (mid, pn, ps)

    @staticmethod
    def bilibili_vc_api(video_id):
        return 'https://api.vc.bilibili.com/clip/v1/video/detail?video_id=%s' % video_id

    @staticmethod
    def bilibili_h_api(doc_id):
        return 'https://api.vc.bilibili.com/link_draw/v1/doc/detail?doc_id=%s' % doc_id

    @staticmethod
    def url_size(url, faker=False, headers={},err_value=0):
        try:
            return url_size(url,faker,headers)
        except:
            return err_value

    def prepare(self, **kwargs):
        self.stream_qualities = {s['quality']: s for s in self.stream_types}
        self.streams.clear()
        self.dash_streams.clear()

        try:
            html_content = get_content(self.url, headers=self.bilibili_headers(referer=self.url))
        except:
            html_content = ''  # live always returns 400 (why?)
        #self.title = match1(html_content,
        #                    r'<h1 title="([^"]+)"')

        # redirect: watchlater
        if re.match(r'https?://(www\.)?bilibili\.com/watchlater/#/(av(\d+)|BV(\S+)/?)', self.url):
            avid = match1(self.url, r'/(av\d+)') or match1(self.url, r'/(BV\w+)')
            p = int(match1(self.url, r'/p(\d+)') or '1')
            self.url = 'https://www.bilibili.com/video/%s?p=%s' % (avid, p)
            html_content = get_content(self.url, headers=self.bilibili_headers())

        # redirect: bangumi/play/ss -> bangumi/play/ep
        # redirect: bangumi.bilibili.com/anime -> bangumi/play/ep
        elif re.match(r'https?://(www\.)?bilibili\.com/bangumi/play/ss(\d+)', self.url) or \
             re.match(r'https?://bangumi\.bilibili\.com/anime/(\d+)/play', self.url):
            initial_state_text = match1(html_content, r'__INITIAL_STATE__=(.*?);\(function\(\)')  # FIXME
            initial_state = json.loads(initial_state_text)
            ep_id = initial_state['epList'][0]['id']
            self.url = 'https://www.bilibili.com/bangumi/play/ep%s' % ep_id
            html_content = get_content(self.url, headers=self.bilibili_headers(referer=self.url))

        # redirect: s
        elif re.match(r'https?://(www\.)?bilibili\.com/s/(.+)', self.url):
            self.url = 'https://www.bilibili.com/%s' % match1(self.url, r'/s/(.+)')
            html_content = get_content(self.url, headers=self.bilibili_headers())

        # redirect: festival
        elif re.match(r'https?://(www\.)?bilibili\.com/festival/(.+)', self.url):
            self.url = 'https://www.bilibili.com/video/%s' % match1(self.url, r'bvid=([^&]+)')
            html_content = get_content(self.url, headers=self.bilibili_headers())

        # sort it out
        if re.match(r'https?://(www\.)?bilibili\.com/audio/au(\d+)', self.url):
            sort = 'audio'
        elif re.match(r'https?://(www\.)?bilibili\.com/bangumi/play/ep(\d+)', self.url):
            sort = 'bangumi'
        elif match1(html_content, r'<meta property="og:url" content="(https://www.bilibili.com/bangumi/play/[^"]+)"'):
            sort = 'bangumi'
        elif re.match(r'https?://live\.bilibili\.com/', self.url):
            sort = 'live'
        elif re.match(r'https?://vc\.bilibili\.com/video/(\d+)', self.url):
            sort = 'vc'
        elif re.match(r'https?://(www\.)?bilibili\.com/video/(av(\d+)|(bv(\S+))|(BV(\S+)))', self.url):
            sort = 'video'
        elif re.match(r'https?://h\.?bilibili\.com/(\d+)', self.url):
            sort = 'h'
        else:
            self.download_playlist_by_url(self.url, **kwargs)
            return

        # regular video
        if sort == 'video':
            initial_state_text = match1(html_content, r'__INITIAL_STATE__=(.*?);\(function\(\)')  # FIXME
            initial_state = json.loads(initial_state_text)

            playinfo_text = match1(html_content, r'__playinfo__=(.*?)</script><script>')  # FIXME
            playinfo = json.loads(playinfo_text) if playinfo_text else None
            playinfo = playinfo if playinfo and playinfo.get('code') == 0 else None

            html_content_ = get_content(self.url, headers=self.bilibili_headers(cookie='CURRENT_FNVAL=16'))
            playinfo_text_ = match1(html_content_, r'__playinfo__=(.*?)</script><script>')  # FIXME
            playinfo_ = json.loads(playinfo_text_) if playinfo_text_ else None
            playinfo_ = playinfo_ if playinfo_ and playinfo_.get('code') == 0 else None

            if 'videoData' in initial_state:
                # (standard video)

                # warn if cookies are not loaded
                if cookies is None:
                    log.w('You will need login cookies for 720p formats or above. (use --cookies to load cookies.txt.)')

                # warn if it is a multi-part video
                pn = initial_state['videoData']['videos']
                if pn > 1 and not kwargs.get('playlist'):
                    log.w('This is a multipart video. (use --playlist to download all parts.)')

                # set video title
                self.title = initial_state['videoData']['title']
                # refine title for a specific part, if it is a multi-part video
                p = int(match1(self.url, r'[\?&]p=(\d+)') or match1(self.url, r'/index_(\d+)') or
                        '1')  # use URL to decide p-number, not initial_state['p']
                if pn > 1:
                    part = initial_state['videoData']['pages'][p - 1]['part']
                    self.title = '%s (P%s. %s)' % (self.title, p, part)

                # construct playinfos
                avid = initial_state['aid']
                cid = initial_state['videoData']['pages'][p - 1]['cid']  # use p-number, not initial_state['videoData']['cid']
            else:
                # (festival video)

                # set video title
                self.title = initial_state['videoInfo']['title']

                # construct playinfos
                avid = initial_state['videoInfo']['aid']
                cid = initial_state['videoInfo']['cid']

            current_quality, best_quality = None, None
            if playinfo is not None:
                current_quality = playinfo['data']['quality'] or None  # 0 indicates an error, fallback to None
                if 'accept_quality' in playinfo['data'] and playinfo['data']['accept_quality'] != []:
                    best_quality = playinfo['data']['accept_quality'][0]
            playinfos = []
            if playinfo is not None:
                playinfos.append(playinfo)
            if playinfo_ is not None:
                playinfos.append(playinfo_)
            # get alternative formats from API
            for qn in [120, 112, 80, 64, 32, 16]:
                # automatic format for durl: qn=0
                # for dash, qn does not matter
                if current_quality is None or qn < current_quality:
                    api_url = self.bilibili_api(avid, cid, qn=qn)
                    api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
                    api_playinfo = json.loads(api_content)
                    if api_playinfo['code'] == 0:  # success
                        playinfos.append(api_playinfo)
                    else:
                        message = api_playinfo['data']['message']
                if best_quality is None or qn <= best_quality:
                    api_url = self.bilibili_interface_api(cid, qn=qn)
                    api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
                    api_playinfo_data = json.loads(api_content)
                    if api_playinfo_data.get('quality'):
                        playinfos.append({'code': 0, 'message': '0', 'ttl': 1, 'data': api_playinfo_data})
            if not playinfos:
                log.w(message)
                # use bilibili error video instead
                url = 'https://static.hdslb.com/error.mp4'
                _, container, size = url_info(url)
                self.streams['flv480'] = {'container': container, 'size': size, 'src': [url]}
                return

            for playinfo in playinfos:
                quality = playinfo['data']['quality']
                format_id = self.stream_qualities[quality]['id']
                container = self.stream_qualities[quality]['container'].lower()
                desc = self.stream_qualities[quality]['desc']

                if 'durl' in playinfo['data']:
                    src, size = [], 0
                    for durl in playinfo['data']['durl']:
                        src.append(durl['url'])
                        size += durl['size']
                    self.streams[format_id] = {'container': container, 'quality': desc, 'size': size, 'src': src}

                # DASH formats
                if 'dash' in playinfo['data']:
                    audio_size_cache = {}
                    for video in playinfo['data']['dash']['video']:
                        s = self.stream_qualities[video['id']]
                        format_id = f"dash-{s['id']}-{self.codecids[video['codecid']]}"  # prefix
                        container = 'mp4'  # enforce MP4 container
                        desc = s['desc'] + ' ' + video['codecs']
                        audio_quality = s['audio_quality']
                        baseurl = video['baseUrl']
                        size = self.url_size(baseurl, headers=self.bilibili_headers(referer=self.url))

                        # find matching audio track
                        if playinfo['data']['dash']['audio']:
                            audio_baseurl = playinfo['data']['dash']['audio'][0]['baseUrl']
                            for audio in playinfo['data']['dash']['audio']:
                                if int(audio['id']) == audio_quality:
                                    audio_baseurl = audio['baseUrl']
                                    break
                            if not audio_size_cache.get(audio_quality, False):
                                audio_size_cache[audio_quality] = self.url_size(audio_baseurl, headers=self.bilibili_headers(referer=self.url))
                            size += audio_size_cache[audio_quality]

                            self.dash_streams[format_id] = {'container': container, 'quality': desc,
                                                            'src': [[baseurl], [audio_baseurl]], 'size': size}
                        else:
                            self.dash_streams[format_id] = {'container': container, 'quality': desc,
                                                            'src': [[baseurl]], 'size': size}

            # get danmaku
            self.danmaku = get_content('https://comment.bilibili.com/%s.xml' % cid, headers=self.bilibili_headers(referer=self.url))

        # bangumi
        elif sort == 'bangumi':
            initial_state_text = match1(html_content, r'__INITIAL_STATE__=(.*?);\(function\(\)')  # FIXME
            initial_state = json.loads(initial_state_text)

            # warn if this bangumi has more than 1 video
            epn = len(initial_state['epList'])
            if epn > 1 and not kwargs.get('playlist'):
                log.w('This bangumi currently has %s videos. (use --playlist to download all videos.)' % epn)

            # set video title
            self.title = initial_state['h1Title']

            # construct playinfos
            ep_id = initial_state['epInfo']['id']
            avid = initial_state['epInfo']['aid']
            cid = initial_state['epInfo']['cid']
            playinfos = []
            api_url = self.bilibili_bangumi_api(avid, cid, ep_id)
            api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
            api_playinfo = json.loads(api_content)
            if api_playinfo['code'] == 0:  # success
                playinfos.append(api_playinfo)
            else:
                log.e(api_playinfo['message'])
                return
            current_quality = api_playinfo['result']['quality']
            # get alternative formats from API
            for fnval in [8, 16]:
                for qn in [120, 112, 80, 64, 32, 16]:
                    # automatic format for durl: qn=0
                    # for dash, qn does not matter
                    if qn != current_quality:
                        api_url = self.bilibili_bangumi_api(avid, cid, ep_id, qn=qn, fnval=fnval)
                        api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
                        api_playinfo = json.loads(api_content)
                        if api_playinfo['code'] == 0:  # success
                            playinfos.append(api_playinfo)

            for playinfo in playinfos:
                if 'durl' in playinfo['result']:
                    quality = playinfo['result']['quality']
                    format_id = self.stream_qualities[quality]['id']
                    container = self.stream_qualities[quality]['container'].lower()
                    desc = self.stream_qualities[quality]['desc']

                    src, size = [], 0
                    for durl in playinfo['result']['durl']:
                        src.append(durl['url'])
                        size += durl['size']
                    self.streams[format_id] = {'container': container, 'quality': desc, 'size': size, 'src': src}

                # DASH formats
                if 'dash' in playinfo['result']:
                    for video in playinfo['result']['dash']['video']:
                        # playinfo['result']['quality'] does not reflect the correct quality of DASH stream
                        quality = self.height_to_quality(video['height'], video['id'])  # convert height to quality code
                        s = self.stream_qualities[quality]
                        format_id = 'dash-' + s['id']  # prefix
                        container = 'mp4'  # enforce MP4 container
                        desc = s['desc']
                        audio_quality = s['audio_quality']
                        baseurl = video['baseUrl']
                        size = url_size(baseurl, headers=self.bilibili_headers(referer=self.url))

                        # find matching audio track
                        audio_baseurl = playinfo['result']['dash']['audio'][0]['baseUrl']
                        for audio in playinfo['result']['dash']['audio']:
                            if int(audio['id']) == audio_quality:
                                audio_baseurl = audio['baseUrl']
                                break
                        size += url_size(audio_baseurl, headers=self.bilibili_headers(referer=self.url))

                        self.dash_streams[format_id] = {'container': container, 'quality': desc,
                                                        'src': [[baseurl], [audio_baseurl]], 'size': size}

            # get danmaku
            self.danmaku = get_content('https://comment.bilibili.com/%s.xml' % cid, headers=self.bilibili_headers(referer=self.url))

        # vc video
        elif sort == 'vc':
            video_id = match1(self.url, r'https?://vc\.?bilibili\.com/video/(\d+)')
            api_url = self.bilibili_vc_api(video_id)
            api_content = get_content(api_url, headers=self.bilibili_headers())
            api_playinfo = json.loads(api_content)

            # set video title
            self.title = '%s (%s)' % (api_playinfo['data']['user']['name'], api_playinfo['data']['item']['id'])

            height = api_playinfo['data']['item']['height']
            quality = self.height_to_quality(height)  # convert height to quality code
            s = self.stream_qualities[quality]
            format_id = s['id']
            container = 'mp4'  # enforce MP4 container
            desc = s['desc']

            playurl = api_playinfo['data']['item']['video_playurl']
            size = int(api_playinfo['data']['item']['video_size'])

            self.streams[format_id] = {'container': container, 'quality': desc, 'size': size, 'src': [playurl]}

        # live
        elif sort == 'live':
            m = re.match(r'https?://live\.bilibili\.com/(\w+)', self.url)
            short_id = m.group(1)
            api_url = self.bilibili_live_room_init_api(short_id)
            api_content = get_content(api_url, headers=self.bilibili_headers())
            room_init_info = json.loads(api_content)

            room_id = room_init_info['data']['room_id']
            api_url = self.bilibili_live_room_info_api(room_id)
            api_content = get_content(api_url, headers=self.bilibili_headers())
            room_info = json.loads(api_content)

            # set video title
            self.title = room_info['data']['title'] + '.' + str(int(time.time()))

            api_url = self.bilibili_live_api(room_id)
            api_content = get_content(api_url, headers=self.bilibili_headers())
            video_info = json.loads(api_content)

            durls = video_info['data']['durl']
            playurl = durls[0]['url']
            container = 'flv'  # enforce FLV container
            self.streams['flv'] = {'container': container, 'quality': 'unknown',
                                   'size': 0, 'src': [playurl]}

        # audio
        elif sort == 'audio':
            m = re.match(r'https?://(?:www\.)?bilibili\.com/audio/au(\d+)', self.url)
            sid = m.group(1)
            api_url = self.bilibili_audio_info_api(sid)
            api_content = get_content(api_url, headers=self.bilibili_headers())
            song_info = json.loads(api_content)

            # set audio title
            self.title = song_info['data']['title']

            # get lyrics
            self.lyrics = get_content(song_info['data']['lyric'])

            api_url = self.bilibili_audio_api(sid)
            api_content = get_content(api_url, headers=self.bilibili_headers())
            audio_info = json.loads(api_content)

            playurl = audio_info['data']['cdns'][0]
            size = audio_info['data']['size']
            container = 'mp4'  # enforce MP4 container
            self.streams['mp4'] = {'container': container,
                                   'size': size, 'src': [playurl]}

        # h images
        elif sort == 'h':
            m = re.match(r'https?://h\.?bilibili\.com/(\d+)', self.url)
            doc_id = m.group(1)
            api_url = self.bilibili_h_api(doc_id)
            api_content = get_content(api_url, headers=self.bilibili_headers())
            h_info = json.loads(api_content)

            urls = []
            for pic in h_info['data']['item']['pictures']:
                img_src = pic['img_src']
                urls.append(img_src)
            size = urls_size(urls)

            self.title = doc_id
            container = 'jpg'  # enforce JPG container
            self.streams[container] = {'container': container,
                                       'size': size, 'src': urls}

    def prepare_by_cid(self,avid,cid,title,html_content,playinfo,playinfo_,url):
        #response for interaction video
        #主要针对互动视频,使用cid而不是url来相互区分

        self.stream_qualities = {s['quality']: s for s in self.stream_types}
        self.title = title
        self.url = url

        current_quality, best_quality = None, None
        if playinfo is not None:
            current_quality = playinfo['data']['quality'] or None  # 0 indicates an error, fallback to None
            if 'accept_quality' in playinfo['data'] and playinfo['data']['accept_quality'] != []:
                best_quality = playinfo['data']['accept_quality'][0]
        playinfos = []
        if playinfo is not None:
            playinfos.append(playinfo)
        if playinfo_ is not None:
            playinfos.append(playinfo_)
        # get alternative formats from API
        for qn in [80, 64, 32, 16]:
            # automatic format for durl: qn=0
            # for dash, qn does not matter
            if current_quality is None or qn < current_quality:
                api_url = self.bilibili_api(avid, cid, qn=qn)
                api_content = get_content(api_url, headers=self.bilibili_headers())
                api_playinfo = json.loads(api_content)
                if api_playinfo['code'] == 0:  # success
                    playinfos.append(api_playinfo)
                else:
                    message = api_playinfo['data']['message']
            if best_quality is None or qn <= best_quality:
                api_url = self.bilibili_interface_api(cid, qn=qn)
                api_content = get_content(api_url, headers=self.bilibili_headers())
                api_playinfo_data = json.loads(api_content)
                if api_playinfo_data.get('quality'):
                    playinfos.append({'code': 0, 'message': '0', 'ttl': 1, 'data': api_playinfo_data})
        if not playinfos:
            log.w(message)
            # use bilibili error video instead
            url = 'https://static.hdslb.com/error.mp4'
            _, container, size = url_info(url)
            self.streams['flv480'] = {'container': container, 'size': size, 'src': [url]}
            return

        for playinfo in playinfos:
            quality = playinfo['data']['quality']
            format_id = self.stream_qualities[quality]['id']
            container = self.stream_qualities[quality]['container'].lower()
            desc = self.stream_qualities[quality]['desc']

            if 'durl' in playinfo['data']:
                src, size = [], 0
                for durl in playinfo['data']['durl']:
                    src.append(durl['url'])
                    size += durl['size']
                self.streams[format_id] = {'container': container, 'quality': desc, 'size': size, 'src': src}

            # DASH formats
            if 'dash' in playinfo['data']:
                audio_size_cache = {}
                for video in playinfo['data']['dash']['video']:
                    # prefer the latter codecs!
                    s = self.stream_qualities[video['id']]
                    format_id = 'dash-' + s['id']  # prefix
                    container = 'mp4'  # enforce MP4 container
                    desc = s['desc']
                    audio_quality = s['audio_quality']
                    baseurl = video['baseUrl']
                    size = self.url_size(baseurl, headers=self.bilibili_headers(referer=self.url))

                    # find matching audio track
                    if playinfo['data']['dash']['audio']:
                        audio_baseurl = playinfo['data']['dash']['audio'][0]['baseUrl']
                        for audio in playinfo['data']['dash']['audio']:
                            if int(audio['id']) == audio_quality:
                                audio_baseurl = audio['baseUrl']
                                break
                        if not audio_size_cache.get(audio_quality, False):
                            audio_size_cache[audio_quality] = self.url_size(audio_baseurl,
                                                                            headers=self.bilibili_headers(referer=self.url))
                        size += audio_size_cache[audio_quality]

                        self.dash_streams[format_id] = {'container': container, 'quality': desc,
                                                        'src': [[baseurl], [audio_baseurl]], 'size': size}
                    else:
                        self.dash_streams[format_id] = {'container': container, 'quality': desc,
                                                        'src': [[baseurl]], 'size': size}

        # get danmaku
        self.danmaku = get_content('https://comment.bilibili.com/%s.xml' % cid, headers=self.bilibili_headers(referer=self.url))

    def extract(self, **kwargs):
        # set UA and referer for downloading
        headers = self.bilibili_headers(referer=self.url)
        self.ua, self.referer = headers['User-Agent'], headers['Referer']

        if not self.streams_sorted:
            # no stream is available
            return

        if 'stream_id' in kwargs and kwargs['stream_id']:
            # extract the stream
            stream_id = kwargs['stream_id']
            if stream_id not in self.streams and stream_id not in self.dash_streams:
                log.e('[Error] Invalid video format.')
                log.e('Run \'-i\' command with no specific video format to view all available formats.')
                exit(2)
        else:
            # extract stream with the best quality
            stream_id = self.streams_sorted[0]['id']

    def download_playlist_by_url(self, url, **kwargs):
        self.url = url
        kwargs['playlist'] = True

        html_content = get_content(self.url, headers=self.bilibili_headers(referer=self.url))

        # sort it out
        if re.match(r'https?://(www\.)?bilibili\.com/bangumi/play/ep(\d+)', self.url):
            sort = 'bangumi'
        elif match1(html_content, r'<meta property="og:url" content="(https://www.bilibili.com/bangumi/play/[^"]+)"'):
            sort = 'bangumi'
        elif re.match(r'https?://(www\.)?bilibili\.com/bangumi/media/md(\d+)', self.url) or \
            re.match(r'https?://bangumi\.bilibili\.com/anime/(\d+)', self.url):
            sort = 'bangumi_md'
        elif re.match(r'https?://(www\.)?bilibili\.com/video/(av(\d+)|bv(\S+)|BV(\S+))', self.url):
            sort = 'video'
        elif re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/detail\?.*cid=(\d+)', self.url):
            sort = 'space_channel'
        elif re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/seriesdetail\?.*sid=(\d+)', self.url):
            sort = 'space_channel_series'
        elif re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/collectiondetail\?.*sid=(\d+)', self.url):
            sort = 'space_channel_collection'
        elif re.match(r'https?://space\.?bilibili\.com/(\d+)/favlist\?.*fid=(\d+)', self.url):
            sort = 'space_favlist'
        elif re.match(r'https?://space\.?bilibili\.com/(\d+)/video', self.url):
            sort = 'space_video'
        elif re.match(r'https?://(www\.)?bilibili\.com/audio/am(\d+)', self.url):
            sort = 'audio_menu'
        else:
            log.e('[Error] Unsupported URL pattern.')
            exit(1)

        # regular video
        if sort == 'video':
            initial_state_text = match1(html_content, r'__INITIAL_STATE__=(.*?);\(function\(\)')  # FIXME
            initial_state = json.loads(initial_state_text)
            aid = initial_state['videoData']['aid']
            pn = initial_state['videoData']['videos']

            if pn == len(initial_state['videoData']['pages']):
                # non-interative video
                for pi in range(1, pn + 1):
                     purl = 'https://www.bilibili.com/video/av%s?p=%s' % (aid, pi)
                     self.__class__().download_by_url(purl, **kwargs)

            else:
                # interative video
                search_node_list = []
                download_cid_set = set([initial_state['videoData']['cid']])
                params = {
                        'id': 'cid:{}'.format(initial_state['videoData']['cid']),
                        'aid': str(aid)
                }
                urlcontent = get_content('https://api.bilibili.com/x/player.so?'+parse.urlencode(params), headers=self.bilibili_headers(referer='https://www.bilibili.com/video/av{}'.format(aid)))
                graph_version = json.loads(urlcontent[urlcontent.find('<interaction>')+13:urlcontent.find('</interaction>')])['graph_version']
                params = {
                    'aid': str(aid),
                    'graph_version': graph_version,
                    'platform': 'pc',
                    'portal': 0,
                    'screen': 0,
                }
                node_info = json.loads(get_content('https://api.bilibili.com/x/stein/nodeinfo?'+parse.urlencode(params)))

                playinfo_text = match1(html_content, r'__playinfo__=(.*?)</script><script>')  # FIXME
                playinfo = json.loads(playinfo_text) if playinfo_text else None

                html_content_ = get_content(self.url, headers=self.bilibili_headers(cookie='CURRENT_FNVAL=16'))
                playinfo_text_ = match1(html_content_, r'__playinfo__=(.*?)</script><script>')  # FIXME
                playinfo_ = json.loads(playinfo_text_) if playinfo_text_ else None

                self.prepare_by_cid(aid, initial_state['videoData']['cid'], initial_state['videoData']['title'] + ('P{}. {}'.format(1, node_info['data']['title'])),html_content,playinfo,playinfo_,url)
                self.extract(**kwargs)
                self.download(**kwargs)
                for choice in node_info['data']['edges']['choices']:
                    search_node_list.append(choice['node_id'])
                    if not choice['cid'] in download_cid_set:
                        download_cid_set.add(choice['cid'])
                        self.prepare_by_cid(aid,choice['cid'],initial_state['videoData']['title']+('P{}. {}'.format(len(download_cid_set),choice['option'])),html_content,playinfo,playinfo_,url)
                        self.extract(**kwargs)
                        self.download(**kwargs)
                while len(search_node_list)>0:
                    node_id = search_node_list.pop(0)
                    params.update({'node_id':node_id})
                    node_info = json.loads(get_content('https://api.bilibili.com/x/stein/nodeinfo?'+parse.urlencode(params)))
                    if node_info['data'].__contains__('edges'):
                        for choice in node_info['data']['edges']['choices']:
                            search_node_list.append(choice['node_id'])
                            if not choice['cid'] in download_cid_set:
                                download_cid_set.add(choice['cid'])
                                self.prepare_by_cid(aid,choice['cid'],initial_state['videoData']['title']+('P{}. {}'.format(len(download_cid_set),choice['option'])),html_content,playinfo,playinfo_,url)
                                try:
                                    self.streams_sorted = [dict([('id', stream_type['id'])] + list(self.streams[stream_type['id']].items())) for stream_type in self.__class__.stream_types if stream_type['id'] in self.streams]
                                except:
                                    self.streams_sorted = [dict([('itag', stream_type['itag'])] + list(self.streams[stream_type['itag']].items())) for stream_type in self.__class__.stream_types if stream_type['itag'] in self.streams]
                                self.extract(**kwargs)
                                self.download(**kwargs)

        elif sort == 'bangumi':
            initial_state_text = match1(html_content, r'__INITIAL_STATE__=(.*?);\(function\(\)')  # FIXME
            initial_state = json.loads(initial_state_text)
            epn, i = len(initial_state['epList']), 0
            for ep in initial_state['epList']:
                i += 1; log.w('Extracting %s of %s videos ...' % (i, epn))
                ep_id = ep['id']
                epurl = 'https://www.bilibili.com/bangumi/play/ep%s/' % ep_id
                self.__class__().download_by_url(epurl, **kwargs)

        elif sort == 'bangumi_md':
            initial_state_text = match1(html_content, r'__INITIAL_STATE__=(.*?);\(function\(\)')  # FIXME
            initial_state = json.loads(initial_state_text)
            epn, i = len(initial_state['mediaInfo']['episodes']), 0
            for ep in initial_state['mediaInfo']['episodes']:
                i += 1; log.w('Extracting %s of %s videos ...' % (i, epn))
                ep_id = ep['ep_id']
                epurl = 'https://www.bilibili.com/bangumi/play/ep%s/' % ep_id
                self.__class__().download_by_url(epurl, **kwargs)

        elif sort == 'space_channel':
            m = re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/detail\?.*cid=(\d+)', self.url)
            mid, cid = m.group(1), m.group(2)
            api_url = self.bilibili_space_channel_api(mid, cid)
            api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
            channel_info = json.loads(api_content)
            # TBD: channel of more than 100 videos

            epn, i = len(channel_info['data']['list']['archives']), 0
            for video in channel_info['data']['list']['archives']:
                i += 1; log.w('Extracting %s of %s videos ...' % (i, epn))
                url = 'https://www.bilibili.com/video/av%s' % video['aid']
                self.__class__().download_playlist_by_url(url, **kwargs)

        elif sort == 'space_channel_series':
            m = re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/seriesdetail\?.*sid=(\d+)', self.url)
            mid, sid = m.group(1), m.group(2)
            pn = 1
            video_list = []
            while True:
                api_url = self.bilibili_series_archives_api(mid, sid, pn)
                api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
                archives_info = json.loads(api_content)
                video_list.extend(archives_info['data']['archives'])
                if len(video_list) < archives_info['data']['page']['total'] and len(archives_info['data']['archives']) > 0:
                    pn += 1
                else:
                    break

            epn, i = len(video_list), 0
            for video in video_list:
                i += 1; log.w('Extracting %s of %s videos ...' % (i, epn))
                url = 'https://www.bilibili.com/video/av%s' % video['aid']
                self.__class__().download_playlist_by_url(url, **kwargs)

        elif sort == 'space_channel_collection':
            m = re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/collectiondetail\?.*sid=(\d+)', self.url)
            mid, sid = m.group(1), m.group(2)
            pn = 1
            video_list = []
            while True:
                api_url = self.bilibili_space_collection_api(mid, sid, pn)
                api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
                archives_info = json.loads(api_content)
                video_list.extend(archives_info['data']['archives'])
                if len(video_list) < archives_info['data']['page']['total'] and len(archives_info['data']['archives']) > 0:
                    pn += 1
                else:
                    break

            epn, i = len(video_list), 0
            for video in video_list:
                i += 1; log.w('Extracting %s of %s videos ...' % (i, epn))
                url = 'https://www.bilibili.com/video/av%s' % video['aid']
                self.__class__().download_playlist_by_url(url, **kwargs)

        elif sort == 'space_favlist':
            m = re.match(r'https?://space\.?bilibili\.com/(\d+)/favlist\?.*fid=(\d+)', self.url)
            vmid, fid = m.group(1), m.group(2)
            api_url = self.bilibili_space_favlist_api(fid)
            api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
            favlist_info = json.loads(api_content)
            pc = favlist_info['data']['info']['media_count'] // len(favlist_info['data']['medias'])
            if favlist_info['data']['info']['media_count'] % len(favlist_info['data']['medias']) != 0:
                pc += 1
            for pn in range(1, pc + 1):
                log.w('Extracting %s of %s pages ...' % (pn, pc))
                api_url = self.bilibili_space_favlist_api(fid, pn=pn)
                api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
                favlist_info = json.loads(api_content)

                epn, i = len(favlist_info['data']['medias']), 0
                for video in favlist_info['data']['medias']:
                    i += 1; log.w('Extracting %s of %s videos ...' % (i, epn))
                    url = 'https://www.bilibili.com/video/av%s' % video['id']
                    self.__class__().download_playlist_by_url(url, **kwargs)

        elif sort == 'space_video':
            m = re.match(r'https?://space\.?bilibili\.com/(\d+)/video', self.url)
            mid = m.group(1)
            api_url = self.bilibili_space_video_api(mid)
            api_content = get_content(api_url, headers=self.bilibili_headers())
            videos_info = json.loads(api_content)
            # pc = videos_info['data']['page']['count'] // videos_info['data']['page']['ps']
            pc = math.ceil(videos_info['data']['page']['count'] / videos_info['data']['page']['ps'])

            for pn in range(1, pc + 1):
                api_url = self.bilibili_space_video_api(mid, pn=pn)
                api_content = get_content(api_url, headers=self.bilibili_headers())
                videos_info = json.loads(api_content)

                epn, i = len(videos_info['data']['list']['vlist']), 0
                for video in videos_info['data']['list']['vlist']:
                    i += 1; log.w('Extracting %s of %s videos ...' % (i, epn))
                    url = 'https://www.bilibili.com/video/av%s' % video['aid']
                    self.__class__().download_playlist_by_url(url, **kwargs)

        elif sort == 'audio_menu':
            m = re.match(r'https?://(?:www\.)?bilibili\.com/audio/am(\d+)', self.url)
            sid = m.group(1)
            #api_url = self.bilibili_audio_menu_info_api(sid)
            #api_content = get_content(api_url, headers=self.bilibili_headers())
            #menu_info = json.loads(api_content)
            api_url = self.bilibili_audio_menu_song_api(sid)
            api_content = get_content(api_url, headers=self.bilibili_headers())
            menusong_info = json.loads(api_content)
            epn, i = len(menusong_info['data']['data']), 0
            for song in menusong_info['data']['data']:
                i += 1; log.w('Extracting %s of %s songs ...' % (i, epn))
                url = 'https://www.bilibili.com/audio/au%s' % song['id']
                self.__class__().download_by_url(url, **kwargs)


site = Bilibili()
download = site.download_by_url
download_playlist = site.download_playlist_by_url

bilibili_download = download


================================================
FILE: src/you_get/extractors/bokecc.py
================================================
#!/usr/bin/env python

from ..common import *
from ..extractor import VideoExtractor
import xml.etree.ElementTree as ET

class BokeCC(VideoExtractor):
    name = "BokeCC"

    stream_types = [  # we do now know for now, as we have to check the
                      # output from the API
    ]

    API_ENDPOINT = 'http://p.bokecc.com/'


    def download_by_id(self, vid = '', title = None, output_dir='.', merge=True, info_only=False,**kwargs):
        """self, str->None
        
        Keyword arguments:
        self: self
        vid: The video ID for BokeCC cloud, something like
        FE3BB999594978049C33DC5901307461
        
        Calls the prepare() to download the video.
        
        If no title is provided, this method shall try to find a proper title
        with the information providin within the
        returned content of the API."""

        assert vid

        self.prepare(vid = vid, title = title, **kwargs)

        self.extract(**kwargs)

        self.download(output_dir = output_dir, 
                    merge = merge, 
                    info_only = info_only, **kwargs)

    def prepare(self, vid = '', title = None, **kwargs):
        assert vid

        api_url = self.API_ENDPOINT + \
            'servlet/playinfo?vid={vid}&m=0'.format(vid = vid)  #return XML

        html = get_content(api_url)
        self.tree = ET.ElementTree(ET.fromstring(html))

        if self.tree.find('result').text != '1':
            log.wtf('API result says failed!')
            raise 

        if title is None:
            self.title = '_'.join([i.text for i in self.tree.iterfind('video/videomarks/videomark/markdesc')])
        else:
            self.title = title

        if not title:
            self.title = vid

        for i in self.tree.iterfind('video/quality'):
            quality = i.attrib ['value']
            url = i[0].attrib['playurl']
            self.stream_types.append({'id': quality,
                                      'video_profile': i.attrib ['desp']})
            self.streams[quality] = {'url': url,
                                     'video_profile': i.attrib ['desp']}
            self.streams_sorted = [dict([('id', stream_type['id'])] + list(self.streams[stream_type['id']].items())) for stream_type in self.__class__.stream_types if stream_type['id'] in self.streams]


    def extract(self, **kwargs):
        for i in self.streams:
            s = self.streams[i]
            _, s['container'], s['size'] = url_info(s['url'])
            s['src'] = [s['url']]
        if 'stream_id' in kwargs and kwargs['stream_id']:
            # Extract the stream
            stream_id = kwargs['stream_id']

            if stream_id not in self.streams:
                log.e('[Error] Invalid video format.')
                log.e('Run \'-i\' command with no specific video format to view all available formats.')
                exit(2)
        else:
            # Extract stream with the best quality
            stream_id = self.streams_sorted[0]['id']
            _, s['container'], s['size'] = url_info(s['url'])
            s['src'] = [s['url']]

site = BokeCC()

# I don't know how to call the player directly so I just put it here
# just in case anyone touchs it -- Beining@Aug.24.2016
#download = site.download_by_url
#download_playlist = site.download_by_url

bokecc_download_by_id = site.download_by_id


================================================
FILE: src/you_get/extractors/cbs.py
================================================
#!/usr/bin/env python

__all__ = ['cbs_download']

from ..common import *

from .theplatform import theplatform_download_by_pid

def cbs_download(url, output_dir='.', merge=True, info_only=False, **kwargs):
    """Downloads CBS videos by URL.
    """

    html = get_content(url)
    pid = match1(html, r'video\.settings\.pid\s*=\s*\'([^\']+)\'')
    title = match1(html, r'video\.settings\.title\s*=\s*\"([^\"]+)\"')

    theplatform_download_by_pid(pid, title, output_dir=output_dir, merge=merge, info_only=info_only)

site_info = "CBS.com"
download = cbs_download
download_playlist = playlist_not_supported('cbs')


================================================
FILE: src/you_get/extractors/ckplayer.py
================================================
#!/usr/bin/env python
#coding:utf-8
# Author:  Beining --<i@cnbeining.com>
# Purpose: A general extractor for CKPlayer
# Created: 03/15/2016

__all__ = ['ckplayer_download']

from xml.etree import ElementTree as ET
from copy import copy
from ..common import *
#----------------------------------------------------------------------
def ckplayer_get_info_by_xml(ckinfo):
    """str->dict
    Information for CKPlayer API content."""
    e = ET.XML(ckinfo)
    video_dict = {'title': '',
                  #'duration': 0,
                  'links': [],
                  'size': 0,
                  'flashvars': '',}
    dictified = dictify(e)['ckplayer']
    if 'info' in dictified:
        if '_text' in dictified['info'][0]['title'][0]:  #title
            video_dict['title'] = dictified['info'][0]['title'][0]['_text'].strip()

    #if dictify(e)['ckplayer']['info'][0]['title'][0]['_text'].strip():  #duration
        #video_dict['title'] = dictify(e)['ckplayer']['info'][0]['title'][0]['_text'].strip()

    if '_text' in dictified['video'][0]['size'][0]:  #size exists for 1 piece
        video_dict['size'] = sum([int(i['size'][0]['_text']) for i in dictified['video']])

    if '_text' in dictified['video'][0]['file'][0]:  #link exist
        video_dict['links'] = [i['file'][0]['_text'].strip() for i in dictified['video']]

    if '_text' in dictified['flashvars'][0]:
        video_dict['flashvars'] = dictified['flashvars'][0]['_text'].strip()

    return video_dict

#----------------------------------------------------------------------
#helper
#https://stackoverflow.com/questions/2148119/how-to-convert-an-xml-string-to-a-dictionary-in-python
def dictify(r,root=True):
    if root:
        return {r.tag : dictify(r, False)}
    d=copy(r.attrib)
    if r.text:
        d["_text"]=r.text
    for x in r.findall("./*"):
        if x.tag not in d:
            d[x.tag]=[]
        d[x.tag].append(dictify(x,False))
    return d

#----------------------------------------------------------------------
def ckplayer_download_by_xml(ckinfo, output_dir = '.', merge = False, info_only = False, **kwargs):
    #Info XML
    video_info = ckplayer_get_info_by_xml(ckinfo)
    
    try:
        title = kwargs['title']
    except:
        title = ''
    type_ = ''
    size = 0
    
    if len(video_info['links']) > 0:  #has link
        type_, _ext, size = url_info(video_info['links'][0])  #use 1st to determine type, ext
    
    if 'size' in video_info:
        size = int(video_info['size'])
    else:
        for i in video_info['links'][1:]:  #save 1st one
            size += url_info(i)[2]
    
    print_info(site_info, title, type_, size)
    if not info_only:
        download_urls(video_info['links'], title, _ext, size, output_dir=output_dir, merge=merge)

#----------------------------------------------------------------------
def ckplayer_download(url, output_dir = '.', merge = False, info_only = False, is_xml = True, **kwargs):
    if is_xml:  #URL is XML URL
        try:
            title = kwargs['title']
        except:
            title = ''
        try:
            headers = kwargs['headers']  #headers provided
            ckinfo = get_content(url, headers = headers)
        except NameError:
            ckinfo = get_content(url)
        
        ckplayer_download_by_xml(ckinfo, output_dir, merge, 
                                info_only, title = title)

site_info = "CKPlayer General"
download = ckplayer_download
download_playlist = playlist_not_supported('ckplayer')


================================================
FILE: src/you_get/extractors/cntv.py
================================================
#!/usr/bin/env python

import json
import re

from ..common import get_content, r1, match1, playlist_not_supported
from ..extractor import VideoExtractor

__all__ = ['cntv_download', 'cntv_download_by_id']


class CNTV(VideoExtractor):
    name = 'CNTV.com'
    stream_types = [
        {'id': '1', 'video_profile': '1280x720_2000kb/s', 'map_to': 'chapters4'},
        {'id': '2', 'video_profile': '1280x720_1200kb/s', 'map_to': 'chapters3'},
        {'id': '3', 'video_profile': '640x360_850kb/s', 'map_to': 'chapters2'},
        {'id': '4', 'video_profile': '480x270_450kb/s', 'map_to': 'chapters'},
        {'id': '5', 'video_profile': '320x180_200kb/s', 'map_to': 'lowChapters'},
    ]

    ep = 'http://vdn.apps.cntv.cn/api/getHttpVideoInfo.do?pid={}'

    def __init__(self):
        super().__init__()
        self.api_data = None

    def prepare(self, **kwargs):
        self.api_data = json.loads(get_content(self.__class__.ep.format(self.vid)))
        self.title = self.api_data['title']
        for s in self.api_data['video']:
            for st in self.__class__.stream_types:
                if st['map_to'] == s:
                    urls = self.api_data['video'][s]
                    src = [u['url'] for u in urls]
                    stream_data = dict(src=src, size=0, container='mp4', video_profile=st['video_profile'])
                    self.streams[st['id']] = stream_data


def cntv_download_by_id(rid, **kwargs)
Download .txt
gitextract_fe0wl05y/

├── .github/
│   └── workflows/
│       └── python-package.yml
├── .gitignore
├── CHANGELOG.rst
├── CONTRIBUTING.md
├── MANIFEST.in
├── Makefile
├── README.md
├── README.rst
├── SECURITY.md
├── contrib/
│   └── completion/
│       ├── you-get-completion.bash
│       └── you-get.fish
├── setup.cfg
├── setup.py
├── src/
│   └── you_get/
│       ├── cli_wrapper/
│       │   ├── player/
│       │   │   ├── dragonplayer.py
│       │   │   ├── gnome_mplayer.py
│       │   │   ├── mplayer.py
│       │   │   ├── vlc.py
│       │   │   └── wmp.py
│       │   └── transcoder/
│       │       ├── ffmpeg.py
│       │       ├── libav.py
│       │       └── mencoder.py
│       ├── common.py
│       ├── extractor.py
│       ├── extractors/
│       │   ├── acfun.py
│       │   ├── alive.py
│       │   ├── archive.py
│       │   ├── baidu.py
│       │   ├── bandcamp.py
│       │   ├── baomihua.py
│       │   ├── bigthink.py
│       │   ├── bilibili.py
│       │   ├── bokecc.py
│       │   ├── cbs.py
│       │   ├── ckplayer.py
│       │   ├── cntv.py
│       │   ├── coub.py
│       │   ├── dailymotion.py
│       │   ├── douban.py
│       │   ├── douyin.py
│       │   ├── douyutv.py
│       │   ├── ehow.py
│       │   ├── embed.py
│       │   ├── facebook.py
│       │   ├── fc2video.py
│       │   ├── flickr.py
│       │   ├── freesound.py
│       │   ├── funshion.py
│       │   ├── giphy.py
│       │   ├── google.py
│       │   ├── heavymusic.py
│       │   ├── huomaotv.py
│       │   ├── icourses.py
│       │   ├── ifeng.py
│       │   ├── imgur.py
│       │   ├── infoq.py
│       │   ├── instagram.py
│       │   ├── interest.py
│       │   ├── iqilu.py
│       │   ├── iqiyi.py
│       │   ├── iwara.py
│       │   ├── ixigua.py
│       │   ├── joy.py
│       │   ├── kakao.py
│       │   ├── khan.py
│       │   ├── ku6.py
│       │   ├── kuaishou.py
│       │   ├── kugou.py
│       │   ├── kuwo.py
│       │   ├── le.py
│       │   ├── lizhi.py
│       │   ├── longzhu.py
│       │   ├── lrts.py
│       │   ├── magisto.py
│       │   ├── metacafe.py
│       │   ├── mgtv.py
│       │   ├── miaopai.py
│       │   ├── miomio.py
│       │   ├── missevan.py
│       │   ├── mixcloud.py
│       │   ├── mtv81.py
│       │   ├── nanagogo.py
│       │   ├── naver.py
│       │   ├── netease.py
│       │   ├── nicovideo.py
│       │   ├── pinterest.py
│       │   ├── pixnet.py
│       │   ├── pptv.py
│       │   ├── qie.py
│       │   ├── qie_video.py
│       │   ├── qingting.py
│       │   ├── qq.py
│       │   ├── qq_egame.py
│       │   ├── showroom.py
│       │   ├── sina.py
│       │   ├── sohu.py
│       │   ├── soundcloud.py
│       │   ├── suntv.py
│       │   ├── ted.py
│       │   ├── theplatform.py
│       │   ├── tiktok.py
│       │   ├── toutiao.py
│       │   ├── tucao.py
│       │   ├── tudou.py
│       │   ├── tumblr.py
│       │   ├── twitter.py
│       │   ├── ucas.py
│       │   ├── universal.py
│       │   ├── veoh.py
│       │   ├── vimeo.py
│       │   ├── vk.py
│       │   ├── w56.py
│       │   ├── wanmen.py
│       │   ├── ximalaya.py
│       │   ├── xinpianchang.py
│       │   ├── yixia.py
│       │   ├── yizhibo.py
│       │   ├── youku.py
│       │   ├── youtube.py
│       │   ├── zhanqi.py
│       │   ├── zhibo.py
│       │   └── zhihu.py
│       ├── json_output.py
│       ├── processor/
│       │   ├── ffmpeg.py
│       │   ├── join_flv.py
│       │   ├── join_mp4.py
│       │   ├── join_ts.py
│       │   └── rtmpdump.py
│       ├── util/
│       │   ├── fs.py
│       │   ├── git.py
│       │   ├── log.py
│       │   ├── os.py
│       │   ├── strings.py
│       │   └── term.py
│       └── version.py
├── tests/
│   ├── test.py
│   ├── test_common.py
│   └── test_util.py
├── you-get
└── you-get.plugin.zsh
Download .txt
SYMBOL INDEX (659 symbols across 116 files)

FILE: setup.py
  function load_source (line 11) | def load_source(modname, filename):

FILE: src/you_get/common.py
  function rc4 (line 157) | def rc4(key, data):
  function general_m3u8_extractor (line 182) | def general_m3u8_extractor(url, headers={}):
  function maybe_print (line 196) | def maybe_print(*s):
  function tr (line 203) | def tr(s):
  function r1 (line 212) | def r1(pattern, text):
  function r1_of (line 219) | def r1_of(patterns, text):
  function match1 (line 226) | def match1(text, *patterns):
  function matchall (line 254) | def matchall(text, patterns):
  function launch_player (line 273) | def launch_player(player, urls):
  function parse_query_param (line 293) | def parse_query_param(url, param):
  function unicodize (line 310) | def unicodize(text):
  function escape_file_path (line 319) | def escape_file_path(path):
  function ungzip (line 327) | def ungzip(data):
  function undeflate (line 337) | def undeflate(data):
  function getHttps (line 348) | def getHttps(host, url, headers, debuglevel=0):
  function get_response (line 370) | def get_response(url, faker=False):
  function get_html (line 400) | def get_html(url, encoding=None, faker=False):
  function get_decoded_html (line 406) | def get_decoded_html(url, faker=False):
  function get_location (line 416) | def get_location(url, headers=None, get_method='HEAD'):
  function urlopen_with_retry (line 428) | def urlopen_with_retry(*args, **kwargs):
  function get_content (line 451) | def get_content(url, headers={}, decoded=True):
  function post_content (line 502) | def post_content(url, headers={}, post_data={}, decoded=True, **kwargs):
  function url_size (line 558) | def url_size(url, faker=False, headers={}):
  function urls_size (line 572) | def urls_size(urls, faker=False, headers={}):
  function get_head (line 576) | def get_head(url, headers=None, get_method='HEAD'):
  function url_info (line 588) | def url_info(url, faker=False, headers={}):
  function url_locations (line 650) | def url_locations(urls, faker=False, headers={}):
  function url_save (line 670) | def url_save(
  class SimpleProgressBar (line 839) | class SimpleProgressBar:
    method __init__ (line 842) | def __init__(self, total_size, total_pieces=1):
    method update (line 862) | def update(self):
    method update_received (line 884) | def update_received(self, n):
    method update_piece (line 899) | def update_piece(self, n):
    method done (line 902) | def done(self):
  class PiecesProgressBar (line 908) | class PiecesProgressBar:
    method __init__ (line 909) | def __init__(self, total_size, total_pieces=1):
    method update (line 916) | def update(self):
    method update_received (line 924) | def update_received(self, n):
    method update_piece (line 928) | def update_piece(self, n):
    method done (line 931) | def done(self):
  class DummyProgressBar (line 937) | class DummyProgressBar:
    method __init__ (line 938) | def __init__(self, *args):
    method update_received (line 941) | def update_received(self, n):
    method update_piece (line 944) | def update_piece(self, n):
    method done (line 947) | def done(self):
  function get_output_filename (line 951) | def get_output_filename(urls, title, ext, output_dir, merge, **kwargs):
  function print_user_agent (line 983) | def print_user_agent(faker=False):
  function download_urls (line 988) | def download_urls(
  function download_rtmp_url (line 1148) | def download_rtmp_url(
  function download_url_ffmpeg (line 1172) | def download_url_ffmpeg(
  function playlist_not_supported (line 1205) | def playlist_not_supported(name):
  function print_info (line 1211) | def print_info(site_info, title, type, size, **kwargs):
  function mime_to_container (line 1302) | def mime_to_container(mime):
  function parse_host (line 1315) | def parse_host(host):
  function set_proxy (line 1328) | def set_proxy(proxy):
  function unset_proxy (line 1337) | def unset_proxy():
  function set_http_proxy (line 1344) | def set_http_proxy(proxy):
  function print_more_compatible (line 1357) | def print_more_compatible(*args, **kwargs):
  function download_main (line 1375) | def download_main(download, download_playlist, urls, playlist, **kwargs):
  function load_cookies (line 1392) | def load_cookies(cookiefile):
  function set_socks_proxy (line 1483) | def set_socks_proxy(proxy):
  function script_main (line 1519) | def script_main(download, download_playlist, **kwargs):
  function google_search (line 1823) | def google_search(url):
  function url_to_module (line 1834) | def url_to_module(url):
  function any_download (line 1875) | def any_download(url, **kwargs):
  function any_download_playlist (line 1880) | def any_download_playlist(url, **kwargs):
  function main (line 1885) | def main(**kwargs):

FILE: src/you_get/extractor.py
  class Extractor (line 10) | class Extractor():
    method __init__ (line 11) | def __init__(self, *args):
  class VideoExtractor (line 21) | class VideoExtractor():
    method __init__ (line 22) | def __init__(self, *args):
    method download_by_url (line 42) | def download_by_url(self, url, **kwargs):
    method download_by_vid (line 63) | def download_by_vid(self, vid, **kwargs):
    method prepare (line 82) | def prepare(self, **kwargs):
    method extract (line 86) | def extract(self, **kwargs):
    method p_stream (line 90) | def p_stream(self, stream_id):
    method p_i (line 124) | def p_i(self, stream_id):
    method p (line 137) | def p(self, stream_id=None):
    method p_playlist (line 174) | def p_playlist(self, stream_id=None):
    method download (line 179) | def download(self, **kwargs):

FILE: src/you_get/extractors/acfun.py
  class AcFun (line 6) | class AcFun(VideoExtractor):
    method prepare (line 20) | def prepare(self, **kwargs):
    method download (line 74) | def download(self, **kwargs):
    method acfun_download (line 156) | def acfun_download(self, url, output_dir='.', merge=True, info_only=Fa...

FILE: src/you_get/extractors/alive.py
  function alive_download (line 7) | def alive_download(url, output_dir = '.', merge = True, info_only = Fals...

FILE: src/you_get/extractors/archive.py
  function archive_download (line 7) | def archive_download(url, output_dir='.', merge=True, info_only=False, *...

FILE: src/you_get/extractors/baidu.py
  function baidu_get_song_data (line 11) | def baidu_get_song_data(sid):
  function baidu_get_song_url (line 23) | def baidu_get_song_url(data):
  function baidu_get_song_artist (line 27) | def baidu_get_song_artist(data):
  function baidu_get_song_album (line 31) | def baidu_get_song_album(data):
  function baidu_get_song_title (line 35) | def baidu_get_song_title(data):
  function baidu_get_song_lyric (line 39) | def baidu_get_song_lyric(data):
  function baidu_download_song (line 44) | def baidu_download_song(sid, output_dir='.', merge=True, info_only=False):
  function baidu_download_album (line 74) | def baidu_download_album(aid, output_dir='.', merge=True, info_only=False):
  function baidu_download (line 105) | def baidu_download(url, output_dir='.', stream_type=None, merge=True, in...
  function baidu_pan_download (line 165) | def baidu_pan_download(url):
  function baidu_pan_parse (line 228) | def baidu_pan_parse(html):
  function baidu_pan_gen_cookies (line 247) | def baidu_pan_gen_cookies(url, post_data=None):
  function baidu_pan_protected_share (line 257) | def baidu_pan_protected_share(url):
  function cookjar2hdr (line 306) | def cookjar2hdr(cookiejar):
  function query_cookiejar (line 313) | def query_cookiejar(cookiejar, name):
  function dict2triplet (line 319) | def dict2triplet(dictin):

FILE: src/you_get/extractors/bandcamp.py
  function bandcamp_download (line 7) | def bandcamp_download(url, output_dir='.', merge=True, info_only=False, ...

FILE: src/you_get/extractors/baomihua.py
  function baomihua_headers (line 9) | def baomihua_headers(referer=None, cookie=None):
  function baomihua_download_by_id (line 19) | def baomihua_download_by_id(id, title=None, output_dir='.', merge=True, ...
  function baomihua_download (line 34) | def baomihua_download(url, output_dir='.', merge=True, info_only=False, ...

FILE: src/you_get/extractors/bigthink.py
  class Bigthink (line 8) | class Bigthink(VideoExtractor):
    method get_streams_by_id (line 22) | def get_streams_by_id(account_number, video_id):
    method prepare (line 51) | def prepare(self, **kwargs):
    method extract (line 69) | def extract(self, **kwargs):

FILE: src/you_get/extractors/bilibili.py
  class Bilibili (line 10) | class Bilibili(VideoExtractor):
    method height_to_quality (line 48) | def height_to_quality(height, qn):
    method bilibili_headers (line 63) | def bilibili_headers(referer=None, cookie=None):
    method bilibili_api (line 74) | def bilibili_api(avid, cid, qn=0):
    method bilibili_audio_api (line 78) | def bilibili_audio_api(sid):
    method bilibili_audio_info_api (line 82) | def bilibili_audio_info_api(sid):
    method bilibili_audio_menu_info_api (line 86) | def bilibili_audio_menu_info_api(sid):
    method bilibili_audio_menu_song_api (line 90) | def bilibili_audio_menu_song_api(sid, ps=100):
    method bilibili_bangumi_api (line 94) | def bilibili_bangumi_api(avid, cid, ep_id, qn=0, fnval=16):
    method bilibili_interface_api (line 98) | def bilibili_interface_api(cid, qn=0):
    method bilibili_live_api (line 107) | def bilibili_live_api(cid):
    method bilibili_live_room_info_api (line 111) | def bilibili_live_room_info_api(room_id):
    method bilibili_live_room_init_api (line 115) | def bilibili_live_room_init_api(room_id):
    method bilibili_space_channel_api (line 119) | def bilibili_space_channel_api(mid, cid, pn=1, ps=100):
    method bilibili_space_collection_api (line 123) | def bilibili_space_collection_api(mid, cid, pn=1, ps=30):
    method bilibili_series_archives_api (line 127) | def bilibili_series_archives_api(mid, sid, pn=1, ps=100):
    method bilibili_space_favlist_api (line 131) | def bilibili_space_favlist_api(fid, pn=1, ps=20):
    method bilibili_space_video_api (line 135) | def bilibili_space_video_api(mid, pn=1, ps=50):
    method bilibili_vc_api (line 139) | def bilibili_vc_api(video_id):
    method bilibili_h_api (line 143) | def bilibili_h_api(doc_id):
    method url_size (line 147) | def url_size(url, faker=False, headers={},err_value=0):
    method prepare (line 153) | def prepare(self, **kwargs):
    method prepare_by_cid (line 510) | def prepare_by_cid(self,avid,cid,title,html_content,playinfo,playinfo_...
    method extract (line 601) | def extract(self, **kwargs):
    method download_playlist_by_url (line 621) | def download_playlist_by_url(self, url, **kwargs):

FILE: src/you_get/extractors/bokecc.py
  class BokeCC (line 7) | class BokeCC(VideoExtractor):
    method download_by_id (line 17) | def download_by_id(self, vid = '', title = None, output_dir='.', merge...
    method prepare (line 41) | def prepare(self, vid = '', title = None, **kwargs):
    method extract (line 72) | def extract(self, **kwargs):

FILE: src/you_get/extractors/cbs.py
  function cbs_download (line 9) | def cbs_download(url, output_dir='.', merge=True, info_only=False, **kwa...

FILE: src/you_get/extractors/ckplayer.py
  function ckplayer_get_info_by_xml (line 13) | def ckplayer_get_info_by_xml(ckinfo):
  function dictify (line 44) | def dictify(r,root=True):
  function ckplayer_download_by_xml (line 57) | def ckplayer_download_by_xml(ckinfo, output_dir = '.', merge = False, in...
  function ckplayer_download (line 82) | def ckplayer_download(url, output_dir = '.', merge = False, info_only = ...

FILE: src/you_get/extractors/cntv.py
  class CNTV (line 12) | class CNTV(VideoExtractor):
    method __init__ (line 24) | def __init__(self):
    method prepare (line 28) | def prepare(self, **kwargs):
  function cntv_download_by_id (line 40) | def cntv_download_by_id(rid, **kwargs):
  function cntv_download (line 44) | def cntv_download(url, **kwargs):

FILE: src/you_get/extractors/coub.py
  function coub_download (line 10) | def coub_download(url, output_dir='.', merge=True, info_only=False, **kw...
  function write_loop_file (line 41) | def write_loop_file(records_number, loop_file_path, file_name):
  function download_url (line 47) | def download_url(url, merge, output_dir, title, info_only):
  function fix_coub_video_file (line 54) | def fix_coub_video_file(file_path):
  function get_title_and_urls (line 60) | def get_title_and_urls(json_data):
  function get_coub_data (line 81) | def get_coub_data(html):
  function get_file_path (line 87) | def get_file_path(merge, output_dir, title, url):
  function get_loop_file_path (line 94) | def get_loop_file_path(title, output_dir):
  function cleanup_files (line 98) | def cleanup_files(files):

FILE: src/you_get/extractors/dailymotion.py
  function rebuilt_url (line 8) | def rebuilt_url(url):
  function dailymotion_download (line 13) | def dailymotion_download(url, output_dir='.', merge=True, info_only=Fals...

FILE: src/you_get/extractors/douban.py
  function douban_download (line 8) | def douban_download(url, output_dir = '.', merge = True, info_only = Fal...

FILE: src/you_get/extractors/douyin.py
  function get_value (line 19) | def get_value(source: dict, path):
  function douyin_download_by_url (line 40) | def douyin_download_by_url(url, **kwargs):

FILE: src/you_get/extractors/douyutv.py
  function douyutv_video_download (line 16) | def douyutv_video_download(url, output_dir='.', merge=True, info_only=Fa...
  function douyutv_download (line 43) | def douyutv_download(url, output_dir='.', merge=True, info_only=False, *...

FILE: src/you_get/extractors/ehow.py
  function ehow_download (line 7) | def ehow_download(url, output_dir = '.', merge = True, info_only = False...

FILE: src/you_get/extractors/embed.py
  function embed_download (line 67) | def embed_download(url, output_dir = '.', merge = True, info_only = Fals...

FILE: src/you_get/extractors/facebook.py
  function facebook_download (line 7) | def facebook_download(url, output_dir='.', merge=True, info_only=False, ...

FILE: src/you_get/extractors/fc2video.py
  function makeMimi (line 10) | def makeMimi(upid):
  function fc2video_download_by_upid (line 19) | def fc2video_download_by_upid(upid, output_dir = '.', merge = True, info...
  function fc2video_download (line 45) | def fc2video_download(url, output_dir = '.', merge = True, info_only = F...

FILE: src/you_get/extractors/flickr.py
  function get_content_headered (line 60) | def get_content_headered(url):
  function get_photoset_id (line 63) | def get_photoset_id(url, page):
  function get_photo_id (line 66) | def get_photo_id(url, page):
  function get_gallery_id (line 69) | def get_gallery_id(url, page):
  function get_api_key (line 72) | def get_api_key(page):
  function get_NSID (line 83) | def get_NSID(url, page):
  function flickr_download_main (line 141) | def flickr_download_main(url, output_dir = '.', merge = False, info_only...
  function fetch_photo_url_list (line 161) | def fetch_photo_url_list(url, size):
  function fetch_photo_url_list_impl (line 168) | def fetch_photo_url_list_impl(url, size, method, id_field, id_parse_func...
  function get_orig_video_source (line 194) | def get_orig_video_source(api_key, pid, secret):
  function get_url_of_largest (line 201) | def get_url_of_largest(info, api_key, size):
  function get_single_photo_url (line 213) | def get_single_photo_url(url):

FILE: src/you_get/extractors/freesound.py
  function freesound_download (line 7) | def freesound_download(url, output_dir = '.', merge = True, info_only = ...

FILE: src/you_get/extractors/funshion.py
  class KBaseMapping (line 16) | class KBaseMapping:
    method __init__ (line 17) | def __init__(self, base=62):
    method mapping (line 27) | def mapping(self, num):
  class Funshion (line 35) | class Funshion(VideoExtractor):
    method fetch_magic (line 53) | def fetch_magic(cls, url):
    method get_coeff (line 93) | def get_coeff(cls, magic_list):
    method funshion_decrypt (line 111) | def funshion_decrypt(cls, a_bytes, coeff):
    method funshion_decrypt_str (line 129) | def funshion_decrypt_str(cls, a_str, coeff):
    method checksum (line 140) | def checksum(cls, sha1_str):
    method get_cdninfo (line 151) | def get_cdninfo(cls, hashid):
    method dec_playinfo (line 157) | def dec_playinfo(cls, info, coeff):
    method prepare (line 168) | def prepare(self, **kwargs):
  function funshion_download (line 205) | def funshion_download(url, **kwargs):

FILE: src/you_get/extractors/giphy.py
  function giphy_download (line 7) | def giphy_download(url, output_dir='.', merge=True, info_only=False, **k...

FILE: src/you_get/extractors/google.py
  function google_download (line 43) | def google_download(url, output_dir = '.', merge = True, info_only = Fal...

FILE: src/you_get/extractors/heavymusic.py
  function heavymusic_download (line 7) | def heavymusic_download(url, output_dir='.', merge=True, info_only=False...

FILE: src/you_get/extractors/huomaotv.py
  function get_mobile_room_url (line 8) | def get_mobile_room_url(room_id):
  function get_m3u8_url (line 12) | def get_m3u8_url(stream_id):
  function huomaotv_download (line 16) | def huomaotv_download(url, output_dir='.', merge=True, info_only=False, ...

FILE: src/you_get/extractors/icourses.py
  function icourses_download (line 16) | def icourses_download(url, output_dir='.', **kwargs):
  function get_course_title (line 49) | def get_course_title(url, course_type, page=None):
  function public_course_playlist (line 67) | def public_course_playlist(url, page=None):
  function public_course_get_title (line 77) | def public_course_get_title(url, page=None):
  function icourses_playlist_download (line 87) | def icourses_playlist_download(url, output_dir='.', **kwargs):
  function icourses_playlist_new (line 121) | def icourses_playlist_new(url, page=None):
  function get_playlist (line 169) | def get_playlist(res_id, course_id):
  class ICousesExactor (line 177) | class ICousesExactor(object):
    method __init__ (line 182) | def __init__(self, url):
    method get_title (line 193) | def get_title(self):
    method get_flashvars (line 205) | def get_flashvars(self):
    method api_req (line 221) | def api_req(self, url):
    method basic_extract (line 236) | def basic_extract(self):
    method do_extract (line 242) | def do_extract(self, received=0):
    method update_url (line 246) | def update_url(self, received):
    method get_date_str (line 260) | def get_date_str(self):
    method generate_url (line 269) | def generate_url(self, received):
    method get_sign (line 287) | def get_sign(self, media_url):
    method get_media_host (line 296) | def get_media_host(self, ori_host):
  function download_urls_icourses (line 302) | def download_urls_icourses(url, title, ext, total_size, output_dir='.', ...
  function url_save_icourses (line 321) | def url_save_icourses(url, filepath, bar, total_size, dyn_callback=None,...

FILE: src/you_get/extractors/ifeng.py
  function ifeng_download_by_id (line 7) | def ifeng_download_by_id(id, title = None, output_dir = '.', merge = Tru...
  function ifeng_download (line 23) | def ifeng_download(url, output_dir = '.', merge = True, info_only = Fals...

FILE: src/you_get/extractors/imgur.py
  class Imgur (line 7) | class Imgur(VideoExtractor):
    method prepare (line 15) | def prepare(self, **kwargs):
    method extract (line 69) | def extract(self, **kwargs):

FILE: src/you_get/extractors/infoq.py
  class Infoq (line 8) | class Infoq(VideoExtractor):
    method prepare (line 17) | def prepare(self, **kwargs):
    method extract (line 47) | def extract(self, **kwargs):

FILE: src/you_get/extractors/instagram.py
  function instagram_download (line 7) | def instagram_download(url, output_dir='.', merge=True, info_only=False,...

FILE: src/you_get/extractors/interest.py
  function interest_download (line 6) | def interest_download(url, output_dir='.', merge=True, info_only=False, ...

FILE: src/you_get/extractors/iqilu.py
  function iqilu_download (line 8) | def iqilu_download(url, output_dir = '.', merge = False, info_only = Fal...

FILE: src/you_get/extractors/iqiyi.py
  function getVMS (line 84) | def getVMS(tvid, vid):
  class Iqiyi (line 92) | class Iqiyi(VideoExtractor):
    method download_playlist_by_url (line 117) | def download_playlist_by_url(self, url, **kwargs):
    method prepare (line 126) | def prepare(self, **kwargs):
    method download (line 157) | def download(self, **kwargs):

FILE: src/you_get/extractors/iwara.py
  function iwara_download (line 21) | def iwara_download(url, output_dir='.', merge=True, info_only=False, **k...
  function download_playlist_by_url (line 40) | def download_playlist_by_url( url, **kwargs):

FILE: src/you_get/extractors/ixigua.py
  function ixigua_download (line 16) | def ixigua_download(url, output_dir='.', merge=True, info_only=False, st...
  function convertStreams (line 91) | def convertStreams(video_list, audio_url):
  function ixigua_download_playlist_by_url (line 109) | def ixigua_download_playlist_by_url(url, output_dir='.', merge=True, inf...

FILE: src/you_get/extractors/joy.py
  function video_info (line 7) | def video_info(channel_id, program_id, volumn_id):
  function joy_download (line 26) | def joy_download(url, output_dir = '.', merge = True, info_only = False,...

FILE: src/you_get/extractors/kakao.py
  function kakao_download (line 9) | def kakao_download(url, output_dir='.', info_only=False,  **kwargs):

FILE: src/you_get/extractors/khan.py
  function khan_download (line 8) | def khan_download(url, output_dir='.', merge=True, info_only=False, **kw...

FILE: src/you_get/extractors/ku6.py
  function ku6_download_by_id (line 10) | def ku6_download_by_id(id, title = None, output_dir = '.', merge = True,...
  function ku6_download (line 29) | def ku6_download(url, output_dir = '.', merge = True, info_only = False,...
  function baidu_ku6 (line 66) | def baidu_ku6(url):

FILE: src/you_get/extractors/kuaishou.py
  function kuaishou_download_by_url (line 12) | def kuaishou_download_by_url(url, info_only=False, **kwargs):

FILE: src/you_get/extractors/kugou.py
  function kugou_download (line 11) | def kugou_download(url, output_dir=".", merge=True, info_only=False, **k...
  function kugou_download_by_hash (line 31) | def kugou_download_by_hash(url, output_dir='.', merge=True, info_only=Fa...
  function kugou_download_playlist (line 51) | def kugou_download_playlist(url, output_dir='.', merge=True, info_only=F...

FILE: src/you_get/extractors/kuwo.py
  function kuwo_download_by_rid (line 8) | def kuwo_download_by_rid(rid, output_dir = '.', merge = True, info_only ...
  function kuwo_playlist_download (line 19) | def kuwo_playlist_download(url, output_dir = '.', merge = True, info_onl...
  function kuwo_download (line 27) | def kuwo_download(url, output_dir = '.', merge = True, info_only = False...

FILE: src/you_get/extractors/le.py
  function get_timestamp (line 14) | def get_timestamp():
  function get_key (line 22) | def get_key(t):
  function calcTimeKey (line 31) | def calcTimeKey(t):
  function decode (line 38) | def decode(data):
  function video_info (line 58) | def video_info(vid, **kwargs):
  function letv_download_by_vid (line 94) | def letv_download_by_vid(vid, title, output_dir='.', merge=True, info_on...
  function letvcloud_download_by_vu (line 106) | def letvcloud_download_by_vu(vu, uu, title=None, output_dir='.', merge=T...
  function letvcloud_download (line 128) | def letvcloud_download(url, output_dir='.', merge=True, info_only=False):
  function letv_download (line 136) | def letv_download(url, output_dir='.', merge=True, info_only=False, **kw...

FILE: src/you_get/extractors/lizhi.py
  function get_url (line 12) | def get_url(ep):
  function lizhi_extract_playlist_info (line 21) | def lizhi_extract_playlist_info(radio_id):
  function lizhi_download_audio (line 37) | def lizhi_download_audio(audio_id, title, url, output_dir='.', info_only...
  function lizhi_download_playlist (line 43) | def lizhi_download_playlist(url, output_dir='.', info_only=False, **kwar...
  function lizhi_download (line 51) | def lizhi_download(url, output_dir='.', info_only=False, **kwargs):

FILE: src/you_get/extractors/longzhu.py
  function longzhu_download (line 16) | def longzhu_download(url, output_dir = '.', merge=True, info_only=False,...

FILE: src/you_get/extractors/lrts.py
  function lrts_download (line 9) | def lrts_download(url, output_dir='.', merge=True, info_only=False, **kw...

FILE: src/you_get/extractors/magisto.py
  function magisto_download (line 8) | def magisto_download(url, output_dir='.', merge=True, info_only=False, *...

FILE: src/you_get/extractors/metacafe.py
  function metacafe_download (line 9) | def metacafe_download(url, output_dir = '.', merge = True, info_only = F...

FILE: src/you_get/extractors/mgtv.py
  class MGTV (line 17) | class MGTV(VideoExtractor):
    method tk2 (line 34) | def tk2(self):
    method get_vid_from_url (line 44) | def get_vid_from_url(url):
    method get_mgtv_real_url (line 55) | def get_mgtv_real_url(self, url):
    method download_playlist_by_url (line 79) | def download_playlist_by_url(self, url, **kwargs):
    method prepare (line 93) | def prepare(self, **kwargs):
    method extract (line 146) | def extract(self, **kwargs):
    method download (line 159) | def download(self, **kwargs):

FILE: src/you_get/extractors/miaopai.py
  function miaopai_download_by_fid (line 20) | def miaopai_download_by_fid(fid, output_dir = '.', merge = False, info_o...
  function miaopai_download_by_wbmp (line 40) | def miaopai_download_by_wbmp(wbmp_url, fid, output_dir='.', merge=False,...
  function miaopai_download_story (line 68) | def miaopai_download_story(url, output_dir='.', merge=False, info_only=F...
  function miaopai_download_h5api (line 81) | def miaopai_download_h5api(url, output_dir='.', merge=False, info_only=F...
  function miaopai_download_direct (line 128) | def miaopai_download_direct(url, output_dir='.', merge=False, info_only=...
  function miaopai_download (line 147) | def miaopai_download(url, output_dir='.', merge=False, info_only=False, ...

FILE: src/you_get/extractors/miomio.py
  function miomio_download (line 11) | def miomio_download(url, output_dir = '.', merge = True, info_only = Fal...
  function sina_xml_to_url_list (line 41) | def sina_xml_to_url_list(xml_data):

FILE: src/you_get/extractors/missevan.py
  class _NoMatchException (line 37) | class _NoMatchException(Exception):
  class _Dispatcher (line 41) | class _Dispatcher(object):
    method __init__ (line 43) | def __init__(self):
    method register (line 46) | def register(self, patterns, fun):
    method endpoint (line 53) | def endpoint(self, *patterns):
    method test (line 60) | def test(self, url):
    method dispatch (line 63) | def dispatch(self, url, *args, **kwargs):
  function _get_resource_uri (line 88) | def _get_resource_uri(data, stream_type):
  function is_covers_stream (line 98) | def is_covers_stream(stream):
  function get_file_extension (line 102) | def get_file_extension(file_path, default=''):
  function best_quality_stream_id (line 110) | def best_quality_stream_id(streams, stream_types):
  class MissEvanWithStream (line 118) | class MissEvanWithStream(VideoExtractor):
    method __init__ (line 123) | def __init__(self, *args):
    method create (line 129) | def create(cls, title, streams, *, streams_sorted=None):
    method set_danmaku (line 137) | def set_danmaku(self, danmaku):
    method _setup_streams_sorted (line 142) | def _setup_streams_sorted(streams):
    method download (line 151) | def download(self, **kwargs):
    method unsupported_method (line 159) | def unsupported_method(self, *args, **kwargs):
  class MissEvan (line 168) | class MissEvan(VideoExtractor):
    method __init__ (line 173) | def __init__(self, *args):
    method prepare_sound (line 183) | def prepare_sound(self, sid, **kwargs):
    method setup_streams (line 198) | def setup_streams(cls, sound):
    method prepare (line 214) | def prepare(self, **kwargs):
    method download_covers (line 226) | def download_covers(title, streams, **kwargs):
    method download_album (line 240) | def download_album(self, aid, **kwargs):
    method download_drama (line 268) | def download_drama(self, did, **kwargs):
    method download_playlist_by_url (line 288) | def download_playlist_by_url(self, url, **kwargs):
    method download_by_url (line 296) | def download_by_url(self, url, **kwargs):
    method download (line 302) | def download(self, **kwargs):
    method extract (line 307) | def extract(self, **kwargs):
    method _get_content (line 326) | def _get_content(self, url):
    method _get_json (line 329) | def _get_json(self, url):
    method url_album_api (line 334) | def url_album_api(album_id):
    method url_sound_api (line 339) | def url_sound_api(sound_id):
    method url_drama_api (line 344) | def url_drama_api(drama_id):
    method url_danmaku_api (line 349) | def url_danmaku_api(sound_id):
    method url_resource (line 353) | def url_resource(uri):

FILE: src/you_get/extractors/mixcloud.py
  function mixcloud_download (line 7) | def mixcloud_download(url, output_dir='.', merge=True, info_only=False, ...

FILE: src/you_get/extractors/mtv81.py
  function mtv81_download (line 12) | def mtv81_download(url, output_dir='.', merge=True, info_only=False, **k...

FILE: src/you_get/extractors/nanagogo.py
  function nanagogo_download (line 8) | def nanagogo_download(url, output_dir='.', merge=True, info_only=False, ...

FILE: src/you_get/extractors/naver.py
  function naver_download_by_url (line 15) | def naver_download_by_url(url, output_dir='.', merge=True, info_only=Fal...

FILE: src/you_get/extractors/netease.py
  function netease_hymn (line 14) | def netease_hymn():
  function netease_cloud_music_download (line 24) | def netease_cloud_music_download(url, output_dir='.', merge=True, info_o...
  function netease_lyric_download (line 95) | def netease_lyric_download(song, lyric, output_dir='.', info_only=False,...
  function netease_video_download (line 106) | def netease_video_download(vinfo, output_dir='.', info_only=False):
  function netease_song_download (line 113) | def netease_song_download(song, output_dir='.', info_only=False, playlis...
  function netease_download_common (line 130) | def netease_download_common(title, url_best, output_dir, info_only):
  function netease_download (line 137) | def netease_download(url, output_dir = '.', merge = True, info_only = Fa...
  function encrypted_id (line 168) | def encrypted_id(dfsId):
  function make_url (line 183) | def make_url(songNet, dfsId):

FILE: src/you_get/extractors/nicovideo.py
  function nicovideo_login (line 7) | def nicovideo_login(user, password):
  function nicovideo_download (line 12) | def nicovideo_download(url, output_dir='.', merge=True, info_only=False,...

FILE: src/you_get/extractors/pinterest.py
  class Pinterest (line 6) | class Pinterest(VideoExtractor):
    method prepare (line 17) | def prepare(self, **kwargs):
    method extract (line 35) | def extract(self, **kwargs):

FILE: src/you_get/extractors/pixnet.py
  function pixnet_download (line 10) | def pixnet_download(url, output_dir = '.', merge = True, info_only = Fal...

FILE: src/you_get/extractors/pptv.py
  function lshift (line 16) | def lshift(a, b):
  function rshift (line 18) | def rshift(a, b):
  function le32_pack (line 23) | def le32_pack(b_str):
  function tea_core (line 31) | def tea_core(data, key_seg):
  function ran_hex (line 56) | def ran_hex(size):
  function zpad (line 62) | def zpad(b_str, size):
  function gen_key (line 66) | def gen_key(t):
  function unpack_le32 (line 73) | def unpack_le32(i32):
  function get_elem (line 84) | def get_elem(elem, tag):
  function get_attr (line 87) | def get_attr(elem, attr):
  function get_text (line 90) | def get_text(elem):
  function shift_time (line 93) | def shift_time(time_str):
  function parse_pptv_xml (line 97) | def parse_pptv_xml(dom):
  function merge_meta (line 143) | def merge_meta(item_mlist, stream_mlist, segs_mlist):
  function make_url (line 168) | def make_url(stream):
  class PPTV (line 181) | class PPTV(VideoExtractor):
    method prepare (line 191) | def prepare(self, **kwargs):

FILE: src/you_get/extractors/qie.py
  class QiE (line 10) | class QiE(VideoExtractor):
    method get_room_id_from_url (line 25) | def get_room_id_from_url(self, match_id):
    method get_vid_from_url (line 35) | def get_vid_from_url(self, url):
    method download_playlist_by_url (line 50) | def download_playlist_by_url(self, url, **kwargs):
    method prepare (line 53) | def prepare(self, **kwargs):
    method extract (line 79) | def extract(self, **kwargs):

FILE: src/you_get/extractors/qie_video.py
  class QieVideo (line 8) | class QieVideo(VideoExtractor):
    method get_vid_from_url (line 20) | def get_vid_from_url(self):
    method get_title (line 26) | def get_title(self):
    method prepare (line 32) | def prepare(self, **kwargs):
    method extract (line 52) | def extract(self, **kwargs):
  function general_m3u8_extractor (line 57) | def general_m3u8_extractor(url):

FILE: src/you_get/extractors/qingting.py
  class Qingting (line 11) | class Qingting(VideoExtractor):
    method prepare (line 24) | def prepare(self, **kwargs):
    method extract (line 41) | def extract(self, **kwargs):
  function qingting_download_by_url (line 45) | def qingting_download_by_url(url, **kwargs):

FILE: src/you_get/extractors/qq.py
  function qq_download_by_vid (line 14) | def qq_download_by_vid(vid, title, output_dir='.', merge=True, info_only...
  function kg_qq_download_by_shareid (line 75) | def kg_qq_download_by_shareid(shareid, output_dir='.', info_only=False, ...
  function qq_download (line 113) | def qq_download(url, output_dir='.', merge=True, info_only=False, **kwar...

FILE: src/you_get/extractors/qq_egame.py
  function qq_egame_download (line 12) | def qq_egame_download(url,

FILE: src/you_get/extractors/showroom.py
  function showroom_get_roomid_by_room_url_key (line 11) | def showroom_get_roomid_by_room_url_key(room_url_key):
  function showroom_download_by_room_id (line 26) | def showroom_download_by_room_id(room_id, output_dir = '.', merge = Fals...
  function showroom_download (line 60) | def showroom_download(url, output_dir = '.', merge = False, info_only = ...

FILE: src/you_get/extractors/sina.py
  function api_req (line 14) | def api_req(vid):
  function video_info (line 22) | def video_info(xml):
  function sina_download_by_vid (line 41) | def sina_download_by_vid(vid, title=None, output_dir='.', merge=True, in...
  function sina_download_by_vkey (line 54) | def sina_download_by_vkey(vkey, title=None, output_dir='.', merge=True, ...
  function sina_zxt (line 66) | def sina_zxt(url, output_dir='.', merge=True, info_only=False, **kwargs):
  function sina_download (line 94) | def sina_download(url, output_dir='.', merge=True, info_only=False, **kw...

FILE: src/you_get/extractors/sohu.py
  function real_url (line 16) | def real_url(fileName, key, ch):
  function sohu_download (line 21) | def sohu_download(url, output_dir='.', merge=True, info_only=False, extr...

FILE: src/you_get/extractors/soundcloud.py
  function get_sndcd_apikey (line 10) | def get_sndcd_apikey():
  function get_resource_info (line 18) | def get_resource_info(resource_url, client_id):
  function sndcd_download (line 45) | def sndcd_download(url, output_dir='.', merge=True, info_only=False, **k...

FILE: src/you_get/extractors/suntv.py
  function suntv_download (line 9) | def suntv_download(url, output_dir = '.', merge = True, info_only = Fals...

FILE: src/you_get/extractors/ted.py
  function ted_download (line 8) | def ted_download(url, output_dir='.', merge=True, info_only=False, **kwa...

FILE: src/you_get/extractors/theplatform.py
  function theplatform_download_by_pid (line 5) | def theplatform_download_by_pid(pid, title, output_dir='.', merge=True, ...

FILE: src/you_get/extractors/tiktok.py
  function tiktok_download (line 7) | def tiktok_download(url, output_dir='.', merge=True, info_only=False, **...

FILE: src/you_get/extractors/toutiao.py
  function random_with_n_digits (line 19) | def random_with_n_digits(n):
  function sign_video_url (line 23) | def sign_video_url(vid):
  class ToutiaoVideoInfo (line 36) | class ToutiaoVideoInfo(object):
    method __init__ (line 38) | def __init__(self):
    method __str__ (line 47) | def __str__(self):
  function get_file_by_vid (line 51) | def get_file_by_vid(video_id):
  function toutiao_download (line 73) | def toutiao_download(url, output_dir='.', merge=True, info_only=False, *...

FILE: src/you_get/extractors/tucao.py
  function tucao_single_download (line 19) | def tucao_single_download(type_link, title, output_dir=".", merge=True, ...
  function tucao_download (line 48) | def tucao_download(url, output_dir=".", merge=True, info_only=False, **k...

FILE: src/you_get/extractors/tudou.py
  function tudou_download_by_iid (line 9) | def tudou_download_by_iid(iid, title, output_dir = '.', merge = True, in...
  function tudou_download_by_id (line 25) | def tudou_download_by_id(id, title, output_dir = '.', merge = True, info...
  function tudou_download (line 35) | def tudou_download(url, output_dir = '.', merge = True, info_only = Fals...
  function parse_playlist (line 73) | def parse_playlist(url):
  function parse_plist (line 91) | def parse_plist(url):
  function tudou_download_playlist (line 97) | def tudou_download_playlist(url, output_dir = '.', merge = True, info_on...

FILE: src/you_get/extractors/tumblr.py
  function tumblr_download (line 10) | def tumblr_download(url, output_dir='.', merge=True, info_only=False, **...

FILE: src/you_get/extractors/twitter.py
  function extract_m3u (line 8) | def extract_m3u(source):
  function twitter_download (line 17) | def twitter_download(url, output_dir='.', merge=True, info_only=False, *...

FILE: src/you_get/extractors/ucas.py
  function dictify (line 17) | def dictify(r,root=True):
  function _get_video_query_url (line 30) | def _get_video_query_url(resourceID):
  function _get_virtualPath (line 50) | def _get_virtualPath(video_query_url):
  function _get_video_list (line 57) | def _get_video_list(resourceID):
  function _ucas_get_url_lists_by_resourceID (line 82) | def _ucas_get_url_lists_by_resourceID(resourceID):
  function ucas_download_single (line 101) | def ucas_download_single(url, output_dir = '.', merge = False, info_only...
  function ucas_download_playlist (line 118) | def ucas_download_playlist(url, output_dir = '.', merge = False, info_on...
  function ucas_download (line 128) | def ucas_download(url, output_dir = '.', merge = False, info_only = Fals...

FILE: src/you_get/extractors/universal.py
  function universal_download (line 8) | def universal_download(url, output_dir='.', merge=True, info_only=False,...

FILE: src/you_get/extractors/veoh.py
  function veoh_download (line 7) | def veoh_download(url, output_dir = '.', merge = False, info_only = Fals...
  function veoh_download_by_id (line 18) | def veoh_download_by_id(item_id, output_dir = '.', merge = False, info_o...

FILE: src/you_get/extractors/vimeo.py
  function vimeo_download_by_channel (line 15) | def vimeo_download_by_channel(url, output_dir='.', merge=False, info_onl...
  function vimeo_download_by_channel_id (line 22) | def vimeo_download_by_channel_id(channel_id, output_dir='.', merge=False...
  class VimeoExtractor (line 38) | class VimeoExtractor(VideoExtractor):
    method prepare (line 49) | def prepare(self, **kwargs):
    method extract (line 75) | def extract(self, **kwargs):
  function vimeo_download_by_id (line 134) | def vimeo_download_by_id(id, title=None, output_dir='.', merge=True, inf...
  function vimeo_download (line 138) | def vimeo_download(url, output_dir='.', merge=True, info_only=False, **k...

FILE: src/you_get/extractors/vk.py
  function get_video_info (line 8) | def get_video_info(url):
  function get_video_from_user_videolist (line 25) | def get_video_from_user_videolist(url):
  function get_image_info (line 38) | def get_image_info(url):
  function vk_download (line 53) | def vk_download(url, output_dir='.', stream_type=None, merge=True, info_...

FILE: src/you_get/extractors/w56.py
  function w56_download_by_id (line 11) | def w56_download_by_id(id, title = None, output_dir = '.', merge = True,...
  function w56_download (line 29) | def w56_download(url, output_dir = '.', merge = True, info_only = False,...

FILE: src/you_get/extractors/wanmen.py
  function _wanmen_get_json_api_content_by_courseID (line 11) | def _wanmen_get_json_api_content_by_courseID(courseID):
  function _wanmen_get_title_by_json_topic_part (line 18) | def _wanmen_get_title_by_json_topic_part(json_content, tIndex, pIndex):
  function _wanmen_get_boke_id_by_json_topic_part (line 28) | def _wanmen_get_boke_id_by_json_topic_part(json_content, tIndex, pIndex):
  function wanmen_download_by_course (line 37) | def wanmen_download_by_course(json_api_content, output_dir='.', merge=Tr...
  function wanmen_download_by_course_topic (line 54) | def wanmen_download_by_course_topic(json_api_content, tIndex, output_dir...
  function wanmen_download_by_course_topic_part (line 69) | def wanmen_download_by_course_topic_part(json_api_content, tIndex, pInde...
  function wanmen_download (line 88) | def wanmen_download(url, output_dir='.', merge=True, info_only=False, **...

FILE: src/you_get/extractors/ximalaya.py
  function ximalaya_download_by_id (line 16) | def ximalaya_download_by_id(id, title = None, output_dir = '.', info_onl...
  function ximalaya_download (line 52) | def ximalaya_download(url, output_dir = '.', info_only = False, stream_i...
  function ximalaya_download_page (line 59) | def ximalaya_download_page(playlist_url, output_dir = '.', info_only = F...
  function ximalaya_download_playlist (line 72) | def ximalaya_download_playlist(url, output_dir='.', info_only=False, str...
  function print_stream_info (line 89) | def print_stream_info(stream_id):

FILE: src/you_get/extractors/xinpianchang.py
  class Xinpianchang (line 9) | class Xinpianchang(VideoExtractor):
    method prepare (line 20) | def prepare(self, **kwargs):

FILE: src/you_get/extractors/yixia.py
  function miaopai_download_by_smid (line 11) | def miaopai_download_by_smid(smid, output_dir = '.', merge = True, info_...
  function yixia_miaopai_download_by_scid (line 29) | def yixia_miaopai_download_by_scid(scid, output_dir = '.', merge = True,...
  function yixia_xiaokaxiu_download_by_scid (line 47) | def yixia_xiaokaxiu_download_by_scid(scid, output_dir = '.', merge = Tru...
  function yixia_download (line 65) | def yixia_download(url, output_dir = '.', merge = True, info_only = Fals...

FILE: src/you_get/extractors/yizhibo.py
  function yizhibo_download (line 8) | def yizhibo_download(url, output_dir = '.', merge = True, info_only = Fa...

FILE: src/you_get/extractors/youku.py
  function fetch_cna (line 13) | def fetch_cna():
  class Youku (line 38) | class Youku(VideoExtractor):
    method __init__ (line 65) | def __init__(self):
    method youku_ups (line 85) | def youku_ups(self):
    method change_cdn (line 109) | def change_cdn(cls, url):
    method get_vid_from_url (line 123) | def get_vid_from_url(self):
    method get_vid_from_page (line 138) | def get_vid_from_page(self):
    method prepare (line 146) | def prepare(self, **kwargs):
  function youku_download_playlist_by_url (line 242) | def youku_download_playlist_by_url(url, **kwargs):
  function youku_download_by_url (line 306) | def youku_download_by_url(url, **kwargs):
  function youku_download_by_vid (line 310) | def youku_download_by_vid(vid, **kwargs):

FILE: src/you_get/extractors/youtube.py
  class YouTube (line 15) | class YouTube(VideoExtractor):
    method dethrottle (line 78) | def dethrottle(js, url):
    method s_to_sig (line 108) | def s_to_sig(js, s):
    method chunk_by_range (line 131) | def chunk_by_range(url, size):
    method get_url_from_vid (line 141) | def get_url_from_vid(vid):
    method get_vid_from_url (line 144) | def get_vid_from_url(url):
    method get_playlist_id_from_url (line 155) | def get_playlist_id_from_url(url):
    method download_playlist_by_url (line 161) | def download_playlist_by_url(self, url, **kwargs):
    method check_playability_response (line 191) | def check_playability_response(self, ytInitialPlayerResponse):
    method prepare (line 208) | def prepare(self, **kwargs):
    method extract (line 417) | def extract(self, **kwargs):

FILE: src/you_get/extractors/zhanqi.py
  function zhanqi_download (line 10) | def zhanqi_download(url, output_dir = '.', merge = True, info_only = Fal...
  function zhanqi_live (line 22) | def zhanqi_live(room_id, merge=True, output_dir='.', info_only=False, **...
  function zhanqi_video (line 38) | def zhanqi_video(video_id, output_dir='.', info_only=False, merge=True, ...

FILE: src/you_get/extractors/zhibo.py
  function zhibo_vedio_download (line 7) | def zhibo_vedio_download(url, output_dir = '.', merge = True, info_only ...
  function zhibo_download (line 27) | def zhibo_download(url, output_dir = '.', merge = True, info_only = Fals...

FILE: src/you_get/extractors/zhihu.py
  function zhihu_download (line 9) | def zhihu_download(url, output_dir='.', merge=True, info_only=False, **k...
  function zhihu_download_playlist (line 50) | def zhihu_download_playlist(url, output_dir='.', merge=True, info_only=F...

FILE: src/you_get/json_output.py
  function output (line 7) | def output(video_extractor, pretty_print=True):
  class VideoExtractor (line 37) | class VideoExtractor(object):
  function print_info (line 40) | def print_info(site_info=None, title=None, type=None, size=None):
  function download_urls (line 49) | def download_urls(urls=None, title=None, ext=None, total_size=None, refe...

FILE: src/you_get/processor/ffmpeg.py
  function get_usable_ffmpeg (line 19) | def get_usable_ffmpeg(cmd):
  function has_ffmpeg_installed (line 42) | def has_ffmpeg_installed():
  function generate_concat_list (line 47) | def generate_concat_list(files, output):
  function ffmpeg_concat_av (line 57) | def ffmpeg_concat_av(files, output, ext):
  function ffmpeg_convert_ts_to_mkv (line 82) | def ffmpeg_convert_ts_to_mkv(files, output='output.mkv'):
  function ffmpeg_concat_mp4_to_mpg (line 92) | def ffmpeg_concat_mp4_to_mpg(files, output='output.mpg'):
  function ffmpeg_concat_ts_to_mkv (line 129) | def ffmpeg_concat_ts_to_mkv(files, output='output.mkv'):
  function ffmpeg_concat_flv_to_mp4 (line 147) | def ffmpeg_concat_flv_to_mp4(files, output='output.mp4'):
  function ffmpeg_concat_mp3_to_mp3 (line 188) | def ffmpeg_concat_mp3_to_mp3(files, output='output.mp3'):
  function ffmpeg_concat_mp4_to_mp4 (line 201) | def ffmpeg_concat_mp4_to_mp4(files, output='output.mp4'):
  function ffmpeg_download_stream (line 240) | def ffmpeg_download_stream(files, title, ext, params={}, output_dir='.',...
  function ffmpeg_concat_audio_and_video (line 285) | def ffmpeg_concat_audio_and_video(files, output, ext):
  function ffprobe_get_media_duration (line 303) | def ffprobe_get_media_duration(file):

FILE: src/you_get/processor/join_flv.py
  class ECMAObject (line 31) | class ECMAObject:
    method __init__ (line 32) | def __init__(self, max_number):
    method put (line 36) | def put(self, k, v):
    method get (line 39) | def get(self, k):
    method set (line 41) | def set(self, k, v):
    method keys (line 49) | def keys(self):
    method __str__ (line 51) | def __str__(self):
    method __eq__ (line 53) | def __eq__(self, other):
  function read_amf_number (line 56) | def read_amf_number(stream):
  function read_amf_boolean (line 59) | def read_amf_boolean(stream):
  function read_amf_string (line 64) | def read_amf_string(stream):
  function read_amf_object (line 74) | def read_amf_object(stream):
  function read_amf_mixed_array (line 85) | def read_amf_mixed_array(stream):
  function read_amf_array (line 101) | def read_amf_array(stream):
  function read_amf (line 117) | def read_amf(stream):
  function write_amf_number (line 120) | def write_amf_number(stream, v):
  function write_amf_boolean (line 123) | def write_amf_boolean(stream, v):
  function write_amf_string (line 129) | def write_amf_string(stream, s):
  function write_amf_object (line 134) | def write_amf_object(stream, o):
  function write_amf_mixed_array (line 141) | def write_amf_mixed_array(stream, o):
  function write_amf_array (line 149) | def write_amf_array(stream, o):
  function write_amf (line 172) | def write_amf(stream, v):
  function read_int (line 184) | def read_int(stream):
  function read_uint (line 187) | def read_uint(stream):
  function write_uint (line 190) | def write_uint(stream, n):
  function read_byte (line 193) | def read_byte(stream):
  function write_byte (line 196) | def write_byte(stream, b):
  function read_unsigned_medium_int (line 199) | def read_unsigned_medium_int(stream):
  function read_tag (line 203) | def read_tag(stream):
  function write_tag (line 228) | def write_tag(stream, tag):
  function read_flv_header (line 242) | def read_flv_header(stream):
  function write_flv_header (line 251) | def write_flv_header(stream):
  function read_meta_data (line 257) | def read_meta_data(stream):
  function read_meta_tag (line 262) | def read_meta_tag(tag):
  function write_meta_tag (line 274) | def write_meta_tag(stream, meta_type, meta_data):
  function guess_output (line 286) | def guess_output(inputs):
  function concat_flv (line 295) | def concat_flv(flvs, output = None):
  function usage (line 338) | def usage():
  function main (line 341) | def main():

FILE: src/you_get/processor/join_mp4.py
  function skip (line 12) | def skip(stream, n):
  function skip_zeros (line 15) | def skip_zeros(stream, n):
  function read_int (line 18) | def read_int(stream):
  function read_uint (line 21) | def read_uint(stream):
  function write_uint (line 24) | def write_uint(stream, n):
  function write_ulong (line 27) | def write_ulong(stream, n):
  function read_ushort (line 30) | def read_ushort(stream):
  function read_ulong (line 33) | def read_ulong(stream):
  function read_byte (line 36) | def read_byte(stream):
  function copy_stream (line 39) | def copy_stream(source, target, n):
  class Atom (line 48) | class Atom:
    method __init__ (line 49) | def __init__(self, type, size, body):
    method __str__ (line 54) | def __str__(self):
    method __repr__ (line 57) | def __repr__(self):
    method write1 (line 59) | def write1(self, stream):
    method write (line 62) | def write(self, stream):
    method calsize (line 67) | def calsize(self):
  class CompositeAtom (line 70) | class CompositeAtom(Atom):
    method __init__ (line 71) | def __init__(self, type, size, body):
    method write (line 74) | def write(self, stream):
    method calsize (line 79) | def calsize(self):
    method get1 (line 82) | def get1(self, k):
    method get (line 88) | def get(self, *keys):
    method get_all (line 93) | def get_all(self, k):
  class VariableAtom (line 96) | class VariableAtom(Atom):
    method __init__ (line 97) | def __init__(self, type, size, body, variables):
    method write (line 101) | def write(self, stream):
    method get (line 118) | def get(self, k):
    method set (line 124) | def set(self, k, v):
  function read_raw (line 133) | def read_raw(stream, size, left, type):
  function read_udta (line 138) | def read_udta(stream, size, left, type):
  function read_body_stream (line 148) | def read_body_stream(stream, left):
  function read_full_atom (line 153) | def read_full_atom(stream):
  function read_full_atom2 (line 160) | def read_full_atom2(stream):
  function read_mvhd (line 166) | def read_mvhd(stream, size, left, type):
  function read_tkhd (line 201) | def read_tkhd(stream, size, left, type):
  function read_mdhd (line 236) | def read_mdhd(stream, size, left, type):
  function read_hdlr (line 264) | def read_hdlr(stream, size, left, type):
  function read_vmhd (line 281) | def read_vmhd(stream, size, left, type):
  function read_stsd (line 294) | def read_stsd(stream, size, left, type):
  function read_avc1 (line 325) | def read_avc1(stream, size, left, type):
  function read_avcC (line 351) | def read_avcC(stream, size, left, type):
  function read_stts (line 355) | def read_stts(stream, size, left, type):
  function read_stss (line 389) | def read_stss(stream, size, left, type):
  function read_stsc (line 418) | def read_stsc(stream, size, left, type):
  function read_stsz (line 457) | def read_stsz(stream, size, left, type):
  function read_stco (line 492) | def read_stco(stream, size, left, type):
  function read_ctts (line 521) | def read_ctts(stream, size, left, type):
  function read_smhd (line 551) | def read_smhd(stream, size, left, type):
  function read_mp4a (line 563) | def read_mp4a(stream, size, left, type):
  function read_descriptor (line 583) | def read_descriptor(stream):
  function read_esds (line 587) | def read_esds(stream, size, left, type):
  function read_composite_atom (line 597) | def read_composite_atom(stream, size, left, type):
  function read_mdat (line 606) | def read_mdat(stream, size, left, type):
  function read_atom (line 681) | def read_atom(stream):
  function write_atom (line 702) | def write_atom(stream, atom):
  function parse_atoms (line 705) | def parse_atoms(stream):
  function read_mp4 (line 715) | def read_mp4(stream):
  function merge_stts (line 730) | def merge_stts(samples_list):
  function merge_stss (line 742) | def merge_stss(samples, sample_number_list):
  function merge_stsc (line 750) | def merge_stsc(chunks_list, total_chunk_number_list):
  function merge_stco (line 765) | def merge_stco(offsets_list, mdats):
  function merge_stsz (line 773) | def merge_stsz(sizes_list):
  function merge_mdats (line 776) | def merge_mdats(mdats):
  function merge_moov (line 791) | def merge_moov(moovs, mdats):
  function merge_mp4s (line 873) | def merge_mp4s(files, output):
  function guess_output (line 896) | def guess_output(inputs):
  function concat_mp4 (line 905) | def concat_mp4(mp4s, output = None):
  function usage (line 918) | def usage():
  function main (line 921) | def main():

FILE: src/you_get/processor/join_ts.py
  function guess_output (line 10) | def guess_output(inputs):
  function concat_ts (line 19) | def concat_ts(ts_parts, output = None):
  function usage (line 38) | def usage():
  function main (line 41) | def main():

FILE: src/you_get/processor/rtmpdump.py
  function get_usable_rtmpdump (line 6) | def get_usable_rtmpdump(cmd):
  function has_rtmpdump_installed (line 16) | def has_rtmpdump_installed():
  function download_rtmpdump_stream (line 24) | def download_rtmpdump_stream(url, title, ext,params={},output_dir='.'):
  function play_rtmpdump_stream (line 45) | def play_rtmpdump_stream(player, url, params={}):

FILE: src/you_get/util/fs.py
  function legitimize (line 5) | def legitimize(text, os=detect_os()):

FILE: src/you_get/util/git.py
  function get_head (line 7) | def get_head(repo_path):
  function get_version (line 17) | def get_version(repo_path):

FILE: src/you_get/util/log.py
  function sprint (line 60) | def sprint(text, *colors):
  function println (line 64) | def println(text, *colors):
  function print_err (line 68) | def print_err(text, *colors):
  function print_log (line 72) | def print_log(text, *colors):
  function i (line 76) | def i(message):
  function d (line 80) | def d(message):
  function w (line 84) | def w(message):
  function e (line 88) | def e(message, exit_code=None):
  function wtf (line 94) | def wtf(message, exit_code=1):
  function yes_or_no (line 100) | def yes_or_no(message):

FILE: src/you_get/util/os.py
  function detect_os (line 5) | def detect_os():

FILE: src/you_get/util/strings.py
  function unescape_html (line 8) | def unescape_html(string):
  function _sharp2uni (line 14) | def _sharp2uni(m):
  function get_filename (line 24) | def get_filename(htmlstring):
  function parameterize (line 27) | def parameterize(string):

FILE: src/you_get/util/term.py
  function get_terminal_size (line 3) | def get_terminal_size():

FILE: tests/test.py
  class YouGetTests (line 19) | class YouGetTests(unittest.TestCase):
    method test_imgur (line 20) | def test_imgur(self):
    method test_magisto (line 24) | def test_magisto(self):
    method test_acfun (line 43) | def test_acfun(self):
    method test_tiktok (line 59) | def test_tiktok(self):
    method test_twitter (line 65) | def test_twitter(self):
    method test_weibo (line 69) | def test_weibo(self):

FILE: tests/test_common.py
  class TestCommon (line 7) | class TestCommon(unittest.TestCase):
    method test_match1 (line 9) | def test_match1(self):

FILE: tests/test_util.py
  class TestUtil (line 7) | class TestUtil(unittest.TestCase):
    method test_legitimize (line 8) | def test_legitimize(self):
Condensed preview — 139 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (617K chars).
[
  {
    "path": ".github/workflows/python-package.yml",
    "chars": 1247,
    "preview": "# This workflow will install Python dependencies, run tests and lint with a variety of Python versions\n\nname: develop\n\no"
  },
  {
    "path": ".gitignore",
    "chars": 931,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
  },
  {
    "path": "CHANGELOG.rst",
    "chars": 10840,
    "preview": "Changelog\n=========\n\n0.3.36\n------\n\n*Date: 2015-10-05*\n\n* New command-line option: --json\n* New site support:\n    - Inte"
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 1170,
    "preview": "# How to Report an Issue\n\nIf you would like to report a problem you find when using `you-get`, please open a [Pull Reque"
  },
  {
    "path": "MANIFEST.in",
    "chars": 179,
    "preview": "include *.rst\ninclude *.txt\ninclude Makefile\ninclude CONTRIBUTING.md\ninclude README.md\ninclude you-get\ninclude you-get.j"
  },
  {
    "path": "Makefile",
    "chars": 660,
    "preview": ".PHONY: default i test clean all html rst build install release\n\ndefault: i\n\ni:\n\t@(cd src; python -i -c 'import you_get;"
  },
  {
    "path": "README.md",
    "chars": 20178,
    "preview": "# You-Get\n\n[![Build Status](https://github.com/soimort/you-get/workflows/develop/badge.svg)](https://github.com/soimort/"
  },
  {
    "path": "README.rst",
    "chars": 2414,
    "preview": "You-Get\n=======\n\n|PyPI version| |Build Status| |Gitter|\n\n`You-Get <https://you-get.org/>`__ is a tiny command-line utili"
  },
  {
    "path": "SECURITY.md",
    "chars": 112,
    "preview": "# Security Policy\n\n## Reporting a Vulnerability\n\nPlease report security issues to <mort.yao+you-get@gmail.com>.\n"
  },
  {
    "path": "contrib/completion/you-get-completion.bash",
    "chars": 916,
    "preview": "# Bash completion definition for you-get.\n\n_you-get () {\n    COMPREPLY=()\n    local IFS=$' \\n'\n    local cur=$2 prev=$3\n"
  },
  {
    "path": "contrib/completion/you-get.fish",
    "chars": 1595,
    "preview": "# Fish completion definition for you-get.\n\ncomplete -c you-get -s V -l version -d 'print version and exit'\ncomplete -c y"
  },
  {
    "path": "setup.cfg",
    "chars": 99,
    "preview": "[build]\nforce = 0\n\n[global]\nverbose = 0\n\n[egg_info]\ntag_build = \ntag_date = 0\ntag_svn_revision = 0\n"
  },
  {
    "path": "setup.py",
    "chars": 1811,
    "preview": "#!/usr/bin/env python3\n\nPROJ_NAME = 'you-get'\nPACKAGE_NAME = 'you_get'\n\nPROJ_METADATA = '%s.json' % PROJ_NAME\n\nimport im"
  },
  {
    "path": "src/you_get/cli_wrapper/player/dragonplayer.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/you_get/cli_wrapper/player/gnome_mplayer.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/you_get/cli_wrapper/player/mplayer.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/you_get/cli_wrapper/player/vlc.py",
    "chars": 22,
    "preview": "#!/usr/bin/env python\n"
  },
  {
    "path": "src/you_get/cli_wrapper/player/wmp.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/you_get/cli_wrapper/transcoder/ffmpeg.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/you_get/cli_wrapper/transcoder/libav.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/you_get/cli_wrapper/transcoder/mencoder.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/you_get/common.py",
    "chars": 61232,
    "preview": "#!/usr/bin/env python\n\nimport io\nimport os\nimport re\nimport sys\nimport time\nimport json\nimport socket\nimport locale\nimpo"
  },
  {
    "path": "src/you_get/extractor.py",
    "chars": 10897,
    "preview": "#!/usr/bin/env python\n\nfrom .common import match1, maybe_print, download_urls, get_filename, parse_host, set_proxy, unse"
  },
  {
    "path": "src/you_get/extractors/acfun.py",
    "chars": 9413,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\n\nclass AcFun(VideoExtractor):\n    n"
  },
  {
    "path": "src/you_get/extractors/alive.py",
    "chars": 605,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['alive_download']\n\nfrom ..common import *\n\ndef alive_download(url, output_dir = '.', m"
  },
  {
    "path": "src/you_get/extractors/archive.py",
    "chars": 616,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['archive_download']\n\nfrom ..common import *\n\ndef archive_download(url, output_dir='.',"
  },
  {
    "path": "src/you_get/extractors/baidu.py",
    "chars": 12124,
    "preview": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n__all__ = ['baidu_download']\n\nfrom ..common import *\nfrom .embed import *"
  },
  {
    "path": "src/you_get/extractors/bandcamp.py",
    "chars": 748,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['bandcamp_download']\n\nfrom ..common import *\n\ndef bandcamp_download(url, output_dir='."
  },
  {
    "path": "src/you_get/extractors/baomihua.py",
    "chars": 1691,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['baomihua_download', 'baomihua_download_by_id']\n\nfrom ..common import *\n\nimport urllib"
  },
  {
    "path": "src/you_get/extractors/bigthink.py",
    "chars": 2400,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\n\nimport json\n\nclass Bigthink(VideoE"
  },
  {
    "path": "src/you_get/extractors/bilibili.py",
    "chars": 44426,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\n\nimport hashlib\nimport math\n\n\nclass"
  },
  {
    "path": "src/you_get/extractors/bokecc.py",
    "chars": 3376,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\nimport xml.etree.ElementTree as ET\n"
  },
  {
    "path": "src/you_get/extractors/cbs.py",
    "chars": 617,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['cbs_download']\n\nfrom ..common import *\n\nfrom .theplatform import theplatform_download"
  },
  {
    "path": "src/you_get/extractors/ckplayer.py",
    "chars": 3513,
    "preview": "#!/usr/bin/env python\n#coding:utf-8\n# Author:  Beining --<i@cnbeining.com>\n# Purpose: A general extractor for CKPlayer\n#"
  },
  {
    "path": "src/you_get/extractors/cntv.py",
    "chars": 2715,
    "preview": "#!/usr/bin/env python\n\nimport json\nimport re\n\nfrom ..common import get_content, r1, match1, playlist_not_supported\nfrom "
  },
  {
    "path": "src/you_get/extractors/coub.py",
    "chars": 3907,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['coub_download']\n\nfrom ..common import *\nfrom ..processor import ffmpeg\nfrom ..util.fs"
  },
  {
    "path": "src/you_get/extractors/dailymotion.py",
    "chars": 1213,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['dailymotion_download']\n\nfrom ..common import *\nimport urllib.parse\n\ndef rebuilt_url(u"
  },
  {
    "path": "src/you_get/extractors/douban.py",
    "chars": 2340,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['douban_download']\n\nimport urllib.request, urllib.parse\nfrom ..common import *\n\ndef do"
  },
  {
    "path": "src/you_get/extractors/douyin.py",
    "chars": 1936,
    "preview": "# coding=utf-8\n\nimport json\n\nfrom ..common import (\n    url_size,\n    print_info,\n    get_content,\n    fake_headers,\n   "
  },
  {
    "path": "src/you_get/extractors/douyutv.py",
    "chars": 2798,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['douyutv_download']\n\nfrom ..common import *\nfrom ..util.log import *\nimport json\nimpor"
  },
  {
    "path": "src/you_get/extractors/ehow.py",
    "chars": 1110,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['ehow_download']\n\nfrom ..common import *\n\ndef ehow_download(url, output_dir = '.', mer"
  },
  {
    "path": "src/you_get/extractors/embed.py",
    "chars": 5270,
    "preview": "__all__ = ['embed_download']\n\nimport urllib.parse\n\nfrom ..common import *\n\nfrom .bilibili import bilibili_download\nfrom "
  },
  {
    "path": "src/you_get/extractors/facebook.py",
    "chars": 1013,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['facebook_download']\n\nfrom ..common import *\n\ndef facebook_download(url, output_dir='."
  },
  {
    "path": "src/you_get/extractors/fc2video.py",
    "chars": 2429,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['fc2video_download']\n\nfrom ..common import *\nfrom hashlib import md5\nfrom urllib.parse"
  },
  {
    "path": "src/you_get/extractors/flickr.py",
    "chars": 7706,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['flickr_download_main']\n\nfrom ..common import *\n\nimport json\n\npattern_url_photoset = r"
  },
  {
    "path": "src/you_get/extractors/freesound.py",
    "chars": 663,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['freesound_download']\n\nfrom ..common import *\n\ndef freesound_download(url, output_dir "
  },
  {
    "path": "src/you_get/extractors/funshion.py",
    "chars": 8144,
    "preview": "#!/usr/bin/env python\n\nimport json\nimport urllib.parse\nimport base64\nimport binascii\nimport re\n\nfrom ..extractors import"
  },
  {
    "path": "src/you_get/extractors/giphy.py",
    "chars": 816,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['giphy_download']\n\nfrom ..common import *\n\ndef giphy_download(url, output_dir='.', mer"
  },
  {
    "path": "src/you_get/extractors/google.py",
    "chars": 8977,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['google_download']\n\nfrom ..common import *\n\nimport re\n\n# YouTube media encoding option"
  },
  {
    "path": "src/you_get/extractors/heavymusic.py",
    "chars": 861,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['heavymusic_download']\n\nfrom ..common import *\n\ndef heavymusic_download(url, output_di"
  },
  {
    "path": "src/you_get/extractors/huomaotv.py",
    "chars": 997,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['huomaotv_download']\n\nfrom ..common import *\n\n\ndef get_mobile_room_url(room_id):\n    r"
  },
  {
    "path": "src/you_get/extractors/icourses.py",
    "chars": 15344,
    "preview": "#!/usr/bin/env python\nfrom ..common import *\nfrom urllib import parse, error\nimport random\nfrom time import sleep\nimport"
  },
  {
    "path": "src/you_get/extractors/ifeng.py",
    "chars": 1760,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['ifeng_download', 'ifeng_download_by_id']\n\nfrom ..common import *\n\ndef ifeng_download_"
  },
  {
    "path": "src/you_get/extractors/imgur.py",
    "chars": 2855,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\nfrom .universal import *\n\nclass Img"
  },
  {
    "path": "src/you_get/extractors/infoq.py",
    "chars": 1898,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\n\nimport ssl\n\nclass Infoq(VideoExtra"
  },
  {
    "path": "src/you_get/extractors/instagram.py",
    "chars": 2632,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['instagram_download']\n\nfrom ..common import *\n\ndef instagram_download(url, output_dir="
  },
  {
    "path": "src/you_get/extractors/interest.py",
    "chars": 1110,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom json import loads\n\ndef interest_download(url, output_dir='.', merge=T"
  },
  {
    "path": "src/you_get/extractors/iqilu.py",
    "chars": 892,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['iqilu_download']\n\nfrom ..common import *\nimport json\n\ndef iqilu_download(url, output_"
  },
  {
    "path": "src/you_get/extractors/iqiyi.py",
    "chars": 11968,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom ..common import print_more_compatible as print\nfrom ..extractor impor"
  },
  {
    "path": "src/you_get/extractors/iwara.py",
    "chars": 2169,
    "preview": "#!/usr/bin/env python\n__all__ = ['iwara_download']\nfrom ..common import *\nheaders = {\n    'DNT': '1',\n    'Accept-Encodi"
  },
  {
    "path": "src/you_get/extractors/ixigua.py",
    "chars": 5633,
    "preview": "#!/usr/bin/env python\nimport base64\n\nfrom ..common import *\nfrom json import loads\nfrom urllib import request\n\n__all__ ="
  },
  {
    "path": "src/you_get/extractors/joy.py",
    "chars": 1507,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['joy_download']\n\nfrom ..common import *\n\ndef video_info(channel_id, program_id, volumn"
  },
  {
    "path": "src/you_get/extractors/kakao.py",
    "chars": 1771,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom .universal import *\n\n__all__ = ['kakao_download']\n\n\ndef kakao_downloa"
  },
  {
    "path": "src/you_get/extractors/khan.py",
    "chars": 508,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['khan_download']\n\nfrom ..common import *\nfrom .youtube import YouTube\n\ndef khan_downlo"
  },
  {
    "path": "src/you_get/extractors/ku6.py",
    "chars": 2797,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['ku6_download', 'ku6_download_by_id']\n\nfrom ..common import *\n\nimport json\nimport re\n\n"
  },
  {
    "path": "src/you_get/extractors/kuaishou.py",
    "chars": 1649,
    "preview": "#!/usr/bin/env python\n\nimport urllib.request\nimport urllib.parse\nimport re\n\nfrom ..common import get_content, download_u"
  },
  {
    "path": "src/you_get/extractors/kugou.py",
    "chars": 3603,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['kugou_download']\n\nfrom ..common import *\nfrom json import loads\nfrom base64 import b6"
  },
  {
    "path": "src/you_get/extractors/kuwo.py",
    "chars": 1442,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['kuwo_download']\n\nfrom ..common import *\nimport re\n\ndef kuwo_download_by_rid(rid, outp"
  },
  {
    "path": "src/you_get/extractors/le.py",
    "chars": 6282,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['letv_download', 'letvcloud_download', 'letvcloud_download_by_vu']\n\nimport base64\nimpo"
  },
  {
    "path": "src/you_get/extractors/lizhi.py",
    "chars": 2684,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['lizhi_download']\nimport json\nimport datetime\nfrom ..common import *\n\n#\n# Worked well "
  },
  {
    "path": "src/you_get/extractors/longzhu.py",
    "chars": 2667,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['longzhu_download']\n\nimport json\nfrom ..common import (\n    get_content,\n    general_m"
  },
  {
    "path": "src/you_get/extractors/lrts.py",
    "chars": 2852,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['lrts_download']\n\nimport logging\nfrom ..common import *\nfrom ..util import log, term\n\n"
  },
  {
    "path": "src/you_get/extractors/magisto.py",
    "chars": 806,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['magisto_download']\n\nfrom ..common import *\nimport json\n\ndef magisto_download(url, out"
  },
  {
    "path": "src/you_get/extractors/metacafe.py",
    "chars": 865,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['metacafe_download']\n\nfrom ..common import *\nimport urllib.error\nfrom urllib.parse imp"
  },
  {
    "path": "src/you_get/extractors/mgtv.py",
    "chars": 8183,
    "preview": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\n\nfrom json "
  },
  {
    "path": "src/you_get/extractors/miaopai.py",
    "chars": 8090,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['miaopai_download']\n\nimport string\nimport random\nfrom ..common import *\nimport urllib."
  },
  {
    "path": "src/you_get/extractors/miomio.py",
    "chars": 1955,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['miomio_download']\n\nfrom ..common import *\n\nfrom .tudou import tudou_download_by_id\nfr"
  },
  {
    "path": "src/you_get/extractors/missevan.py",
    "chars": 12011,
    "preview": "\"\"\"\nMIT License\n\nCopyright (c) 2019 WaferJay\n\nPermission is hereby granted, free of charge, to any person obtaining a co"
  },
  {
    "path": "src/you_get/extractors/mixcloud.py",
    "chars": 891,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['mixcloud_download']\n\nfrom ..common import *\n\ndef mixcloud_download(url, output_dir='."
  },
  {
    "path": "src/you_get/extractors/mtv81.py",
    "chars": 1622,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['mtv81_download']\n\nfrom ..common import *\n\nfrom xml.dom.minidom import parseString\n\nfr"
  },
  {
    "path": "src/you_get/extractors/nanagogo.py",
    "chars": 1946,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['nanagogo_download']\n\nfrom ..common import *\nfrom .universal import *\n\ndef nanagogo_do"
  },
  {
    "path": "src/you_get/extractors/naver.py",
    "chars": 1472,
    "preview": "#!/usr/bin/env python\n\nimport urllib.request\nimport urllib.parse\nimport json\nimport re\n\nfrom ..util import log\nfrom ..co"
  },
  {
    "path": "src/you_get/extractors/netease.py",
    "chars": 8101,
    "preview": "#!/usr/bin/env python\n\n\n__all__ = ['netease_download']\n\nfrom ..common import *\nfrom ..common import print_more_compatibl"
  },
  {
    "path": "src/you_get/extractors/nicovideo.py",
    "chars": 1729,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['nicovideo_download']\n\nfrom ..common import *\n\ndef nicovideo_login(user, password):\n  "
  },
  {
    "path": "src/you_get/extractors/pinterest.py",
    "chars": 1628,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\n\nclass Pinterest(VideoExtractor):\n "
  },
  {
    "path": "src/you_get/extractors/pixnet.py",
    "chars": 3778,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['pixnet_download']\n\nfrom ..common import *\nfrom time import time\nfrom urllib.parse imp"
  },
  {
    "path": "src/you_get/extractors/pptv.py",
    "chars": 6830,
    "preview": "#!/usr/bin/env python\n\n#__all__ = ['pptv_download', 'pptv_download_by_id']\n\nfrom ..common import *\nfrom ..extractor impo"
  },
  {
    "path": "src/you_get/extractors/qie.py",
    "chars": 3590,
    "preview": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\nfrom ..util"
  },
  {
    "path": "src/you_get/extractors/qie_video.py",
    "chars": 2901,
    "preview": "from ..common import *\nfrom ..extractor import VideoExtractor\nfrom ..util.log import *\n\nimport json\nimport math\n\nclass Q"
  },
  {
    "path": "src/you_get/extractors/qingting.py",
    "chars": 1560,
    "preview": "import json\nimport re\n\nfrom ..common import get_content, playlist_not_supported, url_size\nfrom ..extractors import Video"
  },
  {
    "path": "src/you_get/extractors/qq.py",
    "chars": 8106,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['qq_download']\n\nfrom .qie import download as qieDownload\nfrom .qie_video import downlo"
  },
  {
    "path": "src/you_get/extractors/qq_egame.py",
    "chars": 1385,
    "preview": "import re\nimport json\n\nfrom ..common import *\nfrom ..extractors import VideoExtractor\nfrom ..util import log\nfrom ..util"
  },
  {
    "path": "src/you_get/extractors/showroom.py",
    "chars": 3675,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['showroom_download']\n\nfrom ..common import *\nimport urllib.error\nfrom json import load"
  },
  {
    "path": "src/you_get/extractors/sina.py",
    "chars": 4564,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['sina_download', 'sina_download_by_vid', 'sina_download_by_vkey']\n\nfrom ..common impor"
  },
  {
    "path": "src/you_get/extractors/sohu.py",
    "chars": 2732,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['sohu_download']\n\nfrom ..common import *\n\nimport json\n\n'''\nChangelog:\n    1. http://tv"
  },
  {
    "path": "src/you_get/extractors/soundcloud.py",
    "chars": 2417,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['sndcd_download']\n\nfrom ..common import *\nimport re\nimport json\n\n\ndef get_sndcd_apikey"
  },
  {
    "path": "src/you_get/extractors/suntv.py",
    "chars": 1395,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['suntv_download']\n\nfrom ..common import *\nimport urllib\nimport re\n\ndef suntv_download("
  },
  {
    "path": "src/you_get/extractors/ted.py",
    "chars": 863,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['ted_download']\n\nfrom ..common import *\nimport json\n\ndef ted_download(url, output_dir="
  },
  {
    "path": "src/you_get/extractors/theplatform.py",
    "chars": 988,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\n\ndef theplatform_download_by_pid(pid, title, output_dir='.', merge=True, i"
  },
  {
    "path": "src/you_get/extractors/tiktok.py",
    "chars": 1864,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['tiktok_download']\n\nfrom ..common import *\n\ndef tiktok_download(url, output_dir='.', m"
  },
  {
    "path": "src/you_get/extractors/toutiao.py",
    "chars": 2668,
    "preview": "#!/usr/bin/env python\nimport binascii\nimport random\nfrom json import loads\nfrom urllib.parse import urlparse\n\nfrom ..com"
  },
  {
    "path": "src/you_get/extractors/tucao.py",
    "chars": 2711,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['tucao_download']\nfrom ..common import *\n# import re\nimport random\nimport time\nfrom xm"
  },
  {
    "path": "src/you_get/extractors/tudou.py",
    "chars": 4430,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['tudou_download', 'tudou_download_playlist', 'tudou_download_by_id', 'tudou_download_b"
  },
  {
    "path": "src/you_get/extractors/tumblr.py",
    "chars": 6839,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['tumblr_download']\n\nfrom ..common import *\nfrom .universal import *\nfrom .dailymotion "
  },
  {
    "path": "src/you_get/extractors/twitter.py",
    "chars": 3394,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['twitter_download']\n\nfrom ..common import *\nfrom .universal import *\n\ndef extract_m3u("
  },
  {
    "path": "src/you_get/extractors/ucas.py",
    "chars": 5423,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['ucas_download', 'ucas_download_single', 'ucas_download_playlist']\n\nfrom ..common impo"
  },
  {
    "path": "src/you_get/extractors/universal.py",
    "chars": 7150,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['universal_download']\n\nfrom ..common import *\nfrom .embed import *\n\ndef universal_down"
  },
  {
    "path": "src/you_get/extractors/veoh.py",
    "chars": 1418,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['veoh_download']\n\nfrom ..common import *\n\ndef veoh_download(url, output_dir = '.', mer"
  },
  {
    "path": "src/you_get/extractors/vimeo.py",
    "chars": 5912,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['vimeo_download', 'vimeo_download_by_id', 'vimeo_download_by_channel', 'vimeo_download"
  },
  {
    "path": "src/you_get/extractors/vk.py",
    "chars": 2430,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['vk_download']\n\nfrom ..common import *\n\n\ndef get_video_info(url):\n    video_page = get"
  },
  {
    "path": "src/you_get/extractors/w56.py",
    "chars": 1447,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['w56_download', 'w56_download_by_id']\n\nfrom ..common import *\n\nfrom .sohu import sohu_"
  },
  {
    "path": "src/you_get/extractors/wanmen.py",
    "chars": 4860,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['wanmen_download', 'wanmen_download_by_course', 'wanmen_download_by_course_topic', 'wa"
  },
  {
    "path": "src/you_get/extractors/ximalaya.py",
    "chars": 4211,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['ximalaya_download_playlist', 'ximalaya_download', 'ximalaya_download_by_id']\n\nfrom .."
  },
  {
    "path": "src/you_get/extractors/xinpianchang.py",
    "chars": 1666,
    "preview": "#!/usr/bin/env python\n\nimport re\nimport json\nfrom ..extractor import VideoExtractor\nfrom ..common import get_content, pl"
  },
  {
    "path": "src/you_get/extractors/yixia.py",
    "chars": 4443,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['yixia_download']\n\nfrom ..common import *\nfrom urllib.parse import urlparse\nfrom json "
  },
  {
    "path": "src/you_get/extractors/yizhibo.py",
    "chars": 1268,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['yizhibo_download']\n\nfrom ..common import *\nimport json\n\ndef yizhibo_download(url, out"
  },
  {
    "path": "src/you_get/extractors/youku.py",
    "chars": 12704,
    "preview": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\n\nimport tim"
  },
  {
    "path": "src/you_get/extractors/youtube.py",
    "chars": 23140,
    "preview": "#!/usr/bin/env python\n\nfrom ..common import *\nfrom ..extractor import VideoExtractor\n\ntry:\n    import dukpy\nexcept Impor"
  },
  {
    "path": "src/you_get/extractors/zhanqi.py",
    "chars": 2305,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['zhanqi_download']\n\nfrom ..common import *\nimport json\nimport base64\nfrom urllib.parse"
  },
  {
    "path": "src/you_get/extractors/zhibo.py",
    "chars": 1931,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['zhibo_download']\n\nfrom ..common import *\n\ndef zhibo_vedio_download(url, output_dir = "
  },
  {
    "path": "src/you_get/extractors/zhihu.py",
    "chars": 3549,
    "preview": "#!/usr/bin/env python\n\n__all__ = ['zhihu_download', 'zhihu_download_playlist']\n\nfrom ..common import *\nimport json\n\n\ndef"
  },
  {
    "path": "src/you_get/json_output.py",
    "chars": 1710,
    "preview": "\nimport json\n\n# save info from common.print_info()\nlast_info = None\n\ndef output(video_extractor, pretty_print=True):\n   "
  },
  {
    "path": "src/you_get/processor/ffmpeg.py",
    "chars": 10765,
    "preview": "#!/usr/bin/env python\n\nimport logging\nimport os\nimport subprocess\nimport sys\nfrom ..util.strings import parameterize\nfro"
  },
  {
    "path": "src/you_get/processor/join_flv.py",
    "chars": 10222,
    "preview": "#!/usr/bin/env python\n\nimport struct\nfrom io import BytesIO\n\nTAG_TYPE_METADATA = 18\n\n###################################"
  },
  {
    "path": "src/you_get/processor/join_mp4.py",
    "chars": 30413,
    "preview": "#!/usr/bin/env python\n\n# reference: c041828_ISO_IEC_14496-12_2005(E).pdf\n\n##############################################"
  },
  {
    "path": "src/you_get/processor/join_ts.py",
    "chars": 1624,
    "preview": "#!/usr/bin/env python\n\nimport struct\nfrom io import BytesIO\n\n##################################################\n# main\n#"
  },
  {
    "path": "src/you_get/processor/rtmpdump.py",
    "chars": 1711,
    "preview": "#!/usr/bin/env python\n\nimport os.path\nimport subprocess\n\ndef get_usable_rtmpdump(cmd):\n    try:\n        p = subprocess.P"
  },
  {
    "path": "src/you_get/util/fs.py",
    "chars": 1153,
    "preview": "#!/usr/bin/env python\n\nfrom .os import detect_os\n\ndef legitimize(text, os=detect_os()):\n    \"\"\"Converts a string to a va"
  },
  {
    "path": "src/you_get/util/git.py",
    "chars": 1489,
    "preview": "#!/usr/bin/env python\n\nimport os\nimport subprocess\nfrom ..version import __version__\n\ndef get_head(repo_path):\n    \"\"\"Ge"
  },
  {
    "path": "src/you_get/util/log.py",
    "chars": 2692,
    "preview": "#!/usr/bin/env python\n# This file is Python 2 compliant.\n\nfrom ..version import script_name\n\nimport os, sys\n\nTERM = os.g"
  },
  {
    "path": "src/you_get/util/os.py",
    "chars": 798,
    "preview": "#!/usr/bin/env python\n\nfrom platform import system\n\ndef detect_os():\n    \"\"\"Detect operating system.\n    \"\"\"\n\n    # Insp"
  },
  {
    "path": "src/you_get/util/strings.py",
    "chars": 763,
    "preview": "try:\n    # py 3.4\n    from html import unescape as unescape_html\nexcept ImportError:\n    import re\n    from html.entitie"
  },
  {
    "path": "src/you_get/util/term.py",
    "chars": 303,
    "preview": "#!/usr/bin/env python\n\ndef get_terminal_size():\n    \"\"\"Get (width, height) of the current terminal.\"\"\"\n    try:\n        "
  },
  {
    "path": "src/you_get/version.py",
    "chars": 72,
    "preview": "#!/usr/bin/env python\n\nscript_name = 'you-get'\n__version__ = '0.4.1743'\n"
  },
  {
    "path": "tests/test.py",
    "chars": 2430,
    "preview": "#!/usr/bin/env python\n\nimport unittest\n\nfrom you_get.extractors import (\n    imgur,\n    magisto,\n    youtube,\n    missev"
  },
  {
    "path": "tests/test_common.py",
    "chars": 363,
    "preview": "#!/usr/bin/env python\n\nimport unittest\n\nfrom you_get.common import *\n\nclass TestCommon(unittest.TestCase):\n    \n    def "
  },
  {
    "path": "tests/test_util.py",
    "chars": 387,
    "preview": "#!/usr/bin/env python\n\nimport unittest\n\nfrom you_get.util.fs import *\n\nclass TestUtil(unittest.TestCase):\n    def test_l"
  },
  {
    "path": "you-get",
    "chars": 477,
    "preview": "#!/usr/bin/env python3\nimport os, sys\n\n_srcdir = '%s/src/' % os.path.dirname(os.path.realpath(__file__))\n_filepath = os."
  },
  {
    "path": "you-get.plugin.zsh",
    "chars": 138,
    "preview": "#!/usr/bin/env zsh\nalias you-get=\"noglob python3 $(dirname $0)/you-get\"\nalias you-vlc=\"noglob python3 $(dirname $0)/you-"
  }
]

About this extraction

This page contains the full source code of the soimort/you-get GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 139 files (574.3 KB), approximately 161.3k tokens, and a symbol index with 659 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.

Copied to clipboard!