[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nenv/\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# dotenv\n.env\n\n# virtualenv\n.venv\nvenv/\nENV/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n\n# File downloads\n*.jpg\n*.jpe\n*.png\n*.gif\n*.mp4\n*.webm\n*.zip\n*.incomplete\n\n# Metadata\n*.json\n.incomplete\n\n# Logs\n*.log\n\n# Crawljobs\n*.crawljob\n\n# Jetbrains\n.idea/\n\n# Database\n*.db\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 bitbybyte\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# FantiaDL\nDownload media and other data from Fantia fanclubs and posts. A session cookie must be provided with the -c/--cookie argument directly or by passing the path to a legacy Netscape cookies file. Please see the [About Session Cookies](#about-session-cookies) section.\n\n```\nusage: fantiadl [options] url\n\npositional arguments:\n  url                   fanclub or post URL\n\noptions:\n  -h, --help            show this help message and exit\n  -c SESSION_COOKIE, --cookie SESSION_COOKIE\n                        _session_id cookie or cookies.txt\n  -q, --quiet           suppress output\n  -v, --version         show program's version number and exit\n  --db DB_PATH          database to track post download state (creates tables when first specified)\"\n  --db-bypass-post-check\n                        bypass checking a post for new content if it's marked as completed on the database\n\ndownload options:\n  -i, --ignore-errors   continue on download errors\n  -l #, --limit #       limit the number of posts to process per fanclub (excludes -n)\n  -o OUTPUT_PATH, --output-directory OUTPUT_PATH\n                        directory to download to\n  -s, --use-server-filenames\n                        download using server defined filenames\n  -r, --mark-incomplete-posts\n                        add .incomplete file to post directories that are incomplete\n  -m, --dump-metadata   store metadata to file (including fanclub icon, header, and background)\n  -x, --parse-for-external-links\n                        parse posts for external links\n  -t, --download-thumbnail\n                        download post thumbnails\n  -f, --download-fanclubs\n                        download posts from all followed fanclubs\n  -p, --download-paid-fanclubs\n                        download posts from all fanclubs backed on a paid plan\n  -n #, --download-new-posts #\n                        download a specified number of new posts from your fanclub timeline\n  -d %Y-%m, 
--download-month %Y-%m\n                        download posts only from a specific month, e.g. 2007-08 (excludes -n)\n  --exclude EXCLUDE_FILE\n                        file containing a list of filenames to exclude from downloading\n```\n\nTo track post downloads, specify a database path using `--db`, e.g. `--db ~/fantiadl.db`. When existing post content downloads are encountered, they will be skipped over. When all post contents under a parent post have been downloaded, the post will be marked complete on the database. If future requests to download a post indicate the post was modified based on its timestamp, new contents will be checked for; this behavior can be disabled by setting `--db-bypass-post-check`.\n\nWhen parsing for external links using `-x`, a .crawljob file is created in your root directory (either the directory provided with `-o` or the directory the script is being run from) that can be parsed by [JDownloader](http://jdownloader.org/). As posts are parsed, links will be appended and assigned their appropriate post directories for download. You can import this file manually into JDownloader (File -> Load Linkcontainer) or setup the Folder Watch plugin to watch your root directory for .crawljob files.\n\n## About Session Cookies\nDue to recent changes imposed by Fantia, providing an email and password to login from the command line is no longer supported. In order to login, you will need to provide the `_session_id` cookie for your Fantia login session using -c/--cookie. After logging in normally on your browser, this value can then be extracted and used with FantiaDL. This value expires and may need to be updated with some regularity.\n\n### Mozilla Firefox\n1. On https://fantia.jp, press Ctrl + Shift + I to open Developer Tools.\n2. Select the Storage tab at the top. In the sidebar, select https://fantia.jp under the Cookies heading.\n3. Locate the `_session_id` cookie name. Click on the value to copy it.\n\n### Google Chrome\n1. 
On https://fantia.jp, press Ctrl + Shift + I to open DevTools.\n2. Select the Application tab at the top. In the sidebar, expand Cookies under the Storage heading and select https://fantia.jp.\n3. Locate the `_session_id` cookie name. Click on the value to copy it.\n\n### Third-Party Extensions (cookies.txt)\nYou also have the option of passing the path to a legacy Netscape format cookies file with -c/--cookie, e.g. `-c ~/cookies.txt`. Using an extension like [cookies.txt](https://chrome.google.com/webstore/detail/cookiestxt/njabckikapfpffapmjgojcnbfjonfjfg), create a text file matching the accepted format:\n\n```\n# Netscape HTTP Cookie File\n# https://curl.haxx.se/rfc/cookie_spec.html\n# This is a generated file! Do not edit.\n\nfantia.jp\tFALSE\t/\tFALSE\t1595755239\t_session_id\ta1b2c3d4...\n```\n\nOnly the `_session_id` cookie is required.\n\n## Download\n`pip install fantiadl`\nhttps://pypi.org/project/fantiadl/\n\nBinaries are also provided for [new releases](https://github.com/bitbybyte/fantiadl/releases/latest).\n\n## Build Requirements\n - Python >=3.7\n - requests\n - beautifulsoup4\n\n## Roadmap\n - More robust logging\n"
  },
  {
    "path": "fantiadl/__init__.py",
    "content": "from . import fantiadl"
  },
  {
    "path": "fantiadl/__main__.py",
    "content": "from .fantiadl import cli\n\ncli()"
  },
  {
    "path": "fantiadl/__version__.py",
    "content": "__version__ = \"2.0.4\""
  },
  {
    "path": "fantiadl/db.py",
    "content": "import time\nimport sqlite3\n\nclass FantiaDlDatabase:\n    def __init__(self, db_path):\n        if db_path is None:\n            self.conn = None\n            return\n\n        self.conn = sqlite3.connect(db_path)\n        self.conn.row_factory = sqlite3.Row\n        self.cursor = self.conn.cursor()\n\n        self.cursor.execute(\"CREATE TABLE IF NOT EXISTS urls (url TEXT PRIMARY KEY, timestamp INTEGER)\")\n        self.cursor.execute(\"CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT, fanclub INTEGER, posted_at INTEGER, converted_at INTEGER, download_complete INTEGER, timestamp INTEGER)\")\n        self.cursor.execute(\"CREATE TABLE IF NOT EXISTS post_contents (id INTEGER PRIMARY KEY, parent_post INTEGER, title TEXT, category TEXT, price INTEGER, currency TEXT, timestamp INTEGER, FOREIGN KEY(parent_post) REFERENCES posts(id))\")\n\n        self.conn.commit()\n\n    def __del__(self):\n        if self.conn is not None:\n            self.conn.close()\n\n    # Helper methods\n\n    def execute(self, query, args):\n        if self.conn is None:\n            return\n        self.cursor.execute(query, args)\n        self.conn.commit()\n\n    def fetchone(self, query, args):\n        if self.conn is None:\n            return None\n        self.cursor.execute(query, args)\n        return self.cursor.fetchone()\n\n    # INSERT, REPLACE\n\n    def insert_post(self, id, title, fanclub, posted_at, converted_at):\n        self.execute(\"REPLACE INTO posts VALUES (?, ?, ?, ?, ?, 0, ?)\", (id, title, fanclub, posted_at, converted_at, int(time.time())))\n\n    def insert_post_content(self, id, parent_post, title, category, price, price_unit):\n        self.execute(\"INSERT INTO post_contents VALUES (?, ?, ?, ?, ?, ?, ?)\", (id, parent_post, title, category, price, price_unit, int(time.time())))\n\n    def insert_url(self, url):\n        self.execute(\"INSERT INTO urls VALUES (?, ?)\", (url, int(time.time())))\n\n    # SELECT\n\n    def 
find_post(self, id):\n        return self.fetchone(\"SELECT * FROM posts WHERE id = ?\", (id,))\n\n    def is_post_content_downloaded(self, id):\n        return self.fetchone(\"SELECT timestamp FROM post_contents WHERE id = ?\", (id,)) is not None\n\n    def is_url_downloaded(self, url):\n        return self.fetchone(\"SELECT timestamp FROM urls WHERE url = ?\", (url,)) is not None\n\n    # UPDATE\n\n    def update_post_download_complete(self, id, download_complete=1):\n        self.execute(\"UPDATE posts SET download_complete = ?, timestamp = ? WHERE id = ?\", (download_complete, int(time.time()), id))\n\n    def update_post_converted_at(self, id, converted_at):\n        self.execute(\"UPDATE posts SET converted_at = ?, timestamp = ? WHERE id = ?\", (converted_at, int(time.time()), id))\n"
  },
  {
    "path": "fantiadl/fantiadl.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"Download media and other data from Fantia\"\"\"\n\nimport argparse\nimport getpass\nimport netrc\nimport sys\nimport traceback\n\nfrom .models import FantiaDownloader, FantiaClub, FANTIA_URL_RE\nfrom .__version__ import __version__\n\n__author__ = \"bitbybyte\"\n__copyright__ = \"Copyright 2024 bitbybyte\"\n\n__license__ = \"MIT\"\n\nBASE_HOST = \"fantia.jp\"\n\ncmdl_usage = \"%(prog)s [options] url\"\ncmdl_version = __version__\ncmdl_parser = argparse.ArgumentParser(usage=cmdl_usage, conflict_handler=\"resolve\")\n\ncmdl_parser.add_argument(\"-c\", \"--cookie\", dest=\"session_arg\", metavar=\"SESSION_COOKIE\", help=\"_session_id cookie or cookies.txt\")\ncmdl_parser.add_argument(\"-e\", \"--email\", dest=\"email\", metavar=\"EMAIL\", help=argparse.SUPPRESS)\ncmdl_parser.add_argument(\"-p\", \"--password\", dest=\"password\", metavar=\"PASSWORD\", help=argparse.SUPPRESS)\ncmdl_parser.add_argument(\"-n\", \"--netrc\", action=\"store_true\", dest=\"netrc\", help=argparse.SUPPRESS)\ncmdl_parser.add_argument(\"-q\", \"--quiet\", action=\"store_true\", dest=\"quiet\", help=\"suppress output\")\ncmdl_parser.add_argument(\"-v\", \"--version\", action=\"version\", version=cmdl_version)\ncmdl_parser.add_argument(\"--db\", dest=\"db_path\", help=\"database to track post download state (creates tables when first specified)\")\ncmdl_parser.add_argument(\"--db-bypass-post-check\", action=\"store_true\", dest=\"db_bypass_post_check\", help=\"bypass checking a post for new content if it's marked as completed on the database\")\ncmdl_parser.add_argument(\"url\", action=\"store\", nargs=\"*\", help=\"fanclub or post URL\")\n\ndl_group = cmdl_parser.add_argument_group(\"download options\")\ndl_group.add_argument(\"-i\", \"--ignore-errors\", action=\"store_true\", dest=\"continue_on_error\", help=\"continue on download errors\")\ndl_group.add_argument(\"-l\", \"--limit\", dest=\"limit\", metavar=\"#\", type=int, 
default=0, help=\"limit the number of posts to process per fanclub (excludes -n)\")\ndl_group.add_argument(\"-o\", \"--output-directory\", dest=\"output_path\", help=\"directory to download to\")\ndl_group.add_argument(\"-s\", \"--use-server-filenames\", action=\"store_true\", dest=\"use_server_filenames\", help=\"download using server defined filenames\")\ndl_group.add_argument(\"-r\", \"--mark-incomplete-posts\", action=\"store_true\", dest=\"mark_incomplete_posts\", help=\"add .incomplete file to post directories that are incomplete\")\ndl_group.add_argument(\"-m\", \"--dump-metadata\", action=\"store_true\", dest=\"dump_metadata\", help=\"store metadata to file (including fanclub icon, header, and background)\")\ndl_group.add_argument(\"-x\", \"--parse-for-external-links\", action=\"store_true\", dest=\"parse_for_external_links\", help=\"parse posts for external links\")\ndl_group.add_argument(\"-t\", \"--download-thumbnail\", action=\"store_true\", dest=\"download_thumb\", help=\"download post thumbnails\")\ndl_group.add_argument(\"-f\", \"--download-fanclubs\", action=\"store_true\", dest=\"download_fanclubs\", help=\"download posts from all followed fanclubs\")\ndl_group.add_argument(\"-p\", \"--download-paid-fanclubs\", action=\"store_true\", dest=\"download_paid_fanclubs\", help=\"download posts from all fanclubs backed on a paid plan\")\ndl_group.add_argument(\"-n\", \"--download-new-posts\", dest=\"download_new_posts\", metavar=\"#\", type=int, help=\"download a specified number of new posts from your fanclub timeline\")\ndl_group.add_argument(\"-d\", \"--download-month\", dest=\"month_limit\", metavar=\"%Y-%m\", help=\"download posts only from a specific month, e.g. 
2007-08 (excludes -n)\")\ndl_group.add_argument(\"--exclude\", dest=\"exclude_file\", metavar=\"EXCLUDE_FILE\", help=\"file containing a list of filenames to exclude from downloading\")\n\n\ncmdl_opts = cmdl_parser.parse_args()\n\ndef main():\n    session_arg = cmdl_opts.session_arg\n    email = cmdl_opts.email\n    password = cmdl_opts.password\n\n    if (email or password or cmdl_opts.netrc) and not session_arg:\n        sys.exit(\"Logging in from the command line is no longer supported. Please provide a session cookie using -c/--cookie. See the README for more information.\")\n\n    if not (cmdl_opts.download_fanclubs or cmdl_opts.download_paid_fanclubs or cmdl_opts.download_new_posts) and not cmdl_opts.url:\n        sys.exit(\"Error: No valid input provided\")\n\n    if not session_arg:\n        session_arg = input(\"Fantia session cookie (_session_id or cookies.txt path): \")\n\n    # if cmdl_opts.netrc:\n    #     login = netrc.netrc().authenticators(BASE_HOST)\n    #     if login:\n    #         email = login[0]\n    #         password = login[2]\n    #     else:\n    #         sys.exit(\"Error: No Fantia login found in .netrc\")\n    # else:\n    #     if not email:\n    #         email = input(\"Email: \")\n    #     if not password:\n    #         password = getpass.getpass(\"Password: \")\n\n    try:\n        downloader = FantiaDownloader(session_arg=session_arg, dump_metadata=cmdl_opts.dump_metadata, parse_for_external_links=cmdl_opts.parse_for_external_links, download_thumb=cmdl_opts.download_thumb, directory=cmdl_opts.output_path, quiet=cmdl_opts.quiet, continue_on_error=cmdl_opts.continue_on_error, use_server_filenames=cmdl_opts.use_server_filenames, mark_incomplete_posts=cmdl_opts.mark_incomplete_posts, month_limit=cmdl_opts.month_limit, exclude_file=cmdl_opts.exclude_file, db_path=cmdl_opts.db_path, db_bypass_post_check=cmdl_opts.db_bypass_post_check)\n        if cmdl_opts.download_fanclubs:\n            try:\n                
downloader.download_followed_fanclubs(limit=cmdl_opts.limit)\n            except KeyboardInterrupt:\n                raise\n            except:\n                if cmdl_opts.continue_on_error:\n                    downloader.output(\"Encountered an error downloading followed fanclubs. Skipping...\\n\")\n                    traceback.print_exc()\n                    pass\n                else:\n                    raise\n        elif cmdl_opts.download_paid_fanclubs:\n            try:\n                downloader.download_paid_fanclubs(limit=cmdl_opts.limit)\n            except:\n                if cmdl_opts.continue_on_error:\n                    downloader.output(\"Encountered an error downloading paid fanclubs. Skipping...\\n\")\n                    traceback.print_exc()\n                    pass\n                else:\n                    raise\n        elif cmdl_opts.download_new_posts:\n            try:\n                downloader.download_new_posts(post_limit=cmdl_opts.download_new_posts)\n            except:\n                if cmdl_opts.continue_on_error:\n                    downloader.output(\"Encountered an error downloading new posts from timeline. 
Skipping...\\n\")\n                    traceback.print_exc()\n                    pass\n                else:\n                    raise\n        if cmdl_opts.url:\n            for url in cmdl_opts.url:\n                    url_match = FANTIA_URL_RE.match(url)\n                    if url_match:\n                        try:\n                            url_groups = url_match.groups()\n                            if url_groups[0] == \"fanclubs\":\n                                fanclub = FantiaClub(url_groups[1])\n                                downloader.download_fanclub(fanclub, cmdl_opts.limit)\n                            elif url_groups[0] == \"posts\":\n                                downloader.download_post(url_groups[1])\n                        except KeyboardInterrupt:\n                            raise\n                        except:\n                            if cmdl_opts.continue_on_error:\n                                downloader.output(\"Encountered an error downloading URL. Skipping...\\n\")\n                                traceback.print_exc()\n                                continue\n                            else:\n                                raise\n                    else:\n                        sys.stderr.write(\"Error: {} is not a valid URL. Please provide a fully qualified Fantia URL (https://fantia.jp/posts/[id], https://fantia.jp/fanclubs/[id])\\n\".format(url))\n    except KeyboardInterrupt:\n        sys.exit(\"Interrupted by user. Exiting...\")\n\n\ndef cli():\n    global cmdl_opts\n    cmdl_opts = cmdl_parser.parse_args()\n    main()\n\nif __name__ == \"__main__\":\n    cli()"
  },
  {
    "path": "fantiadl/models.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom bs4 import BeautifulSoup\nfrom requests.adapters import HTTPAdapter, Retry\nimport requests\n\nfrom datetime import datetime as dt\nfrom email.utils import parsedate_to_datetime\nfrom urllib.parse import unquote\nfrom urllib.parse import urljoin\nfrom urllib.parse import urlparse\nimport http.cookiejar\nimport json\nimport math\nimport mimetypes\nimport os\nimport re\nimport sys\nimport time\nimport traceback\n\nfrom .__version__ import __version__\nfrom .db import FantiaDlDatabase\n\nFANTIA_URL_RE = re.compile(r\"(?:https?://(?:(?:www\\.)?(?:fantia\\.jp/(fanclubs|posts)/)))([0-9]+)\")\nEXTERNAL_LINKS_RE = re.compile(r\"(?:[\\s]+)?((?:(?:https?://)?(?:(?:www\\.)?(?:mega\\.nz|mediafire\\.com|(?:drive|docs)\\.google\\.com|youtube.com|dropbox.com)\\/))[^\\s]+)\")\n\nDOMAIN = \"fantia.jp\"\nBASE_URL = \"https://fantia.jp/\"\n\nLOGIN_SIGNIN_URL = \"https://fantia.jp/sessions/signin\"\nLOGIN_SESSION_URL = \"https://fantia.jp/sessions\"\n\nME_API = \"https://fantia.jp/api/v1/me\"\n\nFANCLUB_API = \"https://fantia.jp/api/v1/fanclubs/{}\"\nFANCLUBS_FOLLOWING_API = \"https://fantia.jp/api/v1/me/fanclubs\"\nFANCLUBS_PAID_HTML = \"https://fantia.jp/mypage/users/plans?type=not_free&page={}\"\nFANCLUB_POSTS_HTML = \"https://fantia.jp/fanclubs/{}/posts?page={}\"\n\nPOST_API = \"https://fantia.jp/api/v1/posts/{}\"\nPOST_URL = \"https://fantia.jp/posts/{}\"\nPOSTS_URL = \"https://fantia.jp/posts\"\nPOST_RELATIVE_URL = \"/posts/\"\n\nTIMELINES_API = \"https://fantia.jp/api/v1/me/timelines/posts?page={}&per=24\"\n\nUSER_AGENT = \"fantiadl/{}\".format(__version__)\n\nCRAWLJOB_FILENAME = \"external_links.crawljob\"\n\nMIMETYPES = {\n    \"image/jpeg\": \".jpg\",\n    \"image/png\": \".png\",\n    \"image/gif\": \".gif\",\n    \"video/mp4\": \".mp4\",\n    \"video/webm\": \".webm\"\n}\n\nUNICODE_CONTROL_MAP = dict.fromkeys(range(32))\n\n\nclass FantiaClub:\n    def __init__(self, fanclub_id):\n        self.id = 
fanclub_id\n\n\nclass FantiaDownloader:\n    def __init__(self, session_arg, chunk_size=1024 * 1024 * 5, dump_metadata=False, parse_for_external_links=False, download_thumb=False, directory=None, quiet=True, continue_on_error=False, use_server_filenames=False, mark_incomplete_posts=False, month_limit=None, exclude_file=None, db_path=None, db_bypass_post_check=False):\n        # self.email = email\n        # self.password = password\n        self.session_arg = session_arg\n        self.chunk_size = chunk_size\n        self.dump_metadata = dump_metadata\n        self.parse_for_external_links = parse_for_external_links\n        self.download_thumb = download_thumb\n        self.directory = directory or \"\"\n        self.quiet = quiet\n        self.continue_on_error = continue_on_error\n        self.use_server_filenames = use_server_filenames\n        self.mark_incomplete_posts = mark_incomplete_posts\n        self.month_limit = dt.strptime(month_limit, \"%Y-%m\") if month_limit else None\n        self.exclude_file = exclude_file\n        self.exclusions = []\n        self.db = FantiaDlDatabase(db_path)\n        self.db_bypass_post_check = db_bypass_post_check\n\n        self.initialize_session()\n        self.login()\n        self.create_exclusions()\n\n    def output(self, output):\n        \"\"\"Write output to the console.\"\"\"\n        if not self.quiet:\n            try:\n                sys.stdout.write(output.encode(sys.stdout.encoding, errors=\"backslashreplace\").decode(sys.stdout.encoding))\n                sys.stdout.flush()\n            except (UnicodeEncodeError, UnicodeDecodeError):\n                sys.stdout.buffer.write(output.encode(\"utf-8\"))\n                sys.stdout.flush()\n\n    def initialize_session(self):\n        \"\"\"Initialize session with necessary headers and config.\"\"\"\n\n        self.session = requests.session()\n        self.session.headers.update({\"User-Agent\": USER_AGENT})\n        retries = Retry(\n            total=5,\n 
           connect=5,\n            read=5,\n            status_forcelist=[429, 500, 502, 503, 504, 507, 508],\n            backoff_factor=2, # retry delay = {backoff factor} * (2 ** ({retry number} - 1))\n            raise_on_status=True\n        )\n        self.session.mount(\"http://\", HTTPAdapter(max_retries=retries))\n        self.session.mount(\"https://\", HTTPAdapter(max_retries=retries))\n\n    def login(self):\n        \"\"\"Login to Fantia using the provided email and password.\"\"\"\n        try:\n            with open(self.session_arg, \"r\") as cookies_file:\n                cookies = http.cookiejar.MozillaCookieJar(self.session_arg)\n                cookies.load()\n                self.session.cookies = cookies\n        except FileNotFoundError:\n            login_cookie = requests.cookies.create_cookie(domain=DOMAIN, name=\"_session_id\", value=self.session_arg)\n            self.session.cookies.set_cookie(login_cookie)\n\n        check_user = self.session.get(ME_API)\n        if not (check_user.ok or check_user.status_code == 304):\n            sys.exit(\"Error: Invalid session. 
Please verify your session cookie\")\n\n        # Login flow, requires reCAPTCHA token\n\n        # login_json = {\n        #     \"utf8\": \"✓\",\n        #     \"button\": \"\",\n        #     \"user[email]\": self.email,\n        #     \"user[password]\": self.password,\n        # }\n\n        # login_session = self.session.get(LOGIN_SIGNIN_URL)\n        # login_page = BeautifulSoup(login_session.text, \"html.parser\")\n        # authenticity_token = login_page.select_one(\"input[name=\\\"authenticity_token\\\"]\")[\"value\"]\n        # print(login_page.select_one(\"input[name=\\\"recaptcha_response\\\"]\"))\n        # login_json[\"authenticity_token\"] = authenticity_token\n        # login_json[\"recaptcha_response\"] = ...\n\n        # create_session = self.session.post(LOGIN_SESSION_URL, data=login_json)\n        # if not create_session.headers.get(\"Location\"):\n        #     sys.exit(\"Error: Bad login form data\")\n        # elif create_session.headers[\"Location\"] == LOGIN_SIGNIN_URL:\n        #     sys.exit(\"Error: Failed to login. 
Please verify your username and password\")\n\n        # check_user = self.session.get(ME_API)\n        # if not (check_user.ok or check_user.status_code == 304):\n        #     sys.exit(\"Error: Invalid session\")\n\n    def create_exclusions(self):\n        \"\"\"Read files to exclude from downloading.\"\"\"\n        if self.exclude_file:\n            with open(self.exclude_file, \"r\") as file:\n                self.exclusions = [line.rstrip(\"\\n\") for line in file]\n\n    def process_content_type(self, url):\n        \"\"\"Process the Content-Type from a request header and use it to build a filename.\"\"\"\n        url_header = self.session.head(url, allow_redirects=True)\n        mimetype = url_header.headers[\"Content-Type\"]\n        extension = guess_extension(mimetype, url)\n        return extension\n\n    def collect_post_titles(self, post_metadata):\n        \"\"\"Collect all post titles to check for duplicate names and rename as necessary by appending a counter.\"\"\"\n        post_titles = []\n        for post in post_metadata[\"post_contents\"]:\n            try:\n                potential_title = post[\"title\"] or post[\"parent_post\"][\"title\"]\n                if not potential_title:\n                    potential_title = str(post[\"id\"])\n            except KeyError:\n                potential_title = str(post[\"id\"])\n\n            title = potential_title\n            counter = 2\n            while title in post_titles:\n                title = potential_title + \"_{}\".format(counter)\n                counter += 1\n            post_titles.append(title)\n\n        return post_titles\n\n    def download_fanclub_metadata(self, fanclub):\n        \"\"\"Download fanclub header, icon, and custom background.\"\"\"\n        response = self.session.get(FANCLUB_API.format(fanclub.id))\n        response.raise_for_status()\n        fanclub_json = json.loads(response.text)\n\n        fanclub_creator = fanclub_json[\"fanclub\"][\"creator_name\"]\n       
 fanclub_directory = os.path.join(self.directory, sanitize_for_path(fanclub_creator))\n        os.makedirs(fanclub_directory, exist_ok=True)\n\n        self.save_metadata(fanclub_json, fanclub_directory)\n\n        header_url = fanclub_json[\"fanclub\"][\"cover\"][\"original\"]\n        if header_url:\n            header_filename = os.path.join(fanclub_directory, \"header\" + self.process_content_type(header_url))\n            self.output(\"Downloading fanclub header...\\n\")\n            self.perform_download(header_url, header_filename, use_server_filename=self.use_server_filenames)\n\n        fanclub_icon_url = fanclub_json[\"fanclub\"][\"icon\"][\"original\"]\n        if fanclub_icon_url:\n            fanclub_icon_filename = os.path.join(fanclub_directory, \"icon\" + self.process_content_type(fanclub_icon_url))\n            self.output(\"Downloading fanclub icon...\\n\")\n            self.perform_download(fanclub_icon_url, fanclub_icon_filename, use_server_filename=self.use_server_filenames)\n\n        background_url = fanclub_json[\"fanclub\"][\"background\"]\n        if background_url:\n            background_filename = os.path.join(fanclub_directory, \"background\" + self.process_content_type(background_url))\n            self.output(\"Downloading fanclub background...\\n\")\n            self.perform_download(background_url, background_filename, use_server_filename=self.use_server_filenames)\n\n    def download_fanclub(self, fanclub, limit=0):\n        \"\"\"Download a fanclub.\"\"\"\n        self.output(\"Downloading fanclub {}...\\n\".format(fanclub.id))\n        post_ids = self.fetch_fanclub_posts(fanclub)\n\n        if self.dump_metadata:\n            self.download_fanclub_metadata(fanclub)\n\n        for post_id in post_ids if limit == 0 else post_ids[:limit]:\n            try:\n                self.download_post(post_id)\n            except KeyboardInterrupt:\n                raise\n            except:\n                if self.continue_on_error:\n      
              self.output(\"Encountered an error downloading post. Skipping...\\n\")\n                    traceback.print_exc()\n                    continue\n                else:\n                    raise\n\n    def download_followed_fanclubs(self, limit=0):\n        \"\"\"Download all followed fanclubs.\"\"\"\n        response = self.session.get(FANCLUBS_FOLLOWING_API)\n        response.raise_for_status()\n        fanclub_ids = json.loads(response.text)[\"fanclub_ids\"]\n\n        for fanclub_id in fanclub_ids:\n            try:\n                fanclub = FantiaClub(fanclub_id)\n                self.download_fanclub(fanclub, limit)\n            except KeyboardInterrupt:\n                raise\n            except:\n                if self.continue_on_error:\n                    self.output(\"Encountered an error downloading fanclub. Skipping...\\n\")\n                    traceback.print_exc()\n                    continue\n                else:\n                    raise\n\n    def download_paid_fanclubs(self, limit=0):\n        \"\"\"Download all fanclubs backed on a paid plan.\"\"\"\n        all_paid_fanclubs = []\n        page_number = 1\n        self.output(\"Collecting paid fanclubs...\\n\")\n        while True:\n            response = self.session.get(FANCLUBS_PAID_HTML.format(page_number))\n            response.raise_for_status()\n            response_page = BeautifulSoup(response.text, \"html.parser\")\n            fanclub_links = response_page.select(\"div.mb-5-children > div:nth-of-type(1) a[href^=\\\"/fanclubs\\\"]\")\n\n            for fanclub_link in fanclub_links:\n                fanclub_id = fanclub_link[\"href\"].lstrip(\"/fanclubs/\")\n                all_paid_fanclubs.append(fanclub_id)\n            if not fanclub_links:\n                self.output(\"Collected {} fanclubs.\\n\".format(len(all_paid_fanclubs)))\n                break\n            else:\n                page_number += 1\n\n        for fanclub_id in all_paid_fanclubs:\n           
 try:\n                fanclub = FantiaClub(fanclub_id)\n                self.download_fanclub(fanclub, limit)\n            except:\n                if self.continue_on_error:\n                    self.output(\"Encountered an error downloading fanclub. Skipping...\\n\")\n                    traceback.print_exc()\n                    continue\n                else:\n                    raise\n\n    def download_new_posts(self, post_limit=24):\n        all_new_post_ids = []\n        total_pages = math.ceil(post_limit / 24)\n        page_number = 1\n        has_next = True\n        self.output(\"Downloading {} new posts...\\n\".format(post_limit))\n\n        while has_next and not len(all_new_post_ids) >= post_limit:\n            response = self.session.get(TIMELINES_API.format(page_number))\n            response.raise_for_status()\n            json_response = json.loads(response.text)\n\n            posts = json_response[\"posts\"]\n            has_next = json_response[\"has_next\"]\n            for post in posts:\n                if len(all_new_post_ids) >= post_limit:\n                    break\n                post_id = post[\"id\"]\n                all_new_post_ids.append(post_id)\n            page_number += 1\n\n        for post_id in all_new_post_ids:\n            try:\n                self.download_post(post_id)\n            except KeyboardInterrupt:\n                raise\n            except:\n                if self.continue_on_error:\n                    self.output(\"Encountered an error downloading post. 
Skipping...\\n\")\n                    traceback.print_exc()\n                    continue\n                else:\n                    raise\n\n    def fetch_fanclub_posts(self, fanclub):\n        \"\"\"Iterate over a fanclub's HTML pages to fetch all post IDs.\"\"\"\n        all_posts = []\n        post_found = False\n        page_number = 1\n        self.output(\"Collecting fanclub posts...\\n\")\n        while True:\n            response = self.session.get(FANCLUB_POSTS_HTML.format(fanclub.id, page_number))\n            response.raise_for_status()\n            response_page = BeautifulSoup(response.text, \"html.parser\")\n            posts = response_page.select(\"div.post\")\n            new_post_ids = []\n            for post in posts:\n                link = post.select_one(\"a.link-block\")[\"href\"]\n                # str.lstrip() strips a character set, not a prefix; take the trailing ID segment instead\n                post_id = link.rsplit(\"/\", 1)[-1]\n                date_string = post.select_one(\".post-date .mr-5\").text if post.select_one(\".post-date .mr-5\") else post.select_one(\".post-date\").text\n                parsed_date = dt.strptime(date_string, \"%Y-%m-%d %H:%M\")\n                if not self.month_limit or (parsed_date.year == self.month_limit.year and parsed_date.month == self.month_limit.month):\n                    post_found = True\n                    new_post_ids.append(post_id)\n            all_posts += new_post_ids\n            if not posts or (not new_post_ids and post_found): # No new posts found and we've already collected a post\n                self.output(\"Collected {} posts.\\n\".format(len(all_posts)))\n                return all_posts\n            else:\n                page_number += 1\n\n    def perform_download(self, url, filepath, use_server_filename=False, append_server_extension=False):\n        \"\"\"Perform a download for the specified URL while showing progress.\"\"\"\n        url_path = unquote(url.split(\"?\", 1)[0])\n        server_filename = os.path.basename(url_path)\n        filename = 
os.path.basename(filepath)\n        if use_server_filename:\n            filepath = os.path.join(os.path.dirname(filepath), server_filename)\n\n        # Check if filename is in exclusion list\n        if server_filename in self.exclusions:\n            self.output(\"Server filename in exclusion list (skipping): {}\\n\".format(server_filename))\n            return\n        elif filename in self.exclusions:\n            self.output(\"Filename in exclusion list (skipping): {}\\n\".format(filename))\n            return\n\n        if self.db.conn and self.db.is_url_downloaded(url_path):\n            self.output(\"URL already downloaded. Skipping...\\n\")\n            return\n\n        request = self.session.get(url, stream=True)\n        if request.status_code == 404:\n            self.output(\"Download URL returned 404. Skipping...\\n\")\n            return\n        request.raise_for_status()\n\n        # Handle redirects so we can properly catch an excluded filename\n        # Attachments typically route from fantia.jp/posts/#/download/#\n        # Images typically are served directly from cc.fantia.jp\n        # Metadata images typically are served from c.fantia.jp\n        if request.url != url:\n            url_path = unquote(request.url.split(\"?\", 1)[0])\n            server_filename = os.path.basename(url_path)\n            if server_filename in self.exclusions:\n                self.output(\"Server filename in exclusion list (skipping): {}\\n\".format(server_filename))\n                return\n            if use_server_filename:\n                filepath = os.path.join(os.path.dirname(filepath), server_filename)\n\n        if not use_server_filename and append_server_extension:\n            filepath += guess_extension(request.headers[\"Content-Type\"], url)\n\n        file_size = int(request.headers[\"Content-Length\"])\n        if os.path.isfile(filepath) and os.stat(filepath).st_size == file_size:\n            self.output(\"File found (skipping): 
{}\\n\".format(filepath))\n            self.db.insert_url(url_path)\n            return\n\n        self.output(\"File: {}\\n\".format(filepath))\n        incomplete_filename = filepath + \".part\"\n\n        downloaded = 0\n        with open(incomplete_filename, \"wb\") as file:\n            for chunk in request.iter_content(self.chunk_size):\n                downloaded += len(chunk)\n                file.write(chunk)\n                done = int(25 * downloaded / file_size)\n                percent = int(100 * downloaded / file_size)\n                self.output(\"\\r|{0}{1}| {2}% \".format(\"\\u2588\" * done, \" \" * (25 - done), percent))\n        self.output(\"\\n\")\n\n        if downloaded != file_size:\n            raise Exception(\"Downloaded file size mismatch (expected {}, got {})\".format(file_size, downloaded))\n\n        if os.path.exists(filepath):\n            os.remove(filepath)\n        os.rename(incomplete_filename, filepath)\n\n        self.db.insert_url(url_path)\n\n        # Last-Modified is not guaranteed to be present, so only set file times when it is\n        modification_time_string = request.headers.get(\"Last-Modified\")\n        if modification_time_string:\n            modification_time = int(dt.strptime(modification_time_string, \"%a, %d %b %Y %H:%M:%S %Z\").timestamp())\n            access_time = int(time.time())\n            os.utime(filepath, times=(access_time, modification_time))\n\n    def download_photo(self, photo_url, photo_counter, gallery_directory):\n        \"\"\"Download a photo to the post's directory.\"\"\"\n        extension = self.process_content_type(photo_url)\n        filename = os.path.join(gallery_directory, str(photo_counter) + extension) if gallery_directory else str()\n        self.perform_download(photo_url, filename, use_server_filename=self.use_server_filenames)\n\n    def download_file(self, download_url, filename, post_directory):\n        \"\"\"Download a file to the post's directory.\"\"\"\n        self.perform_download(download_url, filename, use_server_filename=True) # Force server filenames to prevent 
duplicate collision\n\n    def download_post_content(self, post_json, post_directory, post_title):\n        \"\"\"Parse the post's content to determine whether to save the content as a photo gallery or file.\"\"\"\n        self.output(f\"> Content {post_json['id']}\\n\")\n\n        if self.db.conn and self.db.is_post_content_downloaded(post_json[\"id\"]):\n            self.output(\"Post content already downloaded. Skipping...\\n\")\n            return True\n\n        if post_json[\"visible_status\"] != \"visible\":\n            self.output(\"Post content not available on current plan. Skipping...\\n\")\n            return False\n\n        if post_json.get(\"category\"):\n            if post_json[\"category\"] == \"photo_gallery\":\n                photo_gallery = post_json[\"post_content_photos\"]\n                photo_counter = 0\n                gallery_directory = os.path.join(post_directory, sanitize_for_path(post_title))\n                os.makedirs(gallery_directory, exist_ok=True)\n                for photo in photo_gallery:\n                    photo_url = photo[\"url\"][\"original\"]\n                    self.download_photo(photo_url, photo_counter, gallery_directory)\n                    photo_counter += 1\n            elif post_json[\"category\"] == \"file\":\n                filename = os.path.join(post_directory, post_json[\"filename\"])\n                download_url = urljoin(POSTS_URL, post_json[\"download_uri\"])\n                self.download_file(download_url, filename, post_directory)\n            elif post_json[\"category\"] == \"embed\":\n                if self.parse_for_external_links:\n                    # TODO: Check what URLs are allowed as embeds\n                    link_as_list = [post_json[\"embed_url\"]]\n                    self.output(\"Adding embedded link {0} to {1}.\\n\".format(post_json[\"embed_url\"], CRAWLJOB_FILENAME))\n                    build_crawljob(link_as_list, self.directory, post_directory)\n            elif 
post_json[\"category\"] == \"blog\":\n                blog_comment = post_json[\"comment\"]\n                blog_json = json.loads(blog_comment)\n                photo_counter = 0\n                gallery_directory = os.path.join(post_directory, sanitize_for_path(post_title))\n                os.makedirs(gallery_directory, exist_ok=True)\n                for op in blog_json[\"ops\"]:\n                    if isinstance(op[\"insert\"], dict) and op[\"insert\"].get(\"fantiaImage\"):\n                        photo_url = urljoin(BASE_URL, op[\"insert\"][\"fantiaImage\"][\"original_url\"])\n                        self.download_photo(photo_url, photo_counter, gallery_directory)\n                        photo_counter += 1\n            else:\n                self.output(\"Post content category \\\"{}\\\" is not supported. Skipping...\\n\".format(post_json.get(\"category\")))\n                return False\n\n        self.db.insert_post_content(post_json[\"id\"], post_json[\"parent_post\"][\"url\"].rsplit(\"/\", 1)[1], post_json[\"title\"], post_json[\"category\"], post_json[\"foreign_plan_price\"], post_json[\"currency_code\"])\n\n        if self.parse_for_external_links:\n            post_description = post_json[\"comment\"] or \"\"\n            self.parse_external_links(post_description, os.path.abspath(post_directory))\n\n        return True\n\n    def download_thumbnail(self, thumb_url, post_directory):\n        \"\"\"Download a thumbnail to the post's directory.\"\"\"\n        extension = self.process_content_type(thumb_url)\n        filename = os.path.join(post_directory, \"thumb\" + extension)\n        self.perform_download(thumb_url, filename, use_server_filename=self.use_server_filenames)\n\n    def download_post(self, post_id):\n        \"\"\"Download a post to its own directory.\"\"\"\n        db_post = self.db.find_post(post_id)\n        if not self.db_bypass_post_check and self.db.conn and db_post and db_post[\"download_complete\"]:\n            self.output(\"Post {} 
already downloaded. Skipping...\\n\".format(post_id))\n            return\n\n        self.output(\"Downloading post {}...\\n\".format(post_id))\n\n        post_html_response = self.session.get(POST_URL.format(post_id))\n        post_html_response.raise_for_status()\n        post_html = BeautifulSoup(post_html_response.text, \"html.parser\")\n        csrf_token = post_html.select_one(\"meta[name=\\\"csrf-token\\\"]\")[\"content\"]\n\n        response = self.session.get(POST_API.format(post_id), headers={\n            \"X-CSRF-Token\": csrf_token,\n            \"X-Requested-With\": \"XMLHttpRequest\"\n        })\n        response.raise_for_status()\n        post_json = json.loads(response.text)[\"post\"]\n\n        post_id = post_json[\"id\"]\n        post_creator = post_json[\"fanclub\"][\"creator_name\"]\n        post_title = post_json[\"title\"]\n        post_contents = post_json[\"post_contents\"]\n\n        post_posted_at = int(parsedate_to_datetime(post_json[\"posted_at\"]).timestamp())\n        post_converted_at = int(dt.fromisoformat(post_json[\"converted_at\"]).timestamp()) if post_json[\"converted_at\"] else post_posted_at\n\n        if self.db.conn and db_post and db_post[\"download_complete\"]:\n            # Check if the post date changed, which may indicate new contents were added\n            if db_post[\"converted_at\"] != post_converted_at:\n                self.output(\"Post date does not match date in database. Checking for new contents...\\n\")\n                self.db.update_post_download_complete(post_id, download_complete=0)\n                self.db.update_post_converted_at(post_id, post_converted_at)\n            else:\n                self.output(\"Post appears to have been downloaded completely. 
Skipping...\\n\")\n                return\n        if self.db.conn and not db_post:\n            self.db.insert_post(post_id, post_title, post_json[\"fanclub\"][\"id\"], post_posted_at, post_converted_at)\n\n        post_directory_title = sanitize_for_path(str(post_id))\n\n        post_directory = os.path.join(self.directory, sanitize_for_path(post_creator), post_directory_title)\n        os.makedirs(post_directory, exist_ok=True)\n\n        post_titles = self.collect_post_titles(post_json)\n\n        if self.dump_metadata:\n            self.save_metadata(post_json, post_directory)\n        if self.mark_incomplete_posts:\n            self.mark_incomplete_post(post_json, post_directory)\n        if self.download_thumb and post_json[\"thumb\"]:\n            self.download_thumbnail(post_json[\"thumb\"][\"original\"], post_directory)\n        if self.parse_for_external_links:\n            # Main post\n            post_description = post_json[\"comment\"] or \"\"\n            self.parse_external_links(post_description, os.path.abspath(post_directory))\n\n        download_complete_counter = 0\n        for post_index, post in enumerate(post_contents):\n            post_title = post_titles[post_index]\n            if self.download_post_content(post, post_directory, post_title):\n                download_complete_counter += 1\n        if self.db.conn and download_complete_counter == len(post_contents):\n            self.output(\"All post content appears to have been downloaded. Marking as complete in database...\\n\")\n            self.db.update_post_download_complete(post_id)\n\n        if not os.listdir(post_directory):\n            self.output(\"No content downloaded for post {}. Deleting directory.\\n\".format(post_id))\n            os.rmdir(post_directory)\n\n    def parse_external_links(self, post_description, post_directory):\n        \"\"\"Parse the post description for external links, e.g. 
Mega and Google Drive links.\"\"\"\n        link_matches = EXTERNAL_LINKS_RE.findall(post_description)\n        if link_matches:\n            self.output(\"Found {} external link(s) in post. Saving...\\n\".format(len(link_matches)))\n            build_crawljob(link_matches, self.directory, post_directory)\n\n    def save_metadata(self, metadata, directory):\n        \"\"\"Save the metadata for a post to the post's directory.\"\"\"\n        filename = os.path.join(directory, \"metadata.json\")\n        with open(filename, \"w\", encoding='utf-8') as file:\n            json.dump(metadata, file, sort_keys=True, ensure_ascii=False, indent=4)\n\n    def mark_incomplete_post(self, post_metadata, post_directory):\n        \"\"\"Mark incomplete posts with a .incomplete file.\"\"\"\n        is_incomplete = False\n        incomplete_filename = os.path.join(post_directory, \".incomplete\")\n        for post in post_metadata[\"post_contents\"]:\n            if post[\"visible_status\"] != \"visible\":\n                is_incomplete = True\n                break\n        if is_incomplete:\n            if not os.path.exists(incomplete_filename):\n                open(incomplete_filename, 'a').close()\n        else:\n            if os.path.exists(incomplete_filename):\n                os.remove(incomplete_filename)\n\n\ndef guess_extension(mimetype, download_url):\n    \"\"\"\n    Guess the file extension from the mimetype or force a specific extension for certain mimetypes.\n    If the mimetype returns no found extension, guess based on the download URL.\n    \"\"\"\n    extension = MIMETYPES.get(mimetype) or mimetypes.guess_extension(mimetype, strict=True)\n    if not extension:\n        # os.path.splitext() returns an empty string rather than raising when the path has no extension\n        path = urlparse(download_url).path\n        extension = os.path.splitext(path)[1] or \".unknown\"\n    return extension\n\ndef sanitize_for_path(value, replace=' '):\n    \"\"\"Remove potentially illegal characters from a 
path.\"\"\"\n    sanitized = re.sub(r'[<>\\\"\\?\\\\\\/\\*:|]', replace, value)\n    sanitized = sanitized.translate(UNICODE_CONTROL_MAP)\n    return re.sub(r'[\\s.]+$', '', sanitized)\n\ndef build_crawljob(links, root_directory, post_directory):\n    \"\"\"Append to a root .crawljob file with external links gathered from a post.\"\"\"\n    filename = os.path.join(root_directory, CRAWLJOB_FILENAME)\n    with open(filename, \"a\", encoding=\"utf-8\") as file:\n        for link in links:\n            crawl_dict = {\n                \"packageName\": \"Fantia\",\n                \"text\": link,\n                \"downloadFolder\": post_directory,\n                \"enabled\": \"true\",\n                \"autoStart\": \"true\",\n                \"forcedStart\": \"true\",\n                \"autoConfirm\": \"true\",\n                \"addOfflineLink\": \"true\",\n                \"extractAfterDownload\": \"false\"\n            }\n\n            for key, value in crawl_dict.items():\n                file.write(key + \"=\" + value + \"\\n\")\n            file.write(\"\\n\")\n"
  },
  {
    "path": "fantiadl.py",
    "content": "from fantiadl.fantiadl import cli\n\nif __name__ == \"__main__\":\n    cli()\n"
  },
  {
    "path": "requirements.txt",
    "content": "beautifulsoup4\nrequests"
  },
  {
    "path": "setup.py",
    "content": "import setuptools\n\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as description_file:\n    long_description = description_file.read()\n\nwith open(\"requirements.txt\", \"r\", encoding=\"utf-8\") as requirements_file:\n    requirements = requirements_file.read().splitlines()\n\nmain_ns = {}\n# distutils is removed in Python 3.12; a forward-slash path works with open() on all platforms\nversion_path = \"fantiadl/__version__.py\"\nwith open(version_path, encoding=\"utf-8\") as version_file:\n    exec(version_file.read(), main_ns)\n\nsetuptools.setup(\n    name=\"fantiadl\",\n    version=main_ns[\"__version__\"],\n    description=\"Download posts and media from Fantia\",\n    long_description=long_description,\n    long_description_content_type=\"text/markdown\",\n    author=\"bitto\",\n    url=\"https://github.com/bitbybyte/fantiadl\",\n    packages=[\"fantiadl\"],\n    classifiers=[\n        \"Programming Language :: Python\",\n        \"Programming Language :: Python :: 3\",\n        \"License :: OSI Approved :: MIT License\",\n        \"Topic :: Internet :: WWW/HTTP\",\n    ],\n    license=\"MIT\",\n    install_requires=requirements,\n    entry_points={\n        \"console_scripts\": [\n            \"fantiadl=fantiadl.fantiadl:cli\"\n        ]\n    },\n    python_requires=\">=3.7\",\n)\n"
  }
]