[
  {
    "path": ".github/FUNDING.yml",
    "content": "github: [bipinkrish]\n"
  },
  {
    "path": ".gitignore",
    "content": "__pycache__\nmy_bot.session\nmy_bot.session-journal\n"
  },
  {
    "path": "Dockerfile",
    "content": "FROM python:3.9\r\n\r\nWORKDIR /app\r\n\r\nCOPY requirements.txt /app/\r\nRUN pip3 install -r requirements.txt\r\nCOPY . /app\r\n\r\nCMD gunicorn app:app & python3 main.py"
  },
  {
    "path": "Procfile",
    "content": "worker: python3 main.py\r\nweb: python3 app.py"
  },
  {
    "path": "README.md",
    "content": "# Link-Bypasser-Bot\r\n\r\na Telegram Bot (with Site) that can Bypass Ad Links, Generate Direct Links and Jump Paywalls. see the Bot at\r\n~~[@BypassLinkBot](https://t.me/BypassLinkBot)~~ [@BypassUrlsBot](https://t.me/BypassUrlsBot) or try it on [Replit](https://replit.com/@bipinkrish/Link-Bypasser#app.py)\r\n\r\n---\r\n\r\n## Special Feature - Public Database\r\n\r\nResults of the bypass is hosted on public database on [DBHub.io](https://dbhub.io/bipinkrish/link_bypass.db) so if the bot finds link alredy in database it uses the result from the same, time saved and so many problems will be solved.\r\n\r\nTable is creted with the below command, if anyone wants to use it for thier own.\r\n\r\n```sql\r\nCREATE TABLE results (link TEXT PRIMARY KEY, result TEXT)\r\n```\r\n\r\n---\r\n\r\n## Required Variables\r\n\r\n- `TOKEN` Bot Token from @BotFather\r\n- `HASH` API Hash from my.telegram.org\r\n- `ID` API ID from my.telegram.org\r\n\r\n## Optional Variables\r\nyou can also set these in `config.json` file\r\n\r\n- `CRYPT` GDTot Crypt If you don't know how to get Crypt then [Learn Here](https://www.youtube.com/watch?v=EfZ29CotRSU)\r\n- `XSRF_TOKEN` and `Laravel_Session` XSRF Token and Laravel Session cookies! If you don't know how to get then then watch [this Video](https://www.youtube.com/watch?v=EfZ29CotRSU) (for GDTOT) and do the same for sharer.pw\r\n- `DRIVEFIRE_CRYPT` Drivefire Crypt\r\n- `KOLOP_CRYPT`  Kolop Crypt!\r\n- `HUBDRIVE_CRYPT` Hubdrive Crypt\r\n- `KATDRIVE_CRYPT` Katdrive Crypt\r\n- `UPTOBOX_TOKEN` Uptobox Token\r\n- `TERA_COOKIE` Terabox Cookie (only `ndus` value) (see [Help](#help))\r\n- `CLOUDFLARE` Use `cf_clearance` cookie from and Cloudflare protected sites\r\n- `PORT` Port to run the Bot Site on (defaults to 5000)\r\n\r\n## Optinal Database Feature\r\nYou need set all three to work\r\n\r\n- `DB_API` API KEY from [DBHub](https://dbhub.io/pref), make sure it has Read/Write permission\r\n- `DB_OWNER` (defaults to `bipinkrish`)\r\n- `DB_NAME` (defaults to `link_bypass.db`)\r\n\r\n---\r\n\r\n## Deploy on Heroku\r\n\r\n*BEFORE YOU DEPLOY ON HEROKU, YOU SHOULD FORK THE REPO AND CHANGE ITS NAME TO ANYTHING ELSE*<br>\r\n\r\n[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy?template=https://github.com/bipinkrish/Link-Bypasser-Bot)</br>\r\n\r\n---\r\n\r\n## Commands\r\n\r\nEverything is set programtically, nothing to work\r\n\r\n```\r\n/start - Welcome Message\r\n/help - List of All Supported Sites\r\n```\r\n\r\n---\r\n\r\n## Supported Sites\r\n\r\nTo see the list of supported sites see [texts.py](https://github.com/bipinkrish/Link-Bypasser-Bot/blob/main/texts.py) file\r\n\r\n---\r\n\r\n## Help\r\n\r\n* If you are deploying on VPS, watch videos on how to set/export Environment Variables. OR you can set these in `config.json` file\r\n* Terabox Cookie\r\n\r\n    1. Open any Browser\r\n    2. Make sure you are logged in with a Terbox account\r\n    3. Press `f12` to open DEV tools and click Network tab\r\n    4. Open any Terbox video link and open Cookies tab\r\n    5. Copy value of `ndus`\r\n   \r\n   <br>\r\n\r\n   ![](https://i.ibb.co/hHBZM5m/Screenshot-113.png)\r\n"
  },
  {
    "path": "app.json",
    "content": "{\n  \"name\": \"Link-Bypasser-Bot\",\n  \"description\": \"A Telegram Bot (with Site) that can Bypass Ad Links, Generate Direct Links and Jump Paywalls\",\n  \"keywords\": [\n    \"telegram\",\n    \"Link bypass\",\n    \"bypass bot\", \n    \"telegram bot\" \n  ],\n  \"repository\": \"https://github.com/bipinkrish/Link-Bypasser-Bot\",\n  \"logo\": \"https://ibb.co/kMVxrCj\",\n  \"env\": {\n    \"HASH\": {\n      \"description\": \"Your API HASH from my.telegram.org\",\n      \"required\": true\n    },\n    \"ID\": {\n      \"description\": \"Your API ID from my.telegram.org\",\n      \"required\": true\n    },\n    \"TOKEN\":{\n      \"description\": \"Your bot token from @BotFather\",\n      \"required\": true\n    },\n    \"TERA_COOKIE\": {\n      \"description\": \"Terabox Cookie (only ndus value)\",\n      \"required\": false\n    },\n    \"CRYPT\": {\n      \"description\": \"GDTot Crypt\",\n      \"required\": false\n    },\n    \"XSRF_TOKEN\": {\n      \"description\": \"XSRF Token cookies! Check readme file in repo\",\n      \"required\": false\n    },\n    \"Laravel_Session\": {\n      \"description\": \"Laravel Session cookies! Check readme file in repo\",\n      \"required\": false\n    },\n    \"DRIVEFIRE_CRYPT\": {\n      \"description\": \"Drivefire Crypt\",\n      \"required\": false\n    },\n    \"KOLOP_CRYPT\": {\n      \"description\": \"Kolop Crypt\",\n      \"required\": false\n    },\n    \"HUBDRIVE_CRYPT\": {\n      \"description\": \"Hubdrive Crypt\",\n      \"required\": false\n    },\n    \"KATDRIVE_CRYPT\": {\n      \"description\": \"Katdrive Crypt\",\n      \"required\": false\n    },\n    \"UPTOBOX_TOKEN\": {\n      \"description\": \"Uptobox Token\",\n      \"required\": false\n    },\n    \"CLOUDFLARE\": {\n      \"description\": \"Use cf_clearance cookie from and Cloudflare protected sites\",\n      \"required\": false\n    },\n    \"PORT\": {\n      \"description\": \"Port to run the Bot Site on (default is 5000)\",\n      \"required\": false\n    }\n  },\n  \"buildpacks\": [\n    {\n      \"url\": \"heroku/python\"\n    }\n  ],\n  \"formation\": {\n    \"web\": {\n      \"quantity\": 1,\n      \"size\": \"standard-1x\"\n    }\n  }\n}\n"
  },
  {
    "path": "app.py",
    "content": "from flask import Flask, request, render_template, make_response, send_file\nimport bypasser\nimport re\nimport os\nimport freewall\n\n\napp = Flask(__name__)\n\n\ndef handle_index(ele):\n    return bypasser.scrapeIndex(ele)\n\n\ndef store_shortened_links(link):\n    with open(\"shortened_links.txt\", \"a\") as file:\n        file.write(link + \"\\n\")\n\n\ndef loop_thread(url):\n    urls = []\n    urls.append(url)\n\n    if not url:\n        return None\n\n    link = \"\"\n    temp = None\n    for ele in urls:\n        if re.search(r\"https?:\\/\\/(?:[\\w.-]+)?\\.\\w+\\/\\d+:\", ele):\n            handle_index(ele)\n        elif bypasser.ispresent(bypasser.ddl.ddllist, ele):\n            try:\n                temp = bypasser.ddl.direct_link_generator(ele)\n            except Exception as e:\n                temp = \"**Error**: \" + str(e)\n        elif freewall.pass_paywall(ele, check=True):\n            freefile = freewall.pass_paywall(ele)\n            if freefile:\n                try:\n                    return send_file(freefile)\n                except:\n                    pass\n        else:\n            try:\n                temp = bypasser.shortners(ele)\n            except Exception as e:\n                temp = \"**Error**: \" + str(e)\n        print(\"bypassed:\", temp)\n        if temp:\n            link = link + temp + \"\\n\\n\"\n\n    return link\n\n\n@app.route(\"/\", methods=[\"GET\", \"POST\"])\ndef index():\n    if request.method == \"POST\":\n        url = request.form.get(\"url\")\n        result = loop_thread(url)\n        if freewall.pass_paywall(url, check=True):\n            return result\n\n        shortened_links = request.cookies.get(\"shortened_links\")\n        if shortened_links:\n            prev_links = shortened_links.split(\",\")\n        else:\n            prev_links = []\n\n        if result:\n            prev_links.append(result)\n\n            if len(prev_links) > 10:\n                prev_links = prev_links[-10:]\n\n        shortened_links_str = \",\".join(prev_links)\n        resp = make_response(\n            render_template(\"index.html\", result=result, prev_links=prev_links)\n        )\n        resp.set_cookie(\"shortened_links\", shortened_links_str)\n\n        return resp\n\n    shortened_links = request.cookies.get(\"shortened_links\")\n    return render_template(\n        \"index.html\",\n        result=None,\n        prev_links=shortened_links.split(\",\") if shortened_links else None,\n    )\n\n\nif __name__ == \"__main__\":\n    port = int(os.environ.get(\"PORT\", 5000))\n    app.run(host=\"0.0.0.0\", port=port)\n"
  },
  {
    "path": "bypasser.py",
    "content": "import re\r\nimport requests\r\nfrom curl_cffi import requests as Nreq\r\nimport base64\r\nfrom urllib.parse import unquote, urlparse, quote\r\nimport time\r\nimport cloudscraper\r\nfrom bs4 import BeautifulSoup, NavigableString, Tag\r\nfrom lxml import etree\r\nimport hashlib\r\nimport json\r\nfrom asyncio import sleep as asleep\r\nimport ddl\r\nfrom cfscrape import create_scraper\r\nfrom json import load\r\nfrom os import environ\r\n\r\nwith open(\"config.json\", \"r\") as f:\r\n    DATA = load(f)\r\n\r\n\r\ndef getenv(var):\r\n    return environ.get(var) or DATA.get(var, None)\r\n\r\n\r\n##########################################################\r\n# ENVs\r\n\r\nGDTot_Crypt = getenv(\"CRYPT\")\r\nLaravel_Session = getenv(\"Laravel_Session\")\r\nXSRF_TOKEN = getenv(\"XSRF_TOKEN\")\r\nDCRYPT = getenv(\"DRIVEFIRE_CRYPT\")\r\nKCRYPT = getenv(\"KOLOP_CRYPT\")\r\nHCRYPT = getenv(\"HUBDRIVE_CRYPT\")\r\nKATCRYPT = getenv(\"KATDRIVE_CRYPT\")\r\nCF = getenv(\"CLOUDFLARE\")\r\n\r\n############################################################\r\n# Lists\r\n\r\notherslist = [\r\n    \"exe.io\",\r\n    \"exey.io\",\r\n    \"sub2unlock.net\",\r\n    \"sub2unlock.com\",\r\n    \"rekonise.com\",\r\n    \"letsboost.net\",\r\n    \"ph.apps2app.com\",\r\n    \"mboost.me\",\r\n    \"sub4unlock.com\",\r\n    \"ytsubme.com\",\r\n    \"social-unlock.com\",\r\n    \"boost.ink\",\r\n    \"goo.gl\",\r\n    \"shrto.ml\",\r\n    \"t.co\",\r\n]\r\n\r\ngdlist = [\r\n    \"appdrive\",\r\n    \"driveapp\",\r\n    \"drivehub\",\r\n    \"gdflix\",\r\n    \"drivesharer\",\r\n    \"drivebit\",\r\n    \"drivelinks\",\r\n    \"driveace\",\r\n    \"drivepro\",\r\n    \"driveseed\",\r\n]\r\n\r\n\r\n###############################################################\r\n# pdisk\r\n\r\n\r\ndef pdisk(url):\r\n    r = requests.get(url).text\r\n    try:\r\n        return r.split(\"<!-- \")[-1].split(\" -->\")[0]\r\n    except:\r\n        try:\r\n            return (\r\n                BeautifulSoup(r, \"html.parser\").find(\"video\").find(\"source\").get(\"src\")\r\n            )\r\n        except:\r\n            return None\r\n\r\n\r\n###############################################################\r\n# index scrapper\r\n\r\n\r\ndef scrapeIndex(url, username=\"none\", password=\"none\"):\r\n\r\n    def authorization_token(username, password):\r\n        user_pass = f\"{username}:{password}\"\r\n        return f\"Basic {base64.b64encode(user_pass.encode()).decode()}\"\r\n\r\n    def decrypt(string):\r\n        return base64.b64decode(string[::-1][24:-20]).decode(\"utf-8\")\r\n\r\n    def func(payload_input, url, username, password):\r\n        next_page = False\r\n        next_page_token = \"\"\r\n\r\n        url = f\"{url}/\" if url[-1] != \"/\" else url\r\n\r\n        try:\r\n            headers = {\"authorization\": authorization_token(username, password)}\r\n        except:\r\n            return \"username/password combination is wrong\", None, None\r\n\r\n        encrypted_response = requests.post(url, data=payload_input, headers=headers)\r\n        if encrypted_response.status_code == 401:\r\n            return \"username/password combination is wrong\", None, None\r\n\r\n        try:\r\n            decrypted_response = json.loads(decrypt(encrypted_response.text))\r\n        except:\r\n            return (\r\n                \"something went wrong. check index link/username/password field again\",\r\n                None,\r\n                None,\r\n            )\r\n\r\n        page_token = decrypted_response[\"nextPageToken\"]\r\n        if page_token is None:\r\n            next_page = False\r\n        else:\r\n            next_page = True\r\n            next_page_token = page_token\r\n\r\n        if list(decrypted_response.get(\"data\").keys())[0] != \"error\":\r\n            file_length = len(decrypted_response[\"data\"][\"files\"])\r\n            result = \"\"\r\n\r\n            for i, _ in enumerate(range(file_length)):\r\n                files_type = decrypted_response[\"data\"][\"files\"][i][\"mimeType\"]\r\n                if files_type != \"application/vnd.google-apps.folder\":\r\n                    files_name = decrypted_response[\"data\"][\"files\"][i][\"name\"]\r\n\r\n                    direct_download_link = url + quote(files_name)\r\n                    result += f\"• {files_name} :\\n{direct_download_link}\\n\\n\"\r\n            return result, next_page, next_page_token\r\n\r\n    def format(result):\r\n        long_string = \"\".join(result)\r\n        new_list = []\r\n\r\n        while len(long_string) > 0:\r\n            if len(long_string) > 4000:\r\n                split_index = long_string.rfind(\"\\n\\n\", 0, 4000)\r\n                if split_index == -1:\r\n                    split_index = 4000\r\n            else:\r\n                split_index = len(long_string)\r\n\r\n            new_list.append(long_string[:split_index])\r\n            long_string = long_string[split_index:].lstrip(\"\\n\\n\")\r\n\r\n        return new_list\r\n\r\n    # main\r\n    x = 0\r\n    next_page = False\r\n    next_page_token = \"\"\r\n    result = []\r\n\r\n    payload = {\"page_token\": next_page_token, \"page_index\": x}\r\n    print(f\"Index Link: {url}\\n\")\r\n    temp, next_page, next_page_token = func(payload, url, username, password)\r\n    if temp is not None:\r\n        result.append(temp)\r\n\r\n    while next_page == True:\r\n        payload = {\"page_token\": next_page_token, \"page_index\": x}\r\n        temp, next_page, next_page_token = func(payload, url, username, password)\r\n        if temp is not None:\r\n            result.append(temp)\r\n        x += 1\r\n\r\n    if len(result) == 0:\r\n        return None\r\n    return format(result)\r\n\r\n\r\n################################################################\r\n# Shortner Full Page API\r\n\r\n\r\ndef shortner_fpage_api(link):\r\n    link_pattern = r\"https?://[\\w.-]+/full\\?api=([^&]+)&url=([^&]+)(?:&type=(\\d+))?\"\r\n    match = re.match(link_pattern, link)\r\n    if match:\r\n        try:\r\n            url_enc_value = match.group(2)\r\n            url_value = base64.b64decode(url_enc_value).decode(\"utf-8\")\r\n            return url_value\r\n        except BaseException:\r\n            return None\r\n    else:\r\n        return None\r\n\r\n\r\n# Shortner Quick Link API\r\n\r\n\r\ndef shortner_quick_api(link):\r\n    link_pattern = r\"https?://[\\w.-]+/st\\?api=([^&]+)&url=([^&]+)\"\r\n    match = re.match(link_pattern, link)\r\n    if match:\r\n        try:\r\n            url_value = match.group(2)\r\n            return url_value\r\n        except BaseException:\r\n            return None\r\n    else:\r\n        return None\r\n\r\n\r\n##############################################################\r\n# tnlink\r\n\r\n\r\ndef tnlink(url):\r\n    client = requests.session()\r\n    DOMAIN = \"https://page.tnlink.in/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://usanewstoday.club/\"\r\n    h = {\"referer\": ref}\r\n    while len(client.cookies) == 0:\r\n        resp = client.get(final_url, headers=h)\r\n        time.sleep(2)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(8)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n###############################################################\r\n# psa\r\n\r\n\r\ndef try2link_bypass(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n\r\n    params = ((\"d\", int(time.time()) + (60 * 4)),)\r\n    r = client.get(url, params=params, headers={\"Referer\": \"https://newforex.online/\"})\r\n\r\n    soup = BeautifulSoup(r.text, \"html.parser\")\r\n    inputs = soup.find(id=\"go-link\").find_all(name=\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    time.sleep(7)\r\n\r\n    headers = {\r\n        \"Host\": \"try2link.com\",\r\n        \"X-Requested-With\": \"XMLHttpRequest\",\r\n        \"Origin\": \"https://try2link.com\",\r\n        \"Referer\": url,\r\n    }\r\n\r\n    bypassed_url = client.post(\r\n        \"https://try2link.com/links/go\", headers=headers, data=data\r\n    )\r\n    return bypassed_url.json()[\"url\"]\r\n\r\n\r\ndef try2link_scrape(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    h = {\r\n        \"upgrade-insecure-requests\": \"1\",\r\n        \"user-agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36\",\r\n    }\r\n    res = client.get(url, cookies={}, headers=h)\r\n    url = \"https://try2link.com/\" + re.findall(\"try2link\\.com\\/(.*?) \", res.text)[0]\r\n    return try2link_bypass(url)\r\n\r\n\r\ndef psa_bypasser(psa_url):\r\n    cookies = {\"cf_clearance\": CF}\r\n    headers = {\r\n        \"authority\": \"psa.wf\",\r\n        \"accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7\",\r\n        \"accept-language\": \"en-US,en;q=0.9\",\r\n        \"referer\": \"https://psa.wf/\",\r\n        \"user-agent\": \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36\",\r\n    }\r\n\r\n    r = requests.get(psa_url, headers=headers, cookies=cookies)\r\n    soup = BeautifulSoup(r.text, \"html.parser\").find_all(\r\n        class_=\"dropshadowboxes-drop-shadow dropshadowboxes-rounded-corners dropshadowboxes-inside-and-outside-shadow dropshadowboxes-lifted-both dropshadowboxes-effect-default\"\r\n    )\r\n    links = []\r\n    for link in soup:\r\n        try:\r\n            exit_gate = link.a.get(\"href\")\r\n            if \"/exit\" in exit_gate:\r\n                print(\"scraping :\", exit_gate)\r\n                links.append(try2link_scrape(exit_gate))\r\n        except:\r\n            pass\r\n\r\n    finals = \"\"\r\n    for li in links:\r\n        try:\r\n            res = requests.get(li, headers=headers, cookies=cookies)\r\n            soup = BeautifulSoup(res.text, \"html.parser\")\r\n            name = soup.find(\"h1\", class_=\"entry-title\", itemprop=\"headline\").getText()\r\n            finals += \"**\" + name + \"**\\n\\n\"\r\n            soup = soup.find(\"div\", class_=\"entry-content\", itemprop=\"text\").findAll(\r\n                \"a\"\r\n            )\r\n            for ele in soup:\r\n                finals += \"○ \" + ele.get(\"href\") + \"\\n\"\r\n            finals += \"\\n\\n\"\r\n        except:\r\n            finals += li + \"\\n\\n\"\r\n    return finals\r\n\r\n\r\n##################################################################################################################\r\n# rocklinks\r\n\r\n\r\ndef rocklinks(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    if \"rocklinks.net\" in url:\r\n        DOMAIN = \"https://blog.disheye.com\"\r\n    else:\r\n        DOMAIN = \"https://rocklinks.net\"\r\n\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n\r\n    code = url.split(\"/\")[-1]\r\n    if \"rocklinks.net\" in url:\r\n        final_url = f\"{DOMAIN}/{code}?quelle=\"\r\n    else:\r\n        final_url = f\"{DOMAIN}/{code}\"\r\n\r\n    resp = client.get(final_url)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n\r\n    try:\r\n        inputs = soup.find(id=\"go-link\").find_all(name=\"input\")\r\n    except:\r\n        return \"Incorrect Link\"\r\n\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n\r\n    time.sleep(10)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n################################################\r\n# igg games\r\n\r\n\r\ndef decodeKey(encoded):\r\n    key = \"\"\r\n\r\n    i = len(encoded) // 2 - 5\r\n    while i >= 0:\r\n        key += encoded[i]\r\n        i = i - 2\r\n\r\n    i = len(encoded) // 2 + 4\r\n    while i < len(encoded):\r\n        key += encoded[i]\r\n        i = i + 2\r\n\r\n    return key\r\n\r\n\r\ndef bypassBluemediafiles(url, torrent=False):\r\n    headers = {\r\n        \"User-Agent\": \"Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0\",\r\n        \"Accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\",\r\n        \"Accept-Language\": \"en-US,en;q=0.5\",\r\n        \"Alt-Used\": \"bluemediafiles.com\",\r\n        \"Connection\": \"keep-alive\",\r\n        \"Upgrade-Insecure-Requests\": \"1\",\r\n        \"Sec-Fetch-Dest\": \"document\",\r\n        \"Sec-Fetch-Mode\": \"navigate\",\r\n        \"Sec-Fetch-Site\": \"none\",\r\n        \"Sec-Fetch-User\": \"?1\",\r\n    }\r\n\r\n    res = requests.get(url, headers=headers)\r\n    soup = BeautifulSoup(res.text, \"html.parser\")\r\n    script = str(soup.findAll(\"script\")[3])\r\n    encodedKey = script.split('Create_Button(\"')[1].split('\");')[0]\r\n\r\n    headers = {\r\n        \"User-Agent\": \"Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0\",\r\n        \"Accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\",\r\n        \"Accept-Language\": \"en-US,en;q=0.5\",\r\n        \"Referer\": url,\r\n        \"Alt-Used\": \"bluemediafiles.com\",\r\n        \"Connection\": \"keep-alive\",\r\n        \"Upgrade-Insecure-Requests\": \"1\",\r\n        \"Sec-Fetch-Dest\": \"document\",\r\n        \"Sec-Fetch-Mode\": \"navigate\",\r\n        \"Sec-Fetch-Site\": \"same-origin\",\r\n        \"Sec-Fetch-User\": \"?1\",\r\n    }\r\n\r\n    params = {\"url\": decodeKey(encodedKey)}\r\n\r\n    if torrent:\r\n        res = requests.get(\r\n            \"https://dl.pcgamestorrents.org/get-url.php\", params=params, headers=headers\r\n        )\r\n        soup = BeautifulSoup(res.text, \"html.parser\")\r\n        furl = soup.find(\"a\", class_=\"button\").get(\"href\")\r\n\r\n    else:\r\n        res = requests.get(\r\n            \"https://bluemediafiles.com/get-url.php\", params=params, headers=headers\r\n        )\r\n        furl = res.url\r\n        if \"mega.nz\" in furl:\r\n            furl = furl.replace(\"mega.nz/%23!\", \"mega.nz/file/\").replace(\"!\", \"#\")\r\n\r\n    return furl\r\n\r\n\r\ndef igggames(url):\r\n    res = requests.get(url)\r\n    soup = BeautifulSoup(res.text, \"html.parser\")\r\n    soup = soup.find(\"div\", class_=\"uk-margin-medium-top\").findAll(\"a\")\r\n\r\n    bluelist = []\r\n    for ele in soup:\r\n        bluelist.append(ele.get(\"href\"))\r\n    bluelist = bluelist[3:-1]\r\n\r\n    links = \"\"\r\n    last = None\r\n    fix = True\r\n    for ele in bluelist:\r\n        if ele == \"https://igg-games.com/how-to-install-a-pc-game-and-update.html\":\r\n            fix = False\r\n            links += \"\\n\"\r\n        if \"bluemediafile\" in ele:\r\n            tmp = bypassBluemediafiles(ele)\r\n            if fix:\r\n                tt = tmp.split(\"/\")[2]\r\n                if last is not None and tt != last:\r\n                    links += \"\\n\"\r\n                last = tt\r\n            links = links + \"○ \" + tmp + \"\\n\"\r\n        elif \"pcgamestorrents.com\" in ele:\r\n            res = requests.get(ele)\r\n            soup = BeautifulSoup(res.text, \"html.parser\")\r\n            turl = (\r\n                soup.find(\r\n                    \"p\", class_=\"uk-card uk-card-body uk-card-default uk-card-hover\"\r\n                )\r\n                .find(\"a\")\r\n                .get(\"href\")\r\n            )\r\n            links = links + \"🧲 `\" + bypassBluemediafiles(turl, True) + \"`\\n\\n\"\r\n        elif ele != \"https://igg-games.com/how-to-install-a-pc-game-and-update.html\":\r\n            if fix:\r\n                tt = ele.split(\"/\")[2]\r\n                if last is not None and tt != last:\r\n                    links += \"\\n\"\r\n                last = tt\r\n            links = links + \"○ \" + ele + \"\\n\"\r\n\r\n    return links[:-1]\r\n\r\n\r\n###############################################################\r\n# htpmovies cinevood sharespark atishmkv\r\n\r\n\r\ndef htpmovies(link):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    r = client.get(link, allow_redirects=True).text\r\n    j = r.split('(\"')[-1]\r\n    url = j.split('\")')[0]\r\n    param = url.split(\"/\")[-1]\r\n    DOMAIN = \"https://go.theforyou.in\"\r\n    final_url = f\"{DOMAIN}/{param}\"\r\n    resp = client.get(final_url)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    try:\r\n        inputs = soup.find(id=\"go-link\").find_all(name=\"input\")\r\n    except:\r\n        return \"Incorrect Link\"\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(10)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went Wrong !!\"\r\n\r\n\r\ndef scrappers(link):\r\n\r\n    try:\r\n        link = re.match(\r\n            r\"((http|https)\\:\\/\\/)?[a-zA-Z0-9\\.\\/\\?\\:@\\-_=#]+\\.([a-zA-Z]){2,6}([a-zA-Z0-9\\.\\&\\/\\?\\:@\\-_=#])*\",\r\n            link,\r\n        )[0]\r\n    except TypeError:\r\n        return \"Not a Valid Link.\"\r\n    links = []\r\n\r\n    if \"sharespark\" in link:\r\n        gd_txt = \"\"\r\n        res = requests.get(\"?action=printpage;\".join(link.split(\"?\")))\r\n        soup = BeautifulSoup(res.text, \"html.parser\")\r\n        for br in soup.findAll(\"br\"):\r\n            next_s = br.nextSibling\r\n            if not (next_s and isinstance(next_s, NavigableString)):\r\n                continue\r\n            next2_s = next_s.nextSibling\r\n            if next2_s and isinstance(next2_s, Tag) and next2_s.name == \"br\":\r\n                text = str(next_s).strip()\r\n                if text:\r\n                    result = re.sub(r\"(?m)^\\(https://i.*\", \"\", next_s)\r\n                    star = re.sub(r\"(?m)^\\*.*\", \" \", result)\r\n                    extra = re.sub(r\"(?m)^\\(https://e.*\", \" \", star)\r\n                    gd_txt += (\r\n                        \", \".join(\r\n                            re.findall(\r\n                                r\"(?m)^.*https://new1.gdtot.cfd/file/[0-9][^.]*\", next_s\r\n                            )\r\n                        )\r\n                        + \"\\n\\n\"\r\n                    )\r\n        return gd_txt\r\n\r\n    elif \"htpmovies\" in link and \"/exit.php\" in link:\r\n        return htpmovies(link)\r\n\r\n    elif \"htpmovies\" in link:\r\n        prsd = \"\"\r\n        links = []\r\n        res = requests.get(link)\r\n        soup = BeautifulSoup(res.text, \"html.parser\")\r\n        x = soup.select('a[href^=\"/exit.php?url=\"]')\r\n        y = soup.select(\"h5\")\r\n        z = (\r\n            unquote(link.split(\"/\")[-2]).split(\"-\")[0]\r\n            if link.endswith(\"/\")\r\n            else unquote(link.split(\"/\")[-1]).split(\"-\")[0]\r\n        )\r\n\r\n        for a in x:\r\n            links.append(a[\"href\"])\r\n            prsd = f\"Total Links Found : {len(links)}\\n\\n\"\r\n\r\n        msdcnt = -1\r\n        for b in y:\r\n            if str(b.string).lower().startswith(z.lower()):\r\n                msdcnt += 1\r\n                url = f\"https://htpmovies.lol\" + links[msdcnt]\r\n                prsd += f\"{msdcnt+1}. <b>{b.string}</b>\\n{htpmovies(url)}\\n\\n\"\r\n                asleep(5)\r\n        return prsd\r\n\r\n    elif \"cinevood\" in link:\r\n        prsd = \"\"\r\n        links = []\r\n        res = requests.get(link)\r\n        soup = BeautifulSoup(res.text, \"html.parser\")\r\n        x = soup.select('a[href^=\"https://kolop.icu/file\"]')\r\n        for a in x:\r\n            links.append(a[\"href\"])\r\n        for o in links:\r\n            res = requests.get(o)\r\n            soup = BeautifulSoup(res.content, \"html.parser\")\r\n            title = soup.title.string\r\n            reftxt = re.sub(r\"Kolop \\| \", \"\", title)\r\n            prsd += f\"{reftxt}\\n{o}\\n\\n\"\r\n        return prsd\r\n\r\n    elif \"atishmkv\" in link:\r\n        prsd = \"\"\r\n        links = []\r\n        res = requests.get(link)\r\n        soup = BeautifulSoup(res.text, \"html.parser\")\r\n        x = soup.select('a[href^=\"https://gdflix.top/file\"]')\r\n        for a in x:\r\n            links.append(a[\"href\"])\r\n        for o in links:\r\n            prsd += o + \"\\n\\n\"\r\n        return prsd\r\n\r\n    elif \"teluguflix\" in link:\r\n        gd_txt = \"\"\r\n        r = requests.get(link)\r\n        soup = BeautifulSoup(r.text, \"html.parser\")\r\n        links = soup.select('a[href*=\"gdtot\"]')\r\n        gd_txt = f\"Total Links Found : {len(links)}\\n\\n\"\r\n        for no, link in enumerate(links, start=1):\r\n            gdlk = link[\"href\"]\r\n            t = requests.get(gdlk)\r\n            soupt = BeautifulSoup(t.text, \"html.parser\")\r\n            title = soupt.select('meta[property^=\"og:description\"]')\r\n            gd_txt += f\"{no}. <code>{(title[0]['content']).replace('Download ' , '')}</code>\\n{gdlk}\\n\\n\"\r\n            asleep(1.5)\r\n        return gd_txt\r\n\r\n    elif \"taemovies\" in link:\r\n        gd_txt, no = \"\", 0\r\n        r = requests.get(link)\r\n        soup = BeautifulSoup(r.text, \"html.parser\")\r\n        links = soup.select('a[href*=\"shortingly\"]')\r\n        gd_txt = f\"Total Links Found : {len(links)}\\n\\n\"\r\n        for a in links:\r\n            glink = rocklinks(a[\"href\"])\r\n            t = requests.get(glink)\r\n            soupt = BeautifulSoup(t.text, \"html.parser\")\r\n            title = soupt.select('meta[property^=\"og:description\"]')\r\n            no += 1\r\n            gd_txt += (\r\n                f\"{no}. {(title[0]['content']).replace('Download ' , '')}\\n{glink}\\n\\n\"\r\n            )\r\n        return gd_txt\r\n\r\n    elif \"toonworld4all\" in link:\r\n        gd_txt, no = \"\", 0\r\n        r = requests.get(link)\r\n        soup = BeautifulSoup(r.text, \"html.parser\")\r\n        links = soup.select('a[href*=\"redirect/main.php?\"]')\r\n        for a in links:\r\n            down = requests.get(a[\"href\"], stream=True, allow_redirects=False)\r\n            link = down.headers[\"location\"]\r\n            glink = rocklinks(link)\r\n            if glink and \"gdtot\" in glink:\r\n                t = requests.get(glink)\r\n                soupt = BeautifulSoup(t.text, \"html.parser\")\r\n                title = soupt.select('meta[property^=\"og:description\"]')\r\n                no += 1\r\n                gd_txt += f\"{no}. {(title[0]['content']).replace('Download ' , '')}\\n{glink}\\n\\n\"\r\n        return gd_txt\r\n\r\n    elif \"animeremux\" in link:\r\n        gd_txt, no = \"\", 0\r\n        r = requests.get(link)\r\n        soup = BeautifulSoup(r.text, \"html.parser\")\r\n        links = soup.select('a[href*=\"urlshortx.com\"]')\r\n        gd_txt = f\"Total Links Found : {len(links)}\\n\\n\"\r\n        for a in links:\r\n            link = a[\"href\"]\r\n            x = link.split(\"url=\")[-1]\r\n            t = requests.get(x)\r\n            soupt = BeautifulSoup(t.text, \"html.parser\")\r\n            title = soupt.title\r\n            no += 1\r\n            gd_txt += f\"{no}. {title.text}\\n{x}\\n\\n\"\r\n            asleep(1.5)\r\n        return gd_txt\r\n\r\n    else:\r\n        res = requests.get(link)\r\n        soup = BeautifulSoup(res.text, \"html.parser\")\r\n        mystx = soup.select(r'a[href^=\"magnet:?xt=urn:btih:\"]')\r\n        for hy in mystx:\r\n            links.append(hy[\"href\"])\r\n        return links\r\n\r\n\r\n###################################################\r\n# script links\r\n\r\n\r\ndef getfinal(domain, url, sess):\r\n\r\n    # sess = requests.session()\r\n    res = sess.get(url)\r\n    soup = BeautifulSoup(res.text, \"html.parser\")\r\n    soup = soup.find(\"form\").findAll(\"input\")\r\n    datalist = []\r\n    for ele in soup:\r\n        datalist.append(ele.get(\"value\"))\r\n\r\n    data = {\r\n        \"_method\": datalist[0],\r\n        \"_csrfToken\": datalist[1],\r\n        \"ad_form_data\": datalist[2],\r\n        \"_Token[fields]\": datalist[3],\r\n        \"_Token[unlocked]\": datalist[4],\r\n    }\r\n\r\n    sess.headers = {\r\n        \"User-Agent\": \"Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0\",\r\n        \"Accept\": \"application/json, text/javascript, */*; q=0.01\",\r\n        \"Accept-Language\": \"en-US,en;q=0.5\",\r\n        \"Content-Type\": \"application/x-www-form-urlencoded; charset=UTF-8\",\r\n        \"X-Requested-With\": \"XMLHttpRequest\",\r\n        \"Origin\": domain,\r\n        \"Connection\": \"keep-alive\",\r\n        \"Referer\": url,\r\n        \"Sec-Fetch-Dest\": \"empty\",\r\n        \"Sec-Fetch-Mode\": \"cors\",\r\n        \"Sec-Fetch-Site\": \"same-origin\",\r\n    }\r\n\r\n    # print(\"waiting 10 secs\")\r\n    time.sleep(10)  # important\r\n    response = sess.post(domain + \"/links/go\", data=data).json()\r\n    furl = response[\"url\"]\r\n    return furl\r\n\r\n\r\ndef getfirst(url):\r\n\r\n    sess = requests.session()\r\n    res = sess.get(url)\r\n\r\n    soup = BeautifulSoup(res.text, \"html.parser\")\r\n    soup = soup.find(\"form\")\r\n    action = soup.get(\"action\")\r\n    soup = soup.findAll(\"input\")\r\n    datalist = []\r\n    for ele in soup:\r\n        datalist.append(ele.get(\"value\"))\r\n    sess.headers = {\r\n        \"User-Agent\": \"Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0\",\r\n        \"Accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\",\r\n        \"Accept-Language\": \"en-US,en;q=0.5\",\r\n        \"Origin\": action,\r\n        \"Connection\": \"keep-alive\",\r\n        \"Referer\": action,\r\n        \"Upgrade-Insecure-Requests\": \"1\",\r\n        \"Sec-Fetch-Dest\": \"document\",\r\n        \"Sec-Fetch-Mode\": \"navigate\",\r\n        \"Sec-Fetch-Site\": \"same-origin\",\r\n        \"Sec-Fetch-User\": \"?1\",\r\n    }\r\n\r\n    data = {\"newwpsafelink\": datalist[1], \"g-recaptcha-response\": RecaptchaV3()}\r\n    response = sess.post(action, data=data)\r\n    soup = BeautifulSoup(response.text, \"html.parser\")\r\n    soup = soup.findAll(\"div\", class_=\"wpsafe-bottom text-center\")\r\n    for ele in soup:\r\n        rurl = ele.find(\"a\").get(\"onclick\")[13:-12]\r\n\r\n    res = sess.get(rurl)\r\n    furl = res.url\r\n    # print(furl)\r\n    return getfinal(f'https://{furl.split(\"/\")[-2]}/', furl, sess)\r\n\r\n\r\n####################################################################################################\r\n# ez4short\r\n\r\n\r\ndef ez4(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://ez4short.com\"\r\n    ref = \"https://techmody.io/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(8)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n################################################\r\n# ola movies\r\n\r\n\r\ndef olamovies(url):\r\n\r\n    print(\"this takes time, you might want to take a break.\")\r\n    headers = {\r\n        \"User-Agent\": \"Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0\",\r\n        \"Accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\",\r\n        \"Accept-Language\": \"en-US,en;q=0.5\",\r\n        \"Referer\": url,\r\n        \"Alt-Used\": \"olamovies.ink\",\r\n        \"Connection\": \"keep-alive\",\r\n        \"Upgrade-Insecure-Requests\": \"1\",\r\n        \"Sec-Fetch-Dest\": \"document\",\r\n        \"Sec-Fetch-Mode\": \"navigate\",\r\n        \"Sec-Fetch-Site\": \"same-origin\",\r\n        \"Sec-Fetch-User\": \"?1\",\r\n    }\r\n\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    res = client.get(url)\r\n    soup = BeautifulSoup(res.text, \"html.parser\")\r\n    soup = soup.findAll(\"div\", class_=\"wp-block-button\")\r\n\r\n    outlist = []\r\n    for ele in soup:\r\n        outlist.append(ele.find(\"a\").get(\"href\"))\r\n\r\n    slist = []\r\n    for ele in outlist:\r\n        try:\r\n            key = (\r\n                ele.split(\"?key=\")[1]\r\n                .split(\"&id=\")[0]\r\n                .replace(\"%2B\", \"+\")\r\n                .replace(\"%3D\", \"=\")\r\n                .replace(\"%2F\", \"/\")\r\n            )\r\n            id = ele.split(\"&id=\")[1]\r\n        except:\r\n            continue\r\n\r\n        count = 3\r\n        params = {\"key\": key, \"id\": id}\r\n        soup = \"None\"\r\n\r\n        while (\r\n            \"rocklinks.net\" not in soup\r\n            and \"try2link.com\" not in soup\r\n            and \"ez4short.com\" not in soup\r\n        ):\r\n            res = client.get(\r\n                \"https://olamovies.ink/download/\", params=params, headers=headers\r\n            )\r\n            soup = BeautifulSoup(res.text, \"html.parser\")\r\n            soup = soup.findAll(\"a\")[0].get(\"href\")\r\n            if soup != \"\":\r\n                if (\r\n                    \"try2link.com\" in soup\r\n                    or \"rocklinks.net\" in soup\r\n                    or \"ez4short.com\" in soup\r\n                ):\r\n                    slist.append(soup)\r\n                else:\r\n                    pass\r\n            else:\r\n                if count == 0:\r\n                    break\r\n                else:\r\n                    count -= 1\r\n\r\n            time.sleep(10)\r\n\r\n    final = []\r\n    for ele in slist:\r\n        if \"rocklinks.net\" in ele:\r\n            final.append(rocklinks(ele))\r\n        elif \"try2link.com\" in ele:\r\n            final.append(try2link_bypass(ele))\r\n        elif \"ez4short.com\" in ele:\r\n            final.append(ez4(ele))\r\n        else:\r\n            pass\r\n\r\n    links = \"\"\r\n    for ele in final:\r\n        links = links + ele + \"\\n\"\r\n    return links[:-1]\r\n\r\n\r\n###############################################\r\n# katdrive\r\n\r\n\r\ndef parse_info_katdrive(res):\r\n    info_parsed = {}\r\n    title = re.findall(\">(.*?)<\\/h4>\", res.text)[0]\r\n    info_chunks = re.findall(\">(.*?)<\\/td>\", res.text)\r\n    info_parsed[\"title\"] = title\r\n    for i in range(0, len(info_chunks), 2):\r\n        info_parsed[info_chunks[i]] = info_chunks[i + 1]\r\n    return info_parsed\r\n\r\n\r\ndef katdrive_dl(url, katcrypt):\r\n    client = requests.Session()\r\n    client.cookies.update({\"crypt\": katcrypt})\r\n\r\n    res = client.get(url)\r\n    info_parsed = parse_info_katdrive(res)\r\n    info_parsed[\"error\"] = False\r\n\r\n    up = urlparse(url)\r\n    req_url = f\"{up.scheme}://{up.netloc}/ajax.php?ajax=download\"\r\n\r\n    file_id = url.split(\"/\")[-1]\r\n    data = {\"id\": file_id}\r\n    headers = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n\r\n    try:\r\n        res = client.post(req_url, headers=headers, data=data).json()[\"file\"]\r\n    except:\r\n        return \"Error\"  # {'error': True, 'src_url': url}\r\n\r\n    gd_id = re.findall(\"gd=(.*)\", res, re.DOTALL)[0]\r\n    info_parsed[\"gdrive_url\"] = f\"https://drive.google.com/open?id={gd_id}\"\r\n    info_parsed[\"src_url\"] = url\r\n    return info_parsed[\"gdrive_url\"]\r\n\r\n\r\n###############################################\r\n# hubdrive\r\n\r\n\r\ndef parse_info_hubdrive(res):\r\n    info_parsed = {}\r\n    title = re.findall(\">(.*?)<\\/h4>\", res.text)[0]\r\n    info_chunks = re.findall(\">(.*?)<\\/td>\", res.text)\r\n    info_parsed[\"title\"] = title\r\n    for i in range(0, len(info_chunks), 2):\r\n        info_parsed[info_chunks[i]] = info_chunks[i + 1]\r\n    return info_parsed\r\n\r\n\r\ndef hubdrive_dl(url, hcrypt):\r\n    client = requests.Session()\r\n    client.cookies.update({\"crypt\": hcrypt})\r\n\r\n    res = client.get(url)\r\n    info_parsed = parse_info_hubdrive(res)\r\n    info_parsed[\"error\"] = False\r\n\r\n    up = urlparse(url)\r\n    req_url = f\"{up.scheme}://{up.netloc}/ajax.php?ajax=download\"\r\n\r\n    file_id = url.split(\"/\")[-1]\r\n    data = {\"id\": file_id}\r\n    headers = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n\r\n    try:\r\n        res = client.post(req_url, headers=headers, data=data).json()[\"file\"]\r\n    except:\r\n        return \"Error\"  # {'error': True, 'src_url': url}\r\n\r\n    gd_id = re.findall(\"gd=(.*)\", res, re.DOTALL)[0]\r\n    info_parsed[\"gdrive_url\"] = f\"https://drive.google.com/open?id={gd_id}\"\r\n    info_parsed[\"src_url\"] = url\r\n    return info_parsed[\"gdrive_url\"]\r\n\r\n\r\n#################################################\r\n# drivefire\r\n\r\n\r\ndef parse_info_drivefire(res):\r\n    info_parsed = {}\r\n    title = re.findall(\">(.*?)<\\/h4>\", res.text)[0]\r\n    info_chunks = re.findall(\">(.*?)<\\/td>\", res.text)\r\n    info_parsed[\"title\"] = title\r\n    for i in range(0, len(info_chunks), 2):\r\n        info_parsed[info_chunks[i]] = info_chunks[i + 1]\r\n    return info_parsed\r\n\r\n\r\ndef drivefire_dl(url, dcrypt):\r\n    client = requests.Session()\r\n    client.cookies.update({\"crypt\": dcrypt})\r\n\r\n    res = client.get(url)\r\n    info_parsed = parse_info_drivefire(res)\r\n    info_parsed[\"error\"] = False\r\n\r\n    up = urlparse(url)\r\n    req_url = f\"{up.scheme}://{up.netloc}/ajax.php?ajax=download\"\r\n\r\n    file_id = url.split(\"/\")[-1]\r\n    data = {\"id\": file_id}\r\n    headers = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n\r\n    try:\r\n        res = client.post(req_url, headers=headers, data=data).json()[\"file\"]\r\n    except:\r\n        return \"Error\"  # {'error': True, 'src_url': url}\r\n\r\n    decoded_id = res.rsplit(\"/\", 1)[-1]\r\n    info_parsed = f\"https://drive.google.com/file/d/{decoded_id}\"\r\n    return info_parsed\r\n\r\n\r\n##################################################\r\n# kolop\r\n\r\n\r\ndef parse_info_kolop(res):\r\n    info_parsed = {}\r\n    title = re.findall(\">(.*?)<\\/h4>\", res.text)[0]\r\n    info_chunks = re.findall(\">(.*?)<\\/td>\", res.text)\r\n    info_parsed[\"title\"] = title\r\n    for i in range(0, len(info_chunks), 2):\r\n        info_parsed[info_chunks[i]] = info_chunks[i + 1]\r\n    return info_parsed\r\n\r\n\r\ndef kolop_dl(url, kcrypt):\r\n    client = requests.Session()\r\n    client.cookies.update({\"crypt\": kcrypt})\r\n\r\n    res = client.get(url)\r\n    info_parsed = parse_info_kolop(res)\r\n    info_parsed[\"error\"] = False\r\n\r\n    up = urlparse(url)\r\n    req_url = f\"{up.scheme}://{up.netloc}/ajax.php?ajax=download\"\r\n\r\n    file_id = url.split(\"/\")[-1]\r\n    data = {\"id\": file_id}\r\n    headers = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n\r\n    try:\r\n        res = client.post(req_url, headers=headers, data=data).json()[\"file\"]\r\n    except:\r\n        return \"Error\"  # {'error': True, 'src_url': url}\r\n\r\n    gd_id = re.findall(\"gd=(.*)\", res, re.DOTALL)[0]\r\n    info_parsed[\"gdrive_url\"] = f\"https://drive.google.com/open?id={gd_id}\"\r\n    info_parsed[\"src_url\"] = url\r\n\r\n    return info_parsed[\"gdrive_url\"]\r\n\r\n\r\n##################################################\r\n# mediafire\r\n\r\n\r\ndef mediafire(url):\r\n\r\n    res = requests.get(url, stream=True)\r\n    contents = res.text\r\n\r\n    for line in contents.splitlines():\r\n        m = re.search(r'href=\"((http|https)://download[^\"]+)', line)\r\n        if m:\r\n            return m.groups()[0]\r\n\r\n\r\n####################################################\r\n# zippyshare\r\n\r\n\r\ndef zippyshare(url):\r\n    resp = requests.get(url).text\r\n    surl = resp.split(\"document.getElementById('dlbutton').href = \")[1].split(\";\")[0]\r\n    parts = surl.split(\"(\")[1].split(\")\")[0].split(\" \")\r\n    val = str(int(parts[0]) % int(parts[2]) + int(parts[4]) % int(parts[6]))\r\n    surl = surl.split('\"')\r\n    burl = url.split(\"zippyshare.com\")[0]\r\n    furl = burl + \"zippyshare.com\" + surl[1] + val + surl[-2]\r\n    return furl\r\n\r\n\r\n####################################################\r\n# filercrypt\r\n\r\n\r\ndef getlinks(dlc):\r\n    headers = {\r\n        \"User-Agent\": \"Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0\",\r\n        \"Accept\": \"application/json, text/javascript, */*\",\r\n        \"Accept-Language\": \"en-US,en;q=0.5\",\r\n        \"X-Requested-With\": \"XMLHttpRequest\",\r\n        \"Origin\": \"http://dcrypt.it\",\r\n        \"Connection\": \"keep-alive\",\r\n        \"Referer\": \"http://dcrypt.it/\",\r\n    }\r\n\r\n    data = {\r\n        \"content\": dlc,\r\n    }\r\n\r\n    response = requests.post(\r\n        \"http://dcrypt.it/decrypt/paste\", headers=headers, data=data\r\n    ).json()[\"success\"][\"links\"]\r\n    links = \"\"\r\n    for link in response:\r\n        links = links + link + \"\\n\\n\"\r\n    return links[:-1]\r\n\r\n\r\ndef filecrypt(url):\r\n\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    headers = {\r\n        \"authority\": \"filecrypt.co\",\r\n        \"accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9\",\r\n        \"accept-language\": \"en-US,en;q=0.9\",\r\n        \"cache-control\": \"max-age=0\",\r\n        \"content-type\": \"application/x-www-form-urlencoded\",\r\n        \"origin\": \"https://filecrypt.co\",\r\n        \"referer\": url,\r\n        \"user-agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36\",\r\n    }\r\n\r\n    resp = client.get(url, headers=headers)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n\r\n    buttons = soup.find_all(\"button\")\r\n    for ele in buttons:\r\n        line = ele.get(\"onclick\")\r\n        if line != None and \"DownloadDLC\" in line:\r\n            dlclink = (\r\n                \"https://filecrypt.co/DLC/\"\r\n                + line.split(\"DownloadDLC('\")[1].split(\"'\")[0]\r\n                + \".html\"\r\n            )\r\n            break\r\n\r\n    resp = client.get(dlclink, headers=headers)\r\n    return getlinks(resp.text)\r\n\r\n\r\n#####################################################\r\n# dropbox\r\n\r\n\r\ndef dropbox(url):\r\n    return (\r\n        url.replace(\"www.\", \"\")\r\n        .replace(\"dropbox.com\", \"dl.dropboxusercontent.com\")\r\n        .replace(\"?dl=0\", \"\")\r\n    )\r\n\r\n\r\n######################################################\r\n# shareus\r\n\r\n\r\ndef shareus(url):\r\n    headers = {\r\n        \"user-agent\": \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36\",\r\n    }\r\n    DOMAIN = \"https://us-central1-my-apps-server.cloudfunctions.net\"\r\n    sess = requests.session()\r\n\r\n    code = url.split(\"/\")[-1]\r\n    params = {\r\n        \"shortid\": code,\r\n        \"initial\": \"true\",\r\n        \"referrer\": \"https://shareus.io/\",\r\n    }\r\n    response = requests.get(f\"{DOMAIN}/v\", params=params, headers=headers)\r\n\r\n    for i in range(1, 4):\r\n        json_data = {\r\n            \"current_page\": i,\r\n        }\r\n        response = sess.post(f\"{DOMAIN}/v\", headers=headers, json=json_data)\r\n\r\n    response = sess.get(f\"{DOMAIN}/get_link\", headers=headers).json()\r\n    return response[\"link_info\"][\"destination\"]\r\n\r\n\r\n#######################################################\r\n# shortingly\r\n\r\n\r\ndef shortingly(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://shortingly.in\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://tech.gyanitheme.com/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(5)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#######################################################\r\n# Gyanilinks - gtlinks.me\r\n\r\n\r\ndef gyanilinks(url):\r\n    DOMAIN = \"https://go.hipsonyc.com/\"\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    resp = client.get(final_url)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    try:\r\n        inputs = soup.find(id=\"go-link\").find_all(name=\"input\")\r\n    except:\r\n        return \"Incorrect Link\"\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(5)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#######################################################\r\n# Flashlink\r\n\r\n\r\ndef flashl(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://files.earnash.com/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://flash1.cordtpoint.co.in\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(15)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#######################################################\r\n# short2url\r\n\r\n\r\ndef short2url(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://techyuth.xyz/blog\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://blog.coin2pay.xyz/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(10)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#######################################################\r\n# anonfiles\r\n\r\n\r\ndef anonfile(url):\r\n\r\n    headersList = {\"Accept\": \"*/*\"}\r\n    payload = \"\"\r\n\r\n    response = requests.request(\r\n        \"GET\", url, data=payload, headers=headersList\r\n    ).text.split(\"\\n\")\r\n    for ele in response:\r\n        if (\r\n            \"https://cdn\" in ele\r\n            and \"anonfiles.com\" in ele\r\n            and url.split(\"/\")[-2] in ele\r\n        ):\r\n            break\r\n\r\n    return ele.split('href=\"')[1].split('\"')[0]\r\n\r\n\r\n##########################################################\r\n# pixl\r\n\r\n\r\ndef pixl(url):\r\n    count = 1\r\n    dl_msg = \"\"\r\n    currentpage = 1\r\n    settotalimgs = True\r\n    totalimages = \"\"\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    resp = client.get(url)\r\n    if resp.status_code == 404:\r\n        return \"File not found/The link you entered is wrong!\"\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    if \"album\" in url and settotalimgs:\r\n        totalimages = soup.find(\"span\", {\"data-text\": \"image-count\"}).text\r\n        settotalimgs = False\r\n    thmbnailanch = soup.findAll(attrs={\"class\": \"--media\"})\r\n    links = soup.findAll(attrs={\"data-pagination\": \"next\"})\r\n    try:\r\n        url = links[0].attrs[\"href\"]\r\n    except BaseException:\r\n        url = None\r\n    for ref in thmbnailanch:\r\n        imgdata = client.get(ref.attrs[\"href\"])\r\n        if not imgdata.status_code == 200:\r\n            time.sleep(5)\r\n            continue\r\n        imghtml = BeautifulSoup(imgdata.text, \"html.parser\")\r\n        downloadanch = imghtml.find(attrs={\"class\": \"btn-download\"})\r\n        currentimg = downloadanch.attrs[\"href\"]\r\n        currentimg = currentimg.replace(\" \", \"%20\")\r\n        dl_msg += f\"{count}. {currentimg}\\n\"\r\n        count += 1\r\n    currentpage += 1\r\n    fld_msg = f\"Your provided Pixl.is link is of Folder and I've Found {count - 1} files in the folder.\\n\"\r\n    fld_link = f\"\\nFolder Link: {url}\\n\"\r\n    final_msg = fld_link + \"\\n\" + fld_msg + \"\\n\" + dl_msg\r\n    return final_msg\r\n\r\n\r\n############################################################\r\n# sirigan  ( unused )\r\n\r\n\r\ndef siriganbypass(url):\r\n    client = requests.Session()\r\n    res = client.get(url)\r\n    url = res.url.split(\"=\", maxsplit=1)[-1]\r\n\r\n    while True:\r\n        try:\r\n            url = base64.b64decode(url).decode(\"utf-8\")\r\n        except:\r\n            break\r\n\r\n    return url.split(\"url=\")[-1]\r\n\r\n\r\n############################################################\r\n# shorte\r\n\r\n\r\ndef sh_st_bypass(url):\r\n    client = requests.Session()\r\n    client.headers.update({\"referer\": url})\r\n    p = urlparse(url)\r\n\r\n    res = client.get(url)\r\n\r\n    sess_id = re.findall(\"\"\"sessionId(?:\\s+)?:(?:\\s+)?['|\"](.*?)['|\"]\"\"\", res.text)[0]\r\n\r\n    final_url = f\"{p.scheme}://{p.netloc}/shortest-url/end-adsession\"\r\n    params = {\"adSessionId\": sess_id, \"callback\": \"_\"}\r\n    time.sleep(5)  # !important\r\n\r\n    res = client.get(final_url, params=params)\r\n    dest_url = re.findall('\"(.*?)\"', res.text)[1].replace(\"\\/\", \"/\")\r\n\r\n    return {\"src\": url, \"dst\": dest_url}[\"dst\"]\r\n\r\n\r\n#############################################################\r\n# gofile\r\n\r\n\r\ndef gofile_dl(url, password=\"\"):\r\n    api_uri = \"https://api.gofile.io\"\r\n    client = requests.Session()\r\n    res = client.get(api_uri + \"/createAccount\").json()\r\n\r\n    data = {\r\n        \"contentId\": url.split(\"/\")[-1],\r\n        \"token\": res[\"data\"][\"token\"],\r\n        \"websiteToken\": \"12345\",\r\n        \"cache\": \"true\",\r\n        \"password\": hashlib.sha256(password.encode(\"utf-8\")).hexdigest(),\r\n    }\r\n    res = client.get(api_uri + \"/getContent\", params=data).json()\r\n\r\n    content = []\r\n    for item in res[\"data\"][\"contents\"].values():\r\n        content.append(item)\r\n\r\n    return {\"accountToken\": data[\"token\"], \"files\": content}[\"files\"][0][\"link\"]\r\n\r\n\r\n################################################################\r\n# sharer pw\r\n\r\n\r\ndef parse_info_sharer(res):\r\n    f = re.findall(\">(.*?)<\\/td>\", res.text)\r\n    info_parsed = {}\r\n    for i in range(0, len(f), 3):\r\n        info_parsed[f[i].lower().replace(\" \", \"_\")] = f[i + 2]\r\n    return info_parsed\r\n\r\n\r\ndef sharer_pw(url, Laravel_Session, XSRF_TOKEN, forced_login=False):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    client.cookies.update(\r\n        {\"XSRF-TOKEN\": XSRF_TOKEN, \"laravel_session\": Laravel_Session}\r\n    )\r\n    res = client.get(url)\r\n    token = re.findall(\"_token\\s=\\s'(.*?)'\", res.text, re.DOTALL)[0]\r\n    ddl_btn = etree.HTML(res.content).xpath(\"//button[@id='btndirect']\")\r\n    info_parsed = parse_info_sharer(res)\r\n    info_parsed[\"error\"] = True\r\n    info_parsed[\"src_url\"] = url\r\n    info_parsed[\"link_type\"] = \"login\"\r\n    info_parsed[\"forced_login\"] = forced_login\r\n    headers = {\r\n        \"content-type\": \"application/x-www-form-urlencoded; charset=UTF-8\",\r\n        \"x-requested-with\": \"XMLHttpRequest\",\r\n    }\r\n    data = {\"_token\": token}\r\n    if len(ddl_btn):\r\n        info_parsed[\"link_type\"] = \"direct\"\r\n    if not forced_login:\r\n        data[\"nl\"] = 1\r\n    try:\r\n        res = client.post(url + \"/dl\", headers=headers, data=data).json()\r\n    except:\r\n        return info_parsed\r\n    if \"url\" in res and res[\"url\"]:\r\n        info_parsed[\"error\"] = False\r\n        info_parsed[\"gdrive_link\"] = res[\"url\"]\r\n    if len(ddl_btn) and not forced_login and not \"url\" in info_parsed:\r\n        # retry download via login\r\n        return sharer_pw(url, Laravel_Session, XSRF_TOKEN, forced_login=True)\r\n    return info_parsed[\"gdrive_link\"]\r\n\r\n\r\n#################################################################\r\n# gdtot\r\n\r\n\r\ndef gdtot(url):\r\n    cget = create_scraper().request\r\n    try:\r\n        res = cget(\"GET\", f'https://gdbot.xyz/file/{url.split(\"/\")[-1]}')\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    token_url = etree.HTML(res.content).xpath(\r\n        \"//a[contains(@class,'inline-flex items-center justify-center')]/@href\"\r\n    )\r\n    if not token_url:\r\n        try:\r\n            url = cget(\"GET\", url).url\r\n            p_url = urlparse(url)\r\n            res = cget(\r\n                \"GET\", f\"{p_url.scheme}://{p_url.hostname}/ddl/{url.split('/')[-1]}\"\r\n            )\r\n        except Exception as e:\r\n            return f\"ERROR: {e.__class__.__name__}\"\r\n        if (\r\n            drive_link := re.findall(r\"myDl\\('(.*?)'\\)\", res.text)\r\n        ) and \"drive.google.com\" in drive_link[0]:\r\n            return drive_link[0]\r\n        else:\r\n            return \"ERROR: Drive Link not found, Try in your broswer\"\r\n    token_url = token_url[0]\r\n    try:\r\n        token_page = cget(\"GET\", token_url)\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__} with {token_url}\"\r\n    path = re.findall('\\(\"(.*?)\"\\)', token_page.text)\r\n    if not path:\r\n        return \"ERROR: Cannot bypass this\"\r\n    path = path[0]\r\n    raw = urlparse(token_url)\r\n    final_url = f\"{raw.scheme}://{raw.hostname}{path}\"\r\n    return ddl.sharer_scraper(final_url)\r\n\r\n\r\n##################################################################\r\n# adfly\r\n\r\n\r\ndef decrypt_url(code):\r\n    a, b = \"\", \"\"\r\n    for i in range(0, len(code)):\r\n        if i % 2 == 0:\r\n            a += code[i]\r\n        else:\r\n            b = code[i] + b\r\n    key = list(a + b)\r\n    i = 0\r\n    while i < len(key):\r\n        if key[i].isdigit():\r\n            for j in range(i + 1, len(key)):\r\n                if key[j].isdigit():\r\n                    u = int(key[i]) ^ int(key[j])\r\n                    if u < 10:\r\n                        key[i] = str(u)\r\n                    i = j\r\n                    break\r\n        i += 1\r\n    key = \"\".join(key)\r\n    decrypted = base64.b64decode(key)[16:-16]\r\n    return decrypted.decode(\"utf-8\")\r\n\r\n\r\ndef adfly(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    res = client.get(url).text\r\n    out = {\"error\": False, \"src_url\": url}\r\n    try:\r\n        ysmm = re.findall(\"ysmm\\s+=\\s+['|\\\"](.*?)['|\\\"]\", res)[0]\r\n    except:\r\n        out[\"error\"] = True\r\n        return out\r\n    url = decrypt_url(ysmm)\r\n    if re.search(r\"go\\.php\\?u\\=\", url):\r\n        url = base64.b64decode(re.sub(r\"(.*?)u=\", \"\", url)).decode()\r\n    elif \"&dest=\" in url:\r\n        url = unquote(re.sub(r\"(.*?)dest=\", \"\", url))\r\n    out[\"bypassed_url\"] = url\r\n    return out\r\n\r\n\r\n##############################################################################################\r\n# gplinks\r\n\r\n\r\ndef gplinks(url: str):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    token = url.split(\"/\")[-1]\r\n    domain = \"https://gplinks.co/\"\r\n    referer = \"https://mynewsmedia.co/\"\r\n    vid = client.get(url, allow_redirects=False).headers[\"Location\"].split(\"=\")[-1]\r\n    url = f\"{url}/?{vid}\"\r\n    response = client.get(url, allow_redirects=False)\r\n    soup = BeautifulSoup(response.content, \"html.parser\")\r\n    inputs = soup.find(id=\"go-link\").find_all(name=\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    time.sleep(10)\r\n    headers = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    bypassed_url = client.post(domain + \"links/go\", data=data, headers=headers).json()[\r\n        \"url\"\r\n    ]\r\n    try:\r\n        return bypassed_url\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n######################################################################################################\r\n# droplink\r\n\r\n\r\ndef droplink(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    res = client.get(url, timeout=5)\r\n\r\n    ref = re.findall(\"action[ ]{0,}=[ ]{0,}['|\\\"](.*?)['|\\\"]\", res.text)[0]\r\n    h = {\"referer\": ref}\r\n    res = client.get(url, headers=h)\r\n\r\n    bs4 = BeautifulSoup(res.content, \"html.parser\")\r\n    inputs = bs4.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\r\n        \"content-type\": \"application/x-www-form-urlencoded\",\r\n        \"x-requested-with\": \"XMLHttpRequest\",\r\n    }\r\n\r\n    p = urlparse(url)\r\n    final_url = f\"{p.scheme}://{p.netloc}/links/go\"\r\n    time.sleep(3.1)\r\n    res = client.post(final_url, data=data, headers=h).json()\r\n\r\n    if res[\"status\"] == \"success\":\r\n        return res[\"url\"]\r\n    return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################################\r\n# link vertise\r\n\r\n\r\ndef linkvertise(url):\r\n    params = {\r\n        \"url\": url,\r\n    }\r\n    response = requests.get(\"https://bypass.pm/bypass2\", params=params).json()\r\n    if response[\"success\"]:\r\n        return response[\"destination\"]\r\n    else:\r\n        return response[\"msg\"]\r\n\r\n\r\n###################################################################################################################\r\n# others\r\n\r\n\r\ndef others(url):\r\n    return \"API Currently not Available\"\r\n\r\n\r\n#################################################################################################################\r\n# ouo\r\n\r\n\r\n# RECAPTCHA v3 BYPASS\r\n# code from https://github.com/xcscxr/Recaptcha-v3-bypass\r\ndef RecaptchaV3():\r\n    ANCHOR_URL = \"https://www.google.com/recaptcha/api2/anchor?ar=1&k=6Lcr1ncUAAAAAH3cghg6cOTPGARa8adOf-y9zv2x&co=aHR0cHM6Ly9vdW8ucHJlc3M6NDQz&hl=en&v=pCoGBhjs9s8EhFOHJFe8cqis&size=invisible&cb=ahgyd1gkfkhe\"\r\n    url_base = \"https://www.google.com/recaptcha/\"\r\n    post_data = \"v={}&reason=q&c={}&k={}&co={}\"\r\n    client = requests.Session()\r\n    client.headers.update({\"content-type\": \"application/x-www-form-urlencoded\"})\r\n    matches = re.findall(\"([api2|enterprise]+)\\/anchor\\?(.*)\", ANCHOR_URL)[0]\r\n    url_base += matches[0] + \"/\"\r\n    params = matches[1]\r\n    res = client.get(url_base + \"anchor\", params=params)\r\n    token = re.findall(r'\"recaptcha-token\" value=\"(.*?)\"', res.text)[0]\r\n    params = dict(pair.split(\"=\") for pair in params.split(\"&\"))\r\n    post_data = post_data.format(params[\"v\"], token, params[\"k\"], params[\"co\"])\r\n    res = client.post(url_base + \"reload\", params=f'k={params[\"k\"]}', data=post_data)\r\n    answer = re.findall(r'\"rresp\",\"(.*?)\"', res.text)[0]\r\n    return answer\r\n\r\n\r\n# code from https://github.com/xcscxr/ouo-bypass/\r\ndef ouo(url):\r\n    tempurl = url.replace(\"ouo.press\", \"ouo.io\")\r\n    p = urlparse(tempurl)\r\n    id = tempurl.split(\"/\")[-1]\r\n    client = Nreq.Session(\r\n        headers={\r\n            \"authority\": \"ouo.io\",\r\n            \"accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7\",\r\n            \"accept-language\": \"en-GB,en-US;q=0.9,en;q=0.8\",\r\n            \"cache-control\": \"max-age=0\",\r\n            \"referer\": \"http://www.google.com/ig/adde?moduleurl=\",\r\n            \"upgrade-insecure-requests\": \"1\",\r\n        }\r\n    )\r\n    res = client.get(tempurl, impersonate=\"chrome110\")\r\n    next_url = f\"{p.scheme}://{p.hostname}/go/{id}\"\r\n\r\n    for _ in range(2):\r\n        if res.headers.get(\"Location\"):\r\n            break\r\n        bs4 = BeautifulSoup(res.content, \"lxml\")\r\n        inputs = bs4.form.findAll(\"input\", {\"name\": re.compile(r\"token$\")})\r\n        data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n        data[\"x-token\"] = RecaptchaV3()\r\n        header = {\"content-type\": \"application/x-www-form-urlencoded\"}\r\n        res = client.post(\r\n            next_url,\r\n            data=data,\r\n            headers=header,\r\n            allow_redirects=False,\r\n            impersonate=\"chrome110\",\r\n        )\r\n        next_url = f\"{p.scheme}://{p.hostname}/xreallcygo/{id}\"\r\n\r\n    return res.headers.get(\"Location\")\r\n\r\n\r\n####################################################################################################################\r\n# mdisk\r\n\r\n\r\ndef mdisk(url):\r\n    header = {\r\n        \"Accept\": \"*/*\",\r\n        \"Accept-Language\": \"en-US,en;q=0.5\",\r\n        \"Accept-Encoding\": \"gzip, deflate, br\",\r\n        \"Referer\": \"https://mdisk.me/\",\r\n        \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36\",\r\n    }\r\n\r\n    inp = url\r\n    fxl = inp.split(\"/\")\r\n    cid = fxl[-1]\r\n\r\n    URL = f\"https://diskuploader.entertainvideo.com/v1/file/cdnurl?param={cid}\"\r\n    res = requests.get(url=URL, headers=header).json()\r\n    return res[\"download\"] + \"\\n\\n\" + res[\"source\"]\r\n\r\n\r\n##################################################################################################################\r\n# AppDrive or DriveApp etc. Look-Alike Link and as well as the Account Details (Required for Login Required Links only)\r\n\r\n\r\ndef unified(url):\r\n\r\n    if ddl.is_share_link(url):\r\n        if \"https://gdtot\" in url:\r\n            return ddl.gdtot(url)\r\n        else:\r\n            return ddl.sharer_scraper(url)\r\n\r\n    try:\r\n        Email = \"chzeesha4@gmail.com\"\r\n        Password = \"zeeshi#789\"\r\n\r\n        account = {\"email\": Email, \"passwd\": Password}\r\n        client = cloudscraper.create_scraper(allow_brotli=False)\r\n        client.headers.update(\r\n            {\r\n                \"user-agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36\"\r\n            }\r\n        )\r\n        data = {\"email\": account[\"email\"], \"password\": account[\"passwd\"]}\r\n        client.post(f\"https://{urlparse(url).netloc}/login\", data=data)\r\n        res = client.get(url)\r\n        key = re.findall('\"key\",\\s+\"(.*?)\"', res.text)[0]\r\n        ddl_btn = etree.HTML(res.content).xpath(\"//button[@id='drc']\")\r\n        info = re.findall(\">(.*?)<\\/li>\", res.text)\r\n        info_parsed = {}\r\n        for item in info:\r\n            kv = [s.strip() for s in item.split(\": \", maxsplit=1)]\r\n            info_parsed[kv[0].lower()] = kv[1]\r\n        info_parsed = info_parsed\r\n        info_parsed[\"error\"] = False\r\n        info_parsed[\"link_type\"] = \"login\"\r\n        headers = {\r\n            \"Content-Type\": f\"multipart/form-data; boundary={'-'*4}_\",\r\n        }\r\n        data = {\"type\": 1, \"key\": key, \"action\": \"original\"}\r\n        if len(ddl_btn):\r\n            info_parsed[\"link_type\"] = \"direct\"\r\n            data[\"action\"] = \"direct\"\r\n        while data[\"type\"] <= 3:\r\n            boundary = f'{\"-\"*6}_'\r\n            data_string = \"\"\r\n            for item in data:\r\n                data_string += f\"{boundary}\\r\\n\"\r\n                data_string += f'Content-Disposition: form-data; name=\"{item}\"\\r\\n\\r\\n{data[item]}\\r\\n'\r\n            data_string += f\"{boundary}--\\r\\n\"\r\n            gen_payload = data_string\r\n            try:\r\n                response = client.post(url, data=gen_payload, headers=headers).json()\r\n                break\r\n            except BaseException:\r\n                data[\"type\"] += 1\r\n        if \"url\" in response:\r\n            info_parsed[\"gdrive_link\"] = response[\"url\"]\r\n        elif \"error\" in response and response[\"error\"]:\r\n            info_parsed[\"error\"] = True\r\n            info_parsed[\"error_message\"] = response[\"message\"]\r\n        else:\r\n            info_parsed[\"error\"] = True\r\n            info_parsed[\"error_message\"] = \"Something went wrong :(\"\r\n        if info_parsed[\"error\"]:\r\n            return info_parsed\r\n        if \"driveapp\" in urlparse(url).netloc and not info_parsed[\"error\"]:\r\n            res = client.get(info_parsed[\"gdrive_link\"])\r\n            drive_link = etree.HTML(res.content).xpath(\r\n                \"//a[contains(@class,'btn')]/@href\"\r\n            )[0]\r\n            info_parsed[\"gdrive_link\"] = drive_link\r\n        info_parsed[\"src_url\"] = url\r\n        if \"drivehub\" in urlparse(url).netloc and not info_parsed[\"error\"]:\r\n            res = client.get(info_parsed[\"gdrive_link\"])\r\n            drive_link = etree.HTML(res.content).xpath(\r\n                \"//a[contains(@class,'btn')]/@href\"\r\n            )[0]\r\n            info_parsed[\"gdrive_link\"] = drive_link\r\n        if \"gdflix\" in urlparse(url).netloc and not info_parsed[\"error\"]:\r\n            res = client.get(info_parsed[\"gdrive_link\"])\r\n            drive_link = etree.HTML(res.content).xpath(\r\n                \"//a[contains(@class,'btn')]/@href\"\r\n            )[0]\r\n            info_parsed[\"gdrive_link\"] = drive_link\r\n\r\n        if \"drivesharer\" in urlparse(url).netloc and not info_parsed[\"error\"]:\r\n            res = client.get(info_parsed[\"gdrive_link\"])\r\n            drive_link = etree.HTML(res.content).xpath(\r\n                \"//a[contains(@class,'btn')]/@href\"\r\n            )[0]\r\n            info_parsed[\"gdrive_link\"] = drive_link\r\n        if \"drivebit\" in urlparse(url).netloc and not info_parsed[\"error\"]:\r\n            res = client.get(info_parsed[\"gdrive_link\"])\r\n            drive_link = etree.HTML(res.content).xpath(\r\n                \"//a[contains(@class,'btn')]/@href\"\r\n            )[0]\r\n            info_parsed[\"gdrive_link\"] = drive_link\r\n        if \"drivelinks\" in urlparse(url).netloc and not info_parsed[\"error\"]:\r\n            res = client.get(info_parsed[\"gdrive_link\"])\r\n            drive_link = etree.HTML(res.content).xpath(\r\n                \"//a[contains(@class,'btn')]/@href\"\r\n            )[0]\r\n            info_parsed[\"gdrive_link\"] = drive_link\r\n        if \"driveace\" in urlparse(url).netloc and not info_parsed[\"error\"]:\r\n            res = client.get(info_parsed[\"gdrive_link\"])\r\n            drive_link = etree.HTML(res.content).xpath(\r\n                \"//a[contains(@class,'btn')]/@href\"\r\n            )[0]\r\n            info_parsed[\"gdrive_link\"] = drive_link\r\n        if \"drivepro\" in urlparse(url).netloc and not info_parsed[\"error\"]:\r\n            res = client.get(info_parsed[\"gdrive_link\"])\r\n            drive_link = etree.HTML(res.content).xpath(\r\n                \"//a[contains(@class,'btn')]/@href\"\r\n            )[0]\r\n            info_parsed[\"gdrive_link\"] = drive_link\r\n        if info_parsed[\"error\"]:\r\n            return \"Faced an Unknown Error!\"\r\n        return info_parsed[\"gdrive_link\"]\r\n    except BaseException:\r\n        return \"Unable to Extract GDrive Link\"\r\n\r\n\r\n#####################################################################################################\r\n# urls open\r\n\r\n\r\ndef urlsopen(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://blogpost.viewboonposts.com/e998933f1f665f5e75f2d1ae0009e0063ed66f889000\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://blog.textpage.xyz/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(2)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n####################################################################################################\r\n# URLShortX - xpshort\r\n\r\n\r\ndef xpshort(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://xpshort.com\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://www.animalwallpapers.online/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(8)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n####################################################################################################\r\n# Vnshortner-\r\n\r\n\r\ndef vnshortener(url):\r\n    sess = requests.session()\r\n    DOMAIN = \"https://vnshortener.com/\"\r\n    org = \"https://nishankhatri.xyz\"\r\n    PhpAcc = DOMAIN + \"link/new.php\"\r\n    ref = \"https://nishankhatri.com.np/\"\r\n    go = DOMAIN + \"links/go\"\r\n\r\n    code = url.split(\"/\")[3]\r\n    final_url = f\"{DOMAIN}/{code}/\"\r\n    headers = {\"authority\": DOMAIN, \"origin\": org}\r\n\r\n    data = {\r\n        \"step_1\": code,\r\n    }\r\n    response = sess.post(PhpAcc, headers=headers, data=data).json()\r\n    id = response[\"inserted_data\"][\"id\"]\r\n    data = {\r\n        \"step_2\": code,\r\n        \"id\": id,\r\n    }\r\n    response = sess.post(PhpAcc, headers=headers, data=data).json()\r\n\r\n    headers[\"referer\"] = ref\r\n    params = {\"sid\": str(id)}\r\n    resp = sess.get(final_url, params=params, headers=headers)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n\r\n    time.sleep(1)\r\n    headers[\"x-requested-with\"] = \"XMLHttpRequest\"\r\n    try:\r\n        r = sess.post(go, data=data, headers=headers).json()\r\n        if r[\"status\"] == \"success\":\r\n            return r[\"url\"]\r\n        else:\r\n            raise\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# onepagelink\r\n\r\n\r\ndef onepagelink(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"go.onepagelink.in\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"https://{DOMAIN}/{code}\"\r\n    ref = \"https://gorating.in/\"\r\n    h = {\"referer\": ref}\r\n    response = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(response.text, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(5)\r\n    r = client.post(f\"https://{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except BaseException:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# dulink\r\n\r\n\r\ndef dulink(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://du-link.in\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    ref = \"https://profitshort.com/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# krownlinks\r\n\r\n\r\ndef krownlinks(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://go.bloggerishyt.in/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://www.techkhulasha.com/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(8)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return str(r.json()[\"url\"])\r\n    except BaseException:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n####################################################################################################\r\n# adrinolink\r\n\r\n\r\ndef adrinolink(url):\r\n    if \"https://adrinolinks.in/\" not in url:\r\n        url = \"https://adrinolinks.in/\" + url.split(\"/\")[-1]\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://adrinolinks.in\"\r\n    ref = \"https://wikitraveltips.com/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(8)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# mdiskshortners\r\n\r\n\r\ndef mdiskshortners(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://mdiskshortners.in/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://www.adzz.in/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(2)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# tinyfy\r\n\r\n\r\ndef tiny(url):\r\n    client = requests.session()\r\n    DOMAIN = \"https://tinyfy.in\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://www.yotrickslog.tech/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# earnl\r\n\r\n\r\ndef earnl(url):\r\n    client = requests.session()\r\n    DOMAIN = \"https://v.earnl.xyz\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://link.modmakers.xyz/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(5)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# moneykamalo\r\n\r\n\r\ndef moneykamalo(url):\r\n    client = requests.session()\r\n    DOMAIN = \"https://go.moneykamalo.com\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://bloging.techkeshri.com/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(5)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# lolshort\r\n\r\n\r\ndef lolshort(url):\r\n    client = requests.session()\r\n    DOMAIN = \"https://get.lolshort.tech/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://tech.animezia.com/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(5)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# easysky\r\n\r\n\r\ndef easysky(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://techy.veganab.co/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://veganab.co/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(8)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# indiurl\r\n\r\n\r\ndef indi(url):\r\n    client = requests.session()\r\n    DOMAIN = \"https://file.earnash.com/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://indiurl.cordtpoint.co.in/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(10)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# linkbnao\r\n\r\n\r\ndef linkbnao(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://vip.linkbnao.com\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://ffworld.xyz/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(2)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# omegalinks\r\n\r\n\r\ndef mdiskpro(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://mdisk.pro\"\r\n    ref = \"https://m.meclipstudy.in/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(8)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# tnshort\r\n\r\n\r\ndef tnshort(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://news.sagenews.in/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://movies.djnonstopmusic.in/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(8)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return str(r.json()[\"url\"])\r\n    except BaseException:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# tnvalue\r\n\r\n\r\ndef tnvalue(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://gadgets.webhostingtips.club/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://ladkibahin.com/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(12)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return str(r.json()[\"url\"])\r\n    except BaseException:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# indianshortner\r\n\r\n\r\ndef indshort(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://indianshortner.com/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://moddingzone.in/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(5)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# mdisklink\r\n\r\n\r\ndef mdisklink(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://mdisklink.link/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://m.proappapk.com/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(2)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# rslinks\r\n\r\n\r\ndef rslinks(url):\r\n    client = requests.session()\r\n    download = requests.get(url, stream=True, allow_redirects=False)\r\n    v = download.headers[\"location\"]\r\n    code = v.split(\"ms9\")[-1]\r\n    final = f\"http://techyproio.blogspot.com/p/short.html?{code}==\"\r\n    try:\r\n        return final\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#########################\r\n# vipurl\r\ndef vipurl(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://count.vipurl.in/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://ezeviral.com/\"\r\n    h = {\"referer\": ref}\r\n    response = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(response.text, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(9)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return r.json()[\"url\"]\r\n    except BaseException:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# mdisky.link\r\ndef mdisky(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://go.bloggingaro.com/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://www.bloggingaro.com/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(6)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return str(r.json()[\"url\"])\r\n    except BaseException:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# bitly + tinyurl\r\n\r\n\r\ndef bitly_tinyurl(url: str) -> str:\r\n    response = requests.get(url).url\r\n    try:\r\n        return response\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# thinfi\r\n\r\n\r\ndef thinfi(url: str) -> str:\r\n    response = requests.get(url)\r\n    soup = BeautifulSoup(response.content, \"html.parser\").p.a.get(\"href\")\r\n    try:\r\n        return soup\r\n    except:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\n#####################################################################################################\r\n# kingurl\r\n\r\n\r\ndef kingurl1(url):\r\n    client = cloudscraper.create_scraper(allow_brotli=False)\r\n    DOMAIN = \"https://go.kingurl.in/\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}/{code}\"\r\n    ref = \"https://earnbox.bankshiksha.in/\"\r\n    h = {\"referer\": ref}\r\n    resp = client.get(final_url, headers=h)\r\n    soup = BeautifulSoup(resp.content, \"html.parser\")\r\n    inputs = soup.find_all(\"input\")\r\n    data = {input.get(\"name\"): input.get(\"value\") for input in inputs}\r\n    h = {\"x-requested-with\": \"XMLHttpRequest\"}\r\n    time.sleep(7)\r\n    r = client.post(f\"{DOMAIN}/links/go\", data=data, headers=h)\r\n    try:\r\n        return str(r.json()[\"url\"])\r\n    except BaseException:\r\n        return \"Something went wrong :(\"\r\n\r\n\r\ndef kingurl(url):\r\n    DOMAIN = \"https://earn.bankshiksha.in/click.php?LinkShortUrlID\"\r\n    url = url[:-1] if url[-1] == \"/\" else url\r\n    code = url.split(\"/\")[-1]\r\n    final_url = f\"{DOMAIN}={code}\"\r\n    return final_url\r\n#####################################################################################################\r\n# helpers\r\n\r\n\r\n# check if present in list\r\ndef ispresent(inlist, url):\r\n    for ele in inlist:\r\n        if ele in url:\r\n            return True\r\n    return False\r\n\r\n\r\n# shortners\r\ndef shortners(url):\r\n    # Shortner Full Page API\r\n    if val := shortner_fpage_api(url):\r\n        return val\r\n\r\n    # Shortner Quick Link API\r\n    elif val := shortner_quick_api(url):\r\n        return val\r\n\r\n    # igg games\r\n    elif \"https://igg-games.com/\" in url:\r\n        print(\"entered igg: \", url)\r\n        return igggames(url)\r\n\r\n    # ola movies\r\n    elif \"https://olamovies.\" in url:\r\n        print(\"entered ola movies: \", url)\r\n        return olamovies(url)\r\n\r\n    # katdrive\r\n    elif \"https://katdrive.\" in url:\r\n        if KATCRYPT == \"\":\r\n            return \"🚫 __You can't use this because__ **KATDRIVE_CRYPT** __ENV is not set__\"\r\n\r\n        print(\"entered katdrive: \", url)\r\n        return katdrive_dl(url, KATCRYPT)\r\n\r\n    # kolop\r\n    elif \"https://kolop.\" in url:\r\n        if KCRYPT == \"\":\r\n            return (\r\n                \"🚫 __You can't use this because__ **KOLOP_CRYPT** __ENV is not set__\"\r\n            )\r\n\r\n        print(\"entered kolop: \", url)\r\n        return kolop_dl(url, KCRYPT)\r\n\r\n    # hubdrive\r\n    elif \"https://hubdrive.\" in url:\r\n        if HCRYPT == \"\":\r\n            return \"🚫 __You can't use this because__ **HUBDRIVE_CRYPT** __ENV is not set__\"\r\n\r\n        print(\"entered hubdrive: \", url)\r\n        return hubdrive_dl(url, HCRYPT)\r\n\r\n    # drivefire\r\n    elif \"https://drivefire.\" in url:\r\n        if DCRYPT == \"\":\r\n            return \"🚫 __You can't use this because__ **DRIVEFIRE_CRYPT** __ENV is not set__\"\r\n\r\n        print(\"entered drivefire: \", url)\r\n        return drivefire_dl(url, DCRYPT)\r\n\r\n    # filecrypt\r\n    elif (\"https://filecrypt.co/\") in url or (\"https://filecrypt.cc/\" in url):\r\n        print(\"entered filecrypt: \", url)\r\n        return filecrypt(url)\r\n\r\n    # shareus\r\n    elif \"https://shareus.\" in url or \"https://shrs.link/\" in url:\r\n        print(\"entered shareus: \", url)\r\n        return shareus(url)\r\n\r\n    # shortingly\r\n    elif \"https://shortingly.in/\" in url:\r\n        print(\"entered shortingly: \", url)\r\n        return shortingly(url)\r\n\r\n    # vnshortner\r\n    elif \"https://vnshortener.com/\" in url:\r\n        print(\"entered vnshortener: \", url)\r\n        return vnshortener(url)\r\n\r\n    # onepagelink\r\n    elif \"https://onepagelink.in/\" in url:\r\n        print(\"entered onepagelink: \", url)\r\n        return onepagelink(url)\r\n\r\n    # gyanilinks\r\n    elif \"https://gyanilinks.com/\" in url or \"https://gtlinks.me/\" in url:\r\n        print(\"entered gyanilinks: \", url)\r\n        return gyanilinks(url)\r\n\r\n    # flashlink\r\n    elif \"https://go.flashlink.in\" in url:\r\n        print(\"entered flashlink: \", url)\r\n        return flashl(url)\r\n\r\n    # short2url\r\n    elif \"https://short2url.in/\" in url:\r\n        print(\"entered short2url: \", url)\r\n        return short2url(url)\r\n\r\n    # shorte\r\n    elif \"https://shorte.st/\" in url:\r\n        print(\"entered shorte: \", url)\r\n        return sh_st_bypass(url)\r\n\r\n    # psa\r\n    elif \"https://psa.wf/\" in url:\r\n        print(\"entered psa: \", url)\r\n        return psa_bypasser(url)\r\n\r\n    # sharer pw\r\n    elif \"https://sharer.pw/\" in url:\r\n        if XSRF_TOKEN == \"\" or Laravel_Session == \"\":\r\n            return \"🚫 __You can't use this because__ **XSRF_TOKEN** __and__ **Laravel_Session** __ENV is not set__\"\r\n\r\n        print(\"entered sharer: \", url)\r\n        return sharer_pw(url, Laravel_Session, XSRF_TOKEN)\r\n\r\n    # gdtot url\r\n    elif \"gdtot.cfd\" in url:\r\n        print(\"entered gdtot: \", url)\r\n        return gdtot(url)\r\n\r\n    # adfly\r\n    elif \"https://adf.ly/\" in url:\r\n        print(\"entered adfly: \", url)\r\n        out = adfly(url)\r\n        return out[\"bypassed_url\"]\r\n\r\n    # gplinks\r\n    elif \"https://gplinks.co/\" in url:\r\n        print(\"entered gplink: \", url)\r\n        return gplinks(url)\r\n\r\n    # droplink\r\n    elif \"https://droplink.co/\" in url:\r\n        print(\"entered droplink: \", url)\r\n        return droplink(url)\r\n\r\n    # linkvertise\r\n    elif \"https://linkvertise.com/\" in url:\r\n        print(\"entered linkvertise: \", url)\r\n        return linkvertise(url)\r\n\r\n    # rocklinks\r\n    elif \"https://rocklinks.net/\" in url:\r\n        print(\"entered rocklinks: \", url)\r\n        return rocklinks(url)\r\n\r\n    # ouo\r\n    elif \"https://ouo.press/\" in url or \"https://ouo.io/\" in url:\r\n        print(\"entered ouo: \", url)\r\n        return ouo(url)\r\n\r\n    # try2link\r\n    elif \"https://try2link.com/\" in url:\r\n        print(\"entered try2links: \", url)\r\n        return try2link_bypass(url)\r\n\r\n    # urlsopen\r\n    elif \"https://urlsopen.\" in url:\r\n        print(\"entered urlsopen: \", url)\r\n        return urlsopen(url)\r\n\r\n    # xpshort\r\n    elif (\r\n        \"https://xpshort.com/\" in url\r\n        or \"https://push.bdnewsx.com/\" in url\r\n        or \"https://techymozo.com/\" in url\r\n    ):\r\n        print(\"entered xpshort: \", url)\r\n        return xpshort(url)\r\n\r\n    # dulink\r\n    elif \"https://du-link.in/\" in url:\r\n        print(\"entered dulink: \", url)\r\n        return dulink(url)\r\n\r\n    # ez4short\r\n    elif \"https://ez4short.com/\" in url:\r\n        print(\"entered ez4short: \", url)\r\n        return ez4(url)\r\n\r\n    # krownlinks\r\n    elif \"https://krownlinks.me/\" in url:\r\n        print(\"entered krownlinks: \", url)\r\n        return krownlinks(url)\r\n\r\n    # adrinolink\r\n    elif \"https://adrinolinks.\" in url:\r\n        print(\"entered adrinolink: \", url)\r\n        return adrinolink(url)\r\n\r\n    # tnlink\r\n    elif \"https://link.tnlink.in/\" in url:\r\n        print(\"entered tnlink: \", url)\r\n        return tnlink(url)\r\n\r\n    # mdiskshortners\r\n    elif \"https://mdiskshortners.in/\" in url:\r\n        print(\"entered mdiskshortners: \", url)\r\n        return mdiskshortners(url)\r\n\r\n    # tinyfy\r\n    elif \"tinyfy.in\" in url:\r\n        print(\"entered tinyfy: \", url)\r\n        return tiny(url)\r\n\r\n    # earnl\r\n    elif \"go.earnl.xyz\" in url:\r\n        print(\"entered earnl: \", url)\r\n        return earnl(url)\r\n\r\n    # moneykamalo\r\n    elif \"earn.moneykamalo.com\" in url:\r\n        print(\"entered moneykamalo: \", url)\r\n        return moneykamalo(url)\r\n\r\n    # lolshort\r\n    elif \"http://go.lolshort.tech/\" in url or \"https://go.lolshort.tech/\" in url:\r\n        print(\"entered lolshort: \", url)\r\n        return lolshort(url)\r\n\r\n    # easysky\r\n    elif \"m.easysky.in\" in url:\r\n        print(\"entered easysky: \", url)\r\n        return easysky(url)\r\n\r\n    # indiurl\r\n    elif \"go.indiurl.in.net\" in url:\r\n        print(\"entered indiurl: \", url)\r\n        return indi(url)\r\n\r\n    # linkbnao\r\n    elif \"linkbnao.com\" in url:\r\n        print(\"entered linkbnao: \", url)\r\n        return linkbnao(url)\r\n\r\n    # omegalinks\r\n    elif \"mdisk.pro\" in url:\r\n        print(\"entered mdiskpro: \", url)\r\n        return mdiskpro(url)\r\n\r\n    # tnshort\r\n    elif \"https://link.tnshort.net/\" in url or \"https://tnseries.com/\" in url or \"https://link.tnseries.com/\" in url:\r\n        print(\"entered tnshort:\", url)\r\n        return tnshort(url)\r\n\r\n    # tnvalue\r\n    elif (\r\n        \"https://link.tnvalue.in/\" in url\r\n        or \"https://short.tnvalue.in/\" in url\r\n        or \"https://page.finclub.in/\" in url\r\n    ):\r\n        print(\"entered tnvalue:\", url)\r\n        return tnvalue(url)\r\n\r\n    # indianshortner\r\n    elif \"indianshortner.in\" in url:\r\n        print(\"entered indianshortner: \", url)\r\n        return indshort(url)\r\n\r\n    # mdisklink\r\n    elif \"mdisklink.link\" in url:\r\n        print(\"entered mdisklink: \", url)\r\n        return mdisklink(url)\r\n\r\n    # rslinks\r\n    elif \"rslinks.net\" in url:\r\n        print(\"entered rslinks: \", url)\r\n        return rslinks(url)\r\n\r\n    # bitly + tinyurl\r\n    elif \"bit.ly\" in url or \"tinyurl.com\" in url:\r\n        print(\"entered bitly_tinyurl: \", url)\r\n        return bitly_tinyurl(url)\r\n\r\n    # pdisk\r\n    elif \"pdisk.pro\" in url:\r\n        print(\"entered pdisk: \", url)\r\n        return pdisk(url)\r\n\r\n    # thinfi\r\n    elif \"thinfi.com\" in url:\r\n        print(\"entered thinfi: \", url)\r\n        return thinfi(url)\r\n\r\n    # vipurl\r\n    elif \"link.vipurl.in\" in url or \"count.vipurl.in\" in url or \"vipurl.in\" in url:\r\n        print(\"entered vipurl:\", url)\r\n        return vipurl(url)\r\n\r\n    # mdisky\r\n    elif \"mdisky.link\" in url:\r\n        print(\"entered mdisky:\", url)\r\n        return mdisky(url)\r\n\r\n    # kingurl\r\n    elif \"https://kingurl.in/\" in url:\r\n        print(\"entered kingurl:\", url)\r\n        return kingurl(url)\r\n\r\n    # htpmovies sharespark cinevood\r\n    elif (\r\n        \"https://htpmovies.\" in url\r\n        or \"https://sharespark.me/\" in url\r\n        or \"https://cinevood.\" in url\r\n        or \"https://atishmkv.\" in url\r\n        or \"https://teluguflix\" in url\r\n        or \"https://taemovies\" in url\r\n        or \"https://toonworld4all\" in url\r\n        or \"https://animeremux\" in url\r\n    ):\r\n        print(\"entered htpmovies sharespark cinevood atishmkv: \", url)\r\n        return scrappers(url)\r\n\r\n    # gdrive look alike\r\n    elif ispresent(gdlist, url):\r\n        print(\"entered gdrive look alike: \", url)\r\n        return unified(url)\r\n\r\n    # others\r\n    elif ispresent(otherslist, url):\r\n        print(\"entered others: \", url)\r\n        return others(url)\r\n\r\n    # else\r\n    else:\r\n        print(\"entered: \", url)\r\n        return \"Not in Supported Sites\"\r\n\r\n\r\n################################################################################################################################\r\n"
  },
  {
    "path": "config.json",
    "content": "{\n    \"TOKEN\": \"\",\n    \"ID\": \"\",\n    \"HASH\": \"\",\n    \"Laravel_Session\": \"\",\n    \"XSRF_TOKEN\": \"\",\n    \"GDTot_Crypt\": \"\",\n    \"DCRYPT\": \"\",\n    \"KCRYPT\": \"\",\n    \"HCRYPT\": \"\",\n    \"KATCRYPT\": \"\",\n    \"UPTOBOX_TOKEN\": \"\",\n    \"TERA_COOKIE\": \"\",\n    \"CLOUDFLARE\": \"\",\n    \"PORT\": 5000,\n    \"DB_API\": \"\",\n    \"DB_OWNER\": \"bipinkrish\",\n    \"DB_NAME\": \"link_bypass.db\"\n}"
  },
  {
    "path": "db.py",
    "content": "import requests\nimport base64\n\nclass DB:\n    def __init__(self, api_key, db_owner, db_name) -> None:\n        self.api_key = api_key\n        self.db_owner = db_owner\n        self.db_name = db_name\n        self.url = \"https://api.dbhub.io/v1/tables\"\n        self.data = {\n            \"apikey\": self.api_key,\n            \"dbowner\": self.db_owner,\n            \"dbname\": self.db_name\n        }\n\n        response = requests.post(self.url, data=self.data)\n        if response.status_code == 200:\n            if \"results\" not in response.json():\n                raise Exception(\"Error, Table not found\")\n        else:\n            raise Exception(\"Error in Auth\")\n\n    def insert(self, link: str, result: str):\n        url = \"https://api.dbhub.io/v1/execute\"\n        sql_insert = f\"INSERT INTO results (link, result) VALUES ('{link}', '{result}')\"\n        encoded_sql = base64.b64encode(sql_insert.encode()).decode()\n        self.data[\"sql\"] = encoded_sql\n        response = requests.post(url, data=self.data)\n        if response.status_code == 200:\n            if response.json()[\"status\"] != \"OK\":\n                raise Exception(\"Error while inserting\")\n            return True\n        else:\n            print(response.json())\n            return None\n    \n    def find(self, link: str):\n        url = \"https://api.dbhub.io/v1/query\"\n        sql_select = f\"SELECT result FROM results WHERE link = '{link}'\"\n        encoded_sql = base64.b64encode(sql_select.encode()).decode()\n        self.data[\"sql\"] = encoded_sql\n        response = requests.post(url, data=self.data)\n        if response.status_code == 200:\n            try: return response.json()[0][0][\"Value\"]\n            except: return None\n        else:\n            print(response.json())\n            return None\n"
  },
  {
    "path": "ddl.py",
    "content": "from base64 import standard_b64encode\r\nfrom json import loads\r\nfrom math import floor, pow\r\nfrom re import findall, match, search, sub\r\nfrom time import sleep\r\nfrom urllib.parse import quote, unquote, urlparse\r\nfrom uuid import uuid4\r\n\r\nfrom bs4 import BeautifulSoup\r\nfrom cfscrape import create_scraper\r\nfrom lxml import etree\r\nfrom requests import get, session\r\n\r\nfrom json import load\r\nfrom os import environ\r\n\r\nwith open(\"config.json\", \"r\") as f:\r\n    DATA = load(f)\r\n\r\n\r\ndef getenv(var):\r\n    return environ.get(var) or DATA.get(var, None)\r\n\r\n\r\nUPTOBOX_TOKEN = getenv(\"UPTOBOX_TOKEN\")\r\nndus = getenv(\"TERA_COOKIE\")\r\nif ndus is None:\r\n    TERA_COOKIE = None\r\nelse:\r\n    TERA_COOKIE = {\"ndus\": ndus}\r\n\r\n\r\nddllist = [\r\n    \"1drv.ms\",\r\n    \"1fichier.com\",\r\n    \"4funbox\",\r\n    \"akmfiles\",\r\n    \"anonfiles.com\",\r\n    \"antfiles.com\",\r\n    \"bayfiles.com\",\r\n    \"disk.yandex.com\",\r\n    \"fcdn.stream\",\r\n    \"femax20.com\",\r\n    \"fembed.com\",\r\n    \"fembed.net\",\r\n    \"feurl.com\",\r\n    \"filechan.org\",\r\n    \"filepress\",\r\n    \"github.com\",\r\n    \"hotfile.io\",\r\n    \"hxfile.co\",\r\n    \"krakenfiles.com\",\r\n    \"layarkacaxxi.icu\",\r\n    \"letsupload.cc\",\r\n    \"letsupload.io\",\r\n    \"linkbox\",\r\n    \"lolabits.se\",\r\n    \"mdisk.me\",\r\n    \"mediafire.com\",\r\n    \"megaupload.nz\",\r\n    \"mirrobox\",\r\n    \"mm9842.com\",\r\n    \"momerybox\",\r\n    \"myfile.is\",\r\n    \"naniplay.com\",\r\n    \"naniplay.nanime.biz\",\r\n    \"naniplay.nanime.in\",\r\n    \"nephobox\",\r\n    \"openload.cc\",\r\n    \"osdn.net\",\r\n    \"pixeldrain.com\",\r\n    \"racaty\",\r\n    \"rapidshare.nu\",\r\n    \"sbembed.com\",\r\n    \"sbplay.org\",\r\n    \"share-online.is\",\r\n    \"shrdsk\",\r\n    \"solidfiles.com\",\r\n    \"streamsb.net\",\r\n    \"streamtape\",\r\n    \"terabox\",\r\n    \"teraboxapp\",\r\n    \"upload.ee\",\r\n    \"uptobox.com\",\r\n    \"upvid.cc\",\r\n    \"vshare.is\",\r\n    \"watchsb.com\",\r\n    \"we.tl\",\r\n    \"wetransfer.com\",\r\n    \"yadi.sk\",\r\n    \"zippyshare.com\",\r\n]\r\n\r\n\r\ndef is_share_link(url):\r\n    return bool(\r\n        match(\r\n            r\"https?:\\/\\/.+\\.gdtot\\.\\S+|https?:\\/\\/(filepress|filebee|appdrive|gdflix|driveseed)\\.\\S+\",\r\n            url,\r\n        )\r\n    )\r\n\r\n\r\ndef get_readable_time(seconds):\r\n    result = \"\"\r\n    (days, remainder) = divmod(seconds, 86400)\r\n    days = int(days)\r\n    if days != 0:\r\n        result += f\"{days}d\"\r\n    (hours, remainder) = divmod(remainder, 3600)\r\n    hours = int(hours)\r\n    if hours != 0:\r\n        result += f\"{hours}h\"\r\n    (minutes, seconds) = divmod(remainder, 60)\r\n    minutes = int(minutes)\r\n    if minutes != 0:\r\n        result += f\"{minutes}m\"\r\n    seconds = int(seconds)\r\n    result += f\"{seconds}s\"\r\n    return result\r\n\r\n\r\nfmed_list = [\r\n    \"fembed.net\",\r\n    \"fembed.com\",\r\n    \"femax20.com\",\r\n    \"fcdn.stream\",\r\n    \"feurl.com\",\r\n    \"layarkacaxxi.icu\",\r\n    \"naniplay.nanime.in\",\r\n    \"naniplay.nanime.biz\",\r\n    \"naniplay.com\",\r\n    \"mm9842.com\",\r\n]\r\n\r\nanonfilesBaseSites = [\r\n    \"anonfiles.com\",\r\n    \"hotfile.io\",\r\n    \"bayfiles.com\",\r\n    \"megaupload.nz\",\r\n    \"letsupload.cc\",\r\n    \"filechan.org\",\r\n    \"myfile.is\",\r\n    \"vshare.is\",\r\n    \"rapidshare.nu\",\r\n    \"lolabits.se\",\r\n    \"openload.cc\",\r\n    \"share-online.is\",\r\n    \"upvid.cc\",\r\n]\r\n\r\n\r\ndef direct_link_generator(link: str):\r\n    \"\"\"direct links generator\"\"\"\r\n    domain = urlparse(link).hostname\r\n    if \"yadi.sk\" in domain or \"disk.yandex.com\" in domain:\r\n        return yandex_disk(link)\r\n    elif \"mediafire.com\" in domain:\r\n        return mediafire(link)\r\n    elif \"uptobox.com\" in domain:\r\n        return uptobox(link)\r\n    elif \"osdn.net\" in domain:\r\n        return osdn(link)\r\n    elif \"github.com\" in domain:\r\n        return github(link)\r\n    elif \"hxfile.co\" in domain:\r\n        return hxfile(link)\r\n    elif \"1drv.ms\" in domain:\r\n        return onedrive(link)\r\n    elif \"pixeldrain.com\" in domain:\r\n        return pixeldrain(link)\r\n    elif \"antfiles.com\" in domain:\r\n        return antfiles(link)\r\n    elif \"streamtape\" in domain:\r\n        return streamtape(link)\r\n    elif \"racaty\" in domain:\r\n        return racaty(link)\r\n    elif \"1fichier.com\" in domain:\r\n        return fichier(link)\r\n    elif \"solidfiles.com\" in domain:\r\n        return solidfiles(link)\r\n    elif \"krakenfiles.com\" in domain:\r\n        return krakenfiles(link)\r\n    elif \"upload.ee\" in domain:\r\n        return uploadee(link)\r\n    elif \"akmfiles\" in domain:\r\n        return akmfiles(link)\r\n    elif \"linkbox\" in domain:\r\n        return linkbox(link)\r\n    elif \"shrdsk\" in domain:\r\n        return shrdsk(link)\r\n    elif \"letsupload.io\" in domain:\r\n        return letsupload(link)\r\n    elif \"zippyshare.com\" in domain:\r\n        return zippyshare(link)\r\n    elif \"mdisk.me\" in domain:\r\n        return mdisk(link)\r\n    elif any(x in domain for x in [\"wetransfer.com\", \"we.tl\"]):\r\n        return wetransfer(link)\r\n    elif any(x in domain for x in anonfilesBaseSites):\r\n        return anonfilesBased(link)\r\n    elif any(\r\n        x in domain\r\n        for x in [\r\n            \"terabox\",\r\n            \"nephobox\",\r\n            \"4funbox\",\r\n            \"mirrobox\",\r\n            \"momerybox\",\r\n            \"teraboxapp\",\r\n        ]\r\n    ):\r\n        return terabox(link)\r\n    elif any(x in domain for x in fmed_list):\r\n        return fembed(link)\r\n    elif any(\r\n        x in domain\r\n        for x in [\"sbembed.com\", \"watchsb.com\", \"streamsb.net\", \"sbplay.org\"]\r\n    ):\r\n        return sbembed(link)\r\n    elif is_share_link(link):\r\n        if \"gdtot\" in domain:\r\n            return gdtot(link)\r\n        elif \"filepress\" in domain:\r\n            return filepress(link)\r\n        else:\r\n            return sharer_scraper(link)\r\n    else:\r\n        return f\"No Direct link function found for\\n\\n{link}\\n\\nuse /ddllist\"\r\n\r\n\r\ndef mdisk(url):\r\n    header = {\r\n        \"Accept\": \"*/*\",\r\n        \"Accept-Language\": \"en-US,en;q=0.5\",\r\n        \"Accept-Encoding\": \"gzip, deflate, br\",\r\n        \"Referer\": \"https://mdisk.me/\",\r\n        \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36\",\r\n    }\r\n    id = url.split(\"/\")[-1]\r\n    URL = f\"https://diskuploader.entertainvideo.com/v1/file/cdnurl?param={id}\"\r\n    return get(url=URL, headers=header).json()[\"source\"]\r\n\r\n\r\ndef yandex_disk(url: str) -> str:\r\n    \"\"\"Yandex.Disk direct link generator\r\n    Based on https://github.com/wldhx/yadisk-direct\"\"\"\r\n    try:\r\n        link = findall(r\"\\b(https?://(yadi.sk|disk.yandex.com)\\S+)\", url)[0][0]\r\n    except IndexError:\r\n        return \"No Yandex.Disk links found\\n\"\r\n    api = \"https://cloud-api.yandex.net/v1/disk/public/resources/download?public_key={}\"\r\n    cget = create_scraper().request\r\n    try:\r\n        return cget(\"get\", api.format(link)).json()[\"href\"]\r\n    except KeyError:\r\n        return \"ERROR: File not found/Download limit reached\"\r\n\r\n\r\ndef uptobox(url: str) -> str:\r\n    \"\"\"Uptobox direct link generator\r\n    based on https://github.com/jovanzers/WinTenCermin and https://github.com/sinoobie/noobie-mirror\r\n    \"\"\"\r\n    try:\r\n        link = findall(r\"\\bhttps?://.*uptobox\\.com\\S+\", url)[0]\r\n    except IndexError:\r\n        return \"No Uptobox links found\"\r\n    link = findall(r\"\\bhttps?://.*\\.uptobox\\.com/dl\\S+\", url)\r\n    if link:\r\n        return link[0]\r\n    cget = create_scraper().request\r\n    try:\r\n        file_id = findall(r\"\\bhttps?://.*uptobox\\.com/(\\w+)\", url)[0]\r\n        if UPTOBOX_TOKEN:\r\n            file_link = f\"https://uptobox.com/api/link?token={UPTOBOX_TOKEN}&file_code={file_id}\"\r\n        else:\r\n            file_link = f\"https://uptobox.com/api/link?file_code={file_id}\"\r\n        res = cget(\"get\", file_link).json()\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    if res[\"statusCode\"] == 0:\r\n        return res[\"data\"][\"dlLink\"]\r\n    elif res[\"statusCode\"] == 16:\r\n        sleep(1)\r\n        waiting_token = res[\"data\"][\"waitingToken\"]\r\n        sleep(res[\"data\"][\"waiting\"])\r\n    elif res[\"statusCode\"] == 39:\r\n        return f\"ERROR: Uptobox is being limited please wait {get_readable_time(res['data']['waiting'])}\"\r\n    else:\r\n        return f\"ERROR: {res['message']}\"\r\n    try:\r\n        res = cget(\"get\", f\"{file_link}&waitingToken={waiting_token}\").json()\r\n        return res[\"data\"][\"dlLink\"]\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n\r\n\r\ndef mediafire(url: str) -> str:\r\n    final_link = findall(r\"https?:\\/\\/download\\d+\\.mediafire\\.com\\/\\S+\\/\\S+\\/\\S+\", url)\r\n    if final_link:\r\n        return final_link[0]\r\n    cget = create_scraper().request\r\n    try:\r\n        url = cget(\"get\", url).url\r\n        page = cget(\"get\", url).text\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    final_link = findall(\r\n        r\"\\'(https?:\\/\\/download\\d+\\.mediafire\\.com\\/\\S+\\/\\S+\\/\\S+)\\'\", page\r\n    )\r\n    if not final_link:\r\n        return \"ERROR: No links found in this page\"\r\n    return final_link[0]\r\n\r\n\r\ndef osdn(url: str) -> str:\r\n    \"\"\"OSDN direct link generator\"\"\"\r\n    osdn_link = \"https://osdn.net\"\r\n    try:\r\n        link = findall(r\"\\bhttps?://.*osdn\\.net\\S+\", url)[0]\r\n    except IndexError:\r\n        return \"No OSDN links found\"\r\n    cget = create_scraper().request\r\n    try:\r\n        page = BeautifulSoup(cget(\"get\", link, allow_redirects=True).content, \"lxml\")\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    info = page.find(\"a\", {\"class\": \"mirror_link\"})\r\n    link = unquote(osdn_link + info[\"href\"])\r\n    mirrors = page.find(\"form\", {\"id\": \"mirror-select-form\"}).findAll(\"tr\")\r\n    urls = []\r\n    for data in mirrors[1:]:\r\n        mirror = data.find(\"input\")[\"value\"]\r\n        urls.append(sub(r\"m=(.*)&f\", f\"m={mirror}&f\", link))\r\n    return urls[0]\r\n\r\n\r\ndef github(url: str) -> str:\r\n    \"\"\"GitHub direct links generator\"\"\"\r\n    try:\r\n        findall(r\"\\bhttps?://.*github\\.com.*releases\\S+\", url)[0]\r\n    except IndexError:\r\n        return \"No GitHub Releases links found\"\r\n    cget = create_scraper().request\r\n    download = cget(\"get\", url, stream=True, allow_redirects=False)\r\n    try:\r\n        return download.headers[\"location\"]\r\n    except KeyError:\r\n        return \"ERROR: Can't extract the link\"\r\n\r\n\r\ndef hxfile(url: str) -> str:\r\n    sess = session()\r\n    try:\r\n        headers = {\r\n            \"content-type\": \"application/x-www-form-urlencoded\",\r\n            \"user-agent\": \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.152 Safari/537.36\",\r\n        }\r\n\r\n        data = {\r\n            \"op\": \"download2\",\r\n            \"id\": urlparse(url).path.strip(\"/\")(url),\r\n            \"rand\": \"\",\r\n            \"referer\": \"\",\r\n            \"method_free\": \"\",\r\n            \"method_premium\": \"\",\r\n        }\r\n\r\n        response = sess.post(url, headers=headers, data=data)\r\n        soup = BeautifulSoup(response, \"html.parser\")\r\n\r\n        if btn := soup.find(class_=\"btn btn-dow\"):\r\n            return btn[\"href\"]\r\n        if unique := soup.find(id=\"uniqueExpirylink\"):\r\n            return unique[\"href\"]\r\n\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n\r\n\r\ndef letsupload(url: str) -> str:\r\n    cget = create_scraper().request\r\n    try:\r\n        res = cget(\"POST\", url)\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    direct_link = findall(r\"(https?://letsupload\\.io\\/.+?)\\'\", res.text)\r\n    if direct_link:\r\n        return direct_link[0]\r\n    else:\r\n        return \"ERROR: Direct Link not found\"\r\n\r\n\r\ndef anonfilesBased(url: str) -> str:\r\n    cget = create_scraper().request\r\n    try:\r\n        soup = BeautifulSoup(cget(\"get\", url).content, \"lxml\")\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    sa = soup.find(id=\"download-url\")\r\n    if sa:\r\n        return sa[\"href\"]\r\n    return \"ERROR: File not found!\"\r\n\r\n\r\ndef fembed(link: str) -> str:\r\n    sess = session()\r\n    try:\r\n        url = url.replace(\"/v/\", \"/f/\")\r\n        raw = session.get(url)\r\n        api = search(r\"(/api/source/[^\\\"']+)\", raw.text)\r\n        if api is not None:\r\n            result = {}\r\n            raw = sess.post(\"https://layarkacaxxi.icu\" + api.group(1)).json()\r\n            for d in raw[\"data\"]:\r\n                f = d[\"file\"]\r\n                head = sess.head(f)\r\n                direct = head.headers.get(\"Location\", url)\r\n                result[f\"{d['label']}/{d['type']}\"] = direct\r\n            dl_url = result\r\n\r\n        count = len(dl_url)\r\n        lst_link = [dl_url[i] for i in dl_url]\r\n        return lst_link[count - 1]\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n\r\n\r\ndef sbembed(link: str) -> str:\r\n    sess = session()\r\n    try:\r\n        raw = sess.get(link)\r\n        soup = BeautifulSoup(raw, \"html.parser\")\r\n\r\n        result = {}\r\n        for a in soup.findAll(\"a\", onclick=compile(r\"^download_video[^>]+\")):\r\n            data = dict(\r\n                zip(\r\n                    [\"id\", \"mode\", \"hash\"],\r\n                    findall(r\"[\\\"']([^\\\"']+)[\\\"']\", a[\"onclick\"]),\r\n                )\r\n            )\r\n            data[\"op\"] = \"download_orig\"\r\n\r\n            raw = sess.get(\"https://sbembed.com/dl\", params=data)\r\n            soup = BeautifulSoup(raw, \"html.parser\")\r\n\r\n            if direct := soup.find(\"a\", text=compile(\"(?i)^direct\")):\r\n                result[a.text] = direct[\"href\"]\r\n        dl_url = result\r\n\r\n        count = len(dl_url)\r\n        lst_link = [dl_url[i] for i in dl_url]\r\n        return lst_link[count - 1]\r\n\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n\r\n\r\ndef onedrive(link: str) -> str:\r\n    \"\"\"Onedrive direct link generator\r\n    Based on https://github.com/UsergeTeam/Userge\"\"\"\r\n    link_without_query = urlparse(link)._replace(query=None).geturl()\r\n    direct_link_encoded = str(\r\n        standard_b64encode(bytes(link_without_query, \"utf-8\")), \"utf-8\"\r\n    )\r\n    direct_link1 = (\r\n        f\"https://api.onedrive.com/v1.0/shares/u!{direct_link_encoded}/root/content\"\r\n    )\r\n    cget = create_scraper().request\r\n    try:\r\n        resp = cget(\"head\", direct_link1)\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    if resp.status_code != 302:\r\n        return \"ERROR: Unauthorized link, the link may be private\"\r\n    return resp.next.url\r\n\r\n\r\ndef pixeldrain(url: str) -> str:\r\n    \"\"\"Based on https://github.com/yash-dk/TorToolkit-Telegram\"\"\"\r\n    url = url.strip(\"/ \")\r\n    file_id = url.split(\"/\")[-1]\r\n    if url.split(\"/\")[-2] == \"l\":\r\n        info_link = f\"https://pixeldrain.com/api/list/{file_id}\"\r\n        dl_link = f\"https://pixeldrain.com/api/list/{file_id}/zip?download\"\r\n    else:\r\n        info_link = f\"https://pixeldrain.com/api/file/{file_id}/info\"\r\n        dl_link = f\"https://pixeldrain.com/api/file/{file_id}?download\"\r\n    cget = create_scraper().request\r\n    try:\r\n        resp = cget(\"get\", info_link).json()\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    if resp[\"success\"]:\r\n        return dl_link\r\n    else:\r\n        return f\"ERROR: Cant't download due {resp['message']}.\"\r\n\r\n\r\ndef antfiles(url: str) -> str:\r\n    sess = session()\r\n    try:\r\n        raw = sess.get(url)\r\n        soup = BeautifulSoup(raw, \"html.parser\")\r\n\r\n        if a := soup.find(class_=\"main-btn\", href=True):\r\n            return \"{0.scheme}://{0.netloc}/{1}\".format(urlparse(url), a[\"href\"])\r\n\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n\r\n\r\ndef streamtape(url: str) -> str:\r\n    response = get(url)\r\n\r\n    if videolink := findall(r\"document.*((?=id\\=)[^\\\"']+)\", response.text):\r\n        nexturl = \"https://streamtape.com/get_video?\" + videolink[-1]\r\n        try:\r\n            return nexturl\r\n        except Exception as e:\r\n            return f\"ERROR: {e.__class__.__name__}\"\r\n\r\n\r\ndef racaty(url: str) -> str:\r\n    \"\"\"Racaty direct link generator\r\n    By https://github.com/junedkh\"\"\"\r\n    cget = create_scraper().request\r\n    try:\r\n        url = cget(\"GET\", url).url\r\n        json_data = {\"op\": \"download2\", \"id\": url.split(\"/\")[-1]}\r\n        res = cget(\"POST\", url, data=json_data)\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    html_tree = etree.HTML(res.text)\r\n    direct_link = html_tree.xpath(\"//a[contains(@id,'uniqueExpirylink')]/@href\")\r\n    if direct_link:\r\n        return direct_link[0]\r\n    else:\r\n        return \"ERROR: Direct link not found\"\r\n\r\n\r\ndef fichier(link: str) -> str:\r\n    \"\"\"1Fichier direct link generator\r\n    Based on https://github.com/Maujar\r\n    \"\"\"\r\n    regex = r\"^([http:\\/\\/|https:\\/\\/]+)?.*1fichier\\.com\\/\\?.+\"\r\n    gan = match(regex, link)\r\n    if not gan:\r\n        return \"ERROR: The link you entered is wrong!\"\r\n    if \"::\" in link:\r\n        pswd = link.split(\"::\")[-1]\r\n        url = link.split(\"::\")[-2]\r\n    else:\r\n        pswd = None\r\n        url = link\r\n    cget = create_scraper().request\r\n    try:\r\n        if pswd is None:\r\n            req = cget(\"post\", url)\r\n        else:\r\n            pw = {\"pass\": pswd}\r\n            req = cget(\"post\", url, data=pw)\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    if req.status_code == 404:\r\n        return \"ERROR: File not found/The link you entered is wrong!\"\r\n    soup = BeautifulSoup(req.content, \"lxml\")\r\n    if soup.find(\"a\", {\"class\": \"ok btn-general btn-orange\"}):\r\n        dl_url = soup.find(\"a\", {\"class\": \"ok btn-general btn-orange\"})[\"href\"]\r\n        if dl_url:\r\n            return dl_url\r\n        return \"ERROR: Unable to generate Direct Link 1fichier!\"\r\n    elif len(soup.find_all(\"div\", {\"class\": \"ct_warn\"})) == 3:\r\n        str_2 = soup.find_all(\"div\", {\"class\": \"ct_warn\"})[-1]\r\n        if \"you must wait\" in str(str_2).lower():\r\n            numbers = [int(word) for word in str(str_2).split() if word.isdigit()]\r\n            if numbers:\r\n                return (\r\n                    f\"ERROR: 1fichier is on a limit. Please wait {numbers[0]} minute.\"\r\n                )\r\n            else:\r\n                return \"ERROR: 1fichier is on a limit. Please wait a few minutes/hour.\"\r\n        elif \"protect access\" in str(str_2).lower():\r\n            return \"ERROR: This link requires a password!\\n\\n<b>This link requires a password!</b>\\n- Insert sign <b>::</b> after the link and write the password after the sign.\\n\\n<b>Example:</b> https://1fichier.com/?smmtd8twfpm66awbqz04::love you\\n\\n* No spaces between the signs <b>::</b>\\n* For the password, you can use a space!\"\r\n        else:\r\n            return \"ERROR: Failed to generate Direct Link from 1fichier!\"\r\n    elif len(soup.find_all(\"div\", {\"class\": \"ct_warn\"})) == 4:\r\n        str_1 = soup.find_all(\"div\", {\"class\": \"ct_warn\"})[-2]\r\n        str_3 = soup.find_all(\"div\", {\"class\": \"ct_warn\"})[-1]\r\n        if \"you must wait\" in str(str_1).lower():\r\n            numbers = [int(word) for word in str(str_1).split() if word.isdigit()]\r\n            if numbers:\r\n                return (\r\n                    f\"ERROR: 1fichier is on a limit. Please wait {numbers[0]} minute.\"\r\n                )\r\n            else:\r\n                return \"ERROR: 1fichier is on a limit. Please wait a few minutes/hour.\"\r\n        elif \"bad password\" in str(str_3).lower():\r\n            return \"ERROR: The password you entered is wrong!\"\r\n        else:\r\n            return \"ERROR: Error trying to generate Direct Link from 1fichier!\"\r\n    else:\r\n        return \"ERROR: Error trying to generate Direct Link from 1fichier!\"\r\n\r\n\r\ndef solidfiles(url: str) -> str:\r\n    \"\"\"Solidfiles direct link generator\r\n    Based on https://github.com/Xonshiz/SolidFiles-Downloader\r\n    By https://github.com/Jusidama18\"\"\"\r\n    cget = create_scraper().request\r\n    try:\r\n        headers = {\r\n            \"User-Agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36\"\r\n        }\r\n        pageSource = cget(\"get\", url, headers=headers).text\r\n        mainOptions = str(search(r\"viewerOptions\\'\\,\\ (.*?)\\)\\;\", pageSource).group(1))\r\n        return loads(mainOptions)[\"downloadUrl\"]\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n\r\n\r\ndef krakenfiles(url: str) -> str:\r\n    sess = session()\r\n    try:\r\n        res = sess.get(url)\r\n        html = etree.HTML(res.text)\r\n        if post_url := html.xpath('//form[@id=\"dl-form\"]/@action'):\r\n            post_url = f\"https:{post_url[0]}\"\r\n        else:\r\n            sess.close()\r\n            return \"ERROR: Unable to find post link.\"\r\n        if token := html.xpath('//input[@id=\"dl-token\"]/@value'):\r\n            data = {\"token\": token[0]}\r\n        else:\r\n            sess.close()\r\n            return \"ERROR: Unable to find token for post.\"\r\n    except Exception as e:\r\n        sess.close()\r\n        return f\"ERROR: {e.__class__.__name__} Something went wrong\"\r\n    try:\r\n        dl_link = sess.post(post_url, data=data).json()\r\n        return dl_link[\"url\"]\r\n    except Exception as e:\r\n        sess.close()\r\n        return f\"ERROR: {e.__class__.__name__} While send post request\"\r\n\r\n\r\ndef uploadee(url: str) -> str:\r\n    \"\"\"uploadee direct link generator\r\n    By https://github.com/iron-heart-x\"\"\"\r\n    cget = create_scraper().request\r\n    try:\r\n        soup = BeautifulSoup(cget(\"get\", url).content, \"lxml\")\r\n        sa = soup.find(\"a\", attrs={\"id\": \"d_l\"})\r\n        return sa[\"href\"]\r\n    except:\r\n        return f\"ERROR: Failed to acquire download URL from upload.ee for : {url}\"\r\n\r\n\r\ndef terabox(url) -> str:\r\n    sess = session()\r\n    while True:\r\n        try:\r\n            res = sess.get(url)\r\n            print(\"connected\")\r\n            break\r\n        except:\r\n            print(\"retrying\")\r\n    url = res.url\r\n\r\n    key = url.split(\"?surl=\")[-1]\r\n    url = f\"http://www.terabox.com/wap/share/filelist?surl={key}\"\r\n    sess.cookies.update(TERA_COOKIE)\r\n\r\n    while True:\r\n        try:\r\n            res = sess.get(url)\r\n            print(\"connected\")\r\n            break\r\n        except Exception as e:\r\n            print(\"retrying\")\r\n\r\n    key = res.url.split(\"?surl=\")[-1]\r\n    soup = BeautifulSoup(res.content, \"lxml\")\r\n    jsToken = None\r\n\r\n    for fs in soup.find_all(\"script\"):\r\n        fstring = fs.string\r\n        if fstring and fstring.startswith(\"try {eval(decodeURIComponent\"):\r\n            jsToken = fstring.split(\"%22\")[1]\r\n\r\n    while True:\r\n        try:\r\n            res = sess.get(\r\n                f\"https://www.terabox.com/share/list?app_id=250528&jsToken={jsToken}&shorturl={key}&root=1\"\r\n            )\r\n            print(\"connected\")\r\n            break\r\n        except:\r\n            print(\"retrying\")\r\n    result = res.json()\r\n\r\n    if result[\"errno\"] != 0:\r\n        return f\"ERROR: '{result['errmsg']}' Check cookies\"\r\n    result = result[\"list\"]\r\n    if len(result) > 1:\r\n        return \"ERROR: Can't download mutiple files\"\r\n    result = result[0]\r\n\r\n    if result[\"isdir\"] != \"0\":\r\n        return \"ERROR: Can't download folder\"\r\n    return result.get(\"dlink\", \"Error\")\r\n\r\n\r\ndef filepress(url):\r\n    cget = create_scraper().request\r\n    try:\r\n        url = cget(\"GET\", url).url\r\n        raw = urlparse(url)\r\n\r\n        gd_data = {\r\n            \"id\": raw.path.split(\"/\")[-1],\r\n            \"method\": \"publicDownlaod\",\r\n        }\r\n        tg_data = {\r\n            \"id\": raw.path.split(\"/\")[-1],\r\n            \"method\": \"telegramDownload\",\r\n        }\r\n\r\n        api = f\"{raw.scheme}://{raw.hostname}/api/file/downlaod/\"\r\n\r\n        gd_res = cget(\r\n            \"POST\",\r\n            api,\r\n            headers={\"Referer\": f\"{raw.scheme}://{raw.hostname}\"},\r\n            json=gd_data,\r\n        ).json()\r\n        tg_res = cget(\r\n            \"POST\",\r\n            api,\r\n            headers={\"Referer\": f\"{raw.scheme}://{raw.hostname}\"},\r\n            json=tg_data,\r\n        ).json()\r\n\r\n    except Exception as e:\r\n        return f\"Google Drive: ERROR: {e.__class__.__name__} \\nTelegram: ERROR: {e.__class__.__name__}\"\r\n\r\n    gd_result = (\r\n        f'https://drive.google.com/uc?id={gd_res[\"data\"]}'\r\n        if \"data\" in gd_res\r\n        else f'ERROR: {gd_res[\"statusText\"]}'\r\n    )\r\n    tg_result = (\r\n        f'https://tghub.xyz/?start={tg_res[\"data\"]}'\r\n        if \"data\" in tg_res\r\n        else \"No Telegram file available \"\r\n    )\r\n\r\n    return f\"Google Drive: {gd_result} \\nTelegram: {tg_result}\"\r\n\r\n\r\ndef gdtot(url):\r\n    cget = create_scraper().request\r\n    try:\r\n        res = cget(\"GET\", f'https://gdbot.xyz/file/{url.split(\"/\")[-1]}')\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    token_url = etree.HTML(res.content).xpath(\r\n        \"//a[contains(@class,'inline-flex items-center justify-center')]/@href\"\r\n    )\r\n    if not token_url:\r\n        try:\r\n            url = cget(\"GET\", url).url\r\n            p_url = urlparse(url)\r\n            res = cget(\r\n                \"GET\", f\"{p_url.scheme}://{p_url.hostname}/ddl/{url.split('/')[-1]}\"\r\n            )\r\n        except Exception as e:\r\n            return f\"ERROR: {e.__class__.__name__}\"\r\n        drive_link = findall(r\"myDl\\('(.*?)'\\)\", res.text)\r\n        if drive_link and \"drive.google.com\" in drive_link[0]:\r\n            return drive_link[0]\r\n        else:\r\n            return \"ERROR: Drive Link not found, Try in your broswer\"\r\n    token_url = token_url[0]\r\n    try:\r\n        token_page = cget(\"GET\", token_url)\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__} with {token_url}\"\r\n    path = findall('\\(\"(.*?)\"\\)', token_page.text)\r\n    if not path:\r\n        return \"ERROR: Cannot bypass this\"\r\n    path = path[0]\r\n    raw = urlparse(token_url)\r\n    final_url = f\"{raw.scheme}://{raw.hostname}{path}\"\r\n    return sharer_scraper(final_url)\r\n\r\n\r\ndef sharer_scraper(url):\r\n    cget = create_scraper().request\r\n    try:\r\n        url = cget(\"GET\", url).url\r\n        raw = urlparse(url)\r\n        header = {\r\n            \"useragent\": \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/7.0.548.0 Safari/534.10\"\r\n        }\r\n        res = cget(\"GET\", url, headers=header)\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    key = findall('\"key\",\\s+\"(.*?)\"', res.text)\r\n    if not key:\r\n        return \"ERROR: Key not found!\"\r\n    key = key[0]\r\n    if not etree.HTML(res.content).xpath(\"//button[@id='drc']\"):\r\n        return \"ERROR: This link don't have direct download button\"\r\n    boundary = uuid4()\r\n    headers = {\r\n        \"Content-Type\": f\"multipart/form-data; boundary=----WebKitFormBoundary{boundary}\",\r\n        \"x-token\": raw.hostname,\r\n        \"useragent\": \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/7.0.548.0 Safari/534.10\",\r\n    }\r\n\r\n    data = (\r\n        f'------WebKitFormBoundary{boundary}\\r\\nContent-Disposition: form-data; name=\"action\"\\r\\n\\r\\ndirect\\r\\n'\r\n        f'------WebKitFormBoundary{boundary}\\r\\nContent-Disposition: form-data; name=\"key\"\\r\\n\\r\\n{key}\\r\\n'\r\n        f'------WebKitFormBoundary{boundary}\\r\\nContent-Disposition: form-data; name=\"action_token\"\\r\\n\\r\\n\\r\\n'\r\n        f\"------WebKitFormBoundary{boundary}--\\r\\n\"\r\n    )\r\n    try:\r\n        res = cget(\"POST\", url, cookies=res.cookies, headers=headers, data=data).json()\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    if \"url\" not in res:\r\n        return \"ERROR: Drive Link not found, Try in your broswer\"\r\n    if \"drive.google.com\" in res[\"url\"]:\r\n        return res[\"url\"]\r\n    try:\r\n        res = cget(\"GET\", res[\"url\"])\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    html_tree = etree.HTML(res.content)\r\n    drive_link = html_tree.xpath(\"//a[contains(@class,'btn')]/@href\")\r\n    if drive_link and \"drive.google.com\" in drive_link[0]:\r\n        return drive_link[0]\r\n    else:\r\n        return \"ERROR: Drive Link not found, Try in your broswer\"\r\n\r\n\r\ndef wetransfer(url):\r\n    cget = create_scraper().request\r\n    try:\r\n        url = cget(\"GET\", url).url\r\n        json_data = {\"security_hash\": url.split(\"/\")[-1], \"intent\": \"entire_transfer\"}\r\n        res = cget(\r\n            \"POST\",\r\n            f'https://wetransfer.com/api/v4/transfers/{url.split(\"/\")[-2]}/download',\r\n            json=json_data,\r\n        ).json()\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    if \"direct_link\" in res:\r\n        return res[\"direct_link\"]\r\n    elif \"message\" in res:\r\n        return f\"ERROR: {res['message']}\"\r\n    elif \"error\" in res:\r\n        return f\"ERROR: {res['error']}\"\r\n    else:\r\n        return \"ERROR: cannot find direct link\"\r\n\r\n\r\ndef akmfiles(url):\r\n    cget = create_scraper().request\r\n    try:\r\n        url = cget(\"GET\", url).url\r\n        json_data = {\"op\": \"download2\", \"id\": url.split(\"/\")[-1]}\r\n        res = cget(\"POST\", url, data=json_data)\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    html_tree = etree.HTML(res.content)\r\n    direct_link = html_tree.xpath(\"//a[contains(@class,'btn btn-dow')]/@href\")\r\n    if direct_link:\r\n        return direct_link[0]\r\n    else:\r\n        return \"ERROR: Direct link not found\"\r\n\r\n\r\ndef shrdsk(url):\r\n    cget = create_scraper().request\r\n    try:\r\n        url = cget(\"GET\", url).url\r\n        res = cget(\r\n            \"GET\",\r\n            f'https://us-central1-affiliate2apk.cloudfunctions.net/get_data?shortid={url.split(\"/\")[-1]}',\r\n        )\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    if res.status_code != 200:\r\n        return f\"ERROR: Status Code {res.status_code}\"\r\n    res = res.json()\r\n    if \"type\" in res and res[\"type\"].lower() == \"upload\" and \"video_url\" in res:\r\n        return res[\"video_url\"]\r\n    return \"ERROR: cannot find direct link\"\r\n\r\n\r\ndef linkbox(url):\r\n    cget = create_scraper().request\r\n    try:\r\n        url = cget(\"GET\", url).url\r\n        res = cget(\r\n            \"GET\", f'https://www.linkbox.to/api/file/detail?itemId={url.split(\"/\")[-1]}'\r\n        ).json()\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    if \"data\" not in res:\r\n        return \"ERROR: Data not found!!\"\r\n    data = res[\"data\"]\r\n    if not data:\r\n        return \"ERROR: Data is None!!\"\r\n    if \"itemInfo\" not in data:\r\n        return \"ERROR: itemInfo not found!!\"\r\n    itemInfo = data[\"itemInfo\"]\r\n    if \"url\" not in itemInfo:\r\n        return \"ERROR: url not found in itemInfo!!\"\r\n    if \"name\" not in itemInfo:\r\n        return \"ERROR: Name not found in itemInfo!!\"\r\n    name = quote(itemInfo[\"name\"])\r\n    raw = itemInfo[\"url\"].split(\"/\", 3)[-1]\r\n    return f\"https://wdl.nuplink.net/{raw}&filename={name}\"\r\n\r\n\r\ndef zippyshare(url):\r\n    cget = create_scraper().request\r\n    try:\r\n        url = cget(\"GET\", url).url\r\n        resp = cget(\"GET\", url)\r\n    except Exception as e:\r\n        return f\"ERROR: {e.__class__.__name__}\"\r\n    if not resp.ok:\r\n        return \"ERROR: Something went wrong!!, Try in your browser\"\r\n    if findall(r\">File does not exist on this server<\", resp.text):\r\n        return \"ERROR: File does not exist on server!!, Try in your browser\"\r\n    pages = etree.HTML(resp.text).xpath(\r\n        \"//script[contains(text(),'dlbutton')][3]/text()\"\r\n    )\r\n    if not pages:\r\n        return \"ERROR: Page not found!!\"\r\n    js_script = pages[0]\r\n    uri1 = None\r\n    uri2 = None\r\n    method = \"\"\r\n    omg = findall(r\"\\.omg.=.(.*?);\", js_script)\r\n    var_a = findall(r\"var.a.=.(\\d+)\", js_script)\r\n    var_ab = findall(r\"var.[ab].=.(\\d+)\", js_script)\r\n    unknown = findall(r\"\\+\\((.*?).\\+\", js_script)\r\n    unknown1 = findall(r\"\\+.\\((.*?)\\).\\+\", js_script)\r\n    if omg:\r\n        omg = omg[0]\r\n        method = f\"omg = {omg}\"\r\n        mtk = (eval(omg) * (int(omg.split(\"%\")[0]) % 3)) + 18\r\n        uri1 = findall(r'\"/(d/\\S+)/\"', js_script)\r\n        uri2 = findall(r'\\/d.*?\\+\"/(\\S+)\";', js_script)\r\n    elif var_a:\r\n        var_a = var_a[0]\r\n        method = f\"var_a = {var_a}\"\r\n        mtk = int(pow(int(var_a), 3) + 3)\r\n        uri1 = findall(r\"\\.href.=.\\\"/(.*?)/\\\"\", js_script)\r\n        uri2 = findall(r\"\\+\\\"/(.*?)\\\"\", js_script)\r\n    elif var_ab:\r\n        a = var_ab[0]\r\n        b = var_ab[1]\r\n        method = f\"a = {a}, b = {b}\"\r\n        mtk = eval(f\"{floor(int(a)/3) + int(a) % int(b)}\")\r\n        uri1 = findall(r\"\\.href.=.\\\"/(.*?)/\\\"\", js_script)\r\n        uri2 = findall(r\"\\)\\+\\\"/(.*?)\\\"\", js_script)\r\n    elif unknown:\r\n        method = f\"unknown = {unknown[0]}\"\r\n        mtk = eval(f\"{unknown[0]}+ 11\")\r\n        uri1 = findall(r\"\\.href.=.\\\"/(.*?)/\\\"\", js_script)\r\n        uri2 = findall(r\"\\)\\+\\\"/(.*?)\\\"\", js_script)\r\n    elif unknown1:\r\n        method = f\"unknown1 = {unknown1[0]}\"\r\n        mtk = eval(unknown1[0])\r\n        uri1 = findall(r\"\\.href.=.\\\"/(.*?)/\\\"\", js_script)\r\n        uri2 = findall(r\"\\+.\\\"/(.*?)\\\"\", js_script)\r\n    else:\r\n        return \"ERROR: Direct link not found\"\r\n    if not any([uri1, uri2]):\r\n        return f\"ERROR: uri1 or uri2 not found with method {method}\"\r\n    domain = urlparse(url).hostname\r\n    return f\"https://{domain}/{uri1[0]}/{mtk}/{uri2[0]}\"\r\n"
  },
  {
    "path": "freewall.py",
    "content": "import requests\nimport base64\nimport re\nfrom bs4 import BeautifulSoup\nfrom bypasser import RecaptchaV3\n\nRTOKEN = RecaptchaV3()\n\n#######################################################################\n\n\ndef getSoup(res):\n    return BeautifulSoup(res.text, \"html.parser\")\n\n\ndef downloaderla(url, site):\n    params = {\n        \"url\": url,\n        \"token\": RTOKEN,\n    }\n    return requests.get(site, params=params).json()\n\n\ndef getImg(url):\n    return requests.get(url).content\n\n\ndef decrypt(res, key):\n    if res[\"success\"]:\n        return base64.b64decode(res[\"result\"].split(key)[-1]).decode(\"utf-8\")\n\n\n#######################################################################\n\n\ndef shutterstock(url):\n    res = downloaderla(url, \"https://ttthreads.net/shutterstock.php\")\n    if res[\"success\"]:\n        return res[\"result\"]\n\n\ndef adobestock(url):\n    res = downloaderla(url, \"https://new.downloader.la/adobe.php\")\n    return decrypt(res, \"#\")\n\n\ndef alamy(url):\n    res = downloaderla(url, \"https://new.downloader.la/alamy.php\")\n    return decrypt(res, \"#\")\n\n\ndef getty(url):\n    res = downloaderla(url, \"https://getpaidstock.com/api.php\")\n    return decrypt(res, \"#\")\n\n\ndef picfair(url):\n    res = downloaderla(url, \"https://downloader.la/picf.php\")\n    return decrypt(res, \"?newURL=\")\n\n\ndef slideshare(url, type=\"pptx\"):\n    # enum = {\"pdf\",\"pptx\",\"img\"}\n    # if type not in enum: type = \"pdf\"\n    return requests.get(\n        f\"https://downloader.at/convert2{type}.php\", params={\"url\": url}\n    ).content\n\n\ndef medium(url):\n    return requests.post(\n        \"https://downloader.la/read.php\",\n        data={\n            \"mediumlink\": url,\n        },\n    ).content\n\n\n#######################################################################\n\n\ndef pass_paywall(url, check=False, link=False):\n    patterns = [\n        (r\"https?://(?:www\\.)?shutterstock\\.com/\", shutterstock, True, \"png\", -1),\n        (r\"https?://stock\\.adobe\\.com/\", adobestock, True, \"png\", -2),\n        (r\"https?://(?:www\\.)?alamy\\.com/\", alamy, True, \"png\", -1),\n        (r\"https?://(?:www\\.)?gettyimages\\.\", getty, True, \"png\", -2),\n        (r\"https?://(?:www\\.)?istockphoto\\.com\", getty, True, \"png\", -1),\n        (r\"https?://(?:www\\.)?picfair\\.com/\", picfair, True, \"png\", -1),\n        (r\"https?://(?:www\\.)?slideshare\\.net/\", slideshare, False, \"pptx\", -1),\n        (r\"https?://medium\\.com/\", medium, False, \"html\", -1),\n    ]\n\n    img_link = None\n    name = \"no-name\"\n    for pattern, downloader_func, img, ftype, idx in patterns:\n        if re.search(pattern, url):\n            if check:\n                return True\n            img_link = downloader_func(url)\n\n            try:\n                name = url.split(\"/\")[idx]\n            except:\n                pass\n            if (not img) and img_link:\n                fullname = name + \".\" + ftype\n                with open(fullname, \"wb\") as f:\n                    f.write(img_link)\n                return fullname\n            break\n\n    if check:\n        return False\n    if link or (not img_link):\n        return img_link\n    fullname = name + \".\" + \"png\"\n    with open(fullname, \"wb\") as f:\n        f.write(getImg(img_link))\n    return fullname\n\n\n#######################################################################\n"
  },
  {
    "path": "main.py",
    "content": "from pyrogram import Client, filters\r\nfrom pyrogram.types import (\r\n    InlineKeyboardMarkup,\r\n    InlineKeyboardButton,\r\n    BotCommand,\r\n    Message,\r\n)\r\nfrom os import environ, remove\r\nfrom threading import Thread\r\nfrom json import load\r\nfrom re import search\r\nimport re\r\nfrom urlextract import URLExtract\r\nfrom texts import HELP_TEXT\r\nimport bypasser\r\nimport freewall\r\nfrom time import time\r\nfrom db import DB\r\n\r\n# Initialize URL extractor\r\nextractor = URLExtract()\r\n\r\n# Bot configuration\r\nwith open(\"config.json\", \"r\") as f:\r\n    DATA: dict = load(f)\r\n\r\ndef getenv(var):\r\n    return environ.get(var) or DATA.get(var, None)\r\n\r\nbot_token = getenv(\"TOKEN\")\r\napi_hash = getenv(\"HASH\")\r\napi_id = getenv(\"ID\")\r\napp = Client(\"my_bot\", api_id=api_id, api_hash=api_hash, bot_token=bot_token)\r\nwith app:\r\n    app.set_bot_commands(\r\n        [\r\n            BotCommand(\"start\", \"Welcome Message\"),\r\n            BotCommand(\"help\", \"List of All Supported Sites\"),\r\n        ]\r\n    )\r\n\r\n# Database setup\r\ndb_api = getenv(\"DB_API\")\r\ndb_owner = getenv(\"DB_OWNER\")\r\ndb_name = getenv(\"DB_NAME\")\r\ntry:\r\n    database = DB(api_key=db_api, db_owner=db_owner, db_name=db_name)\r\nexcept:\r\n    print(\"Database is Not Set\")\r\n    database = None\r\n\r\n# Handle index\r\ndef handleIndex(ele: str, message: Message, msg: Message):\r\n    result = bypasser.scrapeIndex(ele)\r\n    try:\r\n        app.delete_messages(message.chat.id, msg.id)\r\n    except:\r\n        pass\r\n    if database and result:\r\n        database.insert(ele, result)\r\n    for page in result:\r\n        app.send_message(\r\n            message.chat.id,\r\n            page,\r\n            reply_to_message_id=message.id,\r\n            disable_web_page_preview=True,\r\n        )\r\n\r\n# URL regex pattern\r\nURL_REGEX = r'(?:(?:https?|ftp):\\/\\/)?[\\w/\\-?=%.]+\\.[\\w/\\-?=%.]+'\r\n\r\n# Updated loopthread function\r\ndef loopthread(message: Message, otherss=False):\r\n    urls = []\r\n    # Use message.caption for media (otherss=True), message.text for text messages (otherss=False)\r\n    if otherss:\r\n        texts = message.caption or \"\"\r\n    else:\r\n        texts = message.text or \"\"\r\n\r\n    # Check entities based on message type\r\n    entities = []\r\n    if otherss and hasattr(message, 'caption_entities') and message.caption_entities:\r\n        entities = message.caption_entities\r\n    elif message.entities:\r\n        entities = message.entities\r\n\r\n    # Step 1: Extract URLs from entities\r\n    if entities:\r\n        for entity in entities:\r\n            entity_type = str(entity.type)\r\n            normalized_type = entity_type.split('.')[-1].lower() if '.' in entity_type else entity_type.lower()\r\n            \r\n            if normalized_type == \"url\":\r\n                url = texts[entity.offset:entity.offset + entity.length]\r\n                urls.append(url)\r\n            elif normalized_type == \"text_link\":\r\n                if hasattr(entity, 'url') and entity.url:\r\n                    urls.append(entity.url)\r\n\r\n    # Step 2: Fallback to text-based URL extraction\r\n    extracted_urls = extractor.find_urls(texts)\r\n    urls.extend(extracted_urls)\r\n    regex_urls = re.findall(URL_REGEX, texts)\r\n    urls.extend(regex_urls)\r\n\r\n    # Step 3: Clean and deduplicate URLs\r\n    cleaned_urls = []\r\n    for url in urls:\r\n        cleaned_url = url.strip(\".,\").rstrip(\"/\")\r\n        if cleaned_url:\r\n            cleaned_urls.append(cleaned_url)\r\n    urls = list(dict.fromkeys(cleaned_urls))  # Preserve order, remove duplicates\r\n    if not urls:\r\n        app.send_message(\r\n            message.chat.id,\r\n            \"No valid URLs found in the message.\",\r\n            reply_to_message_id=message.id\r\n        )\r\n        return\r\n\r\n    # Step 4: Normalize URLs (add protocol if missing)\r\n    normalized_urls = []\r\n    for url in urls:\r\n        if not url.startswith(('http://', 'https://')):\r\n            url = 'https://' + url\r\n        normalized_urls.append(url)\r\n    urls = normalized_urls\r\n\r\n    # Bypassing logic\r\n    if bypasser.ispresent(bypasser.ddl.ddllist, urls[0]):\r\n        msg: Message = app.send_message(\r\n            message.chat.id, \"⚡ __generating...__\", reply_to_message_id=message.id\r\n        )\r\n    elif freewall.pass_paywall(urls[0], check=True):\r\n        msg: Message = app.send_message(\r\n            message.chat.id, \"🕴️ __jumping the wall...__\", reply_to_message_id=message.id\r\n        )\r\n    else:\r\n        if \"https://olamovies\" in urls[0] or \"https://psa.wf/\" in urls[0]:\r\n            msg: Message = app.send_message(\r\n                message.chat.id,\r\n                \"⏳ __this might take some time...__\",\r\n                reply_to_message_id=message.id,\r\n            )\r\n        else:\r\n            msg: Message = app.send_message(\r\n                message.chat.id, \"🔎 __bypassing...__\", reply_to_message_id=message.id\r\n            )\r\n\r\n    strt = time()\r\n    links = \"\"\r\n    temp = None\r\n\r\n    for ele in urls:\r\n        if database:\r\n            df_find = database.find(ele)\r\n        else:\r\n            df_find = None\r\n        if df_find:\r\n            print(\"Found in DB\")\r\n            temp = df_find\r\n        elif search(r\"https?:\\/\\/(?:[\\w.-]+)?\\.\\w+\\/\\d+:\", ele):\r\n            handleIndex(ele, message, msg)\r\n            return\r\n        elif bypasser.ispresent(bypasser.ddl.ddllist, ele):\r\n            try:\r\n                temp = bypasser.ddl.direct_link_generator(ele)\r\n            except Exception as e:\r\n                temp = \"**Error**: \" + str(e)\r\n        elif freewall.pass_paywall(ele, check=True):\r\n            freefile = freewall.pass_paywall(ele)\r\n            if freefile:\r\n                try:\r\n                    app.send_document(\r\n                        message.chat.id, freefile, reply_to_message_id=message.id\r\n                    )\r\n                    remove(freefile)\r\n                    app.delete_messages(message.chat.id, [msg.id])\r\n                    return\r\n                except:\r\n                    pass\r\n            else:\r\n                app.send_message(\r\n                    message.chat.id, \"__Failed to Jump\", reply_to_message_id=message.id\r\n                )\r\n        else:\r\n            try:\r\n                temp = bypasser.shortners(ele)\r\n            except Exception as e:\r\n                temp = \"**Error**: \" + str(e)\r\n\r\n        print(\"bypassed:\", temp)\r\n        if temp is not None:\r\n            if (not df_find) and (\"http://\" in temp or \"https://\" in temp) and database:\r\n                print(\"Adding to DB\")\r\n                database.insert(ele, temp)\r\n            links = links + temp + \"\\n\"\r\n\r\n    end = time()\r\n    print(\"Took \" + \"{:.2f}\".format(end - strt) + \"sec\")\r\n\r\n    # Send bypassed links\r\n    try:\r\n        final = []\r\n        tmp = \"\"\r\n        for ele in links.split(\"\\n\"):\r\n            tmp += ele + \"\\n\"\r\n            if len(tmp) > 4000:\r\n                final.append(tmp)\r\n                tmp = \"\"\r\n        final.append(tmp)\r\n        app.delete_messages(message.chat.id, msg.id)\r\n        tmsgid = message.id\r\n        for ele in final:\r\n            tmsg = app.send_message(\r\n                message.chat.id,\r\n                f\"__{ele}__\",\r\n                reply_to_message_id=tmsgid,\r\n                disable_web_page_preview=True,\r\n            )\r\n            tmsgid = tmsg.id\r\n    except Exception as e:\r\n        app.send_message(\r\n            message.chat.id,\r\n            f\"__Failed to Bypass : {e}__\",\r\n            reply_to_message_id=message.id,\r\n        )\r\n\r\n# Start command\r\n@app.on_message(filters.command([\"start\"]))\r\ndef send_start(client: Client, message: Message):\r\n    app.send_message(\r\n        message.chat.id,\r\n        f\"__👋 Hi **{message.from_user.mention}**, I am Link Bypasser Bot, just send me any supported links and I will get you results.\\nCheckout /help to Read More__\",\r\n        reply_markup=InlineKeyboardMarkup(\r\n            [\r\n                [\r\n                    InlineKeyboardButton(\r\n                        \"🌐 Source Code\",\r\n                        url=\"https://github.com/bipinkrish/Link-Bypasser-Bot\",\r\n                    )\r\n                ],\r\n                [\r\n                    InlineKeyboardButton(\r\n                        \"Replit\",\r\n                        url=\"https://replit.com/@bipinkrish/Link-Bypasser#app.py\",\r\n                    )\r\n                ],\r\n            ]\r\n        ),\r\n        reply_to_message_id=message.id,\r\n    )\r\n\r\n# Help command\r\n@app.on_message(filters.command([\"help\"]))\r\ndef send_help(client: Client, message: Message):\r\n    app.send_message(\r\n        message.chat.id,\r\n        HELP_TEXT,\r\n        reply_to_message_id=message.id,\r\n        disable_web_page_preview=True,\r\n    )\r\n\r\n# Text message handler\r\n@app.on_message(filters.text)\r\ndef receive(client: Client, message: Message):\r\n    bypass = Thread(target=lambda: loopthread(message), daemon=True)\r\n    bypass.start()\r\n\r\n# Document thread for DLC files\r\ndef docthread(message: Message):\r\n    msg: Message = app.send_message(\r\n        message.chat.id, \"🔎 __bypassing...__\", reply_to_message_id=message.id\r\n    )\r\n    print(\"sent DLC file\")\r\n    file = app.download_media(message)\r\n    dlccont = open(file, \"r\").read()\r\n    links = bypasser.getlinks(dlccont)\r\n    app.edit_message_text(\r\n        message.chat.id, msg.id, f\"__{links}__\", disable_web_page_preview=True\r\n    )\r\n    remove(file)\r\n\r\n# Media file handler\r\n@app.on_message([filters.document, filters.photo, filters.video])\r\ndef docfile(client: Client, message: Message):\r\n    try:\r\n        if message.document and message.document.file_name.endswith(\"dlc\"):\r\n            bypass = Thread(target=lambda: docthread(message), daemon=True)\r\n            bypass.start()\r\n            return\r\n    except:\r\n        pass\r\n    bypass = Thread(target=lambda: loopthread(message, True), daemon=True)\r\n    bypass.start()\r\n\r\n# Start the bot\r\nprint(\"Bot Starting\")\r\napp.run()\r\n"
  },
  {
    "path": "requirements.txt",
    "content": "requests\r\ncloudscraper\r\nbs4 \r\npython-dotenv\r\nkurigram==2.2.0\r\nPyroTgCrypto==1.2.7\r\nlxml\r\ncfscrape\r\nurllib3==1.26\r\nflask\r\ngunicorn\r\ncurl-cffi\r\nurlextract\r\n"
  },
  {
    "path": "runtime.txt",
    "content": "python-3.9.14\r\n"
  },
  {
    "path": "templates/index.html",
    "content": "<!DOCTYPE html>\n<html lang=\"en\">\n  <head>\n    <title>Web Link Bypasser</title>\n    <meta\n      name=\"description\"\n      content=\"A site that can Bypass the Ad Links and Generate Direct Links\"\n    />\n    \n    <meta charset=\"utf-8\" />\n    <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\" />\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n    <link rel=\"stylesheet\" href=\"https://bootswatch.com/5/vapor/bootstrap.css\" />\n  </head>\n  <body>\n    <style>\n    ul.no-bullets {\n      list-style-type: none;\n      padding-left: 0;\n    }\n\n    .no-underline {\n      text-decoration: none;\n    }\n  </style>\n    <nav class=\"navbar navbar-expand-md navbar-dark bg-dark justify-content-center\" id=\"navbarHeader\">\n      <div class=\"navbar-brand text-white text-center\">\n        Quick, Simple and Easy to use Link-Bypasser \n      </div>\n    </nav>\n    <main>\n      <section class=\"py-5 text-center\" id=\"topHeaderSection\">\n        <div class=\"container mt-10\">\n          <h1 class=\"mb-4 text-center\">Link Bypasser Web</h1>\n          <div class=\"center-content\">\n            <form method=\"post\">\n              <div class=\"row justify-content-center\">\n                <div class=\"col-12 col-md-8 col-lg-6\">\n                  <div class=\"input-group mb-3\">\n                    <input type=\"text\" class=\"form-control\" name=\"url\" placeholder=\"Enter URL here\">\n                    <div class=\"input-group-append\">\n                      <button class=\"btn btn-primary\" type=\"submit\">Submit</button>\n                    </div>\n                  </div>\n                </div>\n              </div>\n            </form>\n            <div class=\"mt-4\">\n            {% if result %}\n            <div class=\"mb-2\">\n              <a href=\"{{ result }}\" class=\"d-block\" target=\"_blank\">{{ result }}</a>\n            </div>\n            {% endif %}\n          </div>\n          {% if prev_links %}\n          <div class=\"mt-4\">\n            <strong>Previously Shortened Links:</strong>\n            <ul class=\"no-bullets\">\n              {% for prev_link in prev_links[::-1] %}\n              {% if prev_link.strip() %}\n              <li>\n                <a href=\"{{ prev_link }}\" class=\"d-block mb-2 no-underline\" target=\"_blank\">{{ prev_link }}</a>\n              </li>\n              {% endif %}\n              {% endfor %}\n            </ul>\n          </div>\n          {% endif %}\n            </div>\n            <p class=\"mt-3 text-warning\">If you encounter any problems, <a href=\"https://github.com/bipinkrish/Link-Bypasser-Bot/issues\"> click here to report</a>.</p>\n            <div class=\"mt-5\">\n              <p>Designed and made by <a href=\"https://patelsujal.in\">Sujal Patel with Bipinkrish</a></p>\n            </div>\n          </div>\n        </div>\n      </section>\n    </main>\n    <script src=\"https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js\"></script>\n    <script src=\"https://code.jquery.com/jquery-3.6.0.min.js\"></script>\n  </body> \n</html>\n"
  },
  {
    "path": "texts.py",
    "content": "gdrivetext = \"\"\"__- appdrive \\n\\\r\n- driveapp \\n\\\r\n- drivehub \\n\\\r\n- gdflix \\n\\\r\n- drivesharer \\n\\\r\n- drivebit \\n\\\r\n- drivelinks \\n\\\r\n- driveace \\n\\\r\n- drivepro \\n\\\r\n- driveseed \\n\\\r\n    __\"\"\"\r\n\r\n\r\notherstext = \"\"\"__- exe, exey \\n\\\r\n- sub2unlock, sub2unlock \\n\\\r\n- rekonise \\n\\\r\n- letsboost \\n\\\r\n- phapps2app \\n\\\r\n- mboost\t\\n\\\r\n- sub4unlock \\n\\\r\n- ytsubme \\n\\\r\n- bitly \\n\\\r\n- social-unlock\t\\n\\\r\n- boost\t\\n\\\r\n- gooly \\n\\\r\n- shrto \\n\\\r\n- tinyurl\r\n    __\"\"\"\r\n\r\n\r\nddltext = \"\"\"__- yandex \\n\\\r\n- mediafire \\n\\\r\n- uptobox \\n\\\r\n- osdn \\n\\\r\n- github \\n\\\r\n- hxfile \\n\\\r\n- 1drv (onedrive) \\n\\\r\n- pixeldrain \\n\\\r\n- antfiles \\n\\\r\n- streamtape \\n\\\r\n- racaty \\n\\\r\n- 1fichier \\n\\\r\n- solidfiles \\n\\\r\n- krakenfiles \\n\\\r\n- upload \\n\\\r\n- akmfiles \\n\\\r\n- linkbox \\n\\\r\n- shrdsk \\n\\\r\n- letsupload \\n\\\r\n- zippyshare \\n\\\r\n- wetransfer \\n\\\r\n- terabox, teraboxapp, 4funbox, mirrobox, nephobox, momerybox \\n\\\r\n- filepress \\n\\\r\n- anonfiles, hotfile, bayfiles, megaupload, letsupload, filechan, myfile, vshare, rapidshare, lolabits, openload, share-online, upvid \\n\\\r\n- fembed, fembed, femax20, fcdn, feurl, layarkacaxxi, naniplay, nanime, naniplay, mm9842 \\n\\\r\n- sbembed, watchsb, streamsb, sbplay.\r\n    __\"\"\"\r\n\r\n\r\nshortnertext = \"\"\"__- igg-games \\n\\\r\n- olamovies\\n\\\r\n- katdrive \\n\\\r\n- drivefire\\n\\\r\n- kolop \\n\\\r\n- hubdrive \\n\\\r\n- filecrypt \\n\\\r\n- shareus \\n\\\r\n- shortingly \\n\\\r\n- gyanilinks \\n\\\r\n- shorte \\n\\\r\n- psa \\n\\\r\n- sharer \\n\\\r\n- new1.gdtot \\n\\\r\n- adfly\\n\\\r\n- gplinks\\n\\\r\n- droplink \\n\\\r\n- linkvertise \\n\\\r\n- rocklinks \\n\\\r\n- ouo \\n\\\r\n- try2link \\n\\\r\n- htpmovies \\n\\\r\n- sharespark \\n\\\r\n- cinevood\\n\\\r\n- atishmkv \\n\\\r\n- urlsopen \\n\\\r\n- xpshort, techymozo \\n\\\r\n- dulink \\n\\\r\n- ez4short \\n\\\r\n- krownlinks \\n\\\r\n- teluguflix \\n\\\r\n- taemovies \\n\\\r\n- toonworld4all \\n\\\r\n- animeremux \\n\\\r\n- adrinolinks \\n\\\r\n- tnlink \\n\\\r\n- flashlink \\n\\\r\n- short2url \\n\\\r\n- tinyfy \\n\\\r\n- mdiskshortners \\n\\\r\n- earnl \\n\\\r\n- moneykamalo \\n\\\r\n- easysky \\n\\\r\n- indiurl \\n\\\r\n- linkbnao \\n\\\r\n- mdiskpro \\n\\\r\n- tnshort \\n\\\r\n- indianshortner \\n\\\r\n- rslinks \\n\\\r\n- bitly, tinyurl \\n\\\r\n- thinfi \\n\\\r\n- pdisk \\n\\\r\n- vnshortener \\n\\\r\n- onepagelink \\n\\\r\n- lolshort \\n\\\r\n- tnvalue \\n\\\r\n- vipurl \\n\\\r\n__\"\"\"\r\n\r\n\r\nfreewalltext = \"\"\"__- shutterstock \\n\\\r\n- adobe stock \\n\\\r\n- drivehub \\n\\\r\n- alamy \\n\\\r\n- gettyimages \\n\\\r\n- istockphoto \\n\\\r\n- picfair \\n\\\r\n- slideshare \\n\\\r\n- medium \\n\\\r\n    __\"\"\"\r\n\r\n\r\nHELP_TEXT = f\"**--Just Send me any Supported Links From Below Mentioned Sites--** \\n\\n\\\r\n**List of Sites for DDL : ** \\n\\n{ddltext} \\n\\\r\n**List of Sites for Shortners : ** \\n\\n{shortnertext} \\n\\\r\n**List of Sites for GDrive Look-ALike : ** \\n\\n{gdrivetext} \\n\\\r\n**List of Sites for Jumping Paywall : ** \\n\\n{freewalltext} \\n\\\r\n**Other Supported Sites : ** \\n\\n{otherstext}\"\r\n"
  }
]