Full Code of bipinkrish/Link-Bypasser-Bot

Repository: bipinkrish/Link-Bypasser-Bot
Branch: main
Commit: 926cab96e8a9
Files: 17
Total size: 148.7 KB

Directory structure:
gitextract_80omcl63/

├── .github/
│   └── FUNDING.yml
├── .gitignore
├── Dockerfile
├── Procfile
├── README.md
├── app.json
├── app.py
├── bypasser.py
├── config.json
├── db.py
├── ddl.py
├── freewall.py
├── main.py
├── requirements.txt
├── runtime.txt
├── templates/
│   └── index.html
└── texts.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/FUNDING.yml
================================================
github: [bipinkrish]


================================================
FILE: .gitignore
================================================
__pycache__
my_bot.session
my_bot.session-journal


================================================
FILE: Dockerfile
================================================
FROM python:3.9

WORKDIR /app

COPY requirements.txt /app/
RUN pip3 install -r requirements.txt
COPY . /app

CMD gunicorn app:app & python3 main.py

================================================
FILE: Procfile
================================================
worker: python3 main.py
web: python3 app.py

================================================
FILE: README.md
================================================
# Link-Bypasser-Bot

A Telegram Bot (with a Site) that can Bypass Ad Links, Generate Direct Links and Jump Paywalls. See the Bot at
~~[@BypassLinkBot](https://t.me/BypassLinkBot)~~ [@BypassUrlsBot](https://t.me/BypassUrlsBot) or try it on [Replit](https://replit.com/@bipinkrish/Link-Bypasser#app.py)

---

## Special Feature - Public Database

Results of each bypass are stored in a public database on [DBHub.io](https://dbhub.io/bipinkrish/link_bypass.db), so if the bot finds a link that is already in the database it reuses the stored result, saving time and avoiding repeated work.

The table is created with the command below, if anyone wants to use it for their own setup.

```sql
CREATE TABLE results (link TEXT PRIMARY KEY, result TEXT)
```
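
The same lookup-before-bypass flow can be exercised locally with Python's built-in `sqlite3` against that schema. This is an illustrative sketch, not code from the repo; `cache_get`/`cache_put` are hypothetical names:

```python
import sqlite3

# same schema the bot uses on DBHub.io
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (link TEXT PRIMARY KEY, result TEXT)")

def cache_get(link):
    # return the stored bypass result, or None on a cache miss
    row = con.execute("SELECT result FROM results WHERE link = ?", (link,)).fetchone()
    return row[0] if row else None

def cache_put(link, result):
    # upsert so re-bypassing a link refreshes the stored result
    con.execute(
        "INSERT INTO results VALUES (?, ?) "
        "ON CONFLICT(link) DO UPDATE SET result = excluded.result",
        (link, result),
    )
    con.commit()
```

On a miss the bot runs the bypass and stores the result, so the next request for the same link is answered from the table.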

---

## Required Variables

- `TOKEN` Bot Token from @BotFather
- `HASH` API Hash from my.telegram.org
- `ID` API ID from my.telegram.org

## Optional Variables
you can also set these in `config.json` file

- `CRYPT` GDTot Crypt. If you don't know how to get the Crypt, [Learn Here](https://www.youtube.com/watch?v=EfZ29CotRSU)
- `XSRF_TOKEN` and `Laravel_Session` XSRF Token and Laravel Session cookies. If you don't know how to get them, watch [this Video](https://www.youtube.com/watch?v=EfZ29CotRSU) (for GDTOT) and do the same for sharer.pw
- `DRIVEFIRE_CRYPT` Drivefire Crypt
- `KOLOP_CRYPT` Kolop Crypt
- `HUBDRIVE_CRYPT` Hubdrive Crypt
- `KATDRIVE_CRYPT` Katdrive Crypt
- `UPTOBOX_TOKEN` Uptobox Token
- `TERA_COOKIE` Terabox Cookie (only the `ndus` value) (see [Help](#help))
- `CLOUDFLARE` the `cf_clearance` cookie from Cloudflare-protected sites
- `PORT` Port to run the Bot Site on (defaults to 5000)
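
A minimal `config.json` using these names (all values below are placeholders; set only the keys you need, and note that environment variables take precedence over values in this file):

```json
{
  "TOKEN": "123456:your-bot-token",
  "HASH": "your-api-hash",
  "ID": "your-api-id",
  "TERA_COOKIE": "your-ndus-value",
  "PORT": "5000"
}
```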

## Optional Database Feature
You need to set all three for this to work

- `DB_API` API KEY from [DBHub](https://dbhub.io/pref), make sure it has Read/Write permission
- `DB_OWNER` (defaults to `bipinkrish`)
- `DB_NAME` (defaults to `link_bypass.db`)

---

## Deploy on Heroku

*BEFORE YOU DEPLOY ON HEROKU, YOU SHOULD FORK THE REPO AND CHANGE ITS NAME TO ANYTHING ELSE*<br>

[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy?template=https://github.com/bipinkrish/Link-Bypasser-Bot)<br>

---

## Commands

Everything is set programmatically; there is nothing to configure

```
/start - Welcome Message
/help - List of All Supported Sites
```

---

## Supported Sites

For the list of supported sites, see the [texts.py](https://github.com/bipinkrish/Link-Bypasser-Bot/blob/main/texts.py) file

---

## Help

* If you are deploying on a VPS, watch videos on how to set/export Environment Variables, OR set these in the `config.json` file
* Terabox Cookie

    1. Open any Browser
    2. Make sure you are logged in with a Terabox account
    3. Press `F12` to open the DEV tools and click the Network tab
    4. Open any Terabox video link and open the Cookies tab
    5. Copy the value of `ndus`
   
   <br>

   ![](https://i.ibb.co/hHBZM5m/Screenshot-113.png)
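
Once copied, the bot sends `ndus` as an ordinary cookie with its Terabox requests. A minimal stdlib sketch of attaching it (the URL and cookie value here are placeholders):

```python
import urllib.request

NDUS = "YOUR_NDUS_VALUE"  # placeholder: paste the ndus value you copied above

# attach the cookie exactly as a browser would send it with each request
req = urllib.request.Request(
    "https://www.terabox.com/",  # any Terabox URL to fetch with the cookie
    headers={"Cookie": f"ndus={NDUS}"},
)
# urllib.request.urlopen(req) would now send the ndus cookie
```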


================================================
FILE: app.json
================================================
{
  "name": "Link-Bypasser-Bot",
  "description": "A Telegram Bot (with Site) that can Bypass Ad Links, Generate Direct Links and Jump Paywalls",
  "keywords": [
    "telegram",
    "Link bypass",
    "bypass bot", 
    "telegram bot" 
  ],
  "repository": "https://github.com/bipinkrish/Link-Bypasser-Bot",
  "logo": "https://ibb.co/kMVxrCj",
  "env": {
    "HASH": {
      "description": "Your API HASH from my.telegram.org",
      "required": true
    },
    "ID": {
      "description": "Your API ID from my.telegram.org",
      "required": true
    },
    "TOKEN":{
      "description": "Your bot token from @BotFather",
      "required": true
    },
    "TERA_COOKIE": {
      "description": "Terabox Cookie (only ndus value)",
      "required": false
    },
    "CRYPT": {
      "description": "GDTot Crypt",
      "required": false
    },
    "XSRF_TOKEN": {
      "description": "XSRF Token cookies! Check readme file in repo",
      "required": false
    },
    "Laravel_Session": {
      "description": "Laravel Session cookies! Check readme file in repo",
      "required": false
    },
    "DRIVEFIRE_CRYPT": {
      "description": "Drivefire Crypt",
      "required": false
    },
    "KOLOP_CRYPT": {
      "description": "Kolop Crypt",
      "required": false
    },
    "HUBDRIVE_CRYPT": {
      "description": "Hubdrive Crypt",
      "required": false
    },
    "KATDRIVE_CRYPT": {
      "description": "Katdrive Crypt",
      "required": false
    },
    "UPTOBOX_TOKEN": {
      "description": "Uptobox Token",
      "required": false
    },
    "CLOUDFLARE": {
      "description": "Use the cf_clearance cookie from Cloudflare-protected sites",
      "required": false
    },
    "PORT": {
      "description": "Port to run the Bot Site on (default is 5000)",
      "required": false
    }
  },
  "buildpacks": [
    {
      "url": "heroku/python"
    }
  ],
  "formation": {
    "web": {
      "quantity": 1,
      "size": "standard-1x"
    }
  }
}


================================================
FILE: app.py
================================================
from flask import Flask, request, render_template, make_response, send_file
import bypasser
import re
import os
import freewall


app = Flask(__name__)


def handle_index(ele):
    return bypasser.scrapeIndex(ele)


def store_shortened_links(link):
    with open("shortened_links.txt", "a") as file:
        file.write(link + "\n")


def loop_thread(url):
    if not url:
        return None

    urls = [url]

    link = ""
    temp = None
    for ele in urls:
        if re.search(r"https?:\/\/(?:[\w.-]+)?\.\w+\/\d+:", ele):
            handle_index(ele)
        elif bypasser.ispresent(bypasser.ddl.ddllist, ele):
            try:
                temp = bypasser.ddl.direct_link_generator(ele)
            except Exception as e:
                temp = "**Error**: " + str(e)
        elif freewall.pass_paywall(ele, check=True):
            freefile = freewall.pass_paywall(ele)
            if freefile:
                try:
                    return send_file(freefile)
                except:
                    pass
        else:
            try:
                temp = bypasser.shortners(ele)
            except Exception as e:
                temp = "**Error**: " + str(e)
        print("bypassed:", temp)
        if temp:
            link = link + temp + "\n\n"

    return link


@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        url = request.form.get("url")
        result = loop_thread(url)
        if freewall.pass_paywall(url, check=True):
            return result

        shortened_links = request.cookies.get("shortened_links")
        if shortened_links:
            prev_links = shortened_links.split(",")
        else:
            prev_links = []

        if result:
            prev_links.append(result)

            if len(prev_links) > 10:
                prev_links = prev_links[-10:]

        shortened_links_str = ",".join(prev_links)
        resp = make_response(
            render_template("index.html", result=result, prev_links=prev_links)
        )
        resp.set_cookie("shortened_links", shortened_links_str)

        return resp

    shortened_links = request.cookies.get("shortened_links")
    return render_template(
        "index.html",
        result=None,
        prev_links=shortened_links.split(",") if shortened_links else None,
    )


if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    app.run(host="0.0.0.0", port=port)


================================================
FILE: bypasser.py
================================================
import re
import requests
from curl_cffi import requests as Nreq
import base64
from urllib.parse import unquote, urlparse, quote
import time
import cloudscraper
from bs4 import BeautifulSoup, NavigableString, Tag
from lxml import etree
import hashlib
import json
from asyncio import sleep as asleep
import ddl
from cfscrape import create_scraper
from json import load
from os import environ

with open("config.json", "r") as f:
    DATA = load(f)


def getenv(var):
    return environ.get(var) or DATA.get(var, None)


##########################################################
# ENVs

GDTot_Crypt = getenv("CRYPT")
Laravel_Session = getenv("Laravel_Session")
XSRF_TOKEN = getenv("XSRF_TOKEN")
DCRYPT = getenv("DRIVEFIRE_CRYPT")
KCRYPT = getenv("KOLOP_CRYPT")
HCRYPT = getenv("HUBDRIVE_CRYPT")
KATCRYPT = getenv("KATDRIVE_CRYPT")
CF = getenv("CLOUDFLARE")

############################################################
# Lists

otherslist = [
    "exe.io",
    "exey.io",
    "sub2unlock.net",
    "sub2unlock.com",
    "rekonise.com",
    "letsboost.net",
    "ph.apps2app.com",
    "mboost.me",
    "sub4unlock.com",
    "ytsubme.com",
    "social-unlock.com",
    "boost.ink",
    "goo.gl",
    "shrto.ml",
    "t.co",
]

gdlist = [
    "appdrive",
    "driveapp",
    "drivehub",
    "gdflix",
    "drivesharer",
    "drivebit",
    "drivelinks",
    "driveace",
    "drivepro",
    "driveseed",
]


###############################################################
# pdisk


def pdisk(url):
    r = requests.get(url).text
    try:
        return r.split("<!-- ")[-1].split(" -->")[0]
    except:
        try:
            return (
                BeautifulSoup(r, "html.parser").find("video").find("source").get("src")
            )
        except:
            return None


###############################################################
# index scrapper


def scrapeIndex(url, username="none", password="none"):

    def authorization_token(username, password):
        user_pass = f"{username}:{password}"
        return f"Basic {base64.b64encode(user_pass.encode()).decode()}"

    def decrypt(string):
        return base64.b64decode(string[::-1][24:-20]).decode("utf-8")

    def func(payload_input, url, username, password):
        next_page = False
        next_page_token = ""

        url = f"{url}/" if url[-1] != "/" else url

        try:
            headers = {"authorization": authorization_token(username, password)}
        except:
            return "username/password combination is wrong", None, None

        encrypted_response = requests.post(url, data=payload_input, headers=headers)
        if encrypted_response.status_code == 401:
            return "username/password combination is wrong", None, None

        try:
            decrypted_response = json.loads(decrypt(encrypted_response.text))
        except:
            return (
                "something went wrong. check index link/username/password field again",
                None,
                None,
            )

        page_token = decrypted_response["nextPageToken"]
        if page_token is None:
            next_page = False
        else:
            next_page = True
            next_page_token = page_token

        if list(decrypted_response.get("data").keys())[0] != "error":
            file_length = len(decrypted_response["data"]["files"])
            result = ""

            for i, _ in enumerate(range(file_length)):
                files_type = decrypted_response["data"]["files"][i]["mimeType"]
                if files_type != "application/vnd.google-apps.folder":
                    files_name = decrypted_response["data"]["files"][i]["name"]

                    direct_download_link = url + quote(files_name)
                    result += f"• {files_name} :\n{direct_download_link}\n\n"
            return result, next_page, next_page_token
        # the index returned an error payload: end pagination cleanly instead of
        # falling through with an implicit None (which would break tuple unpacking)
        return None, False, ""

    def format(result):
        long_string = "".join(result)
        new_list = []

        while len(long_string) > 0:
            if len(long_string) > 4000:
                split_index = long_string.rfind("\n\n", 0, 4000)
                if split_index == -1:
                    split_index = 4000
            else:
                split_index = len(long_string)

            new_list.append(long_string[:split_index])
            long_string = long_string[split_index:].lstrip("\n\n")

        return new_list

    # main
    x = 0
    next_page = False
    next_page_token = ""
    result = []

    payload = {"page_token": next_page_token, "page_index": x}
    print(f"Index Link: {url}\n")
    temp, next_page, next_page_token = func(payload, url, username, password)
    if temp is not None:
        result.append(temp)

    while next_page == True:
        payload = {"page_token": next_page_token, "page_index": x}
        temp, next_page, next_page_token = func(payload, url, username, password)
        if temp is not None:
            result.append(temp)
        x += 1

    if len(result) == 0:
        return None
    return format(result)


################################################################
# Shortner Full Page API


def shortner_fpage_api(link):
    link_pattern = r"https?://[\w.-]+/full\?api=([^&]+)&url=([^&]+)(?:&type=(\d+))?"
    match = re.match(link_pattern, link)
    if match:
        try:
            url_enc_value = match.group(2)
            url_value = base64.b64decode(url_enc_value).decode("utf-8")
            return url_value
        except BaseException:
            return None
    else:
        return None


# Shortner Quick Link API


def shortner_quick_api(link):
    link_pattern = r"https?://[\w.-]+/st\?api=([^&]+)&url=([^&]+)"
    match = re.match(link_pattern, link)
    if match:
        try:
            url_value = match.group(2)
            return url_value
        except BaseException:
            return None
    else:
        return None


##############################################################
# tnlink


def tnlink(url):
    client = requests.session()
    DOMAIN = "https://page.tnlink.in"  # no trailing slash; joined with /{code} below
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://usanewstoday.club/"
    h = {"referer": ref}
    while len(client.cookies) == 0:
        resp = client.get(final_url, headers=h)
        time.sleep(2)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(8)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


###############################################################
# psa


def try2link_bypass(url):
    client = cloudscraper.create_scraper(allow_brotli=False)

    url = url[:-1] if url[-1] == "/" else url

    params = (("d", int(time.time()) + (60 * 4)),)
    r = client.get(url, params=params, headers={"Referer": "https://newforex.online/"})

    soup = BeautifulSoup(r.text, "html.parser")
    inputs = soup.find(id="go-link").find_all(name="input")
    data = {input.get("name"): input.get("value") for input in inputs}
    time.sleep(7)

    headers = {
        "Host": "try2link.com",
        "X-Requested-With": "XMLHttpRequest",
        "Origin": "https://try2link.com",
        "Referer": url,
    }

    bypassed_url = client.post(
        "https://try2link.com/links/go", headers=headers, data=data
    )
    return bypassed_url.json()["url"]


def try2link_scrape(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    h = {
        "upgrade-insecure-requests": "1",
        "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36",
    }
    res = client.get(url, cookies={}, headers=h)
    url = "https://try2link.com/" + re.findall(r"try2link\.com/(.*?) ", res.text)[0]
    return try2link_bypass(url)


def psa_bypasser(psa_url):
    cookies = {"cf_clearance": CF}
    headers = {
        "authority": "psa.wf",
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
        "accept-language": "en-US,en;q=0.9",
        "referer": "https://psa.wf/",
        "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36",
    }

    r = requests.get(psa_url, headers=headers, cookies=cookies)
    soup = BeautifulSoup(r.text, "html.parser").find_all(
        class_="dropshadowboxes-drop-shadow dropshadowboxes-rounded-corners dropshadowboxes-inside-and-outside-shadow dropshadowboxes-lifted-both dropshadowboxes-effect-default"
    )
    links = []
    for link in soup:
        try:
            exit_gate = link.a.get("href")
            if "/exit" in exit_gate:
                print("scraping :", exit_gate)
                links.append(try2link_scrape(exit_gate))
        except:
            pass

    finals = ""
    for li in links:
        try:
            res = requests.get(li, headers=headers, cookies=cookies)
            soup = BeautifulSoup(res.text, "html.parser")
            name = soup.find("h1", class_="entry-title", itemprop="headline").getText()
            finals += "**" + name + "**\n\n"
            soup = soup.find("div", class_="entry-content", itemprop="text").findAll(
                "a"
            )
            for ele in soup:
                finals += "○ " + ele.get("href") + "\n"
            finals += "\n\n"
        except:
            finals += li + "\n\n"
    return finals


##################################################################################################################
# rocklinks


def rocklinks(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    if "rocklinks.net" in url:
        DOMAIN = "https://blog.disheye.com"
    else:
        DOMAIN = "https://rocklinks.net"

    url = url[:-1] if url[-1] == "/" else url

    code = url.split("/")[-1]
    if "rocklinks.net" in url:
        final_url = f"{DOMAIN}/{code}?quelle="
    else:
        final_url = f"{DOMAIN}/{code}"

    resp = client.get(final_url)
    soup = BeautifulSoup(resp.content, "html.parser")

    try:
        inputs = soup.find(id="go-link").find_all(name="input")
    except:
        return "Incorrect Link"

    data = {input.get("name"): input.get("value") for input in inputs}

    h = {"x-requested-with": "XMLHttpRequest"}

    time.sleep(10)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


################################################
# igg games


def decodeKey(encoded):
    key = ""

    i = len(encoded) // 2 - 5
    while i >= 0:
        key += encoded[i]
        i = i - 2

    i = len(encoded) // 2 + 4
    while i < len(encoded):
        key += encoded[i]
        i = i + 2

    return key
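

# decodeKey reads every other character outward from the middle of the encoded
# string: down from mid-5, then up from mid+4. encode_key below is a hypothetical
# inverse (not in the repo) that makes the interleaving concrete; for any key,
# decodeKey(encode_key(key)) == key.
def encode_key(key, pad="x"):
    # find a length whose readable slots fit the key exactly, then scatter the
    # key's characters into the positions decodeKey will read, in read order
    for length in range(len(key), 4 * len(key) + 32):
        mid = length // 2
        slots = list(range(mid - 5, -1, -2)) + list(range(mid + 4, length, 2))
        if len(slots) == len(key):
            buf = [pad] * length
            for pos, ch in zip(slots, key):
                buf[pos] = ch
            return "".join(buf)
    raise ValueError("no length fits this key")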


def bypassBluemediafiles(url, torrent=False):
    headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.5",
        "Alt-Used": "bluemediafiles.com",
        "Connection": "keep-alive",
        "Upgrade-Insecure-Requests": "1",
        "Sec-Fetch-Dest": "document",
        "Sec-Fetch-Mode": "navigate",
        "Sec-Fetch-Site": "none",
        "Sec-Fetch-User": "?1",
    }

    res = requests.get(url, headers=headers)
    soup = BeautifulSoup(res.text, "html.parser")
    script = str(soup.findAll("script")[3])
    encodedKey = script.split('Create_Button("')[1].split('");')[0]

    headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.5",
        "Referer": url,
        "Alt-Used": "bluemediafiles.com",
        "Connection": "keep-alive",
        "Upgrade-Insecure-Requests": "1",
        "Sec-Fetch-Dest": "document",
        "Sec-Fetch-Mode": "navigate",
        "Sec-Fetch-Site": "same-origin",
        "Sec-Fetch-User": "?1",
    }

    params = {"url": decodeKey(encodedKey)}

    if torrent:
        res = requests.get(
            "https://dl.pcgamestorrents.org/get-url.php", params=params, headers=headers
        )
        soup = BeautifulSoup(res.text, "html.parser")
        furl = soup.find("a", class_="button").get("href")

    else:
        res = requests.get(
            "https://bluemediafiles.com/get-url.php", params=params, headers=headers
        )
        furl = res.url
        if "mega.nz" in furl:
            furl = furl.replace("mega.nz/%23!", "mega.nz/file/").replace("!", "#")

    return furl


def igggames(url):
    res = requests.get(url)
    soup = BeautifulSoup(res.text, "html.parser")
    soup = soup.find("div", class_="uk-margin-medium-top").findAll("a")

    bluelist = []
    for ele in soup:
        bluelist.append(ele.get("href"))
    bluelist = bluelist[3:-1]

    links = ""
    last = None
    fix = True
    for ele in bluelist:
        if ele == "https://igg-games.com/how-to-install-a-pc-game-and-update.html":
            fix = False
            links += "\n"
        if "bluemediafile" in ele:
            tmp = bypassBluemediafiles(ele)
            if fix:
                tt = tmp.split("/")[2]
                if last is not None and tt != last:
                    links += "\n"
                last = tt
            links = links + "○ " + tmp + "\n"
        elif "pcgamestorrents.com" in ele:
            res = requests.get(ele)
            soup = BeautifulSoup(res.text, "html.parser")
            turl = (
                soup.find(
                    "p", class_="uk-card uk-card-body uk-card-default uk-card-hover"
                )
                .find("a")
                .get("href")
            )
            links = links + "🧲 `" + bypassBluemediafiles(turl, True) + "`\n\n"
        elif ele != "https://igg-games.com/how-to-install-a-pc-game-and-update.html":
            if fix:
                tt = ele.split("/")[2]
                if last is not None and tt != last:
                    links += "\n"
                last = tt
            links = links + "○ " + ele + "\n"

    return links[:-1]


###############################################################
# htpmovies cinevood sharespark atishmkv


def htpmovies(link):
    client = cloudscraper.create_scraper(allow_brotli=False)
    r = client.get(link, allow_redirects=True).text
    j = r.split('("')[-1]
    url = j.split('")')[0]
    param = url.split("/")[-1]
    DOMAIN = "https://go.theforyou.in"
    final_url = f"{DOMAIN}/{param}"
    resp = client.get(final_url)
    soup = BeautifulSoup(resp.content, "html.parser")
    try:
        inputs = soup.find(id="go-link").find_all(name="input")
    except:
        return "Incorrect Link"
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(10)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went Wrong !!"


def scrappers(link):

    try:
        link = re.match(
            r"((http|https)\:\/\/)?[a-zA-Z0-9\.\/\?\:@\-_=#]+\.([a-zA-Z]){2,6}([a-zA-Z0-9\.\&\/\?\:@\-_=#])*",
            link,
        )[0]
    except TypeError:
        return "Not a Valid Link."
    links = []

    if "sharespark" in link:
        gd_txt = ""
        res = requests.get("?action=printpage;".join(link.split("?")))
        soup = BeautifulSoup(res.text, "html.parser")
        for br in soup.findAll("br"):
            next_s = br.nextSibling
            if not (next_s and isinstance(next_s, NavigableString)):
                continue
            next2_s = next_s.nextSibling
            if next2_s and isinstance(next2_s, Tag) and next2_s.name == "br":
                text = str(next_s).strip()
                if text:
                    result = re.sub(r"(?m)^\(https://i.*", "", next_s)
                    star = re.sub(r"(?m)^\*.*", " ", result)
                    extra = re.sub(r"(?m)^\(https://e.*", " ", star)
                    # search the cleaned text, not the raw sibling string
                    gd_txt += (
                        ", ".join(
                            re.findall(
                                r"(?m)^.*https://new1.gdtot.cfd/file/[0-9][^.]*", extra
                            )
                        )
                        + "\n\n"
                    )
        return gd_txt

    elif "htpmovies" in link and "/exit.php" in link:
        return htpmovies(link)

    elif "htpmovies" in link:
        prsd = ""
        links = []
        res = requests.get(link)
        soup = BeautifulSoup(res.text, "html.parser")
        x = soup.select('a[href^="/exit.php?url="]')
        y = soup.select("h5")
        z = (
            unquote(link.split("/")[-2]).split("-")[0]
            if link.endswith("/")
            else unquote(link.split("/")[-1]).split("-")[0]
        )

        for a in x:
            links.append(a["href"])
            prsd = f"Total Links Found : {len(links)}\n\n"

        msdcnt = -1
        for b in y:
            if str(b.string).lower().startswith(z.lower()):
                msdcnt += 1
                url = f"https://htpmovies.lol" + links[msdcnt]
                prsd += f"{msdcnt+1}. <b>{b.string}</b>\n{htpmovies(url)}\n\n"
                asleep(5)
        return prsd

    elif "cinevood" in link:
        prsd = ""
        links = []
        res = requests.get(link)
        soup = BeautifulSoup(res.text, "html.parser")
        x = soup.select('a[href^="https://kolop.icu/file"]')
        for a in x:
            links.append(a["href"])
        for o in links:
            res = requests.get(o)
            soup = BeautifulSoup(res.content, "html.parser")
            title = soup.title.string
            reftxt = re.sub(r"Kolop \| ", "", title)
            prsd += f"{reftxt}\n{o}\n\n"
        return prsd

    elif "atishmkv" in link:
        prsd = ""
        links = []
        res = requests.get(link)
        soup = BeautifulSoup(res.text, "html.parser")
        x = soup.select('a[href^="https://gdflix.top/file"]')
        for a in x:
            links.append(a["href"])
        for o in links:
            prsd += o + "\n\n"
        return prsd

    elif "teluguflix" in link:
        gd_txt = ""
        r = requests.get(link)
        soup = BeautifulSoup(r.text, "html.parser")
        links = soup.select('a[href*="gdtot"]')
        gd_txt = f"Total Links Found : {len(links)}\n\n"
        for no, link in enumerate(links, start=1):
            gdlk = link["href"]
            t = requests.get(gdlk)
            soupt = BeautifulSoup(t.text, "html.parser")
            title = soupt.select('meta[property^="og:description"]')
            gd_txt += f"{no}. <code>{(title[0]['content']).replace('Download ' , '')}</code>\n{gdlk}\n\n"
            asleep(1.5)
        return gd_txt

    elif "taemovies" in link:
        gd_txt, no = "", 0
        r = requests.get(link)
        soup = BeautifulSoup(r.text, "html.parser")
        links = soup.select('a[href*="shortingly"]')
        gd_txt = f"Total Links Found : {len(links)}\n\n"
        for a in links:
            glink = rocklinks(a["href"])
            t = requests.get(glink)
            soupt = BeautifulSoup(t.text, "html.parser")
            title = soupt.select('meta[property^="og:description"]')
            no += 1
            gd_txt += (
                f"{no}. {(title[0]['content']).replace('Download ' , '')}\n{glink}\n\n"
            )
        return gd_txt

    elif "toonworld4all" in link:
        gd_txt, no = "", 0
        r = requests.get(link)
        soup = BeautifulSoup(r.text, "html.parser")
        links = soup.select('a[href*="redirect/main.php?"]')
        for a in links:
            down = requests.get(a["href"], stream=True, allow_redirects=False)
            link = down.headers["location"]
            glink = rocklinks(link)
            if glink and "gdtot" in glink:
                t = requests.get(glink)
                soupt = BeautifulSoup(t.text, "html.parser")
                title = soupt.select('meta[property^="og:description"]')
                no += 1
                gd_txt += f"{no}. {(title[0]['content']).replace('Download ' , '')}\n{glink}\n\n"
        return gd_txt

    elif "animeremux" in link:
        gd_txt, no = "", 0
        r = requests.get(link)
        soup = BeautifulSoup(r.text, "html.parser")
        links = soup.select('a[href*="urlshortx.com"]')
        gd_txt = f"Total Links Found : {len(links)}\n\n"
        for a in links:
            link = a["href"]
            x = link.split("url=")[-1]
            t = requests.get(x)
            soupt = BeautifulSoup(t.text, "html.parser")
            title = soupt.title
            no += 1
            gd_txt += f"{no}. {title.text}\n{x}\n\n"
            asleep(1.5)
        return gd_txt

    else:
        res = requests.get(link)
        soup = BeautifulSoup(res.text, "html.parser")
        mystx = soup.select(r'a[href^="magnet:?xt=urn:btih:"]')
        for hy in mystx:
            links.append(hy["href"])
        return links


###################################################
# script links


def getfinal(domain, url, sess):

    # sess = requests.session()
    res = sess.get(url)
    soup = BeautifulSoup(res.text, "html.parser")
    soup = soup.find("form").findAll("input")
    datalist = []
    for ele in soup:
        datalist.append(ele.get("value"))

    data = {
        "_method": datalist[0],
        "_csrfToken": datalist[1],
        "ad_form_data": datalist[2],
        "_Token[fields]": datalist[3],
        "_Token[unlocked]": datalist[4],
    }

    sess.headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
        "Accept": "application/json, text/javascript, */*; q=0.01",
        "Accept-Language": "en-US,en;q=0.5",
        "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
        "X-Requested-With": "XMLHttpRequest",
        "Origin": domain,
        "Connection": "keep-alive",
        "Referer": url,
        "Sec-Fetch-Dest": "empty",
        "Sec-Fetch-Mode": "cors",
        "Sec-Fetch-Site": "same-origin",
    }

    # print("waiting 10 secs")
    time.sleep(10)  # important
    response = sess.post(domain + "/links/go", data=data).json()
    furl = response["url"]
    return furl


def getfirst(url):

    sess = requests.session()
    res = sess.get(url)

    soup = BeautifulSoup(res.text, "html.parser")
    soup = soup.find("form")
    action = soup.get("action")
    soup = soup.findAll("input")
    datalist = []
    for ele in soup:
        datalist.append(ele.get("value"))
    sess.headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.5",
        "Origin": action,
        "Connection": "keep-alive",
        "Referer": action,
        "Upgrade-Insecure-Requests": "1",
        "Sec-Fetch-Dest": "document",
        "Sec-Fetch-Mode": "navigate",
        "Sec-Fetch-Site": "same-origin",
        "Sec-Fetch-User": "?1",
    }

    data = {"newwpsafelink": datalist[1], "g-recaptcha-response": RecaptchaV3()}
    response = sess.post(action, data=data)
    soup = BeautifulSoup(response.text, "html.parser")
    soup = soup.findAll("div", class_="wpsafe-bottom text-center")
    for ele in soup:
        rurl = ele.find("a").get("onclick")[13:-12]

    res = sess.get(rurl)
    furl = res.url
    return getfinal(f'https://{furl.split("/")[-2]}/', furl, sess)


####################################################################################################
# ez4short


def ez4(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://ez4short.com"
    ref = "https://techmody.io/"
    h = {"referer": ref}
    resp = client.get(url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(8)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


################################################
# ola movies


def olamovies(url):

    print("this takes time, you might want to take a break.")
    headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.5",
        "Referer": url,
        "Alt-Used": "olamovies.ink",
        "Connection": "keep-alive",
        "Upgrade-Insecure-Requests": "1",
        "Sec-Fetch-Dest": "document",
        "Sec-Fetch-Mode": "navigate",
        "Sec-Fetch-Site": "same-origin",
        "Sec-Fetch-User": "?1",
    }

    client = cloudscraper.create_scraper(allow_brotli=False)
    res = client.get(url)
    soup = BeautifulSoup(res.text, "html.parser")
    soup = soup.findAll("div", class_="wp-block-button")

    outlist = []
    for ele in soup:
        outlist.append(ele.find("a").get("href"))

    slist = []
    for ele in outlist:
        try:
            key = (
                ele.split("?key=")[1]
                .split("&id=")[0]
                .replace("%2B", "+")
                .replace("%3D", "=")
                .replace("%2F", "/")
            )
            id = ele.split("&id=")[1]
        except:
            continue

        # retry a few times until a known shortlink host appears
        count = 3
        params = {"key": key, "id": id}

        while count > 0:
            res = client.get(
                "https://olamovies.ink/download/", params=params, headers=headers
            )
            soup = BeautifulSoup(res.text, "html.parser")
            href = soup.findAll("a")[0].get("href")
            if (
                "rocklinks.net" in href
                or "try2link.com" in href
                or "ez4short.com" in href
            ):
                slist.append(href)
                break
            count -= 1
            time.sleep(10)

    final = []
    for ele in slist:
        if "rocklinks.net" in ele:
            final.append(rocklinks(ele))
        elif "try2link.com" in ele:
            final.append(try2link_bypass(ele))
        elif "ez4short.com" in ele:
            final.append(ez4(ele))
        else:
            pass

    return "\n".join(final)


###############################################
# katdrive


def parse_info_katdrive(res):
    info_parsed = {}
    title = re.findall(r">(.*?)</h4>", res.text)[0]
    info_chunks = re.findall(r">(.*?)</td>", res.text)
    info_parsed["title"] = title
    for i in range(0, len(info_chunks), 2):
        info_parsed[info_chunks[i]] = info_chunks[i + 1]
    return info_parsed


def katdrive_dl(url, katcrypt):
    client = requests.Session()
    client.cookies.update({"crypt": katcrypt})

    res = client.get(url)
    info_parsed = parse_info_katdrive(res)
    info_parsed["error"] = False

    up = urlparse(url)
    req_url = f"{up.scheme}://{up.netloc}/ajax.php?ajax=download"

    file_id = url.split("/")[-1]
    data = {"id": file_id}
    headers = {"x-requested-with": "XMLHttpRequest"}

    try:
        res = client.post(req_url, headers=headers, data=data).json()["file"]
    except:
        return "Error"  # {'error': True, 'src_url': url}

    gd_id = re.findall("gd=(.*)", res, re.DOTALL)[0]
    info_parsed["gdrive_url"] = f"https://drive.google.com/open?id={gd_id}"
    info_parsed["src_url"] = url
    return info_parsed["gdrive_url"]


###############################################
# hubdrive


def parse_info_hubdrive(res):
    info_parsed = {}
    title = re.findall(r">(.*?)</h4>", res.text)[0]
    info_chunks = re.findall(r">(.*?)</td>", res.text)
    info_parsed["title"] = title
    for i in range(0, len(info_chunks), 2):
        info_parsed[info_chunks[i]] = info_chunks[i + 1]
    return info_parsed


def hubdrive_dl(url, hcrypt):
    client = requests.Session()
    client.cookies.update({"crypt": hcrypt})

    res = client.get(url)
    info_parsed = parse_info_hubdrive(res)
    info_parsed["error"] = False

    up = urlparse(url)
    req_url = f"{up.scheme}://{up.netloc}/ajax.php?ajax=download"

    file_id = url.split("/")[-1]
    data = {"id": file_id}
    headers = {"x-requested-with": "XMLHttpRequest"}

    try:
        res = client.post(req_url, headers=headers, data=data).json()["file"]
    except:
        return "Error"  # {'error': True, 'src_url': url}

    gd_id = re.findall("gd=(.*)", res, re.DOTALL)[0]
    info_parsed["gdrive_url"] = f"https://drive.google.com/open?id={gd_id}"
    info_parsed["src_url"] = url
    return info_parsed["gdrive_url"]


#################################################
# drivefire


def parse_info_drivefire(res):
    info_parsed = {}
    title = re.findall(r">(.*?)</h4>", res.text)[0]
    info_chunks = re.findall(r">(.*?)</td>", res.text)
    info_parsed["title"] = title
    for i in range(0, len(info_chunks), 2):
        info_parsed[info_chunks[i]] = info_chunks[i + 1]
    return info_parsed


def drivefire_dl(url, dcrypt):
    client = requests.Session()
    client.cookies.update({"crypt": dcrypt})

    res = client.get(url)
    info_parsed = parse_info_drivefire(res)
    info_parsed["error"] = False

    up = urlparse(url)
    req_url = f"{up.scheme}://{up.netloc}/ajax.php?ajax=download"

    file_id = url.split("/")[-1]
    data = {"id": file_id}
    headers = {"x-requested-with": "XMLHttpRequest"}

    try:
        res = client.post(req_url, headers=headers, data=data).json()["file"]
    except:
        return "Error"  # {'error': True, 'src_url': url}

    decoded_id = res.rsplit("/", 1)[-1]
    info_parsed = f"https://drive.google.com/file/d/{decoded_id}"
    return info_parsed


##################################################
# kolop


def parse_info_kolop(res):
    info_parsed = {}
    title = re.findall(r">(.*?)</h4>", res.text)[0]
    info_chunks = re.findall(r">(.*?)</td>", res.text)
    info_parsed["title"] = title
    for i in range(0, len(info_chunks), 2):
        info_parsed[info_chunks[i]] = info_chunks[i + 1]
    return info_parsed


def kolop_dl(url, kcrypt):
    client = requests.Session()
    client.cookies.update({"crypt": kcrypt})

    res = client.get(url)
    info_parsed = parse_info_kolop(res)
    info_parsed["error"] = False

    up = urlparse(url)
    req_url = f"{up.scheme}://{up.netloc}/ajax.php?ajax=download"

    file_id = url.split("/")[-1]
    data = {"id": file_id}
    headers = {"x-requested-with": "XMLHttpRequest"}

    try:
        res = client.post(req_url, headers=headers, data=data).json()["file"]
    except:
        return "Error"  # {'error': True, 'src_url': url}

    gd_id = re.findall("gd=(.*)", res, re.DOTALL)[0]
    info_parsed["gdrive_url"] = f"https://drive.google.com/open?id={gd_id}"
    info_parsed["src_url"] = url

    return info_parsed["gdrive_url"]
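The four `parse_info_*` helpers above (katdrive, hubdrive, drivefire, kolop) are byte-for-byte identical; they could be collapsed into one shared function. A minimal sketch, exercised on a synthetic table fragment (the field names are invented for illustration):

```python
import re


def parse_info(html: str) -> dict:
    # same extraction as the parse_info_* helpers: <h4> title, then
    # consecutive <td> cells paired up as key/value metadata
    info = {}
    title = re.findall(r">(.*?)</h4>", html)
    if title:
        info["title"] = title[0]
    chunks = re.findall(r">(.*?)</td>", html)
    for i in range(0, len(chunks) - 1, 2):
        info[chunks[i]] = chunks[i + 1]
    return info


# metadata-only fragment; on a full page the first <td> chunk can pick up
# stray markup between the title and the table, exactly as in the helpers above
table = "<td>Size</td><td>1.4 GB</td><td>Uploaded</td><td>Today</td>"
print(parse_info(table))
```

Each `*_dl` function could then call `parse_info(res.text)` instead of its own copy.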


##################################################
# mediafire


def mediafire(url):

    res = requests.get(url, stream=True)
    contents = res.text

    for line in contents.splitlines():
        m = re.search(r'href="((http|https)://download[^"]+)', line)
        if m:
            return m.groups()[0]
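`mediafire()` relies on the download anchor pointing at a `download*` host. A quick check of the regex on a hypothetical page line (the hostname and path are made up):

```python
import re

# hypothetical anchor line from a MediaFire page; the "download" host prefix
# is what the pattern keys on, [^"]+ stops at the closing quote
line = '<a href="https://download1234.mediafire.com/abcd/file.zip" class="input">'
m = re.search(r'href="((http|https)://download[^"]+)', line)
direct = m.groups()[0]
print(direct)
```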


####################################################
# zippyshare


def zippyshare(url):
    resp = requests.get(url).text
    # the download href is built by inline JS: "<path>" + (a % b + c % d) + "<name>"
    script = resp.split("document.getElementById('dlbutton').href = ")[1].split(";")[0]
    parts = script.split("(")[1].split(")")[0].split(" ")
    val = str(int(parts[0]) % int(parts[2]) + int(parts[4]) % int(parts[6]))
    pieces = script.split('"')
    burl = url.split("zippyshare.com")[0]
    return burl + "zippyshare.com" + pieces[1] + val + pieces[-2]
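The arithmetic the function evaluates comes from Zippyshare's obfuscated `dlbutton` script. A synthetic expression (the numbers and paths are invented) shows how the pieces recombine:

```python
# synthetic dlbutton expression in the exact shape zippyshare() parses
surl = '"/d/AbC123/" + (787 % 900 + 52 % 27) + "/file.mp4"'

# split out the "(a % b + c % d)" arithmetic and evaluate it
parts = surl.split("(")[1].split(")")[0].split(" ")
val = str(int(parts[0]) % int(parts[2]) + int(parts[4]) % int(parts[6]))

# re-join the quoted path fragments around the computed value
pieces = surl.split('"')
furl = "https://www45.zippyshare.com" + pieces[1] + val + pieces[-2]
print(furl)
```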


####################################################
# filercrypt


def getlinks(dlc):
    headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
        "Accept": "application/json, text/javascript, */*",
        "Accept-Language": "en-US,en;q=0.5",
        "X-Requested-With": "XMLHttpRequest",
        "Origin": "http://dcrypt.it",
        "Connection": "keep-alive",
        "Referer": "http://dcrypt.it/",
    }

    data = {
        "content": dlc,
    }

    response = requests.post(
        "http://dcrypt.it/decrypt/paste", headers=headers, data=data
    ).json()["success"]["links"]
    return "\n\n".join(response)


def filecrypt(url):

    client = cloudscraper.create_scraper(allow_brotli=False)
    headers = {
        "authority": "filecrypt.co",
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
        "accept-language": "en-US,en;q=0.9",
        "cache-control": "max-age=0",
        "content-type": "application/x-www-form-urlencoded",
        "origin": "https://filecrypt.co",
        "referer": url,
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36",
    }

    resp = client.get(url, headers=headers)
    soup = BeautifulSoup(resp.content, "html.parser")

    buttons = soup.find_all("button")
    for ele in buttons:
        line = ele.get("onclick")
        if line != None and "DownloadDLC" in line:
            dlclink = (
                "https://filecrypt.co/DLC/"
                + line.split("DownloadDLC('")[1].split("'")[0]
                + ".html"
            )
            break

    resp = client.get(dlclink, headers=headers)
    return getlinks(resp.text)


#####################################################
# dropbox


def dropbox(url):
    return (
        url.replace("www.", "")
        .replace("dropbox.com", "dl.dropboxusercontent.com")
        .replace("?dl=0", "")
    )
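`dropbox()` is a pure string rewrite, so it can be sanity-checked offline with a made-up share link:

```python
def dropbox_direct(url: str) -> str:
    # same chain of replacements as dropbox() above:
    # drop "www.", swap in the direct-download host, strip the ?dl=0 suffix
    return (
        url.replace("www.", "")
        .replace("dropbox.com", "dl.dropboxusercontent.com")
        .replace("?dl=0", "")
    )


print(dropbox_direct("https://www.dropbox.com/s/abc123/file.txt?dl=0"))
```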


######################################################
# shareus


def shareus(url):
    headers = {
        "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36",
    }
    DOMAIN = "https://us-central1-my-apps-server.cloudfunctions.net"
    sess = requests.session()

    code = url.split("/")[-1]
    params = {
        "shortid": code,
        "initial": "true",
        "referrer": "https://shareus.io/",
    }
    response = sess.get(f"{DOMAIN}/v", params=params, headers=headers)  # use the session so cookies persist

    for i in range(1, 4):
        json_data = {
            "current_page": i,
        }
        response = sess.post(f"{DOMAIN}/v", headers=headers, json=json_data)

    response = sess.get(f"{DOMAIN}/get_link", headers=headers).json()
    return response["link_info"]["destination"]


#######################################################
# shortingly


def shortingly(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://shortingly.in"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://tech.gyanitheme.com/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(5)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#######################################################
# Gyanilinks - gtlinks.me


def gyanilinks(url):
    DOMAIN = "https://go.hipsonyc.com"  # no trailing slash; it is joined with "/{code}" below
    client = cloudscraper.create_scraper(allow_brotli=False)
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    resp = client.get(final_url)
    soup = BeautifulSoup(resp.content, "html.parser")
    try:
        inputs = soup.find(id="go-link").find_all(name="input")
    except:
        return "Incorrect Link"
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(5)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#######################################################
# Flashlink


def flashl(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://files.earnash.com"  # no trailing slash; it is joined with "/{code}" below
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://flash1.cordtpoint.co.in"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(15)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#######################################################
# short2url


def short2url(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://techyuth.xyz/blog"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://blog.coin2pay.xyz/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(10)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("
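The ez4, shortingly, gyanilinks, flashl and short2url bypasses all follow one pattern: normalize the short code, GET the page with a site-specific referer, harvest the form inputs, wait, then POST them to `<domain>/links/go` as XHR. The pure URL-handling part of that pattern can be factored out; the helper names below are illustrative, not part of the original code:

```python
def short_code(url: str) -> str:
    # trailing-slash-safe extraction of the shortlink code
    return url.rstrip("/").split("/")[-1]


def go_endpoint(domain: str) -> str:
    # every variant above posts the harvested form data to <domain>/links/go
    return f'{domain.rstrip("/")}/links/go'


print(short_code("https://ez4short.com/abc123/"))
print(go_endpoint("https://ez4short.com/"))
```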


#######################################################
# anonfiles


def anonfile(url):

    headers = {"Accept": "*/*"}

    response = requests.get(url, headers=headers).text.split("\n")
    for ele in response:
        if (
            "https://cdn" in ele
            and "anonfiles.com" in ele
            and url.split("/")[-2] in ele
        ):
            break

    return ele.split('href="')[1].split('"')[0]


##########################################################
# pixl


def pixl(url):
    count = 1
    dl_msg = ""
    currentpage = 1
    settotalimgs = True
    totalimages = ""
    client = cloudscraper.create_scraper(allow_brotli=False)
    resp = client.get(url)
    if resp.status_code == 404:
        return "File not found/The link you entered is wrong!"
    soup = BeautifulSoup(resp.content, "html.parser")
    if "album" in url and settotalimgs:
        totalimages = soup.find("span", {"data-text": "image-count"}).text
        settotalimgs = False
    thmbnailanch = soup.findAll(attrs={"class": "--media"})
    links = soup.findAll(attrs={"data-pagination": "next"})
    try:
        url = links[0].attrs["href"]
    except BaseException:
        url = None
    for ref in thmbnailanch:
        imgdata = client.get(ref.attrs["href"])
        if not imgdata.status_code == 200:
            time.sleep(5)
            continue
        imghtml = BeautifulSoup(imgdata.text, "html.parser")
        downloadanch = imghtml.find(attrs={"class": "btn-download"})
        currentimg = downloadanch.attrs["href"]
        currentimg = currentimg.replace(" ", "%20")
        dl_msg += f"{count}. {currentimg}\n"
        count += 1
    currentpage += 1
    fld_msg = f"The Pixl.is link you provided is a folder; I found {count - 1} files in it.\n"
    fld_link = f"\nFolder Link: {url}\n"
    final_msg = fld_link + "\n" + fld_msg + "\n" + dl_msg
    return final_msg


############################################################
# sirigan  ( unused )


def siriganbypass(url):
    client = requests.Session()
    res = client.get(url)
    url = res.url.split("=", maxsplit=1)[-1]

    while True:
        try:
            url = base64.b64decode(url).decode("utf-8")
        except:
            break

    return url.split("url=")[-1]


############################################################
# shorte


def sh_st_bypass(url):
    client = requests.Session()
    client.headers.update({"referer": url})
    p = urlparse(url)

    res = client.get(url)

    sess_id = re.findall(r"""sessionId(?:\s+)?:(?:\s+)?['|"](.*?)['|"]""", res.text)[0]

    final_url = f"{p.scheme}://{p.netloc}/shortest-url/end-adsession"
    params = {"adSessionId": sess_id, "callback": "_"}
    time.sleep(5)  # !important

    res = client.get(final_url, params=params)
    dest_url = re.findall('"(.*?)"', res.text)[1].replace("\\/", "/")

    return dest_url


#############################################################
# gofile


def gofile_dl(url, password=""):
    api_uri = "https://api.gofile.io"
    client = requests.Session()
    res = client.get(api_uri + "/createAccount").json()

    data = {
        "contentId": url.split("/")[-1],
        "token": res["data"]["token"],
        "websiteToken": "12345",
        "cache": "true",
        "password": hashlib.sha256(password.encode("utf-8")).hexdigest(),
    }
    res = client.get(api_uri + "/getContent", params=data).json()

    # return the direct link of the first file in the folder
    content = list(res["data"]["contents"].values())
    return content[0]["link"]
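`gofile_dl()` always sends a SHA-256 digest of the password, even when none is supplied; for an empty password that is the well-known empty-string digest:

```python
import hashlib

# gofile_dl() hashes the password parameter even when it is "";
# the resulting digest is the standard SHA-256 of the empty string
digest = hashlib.sha256("".encode("utf-8")).hexdigest()
print(digest)
```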


################################################################
# sharer pw


def parse_info_sharer(res):
    f = re.findall(r">(.*?)</td>", res.text)
    info_parsed = {}
    for i in range(0, len(f), 3):
        info_parsed[f[i].lower().replace(" ", "_")] = f[i + 2]
    return info_parsed


def sharer_pw(url, Laravel_Session, XSRF_TOKEN, forced_login=False):
    client = cloudscraper.create_scraper(allow_brotli=False)
    client.cookies.update(
        {"XSRF-TOKEN": XSRF_TOKEN, "laravel_session": Laravel_Session}
    )
    res = client.get(url)
    token = re.findall(r"_token\s=\s'(.*?)'", res.text, re.DOTALL)[0]
    ddl_btn = etree.HTML(res.content).xpath("//button[@id='btndirect']")
    info_parsed = parse_info_sharer(res)
    info_parsed["error"] = True
    info_parsed["src_url"] = url
    info_parsed["link_type"] = "login"
    info_parsed["forced_login"] = forced_login
    headers = {
        "content-type": "application/x-www-form-urlencoded; charset=UTF-8",
        "x-requested-with": "XMLHttpRequest",
    }
    data = {"_token": token}
    if len(ddl_btn):
        info_parsed["link_type"] = "direct"
    if not forced_login:
        data["nl"] = 1
    try:
        res = client.post(url + "/dl", headers=headers, data=data).json()
    except:
        return info_parsed
    if "url" in res and res["url"]:
        info_parsed["error"] = False
        info_parsed["gdrive_link"] = res["url"]
    if len(ddl_btn) and not forced_login and "gdrive_link" not in info_parsed:
        # retry download via login
        return sharer_pw(url, Laravel_Session, XSRF_TOKEN, forced_login=True)
    return info_parsed.get("gdrive_link", "Something went wrong :(")


#################################################################
# gdtot


def gdtot(url):
    cget = create_scraper().request
    try:
        res = cget("GET", f'https://gdbot.xyz/file/{url.split("/")[-1]}')
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    token_url = etree.HTML(res.content).xpath(
        "//a[contains(@class,'inline-flex items-center justify-center')]/@href"
    )
    if not token_url:
        try:
            url = cget("GET", url).url
            p_url = urlparse(url)
            res = cget(
                "GET", f"{p_url.scheme}://{p_url.hostname}/ddl/{url.split('/')[-1]}"
            )
        except Exception as e:
            return f"ERROR: {e.__class__.__name__}"
        if (
            drive_link := re.findall(r"myDl\('(.*?)'\)", res.text)
        ) and "drive.google.com" in drive_link[0]:
            return drive_link[0]
        else:
            return "ERROR: Drive Link not found, Try in your browser"
    token_url = token_url[0]
    try:
        token_page = cget("GET", token_url)
    except Exception as e:
        return f"ERROR: {e.__class__.__name__} with {token_url}"
    path = re.findall(r'\("(.*?)"\)', token_page.text)
    if not path:
        return "ERROR: Cannot bypass this"
    path = path[0]
    raw = urlparse(token_url)
    final_url = f"{raw.scheme}://{raw.hostname}{path}"
    return ddl.sharer_scraper(final_url)


##################################################################
# adfly


def decrypt_url(code):
    # stage 1: de-interleave, even-index chars in order, odd-index chars reversed
    a, b = "", ""
    for i in range(len(code)):
        if i % 2 == 0:
            a += code[i]
        else:
            b = code[i] + b
    key = list(a + b)
    i = 0
    while i < len(key):
        if key[i].isdigit():
            for j in range(i + 1, len(key)):
                if key[j].isdigit():
                    u = int(key[i]) ^ int(key[j])
                    if u < 10:
                        key[i] = str(u)
                    i = j
                    break
        i += 1
    key = "".join(key)
    # the digit pass yields a base64 string whose payload is wrapped
    # in 16 junk bytes on each side
    decrypted = base64.b64decode(key)[16:-16]
    return decrypted.decode("utf-8")
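The first and last stages of `decrypt_url()` can be demonstrated with synthetic data (the payload URL and junk bytes are invented; a real `ysmm` value would also exercise the digit-XOR pass in between):

```python
import base64

# stage 1: even-index chars in order, odd-index chars reversed
code = "abcdef"
a = code[0::2]          # "ace"
b = code[1::2][::-1]    # "fdb", same as prepending each odd-index char
key = a + b

# final stage: base64-decode and strip 16 junk bytes from each side
padded = base64.b64encode(b"X" * 16 + b"https://example.com/" + b"Y" * 16)
url = base64.b64decode(padded)[16:-16].decode("utf-8")
print(key, url)
```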


def adfly(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    res = client.get(url).text
    out = {"error": False, "src_url": url}
    try:
        ysmm = re.findall(r"ysmm\s+=\s+['|\"](.*?)['|\"]", res)[0]
    except:
        out["error"] = True
        return out
    url = decrypt_url(ysmm)
    if re.search(r"go\.php\?u\=", url):
        url = base64.b64decode(re.sub(r"(.*?)u=", "", url)).decode()
    elif "&dest=" in url:
        url = unquote(re.sub(r"(.*?)dest=", "", url))
    out["bypassed_url"] = url
    return out


##############################################################################################
# gplinks


def gplinks(url: str):
    client = cloudscraper.create_scraper(allow_brotli=False)
    token = url.split("/")[-1]
    domain = "https://gplinks.co/"
    referer = "https://mynewsmedia.co/"
    vid = client.get(url, allow_redirects=False).headers["Location"].split("=")[-1]
    url = f"{url}/?{vid}"
    response = client.get(url, allow_redirects=False)
    soup = BeautifulSoup(response.content, "html.parser")
    inputs = soup.find(id="go-link").find_all(name="input")
    data = {input.get("name"): input.get("value") for input in inputs}
    time.sleep(10)
    headers = {"x-requested-with": "XMLHttpRequest"}
    try:
        return client.post(domain + "links/go", data=data, headers=headers).json()[
            "url"
        ]
    except:
        return "Something went wrong :("


######################################################################################################
# droplink


def droplink(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    res = client.get(url, timeout=5)

    ref = re.findall("action[ ]{0,}=[ ]{0,}['|\"](.*?)['|\"]", res.text)[0]
    h = {"referer": ref}
    res = client.get(url, headers=h)

    bs4 = BeautifulSoup(res.content, "html.parser")
    inputs = bs4.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {
        "content-type": "application/x-www-form-urlencoded",
        "x-requested-with": "XMLHttpRequest",
    }

    p = urlparse(url)
    final_url = f"{p.scheme}://{p.netloc}/links/go"
    time.sleep(3.1)
    res = client.post(final_url, data=data, headers=h).json()

    if res["status"] == "success":
        return res["url"]
    return "Something went wrong :("


#####################################################################################################################
# link vertise


def linkvertise(url):
    params = {
        "url": url,
    }
    response = requests.get("https://bypass.pm/bypass2", params=params).json()
    if response["success"]:
        return response["destination"]
    else:
        return response["msg"]


###################################################################################################################
# others


def others(url):
    return "API Currently not Available"


#################################################################################################################
# ouo


# RECAPTCHA v3 BYPASS
# code from https://github.com/xcscxr/Recaptcha-v3-bypass
def RecaptchaV3():
    ANCHOR_URL = "https://www.google.com/recaptcha/api2/anchor?ar=1&k=6Lcr1ncUAAAAAH3cghg6cOTPGARa8adOf-y9zv2x&co=aHR0cHM6Ly9vdW8ucHJlc3M6NDQz&hl=en&v=pCoGBhjs9s8EhFOHJFe8cqis&size=invisible&cb=ahgyd1gkfkhe"
    url_base = "https://www.google.com/recaptcha/"
    post_data = "v={}&reason=q&c={}&k={}&co={}"
    client = requests.Session()
    client.headers.update({"content-type": "application/x-www-form-urlencoded"})
    matches = re.findall(r"([api2|enterprise]+)/anchor\?(.*)", ANCHOR_URL)[0]
    url_base += matches[0] + "/"
    params = matches[1]
    res = client.get(url_base + "anchor", params=params)
    token = re.findall(r'"recaptcha-token" value="(.*?)"', res.text)[0]
    params = dict(pair.split("=") for pair in params.split("&"))
    post_data = post_data.format(params["v"], token, params["k"], params["co"])
    res = client.post(url_base + "reload", params=f'k={params["k"]}', data=post_data)
    answer = re.findall(r'"rresp","(.*?)"', res.text)[0]
    return answer


# code from https://github.com/xcscxr/ouo-bypass/
def ouo(url):
    tempurl = url.replace("ouo.press", "ouo.io")
    p = urlparse(tempurl)
    id = tempurl.split("/")[-1]
    client = Nreq.Session(
        headers={
            "authority": "ouo.io",
            "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
            "accept-language": "en-GB,en-US;q=0.9,en;q=0.8",
            "cache-control": "max-age=0",
            "referer": "http://www.google.com/ig/adde?moduleurl=",
            "upgrade-insecure-requests": "1",
        }
    )
    res = client.get(tempurl, impersonate="chrome110")
    next_url = f"{p.scheme}://{p.hostname}/go/{id}"

    for _ in range(2):
        if res.headers.get("Location"):
            break
        bs4 = BeautifulSoup(res.content, "lxml")
        inputs = bs4.form.findAll("input", {"name": re.compile(r"token$")})
        data = {input.get("name"): input.get("value") for input in inputs}
        data["x-token"] = RecaptchaV3()
        header = {"content-type": "application/x-www-form-urlencoded"}
        res = client.post(
            next_url,
            data=data,
            headers=header,
            allow_redirects=False,
            impersonate="chrome110",
        )
        next_url = f"{p.scheme}://{p.hostname}/xreallcygo/{id}"

    return res.headers.get("Location")


####################################################################################################################
# mdisk


def mdisk(url):
    header = {
        "Accept": "*/*",
        "Accept-Language": "en-US,en;q=0.5",
        "Accept-Encoding": "gzip, deflate, br",
        "Referer": "https://mdisk.me/",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36",
    }

    cid = url.split("/")[-1]

    URL = f"https://diskuploader.entertainvideo.com/v1/file/cdnurl?param={cid}"
    res = requests.get(url=URL, headers=header).json()
    return res["download"] + "\n\n" + res["source"]


##################################################################################################################
# AppDrive, DriveApp and other look-alike links; account details are needed only for login-protected links


def unified(url):

    if ddl.is_share_link(url):
        if "https://gdtot" in url:
            return ddl.gdtot(url)
        else:
            return ddl.sharer_scraper(url)

    try:
        Email = "chzeesha4@gmail.com"
        Password = "zeeshi#789"

        account = {"email": Email, "passwd": Password}
        client = cloudscraper.create_scraper(allow_brotli=False)
        client.headers.update(
            {
                "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36"
            }
        )
        data = {"email": account["email"], "password": account["passwd"]}
        client.post(f"https://{urlparse(url).netloc}/login", data=data)
        res = client.get(url)
        key = re.findall(r'"key",\s+"(.*?)"', res.text)[0]
        ddl_btn = etree.HTML(res.content).xpath("//button[@id='drc']")
        info = re.findall(r">(.*?)</li>", res.text)
        info_parsed = {}
        for item in info:
            kv = [s.strip() for s in item.split(": ", maxsplit=1)]
            info_parsed[kv[0].lower()] = kv[1]
        info_parsed["error"] = False
        info_parsed["link_type"] = "login"
        headers = {
            "Content-Type": f"multipart/form-data; boundary={'-'*4}_",
        }
        data = {"type": 1, "key": key, "action": "original"}
        if len(ddl_btn):
            info_parsed["link_type"] = "direct"
            data["action"] = "direct"
        response = {}  # guard against the loop exhausting without a successful post
        while data["type"] <= 3:
            boundary = f'{"-"*6}_'  # "--" + the four-dash boundary declared in the header
            data_string = ""
            for item in data:
                data_string += f"{boundary}\r\n"
                data_string += f'Content-Disposition: form-data; name="{item}"\r\n\r\n{data[item]}\r\n'
            data_string += f"{boundary}--\r\n"
            gen_payload = data_string
            try:
                response = client.post(url, data=gen_payload, headers=headers).json()
                break
            except BaseException:
                data["type"] += 1
        if "url" in response:
            info_parsed["gdrive_link"] = response["url"]
        elif "error" in response and response["error"]:
            info_parsed["error"] = True
            info_parsed["error_message"] = response["message"]
        else:
            info_parsed["error"] = True
            info_parsed["error_message"] = "Something went wrong :("
        if info_parsed["error"]:
            return info_parsed
        info_parsed["src_url"] = url
        # These mirror hosts wrap the Drive link in one more redirect page;
        # unwrap it with a single pass instead of eight copy-pasted blocks.
        for alias in (
            "driveapp",
            "drivehub",
            "gdflix",
            "drivesharer",
            "drivebit",
            "drivelinks",
            "driveace",
            "drivepro",
        ):
            if alias in urlparse(url).netloc:
                res = client.get(info_parsed["gdrive_link"])
                info_parsed["gdrive_link"] = etree.HTML(res.content).xpath(
                    "//a[contains(@class,'btn')]/@href"
                )[0]
                break
        return info_parsed["gdrive_link"]
    except BaseException:
        return "Unable to Extract GDrive Link"
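
# Hedged sketch: the multipart/form-data payload that unified() assembles by
# hand is easy to get subtly wrong -- the header declares boundary "----_",
# and each part in the body must be introduced by "--" + that boundary
# ("------_"), with a trailing "--" on the final line. build_multipart is an
# illustrative helper name, not part of the bot:


def build_multipart(data: dict, declared_boundary: str = "----_") -> str:
    line_boundary = "--" + declared_boundary  # "------_" in unified()
    body = ""
    for name, value in data.items():
        body += f"{line_boundary}\r\n"
        body += f'Content-Disposition: form-data; name="{name}"\r\n\r\n{value}\r\n'
    return body + f"{line_boundary}--\r\n"


# With {"type": 1, "key": ..., "action": "original"} this reproduces the
# gen_payload string built inside the retry loop of unified().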


#####################################################################################################
# urls open


def urlsopen(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://blogpost.viewboonposts.com/e998933f1f665f5e75f2d1ae0009e0063ed66f889000"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://blog.textpage.xyz/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(2)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("
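
# Hedged sketch: urlsopen() above and most of the shortener bypassers below
# repeat the same four steps -- GET the landing page with a spoofed referer,
# harvest the hidden <input> fields, sit out the countdown, then POST them to
# /links/go as an XMLHttpRequest. Under the assumption that the target runs
# that common backend, the flow can be factored into one helper;
# bypass_links_go and extract_code are illustrative names, not existing code.
import time


def extract_code(url: str) -> str:
    # Last path segment, ignoring any trailing slash.
    return url.rstrip("/").split("/")[-1]


def bypass_links_go(domain: str, referer: str, url: str, wait: int = 5) -> str:
    # Third-party deps (already in requirements.txt) imported lazily so the
    # pure helper above stays importable without them.
    import cloudscraper
    from bs4 import BeautifulSoup

    client = cloudscraper.create_scraper(allow_brotli=False)
    resp = client.get(f"{domain}/{extract_code(url)}", headers={"referer": referer})
    soup = BeautifulSoup(resp.content, "html.parser")
    data = {inp.get("name"): inp.get("value") for inp in soup.find_all("input")}
    time.sleep(wait)  # sit out the client-side countdown the site enforces
    r = client.post(
        f"{domain}/links/go",
        data=data,
        headers={"x-requested-with": "XMLHttpRequest"},
    )
    try:
        return r.json()["url"]
    except Exception:
        return "Something went wrong :("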


####################################################################################################
# URLShortX - xpshort


def xpshort(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://xpshort.com"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://www.animalwallpapers.online/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(8)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


####################################################################################################
# Vnshortner-


def vnshortener(url):
    sess = requests.session()
    DOMAIN = "https://vnshortener.com/"
    org = "https://nishankhatri.xyz"
    PhpAcc = DOMAIN + "link/new.php"
    ref = "https://nishankhatri.com.np/"
    go = DOMAIN + "links/go"

    code = url.split("/")[3]
    final_url = f"{DOMAIN}{code}/"
    headers = {"authority": DOMAIN, "origin": org}

    data = {
        "step_1": code,
    }
    response = sess.post(PhpAcc, headers=headers, data=data).json()
    id = response["inserted_data"]["id"]
    data = {
        "step_2": code,
        "id": id,
    }
    response = sess.post(PhpAcc, headers=headers, data=data).json()

    headers["referer"] = ref
    params = {"sid": str(id)}
    resp = sess.get(final_url, params=params, headers=headers)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}

    time.sleep(1)
    headers["x-requested-with"] = "XMLHttpRequest"
    try:
        r = sess.post(go, data=data, headers=headers).json()
        if r["status"] == "success":
            return r["url"]
        else:
            raise
    except:
        return "Something went wrong :("


#####################################################################################################
# onepagelink


def onepagelink(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "go.onepagelink.in"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"https://{DOMAIN}/{code}"
    ref = "https://gorating.in/"
    h = {"referer": ref}
    response = client.get(final_url, headers=h)
    soup = BeautifulSoup(response.text, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(5)
    r = client.post(f"https://{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except BaseException:
        return "Something went wrong :("


#####################################################################################################
# dulink


def dulink(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://du-link.in"
    url = url[:-1] if url[-1] == "/" else url
    ref = "https://profitshort.com/"
    h = {"referer": ref}
    resp = client.get(url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# krownlinks


def krownlinks(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://go.bloggerishyt.in"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://www.techkhulasha.com/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(8)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return str(r.json()["url"])
    except BaseException:
        return "Something went wrong :("


####################################################################################################
# adrinolink


def adrinolink(url):
    if "https://adrinolinks.in/" not in url:
        url = "https://adrinolinks.in/" + url.split("/")[-1]
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://adrinolinks.in"
    ref = "https://wikitraveltips.com/"
    h = {"referer": ref}
    resp = client.get(url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(8)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# mdiskshortners


def mdiskshortners(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://mdiskshortners.in"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://www.adzz.in/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(2)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# tinyfy


def tiny(url):
    client = requests.session()
    DOMAIN = "https://tinyfy.in"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://www.yotrickslog.tech/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# earnl


def earnl(url):
    client = requests.session()
    DOMAIN = "https://v.earnl.xyz"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://link.modmakers.xyz/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(5)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# moneykamalo


def moneykamalo(url):
    client = requests.session()
    DOMAIN = "https://go.moneykamalo.com"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://bloging.techkeshri.com/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(5)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# lolshort


def lolshort(url):
    client = requests.session()
    DOMAIN = "https://get.lolshort.tech"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://tech.animezia.com/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(5)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# easysky


def easysky(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://techy.veganab.co"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://veganab.co/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(8)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# indiurl


def indi(url):
    client = requests.session()
    DOMAIN = "https://file.earnash.com"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://indiurl.cordtpoint.co.in/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(10)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# linkbnao


def linkbnao(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://vip.linkbnao.com"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://ffworld.xyz/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(2)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# omegalinks


def mdiskpro(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://mdisk.pro"
    ref = "https://m.meclipstudy.in/"
    h = {"referer": ref}
    resp = client.get(url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(8)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# tnshort


def tnshort(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://news.sagenews.in"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://movies.djnonstopmusic.in/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(8)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return str(r.json()["url"])
    except BaseException:
        return "Something went wrong :("


#####################################################################################################
# tnvalue


def tnvalue(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://gadgets.webhostingtips.club"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://ladkibahin.com/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(12)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return str(r.json()["url"])
    except BaseException:
        return "Something went wrong :("


#####################################################################################################
# indianshortner


def indshort(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://indianshortner.com"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://moddingzone.in/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(5)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# mdisklink


def mdisklink(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://mdisklink.link"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://m.proappapk.com/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(2)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except:
        return "Something went wrong :("


#####################################################################################################
# rslinks


def rslinks(url):
    download = requests.get(url, stream=True, allow_redirects=False)
    try:
        code = download.headers["location"].split("ms9")[-1]
        return f"http://techyproio.blogspot.com/p/short.html?{code}=="
    except KeyError:
        return "Something went wrong :("


#########################
# vipurl
def vipurl(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://count.vipurl.in"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://ezeviral.com/"
    h = {"referer": ref}
    response = client.get(final_url, headers=h)
    soup = BeautifulSoup(response.text, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(9)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return r.json()["url"]
    except BaseException:
        return "Something went wrong :("


#####################################################################################################
# mdisky.link
def mdisky(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://go.bloggingaro.com"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://www.bloggingaro.com/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(6)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return str(r.json()["url"])
    except BaseException:
        return "Something went wrong :("


#####################################################################################################
# bitly + tinyurl


def bitly_tinyurl(url: str) -> str:
    try:
        # requests follows the redirect chain; .url is the final destination
        return requests.get(url).url
    except BaseException:
        return "Something went wrong :("


#####################################################################################################
# thinfi


def thinfi(url: str) -> str:
    try:
        response = requests.get(url)
        return BeautifulSoup(response.content, "html.parser").p.a.get("href")
    except BaseException:
        return "Something went wrong :("


#####################################################################################################
# kingurl


def kingurl1(url):
    client = cloudscraper.create_scraper(allow_brotli=False)
    DOMAIN = "https://go.kingurl.in"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}/{code}"
    ref = "https://earnbox.bankshiksha.in/"
    h = {"referer": ref}
    resp = client.get(final_url, headers=h)
    soup = BeautifulSoup(resp.content, "html.parser")
    inputs = soup.find_all("input")
    data = {input.get("name"): input.get("value") for input in inputs}
    h = {"x-requested-with": "XMLHttpRequest"}
    time.sleep(7)
    r = client.post(f"{DOMAIN}/links/go", data=data, headers=h)
    try:
        return str(r.json()["url"])
    except BaseException:
        return "Something went wrong :("


def kingurl(url):
    DOMAIN = "https://earn.bankshiksha.in/click.php?LinkShortUrlID"
    url = url[:-1] if url[-1] == "/" else url
    code = url.split("/")[-1]
    final_url = f"{DOMAIN}={code}"
    return final_url


#####################################################################################################
# helpers


# check if present in list
def ispresent(inlist, url):
    for ele in inlist:
        if ele in url:
            return True
    return False


# shortners
def shortners(url):
    # Shortner Full Page API
    if val := shortner_fpage_api(url):
        return val

    # Shortner Quick Link API
    elif val := shortner_quick_api(url):
        return val

    # igg games
    elif "https://igg-games.com/" in url:
        print("entered igg: ", url)
        return igggames(url)

    # ola movies
    elif "https://olamovies." in url:
        print("entered ola movies: ", url)
        return olamovies(url)

    # katdrive
    elif "https://katdrive." in url:
        if KATCRYPT == "":
            return "🚫 __You can't use this because__ **KATDRIVE_CRYPT** __ENV is not set__"

        print("entered katdrive: ", url)
        return katdrive_dl(url, KATCRYPT)

    # kolop
    elif "https://kolop." in url:
        if KCRYPT == "":
            return (
                "🚫 __You can't use this because__ **KOLOP_CRYPT** __ENV is not set__"
            )

        print("entered kolop: ", url)
        return kolop_dl(url, KCRYPT)

    # hubdrive
    elif "https://hubdrive." in url:
        if HCRYPT == "":
            return "🚫 __You can't use this because__ **HUBDRIVE_CRYPT** __ENV is not set__"

        print("entered hubdrive: ", url)
        return hubdrive_dl(url, HCRYPT)

    # drivefire
    elif "https://drivefire." in url:
        if DCRYPT == "":
            return "🚫 __You can't use this because__ **DRIVEFIRE_CRYPT** __ENV is not set__"

        print("entered drivefire: ", url)
        return drivefire_dl(url, DCRYPT)

    # filecrypt
    elif ("https://filecrypt.co/") in url or ("https://filecrypt.cc/" in url):
        print("entered filecrypt: ", url)
        return filecrypt(url)

    # shareus
    elif "https://shareus." in url or "https://shrs.link/" in url:
        print("entered shareus: ", url)
        return shareus(url)

    # shortingly
    elif "https://shortingly.in/" in url:
        print("entered shortingly: ", url)
        return shortingly(url)

    # vnshortner
    elif "https://vnshortener.com/" in url:
        print("entered vnshortener: ", url)
        return vnshortener(url)

    # onepagelink
    elif "https://onepagelink.in/" in url:
        print("entered onepagelink: ", url)
        return onepagelink(url)

    # gyanilinks
    elif "https://gyanilinks.com/" in url or "https://gtlinks.me/" in url:
        print("entered gyanilinks: ", url)
        return gyanilinks(url)

    # flashlink
    elif "https://go.flashlink.in" in url:
        print("entered flashlink: ", url)
        return flashl(url)

    # short2url
    elif "https://short2url.in/" in url:
        print("entered short2url: ", url)
        return short2url(url)

    # shorte
    elif "https://shorte.st/" in url:
        print("entered shorte: ", url)
        return sh_st_bypass(url)

    # psa
    elif "https://psa.wf/" in url:
        print("entered psa: ", url)
        return psa_bypasser(url)

    # sharer pw
    elif "https://sharer.pw/" in url:
        if XSRF_TOKEN == "" or Laravel_Session == "":
            return "🚫 __You can't use this because__ **XSRF_TOKEN** __and__ **Laravel_Session** __ENV is not set__"

        print("entered sharer: ", url)
        return sharer_pw(url, Laravel_Session, XSRF_TOKEN)

    # gdtot url
    elif "gdtot.cfd" in url:
        print("entered gdtot: ", url)
        return gdtot(url)

    # adfly
    elif "https://adf.ly/" in url:
        print("entered adfly: ", url)
        out = adfly(url)
        return out["bypassed_url"]

    # gplinks
    elif "https://gplinks.co/" in url:
        print("entered gplink: ", url)
        return gplinks(url)

    # droplink
    elif "https://droplink.co/" in url:
        print("entered droplink: ", url)
        return droplink(url)

    # linkvertise
    elif "https://linkvertise.com/" in url:
        print("entered linkvertise: ", url)
        return linkvertise(url)

    # rocklinks
    elif "https://rocklinks.net/" in url:
        print("entered rocklinks: ", url)
        return rocklinks(url)

    # ouo
    elif "https://ouo.press/" in url or "https://ouo.io/" in url:
        print("entered ouo: ", url)
        return ouo(url)

    # try2link
    elif "https://try2link.com/" in url:
        print("entered try2links: ", url)
        return try2link_bypass(url)

    # urlsopen
    elif "https://urlsopen." in url:
        print("entered urlsopen: ", url)
        return urlsopen(url)

    # xpshort
    elif (
        "https://xpshort.com/" in url
        or "https://push.bdnewsx.com/" in url
        or "https://techymozo.com/" in url
    ):
        print("entered xpshort: ", url)
        return xpshort(url)

    # dulink
    elif "https://du-link.in/" in url:
        print("entered dulink: ", url)
        return dulink(url)

    # ez4short
    elif "https://ez4short.com/" in url:
        print("entered ez4short: ", url)
        return ez4(url)

    # krownlinks
    elif "https://krownlinks.me/" in url:
        print("entered krownlinks: ", url)
        return krownlinks(url)

    # adrinolink
    elif "https://adrinolinks." in url:
        print("entered adrinolink: ", url)
        return adrinolink(url)

    # tnlink
    elif "https://link.tnlink.in/" in url:
        print("entered tnlink: ", url)
        return tnlink(url)

    # mdiskshortners
    elif "https://mdiskshortners.in/" in url:
        print("entered mdiskshortners: ", url)
        return mdiskshortners(url)

    # tinyfy
    elif "tinyfy.in" in url:
        print("entered tinyfy: ", url)
        return tiny(url)

    # earnl
    elif "go.earnl.xyz" in url:
        print("entered earnl: ", url)
        return earnl(url)

    # moneykamalo
    elif "earn.moneykamalo.com" in url:
        print("entered moneykamalo: ", url)
        return moneykamalo(url)

    # lolshort
    elif "http://go.lolshort.tech/" in url or "https://go.lolshort.tech/" in url:
        print("entered lolshort: ", url)
        return lolshort(url)

    # easysky
    elif "m.easysky.in" in url:
        print("entered easysky: ", url)
        return easysky(url)

    # indiurl
    elif "go.indiurl.in.net" in url:
        print("entered indiurl: ", url)
        return indi(url)

    # linkbnao
    elif "linkbnao.com" in url:
        print("entered linkbnao: ", url)
        return linkbnao(url)

    # omegalinks
    elif "mdisk.pro" in url:
        print("entered mdiskpro: ", url)
        return mdiskpro(url)

    # tnshort
    elif (
        "https://link.tnshort.net/" in url
        or "https://tnseries.com/" in url
        or "https://link.tnseries.com/" in url
    ):
        print("entered tnshort:", url)
        return tnshort(url)

    # tnvalue
    elif (
        "https://link.tnvalue.in/" in url
        or "https://short.tnvalue.in/" in url
        or "https://page.finclub.in/" in url
    ):
        print("entered tnvalue:", url)
        return tnvalue(url)

    # indianshortner
    elif "indianshortner.in" in url:
        print("entered indianshortner: ", url)
        return indshort(url)

    # mdisklink
    elif "mdisklink.link" in url:
        print("entered mdisklink: ", url)
        return mdisklink(url)

    # rslinks
    elif "rslinks.net" in url:
        print("entered rslinks: ", url)
        return rslinks(url)

    # bitly + tinyurl
    elif "bit.ly" in url or "tinyurl.com" in url:
        print("entered bitly_tinyurl: ", url)
        return bitly_tinyurl(url)

    # pdisk
    elif "pdisk.pro" in url:
        print("entered pdisk: ", url)
        return pdisk(url)

    # thinfi
    elif "thinfi.com" in url:
        print("entered thinfi: ", url)
        return thinfi(url)

    # vipurl
    elif "vipurl.in" in url:
        print("entered vipurl:", url)
        return vipurl(url)

    # mdisky
    elif "mdisky.link" in url:
        print("entered mdisky:", url)
        return mdisky(url)

    # kingurl
    elif "https://kingurl.in/" in url:
        print("entered kingurl:", url)
        return kingurl(url)

    # htpmovies sharespark cinevood
    elif (
        "https://htpmovies." in url
        or "https://sharespark.me/" in url
        or "https://cinevood." in url
        or "https://atishmkv." in url
        or "https://teluguflix" in url
        or "https://taemovies" in url
        or "https://toonworld4all" in url
        or "https://animeremux" in url
    ):
        print("entered htpmovies sharespark cinevood atishmkv: ", url)
        return scrappers(url)

    # gdrive look alike
    elif ispresent(gdlist, url):
        print("entered gdrive look alike: ", url)
        return unified(url)

    # others
    elif ispresent(otherslist, url):
        print("entered others: ", url)
        return others(url)

    # else
    else:
        print("entered: ", url)
        return "Not in Supported Sites"


################################################################################################################################
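The chain above dispatches by plain substring matching, falling back to `ispresent` checks against `gdlist` and `otherslist` (both defined earlier in bypasser.py, outside this excerpt). A minimal sketch of that matching logic, with illustrative stand-in lists rather than the real ones:

```python
# Sketch of the substring dispatch used above; these lists are illustrative
# stand-ins, not the actual gdlist/otherslist from bypasser.py.
gdlist = ["gdtot", "filepress", "gdflix"]
otherslist = ["exey.io", "try2link"]

def ispresent(domains, url):
    # True if any known domain substring occurs anywhere in the URL
    return any(domain in url for domain in domains)

print(ispresent(gdlist, "https://new.gdtot.com/file/123"))  # True
print(ispresent(otherslist, "https://example.com/x"))       # False
```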


================================================
FILE: config.json
================================================
{
    "TOKEN": "",
    "ID": "",
    "HASH": "",
    "Laravel_Session": "",
    "XSRF_TOKEN": "",
    "GDTot_Crypt": "",
    "DCRYPT": "",
    "KCRYPT": "",
    "HCRYPT": "",
    "KATCRYPT": "",
    "UPTOBOX_TOKEN": "",
    "TERA_COOKIE": "",
    "CLOUDFLARE": "",
    "PORT": 5000,
    "DB_API": "",
    "DB_OWNER": "bipinkrish",
    "DB_NAME": "link_bypass.db"
}

================================================
FILE: db.py
================================================
import requests
import base64

class DB:
    def __init__(self, api_key, db_owner, db_name) -> None:
        self.api_key = api_key
        self.db_owner = db_owner
        self.db_name = db_name
        self.url = "https://api.dbhub.io/v1/tables"
        self.data = {
            "apikey": self.api_key,
            "dbowner": self.db_owner,
            "dbname": self.db_name
        }

        response = requests.post(self.url, data=self.data)
        if response.status_code == 200:
            if "results" not in response.json():
                raise Exception("Error, Table not found")
        else:
            raise Exception("Error in Auth")

    def insert(self, link: str, result: str):
        url = "https://api.dbhub.io/v1/execute"
        # double single-quotes so the inlined values can't break the SQL string
        safe_link = link.replace("'", "''")
        safe_result = result.replace("'", "''")
        sql_insert = f"INSERT INTO results (link, result) VALUES ('{safe_link}', '{safe_result}')"
        encoded_sql = base64.b64encode(sql_insert.encode()).decode()
        self.data["sql"] = encoded_sql
        response = requests.post(url, data=self.data)
        if response.status_code == 200:
            if response.json()["status"] != "OK":
                raise Exception("Error while inserting")
            return True
        else:
            print(response.json())
            return None
    
    def find(self, link: str):
        url = "https://api.dbhub.io/v1/query"
        safe_link = link.replace("'", "''")
        sql_select = f"SELECT result FROM results WHERE link = '{safe_link}'"
        encoded_sql = base64.b64encode(sql_select.encode()).decode()
        self.data["sql"] = encoded_sql
        response = requests.post(url, data=self.data)
        if response.status_code == 200:
            try:
                return response.json()[0][0]["Value"]
            except (IndexError, KeyError, TypeError):
                return None
        else:
            print(response.json())
            return None
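For reference, the only transformation the class applies before POSTing is base64-encoding the SQL text, which is what DBHub.io's `/v1/query` and `/v1/execute` endpoints expect in the `sql` field. A standalone illustration of that encoding step (no network access):

```python
import base64

# Same encoding DB.insert/DB.find perform before POSTing to api.dbhub.io
link = "https://example.com/a"
sql = f"SELECT result FROM results WHERE link = '{link}'"
encoded = base64.b64encode(sql.encode()).decode()

# The API decodes it back to the original statement
assert base64.b64decode(encoded).decode() == sql
```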


================================================
FILE: ddl.py
================================================
from base64 import standard_b64encode
from json import loads
from math import floor, pow
from re import compile, findall, match, search, sub
from time import sleep
from urllib.parse import quote, unquote, urlparse
from uuid import uuid4

from bs4 import BeautifulSoup
from cfscrape import create_scraper
from lxml import etree
from requests import get, session

from json import load
from os import environ

with open("config.json", "r") as f:
    DATA = load(f)


def getenv(var):
    return environ.get(var) or DATA.get(var, None)


UPTOBOX_TOKEN = getenv("UPTOBOX_TOKEN")
ndus = getenv("TERA_COOKIE")
if ndus is None:
    TERA_COOKIE = None
else:
    TERA_COOKIE = {"ndus": ndus}


ddllist = [
    "1drv.ms",
    "1fichier.com",
    "4funbox",
    "akmfiles",
    "anonfiles.com",
    "antfiles.com",
    "bayfiles.com",
    "disk.yandex.com",
    "fcdn.stream",
    "femax20.com",
    "fembed.com",
    "fembed.net",
    "feurl.com",
    "filechan.org",
    "filepress",
    "github.com",
    "hotfile.io",
    "hxfile.co",
    "krakenfiles.com",
    "layarkacaxxi.icu",
    "letsupload.cc",
    "letsupload.io",
    "linkbox",
    "lolabits.se",
    "mdisk.me",
    "mediafire.com",
    "megaupload.nz",
    "mirrobox",
    "mm9842.com",
    "momerybox",
    "myfile.is",
    "naniplay.com",
    "naniplay.nanime.biz",
    "naniplay.nanime.in",
    "nephobox",
    "openload.cc",
    "osdn.net",
    "pixeldrain.com",
    "racaty",
    "rapidshare.nu",
    "sbembed.com",
    "sbplay.org",
    "share-online.is",
    "shrdsk",
    "solidfiles.com",
    "streamsb.net",
    "streamtape",
    "terabox",
    "teraboxapp",
    "upload.ee",
    "uptobox.com",
    "upvid.cc",
    "vshare.is",
    "watchsb.com",
    "we.tl",
    "wetransfer.com",
    "yadi.sk",
    "zippyshare.com",
]


def is_share_link(url):
    return bool(
        match(
            r"https?:\/\/.+\.gdtot\.\S+|https?:\/\/(filepress|filebee|appdrive|gdflix|driveseed)\.\S+",
            url,
        )
    )
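Worth noting: `is_share_link` only recognizes `gdtot` subdomains and the filepress/filebee/appdrive/gdflix/driveseed families. A quick standalone check of the same pattern:

```python
from re import match

# Same pattern as is_share_link in ddl.py
PATTERN = r"https?:\/\/.+\.gdtot\.\S+|https?:\/\/(filepress|filebee|appdrive|gdflix|driveseed)\.\S+"

def is_share_link(url):
    return bool(match(PATTERN, url))

print(is_share_link("https://new.gdtot.com/file/123"))    # True
print(is_share_link("https://filepress.store/file/abc"))  # True
print(is_share_link("https://example.com/file/abc"))      # False
```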


def get_readable_time(seconds):
    result = ""
    (days, remainder) = divmod(seconds, 86400)
    days = int(days)
    if days != 0:
        result += f"{days}d"
    (hours, remainder) = divmod(remainder, 3600)
    hours = int(hours)
    if hours != 0:
        result += f"{hours}h"
    (minutes, seconds) = divmod(remainder, 60)
    minutes = int(minutes)
    if minutes != 0:
        result += f"{minutes}m"
    seconds = int(seconds)
    result += f"{seconds}s"
    return result
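Sample outputs for `get_readable_time` (the helper is duplicated verbatim below so the snippet runs standalone); zero-valued units are omitted, except the trailing seconds which are always printed:

```python
def get_readable_time(seconds):
    # Same implementation as in ddl.py
    result = ""
    (days, remainder) = divmod(seconds, 86400)
    days = int(days)
    if days != 0:
        result += f"{days}d"
    (hours, remainder) = divmod(remainder, 3600)
    hours = int(hours)
    if hours != 0:
        result += f"{hours}h"
    (minutes, seconds) = divmod(remainder, 60)
    minutes = int(minutes)
    if minutes != 0:
        result += f"{minutes}m"
    seconds = int(seconds)
    result += f"{seconds}s"
    return result

print(get_readable_time(93784))  # 1d2h3m4s
print(get_readable_time(61))     # 1m1s
print(get_readable_time(5))      # 5s
```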


fmed_list = [
    "fembed.net",
    "fembed.com",
    "femax20.com",
    "fcdn.stream",
    "feurl.com",
    "layarkacaxxi.icu",
    "naniplay.nanime.in",
    "naniplay.nanime.biz",
    "naniplay.com",
    "mm9842.com",
]

anonfilesBaseSites = [
    "anonfiles.com",
    "hotfile.io",
    "bayfiles.com",
    "megaupload.nz",
    "letsupload.cc",
    "filechan.org",
    "myfile.is",
    "vshare.is",
    "rapidshare.nu",
    "lolabits.se",
    "openload.cc",
    "share-online.is",
    "upvid.cc",
]


def direct_link_generator(link: str):
    """direct links generator"""
    domain = urlparse(link).hostname
    if "yadi.sk" in domain or "disk.yandex.com" in domain:
        return yandex_disk(link)
    elif "mediafire.com" in domain:
        return mediafire(link)
    elif "uptobox.com" in domain:
        return uptobox(link)
    elif "osdn.net" in domain:
        return osdn(link)
    elif "github.com" in domain:
        return github(link)
    elif "hxfile.co" in domain:
        return hxfile(link)
    elif "1drv.ms" in domain:
        return onedrive(link)
    elif "pixeldrain.com" in domain:
        return pixeldrain(link)
    elif "antfiles.com" in domain:
        return antfiles(link)
    elif "streamtape" in domain:
        return streamtape(link)
    elif "racaty" in domain:
        return racaty(link)
    elif "1fichier.com" in domain:
        return fichier(link)
    elif "solidfiles.com" in domain:
        return solidfiles(link)
    elif "krakenfiles.com" in domain:
        return krakenfiles(link)
    elif "upload.ee" in domain:
        return uploadee(link)
    elif "akmfiles" in domain:
        return akmfiles(link)
    elif "linkbox" in domain:
        return linkbox(link)
    elif "shrdsk" in domain:
        return shrdsk(link)
    elif "letsupload.io" in domain:
        return letsupload(link)
    elif "zippyshare.com" in domain:
        return zippyshare(link)
    elif "mdisk.me" in domain:
        return mdisk(link)
    elif any(x in domain for x in ["wetransfer.com", "we.tl"]):
        return wetransfer(link)
    elif any(x in domain for x in anonfilesBaseSites):
        return anonfilesBased(link)
    elif any(
        x in domain
        for x in [
            "terabox",
            "nephobox",
            "4funbox",
            "mirrobox",
            "momerybox",
            "teraboxapp",
        ]
    ):
        return terabox(link)
    elif any(x in domain for x in fmed_list):
        return fembed(link)
    elif any(
        x in domain
        for x in ["sbembed.com", "watchsb.com", "streamsb.net", "sbplay.org"]
    ):
        return sbembed(link)
    elif is_share_link(link):
        if "gdtot" in domain:
            return gdtot(link)
        elif "filepress" in domain:
            return filepress(link)
        else:
            return sharer_scraper(link)
    else:
        return f"No Direct link function found for\n\n{link}\n\nuse /ddllist"
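Unlike the bypasser dispatch, `direct_link_generator` matches against `urlparse(link).hostname` rather than the full URL, so a domain name appearing in the path or query string does not select a handler:

```python
from urllib.parse import urlparse

# Hostname-only matching, as direct_link_generator does above
assert "mediafire.com" in urlparse("https://www.mediafire.com/file/abc").hostname
assert "mediafire.com" not in urlparse("https://evil.example/mediafire.com").hostname
```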


def mdisk(url):
    header = {
        "Accept": "*/*",
        "Accept-Language": "en-US,en;q=0.5",
        "Accept-Encoding": "gzip, deflate, br",
        "Referer": "https://mdisk.me/",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36",
    }
    id = url.split("/")[-1]
    URL = f"https://diskuploader.entertainvideo.com/v1/file/cdnurl?param={id}"
    return get(url=URL, headers=header).json()["source"]


def yandex_disk(url: str) -> str:
    """Yandex.Disk direct link generator
    Based on https://github.com/wldhx/yadisk-direct"""
    try:
        link = findall(r"\b(https?://(yadi.sk|disk.yandex.com)\S+)", url)[0][0]
    except IndexError:
        return "No Yandex.Disk links found\n"
    api = "https://cloud-api.yandex.net/v1/disk/public/resources/download?public_key={}"
    cget = create_scraper().request
    try:
        return cget("get", api.format(link)).json()["href"]
    except KeyError:
        return "ERROR: File not found/Download limit reached"


def uptobox(url: str) -> str:
    """Uptobox direct link generator
    based on https://github.com/jovanzers/WinTenCermin and https://github.com/sinoobie/noobie-mirror
    """
    try:
        link = findall(r"\bhttps?://.*uptobox\.com\S+", url)[0]
    except IndexError:
        return "No Uptobox links found"
    link = findall(r"\bhttps?://.*\.uptobox\.com/dl\S+", url)
    if link:
        return link[0]
    cget = create_scraper().request
    try:
        file_id = findall(r"\bhttps?://.*uptobox\.com/(\w+)", url)[0]
        if UPTOBOX_TOKEN:
            file_link = f"https://uptobox.com/api/link?token={UPTOBOX_TOKEN}&file_code={file_id}"
        else:
            file_link = f"https://uptobox.com/api/link?file_code={file_id}"
        res = cget("get", file_link).json()
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    if res["statusCode"] == 0:
        return res["data"]["dlLink"]
    elif res["statusCode"] == 16:
        sleep(1)
        waiting_token = res["data"]["waitingToken"]
        sleep(res["data"]["waiting"])
    elif res["statusCode"] == 39:
        return f"ERROR: Uptobox is being limited please wait {get_readable_time(res['data']['waiting'])}"
    else:
        return f"ERROR: {res['message']}"
    try:
        res = cget("get", f"{file_link}&waitingToken={waiting_token}").json()
        return res["data"]["dlLink"]
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"


def mediafire(url: str) -> str:
    final_link = findall(r"https?:\/\/download\d+\.mediafire\.com\/\S+\/\S+\/\S+", url)
    if final_link:
        return final_link[0]
    cget = create_scraper().request
    try:
        url = cget("get", url).url
        page = cget("get", url).text
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    final_link = findall(
        r"\'(https?:\/\/download\d+\.mediafire\.com\/\S+\/\S+\/\S+)\'", page
    )
    if not final_link:
        return "ERROR: No links found in this page"
    return final_link[0]


def osdn(url: str) -> str:
    """OSDN direct link generator"""
    osdn_link = "https://osdn.net"
    try:
        link = findall(r"\bhttps?://.*osdn\.net\S+", url)[0]
    except IndexError:
        return "No OSDN links found"
    cget = create_scraper().request
    try:
        page = BeautifulSoup(cget("get", link, allow_redirects=True).content, "lxml")
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    info = page.find("a", {"class": "mirror_link"})
    if info is None:
        return "ERROR: Mirror link not found"
    link = unquote(osdn_link + info["href"])
    mirrors = page.find("form", {"id": "mirror-select-form"}).findAll("tr")
    urls = []
    for data in mirrors[1:]:
        mirror = data.find("input")["value"]
        urls.append(sub(r"m=(.*)&f", f"m={mirror}&f", link))
    return urls[0]


def github(url: str) -> str:
    """GitHub direct links generator"""
    try:
        findall(r"\bhttps?://.*github\.com.*releases\S+", url)[0]
    except IndexError:
        return "No GitHub Releases links found"
    cget = create_scraper().request
    download = cget("get", url, stream=True, allow_redirects=False)
    try:
        return download.headers["location"]
    except KeyError:
        return "ERROR: Can't extract the link"


def hxfile(url: str) -> str:
    sess = session()
    try:
        headers = {
            "content-type": "application/x-www-form-urlencoded",
            "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.152 Safari/537.36",
        }

        data = {
            "op": "download2",
            "id": urlparse(url).path.strip("/"),
            "rand": "",
            "referer": "",
            "method_free": "",
            "method_premium": "",
        }

        response = sess.post(url, headers=headers, data=data)
        soup = BeautifulSoup(response.text, "html.parser")

        if btn := soup.find(class_="btn btn-dow"):
            return btn["href"]
        if unique := soup.find(id="uniqueExpirylink"):
            return unique["href"]
        return "ERROR: Direct link not found"

    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"


def letsupload(url: str) -> str:
    cget = create_scraper().request
    try:
        res = cget("POST", url)
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    direct_link = findall(r"(https?://letsupload\.io\/.+?)\'", res.text)
    if direct_link:
        return direct_link[0]
    else:
        return "ERROR: Direct Link not found"


def anonfilesBased(url: str) -> str:
    cget = create_scraper().request
    try:
        soup = BeautifulSoup(cget("get", url).content, "lxml")
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    sa = soup.find(id="download-url")
    if sa:
        return sa["href"]
    return "ERROR: File not found!"


def fembed(link: str) -> str:
    sess = session()
    try:
        url = link.replace("/v/", "/f/")
        raw = sess.get(url)
        api = search(r"(/api/source/[^\"']+)", raw.text)
        if api is not None:
            result = {}
            raw = sess.post("https://layarkacaxxi.icu" + api.group(1)).json()
            for d in raw["data"]:
                f = d["file"]
                head = sess.head(f)
                direct = head.headers.get("Location", url)
                result[f"{d['label']}/{d['type']}"] = direct
            dl_url = result
        else:
            return "ERROR: API source not found"

        count = len(dl_url)
        lst_link = [dl_url[i] for i in dl_url]
        return lst_link[-1]
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"


def sbembed(link: str) -> str:
    sess = session()
    try:
        raw = sess.get(link)
        soup = BeautifulSoup(raw.text, "html.parser")

        result = {}
        for a in soup.findAll("a", onclick=compile(r"^download_video[^>]+")):
            data = dict(
                zip(
                    ["id", "mode", "hash"],
                    findall(r"[\"']([^\"']+)[\"']", a["onclick"]),
                )
            )
            data["op"] = "download_orig"

            raw = sess.get("https://sbembed.com/dl", params=data)
            soup = BeautifulSoup(raw.text, "html.parser")

            if direct := soup.find("a", text=compile("(?i)^direct")):
                result[a.text] = direct["href"]
        dl_url = result

        count = len(dl_url)
        lst_link = [dl_url[i] for i in dl_url]
        return lst_link[count - 1]

    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"


def onedrive(link: str) -> str:
    """Onedrive direct link generator
    Based on https://github.com/UsergeTeam/Userge"""
    link_without_query = urlparse(link)._replace(query=None).geturl()
    direct_link_encoded = str(
        standard_b64encode(bytes(link_without_query, "utf-8")), "utf-8"
    )
    direct_link1 = (
        f"https://api.onedrive.com/v1.0/shares/u!{direct_link_encoded}/root/content"
    )
    cget = create_scraper().request
    try:
        resp = cget("head", direct_link1)
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    if resp.status_code != 302:
        return "ERROR: Unauthorized link, the link may be private"
    return resp.next.url
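The OneDrive helper works by wrapping the share URL into an `u!{base64}` sharing token for the `shares` API. A standalone sketch of that encoding step; note that Microsoft's documented token format additionally converts to unpadded base64url (dropping `=` and mapping `/` to `_`, `+` to `-`), while the code above uses plain base64, which only works when the encoding happens to contain none of those characters:

```python
from base64 import standard_b64decode, standard_b64encode
from urllib.parse import urlparse

# Same encoding steps as onedrive() above: strip the query, base64-encode,
# wrap in the shares-API "u!{token}" form
link = "https://1drv.ms/u/s!AABBCC?e=xyz"
stripped = urlparse(link)._replace(query=None).geturl()
token = standard_b64encode(stripped.encode("utf-8")).decode("utf-8")
api = f"https://api.onedrive.com/v1.0/shares/u!{token}/root/content"
```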


def pixeldrain(url: str) -> str:
    """Based on https://github.com/yash-dk/TorToolkit-Telegram"""
    url = url.strip("/ ")
    file_id = url.split("/")[-1]
    if url.split("/")[-2] == "l":
        info_link = f"https://pixeldrain.com/api/list/{file_id}"
        dl_link = f"https://pixeldrain.com/api/list/{file_id}/zip?download"
    else:
        info_link = f"https://pixeldrain.com/api/file/{file_id}/info"
        dl_link = f"https://pixeldrain.com/api/file/{file_id}?download"
    cget = create_scraper().request
    try:
        resp = cget("get", info_link).json()
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    if resp["success"]:
        return dl_link
    else:
        return f"ERROR: Can't download due to {resp['message']}."
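The pixeldrain URL parsing above distinguishes list links (`/l/<id>`, downloadable as a zip) from single-file links by the second-to-last path segment:

```python
# Same id extraction pixeldrain() performs before choosing the API endpoint
url = "https://pixeldrain.com/l/abc123/".strip("/ ")
file_id = url.split("/")[-1]
is_list = url.split("/")[-2] == "l"
print(file_id, is_list)  # abc123 True
```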


def antfiles(url: str) -> str:
    sess = session()
    try:
        raw = sess.get(url)
        soup = BeautifulSoup(raw.text, "html.parser")

        if a := soup.find(class_="main-btn", href=True):
            return "{0.scheme}://{0.netloc}/{1}".format(urlparse(url), a["href"])

    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"


def streamtape(url: str) -> str:
    response = get(url)

    if videolink := findall(r"document.*((?=id\=)[^\"']+)", response.text):
        return "https://streamtape.com/get_video?" + videolink[-1]
    return "ERROR: Direct link not found"


def racaty(url: str) -> str:
    """Racaty direct link generator
    By https://github.com/junedkh"""
    cget = create_scraper().request
    try:
        url = cget("GET", url).url
        json_data = {"op": "download2", "id": url.split("/")[-1]}
        res = cget("POST", url, data=json_data)
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    html_tree = etree.HTML(res.text)
    direct_link = html_tree.xpath("//a[contains(@id,'uniqueExpirylink')]/@href")
    if direct_link:
        return direct_link[0]
    else:
        return "ERROR: Direct link not found"


def fichier(link: str) -> str:
    """1Fichier direct link generator
    Based on https://github.com/Maujar
    """
    regex = r"^(https?:\/\/)?.*1fichier\.com\/\?.+"
    gan = match(regex, link)
    if not gan:
        return "ERROR: The link you entered is wrong!"
    if "::" in link:
        pswd = link.split("::")[-1]
        url = link.split("::")[-2]
    else:
        pswd = None
        url = link
    cget = create_scraper().request
    try:
        if pswd is None:
            req = cget("post", url)
        else:
            pw = {"pass": pswd}
            req = cget("post", url, data=pw)
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    if req.status_code == 404:
        return "ERROR: File not found/The link you entered is wrong!"
    soup = BeautifulSoup(req.content, "lxml")
    if soup.find("a", {"class": "ok btn-general btn-orange"}):
        dl_url = soup.find("a", {"class": "ok btn-general btn-orange"})["href"]
        if dl_url:
            return dl_url
        return "ERROR: Unable to generate Direct Link 1fichier!"
    elif len(soup.find_all("div", {"class": "ct_warn"})) == 3:
        str_2 = soup.find_all("div", {"class": "ct_warn"})[-1]
        if "you must wait" in str(str_2).lower():
            numbers = [int(word) for word in str(str_2).split() if word.isdigit()]
            if numbers:
                return (
                    f"ERROR: 1fichier is on a limit. Please wait {numbers[0]} minute."
                )
            else:
                return "ERROR: 1fichier is on a limit. Please wait a few minutes/hour."
        elif "protect access" in str(str_2).lower():
            return "ERROR: This link requires a password!\n\n<b>This link requires a password!</b>\n- Insert sign <b>::</b> after the link and write the password after the sign.\n\n<b>Example:</b> https://1fichier.com/?smmtd8twfpm66awbqz04::love you\n\n* No spaces between the signs <b>::</b>\n* For the password, you can use a space!"
        else:
            return "ERROR: Failed to generate Direct Link from 1fichier!"
    elif len(soup.find_all("div", {"class": "ct_warn"})) == 4:
        str_1 = soup.find_all("div", {"class": "ct_warn"})[-2]
        str_3 = soup.find_all("div", {"class": "ct_warn"})[-1]
        if "you must wait" in str(str_1).lower():
            numbers = [int(word) for word in str(str_1).split() if word.isdigit()]
            if numbers:
                return (
                    f"ERROR: 1fichier is on a limit. Please wait {numbers[0]} minute."
                )
            else:
                return "ERROR: 1fichier is on a limit. Please wait a few minutes/hour."
        elif "bad password" in str(str_3).lower():
            return "ERROR: The password you entered is wrong!"
        else:
            return "ERROR: Error trying to generate Direct Link from 1fichier!"
    else:
        return "ERROR: Error trying to generate Direct Link from 1fichier!"
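The `::` password convention that `fichier` parses (and describes in its error message) is a simple split on the last separator:

```python
# "link::password" syntax, split exactly as fichier() does above
link = "https://1fichier.com/?smmtd8twfpm66awbqz04::love you"
if "::" in link:
    pswd = link.split("::")[-1]
    url = link.split("::")[-2]
else:
    pswd, url = None, link

print(url)   # https://1fichier.com/?smmtd8twfpm66awbqz04
print(pswd)  # love you
```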


def solidfiles(url: str) -> str:
    """Solidfiles direct link generator
    Based on https://github.com/Xonshiz/SolidFiles-Downloader
    By https://github.com/Jusidama18"""
    cget = create_scraper().request
    try:
        headers = {
            "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36"
        }
        pageSource = cget("get", url, headers=headers).text
        mainOptions = str(search(r"viewerOptions\'\,\ (.*?)\)\;", pageSource).group(1))
        return loads(mainOptions)["downloadUrl"]
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"


def krakenfiles(url: str) -> str:
    sess = session()
    try:
        res = sess.get(url)
        html = etree.HTML(res.text)
        if post_url := html.xpath('//form[@id="dl-form"]/@action'):
            post_url = f"https:{post_url[0]}"
        else:
            sess.close()
            return "ERROR: Unable to find post link."
        if token := html.xpath('//input[@id="dl-token"]/@value'):
            data = {"token": token[0]}
        else:
            sess.close()
            return "ERROR: Unable to find token for post."
    except Exception as e:
        sess.close()
        return f"ERROR: {e.__class__.__name__} Something went wrong"
    try:
        dl_link = sess.post(post_url, data=data).json()
        return dl_link["url"]
    except Exception as e:
        sess.close()
        return f"ERROR: {e.__class__.__name__} While send post request"


def uploadee(url: str) -> str:
    """uploadee direct link generator
    By https://github.com/iron-heart-x"""
    cget = create_scraper().request
    try:
        soup = BeautifulSoup(cget("get", url).content, "lxml")
        sa = soup.find("a", attrs={"id": "d_l"})
        return sa["href"]
    except:
        return f"ERROR: Failed to acquire download URL from upload.ee for : {url}"


def terabox(url) -> str:
    # the authenticated share/list calls below need the ndus cookie
    if TERA_COOKIE is None:
        return "ERROR: TERA_COOKIE is not set"
    sess = session()
    while True:
        try:
            res = sess.get(url)
            print("connected")
            break
        except:
            print("retrying")
    url = res.url

    key = url.split("?surl=")[-1]
    url = f"http://www.terabox.com/wap/share/filelist?surl={key}"
    sess.cookies.update(TERA_COOKIE)

    while True:
        try:
            res = sess.get(url)
            print("connected")
            break
        except Exception as e:
            print("retrying")

    key = res.url.split("?surl=")[-1]
    soup = BeautifulSoup(res.content, "lxml")
    jsToken = None

    for fs in soup.find_all("script"):
        fstring = fs.string
        if fstring and fstring.startswith("try {eval(decodeURIComponent"):
            jsToken = fstring.split("%22")[1]

    while True:
        try:
            res = sess.get(
                f"https://www.terabox.com/share/list?app_id=250528&jsToken={jsToken}&shorturl={key}&root=1"
            )
            print("connected")
            break
        except:
            print("retrying")
    result = res.json()

    if result["errno"] != 0:
        return f"ERROR: '{result['errmsg']}' Check cookies"
    result = result["list"]
    if len(result) > 1:
        return "ERROR: Can't download multiple files"
    result = result[0]

    if result["isdir"] != "0":
        return "ERROR: Can't download folder"
    return result.get("dlink", "Error")


def filepress(url):
    cget = create_scraper().request
    try:
        url = cget("GET", url).url
        raw = urlparse(url)

        gd_data = {
            "id": raw.path.split("/")[-1],
            "method": "publicDownlaod",
        }
        tg_data = {
            "id": raw.path.split("/")[-1],
            "method": "telegramDownload",
        }

        api = f"{raw.scheme}://{raw.hostname}/api/file/downlaod/"

        gd_res = cget(
            "POST",
            api,
            headers={"Referer": f"{raw.scheme}://{raw.hostname}"},
            json=gd_data,
        ).json()
        tg_res = cget(
            "POST",
            api,
            headers={"Referer": f"{raw.scheme}://{raw.hostname}"},
            json=tg_data,
        ).json()

    except Exception as e:
        return f"Google Drive: ERROR: {e.__class__.__name__} \nTelegram: ERROR: {e.__class__.__name__}"

    gd_result = (
        f'https://drive.google.com/uc?id={gd_res["data"]}'
        if "data" in gd_res
        else f'ERROR: {gd_res["statusText"]}'
    )
    tg_result = (
        f'https://tghub.xyz/?start={tg_res["data"]}'
        if "data" in tg_res
        else "No Telegram file available "
    )

    return f"Google Drive: {gd_result} \nTelegram: {tg_result}"


def gdtot(url):
    cget = create_scraper().request
    try:
        res = cget("GET", f'https://gdbot.xyz/file/{url.split("/")[-1]}')
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    token_url = etree.HTML(res.content).xpath(
        "//a[contains(@class,'inline-flex items-center justify-center')]/@href"
    )
    if not token_url:
        try:
            url = cget("GET", url).url
            p_url = urlparse(url)
            res = cget(
                "GET", f"{p_url.scheme}://{p_url.hostname}/ddl/{url.split('/')[-1]}"
            )
        except Exception as e:
            return f"ERROR: {e.__class__.__name__}"
        drive_link = findall(r"myDl\('(.*?)'\)", res.text)
        if drive_link and "drive.google.com" in drive_link[0]:
            return drive_link[0]
        else:
            return "ERROR: Drive Link not found, Try in your browser"
    token_url = token_url[0]
    try:
        token_page = cget("GET", token_url)
    except Exception as e:
        return f"ERROR: {e.__class__.__name__} with {token_url}"
    path = findall(r'\("(.*?)"\)', token_page.text)
    if not path:
        return "ERROR: Cannot bypass this"
    path = path[0]
    raw = urlparse(token_url)
    final_url = f"{raw.scheme}://{raw.hostname}{path}"
    return sharer_scraper(final_url)


def sharer_scraper(url):
    cget = create_scraper().request
    try:
        url = cget("GET", url).url
        raw = urlparse(url)
        header = {
            "useragent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/7.0.548.0 Safari/534.10"
        }
        res = cget("GET", url, headers=header)
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    key = findall(r'"key",\s+"(.*?)"', res.text)
    if not key:
        return "ERROR: Key not found!"
    key = key[0]
    if not etree.HTML(res.content).xpath("//button[@id='drc']"):
        return "ERROR: This link doesn't have a direct download button"
    boundary = uuid4()
    headers = {
        "Content-Type": f"multipart/form-data; boundary=----WebKitFormBoundary{boundary}",
        "x-token": raw.hostname,
        "useragent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/7.0.548.0 Safari/534.10",
    }

    data = (
        f'------WebKitFormBoundary{boundary}\r\nContent-Disposition: form-data; name="action"\r\n\r\ndirect\r\n'
        f'------WebKitFormBoundary{boundary}\r\nContent-Disposition: form-data; name="key"\r\n\r\n{key}\r\n'
        f'------WebKitFormBoundary{boundary}\r\nContent-Disposition: form-data; name="action_token"\r\n\r\n\r\n'
        f"------WebKitFormBoundary{boundary}--\r\n"
    )
    try:
        res = cget("POST", url, cookies=res.cookies, headers=headers, data=data).json()
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    if "url" not in res:
        return "ERROR: Drive Link not found, Try in your browser"
    if "drive.google.com" in res["url"]:
        return res["url"]
    try:
        res = cget("GET", res["url"])
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    html_tree = etree.HTML(res.content)
    drive_link = html_tree.xpath("//a[contains(@class,'btn')]/@href")
    if drive_link and "drive.google.com" in drive_link[0]:
        return drive_link[0]
    else:
        return "ERROR: Drive Link not found, Try in your browser"


def wetransfer(url):
    cget = create_scraper().request
    try:
        url = cget("GET", url).url
        json_data = {"security_hash": url.split("/")[-1], "intent": "entire_transfer"}
        res = cget(
            "POST",
            f'https://wetransfer.com/api/v4/transfers/{url.split("/")[-2]}/download',
            json=json_data,
        ).json()
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    if "direct_link" in res:
        return res["direct_link"]
    elif "message" in res:
        return f"ERROR: {res['message']}"
    elif "error" in res:
        return f"ERROR: {res['error']}"
    else:
        return "ERROR: cannot find direct link"


def akmfiles(url):
    cget = create_scraper().request
    try:
        url = cget("GET", url).url
        json_data = {"op": "download2", "id": url.split("/")[-1]}
        res = cget("POST", url, data=json_data)
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    html_tree = etree.HTML(res.content)
    direct_link = html_tree.xpath("//a[contains(@class,'btn btn-dow')]/@href")
    if direct_link:
        return direct_link[0]
    else:
        return "ERROR: Direct link not found"


def shrdsk(url):
    cget = create_scraper().request
    try:
        url = cget("GET", url).url
        res = cget(
            "GET",
            f'https://us-central1-affiliate2apk.cloudfunctions.net/get_data?shortid={url.split("/")[-1]}',
        )
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    if res.status_code != 200:
        return f"ERROR: Status Code {res.status_code}"
    res = res.json()
    if "type" in res and res["type"].lower() == "upload" and "video_url" in res:
        return res["video_url"]
    return "ERROR: cannot find direct link"


def linkbox(url):
    cget = create_scraper().request
    try:
        url = cget("GET", url).url
        res = cget(
            "GET", f'https://www.linkbox.to/api/file/detail?itemId={url.split("/")[-1]}'
        ).json()
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    if "data" not in res:
        return "ERROR: Data not found!!"
    data = res["data"]
    if not data:
        return "ERROR: Data is None!!"
    if "itemInfo" not in data:
        return "ERROR: itemInfo not found!!"
    itemInfo = data["itemInfo"]
    if "url" not in itemInfo:
        return "ERROR: url not found in itemInfo!!"
    if "name" not in itemInfo:
        return "ERROR: Name not found in itemInfo!!"
    name = quote(itemInfo["name"])
    raw = itemInfo["url"].split("/", 3)[-1]
    return f"https://wdl.nuplink.net/{raw}&filename={name}"


def zippyshare(url):
    cget = create_scraper().request
    try:
        url = cget("GET", url).url
        resp = cget("GET", url)
    except Exception as e:
        return f"ERROR: {e.__class__.__name__}"
    if not resp.ok:
        return "ERROR: Something went wrong! Try in your browser"
    if findall(r">File does not exist on this server<", resp.text):
        return "ERROR: File does not exist on server! Try in your browser"
    pages = etree.HTML(resp.text).xpath(
        "//script[contains(text(),'dlbutton')][3]/text()"
    )
    if not pages:
        return "ERROR: Page not found!!"
    js_script = pages[0]
    uri1 = None
    uri2 = None
    method = ""
    omg = findall(r"\.omg.=.(.*?);", js_script)
    var_a = findall(r"var.a.=.(\d+)", js_script)
    var_ab = findall(r"var.[ab].=.(\d+)", js_script)
    unknown = findall(r"\+\((.*?).\+", js_script)
    unknown1 = findall(r"\+.\((.*?)\).\+", js_script)
    if omg:
        omg = omg[0]
        method = f"omg = {omg}"
        mtk = (eval(omg) * (int(omg.split("%")[0]) % 3)) + 18
        uri1 = findall(r'"/(d/\S+)/"', js_script)
        uri2 = findall(r'\/d.*?\+"/(\S+)";', js_script)
    elif var_a:
        var_a = var_a[0]
        method = f"var_a = {var_a}"
        mtk = int(pow(int(var_a), 3) + 3)
        uri1 = findall(r"\.href.=.\"/(.*?)/\"", js_script)
        uri2 = findall(r"\+\"/(.*?)\"", js_script)
    elif var_ab:
        a = var_ab[0]
        b = var_ab[1]
        method = f"a = {a}, b = {b}"
        mtk = eval(f"{floor(int(a)/3) + int(a) % int(b)}")
        uri1 = findall(r"\.href.=.\"/(.*?)/\"", js_script)
        uri2 = findall(r"\)\+\"/(.*?)\"", js_script)
    elif unknown:
        method = f"unknown = {unknown[0]}"
        mtk = eval(f"{unknown[0]}+ 11")
        uri1 = findall(r"\.href.=.\"/(.*?)/\"", js_script)
        uri2 = findall(r"\)\+\"/(.*?)\"", js_script)
    elif unknown1:
        method = f"unknown1 = {unknown1[0]}"
        mtk = eval(unknown1[0])
        uri1 = findall(r"\.href.=.\"/(.*?)/\"", js_script)
        uri2 = findall(r"\+.\"/(.*?)\"", js_script)
    else:
        return "ERROR: Direct link not found"
    # both URI fragments are needed to build the final link, so fail if either is missing
    if not all([uri1, uri2]):
        return f"ERROR: uri1 or uri2 not found with method {method}"
    domain = urlparse(url).hostname
    return f"https://{domain}/{uri1[0]}/{mtk}/{uri2[0]}"
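For reference, the `var_a` branch of `zippyshare()` above reduces to plain integer arithmetic on a value scraped from the page's inline JavaScript. A minimal, network-free sketch (the helper name `mtk_from_var_a` is illustrative, not part of the repo):

```python
# Illustrative-only helper: reproduces the token arithmetic of the
# `var_a` branch in zippyshare() above (mtk = var_a ** 3 + 3).
def mtk_from_var_a(var_a: str) -> int:
    return int(pow(int(var_a), 3) + 3)
```

The resulting integer is the middle path segment of the final `https://{domain}/{uri1}/{mtk}/{uri2}` download URL.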


================================================
FILE: freewall.py
================================================
import requests
import base64
import re
from bs4 import BeautifulSoup
from bypasser import RecaptchaV3

RTOKEN = RecaptchaV3()

#######################################################################


def getSoup(res):
    return BeautifulSoup(res.text, "html.parser")


def downloaderla(url, site):
    params = {
        "url": url,
        "token": RTOKEN,
    }
    return requests.get(site, params=params).json()


def getImg(url):
    return requests.get(url).content


def decrypt(res, key):
    if res["success"]:
        return base64.b64decode(res["result"].split(key)[-1]).decode("utf-8")


#######################################################################


def shutterstock(url):
    res = downloaderla(url, "https://ttthreads.net/shutterstock.php")
    if res["success"]:
        return res["result"]


def adobestock(url):
    res = downloaderla(url, "https://new.downloader.la/adobe.php")
    return decrypt(res, "#")


def alamy(url):
    res = downloaderla(url, "https://new.downloader.la/alamy.php")
    return decrypt(res, "#")


def getty(url):
    res = downloaderla(url, "https://getpaidstock.com/api.php")
    return decrypt(res, "#")


def picfair(url):
    res = downloaderla(url, "https://downloader.la/picf.php")
    return decrypt(res, "?newURL=")


def slideshare(url, type="pptx"):
    # valid conversion targets; fall back to "pdf" for anything else
    if type not in {"pdf", "pptx", "img"}:
        type = "pdf"
    return requests.get(
        f"https://downloader.at/convert2{type}.php", params={"url": url}
    ).content


def medium(url):
    return requests.post(
        "https://downloader.la/read.php",
        data={
            "mediumlink": url,
        },
    ).content


#######################################################################


def pass_paywall(url, check=False, link=False):
    patterns = [
        (r"https?://(?:www\.)?shutterstock\.com/", shutterstock, True, "png", -1),
        (r"https?://stock\.adobe\.com/", adobestock, True, "png", -2),
        (r"https?://(?:www\.)?alamy\.com/", alamy, True, "png", -1),
        (r"https?://(?:www\.)?gettyimages\.", getty, True, "png", -2),
        (r"https?://(?:www\.)?istockphoto\.com", getty, True, "png", -1),
        (r"https?://(?:www\.)?picfair\.com/", picfair, True, "png", -1),
        (r"https?://(?:www\.)?slideshare\.net/", slideshare, False, "pptx", -1),
        (r"https?://medium\.com/", medium, False, "html", -1),
    ]

    img_link = None
    name = "no-name"
    for pattern, downloader_func, img, ftype, idx in patterns:
        if re.search(pattern, url):
            if check:
                return True
            img_link = downloader_func(url)

            try:
                name = url.split("/")[idx]
            except:
                pass
            if (not img) and img_link:
                fullname = name + "." + ftype
                with open(fullname, "wb") as f:
                    f.write(img_link)
                return fullname
            break

    if check:
        return False
    if link or (not img_link):
        return img_link
    fullname = name + "." + "png"
    with open(fullname, "wb") as f:
        f.write(getImg(img_link))
    return fullname


#######################################################################
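The `pass_paywall` dispatcher above is driven purely by `re.search` against its pattern table: the first matching entry decides which downloader runs. A self-contained sketch of just the site-detection step, with no network calls (the function name `detect_paywall_site` and the trimmed pattern list are illustrative, not part of the repo):

```python
import re

# Trimmed copy of the pattern table from pass_paywall(), mapping a URL
# to the name of the handler that would be dispatched (check-mode only).
PATTERNS = [
    (r"https?://(?:www\.)?shutterstock\.com/", "shutterstock"),
    (r"https?://stock\.adobe\.com/", "adobestock"),
    (r"https?://(?:www\.)?gettyimages\.", "getty"),
    (r"https?://medium\.com/", "medium"),
]

def detect_paywall_site(url):
    # first matching pattern wins, mirroring the for/break in pass_paywall()
    for pattern, name in PATTERNS:
        if re.search(pattern, url):
            return name
    return None
```

This is what `pass_paywall(url, check=True)` effectively computes when it returns `True`/`False` without downloading anything.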


================================================
FILE: main.py
================================================
from pyrogram import Client, filters
from pyrogram.types import (
    InlineKeyboardMarkup,
    InlineKeyboardButton,
    BotCommand,
    Message,
)
from os import environ, remove
from threading import Thread
from json import load
from re import search
import re
from urlextract import URLExtract
from texts import HELP_TEXT
import bypasser
import freewall
from time import time
from db import DB

# Initialize URL extractor
extractor = URLExtract()

# Bot configuration
with open("config.json", "r") as f:
    DATA: dict = load(f)

def getenv(var):
    return environ.get(var) or DATA.get(var, None)

bot_token = getenv("TOKEN")
api_hash = getenv("HASH")
api_id = getenv("ID")
app = Client("my_bot", api_id=api_id, api_hash=api_hash, bot_token=bot_token)
with app:
    app.set_bot_commands(
        [
            BotCommand("start", "Welcome Message"),
            BotCommand("help", "List of All Supported Sites"),
        ]
    )

# Database setup
db_api = getenv("DB_API")
db_owner = getenv("DB_OWNER")
db_name = getenv("DB_NAME")
try:
    database = DB(api_key=db_api, db_owner=db_owner, db_name=db_name)
except:
    print("Database is Not Set")
    database = None

# Handle index
def handleIndex(ele: str, message: Message, msg: Message):
    result = bypasser.scrapeIndex(ele)
    try:
        app.delete_messages(message.chat.id, msg.id)
    except:
        pass
    if database and result:
        database.insert(ele, result)
    for page in result:
        app.send_message(
            message.chat.id,
            page,
            reply_to_message_id=message.id,
            disable_web_page_preview=True,
        )

# URL regex pattern
URL_REGEX = r'(?:(?:https?|ftp):\/\/)?[\w/\-?=%.]+\.[\w/\-?=%.]+'

# Updated loopthread function
def loopthread(message: Message, otherss=False):
    urls = []
    # Use message.caption for media (otherss=True), message.text for text messages (otherss=False)
    if otherss:
        texts = message.caption or ""
    else:
        texts = message.text or ""

    # Check entities based on message type
    entities = []
    if otherss and hasattr(message, 'caption_entities') and message.caption_entities:
        entities = message.caption_entities
    elif message.entities:
        entities = message.entities

    # Step 1: Extract URLs from entities
    if entities:
        for entity in entities:
            entity_type = str(entity.type)
            normalized_type = entity_type.split('.')[-1].lower() if '.' in entity_type else entity_type.lower()
            
            if normalized_type == "url":
                url = texts[entity.offset:entity.offset + entity.length]
                urls.append(url)
            elif normalized_type == "text_link":
                if hasattr(entity, 'url') and entity.url:
                    urls.append(entity.url)

    # Step 2: Fallback to text-based URL extraction
    extracted_urls = extractor.find_urls(texts)
    urls.extend(extracted_urls)
    regex_urls = re.findall(URL_REGEX, texts)
    urls.extend(regex_urls)

    # Step 3: Clean and deduplicate URLs
    cleaned_urls = []
    for url in urls:
        cleaned_url = url.strip(".,").rstrip("/")
        if cleaned_url:
            cleaned_urls.append(cleaned_url)
    urls = list(dict.fromkeys(cleaned_urls))  # Preserve order, remove duplicates
    if not urls:
        app.send_message(
            message.chat.id,
            "No valid URLs found in the message.",
            reply_to_message_id=message.id
        )
        return

    # Step 4: Normalize URLs (add protocol if missing)
    normalized_urls = []
    for url in urls:
        if not url.startswith(('http://', 'https://')):
            url = 'https://' + url
        normalized_urls.append(url)
    urls = normalized_urls

    # Bypassing logic
    if bypasser.ispresent(bypasser.ddl.ddllist, urls[0]):
        msg: Message = app.send_message(
            message.chat.id, "⚡ __generating...__", reply_to_message_id=message.id
        )
    elif freewall.pass_paywall(urls[0], check=True):
        msg: Message = app.send_message(
            message.chat.id, "🕴️ __jumping the wall...__", reply_to_message_id=message.id
        )
    else:
        if "https://olamovies" in urls[0] or "https://psa.wf/" in urls[0]:
            msg: Message = app.send_message(
                message.chat.id,
                "⏳ __this might take some time...__",
                reply_to_message_id=message.id,
            )
        else:
            msg: Message = app.send_message(
                message.chat.id, "🔎 __bypassing...__", reply_to_message_id=message.id
            )

    strt = time()
    links = ""
    temp = None

    for ele in urls:
        if database:
            df_find = database.find(ele)
        else:
            df_find = None
        if df_find:
            print("Found in DB")
            temp = df_find
        elif search(r"https?:\/\/(?:[\w.-]+)?\.\w+\/\d+:", ele):
            handleIndex(ele, message, msg)
            return
        elif bypasser.ispresent(bypasser.ddl.ddllist, ele):
            try:
                temp = bypasser.ddl.direct_link_generator(ele)
            except Exception as e:
                temp = "**Error**: " + str(e)
        elif freewall.pass_paywall(ele, check=True):
            freefile = freewall.pass_paywall(ele)
            if freefile:
                try:
                    app.send_document(
                        message.chat.id, freefile, reply_to_message_id=message.id
                    )
                    remove(freefile)
                    app.delete_messages(message.chat.id, [msg.id])
                    return
                except:
                    pass
            else:
                app.send_message(
                    message.chat.id, "__Failed to Jump__", reply_to_message_id=message.id
                )
        else:
            try:
                temp = bypasser.shortners(ele)
            except Exception as e:
                temp = "**Error**: " + str(e)

        print("bypassed:", temp)
        if temp is not None:
            if (not df_find) and ("http://" in temp or "https://" in temp) and database:
                print("Adding to DB")
                database.insert(ele, temp)
            links = links + temp + "\n"

    end = time()
    print("Took " + "{:.2f}".format(end - strt) + "sec")

    # Send bypassed links
    try:
        final = []
        tmp = ""
        for ele in links.split("\n"):
            tmp += ele + "\n"
            if len(tmp) > 4000:
                final.append(tmp)
                tmp = ""
        final.append(tmp)
        app.delete_messages(message.chat.id, msg.id)
        tmsgid = message.id
        for ele in final:
            tmsg = app.send_message(
                message.chat.id,
                f"__{ele}__",
                reply_to_message_id=tmsgid,
                disable_web_page_preview=True,
            )
            tmsgid = tmsg.id
    except Exception as e:
        app.send_message(
            message.chat.id,
            f"__Failed to Bypass : {e}__",
            reply_to_message_id=message.id,
        )

# Start command
@app.on_message(filters.command(["start"]))
def send_start(client: Client, message: Message):
    app.send_message(
        message.chat.id,
        f"__👋 Hi **{message.from_user.mention}**, I am Link Bypasser Bot, just send me any supported links and I will get you results.\nCheckout /help to Read More__",
        reply_markup=InlineKeyboardMarkup(
            [
                [
                    InlineKeyboardButton(
                        "🌐 Source Code",
                        url="https://github.com/bipinkrish/Link-Bypasser-Bot",
                    )
                ],
                [
                    InlineKeyboardButton(
                        "Replit",
                        url="https://replit.com/@bipinkrish/Link-Bypasser#app.py",
                    )
                ],
            ]
        ),
        reply_to_message_id=message.id,
    )

# Help command
@app.on_message(filters.command(["help"]))
def send_help(client: Client, message: Message):
    app.send_message(
        message.chat.id,
        HELP_TEXT,
        reply_to_message_id=message.id,
        disable_web_page_preview=True,
    )

# Text message handler
@app.on_message(filters.text)
def receive(client: Client, message: Message):
    bypass = Thread(target=lambda: loopthread(message), daemon=True)
    bypass.start()

# Document thread for DLC files
def docthread(message: Message):
    msg: Message = app.send_message(
        message.chat.id, "🔎 __bypassing...__", reply_to_message_id=message.id
    )
    print("received DLC file")
    file = app.download_media(message)
    with open(file, "r") as f:
        dlccont = f.read()
    links = bypasser.getlinks(dlccont)
    app.edit_message_text(
        message.chat.id, msg.id, f"__{links}__", disable_web_page_preview=True
    )
    remove(file)

# Media file handler
@app.on_message([filters.document, filters.photo, filters.video])
def docfile(client: Client, message: Message):
    try:
        if message.document and message.document.file_name.endswith(".dlc"):
            bypass = Thread(target=lambda: docthread(message), daemon=True)
            bypass.start()
            return
    except:
        pass
    bypass = Thread(target=lambda: loopthread(message, True), daemon=True)
    bypass.start()

# Start the bot
print("Bot Starting")
app.run()
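The send step in `loopthread` above splits the joined results into chunks that fit under Telegram's message-length cap before sending. The same splitting loop, extracted into a standalone form for illustration (the name `chunk_message` is not in the repo; note a chunk is appended only after the buffer exceeds the limit, so a chunk can run slightly over 4000 characters, which still fits Telegram's 4096-character cap for typical link lines):

```python
def chunk_message(lines, limit=4000):
    # Mirrors the splitting loop in loopthread(): accumulate lines until
    # the running buffer passes `limit`, then start a new chunk.
    final, tmp = [], ""
    for ele in lines:
        tmp += ele + "\n"
        if len(tmp) > limit:
            final.append(tmp)
            tmp = ""
    final.append(tmp)  # trailing (possibly empty) remainder, as in the original
    return final
```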


================================================
FILE: requirements.txt
================================================
requests
cloudscraper
bs4 
python-dotenv
kurigram==2.2.0
PyroTgCrypto==1.2.7
lxml
cfscrape
urllib3==1.26
flask
gunicorn
curl-cffi
urlextract


================================================
FILE: runtime.txt
================================================
python-3.9.14


================================================
FILE: templates/index.html
================================================
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Web Link Bypasser</title>
    <meta
      name="description"
      content="A site that can Bypass the Ad Links and Generate Direct Links"
    />
    
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="stylesheet" href="https://bootswatch.com/5/vapor/bootstrap.css" />
  </head>
  <body>
    <style>
    ul.no-bullets {
      list-style-type: none;
      padding-left: 0;
    }

    .no-underline {
      text-decoration: none;
    }
  </style>
    <nav class="navbar navbar-expand-md navbar-dark bg-dark justify-content-center" id="navbarHeader">
      <div class="navbar-brand text-white text-center">
        Quick, Simple and Easy to use Link-Bypasser 
      </div>
    </nav>
    <main>
      <section class="py-5 text-center" id="topHeaderSection">
        <div class="container mt-10">
          <h1 class="mb-4 text-center">Link Bypasser Web</h1>
          <div class="center-content">
            <form method="post">
              <div class="row justify-content-center">
                <div class="col-12 col-md-8 col-lg-6">
                  <div class="input-group mb-3">
                    <input type="text" class="form-control" name="url" placeholder="Enter URL here">
                    <div class="input-group-append">
                      <button class="btn btn-primary" type="submit">Submit</button>
                    </div>
                  </div>
                </div>
              </div>
            </form>
            <div class="mt-4">
            {% if result %}
            <div class="mb-2">
              <a href="{{ result }}" class="d-block" target="_blank">{{ result }}</a>
            </div>
            {% endif %}
          </div>
          {% if prev_links %}
          <div class="mt-4">
            <strong>Previously Shortened Links:</strong>
            <ul class="no-bullets">
              {% for prev_link in prev_links[::-1] %}
              {% if prev_link.strip() %}
              <li>
                <a href="{{ prev_link }}" class="d-block mb-2 no-underline" target="_blank">{{ prev_link }}</a>
              </li>
              {% endif %}
              {% endfor %}
            </ul>
          </div>
          {% endif %}
            </div>
            <p class="mt-3 text-warning">If you encounter any problems, <a href="https://github.com/bipinkrish/Link-Bypasser-Bot/issues"> click here to report</a>.</p>
            <div class="mt-5">
              <p>Designed and made by <a href="https://patelsujal.in">Sujal Patel with Bipinkrish</a></p>
            </div>
          </div>
        </div>
      </section>
    </main>
    <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script>
    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
  </body> 
</html>


================================================
FILE: texts.py
================================================
gdrivetext = """__- appdrive \n\
- driveapp \n\
- drivehub \n\
- gdflix \n\
- drivesharer \n\
- drivebit \n\
- drivelinks \n\
- driveace \n\
- drivepro \n\
- driveseed \n\
    __"""


otherstext = """__- exe, exey \n\
- sub2unlock \n\
- rekonise \n\
- letsboost \n\
- phapps2app \n\
- mboost \n\
- sub4unlock \n\
- ytsubme \n\
- bitly \n\
- social-unlock \n\
- boost \n\
- gooly \n\
- shrto \n\
- tinyurl
    __"""


ddltext = """__- yandex \n\
- mediafire \n\
- uptobox \n\
- osdn \n\
- github \n\
- hxfile \n\
- 1drv (onedrive) \n\
- pixeldrain \n\
- antfiles \n\
- streamtape \n\
- racaty \n\
- 1fichier \n\
- solidfiles \n\
- krakenfiles \n\
- upload \n\
- akmfiles \n\
- linkbox \n\
- shrdsk \n\
- letsupload \n\
- zippyshare \n\
- wetransfer \n\
- terabox, teraboxapp, 4funbox, mirrobox, nephobox, momerybox \n\
- filepress \n\
- anonfiles, hotfile, bayfiles, megaupload, letsupload, filechan, myfile, vshare, rapidshare, lolabits, openload, share-online, upvid \n\
- fembed, femax20, fcdn, feurl, layarkacaxxi, naniplay, nanime, mm9842 \n\
- sbembed, watchsb, streamsb, sbplay.
    __"""


shortnertext = """__- igg-games \n\
- olamovies\n\
- katdrive \n\
- drivefire\n\
- kolop \n\
- hubdrive \n\
- filecrypt \n\
- shareus \n\
- shortingly \n\
- gyanilinks \n\
- shorte \n\
- psa \n\
- sharer \n\
- new1.gdtot \n\
- adfly\n\
- gplinks\n\
- droplink \n\
- linkvertise \n\
- rocklinks \n\
- ouo \n\
- try2link \n\
- htpmovies \n\
- sharespark \n\
- cinevood\n\
- atishmkv \n\
- urlsopen \n\
- xpshort, techymozo \n\
- dulink \n\
- ez4short \n\
- krownlinks \n\
- teluguflix \n\
- taemovies \n\
- toonworld4all \n\
- animeremux \n\
- adrinolinks \n\
- tnlink \n\
- flashlink \n\
- short2url \n\
- tinyfy \n\
- mdiskshortners \n\
- earnl \n\
- moneykamalo \n\
- easysky \n\
- indiurl \n\
- linkbnao \n\
- mdiskpro \n\
- tnshort \n\
- indianshortner \n\
- rslinks \n\
- bitly, tinyurl \n\
- thinfi \n\
- pdisk \n\
- vnshortener \n\
- onepagelink \n\
- lolshort \n\
- tnvalue \n\
- vipurl \n\
__"""


freewalltext = """__- shutterstock \n\
- adobe stock \n\
- alamy \n\
- gettyimages \n\
- istockphoto \n\
- picfair \n\
- slideshare \n\
- medium \n\
    __"""


HELP_TEXT = f"**--Just Send me any Supported Links From the Below Mentioned Sites--** \n\n\
**List of Sites for DDL : ** \n\n{ddltext} \n\
**List of Sites for Shorteners : ** \n\n{shortnertext} \n\
**List of Sites for GDrive Look-Alike : ** \n\n{gdrivetext} \n\
**List of Sites for Jumping Paywalls : ** \n\n{freewalltext} \n\
**Other Supported Sites : ** \n\n{otherstext}"
Condensed preview — 17 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (168K chars).
[
  {
    "path": ".github/FUNDING.yml",
    "chars": 21,
    "preview": "github: [bipinkrish]\n"
  },
  {
    "path": ".gitignore",
    "chars": 50,
    "preview": "__pycache__\nmy_bot.session\nmy_bot.session-journal\n"
  },
  {
    "path": "Dockerfile",
    "chars": 155,
    "preview": "FROM python:3.9\r\n\r\nWORKDIR /app\r\n\r\nCOPY requirements.txt /app/\r\nRUN pip3 install -r requirements.txt\r\nCOPY . /app\r\n\r\nCMD"
  },
  {
    "path": "Procfile",
    "chars": 44,
    "preview": "worker: python3 main.py\r\nweb: python3 app.py"
  },
  {
    "path": "README.md",
    "chars": 2987,
    "preview": "# Link-Bypasser-Bot\r\n\r\na Telegram Bot (with Site) that can Bypass Ad Links, Generate Direct Links and Jump Paywalls. see"
  },
  {
    "path": "app.json",
    "chars": 1972,
    "preview": "{\n  \"name\": \"Link-Bypasser-Bot\",\n  \"description\": \"A Telegram Bot (with Site) that can Bypass Ad Links, Generate Direct "
  },
  {
    "path": "app.py",
    "chars": 2461,
    "preview": "from flask import Flask, request, render_template, make_response, send_file\nimport bypasser\nimport re\nimport os\nimport f"
  },
  {
    "path": "bypasser.py",
    "chars": 90890,
    "preview": "import re\r\nimport requests\r\nfrom curl_cffi import requests as Nreq\r\nimport base64\r\nfrom urllib.parse import unquote, url"
  },
  {
    "path": "config.json",
    "chars": 364,
    "preview": "{\n    \"TOKEN\": \"\",\n    \"ID\": \"\",\n    \"HASH\": \"\",\n    \"Laravel_Session\": \"\",\n    \"XSRF_TOKEN\": \"\",\n    \"GDTot_Crypt\": \"\","
  },
  {
    "path": "db.py",
    "chars": 1762,
    "preview": "import requests\nimport base64\n\nclass DB:\n    def __init__(self, api_key, db_owner, db_name) -> None:\n        self.api_ke"
  },
  {
    "path": "ddl.py",
    "chars": 32786,
    "preview": "from base64 import standard_b64encode\r\nfrom json import loads\r\nfrom math import floor, pow\r\nfrom re import findall, matc"
  },
  {
    "path": "freewall.py",
    "chars": 3243,
    "preview": "import requests\nimport base64\nimport re\nfrom bs4 import BeautifulSoup\nfrom bypasser import RecaptchaV3\n\nRTOKEN = Recaptc"
  },
  {
    "path": "main.py",
    "chars": 9735,
    "preview": "from pyrogram import Client, filters\r\nfrom pyrogram.types import (\r\n    InlineKeyboardMarkup,\r\n    InlineKeyboardButton,"
  },
  {
    "path": "requirements.txt",
    "chars": 154,
    "preview": "requests\r\ncloudscraper\r\nbs4 \r\npython-dotenv\r\nkurigram==2.2.0\r\nPyroTgCrypto==1.2.7\r\nlxml\r\ncfscrape\r\nurllib3==1.26\r\nflask\r"
  },
  {
    "path": "runtime.txt",
    "chars": 15,
    "preview": "python-3.9.14\r\n"
  },
  {
    "path": "templates/index.html",
    "chars": 2965,
    "preview": "<!DOCTYPE html>\n<html lang=\"en\">\n  <head>\n    <title>Web Link Bypasser</title>\n    <meta\n      name=\"description\"\n      "
  },
  {
    "path": "texts.py",
    "chars": 2714,
    "preview": "gdrivetext = \"\"\"__- appdrive \\n\\\r\n- driveapp \\n\\\r\n- drivehub \\n\\\r\n- gdflix \\n\\\r\n- drivesharer \\n\\\r\n- drivebit \\n\\\r\n- dri"
  }
]
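Each preview entry above follows the same simple schema: `path`, `chars`, and `preview`. A short sketch of consuming such a list with only the standard library (the three sample entries are copied from the preview above; the real list has 17):

```python
import json

# Sample mirroring the preview schema shown above; previews truncated.
preview_json = """
[
  {"path": ".gitignore", "chars": 50, "preview": "__pycache__"},
  {"path": "bypasser.py", "chars": 90890, "preview": "import re"},
  {"path": "ddl.py", "chars": 32786, "preview": "from base64 import standard_b64encode"}
]
"""

entries = json.loads(preview_json)

# Total size in characters across the sampled files
total_chars = sum(e["chars"] for e in entries)

# Largest file by character count
largest = max(entries, key=lambda e: e["chars"])

print(total_chars)      # 123726
print(largest["path"])  # bypasser.py
```

Summing `chars` over all 17 real entries is a quick sanity check against the stated total size of the extraction.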

About this extraction

This page contains the full source code of the bipinkrish/Link-Bypasser-Bot GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 17 files (148.7 KB, roughly 39.7k tokens) and includes a symbol index of 145 extracted functions, classes, methods, constants, and types. The output can be fed to OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract, a free GitHub-repo-to-text converter for AI, built by Nikandr Surkov.
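The symbol index earlier in the extraction uses entries of the form `function filepress (line 716) | def filepress(url):`. An index in that shape can be produced with a single regex pass over a Python source file; a minimal sketch, assuming only plain top-level `def` definitions (the sample source string is invented for illustration, not taken from the repository):

```python
import re

# Invented sample source standing in for a file like bypasser.py
source = """\
import re

def filepress(url):
    return url

def gdtot(url):
    return url
"""

def index_functions(text):
    """Return (name, line_number, signature) for each top-level def."""
    out = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # re.match anchors at column 0, so indented (nested) defs are skipped
        m = re.match(r"def\s+(\w+)\s*\(", line)
        if m:
            out.append((m.group(1), lineno, line.strip()))
    return out

for name, lineno, sig in index_functions(source):
    # Mirrors the index format: function NAME (line N) | SIGNATURE
    print(f"function {name} (line {lineno}) | {sig}")
```

A regex scan like this misses multi-line signatures and decorated or conditional definitions; a production extractor would more likely walk the `ast` module's parse tree.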
