Full Code of TeaByte/telegram-views for AI

Repository: TeaByte/telegram-views
Branch: master
Commit: e764fc1c5d05
Files: 8
Total size: 22.6 KB

Directory structure:
gitextract_prubwzpo/

├── .gitignore
├── LICENSE
├── README.md
├── auto/
│   ├── http.txt
│   ├── socks4.txt
│   └── socks5.txt
├── main.py
└── requirements.txt

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Python bytecode files
*.pyc
*.pyo
*.pyd
__pycache__/
*.so

# Distribution / packaging
.Python
build/
dist/
*.egg-info/
*.eggs/
MANIFEST

# Virtual environment
venv/
env/
.venv/
ENV/
env.bak/
venv.bak/

# Jupyter Notebook checkpoints
.ipynb_checkpoints/

# pyenv
.python-version

# pytest
.cache
nosetests.xml
coverage.xml
*.cover
*.hypothesis/

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Ruff
.ruff_cache/

# VSCode
.vscode/

# JetBrains (PyCharm, IntelliJ, etc.)
.idea/

# macOS
.DS_Store

# Windows
Thumbs.db
Desktop.ini

# Flask instance folder
instance/

# dotenv
.env
.env.*

# Python tool caches
.cache/

# Program Logs
error.txt

================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2025 VWH

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

================================================
FILE: README.md
================================================
# Telegram Views

![Gif](https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExOGFkOTRiMTdjMTc3OTJhZmU0MDRmZGFlNGJiMjA3NGYxOGQwM2Y2ZSZlcD12MV9pbnRlcm5hbF9naWZzX2dpZklkJmN0PWc/jStbo9qVAJsObKKr5a/giphy.gif)

## Features

- **Asynchronous**: Optimized for performance with async tasks.
- **Proxy Support**: Compatible with all proxy types (HTTP/S, SOCKS4, SOCKS5).
- **Auto Proxy Scraping**: Automatically scrapes proxies from various sources.
- **Proxy Rotation**: Supports rotating proxies for added anonymity.
- **Custom Proxy Lists**: Load proxies from files and use them in different modes.

---

## Arguments Overview

### Command-Line Arguments:

- `--channel`: **Required** – Telegram channel username, without the `@` (e.g., `channel_name`).
- `--post`: **Required** – Post number in the Telegram channel (e.g., `4` for `https://t.me/channel_name/4`).
- `--type`: **Optional** – Proxy type (`http`, `socks4`, `socks5`). Needed whenever proxies are supplied, i.e. in `list` and `rotate` modes.
- `--mode`: **Required** – Mode of operation (`auto`, `list`, or `rotate`).
- `--proxy`: **Optional** – Path to a text file containing a list of proxies (`list` mode) or a single proxy in `user:password@host:port` format (`rotate` mode).
- `--concurrency`: **Optional** – Maximum number of concurrent requests (default: `200`).
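The flags above can be modeled with a small `argparse` sketch. This is illustrative only; the repository's actual parser lives in `main.py` and also defines short aliases such as `-c` and `-m`:

```python
from argparse import ArgumentParser

# Illustrative parser mirroring the documented flags (see main.py for the real one).
parser = ArgumentParser(description="Telegram views sender (sketch)")
parser.add_argument("--channel", required=True, help="Channel username without @")
parser.add_argument("--post", type=int, required=True, help="Post number")
parser.add_argument("--type", choices=["http", "socks4", "socks5"], help="Proxy type")
parser.add_argument("--mode", required=True, choices=["auto", "list", "rotate"])
parser.add_argument("--proxy", help="Proxy file path or user:password@host:port")
parser.add_argument("--concurrency", type=int, default=200)

args = parser.parse_args(["--mode", "auto", "--channel", "tviews", "--post", "4"])
print(args.mode, args.channel, args.post, args.concurrency)
```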

---

## Modes

### 1. **Auto Scraping Mode** (No Proxies Required)

In this mode, proxies are automatically scraped from various online sources. You don’t need to provide a list of proxies.

- **Usage**: This mode runs indefinitely and auto-rescrapes proxies after each loop.

```bash
python main.py --mode auto --channel tviews --post 4
```

#### Notes:

- Proxy source URLs are read from the text files in the `auto` directory; edit those files to change the sources.
- This mode does not require a proxy list.
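Each source URL returns (or embeds) plain `ip:port` pairs, and the scraper regex-extracts them from whatever body comes back. A simplified, offline sketch of that extraction step (the pattern below is looser than the range-checked `REGEX` in `main.py`):

```python
import re

# Simplified ip:port extractor; main.py's REGEX additionally validates
# octet (0-255) and port (0-65535) ranges.
PAIR = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3}:\d{1,5})\b")

body = "alive: 1.2.3.4:8080, dead 10.0.0.1:3128\nno proxies here"
print(PAIR.findall(body))  # → ['1.2.3.4:8080', '10.0.0.1:3128']
```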

### 2. **List Mode** (Load Proxies From File)

This mode allows you to provide a text file containing a list of proxies. Each proxy should be on a new line. It supports multiple proxy types: `http`, `socks4`, and `socks5`.

- **Usage**:

```bash
python main.py --mode list --type http --proxy http.txt --channel tviews --post 4
```

#### Notes:

- Provide a path to a text file with proxies (`http.txt`, `socks4.txt`, etc.), one per line.
- Each listed proxy is used for one view attempt per run.
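A proxy list file is plain text with one proxy per line, either `host:port` or `user:password@host:port`. A minimal loader could look like this (the `load_proxies` helper and its pattern are hypothetical, not part of the repository, which simply reads raw lines):

```python
import re

# host:port, optionally prefixed with user:password@ (hypothetical validator,
# looser than the range-checked REGEX used in main.py).
PROXY_LINE = re.compile(r"^(?:[^:@\s]+:[^:@\s]+@)?[\w.\-]+:\d{1,5}$")

def load_proxies(text: str) -> list[str]:
    """Return the non-empty lines that look like proxies."""
    lines = [line.strip() for line in text.splitlines()]
    return [line for line in lines if line and PROXY_LINE.match(line)]

sample = "1.2.3.4:8080\nnot a proxy\nuser:pass@10.0.0.1:1080\n"
print(load_proxies(sample))  # → ['1.2.3.4:8080', 'user:pass@10.0.0.1:1080']
```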

### 3. **Rotating Proxy Mode**

In this mode, all requests go through a single rotating proxy endpoint. Provide the proxy in `user:password@ip:port` format.

- **Usage**:

```bash
python main.py --mode rotate --type http --proxy user:password@ip:port --channel tviews --post 4
```

#### Notes:

- This mode rotates a single proxy across multiple requests.
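A rotating proxy is one endpoint whose exit IP changes per connection, so the script simply opens many concurrent requests through it. A toy model of that fan-out (the `worker` coroutine below is a stand-in for `main.py`'s `continuous_request`, doing one call instead of an infinite loop so the sketch terminates):

```python
import asyncio

async def worker(proxy: str, hits: list) -> None:
    # Placeholder for one real proxied HTTP request through the rotating endpoint.
    await asyncio.sleep(0)
    hits.append(proxy)

async def run_rotated(proxy: str, workers: int) -> int:
    # Spawn many workers that all share the same rotating endpoint.
    hits: list = []
    await asyncio.gather(*(worker(proxy, hits) for _ in range(workers)))
    return len(hits)

print(asyncio.run(run_rotated("user:password@ip:port", 10)))  # → 10
```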

---

## Example Usage

### **1. Auto Scraping Mode** (No Proxy)

This mode continuously scrapes proxies and sends views to the given post on the Telegram channel.

```bash
python main.py --mode auto --channel tviews --post 4
```

### **2. Load Proxies From File** (Custom Proxies)

If you have a list of proxies (e.g., in a text file `http.txt`), use this mode to send views using those proxies.

```bash
python main.py --mode list --type http --proxy http.txt --channel tviews --post 4
```

### **3. Rotating Proxy Mode** (Single Proxy with Rotation)

This mode allows you to send views using a single rotating proxy. Provide the proxy in `user:password@ip:port` format.

```bash
python main.py --mode rotate --type http --proxy user:password@ip:port --channel tviews --post 4
```

---

## Requirements

Make sure to install the required dependencies before running the script:

```bash
pip install -r requirements.txt
```

---


================================================
FILE: auto/http.txt
================================================
https://raw.githubusercontent.com/proxifly/free-proxy-list/main/proxies/protocols/http/data.txt
https://raw.githubusercontent.com/zloi-user/hideip.me/main/http.txt
https://openproxylist.xyz/http.txt
https://api.proxyscrape.com/v2/?request=getproxies&protocol=http
https://openproxy.space/list/http
https://proxyspace.pro/http.txt
https://proxyspace.pro/https.txt
https://proxyhub.me/en/all-http-proxy-list.html
https://proxyhub.me/en/all-https-proxy-list.html
https://proxy-tools.com/proxy/http
https://proxy-tools.com/proxy/https
https://www.proxy-list.download/api/v1/get?type=http
https://www.proxy-list.download/api/v1/get?type=https
https://www.proxyscan.io/download?type=http
https://rootjazz.com/proxies/proxies.txt
https://sheesh.rip/http.txt
https://spys.me/proxy.txt
https://proxysearcher.sourceforge.net/Proxy%20List.php?type=http
https://proxylist.geonode.com/api/proxy-list?limit=500&page=1&sort_by=lastChecked&sort_type=desc&protocols=http
https://proxylist.geonode.com/api/proxy-list?limit=500&page=1&sort_by=lastChecked&sort_type=desc&protocols=https
https://cdn.jsdelivr.net/gh/aslisk/proxyhttps/https.txt
https://cdn.jsdelivr.net/gh/clarketm/proxy-list/proxy-list-raw.txt
https://cdn.jsdelivr.net/gh/hendrikbgr/Free-Proxy-Repo/proxy_list.txt
https://cdn.jsdelivr.net/gh/jetkai/proxy-list/online-proxies/txt/proxies-http.txt
https://cdn.jsdelivr.net/gh/mertguvencli/http-proxy-list/proxy-list/data.txt
https://cdn.jsdelivr.net/gh/mmpx12/proxy-list/https.txt
https://cdn.jsdelivr.net/gh/roosterkid/openproxylist/HTTPS_RAW.txt
https://cdn.jsdelivr.net/gh/saschazesiger/Free-Proxies/proxies/http.txt
https://cdn.jsdelivr.net/gh/ShiftyTR/Proxy-List/https.txt
https://cdn.jsdelivr.net/gh/sunny9577/proxy-scraper/proxies.txt
https://raw.githubusercontent.com/TheSpeedX/SOCKS-List/master/http.txt
https://raw.githubusercontent.com/themiralay/Proxy-List-World/master/data.txt
https://raw.githubusercontent.com/elliottophellia/yakumo/master/results/http/global/http_checked.txt
https://raw.githubusercontent.com/zloi-user/hideip.me/main/http.txt
https://raw.githubusercontent.com/zloi-user/hideip.me/main/https.txt
https://raw.githubusercontent.com/fahimscirex/proxybd/master/proxylist/http.txt
https://raw.githubusercontent.com/prxchk/proxy-list/main/http.txt
https://raw.githubusercontent.com/yemixzy/proxy-list/main/proxies/http.txt
https://raw.githubusercontent.com/ErcinDedeoglu/proxies/main/proxies/http.txt
https://raw.githubusercontent.com/ErcinDedeoglu/proxies/main/proxies/https.txt
https://raw.githubusercontent.com/im-razvan/proxy_list/main/http.txt
https://raw.githubusercontent.com/monosans/proxy-list/main/proxies/http.txt
https://raw.githubusercontent.com/sunny9577/proxy-scraper/master/generated/http_proxies.txt
https://raw.githubusercontent.com/SevenworksDev/proxy-list/main/proxies/http.txt
https://raw.githubusercontent.com/SevenworksDev/proxy-list/main/proxies/https.txt
https://raw.githubusercontent.com/MrMarble/proxy-list/main/all.txt
https://raw.githubusercontent.com/tuanminpay/live-proxy/master/http.txt
https://raw.githubusercontent.com/officialputuid/KangProxy/KangProxy/http/http.txt
https://raw.githubusercontent.com/officialputuid/KangProxy/KangProxy/https/https.txt
https://raw.githubusercontent.com/Tsprnay/Proxy-lists/master/proxies/http.txt
https://raw.githubusercontent.com/Tsprnay/Proxy-lists/master/proxies/https.txt
https://raw.githubusercontent.com/mmpx12/proxy-list/master/http.txt
https://raw.githubusercontent.com/mmpx12/proxy-list/master/https.txt
https://raw.githubusercontent.com/MuRongPIG/Proxy-Master/main/http.txt
https://raw.githubusercontent.com/ALIILAPRO/Proxy/main/http.txt


================================================
FILE: auto/socks4.txt
================================================
https://raw.githubusercontent.com/proxifly/free-proxy-list/main/proxies/protocols/socks4/data.txt
https://openproxy.space/list/socks4
https://openproxylist.xyz/socks4.txt
https://api.proxyscrape.com/v2/?request=getproxies&protocol=socks4
https://proxysearcher.sourceforge.net/Proxy%20List.php?type=socks
https://proxyspace.pro/socks4.txt
https://www.proxy-list.download/api/v1/get?type=socks4
https://www.proxyscan.io/download?type=socks4
https://proxyhub.me/en/all-socks4-proxy-list.html
https://proxy-tools.com/proxy/socks4
https://proxylist.geonode.com/api/proxy-list?limit=500&page=1&sort_by=lastChecked&sort_type=desc&protocols=socks4
https://spys.me/socks.txt
https://www.socks-proxy.net/
https://www.my-proxy.com/free-socks-4-proxy.html
https://cdn.jsdelivr.net/gh/B4RC0DE-TM/proxy-list/SOCKS4.txt
https://cdn.jsdelivr.net/gh/jetkai/proxy-list/online-proxies/txt/proxies-socks4.txt
https://cdn.jsdelivr.net/gh/roosterkid/openproxylist/SOCKS4_RAW.txt
https://cdn.jsdelivr.net/gh/saschazesiger/Free-Proxies/proxies/socks4.txt
https://cdn.jsdelivr.net/gh/TheSpeedX/PROXY-List/socks4.txt
https://raw.githubusercontent.com/TheSpeedX/SOCKS-List/master/socks4.txt
https://raw.githubusercontent.com/elliottophellia/yakumo/master/results/socks4/global/socks4_checked.txt
https://raw.githubusercontent.com/zloi-user/hideip.me/main/socks4.txt
https://raw.githubusercontent.com/fahimscirex/proxybd/master/proxylist/socks4.txt
https://raw.githubusercontent.com/prxchk/proxy-list/main/socks4.txt
https://raw.githubusercontent.com/yemixzy/proxy-list/main/proxies/socks4.txt
https://raw.githubusercontent.com/ErcinDedeoglu/proxies/main/proxies/socks4.txt
https://raw.githubusercontent.com/monosans/proxy-list/main/proxies/socks4.txt
https://raw.githubusercontent.com/sunny9577/proxy-scraper/master/generated/socks4_proxies.txt
https://raw.githubusercontent.com/SevenworksDev/proxy-list/main/proxies/socks4.txt
https://raw.githubusercontent.com/tuanminpay/live-proxy/master/socks4.txt
https://raw.githubusercontent.com/officialputuid/KangProxy/KangProxy/socks4/socks4.txt
https://raw.githubusercontent.com/Tsprnay/Proxy-lists/master/proxies/socks4.txt
https://raw.githubusercontent.com/mmpx12/proxy-list/master/socks4.txt
https://raw.githubusercontent.com/MuRongPIG/Proxy-Master/main/socks4.txt
https://raw.githubusercontent.com/ALIILAPRO/Proxy/main/socks4.txt


================================================
FILE: auto/socks5.txt
================================================
https://raw.githubusercontent.com/proxifly/free-proxy-list/main/proxies/protocols/socks5/data.txt
https://openproxy.space/list/socks5
https://openproxylist.xyz/socks5.txt
https://api.proxyscrape.com/v2/?request=getproxies&protocol=socks5
http://proxysearcher.sourceforge.net/Proxy%20List.php?type=socks
https://proxyspace.pro/socks5.txt
https://spys.me/socks.txt
https://www.proxy-list.download/api/v1/get?type=socks5
https://www.proxyscan.io/download?type=socks5
https://proxy-tools.com/proxy/socks5
https://proxyhub.me/en/all-sock5-proxy-list.html
https://www.my-proxy.com/free-socks-5-proxy.html
https://proxylist.geonode.com/api/proxy-list?limit=500&page=1&sort_by=lastChecked&sort_type=desc&protocols=socks5
https://cdn.jsdelivr.net/gh/HyperBeats/proxy-list/socks5.txt
https://cdn.jsdelivr.net/gh/jetkai/proxy-list/online-proxies/txt/proxies-socks5.txt
https://cdn.jsdelivr.net/gh/mmpx12/proxy-list/socks5.txt
https://cdn.jsdelivr.net/gh/roosterkid/openproxylist/SOCKS5_RAW.txt
https://cdn.jsdelivr.net/gh/saschazesiger/Free-Proxies/proxies/socks5.txt
https://cdn.jsdelivr.net/gh/TheSpeedX/PROXY-List/socks5.txt
https://raw.githubusercontent.com/TheSpeedX/SOCKS-List/master/socks5.txt
https://raw.githubusercontent.com/elliottophellia/yakumo/master/results/socks5/global/socks5_checked.txt
https://raw.githubusercontent.com/zloi-user/hideip.me/main/socks5.txt
https://raw.githubusercontent.com/fahimscirex/proxybd/master/proxylist/socks5.txt
https://raw.githubusercontent.com/prxchk/proxy-list/main/socks5.txt
https://raw.githubusercontent.com/yemixzy/proxy-list/main/proxies/socks5.txt
https://raw.githubusercontent.com/hookzof/socks5_list/master/proxy.txt
https://raw.githubusercontent.com/ErcinDedeoglu/proxies/main/proxies/socks5.txt
https://raw.githubusercontent.com/im-razvan/proxy_list/main/socks5.txt
https://raw.githubusercontent.com/monosans/proxy-list/main/proxies/socks5.txt
https://raw.githubusercontent.com/sunny9577/proxy-scraper/master/generated/socks5_proxies.txt
https://raw.githubusercontent.com/SevenworksDev/proxy-list/main/proxies/socks5.txt
https://raw.githubusercontent.com/tuanminpay/live-proxy/master/socks5.txt
https://raw.githubusercontent.com/officialputuid/KangProxy/KangProxy/socks5/socks5.txt
https://raw.githubusercontent.com/Tsprnay/Proxy-lists/master/proxies/socks5.txt
https://raw.githubusercontent.com/mmpx12/proxy-list/master/socks5.txt
https://raw.githubusercontent.com/MuRongPIG/Proxy-Master/main/socks5.txt
https://raw.githubusercontent.com/ALIILAPRO/Proxy/main/socks5.txt
https://raw.githubusercontent.com/AGDDoS/AGProxy/master/proxies/socks5.txt


================================================
FILE: main.py
================================================
import aiohttp
import asyncio
from re import search, compile
from argparse import ArgumentParser
from datetime import datetime
from fake_useragent import UserAgent
from aiohttp_socks import ProxyConnector

# Regular expression for matching ip:port proxy pairs with strict range checks:
# first octet 1-255, remaining octets 0-255, port 0-65535.
FIRST_OCTET = r"(?:[1-9]|[1-9]\d|1\d{2}|2[0-4]\d|25[0-5])"
OCTET = r"(?:\d|[1-9]\d|1\d{2}|2[0-4]\d|25[0-5])"
PORT = (
    r"(?:\d|[1-9]\d{1,3}|[1-5]\d{4}|6[0-4]\d{3}"
    r"|65[0-4]\d{2}|655[0-2]\d|6553[0-5])"
)
REGEX = compile(
    r"(?:^|\D)?((" + FIRST_OCTET + r"\." + OCTET + r"\." + OCTET + r"\." + OCTET
    + r"):" + PORT + r")(?:\D|$)"
)

def log(message):
    timestamp = datetime.now().strftime("%H:%M:%S")
    print(f"[{timestamp}] {message}")

class Telegram:
    def __init__(self, channel: str, post: int, concurrency: int = 100) -> None:
        self.channel = channel
        self.post = post
        self.concurrency = concurrency
        self.semaphore = asyncio.Semaphore(concurrency)
        log(f"Initialized with channel: @{channel}, post: {post}, concurrency: {concurrency}")

    async def request(self, proxy: str, proxy_type: str):
        proxy_url = f"{proxy_type}://{proxy}"
        try:
            async with self.semaphore:
                connector = ProxyConnector.from_url(proxy_url)
                jar = aiohttp.CookieJar(unsafe=True)
                async with aiohttp.ClientSession(cookie_jar=jar, connector=connector) as session:
                    user_agent = UserAgent().random
                    headers = {
                        "referer": f"https://t.me/{self.channel}/{self.post}",
                        "user-agent": user_agent,
                    }
                    async with session.get(
                        f"https://t.me/{self.channel}/{self.post}?embed=1&mode=tme",
                        headers=headers,
                        timeout=aiohttp.ClientTimeout(total=5),
                    ) as embed_response:

                        if not jar.filter_cookies(embed_response.url).get("stel_ssid"):
                            log("ERROR: No cookies received")
                            return

                        views_token = search(
                            'data-view="([^"]+)"', await embed_response.text()
                        )

                        if not views_token:
                            log("ERROR: No view token found")
                            return

                        # Use a context manager so the response is always released;
                        # the endpoint answers the literal string "true" on success.
                        async with session.post(
                            "https://t.me/v/?views=" + views_token.group(1),
                            headers={
                                "referer": f"https://t.me/{self.channel}/{self.post}?embed=1&mode=tme",
                                "user-agent": user_agent,
                                "x-requested-with": "XMLHttpRequest",
                            },
                            timeout=aiohttp.ClientTimeout(total=5),
                        ) as views_response:
                            if (
                                views_response.status == 200
                                and await views_response.text() == "true"
                            ):
                                log("SUCCESS: View sent")
                            else:
                                log("FAILED: View not registered")

        except Exception as e:
            log(f"ERROR: Proxy connection failed - {proxy_type}://{proxy} - {str(e)[:50]}...")

        finally:
            if 'jar' in locals():
                jar.clear()

    async def run_proxies_continuous(self, lines: list, proxy_type: str):
        log(f"Starting continuous mode with {len(lines)} proxies of type {proxy_type}")
        
        tasks = [
            asyncio.create_task(
                self.request(proxy, proxy_type)
            ) for proxy in lines
        ]
        
        await asyncio.gather(*tasks)

    async def continuous_request(self, proxy: str, proxy_type: str):
        while True:
            await self.request(proxy, proxy_type)

    async def run_auto_continuous(self):
        log("Starting continuous auto mode")
        while True:
            auto = Auto()
            await auto.init()
            
            if not auto.proxies:
                log("No proxies found, retrying in 60 seconds...")
                await asyncio.sleep(60)
                continue
                
            log(f"Auto scraping complete. Found {len(auto.proxies)} proxies")
            
            tasks = [
                asyncio.create_task(
                    self.continuous_request(proxy, proxy_type)
                ) for proxy_type, proxy in auto.proxies
            ]
            
            try:
                await asyncio.gather(*tasks)
            except Exception as e:
                log(f"Error in auto mode: {str(e)}")
                log("All proxy tasks failed, rescanning...")

    async def run_rotated_continuous(self, proxy: str, proxy_type: str):
        log(f"Starting continuous rotated mode with proxy {proxy_type}://{proxy}")
        
        tasks = [
            asyncio.create_task(
                self.continuous_request(proxy, proxy_type)
            ) for _ in range(self.concurrency * 5)
        ]
        
        await asyncio.gather(*tasks)

class Auto:
    def __init__(self):
        self.proxies = []
        try:
            with open("auto/http.txt", "r") as file:
                self.http_sources = file.read().splitlines()
                log(f"Loaded {len(self.http_sources)} HTTP proxy sources")
                
            with open("auto/socks4.txt", "r") as file:
                self.socks4_sources = file.read().splitlines()
                log(f"Loaded {len(self.socks4_sources)} SOCKS4 proxy sources")
                
            with open("auto/socks5.txt", "r") as file:
                self.socks5_sources = file.read().splitlines()
                log(f"Loaded {len(self.socks5_sources)} SOCKS5 proxy sources")
                
        except FileNotFoundError as e:
            log(f"ERROR: Auto file not found - {str(e)}")
            exit(1)  # non-zero exit code: missing source files are a failure
        
        log("Starting proxy scraping from sources...")

    async def scrap(self, source_url, proxy_type):
        try:
            async with aiohttp.ClientSession() as session:
                headers = {"user-agent": UserAgent().random}
                log(f"Scraping {proxy_type} proxies from {source_url}")
                async with session.get(
                    source_url, headers=headers, timeout=aiohttp.ClientTimeout(total=15)
                ) as response:
                    html = await response.text()
                    matches = REGEX.finditer(html)
                    found_proxies = [(proxy_type, match.group(1)) for match in matches]
                    self.proxies.extend(found_proxies)
                    log(f"Found {len(found_proxies)} {proxy_type} proxies from {source_url}")

        except Exception as e:
            log(f"ERROR: Failed to scrape from {source_url} - {str(e)[:100]}")
            with open("error.txt", "a", encoding="utf-8", errors="ignore") as f:
                f.write(f"{source_url} -> {e}\n")

    async def init(self):
        tasks = []
        self.proxies.clear()
        sources_list = [
            (self.http_sources, "http"),
            (self.socks4_sources, "socks4"),
            (self.socks5_sources, "socks5"),
        ]

        for sources, proxy_type in sources_list:
            tasks.extend([self.scrap(source_url, proxy_type) for source_url in sources])

        await asyncio.gather(*tasks)
        log(f"Proxy scraping complete. Total proxies found: {len(self.proxies)}")

async def main():
    parser = ArgumentParser()
    parser.add_argument("-c", "--channel", dest="channel", help="Channel user Without @ (e.g: MyChannel1234)", type=str, required=True)
    parser.add_argument("-pt", "--post", dest="post", help="Post number (ID) (e.g: 1921)", type=int, required=True)
    parser.add_argument("-t", "--type", dest="type", help="Proxy type (e.g: http)", type=str, required=False)
    parser.add_argument("-m", "--mode", dest="mode", help="Proxy mode (list | auto | rotate)", type=str, required=True)
    parser.add_argument("-p", "--proxy", dest="proxy", help="Proxy file path or user:password@host:port", type=str, required=False)
    parser.add_argument("-cc", "--concurrency", dest="concurrency", help="Maximum concurrent requests", type=int, default=200)
    args = parser.parse_args()
    
    log(f"Telegram Auto Views started with mode: {args.mode}")
    api = Telegram(args.channel, args.post, args.concurrency)
    
    if args.mode[0] == "l":
        with open(args.proxy, "r") as file:
            lines = file.read().splitlines()
        log(f"Loaded {len(lines)} proxies from file {args.proxy}")
        await api.run_proxies_continuous(lines, args.type)

    elif args.mode[0] == "r":
        log(f"Starting rotated mode with single proxy: {args.proxy}")
        await api.run_rotated_continuous(args.proxy, args.type)

    else:
        await api.run_auto_continuous()

if __name__ == "__main__":
    log("Program started")
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        log("Program terminated by user")
    except Exception as e:
        log(f"Unhandled exception: {str(e)}")

================================================
FILE: requirements.txt
================================================
aiohttp==3.11.16
aiohttp_socks==0.10.1
fake-useragent==2.1.0
SYMBOL INDEX (13 symbols across 1 file)

FILE: main.py
  function log (line 20) | def log(message):
  class Telegram (line 24) | class Telegram:
    method __init__ (line 25) | def __init__(self, channel: str, post: int, concurrency: int = 100) ->...
    method request (line 32) | async def request(self, proxy: str, proxy_type: str):
    method run_proxies_continuous (line 87) | async def run_proxies_continuous(self, lines: list, proxy_type: str):
    method continuous_request (line 98) | async def continuous_request(self, proxy: str, proxy_type: str):
    method run_auto_continuous (line 103) | async def run_auto_continuous(self):
    method run_rotated_continuous (line 128) | async def run_rotated_continuous(self, proxy: str, proxy_type: str):
  class Auto (line 139) | class Auto:
    method __init__ (line 140) | def __init__(self):
    method scrap (line 161) | async def scrap(self, source_url, proxy_type):
    method init (line 180) | async def init(self):
  function main (line 195) | async def main():

About this extraction

This page contains the full source code of the TeaByte/telegram-views GitHub repository, formatted as plain text for AI agents and large language models: 8 files (22.6 KB, approximately 6.0k tokens) plus a symbol index of 13 extracted functions, classes, and methods. Extracted by GitExtract, built by Nikandr Surkov.