Repository: pyaf/code_for_fun
Branch: master
Commit: e1f9c35ffe53
Files: 23
Total size: 25.3 KB
Directory structure:
gitextract_kkopeqwb/
├── .gitignore
├── README.md
├── WhatsApp_img_notes_extractor/
│ ├── behind_the_scenes/
│ │ ├── __init__.py
│ │ ├── extract.ipynb
│ │ ├── model.py
│ │ ├── train.ipynb
│ │ └── weights.h5
│ ├── extract.py
│ ├── readme.md
│ └── requirements.txt
├── auto_lan_auth/
│ ├── README.md
│ ├── lan_auth.py
│ └── run.sh
├── csv_to_vcf.py
├── devrant.py
├── get_bhu_mails.py
├── github.py
├── hackerrank_medal.py
├── selenium/
│ ├── chromedriver
│ ├── gmail.py
│ └── send.py
├── set_router_ip.py
└── typeracer_plot.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
*.out
# C extensions
*.so
*.jpg
*.jpeg
# Distribution / packaging
.Python
visualize.ipynb
proxy/
results.txt
others/
voters_list.pdf
wish_me_HBD.py
fedena/
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# IPython Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# dotenv
.env
# virtualenv
venv/
ENV/
# Spyder project settings
.spyderproject
# Rope project settings
.ropeproject
old/
================================================
FILE: README.md
================================================
# Code For Fun
This repository contains scripts I wrote to automate the boring stuff around me.
# Content
- [WhatsApp_img_notes_extractor](#whatsapp_img_notes_extractor)
- [devrant](#devrant)
- [typeracer_plot](#typeracer_plot)
- [auto_lan_auth](#auto_lan_auth)
- [get_bhu_mails](#get_bhu_mails)
- [csv_to_vcf](#csv_to_vcf)
- [hackerrank_medal](#hackerrank_medalpy)
- [set_router_ip](#set_router_ip)
- [selenium/send](#seleniumsend)
- [selenium/gmail](#seleniumgmail)
### `WhatsApp_img_notes_extractor`
Code to extract study notes from the WhatsApp Images folder. I've trained a Convolutional Neural Network to recognize such images and move them out of the WhatsApp Images directory.
### `devrant`
A Python script to send a notification when you hit a given number of "++"s on [devRant](https://devrant.com) (requires `BeautifulSoup`).
### `typeracer_plot`
A Python script to plot the [TypeRacer](https://typeracer.com) speed statistics of a given user (requires `Matplotlib` and `BeautifulSoup`).
### `auto_lan_auth`
A Python script to automate the LAN Authentication process on IIT (BHU) campus.
### `get_bhu_mails`
A Python script to fetch the email addresses of all professors of Banaras Hindu University.
Just run `get_bhu_mails.py`: it crawls the contact section of the BHU website, downloads the documents with each department's details, extracts the email addresses with a regex, and writes them to `results.txt`.
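The regex step can be sketched like this (the sample text below is made up for illustration; the real script runs the same pattern over the downloaded documents):

```python
import re

# hypothetical snippet of text pulled from one department's contact document
sample = "Prof. A. Kumar <akumar@bhu.ac.in>, Office: 0542-1234; hod.phy@bhu.ac.in"

# the same pattern the script uses: word characters, dots and dashes around an '@'
emails = re.findall(r'[\w\.-]+@[\w\.-]+', sample)
print(emails)  # ['akumar@bhu.ac.in', 'hod.phy@bhu.ac.in']
```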
### `csv_to_vcf`
A Python script to create a VCF (vCard) file from names and phone numbers stored in a CSV file.
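Each row becomes a minimal vCard 2.1 entry shaped roughly like this (the name and number below are hypothetical; the actual field mapping lives in `csv_to_vcf.py`):

```python
def row_to_vcard(name, phone):
    """Render one (name, phone) pair as a minimal vCard 2.1 entry."""
    return ('BEGIN:VCARD\n'
            'VERSION:2.1\n'
            'FN:' + name + '\n'
            'TEL;CELL:' + phone + '\n'
            'END:VCARD\n')

card = row_to_vcard('KY alice', '+911234567890')
print(card)
```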
### `hackerrank_medal.py`
A Python script to find out which medal you earned in any HackerRank contest.
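Judging by the script's coefficients, the cutoffs are cumulative slices of the field: top 4% gold, the next 8% silver, the next 13% bronze. A quick sketch with a made-up contest size:

```python
total = 5000  # hypothetical number of contest participants

gold = 0.04 * total             # top 4% get gold
silver = gold + 0.08 * total    # next 8%  -> top 12% cumulative
bronze = silver + 0.13 * total  # next 13% -> top 25% cumulative

rank = 432  # hypothetical rank
medal = ('GOLD' if rank <= gold else
         'SILVER' if rank <= silver else
         'BRONZE' if rank <= bronze else None)
print(medal)  # SILVER
```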
### `set_router_ip`
A Python script to reset the IP of any D-Link router. The script checks which IPs in the hostel subnet are free, uses the given admin credentials to log in to the router's admin portal, creates a session, and sets the router IP to one that is currently available :)
### `selenium/send`
A Python script to send messages via WhatsApp Web, using Selenium.
### `selenium/gmail`
A Python script to send emails through the Gmail web interface, using Selenium.
Enjoy!
================================================
FILE: WhatsApp_img_notes_extractor/behind_the_scenes/__init__.py
================================================
================================================
FILE: WhatsApp_img_notes_extractor/behind_the_scenes/extract.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from keras.preprocessing.image import *\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import os\n",
"import random\n",
"from glob import glob\n",
"from model import CNN_model\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"model = CNN_model() # model architecture defined in model.py\n",
"# load trained weights\n",
"model.load_weights('weights.h5')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Check model performance on random images"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"img_path = random.choice(glob('WhatsApp Images/*'))\n",
"img = load_img(img_path, target_size=(124, 124, 3)) # this is a PIL image\n",
"x = img_to_array(img) / 255.0\n",
"y = model.predict(np.expand_dims(x, axis=0))\n",
"print(np.squeeze(y) > 0.5)\n",
"img"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prediction"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def predict(file_path):\n",
" '''\n",
" predict whether file is a notes image\n",
" '''\n",
" img = load_img(file_path, target_size=(124, 124, 3))\n",
" x = img_to_array(img) / 255. \n",
" y = model.predict(np.expand_dims(x, axis=0))\n",
" return np.squeeze(y) > 0.5"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# create 'notes' folder to store extracted notes images\n",
"if not os.path.exists('notes'):\n",
" os.mkdir('notes')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# extract notes from WhatsApp Images folder\n",
"\n",
"files = glob('WhatsApp Images/*.*') + glob('WhatsApp Images/Sent/*.*')\n",
"\n",
"for file_path in files:\n",
" if predict(file_path): # check if the file is one of the notes\n",
" file_name = file_path.split('/')[-1] # get file name\n",
" os.rename(file_path, 'notes/' + file_name) # move the file to 'notes' folder"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:ML]",
"language": "python",
"name": "conda-env-ML-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.4"
},
"widgets": {
"state": {},
"version": "1.1.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
================================================
FILE: WhatsApp_img_notes_extractor/behind_the_scenes/model.py
================================================
import keras
from keras.models import *
from keras.layers import *
from keras.preprocessing.image import *
import numpy as np
# ## Define Model
def CNN_model():
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(124, 124, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
================================================
FILE: WhatsApp_img_notes_extractor/behind_the_scenes/train.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from keras.preprocessing.image import *\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import os\n",
"import random\n",
"from glob import glob\n",
"from model import CNN_model\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data augmentation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"batch_size = 4\n",
"\n",
"# data augmentation\n",
"train_datagen = ImageDataGenerator(\n",
" rescale=1./255, \n",
" rotation_range=40,\n",
" width_shift_range=0.2,\n",
" height_shift_range=0.2,\n",
" shear_range=0.2,\n",
" zoom_range=0.2,\n",
" horizontal_flip=True,\n",
" fill_mode='nearest')\n",
"\n",
"train_generator = train_datagen.flow_from_directory(\n",
" 'data', # this is the target data directory\n",
" target_size=(124, 124), # all images will be resized to 124 x 124\n",
" batch_size=batch_size,\n",
" class_mode='binary') # since we use binary_crossentropy loss, we need binary labels"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x, y = next(train_generator)\n",
"x.shape, y.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model = CNN_model()\n",
"\n",
"model.fit_generator(\n",
" train_generator,\n",
" steps_per_epoch=2000 // batch_size,\n",
" epochs=5)\n",
"model.save_weights('weights.h5') # save weights after training"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cross check"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x, y = next(train_generator)\n",
"y = y.reshape(len(y), 1)\n",
"\n",
"y_pred = model.predict(x)\n",
"y_pred = (y_pred > 0.5) * 1\n",
"\n",
"y == y_pred"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"img_path = random.choice(glob('data/1/*'))\n",
"img = load_img(img_path, target_size=(124, 124, 3)) # this is a PIL image\n",
"x = img_to_array(img) / 255.0\n",
"y = model.predict(np.expand_dims(x, axis=0))\n",
"print(np.squeeze(y) > 0.5)\n",
"img"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:ML]",
"language": "python",
"name": "conda-env-ML-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.4"
},
"widgets": {
"state": {},
"version": "1.1.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
================================================
FILE: WhatsApp_img_notes_extractor/extract.py
================================================
# coding: utf-8
# ## Import dependencies
import os
import numpy as np
from glob import glob
from keras.preprocessing.image import *
from behind_the_scenes.model import CNN_model
print('Connect your smartphone to this system, mount your Internal Storage and note the absolute path of WhatsApp folder')
print('For example: "/run/user/1000/gvfs/mtp:host=%5Busb%3A003%2C002%5D/Internal storage/WhatsApp" (without quotes) \n')
WA_path = input('Enter absolute path of WhatsApp folder: \n')
WA_img_path = WA_path + '/Media/WhatsApp Images/'
WA_img_path = WA_img_path.replace('//', '/')  # normalize accidental double slashes
# no need to escape spaces: glob and os.rename take the path as-is
# define model
model = CNN_model()
# load trained weights
model.load_weights('behind_the_scenes/weights.h5')
notes_path = WA_img_path + 'notes/'
if not os.path.exists(notes_path):
os.mkdir(notes_path)
print('Created a "notes" folder in your WhatsApp Image folder to keep the notes')
def predict(file_path):
'''
predict whether file is a notes image
'''
img = load_img(file_path, target_size=(124, 124, 3))
x = img_to_array(img) / 255.
y = model.predict(np.expand_dims(x, axis=0))
return np.squeeze(y) > 0.5
# get file paths
files = glob(WA_img_path + '*.*') + glob(WA_img_path + 'Sent/*.*')
# extract notes from WhatsApp Images folder
for count, file_path in enumerate(files):
if not count % 10: print(str(count) + ' files examined')
if predict(file_path): # check if the file is one of the notes
file_name = file_path.split('/')[-1] # get file name
os.rename(file_path, notes_path + file_name) # move the file to 'notes' folder
================================================
FILE: WhatsApp_img_notes_extractor/readme.md
================================================
## Automate extraction of study notes from `WhatsApp Images`
We all end up with a hell of a lot of _images to be deleted_ at the end of each semester. I've trained a CNN model to recognize such images and extract them out of the WhatsApp Images directory :)
like this:
<img src="behind_the_scenes/image.jpeg" width="300px" height="500px" />
Images in red circles are notes, which you may wanna delete or extract to a folder (in case you are a maggu :P); the ones in blue circles are important and shouldn't be messed with.
Requirements:
* [Numpy](http://www.numpy.org/)
* [Keras](https://keras.io)
Instructions:
Install the dependencies with `pip install -r requirements.txt`. Connect your smartphone to your system, mount `Internal Storage`, and copy the absolute path of the `WhatsApp` folder (to find it, open a terminal in the `WhatsApp` folder and run `pwd`). Then run `python extract.py` and paste the copied path when prompted. The script will create a new folder named `notes` inside your `WhatsApp Images` folder and move the study-notes images into it.
I've trained the model on about 1000 images, using Keras' data-augmentation pipeline. The model is currently about 85% accurate on my dataset. Please feel free to add your own data and retrain to make it more accurate: create a `data` folder inside `behind_the_scenes` with two subfolders, `1` and `0`; put study notes in `1` and all other important images in `0`. See the `behind_the_scenes` folder for more info.
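The folder layout described above can be created in a couple of lines — a sketch, assuming it runs inside `behind_the_scenes`:

```python
import os

# '1' holds study-notes images, '0' holds all other images;
# Keras' flow_from_directory treats each subfolder name as a class label
for label in ('0', '1'):
    os.makedirs(os.path.join('data', label), exist_ok=True)
```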
Feel free to open an issue if you want to contribute or learn more about this project.
Enjoy!
================================================
FILE: WhatsApp_img_notes_extractor/requirements.txt
================================================
Keras==2.1.2
numpy==1.13.3
================================================
FILE: auto_lan_auth/README.md
================================================
## Automate IIT (BHU) LAN Authentication
This code automates the authentication process of IIT (BHU) LAN.
## Dependencies:
* Python 3
### Instructions:
To authenticate your system on LAN, set your username and password in `lan_auth.py` and run:
python lan_auth.py
If you wanna forget about LAN Authentication forever, run:
nohup ./run.sh &
This will check for LAN authentication every 10 seconds in the background. Add the above command as a startup application so the script runs in the background after every system boot.
Enjoy!
================================================
FILE: auto_lan_auth/lan_auth.py
================================================
import requests
from lxml import html
# set your username and password here
username = ''
password = ''
def crawl():
session_requests = requests.session()
url = 'http://www.msftncsi.com/ncsi.txt' # a light weight webpage
result = session_requests.get(url)
# check if already authenticated
if 'Microsoft' in result.text:
print('Already Authenticated!')
return
# extract hidden form token
tree = html.fromstring(result.text)
auth_token = list(set(tree.xpath("//input[@name='magic']/@value")))[0]
# prepare payload for form submission
payload = {
"4Tredir":'',
"username": username,
"password": password,
"magic": auth_token,
}
# submit the authentication form
resp = session_requests.post(
'http://192.168.249.1:1000/',
data = payload,
headers = dict(referer=url)
)
# parse the response
if 'Firewall authentication failed. Please try again.' in resp.text:
print('Authentication Failure, Please check your username and password.')
if 'Firewall Authentication Keepalive Window' in resp.text:
print('Authentication Successful, Enjoy!')
crawl()
================================================
FILE: auto_lan_auth/run.sh
================================================
#!/bin/sh
# this script will run `lan_auth.py` every 10 seconds
while true
do
	my_dir=$(dirname "$0")
	python "$my_dir/lan_auth.py"
sleep 10
done
================================================
FILE: csv_to_vcf.py
================================================
import csv

with open('KY_responses.csv', newline='') as csv_file:  # 'rb' breaks csv.reader on Python 3
    data = csv.reader(csv_file)
    with open('KY.vcf', 'w') as vcf:  # use a distinct name so the csv handle isn't shadowed
        for row in data:
            vcf.write('BEGIN:VCARD\nVERSION:2.1\n')
            vcf.write('FN:' + 'KY ' + row[1].split('@')[0] + '\n')
            vcf.write('TEL;CELL:' + row[5] + '\n')
            vcf.write('END:VCARD\n')
================================================
FILE: devrant.py
================================================
import requests
import subprocess
import time
from bs4 import BeautifulSoup

while True:
    url = "https://devrant.com/users/pyaf"
    response = requests.get(url)  # get html content
    soup = BeautifulSoup(response.text, "lxml")  # make a soup
    # parse the html to get the score
    score = soup.find("div", class_="profile-score").get_text()
    if int(score[1:]) >= 1000:  # '>=' so a jump past 1000 still triggers
        # send notification of 1k ++s :)
        subprocess.call(
            ["notify-send", "-i", "", "Har har mahadev! 1k reached!"])
        break
    print("Current score:", score)  # print score to know current status
    time.sleep(60)  # poll once a minute instead of hammering the server
================================================
FILE: get_bhu_mails.py
================================================
import re
import urllib.request
import httplib2
from bs4 import BeautifulSoup, SoupStrainer

http = httplib2.Http()
website = 'http://www.bhu.ac.in/telephone/'
doc_links = []
emails = []
status, response = http.request(website)
for link in BeautifulSoup(response, "lxml", parse_only=SoupStrainer('a')):
    if link.has_attr('href') and 'doc' in link['href']:
        doc_links.append(website + link['href'])
print("Got doc links..")
print(doc_links)
for doc_link in doc_links:
    try:
        # latin-1 maps every byte, so the regex can run over the raw .doc content
        doc_data = urllib.request.urlopen(doc_link).read().decode('latin-1')
        emails.extend(re.findall(r'[\w\.-]+@[\w\.-]+', doc_data))
        print("done", doc_link)
    except Exception as e:
        print(e)
print('Yo, got all emails')
print('writing data in results.txt')
with open('results.txt', 'w') as f:
    for email in emails:
        f.write(email + '\n')
        print(email)
================================================
FILE: github.py
================================================
# script to check for available github usernames
import requests

alphabet = 'abcdefghijklmnopqrstuvwxyz'
all_usernames = list(alphabet)
for i, username in enumerate(all_usernames):
    res = requests.get('https://github.com/' + username)
    if res.status_code == 404:  # a 404 profile page means the username is free
        print(username)
    if not i % 100:
        print(i)  # progress marker
================================================
FILE: hackerrank_medal.py
================================================
print("Enter the rank/total no. of participants: (ex: 432/5678)")
rank, total_number_of_participants = list(map(int, input().strip().split('/')))
gold = 0.04 * total_number_of_participants             # top 4%
silver = gold + 0.08 * total_number_of_participants    # next 8% (top 12% cumulative)
bronze = silver + 0.13 * total_number_of_participants  # next 13% (top 25% cumulative)
if rank <= gold:
print("\n\nGOLD!!!")
print("Ab to party hai BC\n\n")
elif rank <= silver:
print("\n\nSILVER!! \n\n")
print("Nobody remembers silver ppl :P\n\n")
elif rank <= bronze:
print("\n\nBRONZE \n\n")
print("Ja jile apni zindagi\n\n")
else:
print("\n\nMu mat dikha BC\n\n")
print("Gold: below %d rank" %(gold))
print("Silver: below %d rank" %(silver))
print("Bronze: below %d rank" %(bronze))
================================================
FILE: selenium/gmail.py
================================================
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import time
import os
# chromedriver is expected to sit next to this script;
# adjust the path below if yours lives elsewhere
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
driver = webdriver.Chrome(os.path.join(BASE_DIR, 'chromedriver'))
#open tab
# driver = webdriver.Firefox()
driver.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 't')
driver.get("https://gmail.com/")
wait = WebDriverWait(driver, 600)
x_arg = "//div[text()='COMPOSE']"
group_title = wait.until(EC.presence_of_element_located((
By.XPATH, x_arg)))
print(group_title)
group_title.click()
print("clicked")
to_xpath = "//textarea[@name='to']"
input_box = wait.until(EC.presence_of_element_located((
By.XPATH, to_xpath)))
to_email = "kalpesh.bansal.eee15@iitbhu.ac.in"
input_box.send_keys(to_email + Keys.ENTER)
time.sleep(1)
subj_xpath = "//input[@name='subjectbox']"
subject = "mail through python :D"
input_box = wait.until(EC.presence_of_element_located((
By.XPATH, subj_xpath)))
input_box.send_keys(subject + Keys.ENTER)
time.sleep(1)
msg_body = "Bro! \n Aja room par script is done :D\n\n\n Sent through python"
msg_xpath = "//div[@aria-label='Message Body']"
input_box = wait.until(EC.presence_of_element_located((
By.XPATH, msg_xpath)))
input_box.send_keys(msg_body + Keys.ENTER)
time.sleep(1)
file_input = driver.execute_script(
"var input = document.createElement('input');"
"input.type = 'file';"
"input.style.display = 'block';"
"if (document.body.childElementCount > 0) {"
" document.body.insertBefore("
" input, document.body.childNodes[0]"
" );"
"} else {"
" document.body.appendChild(input);"
"}"
"return input;"
)
def dispatch_file_drag_event(event_name, to, file_input_element):
    # 'to' is either the string 'document' or a WebElement to drop onto
    driver.execute_script(
        "var target = arguments[1] === 'document' ? document : arguments[1];"
        "var event = document.createEvent('CustomEvent');"
        "event.initCustomEvent(arguments[0], true, true, 0);"
        "event.dataTransfer = {"
        "    files: arguments[2].files"
        "};"
        "target.dispatchEvent(event);",
        event_name, to, file_input_element)
file_input.send_keys('/home/ags/test.txt')
dispatch_file_drag_event('dragenter', 'document', file_input)
drag_target = driver.find_element_by_xpath(
"//div[text()='Drop files here']"
)
dispatch_file_drag_event('drop', drag_target, file_input)
driver.execute_script(
"arguments[0].parentNode.removeChild(arguments[0]);", file_input
)
send_xpath = "//div[text()='Send']"
input_box = wait.until(EC.presence_of_element_located((
By.XPATH, send_xpath)))
input_box.send_keys(Keys.ENTER)
time.sleep(1)
================================================
FILE: selenium/send.py
================================================
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import time
import os
# chromedriver is expected to sit next to this script;
# adjust the path below if yours lives elsewhere
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
driver = webdriver.Chrome(os.path.join(BASE_DIR, 'chromedriver'))
#open tab
# driver = webdriver.Firefox()
driver.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 't')
driver.get("https://web.whatsapp.com/")
wait = WebDriverWait(driver, 600)
# your friend's name in your contact list
target = '"BHU Mantri"'
# Replace the below string with your own message
string = "lol"
x_arg = '//span[contains(@title,' + target + ')]'
print(x_arg)
group_title = wait.until(EC.presence_of_element_located((
By.XPATH, x_arg)))
print(group_title)
group_title.click()
print("clicked")
inp_xpath = '//div[@class="input"][@dir="auto"][@data-tab="1"]'
print(inp_xpath)
input_box = wait.until(EC.presence_of_element_located((
By.XPATH, inp_xpath)))
print(input_box)
for i in range(100):
input_box.send_keys(string + Keys.ENTER)
time.sleep(1)
================================================
FILE: set_router_ip.py
================================================
import os
import requests
import time
# Check which IPs are available
i = 2
while i != 254:
    IP = "10.9.1.%d" % i  # example subnet
    response = os.system("ping -c 1 " + IP)
    print(response)
    if response == 0:
        print(IP, 'is not available!')
    else:
        print('\n\nGotcha!')
        print(IP, 'is down!')
        print('Gonna use this IP for your router!')
        break
    i += 1
print('Logging In')
os.environ['NO_PROXY'] = '192.168.0.1'
os.environ.pop('http_proxy', None)   # pop() avoids a KeyError when the variable is unset
os.environ.pop('https_proxy', None)
session_requests = requests.session()
# session_requests.trust_env = False
login_url = 'http://192.168.0.1/login.cgi'
login_payload = {
'username': '<your username>',
'password': '<your password>',
'submit.htm?login.htm': 'Send'
}
session_requests.post(
login_url,
data = login_payload,
headers = dict(referer=login_url)
)
print('Logged In\nSession created')
final_payload = {
'save':'Apply Changes',
'staip_gateway' :'10.9.1.1',
'staip_ipaddr' :IP,
'staip_mtusize' :'1500',
'staip_netmask' :'255.255.255.0',
'submit.htm?wan.htm': 'Send',
'wanPPPoeConnection' :'0',
'wan_dns1' :'10.1.1.11',
'wanconn_type': '0',
'wanspeed':'0',
'wantype':'0'
}
final_url = 'http://192.168.0.1/form2Wan.cgi'
while True:
try:
result = session_requests.post(
final_url,
data = final_payload,
headers = dict(referer = final_url)
)
print(result)
print('IP Set to %s' % IP)
break
except Exception as e:
print('Error Occurred!!')
print(e)
print('Retrying\n')
time.sleep(1)
================================================
FILE: typeracer_plot.py
================================================
###########################################################
## Script to plot speed stats of any typeracer user
###########################################################
import requests
from bs4 import BeautifulSoup
from matplotlib import pyplot as plt
username = "agga_daku" # typeracer username
num_races = 1000  # number of most recent races to plot; too high a value may cause an error :/
url = "https://data.typeracer.com/pit/race_history?user=%s&n=%s&startDate=" % (username, num_races)
response = requests.get(url) # get html content
soup = BeautifulSoup(response.text, "lxml") # make a soup
speed_data = []
for row in soup.find('table', class_="scoresTable").findAll('tr'):
    try:
        speed_data.append(int(row.findAll('td')[1].contents[0].split(' ')[0]))
    except (IndexError, ValueError):  # skip the header and malformed rows
        pass
speed_data = list(reversed(speed_data))
# plot using matplotlib
plt.plot(range(len(speed_data)), speed_data)
plt.show()