[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n*.out\n# C extensions\n*.so\n*.jpg\n*.jpeg\n# Distribution / packaging\n.Python\nvisualize.ipynb\nproxy/\nresuts.txt\nothers/\nvoters_list.pdf\nwish_me_HBD.py\nfedena/\nenv/\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\n*.egg-info/\n.installed.cfg\n*.egg\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*,cover\n.hypothesis/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# IPython Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# dotenv\n.env\n\n# virtualenv\nvenv/\nENV/\n\n# Spyder project settings\n.spyderproject\n\n# Rope project settings\n.ropeproject\nold/\n"
  },
  {
    "path": "README.md",
    "content": "# Code For Fun\n\nThis repository contains my code scripts to automate boring stuff around me.\n\n# Content\n- [WhatsApp_img_notes_extractor](#WhatsApp_img_notes_extractor)\n- [devrant](#devrant)\n- [typeracer_plot](#typeracer_plot)\n- [auto_lan_auth](#auto_lan_auth)\n- [get_bhu_mails](#get_bhu_mails)\n- [csv_to_vcf](#csv_to_vcf)\n- [hackerrank_medal](#hackerrank_medal)\n- [set_router_ip](#set_router_ip)\n- [selenium/send](#selenium/send)\n- [selenium/gmail](#selenium/gmail)\n\n### `WhatsApp_img_notes_extractor`\n\nCode to extract study notes from WhatsApp Images folder. I've trained a Convolutional Neural Network model to predict such images and extract them out of WhatsApp Images directory.\n\n### `devrant`\n\nA Python script to send notifications when you hit a given number of \"++\"s on [devrant](devrant.com) (requires `BeautifulSoup`)\n\n### `typeracer_plot`\n\nA Python script to plot [typeracer](typeracer.com) speed statistics of a given user. (requires `Matplotlib` and `BeautifulSoup` )\n\n### `auto_lan_auth`\n\nA Python script to automate the LAN Authentication process on IIT (BHU) campus.\n\n### `get_bhu_mails`\n\nA python script to get the emails of all the professors of Banaras Hindu University.\n\nJust run `get_bhu_mails.py` and it will crawl the contact section of BHU website, download all the docs containing details of every department, then use regex to get the emails out and paste it in results.txt.\n\n### `csv_to_vcf`\n\nA python script to create a VCF file using data (name and phone-number) stored in a CSV file.\n\n### `hackerrank_medal.py`\n\nA python script to know your medal in any Hackerrank contest.\n\n### `set_router_ip`\n\nA python script to reset IP of any Dlink router. 
The script will check for available IPs of the hostel, and it will use the given admin credentials to log in to the router admin portal, create a user session, and set the router IP to the one currently available :)\n\n### `selenium/send`\n\nA python script to send messages using WhatsApp web, using selenium.\n\n### `selenium/gmail`\n\nA python script to send emails using the Gmail web interface, using selenium.\n\nEnjoy!\n"
  },
  {
    "path": "WhatsApp_img_notes_extractor/behind_the_scenes/__init__.py",
    "content": ""
  },
  {
    "path": "WhatsApp_img_notes_extractor/behind_the_scenes/extract.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Import dependencies\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"from keras.preprocessing.image import *\\n\",\n    \"import numpy as np\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"import os\\n\",\n    \"import random\\n\",\n    \"from glob import glob\\n\",\n    \"from model import CNN_model\\n\",\n    \"%matplotlib inline\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Model\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"collapsed\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"model = CNN_model() # model architecture defined in model.py\\n\",\n    \"# load trained weights\\n\",\n    \"model.load_weights('weights.h5')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Check model performance on random images\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"img_path = random.choice(glob('WhatsApp Images/*'))\\n\",\n    \"img = load_img(img_path, target_size=(124, 124, 3)) # this is a PIL image\\n\",\n    \"x = img_to_array(img) / 255.0\\n\",\n    \"y = model.predict(np.expand_dims(x, axis=0))\\n\",\n    \"print(np.squeeze(y) > 0.5)\\n\",\n    \"img\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Prediction\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"collapsed\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def predict(file_path):\\n\",\n    \"    '''\\n\",\n    \"    predict whether file is a notes image\\n\",\n    \"    '''\\n\",\n    
\"    img = load_img(file_path, target_size=(124, 124, 3))\\n\",\n    \"    x = img_to_array(img) / 255. \\n\",\n    \"    y = model.predict(np.expand_dims(x, axis=0))\\n\",\n    \"    return np.squeeze(y) > 0.5\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"collapsed\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# create 'notes' folder to store extracted notes images\\n\",\n    \"if not os.path.exists('notes'):\\n\",\n    \"    os.mkdir('notes')\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"collapsed\": true\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# extract notes from WhatsApp Images folder\\n\",\n    \"\\n\",\n    \"files = glob('WhatsApp Images/*.*') + glob('WhatsApp Images/Sent/*.*')\\n\",\n    \"\\n\",\n    \"for file_path in files:\\n\",\n    \"    if predict(file_path): # check if the file is one of the notes\\n\",\n    \"        file_name = file_path.split('/')[-1] # get file name\\n\",\n    \"        os.rename(file_path, 'notes/' + file_name) # move the file to 'notes' folder\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"collapsed\": true\n   },\n   \"outputs\": [],\n   \"source\": []\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Python [conda env:ML]\",\n   \"language\": \"python\",\n   \"name\": \"conda-env-ML-py\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.5.4\"\n  },\n  \"widgets\": {\n   \"state\": {},\n   \"version\": \"1.1.2\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 2\n}\n"
  },
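The last cells of `extract.ipynb` predict each file and move matches into a `notes/` folder. That move step can be isolated into a small helper; a sketch under the assumption that the model's decision is passed in as a plain `predicate` callable (`move_notes` is a hypothetical name, not part of the notebook):

```python
import os

def move_notes(files, dest, predicate):
    """Move every file for which predicate(path) is True into dest/."""
    os.makedirs(dest, exist_ok=True)
    moved = []
    for path in files:
        if predicate(path):
            target = os.path.join(dest, os.path.basename(path))
            os.rename(path, target)  # same-filesystem move
            moved.append(target)
    return moved
```

In the notebook this would be called as `move_notes(files, 'notes', predict)`.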
  {
    "path": "WhatsApp_img_notes_extractor/behind_the_scenes/model.py",
    "content": "import keras\nfrom keras.models import *\nfrom keras.layers import *\nfrom keras.preprocessing.image import *\nimport numpy as np\n\n# ## Define Model\ndef CNN_model():\n    \n    model = Sequential()\n    model.add(Conv2D(32, (3, 3), input_shape=(124, 124, 3)))\n    model.add(Activation('relu'))\n    model.add(MaxPooling2D(pool_size=(2, 2)))\n\n    model.add(Conv2D(32, (3, 3)))\n    model.add(Activation('relu'))\n    model.add(MaxPooling2D(pool_size=(2, 2)))\n\n    model.add(Conv2D(64, (3, 3)))\n    model.add(Activation('relu'))\n    model.add(MaxPooling2D(pool_size=(2, 2)))\n\n    model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors\n    model.add(Dense(64))\n    model.add(Activation('relu'))\n    model.add(Dropout(0.5))\n    model.add(Dense(1))\n    model.add(Activation('sigmoid'))\n\n    model.compile(loss='binary_crossentropy',\n                  optimizer='adam',\n                  metrics=['accuracy'])\n    \n    return model"
  },
  {
    "path": "WhatsApp_img_notes_extractor/behind_the_scenes/train.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Import Dependencies\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"from keras.preprocessing.image import *\\n\",\n    \"import numpy as np\\n\",\n    \"import matplotlib.pyplot as plt\\n\",\n    \"import os\\n\",\n    \"import random\\n\",\n    \"from glob import glob\\n\",\n    \"from model import CNN_model\\n\",\n    \"%matplotlib inline\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Data augmentation\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"batch_size = 4\\n\",\n    \"\\n\",\n    \"# data augmentation\\n\",\n    \"train_datagen = ImageDataGenerator(\\n\",\n    \"        rescale=1./255,    \\n\",\n    \"        rotation_range=40,\\n\",\n    \"        width_shift_range=0.2,\\n\",\n    \"        height_shift_range=0.2,\\n\",\n    \"        shear_range=0.2,\\n\",\n    \"        zoom_range=0.2,\\n\",\n    \"        horizontal_flip=True,\\n\",\n    \"        fill_mode='nearest')\\n\",\n    \"\\n\",\n    \"train_generator = train_datagen.flow_from_directory(\\n\",\n    \"        'data',  # this is the target data directory\\n\",\n    \"        target_size=(124, 124),  # all images will be resized to 124 x 124\\n\",\n    \"        batch_size=batch_size,\\n\",\n    \"        class_mode='binary')  # since we use binary_crossentropy loss, we need binary labels\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"x, y = next(train_generator)\\n\",\n    \"x.shape, y.shape\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Train the model\"\n   ]\n  },\n  {\n 
  \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"model = CNN_model()\\n\",\n    \"\\n\",\n    \"model.fit_generator(\\n\",\n    \"        train_generator,\\n\",\n    \"        steps_per_epoch=2000 // batch_size,\\n\",\n    \"        epochs=5)\\n\",\n    \"model.save_weights('weights.h5')  # save weights after training\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Cross check\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"x, y = next(train_generator)\\n\",\n    \"y = y.reshape(len(y), 1)\\n\",\n    \"\\n\",\n    \"y_pred = model.predict(x)\\n\",\n    \"y_pred = (y_pred > 0.5) * 1\\n\",\n    \"\\n\",\n    \"y == y_pred\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"img_path = random.choice(glob('data/1/*'))\\n\",\n    \"img = load_img(img_path, target_size=(124, 124, 3)) # this is a PIL image\\n\",\n    \"x = img_to_array(img) / 255.0\\n\",\n    \"y = model.predict(np.expand_dims(x, axis=0))\\n\",\n    \"print(np.squeeze(y) > 0.5)\\n\",\n    \"img\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"collapsed\": true\n   },\n   \"outputs\": [],\n   \"source\": []\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Python [conda env:ML]\",\n   \"language\": \"python\",\n   \"name\": \"conda-env-ML-py\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.5.4\"\n  },\n  \"widgets\": {\n   \"state\": {},\n   \"version\": 
\"1.1.2\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 2\n}\n"
  },
  {
    "path": "WhatsApp_img_notes_extractor/extract.py",
    "content": "# coding: utf-8\n\n# ## Import dependencies\nimport os\nimport numpy as np\nfrom glob import glob\nfrom keras.preprocessing.image import *\nfrom behind_the_scenes.model import CNN_model\n\n\nprint('Connect your smartphone to this system, mount your Internal Storage and note the absolute path of WhatsApp folder')\nprint('For example: \"/run/user/1000/gvfs/mtp:host=%5Busb%3A003%2C002%5D/Internal storage/WhatsApp\" (without quotes) \\n')\nWA_path = input('Enter absolute path of WhatsApp folder: \\n')\n\nWA_img_path = WA_path + '/Media/WhatsApp Images/'\nWA_img_path.replace('//', '/') # shit happens\nWA_img_path.replace(' ', '\\\\ ') # replace spaces with their escaped versions\n\n# define model\nmodel = CNN_model()\n# load trained weights\nmodel.load_weights('behind_the_scenes/weights.h5')\n\nnotes_path = WA_img_path + 'notes/'\nif not os.path.exists(notes_path):\n    os.mkdir(notes_path)\n\nprint('Created a \"notes\" folder in your WhatsApp Image folder to keep the notes')\n\ndef predict(file_path):\n    '''\n    predict whether file is a notes image\n    '''\n    img = load_img(file_path, target_size=(124, 124, 3))\n    x = img_to_array(img) / 255. \n    y = model.predict(np.expand_dims(x, axis=0))\n    return np.squeeze(y) > 0.5\n\n\n# get file paths \nfiles = glob(WA_img_path + '*.*') + glob(WA_img_path + 'Sent/*.*')\n\n# extract notes from WhatsApp Images folder\n\nfor count, file_path in enumerate(files):\n    if not count % 10: print(str(count) + ' files examined')\n    if predict(file_path): # check if the file is one of the notes\n        file_name = file_path.split('/')[-1] # get file name\n        os.rename(file_path, notes_path + file_name) # move the file to 'notes' folder\n\n"
  },
  {
    "path": "WhatsApp_img_notes_extractor/readme.md",
    "content": "## Automate extraction of study notes from `WhatsApp Images`\n\nWe all end up with hell lot of _images to be deleted_ at the end of each semester. I've trained a CNN model to predict such images and extract them out of WhatsApp Images directory :)\n\nlike this: \n\n<img src=\"behind_the_scenes/image.jpeg\" width=\"300px\" height=\"500px\" />\n\nImages in red circles are notes which you may wanna delete or extract them to a folder (in case you are a maggu :P) and blue circles are important ones which shouldn't be messed up with.\n\nRequirements:\n\n* [Numpy](http://www.numpy.org/)\n* [Keras](https://keras.io)\n\nInstructions:\n\nInstall dependencies using `pip install -r requirements.txt`. Connect your Smartphone to your system, mount `Internal Storage` and copy the absolute path to the WhatsApp folder, to know the absolute path open a terminal in `WhatsApp` folder and run `pwd` command. Run the `extract.py` script by `python extract.py` and paste the copied path when asked to. The script will create a new folder named `notes` in your `WhatsApp Image` folder and move the study notes images to it.\n\nI've trained the model on about 1000 images and using Keras' data augmentation pipeline. Currently the model is 85% accurate on my dataset. Please feel free to add your own data and train the model on it to make the model more accurate. To add your own data, create a `data` folder in `behind_the_scenes` folder, create two subfolders `1` and `0` inside `data`, in `1` put study notes and put all other important images in `0`. See `behind_the_scenes` folder for more info.\n\nFeel free to open up an issue if you wanna contribute or know something about this project.\n\nEnjoy!\n\n"
  },
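The readme above describes thresholding the model's sigmoid output at 0.5 to decide whether an image is a notes photo. A minimal sketch of just that decision step with NumPy (`is_notes_image` is a hypothetical helper; the score array stands in for `model.predict` output):

```python
import numpy as np

def is_notes_image(score, threshold=0.5):
    """Binary decision over a single sigmoid output, shaped (1, 1)."""
    return bool(np.squeeze(score) > threshold)

# model.predict returns an array like [[0.85]] for one image
print(is_notes_image(np.array([[0.85]])))  # True
print(is_notes_image(np.array([[0.12]])))  # False
```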
  {
    "path": "WhatsApp_img_notes_extractor/requirements.txt",
    "content": "Keras==2.1.2\nnumpy==1.13.3"
  },
  {
    "path": "auto_lan_auth/README.md",
    "content": "## Automate IIT (BHU) LAN Authentication \n\nThis code automates the authentication process of IIT (BHU) LAN.\n\n## Dependencies:\n\n* Python 3\n\n### Instructions:\n\nTo authenticate your system on LAN, set your username and password in `lan_auth.py` and run:\n\n\tpython lan_auth.py\n\nIf you wanna forget about LAN Authentication forever, run:\n\n\tnohup ./run.sh &\n\nThis will check for LAN Authentication every 10 seconds in the background. Add the above command as startup application to run this script in the background after every system boot.\n\nEnjoy!\n\n\n\n"
  },
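`lan_auth.py` works by pulling the hidden `magic` token out of the portal's login form before posting the credentials. The script does this with an lxml XPath query; an equivalent stdlib-only sketch using a regex (the sample HTML and the `extract_magic` helper are made up for illustration):

```python
import re

def extract_magic(page_html):
    """Pull the hidden 'magic' token out of the portal's login form."""
    m = re.search(r'name=[\'"]magic[\'"]\s+value=[\'"]([^\'"]+)', page_html)
    return m.group(1) if m else None

sample = '<input type="hidden" name="magic" value="0a1b2c3d">'
print(extract_magic(sample))  # 0a1b2c3d
```

The extracted token then goes into the POST payload alongside `username` and `password`, exactly as the script does.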
  {
    "path": "auto_lan_auth/lan_auth.py",
    "content": "import requests\nfrom lxml import html\nfrom bs4 import BeautifulSoup\n\n# set your username and password here\nusername = ''\npassword = ''\n\ndef crawl():\n    session_requests = requests.session()\n    url = 'http://www.msftncsi.com/ncsi.txt' # a light weight webpage\n    result = session_requests.get(url)\n    \n    # check if already authenticated\n    if 'Microsoft' in result.text:\n        print('Already Authenticated!')\n        return\n\n    # extract hidden form token\n    tree = html.fromstring(result.text)\n    auth_token = list(set(tree.xpath(\"//input[@name='magic']/@value\")))[0]\n\n    # prepare payload for form submission\n    payload = {\n        \"4Tredir\":'',\n        \"username\": username, \n        \"password\": password, \n        \"magic\": auth_token,\n    }\n    # submit the authentication form\n    resp = session_requests.post(\n            'http://192.168.249.1:1000/', \n            data = payload, \n            headers = dict(referer=url)\n        )\n    \n    # parse the response\n    if 'Firewall authentication failed. Please try again.' in resp.text:\n        print('Authentication Failure, Please check your username and password.')\n    if 'Firewall Authentication Keepalive Window' in resp.text:\n        print('Authentication Successful, Enjoy!')\n\ncrawl()\n"
  },
  {
    "path": "auto_lan_auth/run.sh",
    "content": "#!/bin/sh\n\n# this script will run  `lan_auth.py` every 10 seconds\nwhile true\ndo\n\tmy_dir=`dirname $0`\n\tpython $my_dir/lan_auth.py\n    sleep 10\ndone\n"
  },
  {
    "path": "csv_to_vcf.py",
    "content": "import csv\n\nwith open('KY_responses.csv', 'rb') as f:\n    data = csv.reader(f)\n    with open('KY.vcf', 'w') as f:\n        for row in data:\n            f.write('BEGIN:VCARD\\nVERSION:2.1\\n')\n            f.write('FN:' + 'KY ' + row[1].split('@')[0] +'\\n')\n            f.write('TEL;CELL:' + row[5] + '\\n')\n            f.write('END:VCARD\\n')\n"
  },
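The vCard layout that `csv_to_vcf.py` writes can be factored into a small helper, which makes the format easy to eyeball; a sketch (`make_vcard` is a hypothetical name, and the name and number below are made up):

```python
def make_vcard(name, phone):
    """Render one vCard 2.1 entry in the same shape csv_to_vcf.py writes."""
    return ('BEGIN:VCARD\nVERSION:2.1\n'
            'FN:' + name + '\n'
            'TEL;CELL:' + phone + '\n'
            'END:VCARD\n')

print(make_vcard('KY alice', '+911234567890'))
```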
  {
    "path": "devrant.py",
    "content": "import requests\nimport subprocess\nfrom bs4 import BeautifulSoup\n\nwhile True:\n    url = \"https://devrant.com/users/pyaf\"\n    response = requests.get(url)  # get html content\n    soup = BeautifulSoup(response.text, \"lxml\")  # make a soup\n    # parse the html to get score\n    score = soup.find(\"div\", class_=\"profile-score\").get_text()\n    if int(score[1:]) == 1000:\n        # send notification of 1k ++s :)\n        subprocess.call(\n            [\"notify-send\", \"-i\", \"\", \"Har har mahadev! 1k reached!\"])\n        break\n    print(\"Current score:\", score)  # print score to know current status\n"
  },
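`devrant.py` reads the profile score as a string like `"+996"` and strips the leading `+` before comparing against the target. That parsing step in isolation (`parse_score` is a hypothetical helper):

```python
def parse_score(text):
    """Convert a devRant profile score string like '+996' to an int."""
    return int(text.lstrip('+'))

print(parse_score('+996'))  # 996
```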
  {
    "path": "get_bhu_mails.py",
    "content": "import httplib2\nimport urllib2\nfrom bs4 import BeautifulSoup, SoupStrainer\nimport re\n\nhttp = httplib2.Http()\nwebsite = 'http://www.bhu.ac.in/telephone/'\ndoc_links = []\nemails = []\n\nstatus, response = http.request(website)\n\nfor link in BeautifulSoup(response,\"lxml\", parse_only=SoupStrainer('a')):\n    if link.has_attr('href'):\n        if 'doc' in link['href']:\n            doc_links.append(website+link['href'])\n\nprint \"Got doc links..\"\nprint doc_links\n\nfor i in doc_links:\n    try:\n        doc = urllib2.urlopen(i)\n        doc_data = doc.read()\n        match = match = re.findall(r'[\\w\\.-]+@[\\w\\.-]+', doc_data)\n        emails.extend(match)\n        print \"done\", i\n    except Exception as e:\n        print e\n\nprint 'Yo, got all emails'\n\nprint 'writing data in results.txt'\nwith open('results.txt', \"w\") as f:\n    for email in emails:\n        f.write(email)\n        print(email)\n        f.write(\"\\n\")\n"
  },
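The email extraction in `get_bhu_mails.py` boils down to one regex over the downloaded document text. The same pattern, isolated into a helper (`extract_emails` and the sample text are illustrative):

```python
import re

EMAIL_RE = re.compile(r'[\w\.-]+@[\w\.-]+')

def extract_emails(text):
    """All email-like substrings in a blob of document text."""
    return EMAIL_RE.findall(text)

sample = 'Head: head.cse@bhu.ac.in, Office: office@itbhu.ac.in (room 12)'
print(extract_emails(sample))  # ['head.cse@bhu.ac.in', 'office@itbhu.ac.in']
```

Note the pattern is deliberately loose: it can pick up a trailing dot if an address ends a sentence, which is good enough for a contact-list scrape.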
  {
    "path": "github.py",
    "content": "# script to check for available github usernames\n\nimport requests\n\nalphabet = 'abcdefghijklmnopqrstuvwxyz'\nall_usernames = [x for x in alphabet]\n\nfor i, username in enumerate(all_usernames):\n    res = requests.get('https://github.com/'+username)\n    if res.status_code == 404:\n        print(username)\n    if not i % 100:\n        print(i)\n"
  },
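`github.py` only tries single-letter names. Generating candidate usernames of any length is a one-liner with `itertools.product`; a sketch (`candidates` is a hypothetical helper):

```python
from itertools import product
from string import ascii_lowercase

def candidates(length):
    """All lowercase usernames of the given length, in lexicographic order."""
    return [''.join(chars) for chars in product(ascii_lowercase, repeat=length)]

print(len(candidates(1)))  # 26
print(len(candidates(2)))  # 676
```

Checking each candidate would then reuse the same `requests.get` + 404 test as the script above.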
  {
    "path": "hackerrank_medal.py",
    "content": "print(\"Enter the rank/total no. of participants: (ex: 432/5678)\")\nrank, total_number_of_participants = list(map(int, input().strip().split('/')))\n\ngold = 0.04 * total_number_of_participants\n\nsilver = gold + 0.08 * total_number_of_participants\n\nbronze = gold + silver + .13 * total_number_of_participants\n\nif rank <= gold:\n    print(\"\\n\\nGOLD!!!\")\n    print(\"Ab to party hai BC\\n\\n\")\nelif rank <= silver:\n    print(\"\\n\\nSILVER!! \\n\\n\")\n    print(\"Nobody remembers silver ppl :P\\n\\n\")\nelif rank <= bronze:\n    print(\"\\n\\nBRONZE \\n\\n\")\n    print(\"Ja jile apni zindagi\\n\\n\")\nelse:\n    print(\"\\n\\nMu mat dikha BC\\n\\n\")\n\nprint(\"Gold: below %d rank\" %(gold))\nprint(\"Silver: below %d rank\" %(silver))\nprint(\"Bronze: below %d rank\" %(bronze))\n"
  },
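The medal logic above uses cumulative cutoffs: the top 4% get gold, the next 8% silver, and the next 13% bronze. The same arithmetic as a reusable function (`medal` is a hypothetical helper; the percentages are taken from the script):

```python
def medal(rank, total):
    """Cumulative cutoffs: top 4% gold, next 8% silver, next 13% bronze."""
    gold = 0.04 * total
    silver = gold + 0.08 * total
    bronze = silver + 0.13 * total
    if rank <= gold:
        return 'gold'
    if rank <= silver:
        return 'silver'
    if rank <= bronze:
        return 'bronze'
    return None

print(medal(40, 1000))   # gold  (cutoff is rank 40)
print(medal(200, 1000))  # bronze (cutoff is rank 250)
```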
  {
    "path": "selenium/gmail.py",
    "content": "from selenium import webdriver\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.common.by import By\nimport time\nimport os\n# Replace below path with the absolute path\n# to chromedriver in your computer\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\n\ndriver = webdriver.Chrome(os.path.join(BASE_DIR, 'chromedriver'))\n#open tab\n# driver = webdriver.Firefox()\ndriver.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 't')\n\ndriver.get(\"https://gmail.com/\")\nwait = WebDriverWait(driver, 600)\n\nx_arg = \"//div[text()='COMPOSE']\"\n\ngroup_title = wait.until(EC.presence_of_element_located((\n    By.XPATH, x_arg)))\nprint(group_title)\ngroup_title.click()\nprint(\"clicked\")\n\nto_xpath = \"//textarea[@name='to']\"\ninput_box = wait.until(EC.presence_of_element_located((\n    By.XPATH, to_xpath)))\nto_email = \"kalpesh.bansal.eee15@iitbhu.ac.in\"\ninput_box.send_keys(to_email + Keys.ENTER)\ntime.sleep(1)\n\nsubj_xpath = \"//input[@name='subjectbox']\"\nsubject = \"mail through python :D\"\ninput_box = wait.until(EC.presence_of_element_located((\n    By.XPATH, subj_xpath)))\ninput_box.send_keys(subject + Keys.ENTER)\ntime.sleep(1)\n\nmsg_body = \"Bro! 
\\n Aja room par script is done :D\\n\\n\\n Sent through python\"\nmsg_xpath = \"//div[@aria-label='Message Body']\"\ninput_box = wait.until(EC.presence_of_element_located((\n    By.XPATH, msg_xpath)))\ninput_box.send_keys(msg_body + Keys.ENTER)\ntime.sleep(1)\n\nfile_input = driver.execute_script(\n \"var input = document.createElement('input');\"\n \"input.type = 'file';\"\n \"input.style.display = 'block';\"\n \"if (document.body.childElementCount > 0) {\"\n \" document.body.insertBefore(\"\n \" input, document.body.childNodes[0]\"\n \" );\"\n \"} else {\"\n \" document.body.appendChild(input);\"\n \"}\"\n \"return input;\"\n)\ndef dispatch_file_drag_event(event_name, to, file_input_element):\n \tdriver.execute_script(\n\t \"var event = document.createEvent('CustomEvent');\"\n\t \"event.initCustomEvent(arguments[0], true, true 0);\"\n\t \"event.dataTransfer = {\"\n\t \" files: arguments[1].files\"\n\t \"};\"\n\t \"arguments[2].dispatchEvent(event);\",\n\t event_name, file_input_element, to)\n \t\nfile_input.send_keys('/home/ags/test.txt')\ndispatch_file_drag_event('dragenter', 'document', file_input)\ndrag_target = driver.find_element_by_xpath(\n \"//div[text()='Drop files here']\"\n)\ndispatch_file_drag_event('drop', drag_target, file_input)\ndriver.execute_script(\n \"arguments[0].parentNode.removeChild(arguments[0]);\", file_input\n)\n\n\nsend_xpath = \"//div[text()='Send']\"\ninput_box = wait.until(EC.presence_of_element_located((\n    By.XPATH, send_xpath)))\ninput_box.send_keys(Keys.ENTER)\n\ntime.sleep(1)\n"
  },
  {
    "path": "selenium/send.py",
    "content": "from selenium import webdriver\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.common.by import By\nimport time\nimport os\n# Replace below path with the absolute path\n# to chromedriver in your computer\nBASE_DIR = os.path.dirname(os.path.abspath(__file__))\n\ndriver = webdriver.Chrome(os.path.join(BASE_DIR, 'chromedriver'))\n#open tab\n# driver = webdriver.Firefox()\ndriver.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 't')\n\ndriver.get(\"https://web.whatsapp.com/\")\nwait = WebDriverWait(driver, 600)\n\n# your friend's name in your contact list\ntarget = '\"BHU Mantri\"'\n\n# Replace the below string with your own message\nstring = \"lol\"\n\nx_arg = '//span[contains(@title,' + target + ')]'\nprint(x_arg)\ngroup_title = wait.until(EC.presence_of_element_located((\n    By.XPATH, x_arg)))\nprint(group_title)\ngroup_title.click()\nprint(\"clicked\")\ninp_xpath = '//div[@class=\"input\"][@dir=\"auto\"][@data-tab=\"1\"]'\nprint(inp_xpath)\ninput_box = wait.until(EC.presence_of_element_located((\n    By.XPATH, inp_xpath)))\nprint(input_box)\nfor i in range(100):\n    input_box.send_keys(string + Keys.ENTER)\n    time.sleep(1)\n"
  },
  {
    "path": "set_router_ip.py",
    "content": "import os\nimport requests\nimport time\n\n#Check which IPs are available\ni = 2\nwhile i!=254:\n    IP = \"10.9.1.%d\" %i #example\n    response = os.system(\"ping -c 1 \" + IP)\n    print(response)\n\n    if response == 0:\n        print(IP, 'is not available!')\n    else:\n        print('\\n\\n Gotchha!')\n        print(IP, 'is down!')\n        print('Gonna use this IP for your router!')\n        break\n    i+=1\n\nprint('Logging In')\nos.environ['NO_PROXY'] = '192.168.0.1'\ndel os.environ['http_proxy']\ndel os.environ['https_proxy']\n\nsession_requests = requests.session()\n# session_requests.trust_env = False\nlogin_url = 'http://192.168.0.1/login.cgi'\nlogin_payload = {\n    'username': '<your username>',\n    'password': '<your password>',\n    'submit.htm?login.htm': 'Send'\n}\n\nsession_requests.post(\n    login_url,\n    data = login_payload,\n    headers = dict(referer=login_url)\n)\n\nprint('Logged In\\nSession created')\nfinal_payload = {\n    'save':'Apply Changes',\n    'staip_gateway' :'10.9.1.1',\n    'staip_ipaddr' :IP,\n    'staip_mtusize' :'1500',\n    'staip_netmask' :'255.255.255.0',\n    'submit.htm?wan.htm': 'Send',\n    'wanPPPoeConnection' :'0',\n    'wan_dns1' :'10.1.1.11',\n    'wanconn_type': '0',\n    'wanspeed':'0',\n    'wantype':'0'\n\n}\nfinal_url = 'http://192.168.0.1/form2Wan.cgi'\nwhile True:\n    try:\n        result = session_requests.post(\n            final_url,\n            data = final_payload,\n            headers = dict(referer = final_url)\n        )\n        print(result)\n        print('IP Set to %s' % IP)\n        break\n    except Exception as e:\n        print('Error Occured!!')\n        print(e)\n        print('Retrying\\n')\n        time.sleep(1)\n"
  },
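`set_router_ip.py` scans the subnet by pinging each candidate address until one fails to answer. The scan can be split into two small helpers; a sketch assuming a Linux `ping` (the `10.9.1.` prefix is the example subnet from the script, `-W 1` caps each probe at one second, and both helper names are hypothetical):

```python
import subprocess

def candidate_ips(prefix='10.9.1.', start=2, end=254):
    """Host addresses to probe, e.g. 10.9.1.2 .. 10.9.1.253."""
    return [prefix + str(i) for i in range(start, end)]

def is_up(ip):
    """True when a single ping gets a reply (exit status 0)."""
    return subprocess.call(['ping', '-c', '1', '-W', '1', ip],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

free = next((ip for ip in candidate_ips() if not is_up(ip)), None)
```

Using `subprocess.call` with a suppressed stdout keeps the scan quiet, unlike `os.system("ping ...")` which prints every probe.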
  {
    "path": "typeracer_plot.py",
    "content": "###########################################################\n## Scipt to plot speed stats of any typeracer user\n###########################################################\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom matplotlib import pyplot as plt\n\nusername = \"agga_daku\"  # typeracer username\nnum_races = 1000  # number of last races to plot, too high value may result in error :/\nurl = \"https://data.typeracer.com/pit/race_history?user=%s&n=%s&startDate=\" % (username, num_races)\n\nresponse = requests.get(url)  # get html content\nsoup = BeautifulSoup(response.text, \"lxml\")  # make a soup\nspeed_data = []\nfor row in soup.find('table', class_=\"scoresTable\").findAll('tr'):\n    try:\n        speed_data.append(int(row.findAll('td')[1].contents[0].split(' ')[0]))\n    except:\n        pass\nspeed_data = list(reversed(speed_data))\n# plot using matplotlib\nplt.plot(range(len(speed_data)), speed_data)\nplt.show()\n"
  }
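`typeracer_plot.py` pulls the WPM figure out of each race-history table cell by splitting on the first space (cells look like `103 WPM`). That parsing step in isolation (`parse_wpm` is a hypothetical helper):

```python
def parse_wpm(cell_text):
    """Integer WPM from a race-history table cell like '103 WPM'."""
    return int(cell_text.split(' ')[0])

print(parse_wpm('103 WPM'))  # 103
```

Header rows and malformed cells raise `ValueError` or `IndexError` here, which is why the script wraps the call in a try/except and skips those rows.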
]