[
  {
    "path": ".gitignore",
    "content": "# GITHUB BOILERPLATE GIT IGNORE\n# https://github.com/github/gitignore/blob/master/Python.gitignore\n\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\n.static_storage/\n.media/\nlocal_settings.py\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n\n# OTHER GIT IGNORE\n\n#database\nmarket_prices.db\n\n#OS files\n.DS_Store\n.idea/\n\n#Twitter Configuration File\nconfig.py\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nIn the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.\n\n## Our Standards\n\nExamples of behavior that contributes to creating a positive environment include:\n\n* Using welcoming and inclusive language\n* Being respectful of differing viewpoints and experiences\n* Gracefully accepting constructive criticism\n* Focusing on what is best for the community\n* Showing empathy towards other community members\n\nExamples of unacceptable behavior by participants include:\n\n* The use of sexualized language or imagery and unwelcome sexual attention or advances\n* Trolling, insulting/derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or electronic address, without explicit permission\n* Other conduct which could reasonably be considered inappropriate in a professional setting\n\n## Our Responsibilities\n\nProject maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.\n\nProject maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.\n\n## Scope\n\nThis Code of Conduct applies both within project spaces and in public spaces when an individual is representing 
the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at bshaw19@vt.edu. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.\n\nProject maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]\n\n[homepage]: http://contributor-covenant.org\n[version]: http://contributor-covenant.org/version/1/4/\n"
  },
  {
    "path": "Data_Grabber.py",
"content": "import os\nimport ccxt\nimport time\nimport datetime\nimport dash\nfrom dash.dependencies import Input, Output, Event\nimport dash_core_components as dcc\nimport dash_html_components as html\nimport plotly.graph_objs as go\nimport threading\nimport dataset\nfrom collections import deque\n\n\ndef ensure_dir(directory):\n    \"\"\"Create the directory if it does not already exist.\"\"\"\n    if not os.path.exists(directory):\n        os.makedirs(directory)\n\n\nclass CryptoDataGrabber(object):\n\n    def __init__(self):\n        \"\"\"Load markets, backfill stored prices, and start the update thread.\"\"\"\n        self.exchange = ccxt.poloniex()\n        self.exchange.load_markets()\n        self.delay_seconds = self.exchange.rateLimit / 1000\n        self.symbols = self.exchange.markets\n        self.timeframe = '1d'\n        self.db_url = 'sqlite:///databases/market_prices.db'\n        self.deques = dict()\n        self.ohlcv = dict()\n        ensure_dir('databases')\n        self.database = dataset.connect(self.db_url)\n        for symbol in self.symbols:\n            if self.exchange.has['fetchOHLCV']:\n                print('Obtaining OHLCV data for ' + symbol)\n                data = self.exchange.fetch_ohlcv(symbol, self.timeframe)\n                data = list(zip(*data))\n                data[0] = [datetime.datetime.fromtimestamp(ms / 1000)\n                           for ms in data[0]]\n                self.ohlcv[symbol] = data\n                time.sleep(self.delay_seconds)\n            else:\n                print('No OHLCV data available')\n            self.deques[symbol] = deque()\n            if len(self.database[symbol]):\n                for e in self.database[symbol]:\n                    entry = (e['bid'], e['ask'], e['spread'], e['time'])\n                    self.deques[symbol].append(entry)\n        del self.database\n        self.thread = threading.Thread(target=self.__update)\n        self.thread.daemon = True\n        self.thread.start()\n\n    def get_symbols(self):\n        \"\"\"Return the market symbols being tracked.\"\"\"\n        return self.deques.keys()\n\n    def get_prices(self, 
symbol):\n        \"\"\"Return the deque of (bid, ask, spread, time) entries for symbol.\"\"\"\n        return self.deques[symbol]\n\n    def get_ohlcv(self, symbol):\n        \"\"\"Return the cached OHLCV columns for symbol.\"\"\"\n        data = self.ohlcv[symbol]\n        return data[0], data[1], data[2], data[3], data[4], data[5]\n\n    def __update(self):\n        \"\"\"\n        https://github.com/ccxt-dev/ccxt/wiki/Manual#market-price\n        \"\"\"\n        self.database = dataset.connect(self.db_url)\n        while True:\n            for symbol in self.symbols:\n\n                start_time = time.time()\n                orders = self.exchange.fetch_order_book(symbol)\n                bid = orders['bids'][0][0] if len(orders['bids']) > 0 else None\n                ask = orders['asks'][0][0] if len(orders['asks']) > 0 else None\n                spread = (ask - bid) if (bid and ask) else None\n                dtime = datetime.datetime.now()\n                self.deques[symbol].append((bid, ask, spread, dtime))\n                self.database.begin()\n                try:\n                    self.database[symbol].insert({\n                        'ask': ask,\n                        'bid': bid,\n                        'spread': spread,\n                        'time': dtime\n                    })\n                    self.database.commit()\n                except Exception:\n                    self.database.rollback()\n\n                time.sleep(max(0, self.delay_seconds - (time.time() - start_time)))\n\n\ndata = CryptoDataGrabber()\n\nselected_dropdown_value = 'ETH/BTC'\n\napp = dash.Dash()\napp.layout = html.Div([\n    html.Div([\n        html.H1('Poloniex Nerd', id='h1_title'),\n        dcc.Dropdown(\n            id='symbol-dropdown',\n            options=[{'label': key, 'value': key}\n                     for key in data.get_symbols()],\n            value=selected_dropdown_value\n        ),\n        html.Div([\n            dcc.Graph(\n                id='ohlc',\n                config={\n                    'displayModeBar': False\n                }\n            ),\n
    ], className=\"row\"),\n        html.Div([\n            dcc.Graph(\n                id='v',\n                config={\n                    'displayModeBar': False\n                }\n            ),\n        ], className=\"row\"),\n        html.Div([\n            html.Div([\n                dcc.Graph(\n                    id='market-prices-graph',\n                    config={\n                        'displayModeBar': False\n                    }\n                ),\n                dcc.Graph(\n                    id='spread-graph',\n                    config={\n                        'displayModeBar': False\n                    }\n                ),\n            ], className=\"eight columns\"),\n            html.Div([\n                dcc.Graph(\n                    id='market-prices-hist',\n                    config={\n                        'displayModeBar': False\n                    }\n                ),\n                dcc.Graph(\n                    id='spread-hist',\n                    config={\n                        'displayModeBar': False\n                    }\n                ),\n            ], className=\"four columns\")\n        ], className=\"row\"),\n        dcc.Interval(id='graph-speed-update', interval=2000),\n    ], className=\"row\")\n])\napp.css.append_css({\n    'external_url': 'https://codepen.io/chriddyp/pen/bWLwgP.css'\n})\n\n\n@app.callback(Output('h1_title', 'children'),\n              [Input('symbol-dropdown', 'value')])\ndef change_plot(value):\n    global selected_dropdown_value\n    selected_dropdown_value = value\n    return 'Market Prices ' + str(value)\n\n\n@app.callback(Output('ohlc', 'figure'),\n              [Input('symbol-dropdown', 'value')])\ndef plot_olhc(value):\n    global data\n    dates, open_data, high_data, low_data, close_data, _ \\\n        = data.get_ohlcv(value)\n    return {\n        'data': [go.Ohlc(x=dates,\n                         open=open_data,\n                         high=high_data,\n          
               low=low_data,\n                         close=close_data)],\n        'layout': dict(title=\"OHLC\")\n    }\n\n\n@app.callback(Output('v', 'figure'),\n              [Input('symbol-dropdown', 'value')])\ndef plot_v(value):\n    global data\n    dates, _, _, _, _, volume = data.get_ohlcv(value)\n    return {\n        'data': [go.Bar(x=dates,\n                        y=volume)],\n        'layout': dict(title=\"Volume\")\n    }\n\n\n@app.callback(Output('market-prices-graph', 'figure'),\n              events=[Event('graph-speed-update', 'interval')])\ndef update_market_prices():\n    global selected_dropdown_value\n    global data\n    prices = data.get_prices(selected_dropdown_value)\n    prices = [list(p) for p in zip(*prices)]\n    if len(prices) > 0:\n        traces = []\n        x = list(prices[3])\n        for i, key in enumerate(['bid', 'ask']):\n            trace = go.Scatter(x=x,\n                               y=prices[i],\n                               name=key,\n                               opacity=0.8)\n            traces.append(trace)\n        return {\n            'data': traces,\n            'layout': dict(title=\"Market Prices\")\n        }\n\n\n@app.callback(Output('market-prices-hist', 'figure'),\n              events=[Event('graph-speed-update', 'interval')])\ndef update_market_prices_hist():\n    global selected_dropdown_value\n    global data\n    prices = data.get_prices(selected_dropdown_value)\n    prices = [list(p) for p in zip(*prices)]\n    if len(prices) > 0:\n        traces = []\n        for i, key in enumerate(['bid', 'ask']):\n            trace = go.Histogram(x=prices[i][-200:],\n                                 name=key,\n                                 opacity=0.8)\n            traces.append(trace)\n        return {\n            'data': traces,\n            'layout': dict(title=\"Market Prices Histogram (200 Most Recent)\")\n        }\n\n\n@app.callback(Output('spread-graph', 'figure'),\n              
events=[Event('graph-speed-update', 'interval')])\ndef update_spread():\n    global selected_dropdown_value\n    global data\n    prices = data.get_prices(selected_dropdown_value)\n    prices = [list(p) for p in zip(*prices)]\n    if len(prices) > 0:\n        traces = []\n        trace = go.Scatter(x=list(prices[3]),\n                           y=list(prices[2]),\n                           name='spread',\n                           line=dict(color='rgb(114, 186, 59)'),\n                           fill='tozeroy',\n                           fillcolor='rgba(114, 186, 59, 0.5)',\n                           mode='none')\n        traces.append(trace)\n\n        return {\n            'data': traces,\n            'layout': dict(title=\"Spread\")\n        }\n\n\n@app.callback(Output('spread-hist', 'figure'),\n              events=[Event('graph-speed-update', 'interval')])\ndef update_spread_hist():\n    global selected_dropdown_value\n    global data\n    prices = data.get_prices(selected_dropdown_value)\n    prices = [list(p) for p in zip(*prices)]\n    if len(prices) > 0:\n        traces = []\n        trace = go.Histogram(x=list(prices[2][-200:]),\n                             name='spread',\n                             marker=dict(color='rgba(114, 186, 59, 0.5)'))\n        traces.append(trace)\n\n        return {\n            'data': traces,\n            'layout': dict(title=\"Spread Histogram (200 Most Recent)\")\n        }\n\n\nif __name__ == '__main__':\n    app.run_server()\n\n\n"
  },
  {
    "path": "Inputs_Table_Builder.py",
"content": "import sqlite3\nimport pandas as pd\nfrom Variable_Functions.Twitter_Sentiment import dates_to_sentiment\nfrom crycompare import *\nimport time\nimport arrow\nfrom termcolor import colored\n\n\ndef db_connection(database):\n    \"\"\"Return a sqlite3 connection to the given database file.\"\"\"\n    return sqlite3.connect(database)\n\n\ndef get_symbols(database):\n    \"\"\"Return the symbol table names, excluding renamed *_1old backup tables.\"\"\"\n    cursor = sqlite3.connect(database).cursor()\n    cursor.execute(\"SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE '%!_1old' ESCAPE '!'\")\n    symbols = [row[0] for row in cursor.fetchall()]\n    return symbols\n\n\nsymbols = get_symbols('databases/market_prices.db')\n\n######################################## Bid Ask Spread Time #######################################\ndef OLHCV_From_DB(symbol, database):\n    \"\"\"Read a symbol's full price table into a DataFrame.\"\"\"\n    df = pd.read_sql(r\"SELECT * FROM '\" + symbol + \"'\", con=db_connection(database))\n    return df\n\n\n######################################### Twitter Sentiment #######################################\ndef twitter_sentiment(symbol, max_tweets):\n    \"\"\"Return the sentiment variable array for a symbol as of now.\"\"\"\n    now = arrow.now()\n    dates = now.date()\n    print(dates)\n    print(now.time())\n    sentiment_variables = dates_to_sentiment(dates, symbol, max_tweets)\n    return sentiment_variables\n\n\n######################################### Split Trading Pair into 'to' and 'from' ###########\ndef split_symbols(symbols):\n    \"\"\"Map each trading pair to its [to, from] halves.\"\"\"\n    split_symbols = {}\n    for symbol in symbols:\n        split_symbols[symbol] = symbol.split('/')\n\n    #split_symbols[symbol][0] is the \"to\" symbol\n    #split_symbols[symbol][1] is the \"from\" symbol\n\n    return split_symbols\n\n\n######################################### Main function 
to build input dataset #####################\ndef generate_input_dataset(database):\n    \"\"\"Build the model input table for every symbol, joining OHLCV data with Twitter sentiment.\"\"\"\n    symbols = get_symbols(database)\n    table_names = {}\n    start_time = time.time()\n    for symbol in symbols:\n        table_names[symbol] = OLHCV_From_DB(symbol, database)\n    total_symbols = len(symbols)\n    i = total_symbols\n    for symbol in symbols:\n        print('')\n        print(str(i) + \" More Symbols to Pull Sentiment For\")\n        print('')\n        total_runtime = time.time() - start_time\n        average_runtime = round(total_runtime / ((total_symbols + 1) - i), 2)\n        print(colored(\"Total Runtime = \" + str(round(total_runtime / 60, 2)) + \" minutes\", 'blue'))\n        print(colored(\"Average seconds per symbol = \" + str(average_runtime) + \" seconds\", 'blue'))\n        print(colored(\"Estimated seconds until last sentiment = \" + str(average_runtime * i) + \" seconds remaining\", 'blue'))\n        print('')\n        i -= 1\n        sentiment_variables = twitter_sentiment(symbol, 1)\n        sentiment_columns = [\n            'Tweet_Sentiment_Polarity',\n            'Tweet_Sentiment_Subjectivity',\n            'Tweet_Positive_Percent',\n            'Tweet_Sentiment_STDDEV',\n            'Tweet_Sentiment_Polarity_to',\n            'Tweet_Sentiment_Subjectivity_to',\n            'Tweet_Positive_Percent_to',\n            'Tweet_Sentiment_STDDEV_to',\n            'Tweet_Sentiment_Polarity_from',\n            'Tweet_Sentiment_Subjectivity_from',\n            'Tweet_Positive_Percent_from',\n            'Tweet_Sentiment_STDDEV_from',\n        ]\n        for column, value in zip(sentiment_columns, sentiment_variables):\n            table_names[symbol][column] = value\n    print(table_names)\n    return table_names\n\n\n\n\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 Brandon Shaw\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
"content": "# Crypto_Trader\n**A Live Machine-Learning-Based Cryptocurrency Trader for the Poloniex Exchange**\n\nIf you would like to follow our progress or reach out to the developers, feel free to join our Discord channel at:\nhttps://discord.gg/UPNV2fH\n\nThe goal of the project is to create an open-source, machine-learning-based cryptocurrency portfolio optimizer that includes as many relevant variables as possible.\n\nThese will include:\n\n- **Community Sentiment** (such as forum and Twitter sentiment analysis)\n- **Analyst Opinions** (notable Twitter account/analyst feeds with a history of winning strategies - e.g. \"Haejin\", a successful Elliott Wave trader with an established history)\n- **Global Economic Indexes** (DOW, Nikkei225, FOREX data, etc.)\n- **Open/Close/High/Low/Volume**\n- **Live Orderbook Data** (spread, bid, ask, order size)\n- **Technical Indicators** (MACDs, MAs, VWAPs, etc.)\n- **Any Other Variable** of interest found to be important (please feel free to brainstorm and send suggestions)\n\n\nA **Boruta Analysis** (a variation of random forest) will be used to reduce these input variables by removing \"unimportant\" features.\n\nUsing this input dataset, a machine-learning **Binary Classifier** will be created to assign trading pairs on Poloniex a confidence score from 0 to 1. This value will represent the confidence that the trading pair will increase over a certain timeframe. Multiple classifiers may be constructed to score 5-minute, 10-minute, 15-minute, 30-minute, 1-hour, and other windows.\n\nUsing these scores, a **Q-Learning Bot** (\"reinforcement learning\") will be created to optimize a trading strategy based on the binary classifier scores. The bot will read the amount of capital in the user's Poloniex account and automatically place trades to optimize the portfolio's holdings. These strategies will use stop losses and sell limits. Because it is Q-learning-based, the bot will receive and use live data to make decisions in a live, auto-updating context with a reward that optimizes profit. This will allow it to continue to train and optimize itself over time while feeding in live data and placing trades.\n\n\nUltimately, the program will be able to run 24 hours a day, optimizing a portfolio using live data while taking fees into account in order to identify and take advantage of trading opportunities across the entire cryptocurrency market on Poloniex. Because cryptocurrencies are divisible to many decimal places, in theory a portfolio of any size can be used as long as the user meets the minimum trade size on Poloniex.\n\n\n\n| Major Design Features |         Purpose          |\n| --------------------- |:------------------------:|\n| Identified Variables  | Related to Price Action  |\n| Data Scraper          | Live Variables to Array  |\n| Binary Classifiers    | Score Trading Pairs Live |\n| Q-Learning Bot        | Optimize Trade Strategy  |\n| Poloniex API Link     | Allow Bot to Make Trades |\n\n\n\nIf you would like to donate to the project, please do so at the following Bitcoin/Litecoin/Ethereum addresses. All donations are appreciated :)\n\n**DONATIONS**\n\n| Currency |                  Address                   |\n| -------- |:-----------------------------------------: |\n| Bitcoin  |     1GjVgMUDfKHzhxgeauRagVfp1GCrSJXijb     |\n| Litecoin |     M9oJaUnCB6Soistk3wSETziFDzz8gAaJCU     |\n| Ethereum | 0x9852389Bd431A90A9AEcb48EdA50Da1ac05Bd4d8 |\n    \n"
  },
  {
    "path": "Table_Timer.py",
"content": "from Inputs_Table_Builder import *\nimport time\nimport datetime\nimport sqlite3\nimport pandas\n\nstarttime = time.time()\n\nSENTIMENT_COLUMNS = [\n  'Tweet_Sentiment_Polarity',\n  'Tweet_Sentiment_Subjectivity',\n  'Tweet_Positive_Percent',\n  'Tweet_Sentiment_STDDEV',\n  'Tweet_Sentiment_Polarity_to',\n  'Tweet_Sentiment_Subjectivity_to',\n  'Tweet_Positive_Percent_to',\n  'Tweet_Sentiment_STDDEV_to',\n  'Tweet_Sentiment_Polarity_from',\n  'Tweet_Sentiment_Subjectivity_from',\n  'Tweet_Positive_Percent_from',\n  'Tweet_Sentiment_STDDEV_from',\n]\n\n#Runs the Input Table Builder module in 5 minute timesteps, appending to database\nwhile True:\n  ts = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')\n  print(\"Scraping at: \" + ts)\n  input_tables = generate_input_dataset('databases/market_prices.db')\n  db_connection = sqlite3.connect('databases/market_prices.db')\n  cursor = db_connection.cursor()\n  for symbol in symbols:\n    split = symbol.split('/')\n    try:\n      cursor.execute(\"select Tweet_Sentiment_Polarity_to from '\" + symbol + \"';\")\n    except sqlite3.OperationalError:\n      #Old tables without sentiment columns are renamed out of the way\n      cursor.execute(\"Alter table '\" + symbol + \"' RENAME TO \" + split[0] + split[1] + \"_1old\")\n    #Default any sentiment column the builder failed to produce to 0\n    for column in SENTIMENT_COLUMNS:\n      if column not in input_tables[symbol]:\n        input_tables[symbol][column] = 0\n    input_tables[symbol].to_sql(name=str(symbol), con=db_connection, if_exists='append', index=False)\n  ts = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')\n  print(\"Saved to database at: \" + ts)\n  print(input_tables)\n  #Sleep so the next pass starts on a 5 minute boundary\n  time.sleep(300.0 - ((time.time() - starttime) % 300.0))\n"
  },
  {
    "path": "Variable_Functions/Technical_Indicators.py",
"content": "from crycompare import *\nimport pandas as pd\nimport sqlite3\n\ndb_connection = sqlite3.connect('databases/market_prices.db')\ncursor = db_connection.cursor()\ncursor.execute(\"SELECT name FROM sqlite_master WHERE type='table';\")\nsymbols1 = cursor.fetchall()\nsymbols = []\nfor symbol in symbols1:\n    symbols.append(symbol[0])\nprint(symbols)\n\nh = History()\n\n#Derive the individual coin tickers from the trading pair table names\ncoins = sorted({coin for symbol in symbols for coin in symbol.split('/')})\n\ndf_dict = {}\n\nfor coin in coins:\n    histo = h.histoMinute(coin, 'USD')\n    if histo['Data']:\n        df_histo = pd.DataFrame(histo['Data'])\n        df_histo['time'] = pd.to_datetime(df_histo['time'], unit='s')\n        df_histo.index = df_histo['time']\n        del df_histo['time']\n        del df_histo['volumefrom']\n        del df_histo['volumeto']\n\n        df_dict[coin] = df_histo\n        print(df_dict[coin])"
  },
  {
    "path": "Variable_Functions/Twitter_Sentiment.py",
"content": "import got3\nfrom textblob import TextBlob\nimport numpy as np\nfrom termcolor import colored\n\n\ndef hashtag_sentiment(ticker, max_tweets):\n    \"\"\"Scrape up to max_tweets tweets for #ticker and return\n    (mean polarity, mean subjectivity, positive fraction, polarity std dev).\"\"\"\n    tweetCriteria = got3.manager.TweetCriteria().setQuerySearch('#' + ticker).setMaxTweets(max_tweets)\n    tweets = got3.manager.TweetManager.getTweets(tweetCriteria)\n    sents_per_date = []  #polarity per tweet\n    subjectivity = []    #subjectivity per tweet\n    positives = []       #tweets with positive polarity\n    for t in tweets:\n        blob = TextBlob(t.text)\n        sents_per_date.append(blob.sentiment[0])  #polarity\n        subjectivity.append(blob.sentiment[1])    #subjectivity\n        if blob.sentiment[0] > 0:\n            positives.append(t)\n    if len(sents_per_date) >= 1:\n        return (sum(sents_per_date) / len(sents_per_date),\n                sum(subjectivity) / len(sents_per_date),\n                len(positives) / len(sents_per_date),\n                np.std(np.asarray(sents_per_date)))\n    #No tweets found: fall back to neutral defaults\n    return (0, 0, .5, 0)\n\n\ndef dates_to_sentiment(dates, ticker, max_tweets):\n    \"\"\"Return the 12 sentiment variables for a trading pair: four for the\n    full pair hashtag, then four for each half of the pair.\"\"\"\n    sentiments = []\n\n    #Sentiment for the full trading pair hashtag, e.g. #ETH/BTC\n    print(colored(\"Calculating sentiment for: \" + ticker, 'white'))\n    sentiments.extend(hashtag_sentiment(ticker, max_tweets))\n\n    split_symbol = ticker.split('/')\n\n    #Sentiment for the second half of the pair\n    print(colored(\"Calculating sentiment for: \" + split_symbol[1], 'red'))\n    sentiments.extend(hashtag_sentiment(split_symbol[1], max_tweets))\n\n    #Sentiment for the first half of the pair\n    print(colored(\"Calculating sentiment for: \" + split_symbol[0], 'green'))\n    sentiments.extend(hashtag_sentiment(split_symbol[0], max_tweets))\n\n    return np.asarray(sentiments)"
  },
  {
    "path": "Variable_Functions/__init__.py",
    "content": ""
  },
  {
    "path": "crycompare.py",
    "content": "import sys\nimport requests\nimport warnings\n\n\nclass Price:\n\tdef __init__(self):\n\t\tself.__coinlisturl = 'https://www.cryptocompare.com/api/data/coinlist/'\n\t\tself.__priceurl = 'https://min-api.cryptocompare.com/data/price?'\n\t\tself.__pricemultiurl = 'https://min-api.cryptocompare.com/data/pricemulti?'\n\t\tself.__pricemultifullurl = 'https://min-api.cryptocompare.com/data/pricemultifull?'\n\t\tself.__generateavgurl = 'https://min-api.cryptocompare.com/data/generateAvg?'\n\t\tself.__dayavgurl = 'https://min-api.cryptocompare.com/data/dayAvg?'\n\t\tself.__historicalurl = 'https://min-api.cryptocompare.com/data/pricehistorical?'\n\t\tself.__coinsnapshoturl = 'https://www.cryptocompare.com/api/data/coinsnapshot/?'\n\t\tself.__coinsnapshotfull = 'https://www.cryptocompare.com/api/data/coinsnapshotfullbyid/?'\n\n\tdef coinList(self):\n\t\treturn self.__get_url(self.__coinlisturl)\n\n\tdef price(self, from_curr, to_curr, e=None, extraParams=None, sign=False, tryConversion=True):\n\t\treturn self.__get_price(self.__priceurl, from_curr, to_curr, e, extraParams, sign, tryConversion)\n\n\tdef priceMulti(self, from_curr, to_curr, e=None, extraParams=None, sign=False, tryConversion=True):\n\t\treturn self.__get_price(self.__pricemultiurl, from_curr, to_curr, e, extraParams, sign, tryConversion)\n\n\tdef priceMultiFull(self, from_curr, to_curr, e=None, extraParams=None, sign=False, tryConversion=True):\n\t\treturn self.__get_price(self.__pricemultifullurl, from_curr, to_curr, e, extraParams, sign, tryConversion)\n\n\tdef priceHistorical(self, from_curr, to_curr, markets, ts=None, e=None, extraParams=None,\n\t\t\t\t\t\tsign=False, tryConversion=True):\n\t\t# __get_price takes e before markets and ts, so pass them by keyword\n\t\treturn self.__get_price(self.__historicalurl, from_curr, to_curr, e=e, extraParams=extraParams,\n\t\t\t\t\t\t\t\tsign=sign, tryConversion=tryConversion, markets=markets, ts=ts)\n\n\tdef generateAvg(self, from_curr, to_curr, markets, extraParams=None, sign=False, tryConversion=True):\n\t\treturn self.__get_avg(self.__generateavgurl, from_curr, to_curr, markets, 
extraParams=extraParams, sign=sign, tryConversion=tryConversion)\n\n\tdef dayAvg(self, from_curr, to_curr, e=None, extraParams=None, sign=False, tryConversion=True,\n\t\t\t   avgType=None, UTCHourDiff=0, toTs=None):\n\t\t# __get_avg takes markets before e, so pass everything by keyword\n\t\treturn self.__get_avg(self.__dayavgurl, from_curr, to_curr, e=e, extraParams=extraParams, sign=sign,\n\t\t\t\t\t\t\ttryConversion=tryConversion, avgType=avgType, UTCHourDiff=UTCHourDiff, toTs=toTs)\n\n\tdef coinSnapshot(self, from_curr, to_curr):\n\t\treturn self.__get_url(self.__coinsnapshoturl + 'fsym=' + from_curr.upper() + '&tsym=' + to_curr.upper())\n\n\tdef coinSnapshotFullById(self, coin_id):\n\t\treturn self.__get_url(self.__coinsnapshotfull + 'id=' + str(coin_id))\n\n\tdef __get_price(self, baseurl, from_curr, to_curr, e=None, extraParams=None, sign=False,\n\t\t\t\t  tryConversion=True, markets=None, ts=None):\n\t\targs = list()\n\t\tif isinstance(from_curr, str):\n\t\t\targs.append('fsym=' + from_curr.upper())\n\t\telif isinstance(from_curr, list):\n\t\t\targs.append('fsyms=' + ','.join(from_curr).upper())\n\t\tif isinstance(to_curr, list):\n\t\t\targs.append('tsyms=' + ','.join(to_curr).upper())\n\t\telif isinstance(to_curr, str):\n\t\t\targs.append('tsyms=' + to_curr.upper())\n\t\tif isinstance(markets, str):\n\t\t\targs.append('markets=' + markets)\n\t\telif isinstance(markets, list):\n\t\t\targs.append('markets=' + ','.join(markets))\n\t\tif e:\n\t\t\targs.append('e=' + e)\n\t\tif extraParams:\n\t\t\targs.append('extraParams=' + extraParams)\n\t\tif sign:\n\t\t\targs.append('sign=true')\n\t\tif ts:\n\t\t\targs.append('ts=' + str(ts))\n\t\tif not tryConversion:\n\t\t\targs.append('tryConversion=false')\n\t\tif len(args) >= 2:\n\t\t\treturn self.__get_url(baseurl + '&'.join(args))\n\t\telse:\n\t\t\traise ValueError('Must have both fsym and tsym arguments.')\n\n\tdef __get_avg(self, baseurl, from_curr, to_curr, markets=None, e=None, extraParams=None,\n\t\t\t\tsign=False, tryConversion=True, avgType=None, UTCHourDiff=0, toTs=None):\n\t\targs = list()\n\t\tif isinstance(from_curr, str):\n\t\t\targs.append('fsym=' + 
from_curr.upper())\n\t\tif isinstance(to_curr, str):\n\t\t\targs.append('tsym=' + to_curr.upper())\n\t\tif isinstance(markets, str):\n\t\t\targs.append('markets=' + markets)\n\t\telif isinstance(markets, list):\n\t\t\targs.append('markets=' + ','.join(markets))\n\t\tif e:\n\t\t\targs.append('e=' + e)\n\t\tif extraParams:\n\t\t\targs.append('extraParams=' + extraParams)\n\t\tif sign:\n\t\t\targs.append('sign=true')\n\t\tif avgType:\n\t\t\targs.append('avgType=' + avgType)\n\t\tif UTCHourDiff:\n\t\t\targs.append('UTCHourDiff=' + str(UTCHourDiff))\n\t\tif toTs:\n\t\t\targs.append('toTs=' + str(toTs))\n\t\tif not tryConversion:\n\t\t\targs.append('tryConversion=false')\n\t\tif len(args) >= 2:\n\t\t\treturn self.__get_url(baseurl + '&'.join(args))\n\t\telse:\n\t\t\traise ValueError('Must have both fsym and tsym arguments.')\n\n\tdef __get_url(self, url):\n\t\traw_data = requests.get(url)\n\t\traw_data.encoding = 'utf-8'\n\t\tif raw_data.status_code != 200:\n\t\t\traw_data.raise_for_status()\n\t\t\treturn False\n\t\ttry:\n\t\t\tif isinstance(raw_data.text, unicode):\n\t\t\t\twarnings.warn('Object returned is of type unicode. 
Cannot parse to str in Python 2.')\n\t\texcept NameError:\n\t\t\tpass\n\t\treturn raw_data.json()\n\n\nclass History:\n\tdef __init__(self):\n\t\tself.__histominuteurl = 'https://min-api.cryptocompare.com/data/histominute?'\n\t\tself.__histohoururl = 'https://min-api.cryptocompare.com/data/histohour?'\n\t\tself.__histodayurl = 'https://min-api.cryptocompare.com/data/histoday?'\n\n\tdef histoMinute(self, from_curr, to_curr, e=None, extraParams=None,\n\t\t\t\t\tsign=False, tryConversion=True, aggregate=None, limit=None, toTs=None):\n\t\treturn self.__get_price(self.__histominuteurl, from_curr, to_curr, e, extraParams, sign,\n\t\t\t\t\t\t\t\ttryConversion, aggregate, limit, toTs)\n\n\tdef histoHour(self, from_curr, to_curr, e=None, extraParams=None,\n\t\t\t\t  sign=False, tryConversion=True, aggregate=None, limit=None, toTs=None):\n\t\treturn self.__get_price(self.__histohoururl, from_curr, to_curr, e, extraParams, sign,\n\t\t\t\t\t\t\t\ttryConversion, aggregate, limit, toTs)\n\n\tdef histoDay(self, from_curr, to_curr, e=None, extraParams=None, sign=False,\n\t\t\t\t tryConversion=True, aggregate=None, limit=None, toTs=None, allData=False):\n\t\treturn self.__get_price(self.__histodayurl, from_curr, to_curr, e, extraParams, sign,\n\t\t\t\t\t\t\t\ttryConversion, aggregate, limit, toTs, allData)\n\n\tdef __get_price(self, baseurl, from_curr, to_curr, e=None, extraParams=None, sign=False,\n\t\t\t\t\ttryConversion=True, aggregate=None, limit=None, toTs=None, allData=False):\n\t\targs = list()\n\t\tif isinstance(from_curr, str):\n\t\t\targs.append('fsym=' + from_curr.upper())\n\t\tif isinstance(to_curr, str):\n\t\t\targs.append('tsym=' + to_curr.upper())\n\t\tif e:\n\t\t\targs.append('e=' + e)\n\t\tif extraParams:\n\t\t\targs.append('extraParams=' + extraParams)\n\t\tif sign:\n\t\t\targs.append('sign=true')\n\t\tif aggregate:\n\t\t\targs.append('aggregate=' + str(aggregate))\n\t\tif limit:\n\t\t\targs.append('limit=' + str(limit))\n\t\tif toTs:\n\t\t\targs.append('toTs=' + 
str(toTs))\n\t\tif allData:\n\t\t\targs.append('allData=true')\n\t\tif not tryConversion:\n\t\t\targs.append('tryConversion=false')\n\t\tif len(args) >= 2:\n\t\t\treturn self.__get_url(baseurl + '&'.join(args))\n\t\telse:\n\t\t\traise ValueError('Must have both fsym and tsym arguments.')\n\n\tdef __get_url(self, url):\n\t\traw_data = requests.get(url)\n\t\traw_data.encoding = 'utf-8'\n\t\tif raw_data.status_code != 200:\n\t\t\traw_data.raise_for_status()\n\t\t\treturn False\n\t\ttry:\n\t\t\tif isinstance(raw_data.text, unicode):\n\t\t\t\twarnings.warn('Object returned is of type unicode. Cannot parse to str in Python 2.')\n\t\texcept NameError:\n\t\t\tpass\n\t\treturn raw_data.json()\n"
  },
  {
    "path": "got3/__init__.py",
    "content": "from . import models\nfrom . import manager"
  },
  {
    "path": "got3/manager/TweetCriteria.py",
    "content": "class TweetCriteria:\n\n\tdef __init__(self):\n\t\tself.maxTweets = 0\n\n\tdef setUsername(self, username):\n\t\tself.username = username\n\t\treturn self\n\n\tdef setSince(self, since):\n\t\tself.since = since\n\t\treturn self\n\n\tdef setUntil(self, until):\n\t\tself.until = until\n\t\treturn self\n\n\tdef setQuerySearch(self, querySearch):\n\t\tself.querySearch = querySearch\n\t\treturn self\n\n\tdef setMaxTweets(self, maxTweets):\n\t\tself.maxTweets = maxTweets\n\t\treturn self\n\n\tdef setLang(self, Lang):\n\t\tself.lang = Lang\n\t\treturn self\n\n\tdef setTopTweets(self, topTweets):\n\t\tself.topTweets = topTweets\n\t\treturn self"
  },
  {
    "path": "got3/manager/TweetManager.py",
    "content": "import urllib.request, urllib.parse, urllib.error, json, re, datetime, sys, http.cookiejar\r\nfrom .. import models\r\nfrom pyquery import PyQuery\r\n\r\n\r\nclass TweetManager:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @staticmethod\r\n    def getTweets(tweetCriteria, receiveBuffer=None, bufferLength=100, proxy=None):\r\n        refreshCursor = ''\r\n\r\n        results = []\r\n        resultsAux = []\r\n        cookieJar = http.cookiejar.CookieJar()\r\n\r\n        active = True\r\n\r\n        while active:\r\n            # Renamed from 'json' to avoid shadowing the json module\r\n            response = TweetManager.getJsonReponse(tweetCriteria, refreshCursor, cookieJar, proxy)\r\n            if len(response['items_html'].strip()) == 0:\r\n                break\r\n\r\n            refreshCursor = response['min_position']\r\n            tweets = PyQuery(response['items_html'])('div.js-stream-tweet')\r\n\r\n            if len(tweets) == 0:\r\n                break\r\n\r\n            for tweetHTML in tweets:\r\n                tweetPQ = PyQuery(tweetHTML)\r\n                tweet = models.Tweet()\r\n\r\n                usernameTweet = tweetPQ(\"span.username.js-action-profile-name b\").text()\r\n                txt = re.sub(r\"\\s+\", \" \", tweetPQ(\"p.js-tweet-text\").text().replace('# ', '#').replace('@ ', '@'))\r\n                retweets = int(tweetPQ(\"span.ProfileTweet-action--retweet span.ProfileTweet-actionCount\").attr(\r\n                    \"data-tweet-stat-count\").replace(\",\", \"\"))\r\n                favorites = int(tweetPQ(\"span.ProfileTweet-action--favorite span.ProfileTweet-actionCount\").attr(\r\n                    \"data-tweet-stat-count\").replace(\",\", \"\"))\r\n                dateSec = int(tweetPQ(\"small.time span.js-short-timestamp\").attr(\"data-time\"))\r\n                id = tweetPQ.attr(\"data-tweet-id\")\r\n                permalink = tweetPQ.attr(\"data-permalink-path\")\r\n                user_id = 
int(tweetPQ(\"a.js-user-profile-link\").attr(\"data-user-id\"))\r\n\r\n                geo = ''\r\n                geoSpan = tweetPQ('span.Tweet-geo')\r\n                if len(geoSpan) > 0:\r\n                    geo = geoSpan.attr('title')\r\n                urls = []\r\n                for link in tweetPQ(\"a\"):\r\n                    try:\r\n                        urls.append((link.attrib[\"data-expanded-url\"]))\r\n                    except KeyError:\r\n                        pass\r\n                tweet.id = id\r\n                tweet.permalink = 'https://twitter.com' + permalink\r\n                tweet.username = usernameTweet\r\n\r\n                tweet.text = txt\r\n                tweet.date = datetime.datetime.fromtimestamp(dateSec)\r\n                tweet.formatted_date = datetime.datetime.fromtimestamp(dateSec).strftime(\"%a %b %d %X +0000 %Y\")\r\n                tweet.retweets = retweets\r\n                tweet.favorites = favorites\r\n                tweet.mentions = \" \".join(re.compile('(@\\\\w*)').findall(tweet.text))\r\n                tweet.hashtags = \" \".join(re.compile('(#\\\\w*)').findall(tweet.text))\r\n                tweet.geo = geo\r\n                tweet.urls = \",\".join(urls)\r\n                tweet.author_id = user_id\r\n\r\n                results.append(tweet)\r\n                resultsAux.append(tweet)\r\n\r\n                if receiveBuffer and len(resultsAux) >= bufferLength:\r\n                    receiveBuffer(resultsAux)\r\n                    resultsAux = []\r\n\r\n                if tweetCriteria.maxTweets > 0 and len(results) >= tweetCriteria.maxTweets:\r\n                    active = False\r\n                    break\r\n\r\n        if receiveBuffer and len(resultsAux) > 0:\r\n            receiveBuffer(resultsAux)\r\n\r\n        return results\r\n\r\n    @staticmethod\r\n    def getJsonReponse(tweetCriteria, refreshCursor, cookieJar, proxy):\r\n        url = 
\"https://twitter.com/i/search/timeline?f=tweets&q=%s&l=en&src=typd&%smax_position=%s\"\r\n\r\n        urlGetData = ''\r\n        if hasattr(tweetCriteria, 'username'):\r\n            urlGetData += ' from:' + tweetCriteria.username\r\n\r\n        if hasattr(tweetCriteria, 'since'):\r\n            urlGetData += ' since:' + tweetCriteria.since\r\n\r\n        if hasattr(tweetCriteria, 'until'):\r\n            urlGetData += ' until:' + tweetCriteria.until\r\n\r\n        if hasattr(tweetCriteria, 'querySearch'):\r\n            urlGetData += ' ' + tweetCriteria.querySearch\r\n\r\n        if hasattr(tweetCriteria, 'lang'):\r\n            urlLang = 'lang=' + tweetCriteria.lang + '&'\r\n        else:\r\n            urlLang = ''\r\n        url = url % (urllib.parse.quote(urlGetData), urlLang, refreshCursor)\r\n        #comment out to hide url\r\n        #print(url)\r\n\r\n        headers = [\r\n            ('Host', \"twitter.com\"),\r\n            ('User-Agent', \"Mozilla/5.0 (Windows NT 6.1; Win64; x64)\"),\r\n            ('Accept', \"application/json, text/javascript, */*; q=0.01\"),\r\n            ('Accept-Language', \"de,en-US;q=0.7,en;q=0.3\"),\r\n            ('X-Requested-With', \"XMLHttpRequest\"),\r\n            ('Referer', url),\r\n            ('Connection', \"keep-alive\")\r\n        ]\r\n\r\n        if proxy:\r\n            opener = urllib.request.build_opener(urllib.request.ProxyHandler({'http': proxy, 'https': proxy}), \\\r\n                                                 urllib.request.HTTPCookieProcessor(cookieJar))\r\n        else:\r\n            opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookieJar))\r\n        opener.addheaders = headers\r\n\r\n        try:\r\n            response = opener.open(url)\r\n            jsonResponse = response.read()\r\n            #comment out once done debugging\r\n            # print(\r\n            #     \"Twitter weird response. 
Try to see on browser: https://twitter.com/search?q=%s&src=typd\" % urllib.parse.quote(\r\n            #         urlGetData))\r\n        except Exception:\r\n            print(\r\n                \"Twitter weird response. Try to see on browser: https://twitter.com/search?q=%s&src=typd\" % urllib.parse.quote(\r\n                    urlGetData))\r\n            print(\"Unexpected error:\", sys.exc_info()[0])\r\n            sys.exit()\r\n\r\n        dataJson = json.loads(jsonResponse.decode())\r\n\r\n        return dataJson\r\n"
  },
  {
    "path": "got3/manager/__init__.py",
    "content": "from .TweetCriteria import TweetCriteria\r\nfrom .TweetManager import TweetManager"
  },
  {
    "path": "got3/models/Tweet.py",
    "content": "class Tweet:\n\n\tdef __init__(self):\n\t\tpass"
  },
  {
    "path": "got3/models/__init__.py",
    "content": "from .Tweet import Tweet"
  },
  {
    "path": "requirements.txt",
    "content": "alembic==0.9.8\nbanal==0.3.3\nccxt==1.10.1242\ncertifi==2018.1.18\nchardet==3.0.4\nclick==6.7\ndash==0.21.0\ndash-core-components==0.19.0\ndash-html-components==0.9.0\ndash-renderer==0.11.3\ndataset==1.0.6\ndecorator==4.2.1\nenum34==1.1.6\nFlask==0.12.2\nFlask-Compress==1.4.0\nfunctools32==3.2.3.post2\nidna==2.6\nipython-genutils==0.2.0\nitsdangerous==0.24\nJinja2==2.10\njsonschema==2.6.0\njupyter-core==4.4.0\nMako==1.0.7\nMarkupSafe==1.0\nnbformat==4.4.0\nnormality==0.5.11\nnumpy==1.14.1\npandas==0.22.0\nplotly==2.4.1\npython-dateutil==2.6.1\npython-editor==1.0.3\npytz==2018.3\nrequests==2.18.4\nsix==1.11.0\nSQLAlchemy==1.2.4\ntraitlets==4.3.2\nurllib3==1.22\nWerkzeug==0.14.1\n"
  },
  {
    "path": "requirements_to_freeze.txt",
    "content": "ccxt==1.10.1242\ndash==0.21.0\ndash-core-components==0.19.0\ndash-html-components==0.9.0\ndash-renderer==0.11.3\ndataset==1.0.6\npandas==0.22.0\n"
  }
]