Full Code of bshaw19/Crypto_Trader for AI

Repository: bshaw19/Crypto_Trader
Branch: master
Commit: ab7497cd3728
Files: 20
Total size: 45.7 KB

Directory structure:
gitextract_kclr9y64/

├── .gitignore
├── CODE_OF_CONDUCT.md
├── Data_Grabber.py
├── Inputs_Table_Builder.py
├── LICENSE
├── README.md
├── Table_Timer.py
├── Variable_Functions/
│   ├── Technical_Indicators.py
│   ├── Twitter_Sentiment.py
│   └── __init__.py
├── crycompare.py
├── got3/
│   ├── __init__.py
│   ├── manager/
│   │   ├── TweetCriteria.py
│   │   ├── TweetManager.py
│   │   ├── __init__.py
│   │   └── __init__.py.bak
│   └── models/
│       ├── Tweet.py
│       └── __init__.py
├── requirements.txt
└── requirements_to_freeze.txt

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# GITHUB BOILERPLATE GIT IGNORE
# https://github.com/github/gitignore/blob/master/Python.gitignore

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
.static_storage/
.media/
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

# OTHER GIT IGNORE

#database
market_prices.db

#OS files
.DS_Store
.idea/

#Twitter Configuration File
config.py


================================================
FILE: CODE_OF_CONDUCT.md
================================================
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at bshaw19@vt.edu. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]

[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/


================================================
FILE: Data_Grabber.py
================================================
import os
import ccxt
import time
import datetime
import dash
from dash.dependencies import Input, Output, Event
import dash_core_components as dcc
import dash_html_components as html
import plotly.graph_objs as go
import threading
import dataset
from collections import deque


def ensure_dir(directory):
    """"""
    if not os.path.exists(directory):
        os.makedirs(directory)


class CryptoDataGrabber(object):

    def __init__(self):
        """"""
        self.exchange = ccxt.poloniex()
        self.exchange.load_markets()
        self.delay_seconds = self.exchange.rateLimit / 1000
        self.symbols = self.exchange.markets
        self.timeframe = '1d'
        self.db_url = 'sqlite:///databases/market_prices.db'
        self.deques = dict()
        self.ohlcv = dict()
        self.database = dataset.connect(self.db_url)
        ensure_dir('databases')
        for symbol in self.symbols:
            if self.exchange.has['fetchOHLCV']:
                print('Obtaining OHLCV data')
                data = self.exchange.fetch_ohlcv(symbol, self.timeframe)
                data = list(zip(*data))
                data[0] = [datetime.datetime.fromtimestamp(ms / 1000)
                           for ms in data[0]]
                self.ohlcv[symbol] = data
                time.sleep(self.delay_seconds)
            else:
                print('No OHLCV data available')
            self.deques[symbol] = deque()
            if len(self.database[symbol]):
                for e in self.database[symbol]:
                    entry = (e['bid'], e['ask'], e['spread'], e['time'])
                    self.deques[symbol].append(entry)
        del self.database
        self.thread = threading.Thread(target=self.__update)
        self.thread.daemon = True
        self.thread.start()

    def get_symbols(self):
        """"""
        return self.deques.keys()

    def get_prices(self, symbol):
        """"""
        return self.deques[symbol]

    def get_ohlcv(self, symbol):
        """"""
        data = self.ohlcv[symbol]
        return data[0], data[1], data[2], data[3], data[4], data[5]

    def __update(self):
        """
        https://github.com/ccxt-dev/ccxt/wiki/Manual#market-price
        """
        self.database = dataset.connect(self.db_url)
        while True:
            for symbol in self.symbols:

                start_time = time.time()
                orders = self.exchange.fetch_order_book(symbol)
                bid = orders['bids'][0][0] if len(orders['bids']) > 0 else None
                ask = orders['asks'][0][0] if len(orders['asks']) > 0 else None
                spread = (ask - bid) if (bid and ask) else None
                dtime = datetime.datetime.now()
                self.deques[symbol].append((bid, ask, spread, dtime))
                self.database.begin()
                try:
                    self.database[symbol].insert({
                        'ask': ask,
                        'bid': bid,
                        'spread': spread,
                        'time': dtime
                    })
                    self.database.commit()
                except:
                    self.database.rollback()

                time.sleep(max(0.0, self.delay_seconds - (time.time() - start_time)))


data = CryptoDataGrabber()

selected_dropdown_value = 'ETH/BTC'

app = dash.Dash()
app.layout = html.Div([
    html.Div([
        html.H1('Poloniex Nerd', id='h1_title'),
        dcc.Dropdown(
            id='symbol-dropdown',
            options=[{'label': key, 'value': key}
                     for key in data.get_symbols()],
            value=selected_dropdown_value
        ),
        html.Div([
            dcc.Graph(
                id='ohlc',
                config={
                    'displayModeBar': False
                }
            ),
        ], className="row"),
        html.Div([
            dcc.Graph(
                id='v',
                config={
                    'displayModeBar': False
                }
            ),
        ], className="row"),
        html.Div([
            html.Div([
                dcc.Graph(
                    id='market-prices-graph',
                    config={
                        'displayModeBar': False
                    }
                ),
                dcc.Graph(
                    id='spread-graph',
                    config={
                        'displayModeBar': False
                    }
                ),
            ], className="eight columns"),
            html.Div([
                dcc.Graph(
                    id='market-prices-hist',
                    config={
                        'displayModeBar': False
                    }
                ),
                dcc.Graph(
                    id='spread-hist',
                    config={
                        'displayModeBar': False
                    }
                ),
            ], className="four columns")
        ], className="row"),
        dcc.Interval(id='graph-speed-update', interval=2000),
    ], className="row")
])
app.css.append_css({
    'external_url': 'https://codepen.io/chriddyp/pen/bWLwgP.css'
})


@app.callback(Output('h1_title', 'children'),
              [Input('symbol-dropdown', 'value')])
def change_plot(value):
    global selected_dropdown_value
    selected_dropdown_value = value
    return 'Market Prices ' + str(value)


@app.callback(Output('ohlc', 'figure'),
              [Input('symbol-dropdown', 'value')])
def plot_olhc(value):
    global data
    dates, open_data, high_data, low_data, close_data, _ \
        = data.get_ohlcv(value)
    return {
        'data': [go.Ohlc(x=dates,
                         open=open_data,
                         high=high_data,
                         low=low_data,
                         close=close_data)],
        'layout': dict(title="OHLC")
    }


@app.callback(Output('v', 'figure'),
              [Input('symbol-dropdown', 'value')])
def plot_v(value):
    global data
    dates, _, _, _, _, volume = data.get_ohlcv(value)
    return {
        'data': [go.Bar(x=dates,
                        y=volume)],
        'layout': dict(title="Volume")
    }


@app.callback(Output('market-prices-graph', 'figure'),
              events=[Event('graph-speed-update', 'interval')])
def update_market_prices():
    global selected_dropdown_value
    global data
    prices = data.get_prices(selected_dropdown_value)
    prices = [list(p) for p in zip(*prices)]
    if len(prices) > 0:
        traces = []
        x = list(prices[3])
        for i, key in enumerate(['bid', 'ask']):
            trace = go.Scatter(x=x,
                               y=prices[i],
                               name=key,
                               opacity=0.8)
            traces.append(trace)
        return {
            'data': traces,
            'layout': dict(title="Market Prices")
        }


@app.callback(Output('market-prices-hist', 'figure'),
              events=[Event('graph-speed-update', 'interval')])
def update_market_prices_hist():
    global selected_dropdown_value
    global data
    prices = data.get_prices(selected_dropdown_value)
    prices = [list(p) for p in zip(*prices)]
    if len(prices) > 0:
        traces = []
        for i, key in enumerate(['bid', 'ask']):
            trace = go.Histogram(x=prices[i][-200:],
                                 name=key,
                                 opacity=0.8)
            traces.append(trace)
        return {
            'data': traces,
            'layout': dict(title="Market Prices Histogram (200 Most Recent)")
        }


@app.callback(Output('spread-graph', 'figure'),
              events=[Event('graph-speed-update', 'interval')])
def update_spread():
    global selected_dropdown_value
    global data
    prices = data.get_prices(selected_dropdown_value)
    prices = [list(p) for p in zip(*prices)]
    if len(prices) > 0:
        traces = []
        trace = go.Scatter(x=list(prices[3]),
                           y=list(prices[2]),
                           name='spread',
                           line=dict(color='rgb(114, 186, 59)'),
                           fill='tozeroy',
                           fillcolor='rgba(114, 186, 59, 0.5)',
                           mode='none')
        traces.append(trace)

        return {
            'data': traces,
            'layout': dict(title="Spread")
        }


@app.callback(Output('spread-hist', 'figure'),
              events=[Event('graph-speed-update', 'interval')])
def update_spread_hist():
    global selected_dropdown_value
    global data
    prices = data.get_prices(selected_dropdown_value)
    prices = [list(p) for p in zip(*prices)]
    if len(prices) > 0:
        traces = []
        trace = go.Histogram(x=list(prices[2][-200:]),
                             name='spread',
                             marker=dict(color='rgba(114, 186, 59, 0.5)'))
        traces.append(trace)

        return {
            'data': traces,
            'layout': dict(title="Spread Histogram (200 Most Recent)")
        }


if __name__ == '__main__':
    app.run_server()




================================================
FILE: Inputs_Table_Builder.py
================================================
import sqlite3
import time
import datetime

import pandas as pd
import numpy as np
import arrow
from termcolor import colored

from Variable_Functions.Twitter_Sentiment import dates_to_sentiment
from crycompare import *





def db_connection(database):
    """"""
    db_connection = sqlite3.connect(database)
    return db_connection


def get_symbols(database):
    """"""
    db_connection = sqlite3.connect(database)
    cursor = db_connection.cursor()
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' and name NOT LIKE '%_1%'")
    symbols1 = cursor.fetchall()
    symbols = []
    for symbol in symbols1:
        symbols.append(symbol[0])
    return symbols


symbols = get_symbols('databases/market_prices.db')

######################################## Bid Ask Spread Time #######################################
def OLHCV_From_DB(symbol, database):
    """"""
    df = pd.read_sql(r"SELECT * FROM '" + symbol + "'", con=db_connection(database))
    return df


######################################### Twitter Sentiment #######################################
def twitter_sentiment(symbol, max_tweets):
    """"""
    
    now = arrow.now()
    dates = now.date()
    print(dates)

    time = now.time()
    print(time)
    sentiment_variables = dates_to_sentiment(dates, symbol, max_tweets)
    return sentiment_variables




######################################### Split Trading Pair into 'to' and 'from' ###########
def split_symbols(symbols):
    """"""
    split_symbols = {}
    for symbol in symbols:
        split_symbols[symbol] = symbol.split('/')

    #split_symbols[symbol][0] is the "to" symbol
    #split_symbols[symbol][1] is the "from" symbol

    return split_symbols




######################################### Main function to build input dataset #####################
def generate_input_dataset(database):
    """"""
    symbols = get_symbols(database)
    table_names = {}
    start_time = time.time()
    total_runtime = time.time() - start_time
    for symbol in symbols:
        table_names[symbol] = OLHCV_From_DB(symbol, database)
    i = len(symbols)
    total_symbols = len(symbols)
    for symbol in symbols:
        print('')
        print(str(i) + " More Symbols to Pull Sentiment For")
        print('')
        total_runtime = time.time() - start_time
        average_runtime = round(( total_runtime / ((total_symbols+1) - i)), 2)
        print('')
        print(colored("Total Runtime = " + str(round((total_runtime / 60), 2)) + " minutes", 'blue'))
        print('')
        print(colored("Average seconds per symbol = " + str(average_runtime) + " seconds", 'blue'))
        print('')
        print(colored("Estimated seconds until last sentiment = " + (str(average_runtime * i) + " seconds remaining"), 'blue'))
        print('')
        print('')
        i -= 1
        sentiment_variables = twitter_sentiment(symbol, 1)
        
        # dates_to_sentiment returns: [0-3] full pair, [4-7] "from" symbol, [8-11] "to" symbol
        table_names[symbol]['Tweet_Sentiment_Polarity'] = sentiment_variables[0]
        table_names[symbol]['Tweet_Sentiment_Subjectivity'] = sentiment_variables[1]
        table_names[symbol]['Tweet_Positive_Percent'] = sentiment_variables[2]
        table_names[symbol]['Tweet_Sentiment_STDDEV'] = sentiment_variables[3]
        table_names[symbol]['Tweet_Sentiment_Polarity_from'] = sentiment_variables[4]
        table_names[symbol]['Tweet_Sentiment_Subjectivity_from'] = sentiment_variables[5]
        table_names[symbol]['Tweet_Positive_Percent_from'] = sentiment_variables[6]
        table_names[symbol]['Tweet_Sentiment_STDDEV_from'] = sentiment_variables[7]
        table_names[symbol]['Tweet_Sentiment_Polarity_to'] = sentiment_variables[8]
        table_names[symbol]['Tweet_Sentiment_Subjectivity_to'] = sentiment_variables[9]
        table_names[symbol]['Tweet_Positive_Percent_to'] = sentiment_variables[10]
        table_names[symbol]['Tweet_Sentiment_STDDEV_to'] = sentiment_variables[11]
    print(table_names)
    return table_names






================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2018 Brandon Shaw

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# Crypto_Trader
**A Live Machine-Learning based Cryptocurrency Trader for the Poloniex Exchange**

If you would like to follow our progress or reach out to the developers, feel free to join our Discord channel at:
https://discord.gg/UPNV2fH

The goal of the project is to create an open-source, machine-learning-based cryptocurrency portfolio optimizer incorporating as many relevant variables as possible.

These will include:

- **Community Sentiment** (such as forum and Twitter sentiment analysis)
- **Analyst Opinions** (notable Twitter account/analyst feeds with a history of winning strategies, e.g. "Haejin", a successful Elliott Wave trader with an established track record)
- **Global Economic Indexes** (DOW, Nikkei225, FOREX data, etc.)
- **Open/Close/High/Low/Volume**
- **Live Orderbook Data** (spread, bid, ask, order size) 
- **Technical Indicators** (MACDs, MAs, VWAPs, etc.; a minimal pandas sketch follows this list)
- **Any Other Variable** of interest found to be important (Please feel free to brainstorm and send suggestions)
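
As a rough illustration of the technical-indicator inputs, here is a minimal pandas sketch computing a simple moving average and MACD from a `close` price series; the column names and dummy data are hypothetical stand-ins, not part of this repository.

```python
import pandas as pd


def add_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Append a 20-period SMA and MACD columns to a DataFrame with a 'close' column."""
    out = df.copy()
    out["sma_20"] = out["close"].rolling(20).mean()             # simple moving average
    ema_12 = out["close"].ewm(span=12, adjust=False).mean()     # fast EMA
    ema_26 = out["close"].ewm(span=26, adjust=False).mean()     # slow EMA
    out["macd"] = ema_12 - ema_26                               # MACD line
    out["macd_signal"] = out["macd"].ewm(span=9, adjust=False).mean()  # signal line
    return out


# Example with dummy prices
prices = pd.DataFrame({"close": [float(i) for i in range(60)]})
print(add_indicators(prices).tail())
```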


A **Boruta analysis** (a random-forest-based feature-selection method) will be used to reduce these input variables by removing "unimportant" features.
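
A minimal sketch of that idea, assuming scikit-learn and the `boruta` package (neither is in requirements.txt) and placeholder features and labels in place of the real input tables:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy

rng = np.random.RandomState(42)
X = rng.normal(size=(500, 6))            # placeholder feature matrix (sentiment, spread, ...)
y = (rng.rand(500) > 0.5).astype(int)    # placeholder "pair rose over the window" labels

forest = RandomForestClassifier(n_jobs=-1, max_depth=5)
selector = BorutaPy(forest, n_estimators='auto', random_state=42)
selector.fit(X, y)                       # BorutaPy expects numpy arrays, not DataFrames

print("Confirmed feature columns:", np.where(selector.support_)[0])
print("Tentative feature columns:", np.where(selector.support_weak_)[0])
```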

Using this input dataset, a machine-learning **Binary Classifier** will be created to assign trading pairs on Poloniex a confidence score from 0 to 1. This value will represent the confidence that the trading pair will increase over a certain timeframe. Multiple models may be constructed to score 5-minute, 10-minute, 15-minute, 30-minute, 1-hour, and other windows.
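
As a sketch of that scoring step, assuming scikit-learn (not in requirements.txt) and placeholder data, `predict_proba` gives the 0-to-1 confidence directly:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 8))            # placeholder features from the input tables
y = (rng.rand(1000) > 0.5).astype(int)    # 1 if the pair rose over the chosen window

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]  # confidence that each pair will increase
print(scores[:5])
```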

Using these scores, a **Q-Learning Bot** (reinforcement learning) will be created to optimize a trading strategy based on the binary classifier scores. The bot will read the amount of capital in the user's Poloniex account and automatically place trades to optimize the portfolio's holdings. These strategies will use stop-losses and sell limits. Because it is Q-learning based, the bot will receive and use live data to make decisions in a live, auto-updating context with a reward that optimizes profit. This will allow it to continue training and optimizing itself over time while feeding in live data and placing trades.
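
For reference, a minimal tabular Q-learning loop on a toy environment; the states, actions, and reward here are hypothetical simplifications of the live trading setup described above, not the project's actual bot:

```python
import numpy as np

n_states, n_actions = 10, 3                 # e.g. bucketed classifier score; hold / buy / sell
alpha, gamma, epsilon = 0.1, 0.95, 0.1      # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))
rng = np.random.RandomState(1)


def step(state, action):
    """Toy environment: random next state and a profit-like reward."""
    next_state = rng.randint(n_states)
    reward = rng.normal(loc=(action - 1) * (state - n_states / 2) * 0.01)
    return next_state, reward


state = rng.randint(n_states)
for _ in range(10000):
    # Epsilon-greedy action selection
    action = rng.randint(n_actions) if rng.rand() < epsilon else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Standard Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.round(3))
```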


Ultimately, the program will be able to run 24 hours a day, optimizing a portfolio with live data and accounting for fees in order to identify and take advantage of trading opportunities across the entire cryptocurrency market on Poloniex. Because cryptocurrencies are divisible to many decimal places, in theory a portfolio of any size can be used as long as it meets the minimum trade size on Poloniex.



| Major Design Features |         Purpose          |
| --------------------- |:------------------------:|
| Identified Variables  | Related to Price Action  |
| Data Scraper          | Live Variables to Array  |
| Binary Classifiers    | Score Trading Pairs Live |
| Q-Learning Bot        | Optimize Trade Strategy  |
| Poloniex API Link     | Allow Bot to Make Trades |



If you would like to donate to the project, please do so at the following Bitcoin/Litecoin/Ethereum addresses. All donations appreciated :)

**DONATIONS**

| Currency |                  Address                   |
| -------- |:-----------------------------------------: |
| Bitcoin  |     1GjVgMUDfKHzhxgeauRagVfp1GCrSJXijb     |
| Litecoin | 0x9852389Bd431A90A9AEcb48EdA50Da1ac05Bd4d8 |
| Ethereum |     M9oJaUnCB6Soistk3wSETziFDzz8gAaJCU     |
    


================================================
FILE: Table_Timer.py
================================================
from Inputs_Table_Builder import *
import time
import datetime
import sqlite3
import pandas

starttime=time.time()

#Runs the Input Table Builder module in 5 minute timesteps, appending to database
while True:
  ts = time.time()
  ts = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
  print("Scraping at: " + ts)
  input_tables = generate_input_dataset('databases/market_prices.db')
  db_connection = sqlite3.connect('databases/market_prices.db')
  cursor = db_connection.cursor()
  i = len(symbols)
  for symbol in symbols:
    split=symbol.split('/')
    try:
      cursor.execute("select Tweet_Sentiment_Polarity_to from '" + symbol + "';")
    except:
      cursor.execute("Alter table '" + symbol + "' RENAME TO " + split[0] + split[1] + "_1old")
    try:
      input_tables[symbol]['Tweet_Sentiment_Polarity']
    except:
      input_tables[symbol]['Tweet_Sentiment_Polarity'] = 0
    try:
      input_tables[symbol]['Tweet_Sentiment_Subjectivity']
    except:
      input_tables[symbol]['Tweet_Sentiment_Subjectivity'] = 0
    try:
      input_tables[symbol]['Tweet_Positive_Percent']
    except:
      input_tables[symbol]['Tweet_Positive_Percent'] = 0
    try:
      input_tables[symbol]['Tweet_Sentiment_STDDEV']
    except:
      input_tables[symbol]['Tweet_Sentiment_STDDEV'] = 0

    try:
      input_tables[symbol]['Tweet_Sentiment_Polarity_to']
    except:
      input_tables[symbol]['Tweet_Sentiment_Polarity_to'] = 0
    try:
      input_tables[symbol]['Tweet_Sentiment_Subjectivity_to']
    except:
      input_tables[symbol]['Tweet_Sentiment_Subjectivity_to'] = 0
    try:
      input_tables[symbol]['Tweet_Positive_Percent_to']
    except:
      input_tables[symbol]['Tweet_Positive_Percent_to'] = 0
    try:
      input_tables[symbol]['Tweet_Sentiment_STDDEV_to']
    except:
      input_tables[symbol]['Tweet_Sentiment_STDDEV_to'] = 0
    try:
      input_tables[symbol]['Tweet_Sentiment_Polarity_from']
    except:
      input_tables[symbol]['Tweet_Sentiment_Polarity_from'] = 0
    try:
      input_tables[symbol]['Tweet_Sentiment_Subjectivity_from']
    except:
      input_tables[symbol]['Tweet_Sentiment_Subjectivity_from'] = 0
    try:
      input_tables[symbol]['Tweet_Positive_Percent_from']
    except:
      input_tables[symbol]['Tweet_Positive_Percent_from'] = 0
    try:
      input_tables[symbol]['Tweet_Sentiment_STDDEV_from']
    except:
      input_tables[symbol]['Tweet_Sentiment_STDDEV_from'] = 0
    input_tables[symbol].to_sql(name=(str(symbol)), con=db_connection, if_exists = 'append', index=False)
  ts = time.time()
  ts = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
  print("Saved to database at: " + ts)
  print(input_tables)
  # Sleep until the next 5-minute boundary before the next scrape
  time.sleep(300.0 - ((time.time() - starttime) % 300.0))


================================================
FILE: Variable_Functions/Technical_Indicators.py
================================================
from crycompare import *
import pandas as pd
import sqlite3

db_connection = sqlite3.connect('databases/market_prices.db')
cursor = db_connection.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
symbols1 = cursor.fetchall()
symbols = []
for symbol in symbols1:
	symbols.append(symbol[0])
print(symbols)

h = History()

# Collect the unique coins appearing on either side of each trading pair
from_coins = []
to_coins = []
for symbol in symbols:
    parts = symbol.split('/')
    if len(parts) == 2:
        to_coins.append(parts[0])
        from_coins.append(parts[1])
coins = sorted(set(to_coins + from_coins))

df_dict = {}

for coin in coins:
    histo = h.histoMinute(coin, 'USD')
    if histo['Data']:
        df_histo = pd.DataFrame(histo['Data'])
        df_histo['time'] = pd.to_datetime(df_histo['time'],unit='s')
        df_histo.index = df_histo['time']
        del df_histo['time']
        del df_histo['volumefrom']
        del df_histo['volumeto']
        
        df_dict[coin] = df_histo
        print(df_dict[coin])

================================================
FILE: Variable_Functions/Twitter_Sentiment.py
================================================
import got3
import arrow
from textblob import TextBlob
import numpy as np
from termcolor import colored





def dates_to_sentiment(dates, ticker, max_tweets):
    """Return an array of 12 sentiment statistics for a trading pair: mean polarity,
    mean subjectivity, positive percentage, and polarity standard deviation, computed
    for the full pair, then the 'from' symbol, then the 'to' symbol."""
    print(colored("Calculating sentiment for:" + ticker, 'white'))
    sentiments = []
    positives = []
    negatives = []
    arrow_date = dates
    tweetCriteria = got3.manager.TweetCriteria().setQuerySearch("{}{}".format("#", ticker)).setMaxTweets(max_tweets)
    tweets = got3.manager.TweetManager.getTweets(tweetCriteria)
    sents_per_date = []
    subjectivity = []
    for t in tweets:      
        blob = TextBlob(t.text)
        sent = blob.sentiment[0] #get the polarity
        subjectives = blob.sentiment[1] #get the subjectivity
        sents_per_date.append(sent) #Saving polarity to sents_per_date
        subjectivity.append(subjectives) #Saving subjectivity to subjectivity 
        if blob.sentiment[0] > 0: #Separating positive and negative tweets to lists
            positives.append(t)
        else:
            negatives.append(t)
    standard_dev_array = np.asarray(sents_per_date)
    if len(sents_per_date) >= 1:
        mean_polarity = sum(sents_per_date) / len(sents_per_date)
        mean_subjectivity = sum(subjectivity) / len(sents_per_date)
        percent_positive = len(positives) / len(sents_per_date)
        standard_deviation_polarity = np.std(standard_dev_array)
    else:
        mean_polarity = 0
        mean_subjectivity = 0
        percent_positive = .5
        standard_deviation_polarity = 0
        #Mean Polarity 

    try:
        sentiments.append(mean_polarity)
    except: 
        sentiments.append(0)
    #Mean Subjectivity 
    try:
        sentiments.append(mean_subjectivity)
    except: 
        sentiments.append(0)
    #Percentage of Tweets that are positive 
    try: 
        sentiments.append(percent_positive)
    except:
        sentiments.append(0.5)
    #Standard Deviation of tweet sentiment Polarity
    try: 
        sentiments.append(standard_deviation_polarity)
    except:
        sentiments.append(0)




    split_symbol = ticker.split('/')


    ticker = split_symbol[1]
    print(colored("Calculating sentiment for:" + ticker, 'red'))
    positives = []
    negatives = []
    subjectivity = []
    sents_per_date = []
    tweetCriteria = got3.manager.TweetCriteria().setQuerySearch("{}{}".format("#", ticker)).setMaxTweets(max_tweets) 
    tweets = got3.manager.TweetManager.getTweets(tweetCriteria)

    for t in tweets:
        blob = TextBlob(t.text)
        sent = blob.sentiment[0] #get the polarity
        subjectives = blob.sentiment[1] #get the subjectivity
        sents_per_date.append(sent) #Saving polarity to sents_per_date
        subjectivity.append(subjectives) #Saving subjectivity to subjectivity 

        if blob.sentiment[0] > 0: #Separating positive and negative tweets to lists
            positives.append(t)
        else:
            negatives.append(t)
    standard_dev_array = np.asarray(sents_per_date)

    if len(sents_per_date) >= 1:
        mean_polarity_from = sum(sents_per_date) / len(sents_per_date)
        mean_subjectivity_from = sum(subjectivity) / len(sents_per_date)
        percent_positive_from = len(positives) / len(sents_per_date)
        standard_deviation_polarity_from = np.std(standard_dev_array)
    else:
        mean_polarity_from = 0
        mean_subjectivity_from = 0
        percent_positive_from = .5
        standard_deviation_polarity_from = 0

    #Mean Polarity 
    try:
        sentiments.append(mean_polarity_from)
    except:
        sentiments.append(0)
    #Mean Subjectivity 
    try:
        sentiments.append(mean_subjectivity_from)
    except:
        sentiments.append(0)
    #Percentage of Tweets that are positive 
    try:
        sentiments.append(percent_positive_from)
    except:
        sentiments.append(0.5)        
    #Standard Deviation of tweet sentiment Polarity
    try:
        sentiments.append(standard_deviation_polarity_from)
    except:
        sentiments.append(0)




    ticker = split_symbol[0]
    print(colored("Calculating sentiment for:" + ticker, 'green'))
    positives = []
    negatives = []
    tweetCriteria = got3.manager.TweetCriteria().setQuerySearch("{}{}".format("#", ticker)).setMaxTweets(max_tweets)
    tweets = got3.manager.TweetManager.getTweets(tweetCriteria)
    sents_per_date = []
    subjectivity = []
    for t in tweets:
        blob = TextBlob(t.text)
        sent = blob.sentiment[0] #get the polarity
        subjectives = blob.sentiment[1] #get the subjectivity
        sents_per_date.append(sent) #Saving polarity to sents_per_date
        subjectivity.append(subjectives) #Saving subjectivity to subjectivity 

        if blob.sentiment[0] > 0: #Separating positive and negative tweets to lists
            positives.append(t)
        else:
            negatives.append(t)
    standard_dev_array = np.asarray(sents_per_date)
    if len(sents_per_date) >= 1:
        mean_polarity_to = sum(sents_per_date) / len(sents_per_date)
        mean_subjectivity_to = sum(subjectivity) / len(sents_per_date)
        percent_positive_to = len(positives) / len(sents_per_date)
        standard_deviation_polarity_to = np.std(standard_dev_array)
    else:
        mean_polarity_to = 0
        mean_subjectivity_to = 0
        percent_positive_to = .5
        standard_deviation_polarity_to = 0
    #Mean Polarity 
    try:
        sentiments.append(mean_polarity_to) 
    except:
        sentiments.append(0)
    #Mean Subjectivity 
    try:
        sentiments.append(mean_subjectivity_to)
    except:
        sentiments.append(0)
    #Percentage of Tweets that are positive 
    try:
        sentiments.append(percent_positive_to)
    except:
        sentiments.append(0.5)
    #Standard Deviation of tweet sentiment Polarity
    try:
        sentiments.append(standard_deviation_polarity_to)
    except:
        sentiments.append(0)




    sentiments = np.asarray(sentiments)

    return sentiments

================================================
FILE: Variable_Functions/__init__.py
================================================


================================================
FILE: crycompare.py
================================================
import sys
import requests
import warnings


class Price:
	def __init__(self):
		self.__coinlisturl = 'https://www.cryptocompare.com/api/data/coinlist/'
		self.__priceurl = 'https://min-api.cryptocompare.com/data/price?'
		self.__pricemultiurl = 'https://min-api.cryptocompare.com/data/pricemulti?'
		self.__pricemultifullurl = 'https://min-api.cryptocompare.com/data/pricemultifull?'
		self.__generateavgurl = 'https://min-api.cryptocompare.com/data/generateAvg?'
		self.__dayavgurl = 'https://min-api.cryptocompare.com/data/dayAvg?'
		self.__historicalurl = 'https://min-api.cryptocompare.com/data/pricehistorical?'
		self.__coinsnapshoturl = 'https://www.cryptocompare.com/api/data/coinsnapshot/?'
		self.__coinsnapshotfull = 'https://www.cryptocompare.com/api/data/coinsnapshotfullbyid/?'

	def coinList(self):
		return self.__get_url(self.__coinlisturl)

	def price(self, from_curr, to_curr, e=None, extraParams=None, sign=False, tryConversion=True):
		return self.__get_price(self.__priceurl, from_curr, to_curr, e, extraParams, sign, tryConversion)

	def priceMulti(self, from_curr, to_curr, e=None, extraParams=None, sign=False, tryConversion=True):
		return self.__get_price(self.__pricemultiurl, from_curr, to_curr, e, extraParams, sign, tryConversion)

	def priceMultiFull(self, from_curr, to_curr, e=None, extraParams=None, sign=False, tryConversion=True):
		return self.__get_price(self.__pricemultifullurl, from_curr, to_curr, e, extraParams, sign, tryConversion)

	def priceHistorical(self, from_curr, to_curr, markets, ts=None, e=None, extraParams=None,
						sign=False, tryConversion=True):
		return self.__get_price(self.__historicalurl, from_curr, to_curr, e=e, extraParams=extraParams, sign=sign, tryConversion=tryConversion, markets=markets, ts=ts)

	def generateAvg(self, from_curr, to_curr, markets, extraParams=None, sign=False, tryConversion=True):
		return self.__get_avg(self.__generateavgurl, from_curr, to_curr, markets, extraParams, sign, tryConversion)

	def dayAvg(self, from_curr, to_curr, e=None, extraParams=None, sign=False, tryConversion=True,
			   avgType=None, UTCHourDiff=0, toTs=None):
		return self.__get_avg(self.__dayavgurl, from_curr, to_curr, e, extraParams, sign,
							tryConversion, avgType, UTCHourDiff, toTs)

	def coinSnapshot(self, from_curr, to_curr):
		return self.__get_url(self.__coinsnapshoturl + 'fsym=' + from_curr.upper() + '&tsym=' + to_curr.upper())

	def coinSnapshotFullById(self, coin_id):
		return self.__get_url(self.__coinsnapshotfull + 'id=' + str(coin_id))

	def __get_price(self, baseurl, from_curr, to_curr, e=None, extraParams=None, sign=False,
				  tryConversion=True, markets=None, ts=None):
		args = list()
		if isinstance(from_curr, str):
			args.append('fsym=' + from_curr.upper())
		elif isinstance(from_curr, list):
			args.append('fsyms=' + ','.join(from_curr).upper())
		if isinstance(to_curr, list):
			args.append('tsyms=' + ','.join(to_curr).upper())
		elif isinstance(to_curr, str):
			args.append('tsyms=' + to_curr.upper())
		if isinstance(markets, str):
			args.append('markets=' + markets)
		elif isinstance(markets, list):
			args.append('markets=' + ','.join(markets))
		if e:
			args.append('e=' + e)
		if extraParams:
			args.append('extraParams=' + extraParams)
		if sign:
			args.append('sign=true')
		if ts:
			args.append('ts=' + str(ts))
		if not tryConversion:
			args.append('tryConversion=false')
		if len(args) >= 2:
			return self.__get_url(baseurl + '&'.join(args))
		else:
			raise ValueError('Must have both fsym and tsym arguments.')

	def __get_avg(self, baseurl, from_curr, to_curr, markets=None, e=None, extraParams=None,
				sign=False, tryConversion=True, avgType=None, UTCHourDiff=0, toTs=None):
		args = list()
		if isinstance(from_curr, str):
			args.append('fsym=' + from_curr.upper())
		if isinstance(to_curr, str):
			args.append('tsym=' + to_curr.upper())
		if isinstance(markets, str):
			args.append('markets=' + markets)
		elif isinstance(markets, list):
			args.append('markets=' + ','.join(markets))
		if e:
			args.append('e=' + e)
		if extraParams:
			args.append('extraParams=' + extraParams)
		if sign:
			args.append('sign=true')
		if avgType:
			args.append('avgType=' + avgType)
		if UTCHourDiff:
			args.append('UTCHourDiff=' + str(UTCHourDiff))
		if toTs:
			args.append('toTs=' + str(toTs))
		if not tryConversion:
			args.append('tryConversion=false')
		if len(args) >= 2:
			return self.__get_url(baseurl + '&'.join(args))
		else:
			raise ValueError('Must have both fsym and tsym arguments.')

	def __get_url(self, url):
		raw_data = requests.get(url)
		raw_data.encoding = 'utf-8'
		if raw_data.status_code != 200:
			raw_data.raise_for_status()
			return False
		try:
			if isinstance(raw_data.text, unicode):
				warnings.warn('Object returned is of type unicode. Cannot parse to str in Python 2.')
		except NameError:
			pass
		return raw_data.json()


class History:
	def __init__(self):
		self.__histominuteurl = 'https://min-api.cryptocompare.com/data/histominute?'
		self.__histohoururl = 'https://min-api.cryptocompare.com/data/histohour?'
		self.__histodayurl = 'https://min-api.cryptocompare.com/data/histoday?'

	def histoMinute(self, from_curr, to_curr, e=None, extraParams=None,
					sign=False, tryConversion=True, aggregate=None, limit=None, toTs=None):
		return self.__get_price(self.__histominuteurl, from_curr, to_curr, e, extraParams, sign,
								tryConversion, aggregate, limit, toTs)

	def histoHour(self, from_curr, to_curr, e=None, extraParams=None,
				  sign=False, tryConversion=True, aggregate=None, limit=None, toTs=None):
		return self.__get_price(self.__histohoururl, from_curr, to_curr, e, extraParams, sign,
								tryConversion, aggregate, limit, toTs)

	def histoDay(self, from_curr, to_curr, e=None, extraParams=None, sign=False,
				 tryConversion=True, aggregate=None, limit=None, toTs=None, allData=False):
		return self.__get_price(self.__histodayurl, from_curr, to_curr, e, extraParams, sign,
								tryConversion, aggregate, limit, toTs, allData)

	def __get_price(self, baseurl, from_curr, to_curr, e=None, extraParams=None, sign=False,
					tryConversion=True, aggregate=None, limit=None, toTs=None, allData=False):
		args = list()
		if isinstance(from_curr, str):
			args.append('fsym=' + from_curr.upper())
		if isinstance(to_curr, str):
			args.append('tsym=' + to_curr.upper())
		if e:
			args.append('e=' + e)
		if extraParams:
			args.append('extraParams=' + extraParams)
		if sign:
			args.append('sign=true')
		if aggregate:
			args.append('aggregate=' + str(aggregate))
		if limit:
			args.append('limit=' + str(limit))
		if toTs:
			args.append('toTs=' + str(toTs))
		if allData:
			args.append('allData=true')
		if not tryConversion:
			args.append('tryConversion=false')
		if len(args) >= 2:
			return self.__get_url(baseurl + '&'.join(args))
		else:
			raise ValueError('Must have both fsym and tsym arguments.')

	def __get_url(self, url):
		raw_data = requests.get(url)
		raw_data.encoding = 'utf-8'
		if raw_data.status_code != 200:
			raw_data.raise_for_status()
			return False
		try:
			if isinstance(raw_data.text, unicode):
				warnings.warn('Object returned is of type unicode. Cannot parse to str in Python 2.')
		except NameError:
			pass
		return raw_data.json()


================================================
FILE: got3/__init__.py
================================================
from . import models
from . import manager

================================================
FILE: got3/manager/TweetCriteria.py
================================================
class TweetCriteria:

	def __init__(self):
		self.maxTweets = 0

	def setUsername(self, username):
		self.username = username
		return self

	def setSince(self, since):
		self.since = since
		return self

	def setUntil(self, until):
		self.until = until
		return self

	def setQuerySearch(self, querySearch):
		self.querySearch = querySearch
		return self

	def setMaxTweets(self, maxTweets):
		self.maxTweets = maxTweets
		return self

	def setLang(self, Lang):
		self.lang = Lang
		return self

	def setTopTweets(self, topTweets):
		self.topTweets = topTweets
		return self

================================================
FILE: got3/manager/TweetManager.py
================================================
import urllib.request, urllib.parse, urllib.error, json, re, datetime, sys, http.cookiejar
from .. import models
from pyquery import PyQuery


class TweetManager:
    def __init__(self):
        pass

    @staticmethod
    def getTweets(tweetCriteria, receiveBuffer=None, bufferLength=100, proxy=None):
        refreshCursor = ''

        results = []
        resultsAux = []
        cookieJar = http.cookiejar.CookieJar()

        active = True

        while active:
            json = TweetManager.getJsonReponse(tweetCriteria, refreshCursor, cookieJar, proxy)
            if len(json['items_html'].strip()) == 0:
                break

            refreshCursor = json['min_position']
            tweets = PyQuery(json['items_html'])('div.js-stream-tweet')

            if len(tweets) == 0:
                break

            for tweetHTML in tweets:
                tweetPQ = PyQuery(tweetHTML)
                tweet = models.Tweet()

                usernameTweet = tweetPQ("span.username.js-action-profile-name b").text();
                txt = re.sub(r"\s+", " ", tweetPQ("p.js-tweet-text").text().replace('# ', '#').replace('@ ', '@'));
                retweets = int(tweetPQ("span.ProfileTweet-action--retweet span.ProfileTweet-actionCount").attr(
                    "data-tweet-stat-count").replace(",", ""));
                favorites = int(tweetPQ("span.ProfileTweet-action--favorite span.ProfileTweet-actionCount").attr(
                    "data-tweet-stat-count").replace(",", ""));
                dateSec = int(tweetPQ("small.time span.js-short-timestamp").attr("data-time"));
                id = tweetPQ.attr("data-tweet-id");
                permalink = tweetPQ.attr("data-permalink-path");
                user_id = int(tweetPQ("a.js-user-profile-link").attr("data-user-id"))

                geo = ''
                geoSpan = tweetPQ('span.Tweet-geo')
                if len(geoSpan) > 0:
                    geo = geoSpan.attr('title')
                urls = []
                for link in tweetPQ("a"):
                    try:
                        urls.append((link.attrib["data-expanded-url"]))
                    except KeyError:
                        pass
                tweet.id = id
                tweet.permalink = 'https://twitter.com' + permalink
                tweet.username = usernameTweet

                tweet.text = txt
                tweet.date = datetime.datetime.fromtimestamp(dateSec)
                tweet.formatted_date = datetime.datetime.fromtimestamp(dateSec).strftime("%a %b %d %X +0000 %Y")
                tweet.retweets = retweets
                tweet.favorites = favorites
                tweet.mentions = " ".join(re.compile('(@\\w*)').findall(tweet.text))
                tweet.hashtags = " ".join(re.compile('(#\\w*)').findall(tweet.text))
                tweet.geo = geo
                tweet.urls = ",".join(urls)
                tweet.author_id = user_id

                results.append(tweet)
                resultsAux.append(tweet)

                if receiveBuffer and len(resultsAux) >= bufferLength:
                    receiveBuffer(resultsAux)
                    resultsAux = []

                if tweetCriteria.maxTweets > 0 and len(results) >= tweetCriteria.maxTweets:
                    active = False
                    break

        if receiveBuffer and len(resultsAux) > 0:
            receiveBuffer(resultsAux)

        return results

    @staticmethod
    def getJsonReponse(tweetCriteria, refreshCursor, cookieJar, proxy):
        url = "https://twitter.com/i/search/timeline?f=tweets&q=%s&l=en&src=typd&%smax_position=%s"

        urlGetData = ''
        if hasattr(tweetCriteria, 'username'):
            urlGetData += ' from:' + tweetCriteria.username

        if hasattr(tweetCriteria, 'since'):
            urlGetData += ' since:' + tweetCriteria.since

        if hasattr(tweetCriteria, 'until'):
            urlGetData += ' until:' + tweetCriteria.until

        if hasattr(tweetCriteria, 'querySearch'):
            urlGetData += ' ' + tweetCriteria.querySearch

        if hasattr(tweetCriteria, 'lang'):
            urlLang = 'lang=' + tweetCriteria.lang + '&'
        else:
            urlLang = ''
        url = url % (urllib.parse.quote(urlGetData), urlLang, refreshCursor)
        #comment out to hide url
        #print(url)

        headers = [
            ('Host', "twitter.com"),
            ('User-Agent', "Mozilla/5.0 (Windows NT 6.1; Win64; x64)"),
            ('Accept', "application/json, text/javascript, */*; q=0.01"),
            ('Accept-Language', "de,en-US;q=0.7,en;q=0.3"),
            ('X-Requested-With', "XMLHttpRequest"),
            ('Referer', url),
            ('Connection', "keep-alive")
        ]

        if proxy:
            opener = urllib.request.build_opener(urllib.request.ProxyHandler({'http': proxy, 'https': proxy}), \
                                                 urllib.request.HTTPCookieProcessor(cookieJar))
        else:
            opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookieJar))
        opener.addheaders = headers

        try:
            response = opener.open(url)
            jsonResponse = response.read()
            #comment out once done debugging
            # print(
            #     "Twitter weird response. Try to see on browser: https://twitter.com/search?q=%s&src=typd" % urllib.parse.quote(
            #         urlGetData))
        except:
            # print("Twitter weird response. Try to see on browser: ", url)
            print(
                "Twitter weird response. Try to see on browser: https://twitter.com/search?q=%s&src=typd" % urllib.parse.quote(
                    urlGetData))
            print("Unexpected error:", sys.exc_info()[0])
            sys.exit()
            return

        dataJson = json.loads(jsonResponse.decode())

        return dataJson


================================================
FILE: got3/manager/__init__.py
================================================
from .TweetCriteria import TweetCriteria
from .TweetManager import TweetManager

================================================
FILE: got3/manager/__init__.py.bak
================================================
from TweetCriteria import TweetCriteria
from TweetManager import TweetManager

================================================
FILE: got3/models/Tweet.py
================================================
class Tweet:
	
	def __init__(self):
		pass

================================================
FILE: got3/models/__init__.py
================================================
from .Tweet import Tweet

================================================
FILE: requirements.txt
================================================
alembic==0.9.8
banal==0.3.3
ccxt==1.10.1242
certifi==2018.1.18
chardet==3.0.4
click==6.7
dash==0.21.0
dash-core-components==0.19.0
dash-html-components==0.9.0
dash-renderer==0.11.3
dataset==1.0.6
decorator==4.2.1
enum34==1.1.6
Flask==0.12.2
Flask-Compress==1.4.0
functools32==3.2.3.post2
idna==2.6
ipython-genutils==0.2.0
itsdangerous==0.24
Jinja2==2.10
jsonschema==2.6.0
jupyter-core==4.4.0
Mako==1.0.7
MarkupSafe==1.0
nbformat==4.4.0
normality==0.5.11
numpy==1.14.1
pandas==0.22.0
plotly==2.4.1
python-dateutil==2.6.1
python-editor==1.0.3
pytz==2018.3
requests==2.18.4
six==1.11.0
SQLAlchemy==1.2.4
traitlets==4.3.2
urllib3==1.22
Werkzeug==0.14.1


================================================
FILE: requirements_to_freeze.txt
================================================
ccxt==1.10.1242
dash==0.21.0
dash-core-components==0.19.0
dash-html-components==0.9.0
dash-renderer==0.11.3
dataset==1.0.6
pandas==0.22.0

================================================
SYMBOL INDEX (57 symbols across 7 files)
================================================

FILE: Data_Grabber.py
  function ensure_dir (line 15) | def ensure_dir(directory):
  class CryptoDataGrabber (line 21) | class CryptoDataGrabber(object):
    method __init__ (line 23) | def __init__(self):
    method get_symbols (line 56) | def get_symbols(self):
    method get_prices (line 60) | def get_prices(self, symbol):
    method get_ohlcv (line 64) | def get_ohlcv(self, symbol):
    method __update (line 69) | def __update(self):
  function change_plot (line 169) | def change_plot(value):
  function plot_olhc (line 177) | def plot_olhc(value):
  function plot_v (line 193) | def plot_v(value):
  function update_market_prices (line 205) | def update_market_prices():
  function update_market_prices_hist (line 227) | def update_market_prices_hist():
  function update_spread (line 247) | def update_spread():
  function update_spread_hist (line 271) | def update_spread_hist():

FILE: Inputs_Table_Builder.py
  function db_connection (line 18) | def db_connection(database):
  function get_symbols (line 24) | def get_symbols(database):
  function OLHCV_From_DB (line 39) | def OLHCV_From_DB(symbol, database):
  function twitter_sentiment (line 46) | def twitter_sentiment(symbol, max_tweets):
  function split_symbols (line 62) | def split_symbols(symbols):
  function generate_input_dataset (line 77) | def generate_input_dataset(database):

FILE: Variable_Functions/Twitter_Sentiment.py
  function dates_to_sentiment (line 11) | def dates_to_sentiment(dates, ticker, max_tweets):

FILE: crycompare.py
  class Price (line 6) | class Price:
    method __init__ (line 7) | def __init__(self):
    method coinList (line 18) | def coinList(self):
    method price (line 21) | def price(self, from_curr, to_curr, e=None, extraParams=None, sign=Fal...
    method priceMulti (line 24) | def priceMulti(self, from_curr, to_curr, e=None, extraParams=None, sig...
    method priceMultiFull (line 27) | def priceMultiFull(self, from_curr, to_curr, e=None, extraParams=None,...
    method priceHistorical (line 30) | def priceHistorical(self, from_curr, to_curr, markets, ts=None, e=None...
    method generateAvg (line 34) | def generateAvg(self, from_curr, to_curr, markets, extraParams=None, s...
    method dayAvg (line 37) | def dayAvg(self, from_curr, to_curr, e=None, extraParams=None, sign=Fa...
    method coinSnapshot (line 42) | def coinSnapshot(self, from_curr, to_curr):
    method coinSnapshotFullById (line 45) | def coinSnapshotFullById(self, coin_id):
    method __get_price (line 48) | def __get_price(self, baseurl, from_curr, to_curr, e=None, extraParams...
    method __get_avg (line 78) | def __get_avg(self, baseurl, from_curr, to_curr, markets=None, e=None,...
    method __get_url (line 108) | def __get_url(self, url):
  class History (line 122) | class History:
    method __init__ (line 123) | def __init__(self):
    method histoMinute (line 128) | def histoMinute(self, from_curr, to_curr, e=None, extraParams=None,
    method histoHour (line 133) | def histoHour(self, from_curr, to_curr, e=None, extraParams=None,
    method histoDay (line 138) | def histoDay(self, from_curr, to_curr, e=None, extraParams=None, sign=...
    method __get_price (line 143) | def __get_price(self, baseurl, from_curr, to_curr, e=None, extraParams...
    method __get_url (line 171) | def __get_url(self, url):

FILE: got3/manager/TweetCriteria.py
  class TweetCriteria (line 1) | class TweetCriteria:
    method __init__ (line 3) | def __init__(self):
    method setUsername (line 6) | def setUsername(self, username):
    method setSince (line 10) | def setSince(self, since):
    method setUntil (line 14) | def setUntil(self, until):
    method setQuerySearch (line 18) | def setQuerySearch(self, querySearch):
    method setMaxTweets (line 22) | def setMaxTweets(self, maxTweets):
    method setLang (line 26) | def setLang(self, Lang):
    method setTopTweets (line 30) | def setTopTweets(self, topTweets):

FILE: got3/manager/TweetManager.py
  class TweetManager (line 7) | class TweetManager:
    method __init__ (line 8) | def __init__(self):
    method getTweets (line 12) | def getTweets(tweetCriteria, receiveBuffer=None, bufferLength=100, pro...
    method getJsonReponse (line 89) | def getJsonReponse(tweetCriteria, refreshCursor, cookieJar, proxy):

FILE: got3/models/Tweet.py
  class Tweet (line 1) | class Tweet:
    method __init__ (line 3) | def __init__(self):

================================================
CONDENSED PREVIEW (20 files: path, character count, content snippet)
================================================
[
  {
    "path": ".gitignore",
    "chars": 1432,
    "preview": "# GITHUB BOILERPLATE GIT IGNORE\n# https://github.com/github/gitignore/blob/master/Python.gitignore\n\n# Byte-compiled / op"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "chars": 3211,
    "preview": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nIn the interest of fostering an open and welcoming environment, w"
  },
  {
    "path": "Data_Grabber.py",
    "chars": 9138,
    "preview": "import os\nimport ccxt\nimport time\nimport datetime\nimport dash\nfrom dash.dependencies import Input, Output, Event\nimport "
  },
  {
    "path": "Inputs_Table_Builder.py",
    "chars": 4068,
    "preview": "import sqlite3\nimport pandas as pd\nimport numpy as np\nfrom datetime import datetime, timedelta\nfrom Variable_Functions.T"
  },
  {
    "path": "LICENSE",
    "chars": 1069,
    "preview": "MIT License\n\nCopyright (c) 2018 Brandon Shaw\n\nPermission is hereby granted, free of charge, to any person obtaining a co"
  },
  {
    "path": "README.md",
    "chars": 3428,
    "preview": "# Crypto_Trader\n**A Live Machine-Learning based Cryptocurrency Trader for the Poloniex Exchange**\n\nIf you would like to "
  },
  {
    "path": "Table_Timer.py",
    "chars": 2764,
    "preview": "from Inputs_Table_Builder import *\nimport time\nimport datetime\nimport sqlite3\nimport pandas\n\nstarttime=time.time()\n\n#Run"
  },
  {
    "path": "Variable_Functions/Technical_Indicators.py",
    "chars": 797,
    "preview": "from crycompare import *\nimport pandas as pd\nimport sqlite3\n\ndb_connection = sqlite3.connect('databases/market_prices.db"
  },
  {
    "path": "Variable_Functions/Twitter_Sentiment.py",
    "chars": 5985,
    "preview": "import got3\nimport arrow\nfrom textblob import TextBlob\nimport numpy as np\nfrom termcolor import colored\n\n\n\n\n\ndef dates_t"
  },
  {
    "path": "Variable_Functions/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crycompare.py",
    "chars": 7230,
    "preview": "import sys\nimport requests\nimport warnings\n\n\nclass Price:\n\tdef __init__(self):\n\t\tself.__coinlisturl = 'https://www.crypt"
  },
  {
    "path": "got3/__init__.py",
    "chars": 42,
    "preview": "from . import models\nfrom . import manager"
  },
  {
    "path": "got3/manager/TweetCriteria.py",
    "chars": 577,
    "preview": "class TweetCriteria:\n\n\tdef __init__(self):\n\t\tself.maxTweets = 0\n\n\tdef setUsername(self, username):\n\t\tself.username = use"
  },
  {
    "path": "got3/manager/TweetManager.py",
    "chars": 6085,
    "preview": "import urllib.request, urllib.parse, urllib.error, urllib.request, urllib.error, urllib.parse, json, re, datetime, sys, "
  },
  {
    "path": "got3/manager/__init__.py",
    "chars": 80,
    "preview": "from .TweetCriteria import TweetCriteria\r\nfrom .TweetManager import TweetManager"
  },
  {
    "path": "got3/manager/__init__.py.bak",
    "chars": 77,
    "preview": "from TweetCriteria import TweetCriteria\nfrom TweetManager import TweetManager"
  },
  {
    "path": "got3/models/Tweet.py",
    "chars": 42,
    "preview": "class Tweet:\n\t\n\tdef __init__(self):\n\t\tpass"
  },
  {
    "path": "got3/models/__init__.py",
    "chars": 24,
    "preview": "from .Tweet import Tweet"
  },
  {
    "path": "requirements.txt",
    "chars": 649,
    "preview": "alembic==0.9.8\nbanal==0.3.3\nccxt==1.10.1242\ncertifi==2018.1.18\nchardet==3.0.4\nclick==6.7\ndash==0.21.0\ndash-core-componen"
  },
  {
    "path": "requirements_to_freeze.txt",
    "chars": 138,
    "preview": "ccxt==1.10.1242\ndash==0.21.0\ndash-core-components==0.19.0\ndash-html-components==0.9.0\ndash-renderer==0.11.3\ndataset==1.0"
  }
]

About this extraction

This page contains the full source code of the bshaw19/Crypto_Trader GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 20 files (45.7 KB), approximately 11.6k tokens, and a symbol index with 57 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
