Full Code of MiloMallo/StockMarketGAN for AI

Repository: MiloMallo/StockMarketGAN
Branch: master
Commit: 06205773e8d7
Files: 19
Total size: 12.4 MB

Directory structure:
gitextract_g4r7c7s8/

├── .gitignore
├── README.md
├── cnn.py
├── companylist.csv
├── deployed_models/
│   ├── cnn.data-00000-of-00001
│   ├── cnn.index
│   ├── cnn.meta
│   ├── gan.data-00000-of-00001
│   ├── gan.index
│   ├── gan.meta
│   └── xgb
├── figures/
│   └── Stock_GAN.odp
├── gan.py
├── get_predictions.py
├── get_stock_data.py
├── plot_confusion_matrix.py
├── train_cnn.py
├── train_gan.py
└── train_xgb_boost.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
stock_data/
models/


================================================
FILE: README.md
================================================
# Unsupervised Stock Market Feature Construction using Generative Adversarial Networks (GANs)
Deep Learning constructs features using only raw data. The learned representation of the data outperforms expert features for many modalities, including Radio Frequency ([Convolutional Radio Modulation Recognition Networks](https://arxiv.org/pdf/1602.04105.pdf)), computer vision ([Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations](https://www.cs.princeton.edu/~rajeshr/papers/icml09-ConvolutionalDeepBeliefNetworks.pdf)), and audio classification ([Unsupervised feature learning for audio classification using convolutional deep belief networks](http://www.robotics.stanford.edu/~ang/papers/nips09-AudioConvolutionalDBN.pdf)). In the case of Convolutional Neural Networks (CNNs), the data representation is learned in a supervised fashion with respect to a task such as classification. For a typical CNN to generalize to unseen data, it requires very large quantities of data, and the amount available is often not sufficient. GANs allow features to be learned unsupervised. This reduces the potential for the features to overfit the training data, which in turn means that a classification algorithm trained on the features will generalize with a smaller amount of data. In fact, GANs promote generalization beyond the training data, as will be seen.
# GAN 
For a full review of a GAN: [Generative Adversarial Nets](https://arxiv.org/pdf/1406.2661.pdf) 
![alt text](https://github.com/nmharmon8/StockMarketGAN/blob/master/figures/gan.png)
The Generator is trained to map samples from a Gaussian distribution to data that looks like historical price data of the target stocks. The Discriminator is trained to tell the difference between data from the Generator and real data. The Discriminator's error is used to train the Generator to defeat the Discriminator. This competition forces the Discriminator to distinguish random variability from real variability.
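The two adversarial objectives above can be sketched numerically without TensorFlow. The logits below are made-up stand-ins for Discriminator outputs, and `sigmoid_ce` mirrors the formula used by `tf.nn.sigmoid_cross_entropy_with_logits` in `gan.py`:

```python
import numpy as np

def sigmoid_ce(logits, labels):
    """Numerically stable sigmoid cross-entropy, same formula as
    tf.nn.sigmoid_cross_entropy_with_logits: max(x,0) - x*z + log(1+exp(-|x|))."""
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

# Hypothetical Discriminator logits for a batch of real and generated windows
d_logit_real = np.array([2.0, 1.5, 3.0])     # confidently classified as real
d_logit_fake = np.array([-2.0, -1.0, -2.5])  # confidently classified as fake

# Discriminator loss: push real toward label 1, fake toward label 0 (gan.py's D_loss)
d_loss = sigmoid_ce(d_logit_real, 1.0).mean() + sigmoid_ce(d_logit_fake, 0.0).mean()

# Generator loss: fool the Discriminator, i.e. push fake toward label 1 (gan.py's G_loss)
g_loss = sigmoid_ce(d_logit_fake, 1.0).mean()

print(d_loss, g_loss)
```

With a Discriminator this confident, its own loss is small while the Generator's loss is large, which is exactly the gradient signal that drives the Generator to improve.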
# Approach 
**Data**
Historical stock prices are likely not very predictive of a stock's future price, but they are free data. Technical indicators are calculated from the historical prices. Not being a trader, I don't know the validity of technical indicators, but if a sufficient number of investors use them such that they move the market, then historical price data should suffice to predict the direction of the market correctly more than 50% of the time.
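For concreteness, a typical price-derived technical indicator such as a simple moving average can be computed in a few lines. The closes below are toy values, not from the dataset, and this particular indicator is illustrative only:

```python
import numpy as np

def sma(prices, window):
    """Simple moving average over a trailing window of closing prices."""
    kernel = np.ones(window) / window
    # 'valid' keeps only positions where a full window of prices exists
    return np.convolve(prices, kernel, mode='valid')

closes = np.array([10.0, 11.0, 12.0, 11.0, 10.0, 12.0, 14.0])
print(sma(closes, 3))  # one value per complete 3-day window
```
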
**Training**
The GAN is trained on 96 stocks from the Nasdaq. Each stock is normalized using a 20-day rolling window: (data - mean)/(max - min). The last 356 trading days (about 1.4 years) are held out as a test set. Time series of 20-day periods are constructed and used as input to the GAN. Once the GAN has finished training, the activations of the last convolutional layer are used as the new representation of the data. XGBoost is then trained on these features to classify whether the stock will go up or down over some period of time.
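The rolling normalization and windowing can be sketched with pandas. The frame below is random placeholder data, and the `shift` that the repo's training scripts apply to the rolling statistics is omitted here for clarity:

```python
import numpy as np
import pandas as pd

n = 20  # num_historical_days
rng = np.random.RandomState(42)
df = pd.DataFrame(rng.rand(200, 5) + 10,
                  columns=['Open', 'High', 'Low', 'Close', 'Volume'])

# (data - rolling mean) / (rolling max - rolling min) over a 20-day window
norm = ((df - df.rolling(n).mean())
        / (df.rolling(n).max() - df.rolling(n).min())).dropna()

# Non-overlapping 20-day windows, the shape fed to the GAN/CNN
windows = [norm.values[i - n:i] for i in range(n, len(norm), n)]
print(len(windows), windows[0].shape)  # each window is (20, 5)
```

Because the rolling mean always lies between the rolling min and max, every normalized value falls in [-1, 1], which keeps the inputs on a consistent scale across stocks with very different prices.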

**Testing**
The data that was held out during the training phase is run through the Discriminator portion of the GAN, and the activations of the last convolutional layer are extracted. The extracted features are then classified with XGBoost.
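The hand-off from extracted features to the boosted classifier looks roughly like this. The arrays are random placeholders rather than real Discriminator activations, and sklearn's `GradientBoostingClassifier` stands in for XGBoost so the sketch stays self-contained:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.RandomState(0)

# Placeholder "features": in the repo these would be the activations of the
# Discriminator's last conv layer (18 * 1 * 128 = 2304 values per window)
X_train = rng.randn(200, 64)
y_train = (X_train[:, 0] > 0).astype(int)  # toy up/down labels
X_test = rng.randn(50, 64)
y_test = (X_test[:, 0] > 0).astype(int)

# Fit the booster on the feature vectors, then score held-out windows
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))
```
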

**Results**
The confusion matrix shows the results of the model's classification. A perfect confusion matrix would have predictions only on the main diagonal; each entry off the main diagonal is a misclassification.
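The row-normalized matrices shown in the figures can be reproduced directly with NumPy. The counts here are made up for illustration:

```python
import numpy as np

# Hypothetical counts: rows = true class, columns = predicted class
cm = np.array([[40, 10],
               [ 5, 45]])

# Row-normalize so each row sums to 1; the diagonal then reads as per-class recall
cm_norm = cm.astype(float) / cm.sum(axis=1, keepdims=True)
print(cm_norm)
```
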


**Predictions of Up or Down movement over 10 Days**

The predictions over a 10-day period are quite good.

![alt text](https://github.com/nmharmon8/StockMarketGAN/blob/master/figures/XGB_GAN_Confusion_Matrix_Up_Or_Down_Over_10_Days_normalize.png)

**Predictions of Up or Down movement over 1 Day**

Predicting over a short time interval seems to be harder. Results lose significant accuracy when trying to predict the next-day movement of the stock.

![alt text](https://github.com/nmharmon8/StockMarketGAN/blob/master/figures/XGB_GAN_Confusion_Matrix_Up_Or_Down_Over_1_Days_normalize.png)

**Predictions 10% Gain Over 10 Days**

Just knowing that a stock will go up or down is of limited use. Many stocks go up on any given day, but an investor wants to buy only the stocks that will go up the most, maximizing returns. This time the XGBoost model was trained to predict stocks that would gain 10% or more over the following 10 days.

![alt text](https://github.com/nmharmon8/StockMarketGAN/blob/master/figures/XGB_GAN_Confusion_Matrix_Up_Or_Down_Over_10_Days_10_percent_normalize.png)
 


================================================
FILE: cnn.py
================================================
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os

SEED = 42
tf.set_random_seed(SEED)
class CNN():

    def __init__(self, num_features, num_historical_days, is_train=True):
      
        self.X = tf.placeholder(tf.float32, shape=[None, num_historical_days, num_features])
        X = tf.reshape(self.X, [-1, num_historical_days, 1, num_features])
        self.Y = tf.placeholder(tf.int32, shape=[None, 2])
        self.keep_prob = tf.placeholder(tf.float32, shape=[])

        with tf.variable_scope("cnn"):
            #[filter_height, filter_width, in_channels, out_channels]
            k1 = tf.Variable(tf.truncated_normal([3, 1, num_features, 16],
                stddev=0.1,seed=SEED, dtype=tf.float32))
            b1 = tf.Variable(tf.zeros([16], dtype=tf.float32))

            conv = tf.nn.conv2d(X,k1,strides=[1, 1, 1, 1],padding='SAME')
            relu = tf.nn.relu(tf.nn.bias_add(conv, b1))
            if is_train:
                relu = tf.nn.dropout(relu, keep_prob = self.keep_prob)
            print(relu)


            k2 = tf.Variable(tf.truncated_normal([3, 1, 16, 32],
                stddev=0.1,seed=SEED, dtype=tf.float32))
            b2 = tf.Variable(tf.zeros([32], dtype=tf.float32))
            conv = tf.nn.conv2d(relu, k2,strides=[1, 1, 1, 1],padding='SAME')
            relu = tf.nn.relu(tf.nn.bias_add(conv, b2))
            if is_train:
                relu = tf.nn.dropout(relu, keep_prob = self.keep_prob)
            print(relu)


            k3 = tf.Variable(tf.truncated_normal([3, 1, 32, 64],
                stddev=0.1,seed=SEED, dtype=tf.float32))
            b3 = tf.Variable(tf.zeros([64], dtype=tf.float32))
            conv = tf.nn.conv2d(relu, k3, strides=[1, 1, 1, 1], padding='VALID')
            relu = tf.nn.relu(tf.nn.bias_add(conv, b3))
            if is_train:
                relu = tf.nn.dropout(relu, keep_prob=self.keep_prob)
            print(relu)


            flattened_convolution_size = int(relu.shape[1]) * int(relu.shape[2]) * int(relu.shape[3])
            print(flattened_convolution_size)
            flattened_convolution = features = tf.reshape(relu, [-1, flattened_convolution_size])

            if is_train:
                flattened_convolution =  tf.nn.dropout(flattened_convolution, keep_prob=self.keep_prob)

            W1 = tf.Variable(tf.truncated_normal([18*1*64, 32]))
            b4 = tf.Variable(tf.truncated_normal([32]))
            h1 = tf.nn.relu(tf.matmul(flattened_convolution, W1) + b4)


            W2 = tf.Variable(tf.truncated_normal([32, 2]))
            logits = tf.matmul(h1, W2)

            #self.accuracy = tf.metrics.accuracy(tf.argmax(self.Y, 1), tf.argmax(logits, 1))
            self.accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(self.Y, 1), tf.argmax(logits, 1)), tf.float32))
            self.confusion_matrix = tf.confusion_matrix(tf.argmax(self.Y, 1), tf.argmax(logits, 1))
            tf.summary.scalar('accuracy', self.accuracy)
            theta_D = [k1, b1, k2, b2, k3, b3, W1, b4, W2]           
            

        self.loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=self.Y, logits=logits))
        tf.summary.scalar('loss', self.loss)

        self.optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(self.loss)
        self.summary = tf.summary.merge_all()

================================================
FILE: companylist.csv
================================================
Symbol, Name, lastsale, netchange,pctchange, share_volume, Nasdaq100_points,
AAPL, Apple Inc, 44.51, -0.08,-0.18, 8648571, -.1,
ATVI, Activision Blizzard Inc, 44.51, -0.08,-0.18, 8648571, -.1,
ADBE, Adobe Systems Incorporated, 99.1, -0.52,-0.52, 2689240, -.2,
AKAM, Akamai Technologies Inc., 51.02, -0.36,-0.7, 1700915, -.1,
ALXN, Alexion Pharmaceuticals Inc., 132.01, 2.31,1.78, 1600959, .4,
GOOG, Alphabet Inc., 767.545, -4.215,-0.55, 1468065, -1.0,
GOOGL, Alphabet Inc., 797.58, -3.65,-0.46, 1467020, .0,
AMZN, Amazon.com Inc., 777.64, 7.95,1.03, 3643647, 3.2,
AAL, American Airlines Group Inc., 35.64, -0.77,-2.11, 5510588, .0,
AMGN, Amgen Inc., 173, 0.36,0.21, 2871628, .2,
ADI, Analog Devices Inc., 62.16, -0.49,-0.78, 1896545, -.1,
AMAT, Applied Materials Inc., 30.11, -0.04,-0.13, 10809621, .0,
ADSK, Autodesk Inc., 67.63, 0.24,0.36, 2122174, .1,
ADP, Automatic Data Processing Inc., 86.9, -0.58,-0.66, 1863520, -.2,
BIDU, Baidu Inc., 185.17, -1.33,-0.71, 1300251, -.3,
BBBY, Bed Bath & Beyond Inc., 43.13, 0.03,0.07, 1967613, .0,
BIIB, Biogen Inc., 304.68, 1.86,0.61, 1369746, .4,
BMRN, BioMarin Pharmaceutical Inc., 97.05, 1.67,1.75, 1337946, .0,
AVGO, Broadcom Limited, 171.77, -0.95,-0.55, 3172354, -.2,
CA, CA Inc., 32.07, -0.4,-1.23, 2289365, -.2,
CELG, Celgene Corporation, 108.78, 1.43,1.33, 4275630, .5,
CERN, Cerner Corporation, 62.1, -0.67,-1.07, 2099221, -.2,
CHTR, Charter Communications Inc., 265.13, -1.14,-0.43, 1437862, -.1,
CHKP, Check Point Software Technologies Ltd., 75.06, -0.45,-0.6, 971010, -.1,
CSCO, Cisco Systems Inc., 30.92, -0.39,-1.25, 26516368, -1.7,
CTXS, Citrix Systems Inc., 83.19, -0.67,-0.8, 1042641, -.1,
CTSH, Cognizant Technology Solutions Corporation, 53.44, -0.66,-1.22, 5936067, -.4,
CMCSA, Comcast Corporation, 65.995, -0.245,-0.37, 10512643, -.5,
COST, Costco Wholesale Corporation, 152.5559, -0.1141,-0.07, 2091886, .0,
CTRP, Ctrip.com International Ltd., 43.73, -0.02,-0.05, 2875035, .0,
XRAY, DENTSPLY SIRONA Inc., 59.81, -0.22,-0.37, 960442, .0,
DISCA, Discovery Communications Inc., 24.36, -0.05,-0.2, 4304349, .0,
DISCK, Discovery Communications Inc., 23.61, -0.09,-0.38, 1109834, .0,
DISH, DISH Network Corporation, 52.15, 0.1,0.19, 2690270, .0,
DLTR, Dollar Tree Inc., 81.195, -0.795,-0.97, 2301440, -.1,
EBAY, eBay Inc., 31.815, -0.165,-0.52, 7019272, -.2,
EA, Electronic Arts Inc., 82.96, -0.41,-0.49, 2609316, .0,
EXPE, Expedia Inc., 107.95, -3.95,-3.53, 5108596, -.4,
ESRX, Express Scripts Holding Company, 70.205, -0.235,-0.33, 3617077, -.2,
FB, Facebook Inc., 128.93, 0.58,0.45, 13411515, 1.0,
FAST, Fastenal Company, 40.095, -0.715,-1.75, 2308194, -.2,
FISV, Fiserv Inc., 99.2, -0.73,-0.73, 1138941, -.2,
GILD, Gilead Sciences Inc., 78.93, 0.09,0.11, 7487560, .1,
HSIC, Henry Schein Inc., 164.26, 1.49,0.92, 563392, .1,
ILMN, Illumina Inc., 172.93, -2.01,-1.15, 811231, -.2,
INCY, Incyte Corporation, 82.45, 2.09,2.6, 987770, .0,
INTU, Intuit Inc., 108.97, -0.79,-0.72, 1512370, -.2,
ISRG, Intuitive Surgical Inc., 686.68, 2.49,0.36, 222109, .1,
JD, JD.com Inc., 26.2475, 0.1175,0.45, 8541023, .0,
LRCX, Lam Research Corporation, 93.2, -0.16,-0.17, 1420773, .0,
LBTYA, Liberty Global plc, 32.59, -0.055,-0.17, 1683861, .0,
LBTYK, Liberty Global plc, 31.7, -0.15,-0.47, 1532985, .0,
LVNTA, Liberty Interactive Corporation, 38.6, -1.35,-3.38, 1624743, .0,
QVCA, Liberty Interactive Corporation, 18.46, -0.51,-2.69, 3919115, .0,
LLTC, Linear Technology Corporation, 58.47, -0.08,-0.14, 1751049, .0,
MAR, Marriott International, 68.7, -0.59,-0.85, 1989898, -.2,
MXIM, Maxim Integrated Products Inc., 38.86, -0.13,-0.33, 3465172, .0,
MCHP, Microchip Technology Incorporated, 60.42, -0.02,-0.03, 1652175, .0,
MU, Micron Technology Inc., 17.49, 0.04,0.23, 31528528, .0,
MSFT, Microsoft Corporation, 57.285, 0.095,0.17, 31426513, .7,
MDLZ, Mondelez International Inc., 43, -0.02,-0.05, 7084275, .0,
MNST, Monster Beverage Corporation, 146.41, 0.06,0.04, 1041770, .0,
MYL, Mylan N.V., 41.78, 0.29,0.7, 4696846, .1,
NTAP, NetApp Inc., 34.85, -0.2,-0.57, 2003381, -.1,
NTES, NetEase Inc., 238.07, 0.32,0.13, 943343, .0,
NFLX, Netflix Inc., 99.29, 1.95,2, 6765848, .1,
NCLH, Norwegian Cruise Line Holdings Ltd., 35.87, -0.37,-1.02, 2171219, .0,
NVDA, NVIDIA Corporation, 62.96, 0.27,0.43, 10420796, .1,
NXPI, NXP Semiconductors N.V., 83.83, -1.71,-2, 2553456, -.4,
ORLY, O'Reilly Automotive Inc., 272.83, -3.21,-1.16, 743599, -.3,
PCAR, PACCAR Inc., 56.655, -0.275,-0.48, 2202565, -.1,
PAYX, Paychex Inc., 58.375, -0.635,-1.08, 3061211, -.2,
PYPL, PayPal Holdings Inc., 40.72, -0.11,-0.27, 7734598, .0,
QCOM, QUALCOMM Incorporated, 63.04, 0.5,0.8, 8801907, .7,
REGN, Regeneron Pharmaceuticals Inc., 408.86, 6.11,1.52, 901266, .5,
ROST, Ross Stores Inc., 61.94, -0.01,-0.02, 1661730, .0,
SBAC, SBA Communications Corporation, 108.68, -0.22,-0.2, 616118, .0,
STX, Seagate Technology PLC, 36.45, 0,0, 5600814, .0,
SIRI, Sirius XM Holdings Inc., 4.125, -0.04,-0.96, 40279977, -.2,
SWKS, Skyworks Solutions Inc., 76.22, -0.8,-1.04, 4471051, .0,
SBUX, Starbucks Corporation, 53.78, -0.33,-0.61, 7818288, -.2,
SRCL, Stericycle Inc., 81.21, -0.55,-0.67, 1222820, .0,
SYMC, Symantec Corporation, 25.195, 0.325,1.31, 10174561, .2,
TMUS, T-Mobile US Inc., 46.49, -0.69,-1.46, 3487789, .0,
TSLA, Tesla Motors Inc., 205.415, 4.995,2.49, 2402292, .5,
KHC, The Kraft Heinz Company, 89.014, -0.156,-0.17, 3172798, .0,
PCLN, The Priceline Group Inc., 1460.55, 3.33,0.23, 609486, .2,
TSCO, Tractor Supply Company, 68.29, -0.77,-1.11, 1210939, -.1,
TRIP, TripAdvisor Inc., 61.2, -0.62,-1, 1832039, -.1,
FOX, Twenty-First Century Fox Inc., 24.3, -0.02,-0.08, 4980247, .0,
FOXA, Twenty-First Century Fox Inc., 23.915, 0.075,0.31, 20513508, .1,
ULTA, Ulta Salon Cosmetics & Fragrance Inc., 234, 1.01,0.43, 723369, .0,
VRSK, Verisk Analytics Inc., 80.85, -0.64,-0.79, 710627, -.1,
VRTX, Vertex Pharmaceuticals Incorporated, 92.88, -0.22,-0.24, 1609405, .0,
VIAB, Viacom Inc., 37.05, -0.22,-0.59, 3122390, -.1,
VOD, Vodafone Group Plc, 29.04, -0.53,-1.79, 8078232, -.2,
WBA, Walgreens Boots Alliance Inc., 81.39, -0.01,-0.01, 5454174, .0,
WDC, Western Digital Corporation, 54.82, 1.54,2.89, 7008776, .3,
WFM, Whole Foods Market Inc., 28.36, -0.16,-0.56, 3654014, -.1,
XLNX, Xilinx Inc., 53.485, -0.085,-0.16, 1467151, .0,
YHOO, Yahoo! Inc., 43.78, -0.21,-0.48, 12061751, -.2,


================================================
FILE: deployed_models/gan.data-00000-of-00001
================================================
[File too large to display: 12.3 MB]

================================================
FILE: gan.py
================================================
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os

SEED = 42
tf.set_random_seed(SEED)

class GAN():

    def sample_Z(self, batch_size, n):
        return np.random.uniform(-1., 1., size=(batch_size, n))

    def __init__(self, num_features, num_historical_days, generator_input_size=200, is_train=True):
        def get_batch_norm_with_global_normalization_vars(size):
            v = tf.Variable(tf.ones([size]), dtype=tf.float32)
            m = tf.Variable(tf.ones([size]), dtype=tf.float32)
            beta = tf.Variable(tf.ones([size]), dtype=tf.float32)
            gamma = tf.Variable(tf.ones([size]), dtype=tf.float32)
            return v, m, beta, gamma

        self.X = tf.placeholder(tf.float32, shape=[None, num_historical_days, num_features])
        X = tf.reshape(self.X, [-1, num_historical_days, 1, num_features])
        self.Z = tf.placeholder(tf.float32, shape=[None, generator_input_size])

        generator_output_size = num_features*num_historical_days
        with tf.variable_scope("generator"):
            W1 = tf.Variable(tf.truncated_normal([generator_input_size, generator_output_size*10]))
            b1 = tf.Variable(tf.truncated_normal([generator_output_size*10]))

            h1 = tf.nn.sigmoid(tf.matmul(self.Z, W1) + b1)

            # v1, m1, beta1, gamma1 = get_batch_norm_with_global_normalization_vars(generator_output_size*10)
            # h1 = tf.nn.batch_norm_with_global_normalization(h1, v1, m1,
            #         beta1, gamma1, variance_epsilon=0.000001, scale_after_normalization=False)

            W2 = tf.Variable(tf.truncated_normal([generator_output_size*10, generator_output_size*5]))
            b2 = tf.Variable(tf.truncated_normal([generator_output_size*5]))

            h2 = tf.nn.sigmoid(tf.matmul(h1, W2) + b2)

            # v2, m2, beta2, gamma2 = get_batch_norm_with_global_normalization_vars(generator_output_size*5)
            # h2 = tf.nn.batch_norm_with_global_normalization(h2, v2, m2,
            #         beta2, gamma2, variance_epsilon=0.000001, scale_after_normalization=False)


            W3 = tf.Variable(tf.truncated_normal([generator_output_size*5, generator_output_size]))
            b3 = tf.Variable(tf.truncated_normal([generator_output_size]))

            g_log_prob = tf.matmul(h2, W3) + b3
            g_log_prob = tf.reshape(g_log_prob, [-1, num_historical_days, 1, num_features])
            self.gen_data = tf.reshape(g_log_prob, [-1, num_historical_days, num_features])
            #g_log_prob = g_log_prob / tf.reshape(tf.reduce_max(g_log_prob, axis=1), [-1, 1, num_features, 1])
            #g_prob = tf.nn.sigmoid(g_log_prob)

            theta_G = [W1, b1, W2, b2, W3, b3]



        with tf.variable_scope("discriminator"):
            #[filter_height, filter_width, in_channels, out_channels]
            k1 = tf.Variable(tf.truncated_normal([3, 1, num_features, 32],
                stddev=0.1,seed=SEED, dtype=tf.float32))
            b1 = tf.Variable(tf.zeros([32], dtype=tf.float32))

            v1, m1, beta1, gamma1 = get_batch_norm_with_global_normalization_vars(32)

            k2 = tf.Variable(tf.truncated_normal([3, 1, 32, 64],
                stddev=0.1,seed=SEED, dtype=tf.float32))
            b2 = tf.Variable(tf.zeros([64], dtype=tf.float32))

            v2, m2, beta2, gamma2 = get_batch_norm_with_global_normalization_vars(64)

            k3 = tf.Variable(tf.truncated_normal([3, 1, 64, 128],
                stddev=0.1,seed=SEED, dtype=tf.float32))
            b3 = tf.Variable(tf.zeros([128], dtype=tf.float32))

            v3, m3, beta3, gamma3 = get_batch_norm_with_global_normalization_vars(128)

            W1 = tf.Variable(tf.truncated_normal([18*1*128, 128]))
            b4 = tf.Variable(tf.truncated_normal([128]))

            v4, m4, beta4, gamma4 = get_batch_norm_with_global_normalization_vars(128)

            W2 = tf.Variable(tf.truncated_normal([128, 1]))

            theta_D = [k1, b1, k2, b2, k3, b3, W1, b4, W2]

        def discriminator(X):
            conv = tf.nn.conv2d(X,k1,strides=[1, 1, 1, 1],padding='SAME')
            relu = tf.nn.relu(tf.nn.bias_add(conv, b1))
            pool = relu
            # pool = tf.nn.avg_pool(relu, ksize=[1, 2, 1, 1], strides=[1, 2, 1, 1], padding='SAME')
            if is_train:
                pool = tf.nn.dropout(pool, keep_prob = 0.8)
            # pool = tf.nn.batch_norm_with_global_normalization(pool, v1, m1,
            #         beta1, gamma1, variance_epsilon=0.000001, scale_after_normalization=False)
            print(pool)

            conv = tf.nn.conv2d(pool, k2,strides=[1, 1, 1, 1],padding='SAME')
            relu = tf.nn.relu(tf.nn.bias_add(conv, b2))
            pool = relu
            #pool = tf.nn.avg_pool(relu, ksize=[1, 2, 1, 1], strides=[1, 2, 1, 1], padding='SAME')
            if is_train:
                pool = tf.nn.dropout(pool, keep_prob = 0.8)
            # pool = tf.nn.batch_norm_with_global_normalization(pool, v2, m2,
            #         beta2, gamma2, variance_epsilon=0.000001, scale_after_normalization=False)
            print(pool)

            conv = tf.nn.conv2d(pool, k3, strides=[1, 1, 1, 1], padding='VALID')
            relu = tf.nn.relu(tf.nn.bias_add(conv, b3))
            if is_train:
                relu = tf.nn.dropout(relu, keep_prob=0.8)
            # relu = tf.nn.batch_norm_with_global_normalization(relu, v3, m3,
            #         beta3, gamma3, variance_epsilon=0.000001, scale_after_normalization=False)
            print(relu)


            flattened_convolution_size = int(relu.shape[1]) * int(relu.shape[2]) * int(relu.shape[3])
            print(flattened_convolution_size)
            flattened_convolution = features = tf.reshape(relu, [-1, flattened_convolution_size])

            if is_train:
                flattened_convolution =  tf.nn.dropout(flattened_convolution, keep_prob=0.8)

            h1 = tf.nn.relu(tf.matmul(flattened_convolution, W1) + b4)

            # h1 = tf.nn.batch_norm_with_global_normalization(h1, v4, m4,
            #         beta4, gamma4, variance_epsilon=0.000001, scale_after_normalization=False)

            D_logit = tf.matmul(h1, W2)
            D_prob = tf.nn.sigmoid(D_logit)
            return D_prob, D_logit, features

        D_real, D_logit_real, self.features = discriminator(X)
        D_fake, D_logit_fake, _ = discriminator(g_log_prob)


        D_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logit_real, labels=tf.ones_like(D_logit_real)))
        D_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logit_fake, labels=tf.zeros_like(D_logit_fake)))
        self.D_l2_loss = (0.0001 * tf.add_n([tf.nn.l2_loss(t) for t in theta_D]) / len(theta_D))
        self.D_loss = D_loss_real + D_loss_fake + self.D_l2_loss
        self.G_l2_loss = (0.00001 * tf.add_n([tf.nn.l2_loss(t) for t in theta_G]) / len(theta_G))
        self.G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_logit_fake, labels=tf.ones_like(D_logit_fake))) + self.G_l2_loss


        self.D_solver = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(self.D_loss, var_list=theta_D)
        self.G_solver = tf.train.AdamOptimizer(learning_rate=0.000055).minimize(self.G_loss, var_list=theta_G)


================================================
FILE: get_predictions.py
================================================
from get_stock_data import download_all

#Download Stocks
download_all()



import os
import pandas as pd
from gan import GAN
import random
import tensorflow as tf
import xgboost as xgb
from sklearn.externals import joblib


os.environ["CUDA_VISIBLE_DEVICES"]=""

class Predict:

    def __init__(self, num_historical_days=20, days=10, pct_change=0, gan_model='./deployed_models/gan', cnn_model='./deployed_models/cnn', xgb_model='./deployed_models/xgb'):
        self.data = []
        self.num_historical_days = num_historical_days
        self.gan_model = gan_model
        self.cnn_model = cnn_model
        self.xgb_model = xgb_model
        # assert os.path.exists(gan_model)
        # assert os.path.exists(cnn_model)
        # assert os.path.exists(xgb_model)

        files = [os.path.join('./stock_data', f) for f in os.listdir('./stock_data')]
        for file in files:
            print(file)
            df = pd.read_csv(file, index_col='Date', parse_dates=True)
            df = df[['Open','High','Low','Close','Volume']]
            df = ((df -
            df.rolling(num_historical_days).mean().shift(-num_historical_days))
            /(df.rolling(num_historical_days).max().shift(-num_historical_days)
            -df.rolling(num_historical_days).min().shift(-num_historical_days)))
            df = df.dropna()
            self.data.append((file.split('/')[-1], df.index[0], df[200:200+num_historical_days].values))


    def gan_predict(self):
        tf.reset_default_graph()
        gan = GAN(num_features=5, num_historical_days=self.num_historical_days,
                        generator_input_size=200, is_train=False)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            saver = tf.train.Saver()
            saver.restore(sess, self.gan_model)
            clf = joblib.load(self.xgb_model)
            for sym, date, data in self.data:
                features = sess.run(gan.features, feed_dict={gan.X: [data]})
                features = xgb.DMatrix(features)
                print('{} {} {}'.format(str(date).split(' ')[0], sym, clf.predict(features)[0][1] > 0.5))


if __name__ == '__main__':
    p = Predict()
    p.gan_predict()


================================================
FILE: get_stock_data.py
================================================
import os
import urllib2

assert 'QUANDL_KEY' in os.environ
quandl_api_key = os.environ['QUANDL_KEY']

class nasdaq():
	def __init__(self):
		self.output = './stock_data'
		self.company_list = './companylist.csv'

	def build_url(self, symbol):
		url = 'https://www.quandl.com/api/v3/datasets/WIKI/{}.csv?api_key={}'.format(symbol, quandl_api_key)
		return url

	def symbols(self):
		symbols = []
		with open(self.company_list, 'r') as f:
			next(f)
			for line in f:
				symbols.append(line.split(',')[0].strip())
		return symbols

def download(i, symbol, url, output):
	print('Downloading {} {}'.format(symbol, i))
	try:
		response = urllib2.urlopen(url)
		quotes = response.read()
		lines = quotes.strip().split('\n')
		with open(os.path.join(output, symbol), 'w') as f:
			for line in lines:
				f.write(line + '\n')
	except Exception as e:
		print('Failed to download {}'.format(symbol))
		print(e)

def download_all():
	if not os.path.exists('./stock_data'):
		os.makedirs('./stock_data')

	nas = nasdaq()
	for i, symbol in enumerate(nas.symbols()):
		url = nas.build_url(symbol)
		download(i, symbol, url, nas.output)

if __name__ == '__main__':
	download_all()

================================================
FILE: plot_confusion_matrix.py
================================================
import itertools
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix


def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    # Normalize before plotting so the displayed image matches the printed values
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

    print(cm)

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.show()



================================================
FILE: train_cnn.py
================================================
import os
import pandas as pd
from cnn import CNN
import random
import tensorflow as tf
import xgboost as xgb
from sklearn.externals import joblib
from sklearn.metrics import confusion_matrix
from plot_confusion_matrix import plot_confusion_matrix

random.seed(42)

class TrainCNN:

    def __init__(self, num_historical_days, days=10, pct_change=0):
        self.data = []
        self.labels = []
        self.test_data = []
        self.test_labels = []
        self.cnn = CNN(num_features=5, num_historical_days=num_historical_days, is_train=True)  # enable dropout during training; test runs feed keep_prob=1
        files = [os.path.join('./stock_data', f) for f in os.listdir('./stock_data')]
        for file in files:
            print(file)
            df = pd.read_csv(file, index_col='Date', parse_dates=True)
            df = df[['Open','High','Low','Close','Volume']]
            labels = df.Close.pct_change(days).map(lambda x: [int(x > pct_change/100.0), int(x <= pct_change/100.0)])
            df = ((df -
            df.rolling(num_historical_days).mean().shift(-num_historical_days))
            /(df.rolling(num_historical_days).max().shift(-num_historical_days)
            -df.rolling(num_historical_days).min().shift(-num_historical_days)))
            df['labels'] = labels
            df = df.dropna()
            test_df = df[:365]
            df = df[400:]
            data = df[['Open', 'High', 'Low', 'Close', 'Volume']].values
            labels = df['labels'].values
            for i in range(num_historical_days, len(df), num_historical_days):
                self.data.append(data[i-num_historical_days:i])
                self.labels.append(labels[i-1])
            data = test_df[['Open', 'High', 'Low', 'Close', 'Volume']].values
            labels = test_df['labels'].values
            for i in range(num_historical_days, len(test_df), 1):
                self.test_data.append(data[i-num_historical_days:i])
                self.test_labels.append(labels[i-1])



    def random_batch(self, batch_size=128):
        batch = []
        labels = []
        data = list(zip(self.data, self.labels))  # materialize so random.choice works
        i = 0
        while True:
            i+= 1
            while True:
                d = random.choice(data)
                if(d[1][0]== int(i%2)):
                    break
            batch.append(d[0])
            labels.append(d[1])
            if (len(batch) == batch_size):
                yield batch, labels
                batch = []
                labels = []

    def train(self, print_steps=100, display_steps=100, save_steps=1000, batch_size=128, keep_prob=0.6):
        if not os.path.exists('./cnn_models'):
            os.makedirs('./cnn_models')
        if not os.path.exists('./logs'):
            os.makedirs('./logs')
        if os.path.exists('./logs/train'):
            for file in [os.path.join('./logs/train/', f) for f in os.listdir('./logs/train/')]:
                os.remove(file)
        if os.path.exists('./logs/test'):
            for file in [os.path.join('./logs/test/', f) for f in os.listdir('./logs/test')]:
                os.remove(file)

        sess = tf.Session()
        loss = 0
        l2_loss = 0
        accuracy = 0
        saver = tf.train.Saver()
        train_writer = tf.summary.FileWriter('./logs/train')
        test_writer = tf.summary.FileWriter('./logs/test')
        sess.run(tf.global_variables_initializer())
        if os.path.exists('./cnn_models/checkpoint'):
            with open('./cnn_models/checkpoint') as f:
                model_name = next(f).split('"')[1]
            #saver.restore(sess, "./models/{}".format(model_name))
        for i, [X, y] in enumerate(self.random_batch(batch_size)):
            _, loss_curr, accuracy_curr = sess.run([self.cnn.optimizer, self.cnn.loss, self.cnn.accuracy], feed_dict=
                    {self.cnn.X:X, self.cnn.Y:y, self.cnn.keep_prob:keep_prob})
            loss += loss_curr
            accuracy += accuracy_curr
            if (i+1) % print_steps == 0:
                print('Step={} loss={}, accuracy={}'.format(i, loss/print_steps, accuracy/print_steps))
                loss = 0
                l2_loss = 0
                accuracy = 0
                test_loss, test_accuracy, confusion_matrix = sess.run([self.cnn.loss, self.cnn.accuracy, self.cnn.confusion_matrix], feed_dict={self.cnn.X:self.test_data, self.cnn.Y:self.test_labels, self.cnn.keep_prob:1})
                print("Test loss = {}, Test accuracy = {}".format(test_loss, test_accuracy))
                print(confusion_matrix)
            if (i+1) % save_steps == 0:
                saver.save(sess, './cnn_models/cnn.ckpt', i)

            if (i+1) % display_steps == 0:
                summary = sess.run(self.cnn.summary, feed_dict=
                    {self.cnn.X:X, self.cnn.Y:y, self.cnn.keep_prob:keep_prob})
                train_writer.add_summary(summary, i)
                summary = sess.run(self.cnn.summary, feed_dict={
                    self.cnn.X:self.test_data, self.cnn.Y:self.test_labels, self.cnn.keep_prob:1})
                test_writer.add_summary(summary, i)


if __name__ == '__main__':
    cnn = TrainCNN(num_historical_days=20, days=10, pct_change=10)
    cnn.train()
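
`TrainCNN.random_batch` above builds class-balanced batches by rejection sampling: even batch positions must draw a down-labeled window, odd positions an up-labeled one. A minimal standalone sketch of that idea, using hypothetical toy data with scalar labels in place of the one-hot labels used above:

```python
import random

random.seed(0)

def balanced_batches(samples, batch_size=4):
    # Mirrors TrainCNN.random_batch: the required class alternates with the
    # parity of the batch position, and we rejection-sample until a matching
    # example is drawn, so each batch is half class 0 and half class 1.
    batch, labels = [], []
    i = 0
    while True:
        i += 1
        while True:
            x, y = random.choice(samples)
            if y == i % 2:
                break
        batch.append(x)
        labels.append(y)
        if len(batch) == batch_size:
            yield batch, labels
            batch, labels = [], []

# Hypothetical data: feature is just an integer id, label is 0 or 1.
data = [(k, k % 2) for k in range(10)]
xs, ys = next(balanced_batches(data))
print(ys)  # labels alternate by position: [1, 0, 1, 0]
```

Whatever the class skew in `data`, the yielded labels are fixed by position, which is why the trainer sees a 50/50 class mix regardless of how imbalanced the underlying label distribution is.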


================================================
FILE: train_gan.py
================================================
import os
import pandas as pd
from gan import GAN
import random
import tensorflow as tf

random.seed(42)
class TrainGan:

    def __init__(self, num_historical_days, batch_size=128):
        self.batch_size = batch_size
        self.data = []
        files = [os.path.join('./stock_data', f) for f in os.listdir('./stock_data')]
        for file in files:
            print(file)
            #Read in file -- note that parse_dates will be needed later
            df = pd.read_csv(file, index_col='Date', parse_dates=True)
            df = df[['Open','High','Low','Close','Volume']]
            # #Create new index with missing days
            # idx = pd.date_range(df.index[-1], df.index[0])
            # #Reindex and fill the missing day with the value from the day before
            # df = df.reindex(idx, method='bfill').sort_index(ascending=False)
            #Normalize using a rolling window of size num_historical_days
            df = ((df -
            df.rolling(num_historical_days).mean().shift(-num_historical_days))
            /(df.rolling(num_historical_days).max().shift(-num_historical_days)
            -df.rolling(num_historical_days).min().shift(-num_historical_days)))
            #Drop the rows the rolling window leaves as NaN
            df = df.dropna()
            #Skip the most recent 400 rows so GAN training never sees the
            #test year held out by the CNN/XGBoost trainers
            df = df[400:]
            #This may not create good samples if num_historical_days is a
            #multiple of 7
            for i in range(num_historical_days, len(df), num_historical_days):
                self.data.append(df.values[i-num_historical_days:i])

        self.gan = GAN(num_features=5, num_historical_days=num_historical_days,
                        generator_input_size=200)

    def random_batch(self, batch_size=128):
        batch = []
        while True:
            batch.append(random.choice(self.data))
            if (len(batch) == batch_size):
                yield batch
                batch = []

    def train(self, print_steps=100, display_data=100, save_steps=1000):
        if not os.path.exists('./models'):
            os.makedirs('./models')
        sess = tf.Session()
        G_loss = 0
        D_loss = 0
        G_l2_loss = 0
        D_l2_loss = 0
        sess.run(tf.global_variables_initializer())
        saver = tf.train.Saver()
        #Resume from the latest checkpoint if one exists
        if os.path.exists('./models/checkpoint'):
            with open('./models/checkpoint') as f:
                model_name = next(f).split('"')[1]
            saver.restore(sess, "./models/{}".format(model_name))
        for i, X in enumerate(self.random_batch(self.batch_size)):
            if i % 1 == 0:
                _, D_loss_curr, D_l2_loss_curr = sess.run([self.gan.D_solver, self.gan.D_loss, self.gan.D_l2_loss], feed_dict=
                        {self.gan.X:X, self.gan.Z:self.gan.sample_Z(self.batch_size, 200)})
                D_loss += D_loss_curr
                D_l2_loss += D_l2_loss_curr
            if i % 1 == 0:
                _, G_loss_curr, G_l2_loss_curr = sess.run([self.gan.G_solver, self.gan.G_loss, self.gan.G_l2_loss],
                        feed_dict={self.gan.Z:self.gan.sample_Z(self.batch_size, 200)})
                G_loss += G_loss_curr
                G_l2_loss += G_l2_loss_curr
            if (i+1) % print_steps == 0:
                print('Step={} D_loss={}, G_loss={}'.format(i, D_loss/print_steps - D_l2_loss/print_steps, G_loss/print_steps - G_l2_loss/print_steps))
                #print('D_l2_loss = {} G_l2_loss={}'.format(D_l2_loss/print_steps, G_l2_loss/print_steps))
                G_loss = 0
                D_loss = 0
                G_l2_loss = 0
                D_l2_loss = 0
            if (i+1) % save_steps == 0:
                saver.save(sess, './models/gan.ckpt', i)
            # if (i+1) % display_data == 0:
            #     print('Generated Data')
            #     print(sess.run(self.gan.gen_data, feed_dict={self.gan.Z:self.gan.sample_Z(1, 200)}))
            #     print('Real Data')
            #     print(X[0])


if __name__ == '__main__':
    gan = TrainGan(20, 128)
    gan.train()
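
All three trainers normalize price windows with the same rolling min-max scheme seen in `TrainGan.__init__`: subtract a shifted rolling mean and divide by the shifted rolling range. A minimal standalone sketch of that transform; the linspace series is a hypothetical stand-in for a real OHLCV file:

```python
import numpy as np
import pandas as pd

def rolling_minmax_normalize(df, n):
    # Same scheme as the trainers: center on the rolling mean and scale by
    # the rolling max-min range, with the statistics shifted by -n so each
    # row is scaled against the n rows that follow it in the file (the CSVs
    # are stored newest-first, so those are the n preceding trading days).
    mean = df.rolling(n).mean().shift(-n)
    rng = (df.rolling(n).max().shift(-n)
           - df.rolling(n).min().shift(-n))
    return ((df - mean) / rng).dropna()

# Hypothetical toy series: 30 evenly spaced closing prices.
prices = pd.DataFrame({'Close': np.linspace(100.0, 120.0, 30)})
norm = rolling_minmax_normalize(prices, 5)
print(len(norm))              # 25 rows survive the NaN trim at the tail
print(norm['Close'].iloc[0])  # approximately -0.75 for a linear series
```

For a perfectly linear series every normalized value lands at the same constant (about -0.75 here), which makes the scale-invariance of the transform easy to see: doubling all prices leaves the output unchanged.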


================================================
FILE: train_xgb_boost.py
================================================
import os
import pandas as pd
from gan import GAN
import random
import tensorflow as tf
import xgboost as xgb
import joblib  #sklearn.externals.joblib was removed in newer scikit-learn versions
from sklearn.metrics import confusion_matrix
from plot_confusion_matrix import plot_confusion_matrix

os.environ["CUDA_VISIBLE_DEVICES"]=""

class TrainXGBBoost:

    def __init__(self, num_historical_days, days=10, pct_change=0):
        self.data = []
        self.labels = []
        self.test_data = []
        self.test_labels = []
        assert os.path.exists('./models/checkpoint')
        gan = GAN(num_features=5, num_historical_days=num_historical_days,
                        generator_input_size=200, is_train=False)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            saver = tf.train.Saver()
            with open('./models/checkpoint') as f:
                model_name = next(f).split('"')[1]
            saver.restore(sess, "./models/{}".format(model_name))
            files = [os.path.join('./stock_data', f) for f in os.listdir('./stock_data')]
            for file in files:
                print(file)
                #Read in file -- note that parse_dates will be needed later
                df = pd.read_csv(file, index_col='Date', parse_dates=True)
                df = df[['Open','High','Low','Close','Volume']]
                # #Create new index with missing days
                # idx = pd.date_range(df.index[-1], df.index[0])
                # #Reindex and fill the missing day with the value from the day before
                # df = df.reindex(idx, method='bfill').sort_index(ascending=False)
                #Normalize using a rolling window of size num_historical_days
                labels = df.Close.pct_change(days).map(lambda x: int(x > pct_change/100.0))
                df = ((df -
                df.rolling(num_historical_days).mean().shift(-num_historical_days))
                /(df.rolling(num_historical_days).max().shift(-num_historical_days)
                -df.rolling(num_historical_days).min().shift(-num_historical_days)))
                df['labels'] = labels
                #Drop the rows the rolling window leaves as NaN
                df = df.dropna()
                #Hold out the last year of trading for testing
                test_df = df[:365]
                #Padding to keep labels from bleeding
                df = df[400:]
                #This may not create good samples if num_historical_days is a
                #multiple of 7
                data = df[['Open', 'High', 'Low', 'Close', 'Volume']].values
                labels = df['labels'].values
                for i in range(num_historical_days, len(df), num_historical_days):
                    features = sess.run(gan.features, feed_dict={gan.X:[data[i-num_historical_days:i]]})
                    self.data.append(features[0])
                    print(features[0])
                    self.labels.append(labels[i-1])
                data = test_df[['Open', 'High', 'Low', 'Close', 'Volume']].values
                labels = test_df['labels'].values
                for i in range(num_historical_days, len(test_df), 1):
                    features = sess.run(gan.features, feed_dict={gan.X:[data[i-num_historical_days:i]]})
                    self.test_data.append(features[0])
                    self.test_labels.append(labels[i-1])



    def train(self):
        params = {}
        params['objective'] = 'multi:softprob'
        params['eta'] = 0.01
        params['num_class'] = 2
        params['max_depth'] = 20
        params['subsample'] = 0.05
        params['colsample_bytree'] = 0.05
        params['eval_metric'] = 'mlogloss'
        #params['scale_pos_weight'] = 10
        #params['silent'] = True
        #params['gpu_id'] = 0
        #params['max_bin'] = 16
        #params['tree_method'] = 'gpu_hist'

        train = xgb.DMatrix(self.data, self.labels)
        test = xgb.DMatrix(self.test_data, self.test_labels)

        watchlist = [(train, 'train'), (test, 'test')]
        clf = xgb.train(params, train, 1000, evals=watchlist, early_stopping_rounds=100)
        joblib.dump(clf, 'models/clf.pkl')
        #List comprehension instead of map() so this works on Python 3
        cm = confusion_matrix(self.test_labels, [int(x[1] > .5) for x in clf.predict(test)])
        print(cm)
        plot_confusion_matrix(cm, ['Down', 'Up'], normalize=True, title="Confusion Matrix")


if __name__ == '__main__':
    boost_model = TrainXGBBoost(num_historical_days=20, days=10, pct_change=10)
    boost_model.train()
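
The confusion-matrix step in `train()` turns `multi:softprob` output (one probability per class per row) into hard labels by thresholding the "Up" probability at 0.5. A minimal sketch of that decision rule with hypothetical probabilities:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical softprob output: each row is [P(down), P(up)].
probs = np.array([[0.80, 0.20],
                  [0.30, 0.70],
                  [0.55, 0.45],
                  [0.10, 0.90]])
true_labels = [0, 1, 1, 1]

# Same rule as train(): predict "Up" (1) when P(up) > 0.5.
preds = (probs[:, 1] > 0.5).astype(int)
print(preds.tolist())  # [0, 1, 0, 1]

# Rows are the true class, columns the predicted class.
cm = confusion_matrix(true_labels, preds)
print(cm)  # [[1 0]
           #  [1 2]]
```

Because the two class probabilities sum to one, thresholding `P(up)` at 0.5 is equivalent to taking the argmax over the row; the explicit threshold just makes it easy to trade recall for precision by moving it away from 0.5.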
SYMBOL INDEX (26 symbols across 8 files)

FILE: cnn.py
  class CNN (line 8) | class CNN():
    method __init__ (line 10) | def __init__(self, num_features, num_historical_days, is_train=True):

FILE: gan.py
  class GAN (line 9) | class GAN():
    method sample_Z (line 11) | def sample_Z(self, batch_size, n):
    method __init__ (line 14) | def __init__(self, num_features, num_historical_days, generator_input_...

FILE: get_predictions.py
  class Predict (line 19) | class Predict:
    method __init__ (line 21) | def __init__(self, num_historical_days=20, days=10, pct_change=0, gan_...
    method gan_predict (line 44) | def gan_predict(self):

FILE: get_stock_data.py
  class nasdaq (line 10) | class nasdaq():
    method __init__ (line 11) | def __init__(self):
    method build_url (line 15) | def build_url(self, symbol):
    method symbols (line 19) | def symbols(self):
  function download (line 27) | def download(i, symbol, url, output):
  function download_all (line 40) | def download_all():

FILE: plot_confusion_matrix.py
  function plot_confusion_matrix (line 7) | def plot_confusion_matrix(cm, classes,

FILE: train_cnn.py
  class TrainCNN (line 13) | class TrainCNN:
    method __init__ (line 15) | def __init__(self, num_historical_days, days=10, pct_change=0):
    method random_batch (line 48) | def random_batch(self, batch_size=128):
    method train (line 66) | def train(self, print_steps=100, display_steps=100, save_steps=1000, b...

FILE: train_gan.py
  class TrainGan (line 8) | class TrainGan:
    method __init__ (line 10) | def __init__(self, num_historical_days, batch_size=128):
    method random_batch (line 41) | def random_batch(self, batch_size=128):
    method train (line 49) | def train(self, print_steps=100, display_data=100, save_steps=1000):

FILE: train_xgb_boost.py
  class TrainXGBBoost (line 13) | class TrainXGBBoost:
    method __init__ (line 15) | def __init__(self, num_historical_days, days=10, pct_change=0):
    method train (line 70) | def train(self):

About this extraction

This page contains the full source code of the MiloMallo/StockMarketGAN GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 19 files (12.4 MB), approximately 11.8k tokens, and a symbol index with 26 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
