[
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2015 Calvin Schmidt\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE."
  },
  {
    "path": "MANIFEST.in",
    "content": "# Informational files\ninclude README.rst\ninclude LICENSE\n\n# Include docs and tests. It's unclear whether convention dictates\n# including built docs. However, Sphinx doesn't include built docs, so\n# we are following their lead.\ngraft docs\nprune docs/build\ngraft tests\n\n# Exclude any compiled Python files (most likely grafted by tests/ directory).\nglobal-exclude *.pyc\n\n# Setup-related things\ninclude pavement.py\ninclude requirements-dev.txt\ninclude requirements.txt\ninclude setup.py\ninclude tox.ini\n"
  },
  {
    "path": "README.rst",
    "content": "=========================\n Easy Tensorflow\n=========================\n\nThis package provides users with methods for the automated building, training, and testing of complex neural networks using Google's Tensorflow package. The project includes objects that perform both regression and classification tasks.\n\nIn addition, a function is included that uses the DEAP genetic algorithm package to evolve the optimal network architecture. The evolution function is based almost entirely on the sample DEAP evolution code.\n\nThis project is meant to simplify the Tensorflow experience, and it therefore reduces the customizability of the networks. Patches that expand functionality are welcome and encouraged, as long as they do not reduce the simplicity of usage. I will try to keep up with maintenance as best as I can, but please be patient; I am new to this.\n\nProject Setup\n=============\n\nDependencies\n------------\n\nFull support for Python 2.7. Python 3.3 is not tested.\n\nRequires tensorflow (tested on version 0.6.0). Installation instructions are available `on the Tensorflow website <https://www.tensorflow.org/versions/master/get_started/os_setup.html>`_.\n\nRequires DEAP (tested on version 1.0) for evolving. Installation instructions are available `here <http://deap.readthedocs.org/en/1.0.x/installation.html>`_.\n\nInstallation\n------------\n\n1. Either download and extract the zipped file, or clone directly from GitHub using::\n\n    git clone https://github.com/calvinschmdt/EasyTensorflow.git easy_tensorflow\n\n   This should create a new directory containing the required files.\n    \n2. Install the dependencies manually or by running this command while in the easy_tensorflow directory::\n\n    sudo pip install -r requirements.txt\n\n3. 
Install the project by running this command while in the easy_tensorflow directory::\n\n    sudo python setup.py install\n    \nUsage\n=====\n\nPrediction Objects\n------------------\n\nThis package uses objects to hold the neural networks. There are separate objects for performing regression and classification (Regresser and Classifier, respectively), but both have the same basic functions.\n\nInstantiation\n-------------\n\nInstantiate the object by assigning it to a variable. The only required argument for instantiation is a list that describes the neural network::\n\n    net_type = ['none', 20, 'sigmoid', 30, 'bias_add', 30, 'sigmoid']\n    reg = etf.tf_functions.Regresser(net_type)\n      \nThe net_type list needs to be in a specific format: alternating strings and integers, starting and ending with a string. The strings describe the transformation that is made between each layer of the neural network, while the integers denote the size of the layer after the transformation is made. 
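The alternation rule above can be sketched as a quick check (a hypothetical helper for illustration only; it is not part of this package):

```python
# Hypothetical helper (not part of easy_tensorflow): checks that a
# net_type list alternates strings and integers, starting and ending
# with a string.
def is_valid_net_type(net_type):
    if len(net_type) % 2 == 0:
        return False  # a valid list has odd length: string, int, ..., string
    for i, item in enumerate(net_type):
        expected = str if i % 2 == 0 else int
        if not isinstance(item, expected):
            return False
    return True

net_type = ['none', 20, 'sigmoid', 30, 'bias_add', 30, 'sigmoid']
print(is_valid_net_type(net_type))             # True
print(is_valid_net_type(['none', 'sigmoid']))  # False: even length
```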
For example, the above network would look like this:\n\n    +---------------------------------------------------------------------------+\n    |                  Input with n samples and f features                      |\n    +---------------------------------------------------------------------------+\n    |              *Matrix multiplication by adjustable weights*                |\n    +---------------------------------------------------------------------------+\n    |                 Layer 1 with n samples and 20 features                    |\n    +---------------------------------------------------------------------------+\n    | *Sigmoid transformation on a matrix multiplication by adjustable weights* |\n    +---------------------------------------------------------------------------+\n    |                 Layer 2 with n samples and 30 features                    |\n    +---------------------------------------------------------------------------+\n    |            *Addition to each feature of an adjustable weight*             |\n    +---------------------------------------------------------------------------+\n    |                 Layer 3 with n samples and 30 features                    |\n    +---------------------------------------------------------------------------+\n    | *Sigmoid transformation on a matrix multiplication by adjustable weights* |\n    +---------------------------------------------------------------------------+\n    |                  Output with n samples and o features                     |\n    +---------------------------------------------------------------------------+\n\n\nThe number of input and output features does not have to be specified upon instantiation; both are inferred during training.\n\nThe transformations available are:\n    \n    - 'relu': Relu transformation on a matrix multiplication by adjustable weights. 
Relu applies a ramp function.\n    - 'softplus': Softplus transformation on a matrix multiplication by adjustable weights. Softplus applies a smoothed ramp function.\n    - 'dropout': Randomly pushes features to zero. Prevents overfitting. Does not change the number of features.\n    - 'bias_add': Adds an adjustable weight to each feature. Does not change the number of features.\n    - 'sigmoid': Sigmoid transformation on a matrix multiplication by adjustable weights. Sigmoid forces values to approach 0 or 1.\n    - 'tanh': Hyperbolic tangent transformation on a matrix multiplication by adjustable weights. Tanh forces values to approach -1 or 1.\n    - 'none': Matrix multiplication with adjustable weights.\n    - 'normalize': Normalizes features of a sample using an L2 norm. Does not change the number of features.\n    - 'sum': Sum across all features. Reduces to 1 feature, so most useful as a final transformation in a regression.\n    - 'prod': Multiplies all features. Reduces to 1 feature, so most useful as a final transformation in a regression.\n    - 'min': Takes minimum value of all features. Reduces to 1 feature, so most useful as a final transformation in a regression.\n    - 'max': Takes maximum value of all features. Reduces to 1 feature, so most useful as a final transformation in a regression.\n    - 'mean': Takes mean of all features. Reduces to 1 feature, so most useful as a final transformation in a regression.\n    - 'softmax': Normalizes the features so that the sum equals 1 on a matrix multiplication by adjustable weights. Most useful as a final transformation in a classification to give class probabilities.\n\nThe object has several optional arguments:\n\n    loss_type: String that defines the error measurement term. This is used during training to determine the weights that give the most accurate output. 
The loss types available are:\n        \n        - 'l2_loss' - Uses tensorflow's nn.l2_loss function on the difference between the predicted and actual values. Computes half the squared L2 norm. Use for regression; this is the default loss_type for the regression object.\n        - 'cross_entropy' - Calculates the cross-entropy between two probability distributions as defined in Tensorflow's MNIST tutorial (-tf.reduce_sum(y * tf.log(py_x))). Use for classification; this is the default loss_type for the classification object.\n        \n    optimizer: String that defines the optimization algorithm for training. If a string is passed, the optimizers will be used with default learning rates. If you wish to use a custom learning rate, pass in, instead of a string, a tuple with the tensorflow optimizer as the first element and a tuple of arguments as the second element. The optimizers available are:\n        \n        - 'GradientDescent': Implements the gradient descent algorithm with a default learning rate of 0.001.\n        - 'Adagrad': Implements the Adagrad algorithm with a default learning rate of 0.001.\n        - 'Momentum': Implements the Momentum algorithm with a default learning rate of 0.001 and momentum of 0.1.\n        - 'Adam': Implements the Adam algorithm.\n        - 'FTRL': Implements the FTRL algorithm with a learning rate of 0.001.\n        - 'RMSProp': Implements the RMSProp algorithm with a learning rate of 0.001 and a decay of 0.9.\n        \nTraining\n--------\n\nObjects are trained by calling object.train() with certain arguments::\n\n    trX = training_data\n    trY = training_output\n    training_steps = 50\n    reg.train(trX, trY, training_steps)\n\nBoth objects are trained by passing in a set of data with known outputs. The training input data should be passed in as a numpy array, with each sample as a row and features as the columns. 
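As a sketch of that layout (made-up values, with numpy assumed to be installed):

```python
import numpy as np

# Made-up example of the expected layout: 4 samples (rows),
# 3 features (columns), and one known output value per sample.
trX = np.array([[0.1, 0.2, 0.3],
                [0.4, 0.5, 0.6],
                [0.7, 0.8, 0.9],
                [1.0, 1.1, 1.2]])
trY = [1.0, 2.0, 3.0, 4.0]  # one output per row of trX
print(trX.shape)  # (4, 3)
```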
The training output data can take multiple forms: \n\n    - For regression tasks, it can be an iterable list with one output value for each sample, or it can be a numpy array of shape (n, 1).\n    - For classification tasks, it can be a numpy array of shape (n, m), where m is the number of classes. In this array, each row has a 1 in the column of the class to which that sample belongs, and a 0 in all other columns. Otherwise, an iterable list can be passed in with the class name for each sample. This is required if the class names, rather than a probability matrix, are to be returned during testing.\n    \nIn addition to the training data and training output, the number of times to iterate over training must be passed in as the third argument.\n\nThere are several optional arguments for training that control how long training the network takes:\n\n    - full_train: Denotes whether to use the entire training set each iteration. Set to True by default.\n    - train_size: If full_train is set to False, denotes how many samples to use from the training set each iteration of training. Pulls randomly from the training set with possible repeats.\n    \nPredicting\n----------\n\nAfter the object is trained, the network can be used to predict the output of test data by calling object.predict() with certain arguments::\n\n    teX = test_data\n    p = reg.predict(teX)\n      \nThe test data should have the same number of features as the training data, though the number of samples may be different.\n\nThe output for a regression object will be a numpy array of shape (n, ) with the predicted value for each sample.\n\nThe output for a classification object will be a list with a predicted class for each sample. If a probability matrix is desired, pass the argument return_encoded=False when predicting, and a numpy array of shape (n, m) will be returned.\n\nClosing\n-------\n\nCalling object.close() will close the network, freeing up resources. 
It cannot be used again, and a new object must be started for training and predicting to occur.\n\nEvolving\n========\n\nIf you do not know the right neural network architecture for your problem, you can use a genetic selection algorithm to evolve the optimal architecture.\n\nTo do this, use the command evolve() with several required arguments:\n\n    - predict_type: String denoting the type of neural network to evolve. Two options: 'regression' and 'classification'.\n    - fitness_measure: String denoting the type of measurement to use for evaluating the performance of the network type. Options:\n        - 'rmse': Root mean squared error between the predicted values and known values. Use for regression.\n        - 'r_squared': Coefficient of determination for determining how well the data fits the model. Use for regression.\n        - 'accuracy': Fraction of samples that were classified correctly. Use for classification, and can be used for multi-class classification.\n        - 'sensitivity': Fraction of positive samples correctly identified as positive. Use for classification with two classes, where the second class is the positive class.\n        - 'specificity': Fraction of negative samples correctly identified as negative. Use for classification with two classes, where the first class is the negative class.\n    - trX: Numpy array with input data to use for training. Will pull randomly from this array to create test and training sets.\n    - trY: Numpy array with output data to use for training.\n\nAfter the evolution finishes, it will return a net_type and optimizer that can be fed into a regression or classification object, along with the measurement that net_type produced. 
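These fitness measures are standard; as a rough sketch of two of them (hypothetical helpers for illustration, not the package's internal code):

```python
import numpy as np

# Hypothetical sketches of two fitness measures described above.
def rmse(predicted, actual):
    # Root mean squared error: lower is better.
    diff = np.asarray(predicted) - np.asarray(actual)
    return np.sqrt(np.mean(diff ** 2))

def accuracy(predicted_classes, actual_classes):
    # Fraction of samples classified correctly: higher is better.
    return np.mean(np.asarray(predicted_classes) == np.asarray(actual_classes))

print(rmse([1.0, 2.0], [1.0, 4.0]))                # ~1.414 (sqrt of 2)
print(accuracy(['a', 'b', 'b'], ['a', 'b', 'a']))  # ~0.667
```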
If \"Error during training\" is printed, it only means that an error was encountered at some point during the evolution.::\n\n    net_type, opt, m = etf.evolve_functions.evolve('classification', 'accuracy', trX, trY)\n\nThere are many optional arguments that allow for customization of the evolution:\n\n\t- max_layers: Integer denoting the maximum number of layers that exist between the input and output layer. Set at 5 by default.\n\t- num_gens: Number of generations to simulate. Set at 10 by default.\n\t- gen_size: Number of individual members per generation. Set at 40 by default.\n\t- teX: If a specific test set is desired, enter the input data here as a numpy array.\n\t- teY: Test output data as a numpy array.\n\t- layer_types: List of strings denoting the layer types possible to be used. Can have repeated types for an increased probability of incorporation. Set to ['relu', 'softplus', 'dropout', 'bias_add', 'sigmoid', 'tanh', 'none', 'normalize'] by default.\n\t- layer_sizes: List of integers denoting the layer sizes possible to be used. Layer sizes of 0 drop out a layer. List must be of the same length as layer_types. Set to [0, 0, 10, 50, 100, 200, 500, 1000] by default.\n\t- end_types: List of strings denoting the options for the type of transformation that gives the output. Forced to be softmax by default during classification. List must be of same length as layer_types. Set to ['sum', 'prod', 'min', 'max', 'mean', 'none', 'sigmoid', 'tanh'] by default.\n\t- train_types: List of strings denoting the optimizer types possible to be used. List must be of same length as layer_types. Set to ['GradientDescent', 'GradientDescent', 'GradientDescent', 'Adagrad', 'Momentum', 'Adam', 'Ftrl', 'RMSProp'] by default.\n\t- cross_prob: Float value denoting the probability of crossing the genetics of different individuals. Set at 0.2 by default.\n\t- mut_prob: Float value denoting the probability of changing the genetics of a single individual. 
Set at 0.2 by default.\n\t- tourn_size: Integer denoting the number of individuals to carry from each generation. Set at 5 by default.\n\t- train_iters: Integer denoting the number of training iterations to use for each neural network. Set at 5 by default.\n\t- squash_errors: Boolean value denoting whether to give a fail value if the network results in an error. Set to True by default. It is recommended to leave this True, as it is difficult to complete a long evolution without running into some type of error.\n\nLicenses\n========\n\nThe code which makes up this project is licensed under the MIT/X11 license. Feel free to use it in your free software/open-source or proprietary projects.\n\nIssues\n======\n\nPlease report any bugs or requests that you have using the GitHub issue tracker!\n\nDevelopment\n===========\n\nIf you wish to contribute, first make your changes. Then run the following from the project root directory::\n\n    source internal/test.sh\n\nThis will copy the template directory to a temporary directory, run the generation, then run tox. Any arguments passed will go directly to the tox command line, e.g.::\n\n    source internal/test.sh -e py27\n\nThis command line would just test Python 2.7.\n\nAcknowledgements\n================\n\nBoth Tensorflow and DEAP were created by other (very smart) people; this package just combines the two.\n\nThis package was set up using Sean Fisk's Python Project Template package.\n\nAuthors\n=======\n\n* Calvin Schmidt\n"
  },
  {
    "path": "docs/Makefile",
    "content": "# Makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line.\nSPHINXOPTS    = -W\nSPHINXBUILD   = sphinx-build\nPAPER         =\nBUILDDIR      = build\n\n# Internal variables.\nPAPEROPT_a4     = -D latex_paper_size=a4\nPAPEROPT_letter = -D latex_paper_size=letter\nALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source\n# the i18n builder cannot share the environment and doctrees with the others\nI18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source\n\n.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext\n\nhelp:\n\t@echo \"Please use \\`make <target>' where <target> is one of\"\n\t@echo \"  html       to make standalone HTML files\"\n\t@echo \"  dirhtml    to make HTML files named index.html in directories\"\n\t@echo \"  singlehtml to make a single large HTML file\"\n\t@echo \"  pickle     to make pickle files\"\n\t@echo \"  json       to make JSON files\"\n\t@echo \"  htmlhelp   to make HTML files and a HTML help project\"\n\t@echo \"  qthelp     to make HTML files and a qthelp project\"\n\t@echo \"  devhelp    to make HTML files and a Devhelp project\"\n\t@echo \"  epub       to make an epub\"\n\t@echo \"  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter\"\n\t@echo \"  latexpdf   to make LaTeX files and run them through pdflatex\"\n\t@echo \"  text       to make text files\"\n\t@echo \"  man        to make manual pages\"\n\t@echo \"  texinfo    to make Texinfo files\"\n\t@echo \"  info       to make Texinfo files and run them through makeinfo\"\n\t@echo \"  gettext    to make PO message catalogs\"\n\t@echo \"  changes    to make an overview of all changed/added/deprecated items\"\n\t@echo \"  linkcheck  to check all external links for integrity\"\n\t@echo \"  doctest    to run all doctests embedded in the documentation (if enabled)\"\n\nclean:\n\t-rm -rf 
$(BUILDDIR)/*\n\nhtml:\n\t$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html\n\t@echo\n\t@echo \"Build finished. The HTML pages are in $(BUILDDIR)/html.\"\n\ndirhtml:\n\t$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml\n\t@echo\n\t@echo \"Build finished. The HTML pages are in $(BUILDDIR)/dirhtml.\"\n\nsinglehtml:\n\t$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml\n\t@echo\n\t@echo \"Build finished. The HTML page is in $(BUILDDIR)/singlehtml.\"\n\npickle:\n\t$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle\n\t@echo\n\t@echo \"Build finished; now you can process the pickle files.\"\n\njson:\n\t$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json\n\t@echo\n\t@echo \"Build finished; now you can process the JSON files.\"\n\nhtmlhelp:\n\t$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp\n\t@echo\n\t@echo \"Build finished; now you can run HTML Help Workshop with the\" \\\n\t      \".hhp project file in $(BUILDDIR)/htmlhelp.\"\n\nqthelp:\n\t$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp\n\t@echo\n\t@echo \"Build finished; now you can run \"qcollectiongenerator\" with the\" \\\n\t      \".qhcp project file in $(BUILDDIR)/qthelp, like this:\"\n\t@echo \"# qcollectiongenerator $(BUILDDIR)/qthelp/EasyTensorflow.qhcp\"\n\t@echo \"To view the help file:\"\n\t@echo \"# assistant -collectionFile $(BUILDDIR)/qthelp/EasyTensorflow.qhc\"\n\ndevhelp:\n\t$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp\n\t@echo\n\t@echo \"Build finished.\"\n\t@echo \"To view the help file:\"\n\t@echo \"# mkdir -p $HOME/.local/share/devhelp/EasyTensorflow\"\n\t@echo \"# ln -s $(BUILDDIR)/devhelp $HOME/.local/share/devhelp/EasyTensorflow\"\n\t@echo \"# devhelp\"\n\nepub:\n\t$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub\n\t@echo\n\t@echo \"Build finished. 
The epub file is in $(BUILDDIR)/epub.\"\n\nlatex:\n\t$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex\n\t@echo\n\t@echo \"Build finished; the LaTeX files are in $(BUILDDIR)/latex.\"\n\t@echo \"Run \\`make' in that directory to run these through (pdf)latex\" \\\n\t      \"(use \\`make latexpdf' here to do that automatically).\"\n\nlatexpdf:\n\t$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex\n\t@echo \"Running LaTeX files through pdflatex...\"\n\t$(MAKE) -C $(BUILDDIR)/latex all-pdf\n\t@echo \"pdflatex finished; the PDF files are in $(BUILDDIR)/latex.\"\n\ntext:\n\t$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text\n\t@echo\n\t@echo \"Build finished. The text files are in $(BUILDDIR)/text.\"\n\nman:\n\t$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man\n\t@echo\n\t@echo \"Build finished. The manual pages are in $(BUILDDIR)/man.\"\n\ntexinfo:\n\t$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo\n\t@echo\n\t@echo \"Build finished. The Texinfo files are in $(BUILDDIR)/texinfo.\"\n\t@echo \"Run \\`make' in that directory to run these through makeinfo\" \\\n\t      \"(use \\`make info' here to do that automatically).\"\n\ninfo:\n\t$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo\n\t@echo \"Running Texinfo files through makeinfo...\"\n\tmake -C $(BUILDDIR)/texinfo info\n\t@echo \"makeinfo finished; the Info files are in $(BUILDDIR)/texinfo.\"\n\ngettext:\n\t$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale\n\t@echo\n\t@echo \"Build finished. 
The message catalogs are in $(BUILDDIR)/locale.\"\n\nchanges:\n\t$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes\n\t@echo\n\t@echo \"The overview file is in $(BUILDDIR)/changes.\"\n\nlinkcheck:\n\t$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck\n\t@echo\n\t@echo \"Link check complete; look for any errors in the above output \" \\\n\t      \"or in $(BUILDDIR)/linkcheck/output.txt.\"\n\ndoctest:\n\t$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest\n\t@echo \"Testing of doctests in the sources finished, look at the \" \\\n\t      \"results in $(BUILDDIR)/doctest/output.txt.\"\n"
  },
  {
    "path": "docs/make.bat",
    "content": "@ECHO OFF\n\nREM Command file for Sphinx documentation\n\nif \"%SPHINXBUILD%\" == \"\" (\n\tset SPHINXBUILD=sphinx-build\n)\nset BUILDDIR=build\nset SPHINXOPTS=-W\nset ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source\nset I18NSPHINXOPTS=%SPHINXOPTS% source\nif NOT \"%PAPER%\" == \"\" (\n\tset ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%\n\tset I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%\n)\n\nif \"%1\" == \"\" goto help\n\nif \"%1\" == \"help\" (\n\t:help\n\techo.Please use `make ^<target^>` where ^<target^> is one of\n\techo.  html       to make standalone HTML files\n\techo.  dirhtml    to make HTML files named index.html in directories\n\techo.  singlehtml to make a single large HTML file\n\techo.  pickle     to make pickle files\n\techo.  json       to make JSON files\n\techo.  htmlhelp   to make HTML files and a HTML help project\n\techo.  qthelp     to make HTML files and a qthelp project\n\techo.  devhelp    to make HTML files and a Devhelp project\n\techo.  epub       to make an epub\n\techo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter\n\techo.  text       to make text files\n\techo.  man        to make manual pages\n\techo.  texinfo    to make Texinfo files\n\techo.  gettext    to make PO message catalogs\n\techo.  changes    to make an overview over all changed/added/deprecated items\n\techo.  linkcheck  to check all external links for integrity\n\techo.  doctest    to run all doctests embedded in the documentation if enabled\n\tgoto end\n)\n\nif \"%1\" == \"clean\" (\n\tfor /d %%i in (%BUILDDIR%\\*) do rmdir /q /s %%i\n\tdel /q /s %BUILDDIR%\\*\n\tgoto end\n)\n\nif \"%1\" == \"html\" (\n\t%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished. 
The HTML pages are in %BUILDDIR%/html.\n\tgoto end\n)\n\nif \"%1\" == \"dirhtml\" (\n\t%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.\n\tgoto end\n)\n\nif \"%1\" == \"singlehtml\" (\n\t%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.\n\tgoto end\n)\n\nif \"%1\" == \"pickle\" (\n\t%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished; now you can process the pickle files.\n\tgoto end\n)\n\nif \"%1\" == \"json\" (\n\t%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished; now you can process the JSON files.\n\tgoto end\n)\n\nif \"%1\" == \"htmlhelp\" (\n\t%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished; now you can run HTML Help Workshop with the ^\n.hhp project file in %BUILDDIR%/htmlhelp.\n\tgoto end\n)\n\nif \"%1\" == \"qthelp\" (\n\t%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished; now you can run \"qcollectiongenerator\" with the ^\n.qhcp project file in %BUILDDIR%/qthelp, like this:\n\techo.^> qcollectiongenerator %BUILDDIR%\\qthelp\\EasyTensorflow.qhcp\n\techo.To view the help file:\n\techo.^> assistant -collectionFile %BUILDDIR%\\qthelp\\EasyTensorflow.qhc\n\tgoto end\n)\n\nif \"%1\" == \"devhelp\" (\n\t%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished.\n\tgoto end\n)\n\nif \"%1\" == \"epub\" (\n\t%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished. 
The epub file is in %BUILDDIR%/epub.\n\tgoto end\n)\n\nif \"%1\" == \"latex\" (\n\t%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished; the LaTeX files are in %BUILDDIR%/latex.\n\tgoto end\n)\n\nif \"%1\" == \"text\" (\n\t%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished. The text files are in %BUILDDIR%/text.\n\tgoto end\n)\n\nif \"%1\" == \"man\" (\n\t%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished. The manual pages are in %BUILDDIR%/man.\n\tgoto end\n)\n\nif \"%1\" == \"texinfo\" (\n\t%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.\n\tgoto end\n)\n\nif \"%1\" == \"gettext\" (\n\t%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Build finished. The message catalogs are in %BUILDDIR%/locale.\n\tgoto end\n)\n\nif \"%1\" == \"changes\" (\n\t%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.The overview file is in %BUILDDIR%/changes.\n\tgoto end\n)\n\nif \"%1\" == \"linkcheck\" (\n\t%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Link check complete; look for any errors in the above output ^\nor in %BUILDDIR%/linkcheck/output.txt.\n\tgoto end\n)\n\nif \"%1\" == \"doctest\" (\n\t%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest\n\tif errorlevel 1 exit /b 1\n\techo.\n\techo.Testing of doctests in the sources finished, look at the ^\nresults in %BUILDDIR%/doctest/output.txt.\n\tgoto end\n)\n\n:end\n"
  },
  {
    "path": "docs/source/README",
    "content": "Run `sphinx-apidoc -o . ../../easy_tensorflow' in this directory.\n\nThis will generate `modules.rst' and `easy_tensorflow.rst'.\n\nThen include `modules.rst' in your `index.rst' file."
  },
  {
    "path": "docs/source/_static/.gitkeep",
    "content": ""
  },
  {
    "path": "docs/source/conf.py",
    "content": "# -*- coding: utf-8 -*-\n\n# This file is based upon the file generated by sphinx-quickstart. However,\n# where sphinx-quickstart hardcodes values in this file that you input, this\n# file has been changed to pull from your module's metadata module.\n#\n# This file is execfile()d with the current directory set to its containing\n# dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\nimport sys\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('../..'))\n\n# Import project metadata\nfrom easy_tensorflow import metadata\n\n# -- General configuration ----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx',\n              'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.viewcode']\n\n# show todos\ntodo_include_todos = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = metadata.project\ncopyright = metadata.copyright\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = metadata.version\n# The full version, including alpha/beta/rc tags.\nrelease = metadata.version\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = []\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. 
They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n\n# -- Options for HTML output --------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.  See the documentation for\n# a list of builtin themes.\nhtml_theme = 'nature'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further.  For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents.  If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar.  Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it.  The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. 
\".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = metadata.project_no_spaces + 'doc'\n\n\n# -- Options for LaTeX output -------------------------------------------------\n\nlatex_elements = {\n    # The paper size ('letterpaper' or 'a4paper').\n    #'papersize': 'letterpaper',\n\n    # The font size ('10pt', '11pt' or '12pt').\n    #'pointsize': '10pt',\n\n    # Additional stuff for the LaTeX preamble.\n    #'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author,\n# documentclass [howto/manual]).\nlatex_documents = [\n    ('index', metadata.project_no_spaces + '.tex',\n     metadata.project + ' Documentation', metadata.authors_string,\n     'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output -------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n    ('index', metadata.package, metadata.project + ' Documentation',\n     metadata.authors_string, 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -----------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n#  dir menu entry, description, category)\ntexinfo_documents = [\n    ('index', metadata.project_no_spaces,\n     metadata.project + ' Documentation', metadata.authors_string,\n     metadata.project_no_spaces, metadata.description, 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n    'python': ('http://docs.python.org/', None),\n}\n\n# Extra local configuration. This is useful for placing the class description\n# in the class docstring and the __init__ parameter documentation in the\n# __init__ docstring. See\n# <http://sphinx-doc.org/ext/autodoc.html#confval-autoclass_content> for more\n# information.\nautoclass_content = 'both'\n"
  },
  {
    "path": "docs/source/index.rst",
    "content": "EasyTensorflow\n==============\n\nContents:\n\n.. toctree::\n   :maxdepth: 2\n\n\n.. only:: html\n\n   Indices and tables\n   ==================\n\n   * :ref:`genindex`\n   * :ref:`modindex`\n   * :ref:`search`\n"
  },
  {
    "path": "easy_tensorflow/__init__.py",
    "content": "# -*- coding: utf-8 -*-\n\"\"\"Provides objects that allow for easy setup, training, and running of neural networks, based on Google's tensorflow library. Also allows for the evolution of effective networks using a genetic algorithm derived from the DEAP package.\"\"\"\n\nfrom easy_tensorflow import metadata, tf_functions, evolve_functions\n\n\n__version__ = metadata.version\n__author__ = metadata.authors[0]\n__license__ = metadata.license\n__copyright__ = metadata.copyright\n"
  },
  {
    "path": "easy_tensorflow/evolve_functions.py",
    "content": "import numpy as np\nimport scipy.stats\nimport random\nfrom deap import base\nfrom deap import creator\nfrom deap import tools\nimport tf_functions\n\ndef rmse(p, t):\n    '''\n    Calculates the root means squared error between two vectors.\n    :param p: One-dimensional numpy array with the predicted values.\n    :param t: One-dimensional numpy array with the known values.\n    :return: Float value between 0 and infinity.\n    '''\n\n    return np.sqrt(np.average(np.square(np.subtract(p, t))))\n\ndef r_squared(p, t):\n    '''\n    Coefficient of determination for determining how well the data fits the model.\n    :param p: Numpy array with the predicted values.\n    :param t: Numpy array with the known values.\n    :return: Float value between 0 and 1.\n    '''\n\n    # Reshapes into one-dimensional array if necessary.\n    if len(t.shape) == 2:\n        t = [i[0] for i in t]\n\n    if len(p.shape) == 2:\n        p = [i[0] for i in p]\n\n    # Get r value for squaring.\n    slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(p, t)\n    return r_value ** 2\n\ndef accuracy(p, t):\n    '''\n    Fraction of samples that were classified correctly.\n    :param p: Multi-dimensional numpy array with the predicted values.\n    :param t: Multi-dimensional numpy array with the known values.\n    :return: Float value between 0 and 1.\n    '''\n\n    return sum([1 for i, j in zip(np.argmax(p, axis=1), np.argmax(t, axis=1)) if i == j]) / float(len(p))\n\ndef specificity(p, t):\n    '''\n    Fraction of positive samples correctly identified as positive.\n    :param p: Multi-dimensional numpy array with the predicted values.\n    :param t: Multi-dimensional numpy array with the known values.\n    :return: Float value between 0 and 1.\n    '''\n\n    # Calculates number of correctly identified positive samples.\n    num = sum([1 for i, j in zip(np.argmax(p, axis=1), np.argmax(t, axis=1)) if i == 0 and j == 0])\n\n    # Calculates total number of 
positive samples.\n    denom = sum([1 for i in np.argmax(t, axis=1) if i == 0])\n\n    return num / float(denom)\n\ndef sensitivity(p, t):\n    '''\n    Fraction of negative samples correctly identified as negative.\n    :param p: Multi-dimensional numpy array with the predicted values.\n    :param t: Multi-dimensional numpy array with the known values.\n    :return: Float value between 0 and 1.\n    '''\n\n    # Calculates number of correctly identified negative samples.\n    num = sum([1 for i, j in zip(np.argmax(p, axis=1), np.argmax(t, axis=1)) if i == 1 and j == 1])\n\n    # Calculates total number of negative samples.\n    denom = sum([1 for i in np.argmax(t, axis=1) if i == 1])\n\n    return num / float(denom)\n\ndef test_train_split(X, y, num_test = 0):\n    '''\n    Splits a full set of input and output data randomly into train and test sets. Keeps the input and output values\n    connected.\n    :param X: Numpy array with input data.\n    :param y: Numpy array with output data.\n    :param num_test: Number of samples to use for the test set. 
Set to 0 by default, which causes 1/5 of the full set to\n        be split off for testing.\n    :return: Four numpy arrays corresponding to the training input, training output, testing input, and testing output\n        data.\n    '''\n\n    # Splits off 1/5 of data if no specific amount is given.\n    if num_test == 0:\n        num_test = y.shape[0] / 5\n\n    # Turns one-dimensional vector into array of shape (n, 1).\n    if len(y.shape) == 1:\n        y = y.reshape(len(y), 1)\n\n    # Records the number of output features.\n    output_features = y.shape[1]\n\n    # Combines the input and output features so that the outputs can stay connected.\n    all_vals = np.append(X, y, 1)\n\n    # Randomly shuffles the samples.\n    np.random.shuffle(all_vals)\n\n    # Pulls out the test and train input and output features.\n    teX, teY = all_vals[:num_test, :-output_features], all_vals[:num_test, -output_features:]\n    trX, trY = all_vals[num_test:, :-output_features], all_vals[num_test:, -output_features:]\n\n    return trX, trY, teX, teY\n\ndef evolve(predict_type, fitness_measure, trX, trY,\n           max_layers = 5, num_gens = 10, gen_size = 40, teX = [], teY = [],\n           layer_types = ['relu', 'softplus', 'dropout', 'bias_add', 'sigmoid', 'tanh', 'none', 'normalize'],\n           layer_sizes = [0, 0, 10, 50, 100, 200, 500, 1000],\n           end_types = ['sum', 'prod', 'min', 'max', 'mean', 'none', 'sigmoid', 'tanh'],\n           train_types = ['GradientDescent', 'GradientDescent', 'GradientDescent', 'Adagrad', 'Momentum', 'Adam', 'Ftrl', 'RMSProp'],\n           cross_prob = 0.2, mut_prob = 0.2, tourn_size = 5, train_iters = 5, squash_errors = True):\n    '''\n    Uses a genetic algorithm (from the DEAP package) to search for the network architecture and optimizer that\n    score best on the chosen fitness measure.\n    :param predict_type: String denoting the type of neural network to evolve. Two options: 'regression' and\n        'classification'.\n    :param fitness_measure: String denoting the type of measurement to use for evaluating the performance of the network\n        type. 
Options:\n\t\t- 'rmse': Root mean squared error between the predicted values and known values. Use for regression.\n\t\t- 'r_squared': Coefficient of determination for determining how well the data fits the model. Use for\n\t\t    regression.\n\t\t- 'accuracy': Fraction of samples that were classified correctly. Use for classification, and can be used for\n\t\t    multi-class classification.\n\t\t- 'sensitivity': Fraction of positive samples correctly identified as positive. Use for classification with two\n\t\t    classes, and the second class is the positive class.\n\t\t- 'specificity': Fraction of negative samples correctly identified as negative. Use for classification with two\n\t\t    classes, and the first class is the negative class.\n    :param trX: Numpy array with input data to use for training. Will pull randomly from this array to create test and\n        training sets.\n    :param trY: Numpy array with output data to use for training.\n    :param max_layers: Integer denoting the maximum number of layers that exist between the input and output layer. Set\n        at 5 by default.\n    :param num_gens: Number of generations to simulate. Set at 10 by default.\n    :param gen_size: Number of individual members per generation. Set at 40 by default.\n    :param teX: If a specific test set is desired, enter the input data here as a numpy array.\n    :param teY: Test output data as a numpy array.\n    :param layer_types: List of strings denoting the layer types possible to be used. Set to ['relu', 'softplus',\n        'dropout', 'bias_add', 'sigmoid', 'tanh', 'none', 'normalize'] by default.\n    :param layer_sizes: List of integers denoting the layer sizes possible to be used. List must be of the same length\n        as layer_types. Set to [0, 0, 10, 50, 100, 200, 500, 1000] by default.\n    :param end_types: List of strings denoting the options for the type of transformation that gives the output. List\n        must be of same length as layer_types. 
Set to ['sum', 'prod', 'min', 'max', 'mean', 'none', 'sigmoid', 'tanh']\n        by default.\n    :param train_types: List of strings denoting the optimizer types possible to be used. List must be of same length as\n        layer_types. Set to ['GradientDescent', 'GradientDescent', 'GradientDescent', 'Adagrad', 'Momentum', 'Adam',\n        'Ftrl', 'RMSProp'] by default.\n    :param cross_prob: Float value denoting the probability of crossing the genetics of different individuals. Set at\n        0.2 by default.\n    :param mut_prob: Float value denoting the probability of changing the genetics of a single individual. Set at 0.2 by\n        default.\n    :param tourn_size: Integer denoting the number of individuals to carry from each generation. Set at 5 by default.\n    :param train_iters: Integer denoting the number of training iterations to use for each neural network. Set at 5 by\n        default.\n    :param squash_errors: Boolean value denoting whether to give a fail value if the network results in an error. 
Set to\n        True by default.\n    :return: List of strings giving the best net_type, string denoting the best optimizer, and Float value denoting the\n        measure of the best network type.\n    '''\n\n    # Checks that the different options have the same size.\n    if not len(layer_types) == len(layer_sizes) == len(end_types) == len(train_types):\n        print('Input attribute lists have different sizes.')\n        return None\n\n    # Gets the type of network to check.\n    if predict_type == 'regression':\n        predictor = tf_functions.Regresser\n    elif predict_type == 'classification':\n        predictor = tf_functions.Classifier\n        end_types = ['softmax'] * len(layer_types)\n\n    # Gets the type of success measure to use.\n    if fitness_measure == 'rmse':\n        measure = rmse\n        creator.create(\"FitnessMin\", base.Fitness, weights=(-1.0,))\n        creator.create(\"Individual\", list, fitness=creator.FitnessMin)\n        fail_val = np.inf\n    elif fitness_measure == 'r_squared':\n        measure = r_squared\n        creator.create(\"FitnessMax\", base.Fitness, weights=(1.0,))\n        creator.create(\"Individual\", list, fitness=creator.FitnessMax)\n        fail_val = 0\n    elif fitness_measure == 'accuracy':\n        measure = accuracy\n        creator.create(\"FitnessMax\", base.Fitness, weights=(1.0,))\n        creator.create(\"Individual\", list, fitness=creator.FitnessMax)\n        fail_val = 0\n    elif fitness_measure == 'specificity':\n        measure = specificity\n        creator.create(\"FitnessMax\", base.Fitness, weights=(1.0,))\n        creator.create(\"Individual\", list, fitness=creator.FitnessMax)\n        fail_val = 0\n    elif fitness_measure == 'sensitivity':\n        measure = sensitivity\n        creator.create(\"FitnessMax\", base.Fitness, weights=(1.0,))\n        creator.create(\"Individual\", list, fitness=creator.FitnessMax)\n        fail_val = 0\n\n    toolbox = base.Toolbox()\n\n    # Attribute 
generator.\n    toolbox.register(\"attr_ints\", random.randint, 0, len(layer_types) - 1)\n\n    # Structure initializers.\n    toolbox.register(\"individual\", tools.initRepeat, creator.Individual, toolbox.attr_ints, n=(max_layers * 2) + 2)\n    toolbox.register(\"population\", tools.initRepeat, list, toolbox.individual)\n\n    # Operator registering.\n    toolbox.register(\"mate\", tools.cxTwoPoint)\n    toolbox.register(\"mutate\", tools.mutUniformInt,low = 0, up = len(layer_types) - 1, indpb=0.5)\n    toolbox.register(\"select\", tools.selTournament, tournsize = tourn_size)\n\n    # Gets the initial population.\n    pop = toolbox.population(n = gen_size)\n\n    # Performs an initial selection.\n    for ind in pop:\n\n        # Turns the individual's genes into a net_type and an optimizer.\n        net_type = []\n        for i in range(max_layers):\n            net_type.append(layer_types[ind[i * 2]])\n            net_type.append(layer_sizes[ind[i * 2 + 1]])\n\n        net_type.append(end_types[ind[-2]])\n\n        # Splits into test and train if needed.\n        if teX == []:\n            ttrX, ttrY, tteX, tteY = test_train_split(trX, trY)\n        else:\n            ttrX, ttrY, tteX, tteY = trX, trY, teX, teY\n\n        # Attempts to test network if errors to be squashed.\n        if squash_errors:\n            try:\n\n                # Sets up, trains, and tests network.\n                ind_predictor = predictor(net_type, optimizer = train_types[ind[-1]])\n                ind_predictor.train(ttrX, ttrY, train_iters)\n\n                p = ind_predictor.predict(tteX)\n                m = measure(p, tteY)\n\n                ind_predictor.close()\n\n            # Upon an error, gives the worst possible value.\n            except:\n                m = fail_val\n\n            if np.isnan(m):\n                m = fail_val\n\n        else:\n            ind_predictor = predictor(net_type, optimizer = train_types[ind[-1]])\n            ind_predictor.train(ttrX, ttrY, 
train_iters)\n\n            p = ind_predictor.predict(tteX)\n            m = measure(p, tteY)\n\n            ind_predictor.close()\n\n        ind.fitness.values = (m,)\n\n    # Begins the evolution.\n    for g in range(num_gens):\n\n        # Selects the next generation individuals.\n        offspring = toolbox.select(pop, len(pop))\n        # Clones the selected individuals.\n        offspring = list(map(toolbox.clone, offspring))\n\n        # Applies crossover and mutation on the offspring.\n        for child1, child2 in zip(offspring[::2], offspring[1::2]):\n            if random.random() < cross_prob:\n                toolbox.mate(child1, child2)\n                del child1.fitness.values\n                del child2.fitness.values\n\n        for mutant in offspring:\n            if random.random() < mut_prob:\n\n                toolbox.mutate(mutant)\n\n                del mutant.fitness.values\n\n        # Evaluates the individuals with an invalid fitness.\n        invalid_ind = [ind for ind in offspring if not ind.fitness.valid]\n        for ind in invalid_ind:\n            net_type = []\n            for i in range(max_layers):\n                net_type.append(layer_types[ind[i * 2]])\n                net_type.append(layer_sizes[ind[i * 2 + 1]])\n\n            net_type.append(end_types[ind[-2]])\n\n            if squash_errors:\n                try:\n                    ind_predictor = predictor(net_type, optimizer = train_types[ind[-1]])\n                    ind_predictor.train(ttrX, ttrY, train_iters)\n\n                    p = ind_predictor.predict(tteX)\n                    m = measure(p, tteY)\n\n                    ind_predictor.close()\n\n                except:\n                    m = fail_val\n\n                if np.isnan(m):\n                    m = fail_val\n\n            else:\n                ind_predictor = predictor(net_type, optimizer = train_types[ind[-1]])\n                ind_predictor.train(ttrX, ttrY, train_iters)\n\n                p = 
ind_predictor.predict(tteX)\n                m = measure(p, tteY)\n\n                ind_predictor.close()\n\n            ind.fitness.values = (m,)\n\n        # The population is entirely replaced by the offspring.\n        pop[:] = offspring\n\n    # Gets the best individual remaining after the final generation.\n    best_ind = tools.selBest(pop, 1)[0]\n\n    net_type = []\n    for i in range(max_layers):\n        net_type.append(layer_types[best_ind[i * 2]])\n        net_type.append(layer_sizes[best_ind[i * 2 + 1]])\n\n    net_type.append(end_types[best_ind[-2]])\n    optimizer = train_types[best_ind[-1]]\n\n    return net_type, optimizer, best_ind.fitness.values\n"
  },
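The fitness measures above all compare one-hot (row-per-sample) arrays via `np.argmax`. A minimal, TensorFlow-free sketch of the same conventions, using toy arrays that are purely illustrative (not from the package):

```python
import numpy as np

# Toy one-hot data (illustrative only): column 0 = negative class, column 1 =
# positive class, matching the index convention of specificity()/sensitivity().
t = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])                   # known labels
p = np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7], [0.6, 0.4]])   # predictions

pred_cls = np.argmax(p, axis=1)   # predicted class index per sample -> [0, 1, 1, 0]
true_cls = np.argmax(t, axis=1)   # known class index per sample     -> [0, 0, 1, 1]

acc = np.mean(pred_cls == true_cls)           # accuracy: correct / total
sens = np.mean(pred_cls[true_cls == 1] == 1)  # sensitivity: true-positive rate
spec = np.mean(pred_cls[true_cls == 0] == 0)  # specificity: true-negative rate

print(acc, sens, spec)  # 0.5 0.5 0.5
```

Each measure is a fraction in [0, 1], which is why `evolve` can use them directly as single-objective DEAP fitness values.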
  {
    "path": "easy_tensorflow/main.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"Program entry point\"\"\"\n\nfrom __future__ import print_function\n\nimport argparse\nimport sys\n\nfrom easy_tensorflow import metadata, tf_functions, evolve_functions\n\n\ndef main(argv):\n    \"\"\"Program entry point.\n\n    :param argv: command-line arguments\n    :type argv: :class:`list`\n    \"\"\"\n    author_strings = []\n    for name, email in zip(metadata.authors, metadata.emails):\n        author_strings.append('Author: {0} <{1}>'.format(name, email))\n\n    epilog = '''\n{project} {version}\n\n{authors}\nURL: <{url}>\n'''.format(\n        project=metadata.project,\n        version=metadata.version,\n        authors='\\n'.join(author_strings),\n        url=metadata.url)\n\n    arg_parser = argparse.ArgumentParser(\n        prog=argv[0],\n        formatter_class=argparse.RawDescriptionHelpFormatter,\n        description=metadata.description,\n        epilog=epilog)\n    arg_parser.add_argument(\n        '-V', '--version',\n        action='version',\n        version='{0} {1}'.format(metadata.project, metadata.version))\n\n    arg_parser.parse_args(args=argv[1:])\n\n    print(epilog)\n\n    return 0\n\n\ndef entry_point():\n    \"\"\"Zero-argument entry point for use with setuptools/distribute.\"\"\"\n    raise SystemExit(main(sys.argv))\n\n\nif __name__ == '__main__':\n    entry_point()\n"
  },
  {
    "path": "easy_tensorflow/metadata.py",
    "content": "# -*- coding: utf-8 -*-\n\"\"\"Project metadata\n\nInformation describing the project.\n\"\"\"\n\n# The package name, which is also the \"UNIX name\" for the project.\npackage = 'easy_tensorflow'\nproject = \"EasyTensorflow\"\nproject_no_spaces = project.replace(' ', '')\nversion = '0.1'\ndescription = 'Provides objects that allow for easy setup, training, and running of neural networks, based on Google\\'s ' \\\n              'tensorflow library. Also allows for the evolution of effective networks using a genetic algorithm derived' \\\n              ' from the DEAP package.'\nauthors = ['Calvin Schmidt']\nauthors_string = ', '.join(authors)\nemails = ['calvins@stanford.edu']\nlicense = 'MIT'\ncopyright = '2015 ' + authors_string\nurl = 'https://github.com/calvinschmdt/EasyTensorflow'\n"
  },
  {
    "path": "easy_tensorflow/tf_dictionaries.py",
    "content": "import tensorflow as tf\n\ntransform_dict = {\n    'relu': (tf.nn.relu, (tf.nn.bias_add, ((tf.matmul, ('X', 'weight1')), 'weight2'))),\n    'softplus': (tf.nn.softplus, (tf.nn.bias_add, ((tf.matmul, ('X', 'weight1')), 'weight2'))),\n    'dropout': (tf.nn.dropout, ('X', 'weight')),\n    'bias_add': (tf.nn.bias_add, ('X', 'weight')),\n    'sigmoid': (tf.nn.sigmoid, (tf.nn.bias_add, ((tf.matmul, ('X', 'weight1')), 'weight2'))),\n    'tanh': (tf.nn.tanh, (tf.nn.bias_add, ((tf.matmul, ('X', 'weight1')), 'weight2'))),\n    'none': (tf.nn.bias_add, ((tf.matmul, ('X', 'weight1')), 'weight2')),\n    'normalize': (tf.nn.l2_normalize, ('X', 1)),\n    'sum': (tf.reduce_sum, ('X', 1)),\n    'prod': (tf.reduce_prod, ('X', 1)),\n    'min': (tf.reduce_min, ('X', 1)),\n    'max': (tf.reduce_max, ('X', 1)),\n    'mean': (tf.reduce_mean, ('X', 1)),\n    'softmax': (tf.nn.softmax, (tf.matmul, ('X', 'weight')))\n}\n\noptimizer_dict = {\n    'GradientDescent': (tf.train.GradientDescentOptimizer, (0.001, )),\n    'Adagrad': (tf.train.AdagradOptimizer, (0.001, )),\n    'Momentum': (tf.train.MomentumOptimizer, (0.001, 0.1)),\n    'Adam': (tf.train.AdamOptimizer, ()),\n    'Ftrl': (tf.train.FtrlOptimizer, (0.001, )),\n    'RMSProp': (tf.train.RMSPropOptimizer, (0.001, 0.9))\n}\n"
  },
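Each `transform_dict` entry is a `(function, arguments)` pair whose arguments may themselves be nested `(function, arguments)` pairs, with the strings `'X'`, `'weight'`, `'weight1'`, and `'weight2'` acting as placeholders that `unpack_transform` in `tf_functions.py` fills in at graph-construction time. The following TensorFlow-free sketch shows how such a nested encoding evaluates; the `evaluate` helper and the scalar stand-ins for `matmul`/`bias_add` are illustrative, not part of the package:

```python
# Illustrative evaluator for the (function, args) tuple encoding used by
# transform_dict. Nested tuples are evaluated depth-first; string placeholders
# like 'X' and 'weight1' are substituted from an environment dict.

def evaluate(node, env):
    """Recursively evaluate a (callable, args) tuple against env."""
    if isinstance(node, tuple) and callable(node[0]):
        fn, args = node
        return fn(*(evaluate(a, env) for a in args))
    if isinstance(node, str):      # placeholder such as 'X' or 'weight2'
        return env[node]
    return node                    # literal argument, e.g. the axis constant 1

# Scalar stand-ins for the TensorFlow ops, so the sketch runs anywhere.
matmul = lambda a, b: a * b
bias_add = lambda a, b: a + b

# Same shape as the 'none' entry: bias_add(matmul(X, weight1), weight2).
layer = (bias_add, ((matmul, ('X', 'weight1')), 'weight2'))

print(evaluate(layer, {'X': 3.0, 'weight1': 2.0, 'weight2': 1.0}))  # 7.0
```

In the real package the callables are TensorFlow ops and the environment values are tensors, but the recursive structure is the same.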
  {
    "path": "easy_tensorflow/tf_functions.py",
    "content": "import tensorflow as tf\nimport numpy as np\nimport random\nimport tf_dictionaries\n\ndef random_index_list(list_size, sample_size):\n    '''\n    Creates a list of random integers that are constrained in the max value. May have repeating indexes.\n    :param list_size: Integer denoting the length of the final list.\n    :param sample_size: Integer denoting the maximum index to include.\n    :return: List of integers.\n    '''\n\n    return [random.randint(0, sample_size - 1) for i in range(0, list_size)]\n\ndef encode_classifications(class_list):\n    '''\n    Given a list of class labels, encodes the list in a manner that is usable in neural networks. For each sample,\n    there will be a list of 0s and one 1 corresponding to the label that sample is encoded. Also returns the list of\n    so that encoded sample lists can be decoded.\n    :param class_list: List of labels, that can be either strings or numbers.\n    :return: classes: List of strings denoting the different labels found in the list.\n        encoded: Numpy array of encoded labels. 
One row for each sample, with the numbers in the row corresponding to\n            the label for that sample.\n    '''\n\n    # Iterates through the list, recording all the samples.\n    classes = []\n    for i in class_list:\n        if i not in classes:\n            classes.append(i)\n\n    # Creates a list of 0s for each sample.\n    encoded = [[0] * len(classes) for i in range(len(class_list))]\n\n    # Iterates through the list to be encoded, marking a 1 in the position for the given label for that sample.\n    for e, i in enumerate(class_list):\n        encoded[e][classes.index(i)] = 1\n\n    return classes, np.array(encoded)\n\ndef decode_classifications(classes, encoded):\n    '''\n    Given an array of labels encoded in array format, returns that array as a list of the original labels.\n    :param classes: List of strings denoting the different labels found in the list.\n    :param encoded: Numpy array of encoded labels. One row for each sample, with the numbers in the row corresponding to\n        the label for that sample.\n    :return: List of strings denoting the label of each sample.\n    '''\n\n    # Takes the argmax of each row to recover the index of that sample's label.\n    return [classes[np.argmax(i)] for i in encoded]\n\ndef unpack_transform(transform, X, weight):\n    '''\n    Turns a tuple of tensorflow functions and their inputs into tensorflow functions ready to use.\n    :param transform: Tuple containing a tensorflow tensor as the first part, and a tuple containing the tensor input in\n        the second part.\n    :param X: Tensor containing the input matrix to be transformed.\n    :param weight: Tensor, integer, or float containing a weight as the second tensor input.\n    :return: Completed tensor, with inputs placed in.\n    '''\n\n    # If the first part of the tensor's input should be X, then applies the X as the incoming tensor.\n    if transform[1][0] == 'X':\n\n        # If a weight tensor or value should be applied to the tensor, then uses the supplied weight. 
If there is not a\n        # weight, uses 1 to direct the axis of tensor transformation.\n        if transform[1][1] == 'weight' or transform[1][1] == 'weight1':\n            w = weight\n        elif transform[1][1] == 1:\n            w = 1\n\n        return transform[0](X, w)\n\n    if transform[1][1] == 'weight2':\n        return transform[0](unpack_transform(transform[1][0], X, weight[0]), weight[1])\n\n    # If the first part of the tensor's input is not X, then unpacks the tensor that should be the input, and uses that.\n    else:\n        return transform[0](unpack_transform(transform[1], X, weight))\n\ndef compile_model(X, weights, models, transform_dict):\n    '''\n    Converts lists of weights and models into a tensorflow neural network that can take in an input tensor and return\n    an output tensor.\n    :param X: Tensor that contains the input values to be transformed through the network.\n    :param weights: List of tensors (or single value in the case of dropouts) that transform the X value through the\n        layers.\n    :param models: List of the different types of layers. 
Should be in the same order as the weights list, and be the\n        desired order of layers.\n    :return: Tensor that has been transformed through the different layers.\n    '''\n\n    # Iterates through each layer.\n    for model, weight in zip(models, weights):\n\n        # Finds the layer type, and transforms through that layer type.\n        transform = transform_dict[model]\n\n        X = unpack_transform(transform, X, weight)\n\n    return X\n\ndef make_model(X, input_size, output_size, net_type, transform_dict):\n    '''\n    Creates the lists of weight tensors and layer types that are used to compile a tensorflow neural network.\n    :param X: Tensor that contains the input values to be transformed through the network.\n    :param input_size: Integer giving the number of features being fed into the network.\n    :param output_size: Integer giving the number of output values desired.\n    :param net_type: List of alternating string values and integer values. Must always start and end with a string\n        value. The strings denote the type of each layer. The integer values denote the end size of each layer, though\n        this is constrained for certain layer types. Sizes of zero drop that layer out.\n    :return: Function that represents the neural network.\n    '''\n\n    weights = []\n    models = []\n    started = False\n    last = input_size\n\n    # Iterates through each layer of the network.\n    for e in range(0, len(net_type) - 1, 2):\n\n        # Creates the weight type for the layer, depending on the layer type. 
Dropouts don't use a tensor, just a float\n        # value.\n        if net_type[e] == 'dropout' or net_type[e] == 'normalize':\n            next = last\n            weight = random.random()\n\n        # Bias add uses a linear layer to transform without changing the size.\n        elif net_type[e] == 'bias_add':\n            next = last\n            weight = tf.Variable(tf.constant(0.0, shape=[next]))\n\n        elif net_type[e] == 'relu' or net_type[e] == 'softplus' or net_type[e] == 'tanh' or \\\n                net_type[e] == 'sigmoid' or net_type[e] == 'none':\n            next = last\n            weight1 = tf.Variable(tf.random_normal([last, next], stddev=0.01))\n            weight2 = tf.Variable(tf.constant(0.0, shape=[next]))\n            weight = (weight1, weight2)\n\n        # Other types use a matrix to transform the size.\n        else:\n            next = net_type[e + 1]\n            weight = tf.Variable(tf.random_normal([last, next], stddev=0.01))\n\n        # Records the layer type and weight used for transforming into that layer for each layer.\n        if net_type[e + 1] > 0:\n\n            # The first layer pulls from the input size, while the rest pull from the last size.\n            if not started:\n                weights.append(weight)\n                models.append(net_type[e])\n                last = next\n                started = True\n            else:\n                weights.append(weight)\n                models.append(net_type[e])\n                last = next\n\n    # Makes sure the last layer can change the size.\n    if net_type[-1] == 'bias_add' or net_type[-1] == 'dropout' or net_type[-1] == 'normalize':\n        final = 'sigmoid'\n    else:\n        final = net_type[-1]\n\n    # Adds in the last layer.\n    if final == 'relu' or final == 'softplus' or final == 'tanh' or \\\n                final == 'sigmoid' or final == 'none':\n            weight1 = tf.Variable(tf.random_normal([last, output_size], stddev=0.01))\n            
weight2 = tf.Variable(tf.random_normal([output_size], stddev=0.01))\n            weight = (weight1, weight2)\n\n    else:\n        weight = tf.Variable(tf.random_normal([last, output_size], stddev=0.01))\n\n    weights.append(weight)\n    models.append(final)\n\n    # Returns the compiled function.\n    return compile_model(X, weights, models, transform_dict)\n\ndef train_tensorflow(sess, trX, trY, train_steps, full_train, train_size, net_type, transform_dict, loss_type,\n                     optimizer, optimizer_dict):\n    '''\n    Automatically constructs and trains a tensorflow neural network, returning the prediction tensor and the input and\n    output placeholders.\n    :param sess: A tensorflow session.\n    :param trX: Numpy array that contains the training features.\n    :param trY: Numpy array that contains the training outputs. Must be 2-dimensional, with at least one column.\n    :param train_steps: Integer value denoting the number of times to iterate through training.\n    :param full_train: Boolean value denoting whether to use the full training set for each iteration.\n    :param train_size: Integer value denoting the number of samples to pull from the training set for each iteration of\n        training.\n    :param net_type: List of alternating string values and integer values. Must always start and end with a string\n        value. The strings denote the type of each layer. The integer values denote the end size of each layer, though\n        this is constrained for certain layer types. Sizes of zero drop that layer out.\n    :param transform_dict: Dictionary of strings to tuples of tensors that encode how to set up the layers of the neural\n        network.\n    :param loss_type: String denoting the type of loss tensor to use. 
Use l2_loss for regression, cross_entropy for\n        classification.\n    :param optimizer: String denoting the type of optimization tensor to use for training the neural network.\n    :param optimizer_dict: Dictionary of strings to tuples of tensors that encode how to set up the optimizers of the\n        neural network.\n    :return: predict_op: Tensor that encodes the neural network.\n        X: Placeholder tensor for the features array.\n        y: Placeholder tensor for the output array.\n    '''\n\n    # Set up input and output tensors.\n    X = tf.placeholder(\"float\", [None, trX.shape[1]])\n    y = tf.placeholder(\"float\", [None, trY.shape[1]])\n\n    # Set up network.\n    py_x = make_model(X, trX.shape[1], trY.shape[1], net_type, transform_dict)\n\n    # Set up cost and training type.\n    if loss_type == 'l2_loss':\n        cost = tf.nn.l2_loss(tf.sub(py_x, y))\n    elif loss_type == 'cross_entropy':\n        cost = -tf.reduce_sum(y * tf.log(py_x))\n    else:\n        raise ValueError('Unknown loss_type: ' + str(loss_type))\n\n    # Gets the optimizer to be used for training and sets it up.\n    if isinstance(optimizer, str):\n        train_op = optimizer_dict[optimizer][0](*optimizer_dict[optimizer][1]).minimize(cost)\n    else:\n        train_op = optimizer[0](*optimizer[1]).minimize(cost)\n\n    predict_op = py_x\n\n    # Initialize the variables in the session.\n    init = tf.initialize_all_variables()\n    sess.run(init)\n\n    # Trains for the given number of steps.\n    try:\n        for i in range(train_steps):\n\n            # If full_train is selected, it trains on the full set of training data, in 100 sample increments.\n            if full_train:\n                for start, end in zip(range(0, len(trX), 100), range(100, len(trX), 100)):\n                    sess.run(train_op, feed_dict={X: trX[start:end], y: trY[start:end]})\n\n            # If full_train is not selected, then trains on a random set of samples from the training data.\n            else:\n                indices = random_index_list(train_size, len(trY))\n                
sess.run(train_op, feed_dict={X: trX[indices], y: trY[indices]})\n\n    # If training throws an error for any reason, prints an error message, closes the session, and returns None\n    # instead of crashing.\n    except Exception as e:\n        print(\"Error during training: \" + str(e))\n        sess.close()\n        return None\n\n    return predict_op, X, y\n\nclass Classifier:\n    '''\n    Object that holds a neural network used for classifying a set of data.\n    '''\n\n    def __init__(self, net_type, loss_type = 'cross_entropy', optimizer = 'Adam'):\n        '''\n        Initializing function. Records the neural network type and, if given, the loss type and optimizer type.\n        :param net_type: List of alternating string values and integer values. Must always start and end with a string\n        value. The strings denote the type of each layer. The integer values denote the end size of each layer, though\n        this is constrained for certain layer types. Sizes of zero drop that layer out.\n        :param loss_type: String denoting type of tensor to use for loss type. Set as cross_entropy by default.\n        :param optimizer: String denoting the type of optimization tensor to use for training the neural network. Set\n            as Adam by default. 
Tuple containing optimization tensor and input tuple can be used in place of a string to\n            set specific parameters for the optimization.\n        '''\n\n        # Sets up initial parameters and starts the tensorflow session.\n        self.net_type = net_type\n        self.loss_type = loss_type\n        self.optimizer = optimizer\n        self.sess = tf.Session()\n        self.transform_dict = tf_dictionaries.transform_dict\n        self.optimizer_dict = tf_dictionaries.optimizer_dict\n\n    def train(self, trX, trY, iterations, full_train = True, train_size = 0):\n        '''\n        Sets up and trains the neural network.\n        :param trX: Numpy array that contains the training features.\n        :param trY: Numpy array that contains the training outputs. Can be in the encoded or unencoded format.\n        :param iterations: Integer denoting the number of iterations to train the model.\n        :param full_train: Boolean value denoting whether to use the full training set for each iteration. Set as True\n            by default.\n        :param train_size: Integer value denoting the number of samples to pull from the training set for each iteration\n            of training. 
Set as 0 by default.\n        :return: Does not return anything, but stores the input, output, and transformation tensors for predicting.\n        '''\n\n        # If the labels are not encoded in matrix format, does that and stores the list of classes.\n        if trY.shape[1] == 1:\n            self.class_list, trY = encode_classifications(trY)\n\n        # Sets up and trains tensorflow.\n        self.predict_op, self.X, self.y = train_tensorflow(self.sess, trX, trY, iterations,\n                                                           full_train, train_size,\n                                                           self.net_type, self.transform_dict,\n                                                           self.loss_type,\n                                                           self.optimizer, self.optimizer_dict)\n\n    def predict(self, teX, return_encoded = True):\n        '''\n        Uses the trained neural network to classify the samples based on their features.\n        :param teX: Numpy array of features to be used for the classification.\n        :param return_encoded: Boolean value denoting whether to decode the classifications if needed. 
Must have\n            generated class list by encoding during the training.\n        :return: Either encoded or decoded classifications for the input samples as a numpy array or list.\n        '''\n\n        # Use the neural network for predicting the classes.\n        p = self.sess.run(self.predict_op, feed_dict={self.X: teX})\n\n        # Decodes if desired.\n        if not return_encoded:\n            p = np.argmax(p, axis=1)\n            return decode_classifications(p, self.class_list)\n\n        return p\n\n    def close(self):\n        '''\n        Closes the session.\n        :return: Nothing, but closes the session.\n        '''\n\n        self.sess.close()\n\nclass Regresser:\n    '''\n    Object that holds a neural network used for predicting a set of data's numerical outputs.\n    '''\n\n    def __init__(self, net_type, loss_type = 'l2_loss', optimizer = 'Adam'):\n        '''\n        Initializing function. Records the neural network type and, if given, the loss type and optimizer type.\n        :param net_type: List of alternating string values and integer values. Must always start and end with a string\n        values. The strings denote the type of each layer. The integer values denote the end size of each layer, though\n        this is constrained for certain layer types. Sizes of zero drop that layer out.\n        :param loss_type: String denoting type of tensor to use for loss type. Set as l2_loss by default.\n        :param optimizer: String denoting the type of optimization tensor to use for training the neural network. Set\n            as Adam by default. 
A tuple containing the optimization tensor and an input tuple can be used in place of a string to\n            set specific parameters for the optimization.\n        '''\n\n        # Sets up initial parameters and starts the tensorflow session.\n        self.net_type = net_type\n        self.loss_type = loss_type\n        self.optimizer = optimizer\n        self.sess = tf.Session()\n        self.transform_dict = tf_dictionaries.transform_dict\n        self.optimizer_dict = tf_dictionaries.optimizer_dict\n\n    def train(self, trX, trY, iterations, full_train = True, train_size = 0):\n        '''\n        Sets up and trains the neural network.\n        :param trX: Numpy array that contains the training features.\n        :param trY: Numpy array that contains the training outputs. If in vector format, it is reshaped into a\n            2-dimensional array.\n        :param iterations: Integer denoting the number of iterations to train the model.\n        :param full_train: Boolean value denoting whether to use the full training set for each iteration. Set as True\n            by default.\n        :param train_size: Integer value denoting the number of samples to pull from the training set for each iteration\n            of training. 
Set as 0 by default.\n        :return: Does not return anything, but stores the input, output, and transformation tensors for predicting.\n        '''\n\n        # If the training outputs are in vector format, reshapes them into a single-column array.\n        if len(trY.shape) == 1:\n            trY = trY.reshape(len(trY), 1)\n\n        # Sets up and trains tensorflow.\n        self.predict_op, self.X, self.y = train_tensorflow(self.sess, trX, trY, iterations,\n                                                           full_train, train_size,\n                                                           self.net_type, self.transform_dict,\n                                                           self.loss_type,\n                                                           self.optimizer, self.optimizer_dict)\n\n    def predict(self, teX):\n        '''\n        Uses the trained neural network to predict numerical outputs for the samples based on their features.\n        :param teX: Numpy array of features to be used for the prediction.\n        :return: Numpy vector of predicted outputs.\n        '''\n\n        # Uses the neural network to predict the outputs.\n        p = self.sess.run(self.predict_op, feed_dict={self.X: teX})\n\n        # Reshapes an array of columns into a vector by taking the first column.\n        if len(p.shape) > 1:\n            p = p[:, 0]\n\n        return np.array(p)\n\n    def close(self):\n        '''\n        Closes the session.\n        :return: Nothing, but closes the session.\n        '''\n\n        self.sess.close()\n"
  },
  {
    "path": "pavement.py",
    "content": "# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\n\nimport os\nimport sys\nimport time\nimport subprocess\n\n# Import parameters from the setup file.\nsys.path.append('.')\nfrom setup import (\n    setup_dict, get_project_files, print_success_message,\n    print_failure_message, _lint, _test, _test_all,\n    CODE_DIRECTORY, DOCS_DIRECTORY, TESTS_DIRECTORY, PYTEST_FLAGS)\n\nfrom paver.easy import options, task, needs, consume_args\nfrom paver.setuputils import install_distutils_tasks\n\noptions(setup=setup_dict)\n\ninstall_distutils_tasks()\n\n## Miscellaneous helper functions\n\n\ndef print_passed():\n    # generated on http://patorjk.com/software/taag/#p=display&f=Small&t=PASSED\n    print_success_message(r'''  ___  _   ___ ___ ___ ___\n | _ \\/_\\ / __/ __| __|   \\\n |  _/ _ \\\\__ \\__ \\ _|| |) |\n |_|/_/ \\_\\___/___/___|___/\n''')\n\n\ndef print_failed():\n    # generated on http://patorjk.com/software/taag/#p=display&f=Small&t=FAILED\n    print_failure_message(r'''  ___ _   ___ _    ___ ___\n | __/_\\ |_ _| |  | __|   \\\n | _/ _ \\ | || |__| _|| |) |\n |_/_/ \\_\\___|____|___|___/\n''')\n\n\nclass cwd(object):\n    \"\"\"Class used for temporarily changing directories. 
Can be thought of\n    as a `pushd /my/dir' then a `popd' at the end.\n    \"\"\"\n    def __init__(self, newcwd):\n        \"\"\":param newcwd: directory to make the cwd\n        :type newcwd: :class:`str`\n        \"\"\"\n        self.newcwd = newcwd\n\n    def __enter__(self):\n        self.oldcwd = os.getcwd()\n        os.chdir(self.newcwd)\n        return os.getcwd()\n\n    def __exit__(self, type_, value, traceback):\n        # This acts like a `finally' clause: it will always be executed.\n        os.chdir(self.oldcwd)\n\n\n## Task-related functions\n\ndef _doc_make(*make_args):\n    \"\"\"Run make in sphinx' docs directory.\n\n    :return: exit code\n    \"\"\"\n    if sys.platform == 'win32':\n        # Windows\n        make_cmd = ['make.bat']\n    else:\n        # Linux, Mac OS X, and others\n        make_cmd = ['make']\n    make_cmd.extend(make_args)\n\n    # Account for a stupid Python \"bug\" on Windows:\n    # <http://bugs.python.org/issue15533>\n    with cwd(DOCS_DIRECTORY):\n        retcode = subprocess.call(make_cmd)\n    return retcode\n\n\n## Tasks\n\n@task\n@needs('doc_html', 'setuptools.command.sdist')\ndef sdist():\n    \"\"\"Build the HTML docs and the tarball.\"\"\"\n    pass\n\n\n@task\ndef test():\n    \"\"\"Run the unit tests.\"\"\"\n    raise SystemExit(_test())\n\n\n@task\ndef lint():\n    # This refuses to format properly when running `paver help' unless\n    # this ugliness is used.\n    ('Perform PEP8 style check, run PyFlakes, and run McCabe complexity '\n     'metrics on the code.')\n    raise SystemExit(_lint())\n\n\n@task\ndef test_all():\n    \"\"\"Perform a style check and run all unit tests.\"\"\"\n    retcode = _test_all()\n    if retcode == 0:\n        print_passed()\n    else:\n        print_failed()\n    raise SystemExit(retcode)\n\n\n@task\n@consume_args\ndef run(args):\n    \"\"\"Run the package's main script. 
All arguments are passed to it.\"\"\"\n    # The main script expects to get the called executable's name as\n    # argv[0]. However, paver doesn't provide that in args. Even if it did (or\n    # we dove into sys.argv), it wouldn't be useful because it would be paver's\n    # executable. So we just pass the package name in as the executable name,\n    # since it's close enough. This should never be seen by an end user\n    # installing through Setuptools anyway.\n    from easy_tensorflow.main import main\n    raise SystemExit(main([CODE_DIRECTORY] + args))\n\n\n@task\ndef commit():\n    \"\"\"Commit only if all the tests pass.\"\"\"\n    if _test_all() == 0:\n        subprocess.check_call(['git', 'commit'])\n    else:\n        print_failure_message('\\nTests failed, not committing.')\n\n\n@task\ndef coverage():\n    \"\"\"Run tests and show test coverage report.\"\"\"\n    try:\n        import pytest_cov  # NOQA\n    except ImportError:\n        print_failure_message(\n            'Install the pytest coverage plugin to use this task, '\n            \"i.e., `pip install pytest-cov'.\")\n        raise SystemExit(1)\n    import pytest\n    pytest.main(PYTEST_FLAGS + [\n        '--cov', CODE_DIRECTORY,\n        '--cov-report', 'term-missing',\n        TESTS_DIRECTORY])\n\n\n@task  # NOQA\ndef doc_watch():\n    \"\"\"Watch for changes in the docs and rebuild HTML docs when changed.\"\"\"\n    try:\n        from watchdog.events import FileSystemEventHandler\n        from watchdog.observers import Observer\n    except ImportError:\n        print_failure_message('Install the watchdog package to use this task, '\n                              \"i.e., `pip install watchdog'.\")\n        raise SystemExit(1)\n\n    class RebuildDocsEventHandler(FileSystemEventHandler):\n        def __init__(self, base_paths):\n            self.base_paths = base_paths\n\n        def dispatch(self, event):\n            \"\"\"Dispatches events to the appropriate methods.\n            :param event: 
The event object representing the file system event.\n            :type event: :class:`watchdog.events.FileSystemEvent`\n            \"\"\"\n            for base_path in self.base_paths:\n                if event.src_path.endswith(base_path):\n                    super(RebuildDocsEventHandler, self).dispatch(event)\n                    # We found one that matches. We're done.\n                    return\n\n        def on_modified(self, event):\n            print_failure_message('Modification detected. Rebuilding docs.')\n            # # Strip off the path prefix.\n            # import os\n            # if event.src_path[len(os.getcwd()) + 1:].startswith(\n            #         CODE_DIRECTORY):\n            #     # sphinx-build doesn't always pick up changes on code files,\n            #     # even though they are used to generate the documentation. As\n            #     # a workaround, just clean before building.\n            doc_html()\n            print_success_message('Docs have been rebuilt.')\n\n    print_success_message(\n        'Watching for changes in project files, press Ctrl-C to cancel...')\n    handler = RebuildDocsEventHandler(get_project_files())\n    observer = Observer()\n    observer.schedule(handler, path='.', recursive=True)\n    observer.start()\n    try:\n        while True:\n            time.sleep(1)\n    except KeyboardInterrupt:\n        observer.stop()\n        observer.join()\n\n\n@task\n@needs('doc_html')\ndef doc_open():\n    \"\"\"Build the HTML docs and open them in a web browser.\"\"\"\n    doc_index = os.path.join(DOCS_DIRECTORY, 'build', 'html', 'index.html')\n    if sys.platform == 'darwin':\n        # Mac OS X\n        subprocess.check_call(['open', doc_index])\n    elif sys.platform == 'win32':\n        # Windows\n        subprocess.check_call(['start', doc_index], shell=True)\n    elif sys.platform == 'linux2':\n        # All freedesktop-compatible desktops\n        subprocess.check_call(['xdg-open', doc_index])\n    else:\n    
    print_failure_message(\n            \"Unsupported platform. Please open `{0}' manually.\".format(\n                doc_index))\n\n\n@task\ndef get_tasks():\n    \"\"\"Get all paver-defined tasks.\"\"\"\n    from paver.tasks import environment\n    for task in environment.get_tasks():\n        print(task.shortname)\n\n\n@task\ndef doc_html():\n    \"\"\"Build the HTML docs.\"\"\"\n    retcode = _doc_make('html')\n\n    if retcode:\n        raise SystemExit(retcode)\n\n\n@task\ndef doc_clean():\n    \"\"\"Clean (delete) the built docs.\"\"\"\n    retcode = _doc_make('clean')\n\n    if retcode:\n        raise SystemExit(retcode)\n"
  },
  {
    "path": "requirements-dev.txt",
    "content": "# Runtime requirements\n--requirement requirements.txt\n\n# Testing\npytest==2.5.1\npy==1.4.19\nmock==1.0.1\n\n# Linting\nflake8==2.1.0\nmccabe==0.2.1\npep8==1.4.6\npyflakes==0.7.3\n\n# Documentation\nSphinx==1.2\ndocutils==0.11\nJinja2==2.7.1\nMarkupSafe==0.18\nPygments==1.6\n\n# Miscellaneous\nPaver==1.2.1\ncolorama==0.2.7\n\n# Function\ndeap==1.0\ntensorflow\n"
  },
  {
    "path": "requirements.txt",
    "content": "# Python 2.6 compatibility\n# argparse==1.2.1\n\n# Function\ndeap==1.0\ntensorflow\n"
  },
  {
    "path": "setup.py",
"content": "# -*- coding: utf-8 -*-\nfrom __future__ import print_function\n\nimport os\nimport sys\nimport imp\nimport subprocess\n\n## Python 2.6 subprocess.check_output compatibility. Thanks Greg Hewgill!\nif 'check_output' not in dir(subprocess):\n    def check_output(cmd_args, *args, **kwargs):\n        proc = subprocess.Popen(\n            cmd_args, *args,\n            stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)\n        out, err = proc.communicate()\n        if proc.returncode != 0:\n            raise subprocess.CalledProcessError(proc.returncode, cmd_args)\n        return out\n    subprocess.check_output = check_output\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\nfrom distutils import spawn\n\ntry:\n    import colorama\n    colorama.init()  # Initialize colorama on Windows\nexcept ImportError:\n    # Don't require colorama just for running paver tasks. This allows us to\n    # run `paver install' without requiring the user to first have colorama\n    # installed.\n    pass\n\n# Add the current directory to the module search path.\nsys.path.insert(0, os.path.abspath('.'))\n\n## Constants\nCODE_DIRECTORY = 'easy_tensorflow'\nDOCS_DIRECTORY = 'docs'\nTESTS_DIRECTORY = 'tests'\nPYTEST_FLAGS = ['--doctest-modules']\n\n# Import metadata. Normally this would just be:\n#\n#     from easy_tensorflow import metadata\n#\n# However, when we do this, we also import `easy_tensorflow/__init__.py'. If this\n# imports names from some other modules and these modules have third-party\n# dependencies that need installing (which happens after this file is run), the\n# script will crash. What we do instead is to load the metadata module by path\n# instead, effectively side-stepping the dependency problem. 
Please make sure\n# metadata has no dependencies, otherwise they will need to be added to\n# the setup_requires keyword.\nmetadata = imp.load_source(\n    'metadata', os.path.join(CODE_DIRECTORY, 'metadata.py'))\n\n\n## Miscellaneous helper functions\n\ndef get_project_files():\n    \"\"\"Retrieve a list of project files, ignoring hidden files.\n\n    :return: sorted list of project files\n    :rtype: :class:`list`\n    \"\"\"\n    if is_git_project() and has_git():\n        return get_git_project_files()\n\n    project_files = []\n    for top, subdirs, files in os.walk('.'):\n        for subdir in subdirs:\n            if subdir.startswith('.'):\n                subdirs.remove(subdir)\n\n        for f in files:\n            if f.startswith('.'):\n                continue\n            project_files.append(os.path.join(top, f))\n\n    return project_files\n\n\ndef is_git_project():\n    return os.path.isdir('.git')\n\n\ndef has_git():\n    return bool(spawn.find_executable(\"git\"))\n\n\ndef get_git_project_files():\n    \"\"\"Retrieve a list of all non-ignored files, including untracked files,\n    excluding deleted files.\n\n    :return: sorted list of git project files\n    :rtype: :class:`list`\n    \"\"\"\n    cached_and_untracked_files = git_ls_files(\n        '--cached',  # All files cached in the index\n        '--others',  # Untracked files\n        # Exclude untracked files that would be excluded by .gitignore, etc.\n        '--exclude-standard')\n    uncommitted_deleted_files = git_ls_files('--deleted')\n\n    # Since sorting of files in a set is arbitrary, return a sorted list to\n    # provide a well-defined order to tools like flake8, etc.\n    return sorted(cached_and_untracked_files - uncommitted_deleted_files)\n\n\ndef git_ls_files(*cmd_args):\n    \"\"\"Run ``git ls-files`` in the top-level project directory. 
Arguments go\n    directly to execution call.\n\n    :return: set of file names\n    :rtype: :class:`set`\n    \"\"\"\n    cmd = ['git', 'ls-files']\n    cmd.extend(cmd_args)\n    return set(subprocess.check_output(cmd).splitlines())\n\n\ndef print_success_message(message):\n    \"\"\"Print a message indicating success in green color to STDOUT.\n\n    :param message: the message to print\n    :type message: :class:`str`\n    \"\"\"\n    try:\n        import colorama\n        print(colorama.Fore.GREEN + message + colorama.Fore.RESET)\n    except ImportError:\n        print(message)\n\n\ndef print_failure_message(message):\n    \"\"\"Print a message indicating failure in red color to STDERR.\n\n    :param message: the message to print\n    :type message: :class:`str`\n    \"\"\"\n    try:\n        import colorama\n        print(colorama.Fore.RED + message + colorama.Fore.RESET,\n              file=sys.stderr)\n    except ImportError:\n        print(message, file=sys.stderr)\n\n\ndef read(filename):\n    \"\"\"Return the contents of a file.\n\n    :param filename: file path\n    :type filename: :class:`str`\n    :return: the file's content\n    :rtype: :class:`str`\n    \"\"\"\n    with open(os.path.join(os.path.dirname(__file__), filename)) as f:\n        return f.read()\n\n\ndef _lint():\n    \"\"\"Run lint and return an exit code.\"\"\"\n    # Flake8 doesn't have an easy way to run checks using a Python function, so\n    # just fork off another process to do it.\n\n    # Python 3 compat:\n    # - The result of subprocess call outputs are byte strings, meaning we need\n    #   to pass a byte string to endswith.\n    project_python_files = [filename for filename in get_project_files()\n                            if filename.endswith(b'.py')]\n    retcode = subprocess.call(\n        ['flake8', '--max-complexity=10'] + project_python_files)\n    if retcode == 0:\n        print_success_message('No style errors')\n    return retcode\n\n\ndef _test():\n    \"\"\"Run the 
unit tests.\n\n    :return: exit code\n    \"\"\"\n    # Make sure to import pytest in this function. For the reason, see here:\n    # <http://pytest.org/latest/goodpractises.html#integration-with-setuptools-test-commands>  # NOPEP8\n    import pytest\n    # This runs the unit tests.\n    # It also runs doctest, but only on the modules in TESTS_DIRECTORY.\n    return pytest.main(PYTEST_FLAGS + [TESTS_DIRECTORY])\n\n\ndef _test_all():\n    \"\"\"Run lint and tests.\n\n    :return: exit code\n    \"\"\"\n    return _lint() + _test()\n\n\n# The following code is to allow tests to be run with `python setup.py test'.\n# The main reason to make this possible is to allow tests to be run as part of\n# Setuptools' automatic run of 2to3 on the source code. The recommended way to\n# run tests is still `paver test_all'.\n# See <http://pythonhosted.org/setuptools/python3.html>\n# Code based on <http://pytest.org/latest/goodpractises.html#integration-with-setuptools-test-commands>  # NOPEP8\nclass TestAllCommand(TestCommand):\n    def finalize_options(self):\n        TestCommand.finalize_options(self)\n        # These are fake, and just set to appease distutils and setuptools.\n        self.test_suite = True\n        self.test_args = []\n\n    def run_tests(self):\n        raise SystemExit(_test_all())\n\n\n# define install_requires for specific Python versions\npython_version_specific_requires = []\n\n# as of Python >= 2.7 and >= 3.2, the argparse module is maintained within\n# the Python standard library, otherwise we install it as a separate package\nif sys.version_info < (2, 7) or (3, 0) <= sys.version_info < (3, 3):\n    python_version_specific_requires.append('argparse')\n\n\n# See here for more options:\n# <http://pythonhosted.org/setuptools/setuptools.html>\nsetup_dict = dict(\n    name=metadata.package,\n    version=metadata.version,\n    author=metadata.authors[0],\n    author_email=metadata.emails[0],\n    maintainer=metadata.authors[0],\n    
maintainer_email=metadata.emails[0],\n    url=metadata.url,\n    description=metadata.description,\n    long_description=read('README.rst'),\n    # Find a list of classifiers here:\n    # <http://pypi.python.org/pypi?%3Aaction=list_classifiers>\n    classifiers=[\n        'Development Status :: 1 - Planning',\n        'Environment :: Console',\n        'Intended Audience :: Developers',\n        'License :: OSI Approved :: MIT License',\n        'Natural Language :: English',\n        'Operating System :: OS Independent',\n        'Programming Language :: Python :: 2.6',\n        'Programming Language :: Python :: 2.7',\n        'Programming Language :: Python :: 3.3',\n        'Programming Language :: Python :: Implementation :: PyPy',\n        'Topic :: Documentation',\n        'Topic :: Software Development :: Libraries :: Python Modules',\n        'Topic :: System :: Installation/Setup',\n        'Topic :: System :: Software Distribution',\n    ],\n    packages=find_packages(exclude=(TESTS_DIRECTORY,)),\n    install_requires=[\n        # your module dependencies\n    ] + python_version_specific_requires,\n    # Allow tests to be run with `python setup.py test'.\n    tests_require=[\n        'pytest==2.5.1',\n        'mock==1.0.1',\n        'flake8==2.1.0',\n    ],\n    cmdclass={'test': TestAllCommand},\n    zip_safe=False,  # don't use eggs\n    entry_points={\n        'console_scripts': [\n            'easy_tensorflow_cli = easy_tensorflow.main:entry_point'\n        ],\n        # if you have a gui, use this\n        # 'gui_scripts': [\n        #     'easy_tensorflow_gui = easy_tensorflow.gui:entry_point'\n        # ]\n    }\n)\n\n\ndef main():\n    setup(**setup_dict)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "tests/test_main.py",
    "content": "# -*- coding: utf-8 -*-\nfrom pytest import raises\n\n# The parametrize function is generated, so this doesn't work:\n#\n#     from pytest.mark import parametrize\n#\nimport pytest\nparametrize = pytest.mark.parametrize\n\nfrom easy_tensorflow import metadata\nfrom easy_tensorflow.main import main\n\n\nclass TestMain(object):\n    @parametrize('helparg', ['-h', '--help'])\n    def test_help(self, helparg, capsys):\n        with raises(SystemExit) as exc_info:\n            main(['progname', helparg])\n        out, err = capsys.readouterr()\n        # Should have printed some sort of usage message. We don't\n        # need to explicitly test the content of the message.\n        assert 'usage' in out\n        # Should have used the program name from the argument\n        # vector.\n        assert 'progname' in out\n        # Should exit with zero return code.\n        assert exc_info.value.code == 0\n\n    @parametrize('versionarg', ['-V', '--version'])\n    def test_version(self, versionarg, capsys):\n        with raises(SystemExit) as exc_info:\n            main(['progname', versionarg])\n        out, err = capsys.readouterr()\n        # Should print out version.\n        assert err == '{0} {1}\\n'.format(metadata.project, metadata.version)\n        # Should exit with zero return code.\n        assert exc_info.value.code == 0\n"
  },
  {
    "path": "tox.ini",
    "content": "# Tox (http://tox.testrun.org/) is a tool for running tests in\n# multiple virtualenvs. This configuration file will run the test\n# suite on all supported python versions. To use it, \"pip install tox\"\n# and then run \"tox\" from this directory.\n#\n# To run tox faster, check out Detox\n# (https://pypi.python.org/pypi/detox), which runs your tox runs in\n# parallel. To use it, \"pip install detox\" and then run \"detox\" from\n# this directory.\n\n[tox]\nenvlist = py26,py27,py33,pypy,docs\n\n[testenv]\ndeps =\n     --no-deps\n     --requirement\n     {toxinidir}/requirements-dev.txt\ncommands = paver test_all\n\n[testenv:docs]\nbasepython = python\ncommands = paver doc_html\n"
  }
]