[
  {
    "path": ".gitignore",
    "content": "**/env\n**/*data*\n**/results/*\ngist-medium/*\n.DS_Store"
  },
  {
    "path": "README.md",
    "content": "# blog\nTechnical blog repo of metaflow\n"
  },
  {
    "path": "notes_loglikelihood/README.md",
    "content": "# ML notes: Why the log-likelihood?\n\n## TL;DR\nHere is my first machine-learning note, on the reasons behind the use of the log-likelihood. I work through the equations from the definition of the likelihood to the log-likelihood, making explicit every hidden assumption between each line of the derivation.\nBonus: I add some remarks about SGD and MAP.\n\n\n## Latex\n```\n\\begin{align}\n\\underset{\\theta}{\\text{ argmax }} L(\\theta, m | \\mathcal{D}) &= \\underset{\\theta}{\\text{ argmax }} p(\\mathcal{D}|\\theta, m) \\\\\n&= \\underset{\\theta}{\\text{ argmax }} p_{D_1, \\ldots, D_n}(d_1, \\ldots, d_n | \\theta,m) \\\\ \n&\\stackrel{\\text{independence}}{=} \\underset{\\theta}{\\text{ argmax }}  \\prod_i p_{D_i}(d_i | \\theta, m) \\\\\n&\\stackrel{\\text{identically distributed}}{=} \\underset{\\theta}{\\text{ argmax }} \\prod_i p(d_i | \\theta, m) \\\\\n&\\stackrel{\\log \\text{ is increasing}}{=} \\underset{\\theta}{\\text{ argmax }} \\sum_i \\log(p(d_i | \\theta, m)) \\\\\n&= \\underset{\\theta}{\\text{ argmax }} \\frac{1}{n} \\sum_i \\log(p(d_i | \\theta, m)) \\\\\n&= \\underset{\\theta}{\\text{ argmax }} E_{d \\sim U_{\\mathcal{D}}}[\\log(p(d|\\theta,m))]\n\\end{align}\n```\n\n```\nP(\\mathcal{D}|\\theta,m) = P(\\mathcal{D}|\\theta,\\eta_1, \\ldots, \\eta_K, f_m)  \n```"
  },
  {
    "path": "notes_lse/README.md",
    "content": "# ML notes: why the Least Square Error?\n\n## TL;DR\nA second ML note exploring how the Least Square Error emerges from supervised learning and the additive white Gaussian noise model:\n- I start from the end of my last note and explore the impact of supervised learning on the math\n- I explore the impact of the additive white Gaussian noise model\n- Bonus: I run a simple experiment (using TF) on a noisy regression example showing that if you respect the assumptions exactly, you indeed retrieve the original function on a given interval\n\n\n## Latex\n```\n\\begin{equation*}\n\\underset{\\theta}{\\text{ argmax }} L(\\theta, m | \\mathcal{D}) \\stackrel{\\text{i.i.d.}}{=} \\underset{\\theta}{\\text{ argmax }} \\sum_{d^{(i)} \\in \\mathcal{D}} \\log(p(d^{(i)} | \\theta, m)) \\tag{1}\n\\end{equation*}\nWith:\n\\begin{itemize}\n  \\item $ \\theta $ the model parameters (a random variable)\n  \\item $ m $ the model hypothesis (a random variable)\n\\end{itemize}\n\n```\n\n```\n\\begin{align}\np(d^{(i)} | \\theta, m) &= p(x^{(i)}, y^{(i)} | \\theta, m) \\tag{2} \\\\\n&= p(y^{(i)} | x^{(i)}, \\theta, m) \\times p(x^{(i)} | \\theta, m) \\tag{3} \\\\\n&= p(y^{(i)} | x^{(i)}, \\theta, m) \\times p(x^{(i)}) \\tag{4}\n\\end{align}\n```\n\n```\n\\underset{\\theta}{\\text{ argmax }} \\sum_i \\log(p(d^{(i)} | \\theta, m)) = \\underset{\\theta}{\\text{ argmax }} \\sum_i \\log(p(y^{(i)} | x^{(i)}, \\theta, m)) \\tag{5}\n```\n\n```\n\\forall i, y^{(i)}= h(x^{(i)}) + z^{(i)} \\text{ In terms of random variables: } Y = h(X) + Z\n```\n\n```\n\\begin{align}\n&\\forall i, z^{(i)} \\sim \\mathcal{N}(0, \\sigma^2) \\Rightarrow p(z^{(i)}) = \\frac{1}{\\sqrt{2\\pi}\\sigma}e^{-\\frac{(z^{(i)})^2}{2\\sigma^2}} \\tag{6} \\\\\n&\\forall i \\neq j \\text{, } p(z^{(i)} | z^{(j)}) = p(z^{(i)}) \\text{ and } \\forall i \\text{, } p(z^{(i)} | x^{(i)}) = p(z^{(i)}) \\tag{7}\n\\end{align}\n```\n\n```\n\\forall i \\text{, } y^{(i)} \\approx h_{\\theta, m}(x^{(i)}) + z^{(i)}\n```\n\n```\nGiven that: $ Y = h_{\\theta, m}(X) + Z $ and $ d = \\{x, y\\} $ we have:\n\\begin{align}\np(Y=y | X=x, \\theta, m) &= p(Z= y - h_{\\theta, m}(X) | X=x, \\theta, m) \\tag{8} \\\\\n &= p(Z= y - h_{\\theta, m}(x)) \\tag{9}\n\\end{align}\n\\begin{align}\n\\Leftrightarrow &p(Y=y | x, \\theta, m) =  \\frac{1}{\\sqrt{2\\pi}\\sigma} e^{- \\frac{(y - h_{\\theta, m}(x))^2}{2\\sigma^2}} \\tag{10} \\\\\n\\Leftrightarrow &\\log(p(Y=y | x, \\theta, m)) =  \\log\\frac{1}{\\sqrt{2\\pi}\\sigma} - \\frac{1}{\\sigma^2} \\frac{1}{2} (y - h_{\\theta, m}(x))^2 \\tag{11} \\\\\n\\Rightarrow &\\underset{\\theta}{\\text{ argmax }} \\sum_i \\log(p(d^{(i)} | \\theta, m)) = \\underset{\\theta}{\\text{ argmin }} \\frac{1}{2}\\sum_i(y^{(i)} - h_{\\theta, m}(x^{(i)}))^2 \\tag{12}\n\\end{align}\n```\n\n```\np(y^{(i)}|x^{(i)}, \\theta, m) \\neq p(y^{(i)}|x^{(i)}; \\theta, m)\n```\n"
  },
  {
    "path": "requirements.txt",
    "content": "appdirs==1.4.0\nbleach==1.5.0\ncycler==0.10.0\nhtml5lib==0.9999999\nMarkdown==2.6.9\nmatplotlib==2.0.0\nnumpy==1.13.1\npackaging==16.8\nprotobuf==3.4.0\npyparsing==2.1.10\npython-dateutil==2.6.0\npytz==2016.10\nsix==1.10.0\ntensorflow==1.3.0\ntensorflow-tensorboard==0.1.5\nWerkzeug==0.12.2\n"
  },
  {
    "path": "sparse-coding/Readme.md",
    "content": "# Sparse coding: A simple exploration\n\n*Disclaimer:* I don't hold a Ph.D. in machine learning, yet I'm deeply passionate about the field and try to learn as much as I can about it.\n\nWhile learning a new subject, I have found it a very valuable exercise to write a comprehensive guide about what I'm studying. So I'm now sharing my progress, looking for useful feedback while helping others reach understanding faster. \n\nI've spent the last few days understanding what sparse coding is. Here are my results!\n\nWe won't dive deep into theoretical math; this article explores the notion of sparse coding and its impact on neural networks. I prefer to experiment with a notion and implement some concrete hypotheses rather than just playing with the math: I believe that in ML, concrete experience is often more relevant than pure theory.\n\n**Please, if you find any errors, contact me: I'll be glad to hear from you :)**\n\n**The reader should already have some understanding of what a neural network is before reading this.**\n\n## What is Sparse coding?\n\n### The mathematical standpoint\nSparse coding is the study of algorithms which aim to learn a useful **sparse representation** of any given data. Each datum will then be encoded as a *sparse code*:\n- The algorithm only needs input data to learn the sparse representation. 
This is very useful since you can apply it directly to any kind of data; this is called unsupervised learning.\n- It will automatically find the representation without losing any information (as if one could automatically reveal the intrinsic atoms of one's data).\n\nTo do so, sparse coding algorithms try to satisfy two constraints at the same time:\n- For a given datum as a vector **x**, it will try to learn a \"useful\" sparse representation as a vector **h** \n- For each representation as a vector **h**, it will try to learn a basis **D** to reconstruct the original datum as a vector **x**.\n\nThe mathematical representation of the general objective function for this problem speaks for itself:\n![General equation of the sparse coding algorithm](sparse-coding-equation.jpg \"Sparse coding equation\")\nwhere:\n- **N** is the number of data points\n- **x_k** is the **k**-th vector of the data\n- **h_k** is the sparse representation of **x_k**\n- **D** is the decoder matrix (more on that later)\n- **lambda** is the coefficient of sparsity\n- **C** is a given constant\n\nThe general form looks like we are nesting two optimization problems (the double \"min\" expression). I prefer to see this as two different optimization problems competing against each other to find the best possible middle ground:\n- The first \"min\" acts only on the left side of the sum, trying to minimize the **reconstruction loss** of the input by tweaking **D**.\n- The second \"min\" tries to promote sparsity by minimizing the L1-norm of the sparse representation **h**.\nPut simply, we are just trying to solve a problem (in this case, reconstruction) while using the least possible amount of resources to store our data.\n\n**Note on the constraint on D rows:**\n\nIf you don't apply any constraint on the rows of D, you can actually make **h** as small as you want while making **D** big enough to compensate. 
We want to avoid this behaviour, as we are looking for vectors containing actual zeros, with as few (and large) non-zero values as possible.\n\nThe value of C doesn't really matter; in our case, we will choose **C = 1**.\n\n\n#### The vector space interpretation:\nIf you try to find a vector space to plot a representation of your data, you need to find a basis of vectors.\n\nGiven a number of dimensions, **sparse coding** tries to learn an **over-complete basis** to represent data efficiently. To do so, you must have provided enough dimensions to learn this **over-complete basis** in the first place. \n\nIn real life, you just provide at least as many dimensions as the number of dimensions the original data are encoded in. For example, for images of size 28x28 with only 1 channel (gray scale), the space containing our input data has 28x28x1=784 dimensions.\n\n**Why would you need it to be over-complete?**\n\nAn over-complete basis means redundancy in your basis, and vectors (while training) will be able to \"compete\" to represent data more efficiently. You are also assured that you won't need all dimensions to represent any data point: in an over-complete basis, you can always set some dimensions to 0.  \n\nFeatures will have multiple representations in the obtained basis, but by adding the sparsity constraint you can enforce a unique representation, as each feature will get encoded with the sparsest representation.\n\n### The biological standpoint\nFrom the biological standpoint, the notion of sparse code (and sparse coding) comes after the more general notion of neural code (and neural coding).\n\n*Wikipedia says:*\n```\nNeural coding is concerned with characterizing the relationship between the stimulus and the individual or ensemble neuronal responses and the relationship among the electrical activity of the neurons in the ensemble.\n```\nLet's say you have N binary (for simplicity) neurons (which can be biological or virtual). 
Basically:\n- You will feed some inputs to your neural network, which will give you an output in return. \n- To compute this output, the neural network will strongly activate some neurons while some others won't activate at all. \n- The observation of those activations for a given input is what we call the neural code, and the quantity of activations on some data is called the **average activity ratio**.\n- Training your neurons to reach an acceptable neural code is what we call neural coding.\n\nNow that we know what a neural code is, we can start to guess what a neural code could look like. There are only four broad cases:\n- No neuron activates at all \n- Only one neuron gets activated\n- Less than half of the neurons get activated\n- Half of the neurons get activated\n\nLet's put aside the case where no neurons activate at all, as this case is pretty dull.\n\n### Local code\nAt one extreme of low average activity ratio are local codes: only one neuron gets activated per feature of the stimulus. No overlap between neurons is possible: one neuron can't react to different features. \n\nNote that a complex stimulus might contain multiple features (multiple neurons can fire at the same time for a complex stimulus).\n\n### Dense code\nThe other extreme corresponds to dense code: half of the neurons get activated per feature.\n\nWhy half (average activity ratio of 0.5)?\n\nRemember we are looking at neurons in the ensemble: for any given distribution of activated neurons with an activity ratio > 0.5, you could swap the activated ones with the inactivated ones without losing information in the ensemble. Remember that all neurons are used to encode information; unlike in sparse code, one neuron doesn't hold any specific information in itself. \n\n### Sparse code\nEverything in between those two extremes is what we call sparse code. 
We take into account only a subset of neurons to encode information.\n\nSparse code can be seen as a good compromise balancing all the different characteristics of neural codes: computation power, fault tolerance, interference, complexity, etc. This is why, from an evolutionary perspective, it makes sense that the brain works only with sparse code.\n\n*More on that in the why section.*\n\n### Linking the two fields\n**How does neural code relate to the mathematical definition of sparse coding?**\n\nThese two fields can be linked thanks to neural networks: you can interpret a neural network in terms of projections of information into vector spaces. Each layer of neurons will receive a representation of the data (in a vector space) from the previous layer and project it into another vector space.\n\n**If you interpret a layer as a data projection, what can be interpreted as the basis of the vector space?**\n\nIt's the neurons of the layer: each neuron represents a vector. If a neuron is activated in a layer, it means that you need this dimension to encode your data. The fewer neurons are activated per projection, the closer you are to a sparse representation of your data, and to a sparse code from the biological standpoint.\n\nChoosing the number of neurons per layer is then equivalent to choosing the number of dimensions of the over-complete basis which will represent your data. It is the same idea as in word embeddings, when you choose the number of dimensions of the word vector space used to encode your words.\n\n*Remember the mathematical equation*\n\nYou can notice the presence of two unknown variables in the equation: **D** and **h**. In neural networks: \n- **h = E . x** (I avoid writing the bias terms for the sake of simplicity). 
With **E** a matrix.\n\nYou can interpret the neural network weights **E** as the weights of an encoder (trying to create the sparsest representation of the data it can) and **D** as the weights of a decoder (trying to reproduce the original content as well as it can). This happens to be what we call an auto-encoder.\n\n*Remark:*\nHow many neurons, you ask? It seems impossible to tell without experimenting; this is why we use cross-validation to optimize hyper-parameters.\nIf the number of neurons (and so, dimensions) is greater than the intrinsic dimensionality of the data, then you can infer an over-complete basis and so achieve sparse coding in your neural network. If you don't have enough neurons, you will compress your data with some loss, getting worse results when you try to reproduce the data.\n\nWhy not use as many neurons as we can? Because this is just too expensive and terribly inefficient.\n\n## Why is having a sparse representation of data useful?\n### Intuitions\nA good sparse coding algorithm would be able to find a very good over-complete basis of vectors representing the inherent structure of the data. 
By \"very good\", I mean \"meaningful\" for a human.\n\nIt could be compared to:\n- Decomposing all the matter in the universe into combinations of atoms\n- Decomposing music into combinations of just a few notes\n- Decomposing a whole language into just a few letters\nAnd yet, from those basic components, you can generate back all the music, all the matter, or all the words in a language.\n\nHaving an algorithm which could infer those basic components just by looking at the data would be very useful, because it would imply that we could pretrain a neural network to learn *always useful features* for any future task that you would need to solve.\n\nSparse code also brings loads of interesting characteristics to the brain which can be applied to computers too, and which are often overlooked.\n\n#### Interference\nThis scheme of neural code avoids the risk of unwanted interference between neurons: one neuron can't encode two features at the same time.\n\nIn other schemes (see dense/sparse code), one neuron might react to multiple features; if those two features happen to appear at the same time in the stimulus, one could imagine the emergence of a misleading new signal from a neuron reacting to both pieces of information.\n\n#### Parallelism\nSince not all neurons are used at the same time and interference is unlikely in sparse code, neurons can work asynchronously and parallelise the computation of information.\n\nIt is important to note that parallelism is closely linked to interference: in a dense code, since all neurons are used to encode any piece of information, trying to parallelise computation leads to a great risk of interference.\n\n#### Fault tolerance\nHaving multiple neurons firing for the same stimulus can help one recover from an error coming from a neuron. Local code is especially sensitive to errors, while dense code can always recover. 
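\n\nAll the comparisons in this section revolve around the average activity ratio introduced earlier. As a purely illustrative aside (a NumPy sketch under my own assumptions, not code from the experiments below), here is how that ratio can be measured on the activations of a ReLU layer:\n\n```python\nimport numpy as np\n\n# Hypothetical setup: a random batch and a random fully connected ReLU layer\nrng = np.random.default_rng(0)\nx = rng.normal(size=(100, 784))              # 100 fake inputs\nW = rng.normal(scale=0.1, size=(784, 784))   # random weights\na = np.maximum(x @ W, 0.0)                   # ReLU activations\n\n# Average density: mean number of active neurons per input\naverage_density = (a > 0).sum(axis=1).mean()\n# Average activity ratio: the same thing as a fraction of the N=784 neurons\navr = average_density / a.shape[1]\nprint(average_density, avr)\n```\nWith symmetric random weights, you should see an activity ratio close to 0.5 (a dense code); the whole point of the sparsity constraint used in the experiments below is to push this number down.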
\n\n#### Memory capacity\nIn local code, since one neuron represents one feature, you can only encode as many features as you have neurons.\n\nIf you think of a neuron as a binary entity (1 or 0), we have two states per neuron. It means that for dense code we actually have **2^N** different combinations of neurons to choose from for each feature of the input data, which is A LOT!\n\nIn sparse code, if we set **d** to be the average density of neuron activations per stimulus, we have **2^d** different combinations this time. It's interesting because, since **d** can be a lot smaller than **N**, we have room for other characteristics.\n\n#### Complexity And Decoding\nObviously, the more neurons you need to observe to detect which feature has been encoded, the more complex it will be to decode information.\n\n#### Sum up\n(N: total number of neurons, d: average density of activations; we simplify the behaviour of neurons as activated or not)\n\n|            | Local code | Sparse code | Dense code |\n|------------|------------|-------------|------------|\n| Interference      | None | Unlikely | Likely | \n| Parallelism       | Possible | Possible | Dangerous | \n| Decoding          | Easy | Average | Complex |\n| Fault tolerance   | None | OK | Very good | \n| Memory capacity   | N | 2^(d) | 2^N | \n| Complexity        | None | Low | Very high |\n| Computation power or energy        | Very low | Low | High | \n\nAs you can see, sparse code seems to be the perfect middle ground between having enough computation power/memory/stability and using as little energy as possible to handle the task at hand.\n\n\n\n## How to use it in neural networks?\nThe simplest known way of combining neural networks and sparse coding is the sparse auto-encoder: it is a neural network that tries to mimic the identity function under some sparsity constraint on the hidden layers or in the objective function.\n\nAfter learning the concept, I wanted to try some experiments that I've never seen around. 
My idea was to use the ReLU activation combined with an L1 norm in the objective function to promote sparse neuron activations. Let's see (with code and TensorFlow) how far we can go with this idea.\n\n### Some experiments with TensorFlow\nWe will now try to validate our previous statements with some very simple neural networks.\nMultiple questions here:\n- Can we reach (at all) local code on the MNIST dataset?\n- Can we reach sparse code while keeping a good accuracy?\n- Do all neurons get activated across the complete dataset?\n\nExperiments:\n- A simple fully connected neural network used as a classifier of digits on MNIST; I add the sparsity constraint with an L1-norm directly on the hidden layer\n    - I will test with different coefficients of sparsity, **especially 0**, giving us a baseline \n- Let's move forward with a CNN, doing the same experiment as before (L1-norm on the activations of the feature maps this time). Since we are trying to classify images, we should see an improvement.\n- Finally, I will add a reconstruction layer and loss to our CNN architecture and train them jointly to see if any improvement happens\n\n\n#### Fully connected neural network\nI will do this experiment with only one hidden layer of 784 neurons: because our input data is a 28x28 image, I'm sure I can re-encode the data with as many dimensions as the original space. 
Let's see how sparse we can get!\n\n**All code files can be found in the repo**\n\nFirst we load the MNIST dataset and prepare our placeholders for training/testing:\n```python\nimport time, os\n\nimport tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\ndir = os.path.dirname(os.path.realpath(__file__))\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\n\n# Fully connected model\n# Number of parameters: (784 * 784 + 784) + (784 * 10 + 10) = 615440 + 7850 = 623290\n# Dimensionality: R^784 -> R^784 -> R^10\n\n# Placeholder\nx = tf.placeholder(tf.float32, shape=[None, 784])\ny_true = tf.placeholder(tf.float32, shape=[None, 10])\n\nsparsity_constraint = tf.placeholder(tf.float32)\n```\nWe add our neural layer and calculate the average density of activations:\n```python\n# Variables\nwith tf.variable_scope('NeuralLayer'):\n    W = tf.get_variable('W', shape=[784, 784], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b = tf.get_variable('b', shape=[784], initializer=tf.constant_initializer(0.1))\n\n    z = tf.matmul(x, W) + b\n    a = tf.nn.relu(z)\n\n    # We graph the average density of neuron activations\n    average_density = tf.reduce_mean(tf.reduce_sum(tf.cast((a > 0), tf.float32), axis=[1]))\n    tf.summary.scalar('AverageDensity', average_density)\n```\n\nWe add the softmax layer:\n```python\nwith tf.variable_scope('SoftmaxLayer'):\n    W_s = tf.get_variable('W_s', shape=[784, 10], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b_s = tf.get_variable('b_s', shape=[10], initializer=tf.constant_initializer(0.1))\n\n    out = tf.matmul(a, W_s) + b_s\n    y = tf.nn.softmax(out)\n```\n\nWe finish with the loss; note that we add our sparsity constraint on the activations:\n```python\nwith tf.variable_scope('Loss'):\n    epsilon = 1e-7 # After some training, y can be 0 on some classes, which leads to NaN\n    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_true * tf.log(y + epsilon), axis=[1]))\n    # We add our 
sparsity constraint on the activations\n    loss = cross_entropy + sparsity_constraint * tf.reduce_sum(a)\n\n    tf.summary.scalar('loss', loss) # Graph the loss\n```\n\nWe now merge all the summaries defined so far, and only then define the accuracy and its summary. \nWe do it in this order so that the accuracy is not graphed at every training iteration:\n```python\nsummaries = tf.summary.merge_all() # Merges every summary defined so far\n\nwith tf.variable_scope('Accuracy'):\n    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_true, 1))\n    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n    acc_summary = tf.summary.scalar('accuracy', accuracy) \n```\nWe finish with the training/testing part:\n```python\n# Training\nadam = tf.train.AdamOptimizer(learning_rate=1e-3)\ntrain_op = adam.minimize(loss)\nsess = None\n# We iterate over different sparsity constraints\nfor sc in [0, 1e-4, 5e-4, 1e-3, 2.7e-3]:\n    result_folder = dir + '/results/' + str(int(time.time())) + '-fc-sc' + str(sc)\n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        sw = tf.summary.FileWriter(result_folder, sess.graph)\n        \n        for i in range(20000):\n            batch = mnist.train.next_batch(100)\n            current_loss, summary, _ = sess.run([loss, summaries, train_op], feed_dict={\n                x: batch[0],\n                y_true: batch[1],\n                sparsity_constraint: sc\n            })\n            sw.add_summary(summary, i + 1)\n\n            if (i + 1) % 100 == 0:\n                acc, acc_sum = sess.run([accuracy, acc_summary], feed_dict={\n                    x: mnist.test.images, \n                    y_true: mnist.test.labels\n                })\n                sw.add_summary(acc_sum, i + 1)\n                print('batch: %d, loss: %f, accuracy: %f' % (i + 1, current_loss, acc))\n```\n\n*Since we are talking about images, I've done another experiment using a 
convolutional neural network and 200 feature maps (remember that the weights are shared within each feature map in a convolutional neural net; we are just looking for a local feature everywhere in the image)*\n\nEt voila!\nLet's launch our little experiment (do it yourself to see the graphs with TensorBoard).\nHere is the summary (20000 iterations):\n\nFully connected neural net (N = 784 neurons):\n\n![Fully connected neural net results](results-fc.jpg \"Fully connected neural net results\")\n\n**Some remarks:**\n- Without a sparsity constraint, the network converges to a dense regime (below ~0.5 average activity ratio)\n- We almost reach local code with an AVR (average activity ratio) of 1.67, for 89% accuracy. Clearly better than a random monkey! \n- Without losing accuracy, we've been able to reach an AVR of 5.4 instead of 156.3 with a good sparsity parameter, which is only 0.6% of activated neurons.\n- It's hard to see, but we reached overfitting with the sparsity constraint. The absolute best score would have been 0.981 accuracy for an AVR of 28.19 (sparsity constraint of 1e-05)\n\n#### Convolutional neural network\nConvolutional neural net (N = 39200; note that neurons are spatially distributed):\n\n![Convolutional neural net results](results-cnn.jpg \"Convolutional neural net results\")\n\n**Some remarks:**\n- We get very close to the capacity of the fully connected neural network. \n- Without losing accuracy, we've been able to reach an AVR of 146.2 instead of 10932 with a good sparsity parameter, which is only 0.3% of activated neurons.\n\nFinally, I venture into the auto-encoder area to see if my idea makes more sense there.\n\n#### Convolutional Sparse auto-encoder\nConvolutional neural net jointly trained with a sparse auto-encoder (N = 39200):\n\n![Autoencoder results](results-ae.jpg \"Autoencoder results\")\n\n**Some remarks:**\n- We actually **accelerated** (in terms of iterations, not time) and **improved** the learning of the classifier with sparsity. 
That's an interesting result!\n- The auto-encoder got worse with sparsity, until you quantize the real values back to integers; in this case, the auto-encoder didn't change much (average squared error per pixel < 1).\n\n## Conclusion\nIt was very interesting to explore sparse coding and finally test the concept with some very simple but concrete examples. TensorFlow is definitely very efficient for fast prototyping: having access to every variable and operation allows one to customize any process very easily.\n\nI don't think this work would lead to anything interesting in terms of neural networks, but I believe it clearly illustrates the concept of sparse coding in this field.\n\nAlso, imposing the sparsity constraint on ReLU activations made me think about what we call [lateral inhibition](https://en.wikipedia.org/wiki/Lateral_inhibition) in our brain. Since the sparsity constraint is applied on the global network activations, it resonates with the description found in the Wikipedia article: \n\n*An often under-appreciated point is that although lateral inhibition is visualised in a spatial sense, it is also thought to exist in what is known as \"lateral inhibition across abstract dimensions.\" This refers to lateral inhibition between neurons that are not adjacent in a spatial sense, but in terms of modality of stimulus. 
This phenomenon is thought to aid in colour discrimination.*\n\nWho knows, maybe this is an interesting journey to pursue?\n\n**P.S.: If you find any mistakes or inconsistencies in this writeup, please contact me: morgan@explee.com, thank you!**\n\n## Running the experiment using nvidia-docker\nRun experiments (using GPU):\n\n`nvidia-docker run --name sparsity --rm -v ~/gist/sparsity:/sparsity  gcr.io/tensorflow/tensorflow:0.10.0-gpu python /sparsity/sparsity.py && python /sparsity/cnn_sparsity.py && python /sparsity/cnn_ae_sparsity.py`\n\nLook at TensorBoard:\n\n`nvidia-docker run --name tensorboard -it -d -p 6006:6006  -v ~/gist/sparsity:/sparsity  gcr.io/tensorflow/tensorflow:0.10.0-gpu tensorboard --logdir /sparsity/results --reload_interval 30`\n\n\n## References\n- https://en.wikipedia.org/wiki/Neural_coding\n- https://en.wikipedia.org/wiki/Neural_coding#Sparse_coding\n- http://ufldl.stanford.edu/wiki/index.php/Sparse_Coding\n- http://deeplearning.stanford.edu/wiki/index.php/Sparse_Coding:_Autoencoder_Interpretation\n- http://www.scholarpedia.org/article/Sparse_coding\n- http://www.scholarpedia.org/article/Donald_Olding_Hebb\n- http://www.scholarpedia.org/article/Oja_learning_rule\n- https://www.youtube.com/user/hugolarochelle/search?query=sparse\n"
  },
  {
    "path": "sparse-coding/cnn_ae_sparsity.py",
    "content": "import time, os\n\nimport tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\n# The quality of the conv2d_transpose reconstruction operation has recently been beaten by the subpixel operation\n# Taken from the subpixel paper: https://github.com/Tetrachrome/subpixel/blob/master/subpixel.py\n####\ndef _phase_shift(I, r):\n    bsize, a, b, c = I.get_shape().as_list()\n    bsize = tf.shape(I)[0] # I've made a minor change to the original implementation to enable Dimension(None) for the batch dim\n    X = tf.reshape(I, (bsize, a, b, r, r))\n    X = tf.transpose(X, (0, 1, 2, 4, 3))  # bsize, a, b, r, r\n    X = tf.split(X, a, 1)  # a, [bsize, b, r, r]\n    X = tf.concat([tf.squeeze(x) for x in X], 2)  # bsize, b, a*r, r\n    X = tf.split(X, b, 1)  # b, [bsize, a*r, r]\n    X = tf.concat([tf.squeeze(x) for x in X], 2)  # bsize, a*r, b*r\n    return tf.reshape(X, (bsize, a*r, b*r, 1))\n\n\ndef PS(X, r, color=False):\n    if color:\n        Xc = tf.split(X, 3, 3)\n        X = tf.concat([_phase_shift(x, r) for x in Xc], 3)\n    else:\n        X = _phase_shift(X, r)\n    return X\n####\n\ndir = os.path.dirname(os.path.realpath(__file__))\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\n\n# Convolutional model\n# We want approximately the same representational capacity between the neural layer\n# and the Softmax layer\n# Number of parameters: (3 * 3 * 1 * 200) + (14 * 14 * 200 * 10) = 393800\n# Dimensionality: R^784 -> R^39200 -> R^10, 39200 neurons but only 200 features per spatial point\n\n# Placeholder\nx = tf.placeholder(tf.float32, shape=[None, 784])\ny_true = tf.placeholder(tf.float32, shape=[None, 10])\n\nsparsity_constraint = tf.placeholder(tf.float32)\n\nx_img = tf.reshape(x, [-1, 28, 28, 1])\nwith tf.variable_scope('NeuralLayer'):\n    # We would like 200 feature maps; remember that weights are shared inside each feature map\n    # The only difference with FC layers is that we look for a precise 
feature everywhere in the image\n    W1 = tf.get_variable('W1', shape=[3, 3, 1, 200], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b1 = tf.get_variable('b1', shape=[200], initializer=tf.constant_initializer(0.1))\n\n    z1 = tf.nn.conv2d(x_img, W1, strides=[1, 2, 2, 1], padding='SAME') + b1\n    a = tf.nn.relu(z1)\n\n    # We graph the average density of neurons activation\n    average_density = tf.reduce_mean(tf.reduce_sum(tf.cast((a > 0), tf.float32), axis=[1, 2, 3]))\n    tf.summary.scalar('AverageDensity', average_density)\n\na_vec_size = 14 * 14 * 200 \n\nwith tf.variable_scope('SoftmaxLayer'):\n    a_vec = tf.reshape(a, [-1, a_vec_size])\n\n    W_s = tf.get_variable('W_s', shape=[a_vec_size, 10], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b_s = tf.get_variable('b_s', shape=[10], initializer=tf.constant_initializer(0.1))\n\n    out = tf.matmul(a_vec, W_s) + b_s\n    y = tf.nn.softmax(out)\n\n# The conv2d_transpose decoder is not cool anymore\n# with tf.variable_scope('Decoder'):\n#     W_decoder = tf.get_variable('W_decoder', shape=[3, 3, 1, 200], initializer=tf.random_normal_initializer(stddev=1e-1))\n#     b_decoder = tf.get_variable('b_decoder', shape=[1], initializer=tf.constant_initializer(0.1))\n\n#     # We force the decoder weights (9) of each feature (200) to stay in the 1-norm area\n#     W_decoder = tf.clip_by_norm(W_decoder, 1, axes=[3])\n\n#     z_decoder = tf.nn.conv2d_transpose(a, W_decoder, output_shape=[tf.shape(x)[0], 28, 28, 1], strides=[1, 2, 2, 1], padding='SAME') + b1\n\n# The phaseshift decoder is a lot cooler!\nwith tf.variable_scope('Decoder_phaseshift'):\n    # a is a 14x14x200 feature map, we want to end up with a 28x28x1 image, so we need to scale down to 14x14x4 feature map\n    # before applying the phase shift trick with a ratio of 2\n    W_decoder_ps = tf.get_variable('W1', shape=[3, 3, 200, 4], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b_decoder_ps = tf.get_variable('b1', 
shape=[4], initializer=tf.constant_initializer(0.1))\n\n    # We force the decoder weights (9) of each feature (200) to stay in the 1-norm area\n    W_decoder_ps = tf.clip_by_norm(W_decoder_ps, 1, axes=[3])\n\n    z_ps = tf.nn.conv2d(a, W_decoder_ps, strides=[1, 1, 1, 1], padding='SAME') + b_decoder_ps\n    z_decoder_ps = PS(z_ps, 2)\n\nwith tf.variable_scope('Loss'):\n    epsilon = 1e-7 # After some training, y can be 0 on some classes which lead to NaN \n    loss_classifier = tf.reduce_mean(-tf.reduce_sum(y_true * tf.log(y + epsilon), axis=[1]))\n    # loss_ae = tf.reduce_mean(tf.reduce_sum(tf.square(x_img - z_decoder), axis=[1, 2, 3]))\n    loss_ae_ps = tf.reduce_mean(tf.reduce_sum(tf.square(x_img - z_decoder_ps), axis=[1, 2, 3]))\n    loss_sc = sparsity_constraint * tf.reduce_sum(a)\n    loss = loss_classifier + loss_ae_ps + loss_sc\n\n    # tf.summary.scalar('loss_ae', loss_ae)\n    tf.summary.scalar('loss_ae_ps', loss_ae_ps)\n    tf.summary.scalar('loss_classifier', loss_classifier)\n    tf.summary.scalar('loss_sc', loss_sc)\n    tf.summary.scalar('loss', loss)\n\n\n# We merge summaries before the accuracy summary to avoid \n# graphing the accuracy with training data\nsummaries = tf.summary.merge_all()\n\nwith tf.variable_scope('Accuracy'):\n    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_true, 1))\n    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n    acc_summary = tf.summary.scalar('accuracy', accuracy) \n   \n    # Let's see the actual tensorflow reconstruction \n    input_img_summary = tf.summary.image(\"input\", x_img, 1)\n    rec_img_summary_ps = tf.summary.image(\"reconstruction\", z_decoder_ps, 1)\n\n\n# Training\nadam_train = tf.train.AdamOptimizer(learning_rate=1e-3)\ntrain_op = adam_train.minimize(loss)\nsess = None\n# We iterate over different sparsity constraint\nfor sc in [0, 1e-5, 5e-5, 1e-4, 5e-4]:\n    result_folder = dir + '/results/' + str(int(time.time())) + '-ae-sc' + str(sc)\n    with tf.Session() as 
sess:\n        sess.run(tf.global_variables_initializer())\n        sw = tf.summary.FileWriter(result_folder, sess.graph)\n\n        for i in range(20000):\n            batch = mnist.train.next_batch(100)\n            current_loss, summary, _ = sess.run([loss, summaries, train_op], feed_dict={\n                x: batch[0],\n                y_true: batch[1],\n                sparsity_constraint: sc\n            })\n            sw.add_summary(summary, i + 1)\n\n            if (i + 1) % 100 == 0:\n                acc, acc_sum, input_img_sum, rec_img_sum = sess.run([accuracy, acc_summary, input_img_summary, rec_img_summary_ps], feed_dict={\n                    x: mnist.test.images, \n                    y_true: mnist.test.labels\n                })\n                sw.add_summary(acc_sum, i + 1)\n                sw.add_summary(input_img_sum, i + 1)\n                sw.add_summary(rec_img_sum, i + 1)\n                print('batch: %d, loss: %f, accuracy: %f' % (i + 1, current_loss, acc))\n"
  },
  {
    "path": "sparse-coding/cnn_sparsity.py",
    "content": "import time, os\n\nimport tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\ndir = os.path.dirname(os.path.realpath(__file__))\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\n\n# Convolutionnel model\n# We want to have approximatively the same reprensatitivity between neural layers\n# and the Softmax layer\n# Number of parameters: (3 * 3 * 1 * 784) + (14 * 14 * 200 * 10) = 393800\n# Dimensionality: R^784 -> R^39200 -> R_10, 39200 neurons but only 200 hundred features per spatial point\n\n# Placeholder\nx = tf.placeholder(tf.float32, shape=[None, 784])\ny_true = tf.placeholder(tf.float32, shape=[None, 10])\n\nsparsity_constraint = tf.placeholder(tf.float32)\n\nx_img = tf.reshape(x, [-1, 28, 28, 1])\nwith tf.variable_scope('NeuralLayer'):\n    # We would like 200 feature map, remember that weights are shared inside each feature map\n    # The only difference with FC layers, is that we look for a precise feature everywhere in the image\n    W1 = tf.get_variable('W1', shape=[3, 3, 1, 200], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b1 = tf.get_variable('b1', shape=[200], initializer=tf.constant_initializer(0.1))\n\n    z1 = tf.nn.conv2d(x_img, W1, strides=[1, 2, 2, 1], padding='SAME') + b1\n    a = tf.nn.relu(z1)\n\n    # We graph the average density of neurons activation\n    average_density = tf.reduce_mean(tf.reduce_sum(tf.cast((a > 0), tf.float32), axis=[1, 2, 3]))\n    tf.summary.scalar('AverageDensity', average_density)\n\na_vec_size = 14 * 14 * 200 \n\nwith tf.variable_scope('SoftmaxLayer'):\n    a_vec = tf.reshape(a, [-1, a_vec_size])\n\n    W_s = tf.get_variable('W_s', shape=[a_vec_size, 10], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b_s = tf.get_variable('b_s', shape=[10], initializer=tf.constant_initializer(0.1))\n\n    out = tf.matmul(a_vec, W_s) + b_s\n    y = tf.nn.softmax(out)\n\nwith tf.variable_scope('Loss'):\n    epsilon = 1e-7 # After some training, y can 
be 0 on some classes which lead to NaN \n    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_true * tf.log(y + epsilon), axis=[1]))\n    # We add our sparsity constraint on the activations\n    # loss = cross_entropy + sparsity_constraint * (tf.reduce_sum(a1) + tf.reduce_sum(a2) + tf.reduce_sum(a))\n    loss = cross_entropy + sparsity_constraint * tf.reduce_sum(a)\n\n    tf.summary.scalar('loss', loss) # Graph the loss\n\n# We merge summaries before the accuracy summary to avoid \n# graphing the accuracy with training data\nsummaries = tf.summary.merge_all()\n\nwith tf.variable_scope('Accuracy'):\n    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_true, 1))\n    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n    acc_summary = tf.summary.scalar('accuracy', accuracy) \n\n\n# Training\nadam_train = tf.train.AdamOptimizer(learning_rate=1e-3)\ntrain_op = adam_train.minimize(loss)\nsess = None\n# We iterate over different sparsity constraint\nfor sc in [0, 1e-5, 5e-5, 1e-4, 5e-4]:\n    result_folder = dir + '/results/' + str(int(time.time())) + '-cnn-sc' + str(sc)\n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        sw = tf.summary.FileWriter(result_folder, sess.graph)\n        \n        for i in range(20000):\n            batch = mnist.train.next_batch(100)\n            current_loss, summary, _ = sess.run([loss, summaries, train_op], feed_dict={\n                x: batch[0],\n                y_true: batch[1],\n                sparsity_constraint: sc\n            })\n            sw.add_summary(summary, i + 1)\n\n            if (i + 1) % 100 == 0:\n                acc, acc_sum = sess.run([accuracy, acc_summary], feed_dict={\n                    x: mnist.test.images, \n                    y_true: mnist.test.labels\n                })\n                sw.add_summary(acc_sum, i + 1)\n                print('batch: %d, loss: %f, accuracy: %f' % (i + 1, current_loss, acc))"
  },
  {
    "path": "sparse-coding/sparsity.py",
    "content": "import time, os\n\nimport tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\ndir = os.path.dirname(os.path.realpath(__file__))\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\n\n# Fully connected model\n# Number of parameters: (784 * 784 + 784) + (784 * 10 + 10) = 615440 + 7850 = 623290\n# Dimensionality: R^784 -> R^784 -> R^10\n\n# Placeholder\nx = tf.placeholder(tf.float32, shape=[None, 784])\ny_true = tf.placeholder(tf.float32, shape=[None, 10])\n\nsparsity_constraint = tf.placeholder(tf.float32)\n\n# Variables\nwith tf.variable_scope('NeuralLayer'):\n    W = tf.get_variable('W', shape=[784, 784], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b = tf.get_variable('b', shape=[784], initializer=tf.constant_initializer(0.1))\n\n    z = tf.matmul(x, W) + b\n    a = tf.nn.relu(z)\n\n    # We graph the average density of neurons activation\n    average_density = tf.reduce_mean(tf.reduce_sum(tf.cast((a > 0), tf.float32), axis=[1]))\n    tf.summary.scalar('AverageDensity', average_density)\n\nwith tf.variable_scope('SoftmaxLayer'):\n    W_s = tf.get_variable('W_s', shape=[784, 10], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b_s = tf.get_variable('b_s', shape=[10], initializer=tf.constant_initializer(0.1))\n\n    out = tf.matmul(a, W_s) + b_s\n    y = tf.nn.softmax(out)\n\nwith tf.variable_scope('Loss'):\n    epsilon = 1e-7 # After some training, y can be 0 on some classes which lead to NaN \n    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_true * tf.log(y + epsilon), axis=[1]))\n    # We add our sparsity constraint on the activations\n    loss = cross_entropy + sparsity_constraint * tf.reduce_sum(a)\n\n    tf.summary.scalar('loss', loss) # Graph the loss\n\n# We merge summaries before the accuracy summary to avoid \n# graphing the accuracy with training data\nsummaries = tf.summary.merge_all()\n\nwith tf.variable_scope('Accuracy'):\n    correct_prediction = tf.equal(tf.argmax(y, 
1), tf.argmax(y_true, 1))\n    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n    acc_summary = tf.summary.scalar('accuracy', accuracy) \n\n# Training\nadam = tf.train.AdamOptimizer(learning_rate=1e-3)\ntrain_op = adam.minimize(loss)\nsess = None\n# We iterate over different sparsity constraint\nfor sc in [0, 1e-4, 5e-4, 1e-3, 2.7e-3]:\n    result_folder = dir + '/results/' + str(int(time.time())) + '-fc-sc' + str(sc)\n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        sw = tf.summary.FileWriter(result_folder, sess.graph)\n        \n        for i in range(20000):\n            batch = mnist.train.next_batch(100)\n            current_loss, summary, _ = sess.run([loss, summaries, train_op], feed_dict={\n                x: batch[0],\n                y_true: batch[1],\n                sparsity_constraint: sc\n            })\n            sw.add_summary(summary, i + 1)\n\n            if (i + 1) % 100 == 0:\n                acc, acc_sum = sess.run([accuracy, acc_summary], feed_dict={\n                    x: mnist.test.images, \n                    y_true: mnist.test.labels\n                })\n                sw.add_summary(acc_sum, i + 1)\n                print('batch: %d, loss: %f, accuracy: %f' % (i + 1, current_loss, acc))"
  },
  {
    "path": "tf-architecture/.gitignore",
    "content": ""
  },
  {
    "path": "tf-architecture/README.md",
    "content": "# TensorFlow: A proposal of good practices for files, folders and models architecture\n\n## TL, DR!\n### EN\n- I list all the different tasks one will have to do when doing ML\n- I show a common folder structure that I believe handles all possible use cases nicely\n- I show a basic Model class, easily extendable that structure a lot of possible kinds of models\n- I describe how to build a good \"shell API\" for easy iterations.\nBonus: Some TF code linked to the subject.\n\n### FR\n- Je liste l'ensemble des tâches à effectuer quand on fait du machine learning\n- Je présente une architecture de dossier qui, je pense, permet de gérer l'ensemble de ces usages.\n- Je présente une classe Modèle de base, facilement \"extensible\", qui structure la manière de concevoir de nombreux types de modeles.\n- Enfin, je décris comment construire une bonne \"API shell\" pour faciliter les itérations de recherches.\nBonus: Du code TF divers et varié associé au sujet.\n"
  },
  {
    "path": "tf-architecture/archi/README",
    "content": ""
  },
  {
    "path": "tf-architecture/archi/hpsearch/hyperband.py",
    "content": ""
  },
  {
    "path": "tf-architecture/archi/hpsearch/randomsearch.py",
    "content": ""
  },
  {
    "path": "tf-architecture/archi/main.py",
    "content": "import os, json\nimport tensorflow as tf\n\n# See the __init__ script in the models folder\n# `make_models` is a helper function to load any models you have\nfrom models import make_models \nfrom hpsearch import hyperband, randomsearch\n\n# I personally always like to make my paths absolute\n# to be independent from where the python binary is called\ndir = os.path.dirname(os.path.realpath(__file__))\n\n# I won't dig into TF interaction with the shell, feel free to explore the documentation\nflags = tf.app.flags\n\n\n# Hyper-parameters search configuration\nflags.DEFINE_boolean('fullsearch', False, 'Perform a full search of hyperparameter space ex:(hyperband -> lr search -> hyperband with best lr)')\nflags.DEFINE_boolean('dry_run', False, 'Perform a dry_run (testing purpose)')\nflags.DEFINE_integer('nb_process', 4, 'Number of parallel process to perform a HP search')\n\n# fixed_params is a trick I use to be able to fix some parameters inside the model random function\n# For example, one might want to explore different models fixing the learning rate, see the basic_model get_random_config function\nflags.DEFINE_string('fixed_params', \"{}\", 'JSON inputs to fix some params in a HP search, ex: \\'{\"lr\": 0.001}\\'')\n\n\n# Agent configuration\nflags.DEFINE_string('model_name', 'DQNAgent', 'Unique name of the model')\nflags.DEFINE_boolean('best', False, 'Force to use the best known configuration')\nflags.DEFINE_float('initial_mean', 0., 'Initial mean for NN')\nflags.DEFINE_float('initial_stddev', 1e-2, 'Initial standard deviation for NN')\nflags.DEFINE_float('lr', 1e-3, 'The learning rate of SGD')\nflags.DEFINE_float('nb_units', 20, 'Number of hidden units in Deep learning agents')\n\n\n# Environment configuration\nflags.DEFINE_boolean('debug', False, 'Debug mode')\nflags.DEFINE_integer('max_iter', 2000, 'Number of training step')\nflags.DEFINE_boolean('infer', False, 'Load an agent for playing')\n\n# This is very important for TensorBoard\n# each 
model will end up in its own unique folder using the time module\n# Obviously one can also choose to name the output folder\nflags.DEFINE_string('result_dir', dir + '/results/' + flags.FLAGS.model_name + '/' + str(int(time.time())), 'Name of the directory to store/log the model (if it exists, the model will be loaded from it)')\n\n# Another important point, you must provide access to the random seed\n# to be able to fully reproduce an experiment\nflags.DEFINE_integer('random_seed', random.randint(0, sys.maxsize), 'Value of random seed')\n\ndef main(_):\n    config = flags.FLAGS.__flags.copy()\n    # fixed_params must be a string to be passed in the shell, let's use JSON\n    config[\"fixed_params\"] = json.loads(config[\"fixed_params\"])\n\n    if config['fullsearch']:\n        # Some code for HP search ...\n        pass\n    else:\n        model = make_model(config)\n\n        if config['infer']:\n            # Some code for inference ...\n            pass\n        else:\n            # Some code for training ...\n            pass\n\n\nif __name__ == '__main__':\n    tf.app.run()"
  },
  {
    "path": "tf-architecture/archi/models/__init__.py",
    "content": "from models.basic_model import BasicModel\nfrom agents.other_model import SomeOtherModel\n\n__all__ = [\n    \"BasicModel\",\n    \"SomeOtherModel\"\n]\n\ndef make_model(config, env):\n    if config['model_name'] in __all__:\n        return globals()[config['model_name']](config, env)\n    else:\n        raise Exception('The model name %s does not exist' % config['model_name'])\n\ndef get_model_class(config):\n    if config['model_name'] in __all__:\n        return globals()[config['model_name']]\n    else:\n        raise Exception('The model name %s does not exist' % config['model_name'])"
  },
  {
    "path": "tf-architecture/archi/models/basic_model.py",
    "content": "import os, copy\nimport tensorflow as tf\n\nclass BasicAgent(object):\n    # To build your model, you only to pass a \"configuration\" which is a dictionary\n    def __init__(self, config):\n        # I like to keep the best HP found so far inside the model itself\n        # This is a mechanism to load the best HP and override the configuration\n        if config['best']:\n            config.update(self.get_best_config(config['env_name']))\n            \n        # I make a `deepcopy` of the configuration before using it\n        # to avoid any potential mutation when I iterate asynchronously over configurations\n        self.config = copy.deepcopy(config)\n\n        if config['debug']: # This is a personal check i like to do\n            print('config', self.config)\n\n        # When working with NN, one usually initialize randomly\n        # and you want to be able to reproduce your initialization so make sure\n        # you store the random seed and actually use it in your TF graph (tf.set_random_seed() for example)\n        self.random_seed = self.config['random_seed']\n\n        # All models share some basics hyper parameters, this is the section where we\n        # copy them into the model\n        self.result_dir = self.config['result_dir']\n        self.max_iter = self.config['max_iter']\n        self.lr = self.config['lr']\n        self.nb_units = self.config['nb_units']\n        # etc.\n        \n        # Now the child Model needs some custom parameters, to avoid any\n        # inheritance hell with the __init__ function, the model \n        # will override this function completely\n        self.set_agent_props()\n\n        # Again, child Model should provide its own build_grap function\n        self.graph = self.build_graph(tf.Graph())\n\n        # Any operations that should be in the graph but are common to all models\n        # can be added this way, here\n        with self.graph.as_default():\n            self.saver = tf.train.Saver(\n 
                max_to_keep=50,\n            )\n\n        # Add all the other common code for the initialization here\n        gpu_options = tf.GPUOptions(allow_growth=True)\n        sessConfig = tf.ConfigProto(gpu_options=gpu_options)\n        self.sess = tf.Session(config=sessConfig, graph=self.graph)\n        self.sw = tf.summary.FileWriter(self.result_dir, self.sess.graph)\n\n        # This function is not always common to all models, that's why it's again\n        # separated from the __init__ one\n        self.init()\n\n        # At the end of this function, you want your model to be ready!\n\n    def set_agent_props(self):\n        # This function is here to be overridden completely.\n        # When you look at your model, you want to know exactly which custom options it needs.\n        pass\n\n    def get_best_config(self):\n        # This function is here to be overridden completely.\n        # It returns a dictionary used to update the initial configuration (see __init__)\n        return {}\n\n    @staticmethod\n    def get_random_config(fixed_params={}):\n        # Why static? 
Because you want to be able to pass this function to other processes\n        # so they can independently generate random configurations of the current model\n        raise Exception('The get_random_config function must be overridden by the agent')\n\n    def build_graph(self, graph):\n        raise Exception('The build_graph function must be overridden by the agent')\n\n    def infer(self):\n        raise Exception('The infer function must be overridden by the agent')\n\n    def learn_from_epoch(self):\n        # I like to separate the function to train per epoch and the function to train globally\n        raise Exception('The learn_from_epoch function must be overridden by the agent')\n\n    def train(self, save_every=1):\n        # This function is usually common to all your models. Here is an example:\n        for epoch_id in range(0, self.max_iter):\n            self.learn_from_epoch()\n\n            # If you don't want to save during training, you can just pass a negative number\n            if save_every > 0 and epoch_id % save_every == 0:\n                self.save()\n\n    def save(self):\n        # This function is usually common to all your models. Here is an example:\n        global_step_t = tf.train.get_global_step(self.graph)\n        global_step, episode_id = self.sess.run([global_step_t, self.episode_id])\n        if self.config['debug']:\n            print('Saving to %s with global_step %d' % (self.result_dir, global_step))\n        self.saver.save(self.sess, self.result_dir + '/agent-ep_' + str(episode_id), global_step)\n\n        # I always keep the configuration that produced these results next to them\n        if not os.path.isfile(self.result_dir + '/config.json'):\n            config = self.config\n            if 'phi' in config:\n                del config['phi']\n            with open(self.result_dir + '/config.json', 'w') as f:\n                json.dump(self.config, f)\n\n\n    def init(self):\n        # This function is usually common to all your models\n        # but making 
it separate from the __init__ function allows it to be overridden cleanly\n        # this is an example of such a function\n        checkpoint = tf.train.get_checkpoint_state(self.result_dir)\n        if checkpoint is None:\n            self.sess.run(self.init_op)\n        else:\n            if self.config['debug']:\n                print('Loading the model from folder: %s' % self.result_dir)\n            self.saver.restore(self.sess, checkpoint.model_checkpoint_path)\n"
  },
  {
    "path": "tf-freeze/freeze.py",
    "content": "import os, argparse\n\nimport tensorflow as tf\nfrom tensorflow.python.framework import graph_util\n\ndir = os.path.dirname(os.path.realpath(__file__))\n\ndef freeze_graph(model_folder):\n    # We retrieve our checkpoint fullpath\n    checkpoint = tf.train.get_checkpoint_state(model_folder)\n    input_checkpoint = checkpoint.model_checkpoint_path\n    \n    # We precise the file fullname of our freezed graph\n    absolute_model_folder = \"/\".join(input_checkpoint.split('/')[:-1])\n    output_graph = absolute_model_folder + \"/frozen_model.pb\"\n\n    # Before exporting our graph, we need to precise what is our output node\n    # NOTE: this variables is plural, because you can have multiple output nodes\n    output_node_names = \"Accuracy/predictions\"\n\n    # We clear the devices, to allow TensorFlow to control on the loading where it wants operations to be calculated\n    clear_devices = True\n    \n    # We import the meta graph and retrive a Saver\n    saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices)\n\n    # We retrieve the protobuf graph definition\n    graph = tf.get_default_graph()\n    input_graph_def = graph.as_graph_def()\n\n    # We start a session and restore the graph weights\n    with tf.Session() as sess:\n        saver.restore(sess, input_checkpoint)\n\n        # We use a built-in TF helper to export variables to constant\n        output_graph_def = graph_util.convert_variables_to_constants(\n            sess, \n            input_graph_def, \n            output_node_names.split(\",\") # We split on comma for convenience\n        ) \n\n        # Finally we serialize and dump the output graph to the filesystem\n        with tf.gfile.GFile(output_graph, \"wb\") as f:\n            f.write(output_graph_def.SerializeToString())\n        print(\"%d ops in the final graph.\" % len(output_graph_def.node))\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    
parser.add_argument(\"--model_folder\", type=str, help=\"Model folder to export\")\n    args = parser.parse_args()\n\n    freeze_graph(args.model_folder)"
  },
  {
    "path": "tf-freeze/load.py",
    "content": "import argparse\n\nimport tensorflow as tf\n\ndef load_graph(frozen_graph_filename):\n    # We parse the graph_def file\n    with tf.gfile.GFile(frozen_graph_filename, \"rb\") as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString(f.read())\n\n    # We load the graph_def in the default graph\n    with tf.Graph().as_default() as graph:\n        tf.import_graph_def(\n            graph_def,\n            input_map=None,\n            return_elements=None,\n            name=\"prefix\",\n            op_dict=None,\n            producer_op_list=None\n        )\n    return graph\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--frozen_model_filename\", default=\"results/frozen_model.pb\", type=str, help=\"Frozen model file to import\")\n    args = parser.parse_args()\n\n    graph = load_graph(args.frozen_model_filename)\n\n    # We can list operations\n    for op in graph.get_operations():\n        print(op.name)\n        # prefix/Placeholder/inputs_placeholder\n        # ...\n        # prefix/Accuracy/predictions\n    x = graph.get_tensor_by_name('prefix/Placeholder/inputs_placeholder:0')\n    y = graph.get_tensor_by_name('prefix/Accuracy/predictions:0')\n\n    with tf.Session(graph=graph) as sess:\n        y_out = sess.run(y, feed_dict={\n            x: [[3, 5, 7, 4, 5, 1, 1, 1, 1, 1]] # < 45\n        })\n        print(y_out) # [[ False ]] Yay!"
  },
  {
    "path": "tf-freeze/server.py",
    "content": "import json, argparse, time\n\nimport tensorflow as tf\nfrom load import load_graph\n\nfrom flask import Flask, request\nfrom flask_cors import CORS\n\n##################################################\n# API part\n##################################################\napp = Flask(__name__)\ncors = CORS(app)\n@app.route(\"/api/predict\", methods=['POST'])\ndef predict():\n    start = time.time()\n    \n    data = request.data.decode(\"utf-8\")\n    if data == \"\":\n        params = request.form\n        x_in = json.loads(params['x'])\n    else:\n        params = json.loads(data)\n        x_in = params['x']\n\n    ##################################################\n    # Tensorflow part\n    ##################################################\n    y_out = persistent_sess.run(y, feed_dict={\n        x: x_in\n        # x: [[3, 5, 7, 4, 5, 1, 1, 1, 1, 1]] # < 45\n    })\n    ##################################################\n    # END Tensorflow part\n    ##################################################\n    \n    json_data = json.dumps({'y': y_out.tolist()})\n    print(\"Time spent handling the request: %f\" % (time.time() - start))\n    \n    return json_data\n##################################################\n# END API part\n##################################################\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--frozen_model_filename\", default=\"results/frozen_model.pb\", type=str, help=\"Frozen model file to import\")\n    parser.add_argument(\"--gpu_memory\", default=.2, type=float, help=\"GPU memory per process\")\n    args = parser.parse_args()\n\n    ##################################################\n    # Tensorflow part\n    ##################################################\n    print('Loading the model')\n    graph = load_graph(args.frozen_model_filename)\n    x = graph.get_tensor_by_name('prefix/Placeholder/inputs_placeholder:0')\n    y = 
graph.get_tensor_by_name('prefix/Accuracy/predictions:0')\n\n    print('Starting Session, setting the GPU memory usage to %f' % args.gpu_memory)\n    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=args.gpu_memory)\n    sess_config = tf.ConfigProto(gpu_options=gpu_options)\n    persistent_sess = tf.Session(graph=graph, config=sess_config)\n    ##################################################\n    # END Tensorflow part\n    ##################################################\n\n    print('Starting the API')\n    app.run()\n"
  },
  {
    "path": "tf-freeze/tiny_model.py",
    "content": "import tensorflow as tf\nimport numpy as np\n\n\nwith tf.variable_scope('Placeholder'):\n    inputs_placeholder = tf.placeholder(tf.float32, name='inputs_placeholder', shape=[None, 10])\n    labels_placeholder = tf.placeholder(tf.float32, name='labels_placeholder', shape=[None, 1])\n\nwith tf.variable_scope('NN'):\n    W1 = tf.get_variable('W1', shape=[10, 1], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b1 = tf.get_variable('b1', shape=[1], initializer=tf.constant_initializer(0.1))\n    W2 = tf.get_variable('W2', shape=[10, 1], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b2 = tf.get_variable('b2', shape=[1], initializer=tf.constant_initializer(0.1))\n\n    a = tf.nn.relu(tf.matmul(inputs_placeholder, W1) + b1)\n    a2 = tf.nn.relu(tf.matmul(inputs_placeholder, W2) + b2)\n\n    y = tf.divide(tf.add(a, a2), 2)\n\nwith tf.variable_scope('Loss'):\n    loss = tf.reduce_sum(tf.square(y - labels_placeholder) / 2)\n\nwith tf.variable_scope('Accuracy'):\n    predictions = tf.greater(y, 0.5, name=\"predictions\")\n    correct_predictions = tf.equal(predictions, tf.cast(labels_placeholder, tf.bool), name=\"correct_predictions\")\n    accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))\n\n\nadam = tf.train.AdamOptimizer(learning_rate=1e-3)\ntrain_op = adam.minimize(loss)\n\n# generate_data\ninputs = np.random.choice(10, size=[10000, 10])\nlabels = (np.sum(inputs, axis=1) > 45).reshape(-1, 1).astype(np.float32)\nprint('inputs.shape:', inputs.shape)\nprint('labels.shape:', labels.shape)\n\n\ntest_inputs = np.random.choice(10, size=[100, 10])\ntest_labels = (np.sum(test_inputs, axis=1) > 45).reshape(-1, 1).astype(np.float32)\nprint('test_inputs.shape:', test_inputs.shape)\nprint('test_labels.shape:', test_labels.shape)\n\nbatch_size = 32\nepochs = 10\n\nbatches = []\nprint(\"%d items in batch of %d gives us %d full batches and %d batches of %d items\" % (\n    len(inputs),\n    batch_size,\n    len(inputs) // 
batch_size,\n    1 if len(inputs) % batch_size else 0,\n    len(inputs) % batch_size)\n)\nfor i in range(len(inputs) // batch_size):\n    batch = [ inputs[batch_size*i:batch_size*i+batch_size], labels[batch_size*i:batch_size*i+batch_size] ]\n    batches.append(list(batch))\nif (i + 1) * batch_size < len(inputs):\n    batch = [ inputs[batch_size*(i + 1):], labels[batch_size*(i + 1):] ]\n    batches.append(list(batch))\nprint(\"Number of batches: %d\" % len(batches))\nprint(\"Size of full batch: %d\" % len(batches[0][0]))\nprint(\"Size of final batch: %d\" % len(batches[-1][0]))\n\nglobal_count = 0\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    for i in range(epochs):\n        for batch in batches:\n            # print(batch[0].shape, batch[1].shape)\n            train_loss, _ = sess.run([loss, train_op], feed_dict={\n                inputs_placeholder: batch[0],\n                labels_placeholder: batch[1]\n            })\n            # print('train_loss: %f' % train_loss)\n\n            if global_count % 100 == 0:\n                acc = sess.run(accuracy, feed_dict={\n                    inputs_placeholder: test_inputs,\n                    labels_placeholder: test_labels\n                })\n                print('accuracy: %f' % acc)\n            global_count += 1\n\n    acc = sess.run(accuracy, feed_dict={\n        inputs_placeholder: test_inputs,\n        labels_placeholder: test_labels\n    })\n    print(\"final accuracy: %f\" % acc)\n\n    saver = tf.train.Saver()\n    last_chkp = saver.save(sess, 'results/graph.chkp')\n\nfor op in tf.get_default_graph().get_operations():\n    print(op.name)\n"
  },
  {
    "path": "tf-mut-control/README.md",
    "content": "# TensorFlow: Mutating variables and control flow\n\n## TL, DR! \n### EN\n- I explore the different ways to manually mutate Variables in TF (content and shape)\n- I explore how to construct control flow like an \"if statement\"\n- I end up and showing a weird TF behaviour when you mix those\n- Bonus: a first try at animated GIF :D\n\n### FR\n- J'explore les différents moyens de modifier une Variable manuellement (contenu et \"forme\")\n- J'explore comment construire des flux de données (condition if/else)\n- Je finis sur un cas étrange de TF quand on utilise les deux sujets précédents en même temps\n- En bonus: une première tentative de GIF animé :D\n"
  },
  {
    "path": "tf-mut-control/dyn_array.py",
    "content": "import os\nimport tensorflow as tf\n\ndir = os.path.dirname(os.path.realpath(__file__))\n\nembed = tf.get_variable(\n    \"embed\",\n    shape=[0, 1],\n    dtype=tf.float32,\n    validate_shape=False # This shape will evolve, so we need to remove any TensorFlow optim here\n)\nword_dict = tf.Variable(\n    initial_value=[], \n    name='word_dict', \n    dtype=tf.string,\n    validate_shape=False,\n    trainable=False\n)\ntextfile = tf.placeholder(tf.string)\n\n# Update word dict\nsplitted_textfile = tf.string_split(textfile, \" \")\ntmp_word_dict = tf.concat([word_dict, splitted_textfile.values], 0)\ntmp_word_dict, word_idx, word_count = tf.unique_with_counts(tmp_word_dict)\nassign_word_dict = tf.assign(word_dict, tmp_word_dict, validate_shape=False)\nwith tf.control_dependencies([assign_word_dict]):\n    word_dict_value = word_dict.read_value()\n    missing_nb_dim = tf.shape(word_dict_value)[0] - tf.shape(embed)[0]\n    missing_nb_dim = tf.Print(missing_nb_dim, data=[missing_nb_dim, word_dict_value], message=\"missing_nb_dim, word_dict:\", summarize=10)\n\n# Update embed\ndef update_embed_func():\n    new_columns = tf.random_normal([missing_nb_dim, 1], mean=-1, stddev=4)\n    new_embed = tf.concat([embed, new_columns], 0)\n    assign_op = tf.assign(embed, new_embed, validate_shape=False)\n    return assign_op\n\nshould_update_embed = tf.less(0, missing_nb_dim)\nassign_to_embed = tf.cond(should_update_embed, update_embed_func, lambda: embed)\nwith tf.control_dependencies([assign_to_embed]):\n    # outputs = tf.identity(outputs)\n    embed_value = embed.read_value()\n    word_embed = tf.embedding_lookup_sparse(embed_value, word_idx)\n\npersistent_sess = tf.Session()\npersistent_sess.run(tf.global_variables_initializer())\n\npersistent_sess.run(assign_to_embed, feed_dict={\n    textfile: [\"that is cool! 
that is awesome!\"]\n})\nprint(persistent_sess.run(tf.trainable_variables()[0]))\npersistent_sess.run(assign_to_embed, feed_dict={\n    textfile: [\"this is cool! that was crazy\"]\n})\nprint(persistent_sess.run(tf.trainable_variables()[0]))\n\n\n\ntf.summary.FileWriter(dir, persistent_sess.graph).flush()\n\n# More discussion on https://github.com/tensorflow/tensorflow/issues/7782"
  },
  {
    "path": "tf-queues/README.md",
    "content": "# TensorFlow: How to optimise your input pipeline with queues and multi-threading\n\n## TL, DR! \n### EN\n- I implement a first basic neural net using the \"feed_dict\" system\n- I show step by step how to use queues in TF\n- I update the first example with a queue implementation and show that i train 33% faster the neural net\n- bonus: Some more code on queues on my Github in the references section\n\n### FR\n- J'implémente un premier réseau de neurones basique utilisant le système \"feed_dict\"\n- Je montre étape par étape comment utiliser les queues dans TF\n- Je mets à jour le code du premier exemple et montre que j'entraine mon réseau de neurones 33% plus vite\n- En bonus: du code en plus sur mon Github dans la section Références de l'article"
  },
  {
    "path": "tf-queues/enqueue.py",
    "content": "import tensorflow as tf\n\nq = tf.FIFOQueue(capacity=3, dtypes=tf.float32)\n\nx_input_data = tf.random_normal([3], mean=-1, stddev=4)\nenqueue_many_op = q.enqueue_many(x_input_data) # <- x1 - x2 -x3 |\nenqueue_op = q.enqueue(x_input_data) # <- [x1, x2, x3] - void - void |\n\nx_input_data2 = tf.random_normal([3, 1], mean=-1, stddev=4)\nenqueue_many_op2 = q.enqueue_many(x_input_data2) # <- [x1] - [x2] - [x3] |\nenqueue_op2 = q.enqueue(x_input_data2) # <- [ [x1], [x2], [x3] ] - void - void |\n\ndequeue_op = q.dequeue() \n\nwith tf.Session() as sess:\n    sess.run(enqueue_many_op)\n    print(sess.run(dequeue_op), sess.run(dequeue_op), sess.run(dequeue_op))\n    \n    sess.run(enqueue_op)\n    print(sess.run(dequeue_op))\n\n    sess.run(enqueue_many_op2)\n    print(sess.run(dequeue_op), sess.run(dequeue_op), sess.run(dequeue_op))\n\n    sess.run(enqueue_op2)\n    print(sess.run(dequeue_op))"
  },
  {
    "path": "tf-queues/lorem_ipsum1.txt",
    "content": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec ac ex ullamcorper, rutrum neque in, tristique eros. Nulla ac molestie diam, ac mattis neque. Sed ut lectus non odio imperdiet efficitur ac eget elit. Sed a mi non tellus condimentum semper. Sed dolor mi, gravida non orci mollis, tempor elementum metus. Nullam non ex sed sapien placerat fermentum sit amet ac odio. Sed mollis, odio ullamcorper varius condimentum, dui nisi porta ante, nec rhoncus orci turpis a magna. Aenean nec velit id odio faucibus ornare ac a mi. Morbi pharetra auctor nibh, in hendrerit felis viverra quis. Praesent pellentesque neque hendrerit felis venenatis, quis commodo lorem fermentum. Aenean mattis quam eu ante condimentum ultricies."
  },
  {
    "path": "tf-queues/lorem_ipsum2.txt",
    "content": "Pellentesque tempus est nec tristique porttitor. Ut vitae metus rutrum, dictum lectus eu, fringilla erat. Vivamus dignissim lobortis urna in rutrum. Duis lobortis ligula quis aliquet sagittis. Fusce ac faucibus tellus, in varius nibh. Nunc mi turpis, pellentesque vel ligula id, condimentum fermentum sapien. Pellentesque dapibus quam a ipsum tincidunt, quis sodales nunc ultrices. Duis fermentum luctus tortor eget fermentum. Vivamus iaculis magna nisi, ac vulputate elit interdum vitae. Praesent id mauris pharetra, mollis diam et, lacinia nisl. Vestibulum feugiat dignissim tortor eget luctus. In ante turpis, egestas vitae aliquet id, posuere nec sem. Sed arcu arcu, porta scelerisque tempus euismod, condimentum eget erat."
  },
  {
    "path": "tf-queues/ptb_producer.py",
    "content": "import os, time\nimport tensorflow as tf\n\n# We are fucked, the models folder has been kicked out of tensorflow 1.0 deps\n# I can't import this handy reader anymore\n# (For anyone interested in the source code you can find it here: https://github.com/tensorflow/models/blob/master/tutorials/rnn/ptb/reader.py)\n# from tensorflow.models.rnn.ptb import reader\n\ndir = os.path.dirname(os.path.realpath(__file__))\n\n# ain't cool, but hey! Let's rebuild one and even better\n# Let's make it to create an online RNN which adapt automatically to unseen words\n# crazy shit you say ? Not so much my friend\n\n# Let's say some backend is pulling some text files fom somewhere and dumpt them in the current folder\n# We will use a regexp to scan them\nfilenames = tf.train.match_filenames_once(dir + '/*.txt') # One minor thing, this op needs to be initialized\n# And then we will build a queue to load them once at a time\n# Again: multiple threads, no waiting\nfilenames = tf.Print(filenames, [filenames], message=\"filename_queue: \")\nfilename_queue = tf.train.string_input_producer(filenames) # Here we build a queue\nreader = tf.WholeFileReader()\nkey, value = reader.read(filename_queue) # Key holds the filename and value the content\n\n# So now, we have an async queue to load and read our text files from the filesystem\n\n# What we want now, is an other async job to prepare all the text files content\n# in a good batched format for our GPU\n# Remember the only goal of queues is to avoid starving the GPU\n# The bottleneck could very well be the environment around the GPU and the GPU itself\n\n# We will train with each batch containing ...\nbatch_size = 5 # ... 5 sequence of ...\nseq_length = 5 # ... 
5 words\n# This leads us to find the epoch_size\nsplitted_textfile = tf.string_split([value], \" \")\ntext_length = tf.size(splitted_textfile.values) \nbatch_len = text_length // batch_size\ntext = tf.reshape(splitted_textfile.values[:batch_size * batch_len], [batch_size, batch_len])\nepoch_size = (batch_len - 1) // seq_length\nrange_q = tf.train.range_input_producer(epoch_size, shuffle=False)\nindex = range_q.dequeue()\nx = text[:, index * seq_length:(index + 1) * seq_length]\ny = text[:, index * seq_length + 1:(index + 1) * seq_length + 1]\nwith tf.Session() as sess:\n    # We initialize Variables\n    # This is when \"match_filenames_once\" run the regexp\n    sess.run(tf.global_variables_initializer())\n\n    coord = tf.train.Coordinator()\n    threads = tf.train.start_queue_runners(coord=coord)\n\n    for i in range(1):\n        print(sess.run([x, y]))\n    # print(\"File %s beginning with %s\" % (sess.run(key), str(sess.run(value)[:30]) + \"...\") )\n    # print(\"File %s beginning with %s\" % (sess.run(key), str(sess.run(value)[:30]) + \"...\") )\n    # print(\"File %s beginning with %s\" % (sess.run(key), str(sess.run(value)[:30]) + \"...\") )\n    # print(\"File %s beginning with %s\" % (sess.run(key), str(sess.run(value)[:30]) + \"...\") )\n    \n    coord.request_stop()\n    coord.join(threads)\n"
  },
  {
    "path": "tf-queues/reader_test.py",
    "content": "import os, time\nimport tensorflow as tf\n\ndir = os.path.dirname(os.path.realpath(__file__))\n\nfilenames = tf.train.match_filenames_once(dir + '/*.txt')\nfilename_queue = tf.train.string_input_producer(filenames)\nreader = tf.WholeFileReader()\nkey, value = reader.read(filename_queue)\n\nsplitted_textfile = tf.string_split([value], \" \")\nvalue_size = tf.size(splitted_textfile.values)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n\n    coord = tf.train.Coordinator()\n    threads = tf.train.start_queue_runners(coord=coord)\n\n    print(sess.run(value_size))\n\n    coord.request_stop()\n    coord.join(threads)"
  },
  {
    "path": "tf-queues/test_batching.py",
    "content": "import os\nimport numpy as np\nimport tensorflow as tf\n\ndir = os.path.dirname(os.path.realpath(__file__))\n\n# We simulates some raw inputs data\n# let's say we receive 100 batches, each containing 50 elements\nx_inputs_data = tf.random_normal([2], mean=0, stddev=1)\n# q = tf.FIFOQueue(capacity=10, dtypes=tf.float32)\n# enqueue_op = q.enqueue_many(x_inputs_data)\n\n# input = q.dequeue()\n\n# numberOfThreads = 1\n# qr = tf.train.QueueRunner(q, [enqueue_op] * numberOfThreads)\n\nbatch_input = tf.train.batch(\n    [x_inputs_data], \n    batch_size=3, \n    num_threads=1, \n    capacity=32, \n    enqueue_many=False, \n    shapes=None, \n    dynamic_pad=False, \n    allow_smaller_final_batch=True\n)\nwith tf.Session() as sess:\n    coord = tf.train.Coordinator()\n    threads = tf.train.start_queue_runners(coord=coord)\n    # threads = qr.create_threads(sess, coord=coord, start=True)\n\n    print(sess.run([x_inputs_data, batch_input]))\n    coord.request_stop()\n    coord.join(threads)"
  },
  {
    "path": "tf-save-load/README.md",
    "content": "# Tensorflow: How to freeze a model and serve it with a python API\n\n## TL, DR! \n### EN\n\n### FR\n"
  },
  {
    "path": "tf-save-load/embedding.py",
    "content": "import os, time\nimport tensorflow as tf\n\ndir = os.path.dirname(os.path.realpath(__file__))\nresults_dir = dir + \"/results/\" + str(int(time.time()))\n\n### UTILS ###\ndef batch_text(corpus, batch_size, seq_length):\n    if seq_length >= len(corpus):\n        raise Error(\"seq_length >= len(corpus): %d>=%d\" % (seq_length, len(corpus)))\n\n    seqs = [corpus[i:i+seq_length] for i in range(len(corpus) - seq_length)]\n    ys = [corpus[i:i+1] for i in range(seq_length, len(corpus))]\n    for i in range(0, len(seqs), batch_size):\n        x = seqs[i:i+batch_size]\n        y = ys[i:i+batch_size]\n\n        yield x, y\n\n        \n# Let's take a usual usecase you might have:\n# You want to train a NN on a NLP task to do predictive coding on a corpus using an embedding\n# This is an unsupervised task, one can see this as a way to evaluate the capcity of model in terms of pure memorisation\n\n# We load our corpus\nwith open(\"lorem.txt\", 'r') as f:\n    text = f.read()\ncorpus = text.split()\ncorpus_length = len(corpus)\n# We build our mapping between token ids and tokens\ntokens = set(corpus)\nword_to_id_dict = { word:i for i, word in enumerate(tokens) }\nid_to_word_dict = { i:word for word,i in word_to_id_dict.items() }\nid_corpus = [ word_to_id_dict[word] for word in corpus ]\nnb_token = len(tokens)\n\n\n# We will train this embedding with predictive coding\n# The input of our model is a number \"seq_length\" of precedent words ids \n# and the output is the id of next word\nseq_length = 5\nwith tf.variable_scope(\"placeholder\"):\n    x = tf.placeholder(tf.int32, shape=[None, seq_length], name=\"x\")\n    y_true = tf.placeholder(tf.int32, shape=[None, 1], name=\"y_true\")\n\n\n# We create an embedding\n# We choose in how many dimensions we want to embed our word vectors\ndim_embedding = 30\nwith tf.variable_scope(\"embedding\"):\n    # Let's build our embedding, we intialize it with s impel random normal centerd on 0 with a small variance\n    
embedding = tf.get_variable(\"embedding\", shape=[nb_token, dim_embedding], initializer=tf.random_normal_initializer(0., 1e-3))\n    # Then we retrieve the context vector\n    context_vec = tf.nn.embedding_lookup(embedding, x, name=\"lookup\") # Dim: bs x seq_length x dim_embedding\n    context_vec = tf.reshape(context_vec, [tf.shape(x)[0], seq_length * dim_embedding])\n\n# We build a Neural net to predict the next word vector\nwith tf.variable_scope(\"1layer\"):\n    # We use the context vector to predict the next word vector inside the embedding\n    W1 = tf.get_variable(\"W1\", dtype=tf.float32, shape=[seq_length * dim_embedding, dim_embedding])\n    b1 = tf.get_variable(\"b1\", dtype=tf.float32, shape=[dim_embedding], initializer=tf.constant_initializer(.1))\n    h1 = tf.nn.relu(tf.matmul(context_vec, W1) + b1)\n\n    W2 = tf.get_variable(\"W2\", dtype=tf.float32, shape=[dim_embedding, dim_embedding])\n    b2 = tf.get_variable(\"b2\", dtype=tf.float32, shape=[dim_embedding], initializer=tf.constant_initializer(.1))\n    y_vector = tf.matmul(h1, W2) + b2 # Dim: bs x dim_embeddiing \n\n    # Now we calculated the dot product of the current words with all other words vectors\n    z = tf.matmul(y_vector, tf.transpose(embedding)) # Dim: bs x nb_token\n\nwith tf.variable_scope(\"loss\"):\n    y_true_reshaped = tf.reshape(y_true, [-1])\n    losses = tf.nn.sparse_softmax_cross_entropy_with_logits(None, y_true_reshaped, z)\n    loss_op = tf.reduce_mean(losses)\n\n    tf.summary.scalar('loss', loss_op)\n\nwith tf.variable_scope(\"Accuracy\"):\n    a = tf.nn.softmax(z)\n    predictions = tf.cast(tf.argmax(a, 1, name=\"predictions\"), tf.int32)\n    correct_predictions = tf.equal(predictions, y_true_reshaped)\n    nb_predicted_words = tf.shape(predictions)[0]\n    nb_wrong_predictions = nb_predicted_words - tf.reduce_sum(tf.cast(correct_predictions, tf.int32))\n    accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32), name=\"accuracy\")\n\nwith 
tf.variable_scope('Optimizer'):\n    global_step_t = tf.Variable(0, name=\"global_step\", trainable=False)\n    lr = tf.train.exponential_decay(3e-2, global_step_t, 500, 0.5, staircase=True)\n    adam = tf.train.AdamOptimizer(lr)\n    tf.summary.scalar('lr', lr)\n    \n    train_op = adam.minimize(loss_op, global_step=global_step_t, name=\"train_op\")\n\n    summaries = tf.summary.merge_all()\n\n# We build a Saver in the default graph handling all existing Variables\nsaver = tf.train.Saver()\n\nnb_epochs = 500\nwith tf.Session() as sess: # The Session handles the default graph too\n    sess.run(tf.global_variables_initializer())\n    sw = tf.summary.FileWriter(results_dir, sess.graph)\n\n    for i in range(nb_epochs):\n        input_gen = batch_text(id_corpus, corpus_length // 6, seq_length)\n        for x_batch, y_true_batch in input_gen:\n            to_compute = [train_op, loss_op, global_step_t, summaries]\n            feed_dict = {\n                x: x_batch,\n                y_true: y_true_batch\n            }\n            _, loss, global_step, summaries_metric = sess.run(to_compute, feed_dict=feed_dict)\n\n            # We add a data point in our \"events...\" file\n            sw.add_summary(summaries_metric, global_step)\n\n        if (i + 1) % 250 == 0:\n            # We save our model\n            saver.save(sess, results_dir + '/model', global_step=i + 1)\n\n    # We compute the final accuracy\n    val_gen = batch_text(id_corpus, corpus_length, seq_length)\n    for x_batch, y_batch in val_gen:\n        feed_dict = {\n            x: x_batch,\n            y_true: y_batch\n        }\n        acc, nb_preds, nb_w_pred = sess.run([accuracy, nb_predicted_words, nb_wrong_predictions], feed_dict=feed_dict)\n        print('Final accuracy: %f' % acc)\n        print('%d mispredicted words of %d predictions' % (nb_w_pred, nb_preds))\n\n"
  },
  {
    "path": "tf-save-load/lorem.txt",
    "content": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas dictum pulvinar leo nec pulvinar. Sed et rhoncus turpis. Nunc vitae nisl elementum, facilisis libero eu, elementum augue. Sed et lectus tellus. Interdum et malesuada fames ac ante ipsum primis in faucibus. Proin pellentesque lacus sit amet ante luctus cursus. Nunc ut lacus eget arcu varius facilisis non eget magna. Nullam vestibulum velit non lorem gravida, vel bibendum ligula suscipit. Donec aliquet aliquet dignissim. Integer vestibulum elementum mi, et auctor erat egestas vitae.\n\nMorbi vitae suscipit magna. Pellentesque auctor nisi et turpis scelerisque laoreet. Etiam nec hendrerit ipsum. Nulla ut nibh sit amet elit eleifend tempus. Aenean mi ligula, aliquam venenatis viverra at, suscipit id dui. Etiam auctor nec neque sit amet convallis. Suspendisse quis diam eros.\n\nPraesent quis porta tortor, at tincidunt justo. Curabitur nec tortor consectetur, porta ligula sit amet, consequat nibh. Curabitur blandit purus at blandit feugiat. Nullam aliquam eget tellus quis maximus. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Donec et imperdiet risus. Morbi sapien enim, finibus at erat sit amet, dictum rutrum libero. Suspendisse fermentum consequat est, eu dapibus orci aliquam at. Phasellus eu nisl nibh. Phasellus in aliquam magna.\n\nInteger interdum dui nec lacus varius volutpat. Fusce volutpat, odio nec euismod bibendum, risus odio ultricies mi, vel aliquam quam sapien eget tortor. Maecenas et tempus felis. Fusce blandit eleifend metus vel feugiat. Proin sed tristique orci. Sed vel tincidunt mi. Nunc lobortis arcu quis enim feugiat congue. Ut quis arcu nisl. Ut dui lorem, scelerisque at suscipit at, congue vel mi. Integer feugiat turpis ut elementum suscipit. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.\n\nNulla tincidunt pellentesque egestas. 
Phasellus leo ex, scelerisque mollis eleifend ac, commodo vel arcu. Morbi in auctor mauris. Vestibulum id lobortis mi. Etiam scelerisque tempus augue, id scelerisque sapien euismod eu. Integer euismod, neque ut egestas hendrerit, est urna feugiat felis, ut maximus arcu arcu sed nisl. Suspendisse facilisis aliquam ante eu consequat. Ut vitae magna enim. Sed in ipsum vitae turpis lacinia faucibus.\n\nSed augue ligula, tempor sed nisl eget, vehicula faucibus nunc. Nullam feugiat at tortor ut malesuada. In ullamcorper, lorem quis scelerisque mattis, magna purus convallis tortor, eu laoreet ligula ligula ut mauris. Sed odio ex, tempus ut vestibulum a, commodo sit amet orci. Donec et sodales nibh, ac gravida leo. Etiam vel urna vel ipsum aliquam luctus. Quisque bibendum felis ac urna maximus sagittis. Nulla eget tortor eu nisl malesuada varius a et tellus. Suspendisse sed posuere neque.\n\nMorbi sed tortor in tellus commodo fermentum eu ultricies elit. Sed nec consectetur enim, egestas consequat ligula. Sed massa ante, vulputate nec tortor nec, porttitor lobortis augue. Integer pulvinar, dui id dapibus commodo, sapien nisi scelerisque lacus, eget porttitor orci lorem vitae massa. Donec a nibh at justo elementum faucibus. Suspendisse lacinia risus rhoncus dui elementum, sit amet volutpat ligula venenatis. Vivamus ut pharetra risus. Sed et augue sit amet nisi varius tempus. Proin elit purus, iaculis ac imperdiet pretium, posuere vel purus. Aliquam faucibus, diam id pretium sollicitudin, nulla urna aliquam sem, sed feugiat ipsum eros vitae nisl. In ornare, justo ac bibendum mattis, lorem mauris sagittis tellus, eget vehicula tellus orci vel leo. Vivamus malesuada arcu eget luctus imperdiet. Morbi laoreet venenatis felis ac molestie. Maecenas in accumsan turpis. Sed nec laoreet metus, sit amet auctor urna.\n\nVestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Sed et nibh urna. Nam ac massa at nisl suscipit rutrum. 
Aenean vehicula lectus ex, a pretium nunc luctus at. Nullam blandit est id purus pharetra ornare. Sed et tortor et felis vestibulum imperdiet quis vel diam. Nunc vehicula nec quam bibendum eleifend. Proin ornare nulla sit amet lacinia luctus. Fusce ultricies vitae enim quis pretium. Aliquam venenatis ligula sit amet nisi venenatis, non viverra justo vehicula. Phasellus eu sollicitudin erat, at blandit ante. Suspendisse vel elit vel eros venenatis rhoncus eget quis urna. Duis dictum, erat ac sodales accumsan, tellus erat egestas eros, in sagittis eros ligula in lectus.\n\nProin suscipit augue non mattis suscipit. Ut eleifend tempor urna, non blandit velit aliquet ac. Quisque id dictum mi. Aliquam mi quam, ullamcorper eget elit et, feugiat cursus erat. Cras mattis tellus sed eleifend laoreet. Etiam pellentesque eros ipsum, non hendrerit purus dignissim sed. Quisque dignissim neque sem, id scelerisque velit condimentum eu.\n\nSed accumsan nisl in odio consectetur, at hendrerit elit molestie. Donec id gravida ante, sed suscipit nunc. Nunc id vehicula est, id tempor tellus. Sed vel maximus libero, sodales sodales arcu. Nam dictum lacus ipsum, vitae tempus est vulputate iaculis. Nam tincidunt arcu eget felis fermentum, placerat ultricies quam vehicula. Sed suscipit facilisis mauris, quis pretium augue semper eu. Fusce in tempus quam, vel facilisis urna. Donec tempor nunc metus, et fringilla ipsum ultrices in. Nam libero sapien, faucibus id felis sed, elementum porta neque. Aenean vel magna a nisl commodo varius a id ipsum. Maecenas quis tortor varius, finibus nisi at, ornare tortor. Duis sed felis enim. Phasellus eget lacus posuere justo tempor interdum. Maecenas ex ante, sodales ac molestie sed, mollis non metus. Donec tortor sem, volutpat id orci eget, semper porta elit.\n\nCurabitur non elit id nunc ornare mollis. Proin ac purus vitae sem consectetur aliquet. Morbi sed felis vitae tellus mattis hendrerit congue in velit. Cras vitae vulputate est. Aenean eu eros ex. 
Nam eget libero velit. Vestibulum pharetra sollicitudin mi, pretium rhoncus ipsum faucibus ut. Duis tincidunt erat nec eros tincidunt lobortis. Nunc ac tellus eget nunc imperdiet lobortis ac a diam. Sed facilisis libero vitae rhoncus pulvinar. Vestibulum aliquet posuere lectus, aliquet malesuada purus porttitor eget. Donec at lobortis massa. Praesent consectetur rutrum purus, at consectetur nisi ultricies ut. In vehicula elit neque, vel consectetur purus porta vel. Nulla fermentum felis commodo velit gravida sagittis sed aliquam nisi.\n\nCurabitur vel urna et leo scelerisque maximus. Praesent mattis, massa nec lobortis tempor, nulla tortor blandit mi, id malesuada urna velit sit amet elit. Aenean fringilla vel tellus eget facilisis. Proin eget lorem in quam consectetur facilisis eget eu odio. Quisque pulvinar consectetur nisi nec venenatis. Praesent ut semper nisi. Vivamus est nulla, semper sed massa vitae, placerat eleifend purus. Phasellus vel risus vitae mi pharetra dictum. Mauris maximus consequat faucibus. Pellentesque est velit, semper et ante vel, commodo ullamcorper arcu.\n\nMaecenas placerat lorem nec auctor sollicitudin. Aliquam erat volutpat. Donec gravida pretium augue, in tristique lectus rutrum ut. Aliquam aliquam, neque quis consequat eleifend, ex ex tincidunt dui, feugiat faucibus metus leo ac odio. Integer vel vulputate sem. Phasellus ullamcorper sapien risus, sit amet pulvinar sapien semper non. Aliquam rutrum maximus tellus, at consequat dolor.\n\nPhasellus convallis, est interdum auctor dignissim, felis libero tempor urna, in maximus mi sapien eget dolor. Nulla facilisi. Etiam viverra sollicitudin neque, vel semper dui porta ut. Sed viverra massa diam, nec feugiat massa vehicula non. Curabitur tempor arcu eu arcu venenatis vestibulum. In ligula mauris, laoreet et tortor non, fringilla blandit ante. Aenean sit amet orci est. Suspendisse vel felis at elit tristique commodo. 
Nullam rutrum faucibus velit, id porttitor nisi vehicula eget.\n\nInteger tristique, orci sit amet molestie dictum, magna justo lacinia libero, faucibus accumsan velit neque eget elit. Maecenas diam purus, eleifend vitae aliquet ut, lacinia eu lectus. Praesent mi nunc, porta suscipit viverra eget, auctor id magna. Praesent non urna libero. Praesent pharetra nisi neque, eget pulvinar lectus porttitor porta. Nunc auctor odio vitae posuere lacinia. Phasellus eu tortor venenatis, maximus libero vel, porta mauris. Curabitur et arcu nec dolor imperdiet auctor. In id eros laoreet, vehicula eros non, elementum urna. Curabitur semper mollis dolor, vel ultrices augue ullamcorper id. Integer eget auctor lacus. Vestibulum accumsan odio at ipsum feugiat, nec dignissim velit suscipit. Phasellus tincidunt, quam at mollis pharetra, dui risus congue tortor, at tristique dui neque vitae velit. Vivamus et commodo purus. Aliquam eu fringilla leo, vitae facilisis velit. Etiam eget tellus vel leo interdum rutrum.\n\nSuspendisse tempor sem a felis fringilla accumsan. Proin rhoncus tincidunt risus, nec cursus enim malesuada sit amet. Quisque sem arcu, vulputate vel condimentum nec, porta non turpis. Aenean tincidunt nisl non faucibus hendrerit. Donec cursus at urna quis mollis. Fusce suscipit risus id fringilla rutrum. Quisque lectus velit, feugiat et mauris ac, condimentum molestie elit. In sollicitudin lorem quis congue eleifend. Cras id euismod magna. In fermentum porta fermentum. Etiam mollis viverra efficitur. Aenean efficitur faucibus pellentesque. Vivamus ullamcorper vitae ex eu pharetra. Aenean maximus, dui eu luctus sollicitudin, quam nibh malesuada massa, quis pulvinar nulla nibh id sapien. Curabitur eget quam nec augue mattis scelerisque. Phasellus feugiat purus vel nisl facilisis rhoncus.\n\nVivamus sit amet mollis metus. Aenean mollis neque at urna scelerisque pulvinar. Morbi varius nibh augue, eget sollicitudin nunc interdum sed. 
Interdum et malesuada fames ac ante ipsum primis in faucibus. Sed ligula velit, consequat condimentum justo id, elementum porttitor massa. Maecenas feugiat accumsan erat vel sagittis. Praesent eu lorem interdum, maximus eros et, laoreet velit. Morbi at accumsan mi.\n\nVivamus dictum et odio eget scelerisque. Cras dictum, leo eget ultrices vehicula, lectus augue venenatis nunc, eu convallis velit nisi malesuada massa. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Duis vehicula purus vel mauris dictum, eget blandit nulla dictum. Sed eu erat imperdiet, facilisis turpis et, imperdiet urna. Duis iaculis pretium leo, non imperdiet urna cursus at. Vestibulum rutrum urna odio, et volutpat sem volutpat nec. Interdum et malesuada fames ac ante ipsum primis in faucibus.\n\nProin molestie sollicitudin nulla vel venenatis. Vestibulum vel auctor urna, sit amet aliquam velit. Pellentesque feugiat erat ante, ac elementum lacus feugiat a. Praesent eu placerat erat, quis mattis lectus. Etiam sit amet molestie velit. Maecenas ante augue, aliquam in tincidunt ac, accumsan nec felis. Vivamus vel volutpat orci, ac consequat neque. Duis porta mi nec volutpat tempor. Vivamus rhoncus neque ipsum.\n\nVivamus pharetra augue metus, vel condimentum augue facilisis sit amet. Fusce non nibh varius, porta ante sed, vehicula ipsum. Aenean bibendum erat leo, at egestas magna ultricies nec. Aenean vel risus a metus commodo venenatis. In hac habitasse platea dictumst. Aenean dolor neque, lacinia varius sollicitudin vel, dignissim vitae felis. Nulla facilisi.\n\nPhasellus venenatis nunc ut nunc consectetur sagittis. Donec a velit a elit lobortis suscipit. Cras accumsan sit amet nunc tempus molestie. Vestibulum at dolor vel felis luctus bibendum. Quisque porttitor, augue id ornare iaculis, purus nisi vehicula ligula, vel pellentesque nisl ante at mauris. Mauris eros risus, lacinia non est quis, tempor tempor velit. Praesent nec suscipit sapien, a rhoncus dui. 
Proin dolor lorem, ullamcorper eget mi a, ultricies aliquet ligula. Suspendisse potenti. Phasellus semper magna vel odio porta facilisis. Mauris et eros non mi semper sodales. Phasellus vehicula dolor vel neque suscipit volutpat. Nam nec massa lorem. Nullam efficitur cursus placerat. Nunc dapibus accumsan luctus. Aliquam erat volutpat.\n\nQuisque rhoncus orci nec aliquet volutpat. Sed non metus vitae mi tincidunt posuere. Interdum et malesuada fames ac ante ipsum primis in faucibus. Donec ullamcorper convallis sapien, eu fermentum leo sollicitudin eu. In imperdiet mollis nisl, nec consequat justo lacinia id. Curabitur ac ipsum vel enim pretium egestas in sit amet nisi. Nam porttitor, ipsum ac sollicitudin sodales, nulla tellus sodales lacus, vel eleifend mauris ex id purus. Sed id cursus quam. Fusce volutpat ligula turpis, a blandit dolor commodo eget. Proin in velit ac ipsum ultrices pellentesque. Fusce quam erat, lobortis et faucibus non, tincidunt non ipsum.\n\nCras faucibus convallis felis ac pharetra. Sed nibh metus, consectetur semper tempor eu, vestibulum vel justo. In hac habitasse platea dictumst. Curabitur condimentum eros tellus, quis condimentum sapien interdum eu. Nunc sagittis dictum odio. Ut rhoncus ex eros, vel ultricies massa sollicitudin et. Pellentesque mattis lobortis dui, sit amet rhoncus erat facilisis ac.\n\nNulla pretium porta metus, sed commodo velit tincidunt at. In dapibus neque at nisi ultrices feugiat. Fusce a arcu eget sapien pulvinar elementum quis eu sapien. Sed gravida massa lacinia dui lacinia, sit amet euismod nisl porttitor. Nulla facilisi. Integer non tincidunt ex. Nunc a euismod quam, dapibus egestas sapien. Mauris maximus semper magna, ut porttitor ligula. Sed cursus vulputate turpis, a ultricies purus. Proin semper enim et nisl elementum, quis vehicula tortor gravida. Maecenas vehicula accumsan risus rhoncus sollicitudin. Praesent lobortis velit vel enim rutrum, in aliquet nulla euismod. 
Ut eu porta augue.\n\nCras ornare commodo urna, id lobortis lorem scelerisque facilisis. Nunc ut molestie diam. Suspendisse a tortor sem. Aenean varius lacus vitae nibh porttitor auctor. Cras consequat ipsum eget massa ullamcorper, eget aliquet ante congue. Vestibulum id semper purus. Nam rhoncus leo ac sapien aliquam, in rhoncus ligula blandit.\n\nNulla facilisi. Suspendisse non rhoncus leo. Nam eu purus sollicitudin, lobortis ex at, consectetur eros. Morbi pulvinar leo feugiat, malesuada quam sit amet, aliquet mi. Donec malesuada, est vel ullamcorper ultrices, magna mauris ultrices nisl, vitae pretium nisl tellus consequat mi. Pellentesque quis consectetur tortor, eu commodo odio. Quisque in sollicitudin mauris. Ut laoreet leo vel est interdum accumsan. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Curabitur sollicitudin, mi sed tincidunt maximus, erat turpis cursus sapien, et accumsan nibh erat a libero. Nulla velit augue, fermentum eget est eget, consequat sollicitudin sem. Interdum et malesuada fames ac ante ipsum primis in faucibus.\n\nNunc enim lacus, egestas malesuada orci vitae, condimentum bibendum quam. Sed semper, ligula id fermentum placerat, turpis magna condimentum ante, id placerat lorem magna nec leo. Vestibulum laoreet elit ac enim congue, quis pulvinar tellus luctus. Quisque laoreet nulla at turpis congue lacinia. Vestibulum dignissim libero quis hendrerit fermentum. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Proin scelerisque velit sit amet condimentum elementum. Integer mollis felis ut rutrum ullamcorper. Quisque sit amet lacus non augue eleifend aliquam nec lobortis dui. Maecenas eros dolor, fermentum in eros eget, imperdiet malesuada nunc. Vivamus tempus semper nibh, sed vehicula risus placerat vel. Nullam sed molestie nisi, a pellentesque est. Proin ut commodo sapien.\n\nVivamus a suscipit lectus, eget malesuada dolor. 
Integer laoreet aliquam turpis, non mollis magna rutrum eu. Phasellus ut nibh eros. Mauris sit amet magna vel metus ornare dictum. Cras sit amet enim eget magna tincidunt cursus id quis dolor. Curabitur ornare, nunc non lacinia tristique, mi nibh suscipit dolor, ac vulputate ipsum elit quis turpis. Nulla ante sem, euismod non accumsan in, rutrum vel diam. Etiam vel ligula ullamcorper, sodales dui sed, pellentesque ipsum. Quisque eget dignissim est. Cras nec nisi et diam lobortis elementum. Ut fermentum sodales libero, a varius diam pretium sit amet. Aenean vel blandit eros. Proin volutpat quis quam tincidunt suscipit. Etiam convallis, arcu id facilisis auctor, dui sem semper lacus, ac cursus massa leo sollicitudin sapien. Aliquam feugiat eget leo id vestibulum.\n\nDonec venenatis nisl nibh, id iaculis erat tincidunt non. Donec lobortis et diam vitae ultricies. Phasellus vitae efficitur metus, non blandit nibh. Mauris id leo libero. Sed et maximus leo. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris nec risus sit amet erat lacinia sagittis. Donec tempus lacus eget neque facilisis varius non a dolor. Nulla vehicula laoreet sapien vel iaculis. Sed hendrerit, augue sit amet rutrum commodo, nisl quam molestie augue, sed accumsan urna lacus ac enim.\n\nNunc non mauris eget nisi congue cursus. Integer fermentum mauris eros, ut feugiat massa tempus quis. Fusce at nulla ultrices neque rutrum hendrerit vitae vel odio. Sed at urna posuere, convallis magna non, cursus justo. In non justo at lacus sollicitudin finibus. Curabitur at placerat massa. Quisque nec mi libero. Sed condimentum justo ut diam convallis, laoreet mollis orci consequat. Donec sagittis ipsum at eleifend bibendum. Vestibulum quis aliquet ipsum.\n\nNullam tristique imperdiet finibus. Duis eu tortor tincidunt dui pellentesque vestibulum eu nec massa. 
Praesent consectetur, sapien vitae tincidunt iaculis, turpis arcu efficitur magna, in fermentum nunc mi pellentesque diam. Donec sit amet cursus odio, a condimentum tellus. Integer feugiat, tellus id suscipit accumsan, dui arcu molestie diam, eget mattis massa eros nec massa. Praesent sit amet neque ac ex imperdiet sodales at eget nulla. Maecenas euismod tellus non tempor elementum. Integer efficitur rhoncus eros in commodo. Nulla blandit suscipit eleifend. Aliquam laoreet rutrum arcu, molestie tristique arcu tincidunt eu. Nullam fringilla nisi placerat libero accumsan sagittis. Fusce fermentum ultrices orci, vel ullamcorper tellus malesuada lobortis. Sed ut rutrum dui. Phasellus quis velit enim. Praesent a lectus et mauris sagittis tincidunt. Sed interdum tempus mauris sed venenatis.\n\nProin finibus et leo sed varius. Morbi sit amet luctus dolor, vitae tincidunt orci. Nam fringilla suscipit nulla et luctus. Cras quis tortor in lectus ultricies vulputate nec et orci. Mauris at hendrerit lectus, eu molestie nisl. Proin ut cursus nisi, a vulputate purus. Aenean dictum sem est, vel vulputate risus fringilla in.\n\nPellentesque hendrerit ut purus eget facilisis. Morbi accumsan gravida turpis. Nam non imperdiet velit. Integer ipsum dolor, iaculis id varius non, pellentesque in mi. Proin faucibus ut nibh ac consectetur. Etiam ullamcorper gravida arcu sit amet sodales. Phasellus eget libero in tortor faucibus vehicula. Integer viverra urna turpis, at ultricies risus posuere id. Etiam luctus finibus elementum. Nulla fringilla aliquet ante, et lobortis enim placerat eu.\n\nAliquam erat volutpat. Duis pellentesque rutrum orci, eu sodales neque venenatis ut. Integer ac volutpat libero. Aenean congue consequat ultrices. Vestibulum eget nibh ornare, scelerisque ligula at, iaculis purus. Phasellus posuere dignissim elit, at auctor velit malesuada et. Suspendisse id lacus quis dui tristique scelerisque. Vivamus feugiat fringilla diam. Etiam nec lectus elit. 
Donec pellentesque convallis pretium. Sed sed odio vestibulum, pulvinar sem in, semper dui. Aenean sed mauris a erat laoreet bibendum a non sem. Nunc nec sem purus. Proin tincidunt tincidunt feugiat. Aenean lorem elit, pretium a erat ac, ultrices ullamcorper dolor. Sed tincidunt tortor sed scelerisque varius.\n\nSed ornare, mi varius sagittis sagittis, ligula diam rhoncus ligula, vel euismod orci nibh elementum sem. Curabitur sed neque at elit ultricies ultrices at eu nisi. Proin tempor neque sed elit finibus, a commodo eros faucibus. Praesent luctus a justo vel condimentum. Suspendisse blandit nisl sollicitudin sem vulputate vehicula. Fusce sit amet convallis enim. Integer vel molestie ante, a vehicula augue. Mauris fringilla tincidunt risus in blandit. Nullam quis blandit lacus. Sed dictum lectus est, non tincidunt metus dapibus a. Quisque eleifend euismod erat in faucibus. Aenean non bibendum augue, in pulvinar lorem. Cras consectetur ex vel neque tincidunt posuere in nec diam. Integer sed mauris vitae diam lobortis euismod vitae quis magna.\n\nIn hac habitasse platea dictumst. Vivamus a mi metus. Nulla iaculis leo ut purus maximus maximus. Proin tincidunt imperdiet diam, nec pellentesque dolor lacinia eu. Praesent augue massa, ultrices et tellus eu, vulputate accumsan arcu. Curabitur facilisis, felis mattis euismod dictum, felis metus venenatis erat, nec ornare lectus nulla vel nibh. Ut vestibulum dignissim tellus, ullamcorper consequat eros blandit quis. Praesent sed neque in elit imperdiet bibendum et lobortis purus. Sed sed libero leo. Nullam tristique est eu dolor elementum vulputate. Quisque vestibulum faucibus elit, eget pharetra nibh pellentesque et. Quisque aliquam ut purus ut laoreet. Ut vel vehicula eros, ac lacinia lacus. Praesent quis auctor orci, in scelerisque arcu. Sed iaculis aliquet nibh, sit amet elementum augue efficitur id. Nam sed aliquet metus.\n\nMauris ultrices mattis mattis. 
Mauris orci mauris, blandit non augue sit amet, commodo cursus nisi. Cras sed posuere justo. Sed sodales lacinia dictum. Sed vel sapien at arcu volutpat maximus. Sed finibus ac libero et lacinia. Interdum et malesuada fames ac ante ipsum primis in faucibus. Phasellus ipsum nibh, blandit et vestibulum non, sagittis eu lorem.\n\nAliquam eu ullamcorper erat, et tincidunt magna. Maecenas urna neque, varius nec ligula ac, luctus cursus quam. Quisque vestibulum, leo ac auctor interdum, arcu nulla vestibulum dolor, eu pretium elit leo posuere lacus. Aenean eget tincidunt turpis. Aliquam erat volutpat. Sed sed iaculis eros, nec vestibulum metus. Duis libero tellus, feugiat in tortor vel, porttitor suscipit orci. Pellentesque vulputate, orci at dapibus sagittis, metus dolor faucibus nulla, ac rutrum nulla odio ac massa. Mauris ac laoreet neque. In vitae odio luctus, congue ex id, blandit eros. Integer porta venenatis purus vulputate bibendum. Sed ligula magna, imperdiet a tristique a, pharetra non enim. Ut pretium risus id sem lacinia, nec scelerisque erat interdum. Ut quis nisl a libero venenatis dictum ac vitae orci. Fusce imperdiet neque urna, in pharetra dui convallis in. Quisque neque enim, posuere in libero sed, condimentum scelerisque erat.\n\nPraesent magna nisi, ultricies viverra felis a, suscipit pellentesque purus. Pellentesque massa quam, scelerisque eget felis quis, blandit sollicitudin nisl. Nunc pellentesque scelerisque enim, non elementum ex auctor vitae. Mauris vel orci a metus eleifend porttitor quis sit amet metus. Phasellus faucibus tortor sed diam ullamcorper, a molestie libero semper. Integer vel molestie lacus. Nunc bibendum blandit augue, at egestas lectus sagittis eu. Fusce dictum ipsum vel porta ornare. Curabitur vitae odio non odio hendrerit euismod. Duis eu elit nisl. Sed bibendum orci vitae placerat venenatis. Fusce vestibulum sem vel urna sollicitudin gravida. 
In dapibus ex quis sodales mattis.\n\nQuisque vehicula ligula vel neque sagittis volutpat. In pharetra lorem eu ante euismod, nec interdum nisi convallis. In elit lacus, vestibulum vulputate magna nec, tincidunt porttitor diam. Sed tristique, urna semper feugiat efficitur, massa turpis rhoncus magna, et euismod libero tortor nec sem. Etiam varius molestie malesuada. Pellentesque vehicula lacinia turpis in convallis. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Quisque porta nec urna a dignissim. Phasellus diam turpis, sagittis quis fermentum in, dapibus sed orci. Donec vestibulum metus hendrerit, ultrices est sollicitudin, commodo eros. Donec in nulla ut velit gravida elementum eu vitae velit.\n\nPellentesque vehicula risus eget elit aliquam elementum. Sed vulputate urna lacus. Nullam aliquam bibendum elit, vel accumsan metus iaculis in. Praesent at congue velit. Donec sed rutrum nisl. Nullam euismod, sem at fermentum dapibus, mauris est facilisis dui, eget mollis augue lectus in libero. Suspendisse potenti. Vivamus eget ante vitae risus pretium iaculis. Nulla at risus dictum, sollicitudin leo eu, efficitur nunc. Aliquam fringilla dolor justo, non mollis mi lobortis ultrices. Maecenas ultricies, neque vel tempor consectetur, lectus sem rhoncus leo, cursus auctor urna urna non ante. Morbi nec nulla molestie, ultricies mauris id, ornare est. Interdum et malesuada fames ac ante ipsum primis in faucibus.\n\nVivamus congue porta accumsan. Curabitur in leo sit amet risus scelerisque fringilla eu ut mi. Sed ut pellentesque lorem. Praesent varius lectus placerat sodales sollicitudin. Praesent id tincidunt lectus. Etiam vulputate efficitur justo sed rutrum. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Donec varius tempus nunc, in luctus mi euismod eget. Sed ac commodo ex. 
Vivamus aliquet nibh urna.\n\nFusce sit amet felis ut nulla iaculis eleifend id non quam. Ut ut laoreet diam. Aenean aliquet lacus odio, a luctus lorem ullamcorper nec. Praesent id turpis lobortis, lobortis lorem sed, molestie leo. Pellentesque nec nibh faucibus, bibendum libero mollis, convallis risus. Vivamus feugiat vestibulum mi a egestas. Aliquam commodo est ac lectus sagittis, at luctus metus condimentum. Quisque luctus lectus lorem, a lobortis lorem condimentum et. Quisque ex leo, dapibus vel risus nec, dignissim elementum lectus. Duis eget sem accumsan, gravida libero at, molestie diam. Duis molestie dapibus erat. Donec a tincidunt purus. Phasellus scelerisque sagittis libero in iaculis.\n\nMauris eget laoreet ipsum. Aliquam ut velit convallis risus interdum euismod eget a dolor. Proin auctor nibh ac diam aliquet auctor. Cras enim diam, lacinia sed libero non, rutrum maximus augue. Suspendisse eu varius sapien. Nam neque libero, vestibulum et dignissim quis, vestibulum vitae purus. Nunc mollis sit amet erat at semper. Nullam tempor molestie urna, quis egestas magna ornare in. Nulla vitae interdum tellus. Proin mollis, enim sed ullamcorper interdum, ante nunc ornare ante, ut molestie lacus nibh vel elit.\n\nOrci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Nulla lacinia congue quam et mattis. Sed iaculis maximus suscipit. Fusce in pellentesque urna. In non tellus sapien. Integer a felis scelerisque, sodales augue sed, cursus erat. Morbi eu tempus odio. Fusce eu semper neque. Vivamus sed tellus at nisl viverra faucibus. Aliquam in nulla ullamcorper, tempor augue nec, vehicula augue. Nunc pellentesque est elit, eu ullamcorper sapien laoreet id. Sed hendrerit pulvinar vehicula.\n\nMauris cursus varius mollis. Morbi non augue porttitor lectus ullamcorper porttitor sit amet sed tortor. Curabitur convallis auctor condimentum. 
Curabitur dictum, tellus sit amet gravida ornare, lacus elit varius nibh, nec lobortis arcu tellus nec elit. Mauris tincidunt ultricies diam tincidunt efficitur. Etiam molestie posuere ante ut imperdiet. Morbi eleifend arcu eget justo elementum, non porttitor orci ullamcorper. Ut vitae ultrices quam, sit amet eleifend ex. Pellentesque sagittis vel mi eget consequat. Proin ac velit magna.\n\nDonec vel purus risus. Maecenas accumsan dapibus ultrices. Vestibulum at arcu quis massa ultrices pellentesque a in turpis. Cras quis neque vel lorem tincidunt congue. Mauris aliquet libero pellentesque dui malesuada, at accumsan nulla posuere. Nullam posuere viverra volutpat. Sed molestie vitae orci sit amet vestibulum. Pellentesque quis mattis enim. Donec eget justo non ipsum facilisis auctor. Sed ex velit, facilisis id bibendum vulputate, mattis eget quam. Proin laoreet velit ut viverra facilisis. Integer vel urna sed odio facilisis cursus id ac diam. Vestibulum consectetur, erat at congue dignissim, enim ex ultricies eros, id tempus ex nulla in sapien. Phasellus vehicula vestibulum est nec malesuada.\n\nProin at pharetra nisl. Suspendisse sed augue eu metus placerat porta vitae non quam. Nam efficitur metus sed libero laoreet, id porttitor orci accumsan. Proin vitae dictum elit. Quisque non massa dui. Pellentesque sit amet dignissim risus. Sed lacinia tellus et nulla congue, eu condimentum tortor fermentum. Duis ipsum ante, vehicula aliquam pretium sed, mollis ac arcu. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nulla lobortis aliquet ipsum, id pulvinar urna rhoncus eu. Nunc at accumsan elit. Nunc tristique lobortis mi.\n\nNam vitae justo id ex porta accumsan. Nunc molestie leo quis odio convallis, quis varius lectus elementum. Morbi commodo nisl at mi fermentum, a porttitor libero lobortis. Pellentesque id vulputate turpis, sed tempus eros. 
Nulla consequat, orci sed commodo tincidunt, quam lacus interdum lacus, eget porta dolor nunc ut sapien. Praesent vitae justo ultricies, facilisis mauris a, facilisis velit. Phasellus ultricies molestie neque eget fringilla.\n\nCurabitur volutpat nibh non mi vestibulum, vitae faucibus est vestibulum. Aliquam erat volutpat. Proin bibendum tellus eu enim aliquet, a gravida sapien porta. Integer luctus viverra velit a pretium. In gravida dapibus sapien, in tincidunt nibh interdum vel. Aliquam tempus tincidunt est nec venenatis. Sed mollis purus quis placerat mattis. Sed facilisis quam et elit pharetra tincidunt. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. In faucibus iaculis faucibus. Donec posuere aliquam lacus, vitae feugiat lorem elementum et. Phasellus sed sapien ac est feugiat porta. Donec ante est, blandit et nulla at, bibendum placerat arcu. Nam dui nisi, pellentesque ut tempor et, viverra eu massa.\n\nMauris semper, turpis sit amet placerat euismod, turpis enim finibus lectus, viverra dapibus purus nisl in purus. Vivamus lacus magna, finibus id porta id, sollicitudin eget sapien. Nunc nec sem id enim pretium eleifend ut eu metus. Nam pharetra finibus dui a finibus. Cras euismod tellus nec lectus ullamcorper, convallis malesuada diam ornare. Duis ac enim et nibh lobortis vulputate. Aliquam non urna non eros tempor posuere. Duis faucibus odio vel urna euismod pellentesque. Cras dapibus, nisl vitae dignissim consectetur, eros leo viverra libero, sit amet lobortis metus erat a quam.\n\nNullam quis tempus dui. Phasellus tincidunt mi eget placerat commodo. Donec tincidunt nulla tellus, sed fermentum orci viverra ut. Morbi blandit a nisl sit amet lacinia. Nulla quis orci a ante consequat molestie. Integer gravida tristique magna vitae ultricies. Integer suscipit feugiat enim, non luctus mi finibus eget. Proin viverra ut justo in laoreet. Aenean ut purus aliquam, rhoncus lectus non, condimentum sapien. 
Cras porttitor ut mi eu lacinia. Vestibulum maximus dolor arcu, ut accumsan nisi lacinia rutrum. Fusce ut urna rutrum, gravida augue at, feugiat nisl.\n\nDuis nec laoreet odio, ut pretium tellus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec sed dui viverra, condimentum nibh in, bibendum dolor. Donec sit amet ipsum et augue ultricies semper non tincidunt nulla. Pellentesque dictum commodo ipsum a bibendum. Praesent dignissim iaculis augue, id dapibus nulla molestie non. Sed molestie convallis augue, a posuere nulla pulvinar eu. Praesent venenatis, odio non viverra suscipit, risus ligula placerat metus, non feugiat metus mauris sed magna. Aenean sed erat at justo semper bibendum ut at nunc.\n\nPellentesque a urna vel lectus rhoncus imperdiet nec ac metus. Nullam rutrum placerat purus quis pharetra. In accumsan ex odio, vel maximus tortor rhoncus ut. Ut laoreet turpis sit amet blandit consectetur. Curabitur ultricies facilisis posuere. Mauris nec lorem ornare, mattis est ut, condimentum felis. Pellentesque tempor dui a maximus tempus.\n\nSuspendisse ultrices sed neque vitae scelerisque. Nam euismod mi eget orci condimentum, ac consectetur ante lobortis. Integer et rutrum purus, quis aliquam nunc. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Pellentesque elementum elementum orci, eu luctus tortor suscipit a. Sed mollis metus tincidunt, consequat turpis eu, dictum leo. Nam at bibendum elit. Praesent facilisis mauris a ipsum efficitur commodo. Duis finibus sagittis cursus. Curabitur mattis justo tellus, commodo eleifend diam blandit eu. Proin et interdum metus. Etiam ullamcorper venenatis fermentum. Donec sodales, sapien sit amet iaculis venenatis, lacus enim cursus sem, et pulvinar magna lectus ac leo.\n\nAenean vitae mauris eu urna euismod vehicula sed ac libero. Pellentesque neque ex, suscipit ut mattis non, consectetur at urna. Morbi faucibus diam ante, ac consequat leo blandit a. 
Integer tincidunt convallis blandit. Quisque eu arcu at purus finibus commodo non et nisi. Pellentesque sed rutrum diam, non dictum velit. In sed volutpat sem. Proin dignissim lacinia eleifend. Nam sed lacus est. Aenean nec odio sem. Praesent scelerisque molestie nibh vitae volutpat. Vestibulum tempus magna ac tortor tincidunt varius.\n\nUt tempus malesuada nunc ac tincidunt. Phasellus fermentum augue ipsum. Cras justo nibh, viverra sed hendrerit nec, interdum eget metus. Sed quis tempor turpis, vel faucibus dui. Fusce id odio fringilla, tincidunt neque vitae, fermentum leo. Donec auctor feugiat neque non placerat. Praesent a rutrum mi. Maecenas facilisis tellus in nisi dictum convallis."
  },
  {
    "path": "tf-shape/Readme.md",
    "content": "# Shapes and dynamic dimensions in TensorFlow\n\n## TL;DR\n### EN\n\n### FR"
  },
  {
    "path": "tf-uat/README.md",
    "content": "# TensorFlow howto: a universal approximator inside a neural net\n\n## TL, DR! \n### EN\n- I implement a first universal approximator with TesnorFlow and i train it on a sinus function (I show that it actually works) \n- I use it inside a bigger neural networks to classify the MNIST dataset \n- I display the learnt activation functions \n- I show that whatever the learnt activation function is, i get consistently the accuracy 0.98 on the test set \n- bonus: all the code is open-source\n\n### FR\n- J'implémente une fonction d'approximation universelle avec Tensorflow et je l'entraine sur une fonction Sinus (je montre que ça marche :) )\n- Je l'intègre à l'intérieur d'un réseau de neurones plus larges pour classifier le dataset MNIST\n- Je présentes les fonctions d'activation ainsi apprises\n- Je montre que quelquesoit la fonction d'activation obtenu, j'obtiens toujours la même \"accuracy\" de 0.98 sur les données de test\n- En bonus, tous le code est open-source et commenté"
  },
  {
    "path": "tf-uat/train.sh",
    "content": "for i in {1..10}; do\n  python3 universal_mnist.py\ndone"
  },
  {
    "path": "tf-uat/universal.py",
    "content": "import time, os, argparse, io\n\nimport tensorflow as tf\nimport numpy as np\n\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n  \ndir = os.path.dirname(os.path.realpath(__file__))\n\n# Note: elu is not bounded, yet it works\ndef univAprox(x, N=50, phi=tf.nn.elu, reuse=False): # First trick: the reuse capacity\n    with tf.variable_scope('UniversalApproximator', reuse=reuse):\n        x = tf.expand_dims(x, -1)\n\n        # Second trick: using convolutions!\n        aW_1 = tf.get_variable('aW_1', shape=[1, 1, N], initializer=tf.random_normal_initializer(stddev=.1))\n        ab_1 = tf.get_variable('ab_1', shape=[N], initializer=tf.constant_initializer(0.))\n        z = tf.nn.conv1d(x, aW_1, stride=1, padding='SAME') + ab_1\n        a = phi(z)\n\n        aW_2 = tf.get_variable('aW_2', shape=[1, N, 1], initializer=tf.random_normal_initializer(stddev=.1))\n        z = tf.nn.conv1d(a, aW_2, stride=1, padding='SAME')\n\n        out = tf.squeeze(z, [-1])\n    return out\n\ndef func_to_approx(x):\n    return tf.sin(x)\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--nb_neurons\", default=50, type=int, help=\"Number of neurons\")\n    args = parser.parse_args()\n\n    with tf.variable_scope('Graph') as scope:\n        # Graph\n        x = tf.placeholder(tf.float32, shape=[None, 1], name=\"x\")\n        y_true = func_to_approx(x)\n        y = univAprox(x, args.nb_neurons)\n        loss = tf.reduce_mean(tf.square(y - y_true))\n        loss_summary_t = tf.summary.scalar('loss', loss)\n        adam = tf.train.AdamOptimizer(learning_rate=1e-2)\n        train_op = adam.minimize(loss)\n\n    # Plot graph\n    img_strbuf_plh = tf.placeholder(tf.string, shape=[])\n    my_img = tf.image.decode_png(img_strbuf_plh, 4)\n    img_summary_t = tf.summary.image('img', tf.expand_dims(my_img, 0))\n\n    saver = tf.train.Saver()\n    with tf.Session() as sess:\n        result_folder = dir + 
'/results/' + str(int(time.time()))\n        sw = tf.summary.FileWriter(result_folder, sess.graph)\n\n        print('Training our universal approximator')\n        sess.run(tf.global_variables_initializer())\n        for i in range(2000):\n            x_in = np.random.uniform(-10, 10, [100000, 1])\n\n            current_loss, loss_summary, _ = sess.run([loss, loss_summary_t, train_op], feed_dict={\n                x: x_in\n            })\n            sw.add_summary(loss_summary, i + 1)\n\n            if (i + 1) % 100 == 0:\n                print('batch: %d, loss: %f' % (i + 1, current_loss))\n\n        print('Plotting graphs')\n        inputs = np.array([ [(i - 1000) / 100] for i in range(2000) ])\n        y_true_res, y_res = sess.run([y_true, y], feed_dict={\n            x: inputs\n        })\n        plt.figure(1)\n        plt.subplot(211)\n        plt.plot(inputs, y_true_res.flatten())\n        plt.subplot(212)\n        plt.plot(inputs, y_res)\n        imgdata = io.BytesIO()\n        plt.savefig(imgdata, format='png')\n        imgdata.seek(0)\n        img_summary = sess.run(img_summary_t, feed_dict={\n            img_strbuf_plh: imgdata.getvalue()\n        })\n        sw.add_summary(img_summary, i + 1)\n        plt.clf()\n\n        # Saving the graph\n        saver.save(sess, result_folder + '/data.chkp')\n        "
  },
  {
    "path": "tf-uat/universal_mnist.py",
    "content": "import time, os, argparse, io\n\nimport tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport numpy as np\n\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n\nfrom universal import univAprox\n\ndir = os.path.dirname(os.path.realpath(__file__))\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\n\n# HyperParam\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--nb_neurons\", default=50, type=int, help=\"Number of neurons\")\nargs = parser.parse_args()\n\nwith tf.variable_scope('Graph') as scope:\n    # Graph\n    x = tf.placeholder(tf.float32, shape=[None, 784], name=\"x\")\n    y_true = tf.placeholder(tf.float32, shape=[None, 10], name=\"y_true\")\n\n    W1 = tf.get_variable('W1', shape=[784, 200], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b1 = tf.get_variable('b1', shape=[200], initializer=tf.constant_initializer(0.1))\n    z = tf.matmul(x, W1) + b1\n    a = univAprox(z, args.nb_neurons)\n\n    W2 = tf.get_variable('W2', shape=[200, 50], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b2 = tf.get_variable('b2', shape=[50], initializer=tf.constant_initializer(0.1))\n    z = tf.matmul(a, W2) + b2\n    a = univAprox(z, args.nb_neurons, reuse=True)\n\n    W_s = tf.get_variable('W_s', shape=[50, 10], initializer=tf.random_normal_initializer(stddev=1e-1))\n    b_s = tf.get_variable('b_s', shape=[10], initializer=tf.constant_initializer(0.1))\n    logits = tf.matmul(a, W_s) + b_s\n    \n    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(None, y_true, logits))\n    tf.summary.scalar('loss', loss) # Graph the loss\n    adam = tf.train.AdamOptimizer(learning_rate=1e-3)\n    train_op = adam.minimize(loss)    \n\n    # We merge summaries before the accuracy summary to avoid \n    # graphing the accuracy with training data\n    summaries = tf.summary.merge_all()\n\n    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y_true, 
1))\n    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n    acc_summary = tf.summary.scalar('accuracy', accuracy) \n\n    # Plot graph\n    plot_x = tf.placeholder(tf.float32, shape=[None, 1], name=\"plot_x\")\n    plot_y = univAprox(plot_x, args.nb_neurons, reuse=True)\n    img_strbuf_plh = tf.placeholder(tf.string, shape=[])\n    my_img = tf.image.decode_png(img_strbuf_plh, 4)\n    img_summary = tf.summary.image(\n        'matplotlib_graph'\n        , tf.expand_dims(my_img, 0)\n    )\n\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n    result_folder = dir + '/results/' + str(int(time.time()))\n    sess.run(tf.global_variables_initializer())\n    sw = tf.summary.FileWriter(result_folder, sess.graph)\n    \n    print('Training')\n    for i in range(20000):\n        batch = mnist.train.next_batch(200)\n        current_loss, summary, _ = sess.run([loss, summaries, train_op], feed_dict={\n            x: batch[0],\n            y_true: batch[1]\n        })\n        sw.add_summary(summary, i + 1)\n\n        if (i + 1) % 1000 == 0:\n            acc, acc_sum = sess.run([accuracy, acc_summary], feed_dict={\n                x: mnist.test.images, \n                y_true: mnist.test.labels\n            })\n            sw.add_summary(acc_sum, i + 1)\n            print('batch: %d, loss: %f, accuracy: %f' % (i + 1, current_loss, acc))\n\n    print('Plotting approximated function graph')\n    inputs = np.array([ [(i - 500) / 100] for i in range(1000) ])\n    plot_y_res = sess.run(plot_y, feed_dict={\n        plot_x: inputs\n    })\n    plt.figure(1)\n    plt.plot(inputs, plot_y_res)\n    imgdata = io.BytesIO()\n    plt.savefig(imgdata, format='png')\n    imgdata.seek(0)\n    plot_img_summary = sess.run(img_summary, feed_dict={\n        img_strbuf_plh: imgdata.getvalue()\n    })\n    sw.add_summary(plot_img_summary, i + 1)\n    plt.clf()\n\n    # Saving the graph\n    saver.save(sess, result_folder + '/data.chkp')"
  }
]