Repository: williamcwi/DeepLearning.AI-TensorFlow-Developer-Professional-Certificate
Branch: master
Commit: 779006f74cca
Files: 67
Total size: 2.7 MB
Directory structure:
gitextract_p4qcdysp/
├── 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/
│ ├── 1. A New Programming Paradigm/
│ │ ├── assignment/
│ │ │ └── C1W1_Assignment.ipynb
│ │ └── ungraded_lab/
│ │ └── C1_W1_Lab_1_hello_world_nn.ipynb
│ ├── 2. Introduction to Computer Vision/
│ │ ├── assignment/
│ │ │ └── C1W2_Assignment.ipynb
│ │ └── ungraded_labs/
│ │ ├── C1_W2_Lab_1_beyond_hello_world.ipynb
│ │ └── C1_W2_Lab_2_callbacks.ipynb
│ ├── 3. Enhancing Vision with Convolutional Neural Networks/
│ │ ├── assignment/
│ │ │ └── C1W3_Assignment.ipynb
│ │ └── ungraded_labs/
│ │ ├── C1_W3_Lab_1_improving_accuracy_using_convolutions.ipynb
│ │ └── C1_W3_Lab_2_exploring_convolutions.ipynb
│ └── 4. Using Real-world Images/
│ ├── assignment/
│ │ └── C1W4_Assignment.ipynb
│ └── ungraded_labs/
│ ├── C1_W4_Lab_1_image_generator_no_validation.ipynb
│ ├── C1_W4_Lab_2_image_generator_with_validation.ipynb
│ └── C1_W4_Lab_3_compacted_images.ipynb
├── 2. Convolutional Neural Networks in TensorFlow/
│ ├── 1. Exploring a Larger Dataset/
│ │ ├── assignment/
│ │ │ ├── C2W1_Assignment.ipynb
│ │ │ └── history.pkl
│ │ └── ungraded_lab/
│ │ └── C2_W1_Lab_1_cats_vs_dogs.ipynb
│ ├── 2. Augmentation - A Technique to Avoid Overfitting/
│ │ ├── assignment/
│ │ │ ├── C2W2_Assignment.ipynb
│ │ │ └── history_augmented.pkl
│ │ └── ungraded_labs/
│ │ ├── C2_W2_Lab_1_cats_v_dogs_augmentation.ipynb
│ │ └── C2_W2_Lab_2_horses_v_humans_augmentation.ipynb
│ ├── 3. Transfer Learning/
│ │ ├── assignment/
│ │ │ └── C2W3_Assignment.ipynb
│ │ └── ungraded_lab/
│ │ └── C2_W3_Lab_1_transfer_learning.ipynb
│ └── 4. Multiclass Classification/
│ ├── assignment/
│ │ └── C2W4_Assignment.ipynb
│ └── ungraded_lab/
│ └── C2_W4_Lab_1_multi_class_classifier.ipynb
├── 3. Natural Language Processing in TensorFlow/
│ ├── 1. Sentiment in Text/
│ │ ├── assignment/
│ │ │ └── C3W1_Assignment.ipynb
│ │ └── ungraded_labs/
│ │ ├── C3_W1_Lab_1_tokenize_basic.ipynb
│ │ ├── C3_W1_Lab_2_sequences_basic.ipynb
│ │ └── C3_W1_Lab_3_sarcasm.ipynb
│ ├── 2. Word Embeddings/
│ │ ├── assignment/
│ │ │ ├── C3W2_Assignment.ipynb
│ │ │ ├── meta.tsv
│ │ │ └── vecs.tsv
│ │ └── ungraded_labs/
│ │ ├── C3_W2_Lab_1_imdb.ipynb
│ │ ├── C3_W2_Lab_2_sarcasm_classifier.ipynb
│ │ └── C3_W2_Lab_3_imdb_subwords.ipynb
│ ├── 3. Sequence Models/
│ │ ├── assignment/
│ │ │ └── C3W3_Assignment.ipynb
│ │ └── ungraded_labs/
│ │ ├── C3_W3_Lab_1_single_layer_LSTM.ipynb
│ │ ├── C3_W3_Lab_2_multiple_layer_LSTM.ipynb
│ │ ├── C3_W3_Lab_3_Conv1D.ipynb
│ │ ├── C3_W3_Lab_4_imdb_reviews_with_GRU_LSTM_Conv1D.ipynb
│ │ ├── C3_W3_Lab_5_sarcasm_with_bi_LSTM.ipynb
│ │ └── C3_W3_Lab_6_sarcasm_with_1D_convolutional.ipynb
│ └── 4. Sequence Models and Literature/
│ ├── assignment/
│ │ ├── C3W4_Assignment.ipynb
│ │ └── history.pkl
│ ├── misc/
│ │ └── Laurences_generated_poetry.txt
│ └── ungraded_labs/
│ ├── C3_W4_Lab_1.ipynb
│ └── C3_W4_Lab_2_irish_lyrics.ipynb
├── 4. Sequences, Time Serirs and Prediction/
│ ├── 1. Sequences and Prediction/
│ │ ├── assignment/
│ │ │ ├── C4_W1_Assignment.ipynb
│ │ │ └── C4_W1_Assignment_Solution.ipynb
│ │ └── ungraded_labs/
│ │ ├── C4_W1_Lab_1_time_series.ipynb
│ │ └── C4_W1_Lab_2_forecasting.ipynb
│ ├── 2. Deep Neural Networks for Time Series/
│ │ ├── assignment/
│ │ │ ├── C4_W2_Assignment.ipynb
│ │ │ └── C4_W2_Assignment_Solution.ipynb
│ │ └── ungraded_labs/
│ │ ├── C4_W2_Lab_1_features_and_labels.ipynb
│ │ ├── C4_W2_Lab_2_single_layer_NN.ipynb
│ │ └── C4_W2_Lab_3_deep_NN.ipynb
│ ├── 3. Recurrent Neural Networks for Time Series/
│ │ ├── assignment/
│ │ │ ├── C4_W3_Assignment.ipynb
│ │ │ └── C4_W3_Assignment_Solution.ipynb
│ │ └── ungraded_labs/
│ │ ├── C4_W3_Lab_1_RNN.ipynb
│ │ └── C4_W3_Lab_2_LSTM.ipynb
│ └── 4. Real-world Time Series Data/
│ ├── assignment/
│ │ ├── C4_W4_Assignment.ipynb
│ │ └── C4_W4_Assignment_Solution.ipynb
│ └── ungraded_labs/
│ ├── C4_W4_Lab_1_LSTM.ipynb
│ ├── C4_W4_Lab_2_Sunspots.ipynb
│ └── C4_W4_Lab_3_DNN_only.ipynb
├── Coursera_Code_of_Conduct.md
├── Coursera_Honor_Code.md
├── LICENSE
└── README.md
================================================
FILE CONTENTS
================================================
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/1. A New Programming Paradigm/assignment/C1W1_Assignment.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "mw2VBrBcgvGa"
},
"source": [
"# Week 1 Assignment: Housing Prices\n",
"\n",
"In this exercise you'll try to build a neural network that predicts the price of a house according to a simple formula.\n",
"\n",
"Imagine that house pricing is as easy as:\n",
"\n",
"A house has a base cost of 50k, and every additional bedroom adds a cost of 50k. This will make a 1 bedroom house cost 100k, a 2 bedroom house cost 150k etc.\n",
"\n",
"How would you create a neural network that learns this relationship so that it would predict a 7-bedroom house as costing close to 400k?\n",
"\n",
"Hint: Your network might work better if you scale the house price down. You don't have to give the answer 400...it might be better to create something that predicts the number 4, and then your answer is in the 'hundreds of thousands' etc."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PUNO2E6SeURH"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import numpy as np"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "B-74xrKrBqGJ"
},
"outputs": [],
"source": [
"# GRADED FUNCTION: house_model\n",
"def house_model():\n",
" ### START CODE HERE\n",
" \n",
" # Define input and output tensors with the values for houses with 1 up to 6 bedrooms\n",
" # Hint: Remember to explicitly set the dtype as float\n",
" xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)\n",
" ys = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5], dtype=float)\n",
" \n",
" # Define your model (should be a model with 1 dense layer and 1 unit)\n",
" model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])\n",
" \n",
" # Compile your model\n",
" # Set the optimizer to Stochastic Gradient Descent\n",
" # and use Mean Squared Error as the loss function\n",
" model.compile(optimizer='sgd', loss='mean_squared_error')\n",
" \n",
" # Train your model for 1000 epochs by feeding the i/o tensors\n",
" model.fit(xs, ys, epochs=1000)\n",
" \n",
" ### END CODE HERE\n",
" return model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that you have a function that returns a compiled and trained model when invoked, use it to get the model to predict the price of houses: "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get your trained model\n",
"model = house_model()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that your model has finished training, it is time to test it out! You can do so by running the next cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "kMlInDdSBqGK"
},
"outputs": [],
"source": [
"new_y = 7.0\n",
"prediction = model.predict([new_y])[0]\n",
"print(prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If everything went as expected you should see a prediction value very close to 4. **If not, try adjusting your code before submitting the assignment.** Notice that you can play around with the value of `new_y` to get different predictions. In general you should see that the network was able to learn the linear relationship between `x` and `y`, so if you use a value of 8.0 you should get a prediction close to 4.5 and so on."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Congratulations on finishing this week's assignment!**\n",
"\n",
"You have successfully coded a neural network that learned the linear relationship between two variables. Nice job!\n",
"\n",
"**Keep it up!**"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
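As an aside to the assignment above: a single Dense unit is just fitting a line, so you can check what it should converge to with a closed-form least-squares fit. This is a NumPy-only sketch (not part of the graded notebook) using the same scaled data:

```python
import numpy as np

# Same inputs/outputs as the assignment: bedrooms vs. price
# in hundreds of thousands (1 bedroom -> 1.0, i.e. 100k)
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
ys = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])

# A one-unit Dense layer learns y = w*x + b; the least-squares
# fit gives the exact line the SGD training should approach.
w, b = np.polyfit(xs, ys, deg=1)

print(f"w = {w:.3f}, b = {b:.3f}")               # w ≈ 0.5, b ≈ 0.5
print(f"7-bedroom prediction: {w * 7 + b:.2f}")  # ≈ 4.0, i.e. ~400k
```

Seeing `w ≈ 0.5` and `b ≈ 0.5` confirms the scaling hint: in "hundreds of thousands" units, each bedroom adds 0.5 on top of a 0.5 base.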
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/1. A New Programming Paradigm/ungraded_lab/C1_W1_Lab_1_hello_world_nn.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/master/C1/W1/ungraded_lab/C1_W1_Lab_1_hello_world_nn.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ZIAkIlfmCe1B"
},
"source": [
"# Ungraded Lab: The Hello World of Deep Learning with Neural Networks"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fA93WUy1zzWf"
},
"source": [
"Like every first app, you should start with something super simple that shows the overall scaffolding for how your code works. In the case of creating neural networks, one simple case is where it learns the relationship between two numbers. So, for example, if you were writing code for a function like this, you already know the 'rules': \n",
"\n",
"\n",
"```\n",
"def hw_function(x):\n",
" y = (2 * x) - 1\n",
" return y\n",
"```\n",
"\n",
"So how would you train a neural network to do the equivalent task? By using data! By feeding it with a set of x's and y's, it should be able to figure out the relationship between them. \n",
"\n",
"This is obviously a very different paradigm from what you might be used to. So let's step through it piece by piece.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DzbtdRcZDO9B"
},
"source": [
"## Imports\n",
"\n",
"Let's start with the imports. Here, you are importing [TensorFlow](https://www.tensorflow.org/) and calling it `tf` for convention and ease of use.\n",
"\n",
"You then import a library called [`numpy`](https://numpy.org) which helps to represent data as arrays easily and to optimize numerical operations.\n",
"\n",
"The framework you will use to build a neural network as a sequence of layers is called [`keras`](https://keras.io/) so you will import that too.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "X9uIpOS2zx7k"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import numpy as np\n",
"from tensorflow import keras\n",
"\n",
"print(tf.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wwJGmDrQ0EoB"
},
"source": [
"## Define and Compile the Neural Network\n",
"\n",
"Next, you will create the simplest possible neural network. It has 1 layer with 1 neuron, and the input shape to it is just 1 value. You will build this model using Keras' [Sequential](https://keras.io/api/models/sequential/) class which allows you to define the network as a sequence of [layers](https://keras.io/api/layers/). You can use a single [Dense](https://keras.io/api/layers/core_layers/dense/) layer to build this simple network as shown below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "kQFAr_xo0M4T"
},
"outputs": [],
"source": [
"# Build a simple Sequential model\n",
"model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KhjZjZ-c0Ok9"
},
"source": [
"Now, you will compile the neural network. When you do so, you have to specify 2 functions: a [loss](https://keras.io/api/losses/) and an [optimizer](https://keras.io/api/optimizers/).\n",
"\n",
"If you've seen lots of math for machine learning, here's where it's usually used. But in this case, it's nicely encapsulated in functions and classes for you. So what happens here? Let's explain...\n",
"\n",
"You know that in the function declared at the start of this notebook, the relationship between the numbers is `y=2x-1`. When the computer is trying to 'learn' that, it makes a guess... maybe `y=10x+10`. The `loss` function compares the guessed answers against the known correct answers and measures how well or how badly it did.\n",
"\n",
"It then uses the `optimizer` function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with something like `y=5x+5`, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower).\n",
"\n",
"It will repeat this for the number of _epochs_ which you will see shortly. But first, here's how you will tell it to use [mean squared error](https://keras.io/api/losses/regression_losses/#meansquarederror-function) for the loss and [stochastic gradient descent](https://keras.io/api/optimizers/sgd/) for the optimizer. You don't need to understand the math for these yet, but you can see that they work!\n",
"\n",
"Over time, you will learn the different and appropriate loss and optimizer functions for different scenarios. \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "m8YQN1H41L-Y"
},
"outputs": [],
"source": [
"# Compile the model\n",
"model.compile(optimizer='sgd', loss='mean_squared_error')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5QyOUhFw1OUX"
},
"source": [
"## Providing the Data\n",
"\n",
"Next up, you will feed in some data. In this case, you are taking 6 X's and 6 Y's. You can see that the relationship between these is `y=2x-1`, so where `x = -1`, `y=-3` etc. \n",
"\n",
"The de facto standard way of declaring model inputs and outputs is to use `numpy`, a Python library that provides lots of array type data structures. You can specify these values by building numpy arrays with [`np.array()`](https://numpy.org/doc/stable/reference/generated/numpy.array.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4Dxk4q-jzEy4"
},
"outputs": [],
"source": [
"# Declare model inputs and outputs for training\n",
"xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)\n",
"ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "n_YcWRElnM_b"
},
"source": [
"# Training the Neural Network\n",
"\n",
"The process of training the neural network, where it 'learns' the relationship between the x's and y's, is in the [`model.fit()`](https://keras.io/api/models/model_training_apis/#fit-method) call. This is where it will go through the loop we spoke about above: making a guess, measuring how good or bad it is (aka the loss), using the optimizer to make another guess, etc. It will do it for the number of `epochs` you specify. When you run this code, you'll see the loss on the right hand side."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lpRrl7WK10Pq"
},
"outputs": [],
"source": [
"# Train the model\n",
"model.fit(xs, ys, epochs=500)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kaFIr71H2OZ-"
},
"source": [
"Ok, now you have a model that has been trained to learn the relationship between `x` and `y`. You can use the [`model.predict()`](https://keras.io/api/models/model_training_apis/#predict-method) method to have it figure out the `y` for a previously unknown `x`. So, for example, if `x=10`, what do you think `y` will be? Take a guess before you run this code:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "oxNzL4lS2Gui"
},
"outputs": [],
"source": [
"# Make a prediction\n",
"print(model.predict([10.0]))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "btF2CSFH2iEX"
},
"source": [
"You might have thought `19`, right? But it ended up being a little under. Why do you think that is? \n",
"\n",
"Remember that neural networks deal with probabilities. So given the data that we fed the model with, it calculated that there is a very high probability that the relationship between `x` and `y` is `y=2x-1`, but with only 6 data points we can't know for sure. As a result, the prediction for 10 is very close to 19, but not necessarily exactly 19.\n",
"\n",
"As you work with neural networks, you'll see this pattern recurring. You will almost always deal with probabilities, not certainties, and will do a little bit of coding to figure out what the result is based on the probabilities, particularly when it comes to classification.\n"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "C1_W1_Lab_1_hello_world_nn.ipynb",
"private_outputs": true,
"provenance": [
{
"file_id": "https://github.com/https-deeplearning-ai/tensorflow-1-public/blob/main/C1/W1/ungraded_lab/C1_W1_Lab_1_hello_world_nn.ipynb",
"timestamp": 1637670538744
}
],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
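The guess/loss/optimizer loop this lab describes can also be written out by hand, which makes the mechanics concrete. Below is a minimal NumPy sketch of gradient descent on the same single-neuron model; the 0.01 learning rate is an assumption (it matches the Keras SGD default), and this illustrates the idea rather than reproducing what `model.fit()` literally runs:

```python
import numpy as np

# Data from the lab: y = 2x - 1
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

# One neuron: y_hat = w*x + b, starting from a bad guess
w, b = 10.0, 10.0
lr = 0.01  # learning rate (assumed; same as the Keras SGD default)

for epoch in range(500):
    y_hat = w * xs + b           # make a guess
    error = y_hat - ys           # how far off is it? (MSE is mean(error**2))
    dw = 2 * np.mean(error * xs)  # gradient of MSE w.r.t. w
    db = 2 * np.mean(error)       # gradient of MSE w.r.t. b
    w -= lr * dw                  # gradient-descent update
    b -= lr * db

print(f"w = {w:.3f}, b = {b:.3f}")               # should approach 2 and -1
print(f"prediction for x=10: {w * 10 + b:.2f}")  # a little under 19
```

Just like the trained Keras model, the hand-rolled version lands close to, but not exactly on, `y=2x-1` after 500 epochs, which is why the prediction for 10 comes out slightly under 19.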
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/2. Introduction to Computer Vision/assignment/C1W2_Assignment.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "_2s0EJ5Fy4u2"
},
"source": [
"# Week 2: Implementing Callbacks in TensorFlow using the MNIST Dataset\n",
"\n",
"In the course you learned how to do classification using Fashion MNIST, a data set containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9.\n",
"\n",
"Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy. In the lecture you saw how this was done for the loss but here you will be using accuracy instead.\n",
"\n",
"Some notes:\n",
"1. Given the architecture of the net, it should succeed in less than 10 epochs.\n",
"2. When it reaches 99% or greater it should print out the string \"Reached 99% accuracy so cancelling training!\" and stop training.\n",
"3. If you add any additional variables, make sure you use the same names as the ones used in the class. This is important for the function signatures (the parameters and names) of the callbacks."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "djVOgMHty4u3"
},
"outputs": [],
"source": [
"import os\n",
"import tensorflow as tf\n",
"from tensorflow import keras"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Begin by loading the data. A couple of things to notice:\n",
"\n",
"- The file `mnist.npz` is already included in the current workspace under the `data` directory. By default, `load_data` from Keras accepts a path relative to `~/.keras/datasets`, but in this case the file is stored somewhere else, so you need to specify the full path.\n",
"\n",
"- `load_data` returns the train and test sets in the form of the tuples `(x_train, y_train), (x_test, y_test)` but in this exercise you will be needing only the train set so you can ignore the second tuple."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load the data\n",
"\n",
"# Get current working directory\n",
"current_dir = os.getcwd()\n",
"\n",
"# Append data/mnist.npz to the previous path to get the full path\n",
"data_path = os.path.join(current_dir, \"data/mnist.npz\")\n",
"\n",
"# Discard test set\n",
"(x_train, y_train), _ = tf.keras.datasets.mnist.load_data(path=data_path)\n",
" \n",
"# Normalize pixel values\n",
"x_train = x_train / 255.0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now take a look at the shape of the training data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_shape = x_train.shape\n",
"\n",
"print(f\"There are {data_shape[0]} examples with shape ({data_shape[1]}, {data_shape[2]})\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now it is time to create your own custom callback. For this, complete the `myCallback` class and the `on_epoch_end` method in the cell below. If you need some guidance on how to proceed, check out this [link](https://www.tensorflow.org/guide/keras/custom_callback)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# GRADED CLASS: myCallback\n",
"### START CODE HERE\n",
"\n",
"# Remember to inherit from the correct class\n",
"class myCallback(tf.keras.callbacks.Callback):\n",
" # Define the correct function signature for on_epoch_end\n",
" def on_epoch_end(self, epoch, logs={}):\n",
" if logs.get('accuracy') is not None and logs.get('accuracy') > 0.99: # @KEEP\n",
" print(\"\\nReached 99% accuracy so cancelling training!\") \n",
" \n",
" # Stop training once the above condition is met\n",
" self.model.stop_training = True\n",
"\n",
"### END CODE HERE\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that you have defined your callback it is time to complete the `train_mnist` function below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rEHcB3kqyHZ6"
},
"outputs": [],
"source": [
"# GRADED FUNCTION: train_mnist\n",
"def train_mnist(x_train, y_train):\n",
"\n",
" ### START CODE HERE\n",
" \n",
" # Instantiate the callback class\n",
" callbacks = myCallback()\n",
" \n",
" # Define the model, it should have 3 layers:\n",
" # - A Flatten layer that receives inputs with the same shape as the images\n",
" # - A Dense layer with 512 units and ReLU activation function\n",
" # - A Dense layer with 10 units and softmax activation function\n",
" model = tf.keras.models.Sequential([ \n",
" keras.layers.Flatten(input_shape=(28, 28)),\n",
" keras.layers.Dense(512, activation=tf.nn.relu),\n",
" keras.layers.Dense(10, activation=tf.nn.softmax)\n",
" ]) \n",
"\n",
" # Compile the model\n",
" model.compile(optimizer='adam', \n",
" loss='sparse_categorical_crossentropy', \n",
" metrics=['accuracy']) \n",
" \n",
" # Fit the model for 10 epochs adding the callbacks\n",
" # and save the training history\n",
" history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])\n",
"\n",
" ### END CODE HERE\n",
"\n",
" return history"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `train_mnist` function, passing in the appropriate parameters to get the training history:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "sFgpwbGly4u4"
},
"outputs": [],
"source": [
"hist = train_mnist(x_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you see the message `Reached 99% accuracy so cancelling training!` printed out after less than 10 epochs it means your callback worked as expected. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Congratulations on finishing this week's assignment!**\n",
"\n",
"You have successfully implemented a callback that gives you more control over the training loop for your model. Nice job!\n",
"\n",
"**Keep it up!**"
]
}
],
"metadata": {
"jupytext": {
"main_language": "python"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
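The early-stopping control flow in the assignment above does not actually depend on TensorFlow. This framework-free sketch shows the same mechanism: `on_epoch_end` reads the `logs` dict and flips a `stop_training` flag that the training loop checks. The `FakeModel` class, the `AccuracyThresholdCallback` name, and the simulated accuracy values are made up for illustration; in Keras the flag is `model.stop_training`, checked by `model.fit()` itself:

```python
class FakeModel:
    """Stand-in for a Keras model: only carries the stop flag."""
    def __init__(self):
        self.stop_training = False

class AccuracyThresholdCallback:
    """Mirrors the assignment's myCallback logic, minus Keras."""
    def __init__(self, model, threshold=0.99):
        self.model = model
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        acc = logs.get("accuracy")
        if acc is not None and acc > self.threshold:
            print("\nReached 99% accuracy so cancelling training!")
            self.model.stop_training = True

# Simulated training loop: accuracy improves each epoch
model = FakeModel()
callback = AccuracyThresholdCallback(model)
accuracies = [0.91, 0.95, 0.97, 0.985, 0.992, 0.995]

epochs_run = 0
for epoch, acc in enumerate(accuracies):
    epochs_run += 1
    callback.on_epoch_end(epoch, logs={"accuracy": acc})
    if model.stop_training:   # what model.fit() does between epochs
        break

print(f"stopped after {epochs_run} epochs")  # stops at the 5th epoch (acc=0.992)
```

The key detail the grader checks for is guarding `logs.get('accuracy')` against `None` before comparing, exactly as in the graded cell.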
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/2. Introduction to Computer Vision/ungraded_labs/C1_W2_Lab_1_beyond_hello_world.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/master/C1/W2/ungraded_labs/C1_W2_Lab_1_beyond_hello_world.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qnyTxjK_GbOD"
},
"source": [
"# Ungraded Lab: Beyond Hello World, A Computer Vision Example\n",
"In the previous exercise, you saw how to create a neural network that figured out the problem you were trying to solve. This gave an explicit example of learned behavior. Of course, in that instance, it was a bit of overkill because it would have been easier to write the function `y=2x-1` directly instead of bothering with using machine learning to learn the relationship between `x` and `y`.\n",
"\n",
"But what about a scenario where writing rules like that is much more difficult -- for example a computer vision problem? Let's take a look at a scenario where you will build a neural network to recognize different items of clothing, trained from a dataset containing 10 different types."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H41FYgtlHPjW"
},
"source": [
"## Start Coding\n",
"\n",
"Let's start with our import of TensorFlow."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "q3KzJyjv3rnA"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"print(tf.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "n_n1U5do3u_F"
},
"source": [
"The [Fashion MNIST dataset](https://github.com/zalandoresearch/fashion-mnist) is a collection of grayscale 28x28 pixel clothing images. Each image is associated with a label as shown in this table:\n",
"\n",
"| Label | Description |\n",
"| --- | --- |\n",
"| 0 | T-shirt/top |\n",
"| 1 | Trouser |\n",
"| 2 | Pullover |\n",
"| 3 | Dress |\n",
"| 4 | Coat |\n",
"| 5 | Sandal |\n",
"| 6 | Shirt |\n",
"| 7 | Sneaker |\n",
"| 8 | Bag |\n",
"| 9 | Ankle boot |\n",
"\n",
"This dataset is available directly in the [tf.keras.datasets](https://www.tensorflow.org/api_docs/python/tf/keras/datasets) API and you load it like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PmxkHFpt31bM"
},
"outputs": [],
"source": [
"# Load the Fashion MNIST dataset\n",
"fmnist = tf.keras.datasets.fashion_mnist"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GuoLQQBT4E-_"
},
"source": [
"Calling `load_data()` on this object will give you two tuples with two arrays each. These will be the training and testing values for the graphics that contain the clothing items and their labels.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "BTdRgExe4TRB"
},
"outputs": [],
"source": [
"# Load the training and test split of the Fashion MNIST dataset\n",
"(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rw395ROx4f5Q"
},
"source": [
"What do these values look like? Let's print a training image (both as an image and a numpy array), and a training label to see. Experiment with different indices in the array. For example, also take a look at index `42`. That's a different boot than the one at index `0`.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FPc9d3gJ3jWF"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# You can put between 0 to 59999 here\n",
"index = 0\n",
"\n",
"# Set number of characters per row when printing\n",
"np.set_printoptions(linewidth=320)\n",
"\n",
"# Print the label and image\n",
"print(f'LABEL: {training_labels[index]}')\n",
"print(f'\\nIMAGE PIXEL ARRAY:\\n {training_images[index]}')\n",
"\n",
"# Visualize the image\n",
"plt.imshow(training_images[index])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3cbrdH225_nH"
},
"source": [
"You'll notice that all of the values in the array are between 0 and 255. If you are training a neural network, especially in image processing, for various reasons it will usually learn better if you scale all values to between 0 and 1. It's a process called _normalization_ and fortunately in Python, it's easy to normalize an array without looping. You do it like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "kRH19pWs6ZDn"
},
"outputs": [],
"source": [
"# Normalize the pixel values of the train and test images\n",
"training_images = training_images / 255.0\n",
"test_images = test_images / 255.0"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3DkO0As46lRn"
},
"source": [
"Now you might be wondering why the dataset is split into two: training and testing. Remember we spoke about this in the intro? The idea is to have 1 set of data for training, and then another set of data that the model hasn't yet seen. This will be used to evaluate how good it would be at classifying values."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dIn7S9gf62ie"
},
"source": [
"Let's now design the model. There's quite a few new concepts here. But don't worry, you'll get the hang of them. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7mAyndG3kVlK"
},
"outputs": [],
"source": [
"# Build the classification model\n",
"model = tf.keras.models.Sequential([tf.keras.layers.Flatten(), \n",
" tf.keras.layers.Dense(128, activation=tf.nn.relu), \n",
" tf.keras.layers.Dense(10, activation=tf.nn.softmax)])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-lUcWaiX7MFj"
},
"source": [
"[Sequential](https://keras.io/api/models/sequential/): That defines a sequence of layers in the neural network.\n",
"\n",
"[Flatten](https://keras.io/api/layers/reshaping_layers/flatten/): Remember earlier where our images were a 28x28 pixel matrix when you printed them out? Flatten just takes that square and turns it into a 1-dimensional array.\n",
"\n",
"[Dense](https://keras.io/api/layers/core_layers/dense/): Adds a layer of neurons.\n",
"\n",
"Each layer of neurons needs an [activation function](https://keras.io/api/layers/activations/) to tell it what to do. There are a lot of options, but just use these for now: \n",
"\n",
"[ReLU](https://keras.io/api/layers/activations/#relu-function) effectively means:\n",
"\n",
"```\n",
"if x > 0: \n",
" return x\n",
"\n",
"else: \n",
" return 0\n",
"```\n",
"\n",
"In other words, it only passes values 0 or greater to the next layer in the network.\n",
"\n",
"[Softmax](https://keras.io/api/layers/activations/#softmax-function) takes a list of values and scales these so the sum of all elements will be equal to 1. When applied to model outputs, you can think of the scaled values as the probability for that class. For example, in your classification model which has 10 units in the output dense layer, having the highest value at `index = 4` means that the model is most confident that the input clothing image is a coat. If it is at index = 5, then it is a sandal, and so forth. See the short code block below which demonstrates these concepts. You can also watch this [lecture](https://www.youtube.com/watch?v=LLux1SW--oM&ab_channel=DeepLearningAI) if you want to know more about the Softmax function and how the values are computed.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Dk1hzzpDoGPI"
},
"outputs": [],
"source": [
"# Declare sample inputs and convert to a tensor\n",
"inputs = np.array([[1.0, 3.0, 4.0, 2.0]])\n",
"inputs = tf.convert_to_tensor(inputs)\n",
"print(f'input to softmax function: {inputs.numpy()}')\n",
"\n",
"# Feed the inputs to a softmax activation function\n",
"outputs = tf.keras.activations.softmax(inputs)\n",
"print(f'output of softmax function: {outputs.numpy()}')\n",
"\n",
"# Get the sum of all values after the softmax\n",
"sum_of_outputs = tf.reduce_sum(outputs)\n",
"print(f'sum of outputs: {sum_of_outputs}')\n",
"\n",
"# Get the index with highest value\n",
"prediction = np.argmax(outputs)\n",
"print(f'class with highest probability: {prediction}')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "c8vbMCqb9Mh6"
},
"source": [
"The next thing to do, now that the model is defined, is to actually build it. You do this by compiling it with an optimizer and loss function as before -- and then you train it by calling `model.fit()`, asking it to fit your training data to your training labels. It will figure out the relationship between the training data and its actual labels, so in the future, if you have inputs that look like the training data, it can predict what the label for that input should be."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "BLMdl9aP8nQ0"
},
"outputs": [],
"source": [
"model.compile(optimizer = tf.optimizers.Adam(),\n",
" loss = 'sparse_categorical_crossentropy',\n",
" metrics=['accuracy'])\n",
"\n",
"model.fit(training_images, training_labels, epochs=5)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-JJMsvSB-1UY"
},
"source": [
"Once it's done training -- you should see an accuracy value at the end of the final epoch. It might look something like `0.9098`. This tells you that your neural network is about 91% accurate in classifying the training data. That is, it figured out a pattern match between the images and the labels that worked 91% of the time. Not great, but not bad considering it was trained for only 5 epochs, and quite quickly at that.\n",
"\n",
"But how would it work with unseen data? That's why we have the test images and labels. We can call [`model.evaluate()`](https://keras.io/api/models/model_training_apis/#evaluate-method) with this test dataset as inputs and it will report back the loss and accuracy of the model. Let's give it a try:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WzlqsEzX9s5P"
},
"outputs": [],
"source": [
"# Evaluate the model on unseen data\n",
"model.evaluate(test_images, test_labels)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6tki-Aro_Uax"
},
"source": [
"You can expect the accuracy here to be about `0.88` which means it was 88% accurate on the entire test set. As expected, it probably would not do as well with *unseen* data as it did with data it was trained on! As you go through this course, you'll look at ways to improve this. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "htldZNWcIPSN"
},
"source": [
"# Exploration Exercises\n",
"\n",
"To explore further and deepen your understanding, try the exercises below:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rquQqIx4AaGR"
},
"source": [
"### Exercise 1:\n",
"For this first exercise, run the code below. It creates a set of classifications for each of the test images, and then prints the first entry in the classifications. The output is a list of numbers. Why do you think this is, and what do those numbers represent? "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RyEIki0z_hAD"
},
"outputs": [],
"source": [
"classifications = model.predict(test_images)\n",
"\n",
"print(classifications[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MdzqbQhRArzm"
},
"source": [
"**Hint:** try running `print(test_labels[0])` -- and you'll get a `9`. Does that help you understand why this list looks the way it does? "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WnBGOrMiA1n5"
},
"outputs": [],
"source": [
"print(test_labels[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uUs7eqr7uSvs"
},
"source": [
"### E1Q1: What does this list represent?\n",
"\n",
"\n",
"1. It's 10 random meaningless values\n",
"2. It's the first 10 classifications that the computer made\n",
"3. It's the probability that this item is each of the 10 classes\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wAbr92RTA67u"
},
"source": [
"<details><summary>Click for Answer</summary>\n",
"<p>\n",
"\n",
"#### Answer: \n",
"The correct answer is (3)\n",
"\n",
"The output of the model is a list of 10 numbers. These numbers are the probability that the item being classified corresponds to each of the 10 labels (https://github.com/zalandoresearch/fashion-mnist#labels), i.e. the first value in the list is the probability that the image is of a '0' (T-shirt/top), the next is a '1' (Trouser) etc. Notice that they are all VERY LOW probabilities.\n",
"\n",
"For index 9 (Ankle boot), the probability was in the 90's, i.e. the neural network is telling us that the image is most likely an ankle boot.\n",
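"\n",
"A quick sketch of reading such a list (the probability values here are made up for illustration):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# A hypothetical softmax output: 10 probabilities summing to 1\n",
"probabilities = np.array([0.01] * 9 + [0.91])\n",
"\n",
"# The index of the highest value is the predicted class\n",
"print(np.argmax(probabilities))  # 9 -> Ankle boot\n",
"```\n",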
"\n",
"</p>\n",
"</details>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CD4kC6TBu-69"
},
"source": [
"### E1Q2: How do you know that this list tells you that the item is an ankle boot?\n",
"\n",
"\n",
"1. There's not enough information to answer that question\n",
"2. The 10th element on the list is the biggest, and the ankle boot is labelled 9\n",
"3. The ankle boot is label 9, and there are 0->9 elements in the list\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "I-haLncrva5L"
},
"source": [
"<details><summary>Click for Answer</summary>\n",
"<p>\n",
"\n",
"#### Answer\n",
"The correct answer is (2). Both the list and the labels are 0-based, so the ankle boot having label 9 means that it is the 10th of the 10 classes. The 10th element of the list being the highest value means that the neural network has predicted that the item it is classifying is most likely an ankle boot.\n",
"\n",
"</p>\n",
"</details>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OgQSIfDSOWv6"
},
"source": [
"### Exercise 2: \n",
"Let's now look at the layers in your model. Experiment with different values for the dense layer with 512 neurons. What different results do you get for loss, training time etc? Why do you think that's the case? \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GSZSwV5UObQP"
},
"outputs": [],
"source": [
"mnist = tf.keras.datasets.mnist\n",
"\n",
"(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n",
"\n",
"training_images = training_images/255.0\n",
"test_images = test_images/255.0\n",
"\n",
"model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n",
" tf.keras.layers.Dense(1024, activation=tf.nn.relu), # Try experimenting with this layer\n",
" tf.keras.layers.Dense(10, activation=tf.nn.softmax)])\n",
"\n",
"model.compile(optimizer = 'adam',\n",
" loss = 'sparse_categorical_crossentropy')\n",
"\n",
"model.fit(training_images, training_labels, epochs=5)\n",
"\n",
"model.evaluate(test_images, test_labels)\n",
"\n",
"classifications = model.predict(test_images)\n",
"\n",
"print(classifications[0])\n",
"print(test_labels[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bOOEnHZFv5cS"
},
"source": [
"### E2Q1: Increase to 1024 Neurons -- What's the impact?\n",
"\n",
"1. Training takes longer, but is more accurate\n",
"2. Training takes longer, but no impact on accuracy\n",
"3. Training takes the same time, but is more accurate\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "U73MUP2lwrI2"
},
"source": [
"<details><summary>Click for Answer</summary>\n",
"<p>\n",
"\n",
"#### Answer\n",
"The correct answer is (1). By adding more neurons we have to do more calculations, slowing down the process, but in this case they have a good impact -- we do get more accurate. That doesn't mean more is always better; you can hit the law of diminishing returns very quickly!\n",
"\n",
"</p>\n",
"</details>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WtWxK16hQxLN"
},
"source": [
"### Exercise 3: \n",
"\n",
"### E3Q1: What would happen if you removed the Flatten() layer? Why do you think that's the case? \n",
"\n",
"<details><summary>Click for Answer</summary>\n",
"<p>\n",
"\n",
"#### Answer\n",
"You get an error about the shape of the data. It may seem vague right now, but it reinforces the rule of thumb that the first layer in your network should be the same shape as your data. Right now our data is 28x28 images, and 28 layers of 28 neurons would be infeasible, so it makes more sense to 'flatten' that 28,28 into a 784x1. Instead of writing all the code to handle that ourselves, we add the Flatten() layer at the beginning, and when the arrays are loaded into the model later, they'll automatically be flattened for us.\n",
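"\n",
"A rough NumPy sketch of that reshaping (an illustration of the shapes involved, not the Keras internals):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# A batch containing one fake 28x28 image\n",
"batch = np.zeros((1, 28, 28))\n",
"\n",
"# Flatten keeps the batch dimension and collapses the rest into 784 values\n",
"flat = batch.reshape(batch.shape[0], -1)\n",
"print(flat.shape)  # (1, 784)\n",
"```\n",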
"\n",
"</p>\n",
"</details>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ExNxCwhcQ18S"
},
"outputs": [],
"source": [
"mnist = tf.keras.datasets.mnist\n",
"\n",
"(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n",
"\n",
"training_images = training_images/255.0\n",
"test_images = test_images/255.0\n",
"\n",
"model = tf.keras.models.Sequential([tf.keras.layers.Flatten(), #Try removing this layer\n",
" tf.keras.layers.Dense(64, activation=tf.nn.relu),\n",
" tf.keras.layers.Dense(10, activation=tf.nn.softmax)])\n",
"\n",
"model.compile(optimizer = 'adam',\n",
" loss = 'sparse_categorical_crossentropy')\n",
"\n",
"model.fit(training_images, training_labels, epochs=5)\n",
"\n",
"model.evaluate(test_images, test_labels)\n",
"\n",
"classifications = model.predict(test_images)\n",
"\n",
"print(classifications[0])\n",
"print(test_labels[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VqoCR-ieSGDg"
},
"source": [
"### Exercise 4: \n",
"\n",
"Consider the final (output) layers. Why are there 10 of them? What would happen if you had a different number than 10? For example, try training the network with 5.\n",
"\n",
"<details><summary>Click for Answer</summary>\n",
"<p>\n",
"\n",
"#### Answer\n",
"You get an error as soon as it finds an unexpected value. Another rule of thumb -- the number of neurons in the last layer should match the number of classes you are classifying for. In this case it's the digits 0-9, so there are 10 of them, hence you should have 10 neurons in your final layer.\n",
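"\n",
"A rough sketch of why this fails (purely illustrative): with sparse labels, each label indexes one output neuron, so every label must be smaller than the number of neurons in the final layer.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"labels = np.array([3, 7, 9])  # digit labels span 0-9\n",
"\n",
"print(labels.max() < 10)  # True: with 10 output neurons, every label has a slot\n",
"print(labels.max() < 5)   # False: with 5 neurons, label 9 is out of range\n",
"```\n",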
"\n",
"</p>\n",
"</details>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "MMckVntcSPvo"
},
"outputs": [],
"source": [
"mnist = tf.keras.datasets.mnist\n",
"\n",
"(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n",
"\n",
"training_images = training_images/255.0\n",
"test_images = test_images/255.0\n",
"\n",
"model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n",
" tf.keras.layers.Dense(64, activation=tf.nn.relu),\n",
" tf.keras.layers.Dense(10, activation=tf.nn.softmax) # Try experimenting with this layer\n",
" ])\n",
"\n",
"model.compile(optimizer = 'adam',\n",
" loss = 'sparse_categorical_crossentropy')\n",
"\n",
"model.fit(training_images, training_labels, epochs=5)\n",
"\n",
"model.evaluate(test_images, test_labels)\n",
"\n",
"classifications = model.predict(test_images)\n",
"\n",
"print(classifications[0])\n",
"print(test_labels[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-0lF5MuvSuZF"
},
"source": [
"### Exercise 5: \n",
"\n",
"Consider the effects of additional layers in the network. What will happen if you add another layer between the one with 512 neurons and the final layer with 10? \n",
"\n",
"<details><summary>Click for Answer</summary>\n",
"<p>\n",
"\n",
"#### Answer \n",
"There isn't a significant impact -- because this is relatively simple data. For far more complex data (including the color images of flowers that you'll see classified in the next lesson), extra layers are often necessary. \n",
"\n",
"</p>\n",
"</details>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "b1YPa6UhS8Es"
},
"outputs": [],
"source": [
"mnist = tf.keras.datasets.mnist\n",
"\n",
"(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n",
"\n",
"training_images = training_images/255.0\n",
"test_images = test_images/255.0\n",
"\n",
"model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n",
" # Add a layer here,\n",
" tf.keras.layers.Dense(256, activation=tf.nn.relu),\n",
" # Add a layer here\n",
" ])\n",
"\n",
"model.compile(optimizer = 'adam',\n",
" loss = 'sparse_categorical_crossentropy')\n",
"\n",
"model.fit(training_images, training_labels, epochs=5)\n",
"\n",
"model.evaluate(test_images, test_labels)\n",
"\n",
"classifications = model.predict(test_images)\n",
"\n",
"print(classifications[0])\n",
"print(test_labels[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Bql9fyaNUSFy"
},
"source": [
"### Exercise 6: \n",
"\n",
"### E6Q1: Consider the impact of training for more or fewer epochs. Why do you think you get these results? \n",
"\n",
"- Try 15 epochs -- you'll probably get a model with a much better loss than the one with 5\n",
"- Try 30 epochs -- you might see the loss value stop decreasing, and sometimes increase.\n",
"\n",
"This is a side effect of something called 'overfitting', which you can learn about later, and it's something you need to keep an eye out for when training neural networks. There's no point in wasting your time training if you aren't improving your loss, right? :)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uE3esj9BURQe"
},
"outputs": [],
"source": [
"mnist = tf.keras.datasets.mnist\n",
"\n",
"(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()\n",
"\n",
"training_images = training_images/255.0\n",
"test_images = test_images/255.0\n",
"\n",
"model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),\n",
" tf.keras.layers.Dense(128, activation=tf.nn.relu),\n",
" tf.keras.layers.Dense(10, activation=tf.nn.softmax)])\n",
"\n",
"model.compile(optimizer = 'adam',\n",
" loss = 'sparse_categorical_crossentropy')\n",
"\n",
"model.fit(training_images, training_labels, epochs=5) # Experiment with the number of epochs\n",
"\n",
"model.evaluate(test_images, test_labels)\n",
"\n",
"classifications = model.predict(test_images)\n",
"\n",
"print(classifications[34])\n",
"print(test_labels[34])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HS3vVkOgCDGZ"
},
"source": [
"### Exercise 7: \n",
"\n",
"Before you trained, you normalized the data, going from values that were 0-255 to values that were 0-1. What would be the impact of removing that? Here's the complete code to give it a try. Why do you think you get different results? "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "JDqNAqrpCNg0"
},
"outputs": [],
"source": [
"mnist = tf.keras.datasets.mnist\n",
"(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n",
"training_images=training_images/255.0 # Experiment with removing this line\n",
"test_images=test_images/255.0 # Experiment with removing this line\n",
"model = tf.keras.models.Sequential([\n",
" tf.keras.layers.Flatten(),\n",
" tf.keras.layers.Dense(512, activation=tf.nn.relu),\n",
" tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n",
"])\n",
"model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')\n",
"model.fit(training_images, training_labels, epochs=5)\n",
"model.evaluate(test_images, test_labels)\n",
"classifications = model.predict(test_images)\n",
"print(classifications[0])\n",
"print(test_labels[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "E7W2PT66ZBHQ"
},
"source": [
"### Exercise 8: \n",
"\n",
"Earlier, when you trained for extra epochs, you had an issue where your loss might change. It might have taken a bit of time to wait for the training to finish, and you might have thought: 'wouldn't it be nice if I could stop the training when I reach a desired value?' That is, 95% accuracy might be enough for you, and if you reach that after 3 epochs, why sit around waiting for it to finish a lot more epochs? So how would you fix that? Like any other program, you have callbacks! Let's see them in action..."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pkaEHHgqZbYv"
},
"outputs": [],
"source": [
"class myCallback(tf.keras.callbacks.Callback):\n",
" def on_epoch_end(self, epoch, logs={}):\n",
"    if logs.get('accuracy') is not None and logs.get('accuracy') >= 0.6: # Experiment with changing this value\n",
" print(\"\\nReached 60% accuracy so cancelling training!\")\n",
" self.model.stop_training = True\n",
"\n",
"callbacks = myCallback()\n",
"mnist = tf.keras.datasets.fashion_mnist\n",
"(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n",
"training_images=training_images/255.0\n",
"test_images=test_images/255.0\n",
"model = tf.keras.models.Sequential([\n",
" tf.keras.layers.Flatten(),\n",
" tf.keras.layers.Dense(512, activation=tf.nn.relu),\n",
" tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n",
"])\n",
"model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
"model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])\n"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "C1_W2_Lab_1_beyond_hello_world.ipynb",
"private_outputs": true,
"provenance": [
{
"file_id": "https://github.com/https-deeplearning-ai/tensorflow-1-public/blob/25_august_2021_fixes/C1/W2/ungraded_labs/C1_W2_Lab_1_beyond_hello_world.ipynb",
"timestamp": 1638837742743
}
],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/2. Introduction to Computer Vision/ungraded_labs/C1_W2_Lab_2_callbacks.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/main/C1/W2/ungraded_labs/C1_W2_Lab_2_callbacks.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "vBNo9JrZIYG6"
},
"source": [
"# Ungraded Lab: Using Callbacks to Control Training\n",
"\n",
"In this lab, you will use the [Callbacks API](https://keras.io/api/callbacks/) to stop training when a specified metric is met. This is a useful feature so you won't need to complete all epochs when this threshold is reached. For example, if you set 1000 epochs and your desired accuracy is already reached at epoch 200, then the training will automatically stop. Let's see how this is implemented in the next sections.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Mcwrn9AKKVb8"
},
"source": [
"## Load and Normalize the Fashion MNIST dataset\n",
"\n",
"Like the previous lab, you will use the Fashion MNIST dataset again for this exercise. And also as mentioned before, you will normalize the pixel values to help optimize the training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8LTaefqDJMIn"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"# Instantiate the dataset API\n",
"fmnist = tf.keras.datasets.fashion_mnist\n",
"\n",
"# Load the dataset\n",
"(x_train, y_train),(x_test, y_test) = fmnist.load_data()\n",
"\n",
"# Normalize the pixel values\n",
"x_train, x_test = x_train / 255.0, x_test / 255.0"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ia2OadhALJjS"
},
"source": [
"## Creating a Callback class\n",
"\n",
"You can create a callback by defining a class that inherits the [tf.keras.callbacks.Callback](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback) base class. From there, you can define available methods to set where the callback will be executed. For instance below, you will use the [on_epoch_end()](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback#on_epoch_end) method to check the loss at each training epoch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uuRmQZWVJAJH"
},
"outputs": [],
"source": [
"class myCallback(tf.keras.callbacks.Callback):\n",
" def on_epoch_end(self, epoch, logs={}):\n",
" '''\n",
"    Halts the training when the loss falls below 0.4\n",
"\n",
" Args:\n",
" epoch (integer) - index of epoch (required but unused in the function definition below)\n",
" logs (dict) - metric results from the training epoch\n",
" '''\n",
"\n",
"    # Check the loss\n",
" if(logs.get('loss') < 0.4):\n",
"\n",
" # Stop if threshold is met\n",
" print(\"\\nLoss is lower than 0.4 so cancelling training!\")\n",
" self.model.stop_training = True\n",
"\n",
"# Instantiate class\n",
"callbacks = myCallback()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4xlXeLkFeMn8"
},
"source": [
"## Define and compile the model\n",
"\n",
"Next, you will define and compile the model. The architecture will be similar to the one you built in the previous lab. Afterwards, you will set the optimizer, loss, and metrics that you will use for training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7JXxMg3TpzER"
},
"outputs": [],
"source": [
"# Define the model\n",
"model = tf.keras.models.Sequential([\n",
" tf.keras.layers.Flatten(input_shape=(28, 28)),\n",
" tf.keras.layers.Dense(512, activation=tf.nn.relu),\n",
" tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n",
"])\n",
"\n",
"# Compile the model\n",
"model.compile(optimizer=tf.optimizers.Adam(),\n",
" loss='sparse_categorical_crossentropy',\n",
" metrics=['accuracy'])\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6eLe4cPZe-ui"
},
"source": [
"### Train the model\n",
"\n",
"Now you are ready to train the model. To set the callback, simply set the `callbacks` parameter to the `myCallback` instance you declared before. Run the cell below and observe what happens."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "nLXTB32de3_e"
},
"outputs": [],
"source": [
"# Train the model with a callback\n",
"model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fGBSkRQPff93"
},
"source": [
"You will notice that the training does not need to complete all 10 epochs. By having a callback at the end of each epoch, the model is able to check the training parameters and see whether they meet the threshold you set in the function definition. In this case, it will simply stop when the loss falls below `0.40` after the current epoch.\n",
"\n",
"*Optional Challenge: Modify the code to make the training stop when the accuracy metric exceeds 60%.*\n",
"\n",
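"One possible sketch for the challenge (an illustration following the same callback pattern used earlier in this lab; the class name `AccuracyCallback` is made up):\n",
"\n",
"```python\n",
"import tensorflow as tf\n",
"\n",
"class AccuracyCallback(tf.keras.callbacks.Callback):\n",
"    def on_epoch_end(self, epoch, logs={}):\n",
"        # 'accuracy' appears in logs because it was listed in the compile metrics\n",
"        if logs.get('accuracy') is not None and logs.get('accuracy') > 0.6:\n",
"            print(\"\\nAccuracy is above 60% so cancelling training!\")\n",
"            self.model.stop_training = True\n",
"```\n",
"\n",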
"That concludes this simple exercise on callbacks!"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "C1_W2_Lab_2_callbacks.ipynb",
"private_outputs": true,
"provenance": [
{
"file_id": "https://github.com/https-deeplearning-ai/tensorflow-1-public/blob/adding_C1/C1/W2/ungraded_labs/C1_W2_Lab_2_callbacks.ipynb",
"timestamp": 1638884482962
}
],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/3. Enhancing Vision with Convolutional Neural Networks/assignment/C1W3_Assignment.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "iQjHqsmTAVLU"
},
"source": [
"# Week 3: Improve MNIST with Convolutions\n",
"\n",
"In the videos you looked at how you would improve Fashion MNIST using Convolutions. For this exercise see if you can improve MNIST to 99.5% accuracy or more by adding only a single convolutional layer and a single MaxPooling 2D layer to the model from the assignment of the previous week. \n",
"\n",
"You should stop training once the accuracy goes above this amount. It should happen in less than 10 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your callback.\n",
"\n",
"When 99.5% accuracy has been hit, you should print out the string \"Reached 99.5% accuracy so cancelling training!\"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZpztRwBouwYp"
},
"outputs": [],
"source": [
"import os\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from tensorflow import keras"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Begin by loading the data. A couple of things to notice:\n",
"\n",
"- The file `mnist.npz` is already included in the current workspace under the `data` directory. By default, `load_data` from Keras accepts a path relative to `~/.keras/datasets`, but in this case it is stored somewhere else; as a result, you need to specify the full path.\n",
"\n",
"- `load_data` returns the train and test sets in the form of the tuples `(x_train, y_train), (x_test, y_test)`, but in this exercise you will only need the train set, so you can ignore the second tuple."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load the data\n",
"\n",
"# Get current working directory\n",
"current_dir = os.getcwd() \n",
"\n",
"# Append data/mnist.npz to the previous path to get the full path\n",
"data_path = os.path.join(current_dir, \"data/mnist.npz\") \n",
"\n",
"# Get only training set\n",
"(training_images, training_labels), _ = tf.keras.datasets.mnist.load_data(path=data_path) \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One important step when dealing with image data is to preprocess the data. During this preprocessing step you can apply transformations to the dataset that will be fed into your convolutional neural network.\n",
"\n",
"Here you will apply two transformations to the data:\n",
"- Reshape the data so that it has an extra dimension. The reason for this is that commonly you will use 3-dimensional arrays (without counting the batch dimension) to represent image data. The third dimension usually represents the color using RGB values. Since this data is in black and white (grayscale), the third dimension doesn't really add any additional information for the classification process, but it is a good practice regardless.\n",
"\n",
"\n",
"- Normalize the pixel values so that these are values between 0 and 1. You can achieve this by dividing every value in the array by the maximum.\n",
"\n",
"Remember that these tensors are of type `numpy.ndarray` so you can use functions like [reshape](https://numpy.org/doc/stable/reference/generated/numpy.reshape.html) or [divide](https://numpy.org/doc/stable/reference/generated/numpy.divide.html) to complete the `reshape_and_normalize` function below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# GRADED FUNCTION: reshape_and_normalize\n",
"\n",
"def reshape_and_normalize(images):\n",
" \n",
" ### START CODE HERE\n",
"\n",
" # Reshape the images to add an extra dimension\n",
" images = np.expand_dims(images, axis = -1)\n",
" \n",
" # Normalize pixel values\n",
" images = images / 255.0\n",
" \n",
" ### END CODE HERE\n",
"\n",
" return images"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Test your function with the next cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reload the images in case you run this cell multiple times\n",
"(training_images, _), _ = tf.keras.datasets.mnist.load_data(path=data_path) \n",
"\n",
"# Apply your function\n",
"training_images = reshape_and_normalize(training_images)\n",
"\n",
"print(f\"Maximum pixel value after normalization: {np.max(training_images)}\\n\")\n",
"print(f\"Shape of training set after reshaping: {training_images.shape}\\n\")\n",
"print(f\"Shape of one image after reshaping: {training_images[0].shape}\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output:**\n",
"```\n",
"Maximum pixel value after normalization: 1.0\n",
"\n",
"Shape of training set after reshaping: (60000, 28, 28, 1)\n",
"\n",
"Shape of one image after reshaping: (28, 28, 1)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now complete the callback that will ensure that training will stop after an accuracy of 99.5% is reached:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# GRADED CLASS: myCallback\n",
"### START CODE HERE\n",
"\n",
"# Remember to inherit from the correct class\n",
"class myCallback(tf.keras.callbacks.Callback):\n",
" # Define the method that checks the accuracy at the end of each epoch\n",
" def on_epoch_end(self, epoch, logs={}):\n",
" if logs.get('accuracy') is not None and logs.get('accuracy') > 0.995:\n",
" print(\"\\nReached 99.5% accuracy so cancelling training!\") \n",
" \n",
" # Stop training once the above condition is met\n",
" self.model.stop_training = True\n",
"\n",
"### END CODE HERE\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, complete the `convolutional_model` function below. This function should return your convolutional neural network:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# GRADED FUNCTION: convolutional_model\n",
"def convolutional_model():\n",
" ### START CODE HERE\n",
"\n",
" # Define the model, it should have 5 layers:\n",
" # - A Conv2D layer with 32 filters, a kernel_size of 3x3, ReLU activation function\n",
" # and an input shape that matches that of every image in the training set\n",
" # - A MaxPooling2D layer with a pool_size of 2x2\n",
" # - A Flatten layer with no arguments\n",
" # - A Dense layer with 128 units and ReLU activation function\n",
" # - A Dense layer with 10 units and softmax activation function\n",
" model = tf.keras.models.Sequential([ \n",
" keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),\n",
" keras.layers.MaxPooling2D(2, 2),\n",
" keras.layers.Flatten(),\n",
" keras.layers.Dense(128, activation=tf.nn.relu),\n",
" keras.layers.Dense(10, activation=tf.nn.softmax)\n",
" ]) \n",
"\n",
" ### END CODE HERE\n",
"\n",
" # Compile the model\n",
" model.compile(optimizer='adam', \n",
" loss='sparse_categorical_crossentropy', \n",
" metrics=['accuracy']) \n",
" \n",
" return model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Save your untrained model\n",
"model = convolutional_model()\n",
"\n",
"# Instantiate the callback class\n",
"callbacks = myCallback()\n",
"\n",
"# Train your model (this can take up to 5 minutes)\n",
"history = model.fit(training_images, training_labels, epochs=10, callbacks=[callbacks])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you see the message that you defined in your callback printed out after less than 10 epochs it means your callback worked as expected. You can also double check by running the following cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(f\"Your model was trained for {len(history.epoch)} epochs\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Congratulations on finishing this week's assignment!**\n",
"\n",
"You have successfully implemented a CNN to assist you in the image classification task. Nice job!\n",
"\n",
"**Keep it up!**"
]
}
],
"metadata": {
"jupytext": {
"main_language": "python"
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/3. Enhancing Vision with Convolutional Neural Networks/ungraded_labs/C1_W3_Lab_1_improving_accuracy_using_convolutions.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/master/C1/W3/ungraded_labs/C1_W3_Lab_1_improving_accuracy_using_convolutions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "R6gHiH-I7uFa"
},
"source": [
"# Ungraded Lab: Improving Computer Vision Accuracy using Convolutions\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Z6396DKnr-xp"
},
"source": [
"# Shallow Neural Network\n",
"\n",
"In the previous lessons, you saw how to do fashion recognition using a neural network containing three layers -- the input layer (in the shape of the data), the output layer (in the shape of the desired output) and only one hidden layer. You experimented with the impact of different sizes of hidden layer, number of training epochs etc on the final accuracy. For convenience, here's the entire code again. Run it and take a note of the test accuracy that is printed out at the end. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qnCNAG-VecJ9"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"# Load the Fashion MNIST dataset\n",
"fmnist = tf.keras.datasets.fashion_mnist\n",
"(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()\n",
"\n",
"# Normalize the pixel values\n",
"training_images = training_images / 255.0\n",
"test_images = test_images / 255.0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xcsRtq9OLorS"
},
"outputs": [],
"source": [
"\n",
"\n",
"# Define the model\n",
"model = tf.keras.models.Sequential([\n",
" tf.keras.layers.Flatten(),\n",
" tf.keras.layers.Dense(128, activation=tf.nn.relu),\n",
" tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n",
"])\n",
"\n",
"# Setup training parameters\n",
"model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
"\n",
"# Train the model\n",
"print(f'\\nMODEL TRAINING:')\n",
"model.fit(training_images, training_labels, epochs=5)\n",
"\n",
"# Evaluate on the test set\n",
"print(f'\\nMODEL EVALUATION:')\n",
"test_loss = model.evaluate(test_images, test_labels)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zldEXSsF8Noz"
},
"source": [
"## Convolutional Neural Network\n",
"\n",
"In the model above, your accuracy will probably be about 89% on training and 87% on validation. Not bad. But how do you make that even better? One way is to use something called _convolutions_. We're not going into the details of convolutions in this notebook (please see resources in the classroom), but the ultimate concept is that they narrow down the content of the image to focus on specific parts and this will likely improve the model accuracy. \n",
"\n",
"If you've ever done image processing using a filter (like [this](https://en.wikipedia.org/wiki/Kernel_(image_processing))), then convolutions will look very familiar. In short, you take an array (usually 3x3 or 5x5) and scan it over the entire image. By changing the underlying pixels based on the formula within that matrix, you can do things like edge detection. So, for example, if you look at the above link, you'll see a 3x3 matrix that is defined for edge detection where the middle cell is 8, and all of its neighbors are -1. In this case, for each pixel, you would multiply its value by 8, then subtract the value of each neighbor. Do this for every pixel, and you'll end up with a new image that has the edges enhanced.\n",
"\n",
"This is perfect for computer vision because it often highlights features that distinguish one item from another. Moreover, the amount of information needed is then much less because you'll just train on the highlighted features.\n",
"\n",
"That's the concept of **Convolutional Neural Networks**. Add some layers to do convolution before you have the dense layers, and then the information going to the dense layers is more focused and possibly more accurate.\n",
"\n",
"Run the code below. This is the same neural network as earlier, but this time with [Convolution](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) and [MaxPooling](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) layers added first. It will take longer, but look at the impact on the accuracy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "C0tFgT1MMKi6"
},
"outputs": [],
"source": [
"# Define the model\n",
"model = tf.keras.models.Sequential([\n",
" \n",
" # Add convolutions and max pooling\n",
" tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),\n",
" tf.keras.layers.MaxPooling2D(2, 2),\n",
" tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
"\n",
" # Add the same layers as before\n",
" tf.keras.layers.Flatten(),\n",
" tf.keras.layers.Dense(128, activation='relu'),\n",
" tf.keras.layers.Dense(10, activation='softmax')\n",
"])\n",
"\n",
"# Print the model summary\n",
"model.summary()\n",
"\n",
"# Use same settings\n",
"model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
"\n",
"# Train the model\n",
"print(f'\\nMODEL TRAINING:')\n",
"model.fit(training_images, training_labels, epochs=5)\n",
"\n",
"# Evaluate on the test set\n",
"print(f'\\nMODEL EVALUATION:')\n",
"test_loss = model.evaluate(test_images, test_labels)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uRLfZ0jt-fQI"
},
"source": [
"It's likely gone up to about 92% on the training data and 90% on the validation data. That's significant, and a step in the right direction!\n",
"\n",
"Look at the code again, and see, step by step how the convolutions were built. Instead of the input layer at the top, you added a [Conv2D layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D). The parameters are:\n",
"\n",
"1. The number of convolutions you want to generate. The value here is purely arbitrary but it's good to use powers of 2 starting from 32.\n",
"2. The size of the Convolution. In this case, a 3x3 grid.\n",
"3. The activation function to use. In this case, you used a ReLU, which you might recall is the equivalent of returning `x` when `x>0`, else return `0`.\n",
"4. In the first layer, the shape of the input data.\n",
"\n",
"You'll follow the convolution with a [MaxPool2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) layer, which is designed to compress the image while maintaining the content of the features that were highlighted by the convolution. By specifying `(2,2)` for the MaxPooling, the effect is to quarter the size of the image. Without going into too much detail here, the idea is that it creates a 2x2 array of pixels, and picks the biggest one. Thus, it turns 4 pixels into 1. It repeats this across the image, and in doing so, it halves both the number of horizontal and vertical pixels, effectively reducing the image to 25% of its original size.\n",
"\n",
"You can call `model.summary()` to see the size and shape of the network, and you'll notice that after every max pooling layer, the image size is reduced in this way. \n",
"\n",
"\n",
"```\n",
"model = tf.keras.models.Sequential([\n",
" tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),\n",
" tf.keras.layers.MaxPooling2D(2, 2),\n",
"```\n"
]
},
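{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make that concrete, here is a minimal sketch (not part of the original lab, and assuming only NumPy) of what `(2,2)` max pooling does to a tiny 4x4 array: every 2x2 block collapses to its largest value, halving both dimensions.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# A 4x4 'image'; max pooling keeps the largest pixel of every 2x2 block\n",
"image = np.array([[1, 3, 2, 4],\n",
"                  [5, 6, 1, 2],\n",
"                  [7, 2, 9, 1],\n",
"                  [3, 4, 2, 8]])\n",
"\n",
"# Group the pixels into 2x2 blocks, then take the max of each block\n",
"pooled = image.reshape(2, 2, 2, 2).max(axis=(1, 3))\n",
"print(pooled)  # [[6 4]\n",
"               #  [7 9]]\n",
"```\n",
"\n",
"This is what `MaxPooling2D(2, 2)` does to every channel of its input."
]
},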
{
"cell_type": "markdown",
"metadata": {
"id": "RMorM6daADjA"
},
"source": [
"Then you added another convolution and flattened the output.\n",
"\n",
"\n",
"\n",
"```\n",
" tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2)\n",
" tf.keras.layers.Flatten(),\n",
" \n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qPtqR23uASjX"
},
"source": [
"After this, you'll just have the same DNN structure as the non-convolutional version: the same Dense layer with 128 units, and the same 10-unit output layer as in the pre-convolution example:\n",
"\n",
"\n",
"\n",
"```\n",
" tf.keras.layers.Dense(128, activation='relu'),\n",
" tf.keras.layers.Dense(10, activation='softmax')\n",
"])\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Np6AjBlLYveu"
},
"source": [
"## About overfitting\n",
"\n",
"Try running the training for more epochs -- say about 20, and explore the results. But while the results might seem really good, the validation results may actually go down, due to something called _overfitting_. In a nutshell, overfitting occurs when the network learns the data from the training set really well, but it's too specialised to only that data, and as a result is less effective at interpreting other unseen data. For example, if all your life you only saw red shoes, then when you see a red shoe you would be very good at identifying it. But blue suede shoes might confuse you... and you know you should never mess with my blue suede shoes."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IXx_LX3SAlFs"
},
"source": [
"# Visualizing the Convolutions and Pooling\n",
"\n",
"Let's explore how to show the convolutions graphically. The cell below prints the first 100 labels in the test set, and you can see that the ones at index `0`, index `23` and index `28` are all the same value (i.e. `9`). They're all shoes. Let's take a look at the result of running the convolution on each, and you'll begin to see common features between them emerge. Now, when the dense layer is training on that data, it's working with a lot less information, and it's perhaps finding a commonality between shoes based on this convolution/pooling combination."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f-6nX4QsOku6"
},
"outputs": [],
"source": [
"print(test_labels[:100])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9FGsHhv6JvDx"
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"from tensorflow.keras import models\n",
"\n",
"f, axarr = plt.subplots(3,4)\n",
"\n",
"FIRST_IMAGE=0\n",
"SECOND_IMAGE=23\n",
"THIRD_IMAGE=28\n",
"CONVOLUTION_NUMBER = 1\n",
"\n",
"layer_outputs = [layer.output for layer in model.layers]\n",
"activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)\n",
"\n",
"for x in range(0,4):\n",
" f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]\n",
" axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')\n",
" axarr[0,x].grid(False)\n",
" \n",
" f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]\n",
" axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')\n",
" axarr[1,x].grid(False)\n",
" \n",
" f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]\n",
" axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')\n",
" axarr[2,x].grid(False)"
]
},
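{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see which layer each of the four columns above corresponds to, you can list the model's layers (a small sketch, assuming the model trained earlier is still in memory):\n",
"\n",
"```python\n",
"# Match each column (x = 0..3) of the plot to a layer of the model:\n",
"# Conv2D, MaxPooling2D, Conv2D, MaxPooling2D\n",
"for index, layer in enumerate(model.layers[:4]):\n",
"    print(index, layer.name, layer.output.shape)\n",
"```\n"
]
},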
{
"cell_type": "markdown",
"metadata": {
"id": "8KVPZqgHo5Ux"
},
"source": [
"### EXERCISES\n",
"\n",
"1. Try editing the convolutions. Change the 32s to either 16 or 64. What impact will this have on accuracy and/or training time.\n",
"\n",
"2. Remove the final Convolution. What impact will this have on accuracy or training time?\n",
"\n",
"3. How about adding more Convolutions? What impact do you think this will have? Experiment with it.\n",
"\n",
"4. Remove all Convolutions but the first. What impact do you think this will have? Experiment with it. \n",
"\n",
"5. In the previous lesson you implemented a callback to check on the loss function and to cancel training once it hit a certain amount. See if you can implement that here."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"name": "C1_W3_Lab_1_improving_accuracy_using_convolutions.ipynb",
"private_outputs": true,
"provenance": [
{
"file_id": "https://github.com/https-deeplearning-ai/tensorflow-1-public/blob/25_august_2021_fixes/C1/W3/ungraded_labs/C1_W3_Lab_1_improving_accuracy_using_convolutions.ipynb",
"timestamp": 1638957936408
}
],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/3. Enhancing Vision with Convolutional Neural Networks/ungraded_labs/C1_W3_Lab_2_exploring_convolutions.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/master/C1/W3/ungraded_labs/C1_W3_Lab_2_exploring_convolutions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tJTHvE8Qe5nM"
},
"source": [
"# Ungraded Lab: Exploring Convolutions\n",
"\n",
"In this lab, you will explore how convolutions work by creating a basic convolution on a 2D grayscale image. First, you wil load the image by taking the [ascent](https://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.ascent.html) image from [SciPy](https://scipy.org/). It's a nice, built-in picture with lots of angles and lines. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"executionInfo": {
"elapsed": 784,
"status": "ok",
"timestamp": 1639058947063,
"user": {
"displayName": "Chris Favila",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64",
"userId": "17311369472417335306"
},
"user_tz": -480
},
"id": "DZ5OXYiolCUi"
},
"outputs": [],
"source": [
"from scipy import misc\n",
"\n",
"# load the ascent image\n",
"ascent_image = misc.ascent()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SRIzxjWWfJjk"
},
"source": [
"You can use the pyplot library to draw the image so you'll know what it looks like."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 248
},
"executionInfo": {
"elapsed": 976,
"status": "ok",
"timestamp": 1639059000048,
"user": {
"displayName": "Chris Favila",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64",
"userId": "17311369472417335306"
},
"user_tz": -480
},
"id": "R4p0cfWcfIvi",
"outputId": "4565e085-4fb0-4129-8e83-ee4dc6646250"
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"# Visualize the image\n",
"plt.grid(False)\n",
"plt.gray()\n",
"plt.axis('off')\n",
"plt.imshow(ascent_image)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "C1mhZ_ZTfPWH"
},
"source": [
"The image is stored as a numpy array so you can create the transformed image by first copying that array. You can also get the dimensions of the image so you can loop over it later. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"executionInfo": {
"elapsed": 353,
"status": "ok",
"timestamp": 1639059122348,
"user": {
"displayName": "Chris Favila",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64",
"userId": "17311369472417335306"
},
"user_tz": -480
},
"id": "o5pxGq1SmJMD"
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Copy image to a numpy array\n",
"image_transformed = np.copy(ascent_image)\n",
"\n",
"# Get the dimensions of the image\n",
"size_x = image_transformed.shape[0]\n",
"size_y = image_transformed.shape[1]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Y7PwNkiXfddd"
},
"source": [
"Now you can create a filter as a 3x3 array. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"executionInfo": {
"elapsed": 544,
"status": "ok",
"timestamp": 1639059236890,
"user": {
"displayName": "Chris Favila",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64",
"userId": "17311369472417335306"
},
"user_tz": -480
},
"id": "sN3imZannN5J"
},
"outputs": [],
"source": [
"# Experiment with different values and see the effect\n",
"filter = [ [0, 1, 0], [1, -4, 1], [0, 1, 0]]\n",
"\n",
"# A couple more filters to try for fun!\n",
"# filter = [ [-1, -2, -1], [0, 0, 0], [1, 2, 1]]\n",
"# filter = [ [-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]\n",
"\n",
"# If all the digits in the filter don't add up to 0 or 1, you \n",
"# should probably do a weight to get it to do so\n",
"# so, for example, if your weights are 1,1,1 1,2,1 1,1,1\n",
"# They add up to 10, so you would set a weight of .1 if you want to normalize them\n",
"weight = 1"
]
},
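{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, here is a hypothetical helper (not part of the original lab) that computes the normalizing weight from the filter itself rather than setting it by hand:\n",
"\n",
"```python\n",
"# Sum all the filter values; if the sum is not 0 or 1,\n",
"# use its reciprocal as the weight to normalize the output\n",
"filter_sum = sum(sum(row) for row in filter)\n",
"weight = 1 / filter_sum if filter_sum not in (0, 1) else 1\n",
"```\n",
"\n",
"For the edge-detection filter above, the values sum to 0, so the weight stays at 1."
]
},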
{
"cell_type": "markdown",
"metadata": {
"id": "JQmm_iBufmCz"
},
"source": [
"Now you can create a convolution. You will iterate over the image, leaving a 1 pixel margin, and multiplying each of the neighbors of the current pixel by the value defined in the filter (i.e. the current pixel's neighbor above it and to the left will be multiplied by the top left item in the filter, etc.) \n",
"\n",
"You'll then multiply the result by the weight, and then ensure the result is in the range 0-255.\n",
"\n",
"Finally you'll load the new value into the transformed image. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"executionInfo": {
"elapsed": 3511,
"status": "ok",
"timestamp": 1639059241813,
"user": {
"displayName": "Chris Favila",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64",
"userId": "17311369472417335306"
},
"user_tz": -480
},
"id": "299uU2jAr90h"
},
"outputs": [],
"source": [
"# Iterate over the image\n",
"for x in range(1,size_x-1):\n",
" for y in range(1,size_y-1):\n",
" convolution = 0.0\n",
" convolution = convolution + (ascent_image[x-1, y-1] * filter[0][0])\n",
" convolution = convolution + (ascent_image[x-1, y] * filter[0][1]) \n",
" convolution = convolution + (ascent_image[x-1, y+1] * filter[0][2]) \n",
" convolution = convolution + (ascent_image[x, y-1] * filter[1][0]) \n",
" convolution = convolution + (ascent_image[x, y] * filter[1][1]) \n",
" convolution = convolution + (ascent_image[x, y+1] * filter[1][2]) \n",
" convolution = convolution + (ascent_image[x+1, y-1] * filter[2][0]) \n",
" convolution = convolution + (ascent_image[x+1, y] * filter[2][1]) \n",
" convolution = convolution + (ascent_image[x+1, y+1] * filter[2][2]) \n",
" \n",
" # Multiply by weight\n",
" convolution = convolution * weight \n",
" \n",
" # Check the boundaries of the pixel values\n",
" if(convolution<0):\n",
" convolution=0\n",
" if(convolution>255):\n",
" convolution=255\n",
"\n",
" # Load into the transformed image\n",
" image_transformed[x, y] = convolution"
]
},
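{
"cell_type": "markdown",
"metadata": {},
"source": [
"The nested loop above is a plain 2D cross-correlation, so (as a sketch, assuming SciPy is available) you could verify it against `scipy.signal.correlate2d`, which computes the same neighborhood sums in a single call. With the default integer filter and a weight of 1, the interior pixels should match exactly:\n",
"\n",
"```python\n",
"from scipy.signal import correlate2d\n",
"import numpy as np\n",
"\n",
"# Cross-correlate the original image with the filter, apply the weight,\n",
"# and clamp the result to the valid 0-255 pixel range\n",
"vectorized = np.clip(correlate2d(ascent_image, np.array(filter), mode='same') * weight, 0, 255)\n",
"\n",
"# Compare only the interior pixels (the loop skips the 1-pixel border)\n",
"print(np.allclose(vectorized[1:-1, 1:-1], image_transformed[1:-1, 1:-1]))\n",
"```\n"
]
},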
{
"cell_type": "markdown",
"metadata": {
"id": "6XA--vgvgDEQ"
},
"source": [
"After the loop, you can now plot the image to see the effect of the convolution!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 269
},
"executionInfo": {
"elapsed": 899,
"status": "ok",
"timestamp": 1639059523867,
"user": {
"displayName": "Chris Favila",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64",
"userId": "17311369472417335306"
},
"user_tz": -480
},
"id": "7oPhUPNhuGWC",
"outputId": "2aee35d3-e378-441c-e497-1c215722c34c"
},
"outputs": [],
"source": [
"# Plot the image. Note the size of the axes -- they are 512 by 512\n",
"plt.gray()\n",
"plt.grid(False)\n",
"plt.imshow(image_transformed)\n",
"plt.show() "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xF0FPplsgHNh"
},
"source": [
"## Effect of Max Pooling\n",
"\n",
"The next cell will show a (2, 2) pooling. The idea here is to iterate over the image, and look at the pixel and it's immediate neighbors to the right, beneath, and right-beneath. It will take the largest of them and load it into the new image. Thus, the new image will be 1/4 the size of the old -- with the dimensions on X and Y being halved by this process. You'll see that the features get maintained despite this compression!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 269
},
"executionInfo": {
"elapsed": 1881,
"status": "ok",
"timestamp": 1639059312953,
"user": {
"displayName": "Chris Favila",
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64",
"userId": "17311369472417335306"
},
"user_tz": -480
},
"id": "kDHjf-ehaBqm",
"outputId": "3d0837c6-11d6-44e0-a470-8c7a2f139d88"
},
"outputs": [],
"source": [
"# Assign dimensions half the size of the original image\n",
"new_x = int(size_x/2)\n",
"new_y = int(size_y/2)\n",
"\n",
"# Create blank image with reduced dimensions\n",
"newImage = np.zeros((new_x, new_y))\n",
"\n",
"# Iterate over the image\n",
"for x in range(0, size_x, 2):\n",
" for y in range(0, size_y, 2):\n",
" \n",
" # Store all the pixel values in the (2,2) pool\n",
" pixels = []\n",
" pixels.append(image_transformed[x, y])\n",
" pixels.append(image_transformed[x+1, y])\n",
" pixels.append(image_transformed[x, y+1])\n",
" pixels.append(image_transformed[x+1, y+1])\n",
"\n",
" # Get only the largest value and assign to the reduced image\n",
" newImage[int(x/2),int(y/2)] = max(pixels)\n",
"\n",
"# Plot the image. Note the size of the axes -- it is now 256 pixels instead of 512\n",
"plt.gray()\n",
"plt.grid(False)\n",
"plt.imshow(newImage)\n",
"plt.show() "
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"name": "C1_W3_Lab_2_exploring_convolutions.ipynb",
"provenance": [
{
"file_id": "https://github.com/https-deeplearning-ai/tensorflow-1-public/blob/12_sep_2021_fixes/C1/W3/ungraded_labs/C1_W3_Lab_2_exploring_convolutions.ipynb",
"timestamp": 1639058610295
}
]
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/4. Using Real-world Images/assignment/C1W4_Assignment.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "GvJbBW_oDOwC"
},
"source": [
"# Week 4: Handling Complex Images - Happy or Sad Dataset\n",
"\n",
"In this assignment you will be using the happy or sad dataset, which contains 80 images of emoji-like faces, 40 happy and 40 sad.\n",
"\n",
"Create a convolutional neural network that trains to 100% accuracy on these images, which cancels training upon hitting training accuracy of >.999"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "3NFuMFYXtwsT",
"outputId": "723d6bc3-c7cd-491b-d6f8-49a2e404a0a2"
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"import os"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Begin by taking a look at some images of the dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 369
},
"id": "uaWTfp5Ox9E-",
"outputId": "1a4b4b15-9a5f-4fd3-8c56-b32d47ae0893"
},
"outputs": [],
"source": [
"from tensorflow.keras.preprocessing.image import load_img\n",
"\n",
"happy_dir = \"./data/happy/\"\n",
"sad_dir = \"./data/sad/\"\n",
"\n",
"print(\"Sample happy image:\")\n",
"plt.imshow(load_img(f\"{os.path.join(happy_dir, os.listdir(happy_dir)[0])}\"))\n",
"plt.show()\n",
"\n",
"print(\"\\nSample sad image:\")\n",
"plt.imshow(load_img(f\"{os.path.join(sad_dir, os.listdir(sad_dir)[0])}\"))\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is cool to be able to see examples of the images to better understand the problem-space you are dealing with. \n",
"\n",
"However there is still some relevant information that is missing such as the resolution of the image (although matplotlib renders the images in a grid providing a good idea of these values) and the maximum pixel value (this is important for normalizing these values). For this you can use Keras as shown in the next cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from tensorflow.keras.preprocessing.image import img_to_array\n",
"\n",
"# Load the first example of a happy face\n",
"sample_image = load_img(f\"{os.path.join(happy_dir, os.listdir(happy_dir)[0])}\")\n",
"\n",
"# Convert the image into its numpy array representation\n",
"sample_array = img_to_array(sample_image)\n",
"\n",
"print(f\"Each image has shape: {sample_array.shape}\")\n",
"\n",
"print(f\"The maximum pixel value used is: {np.max(sample_array)}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks like the images have a resolution of 150x150. **This is very important because this will be the input size of the first layer in your network.** \n",
"\n",
"**The last dimension refers to each one of the 3 RGB channels that are used to represent colored images.**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since you already have coded the callback responsible for stopping training (once a desired level of accuracy is reached) in the previous two assignments this time it is already provided so you can focus on the other steps:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "X0UOFLauzIW4"
},
"outputs": [],
"source": [
"class myCallback(tf.keras.callbacks.Callback):\n",
" def on_epoch_end(self, epoch, logs={}):\n",
" if logs.get('accuracy') is not None and logs.get('accuracy') > 0.999:\n",
" print(\"\\nReached 99.9% accuracy so cancelling training!\")\n",
" self.model.stop_training = True"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick note on callbacks: \n",
"\n",
"So far you have used only the `on_epoch_end` callback but there are many more. For example you might want to check out the [EarlyStopping](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping) callback, which allows you to save the best weights for your model."
]
},
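{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration only (this assignment does not need it), an `EarlyStopping` callback could be configured like this:\n",
"\n",
"```python\n",
"# Stop when the monitored metric stops improving for 3 epochs,\n",
"# and roll the model back to the weights of its best epoch\n",
"early_stop = tf.keras.callbacks.EarlyStopping(\n",
"    monitor='accuracy',\n",
"    patience=3,\n",
"    restore_best_weights=True\n",
")\n",
"```\n",
"\n",
"It would be passed to `model.fit(..., callbacks=[early_stop])` just like `myCallback` above."
]
},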
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Keras provides great support for preprocessing image data. A lot can be accomplished by using the `ImageDataGenerator` class. Be sure to check out the [docs](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) if you get stuck in the next exercise. In particular you might want to pay attention to the `rescale` argument when instantiating the `ImageDataGenerator` and to the [`flow_from_directory`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator#flow_from_directory) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "code",
"id": "rrGO8ObGzqht"
},
"outputs": [],
"source": [
"from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
"\n",
"# GRADED FUNCTION: image_generator\n",
"def image_generator():\n",
" ### START CODE HERE\n",
"\n",
" # Instantiate the ImageDataGenerator class.\n",
" # Remember to set the rescale argument.\n",
" train_datagen = ImageDataGenerator(rescale=1/255)\n",
"\n",
" # Specify the method to load images from a directory and pass in the appropriate arguments:\n",
" # - directory: should be a relative path to the directory containing the data\n",
" # - targe_size: set this equal to the resolution of each image (excluding the color dimension)\n",
" # - batch_size: number of images the generator yields when asked for a next batch. Set this to 10.\n",
" # - class_mode: How the labels are represented. Should be one of \"binary\", \"categorical\" or \"sparse\".\n",
" # Pick the one that better suits here given that the labels are going to be 1D binary labels.\n",
" train_generator = train_datagen.flow_from_directory(directory='./data/',\n",
" target_size=(150, 150),\n",
" batch_size=10,\n",
" class_mode='binary')\n",
" ### END CODE HERE\n",
"\n",
" return train_generator\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "L9uxJFQb1nOx",
"outputId": "0c6ce535-7764-4bc0-a4a4-e6289a360b04"
},
"outputs": [],
"source": [
"# Save your generator in a variable\n",
"gen = image_generator()\n",
"\n",
"# Expected output: 'Found 80 images belonging to 2 classes'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output:**\n",
"```\n",
"Found 80 images belonging to 2 classes.\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "eUcNTpra1FK0"
},
"outputs": [],
"source": [
"def train_happy_sad_model(train_generator):\n",
"\n",
" # Instantiate the callback\n",
" callbacks = myCallback()\n",
"\n",
" ### START CODE HERE\n",
"\n",
" # Define the model, you can toy around with the architecture.\n",
" # Some helpful tips in case you are stuck:\n",
" \n",
" # - A good first layer would be a Conv2D layer with an input shape that matches \n",
" # that of every image in the training set (including the color dimension)\n",
"\n",
" # - The model will work best with 3 convolutional layers\n",
"\n",
" # - There should be a Flatten layer in between convolutional and dense layers\n",
"\n",
" # - The final layer should be a Dense layer with the number of units \n",
" # and activation function that supports binary classification.\n",
"\n",
" model = tf.keras.models.Sequential([ \n",
" # First convolution\n",
" tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),\n",
" tf.keras.layers.MaxPooling2D(2, 2),\n",
" # Second convolution\n",
" tf.keras.layers.Conv2D(16, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2, 2),\n",
" # Third convolution\n",
" tf.keras.layers.Conv2D(16, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2, 2),\n",
" # Flatten\n",
" tf.keras.layers.Flatten(),\n",
" # Dense layers\n",
" tf.keras.layers.Dense(512, activation='relu'),\n",
" tf.keras.layers.Dense(1, activation='sigmoid')\n",
" ])\n",
"\n",
" # Compile the model\n",
" # Select a loss function compatible with the last layer of your network\n",
" model.compile(loss='binary_crossentropy',\n",
" optimizer=optimizers.RMSprop(learning_rate=0.001),\n",
" metrics=['accuracy']) \n",
" \n",
"\n",
"\n",
" # Train the model\n",
" # Your model should achieve the desired accuracy in less than 15 epochs.\n",
" # You can hardcode up to 20 epochs in the function below but the callback should trigger before 15.\n",
" history = model.fit(train_generator,\n",
" epochs=20,\n",
" callbacks=[callbacks]\n",
" ) \n",
" \n",
" ### END CODE HERE\n",
" return history"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "sSaPPUe_z_OU",
"outputId": "b6e6306a-8b28-463b-e1a0-8bdeb9116f26"
},
"outputs": [],
"source": [
"hist = train_happy_sad_model(gen)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you see the message that was defined in the callback printed out after less than 15 epochs it means your callback worked as expected and training was successful. You can also double check by running the following cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0imravDn0Ajz"
},
"outputs": [],
"source": [
"print(f\"Your model reached the desired accuracy after {len(hist.epoch)} epochs\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Congratulations on finishing the last assignment of this course!**\n",
"\n",
"You have successfully implemented a CNN to assist you in the classification task for complex images. Nice job!\n",
"\n",
"**Keep it up!**"
]
}
],
"metadata": {
"jupytext": {
"main_language": "python"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/4. Using Real-world Images/ungraded_labs/C1_W4_Lab_1_image_generator_no_validation.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/master/C1/W4/ungraded_labs/C1_W4_Lab_1_image_generator_no_validation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-74XLLwqPlcw"
},
"source": [
"# Ungraded Lab: Training with ImageDataGenerator\n",
"\n",
"In this lab, you will build and train a model on the [Horses or Humans](https://www.tensorflow.org/datasets/catalog/horses_or_humans) dataset. It contains over a thousand images of horses and humans in varying poses and filesizes. You will use the [ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) class to prepare this dataset so it can be fed to a convolutional neural network.\n",
"\n",
"**IMPORTANT NOTE:** This notebook is designed to run as a Colab. Running it on your local machine might result in some of the code blocks throwing errors."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qYFguQkJvpV3"
},
"source": [
"Run the code below to download the compressed dataset `horse-or-human.zip`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RXZT2UsyIVe_"
},
"outputs": [],
"source": [
"!wget https://storage.googleapis.com/tensorflow-1-public/course2/week3/horse-or-human.zip"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9brUxyTpYZHy"
},
"source": [
"You can then unzip the archive using the [zipfile](https://docs.python.org/3/library/zipfile.html) module."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PLy3pthUS0D2"
},
"outputs": [],
"source": [
"import zipfile\n",
"\n",
"# Unzip the dataset\n",
"local_zip = './horse-or-human.zip'\n",
"zip_ref = zipfile.ZipFile(local_zip, 'r')\n",
"zip_ref.extractall('./horse-or-human')\n",
"zip_ref.close()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "o-qUPyfO7Qr8"
},
"source": [
"The contents of the .zip are extracted to the base directory `./horse-or-human`, which contains `horses` and `humans` subdirectories.\n",
"\n",
"In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like' and 'this is what a human looks like'.\n",
"\n",
"One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. You will use the ImageDataGenerator API instead -- and this is coded to automatically label images according to the directory names and structure. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. `ImageDataGenerator` will label the images appropriately for you, reducing a coding step. \n",
"\n",
"You can now define each of these directories:"
]
},
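{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch of that labeling rule (mimicking the documented behavior, not the actual Keras implementation): `flow_from_directory` sorts the subdirectory names alphanumerically and assigns each one an integer class index, so here `horses` maps to `0` and `humans` to `1`:\n",
"\n",
"```python\n",
"# Minimal sketch of how directory names become class indices\n",
"subdirs = ['humans', 'horses']  # order on disk doesn't matter\n",
"class_indices = {name: idx for idx, name in enumerate(sorted(subdirs))}\n",
"print(class_indices)  # {'horses': 0, 'humans': 1}\n",
"```"
]
},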
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NR_M9nWN-K8B"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Directory with our training horse pictures\n",
"train_horse_dir = os.path.join('./horse-or-human/horses')\n",
"\n",
"# Directory with our training human pictures\n",
"train_human_dir = os.path.join('./horse-or-human/humans')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LuBYtA_Zd8_T"
},
"source": [
"Now see what the filenames look like in the `horses` and `humans` training directories:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4PIP1rkmeAYS"
},
"outputs": [],
"source": [
"train_horse_names = os.listdir(train_horse_dir)\n",
"print(train_horse_names[:10])\n",
"\n",
"train_human_names = os.listdir(train_human_dir)\n",
"print(train_human_names[:10])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HlqN5KbafhLI"
},
"source": [
"You can also find out the total number of horse and human images in the directories:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "H4XHh2xSfgie"
},
"outputs": [],
"source": [
"print('total training horse images:', len(os.listdir(train_horse_dir)))\n",
"print('total training human images:', len(os.listdir(train_human_dir)))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "C3WZABE9eX-8"
},
"source": [
"Now take a look at a few pictures to get a better sense of what they look like. First, configure the `matplotlib` parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "b2_Q0-_5UAv-"
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.image as mpimg\n",
"\n",
"# Parameters for our graph; we'll output images in a 4x4 configuration\n",
"nrows = 4\n",
"ncols = 4\n",
"\n",
"# Index for iterating over images\n",
"pic_index = 0"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xTvHzGCxXkqp"
},
"source": [
"Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Wpr8GxjOU8in"
},
"outputs": [],
"source": [
"# Set up matplotlib fig, and size it to fit 4x4 pics\n",
"fig = plt.gcf()\n",
"fig.set_size_inches(ncols * 4, nrows * 4)\n",
"\n",
"pic_index += 8\n",
"next_horse_pix = [os.path.join(train_horse_dir, fname) \n",
" for fname in train_horse_names[pic_index-8:pic_index]]\n",
"next_human_pix = [os.path.join(train_human_dir, fname) \n",
" for fname in train_human_names[pic_index-8:pic_index]]\n",
"\n",
"for i, img_path in enumerate(next_horse_pix+next_human_pix):\n",
" # Set up subplot; subplot indices start at 1\n",
" sp = plt.subplot(nrows, ncols, i + 1)\n",
" sp.axis('Off') # Don't show axes (or gridlines)\n",
"\n",
" img = mpimg.imread(img_path)\n",
" plt.imshow(img)\n",
"\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5oqBkNBJmtUv"
},
"source": [
"## Building a Small Model from Scratch\n",
"\n",
"Now you can define the model architecture that you will train.\n",
"\n",
"Step 1 will be to import tensorflow."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qvfZg3LQbD-5"
},
"outputs": [],
"source": [
"import tensorflow as tf"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BnhYCP4tdqjC"
},
"source": [
"You then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Note that because this is a two-class classification problem, i.e. a *binary classification problem*, you will end your network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function). This makes the output value of your network a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0)."
]
},
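{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the sigmoid output concrete, here it is computed by hand (a quick standalone illustration, not part of the lab's model code):\n",
"\n",
"```python\n",
"import math\n",
"\n",
"def sigmoid(x):\n",
"    # Squashes any real number into the (0, 1) range\n",
"    return 1 / (1 + math.exp(-x))\n",
"\n",
"print(sigmoid(0))   # 0.5 -- right at the decision boundary\n",
"print(sigmoid(4))   # ~0.98, confidently class 1\n",
"print(sigmoid(-4))  # ~0.02, confidently class 0\n",
"```"
]
},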
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PixZ2s5QbYQ3"
},
"outputs": [],
"source": [
"model = tf.keras.models.Sequential([\n",
" # Note the input shape is the desired size of the image 300x300 with 3 bytes color\n",
" # This is the first convolution\n",
" tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),\n",
" tf.keras.layers.MaxPooling2D(2, 2),\n",
" # The second convolution\n",
" tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" # The third convolution\n",
" tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" # The fourth convolution\n",
" tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" # The fifth convolution\n",
" tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" # Flatten the results to feed into a DNN\n",
" tf.keras.layers.Flatten(),\n",
" # 512 neuron hidden layer\n",
" tf.keras.layers.Dense(512, activation='relu'),\n",
" # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')\n",
" tf.keras.layers.Dense(1, activation='sigmoid')\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "s9EaFDP5srBa"
},
"source": [
"You can review the network architecture and the output shapes with `model.summary()`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7ZKj8392nbgP"
},
"outputs": [],
"source": [
"model.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DmtkTn06pKxF"
},
"source": [
"The \"output shape\" column shows how the size of your feature map evolves in each successive layer. As you saw in an earlier lesson, each convolution layer removes the outermost pixels of the image (a 1-pixel border on each side for a 3x3 kernel), and each pooling layer halves the dimensions."
]
},
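{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can sanity-check those output shapes with a little arithmetic: each 3x3 convolution (with the default `'valid'` padding) trims a 1-pixel border from each side, and each 2x2 max pooling halves the dimension (flooring). A minimal sketch, assuming the five conv/pool blocks above:\n",
"\n",
"```python\n",
"size = 300\n",
"for _ in range(5):  # five Conv2D + MaxPooling2D blocks\n",
"    size -= 2       # 3x3 conv with 'valid' padding loses 1 pixel per side\n",
"    size //= 2      # 2x2 max pooling halves the dimension\n",
"    print(size)     # 149, 73, 35, 16, 7\n",
"```\n",
"\n",
"The final 7x7 feature map (with 64 filters) is what gets flattened before the dense layers."
]
},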
{
"cell_type": "markdown",
"metadata": {
"id": "PEkKSpZlvJXA"
},
"source": [
"Next, you'll configure the specifications for model training. You will train the model with the [`binary_crossentropy`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy) loss because it's a binary classification problem, and the final activation is a sigmoid. (For a refresher on loss metrics, see this [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) You will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, you will want to monitor classification accuracy.\n",
"\n",
"**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/#SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descent#Adam) and [Adagrad](https://developers.google.com/machine-learning/glossary/#AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8DHWhFP_uhq3"
},
"outputs": [],
"source": [
"from tensorflow.keras.optimizers import RMSprop\n",
"\n",
"model.compile(loss='binary_crossentropy',\n",
" optimizer=RMSprop(learning_rate=0.001),\n",
" metrics=['accuracy'])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Sn9m9D3UimHM"
},
"source": [
"### Data Preprocessing\n",
"\n",
"The next step is to set up the data generator that will read pictures in the source folder, convert them to `float32` tensors, and feed them (with their labels) to the model. In this lab, you will only use a training generator (a validation generator is added in the next lab). It will yield batches of 300x300 images and their binary labels.\n",
"\n",
"As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network (i.e. it is uncommon to feed raw pixels into a convnet). In this case, you will preprocess the images by scaling the pixel values from their original `[0, 255]` range down to `[0, 1]`.\n",
"\n",
"In Keras, this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ClebU9NJg99G"
},
"outputs": [],
"source": [
"from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
"\n",
"# All images will be rescaled by 1./255\n",
"train_datagen = ImageDataGenerator(rescale=1/255)\n",
"\n",
"# Flow training images in batches of 128 using train_datagen generator\n",
"train_generator = train_datagen.flow_from_directory(\n",
" './horse-or-human/', # This is the source directory for training images\n",
" target_size=(300, 300), # All images will be resized to 300x300\n",
" batch_size=128,\n",
" # Since we use binary_crossentropy loss, we need binary labels\n",
" class_mode='binary')\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mu3Jdwkjwax4"
},
"source": [
"### Training\n",
"\n",
"You can start training for 15 epochs -- this may take a few minutes to run.\n",
"\n",
"Do note the values per epoch.\n",
"\n",
"The `loss` and `accuracy` are great indicators of progress in training. `loss` measures how far the current model predictions are from the known labels, while `accuracy` is the fraction of predictions the model got right."
]
},
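{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of what the `binary_crossentropy` loss is doing (the standard per-example formula computed by hand, not the lab's code):\n",
"\n",
"```python\n",
"import math\n",
"\n",
"def binary_crossentropy(y_true, y_pred):\n",
"    # Standard per-example formula: -(y*log(p) + (1-y)*log(1-p))\n",
"    return -(y_true * math.log(y_pred) + (1 - y_true) * math.log(1 - y_pred))\n",
"\n",
"# A confident, correct prediction gives a small loss...\n",
"print(binary_crossentropy(1, 0.9))  # ~0.105\n",
"# ...while a confident, wrong one is penalized heavily.\n",
"print(binary_crossentropy(1, 0.1))  # ~2.303\n",
"```"
]
},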
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Fb1_lgobv81m"
},
"outputs": [],
"source": [
"history = model.fit(\n",
" train_generator,\n",
" steps_per_epoch=8, \n",
" epochs=15,\n",
" verbose=1)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "o6vSHzPR2ghH"
},
"source": [
"### Model Prediction\n",
"\n",
"Now take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, upload them, and run them through the model, giving an indication of whether the object is a horse or a human.\n",
"\n",
"**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the image(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. For all other browsers, continue with the next code block and ignore the one after it.\n",
"\n",
"_For Safari users: please comment out or skip the code block below, uncomment the next code block and run it._"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "DoWp43WxJDNT"
},
"outputs": [],
"source": [
"## CODE BLOCK FOR NON-SAFARI BROWSERS\n",
"## SAFARI USERS: PLEASE SKIP THIS BLOCK AND RUN THE NEXT ONE INSTEAD\n",
"\n",
"import numpy as np\n",
"from google.colab import files\n",
"from keras.preprocessing import image\n",
"\n",
"uploaded = files.upload()\n",
"\n",
"for fn in uploaded.keys():\n",
" \n",
" # predicting images\n",
" path = '/content/' + fn\n",
" img = image.load_img(path, target_size=(300, 300))\n",
" x = image.img_to_array(img)\n",
" x /= 255\n",
" x = np.expand_dims(x, axis=0)\n",
"\n",
" images = np.vstack([x])\n",
" classes = model.predict(images, batch_size=10)\n",
" print(classes[0])\n",
" \n",
" if classes[0]>0.5:\n",
" print(fn + \" is a human\")\n",
" else:\n",
" print(fn + \" is a horse\")\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WkLydXVZr30K"
},
"source": [
"`Safari` users will need to upload the image(s) manually into their workspace. Please follow the instructions below, then uncomment and run the code block.\n",
"\n",
"Instructions on how to upload image(s) manually in a Colab:\n",
"\n",
"1. Select the `folder` icon on the left `menu bar`.\n",
"2. Click on the `folder with an arrow pointing upwards` named `..`\n",
"3. Click on the `folder` named `tmp`.\n",
"4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.\n",
"5. Inside the new `images` folder, upload one or more images of your choice, preferably of either a horse or a human. Drag and drop the image(s) on top of the `images` folder.\n",
"6. Uncomment and run the code block below. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1_vVstx-r4jy"
},
"outputs": [],
"source": [
"# # CODE BLOCK FOR SAFARI USERS\n",
"\n",
"# import numpy as np\n",
"# from keras.preprocessing import image\n",
"# import os\n",
"\n",
"# images = os.listdir(\"/tmp/images\")\n",
"\n",
"# print(images)\n",
"\n",
"# for i in images:\n",
"# print()\n",
"# # predicting images\n",
"# path = '/tmp/images/' + i\n",
"# img = image.load_img(path, target_size=(300, 300))\n",
"# x = image.img_to_array(img)\n",
"# x /= 255\n",
"# x = np.expand_dims(x, axis=0)\n",
"\n",
"# images = np.vstack([x])\n",
"# classes = model.predict(images, batch_size=10)\n",
"# print(classes[0])\n",
"# if classes[0]>0.5:\n",
"# print(i + \" is a human\")\n",
"# else:\n",
"# print(i + \" is a horse\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-8EHQyWGDvWz"
},
"source": [
"### Visualizing Intermediate Representations\n",
"\n",
"To get a feel for what kind of features your CNN has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the model.\n",
"\n",
"You can pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-5tES8rXFjux"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import random\n",
"from tensorflow.keras.preprocessing.image import img_to_array, load_img\n",
"\n",
"# Define a new Model that will take an image as input, and will output\n",
"# intermediate representations for all layers in the previous model after\n",
"# the first.\n",
"successive_outputs = [layer.output for layer in model.layers[1:]]\n",
"visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs)\n",
"\n",
"# Prepare a random input image from the training set.\n",
"horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]\n",
"human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]\n",
"img_path = random.choice(horse_img_files + human_img_files)\n",
"\n",
"img = load_img(img_path, target_size=(300, 300)) # this is a PIL image\n",
"x = img_to_array(img) # Numpy array with shape (300, 300, 3)\n",
"x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 300, 300, 3)\n",
"\n",
"# Scale by 1/255\n",
"x /= 255\n",
"\n",
"# Run the image through the network, thus obtaining all\n",
"# intermediate representations for this image.\n",
"successive_feature_maps = visualization_model.predict(x)\n",
"\n",
"# These are the names of the layers, so you can have them as part of the plot\n",
"layer_names = [layer.name for layer in model.layers[1:]]\n",
"\n",
"# Display the representations\n",
"for layer_name, feature_map in zip(layer_names, successive_feature_maps):\n",
" if len(feature_map.shape) == 4:\n",
"\n",
" # Just do this for the conv / maxpool layers, not the fully-connected layers\n",
" n_features = feature_map.shape[-1] # number of features in feature map\n",
"\n",
" # The feature map has shape (1, size, size, n_features)\n",
" size = feature_map.shape[1]\n",
" \n",
" # Tile the images in this matrix\n",
" display_grid = np.zeros((size, size * n_features))\n",
" for i in range(n_features):\n",
" x = feature_map[0, :, :, i]\n",
" x -= x.mean()\n",
" x /= x.std()\n",
" x *= 64\n",
" x += 128\n",
" x = np.clip(x, 0, 255).astype('uint8')\n",
" \n",
" # Tile each filter into this big horizontal grid\n",
" display_grid[:, i * size : (i + 1) * size] = x\n",
" \n",
" # Display the grid\n",
" scale = 20. / n_features\n",
" plt.figure(figsize=(scale * n_features, scale))\n",
" plt.title(layer_name)\n",
" plt.grid(False)\n",
" plt.imshow(display_grid, aspect='auto', cmap='viridis')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tuqK2arJL0wo"
},
"source": [
"You can see above how the highlighted pixels turn into increasingly abstract and compact representations, especially in the bottom grids. \n",
"\n",
"The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being \"activated\"; most are set to zero. This is called _representation sparsity_ and is a key feature of deep learning. These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline in which each layer filters out irrelevant information and keeps the features most useful for the task."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "j4IBgYCYooGD"
},
"source": [
"## Clean Up\n",
"\n",
"You will continue with a similar exercise in the next lab but before that, run the following cell to terminate the kernel and free memory resources:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "651IgjLyo-Jx"
},
"outputs": [],
"source": [
"import os, signal\n",
"os.kill(os.getpid(), signal.SIGKILL)"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"name": "C1_W4_Lab_1_image_generator_no_validation.ipynb",
"private_outputs": true,
"provenance": [
{
"file_id": "https://github.com/https-deeplearning-ai/tensorflow-1-public/blob/adding_C1/C1/W4/ungraded_labs/C1_W4_Lab_1_image_generator_no_validation.ipynb",
"timestamp": 1639104486753
}
]
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/4. Using Real-world Images/ungraded_labs/C1_W4_Lab_2_image_generator_with_validation.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/master/C1/W4/ungraded_labs/C1_W4_Lab_2_image_generator_with_validation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xB2cQUShkXNm"
},
"source": [
"# Ungraded Lab: ImageDataGenerator with a Validation Set\n",
"\n",
"In this lab, you will continue using the `ImageDataGenerator` class to prepare the `Horses or Humans` dataset. This time, you will add a validation set so you can also measure how well the model performs on data it hasn't seen."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WsO-u_3fySMd"
},
"source": [
"**IMPORTANT NOTE:** This notebook is designed to run as a Colab. Running it on your local machine might result in some of the code blocks throwing errors."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "l5FfBGV5yUjb"
},
"source": [
"Run the code blocks below to download the datasets `horse-or-human.zip` and `validation-horse-or-human.zip` respectively."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RXZT2UsyIVe_"
},
"outputs": [],
"source": [
"# Download the training set\n",
"!wget https://storage.googleapis.com/tensorflow-1-public/course2/week3/horse-or-human.zip"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0mLij6qde6Ox"
},
"outputs": [],
"source": [
"# Download the validation set\n",
"!wget https://storage.googleapis.com/tensorflow-1-public/course2/week3/validation-horse-or-human.zip"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9brUxyTpYZHy"
},
"source": [
"Then unzip both archives."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PLy3pthUS0D2"
},
"outputs": [],
"source": [
"import zipfile\n",
"\n",
"# Unzip training set\n",
"local_zip = './horse-or-human.zip'\n",
"zip_ref = zipfile.ZipFile(local_zip, 'r')\n",
"zip_ref.extractall('./horse-or-human')\n",
"\n",
"# Unzip validation set\n",
"local_zip = './validation-horse-or-human.zip'\n",
"zip_ref = zipfile.ZipFile(local_zip, 'r')\n",
"zip_ref.extractall('./validation-horse-or-human')\n",
"\n",
"zip_ref.close()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "o-qUPyfO7Qr8"
},
"source": [
"Similar to the previous lab, you will define the directories containing your images. This time, you will include those with validation data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NR_M9nWN-K8B"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Directory with training horse pictures\n",
"train_horse_dir = os.path.join('./horse-or-human/horses')\n",
"\n",
"# Directory with training human pictures\n",
"train_human_dir = os.path.join('./horse-or-human/humans')\n",
"\n",
"# Directory with validation horse pictures\n",
"validation_horse_dir = os.path.join('./validation-horse-or-human/horses')\n",
"\n",
"# Directory with validation human pictures\n",
"validation_human_dir = os.path.join('./validation-horse-or-human/humans')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LuBYtA_Zd8_T"
},
"source": [
"Now see what the filenames look like in these directories:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4PIP1rkmeAYS"
},
"outputs": [],
"source": [
"train_horse_names = os.listdir(train_horse_dir)\n",
"print(f'TRAIN SET HORSES: {train_horse_names[:10]}')\n",
"\n",
"train_human_names = os.listdir(train_human_dir)\n",
"print(f'TRAIN SET HUMANS: {train_human_names[:10]}')\n",
"\n",
"validation_horse_names = os.listdir(validation_horse_dir)\n",
"print(f'VAL SET HORSES: {validation_horse_names[:10]}')\n",
"\n",
"validation_human_names = os.listdir(validation_human_dir)\n",
"print(f'VAL SET HUMANS: {validation_human_names[:10]}')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HlqN5KbafhLI"
},
"source": [
"You can find out the total number of horse and human images in the directories:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "H4XHh2xSfgie"
},
"outputs": [],
"source": [
"print(f'total training horse images: {len(os.listdir(train_horse_dir))}')\n",
"print(f'total training human images: {len(os.listdir(train_human_dir))}')\n",
"print(f'total validation horse images: {len(os.listdir(validation_horse_dir))}')\n",
"print(f'total validation human images: {len(os.listdir(validation_human_dir))}')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "C3WZABE9eX-8"
},
"source": [
"Now take a look at a few pictures to get a better sense of what they look like. First, configure the `matplotlib` parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "b2_Q0-_5UAv-"
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.image as mpimg\n",
"\n",
"# Parameters for our graph; we'll output images in a 4x4 configuration\n",
"nrows = 4\n",
"ncols = 4\n",
"\n",
"# Index for iterating over images\n",
"pic_index = 0"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xTvHzGCxXkqp"
},
"source": [
"Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Wpr8GxjOU8in"
},
"outputs": [],
"source": [
"# Set up matplotlib fig, and size it to fit 4x4 pics\n",
"fig = plt.gcf()\n",
"fig.set_size_inches(ncols * 4, nrows * 4)\n",
"\n",
"pic_index += 8\n",
"next_horse_pix = [os.path.join(train_horse_dir, fname) \n",
" for fname in train_horse_names[pic_index-8:pic_index]]\n",
"next_human_pix = [os.path.join(train_human_dir, fname) \n",
" for fname in train_human_names[pic_index-8:pic_index]]\n",
"\n",
"for i, img_path in enumerate(next_horse_pix+next_human_pix):\n",
" # Set up subplot; subplot indices start at 1\n",
" sp = plt.subplot(nrows, ncols, i + 1)\n",
" sp.axis('Off') # Don't show axes (or gridlines)\n",
"\n",
" img = mpimg.imread(img_path)\n",
" plt.imshow(img)\n",
"\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5oqBkNBJmtUv"
},
"source": [
"## Building a Small Model from Scratch\n",
"\n",
"You will define the same model architecture as before:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qvfZg3LQbD-5"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"model = tf.keras.models.Sequential([\n",
" # Note the input shape is the desired size of the image 300x300 with 3 bytes color\n",
" # This is the first convolution\n",
" tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),\n",
" tf.keras.layers.MaxPooling2D(2, 2),\n",
" # The second convolution\n",
" tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" # The third convolution\n",
" tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" # The fourth convolution\n",
" tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" # The fifth convolution\n",
" tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" # Flatten the results to feed into a DNN\n",
" tf.keras.layers.Flatten(),\n",
" # 512 neuron hidden layer\n",
" tf.keras.layers.Dense(512, activation='relu'),\n",
" # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')\n",
" tf.keras.layers.Dense(1, activation='sigmoid')\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "s9EaFDP5srBa"
},
"source": [
"You can review the network architecture and the output shapes with `model.summary()`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7ZKj8392nbgP"
},
"outputs": [],
"source": [
"model.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PEkKSpZlvJXA"
},
"source": [
"You will also use the same compile settings as before:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8DHWhFP_uhq3"
},
"outputs": [],
"source": [
"from tensorflow.keras.optimizers import RMSprop\n",
"\n",
"model.compile(loss='binary_crossentropy',\n",
" optimizer=RMSprop(learning_rate=0.001),\n",
" metrics=['accuracy'])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Sn9m9D3UimHM"
},
"source": [
"### Data Preprocessing\n",
"\n",
"Now you will set up the data generators. This will mostly be the same as last time, but notice the additional code to also prepare the validation data. The validation generator needs to be instantiated separately, and its images likewise scaled to the `[0,1]` pixel range.\n",
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ClebU9NJg99G"
},
"outputs": [],
"source": [
"from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
"\n",
"# All images will be rescaled by 1./255\n",
"train_datagen = ImageDataGenerator(rescale=1/255)\n",
"validation_datagen = ImageDataGenerator(rescale=1/255)\n",
"\n",
"# Flow training images in batches of 128 using train_datagen generator\n",
"train_generator = train_datagen.flow_from_directory(\n",
" './horse-or-human/', # This is the source directory for training images\n",
" target_size=(300, 300), # All images will be resized to 300x300\n",
" batch_size=128,\n",
" # Since you use binary_crossentropy loss, you need binary labels\n",
" class_mode='binary')\n",
"\n",
"# Flow validation images in batches of 128 using validation_datagen generator\n",
"validation_generator = validation_datagen.flow_from_directory(\n",
" './validation-horse-or-human/', # This is the source directory for validation images\n",
" target_size=(300, 300), # All images will be resized to 300x300\n",
" batch_size=32,\n",
" # Since you use binary_crossentropy loss, you need binary labels\n",
" class_mode='binary')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mu3Jdwkjwax4"
},
"source": [
"### Training\n",
"Now train the model for 15 epochs. Here, you will pass parameters for `validation_data` and `validation_steps`. With these, you will notice additional outputs in the print statements: `val_loss` and `val_accuracy`. Notice that as you train with more epochs, your training accuracy might go up but your validation accuracy goes down. This can be a sign of overfitting and you need to prevent your model from reaching this point."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Fb1_lgobv81m"
},
"outputs": [],
"source": [
"history = model.fit(\n",
" train_generator,\n",
" steps_per_epoch=8, \n",
" epochs=15,\n",
" verbose=1,\n",
" validation_data = validation_generator,\n",
" validation_steps=8)"
]
},
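{
"cell_type": "markdown",
"metadata": {},
"source": [
"The overfitting symptom described above is easier to spot on a plot. The cell below is not part of the original lab; it is a minimal sketch, assuming `matplotlib` is available, of how you could plot the training and validation accuracy stored in the `history` object returned by `model.fit()`. The `plot_accuracy` helper is hypothetical:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"def plot_accuracy(hist_dict):\n",
"    # hist_dict is a Keras History.history dict with 'accuracy' and 'val_accuracy' lists\n",
"    epochs = range(1, len(hist_dict['accuracy']) + 1)\n",
"    plt.plot(epochs, hist_dict['accuracy'], label='train accuracy')\n",
"    plt.plot(epochs, hist_dict['val_accuracy'], label='val accuracy')\n",
"    plt.xlabel('epoch')\n",
"    plt.ylabel('accuracy')\n",
"    plt.legend()\n",
"    plt.show()\n",
"\n",
"# Uncomment after training to inspect the curves:\n",
"# plot_accuracy(history.history)"
]
},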
{
"cell_type": "markdown",
"metadata": {
"id": "o6vSHzPR2ghH"
},
"source": [
"### Model Prediction\n",
"\n",
"Now take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, upload them, and run them through the model, giving an indication of whether the object is a horse or a human.\n",
"\n",
"**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the images(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. For all other browsers, continue with the next code block and ignore the next one after it.\n",
"\n",
"_For Safari users: please comment out or skip the code block below, uncomment the next code block and run it._"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "DoWp43WxJDNT"
},
"outputs": [],
"source": [
"## CODE BLOCK FOR NON-SAFARI BROWSERS\n",
"## SAFARI USERS: PLEASE SKIP THIS BLOCK AND RUN THE NEXT ONE INSTEAD\n",
"\n",
"import numpy as np\n",
"from google.colab import files\n",
"from keras.preprocessing import image\n",
"\n",
"uploaded = files.upload()\n",
"\n",
"for fn in uploaded.keys():\n",
" \n",
" # predicting images\n",
" path = '/content/' + fn\n",
" img = image.load_img(path, target_size=(300, 300))\n",
" x = image.img_to_array(img)\n",
" x /= 255\n",
" x = np.expand_dims(x, axis=0)\n",
"\n",
" images = np.vstack([x])\n",
" classes = model.predict(images, batch_size=10)\n",
" print(classes[0])\n",
" if classes[0]>0.5:\n",
" print(fn + \" is a human\")\n",
" else:\n",
" print(fn + \" is a horse\")\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UJV8rdWU0NlM"
},
"source": [
"`Safari` users will need to upload the images(s) manually in their workspace. Please follow the instructions, uncomment the code block below and run it.\n",
"\n",
"Instructions on how to upload image(s) manually in a Colab:\n",
"\n",
"1. Select the `folder` icon on the left `menu bar`.\n",
"2. Click on the `folder with an arrow pointing upwards` named `..`\n",
"3. Click on the `folder` named `tmp`.\n",
"4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.\n",
"5. Inside of the new `images` folder, upload an image(s) of your choice, preferably of either a horse or a human. Drag and drop the images(s) on top of the `images` folder.\n",
"6. Uncomment and run the code block below. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "eyIcglKE0MpY"
},
"outputs": [],
"source": [
"# # CODE BLOCK FOR SAFARI USERS\n",
"\n",
"# import numpy as np\n",
"# from keras.preprocessing import image\n",
"# import os\n",
"\n",
"# images = os.listdir(\"/tmp/images\")\n",
"\n",
"# print(images)\n",
"\n",
"# for i in images:\n",
"# print()\n",
"# # predicting images\n",
"# path = '/tmp/images/' + i\n",
"# img = image.load_img(path, target_size=(300, 300))\n",
"# x = image.img_to_array(img)\n",
"# x /= 255\n",
"# x = np.expand_dims(x, axis=0)\n",
"\n",
"# images = np.vstack([x])\n",
"# classes = model.predict(images, batch_size=10)\n",
"# print(classes[0])\n",
"# if classes[0]>0.5:\n",
"# print(i + \" is a human\")\n",
"# else:\n",
"# print(i + \" is a horse\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-8EHQyWGDvWz"
},
"source": [
"### Visualizing Intermediate Representations\n",
"\n",
"As before, you can plot how the features are transformed as it goes through each layer."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-5tES8rXFjux"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import random\n",
"from tensorflow.keras.preprocessing.image import img_to_array, load_img\n",
"\n",
"# Define a new Model that will take an image as input, and will output\n",
"# intermediate representations for all layers in the previous model after\n",
"# the first.\n",
"successive_outputs = [layer.output for layer in model.layers[1:]]\n",
"visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs)\n",
"\n",
"# Prepare a random input image from the training set.\n",
"horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]\n",
"human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]\n",
"img_path = random.choice(horse_img_files + human_img_files)\n",
"\n",
"img = load_img(img_path, target_size=(300, 300)) # this is a PIL image\n",
"x = img_to_array(img) # Numpy array with shape (300, 300, 3)\n",
"x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 300, 300, 3)\n",
"\n",
"# Scale by 1/255\n",
"x /= 255\n",
"\n",
"# Run the image through the network, thus obtaining all\n",
"# intermediate representations for this image.\n",
"successive_feature_maps = visualization_model.predict(x)\n",
"\n",
"# These are the names of the layers, so you can have them as part of the plot\n",
"layer_names = [layer.name for layer in model.layers[1:]]\n",
"\n",
"# Display the representations\n",
"for layer_name, feature_map in zip(layer_names, successive_feature_maps):\n",
" if len(feature_map.shape) == 4:\n",
"\n",
" # Just do this for the conv / maxpool layers, not the fully-connected layers\n",
" n_features = feature_map.shape[-1] # number of features in feature map\n",
"\n",
" # The feature map has shape (1, size, size, n_features)\n",
" size = feature_map.shape[1]\n",
" \n",
" # Tile the images in this matrix\n",
" display_grid = np.zeros((size, size * n_features))\n",
" for i in range(n_features):\n",
" x = feature_map[0, :, :, i]\n",
" x -= x.mean()\n",
" x /= x.std()\n",
" x *= 64\n",
" x += 128\n",
" x = np.clip(x, 0, 255).astype('uint8')\n",
" \n",
" # Tile each filter into this big horizontal grid\n",
" display_grid[:, i * size : (i + 1) * size] = x\n",
" \n",
" # Display the grid\n",
" scale = 20. / n_features\n",
" plt.figure(figsize=(scale * n_features, scale))\n",
" plt.title(layer_name)\n",
" plt.grid(False)\n",
" plt.imshow(display_grid, aspect='auto', cmap='viridis')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "j4IBgYCYooGD"
},
"source": [
"## Clean Up\n",
"\n",
"Before running the next exercise, run the following cell to terminate the kernel and free memory resources:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "651IgjLyo-Jx"
},
"outputs": [],
"source": [
"# import os, signal\n",
"# os.kill(os.getpid(), signal.SIGKILL)"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"name": "C1_W4_Lab_2_image_generator_with_validation.ipynb",
"private_outputs": true,
"provenance": [
{
"file_id": "https://github.com/https-deeplearning-ai/tensorflow-1-public/blob/adding_C1/C1/W4/ungraded_labs/C1_W4_Lab_2_image_generator_with_validation.ipynb",
"timestamp": 1639109465068
}
],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
================================================
FILE: 1. Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/4. Using Real-world Images/ungraded_labs/C1_W4_Lab_3_compacted_images.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/master/C1/W4/ungraded_labs/C1_W4_Lab_3_compacted_images.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qR8Am0lBtRAx"
},
"source": [
"# Ungraded Lab: Effect of Compacted Images in Training\n",
"\n",
"In this notebook, you will see how reducing the target size of the generator images will affect the architecture and performance of your model. This is a useful technique in case you need to speed up your training or save compute resources. Let's begin!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "D1iD7DhP2NWt"
},
"source": [
"**IMPORTANT NOTE:** This notebook is designed to run as a Colab. Running it on your local machine might result in some of the code blocks throwing errors."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qxY7KvGQ2Qdr"
},
"source": [
"As before, start downloading the train and validation sets:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RXZT2UsyIVe_"
},
"outputs": [],
"source": [
"# Download the training set\n",
"!wget https://storage.googleapis.com/tensorflow-1-public/course2/week3/horse-or-human.zip"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0mLij6qde6Ox"
},
"outputs": [],
"source": [
"# Download the validation set\n",
"!wget https://storage.googleapis.com/tensorflow-1-public/course2/week3/validation-horse-or-human.zip"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9brUxyTpYZHy"
},
"source": [
"Then unzip them:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PLy3pthUS0D2"
},
"outputs": [],
"source": [
"import zipfile\n",
"\n",
"# Unzip training set\n",
"local_zip = './horse-or-human.zip'\n",
"zip_ref = zipfile.ZipFile(local_zip, 'r')\n",
"zip_ref.extractall('./horse-or-human')\n",
"\n",
"# Unzip validation set\n",
"local_zip = './validation-horse-or-human.zip'\n",
"zip_ref = zipfile.ZipFile(local_zip, 'r')\n",
"zip_ref.extractall('./validation-horse-or-human')\n",
"\n",
"zip_ref.close()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "o-qUPyfO7Qr8"
},
"source": [
"Then define the directories containing the images:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NR_M9nWN-K8B"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Directory with training horse pictures\n",
"train_horse_dir = os.path.join('./horse-or-human/horses')\n",
"\n",
"# Directory with training human pictures\n",
"train_human_dir = os.path.join('./horse-or-human/humans')\n",
"\n",
"# Directory with validation horse pictures\n",
"validation_horse_dir = os.path.join('./validation-horse-or-human/horses')\n",
"\n",
"# Directory with validation human pictures\n",
"validation_human_dir = os.path.join('./validation-horse-or-human/humans')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "z1wrZCxTPw4m"
},
"source": [
"You can check that the directories are not empty and that the train set has more images than the validation set:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_horse_names = os.listdir(train_horse_dir)\n",
"print(f'TRAIN SET HORSES: {train_horse_names[:10]}')\n",
"\n",
"train_human_names = os.listdir(train_human_dir)\n",
"print(f'TRAIN SET HUMANS: {train_human_names[:10]}')\n",
"\n",
"validation_horse_hames = os.listdir(validation_horse_dir)\n",
"print(f'VAL SET HORSES: {validation_horse_hames[:10]}')\n",
"\n",
"validation_human_names = os.listdir(validation_human_dir)\n",
"print(f'VAL SET HUMANS: {validation_human_names[:10]}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZTpdVrBg2LZC"
},
"outputs": [],
"source": [
"print(f'total training horse images: {len(os.listdir(train_horse_dir))}')\n",
"print(f'total training human images: {len(os.listdir(train_human_dir))}')\n",
"print(f'total validation horse images: {len(os.listdir(validation_horse_dir))}')\n",
"print(f'total validation human images: {len(os.listdir(validation_human_dir))}')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5oqBkNBJmtUv"
},
"source": [
"## Build the Model\n",
"\n",
"The model will follow the same architecture as before but they key difference is in the `input_shape` parameter of the first `Conv2D` layer. Since you will be compacting the images later in the generator, you need to specify the expected image size here. So instead of 300x300 as in the previous two labs, you specify a smaller 150x150 array."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PixZ2s5QbYQ3"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"model = tf.keras.models.Sequential([\n",
" # Note the input shape is the desired size of the image 150x150 with 3 bytes color\n",
" # This is the first convolution\n",
" tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),\n",
" tf.keras.layers.MaxPooling2D(2, 2),\n",
" # The second convolution\n",
" tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" # The third convolution\n",
" tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
"# # The fourth convolution (You can uncomment the 4th and 5th conv layers later to see the effect)\n",
"# tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
"# tf.keras.layers.MaxPooling2D(2,2),\n",
"# # The fifth convolution\n",
"# tf.keras.layers.Conv2D(64, (3,3), activation='relu'),\n",
"# tf.keras.layers.MaxPooling2D(2,2),\n",
" # Flatten the results to feed into a DNN\n",
" tf.keras.layers.Flatten(),\n",
" # 512 neuron hidden layer\n",
" tf.keras.layers.Dense(512, activation='relu'),\n",
" # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')\n",
" tf.keras.layers.Dense(1, activation='sigmoid')\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "s9EaFDP5srBa"
},
"source": [
"You can see the difference from previous models when you print the `model.summary()`. As expected, there will be less inputs to the `Dense` layer at the end of the model compared to the previous labs. This is because you used the same number of max pooling layers in your model. And since you have a smaller image to begin with (150 x 150), then the output after all the pooling layers will also be smaller."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7ZKj8392nbgP"
},
"outputs": [],
"source": [
"model.summary()"
]
},
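{
"cell_type": "markdown",
"metadata": {},
"source": [
"The shrinking output shapes in the summary above follow from simple arithmetic: each 3x3 convolution (with Keras' default `padding='valid'`) trims 2 pixels from each spatial dimension, and each 2x2 max pooling halves the result. The cell below is a small sketch, not part of the original lab, that reproduces those sizes; the `feature_map_size` helper is hypothetical:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch (not part of the lab): compute the feature map size after n blocks,\n",
"# where each block is Conv2D(3x3, padding='valid') followed by MaxPooling2D(2,2).\n",
"def feature_map_size(size, n_blocks):\n",
"    for _ in range(n_blocks):\n",
"        size = (size - 2) // 2\n",
"    return size\n",
"\n",
"# 150x150 input through the 3 active conv/pool blocks above:\n",
"print(feature_map_size(150, 3))            # 17\n",
"print(feature_map_size(150, 3) ** 2 * 64)  # 18496 inputs to the Dense layer\n",
"\n",
"# With the commented-out 4th and 5th blocks enabled:\n",
"print(feature_map_size(150, 5))            # 2\n",
"print(feature_map_size(150, 5) ** 2 * 64)  # 256"
]
},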
{
"cell_type": "markdown",
"metadata": {
"id": "PEkKSpZlvJXA"
},
"source": [
"You will use the same settings for training:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8DHWhFP_uhq3"
},
"outputs": [],
"source": [
"from tensorflow.keras.optimizers import RMSprop\n",
"\n",
"model.compile(loss='binary_crossentropy',\n",
" optimizer=RMSprop(learning_rate=0.001),\n",
" metrics=['accuracy'])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Sn9m9D3UimHM"
},
"source": [
"### Data Preprocessing\n",
"\n",
"Now you will instantiate the data generators. As mentioned before, you will be compacting the image by specifying the `target_size` parameter. See the simple change below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ClebU9NJg99G"
},
"outputs": [],
"source": [
"from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
"\n",
"# All images will be rescaled by 1./255\n",
"train_datagen = ImageDataGenerator(rescale=1/255)\n",
"validation_datagen = ImageDataGenerator(rescale=1/255)\n",
"\n",
"# Flow training images in batches of 128 using train_datagen generator\n",
"train_generator = train_datagen.flow_from_directory(\n",
" './horse-or-human/', # This is the source directory for training images\n",
" target_size=(150, 150), # All images will be resized to 150x150\n",
" batch_size=128,\n",
" # Since you used binary_crossentropy loss, you need binary labels\n",
" class_mode='binary')\n",
"\n",
"# Flow training images in batches of 128 using train_datagen generator\n",
"validation_generator = validation_datagen.flow_from_directory(\n",
" './validation-horse-or-human/', # This is the source directory for training images\n",
" target_size=(150, 150), # All images will be resized to 150x150\n",
" batch_size=32,\n",
" # Since you used binary_crossentropy loss, you need binary labels\n",
" class_mode='binary')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mu3Jdwkjwax4"
},
"source": [
"### Training\n",
"\n",
"Now you're ready to train and see the results. Note your observations about how fast the model trains and the accuracies you're getting in the train and validation sets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Fb1_lgobv81m"
},
"outputs": [],
"source": [
"history = model.fit(\n",
" train_generator,\n",
" steps_per_epoch=8, \n",
" epochs=15,\n",
" verbose=1,\n",
" validation_data = validation_generator,\n",
" validation_steps=8)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "o6vSHzPR2ghH"
},
"source": [
"### Model Prediction\n",
"\n",
"As usual, it is also good practice to try running your model over some handpicked images. See if you got better, worse, or the same performance as the previous lab.\n",
"\n",
"**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the images(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. For all other browsers, continue with the next code block and ignore the next one after it.\n",
"\n",
"_For Safari users: please comment out or skip the code block below, uncomment the next code block and run it._"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "DoWp43WxJDNT"
},
"outputs": [],
"source": [
"## CODE BLOCK FOR NON-SAFARI BROWSERS\n",
"## SAFARI USERS: PLEASE SKIP THIS BLOCK AND RUN THE NEXT ONE INSTEAD\n",
"\n",
"import numpy as np\n",
"from google.colab import files\n",
"from keras.preprocessing import image\n",
"\n",
"uploaded = files.upload()\n",
"\n",
"for fn in uploaded.keys():\n",
" \n",
" # predicting images\n",
" path = '/content/' + fn\n",
" img = image.load_img(path, target_size=(150, 150))\n",
" x = image.img_to_array(img)\n",
" x /= 255\n",
" x = np.expand_dims(x, axis=0)\n",
"\n",
" images = np.vstack([x])\n",
" classes = model.predict(images, batch_size=10)\n",
" print(classes[0])\n",
" if classes[0]>0.5:\n",
" print(fn + \" is a human\")\n",
" else:\n",
" print(fn + \" is a horse\")\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ckps9Sw4657d"
},
"source": [
"`Safari` users will need to upload the images(s) manually in their workspace. Please follow the instructions, uncomment the code block below and run it.\n",
"\n",
"Instructions on how to upload image(s) manually in a Colab:\n",
"\n",
"1. Select the `folder` icon on the left `menu bar`.\n",
"2. Click on the `folder with an arrow pointing upwards` named `..`\n",
"3. Click on the `folder` named `tmp`.\n",
"4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.\n",
"5. Inside of the new `images` folder, upload an image(s) of your choice, preferably of either a horse or a human. Drag and drop the images(s) on top of the `images` folder.\n",
"6. Uncomment and run the code block below. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "v_GgQjRT65oM"
},
"outputs": [],
"source": [
"# # CODE BLOCK FOR SAFARI USERS\n",
"\n",
"# import numpy as np\n",
"# from keras.preprocessing import image\n",
"# import os\n",
"\n",
"# images = os.listdir(\"/tmp/images\")\n",
"\n",
"# print(images)\n",
"\n",
"# for i in images:\n",
"# print()\n",
"# # predicting images\n",
"# path = '/tmp/images/' + i\n",
"# img = image.load_img(path, target_size=(150, 150))\n",
"# x = image.img_to_array(img)\n",
"# x /= 255\n",
"# x = np.expand_dims(x, axis=0)\n",
"\n",
"# images = np.vstack([x])\n",
"# classes = model.predict(images, batch_size=10)\n",
"# print(classes[0])\n",
"# if classes[0]>0.5:\n",
"# print(i + \" is a human\")\n",
"# else:\n",
"# print(i + \" is a horse\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-8EHQyWGDvWz"
},
"source": [
"### Visualizing Intermediate Representations\n",
"\n",
"You can also look again at the intermediate representations. You will notice that the output at the last convolution layer is even more abstract because it contains fewer pixels than before."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-5tES8rXFjux"
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import random\n",
"from tensorflow.keras.preprocessing.image import img_to_array, load_img\n",
"\n",
"# Define a new Model that will take an image as input, and will output\n",
"# intermediate representations for all layers in the previous model after\n",
"# the first.\n",
"successive_outputs = [layer.output for layer in model.layers[1:]]\n",
"visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs)\n",
"\n",
"# Prepare a random input image from the training set.\n",
"horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]\n",
"human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]\n",
"img_path = random.choice(horse_img_files + human_img_files)\n",
"img = load_img(img_path, target_size=(150, 150)) # this is a PIL image\n",
"x = img_to_array(img) # Numpy array with shape (150, 150, 3)\n",
"x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 150, 150, 3)\n",
"\n",
"# Scale by 1/255\n",
"x /= 255\n",
"\n",
"# Run the image through the network, thus obtaining all\n",
"# intermediate representations for this image.\n",
"successive_feature_maps = visualization_model.predict(x)\n",
"\n",
"# These are the names of the layers, so you can have them as part of the plot\n",
"layer_names = [layer.name for layer in model.layers[1:]]\n",
"\n",
"# Display the representations\n",
"for layer_name, feature_map in zip(layer_names, successive_feature_maps):\n",
" if len(feature_map.shape) == 4:\n",
"\n",
" # Just do this for the conv / maxpool layers, not the fully-connected layers\n",
" n_features = feature_map.shape[-1] # number of features in feature map\n",
"\n",
" # The feature map has shape (1, size, size, n_features)\n",
" size = feature_map.shape[1]\n",
" \n",
" # Tile the images in this matrix\n",
" display_grid = np.zeros((size, size * n_features))\n",
" for i in range(n_features):\n",
" x = feature_map[0, :, :, i]\n",
" x -= x.mean()\n",
" x /= x.std()\n",
" x *= 64\n",
" x += 128\n",
" x = np.clip(x, 0, 255).astype('uint8')\n",
" \n",
" # Tile each filter into this big horizontal grid\n",
" display_grid[:, i * size : (i + 1) * size] = x\n",
" \n",
" # Display the grid\n",
" scale = 20. / n_features\n",
" plt.figure(figsize=(scale * n_features, scale))\n",
" plt.title(layer_name)\n",
" plt.grid(False)\n",
" plt.imshow(display_grid, aspect='auto', cmap='viridis')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "j4IBgYCYooGD"
},
"source": [
"## Clean Up\n",
"\n",
"Please run the following cell to terminate the kernel and free memory resources:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "651IgjLyo-Jx"
},
"outputs": [],
"source": [
"import os, signal\n",
"os.kill(os.getpid(), signal.SIGKILL)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tFnBvcIrXWW2"
},
"source": [
"## Wrap Up\n",
"\n",
"In this lab, you saw how compacting images affected your previous model. This is one technique to keep in mind especially when you are still in the exploratory phase of your own projects. You can see if a smaller model behaves just as well as a large model so you can have faster training. You also saw how easy it is to customize your images for this adjustment in size by simply changing a parameter in the `ImageDataGenerator` class."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"name": "C1_W4_Lab_3_compacted_images.ipynb",
"private_outputs": true,
"provenance": [
{
"file_id": "https://github.com/https-deeplearning-ai/tensorflow-1-public/blob/adding_C1/C1/W4/ungraded_labs/C1_W4_Lab_3_compacted_images.ipynb",
"timestamp": 1639112249467
}
],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
================================================
FILE: 2. Convolutional Neural Networks in TensorFlow/1. Exploring a Larger Dataset/assignment/C2W1_Assignment.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "AuW-xg_bTsaF"
},
"source": [
"# Week 1: Using CNN's with the Cats vs Dogs Dataset\n",
"\n",
"Welcome to the 1st assignment of the course! This week, you will be using the famous `Cats vs Dogs` dataset to train a model that can classify images of dogs from images of cats. For this, you will create your own Convolutional Neural Network in Tensorflow and leverage Keras' image preprocessing utilities.\n",
"\n",
"You will also create some helper functions to move the images around the filesystem so if you are not familiar with the `os` module be sure to take a look a the [docs](https://docs.python.org/3/library/os.html).\n",
"\n",
"Let's get started!"
],
"id": "AuW-xg_bTsaF"
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"id": "dn-6c02VmqiN"
},
"outputs": [],
"source": [
"import os\n",
"import zipfile\n",
"import random\n",
"import shutil\n",
"import tensorflow as tf\n",
"from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
"from shutil import copyfile\n",
"import matplotlib.pyplot as plt"
],
"id": "dn-6c02VmqiN"
},
{
"cell_type": "markdown",
"metadata": {
"id": "bLTQd84RUs1j"
},
"source": [
"Download the dataset from its original source by running the cell below. \n",
"\n",
"Note that the `zip` file that contains the images is unzipped under the `/tmp` directory."
],
"id": "bLTQd84RUs1j"
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"id": "3sd9dQWa23aj",
"lines_to_next_cell": 2,
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "6744766e-6e6b-458d-b98e-675b6764af45"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"--2022-03-25 22:39:29-- https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\n",
"Resolving download.microsoft.com (download.microsoft.com)... 23.45.144.230, 2600:1407:f800:481::e59, 2600:1407:f800:49b::e59\n",
"Connecting to download.microsoft.com (download.microsoft.com)|23.45.144.230|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 824894548 (787M) [application/octet-stream]\n",
"Saving to: ‘/tmp/cats-and-dogs.zip’\n",
"\n",
"/tmp/cats-and-dogs. 100%[===================>] 786.68M 102MB/s in 8.9s \n",
"\n",
"2022-03-25 22:39:38 (88.5 MB/s) - ‘/tmp/cats-and-dogs.zip’ saved [824894548/824894548]\n",
"\n"
]
}
],
"source": [
"# If the URL doesn't work, visit https://www.microsoft.com/en-us/download/confirmation.aspx?id=54765\n",
"# And right click on the 'Download Manually' link to get a new URL to the dataset\n",
"\n",
"# Note: This is a very large dataset and will take some time to download\n",
"\n",
"!wget --no-check-certificate \\\n",
" \"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\" \\\n",
" -O \"/tmp/cats-and-dogs.zip\"\n",
"\n",
"local_zip = '/tmp/cats-and-dogs.zip'\n",
"zip_ref = zipfile.ZipFile(local_zip, 'r')\n",
"zip_ref.extractall('/tmp')\n",
"zip_ref.close()"
],
"id": "3sd9dQWa23aj"
},
{
"cell_type": "markdown",
"metadata": {
"id": "e_HsUV9WVJHL"
},
"source": [
"Now the images are stored within the `/tmp/PetImages` directory. There is a subdirectory for each class, so one for dogs and one for cats."
],
"id": "e_HsUV9WVJHL"
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"id": "DM851ZmN28J3",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "6c90e944-bed7-48aa-ee16-c8af980f5de9"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"There are 12501 images of dogs.\n",
"There are 12501 images of cats.\n"
]
}
],
"source": [
"source_path = '/tmp/PetImages'\n",
"\n",
"source_path_dogs = os.path.join(source_path, 'Dog')\n",
"source_path_cats = os.path.join(source_path, 'Cat')\n",
"\n",
"\n",
"# os.listdir returns a list containing all files under the given path\n",
"print(f\"There are {len(os.listdir(source_path_dogs))} images of dogs.\")\n",
"print(f\"There are {len(os.listdir(source_path_cats))} images of cats.\")"
],
"id": "DM851ZmN28J3"
},
{
"cell_type": "markdown",
"metadata": {
"id": "G7dI86rmRGmC"
},
"source": [
"**Expected Output:**\n",
"\n",
"```\n",
"There are 12501 images of dogs.\n",
"There are 12501 images of cats.\n",
"```"
],
"id": "G7dI86rmRGmC"
},
{
"cell_type": "markdown",
"metadata": {
"id": "iFbMliudNIjW"
},
"source": [
"You will need a directory for cats-v-dogs, and subdirectories for training\n",
"and testing. These in turn will need subdirectories for 'cats' and 'dogs'. To accomplish this, complete the `create_train_test_dirs` below:"
],
"id": "iFbMliudNIjW"
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"cellView": "code",
"id": "F-QkLjxpmyK2"
},
"outputs": [],
"source": [
"# Define root directory\n",
"root_dir = '/tmp/cats-v-dogs'\n",
"\n",
"# Empty directory to prevent FileExistsError is the function is run several times\n",
"if os.path.exists(root_dir):\n",
" shutil.rmtree(root_dir)\n",
"\n",
"# GRADED FUNCTION: create_train_test_dirs\n",
"def create_train_test_dirs(root_path):\n",
" ### START CODE HERE\n",
"\n",
" # HINT:\n",
" # Use os.makedirs to create your directories with intermediate subdirectories\n",
" # Don't hardcode the paths. Use os.path.join to append the new directories to the root_path parameter\n",
" os.makedirs(os.path.join(root_path, 'training'))\n",
" os.makedirs(os.path.join(f'{root_path}/training', 'dogs'))\n",
" os.makedirs(os.path.join(f'{root_path}/training', 'cats'))\n",
" os.makedirs(os.path.join(root_path, 'testing'))\n",
" os.makedirs(os.path.join(f'{root_path}/testing', 'dogs'))\n",
" os.makedirs(os.path.join(f'{root_path}/testing', 'cats'))\n",
" ### END CODE HERE\n",
"\n",
" \n",
"try:\n",
" create_train_test_dirs(root_path=root_dir)\n",
"except FileExistsError:\n",
" print(\"You should not be seeing this since the upper directory is removed beforehand\")"
],
"id": "F-QkLjxpmyK2"
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"id": "5dhtL344OK00",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "83abbb20-6a0d-4907-e21e-6f73679ba731"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"/tmp/cats-v-dogs/testing\n",
"/tmp/cats-v-dogs/training\n",
"/tmp/cats-v-dogs/testing/dogs\n",
"/tmp/cats-v-dogs/testing/cats\n",
"/tmp/cats-v-dogs/training/dogs\n",
"/tmp/cats-v-dogs/training/cats\n"
]
}
],
"source": [
"# Test your create_train_test_dirs function\n",
"\n",
"for rootdir, dirs, files in os.walk(root_dir):\n",
" for subdir in dirs:\n",
" print(os.path.join(rootdir, subdir))"
],
"id": "5dhtL344OK00"
},
{
"cell_type": "markdown",
"metadata": {
"id": "D7A0RK3IQsvg"
},
"source": [
"**Expected Output (directory order might vary):**\n",
"\n",
"``` txt\n",
"/tmp/cats-v-dogs/training\n",
"/tmp/cats-v-dogs/testing\n",
"/tmp/cats-v-dogs/training/cats\n",
"/tmp/cats-v-dogs/training/dogs\n",
"/tmp/cats-v-dogs/testing/cats\n",
"/tmp/cats-v-dogs/testing/dogs\n",
"\n",
"```"
],
"id": "D7A0RK3IQsvg"
},
{
"cell_type": "markdown",
"metadata": {
"id": "R93T7HdE5txZ"
},
"source": [
"Code the `split_data` function which takes in the following arguments:\n",
"- SOURCE: directory containing the files\n",
"- TRAINING: directory that a portion of the files will be copied to (will be used for training)\n",
"- TESTING: directory that a portion of the files will be copied to (will be used for testing)\n",
"- SPLIT_SIZE: proportion of the files to use for training\n",
"\n",
"The files should be randomized, so that the training set is a random sample of the files, and the test set is made up of the remaining files.\n",
"\n",
"For example, if `SOURCE` is `PetImages/Cat`, and `SPLIT_SIZE` is .9, then 90% of the images in `PetImages/Cat` will be copied to the `TRAINING` dir\n",
"and 10% of the images will be copied to the `TESTING` dir.\n",
"\n",
"All images should be checked before the copy, so if they have a zero file length they will be omitted from the copying process. If this is the case, your function should print out a message such as `\"filename is zero length, so ignoring.\"`. **You should perform this check before the split so that only non-zero images are considered when doing the actual split.**\n",
"\n",
"\n",
"Hints:\n",
"\n",
"- `os.listdir(DIRECTORY)` returns a list with the contents of that directory.\n",
"\n",
"- `os.path.getsize(PATH)` returns the size of the file\n",
"\n",
"- `copyfile(source, destination)` copies a file from source to destination\n",
"\n",
"- `random.sample(list, len(list))` shuffles a list"
],
"id": "R93T7HdE5txZ"
},
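{
"cell_type": "markdown",
"metadata": {
"id": "sampleSplitDemo1"
},
"source": [
"To see how the `random.sample` hint gives you a shuffle-then-split, here is a minimal sketch on a made-up list of filenames (the names and the .8 split are illustrative only):\n",
"\n",
"```python\n",
"import random\n",
"\n",
"files = ['0.jpg', '1.jpg', '2.jpg', '3.jpg', '4.jpg']\n",
"shuffled = random.sample(files, len(files))  # a random permutation of the list\n",
"\n",
"split_point = int(len(shuffled) * .8)  # with a split size of .8, 4 files go to training\n",
"training, testing = shuffled[:split_point], shuffled[split_point:]\n",
"```"
],
"id": "sampleSplitDemo1"
},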
{
"cell_type": "code",
"execution_count": 51,
"metadata": {
"cellView": "code",
"id": "zvSODo0f9LaU"
},
"outputs": [],
"source": [
"# GRADED FUNCTION: split_data\n",
"def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):\n",
"\n",
"    ### START CODE HERE\n",
"    # Check file sizes first so that only non-zero images take part in the split\n",
"    valid_files = []\n",
"    for file_name in os.listdir(SOURCE):\n",
"        if os.path.getsize(os.path.join(SOURCE, file_name)) == 0:\n",
"            print(f'{file_name} is zero length, so ignoring.')\n",
"        else:\n",
"            valid_files.append(file_name)\n",
"\n",
"    # Shuffle the valid files and compute the size of the training split\n",
"    shuffled_files = random.sample(valid_files, len(valid_files))\n",
"    training_number = int(len(shuffled_files) * SPLIT_SIZE)\n",
"\n",
"    # Copy the first portion to TRAINING and the remainder to TESTING\n",
"    for file_name in shuffled_files[:training_number]:\n",
"        copyfile(os.path.join(SOURCE, file_name), os.path.join(TRAINING, file_name))\n",
"    for file_name in shuffled_files[training_number:]:\n",
"        copyfile(os.path.join(SOURCE, file_name), os.path.join(TESTING, file_name))\n",
"    ### END CODE HERE\n"
],
"id": "zvSODo0f9LaU"
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {
"id": "FlIdoUeX9S-9",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "f5b8338a-65a5-4a03-e116-c86d73e05a09"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"666.jpg is zero length, so ignoring.\n",
"11702.jpg is zero length, so ignoring.\n",
"\n",
"\n",
"There are 11250 images of cats for training\n",
"There are 11250 images of dogs for training\n",
"There are 1250 images of cats for testing\n",
"There are 1250 images of dogs for testing\n"
]
}
],
"source": [
"# Test your split_data function\n",
"\n",
"# Define paths\n",
"CAT_SOURCE_DIR = \"/tmp/PetImages/Cat/\"\n",
"DOG_SOURCE_DIR = \"/tmp/PetImages/Dog/\"\n",
"\n",
"TRAINING_DIR = \"/tmp/cats-v-dogs/training/\"\n",
"TESTING_DIR = \"/tmp/cats-v-dogs/testing/\"\n",
"\n",
"TRAINING_CATS_DIR = os.path.join(TRAINING_DIR, \"cats/\")\n",
"TESTING_CATS_DIR = os.path.join(TESTING_DIR, \"cats/\")\n",
"\n",
"TRAINING_DOGS_DIR = os.path.join(TRAINING_DIR, \"dogs/\")\n",
"TESTING_DOGS_DIR = os.path.join(TESTING_DIR, \"dogs/\")\n",
"\n",
"# Empty directories in case you run this cell multiple times\n",
"if len(os.listdir(TRAINING_CATS_DIR)) > 0:\n",
" for file in os.scandir(TRAINING_CATS_DIR):\n",
" os.remove(file.path)\n",
"if len(os.listdir(TRAINING_DOGS_DIR)) > 0:\n",
" for file in os.scandir(TRAINING_DOGS_DIR):\n",
" os.remove(file.path)\n",
"if len(os.listdir(TESTING_CATS_DIR)) > 0:\n",
" for file in os.scandir(TESTING_CATS_DIR):\n",
" os.remove(file.path)\n",
"if len(os.listdir(TESTING_DOGS_DIR)) > 0:\n",
" for file in os.scandir(TESTING_DOGS_DIR):\n",
" os.remove(file.path)\n",
"\n",
"# Define proportion of images used for training\n",
"split_size = .9\n",
"\n",
"# Run the function\n",
"# NOTE: Messages about zero length images should be printed out\n",
"split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)\n",
"split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)\n",
"\n",
"# Check that the number of images matches the expected output\n",
"print(f\"\\n\\nThere are {len(os.listdir(TRAINING_CATS_DIR))} images of cats for training\")\n",
"print(f\"There are {len(os.listdir(TRAINING_DOGS_DIR))} images of dogs for training\")\n",
"print(f\"There are {len(os.listdir(TESTING_CATS_DIR))} images of cats for testing\")\n",
"print(f\"There are {len(os.listdir(TESTING_DOGS_DIR))} images of dogs for testing\")"
],
"id": "FlIdoUeX9S-9"
},
{
"cell_type": "markdown",
"metadata": {
"id": "hvskJNOFVSaz"
},
"source": [
"**Expected Output:**\n",
"\n",
"```\n",
"666.jpg is zero length, so ignoring.\n",
"11702.jpg is zero length, so ignoring.\n",
"```\n",
"\n",
"```\n",
"There are 11250 images of cats for training\n",
"There are 11250 images of dogs for training\n",
"There are 1250 images of cats for testing\n",
"There are 1250 images of dogs for testing\n",
"```"
],
"id": "hvskJNOFVSaz"
},
{
"cell_type": "markdown",
"metadata": {
"id": "Zil4QmOD_mXF"
},
"source": [
"Now that you have successfully organized the data in a way that can be easily fed to Keras' `ImageDataGenerator`, it is time for you to code the generators that will yield batches of images, both for training and validation. For this, complete the `train_val_generators` function below.\n",
"\n",
"Something important to note is that the images in this dataset come in a variety of resolutions. Luckily, the `flow_from_directory` method allows you to standardize this by setting the `target_size` argument, a tuple that every image will be resized to. **For this exercise, use a `target_size` of (150, 150)**.\n",
"\n",
"**Note:** So far, you have seen the term `testing` being used a lot for referring to a subset of images within the dataset. In this exercise, all of the `testing` data is actually being used as `validation` data. This is not very important within the context of the task at hand but it is worth mentioning to avoid confusion."
],
"id": "Zil4QmOD_mXF"
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"cellView": "code",
"id": "fQrZfVgz4j2g"
},
"outputs": [],
"source": [
"# GRADED FUNCTION: train_val_generators\n",
"def train_val_generators(TRAINING_DIR, VALIDATION_DIR):\n",
" ### START CODE HERE\n",
"\n",
" # Instantiate the ImageDataGenerator class (don't forget to set the rescale argument)\n",
" train_datagen = ImageDataGenerator(rescale = 1./255.)\n",
"\n",
"    # Pass in the appropriate arguments to the flow_from_directory method\n",
" train_generator = train_datagen.flow_from_directory(directory=TRAINING_DIR,\n",
" batch_size=45,\n",
" class_mode='binary',\n",
" target_size=(150, 150))\n",
"\n",
" # Instantiate the ImageDataGenerator class (don't forget to set the rescale argument)\n",
" validation_datagen = ImageDataGenerator(rescale = 1./255.)\n",
"\n",
"    # Pass in the appropriate arguments to the flow_from_directory method\n",
" validation_generator = validation_datagen.flow_from_directory(directory=VALIDATION_DIR,\n",
" batch_size=5,\n",
" class_mode='binary',\n",
" target_size=(150, 150))\n",
" ### END CODE HERE\n",
" return train_generator, validation_generator\n"
],
"id": "fQrZfVgz4j2g"
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"id": "qM7FxrjGiobD",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "5b5b48b4-7506-40a6-b036-5be385483784"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Found 22499 images belonging to 2 classes.\n",
"Found 2499 images belonging to 2 classes.\n"
]
}
],
"source": [
"# Test your generators\n",
"train_generator, validation_generator = train_val_generators(TRAINING_DIR, TESTING_DIR)"
],
"id": "qM7FxrjGiobD"
},
{
"cell_type": "markdown",
"metadata": {
"id": "tiPNmSfZjHwJ"
},
"source": [
"**Expected Output:**\n",
"\n",
"```\n",
"Found 22498 images belonging to 2 classes.\n",
"Found 2500 images belonging to 2 classes.\n",
"```\n"
],
"id": "tiPNmSfZjHwJ"
},
{
"cell_type": "markdown",
"metadata": {
"id": "TI3oEmyQCZoO"
},
"source": [
"One last step before training is to define the architecture of the model that will be trained.\n",
"\n",
"Complete the `create_model` function below, which should return a Keras `Sequential` model.\n",
"\n",
"Aside from defining the architecture of the model, you should also compile it. Make sure to use a `loss` function that is compatible with the `class_mode` you defined in the previous exercise, and with the output of your network; if they aren't compatible, you will get an error during training.\n",
"\n",
"**Note that you should use at least 3 convolution layers to achieve the desired performance.**"
],
"id": "TI3oEmyQCZoO"
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"cellView": "code",
"id": "oDPK8tUB_O9e",
"lines_to_next_cell": 2
},
"outputs": [],
"source": [
"# GRADED FUNCTION: create_model\n",
"def create_model():\n",
" # DEFINE A KERAS MODEL TO CLASSIFY CATS V DOGS\n",
" # USE AT LEAST 3 CONVOLUTION LAYERS\n",
"\n",
" ### START CODE HERE\n",
"\n",
" model = tf.keras.models.Sequential([ \n",
" # Note the input shape is the desired size of the image 150x150 with 3 bytes color\n",
" tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),\n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" tf.keras.layers.Conv2D(32, (3,3), activation='relu'),\n",
" tf.keras.layers.MaxPooling2D(2,2), \n",
" tf.keras.layers.Conv2D(64, (3,3), activation='relu'), \n",
" tf.keras.layers.MaxPooling2D(2,2),\n",
" # Flatten the results to feed into a DNN\n",
" tf.keras.layers.Flatten(), \n",
" # 512 neuron hidden layer\n",
" tf.keras.layers.Dense(512, activation='relu'), \n",
"    # Only 1 output neuron. It will contain a value from 0-1, where values near 0 indicate one class ('cats') and values near 1 the other ('dogs')\n",
" tf.keras.layers.Dense(1, activation='sigmoid')\n",
" ])\n",
"\n",
" from tensorflow.keras.optimizers import RMSprop\n",
"\n",
" model.compile(optimizer=RMSprop(learning_rate=0.001),\n",
" loss='binary_crossentropy',\n",
" metrics=['accuracy']) \n",
" \n",
" ### END CODE HERE\n",
"\n",
" return model\n"
],
"id": "oDPK8tUB_O9e"
},
{
"cell_type": "markdown",
"metadata": {
"id": "SMFNJZmTCZv6"
},
"source": [
"Now it is time to train your model!\n",
"\n",
"**Note:** You can ignore the `UserWarning: Possibly corrupt EXIF data.` warnings."
],
"id": "SMFNJZmTCZv6"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5qE1G6JB4fMn",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "195aedfd-d4e1-4e73-b76a-24bfd95545ed"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Epoch 1/15\n",
"141/500 [=======>......................] - ETA: 1:00 - loss: 0.7992 - accuracy: 0.5814"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:770: UserWarning: Possibly corrupt EXIF data. Expecting to read 32 bytes but only got 0. Skipping tag 270\n",
" \" Skipping tag %s\" % (size, len(data), tag)\n",
"/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:770: UserWarning: Possibly corrupt EXIF data. Expecting to read 5 bytes but only got 0. Skipping tag 271\n",
" \" Skipping tag %s\" % (size, len(data), tag)\n",
"/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:770: UserWarning: Possibly corrupt EXIF data. Expecting to read 8 bytes but only got 0. Skipping tag 272\n",
" \" Skipping tag %s\" % (size, len(data), tag)\n",
"/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:770: UserWarning: Possibly corrupt EXIF data. Expecting to read 8 bytes but only got 0. Skipping tag 282\n",
" \" Skipping tag %s\" % (size, len(data), tag)\n",
"/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:770: UserWarning: Possibly corrupt EXIF data. Expecting to read 8 bytes but only got 0. Skipping tag 283\n",
" \" Skipping tag %s\" % (size, len(data), tag)\n",
"/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:770: UserWarning: Possibly corrupt EXIF data. Expecting to read 20 bytes but only got 0. Skipping tag 306\n",
" \" Skipping tag %s\" % (size, len(data), tag)\n",
"/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:770: UserWarning: Possibly corrupt EXIF data. Expecting to read 48 bytes but only got 0. Skipping tag 532\n",
" \" Skipping tag %s\" % (size, len(data), tag)\n",
"/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 2 bytes but only got 0. \n",
" warnings.warn(str(msg))\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"500/500 [==============================] - 106s 190ms/step - loss: 0.6497 - accuracy: 0.6579 - val_loss: 0.6198 - val_accuracy: 0.6907\n",
"Epoch 2/15\n",
"500/500 [==============================] - 93s 186ms/step - loss: 0.4798 - accuracy: 0.7693 - val_loss: 0.4519 - val_accuracy: 0.7891\n",
"Epoch 3/15\n",
"500/500 [==============================] - 93s 186ms/step - loss: 0.4040 - accuracy: 0.8161 - val_loss: 0.3935 - val_accuracy: 0.8207\n",
"Epoch 4/15\n",
"500/500 [==============================] - 93s 185ms/step - loss: 0.3389 - accuracy: 0.8551 - val_loss: 0.4478 - val_accuracy: 0.8071\n",
"Epoch 5/15\n",
"500/500 [==============================] - 93s 186ms/step - loss: 0.2772 - accuracy: 0.8816 - val_loss: 0.4037 - val_accuracy: 0.8207\n",
"Epoch 6/15\n",
"500/500 [==============================] - 93s 186ms/step - loss: 0.2125 - accuracy: 0.9132 - val_loss: 0.4297 - val_accuracy: 0.8239\n",
"Epoch 7/15\n",
"500/500 [==============================] - 94s 187ms/step - loss: 0.1511 - accuracy: 0.9423 - val_loss: 0.7851 - val_accuracy: 0.8007\n",
"Epoch 8/15\n",
"500/500 [==============================] - 93s 186ms/step - loss: 0.1075 - accuracy: 0.9605 - val_loss: 0.5910 - val_accuracy: 0.8271\n",
"Epoch 9/15\n",
"500/500 [==============================] - 93s 186ms/step - loss: 0.0818 - accuracy: 0.9728 - val_loss: 1.0429 - val_accuracy: 0.8307\n",
"Epoch 10/15\n",
"500/500 [==============================] - 93s 186ms/step - loss: 0.0741 - accuracy: 0.9762 - val_loss: 0.8686 - val_accuracy: 0.8259\n",
"Epoch 11/15\n",
"500/500 [==============================] - 94s 187ms/step - loss: 0.0728 - accuracy: 0.9774 - val_loss: 1.1316 - val_accuracy: 0.7467\n",
"Epoch 12/15\n",
"500/500 [==============================] - 93s 186ms/step - loss: 0.0677 - accuracy: 0.9800 - val_loss: 1.1091 - val_accuracy: 0.8263\n",
"Epoch 13/15\n",
"500/500 [==============================] - 95s 190ms/step - loss: 0.0668 - accuracy: 0.9814 - val_loss: 1.1743 - val_accuracy: 0.8331\n",
"Epoch 14/15\n",
"500/500 [==============================] - 93s 187ms/step - loss: 0.0702 - accuracy: 0.9807 - val_loss: 1.2697 - val_accuracy: 0.8219\n",
"Epoch 15/15\n",
"500/500 [==============================] - 94s 189ms/step - loss: 0.0826 - accuracy: 0.9779 - val_loss: 1.2121 - val_accuracy: 0.8131\n"
]
}
],
"source": [
"# Get the untrained model\n",
"model = create_model()\n",
"\n",
"# Train the model\n",
"# Note that this may take some time.\n",
"history = model.fit(train_generator,\n",
" epochs=15,\n",
" verbose=1,\n",
" validation_data=validation_generator)"
],
"id": "5qE1G6JB4fMn"
},
{
"cell_type": "markdown",
"metadata": {
"id": "VGsaDMc-GMd4"
},
"source": [
"Once training has finished, you can run the following cell to check the training and validation accuracy achieved at the end of each epoch.\n",
"\n",
"**To pass this assignment, your model should achieve a training accuracy of at least 95% and a validation accuracy of at least 80%**. If your model didn't achieve these thresholds, try training again with a different model architecture and remember to use at least 3 convolutional layers."
],
"id": "VGsaDMc-GMd4"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "MWZrJN4-65RC",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 547
},
"outputId": "47d5ff18-23aa-4c0e-9519-fb7e7afbc692"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
],
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAcYAAAEICAYAAADFgFTtAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nO3dd5xU5dn/8c+FLFJFQZAI6GJiw9gRBSMiGEXUaIxGRWPXRFNMnsQ8JjH5aaoxRn1MnlhjiaJGjRgfW1Rs2GgKGlEiCgiISu9td6/fH9cZ995ltrDs7mz5vl+vec2ZM2fOuWZg5zv3fcpt7o6IiIiENoUuQEREpClRMIqIiCQUjCIiIgkFo4iISELBKCIiklAwioiIJBSMIjUwsyfM7Mz6XraQzGyWmR3eAOt1M/tCNn2jmf28NsvWYTunmdlTda1TpDqm8xilJTKzlcnDjsA6oDR7/E13H934VTUdZjYLOM/dn6nn9Tqws7vPqK9lzawYmAkUuXtJfdQpUp22hS5ApCG4e+fcdHUhYGZt9WUrTYX+PzYN6kqVVsXMhprZXDP7bzP7GLjdzLYxs0fNbIGZLcmm+ySved7MzsumzzKzl8zs6mzZmWZ2VB2X7WdmL5rZCjN7xsz+18zurqLu2tT4KzN7OVvfU2a2bfL8N8xstpktMrOfVfP5HGhmH5vZFsm8r5rZm9n0QDN71cyWmtl8M/uzmbWrYl13mNmvk8eXZK/5yMzOqbTs0Wb2hpktN7M5ZnZ58vSL2f1SM1tpZoNyn23y+sFmNtHMlmX3g2v72Wzi59zNzG7P3sMSM3s4ee44M5uSvYf3zWxENr9Ct7WZXZ77dzaz4qxL+Vwz+xB4Npv/QPbvsCz7P7JH8voOZvbH7N9zWfZ/rIOZPWZm3630ft40s6/me69SNQWjtEa9gG7AjsAFxN/B7dnjHYA1wJ+ref2BwHRgW+Aq4K9mZnVY9h5gAtAduBz4RjXbrE2No4CzgZ5AO+BHAGbWH7ghW//22fb6kIe7jwdWAcMqrfeebLoU+EH2fgYBw4GLqqmbrIYRWT1fBnYGKu/fXAWcAWwNHA1caGbHZ88Nye63dvfO7v5qpXV3Ax4Drs/e2zXAY2bWvdJ72OizyaOmz/kuomt+j2xd12Y1DAT+BlySv
"chars": 14414,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"<a href=\\\"https://colab.research.go"
},
{
"path": "3. Natural Language Processing in TensorFlow/3. Sequence Models/ungraded_labs/C3_W3_Lab_5_sarcasm_with_bi_LSTM.ipynb",
"chars": 7225,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"<a href=\\\"https://colab.research.go"
},
{
"path": "3. Natural Language Processing in TensorFlow/3. Sequence Models/ungraded_labs/C3_W3_Lab_6_sarcasm_with_1D_convolutional.ipynb",
"chars": 6736,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"<a href=\\\"https://colab.research.go"
},
{
"path": "3. Natural Language Processing in TensorFlow/4. Sequence Models and Literature/assignment/C3W4_Assignment.ipynb",
"chars": 68917,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {\n \"id\": \"bFWbEb6uGbN-\"\n },\n \"sou"
},
{
"path": "3. Natural Language Processing in TensorFlow/4. Sequence Models and Literature/misc/Laurences_generated_poetry.txt",
"chars": 68969,
"preview": "Come all ye maidens young and fair\nAnd you that are blooming in your prime\nAlways beware and keep your garden fair\nLet n"
},
{
"path": "3. Natural Language Processing in TensorFlow/4. Sequence Models and Literature/ungraded_labs/C3_W4_Lab_1.ipynb",
"chars": 18719,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"<a href=\\\"https://colab.research.go"
},
{
"path": "3. Natural Language Processing in TensorFlow/4. Sequence Models and Literature/ungraded_labs/C3_W4_Lab_2_irish_lyrics.ipynb",
"chars": 14476,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"<a href=\\\"https://colab.research.go"
},
{
"path": "4. Sequences, Time Serirs and Prediction/1. Sequences and Prediction/assignment/C4_W1_Assignment.ipynb",
"chars": 567935,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"accelerator\": \"GPU\",\n \"colab\": {\n \"name\": \"C4_W1_"
},
{
"path": "4. Sequences, Time Serirs and Prediction/1. Sequences and Prediction/assignment/C4_W1_Assignment_Solution.ipynb",
"chars": 15260,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"accelerator\": \"GPU\",\n \"colab\": {\n \"name\": \"C4_W1_"
},
{
"path": "4. Sequences, Time Serirs and Prediction/1. Sequences and Prediction/ungraded_labs/C4_W1_Lab_1_time_series.ipynb",
"chars": 19764,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"colab\": {\n \"name\": \"C4_W1_Lab_1_time_series.ipynb\",\n"
},
{
"path": "4. Sequences, Time Serirs and Prediction/1. Sequences and Prediction/ungraded_labs/C4_W1_Lab_2_forecasting.ipynb",
"chars": 12896,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"accelerator\": \"GPU\",\n \"colab\": {\n \"name\": \"C4_W1_"
},
{
"path": "4. Sequences, Time Serirs and Prediction/2. Deep Neural Networks for Time Series/assignment/C4_W2_Assignment.ipynb",
"chars": 131178,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"colab\": {\n \"name\": \"C4_W2_Assignment.ipynb\",\n \""
},
{
"path": "4. Sequences, Time Serirs and Prediction/2. Deep Neural Networks for Time Series/assignment/C4_W2_Assignment_Solution.ipynb",
"chars": 7779,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"accelerator\": \"GPU\",\n \"colab\": {\n \"name\": \"C4_W2_"
},
{
"path": "4. Sequences, Time Serirs and Prediction/2. Deep Neural Networks for Time Series/ungraded_labs/C4_W2_Lab_1_features_and_labels.ipynb",
"chars": 5862,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"accelerator\": \"GPU\",\n \"colab\": {\n \"name\": \"C4_W2_"
},
{
"path": "4. Sequences, Time Serirs and Prediction/2. Deep Neural Networks for Time Series/ungraded_labs/C4_W2_Lab_2_single_layer_NN.ipynb",
"chars": 8252,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"colab\": {\n \"name\": \"C4_W2_Lab_2_single_layer_NN.ipyn"
},
{
"path": "4. Sequences, Time Serirs and Prediction/2. Deep Neural Networks for Time Series/ungraded_labs/C4_W2_Lab_3_deep_NN.ipynb",
"chars": 10991,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"colab\": {\n \"name\": \"C4_W2_Lab_3_deep_NN.ipynb\",\n "
},
{
"path": "4. Sequences, Time Serirs and Prediction/3. Recurrent Neural Networks for Time Series/assignment/C4_W3_Assignment.ipynb",
"chars": 229308,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"accelerator\": \"GPU\",\n \"colab\": {\n \"name\": \"C4_W3_"
},
{
"path": "4. Sequences, Time Serirs and Prediction/3. Recurrent Neural Networks for Time Series/assignment/C4_W3_Assignment_Solution.ipynb",
"chars": 11342,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"accelerator\": \"GPU\",\n \"colab\": {\n \"name\": \"C4_W3_"
},
{
"path": "4. Sequences, Time Serirs and Prediction/3. Recurrent Neural Networks for Time Series/ungraded_labs/C4_W3_Lab_1_RNN.ipynb",
"chars": 10891,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"colab\": {\n \"name\": \"C4_W3_Lab_1_RNN.ipynb\",\n \"p"
},
{
"path": "4. Sequences, Time Serirs and Prediction/3. Recurrent Neural Networks for Time Series/ungraded_labs/C4_W3_Lab_2_LSTM.ipynb",
"chars": 13271,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"colab\": {\n \"name\": \"C4_W3_Lab_2_LSTM.ipynb\",\n \""
},
{
"path": "4. Sequences, Time Serirs and Prediction/4. Real-world Time Series Data/assignment/C4_W4_Assignment.ipynb",
"chars": 195419,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"accelerator\": \"GPU\",\n \"colab\": {\n \"name\": \"C4_W4_"
},
{
"path": "4. Sequences, Time Serirs and Prediction/4. Real-world Time Series Data/assignment/C4_W4_Assignment_Solution.ipynb",
"chars": 11009,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"accelerator\": \"GPU\",\n \"colab\": {\n \"name\": \"C4_W4_"
},
{
"path": "4. Sequences, Time Serirs and Prediction/4. Real-world Time Series Data/ungraded_labs/C4_W4_Lab_1_LSTM.ipynb",
"chars": 11600,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"colab\": {\n \"name\": \"C4_W4_Lab_1_LSTM.ipynb\",\n \""
},
{
"path": "4. Sequences, Time Serirs and Prediction/4. Real-world Time Series Data/ungraded_labs/C4_W4_Lab_2_Sunspots.ipynb",
"chars": 12062,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"accelerator\": \"GPU\",\n \"colab\": {\n \"name\": \"C4_W4_"
},
{
"path": "4. Sequences, Time Serirs and Prediction/4. Real-world Time Series Data/ungraded_labs/C4_W4_Lab_3_DNN_only.ipynb",
"chars": 6957,
"preview": "{\n \"nbformat\": 4,\n \"nbformat_minor\": 0,\n \"metadata\": {\n \"colab\": {\n \"name\": \"C4_W4_Lab_3_DNN_only.ipynb\",\n "
},
{
"path": "Coursera_Code_of_Conduct.md",
"chars": 14471,
"preview": "# Coursera Code of Conduct\n\nThe Coursera platform includes people from all around the world, and from a wide variety of "
},
{
"path": "Coursera_Honor_Code.md",
"chars": 3746,
"preview": "# Coursera Honor Code\n\nAcademic integrity is important to Coursera and our institutional partners. Your commitment to ac"
},
{
"path": "LICENSE",
"chars": 10755,
"preview": " Apache License\n Version 2.0, January 2004\n "
},
{
"path": "README.md",
"chars": 3441,
"preview": "# DeepLearning.AI TensorFlow Developer Professional Certificate\n