[
  {
    "path": "Chapter02/Python 2.7/computation_model.py",
    "content": "import tensorflow as tf\nwith tf.Session() as session:\n    x = tf.placeholder(tf.float32,[1],name=\"x\")\n    y = tf.placeholder(tf.float32,[1],name=\"y\")\n    z = tf.constant(2.0)\n    y = x * z\nx_in = [100]\ny_output = session.run(y,{x:x_in})\nprint(y_output)\n"
  },
  {
    "path": "Chapter02/Python 2.7/data_model.py",
    "content": "import tensorflow as tf\n\nscalar = tf.constant(100)\nvector = tf.constant([1,2,3,4,5])\nmatrix = tf.constant([[1,2,3],[4,5,6]])\ncube_matrix = tf.constant([[[1],[2],[3]],[[4],[5],[6]],[[7],[8],[9]]])\n\n\nprint(scalar.get_shape())\nprint(vector.get_shape())\nprint(matrix.get_shape())\nprint(cube_matrix.get_shape())\n\n\"\"\"\n>>> \n()\n(5,)\n(2, 3)\n(3, 3, 1)\n>>> \n\"\"\"\n\n"
  },
  {
    "path": "Chapter02/Python 2.7/feeding_parameters.py",
    "content": "import tensorflow as tf\nimport numpy as np\n\na = 3\nb = 2\n\n\nx = tf.placeholder(tf.float32,shape=(a,b))\ny = tf.add(x,x)\n\ndata = np.random.rand(a,b)\n\nsess = tf.Session()\n\nprint sess.run(y,feed_dict={x:data})\n\n"
  },
  {
    "path": "Chapter02/Python 2.7/fetching_parameters_1.py",
    "content": "import tensorflow as tf\n\nconstant_A = tf.constant([100.0])\nconstant_B = tf.constant([300.0])\nconstant_C = tf.constant([3.0])\n\nsum_ = tf.add(constant_A,constant_B)\nmul_ = tf.multiply(constant_A,constant_C)\n\nwith tf.Session() as sess:\n    result = sess.run([sum_,mul_])\n    print(result)\n\n\n\"\"\"\n>>> \n[array([ 400.], dtype=float32), array([ 300.], dtype=float32)]\n>>> \n\"\"\"\n"
  },
  {
    "path": "Chapter02/Python 2.7/programming_model.py",
    "content": "import tensorflow as tf\nwith tf.Session() as session:\n    x = tf.placeholder(tf.float32,[1],name=\"x\")\n    y = tf.placeholder(tf.float32,[1],name=\"y\")\n    z = tf.constant(2.0)\n    y = x * z\nx_in = [100]\ny_output = session.run(y,{x:x_in})\nprint(y_output)\n"
  },
  {
    "path": "Chapter02/Python 2.7/single_neuron_model_1.py",
    "content": "import tensorflow as tf\n\nweight = tf.Variable(1.0,name=\"weight\")\ninput_value = tf.constant(0.5,name=\"input_value\")\nexpected_output = tf.constant(0.0,name=\"expected_output\")\nmodel = tf.multiply(input_value,weight,\"model\")\nloss_function = tf.pow(expected_output - model,2,name=\"loss_function\")\n\noptimizer = tf.train.GradientDescentOptimizer(0.025).minimize(loss_function)\n\nfor value in [input_value,weight,expected_output,model,loss_function]:\n    tf.summary.scalar(value.op.name,value)\n\nsummaries = tf.summary.merge_all()\nsess = tf.Session()\n\nsummary_writer = tf.summary.FileWriter('log_simple_stats',sess.graph)\n\nsess.run(tf.global_variables_initializer())\nfor i in range(100):\n    summary_writer.add_summary(sess.run(summaries),i)\n    sess.run(optimizer)\n"
  },
  {
    "path": "Chapter02/Python 2.7/tensor_flow_counter_1.py",
    "content": "import tensorflow as tf\n\nvalue = tf.Variable(0,name=\"value\")\none = tf.constant(1)\nnew_value = tf.add(value,one)\nupdate_value=tf.assign(value,new_value)\n\ninitialize_var = tf.global_variables_initializer()\n\nwith tf.Session() as sess:\n    sess.run(initialize_var)\n    print(sess.run(value))\n    for _ in range(10):\n        sess.run(update_value)\n        print(sess.run(value))\n\n\"\"\"\n>>> \n0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n>>>\n\"\"\"     \n"
  },
  {
    "path": "Chapter02/Python 2.7/tensor_with_numpy_1.py",
    "content": "import tensorflow as tf\nimport numpy as np\n\n#tensore 1d con valori costanti\ntensor_1d = np.array([1,2,3,4,5,6,7,8,9,10])\ntensor_1d = tf.constant(tensor_1d)\nwith tf.Session() as sess:\n    print (tensor_1d.get_shape())\n    print sess.run(tensor_1d)\n\n#tensore 2d con valori variabili\ntensor_2d = np.array([(1,2,3),(4,5,6),(7,8,9)])\ntensor_2d = tf.Variable(tensor_2d)\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    print (tensor_2d.get_shape())\n    print sess.run(tensor_2d)\n\n\ntensor_3d = np.array([[[ 0,  1,  2],[ 3,  4,  5],[ 6,  7,  8]],\n                      [[ 9, 10, 11],[12, 13, 14],[15, 16, 17]],\n                      [[18, 19, 20],[21, 22, 23],[24, 25, 26]]])\n\ntensor_3d = tf.convert_to_tensor(tensor_3d, dtype=tf.float64)\nwith tf.Session() as sess:\n    print (tensor_3d.get_shape())\n    print sess.run(tensor_3d)\n\n\ninteractive_session = tf.InteractiveSession()\ntensor = np.array([1,2,3,4,5])\ntensor = tf.constant(tensor)\nprint(tensor.eval())\ninteractive_session.close()\n\n\"\"\"\nPython 2.7.10 (default, Oct 14 2015, 16:09:02) \n[GCC 5.2.1 20151010] on linux2\nType \"copyright\", \"credits\" or \"license()\" for more information.\n>>> ================================ RESTART ================================\n>>> \n(10,)\n[ 1  2  3  4  5  6  7  8  9 10]\n(3, 3)\n[[1 2 3]\n [4 5 6]\n [7 8 9]]\n(3, 3, 3)\n[[[  0.   1.   2.]\n  [  3.   4.   5.]\n  [  6.   7.   8.]]\n\n [[  9.  10.  11.]\n  [ 12.  13.  14.]\n  [ 15.  16.  17.]]\n\n [[ 18.  19.  20.]\n  [ 21.  22.  23.]\n  [ 24.  25.  26.]]]\n[1 2 3 4 5]\n>>> \n\"\"\"\n\n\n"
  },
  {
    "path": "Chapter02/Python 3.5/computation_model.py",
    "content": "import tensorflow as tf\nwith tf.Session() as session:\n    x = tf.placeholder(tf.float32, [1], name=\"x\")\n    y = tf.placeholder(tf.float32, [1], name=\"y\")\n    z = tf.constant(2.0)\n    y = x * z\nx_in = [100]\ny_output = session.run(y, {x: x_in})\nprint(y_output)\n"
  },
  {
    "path": "Chapter02/Python 3.5/data_model.py",
    "content": "import tensorflow as tf\n\nscalar = tf.constant(100)\nvector = tf.constant([1, 2, 3, 4, 5])\nmatrix = tf.constant([[1, 2, 3], [4, 5, 6]])\ncube_matrix = tf.constant([[[1], [2], [3]], [[4], [5], [6]], [[7], [8], [9]]])\n\nprint(scalar.get_shape())\nprint(vector.get_shape())\nprint(matrix.get_shape())\nprint(cube_matrix.get_shape())\n\n\"\"\"\n>>> \n()\n(5,)\n(2, 3)\n(3, 3, 1)\n>>> \n\"\"\"\n\n"
  },
  {
    "path": "Chapter02/Python 3.5/feeding_parameters.py",
    "content": "import tensorflow as tf\nimport numpy as np\n\na = 3\nb = 2\n\nx = tf.placeholder(tf.float32, shape=(a, b))\ny = tf.add(x, x)\n\ndata = np.random.rand(a, b)\n\nsess = tf.Session()\n\nprint(sess.run(y,feed_dict={x: data}))\n\n"
  },
  {
    "path": "Chapter02/Python 3.5/fetching_parameters_1.py",
    "content": "import tensorflow as tf\n\nconstant_A = tf.constant([100.0])\nconstant_B = tf.constant([300.0])\nconstant_C = tf.constant([3.0])\n\nsum_ = tf.add(constant_A, constant_B)\nmul_ = tf.multiply(constant_A, constant_C)\n\nwith tf.Session() as sess:\n    result = sess.run([sum_, mul_])\n    print(result)\n\n\n\"\"\"\n>>> \n[array([ 400.], dtype=float32), array([ 300.], dtype=float32)]\n>>> \n\"\"\"\n"
  },
  {
    "path": "Chapter02/Python 3.5/programming_model.py",
    "content": "import tensorflow as tf\n\nwith tf.Session() as session:\n    x = tf.placeholder(tf.float32, [1], name=\"x\")\n    y = tf.placeholder(tf.float32, [1], name=\"y\")\n    z = tf.constant(2.0)\n    y = x * z\n\nx_in = [100]\ny_output = session.run(y, {x: x_in})\nprint(y_output)\n"
  },
  {
    "path": "Chapter02/Python 3.5/single_neuron_model_1.py",
    "content": "import tensorflow as tf\n\nweight = tf.Variable(1.0, name=\"weight\")\ninput_value = tf.constant(0.5, name=\"input_value\")\nexpected_output = tf.constant(0.0, name=\"expected_output\")\nmodel = tf.multiply(input_value,weight, \"model\")\nloss_function = tf.pow(expected_output - model, 2, name=\"loss_function\")\n\noptimizer = tf.train.GradientDescentOptimizer(0.025).minimize(loss_function)\n\nfor value in [input_value, weight, expected_output, model, loss_function]:\n    tf.summary.scalar(value.op.name, value)\n\nsummaries = tf.summary.merge_all()\nsess = tf.Session()\n\nsummary_writer = tf.summary.FileWriter('log_simple_stats', sess.graph)\n\nsess.run(tf.global_variables_initializer())\n\nfor i in range(100):\n    summary_writer.add_summary(sess.run(summaries), i)\n    sess.run(optimizer)\n"
  },
  {
    "path": "Chapter02/Python 3.5/tensor_flow_counter_1.py",
    "content": "import tensorflow as tf\n\nvalue = tf.Variable(0, name=\"value\")\none = tf.constant(1)\nnew_value = tf.add(value, one)\nupdate_value = tf.assign(value, new_value)\n\ninitialize_var = tf.global_variables_initializer()\n\nwith tf.Session() as sess:\n    sess.run(initialize_var)\n    print(sess.run(value))\n    for _ in range(10):\n        sess.run(update_value)\n        print(sess.run(value))\n\n\"\"\"\n>>> \n0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n>>>\n\"\"\"     \n"
  },
  {
    "path": "Chapter02/Python 3.5/tensor_with_numpy_1.py",
    "content": "import tensorflow as tf\nimport numpy as np\n\n#tensore 1d con valori costanti\ntensor_1d = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\ntensor_1d = tf.constant(tensor_1d)\nwith tf.Session() as sess:\n    print(tensor_1d.get_shape())\n    print(sess.run(tensor_1d))\n\n#tensore 2d con valori variabili\ntensor_2d = np.array([(1, 2, 3), (4, 5, 6), (7, 8, 9)])\ntensor_2d = tf.Variable(tensor_2d)\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    print(tensor_2d.get_shape())\n    print(sess.run(tensor_2d))\n\n\ntensor_3d = np.array([[[ 0,  1,  2],[ 3,  4,  5],[ 6,  7,  8]],\n                      [[ 9, 10, 11],[12, 13, 14],[15, 16, 17]],\n                      [[18, 19, 20],[21, 22, 23],[24, 25, 26]]])\n\ntensor_3d = tf.convert_to_tensor(tensor_3d, dtype=tf.float64)\nwith tf.Session() as sess:\n    print(tensor_3d.get_shape())\n    print(sess.run(tensor_3d))\n\n\ninteractive_session = tf.InteractiveSession()\ntensor = np.array([1, 2, 3, 4, 5])\ntensor = tf.constant(tensor)\nprint(tensor.eval())\ninteractive_session.close()\n\n\"\"\"\nPython 2.7.10 (default, Oct 14 2015, 16:09:02) \n[GCC 5.2.1 20151010] on linux2\nType \"copyright\", \"credits\" or \"license()\" for more information.\n>>> ================================ RESTART ================================\n>>> \n(10,)\n[ 1  2  3  4  5  6  7  8  9 10]\n(3, 3)\n[[1 2 3]\n [4 5 6]\n [7 8 9]]\n(3, 3, 3)\n[[[  0.   1.   2.]\n  [  3.   4.   5.]\n  [  6.   7.   8.]]\n\n [[  9.  10.  11.]\n  [ 12.  13.  14.]\n  [ 15.  16.  17.]]\n\n [[ 18.  19.  20.]\n  [ 21.  22.  23.]\n  [ 24.  25.  26.]]]\n[1 2 3 4 5]\n>>> \n\"\"\"\n\n\n"
  },
  {
    "path": "Chapter03/Python 2.7/five_layers_relu_1.py",
    "content": "from tensorflow.examples.tutorials.mnist import input_data\nimport tensorflow as tf\nimport math\n\nlogs_path = 'log_simple_stats_5_layers_relu_softmax'\nbatch_size = 100\nlearning_rate = 0.5\ntraining_epochs = 10\n\nmnist = input_data.read_data_sets(\"/tmp/data\", one_hot=True)\n\nX = tf.placeholder(tf.float32, [None, 784])\nY_ = tf.placeholder(tf.float32, [None, 10])\nlr = tf.placeholder(tf.float32)\n\n\nL = 200\nM = 100\nN = 60\nO = 30\n\nW1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))  \nB1 = tf.Variable(tf.ones([L])/10)\nW2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))\nB2 = tf.Variable(tf.ones([M])/10)\nW3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))\nB3 = tf.Variable(tf.ones([N])/10)\nW4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))\nB4 = tf.Variable(tf.ones([O])/10)\nW5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))\nB5 = tf.Variable(tf.zeros([10]))\n\n\nXX = tf.reshape(X, [-1, 784])\nY1 = tf.nn.relu(tf.matmul(XX, W1) + B1)\nY2 = tf.nn.relu(tf.matmul(Y1, W2) + B2)\nY3 = tf.nn.relu(tf.matmul(Y2, W3) + B3)\nY4 = tf.nn.relu(tf.matmul(Y3, W4) + B4)\nYlogits = tf.matmul(Y4, W5) + B5\nY = tf.nn.softmax(Ylogits)\n\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)\ncross_entropy = tf.reduce_mean(cross_entropy)*100\n\n\ncorrect_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n\ntrain_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)\n\ntf.summary.scalar(\"cost\", cross_entropy)\ntf.summary.scalar(\"accuracy\", accuracy)\nsummary_op = tf.summary.merge_all()\n\ninit = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    writer = tf.summary.FileWriter(logs_path, \\\n                                    graph=tf.get_default_graph())\n    for epoch in range(training_epochs):\n        batch_count = int(mnist.train.num_examples/batch_size)\n        for i in range(batch_count):\n            batch_x, batch_y = mnist.train.next_batch(batch_size)\n            max_learning_rate = 0.003\n            min_learning_rate = 0.0001\n            decay_speed = 2000 \n            learning_rate = min_learning_rate+\\\n                            (max_learning_rate - min_learning_rate)\\\n                            * math.exp(-i/decay_speed)\n            _, summary = sess.run([train_step, summary_op],\\\n                                  {X: batch_x, Y_: batch_y,\\\n                                   lr: learning_rate})\n            writer.add_summary(summary,\\\n                               epoch * batch_count + i)\n        #if epoch % 2 == 0:\n        print(\"Epoch: \", epoch)\n           \n    print(\"Accuracy: \", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))\n    print(\"done\")\n\n"
  },
  {
    "path": "Chapter03/Python 2.7/five_layers_relu_dropout_1.py",
    "content": "from tensorflow.examples.tutorials.mnist import input_data\nimport tensorflow as tf\nimport math\n\nlogs_path = 'log_simple_stats_5_lyers_dropout'\nbatch_size = 100\nlearning_rate = 0.5\ntraining_epochs = 10\n\nmnist = input_data.read_data_sets(\"/tmp/data\", one_hot=True)\n\nX = tf.placeholder(tf.float32, [None, 784])\nY_ = tf.placeholder(tf.float32, [None, 10])\nlr = tf.placeholder(tf.float32)\npkeep = tf.placeholder(tf.float32)\n\nL = 200\nM = 100\nN = 60\nO = 30\n\nW1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))  \nB1 = tf.Variable(tf.ones([L])/10)\nW2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))\nB2 = tf.Variable(tf.ones([M])/10)\nW3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))\nB3 = tf.Variable(tf.ones([N])/10)\nW4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))\nB4 = tf.Variable(tf.ones([O])/10)\nW5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))\nB5 = tf.Variable(tf.zeros([10]))\n\nXX = tf.reshape(X, [-1, 28*28])\n\nY1 = tf.nn.relu(tf.matmul(XX, W1) + B1)\nY1d = tf.nn.dropout(Y1, pkeep)\n\nY2 = tf.nn.relu(tf.matmul(Y1d, W2) + B2)\nY2d = tf.nn.dropout(Y2, pkeep)\n\nY3 = tf.nn.relu(tf.matmul(Y2d, W3) + B3)\nY3d = tf.nn.dropout(Y3, pkeep)\n\nY4 = tf.nn.relu(tf.matmul(Y3d, W4) + B4)\nY4d = tf.nn.dropout(Y4, pkeep)\n\nYlogits = tf.matmul(Y4d, W5) + B5\nY = tf.nn.softmax(Ylogits)\n\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)\ncross_entropy = tf.reduce_mean(cross_entropy)*100\n\ncorrect_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\ntrain_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)\n\ntf.summary.scalar(\"cost\", cross_entropy)\ntf.summary.scalar(\"accuracy\", accuracy)\nsummary_op = tf.summary.merge_all()\n\ninit = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)\n\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    writer = tf.summary.FileWriter(logs_path, \\\n                                    graph=tf.get_default_graph())\n    for epoch in range(training_epochs):\n        batch_count = int(mnist.train.num_examples/batch_size)\n        for i in range(batch_count):\n            batch_x, batch_y = mnist.train.next_batch(batch_size)\n            max_learning_rate = 0.003\n            min_learning_rate = 0.0001\n            decay_speed = 2000 \n            learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)\n            _, summary = sess.run([train_step, summary_op], {X: batch_x, Y_: batch_y, pkeep: 0.75, lr: learning_rate})\n            writer.add_summary(summary,\\\n                               epoch * batch_count + i)\n        print \"Epoch: \", epoch\n           \n    print \"Accuracy: \", accuracy.eval\\\n          (feed_dict={X: mnist.test.images, Y_: mnist.test.labels, pkeep: 0.75})\n    print \"done\"\n\n"
  },
  {
    "path": "Chapter03/Python 2.7/five_layers_sigmoid_1.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport math\n\nlogs_path = 'log_simple_stats_5_layers_sigmoid'\nbatch_size = 100\nlearning_rate = 0.5\ntraining_epochs = 10\n\nmnist = input_data.read_data_sets(\"/tmp/data\", one_hot=True)\nX = tf.placeholder(tf.float32, [None, 784])\nY_ = tf.placeholder(tf.float32, [None, 10])\n\nL = 200\nM = 100\nN = 60\nO = 30\n\nW1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))  \nB1 = tf.Variable(tf.zeros([L]))\nW2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))\nB2 = tf.Variable(tf.zeros([M]))\nW3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))\nB3 = tf.Variable(tf.zeros([N]))\nW4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))\nB4 = tf.Variable(tf.zeros([O]))\nW5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))\nB5 = tf.Variable(tf.zeros([10]))\n\n\nXX = tf.reshape(X, [-1, 784])\nY1 = tf.nn.sigmoid(tf.matmul(XX, W1) + B1)\nY2 = tf.nn.sigmoid(tf.matmul(Y1, W2) + B2)\nY3 = tf.nn.sigmoid(tf.matmul(Y2, W3) + B3)\nY4 = tf.nn.sigmoid(tf.matmul(Y3, W4) + B4)\nYlogits = tf.matmul(Y4, W5) + B5\nY = tf.nn.softmax(Ylogits)\n\n\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)\ncross_entropy = tf.reduce_mean(cross_entropy)*100\ncorrect_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nlearning_rate = 0.003\ntrain_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)\ntf.summary.scalar(\"cost\", cross_entropy)\ntf.summary.scalar(\"accuracy\", accuracy)\nsummary_op = tf.summary.merge_all()\n\n\n\ninit = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())\n    for epoch in range(training_epochs):\n        batch_count = int(mnist.train.num_examples/batch_size)\n        for i in range(batch_count):\n            batch_x, batch_y = mnist.train.next_batch(batch_size)\n            _, summary = sess.run([train_step, summary_op],\\\n                                  feed_dict={X: batch_x,\\\n                                             Y_: batch_y})\n            writer.add_summary(summary,\\\n                               epoch * batch_count + i)\n        print(\"Epoch: \", epoch)\n           \n    print(\"Accuracy: \", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))\n    print(\"done\")\n\n"
  },
  {
    "path": "Chapter03/Python 2.7/softmax_classifier_1.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport matplotlib.pyplot as plt\nfrom random import randint\nimport numpy as np\n\nlogs_path = 'log_mnist_softmax'\nbatch_size = 100\nlearning_rate = 0.5\ntraining_epochs = 10\nmnist = input_data.read_data_sets(\"data\", one_hot=True)\n\nX = tf.placeholder(tf.float32, [None, 784], name=\"input\")\nY_ = tf.placeholder(tf.float32, [None, 10])\nW = tf.Variable(tf.zeros([784, 10]))\nb = tf.Variable(tf.zeros([10]))\nXX = tf.reshape(X, [-1, 784])\n\n\nY = tf.nn.softmax(tf.matmul(XX, W) + b, name=\"output\")\ncross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y))\ncorrect_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\ntrain_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)\n\ntf.summary.scalar(\"cost\", cross_entropy)\ntf.summary.scalar(\"accuracy\", accuracy)\nsummary_op = tf.summary.merge_all()\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    writer = tf.summary.FileWriter(logs_path, \\\n                                    graph=tf.get_default_graph())\n    for epoch in range(training_epochs):\n        batch_count = int(mnist.train.num_examples/batch_size)\n        for i in range(batch_count):\n            batch_x, batch_y = mnist.train.next_batch(batch_size)\n            _, summary = sess.run([train_step, summary_op],\\\n                                  feed_dict={X: batch_x,\\\n                                             Y_: batch_y})\n            writer.add_summary(summary, epoch * batch_count + i)\n        print(\"Epoch: \", epoch)\n           \n    print(\"Accuracy: \", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))\n    print(\"done\")\n    \n    num = randint(0, mnist.test.images.shape[0]) \n    img = mnist.test.images[num] \n \n    classification = sess.run(tf.argmax(Y, 1), feed_dict={X: [img]}) \n    print('Neural Network predicted', classification[0])\n    print('Real label is:', np.argmax(mnist.test.labels[num]))\n\n    saver = tf.train.Saver()\n    save_path = saver.save(sess, \"data/saved_mnist_cnn.ckpt\")\n    print(\"Model saved to %s\" % save_path)\n\n\n"
  },
  {
    "path": "Chapter03/Python 2.7/softmax_model_loader_1.py",
    "content": "import matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom random import randint\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets('data', one_hot=True)\nsess = tf.InteractiveSession()\nnew_saver = tf.train.import_meta_graph('data/saved_mnist_cnn.ckpt.meta')\nnew_saver.restore(sess, 'data/saved_mnist_cnn.ckpt')\ntf.get_default_graph().as_graph_def()\n\nx = sess.graph.get_tensor_by_name(\"input:0\")\ny_conv = sess.graph.get_tensor_by_name(\"output:0\")\n\nnum = randint(0, mnist.test.images.shape[0])\nimg = mnist.test.images[num]\n\nresult = sess.run([\"input:0\", y_conv], feed_dict= {x:img})\nprint(result)\nprint(sess.run(tf.argmax(result, 1)))\n\nplt.imshow(image_b.reshape([28, 28]), cmap='Greys')\nplt.show()\n\n\n\n\n\n"
  },
  {
    "path": "Chapter03/Python 2.7/softmax_model_saver_1.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport matplotlib.pyplot as plt\nfrom random import randint\nimport numpy as np\n\nlogs_path = 'log_mnist_softmax'\nbatch_size = 100\nlearning_rate = 0.5\ntraining_epochs = 10\nmnist = input_data.read_data_sets(\"data\", one_hot=True)\n\nX = tf.placeholder(tf.float32, [None, 784], name=\"input\")\nY_ = tf.placeholder(tf.float32, [None, 10])\nW = tf.Variable(tf.zeros([784, 10]))\nb = tf.Variable(tf.zeros([10]))\nXX = tf.reshape(X, [-1, 784])\n\nY = tf.nn.softmax(tf.matmul(X, W) + b, name=\"output\")\ncross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y))\ncorrect_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\ntrain_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)\n\ntf.summary.scalar(\"cost\", cross_entropy)\ntf.summary.scalar(\"accuracy\", accuracy)\nsummary_op = tf.summary.merge_all()\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    writer = tf.summary.FileWriter(logs_path, \\\n                                   graph=tf.get_default_graph())\n    for epoch in range(training_epochs):\n        batch_count = int(mnist.train.num_examples / batch_size)\n        for i in range(batch_count):\n            batch_x, batch_y = mnist.train.next_batch(batch_size)\n            _, summary = sess.run([train_step, summary_op], \\\n                                  feed_dict={X: batch_x, \\\n                                             Y_: batch_y})\n            writer.add_summary(summary, epoch * batch_count + i)\n        print(\"Epoch: \", epoch)\n\n    print(\"Accuracy: \", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))\n    print(\"done\")\n\n    num = randint(0, mnist.test.images.shape[0])\n    img = mnist.test.images[num]\n\n    classification = sess.run(tf.argmax(Y, 1), feed_dict={X: [img]})\n    print('Neural Network predicted', classification[0])\n    print('Real label is:', np.argmax(mnist.test.labels[num]))\n\n    saver = tf.train.Saver()\n    save_path = saver.save(sess, \"data/saved_mnist_cnn.ckpt\")\n    print(\"Model saved to %s\" % save_path)\n\n\n"
  },
  {
    "path": "Chapter03/Python 3.5/five_layers_relu_1.py",
    "content": "from tensorflow.examples.tutorials.mnist import input_data\nimport tensorflow as tf\nimport math\n\nlogs_path = 'log_simple_stats_5_layers_relu_softmax'\nbatch_size = 100\nlearning_rate = 0.5\ntraining_epochs = 10\n\nmnist = input_data.read_data_sets(\"/tmp/data\", one_hot=True)\n\nX = tf.placeholder(tf.float32, [None, 784])\nY_ = tf.placeholder(tf.float32, [None, 10])\nlr = tf.placeholder(tf.float32)\n\n\nL = 200\nM = 100\nN = 60\nO = 30\n\nW1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))  \nB1 = tf.Variable(tf.ones([L])/10)\nW2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))\nB2 = tf.Variable(tf.ones([M])/10)\nW3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))\nB3 = tf.Variable(tf.ones([N])/10)\nW4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))\nB4 = tf.Variable(tf.ones([O])/10)\nW5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))\nB5 = tf.Variable(tf.zeros([10]))\n\n\nXX = tf.reshape(X, [-1, 784])\nY1 = tf.nn.relu(tf.matmul(XX, W1) + B1)\nY2 = tf.nn.relu(tf.matmul(Y1, W2) + B2)\nY3 = tf.nn.relu(tf.matmul(Y2, W3) + B3)\nY4 = tf.nn.relu(tf.matmul(Y3, W4) + B4)\nYlogits = tf.matmul(Y4, W5) + B5\nY = tf.nn.softmax(Ylogits)\n\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)\ncross_entropy = tf.reduce_mean(cross_entropy)*100\n\n\ncorrect_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n\ntrain_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)\n\ntf.summary.scalar(\"cost\", cross_entropy)\ntf.summary.scalar(\"accuracy\", accuracy)\nsummary_op = tf.summary.merge_all()\n\ninit = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    writer = tf.summary.FileWriter(logs_path, \\\n                                    graph=tf.get_default_graph())\n    for epoch in range(training_epochs):\n        batch_count = int(mnist.train.num_examples/batch_size)\n        for i in range(batch_count):\n            batch_x, batch_y = mnist.train.next_batch(batch_size)\n            max_learning_rate = 0.003\n            min_learning_rate = 0.0001\n            decay_speed = 2000 \n            learning_rate = min_learning_rate+\\\n                            (max_learning_rate - min_learning_rate)\\\n                            * math.exp(-i/decay_speed)\n            _, summary = sess.run([train_step, summary_op],\\\n                                  {X: batch_x, Y_: batch_y,\\\n                                   lr: learning_rate})\n            writer.add_summary(summary,\\\n                               epoch * batch_count + i)\n        #if epoch % 2 == 0:\n        print(\"Epoch: \", epoch)\n           \n    print(\"Accuracy: \", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))\n    print(\"done\")\n\n"
  },
  {
    "path": "Chapter03/Python 3.5/five_layers_relu_dropout_1.py",
    "content": "from tensorflow.examples.tutorials.mnist import input_data\nimport tensorflow as tf\nimport math\n\nlogs_path = 'log_simple_stats_5_lyers_dropout'\nbatch_size = 100\nlearning_rate = 0.5\ntraining_epochs = 10\n\nmnist = input_data.read_data_sets(\"/tmp/data\", one_hot=True)\n\nX = tf.placeholder(tf.float32, [None, 784])\nY_ = tf.placeholder(tf.float32, [None, 10])\nlr = tf.placeholder(tf.float32)\npkeep = tf.placeholder(tf.float32)\n\nL = 200\nM = 100\nN = 60\nO = 30\n\nW1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))  \nB1 = tf.Variable(tf.ones([L])/10)\nW2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))\nB2 = tf.Variable(tf.ones([M])/10)\nW3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))\nB3 = tf.Variable(tf.ones([N])/10)\nW4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))\nB4 = tf.Variable(tf.ones([O])/10)\nW5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))\nB5 = tf.Variable(tf.zeros([10]))\n\nXX = tf.reshape(X, [-1, 28*28])\n\nY1 = tf.nn.relu(tf.matmul(XX, W1) + B1)\nY1d = tf.nn.dropout(Y1, pkeep)\n\nY2 = tf.nn.relu(tf.matmul(Y1d, W2) + B2)\nY2d = tf.nn.dropout(Y2, pkeep)\n\nY3 = tf.nn.relu(tf.matmul(Y2d, W3) + B3)\nY3d = tf.nn.dropout(Y3, pkeep)\n\nY4 = tf.nn.relu(tf.matmul(Y3d, W4) + B4)\nY4d = tf.nn.dropout(Y4, pkeep)\n\nYlogits = tf.matmul(Y4d, W5) + B5\nY = tf.nn.softmax(Ylogits)\n\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)\ncross_entropy = tf.reduce_mean(cross_entropy)*100\n\ncorrect_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\ntrain_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)\n\ntf.summary.scalar(\"cost\", cross_entropy)\ntf.summary.scalar(\"accuracy\", accuracy)\nsummary_op = tf.summary.merge_all()\n\ninit = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)\n\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    writer = tf.summary.FileWriter(logs_path, \\\n                                    graph=tf.get_default_graph())\n    for epoch in range(training_epochs):\n        batch_count = int(mnist.train.num_examples/batch_size)\n        for i in range(batch_count):\n            batch_x, batch_y = mnist.train.next_batch(batch_size)\n            max_learning_rate = 0.003\n            min_learning_rate = 0.0001\n            decay_speed = 2000 \n            learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-i/decay_speed)\n            _, summary = sess.run([train_step, summary_op], {X: batch_x, Y_: batch_y, pkeep: 0.75, lr: learning_rate})\n            writer.add_summary(summary,\\\n                               epoch * batch_count + i)\n        print (\"Epoch: \", epoch)\n           \n    print (\"Accuracy: \", accuracy.eval\\\n          (feed_dict={X: mnist.test.images, Y_: mnist.test.labels, pkeep: 0.75}))\n    print (\"done\")\n\n"
  },
  {
    "path": "Chapter03/Python 3.5/five_layers_sigmoid_1.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport math\n\nlogs_path = 'log_simple_stats_5_layers_sigmoid'\nbatch_size = 100\nlearning_rate = 0.5\ntraining_epochs = 10\n\nmnist = input_data.read_data_sets(\"/tmp/data\", one_hot=True)\nX = tf.placeholder(tf.float32, [None, 784])\nY_ = tf.placeholder(tf.float32, [None, 10])\n\nL = 200\nM = 100\nN = 60\nO = 30\n\nW1 = tf.Variable(tf.truncated_normal([784, L], stddev=0.1))  \nB1 = tf.Variable(tf.zeros([L]))\nW2 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))\nB2 = tf.Variable(tf.zeros([M]))\nW3 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))\nB3 = tf.Variable(tf.zeros([N]))\nW4 = tf.Variable(tf.truncated_normal([N, O], stddev=0.1))\nB4 = tf.Variable(tf.zeros([O]))\nW5 = tf.Variable(tf.truncated_normal([O, 10], stddev=0.1))\nB5 = tf.Variable(tf.zeros([10]))\n\n\nXX = tf.reshape(X, [-1, 784])\nY1 = tf.nn.sigmoid(tf.matmul(XX, W1) + B1)\nY2 = tf.nn.sigmoid(tf.matmul(Y1, W2) + B2)\nY3 = tf.nn.sigmoid(tf.matmul(Y2, W3) + B3)\nY4 = tf.nn.sigmoid(tf.matmul(Y3, W4) + B4)\nYlogits = tf.matmul(Y4, W5) + B5\nY = tf.nn.softmax(Ylogits)\n\n\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)\ncross_entropy = tf.reduce_mean(cross_entropy)*100\ncorrect_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nlearning_rate = 0.003\ntrain_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)\ntf.summary.scalar(\"cost\", cross_entropy)\ntf.summary.scalar(\"accuracy\", accuracy)\nsummary_op = tf.summary.merge_all()\n\n\n\ninit = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())\n    for epoch in range(training_epochs):\n        batch_count = int(mnist.train.num_examples/batch_size)\n        for i in range(batch_count):\n            batch_x, batch_y = mnist.train.next_batch(batch_size)\n            _, summary = sess.run([train_step, summary_op],\\\n                                  feed_dict={X: batch_x,\\\n                                             Y_: batch_y})\n            writer.add_summary(summary,\\\n                               epoch * batch_count + i)\n        print(\"Epoch: \", epoch)\n           \n    print(\"Accuracy: \", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))\n    print(\"done\")\n\n"
  },
  {
    "path": "Chapter03/Python 3.5/softmax_classifier_1.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport matplotlib.pyplot as plt\nfrom random import randint\nimport numpy as np\n\nlogs_path = 'log_mnist_softmax'\nbatch_size = 100\nlearning_rate = 0.5\ntraining_epochs = 10\nmnist = input_data.read_data_sets(\"data\", one_hot=True)\n\nX = tf.placeholder(tf.float32, [None, 784], name=\"input\")\nY_ = tf.placeholder(tf.float32, [None, 10])\nW = tf.Variable(tf.zeros([784, 10]))\nb = tf.Variable(tf.zeros([10]))\nXX = tf.reshape(X, [-1, 784])\n\n\nY = tf.nn.softmax(tf.matmul(XX, W) + b, name=\"output\")\ncross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y))\ncorrect_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\ntrain_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)\n\ntf.summary.scalar(\"cost\", cross_entropy)\ntf.summary.scalar(\"accuracy\", accuracy)\nsummary_op = tf.summary.merge_all()\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    writer = tf.summary.FileWriter(logs_path, \\\n                                    graph=tf.get_default_graph())\n    for epoch in range(training_epochs):\n        batch_count = int(mnist.train.num_examples/batch_size)\n        for i in range(batch_count):\n            batch_x, batch_y = mnist.train.next_batch(batch_size)\n            _, summary = sess.run([train_step, summary_op],\\\n                                  feed_dict={X: batch_x,\\\n                                             Y_: batch_y})\n            writer.add_summary(summary, epoch * batch_count + i)\n        print(\"Epoch: \", epoch)\n           \n    print(\"Accuracy: \", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))\n    print(\"done\")\n    \n    num = randint(0, mnist.test.images.shape[0]) \n    img = mnist.test.images[num] \n \n    classification = sess.run(tf.argmax(Y, 1), feed_dict={X: [img]}) \n    print('Neural Network predicted', classification[0])\n    print('Real label is:', np.argmax(mnist.test.labels[num]))\n\n    saver = tf.train.Saver()\n    save_path = saver.save(sess, \"data/saved_mnist_cnn.ckpt\")\n    print(\"Model saved to %s\" % save_path)\n\n\n"
  },
  {
    "path": "Chapter03/Python 3.5/softmax_model_loader_1.py",
    "content": "import matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom random import randint\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets('data', one_hot=True)\nsess = tf.InteractiveSession()\nnew_saver = tf.train.import_meta_graph('data/saved_mnist_cnn.ckpt.meta')\nnew_saver.restore(sess, 'data/saved_mnist_cnn.ckpt')\ntf.get_default_graph().as_graph_def()\n\nx = sess.graph.get_tensor_by_name(\"input:0\")\ny_conv = sess.graph.get_tensor_by_name(\"output:0\")\n\nnum = randint(0, mnist.test.images.shape[0])\nimg = mnist.test.images[num]\n\nresult = sess.run([\"input:0\", y_conv], feed_dict= {x:img})\nprint(result)\nprint(sess.run(tf.argmax(result, 1)))\n\nplt.imshow(image_b.reshape([28, 28]), cmap='Greys')\nplt.show()\n\n\n\n\n\n"
  },
  {
    "path": "Chapter03/Python 3.5/softmax_model_saver_1.py",
    "content": "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport matplotlib.pyplot as plt\nfrom random import randint\nimport numpy as np\n\nlogs_path = 'log_mnist_softmax'\nbatch_size = 100\nlearning_rate = 0.5\ntraining_epochs = 10\nmnist = input_data.read_data_sets(\"data\", one_hot=True)\n\nX = tf.placeholder(tf.float32, [None, 784], name=\"input\")\nY_ = tf.placeholder(tf.float32, [None, 10])\nW = tf.Variable(tf.zeros([784, 10]))\nb = tf.Variable(tf.zeros([10]))\nXX = tf.reshape(X, [-1, 784])\n\nY = tf.nn.softmax(tf.matmul(X, W) + b, name=\"output\")\ncross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y))\ncorrect_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\ntrain_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)\n\ntf.summary.scalar(\"cost\", cross_entropy)\ntf.summary.scalar(\"accuracy\", accuracy)\nsummary_op = tf.summary.merge_all()\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    writer = tf.summary.FileWriter(logs_path, \\\n                                   graph=tf.get_default_graph())\n    for epoch in range(training_epochs):\n        batch_count = int(mnist.train.num_examples / batch_size)\n        for i in range(batch_count):\n            batch_x, batch_y = mnist.train.next_batch(batch_size)\n            _, summary = sess.run([train_step, summary_op], \\\n                                  feed_dict={X: batch_x, \\\n                                             Y_: batch_y})\n            writer.add_summary(summary, epoch * batch_count + i)\n        print(\"Epoch: \", epoch)\n\n    print(\"Accuracy: \", accuracy.eval(feed_dict={X: mnist.test.images, Y_: mnist.test.labels}))\n    print(\"done\")\n\n    num = randint(0, mnist.test.images.shape[0])\n    img = mnist.test.images[num]\n\n    classification = sess.run(tf.argmax(Y, 1), feed_dict={X: [img]})\n    print('Neural Network predicted', classification[0])\n    print('Real label is:', np.argmax(mnist.test.labels[num]))\n\n    saver = tf.train.Saver()\n    save_path = saver.save(sess, \"data/saved_mnist_cnn.ckpt\")\n    print(\"Model saved to %s\" % save_path)\n\n\n"
  },
  {
    "path": "Chapter04/EMOTION_CNN/Python 2.7/EmotionDetectorUtils.py",
    "content": "import pandas as pd\nimport numpy as np\nimport os, sys, inspect\nfrom six.moves import cPickle as pickle\nimport scipy.misc as misc\n\nIMAGE_SIZE = 48\nNUM_LABELS = 7\nVALIDATION_PERCENT = 0.1  # use 10 percent of training images for validation\n\nIMAGE_LOCATION_NORM = IMAGE_SIZE / 2\n\nnp.random.seed(0)\n\nemotion = {0:'anger', 1:'disgust',\\\n           2:'fear',3:'happy',\\\n           4:'sad',5:'surprise',6:'neutral'}\n\nclass testResult:\n\n    def __init__(self):\n        self.anger = 0\n        self.disgust = 0\n        self.fear = 0\n        self.happy = 0\n        self.sad = 0\n        self.surprise = 0\n        self.neutral = 0\n        \n    def evaluate(self,label):\n        \n        if (0 == label):\n            self.anger = self.anger+1\n        if (1 == label):\n            self.disgust = self.disgust+1\n        if (2 == label):\n            self.fear = self.fear+1\n        if (3 == label):\n            self.happy = self.happy+1\n        if (4 == label):\n            self.sad = self.sad+1\n        if (5 == label):\n            self.surprise = self.surprise+1\n        if (6 == label):\n            self.neutral = self.neutral+1\n\n    def display_result(self,evaluations):\n        print(\"anger = \"    + str((self.anger/float(evaluations))*100)    + \"%\")\n        print(\"disgust = \"  + str((self.disgust/float(evaluations))*100)  + \"%\")\n        print(\"fear = \"     + str((self.fear/float(evaluations))*100)     + \"%\")\n        print(\"happy = \"    + str((self.happy/float(evaluations))*100)    + \"%\")\n        print(\"sad = \"      + str((self.sad/float(evaluations))*100)      + \"%\")\n        print(\"surprise = \" + str((self.surprise/float(evaluations))*100) + \"%\")\n        print(\"neutral = \"  + str((self.neutral/float(evaluations))*100)  + \"%\")\n            \n\ndef read_data(data_dir, force=False):\n    def create_onehot_label(x):\n        label = np.zeros((1, NUM_LABELS), dtype=np.float32)\n        label[:, int(x)] = 1\n        return label\n\n    pickle_file = os.path.join(data_dir, \"EmotionDetectorData.pickle\")\n    if force or not os.path.exists(pickle_file):\n        train_filename = os.path.join(data_dir, \"train.csv\")\n        data_frame = pd.read_csv(train_filename)\n        data_frame['Pixels'] = data_frame['Pixels'].apply(lambda x: np.fromstring(x, sep=\" \") / 255.0)\n        data_frame = data_frame.dropna()\n        print(\"Reading train.csv ...\")\n\n        train_images = np.vstack(data_frame['Pixels']).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)\n        print(train_images.shape)\n        train_labels = np.array([map(create_onehot_label, data_frame['Emotion'].values)]).reshape(-1, NUM_LABELS)\n        print(train_labels.shape)\n\n        permutations = np.random.permutation(train_images.shape[0])\n        train_images = train_images[permutations]\n        train_labels = train_labels[permutations]\n        validation_percent = int(train_images.shape[0] * VALIDATION_PERCENT)\n        validation_images = train_images[:validation_percent]\n        validation_labels = train_labels[:validation_percent]\n        train_images = train_images[validation_percent:]\n        train_labels = train_labels[validation_percent:]\n\n        print(\"Reading test.csv ...\")\n        test_filename = os.path.join(data_dir, \"test.csv\")\n        data_frame = pd.read_csv(test_filename)\n        data_frame['Pixels'] = data_frame['Pixels'].apply(lambda x: np.fromstring(x, sep=\" \") / 255.0)\n        data_frame = data_frame.dropna()\n        test_images = 
np.vstack(data_frame['Pixels']).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)\n\n        with open(pickle_file, \"wb\") as file:\n            try:\n                print('Picking ...')\n                save = {\n                    \"train_images\": train_images,\n                    \"train_labels\": train_labels,\n                    \"validation_images\": validation_images,\n                    \"validation_labels\": validation_labels,\n                    \"test_images\": test_images,\n                }\n                pickle.dump(save, file, pickle.HIGHEST_PROTOCOL)\n\n            except:\n                print(\"Unable to pickle file :/\")\n\n    with open(pickle_file, \"rb\") as file:\n        save = pickle.load(file)\n        train_images = save[\"train_images\"]\n        train_labels = save[\"train_labels\"]\n        validation_images = save[\"validation_images\"]\n        validation_labels = save[\"validation_labels\"]\n        test_images = save[\"test_images\"]\n\n    return train_images, train_labels, validation_images, validation_labels, test_images\n"
  },
  {
    "path": "Chapter04/EMOTION_CNN/Python 2.7/EmotionDetector_1.py",
    "content": "import tensorflow as tf\nimport numpy as np\n#import os, sys, inspect\nfrom datetime import datetime\nimport EmotionDetectorUtils\n\n\"\"\"\nlib_path = os.path.realpath(\n    os.path.abspath(os.path.join(os.path.split(inspect.getfile(inspect.currentframe()))[0], \"..\")))\nif lib_path not in sys.path:\n    sys.path.insert(0, lib_path)\n\"\"\"\n\n\nFLAGS = tf.flags.FLAGS\ntf.flags.DEFINE_string(\"data_dir\", \"EmotionDetector/\", \"Path to data files\")\ntf.flags.DEFINE_string(\"logs_dir\", \"logs/EmotionDetector_logs/\", \"Path to where log files are to be saved\")\ntf.flags.DEFINE_string(\"mode\", \"train\", \"mode: train (Default)/ test\")\n\nBATCH_SIZE = 128\nLEARNING_RATE = 1e-3\nMAX_ITERATIONS = 1001\nREGULARIZATION = 1e-2\nIMAGE_SIZE = 48\nNUM_LABELS = 7\nVALIDATION_PERCENT = 0.1\n\n\ndef add_to_regularization_loss(W, b):\n    tf.add_to_collection(\"losses\", tf.nn.l2_loss(W))\n    tf.add_to_collection(\"losses\", tf.nn.l2_loss(b))\n\ndef weight_variable(shape, stddev=0.02, name=None):\n    initial = tf.truncated_normal(shape, stddev=stddev)\n    if name is None:\n        return tf.Variable(initial)\n    else:\n        return tf.get_variable(name, initializer=initial)\n\n\ndef bias_variable(shape, name=None):\n    initial = tf.constant(0.0, shape=shape)\n    if name is None:\n        return tf.Variable(initial)\n    else:\n        return tf.get_variable(name, initializer=initial)\n\ndef conv2d_basic(x, W, bias):\n    conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding=\"SAME\")\n    return tf.nn.bias_add(conv, bias)\n\ndef max_pool_2x2(x):\n    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], \\\n                          strides=[1, 2, 2, 1], padding=\"SAME\")\n\n\ndef emotion_cnn(dataset):\n    with tf.name_scope(\"conv1\") as scope:\n        #W_conv1 = weight_variable([5, 5, 1, 32])\n        #b_conv1 = bias_variable([32])\n        tf.summary.histogram(\"W_conv1\", weights['wc1'])\n        tf.summary.histogram(\"b_conv1\", biases['bc1'])\n        conv_1 = tf.nn.conv2d(dataset, weights['wc1'],\\\n                              strides=[1, 1, 1, 1], padding=\"SAME\")\n        h_conv1 = tf.nn.bias_add(conv_1, biases['bc1'])\n        #h_conv1 = conv2d_basic(dataset, W_conv1, b_conv1)\n        h_1 = tf.nn.relu(h_conv1)\n        h_pool1 = max_pool_2x2(h_1)\n        add_to_regularization_loss(weights['wc1'], biases['bc1'])\n\n    with tf.name_scope(\"conv2\") as scope:\n        #W_conv2 = weight_variable([3, 3, 32, 64])\n        #b_conv2 = bias_variable([64])\n        tf.summary.histogram(\"W_conv2\", weights['wc2'])\n        tf.summary.histogram(\"b_conv2\", biases['bc2'])\n        conv_2 = tf.nn.conv2d(h_pool1, weights['wc2'], strides=[1, 1, 1, 1], padding=\"SAME\")\n        h_conv2 = tf.nn.bias_add(conv_2, biases['bc2'])\n        #h_conv2 = conv2d_basic(h_pool1, weights['wc2'], biases['bc2'])\n        h_2 = tf.nn.relu(h_conv2)\n        h_pool2 = max_pool_2x2(h_2)\n        add_to_regularization_loss(weights['wc2'], biases['bc2'])\n\n    with tf.name_scope(\"fc_1\") as scope:\n        prob = 0.5\n        image_size = IMAGE_SIZE / 4\n        h_flat = tf.reshape(h_pool2, [-1, image_size * image_size * 64])\n        #W_fc1 = weight_variable([image_size * image_size * 64, 256])\n        #b_fc1 = bias_variable([256])\n        tf.summary.histogram(\"W_fc1\", weights['wf1'])\n        tf.summary.histogram(\"b_fc1\", biases['bf1'])\n        h_fc1 = tf.nn.relu(tf.matmul(h_flat, weights['wf1']) + biases['bf1'])\n        h_fc1_dropout = tf.nn.dropout(h_fc1, prob)\n        \n    with 
tf.name_scope(\"fc_2\") as scope:\n        #W_fc2 = weight_variable([256, NUM_LABELS])\n        #b_fc2 = bias_variable([NUM_LABELS])\n        tf.summary.histogram(\"W_fc2\", weights['wf2'])\n        tf.summary.histogram(\"b_fc2\", biases['bf2'])\n        #pred = tf.matmul(h_fc1, weights['wf2']) + biases['bf2']\n        pred = tf.matmul(h_fc1_dropout, weights['wf2']) + biases['bf2']\n\n    return pred\n\nweights = {\n    'wc1': weight_variable([5, 5, 1, 32], name=\"W_conv1\"),\n    'wc2': weight_variable([3, 3, 32, 64],name=\"W_conv2\"),\n    'wf1': weight_variable([(IMAGE_SIZE / 4) * (IMAGE_SIZE / 4) * 64, 256],name=\"W_fc1\"),\n    'wf2': weight_variable([256, NUM_LABELS], name=\"W_fc2\")\n}\n\nbiases = {\n    'bc1': bias_variable([32], name=\"b_conv1\"),\n    'bc2': bias_variable([64], name=\"b_conv2\"),\n    'bf1': bias_variable([256], name=\"b_fc1\"),\n    'bf2': bias_variable([NUM_LABELS], name=\"b_fc2\")\n}\n\ndef loss(pred, label):\n    cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=label))\n    tf.summary.scalar('Entropy', cross_entropy_loss)\n    reg_losses = tf.add_n(tf.get_collection(\"losses\"))\n    tf.summary.scalar('Reg_loss', reg_losses)\n    return cross_entropy_loss + REGULARIZATION * reg_losses\n\n\ndef train(loss, step):\n    return tf.train.AdamOptimizer(LEARNING_RATE).minimize(loss, global_step=step)\n\n\ndef get_next_batch(images, labels, step):\n    offset = (step * BATCH_SIZE) % (images.shape[0] - BATCH_SIZE)\n    batch_images = images[offset: offset + BATCH_SIZE]\n    batch_labels = labels[offset:offset + BATCH_SIZE]\n    return batch_images, batch_labels\n\n\ndef main(argv=None):\n    train_images, train_labels, valid_images, valid_labels, test_images = EmotionDetectorUtils.read_data(FLAGS.data_dir)\n    print(\"Train size: %s\" % train_images.shape[0])\n    print('Validation size: %s' % valid_images.shape[0])\n    print(\"Test size: %s\" % test_images.shape[0])\n\n    global_step = tf.Variable(0, trainable=False)\n    dropout_prob = tf.placeholder(tf.float32)\n    input_dataset = tf.placeholder(tf.float32, [None, IMAGE_SIZE, IMAGE_SIZE, 1],name=\"input\")\n    input_labels = tf.placeholder(tf.float32, [None, NUM_LABELS])\n\n    pred = emotion_cnn(input_dataset)\n    output_pred = tf.nn.softmax(pred,name=\"output\")\n    loss_val = loss(pred, input_labels)\n    train_op = train(loss_val, global_step)\n\n    summary_op = tf.summary.merge_all()\n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        summary_writer = tf.summary.FileWriter(FLAGS.logs_dir, sess.graph_def)\n        saver = tf.train.Saver()\n        ckpt = tf.train.get_checkpoint_state(FLAGS.logs_dir)\n        if ckpt and ckpt.model_checkpoint_path:\n            saver.restore(sess, ckpt.model_checkpoint_path)\n            print(\"Model Restored!\")\n\n        for step in range(MAX_ITERATIONS):\n            batch_image, batch_label = get_next_batch(train_images, train_labels, step)\n            feed_dict = {input_dataset: batch_image, input_labels: batch_label}\n\n            sess.run(train_op, feed_dict=feed_dict)\n            if step % 10 == 0:\n                train_loss, summary_str = sess.run([loss_val, summary_op], feed_dict=feed_dict)\n                summary_writer.add_summary(summary_str, global_step=step)\n                print(\"Training Loss: %f\" % train_loss)\n\n            if step % 100 == 0:\n                valid_loss = sess.run(loss_val, feed_dict={input_dataset: valid_images, input_labels: 
valid_labels})\n                print(\"%s Validation Loss: %f\" % (datetime.now(), valid_loss))\n                saver.save(sess, FLAGS.logs_dir + 'model.ckpt', global_step=step)\n\n\nif __name__ == \"__main__\":\n    tf.app.run()\n\n\n\n\"\"\"\n>>> \nTrain size: 3761\nValidation size: 417\nTest size: 1312\nWARNING:tensorflow:Passing a `GraphDef` to the SummaryWriter is deprecated. Pass a `Graph` object instead, such as `sess.graph`.\nTraining Loss: 1.962236\n2016-11-05 22:39:36.645682 Validation Loss: 1.962719\nTraining Loss: 1.907290\nTraining Loss: 1.849100\nTraining Loss: 1.871116\nTraining Loss: 1.798998\nTraining Loss: 1.885601\nTraining Loss: 1.849380\nTraining Loss: 1.843139\nTraining Loss: 1.933691\nTraining Loss: 1.829839\nTraining Loss: 1.839772\n2016-11-05 22:42:58.951699 Validation Loss: 1.822431\nTraining Loss: 1.772197\nTraining Loss: 1.666473\nTraining Loss: 1.620869\nTraining Loss: 1.592660\nTraining Loss: 1.422701\nTraining Loss: 1.436721\nTraining Loss: 1.348217\nTraining Loss: 1.432023\nTraining Loss: 1.347753\nTraining Loss: 1.299889\n2016-11-05 22:46:55.144483 Validation Loss: 1.335237\nTraining Loss: 1.108747\nTraining Loss: 1.197601\nTraining Loss: 1.245860\nTraining Loss: 1.164120\nTraining Loss: 0.994351\nTraining Loss: 1.072356\nTraining Loss: 1.193485\nTraining Loss: 1.118093\nTraining Loss: 1.021220\nTraining Loss: 1.069752\n2016-11-05 22:50:17.677074 Validation Loss: 1.111559\nTraining Loss: 1.099430\nTraining Loss: 0.966327\nTraining Loss: 0.960916\nTraining Loss: 0.844742\nTraining Loss: 0.979741\nTraining Loss: 0.891897\nTraining Loss: 1.013132\nTraining Loss: 0.936738\nTraining Loss: 0.911577\nTraining Loss: 0.862605\n2016-11-05 22:53:30.999141 Validation Loss: 0.999061\nTraining Loss: 0.800337\nTraining Loss: 0.776097\nTraining Loss: 0.799260\nTraining Loss: 0.919926\nTraining Loss: 0.758807\nTraining Loss: 0.807968\nTraining Loss: 0.856378\nTraining Loss: 0.867762\nTraining Loss: 0.656170\nTraining Loss: 0.688761\n2016-11-05 22:56:53.256991 Validation Loss: 0.931223\nTraining Loss: 0.696454\nTraining Loss: 0.725157\nTraining Loss: 0.674037\nTraining Loss: 0.719200\nTraining Loss: 0.749460\nTraining Loss: 0.741768\nTraining Loss: 0.702719\nTraining Loss: 0.734194\nTraining Loss: 0.669155\nTraining Loss: 0.641528\n2016-11-05 23:00:06.530139 Validation Loss: 0.911489\nTraining Loss: 0.764550\nTraining Loss: 0.646964\nTraining Loss: 0.724712\nTraining Loss: 0.726692\nTraining Loss: 0.656019\nTraining Loss: 0.690552\nTraining Loss: 0.537638\nTraining Loss: 0.680097\nTraining Loss: 0.554115\nTraining Loss: 0.590837\n2016-11-05 23:03:15.351156 Validation Loss: 0.818303\nTraining Loss: 0.656608\nTraining Loss: 0.567394\nTraining Loss: 0.545324\nTraining Loss: 0.611726\nTraining Loss: 0.600910\nTraining Loss: 0.526467\nTraining Loss: 0.584986\nTraining Loss: 0.567015\nTraining Loss: 0.555465\nTraining Loss: 0.630097\n2016-11-05 23:06:26.575298 Validation Loss: 0.824178\nTraining Loss: 0.662920\nTraining Loss: 0.512493\nTraining Loss: 0.475912\nTraining Loss: 0.455112\nTraining Loss: 0.567875\nTraining Loss: 0.582927\nTraining Loss: 0.509225\nTraining Loss: 0.602916\nTraining Loss: 0.521976\nTraining Loss: 0.445122\n2016-11-05 23:09:40.136353 Validation Loss: 0.803449\nTraining Loss: 0.435535\nTraining Loss: 0.459343\nTraining Loss: 0.481706\nTraining Loss: 0.460640\nTraining Loss: 0.554570\nTraining Loss: 0.427962\nTraining Loss: 0.512764\nTraining Loss: 0.531128\nTraining Loss: 0.364465\nTraining Loss: 0.432366\n2016-11-05 23:12:50.769527 Validation Loss: 
0.851074\n>>> \n\"\"\"\n"
  },
  {
    "path": "Chapter04/EMOTION_CNN/Python 2.7/test_your_image.py",
    "content": "from scipy import misc\r\nimport numpy as np\r\nimport matplotlib.cm as cm\r\nimport tensorflow as tf\r\nimport os, sys, inspect\r\nfrom datetime import datetime\r\nfrom matplotlib import pyplot as plt\r\nimport matplotlib.image as mpimg\r\nfrom scipy import misc\r\nimport EmotionDetectorUtils\r\nfrom EmotionDetectorUtils import testResult\r\n\r\nemotion = {0:'anger', 1:'disgust',\\\r\n           2:'fear',3:'happy',\\\r\n           4:'sad',5:'surprise',6:'neutral'}\r\n\r\n\r\ndef rgb2gray(rgb):\r\n    return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\r\n\r\nimg = mpimg.imread('author_img.jpg')     \r\ngray = rgb2gray(img)\r\nplt.imshow(gray, cmap = plt.get_cmap('gray'))\r\nplt.show()\r\n\r\n\r\n\"\"\"\"\r\nlib_path = os.path.realpath(\r\n    os.path.abspath(os.path.join(os.path.split(inspect.getfile(inspect.currentframe()))[0], \"..\")))\r\nif lib_path not in sys.path:\r\n    sys.path.insert(0, lib_path)\r\n\"\"\"\r\n\r\n\r\n\r\nFLAGS = tf.flags.FLAGS\r\ntf.flags.DEFINE_string(\"data_dir\", \"EmotionDetector/\", \"Path to data files\")\r\ntf.flags.DEFINE_string(\"logs_dir\", \"logs/EmotionDetector_logs/\", \"Path to where log files are to be saved\")\r\ntf.flags.DEFINE_string(\"mode\", \"train\", \"mode: train (Default)/ test\")\r\n\r\n\r\n\r\n\r\ntrain_images, train_labels, valid_images, valid_labels, test_images = \\\r\n                  EmotionDetectorUtils.read_data(FLAGS.data_dir)\r\n\r\n\r\nsess = tf.InteractiveSession()\r\n\r\nnew_saver = tf.train.import_meta_graph('logs/EmotionDetector_logs/model.ckpt-1000.meta')\r\nnew_saver.restore(sess, 'logs/EmotionDetector_logs/model.ckpt-1000')\r\ntf.get_default_graph().as_graph_def()\r\n\r\nx = sess.graph.get_tensor_by_name(\"input:0\")\r\ny_conv = sess.graph.get_tensor_by_name(\"output:0\")\r\n\r\nimage_0 = np.resize(gray,(1,48,48,1))\r\ntResult = testResult()\r\nnum_evaluations = 1000\r\n\r\nfor i in range(0,num_evaluations):\r\n\tresult = sess.run(y_conv, feed_dict={x:image_0})\r\n\tlabel = sess.run(tf.argmax(result, 1))\r\n\tlabel = label[0]\r\n\tlabel = int(label)\r\n\ttResult.evaluate(label)\r\ntResult.display_result(num_evaluations)\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "Chapter04/EMOTION_CNN/Python 3.5/EmotionDetectorUtils.py",
    "content": "import pandas as pd\nimport numpy as np\nimport os, sys, inspect\nfrom six.moves import cPickle as pickle\nimport scipy.misc as misc\n\nIMAGE_SIZE = 48\nNUM_LABELS = 7\nVALIDATION_PERCENT = 0.1  # use 10 percent of training images for validation\n\nIMAGE_LOCATION_NORM = IMAGE_SIZE // 2\n\nnp.random.seed(0)\n\nemotion = {0:'anger', 1:'disgust',\\\n           2:'fear',3:'happy',\\\n           4:'sad',5:'surprise',6:'neutral'}\n\nclass testResult:\n\n    def __init__(self):\n        self.anger = 0\n        self.disgust = 0\n        self.fear = 0\n        self.happy = 0\n        self.sad = 0\n        self.surprise = 0\n        self.neutral = 0\n        \n    def evaluate(self,label):\n        \n        if (0 == label):\n            self.anger = self.anger+1\n        if (1 == label):\n            self.disgust = self.disgust+1\n        if (2 == label):\n            self.fear = self.fear+1\n        if (3 == label):\n            self.happy = self.happy+1\n        if (4 == label):\n            self.sad = self.sad+1\n        if (5 == label):\n            self.surprise = self.surprise+1\n        if (6 == label):\n            self.neutral = self.neutral+1\n\n    def display_result(self,evaluations):\n        print(\"anger = \"    + str((self.anger/float(evaluations))*100)    + \"%\")\n        print(\"disgust = \"  + str((self.disgust/float(evaluations))*100)  + \"%\")\n        print(\"fear = \"     + str((self.fear/float(evaluations))*100)     + \"%\")\n        print(\"happy = \"    + str((self.happy/float(evaluations))*100)    + \"%\")\n        print(\"sad = \"      + str((self.sad/float(evaluations))*100)      + \"%\")\n        print(\"surprise = \" + str((self.surprise/float(evaluations))*100) + \"%\")\n        print(\"neutral = \"  + str((self.neutral/float(evaluations))*100)  + \"%\")\n            \n\ndef read_data(data_dir, force=False):\n    def create_onehot_label(x):\n        label = np.zeros((1, NUM_LABELS), dtype=np.float32)\n        label[:, int(x)] = 1\n        return label\n\n    pickle_file = os.path.join(data_dir, \"EmotionDetectorData.pickle\")\n    if force or not os.path.exists(pickle_file):\n        train_filename = os.path.join(data_dir, \"train.csv\")\n        data_frame = pd.read_csv(train_filename)\n        data_frame['Pixels'] = data_frame['Pixels'].apply(lambda x: np.fromstring(x, sep=\" \") / 255.0)\n        data_frame = data_frame.dropna()\n        print(\"Reading train.csv ...\")\n\n        train_images = np.vstack(data_frame['Pixels']).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)\n        print(train_images.shape)\n        train_labels = np.array(list(map(create_onehot_label, data_frame['Emotion'].values))).reshape(-1, NUM_LABELS)\n        print(train_labels.shape)\n\n        permutations = np.random.permutation(train_images.shape[0])\n        train_images = train_images[permutations]\n        train_labels = train_labels[permutations]\n        validation_percent = int(train_images.shape[0] * VALIDATION_PERCENT)\n        validation_images = train_images[:validation_percent]\n        validation_labels = train_labels[:validation_percent]\n        train_images = train_images[validation_percent:]\n        train_labels = train_labels[validation_percent:]\n\n        print(\"Reading test.csv ...\")\n        test_filename = os.path.join(data_dir, \"test.csv\")\n        data_frame = pd.read_csv(test_filename)\n        data_frame['Pixels'] = data_frame['Pixels'].apply(lambda x: np.fromstring(x, sep=\" \") / 255.0)\n        data_frame = data_frame.dropna()\n        
test_images = np.vstack(data_frame['Pixels']).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)\n\n        with open(pickle_file, \"wb\") as file:\n            try:\n                print('Picking ...')\n                save = {\n                    \"train_images\": train_images,\n                    \"train_labels\": train_labels,\n                    \"validation_images\": validation_images,\n                    \"validation_labels\": validation_labels,\n                    \"test_images\": test_images,\n                }\n                pickle.dump(save, file, pickle.HIGHEST_PROTOCOL)\n\n            except:\n                print(\"Unable to pickle file :/\")\n\n    with open(pickle_file, \"rb\") as file:\n        save = pickle.load(file)\n        train_images = save[\"train_images\"]\n        train_labels = save[\"train_labels\"]\n        validation_images = save[\"validation_images\"]\n        validation_labels = save[\"validation_labels\"]\n        test_images = save[\"test_images\"]\n\n    return train_images, train_labels, validation_images, validation_labels, test_images\n"
  },
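  {
    "path": "Chapter04/EMOTION_CNN/Python 3.5/sketch_onehot_split.py",
    "content": "# A minimal, self-contained sketch (illustrative only; this file name and the\n# toy data are assumptions, not part of the chapter's pipeline). It replays the\n# two preprocessing steps of EmotionDetectorUtils.read_data: one-hot encoding\n# of the 7 emotion labels and the shuffled 10 percent validation split.\nimport numpy as np\n\nNUM_LABELS = 7\nVALIDATION_PERCENT = 0.1\n\ndef create_onehot_label(x):\n    # same rule as read_data: a 1-of-7 row vector with a single 1 at index x\n    label = np.zeros((1, NUM_LABELS), dtype=np.float32)\n    label[:, int(x)] = 1\n    return label\n\n# toy stand-ins for the FER pixel arrays and integer emotion labels\nimages = np.random.rand(20, 48, 48, 1)\nraw_labels = np.random.randint(0, NUM_LABELS, 20)\nlabels = np.array(list(map(create_onehot_label, raw_labels))).reshape(-1, NUM_LABELS)\n\n# shuffle once, then carve the first 10 percent off as the validation set\npermutations = np.random.permutation(images.shape[0])\nimages, labels = images[permutations], labels[permutations]\nsplit = int(images.shape[0] * VALIDATION_PERCENT)\nvalidation_images, validation_labels = images[:split], labels[:split]\ntrain_images, train_labels = images[split:], labels[split:]\n\nprint(train_images.shape, validation_images.shape)  # (18, 48, 48, 1) (2, 48, 48, 1)\n"
  },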
  {
    "path": "Chapter04/EMOTION_CNN/Python 3.5/EmotionDetector_1.py",
    "content": "import tensorflow as tf\nimport numpy as np\n#import os, sys, inspect\nfrom datetime import datetime\nimport EmotionDetectorUtils\n\n\"\"\"\nlib_path = os.path.realpath(\n    os.path.abspath(os.path.join(os.path.split(inspect.getfile(inspect.currentframe()))[0], \"..\")))\nif lib_path not in sys.path:\n    sys.path.insert(0, lib_path)\n\"\"\"\n\n\nFLAGS = tf.flags.FLAGS\ntf.flags.DEFINE_string(\"data_dir\", \"EmotionDetector/\", \"Path to data files\")\ntf.flags.DEFINE_string(\"logs_dir\", \"logs/EmotionDetector_logs/\", \"Path to where log files are to be saved\")\ntf.flags.DEFINE_string(\"mode\", \"train\", \"mode: train (Default)/ test\")\n\nBATCH_SIZE = 128\nLEARNING_RATE = 1e-3\nMAX_ITERATIONS = 1001\nREGULARIZATION = 1e-2\nIMAGE_SIZE = 48\nNUM_LABELS = 7\nVALIDATION_PERCENT = 0.1\n\n\ndef add_to_regularization_loss(W, b):\n    tf.add_to_collection(\"losses\", tf.nn.l2_loss(W))\n    tf.add_to_collection(\"losses\", tf.nn.l2_loss(b))\n\ndef weight_variable(shape, stddev=0.02, name=None):\n    initial = tf.truncated_normal(shape, stddev=stddev)\n    if name is None:\n        return tf.Variable(initial)\n    else:\n        return tf.get_variable(name, initializer=initial)\n\n\ndef bias_variable(shape, name=None):\n    initial = tf.constant(0.0, shape=shape)\n    if name is None:\n        return tf.Variable(initial)\n    else:\n        return tf.get_variable(name, initializer=initial)\n\ndef conv2d_basic(x, W, bias):\n    conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding=\"SAME\")\n    return tf.nn.bias_add(conv, bias)\n\ndef max_pool_2x2(x):\n    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], \\\n                          strides=[1, 2, 2, 1], padding=\"SAME\")\n\n\ndef emotion_cnn(dataset):\n    with tf.name_scope(\"conv1\") as scope:\n        #W_conv1 = weight_variable([5, 5, 1, 32])\n        #b_conv1 = bias_variable([32])\n        tf.summary.histogram(\"W_conv1\", weights['wc1'])\n        tf.summary.histogram(\"b_conv1\", biases['bc1'])\n        conv_1 = tf.nn.conv2d(dataset, weights['wc1'],\\\n                              strides=[1, 1, 1, 1], padding=\"SAME\")\n        h_conv1 = tf.nn.bias_add(conv_1, biases['bc1'])\n        #h_conv1 = conv2d_basic(dataset, W_conv1, b_conv1)\n        h_1 = tf.nn.relu(h_conv1)\n        h_pool1 = max_pool_2x2(h_1)\n        add_to_regularization_loss(weights['wc1'], biases['bc1'])\n\n    with tf.name_scope(\"conv2\") as scope:\n        #W_conv2 = weight_variable([3, 3, 32, 64])\n        #b_conv2 = bias_variable([64])\n        tf.summary.histogram(\"W_conv2\", weights['wc2'])\n        tf.summary.histogram(\"b_conv2\", biases['bc2'])\n        conv_2 = tf.nn.conv2d(h_pool1, weights['wc2'], strides=[1, 1, 1, 1], padding=\"SAME\")\n        h_conv2 = tf.nn.bias_add(conv_2, biases['bc2'])\n        #h_conv2 = conv2d_basic(h_pool1, weights['wc2'], biases['bc2'])\n        h_2 = tf.nn.relu(h_conv2)\n        h_pool2 = max_pool_2x2(h_2)\n        add_to_regularization_loss(weights['wc2'], biases['bc2'])\n\n    with tf.name_scope(\"fc_1\") as scope:\n        prob = 0.5\n        image_size = IMAGE_SIZE // 4\n        h_flat = tf.reshape(h_pool2, [-1, image_size * image_size * 64])\n        #W_fc1 = weight_variable([image_size * image_size * 64, 256])\n        #b_fc1 = bias_variable([256])\n        tf.summary.histogram(\"W_fc1\", weights['wf1'])\n        tf.summary.histogram(\"b_fc1\", biases['bf1'])\n        h_fc1 = tf.nn.relu(tf.matmul(h_flat, weights['wf1']) + biases['bf1'])\n        h_fc1_dropout = tf.nn.dropout(h_fc1, prob)\n        \n    with 
tf.name_scope(\"fc_2\") as scope:\n        #W_fc2 = weight_variable([256, NUM_LABELS])\n        #b_fc2 = bias_variable([NUM_LABELS])\n        tf.summary.histogram(\"W_fc2\", weights['wf2'])\n        tf.summary.histogram(\"b_fc2\", biases['bf2'])\n        #pred = tf.matmul(h_fc1, weights['wf2']) + biases['bf2']\n        pred = tf.matmul(h_fc1_dropout, weights['wf2']) + biases['bf2']\n\n    return pred\n\nweights = {\n    'wc1': weight_variable([5, 5, 1, 32], name=\"W_conv1\"),\n    'wc2': weight_variable([3, 3, 32, 64],name=\"W_conv2\"),\n    'wf1': weight_variable([(IMAGE_SIZE // 4) * (IMAGE_SIZE // 4) * 64, 256],name=\"W_fc1\"),\n    'wf2': weight_variable([256, NUM_LABELS], name=\"W_fc2\")\n}\n\nbiases = {\n    'bc1': bias_variable([32], name=\"b_conv1\"),\n    'bc2': bias_variable([64], name=\"b_conv2\"),\n    'bf1': bias_variable([256], name=\"b_fc1\"),\n    'bf2': bias_variable([NUM_LABELS], name=\"b_fc2\")\n}\n\ndef loss(pred, label):\n    cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=label))\n    tf.summary.scalar('Entropy', cross_entropy_loss)\n    reg_losses = tf.add_n(tf.get_collection(\"losses\"))\n    tf.summary.scalar('Reg_loss', reg_losses)\n    return cross_entropy_loss + REGULARIZATION * reg_losses\n\n\ndef train(loss, step):\n    return tf.train.AdamOptimizer(LEARNING_RATE).minimize(loss, global_step=step)\n\n\ndef get_next_batch(images, labels, step):\n    offset = (step * BATCH_SIZE) % (images.shape[0] - BATCH_SIZE)\n    batch_images = images[offset: offset + BATCH_SIZE]\n    batch_labels = labels[offset:offset + BATCH_SIZE]\n    return batch_images, batch_labels\n\n\ndef main(argv=None):\n    train_images, train_labels, valid_images, valid_labels, test_images = EmotionDetectorUtils.read_data(FLAGS.data_dir)\n    print(\"Train size: %s\" % train_images.shape[0])\n    print('Validation size: %s' % valid_images.shape[0])\n    print(\"Test size: %s\" % test_images.shape[0])\n\n    global_step = tf.Variable(0, trainable=False)\n    dropout_prob = tf.placeholder(tf.float32)\n    input_dataset = tf.placeholder(tf.float32, [None, IMAGE_SIZE, IMAGE_SIZE, 1],name=\"input\")\n    input_labels = tf.placeholder(tf.float32, [None, NUM_LABELS])\n\n    pred = emotion_cnn(input_dataset)\n    output_pred = tf.nn.softmax(pred,name=\"output\")\n    loss_val = loss(pred, input_labels)\n    train_op = train(loss_val, global_step)\n\n    summary_op = tf.summary.merge_all()\n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        summary_writer = tf.summary.FileWriter(FLAGS.logs_dir, sess.graph_def)\n        saver = tf.train.Saver()\n        ckpt = tf.train.get_checkpoint_state(FLAGS.logs_dir)\n        if ckpt and ckpt.model_checkpoint_path:\n            saver.restore(sess, ckpt.model_checkpoint_path)\n            print(\"Model Restored!\")\n\n        for step in range(MAX_ITERATIONS):\n            batch_image, batch_label = get_next_batch(train_images, train_labels, step)\n            feed_dict = {input_dataset: batch_image, input_labels: batch_label}\n\n            sess.run(train_op, feed_dict=feed_dict)\n            if step % 10 == 0:\n                train_loss, summary_str = sess.run([loss_val, summary_op], feed_dict=feed_dict)\n                summary_writer.add_summary(summary_str, global_step=step)\n                print(\"Training Loss: %f\" % train_loss)\n\n            if step % 100 == 0:\n                valid_loss = sess.run(loss_val, feed_dict={input_dataset: valid_images, input_labels: 
valid_labels})\n                print(\"%s Validation Loss: %f\" % (datetime.now(), valid_loss))\n                saver.save(sess, FLAGS.logs_dir + 'model.ckpt', global_step=step)\n\n\nif __name__ == \"__main__\":\n    tf.app.run()\n\n\n\n\"\"\"\n>>> \nTrain size: 3761\nValidation size: 417\nTest size: 1312\nTraining Loss: 1.951450\n2017-07-27 14:26:41.689096 Validation Loss: 1.958948\nTraining Loss: 1.899691\nTraining Loss: 1.873583\nTraining Loss: 1.883454\nTraining Loss: 1.794849\nTraining Loss: 1.884183\nTraining Loss: 1.848423\nTraining Loss: 1.838916\nTraining Loss: 1.918565\nTraining Loss: 1.829074\nTraining Loss: 1.864008\n2017-07-27 14:27:00.305351 Validation Loss: 1.790150\nTraining Loss: 1.753058\nTraining Loss: 1.615597\nTraining Loss: 1.571414\nTraining Loss: 1.623350\nTraining Loss: 1.494578\nTraining Loss: 1.502531\nTraining Loss: 1.349338\nTraining Loss: 1.537164\nTraining Loss: 1.364067\nTraining Loss: 1.387331\n2017-07-27 14:27:20.328279 Validation Loss: 1.375231\nTraining Loss: 1.186529\nTraining Loss: 1.386529\nTraining Loss: 1.270537\nTraining Loss: 1.211034\nTraining Loss: 1.096524\nTraining Loss: 1.192567\nTraining Loss: 1.279141\nTraining Loss: 1.199098\nTraining Loss: 1.017902\nTraining Loss: 1.249009\n2017-07-27 14:27:38.844167 Validation Loss: 1.178693\nTraining Loss: 1.222699\nTraining Loss: 0.970940\nTraining Loss: 1.012443\nTraining Loss: 0.931900\nTraining Loss: 1.016142\nTraining Loss: 0.943123\nTraining Loss: 1.099365\nTraining Loss: 1.000534\nTraining Loss: 0.925840\nTraining Loss: 0.895967\n2017-07-27 14:27:57.399234 Validation Loss: 1.103102\nTraining Loss: 0.863209\nTraining Loss: 0.833549\nTraining Loss: 0.812724\nTraining Loss: 1.009514\nTraining Loss: 1.024465\nTraining Loss: 0.961753\nTraining Loss: 0.986352\nTraining Loss: 0.959654\nTraining Loss: 0.774006\nTraining Loss: 0.858462\n2017-07-27 14:28:15.782431 Validation Loss: 1.000128\nTraining Loss: 0.663166\nTraining Loss: 0.785379\nTraining Loss: 0.821995\nTraining Loss: 0.945040\nTraining Loss: 0.909402\nTraining Loss: 0.797702\nTraining Loss: 0.769628\nTraining Loss: 0.750213\nTraining Loss: 0.722645\nTraining Loss: 0.800091\n2017-07-27 14:28:34.632889 Validation Loss: 0.924810\nTraining Loss: 0.878261\nTraining Loss: 0.817574\nTraining Loss: 0.856897\nTraining Loss: 0.752512\nTraining Loss: 0.881165\nTraining Loss: 0.710394\nTraining Loss: 0.721797\nTraining Loss: 0.726897\nTraining Loss: 0.624348\nTraining Loss: 0.730256\n2017-07-27 14:28:53.171239 Validation Loss: 0.901341\nTraining Loss: 0.685925\nTraining Loss: 0.630337\nTraining Loss: 0.656826\nTraining Loss: 0.666020\nTraining Loss: 0.627277\nTraining Loss: 0.698149\nTraining Loss: 0.722851\nTraining Loss: 0.722231\nTraining Loss: 0.701155\nTraining Loss: 0.684319\n2017-07-27 14:29:11.596521 Validation Loss: 0.894154\nTraining Loss: 0.738686\nTraining Loss: 0.580629\nTraining Loss: 0.545667\nTraining Loss: 0.614124\nTraining Loss: 0.640999\nTraining Loss: 0.762669\nTraining Loss: 0.628534\nTraining Loss: 0.690788\nTraining Loss: 0.628837\nTraining Loss: 0.565587\n2017-07-27 14:29:30.075707 Validation Loss: 0.825970\nTraining Loss: 0.551373\nTraining Loss: 0.466755\nTraining Loss: 0.583116\nTraining Loss: 0.644869\nTraining Loss: 0.626141\nTraining Loss: 0.609953\nTraining Loss: 0.622723\nTraining Loss: 0.696944\nTraining Loss: 0.543604\nTraining Loss: 0.436234\n2017-07-27 14:29:48.517299 Validation Loss: 0.873586\n\n>>> \n\"\"\"\n"
  },
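  {
    "path": "Chapter04/EMOTION_CNN/Python 3.5/sketch_next_batch.py",
    "content": "# A tiny sketch (illustrative only; file name and toy arrays are assumptions)\n# of the cyclic batching rule used by EmotionDetector_1.get_next_batch: the\n# offset advances by BATCH_SIZE per step and wraps modulo\n# (num_examples - BATCH_SIZE), so every slice is a full batch.\nimport numpy as np\n\nBATCH_SIZE = 4\n\ndef get_next_batch(images, labels, step):\n    offset = (step * BATCH_SIZE) % (images.shape[0] - BATCH_SIZE)\n    return images[offset: offset + BATCH_SIZE], labels[offset: offset + BATCH_SIZE]\n\nimages = np.arange(10).reshape(10, 1)\nlabels = np.arange(10)\n\nfor step in range(4):\n    batch, _ = get_next_batch(images, labels, step)\n    print(step, batch.ravel())\n# step 0 -> [0 1 2 3], step 1 -> [4 5 6 7],\n# step 2 -> [2 3 4 5] (offset 8 wraps to 8 % 6 = 2), step 3 -> [0 1 2 3]\n"
  },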
  {
    "path": "Chapter04/EMOTION_CNN/Python 3.5/__init__.py",
    "content": ""
  },
  {
    "path": "Chapter04/EMOTION_CNN/Python 3.5/test_your_image.py",
    "content": "from scipy import misc\r\nimport numpy as np\r\nimport matplotlib.cm as cm\r\nimport tensorflow as tf\r\nimport os, sys, inspect\r\nfrom datetime import datetime\r\nfrom matplotlib import pyplot as plt\r\nimport matplotlib.image as mpimg\r\nfrom scipy import misc\r\nimport EmotionDetectorUtils\r\nfrom EmotionDetectorUtils import testResult\r\n\r\nemotion = {0:'anger', 1:'disgust',\\\r\n           2:'fear',3:'happy',\\\r\n           4:'sad',5:'surprise',6:'neutral'}\r\n\r\n\r\ndef rgb2gray(rgb):\r\n    return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\r\n\r\nimg = mpimg.imread('author_img.jpg')     \r\ngray = rgb2gray(img)\r\nplt.imshow(gray, cmap = plt.get_cmap('gray'))\r\nplt.show()\r\n\r\n\r\n\"\"\"\"\r\nlib_path = os.path.realpath(\r\n    os.path.abspath(os.path.join(os.path.split(inspect.getfile(inspect.currentframe()))[0], \"..\")))\r\nif lib_path not in sys.path:\r\n    sys.path.insert(0, lib_path)\r\n\"\"\"\r\n\r\n\r\n\r\nFLAGS = tf.flags.FLAGS\r\ntf.flags.DEFINE_string(\"data_dir\", \"EmotionDetector/\", \"Path to data files\")\r\ntf.flags.DEFINE_string(\"logs_dir\", \"logs/EmotionDetector_logs/\", \"Path to where log files are to be saved\")\r\ntf.flags.DEFINE_string(\"mode\", \"train\", \"mode: train (Default)/ test\")\r\n\r\n\r\n\r\n\r\ntrain_images, train_labels, valid_images, valid_labels, test_images = \\\r\n                  EmotionDetectorUtils.read_data(FLAGS.data_dir)\r\n\r\n\r\nsess = tf.InteractiveSession()\r\n\r\nnew_saver = tf.train.import_meta_graph('logs/EmotionDetector_logs/model.ckpt-1000.meta')\r\nnew_saver.restore(sess, 'logs/EmotionDetector_logs/model.ckpt-1000')\r\ntf.get_default_graph().as_graph_def()\r\n\r\nx = sess.graph.get_tensor_by_name(\"input:0\")\r\ny_conv = sess.graph.get_tensor_by_name(\"output:0\")\r\n\r\nimage_0 = np.resize(gray,(1,48,48,1))\r\ntResult = testResult()\r\nnum_evaluations = 1000\r\n\r\nfor i in range(0,num_evaluations):\r\n\tresult = sess.run(y_conv, feed_dict={x:image_0})\r\n\tlabel = sess.run(tf.argmax(result, 1))\r\n\tlabel = label[0]\r\n\tlabel = int(label)\r\n\ttResult.evaluate(label)\r\ntResult.display_result(num_evaluations)\r\n\r\n\r\n\r\n\r\n"
  },
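  {
    "path": "Chapter04/EMOTION_CNN/Python 3.5/sketch_rgb2gray.py",
    "content": "# A small sketch (illustrative only; the 2x2 toy image is an assumption) of the\n# luminance conversion used in test_your_image.py: the ITU-R BT.601 weights\n# 0.299/0.587/0.114 collapse the three colour channels into one grayscale plane.\nimport numpy as np\n\ndef rgb2gray(rgb):\n    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])\n\n# toy 2x2 RGB image: pure red, green, blue and white pixels\nimg = np.array([[[255, 0, 0], [0, 255, 0]],\n                [[0, 0, 255], [255, 255, 255]]], dtype=np.float64)\ngray = rgb2gray(img)\nprint(gray)\n# [[  76.245  149.685]\n#  [  29.07   255.   ]]\n\n# the grayscale frame is then reshaped to the 1x48x48x1 tensor the CNN expects\nimage_0 = np.resize(gray, (1, 48, 48, 1))\nprint(image_0.shape)  # (1, 48, 48, 1)\n"
  },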
  {
    "path": "Chapter04/MNIST_CNN/Python 2.7/mnist_cnn_1.py",
    "content": "import tensorflow as tf\r\nimport numpy as np\r\n#import mnist_data \r\n\r\nbatch_size = 128\r\ntest_size = 256\r\nimg_size = 28\r\nnum_classes = 10\r\n\r\ndef init_weights(shape):\r\n    return tf.Variable(tf.random_normal(shape, stddev=0.01))\r\n\r\n\r\ndef model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden):\r\n\r\n    conv1 = tf.nn.conv2d(X, w,\\\r\n                         strides=[1, 1, 1, 1],\\\r\n                         padding='SAME')\r\n\r\n    conv1_a = tf.nn.relu(conv1)\r\n    conv1 = tf.nn.max_pool(conv1_a, ksize=[1, 2, 2, 1]\\\r\n                        ,strides=[1, 2, 2, 1],\\\r\n                        padding='SAME')\r\n    conv1 = tf.nn.dropout(conv1, p_keep_conv)\r\n\r\n    conv2 = tf.nn.conv2d(conv1, w2,\\\r\n                         strides=[1, 1, 1, 1],\\\r\n                         padding='SAME')\r\n    conv2_a = tf.nn.relu(conv2)\r\n    conv2 = tf.nn.max_pool(conv2_a, ksize=[1, 2, 2, 1],\\\r\n                        strides=[1, 2, 2, 1],\\\r\n                        padding='SAME')\r\n    conv2 = tf.nn.dropout(conv2, p_keep_conv)\r\n\r\n    conv3=tf.nn.conv2d(conv2, w3,\\\r\n                       strides=[1, 1, 1, 1]\\\r\n                       ,padding='SAME')\r\n\r\n    conv3 = tf.nn.relu(conv3)\r\n\r\n\r\n    FC_layer = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1],\\\r\n                        strides=[1, 2, 2, 1],\\\r\n                        padding='SAME')\r\n    \r\n    FC_layer = tf.reshape(FC_layer, [-1, w4.get_shape().as_list()[0]])    \r\n    FC_layer = tf.nn.dropout(FC_layer, p_keep_conv)\r\n\r\n\r\n    output_layer = tf.nn.relu(tf.matmul(FC_layer, w4))\r\n    output_layer = tf.nn.dropout(output_layer, p_keep_hidden)\r\n\r\n    result = tf.matmul(output_layer, w_o)\r\n    return result\r\n\r\n\r\n#mnist = mnist_data.read_data_sets(\"ata/\")\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\nmnist = input_data.read_data_sets(\"MNIST_data\", one_hot=True)\r\n\r\ntrX, trY, teX, teY = mnist.train.images,\\\r\n                     mnist.train.labels, \\\r\n                     mnist.test.images, \\\r\n                     mnist.test.labels\r\n\r\ntrX = trX.reshape(-1, img_size, img_size, 1)  # 28x28x1 input img\r\nteX = teX.reshape(-1, img_size, img_size, 1)  # 28x28x1 input img\r\n\r\nX = tf.placeholder(\"float\", [None, img_size, img_size, 1])\r\nY = tf.placeholder(\"float\", [None, num_classes])\r\n\r\nw = init_weights([3, 3, 1, 32])       # 3x3x1 conv, 32 outputs\r\nw2 = init_weights([3, 3, 32, 64])     # 3x3x32 conv, 64 outputs\r\nw3 = init_weights([3, 3, 64, 128])    # 3x3x32 conv, 128 outputs\r\nw4 = init_weights([128 * 4 * 4, 625]) # FC 128 * 4 * 4 inputs, 625 outputs\r\nw_o = init_weights([625, num_classes])         # FC 625 inputs, 10 outputs (labels)\r\n\r\np_keep_conv = tf.placeholder(\"float\")\r\np_keep_hidden = tf.placeholder(\"float\")\r\npy_x = model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden)\r\n\r\nY_ = tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y)\r\ncost = tf.reduce_mean(Y_)\r\noptimizer  = tf.train.\\\r\n           RMSPropOptimizer(0.001, 0.9).minimize(cost)\r\npredict_op = tf.argmax(py_x, 1)\r\n\r\nwith tf.Session() as sess:\r\n    #tf.initialize_all_variables().run()\r\n    tf.global_variables_initializer().run()\r\n    for i in range(100):\r\n        training_batch = \\\r\n                       zip(range(0, len(trX), \\\r\n                                 batch_size),\r\n                             range(batch_size, \\\r\n                                   len(trX)+1, \\\r\n  
                                 batch_size))\r\n        for start, end in training_batch:\r\n            sess.run(optimizer , feed_dict={X: trX[start:end],\\\r\n                                          Y: trY[start:end],\\\r\n                                          p_keep_conv: 0.8,\\\r\n                                          p_keep_hidden: 0.5})\r\n\r\n        test_indices = np.arange(len(teX)) # Get A Test Batch\r\n        np.random.shuffle(test_indices)\r\n        test_indices = test_indices[0:test_size]\r\n\r\n        print(i, np.mean(np.argmax(teY[test_indices], axis=1) ==\\\r\n                         sess.run\\\r\n                         (predict_op,\\\r\n                          feed_dict={X: teX[test_indices],\\\r\n                                     Y: teY[test_indices], \\\r\n                                     p_keep_conv: 1.0,\\\r\n                                     p_keep_hidden: 1.0})))\r\n\r\n\"\"\"\r\nSuccessfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.\r\nSuccessfully extracted to train-images-idx3-ubyte.mnist 9912422 bytes.\r\nLoading ata/train-images-idx3-ubyte.mnist\r\nSuccessfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.\r\nSuccessfully extracted to train-labels-idx1-ubyte.mnist 28881 bytes.\r\nLoading ata/train-labels-idx1-ubyte.mnist\r\nSuccessfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.\r\nSuccessfully extracted to t10k-images-idx3-ubyte.mnist 1648877 bytes.\r\nLoading ata/t10k-images-idx3-ubyte.mnist\r\nSuccessfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.\r\nSuccessfully extracted to t10k-labels-idx1-ubyte.mnist 4542 bytes.\r\nLoading ata/t10k-labels-idx1-ubyte.mnist\r\n(0, 0.95703125)\r\n(1, 0.98046875)\r\n(2, 0.9921875)\r\n(3, 0.99609375)\r\n(4, 0.99609375)\r\n(5, 0.98828125)\r\n(6, 0.99609375)\r\n(7, 0.99609375)\r\n(8, 0.98828125)\r\n(9, 0.98046875)\r\n(10, 0.99609375)\r\n(11, 1.0)\r\n(12, 0.9921875)\r\n(13, 0.98046875)\r\n(14, 0.98828125)\r\n(15, 0.9921875)\r\n(16, 0.9921875)\r\n(17, 0.9921875)\r\n(18, 0.9921875)\r\n(19, 1.0)\r\n(20, 0.98828125)\r\n(21, 0.99609375)\r\n(22, 0.98828125)\r\n(23, 1.0)\r\n(24, 0.9921875)\r\n(25, 0.99609375)\r\n(26, 0.99609375)\r\n(27, 0.98828125)\r\n(28, 0.98828125)\r\n(29, 0.9921875)\r\n(30, 0.99609375)\r\n(31, 0.9921875)\r\n(32, 0.99609375)\r\n(33, 1.0)\r\n(34, 0.99609375)\r\n(35, 1.0)\r\n(36, 0.9921875)\r\n(37, 1.0)\r\n(38, 0.99609375)\r\n(39, 0.99609375)\r\n(40, 0.99609375)\r\n(41, 0.9921875)\r\n(42, 0.98828125)\r\n(43, 0.9921875)\r\n(44, 0.9921875)\r\n(45, 0.9921875)\r\n(46, 0.9921875)\r\n(47, 0.98828125)\r\n(48, 0.99609375)\r\n(49, 0.99609375)\r\n(50, 1.0)\r\n(51, 0.98046875)\r\n(52, 0.99609375)\r\n(53, 0.98828125)\r\n(54, 0.99609375)\r\n(55, 0.9921875)\r\n(56, 0.99609375)\r\n(57, 0.9921875)\r\n(58, 0.98828125)\r\n(59, 0.99609375)\r\n(60, 0.99609375)\r\n(61, 0.98828125)\r\n(62, 1.0)\r\n(63, 0.98828125)\r\n(64, 0.98828125)\r\n(65, 0.98828125)\r\n(66, 1.0)\r\n(67, 0.99609375)\r\n(68, 1.0)\r\n(69, 1.0)\r\n(70, 0.9921875)\r\n(71, 0.99609375)\r\n(72, 0.984375)\r\n(73, 0.9921875)\r\n(74, 0.98828125)\r\n(75, 0.99609375)\r\n(76, 1.0)\r\n(77, 0.9921875)\r\n(78, 0.984375)\r\n(79, 1.0)\r\n(80, 0.9921875)\r\n(81, 0.9921875)\r\n(82, 0.99609375)\r\n(83, 1.0)\r\n(84, 0.98828125)\r\n(85, 0.98828125)\r\n(86, 0.99609375)\r\n(87, 1.0)\r\n(88, 0.99609375)\r\n\"\"\"\r\n"
  },
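  {
    "path": "Chapter04/MNIST_CNN/Python 2.7/sketch_pool_shapes.py",
    "content": "# A short sketch (illustrative only; file name is an assumption) of why the\r\n# fully connected weight w4 in mnist_cnn_1.py has 128 * 4 * 4 inputs: with\r\n# padding='SAME' and stride 2, each max-pool maps a side of n pixels to\r\n# ceil(n / 2), and the model applies three such pools to a 28x28 image.\r\nsize = 28\r\nfor layer in range(3):\r\n    size = (size + 1) // 2  # integer form of ceil(size / 2)\r\n    print(\"after pool %d: %dx%d\" % (layer + 1, size, size))\r\n# 28 -> 14 -> 7 -> 4; the last conv stack has 128 channels, hence:\r\nprint(128 * 4 * 4)  # 2048 inputs to the first fully connected layer\r\n"
  },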
  {
    "path": "Chapter04/MNIST_CNN/Python 3.5/mnist_cnn_1.py",
    "content": "import tensorflow as tf\r\nimport numpy as np\r\n#import mnist_data \r\n\r\nbatch_size = 128\r\ntest_size = 256\r\nimg_size = 28\r\nnum_classes = 10\r\n\r\ndef init_weights(shape):\r\n    return tf.Variable(tf.random_normal(shape, stddev=0.01))\r\n\r\n\r\ndef model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden):\r\n\r\n    conv1 = tf.nn.conv2d(X, w,\\\r\n                         strides=[1, 1, 1, 1],\\\r\n                         padding='SAME')\r\n\r\n    conv1_a = tf.nn.relu(conv1)\r\n    conv1 = tf.nn.max_pool(conv1_a, ksize=[1, 2, 2, 1]\\\r\n                        ,strides=[1, 2, 2, 1],\\\r\n                        padding='SAME')\r\n    conv1 = tf.nn.dropout(conv1, p_keep_conv)\r\n\r\n    conv2 = tf.nn.conv2d(conv1, w2,\\\r\n                         strides=[1, 1, 1, 1],\\\r\n                         padding='SAME')\r\n    conv2_a = tf.nn.relu(conv2)\r\n    conv2 = tf.nn.max_pool(conv2_a, ksize=[1, 2, 2, 1],\\\r\n                        strides=[1, 2, 2, 1],\\\r\n                        padding='SAME')\r\n    conv2 = tf.nn.dropout(conv2, p_keep_conv)\r\n\r\n    conv3=tf.nn.conv2d(conv2, w3,\\\r\n                       strides=[1, 1, 1, 1]\\\r\n                       ,padding='SAME')\r\n\r\n    conv3 = tf.nn.relu(conv3)\r\n\r\n\r\n    FC_layer = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1],\\\r\n                        strides=[1, 2, 2, 1],\\\r\n                        padding='SAME')\r\n    \r\n    FC_layer = tf.reshape(FC_layer, [-1, w4.get_shape().as_list()[0]])    \r\n    FC_layer = tf.nn.dropout(FC_layer, p_keep_conv)\r\n\r\n\r\n    output_layer = tf.nn.relu(tf.matmul(FC_layer, w4))\r\n    output_layer = tf.nn.dropout(output_layer, p_keep_hidden)\r\n\r\n    result = tf.matmul(output_layer, w_o)\r\n    return result\r\n\r\n\r\n#mnist = mnist_data.read_data_sets(\"ata/\")\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\nmnist = input_data.read_data_sets(\"MNIST_data\", one_hot=True)\r\n\r\ntrX, trY, teX, teY = mnist.train.images,\\\r\n                     mnist.train.labels, \\\r\n                     mnist.test.images, \\\r\n                     mnist.test.labels\r\n\r\ntrX = trX.reshape(-1, img_size, img_size, 1)  # 28x28x1 input img\r\nteX = teX.reshape(-1, img_size, img_size, 1)  # 28x28x1 input img\r\n\r\nX = tf.placeholder(\"float\", [None, img_size, img_size, 1])\r\nY = tf.placeholder(\"float\", [None, num_classes])\r\n\r\nw = init_weights([3, 3, 1, 32])       # 3x3x1 conv, 32 outputs\r\nw2 = init_weights([3, 3, 32, 64])     # 3x3x32 conv, 64 outputs\r\nw3 = init_weights([3, 3, 64, 128])    # 3x3x32 conv, 128 outputs\r\nw4 = init_weights([128 * 4 * 4, 625]) # FC 128 * 4 * 4 inputs, 625 outputs\r\nw_o = init_weights([625, num_classes])         # FC 625 inputs, 10 outputs (labels)\r\n\r\np_keep_conv = tf.placeholder(\"float\")\r\np_keep_hidden = tf.placeholder(\"float\")\r\npy_x = model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden)\r\n\r\nY_ = tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y)\r\ncost = tf.reduce_mean(Y_)\r\noptimizer  = tf.train.\\\r\n           RMSPropOptimizer(0.001, 0.9).minimize(cost)\r\npredict_op = tf.argmax(py_x, 1)\r\n\r\nwith tf.Session() as sess:\r\n    #tf.initialize_all_variables().run()\r\n    tf.global_variables_initializer().run()\r\n    for i in range(100):\r\n        training_batch = \\\r\n                       zip(range(0, len(trX), \\\r\n                                 batch_size),\r\n                             range(batch_size, \\\r\n                                   len(trX)+1, \\\r\n  
                                 batch_size))\r\n        for start, end in training_batch:\r\n            sess.run(optimizer, feed_dict={X: trX[start:end],\\\r\n                                          Y: trY[start:end],\\\r\n                                          p_keep_conv: 0.8,\\\r\n                                          p_keep_hidden: 0.5})\r\n\r\n        test_indices = np.arange(len(teX))# Get A Test Batch\r\n        np.random.shuffle(test_indices)\r\n        test_indices = test_indices[0:test_size]\r\n\r\n        print(i, np.mean(np.argmax(teY[test_indices], axis=1) ==\\\r\n                         sess.run\\\r\n                         (predict_op,\\\r\n                          feed_dict={X: teX[test_indices],\\\r\n                                     Y: teY[test_indices], \\\r\n                                     p_keep_conv: 1.0,\\\r\n                                     p_keep_hidden: 1.0})))\r\n\r\n\"\"\"\r\nSuccessfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.\r\nSuccessfully extracted to train-images-idx3-ubyte.mnist 9912422 bytes.\r\nLoading ata/train-images-idx3-ubyte.mnist\r\nSuccessfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.\r\nSuccessfully extracted to train-labels-idx1-ubyte.mnist 28881 bytes.\r\nLoading ata/train-labels-idx1-ubyte.mnist\r\nSuccessfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.\r\nSuccessfully extracted to t10k-images-idx3-ubyte.mnist 1648877 bytes.\r\nLoading ata/t10k-images-idx3-ubyte.mnist\r\nSuccessfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.\r\nSuccessfully extracted to t10k-labels-idx1-ubyte.mnist 4542 bytes.\r\nLoading ata/t10k-labels-idx1-ubyte.mnist\r\n(0, 0.95703125)\r\n(1, 0.98046875)\r\n(2, 0.9921875)\r\n(3, 0.99609375)\r\n(4, 0.99609375)\r\n(5, 0.98828125)\r\n(6, 0.99609375)\r\n(7, 0.99609375)\r\n(8, 0.98828125)\r\n(9, 0.98046875)\r\n(10, 0.99609375)\r\n(11, 1.0)\r\n(12, 0.9921875)\r\n(13, 0.98046875)\r\n(14, 0.98828125)\r\n(15, 0.9921875)\r\n(16, 0.9921875)\r\n(17, 0.9921875)\r\n(18, 0.9921875)\r\n(19, 1.0)\r\n(20, 0.98828125)\r\n(21, 0.99609375)\r\n(22, 0.98828125)\r\n(23, 1.0)\r\n(24, 0.9921875)\r\n(25, 0.99609375)\r\n(26, 0.99609375)\r\n(27, 0.98828125)\r\n(28, 0.98828125)\r\n(29, 0.9921875)\r\n(30, 0.99609375)\r\n(31, 0.9921875)\r\n(32, 0.99609375)\r\n(33, 1.0)\r\n(34, 0.99609375)\r\n(35, 1.0)\r\n(36, 0.9921875)\r\n(37, 1.0)\r\n(38, 0.99609375)\r\n(39, 0.99609375)\r\n(40, 0.99609375)\r\n(41, 0.9921875)\r\n(42, 0.98828125)\r\n(43, 0.9921875)\r\n(44, 0.9921875)\r\n(45, 0.9921875)\r\n(46, 0.9921875)\r\n(47, 0.98828125)\r\n(48, 0.99609375)\r\n(49, 0.99609375)\r\n(50, 1.0)\r\n(51, 0.98046875)\r\n(52, 0.99609375)\r\n(53, 0.98828125)\r\n(54, 0.99609375)\r\n(55, 0.9921875)\r\n(56, 0.99609375)\r\n(57, 0.9921875)\r\n(58, 0.98828125)\r\n(59, 0.99609375)\r\n(60, 0.99609375)\r\n(61, 0.98828125)\r\n(62, 1.0)\r\n(63, 0.98828125)\r\n(64, 0.98828125)\r\n(65, 0.98828125)\r\n(66, 1.0)\r\n(67, 0.99609375)\r\n(68, 1.0)\r\n(69, 1.0)\r\n(70, 0.9921875)\r\n(71, 0.99609375)\r\n(72, 0.984375)\r\n(73, 0.9921875)\r\n(74, 0.98828125)\r\n(75, 0.99609375)\r\n(76, 1.0)\r\n(77, 0.9921875)\r\n(78, 0.984375)\r\n(79, 1.0)\r\n(80, 0.9921875)\r\n(81, 0.9921875)\r\n(82, 0.99609375)\r\n(83, 1.0)\r\n(84, 0.98828125)\r\n(85, 0.98828125)\r\n(86, 0.99609375)\r\n(87, 1.0)\r\n(88, 0.99609375)\r\n\"\"\"\r\n"
  },
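  {
    "path": "Chapter04/MNIST_CNN/Python 3.5/sketch_softmax_xent.py",
    "content": "# A NumPy sketch (illustrative only; the toy logits are assumptions) of the\r\n# per-example quantity tf.nn.softmax_cross_entropy_with_logits computes in\r\n# mnist_cnn_1.py: -sum(labels * log softmax(logits)), evaluated in a\r\n# numerically stable way by shifting the logits by their maximum.\r\nimport numpy as np\r\n\r\ndef softmax_xent_with_logits(logits, labels):\r\n    z = logits - logits.max(axis=1, keepdims=True)\r\n    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))\r\n    return -(labels * log_softmax).sum(axis=1)\r\n\r\nlogits = np.array([[2.0, 1.0, 0.1]])\r\nlabels = np.array([[1.0, 0.0, 0.0]])  # one-hot target: class 0\r\nprint(softmax_xent_with_logits(logits, labels))  # ~[0.417]\r\n# mnist_cnn_1.py then averages these per-example losses with tf.reduce_mean\r\n"
  },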
  {
    "path": "Chapter05/Python 2.7/Convlutional_AutoEncoder.py",
    "content": "import matplotlib.pyplot as plt\r\nimport numpy as np\r\nimport math\r\nimport tensorflow as tf\r\nimport tensorflow.examples.tutorials.mnist.input_data as input_data\r\n\r\nfrom tensorflow.python.framework import ops\r\nimport warnings\r\nimport random\r\nimport os\r\n\r\nwarnings.filterwarnings(\"ignore\")\r\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\r\nops.reset_default_graph()\r\n\r\n# LOAD PACKAGES\r\nmnist = input_data.read_data_sets(\"data/\", one_hot=True)\r\ntrainimgs = mnist.train.images\r\ntrainlabels = mnist.train.labels\r\ntestimgs = mnist.test.images\r\ntestlabels = mnist.test.labels\r\nntrain = trainimgs.shape[0]\r\nntest = testimgs.shape[0]\r\ndim = trainimgs.shape[1]\r\nnout = trainlabels.shape[1]\r\n\r\nprint(\"Packages loaded\")\r\n# WEIGHT AND BIASES\r\nn1 = 16\r\nn2 = 32\r\nn3 = 64\r\nksize = 5\r\n\r\nweights = {\r\n    'ce1': tf.Variable(tf.random_normal([ksize, ksize, 1, n1], stddev=0.1)),\r\n    'ce2': tf.Variable(tf.random_normal([ksize, ksize, n1, n2], stddev=0.1)),\r\n    'ce3': tf.Variable(tf.random_normal([ksize, ksize, n2, n3], stddev=0.1)),\r\n    'cd3': tf.Variable(tf.random_normal([ksize, ksize, n2, n3], stddev=0.1)),\r\n    'cd2': tf.Variable(tf.random_normal([ksize, ksize, n1, n2], stddev=0.1)),\r\n    'cd1': tf.Variable(tf.random_normal([ksize, ksize, 1, n1], stddev=0.1))\r\n}\r\nbiases = {\r\n    'be1': tf.Variable(tf.random_normal([n1], stddev=0.1)),\r\n    'be2': tf.Variable(tf.random_normal([n2], stddev=0.1)),\r\n    'be3': tf.Variable(tf.random_normal([n3], stddev=0.1)),\r\n    'bd3': tf.Variable(tf.random_normal([n2], stddev=0.1)),\r\n    'bd2': tf.Variable(tf.random_normal([n1], stddev=0.1)),\r\n    'bd1': tf.Variable(tf.random_normal([1], stddev=0.1))\r\n}\r\n\r\n\r\ndef cae(_X, _W, _b, _keepprob):\r\n    _input_r = tf.reshape(_X, shape=[-1, 28, 28, 1])\r\n    # Encoder\r\n    _ce1 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_input_r, _W['ce1'], strides=[1, 2, 2, 1], padding='SAME'), _b['be1']))\r\n    _ce1 = tf.nn.dropout(_ce1, _keepprob)\r\n    _ce2 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_ce1, _W['ce2'], strides=[1, 2, 2, 1], padding='SAME'), _b['be2']))\r\n    _ce2 = tf.nn.dropout(_ce2, _keepprob)\r\n    _ce3 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_ce2, _W['ce3'], strides=[1, 2, 2, 1], padding='SAME'), _b['be3']))\r\n    _ce3 = tf.nn.dropout(_ce3, _keepprob)\r\n    # Decoder\r\n    _cd3 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_ce3, _W['cd3'], tf.stack([tf.shape(_X)[0], 7, 7, n2]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd3']))\r\n    _cd3 = tf.nn.dropout(_cd3, _keepprob)\r\n    _cd2 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_cd3, _W['cd2'], tf.stack([tf.shape(_X)[0], 14, 14, n1]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd2']))\r\n    _cd2 = tf.nn.dropout(_cd2, _keepprob)\r\n    _cd1 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_cd2, _W['cd1'], tf.stack([tf.shape(_X)[0], 28, 28, 1]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd1']))\r\n    _cd1 = tf.nn.dropout(_cd1, _keepprob)\r\n    _out = _cd1\r\n    return _out\r\n\r\nprint(\"Network ready\")\r\nx = tf.placeholder(tf.float32, [None, dim])\r\ny = tf.placeholder(tf.float32, [None, dim])\r\nkeepprob = tf.placeholder(tf.float32)\r\npred = cae(x, weights, biases, keepprob)  # ['out']\r\ncost = tf.reduce_sum(tf.square(cae(x, weights, biases, keepprob)- tf.reshape(y, shape=[-1, 28, 28, 1])))\r\n\r\nlearning_rate = 0.001\r\noptm = tf.train.AdamOptimizer(learning_rate).minimize(cost)\r\ninit = tf.global_variables_initializer()\r\n\r\nprint(\"Functions ready\")\r\nsess = 
tf.Session()\r\nsess.run(init)\r\n\r\n# mean_img = np.mean(mnist.train.images, axis=0)\r\nmean_img = np.zeros((784))\r\n# Fit all training data\r\nbatch_size = 128\r\nn_epochs = 50\r\nprint(\"Strart training..\")\r\n\r\nfor epoch_i in range(n_epochs):\r\n    for batch_i in range(mnist.train.num_examples // batch_size):\r\n        batch_xs, _ = mnist.train.next_batch(batch_size)\r\n        trainbatch = np.array([img - mean_img for img in batch_xs])\r\n        trainbatch_noisy = trainbatch + 0.3 * np.random.randn(\r\n        trainbatch.shape[0], 784)\r\n        sess.run(optm, feed_dict={x: trainbatch_noisy, y: trainbatch, keepprob: 0.7})\r\n        print(\"[%02d/%02d] cost: %.4f\" % (epoch_i, n_epochs, sess.run(cost, feed_dict={x: trainbatch_noisy, y: trainbatch, keepprob: 1.})))\r\n\r\n    if (epoch_i % 10) == 0:\r\n        n_examples = 5\r\n        test_xs, _ = mnist.test.next_batch(n_examples)\r\n        test_xs_noisy = test_xs + 0.3 * np.random.randn(\r\n        test_xs.shape[0], 784)\r\n        recon = sess.run(pred, feed_dict={x: test_xs_noisy,keepprob: 1.})\r\n        fig, axs = plt.subplots(2, n_examples, figsize=(15, 4))\r\n\r\n        for example_i in range(n_examples):\r\n             axs[0][example_i].matshow(np.reshape(test_xs_noisy[example_i, :], (28, 28)), cmap=plt.get_cmap('gray'))\r\n             axs[1][example_i].matshow(np.reshape(np.reshape(recon[example_i, ...], (784,))+ mean_img, (28, 28)), cmap=plt.get_cmap('gray'))\r\n             plt.show()\r\n"
  },
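  {
    "path": "Chapter05/Python 2.7/sketch_conv2d_transpose_shape.py",
    "content": "# A sketch (illustrative only; file name and toy shapes are assumptions) of the\r\n# shape bookkeeping behind cae() in Convlutional_AutoEncoder.py. With stride 2\r\n# and padding='SAME' the encoder halves each side via ceil(n/2): 28 -> 14 -> 7\r\n# -> 4. Because ceil() is not invertible (both 7 and 8 halve to 4), a strided\r\n# conv2d_transpose must be told its output shape explicitly, which is why the\r\n# decoder passes tf.stack([tf.shape(_X)[0], 7, 7, n2]) and so on.\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\nx = tf.placeholder(tf.float32, [None, 4, 4, 64])\r\n# transpose-conv filters are [height, width, out_channels, in_channels]\r\nw = tf.Variable(tf.random_normal([5, 5, 32, 64], stddev=0.1))\r\ny = tf.nn.conv2d_transpose(x, w, tf.stack([tf.shape(x)[0], 7, 7, 32]),\r\n                           strides=[1, 2, 2, 1], padding='SAME')\r\n\r\nwith tf.Session() as sess:\r\n    sess.run(tf.global_variables_initializer())\r\n    out = sess.run(y, feed_dict={x: np.zeros((2, 4, 4, 64), np.float32)})\r\n    print(out.shape)  # (2, 7, 7, 32)\r\n"
  },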
  {
    "path": "Chapter05/Python 2.7/autoencoder_1.py",
    "content": "import tensorflow as tf\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\n\r\n# Import MINST data\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\r\n\r\n#mnist = mnist_data.read_data_sets(\"data/\")\r\n\r\n# Parameters\r\nlearning_rate = 0.01\r\ntraining_epochs = 10\r\nbatch_size = 256\r\ndisplay_step = 1\r\nexamples_to_show = 10\r\n\r\n# Network Parameters\r\nn_hidden_1 = 256 # 1st layer num features\r\nn_hidden_2 = 128 # 2nd layer num features\r\nn_input = 784 # MNIST data input (img shape: 28*28)\r\n\r\n# tf Graph input (only pictures)\r\nX = tf.placeholder(\"float\", [None, n_input])\r\n\r\nweights = {\r\n    'encoder_h1': tf.Variable\\\r\n    (tf.random_normal([n_input, n_hidden_1])),\r\n    'encoder_h2': tf.Variable\\\r\n    (tf.random_normal([n_hidden_1, n_hidden_2])),\r\n    'decoder_h1': tf.Variable\\\r\n    (tf.random_normal([n_hidden_2, n_hidden_1])),\r\n    'decoder_h2': tf.Variable\\\r\n    (tf.random_normal([n_hidden_1, n_input])),\r\n}\r\nbiases = {\r\n    'encoder_b1': tf.Variable\\\r\n    (tf.random_normal([n_hidden_1])),\r\n    'encoder_b2': tf.Variable\\\r\n    (tf.random_normal([n_hidden_2])),\r\n    'decoder_b1': tf.Variable\\\r\n    (tf.random_normal([n_hidden_1])),\r\n    'decoder_b2': tf.Variable\\\r\n    (tf.random_normal([n_input])),\r\n}\r\n\r\n\r\n\r\n# Encoder Hidden layer with sigmoid activation #1\r\nencoder_in = tf.nn.sigmoid(tf.add\\\r\n                           (tf.matmul(X, \\\r\n                                      weights['encoder_h1']),\\\r\n                            biases['encoder_b1']))\r\n\r\n# Decoder Hidden layer with sigmoid activation #2\r\nencoder_out = tf.nn.sigmoid(tf.add\\\r\n                            (tf.matmul(encoder_in,\\\r\n                                       weights['encoder_h2']),\\\r\n                             biases['encoder_b2']))\r\n\r\n\r\n# Encoder Hidden layer with sigmoid activation #1\r\ndecoder_in = tf.nn.sigmoid(tf.add\\\r\n                           (tf.matmul(encoder_out,\\\r\n                                      weights['decoder_h1']),\\\r\n                            biases['decoder_b1']))\r\n\r\n# Decoder Hidden layer with sigmoid activation #2\r\ndecoder_out = tf.nn.sigmoid(tf.add\\\r\n                            (tf.matmul(decoder_in,\\\r\n                                       weights['decoder_h2']),\\\r\n                             biases['decoder_b2']))\r\n\r\n\r\n# Prediction\r\ny_pred = decoder_out\r\n# Targets (Labels) are the input data.\r\ny_true = X\r\n\r\n# Define loss and optimizer, minimize the squared error\r\ncost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))\r\noptimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)\r\n\r\n# Initializing the variables\r\ninit = tf.global_variables_initializer()\r\n\r\n# Launch the graph\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n    total_batch = int(mnist.train.num_examples/batch_size)\r\n    # Training cycle\r\n    for epoch in range(training_epochs):\r\n        # Loop over all batches\r\n        for i in range(total_batch):\r\n            batch_xs, batch_ys =\\\r\n                      mnist.train.next_batch(batch_size)\r\n            # Run optimization op (backprop) and cost op (to get loss value)\r\n            _, c = sess.run([optimizer, cost],\\\r\n                            feed_dict={X: batch_xs})\r\n        # Display logs per epoch step\r\n        if epoch % display_step == 0:\r\n            print(\"Epoch:\", 
'%04d' % (epoch+1),\r\n                  \"cost=\", \"{:.9f}\".format(c))\r\n\r\n    print(\"Optimization Finished!\")\r\n\r\n    # Applying encode and decode over test set\r\n    encode_decode = sess.run(\r\n        y_pred, feed_dict=\\\r\n        {X: mnist.test.images[:examples_to_show]})\r\n    # Compare original images with their reconstructions\r\n    f, a = plt.subplots(2, 4, figsize=(10, 5))\r\n    for i in range(examples_to_show):\r\n        a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))\r\n        a[1][i].imshow(np.reshape(encode_decode[i], (28, 28)))\r\n    f.show()\r\n    plt.draw()\r\n    plt.show()\r\n"
  },
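  {
    "path": "Chapter05/Python 2.7/sketch_reconstruction_mse.py",
    "content": "# A NumPy sketch (illustrative only; the random batch is an assumption) of the\r\n# autoencoder cost in autoencoder_1.py, tf.reduce_mean(tf.pow(y_true - y_pred, 2)):\r\n# the mean squared error over every pixel of every example in the batch. The\r\n# network squeezes each 784-pixel image through a 256 -> 128 bottleneck and\r\n# back out to 784, so a low MSE means the 128-float code preserves the image.\r\nimport numpy as np\r\n\r\ndef mse(y_true, y_pred):\r\n    return np.mean((y_true - y_pred) ** 2)\r\n\r\nbatch = np.random.rand(256, 784)         # a batch of flattened 28x28 images\r\nprint(mse(batch, batch))                 # 0.0 for a perfect reconstruction\r\nprint(mse(batch, np.zeros_like(batch)))  # ~1/3 for an all-black output on uniform [0,1) pixels\r\n"
  },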
  {
    "path": "Chapter05/Python 2.7/deconvolutional_autoencoder_1.py",
    "content": "import numpy as np\r\nimport tensorflow as tf\r\nimport matplotlib.pyplot as plt\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\n\r\n#Plot function\r\ndef plotresult(org_vec,noisy_vec,out_vec):\r\n    plt.matshow(np.reshape(org_vec, (28, 28)),\\\r\n                cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Original Image\")\r\n    plt.colorbar()\r\n\r\n    plt.matshow(np.reshape(noisy_vec, (28, 28)),\\\r\n                cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Input Image\")\r\n    plt.colorbar()\r\n    \r\n    outimg   = np.reshape(out_vec, (28, 28))\r\n    plt.matshow(outimg, cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Reconstructed Image\")\r\n    plt.colorbar()\r\n    plt.show()\r\n\r\n# NETOWORK PARAMETERS\r\nn_input    = 784 \r\nn_hidden_1 = 256 \r\nn_hidden_2 = 256 \r\nn_output   = 784\r\n\r\nepochs     = 110\r\nbatch_size = 100\r\ndisp_step  = 10\r\n\r\nprint (\"PACKAGES LOADED\")\r\n\r\nmnist = input_data.read_data_sets('data/', one_hot=True)\r\ntrainimg   = mnist.train.images\r\ntrainlabel = mnist.train.labels\r\ntestimg    = mnist.test.images\r\ntestlabel  = mnist.test.labels\r\nprint (\"MNIST LOADED\")\r\n\r\n\r\n# PLACEHOLDERS\r\nx = tf.placeholder(\"float\", [None, n_input])\r\ny = tf.placeholder(\"float\", [None, n_output])\r\ndropout_keep_prob = tf.placeholder(\"float\")\r\n\r\n# WEIGHTS\r\nweights = {\r\n    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),\r\n    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),\r\n    'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))\r\n}\r\nbiases = {\r\n    'b1': tf.Variable(tf.random_normal([n_hidden_1])),\r\n    'b2': tf.Variable(tf.random_normal([n_hidden_2])),\r\n    'out': tf.Variable(tf.random_normal([n_output]))\r\n}\r\n\r\n\r\nencode_in = tf.nn.sigmoid\\\r\n          (tf.add(tf.matmul\\\r\n                  (x, weights['h1']),\\\r\n                  biases['b1'])) \r\n\r\nencode_out = tf.nn.dropout\\\r\n             (encode_in, dropout_keep_prob) \r\n\r\ndecode_in = tf.nn.sigmoid\\\r\n          (tf.add(tf.matmul\\\r\n                  (encode_out, weights['h2']),\\\r\n                  biases['b2'])) \r\n\r\ndecode_out = tf.nn.dropout(decode_in,\\\r\n                           dropout_keep_prob) \r\n\r\n\r\ny_pred = tf.nn.sigmoid\\\r\n         (tf.matmul(decode_out,\\\r\n                    weights['out']) +\\\r\n          biases['out'])\r\n\r\n# COST\r\ncost = tf.reduce_mean(tf.pow(y_pred - y, 2))\r\n\r\n# OPTIMIZER\r\noptmizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)\r\n\r\n# INITIALIZER\r\ninit = tf.global_variables_initializer()\r\n\r\ninit = tf.global_variables_initializer()\r\n\r\n# Launch the graph\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n    print (\"Start Training\")\r\n    for epoch in range(epochs):\r\n        num_batch  = int(mnist.train.num_examples/batch_size)\r\n        total_cost = 0.\r\n        for i in range(num_batch):\r\n            batch_xs, batch_ys = mnist.train.next_batch(batch_size)\r\n            batch_xs_noisy = batch_xs\\\r\n                             + 0.3*np.random.randn(batch_size, 784)\r\n            feeds = {x: batch_xs_noisy,\\\r\n                     y: batch_xs, \\\r\n                     dropout_keep_prob: 0.8}\r\n            sess.run(optmizer, feed_dict=feeds)\r\n            total_cost += sess.run(cost, feed_dict=feeds)\r\n        # DISPLAY\r\n        if epoch % disp_step == 0:\r\n            print (\"Epoch %02d/%02d average cost: %.6f\" \r\n                   % (epoch, epochs, 
total_cost/num_batch))\r\n\r\n            # Test one\r\n            print (\"Start Test\")\r\n            randidx   = np.random.randint\\\r\n                        (testimg.shape[0], size=1)\r\n            orgvec    = testimg[randidx, :]\r\n            testvec   = testimg[randidx, :]\r\n            label     = np.argmax(testlabel[randidx, :], 1)\r\n\r\n            print (\"Test label is %d\" % (label)) \r\n            noisyvec = testvec + 0.3*np.random.randn(1, 784)\r\n            outvec   = sess.run(y_pred,\\\r\n                                feed_dict={x: noisyvec,\\\r\n                                           dropout_keep_prob: 1})\r\n\r\n            plotresult(orgvec,noisyvec,outvec)\r\n            print (\"restart Training\")\r\n"
  },
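  {
    "path": "Chapter05/Python 2.7/sketch_noise_corruption.py",
    "content": "# A sketch (illustrative only; seed and shapes are assumptions) of the input\r\n# corruption used by the denoising scripts: additive Gaussian noise with\r\n# standard deviation 0.3 is applied to the batch fed as x, while the clean\r\n# batch remains the target y.\r\nimport numpy as np\r\n\r\nnp.random.seed(0)\r\n\r\nbatch_xs = np.random.rand(100, 784)  # clean pixels in [0, 1)\r\nbatch_xs_noisy = batch_xs + 0.3*np.random.randn(100, 784)\r\n\r\nprint(\"clean  min %.3f max %.3f\" % (batch_xs.min(), batch_xs.max()))              # stays within [0, 1)\r\nprint(\"noisy  min %.3f max %.3f\" % (batch_xs_noisy.min(), batch_xs_noisy.max()))  # spills outside [0, 1]\r\n# The noisy input is deliberately not clipped; the sigmoid output layer can\r\n# only produce values in (0, 1), matching the clean target.\r\n"
  },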
  {
    "path": "Chapter05/Python 2.7/denoising_autoencoder_1.py",
    "content": "import numpy as np\r\nimport tensorflow as tf\r\nimport matplotlib.pyplot as plt\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\n\r\n#Plot function\r\ndef plotresult(org_vec,noisy_vec,out_vec):\r\n    plt.matshow(np.reshape(org_vec, (28, 28)),\\\r\n                cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Original Image\")\r\n    plt.colorbar()\r\n\r\n    plt.matshow(np.reshape(noisy_vec, (28, 28)),\\\r\n                cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Input Image\")\r\n    plt.colorbar()\r\n    \r\n    outimg   = np.reshape(out_vec, (28, 28))\r\n    plt.matshow(outimg, cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Reconstructed Image\")\r\n    plt.colorbar()\r\n    plt.show()\r\n\r\n# NETOWRK PARAMETERS\r\nn_input    = 784 \r\nn_hidden_1 = 256 \r\nn_hidden_2 = 256 \r\nn_output   = 784\r\n\r\nepochs     = 100\r\nbatch_size = 100\r\ndisp_step  = 10\r\n\r\nprint (\"PACKAGES LOADED\")\r\n\r\nmnist = input_data.read_data_sets('data/', one_hot=True)\r\ntrainimg   = mnist.train.images\r\ntrainlabel = mnist.train.labels\r\ntestimg    = mnist.test.images\r\ntestlabel  = mnist.test.labels\r\nprint (\"MNIST LOADED\")\r\n\r\n\r\n# PLACEHOLDERS\r\nx = tf.placeholder(\"float\", [None, n_input])\r\ny = tf.placeholder(\"float\", [None, n_output])\r\ndropout_keep_prob = tf.placeholder(\"float\")\r\n\r\n# WEIGHTS\r\nweights = {\r\n    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),\r\n    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),\r\n    'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))\r\n}\r\nbiases = {\r\n    'b1': tf.Variable(tf.random_normal([n_hidden_1])),\r\n    'b2': tf.Variable(tf.random_normal([n_hidden_2])),\r\n    'out': tf.Variable(tf.random_normal([n_output]))\r\n}\r\n\r\n\r\nencode_in = tf.nn.sigmoid\\\r\n          (tf.add(tf.matmul\\\r\n                  (x, weights['h1']),\\\r\n                  biases['b1'])) \r\n\r\nencode_out = tf.nn.dropout\\\r\n             (encode_in, dropout_keep_prob) \r\n\r\ndecode_in = tf.nn.sigmoid\\\r\n          (tf.add(tf.matmul\\\r\n                  (encode_out, weights['h2']),\\\r\n                  biases['b2'])) \r\n\r\ndecode_out = tf.nn.dropout(decode_in,\\\r\n                           dropout_keep_prob) \r\n\r\n\r\ny_pred = tf.nn.sigmoid\\\r\n         (tf.matmul(decode_out,\\\r\n                    weights['out']) +\\\r\n          biases['out'])\r\n\r\n# COST\r\ncost = tf.reduce_mean(tf.pow(y_pred - y, 2))\r\n\r\n# OPTIMIZER\r\noptmizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)\r\n\r\n# INITIALIZER\r\ninit = tf.global_variables_initializer()\r\n\r\ninit = tf.global_variables_initializer()\r\n\r\n# Launch the graph\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n    print (\"Start Training\")\r\n    for epoch in range(epochs):\r\n        num_batch  = int(mnist.train.num_examples/batch_size)\r\n        total_cost = 0.\r\n        for i in range(num_batch):\r\n            batch_xs, batch_ys = mnist.train.next_batch(batch_size)\r\n            batch_xs_noisy = batch_xs + 0.3*np.random.randn(batch_size, 784)\r\n            feeds = {x: batch_xs_noisy, y: batch_xs, dropout_keep_prob: 0.8}\r\n            sess.run(optmizer, feed_dict=feeds)\r\n            total_cost += sess.run(cost, feed_dict=feeds)\r\n        # DISPLAY\r\n        if epoch % disp_step == 0:\r\n            print (\"Epoch %02d/%02d average cost: %.6f\" \r\n                   % (epoch, epochs, total_cost/num_batch))\r\n\r\n            # Test one\r\n            print (\"Start Test\")\r\n       
     randidx   = np.random.randint\\\r\n                        (testimg.shape[0], size=1)\r\n            orgvec    = testimg[randidx, :]\r\n            testvec   = testimg[randidx, :]\r\n            label     = np.argmax(testlabel[randidx, :], 1)\r\n\r\n            print (\"Test label is %d\" % (label)) \r\n            noisyvec = testvec + 0.3*np.random.randn(1, 784)\r\n            outvec   = sess.run(y_pred,\\\r\n                                feed_dict={x: noisyvec,\\\r\n                                           dropout_keep_prob: 1})\r\n\r\n            plotresult(orgvec,noisyvec,outvec)\r\n            print (\"restart Training\")\r\n\r\n\r\n    \r\n\"\"\"\"\r\nPACKAGES LOADED\r\nExtracting data/train-images-idx3-ubyte.gz\r\nExtracting data/train-labels-idx1-ubyte.gz\r\nExtracting data/t10k-images-idx3-ubyte.gz\r\nExtracting data/t10k-labels-idx1-ubyte.gz\r\nMNIST LOADED\r\nStart Training\r\nEpoch 00/100 average cost: 0.212313\r\nStart Test\r\nTest label is 6\r\nrestart Training\r\nEpoch 10/100 average cost: 0.033660\r\nStart Test\r\nTest label is 2\r\nrestart Training\r\nEpoch 20/100 average cost: 0.026888\r\nStart Test\r\nTest label is 6\r\nrestart Training\r\nEpoch 30/100 average cost: 0.023660\r\nStart Test\r\nTest label is 1\r\nrestart Training\r\nEpoch 40/100 average cost: 0.021740\r\nStart Test\r\nTest label is 9\r\nrestart Training\r\nEpoch 50/100 average cost: 0.020399\r\nStart Test\r\nTest label is 0\r\nrestart Training\r\nEpoch 60/100 average cost: 0.019593\r\nStart Test\r\nTest label is 9\r\nrestart Training\r\nEpoch 70/100 average cost: 0.019026\r\nStart Test\r\nTest label is 1\r\nrestart Training\r\nEpoch 80/100 average cost: 0.018537\r\nStart Test\r\nTest label is 4\r\nrestart Training\r\nEpoch 90/100 average cost: 0.018224\r\nStart Test\r\nTest label is 9\r\nrestart Training\r\n\"\"\"\r\n"
  },
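  {
    "path": "Chapter05/Python 2.7/sketch_rmsprop_update.py",
    "content": "# A one-parameter sketch (illustrative only; the toy loss w**2 is an\r\n# assumption) of the update rule behind tf.train.RMSPropOptimizer(0.01) as\r\n# used by these autoencoders, with the optimizer's default decay of 0.9 and\r\n# epsilon of 1e-10: keep a running average of squared gradients and divide\r\n# each step by its square root, shrinking steps along noisy directions.\r\nimport numpy as np\r\n\r\nlr, decay, eps = 0.01, 0.9, 1e-10\r\nms, w = 0.0, 5.0\r\nfor step in range(3):\r\n    grad = 2.0*w                          # gradient of the toy loss w**2\r\n    ms = decay*ms + (1 - decay)*grad**2   # running mean of squared gradients\r\n    w -= lr*grad/(np.sqrt(ms) + eps)      # scaled gradient step\r\n    print(\"step %d: w = %.4f\" % (step, w))\r\n"
  },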
  {
    "path": "Chapter05/Python 3.5/Convlutional_AutoEncoder.py",
    "content": "import matplotlib.pyplot as plt\r\nimport numpy as np\r\nimport math\r\nimport tensorflow as tf\r\nimport tensorflow.examples.tutorials.mnist.input_data as input_data\r\n\r\nfrom tensorflow.python.framework import ops\r\nimport warnings\r\nimport random\r\nimport os\r\n\r\nwarnings.filterwarnings(\"ignore\")\r\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\r\nops.reset_default_graph()\r\n\r\n# LOAD PACKAGES\r\nmnist = input_data.read_data_sets(\"data/\", one_hot=True)\r\ntrainimgs = mnist.train.images\r\ntrainlabels = mnist.train.labels\r\ntestimgs = mnist.test.images\r\ntestlabels = mnist.test.labels\r\nntrain = trainimgs.shape[0]\r\nntest = testimgs.shape[0]\r\ndim = trainimgs.shape[1]\r\nnout = trainlabels.shape[1]\r\n\r\nprint(\"Packages loaded\")\r\n# WEIGHT AND BIASES\r\nn1 = 16\r\nn2 = 32\r\nn3 = 64\r\nksize = 5\r\n\r\nweights = {\r\n    'ce1': tf.Variable(tf.random_normal([ksize, ksize, 1, n1], stddev=0.1)),\r\n    'ce2': tf.Variable(tf.random_normal([ksize, ksize, n1, n2], stddev=0.1)),\r\n    'ce3': tf.Variable(tf.random_normal([ksize, ksize, n2, n3], stddev=0.1)),\r\n    'cd3': tf.Variable(tf.random_normal([ksize, ksize, n2, n3], stddev=0.1)),\r\n    'cd2': tf.Variable(tf.random_normal([ksize, ksize, n1, n2], stddev=0.1)),\r\n    'cd1': tf.Variable(tf.random_normal([ksize, ksize, 1, n1], stddev=0.1))\r\n}\r\nbiases = {\r\n    'be1': tf.Variable(tf.random_normal([n1], stddev=0.1)),\r\n    'be2': tf.Variable(tf.random_normal([n2], stddev=0.1)),\r\n    'be3': tf.Variable(tf.random_normal([n3], stddev=0.1)),\r\n    'bd3': tf.Variable(tf.random_normal([n2], stddev=0.1)),\r\n    'bd2': tf.Variable(tf.random_normal([n1], stddev=0.1)),\r\n    'bd1': tf.Variable(tf.random_normal([1], stddev=0.1))\r\n}\r\n\r\n\r\ndef cae(_X, _W, _b, _keepprob):\r\n    _input_r = tf.reshape(_X, shape=[-1, 28, 28, 1])\r\n    # Encoder\r\n    _ce1 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_input_r, _W['ce1'], strides=[1, 2, 2, 1], padding='SAME'), _b['be1']))\r\n    _ce1 = tf.nn.dropout(_ce1, _keepprob)\r\n    _ce2 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_ce1, _W['ce2'], strides=[1, 2, 2, 1], padding='SAME'), _b['be2']))\r\n    _ce2 = tf.nn.dropout(_ce2, _keepprob)\r\n    _ce3 = tf.nn.sigmoid(tf.add(tf.nn.conv2d(_ce2, _W['ce3'], strides=[1, 2, 2, 1], padding='SAME'), _b['be3']))\r\n    _ce3 = tf.nn.dropout(_ce3, _keepprob)\r\n    # Decoder\r\n    _cd3 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_ce3, _W['cd3'], tf.stack([tf.shape(_X)[0], 7, 7, n2]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd3']))\r\n    _cd3 = tf.nn.dropout(_cd3, _keepprob)\r\n    _cd2 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_cd3, _W['cd2'], tf.stack([tf.shape(_X)[0], 14, 14, n1]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd2']))\r\n    _cd2 = tf.nn.dropout(_cd2, _keepprob)\r\n    _cd1 = tf.nn.sigmoid(tf.add(tf.nn.conv2d_transpose(_cd2, _W['cd1'], tf.stack([tf.shape(_X)[0], 28, 28, 1]), strides=[1, 2, 2, 1], padding='SAME'), _b['bd1']))\r\n    _cd1 = tf.nn.dropout(_cd1, _keepprob)\r\n    _out = _cd1\r\n    return _out\r\n\r\nprint(\"Network ready\")\r\nx = tf.placeholder(tf.float32, [None, dim])\r\ny = tf.placeholder(tf.float32, [None, dim])\r\nkeepprob = tf.placeholder(tf.float32)\r\npred = cae(x, weights, biases, keepprob)  # ['out']\r\ncost = tf.reduce_sum(tf.square(cae(x, weights, biases, keepprob)- tf.reshape(y, shape=[-1, 28, 28, 1])))\r\n\r\nlearning_rate = 0.001\r\noptm = tf.train.AdamOptimizer(learning_rate).minimize(cost)\r\ninit = tf.global_variables_initializer()\r\n\r\nprint(\"Functions ready\")\r\nsess = 
tf.Session()\r\nsess.run(init)\r\n\r\n# mean_img = np.mean(mnist.train.images, axis=0)\r\nmean_img = np.zeros((784))\r\n# Fit all training data\r\nbatch_size = 128\r\nn_epochs = 50\r\nprint(\"Strart training..\")\r\n\r\nfor epoch_i in range(n_epochs):\r\n    for batch_i in range(mnist.train.num_examples // batch_size):\r\n        batch_xs, _ = mnist.train.next_batch(batch_size)\r\n        trainbatch = np.array([img - mean_img for img in batch_xs])\r\n        trainbatch_noisy = trainbatch + 0.3 * np.random.randn(\r\n        trainbatch.shape[0], 784)\r\n        sess.run(optm, feed_dict={x: trainbatch_noisy, y: trainbatch, keepprob: 0.7})\r\n        print(\"[%02d/%02d] cost: %.4f\" % (epoch_i, n_epochs, sess.run(cost, feed_dict={x: trainbatch_noisy, y: trainbatch, keepprob: 1.})))\r\n\r\n    if (epoch_i % 10) == 0:\r\n        n_examples = 5\r\n        test_xs, _ = mnist.test.next_batch(n_examples)\r\n        test_xs_noisy = test_xs + 0.3 * np.random.randn(\r\n        test_xs.shape[0], 784)\r\n        recon = sess.run(pred, feed_dict={x: test_xs_noisy,keepprob: 1.})\r\n        fig, axs = plt.subplots(2, n_examples, figsize=(15, 4))\r\n\r\n        for example_i in range(n_examples):\r\n             axs[0][example_i].matshow(np.reshape(test_xs_noisy[example_i, :], (28, 28)), cmap=plt.get_cmap('gray'))\r\n             axs[1][example_i].matshow(np.reshape(np.reshape(recon[example_i, ...], (784,))+ mean_img, (28, 28)), cmap=plt.get_cmap('gray'))\r\n             plt.show()\r\n"
  },
  {
    "path": "Chapter05/Python 3.5/__init__.py",
    "content": ""
  },
  {
    "path": "Chapter05/Python 3.5/autoencoder_1.py",
    "content": "import tensorflow as tf\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\n\r\n# Import MINST data\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\r\n\r\n#mnist = mnist_data.read_data_sets(\"data/\")\r\n\r\n# Parameters\r\nlearning_rate = 0.01\r\ntraining_epochs = 20\r\nbatch_size = 256\r\ndisplay_step = 1\r\nexamples_to_show = 10\r\n\r\n# Network Parameters\r\nn_hidden_1 = 256 # 1st layer num features\r\nn_hidden_2 = 128 # 2nd layer num features\r\nn_input = 784 # MNIST data input (img shape: 28*28)\r\n\r\n# tf Graph input (only pictures)\r\nX = tf.placeholder(\"float\", [None, n_input])\r\n\r\nweights = {\r\n    'encoder_h1': tf.Variable\\\r\n    (tf.random_normal([n_input, n_hidden_1])),\r\n    'encoder_h2': tf.Variable\\\r\n    (tf.random_normal([n_hidden_1, n_hidden_2])),\r\n    'decoder_h1': tf.Variable\\\r\n    (tf.random_normal([n_hidden_2, n_hidden_1])),\r\n    'decoder_h2': tf.Variable\\\r\n    (tf.random_normal([n_hidden_1, n_input])),\r\n}\r\nbiases = {\r\n    'encoder_b1': tf.Variable\\\r\n    (tf.random_normal([n_hidden_1])),\r\n    'encoder_b2': tf.Variable\\\r\n    (tf.random_normal([n_hidden_2])),\r\n    'decoder_b1': tf.Variable\\\r\n    (tf.random_normal([n_hidden_1])),\r\n    'decoder_b2': tf.Variable\\\r\n    (tf.random_normal([n_input])),\r\n}\r\n\r\n\r\n\r\n# Encoder Hidden layer with sigmoid activation #1\r\nencoder_in = tf.nn.sigmoid(tf.add\\\r\n                           (tf.matmul(X, \\\r\n                                      weights['encoder_h1']),\\\r\n                            biases['encoder_b1']))\r\n\r\n# Decoder Hidden layer with sigmoid activation #2\r\nencoder_out = tf.nn.sigmoid(tf.add\\\r\n                            (tf.matmul(encoder_in,\\\r\n                                       weights['encoder_h2']),\\\r\n                             biases['encoder_b2']))\r\n\r\n\r\n# Encoder Hidden layer with sigmoid activation #1\r\ndecoder_in = tf.nn.sigmoid(tf.add\\\r\n                           (tf.matmul(encoder_out,\\\r\n                                      weights['decoder_h1']),\\\r\n                            biases['decoder_b1']))\r\n\r\n# Decoder Hidden layer with sigmoid activation #2\r\ndecoder_out = tf.nn.sigmoid(tf.add\\\r\n                            (tf.matmul(decoder_in,\\\r\n                                       weights['decoder_h2']),\\\r\n                             biases['decoder_b2']))\r\n\r\n\r\n# Prediction\r\ny_pred = decoder_out\r\n# Targets (Labels) are the input data.\r\ny_true = X\r\n\r\n# Define loss and optimizer, minimize the squared error\r\ncost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))\r\noptimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)\r\n\r\n# Initializing the variables\r\ninit = tf.global_variables_initializer()\r\n\r\n# Launch the graph\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n    total_batch = int(mnist.train.num_examples/batch_size)\r\n    # Training cycle\r\n    for epoch in range(training_epochs):\r\n        # Loop over all batches\r\n        for i in range(total_batch):\r\n            batch_xs, batch_ys =\\\r\n                      mnist.train.next_batch(batch_size)\r\n            # Run optimization op (backprop) and cost op (to get loss value)\r\n            _, c = sess.run([optimizer, cost],\\\r\n                            feed_dict={X: batch_xs})\r\n        # Display logs per epoch step\r\n        if epoch % display_step == 0:\r\n            print(\"Epoch:\", 
'%04d' % (epoch+1),\r\n                  \"cost=\", \"{:.9f}\".format(c))\r\n\r\n    print(\"Optimization Finished!\")\r\n\r\n    # Applying encode and decode over test set\r\n    encode_decode = sess.run(\r\n        y_pred, feed_dict=\\\r\n        {X: mnist.test.images[:examples_to_show]})\r\n    # Compare original images with their reconstructions\r\n    f, a = plt.subplots(2, 10, figsize=(10, 2))\r\n    for i in range(examples_to_show):\r\n        a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))\r\n        a[1][i].imshow(np.reshape(encode_decode[i], (28, 28)))\r\n    f.show()\r\n    plt.draw()\r\n    plt.show()\r\n"
  },
  {
    "path": "Chapter05/Python 3.5/deconvolutional_autoencoder_1.py",
    "content": "import numpy as np\r\nimport tensorflow as tf\r\nimport matplotlib.pyplot as plt\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\n\r\n#Plot function\r\ndef plotresult(org_vec,noisy_vec,out_vec):\r\n    plt.matshow(np.reshape(org_vec, (28, 28)), cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Original Image\")\r\n    plt.colorbar()\r\n\r\n    plt.matshow(np.reshape(noisy_vec, (28, 28)), cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Input Image\")\r\n    plt.colorbar()\r\n    \r\n    outimg = np.reshape(out_vec, (28, 28))\r\n    plt.matshow(outimg, cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Reconstructed Image\")\r\n    plt.colorbar()\r\n    plt.show()\r\n\r\n# NETOWORK PARAMETERS\r\nn_input = 784\r\nn_hidden_1 = 256 \r\nn_hidden_2 = 256 \r\nn_output = 784\r\n\r\nepochs = 110\r\nbatch_size = 100\r\ndisp_step = 10\r\n\r\nprint(\"PACKAGES LOADED\")\r\n\r\nmnist = input_data.read_data_sets('data/', one_hot=True)\r\ntrainimg   = mnist.train.images\r\ntrainlabel = mnist.train.labels\r\ntestimg    = mnist.test.images\r\ntestlabel  = mnist.test.labels\r\nprint(\"MNIST LOADED\")\r\n\r\n\r\n# PLACEHOLDERS\r\nx = tf.placeholder(\"float\", [None, n_input])\r\ny = tf.placeholder(\"float\", [None, n_output])\r\ndropout_keep_prob = tf.placeholder(\"float\")\r\n\r\n# WEIGHTS\r\nweights = {\r\n    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),\r\n    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),\r\n    'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))\r\n}\r\nbiases = {\r\n    'b1': tf.Variable(tf.random_normal([n_hidden_1])),\r\n    'b2': tf.Variable(tf.random_normal([n_hidden_2])),\r\n    'out': tf.Variable(tf.random_normal([n_output]))\r\n}\r\n\r\nencode_in = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['h1']), biases['b1']))\r\nencode_out = tf.nn.dropout(encode_in, dropout_keep_prob)\r\ndecode_in = tf.nn.sigmoid(tf.add(tf.matmul(encode_out, weights['h2']), biases['b2']))\r\ndecode_out = tf.nn.dropout(decode_in, dropout_keep_prob)\r\n\r\ny_pred = tf.nn.sigmoid(tf.matmul(decode_out, weights['out']) + biases['out'])\r\n\r\n# COST\r\ncost = tf.reduce_mean(tf.pow(y_pred - y, 2))\r\n\r\n# OPTIMIZER\r\noptmizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)\r\n\r\n# INITIALIZER\r\ninit = tf.global_variables_initializer()\r\n\r\ninit = tf.global_variables_initializer()\r\n\r\n# Launch the graph\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n    print(\"Start Training\")\r\n    for epoch in range(epochs):\r\n        num_batch  = int(mnist.train.num_examples/batch_size)\r\n        total_cost = 0.\r\n        for i in range(num_batch):\r\n            batch_xs, batch_ys = mnist.train.next_batch(batch_size)\r\n            batch_xs_noisy = batch_xs + 0.3*np.random.randn(batch_size, 784)\r\n            feeds = {x: batch_xs_noisy, y: batch_xs, dropout_keep_prob: 0.8}\r\n            sess.run(optmizer, feed_dict=feeds)\r\n            total_cost += sess.run(cost, feed_dict=feeds)\r\n        # DISPLAY\r\n        if epoch % disp_step == 0:\r\n            print(\"Epoch %02d/%02d average cost: %.6f\" % (epoch, epochs, total_cost/num_batch))\r\n\r\n            # Test one\r\n            print (\"Start Test\")\r\n            randidx = np.random.randint(testimg.shape[0], size=1)\r\n            orgvec = testimg[randidx, :]\r\n            testvec = testimg[randidx, :]\r\n            label = np.argmax(testlabel[randidx, :], 1)\r\n\r\n            print (\"Test label is %d\" % label)\r\n            noisyvec = testvec + 0.3*np.random.randn(1, 784)\r\n       
     outvec = sess.run(y_pred, feed_dict={x: noisyvec, dropout_keep_prob: 1})\r\n\r\n            plotresult(orgvec,noisyvec,outvec)\r\n            print(\"restart Training\")"
  },
  {
    "path": "Chapter05/Python 3.5/denoising_autoencoder_1.py",
    "content": "import numpy as np\r\nimport tensorflow as tf\r\nimport matplotlib.pyplot as plt\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\n\r\n#Plot function\r\ndef plotresult(org_vec,noisy_vec,out_vec):\r\n    plt.matshow(np.reshape(org_vec, (28, 28)), cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Original Image\")\r\n    plt.colorbar()\r\n\r\n    plt.matshow(np.reshape(noisy_vec, (28, 28)), cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Input Image\")\r\n    plt.colorbar()\r\n    \r\n    outimg = np.reshape(out_vec, (28, 28))\r\n    plt.matshow(outimg, cmap=plt.get_cmap('gray'))\r\n    plt.title(\"Reconstructed Image\")\r\n    plt.colorbar()\r\n    plt.show()\r\n\r\n# NETOWRK PARAMETERS\r\nn_input = 784\r\nn_hidden_1 = 256 \r\nn_hidden_2 = 256 \r\nn_output = 784\r\n\r\nepochs = 100\r\nbatch_size = 100\r\ndisp_step = 10\r\n\r\nprint(\"PACKAGES LOADED\")\r\n\r\nmnist = input_data.read_data_sets('data/', one_hot=True)\r\ntrainimg = mnist.train.images\r\ntrainlabel = mnist.train.labels\r\ntestimg = mnist.test.images\r\ntestlabel = mnist.test.labels\r\nprint(\"MNIST LOADED\")\r\n\r\n\r\n# PLACEHOLDERS\r\nx = tf.placeholder(\"float\", [None, n_input])\r\ny = tf.placeholder(\"float\", [None, n_output])\r\ndropout_keep_prob = tf.placeholder(\"float\")\r\n\r\n# WEIGHTS\r\nweights = {\r\n    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),\r\n    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),\r\n    'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))\r\n}\r\nbiases = {\r\n    'b1': tf.Variable(tf.random_normal([n_hidden_1])),\r\n    'b2': tf.Variable(tf.random_normal([n_hidden_2])),\r\n    'out': tf.Variable(tf.random_normal([n_output]))\r\n}\r\n\r\n\r\nencode_in = tf.nn.sigmoid\\\r\n          (tf.add(tf.matmul\\\r\n                  (x, weights['h1']),\\\r\n                  biases['b1'])) \r\n\r\nencode_out = tf.nn.dropout\\\r\n             (encode_in, dropout_keep_prob) \r\n\r\ndecode_in = tf.nn.sigmoid\\\r\n          (tf.add(tf.matmul\\\r\n                  (encode_out, weights['h2']),\\\r\n                  biases['b2'])) \r\n\r\ndecode_out = tf.nn.dropout(decode_in,\\\r\n                           dropout_keep_prob) \r\n\r\n\r\ny_pred = tf.nn.sigmoid\\\r\n         (tf.matmul(decode_out,\\\r\n                    weights['out']) +\\\r\n          biases['out'])\r\n\r\n# COST\r\ncost = tf.reduce_mean(tf.pow(y_pred - y, 2))\r\n\r\n# OPTIMIZER\r\noptmizer = tf.train.RMSPropOptimizer(0.01).minimize(cost)\r\n\r\n# INITIALIZER\r\ninit = tf.global_variables_initializer()\r\n\r\ninit = tf.global_variables_initializer()\r\n\r\n# Launch the graph\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n    print(\"Start Training\")\r\n    for epoch in range(epochs):\r\n        num_batch  = int(mnist.train.num_examples/batch_size)\r\n        total_cost = 0.\r\n        for i in range(num_batch):\r\n            batch_xs, batch_ys = mnist.train.next_batch(batch_size)\r\n            batch_xs_noisy = batch_xs + 0.3*np.random.randn(batch_size, 784)\r\n            feeds = {x: batch_xs_noisy, y: batch_xs, dropout_keep_prob: 0.8}\r\n            sess.run(optmizer, feed_dict=feeds)\r\n            total_cost += sess.run(cost, feed_dict=feeds)\r\n        # DISPLAY\r\n        if epoch % disp_step == 0:\r\n            print(\"Epoch %02d/%02d average cost: %.6f\"\r\n                   % (epoch, epochs, total_cost/num_batch))\r\n\r\n            # Test one\r\n            print(\"Start Test\")\r\n            randidx   = np.random.randint\\\r\n                        
(testimg.shape[0], size=1)\r\n            orgvec    = testimg[randidx, :]\r\n            testvec   = testimg[randidx, :]\r\n            label     = np.argmax(testlabel[randidx, :], 1)\r\n\r\n            print(\"Test label is %d\" % (label))\r\n            noisyvec = testvec + 0.3*np.random.randn(1, 784)\r\n            outvec   = sess.run(y_pred,\\\r\n                                feed_dict={x: noisyvec,\\\r\n                                           dropout_keep_prob: 1})\r\n\r\n            plotresult(orgvec,noisyvec,outvec)\r\n            print(\"restart Training\")\r\n\r\n\r\n    \r\n\"\"\"\"\r\nPACKAGES LOADED\r\nExtracting data/train-images-idx3-ubyte.gz\r\nExtracting data/train-labels-idx1-ubyte.gz\r\nExtracting data/t10k-images-idx3-ubyte.gz\r\nExtracting data/t10k-labels-idx1-ubyte.gz\r\nMNIST LOADED\r\nStart Training\r\nEpoch 00/100 average cost: 0.212313\r\nStart Test\r\nTest label is 6\r\nrestart Training\r\nEpoch 10/100 average cost: 0.033660\r\nStart Test\r\nTest label is 2\r\nrestart Training\r\nEpoch 20/100 average cost: 0.026888\r\nStart Test\r\nTest label is 6\r\nrestart Training\r\nEpoch 30/100 average cost: 0.023660\r\nStart Test\r\nTest label is 1\r\nrestart Training\r\nEpoch 40/100 average cost: 0.021740\r\nStart Test\r\nTest label is 9\r\nrestart Training\r\nEpoch 50/100 average cost: 0.020399\r\nStart Test\r\nTest label is 0\r\nrestart Training\r\nEpoch 60/100 average cost: 0.019593\r\nStart Test\r\nTest label is 9\r\nrestart Training\r\nEpoch 70/100 average cost: 0.019026\r\nStart Test\r\nTest label is 1\r\nrestart Training\r\nEpoch 80/100 average cost: 0.018537\r\nStart Test\r\nTest label is 4\r\nrestart Training\r\nEpoch 90/100 average cost: 0.018224\r\nStart Test\r\nTest label is 9\r\nrestart Training\r\n\"\"\"\r\n"
  },
  {
    "path": "Chapter06/Python 2.7/LSTM_model_1.py",
    "content": "import tensorflow as tf\r\nfrom tensorflow.contrib import rnn\r\n\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)\r\n\r\nlearning_rate = 0.001\r\ntraining_iters = 100000\r\nbatch_size = 128\r\ndisplay_step = 10\r\n\r\nn_input = 28 \r\nn_steps = 28 \r\nn_hidden = 128 \r\nn_classes = 10 \r\n\r\nx = tf.placeholder(\"float\", [None, n_steps, n_input])\r\ny = tf.placeholder(\"float\", [None, n_classes])\r\n\r\nweights = {\r\n    'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))\r\n}\r\nbiases = {\r\n    'out': tf.Variable(tf.random_normal([n_classes]))\r\n}\r\n\r\ndef RNN(x, weights, biases):\r\n    x = tf.transpose(x, [1, 0, 2])\r\n    x = tf.reshape(x, [-1, n_input])\r\n    x = tf.split(axis=0, num_or_size_splits=n_steps, value=x)\r\n    lstm_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)\r\n    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)\r\n    return tf.matmul(outputs[-1], weights['out']) + biases['out']\r\n\r\npred = RNN(x, weights, biases)\r\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))\r\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\r\n\r\ncorrect_pred = tf. equal(tf.argmax(pred,1), tf.argmax(y,1))\r\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\r\n\r\ninit = tf.global_variables_initializer()\r\n\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n    step = 1\r\n    while step * batch_size < training_iters:\r\n        batch_x, batch_y = mnist.train.next_batch(batch_size)\r\n        batch_x = batch_x.reshape((batch_size, n_steps, n_input))\r\n        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})\r\n        if step % display_step == 0:\r\n            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})\r\n            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})\r\n            print(\"Iter \" + str(step*batch_size) + \", Minibatch Loss= \" + \\\r\n                  \"{:.6f}\".format(loss) + \", Training Accuracy= \" + \\\r\n                  \"{:.5f}\".format(acc))\r\n        step += 1\r\n    print(\"Optimization Finished!\")\r\n\r\n    test_len = 128\r\n    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))\r\n    test_label = mnist.test.labels[:test_len]\r\n    print(\"Testing Accuracy:\", \\\r\nsess.run(accuracy, feed_dict={x: test_data, y: test_label}))\r\n"
  },
  {
    "path": "Chapter06/Python 2.7/__init__.py",
    "content": ""
  },
  {
    "path": "Chapter06/Python 2.7/bidirectional_RNN_1.py",
    "content": "import tensorflow as tf\r\nimport numpy as np\r\nfrom tensorflow.contrib import rnn\r\n\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)\r\n\r\nlearning_rate = 0.001\r\ntraining_iters = 100000\r\nbatch_size = 128\r\ndisplay_step = 10\r\n\r\nn_input = 28 \r\nn_steps = 28 \r\nn_hidden = 128 \r\nn_classes = 10 \r\n\r\nx = tf.placeholder(\"float\", [None, n_steps, n_input])\r\ny = tf.placeholder(\"float\", [None, n_classes])\r\n\r\nweights = {\r\n    'out': tf.Variable(tf.random_normal([2*n_hidden, n_classes]))\r\n}\r\nbiases = {\r\n    'out': tf.Variable(tf.random_normal([n_classes]))\r\n}\r\n\r\ndef BiRNN(x, weights, biases):\r\n    x = tf.transpose(x, [1, 0, 2])\r\n    x = tf.reshape(x, [-1, n_input])\r\n    x = tf.split(axis=0, num_or_size_splits=n_steps, value=x)\r\n    lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)\r\n    lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)\r\n    try:\r\n        outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,\r\n                                              dtype=tf.float32)\r\n    except Exception: # Old TensorFlow version only returns outputs not states\r\n        outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,\r\n                                        dtype=tf.float32)\r\n    return tf.matmul(outputs[-1], weights['out']) + biases['out']\r\n\r\npred = BiRNN(x, weights, biases)\r\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))\r\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\r\ncorrect_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))\r\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\r\ninit = tf.global_variables_initializer()\r\n\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n    step = 1\r\n    while step * batch_size < training_iters:\r\n        batch_x, batch_y = mnist.train.next_batch(batch_size)\r\n        batch_x = batch_x.reshape((batch_size, n_steps, n_input))\r\n        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})\r\n        if step % display_step == 0:\r\n            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})\r\n            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})\r\n            print(\"Iter \" + str(step*batch_size) + \", Minibatch Loss= \" + \\\r\n                  \"{:.6f}\".format(loss) + \", Training Accuracy= \" + \\\r\n                  \"{:.5f}\".format(acc))\r\n        step += 1\r\n    print(\"Optimization Finished!\")\r\n\r\n    test_len = 128\r\n    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))\r\n    test_label = mnist.test.labels[:test_len]\r\n    print(\"Testing Accuracy:\", \\\r\nsess.run(accuracy, feed_dict={x: test_data, y: test_label}))\r\n"
  },
  {
    "path": "Chapter06/Python 3.5/LSTM_model_1.py",
    "content": "import tensorflow as tf\r\nfrom tensorflow.contrib import rnn\r\n\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)\r\n\r\nlearning_rate = 0.001\r\ntraining_iters = 100000\r\nbatch_size = 128\r\ndisplay_step = 10\r\n\r\nn_input = 28 \r\nn_steps = 28 \r\nn_hidden = 128 \r\nn_classes = 10 \r\n\r\nx = tf.placeholder(\"float\", [None, n_steps, n_input])\r\ny = tf.placeholder(\"float\", [None, n_classes])\r\n\r\nweights = {\r\n    'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))\r\n}\r\nbiases = {\r\n    'out': tf.Variable(tf.random_normal([n_classes]))\r\n}\r\n\r\ndef RNN(x, weights, biases):\r\n    x = tf.transpose(x, [1, 0, 2])\r\n    x = tf.reshape(x, [-1, n_input])\r\n    x = tf.split(axis=0, num_or_size_splits=n_steps, value=x)\r\n    lstm_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)\r\n    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)\r\n    return tf.matmul(outputs[-1], weights['out']) + biases['out']\r\n\r\npred = RNN(x, weights, biases)\r\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))\r\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\r\n\r\ncorrect_pred = tf. equal(tf.argmax(pred,1), tf.argmax(y,1))\r\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\r\n\r\ninit = tf.global_variables_initializer()\r\n\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n    step = 1\r\n    while step * batch_size < training_iters:\r\n        batch_x, batch_y = mnist.train.next_batch(batch_size)\r\n        batch_x = batch_x.reshape((batch_size, n_steps, n_input))\r\n        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})\r\n        if step % display_step == 0:\r\n            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})\r\n            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})\r\n            print(\"Iter \" + str(step*batch_size) + \", Minibatch Loss= \" + \\\r\n                  \"{:.6f}\".format(loss) + \", Training Accuracy= \" + \\\r\n                  \"{:.5f}\".format(acc))\r\n        step += 1\r\n    print(\"Optimization Finished!\")\r\n\r\n    test_len = 128\r\n    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))\r\n    test_label = mnist.test.labels[:test_len]\r\n    print(\"Testing Accuracy:\", \\\r\nsess.run(accuracy, feed_dict={x: test_data, y: test_label}))\r\n"
  },
  {
    "path": "Chapter06/Python 3.5/__init__.py",
    "content": ""
  },
  {
    "path": "Chapter06/Python 3.5/bidirectional_RNN_1.py",
    "content": "import tensorflow as tf\r\nimport numpy as np\r\nfrom tensorflow.contrib import rnn\r\n\r\nfrom tensorflow.examples.tutorials.mnist import input_data\r\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)\r\n\r\nlearning_rate = 0.001\r\ntraining_iters = 100000\r\nbatch_size = 128\r\ndisplay_step = 10\r\n\r\nn_input = 28 \r\nn_steps = 28 \r\nn_hidden = 128 \r\nn_classes = 10 \r\n\r\nx = tf.placeholder(\"float\", [None, n_steps, n_input])\r\ny = tf.placeholder(\"float\", [None, n_classes])\r\n\r\nweights = {\r\n    'out': tf.Variable(tf.random_normal([2*n_hidden, n_classes]))\r\n}\r\nbiases = {\r\n    'out': tf.Variable(tf.random_normal([n_classes]))\r\n}\r\n\r\ndef BiRNN(x, weights, biases):\r\n    x = tf.transpose(x, [1, 0, 2])\r\n    x = tf.reshape(x, [-1, n_input])\r\n    x = tf.split(axis=0, num_or_size_splits=n_steps, value=x)\r\n    lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)\r\n    lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)\r\n    try:\r\n        outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,\r\n                                              dtype=tf.float32)\r\n    except Exception: # Old TensorFlow version only returns outputs not states\r\n        outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,\r\n                                        dtype=tf.float32)\r\n    return tf.matmul(outputs[-1], weights['out']) + biases['out']\r\n\r\npred = BiRNN(x, weights, biases)\r\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))\r\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\r\ncorrect_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))\r\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\r\ninit = tf.global_variables_initializer()\r\n\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n    step = 1\r\n    while step * batch_size < training_iters:\r\n        batch_x, batch_y = mnist.train.next_batch(batch_size)\r\n        batch_x = batch_x.reshape((batch_size, n_steps, n_input))\r\n        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})\r\n        if step % display_step == 0:\r\n            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})\r\n            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})\r\n            print(\"Iter \" + str(step*batch_size) + \", Minibatch Loss= \" + \\\r\n                  \"{:.6f}\".format(loss) + \", Training Accuracy= \" + \\\r\n                  \"{:.5f}\".format(acc))\r\n        step += 1\r\n    print(\"Optimization Finished!\")\r\n\r\n    test_len = 128\r\n    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))\r\n    test_label = mnist.test.labels[:test_len]\r\n    print(\"Testing Accuracy:\", \\\r\nsess.run(accuracy, feed_dict={x: test_data, y: test_label}))\r\n"
  },
  {
    "path": "Chapter07/Python 2.7/gpu_computing_with_multiple_GPU.py",
    "content": "import numpy as np\r\nimport tensorflow as tf\r\nimport datetime\r\n\r\nlog_device_placement = True\r\nn = 10\r\n\r\nA = np.random.rand(10000, 10000).astype('float32')\r\nB = np.random.rand(10000, 10000).astype('float32')\r\n\r\nc1 = []\r\n\r\ndef matpow(M, n):\r\n    if n < 1: #Abstract cases where n < 1\r\n        return M\r\n    else:\r\n        return tf.matmul(M, matpow(M, n-1))\r\n\r\n#FIRST GPU\r\nwith tf.device('/gpu:0'):\r\n    a = tf.placeholder(tf.float32, [10000, 10000])\r\n    c1.append(matpow(a, n))\r\n    \r\n#SECOND GPU\r\nwith tf.device('/gpu:1'):\r\n    b = tf.placeholder(tf.float32, [10000, 10000])\r\n    c1.append(matpow(b, n))\r\n\r\n\r\nwith tf.device('/cpu:0'):\r\n    sum = tf.add_n(c1) \r\n    print(sum)\r\n\r\nt1_1 = datetime.datetime.now()\r\nwith tf.Session(config=tf.ConfigProto\\\r\n                 (allow_soft_placement=True,\\\r\n                log_device_placement=log_device_placement))\\\r\n                  as sess:\r\n     sess.run(sum, {a:A, b:B})\r\nt2_1 = datetime.datetime.now()\r\n"
  },
  {
    "path": "Chapter07/Python 2.7/gpu_example.py",
    "content": "import numpy as np\r\nimport tensorflow as tf\r\nimport datetime\r\n\r\nlog_device_placement = True\r\n\r\nn = 10\r\n\r\nA = np.random.rand(10000, 10000).astype('float32')\r\nB = np.random.rand(10000, 10000).astype('float32')\r\n\r\n\r\nc1 = []\r\nc2 = []\r\n\r\ndef matpow(M, n):\r\n    if n < 1: #Abstract cases where n < 1\r\n        return M\r\n    else:\r\n        return tf.matmul(M, matpow(M, n-1))\r\n\r\nwith tf.device('/gpu:0'):\r\n    a = tf.placeholder(tf.float32, [10000, 10000])\r\n    b = tf.placeholder(tf.float32, [10000, 10000])\r\n    c1.append(matpow(a, n))\r\n    c1.append(matpow(b, n))\r\n# If the below code does not work use '/job:localhost/replica:0/task:0/cpu:0' as the GPU device\r\nwith tf.device('/cpu:0'):\r\n  sum = tf.add_n(c1) #Addition of all elements in c1, i.e. A^n + B^n\r\n\r\nt1_1 = datetime.datetime.now()\r\nwith tf.Session(config=tf.ConfigProto\\\r\n              (log_device_placement=log_device_placement)) as sess:\r\n     sess.run(sum, {a:A, b:B})\r\nt2_1 = datetime.datetime.now()\r\n"
  },
  {
    "path": "Chapter07/Python 2.7/gpu_soft_placemnet_1.py",
    "content": "import numpy as np\r\nimport tensorflow as tf\r\nimport datetime\r\n\r\nlog_device_placement = True\r\nn = 10\r\n\r\nA = np.random.rand(10000, 10000).astype('float32')\r\nB = np.random.rand(10000, 10000).astype('float32')\r\n\r\nc1 = []\r\n\r\ndef matpow(M, n):\r\n    if n < 1: #Abstract cases where n < 1\r\n        return M\r\n    else:\r\n        return tf.matmul(M, matpow(M, n-1))\r\n\r\nwith tf.device('/job:localhost/replica:0/task:0/cpu:0'):\r\n    a = tf.placeholder(tf.float32, [10000, 10000])\r\n    b = tf.placeholder(tf.float32, [10000, 10000])\r\n    c1.append(matpow(a, n))\r\n    c1.append(matpow(b, n))\r\n\r\nwith tf.device('/job:localhost/replica:0/task:0/cpu:1'):\r\n    sum = tf.add_n(c1) \r\n    print(sum)\t\r\n\r\nt1_1 = datetime.datetime.now()\r\nwith tf.Session(config=tf.ConfigProto\\\r\n                 (allow_soft_placement=True,\\\r\n                log_device_placement=log_device_placement))\\\r\n                  as sess:\r\n     sess.run(sum, {a:A, b:B})\r\nt2_1 = datetime.datetime.now()\r\n"
  },
  {
    "path": "Chapter07/Python 3.5/gpu_computing_with_multiple_GPU.py",
    "content": "import numpy as np\r\nimport tensorflow as tf\r\nimport datetime\r\n\r\nlog_device_placement = True\r\nn = 10\r\n\r\nA = np.random.rand(10000, 10000).astype('float32')\r\nB = np.random.rand(10000, 10000).astype('float32')\r\n\r\nc1 = []\r\n\r\ndef matpow(M, n):\r\n    if n < 1: #Abstract cases where n < 1\r\n        return M\r\n    else:\r\n        return tf.matmul(M, matpow(M, n-1))\r\n\r\n#FIRST GPU\r\nwith tf.device('/gpu:0'):\r\n    a = tf.placeholder(tf.float32, [10000, 10000])\r\n    c1.append(matpow(a, n))\r\n    \r\n#SECOND GPU\r\nwith tf.device('/gpu:1'):\r\n    b = tf.placeholder(tf.float32, [10000, 10000])\r\n    c1.append(matpow(b, n))\r\n\r\n\r\nwith tf.device('/cpu:0'):\r\n    sum = tf.add_n(c1) \r\n    print(sum)\r\n\r\nt1_1 = datetime.datetime.now()\r\nwith tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=log_device_placement)) as sess:\r\n     sess.run(sum, {a:A, b:B})\r\n\r\nt2_1 = datetime.datetime.now()\r\n"
  },
  {
    "path": "Chapter07/Python 3.5/gpu_example.py",
    "content": "import numpy as np\r\nimport tensorflow as tf\r\nimport datetime\r\n\r\nlog_device_placement = True\r\nn = 10\r\nA = np.random.rand(10000, 10000).astype('float32')\r\nB = np.random.rand(10000, 10000).astype('float32')\r\nc1 = []\r\nc2 = []\r\n\r\ndef matpow(M, n):\r\n    if n < 1: #Abstract cases where n < 1\r\n        return M\r\n    else:\r\n        return tf.matmul(M, matpow(M, n-1))\r\n\r\nwith tf.device('/gpu:0'): # For CPU use /cpu:0\r\n    a = tf.placeholder(tf.float32, [10000, 10000])\r\n    b = tf.placeholder(tf.float32, [10000, 10000])\r\n    c1.append(matpow(a, n))\r\n    c1.append(matpow(b, n))\r\n\r\n# If the below code does not work use '/job:localhost/replica:0/task:0/cpu:0' as the GPU device\r\nwith tf.device('/cpu:0'):\r\n  sum = tf.add_n(c1) #Addition of all elements in c1, i.e. A^n + B^n\r\n\r\nt1_1 = datetime.datetime.now()\r\nwith tf.Session(config=tf.ConfigProto(log_device_placement=log_device_placement)) as sess:\r\n     sess.run(sum, {a:A, b:B})\r\n\r\nt2_1 = datetime.datetime.now()\r\n"
  },
  {
    "path": "Chapter07/Python 3.5/gpu_soft_placemnet_1.py",
    "content": "import numpy as np\r\nimport tensorflow as tf\r\nimport datetime\r\n\r\nlog_device_placement = True\r\nn = 10\r\n\r\nA = np.random.rand(10000, 10000).astype('float32')\r\nB = np.random.rand(10000, 10000).astype('float32')\r\n\r\nc1 = []\r\n\r\ndef matpow(M, n):\r\n    if n < 1: #Abstract cases where n < 1\r\n        return M\r\n    else:\r\n        return tf.matmul(M, matpow(M, n-1))\r\n\r\nwith tf.device('gpu:0'): # for CPU only, use /cpu:0\r\n    a = tf.placeholder(tf.float32, [10000, 10000])\r\n    b = tf.placeholder(tf.float32, [10000, 10000])\r\n    c1.append(matpow(a, n))\r\n    c1.append(matpow(b, n))\r\n\r\nwith tf.device('gpu:1'): # for CPU only, use /cpu:0\r\n    sum = tf.add_n(c1) \r\n    print(sum)\t\r\n\r\nt1_1 = datetime.datetime.now()\r\nwith tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=log_device_placement)) as sess:\r\n     sess.run(sum, {a:A, b:B})\r\n\r\nt2_1 = datetime.datetime.now()\r\n"
  },
  {
    "path": "Chapter08/Python 2.7/digit_classifier.py",
    "content": "from six.moves import xrange  \r\nimport tensorflow as tf\r\nimport prettytensor as pt\r\nfrom prettytensor.tutorial import data_utils\r\n\r\ntf.app.flags.DEFINE_string('save_path', None, 'Where to save the model checkpoints.')\r\nFLAGS = tf.app.flags.FLAGS\r\n\r\nBATCH_SIZE = 50\r\nEPOCH_SIZE = 60000 // BATCH_SIZE\r\nTEST_SIZE = 10000 // BATCH_SIZE\r\n\r\ntf.app.flags.DEFINE_string('model', 'full','Choose one of the models, either full or conv')\r\nFLAGS = tf.app.flags.FLAGS\r\ndef multilayer_fully_connected(images, labels):\r\n                           images = pt.wrap(images)\r\n                           with pt.defaults_scope(activation_fn=tf.nn.relu,l2loss=0.00001):\r\n                           return (images.flatten().\\\r\n                                   fully_connected(100).\\\r\n                                   fully_connected(100).\\\r\n                                   softmax_classifier(10, labels))\r\n\r\ndef lenet5(images, labels):\r\n    images = pt.wrap(images)\r\n    with pt.defaults_scope\\\r\n         (activation_fn=tf.nn.relu, l2loss=0.00001):\r\n    return (images.conv2d(5, 20).\\\r\n            max_pool(2, 2).\\\r\n            conv2d(5, 50).\\\r\n            max_pool(2, 2).\\\r\n            flatten().\\\r\n            fully_connected(500).\\\r\n            softmax_classifier(10, labels))\r\n\r\ndef main(_=None):\r\n  image_placeholder = tf.placeholder\\\r\n                      (tf.float32, [BATCH_SIZE, 28, 28, 1])\r\n  labels_placeholder = tf.placeholder\\\r\n                       (tf.float32, [BATCH_SIZE, 10])\r\n\r\nif FLAGS.model == 'full':\r\n    result = multilayer_fully_connected\\\r\n             (image_placeholder,\\\r\n              labels_placeholder)\r\n  elif FLAGS.model == 'conv':\r\n    result = lenet5(image_placeholder,\\\r\n                    labels_placeholder)\r\nelse:\r\n    raise ValueError\\\r\n              ('model must be full or conv: %s' % FLAGS.model)\r\n\r\naccuracy = result.softmax.\\\r\n           evaluate_classifier\\\r\n           (labels_placeholder,phase=pt.Phase.test)\r\n\r\ntrain_images, train_labels = data_utils.mnist(training=True)\r\ntest_images, test_labels = data_utils.mnist(training=False)\r\noptimizer = tf.train.GradientDescentOptimizer(0.01)\r\ntrain_op = pt.apply_optimizer(optimizer,losses=[result.loss])\r\nrunner = pt.train.Runner(save_path=FLAGS.save_path)\r\n\r\n\r\nwith tf.Session():\r\n    for epoch in xrange(10):\r\n        train_images, train_labels = \\\r\n                      data_utils.permute_data\\\r\n                      ((train_images, train_labels))\r\n\r\n        runner.train_model(train_op,result.\\\r\n                           loss,EPOCH_SIZE,\\\r\n                           feed_vars=(image_placeholder,\\\r\n                                      labels_placeholder),\\\r\n                           feed_data=pt.train.\\\r\n                           feed_numpy(BATCH_SIZE,\\\r\n                                      train_images,\\\r\n                                      train_labels),\\\r\n                           print_every=100)\r\n        classification_accuracy = runner.evaluate_model\\\r\n                                  (accuracy,\\\r\n                                   TEST_SIZE,\\\r\n                                   feed_vars=(image_placeholder,\\\r\n                                              labels_placeholder),\\\r\n                                   feed_data=pt.train.\\\r\n                                   feed_numpy(BATCH_SIZE,\\\r\n                       
                       test_images,\\\r\n                                              test_labels))\r\n  print('epoch’ , epoch + 1)\r\n  print(‘accuracy’, classification_accuracy )\r\n\r\nif __name__ == '__main__':\r\n  tf.app.run()\r\n\r\n"
  },
  {
    "path": "Chapter08/Python 2.7/keras_movie_classifier_1.py",
    "content": "import numpy\r\nfrom keras.datasets import imdb\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Dense\r\nfrom keras.layers import LSTM\r\nfrom keras.layers.embeddings import Embedding\r\nfrom keras.preprocessing import sequence\r\n\r\n# fix random seed for reproducibility\r\nnumpy.random.seed(7)\r\n\r\n# load the dataset but only keep the top n words, zero the rest\r\ntop_words = 5000\r\n(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=top_words)\r\n# truncate and pad input sequences\r\nmax_review_length = 500\r\nX_train = sequence.pad_sequences(X_train, maxlen=max_review_length)\r\nX_test = sequence.pad_sequences(X_test, maxlen=max_review_length)\r\n\r\n# create the model\r\nembedding_vecor_length = 32\r\nmodel = Sequential()\r\nmodel.add(Embedding(top_words, embedding_vecor_length,\\\r\n                       input_length=max_review_length))\r\nmodel.add(LSTM(100))\r\nmodel.add(Dense(1, activation='sigmoid'))\r\nmodel.compile(loss='binary_crossentropy',\\\r\n                 optimizer='adam',\\\r\n                   metrics=['accuracy'])\r\nprint(model.summary())\r\n\r\nmodel.fit(X_train, y_train,\\\r\n     validation_data=(X_test, y_test),\\\r\n           nb_epoch=3, batch_size=64)\r\n\r\n# Final evaluation of the model\r\nscores = model.evaluate(X_test, y_test, verbose=0)\r\n\r\nprint(\"Accuracy: %.2f%%\" % (scores[1]*100))\r\n"
  },
  {
    "path": "Chapter08/Python 2.7/keras_movie_classifier_using_convLayer_1.py",
    "content": "from __future__ import print_function\r\n\r\nimport numpy\r\nfrom keras.datasets import imdb\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Dense\r\nfrom keras.layers import LSTM\r\nfrom keras.layers.embeddings import Embedding\r\nfrom keras.preprocessing import sequence\r\nfrom keras.layers import Conv1D, GlobalMaxPooling1D\r\n\r\n# fix random seed for the reproducibility\r\nnumpy.random.seed(7)\r\n\r\n# load the dataset but only keep the top n words, zero the rest\r\ntop_words = 5000\r\n(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)\r\n# truncate and pad input sequences\r\nmax_review_length = 500\r\nX_train = sequence.pad_sequences(X_train, maxlen=max_review_length)\r\nX_test = sequence.pad_sequences(X_test, maxlen=max_review_length)\r\n\r\n# create the model\r\nembedding_vector_length = 32\r\nmodel = Sequential()\r\nmodel.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))\r\nmodel.add(Conv1D(padding=\"same\", activation=\"relu\", kernel_size=3, num_filter=32))\r\nmodel.add(GlobalMaxPooling1D())\r\nmodel.add(LSTM(32, input_dim=64, return_sequences=True))\r\nmodel.add(LSTM(24, return_sequences=True))\r\nmodel.add(LSTM(1,  return_sequences=False))\r\n\r\nmodel.add(Dense(2, activation='sigmoid'))\r\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\r\nprint(model.summary())\r\n\r\nmodel.fit(X_train, y_train, validation_data=(X_test, y_test), num_epoch=3, batch_size=64)\r\n\r\n# Final evaluation of the model\r\nscores = model.evaluate(X_test, y_test, verbose=0)\r\n\r\nprint(\"Accuracy: %.2f%%\" % (scores[1]*100))\r\n"
  },
  {
    "path": "Chapter08/Python 2.7/pretty_tensor_digit_1.py",
    "content": "import tensorflow as tf\r\nimport prettytensor as pt\r\nfrom prettytensor.tutorial import data_utils\r\n\r\ntf.app.flags.DEFINE_string('save_path', None, 'Where to save the model checkpoints.')\r\nFLAGS = tf.app.flags.FLAGS\r\n\r\nBATCH_SIZE = 50\r\nEPOCH_SIZE = 60000 // BATCH_SIZE\r\nTEST_SIZE = 10000 // BATCH_SIZE\r\n\r\ntf.app.flags.DEFINE_string('model', 'full','Choose one of the models, either full or conv')\r\nFLAGS = tf.app.flags.FLAGS\r\ndef multilayer_fully_connected(images, labels):\r\n                           images = pt.wrap(images)\r\n                           with pt.defaults_scope(activation_fn=tf.nn.relu,l2loss=0.00001):\r\n                           \treturn (images.flatten().fully_connected(100).fully_connected(100).softmax_classifier(10, labels))\r\n\r\n\r\ndef lenet5(images, labels):\r\n    images = pt.wrap(images)\r\n    with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):\r\n    \treturn (images.conv2d(5, 20).max_pool(2, 2).conv2d(5, 50).max_pool(2, 2).flatten().fully_connected(500).softmax_classifier(10, labels))\r\n\r\n\r\ndef main(_=None):\r\n  image_placeholder = tf.placeholder\\\r\n                      (tf.float32, [BATCH_SIZE, 28, 28, 1])\r\n  labels_placeholder = tf.placeholder\\\r\n                       (tf.float32, [BATCH_SIZE, 10])\r\n\r\nif FLAGS.model == 'full':\r\n    result = multilayer_fully_connected\\\r\n             (image_placeholder,\\\r\n              labels_placeholder)\r\nelif FLAGS.model == 'conv':\r\n  \tresult = lenet5(image_placeholder, labels_placeholder)\r\nelse:\r\n    raise ValueError\\\r\n              ('model must be full or conv: %s' % FLAGS.model)\r\n\r\naccuracy = result.softmax.\\\r\n           evaluate_classifier\\\r\n           (labels_placeholder,phase=pt.Phase.test)\r\n\r\ntrain_images, train_labels = data_utils.mnist(training=True)\r\ntest_images, test_labels = data_utils.mnist(training=False)\r\noptimizer = tf.train.GradientDescentOptimizer(0.01)\r\ntrain_op = pt.apply_optimizer(optimizer,losses=[result.loss])\r\nrunner = pt.train.Runner(save_path=FLAGS.save_path)\r\n\r\n\r\nwith tf.Session():\r\n    for epoch in xrange(10):\r\n        train_images, train_labels = \\\r\n                      data_utils.permute_data\\\r\n                      ((train_images, train_labels))\r\n\r\n        runner.train_model(train_op,result.\\\r\n                           loss,EPOCH_SIZE,\\\r\n                           feed_vars=(image_placeholder,\\\r\n                                      labels_placeholder),\\\r\n                           feed_data=pt.train.\\\r\n                           feed_numpy(BATCH_SIZE,\\\r\n                                      train_images,\\\r\n                                      train_labels),\\\r\n                           print_every=100)\r\n        classification_accuracy = runner.evaluate_model\\\r\n                                  (accuracy,\\\r\n                                   TEST_SIZE,\\\r\n                                   feed_vars=(image_placeholder,\\\r\n                                              labels_placeholder),\\\r\n                                   feed_data=pt.train.\\\r\n                                   feed_numpy(BATCH_SIZE,\\\r\n                                              test_images,\\\r\n                                              test_labels))\r\n  \tprint('epoch' , epoch + 1)\r\n  \tprint('accuracy', classification_accuracy )\r\n        \r\nif __name__ == '__main__':\r\n  tf.app.run()\r\n"
  },
  {
    "path": "Chapter08/Python 2.7/tflearn_titanic_classifier.py",
    "content": "from tflearn.datasets import titanic\r\ntitanic.download_dataset('titanic_dataset.csv')\r\nfrom tflearn.data_utils import load_csv\r\ndata, labels = load_csv('titanic_dataset.csv', target_column=0,\r\n                        categorical_labels=True, n_classes=2)\r\n\r\ndef preprocess(data, columns_to_ignore):\r\n    for id in sorted(columns_to_ignore, reverse=True):\r\n        [r.pop(id) for r in data]\r\n    for i in range(len(data)):\r\n        data[i][1] = 1. if data[i][1] == 'female' else 0.\r\n    return np.array(data, dtype=np.float32)\r\n\r\nto_ignore=[1, 6]\r\ndata = preprocess(data, to_ignore)\r\nnet = tflearn.input_data(shape=[None, 6])\r\n\r\nnet = tflearn.fully_connected(net, 32)\r\nnet = tflearn.fully_connected(net, 32)\r\nnet = tflearn.fully_connected(net, 2, activation='softmax')\r\nnet = tflearn.regression(net)\r\nmodel = tflearn.DNN(net)\r\nmodel.fit(data, labels, n_epoch=10, batch_size=16, show_metric=True)\r\n"
  },
  {
    "path": "Chapter08/Python 3.5/__init__.py",
    "content": ""
  },
  {
    "path": "Chapter08/Python 3.5/digit_classifier.py",
    "content": "from six.moves import range  \r\nimport tensorflow as tf\r\nimport prettytensor as pt\r\nfrom prettytensor.tutorial import data_utils\r\n\r\ntf.app.flags.DEFINE_string('save_path', None, 'Where to save the model checkpoints.')\r\nFLAGS = tf.app.flags.FLAGS\r\n\r\nBATCH_SIZE = 50\r\nEPOCH_SIZE = 60000 // BATCH_SIZE\r\nTEST_SIZE = 10000 // BATCH_SIZE\r\n\r\nimage_placeholder = tf.placeholder\\\r\n                      (tf.float32, [BATCH_SIZE, 28, 28, 1])\r\nlabels_placeholder = tf.placeholder\\\r\n                       (tf.float32, [BATCH_SIZE, 10])\r\n\r\ntf.app.flags.DEFINE_string('model', 'full','Choose one of the models, either full or conv')\r\nFLAGS = tf.app.flags.FLAGS\r\ndef multilayer_fully_connected(images, labels):\r\n                           images = pt.wrap(images)\r\n                           with pt.defaults_scope(activation_fn=tf.nn.relu,l2loss=0.00001):\r\n                               return (images.flatten().\\\r\n                                   fully_connected(100).\\\r\n                                   fully_connected(100).\\\r\n                                   softmax_classifier(10, labels))\r\n\r\ndef lenet5(images, labels):\r\n    images = pt.wrap(images)\r\n    with pt.defaults_scope\\\r\n         (activation_fn=tf.nn.relu, l2loss=0.00001):\r\n        return (images.conv2d(5, 20).\\\r\n            max_pool(2, 2).\\\r\n            conv2d(5, 50).\\\r\n            max_pool(2, 2).\\\r\n            flatten().\\\r\n            fully_connected(500).\\\r\n            softmax_classifier(10, labels))\r\n\r\ndef main(_=None):\r\n  image_placeholder = tf.placeholder\\\r\n                      (tf.float32, [BATCH_SIZE, 28, 28, 1])\r\n  labels_placeholder = tf.placeholder\\\r\n                       (tf.float32, [BATCH_SIZE, 10])\r\n\r\nif FLAGS.model == 'full':\r\n    result = multilayer_fully_connected(image_placeholder, labels_placeholder)\r\nelif FLAGS.model == 'conv':\r\n    result = lenet5(image_placeholder, labels_placeholder)\r\nelse:\r\n    raise ValueError('model must be full or conv: %s' % FLAGS.model)\r\n\r\naccuracy = result.softmax.evaluate_classifier(labels_placeholder,phase=pt.Phase.test)\r\n\r\ntrain_images, train_labels = data_utils.mnist(training=True)\r\ntest_images, test_labels = data_utils.mnist(training=False)\r\noptimizer = tf.train.GradientDescentOptimizer(0.01)\r\ntrain_op = pt.apply_optimizer(optimizer,losses=[result.loss])\r\nrunner = pt.train.Runner(save_path=FLAGS.save_path)\r\n\r\n\r\nwith tf.Session():\r\n    for epoch in range(10):\r\n        train_images, train_labels = \\\r\n                      data_utils.permute_data\\\r\n                      ((train_images, train_labels))\r\n\r\n        runner.train_model(train_op,result.\\\r\n                           loss,EPOCH_SIZE,\\\r\n                           feed_vars=(image_placeholder,\\\r\n                                      labels_placeholder),\\\r\n                           feed_data=pt.train.\\\r\n                           feed_numpy(BATCH_SIZE,\\\r\n                                      train_images,\\\r\n                                      train_labels),\\\r\n                           print_every=100)\r\n        classification_accuracy = runner.evaluate_model\\\r\n                                  (accuracy,\\\r\n                                   TEST_SIZE,\\\r\n                                   feed_vars=(image_placeholder,\\\r\n                                              labels_placeholder),\\\r\n                                   
feed_data=pt.train.\\\r\n                                   feed_numpy(BATCH_SIZE,\\\r\n                                              test_images,\\\r\n                                              test_labels))\r\n\r\nprint('epoch' , epoch + 1)\r\nprint('accuracy', classification_accuracy)\r\n\r\nif __name__ == '__main__':\r\n  tf.app.run()\r\n\r\n"
  },
  {
    "path": "Chapter08/Python 3.5/keras_movie_classifier_1.py",
    "content": "import numpy\r\nfrom keras.datasets import imdb\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Dense\r\nfrom keras.layers import LSTM\r\nfrom keras.layers.embeddings import Embedding\r\nfrom keras.preprocessing import sequence\r\n\r\n# fix random seed for reproducibility\r\nnumpy.random.seed(7)\r\n\r\n# load the dataset but only keep the top n words, zero the rest\r\ntop_words = 5000\r\n(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=top_words)\r\n# truncate and pad input sequences\r\nmax_review_length = 500\r\nX_train = sequence.pad_sequences(X_train, maxlen=max_review_length)\r\nX_test = sequence.pad_sequences(X_test, maxlen=max_review_length)\r\n\r\n# create the model\r\nembedding_vecor_length = 32\r\nmodel = Sequential()\r\nmodel.add(Embedding(top_words, embedding_vecor_length,\\\r\n                       input_length=max_review_length))\r\nmodel.add(LSTM(100))\r\nmodel.add(Dense(1, activation='sigmoid'))\r\nmodel.compile(loss='binary_crossentropy',\\\r\n                 optimizer='adam',\\\r\n                   metrics=['accuracy'])\r\nprint(model.summary())\r\n\r\nmodel.fit(X_train, y_train,\\\r\n     validation_data=(X_test, y_test),\\\r\n           epochs=3, batch_size=64)\r\n\r\n# Final evaluation of the model\r\nscores = model.evaluate(X_test, y_test, verbose=0)\r\n\r\nprint(\"Accuracy: %.2f%%\" % (scores[1]*100))\r\n"
  },
  {
    "path": "Chapter08/Python 3.5/keras_movie_classifier_using_convLayer_1.py",
    "content": "from __future__ import print_function\r\n\r\nimport numpy\r\nfrom keras.datasets import imdb\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Dense\r\nfrom keras.layers import LSTM\r\nfrom keras.layers.embeddings import Embedding\r\nfrom keras.preprocessing import sequence\r\nfrom keras.layers import Conv1D, GlobalMaxPooling1D\r\n\r\n# fix random seed for the reproducibility\r\nnumpy.random.seed(7)\r\n\r\n# load the dataset but only keep the top n words, zero the rest\r\ntop_words = 5000\r\n(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)\r\n# truncate and pad input sequences\r\nmax_review_length = 500\r\nX_train = sequence.pad_sequences(X_train, maxlen=max_review_length)\r\nX_test = sequence.pad_sequences(X_test, maxlen=max_review_length)\r\n\r\n# create the model\r\nembedding_vector_length = 32\r\nmodel = Sequential()\r\nmodel.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))\r\nmodel.add(Conv1D(padding=\"same\", activation=\"relu\", kernel_size=3, num_filter=32))\r\nmodel.add(GlobalMaxPooling1D())\r\nmodel.add(LSTM(32, input_dim=64, return_sequences=True))\r\nmodel.add(LSTM(24, return_sequences=True))\r\nmodel.add(LSTM(1,  return_sequences=False))\r\n\r\nmodel.add(Dense(2, activation='sigmoid'))\r\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\r\nprint(model.summary())\r\n\r\nmodel.fit(X_train, y_train, validation_data=(X_test, y_test), num_epoch=3, batch_size=64)\r\n\r\n# Final evaluation of the model\r\nscores = model.evaluate(X_test, y_test, verbose=0)\r\n\r\nprint(\"Accuracy: %.2f%%\" % (scores[1]*100))\r\n"
  },
  {
    "path": "Chapter08/Python 3.5/pretty_tensor_digit_1.py",
    "content": "import tensorflow as tf\r\nimport prettytensor as pt\r\nfrom prettytensor.tutorial import data_utils\r\n\r\ntf.app.flags.DEFINE_string('save_path', None, 'Where to save the model checkpoints.')\r\nFLAGS = tf.app.flags.FLAGS\r\n\r\nBATCH_SIZE = 50\r\nEPOCH_SIZE = 60000 // BATCH_SIZE\r\nTEST_SIZE = 10000 // BATCH_SIZE\r\n\r\nimage_placeholder = tf.placeholder(tf.float32, [BATCH_SIZE, 28, 28, 1])\r\nlabels_placeholder = tf.placeholder(tf.float32, [BATCH_SIZE, 10])\r\n\r\ntf.app.flags.DEFINE_string('model', 'full','Choose one of the models, either full or conv')\r\nFLAGS = tf.app.flags.FLAGS\r\ndef multilayer_fully_connected(images, labels):\r\n                           images = pt.wrap(images)\r\n                           with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):\r\n                           \treturn (images.flatten().fully_connected(100).fully_connected(100).softmax_classifier(10, labels))\r\n\r\n\r\ndef lenet5(images, labels):\r\n    images = pt.wrap(images)\r\n    with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):\r\n    \treturn (images.conv2d(5, 20).max_pool(2, 2).conv2d(5, 50).max_pool(2, 2).flatten().fully_connected(500).softmax_classifier(10, labels))\r\n\r\n\r\ndef main(_=None):\r\n  image_placeholder = tf.placeholder(tf.float32, [BATCH_SIZE, 28, 28, 1])\r\n  labels_placeholder = tf.placeholder(tf.float32, [BATCH_SIZE, 10])\r\n\r\nif FLAGS.model == 'full':\r\n    result = multilayer_fully_connected(image_placeholder, labels_placeholder)\r\nelif FLAGS.model == 'conv':\r\n  \tresult = lenet5(image_placeholder, labels_placeholder)\r\nelse:\r\n    raise ValueError\\\r\n              ('model must be full or conv: %s' % FLAGS.model)\r\n\r\naccuracy = result.softmax.evaluate_classifier(labels_placeholder,phase=pt.Phase.test)\r\n\r\ntrain_images, train_labels = data_utils.mnist(training=True)\r\ntest_images, test_labels = data_utils.mnist(training=False)\r\noptimizer = tf.train.GradientDescentOptimizer(0.01)\r\ntrain_op = pt.apply_optimizer(optimizer,losses=[result.loss])\r\nrunner = pt.train.Runner(save_path=FLAGS.save_path)\r\n\r\n\r\nwith tf.Session():\r\n    for epoch in range(10):\r\n        train_images, train_labels = \\\r\n                      data_utils.permute_data\\\r\n                      ((train_images, train_labels))\r\n\r\n        runner.train_model(train_op,result.\\\r\n                           loss,EPOCH_SIZE,\\\r\n                           feed_vars=(image_placeholder,\\\r\n                                      labels_placeholder),\\\r\n                           feed_data=pt.train.\\\r\n                           feed_numpy(BATCH_SIZE,\\\r\n                                      train_images,\\\r\n                                      train_labels),\\\r\n                           print_every=100)\r\n        classification_accuracy = runner.evaluate_model\\\r\n                                  (accuracy,\\\r\n                                   TEST_SIZE,\\\r\n                                   feed_vars=(image_placeholder,\\\r\n                                              labels_placeholder),\\\r\n                                   feed_data=pt.train.\\\r\n                                   feed_numpy(BATCH_SIZE,\\\r\n                                              test_images,\\\r\n                                              test_labels))\r\n\r\n    print('epoch' , epoch + 1)\r\n    print('accuracy', classification_accuracy )\r\n        \r\nif __name__ == '__main__':\r\n  tf.app.run()\r\n"
  },
  {
    "path": "Chapter08/Python 3.5/tflearn_titanic_classifier.py",
    "content": "import tflearn\r\nfrom tflearn.datasets import titanic\r\nimport numpy as np\r\ntitanic.download_dataset('titanic_dataset.csv')\r\nfrom tflearn.data_utils import load_csv\r\ndata, labels = load_csv('titanic_dataset.csv', target_column=0,\r\n                        categorical_labels=True, n_classes=2)\r\n\r\ndef preprocess(data, columns_to_ignore):\r\n    for id in sorted(columns_to_ignore, reverse=True):\r\n        [r.pop(id) for r in data]\r\n    for i in range(len(data)):\r\n        data[i][1] = 1. if data[i][1] == 'female' else 0.\r\n    return np.array(data, dtype=np.float32)\r\n\r\nto_ignore=[1, 6]\r\ndata = preprocess(data, to_ignore)\r\nnet = tflearn.input_data(shape=[None, 6])\r\n\r\nnet = tflearn.fully_connected(net, 32)\r\nnet = tflearn.fully_connected(net, 32)\r\nnet = tflearn.fully_connected(net, 2, activation='softmax')\r\nnet = tflearn.regression(net)\r\nmodel = tflearn.DNN(net)\r\nmodel.fit(data, labels, n_epoch=10, batch_size=16, show_metric=True)\r\n\r\n# Evalute the model\r\naccuracy = model.evaluate(data, labels, batch_size=16)\r\nprint('Accuracy: ', accuracy)\r\n"
  },
  {
    "path": "Chapter08/data/titanic_dataset.csv",
    "content": "survived,pclass,name,sex,age,sibsp,parch,ticket,fare\r\n1,1,\"Allen, Miss. Elisabeth Walton\",female,29,0,0,24160,211.3375\r\n1,1,\"Allison, Master. Hudson Trevor\",male,0.9167,1,2,113781,151.5500\r\n0,1,\"Allison, Miss. Helen Loraine\",female,2,1,2,113781,151.5500\r\n0,1,\"Allison, Mr. Hudson Joshua Creighton\",male,30,1,2,113781,151.5500\r\n0,1,\"Allison, Mrs. Hudson J C (Bessie Waldo Daniels)\",female,25,1,2,113781,151.5500\r\n1,1,\"Anderson, Mr. Harry\",male,48,0,0,19952,26.5500\r\n1,1,\"Andrews, Miss. Kornelia Theodosia\",female,63,1,0,13502,77.9583\r\n0,1,\"Andrews, Mr. Thomas Jr\",male,39,0,0,112050,0.0000\r\n1,1,\"Appleton, Mrs. Edward Dale (Charlotte Lamson)\",female,53,2,0,11769,51.4792\r\n0,1,\"Artagaveytia, Mr. Ramon\",male,71,0,0,PC 17609,49.5042\r\n0,1,\"Astor, Col. John Jacob\",male,47,1,0,PC 17757,227.5250\r\n1,1,\"Astor, Mrs. John Jacob (Madeleine Talmadge Force)\",female,18,1,0,PC 17757,227.5250\r\n1,1,\"Aubart, Mme. Leontine Pauline\",female,24,0,0,PC 17477,69.3000\r\n1,1,\"Barber, Miss. Ellen \"\"Nellie\"\"\",female,26,0,0,19877,78.8500\r\n1,1,\"Barkworth, Mr. Algernon Henry Wilson\",male,80,0,0,27042,30.0000\r\n0,1,\"Baumann, Mr. John D\",male,0,0,0,PC 17318,25.9250\r\n0,1,\"Baxter, Mr. Quigg Edmond\",male,24,0,1,PC 17558,247.5208\r\n1,1,\"Baxter, Mrs. James (Helene DeLaudeniere Chaput)\",female,50,0,1,PC 17558,247.5208\r\n1,1,\"Bazzani, Miss. Albina\",female,32,0,0,11813,76.2917\r\n0,1,\"Beattie, Mr. Thomson\",male,36,0,0,13050,75.2417\r\n1,1,\"Beckwith, Mr. Richard Leonard\",male,37,1,1,11751,52.5542\r\n1,1,\"Beckwith, Mrs. Richard Leonard (Sallie Monypeny)\",female,47,1,1,11751,52.5542\r\n1,1,\"Behr, Mr. Karl Howell\",male,26,0,0,111369,30.0000\r\n1,1,\"Bidois, Miss. Rosalie\",female,42,0,0,PC 17757,227.5250\r\n1,1,\"Bird, Miss. Ellen\",female,29,0,0,PC 17483,221.7792\r\n0,1,\"Birnbaum, Mr. Jakob\",male,25,0,0,13905,26.0000\r\n1,1,\"Bishop, Mr. Dickinson H\",male,25,1,0,11967,91.0792\r\n1,1,\"Bishop, Mrs. Dickinson H (Helen Walton)\",female,19,1,0,11967,91.0792\r\n1,1,\"Bissette, Miss. Amelia\",female,35,0,0,PC 17760,135.6333\r\n1,1,\"Bjornstrom-Steffansson, Mr. Mauritz Hakan\",male,28,0,0,110564,26.5500\r\n0,1,\"Blackwell, Mr. Stephen Weart\",male,45,0,0,113784,35.5000\r\n1,1,\"Blank, Mr. Henry\",male,40,0,0,112277,31.0000\r\n1,1,\"Bonnell, Miss. Caroline\",female,30,0,0,36928,164.8667\r\n1,1,\"Bonnell, Miss. Elizabeth\",female,58,0,0,113783,26.5500\r\n0,1,\"Borebank, Mr. John James\",male,42,0,0,110489,26.5500\r\n1,1,\"Bowen, Miss. Grace Scott\",female,45,0,0,PC 17608,262.3750\r\n1,1,\"Bowerman, Miss. Elsie Edith\",female,22,0,1,113505,55.0000\r\n1,1,\"Bradley, Mr. George (\"\"George Arthur Brayton\"\")\",male,0,0,0,111427,26.5500\r\n0,1,\"Brady, Mr. John Bertram\",male,41,0,0,113054,30.5000\r\n0,1,\"Brandeis, Mr. Emil\",male,48,0,0,PC 17591,50.4958\r\n0,1,\"Brewe, Dr. Arthur Jackson\",male,0,0,0,112379,39.6000\r\n1,1,\"Brown, Mrs. James Joseph (Margaret Tobin)\",female,44,0,0,PC 17610,27.7208\r\n1,1,\"Brown, Mrs. John Murray (Caroline Lane Lamson)\",female,59,2,0,11769,51.4792\r\n1,1,\"Bucknell, Mrs. William Robert (Emma Eliza Ward)\",female,60,0,0,11813,76.2917\r\n1,1,\"Burns, Miss. Elizabeth Margaret\",female,41,0,0,16966,134.5000\r\n0,1,\"Butt, Major. Archibald Willingham\",male,45,0,0,113050,26.5500\r\n0,1,\"Cairns, Mr. Alexander\",male,0,0,0,113798,31.0000\r\n1,1,\"Calderhead, Mr. Edward Pennington\",male,42,0,0,PC 17476,26.2875\r\n1,1,\"Candee, Mrs. Edward (Helen Churchill Hungerford)\",female,53,0,0,PC 17606,27.4458\r\n1,1,\"Cardeza, Mr. 
Thomas Drake Martinez\",male,36,0,1,PC 17755,512.3292\r\n1,1,\"Cardeza, Mrs. James Warburton Martinez (Charlotte Wardle Drake)\",female,58,0,1,PC 17755,512.3292\r\n0,1,\"Carlsson, Mr. Frans Olof\",male,33,0,0,695,5.0000\r\n0,1,\"Carrau, Mr. Francisco M\",male,28,0,0,113059,47.1000\r\n0,1,\"Carrau, Mr. Jose Pedro\",male,17,0,0,113059,47.1000\r\n1,1,\"Carter, Master. William Thornton II\",male,11,1,2,113760,120.0000\r\n1,1,\"Carter, Miss. Lucile Polk\",female,14,1,2,113760,120.0000\r\n1,1,\"Carter, Mr. William Ernest\",male,36,1,2,113760,120.0000\r\n1,1,\"Carter, Mrs. William Ernest (Lucile Polk)\",female,36,1,2,113760,120.0000\r\n0,1,\"Case, Mr. Howard Brown\",male,49,0,0,19924,26.0000\r\n1,1,\"Cassebeer, Mrs. Henry Arthur Jr (Eleanor Genevieve Fosdick)\",female,0,0,0,17770,27.7208\r\n0,1,\"Cavendish, Mr. Tyrell William\",male,36,1,0,19877,78.8500\r\n1,1,\"Cavendish, Mrs. Tyrell William (Julia Florence Siegel)\",female,76,1,0,19877,78.8500\r\n0,1,\"Chaffee, Mr. Herbert Fuller\",male,46,1,0,W.E.P. 5734,61.1750\r\n1,1,\"Chaffee, Mrs. Herbert Fuller (Carrie Constance Toogood)\",female,47,1,0,W.E.P. 5734,61.1750\r\n1,1,\"Chambers, Mr. Norman Campbell\",male,27,1,0,113806,53.1000\r\n1,1,\"Chambers, Mrs. Norman Campbell (Bertha Griggs)\",female,33,1,0,113806,53.1000\r\n1,1,\"Chaudanson, Miss. Victorine\",female,36,0,0,PC 17608,262.3750\r\n1,1,\"Cherry, Miss. Gladys\",female,30,0,0,110152,86.5000\r\n1,1,\"Chevre, Mr. Paul Romaine\",male,45,0,0,PC 17594,29.7000\r\n1,1,\"Chibnall, Mrs. (Edith Martha Bowerman)\",female,0,0,1,113505,55.0000\r\n0,1,\"Chisholm, Mr. Roderick Robert Crispin\",male,0,0,0,112051,0.0000\r\n0,1,\"Clark, Mr. Walter Miller\",male,27,1,0,13508,136.7792\r\n1,1,\"Clark, Mrs. Walter Miller (Virginia McDowell)\",female,26,1,0,13508,136.7792\r\n1,1,\"Cleaver, Miss. Alice\",female,22,0,0,113781,151.5500\r\n0,1,\"Clifford, Mr. George Quincy\",male,0,0,0,110465,52.0000\r\n0,1,\"Colley, Mr. Edward Pomeroy\",male,47,0,0,5727,25.5875\r\n1,1,\"Compton, Miss. Sara Rebecca\",female,39,1,1,PC 17756,83.1583\r\n0,1,\"Compton, Mr. Alexander Taylor Jr\",male,37,1,1,PC 17756,83.1583\r\n1,1,\"Compton, Mrs. Alexander Taylor (Mary Eliza Ingersoll)\",female,64,0,2,PC 17756,83.1583\r\n1,1,\"Cornell, Mrs. Robert Clifford (Malvina Helen Lamson)\",female,55,2,0,11770,25.7000\r\n0,1,\"Crafton, Mr. John Bertram\",male,0,0,0,113791,26.5500\r\n0,1,\"Crosby, Capt. Edward Gifford\",male,70,1,1,WE/P 5735,71.0000\r\n1,1,\"Crosby, Miss. Harriet R\",female,36,0,2,WE/P 5735,71.0000\r\n1,1,\"Crosby, Mrs. Edward Gifford (Catherine Elizabeth Halstead)\",female,64,1,1,112901,26.5500\r\n0,1,\"Cumings, Mr. John Bradley\",male,39,1,0,PC 17599,71.2833\r\n1,1,\"Cumings, Mrs. John Bradley (Florence Briggs Thayer)\",female,38,1,0,PC 17599,71.2833\r\n1,1,\"Daly, Mr. Peter Denis \",male,51,0,0,113055,26.5500\r\n1,1,\"Daniel, Mr. Robert Williams\",male,27,0,0,113804,30.5000\r\n1,1,\"Daniels, Miss. Sarah\",female,33,0,0,113781,151.5500\r\n0,1,\"Davidson, Mr. Thornton\",male,31,1,0,F.C. 12750,52.0000\r\n1,1,\"Davidson, Mrs. Thornton (Orian Hays)\",female,27,1,2,F.C. 12750,52.0000\r\n1,1,\"Dick, Mr. Albert Adrian\",male,31,1,0,17474,57.0000\r\n1,1,\"Dick, Mrs. Albert Adrian (Vera Gillespie)\",female,17,1,0,17474,57.0000\r\n1,1,\"Dodge, Dr. Washington\",male,53,1,1,33638,81.8583\r\n1,1,\"Dodge, Master. Washington\",male,4,0,2,33638,81.8583\r\n1,1,\"Dodge, Mrs. Washington (Ruth Vidaver)\",female,54,1,1,33638,81.8583\r\n0,1,\"Douglas, Mr. Walter Donald\",male,50,1,0,PC 17761,106.4250\r\n1,1,\"Douglas, Mrs. 
Frederick Charles (Mary Helene Baxter)\",female,27,1,1,PC 17558,247.5208\r\n1,1,\"Douglas, Mrs. Walter Donald (Mahala Dutton)\",female,48,1,0,PC 17761,106.4250\r\n1,1,\"Duff Gordon, Lady. (Lucille Christiana Sutherland) (\"\"Mrs Morgan\"\")\",female,48,1,0,11755,39.6000\r\n1,1,\"Duff Gordon, Sir. Cosmo Edmund (\"\"Mr Morgan\"\")\",male,49,1,0,PC 17485,56.9292\r\n0,1,\"Dulles, Mr. William Crothers\",male,39,0,0,PC 17580,29.7000\r\n1,1,\"Earnshaw, Mrs. Boulton (Olive Potter)\",female,23,0,1,11767,83.1583\r\n1,1,\"Endres, Miss. Caroline Louise\",female,38,0,0,PC 17757,227.5250\r\n1,1,\"Eustis, Miss. Elizabeth Mussey\",female,54,1,0,36947,78.2667\r\n0,1,\"Evans, Miss. Edith Corse\",female,36,0,0,PC 17531,31.6792\r\n0,1,\"Farthing, Mr. John\",male,0,0,0,PC 17483,221.7792\r\n1,1,\"Flegenheim, Mrs. Alfred (Antoinette)\",female,0,0,0,PC 17598,31.6833\r\n1,1,\"Fleming, Miss. Margaret\",female,0,0,0,17421,110.8833\r\n1,1,\"Flynn, Mr. John Irwin (\"\"Irving\"\")\",male,36,0,0,PC 17474,26.3875\r\n0,1,\"Foreman, Mr. Benjamin Laventall\",male,30,0,0,113051,27.7500\r\n1,1,\"Fortune, Miss. Alice Elizabeth\",female,24,3,2,19950,263.0000\r\n1,1,\"Fortune, Miss. Ethel Flora\",female,28,3,2,19950,263.0000\r\n1,1,\"Fortune, Miss. Mabel Helen\",female,23,3,2,19950,263.0000\r\n0,1,\"Fortune, Mr. Charles Alexander\",male,19,3,2,19950,263.0000\r\n0,1,\"Fortune, Mr. Mark\",male,64,1,4,19950,263.0000\r\n1,1,\"Fortune, Mrs. Mark (Mary McDougald)\",female,60,1,4,19950,263.0000\r\n1,1,\"Francatelli, Miss. Laura Mabel\",female,30,0,0,PC 17485,56.9292\r\n0,1,\"Franklin, Mr. Thomas Parham\",male,0,0,0,113778,26.5500\r\n1,1,\"Frauenthal, Dr. Henry William\",male,50,2,0,PC 17611,133.6500\r\n1,1,\"Frauenthal, Mr. Isaac Gerald\",male,43,1,0,17765,27.7208\r\n1,1,\"Frauenthal, Mrs. Henry William (Clara Heinsheimer)\",female,0,1,0,PC 17611,133.6500\r\n1,1,\"Frolicher, Miss. Hedwig Margaritha\",female,22,0,2,13568,49.5000\r\n1,1,\"Frolicher-Stehli, Mr. Maxmillian\",male,60,1,1,13567,79.2000\r\n1,1,\"Frolicher-Stehli, Mrs. Maxmillian (Margaretha Emerentia Stehli)\",female,48,1,1,13567,79.2000\r\n0,1,\"Fry, Mr. Richard\",male,0,0,0,112058,0.0000\r\n0,1,\"Futrelle, Mr. Jacques Heath\",male,37,1,0,113803,53.1000\r\n1,1,\"Futrelle, Mrs. Jacques Heath (Lily May Peel)\",female,35,1,0,113803,53.1000\r\n0,1,\"Gee, Mr. Arthur H\",male,47,0,0,111320,38.5000\r\n1,1,\"Geiger, Miss. Amalie\",female,35,0,0,113503,211.5000\r\n1,1,\"Gibson, Miss. Dorothy Winifred\",female,22,0,1,112378,59.4000\r\n1,1,\"Gibson, Mrs. Leonard (Pauline C Boeson)\",female,45,0,1,112378,59.4000\r\n0,1,\"Giglio, Mr. Victor\",male,24,0,0,PC 17593,79.2000\r\n1,1,\"Goldenberg, Mr. Samuel L\",male,49,1,0,17453,89.1042\r\n1,1,\"Goldenberg, Mrs. Samuel L (Edwiga Grabowska)\",female,0,1,0,17453,89.1042\r\n0,1,\"Goldschmidt, Mr. George B\",male,71,0,0,PC 17754,34.6542\r\n1,1,\"Gracie, Col. Archibald IV\",male,53,0,0,113780,28.5000\r\n1,1,\"Graham, Miss. Margaret Edith\",female,19,0,0,112053,30.0000\r\n0,1,\"Graham, Mr. George Edward\",male,38,0,1,PC 17582,153.4625\r\n1,1,\"Graham, Mrs. William Thompson (Edith Junkins)\",female,58,0,1,PC 17582,153.4625\r\n1,1,\"Greenfield, Mr. William Bertram\",male,23,0,1,PC 17759,63.3583\r\n1,1,\"Greenfield, Mrs. Leo David (Blanche Strouse)\",female,45,0,1,PC 17759,63.3583\r\n0,1,\"Guggenheim, Mr. Benjamin\",male,46,0,0,PC 17593,79.2000\r\n1,1,\"Harder, Mr. George Achilles\",male,25,1,0,11765,55.4417\r\n1,1,\"Harder, Mrs. George Achilles (Dorothy Annan)\",female,25,1,0,11765,55.4417\r\n1,1,\"Harper, Mr. 
Henry Sleeper\",male,48,1,0,PC 17572,76.7292\r\n1,1,\"Harper, Mrs. Henry Sleeper (Myna Haxtun)\",female,49,1,0,PC 17572,76.7292\r\n0,1,\"Harrington, Mr. Charles H\",male,0,0,0,113796,42.4000\r\n0,1,\"Harris, Mr. Henry Birkhardt\",male,45,1,0,36973,83.4750\r\n1,1,\"Harris, Mrs. Henry Birkhardt (Irene Wallach)\",female,35,1,0,36973,83.4750\r\n0,1,\"Harrison, Mr. William\",male,40,0,0,112059,0.0000\r\n1,1,\"Hassab, Mr. Hammad\",male,27,0,0,PC 17572,76.7292\r\n1,1,\"Hawksford, Mr. Walter James\",male,0,0,0,16988,30.0000\r\n1,1,\"Hays, Miss. Margaret Bechstein\",female,24,0,0,11767,83.1583\r\n0,1,\"Hays, Mr. Charles Melville\",male,55,1,1,12749,93.5000\r\n1,1,\"Hays, Mrs. Charles Melville (Clara Jennings Gregg)\",female,52,1,1,12749,93.5000\r\n0,1,\"Head, Mr. Christopher\",male,42,0,0,113038,42.5000\r\n0,1,\"Hilliard, Mr. Herbert Henry\",male,0,0,0,17463,51.8625\r\n0,1,\"Hipkins, Mr. William Edward\",male,55,0,0,680,50.0000\r\n1,1,\"Hippach, Miss. Jean Gertrude\",female,16,0,1,111361,57.9792\r\n1,1,\"Hippach, Mrs. Louis Albert (Ida Sophia Fischer)\",female,44,0,1,111361,57.9792\r\n1,1,\"Hogeboom, Mrs. John C (Anna Andrews)\",female,51,1,0,13502,77.9583\r\n0,1,\"Holverson, Mr. Alexander Oskar\",male,42,1,0,113789,52.0000\r\n1,1,\"Holverson, Mrs. Alexander Oskar (Mary Aline Towner)\",female,35,1,0,113789,52.0000\r\n1,1,\"Homer, Mr. Harry (\"\"Mr E Haven\"\")\",male,35,0,0,111426,26.5500\r\n1,1,\"Hoyt, Mr. Frederick Maxfield\",male,38,1,0,19943,90.0000\r\n0,1,\"Hoyt, Mr. William Fisher\",male,0,0,0,PC 17600,30.6958\r\n1,1,\"Hoyt, Mrs. Frederick Maxfield (Jane Anne Forby)\",female,35,1,0,19943,90.0000\r\n1,1,\"Icard, Miss. Amelie\",female,38,0,0,113572,80.0000\r\n0,1,\"Isham, Miss. Ann Elizabeth\",female,50,0,0,PC 17595,28.7125\r\n1,1,\"Ismay, Mr. Joseph Bruce\",male,49,0,0,112058,0.0000\r\n0,1,\"Jones, Mr. Charles Cresson\",male,46,0,0,694,26.0000\r\n0,1,\"Julian, Mr. Henry Forbes\",male,50,0,0,113044,26.0000\r\n0,1,\"Keeping, Mr. Edwin\",male,32.5,0,0,113503,211.5000\r\n0,1,\"Kent, Mr. Edward Austin\",male,58,0,0,11771,29.7000\r\n0,1,\"Kenyon, Mr. Frederick R\",male,41,1,0,17464,51.8625\r\n1,1,\"Kenyon, Mrs. Frederick R (Marion)\",female,0,1,0,17464,51.8625\r\n1,1,\"Kimball, Mr. Edwin Nelson Jr\",male,42,1,0,11753,52.5542\r\n1,1,\"Kimball, Mrs. Edwin Nelson Jr (Gertrude Parsons)\",female,45,1,0,11753,52.5542\r\n0,1,\"Klaber, Mr. Herman\",male,0,0,0,113028,26.5500\r\n1,1,\"Kreuchen, Miss. Emilie\",female,39,0,0,24160,211.3375\r\n1,1,\"Leader, Dr. Alice (Farnham)\",female,49,0,0,17465,25.9292\r\n1,1,\"LeRoy, Miss. Bertha\",female,30,0,0,PC 17761,106.4250\r\n1,1,\"Lesurer, Mr. Gustave J\",male,35,0,0,PC 17755,512.3292\r\n0,1,\"Lewy, Mr. Ervin G\",male,0,0,0,PC 17612,27.7208\r\n0,1,\"Lindeberg-Lind, Mr. Erik Gustaf (\"\"Mr Edward Lingrey\"\")\",male,42,0,0,17475,26.5500\r\n1,1,\"Lindstrom, Mrs. Carl Johan (Sigrid Posse)\",female,55,0,0,112377,27.7208\r\n1,1,\"Lines, Miss. Mary Conover\",female,16,0,1,PC 17592,39.4000\r\n1,1,\"Lines, Mrs. Ernest H (Elizabeth Lindsey James)\",female,51,0,1,PC 17592,39.4000\r\n0,1,\"Long, Mr. Milton Clyde\",male,29,0,0,113501,30.0000\r\n1,1,\"Longley, Miss. Gretchen Fiske\",female,21,0,0,13502,77.9583\r\n0,1,\"Loring, Mr. Joseph Holland\",male,30,0,0,113801,45.5000\r\n1,1,\"Lurette, Miss. Elise\",female,58,0,0,PC 17569,146.5208\r\n1,1,\"Madill, Miss. Georgette Alexandra\",female,15,0,1,24160,211.3375\r\n0,1,\"Maguire, Mr. John Edward\",male,30,0,0,110469,26.0000\r\n1,1,\"Maioni, Miss. Roberta\",female,16,0,0,110152,86.5000\r\n1,1,\"Marechal, Mr. 
Pierre\",male,0,0,0,11774,29.7000\r\n0,1,\"Marvin, Mr. Daniel Warner\",male,19,1,0,113773,53.1000\r\n1,1,\"Marvin, Mrs. Daniel Warner (Mary Graham Carmichael Farquarson)\",female,18,1,0,113773,53.1000\r\n1,1,\"Mayne, Mlle. Berthe Antonine (\"\"Mrs de Villiers\"\")\",female,24,0,0,PC 17482,49.5042\r\n0,1,\"McCaffry, Mr. Thomas Francis\",male,46,0,0,13050,75.2417\r\n0,1,\"McCarthy, Mr. Timothy J\",male,54,0,0,17463,51.8625\r\n1,1,\"McGough, Mr. James Robert\",male,36,0,0,PC 17473,26.2875\r\n0,1,\"Meyer, Mr. Edgar Joseph\",male,28,1,0,PC 17604,82.1708\r\n1,1,\"Meyer, Mrs. Edgar Joseph (Leila Saks)\",female,0,1,0,PC 17604,82.1708\r\n0,1,\"Millet, Mr. Francis Davis\",male,65,0,0,13509,26.5500\r\n0,1,\"Minahan, Dr. William Edward\",male,44,2,0,19928,90.0000\r\n1,1,\"Minahan, Miss. Daisy E\",female,33,1,0,19928,90.0000\r\n1,1,\"Minahan, Mrs. William Edward (Lillian E Thorpe)\",female,37,1,0,19928,90.0000\r\n1,1,\"Mock, Mr. Philipp Edmund\",male,30,1,0,13236,57.7500\r\n0,1,\"Molson, Mr. Harry Markland\",male,55,0,0,113787,30.5000\r\n0,1,\"Moore, Mr. Clarence Bloomfield\",male,47,0,0,113796,42.4000\r\n0,1,\"Natsch, Mr. Charles H\",male,37,0,1,PC 17596,29.7000\r\n1,1,\"Newell, Miss. Madeleine\",female,31,1,0,35273,113.2750\r\n1,1,\"Newell, Miss. Marjorie\",female,23,1,0,35273,113.2750\r\n0,1,\"Newell, Mr. Arthur Webster\",male,58,0,2,35273,113.2750\r\n1,1,\"Newsom, Miss. Helen Monypeny\",female,19,0,2,11752,26.2833\r\n0,1,\"Nicholson, Mr. Arthur Ernest\",male,64,0,0,693,26.0000\r\n1,1,\"Oliva y Ocana, Dona. Fermina\",female,39,0,0,PC 17758,108.9000\r\n1,1,\"Omont, Mr. Alfred Fernand\",male,0,0,0,F.C. 12998,25.7417\r\n1,1,\"Ostby, Miss. Helene Ragnhild\",female,22,0,1,113509,61.9792\r\n0,1,\"Ostby, Mr. Engelhart Cornelius\",male,65,0,1,113509,61.9792\r\n0,1,\"Ovies y Rodriguez, Mr. Servando\",male,28.5,0,0,PC 17562,27.7208\r\n0,1,\"Parr, Mr. William Henry Marsh\",male,0,0,0,112052,0.0000\r\n0,1,\"Partner, Mr. Austen\",male,45.5,0,0,113043,28.5000\r\n0,1,\"Payne, Mr. Vivian Ponsonby\",male,23,0,0,12749,93.5000\r\n0,1,\"Pears, Mr. Thomas Clinton\",male,29,1,0,113776,66.6000\r\n1,1,\"Pears, Mrs. Thomas (Edith Wearne)\",female,22,1,0,113776,66.6000\r\n0,1,\"Penasco y Castellana, Mr. Victor de Satode\",male,18,1,0,PC 17758,108.9000\r\n1,1,\"Penasco y Castellana, Mrs. Victor de Satode (Maria Josefa Perez de Soto y Vallejo)\",female,17,1,0,PC 17758,108.9000\r\n1,1,\"Perreault, Miss. Anne\",female,30,0,0,12749,93.5000\r\n1,1,\"Peuchen, Major. Arthur Godfrey\",male,52,0,0,113786,30.5000\r\n0,1,\"Porter, Mr. Walter Chamberlain\",male,47,0,0,110465,52.0000\r\n1,1,\"Potter, Mrs. Thomas Jr (Lily Alexenia Wilson)\",female,56,0,1,11767,83.1583\r\n0,1,\"Reuchlin, Jonkheer. John George\",male,38,0,0,19972,0.0000\r\n1,1,\"Rheims, Mr. George Alexander Lucien\",male,0,0,0,PC 17607,39.6000\r\n0,1,\"Ringhini, Mr. Sante\",male,22,0,0,PC 17760,135.6333\r\n0,1,\"Robbins, Mr. Victor\",male,0,0,0,PC 17757,227.5250\r\n1,1,\"Robert, Mrs. Edward Scott (Elisabeth Walton McMillan)\",female,43,0,1,24160,211.3375\r\n0,1,\"Roebling, Mr. Washington Augustus II\",male,31,0,0,PC 17590,50.4958\r\n1,1,\"Romaine, Mr. Charles Hallace (\"\"Mr C Rolmane\"\")\",male,45,0,0,111428,26.5500\r\n0,1,\"Rood, Mr. Hugh Roscoe\",male,0,0,0,113767,50.0000\r\n1,1,\"Rosenbaum, Miss. Edith Louise\",female,33,0,0,PC 17613,27.7208\r\n0,1,\"Rosenshine, Mr. George (\"\"Mr George Thorne\"\")\",male,46,0,0,PC 17585,79.2000\r\n0,1,\"Ross, Mr. John Hugo\",male,36,0,0,13049,40.1250\r\n1,1,\"Rothes, the Countess. 
of (Lucy Noel Martha Dyer-Edwards)\",female,33,0,0,110152,86.5000\r\n0,1,\"Rothschild, Mr. Martin\",male,55,1,0,PC 17603,59.4000\r\n1,1,\"Rothschild, Mrs. Martin (Elizabeth L. Barrett)\",female,54,1,0,PC 17603,59.4000\r\n0,1,\"Rowe, Mr. Alfred G\",male,33,0,0,113790,26.5500\r\n1,1,\"Ryerson, Master. John Borie\",male,13,2,2,PC 17608,262.3750\r\n1,1,\"Ryerson, Miss. Emily Borie\",female,18,2,2,PC 17608,262.3750\r\n1,1,\"Ryerson, Miss. Susan Parker \"\"Suzette\"\"\",female,21,2,2,PC 17608,262.3750\r\n0,1,\"Ryerson, Mr. Arthur Larned\",male,61,1,3,PC 17608,262.3750\r\n1,1,\"Ryerson, Mrs. Arthur Larned (Emily Maria Borie)\",female,48,1,3,PC 17608,262.3750\r\n1,1,\"Saalfeld, Mr. Adolphe\",male,0,0,0,19988,30.5000\r\n1,1,\"Sagesser, Mlle. Emma\",female,24,0,0,PC 17477,69.3000\r\n1,1,\"Salomon, Mr. Abraham L\",male,0,0,0,111163,26.0000\r\n1,1,\"Schabert, Mrs. Paul (Emma Mock)\",female,35,1,0,13236,57.7500\r\n1,1,\"Serepeca, Miss. Augusta\",female,30,0,0,113798,31.0000\r\n1,1,\"Seward, Mr. Frederic Kimber\",male,34,0,0,113794,26.5500\r\n1,1,\"Shutes, Miss. Elizabeth W\",female,40,0,0,PC 17582,153.4625\r\n1,1,\"Silverthorne, Mr. Spencer Victor\",male,35,0,0,PC 17475,26.2875\r\n0,1,\"Silvey, Mr. William Baird\",male,50,1,0,13507,55.9000\r\n1,1,\"Silvey, Mrs. William Baird (Alice Munger)\",female,39,1,0,13507,55.9000\r\n1,1,\"Simonius-Blumer, Col. Oberst Alfons\",male,56,0,0,13213,35.5000\r\n1,1,\"Sloper, Mr. William Thompson\",male,28,0,0,113788,35.5000\r\n0,1,\"Smart, Mr. John Montgomery\",male,56,0,0,113792,26.5500\r\n0,1,\"Smith, Mr. James Clinch\",male,56,0,0,17764,30.6958\r\n0,1,\"Smith, Mr. Lucien Philip\",male,24,1,0,13695,60.0000\r\n0,1,\"Smith, Mr. Richard William\",male,0,0,0,113056,26.0000\r\n1,1,\"Smith, Mrs. Lucien Philip (Mary Eloise Hughes)\",female,18,1,0,13695,60.0000\r\n1,1,\"Snyder, Mr. John Pillsbury\",male,24,1,0,21228,82.2667\r\n1,1,\"Snyder, Mrs. John Pillsbury (Nelle Stevenson)\",female,23,1,0,21228,82.2667\r\n1,1,\"Spedden, Master. Robert Douglas\",male,6,0,2,16966,134.5000\r\n1,1,\"Spedden, Mr. Frederic Oakley\",male,45,1,1,16966,134.5000\r\n1,1,\"Spedden, Mrs. Frederic Oakley (Margaretta Corning Stone)\",female,40,1,1,16966,134.5000\r\n0,1,\"Spencer, Mr. William Augustus\",male,57,1,0,PC 17569,146.5208\r\n1,1,\"Spencer, Mrs. William Augustus (Marie Eugenie)\",female,0,1,0,PC 17569,146.5208\r\n1,1,\"Stahelin-Maeglin, Dr. Max\",male,32,0,0,13214,30.5000\r\n0,1,\"Stead, Mr. William Thomas\",male,62,0,0,113514,26.5500\r\n1,1,\"Stengel, Mr. Charles Emil Henry\",male,54,1,0,11778,55.4417\r\n1,1,\"Stengel, Mrs. Charles Emil Henry (Annie May Morris)\",female,43,1,0,11778,55.4417\r\n1,1,\"Stephenson, Mrs. Walter Bertram (Martha Eustis)\",female,52,1,0,36947,78.2667\r\n0,1,\"Stewart, Mr. Albert A\",male,0,0,0,PC 17605,27.7208\r\n1,1,\"Stone, Mrs. George Nelson (Martha Evelyn)\",female,62,0,0,113572,80.0000\r\n0,1,\"Straus, Mr. Isidor\",male,67,1,0,PC 17483,221.7792\r\n0,1,\"Straus, Mrs. Isidor (Rosalie Ida Blun)\",female,63,1,0,PC 17483,221.7792\r\n0,1,\"Sutton, Mr. Frederick\",male,61,0,0,36963,32.3208\r\n1,1,\"Swift, Mrs. Frederick Joel (Margaret Welles Barron)\",female,48,0,0,17466,25.9292\r\n1,1,\"Taussig, Miss. Ruth\",female,18,0,2,110413,79.6500\r\n0,1,\"Taussig, Mr. Emil\",male,52,1,1,110413,79.6500\r\n1,1,\"Taussig, Mrs. Emil (Tillie Mandelbaum)\",female,39,1,1,110413,79.6500\r\n1,1,\"Taylor, Mr. Elmer Zebley\",male,48,1,0,19996,52.0000\r\n1,1,\"Taylor, Mrs. Elmer Zebley (Juliet Cummins Wright)\",female,0,1,0,19996,52.0000\r\n0,1,\"Thayer, Mr. 
John Borland\",male,49,1,1,17421,110.8833\r\n1,1,\"Thayer, Mr. John Borland Jr\",male,17,0,2,17421,110.8833\r\n1,1,\"Thayer, Mrs. John Borland (Marian Longstreth Morris)\",female,39,1,1,17421,110.8833\r\n1,1,\"Thorne, Mrs. Gertrude Maybelle\",female,0,0,0,PC 17585,79.2000\r\n1,1,\"Tucker, Mr. Gilbert Milligan Jr\",male,31,0,0,2543,28.5375\r\n0,1,\"Uruchurtu, Don. Manuel E\",male,40,0,0,PC 17601,27.7208\r\n0,1,\"Van der hoef, Mr. Wyckoff\",male,61,0,0,111240,33.5000\r\n0,1,\"Walker, Mr. William Anderson\",male,47,0,0,36967,34.0208\r\n1,1,\"Ward, Miss. Anna\",female,35,0,0,PC 17755,512.3292\r\n0,1,\"Warren, Mr. Frank Manley\",male,64,1,0,110813,75.2500\r\n1,1,\"Warren, Mrs. Frank Manley (Anna Sophia Atkinson)\",female,60,1,0,110813,75.2500\r\n0,1,\"Weir, Col. John\",male,60,0,0,113800,26.5500\r\n0,1,\"White, Mr. Percival Wayland\",male,54,0,1,35281,77.2875\r\n0,1,\"White, Mr. Richard Frasar\",male,21,0,1,35281,77.2875\r\n1,1,\"White, Mrs. John Stuart (Ella Holmes)\",female,55,0,0,PC 17760,135.6333\r\n1,1,\"Wick, Miss. Mary Natalie\",female,31,0,2,36928,164.8667\r\n0,1,\"Wick, Mr. George Dennick\",male,57,1,1,36928,164.8667\r\n1,1,\"Wick, Mrs. George Dennick (Mary Hitchcock)\",female,45,1,1,36928,164.8667\r\n0,1,\"Widener, Mr. George Dunton\",male,50,1,1,113503,211.5000\r\n0,1,\"Widener, Mr. Harry Elkins\",male,27,0,2,113503,211.5000\r\n1,1,\"Widener, Mrs. George Dunton (Eleanor Elkins)\",female,50,1,1,113503,211.5000\r\n1,1,\"Willard, Miss. Constance\",female,21,0,0,113795,26.5500\r\n0,1,\"Williams, Mr. Charles Duane\",male,51,0,1,PC 17597,61.3792\r\n1,1,\"Williams, Mr. Richard Norris II\",male,21,0,1,PC 17597,61.3792\r\n0,1,\"Williams-Lambert, Mr. Fletcher Fellows\",male,0,0,0,113510,35.0000\r\n1,1,\"Wilson, Miss. Helen Alice\",female,31,0,0,16966,134.5000\r\n1,1,\"Woolner, Mr. Hugh\",male,0,0,0,19947,35.5000\r\n0,1,\"Wright, Mr. George\",male,62,0,0,113807,26.5500\r\n1,1,\"Young, Miss. Marie Grice\",female,36,0,0,PC 17760,135.6333\r\n0,2,\"Abelson, Mr. Samuel\",male,30,1,0,P/PP 3381,24.0000\r\n1,2,\"Abelson, Mrs. Samuel (Hannah Wizosky)\",female,28,1,0,P/PP 3381,24.0000\r\n0,2,\"Aldworth, Mr. Charles Augustus\",male,30,0,0,248744,13.0000\r\n0,2,\"Andrew, Mr. Edgardo Samuel\",male,18,0,0,231945,11.5000\r\n0,2,\"Andrew, Mr. Frank Thomas\",male,25,0,0,C.A. 34050,10.5000\r\n0,2,\"Angle, Mr. William A\",male,34,1,0,226875,26.0000\r\n1,2,\"Angle, Mrs. William A (Florence \"\"Mary\"\" Agnes Hughes)\",female,36,1,0,226875,26.0000\r\n0,2,\"Ashby, Mr. John\",male,57,0,0,244346,13.0000\r\n0,2,\"Bailey, Mr. Percy Andrew\",male,18,0,0,29108,11.5000\r\n0,2,\"Baimbrigge, Mr. Charles Robert\",male,23,0,0,C.A. 31030,10.5000\r\n1,2,\"Ball, Mrs. (Ada E Hall)\",female,36,0,0,28551,13.0000\r\n0,2,\"Banfield, Mr. Frederick James\",male,28,0,0,C.A./SOTON 34068,10.5000\r\n0,2,\"Bateman, Rev. Robert James\",male,51,0,0,S.O.P. 1166,12.5250\r\n1,2,\"Beane, Mr. Edward\",male,32,1,0,2908,26.0000\r\n1,2,\"Beane, Mrs. Edward (Ethel Clarke)\",female,19,1,0,2908,26.0000\r\n0,2,\"Beauchamp, Mr. Henry James\",male,28,0,0,244358,26.0000\r\n1,2,\"Becker, Master. Richard F\",male,1,2,1,230136,39.0000\r\n1,2,\"Becker, Miss. Marion Louise\",female,4,2,1,230136,39.0000\r\n1,2,\"Becker, Miss. Ruth Elizabeth\",female,12,2,1,230136,39.0000\r\n1,2,\"Becker, Mrs. Allen Oliver (Nellie E Baumgardner)\",female,36,0,3,230136,39.0000\r\n1,2,\"Beesley, Mr. Lawrence\",male,34,0,0,248698,13.0000\r\n1,2,\"Bentham, Miss. Lilian W\",female,19,0,0,28404,13.0000\r\n0,2,\"Berriman, Mr. William John\",male,23,0,0,28425,13.0000\r\n0,2,\"Botsford, Mr. 
William Hull\",male,26,0,0,237670,13.0000\r\n0,2,\"Bowenur, Mr. Solomon\",male,42,0,0,211535,13.0000\r\n0,2,\"Bracken, Mr. James H\",male,27,0,0,220367,13.0000\r\n1,2,\"Brown, Miss. Amelia \"\"Mildred\"\"\",female,24,0,0,248733,13.0000\r\n1,2,\"Brown, Miss. Edith Eileen\",female,15,0,2,29750,39.0000\r\n0,2,\"Brown, Mr. Thomas William Solomon\",male,60,1,1,29750,39.0000\r\n1,2,\"Brown, Mrs. Thomas William Solomon (Elizabeth Catherine Ford)\",female,40,1,1,29750,39.0000\r\n1,2,\"Bryhl, Miss. Dagmar Jenny Ingeborg \",female,20,1,0,236853,26.0000\r\n0,2,\"Bryhl, Mr. Kurt Arnold Gottfrid\",male,25,1,0,236853,26.0000\r\n1,2,\"Buss, Miss. Kate\",female,36,0,0,27849,13.0000\r\n0,2,\"Butler, Mr. Reginald Fenton\",male,25,0,0,234686,13.0000\r\n0,2,\"Byles, Rev. Thomas Roussel Davids\",male,42,0,0,244310,13.0000\r\n1,2,\"Bystrom, Mrs. (Karolina)\",female,42,0,0,236852,13.0000\r\n1,2,\"Caldwell, Master. Alden Gates\",male,0.8333,0,2,248738,29.0000\r\n1,2,\"Caldwell, Mr. Albert Francis\",male,26,1,1,248738,29.0000\r\n1,2,\"Caldwell, Mrs. Albert Francis (Sylvia Mae Harbaugh)\",female,22,1,1,248738,29.0000\r\n1,2,\"Cameron, Miss. Clear Annie\",female,35,0,0,F.C.C. 13528,21.0000\r\n0,2,\"Campbell, Mr. William\",male,0,0,0,239853,0.0000\r\n0,2,\"Carbines, Mr. William\",male,19,0,0,28424,13.0000\r\n0,2,\"Carter, Mrs. Ernest Courtenay (Lilian Hughes)\",female,44,1,0,244252,26.0000\r\n0,2,\"Carter, Rev. Ernest Courtenay\",male,54,1,0,244252,26.0000\r\n0,2,\"Chapman, Mr. Charles Henry\",male,52,0,0,248731,13.5000\r\n0,2,\"Chapman, Mr. John Henry\",male,37,1,0,SC/AH 29037,26.0000\r\n0,2,\"Chapman, Mrs. John Henry (Sara Elizabeth Lawry)\",female,29,1,0,SC/AH 29037,26.0000\r\n1,2,\"Christy, Miss. Julie Rachel\",female,25,1,1,237789,30.0000\r\n1,2,\"Christy, Mrs. (Alice Frances)\",female,45,0,2,237789,30.0000\r\n0,2,\"Clarke, Mr. Charles Valentine\",male,29,1,0,2003,26.0000\r\n1,2,\"Clarke, Mrs. Charles V (Ada Maria Winfield)\",female,28,1,0,2003,26.0000\r\n0,2,\"Coleridge, Mr. Reginald Charles\",male,29,0,0,W./C. 14263,10.5000\r\n0,2,\"Collander, Mr. Erik Gustaf\",male,28,0,0,248740,13.0000\r\n1,2,\"Collett, Mr. Sidney C Stuart\",male,24,0,0,28034,10.5000\r\n1,2,\"Collyer, Miss. Marjorie \"\"Lottie\"\"\",female,8,0,2,C.A. 31921,26.2500\r\n0,2,\"Collyer, Mr. Harvey\",male,31,1,1,C.A. 31921,26.2500\r\n1,2,\"Collyer, Mrs. Harvey (Charlotte Annie Tate)\",female,31,1,1,C.A. 31921,26.2500\r\n1,2,\"Cook, Mrs. (Selena Rogers)\",female,22,0,0,W./C. 14266,10.5000\r\n0,2,\"Corbett, Mrs. Walter H (Irene Colvin)\",female,30,0,0,237249,13.0000\r\n0,2,\"Corey, Mrs. Percy C (Mary Phyllis Elizabeth Miller)\",female,0,0,0,F.C.C. 13534,21.0000\r\n0,2,\"Cotterill, Mr. Henry \"\"Harry\"\"\",male,21,0,0,29107,11.5000\r\n0,2,\"Cunningham, Mr. Alfred Fleming\",male,0,0,0,239853,0.0000\r\n1,2,\"Davies, Master. John Morgan Jr\",male,8,1,1,C.A. 33112,36.7500\r\n0,2,\"Davies, Mr. Charles Henry\",male,18,0,0,S.O.C. 14879,73.5000\r\n1,2,\"Davies, Mrs. John Morgan (Elizabeth Agnes Mary White) \",female,48,0,2,C.A. 33112,36.7500\r\n1,2,\"Davis, Miss. Mary\",female,28,0,0,237668,13.0000\r\n0,2,\"de Brito, Mr. Jose Joaquim\",male,32,0,0,244360,13.0000\r\n0,2,\"Deacon, Mr. Percy William\",male,17,0,0,S.O.C. 14879,73.5000\r\n0,2,\"del Carlo, Mr. Sebastiano\",male,29,1,0,SC/PARIS 2167,27.7208\r\n1,2,\"del Carlo, Mrs. Sebastiano (Argenia Genovesi)\",female,24,1,0,SC/PARIS 2167,27.7208\r\n0,2,\"Denbury, Mr. Herbert\",male,25,0,0,C.A. 31029,31.5000\r\n0,2,\"Dibden, Mr. William\",male,18,0,0,S.O.C. 14879,73.5000\r\n1,2,\"Doling, Miss. 
Elsie\",female,18,0,1,231919,23.0000\r\n1,2,\"Doling, Mrs. John T (Ada Julia Bone)\",female,34,0,1,231919,23.0000\r\n0,2,\"Downton, Mr. William James\",male,54,0,0,28403,26.0000\r\n1,2,\"Drew, Master. Marshall Brines\",male,8,0,2,28220,32.5000\r\n0,2,\"Drew, Mr. James Vivian\",male,42,1,1,28220,32.5000\r\n1,2,\"Drew, Mrs. James Vivian (Lulu Thorne Christian)\",female,34,1,1,28220,32.5000\r\n1,2,\"Duran y More, Miss. Asuncion\",female,27,1,0,SC/PARIS 2149,13.8583\r\n1,2,\"Duran y More, Miss. Florentina\",female,30,1,0,SC/PARIS 2148,13.8583\r\n0,2,\"Eitemiller, Mr. George Floyd\",male,23,0,0,29751,13.0000\r\n0,2,\"Enander, Mr. Ingvar\",male,21,0,0,236854,13.0000\r\n0,2,\"Fahlstrom, Mr. Arne Jonas\",male,18,0,0,236171,13.0000\r\n0,2,\"Faunthorpe, Mr. Harry\",male,40,1,0,2926,26.0000\r\n1,2,\"Faunthorpe, Mrs. Lizzie (Elizabeth Anne Wilkinson)\",female,29,1,0,2926,26.0000\r\n0,2,\"Fillbrook, Mr. Joseph Charles\",male,18,0,0,C.A. 15185,10.5000\r\n0,2,\"Fox, Mr. Stanley Hubert\",male,36,0,0,229236,13.0000\r\n0,2,\"Frost, Mr. Anthony Wood \"\"Archie\"\"\",male,0,0,0,239854,0.0000\r\n0,2,\"Funk, Miss. Annie Clemmer\",female,38,0,0,237671,13.0000\r\n0,2,\"Fynney, Mr. Joseph J\",male,35,0,0,239865,26.0000\r\n0,2,\"Gale, Mr. Harry\",male,38,1,0,28664,21.0000\r\n0,2,\"Gale, Mr. Shadrach\",male,34,1,0,28664,21.0000\r\n1,2,\"Garside, Miss. Ethel\",female,34,0,0,243880,13.0000\r\n0,2,\"Gaskell, Mr. Alfred\",male,16,0,0,239865,26.0000\r\n0,2,\"Gavey, Mr. Lawrence\",male,26,0,0,31028,10.5000\r\n0,2,\"Gilbert, Mr. William\",male,47,0,0,C.A. 30769,10.5000\r\n0,2,\"Giles, Mr. Edgar\",male,21,1,0,28133,11.5000\r\n0,2,\"Giles, Mr. Frederick Edward\",male,21,1,0,28134,11.5000\r\n0,2,\"Giles, Mr. Ralph\",male,24,0,0,248726,13.5000\r\n0,2,\"Gill, Mr. John William\",male,24,0,0,233866,13.0000\r\n0,2,\"Gillespie, Mr. William Henry\",male,34,0,0,12233,13.0000\r\n0,2,\"Givard, Mr. Hans Kristensen\",male,30,0,0,250646,13.0000\r\n0,2,\"Greenberg, Mr. Samuel\",male,52,0,0,250647,13.0000\r\n0,2,\"Hale, Mr. Reginald\",male,30,0,0,250653,13.0000\r\n1,2,\"Hamalainen, Master. Viljo\",male,0.6667,1,1,250649,14.5000\r\n1,2,\"Hamalainen, Mrs. William (Anna)\",female,24,0,2,250649,14.5000\r\n0,2,\"Harbeck, Mr. William H\",male,44,0,0,248746,13.0000\r\n1,2,\"Harper, Miss. Annie Jessie \"\"Nina\"\"\",female,6,0,1,248727,33.0000\r\n0,2,\"Harper, Rev. John\",male,28,0,1,248727,33.0000\r\n1,2,\"Harris, Mr. George\",male,62,0,0,S.W./PP 752,10.5000\r\n0,2,\"Harris, Mr. Walter\",male,30,0,0,W/C 14208,10.5000\r\n1,2,\"Hart, Miss. Eva Miriam\",female,7,0,2,F.C.C. 13529,26.2500\r\n0,2,\"Hart, Mr. Benjamin\",male,43,1,1,F.C.C. 13529,26.2500\r\n1,2,\"Hart, Mrs. Benjamin (Esther Ada Bloomfield)\",female,45,1,1,F.C.C. 13529,26.2500\r\n1,2,\"Herman, Miss. Alice\",female,24,1,2,220845,65.0000\r\n1,2,\"Herman, Miss. Kate\",female,24,1,2,220845,65.0000\r\n0,2,\"Herman, Mr. Samuel\",male,49,1,2,220845,65.0000\r\n1,2,\"Herman, Mrs. Samuel (Jane Laver)\",female,48,1,2,220845,65.0000\r\n1,2,\"Hewlett, Mrs. (Mary D Kingcome) \",female,55,0,0,248706,16.0000\r\n0,2,\"Hickman, Mr. Leonard Mark\",male,24,2,0,S.O.C. 14879,73.5000\r\n0,2,\"Hickman, Mr. Lewis\",male,32,2,0,S.O.C. 14879,73.5000\r\n0,2,\"Hickman, Mr. Stanley George\",male,21,2,0,S.O.C. 14879,73.5000\r\n0,2,\"Hiltunen, Miss. Marta\",female,18,1,1,250650,13.0000\r\n1,2,\"Hocking, Miss. Ellen \"\"Nellie\"\"\",female,20,2,1,29105,23.0000\r\n0,2,\"Hocking, Mr. Richard George\",male,23,2,1,29104,11.5000\r\n0,2,\"Hocking, Mr. Samuel James Metcalfe\",male,36,0,0,242963,13.0000\r\n1,2,\"Hocking, Mrs. 
Elizabeth (Eliza Needs)\",female,54,1,3,29105,23.0000\r\n0,2,\"Hodges, Mr. Henry Price\",male,50,0,0,250643,13.0000\r\n0,2,\"Hold, Mr. Stephen\",male,44,1,0,26707,26.0000\r\n1,2,\"Hold, Mrs. Stephen (Annie Margaret Hill)\",female,29,1,0,26707,26.0000\r\n0,2,\"Hood, Mr. Ambrose Jr\",male,21,0,0,S.O.C. 14879,73.5000\r\n1,2,\"Hosono, Mr. Masabumi\",male,42,0,0,237798,13.0000\r\n0,2,\"Howard, Mr. Benjamin\",male,63,1,0,24065,26.0000\r\n0,2,\"Howard, Mrs. Benjamin (Ellen Truelove Arman)\",female,60,1,0,24065,26.0000\r\n0,2,\"Hunt, Mr. George Henry\",male,33,0,0,SCO/W 1585,12.2750\r\n1,2,\"Ilett, Miss. Bertha\",female,17,0,0,SO/C 14885,10.5000\r\n0,2,\"Jacobsohn, Mr. Sidney Samuel\",male,42,1,0,243847,27.0000\r\n1,2,\"Jacobsohn, Mrs. Sidney Samuel (Amy Frances Christy)\",female,24,2,1,243847,27.0000\r\n0,2,\"Jarvis, Mr. John Denzil\",male,47,0,0,237565,15.0000\r\n0,2,\"Jefferys, Mr. Clifford Thomas\",male,24,2,0,C.A. 31029,31.5000\r\n0,2,\"Jefferys, Mr. Ernest Wilfred\",male,22,2,0,C.A. 31029,31.5000\r\n0,2,\"Jenkin, Mr. Stephen Curnow\",male,32,0,0,C.A. 33111,10.5000\r\n1,2,\"Jerwan, Mrs. Amin S (Marie Marthe Thuillard)\",female,23,0,0,SC/AH Basle 541,13.7917\r\n0,2,\"Kantor, Mr. Sinai\",male,34,1,0,244367,26.0000\r\n1,2,\"Kantor, Mrs. Sinai (Miriam Sternin)\",female,24,1,0,244367,26.0000\r\n0,2,\"Karnes, Mrs. J Frank (Claire Bennett)\",female,22,0,0,F.C.C. 13534,21.0000\r\n1,2,\"Keane, Miss. Nora A\",female,0,0,0,226593,12.3500\r\n0,2,\"Keane, Mr. Daniel\",male,35,0,0,233734,12.3500\r\n1,2,\"Kelly, Mrs. Florence \"\"Fannie\"\"\",female,45,0,0,223596,13.5000\r\n0,2,\"Kirkland, Rev. Charles Leonard\",male,57,0,0,219533,12.3500\r\n0,2,\"Knight, Mr. Robert J\",male,0,0,0,239855,0.0000\r\n0,2,\"Kvillner, Mr. Johan Henrik Johannesson\",male,31,0,0,C.A. 18723,10.5000\r\n0,2,\"Lahtinen, Mrs. William (Anna Sylfven)\",female,26,1,1,250651,26.0000\r\n0,2,\"Lahtinen, Rev. William\",male,30,1,1,250651,26.0000\r\n0,2,\"Lamb, Mr. John Joseph\",male,0,0,0,240261,10.7083\r\n1,2,\"Laroche, Miss. Louise\",female,1,1,2,SC/Paris 2123,41.5792\r\n1,2,\"Laroche, Miss. Simonne Marie Anne Andree\",female,3,1,2,SC/Paris 2123,41.5792\r\n0,2,\"Laroche, Mr. Joseph Philippe Lemercier\",male,25,1,2,SC/Paris 2123,41.5792\r\n1,2,\"Laroche, Mrs. Joseph (Juliette Marie Louise Lafargue)\",female,22,1,2,SC/Paris 2123,41.5792\r\n1,2,\"Lehmann, Miss. Bertha\",female,17,0,0,SC 1748,12.0000\r\n1,2,\"Leitch, Miss. Jessie Wills\",female,0,0,0,248727,33.0000\r\n1,2,\"Lemore, Mrs. (Amelia Milley)\",female,34,0,0,C.A. 34260,10.5000\r\n0,2,\"Levy, Mr. Rene Jacques\",male,36,0,0,SC/Paris 2163,12.8750\r\n0,2,\"Leyson, Mr. Robert William Norman\",male,24,0,0,C.A. 29566,10.5000\r\n0,2,\"Lingane, Mr. John\",male,61,0,0,235509,12.3500\r\n0,2,\"Louch, Mr. Charles Alexander\",male,50,1,0,SC/AH 3085,26.0000\r\n1,2,\"Louch, Mrs. Charles Alexander (Alice Adelaide Slow)\",female,42,1,0,SC/AH 3085,26.0000\r\n0,2,\"Mack, Mrs. (Mary)\",female,57,0,0,S.O./P.P. 3,10.5000\r\n0,2,\"Malachard, Mr. Noel\",male,0,0,0,237735,15.0458\r\n1,2,\"Mallet, Master. Andre\",male,1,0,2,S.C./PARIS 2079,37.0042\r\n0,2,\"Mallet, Mr. Albert\",male,31,1,1,S.C./PARIS 2079,37.0042\r\n1,2,\"Mallet, Mrs. Albert (Antoinette Magnin)\",female,24,1,1,S.C./PARIS 2079,37.0042\r\n0,2,\"Mangiavacchi, Mr. Serafino Emilio\",male,0,0,0,SC/A.3 2861,15.5792\r\n0,2,\"Matthews, Mr. William John\",male,30,0,0,28228,13.0000\r\n0,2,\"Maybery, Mr. Frank Hubert\",male,40,0,0,239059,16.0000\r\n0,2,\"McCrae, Mr. Arthur Gordon\",male,32,0,0,237216,13.5000\r\n0,2,\"McCrie, Mr. 
James Matthew\",male,30,0,0,233478,13.0000\r\n0,2,\"McKane, Mr. Peter David\",male,46,0,0,28403,26.0000\r\n1,2,\"Mellinger, Miss. Madeleine Violet\",female,13,0,1,250644,19.5000\r\n1,2,\"Mellinger, Mrs. (Elizabeth Anne Maidment)\",female,41,0,1,250644,19.5000\r\n1,2,\"Mellors, Mr. William John\",male,19,0,0,SW/PP 751,10.5000\r\n0,2,\"Meyer, Mr. August\",male,39,0,0,248723,13.0000\r\n0,2,\"Milling, Mr. Jacob Christian\",male,48,0,0,234360,13.0000\r\n0,2,\"Mitchell, Mr. Henry Michael\",male,70,0,0,C.A. 24580,10.5000\r\n0,2,\"Montvila, Rev. Juozas\",male,27,0,0,211536,13.0000\r\n0,2,\"Moraweck, Dr. Ernest\",male,54,0,0,29011,14.0000\r\n0,2,\"Morley, Mr. Henry Samuel (\"\"Mr Henry Marshall\"\")\",male,39,0,0,250655,26.0000\r\n0,2,\"Mudd, Mr. Thomas Charles\",male,16,0,0,S.O./P.P. 3,10.5000\r\n0,2,\"Myles, Mr. Thomas Francis\",male,62,0,0,240276,9.6875\r\n0,2,\"Nasser, Mr. Nicholas\",male,32.5,1,0,237736,30.0708\r\n1,2,\"Nasser, Mrs. Nicholas (Adele Achem)\",female,14,1,0,237736,30.0708\r\n1,2,\"Navratil, Master. Edmond Roger\",male,2,1,1,230080,26.0000\r\n1,2,\"Navratil, Master. Michel M\",male,3,1,1,230080,26.0000\r\n0,2,\"Navratil, Mr. Michel (\"\"Louis M Hoffman\"\")\",male,36.5,0,2,230080,26.0000\r\n0,2,\"Nesson, Mr. Israel\",male,26,0,0,244368,13.0000\r\n0,2,\"Nicholls, Mr. Joseph Charles\",male,19,1,1,C.A. 33112,36.7500\r\n0,2,\"Norman, Mr. Robert Douglas\",male,28,0,0,218629,13.5000\r\n1,2,\"Nourney, Mr. Alfred (\"\"Baron von Drachstedt\"\")\",male,20,0,0,SC/PARIS 2166,13.8625\r\n1,2,\"Nye, Mrs. (Elizabeth Ramell)\",female,29,0,0,C.A. 29395,10.5000\r\n0,2,\"Otter, Mr. Richard\",male,39,0,0,28213,13.0000\r\n1,2,\"Oxenham, Mr. Percy Thomas\",male,22,0,0,W./C. 14260,10.5000\r\n1,2,\"Padro y Manent, Mr. Julian\",male,0,0,0,SC/PARIS 2146,13.8625\r\n0,2,\"Pain, Dr. Alfred\",male,23,0,0,244278,10.5000\r\n1,2,\"Pallas y Castello, Mr. Emilio\",male,29,0,0,SC/PARIS 2147,13.8583\r\n0,2,\"Parker, Mr. Clifford Richard\",male,28,0,0,SC 14888,10.5000\r\n0,2,\"Parkes, Mr. Francis \"\"Frank\"\"\",male,0,0,0,239853,0.0000\r\n1,2,\"Parrish, Mrs. (Lutie Davis)\",female,50,0,1,230433,26.0000\r\n0,2,\"Pengelly, Mr. Frederick William\",male,19,0,0,28665,10.5000\r\n0,2,\"Pernot, Mr. Rene\",male,0,0,0,SC/PARIS 2131,15.0500\r\n0,2,\"Peruschitz, Rev. Joseph Maria\",male,41,0,0,237393,13.0000\r\n1,2,\"Phillips, Miss. Alice Frances Louisa\",female,21,0,1,S.O./P.P. 2,21.0000\r\n1,2,\"Phillips, Miss. Kate Florence (\"\"Mrs Kate Louise Phillips Marshall\"\")\",female,19,0,0,250655,26.0000\r\n0,2,\"Phillips, Mr. Escott Robert\",male,43,0,1,S.O./P.P. 2,21.0000\r\n1,2,\"Pinsky, Mrs. (Rosa)\",female,32,0,0,234604,13.0000\r\n0,2,\"Ponesell, Mr. Martin\",male,34,0,0,250647,13.0000\r\n1,2,\"Portaluppi, Mr. Emilio Ilario Giuseppe\",male,30,0,0,C.A. 34644,12.7375\r\n0,2,\"Pulbaum, Mr. Franz\",male,27,0,0,SC/PARIS 2168,15.0333\r\n1,2,\"Quick, Miss. Phyllis May\",female,2,1,1,26360,26.0000\r\n1,2,\"Quick, Miss. Winifred Vera\",female,8,1,1,26360,26.0000\r\n1,2,\"Quick, Mrs. Frederick Charles (Jane Richards)\",female,33,0,2,26360,26.0000\r\n0,2,\"Reeves, Mr. David\",male,36,0,0,C.A. 17248,10.5000\r\n0,2,\"Renouf, Mr. Peter Henry\",male,34,1,0,31027,21.0000\r\n1,2,\"Renouf, Mrs. Peter Henry (Lillian Jefferys)\",female,30,3,0,31027,21.0000\r\n1,2,\"Reynaldo, Ms. Encarnacion\",female,28,0,0,230434,13.0000\r\n0,2,\"Richard, Mr. Emile\",male,23,0,0,SC/PARIS 2133,15.0458\r\n1,2,\"Richards, Master. George Sibley\",male,0.8333,1,1,29106,18.7500\r\n1,2,\"Richards, Master. William Rowe\",male,3,1,1,29106,18.7500\r\n1,2,\"Richards, Mrs. 
Sidney (Emily Hocking)\",female,24,2,3,29106,18.7500\r\n1,2,\"Ridsdale, Miss. Lucy\",female,50,0,0,W./C. 14258,10.5000\r\n0,2,\"Rogers, Mr. Reginald Harry\",male,19,0,0,28004,10.5000\r\n1,2,\"Rugg, Miss. Emily\",female,21,0,0,C.A. 31026,10.5000\r\n0,2,\"Schmidt, Mr. August\",male,26,0,0,248659,13.0000\r\n0,2,\"Sedgwick, Mr. Charles Frederick Waddington\",male,25,0,0,244361,13.0000\r\n0,2,\"Sharp, Mr. Percival James R\",male,27,0,0,244358,26.0000\r\n1,2,\"Shelley, Mrs. William (Imanita Parrish Hall)\",female,25,0,1,230433,26.0000\r\n1,2,\"Silven, Miss. Lyyli Karoliina\",female,18,0,2,250652,13.0000\r\n1,2,\"Sincock, Miss. Maude\",female,20,0,0,C.A. 33112,36.7500\r\n1,2,\"Sinkkonen, Miss. Anna\",female,30,0,0,250648,13.0000\r\n0,2,\"Sjostedt, Mr. Ernst Adolf\",male,59,0,0,237442,13.5000\r\n1,2,\"Slayter, Miss. Hilda Mary\",female,30,0,0,234818,12.3500\r\n0,2,\"Slemen, Mr. Richard James\",male,35,0,0,28206,10.5000\r\n1,2,\"Smith, Miss. Marion Elsie\",female,40,0,0,31418,13.0000\r\n0,2,\"Sobey, Mr. Samuel James Hayden\",male,25,0,0,C.A. 29178,13.0000\r\n0,2,\"Stanton, Mr. Samuel Ward\",male,41,0,0,237734,15.0458\r\n0,2,\"Stokes, Mr. Philip Joseph\",male,25,0,0,F.C.C. 13540,10.5000\r\n0,2,\"Swane, Mr. George\",male,18.5,0,0,248734,13.0000\r\n0,2,\"Sweet, Mr. George Frederick\",male,14,0,0,220845,65.0000\r\n1,2,\"Toomey, Miss. Ellen\",female,50,0,0,F.C.C. 13531,10.5000\r\n0,2,\"Troupiansky, Mr. Moses Aaron\",male,23,0,0,233639,13.0000\r\n1,2,\"Trout, Mrs. William H (Jessie L)\",female,28,0,0,240929,12.6500\r\n1,2,\"Troutt, Miss. Edwina Celia \"\"Winnie\"\"\",female,27,0,0,34218,10.5000\r\n0,2,\"Turpin, Mr. William John Robert\",male,29,1,0,11668,21.0000\r\n0,2,\"Turpin, Mrs. William John Robert (Dorothy Ann Wonnacott)\",female,27,1,0,11668,21.0000\r\n0,2,\"Veal, Mr. James\",male,40,0,0,28221,13.0000\r\n1,2,\"Walcroft, Miss. Nellie\",female,31,0,0,F.C.C. 13528,21.0000\r\n0,2,\"Ware, Mr. John James\",male,30,1,0,CA 31352,21.0000\r\n0,2,\"Ware, Mr. William Jeffery\",male,23,1,0,28666,10.5000\r\n1,2,\"Ware, Mrs. John James (Florence Louise Long)\",female,31,0,0,CA 31352,21.0000\r\n0,2,\"Watson, Mr. Ennis Hastings\",male,0,0,0,239856,0.0000\r\n1,2,\"Watt, Miss. Bertha J\",female,12,0,0,C.A. 33595,15.7500\r\n1,2,\"Watt, Mrs. James (Elizabeth \"\"Bessie\"\" Inglis Milne)\",female,40,0,0,C.A. 33595,15.7500\r\n1,2,\"Webber, Miss. Susan\",female,32.5,0,0,27267,13.0000\r\n0,2,\"Weisz, Mr. Leopold\",male,27,1,0,228414,26.0000\r\n1,2,\"Weisz, Mrs. Leopold (Mathilde Francoise Pede)\",female,29,1,0,228414,26.0000\r\n1,2,\"Wells, Master. Ralph Lester\",male,2,1,1,29103,23.0000\r\n1,2,\"Wells, Miss. Joan\",female,4,1,1,29103,23.0000\r\n1,2,\"Wells, Mrs. Arthur Henry (\"\"Addie\"\" Dart Trevaskis)\",female,29,0,2,29103,23.0000\r\n1,2,\"West, Miss. Barbara J\",female,0.9167,1,2,C.A. 34651,27.7500\r\n1,2,\"West, Miss. Constance Mirium\",female,5,1,2,C.A. 34651,27.7500\r\n0,2,\"West, Mr. Edwy Arthur\",male,36,1,2,C.A. 34651,27.7500\r\n1,2,\"West, Mrs. Edwy Arthur (Ada Mary Worth)\",female,33,1,2,C.A. 34651,27.7500\r\n0,2,\"Wheadon, Mr. Edward H\",male,66,0,0,C.A. 24579,10.5000\r\n0,2,\"Wheeler, Mr. Edwin \"\"Frederick\"\"\",male,0,0,0,SC/PARIS 2159,12.8750\r\n1,2,\"Wilhelms, Mr. Charles\",male,31,0,0,244270,13.0000\r\n1,2,\"Williams, Mr. Charles Eugene\",male,0,0,0,244373,13.0000\r\n1,2,\"Wright, Miss. Marion\",female,26,0,0,220844,13.5000\r\n0,2,\"Yrois, Miss. Henriette (\"\"Mrs Harbeck\"\")\",female,24,0,0,248747,13.0000\r\n0,3,\"Abbing, Mr. Anthony\",male,42,0,0,C.A. 5547,7.5500\r\n0,3,\"Abbott, Master. 
Eugene Joseph\",male,13,0,2,C.A. 2673,20.2500\r\n0,3,\"Abbott, Mr. Rossmore Edward\",male,16,1,1,C.A. 2673,20.2500\r\n1,3,\"Abbott, Mrs. Stanton (Rosa Hunt)\",female,35,1,1,C.A. 2673,20.2500\r\n1,3,\"Abelseth, Miss. Karen Marie\",female,16,0,0,348125,7.6500\r\n1,3,\"Abelseth, Mr. Olaus Jorgensen\",male,25,0,0,348122,7.6500\r\n1,3,\"Abrahamsson, Mr. Abraham August Johannes\",male,20,0,0,SOTON/O2 3101284,7.9250\r\n1,3,\"Abrahim, Mrs. Joseph (Sophie Halaut Easu)\",female,18,0,0,2657,7.2292\r\n0,3,\"Adahl, Mr. Mauritz Nils Martin\",male,30,0,0,C 7076,7.2500\r\n0,3,\"Adams, Mr. John\",male,26,0,0,341826,8.0500\r\n0,3,\"Ahlin, Mrs. Johan (Johanna Persdotter Larsson)\",female,40,1,0,7546,9.4750\r\n1,3,\"Aks, Master. Philip Frank\",male,0.8333,0,1,392091,9.3500\r\n1,3,\"Aks, Mrs. Sam (Leah Rosen)\",female,18,0,1,392091,9.3500\r\n1,3,\"Albimona, Mr. Nassef Cassem\",male,26,0,0,2699,18.7875\r\n0,3,\"Alexander, Mr. William\",male,26,0,0,3474,7.8875\r\n0,3,\"Alhomaki, Mr. Ilmari Rudolf\",male,20,0,0,SOTON/O2 3101287,7.9250\r\n0,3,\"Ali, Mr. Ahmed\",male,24,0,0,SOTON/O.Q. 3101311,7.0500\r\n0,3,\"Ali, Mr. William\",male,25,0,0,SOTON/O.Q. 3101312,7.0500\r\n0,3,\"Allen, Mr. William Henry\",male,35,0,0,373450,8.0500\r\n0,3,\"Allum, Mr. Owen George\",male,18,0,0,2223,8.3000\r\n0,3,\"Andersen, Mr. Albert Karvin\",male,32,0,0,C 4001,22.5250\r\n1,3,\"Andersen-Jensen, Miss. Carla Christine Nielsine\",female,19,1,0,350046,7.8542\r\n0,3,\"Andersson, Master. Sigvard Harald Elias\",male,4,4,2,347082,31.2750\r\n0,3,\"Andersson, Miss. Ebba Iris Alfrida\",female,6,4,2,347082,31.2750\r\n0,3,\"Andersson, Miss. Ellis Anna Maria\",female,2,4,2,347082,31.2750\r\n1,3,\"Andersson, Miss. Erna Alexandra\",female,17,4,2,3101281,7.9250\r\n0,3,\"Andersson, Miss. Ida Augusta Margareta\",female,38,4,2,347091,7.7750\r\n0,3,\"Andersson, Miss. Ingeborg Constanzia\",female,9,4,2,347082,31.2750\r\n0,3,\"Andersson, Miss. Sigrid Elisabeth\",female,11,4,2,347082,31.2750\r\n0,3,\"Andersson, Mr. Anders Johan\",male,39,1,5,347082,31.2750\r\n1,3,\"Andersson, Mr. August Edvard (\"\"Wennerstrom\"\")\",male,27,0,0,350043,7.7958\r\n0,3,\"Andersson, Mr. Johan Samuel\",male,26,0,0,347075,7.7750\r\n0,3,\"Andersson, Mrs. Anders Johan (Alfrida Konstantia Brogren)\",female,39,1,5,347082,31.2750\r\n0,3,\"Andreasson, Mr. Paul Edvin\",male,20,0,0,347466,7.8542\r\n0,3,\"Angheloff, Mr. Minko\",male,26,0,0,349202,7.8958\r\n0,3,\"Arnold-Franchi, Mr. Josef\",male,25,1,0,349237,17.8000\r\n0,3,\"Arnold-Franchi, Mrs. Josef (Josefine Franchi)\",female,18,1,0,349237,17.8000\r\n0,3,\"Aronsson, Mr. Ernst Axel Algot\",male,24,0,0,349911,7.7750\r\n0,3,\"Asim, Mr. Adola\",male,35,0,0,SOTON/O.Q. 3101310,7.0500\r\n0,3,\"Asplund, Master. Carl Edgar\",male,5,4,2,347077,31.3875\r\n0,3,\"Asplund, Master. Clarence Gustaf Hugo\",male,9,4,2,347077,31.3875\r\n1,3,\"Asplund, Master. Edvin Rojj Felix\",male,3,4,2,347077,31.3875\r\n0,3,\"Asplund, Master. Filip Oscar\",male,13,4,2,347077,31.3875\r\n1,3,\"Asplund, Miss. Lillian Gertrud\",female,5,4,2,347077,31.3875\r\n0,3,\"Asplund, Mr. Carl Oscar Vilhelm Gustafsson\",male,40,1,5,347077,31.3875\r\n1,3,\"Asplund, Mr. Johan Charles\",male,23,0,0,350054,7.7958\r\n1,3,\"Asplund, Mrs. Carl Oscar (Selma Augusta Emilia Johansson)\",female,38,1,5,347077,31.3875\r\n1,3,\"Assaf Khalil, Mrs. Mariana (\"\"Miriam\"\")\",female,45,0,0,2696,7.2250\r\n0,3,\"Assaf, Mr. Gerios\",male,21,0,0,2692,7.2250\r\n0,3,\"Assam, Mr. Ali\",male,23,0,0,SOTON/O.Q. 3101309,7.0500\r\n0,3,\"Attalah, Miss. Malake\",female,17,0,0,2627,14.4583\r\n0,3,\"Attalah, Mr. 
Sleiman\",male,30,0,0,2694,7.2250\r\n0,3,\"Augustsson, Mr. Albert\",male,23,0,0,347468,7.8542\r\n1,3,\"Ayoub, Miss. Banoura\",female,13,0,0,2687,7.2292\r\n0,3,\"Baccos, Mr. Raffull\",male,20,0,0,2679,7.2250\r\n0,3,\"Backstrom, Mr. Karl Alfred\",male,32,1,0,3101278,15.8500\r\n1,3,\"Backstrom, Mrs. Karl Alfred (Maria Mathilda Gustafsson)\",female,33,3,0,3101278,15.8500\r\n1,3,\"Baclini, Miss. Eugenie\",female,0.75,2,1,2666,19.2583\r\n1,3,\"Baclini, Miss. Helene Barbara\",female,0.75,2,1,2666,19.2583\r\n1,3,\"Baclini, Miss. Marie Catherine\",female,5,2,1,2666,19.2583\r\n1,3,\"Baclini, Mrs. Solomon (Latifa Qurban)\",female,24,0,3,2666,19.2583\r\n1,3,\"Badman, Miss. Emily Louisa\",female,18,0,0,A/4 31416,8.0500\r\n0,3,\"Badt, Mr. Mohamed\",male,40,0,0,2623,7.2250\r\n0,3,\"Balkic, Mr. Cerin\",male,26,0,0,349248,7.8958\r\n1,3,\"Barah, Mr. Hanna Assi\",male,20,0,0,2663,7.2292\r\n0,3,\"Barbara, Miss. Saiide\",female,18,0,1,2691,14.4542\r\n0,3,\"Barbara, Mrs. (Catherine David)\",female,45,0,1,2691,14.4542\r\n0,3,\"Barry, Miss. Julia\",female,27,0,0,330844,7.8792\r\n0,3,\"Barton, Mr. David John\",male,22,0,0,324669,8.0500\r\n0,3,\"Beavan, Mr. William Thomas\",male,19,0,0,323951,8.0500\r\n0,3,\"Bengtsson, Mr. John Viktor\",male,26,0,0,347068,7.7750\r\n0,3,\"Berglund, Mr. Karl Ivar Sven\",male,22,0,0,PP 4348,9.3500\r\n0,3,\"Betros, Master. Seman\",male,0,0,0,2622,7.2292\r\n0,3,\"Betros, Mr. Tannous\",male,20,0,0,2648,4.0125\r\n1,3,\"Bing, Mr. Lee\",male,32,0,0,1601,56.4958\r\n0,3,\"Birkeland, Mr. Hans Martin Monsen\",male,21,0,0,312992,7.7750\r\n0,3,\"Bjorklund, Mr. Ernst Herbert\",male,18,0,0,347090,7.7500\r\n0,3,\"Bostandyeff, Mr. Guentcho\",male,26,0,0,349224,7.8958\r\n0,3,\"Boulos, Master. Akar\",male,6,1,1,2678,15.2458\r\n0,3,\"Boulos, Miss. Nourelain\",female,9,1,1,2678,15.2458\r\n0,3,\"Boulos, Mr. Hanna\",male,0,0,0,2664,7.2250\r\n0,3,\"Boulos, Mrs. Joseph (Sultana)\",female,0,0,2,2678,15.2458\r\n0,3,\"Bourke, Miss. Mary\",female,0,0,2,364848,7.7500\r\n0,3,\"Bourke, Mr. John\",male,40,1,1,364849,15.5000\r\n0,3,\"Bourke, Mrs. John (Catherine)\",female,32,1,1,364849,15.5000\r\n0,3,\"Bowen, Mr. David John \"\"Dai\"\"\",male,21,0,0,54636,16.1000\r\n1,3,\"Bradley, Miss. Bridget Delia\",female,22,0,0,334914,7.7250\r\n0,3,\"Braf, Miss. Elin Ester Maria\",female,20,0,0,347471,7.8542\r\n0,3,\"Braund, Mr. Lewis Richard\",male,29,1,0,3460,7.0458\r\n0,3,\"Braund, Mr. Owen Harris\",male,22,1,0,A/5 21171,7.2500\r\n0,3,\"Brobeck, Mr. Karl Rudolf\",male,22,0,0,350045,7.7958\r\n0,3,\"Brocklebank, Mr. William Alfred\",male,35,0,0,364512,8.0500\r\n0,3,\"Buckley, Miss. Katherine\",female,18.5,0,0,329944,7.2833\r\n1,3,\"Buckley, Mr. Daniel\",male,21,0,0,330920,7.8208\r\n0,3,\"Burke, Mr. Jeremiah\",male,19,0,0,365222,6.7500\r\n0,3,\"Burns, Miss. Mary Delia\",female,18,0,0,330963,7.8792\r\n0,3,\"Cacic, Miss. Manda\",female,21,0,0,315087,8.6625\r\n0,3,\"Cacic, Miss. Marija\",female,30,0,0,315084,8.6625\r\n0,3,\"Cacic, Mr. Jego Grga\",male,18,0,0,315091,8.6625\r\n0,3,\"Cacic, Mr. Luka\",male,38,0,0,315089,8.6625\r\n0,3,\"Calic, Mr. Jovo\",male,17,0,0,315093,8.6625\r\n0,3,\"Calic, Mr. Petar\",male,17,0,0,315086,8.6625\r\n0,3,\"Canavan, Miss. Mary\",female,21,0,0,364846,7.7500\r\n0,3,\"Canavan, Mr. Patrick\",male,21,0,0,364858,7.7500\r\n0,3,\"Cann, Mr. Ernest Charles\",male,21,0,0,A./5. 2152,8.0500\r\n0,3,\"Caram, Mr. Joseph\",male,0,1,0,2689,14.4583\r\n0,3,\"Caram, Mrs. Joseph (Maria Elias)\",female,0,1,0,2689,14.4583\r\n0,3,\"Carlsson, Mr. August Sigfrid\",male,28,0,0,350042,7.7958\r\n0,3,\"Carlsson, Mr. 
Carl Robert\",male,24,0,0,350409,7.8542\r\n1,3,\"Carr, Miss. Helen \"\"Ellen\"\"\",female,16,0,0,367231,7.7500\r\n0,3,\"Carr, Miss. Jeannie\",female,37,0,0,368364,7.7500\r\n0,3,\"Carver, Mr. Alfred John\",male,28,0,0,392095,7.2500\r\n0,3,\"Celotti, Mr. Francesco\",male,24,0,0,343275,8.0500\r\n0,3,\"Charters, Mr. David\",male,21,0,0,A/5. 13032,7.7333\r\n1,3,\"Chip, Mr. Chang\",male,32,0,0,1601,56.4958\r\n0,3,\"Christmann, Mr. Emil\",male,29,0,0,343276,8.0500\r\n0,3,\"Chronopoulos, Mr. Apostolos\",male,26,1,0,2680,14.4542\r\n0,3,\"Chronopoulos, Mr. Demetrios\",male,18,1,0,2680,14.4542\r\n0,3,\"Coelho, Mr. Domingos Fernandeo\",male,20,0,0,SOTON/O.Q. 3101307,7.0500\r\n1,3,\"Cohen, Mr. Gurshon \"\"Gus\"\"\",male,18,0,0,A/5 3540,8.0500\r\n0,3,\"Colbert, Mr. Patrick\",male,24,0,0,371109,7.2500\r\n0,3,\"Coleff, Mr. Peju\",male,36,0,0,349210,7.4958\r\n0,3,\"Coleff, Mr. Satio\",male,24,0,0,349209,7.4958\r\n0,3,\"Conlon, Mr. Thomas Henry\",male,31,0,0,21332,7.7333\r\n0,3,\"Connaghton, Mr. Michael\",male,31,0,0,335097,7.7500\r\n1,3,\"Connolly, Miss. Kate\",female,22,0,0,370373,7.7500\r\n0,3,\"Connolly, Miss. Kate\",female,30,0,0,330972,7.6292\r\n0,3,\"Connors, Mr. Patrick\",male,70.5,0,0,370369,7.7500\r\n0,3,\"Cook, Mr. Jacob\",male,43,0,0,A/5 3536,8.0500\r\n0,3,\"Cor, Mr. Bartol\",male,35,0,0,349230,7.8958\r\n0,3,\"Cor, Mr. Ivan\",male,27,0,0,349229,7.8958\r\n0,3,\"Cor, Mr. Liudevit\",male,19,0,0,349231,7.8958\r\n0,3,\"Corn, Mr. Harry\",male,30,0,0,SOTON/OQ 392090,8.0500\r\n1,3,\"Coutts, Master. Eden Leslie \"\"Neville\"\"\",male,9,1,1,C.A. 37671,15.9000\r\n1,3,\"Coutts, Master. William Loch \"\"William\"\"\",male,3,1,1,C.A. 37671,15.9000\r\n1,3,\"Coutts, Mrs. William (Winnie \"\"Minnie\"\" Treanor)\",female,36,0,2,C.A. 37671,15.9000\r\n0,3,\"Coxon, Mr. Daniel\",male,59,0,0,364500,7.2500\r\n0,3,\"Crease, Mr. Ernest James\",male,19,0,0,S.P. 3464,8.1583\r\n1,3,\"Cribb, Miss. Laura Alice\",female,17,0,1,371362,16.1000\r\n0,3,\"Cribb, Mr. John Hatfield\",male,44,0,1,371362,16.1000\r\n0,3,\"Culumovic, Mr. Jeso\",male,17,0,0,315090,8.6625\r\n0,3,\"Daher, Mr. Shedid\",male,22.5,0,0,2698,7.2250\r\n1,3,\"Dahl, Mr. Karl Edwart\",male,45,0,0,7598,8.0500\r\n0,3,\"Dahlberg, Miss. Gerda Ulrika\",female,22,0,0,7552,10.5167\r\n0,3,\"Dakic, Mr. Branko\",male,19,0,0,349228,10.1708\r\n1,3,\"Daly, Miss. Margaret Marcella \"\"Maggie\"\"\",female,30,0,0,382650,6.9500\r\n1,3,\"Daly, Mr. Eugene Patrick\",male,29,0,0,382651,7.7500\r\n0,3,\"Danbom, Master. Gilbert Sigvard Emanuel\",male,0.3333,0,2,347080,14.4000\r\n0,3,\"Danbom, Mr. Ernst Gilbert\",male,34,1,1,347080,14.4000\r\n0,3,\"Danbom, Mrs. Ernst Gilbert (Anna Sigrid Maria Brogren)\",female,28,1,1,347080,14.4000\r\n0,3,\"Danoff, Mr. Yoto\",male,27,0,0,349219,7.8958\r\n0,3,\"Dantcheff, Mr. Ristiu\",male,25,0,0,349203,7.8958\r\n0,3,\"Davies, Mr. Alfred J\",male,24,2,0,A/4 48871,24.1500\r\n0,3,\"Davies, Mr. Evan\",male,22,0,0,SC/A4 23568,8.0500\r\n0,3,\"Davies, Mr. John Samuel\",male,21,2,0,A/4 48871,24.1500\r\n0,3,\"Davies, Mr. Joseph\",male,17,2,0,A/4 48873,8.0500\r\n0,3,\"Davison, Mr. Thomas Henry\",male,0,1,0,386525,16.1000\r\n1,3,\"Davison, Mrs. Thomas Henry (Mary E Finck)\",female,0,1,0,386525,16.1000\r\n1,3,\"de Messemaeker, Mr. Guillaume Joseph\",male,36.5,1,0,345572,17.4000\r\n1,3,\"de Messemaeker, Mrs. Guillaume Joseph (Emma)\",female,36,1,0,345572,17.4000\r\n1,3,\"de Mulder, Mr. Theodore\",male,30,0,0,345774,9.5000\r\n0,3,\"de Pelsmaeker, Mr. Alfons\",male,16,0,0,345778,9.5000\r\n1,3,\"Dean, Master. Bertram Vere\",male,1,1,2,C.A. 
2315,20.5750\r\n1,3,\"Dean, Miss. Elizabeth Gladys \"\"Millvina\"\"\",female,0.1667,1,2,C.A. 2315,20.5750\r\n0,3,\"Dean, Mr. Bertram Frank\",male,26,1,2,C.A. 2315,20.5750\r\n1,3,\"Dean, Mrs. Bertram (Eva Georgetta Light)\",female,33,1,2,C.A. 2315,20.5750\r\n0,3,\"Delalic, Mr. Redjo\",male,25,0,0,349250,7.8958\r\n0,3,\"Demetri, Mr. Marinko\",male,0,0,0,349238,7.8958\r\n0,3,\"Denkoff, Mr. Mitto\",male,0,0,0,349225,7.8958\r\n0,3,\"Dennis, Mr. Samuel\",male,22,0,0,A/5 21172,7.2500\r\n0,3,\"Dennis, Mr. William\",male,36,0,0,A/5 21175,7.2500\r\n1,3,\"Devaney, Miss. Margaret Delia\",female,19,0,0,330958,7.8792\r\n0,3,\"Dika, Mr. Mirko\",male,17,0,0,349232,7.8958\r\n0,3,\"Dimic, Mr. Jovan\",male,42,0,0,315088,8.6625\r\n0,3,\"Dintcheff, Mr. Valtcho\",male,43,0,0,349226,7.8958\r\n0,3,\"Doharr, Mr. Tannous\",male,0,0,0,2686,7.2292\r\n0,3,\"Dooley, Mr. Patrick\",male,32,0,0,370376,7.7500\r\n1,3,\"Dorking, Mr. Edward Arthur\",male,19,0,0,A/5. 10482,8.0500\r\n1,3,\"Dowdell, Miss. Elizabeth\",female,30,0,0,364516,12.4750\r\n0,3,\"Doyle, Miss. Elizabeth\",female,24,0,0,368702,7.7500\r\n1,3,\"Drapkin, Miss. Jennie\",female,23,0,0,SOTON/OQ 392083,8.0500\r\n0,3,\"Drazenoic, Mr. Jozef\",male,33,0,0,349241,7.8958\r\n0,3,\"Duane, Mr. Frank\",male,65,0,0,336439,7.7500\r\n1,3,\"Duquemin, Mr. Joseph\",male,24,0,0,S.O./P.P. 752,7.5500\r\n0,3,\"Dyker, Mr. Adolf Fredrik\",male,23,1,0,347072,13.9000\r\n1,3,\"Dyker, Mrs. Adolf Fredrik (Anna Elisabeth Judith Andersson)\",female,22,1,0,347072,13.9000\r\n0,3,\"Edvardsson, Mr. Gustaf Hjalmar\",male,18,0,0,349912,7.7750\r\n0,3,\"Eklund, Mr. Hans Linus\",male,16,0,0,347074,7.7750\r\n0,3,\"Ekstrom, Mr. Johan\",male,45,0,0,347061,6.9750\r\n0,3,\"Elias, Mr. Dibo\",male,0,0,0,2674,7.2250\r\n0,3,\"Elias, Mr. Joseph\",male,39,0,2,2675,7.2292\r\n0,3,\"Elias, Mr. Joseph Jr\",male,17,1,1,2690,7.2292\r\n0,3,\"Elias, Mr. Tannous\",male,15,1,1,2695,7.2292\r\n0,3,\"Elsbury, Mr. William James\",male,47,0,0,A/5 3902,7.2500\r\n1,3,\"Emanuel, Miss. Virginia Ethel\",female,5,0,0,364516,12.4750\r\n0,3,\"Emir, Mr. Farred Chehab\",male,0,0,0,2631,7.2250\r\n0,3,\"Everett, Mr. Thomas James\",male,40.5,0,0,C.A. 6212,15.1000\r\n0,3,\"Farrell, Mr. James\",male,40.5,0,0,367232,7.7500\r\n1,3,\"Finoli, Mr. Luigi\",male,0,0,0,SOTON/O.Q. 3101308,7.0500\r\n0,3,\"Fischer, Mr. Eberhard Thelander\",male,18,0,0,350036,7.7958\r\n0,3,\"Fleming, Miss. Honora\",female,0,0,0,364859,7.7500\r\n0,3,\"Flynn, Mr. James\",male,0,0,0,364851,7.7500\r\n0,3,\"Flynn, Mr. John\",male,0,0,0,368323,6.9500\r\n0,3,\"Foley, Mr. Joseph\",male,26,0,0,330910,7.8792\r\n0,3,\"Foley, Mr. William\",male,0,0,0,365235,7.7500\r\n1,3,\"Foo, Mr. Choong\",male,0,0,0,1601,56.4958\r\n0,3,\"Ford, Miss. Doolina Margaret \"\"Daisy\"\"\",female,21,2,2,W./C. 6608,34.3750\r\n0,3,\"Ford, Miss. Robina Maggie \"\"Ruby\"\"\",female,9,2,2,W./C. 6608,34.3750\r\n0,3,\"Ford, Mr. Arthur\",male,0,0,0,A/5 1478,8.0500\r\n0,3,\"Ford, Mr. Edward Watson\",male,18,2,2,W./C. 6608,34.3750\r\n0,3,\"Ford, Mr. William Neal\",male,16,1,3,W./C. 6608,34.3750\r\n0,3,\"Ford, Mrs. Edward (Margaret Ann Watson)\",female,48,1,3,W./C. 6608,34.3750\r\n0,3,\"Fox, Mr. Patrick\",male,0,0,0,368573,7.7500\r\n0,3,\"Franklin, Mr. Charles (Charles Fardon)\",male,0,0,0,SOTON/O.Q. 3101314,7.2500\r\n0,3,\"Gallagher, Mr. Martin\",male,25,0,0,36864,7.7417\r\n0,3,\"Garfirth, Mr. John\",male,0,0,0,358585,14.5000\r\n0,3,\"Gheorgheff, Mr. Stanio\",male,0,0,0,349254,7.8958\r\n0,3,\"Gilinski, Mr. Eliezer\",male,22,0,0,14973,8.0500\r\n1,3,\"Gilnagh, Miss. 
Katherine \"\"Katie\"\"\",female,16,0,0,35851,7.7333\r\n1,3,\"Glynn, Miss. Mary Agatha\",female,0,0,0,335677,7.7500\r\n1,3,\"Goldsmith, Master. Frank John William \"\"Frankie\"\"\",male,9,0,2,363291,20.5250\r\n0,3,\"Goldsmith, Mr. Frank John\",male,33,1,1,363291,20.5250\r\n0,3,\"Goldsmith, Mr. Nathan\",male,41,0,0,SOTON/O.Q. 3101263,7.8500\r\n1,3,\"Goldsmith, Mrs. Frank John (Emily Alice Brown)\",female,31,1,1,363291,20.5250\r\n0,3,\"Goncalves, Mr. Manuel Estanslas\",male,38,0,0,SOTON/O.Q. 3101306,7.0500\r\n0,3,\"Goodwin, Master. Harold Victor\",male,9,5,2,CA 2144,46.9000\r\n0,3,\"Goodwin, Master. Sidney Leonard\",male,1,5,2,CA 2144,46.9000\r\n0,3,\"Goodwin, Master. William Frederick\",male,11,5,2,CA 2144,46.9000\r\n0,3,\"Goodwin, Miss. Jessie Allis\",female,10,5,2,CA 2144,46.9000\r\n0,3,\"Goodwin, Miss. Lillian Amy\",female,16,5,2,CA 2144,46.9000\r\n0,3,\"Goodwin, Mr. Charles Edward\",male,14,5,2,CA 2144,46.9000\r\n0,3,\"Goodwin, Mr. Charles Frederick\",male,40,1,6,CA 2144,46.9000\r\n0,3,\"Goodwin, Mrs. Frederick (Augusta Tyler)\",female,43,1,6,CA 2144,46.9000\r\n0,3,\"Green, Mr. George Henry\",male,51,0,0,21440,8.0500\r\n0,3,\"Gronnestad, Mr. Daniel Danielsen\",male,32,0,0,8471,8.3625\r\n0,3,\"Guest, Mr. Robert\",male,0,0,0,376563,8.0500\r\n0,3,\"Gustafsson, Mr. Alfred Ossian\",male,20,0,0,7534,9.8458\r\n0,3,\"Gustafsson, Mr. Anders Vilhelm\",male,37,2,0,3101276,7.9250\r\n0,3,\"Gustafsson, Mr. Johan Birger\",male,28,2,0,3101277,7.9250\r\n0,3,\"Gustafsson, Mr. Karl Gideon\",male,19,0,0,347069,7.7750\r\n0,3,\"Haas, Miss. Aloisia\",female,24,0,0,349236,8.8500\r\n0,3,\"Hagardon, Miss. Kate\",female,17,0,0,AQ/3. 30631,7.7333\r\n0,3,\"Hagland, Mr. Ingvald Olai Olsen\",male,0,1,0,65303,19.9667\r\n0,3,\"Hagland, Mr. Konrad Mathias Reiersen\",male,0,1,0,65304,19.9667\r\n0,3,\"Hakkarainen, Mr. Pekka Pietari\",male,28,1,0,STON/O2. 3101279,15.8500\r\n1,3,\"Hakkarainen, Mrs. Pekka Pietari (Elin Matilda Dolck)\",female,24,1,0,STON/O2. 3101279,15.8500\r\n0,3,\"Hampe, Mr. Leon\",male,20,0,0,345769,9.5000\r\n0,3,\"Hanna, Mr. Mansour\",male,23.5,0,0,2693,7.2292\r\n0,3,\"Hansen, Mr. Claus Peter\",male,41,2,0,350026,14.1083\r\n0,3,\"Hansen, Mr. Henrik Juul\",male,26,1,0,350025,7.8542\r\n0,3,\"Hansen, Mr. Henry Damsgaard\",male,21,0,0,350029,7.8542\r\n1,3,\"Hansen, Mrs. Claus Peter (Jennie L Howard)\",female,45,1,0,350026,14.1083\r\n0,3,\"Harknett, Miss. Alice Phoebe\",female,0,0,0,W./C. 6609,7.5500\r\n0,3,\"Harmer, Mr. Abraham (David Lishin)\",male,25,0,0,374887,7.2500\r\n0,3,\"Hart, Mr. Henry\",male,0,0,0,394140,6.8583\r\n0,3,\"Hassan, Mr. Houssein G N\",male,11,0,0,2699,18.7875\r\n1,3,\"Healy, Miss. Hanora \"\"Nora\"\"\",female,0,0,0,370375,7.7500\r\n1,3,\"Hedman, Mr. Oskar Arvid\",male,27,0,0,347089,6.9750\r\n1,3,\"Hee, Mr. Ling\",male,0,0,0,1601,56.4958\r\n0,3,\"Hegarty, Miss. Hanora \"\"Nora\"\"\",female,18,0,0,365226,6.7500\r\n1,3,\"Heikkinen, Miss. Laina\",female,26,0,0,STON/O2. 3101282,7.9250\r\n0,3,\"Heininen, Miss. Wendla Maria\",female,23,0,0,STON/O2. 3101290,7.9250\r\n1,3,\"Hellstrom, Miss. Hilda Maria\",female,22,0,0,7548,8.9625\r\n0,3,\"Hendekovic, Mr. Ignjac\",male,28,0,0,349243,7.8958\r\n0,3,\"Henriksson, Miss. Jenny Lovisa\",female,28,0,0,347086,7.7750\r\n0,3,\"Henry, Miss. Delia\",female,0,0,0,382649,7.7500\r\n1,3,\"Hirvonen, Miss. Hildur E\",female,2,0,1,3101298,12.2875\r\n1,3,\"Hirvonen, Mrs. Alexander (Helga E Lindqvist)\",female,22,1,1,3101298,12.2875\r\n0,3,\"Holm, Mr. John Fredrik Alexander\",male,43,0,0,C 7075,6.4500\r\n0,3,\"Holthen, Mr. 
Johan Martin\",male,28,0,0,C 4001,22.5250\r\n1,3,\"Honkanen, Miss. Eliina\",female,27,0,0,STON/O2. 3101283,7.9250\r\n0,3,\"Horgan, Mr. John\",male,0,0,0,370377,7.7500\r\n1,3,\"Howard, Miss. May Elizabeth\",female,0,0,0,A. 2. 39186,8.0500\r\n0,3,\"Humblen, Mr. Adolf Mathias Nicolai Olsen\",male,42,0,0,348121,7.6500\r\n1,3,\"Hyman, Mr. Abraham\",male,0,0,0,3470,7.8875\r\n0,3,\"Ibrahim Shawah, Mr. Yousseff\",male,30,0,0,2685,7.2292\r\n0,3,\"Ilieff, Mr. Ylio\",male,0,0,0,349220,7.8958\r\n0,3,\"Ilmakangas, Miss. Ida Livija\",female,27,1,0,STON/O2. 3101270,7.9250\r\n0,3,\"Ilmakangas, Miss. Pieta Sofia\",female,25,1,0,STON/O2. 3101271,7.9250\r\n0,3,\"Ivanoff, Mr. Kanio\",male,0,0,0,349201,7.8958\r\n1,3,\"Jalsevac, Mr. Ivan\",male,29,0,0,349240,7.8958\r\n1,3,\"Jansson, Mr. Carl Olof\",male,21,0,0,350034,7.7958\r\n0,3,\"Jardin, Mr. Jose Neto\",male,0,0,0,SOTON/O.Q. 3101305,7.0500\r\n0,3,\"Jensen, Mr. Hans Peder\",male,20,0,0,350050,7.8542\r\n0,3,\"Jensen, Mr. Niels Peder\",male,48,0,0,350047,7.8542\r\n0,3,\"Jensen, Mr. Svend Lauritz\",male,17,1,0,350048,7.0542\r\n1,3,\"Jermyn, Miss. Annie\",female,0,0,0,14313,7.7500\r\n1,3,\"Johannesen-Bratthammer, Mr. Bernt\",male,0,0,0,65306,8.1125\r\n0,3,\"Johanson, Mr. Jakob Alfred\",male,34,0,0,3101264,6.4958\r\n1,3,\"Johansson Palmquist, Mr. Oskar Leander\",male,26,0,0,347070,7.7750\r\n0,3,\"Johansson, Mr. Erik\",male,22,0,0,350052,7.7958\r\n0,3,\"Johansson, Mr. Gustaf Joel\",male,33,0,0,7540,8.6542\r\n0,3,\"Johansson, Mr. Karl Johan\",male,31,0,0,347063,7.7750\r\n0,3,\"Johansson, Mr. Nils\",male,29,0,0,347467,7.8542\r\n1,3,\"Johnson, Master. Harold Theodor\",male,4,1,1,347742,11.1333\r\n1,3,\"Johnson, Miss. Eleanor Ileen\",female,1,1,1,347742,11.1333\r\n0,3,\"Johnson, Mr. Alfred\",male,49,0,0,LINE,0.0000\r\n0,3,\"Johnson, Mr. Malkolm Joackim\",male,33,0,0,347062,7.7750\r\n0,3,\"Johnson, Mr. William Cahoone Jr\",male,19,0,0,LINE,0.0000\r\n1,3,\"Johnson, Mrs. Oscar W (Elisabeth Vilhelmina Berg)\",female,27,0,2,347742,11.1333\r\n0,3,\"Johnston, Master. William Arthur \"\"Willie\"\"\",male,0,1,2,W./C. 6607,23.4500\r\n0,3,\"Johnston, Miss. Catherine Helen \"\"Carrie\"\"\",female,0,1,2,W./C. 6607,23.4500\r\n0,3,\"Johnston, Mr. Andrew G\",male,0,1,2,W./C. 6607,23.4500\r\n0,3,\"Johnston, Mrs. Andrew G (Elizabeth \"\"Lily\"\" Watson)\",female,0,1,2,W./C. 6607,23.4500\r\n0,3,\"Jonkoff, Mr. Lalio\",male,23,0,0,349204,7.8958\r\n1,3,\"Jonsson, Mr. Carl\",male,32,0,0,350417,7.8542\r\n0,3,\"Jonsson, Mr. Nils Hilding\",male,27,0,0,350408,7.8542\r\n0,3,\"Jussila, Miss. Katriina\",female,20,1,0,4136,9.8250\r\n0,3,\"Jussila, Miss. Mari Aina\",female,21,1,0,4137,9.8250\r\n1,3,\"Jussila, Mr. Eiriik\",male,32,0,0,STON/O 2. 3101286,7.9250\r\n0,3,\"Kallio, Mr. Nikolai Erland\",male,17,0,0,STON/O 2. 3101274,7.1250\r\n0,3,\"Kalvik, Mr. Johannes Halvorsen\",male,21,0,0,8475,8.4333\r\n0,3,\"Karaic, Mr. Milan\",male,30,0,0,349246,7.8958\r\n1,3,\"Karlsson, Mr. Einar Gervasius\",male,21,0,0,350053,7.7958\r\n0,3,\"Karlsson, Mr. Julius Konrad Eugen\",male,33,0,0,347465,7.8542\r\n0,3,\"Karlsson, Mr. Nils August\",male,22,0,0,350060,7.5208\r\n1,3,\"Karun, Miss. Manca\",female,4,0,1,349256,13.4167\r\n1,3,\"Karun, Mr. Franz\",male,39,0,1,349256,13.4167\r\n0,3,\"Kassem, Mr. Fared\",male,0,0,0,2700,7.2292\r\n0,3,\"Katavelas, Mr. Vassilios (\"\"Catavelas Vassilios\"\")\",male,18.5,0,0,2682,7.2292\r\n0,3,\"Keane, Mr. Andrew \"\"Andy\"\"\",male,0,0,0,12460,7.7500\r\n0,3,\"Keefe, Mr. Arthur\",male,0,0,0,323592,7.2500\r\n1,3,\"Kelly, Miss. 
Anna Katherine \"\"Annie Kate\"\"\",female,0,0,0,9234,7.7500\r\n1,3,\"Kelly, Miss. Mary\",female,0,0,0,14312,7.7500\r\n0,3,\"Kelly, Mr. James\",male,34.5,0,0,330911,7.8292\r\n0,3,\"Kelly, Mr. James\",male,44,0,0,363592,8.0500\r\n1,3,\"Kennedy, Mr. John\",male,0,0,0,368783,7.7500\r\n0,3,\"Khalil, Mr. Betros\",male,0,1,0,2660,14.4542\r\n0,3,\"Khalil, Mrs. Betros (Zahie \"\"Maria\"\" Elias)\",female,0,1,0,2660,14.4542\r\n0,3,\"Kiernan, Mr. John\",male,0,1,0,367227,7.7500\r\n0,3,\"Kiernan, Mr. Philip\",male,0,1,0,367229,7.7500\r\n0,3,\"Kilgannon, Mr. Thomas J\",male,0,0,0,36865,7.7375\r\n0,3,\"Kink, Miss. Maria\",female,22,2,0,315152,8.6625\r\n0,3,\"Kink, Mr. Vincenz\",male,26,2,0,315151,8.6625\r\n1,3,\"Kink-Heilmann, Miss. Luise Gretchen\",female,4,0,2,315153,22.0250\r\n1,3,\"Kink-Heilmann, Mr. Anton\",male,29,3,1,315153,22.0250\r\n1,3,\"Kink-Heilmann, Mrs. Anton (Luise Heilmann)\",female,26,1,1,315153,22.0250\r\n0,3,\"Klasen, Miss. Gertrud Emilia\",female,1,1,1,350405,12.1833\r\n0,3,\"Klasen, Mr. Klas Albin\",male,18,1,1,350404,7.8542\r\n0,3,\"Klasen, Mrs. (Hulda Kristina Eugenia Lofqvist)\",female,36,0,2,350405,12.1833\r\n0,3,\"Kraeff, Mr. Theodor\",male,0,0,0,349253,7.8958\r\n1,3,\"Krekorian, Mr. Neshan\",male,25,0,0,2654,7.2292\r\n0,3,\"Lahoud, Mr. Sarkis\",male,0,0,0,2624,7.2250\r\n0,3,\"Laitinen, Miss. Kristina Sofia\",female,37,0,0,4135,9.5875\r\n0,3,\"Laleff, Mr. Kristo\",male,0,0,0,349217,7.8958\r\n1,3,\"Lam, Mr. Ali\",male,0,0,0,1601,56.4958\r\n0,3,\"Lam, Mr. Len\",male,0,0,0,1601,56.4958\r\n1,3,\"Landergren, Miss. Aurora Adelia\",female,22,0,0,C 7077,7.2500\r\n0,3,\"Lane, Mr. Patrick\",male,0,0,0,7935,7.7500\r\n1,3,\"Lang, Mr. Fang\",male,26,0,0,1601,56.4958\r\n0,3,\"Larsson, Mr. August Viktor\",male,29,0,0,7545,9.4833\r\n0,3,\"Larsson, Mr. Bengt Edvin\",male,29,0,0,347067,7.7750\r\n0,3,\"Larsson-Rondberg, Mr. Edvard A\",male,22,0,0,347065,7.7750\r\n1,3,\"Leeni, Mr. Fahim (\"\"Philip Zenni\"\")\",male,22,0,0,2620,7.2250\r\n0,3,\"Lefebre, Master. Henry Forbes\",male,0,3,1,4133,25.4667\r\n0,3,\"Lefebre, Miss. Ida\",female,0,3,1,4133,25.4667\r\n0,3,\"Lefebre, Miss. Jeannie\",female,0,3,1,4133,25.4667\r\n0,3,\"Lefebre, Miss. Mathilde\",female,0,3,1,4133,25.4667\r\n0,3,\"Lefebre, Mrs. Frank (Frances)\",female,0,0,4,4133,25.4667\r\n0,3,\"Leinonen, Mr. Antti Gustaf\",male,32,0,0,STON/O 2. 3101292,7.9250\r\n0,3,\"Lemberopolous, Mr. Peter L\",male,34.5,0,0,2683,6.4375\r\n0,3,\"Lennon, Miss. Mary\",female,0,1,0,370371,15.5000\r\n0,3,\"Lennon, Mr. Denis\",male,0,1,0,370371,15.5000\r\n0,3,\"Leonard, Mr. Lionel\",male,36,0,0,LINE,0.0000\r\n0,3,\"Lester, Mr. James\",male,39,0,0,A/4 48871,24.1500\r\n0,3,\"Lievens, Mr. Rene Aime\",male,24,0,0,345781,9.5000\r\n0,3,\"Lindahl, Miss. Agda Thorilda Viktoria\",female,25,0,0,347071,7.7750\r\n0,3,\"Lindblom, Miss. Augusta Charlotta\",female,45,0,0,347073,7.7500\r\n0,3,\"Lindell, Mr. Edvard Bengtsson\",male,36,1,0,349910,15.5500\r\n0,3,\"Lindell, Mrs. Edvard Bengtsson (Elin Gerda Persson)\",female,30,1,0,349910,15.5500\r\n1,3,\"Lindqvist, Mr. Eino William\",male,20,1,0,STON/O 2. 3101285,7.9250\r\n0,3,\"Linehan, Mr. Michael\",male,0,0,0,330971,7.8792\r\n0,3,\"Ling, Mr. Lee\",male,28,0,0,1601,56.4958\r\n0,3,\"Lithman, Mr. Simon\",male,0,0,0,S.O./P.P. 251,7.5500\r\n0,3,\"Lobb, Mr. William Arthur\",male,30,1,0,A/5. 3336,16.1000\r\n0,3,\"Lobb, Mrs. William Arthur (Cordelia K Stanlick)\",female,26,1,0,A/5. 3336,16.1000\r\n0,3,\"Lockyer, Mr. Edward\",male,0,0,0,1222,7.8792\r\n0,3,\"Lovell, Mr. 
John Hall (\"\"Henry\"\")\",male,20.5,0,0,A/5 21173,7.2500\r\n1,3,\"Lulic, Mr. Nikola\",male,27,0,0,315098,8.6625\r\n0,3,\"Lundahl, Mr. Johan Svensson\",male,51,0,0,347743,7.0542\r\n1,3,\"Lundin, Miss. Olga Elida\",female,23,0,0,347469,7.8542\r\n1,3,\"Lundstrom, Mr. Thure Edvin\",male,32,0,0,350403,7.5792\r\n0,3,\"Lyntakoff, Mr. Stanko\",male,0,0,0,349235,7.8958\r\n0,3,\"MacKay, Mr. George William\",male,0,0,0,C.A. 42795,7.5500\r\n1,3,\"Madigan, Miss. Margaret \"\"Maggie\"\"\",female,0,0,0,370370,7.7500\r\n1,3,\"Madsen, Mr. Fridtjof Arne\",male,24,0,0,C 17369,7.1417\r\n0,3,\"Maenpaa, Mr. Matti Alexanteri\",male,22,0,0,STON/O 2. 3101275,7.1250\r\n0,3,\"Mahon, Miss. Bridget Delia\",female,0,0,0,330924,7.8792\r\n0,3,\"Mahon, Mr. John\",male,0,0,0,AQ/4 3130,7.7500\r\n0,3,\"Maisner, Mr. Simon\",male,0,0,0,A/S 2816,8.0500\r\n0,3,\"Makinen, Mr. Kalle Edvard\",male,29,0,0,STON/O 2. 3101268,7.9250\r\n1,3,\"Mamee, Mr. Hanna\",male,0,0,0,2677,7.2292\r\n0,3,\"Mangan, Miss. Mary\",female,30.5,0,0,364850,7.7500\r\n1,3,\"Mannion, Miss. Margareth\",female,0,0,0,36866,7.7375\r\n0,3,\"Mardirosian, Mr. Sarkis\",male,0,0,0,2655,7.2292\r\n0,3,\"Markoff, Mr. Marin\",male,35,0,0,349213,7.8958\r\n0,3,\"Markun, Mr. Johann\",male,33,0,0,349257,7.8958\r\n1,3,\"Masselmani, Mrs. Fatima\",female,0,0,0,2649,7.2250\r\n0,3,\"Matinoff, Mr. Nicola\",male,0,0,0,349255,7.8958\r\n1,3,\"McCarthy, Miss. Catherine \"\"Katie\"\"\",female,0,0,0,383123,7.7500\r\n1,3,\"McCormack, Mr. Thomas Joseph\",male,0,0,0,367228,7.7500\r\n1,3,\"McCoy, Miss. Agnes\",female,0,2,0,367226,23.2500\r\n1,3,\"McCoy, Miss. Alicia\",female,0,2,0,367226,23.2500\r\n1,3,\"McCoy, Mr. Bernard\",male,0,2,0,367226,23.2500\r\n1,3,\"McDermott, Miss. Brigdet Delia\",female,0,0,0,330932,7.7875\r\n0,3,\"McEvoy, Mr. Michael\",male,0,0,0,36568,15.5000\r\n1,3,\"McGovern, Miss. Mary\",female,0,0,0,330931,7.8792\r\n1,3,\"McGowan, Miss. Anna \"\"Annie\"\"\",female,15,0,0,330923,8.0292\r\n0,3,\"McGowan, Miss. Katherine\",female,35,0,0,9232,7.7500\r\n0,3,\"McMahon, Mr. Martin\",male,0,0,0,370372,7.7500\r\n0,3,\"McNamee, Mr. Neal\",male,24,1,0,376566,16.1000\r\n0,3,\"McNamee, Mrs. Neal (Eileen O'Leary)\",female,19,1,0,376566,16.1000\r\n0,3,\"McNeill, Miss. Bridget\",female,0,0,0,370368,7.7500\r\n0,3,\"Meanwell, Miss. (Marion Ogden)\",female,0,0,0,SOTON/O.Q. 392087,8.0500\r\n0,3,\"Meek, Mrs. Thomas (Annie Louise Rowley)\",female,0,0,0,343095,8.0500\r\n0,3,\"Meo, Mr. Alfonzo\",male,55.5,0,0,A.5. 11206,8.0500\r\n0,3,\"Mernagh, Mr. Robert\",male,0,0,0,368703,7.7500\r\n1,3,\"Midtsjo, Mr. Karl Albert\",male,21,0,0,345501,7.7750\r\n0,3,\"Miles, Mr. Frank\",male,0,0,0,359306,8.0500\r\n0,3,\"Mineff, Mr. Ivan\",male,24,0,0,349233,7.8958\r\n0,3,\"Minkoff, Mr. Lazar\",male,21,0,0,349211,7.8958\r\n0,3,\"Mionoff, Mr. Stoytcho\",male,28,0,0,349207,7.8958\r\n0,3,\"Mitkoff, Mr. Mito\",male,0,0,0,349221,7.8958\r\n1,3,\"Mockler, Miss. Helen Mary \"\"Ellie\"\"\",female,0,0,0,330980,7.8792\r\n0,3,\"Moen, Mr. Sigurd Hansen\",male,25,0,0,348123,7.6500\r\n1,3,\"Moor, Master. Meier\",male,6,0,1,392096,12.4750\r\n1,3,\"Moor, Mrs. (Beila)\",female,27,0,1,392096,12.4750\r\n0,3,\"Moore, Mr. Leonard Charles\",male,0,0,0,A4. 54510,8.0500\r\n1,3,\"Moran, Miss. Bertha\",female,0,1,0,371110,24.1500\r\n0,3,\"Moran, Mr. Daniel J\",male,0,1,0,371110,24.1500\r\n0,3,\"Moran, Mr. James\",male,0,0,0,330877,8.4583\r\n0,3,\"Morley, Mr. William\",male,34,0,0,364506,8.0500\r\n0,3,\"Morrow, Mr. Thomas Rowan\",male,0,0,0,372622,7.7500\r\n1,3,\"Moss, Mr. Albert Johan\",male,0,0,0,312991,7.7750\r\n1,3,\"Moubarek, Master. 
Gerios\",male,0,1,1,2661,15.2458\r\n1,3,\"Moubarek, Master. Halim Gonios (\"\"William George\"\")\",male,0,1,1,2661,15.2458\r\n1,3,\"Moubarek, Mrs. George (Omine \"\"Amenia\"\" Alexander)\",female,0,0,2,2661,15.2458\r\n1,3,\"Moussa, Mrs. (Mantoura Boulos)\",female,0,0,0,2626,7.2292\r\n0,3,\"Moutal, Mr. Rahamin Haim\",male,0,0,0,374746,8.0500\r\n1,3,\"Mullens, Miss. Katherine \"\"Katie\"\"\",female,0,0,0,35852,7.7333\r\n1,3,\"Mulvihill, Miss. Bertha E\",female,24,0,0,382653,7.7500\r\n0,3,\"Murdlin, Mr. Joseph\",male,0,0,0,A./5. 3235,8.0500\r\n1,3,\"Murphy, Miss. Katherine \"\"Kate\"\"\",female,0,1,0,367230,15.5000\r\n1,3,\"Murphy, Miss. Margaret Jane\",female,0,1,0,367230,15.5000\r\n1,3,\"Murphy, Miss. Nora\",female,0,0,0,36568,15.5000\r\n0,3,\"Myhrman, Mr. Pehr Fabian Oliver Malkolm\",male,18,0,0,347078,7.7500\r\n0,3,\"Naidenoff, Mr. Penko\",male,22,0,0,349206,7.8958\r\n1,3,\"Najib, Miss. Adele Kiamie \"\"Jane\"\"\",female,15,0,0,2667,7.2250\r\n1,3,\"Nakid, Miss. Maria (\"\"Mary\"\")\",female,1,0,2,2653,15.7417\r\n1,3,\"Nakid, Mr. Sahid\",male,20,1,1,2653,15.7417\r\n1,3,\"Nakid, Mrs. Said (Waika \"\"Mary\"\" Mowad)\",female,19,1,1,2653,15.7417\r\n0,3,\"Nancarrow, Mr. William Henry\",male,33,0,0,A./5. 3338,8.0500\r\n0,3,\"Nankoff, Mr. Minko\",male,0,0,0,349218,7.8958\r\n0,3,\"Nasr, Mr. Mustafa\",male,0,0,0,2652,7.2292\r\n0,3,\"Naughton, Miss. Hannah\",female,0,0,0,365237,7.7500\r\n0,3,\"Nenkoff, Mr. Christo\",male,0,0,0,349234,7.8958\r\n1,3,\"Nicola-Yarred, Master. Elias\",male,12,1,0,2651,11.2417\r\n1,3,\"Nicola-Yarred, Miss. Jamila\",female,14,1,0,2651,11.2417\r\n0,3,\"Nieminen, Miss. Manta Josefina\",female,29,0,0,3101297,7.9250\r\n0,3,\"Niklasson, Mr. Samuel\",male,28,0,0,363611,8.0500\r\n1,3,\"Nilsson, Miss. Berta Olivia\",female,18,0,0,347066,7.7750\r\n1,3,\"Nilsson, Miss. Helmina Josefina\",female,26,0,0,347470,7.8542\r\n0,3,\"Nilsson, Mr. August Ferdinand\",male,21,0,0,350410,7.8542\r\n0,3,\"Nirva, Mr. Iisakki Antino Aijo\",male,41,0,0,SOTON/O2 3101272,7.1250\r\n1,3,\"Niskanen, Mr. Juha\",male,39,0,0,STON/O 2. 3101289,7.9250\r\n0,3,\"Nosworthy, Mr. Richard Cater\",male,21,0,0,A/4. 39886,7.8000\r\n0,3,\"Novel, Mr. Mansouer\",male,28.5,0,0,2697,7.2292\r\n1,3,\"Nysten, Miss. Anna Sofia\",female,22,0,0,347081,7.7500\r\n0,3,\"Nysveen, Mr. Johan Hansen\",male,61,0,0,345364,6.2375\r\n0,3,\"O'Brien, Mr. Thomas\",male,0,1,0,370365,15.5000\r\n0,3,\"O'Brien, Mr. Timothy\",male,0,0,0,330979,7.8292\r\n1,3,\"O'Brien, Mrs. Thomas (Johanna \"\"Hannah\"\" Godfrey)\",female,0,1,0,370365,15.5000\r\n0,3,\"O'Connell, Mr. Patrick D\",male,0,0,0,334912,7.7333\r\n0,3,\"O'Connor, Mr. Maurice\",male,0,0,0,371060,7.7500\r\n0,3,\"O'Connor, Mr. Patrick\",male,0,0,0,366713,7.7500\r\n0,3,\"Odahl, Mr. Nils Martin\",male,23,0,0,7267,9.2250\r\n0,3,\"O'Donoghue, Ms. Bridget\",female,0,0,0,364856,7.7500\r\n1,3,\"O'Driscoll, Miss. Bridget\",female,0,0,0,14311,7.7500\r\n1,3,\"O'Dwyer, Miss. Ellen \"\"Nellie\"\"\",female,0,0,0,330959,7.8792\r\n1,3,\"Ohman, Miss. Velin\",female,22,0,0,347085,7.7750\r\n1,3,\"O'Keefe, Mr. Patrick\",male,0,0,0,368402,7.7500\r\n1,3,\"O'Leary, Miss. Hanora \"\"Norah\"\"\",female,0,0,0,330919,7.8292\r\n1,3,\"Olsen, Master. Artur Karl\",male,9,0,1,C 17368,3.1708\r\n0,3,\"Olsen, Mr. Henry Margido\",male,28,0,0,C 4001,22.5250\r\n0,3,\"Olsen, Mr. Karl Siegwart Andreas\",male,42,0,1,4579,8.4042\r\n0,3,\"Olsen, Mr. Ole Martin\",male,0,0,0,Fa 265302,7.3125\r\n0,3,\"Olsson, Miss. Elina\",female,31,0,0,350407,7.8542\r\n0,3,\"Olsson, Mr. 
Nils Johan Goransson\",male,28,0,0,347464,7.8542\r\n1,3,\"Olsson, Mr. Oscar Wilhelm\",male,32,0,0,347079,7.7750\r\n0,3,\"Olsvigen, Mr. Thor Anderson\",male,20,0,0,6563,9.2250\r\n0,3,\"Oreskovic, Miss. Jelka\",female,23,0,0,315085,8.6625\r\n0,3,\"Oreskovic, Miss. Marija\",female,20,0,0,315096,8.6625\r\n0,3,\"Oreskovic, Mr. Luka\",male,20,0,0,315094,8.6625\r\n0,3,\"Osen, Mr. Olaf Elon\",male,16,0,0,7534,9.2167\r\n1,3,\"Osman, Mrs. Mara\",female,31,0,0,349244,8.6833\r\n0,3,\"O'Sullivan, Miss. Bridget Mary\",female,0,0,0,330909,7.6292\r\n0,3,\"Palsson, Master. Gosta Leonard\",male,2,3,1,349909,21.0750\r\n0,3,\"Palsson, Master. Paul Folke\",male,6,3,1,349909,21.0750\r\n0,3,\"Palsson, Miss. Stina Viola\",female,3,3,1,349909,21.0750\r\n0,3,\"Palsson, Miss. Torborg Danira\",female,8,3,1,349909,21.0750\r\n0,3,\"Palsson, Mrs. Nils (Alma Cornelia Berglund)\",female,29,0,4,349909,21.0750\r\n0,3,\"Panula, Master. Eino Viljami\",male,1,4,1,3101295,39.6875\r\n0,3,\"Panula, Master. Juha Niilo\",male,7,4,1,3101295,39.6875\r\n0,3,\"Panula, Master. Urho Abraham\",male,2,4,1,3101295,39.6875\r\n0,3,\"Panula, Mr. Ernesti Arvid\",male,16,4,1,3101295,39.6875\r\n0,3,\"Panula, Mr. Jaako Arnold\",male,14,4,1,3101295,39.6875\r\n0,3,\"Panula, Mrs. Juha (Maria Emilia Ojala)\",female,41,0,5,3101295,39.6875\r\n0,3,\"Pasic, Mr. Jakob\",male,21,0,0,315097,8.6625\r\n0,3,\"Patchett, Mr. George\",male,19,0,0,358585,14.5000\r\n0,3,\"Paulner, Mr. Uscher\",male,0,0,0,3411,8.7125\r\n0,3,\"Pavlovic, Mr. Stefo\",male,32,0,0,349242,7.8958\r\n0,3,\"Peacock, Master. Alfred Edward\",male,0.75,1,1,SOTON/O.Q. 3101315,13.7750\r\n0,3,\"Peacock, Miss. Treasteall\",female,3,1,1,SOTON/O.Q. 3101315,13.7750\r\n0,3,\"Peacock, Mrs. Benjamin (Edith Nile)\",female,26,0,2,SOTON/O.Q. 3101315,13.7750\r\n0,3,\"Pearce, Mr. Ernest\",male,0,0,0,343271,7.0000\r\n0,3,\"Pedersen, Mr. Olaf\",male,0,0,0,345498,7.7750\r\n0,3,\"Peduzzi, Mr. Joseph\",male,0,0,0,A/5 2817,8.0500\r\n0,3,\"Pekoniemi, Mr. Edvard\",male,21,0,0,STON/O 2. 3101294,7.9250\r\n0,3,\"Peltomaki, Mr. Nikolai Johannes\",male,25,0,0,STON/O 2. 3101291,7.9250\r\n0,3,\"Perkin, Mr. John Henry\",male,22,0,0,A/5 21174,7.2500\r\n1,3,\"Persson, Mr. Ernst Ulrik\",male,25,1,0,347083,7.7750\r\n1,3,\"Peter, Master. Michael J\",male,0,1,1,2668,22.3583\r\n1,3,\"Peter, Miss. Anna\",female,0,1,1,2668,22.3583\r\n1,3,\"Peter, Mrs. Catherine (Catherine Rizk)\",female,0,0,2,2668,22.3583\r\n0,3,\"Peters, Miss. Katie\",female,0,0,0,330935,8.1375\r\n0,3,\"Petersen, Mr. Marius\",male,24,0,0,342441,8.0500\r\n0,3,\"Petranec, Miss. Matilda\",female,28,0,0,349245,7.8958\r\n0,3,\"Petroff, Mr. Nedelio\",male,19,0,0,349212,7.8958\r\n0,3,\"Petroff, Mr. Pastcho (\"\"Pentcho\"\")\",male,0,0,0,349215,7.8958\r\n0,3,\"Petterson, Mr. Johan Emil\",male,25,1,0,347076,7.7750\r\n0,3,\"Pettersson, Miss. Ellen Natalia\",female,18,0,0,347087,7.7750\r\n1,3,\"Pickard, Mr. Berk (Berk Trembisky)\",male,32,0,0,SOTON/O.Q. 392078,8.0500\r\n0,3,\"Plotcharsky, Mr. Vasil\",male,0,0,0,349227,7.8958\r\n0,3,\"Pokrnic, Mr. Mate\",male,17,0,0,315095,8.6625\r\n0,3,\"Pokrnic, Mr. Tome\",male,24,0,0,315092,8.6625\r\n0,3,\"Radeff, Mr. Alexander\",male,0,0,0,349223,7.8958\r\n0,3,\"Rasmussen, Mrs. (Lena Jacobsen Solvang)\",female,0,0,0,65305,8.1125\r\n0,3,\"Razi, Mr. Raihed\",male,0,0,0,2629,7.2292\r\n0,3,\"Reed, Mr. James George\",male,0,0,0,362316,7.2500\r\n0,3,\"Rekic, Mr. Tido\",male,38,0,0,349249,7.8958\r\n0,3,\"Reynolds, Mr. Harold J\",male,21,0,0,342684,8.0500\r\n0,3,\"Rice, Master. Albert\",male,10,4,1,382652,29.1250\r\n0,3,\"Rice, Master. 
Arthur\",male,4,4,1,382652,29.1250\r\n0,3,\"Rice, Master. Eric\",male,7,4,1,382652,29.1250\r\n0,3,\"Rice, Master. Eugene\",male,2,4,1,382652,29.1250\r\n0,3,\"Rice, Master. George Hugh\",male,8,4,1,382652,29.1250\r\n0,3,\"Rice, Mrs. William (Margaret Norton)\",female,39,0,5,382652,29.1250\r\n0,3,\"Riihivouri, Miss. Susanna Juhantytar \"\"Sanni\"\"\",female,22,0,0,3101295,39.6875\r\n0,3,\"Rintamaki, Mr. Matti\",male,35,0,0,STON/O 2. 3101273,7.1250\r\n1,3,\"Riordan, Miss. Johanna \"\"Hannah\"\"\",female,0,0,0,334915,7.7208\r\n0,3,\"Risien, Mr. Samuel Beard\",male,0,0,0,364498,14.5000\r\n0,3,\"Risien, Mrs. Samuel (Emma)\",female,0,0,0,364498,14.5000\r\n0,3,\"Robins, Mr. Alexander A\",male,50,1,0,A/5. 3337,14.5000\r\n0,3,\"Robins, Mrs. Alexander A (Grace Charity Laury)\",female,47,1,0,A/5. 3337,14.5000\r\n0,3,\"Rogers, Mr. William John\",male,0,0,0,S.C./A.4. 23567,8.0500\r\n0,3,\"Rommetvedt, Mr. Knud Paust\",male,0,0,0,312993,7.7750\r\n0,3,\"Rosblom, Miss. Salli Helena\",female,2,1,1,370129,20.2125\r\n0,3,\"Rosblom, Mr. Viktor Richard\",male,18,1,1,370129,20.2125\r\n0,3,\"Rosblom, Mrs. Viktor (Helena Wilhelmina)\",female,41,0,2,370129,20.2125\r\n1,3,\"Roth, Miss. Sarah A\",female,0,0,0,342712,8.0500\r\n0,3,\"Rouse, Mr. Richard Henry\",male,50,0,0,A/5 3594,8.0500\r\n0,3,\"Rush, Mr. Alfred George John\",male,16,0,0,A/4. 20589,8.0500\r\n1,3,\"Ryan, Mr. Edward\",male,0,0,0,383162,7.7500\r\n0,3,\"Ryan, Mr. Patrick\",male,0,0,0,371110,24.1500\r\n0,3,\"Saad, Mr. Amin\",male,0,0,0,2671,7.2292\r\n0,3,\"Saad, Mr. Khalil\",male,25,0,0,2672,7.2250\r\n0,3,\"Saade, Mr. Jean Nassr\",male,0,0,0,2676,7.2250\r\n0,3,\"Sadlier, Mr. Matthew\",male,0,0,0,367655,7.7292\r\n0,3,\"Sadowitz, Mr. Harry\",male,0,0,0,LP 1588,7.5750\r\n0,3,\"Saether, Mr. Simon Sivertsen\",male,38.5,0,0,SOTON/O.Q. 3101262,7.2500\r\n0,3,\"Sage, Master. Thomas Henry\",male,0,8,2,CA. 2343,69.5500\r\n0,3,\"Sage, Master. William Henry\",male,14.5,8,2,CA. 2343,69.5500\r\n0,3,\"Sage, Miss. Ada\",female,0,8,2,CA. 2343,69.5500\r\n0,3,\"Sage, Miss. Constance Gladys\",female,0,8,2,CA. 2343,69.5500\r\n0,3,\"Sage, Miss. Dorothy Edith \"\"Dolly\"\"\",female,0,8,2,CA. 2343,69.5500\r\n0,3,\"Sage, Miss. Stella Anna\",female,0,8,2,CA. 2343,69.5500\r\n0,3,\"Sage, Mr. Douglas Bullen\",male,0,8,2,CA. 2343,69.5500\r\n0,3,\"Sage, Mr. Frederick\",male,0,8,2,CA. 2343,69.5500\r\n0,3,\"Sage, Mr. George John Jr\",male,0,8,2,CA. 2343,69.5500\r\n0,3,\"Sage, Mr. John George\",male,0,1,9,CA. 2343,69.5500\r\n0,3,\"Sage, Mrs. John (Annie Bullen)\",female,0,1,9,CA. 2343,69.5500\r\n0,3,\"Salander, Mr. Karl Johan\",male,24,0,0,7266,9.3250\r\n1,3,\"Salkjelsvik, Miss. Anna Kristine\",female,21,0,0,343120,7.6500\r\n0,3,\"Salonen, Mr. Johan Werner\",male,39,0,0,3101296,7.9250\r\n0,3,\"Samaan, Mr. Elias\",male,0,2,0,2662,21.6792\r\n0,3,\"Samaan, Mr. Hanna\",male,0,2,0,2662,21.6792\r\n0,3,\"Samaan, Mr. Youssef\",male,0,2,0,2662,21.6792\r\n1,3,\"Sandstrom, Miss. Beatrice Irene\",female,1,1,1,PP 9549,16.7000\r\n1,3,\"Sandstrom, Mrs. Hjalmar (Agnes Charlotta Bengtsson)\",female,24,0,2,PP 9549,16.7000\r\n1,3,\"Sandstrom, Miss. Marguerite Rut\",female,4,1,1,PP 9549,16.7000\r\n1,3,\"Sap, Mr. Julius\",male,25,0,0,345768,9.5000\r\n0,3,\"Saundercock, Mr. William Henry\",male,20,0,0,A/5. 2151,8.0500\r\n0,3,\"Sawyer, Mr. Frederick Charles\",male,24.5,0,0,342826,8.0500\r\n0,3,\"Scanlan, Mr. James\",male,0,0,0,36209,7.7250\r\n0,3,\"Sdycoff, Mr. Todor\",male,0,0,0,349222,7.8958\r\n0,3,\"Shaughnessy, Mr. Patrick\",male,0,0,0,370374,7.7500\r\n1,3,\"Sheerlinck, Mr. 
Jan Baptist\",male,29,0,0,345779,9.5000\r\n0,3,\"Shellard, Mr. Frederick William\",male,0,0,0,C.A. 6212,15.1000\r\n1,3,\"Shine, Miss. Ellen Natalia\",female,0,0,0,330968,7.7792\r\n0,3,\"Shorney, Mr. Charles Joseph\",male,0,0,0,374910,8.0500\r\n0,3,\"Simmons, Mr. John\",male,0,0,0,SOTON/OQ 392082,8.0500\r\n0,3,\"Sirayanian, Mr. Orsen\",male,22,0,0,2669,7.2292\r\n0,3,\"Sirota, Mr. Maurice\",male,0,0,0,392092,8.0500\r\n0,3,\"Sivic, Mr. Husein\",male,40,0,0,349251,7.8958\r\n0,3,\"Sivola, Mr. Antti Wilhelm\",male,21,0,0,STON/O 2. 3101280,7.9250\r\n1,3,\"Sjoblom, Miss. Anna Sofia\",female,18,0,0,3101265,7.4958\r\n0,3,\"Skoog, Master. Harald\",male,4,3,2,347088,27.9000\r\n0,3,\"Skoog, Master. Karl Thorsten\",male,10,3,2,347088,27.9000\r\n0,3,\"Skoog, Miss. Mabel\",female,9,3,2,347088,27.9000\r\n0,3,\"Skoog, Miss. Margit Elizabeth\",female,2,3,2,347088,27.9000\r\n0,3,\"Skoog, Mr. Wilhelm\",male,40,1,4,347088,27.9000\r\n0,3,\"Skoog, Mrs. William (Anna Bernhardina Karlsson)\",female,45,1,4,347088,27.9000\r\n0,3,\"Slabenoff, Mr. Petco\",male,0,0,0,349214,7.8958\r\n0,3,\"Slocovski, Mr. Selman Francis\",male,0,0,0,SOTON/OQ 392086,8.0500\r\n0,3,\"Smiljanic, Mr. Mile\",male,0,0,0,315037,8.6625\r\n0,3,\"Smith, Mr. Thomas\",male,0,0,0,384461,7.7500\r\n1,3,\"Smyth, Miss. Julia\",female,0,0,0,335432,7.7333\r\n0,3,\"Soholt, Mr. Peter Andreas Lauritz Andersen\",male,19,0,0,348124,7.6500\r\n0,3,\"Somerton, Mr. Francis William\",male,30,0,0,A.5. 18509,8.0500\r\n0,3,\"Spector, Mr. Woolf\",male,0,0,0,A.5. 3236,8.0500\r\n0,3,\"Spinner, Mr. Henry John\",male,32,0,0,STON/OQ. 369943,8.0500\r\n0,3,\"Staneff, Mr. Ivan\",male,0,0,0,349208,7.8958\r\n0,3,\"Stankovic, Mr. Ivan\",male,33,0,0,349239,8.6625\r\n1,3,\"Stanley, Miss. Amy Zillah Elsie\",female,23,0,0,CA. 2314,7.5500\r\n0,3,\"Stanley, Mr. Edward Roland\",male,21,0,0,A/4 45380,8.0500\r\n0,3,\"Storey, Mr. Thomas\",male,60.5,0,0,3701,7.8958\r\n0,3,\"Stoytcheff, Mr. Ilia\",male,19,0,0,349205,7.8958\r\n0,3,\"Strandberg, Miss. Ida Sofia\",female,22,0,0,7553,9.8375\r\n1,3,\"Stranden, Mr. Juho\",male,31,0,0,STON/O 2. 3101288,7.9250\r\n0,3,\"Strilic, Mr. Ivan\",male,27,0,0,315083,8.6625\r\n0,3,\"Strom, Miss. Telma Matilda\",female,2,0,1,347054,10.4625\r\n0,3,\"Strom, Mrs. Wilhelm (Elna Matilda Persson)\",female,29,1,1,347054,10.4625\r\n1,3,\"Sunderland, Mr. Victor Francis\",male,16,0,0,SOTON/OQ 392089,8.0500\r\n1,3,\"Sundman, Mr. Johan Julian\",male,44,0,0,STON/O 2. 3101269,7.9250\r\n0,3,\"Sutehall, Mr. Henry Jr\",male,25,0,0,SOTON/OQ 392076,7.0500\r\n0,3,\"Svensson, Mr. Johan\",male,74,0,0,347060,7.7750\r\n1,3,\"Svensson, Mr. Johan Cervin\",male,14,0,0,7538,9.2250\r\n0,3,\"Svensson, Mr. Olof\",male,24,0,0,350035,7.7958\r\n1,3,\"Tenglin, Mr. Gunnar Isidor\",male,25,0,0,350033,7.7958\r\n0,3,\"Theobald, Mr. Thomas Leonard\",male,34,0,0,363294,8.0500\r\n1,3,\"Thomas, Master. Assad Alexander\",male,0.4167,0,1,2625,8.5167\r\n0,3,\"Thomas, Mr. Charles P\",male,0,1,0,2621,6.4375\r\n0,3,\"Thomas, Mr. John\",male,0,0,0,2681,6.4375\r\n0,3,\"Thomas, Mr. Tannous\",male,0,0,0,2684,7.2250\r\n1,3,\"Thomas, Mrs. Alexander (Thamine \"\"Thelma\"\")\",female,16,1,1,2625,8.5167\r\n0,3,\"Thomson, Mr. Alexander Morrison\",male,0,0,0,32302,8.0500\r\n0,3,\"Thorneycroft, Mr. Percival\",male,0,1,0,376564,16.1000\r\n1,3,\"Thorneycroft, Mrs. Percival (Florence Kate White)\",female,0,1,0,376564,16.1000\r\n0,3,\"Tikkanen, Mr. Juho\",male,32,0,0,STON/O 2. 3101293,7.9250\r\n0,3,\"Tobin, Mr. Roger\",male,0,0,0,383121,7.7500\r\n0,3,\"Todoroff, Mr. Lalio\",male,0,0,0,349216,7.8958\r\n0,3,\"Tomlin, Mr. 
Ernest Portage\",male,30.5,0,0,364499,8.0500\r\n0,3,\"Torber, Mr. Ernst William\",male,44,0,0,364511,8.0500\r\n0,3,\"Torfa, Mr. Assad\",male,0,0,0,2673,7.2292\r\n1,3,\"Tornquist, Mr. William Henry\",male,25,0,0,LINE,0.0000\r\n0,3,\"Toufik, Mr. Nakli\",male,0,0,0,2641,7.2292\r\n1,3,\"Touma, Master. Georges Youssef\",male,7,1,1,2650,15.2458\r\n1,3,\"Touma, Miss. Maria Youssef\",female,9,1,1,2650,15.2458\r\n1,3,\"Touma, Mrs. Darwis (Hanne Youssef Razi)\",female,29,0,2,2650,15.2458\r\n0,3,\"Turcin, Mr. Stjepan\",male,36,0,0,349247,7.8958\r\n1,3,\"Turja, Miss. Anna Sofia\",female,18,0,0,4138,9.8417\r\n1,3,\"Turkula, Mrs. (Hedwig)\",female,63,0,0,4134,9.5875\r\n0,3,\"van Billiard, Master. James William\",male,0,1,1,A/5. 851,14.5000\r\n0,3,\"van Billiard, Master. Walter John\",male,11.5,1,1,A/5. 851,14.5000\r\n0,3,\"van Billiard, Mr. Austin Blyler\",male,40.5,0,2,A/5. 851,14.5000\r\n0,3,\"Van Impe, Miss. Catharina\",female,10,0,2,345773,24.1500\r\n0,3,\"Van Impe, Mr. Jean Baptiste\",male,36,1,1,345773,24.1500\r\n0,3,\"Van Impe, Mrs. Jean Baptiste (Rosalie Paula Govaert)\",female,30,1,1,345773,24.1500\r\n0,3,\"van Melkebeke, Mr. Philemon\",male,0,0,0,345777,9.5000\r\n0,3,\"Vande Velde, Mr. Johannes Joseph\",male,33,0,0,345780,9.5000\r\n0,3,\"Vande Walle, Mr. Nestor Cyriel\",male,28,0,0,345770,9.5000\r\n0,3,\"Vanden Steen, Mr. Leo Peter\",male,28,0,0,345783,9.5000\r\n0,3,\"Vander Cruyssen, Mr. Victor\",male,47,0,0,345765,9.0000\r\n0,3,\"Vander Planke, Miss. Augusta Maria\",female,18,2,0,345764,18.0000\r\n0,3,\"Vander Planke, Mr. Julius\",male,31,3,0,345763,18.0000\r\n0,3,\"Vander Planke, Mr. Leo Edmondus\",male,16,2,0,345764,18.0000\r\n0,3,\"Vander Planke, Mrs. Julius (Emelia Maria Vandemoortele)\",female,31,1,0,345763,18.0000\r\n1,3,\"Vartanian, Mr. David\",male,22,0,0,2658,7.2250\r\n0,3,\"Vendel, Mr. Olof Edvin\",male,20,0,0,350416,7.8542\r\n0,3,\"Vestrom, Miss. Hulda Amanda Adolfina\",female,14,0,0,350406,7.8542\r\n0,3,\"Vovk, Mr. Janko\",male,22,0,0,349252,7.8958\r\n0,3,\"Waelens, Mr. Achille\",male,22,0,0,345767,9.0000\r\n0,3,\"Ware, Mr. Frederick\",male,0,0,0,359309,8.0500\r\n0,3,\"Warren, Mr. Charles William\",male,0,0,0,C.A. 49867,7.5500\r\n0,3,\"Webber, Mr. James\",male,0,0,0,SOTON/OQ 3101316,8.0500\r\n0,3,\"Wenzel, Mr. Linhart\",male,32.5,0,0,345775,9.5000\r\n1,3,\"Whabee, Mrs. George Joseph (Shawneene Abi-Saab)\",female,38,0,0,2688,7.2292\r\n0,3,\"Widegren, Mr. Carl/Charles Peter\",male,51,0,0,347064,7.7500\r\n0,3,\"Wiklund, Mr. Jakob Alfred\",male,18,1,0,3101267,6.4958\r\n0,3,\"Wiklund, Mr. Karl Johan\",male,21,1,0,3101266,6.4958\r\n1,3,\"Wilkes, Mrs. James (Ellen Needs)\",female,47,1,0,363272,7.0000\r\n0,3,\"Willer, Mr. Aaron (\"\"Abi Weller\"\")\",male,0,0,0,3410,8.7125\r\n0,3,\"Willey, Mr. Edward\",male,0,0,0,S.O./P.P. 751,7.5500\r\n0,3,\"Williams, Mr. Howard Hugh \"\"Harry\"\"\",male,0,0,0,A/5 2466,8.0500\r\n0,3,\"Williams, Mr. Leslie\",male,28.5,0,0,54636,16.1000\r\n0,3,\"Windelov, Mr. Einar\",male,21,0,0,SOTON/OQ 3101317,7.2500\r\n0,3,\"Wirz, Mr. Albert\",male,27,0,0,315154,8.6625\r\n0,3,\"Wiseman, Mr. Phillippe\",male,0,0,0,A/4. 34244,7.2500\r\n0,3,\"Wittevrongel, Mr. Camille\",male,36,0,0,345771,9.5000\r\n0,3,\"Yasbeck, Mr. Antoni\",male,27,1,0,2659,14.4542\r\n1,3,\"Yasbeck, Mrs. Antoni (Selini Alexander)\",female,15,1,0,2659,14.4542\r\n0,3,\"Youseff, Mr. Gerious\",male,45.5,0,0,2628,7.2250\r\n0,3,\"Yousif, Mr. Wazli\",male,0,0,0,2647,7.2250\r\n0,3,\"Yousseff, Mr. Gerious\",male,0,0,0,2627,14.4583\r\n0,3,\"Zabour, Miss. 
Hileni\",female,14.5,1,0,2665,14.4542\r\n0,3,\"Zabour, Miss. Thamine\",female,0,1,0,2665,14.4542\r\n0,3,\"Zakarian, Mr. Mapriededer\",male,26.5,0,0,2656,7.2250\r\n0,3,\"Zakarian, Mr. Ortin\",male,27,0,0,2670,7.2250\r\n0,3,\"Zimmerman, Mr. Leo\",male,29,0,0,315082,7.8750"
  },
  {
    "path": "Chapter09/Python 2.7/classify_image.py",
    "content": "import tensorflow as tf, sys\n\n# You will be sending the image to be classified as a parameter\nprovided_image_path = sys.argv[1]\n\n# then we will read the image data\nprovided_image_data = tf.gfile.FastGFile(provided_image_path, 'rb').read()\n\n# Loads label file\nlabel_lines = [line.rstrip() for line \n             in tf.gfile.GFile(\"tensorflow_files/retrained_labels.txt\")]\n\n# Unpersists graph from file\nwith tf.gfile.FastGFile(\"tensorflow_files/retrained_graph.pb\", 'rb') as f:\n    graph_def = tf.GraphDef()\n    graph_def.ParseFromString(f.read())\n    _ = tf.import_graph_def(graph_def, name='')\n\nwith tf.Session() as sess:\n    # pass the provided_image_data as input to the graph\n    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')\n    \n    netowrk_predictions = sess.run(softmax_tensor, \\\n             {'DecodeJpeg/contents:0': provided_image_data})\n    \n    # Sort the result by confidence to show the flower labels accordingly\n    top_predictions = netowrk_predictions[0].argsort()[-len(netowrk_predictions[0]):][::-1]\n    \n    for prediction in top_predictions:\n        flower_type = label_lines[prediction]\n        score = netowrk_predictions[0][prediction]\n        print('%s (score = %.5f)' % (flower_type, score))\n"
  },
  {
    "path": "Chapter09/Python 3.5/classify_image.py",
    "content": "import tensorflow as tf, sys\n\n# You will be sending the image to be classified as a parameter\nprovided_image_path = sys.argv[1]\n\n# then we will read the image data\nprovided_image_data = tf.gfile.FastGFile(provided_image_path, 'rb').read()\n\n# Loads label file\nlabel_lines = [line.rstrip() for line \n             in tf.gfile.GFile(\"tensorflow_files/retrained_labels.txt\")]\n\n# Unpersists graph from file\nwith tf.gfile.FastGFile(\"tensorflow_files/retrained_graph.pb\", 'rb') as f:\n    graph_def = tf.GraphDef()\n    graph_def.ParseFromString(f.read())\n    _ = tf.import_graph_def(graph_def, name='')\n\nwith tf.Session() as sess:\n    # pass the provided_image_data as input to the graph\n    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')\n    \n    netowrk_predictions = sess.run(softmax_tensor, \\\n             {'DecodeJpeg/contents:0': provided_image_data})\n    \n    # Sort the result by confidence to show the flower labels accordingly\n    top_predictions = netowrk_predictions[0].argsort()[-len(netowrk_predictions[0]):][::-1]\n    \n    for prediction in top_predictions:\n        flower_type = label_lines[prediction]\n        score = netowrk_predictions[0][prediction]\n        print('%s (score = %.5f)' % (flower_type, score))\n"
  },
  {
    "path": "Chapter10/Python 2.7/FrozenLake_1.py",
    "content": "import gym\r\nimport numpy as np\r\n\r\nenv = gym.make('FrozenLake-v0')\r\n\r\n#Initialize table with all zeros\r\nQ = np.zeros([env.observation_space.n,env.action_space.n])\r\n# Set learning parameters\r\nlr = .85\r\ngamma = .99\r\nnum_episodes = 2000\r\n\r\n#create lists to contain total rewards and steps per episode\r\nrList = []\r\nfor i in range(num_episodes):\r\n#Reset environment and get first new observation\r\n    s = env.reset()\r\n    rAll = 0\r\n    d = False\r\n    j = 0\r\n\r\n   #The Q-Table learning algorithm\r\n    while j < 99:\r\n        j+=1\r\n\r\n   #Choose an action by greedily (with noise) picking from Q table\r\n        a=np.argmax(Q[s,:]+ \\\r\n                  np.random.randn(1,env.action_space.n)*(1./(i+1)))\r\n\r\n  #Get new state and reward from environment\r\n        s1,r,d,_ = env.step(a)\r\n\r\n        #Update Q-Table with new knowledge\r\n        Q[s,a] = Q[s,a] + lr*(r + gamma *np.max(Q[s1,:]) - Q[s,a])\r\n        rAll += r\r\n        s = s1\r\n        if d == True:\r\n            break\r\n\r\n    rList.append(rAll)\r\n\r\nprint(\"Score over time: \" +  str(sum(rList)/num_episodes))\r\nprint(\"Final Q-Table Values\")\r\nprint(Q)\r\n"
  },
  {
    "path": "Chapter10/Python 2.7/Q_Learning_1.py",
    "content": "import gym\r\nimport numpy as np\r\nimport random\r\nimport tensorflow as tf\r\nimport matplotlib.pyplot as plt\r\n\r\n#Define the FrozenLake enviroment\r\nenv = gym.make('FrozenLake-v0')\r\n\r\n#Setup the TensorFlow placeholders and variabiles\r\ntf.reset_default_graph()\r\ninputs1 = tf.placeholder(shape=[1,16],dtype=tf.float32)\r\nW = tf.Variable(tf.random_uniform([16,4],0,0.01))\r\nQout = tf.matmul(inputs1,W)\r\npredict = tf.argmax(Qout,1)\r\nnextQ = tf.placeholder(shape=[1,4],dtype=tf.float32)\r\n\r\n#define the loss and optimization functions \r\nloss = tf.reduce_sum(tf.square(nextQ - Qout))\r\ntrainer = tf.train.GradientDescentOptimizer(learning_rate=0.1)\r\nupdateModel = trainer.minimize(loss)\r\n\r\n#initilize the vabiables\r\ninit = tf.global_variables_initializer()\r\n\r\n#prepare the q-learning parameters\r\ngamma = .99\r\ne = 0.1\r\nnum_episodes = 6000\r\njList = []\r\nrList = []\r\n\r\n#Run the session\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n#Start the Q-learning procedure\r\n    for i in range(num_episodes):\r\n        s = env.reset()\r\n        rAll = 0\r\n        d = False\r\n        j = 0\r\n        while j < 99:\r\n            j+=1\r\n            a,allQ = sess.run([predict,Qout],\\\r\n                              feed_dict=\\\r\n                              {inputs1:np.identity(16)[s:s+1]})\r\n\r\n            if np.random.rand(1) < e:\r\n                a[0] = env.action_space.sample()\r\n            s1,r,d,_ = env.step(a[0])\r\n            Q1 = sess.run(Qout,feed_dict=\\\r\n                          {inputs1:np.identity(16)[s1:s1+1]})\r\n            maxQ1 = np.max(Q1)\r\n            targetQ = allQ\r\n            targetQ[0,a[0]] = r + gamma *maxQ1\r\n            _,W1 = sess.run([updateModel,W],\\\r\n                            feed_dict=\\\r\n                           {inputs1:np.identity(16)[s:s+1],nextQ:targetQ})\r\n#cumulate the total reward\r\n            rAll += r\r\n            s = s1\r\n            if d == True:\r\n                e = 1./((i/50) + 10)\r\n                break\r\n        jList.append(j)\r\n        rList.append(rAll)\r\n#print the results\r\n    print(\"Percent of succesful episodes: \" + str(sum(rList)/num_episodes) + \"%\")\r\n"
  },
  {
    "path": "Chapter10/Python 3.5/FrozenLake_1.py",
    "content": "import gym\r\nimport numpy as np\r\n\r\nenv = gym.make('FrozenLake-v0')\r\n\r\n#Initialize table with all zeros\r\nQ = np.zeros([env.observation_space.n,env.action_space.n])\r\n# Set learning parameters\r\nlr = .85\r\ngamma = .99\r\nnum_episodes = 2000\r\n\r\n#create lists to contain total rewards and steps per episode\r\nrList = []\r\nfor i in range(num_episodes):\r\n#Reset environment and get first new observation\r\n    s = env.reset()\r\n    rAll = 0\r\n    d = False\r\n    j = 0\r\n\r\n   #The Q-Table learning algorithm\r\n    while j < 99:\r\n        j+=1\r\n\r\n   #Choose an action by greedily (with noise) picking from Q table\r\n        a=np.argmax(Q[s,:]+ \\\r\n                  np.random.randn(1,env.action_space.n)*(1./(i+1)))\r\n\r\n  #Get new state and reward from environment\r\n        s1,r,d,_ = env.step(a)\r\n\r\n        #Update Q-Table with new knowledge\r\n        Q[s,a] = Q[s,a] + lr*(r + gamma *np.max(Q[s1,:]) - Q[s,a])\r\n        rAll += r\r\n        s = s1\r\n        if d == True:\r\n            break\r\n\r\n    rList.append(rAll)\r\n\r\nprint(\"Score over time: \" +  str(sum(rList)/num_episodes))\r\nprint(\"Final Q-Table Values\")\r\nprint(Q)\r\n"
  },
  {
    "path": "Chapter10/Python 3.5/Q_Learning_1.py",
    "content": "import gym\r\nimport numpy as np\r\nimport random\r\nimport tensorflow as tf\r\nimport matplotlib.pyplot as plt\r\n\r\n#Define the FrozenLake enviroment\r\nenv = gym.make('FrozenLake-v0')\r\n\r\n#Setup the TensorFlow placeholders and variabiles\r\ntf.reset_default_graph()\r\ninputs1 = tf.placeholder(shape=[1,16],dtype=tf.float32)\r\nW = tf.Variable(tf.random_uniform([16,4],0,0.01))\r\nQout = tf.matmul(inputs1,W)\r\npredict = tf.argmax(Qout,1)\r\nnextQ = tf.placeholder(shape=[1,4],dtype=tf.float32)\r\n\r\n#define the loss and optimization functions \r\nloss = tf.reduce_sum(tf.square(nextQ - Qout))\r\ntrainer = tf.train.GradientDescentOptimizer(learning_rate=0.1)\r\nupdateModel = trainer.minimize(loss)\r\n\r\n#initilize the vabiables\r\ninit = tf.global_variables_initializer()\r\n\r\n#prepare the q-learning parameters\r\ngamma = .99\r\ne = 0.1\r\nnum_episodes = 6000\r\njList = []\r\nrList = []\r\n\r\n#Run the session\r\nwith tf.Session() as sess:\r\n    sess.run(init)\r\n#Start the Q-learning procedure\r\n    for i in range(num_episodes):\r\n        s = env.reset()\r\n        rAll = 0\r\n        d = False\r\n        j = 0\r\n        while j < 99:\r\n            j+=1\r\n            a,allQ = sess.run([predict,Qout],\\\r\n                              feed_dict=\\\r\n                              {inputs1:np.identity(16)[s:s+1]})\r\n\r\n            if np.random.rand(1) < e:\r\n                a[0] = env.action_space.sample()\r\n            s1,r,d,_ = env.step(a[0])\r\n            Q1 = sess.run(Qout,feed_dict=\\\r\n                          {inputs1:np.identity(16)[s1:s1+1]})\r\n            maxQ1 = np.max(Q1)\r\n            targetQ = allQ\r\n            targetQ[0,a[0]] = r + gamma *maxQ1\r\n            _,W1 = sess.run([updateModel,W],\\\r\n                            feed_dict=\\\r\n                           {inputs1:np.identity(16)[s:s+1],nextQ:targetQ})\r\n#cumulate the total reward\r\n            rAll += r\r\n            s = s1\r\n            if d == True:\r\n                e = 1./((i/50) + 10)\r\n                break\r\n        jList.append(j)\r\n        rList.append(rAll)\r\n#print the results\r\n    print(\"Percent of succesful episodes: \" + str(sum(rList)/num_episodes) + \"%\")\r\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2017 Deeptituscano\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "\n\n\n# Deep Learning with TensorFlow\nDeep Learning with TensorFlow by Packt\n\nThis is the code repository for [Deep Learning with TensorFlow](https://www.packtpub.com/big-data-and-business-intelligence/deep-learning-tensorflow?utm_source=github&utm_medium=repository&utm_campaign=9781786469786), published by [Packt](https://www.packtpub.com/?utm_source=github). It contains all the supporting project files necessary to work through the book from start to finish.\n## About the Book\nDeep learning is the step that comes after machine learning, and has more advanced implementations. Machine learning is not just for academics anymore, but is becoming a mainstream practice through wide adoption, and deep learning has taken the front seat. As a data scientist, if you want to explore data abstraction layers, this book will be your guide. This book shows how this can be exploited in the real world with complex raw data using TensorFlow 1.x.\n\nThroughout the book, you’ll learn how to implement deep learning algorithms for machine learning systems and integrate them into your product offerings, including search, image recognition, and language processing. Additionally, you’ll learn how to analyze and improve the performance of deep learning models. This can be done by comparing algorithms against benchmarks, along with machine intelligence, to learn from the information and determine ideal behaviors within a specific context.\n\nAfter finishing the book, you will be familiar with machine learning techniques, in particular the use of TensorFlow for deep learning, and will be ready to apply your knowledge to research or commercial projects.\n## Instructions and Navigation\nAll of the code is organized into folders. Each folder starts with a number followed by the application name. For example, Chapter02.\n\n\n\nThe code will look like the following:\n```\n>>> import tensorflow as tf\n>>> hello = tf.constant(\"hello TensorFlow!\")\n>>> sess=tf.Session()\n```\n\nAll the examples have been implemented using Python version 2.7 on a Ubuntu Linux 64\nbit including the TensorFlow library version 1.0.1.\nYou will also need the following Python modules (preferably the latest version):\nPip\nBazel\nMatplotlib\nNumPy\nPandas\nPreface\n. Only for Chapter 8, Advanced TensorFlow Programming and Chapter 9, Reinforcement\nLearning, you will need the following frameworks:\nKeras\nPretty Tensor\nTFLearn\nOpenAI gym\n\n## Related Products\n* [Deep Learning with TensorFlow [Video]](https://www.packtpub.com/big-data-and-business-intelligence/deep-learning-tensorflow-video)\n\n* [Machine Learning with TensorFlow](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-tensorflow)\n\n* [Building Machine Learning Projects with TensorFlow](https://www.packtpub.com/big-data-and-business-intelligence/building-machine-learning-projects-tensorflow)\n\n### Suggestions and Feedback\n[Click here](https://docs.google.com/forms/d/e/1FAIpQLSe5qwunkGf6PUvzPirPDtuy1Du5Rlzew23UBp2S-P3wB-GcwQ/viewform) if you have any feedback or suggestions.\n### Download a free PDF\n\n <i>If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.<br>Simply click on the link to claim your free PDF.</i>\n<p align=\"center\"> <a href=\"https://packt.link/free-ebook/9781788831109\">https://packt.link/free-ebook/9781788831109 </a> </p>"
  }
]