[
  {
    "path": "LICENSE.md",
    "content": "\nThe MIT License (MIT)\n\nCopyright (c) 2018 \n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Frozen Graph TensorFlow\n\nLei Mao\n\n## Introduction\n\nThis repository has the examples of saving, loading, and running inference for frozen graph in TensorFlow 1.x and 2.x.\n\n## Files\n\n```\n.\n├── LICENSE.md\n├── README.md\n├── TensorFlow_v1\n│   ├── cifar.py\n│   ├── cnn.py\n│   ├── inspect_signature.py\n│   ├── main.py\n│   ├── README.md\n│   ├── test_pb.py\n│   └── utils.py\n└── TensorFlow_v2\n    ├── example_1.py\n    ├── example_2.py\n    ├── README.md\n    └── utils.py\n```\n\n## Blogs\n\n* [Save, Load and Inference From TensorFlow Frozen Graph](https://leimao.github.io/blog/Save-Load-Inference-From-TF-Frozen-Graph/)\n* [Save, Load and Inference From TensorFlow 2.x Frozen Graph](https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/)\n\n## Examples\n\n* [TensorFlow 1.x](https://github.com/leimao/Frozen_Graph_TensorFlow/tree/master/TensorFlow_v1)\n* [TensorFlow 2.x](https://github.com/leimao/Frozen_Graph_TensorFlow/tree/master/TensorFlow_v2)"
  },
  {
    "path": "TensorFlow_v1/.gitignore",
    "content": "__pycache__\nfrozen_models\nmodels"
  },
  {
    "path": "TensorFlow_v1/README.md",
    "content": "# Frozen Graph TensorFlow 1.x\n\nLei Mao\n\n## Introduction\n\nThis repository was modified from my previous [simple CNN model](https://github.com/leimao/Convolutional_Neural_Network_CIFAR10) to classify CIFAR10 dataset. It consist training, saving model to frozen graph ``pb`` file, load ``pb`` file and do inference in TensorFlow. The tutorial with detailed description is available on my [blog](https://leimao.github.io/blog/Save-Load-Inference-From-TF-Frozen-Graph/). To the best of my knowledge, there is few similar tutorials on the internet. I wish this sample code could help you to prepare your own ``pb`` file for deployment.\n\n\n## Dependencies\n\n* Python 3.6\n* Numpy 1.14\n* TensorFlow 1.12\n* Matplotlib 2.1.1 (for demo purpose)\n\n\n## Files\n```bash\n.\n├── cifar.py\n├── cnn.py\n├── main.py\n├── README.md\n└── utils.py\n```\n## Features\n\n* User-friendly CNN API wrapped\n* Allows changing learning rate and dropout rate in real time\n* No need for significant changes to codes in order to work for other tasks\n\n## Usage\n\n### Train and Test Model in TensorFlow\n\n```bash\n$ python main.py --help\nusage: main.py [-h] [-train] [-test] [--lr LR] [--lr_decay LR_DECAY]\n               [--dropout DROPOUT] [--batch_size BATCH_SIZE] [--epochs EPOCHS]\n               [--optimizer OPTIMIZER] [--seed SEED] [--model_dir MODEL_DIR]\n               [--model_filename MODEL_FILENAME] [--log_dir LOG_DIR]\n\nTrain CNN on CIFAR10 dataset.\n\noptional arguments:\n  -h, --help            show this help message and exit\n  -train, --train       train model\n  -test, --test         test model\n  --lr LR               initial learning rate\n  --lr_decay LR_DECAY   learning rate decay\n  --dropout DROPOUT     dropout rate\n  --batch_size BATCH_SIZE\n                        mini batch size\n  --epochs EPOCHS       number of epochs\n  --optimizer OPTIMIZER\n                        optimizer\n  --seed SEED           random seed\n  --model_dir MODEL_DIR\n               
         model directory\n  --model_filename MODEL_FILENAME\n                        model filename\n  --log_dir LOG_DIR     log directory\n\n```\n\n```bash\n$ python main.py --train --test --epoch 30 --lr_decay 0.9 --dropout 0.5\n```\n\n### Test Model from PB File\n\n```bash\n$ python test_pb.py --help\nusage: test_pb.py [-h] [--model_pb_filepath MODEL_PB_FILEPATH]\n\nLoad and test model from frozen graph pb file.\n\noptional arguments:\n  -h, --help            show this help message and exit\n  --model_pb_filepath MODEL_PB_FILEPATH\n                        model pb-format frozen graph file filepath\n\n```\n\n\n```bash\n$ python test_pb.py\n```\n\n## Update Log\n\n### 2019/9/16\n\nReplaced using the side effect of `tf.InteractiveSession` to set default graph for loading `graphdef` to using Python resource management `with` to set default graph for loading `graphdef`.\n\n\n## Reference\n\n* [Save, Load and Inference From TensorFlow Frozen Graph](https://leimao.github.io/blog/Save-Load-Inference-From-TF-Frozen-Graph/)\n"
  },
  {
    "path": "TensorFlow_v1/cifar.py",
    "content": "import tensorflow as tf\nimport numpy as np\n\n\ndef train_test_split(x, y, train_fraction=0.9):\n\n    # Split the data into training data and test data\n    assert len(x) == len(y)\n    dataset_size = len(x)\n    idx = np.arange(len(x))\n    np.random.shuffle(idx)\n    idx_split = int(dataset_size * train_fraction)\n    x_train = x[:idx_split]\n    y_train = y[:idx_split]\n    x_test = x[idx_split:]\n    y_test = y[idx_split:]\n\n    return (x_train, y_train), (x_test, y_test)\n\n\nclass CIFAR10(object):\n    def __init__(self, train_fraction=0.9):\n\n        (self.x_train,\n         self.y_train), (self.x_test,\n                         self.y_test) = tf.keras.datasets.cifar10.load_data()\n        (self.x_train, self.y_train), (self.x_valid,\n                                       self.y_valid) = train_test_split(\n                                           x=self.x_train,\n                                           y=self.y_train,\n                                           train_fraction=train_fraction)\n\n        assert np.array_equal(np.unique(self.y_train),\n                              np.unique(self.y_test)) == True\n\n        self.num_classes = len(np.unique(self.y_train))\n\n        self.input_size = list(self.x_train.shape[1:])\n\n        # Convert integer label to binary vector\n        self.y_train_onehot = tf.keras.utils.to_categorical(\n            self.y_train, self.num_classes)\n        self.y_valid_onehot = tf.keras.utils.to_categorical(\n            self.y_valid, self.num_classes)\n        self.y_test_onehot = tf.keras.utils.to_categorical(\n            self.y_test, self.num_classes)\n        # Image scaling\n        self.x_train = self.x_train.astype('float32')\n        self.x_valid = self.x_valid.astype('float32')\n        self.x_test = self.x_test.astype('float32')\n        self.x_train /= 255\n        self.x_valid /= 255\n        self.x_test /= 255\n"
  },
  {
    "path": "TensorFlow_v1/cnn.py",
    "content": "import tensorflow as tf\nimport os\n\nfrom tensorflow.python.tools import freeze_graph\nfrom tensorflow.python.framework import graph_util\n\nfrom tensorflow.python.saved_model import builder as saved_model_builder\nfrom tensorflow.python.saved_model import signature_def_utils\nfrom tensorflow.python.saved_model import signature_constants\nfrom tensorflow.python.saved_model import tag_constants\nfrom tensorflow.python.saved_model import utils as saved_model_utils\n\n\nclass CNN(object):\n    def __init__(self, input_size, num_classes, optimizer):\n\n        self.num_classes = num_classes\n        self.input_size = input_size\n        self.optimizer = optimizer\n\n        self.learning_rate = tf.placeholder(tf.float32,\n                                            shape=[],\n                                            name='learning_rate')\n        self.dropout_rate = tf.placeholder(tf.float32,\n                                           shape=[],\n                                           name='dropout_rate')\n        self.input = tf.placeholder(tf.float32, [None] + self.input_size,\n                                    name='input')\n        self.label = tf.placeholder(tf.float32, [None, self.num_classes],\n                                    name='label')\n        self.output = self.network_initializer()\n        self.loss = self.loss_initializer()\n        self.optimization = self.optimizer_initializer()\n\n        self.saver = tf.train.Saver()\n        self.sess = tf.Session()\n        self.sess.run(tf.global_variables_initializer())\n\n    def network(self, input, dropout_rate):\n\n        conv1 = tf.layers.conv2d(inputs=input,\n                                 filters=64,\n                                 kernel_size=[3, 3],\n                                 padding='same',\n                                 activation=tf.nn.relu,\n                                 name='conv1')\n\n        conv2 = tf.layers.conv2d(inputs=conv1,\n                  
               filters=64,\n                                 kernel_size=[3, 3],\n                                 padding='same',\n                                 activation=tf.nn.relu,\n                                 name='conv2')\n\n        pool1 = tf.layers.max_pooling2d(inputs=conv2,\n                                        pool_size=[2, 2],\n                                        strides=[2, 2],\n                                        name='pool1')\n\n        pool1_dropout = tf.layers.dropout(inputs=pool1,\n                                          rate=dropout_rate,\n                                          training=True,\n                                          name='pool1_dropout')\n\n        conv3 = tf.layers.conv2d(inputs=pool1_dropout,\n                                 filters=128,\n                                 kernel_size=[3, 3],\n                                 padding='same',\n                                 activation=tf.nn.relu,\n                                 name='conv3')\n\n        conv4 = tf.layers.conv2d(inputs=conv3,\n                                 filters=128,\n                                 kernel_size=[3, 3],\n                                 padding='same',\n                                 activation=tf.nn.relu,\n                                 name='conv4')\n\n        pool2 = tf.layers.max_pooling2d(inputs=conv4,\n                                        pool_size=[2, 2],\n                                        strides=[2, 2],\n                                        name='pool2')\n\n        pool2_dropout = tf.layers.dropout(inputs=pool2,\n                                          rate=dropout_rate,\n                                          training=True,\n                                          name='pool2_dropout')\n\n        conv5 = tf.layers.conv2d(inputs=pool2_dropout,\n                                 filters=256,\n                                 kernel_size=[3, 3],\n                                 
padding='same',\n                                 activation=tf.nn.relu,\n                                 name='conv5')\n\n        pool3 = tf.layers.max_pooling2d(inputs=conv5,\n                                        pool_size=[2, 2],\n                                        strides=[2, 2],\n                                        name='pool3')\n\n        pool3_dropout = tf.layers.dropout(inputs=pool3,\n                                          rate=dropout_rate,\n                                          training=True,\n                                          name='pool3_dropout')\n\n        flat = tf.layers.flatten(inputs=pool3_dropout, name='flat')\n\n        fc1 = tf.layers.dense(inputs=flat,\n                              units=256,\n                              activation=tf.nn.relu,\n                              name='fc1')\n\n        fc1_dropout = tf.layers.dropout(inputs=fc1,\n                                        rate=dropout_rate,\n                                        training=True,\n                                        name='fc1_dropout')\n\n        fc2 = tf.layers.dense(inputs=fc1_dropout,\n                              units=self.num_classes,\n                              activation=None,\n                              name='fc2')\n\n        # Give the output node an explicit name for retrieval after freezing\n        output = tf.identity(fc2, name='output')\n\n        return output\n\n    def network_initializer(self):\n\n        with tf.variable_scope('cnn') as scope:\n            output = self.network(input=self.input,\n                                  dropout_rate=self.dropout_rate)\n\n        return output\n\n    def loss_initializer(self):\n\n        with tf.variable_scope('loss') as scope:\n            cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(\n                labels=self.label, logits=self.output, name='cross_entropy')\n            cross_entropy_mean = tf.reduce_mean(cross_entropy,\n                                                name='cross_entropy_mean')\n   
     return cross_entropy_mean\n\n    def optimizer_initializer(self):\n\n        if self.optimizer == 'Adam':\n            optimizer = tf.train.AdamOptimizer(\n                learning_rate=self.learning_rate).minimize(self.loss)\n        else:\n            optimizer = tf.train.GradientDescentOptimizer(\n                learning_rate=self.learning_rate).minimize(self.loss)\n\n        return optimizer\n\n    def train(self, data, label, learning_rate, dropout_rate):\n\n        _, train_loss = self.sess.run(\n            [self.optimization, self.loss],\n            feed_dict={\n                self.input: data,\n                self.label: label,\n                self.learning_rate: learning_rate,\n                self.dropout_rate: dropout_rate\n            })\n        return train_loss\n\n    def validate(self, data, label):\n\n        output, validate_loss = self.sess.run([self.output, self.loss],\n                                              feed_dict={\n                                                  self.input: data,\n                                                  self.label: label,\n                                                  self.dropout_rate: 0.0\n                                              })\n        return output, validate_loss\n\n    def test(self, data):\n\n        output = self.sess.run(self.output,\n                               feed_dict={\n                                   self.input: data,\n                                   self.dropout_rate: 0.0\n                               })\n\n        return output\n\n    def save(self, directory, filename):\n\n        if not os.path.exists(directory):\n            os.makedirs(directory)\n        filepath = os.path.join(directory, filename + '.ckpt')\n        self.saver.save(self.sess, filepath)\n        return filepath\n\n    def save_signature(self, directory):\n\n        signature = signature_def_utils.build_signature_def(\n            inputs={\n                'input':\n                
saved_model_utils.build_tensor_info(self.input),\n                'dropout_rate':\n                saved_model_utils.build_tensor_info(self.dropout_rate)\n            },\n            outputs={\n                'output': saved_model_utils.build_tensor_info(self.output)\n            },\n            method_name=signature_constants.PREDICT_METHOD_NAME)\n        signature_map = {\n            signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature\n        }\n        model_builder = saved_model_builder.SavedModelBuilder(directory)\n        model_builder.add_meta_graph_and_variables(\n            self.sess,\n            tags=[tag_constants.SERVING],\n            signature_def_map=signature_map,\n            clear_devices=True)\n        model_builder.save(as_text=False)\n\n    def save_as_pb(self, directory, filename):\n\n        if not os.path.exists(directory):\n            os.makedirs(directory)\n\n        # Save check point for graph frozen later\n        ckpt_filepath = self.save(directory=directory, filename=filename)\n        pbtxt_filename = filename + '.pbtxt'\n        pbtxt_filepath = os.path.join(directory, pbtxt_filename)\n        pb_filepath = os.path.join(directory, filename + '.pb')\n        # This will only save the graph but the variables will not be saved.\n        # You have to freeze your model first.\n        tf.train.write_graph(graph_or_graph_def=self.sess.graph_def,\n                             logdir=directory,\n                             name=pbtxt_filename,\n                             as_text=True)\n\n        # Freeze graph\n        # Method 1\n        freeze_graph.freeze_graph(input_graph=pbtxt_filepath,\n                                  input_saver='',\n                                  input_binary=False,\n                                  input_checkpoint=ckpt_filepath,\n                                  output_node_names='cnn/output',\n                                  restore_op_name='save/restore_all',\n                     
             filename_tensor_name='save/Const:0',\n                                  output_graph=pb_filepath,\n                                  clear_devices=True,\n                                  initializer_nodes='')\n\n        # Method 2\n        '''\n        graph = tf.get_default_graph()\n        input_graph_def = graph.as_graph_def()\n        output_node_names = ['cnn/output']\n\n        output_graph_def = graph_util.convert_variables_to_constants(self.sess, input_graph_def, output_node_names)\n\n        with tf.gfile.GFile(pb_filepath, 'wb') as f:\n            f.write(output_graph_def.SerializeToString())\n        '''\n\n        return pb_filepath\n\n    def load(self, filepath):\n\n        if os.path.splitext(filepath)[1] != '.ckpt':\n            filepath += '.ckpt'\n\n        self.saver.restore(self.sess, filepath)\n"
  },
  {
    "path": "TensorFlow_v1/inspect_signature.py",
    "content": "import tensorflow as tf\nfrom tensorflow.python.saved_model import tag_constants\n\n\ndef retrieve_model_data_info(saved_model_path):\n    with tf.Session() as sess:\n        graph = tf.Graph()\n        with graph.as_default():\n            metagraph = tf.saved_model.loader.load(sess,\n                                                   [tag_constants.SERVING],\n                                                   saved_model_path)\n        inputs_mapping = dict(\n            metagraph.signature_def['serving_default'].inputs)\n        outputs_mapping = dict(\n            metagraph.signature_def['serving_default'].outputs)\n        print(\"Print output mapping: \", outputs_mapping)\n        print(\"Print input mapping: \", inputs_mapping)\n\n\nretrieve_model_data_info('./model/signature')\n"
  },
  {
    "path": "TensorFlow_v1/main.py",
    "content": "import os\nimport argparse\nimport tensorflow as tf\nimport numpy as np\n\nfrom cnn import CNN\nfrom cifar import CIFAR10\nfrom utils import plot_curve, model_accuracy\n\n\ndef train(learning_rate, learning_rate_decay, dropout_rate, mini_batch_size,\n          epochs, optimizer, random_seed, model_directory, model_filename,\n          log_directory):\n\n    np.random.seed(random_seed)\n\n    if not os.path.exists(log_directory):\n        os.makedirs(log_directory)\n\n    # Load CIFAR10 dataset\n    cifar10 = CIFAR10()\n    x_train = cifar10.x_train\n    y_train = cifar10.y_train\n    y_train_onehot = cifar10.y_train_onehot\n    x_valid = cifar10.x_valid\n    y_valid = cifar10.y_valid\n    y_valid_onehot = cifar10.y_valid_onehot\n\n    num_classes = cifar10.num_classes\n    input_size = cifar10.input_size\n\n    print('CIFAR10 Input Image Size: {}'.format(input_size))\n\n    model = CNN(input_size=input_size,\n                num_classes=num_classes,\n                optimizer=optimizer)\n\n    train_accuracy_log = list()\n    valid_accuracy_log = list()\n    train_loss_log = list()\n\n    for epoch in range(epochs):\n        print('Epoch: %d' % epoch)\n\n        learning_rate *= learning_rate_decay\n        # Prepare mini batches on train set\n        shuffled_idx = np.arange(len(x_train))\n        np.random.shuffle(shuffled_idx)\n        mini_batch_idx = [\n            shuffled_idx[k:k + mini_batch_size]\n            for k in range(0, len(x_train), mini_batch_size)\n        ]\n\n        # Validate on validation set\n        valid_prediction_onehot = model.test(data=x_valid)\n        valid_prediction = np.argmax(valid_prediction_onehot, axis=1).reshape(\n            (-1, 1))\n        valid_accuracy = model_accuracy(label=y_valid,\n                                        prediction=valid_prediction)\n        print('Validation Accuracy: %f' % valid_accuracy)\n        valid_accuracy_log.append(valid_accuracy)\n\n        # Train on train set\n        
for i, idx in enumerate(mini_batch_idx):\n            train_loss = model.train(data=x_train[idx],\n                                     label=y_train_onehot[idx],\n                                     learning_rate=learning_rate,\n                                     dropout_rate=dropout_rate)\n            if i % 200 == 0:\n                train_prediction_onehot = model.test(data=x_train[idx])\n                train_prediction = np.argmax(train_prediction_onehot,\n                                             axis=1).reshape((-1, 1))\n                train_accuracy = model_accuracy(label=y_train[idx],\n                                                prediction=train_prediction)\n                print('Training Loss: %f, Training Accuracy: %f' %\n                      (train_loss, train_accuracy))\n                if i == 0:\n                    train_accuracy_log.append(train_accuracy)\n                    train_loss_log.append(train_loss)\n\n    model.save(directory=model_directory, filename=model_filename)\n    print('Trained model saved successfully')\n\n    model.save_as_pb(directory=model_directory, filename=model_filename)\n    print('Trained model saved as pb successfully')\n\n    # The directory should not exist before calling this method\n    signature_dir = os.path.join(model_directory, 'signature')\n    assert (not os.path.exists(signature_dir))\n    model.save_signature(directory=signature_dir)\n    print('Trained model with signature saved successfully')\n\n    plot_curve(train_losses = train_loss_log, train_accuracies = train_accuracy_log, valid_accuracies = valid_accuracy_log, \\\n        filename = os.path.join(log_directory, 'training_curve.png'))\n\n\ndef test(model_file):\n\n    tf.reset_default_graph()\n\n    # Load CIFAR10 dataset\n    cifar10 = CIFAR10()\n    x_test = cifar10.x_test\n    y_test = cifar10.y_test\n    y_test_onehot = cifar10.y_test_onehot\n    num_classes = cifar10.num_classes\n    input_size = cifar10.input_size\n\n    model = 
CNN(input_size=input_size,\n                num_classes=num_classes,\n                optimizer='Adam')\n    model.load(filepath=model_file)\n\n    test_prediction_onehot = model.test(data=x_test)\n    test_prediction = np.argmax(test_prediction_onehot, axis=1).reshape(\n        (-1, 1))\n    test_accuracy = model_accuracy(label=y_test, prediction=test_prediction)\n\n    print('Test Accuracy: %f' % test_accuracy)\n\n\ndef main():\n    # Default settings\n    learning_rate_default = 0.001\n    learning_rate_decay_default = 0.9\n    dropout_rate_default = 0.5\n    mini_batch_size_default = 64\n    epochs_default = 30\n    optimizer_default = 'Adam'\n    random_seed_default = 0\n    model_directory_default = 'model'\n    model_filename_default = 'cifar10_cnn'\n    log_directory_default = 'log'\n\n    # Argparser\n    parser = argparse.ArgumentParser(\n        description='Train CNN on CIFAR10 dataset.')\n\n    parser.add_argument('-train',\n                        '--train',\n                        help='train model',\n                        action='store_true')\n    parser.add_argument('-test',\n                        '--test',\n                        help='test model',\n                        action='store_true')\n    parser.add_argument('--lr',\n                        type=float,\n                        help='initial learning rate',\n                        default=learning_rate_default)\n    parser.add_argument('--lr_decay',\n                        type=float,\n                        help='learning rate decay',\n                        default=learning_rate_decay_default)\n    parser.add_argument('--dropout',\n                        type=float,\n                        help='dropout rate',\n                        default=dropout_rate_default)\n    parser.add_argument('--batch_size',\n                        type=int,\n                        help='mini batch size',\n                        default=mini_batch_size_default)\n    
parser.add_argument('--epochs',\n                        type=int,\n                        help='number of epochs',\n                        default=epochs_default)\n    parser.add_argument('--optimizer',\n                        type=str,\n                        help='optimizer',\n                        default=optimizer_default)\n    parser.add_argument('--seed',\n                        type=int,\n                        help='random seed',\n                        default=random_seed_default)\n    parser.add_argument('--model_dir',\n                        type=str,\n                        help='model directory',\n                        default=model_directory_default)\n    parser.add_argument('--model_filename',\n                        type=str,\n                        help='model filename',\n                        default=model_filename_default)\n    parser.add_argument('--log_dir',\n                        type=str,\n                        help='log directory',\n                        default=log_directory_default)\n\n    argv = parser.parse_args()\n\n    # Post-process argparser\n    learning_rate = argv.lr\n    learning_rate_decay = argv.lr_decay\n    dropout_rate = argv.dropout\n    mini_batch_size = argv.batch_size\n    epochs = argv.epochs\n    optimizer = argv.optimizer\n    random_seed = argv.seed\n    model_directory = argv.model_dir\n    model_filename = argv.model_filename\n    log_directory = argv.log_dir\n\n    if argv.train:\n        print('Training CNN on CIFAR10 dataset...')\n        train(learning_rate=learning_rate,\n              learning_rate_decay=learning_rate_decay,\n              dropout_rate=dropout_rate,\n              mini_batch_size=mini_batch_size,\n              epochs=epochs,\n              optimizer=optimizer,\n              random_seed=random_seed,\n              model_directory=model_directory,\n              model_filename=model_filename,\n              log_directory=log_directory)\n\n    if argv.test:\n        
print('Testing CNN on CIFAR10 dataset...')\n        # Use the parsed arguments rather than the defaults\n        test(model_file=os.path.join(model_directory,\n                                     model_filename))\n\n\nif __name__ == '__main__':\n\n    main()\n"
  },
  {
    "path": "TensorFlow_v1/test_pb.py",
    "content": "import tensorflow as tf\nimport numpy as np\nimport argparse\n\nfrom cifar import CIFAR10\nfrom utils import model_accuracy\nfrom tensorflow.python.framework import tensor_util\n\n# If load from pb, you may have to use get_tensor_by_name heavily.\n\n\nclass CNN(object):\n    def __init__(self, model_filepath):\n\n        # The file path of model\n        self.model_filepath = model_filepath\n        # Initialize the model\n        self.load_graph(model_filepath=self.model_filepath)\n\n    def load_graph(self, model_filepath):\n        '''\n        Lode trained model.\n        '''\n        print('Loading model...')\n        self.graph = tf.Graph()\n\n        with tf.gfile.GFile(model_filepath, 'rb') as f:\n            graph_def = tf.GraphDef()\n            graph_def.ParseFromString(f.read())\n\n        print('Check out the input placeholders:')\n        nodes = [\n            n.name + ' => ' + n.op for n in graph_def.node\n            if n.op in ('Placeholder')\n        ]\n        for node in nodes:\n            print(node)\n\n        with self.graph.as_default():\n            # Define input tensor\n            self.input = tf.placeholder(np.float32,\n                                        shape=[None, 32, 32, 3],\n                                        name='input')\n            self.dropout_rate = tf.placeholder(tf.float32,\n                                               shape=[],\n                                               name='dropout_rate')\n            tf.import_graph_def(graph_def, {\n                'input': self.input,\n                'dropout_rate': self.dropout_rate\n            })\n\n        self.graph.finalize()\n\n        print('Model loading complete!')\n\n        # Get layer names\n        layers = [op.name for op in self.graph.get_operations()]\n        for layer in layers:\n            print(layer)\n        \"\"\"\n        # Check out the weights of the nodes\n        weight_nodes = [n for n in graph_def.node if n.op == 
'Const']\n        for n in weight_nodes:\n            print(\"Name of the node - %s\" % n.name)\n            # print(\"Value - \" )\n            # print(tensor_util.MakeNdarray(n.attr['value'].tensor))\n        \"\"\"\n\n        # In this version, tf.InteractiveSession and tf.Session could be used interchangeably.\n        # self.sess = tf.InteractiveSession(graph = self.graph)\n        self.sess = tf.Session(graph=self.graph)\n\n    def test(self, data):\n\n        # Know your output node name\n        output_tensor = self.graph.get_tensor_by_name(\"import/cnn/output:0\")\n        output = self.sess.run(output_tensor,\n                               feed_dict={\n                                   self.input: data,\n                                   self.dropout_rate: 0\n                               })\n\n        return output\n\n\ndef test_from_frozen_graph(model_filepath):\n\n    tf.reset_default_graph()\n\n    # Load CIFAR10 dataset\n    cifar10 = CIFAR10()\n    x_test = cifar10.x_test\n    y_test = cifar10.y_test\n    y_test_onehot = cifar10.y_test_onehot\n    num_classes = cifar10.num_classes\n    input_size = cifar10.input_size\n\n    # Test 500 samples\n    x_test = x_test[0:500]\n    y_test = y_test[0:500]\n\n    model = CNN(model_filepath=model_filepath)\n\n    test_prediction_onehot = model.test(data=x_test)\n    test_prediction = np.argmax(test_prediction_onehot, axis=1).reshape(\n        (-1, 1))\n    test_accuracy = model_accuracy(label=y_test, prediction=test_prediction)\n\n    print('Test Accuracy: %f' % test_accuracy)\n\n\ndef main():\n\n    model_pb_filepath_default = './model/cifar10_cnn.pb'\n\n    # Argparser\n    parser = argparse.ArgumentParser(\n        description='Load and test model from frozen graph pb file.')\n\n    parser.add_argument('--model_pb_filepath',\n                        type=str,\n                        help='model pb-format frozen graph file filepath',\n                        default=model_pb_filepath_default)\n\n    
argv = parser.parse_args()\n\n    model_pb_filepath = argv.model_pb_filepath\n\n    test_from_frozen_graph(model_filepath=model_pb_filepath)\n\n\nif __name__ == '__main__':\n\n    main()\n"
  },
  {
    "path": "TensorFlow_v1/utils.py",
    "content": "import numpy as np\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n\n\ndef model_accuracy(label, prediction):\n\n    # Evaluate the trained model\n    return np.sum(label == prediction) / len(prediction)\n\n\ndef plot_curve(train_losses,\n               train_accuracies,\n               valid_accuracies,\n               savefig=True,\n               showfig=False,\n               filename='training_curve.png'):\n\n    x = np.arange(len(train_losses))\n    y1 = train_accuracies\n    y2 = valid_accuracies\n    y3 = train_losses\n\n    fig, ax1 = plt.subplots(figsize=(12, 8))\n    ax2 = ax1.twinx()\n\n    ax1.plot(x, y1, color='b', marker='o', label='Training Accuracy')\n    ax1.plot(x, y2, color='g', marker='o', label='Validation Accuracy')\n    ax2.plot(x, y3, color='r', marker='o', label='Training Loss')\n\n    ax1.set_xlabel('Epochs')\n    ax1.set_ylabel('Accuracy')\n    ax2.set_ylabel('Loss')\n\n    ax1.legend()\n    ax2.legend()\n\n    if savefig:\n        fig.savefig(filename, format='png', dpi=600, bbox_inches='tight')\n    if showfig:\n        plt.show()\n    plt.close()\n\n    return\n"
  },
  {
    "path": "TensorFlow_v2/.gitignore",
    "content": "__pycache__\nfrozen_models\nmodels"
  },
  {
    "path": "TensorFlow_v2/README.md",
    "content": "# Frozen Graph TensorFlow 2.x\n\nLei Mao\n\n## Introduction\n\nTensorFlow 1.x provided an interface for freezing models via `tf.Session`. However, because TensorFlow 2.x removed `tf.Session`, freezing models in TensorFlow 2.x has been a problem for many users.\n\nIn this repository, several simple, concrete examples demonstrate how to freeze models and run inference from frozen models in TensorFlow 2.x. The frozen models are also fully compatible with inference using TensorFlow 1.x, TensorFlow 2.x, ONNX Runtime, and TensorRT.\n\n## Usage\n\n### Docker Container\n\nWe use the TensorFlow 2.3 Docker container from Docker Hub. To download the Docker image, please run the following command in the terminal.\n\n```bash\n$ docker pull tensorflow/tensorflow:2.3.0-gpu\n```\n\nTo start the Docker container, please run the following command in the terminal.\n\n```bash\n$ docker run --gpus all -it --rm -v $(pwd):/mnt tensorflow/tensorflow:2.3.0-gpu\n```\n\n### Examples\n\n#### Example 1\n\nWe train a simple fully connected neural network to classify the Fashion MNIST data. The model is saved in the `SavedModel` format in the `models/simple_model` directory for completeness. In addition, the model is also frozen and saved as `simple_frozen_graph.pb` in the `frozen_models` directory.\n\nTo train, save, export, and run inference for the model, please run the following command in the terminal.\n\n```bash\n$ python example_1.py\n```\n\n#### Example 2\n\nWe train a simple recurrent neural network with multiple inputs and outputs on random data. The model is saved in the `SavedModel` format in the `models/complex_model` directory for completeness. 
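If the input and output tensor names of a `SavedModel` export are ever unclear, they can be inspected with the `saved_model_cli` tool that ships with TensorFlow before freezing or converting the model (a quick check; the path below assumes Example 2 has already been run):\n\n```bash\n$ saved_model_cli show --dir ./models/complex_model --tag_set serve --signature_def serving_default\n```\n\n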
In addition, the model is also frozen and saved as `complex_frozen_graph.pb` in the `frozen_models` directory.\n\nTo train, save, export, and run inference for the model, please run the following command in the terminal.\n\n```bash\n$ python example_2.py\n```\n\n### Convert Frozen Graph to ONNX\n\nIf TensorFlow 1.x and `tf2onnx` have been installed, the frozen graph can be converted to an ONNX model using the following command (shown here for the frozen graph from Example 1, whose input and output tensors are `x:0` and `Identity:0`).\n\n```bash\n$ python -m tf2onnx.convert --input ./frozen_models/simple_frozen_graph.pb --output model.onnx --outputs Identity:0 --inputs x:0\n```\n\n### Convert Frozen Graph to UFF\n\nThe frozen graph can also be converted to a UFF model for TensorRT using the following command.\n\n```bash\n$ convert-to-uff simple_frozen_graph.pb -t -O Identity -o simple_frozen_graph.uff\n```\n\nA TensorRT 6.0 Docker image can be pulled from [NVIDIA NGC](https://ngc.nvidia.com/).\n\n```bash\n$ docker pull nvcr.io/nvidia/tensorrt:19.12-py3\n```\n\n## References\n\n* [Migrate from TensorFlow 1.x to 2.x](https://www.tensorflow.org/guide/migrate)\n"
  },
  {
    "path": "TensorFlow_v2/example_1.py",
    "content": "import tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2\nimport numpy as np\n\nfrom utils import get_fashion_mnist_data, wrap_frozen_graph\n\n\ndef main():\n\n    tf.random.set_seed(seed=0)\n\n    # Get data\n    (train_images, train_labels), (test_images,\n                                   test_labels) = get_fashion_mnist_data()\n\n    # Create Keras model\n    model = keras.Sequential(layers=[\n        keras.layers.InputLayer(input_shape=(28, 28), name=\"input\"),\n        keras.layers.Flatten(input_shape=(28, 28), name=\"flatten\"),\n        keras.layers.Dense(128, activation=\"relu\", name=\"dense\"),\n        keras.layers.Dense(10, activation=\"softmax\", name=\"output\")\n    ], name=\"FCN\")\n\n    # Print model architecture\n    model.summary()\n\n    # Compile model with optimizer\n    model.compile(optimizer=\"adam\",\n                  loss=\"sparse_categorical_crossentropy\",\n                  metrics=[\"accuracy\"])\n\n    # Train model\n    model.fit(x={\"input\": train_images}, y={\"output\": train_labels}, epochs=1)\n\n    # Test model\n    test_loss, test_acc = model.evaluate(x={\"input\": test_images},\n                                         y={\"output\": test_labels},\n                                         verbose=2)\n    print(\"-\" * 50)\n    print(\"Test accuracy: \")\n    print(test_acc)\n\n    # Get predictions for test images\n    predictions = model.predict(test_images)\n    # Print the prediction for the first image\n    print(\"-\" * 50)\n    print(\"Example TensorFlow prediction reference:\")\n    print(predictions[0])\n\n    # Save model to SavedModel format\n    tf.saved_model.save(model, \"./models/simple_model\")\n\n    # Convert Keras model to ConcreteFunction\n    full_model = tf.function(lambda x: model(x))\n    full_model = full_model.get_concrete_function(\n        x=tf.TensorSpec(model.inputs[0].shape, 
model.inputs[0].dtype))\n\n    # Get frozen ConcreteFunction\n    frozen_func = convert_variables_to_constants_v2(full_model)\n    frozen_func.graph.as_graph_def()\n\n    layers = [op.name for op in frozen_func.graph.get_operations()]\n    print(\"-\" * 50)\n    print(\"Frozen model layers: \")\n    for layer in layers:\n        print(layer)\n\n    print(\"-\" * 50)\n    print(\"Frozen model inputs: \")\n    print(frozen_func.inputs)\n    print(\"Frozen model outputs: \")\n    print(frozen_func.outputs)\n\n    # Save frozen graph from frozen ConcreteFunction to hard drive\n    tf.io.write_graph(graph_or_graph_def=frozen_func.graph,\n                      logdir=\"./frozen_models\",\n                      name=\"simple_frozen_graph.pb\",\n                      as_text=False)\n\n\n    # Load frozen graph using TensorFlow 1.x functions\n    with tf.io.gfile.GFile(\"./frozen_models/simple_frozen_graph.pb\", \"rb\") as f:\n        graph_def = tf.compat.v1.GraphDef()\n        loaded = graph_def.ParseFromString(f.read())\n\n    # Wrap frozen graph to ConcreteFunctions\n    frozen_func = wrap_frozen_graph(graph_def=graph_def,\n                                    inputs=[\"x:0\"],\n                                    outputs=[\"Identity:0\"],\n                                    print_graph=True)\n\n    print(\"-\" * 50)\n    print(\"Frozen model inputs: \")\n    print(frozen_func.inputs)\n    print(\"Frozen model outputs: \")\n    print(frozen_func.outputs)\n\n    # Get predictions for test images\n    frozen_graph_predictions = frozen_func(x=tf.constant(test_images))[0]\n\n    # Print the prediction for the first image\n    print(\"-\" * 50)\n    print(\"Example TensorFlow frozen graph prediction reference:\")\n    print(frozen_graph_predictions[0].numpy())\n\n    # The two predictions should be almost the same.\n    assert np.allclose(a=frozen_graph_predictions[0].numpy(), b=predictions[0], rtol=1e-05, atol=1e-08, equal_nan=False)\n\nif __name__ == \"__main__\":\n\n    
main()\n"
  },
  {
    "path": "TensorFlow_v2/example_2.py",
    "content": "import tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2\nimport numpy as np\n\nfrom utils import wrap_frozen_graph\n\ndef main():\n\n    # Mysterious code\n    # https://leimao.github.io/blog/TensorFlow-cuDNN-Failure/\n    gpu_devices = tf.config.experimental.list_physical_devices('GPU')\n    for device in gpu_devices:\n        tf.config.experimental.set_memory_growth(device, True)\n\n    # Dummy example copied from TensorFlow\n    # https://www.tensorflow.org/guide/keras/functional#models_with_multiple_inputs_and_outputs\n\n    num_tags = 12  # Number of unique issue tags\n    num_words = 10000  # Size of vocabulary obtained when preprocessing text data\n    num_departments = 4  # Number of departments for predictions\n\n    title_input = keras.Input(\n        shape=(None,), name=\"title\"\n    )  # Variable-length sequence of ints\n    body_input = keras.Input(shape=(None,), name=\"body\")  # Variable-length sequence of ints\n    tags_input = keras.Input(\n        shape=(num_tags,), name=\"tags\"\n    )  # Binary vectors of size `num_tags`\n\n    # Embed each word in the title into a 64-dimensional vector\n    title_features = keras.layers.Embedding(num_words, 64)(title_input)\n    # Embed each word in the text into a 64-dimensional vector\n    body_features = keras.layers.Embedding(num_words, 64)(body_input)\n\n    # Reduce sequence of embedded words in the title into a single 128-dimensional vector\n    title_features = keras.layers.LSTM(128)(title_features)\n    # Reduce sequence of embedded words in the body into a single 32-dimensional vector\n    body_features = keras.layers.LSTM(32)(body_features)\n\n    # Merge all available features into a single large vector via concatenation\n    x = keras.layers.concatenate([title_features, body_features, tags_input])\n\n    # Stick a logistic regression for priority prediction on top of the features\n    
priority_pred = keras.layers.Dense(1, name=\"priority\")(x)\n    # Stick a department classifier on top of the features\n    department_pred = keras.layers.Dense(num_departments, name=\"department\")(x)\n\n    # Instantiate an end-to-end model predicting both priority and department\n    model = keras.Model(\n        inputs=[title_input, body_input, tags_input],\n        outputs=[priority_pred, department_pred],\n    )\n\n    model.compile(\n        optimizer=keras.optimizers.RMSprop(1e-3),\n        loss=[\n            keras.losses.BinaryCrossentropy(from_logits=True),\n            keras.losses.CategoricalCrossentropy(from_logits=True),\n        ],\n        loss_weights=[1.0, 0.2],\n    )\n\n    # Dummy input data\n    title_data = np.random.randint(num_words, size=(1280, 10)).astype(\"float32\")\n    body_data = np.random.randint(num_words, size=(1280, 100)).astype(\"float32\")\n    tags_data = np.random.randint(2, size=(1280, num_tags)).astype(\"float32\")\n\n    # Dummy target data\n    priority_targets = np.random.random(size=(1280, 1))\n    dept_targets = np.random.randint(2, size=(1280, num_departments))\n\n    model.fit(\n        {\"title\": title_data, \"body\": body_data, \"tags\": tags_data},\n        {\"priority\": priority_targets, \"department\": dept_targets},\n        epochs=2,\n        batch_size=32,\n    )\n\n    predictions = model.predict({\"title\": title_data[0:1], \"body\": body_data[0:1], \"tags\": tags_data[0:1]})\n    predictions_priority = predictions[0]\n    predictions_department = predictions[1]\n\n    print(\"-\" * 50)\n    print(\"Example TensorFlow prediction reference:\")\n    print(predictions_priority)\n    print(predictions_department)\n\n    # Save model to SavedModel format\n    tf.saved_model.save(model, \"./models/complex_model\")\n\n    full_model = tf.function(lambda x: model(x))\n    full_model = full_model.get_concrete_function(x=(tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype), 
tf.TensorSpec(model.inputs[1].shape, model.inputs[1].dtype), tf.TensorSpec(model.inputs[2].shape, model.inputs[2].dtype)))\n\n    # Get frozen ConcreteFunction\n    # https://github.com/tensorflow/tensorflow/issues/36391#issuecomment-596055100\n    frozen_func = convert_variables_to_constants_v2(full_model, lower_control_flow=False)\n    frozen_func.graph.as_graph_def()\n\n    layers = [op.name for op in frozen_func.graph.get_operations()]\n    print(\"-\" * 50)\n    print(\"Frozen model layers: \")\n    for layer in layers:\n        print(layer)\n\n    print(\"-\" * 50)\n    print(\"Frozen model inputs: \")\n    print(frozen_func.inputs)\n    print(\"Frozen model outputs: \")\n    print(frozen_func.outputs)\n\n    # Save frozen graph from frozen ConcreteFunction to hard drive\n    tf.io.write_graph(graph_or_graph_def=frozen_func.graph,\n                      logdir=\"./frozen_models\",\n                      name=\"complex_frozen_graph.pb\",\n                      as_text=False)\n\n    # Load frozen graph using TensorFlow 1.x functions\n    with tf.io.gfile.GFile(\"./frozen_models/complex_frozen_graph.pb\", \"rb\") as f:\n        graph_def = tf.compat.v1.GraphDef()\n        loaded = graph_def.ParseFromString(f.read())\n\n    # Wrap frozen graph to ConcreteFunctions\n    frozen_func = wrap_frozen_graph(graph_def=graph_def,\n                                    inputs=[\"x:0\", \"x_1:0\", \"x_2:0\"],\n                                    outputs=[\"Identity:0\", \"Identity_1:0\"],\n                                    print_graph=True)\n\n    # Note that the loaded frozen function exposes the generic tensor names\n    # \"x:0\", \"x_1:0\", \"x_2:0\" and \"Identity:0\", \"Identity_1:0\" rather than\n    # the original Keras input and output names\n    print(\"-\" * 50)\n    print(\"Frozen model inputs: \")\n    print(frozen_func.inputs)\n    print(\"Frozen model outputs: \")\n    print(frozen_func.outputs)\n\n    # Get predictions\n    frozen_graph_predictions = frozen_func(x=tf.constant(title_data[0:1]), x_1=tf.constant(body_data[0:1]), x_2=tf.constant(tags_data[0:1]))\n    
frozen_graph_predictions_priority = frozen_graph_predictions[0]\n    frozen_graph_predictions_department = frozen_graph_predictions[1]\n\n    print(\"-\" * 50)\n    print(\"Example TensorFlow frozen graph prediction reference:\")\n    print(frozen_graph_predictions_priority.numpy())\n    print(frozen_graph_predictions_department.numpy())\n\n    # The two predictions should be almost the same.\n    assert np.allclose(a=frozen_graph_predictions_priority.numpy(), b=predictions_priority, rtol=1e-05, atol=1e-08, equal_nan=False)\n    assert np.allclose(a=frozen_graph_predictions_department.numpy(), b=predictions_department, rtol=1e-05, atol=1e-08, equal_nan=False)\n\n\nif __name__ == \"__main__\":\n\n    main()\n"
  },
  {
    "path": "TensorFlow_v2/utils.py",
    "content": "import tensorflow as tf\nfrom tensorflow import keras\nimport numpy as np\n\n\ndef get_fashion_mnist_data():\n\n    fashion_mnist = keras.datasets.fashion_mnist\n    (train_images, train_labels), (test_images,\n                                   test_labels) = fashion_mnist.load_data()\n    class_names = [\n        \"T-shirt/top\", \"Trouser\", \"Pullover\", \"Dress\", \"Coat\", \"Sandal\",\n        \"Shirt\", \"Sneaker\", \"Bag\", \"Ankle boot\"\n    ]\n    train_images = train_images.astype(np.float32) / 255.0\n    test_images = test_images.astype(np.float32) / 255.0\n\n    return (train_images, train_labels), (test_images, test_labels)\n\n\ndef wrap_frozen_graph(graph_def, inputs, outputs, print_graph=False):\n    def _imports_graph_def():\n        tf.compat.v1.import_graph_def(graph_def, name=\"\")\n\n    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])\n    import_graph = wrapped_import.graph\n\n    if print_graph:\n        print(\"-\" * 50)\n        print(\"Frozen model layers: \")\n        layers = [op.name for op in import_graph.get_operations()]\n        for layer in layers:\n            print(layer)\n        print(\"-\" * 50)\n\n    return wrapped_import.prune(\n        tf.nest.map_structure(import_graph.as_graph_element, inputs),\n        tf.nest.map_structure(import_graph.as_graph_element, outputs))\n"
  }
]