[
  {
    "path": ".gitignore",
    "content": "cache\n"
  },
  {
    "path": "README.md",
    "content": "# BoxCars Fine-Grained Recognition of Vehicles\nThis is Keras+Tensorflow re-implementation of our method for fine-grained classification of vehicles decribed in **BoxCars: Improving Vehicle Fine-Grained Recognition using 3D Bounding Boxes in Traffic Surveillance** ([link](https://doi.org/10.1109/TITS.2018.2799228)).\nThe numerical results are slightly different, but similar. This code is for **research only** purposes.\nIf you use the code, please cite our paper:\n```\n@ARTICLE{Sochor2018, \nauthor={J. Sochor and J. Špaňhel and A. Herout}, \njournal={IEEE Transactions on Intelligent Transportation Systems}, \ntitle={BoxCars: Improving Fine-Grained Recognition of Vehicles Using 3-D Bounding Boxes in Traffic Surveillance}, \nyear={2018}, \nvolume={PP}, \nnumber={99}, \npages={1-12}, \ndoi={10.1109/TITS.2018.2799228}, \nISSN={1524-9050}\n}\n```\n\n## Installation\n\n* Clone the repository and cd to it.\n\n```bash\ngit clone https://github.com/JakubSochor/BoxCars.git BoxCars\ncd BoxCars\n```\n* (Optional, but recommended) Create virtual environment for this project - you can use **virtualenvwrapper** or following commands. **IMPORTANT NOTE:** this project is using **Python3**.\n\n```bash\nvirtuenv -p /usr/bin/python3 boxcars_venv\nsource boxcars_venv/bin/activate\n```\n\n* Install required packages:\n\n```bash\npip3 install -r requirements.txt \n```\n\n* Manually download dataset https://medusa.fit.vutbr.cz/traffic/data/BoxCars116k.zip and unzip it.\n* Modify `scripts/config.py` and change `BOXCARS_DATASET_ROOT` to directory where is the unzipped dataset.\n* (Optional) Download trained models using `scripts/download_models.py`. To download all models to default location (`./models`) run following command (or use -h for help):\n\n```base\npython3 scripts/download_models.py --all\n``` \n\n\n## Usage\n### Fine-tuning of the Models\nTo fine-tune a model use `scripts/train_eval.py` (use -h for help). 
Example for ResNet50:\n```bash\npython3 scripts/train_eval.py --train-net ResNet50\n```\nIt is also possible to resume training using the `--resume` argument of `train_eval.py`.\n\n### Evaluation\nThe model is evaluated when the training finishes; however, it is also possible to evaluate a saved model by running:\n```bash\npython3 scripts/train_eval.py --eval path-to-model.h5\n```\n\n\n## Trained models\nWe provide numerical results for the models distributed with this code (use `scripts/download_models.py`). \nThe processing time was measured on a GTX 1080 with cuDNN. The accuracy results are always shown as single image accuracy/whole track accuracy (in percent). \nWe have also evaluated the method with estimated 3D bounding boxes (see the paper for details) and included the results here. \nThe estimated bounding boxes are in `data/estimated_3DBB.pkl`. In order to use the estimated bounding boxes, use the `--estimated-3DBB path-to-pkl` argument of the `train_eval.py` script.\nThe models which were trained with the estimated bounding boxes have the suffix `_estimated3DBB`.\n\nNet | Original 3DBBs | Estimated 3DBBs | Image Processing Time\n----|---------------:|---------------:|---------------------:\nResNet50 | 84.29/91.61 | 81.78/90.79 | 5.8ms\nVGG16 | 84.10/92.09 | 81.43/90.68 | 5.4ms\nVGG19 | 83.35/91.23 | 81.93/91.48 | 5.4ms\nInceptionV3 | 81.51/89.86 | 79.89/89.92 | 6.1ms\n\n\n## BoxCars116k dataset\nThe dataset was created for the paper and can be downloaded from our [website](https://medusa.fit.vutbr.cz/traffic/data/BoxCars116k.zip).\nThe dataset contains 116k images of vehicles with fine-grained labels taken by surveillance cameras under various viewpoints. 
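\n\nImages in the dataset are stored JPEG-encoded in the atlas (see **atlas.pkl** below) and addressed by `vehicle_id` and `instance_id`. A minimal self-contained sketch of this storage convention, with a dummy black image standing in for a real atlas entry (assumes only `numpy` and `opencv-python`):\n```python\nimport cv2\nimport numpy as np\n\n# dummy stand-in for one atlas entry: a JPEG-encoded image stored as a numpy byte array\ndummy = np.zeros((64, 96, 3), dtype=np.uint8)\nok, encoded = cv2.imencode(\".jpg\", dummy)\nassert ok\natlas = {0: {0: encoded}}  # indexed as atlas[vehicle_id][instance_id]\n\n# decode back to an RGB image (cv2.imdecode returns BGR, hence the conversion)\nimage = cv2.cvtColor(cv2.imdecode(atlas[0][0], 1), cv2.COLOR_BGR2RGB)\nassert image.shape == (64, 96, 3)\n```\n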
\nSee the paper [**BoxCars: Improving Vehicle Fine-Grained Recognition using 3D Bounding Boxes in Traffic Surveillance**](https://doi.org/10.1109/TITS.2018.2799228) for more statistics and information about the dataset acquisition.\nThe dataset contains tracked vehicles with the same label and multiple images per track. A track is uniquely identified by its id `vehicle_id`, while each image is uniquely identified by `vehicle_id` and `instance_id`. It is possible to use the class `BoxCarsDataset` from `lib/boxcars_dataset.py` for working with the dataset; however, for convenience, we also describe the structure of the dataset here. \nThe dataset contains several files and folders:\n* **images** - dataset images and masks \n* **atlas.pkl** - a *BIG* structure with JPEG-encoded images, which can be convenient as the whole structure fits into memory and it is possible to get the images on the fly. To load the atlas (or any other pkl file), you can use the function `load_cache` from `lib/utils.py`. To decode an image (in RGB channel order), use the following statement.\n```python\natlas = load_cache(path_to_atlas_file)\nimage = cv2.cvtColor(cv2.imdecode(atlas[vehicle_id][instance_id], 1), cv2.COLOR_BGR2RGB)\n```\n\n* **dataset.pkl** - contains a dictionary with the following fields:\n```\ncameras: information about used cameras (vanishing points, principal point)\nsamples: list of vehicles (index corresponds to vehicle id). \n\t\t The structure contains several fields which should be understandable. \n\t\t It also contains the field instances with a list of dictionaries \n\t\t with information about the images of the vehicle track. \n\t\t The flag to_camera defines whether the vehicle is going towards the camera or not. \n```\n\n* **classification_splits.pkl** - different splits (*hard* and *medium* from the paper and additional *body* and *make* splits). Each split contains a structure `types_mapping` defining the mapping from textual labels to integer labels. 
It also contains the fields `train`, `test`, and `validation`, which are lists whose elements are tuples `(vehicle_id, class_id)`.\n\n* **verification_splits.pkl** - similar to the classification splits; however, the elements in `train`, `test` are triplets `(vehicle_id1, vehicle_id2, class_id)`.\n\n* **json_data** and **matlab_data** - converted pkl files\n\n\n## Links \n* [BoxCars116k dataset](https://medusa.fit.vutbr.cz/traffic/data/BoxCars116k.zip) ([backup location](https://drive.google.com/file/d/19LHLOmmVyUS1R4ypwByfrV8KQWnz2GDT/view?usp=sharing))\n* Website of our [Traffic Research](https://medusa.fit.vutbr.cz/traffic/) group\n"
  },
  {
    "path": "lib/__init__.py",
    "content": ""
  },
  {
    "path": "lib/boxcars_data_generator.py",
    "content": "# -*- coding: utf-8 -*-\nimport cv2\nimport numpy as np\nfrom keras.preprocessing.image import Iterator\nfrom boxcars_image_transformations import alter_HSV, image_drop, unpack_3DBB, add_bb_noise_flip\nimport random\n\n#%%\nclass BoxCarsDataGenerator(Iterator):\n    def __init__(self, dataset, part, batch_size=8, training_mode=False, seed=None, generate_y = True, image_size = (224,224)):\n        assert image_size == (224,224), \"only images 224x224 are supported by unpack_3DBB for now, if necessary it can be changed\"\n        assert dataset.X[part] is not None, \"load some classification split first\"\n        super().__init__(dataset.X[part].shape[0], batch_size, training_mode, seed)\n        self.part = part\n        self.generate_y = generate_y\n        self.dataset = dataset\n        self.image_size = image_size\n        self.training_mode = training_mode\n        if self.dataset.atlas is None:\n            self.dataset.load_atlas()\n\n    #%%\n    def next(self):\n        with self.lock:\n            index_array, current_index, current_batch_size = next(self.index_generator)\n        x = np.empty([current_batch_size] + list(self.image_size) + [3], dtype=np.float32)\n        for i, ind in enumerate(index_array):\n            vehicle_id, instance_id = self.dataset.X[self.part][ind]\n            vehicle, instance, bb3d = self.dataset.get_vehicle_instance_data(vehicle_id, instance_id)\n            image = self.dataset.get_image(vehicle_id, instance_id)\n            if self.training_mode:\n                image = alter_HSV(image) # randomly alternate color\n                image = image_drop(image) # randomly remove part of the image\n                bb_noise = np.clip(np.random.randn(2) * 1.5, -5, 5) # generate random bounding box movement\n                flip = bool(random.getrandbits(1)) # random flip\n                image, bb3d = add_bb_noise_flip(image, bb3d, flip, bb_noise) \n            image = unpack_3DBB(image, bb3d) \n            image 
= (image.astype(np.float32) - 116)/128.\n            x[i, ...] = image\n        if not self.generate_y:\n            return x\n        y = self.dataset.Y[self.part][index_array]\n        return x, y\n\n"
  },
  {
    "path": "lib/boxcars_dataset.py",
    "content": "# -*- coding: utf-8 -*-\nfrom config import BOXCARS_DATASET,BOXCARS_ATLAS,BOXCARS_CLASSIFICATION_SPLITS\nfrom utils import load_cache\nimport cv2\nimport numpy as np\n\n#%%\nclass BoxCarsDataset(object):\n    def __init__(self, load_atlas = False, load_split = None, use_estimated_3DBB = False, estimated_3DBB_path = None):\n        self.dataset = load_cache(BOXCARS_DATASET)\n        self.use_estimated_3DBB = use_estimated_3DBB\n        \n        self.atlas = None\n        self.split = None\n        self.split_name = None\n        self.estimated_3DBB = None\n        self.X = {}\n        self.Y = {}\n        for part in (\"train\", \"validation\", \"test\"):\n            self.X[part] = None\n            self.Y[part] = None # for labels as array of 0-1 flags\n            \n        if load_atlas:\n            self.load_atlas()\n        if load_split is not None:\n            self.load_classification_split(load_split)\n        if self.use_estimated_3DBB:\n            self.estimated_3DBB = load_cache(estimated_3DBB_path)\n        \n    #%%\n    def load_atlas(self):\n        self.atlas = load_cache(BOXCARS_ATLAS)\n    \n    #%%\n    def load_classification_split(self, split_name):\n        self.split = load_cache(BOXCARS_CLASSIFICATION_SPLITS)[split_name]\n        self.split_name = split_name\n       \n    #%%\n    def get_image(self, vehicle_id, instance_id):\n        \"\"\"\n        returns decoded image from atlas in RGB channel order\n        \"\"\"\n        return cv2.cvtColor(cv2.imdecode(self.atlas[vehicle_id][instance_id], 1), cv2.COLOR_BGR2RGB)\n        \n    #%%\n    def get_vehicle_instance_data(self, vehicle_id, instance_id, original_image_coordinates=False):\n        \"\"\"\n        original_image_coordinates: the 3DBB coordinates are in the original image space\n                                    to convert them into cropped image space, it is necessary to subtract instance[\"3DBB_offset\"]\n                                    which is done 
if this parameter is False. \n        \"\"\"\n        vehicle = self.dataset[\"samples\"][vehicle_id]\n        instance = vehicle[\"instances\"][instance_id]\n        if not self.use_estimated_3DBB:\n            bb3d = self.dataset[\"samples\"][vehicle_id][\"instances\"][instance_id][\"3DBB\"]\n        else:\n            bb3d = self.estimated_3DBB[vehicle_id][instance_id]\n            \n        if not original_image_coordinates:\n            bb3d = bb3d - instance[\"3DBB_offset\"]\n\n        return vehicle, instance, bb3d \n            \n       \n    #%%\n    def initialize_data(self, part):\n        assert self.split is not None, \"load classification split first\"\n        assert part in self.X, \"unknown part -- use: train, validation, test\"\n        assert self.X[part] is None, \"part %s was already initialized\"%part\n        data = self.split[part]\n        x, y = [], []\n        for vehicle_id, label in data:\n            num_instances = len(self.dataset[\"samples\"][vehicle_id][\"instances\"])\n            x.extend([(vehicle_id, instance_id) for instance_id in range(num_instances)])\n            y.extend([label]*num_instances)\n        self.X[part] = np.asarray(x,dtype=int)\n\n        y = np.asarray(y,dtype=int)\n        y_categorical = np.zeros((y.shape[0], self.get_number_of_classes()))\n        y_categorical[np.arange(y.shape[0]), y] = 1\n        self.Y[part] = y_categorical\n        \n\n\n    def get_number_of_classes(self):\n        return len(self.split[\"types_mapping\"])\n        \n        \n    def evaluate(self, probabilities, part=\"test\", top_k=1):\n        samples = self.X[part]\n        assert samples.shape[0] == probabilities.shape[0]\n        assert self.get_number_of_classes() == probabilities.shape[1]\n        part_data = self.split[part]\n        probs_inds = {}\n        for vehicle_id, _ in part_data:\n            probs_inds[vehicle_id] = np.zeros(len(self.dataset[\"samples\"][vehicle_id][\"instances\"]), dtype=int)\n        for i, 
(vehicle_id, instance_id) in enumerate(samples):\n            probs_inds[vehicle_id][instance_id] = i\n            \n        get_hit = lambda probs, gt: int(gt in np.argsort(probs.flatten())[-top_k:])\n        hits = []\n        hits_tracks = []\n        for vehicle_id, label in part_data:\n            inds = probs_inds[vehicle_id]\n            hits_tracks.append(get_hit(np.mean(probabilities[inds, :], axis=0), label))\n            for ind in inds:\n                hits.append(get_hit(probabilities[ind, :], label))\n                \n        return np.mean(hits), np.mean(hits_tracks)\n        "
  },
  {
    "path": "lib/boxcars_image_transformations.py",
    "content": "# -*- coding: utf-8 -*-\nimport cv2\nimport numpy as np\nimport random\n\n\n#%%\ndef alter_HSV(img, change_probability = 0.6):\n    if random.random() < 1-change_probability:\n        return img\n    addToHue = random.randint(0,179)\n    addToSaturation = random.gauss(60, 20)\n    addToValue = random.randint(-50,50)\n    hsvVersion =  cv2.cvtColor(img, cv2.COLOR_RGB2HSV)\n    \n    channels = hsvVersion.transpose(2, 0, 1)\n    channels[0] = ((channels[0].astype(int) + addToHue)%180).astype(np.uint8)\n    channels[1] = (np.maximum(0, np.minimum(255, (channels[1].astype(int) + addToSaturation)))).astype(np.uint8)\n    channels[2] = (np.maximum(0, np.minimum(255, (channels[2].astype(int) + addToValue)))).astype(np.uint8)\n    hsvVersion = channels.transpose(1,2,0)   \n        \n    return cv2.cvtColor(hsvVersion, cv2.COLOR_HSV2RGB)\n\n#%%\ndef image_drop(img, change_probability = 0.6):\n    if random.random() < 1-change_probability:\n        return img\n    width = random.randint(int(img.shape[1]*0.10), int(img.shape[1]*0.3))\n    height = random.randint(int(img.shape[0]*0.10), int(img.shape[0]*0.3))\n    x = random.randint(int(img.shape[1]*0.10), img.shape[1]-width-int(img.shape[1]*0.10))\n    y = random.randint(int(img.shape[0]*0.10), img.shape[0]-height-int(img.shape[0]*0.10))\n    img[y:y+height,x:x+width,:] = (np.random.rand(height,width,3)*255).astype(np.uint8)\n    return img\n\n#%%\ndef add_bb_noise_flip(image, bb3d, flip, bb_noise):\n    bb3d = bb3d + bb_noise \n    if flip:\n        bb3d[:, 0] = image.shape[1] - bb3d[:,0]\n        image = cv2.flip(image, 1)\n    return image, bb3d\n\n#%%\ndef _unpack_side(img, origPoints, targetSize):\n    origPoints = np.array(origPoints).reshape(-1,1,2)\n    targetPoints = np.array([(0,0), (targetSize[0],0), (0, targetSize[1]), \n                             (targetSize[0], targetSize[1])]).reshape(-1,1,2).astype(origPoints.dtype)\n    m, _ = cv2.findHomography(origPoints, targetPoints, 0)\n    resultImage 
= cv2.warpPerspective(img, m, targetSize)\n    return resultImage\n    \n    \n#%%    \ndef unpack_3DBB(img, bb):\n    frontal = _unpack_side(img, [bb[0], bb[1], bb[4], bb[5]], (75,124))\n    side = _unpack_side(img, [bb[1], bb[2], bb[5], bb[6]], (149,124))\n    roof = _unpack_side(img, [bb[0], bb[3], bb[1], bb[2]], (149,100))\n    \n    final = np.zeros((224,224,3), dtype=frontal.dtype)\n    final[100:, 0:75] = frontal\n    final[0:100, 75:] = roof\n    final[100:, 75:] = side\n    \n    return final\n    "
  },
  {
    "path": "lib/utils.py",
    "content": "# -*- coding: utf-8 -*-\nimport pickle\nimport os\nimport numpy as np\nimport sys\n\n#%%\ndef load_cache(path, encoding=\"latin-1\", fix_imports=True):\n    \"\"\"\n    encoding latin-1 is default for Python2 compatibility\n    \"\"\"\n    with open(path, \"rb\") as f:\n        return pickle.load(f, encoding=encoding, fix_imports=True)\n\n#%%\ndef save_cache(path, data):\n    with open(path, \"wb\") as f:\n        pickle.dump(data, f)\n\n#%%\ndef ensure_dir(d):\n    if len(d)  == 0: # for empty dirs (for compatibility with os.path.dirname(\"xxx.yy\"))\n        return\n    if not os.path.exists(d):\n        try:\n            os.makedirs(d)\n        except OSError as e:\n            if e.errno != 17: # FILE EXISTS\n                raise e\n\n#%%\ndef parse_args(available_nets):\n    import argparse\n    default_cache = os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"cache\"))\n    parser = argparse.ArgumentParser(description=\"BoxCars fine-grained recognition algorithm Keras re-implementation\",\n                                    formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n    parser.add_argument(\"--eval\", type=str, default=None, help=\"path to model file to be evaluated\")\n    parser.add_argument(\"--resume\", type=str, default=None, help=\"path to model file to be resumed\")\n    parser.add_argument(\"--train-net\", type=str, default=available_nets[0], help=\"train on one of following nets: %s\"%(str(available_nets)))\n    parser.add_argument(\"--batch-size\", type=int, default=8, help=\"batch size\")\n    parser.add_argument(\"--lr\", type=float, default=0.0025, help=\"learning rate\")\n    parser.add_argument(\"--epochs\", type=int, default=20, help=\"run for epochs\")\n    parser.add_argument(\"--cache\", type=str, default=default_cache, help=\"where to store training meta-data and final model\")\n    parser.add_argument(\"--estimated-3DBB\", type=str, default=None, help=\"use estimated 3DBBs from specified 
path\")\n    \n    \n    args = parser.parse_args()\n    assert args.eval is None or args.resume is None, \"--eval and --resume are mutually exclusive\"\n    if args.eval is None and args.resume is None:\n        assert args.train_net in available_nets, \"--train-net must be one of %s\"%(str(available_nets))\n\n    return args\n\n \n#%%\ndef download_report_hook(block_num, block_size, total_size):\n    downloaded = block_num*block_size\n    percents = downloaded / total_size * 100\n    show_str = \" %.1f%%\"%(percents)\n    sys.stdout.write(show_str + len(show_str)*\"\\b\")\n    sys.stdout.flush()\n    if downloaded >= total_size:\n        print()\n"
  },
  {
    "path": "models/.gitignore",
    "content": "*\n!.gitignore\n!README.md\n"
  },
  {
    "path": "models/README.md",
    "content": "* Default location for downloaded models\n* Use `scripts/download_models.py`"
  },
  {
    "path": "requirements.txt",
    "content": "appdirs==1.4.0\nh5py==2.6.0\nKeras==1.2.2\nnumpy==1.12.0\nopencv-python==3.2.0.6\npackaging==16.8\nprotobuf==3.2.0\npyparsing==2.1.10\nPyYAML==3.12\nscipy==0.18.1\nsix==1.10.0\ntensorflow-gpu==1.0.0\nTheano==0.8.2\n"
  },
  {
    "path": "scripts/_init_paths.py",
    "content": "# -*- coding: utf-8 -*-\nimport os\nimport sys\nscript_dir = os.path.dirname(__file__)\nsys.path.insert(0, os.path.realpath(os.path.join(script_dir, '..', 'lib')))\n"
  },
  {
    "path": "scripts/config.py",
    "content": "# -*- coding: utf-8 -*-\nimport os\n#%%\n# change this to your location\nBOXCARS_DATASET_ROOT = \"/mnt/matylda1/isochor/Datasets/BoxCars116k/\" \n\n#%%\nBOXCARS_IMAGES_ROOT = os.path.join(BOXCARS_DATASET_ROOT, \"images\")\nBOXCARS_DATASET = os.path.join(BOXCARS_DATASET_ROOT, \"dataset.pkl\")\nBOXCARS_ATLAS = os.path.join(BOXCARS_DATASET_ROOT, \"atlas.pkl\")\nBOXCARS_CLASSIFICATION_SPLITS = os.path.join(BOXCARS_DATASET_ROOT, \"classification_splits.pkl\")\n\n"
  },
  {
    "path": "scripts/download_models.py",
    "content": "# -*- coding: utf-8 -*-\nimport _init_paths\nimport os\nimport urllib.request \nimport re\nimport argparse\nimport sys\nfrom utils import ensure_dir, download_report_hook\n\n#%%\nMODELS_DIR_URL = \"https://medusa.fit.vutbr.cz/traffic/data/BoxCars-models/\"\nSUFFIX = \"h5\"\nDEFAULT_OUTPUT_DIR = os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"models\"))\n\n#%%\nwith urllib.request.urlopen(MODELS_DIR_URL) as response:\n    dir_listing = response.read().decode(\"utf-8\")\n\nmodel_matcher = re.compile(r'href=\"(.*)\\.%s\"'%(SUFFIX))\navailable_nets = model_matcher.findall(dir_listing)\n\n\n#%%\nparser = argparse.ArgumentParser(description=\"Download trained model files. Available nets: %s\"%(str(available_nets)),\n                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)\nparser.add_argument(\"--output-dir\",\"-o\", type=str, default=DEFAULT_OUTPUT_DIR, help=\"output directory where to put downloaded models\")\nparser.add_argument(\"--all\", \"-a\", default=False, action=\"store_true\", help=\"download all available models\")\nparser.add_argument(\"net_name\", nargs=\"*\")\nargs = parser.parse_args()\n\ndownload_nets = args.net_name\nif args.all:\n    download_nets = available_nets\n    \nif len(download_nets) == 0:\n    print(\"You need to specify nets to download or use --all to download all of them\\nAVAILABLE NETS: %s\\n\"%(str(available_nets)))\n    parser.print_usage()\n    sys.exit(1)\n\n#%%\nprint(\"Saving downloaded models to: %s\"%(args.output_dir))\nensure_dir(args.output_dir)\nfor net in download_nets:\n    if net not in available_nets:\n        print(\"WARNING: Skipping %s because it is not available. AVAILABLE_NETS: %s\"%(net, str(available_nets)))\n        continue\n    print(\"Downloading %s... 
\"%(net), end=\"\")\n    sys.stdout.flush()\n    urllib.request.urlretrieve(MODELS_DIR_URL + net + \".\" + SUFFIX, os.path.join(args.output_dir, \"%s.%s\"%(net, SUFFIX)), download_report_hook)\n    \n"
  },
  {
    "path": "scripts/train_eval.py",
    "content": "# -*- coding: utf-8 -*-\nimport _init_paths\n# this should be soon to prevent tensorflow initialization with -h parameter\nfrom utils import ensure_dir, parse_args\nargs = parse_args([\"ResNet50\", \"VGG16\", \"VGG19\", \"InceptionV3\"])\n\n# other imports\nimport os\nimport time\nimport sys\n\nfrom boxcars_dataset import BoxCarsDataset\nfrom boxcars_data_generator import BoxCarsDataGenerator\n\nfrom keras.applications.resnet50 import ResNet50\nfrom keras.applications.vgg16 import VGG16\nfrom keras.applications.vgg19 import VGG19\nfrom keras.applications.inception_v3 import InceptionV3\nfrom keras.layers import Dense, Flatten, Dropout, AveragePooling2D\nfrom keras.models import Model, load_model\nfrom keras.optimizers import SGD\nfrom keras.callbacks import ModelCheckpoint, TensorBoard\n\n\n#%% initialize dataset\nif args.estimated_3DBB is None:\n    dataset = BoxCarsDataset(load_split=\"hard\", load_atlas=True)\nelse:\n    dataset = BoxCarsDataset(load_split=\"hard\", load_atlas=True, \n                             use_estimated_3DBB = True, estimated_3DBB_path = args.estimated_3DBB)\n\n#%% get optional path to load model\nmodel = None\nfor path in [args.eval, args.resume]:\n    if path is not None:\n        print(\"Loading model from %s\"%path)\n        model = load_model(path)\n        break\n\n#%% construct the model as it was not passed as an argument\nif model is None:\n    print(\"Initializing new %s model ...\"%args.train_net)\n    if args.train_net in (\"ResNet50\", ):\n        base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(224,224,3))\n        x = Flatten()(base_model.output)\n        \n    if args.train_net in (\"VGG16\", \"VGG19\"):\n        if args.train_net == \"VGG16\":\n            base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224,224,3))\n        elif args.train_net == \"VGG19\":\n            base_model = VGG19(weights='imagenet', include_top=False, input_shape=(224,224,3))\n       
 x = Flatten()(base_model.output)\n        x = Dense(4096, activation='relu', name='fc1')(x)\n        x = Dropout(0.5)(x)\n        x = Dense(4096, activation='relu', name='fc2')(x)\n        x = Dropout(0.5)(x)\n\n    if args.train_net in (\"InceptionV3\", ):\n        base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(224,224,3))\n        output_dim = int(base_model.outputs[0].get_shape()[1])\n        x = AveragePooling2D((output_dim, output_dim), strides=(output_dim, output_dim), name='avg_pool')(base_model.output)\n        x = Flatten()(x)\n            \n    predictions = Dense(dataset.get_number_of_classes(), activation='softmax')(x)\n    model = Model(input=base_model.input, output=predictions, name=\"%s%s\"%(args.train_net, {True: \"_estimated3DBB\", False:\"\"}[args.estimated_3DBB is not None]))\n    optimizer = SGD(lr=args.lr, decay=1e-4, nesterov=True)\n    model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=[\"accuracy\"])\n\n\nprint(\"Model name: %s\"%(model.name))\nif args.estimated_3DBB is not None and \"estimated3DBB\" not in model.name:\n    print(\"ERROR: using estimated 3DBBs with model trained on original 3DBBs\")\n    sys.exit(1)\nif args.estimated_3DBB is None and \"estimated3DBB\" in model.name:\n    print(\"ERROR: using model trained on estimated 3DBBs and running on original 3DBBs\")\n    sys.exit(1)\n\nargs.output_final_model_path = os.path.join(args.cache, model.name, \"final_model.h5\")\nargs.snapshots_dir = os.path.join(args.cache, model.name, \"snapshots\")\nargs.tensorboard_dir = os.path.join(args.cache, model.name, \"tensorboard\")\n\n#%% training\nif args.eval is None:\n    print(\"Training...\")\n    #%% initialize dataset for training\n    dataset.initialize_data(\"train\")\n    dataset.initialize_data(\"validation\")\n    generator_train = BoxCarsDataGenerator(dataset, \"train\", args.batch_size, training_mode=True)\n    generator_val = BoxCarsDataGenerator(dataset, \"validation\", 
args.batch_size, training_mode=False)\n\n\n    #%% callbacks\n    ensure_dir(args.tensorboard_dir)\n    ensure_dir(args.snapshots_dir)\n    tb_callback = TensorBoard(args.tensorboard_dir, histogram_freq=1, write_graph=False, write_images=False)\n    saver_callback = ModelCheckpoint(os.path.join(args.snapshots_dir, \"model_{epoch:03d}_{val_acc:.2f}.h5\"), period=4 )\n\n    #%% get initial epoch\n    initial_epoch = 0\n    if args.resume is not None:\n        initial_epoch = int(os.path.basename(args.resume).split(\"_\")[1]) + 1\n\n\n    model.fit_generator(generator=generator_train, \n                        samples_per_epoch=generator_train.n,\n                        nb_epoch=args.epochs,\n                        verbose=1,\n                        validation_data=generator_val,\n                        nb_val_samples=generator_val.n,\n                        callbacks=[tb_callback, saver_callback],\n                        initial_epoch = initial_epoch,\n                        )\n\n    #%% save trained data\n    print(\"Saving the final model to %s\"%(args.output_final_model_path))\n    ensure_dir(os.path.dirname(args.output_final_model_path))\n    model.save(args.output_final_model_path)\n\n\n#%% evaluate the model \nprint(\"Running evaluation...\")\ndataset.initialize_data(\"test\")\ngenerator_test = BoxCarsDataGenerator(dataset, \"test\", args.batch_size, training_mode=False, generate_y=False)\nstart_time = time.time()\npredictions = model.predict_generator(generator_test, generator_test.n)\nend_time = time.time()\nsingle_acc, tracks_acc = dataset.evaluate(predictions)\nprint(\" -- Accuracy: %.2f%%\"%(single_acc*100))\nprint(\" -- Track accuracy: %.2f%%\"%(tracks_acc*100))\nprint(\" -- Image processing time: %.1fms\"%((end_time-start_time)/generator_test.n*1000))\n"
  }
]