[
  {
    "path": "README.md",
    "content": "# Ultrasound nerve segmentation using Keras (1.0.7)\nKaggle Ultrasound Nerve Segmentation competition [Keras]\n\n#Install (Ubuntu {14,16}, GPU)\n\ncuDNN required.\n\n###Theano\n- http://deeplearning.net/software/theano/install_ubuntu.html#install-ubuntu\n- sudo pip install pydot-ng\n\nIn ~/.theanorc\n```\n[global]\ndevice = gpu0\n[dnn]\nenabled = True\n```\n\n###Keras\n- sudo apt-get install libhdf5-dev\n- sudo pip install h5py\n- sudo pip install pydot\n- sudo pip install nose_parameterized\n- sudo pip install keras\n\nIn ~/.keras/keras.json (it's very important, the project was running on theano backend, and some issues are possible in case of TensorFlow)\n```\n{\n    \"image_dim_ordering\": \"th\",\n    \"epsilon\": 1e-07,\n    \"floatx\": \"float32\",\n    \"backend\": \"theano\"\n}\n```\n\n###Python deps\n - sudo apt-get install python-opencv\n - sudo apt-get install python-sklearn\n\n#Prepare\n\nPlace train and test data into '../train' and '../test' folders accordingly.\n\n```\nmkdir np_data\npython data.py\n```\n\n#Training\n\nSingle model training.\n```\npython train.py\n```\nResults will be generatated in \"res/\" folder. res/unet.hdf5 - best model\n\nGenerate submission:\n```\npython submission.py\n```\n\nGenerate predection with a model in res/unet.hdf5\n``` \npython current.py\n```\n\n#Model\n\nMotivation's explained in my internal pres (slides: http://www.slideshare.net/Eduardyantov/ultrasound-segmentation-kaggle-review)\n\nI used U-net like architecture (http://arxiv.org/abs/1505.04597). 
Main differences:\n - inception blocks instead of VGG-like blocks\n - strided convolutions instead of MaxPooling\n - Dropout, p=0.5\n - skip connections from encoder to decoder layers with residual blocks\n - BatchNorm everywhere\n - two-head training: an auxiliary branch for scoring nerve presence (attached in the middle of the network) and a branch for segmentation\n - ELU activation\n - sigmoid activation in the output layer\n - Adam optimizer, without weight regularization in layers\n - Dice coeff loss, averaged per batch, without smoothing\n - batch_size=64,128 (for GeForce 1080 and Titan X respectively)\n\nAugmentation:\n - flip x,y\n - random zoom\n - random channel shift\n - elastic transformation didn't help in this configuration\n\nAn augmentation generator (generating augmented data on the fly for each epoch) didn't improve the score.\nFor prediction, augmented images were used.\n\nValidation:\n\nFor some reason a validation split by patient (which is the proper approach in this competition) didn't work for me, probably due to a bug in the code, so I used a random split.\n\nThe final prediction uses the probability of nerve presence: p_nerve = (p_score + p_segment)/2, where p_segment is based on the number of output pixels in the mask.\n\n#Results and technical aspects\n- On a Titan X GPU an epoch took about 6 minutes. Training early-stops at 15-30 epochs.\n- batch_size=64 requires 6 GB of GPU memory.\n- The best single model achieved a 0.694 LB score.\n- An ensemble of 6 different k-fold ensembles (k=5,6,8) scored 0.70399.\n\n#Credits\nThis code was originally based on https://github.com/jocicmarko/ultrasound-nerve-segmentation/\n"
  },
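The final-prediction rule from the README (p_nerve = (p_score + p_segment)/2) can be sketched in a few lines of numpy. `nerve_probability` and `cap` are hypothetical names introduced here; the 10000-pixel normalization and the 5/3 scaling mirror the constants used in submission.py.

```python
import numpy as np

def nerve_probability(p_score, mask, cap=10000.0):
    # Combine the auxiliary head's presence score with a mask-based
    # pseudo-probability, as in submission.py: p_segment grows with the
    # number of positive mask pixels, capped at 1 before the 5/3 scaling.
    p_segment = min(1.0, float(np.sum(mask)) / cap) * 5.0 / 3.0
    return (p_score + p_segment) / 2.0
```

In submission.py, a non-empty mask whose combined probability falls below 0.5 is zeroed out before run-length encoding.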
  {
    "path": "__init__.py",
    "content": ""
  },
  {
    "path": "augmentation.py",
    "content": "import sys, os\nimport numpy as np\nfrom keras.preprocessing.image import (transform_matrix_offset_center, apply_transform, Iterator,\n                                       random_channel_shift, flip_axis)\nfrom scipy.ndimage.interpolation import map_coordinates\nfrom scipy.ndimage.filters import gaussian_filter\n\n\n_dir = os.path.join(os.path.realpath(os.path.dirname(__file__)), '')\ndata_path = os.path.join(_dir, '../')\naug_data_path = os.path.join(_dir, 'aug_data')\naug_pattern = os.path.join(aug_data_path, 'train_img_%d.npy')\naug_mask_pattern = os.path.join(aug_data_path, 'train_mask_%d.npy')\n\n\ndef random_zoom(x, y, zoom_range, row_index=1, col_index=2, channel_index=0,\n                fill_mode='nearest', cval=0.):\n    if len(zoom_range) != 2:\n        raise Exception('zoom_range should be a tuple or list of two floats. '\n                        'Received arg: ', zoom_range)\n\n    if zoom_range[0] == 1 and zoom_range[1] == 1:\n        zx, zy = 1, 1\n    else:\n        zx, zy = np.random.uniform(zoom_range[0], zoom_range[1], 2)\n    zoom_matrix = np.array([[zx, 0, 0],\n                            [0, zy, 0],\n                            [0, 0, 1]])\n\n    h, w = x.shape[row_index], x.shape[col_index]\n    transform_matrix = transform_matrix_offset_center(zoom_matrix, h, w)\n    x = apply_transform(x, transform_matrix, channel_index, fill_mode, cval)\n    y = apply_transform(y, transform_matrix, channel_index, fill_mode, cval)\n    return x, y\n\n\ndef random_rotation(x, y, rg, row_index=1, col_index=2, channel_index=0,\n                    fill_mode='nearest', cval=0.):\n    theta = np.pi / 180 * np.random.uniform(-rg, rg)\n    rotation_matrix = np.array([[np.cos(theta), -np.sin(theta), 0],\n                                [np.sin(theta), np.cos(theta), 0],\n                                [0, 0, 1]])\n\n    h, w = x.shape[row_index], x.shape[col_index]\n    transform_matrix = transform_matrix_offset_center(rotation_matrix, h, w)\n    
x = apply_transform(x, transform_matrix, channel_index, fill_mode, cval)\n    y = apply_transform(y, transform_matrix, channel_index, fill_mode, cval)\n    return x, y\n\n\ndef random_shear(x, y, intensity, row_index=1, col_index=2, channel_index=0,\n                 fill_mode='constant', cval=0.):\n    shear = np.random.uniform(-intensity, intensity)\n    shear_matrix = np.array([[1, -np.sin(shear), 0],\n                             [0, np.cos(shear), 0],\n                             [0, 0, 1]])\n\n    h, w = x.shape[row_index], x.shape[col_index]\n    transform_matrix = transform_matrix_offset_center(shear_matrix, h, w)\n    x = apply_transform(x, transform_matrix, channel_index, fill_mode, cval)\n    y = apply_transform(y, transform_matrix, channel_index, fill_mode, cval)\n    return x, y\n\n\nclass CustomNumpyArrayIterator(Iterator):\n\n    def __init__(self, X, y, image_data_generator,\n                 batch_size=32, shuffle=False, seed=None,\n                 dim_ordering='th'):\n        self.X = X\n        self.y = y\n        self.image_data_generator = image_data_generator\n        self.dim_ordering = dim_ordering\n        super(CustomNumpyArrayIterator, self).__init__(X.shape[0], batch_size, shuffle, seed)\n\n\n    def next(self):\n        with self.lock:\n            index_array, _, current_batch_size = next(self.index_generator)\n        batch_x = np.zeros(tuple([current_batch_size] + list(self.X.shape)[1:]))\n        batch_y_1, batch_y_2 = [], []\n        for i, j in enumerate(index_array):\n            x = self.X[j]\n            y1 = self.y[0][j]\n            y2 = self.y[1][j]\n            _x, _y1 = self.image_data_generator.random_transform(x.astype('float32'), y1.astype('float32'))\n            batch_x[i] = _x\n            batch_y_1.append(_y1)\n            batch_y_2.append(y2)\n        return batch_x, [np.array(batch_y_1), np.array(batch_y_2)]\n    \n\nclass CustomImageDataGenerator(object):\n    def __init__(self, zoom_range=(1,1), 
channel_shift_range=0, horizontal_flip=False, vertical_flip=False,\n                 rotation_range=0,\n                 width_shift_range=0.,\n                 height_shift_range=0.,\n                 shear_range=0.,\n                 elastic=None,\n):\n        self.zoom_range = zoom_range\n        self.channel_shift_range = channel_shift_range\n        self.horizontal_flip = horizontal_flip\n        self.vertical_flip = vertical_flip\n        self.rotation_range = rotation_range\n        self.width_shift_range = width_shift_range\n        self.height_shift_range = height_shift_range\n        self.shear_range = shear_range\n        self.elastic = elastic\n    \n    def random_transform(self, x, y, row_index=1, col_index=2, channel_index=0):\n        \n        if self.horizontal_flip:\n            if np.random.random() < 0.5:\n                x = flip_axis(x, 2)\n                y = flip_axis(y, 2)\n        \n        # use composition of homographies to generate final transform that needs to be applied\n        if self.rotation_range:\n            theta = np.pi / 180 * np.random.uniform(-self.rotation_range, self.rotation_range)\n        else:\n            theta = 0\n        rotation_matrix = np.array([[np.cos(theta), -np.sin(theta), 0],\n                                    [np.sin(theta), np.cos(theta), 0],\n                                    [0, 0, 1]])\n        if self.height_shift_range:\n            tx = np.random.uniform(-self.height_shift_range, self.height_shift_range) * x.shape[row_index]\n        else:\n            tx = 0\n\n        if self.width_shift_range:\n            ty = np.random.uniform(-self.width_shift_range, self.width_shift_range) * x.shape[col_index]\n        else:\n            ty = 0\n\n        translation_matrix = np.array([[1, 0, tx],\n                                       [0, 1, ty],\n                                       [0, 0, 1]])\n        if self.shear_range:\n            shear = np.random.uniform(-self.shear_range, 
self.shear_range)\n        else:\n            shear = 0\n        shear_matrix = np.array([[1, -np.sin(shear), 0],\n                                 [0, np.cos(shear), 0],\n                                 [0, 0, 1]])\n\n        if self.zoom_range[0] == 1 and self.zoom_range[1] == 1:\n            zx, zy = 1, 1\n        else:\n            zx, zy = np.random.uniform(self.zoom_range[0], self.zoom_range[1], 2)\n        zoom_matrix = np.array([[zx, 0, 0],\n                                [0, zy, 0],\n                                [0, 0, 1]])\n\n        transform_matrix = np.dot(np.dot(np.dot(rotation_matrix, translation_matrix), shear_matrix), zoom_matrix)\n        \n        h, w = x.shape[row_index], x.shape[col_index]\n        transform_matrix = transform_matrix_offset_center(transform_matrix, h, w)\n        \n        x = apply_transform(x, transform_matrix, channel_index,\n                            fill_mode='constant')\n        y = apply_transform(y, transform_matrix, channel_index,\n                            fill_mode='constant')\n        \n        \n        #        \n\n        if self.vertical_flip:\n            if np.random.random() < 0.5:\n                x = flip_axis(x, 1)\n                y = flip_axis(y, 1)\n        \n        if self.channel_shift_range != 0:\n            x = random_channel_shift(x, self.channel_shift_range)\n\n\n        if self.elastic is not None:\n            x, y = elastic_transform(x.reshape(h,w), y.reshape(h,w), *self.elastic)\n            x, y = x.reshape(1, h, w), y.reshape(1, h, w)\n        \n        return x, y\n    \n    def flow(self, X, Y, batch_size, shuffle=True, seed=None):\n        return CustomNumpyArrayIterator(\n            X, Y, self,\n            batch_size=batch_size, shuffle=shuffle, seed=seed)\n\n        \ndef elastic_transform(image, mask, alpha, sigma, alpha_affine=None, random_state=None):\n    \"\"\"Elastic deformation of images as described in [Simard2003]_ (with modifications).\n    .. 
[Simard2003] Simard, Steinkraus and Platt, \"Best Practices for\n         Convolutional Neural Networks applied to Visual Document Analysis\", in\n         Proc. of the International Conference on Document Analysis and\n         Recognition, 2003.\n\n     Based on https://gist.github.com/erniejunior/601cdf56d2b424757de5\n    \"\"\"\n    if random_state is None:\n        random_state = np.random.RandomState(None)\n\n    shape = image.shape\n\n    dx = gaussian_filter((random_state.rand(*shape) * 2 - 1), sigma) * alpha\n    dy = gaussian_filter((random_state.rand(*shape) * 2 - 1), sigma) * alpha\n\n    x, y = np.meshgrid(np.arange(shape[1]), np.arange(shape[0]))\n    indices = np.reshape(y+dy, (-1, 1)), np.reshape(x+dx, (-1, 1))\n\n\n    res_x = map_coordinates(image, indices, order=1, mode='reflect').reshape(shape)\n    res_y = map_coordinates(mask, indices, order=1, mode='reflect').reshape(shape)\n    return res_x, res_y\n\n\ndef test():\n    X = np.random.randint(0,100, (1000, 1, 100, 200))\n    YY = [np.random.randint(0,100, (1000, 1, 100, 200)), np.random.random((1000, 1))]\n    cid = CustomImageDataGenerator(horizontal_flip=True, elastic=(100,20))\n    gen = cid.flow(X, YY, batch_size=64, shuffle=False)\n    n = gen.next()[0]\n    \n    \nif __name__ == '__main__':\n    sys.exit(test())\n"
  },
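The do/undo transform pairs used for test-time augmentation rely on flips being self-inverse. `flip_axis` is a Keras 1.x helper that was removed from later versions; a numpy re-implementation with the same behavior looks like this:

```python
import numpy as np

def flip_axis(x, axis):
    # numpy equivalent of keras.preprocessing.image.flip_axis (Keras 1.x):
    # reverse the array along the given axis.
    x = np.asarray(x).swapaxes(axis, 0)
    x = x[::-1, ...]
    return x.swapaxes(0, axis)
```

Applying the same flip twice returns the original array, which is what lets current.py undo a flip on the predicted mask after inference.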
  {
    "path": "average_ensembles.py",
    "content": "import numpy as np\nimport sys\nfrom u_model import IMG_COLS as img_cols, IMG_ROWS as img_rows\nfrom train import Learner\n\nensembles = {\n             'ens2': (8, 'best/ens2/res3/'), \n             'ens3': (6, 'best/ens3/res3/'), \n             'ens4': (6, 'best/ens4/res3/'), \n             'ens5': (8, 'best/ens5/res3/'), \n             'ens7': (6, 'best/ens7/res3/'), \n             'ens8': (5, 'best/ens8/res3/'),  \n             }\n\n\ndef main():\n    kfold_masks, kfold_prob = [], []\n    weigths = []\n    for name, (kfold, prefix) in ensembles.iteritems():\n        print 'Loading name=%s, prefix=%s, kfold=%d' % (name, prefix, kfold)\n        ens_x_mask = np.load(prefix + 'imgs_mask_test.npy')\n        ens_x_prob = np.load(prefix + 'imgs_mask_exist_test.npy')\n        kfold_masks.append(ens_x_mask)\n        kfold_prob.append(ens_x_prob)\n        weigths.append(kfold)\n    #\n    total_weight = float(sum(weigths))\n    total_cnt = len(weigths)\n    dlen = len(kfold_masks[0])\n    res_masks = np.ndarray((dlen, 1, img_rows, img_cols), dtype=np.float32)\n    res_probs = np.ndarray((dlen, ), dtype=np.float32)\n    \n    for i in xrange(dlen):\n        masks = np.ndarray((total_cnt, 1, img_rows, img_cols), dtype=np.float32)\n        probs = np.ndarray((total_cnt, ), dtype=np.float32)\n        for k in xrange(total_cnt):\n            masks[k] = weigths[k] * kfold_masks[k][i]\n            probs[k] = weigths[k] * kfold_prob[k][i]\n        res_masks[i] = np.sum(masks, 0)/total_weight\n        res_probs[i] = np.sum(probs)/total_weight\n    print 'Saving', Learner.test_mask_res, Learner.test_mask_exist_res\n    np.save(Learner.test_mask_res, res_masks)\n    np.save(Learner.test_mask_exist_res, res_probs)\n    \n\nif __name__ == '__main__':\n    sys.exit(main())\n\n"
  },
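The per-image loops in average_ensembles.py compute a fold-count-weighted mean of the ensemble predictions. The same computation can be expressed in one vectorized call; `weighted_average` is a name introduced here for illustration.

```python
import numpy as np

def weighted_average(arrays, weights):
    # arrays: list of equally shaped prediction arrays, one per ensemble;
    # weights: the k of each k-fold ensemble, as in average_ensembles.py.
    stacked = np.stack(arrays)  # (n_ensembles, ...)
    return np.average(stacked, axis=0, weights=np.asarray(weights, dtype=np.float32))
```

`np.average` divides by the weight sum internally, matching the explicit `/ total_weight` in the original loop.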
  {
    "path": "current.py",
    "content": "import numpy as np\nimport sys\nfrom data import load_test_data\nfrom u_model import get_unet\nfrom keras.optimizers import Adam\nfrom train import preprocess, Learner\n#aug\nfrom u_model import IMG_COLS as img_cols, IMG_ROWS as img_rows\nfrom keras.preprocessing.image import flip_axis, random_channel_shift\n\ncurry = lambda func, *args, **kw:\\\n            lambda *p, **n:\\\n                 func(*args + p, **dict(kw.items() + n.items()))\nfrom keras.preprocessing.image import apply_transform, transform_matrix_offset_center\n\n\ndef zoom(x, zoom_range, row_index=1, col_index=2, channel_index=0,\n                fill_mode='nearest', cval=0.):\n    zx, zy = zoom_range\n    zoom_matrix = np.array([[zx, 0, 0],\n                            [0, zy, 0],\n                            [0, 0, 1]])\n\n    h, w = x.shape[row_index], x.shape[col_index]\n    transform_matrix = transform_matrix_offset_center(zoom_matrix, h, w)\n    x = apply_transform(x, transform_matrix, channel_index, fill_mode, cval)\n    return x\n\n\ntransforms = (\n              {'do': curry(flip_axis, axis=1), 'undo': curry(flip_axis, axis=1)},\n              {'do': curry(flip_axis, axis=2), 'undo': curry(flip_axis, axis=2)},\n              {'do': curry(zoom, zoom_range=(1.05, 1.05)), 'undo': curry(zoom, zoom_range=(1/1.05, 1/1.05))},\n              {'do': curry(zoom, zoom_range=(0.95, 0.95)), 'undo': curry(zoom, zoom_range=(1/0.95, 1/0.95))},\n              {'do': curry(random_channel_shift, intensity=5), 'undo': lambda x: x},\n              )\n\n\ndef run_test():\n    BS = 128\n    print('Loading and preprocessing test data...')\n    mean, std = Learner.load_meanstd()\n    \n    imgs_test = load_test_data()\n    imgs_test = preprocess(imgs_test)\n\n    imgs_test = imgs_test.astype('float32')\n    imgs_test -= mean\n    imgs_test /= std\n\n    print('Loading saved weights...')\n    model = get_unet(Adam(0.001))\n    print ('Loading weights from %s' % Learner.best_weight_path)\n    
model.load_weights(Learner.best_weight_path)\n    \n    print ('Augment')\n    alen, dlen = len(transforms), len(imgs_test)\n    test_x = np.ndarray((alen, dlen, 1, img_rows, img_cols), dtype=np.float32)\n    for i in xrange(dlen):\n        for j, transform in enumerate(transforms):\n            test_x[j,i] = transform['do'](imgs_test[i].copy())\n    #\n    print('Predicting masks on test data...')\n    outputs = []\n    asis_res = model.predict(imgs_test, batch_size=BS, verbose=1)\n    outputs.append(asis_res)\n    for j, transform in enumerate(transforms):\n        t_y = model.predict(test_x[j], batch_size=BS, verbose=1)\n        outputs.append(t_y)\n    #\n    print('Analyzing')\n    test_masks = np.ndarray((dlen, 1, img_rows, img_cols), dtype=np.float32)\n    test_probs = np.ndarray((dlen, ), dtype=np.float32)\n    for i in xrange(dlen):\n        masks = np.ndarray((alen+1, 1, img_rows, img_cols), dtype=np.float32)\n        probs = np.ndarray((alen+1, ), dtype=np.float32)\n        for j, t_y in enumerate(outputs):\n            mask, prob = t_y[0][i], t_y[1][i]\n            if j:\n                mask = transforms[j-1]['undo'](mask)\n            masks[j] = mask\n            probs[j] = prob\n        #\n        test_masks[i] = np.mean(masks, 0)\n        test_probs[i] = np.mean(probs)\n            \n    print('Saving ')\n    np.save(Learner.test_mask_res, test_masks)\n    np.save(Learner.test_mask_exist_res, test_probs)\n\ndef main():\n    run_test()\n\nif __name__ == '__main__':\n    sys.exit(main())\n"
  },
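current.py applies test-time augmentation: it predicts on the original batch plus each transformed copy, inverts each transform on the predicted mask, and averages. A minimal sketch of that loop, with a generic `predict` callable standing in for the Keras model (names here are illustrative, not from the source):

```python
import numpy as np

def tta_predict(predict, imgs, transforms):
    # predict: callable mapping a batch (N, C, H, W) to masks of the same shape;
    # transforms: dicts with 'do'/'undo' callables acting on a single image,
    # in the spirit of the `transforms` tuple in current.py.
    outputs = [predict(imgs)]
    for t in transforms:
        aug = np.stack([t['do'](img) for img in imgs])
        # undo the spatial transform on each predicted mask before averaging
        outputs.append(np.stack([t['undo'](m) for m in predict(aug)]))
    return np.mean(outputs, axis=0)
```

With self-inverse transforms (flips) and an identity model, the averaged prediction equals the plain one, which is a handy sanity check.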
  {
    "path": "current_ensemble.py",
    "content": "import numpy as np\nimport sys\nfrom data import load_test_data\nfrom u_model import get_unet\nfrom keras.optimizers import Adam\nfrom train import preprocess, Learner\n#aug\nfrom u_model import IMG_COLS as img_cols, IMG_ROWS as img_rows\nfrom keras.preprocessing.image import flip_axis, random_channel_shift\n\ncurry = lambda func, *args, **kw:\\\n            lambda *p, **n:\\\n                 func(*args + p, **dict(kw.items() + n.items()))\nfrom keras.preprocessing.image import apply_transform, transform_matrix_offset_center\n\n\ndef zoom(x, zoom_range, row_index=1, col_index=2, channel_index=0,\n                fill_mode='nearest', cval=0.):\n    zx, zy = zoom_range\n    zoom_matrix = np.array([[zx, 0, 0],\n                            [0, zy, 0],\n                            [0, 0, 1]])\n\n    h, w = x.shape[row_index], x.shape[col_index]\n    transform_matrix = transform_matrix_offset_center(zoom_matrix, h, w)\n    x = apply_transform(x, transform_matrix, channel_index, fill_mode, cval)\n    return x\n\n\ntransforms = (\n              {'do': curry(flip_axis, axis=1), 'undo': curry(flip_axis, axis=1)},\n              {'do': curry(flip_axis, axis=2), 'undo': curry(flip_axis, axis=2)},\n              {'do': curry(zoom, zoom_range=(1.05, 1.05)), 'undo': curry(zoom, zoom_range=(1/1.05, 1/1.05))},\n              {'do': curry(zoom, zoom_range=(0.95, 0.95)), 'undo': curry(zoom, zoom_range=(1/0.95, 1/0.95))},\n              {'do': curry(random_channel_shift, intensity=5), 'undo': lambda x: x},\n              )\n\n\ndef run_test():\n    BS = 256\n    print('Loading and preprocessing test data...')\n    mean, std = Learner.load_meanstd()\n    \n    imgs_test = load_test_data()\n#    imgs_test = imgs_test[:100]\n#    print ('test')\n    imgs_test = preprocess(imgs_test)\n\n    imgs_test = imgs_test.astype('float32')\n    imgs_test -= mean\n    imgs_test /= std\n\n    \n    print ('Augment')\n    alen, dlen = len(transforms), len(imgs_test)\n    test_x = 
np.ndarray((alen, dlen, 1, img_rows, img_cols), dtype=np.float32)\n    for i in xrange(dlen):\n        for j, transform in enumerate(transforms):\n            test_x[j,i] = transform['do'](imgs_test[i].copy())\n    #\n    kfold = 6\n    kfold_masks, kfold_prob = [], []\n    for _iter in xrange(kfold):\n        print('Iter=%d, Loading saved weights...' % _iter)\n        model = get_unet(Adam(0.001))\n        filepath = Learner.best_weight_path + '_%d.fold' % _iter\n        print ('Loading weights from %s' % filepath)\n        model.load_weights(filepath)\n        #\n        print('Predicting masks on test data...')\n        outputs = []\n        asis_res = model.predict(imgs_test, batch_size=BS, verbose=1)\n        outputs.append(asis_res)\n        for j, transform in enumerate(transforms):\n            t_y = model.predict(test_x[j], batch_size=BS, verbose=1)\n            outputs.append(t_y)\n        #\n        print('Analyzing')\n        test_masks = np.ndarray((dlen, 1, img_rows, img_cols), dtype=np.float32)\n        test_probs = np.ndarray((dlen, ), dtype=np.float32)\n        for i in xrange(dlen):\n            masks = np.ndarray((alen+1, 1, img_rows, img_cols), dtype=np.float32)\n            probs = np.ndarray((alen+1, ), dtype=np.float32)\n            for j, t_y in enumerate(outputs):\n                mask, prob = t_y[0][i], t_y[1][i]\n                if j:\n                    mask = transforms[j-1]['undo'](mask.copy())\n                masks[j] = mask\n                probs[j] = prob\n            #\n            test_masks[i] = np.mean(masks, 0)\n            test_probs[i] = np.mean(probs)\n        kfold_masks.append(test_masks)\n        kfold_prob.append(test_probs)\n    \n    print 'Summing results of ensemble'\n    #\n    res_masks = np.ndarray((dlen, 1, img_rows, img_cols), dtype=np.float32)\n    res_probs = np.ndarray((dlen, ), dtype=np.float32)\n    for i in xrange(dlen):\n        masks = np.ndarray((kfold, 1, img_rows, img_cols), dtype=np.float32)\n      
  probs = np.ndarray((kfold, ), dtype=np.float32)\n        for k in xrange(kfold):\n            masks[k] = kfold_masks[k][i]\n            probs[k] = kfold_prob[k][i]\n        res_masks[i] = np.mean(masks, 0)\n        res_probs[i] = np.mean(probs)\n        \n\n    print('Saving ')\n    np.save(Learner.test_mask_res, res_masks)\n    np.save(Learner.test_mask_exist_res, res_probs)\n\n\ndef main():\n    run_test()\n\nif __name__ == '__main__':\n    sys.exit(main())\n\n"
  },
  {
    "path": "data.py",
    "content": "from __future__ import print_function\nimport os, sys\nimport numpy as np\nimport cv2\n\nimage_rows = 420\nimage_cols = 580\n\n_dir = os.path.join(os.path.realpath(os.path.dirname(__file__)), '')\ndata_path = os.path.join(_dir, '../')\npreprocess_path = os.path.join(_dir, 'np_data')\nimg_train_path = os.path.join(preprocess_path, 'imgs_train.npy')\nimg_train_mask_path = os.path.join(preprocess_path, 'imgs_mask_train.npy')\nimg_train_patients = os.path.join(preprocess_path, 'imgs_patient.npy')\nimg_test_path = os.path.join(preprocess_path, 'imgs_test.npy') \nimg_test_id_path = os.path.join(preprocess_path, 'imgs_id_test.npy') \n\n\n\ndef load_test_data():\n    print ('Loading test data from %s' % img_test_path)\n    imgs_test = np.load(img_test_path)\n    return imgs_test\n\ndef load_test_ids():\n    print ('Loading test ids from %s' % img_test_id_path)\n    imgs_id = np.load(img_test_id_path)\n    return imgs_id\n\ndef load_train_data():\n    print ('Loading train data from %s and %s' % (img_train_path, img_train_mask_path))\n    imgs_train = np.load(img_train_path)\n    imgs_mask_train = np.load(img_train_mask_path)\n    return imgs_train, imgs_mask_train\n\ndef load_patient_num():\n    print ('Loading patient numbers from %s' % img_train_patients)\n    return np.load(img_train_patients)\n\ndef get_patient_nums(string):\n    pat, photo = string.split('_')\n    photo = photo.split('.')[0]\n    return int(pat), int(photo)\n\ndef create_train_data():\n    train_data_path = os.path.join(data_path, 'train')\n    images = filter((lambda image: 'mask' not in image), os.listdir(train_data_path))\n    total = len(images) \n\n    imgs = np.ndarray((total, 1, image_rows, image_cols), dtype=np.uint8)\n    imgs_mask = np.ndarray((total, 1, image_rows, image_cols), dtype=np.uint8)\n    i = 0\n    print('Creating training images...')\n    img_patients = np.ndarray((total,), dtype=np.uint8)\n    for image_name in images:\n        if 'mask' in image_name:\n         
   continue\n        image_mask_name = image_name.split('.')[0] + '_mask.tif'\n        patient_num = image_name.split('_')[0]\n        img = cv2.imread(os.path.join(train_data_path, image_name), cv2.IMREAD_GRAYSCALE)\n        img_mask = cv2.imread(os.path.join(train_data_path, image_mask_name), cv2.IMREAD_GRAYSCALE)\n\n        imgs[i, 0] = img\n        imgs_mask[i, 0] = img_mask\n        img_patients[i] = patient_num\n        if i % 100 == 0:\n            print('Done: {0}/{1} images'.format(i, total))\n        i += 1\n    print('Loading done.')\n    np.save(img_train_patients, img_patients)\n    np.save(img_train_path, imgs)\n    np.save(img_train_mask_path, imgs_mask)\n    print('Saving to .npy files done.')\n\n\ndef create_test_data():\n    train_data_path = os.path.join(data_path, 'test')\n    images = os.listdir(train_data_path)\n    total = len(images)\n\n    imgs = np.ndarray((total, 1, image_rows, image_cols), dtype=np.uint8)\n    imgs_id = np.ndarray((total, ), dtype=np.int32)\n\n    i = 0\n    print('Creating test images...')\n    for image_name in images:\n        img_id = int(image_name.split('.')[0])\n        img = cv2.imread(os.path.join(train_data_path, image_name), cv2.IMREAD_GRAYSCALE)\n\n        imgs[i, 0] = img\n        imgs_id[i] = img_id\n\n        if i % 100 == 0:\n            print('Done: {0}/{1} images'.format(i, total))\n        i += 1\n    print('Loading done.')\n\n    np.save(img_test_path, imgs)\n    np.save(img_test_id_path, imgs_id)\n    print('Saving to .npy files done.')\n\n\ndef main():\n    create_train_data()\n    create_test_data()\n\nif __name__ == '__main__':\n    sys.exit(main())\n"
  },
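Training filenames follow the pattern `<patient>_<photo>.tif`, with masks named `<patient>_<photo>_mask.tif`; `get_patient_nums` in data.py parses the former (copied below). Note it assumes mask names have already been filtered out, as `create_train_data` does.

```python
def get_patient_nums(name):
    # data.py: parse '<patient>_<photo>.tif' into integer ids, e.g. '32_45.tif' -> (32, 45)
    pat, photo = name.split('_')
    photo = photo.split('.')[0]
    return int(pat), int(photo)
```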
  {
    "path": "keras_plus.py",
    "content": "from keras.callbacks import Callback\nfrom keras.callbacks import warnings\nimport sys\nimport numpy as np\nfrom keras import backend as K\n\n\nclass AdvancedLearnignRateScheduler(Callback):\n    '''\n    # Arguments\n        monitor: quantity to be monitored.\n        patience: number of epochs with no improvement\n            after which training will be stopped.\n        verbose: verbosity mode.\n        mode: one of {auto, min, max}. In 'min' mode,\n            training will stop when the quantity\n            monitored has stopped decreasing; in 'max'\n            mode it will stop when the quantity\n            monitored has stopped increasing.\n    '''\n    def __init__(self, monitor='val_loss', patience=0,\n                 verbose=0, mode='auto', decayRatio=0.5):\n        super(Callback, self).__init__()\n\n        self.monitor = monitor\n        self.patience = patience\n        self.verbose = verbose\n        self.wait = 0\n        self.decayRatio = decayRatio\n\n        if mode not in ['auto', 'min', 'max']:\n            warnings.warn('Mode %s is unknown, '\n                          'fallback to auto mode.'\n                          % (self.mode), RuntimeWarning)\n            mode = 'auto'\n\n        if mode == 'min':\n            self.monitor_op = np.less\n            self.best = np.Inf\n        elif mode == 'max':\n            self.monitor_op = np.greater\n            self.best = -np.Inf\n        else:\n            if 'acc' in self.monitor:\n                self.monitor_op = np.greater\n                self.best = -np.Inf\n            else:\n                self.monitor_op = np.less\n                self.best = np.Inf\n\n    def on_epoch_end(self, epoch, logs={}):\n        current = logs.get(self.monitor)\n\n        current_lr = K.get_value(self.model.optimizer.lr)\n        print(\" \\nLearning rate:\", current_lr)\n        if current is None:\n            warnings.warn('AdvancedLearnignRateScheduler'\n                          ' 
requires %s available!' %\n                          (self.monitor), RuntimeWarning)\n            return\n\n        if self.monitor_op(current, self.best):\n            self.best = current\n            self.wait = 0\n        else:\n            if self.wait >= self.patience:\n                assert hasattr(self.model.optimizer, 'lr'), \\\n                    'Optimizer must have a \"lr\" attribute.'\n                current_lr = K.get_value(self.model.optimizer.lr)\n                new_lr = current_lr * self.decayRatio\n                if self.verbose > 0:\n                    print(' \\nEpoch %05d: reducing learning rate' % (epoch))\n                    sys.stderr.write(' \\nnew lr: %.5f\\n' % new_lr)\n                K.set_value(self.model.optimizer.lr, new_lr)\n                self.wait = 0\n\n            self.wait += 1\n\n\nclass LearningRateDecay(Callback):\n    '''Learning rate decay.\n\n    # Arguments\n        decay: multiplicative factor applied to the learning rate.\n        every_n: apply the decay once every n epochs.\n        verbose: verbosity mode.\n    '''\n    def __init__(self, decay, every_n=1, verbose=0):\n        Callback.__init__(self)\n        self.decay = decay\n        self.every_n = every_n\n        self.verbose = verbose\n\n    def on_epoch_end(self, epoch, logs={}):\n        if not (epoch and epoch % self.every_n == 0):\n            return\n\n        assert hasattr(self.model.optimizer, 'lr'), \\\n            'Optimizer must have a \"lr\" attribute.'\n        current_lr = K.get_value(self.model.optimizer.lr)\n        new_lr = current_lr * self.decay\n        if self.verbose > 0:\n            print(' \\nEpoch %05d: reducing learning rate' % (epoch))\n            sys.stderr.write('new lr: %.5f\\n' % new_lr)\n        K.set_value(self.model.optimizer.lr, new_lr)\n"
  },
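`LearningRateDecay` multiplies the rate by `decay` at the end of epochs `every_n`, `2*every_n`, and so on (epoch 0 never triggers), so the rate in effect after epoch `e` is `lr0 * decay**(e // every_n)`. A pure-function sketch of that schedule (`lr_after_epoch` is a name introduced here):

```python
def lr_after_epoch(epoch, lr0, decay, every_n=1):
    # Learning rate in effect after `epoch` ends, per LearningRateDecay:
    # one multiplicative decay for each multiple of every_n in 1..epoch.
    triggers = epoch // every_n
    return lr0 * (decay ** triggers)
```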
  {
    "path": "metric.py",
    "content": "import sys\nimport numpy as np\nfrom keras import backend as K\nsmooth = 1\n\n\ndef mean_length_error(y_true, y_pred):\n    y_true_f = K.sum(K.round(K.flatten(y_true)))\n    y_pred_f = K.sum(K.round(K.flatten(y_pred)))\n    delta = (y_pred_f - y_true_f)\n    return K.mean(K.tanh(delta))\n\ndef dice_coef(y_true, y_pred):\n    y_true_f = K.flatten(y_true)\n    y_pred_f = K.flatten(y_pred)\n    intersection = K.sum(y_true_f * y_pred_f)\n    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)\n\ndef dice_coef_loss(y_true, y_pred):\n    return -dice_coef(y_true, y_pred)\n\ndef np_dice_coef(y_true, y_pred):\n    tr = y_true.flatten()\n    pr = y_pred.flatten()\n    return (2. * np.sum(tr * pr) + smooth) / (np.sum(tr) + np.sum(pr) + smooth)\n\n\ndef main():\n    a = np.random.random((420,100))\n    b = np.random.random((420,100))\n#    print a.flatten().shape\n    res =  np_dice_coef(a,b )\n    print res\n\n\nif __name__ == '__main__':\n    sys.exit(main())\n    "
  },
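The `smooth` term in metric.py keeps the Dice coefficient defined (and equal to 1) when both masks are empty, which matters in this competition since many images contain no nerve. The numpy variant, copied from metric.py with the constant made explicit:

```python
import numpy as np

SMOOTH = 1.0  # `smooth` in metric.py

def np_dice_coef(y_true, y_pred, smooth=SMOOTH):
    # Dice = (2 * |A ∩ B| + smooth) / (|A| + |B| + smooth)
    tr = y_true.flatten()
    pr = y_pred.flatten()
    return (2.0 * np.sum(tr * pr) + smooth) / (np.sum(tr) + np.sum(pr) + smooth)
```

Two empty masks score exactly 1, identical masks score ~1, and disjoint masks score near 0.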
  {
    "path": "submission.py",
    "content": "from __future__ import print_function\nimport sys, os\nimport numpy as np\nimport cv2\nfrom data import image_cols, image_rows, load_test_ids\nfrom train import Learner\n\n\ndef prep(img):\n    img = img.astype('float32')\n    img = cv2.resize(img, (image_cols, image_rows)) \n    img = cv2.threshold(img, 0.5, 1., cv2.THRESH_BINARY)[1].astype(np.uint8)\n    return img\n\ndef run_length_enc(label):\n    from itertools import chain\n    x = label.transpose().flatten()\n    y = np.where(x > 0)[0]\n    if len(y) < 10:  # consider as empty\n        return ''\n    z = np.where(np.diff(y) > 1)[0]\n    start = np.insert(y[z+1], 0, y[0])\n    end = np.append(y[z], y[-1])\n    length = end - start\n    res = [[s+1, l+1] for s, l in zip(list(start), list(length))]\n    res = list(chain.from_iterable(res))\n    return ' '.join([str(r) for r in res])\n\n\ndef submission():\n    imgs_id_test = load_test_ids()\n    \n    print ('Loading test_mask_res from %s' % Learner.test_mask_res)\n    imgs_test = np.load(Learner.test_mask_res)\n    print ('Loading imgs_exist_test from %s' % Learner.test_mask_exist_res)\n    imgs_exist_test = np.load(Learner.test_mask_exist_res)\n\n    argsort = np.argsort(imgs_id_test)\n    imgs_id_test = imgs_id_test[argsort]\n    imgs_test = imgs_test[argsort]\n    imgs_exist_test = imgs_exist_test[argsort]\n\n    total = imgs_test.shape[0]\n    ids = []\n    rles = []\n    for i in xrange(total):\n        img = imgs_test[i, 0]\n        img_exist = imgs_exist_test[i]\n        img = prep(img)\n        new_prob = (img_exist + min(1, np.sum(img)/10000.0 )* 5 / 3)/2\n        if np.sum(img) > 0 and new_prob < 0.5:\n            img = np.zeros((image_rows, image_cols))\n\n        rle = run_length_enc(img)\n\n        rles.append(rle)\n        ids.append(imgs_id_test[i])\n\n        if i % 1000 == 0:\n            print('{}/{}'.format(i, total))\n\n    file_name = os.path.join(Learner.res_dir, 'submission.csv')\n\n    with open(file_name, 'w+') as f:\n  
      f.write('img,pixels\\n')\n        for i in xrange(total):\n            s = str(ids[i]) + ',' + rles[i]\n            f.write(s + '\\n')\n\ndef main():\n    submission()\n\n\nif __name__ == '__main__':\n    sys.exit(main())\n"
  },
  {
    "path": "train.py",
    "content": "from __future__ import print_function\nfrom optparse import OptionParser\nimport cv2, sys, os, shutil, random\nimport numpy as np\nfrom keras.optimizers import Adam\nfrom keras.callbacks import ModelCheckpoint, EarlyStopping\nfrom keras.preprocessing.image import flip_axis, random_channel_shift\nfrom keras.engine.training import slice_X\nfrom keras_plus import LearningRateDecay\nfrom u_model import get_unet, IMG_COLS as img_cols, IMG_ROWS as img_rows\nfrom data import load_train_data, load_test_data, load_patient_num\nfrom augmentation import random_zoom, elastic_transform, random_rotation\nfrom utils import save_pickle, load_pickle, count_enum\n\n_dir = os.path.join(os.path.realpath(os.path.dirname(__file__)), '')\n\n\ndef preprocess(imgs, to_rows=None, to_cols=None):\n    if to_rows is None or to_cols is None:\n        to_rows = img_rows\n        to_cols = img_cols\n    imgs_p = np.ndarray((imgs.shape[0], imgs.shape[1], to_rows, to_cols), dtype=np.uint8)\n    for i in xrange(imgs.shape[0]):\n        imgs_p[i, 0] = cv2.resize(imgs[i, 0], (to_cols, to_rows), interpolation=cv2.INTER_CUBIC)\n    return imgs_p\n\n\nclass Learner(object):\n    \n    suffix = ''\n    res_dir = os.path.join(_dir, 'res' + suffix)\n    best_weight_path = os.path.join(res_dir, 'unet.hdf5')\n    test_mask_res = os.path.join(res_dir, 'imgs_mask_test.npy')\n    test_mask_exist_res = os.path.join(res_dir, 'imgs_mask_exist_test.npy')\n    meanstd_path = os.path.join(res_dir, 'meanstd.dump')\n    valid_data_path = os.path.join(res_dir, 'valid.npy')\n    tensorboard_dir = os.path.join(res_dir, 'tb')\n    \n    def __init__(self, model_func, validation_split):\n        self.model_func = model_func\n        self.validation_split = validation_split\n        self.__iter_res_dir = os.path.join(self.res_dir, 'res_iter')\n        self.__iter_res_file = os.path.join(self.__iter_res_dir, '{epoch:02d}-{val_loss:.4f}.unet.hdf5')\n        \n    def _dir_init(self):\n        if not 
os.path.exists(self.res_dir):\n            os.mkdir(self.res_dir)\n        #iter clean\n        if os.path.exists(self.__iter_res_dir):\n            shutil.rmtree(self.__iter_res_dir)\n        os.mkdir(self.__iter_res_dir)\n    \n    def save_meanstd(self):\n        data = [self.mean, self.std]\n        save_pickle(self.meanstd_path, data)\n        \n    @classmethod\n    def load_meanstd(cls):\n        print ('Load meanstd from %s' % cls.meanstd_path)\n        mean, std = load_pickle(cls.meanstd_path)\n        return mean, std\n    \n    @classmethod\n    def save_valid_idx(cls, idx):\n        save_pickle(cls.valid_data_path, idx)\n        \n    @classmethod\n    def load_valid_idx(cls):\n        return load_pickle(cls.valid_data_path)\n    \n    def _init_mean_std(self, data):\n        data = data.astype('float32')\n        self.mean, self.std = np.mean(data), np.std(data)\n        self.save_meanstd()\n        return data\n    \n    def get_object_existance(self, mask_array):\n        return np.array([int(np.sum(mask_array[i, 0]) > 0) for i in xrange(len(mask_array))])\n\n    def standartize(self, array, to_float=False):\n        if to_float:\n            array = array.astype('float32')\n        if self.mean is None or self.std is None:\n            raise ValueError, 'No mean/std is initialised'\n        \n        array -= self.mean\n        array /= self.std\n        return array\n\n    @classmethod\n    def norm_mask(cls, mask_array):\n        mask_array = mask_array.astype('float32')\n        mask_array /= 255.0\n        return mask_array\n\n    @classmethod\n    def shuffle_train(cls, data, mask):\n        perm = np.random.permutation(len(data))\n        data = data[perm]\n        mask = mask[perm]\n        return data, mask\n\n    @classmethod\n    def split_train_and_valid_by_patient(cls, data, mask, validation_split, shuffle=False):\n        print('Shuffle & split...')\n        patient_nums = load_patient_num()\n        patient_dict = 
count_enum(patient_nums)\n        pnum = len(patient_dict)\n        val_num = int(pnum * validation_split)\n        patients = patient_dict.keys()\n        if shuffle:\n            random.shuffle(patients)\n        val_p, train_p = patients[:val_num], patients[val_num:]\n        train_indexes = [i for i, c in enumerate(patient_nums) if c in set(train_p)]\n        val_indexes = [i for i, c in enumerate(patient_nums) if c in set(val_p)]\n        x_train, y_train = data[train_indexes], mask[train_indexes]\n        x_valid, y_valid = data[val_indexes], mask[val_indexes]\n        cls.save_valid_idx(val_indexes)\n        print ('val patients:', len(x_valid), val_p)\n        print ('train patients:', len(x_train), train_p)\n        return (x_train, y_train), (x_valid, y_valid)\n\n    @classmethod\n    def split_train_and_valid(cls, data, mask, validation_split, shuffle=False):\n        print('Shuffle & split...')\n        if shuffle:\n            data, mask = cls.shuffle_train(data, mask)\n        split_at = int(len(data) * (1. 
- validation_split))\n        x_train, x_valid = (slice_X(data, 0, split_at), slice_X(data, split_at))\n        y_train, y_valid = (slice_X(mask, 0, split_at), slice_X(mask, split_at))\n        cls.save_valid_idx(range(len(data))[split_at:])\n        return (x_train, y_train), (x_valid, y_valid)\n        \n    def test(self, model, batch_size=256):\n        print('Loading and pre-processing test data...')\n        imgs_test = load_test_data()\n        imgs_test = preprocess(imgs_test)\n        imgs_test = self.standartize(imgs_test, to_float=True)\n    \n        print('Loading best saved weights...')\n        model.load_weights(self.best_weight_path)\n        print('Predicting masks on test data and saving...')\n        imgs_mask_test = model.predict(imgs_test, batch_size=batch_size, verbose=1)\n        \n        np.save(self.test_mask_res, imgs_mask_test[0])\n        np.save(self.test_mask_exist_res, imgs_mask_test[1])\n        \n    def __pretrain_model_load(self, model, pretrained_path):\n        if pretrained_path is not None:\n            if not os.path.exists(pretrained_path):\n                raise ValueError, 'No such pre-trained path exists'\n            model.load_weights(pretrained_path)\n            \n            \n    def augmentation(self, X, Y):\n        print('Augmentation model...')\n        total = len(X)\n        x_train, y_train = [], []\n        \n        for i in xrange(total):\n            x, y = X[i], Y[i]\n            #standart\n            x_train.append(x)\n            y_train.append(y)\n        \n#            for _ in xrange(1):\n#                _x, _y = elastic_transform(x[0], y[0], 100, 20)\n#                x_train.append(_x.reshape((1,) + _x.shape))\n#                y_train.append(_y.reshape((1,) + _y.shape))\n            \n            #flip x\n            x_train.append(flip_axis(x, 2))\n            y_train.append(flip_axis(y, 2))\n            #flip y\n            x_train.append(flip_axis(x, 1))\n            
y_train.append(flip_axis(y, 1))\n            #continue\n            #zoom\n            for _ in xrange(1):\n                _x, _y = random_zoom(x, y, (0.9, 1.1))\n                x_train.append(_x)\n                y_train.append(_y)\n            for _ in xrange(0):\n                _x, _y = random_rotation(x, y, 5)\n                x_train.append(_x)\n                y_train.append(_y)\n            #intensity\n            for _ in xrange(1):\n                _x = random_channel_shift(x, 5.0)\n                x_train.append(_x)\n                y_train.append(y)\n\n        x_train = np.array(x_train)\n        y_train = np.array(y_train)\n        return x_train, y_train\n\n    def fit(self, x_train, y_train, x_valid, y_valid, pretrained_path):\n        print('Creating, compiling and fitting model...')\n        print('Shape:', x_train.shape)\n        #second output: nerve presence\n        y_train_2 = self.get_object_existance(y_train)\n        y_valid_2 = self.get_object_existance(y_valid)\n\n        #load model\n        optimizer = Adam(lr=0.0045)\n        model = self.model_func(optimizer)\n\n        #checkpoints\n        model_checkpoint = ModelCheckpoint(self.__iter_res_file, monitor='val_loss')\n        model_save_best = ModelCheckpoint(self.best_weight_path, monitor='val_loss', save_best_only=True)\n        early_s = EarlyStopping(monitor='val_loss', patience=5, verbose=1)\n        learning_rate_adapt = LearningRateDecay(0.9, every_n=2, verbose=1)  # unused here; add to callbacks to enable decay\n        self.__pretrain_model_load(model, pretrained_path)\n        model.fit(\n                   x_train, [y_train, y_train_2],\n                   validation_data=(x_valid, [y_valid, y_valid_2]),\n                   batch_size=128, nb_epoch=50,\n                   verbose=1, shuffle=True,\n                   callbacks=[model_save_best, model_checkpoint, early_s]\n                   )\n\n        return model\n\n    def train_and_predict(self, pretrained_path=None, split_random=True):\n        self._dir_init()\n        print('Loading, preprocessing and standardizing train data...')\n        imgs_train, imgs_mask_train = load_train_data()\n\n        imgs_train = preprocess(imgs_train)\n        imgs_mask_train = preprocess(imgs_mask_train)\n        imgs_mask_train = self.norm_mask(imgs_mask_train)\n\n        split_func = self.split_train_and_valid if split_random else self.split_train_and_valid_by_patient\n        (x_train, y_train), (x_valid, y_valid) = split_func(imgs_train, imgs_mask_train,\n                                                            validation_split=self.validation_split)\n        self._init_mean_std(x_train)\n        x_train = self.standartize(x_train, True)\n        x_valid = self.standartize(x_valid, True)\n        #augmentation\n        x_train, y_train = self.augmentation(x_train, y_train)\n        #fit\n        model = self.fit(x_train, y_train, x_valid, y_valid, pretrained_path)\n        #test\n        self.test(model)\n\n\ndef main():\n    parser = OptionParser()\n    parser.add_option(\"-s\", \"--split_random\", action='store', type='int', dest='split_random', default=1)\n    parser.add_option(\"-m\", \"--model_name\", action='store', type='str', dest='model_name', default='u_model')\n    #\n    options, _ = parser.parse_args()\n    split_random = options.split_random\n    model_name = options.model_name\n    if model_name is None:\n        raise ValueError('model_name is not defined')\n    #\n    import imp\n    model_ = imp.load_source('model_', model_name + '.py')\n    model_func = model_.get_unet\n    #\n    lr = Learner(model_func, validation_split=0.2)\n    lr.train_and_predict(pretrained_path=None, split_random=split_random)\n    print('Results in', lr.res_dir)\n\nif __name__ == '__main__':\n    sys.exit(main())\n"
  },
  {
    "path": "train_generator.py",
    "content": "from __future__ import print_function\nfrom optparse import OptionParser\nimport cv2, sys, os, shutil, random\nimport numpy as np\nfrom keras.optimizers import Adam, SGD, RMSprop\nfrom keras.callbacks import ModelCheckpoint, EarlyStopping\nfrom keras.preprocessing.image import flip_axis, random_channel_shift\nfrom keras.engine.training import slice_X\nfrom keras_plus import LearningRateDecay\nfrom u_model import get_unet, IMG_COLS as img_cols, IMG_ROWS as img_rows\nfrom data import load_train_data, load_test_data, load_patient_num\nfrom augmentation import CustomImageDataGenerator\nfrom augmentation import random_zoom, elastic_transform, load_aug\nfrom utils import save_pickle, load_pickle, count_enum\n\n_dir = os.path.join(os.path.realpath(os.path.dirname(__file__)), '')\n\n\n\ndef preprocess(imgs, to_rows=None, to_cols=None):\n    if to_rows is None or to_cols is None:\n        to_rows = img_rows\n        to_cols = img_cols\n    imgs_p = np.ndarray((imgs.shape[0], imgs.shape[1], to_rows, to_cols), dtype=np.uint8)\n    for i in xrange(imgs.shape[0]):\n        imgs_p[i, 0] = cv2.resize(imgs[i, 0], (to_cols, to_rows), interpolation=cv2.INTER_CUBIC)\n    return imgs_p\n\nclass Learner(object):\n    \n    suffix = ''\n    res_dir = os.path.join(_dir, 'res' + suffix)\n    best_weight_path = os.path.join(res_dir, 'unet.hdf5')\n    test_mask_res = os.path.join(res_dir, 'imgs_mask_test.npy')\n    test_mask_exist_res = os.path.join(res_dir, 'imgs_mask_exist_test.npy')\n    meanstd_path = os.path.join(res_dir, 'meanstd.dump')\n    valid_data_path = os.path.join(res_dir, 'valid.npy')\n    tensorboard_dir = os.path.join(res_dir, 'tb')\n    \n    def __init__(self, model_func, validation_split):\n        self.model_func = model_func\n        self.validation_split = validation_split\n        self.__iter_res_dir = os.path.join(self.res_dir, 'res_iter')\n        self.__iter_res_file = os.path.join(self.__iter_res_dir, '{epoch:02d}-{val_loss:.4f}.unet.hdf5')\n      
  \n    def _dir_init(self):\n        if not os.path.exists(self.res_dir):\n            os.mkdir(self.res_dir)\n        #iter clean\n        if os.path.exists(self.__iter_res_dir):\n            shutil.rmtree(self.__iter_res_dir)\n        os.mkdir(self.__iter_res_dir)\n    \n    def save_meanstd(self):\n        data = [self.mean, self.std]\n        save_pickle(self.meanstd_path, data)\n        \n    @classmethod\n    def load_meanstd(cls):\n        print ('Load meanstd from %s' % cls.meanstd_path)\n        mean, std = load_pickle(cls.meanstd_path)\n        return mean, std\n    \n    @classmethod\n    def save_valid_idx(cls, idx):\n        save_pickle(cls.valid_data_path, idx)\n        \n    @classmethod\n    def load_valid_idx(cls):\n        return load_pickle(cls.valid_data_path)\n    \n    def _init_mean_std(self, data):\n        data = data.astype('float32')\n        self.mean, self.std = np.mean(data), np.std(data)\n        self.save_meanstd()\n        return data\n    \n    def get_object_existance(self, mask_array):\n        return np.array([int(np.sum(mask_array[i, 0]) > 0) for i in xrange(len(mask_array))])\n\n    def standartize(self, array, to_float=False):\n        if to_float:\n            array = array.astype('float32')\n        if self.mean is None or self.std is None:\n            raise ValueError, 'No mean/std is initialised'\n        \n        array -= self.mean\n        array /= self.std\n        return array\n\n    @classmethod\n    def norm_mask(cls, mask_array):\n        mask_array = mask_array.astype('float32')\n        mask_array /= 255.0\n        return mask_array\n\n    @classmethod\n    def shuffle_train(cls, data, mask):\n        perm = np.random.permutation(len(data))\n        data = data[perm]\n        mask = mask[perm]\n        return data, mask\n\n    @classmethod\n    def split_train_and_valid_by_patient(cls, data, mask, validation_split, shuffle=False):\n        print('Shuffle & split...')\n        patient_nums = 
load_patient_num()\n        patient_dict = count_enum(patient_nums)\n        pnum = len(patient_dict)\n        val_num = int(pnum * validation_split)\n        patients = patient_dict.keys()\n        if shuffle:\n            random.shuffle(patients)\n        val_p, train_p = patients[:val_num], patients[val_num:]\n        train_indexes = [i for i, c in enumerate(patient_nums) if c in set(train_p)]\n        val_indexes = [i for i, c in enumerate(patient_nums) if c in set(val_p)]\n        x_train, y_train = data[train_indexes], mask[train_indexes]\n        x_valid, y_valid = data[val_indexes], mask[val_indexes]\n        cls.save_valid_idx(val_indexes)\n        print ('val patients:', len(x_valid), val_p)\n        print ('train patients:', len(x_train), train_p)\n        return (x_train, y_train), (x_valid, y_valid)\n\n    @classmethod\n    def split_train_and_valid(cls, data, mask, validation_split, shuffle=False):\n        print('Shuffle & split...')\n        if shuffle:\n            data, mask = cls.shuffle_train(data, mask)\n        split_at = int(len(data) * (1. 
- validation_split))\n        x_train, x_valid = (slice_X(data, 0, split_at), slice_X(data, split_at))\n        y_train, y_valid = (slice_X(mask, 0, split_at), slice_X(mask, split_at))\n        cls.save_valid_idx(range(len(data))[split_at:])\n        return (x_train, y_train), (x_valid, y_valid)\n        \n    def test(self, model, batch_size=256):\n        print('Loading and pre-processing test data...')\n        imgs_test = load_test_data()\n        imgs_test = preprocess(imgs_test)\n        imgs_test = self.standartize(imgs_test, to_float=True)\n    \n        print('Loading best saved weights...')\n        model.load_weights(self.best_weight_path)\n        print('Predicting masks on test data and saving...')\n        imgs_mask_test = model.predict(imgs_test, batch_size=batch_size, verbose=1)\n        \n        np.save(self.test_mask_res, imgs_mask_test[0])\n        np.save(self.test_mask_exist_res, imgs_mask_test[1])\n        \n    def __pretrain_model_load(self, model, pretrained_path):\n        if pretrained_path is not None:\n            if not os.path.exists(pretrained_path):\n                raise ValueError, 'No such pre-trained path exists'\n            model.load_weights(pretrained_path)\n            \n            \n    def augmentation(self, X, Y):\n        print('Augmentation model...')\n        total = len(X)\n        x_train, y_train = [], []\n        \n        for i in xrange(total):\n            if i % 100 == 0:\n                print ('Aug', i)\n            x, y = X[i], Y[i]\n            #standart\n            x_train.append(x)\n            y_train.append(y)\n        \n            for _ in xrange(2):\n                _x, _y = elastic_transform(x[0], y[0], 100, 20)\n                x_train.append(_x.reshape((1,) + _x.shape))\n                y_train.append(_y.reshape((1,) + _y.shape))\n            \n            #flip x\n            x_train.append(flip_axis(x, 2))\n            y_train.append(flip_axis(y, 2))\n            #flip y\n            
x_train.append(flip_axis(x, 1))\n            y_train.append(flip_axis(y, 1))\n            continue\n            #zoom\n            for _ in xrange(1):\n                _x, _y = random_zoom(x, y, (0.9, 1.1))\n                x_train.append(_x)\n                y_train.append(_y)\n            #intentsity\n            for _ in xrange(1):\n                _x = random_channel_shift(x, 5.0)\n                x_train.append(_x)\n                y_train.append(y)\n                \n#        for j in xrange(5):\n#            xs, ys = load_aug(j)\n#            ys = self.norm_mask(ys)\n#            (xn, yn), _ = self.split_train_and_valid_by_patient(xs, ys, validation_split=self.validation_split, shuffle=False)\n#            for i in xrange(len(xn)):\n#                x_train.append(xn[i])\n#                y_train.append(yn[i])\n    \n        x_train = np.array(x_train)\n        y_train = np.array(y_train)\n        return x_train, y_train\n        \n    def fit(self, x_train, y_train, x_valid, y_valid, pretrained_path):\n        print('Creating and compiling and fitting model...')\n        print('Shape:', x_train.shape)\n        #second output\n        y_train_2 = self.get_object_existance(y_train)\n        y_valid_2 = self.get_object_existance(y_valid)\n\n        #load model\n        optimizer = Adam(lr=0.0045)\n        #model = get_unet(optimizer)\n        model = self.model_func(optimizer)\n\n        #checkpoints\n        model_checkpoint = ModelCheckpoint(self.__iter_res_file, monitor='val_loss')\n        model_save_best = ModelCheckpoint(self.best_weight_path, monitor='val_loss', save_best_only=True)\n        early_s = EarlyStopping(monitor='val_loss', patience=10, verbose=1)\n        #tb = TensorBoard(self.tensorboard_dir, histogram_freq=2, write_graph=True)\n        #learning_rate_adapt = AdvancedLearnignRateScheduler(monitor='val_loss', patience=1, verbose=1, mode='min', decayRatio=0.5)\n        learning_rate_adapt = LearningRateDecay(0.95, every_n=4, verbose=1)\n     
   self.__pretrain_model_load(model, pretrained_path)\n        #augment on the fly\n        datagen = CustomImageDataGenerator(zoom_range=(0.9, 1.1),\n                                           horizontal_flip=True,\n                                           vertical_flip=False,\n#                                           rotation_range=5,\n                                           channel_shift_range=5.0,\n                                           elastic=None #(100, 20)\n                                           )\n        #fit\n        model.fit_generator(datagen.flow(x_train, [y_train, y_train_2], batch_size=64),\n                            samples_per_epoch=len(x_train),\n                            nb_epoch=250,\n                            verbose=1,\n                            callbacks=[model_save_best, model_checkpoint, early_s, learning_rate_adapt],\n                            validation_data=(x_valid, [y_valid, y_valid_2])\n                            )\n        return model\n\n    def train_and_predict(self, pretrained_path=None, split_random=True):\n        self._dir_init()\n        print('Loading, preprocessing and standardizing train data...')\n        imgs_train, imgs_mask_train = load_train_data()\n\n        imgs_train = preprocess(imgs_train)\n        imgs_mask_train = preprocess(imgs_mask_train)\n        imgs_mask_train = self.norm_mask(imgs_mask_train)\n        #shuffle and split\n        split_func = self.split_train_and_valid if split_random else self.split_train_and_valid_by_patient\n        (x_train, y_train), (x_valid, y_valid) = split_func(imgs_train, imgs_mask_train,\n                                                            validation_split=self.validation_split)\n        self._init_mean_std(x_train)\n        x_train = self.standartize(x_train, True)\n        x_valid = self.standartize(x_valid, True)\n        #fit\n        model = self.fit(x_train, y_train, x_valid, y_valid, pretrained_path)\n        #test\n        self.test(model)\n\n\ndef main():\n    parser = OptionParser()\n    parser.add_option(\"-m\", \"--model_name\", action='store', type='str', dest='model_name', default='u_model')\n    parser.add_option(\"-s\", \"--split_random\", action='store', type='int', dest='split_random', default=1)\n    #\n    options, _ = parser.parse_args()\n    model_name = options.model_name\n    split_random = options.split_random\n    if model_name is None:\n        raise ValueError('model_name is not defined')\n    #\n    import imp\n    model_ = imp.load_source('model_', model_name + '.py')\n    model_func = model_.get_unet\n    #\n    lr = Learner(model_func, validation_split=0.2)\n    lr.train_and_predict(pretrained_path=None, split_random=split_random)\n    print('Results in', lr.res_dir)\n\nif __name__ == '__main__':\n    sys.exit(main())\n"
  },
  {
    "path": "train_kfold.py",
    "content": "from optparse import OptionParser\nimport cv2, sys, os, shutil, random\nimport numpy as np\nfrom keras.optimizers import Adam, SGD, RMSprop\nfrom keras.callbacks import ModelCheckpoint, EarlyStopping\nfrom keras.preprocessing.image import flip_axis, random_channel_shift\nfrom keras.engine.training import slice_X\nfrom keras_plus import LearningRateDecay\nfrom u_model import get_unet, IMG_COLS as img_cols, IMG_ROWS as img_rows\nfrom data import load_train_data, load_test_data, load_patient_num\nfrom augmentation import CustomImageDataGenerator\nfrom augmentation import random_zoom, elastic_transform, random_rotation\nfrom utils import save_pickle, load_pickle, count_enum\nfrom sklearn.cross_validation import KFold\n\n_dir = os.path.join(os.path.realpath(os.path.dirname(__file__)), '')\n\n\n\ndef preprocess(imgs, to_rows=None, to_cols=None):\n    if to_rows is None or to_cols is None:\n        to_rows = img_rows\n        to_cols = img_cols\n    imgs_p = np.ndarray((imgs.shape[0], imgs.shape[1], to_rows, to_cols), dtype=np.uint8)\n    for i in xrange(imgs.shape[0]):\n        imgs_p[i, 0] = cv2.resize(imgs[i, 0], (to_cols, to_rows), interpolation=cv2.INTER_CUBIC)\n    return imgs_p\n\nclass Learner(object):\n    \n    suffix = ''\n    res_dir = os.path.join(_dir, 'res' + suffix)\n    best_weight_path = os.path.join(res_dir, 'unet.hdf5')\n    test_mask_res = os.path.join(res_dir, 'imgs_mask_test.npy')\n    test_mask_exist_res = os.path.join(res_dir, 'imgs_mask_exist_test.npy')\n    meanstd_path = os.path.join(res_dir, 'meanstd.dump')\n    valid_data_path = os.path.join(res_dir, 'valid.npy')\n    tensorboard_dir = os.path.join(res_dir, 'tb')\n    \n    def __init__(self, model_func, validation_split):\n        self.model_func = model_func\n        self.validation_split = validation_split\n        self.__iter_res_dir = os.path.join(self.res_dir, 'res_iter')\n        self.__iter_res_file = os.path.join(self.__iter_res_dir, 
'{epoch:02d}-{val_loss:.4f}.unet.hdf5')\n        \n    def _dir_init(self):\n        if not os.path.exists(self.res_dir):\n            os.mkdir(self.res_dir)\n        #iter clean\n        if os.path.exists(self.__iter_res_dir):\n            shutil.rmtree(self.__iter_res_dir)\n        os.mkdir(self.__iter_res_dir)\n    \n    def save_meanstd(self):\n        data = [self.mean, self.std]\n        save_pickle(self.meanstd_path, data)\n        \n    @classmethod\n    def load_meanstd(cls):\n        print ('Load meanstd from %s' % cls.meanstd_path)\n        mean, std = load_pickle(cls.meanstd_path)\n        return mean, std\n    \n    @classmethod\n    def save_valid_idx(cls, idx):\n        save_pickle(cls.valid_data_path, idx)\n        \n    @classmethod\n    def load_valid_idx(cls):\n        return load_pickle(cls.valid_data_path)\n    \n    def _init_mean_std(self, data):\n        data = data.astype('float32')\n        self.mean, self.std = np.mean(data), np.std(data)\n        self.save_meanstd()\n        return data\n    \n    def get_object_existance(self, mask_array):\n        return np.array([int(np.sum(mask_array[i, 0]) > 0) for i in xrange(len(mask_array))])\n\n    def standartize(self, array, to_float=False):\n        if to_float:\n            array = array.astype('float32')\n        if self.mean is None or self.std is None:\n            raise ValueError, 'No mean/std is initialised'\n        \n        array -= self.mean\n        array /= self.std\n        return array\n\n    @classmethod\n    def norm_mask(cls, mask_array):\n        mask_array = mask_array.astype('float32')\n        mask_array /= 255.0\n        return mask_array\n\n    @classmethod\n    def shuffle_train(cls, data, mask):\n        perm = np.random.permutation(len(data))\n        data = data[perm]\n        mask = mask[perm]\n        return data, mask\n        \n    def __pretrain_model_load(self, model, pretrained_path):\n        if pretrained_path is not None:\n            if not 
os.path.exists(pretrained_path):\n                raise ValueError, 'No such pre-trained path exists'\n            model.load_weights(pretrained_path)\n            \n            \n    def augmentation(self, X, Y):\n        print('Augmentation model...')\n        total = len(X)\n        x_train, y_train = [], []\n        \n        for i in xrange(total):\n            if i % 100 == 0:\n                print ('Aug', i)\n            x, y = X[i], Y[i]\n            #standart\n            x_train.append(x)\n            y_train.append(y)\n        \n#            for _ in xrange(1):\n#                _x, _y = elastic_transform(x[0], y[0], 100, 20)\n#                x_train.append(_x.reshape((1,) + _x.shape))\n#                y_train.append(_y.reshape((1,) + _y.shape))\n            \n            #flip x\n            x_train.append(flip_axis(x, 2))\n            y_train.append(flip_axis(y, 2))\n            #flip y\n            x_train.append(flip_axis(x, 1))\n            y_train.append(flip_axis(y, 1))\n            #continue\n            #zoom\n            for _ in xrange(1):\n                _x, _y = random_zoom(x, y, (0.9, 1.1))\n                x_train.append(_x)\n                y_train.append(_y)\n            for _ in xrange(0):\n                _x, _y = random_rotation(x, y, 5)\n                x_train.append(_x)\n                y_train.append(_y)\n            #intentsity\n            for _ in xrange(1):\n                _x = random_channel_shift(x, 5.0)\n                x_train.append(_x)\n                y_train.append(y)\n    \n        x_train = np.array(x_train)\n        y_train = np.array(y_train)\n        return x_train, y_train\n        \n    def fit(self, x_train, y_train, nfolds=8):\n        print('Creating and compiling and fitting model...')\n        print('Shape:', x_train.shape)\n        random_state = 51\n        kf = KFold(len(x_train), n_folds=nfolds, shuffle=True, random_state=random_state)\n        for i, (train_index, test_index) in enumerate(kf):\n   
         print('Fold %d' % i)\n            X_train, X_valid = x_train[train_index], x_train[test_index]\n            Y_train, Y_valid = y_train[train_index], y_train[test_index]\n            Y_valid_2 = self.get_object_existance(Y_valid)\n            X_train, Y_train = self.augmentation(X_train, Y_train)\n            Y_train_2 = self.get_object_existance(Y_train)\n            #\n            optimizer = Adam(lr=0.0045)\n            model = self.model_func(optimizer)\n            model_checkpoint = ModelCheckpoint(self.__iter_res_file + '_%d.fold' % i, monitor='val_loss')\n            model_save_best = ModelCheckpoint(self.best_weight_path + '_%d.fold' % i, monitor='val_loss',\n                                               save_best_only=True)\n            early_s = EarlyStopping(monitor='val_loss', patience=8, verbose=1)\n            #\n            model.fit(\n                       X_train, [Y_train, Y_train_2],\n                       validation_data=(X_valid, [Y_valid, Y_valid_2]),\n                       batch_size=128, nb_epoch=40,\n                       verbose=1, shuffle=True,\n                       callbacks=[model_save_best, model_checkpoint, early_s]\n                       )\n\n        #the model from the last fold is returned\n        return model\n\n    def train_and_predict(self, pretrained_path=None):\n        self._dir_init()\n        print('Loading, preprocessing and standardizing train data...')\n        imgs_train, imgs_mask_train = load_train_data()\n        imgs_train = preprocess(imgs_train)\n        imgs_mask_train = preprocess(imgs_mask_train)\n        imgs_mask_train = self.norm_mask(imgs_mask_train)\n\n        self._init_mean_std(imgs_train)\n        imgs_train = self.standartize(imgs_train, True)\n        self.fit(imgs_train, imgs_mask_train)\n\n\ndef main():\n    parser = OptionParser()\n    parser.add_option(\"-s\", \"--suffix\", action='store', type='str', dest='suffix', default=None)\n    parser.add_option(\"-m\", \"--model_name\", action='store', type='str', dest='model_name', default='u_model')\n    #\n    options, _ = parser.parse_args()\n    suffix = options.suffix\n    model_name = options.model_name\n    if model_name is None:\n        raise ValueError('model_name is not defined')\n#    if suffix is None:\n#        raise ValueError('Please specify suffix option')\n#    print ('Suffix: \"%s\"' % suffix)\n    #\n    import imp\n    model_ = imp.load_source('model_', model_name + '.py')\n    model_func = model_.get_unet\n    #\n    lr = Learner(model_func, validation_split=0.2)\n    lr.train_and_predict(pretrained_path=None)\n    print('Results in ' + lr.res_dir)\n\nif __name__ == '__main__':\n    sys.exit(main())\n"
  },
  {
    "path": "u_model.py",
    "content": "import sys\nfrom keras.models import Model\nfrom keras.layers import Input, merge, Convolution2D, MaxPooling2D, UpSampling2D, Dense\nfrom keras.layers import BatchNormalization, Dropout, Flatten, Lambda\nfrom keras.layers.advanced_activations import ELU, LeakyReLU\nfrom metric import dice_coef, dice_coef_loss\n\nIMG_ROWS, IMG_COLS = 80, 112 \n\ndef _shortcut(_input, residual):\n    stride_width = _input._keras_shape[2] / residual._keras_shape[2]\n    stride_height = _input._keras_shape[3] / residual._keras_shape[3]\n    equal_channels = residual._keras_shape[1] == _input._keras_shape[1]\n\n    shortcut = _input\n    # 1 X 1 conv if shape is different. Else identity.\n    if stride_width > 1 or stride_height > 1 or not equal_channels:\n        shortcut = Convolution2D(nb_filter=residual._keras_shape[1], nb_row=1, nb_col=1,\n                                 subsample=(stride_width, stride_height),\n                                 init=\"he_normal\", border_mode=\"valid\")(_input)\n\n    return merge([shortcut, residual], mode=\"sum\")\n\n\ndef inception_block(inputs, depth, batch_mode=0, splitted=False, activation='relu'):\n    assert depth % 16 == 0\n    actv = activation == 'relu' and (lambda: LeakyReLU(0.0)) or activation == 'elu' and (lambda: ELU(1.0)) or None\n    \n    c1_1 = Convolution2D(depth/4, 1, 1, init='he_normal', border_mode='same')(inputs)\n    \n    c2_1 = Convolution2D(depth/8*3, 1, 1, init='he_normal', border_mode='same')(inputs)\n    c2_1 = actv()(c2_1)\n    if splitted:\n        c2_2 = Convolution2D(depth/2, 1, 3, init='he_normal', border_mode='same')(c2_1)\n        c2_2 = BatchNormalization(mode=batch_mode, axis=1)(c2_2)\n        c2_2 = actv()(c2_2)\n        c2_3 = Convolution2D(depth/2, 3, 1, init='he_normal', border_mode='same')(c2_2)\n    else:\n        c2_3 = Convolution2D(depth/2, 3, 3, init='he_normal', border_mode='same')(c2_1)\n    \n    c3_1 = Convolution2D(depth/16, 1, 1, init='he_normal', border_mode='same')(inputs)\n 
   #missed batch norm\n    c3_1 = actv()(c3_1)\n    if splitted:\n        c3_2 = Convolution2D(depth/8, 1, 5, init='he_normal', border_mode='same')(c3_1)\n        c3_2 = BatchNormalization(mode=batch_mode, axis=1)(c3_2)\n        c3_2 = actv()(c3_2)\n        c3_3 = Convolution2D(depth/8, 5, 1, init='he_normal', border_mode='same')(c3_2)\n    else:\n        c3_3 = Convolution2D(depth/8, 5, 5, init='he_normal', border_mode='same')(c3_1)\n    \n    p4_1 = MaxPooling2D(pool_size=(3,3), strides=(1,1), border_mode='same')(inputs)\n    c4_2 = Convolution2D(depth/8, 1, 1, init='he_normal', border_mode='same')(p4_1)\n    \n    res = merge([c1_1, c2_3, c3_3, c4_2], mode='concat', concat_axis=1)\n    res = BatchNormalization(mode=batch_mode, axis=1)(res)\n    res = actv()(res)\n    return res\n    \n\ndef rblock(inputs, num, depth, scale=0.1):    \n    residual = Convolution2D(depth, num, num, border_mode='same')(inputs)\n    residual = BatchNormalization(mode=2, axis=1)(residual)\n    residual = Lambda(lambda x: x*scale)(residual)\n    res = _shortcut(inputs, residual)\n    return ELU()(res) \n    \n\ndef NConvolution2D(nb_filter, nb_row, nb_col, border_mode='same', subsample=(1, 1)):\n    def f(_input):\n        conv = Convolution2D(nb_filter=nb_filter, nb_row=nb_row, nb_col=nb_col, subsample=subsample,\n                              border_mode=border_mode)(_input)\n        norm = BatchNormalization(mode=2, axis=1)(conv)\n        return ELU()(norm)\n\n    return f\n\ndef BNA(_input):\n    inputs_norm = BatchNormalization(mode=2, axis=1)(_input)\n    return ELU()(inputs_norm)\n\ndef reduction_a(inputs, k=64, l=64, m=96, n=96):\n    \"35x35 -> 17x17\"\n    inputs_norm = BNA(inputs)\n    pool1 = MaxPooling2D((3,3), strides=(2,2), border_mode='same')(inputs_norm)\n    \n    conv2 = Convolution2D(n, 3, 3, subsample=(2,2), border_mode='same')(inputs_norm)\n    \n    conv3_1 = NConvolution2D(k, 1, 1, subsample=(1,1), border_mode='same')(inputs_norm)\n    conv3_2 = 
NConvolution2D(l, 3, 3, subsample=(1,1), border_mode='same')(conv3_1)\n    conv3_2 = Convolution2D(m, 3, 3, subsample=(2,2), border_mode='same')(conv3_2)\n    \n    res = merge([pool1, conv2, conv3_2], mode='concat', concat_axis=1)\n    return res\n\n\ndef reduction_b(inputs):\n    \"17x17 -> 8x8\"\n    inputs_norm = BNA(inputs)\n    pool1 = MaxPooling2D((3,3), strides=(2,2), border_mode='same')(inputs_norm)\n    #\n    conv2_1 = NConvolution2D(64, 1, 1, subsample=(1,1), border_mode='same')(inputs_norm)\n    conv2_2 = Convolution2D(96, 3, 3, subsample=(2,2), border_mode='same')(conv2_1)\n    #\n    conv3_1 = NConvolution2D(64, 1, 1, subsample=(1,1), border_mode='same')(inputs_norm)\n    conv3_2 = Convolution2D(72, 3, 3, subsample=(2,2), border_mode='same')(conv3_1)\n    #\n    conv4_1 = NConvolution2D(64, 1, 1, subsample=(1,1), border_mode='same')(inputs_norm)\n    conv4_2 = NConvolution2D(72, 3, 3, subsample=(1,1), border_mode='same')(conv4_1)\n    conv4_3 = Convolution2D(80, 3, 3, subsample=(2,2), border_mode='same')(conv4_2)\n    #\n    res = merge([pool1, conv2_2, conv3_2, conv4_3], mode='concat', concat_axis=1)\n    return res\n    \n    \n\n\ndef get_unet_inception_2head(optimizer):\n    splitted = True\n    act = 'elu'\n    \n    inputs = Input((1, IMG_ROWS, IMG_COLS), name='main_input')\n    conv1 = inception_block(inputs, 32, batch_mode=2, splitted=splitted, activation=act)\n    #conv1 = inception_block(conv1, 32, batch_mode=2, splitted=splitted, activation=act)\n    \n    #pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)\n    pool1 = NConvolution2D(32, 3, 3, border_mode='same', subsample=(2,2))(conv1)\n    pool1 = Dropout(0.5)(pool1)\n    \n    conv2 = inception_block(pool1, 64, batch_mode=2, splitted=splitted, activation=act)\n    #pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)\n    pool2 = NConvolution2D(64, 3, 3, border_mode='same', subsample=(2,2))(conv2)\n    pool2 = Dropout(0.5)(pool2)\n    \n    conv3 = inception_block(pool2, 128, batch_mode=2, 
splitted=splitted, activation=act)\n    #pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)\n    pool3 = NConvolution2D(128, 3, 3, border_mode='same', subsample=(2,2))(conv3)\n    pool3 = Dropout(0.5)(pool3)\n     \n    conv4 = inception_block(pool3, 256, batch_mode=2, splitted=splitted, activation=act)\n    #pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)\n    pool4 = NConvolution2D(256, 3, 3, border_mode='same', subsample=(2,2))(conv4)\n    pool4 = Dropout(0.5)(pool4)\n    \n    conv5 = inception_block(pool4, 512, batch_mode=2, splitted=splitted, activation=act)\n    #conv5 = inception_block(conv5, 512, batch_mode=2, splitted=splitted, activation=act)\n    conv5 = Dropout(0.5)(conv5)\n    \n    #\n    pre = Convolution2D(1, 1, 1, init='he_normal', activation='sigmoid')(conv5)\n    pre = Flatten()(pre)\n    aux_out = Dense(1, activation='sigmoid', name='aux_output')(pre) \n    #\n    \n    after_conv4 = rblock(conv4, 1, 256)\n    up6 = merge([UpSampling2D(size=(2, 2))(conv5), after_conv4], mode='concat', concat_axis=1)\n    conv6 = inception_block(up6, 256, batch_mode=2, splitted=splitted, activation=act)\n    conv6 = Dropout(0.5)(conv6)\n    \n    after_conv3 = rblock(conv3, 1, 128)\n    up7 = merge([UpSampling2D(size=(2, 2))(conv6), after_conv3], mode='concat', concat_axis=1)\n    conv7 = inception_block(up7, 128, batch_mode=2, splitted=splitted, activation=act)\n    conv7 = Dropout(0.5)(conv7)\n    \n    after_conv2 = rblock(conv2, 1, 64)\n    up8 = merge([UpSampling2D(size=(2, 2))(conv7), after_conv2], mode='concat', concat_axis=1)\n    conv8 = inception_block(up8, 64, batch_mode=2, splitted=splitted, activation=act)\n    conv8 = Dropout(0.5)(conv8)\n    \n    after_conv1 = rblock(conv1, 1, 32)\n    up9 = merge([UpSampling2D(size=(2, 2))(conv8), after_conv1], mode='concat', concat_axis=1)\n    conv9 = inception_block(up9, 32, batch_mode=2, splitted=splitted, activation=act)\n    #conv9 = inception_block(conv9, 32, batch_mode=2, splitted=splitted, activation=act)\n    
conv9 = Dropout(0.5)(conv9)\n\n    conv10 = Convolution2D(1, 1, 1, init='he_normal', activation='sigmoid', name='main_output')(conv9)\n\n    model = Model(input=inputs, output=[conv10, aux_out])\n    model.compile(optimizer=optimizer,\n                  loss={'main_output': dice_coef_loss, 'aux_output': 'binary_crossentropy'},\n                  metrics={'main_output': dice_coef, 'aux_output': 'acc'},\n                  loss_weights={'main_output': 1., 'aux_output': 0.5})\n\n    return model\n\n\nget_unet = get_unet_inception_2head\n\ndef main():\n    from keras.optimizers import Adam\n    import numpy as np\n\n    # Smoke test: build the two-head model and run a single forward pass.\n    # get_unet() already compiles the model with the optimizer passed in,\n    # so no recompile is needed just to call predict().\n    model = get_unet(Adam(lr=1e-5))\n\n    x = np.random.random((1, 1, IMG_ROWS, IMG_COLS))\n    res = model.predict(x, 1)\n    print res\n    print 'params', model.count_params()\n    print 'layer num', len(model.layers)\n\n\nif __name__ == '__main__':\n    sys.exit(main())\n"
  },
  {
    "path": "utils.py",
    "content": "import cPickle as pickle\n\ndef load_pickle(file_path):\n    \"\"\"Load and return a pickled object from file_path.\"\"\"\n    with open(file_path, \"rb\") as dump_file:\n        return pickle.load(dump_file)\n\ndef save_pickle(file_path, data):\n    \"\"\"Pickle data to file_path using the highest available protocol.\"\"\"\n    with open(file_path, \"wb\") as dump_file:\n        pickle.dump(data, dump_file, pickle.HIGHEST_PROTOCOL)\n\ndef count_enum(words):\n    \"\"\"Count occurrences of each item in words.\n\n    E.g. count_enum('aab') -> {'a': 2, 'b': 1}.\n    \"\"\"\n    wdict = {}\n    for word in words:\n        wdict[word] = wdict.get(word, 0) + 1\n    return wdict\n"
  }
]