[
  {
    "path": ".gitignore",
    "content": ".idea\n*.pyc\n*.pkl\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 VictorLi\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# AiFashion\n\n- Author: VictorLi, yuanyuan.li85@gmail.com\n- Code for the FashionAI Global Challenge—Key Points Detection of Apparel\n[2018 TianChi](https://tianchi.aliyun.com/competition/introduction.htm?spm=5176.100068.5678.1.4ccc289bCzDJXu&raceId=231648&_lang=en_US)\n- Rank 45/2322 in the 1st round of the competition, score 0.61\n- Rank 46 in the 2nd round, score 0.477\n\n## Images with detected keypoints\n### Dress\n![Dress](./images/dress.jpg)\n### Blouse\n![Blouse](./images/blouse.jpg)\n### Outwear\n![Outwear](./images/outwear.jpg)\n### Skirt\n![Skirt](./images/skirt.jpg)\n### Trousers\n![Trousers](./images/trousers.jpg)\n\n\n## Basic idea\n- The key idea comes from the paper [Cascaded Pyramid Network for Multi-Person Pose Estimation](https://arxiv.org/abs/1711.07319). We use a two-stage network consisting of a global net and a refine net, both U-Net-like. The network is trained to predict heatmaps of clothing keypoints. The backbone network used here is resnet101.  \n- To overcome the negative impact of mixing different categories, an `input_mask` is introduced to zero out the invalid keypoints. For example, skirt has 4 valid keypoints: `waistband_left`, `waistband_right`, `hemline_left` and `hemline_right`. In `input_mask`, only those valid channels are 1.0, while the other 20 channels are set to zero.\n- Online hard example mining: at the last stage of the refine net, only the top losses are taken into account, and the easy parts (small losses) are ignored.\n\n## Dependency\n- Keras 2.0\n- Tensorflow\n- Opencv/Numpy/Pandas\n- Pretrained model weights, resnet101\n\n## Folder Structure\n- `data`: folder to store training and testing images and annotations\n- `trained_models`: folder to store trained models and logs\n- `submission`: folder to store generated submissions for evaluation\n- `src`: folder for all source code  \n`src/data_gen`: code for the data generator, including data augmentation and pre-processing  \n`src/eval`: code for evaluation, including inference and post-processing.  
\n`src/unet`: code for CNN model definition, including training, fine-tuning, loss and optimizer definitions  \n`src/top`: top-level code for train, test and demo  \n\n## How to train the network  \n- Download the dataset from the competition webpage and put it under `data`.  \n  `data/train`: data used for training. `data/test`: data used for testing  \n- Download the [resnet101](https://gist.github.com/flyyufelix/65018873f8cb2bbe95f429c474aa1294) model and save it as `data/resnet101_weights_tf.h5`.   \nNote: all the models here use the channels_last dim order.\n- Train the all-in-one network from scratch  \n```\npython train.py --category all --epochs 30 --network v11 --batchSize 3 --gpuID 2\n```\n- The trained model and logs will be put under `trained_models/all/xxxx`, e.g. `trained_models/all/2018_05_23_15_18_07/`  \n- The evaluation will run after each epoch, with details saved to `val.log`\n- Resume training from a specific model.  \n```\npython train.py --gpuID 2 --category all --epochs 30 --network v11 --batchSize 3 --resume True --resumeModel /path/to/model/start/with --initEpoch 6\n```\n\n## How to test and generate a submission\n- Run the test and generate a submission.\nThe command below searches for the best-scoring model under `modelpath` and uses it to generate the submission  \n```\npython test.py --gpuID 2 --modelpath ../../trained_models/all/xxx --outpath ../../submission/2018_04_19/ --augment True\n```\nThe submission will be saved as `submission.csv`\n\n## How to run the demo\n- Download the pre-trained weights from [BaiduDisk](https://pan.baidu.com/s/1t7fB5wnRfW1Vny0gw7xUDQ) (password `1ae2`) or [GoogleDrive](https://drive.google.com/open?id=1VY-AO2F1XMQLBjEZjy6CrOSIPWWaHUGr)\n- Save them somewhere, e.g. `trained_models/all/fashion_ai_keypoint_weights_epoch28.hdf5`\n- Or use your own trained model.\n- Run the demo; the garment with its keypoints marked will be displayed.   
\n```\npython demo.py --gpuID 2 --modelfile ../../trained_models/all/fashion_ai_keypoint_weights_epoch28.hdf5\n```\n\n## Reference\n- ResNet-101 Keras: https://github.com/statech/resnet\n"
  },
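The `input_mask` idea described in the README's "Basic idea" section can be sketched as a toy example. The key list below is shortened for illustration (the real pipeline uses 24 keypoint channels, defined in `src/data_gen/dataset.py`), and `make_input_mask` is a hypothetical helper name, not the repo's actual `generate_input_mask`:

```python
import numpy as np

# Shortened key list for illustration; the real model has 24 channels.
ALL_KEYS = ['waistband_left', 'waistband_right', 'hemline_left',
            'hemline_right', 'crotch', 'neckline_left']
SKIRT_KEYS = ['waistband_left', 'waistband_right', 'hemline_left', 'hemline_right']

def make_input_mask(valid_keys, all_keys, height, width):
    # 1.0 over the whole channel for each keypoint valid in this
    # category, 0.0 for every other channel.
    mask = np.zeros((height, width, len(all_keys)))
    for key in valid_keys:
        mask[:, :, all_keys.index(key)] = 1.0
    return mask

# A skirt sample keeps only its 4 valid channels active.
mask = make_input_mask(SKIRT_KEYS, ALL_KEYS, 256, 256)
```

Multiplying the predicted heatmaps by such a mask zeroes the loss on channels that do not exist for the sample's category.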
  {
    "path": "data/placeholder.txt",
    "content": ""
  },
  {
    "path": "src/data_gen/data_generator.py",
    "content": "\nimport os\nimport cv2\nimport pandas as pd\nimport numpy as np\nimport random\n\nfrom kpAnno import KpAnno\nfrom dataset import getKpNum, getKpKeys, getFlipMapID,  generate_input_mask\nfrom utils import make_gaussian, load_annotation_from_df\nfrom data_process import pad_image, resize_image, normalize_image, rotate_image, \\\n    rotate_image_float, rotate_mask, crop_image\nfrom ohem import generate_topk_mask_ohem\n\nclass DataGenerator(object):\n\n    def __init__(self, category, annfile):\n        self.category = category\n        self.annfile  = annfile\n        self._initialize()\n\n    def get_dim_order(self):\n        # default tensorflow dim order\n        return \"channels_last\"\n\n    def get_dataset_size(self):\n        return len(self.annDataFrame)\n\n    def generator_with_mask_ohem(self, graph, kerasModel, batchSize=16, inputSize=(512, 512), flipFlag=False, cropFlag=False,\n                            shuffle=True, rotateFlag=True, nStackNum=1):\n\n        '''\n        Input:  batch_size * Height (512) * Width (512) * Channel (3)\n        Input:  batch_size * 256 * 256 * Channel (N+1). Mask for each category. 1.0 for valid parts in category. 
0.0 for invalid parts\n        Output: batch_size * Height/2 (256) * Width/2 (256) * Channel (N+1)\n        '''\n        xdf = self.annDataFrame\n\n        targetHeight, targetWidth = inputSize\n\n        # train_input: npfloat,  height, width, channels\n        # train_gthmap: npfloat, N heatmap + 1 background heatmap,\n        train_input = np.zeros((batchSize, targetHeight, targetWidth, 3), dtype=np.float)\n        train_mask = np.zeros((batchSize, targetHeight / 2, targetWidth / 2, getKpNum(self.category) ), dtype=np.float)\n        train_gthmap = np.zeros((batchSize, targetHeight / 2, targetWidth / 2, getKpNum(self.category) ), dtype=np.float)\n        train_ohem_mask = np.zeros((batchSize, targetHeight / 2, targetWidth / 2, getKpNum(self.category) ), dtype=np.float)\n        train_ohem_gthmap = np.zeros((batchSize, targetHeight / 2, targetWidth / 2, getKpNum(self.category) ), dtype=np.float)\n\n        ## generator need to be infinite loop\n        while 1:\n            # random shuffle at first\n            if shuffle:\n                xdf = xdf.sample(frac=1)\n            count = 0\n            for _index, _row in xdf.iterrows():\n                xindex = count % batchSize\n                xinput, xhmap = self._prcoess_img(_row, inputSize, rotateFlag, flipFlag, cropFlag, nobgFlag=True)\n                xmask = generate_input_mask(_row['image_category'],\n                                            (targetHeight, targetWidth, getKpNum(self.category)))\n\n                xohem_mask, xohem_gthmap = generate_topk_mask_ohem([xinput, xmask], xhmap, kerasModel, graph,\n                                            8, _row['image_category'], dynamicFlag=False)\n\n                train_input[xindex, :, :, :] = xinput\n                train_mask[xindex, :, :, :] = xmask\n                train_gthmap[xindex, :, :, :] = xhmap\n                train_ohem_mask[xindex, :, :, :] = xohem_mask\n                train_ohem_gthmap[xindex, :, :, :] = xohem_gthmap\n\n               
 # if refinenet enable, refinenet has two outputs, globalnet and refinenet\n                if xindex == 0 and count != 0:\n                    gthamplst = list()\n                    for i in range(nStackNum):\n                        gthamplst.append(train_gthmap)\n\n                    # last stack will use ohem gthmap\n                    gthamplst.append(train_ohem_gthmap)\n\n                    yield [train_input, train_mask, train_ohem_mask], gthamplst\n\n                count += 1\n\n    def _initialize(self):\n        self._load_anno()\n\n    def _load_anno(self):\n        '''\n        Load annotations from train.csv\n        '''\n        # Todo: check if category legal\n        self.train_img_path = \"../../data/train\"\n\n        # read into dataframe\n        xpd = pd.read_csv(self.annfile)\n        xpd = load_annotation_from_df(xpd, self.category)\n        self.annDataFrame = xpd\n\n    def _prcoess_img(self, dfrow, inputSize, rotateFlag, flipFlag, cropFlag, nobgFlag):\n\n        mlist = dfrow[getKpKeys(self.category)]\n        imgName, kpStr = mlist[0], mlist[1:]\n\n        # read kp annotation from csv file\n        kpAnnlst = list()\n        for _kpstr in kpStr:\n            _kpAn = KpAnno.readFromStr(_kpstr)\n            kpAnnlst.append(_kpAn)\n\n        assert (len(kpAnnlst) == getKpNum(self.category)), str(len(kpAnnlst))+\" is not the same as \"+str(getKpNum(self.category))\n\n\n        xcvmat = cv2.imread(os.path.join(self.train_img_path, imgName))\n        if xcvmat is None:\n            return None, None\n\n        #flip as first operation.\n        # flip image\n        if random.choice([0, 1]) and flipFlag:\n            xcvmat, kpAnnlst = self.flip_image(xcvmat, kpAnnlst)\n\n        #if cropFlag:\n        #    xcvmat, kpAnnlst = crop_image(xcvmat, kpAnnlst, 0.8, 0.95)\n\n        # pad image to 512x512\n        paddedImg, kpAnnlst = pad_image(xcvmat, kpAnnlst, inputSize[0], inputSize[1])\n\n        assert (len(kpAnnlst) == 
getKpNum(self.category)), str(len(kpAnnlst)) + \" is not the same as \" + str(\n            getKpNum(self.category))\n\n        # output ground truth heatmap is 256x256\n        trainGtHmap = self.__generate_hmap(paddedImg, kpAnnlst)\n\n        if random.choice([0,1]) and rotateFlag:\n            rAngle = np.random.randint(-1*40, 40)\n            rotatedImage,  _ = rotate_image(paddedImg, list(), rAngle)\n            rotatedGtHmap  = rotate_mask(trainGtHmap, rAngle)\n        else:\n            rotatedImage  = paddedImg\n            rotatedGtHmap = trainGtHmap\n\n        # resize image\n        resizedImg    = cv2.resize(rotatedImage, inputSize)\n        resizedGtHmap = cv2.resize(rotatedGtHmap, (inputSize[0]//2, inputSize[1]//2))\n\n        return normalize_image(resizedImg), resizedGtHmap\n\n\n    def __generate_hmap(self, cvmat, kpAnnolst):\n        # kpnum + background\n        gthmp = np.zeros((cvmat.shape[0], cvmat.shape[1], getKpNum(self.category)), dtype=np.float)\n\n        for i, _kpAnn in enumerate(kpAnnolst):\n            if _kpAnn.visibility == -1:\n                continue\n\n            radius = 100\n            gaussMask = make_gaussian(radius, radius, 20, None)\n\n            # avoid out of boundary\n            top_x, top_y = max(0, _kpAnn.x - radius/2), max(0, _kpAnn.y - radius/2)\n            bottom_x, bottom_y = min(cvmat.shape[1], _kpAnn.x + radius/2), min(cvmat.shape[0], _kpAnn.y + radius/2)\n\n            top_x_offset = top_x - (_kpAnn.x - radius/2)\n            top_y_offset = top_y - (_kpAnn.y - radius/2)\n\n            gthmp[ top_y:bottom_y, top_x:bottom_x, i] = gaussMask[top_y_offset:top_y_offset + bottom_y-top_y,\n                                                                  top_x_offset:top_x_offset + bottom_x-top_x]\n\n        return gthmp\n\n    def flip_image(self, orgimg, orgKpAnolst):\n        flipImg = cv2.flip(orgimg, flipCode=1)\n        flipannlst = self.flip_annlst(orgKpAnolst, orgimg.shape)\n        return flipImg, 
flipannlst\n\n\n    def flip_annlst(self, kpannlst, imgshape):\n        height, width, channels = imgshape\n\n        # flip first\n        flipAnnlst = list()\n        for _kp in kpannlst:\n            flip_x = width - _kp.x\n            flipAnnlst.append(KpAnno(flip_x, _kp.y, _kp.visibility))\n\n        # exchange location of flip keypoints, left->right\n        outAnnlst = flipAnnlst[:]\n        for i, _kp in enumerate(flipAnnlst):\n            mapId = getFlipMapID('all', i)\n            outAnnlst[mapId] = _kp\n\n        return outAnnlst\n\n\n\n\n"
  },
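`__generate_hmap` above pastes a Gaussian patch into each keypoint's channel, clipping the patch where it would fall outside the image. A standalone numpy sketch of that paste-with-clipping logic (the names `paste_gaussian` and the parameter defaults mirror the repo's `radius = 100`, `sigma = 20`, but the helper itself is illustrative):

```python
import numpy as np

def make_gaussian(width, height, sigma):
    # 2D Gaussian peaking at the patch center (same formula as
    # src/data_gen/utils.py make_gaussian).
    x = np.arange(0, width, 1, float)
    y = np.arange(0, height, 1, float)[:, np.newaxis]
    x0, y0 = width // 2, height // 2
    return np.exp(-4 * np.log(2) * ((x - x0) ** 2 + (y - y0) ** 2) / sigma ** 2)

def paste_gaussian(hmap, cx, cy, radius=100, sigma=20):
    # Paste a radius x radius Gaussian centered at (cx, cy) into hmap,
    # clipping the patch at the map borders.
    patch = make_gaussian(radius, radius, sigma)
    h, w = hmap.shape
    top_x, top_y = max(0, cx - radius // 2), max(0, cy - radius // 2)
    bot_x, bot_y = min(w, cx + radius // 2), min(h, cy + radius // 2)
    off_x = top_x - (cx - radius // 2)
    off_y = top_y - (cy - radius // 2)
    hmap[top_y:bot_y, top_x:bot_x] = patch[off_y:off_y + bot_y - top_y,
                                           off_x:off_x + bot_x - top_x]
    return hmap

# Keypoint near the border: the patch is clipped, the peak survives.
hmap = paste_gaussian(np.zeros((256, 256)), 30, 40)
```

The offset bookkeeping ensures the peak of the Gaussian still lands exactly on the keypoint location even when part of the patch is cut off.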
  {
    "path": "src/data_gen/data_process.py",
    "content": "import pandas as pd\nimport numpy as np\nimport cv2\nimport os\nfrom kpAnno import KpAnno\n\ndef normalize_image(cvmat):\n    assert (cvmat.dtype == np.uint8) , \" only support normalize np.uint8 to float -0.5 ~ 0.5'\"\n    cvmat = cvmat.astype(np.float)\n    cvmat = (cvmat - 128.0) / 256.0\n    return cvmat\n\ndef resize_image(cvmat, targetWidth, targetHeight):\n\n    assert (cvmat.dtype == np.uint8) , \" only support normalize np.uint8  in  resize_image'\"\n\n    # get scale\n    srcHeight, srcWidth, channles = cvmat.shape\n    minScale = min( targetHeight*1.0/srcHeight,  targetWidth*1.0/srcWidth)\n\n    # resize\n    resizedMat = cv2.resize(cvmat, None, fx=minScale, fy=minScale)\n    reHeight, reWidth, channles = resizedMat.shape\n\n    # pad to targetWidth or targetHeight\n    outmat = np.zeros((targetHeight, targetWidth, 3), dtype=cvmat.dtype) + 128\n\n    if targetHeight == reHeight and targetWidth == reWidth:\n        outmat = resizedMat\n    elif targetWidth != reWidth and targetHeight == reHeight:\n        # add pad to width\n        outmat[:, 0:reWidth, :] = resizedMat\n    elif targetHeight != reHeight and targetWidth == reWidth:\n        # add padding to height\n        outmat[0:reHeight, :, :] = resizedMat\n    else:\n        assert(0), \"after resize either width or height same as target width or target height\"\n    return (outmat, minScale)\n\ndef pad_image(cvmat, kpAnno, targetWidth, targetHeight):\n    '''\n\n    :param cvmat: input mat\n    :param targetWidth:  width to pad\n    :param targetHeight: height to pad\n    :return:\n    '''\n    assert (cvmat.dtype == np.uint8) , \" only support normalize np.uint8  in pad_image'\" + str(cvmat.dtype)\n\n    srcHeight, srcWidth, channles = cvmat.shape\n    outmat = np.zeros((targetHeight, targetWidth, 3), dtype=cvmat.dtype) + 128\n\n    if targetHeight == srcHeight and targetWidth == srcWidth:\n        outmat =  cvmat\n        outkpAnno = kpAnno\n    elif targetWidth != srcWidth and 
targetHeight == srcHeight:\n        # add pad to width\n        outmat[:, 0:srcWidth, :] = cvmat\n        outkpAnno = kpAnno\n    elif targetHeight != srcHeight and targetWidth == srcWidth:\n        # add padding to height\n        outmat[0:srcHeight, :, :] = cvmat\n        outkpAnno = kpAnno\n    else:\n        # resize at first, then pad\n        outmat, scale = resize_image(cvmat, targetWidth, targetHeight)\n        outkpAnno = list()\n        for _kpAnno in kpAnno:\n            _nkp = KpAnno.applyScale(_kpAnno, scale)\n            outkpAnno.append(_nkp)\n    return (outmat, outkpAnno)\n\n\ndef pad_image_inference(cvmat, targetWidth, targetHeight):\n    '''\n\n    :param cvmat: input mat\n    :param targetWidth:  width to pad\n    :param targetHeight: height to pad\n    :return:\n    '''\n    assert (cvmat.dtype == np.uint8), \" only support normalize np.uint8  in pad_image'\" + str(cvmat.dtype)\n\n    srcHeight, srcWidth, channles = cvmat.shape\n    outmat = np.zeros((targetHeight, targetWidth, 3), dtype=cvmat.dtype) + 128\n\n    if targetHeight == srcHeight and targetWidth == srcWidth:\n        outmat = cvmat\n        scale = 1.0\n    elif targetWidth > srcWidth and targetHeight == srcHeight:\n        # add pad to width\n        outmat[:, 0:srcWidth, :] = cvmat\n        scale = 1.0\n    elif targetHeight > srcHeight and targetWidth == srcWidth:\n        # add padding to height\n        outmat[0:srcHeight, :, :] = cvmat\n        scale = 1.0\n    else:\n        # resize at first, then pad\n        outmat, scale = resize_image(cvmat, targetWidth, targetHeight)\n\n    return (outmat, scale)\n\ndef rotate_image(cvmat, kpAnnLst, rotateAngle):\n\n    assert (cvmat.dtype == np.uint8) , \" only support normalize np.uint8  in rotate_image'\"\n\n    ##Make sure cvmat is square?\n    height, width, channel = cvmat.shape\n\n    center = ( width//2, height//2)\n    rotateMatrix = cv2.getRotationMatrix2D(center, rotateAngle, 1.0)\n\n    cos, sin = np.abs(rotateMatrix[0,0]), 
np.abs(rotateMatrix[0, 1])\n    newH = int((height*sin)+(width*cos))\n    newW = int((height*cos)+(width*sin))\n\n    rotateMatrix[0,2] += (newW/2) - center[0] #x\n    rotateMatrix[1,2] += (newH/2) - center[1] #y\n\n    # rotate image\n    outMat = cv2.warpAffine(cvmat, rotateMatrix, (newH, newW), borderValue=(128, 128, 128))\n\n    # rotate annotations\n    nKpLst = list()\n    for _kp in kpAnnLst:\n        _newkp = KpAnno.applyRotate(_kp, rotateMatrix)\n        nKpLst.append(_newkp)\n\n    return (outMat, nKpLst)\n\n\ndef rotate_image_with_invrmat(cvmat, rotateAngle):\n\n    assert (cvmat.dtype == np.uint8) , \" only support normalize np.uint  in rotate_image_with_invrmat'\"\n\n    ##Make sure cvmat is square?\n    height, width, channel = cvmat.shape\n\n    center = ( width//2, height//2)\n    rotateMatrix = cv2.getRotationMatrix2D(center, rotateAngle, 1.0)\n\n    cos, sin = np.abs(rotateMatrix[0,0]), np.abs(rotateMatrix[0, 1])\n    newH = int((height*sin)+(width*cos))\n    newW = int((height*cos)+(width*sin))\n\n    rotateMatrix[0,2] += (newW/2) - center[0] #x\n    rotateMatrix[1,2] += (newH/2) - center[1] #y\n\n    # rotate image\n    outMat = cv2.warpAffine(cvmat, rotateMatrix, (newH, newW), borderValue=(128, 128, 128))\n\n    # generate inv rotate matrix\n    invRotateMatrix = cv2.invertAffineTransform(rotateMatrix)\n\n    return (outMat, invRotateMatrix, (width, height))\n\ndef rotate_mask(mask, rotateAngle):\n\n    outmask = rotate_image_float(mask, rotateAngle)\n\n    return outmask\n\ndef rotate_image_float(cvmat, rotateAngle, borderValue=(0.0, 0.0, 0.0)):\n\n    assert (cvmat.dtype == np.float) , \" only support normalize np.float  in rotate_image_float'\"\n\n    ##Make sure cvmat is square?\n    height, width, channels = cvmat.shape\n\n    center = ( width//2, height//2)\n    rotateMatrix = cv2.getRotationMatrix2D(center, rotateAngle, 1.0)\n\n    cos, sin = np.abs(rotateMatrix[0,0]), np.abs(rotateMatrix[0, 1])\n    newH = 
int((height*sin)+(width*cos))\n    newW = int((height*cos)+(width*sin))\n\n    rotateMatrix[0,2] += (newW/2) - center[0] #x\n    rotateMatrix[1,2] += (newH/2) - center[1] #y\n\n    # rotate image\n    outMat = cv2.warpAffine(cvmat, rotateMatrix, (newH, newW), borderValue=borderValue)\n\n    return outMat\n\n\ndef crop_image(cvmat, kpAnnLst, lowLimitRatio, upLimitRatio):\n    import random\n\n    assert(lowLimitRatio < 1.0), 'lowLimitRatio should be less than 1.0'\n    assert(upLimitRatio < 1.0), 'upLimitRatio should be less than 1.0'\n\n    height, width, channels = cvmat.shape\n\n    cropHeight = random.randrange(int(lowLimitRatio*height),  int(upLimitRatio*height))\n    cropWidth  = random.randrange(int(lowLimitRatio*width),  int(upLimitRatio*width))\n\n    top_x = random.randrange(0,  width - cropWidth)\n    top_y = random.randrange(0,  height - cropHeight)\n\n    # apply offset for keypoints\n    nKpLst = list()\n    for _kp in kpAnnLst:\n        if _kp.visibility == -1:\n            _newkp = _kp\n        else:\n            _newkp = KpAnno.applyOffset(_kp, (top_x, top_y))\n            if _newkp.x <=0 or _newkp.y <=0:\n                # negative location, return original image\n                return cvmat, kpAnnLst\n            if _newkp.x >= cropWidth or _newkp.y >= cropHeight:\n                # keypoints are cropped out\n                return cvmat, kpAnnLst\n        nKpLst.append(_newkp)\n\n    return cvmat[top_y:top_y+cropHeight,  top_x:top_x+cropWidth], nKpLst\n\nif __name__ == \"__main__\":\n    pass"
  },
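The three `rotate_*` helpers in `data_process.py` all enlarge the canvas so the rotated image is not clipped; since the pipeline pads images to a 512x512 square before rotating, the width/height bookkeeping there is symmetric. For the general non-square case, the expanded bounds follow from the absolute cosine and sine of the angle. A minimal numpy sketch (the helper name `rotated_bounds` is illustrative, not part of the repo):

```python
import numpy as np

def rotated_bounds(width, height, angle_deg):
    # Size of the axis-aligned box containing a width x height
    # rectangle rotated by angle_deg about its center.
    theta = np.deg2rad(angle_deg)
    cos, sin = abs(np.cos(theta)), abs(np.sin(theta))
    new_w = int(width * cos + height * sin)
    new_h = int(width * sin + height * cos)
    return new_w, new_h

# A 400x200 image rotated 90 degrees needs a 200x400 canvas.
new_w, new_h = rotated_bounds(400, 200, 90)
```

Note that `cv2.warpAffine` takes its `dsize` argument ordered as `(width, height)`, so the pair above would be passed as-is.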
  {
    "path": "src/data_gen/dataset.py",
    "content": "\n\ndef getKpNum(category):\n    # remove one column 'image_id'\n    return len(getKpKeys(category)) - 1\n\nTROUSERS_PART_KYES=['waistband_left', 'waistband_right', 'crotch', 'bottom_left_in', 'bottom_left_out', 'bottom_right_in', 'bottom_right_out']\nTROUSERS_PART_FLIP_KYES=['waistband_right', 'waistband_left', 'crotch', 'bottom_right_in', 'bottom_right_out', 'bottom_left_in', 'bottom_left_out']\n\nSKIRT_PART_KEYS=['waistband_left', 'waistband_right', 'hemline_left', 'hemline_right']\nSKIRT_PART_FLIP_KEYS=['waistband_right', 'waistband_left', 'hemline_right', 'hemline_left']\n\n\nDRESS_PART_KEYS= ['neckline_left', 'neckline_right', 'shoulder_left', 'shoulder_right', 'center_front',\n              'armpit_left', 'armpit_right', 'waistline_left', 'waistline_right', 'cuff_left_in',\n              'cuff_left_out', 'cuff_right_in', 'cuff_right_out', 'hemline_left', 'hemline_right']\nDRESS_PART_FLIP_KEYS=['neckline_right', 'neckline_left', 'shoulder_right', 'shoulder_left', 'center_front',\n               'armpit_right', 'armpit_left', 'waistline_right', 'waistline_left', 'cuff_right_in',\n               'cuff_right_out', 'cuff_left_in', 'cuff_left_out', 'hemline_right', 'hemline_left']\n\nBLOUSE_PART_KEYS=['neckline_left', 'neckline_right', 'shoulder_left', 'shoulder_right',\n           'center_front', 'armpit_left', 'armpit_right', 'top_hem_left', 'top_hem_right',\n           'cuff_left_in', 'cuff_left_out', 'cuff_right_in', 'cuff_right_out']\n\nBLOUSE_PART_FLIP_KEYS=['neckline_right', 'neckline_left', 'shoulder_right', 'shoulder_left',\n           'center_front', 'armpit_right', 'armpit_left', 'top_hem_right', 'top_hem_left',\n           'cuff_right_in', 'cuff_right_out', 'cuff_left_in', 'cuff_left_out']\n\nOUTWEAR_PART_KEYS=['neckline_left', 'neckline_right', 'shoulder_left', 'shoulder_right',\n            'armpit_left', 'armpit_right', 'waistline_left', 'waistline_right', 'cuff_left_in',\n            'cuff_left_out', 'cuff_right_in', 
'cuff_right_out', 'top_hem_left', 'top_hem_right']\n\nOUTWEAR_PART_FLIP_KEYS = ['neckline_right', 'neckline_left', 'shoulder_right', 'shoulder_left',\n           'armpit_right', 'armpit_left', 'waistline_right', 'waistline_left', 'cuff_right_in',\n           'cuff_right_out', 'cuff_left_in', 'cuff_left_out', 'top_hem_right', 'top_hem_left']\n\nALL_PART_KEYS = ['neckline_left', 'neckline_right', 'center_front', 'shoulder_left', 'shoulder_right',\n               'armpit_left', 'armpit_right', 'waistline_left', 'waistline_right', 'cuff_left_in', 'cuff_left_out',\n               'cuff_right_in', 'cuff_right_out', 'top_hem_left', 'top_hem_right', 'waistband_left', 'waistband_right',\n               'hemline_left', 'hemline_right', 'crotch', 'bottom_left_in', 'bottom_left_out',\n               'bottom_right_in', 'bottom_right_out']\n\nALL_PART_FLIP_KEYS = [  'neckline_right', 'neckline_left', 'center_front', 'shoulder_right', 'shoulder_left',\n                        'armpit_right', 'armpit_left',   'waistline_right', 'waistline_left', 'cuff_right_in', 'cuff_right_out',\n                        'cuff_left_in', 'cuff_left_out', 'top_hem_right', 'top_hem_left',  'waistband_right','waistband_left',\n                        'hemline_right', 'hemline_left',  'crotch',  'bottom_right_in', 'bottom_right_out',\n                        'bottom_left_in', 'bottom_left_out']\n\ndef getFlipKeys(category):\n    if category == 'skirt':\n        keys, mapkeys = SKIRT_PART_KEYS, SKIRT_PART_FLIP_KEYS\n    elif category == 'dress':\n        keys, mapkeys = DRESS_PART_KEYS, DRESS_PART_FLIP_KEYS\n    elif category == 'trousers':\n        keys, mapkeys = TROUSERS_PART_KYES, TROUSERS_PART_FLIP_KYES\n    elif category == 'blouse':\n        keys, mapkeys = BLOUSE_PART_KEYS, BLOUSE_PART_FLIP_KEYS\n    elif category == 'outwear':\n        keys, mapkeys = OUTWEAR_PART_KEYS, OUTWEAR_PART_FLIP_KEYS\n    elif category == 'all':\n        keys, mapkeys = ALL_PART_KEYS, ALL_PART_FLIP_KEYS\n    else:\n    
    assert (0), category + \" not supported\"\n\n    xdict = dict()\n    for i in range(len(keys)):\n        xdict[keys[i]] = mapkeys[i]\n    return keys, xdict\n\ndef getFlipMapID(category, partid):\n    keys, mapDict = getFlipKeys(category)\n    mapKey = mapDict[keys[partid]]\n    mapID  = keys.index(mapKey)\n    return mapID\n\ndef getKpKeys(category):\n    '''\n\n    :param category:\n    :return: get the keypoint keys in annotation csv\n    '''\n    SKIRT_KP_KEYS = ['image_id', 'waistband_left', 'waistband_right', 'hemline_left', 'hemline_right']\n    DRESS_KP_KEYS = ['image_id', 'neckline_left', 'neckline_right', 'shoulder_left', 'shoulder_right', 'center_front',\n                     'armpit_left',  'armpit_right' ,  'waistline_left' , 'waistline_right', 'cuff_left_in',\n                     'cuff_left_out', 'cuff_right_in',  'cuff_right_out',  'hemline_left',  'hemline_right']\n    TROUSERS_KP_KEYS=['image_id',  'waistband_left', 'waistband_right', 'crotch',  'bottom_left_in',\n                      'bottom_left_out', 'bottom_right_in', 'bottom_right_out']\n    BLOUSE_KP_KEYS = [ 'image_id', 'neckline_left', 'neckline_right', 'shoulder_left', 'shoulder_right',\n                       'center_front', 'armpit_left', 'armpit_right', 'top_hem_left', 'top_hem_right',\n                       'cuff_left_in', 'cuff_left_out', 'cuff_right_in', 'cuff_right_out']\n    OUTWEAR_KP_KEYS= ['image_id', 'neckline_left', 'neckline_right', 'shoulder_left', 'shoulder_right',\n                      'armpit_left', 'armpit_right', 'waistline_left', 'waistline_right', 'cuff_left_in',\n                      'cuff_left_out', 'cuff_right_in', 'cuff_right_out', 'top_hem_left', 'top_hem_right']\n\n    ALL_KP_KESY = ['image_id','neckline_left', 'neckline_right', 'center_front', 'shoulder_left', 'shoulder_right',\n                 'armpit_left', 'armpit_right', 'waistline_left', 'waistline_right', 'cuff_left_in', 'cuff_left_out', 'cuff_right_in',\n                 'cuff_right_out', 
'top_hem_left', 'top_hem_right', 'waistband_left', 'waistband_right', 'hemline_left', 'hemline_right' ,\n                 'crotch', 'bottom_left_in' , 'bottom_left_out', 'bottom_right_in' ,'bottom_right_out']\n\n    if category == 'skirt':\n        return SKIRT_KP_KEYS\n    elif category == 'dress':\n        return DRESS_KP_KEYS\n    elif category == 'trousers':\n        return TROUSERS_KP_KEYS\n    elif category == 'blouse':\n        return BLOUSE_KP_KEYS\n    elif category == 'outwear':\n        return OUTWEAR_KP_KEYS\n    elif category == 'all':\n        return ALL_KP_KESY\n    else:\n        assert(0), category + ' not supported'\n\n\ndef fill_dataframe(kplst, category, dfrow):\n    keys = getKpKeys(category)[1:]\n\n    # fill category\n    dfrow['image_category'] = category\n\n    assert (len(keys) == len(kplst)), str(len(kplst)) + ' must be the same as ' + str(len(keys))\n    for i, _key in enumerate(keys):\n        kpann = kplst[i]\n        outstr = str(int(kpann.x))+\"_\"+str(int(kpann.y))+\"_\"+str(1)\n        dfrow[_key] = outstr\n\n\ndef get_kp_index_from_allkeys(kpname):\n    ALL_KP_KEYS = ['neckline_left', 'neckline_right', 'center_front', 'shoulder_left', 'shoulder_right',\n                   'armpit_left', 'armpit_right', 'waistline_left', 'waistline_right', 'cuff_left_in', 'cuff_left_out',\n                   'cuff_right_in', 'cuff_right_out', 'top_hem_left', 'top_hem_right', 'waistband_left', 'waistband_right',\n                   'hemline_left', 'hemline_right', 'crotch', 'bottom_left_in', 'bottom_left_out', 'bottom_right_in', 'bottom_right_out']\n\n    return ALL_KP_KEYS.index(kpname)\n\n\ndef generate_input_mask(image_category, shape, nobgFlag=True):\n    import numpy as np\n    # 0.0 for invalid key points for each category\n    # 1.0 for valid key points for each category\n    h, w, c = shape\n    mask = np.zeros((h // 2, w // 2, c), dtype=np.float)\n\n    for key in getKpKeys(image_category)[1:]:\n        index = 
get_kp_index_from_allkeys(key)\n        mask[:, :, index] = 1.0\n\n    # for last channel, background\n    if nobgFlag:     mask[:, :, -1] = 0.0\n    else:   mask[:, :, -1] = 1.0\n\n    return mask"
  },
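Horizontal flipping in `flip_annlst` has two parts: mirror each x coordinate (`width - x`), then move each point into the slot of its left/right counterpart using the key-to-flipped-key mapping from `dataset.py`. A toy sketch with a shortened key list (`flip_keypoints` and the key lists here are illustrative):

```python
def flip_keypoints(kps, width, keys, flip_keys):
    # kps: list of (x, y) tuples aligned with keys.
    # Step 1: mirror every x coordinate.
    mirrored = [(width - x, y) for (x, y) in kps]
    # Step 2: move each mirrored point to its flipped key's slot,
    # so e.g. the old left hemline becomes the new right hemline.
    out = mirrored[:]
    for i, kp in enumerate(mirrored):
        out[keys.index(flip_keys[i])] = kp
    return out

KEYS = ['hemline_left', 'hemline_right', 'crotch']
FLIP_KEYS = ['hemline_right', 'hemline_left', 'crotch']
flipped = flip_keypoints([(10, 5), (80, 6), (50, 40)], 100, KEYS, FLIP_KEYS)
```

Without step 2 the left-hemline channel would carry a point that is now on the right side of the image, confusing the heatmap supervision.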
  {
    "path": "src/data_gen/kpAnno.py",
    "content": "import numpy as np\n\n\nclass KpAnno(object):\n    '''\n        Convert string to x, y, visibility\n    '''\n    def __init__(self, x, y, visibility):\n        self.x = int(x)\n        self.y = int(y)\n        self.visibility = visibility\n\n    @classmethod\n    def readFromStr(cls, xstr):\n        xarray = xstr.split('_')\n        x = int(xarray[0])\n        y = int(xarray[1])\n        visibility = int(xarray[2])\n        return cls(x,y, visibility)\n\n    @classmethod\n    def applyScale(cls, kpAnno, scale):\n        x = int(kpAnno.x*scale)\n        y = int(kpAnno.y*scale)\n        v = kpAnno.visibility\n        return cls(x, y, v)\n\n    @classmethod\n    def applyRotate(cls, kpAnno, rotateMatrix):\n        vector = [kpAnno.x, kpAnno.y, 1]\n        rotatedV = np.dot(rotateMatrix, vector)\n        return cls( int(rotatedV[0]), int(rotatedV[1]), kpAnno.visibility)\n\n    @classmethod\n    def applyOffset(cls, kpAnno, offset):\n        x = kpAnno.x - offset[0]\n        y = kpAnno.y - offset[1]\n        v = kpAnno.visibility\n        return cls(x, y, v)\n\n    @staticmethod\n    def calcDistance(kpA, kpB):\n        distance = (kpA.x - kpB.x)**2 + (kpA.y - kpB.y)**2\n        return np.sqrt(distance)\n"
  },
  {
    "path": "src/data_gen/ohem.py",
    "content": "\nimport sys\nsys.path.insert(0, \"../unet/\")\n\nimport numpy as np\nfrom keras.models import *\nfrom keras.layers import *\nfrom utils import np_euclidean_l2\nfrom dataset import getKpNum\n\ndef generate_topk_mask_ohem(input_data, gthmap, keras_model, graph, topK, image_category, dynamicFlag=False):\n    '''\n    :param input_data: network input (image and mask)\n    :param gthmap:  ground truth heatmap\n    :param keras_model: keras model\n    :param graph:  tf graph, to work around a threading issue\n    :param topK: number of keypoints selected\n    :return: ohem mask and masked ground truth heatmap\n    '''\n\n    # do inference, and calculate the loss of each channel\n    mimg, mmask = input_data\n    ximg  = mimg[np.newaxis,:,:,:]\n    xmask = mmask[np.newaxis,:,:,:]\n\n    if len(keras_model.input_layers) == 3:\n        # use original mask as ohem_mask\n        inputs = [ximg, xmask, xmask]\n    else:\n        inputs = [ximg, xmask]\n\n    with graph.as_default():\n        keras_output = keras_model.predict(inputs)\n\n    # heatmap of the last stage\n    outhmap = keras_output[-1]\n\n    channel_num = gthmap.shape[-1]\n\n    # calculate per-channel loss\n    mloss = list()\n    for i in range(channel_num):\n        _dtmap = outhmap[0, :, :, i]\n        _gtmap = gthmap[:, :, i]\n        loss   = np_euclidean_l2(_dtmap, _gtmap)\n        mloss.append(loss)\n\n    # refill input_mask: set the topk channels to 1.0 and the rest to 0.0\n    # fixme: topK may differ between categories\n    if dynamicFlag:\n        topK = getKpNum(image_category)//2\n\n    ohem_mask   = adjust_mask(mloss, mmask, topK)\n\n    ohem_gthmap = ohem_mask * gthmap\n\n    return ohem_mask, ohem_gthmap\n\ndef adjust_mask(loss, input_mask,  topk):\n    # pick the topk largest losses\n    # fill their channels with 1.0 and the rest with 0.0\n    assert (len(loss) == input_mask.shape[-1]), \\\n        \"shape should be same\" + str(len(loss)) + \" vs \" + str(input_mask.shape)\n\n    outmask = np.zeros(input_mask.shape, dtype=np.float)\n\n    topk_index = sorted(range(len(loss)), key=lambda i:loss[i])[-topk:]\n\n    for i in 
range(len(loss)):\n        if i in topk_index:\n            outmask[:,:,i] = 1.0\n\n    return outmask\n"
  },
  {
    "path": "src/data_gen/utils.py",
    "content": "\nimport numpy as np\nimport pandas as pd\nimport os\n\ndef make_gaussian(width, height, sigma=3, center=None):\n    '''\n        generate 2d guassion heatmap\n    :return:\n    '''\n\n    x = np.arange(0, width, 1, float)\n    y = np.arange(0, height, 1, float)[:, np.newaxis]\n\n    if center is None:\n        x0 = width // 2\n        y0 = height // 2\n    else:\n        x0 = center[0]\n        y0 = center[1]\n\n    return np.exp( -4*np.log(2)*((x-x0)**2 + (y-y0)**2)/sigma**2)\n\n\ndef split_csv_train_val(allcsv, traincsv, valcsv, ratio=0.8):\n    xdf = pd.read_csv(allcsv)\n    # random shuffle\n    xdf = xdf.sample(frac=1)\n\n    # random sampling\n    msk = np.random.rand(len(xdf)) < ratio\n    trainDf= xdf[msk]\n    valDf= xdf[~msk]\n    print \"total\", len(xdf), \"split into train \", len(trainDf), '  val', len(valDf)\n\n    #save to file\n    trainDf.to_csv(traincsv, index=False)\n    valDf.to_csv(valcsv, index=False)\n\n\ndef np_euclidean_l2(x, y):\n    assert (x.shape == y.shape), \"shape mismatched \" + x.shape +\" :  \" + y.shape\n    loss = np.sum((x - y)**2)\n    loss = np.sqrt(loss)\n    return loss\n\n\ndef load_annotation_from_df(df, category):\n    if category == 'all':\n        return df\n    else:\n        return df[df['image_category'] == category]\n\n\n"
  },
  {
    "path": "src/eval/eval_callback.py",
    "content": "\nimport keras\nimport os\nimport datetime\nfrom evaluation import Evaluation\nfrom time import time\nclass NormalizedErrorCallBack(keras.callbacks.Callback):\n\n    def __init__(self, foldpath, category, multiOut=False, resumeFolder=None):\n        self.parentFoldPath = foldpath\n        self.category = category\n\n        if resumeFolder is None:\n            self.foldPath = os.path.join(self.parentFoldPath, self.category, datetime.datetime.now().strftime('%Y_%m_%d_%H_%M_%S'))\n            if not os.path.exists(self.foldPath):\n                os.mkdir(self.foldPath)\n        else:\n            self.foldPath = resumeFolder\n\n        self.valLog = os.path.join(self.foldPath, 'val.log')\n        self.multiOut = multiOut\n\n    def get_folder_path(self):\n        return self.foldPath\n\n    def on_epoch_end(self, epoch, logs=None):\n        modelName = os.path.join(self.foldPath, self.category+\"_weights_\"+str(epoch)+\".hdf5\")\n        keras.models.save_model(self.model, modelName)\n        print \"Saving model to \", modelName\n\n        print \"Runing evaluation .........\"\n\n        xEval = Evaluation(self.category, None)\n        xEval.init_from_model(self.model)\n\n        start = time()\n        neScore, categoryDict = xEval.eval(self.multiOut, details=True)\n        end = time()\n        print \"Evaluation Done\", str(neScore), \" cost \", end - start, \" seconds!\"\n\n        for key in categoryDict.keys():\n            scores = categoryDict[key]\n            print key, ' score ', sum(scores)/len(scores)\n\n        with open(self.valLog , 'a+') as xfile:\n            xfile.write(modelName + \", Socre \"+ str(neScore)+\"\\n\")\n            for key in categoryDict.keys():\n                scores = categoryDict[key]\n                xfile.write(key + \": \" + str(sum(scores)/len(scores)) + \"\\n\")\n\n        xfile.close()"
  },
  {
    "path": "src/eval/evaluation.py",
    "content": "\nimport sys\nsys.path.insert(0, \"../data_gen/\")\nsys.path.insert(0, \"../unet/\")\n\nimport pandas as pd\nfrom dataset import getKpKeys, getKpNum, getFlipMapID, get_kp_index_from_allkeys, generate_input_mask\nfrom kpAnno import KpAnno\nfrom post_process import post_process_heatmap\nfrom keras.models import load_model\nimport os\nfrom refinenet_mask_v3 import euclidean_loss\nimport numpy as np\nimport cv2\nfrom resnet101 import Scale\nfrom utils import load_annotation_from_df\nfrom collections import defaultdict\nimport copy\nfrom data_process import pad_image_inference\n\nclass Evaluation(object):\n    def __init__(self, category, modelFile):\n        self.category = category\n        self.train_img_path = \"../../data/train\"\n        if modelFile is not None:\n            self._initialize(modelFile)\n\n    def init_from_model(self, model):\n        self._load_anno()\n        self.net = model\n\n    def eval(self, multiOut=False, details=False, flip=True):\n        xdf = self.annDataFrame\n        scores = list()\n        xdict = dict()\n        xcategoryDict = defaultdict(list)\n        for _index, _row in xdf.iterrows():\n            imgId = _row['image_id']\n            category = _row['image_category']\n            imgFile = os.path.join(self.train_img_path, imgId)\n            gtKpAnno = self._get_groundtruth_kpAnno(_row)\n            if flip:\n                predKpAnno = self.predict_kp_with_flip(imgFile, category)\n            else:\n                predKpAnno = self.predict_kp(imgFile, category, multiOut)\n            neScore = Evaluation.calc_ne_score(category, predKpAnno, gtKpAnno)\n            scores.extend(neScore)\n            if details:\n                xcategoryDict[category].extend(neScore)\n        if details:\n            return sum(scores)/len(scores), xcategoryDict\n        else:\n            return sum(scores)/len(scores)\n\n    def _initialize(self, modelFile):\n        self._load_anno()\n        
self._initialize_network(modelFile)\n\n    def _initialize_network(self, modelFile):\n        self.net = load_model(modelFile, custom_objects={'euclidean_loss': euclidean_loss, 'Scale': Scale})\n\n    def _load_anno(self):\n        '''\n        Load annotations from val_split.csv\n        '''\n        self.annfile = os.path.join(\"../../data/train/Annotations\", \"val_split.csv\")\n\n        # read into dataframe\n        xpd = pd.read_csv(self.annfile)\n        xpd = load_annotation_from_df(xpd, self.category)\n        self.annDataFrame = xpd\n\n\n    def _get_groundtruth_kpAnno(self, dfrow):\n        mlist = dfrow[getKpKeys(self.category)]\n        imgName, kpStr = mlist[0], mlist[1:]\n        # read kp annotation from csv file\n        kpAnnlst = [KpAnno.readFromStr(_kpstr) for _kpstr in kpStr]\n        return kpAnnlst\n\n    def _net_inference_with_mask(self, imgFile, imgCategory):\n        from data_process import normalize_image, pad_image_inference\n        assert (len(self.net.input_layers) > 1), \"the model must have more than 1 input layer\"\n\n        # load image and preprocess\n        img = cv2.imread(imgFile)\n\n        img, scale = pad_image_inference(img, 512, 512)\n        img   = normalize_image(img)\n        input_img = img[np.newaxis, :, :, :]\n\n        input_mask = generate_input_mask(imgCategory, (512, 512, getKpNum(self.category)))\n        input_mask = input_mask[np.newaxis, :, :, :]\n\n        # inference\n        heatmap = self.net.predict([input_img, input_mask, input_mask])\n\n        return (heatmap, scale)\n\n    def _heatmap_sum(self, heatmaplst):\n        outheatmap = np.copy(heatmaplst[0])\n        for i in range(1, len(heatmaplst), 1):\n            outheatmap += heatmaplst[i]\n        return outheatmap\n\n    def predict_kp(self, imgFile, imgCategory, multiOutput=False):\n\n        xnetout, scale = self._net_inference_with_mask(imgFile, imgCategory)\n\n        if multiOutput:\n            # FIXME: it is tricky that the previous stage has better performance than the last stage's output,\n            # so here we use the sum of all stages' outputs.\n            heatmap = self._heatmap_sum(xnetout)\n        else:\n            heatmap = xnetout\n\n        detectedKps = post_process_heatmap(heatmap, kpConfidenceTh=0.2)\n\n        # heatmap is at 256x256; scale up to the padded 512x512 resolution\n        scaleTo512 = 2.0\n\n        # apply scale to original resolution\n        detectedKps = [KpAnno(_kp.x*scaleTo512/scale, _kp.y*scaleTo512/scale, _kp.visibility) for _kp in detectedKps]\n\n        return detectedKps\n\n\n    def predict_kp_with_flip(self, imgFile, imgCategory):\n        # inference with flipped and original image\n        heatmap, scale = self._net_inference_flip(imgFile, imgCategory)\n\n        detectedKps = post_process_heatmap(heatmap, kpConfidenceTh=0.2)\n\n        # heatmap is at 256x256; scale up to the padded 512x512 resolution\n        scaleTo512 = 2.0\n\n        # apply scale to original resolution\n        detectedKps = [KpAnno(_kp.x * scaleTo512 / scale, _kp.y * scaleTo512 / scale, _kp.visibility) for _kp in\n                       detectedKps]\n\n        return detectedKps\n\n    def _net_inference_flip(self, imgFile, imgCategory):\n        from data_process import normalize_image, pad_image_inference\n        assert (len(self.net.input_layers) > 1), \"the model must have more than 1 input layer\"\n\n        batch_size = 2\n\n        input_img  = np.zeros(shape=(batch_size, 512, 512, 3), dtype=np.float)\n        input_mask = np.zeros(shape=(batch_size, 256, 256, getKpNum(self.category)), dtype=np.float)\n\n        # load image and preprocess\n        orgimage = cv2.imread(imgFile)\n\n        padimg, scale = pad_image_inference(orgimage, 512, 512)\n        flipimg = cv2.flip(padimg, flipCode=1)\n\n        input_img[0,:,:,:] = normalize_image(padimg)\n        input_img[1,:,:,:] = normalize_image(flipimg)\n\n        mask = generate_input_mask(imgCategory, (512, 512, 
getKpNum(self.category)))\n        input_mask[0,:,:,:] = mask\n        input_mask[1,:,:,:] = mask\n\n        # inference\n        if len(self.net.input_layers) == 2:\n            heatmap = self.net.predict([input_img, input_mask])\n        elif len(self.net.input_layers) == 3:\n            heatmap = self.net.predict([input_img, input_mask, input_mask])\n        else:\n            assert (0), \"number of input layers should be 2 or 3, got \" + str(len(self.net.input_layers))\n\n        # sum heatmaps over all stages\n        avgheatmap = self._heatmap_sum(heatmap)\n\n        orgheatmap = avgheatmap[0,:,:,:]\n\n        # reorder the flipped channels to match the original keypoint order\n        flipheatmap = avgheatmap[1,:,:,:]\n        flipheatmap = self._flip_out_heatmap(flipheatmap)\n\n        # merge the original and flipped heatmaps\n        outheatmap = flipheatmap + orgheatmap\n        outheatmap = outheatmap[np.newaxis, :, :, :]\n\n        return (outheatmap, scale)\n\n    def predict_kp_with_rotate(self, imgFile, imgCategory):\n        # inference with rotated images\n        rotateheatmap = self._net_inference_rotate(imgFile, imgCategory)\n        rotateheatmap = rotateheatmap[np.newaxis, :, :, :]\n\n        # original image and flipped image\n        orgflipmap, scale = self._net_inference_flip(imgFile, imgCategory)\n        mflipmap = cv2.resize(orgflipmap[0,:,:,:], None, fx=2.0/scale, fy=2.0/scale)\n\n        # add mflipmap and rotateheatmap\n        avgheatmap = mflipmap[np.newaxis, :, :, :]\n\n        b, h, w, c = rotateheatmap.shape\n        avgheatmap[:, 0:h, 0:w,:] += rotateheatmap\n\n        # generate key point locations\n        detectedKps = post_process_heatmap(avgheatmap, kpConfidenceTh=0.2)\n\n        return detectedKps\n\n    def _net_inference_rotate(self, imgFile, imgCategory):\n        from data_process import normalize_image, pad_image_inference, rotate_image_with_invrmat\n\n        # load image and preprocess\n        orgimage = cv2.imread(imgFile)\n\n        anglelst = [-20, -10, 10, 20]\n\n        
input_img  = np.zeros(shape=(len(anglelst), 512, 512, 3), dtype=np.float)\n        input_mask = np.zeros(shape=(len(anglelst), 256, 256, getKpNum(self.category)), dtype=np.float)\n\n        mlist = list()\n        for i, angle in enumerate(anglelst):\n            rotateimg, invRotMatrix, orgImgSize = rotate_image_with_invrmat(orgimage, angle)\n            padimg, scale = pad_image_inference(rotateimg, 512, 512)\n            _img = normalize_image(padimg)\n            input_img[i, :, :, :] = _img\n            mlist.append((scale, invRotMatrix))\n\n        mask = generate_input_mask(imgCategory, (512, 512, getKpNum(self.category)))\n        for i, angle in enumerate(anglelst):\n            input_mask[i, :, :, :] = mask\n\n        # inference\n        heatmap = self.net.predict([input_img, input_mask, input_mask])\n        heatmap = self._heatmap_sum(heatmap)\n\n        # rotate back to the original resolution\n        sumheatmap = np.zeros(shape=(orgimage.shape[0], orgimage.shape[1], getKpNum(self.category)), dtype=np.float)\n        for i, item in enumerate(mlist):\n            _heatmap = heatmap[i, :, :, :]\n            _scale, _invRotMatrix = item\n            _heatmap = cv2.resize(_heatmap, None, fx=2.0 / _scale, fy=2.0 / _scale)\n            _invheatmap = cv2.warpAffine(_heatmap, _invRotMatrix, (orgimage.shape[1], orgimage.shape[0]))\n            sumheatmap += _invheatmap\n\n        return sumheatmap\n\n    def _flip_out_heatmap(self, flipout):\n        outmap = np.zeros(flipout.shape, dtype=np.float)\n        for i in range(flipout.shape[-1]):\n            flipid = getFlipMapID(self.category, i)\n            mask = np.copy(flipout[:, :, i])\n            outmap[:, :, flipid] = cv2.flip(mask, flipCode=1)\n        return outmap\n\n\n    @staticmethod\n    def get_normalized_distance(category, gtKp):\n        '''\n        :param category: cloth category\n        :param gtKp: list of ground-truth KpAnno objects\n        :return: the normalization distance for this category; if either of the\n                 two reference points is not visible, return a big number (1e6)\n        '''\n\n        if category in ['skirt', 'trousers']:\n            # waistband left and right\n            waistband_left_index  = get_kp_index_from_allkeys('waistband_left')\n            waistband_right_index = get_kp_index_from_allkeys('waistband_right')\n\n            if gtKp[waistband_left_index].visibility != -1 and gtKp[waistband_right_index].visibility != -1:\n                distance = KpAnno.calcDistance(gtKp[waistband_left_index], gtKp[waistband_right_index])\n            else:\n                distance = 1e6\n            return distance\n        elif category in ['blouse', 'dress', 'outwear']:\n            # armpit left and right\n            armpit_left_index  = get_kp_index_from_allkeys('armpit_left')\n            armpit_right_index = get_kp_index_from_allkeys('armpit_right')\n\n            if gtKp[armpit_left_index].visibility != -1 and gtKp[armpit_right_index].visibility != -1:\n                distance = KpAnno.calcDistance(gtKp[armpit_left_index], gtKp[armpit_right_index])\n            else:\n                distance = 1e6\n            return distance\n        else:\n            assert (0), category + \" not implemented in get_normalized_distance\"\n\n\n    @staticmethod\n    def calc_ne_score(category, dtKp, gtKp):\n\n        assert (len(dtKp) == len(gtKp)), \"the number of predicted keypoints should match the ground truth: \" + \\\n                                         str(dtKp) + \" vs \" + str(gtKp)\n\n        # calculate normalized error as score\n        normalizedDistance = Evaluation.get_normalized_distance(category, gtKp)\n\n        mlist = list()\n        for i in range(len(gtKp)):\n            if gtKp[i].visibility == 1:\n                dk = KpAnno.calcDistance(dtKp[i], gtKp[i])\n                mlist.append(dk / normalizedDistance)\n\n        return mlist\n"
  },
  {
    "path": "src/eval/post_process.py",
    "content": "import cv2\nimport numpy as np\nfrom scipy.ndimage import gaussian_filter, maximum_filter\nfrom keras.layers import *\nfrom kpAnno import KpAnno\n\ndef post_process_heatmap(heatMap, kpConfidenceTh=0.2):\n    kplst = list()\n    for i in range(heatMap.shape[-1]):\n        # ignore last channel, background channel\n        _map = heatMap[0, :, :, i]\n        _map = gaussian_filter(_map, sigma=0.5)\n        _nmsPeaks = non_max_supression(_map, windowSize=3, threshold=1e-6)\n\n        y, x = np.where(_nmsPeaks == _nmsPeaks.max())\n        confidence = np.amax(_nmsPeaks)\n        if confidence > kpConfidenceTh:\n            kplst.append(KpAnno(x[0], y[0], 1))\n        else:\n            kplst.append(KpAnno(x[0], y[0], -1))\n    return kplst\n\ndef non_max_supression(plain, windowSize=3, threshold=1e-6):\n    # clear value less than threshold\n    under_th_indices = plain < threshold\n    plain[under_th_indices] = 0\n    return plain* (plain == maximum_filter(plain, footprint=np.ones((windowSize, windowSize))))\n"
  },
  {
    "path": "src/top/demo.py",
    "content": "import sys\nsys.path.insert(0, \"../data_gen/\")\nsys.path.insert(0, \"../eval/\")\nsys.path.insert(0, \"../unet/\")\n\nimport argparse\nimport os\nimport pandas as pd\nimport cv2\nfrom evaluation import Evaluation\nfrom dataset import getKpKeys, get_kp_index_from_allkeys\n\ndef visualize_keypoint(imageName, category, dtkp):\n    cvmat = cv2.imread(imageName)\n    for key in getKpKeys(category)[1:]:\n        index = get_kp_index_from_allkeys(key)\n        _kp = dtkp[index]\n        cv2.circle(cvmat, center=(_kp.x, _kp.y), radius=7, color=(1.0, 0.0, 0.0), thickness=2)\n    cv2.imshow('demo', cvmat)\n    cv2.waitKey()\n\ndef demo(modelfile):\n\n    # load network\n    xEval = Evaluation('all', modelfile)\n\n    # load images and run prediction\n    testfile = os.path.join(\"../../data/test/\", 'test.csv')\n    xdf = pd.read_csv(testfile)\n    xdf = xdf.sample(frac=1.0)\n\n    for _index, _row in xdf.iterrows():\n        _image_id = _row['image_id']\n        _category = _row['image_category']\n        imageName = os.path.join(\"../../data/test\", _image_id)\n        print _image_id, _category\n        dtkp = xEval.predict_kp_with_rotate(imageName, _category)\n        visualize_keypoint(imageName, _category, dtkp)\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--gpuID\", default=0, type=int, help='gpu id')\n    parser.add_argument(\"--modelfile\", help=\"file of model\")\n\n    args = parser.parse_args()\n\n    print args\n\n    os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = str(args.gpuID)\n\n    demo(args.modelfile)"
  },
  {
    "path": "src/top/test.py",
    "content": "import sys\nsys.path.insert(0, \"../data_gen/\")\nsys.path.insert(0, \"../eval/\")\nsys.path.insert(0, \"../unet/\")\n\nimport argparse\nimport os\nfrom fashion_net import FashionNet\nfrom dataset import getKpNum, getKpKeys\nimport pandas as pd\nfrom evaluation import Evaluation\nimport pickle\nimport numpy as np\n\n\ndef get_best_single_model(valfile):\n    '''\n    :param valfile: the log file with validation score for each snapshot\n    :return: model file and score\n    '''\n\n    def get_key(item):\n        return item[1]\n\n    with open(valfile) as xval:\n        lines = xval.readlines()\n\n    xlist = list()\n    for linenum, xline in enumerate(lines):\n        if 'hdf5' in xline and 'Socre' in xline:\n            modelname = xline.strip().split(',')[0]\n            overallscore = xline.strip().split(',')[1]\n            xlist.append((modelname, overallscore))\n\n    bestmodel = sorted(xlist, key=get_key)[0]\n\n    return bestmodel\n\n\ndef fill_dataframe(kplst, keys, dfrow, image_category):\n    # fill category\n\n    dfrow['image_category'] = image_category\n\n    assert (len(keys) == len(kplst)), str(len(kplst)) + ' must be the same as ' + str(len(keys))\n    for i, _key in enumerate(keys):\n        kpann = kplst[i]\n        outstr = str(int(kpann.x))+\"_\"+str(int(kpann.y))+\"_\"+str(1)\n        dfrow[_key] = outstr\n\ndef get_kp_from_dict(mdict, image_category, image_id):\n    if image_category in mdict.keys():\n        xdict = mdict[image_category]\n    else:\n        xdict = mdict['all']\n    return xdict[image_id]\n\ndef submission(pklpath):\n    xdf = pd.read_csv(\"../../data/train/Annotations/train.csv\")\n    trainKeys = xdf.keys()\n\n    testdf = pd.read_csv(\"../../data/test/test.csv\")\n    print len(testdf), \" samples in test.csv\"\n\n    mdict = dict()\n    for xfile in os.listdir(pklpath):\n        if xfile.endswith('.pkl'):\n            category = xfile.strip().split('.')[0]\n            pkl = open(os.path.join(pklpath, 
xfile))\n            mdict[category] = pickle.load(pkl)\n\n    print testdf.keys()\n    print mdict.keys()\n\n    submissionDf = pd.DataFrame(columns=trainKeys, index=np.arange(testdf.shape[0]))\n    submissionDf = submissionDf.fillna(value='-1_-1_-1')\n    submissionDf['image_id'] = testdf['image_id']\n    submissionDf['image_category'] = testdf['image_category']\n\n    for _index, _row in submissionDf.iterrows():\n        image_id = _row['image_id']\n        image_category = _row['image_category']\n        kplst = get_kp_from_dict(mdict, image_category, image_id)\n        fill_dataframe(kplst, getKpKeys('all')[1:], _row, image_category)\n\n\n    print len(submissionDf), \"save to \",  os.path.join(pklpath, 'submission.csv')\n    submissionDf.to_csv( os.path.join(pklpath, 'submission.csv'), index=False )\n\n\ndef load_image_names(annfile, category):\n    # read into dataframe\n    xdf = pd.read_csv(annfile)\n    xdf = xdf[xdf['image_category'] == category]\n    return xdf\n\ndef main_test(savepath, modelpath, augmentFlag):\n\n    valfile = os.path.join(modelpath, 'val.log')\n    bestmodels = get_best_single_model(valfile)\n\n    print bestmodels, augmentFlag\n\n    xEval = Evaluation('all', bestmodels[0])\n\n    # load images and run prediction\n    testfile = os.path.join(\"../../data/test/\", 'test.csv')\n\n    for category in ['skirt', 'blouse', 'trousers', 'outwear', 'dress']:\n        xdict = dict()\n        xdf = load_image_names(testfile, category)\n        print len(xdf), \" images to process \", category\n\n        count = 0\n        for _index, _row in xdf.iterrows():\n            count += 1\n            if count%1000 == 0:\n                print count, \"images have been processed\"\n\n            _image_id = _row['image_id']\n            imageName = os.path.join(\"../../data/test\", _image_id)\n            if augmentFlag:\n                dtkp = xEval.predict_kp_with_rotate(imageName, _row['image_category'])\n            else:\n                dtkp = 
xEval.predict_kp(imageName, _row['image_category'], multiOutput=True)\n            xdict[_image_id] = dtkp\n\n        savefile = os.path.join(savepath, category + '.pkl')\n        with open(savefile, 'wb') as xfile:\n            pickle.dump(xdict, xfile)\n\n        print \"prediction saved to \", savefile\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--gpuID\", default=0, type=int, help='gpu id')\n    parser.add_argument(\"--modelpath\", help=\"path of trained model\")\n    parser.add_argument(\"--outpath\", help=\"path to save predicted keypoints\")\n    # note: argparse's type=bool treats any non-empty string as True, so use a flag instead\n    parser.add_argument(\"--augment\", default=False, action=\"store_true\", help=\"enable test-time augmentation\")\n\n    args = parser.parse_args()\n\n    print args\n\n    os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = str(args.gpuID)\n\n    main_test(args.outpath, args.modelpath, args.augment)\n    submission(args.outpath)"
  },
  {
    "path": "src/top/train.py",
    "content": "import sys\nsys.path.insert(0, \"../data_gen/\")\nsys.path.insert(0, \"../unet/\")\n\nimport argparse\nimport os\nfrom fashion_net import FashionNet\nfrom dataset import getKpNum\nimport tensorflow as tf\nfrom keras import backend as k\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--gpuID\", default=0, type=int, help='gpu id')\n    parser.add_argument(\"--category\", help=\"specify cloth category\")\n    parser.add_argument(\"--network\", help=\"specify  network arch'\")\n    parser.add_argument(\"--batchSize\", default=8, type=int, help='batch size for training')\n    parser.add_argument(\"--epochs\", default=20, type=int, help=\"number of traning epochs\")\n    parser.add_argument(\"--resume\", default=False, type=bool,  help=\"resume training or not\")\n    parser.add_argument(\"--lrdecay\", default=False, type=bool,  help=\"lr decay or not\")\n    parser.add_argument(\"--resumeModel\", help=\"start point to retrain\")\n    parser.add_argument(\"--initEpoch\", type=int, help=\"epoch to resume\")\n\n\n    args = parser.parse_args()\n\n    os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = str(args.gpuID)\n\n\n    # TensorFlow wizardry\n    config = tf.ConfigProto()\n\n    # Don't pre-allocate memory; allocate as-needed\n    config.gpu_options.allow_growth = True\n\n    # Only allow a total of half the GPU memory to be allocated\n    config.gpu_options.per_process_gpu_memory_fraction = 1.0\n\n    # Create a session with the above options specified.\n    k.tensorflow_backend.set_session(tf.Session(config=config))\n\n    if not args.resume :\n        xnet = FashionNet(512, 512, getKpNum(args.category))\n        xnet.build_model(modelName=args.network, show=True)\n        xnet.train(args.category, epochs=args.epochs, batchSize=args.batchSize, lrschedule=args.lrdecay)\n    else:\n        xnet = FashionNet(512, 512, getKpNum(args.category))\n        
xnet.resume_train(args.category, args.resumeModel, args.network, args.initEpoch,\n                          epochs=args.epochs, batchSize=args.batchSize)"
  },
  {
    "path": "src/unet/fashion_net.py",
    "content": "\nimport sys\nsys.path.insert(0, \"../data_gen/\")\nsys.path.insert(0, \"../eval/\")\n\nfrom data_generator import DataGenerator\nfrom keras.callbacks import ModelCheckpoint, CSVLogger\nfrom keras.models import load_model\nfrom data_process import pad_image, normalize_image\nimport os\nimport cv2\nimport numpy as np\nimport datetime\nfrom eval_callback import NormalizedErrorCallBack\nfrom refinenet_mask_v3 import Res101RefineNetMaskV3, euclidean_loss\nfrom resnet101 import Scale\nimport tensorflow as tf\n\nclass FashionNet(object):\n\n    def __init__(self, inputHeight, inputWidth, nClasses):\n        self.inputWidth = inputWidth\n        self.inputHeight = inputHeight\n        self.nClass = nClasses\n\n    def build_model(self, modelName='v2', show=False):\n        self.modelName = modelName\n        self.model = Res101RefineNetMaskV3(self.nClass, self.inputHeight, self.inputWidth, nStackNum=2)\n        self.nStackNum = 2\n\n        # show model summary and layer name\n        if show:\n            self.model.summary()\n            for layer in self.model.layers:\n                print layer.name, layer.trainable\n\n    def train(self, category, batchSize=8, epochs=20, lrschedule=False):\n        trainDt = DataGenerator(category, os.path.join(\"../../data/train/Annotations\", \"train_split.csv\"))\n        trainGen = trainDt.generator_with_mask_ohem( graph=tf.get_default_graph(), kerasModel=self.model,\n                                    batchSize= batchSize, inputSize=(self.inputHeight, self.inputWidth),\n                                    nStackNum=self.nStackNum, flipFlag=False, cropFlag=False)\n\n        normalizedErrorCallBack = NormalizedErrorCallBack(\"../../trained_models/\", category, True)\n\n        csvlogger = CSVLogger( os.path.join(normalizedErrorCallBack.get_folder_path(),\n                               \"csv_train_\"+self.modelName+\"_\"+str(datetime.datetime.now().strftime('%H:%M'))+\".csv\"))\n\n        xcallbacks = 
[normalizedErrorCallBack, csvlogger]\n\n        self.model.fit_generator(generator=trainGen, steps_per_epoch=trainDt.get_dataset_size()//batchSize,\n                                 epochs=epochs,  callbacks=xcallbacks)\n\n    def load_model(self, netWeightFile):\n        self.model = load_model(netWeightFile, custom_objects={'euclidean_loss': euclidean_loss, 'Scale': Scale})\n\n    def resume_train(self, category, pretrainModel, modelName, initEpoch, batchSize=8, epochs=20):\n        self.modelName = modelName\n        self.load_model(pretrainModel)\n        refineNetflag = True\n        self.nStackNum = 2\n\n        modelPath = os.path.dirname(pretrainModel)\n\n        trainDt = DataGenerator(category, os.path.join(\"../../data/train/Annotations\", \"train_split.csv\"))\n        trainGen = trainDt.generator_with_mask_ohem(graph=tf.get_default_graph(), kerasModel=self.model,\n                                                    batchSize=batchSize, inputSize=(self.inputHeight, self.inputWidth),\n                                                    nStackNum=self.nStackNum, flipFlag=False, cropFlag=False)\n\n\n        normalizedErrorCallBack = NormalizedErrorCallBack(\"../../trained_models/\", category, refineNetflag, resumeFolder=modelPath)\n\n        csvlogger = CSVLogger(os.path.join(normalizedErrorCallBack.get_folder_path(),\n                                           \"csv_train_\" + self.modelName + \"_\" + str(\n                                               datetime.datetime.now().strftime('%H:%M')) + \".csv\"))\n\n        self.model.fit_generator(initial_epoch=initEpoch, generator=trainGen, steps_per_epoch=trainDt.get_dataset_size() // batchSize,\n                                 epochs=epochs, callbacks=[normalizedErrorCallBack, csvlogger])\n\n\n    def predict_image(self, imgfile):\n        # load image and preprocess\n        img = cv2.imread(imgfile)\n        img, _ = pad_image(img, list(), 512, 512)\n        img = normalize_image(img)\n        input = 
img[np.newaxis,:,:,:]\n        # inference\n        heatmap = self.model.predict(input)\n        return heatmap\n\n\n    def predict(self, input):\n        # inference\n        heatmap = self.model.predict(input)\n        return heatmap"
  },
  {
    "path": "src/unet/refinenet.py",
    "content": "from keras.models import *\nfrom keras.layers import *\nfrom keras.optimizers import Adam, SGD\nfrom keras import backend as K\nfrom keras.applications.resnet50 import ResNet50\n\nIMAGE_ORDERING = 'channels_last'\n\ndef Res101RefineNetDilated(n_classes, inputHeight, inputWidth):\n    model = build_network_resnet101(inputHeight, inputWidth, n_classes, dilated=True)\n    return model\n\ndef Res101RefineNetStacked(n_classes, inputHeight, inputWidth, nStackNum):\n    model = build_network_resnet101_stack(inputHeight, inputWidth, n_classes, nStackNum)\n    return model\n\ndef euclidean_loss(x, y):\n    return K.sqrt(K.sum(K.square(x - y)))\n\n\ndef create_global_net(lowlevelFeatures, n_classes):\n    lf2x, lf4x, lf8x, lf16x = lowlevelFeatures\n\n    o = lf16x\n\n    o = (Conv2D(256, (3, 3), activation='relu', padding='same', name='up16x_conv', data_format=IMAGE_ORDERING))(o)\n    o = (BatchNormalization())(o)\n\n    o = (Conv2DTranspose(256, kernel_size=(3, 3), strides=(2, 2), name='upsample_16x', activation='relu', padding='same',\n                    data_format=IMAGE_ORDERING))(o)\n    o = (concatenate([o, lf8x], axis=-1))\n    o = (Conv2D(128, (3, 3), activation='relu', padding='same', name='up8x_conv', data_format=IMAGE_ORDERING))(o)\n    o = (BatchNormalization())(o)\n    fup8x = o\n\n    o = (Conv2DTranspose(128, kernel_size=(3, 3), strides=(2, 2), name='upsample_8x', padding='same', activation='relu',\n                         data_format=IMAGE_ORDERING))(o)\n    o = (concatenate([o, lf4x], axis=-1))\n    o = (Conv2D(64, (3, 3), activation='relu', padding='same', name='up4x_conv', data_format=IMAGE_ORDERING))(o)\n    o = (BatchNormalization())(o)\n    fup4x = o\n\n    o = (Conv2DTranspose(64, kernel_size=(3, 3), strides=(2, 2), name='upsample_4x', padding='same', activation='relu',\n                         data_format=IMAGE_ORDERING))(o)\n    o = (concatenate([o, lf2x], axis=-1))\n    o = (Conv2D(64, (3, 3), activation='relu', padding='same', 
name='up2x_conv', data_format=IMAGE_ORDERING))(o)\n    o = (BatchNormalization())(o)\n    fup2x = o\n\n    out2x = Conv2D(n_classes, (1, 1), activation='linear', padding='same', name='out2x', data_format=IMAGE_ORDERING)(fup2x)\n    out4x = Conv2D(n_classes, (1, 1), activation='linear', padding='same', name='out4x', data_format=IMAGE_ORDERING)(fup4x)\n    out8x = Conv2D(n_classes, (1, 1), activation='linear', padding='same', name='out8x', data_format=IMAGE_ORDERING)(fup8x)\n\n    x4x = UpSampling2D((2, 2), data_format=IMAGE_ORDERING)(out8x)\n    eadd4x = Add(name='global4x')([x4x, out4x])\n\n    x2x = UpSampling2D((2, 2), data_format=IMAGE_ORDERING)(eadd4x)\n    eadd2x = Add(name='global2x')([x2x, out2x])\n\n    return (fup8x, eadd4x, eadd2x)\n\ndef create_refine_net(inputFeatures, n_classes):\n    f8x, f4x, f2x = inputFeatures\n\n    # 2 Conv2DTranspose f8x -> fup8x\n    fup8x = (Conv2DTranspose(128, kernel_size=(3, 3), strides=(2, 2), name='refine8x_deconv_1', padding='same', activation='relu',\n                         data_format=IMAGE_ORDERING))(f8x)\n    fup8x = (BatchNormalization())(fup8x)\n\n    fup8x = (Conv2DTranspose(128, kernel_size=(3, 3), strides=(2, 2), name='refine8x_deconv_2', padding='same', activation='relu',\n                         data_format=IMAGE_ORDERING))(fup8x)\n    fup8x = (BatchNormalization())(fup8x)\n\n    # 1 Conv2DTranspose f4x -> fup4x\n    fup4x = (Conv2DTranspose(128, kernel_size=(3, 3), strides=(2, 2), name='refine4x_deconv', padding='same', activation='relu',\n                    data_format=IMAGE_ORDERING))(f4x)\n\n    fup4x = (BatchNormalization())(fup4x)\n\n    # 1 conv f2x -> fup2x\n    fup2x =  (Conv2D(128, (3, 3), activation='relu', padding='same', name='refine2x_conv', data_format=IMAGE_ORDERING))(f2x)\n    fup2x =  (BatchNormalization())(fup2x)\n\n    # concat f2x, fup8x, fup4x\n    fconcat = (concatenate([fup8x, fup4x, fup2x], axis=-1, name='refine_concat'))\n\n    # 1x1 to map to required feature map\n    out2x = 
Conv2D(n_classes, (1, 1), activation='linear', padding='same', name='refine2x', data_format=IMAGE_ORDERING)(fconcat)\n\n    return out2x\n\n\ndef create_refine_net_bottleneck(inputFeatures, n_classes):\n    f8x, f4x, f2x = inputFeatures\n\n    # two 1x1 convs, then 4x upsample: f8x -> fup8x\n    fup8x = (Conv2D(256, kernel_size=(1, 1),  name='refine8x_1', padding='same', activation='relu', data_format=IMAGE_ORDERING))(f8x)\n    fup8x = (BatchNormalization())(fup8x)\n\n    fup8x = (Conv2D(128, kernel_size=(1, 1),  name='refine8x_2', padding='same', activation='relu', data_format=IMAGE_ORDERING))(fup8x)\n    fup8x = (BatchNormalization())(fup8x)\n\n    fup8x = UpSampling2D((4, 4), data_format=IMAGE_ORDERING)(fup8x)\n\n\n    # 1x1 conv, then 2x upsample: f4x -> fup4x\n    fup4x = (Conv2D(128, kernel_size=(1, 1), name='refine4x', padding='same', activation='relu', data_format=IMAGE_ORDERING))(f4x)\n    fup4x = (BatchNormalization())(fup4x)\n    fup4x = UpSampling2D((2, 2), data_format=IMAGE_ORDERING)(fup4x)\n\n\n    # 1x1 conv f2x -> fup2x\n    fup2x =  (Conv2D(128, (1, 1), activation='relu', padding='same', name='refine2x_conv', data_format=IMAGE_ORDERING))(f2x)\n    fup2x =  (BatchNormalization())(fup2x)\n\n    # concat fup8x, fup4x and fup2x\n    fconcat = (concatenate([fup8x, fup4x, fup2x], axis=-1, name='refine_concat'))\n\n    # 1x1 conv to map to the required number of output channels\n    out2x = Conv2D(n_classes, (1, 1), activation='linear', padding='same', name='refine2x', data_format=IMAGE_ORDERING)(fconcat)\n\n    return out2x\n\n\ndef create_stack_refinenet(inputFeatures, n_classes, layerName):\n    f8x, f4x, f2x = inputFeatures\n\n    # two 1x1 convs, then 4x upsample: f8x -> fup8x\n    fup8x = (Conv2D(256, kernel_size=(1, 1), name=layerName+'_refine8x_1', padding='same', activation='relu'))(f8x)\n    fup8x = (BatchNormalization())(fup8x)\n\n    fup8x = (Conv2D(128, kernel_size=(1, 1), name=layerName+'refine8x_2', padding='same', activation='relu'))(fup8x)\n    fup8x = (BatchNormalization())(fup8x)\n\n    out8x = fup8x\n    
fup8x = UpSampling2D((4, 4), data_format=IMAGE_ORDERING)(fup8x)\n\n    # 1x1 conv, then 2x upsample: f4x -> fup4x\n    fup4x = (Conv2D(128, kernel_size=(1, 1), name=layerName+'refine4x', padding='same', activation='relu'))(f4x)\n    fup4x = (BatchNormalization())(fup4x)\n    out4x = fup4x\n    fup4x = UpSampling2D((2, 2), data_format=IMAGE_ORDERING)(fup4x)\n\n    # 1x1 conv f2x -> fup2x\n    fup2x = (Conv2D(128, (1, 1), activation='relu', padding='same', name=layerName+'refine2x_conv'))(f2x)\n    fup2x = (BatchNormalization())(fup2x)\n\n    # concat fup8x, fup4x and fup2x\n    fconcat = (concatenate([fup8x, fup4x, fup2x], axis=-1, name=layerName+'refine_concat'))\n\n    # 1x1 conv to map to the required number of output channels\n    out2x = Conv2D(n_classes, (1, 1), activation='linear', padding='same', name=layerName+'refine2x')(fconcat)\n\n    return out8x, out4x, out2x\n\n\ndef create_global_net_dilated(lowlevelFeatures, n_classes):\n    lf2x, lf4x, lf8x, lf16x = lowlevelFeatures\n\n    o = lf16x\n\n    o = (Conv2D(256, (3, 3), dilation_rate=(2, 2), activation='relu', padding='same', name='up16x_conv', data_format=IMAGE_ORDERING))(o)\n    o = (BatchNormalization())(o)\n\n    o = (Conv2DTranspose(256, kernel_size=(3, 3), strides=(2, 2), name='upsample_16x', activation='relu', padding='same',\n                    data_format=IMAGE_ORDERING))(o)\n    o = (concatenate([o, lf8x], axis=-1))\n    o = (Conv2D(128, (3, 3), dilation_rate=(2, 2), activation='relu', padding='same', name='up8x_conv', data_format=IMAGE_ORDERING))(o)\n    o = (BatchNormalization())(o)\n    fup8x = o\n\n    o = (Conv2DTranspose(128, kernel_size=(3, 3), strides=(2, 2), name='upsample_8x', padding='same', activation='relu',\n                         data_format=IMAGE_ORDERING))(o)\n    o = (concatenate([o, lf4x], axis=-1))\n    o = (Conv2D(64, (3, 3), dilation_rate=(2, 2), activation='relu', padding='same', name='up4x_conv', data_format=IMAGE_ORDERING))(o)\n    o = (BatchNormalization())(o)\n    fup4x = o\n\n    o = 
(Conv2DTranspose(64, kernel_size=(3, 3), strides=(2, 2), name='upsample_4x', padding='same', activation='relu',\n                         data_format=IMAGE_ORDERING))(o)\n    o = (concatenate([o, lf2x], axis=-1))\n    o = (Conv2D(64, (3, 3), dilation_rate=(2, 2), activation='relu', padding='same', name='up2x_conv', data_format=IMAGE_ORDERING))(o)\n    o = (BatchNormalization())(o)\n    fup2x = o\n\n    out2x = Conv2D(n_classes, (1, 1), activation='linear', padding='same', name='out2x', data_format=IMAGE_ORDERING)(fup2x)\n    out4x = Conv2D(n_classes, (1, 1), activation='linear', padding='same', name='out4x', data_format=IMAGE_ORDERING)(fup4x)\n    out8x = Conv2D(n_classes, (1, 1), activation='linear', padding='same', name='out8x', data_format=IMAGE_ORDERING)(fup8x)\n\n    x4x = UpSampling2D((2, 2), data_format=IMAGE_ORDERING)(out8x)\n    eadd4x = Add(name='global4x')([x4x, out4x])\n\n    x2x = UpSampling2D((2, 2), data_format=IMAGE_ORDERING)(eadd4x)\n    eadd2x = Add(name='global2x')([x2x, out2x])\n\n    return (fup8x, eadd4x, eadd2x)\n\n\ndef build_network_resnet101(inputHeight, inputWidth, n_classes, frozenlayers=True, dilated=False):\n    input, lf2x, lf4x, lf8x, lf16x = load_backbone_res101net(inputHeight, inputWidth)\n\n    # global net 8x, 4x, and 2x\n    if dilated:\n        g8x, g4x, g2x = create_global_net_dilated((lf2x, lf4x, lf8x, lf16x), n_classes)\n    else:\n        g8x, g4x, g2x = create_global_net((lf2x, lf4x, lf8x, lf16x), n_classes)\n\n    # refine net, only 2x as output\n    refine2x = create_refine_net_bottleneck((g8x, g4x, g2x), n_classes)\n\n    model = Model(inputs=input, outputs=[g2x, refine2x])\n\n    adam = Adam(lr=1e-4)\n    model.compile(optimizer=adam, loss=euclidean_loss, metrics=[\"accuracy\"])\n\n    return model\n\n\ndef build_network_resnet101_stack(inputHeight, inputWidth, n_classes, nStack):\n    # backbone network\n    input, lf2x,lf4x, lf8x, lf16x = load_backbone_res101net(inputHeight, inputWidth)\n\n    # global net\n    g8x, 
g4x, g2x = create_global_net_dilated((lf2x, lf4x, lf8x, lf16x), n_classes)\n\n    s8x, s4x, s2x = g8x, g4x, g2x\n\n    outputs =  [g2x]\n    for i in range(nStack):\n        s8x, s4x, s2x =  create_stack_refinenet((s8x, s4x, s2x), n_classes, 'stack_'+str(i))\n        outputs.append(s2x)\n\n    model = Model(inputs=input, outputs=outputs)\n\n    adam = Adam(lr=1e-4)\n    model.compile(optimizer=adam, loss=euclidean_loss, metrics=[\"accuracy\"])\n    return model\n\n\ndef load_backbone_res101net(inputHeight, inputWidth):\n    from resnet101 import ResNet101\n    xresnet = ResNet101(weights='imagenet', include_top=False, input_shape=(inputHeight, inputWidth, 3))\n\n    xresnet.load_weights(\"../../data/resnet101_weights_tf.h5\", by_name=True)\n\n    lf16x = xresnet.get_layer('res4b22_relu').output\n    lf8x = xresnet.get_layer('res3b2_relu').output\n    lf4x = xresnet.get_layer('res2c_relu').output\n    lf2x = xresnet.get_layer('conv1_relu').output\n\n    # add one padding for lf4x whose shape is 127x127\n    lf4xp = ZeroPadding2D(padding=((0, 1), (0, 1)))(lf4x)\n\n    return (xresnet.input, lf2x, lf4xp, lf8x, lf16x)"
  },
  {
    "path": "src/unet/refinenet_mask_v3.py",
    "content": "\nfrom refinenet import load_backbone_res101net, create_global_net_dilated, create_stack_refinenet\nfrom keras.models import *\nfrom keras.layers import *\nfrom keras.optimizers import Adam, SGD\nfrom keras import backend as K\nimport keras\n\ndef Res101RefineNetMaskV3(n_classes, inputHeight, inputWidth, nStackNum):\n    model = build_resnet101_stack_mask_v3(inputHeight, inputWidth, n_classes, nStackNum)\n    return model\n\ndef euclidean_loss(x, y):\n    return K.sqrt(K.sum(K.square(x - y)))\n\ndef apply_mask_to_output(output, mask):\n    output_with_mask = keras.layers.multiply([output, mask])\n    return output_with_mask\n\ndef build_resnet101_stack_mask_v3(inputHeight, inputWidth, n_classes, nStack):\n\n    input_mask = Input(shape=(inputHeight//2, inputHeight//2, n_classes), name='mask')\n    input_ohem_mask = Input(shape=(inputHeight//2, inputHeight//2, n_classes), name='ohem_mask')\n\n    # backbone network\n    input_image, lf2x,lf4x, lf8x, lf16x = load_backbone_res101net(inputHeight, inputWidth)\n\n    # global net\n    g8x, g4x, g2x = create_global_net_dilated((lf2x, lf4x, lf8x, lf16x), n_classes)\n\n    s8x, s4x, s2x = g8x, g4x, g2x\n\n    g2x_mask = apply_mask_to_output(g2x, input_mask)\n\n    outputs =  [g2x_mask]\n    for i in range(nStack):\n        s8x, s4x, s2x =  create_stack_refinenet((s8x, s4x, s2x), n_classes, 'stack_'+str(i))\n        if i == (nStack-1): # last stack with ohem_mask\n            s2x_mask = apply_mask_to_output(s2x, input_ohem_mask)\n        else:\n            s2x_mask = apply_mask_to_output(s2x, input_mask)\n        outputs.append(s2x_mask)\n\n    model = Model(inputs=[input_image, input_mask, input_ohem_mask], outputs=outputs)\n\n    adam = Adam(lr=1e-4)\n    model.compile(optimizer=adam, loss=euclidean_loss, metrics=[\"accuracy\"])\n    return model"
  },
  {
    "path": "src/unet/resnet101.py",
    "content": "# -*- coding: utf-8 -*-\n\"\"\"ResNet-101 model for Keras.\n\n# Reference:\n\n- [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)\n\nSlightly modified Felix Yu's (https://github.com/flyyufelix) implementation of\nResNet-101 to have consistent API as those pre-trained models within\n`keras.applications`. The original implementation is found here\nhttps://gist.github.com/flyyufelix/65018873f8cb2bbe95f429c474aa1294#file-resnet-101_keras-py\n\nImplementation is based on Keras 2.0\n\"\"\"\nfrom keras.layers import (\n    Input, Dense, Conv2D, MaxPooling2D, AveragePooling2D, ZeroPadding2D,\n    Flatten, Activation, GlobalAveragePooling2D, GlobalMaxPooling2D, add)\nfrom keras.layers.normalization import BatchNormalization\nfrom keras.models import Model\nfrom keras import initializers\nfrom keras.engine import Layer, InputSpec\nfrom keras.engine.topology import get_source_inputs\nfrom keras import backend as K\nfrom keras.applications.imagenet_utils import _obtain_input_shape\nfrom keras.utils.data_utils import get_file\n\nimport warnings\nimport sys\nsys.setrecursionlimit(3000)\n\n\nWEIGHTS_PATH_TH = 'https://dl.dropboxusercontent.com/s/rrp56zm347fbrdn/resnet101_weights_th.h5?dl=0'\nWEIGHTS_PATH_TF = 'https://dl.dropboxusercontent.com/s/a21lyqwgf88nz9b/resnet101_weights_tf.h5?dl=0'\nMD5_HASH_TH = '3d2e9a49d05192ce6e22200324b7defe'\nMD5_HASH_TF = '867a922efc475e9966d0f3f7b884dc15'\n\n\nclass Scale(Layer):\n    '''Learns a set of weights and biases used for scaling the input data.\n    the output consists simply in an element-wise multiplication of the input\n    and a sum of a set of constants:\n\n        out = in * gamma + beta,\n\n    where 'gamma' and 'beta' are the weights and biases larned.\n\n    # Arguments\n        axis: integer, axis along which to normalize in mode 0. 
For instance,\n            if your input tensor has shape (samples, channels, rows, cols),\n            set axis to 1 to normalize per feature map (channels axis).\n        momentum: momentum in the computation of the\n            exponential average of the mean and standard deviation\n            of the data, for feature-wise normalization.\n        weights: Initialization weights.\n            List of 2 Numpy arrays, with shapes:\n            `[(input_shape,), (input_shape,)]`\n        beta_init: name of initialization function for shift parameter\n            (see [initializers](../initializers.md)), or alternatively,\n            Theano/TensorFlow function to use for weights initialization.\n            This parameter is only relevant if you don't pass a `weights`\n            argument.\n        gamma_init: name of initialization function for scale parameter (see\n            [initializers](../initializers.md)), or alternatively,\n            Theano/TensorFlow function to use for weights initialization.\n            This parameter is only relevant if you don't pass a `weights`\n            argument.\n    '''\n    def __init__(self,\n                 weights=None,\n                 axis=-1,\n                 momentum=0.9,\n                 beta_init='zero',\n                 gamma_init='one',\n                 **kwargs):\n        self.momentum = momentum\n        self.axis = axis\n        self.beta_init = initializers.get(beta_init)\n        self.gamma_init = initializers.get(gamma_init)\n        self.initial_weights = weights\n        super(Scale, self).__init__(**kwargs)\n\n    def build(self, input_shape):\n        self.input_spec = 
[InputSpec(shape=input_shape)]\n        shape = (int(input_shape[self.axis]),)\n\n        self.gamma = K.variable(\n            self.gamma_init(shape),\n            name='{}_gamma'.format(self.name))\n        self.beta = K.variable(\n            self.beta_init(shape),\n            name='{}_beta'.format(self.name))\n        self.trainable_weights = [self.gamma, self.beta]\n\n        if self.initial_weights is not None:\n            self.set_weights(self.initial_weights)\n            del self.initial_weights\n\n    def call(self, x, mask=None):\n        input_shape = self.input_spec[0].shape\n        broadcast_shape = [1] * len(input_shape)\n        broadcast_shape[self.axis] = input_shape[self.axis]\n\n        out = K.reshape(\n            self.gamma,\n            broadcast_shape) * x + K.reshape(self.beta, broadcast_shape)\n        return out\n\n    def get_config(self):\n        config = {\"momentum\": self.momentum, \"axis\": self.axis}\n        base_config = super(Scale, self).get_config()\n        return dict(list(base_config.items()) + list(config.items()))\n\n\ndef identity_block(input_tensor, kernel_size, filters, stage, block):\n    '''The identity_block is the block that has no conv layer at shortcut\n    # Arguments\n        input_tensor: input tensor\n        kernel_size: default 3, the kernel size of middle conv layer at main\n            path\n        filters: list of integers, the nb_filters of the 3 conv layers at main path\n        stage: integer, current stage label, used for generating layer names\n        block: 'a','b'..., current block label, used for generating layer names\n    '''\n    eps = 1.1e-5\n    if K.image_data_format() == 'channels_last':\n        bn_axis = 3\n    else:\n        bn_axis = 1\n    nb_filter1, nb_filter2, nb_filter3 = filters\n    conv_name_base = 'res' + str(stage) + block + '_branch'\n    bn_name_base = 'bn' + str(stage) + block + '_branch'\n    scale_name_base = 'scale' + str(stage) + block + '_branch'\n\n    x = 
Conv2D(nb_filter1, (1, 1), name=conv_name_base + '2a',\n               use_bias=False)(input_tensor)\n    x = BatchNormalization(epsilon=eps, axis=bn_axis,\n                           name=bn_name_base + '2a')(x)\n    x = Scale(axis=bn_axis, name=scale_name_base + '2a')(x)\n    x = Activation('relu', name=conv_name_base + '2a_relu')(x)\n\n    x = ZeroPadding2D((1, 1), name=conv_name_base + '2b_zeropadding')(x)\n    x = Conv2D(nb_filter2, (kernel_size, kernel_size),\n               name=conv_name_base + '2b', use_bias=False)(x)\n    x = BatchNormalization(epsilon=eps, axis=bn_axis,\n                           name=bn_name_base + '2b')(x)\n    x = Scale(axis=bn_axis, name=scale_name_base + '2b')(x)\n    x = Activation('relu', name=conv_name_base + '2b_relu')(x)\n\n    x = Conv2D(nb_filter3, (1, 1), name=conv_name_base + '2c',\n               use_bias=False)(x)\n    x = BatchNormalization(epsilon=eps, axis=bn_axis,\n                           name=bn_name_base + '2c')(x)\n    x = Scale(axis=bn_axis, name=scale_name_base + '2c')(x)\n\n    x = add([x, input_tensor], name='res' + str(stage) + block)\n    x = Activation('relu', name='res' + str(stage) + block + '_relu')(x)\n    return x\n\n\ndef conv_block(input_tensor,\n               kernel_size,\n               filters,\n               stage,\n               block,\n               strides=(2, 2)):\n    '''conv_block is the block that has a conv layer at shortcut\n    # Arguments\n        input_tensor: input tensor\n        kernel_size: default 3, the kernel size of middle conv layer at main\n            path\n        filters: list of integers, the nb_filters of the 3 conv layers at main path\n        stage: integer, current stage label, used for generating layer names\n        block: 'a','b'..., current block label, used for generating layer names\n    Note that from stage 3, the first conv layer at main path is with\n    strides=(2,2). 
And the shortcut should have strides=(2,2) as well\n    '''\n    eps = 1.1e-5\n    if K.image_data_format() == 'channels_last':\n        bn_axis = 3\n    else:\n        bn_axis = 1\n    nb_filter1, nb_filter2, nb_filter3 = filters\n    conv_name_base = 'res' + str(stage) + block + '_branch'\n    bn_name_base = 'bn' + str(stage) + block + '_branch'\n    scale_name_base = 'scale' + str(stage) + block + '_branch'\n\n    x = Conv2D(nb_filter1, (1, 1), strides=strides,\n               name=conv_name_base + '2a', use_bias=False)(input_tensor)\n    x = BatchNormalization(epsilon=eps, axis=bn_axis,\n                           name=bn_name_base + '2a')(x)\n    x = Scale(axis=bn_axis, name=scale_name_base + '2a')(x)\n    x = Activation('relu', name=conv_name_base + '2a_relu')(x)\n\n    x = ZeroPadding2D((1, 1), name=conv_name_base + '2b_zeropadding')(x)\n    x = Conv2D(nb_filter2, (kernel_size, kernel_size),\n               name=conv_name_base + '2b', use_bias=False)(x)\n    x = BatchNormalization(epsilon=eps, axis=bn_axis,\n                           name=bn_name_base + '2b')(x)\n    x = Scale(axis=bn_axis, name=scale_name_base + '2b')(x)\n    x = Activation('relu', name=conv_name_base + '2b_relu')(x)\n\n    x = Conv2D(nb_filter3, (1, 1),\n               name=conv_name_base + '2c', use_bias=False)(x)\n    x = BatchNormalization(epsilon=eps, axis=bn_axis,\n                           name=bn_name_base + '2c')(x)\n    x = Scale(axis=bn_axis, name=scale_name_base + '2c')(x)\n\n    shortcut = Conv2D(nb_filter3, (1, 1), strides=strides,\n                      name=conv_name_base + '1', use_bias=False)(input_tensor)\n    shortcut = BatchNormalization(epsilon=eps, axis=bn_axis,\n                                  name=bn_name_base + '1')(shortcut)\n    shortcut = Scale(axis=bn_axis, name=scale_name_base + '1')(shortcut)\n\n    x = add([x, shortcut], name='res' + str(stage) + block)\n    x = Activation('relu', name='res' + str(stage) + block + '_relu')(x)\n    return x\n\n\ndef 
ResNet101(include_top=True,\n              weights='imagenet',\n              input_tensor=None,\n              input_shape=None,\n              pooling=None,\n              classes=1000):\n    \"\"\"Instantiates the ResNet-101 architecture.\n\n    Optionally loads weights pre-trained on ImageNet. Note that when using\n    TensorFlow, for best performance you should set\n    `image_data_format='channels_last'` in your Keras config at\n    ~/.keras/keras.json.\n\n    The model and the weights are compatible with both TensorFlow and Theano.\n    The data format convention used by the model is the one specified in your\n    Keras config file.\n\n    Parameters\n    ----------\n        include_top: whether to include the fully-connected layer at the top of\n            the network.\n        weights: one of `None` (random initialization) or 'imagenet'\n            (pre-training on ImageNet).\n        input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n            to use as image input for the model.\n        input_shape: optional shape tuple, only to be specified if\n            `include_top` is False (otherwise the input shape has to be\n            `(224, 224, 3)` (with `channels_last` data format) or\n            `(3, 224, 224)` (with `channels_first` data format)). It should have\n            exactly 3 input channels, and width and height should be no\n            smaller than 197.\n            E.g. 
`(200, 200, 3)` would be one valid value.\n        pooling: Optional pooling mode for feature extraction when\n            `include_top` is `False`.\n            - `None` means that the output of the model will be the 4D tensor\n                output of the last convolutional layer.\n            - `avg` means that global average pooling will be applied to the\n                output of the last convolutional layer, and thus the output of\n                the model will be a 2D tensor.\n            - `max` means that global max pooling will be applied.\n        classes: optional number of classes to classify images into, only to be\n            specified if `include_top` is True, and if no `weights` argument is\n            specified.\n\n    Returns\n    -------\n        A Keras model instance.\n\n    Raises\n    ------\n        ValueError: in case of invalid argument for `weights`, or invalid input\n        shape.\n    \"\"\"\n    if weights not in {'imagenet', None}:\n        raise ValueError('The `weights` argument should be either '\n                         '`None` (random initialization) or `imagenet` '\n                         '(pre-training on ImageNet).')\n\n    if weights == 'imagenet' and include_top and classes != 1000:\n        raise ValueError('If using `weights` as imagenet with `include_top`'\n                         ' as true, `classes` should be 1000')\n\n    # Determine proper input shape\n    input_shape = _obtain_input_shape(input_shape,\n                                      default_size=224,\n                                      min_size=197,\n                                      data_format=K.image_data_format(),\n                                      require_flatten=include_top,\n                                      weights=weights)\n\n    if input_tensor is None:\n        img_input = Input(shape=input_shape, name='data')\n    else:\n        if not K.is_keras_tensor(input_tensor):\n            img_input = Input(\n                
tensor=input_tensor, shape=input_shape, name='data')\n        else:\n            img_input = input_tensor\n    if K.image_data_format() == 'channels_last':\n        bn_axis = 3\n    else:\n        bn_axis = 1\n    eps = 1.1e-5\n\n    x = ZeroPadding2D((3, 3), name='conv1_zeropadding')(img_input)\n    x = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', use_bias=False)(x)\n    x = BatchNormalization(epsilon=eps, axis=bn_axis, name='bn_conv1')(x)\n    x = Scale(axis=bn_axis, name='scale_conv1')(x)\n    x = Activation('relu', name='conv1_relu')(x)\n    x = MaxPooling2D((3, 3), strides=(2, 2), name='pool1')(x)\n\n    x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))\n    x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')\n    x = identity_block(x, 3, [64, 64, 256], stage=2, block='c')\n\n    x = conv_block(x, 3, [128, 128, 512], stage=3, block='a')\n    for i in range(1, 3):\n        x = identity_block(x, 3, [128, 128, 512], stage=3, block='b' + str(i))\n\n    x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a')\n    for i in range(1, 23):\n        x = identity_block(x, 3, [256, 256, 1024], stage=4, block='b' + str(i))\n\n    x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a')\n    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b')\n    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c')\n\n    x = AveragePooling2D((7, 7), name='avg_pool')(x)\n\n    if include_top:\n        x = Flatten()(x)\n        x = Dense(classes, activation='softmax', name='fc1000')(x)\n    else:\n        if pooling == 'avg':\n            x = GlobalAveragePooling2D()(x)\n        elif pooling == 'max':\n            x = GlobalMaxPooling2D()(x)\n\n    # Ensure that the model takes into account\n    # any potential predecessors of `input_tensor`.\n    if input_tensor is not None:\n        inputs = get_source_inputs(input_tensor)\n    else:\n        inputs = img_input\n    # Create model.\n    model = Model(inputs, x, 
name='resnet101')\n\n    '''\n    # load weights\n    if weights == 'imagenet':\n        filename = 'resnet101_weights_{}.h5'.format(K.image_dim_ordering())\n        if K.backend() == 'theano':\n            path = WEIGHTS_PATH_TH\n            md5_hash = MD5_HASH_TH\n        else:\n            path = WEIGHTS_PATH_TF\n            md5_hash = MD5_HASH_TF\n        weights_path = get_file(\n            fname=filename,\n            origin=path,\n            cache_subdir='models',\n            md5_hash=md5_hash,\n            hash_algorithm='md5')\n        model.load_weights(weights_path, by_name=True)\n\n        if K.image_data_format() == 'channels_first' and K.backend() == 'tensorflow':\n            warnings.warn('You are using the TensorFlow backend, yet you '\n                          'are using the Theano '\n                          'image data format convention '\n                          '(`image_data_format=\"channels_first\"`). '\n                          'For best performance, set '\n                          '`image_data_format=\"channels_last\"` in '\n                          'your Keras config '\n                          'at ~/.keras/keras.json.')\n    '''\n    return model\n"
  },
  {
    "path": "submission/placeholder.txt",
    "content": ""
  },
  {
    "path": "trained_models/placeholder.txt",
    "content": ""
  }
]