[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 Fei Wang\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# WiFi Perception\nCode of paper, Person-in-WiFi: Fine-grained Person Perception using WiFi. In this paper, we tend to use WiFi to capture human pose and body. The paper is under review, due to IRB issues, we have not made code publicly. Still, we release data collection tools in this repo.\n\n\n\n# Updates\n\n[CSI tool](https://github.com/spanev/linux-80211n-csitool) now supports Ubuntu 18.04, Sep. 2019.\n\n\n# System\nWe use camera to capture human as annotations. Specifically, we use a Mask R-CNN implementation, [detectorch](https://github.com/ignacio-rocco/detectorch) to prepare human mask, and [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) python-api to prepare human pose, including pose coordinate arrays, joint heat maps, and part affinity field, with help of OpenPose developers, [Gines](https://github.com/gineshidalgo99) and [Raaj](https://github.com/soulslicer). \n\nMeanwhile, we record WiFi signals to train a deep network.\n\n![system](figs/systems.png)\n\n# Results\n![result](figs/result.png)\n"
  },
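  {
    "path": "examples/joint_heatmap_sketch.py",
    "content": "\"\"\"Minimal sketch (not part of the original release) of how a joint heat map\ncan be rendered from an OpenPose keypoint, as mentioned in the System section\nof the README. The 720x1280 image size and the Gaussian sigma are assumptions\nfor illustration, not values taken from the paper.\"\"\"\nimport numpy as np\n\n\ndef joint_heatmap(x, y, height=720, width=1280, sigma=6.0):\n    \"\"\"Render a 2D Gaussian centered at the joint coordinate (x, y).\"\"\"\n    ys, xs = np.mgrid[0:height, 0:width]\n    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))\n\n\nif __name__ == '__main__':\n    hm = joint_heatmap(640.0, 360.0)\n    print(hm.shape, hm.max())  # (720, 1280) 1.0\n"
  },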
  {
    "path": "datacollectioncode/videowithtimestamp/README.md",
    "content": "# Install OpenCV on Ubuntu\n\nSee the [tutorial](https://www.learnopencv.com/install-opencv3-on-ubuntu/)\n\n## Note:\nStep 4.1: Download opencv from Github\n\n```\ngit clone https://github.com/opencv/opencv.git\n\ncd opencv \n\ngit checkout 3.1.0\n\ncd ..\n```\n\nStep 4.2: Download opencv_contrib from Github\n```\ngit clone https://github.com/opencv/opencv_contrib.git\n\ncd opencv_contrib\n\ngit checkout 3.1.0\n\ncd ..\n```\n\nWe have tried a lot of OpenCV version in python 2, 3, or anconda python, and finally found the **3.1.0** can adjust fps and match with the command of **datetime.datetime.now()**.\n\n"
  },
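  {
    "path": "datacollectioncode/videowithtimestamp/check_opencv_sketch.py",
    "content": "\"\"\"Hypothetical sanity-check sketch (not part of the original tools): prints the\ninstalled OpenCV version and the FPS the camera actually reports, per the note\nin this folder's README. Device index 0 and the 20 FPS target simply mirror\nvideoWrite-spyder.py.\"\"\"\nimport cv2\nimport datetime\n\n\nif __name__ == '__main__':\n    print('OpenCV version:', cv2.__version__)  # the README recommends 3.1.0\n\n    cap = cv2.VideoCapture(0)\n    cap.set(cv2.CAP_PROP_FPS, 20)\n    print('Camera reports FPS:', cap.get(cv2.CAP_PROP_FPS))\n\n    # one wall-clock timestamp per grabbed frame, as in videoWrite-spyder.py\n    ret, _ = cap.read()\n    print('Frame grabbed:', ret, 'at', datetime.datetime.now())\n    cap.release()\n"
  },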
  {
    "path": "datacollectioncode/videowithtimestamp/videoWrite-spyder.py",
    "content": "#!/usr/bin/python\n\nimport cv2\nimport datetime\nimport time\n#import sys\n\n\nif __name__ == \"__main__\":\n\n    try:\n    \n        fps = 20\n        frameWidth  = 1280\n        frameHeight = 720\n        \n        cap = cv2.VideoCapture(0)\n        cap.set(cv2.CAP_PROP_FRAME_WIDTH,  frameWidth)\n        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frameHeight)\n#\ttime.sleep()\n        cap.set(cv2.CAP_PROP_FPS, fps)\n\n\tcameraFPS = cap.get(cv2.CAP_PROP_FPS)\n\n\tprint(\"FPS:\", cameraFPS)\n\tprint(\"Frame size:\", cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n        \n        #fourcc = cv2.VideoWriter_fourcc(*'MJPG') # + .avi works, .mp4 not works\n        #fourcc = cv2.cv.CV_FOURCC(*'XVID')MP4V\n        \n        fourcc = cv2.VideoWriter_fourcc(*'XVID')\n        #fourcc = cv2.VideoWriter_fourcc(*'MP4V')\n        videofile = cv2.VideoWriter('video.avi',\n                                    fourcc,\n                                    int(cameraFPS),\n                                    (frameWidth, frameHeight))\n        \n        \n        \n        #file = open('/media/csipose1/XPG SD700X/time', 'w+')\n        \n        \n        with open('VideoTimestamp.txt', 'w+') as file:\n            while(cap.isOpened()):\n                ret, frame = cap.read()\n                #time.sleep(delay)\n                t = datetime.datetime.now()\n                #t = time.clock()\n                #print(ret)\n                if ret:\n                   file.write(str(t)+'\\n')\n                   print(str(t))\n                   videofile.write(frame)\n                   # cv2.imshow('Camera', frame)\n                    \n                   #if cv2.waitKey(1) & 0xFF == ord('q'):\n                   #    break\n                else:\n                    break\n    \n        \n    except KeyboardInterrupt:\n        print(\"Quit\")\n        cap.release()\n        videofile.release()\n        #cv2.destroyAllWindows()\n        file.close()\n"
  },
  {
    "path": "datacollectioncode/wifiwithtimestamp/Makefile",
    "content": "all: print_packets get_first_bfee parse_log log_to_file nl_bf_to_eff log_to_file_time\n\nKERNEL = $(strip $(shell uname -r))\nKERNEL_SOURCE = /lib/modules/$(KERNEL)/build\n\nifneq ($(wildcard $(KERNEL_SOURCE)/include/uapi),)\n        KERNEL_HEADERS = $(KERNEL_SOURCE)/include/uapi\nelse ifneq ($(wildcard $(KERNEL_SOURCE)/include),)\n        KERNEL_HEADERS = $(KERNEL_SOURCE)/include\nelse\n        $(error Kernel headers not found)\nendif\n\nCFLAGS = -Wall -Werror\nLDLIBS = -lm\nCC = gcc\n\nnl_bf_to_eff: nl_bf_to_eff.c bf_to_eff.o iwl_nl.o util.o q_approx.o\n\nlog_to_file.c: iwl_connector.h\n\niwl_nl.c: iwl_connector.h\n\niwl_connector.h: connector_users.h\n\nconnector_users.h: $(KERNEL_HEADERS)/linux/connector.h\n\techo \"#undef CN_NETLINK_USERS\" > connector_users.h\n\tgrep \"#define CN_NETLINK_USERS\" $(KERNEL_HEADERS)/linux/connector.h >> connector_users.h\n\nclean:\n\trm -f *.o get_first_bfee log_to_file print_packets parse_log nl_bf_to_eff connector_users.h log_to_file_time\n"
  },
  {
    "path": "datacollectioncode/wifiwithtimestamp/README.md",
    "content": "# Record WiFi with Unix Time-stamp \n\n1. Follow [Linux CSI Tool Installation Instructions](http://dhalperi.github.io/linux-80211n-csitool/installation.html)\n\n2. Before run the \n```\n4. Build the Userspace Logging Tool\n\nBuild log_to_file, a command line tool that writes CSI obtained via the driver to a file:\n\nmake -C linux-80211n-csitool-supplementary/netlink\n```\nput **log_to_file_time.c** and **Makefile** into the folder **netlink** first. \n\n\n## How to use: with log_to_file_time.c, one can log time-stamps\n \n ```\n sudo ../netlink/log_to_file_time ~/Desktop/log.dat ~/Desktop/time.txt \n \n````\n"
  },
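  {
    "path": "datacollectioncode/wifiwithtimestamp/read_time_txt_sketch.py",
    "content": "\"\"\"Hypothetical helper sketch (not part of the original tools) for the timestamp\nfile written by log_to_file_time.c, which emits one 'HH:MM:SS.uuuuuu' line per\nreceived CSI packet. It parses the lines back into datetime.time objects so the\nCSI packets can later be paired with video frame timestamps. The default file\nname 'time.txt' is only an example; pass whatever path was given to\nlog_to_file_time.\"\"\"\nimport datetime\nimport sys\n\n\ndef read_timestamps(path):\n    \"\"\"Parse the HH:MM:SS.uuuuuu records written by log_to_file_time.\"\"\"\n    times = []\n    with open(path, 'r') as f:\n        for line in f:\n            line = line.strip()\n            if line:\n                times.append(datetime.datetime.strptime(line, '%H:%M:%S.%f').time())\n    return times\n\n\nif __name__ == '__main__':\n    stamps = read_timestamps(sys.argv[1] if len(sys.argv) > 1 else 'time.txt')\n    print(len(stamps), 'CSI timestamps read')\n"
  },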
  {
    "path": "datacollectioncode/wifiwithtimestamp/log_to_file_time.c",
    "content": "/*\n * (c) 2008-2011 Daniel Halperin <dhalperi@cs.washington.edu>\n */\n#include \"iwl_connector.h\"\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n#include <unistd.h>\n#include <arpa/inet.h>\n#include <sys/socket.h>\n#include <linux/netlink.h>\n#include <sys/time.h>\n#include <time.h>\n\n#define MAX_PAYLOAD 2048\n#define SLOW_MSG_CNT 1\n\nint sock_fd = -1;\t\t\t\t\t\t\t// the socket\nFILE* out = NULL;\nFILE* out_time = NULL;\n\nvoid check_usage(int argc, char** argv);\n\nFILE* open_file(char* filename, char* spec);\n\nvoid caught_signal(int sig);\n\nvoid exit_program(int code);\nvoid exit_program_err(int code, char* func);\n\nint main(int argc, char** argv)\n{\n\t/* Local variables */\n\tstruct sockaddr_nl proc_addr, kern_addr;\t// addrs for recv, send, bind\n\tstruct cn_msg *cmsg;\n\tchar buf[4096];\n\tint ret;\n\tunsigned short l, l2;\n\tint count = 0;\n\t\n\t/* Local timestamp variables */\n\tstruct timeval tv;\n \tstruct tm* tm;\n \tchar time_buffer[30];\n\n\t/* Make sure usage is correct */\n\tcheck_usage(argc, argv);\n\n\t/* Open and check log file */\n\tout = open_file(argv[1], \"w\");\n        out_time = open_file(argv[2], \"w\");\n\n\t/* Setup the socket */\n\tsock_fd = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);\n\tif (sock_fd == -1)\n\t\texit_program_err(-1, \"socket\");\n\n\t/* Initialize the address structs */\n\tmemset(&proc_addr, 0, sizeof(struct sockaddr_nl));\n\tproc_addr.nl_family = AF_NETLINK;\n\tproc_addr.nl_pid = getpid();\t\t\t// this process' PID\n\tproc_addr.nl_groups = CN_IDX_IWLAGN;\n\tmemset(&kern_addr, 0, sizeof(struct sockaddr_nl));\n\tkern_addr.nl_family = AF_NETLINK;\n\tkern_addr.nl_pid = 0;\t\t\t\t\t// kernel\n\tkern_addr.nl_groups = CN_IDX_IWLAGN;\n\n\t/* Now bind the socket */\n\tif (bind(sock_fd, (struct sockaddr *)&proc_addr, sizeof(struct sockaddr_nl)) == -1)\n\t\texit_program_err(-1, \"bind\");\n\n\t/* And subscribe to netlink group */\n\t{\n\t\tint on = proc_addr.nl_groups;\n\t\tret = setsockopt(sock_fd, 270, NETLINK_ADD_MEMBERSHIP, &on, sizeof(on));\n\t\tif (ret)\n\t\t\texit_program_err(-1, \"setsockopt\");\n\t}\n\n\t/* Set up the \"caught_signal\" function as this program's sig handler */\n\tsignal(SIGINT, caught_signal);\n\n\t/* Poll socket forever waiting for a message */\n\twhile (1)\n\t{\n\t\t/* Receive from socket with infinite timeout */\n\t\tret = recv(sock_fd, buf, sizeof(buf), 0);\n\t\tif (ret == -1)\n\t\t\texit_program_err(-1, \"recv\");\n\t\t/* Pull out the message portion and print some stats */\n\t\tcmsg = NLMSG_DATA(buf);\n\t\tif (count % SLOW_MSG_CNT == 0)\n\t\t\tprintf(\"received %d bytes: id: %d val: %d seq: %d clen: %d\\n\", cmsg->len, cmsg->id.idx, cmsg->id.val, cmsg->seq, cmsg->len);\n\t\t/* Log the data to file */\n\t\tl = (unsigned short) cmsg->len;\n\t\tl2 = htons(l);\n\t\tfwrite(&l2, 1, sizeof(unsigned short), out);\n\n\t\t/* write timestamp */\n\t\tgettimeofday(&tv, NULL);\n \t\ttm=localtime(&tv.tv_sec);\n \t\tsprintf(time_buffer, \"%02d:%02d:%02d.%06ld\\n\", tm->tm_hour, tm->tm_min, tm->tm_sec, tv.tv_usec);\n \t\tfwrite(&time_buffer, 1, 16, out_time);\n\n\t\tret = fwrite(cmsg->data, 1, l, out);\n\t\tif (count % 100 == 0)\n\t\t\tprintf(\"wrote %d bytes [msgcnt=%u]\\n\", ret, count);\n\t\t++count;\n\t\tif (ret != l)\n\t\t\texit_program_err(1, \"fwrite\");\n\t}\n\n\texit_program(0);\n\treturn 0;\n}\n\nvoid check_usage(int argc, char** argv)\n{\n\tif (argc != 3)\n\t{\n\t\tfprintf(stderr, \"Usage: %s <output_file>\\n\", 
argv[0]);\n\t\texit_program(1);\n\t}\n}\n\nFILE* open_file(char* filename, char* spec)\n{\n\tFILE* fp = fopen(filename, spec);\n\tif (!fp)\n\t{\n\t\tperror(\"fopen\");\n\t\texit_program(1);\n\t}\n\treturn fp;\n}\n\nvoid caught_signal(int sig)\n{\n\tfprintf(stderr, \"Caught signal %d\\n\", sig);\n\texit_program(0);\n}\n\nvoid exit_program(int code)\n{\n\tif (out)\n\t{\n\t\tfclose(out);\n\t\tfclose(out_time);\n\t\tout = NULL;\n\t}\n\tif (sock_fd != -1)\n\t{\n\t\tclose(sock_fd);\n\t\tsock_fd = -1;\n\t}\n\texit(code);\n}\n\nvoid exit_program_err(int code, char* func)\n{\n\tperror(func);\n\texit_program(code);\n}\n"
  },
  {
    "path": "dataprocessing/demo_FPN_video_new.py",
    "content": "\n# coding: utf-8\n\n# # Imports\n\n# In[1]:\n\n\nimport torch\nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader\n\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.io as sio\nimport sys\nsys.path.insert(0, \"lib/\")\nfrom utils.preprocess_sample import preprocess_sample\nfrom utils.collate_custom import collate_custom\nfrom utils.utils import to_cuda_variable\nfrom utils.json_dataset_evaluator import evaluate_boxes,evaluate_masks\nfrom model.detector import detector\nimport utils.result_utils as result_utils\nimport utils.vis as vis_utils\nimport skimage.io as io\nfrom utils.blob import prep_im_for_blob,im_list_to_blob\nimport utils.dummy_datasets as dummy_datasets\nfrom utils.multilevel_rois import add_multilevel_rois_for_test\nimport cv2\nimport os\n\nfrom utils.selective_search import selective_search # needed for proposal extraction in Fast RCNN\nfrom PIL import Image\n\ntorch_ver = torch.__version__[:3]\n\n\n# # Parameters\n\n# In[2]:\n\n\n# COCO minival2014 dataset path\ncoco_ann_file='datasets/data/coco/annotations/instances_minival2014.json'\nimg_dir='datasets/data/coco/val2014'\n\n# model type\nmodel_type='mask' # change here\n\n# pretrained model\nif model_type=='mask':\n    arch='resnet101'\n    # https://s3-us-west-2.amazonaws.com/detectron/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl\n    pretrained_model_file = 'files/trained_models/mask_fpn/model_final.pkl'\n    use_rpn_head = True\n    use_mask_head = True\nelif model_type=='faster':\n    arch='resnet50'\n    # https://s3-us-west-2.amazonaws.com/detectron/35857389/12_2017_baselines/e2e_faster_rcnn_R-50-FPN_2x.yaml.01_37_22.KSeq0b5q/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl\n    pretrained_model_file = 'files/trained_models/faster/e2e_faster_rcnn_R-50-FPN_2x.pkl'\n    use_rpn_head = True\n    use_mask_head = False\nelif model_type=='fast':\n    arch='resnet50'\n    # https://s3-us-west-2.amazonaws.com/detectron/36225249/12_2017_baselines/fast_rcnn_R-50-FPN_2x.yaml.08_40_18.zoChak1f/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl\n    pretrained_model_file = 'files/trained_models/fast/fast_rcnn_R-50-FPN_2x.pkl'\n    use_rpn_head = False\n    use_mask_head = False\n\n\n# # Create detector model\n\n# In[5]:\n\n\nmodel = detector(arch=arch,\n                 detector_pkl_file=pretrained_model_file,\n                 conv_body_layers=['conv1','bn1','relu','maxpool','layer1','layer2','layer3','layer4'],\n                 conv_head_layers='two_layer_mlp',\n                 fpn_layers=['layer1','layer2','layer3','layer4'],\n                 fpn_extra_lvl=True,\n                 roi_height=7,\n                 roi_width=7,\n                 roi_spatial_scale=[0.25,0.125,0.0625,0.03125],\n                 roi_sampling_ratio=2,\n                 use_rpn_head = use_rpn_head,\n                 use_mask_head = use_mask_head,\n                 mask_head_type = '1up4convs')\nmodel = model.cuda()\n\n\ndef eval_model(sample):\n    class_scores, bbox_deltas, rois, img_features = model(sample['image'],\n                                                          sample['proposal_coords'],\n                                                          scaling_factor=sample['scaling_factors'])\n    return class_scores, bbox_deltas, rois, 
img_features\n\n# # Load image\n\n# In[4]:\nimport glob\nvideo_dir = '/media/delight-wifi/My Passport/Dataset/WiFiPose-Video/' # chage dir\n\n\n\nvideos = glob.glob(video_dir+'*.avi')\nvideo_num = len(videos)\n\n\n\n# image_fn = 'demo/33823288584_1d21cf0a26_k.jpg'\n\n# Load image\noutput_dir = '/media/delight-wifi/My Passport/Dataset/video-mask/'\nfor video_index in range(video_num):\n    \n\n    video_fn = videos[video_index]\n    video_name = video_fn[len(video_dir):]\n    print(video_name)\n    video_name = video_fn[len(video_dir):-4]\n    outputVideo_dir = output_dir + video_name + '_mask/'\n    \n    if not os.path.exists(outputVideo_dir):\n        os.makedirs(outputVideo_dir)\n    print(video_fn)\t\n    video = cv2.VideoCapture(video_fn)\n    frame_index = 0\n    while(video.isOpened()):\n        #print('hello')\n        frame_index = frame_index + 1\n        ret, image = video.read()\n        \n        if ret:\n\n\n            if len(image.shape) == 2: # convert grayscale to RGB\n                image = np.repeat(np.expand_dims(image,2), 3, axis=2)\n            orig_im_size = image.shape\n            # Preprocess image\n            im_list, im_scales = prep_im_for_blob(image)\n            # Build sample\n            sample = {}\n            # im_list_to blob swaps channels and adds stride in case of fpn\n            fpn_on=True\n            sample['image'] = torch.FloatTensor(im_list_to_blob(im_list,fpn_on))\n            sample['scaling_factors'] = im_scales[0]\n            sample['original_im_size'] = torch.FloatTensor(orig_im_size)\n          # Extract proposals\n            if model_type=='fast':\n              # extract proposals using selective search (xmin,ymin,xmax,ymax format)\n                rects = selective_search(pil_image=Image.fromarray(image),quality='f')\n                sample['proposal_coords']=torch.FloatTensor(preprocess_sample().remove_dup_prop(rects)[0])*im_scales[0]\n            else:\n                sample['proposal_coords']=torch.FloatTensor([-1]) # dummy value\n            # Convert to cuda variable\n            sample = to_cuda_variable(sample)\n\n\n\n\n\n        # # Evaluate\n\n        # In[8]:\n\n\n\n\n\n        # In[9]:\n\n\n            if torch_ver==\"0.4\":\n                with torch.no_grad():\n                    class_scores,bbox_deltas,rois,img_features=eval_model(sample)\n            else:\n                class_scores,bbox_deltas,rois,img_features=eval_model(sample)\n\n        # postprocess output:\n        # - convert coordinates back to original image size,\n        # - treshold proposals based on score,\n        # - do NMS.\n            scores_final, boxes_final, boxes_per_class = result_utils.postprocess_output(rois,\n                                                                            sample['scaling_factors'],\n                                                                            sample['original_im_size'],\n                                                                            class_scores,\n                                                                            bbox_deltas)\n\n            if model_type=='mask':\n              # compute masks\n                boxes_final_multiscale = add_multilevel_rois_for_test({'rois': boxes_final*sample['scaling_factors']},'rois')\n                boxes_final_multiscale_th = []\n                for k in boxes_final_multiscale.keys():\n                    if len(boxes_final_multiscale[k])>0 and 'rois_fpn' in k:\n                        
boxes_final_multiscale_th.append(Variable(torch.cuda.FloatTensor(boxes_final_multiscale[k])))\n                    elif len(boxes_final_multiscale[k])==0 and 'rois_fpn' in k:\n                        boxes_final_multiscale_th.append(None)\n                rois_idx_restore_th = Variable(torch.cuda.FloatTensor(boxes_final_multiscale['rois_idx_restore_int32']))\n                masks=model.mask_head(img_features,boxes_final_multiscale_th,rois_idx_restore_th.long())\n              # postprocess mask output:\n                h_orig = int(sample['original_im_size'].squeeze()[0].data.cpu().numpy().item())\n                w_orig = int(sample['original_im_size'].squeeze()[1].data.cpu().numpy().item())\n                cls_segms = result_utils.segm_results(boxes_per_class, masks.cpu().data.numpy(), boxes_final, h_orig, w_orig,\n                                                    M=28) # M: Mask RCNN resolution\n            else:\n                cls_segms = None\n\n            # sio.savemat(outputVideo_dir + str(frame_index) + '.mat', {'boxes_final':boxes_final,'cls_segms':cls_segms,'scores_final':scores_final,'boxes_per_class':boxes_per_class})\n\n            mask = vis_utils.return_image_mask(\n                image,  # BGR -> RGB for visualization\n                str(frame_index),\n                outputVideo_dir,\n                boxes_per_class,\n                cls_segms,\n                None,\n           #     dataset=dummy_datasets.get_coco_dataset(),\n            #    box_alpha=0.3,\n             #   show_class=True,\n                thresh=0.7\n              #  kp_thresh=2,\n               # show=True\n            )\n            # print(boxes_per_class.shape)\n            person_bb = boxes_per_class[1]\n            # print(np.shape(boxes_per_class))\n            boxes = []\n            for person_index in range(len(person_bb)):\n                if person_bb[person_index, -1] > 0.9:\n                    boxes = np.concatenate((boxes, person_bb[person_index, :]), axis=0)\n            #    boxes = boxes.reshape(-1, 5)\n            print(video_name, frame_index)\n\n            masks = []\n            if len(boxes) > 0:\n                boxes = boxes.reshape(-1, 5)\n                for person_index in range(len(boxes)):\n                    temp_box = np.zeros([720, 1280], dtype=np.int8)\n                    h_min = int(np.ceil(boxes[person_index, 1] + 0.01) - 1)\n                    h_max = int(np.floor(boxes[person_index, 3]))\n                    w_min = int(np.ceil(boxes[person_index, 0] + 0.01) - 1)\n                    w_max = int(np.floor(boxes[person_index, 2]))\n                    temp_box[h_min:h_max, w_min:w_max] = 1\n                    # temp_box[0, np.ceil(boxes[person_index, 1] + 0.01)-1:np.floor(boxes[person_index,3]), np.ceil(boxes[person_index,0]+0.01):np.floor(boxes[person_index,2]) ]=1\n\n                    mask_num = len(mask)\n                    # b = mask[0]\n                    # print(b)\n                    # print(np.shape(mask))\n                    iou = np.zeros(mask_num)\n                    for mask_index in range(mask_num):\n                        iou[mask_index] = np.sum(mask[mask_index] * temp_box)\n                    idx = np.argmax(iou)\n\n                    if person_index == 0:\n                        masks = mask[idx].reshape(1, 720, 1280)\n                    else:\n                        masks = np.concatenate((masks, mask[idx].reshape(1, 720, 1280)), axis=0)\n            # if not os.path.exists('/media/delight-wifi/My 
Passport/Dataset/video-mask/' + video_name + '_mask'):\n            #     os.mkdir('/media/delight-wifi/My Passport/Dataset/video-mask/' + video_name + '_mask')\n            sio.savemat(outputVideo_dir + video_name + '_' + str(frame_index + 1) + '.mat', {'boxes': boxes, 'masks': masks})\n\n        else:\n            video.release()\n\nprint('Done!')\n\n\n\n\n\n\n\n"
  },
  {
    "path": "dataprocessing/getHumanMaskandBbox.py",
    "content": "import glob\nimport scipy.io as sio\nimport cv2\nimport numpy as np\n\nfile_name = '10'\n\nframe_dir = '/data/feiw/oct17outVideo/oct17set' + file_name + '/'\nframes = glob.glob(frame_dir + '*.mat')\nframe_num = len(frames)/2\n\ncap = cv2.VideoCapture('/home/feiw/detectorch/demo/oct17video/oct17set'+ file_name + '.avi')\nvideo_frame_num = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))\n\nif frame_num==video_frame_num:\n    print('frame equals!')\nelse:\n    print('frame doesnot equal!')\n\nfor frame_index in range(int(frame_num)):\n    bb = sio.loadmat(frame_dir + str(frame_index+1)+'.mat')\n    person_bb = bb['boxes_per_class'][0,1]\n    mask = sio.loadmat(frame_dir + str(frame_index+1)+'.MASK.mat')\n    mask = mask['mask']\n\n    boxes = []\n    for person_index in range(len(person_bb)):\n        if person_bb[person_index,-1] > 0.9:\n            boxes = np.concatenate((boxes, person_bb[person_index,:]), axis=0)\n#    boxes = boxes.reshape(-1, 5)\n    print('oct17set'+file_name,frame_index)\n\n    masks = []\n    if len(boxes)>0:\n       boxes = boxes.reshape(-1, 5)         \n       for person_index in range(len(boxes)):\n            temp_box = np.zeros([720,1280], dtype=np.int8)\n            h_min = int(np.ceil(boxes[person_index, 1] + 0.01)-1)\n            h_max = int(np.floor(boxes[person_index, 3]))\n            w_min = int(np.ceil(boxes[person_index, 0] + 0.01)-1)\n            w_max = int(np.floor(boxes[person_index, 2]))\n            temp_box[h_min:h_max, w_min:w_max] = 1\n            # temp_box[0, np.ceil(boxes[person_index, 1] + 0.01)-1:np.floor(boxes[person_index,3]), np.ceil(boxes[person_index,0]+0.01):np.floor(boxes[person_index,2]) ]=1\n\n            mask_num = len(mask)\n            iou = np.zeros(mask_num)\n            for mask_index in range(mask_num):\n                iou[mask_index] = np.sum(mask[mask_index,:,:] * temp_box)\n            idx = np.argmax(iou)\n\n            if person_index==0:\n                masks = mask[idx,:,:].reshape(1,720,1280)\n            else:\n                masks = np.concatenate((masks, mask[idx,:,:].reshape(1,720,1280)), axis=0)\n\n    sio.savemat('/data/feiw/oct17outVideo/oct17set'+file_name+'_clean/oct17set'+ file_name+'_' + str(frame_index+1)+'.mat', {'boxes':boxes, 'masks':masks})\n\nprint('oct17set'+file_name+' saved succeed!')\n"
  },
  {
    "path": "dataprocessing/poseArrayAlign.m",
    "content": "clear\r\n% folder_list = {'E:\\oct17\\frame_csi_hm_mask_bb_array_80train\\', 'E:\\oct17\\frame_csi_hm_mask_bb_array_20test\\', ...\r\n%     'E:\\sep12\\frame_csi_hm_mask_bb_array_80train\\', 'E:\\sep12\\frame_csi_hm_mask_bb_array_20test\\'};\r\n\r\nfolder_list = {'/media/feiw/New Volume1/wifiposedata/train80/'};\r\n\r\ncolor = rand([9,3]);\r\n\r\n% for folder_name = folder_list\r\n%     folder_name{1}\r\n% end\r\n\r\nfor folder_name = folder_list\r\n\r\n    files = dir([folder_name{1}, '*.mat']);\r\n    file_num = length(files);\r\n    \r\n    for file_index = 1:file_num\r\n        [folder_name{1}, files(file_index).name]\r\n        %load([folder_name{1}, files(file_index).name], 'array', 'boxes');\r\n        load([folder_name{1}, files(file_index).name], 'boxes');\r\n        index = getIndex(files(file_index).name);\r\n        if files(file_index).name(10) == '0'\r\n            load(['/media/feiw/New Volume1/poseArray/coco/', files(file_index).name(1:10), '/',...\r\n                files(file_index).name(1:10), '_', index, '.mat'], 'coco_pose');\r\n        else\r\n            load(['/media/feiw/New Volume1/poseArray/coco/', files(file_index).name(1:9), '/',...\r\n                files(file_index).name(1:9), '_', index, '.mat'], 'coco_pose');\r\n        end\r\n        \r\n        array = coco_pose;\r\n        boxes_num = size(boxes,1);\r\n        openpose_array_num = size(array,1);\r\n\r\n        if boxes_num>0&&openpose_array_num>0  %%%% if mask rcnn has boxes and openpose has joints\r\n           \r\n            %% if image having 4 persons start \r\n            if ~isempty(strfind(files(file_index).name, 'four')) %% if image having 4 persons\r\n                if size(boxes,1)>4\r\n                   %%%%%%% important get the largest 'four' boxes\r\n                   box_size = (boxes(:,3)-boxes(:,1)) .* (boxes(:,4)-boxes(:,2)); % boxes size by width.*height\r\n                   [~, idx] = sort(box_size); %%%\r\n                   boxes = boxes(idx(1:4),:); %%% get the largest 2 boxes\r\n                   %%%%%%%\r\n                   boxes_num = size(boxes,1);\r\n                   openpose_array = zeros([boxes_num,18,3]); % creat a list to save cressponding array\r\n                                     \r\n                   %% align bounding box which contains most\r\n                   % seleting joints\r\n                   for boxes_index = 1:boxes_num \r\n                       count = zeros([1, openpose_array_num]);\r\n                           for openpose_array_index = 1:openpose_array_num\r\n                               %%% counting the number of in-boundingbox\r\n                               %%% joints\r\n                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...\r\n                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);\r\n                                count(openpose_array_index) = sum(double(temp==4));\r\n                                %%%% \r\n                           end\r\n                           [~, idx] = max(count); %% which boundingbox ha\r\n                       openpose_array(boxes_index,:,:) = array(idx,:,:);    \r\n                   end\r\n                   \r\n                else\r\n                   %% align bounding box which contains most\r\n                   % seleting joints\r\n                   openpose_array = zeros([boxes_num,18,3]); % creat a list to save cressponding array\r\n                   
for boxes_index = 1:boxes_num \r\n                       count = zeros([1, openpose_array_num]);\r\n                           for openpose_array_index = 1:openpose_array_num\r\n                               %%% counting the number of in-boundingbox\r\n                               %%% joints\r\n                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...\r\n                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);\r\n                                count(openpose_array_index) = sum(double(temp==4));\r\n                                %%%% \r\n                           end\r\n                           [~, idx] = max(count); %% which boundingbox \r\n                       openpose_array(boxes_index,:,:) = array(idx,:,:);    \r\n                   end\r\n                    \r\n                end\r\n            \r\n%         imshow(imresize(frame,[720,1280])); hold on;\r\n%         \r\n%         for i = 1:boxes_num\r\n%            rectangle('Position', [boxes(i,1:2) boxes(i,3:4)-boxes(i,1:2)], 'EdgeColor', color(i,:));\r\n%            scatter(squeeze(openpose_array(i,:,1)), squeeze(openpose_array(i,:,2)), 'MarkerEdgeColor', color(i,:) );\r\n%             \r\n%         end\r\n\r\n            %% if image having 4 persons\r\n            elseif ~isempty(strfind(files(file_index).name, 'five')) % if image having 5 persons\r\n                if size(boxes,1)>5\r\n                   %%%%%%% important get the largest 'four' boxes\r\n                   box_size = (boxes(:,3)-boxes(:,1)) .* (boxes(:,4)-boxes(:,2)); % boxes size by width.*height\r\n                   [~, idx] = sort(box_size); %%%\r\n                   boxes = boxes(idx(1:5),:); %%% get the largest 2 boxes\r\n                   %%%%%%%\r\n                   boxes_num = size(boxes,1);\r\n                   openpose_array = zeros([boxes_num,18,3]); % creat a list to save cressponding array\r\n                                     \r\n                   %% align bounding box which contains most\r\n                   % seleting joints\r\n                   for boxes_index = 1:boxes_num \r\n                       count = zeros([1, openpose_array_num]);\r\n                           for openpose_array_index = 1:openpose_array_num\r\n                               %%% counting the number of in-boundingbox\r\n                               %%% joints\r\n                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...\r\n                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);\r\n                                count(openpose_array_index) = sum(double(temp==4));\r\n                                %%%% \r\n                           end\r\n                           [~, idx] = max(count); %% which boundingbox ha\r\n                       openpose_array(boxes_index,:,:) = array(idx,:,:);    \r\n                   end\r\n                   \r\n                else\r\n                   %% align bounding box which contains most\r\n                   % seleting joints\r\n                   openpose_array = zeros([boxes_num,18,3]); % creat a list to save cressponding array\r\n                   for boxes_index = 1:boxes_num \r\n                       count = zeros([1, openpose_array_num]);\r\n                           for openpose_array_index = 1:openpose_array_num\r\n                               %%% counting the 
number of in-boundingbox\r\n                               %%% joints\r\n                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...\r\n                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);\r\n                                count(openpose_array_index) = sum(double(temp==4));\r\n                                %%%% \r\n                           end\r\n                           [~, idx] = max(count); %% which boundingbox \r\n                       openpose_array(boxes_index,:,:) = array(idx,:,:);    \r\n                   end\r\n                    \r\n                end\r\n            \r\n        %% if image having 4 persons\r\n            elseif ~isempty(strfind(files(file_index).name, 'two')) %% if image having 2 persons\r\n                if size(boxes,1)>2\r\n                   %%%%%%% important get the largest 'four' boxes\r\n                   box_size = (boxes(:,3)-boxes(:,1)) .* (boxes(:,4)-boxes(:,2)); % boxes size by width.*height\r\n                   [~, idx] = sort(box_size); %%%\r\n                   boxes = boxes(idx(1:2),:); %%% get the largest 2 boxes\r\n                   %%%%%%%\r\n                   boxes_num = size(boxes,1);\r\n                   openpose_array = zeros([boxes_num,18,3]); % creat a list to save cressponding array\r\n                                     \r\n                   %% align bounding box which contains most\r\n                   % seleting joints\r\n                   for boxes_index = 1:boxes_num \r\n                       count = zeros([1, openpose_array_num]);\r\n                           for openpose_array_index = 1:openpose_array_num\r\n                               %%% counting the number of in-boundingbox\r\n                               %%% joints\r\n                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...\r\n                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);\r\n                                count(openpose_array_index) = sum(double(temp==4));\r\n                                %%%% \r\n                           end\r\n                           [~, idx] = max(count); %% which boundingbox ha\r\n                       openpose_array(boxes_index,:,:) = array(idx,:,:);    \r\n                   end\r\n                   \r\n                else\r\n                   %% align bounding box which contains most\r\n                   % seleting joints\r\n                   openpose_array = zeros([boxes_num,18,3]); % creat a list to save cressponding array\r\n                   for boxes_index = 1:boxes_num \r\n                       count = zeros([1, openpose_array_num]);\r\n                           for openpose_array_index = 1:openpose_array_num\r\n                               %%% counting the number of in-boundingbox\r\n                               %%% joints\r\n                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...\r\n                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);\r\n                                count(openpose_array_index) = sum(double(temp==4));\r\n                                %%%% \r\n                           end\r\n                           [~, idx] = max(count); %% which boundingbox \r\n                       openpose_array(boxes_index,:,:) 
= array(idx,:,:);    \r\n                   end\r\n                    \r\n                end\r\n             \r\n        %% if image having 3 persons\r\n            elseif ~isempty(strfind(files(file_index).name, 'three')) %% if image having 3 persons\r\n                if size(boxes,1)>3\r\n                   %%%%%%% important get the largest 'four' boxes\r\n                   box_size = (boxes(:,3)-boxes(:,1)) .* (boxes(:,4)-boxes(:,2)); % boxes size by width.*height\r\n                   [~, idx] = sort(box_size); %%%\r\n                   boxes = boxes(idx(1:3),:); %%% get the largest 2 boxes\r\n                   %%%%%%%\r\n                   boxes_num = size(boxes,1);\r\n                   openpose_array = zeros([boxes_num,18,3]); % creat a list to save cressponding array\r\n                                     \r\n                   %% align bounding box which contains most\r\n                   % seleting joints\r\n                   for boxes_index = 1:boxes_num \r\n                       count = zeros([1, openpose_array_num]);\r\n                           for openpose_array_index = 1:openpose_array_num\r\n                               %%% counting the number of in-boundingbox\r\n                               %%% joints\r\n                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...\r\n                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);\r\n                                count(openpose_array_index) = sum(double(temp==4));\r\n                                %%%% \r\n                           end\r\n                           [~, idx] = max(count); %% which boundingbox ha\r\n                       openpose_array(boxes_index,:,:) = array(idx,:,:);    \r\n                   end\r\n                   \r\n                else\r\n                   %% align bounding box which contains most\r\n                   % seleting joints\r\n                   openpose_array = zeros([boxes_num,18,3]); % creat a list to save cressponding array\r\n                   for boxes_index = 1:boxes_num \r\n                       count = zeros([1, openpose_array_num]);\r\n                           for openpose_array_index = 1:openpose_array_num\r\n                               %%% counting the number of in-boundingbox\r\n                               %%% joints\r\n                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...\r\n                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);\r\n                                count(openpose_array_index) = sum(double(temp==4));\r\n                                %%%% \r\n                           end\r\n                           [~, idx] = max(count); %% which boundingbox \r\n                       openpose_array(boxes_index,:,:) = array(idx,:,:);    \r\n                   end\r\n                    \r\n                end\r\n            else\r\n                   %% align bounding box which contains most\r\n                   % seleting joints\r\n                   openpose_array = zeros([boxes_num,18,3]); % creat a list to save cressponding array\r\n                   for boxes_index = 1:boxes_num \r\n                       count = zeros([1, openpose_array_num]);\r\n                           for openpose_array_index = 1:openpose_array_num\r\n                               %%% counting the number of in-boundingbox\r\n   
                            %%% joints\r\n                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...\r\n                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);\r\n                                count(openpose_array_index) = sum(double(temp==4));\r\n                                %%%% \r\n                           end\r\n                           [~, idx] = max(count); %% which boundingbox \r\n                       openpose_array(boxes_index,:,:) = array(idx,:,:);    \r\n                   end\r\n                    \r\n            end\r\n        end\r\n        \r\n%         imshow(imresize(frame,[720,1280])); hold on;\r\n%         \r\n%         for i = 1:boxes_num\r\n%            rectangle('Position', [boxes(i,1:2) boxes(i,3:4)-boxes(i,1:2)], 'EdgeColor', color(i,:));\r\n%            scatter(squeeze(openpose_array(i,:,1)), squeeze(openpose_array(i,:,2)), 'MarkerEdgeColor', color(i,:) );\r\n%             \r\n%         end\r\n%         pause(0.5)\r\n%         hold off\r\n        \r\n\r\n          save(['/media/feiw/New Volume1/poseArray/allignedCOCOPose/', files(file_index).name] , 'openpose_array', 'boxes');\r\n          \r\n%         if ~isempty(strfind(folder_name{1}, 'train'))\r\n%             save(['wifiposedata\\train80\\', files(file_index).name], 'openpose_array', 'boxes');\r\n%         else \r\n%             save(['wifiposedata\\test20\\', files(file_index).name], 'openpose_array', 'boxes');\r\n%         end\r\n    end  \r\n    \r\n    \r\nend   \r\n\r\nfunction index = getIndex(file_name)\r\n    for i = [5,4,3]\r\n        if ~isempty(str2num(file_name(end-3-i:end-4)))\r\n            index = file_name(end-3-i:end-4);\r\n            break;\r\n        end\r\n    end\r\n    \r\n\r\n\r\nend\r\n"
  },
  {
    "path": "dataprocessing/readme.md",
    "content": "# Mask-Boxes Prepration and Mask-BBox Alignment \n\n## Functions\n1. prepare masks of persons, and bboxes of persons;\n2. align the mask and bbox of every person via the IOU;\n3. align bboxes and body joint coordinates (figure. 9 in the [tech report](https://arxiv.org/pdf/1904.00276.pdf)).\n\n## How to use\n1. install [detectorch](https://github.com/ignacio-rocco/detectorch) following its description;\n2. replace the **/lib/utils/vis.py** with **vis.py** here;\n3. **demo_FPN_video_new.py** takes a set of videos as inputs and outputs the masks and bboxes of every frame.\n4. **poseArrayAlign.m** takes *pose-arrays* of openpose and *boxes* of detectorch as inputs, counts the in-box joints for each boxes, and aligns each bbox with a pose-array that falls mostly in the bbox. \n\n## Update of the **vis.py**\n1. [def return_image_mask()](https://github.com/geekfeiw/wifiperson/blob/8a8a7e8d9829892fa2dc19f4a462eee1166b5f52/dataprocessing/vis.py#L806), return masks of persons, then align with their boxes in **demo_FPN_video_new.py**\n\n2. [def save_image_mask()](https://github.com/geekfeiw/wifiperson/blob/8a8a7e8d9829892fa2dc19f4a462eee1166b5f52/dataprocessing/vis.py#L546)\nsave mask of all trained objects, 80 classes (departed approaches, not recommended)\n"
  },
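  {
    "path": "dataprocessing/alignPoseToBoxes_sketch.py",
    "content": "\"\"\"Minimal NumPy sketch (assumed names and shapes, not part of the original\npipeline) of the in-box joint counting used by poseArrayAlign.m: for every\ndetectorch box, count how many joints of each OpenPose candidate pose fall\ninside the box, and assign the pose with the highest count. Boxes are assumed\nto be Nx5 [x1, y1, x2, y2, score] and poses Mx18x3 [x, y, confidence], matching\nthe MATLAB code.\"\"\"\nimport numpy as np\n\n\ndef align_pose_to_boxes(boxes, poses):\n    \"\"\"For each box, pick the pose whose joints fall inside it most often.\"\"\"\n    aligned = np.zeros((len(boxes), poses.shape[1], poses.shape[2]))\n    for b, box in enumerate(boxes):\n        x1, y1, x2, y2 = box[:4]\n        xy = poses[:, :, :2]\n        inside = ((xy[:, :, 0] > x1) & (xy[:, :, 0] < x2)\n                  & (xy[:, :, 1] > y1) & (xy[:, :, 1] < y2))\n        counts = inside.sum(axis=1)            # in-box joints per candidate pose\n        aligned[b] = poses[np.argmax(counts)]  # pose that covers this box best\n    return aligned\n\n\nif __name__ == '__main__':\n    boxes = np.array([[100., 100., 300., 500., 0.95]])\n    poses = np.random.rand(2, 18, 3) * np.array([1280., 720., 1.])\n    print(align_pose_to_boxes(boxes, poses).shape)  # (1, 18, 3)\n"
  },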
  {
    "path": "dataprocessing/vis.py",
    "content": "# Copyright (c) 2017-present, Facebook, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n##############################################################################\n\n\"\"\"Detection output visualization module.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport cv2\nimport numpy as np\nimport os\n\nimport pycocotools.mask as mask_util\n\nfrom utils.colormap import colormap\n# import utils.keypoints as keypoint_utils\n\n# Matplotlib requires certain adjustments in some environments\n# Must happen before importing matplotlib\nimport matplotlib\nmatplotlib.use('Agg') # Use a non-interactive backend\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Polygon\nimport scipy.io as sio\n\nplt.rcParams['pdf.fonttype'] = 42  # For editing in Adobe Illustrator\n\n\n_GRAY = (218, 227, 218)\n_GREEN = (18, 127, 15)\n_WHITE = (255, 255, 255)\n\n\n# def kp_connections(keypoints):\n#     kp_lines = [\n#         [keypoints.index('left_eye'), keypoints.index('right_eye')],\n#         [keypoints.index('left_eye'), keypoints.index('nose')],\n#         [keypoints.index('right_eye'), keypoints.index('nose')],\n#         [keypoints.index('right_eye'), keypoints.index('right_ear')],\n#         [keypoints.index('left_eye'), keypoints.index('left_ear')],\n#         [keypoints.index('right_shoulder'), keypoints.index('right_elbow')],\n#         [keypoints.index('right_elbow'), keypoints.index('right_wrist')],\n#         [keypoints.index('left_shoulder'), keypoints.index('left_elbow')],\n#         [keypoints.index('left_elbow'), keypoints.index('left_wrist')],\n#         [keypoints.index('right_hip'), keypoints.index('right_knee')],\n#         [keypoints.index('right_knee'), keypoints.index('right_ankle')],\n#         [keypoints.index('left_hip'), keypoints.index('left_knee')],\n#         [keypoints.index('left_knee'), keypoints.index('left_ankle')],\n#         [keypoints.index('right_shoulder'), keypoints.index('left_shoulder')],\n#         [keypoints.index('right_hip'), keypoints.index('left_hip')],\n#     ]\n#     return kp_lines\n\n\ndef convert_from_cls_format(cls_boxes, cls_segms, cls_keyps):\n    \"\"\"Convert from the class boxes/segms/keyps format generated by the testing\n    code.\n    \"\"\"\n    box_list = [b for b in cls_boxes if len(b) > 0]\n    if len(box_list) > 0:\n        boxes = np.concatenate(box_list)\n    else:\n        boxes = None\n    if cls_segms is not None:\n        segms = [s for slist in cls_segms for s in slist]\n    else:\n        segms = None\n    if cls_keyps is not None:\n        keyps = [k for klist in cls_keyps for k in klist]\n    else:\n        keyps = None\n    classes = []\n    for j in range(len(cls_boxes)):\n        classes += [j] * len(cls_boxes[j])\n    return boxes, segms, keyps, classes\n\n\ndef get_class_string(class_index, score, dataset):\n    class_text = dataset.classes[class_index] if dataset is not None 
else \\\n        'id{:d}'.format(class_index)\n    return class_text + ' {:0.2f}'.format(score).lstrip('0')\n\n\ndef vis_mask(img, mask, col, alpha=0.4, show_border=True, border_thick=1):\n    \"\"\"Visualizes a single binary mask.\"\"\"\n\n    img = img.astype(np.float32)\n    idx = np.nonzero(mask)\n\n    img[idx[0], idx[1], :] *= 1.0 - alpha\n    img[idx[0], idx[1], :] += alpha * col\n\n    if show_border:\n        _, contours, _ = cv2.findContours(\n            mask.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)\n        cv2.drawContours(img, contours, -1, _WHITE, border_thick, cv2.LINE_AA)\n\n    return img.astype(np.uint8)\n\n\ndef vis_class(img, pos, class_str, font_scale=0.35):\n    \"\"\"Visualizes the class.\"\"\"\n    x0, y0 = int(pos[0]), int(pos[1])\n    # Compute text size.\n    txt = class_str\n    font = cv2.FONT_HERSHEY_SIMPLEX\n    ((txt_w, txt_h), _) = cv2.getTextSize(txt, font, font_scale, 1)\n    # Place text background.\n    back_tl = x0, y0 - int(1.3 * txt_h)\n    back_br = x0 + txt_w, y0\n    cv2.rectangle(img, back_tl, back_br, _GREEN, -1)\n    # Show text.\n    txt_tl = x0, y0 - int(0.3 * txt_h)\n    cv2.putText(img, txt, txt_tl, font, font_scale, _GRAY, lineType=cv2.LINE_AA)\n    return img\n\n\ndef vis_bbox(img, bbox, thick=1):\n    \"\"\"Visualizes a bounding box.\"\"\"\n    (x0, y0, w, h) = bbox\n    x1, y1 = int(x0 + w), int(y0 + h)\n    x0, y0 = int(x0), int(y0)\n    cv2.rectangle(img, (x0, y0), (x1, y1), _GREEN, thickness=thick)\n    return img\n\n\n# def vis_keypoints(img, kps, kp_thresh=2, alpha=0.7):\n#     \"\"\"Visualizes keypoints (adapted from vis_one_image).\n#     kps has shape (4, #keypoints) where 4 rows are (x, y, logit, prob).\n#     \"\"\"\n#     dataset_keypoints, _ = keypoint_utils.get_keypoints()\n#     kp_lines = kp_connections(dataset_keypoints)\n\n#     # Convert from plt 0-1 RGBA colors to 0-255 BGR colors for opencv.\n#     cmap = plt.get_cmap('rainbow')\n#     colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]\n#     colors = [(c[2] * 255, c[1] * 255, c[0] * 255) for c in colors]\n\n#     # Perform the drawing on a copy of the image, to allow for blending.\n#     kp_mask = np.copy(img)\n\n#     # Draw mid shoulder / mid hip first for better visualization.\n#     mid_shoulder = (\n#         kps[:2, dataset_keypoints.index('right_shoulder')] +\n#         kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0\n#     sc_mid_shoulder = np.minimum(\n#         kps[2, dataset_keypoints.index('right_shoulder')],\n#         kps[2, dataset_keypoints.index('left_shoulder')])\n#     mid_hip = (\n#         kps[:2, dataset_keypoints.index('right_hip')] +\n#         kps[:2, dataset_keypoints.index('left_hip')]) / 2.0\n#     sc_mid_hip = np.minimum(\n#         kps[2, dataset_keypoints.index('right_hip')],\n#         kps[2, dataset_keypoints.index('left_hip')])\n#     nose_idx = dataset_keypoints.index('nose')\n#     if sc_mid_shoulder > kp_thresh and kps[2, nose_idx] > kp_thresh:\n#         cv2.line(\n#             kp_mask, tuple(mid_shoulder), tuple(kps[:2, nose_idx]),\n#             color=colors[len(kp_lines)], thickness=2, lineType=cv2.LINE_AA)\n#     if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:\n#         cv2.line(\n#             kp_mask, tuple(mid_shoulder), tuple(mid_hip),\n#             color=colors[len(kp_lines) + 1], thickness=2, lineType=cv2.LINE_AA)\n\n#     # Draw the keypoints.\n#     for l in range(len(kp_lines)):\n#         i1 = kp_lines[l][0]\n#         i2 = kp_lines[l][1]\n#         p1 = kps[0, i1], 
kps[1, i1]\n#         p2 = kps[0, i2], kps[1, i2]\n#         if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:\n#             cv2.line(\n#                 kp_mask, p1, p2,\n#                 color=colors[l], thickness=2, lineType=cv2.LINE_AA)\n#         if kps[2, i1] > kp_thresh:\n#             cv2.circle(\n#                 kp_mask, p1,\n#                 radius=3, color=colors[l], thickness=-1, lineType=cv2.LINE_AA)\n#         if kps[2, i2] > kp_thresh:\n#             cv2.circle(\n#                 kp_mask, p2,\n#                 radius=3, color=colors[l], thickness=-1, lineType=cv2.LINE_AA)\n\n#     # Blend the keypoints.\n#     return cv2.addWeighted(img, 1.0 - alpha, kp_mask, alpha, 0)\n\n\ndef vis_one_image_opencv(\n        im, boxes, segms=None, keypoints=None, thresh=0.9, kp_thresh=2,\n        show_box=False, dataset=None, show_class=False):\n    \"\"\"Constructs a numpy array with the detections visualized.\"\"\"\n\n    if isinstance(boxes, list):\n        boxes, segms, keypoints, classes = convert_from_cls_format(\n            boxes, segms, keypoints)\n\n    if boxes is None or boxes.shape[0] == 0 or max(boxes[:, 4]) < thresh:\n        return im\n\n    if segms is not None:\n        masks = mask_util.decode(segms)\n        color_list = colormap()\n        mask_color_id = 0\n\n    # Display in largest to smallest order to reduce occlusion\n    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])\n    sorted_inds = np.argsort(-areas)\n\n    for i in sorted_inds:\n        bbox = boxes[i, :4]\n        score = boxes[i, -1]\n        if score < thresh:\n            continue\n\n        # show box (off by default)\n        if show_box:\n            im = vis_bbox(\n                im, (bbox[0], bbox[1], bbox[2] - bbox[0], bbox[3] - bbox[1]))\n\n        # show class (off by default)\n        if show_class:\n            class_str = get_class_string(classes[i], score, dataset)\n            im = vis_class(im, (bbox[0], bbox[1] - 2), class_str)\n\n        # show mask\n        if segms is not None and len(segms) > i:\n            color_mask = color_list[mask_color_id % len(color_list), 0:3]\n            mask_color_id += 1\n            im = vis_mask(im, masks[..., i], color_mask)\n\n        # # show keypoints\n        # if keypoints is not None and len(keypoints) > i:\n        #     im = vis_keypoints(im, keypoints[i], kp_thresh)\n\n    return im\n\n\ndef vis_one_image(\n        im, im_name, output_dir, boxes, segms=None, keypoints=None, thresh=0.9,\n        kp_thresh=2, dpi=200, box_alpha=0.0, dataset=None, show_class=False,\n        ext='pdf', show=False):\n    \"\"\"Visual debugging of detections.\"\"\"\n    if not os.path.exists(output_dir):\n        os.makedirs(output_dir)\n\n    if isinstance(boxes, list):\n        boxes, segms, keypoints, classes = convert_from_cls_format(\n            boxes, segms, keypoints)\n\n    if boxes is None or boxes.shape[0] == 0 or max(boxes[:, 4]) < thresh:\n        return\n\n    # dataset_keypoints, _ = keypoint_utils.get_keypoints()\n\n    if segms is not None:\n        masks = mask_util.decode(segms)\n\n    color_list = colormap(rgb=True) / 255\n\n    # kp_lines = kp_connections(dataset_keypoints)\n    # cmap = plt.get_cmap('rainbow')\n    # colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]\n\n    fig = plt.figure(frameon=False)\n    fig.set_size_inches(im.shape[1] / dpi, im.shape[0] / dpi)\n    ax = plt.Axes(fig, [0., 0., 1., 1.])\n    ax.axis('off')\n    fig.add_axes(ax)\n    ax.imshow(im)\n\n    # Display in largest to 
smallest order to reduce occlusion\n    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])\n    sorted_inds = np.argsort(-areas)\n\n    mask_color_id = 0\n    res = []\n    for i in sorted_inds:\n        bbox = boxes[i, :4]\n        score = boxes[i, -1]\n        if score < thresh:\n            continue\n\n        # show box (off by default)\n        ax.add_patch(\n            plt.Rectangle((bbox[0], bbox[1]),\n                          bbox[2] - bbox[0],\n                          bbox[3] - bbox[1],\n                          fill=False, edgecolor='g',\n                          linewidth=0.5, alpha=box_alpha))\n\n        if show_class:\n            ax.text(\n                bbox[0], bbox[1] - 2,\n                get_class_string(classes[i], score, dataset),\n                fontsize=3,\n                family='serif',\n                bbox=dict(\n                    facecolor='g', alpha=0.4, pad=0, edgecolor='none'),\n                color='white')\n\n        # show mask\n        if segms is not None and len(segms) > i:\n            img = np.ones(im.shape)\n            color_mask = color_list[mask_color_id % len(color_list), 0:3]\n            mask_color_id += 1\n\n            w_ratio = .4\n            for c in range(3):\n                color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio\n            for c in range(3):\n                img[:, :, c] = color_mask[c]\n            e = masks[:, :, i]\n            res += [e]\n\n            contour, hier = cv2.findContours(\n                e.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)\n\n            for c in contour:\n                polygon = Polygon(\n                    c.reshape((-1, 2)),\n                    fill=True, facecolor=color_mask,\n                    edgecolor='w', linewidth=1.2,\n                    alpha=0.5)\n                ax.add_patch(polygon)\n\n        # # show keypoints\n        # if keypoints is not None and len(keypoints) > i:\n        #     kps = keypoints[i]\n        #     plt.autoscale(False)\n        #     for l in range(len(kp_lines)):\n        #         i1 = kp_lines[l][0]\n        #         i2 = kp_lines[l][1]\n        #         if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:\n        #             x = [kps[0, i1], kps[0, i2]]\n        #             y = [kps[1, i1], kps[1, i2]]\n        #             line = plt.plot(x, y)\n        #             plt.setp(line, color=colors[l], linewidth=1.0, alpha=0.7)\n        #         if kps[2, i1] > kp_thresh:\n        #             plt.plot(\n        #                 kps[0, i1], kps[1, i1], '.', color=colors[l],\n        #                 markersize=3.0, alpha=0.7)\n\n        #         if kps[2, i2] > kp_thresh:\n        #             plt.plot(\n        #                 kps[0, i2], kps[1, i2], '.', color=colors[l],\n        #                 markersize=3.0, alpha=0.7)\n\n        #     # add mid shoulder / mid hip for better visualization\n        #     mid_shoulder = (\n        #         kps[:2, dataset_keypoints.index('right_shoulder')] +\n        #         kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0\n        #     sc_mid_shoulder = np.minimum(\n        #         kps[2, dataset_keypoints.index('right_shoulder')],\n        #         kps[2, dataset_keypoints.index('left_shoulder')])\n        #     mid_hip = (\n        #         kps[:2, dataset_keypoints.index('right_hip')] +\n        #         kps[:2, dataset_keypoints.index('left_hip')]) / 2.0\n        #     sc_mid_hip = np.minimum(\n        #         kps[2, 
dataset_keypoints.index('right_hip')],\n        #         kps[2, dataset_keypoints.index('left_hip')])\n        #     if (sc_mid_shoulder > kp_thresh and\n        #             kps[2, dataset_keypoints.index('nose')] > kp_thresh):\n        #         x = [mid_shoulder[0], kps[0, dataset_keypoints.index('nose')]]\n        #         y = [mid_shoulder[1], kps[1, dataset_keypoints.index('nose')]]\n        #         line = plt.plot(x, y)\n        #         plt.setp(\n        #             line, color=colors[len(kp_lines)], linewidth=1.0, alpha=0.7)\n        #     if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:\n        #         x = [mid_shoulder[0], mid_hip[0]]\n        #         y = [mid_shoulder[1], mid_hip[1]]\n        #         line = plt.plot(x, y)\n        #         plt.setp(\n        #             line, color=colors[len(kp_lines) + 1], linewidth=1.0,\n        #             alpha=0.7)\n\n    output_name = os.path.basename(im_name) + '.' + ext\n    fig.savefig(os.path.join(output_dir, '{}'.format(output_name)), dpi=dpi)\n    print('result saved to {}'.format(os.path.join(output_dir, '{}'.format(output_name))))\n    if show:\n        plt.show()\n    plt.close('all')\n    sio.savemat('res_mask_000128.mat', {'mask': res})\n    print('save done!')\n
\n# save mask\n# added by Fei Wang,\ndef save_image_mask(\n        im, im_name, output_dir, boxes, segms=None, keypoints=None, thresh=0.9):\n    \"\"\"Visual debugging of detections.\"\"\"\n    if not os.path.exists(output_dir):\n        os.makedirs(output_dir)\n\n    if isinstance(boxes, list):\n        boxes, segms, keypoints, classes = convert_from_cls_format(\n            boxes, segms, keypoints)\n\n    # if boxes is None or boxes.shape[0] == 0 or max(boxes[:, 4]) < thresh:\n    #     return\n\n    # dataset_keypoints, _ = keypoint_utils.get_keypoints()\n\n    if segms is not None:\n        masks = mask_util.decode(segms)\n\n    color_list = colormap(rgb=True) / 255\n\n    # kp_lines = kp_connections(dataset_keypoints)\n\n    # cmap = plt.get_cmap('rainbow')\n    # colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]\n\n\n    # Display in largest to smallest order to reduce occlusion\n    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])\n    sorted_inds = np.argsort(-areas)\n\n    mask_color_id = 0\n\n    res = []\n    for i in sorted_inds:\n        bbox = boxes[i, :4]\n        score = boxes[i, -1]\n        if score < thresh:\n            continue\n\n        # show mask\n        if segms is not None and len(segms) > i:\n            img = np.ones(im.shape)\n            color_mask = color_list[mask_color_id % len(color_list), 0:3]\n            mask_color_id += 1\n\n            w_ratio = .4\n            for c in range(3):\n                color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio\n            for c in range(3):\n                img[:, :, c] = color_mask[c]\n            e = masks[:, :, i]\n            res += [e]\n\n        # # show keypoints\n        # if keypoints is not None and len(keypoints) > i:\n        #     kps = keypoints[i]\n        #     plt.autoscale(False)\n        #     for l in range(len(kp_lines)):\n        #         i1 = kp_lines[l][0]\n        #         i2 = kp_lines[l][1]\n        #         if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:\n        #             x = [kps[0, i1], kps[0, i2]]\n        #             y = [kps[1, i1], kps[1, i2]]\n        #             line = plt.plot(x, y)\n        #             plt.setp(line, color=colors[l], linewidth=1.0, alpha=0.7)\n        #         if kps[2, i1] > kp_thresh:\n        #             plt.plot(\n        #                 kps[0, i1], kps[1, i1], '.', color=colors[l],\n        #                 markersize=3.0, alpha=0.7)\n\n        #         if kps[2, i2] > kp_thresh:\n        #             plt.plot(\n        #                 kps[0, i2], kps[1, i2], '.', color=colors[l],\n        #                 markersize=3.0, alpha=0.7)\n\n        #     # add mid shoulder / mid hip for better visualization\n        #     mid_shoulder = (\n        #         kps[:2, dataset_keypoints.index('right_shoulder')] +\n        #         kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0\n        #     sc_mid_shoulder = np.minimum(\n        #         kps[2, dataset_keypoints.index('right_shoulder')],\n        #         kps[2, dataset_keypoints.index('left_shoulder')])\n        #     mid_hip = (\n        #         kps[:2, dataset_keypoints.index('right_hip')] +\n        #         kps[:2, 
dataset_keypoints.index('left_hip')]) / 2.0\n        #     sc_mid_hip = np.minimum(\n        #         kps[2, dataset_keypoints.index('right_hip')],\n        #         kps[2, dataset_keypoints.index('left_hip')])\n        #     if (sc_mid_shoulder > kp_thresh and\n        #             kps[2, dataset_keypoints.index('nose')] > kp_thresh):\n        #         x = [mid_shoulder[0], kps[0, dataset_keypoints.index('nose')]]\n        #         y = [mid_shoulder[1], kps[1, dataset_keypoints.index('nose')]]\n        #         line = plt.plot(x, y)\n        #         plt.setp(\n        #             line, color=colors[len(kp_lines)], linewidth=1.0, alpha=0.7)\n        #     if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:\n        #         x = [mid_shoulder[0], mid_hip[0]]\n        #         y = [mid_shoulder[1], mid_hip[1]]\n        #         line = plt.plot(x, y)\n        #         plt.setp(\n        #             line, color=colors[len(kp_lines) + 1], linewidth=1.0,\n        #             alpha=0.7)\n\n    output_name = os.path.basename(im_name) + '.MASK.mat'\n    sio.savemat(os.path.join(output_dir, '{}'.format(output_name)), {'mask': res})\n#    print('save done!')\n\n\n# return mask\n# added by Fei Wang,\ndef return_image_mask(\n        im, im_name, output_dir, boxes, segms=None, keypoints=None, thresh=0.9):\n    \"\"\"Visual debugging of detections.\"\"\"\n    if not os.path.exists(output_dir):\n        os.makedirs(output_dir)\n\n    if isinstance(boxes, list):\n        boxes, segms, keypoints, classes = convert_from_cls_format(\n            boxes, segms, keypoints)\n\n    # if boxes is None or boxes.shape[0] == 0 or max(boxes[:, 4]) < thresh:\n    #     return\n\n    # dataset_keypoints, _ = keypoint_utils.get_keypoints()\n\n    if segms is not None:\n        masks = mask_util.decode(segms)\n\n    color_list = colormap(rgb=True) / 255\n\n    # kp_lines = kp_connections(dataset_keypoints)\n\n    # cmap = plt.get_cmap('rainbow')\n    # colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]\n\n\n    # Display in largest to smallest order to reduce occlusion\n    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])\n    sorted_inds = np.argsort(-areas)\n\n    mask_color_id = 0\n\n    res = []\n    for i in sorted_inds:\n        bbox = boxes[i, :4]\n        score = boxes[i, -1]\n        if score < thresh:\n            continue\n\n        # show mask\n        if segms is not None and len(segms) > i:\n            img = np.ones(im.shape)\n            color_mask = color_list[mask_color_id % len(color_list), 0:3]\n            mask_color_id += 1\n\n            w_ratio = .4\n            for c in range(3):\n                color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio\n            for c in range(3):\n                img[:, :, c] = color_mask[c]\n            e = masks[:, :, i]\n            res += [e]\n\n        # # show keypoints\n        # if keypoints is not None and len(keypoints) > i:\n        #     kps = keypoints[i]\n        #     plt.autoscale(False)\n        #     for l in range(len(kp_lines)):\n        #         i1 = kp_lines[l][0]\n        #         i2 = kp_lines[l][1]\n        #         if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:\n        #             x = [kps[0, i1], kps[0, i2]]\n        #             y = [kps[1, i1], kps[1, i2]]\n        #             line = plt.plot(x, y)\n        #             plt.setp(line, color=colors[l], linewidth=1.0, alpha=0.7)\n        #         if kps[2, i1] > kp_thresh:\n        #             plt.plot(\n       
 #                 kps[0, i1], kps[1, i1], '.', color=colors[l],\n        #                 markersize=3.0, alpha=0.7)\n\n        #         if kps[2, i2] > kp_thresh:\n        #             plt.plot(\n        #                 kps[0, i2], kps[1, i2], '.', color=colors[l],\n        #                 markersize=3.0, alpha=0.7)\n\n        #     # add mid shoulder / mid hip for better visualization\n        #     mid_shoulder = (\n        #         kps[:2, dataset_keypoints.index('right_shoulder')] +\n        #         kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0\n        #     sc_mid_shoulder = np.minimum(\n        #         kps[2, dataset_keypoints.index('right_shoulder')],\n        #         kps[2, dataset_keypoints.index('left_shoulder')])\n        #     mid_hip = (\n        #         kps[:2, dataset_keypoints.index('right_hip')] +\n        #         kps[:2, dataset_keypoints.index('left_hip')]) / 2.0\n        #     sc_mid_hip = np.minimum(\n        #         kps[2, dataset_keypoints.index('right_hip')],\n        #         kps[2, dataset_keypoints.index('left_hip')])\n        #     if (sc_mid_shoulder > kp_thresh and\n        #             kps[2, dataset_keypoints.index('nose')] > kp_thresh):\n        #         x = [mid_shoulder[0], kps[0, dataset_keypoints.index('nose')]]\n        #         y = [mid_shoulder[1], kps[1, dataset_keypoints.index('nose')]]\n        #         line = plt.plot(x, y)\n        #         plt.setp(\n        #             line, color=colors[len(kp_lines)], linewidth=1.0, alpha=0.7)\n        #     if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:\n        #         x = [mid_shoulder[0], mid_hip[0]]\n        #         y = [mid_shoulder[1], mid_hip[1]]\n        #         line = plt.plot(x, y)\n        #         plt.setp(\n        #             line, color=colors[len(kp_lines) + 1], linewidth=1.0,\n        #             alpha=0.7)\n    return res\n#    output_name = os.path.basename(im_name) + '.MASK.mat'\n#    sio.savemat(os.path.join(output_dir, '{}'.format(output_name)), {'mask': res})\n#    print('save done!')\n"
  },
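  {
    "path": "examples/mask_usage_example.py",
    "content": "\"\"\"Hypothetical usage sketch for the mask helpers defined in this repository.\n\nThis is an illustrative sketch only: the module name 'vis', the file paths, and\nthe dummy inputs below are placeholders, not part of the released code. Import\nthe module that actually defines save_image_mask and return_image_mask in your\ncheckout. The sketch only shows the expected inputs: an HxWx3 image, an Nx5 box\narray [x1, y1, x2, y2, score], and (optionally) COCO-style RLE segmentations as\nproduced by Mask R-CNN.\n\"\"\"\nimport numpy as np\n\nimport vis  # placeholder import for the module containing the mask helpers\n\n# A dummy frame and one confident detection; real boxes/segms come from the detector.\nim = np.zeros((480, 640, 3), dtype=np.uint8)\nboxes = np.array([[50., 60., 200., 400., 0.98]])\n\n# With segms=None the mask branch is skipped, so an empty mask list is saved and\n# returned; pass the detector's RLE segmentations to get one HxW mask per person.\nvis.save_image_mask(im, 'frame_000128.jpg', 'output/masks', boxes, segms=None)\nmasks = vis.return_image_mask(im, 'frame_000128.jpg', 'output/masks', boxes, segms=None)\nprint('recovered {} person masks'.format(len(masks)))\n"
  }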
]