Full Code of geekfeiw/wifiperson for AI

Repository: geekfeiw/wifiperson
Branch: master
Commit: e0d091c5ba17
Files: 13
Total size: 75.0 KB

Directory structure:
wifiperson/

├── .gitignore
├── LICENSE
├── README.md
├── datacollectioncode/
│   ├── videowithtimestamp/
│   │   ├── README.md
│   │   └── videoWrite-spyder.py
│   └── wifiwithtimestamp/
│       ├── Makefile
│       ├── README.md
│       └── log_to_file_time.c
└── dataprocessing/
    ├── demo_FPN_video_new.py
    ├── getHumanMaskandBbox.py
    ├── poseArrayAlign.m
    ├── readme.md
    └── vis.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2018 Fei Wang

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# WiFi Perception
Code for the paper *Person-in-WiFi: Fine-grained Person Perception using WiFi*. In this work, we use WiFi signals to capture human pose and body masks. The paper is under review and, due to IRB issues, we have not yet made the full code public. Still, we release the data collection tools in this repo.



# Updates

Sep. 2019: the [CSI tool](https://github.com/spanev/linux-80211n-csitool) now supports Ubuntu 18.04.


# System
We use a camera to capture humans and generate annotations. Specifically, we use a Mask R-CNN implementation, [detectorch](https://github.com/ignacio-rocco/detectorch), to prepare human masks, and the [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) Python API to prepare human poses, including pose coordinate arrays, joint heat maps, and part affinity fields, with the help of the OpenPose developers, [Gines](https://github.com/gineshidalgo99) and [Raaj](https://github.com/soulslicer).

Meanwhile, we record WiFi signals to train a deep network.

![system](figs/systems.png)

# Results
![result](figs/result.png)


================================================
FILE: datacollectioncode/videowithtimestamp/README.md
================================================
# Install OpenCV on Ubuntu

See the [tutorial](https://www.learnopencv.com/install-opencv3-on-ubuntu/)

## Note:
Step 4.1: Download opencv from Github

```
git clone https://github.com/opencv/opencv.git

cd opencv 

git checkout 3.1.0

cd ..
```

Step 4.2: Download opencv_contrib from Github
```
git clone https://github.com/opencv/opencv_contrib.git

cd opencv_contrib

git checkout 3.1.0

cd ..
```

We tried many OpenCV versions on Python 2, Python 3, and Anaconda Python, and found that **3.1.0** can both adjust the FPS and align frames with **datetime.datetime.now()** timestamps.
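Whether the requested FPS actually took effect can be checked offline from the timestamp file the capture script writes. A minimal sketch, assuming one timestamp per frame; `estimate_fps` is a hypothetical helper, not part of this repo:

```python
import datetime

def estimate_fps(timestamps):
    """Estimate the effective frame rate from a list of per-frame
    datetime timestamps (as written to VideoTimestamp.txt)."""
    if len(timestamps) < 2:
        raise ValueError("need at least two timestamps")
    span = (timestamps[-1] - timestamps[0]).total_seconds()
    return (len(timestamps) - 1) / span

# 21 synthetic frames spaced exactly 50 ms apart -> 20 fps
t0 = datetime.datetime(2018, 10, 17, 12, 0, 0)
ts = [t0 + datetime.timedelta(milliseconds=50 * i) for i in range(21)]
print(round(estimate_fps(ts), 2))  # -> 20.0
```

If the estimate drifts far from the nominal 20 fps, the OpenCV build likely ignored `CAP_PROP_FPS`.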



================================================
FILE: datacollectioncode/videowithtimestamp/videoWrite-spyder.py
================================================
#!/usr/bin/python

import cv2
import datetime


if __name__ == "__main__":

    fps = 20
    frameWidth = 1280
    frameHeight = 720

    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, frameWidth)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frameHeight)
    cap.set(cv2.CAP_PROP_FPS, fps)

    cameraFPS = cap.get(cv2.CAP_PROP_FPS)
    print("FPS:", cameraFPS)
    print("Frame size:", cap.get(cv2.CAP_PROP_FRAME_WIDTH),
          cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # MJPG/XVID + .avi works; .mp4 does not
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    videofile = cv2.VideoWriter('video.avi', fourcc, int(cameraFPS),
                                (frameWidth, frameHeight))

    try:
        # one timestamp line per frame, for later alignment with the WiFi CSI log
        with open('VideoTimestamp.txt', 'w+') as file:
            while cap.isOpened():
                ret, frame = cap.read()
                t = datetime.datetime.now()
                if ret:
                    file.write(str(t) + '\n')
                    print(str(t))
                    videofile.write(frame)
                else:
                    break
    except KeyboardInterrupt:
        print("Quit")
    finally:
        # release the capture and writer whether we stopped normally or via Ctrl-C
        cap.release()
        videofile.release()


================================================
FILE: datacollectioncode/wifiwithtimestamp/Makefile
================================================
all: print_packets get_first_bfee parse_log log_to_file nl_bf_to_eff log_to_file_time

KERNEL = $(strip $(shell uname -r))
KERNEL_SOURCE = /lib/modules/$(KERNEL)/build

ifneq ($(wildcard $(KERNEL_SOURCE)/include/uapi),)
        KERNEL_HEADERS = $(KERNEL_SOURCE)/include/uapi
else ifneq ($(wildcard $(KERNEL_SOURCE)/include),)
        KERNEL_HEADERS = $(KERNEL_SOURCE)/include
else
        $(error Kernel headers not found)
endif

CFLAGS = -Wall -Werror
LDLIBS = -lm
CC = gcc

nl_bf_to_eff: nl_bf_to_eff.c bf_to_eff.o iwl_nl.o util.o q_approx.o

log_to_file.c: iwl_connector.h

iwl_nl.c: iwl_connector.h

iwl_connector.h: connector_users.h

connector_users.h: $(KERNEL_HEADERS)/linux/connector.h
	echo "#undef CN_NETLINK_USERS" > connector_users.h
	grep "#define CN_NETLINK_USERS" $(KERNEL_HEADERS)/linux/connector.h >> connector_users.h

clean:
	rm -f *.o get_first_bfee log_to_file print_packets parse_log nl_bf_to_eff connector_users.h log_to_file_time


================================================
FILE: datacollectioncode/wifiwithtimestamp/README.md
================================================
# Record WiFi with Unix Time-stamp 

1. Follow [Linux CSI Tool Installation Instructions](http://dhalperi.github.io/linux-80211n-csitool/installation.html)

2. Before running step 4 of those instructions, quoted below, put **log_to_file_time.c** and this **Makefile** into the **netlink** folder:
```
4. Build the Userspace Logging Tool

Build log_to_file, a command line tool that writes CSI obtained via the driver to a file:

make -C linux-80211n-csitool-supplementary/netlink
```

## How to use

With **log_to_file_time**, one can log a time-stamp for each received CSI packet:

```
sudo ../netlink/log_to_file_time ~/Desktop/log.dat ~/Desktop/time.txt
```
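The second output file holds one `HH:MM:SS.ffffff` line per CSI packet (the format `log_to_file_time.c` produces via `gettimeofday`/`localtime`). A minimal sketch for reading it back in Python; `parse_csi_timestamps` is a hypothetical helper, not part of this repo:

```python
import datetime

def parse_csi_timestamps(lines):
    """Parse HH:MM:SS.ffffff lines (one per CSI packet) into
    datetime.time objects, skipping blank lines."""
    return [datetime.datetime.strptime(line.strip(), "%H:%M:%S.%f").time()
            for line in lines if line.strip()]

sample = ["14:03:07.123456\n", "14:03:07.173456\n"]
times = parse_csi_timestamps(sample)
print(times[0])  # -> 14:03:07.123456
```

These times can then be matched against the per-frame lines in `VideoTimestamp.txt` to align CSI packets with video frames.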


================================================
FILE: datacollectioncode/wifiwithtimestamp/log_to_file_time.c
================================================
/*
 * (c) 2008-2011 Daniel Halperin <dhalperi@cs.washington.edu>
 */
#include "iwl_connector.h"

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <sys/time.h>
#include <time.h>

#define MAX_PAYLOAD 2048
#define SLOW_MSG_CNT 1

int sock_fd = -1;							// the socket
FILE* out = NULL;
FILE* out_time = NULL;

void check_usage(int argc, char** argv);

FILE* open_file(char* filename, char* spec);

void caught_signal(int sig);

void exit_program(int code);
void exit_program_err(int code, char* func);

int main(int argc, char** argv)
{
	/* Local variables */
	struct sockaddr_nl proc_addr, kern_addr;	// addrs for recv, send, bind
	struct cn_msg *cmsg;
	char buf[4096];
	int ret;
	unsigned short l, l2;
	int count = 0;
	
	/* Local timestamp variables */
	struct timeval tv;
 	struct tm* tm;
 	char time_buffer[30];

	/* Make sure usage is correct */
	check_usage(argc, argv);

	/* Open and check log file */
	out = open_file(argv[1], "w");
	out_time = open_file(argv[2], "w");

	/* Setup the socket */
	sock_fd = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
	if (sock_fd == -1)
		exit_program_err(-1, "socket");

	/* Initialize the address structs */
	memset(&proc_addr, 0, sizeof(struct sockaddr_nl));
	proc_addr.nl_family = AF_NETLINK;
	proc_addr.nl_pid = getpid();			// this process' PID
	proc_addr.nl_groups = CN_IDX_IWLAGN;
	memset(&kern_addr, 0, sizeof(struct sockaddr_nl));
	kern_addr.nl_family = AF_NETLINK;
	kern_addr.nl_pid = 0;					// kernel
	kern_addr.nl_groups = CN_IDX_IWLAGN;

	/* Now bind the socket */
	if (bind(sock_fd, (struct sockaddr *)&proc_addr, sizeof(struct sockaddr_nl)) == -1)
		exit_program_err(-1, "bind");

	/* And subscribe to netlink group */
	{
		int on = proc_addr.nl_groups;
		ret = setsockopt(sock_fd, 270, NETLINK_ADD_MEMBERSHIP, &on, sizeof(on));
		if (ret)
			exit_program_err(-1, "setsockopt");
	}

	/* Set up the "caught_signal" function as this program's sig handler */
	signal(SIGINT, caught_signal);

	/* Poll socket forever waiting for a message */
	while (1)
	{
		/* Receive from socket with infinite timeout */
		ret = recv(sock_fd, buf, sizeof(buf), 0);
		if (ret == -1)
			exit_program_err(-1, "recv");
		/* Pull out the message portion and print some stats */
		cmsg = NLMSG_DATA(buf);
		if (count % SLOW_MSG_CNT == 0)
			printf("received %d bytes: id: %d val: %d seq: %d clen: %d\n", cmsg->len, cmsg->id.idx, cmsg->id.val, cmsg->seq, cmsg->len);
		/* Log the data to file */
		l = (unsigned short) cmsg->len;
		l2 = htons(l);
		fwrite(&l2, 1, sizeof(unsigned short), out);

		/* write timestamp */
		gettimeofday(&tv, NULL);
 		tm=localtime(&tv.tv_sec);
 		sprintf(time_buffer, "%02d:%02d:%02d.%06ld\n", tm->tm_hour, tm->tm_min, tm->tm_sec, tv.tv_usec);
 		fwrite(time_buffer, 1, 16, out_time);

		ret = fwrite(cmsg->data, 1, l, out);
		if (count % 100 == 0)
			printf("wrote %d bytes [msgcnt=%u]\n", ret, count);
		++count;
		if (ret != l)
			exit_program_err(1, "fwrite");
	}

	exit_program(0);
	return 0;
}

void check_usage(int argc, char** argv)
{
	if (argc != 3)
	{
		fprintf(stderr, "Usage: %s <csi_log_file> <timestamp_file>\n", argv[0]);
		exit_program(1);
	}
}

FILE* open_file(char* filename, char* spec)
{
	FILE* fp = fopen(filename, spec);
	if (!fp)
	{
		perror("fopen");
		exit_program(1);
	}
	return fp;
}

void caught_signal(int sig)
{
	fprintf(stderr, "Caught signal %d\n", sig);
	exit_program(0);
}

void exit_program(int code)
{
	if (out)
	{
		fclose(out);
		fclose(out_time);
		out = NULL;
	}
	if (sock_fd != -1)
	{
		close(sock_fd);
		sock_fd = -1;
	}
	exit(code);
}

void exit_program_err(int code, char* func)
{
	perror(func);
	exit_program(code);
}
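As the write loop above shows, each record in `log.dat` is a 2-byte big-endian length (via `htons`) followed by that many payload bytes. A minimal Python reader for this framing; `read_csi_records` is a hypothetical helper, not part of this repo:

```python
import io
import struct

def read_csi_records(stream):
    """Iterate over records in a log.dat produced by log_to_file_time:
    2-byte big-endian length prefix, then that many payload bytes."""
    while True:
        header = stream.read(2)
        if len(header) < 2:
            break  # end of file (or truncated header)
        (length,) = struct.unpack(">H", header)
        payload = stream.read(length)
        if len(payload) < length:
            break  # capture was interrupted mid-record
        yield payload

# Two fake records, 3 and 2 bytes long
buf = io.BytesIO(b"\x00\x03abc" + b"\x00\x02de")
print(list(read_csi_records(buf)))  # -> [b'abc', b'de']
```

Decoding the payload itself (the Intel 5300 CSI structure) is handled by the CSI tool's own parsers; this sketch only walks the framing.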


================================================
FILE: dataprocessing/demo_FPN_video_new.py
================================================

# coding: utf-8

# # Imports

# In[1]:


import torch
from torch.autograd import Variable
from torch.utils.data import DataLoader

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
import scipy.io as sio
import sys
sys.path.insert(0, "lib/")
from utils.preprocess_sample import preprocess_sample
from utils.collate_custom import collate_custom
from utils.utils import to_cuda_variable
from utils.json_dataset_evaluator import evaluate_boxes,evaluate_masks
from model.detector import detector
import utils.result_utils as result_utils
import utils.vis as vis_utils
import skimage.io as io
from utils.blob import prep_im_for_blob,im_list_to_blob
import utils.dummy_datasets as dummy_datasets
from utils.multilevel_rois import add_multilevel_rois_for_test
import cv2
import os

from utils.selective_search import selective_search # needed for proposal extraction in Fast RCNN
from PIL import Image

torch_ver = torch.__version__[:3]


# # Parameters

# In[2]:


# COCO minival2014 dataset path
coco_ann_file='datasets/data/coco/annotations/instances_minival2014.json'
img_dir='datasets/data/coco/val2014'

# model type
model_type='mask' # change here

# pretrained model
if model_type=='mask':
    arch='resnet101'
    # https://s3-us-west-2.amazonaws.com/detectron/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl
    pretrained_model_file = 'files/trained_models/mask_fpn/model_final.pkl'
    use_rpn_head = True
    use_mask_head = True
elif model_type=='faster':
    arch='resnet50'
    # https://s3-us-west-2.amazonaws.com/detectron/35857389/12_2017_baselines/e2e_faster_rcnn_R-50-FPN_2x.yaml.01_37_22.KSeq0b5q/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl
    pretrained_model_file = 'files/trained_models/faster/e2e_faster_rcnn_R-50-FPN_2x.pkl'
    use_rpn_head = True
    use_mask_head = False
elif model_type=='fast':
    arch='resnet50'
    # https://s3-us-west-2.amazonaws.com/detectron/36225249/12_2017_baselines/fast_rcnn_R-50-FPN_2x.yaml.08_40_18.zoChak1f/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl
    pretrained_model_file = 'files/trained_models/fast/fast_rcnn_R-50-FPN_2x.pkl'
    use_rpn_head = False
    use_mask_head = False


# # Create detector model

# In[5]:


model = detector(arch=arch,
                 detector_pkl_file=pretrained_model_file,
                 conv_body_layers=['conv1','bn1','relu','maxpool','layer1','layer2','layer3','layer4'],
                 conv_head_layers='two_layer_mlp',
                 fpn_layers=['layer1','layer2','layer3','layer4'],
                 fpn_extra_lvl=True,
                 roi_height=7,
                 roi_width=7,
                 roi_spatial_scale=[0.25,0.125,0.0625,0.03125],
                 roi_sampling_ratio=2,
                 use_rpn_head = use_rpn_head,
                 use_mask_head = use_mask_head,
                 mask_head_type = '1up4convs')
model = model.cuda()


def eval_model(sample):
    class_scores, bbox_deltas, rois, img_features = model(sample['image'],
                                                          sample['proposal_coords'],
                                                          scaling_factor=sample['scaling_factors'])
    return class_scores, bbox_deltas, rois, img_features

# # Load image

# In[4]:
import glob
video_dir = '/media/delight-wifi/My Passport/Dataset/WiFiPose-Video/' # change dir



videos = glob.glob(video_dir+'*.avi')
video_num = len(videos)



# image_fn = 'demo/33823288584_1d21cf0a26_k.jpg'

# Load image
output_dir = '/media/delight-wifi/My Passport/Dataset/video-mask/'
for video_index in range(video_num):
    

    video_fn = videos[video_index]
    video_name = video_fn[len(video_dir):-4]  # strip the directory and the '.avi' extension
    print(video_name)
    outputVideo_dir = output_dir + video_name + '_mask/'
    
    if not os.path.exists(outputVideo_dir):
        os.makedirs(outputVideo_dir)
    print(video_fn)	
    video = cv2.VideoCapture(video_fn)
    frame_index = 0
    while(video.isOpened()):
        #print('hello')
        frame_index = frame_index + 1
        ret, image = video.read()
        
        if ret:


            if len(image.shape) == 2: # convert grayscale to RGB
                image = np.repeat(np.expand_dims(image,2), 3, axis=2)
            orig_im_size = image.shape
            # Preprocess image
            im_list, im_scales = prep_im_for_blob(image)
            # Build sample
            sample = {}
            # im_list_to blob swaps channels and adds stride in case of fpn
            fpn_on=True
            sample['image'] = torch.FloatTensor(im_list_to_blob(im_list,fpn_on))
            sample['scaling_factors'] = im_scales[0]
            sample['original_im_size'] = torch.FloatTensor(orig_im_size)
          # Extract proposals
            if model_type=='fast':
              # extract proposals using selective search (xmin,ymin,xmax,ymax format)
                rects = selective_search(pil_image=Image.fromarray(image),quality='f')
                sample['proposal_coords']=torch.FloatTensor(preprocess_sample().remove_dup_prop(rects)[0])*im_scales[0]
            else:
                sample['proposal_coords']=torch.FloatTensor([-1]) # dummy value
            # Convert to cuda variable
            sample = to_cuda_variable(sample)





            # Evaluate the model on this frame
            if torch_ver=="0.4":
                with torch.no_grad():
                    class_scores,bbox_deltas,rois,img_features=eval_model(sample)
            else:
                class_scores,bbox_deltas,rois,img_features=eval_model(sample)

        # postprocess output:
        # - convert coordinates back to original image size,
        # - threshold proposals based on score,
        # - do NMS.
            scores_final, boxes_final, boxes_per_class = result_utils.postprocess_output(rois,
                                                                            sample['scaling_factors'],
                                                                            sample['original_im_size'],
                                                                            class_scores,
                                                                            bbox_deltas)

            if model_type=='mask':
              # compute masks
                boxes_final_multiscale = add_multilevel_rois_for_test({'rois': boxes_final*sample['scaling_factors']},'rois')
                boxes_final_multiscale_th = []
                for k in boxes_final_multiscale.keys():
                    if len(boxes_final_multiscale[k])>0 and 'rois_fpn' in k:
                        boxes_final_multiscale_th.append(Variable(torch.cuda.FloatTensor(boxes_final_multiscale[k])))
                    elif len(boxes_final_multiscale[k])==0 and 'rois_fpn' in k:
                        boxes_final_multiscale_th.append(None)
                rois_idx_restore_th = Variable(torch.cuda.FloatTensor(boxes_final_multiscale['rois_idx_restore_int32']))
                masks=model.mask_head(img_features,boxes_final_multiscale_th,rois_idx_restore_th.long())
              # postprocess mask output:
                h_orig = int(sample['original_im_size'].squeeze()[0].data.cpu().numpy().item())
                w_orig = int(sample['original_im_size'].squeeze()[1].data.cpu().numpy().item())
                cls_segms = result_utils.segm_results(boxes_per_class, masks.cpu().data.numpy(), boxes_final, h_orig, w_orig,
                                                    M=28) # M: Mask RCNN resolution
            else:
                cls_segms = None

            # sio.savemat(outputVideo_dir + str(frame_index) + '.mat', {'boxes_final':boxes_final,'cls_segms':cls_segms,'scores_final':scores_final,'boxes_per_class':boxes_per_class})

            mask = vis_utils.return_image_mask(
                image,  # BGR -> RGB for visualization
                str(frame_index),
                outputVideo_dir,
                boxes_per_class,
                cls_segms,
                None,
           #     dataset=dummy_datasets.get_coco_dataset(),
            #    box_alpha=0.3,
             #   show_class=True,
                thresh=0.7
              #  kp_thresh=2,
               # show=True
            )
            # print(boxes_per_class.shape)
            person_bb = boxes_per_class[1]
            # print(np.shape(boxes_per_class))
            boxes = []
            for person_index in range(len(person_bb)):
                if person_bb[person_index, -1] > 0.9:
                    boxes = np.concatenate((boxes, person_bb[person_index, :]), axis=0)
            #    boxes = boxes.reshape(-1, 5)
            print(video_name, frame_index)

            masks = []
            if len(boxes) > 0:
                boxes = boxes.reshape(-1, 5)
                for person_index in range(len(boxes)):
                    temp_box = np.zeros([720, 1280], dtype=np.int8)
                    h_min = int(np.ceil(boxes[person_index, 1] + 0.01) - 1)
                    h_max = int(np.floor(boxes[person_index, 3]))
                    w_min = int(np.ceil(boxes[person_index, 0] + 0.01) - 1)
                    w_max = int(np.floor(boxes[person_index, 2]))
                    temp_box[h_min:h_max, w_min:w_max] = 1
                    # temp_box[0, np.ceil(boxes[person_index, 1] + 0.01)-1:np.floor(boxes[person_index,3]), np.ceil(boxes[person_index,0]+0.01):np.floor(boxes[person_index,2]) ]=1

                    mask_num = len(mask)
                    # b = mask[0]
                    # print(b)
                    # print(np.shape(mask))
                    iou = np.zeros(mask_num)
                    for mask_index in range(mask_num):
                        iou[mask_index] = np.sum(mask[mask_index] * temp_box)
                    idx = np.argmax(iou)

                    if person_index == 0:
                        masks = mask[idx].reshape(1, 720, 1280)
                    else:
                        masks = np.concatenate((masks, mask[idx].reshape(1, 720, 1280)), axis=0)
            # if not os.path.exists('/media/delight-wifi/My Passport/Dataset/video-mask/' + video_name + '_mask'):
            #     os.mkdir('/media/delight-wifi/My Passport/Dataset/video-mask/' + video_name + '_mask')
            sio.savemat(outputVideo_dir + video_name + '_' + str(frame_index + 1) + '.mat', {'boxes': boxes, 'masks': masks})

        else:
            video.release()

print('Done!')
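The box-to-mask matching inside the frame loop above (fill a binary box region, multiply with each mask, keep the mask with the largest overlap) can be sketched in isolation. `match_masks_to_boxes` is a hypothetical helper written for illustration, not part of this repo:

```python
import numpy as np

def match_masks_to_boxes(boxes, masks, height=720, width=1280):
    """For each (xmin, ymin, xmax, ymax, score) box, return the index of the
    binary mask with the largest pixel overlap inside the box region."""
    matched = []
    for box in boxes:
        region = np.zeros((height, width), dtype=np.int8)
        h_min = int(np.ceil(box[1] + 0.01) - 1)
        h_max = int(np.floor(box[3]))
        w_min = int(np.ceil(box[0] + 0.01) - 1)
        w_max = int(np.floor(box[2]))
        region[h_min:h_max, w_min:w_max] = 1
        overlaps = [np.sum(m * region) for m in masks]
        matched.append(int(np.argmax(overlaps)))
    return matched

# Toy example: one box over the left half, masks covering left/right halves
masks = [np.zeros((720, 1280), np.int8), np.zeros((720, 1280), np.int8)]
masks[0][:, :640] = 1   # left-half mask
masks[1][:, 640:] = 1   # right-half mask
boxes = [(0, 0, 600, 700, 0.95)]
print(match_masks_to_boxes(boxes, masks))  # -> [0]
```

Note this is overlap area, not a true IoU; since each box competes over the same fixed-size masks, the argmax is the same either way here.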









================================================
FILE: dataprocessing/getHumanMaskandBbox.py
================================================
import glob
import scipy.io as sio
import cv2
import numpy as np

file_name = '10'

frame_dir = '/data/feiw/oct17outVideo/oct17set' + file_name + '/'
frames = glob.glob(frame_dir + '*.mat')
frame_num = len(frames) // 2  # each frame has both a '.mat' and a '.MASK.mat' file

cap = cv2.VideoCapture('/home/feiw/detectorch/demo/oct17video/oct17set'+ file_name + '.avi')
video_frame_num = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

if frame_num==video_frame_num:
    print('frame equals!')
else:
    print('frame does not equal!')

for frame_index in range(int(frame_num)):
    bb = sio.loadmat(frame_dir + str(frame_index+1)+'.mat')
    person_bb = bb['boxes_per_class'][0,1]
    mask = sio.loadmat(frame_dir + str(frame_index+1)+'.MASK.mat')
    mask = mask['mask']

    boxes = []
    for person_index in range(len(person_bb)):
        if person_bb[person_index,-1] > 0.9:
            boxes = np.concatenate((boxes, person_bb[person_index,:]), axis=0)
#    boxes = boxes.reshape(-1, 5)
    print('oct17set'+file_name,frame_index)

    masks = []
    if len(boxes)>0:
       boxes = boxes.reshape(-1, 5)         
       for person_index in range(len(boxes)):
            temp_box = np.zeros([720,1280], dtype=np.int8)
            h_min = int(np.ceil(boxes[person_index, 1] + 0.01)-1)
            h_max = int(np.floor(boxes[person_index, 3]))
            w_min = int(np.ceil(boxes[person_index, 0] + 0.01)-1)
            w_max = int(np.floor(boxes[person_index, 2]))
            temp_box[h_min:h_max, w_min:w_max] = 1
            # temp_box[0, np.ceil(boxes[person_index, 1] + 0.01)-1:np.floor(boxes[person_index,3]), np.ceil(boxes[person_index,0]+0.01):np.floor(boxes[person_index,2]) ]=1

            mask_num = len(mask)
            iou = np.zeros(mask_num)
            for mask_index in range(mask_num):
                iou[mask_index] = np.sum(mask[mask_index,:,:] * temp_box)
            idx = np.argmax(iou)

            if person_index==0:
                masks = mask[idx,:,:].reshape(1,720,1280)
            else:
                masks = np.concatenate((masks, mask[idx,:,:].reshape(1,720,1280)), axis=0)

    sio.savemat('/data/feiw/oct17outVideo/oct17set'+file_name+'_clean/oct17set'+ file_name+'_' + str(frame_index+1)+'.mat', {'boxes':boxes, 'masks':masks})

print('oct17set'+file_name+' saved successfully!')


================================================
FILE: dataprocessing/poseArrayAlign.m
================================================
clear
% folder_list = {'E:\oct17\frame_csi_hm_mask_bb_array_80train\', 'E:\oct17\frame_csi_hm_mask_bb_array_20test\', ...
%     'E:\sep12\frame_csi_hm_mask_bb_array_80train\', 'E:\sep12\frame_csi_hm_mask_bb_array_20test\'};

folder_list = {'/media/feiw/New Volume1/wifiposedata/train80/'};

color = rand([9,3]);

% for folder_name = folder_list
%     folder_name{1}
% end

for folder_name = folder_list

    files = dir([folder_name{1}, '*.mat']);
    file_num = length(files);
    
    for file_index = 1:file_num
        [folder_name{1}, files(file_index).name]
        %load([folder_name{1}, files(file_index).name], 'array', 'boxes');
        load([folder_name{1}, files(file_index).name], 'boxes');
        index = getIndex(files(file_index).name);
        if files(file_index).name(10) == '0'
            load(['/media/feiw/New Volume1/poseArray/coco/', files(file_index).name(1:10), '/',...
                files(file_index).name(1:10), '_', index, '.mat'], 'coco_pose');
        else
            load(['/media/feiw/New Volume1/poseArray/coco/', files(file_index).name(1:9), '/',...
                files(file_index).name(1:9), '_', index, '.mat'], 'coco_pose');
        end
        
        array = coco_pose;
        boxes_num = size(boxes,1);
        openpose_array_num = size(array,1);

        if boxes_num>0&&openpose_array_num>0  %%%% if mask rcnn has boxes and openpose has joints
           
            %% if image having 4 persons start 
            if ~isempty(strfind(files(file_index).name, 'four')) %% if image having 4 persons
                if size(boxes,1)>4
                   %%%%%%% important: get the largest 'four' boxes
                   box_size = (boxes(:,3)-boxes(:,1)) .* (boxes(:,4)-boxes(:,2)); % box area = width .* height
                   [~, idx] = sort(box_size, 'descend'); % sort by area, largest first
                   boxes = boxes(idx(1:4),:); % keep the largest 4 boxes
                   %%%%%%%
                   boxes_num = size(boxes,1);
                   openpose_array = zeros([boxes_num,18,3]); % create an array to hold the corresponding poses

                   %% align each bounding box with the pose
                   % containing the most of its joints
                   for boxes_index = 1:boxes_num 
                       count = zeros([1, openpose_array_num]);
                           for openpose_array_index = 1:openpose_array_num
                               %%% counting the number of in-boundingbox
                               %%% joints
                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...
                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);
                                count(openpose_array_index) = sum(double(temp==4));
                                %%%% 
                           end
                           [~, idx] = max(count); %% pose with the most in-box joints
                       openpose_array(boxes_index,:,:) = array(idx,:,:);    
                   end
                   
                else
                   %% align each bounding box with the pose
                   % containing the most of its joints
                   openpose_array = zeros([boxes_num,18,3]); % create an array to hold the corresponding poses
                   for boxes_index = 1:boxes_num 
                       count = zeros([1, openpose_array_num]);
                           for openpose_array_index = 1:openpose_array_num
                               %%% counting the number of in-boundingbox
                               %%% joints
                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...
                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);
                                count(openpose_array_index) = sum(double(temp==4));
                                %%%% 
                           end
                           [~, idx] = max(count); %% pose with the most in-box joints
                       openpose_array(boxes_index,:,:) = array(idx,:,:);    
                   end
                    
                end
            
%         imshow(imresize(frame,[720,1280])); hold on;
%         
%         for i = 1:boxes_num
%            rectangle('Position', [boxes(i,1:2) boxes(i,3:4)-boxes(i,1:2)], 'EdgeColor', color(i,:));
%            scatter(squeeze(openpose_array(i,:,1)), squeeze(openpose_array(i,:,2)), 'MarkerEdgeColor', color(i,:) );
%             
%         end

            %% if image having 4 persons end
            elseif ~isempty(strfind(files(file_index).name, 'five')) % if image has 5 persons
                if size(boxes,1)>5
                   %%%%%%% important: get the largest 'five' boxes
                   box_size = (boxes(:,3)-boxes(:,1)) .* (boxes(:,4)-boxes(:,2)); % box area = width .* height
                   [~, idx] = sort(box_size, 'descend'); % sort by area, largest first
                   boxes = boxes(idx(1:5),:); % keep the largest 5 boxes
                   %%%%%%%
                   boxes_num = size(boxes,1);
                   openpose_array = zeros([boxes_num,18,3]); % create an array to hold the corresponding poses

                   %% align each bounding box with the pose
                   % containing the most of its joints
                   for boxes_index = 1:boxes_num 
                       count = zeros([1, openpose_array_num]);
                           for openpose_array_index = 1:openpose_array_num
                               %%% counting the number of in-boundingbox
                               %%% joints
                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...
                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);
                                count(openpose_array_index) = sum(double(temp==4));
                                %%%% 
                           end
                           [~, idx] = max(count); % pick the pose array with the most joints inside this box
                       openpose_array(boxes_index,:,:) = array(idx,:,:);    
                   end
                   
                else
                   %% align each bounding box with the pose array
                   % that contains the most in-box joints
                   openpose_array = zeros([boxes_num,18,3]); % create an array to hold the corresponding pose arrays
                   for boxes_index = 1:boxes_num 
                       count = zeros([1, openpose_array_num]);
                           for openpose_array_index = 1:openpose_array_num
                               %%% counting the number of in-boundingbox
                               %%% joints
                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...
                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);
                                count(openpose_array_index) = sum(double(temp==4));
                                %%%% 
                           end
                           [~, idx] = max(count); % pick the pose array with the most joints inside this box
                       openpose_array(boxes_index,:,:) = array(idx,:,:);    
                   end
                    
                end
            
        %% if image has 2 persons
            elseif ~isempty(strfind(files(file_index).name, 'two')) %% if image has 2 persons
                if size(boxes,1)>2
                   %%%%%%% important: keep only the largest two boxes
                   box_size = (boxes(:,3)-boxes(:,1)) .* (boxes(:,4)-boxes(:,2)); % box area = width .* height
                   [~, idx] = sort(box_size, 'descend'); % sort ascends by default; 'descend' puts the largest boxes first
                   boxes = boxes(idx(1:2),:); % keep the largest 2 boxes
                   %%%%%%%
                   boxes_num = size(boxes,1);
                   openpose_array = zeros([boxes_num,18,3]); % create an array to hold the corresponding pose arrays

                   %% align each bounding box with the pose array
                   % that contains the most in-box joints
                   for boxes_index = 1:boxes_num 
                       count = zeros([1, openpose_array_num]);
                           for openpose_array_index = 1:openpose_array_num
                               %%% counting the number of in-boundingbox
                               %%% joints
                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...
                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);
                                count(openpose_array_index) = sum(double(temp==4));
                                %%%% 
                           end
                           [~, idx] = max(count); % pick the pose array with the most joints inside this box
                       openpose_array(boxes_index,:,:) = array(idx,:,:);    
                   end
                   
                else
                   %% align each bounding box with the pose array
                   % that contains the most in-box joints
                   openpose_array = zeros([boxes_num,18,3]); % create an array to hold the corresponding pose arrays
                   for boxes_index = 1:boxes_num 
                       count = zeros([1, openpose_array_num]);
                           for openpose_array_index = 1:openpose_array_num
                               %%% counting the number of in-boundingbox
                               %%% joints
                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...
                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);
                                count(openpose_array_index) = sum(double(temp==4));
                                %%%% 
                           end
                           [~, idx] = max(count); % pick the pose array with the most joints inside this box
                       openpose_array(boxes_index,:,:) = array(idx,:,:);    
                   end
                    
                end
             
        %% if image has 3 persons
            elseif ~isempty(strfind(files(file_index).name, 'three')) %% if image has 3 persons
                if size(boxes,1)>3
                   %%%%%%% important: keep only the largest three boxes
                   box_size = (boxes(:,3)-boxes(:,1)) .* (boxes(:,4)-boxes(:,2)); % box area = width .* height
                   [~, idx] = sort(box_size, 'descend'); % sort ascends by default; 'descend' puts the largest boxes first
                   boxes = boxes(idx(1:3),:); % keep the largest 3 boxes
                   %%%%%%%
                   boxes_num = size(boxes,1);
                   openpose_array = zeros([boxes_num,18,3]); % create an array to hold the corresponding pose arrays

                   %% align each bounding box with the pose array
                   % that contains the most in-box joints
                   for boxes_index = 1:boxes_num 
                       count = zeros([1, openpose_array_num]);
                           for openpose_array_index = 1:openpose_array_num
                               %%% counting the number of in-boundingbox
                               %%% joints
                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...
                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);
                                count(openpose_array_index) = sum(double(temp==4));
                                %%%% 
                           end
                           [~, idx] = max(count); % pick the pose array with the most joints inside this box
                       openpose_array(boxes_index,:,:) = array(idx,:,:);    
                   end
                   
                else
                   %% align each bounding box with the pose array
                   % that contains the most in-box joints
                   openpose_array = zeros([boxes_num,18,3]); % create an array to hold the corresponding pose arrays
                   for boxes_index = 1:boxes_num 
                       count = zeros([1, openpose_array_num]);
                           for openpose_array_index = 1:openpose_array_num
                               %%% counting the number of in-boundingbox
                               %%% joints
                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...
                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);
                                count(openpose_array_index) = sum(double(temp==4));
                                %%%% 
                           end
                           [~, idx] = max(count); % pick the pose array with the most joints inside this box
                       openpose_array(boxes_index,:,:) = array(idx,:,:);    
                   end
                    
                end
            else
                   %% align each bounding box with the pose array
                   % that contains the most in-box joints
                   openpose_array = zeros([boxes_num,18,3]); % create an array to hold the corresponding pose arrays
                   for boxes_index = 1:boxes_num 
                       count = zeros([1, openpose_array_num]);
                           for openpose_array_index = 1:openpose_array_num
                               %%% counting the number of in-boundingbox
                               %%% joints
                               temp = sum(double(squeeze(array(openpose_array_index,:,1:2))>boxes(boxes_index,1:2))...
                                   + double(squeeze(array(openpose_array_index,:,1:2))<boxes(boxes_index,3:4)), 2);
                                count(openpose_array_index) = sum(double(temp==4));
                                %%%% 
                           end
                           [~, idx] = max(count); % pick the pose array with the most joints inside this box
                       openpose_array(boxes_index,:,:) = array(idx,:,:);    
                   end
                    
            end
        end
        
%         imshow(imresize(frame,[720,1280])); hold on;
%         
%         for i = 1:boxes_num
%            rectangle('Position', [boxes(i,1:2) boxes(i,3:4)-boxes(i,1:2)], 'EdgeColor', color(i,:));
%            scatter(squeeze(openpose_array(i,:,1)), squeeze(openpose_array(i,:,2)), 'MarkerEdgeColor', color(i,:) );
%             
%         end
%         pause(0.5)
%         hold off
        

          save(['/media/feiw/New Volume1/poseArray/allignedCOCOPose/', files(file_index).name] , 'openpose_array', 'boxes');
          
%         if ~isempty(strfind(folder_name{1}, 'train'))
%             save(['wifiposedata\train80\', files(file_index).name], 'openpose_array', 'boxes');
%         else 
%             save(['wifiposedata\test20\', files(file_index).name], 'openpose_array', 'boxes');
%         end
    end  
    
    
end   

function index = getIndex(file_name)
    % extract the numeric index that precedes the 4-character
    % extension (e.g. '.mat'), trying 5-, 4-, then 3-digit widths
    for i = [5,4,3]
        if ~isempty(str2num(file_name(end-3-i:end-4)))
            index = file_name(end-3-i:end-4);
            break;
        end
    end
end


================================================
FILE: dataprocessing/readme.md
================================================
# Mask-Box Preparation and Mask-BBox Alignment

## Functions
1. prepare the masks and bboxes of persons;
2. align the mask and bbox of every person via the IoU;
3. align bboxes and body joint coordinates (Figure 9 in the [tech report](https://arxiv.org/pdf/1904.00276.pdf)).
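
The IoU-based pairing of step 2 can be sketched as follows. This is a hypothetical minimal version for illustration, not the repository's exact code: masks are H×W binary arrays, boxes are `(x1, y1, x2, y2)` tuples, and each bbox is paired with the mask whose tight bounding box overlaps it most.

```python
import numpy as np

def bbox_iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def align_masks_to_boxes(masks, boxes):
    """For each bbox, return the index of the mask whose tight
    bounding box has the highest IoU with it (None if no mask fits)."""
    pairs = []
    for box in boxes:
        best, best_iou = None, -1.0
        for k, m in enumerate(masks):
            ys, xs = np.nonzero(m)
            if len(xs) == 0:
                continue  # empty mask, nothing to align
            mbox = (xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)
            iou = bbox_iou(mbox, box)
            if iou > best_iou:
                best, best_iou = k, iou
        pairs.append(best)
    return pairs
```

Pairing by tight-bbox IoU rather than pixel overlap keeps the matching cheap while still being robust when persons are well separated.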

## How to use
1. install [detectorch](https://github.com/ignacio-rocco/detectorch) following its description;
2. replace **/lib/utils/vis.py** with the **vis.py** here;
3. **demo_FPN_video_new.py** takes a set of videos as input and outputs the masks and bboxes of every frame;
4. **poseArrayAlign.m** takes the *pose-arrays* of openpose and the *boxes* of detectorch as inputs, counts the in-box joints for each box, and aligns each bbox with the pose-array whose joints mostly fall inside it.
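
The in-box joint counting of step 4 can be sketched in Python as below. This is a simplified illustration of the idea behind **poseArrayAlign.m**, not the MATLAB script itself: `pose_arrays` is P×K×2 (P pose candidates, K joints), boxes are `(x1, y1, x2, y2)`, and each box takes the pose candidate with the most joints strictly inside it.

```python
import numpy as np

def align_poses_to_boxes(pose_arrays, boxes):
    """For each bbox, pick the pose array with the most in-box joints.
    pose_arrays: P x K x 2 joint coordinates; boxes: (x1, y1, x2, y2).
    Returns one pose-array index per box."""
    pose_arrays = np.asarray(pose_arrays, dtype=float)
    assignments = []
    for x1, y1, x2, y2 in boxes:
        # boolean P x K matrix: joint lies strictly inside this box
        inside = ((pose_arrays[:, :, 0] > x1) & (pose_arrays[:, :, 0] < x2) &
                  (pose_arrays[:, :, 1] > y1) & (pose_arrays[:, :, 1] < y2))
        counts = inside.sum(axis=1)  # in-box joints per pose candidate
        assignments.append(int(counts.argmax()))
    return assignments
```

Counting joints inside each box sidesteps any ordering mismatch between the detector's boxes and openpose's person indices.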

## Updates to **vis.py**
1. [def return_image_mask()](https://github.com/geekfeiw/wifiperson/blob/8a8a7e8d9829892fa2dc19f4a462eee1166b5f52/dataprocessing/vis.py#L806) returns the masks of persons, which are then aligned with their boxes in **demo_FPN_video_new.py**;

2. [def save_image_mask()](https://github.com/geekfeiw/wifiperson/blob/8a8a7e8d9829892fa2dc19f4a462eee1166b5f52/dataprocessing/vis.py#L546) saves the masks of all 80 trained classes (deprecated approach, not recommended).
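
The mask export in **vis.py** persists its results with `scipy.io.savemat` (the `sio.savemat(..., {'mask': res})` call), so the masks can be loaded straight into MATLAB alongside **poseArrayAlign.m**. A minimal sketch of that round trip, using an in-memory buffer in place of a real `.mat` file:

```python
import io
import numpy as np
import scipy.io as sio

# a stack of binary person masks, as collected into `res` in vis.py
masks = [np.zeros((4, 4), dtype=np.uint8) for _ in range(2)]
masks[0][1:3, 1:3] = 1

buf = io.BytesIO()                 # stands in for a .mat file on disk
sio.savemat(buf, {'mask': masks})  # equal-shape masks become one 2x4x4 array
buf.seek(0)
loaded = sio.loadmat(buf)['mask']
```

In MATLAB, `load` on the resulting file then yields a `mask` variable indexable per person.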


================================================
FILE: dataprocessing/vis.py
================================================
# Copyright (c) 2017-present, Facebook, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##############################################################################

"""Detection output visualization module."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import cv2
import numpy as np
import os

import pycocotools.mask as mask_util

from utils.colormap import colormap
# import utils.keypoints as keypoint_utils

# Matplotlib requires certain adjustments in some environments
# Must happen before importing matplotlib
import matplotlib
matplotlib.use('Agg') # Use a non-interactive backend
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
import scipy.io as sio

plt.rcParams['pdf.fonttype'] = 42  # For editing in Adobe Illustrator


_GRAY = (218, 227, 218)
_GREEN = (18, 127, 15)
_WHITE = (255, 255, 255)


# def kp_connections(keypoints):
#     kp_lines = [
#         [keypoints.index('left_eye'), keypoints.index('right_eye')],
#         [keypoints.index('left_eye'), keypoints.index('nose')],
#         [keypoints.index('right_eye'), keypoints.index('nose')],
#         [keypoints.index('right_eye'), keypoints.index('right_ear')],
#         [keypoints.index('left_eye'), keypoints.index('left_ear')],
#         [keypoints.index('right_shoulder'), keypoints.index('right_elbow')],
#         [keypoints.index('right_elbow'), keypoints.index('right_wrist')],
#         [keypoints.index('left_shoulder'), keypoints.index('left_elbow')],
#         [keypoints.index('left_elbow'), keypoints.index('left_wrist')],
#         [keypoints.index('right_hip'), keypoints.index('right_knee')],
#         [keypoints.index('right_knee'), keypoints.index('right_ankle')],
#         [keypoints.index('left_hip'), keypoints.index('left_knee')],
#         [keypoints.index('left_knee'), keypoints.index('left_ankle')],
#         [keypoints.index('right_shoulder'), keypoints.index('left_shoulder')],
#         [keypoints.index('right_hip'), keypoints.index('left_hip')],
#     ]
#     return kp_lines


def convert_from_cls_format(cls_boxes, cls_segms, cls_keyps):
    """Convert from the class boxes/segms/keyps format generated by the testing
    code.
    """
    box_list = [b for b in cls_boxes if len(b) > 0]
    if len(box_list) > 0:
        boxes = np.concatenate(box_list)
    else:
        boxes = None
    if cls_segms is not None:
        segms = [s for slist in cls_segms for s in slist]
    else:
        segms = None
    if cls_keyps is not None:
        keyps = [k for klist in cls_keyps for k in klist]
    else:
        keyps = None
    classes = []
    for j in range(len(cls_boxes)):
        classes += [j] * len(cls_boxes[j])
    return boxes, segms, keyps, classes


def get_class_string(class_index, score, dataset):
    class_text = dataset.classes[class_index] if dataset is not None else \
        'id{:d}'.format(class_index)
    return class_text + ' {:0.2f}'.format(score).lstrip('0')


def vis_mask(img, mask, col, alpha=0.4, show_border=True, border_thick=1):
    """Visualizes a single binary mask."""

    img = img.astype(np.float32)
    idx = np.nonzero(mask)

    img[idx[0], idx[1], :] *= 1.0 - alpha
    img[idx[0], idx[1], :] += alpha * col

    if show_border:
        contours = cv2.findContours(
            mask.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)[-2]  # [-2] selects contours under both OpenCV 3.x and 4.x
        cv2.drawContours(img, contours, -1, _WHITE, border_thick, cv2.LINE_AA)

    return img.astype(np.uint8)


def vis_class(img, pos, class_str, font_scale=0.35):
    """Visualizes the class."""
    x0, y0 = int(pos[0]), int(pos[1])
    # Compute text size.
    txt = class_str
    font = cv2.FONT_HERSHEY_SIMPLEX
    ((txt_w, txt_h), _) = cv2.getTextSize(txt, font, font_scale, 1)
    # Place text background.
    back_tl = x0, y0 - int(1.3 * txt_h)
    back_br = x0 + txt_w, y0
    cv2.rectangle(img, back_tl, back_br, _GREEN, -1)
    # Show text.
    txt_tl = x0, y0 - int(0.3 * txt_h)
    cv2.putText(img, txt, txt_tl, font, font_scale, _GRAY, lineType=cv2.LINE_AA)
    return img


def vis_bbox(img, bbox, thick=1):
    """Visualizes a bounding box."""
    (x0, y0, w, h) = bbox
    x1, y1 = int(x0 + w), int(y0 + h)
    x0, y0 = int(x0), int(y0)
    cv2.rectangle(img, (x0, y0), (x1, y1), _GREEN, thickness=thick)
    return img


# def vis_keypoints(img, kps, kp_thresh=2, alpha=0.7):
#     """Visualizes keypoints (adapted from vis_one_image).
#     kps has shape (4, #keypoints) where 4 rows are (x, y, logit, prob).
#     """
#     dataset_keypoints, _ = keypoint_utils.get_keypoints()
#     kp_lines = kp_connections(dataset_keypoints)

#     # Convert from plt 0-1 RGBA colors to 0-255 BGR colors for opencv.
#     cmap = plt.get_cmap('rainbow')
#     colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]
#     colors = [(c[2] * 255, c[1] * 255, c[0] * 255) for c in colors]

#     # Perform the drawing on a copy of the image, to allow for blending.
#     kp_mask = np.copy(img)

#     # Draw mid shoulder / mid hip first for better visualization.
#     mid_shoulder = (
#         kps[:2, dataset_keypoints.index('right_shoulder')] +
#         kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0
#     sc_mid_shoulder = np.minimum(
#         kps[2, dataset_keypoints.index('right_shoulder')],
#         kps[2, dataset_keypoints.index('left_shoulder')])
#     mid_hip = (
#         kps[:2, dataset_keypoints.index('right_hip')] +
#         kps[:2, dataset_keypoints.index('left_hip')]) / 2.0
#     sc_mid_hip = np.minimum(
#         kps[2, dataset_keypoints.index('right_hip')],
#         kps[2, dataset_keypoints.index('left_hip')])
#     nose_idx = dataset_keypoints.index('nose')
#     if sc_mid_shoulder > kp_thresh and kps[2, nose_idx] > kp_thresh:
#         cv2.line(
#             kp_mask, tuple(mid_shoulder), tuple(kps[:2, nose_idx]),
#             color=colors[len(kp_lines)], thickness=2, lineType=cv2.LINE_AA)
#     if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:
#         cv2.line(
#             kp_mask, tuple(mid_shoulder), tuple(mid_hip),
#             color=colors[len(kp_lines) + 1], thickness=2, lineType=cv2.LINE_AA)

#     # Draw the keypoints.
#     for l in range(len(kp_lines)):
#         i1 = kp_lines[l][0]
#         i2 = kp_lines[l][1]
#         p1 = kps[0, i1], kps[1, i1]
#         p2 = kps[0, i2], kps[1, i2]
#         if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:
#             cv2.line(
#                 kp_mask, p1, p2,
#                 color=colors[l], thickness=2, lineType=cv2.LINE_AA)
#         if kps[2, i1] > kp_thresh:
#             cv2.circle(
#                 kp_mask, p1,
#                 radius=3, color=colors[l], thickness=-1, lineType=cv2.LINE_AA)
#         if kps[2, i2] > kp_thresh:
#             cv2.circle(
#                 kp_mask, p2,
#                 radius=3, color=colors[l], thickness=-1, lineType=cv2.LINE_AA)

#     # Blend the keypoints.
#     return cv2.addWeighted(img, 1.0 - alpha, kp_mask, alpha, 0)


def vis_one_image_opencv(
        im, boxes, segms=None, keypoints=None, thresh=0.9, kp_thresh=2,
        show_box=False, dataset=None, show_class=False):
    """Constructs a numpy array with the detections visualized."""

    if isinstance(boxes, list):
        boxes, segms, keypoints, classes = convert_from_cls_format(
            boxes, segms, keypoints)

    if boxes is None or boxes.shape[0] == 0 or max(boxes[:, 4]) < thresh:
        return im

    if segms is not None:
        masks = mask_util.decode(segms)
        color_list = colormap()
        mask_color_id = 0

    # Display in largest to smallest order to reduce occlusion
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    sorted_inds = np.argsort(-areas)

    for i in sorted_inds:
        bbox = boxes[i, :4]
        score = boxes[i, -1]
        if score < thresh:
            continue

        # show box (off by default)
        if show_box:
            im = vis_bbox(
                im, (bbox[0], bbox[1], bbox[2] - bbox[0], bbox[3] - bbox[1]))

        # show class (off by default)
        if show_class:
            class_str = get_class_string(classes[i], score, dataset)
            im = vis_class(im, (bbox[0], bbox[1] - 2), class_str)

        # show mask
        if segms is not None and len(segms) > i:
            color_mask = color_list[mask_color_id % len(color_list), 0:3]
            mask_color_id += 1
            im = vis_mask(im, masks[..., i], color_mask)

        # # show keypoints
        # if keypoints is not None and len(keypoints) > i:
        #     im = vis_keypoints(im, keypoints[i], kp_thresh)

    return im


def vis_one_image(
        im, im_name, output_dir, boxes, segms=None, keypoints=None, thresh=0.9,
        kp_thresh=2, dpi=200, box_alpha=0.0, dataset=None, show_class=False,
        ext='pdf', show=False):
    """Visual debugging of detections."""
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    if isinstance(boxes, list):
        boxes, segms, keypoints, classes = convert_from_cls_format(
            boxes, segms, keypoints)

    if boxes is None or boxes.shape[0] == 0 or max(boxes[:, 4]) < thresh:
        return

    # dataset_keypoints, _ = keypoint_utils.get_keypoints()

    if segms is not None:
        masks = mask_util.decode(segms)

    color_list = colormap(rgb=True) / 255

    # kp_lines = kp_connections(dataset_keypoints)
    # cmap = plt.get_cmap('rainbow')
    # colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]

    fig = plt.figure(frameon=False)
    fig.set_size_inches(im.shape[1] / dpi, im.shape[0] / dpi)
    ax = plt.Axes(fig, [0., 0., 1., 1.])
    ax.axis('off')
    fig.add_axes(ax)
    ax.imshow(im)

    # Display in largest to smallest order to reduce occlusion
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    sorted_inds = np.argsort(-areas)

    mask_color_id = 0
    res = []
    for i in sorted_inds:
        bbox = boxes[i, :4]
        score = boxes[i, -1]
        if score < thresh:
            continue

        # show box (off by default)
        ax.add_patch(
            plt.Rectangle((bbox[0], bbox[1]),
                          bbox[2] - bbox[0],
                          bbox[3] - bbox[1],
                          fill=False, edgecolor='g',
                          linewidth=0.5, alpha=box_alpha))

        if show_class:
            ax.text(
                bbox[0], bbox[1] - 2,
                get_class_string(classes[i], score, dataset),
                fontsize=3,
                family='serif',
                bbox=dict(
                    facecolor='g', alpha=0.4, pad=0, edgecolor='none'),
                color='white')

        # show mask
        if segms is not None and len(segms) > i:
            img = np.ones(im.shape)
            color_mask = color_list[mask_color_id % len(color_list), 0:3]
            mask_color_id += 1

            w_ratio = .4
            for c in range(3):
                color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio
            for c in range(3):
                img[:, :, c] = color_mask[c]
            e = masks[:, :, i]
            res += [e]

            contour = cv2.findContours(
                e.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)[-2]  # [-2] selects contours under both OpenCV 3.x and 4.x

            for c in contour:
                polygon = Polygon(
                    c.reshape((-1, 2)),
                    fill=True, facecolor=color_mask,
                    edgecolor='w', linewidth=1.2,
                    alpha=0.5)
                ax.add_patch(polygon)

        # # show keypoints
        # if keypoints is not None and len(keypoints) > i:
        #     kps = keypoints[i]
        #     plt.autoscale(False)
        #     for l in range(len(kp_lines)):
        #         i1 = kp_lines[l][0]
        #         i2 = kp_lines[l][1]
        #         if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:
        #             x = [kps[0, i1], kps[0, i2]]
        #             y = [kps[1, i1], kps[1, i2]]
        #             line = plt.plot(x, y)
        #             plt.setp(line, color=colors[l], linewidth=1.0, alpha=0.7)
        #         if kps[2, i1] > kp_thresh:
        #             plt.plot(
        #                 kps[0, i1], kps[1, i1], '.', color=colors[l],
        #                 markersize=3.0, alpha=0.7)

        #         if kps[2, i2] > kp_thresh:
        #             plt.plot(
        #                 kps[0, i2], kps[1, i2], '.', color=colors[l],
        #                 markersize=3.0, alpha=0.7)

        #     # add mid shoulder / mid hip for better visualization
        #     mid_shoulder = (
        #         kps[:2, dataset_keypoints.index('right_shoulder')] +
        #         kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0
        #     sc_mid_shoulder = np.minimum(
        #         kps[2, dataset_keypoints.index('right_shoulder')],
        #         kps[2, dataset_keypoints.index('left_shoulder')])
        #     mid_hip = (
        #         kps[:2, dataset_keypoints.index('right_hip')] +
        #         kps[:2, dataset_keypoints.index('left_hip')]) / 2.0
        #     sc_mid_hip = np.minimum(
        #         kps[2, dataset_keypoints.index('right_hip')],
        #         kps[2, dataset_keypoints.index('left_hip')])
        #     if (sc_mid_shoulder > kp_thresh and
        #             kps[2, dataset_keypoints.index('nose')] > kp_thresh):
        #         x = [mid_shoulder[0], kps[0, dataset_keypoints.index('nose')]]
        #         y = [mid_shoulder[1], kps[1, dataset_keypoints.index('nose')]]
        #         line = plt.plot(x, y)
        #         plt.setp(
        #             line, color=colors[len(kp_lines)], linewidth=1.0, alpha=0.7)
        #     if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:
        #         x = [mid_shoulder[0], mid_hip[0]]
        #         y = [mid_shoulder[1], mid_hip[1]]
        #         line = plt.plot(x, y)
        #         plt.setp(
        #             line, color=colors[len(kp_lines) + 1], linewidth=1.0,
        #             alpha=0.7)

    output_name = os.path.basename(im_name) + '.' + ext
    fig.savefig(os.path.join(output_dir, '{}'.format(output_name)), dpi=dpi)
    print('result saved to {}'.format(os.path.join(output_dir, '{}'.format(output_name))))
    if show:
        plt.show()
    plt.close('all')
    sio.savemat('res_mask_000128.mat', {'mask': res})
    print('save done!')

    def vis_one_image(
            im, im_name, output_dir, boxes, segms=None, keypoints=None, thresh=0.9,
            kp_thresh=2, dpi=200, box_alpha=0.0, dataset=None, show_class=False,
            ext='pdf', show=False):
        """Visual debugging of detections."""
        if not os.path.exists(output_dir):
            os.makedirs(output_dir)

        if isinstance(boxes, list):
            boxes, segms, keypoints, classes = convert_from_cls_format(
                boxes, segms, keypoints)

        if boxes is None or boxes.shape[0] == 0 or max(boxes[:, 4]) < thresh:
            return

        # dataset_keypoints, _ = keypoint_utils.get_keypoints()

        if segms is not None:
            masks = mask_util.decode(segms)

        color_list = colormap(rgb=True) / 255

        # kp_lines = kp_connections(dataset_keypoints)
        # cmap = plt.get_cmap('rainbow')
        # colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]

        fig = plt.figure(frameon=False)
        fig.set_size_inches(im.shape[1] / dpi, im.shape[0] / dpi)
        ax = plt.Axes(fig, [0., 0., 1., 1.])
        ax.axis('off')
        fig.add_axes(ax)
        ax.imshow(im)

        # Display in largest to smallest order to reduce occlusion
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        sorted_inds = np.argsort(-areas)

        mask_color_id = 0
        res = []
        for i in sorted_inds:
            bbox = boxes[i, :4]
            score = boxes[i, -1]
            if score < thresh:
                continue

            # show box (off by default)
            ax.add_patch(
                plt.Rectangle((bbox[0], bbox[1]),
                              bbox[2] - bbox[0],
                              bbox[3] - bbox[1],
                              fill=False, edgecolor='g',
                              linewidth=0.5, alpha=box_alpha))

            if show_class:
                ax.text(
                    bbox[0], bbox[1] - 2,
                    get_class_string(classes[i], score, dataset),
                    fontsize=3,
                    family='serif',
                    bbox=dict(
                        facecolor='g', alpha=0.4, pad=0, edgecolor='none'),
                    color='white')

            # show mask
            if segms is not None and len(segms) > i:
                img = np.ones(im.shape)
                color_mask = color_list[mask_color_id % len(color_list), 0:3]
                mask_color_id += 1

                w_ratio = .4
                for c in range(3):
                    color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio
                for c in range(3):
                    img[:, :, c] = color_mask[c]
                e = masks[:, :, i]
                res += [e]

                _, contour, hier = cv2.findContours(
                    e.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)

                for c in contour:
                    polygon = Polygon(
                        c.reshape((-1, 2)),
                        fill=True, facecolor=color_mask,
                        edgecolor='w', linewidth=1.2,
                        alpha=0.5)
                    ax.add_patch(polygon)

                    # # show keypoints
                    # if keypoints is not None and len(keypoints) > i:
                    #     kps = keypoints[i]
                    #     plt.autoscale(False)
                    #     for l in range(len(kp_lines)):
                    #         i1 = kp_lines[l][0]
                    #         i2 = kp_lines[l][1]
                    #         if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:
                    #             x = [kps[0, i1], kps[0, i2]]
                    #             y = [kps[1, i1], kps[1, i2]]
                    #             line = plt.plot(x, y)
                    #             plt.setp(line, color=colors[l], linewidth=1.0, alpha=0.7)
                    #         if kps[2, i1] > kp_thresh:
                    #             plt.plot(
                    #                 kps[0, i1], kps[1, i1], '.', color=colors[l],
                    #                 markersize=3.0, alpha=0.7)

                    #         if kps[2, i2] > kp_thresh:
                    #             plt.plot(
                    #                 kps[0, i2], kps[1, i2], '.', color=colors[l],
                    #                 markersize=3.0, alpha=0.7)

                    #     # add mid shoulder / mid hip for better visualization
                    #     mid_shoulder = (
                    #         kps[:2, dataset_keypoints.index('right_shoulder')] +
                    #         kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0
                    #     sc_mid_shoulder = np.minimum(
                    #         kps[2, dataset_keypoints.index('right_shoulder')],
                    #         kps[2, dataset_keypoints.index('left_shoulder')])
                    #     mid_hip = (
                    #         kps[:2, dataset_keypoints.index('right_hip')] +
                    #         kps[:2, dataset_keypoints.index('left_hip')]) / 2.0
                    #     sc_mid_hip = np.minimum(
                    #         kps[2, dataset_keypoints.index('right_hip')],
                    #         kps[2, dataset_keypoints.index('left_hip')])
                    #     if (sc_mid_shoulder > kp_thresh and
                    #             kps[2, dataset_keypoints.index('nose')] > kp_thresh):
                    #         x = [mid_shoulder[0], kps[0, dataset_keypoints.index('nose')]]
                    #         y = [mid_shoulder[1], kps[1, dataset_keypoints.index('nose')]]
                    #         line = plt.plot(x, y)
                    #         plt.setp(
                    #             line, color=colors[len(kp_lines)], linewidth=1.0, alpha=0.7)
                    #     if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:
                    #         x = [mid_shoulder[0], mid_hip[0]]
                    #         y = [mid_shoulder[1], mid_hip[1]]
                    #         line = plt.plot(x, y)
                    #         plt.setp(
                    #             line, color=colors[len(kp_lines) + 1], linewidth=1.0,
                    #             alpha=0.7)

        output_name = os.path.basename(im_name) + '.' + ext
        fig.savefig(os.path.join(output_dir, '{}'.format(output_name)), dpi=dpi)
        print('result saved to {}'.format(os.path.join(output_dir, '{}'.format(output_name))))
        if show:
            plt.show()
        plt.close('all')
        # NOTE: hardcoded output filename; each call overwrites this file
        sio.savemat('res_mask_000128.mat', {'mask': res})
        print('save done!')
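All three mask functions in this file order detections with `np.argsort(-areas)` so larger boxes are processed first and smaller ones stay visible on top, then skip detections below `thresh`. A minimal standalone sketch of that ordering-and-filtering step, using hypothetical boxes in the same `[x1, y1, x2, y2, score]` layout (data and names are ours, not the repo's):

```python
import numpy as np

# Hypothetical detections in [x1, y1, x2, y2, score] format.
boxes = np.array([
    [0.0, 0.0, 10.0, 10.0, 0.95],   # area 100
    [5.0, 5.0, 25.0, 25.0, 0.92],   # area 400
    [2.0, 2.0,  6.0,  6.0, 0.50],   # area 16, below thresh
])

# Largest-to-smallest order, same trick as np.argsort(-areas) above,
# so big masks are drawn first and small ones are not occluded.
areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
order = np.argsort(-areas)

thresh = 0.9
kept = [int(i) for i in order if boxes[i, -1] >= thresh]
print(kept)  # -> [1, 0]: sorted by area, low-score box dropped
```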

# save mask
# added by Fei Wang,
def save_image_mask(
        im, im_name, output_dir, boxes, segms=None, keypoints=None, thresh=0.9,
        kp_thresh=2, dpi=200, box_alpha=0.0, dataset=None, show_class=False,
        ext='pdf', show=False):
    """Visual debugging of detections."""
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    if isinstance(boxes, list):
        boxes, segms, keypoints, classes = convert_from_cls_format(
            boxes, segms, keypoints)

    # if boxes is None or boxes.shape[0] == 0 or max(boxes[:, 4]) < thresh:
    #     return

    # dataset_keypoints, _ = keypoint_utils.get_keypoints()

    if segms is not None:
        masks = mask_util.decode(segms)

    color_list = colormap(rgb=True) / 255

    # kp_lines = kp_connections(dataset_keypoints)
    # cmap = plt.get_cmap('rainbow')
    # colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]

    fig = plt.figure(frameon=False)
    fig.set_size_inches(im.shape[1] / dpi, im.shape[0] / dpi)
    ax = plt.Axes(fig, [0., 0., 1., 1.])
    ax.axis('off')
    fig.add_axes(ax)
    ax.imshow(im)

    # Display in largest to smallest order to reduce occlusion
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    sorted_inds = np.argsort(-areas)

    mask_color_id = 0
    res = []
    for i in sorted_inds:
        bbox = boxes[i, :4]
        score = boxes[i, -1]
        if score < thresh:
            continue

        # show box (off by default)
        ax.add_patch(
            plt.Rectangle((bbox[0], bbox[1]),
                          bbox[2] - bbox[0],
                          bbox[3] - bbox[1],
                          fill=False, edgecolor='g',
                          linewidth=0.5, alpha=box_alpha))

        if show_class:
            ax.text(
                bbox[0], bbox[1] - 2,
                get_class_string(classes[i], score, dataset),
                fontsize=3,
                family='serif',
                bbox=dict(
                    facecolor='g', alpha=0.4, pad=0, edgecolor='none'),
                color='white')

        # show mask
        if segms is not None and len(segms) > i:
            img = np.ones(im.shape)
            color_mask = color_list[mask_color_id % len(color_list), 0:3]
            mask_color_id += 1

            w_ratio = .4
            for c in range(3):
                color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio
            for c in range(3):
                img[:, :, c] = color_mask[c]
            e = masks[:, :, i]
            res += [e]

            # OpenCV 3.x API: findContours returns (image, contours, hierarchy);
            # OpenCV 4.x returns only (contours, hierarchy)
            _, contour, hier = cv2.findContours(
                e.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)

            for c in contour:
                polygon = Polygon(
                    c.reshape((-1, 2)),
                    fill=True, facecolor=color_mask,
                    edgecolor='w', linewidth=1.2,
                    alpha=0.5)
                ax.add_patch(polygon)

        # # show keypoints
        # if keypoints is not None and len(keypoints) > i:
        #     kps = keypoints[i]
        #     plt.autoscale(False)
        #     for l in range(len(kp_lines)):
        #         i1 = kp_lines[l][0]
        #         i2 = kp_lines[l][1]
        #         if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:
        #             x = [kps[0, i1], kps[0, i2]]
        #             y = [kps[1, i1], kps[1, i2]]
        #             line = plt.plot(x, y)
        #             plt.setp(line, color=colors[l], linewidth=1.0, alpha=0.7)
        #         if kps[2, i1] > kp_thresh:
        #             plt.plot(
        #                 kps[0, i1], kps[1, i1], '.', color=colors[l],
        #                 markersize=3.0, alpha=0.7)

        #         if kps[2, i2] > kp_thresh:
        #             plt.plot(
        #                 kps[0, i2], kps[1, i2], '.', color=colors[l],
        #                 markersize=3.0, alpha=0.7)

        #     # add mid shoulder / mid hip for better visualization
        #     mid_shoulder = (
        #         kps[:2, dataset_keypoints.index('right_shoulder')] +
        #         kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0
        #     sc_mid_shoulder = np.minimum(
        #         kps[2, dataset_keypoints.index('right_shoulder')],
        #         kps[2, dataset_keypoints.index('left_shoulder')])
        #     mid_hip = (
        #         kps[:2, dataset_keypoints.index('right_hip')] +
        #         kps[:2, dataset_keypoints.index('left_hip')]) / 2.0
        #     sc_mid_hip = np.minimum(
        #         kps[2, dataset_keypoints.index('right_hip')],
        #         kps[2, dataset_keypoints.index('left_hip')])
        #     if (sc_mid_shoulder > kp_thresh and
        #             kps[2, dataset_keypoints.index('nose')] > kp_thresh):
        #         x = [mid_shoulder[0], kps[0, dataset_keypoints.index('nose')]]
        #         y = [mid_shoulder[1], kps[1, dataset_keypoints.index('nose')]]
        #         line = plt.plot(x, y)
        #         plt.setp(
        #             line, color=colors[len(kp_lines)], linewidth=1.0, alpha=0.7)
        #     if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:
        #         x = [mid_shoulder[0], mid_hip[0]]
        #         y = [mid_shoulder[1], mid_hip[1]]
        #         line = plt.plot(x, y)
        #         plt.setp(
        #             line, color=colors[len(kp_lines) + 1], linewidth=1.0,
        #             alpha=0.7)

    output_name = os.path.basename(im_name) + '.' + ext
    fig.savefig(os.path.join(output_dir, '{}'.format(output_name)), dpi=dpi)
    print('result saved to {}'.format(os.path.join(output_dir, '{}'.format(output_name))))
    if show:
        plt.show()
    plt.close('all')
    # NOTE: hardcoded output filename; each call overwrites this file
    sio.savemat('res_mask_000128.mat', {'mask': res})
    print('save done!')
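The per-channel update `color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio` used above lightens each mask colour by blending it 40% toward white, so the overlay stays readable on dark regions. A minimal standalone sketch of that blend (function name is ours):

```python
import numpy as np

def lighten(color, w_ratio=0.4):
    """Blend an RGB colour (components in [0, 1]) toward white.

    Equivalent to the per-channel loop in the mask-drawing code:
    c' = c * (1 - w_ratio) + w_ratio * 1.0
    """
    color = np.asarray(color, dtype=float)
    return color * (1.0 - w_ratio) + w_ratio

print(lighten([0.0, 0.5, 1.0]))  # -> [0.4 0.7 1. ]
```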


# save mask to .mat only (no figure)
# added by Fei Wang,
# NOTE: this redefinition shadows the drawing save_image_mask above;
# only this .mat-only version is visible to callers of the module.
def save_image_mask(
        im, im_name, output_dir, boxes, segms=None, keypoints=None, thresh=0.9):
    """Save per-person masks of detections above `thresh` to a .mat file."""
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    if isinstance(boxes, list):
        boxes, segms, keypoints, classes = convert_from_cls_format(
            boxes, segms, keypoints)

    # if boxes is None or boxes.shape[0] == 0 or max(boxes[:, 4]) < thresh:
    #     return

    # dataset_keypoints, _ = keypoint_utils.get_keypoints()

    if segms is not None:
        masks = mask_util.decode(segms)

    color_list = colormap(rgb=True) / 255

    # kp_lines = kp_connections(dataset_keypoints)

    # cmap = plt.get_cmap('rainbow')
    # colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]


    # Display in largest to smallest order to reduce occlusion
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    sorted_inds = np.argsort(-areas)

    mask_color_id = 0

    res = []
    for i in sorted_inds:
        bbox = boxes[i, :4]
        score = boxes[i, -1]
        if score < thresh:
            continue

        # collect mask (the colour computation below is vestigial in this variant)
        if segms is not None and len(segms) > i:
            img = np.ones(im.shape)
            color_mask = color_list[mask_color_id % len(color_list), 0:3]
            mask_color_id += 1

            w_ratio = .4
            for c in range(3):
                color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio
            for c in range(3):
                img[:, :, c] = color_mask[c]
            e = masks[:, :, i]
            res += [e]

        # # show keypoints
        # if keypoints is not None and len(keypoints) > i:
        #     kps = keypoints[i]
        #     plt.autoscale(False)
        #     for l in range(len(kp_lines)):
        #         i1 = kp_lines[l][0]
        #         i2 = kp_lines[l][1]
        #         if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:
        #             x = [kps[0, i1], kps[0, i2]]
        #             y = [kps[1, i1], kps[1, i2]]
        #             line = plt.plot(x, y)
        #             plt.setp(line, color=colors[l], linewidth=1.0, alpha=0.7)
        #         if kps[2, i1] > kp_thresh:
        #             plt.plot(
        #                 kps[0, i1], kps[1, i1], '.', color=colors[l],
        #                 markersize=3.0, alpha=0.7)

        #         if kps[2, i2] > kp_thresh:
        #             plt.plot(
        #                 kps[0, i2], kps[1, i2], '.', color=colors[l],
        #                 markersize=3.0, alpha=0.7)

        #     # add mid shoulder / mid hip for better visualization
        #     mid_shoulder = (
        #         kps[:2, dataset_keypoints.index('right_shoulder')] +
        #         kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0
        #     sc_mid_shoulder = np.minimum(
        #         kps[2, dataset_keypoints.index('right_shoulder')],
        #         kps[2, dataset_keypoints.index('left_shoulder')])
        #     mid_hip = (
        #         kps[:2, dataset_keypoints.index('right_hip')] +
        #         kps[:2, dataset_keypoints.index('left_hip')]) / 2.0
        #     sc_mid_hip = np.minimum(
        #         kps[2, dataset_keypoints.index('right_hip')],
        #         kps[2, dataset_keypoints.index('left_hip')])
        #     if (sc_mid_shoulder > kp_thresh and
        #             kps[2, dataset_keypoints.index('nose')] > kp_thresh):
        #         x = [mid_shoulder[0], kps[0, dataset_keypoints.index('nose')]]
        #         y = [mid_shoulder[1], kps[1, dataset_keypoints.index('nose')]]
        #         line = plt.plot(x, y)
        #         plt.setp(
        #             line, color=colors[len(kp_lines)], linewidth=1.0, alpha=0.7)
        #     if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:
        #         x = [mid_shoulder[0], mid_hip[0]]
        #         y = [mid_shoulder[1], mid_hip[1]]
        #         line = plt.plot(x, y)
        #         plt.setp(
        #             line, color=colors[len(kp_lines) + 1], linewidth=1.0,
        #             alpha=0.7)

    output_name = os.path.basename(im_name) + '.MASK.mat'
    sio.savemat(os.path.join(output_dir, '{}'.format(output_name)), {'mask': res})
#    print('save done!')
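The function above writes `res` (a list of HxW binary masks) as `<image>.MASK.mat` via `scipy.io.savemat`. A minimal round-trip sketch, assuming scipy is installed and all masks share one HxW shape (paths and data here are ours): a same-shape list is stored as a single N x H x W array.

```python
import os
import tempfile
import numpy as np
import scipy.io as sio

# Two hypothetical 4x4 binary masks, as collected into `res` above.
res = [np.eye(4, dtype=np.uint8), np.ones((4, 4), dtype=np.uint8)]

path = os.path.join(tempfile.mkdtemp(), 'demo.MASK.mat')
sio.savemat(path, {'mask': res})        # list of HxW masks -> N x H x W
loaded = sio.loadmat(path)['mask']
print(loaded.shape)                     # -> (2, 4, 4)
```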


# return mask
# added by Fei Wang,
def return_image_mask(
        im, im_name, output_dir, boxes, segms=None, keypoints=None, thresh=0.9):
    """Visual debugging of detections."""
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    if isinstance(boxes, list):
        boxes, segms, keypoints, classes = convert_from_cls_format(
            boxes, segms, keypoints)

    # if boxes is None or boxes.shape[0] == 0 or max(boxes[:, 4]) < thresh:
    #     return

    # dataset_keypoints, _ = keypoint_utils.get_keypoints()

    if segms is not None:
        masks = mask_util.decode(segms)

    color_list = colormap(rgb=True) / 255

    # kp_lines = kp_connections(dataset_keypoints)

    # cmap = plt.get_cmap('rainbow')
    # colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]


    # Display in largest to smallest order to reduce occlusion
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    sorted_inds = np.argsort(-areas)

    mask_color_id = 0

    res = []
    for i in sorted_inds:
        bbox = boxes[i, :4]
        score = boxes[i, -1]
        if score < thresh:
            continue

        # collect mask (the colour computation below is vestigial in this variant)
        if segms is not None and len(segms) > i:
            img = np.ones(im.shape)
            color_mask = color_list[mask_color_id % len(color_list), 0:3]
            mask_color_id += 1

            w_ratio = .4
            for c in range(3):
                color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio
            for c in range(3):
                img[:, :, c] = color_mask[c]
            e = masks[:, :, i]
            res += [e]

        # # show keypoints
        # if keypoints is not None and len(keypoints) > i:
        #     kps = keypoints[i]
        #     plt.autoscale(False)
        #     for l in range(len(kp_lines)):
        #         i1 = kp_lines[l][0]
        #         i2 = kp_lines[l][1]
        #         if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:
        #             x = [kps[0, i1], kps[0, i2]]
        #             y = [kps[1, i1], kps[1, i2]]
        #             line = plt.plot(x, y)
        #             plt.setp(line, color=colors[l], linewidth=1.0, alpha=0.7)
        #         if kps[2, i1] > kp_thresh:
        #             plt.plot(
        #                 kps[0, i1], kps[1, i1], '.', color=colors[l],
        #                 markersize=3.0, alpha=0.7)

        #         if kps[2, i2] > kp_thresh:
        #             plt.plot(
        #                 kps[0, i2], kps[1, i2], '.', color=colors[l],
        #                 markersize=3.0, alpha=0.7)

        #     # add mid shoulder / mid hip for better visualization
        #     mid_shoulder = (
        #         kps[:2, dataset_keypoints.index('right_shoulder')] +
        #         kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0
        #     sc_mid_shoulder = np.minimum(
        #         kps[2, dataset_keypoints.index('right_shoulder')],
        #         kps[2, dataset_keypoints.index('left_shoulder')])
        #     mid_hip = (
        #         kps[:2, dataset_keypoints.index('right_hip')] +
        #         kps[:2, dataset_keypoints.index('left_hip')]) / 2.0
        #     sc_mid_hip = np.minimum(
        #         kps[2, dataset_keypoints.index('right_hip')],
        #         kps[2, dataset_keypoints.index('left_hip')])
        #     if (sc_mid_shoulder > kp_thresh and
        #             kps[2, dataset_keypoints.index('nose')] > kp_thresh):
        #         x = [mid_shoulder[0], kps[0, dataset_keypoints.index('nose')]]
        #         y = [mid_shoulder[1], kps[1, dataset_keypoints.index('nose')]]
        #         line = plt.plot(x, y)
        #         plt.setp(
        #             line, color=colors[len(kp_lines)], linewidth=1.0, alpha=0.7)
        #     if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:
        #         x = [mid_shoulder[0], mid_hip[0]]
        #         y = [mid_shoulder[1], mid_hip[1]]
        #         line = plt.plot(x, y)
        #         plt.setp(
        #             line, color=colors[len(kp_lines) + 1], linewidth=1.0,
        #             alpha=0.7)
    return res
#    output_name = os.path.basename(im_name) + '.MASK.mat'
#    sio.savemat(os.path.join(output_dir, '{}'.format(output_name)), {'mask': res})
#    print('save done!')
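`return_image_mask` hands back `res`, a list of HxW binary masks, one per detection above `thresh`. A common follow-up when aligning masks with other data is to merge the list into a single label map where pixel value k marks person k (0 = background); the helper below is our sketch, not part of the repo:

```python
import numpy as np

def masks_to_label_map(res):
    """Merge a list of HxW binary masks into one int label map.

    Later masks overwrite earlier ones where they overlap, mirroring
    the draw order of the visualization code.
    """
    if not res:
        return None
    label = np.zeros(res[0].shape, dtype=np.int32)
    for k, m in enumerate(res, start=1):
        label[m.astype(bool)] = k
    return label

# Two hypothetical non-overlapping person masks.
a = np.zeros((4, 4), dtype=np.uint8); a[0:2, 0:2] = 1
b = np.zeros((4, 4), dtype=np.uint8); b[2:4, 2:4] = 1
lab = masks_to_label_map([a, b])
```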
