Repository: bubbliiiing/faster-rcnn-tf2 Branch: main Commit: 66c710c79572 Files: 27 Total size: 225.4 KB Directory structure: gitextract_tiar2vxh/ ├── .gitignore ├── LICENSE ├── README.md ├── frcnn.py ├── get_map.py ├── nets/ │ ├── __init__.py │ ├── classifier.py │ ├── frcnn.py │ ├── frcnn_training.py │ ├── resnet.py │ ├── rpn.py │ └── vgg.py ├── predict.py ├── requirements.txt ├── summary.py ├── train.py ├── utils/ │ ├── __init__.py │ ├── anchors.py │ ├── callbacks.py │ ├── dataloader.py │ ├── utils.py │ ├── utils_bbox.py │ ├── utils_fit.py │ └── utils_map.py ├── vision_for_anchor.py ├── voc_annotation.py └── 常见问题汇总.md ================================================ FILE CONTENTS ================================================ ================================================ FILE: .gitignore ================================================ # ignore map, miou, datasets map_out/ miou_out/ VOCdevkit/ datasets/ Medical_Datasets/ lfw/ logs/ model_data/ .temp_map_out/ # Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] *$py.class # C extensions *.so # Distribution / packaging .Python build/ develop-eggs/ dist/ downloads/ eggs/ .eggs/ lib/ lib64/ parts/ sdist/ var/ wheels/ pip-wheel-metadata/ share/python-wheels/ *.egg-info/ .installed.cfg *.egg MANIFEST # PyInstaller # Usually these files are written by a python script from a template # before PyInstaller builds the exe, so as to inject date/other infos into it. 
*.manifest *.spec # Installer logs pip-log.txt pip-delete-this-directory.txt # Unit test / coverage reports htmlcov/ .tox/ .nox/ .coverage .coverage.* .cache nosetests.xml coverage.xml *.cover *.py,cover .hypothesis/ .pytest_cache/ # Translations *.mo *.pot # Django stuff: *.log local_settings.py db.sqlite3 db.sqlite3-journal # Flask stuff: instance/ .webassets-cache # Scrapy stuff: .scrapy # Sphinx documentation docs/_build/ # PyBuilder target/ # Jupyter Notebook .ipynb_checkpoints # IPython profile_default/ ipython_config.py # pyenv .python-version # pipenv # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. # However, in case of collaboration, if having platform-specific dependencies or dependencies # having no cross-platform support, pipenv may install dependencies that don't work, or not # install all needed dependencies. #Pipfile.lock # PEP 582; used by e.g. github.com/David-OConnor/pyflow __pypackages__/ # Celery stuff celerybeat-schedule celerybeat.pid # SageMath parsed files *.sage.py # Environments .env .venv env/ venv/ ENV/ env.bak/ venv.bak/ # Spyder project settings .spyderproject .spyproject # Rope project settings .ropeproject # mkdocs documentation /site # mypy .mypy_cache/ .dmypy.json dmypy.json # Pyre type checker .pyre/ ================================================ FILE: LICENSE ================================================ MIT License Copyright (c) 2020 JiaQi Xu Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of 
the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
================================================
FILE: README.md
================================================
## Faster-Rcnn: A two-stage object detection model implemented in TensorFlow 2
---
## Contents
1. [Top News](#top-news)
2. [Performance](#performance)
3. [Environment](#environment)
4. [Download](#download)
5. [How2train](#how2train)
6. [How2predict](#how2predict)
7. [How2eval](#how2eval)
8. [Reference](#reference)

## Top News
**`2022-04`**: **Added multi-GPU training support and per-class object counting.**
**`2022-03`**: **Major update: added step and cos learning-rate schedules, a choice of adam or sgd optimizers, learning rates that adapt to batch_size, and image cropping.**
The repository used in the BiliBili videos is: https://github.com/bubbliiiing/faster-rcnn-tf2/tree/bilibili
**`2021-10`**: **Major update: added extensive comments and many adjustable parameters, reorganized the code modules, and added FPS testing, video prediction, and batch prediction.**

## Performance
| Training dataset | Weights file | Test dataset | Input size | mAP 0.5:0.95 | mAP 0.5 |
| :-----: | :-----: | :------: | :------: | :------: | :-----: |
| VOC07+12 | [voc_weights_resnet.h5](https://github.com/bubbliiiing/faster-rcnn-tf2/releases/download/v1.0/voc_weights_resnet.h5) | VOC-Test07 | - | - | 81.16 |
| VOC07+12 | [voc_weights_vgg.h5](https://github.com/bubbliiiing/faster-rcnn-tf2/releases/download/v1.0/voc_weights_vgg.h5) | VOC-Test07 | - | - | 76.28 |

## Environment
tensorflow-gpu==2.2.0

## Download
The voc_weights_resnet.h5, voc_weights_vgg.h5 and backbone weights needed for training can be downloaded from Baidu Netdisk.
Link: https://pan.baidu.com/s/1ACymiz3m9Kx0L8WXIDX_dg Extraction code: jwvs

The VOC dataset can be downloaded at the link below. It already contains the training, test, and validation sets (the validation set is identical to the test set), so no further splitting is needed:
Link: https://pan.baidu.com/s/1-1Ej6dayrx3g0iAA88uY5A Extraction code: ph32

## How2train
### a. Training on the VOC07+12 dataset
1. Dataset preparation
**This project trains on VOC-format data. Download the VOC07+12 dataset before training and extract it into the root directory.**
2.
Dataset processing
Set annotation_mode=2 in voc_annotation.py and run it to generate 2007_train.txt and 2007_val.txt in the root directory.
3. Start training
The default parameters of train.py are set up for the VOC dataset, so you can simply run train.py to start training.
4. Predicting with the trained weights
Prediction uses two files: frcnn.py and predict.py. First modify model_path and classes_path in frcnn.py; both parameters must be changed.
**model_path points to the trained weights file in the logs folder. classes_path points to the txt file listing the detection classes.**
After making these changes, run predict.py and enter an image path to run detection.
### b. Training on your own dataset
1. Dataset preparation
**This project trains on VOC-format data, so prepare your dataset before training.**
Put the annotation files in VOCdevkit/VOC2007/Annotation and the image files in VOCdevkit/VOC2007/JPEGImages before training.
2. Dataset processing
After arranging the dataset, use voc_annotation.py to generate the 2007_train.txt and 2007_val.txt used for training.
Modify the parameters in voc_annotation.py. For a first training run you only need to change classes_path, which points to the txt file listing the detection classes.
When training on your own dataset, create a cls_classes.txt listing the classes you want to distinguish.
The contents of model_data/cls_classes.txt are, for example:
```python
cat
dog
...
```
Point classes_path in voc_annotation.py at cls_classes.txt and run voc_annotation.py.
3. Start training
**There are many training parameters, all in train.py; read the comments carefully after downloading the repository. The most important one is still classes_path in train.py.**
**classes_path points to the txt file listing the detection classes, the same txt used by voc_annotation.py! It must be changed when training on your own dataset!**
After modifying classes_path, run train.py to start training; after several epochs the weights will appear in the logs folder.
4. Predicting with the trained weights
Prediction uses two files: frcnn.py and predict.py. Modify model_path and classes_path in frcnn.py.
**model_path points to the trained weights file in the logs folder. classes_path points to the txt file listing the detection classes.**
After making these changes, run predict.py and enter an image path to run detection.
## How2predict
### a. Using pretrained weights
1. After downloading and extracting the repository, download frcnn_weights.pth from Baidu Netdisk, put it in model_data, run predict.py, and enter
```python
img/street.jpg
```
2. predict.py can also be configured for FPS testing and video detection.
### b. Using your own trained weights
1. Train following the training steps.
2. In frcnn.py, modify model_path and classes_path in the section below so they match your trained files; **model_path points to the weights file under the logs folder, and classes_path lists the classes that model_path was trained on**.
```python
_defaults = {
    #--------------------------------------------------------------------------#
    #   To predict with your own trained model you must modify model_path and classes_path!
    #   model_path points to the weights file under the logs folder; classes_path points to the txt under model_data
    #   If a shape mismatch occurs, also check the model_path and classes_path parameters used during training
    #--------------------------------------------------------------------------#
    "model_path"    : 'model_data/voc_weights_resnet.h5',
    "classes_path"  : 'model_data/voc_classes.txt',
    #---------------------------------------------------------------------#
    #   The backbone feature-extraction network: resnet50 or vgg
    #---------------------------------------------------------------------#
    "backbone"      : "resnet50",
    #---------------------------------------------------------------------#
    #   Only predicted boxes whose score exceeds the confidence are kept
    #---------------------------------------------------------------------#
    "confidence"    : 0.5,
    #---------------------------------------------------------------------#
    #   The nms_iou threshold used by non-maximum suppression
    #---------------------------------------------------------------------#
    "nms_iou"       : 0.3,
    #---------------------------------------------------------------------#
    #   Specifies the sizes of the anchor boxes
    #---------------------------------------------------------------------#
    'anchors_size'  : [128, 256, 512],
}
```
3. Run predict.py and enter
```python
img/street.jpg
```
4. predict.py can also be configured for FPS testing and video detection.
## How2eval
### a. Evaluating the VOC07+12 test set
1. This project evaluates on VOC-format data. VOC07+12 already includes a test-set split, so there is no need to use voc_annotation.py to generate the txt files under the ImageSets folder.
2. Modify model_path and classes_path in frcnn.py. **model_path points to the trained weights file in the logs folder. classes_path points to the txt file listing the detection classes.**
3. Run get_map.py to obtain the evaluation results, which are saved in the map_out folder.
### b. Evaluating your own dataset
1. This project evaluates on VOC-format data.
2. If you already ran voc_annotation.py before training, the code automatically splits the dataset into training, validation, and test sets. To change the test-set ratio, modify trainval_percent in voc_annotation.py. trainval_percent specifies the ratio of (training set + validation set) to test set; by default (training + validation) : test = 9:1. train_percent specifies the ratio of training set to validation set within (training set + validation set); by default training : validation = 9:1.
3. After splitting off the test set with voc_annotation.py, modify classes_path in get_map.py. classes_path points to the txt file listing the detection classes, the same txt used for training. It must be changed when evaluating your own dataset.
4. Modify model_path and classes_path in frcnn.py. **model_path points to the trained weights file in the logs folder. classes_path points to the txt file listing the detection classes.**
5.
Run get_map.py to obtain the evaluation results, which are saved in the map_out folder.
## Reference
https://github.com/qqwweee/keras-yolo3/
https://github.com/pierluigiferrari/ssd_keras
https://github.com/kuhung/SSD_keras
https://github.com/jinfagang/keras_frcnn
https://github.com/Cartucho/mAP
================================================
FILE: frcnn.py
================================================
import colorsys
import os
import time

import numpy as np
from PIL import ImageDraw, ImageFont
from tensorflow.keras.applications.imagenet_utils import preprocess_input

import nets.frcnn as frcnn
from utils.anchors import get_anchors
from utils.utils import (cvtColor, get_classes, get_new_img_size, resize_image,
                         show_config)
from utils.utils_bbox import BBoxUtility

#--------------------------------------------#
#   To predict with your own trained model,
#   two parameters must be changed:
#   both model_path and classes_path!
#   If a shape mismatch occurs, be sure to
#   check the NUM_CLASSES, model_path and
#   classes_path parameters used during training
#--------------------------------------------#
class FRCNN(object):
    _defaults = {
        #--------------------------------------------------------------------------#
        #   To predict with your own trained model you must modify model_path and classes_path!
        #   model_path points to the weights file under the logs folder; classes_path points to the txt under model_data
        #
        #   After training there are multiple weights files in the logs folder; pick one with a low validation loss.
        #   A low validation loss does not guarantee a high mAP; it only means those weights generalize well on the validation set.
        #   If a shape mismatch occurs, also check the model_path and classes_path parameters used during training
        #--------------------------------------------------------------------------#
        "model_path"    : 'model_data/voc_weights_resnet.h5',
        "classes_path"  : 'model_data/voc_classes.txt',
        #---------------------------------------------------------------------#
        #   The backbone feature-extraction network: resnet50 or vgg
        #---------------------------------------------------------------------#
        "backbone"      : "resnet50",
        #---------------------------------------------------------------------#
        #   Only predicted boxes whose score exceeds the confidence are kept
        #---------------------------------------------------------------------#
        "confidence"    : 0.5,
        #---------------------------------------------------------------------#
        #   The nms_iou threshold used by non-maximum suppression
        #---------------------------------------------------------------------#
        "nms_iou"       : 0.3,
        #---------------------------------------------------------------------#
        #   Specifies the sizes of the anchor boxes
        #---------------------------------------------------------------------#
        'anchors_size'  : [128, 256, 512],
    }

    @classmethod
    def get_defaults(cls, n):
        if n in cls._defaults:
            return cls._defaults[n]
        else:
            return "Unrecognized attribute name '" + n + "'"

    #---------------------------------------------------#
    #   Initialize Faster R-CNN
    #---------------------------------------------------#
    def __init__(self, **kwargs):
        self.__dict__.update(self._defaults)
        for name, value in kwargs.items():
            setattr(self, name, value)
            self._defaults[name] = value
        #---------------------------------------------------#
        #   Get the class names and the number of classes
        #---------------------------------------------------#
        self.class_names, self.num_classes = get_classes(self.classes_path)
        self.num_classes = self.num_classes + 1
        #---------------------------------------------------#
        #   Create a toolbox used for decoding;
        #   at most min_k proposal boxes are used, 150 by default
        #---------------------------------------------------#
        self.bbox_util = BBoxUtility(self.num_classes, nms_iou = self.nms_iou, min_k
= 150) #---------------------------------------------------# # 画框设置不同的颜色 #---------------------------------------------------# hsv_tuples = [(x / self.num_classes, 1., 1.) for x in range(self.num_classes)] self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples)) self.colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), self.colors)) self.generate() show_config(**self._defaults) #---------------------------------------------------# # 载入模型 #---------------------------------------------------# def generate(self): model_path = os.path.expanduser(self.model_path) assert model_path.endswith('.h5'), 'Keras model or weights must be a .h5 file.' #-------------------------------# # 载入模型与权值 #-------------------------------# self.model_rpn, self.model_classifier = frcnn.get_predict_model(self.num_classes, self.backbone) self.model_rpn.load_weights(self.model_path, by_name=True) self.model_classifier.load_weights(self.model_path, by_name=True) print('{} model, anchors, and classes loaded.'.format(model_path)) #---------------------------------------------------# # 检测图片 #---------------------------------------------------# def detect_image(self, image, crop = False, count = False): #---------------------------------------------------# # 计算输入图片的高和宽 #---------------------------------------------------# image_shape = np.array(np.shape(image)[0:2]) #---------------------------------------------------# # 计算输入到网络中进行运算的图片的高和宽 # 保证短边是600的 #---------------------------------------------------# input_shape = get_new_img_size(image_shape[0], image_shape[1]) #---------------------------------------------------------# # 在这里将图像转换成RGB图像,防止灰度图在预测时报错。 # 代码仅仅支持RGB图像的预测,所有其它类型的图像都会转化成RGB #---------------------------------------------------------# image = cvtColor(image) #---------------------------------------------------------# # 给原图像进行resize,resize到短边为600的大小上 #---------------------------------------------------------# image_data = resize_image(image, 
[input_shape[1], input_shape[0]]) #---------------------------------------------------------# # 添加上batch_size维度 #---------------------------------------------------------# image_data = np.expand_dims(preprocess_input(np.array(image_data, dtype='float32')), 0) #---------------------------------------------------------# # 获得rpn网络预测结果和base_layer #---------------------------------------------------------# rpn_pred = self.model_rpn(image_data) rpn_pred = [x.numpy() for x in rpn_pred] #---------------------------------------------------------# # 生成先验框并解码 #---------------------------------------------------------# anchors = get_anchors(input_shape, self.backbone, self.anchors_size) rpn_results = self.bbox_util.detection_out_rpn(rpn_pred, anchors) #-------------------------------------------------------------# # 利用建议框获得classifier网络预测结果 #-------------------------------------------------------------# classifier_pred = self.model_classifier([rpn_pred[2], rpn_results[:, :, [1, 0, 3, 2]]]) classifier_pred = [x.numpy() for x in classifier_pred] #-------------------------------------------------------------# # 利用classifier的预测结果对建议框进行解码,获得预测框 #-------------------------------------------------------------# results = self.bbox_util.detection_out_classifier(classifier_pred, rpn_results, image_shape, input_shape, self.confidence) if len(results[0]) == 0: return image top_label = np.array(results[0][:, 5], dtype = 'int32') top_conf = results[0][:, 4] top_boxes = results[0][:, :4] #---------------------------------------------------------# # 设置字体与边框厚度 #---------------------------------------------------------# font = ImageFont.truetype(font='model_data/simhei.ttf',size=np.floor(3e-2 * np.shape(image)[1] + 0.5).astype('int32')) thickness = max((np.shape(image)[0] + np.shape(image)[1]) // input_shape[0], 1) #---------------------------------------------------------# # 计数 #---------------------------------------------------------# if count: print("top_label:", top_label) classes_nums = 
np.zeros([self.num_classes]) for i in range(self.num_classes): num = np.sum(top_label == i) if num > 0: print(self.class_names[i], " : ", num) classes_nums[i] = num print("classes_nums:", classes_nums) #---------------------------------------------------------# # 是否进行目标的裁剪 #---------------------------------------------------------# if crop: for i, c in list(enumerate(top_label)): top, left, bottom, right = top_boxes[i] top = max(0, np.floor(top).astype('int32')) left = max(0, np.floor(left).astype('int32')) bottom = min(image.size[1], np.floor(bottom).astype('int32')) right = min(image.size[0], np.floor(right).astype('int32')) dir_save_path = "img_crop" if not os.path.exists(dir_save_path): os.makedirs(dir_save_path) crop_image = image.crop([left, top, right, bottom]) crop_image.save(os.path.join(dir_save_path, "crop_" + str(i) + ".png"), quality=95, subsampling=0) print("save crop_" + str(i) + ".png to " + dir_save_path) #---------------------------------------------------------# # 图像绘制 #---------------------------------------------------------# for i, c in list(enumerate(top_label)): predicted_class = self.class_names[int(c)] box = top_boxes[i] score = top_conf[i] top, left, bottom, right = box top = max(0, np.floor(top).astype('int32')) left = max(0, np.floor(left).astype('int32')) bottom = min(image.size[1], np.floor(bottom).astype('int32')) right = min(image.size[0], np.floor(right).astype('int32')) label = '{} {:.2f}'.format(predicted_class, score) draw = ImageDraw.Draw(image) label_size = draw.textsize(label, font) label = label.encode('utf-8') print(label, top, left, bottom, right) if top - label_size[1] >= 0: text_origin = np.array([left, top - label_size[1]]) else: text_origin = np.array([left, top + 1]) for i in range(thickness): draw.rectangle([left + i, top + i, right - i, bottom - i], outline=self.colors[c]) draw.rectangle([tuple(text_origin), tuple(text_origin + label_size)], fill=self.colors[c]) draw.text(text_origin, str(label,'UTF-8'), fill=(0, 0, 
0), font=font) del draw return image def get_FPS(self, image, test_interval): #---------------------------------------------------# # 计算输入图片的高和宽 #---------------------------------------------------# image_shape = np.array(np.shape(image)[0:2]) input_shape = get_new_img_size(image_shape[0], image_shape[1]) #---------------------------------------------------------# # 在这里将图像转换成RGB图像,防止灰度图在预测时报错。 # 代码仅仅支持RGB图像的预测,所有其它类型的图像都会转化成RGB #---------------------------------------------------------# image = cvtColor(image) #---------------------------------------------------------# # 给原图像进行resize,resize到短边为600的大小上 #---------------------------------------------------------# image_data = resize_image(image, [input_shape[1], input_shape[0]]) #---------------------------------------------------------# # 添加上batch_size维度 #---------------------------------------------------------# image_data = np.expand_dims(preprocess_input(np.array(image_data, dtype='float32')), 0) #---------------------------------------------------------# # 获得rpn网络预测结果和base_layer #---------------------------------------------------------# rpn_pred = self.model_rpn(image_data) rpn_pred = [x.numpy() for x in rpn_pred] #---------------------------------------------------------# # 生成先验框并解码 #---------------------------------------------------------# anchors = get_anchors(input_shape, self.backbone, self.anchors_size) rpn_results = self.bbox_util.detection_out_rpn(rpn_pred, anchors) #-------------------------------------------------------------# # 利用建议框获得classifier网络预测结果 #-------------------------------------------------------------# classifier_pred = self.model_classifier([rpn_pred[2], rpn_results[:, :, [1, 0, 3, 2]]]) classifier_pred = [x.numpy() for x in classifier_pred] #-------------------------------------------------------------# # 利用classifier的预测结果对建议框进行解码,获得预测框 #-------------------------------------------------------------# results = self.bbox_util.detection_out_classifier(classifier_pred, rpn_results, 
image_shape, input_shape, self.confidence) t1 = time.time() for _ in range(test_interval): #---------------------------------------------------------# # 获得rpn网络预测结果和base_layer #---------------------------------------------------------# rpn_pred = self.model_rpn(image_data) rpn_pred = [x.numpy() for x in rpn_pred] #---------------------------------------------------------# # 生成先验框并解码 #---------------------------------------------------------# anchors = get_anchors(input_shape, self.backbone, self.anchors_size) rpn_results = self.bbox_util.detection_out_rpn(rpn_pred, anchors) temp_ROIs = rpn_results[:, :, [1, 0, 3, 2]] #-------------------------------------------------------------# # 利用建议框获得classifier网络预测结果 #-------------------------------------------------------------# classifier_pred = self.model_classifier([rpn_pred[2], temp_ROIs]) classifier_pred = [x.numpy() for x in classifier_pred] #-------------------------------------------------------------# # 利用classifier的预测结果对建议框进行解码,获得预测框 #-------------------------------------------------------------# results = self.bbox_util.detection_out_classifier(classifier_pred, rpn_results, image_shape, input_shape, self.confidence) t2 = time.time() tact_time = (t2 - t1) / test_interval return tact_time def get_map_txt(self, image_id, image, class_names, map_out_path): f = open(os.path.join(map_out_path, "detection-results/"+image_id+".txt"),"w") #---------------------------------------------------# # 计算输入图片的高和宽 #---------------------------------------------------# image_shape = np.array(np.shape(image)[0:2]) input_shape = get_new_img_size(image_shape[0], image_shape[1]) #---------------------------------------------------------# # 在这里将图像转换成RGB图像,防止灰度图在预测时报错。 # 代码仅仅支持RGB图像的预测,所有其它类型的图像都会转化成RGB #---------------------------------------------------------# image = cvtColor(image) #---------------------------------------------------------# # 给原图像进行resize,resize到短边为600的大小上 #---------------------------------------------------------# 
image_data = resize_image(image, [input_shape[1], input_shape[0]]) #---------------------------------------------------------# # 添加上batch_size维度 #---------------------------------------------------------# image_data = np.expand_dims(preprocess_input(np.array(image_data, dtype='float32')), 0) #---------------------------------------------------------# # 获得rpn网络预测结果和base_layer #---------------------------------------------------------# rpn_pred = self.model_rpn(image_data) rpn_pred = [x.numpy() for x in rpn_pred] #---------------------------------------------------------# # 生成先验框并解码 #---------------------------------------------------------# anchors = get_anchors(input_shape, self.backbone, self.anchors_size) rpn_results = self.bbox_util.detection_out_rpn(rpn_pred, anchors) #-------------------------------------------------------------# # 利用建议框获得classifier网络预测结果 #-------------------------------------------------------------# classifier_pred = self.model_classifier([rpn_pred[2], rpn_results[:, :, [1, 0, 3, 2]]]) classifier_pred = [x.numpy() for x in classifier_pred] #-------------------------------------------------------------# # 利用classifier的预测结果对建议框进行解码,获得预测框 #-------------------------------------------------------------# results = self.bbox_util.detection_out_classifier(classifier_pred, rpn_results, image_shape, input_shape, self.confidence) #--------------------------------------# # 如果没有检测到物体,则返回原图 #--------------------------------------# if len(results[0])<=0: return top_label = np.array(results[0][:, 5], dtype = 'int32') top_conf = results[0][:, 4] top_boxes = results[0][:, :4] for i, c in list(enumerate(top_label)): predicted_class = self.class_names[int(c)] box = top_boxes[i] score = str(top_conf[i]) top, left, bottom, right = box if predicted_class not in class_names: continue f.write("%s %s %s %s %s %s\n" % (predicted_class, score[:6], str(int(left)), str(int(top)), str(int(right)),str(int(bottom)))) f.close() return 
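The get_map_txt method above writes one detection per line in the format `<class> <score> <left> <top> <right> <bottom>`. As a minimal sketch (the helper below is hypothetical and not part of this repository), a line of map_out/detection-results/*.txt can be parsed back into its fields like this, assuming class names contain no spaces (true for the VOC classes):

```python
# Hypothetical helper, not part of this repository: parses one line written
# by get_map_txt into (class_name, score, (left, top, right, bottom)).
def parse_detection_line(line):
    # split() handles the trailing newline and any repeated spaces
    name, score, left, top, right, bottom = line.split()
    return name, float(score), (int(left), int(top), int(right), int(bottom))
```

Such a helper is useful when post-processing the detection-results folder outside of utils_map.py, e.g. to plot score distributions per class.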
================================================
FILE: get_map.py
================================================
import os
import xml.etree.ElementTree as ET

import tensorflow as tf
from PIL import Image
from tqdm import tqdm

from frcnn import FRCNN
from utils.utils import get_classes
from utils.utils_map import get_coco_map, get_map

gpus = tf.config.experimental.list_physical_devices(device_type='GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

if __name__ == "__main__":
    '''
    Unlike AP, Recall and Precision are not area-based concepts, so the network's Recall and Precision values differ at different confidence thresholds.
    By default, the Recall and Precision computed by this code correspond to a confidence threshold of 0.5.

    Due to how mAP is computed, the network needs to output nearly all of its predicted boxes so that Recall and Precision can be computed at different thresholds.
    Therefore, the txt files under map_out/detection-results/ generally contain more boxes than a direct predict run; the goal is to list every possible predicted box.
    '''
    #------------------------------------------------------------------------------------------------------------------#
    #   map_mode specifies what this script computes when run
    #   map_mode 0: the whole mAP pipeline, including getting prediction results, getting ground truth, and computing the VOC mAP.
    #   map_mode 1: only get prediction results.
    #   map_mode 2: only get ground truth.
    #   map_mode 3: only compute the VOC mAP.
    #   map_mode 4: use the COCO toolbox to compute the 0.50:0.95 mAP of the current dataset. Requires prediction results, ground truth, and an installed pycocotools.
    #-------------------------------------------------------------------------------------------------------------------#
    map_mode        = 0
    #--------------------------------------------------------------------------------------#
    #   classes_path here specifies the classes whose VOC mAP should be measured
    #   Normally it should match the classes_path used for training and prediction
    #--------------------------------------------------------------------------------------#
    classes_path    = 'model_data/voc_classes.txt'
    #--------------------------------------------------------------------------------------#
    #   MINOVERLAP specifies the mAP0.x you want to obtain; look up the meaning of mAP0.x if you are unsure.
    #   For example, to compute mAP0.75, set MINOVERLAP = 0.75.
    #
    #   A predicted box counts as a positive sample only if its overlap with a ground-truth box exceeds MINOVERLAP; otherwise it is a negative sample.
    #   The larger MINOVERLAP is, the more accurately a box must be predicted to count as a positive, and the lower the computed mAP.
    #--------------------------------------------------------------------------------------#
    MINOVERLAP      = 0.5
    #--------------------------------------------------------------------------------------#
    #   Due to how mAP is computed, the network needs to output nearly all of its predicted boxes in order to compute mAP
    #   Therefore, confidence should be set as low as possible to obtain every possible predicted box.
    #
    #   This value is normally left unchanged. Because computing mAP requires nearly all predicted boxes, the confidence here must not be changed casually.
    #   To obtain Recall and Precision at other thresholds, modify score_threhold below.
    #--------------------------------------------------------------------------------------#
    confidence      = 0.02
    #--------------------------------------------------------------------------------------#
    #   The non-maximum-suppression IoU used at prediction time; larger means less strict NMS.
    #
    #   This value is normally left unchanged.
    #--------------------------------------------------------------------------------------#
    nms_iou         = 0.5
    #---------------------------------------------------------------------------------------------------------------#
    #   Unlike AP, Recall and Precision are not area-based concepts, so their values differ at different thresholds.
    #
    #   By default, the Recall and Precision computed by this code correspond to a threshold of 0.5 (defined here as score_threhold).
    #   Because computing mAP requires nearly all predicted boxes, the confidence defined above must not be changed casually.
    #   A separate score_threhold is therefore defined to represent the threshold, so that the Recall and Precision corresponding to it can be found while computing mAP.
    #---------------------------------------------------------------------------------------------------------------#
    score_threhold  = 0.5
    #-------------------------------------------------------#
    #   map_vis specifies whether to enable visualization during the VOC mAP computation
    #-------------------------------------------------------#
    map_vis         = False
    #-------------------------------------------------------#
    #   Points to the folder containing the VOC dataset
    #   Defaults to the VOC dataset in the root directory
    #-------------------------------------------------------#
    VOCdevkit_path  = 'VOCdevkit'
    #-------------------------------------------------------#
    #   Output folder for the results, map_out by default
    #-------------------------------------------------------#
    map_out_path    = 'map_out'

    image_ids = open(os.path.join(VOCdevkit_path, "VOC2007/ImageSets/Main/test.txt")).read().strip().split()

    if not os.path.exists(map_out_path):
        os.makedirs(map_out_path)
    if not os.path.exists(os.path.join(map_out_path, 'ground-truth')):
        os.makedirs(os.path.join(map_out_path, 'ground-truth'))
    if not os.path.exists(os.path.join(map_out_path,
'detection-results')): os.makedirs(os.path.join(map_out_path, 'detection-results')) if not os.path.exists(os.path.join(map_out_path, 'images-optional')): os.makedirs(os.path.join(map_out_path, 'images-optional')) class_names, _ = get_classes(classes_path) if map_mode == 0 or map_mode == 1: print("Load model.") frcnn = FRCNN(confidence = confidence, nms_iou = nms_iou) print("Load model done.") print("Get predict result.") for image_id in tqdm(image_ids): image_path = os.path.join(VOCdevkit_path, "VOC2007/JPEGImages/"+image_id+".jpg") image = Image.open(image_path) if map_vis: image.save(os.path.join(map_out_path, "images-optional/" + image_id + ".jpg")) frcnn.get_map_txt(image_id, image, class_names, map_out_path) print("Get predict result done.") if map_mode == 0 or map_mode == 2: print("Get ground truth result.") for image_id in tqdm(image_ids): with open(os.path.join(map_out_path, "ground-truth/"+image_id+".txt"), "w") as new_f: root = ET.parse(os.path.join(VOCdevkit_path, "VOC2007/Annotations/"+image_id+".xml")).getroot() for obj in root.findall('object'): difficult_flag = False if obj.find('difficult')!=None: difficult = obj.find('difficult').text if int(difficult)==1: difficult_flag = True obj_name = obj.find('name').text if obj_name not in class_names: continue bndbox = obj.find('bndbox') left = bndbox.find('xmin').text top = bndbox.find('ymin').text right = bndbox.find('xmax').text bottom = bndbox.find('ymax').text if difficult_flag: new_f.write("%s %s %s %s %s difficult\n" % (obj_name, left, top, right, bottom)) else: new_f.write("%s %s %s %s %s\n" % (obj_name, left, top, right, bottom)) print("Get ground truth result done.") if map_mode == 0 or map_mode == 3: print("Get map.") get_map(MINOVERLAP, True, score_threhold = score_threhold, path = map_out_path) print("Get map done.") if map_mode == 4: print("Get map.") get_coco_map(class_names = class_names, path = map_out_path) print("Get map done.") ================================================ FILE: 
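The ground-truth branch above walks each VOC annotation XML with ElementTree, skipping unknown classes and tagging boxes whose `difficult` flag is 1. The same extraction can be sketched in isolation; the XML snippet and the `read_objects` helper below are illustrative examples, not files or functions from the repository:

```python
import xml.etree.ElementTree as ET

# Made-up VOC-style annotation snippet for illustration only.
VOC_XML = """
<annotation>
  <object>
    <name>dog</name>
    <difficult>0</difficult>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>
"""

def read_objects(xml_text):
    # Mirrors the loop in get_map.py: one tuple per <object> element.
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.findall('object'):
        difficult = obj.find('difficult')
        # A box is "difficult" only when the flag exists and equals 1.
        difficult_flag = difficult is not None and int(difficult.text) == 1
        b = obj.find('bndbox')
        coords = tuple(int(b.find(k).text) for k in ('xmin', 'ymin', 'xmax', 'ymax'))
        boxes.append((obj.find('name').text, coords, difficult_flag))
    return boxes
```

In get_map.py itself the difficult boxes are still written to the ground-truth txt, suffixed with the word "difficult", so the mAP code can exclude them from the positive count without losing them.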
nets/__init__.py
================================================
#

================================================
FILE: nets/classifier.py
================================================
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.initializers import RandomNormal
from tensorflow.keras.layers import Dense, Flatten, Layer, TimeDistributed

from nets.resnet import resnet50_classifier_layers
from nets.vgg import vgg_classifier_layers


class RoiPoolingConv(Layer):
    def __init__(self, pool_size, **kwargs):
        self.pool_size = pool_size
        super(RoiPoolingConv, self).__init__(**kwargs)

    def build(self, input_shape):
        self.nb_channels = input_shape[0][3]

    def compute_output_shape(self, input_shape):
        input_shape2 = input_shape[1]
        return None, input_shape2[1], self.pool_size, self.pool_size, self.nb_channels

    def call(self, x, mask=None):
        assert(len(x) == 2)
        #--------------------------------#
        #   Shared feature map
        #   batch_size, 38, 38, 1024
        #--------------------------------#
        feature_map = x[0]
        #--------------------------------#
        #   Region proposals
        #   batch_size, num_rois, 4
        #--------------------------------#
        rois = x[1]
        #---------------------------------#
        #   Number of proposals, batch size
        #---------------------------------#
        num_rois = tf.shape(rois)[1]
        batch_size = tf.shape(rois)[0]
        #---------------------------------#
        #   Build per-proposal batch indices so that
        #   crop_and_resize can match each proposal
        #   to its image in the shared feature map
        #---------------------------------#
        box_index = tf.expand_dims(tf.range(0, batch_size), 1)
        box_index = tf.tile(box_index, (1, num_rois))
        box_index = tf.reshape(box_index, [-1])

        rs = tf.image.crop_and_resize(feature_map, tf.reshape(rois, [-1, 4]), box_index, (self.pool_size, self.pool_size))
        #---------------------------------------------------------------------------------#
        #   The final output is
        #   (batch_size, num_rois, 14, 14, 1024)
        #---------------------------------------------------------------------------------#
        final_output = K.reshape(rs, (batch_size, num_rois, self.pool_size, self.pool_size, self.nb_channels))
        return final_output

#----------------------------------------------------#
#   Pass the shared feature map and the proposals
#   into the classifier network, which refines the
#   proposals into the final predicted boxes
#----------------------------------------------------#
def get_resnet50_classifier(base_layers, input_rois, roi_size, num_classes=21):
    # batch_size, 38, 38, 1024 -> batch_size, num_rois, 14, 14, 1024
    out_roi_pool = RoiPoolingConv(roi_size)([base_layers, input_rois])

    # batch_size, num_rois, 14, 14, 1024 -> batch_size, num_rois, 1, 1, 2048
    out = resnet50_classifier_layers(out_roi_pool)

    # batch_size, num_rois, 1, 1, 2048 -> batch_size, num_rois, 2048
    out = TimeDistributed(Flatten())(out)

    # batch_size, num_rois, 2048 -> batch_size, num_rois, num_classes
    out_class = TimeDistributed(Dense(num_classes, activation='softmax', kernel_initializer=RandomNormal(stddev=0.02)), name='dense_class_{}'.format(num_classes))(out)
    # batch_size, num_rois, 2048 -> batch_size, num_rois, 4 * (num_classes - 1)
    out_regr = TimeDistributed(Dense(4 * (num_classes - 1), activation='linear', kernel_initializer=RandomNormal(stddev=0.02)), name='dense_regress_{}'.format(num_classes))(out)
    return [out_class, out_regr]

def get_vgg_classifier(base_layers, input_rois, roi_size, num_classes=21):
    # batch_size, 37, 37, 512 -> batch_size, num_rois, 7, 7, 512
    out_roi_pool = RoiPoolingConv(roi_size)([base_layers, input_rois])

    # batch_size, num_rois, 7, 7, 512 -> batch_size, num_rois, 4096
    out = vgg_classifier_layers(out_roi_pool)

    # batch_size, num_rois, 4096 -> batch_size, num_rois, num_classes
    out_class = TimeDistributed(Dense(num_classes, activation='softmax', kernel_initializer=RandomNormal(stddev=0.02)), name='dense_class_{}'.format(num_classes))(out)
    # batch_size, num_rois, 4096 -> batch_size, num_rois, 4 * (num_classes - 1)
    out_regr = TimeDistributed(Dense(4 * (num_classes - 1), activation='linear', kernel_initializer=RandomNormal(stddev=0.02)), name='dense_regress_{}'.format(num_classes))(out)
    return [out_class, out_regr]

================================================
FILE: nets/frcnn.py
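Editor's note: the `box_index` trick in `RoiPoolingConv` above is easy to check in isolation. The sketch below is a hypothetical, standalone NumPy analogue (not repository code): `tf.image.crop_and_resize` receives the whole batch's boxes flattened to `[batch_size * num_rois, 4]`, plus a `box_index` vector mapping every flattened box back to the image it came from.

```python
import numpy as np

# Hypothetical helper mirroring the expand_dims/tile/reshape sequence above.
def make_box_index(batch_size, num_rois):
    box_index = np.expand_dims(np.arange(batch_size), 1)   # [[0], [1], ...]
    box_index = np.tile(box_index, (1, num_rois))          # [[0, 0, ...], [1, 1, ...]]
    return box_index.reshape(-1)                           # one entry per flattened box

print(make_box_index(2, 3))  # [0 0 0 1 1 1]
```

Each of the three proposals of image 0 points at feature map 0, and likewise for image 1, which is exactly what `crop_and_resize` needs.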
================================================
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model

from nets.classifier import get_resnet50_classifier, get_vgg_classifier
from nets.resnet import ResNet50
from nets.rpn import get_rpn
from nets.vgg import VGG16


def get_model(num_classes, backbone, num_anchors=9, input_shape=[None, None, 3]):
    inputs = Input(shape=input_shape)
    roi_input = Input(shape=(None, 4))

    if backbone == 'vgg':
        #----------------------------------------------------#
        #   Assuming a 600,600,3 input, we obtain a
        #   37,37,512 shared feature map (base_layers)
        #----------------------------------------------------#
        base_layers = VGG16(inputs)
        #----------------------------------------------------#
        #   The shared feature map is fed into the region
        #   proposal network, which adjusts the anchors
        #   to obtain region proposals
        #----------------------------------------------------#
        rpn = get_rpn(base_layers, num_anchors)
        #----------------------------------------------------#
        #   The shared feature map and the proposals are fed
        #   into the classifier network, which refines the
        #   proposals into the final predicted boxes
        #----------------------------------------------------#
        classifier = get_vgg_classifier(base_layers, roi_input, 7, num_classes)
    else:
        #----------------------------------------------------#
        #   Assuming a 600,600,3 input, we obtain a
        #   38,38,1024 shared feature map (base_layers)
        #----------------------------------------------------#
        base_layers = ResNet50(inputs)
        #----------------------------------------------------#
        #   The shared feature map is fed into the region
        #   proposal network, which adjusts the anchors
        #   to obtain region proposals
        #----------------------------------------------------#
        rpn = get_rpn(base_layers, num_anchors)
        #----------------------------------------------------#
        #   The shared feature map and the proposals are fed
        #   into the classifier network, which refines the
        #   proposals into the final predicted boxes
        #----------------------------------------------------#
        classifier = get_resnet50_classifier(base_layers, roi_input, 14, num_classes)

    model_rpn = Model(inputs, rpn)
    model_all = Model([inputs, roi_input], rpn + classifier)
    return model_rpn, model_all

def get_predict_model(num_classes, backbone, num_anchors=9):
    inputs = Input(shape=(None, None, 3))
    roi_input = Input(shape=(None, 4))

    if backbone == 'vgg':
        feature_map_input = Input(shape=(None, None, 512))
        #----------------------------------------------------#
        #   Assuming a 600,600,3 input, we obtain a
        #   37,37,512 shared feature map (base_layers)
        #----------------------------------------------------#
        base_layers = VGG16(inputs)
        #----------------------------------------------------#
        #   The shared feature map is fed into the region
        #   proposal network, which adjusts the anchors
        #   to obtain region proposals
        #----------------------------------------------------#
        rpn = get_rpn(base_layers, num_anchors)
        #----------------------------------------------------#
        #   The shared feature map and the proposals are fed
        #   into the classifier network, which refines the
        #   proposals into the final predicted boxes
        #----------------------------------------------------#
        classifier = get_vgg_classifier(feature_map_input, roi_input, 7, num_classes)
    else:
        feature_map_input = Input(shape=(None, None, 1024))
        #----------------------------------------------------#
        #   Assuming a 600,600,3 input, we obtain a
        #   38,38,1024 shared feature map (base_layers)
        #----------------------------------------------------#
        base_layers = ResNet50(inputs)
        #----------------------------------------------------#
        #   The shared feature map is fed into the region
        #   proposal network, which adjusts the anchors
        #   to obtain region proposals
        #----------------------------------------------------#
        rpn = get_rpn(base_layers, num_anchors)
        #----------------------------------------------------#
        #   The shared feature map and the proposals are fed
        #   into the classifier network, which refines the
        #   proposals into the final predicted boxes
        #----------------------------------------------------#
        classifier = get_resnet50_classifier(feature_map_input, roi_input, 14, num_classes)

    model_rpn = Model(inputs, rpn + [base_layers])
    model_classifier_only = Model([feature_map_input, roi_input], classifier)
    return model_rpn, model_classifier_only

================================================
FILE: nets/frcnn_training.py
================================================
import math
from functools import partial

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K


def rpn_cls_loss():
    def _rpn_cls_loss(y_true, y_pred):
        #---------------------------------------------------#
        #   y_true [batch_size, num_anchor, 1]
        #   y_pred [batch_size, num_anchor, 1]
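# Editor's sketch (standalone NumPy analogue, not repository code): _rpn_cls_loss
# below masks out anchors labelled -1 ("ignore") before averaging the binary
# cross-entropy over the remaining anchors, which the snippet here reproduces.

```python
import numpy as np

def rpn_cls_loss_np(labels, preds, eps=1e-7):
    keep = labels != -1                              # drop "ignore" anchors
    y, p = labels[keep], np.clip(preds[keep], eps, 1 - eps)
    bce = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return bce.sum() / max(1.0, keep.sum())          # normalize, guard against 0

# One positive predicted 0.9, one negative predicted 0.1, one ignored anchor:
loss = rpn_cls_loss_np(np.array([1., 0., -1.]), np.array([0.9, 0.1, 0.5]))
print(loss)  # -ln(0.9) ~= 0.1054; the ignored anchor contributes nothing
```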
        #---------------------------------------------------#
        labels = y_true
        classification = y_pred
        #---------------------------------------------------#
        #   -1 means ignore, 0 is background, 1 is an object
        #---------------------------------------------------#
        anchor_state = y_true
        #---------------------------------------------------#
        #   Gather every sample that is not ignored
        #---------------------------------------------------#
        indices_for_no_ignore = tf.where(keras.backend.not_equal(anchor_state, -1))
        labels_for_no_ignore = tf.gather_nd(labels, indices_for_no_ignore)
        classification_for_no_ignore = tf.gather_nd(classification, indices_for_no_ignore)

        cls_loss_for_no_ignore = keras.backend.binary_crossentropy(labels_for_no_ignore, classification_for_no_ignore)
        cls_loss_for_no_ignore = keras.backend.sum(cls_loss_for_no_ignore)

        #---------------------------------------------------#
        #   Normalize by the number of non-ignored samples
        #---------------------------------------------------#
        normalizer_no_ignore = tf.where(keras.backend.not_equal(anchor_state, -1))
        normalizer_no_ignore = keras.backend.cast(keras.backend.shape(normalizer_no_ignore)[0], keras.backend.floatx())
        normalizer_no_ignore = keras.backend.maximum(keras.backend.cast_to_floatx(1.0), normalizer_no_ignore)

        #---------------------------------------------------#
        #   Total loss
        #---------------------------------------------------#
        loss = cls_loss_for_no_ignore / normalizer_no_ignore
        return loss
    return _rpn_cls_loss

def rpn_smooth_l1(sigma=1.0):
    sigma_squared = sigma ** 2

    def _rpn_smooth_l1(y_true, y_pred):
        #---------------------------------------------------#
        #   y_true [batch_size, num_anchor, 4 + 1]
        #   y_pred [batch_size, num_anchor, 4]
        #---------------------------------------------------#
        regression = y_pred
        regression_target = y_true[:, :, :-1]
        #---------------------------------------------------#
        #   -1 means ignore, 0 is background, 1 is an object
        #---------------------------------------------------#
        anchor_state = y_true[:, :, -1]
        #---------------------------------------------------#
        #   Find the positive samples
        #---------------------------------------------------#
        indices = tf.where(keras.backend.equal(anchor_state, 1))
        regression = tf.gather_nd(regression, indices)
        regression_target = tf.gather_nd(regression_target, indices)

        #---------------------------------------------------#
        #   Compute the smooth L1 loss
        #---------------------------------------------------#
        regression_diff = regression - regression_target
        regression_diff = keras.backend.abs(regression_diff)
        regression_loss = tf.where(
            keras.backend.less(regression_diff, 1.0 / sigma_squared),
            0.5 * sigma_squared * keras.backend.pow(regression_diff, 2),
            regression_diff - 0.5 / sigma_squared
        )

        #---------------------------------------------------#
        #   Divide the loss by the number of positives
        #---------------------------------------------------#
        normalizer = keras.backend.maximum(1, keras.backend.shape(indices)[0])
        normalizer = keras.backend.cast(normalizer, dtype=keras.backend.floatx())
        regression_loss = keras.backend.sum(regression_loss) / normalizer
        return regression_loss
    return _rpn_smooth_l1

def classifier_cls_loss():
    def _classifier_cls_loss(y_true, y_pred):
        return K.mean(K.categorical_crossentropy(y_true, y_pred))
    return _classifier_cls_loss

def classifier_smooth_l1(num_classes, sigma=1.0):
    epsilon = 1e-4
    sigma_squared = sigma ** 2

    def class_loss_regr_fixed_num(y_true, y_pred):
        regression = y_pred
        regression_target = y_true[:, :, 4 * num_classes:]

        regression_diff = regression_target - regression
        regression_diff = keras.backend.abs(regression_diff)
        regression_loss = 4 * K.sum(y_true[:, :, :4 * num_classes] * tf.where(
            keras.backend.less(regression_diff, 1.0 / sigma_squared),
            0.5 * sigma_squared * keras.backend.pow(regression_diff, 2),
            regression_diff - 0.5 / sigma_squared
        ))
        normalizer = K.sum(epsilon + y_true[:, :, :4 * num_classes])
        regression_loss = keras.backend.sum(regression_loss) / normalizer

        # x_bool = K.cast(K.less_equal(regression_diff, 1.0), 'float32')
        # regression_loss = 4 * K.sum(y_true[:, :, :4*num_classes] * (x_bool * (0.5 * regression_diff
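# Editor's sketch (standalone NumPy version, not repository code) of the
# piecewise smooth L1 used by rpn_smooth_l1 and classifier_smooth_l1:
#   0.5 * sigma^2 * d^2     if |d| < 1 / sigma^2
#   |d| - 0.5 / sigma^2     otherwise

```python
import numpy as np

def smooth_l1(diff, sigma=1.0):
    s2 = sigma ** 2
    d = np.abs(diff)
    # quadratic near zero, linear in the tails, continuous at the switch point
    return np.where(d < 1.0 / s2, 0.5 * s2 * d ** 2, d - 0.5 / s2)

print(smooth_l1(np.array([0.5, 2.0])))  # [0.125 1.5]
```

With sigma = 1 a residual of 0.5 falls in the quadratic branch (0.5 * 0.25 = 0.125) while a residual of 2.0 is penalized linearly (2.0 - 0.5 = 1.5), which is why outlier boxes do not dominate the gradient.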
* regression_diff) + (1 - x_bool) * (regression_diff - 0.5))) / K.sum(epsilon + y_true[:, :, :4*num_classes]) return regression_loss return class_loss_regr_fixed_num class ProposalTargetCreator(object): def __init__(self, num_classes, n_sample=128, pos_ratio=0.5, pos_iou_thresh=0.5, neg_iou_thresh_high=0.5, neg_iou_thresh_low=0, variance=[0.125, 0.125, 0.25, 0.25]): self.n_sample = n_sample self.pos_ratio = pos_ratio self.pos_roi_per_image = np.round(self.n_sample * self.pos_ratio) self.pos_iou_thresh = pos_iou_thresh self.neg_iou_thresh_high = neg_iou_thresh_high self.neg_iou_thresh_low = neg_iou_thresh_low self.num_classes = num_classes self.variance = variance def bbox_iou(self, bbox_a, bbox_b): if bbox_a.shape[1] != 4 or bbox_b.shape[1] != 4: print(bbox_a, bbox_b) raise IndexError tl = np.maximum(bbox_a[:, None, :2], bbox_b[:, :2]) br = np.minimum(bbox_a[:, None, 2:], bbox_b[:, 2:]) area_i = np.prod(br - tl, axis=2) * (tl < br).all(axis=2) area_a = np.prod(bbox_a[:, 2:] - bbox_a[:, :2], axis=1) area_b = np.prod(bbox_b[:, 2:] - bbox_b[:, :2], axis=1) return area_i / (area_a[:, None] + area_b - area_i) def bbox2loc(self, src_bbox, dst_bbox): width = src_bbox[:, 2] - src_bbox[:, 0] height = src_bbox[:, 3] - src_bbox[:, 1] ctr_x = src_bbox[:, 0] + 0.5 * width ctr_y = src_bbox[:, 1] + 0.5 * height base_width = dst_bbox[:, 2] - dst_bbox[:, 0] base_height = dst_bbox[:, 3] - dst_bbox[:, 1] base_ctr_x = dst_bbox[:, 0] + 0.5 * base_width base_ctr_y = dst_bbox[:, 1] + 0.5 * base_height eps = np.finfo(height.dtype).eps width = np.maximum(width, eps) height = np.maximum(height, eps) dx = (base_ctr_x - ctr_x) / width dy = (base_ctr_y - ctr_y) / height dw = np.log(base_width / width) dh = np.log(base_height / height) loc = np.vstack((dx, dy, dw, dh)).transpose() return loc def calc_iou(self, R, all_boxes): # ----------------------------------------------------- # # 计算建议框和真实框的重合程度 # ----------------------------------------------------- # if len(all_boxes)==0: max_iou = 
np.zeros(len(R)) gt_assignment = np.zeros(len(R), np.int32) gt_roi_label = np.zeros(len(R)) else: bboxes = all_boxes[:, :4] label = all_boxes[:, 4] R = np.concatenate([R, bboxes], axis=0) iou = self.bbox_iou(R, bboxes) #---------------------------------------------------------# # 获得每一个建议框最对应的真实框的iou [num_roi, ] #---------------------------------------------------------# max_iou = iou.max(axis=1) #---------------------------------------------------------# # 获得每一个建议框最对应的真实框 [num_roi, ] #---------------------------------------------------------# gt_assignment = iou.argmax(axis=1) #---------------------------------------------------------# # 真实框的标签 #---------------------------------------------------------# gt_roi_label = label[gt_assignment] #----------------------------------------------------------------# # 满足建议框和真实框重合程度大于neg_iou_thresh_high的作为负样本 # 将正样本的数量限制在self.pos_roi_per_image以内 #----------------------------------------------------------------# pos_index = np.where(max_iou >= self.pos_iou_thresh)[0] pos_roi_per_this_image = int(min(self.n_sample//2, pos_index.size)) if pos_index.size > 0: pos_index = np.random.choice(pos_index, size=pos_roi_per_this_image, replace=False) #-----------------------------------------------------------------------------------------------------# # 满足建议框和真实框重合程度小于neg_iou_thresh_high大于neg_iou_thresh_low作为负样本 # 将正样本的数量和负样本的数量的总和固定成self.n_sample #-----------------------------------------------------------------------------------------------------# neg_index = np.where((max_iou < self.neg_iou_thresh_high) & (max_iou >= self.neg_iou_thresh_low))[0] neg_roi_per_this_image = self.n_sample - pos_roi_per_this_image if neg_roi_per_this_image > neg_index.size: neg_index = np.random.choice(neg_index, size=neg_roi_per_this_image, replace=True) else: neg_index = np.random.choice(neg_index, size=neg_roi_per_this_image, replace=False) #---------------------------------------------------------# # sample_roi [n_sample, ] # gt_roi_loc [n_sample, 4] # 
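# Editor's sketch (standalone NumPy copy of bbox_iou above, assumed by the
# editor for illustration): pairwise IoU between two sets of (x1, y1, x2, y2)
# boxes via broadcasting, exactly the quantity calc_iou thresholds on.

```python
import numpy as np

def bbox_iou(a, b):
    tl = np.maximum(a[:, None, :2], b[:, :2])           # top-left of intersection
    br = np.minimum(a[:, None, 2:], b[:, 2:])           # bottom-right of intersection
    inter = np.prod(br - tl, axis=2) * (tl < br).all(axis=2)
    area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)
    area_b = np.prod(b[:, 2:] - b[:, :2], axis=1)
    return inter / (area_a[:, None] + area_b - inter)   # [len(a), len(b)]

a = np.array([[0., 0., 2., 2.]])
b = np.array([[1., 1., 3., 3.]])
print(bbox_iou(a, b))  # intersection 1, union 4 + 4 - 1 = 7 -> 1/7
```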
gt_roi_label [n_sample, ] #---------------------------------------------------------# keep_index = np.append(pos_index, neg_index) sample_roi = R[keep_index] if len(all_boxes) != 0: gt_roi_loc = self.bbox2loc(sample_roi, bboxes[gt_assignment[keep_index]]) gt_roi_loc = gt_roi_loc / np.array(self.variance) else: gt_roi_loc = np.zeros_like(sample_roi) gt_roi_label = gt_roi_label[keep_index] gt_roi_label[pos_roi_per_this_image:] = self.num_classes - 1 #---------------------------------------------------------# # X [n_sample, 4] # Y1 [n_sample, num_classes] # Y2 [n_sample, (num_clssees-1) * 8] #---------------------------------------------------------# X = np.zeros_like(sample_roi) X[:, [0, 1, 2, 3]] = sample_roi[:, [1, 0, 3, 2]] Y1 = np.eye(self.num_classes)[np.array(gt_roi_label, np.int32)] y_class_regr_label = np.zeros([np.shape(gt_roi_loc)[0], self.num_classes-1, 4]) y_class_regr_coords = np.zeros([np.shape(gt_roi_loc)[0], self.num_classes-1, 4]) y_class_regr_label[np.arange(np.shape(gt_roi_loc)[0])[:pos_roi_per_this_image], np.array(gt_roi_label[:pos_roi_per_this_image], np.int32)] = 1 y_class_regr_coords[np.arange(np.shape(gt_roi_loc)[0])[:pos_roi_per_this_image], np.array(gt_roi_label[:pos_roi_per_this_image], np.int32)] = \ gt_roi_loc[:pos_roi_per_this_image] y_class_regr_label = np.reshape(y_class_regr_label, [np.shape(gt_roi_loc)[0], -1]) y_class_regr_coords = np.reshape(y_class_regr_coords, [np.shape(gt_roi_loc)[0], -1]) Y2 = np.concatenate([np.array(y_class_regr_label), np.array(y_class_regr_coords)], axis = 1) return X, Y1, Y2 def get_lr_scheduler(lr_decay_type, lr, min_lr, total_iters, warmup_iters_ratio = 0.05, warmup_lr_ratio = 0.1, no_aug_iter_ratio = 0.05, step_num = 10): def yolox_warm_cos_lr(lr, min_lr, total_iters, warmup_total_iters, warmup_lr_start, no_aug_iter, iters): if iters <= warmup_total_iters: # lr = (lr - warmup_lr_start) * iters / float(warmup_total_iters) + warmup_lr_start lr = (lr - warmup_lr_start) * pow(iters / 
float(warmup_total_iters), 2 ) + warmup_lr_start elif iters >= total_iters - no_aug_iter: lr = min_lr else: lr = min_lr + 0.5 * (lr - min_lr) * ( 1.0 + math.cos( math.pi * (iters - warmup_total_iters) / (total_iters - warmup_total_iters - no_aug_iter) ) ) return lr def step_lr(lr, decay_rate, step_size, iters): if step_size < 1: raise ValueError("step_size must above 1.") n = iters // step_size out_lr = lr * decay_rate ** n return out_lr if lr_decay_type == "cos": warmup_total_iters = min(max(warmup_iters_ratio * total_iters, 1), 3) warmup_lr_start = max(warmup_lr_ratio * lr, 1e-6) no_aug_iter = min(max(no_aug_iter_ratio * total_iters, 1), 15) func = partial(yolox_warm_cos_lr ,lr, min_lr, total_iters, warmup_total_iters, warmup_lr_start, no_aug_iter) else: decay_rate = (min_lr / lr) ** (1 / (step_num - 1)) step_size = total_iters / step_num func = partial(step_lr, lr, decay_rate, step_size) return func ================================================ FILE: nets/resnet.py ================================================ #-------------------------------------------------------------# # ResNet50的网络部分 #-------------------------------------------------------------# from tensorflow.keras import layers from tensorflow.keras.initializers import RandomNormal from tensorflow.keras.layers import (Activation, Add, AveragePooling2D, BatchNormalization, Conv2D, MaxPooling2D, TimeDistributed, ZeroPadding2D) def identity_block(input_tensor, kernel_size, filters, stage, block): filters1, filters2, filters3 = filters conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' x = Conv2D(filters1, (1, 1), kernel_initializer=RandomNormal(stddev=0.02), name=conv_name_base + '2a')(input_tensor) x = BatchNormalization(trainable=False, name=bn_name_base + '2a')(x) x = Activation('relu')(x) x = Conv2D(filters2, kernel_size, padding='same', kernel_initializer=RandomNormal(stddev=0.02), name=conv_name_base + '2b')(x) x = 
BatchNormalization(trainable=False, name=bn_name_base + '2b')(x) x = Activation('relu')(x) x = Conv2D(filters3, (1, 1), kernel_initializer=RandomNormal(stddev=0.02), name=conv_name_base + '2c')(x) x = BatchNormalization(trainable=False, name=bn_name_base + '2c')(x) x = layers.add([x, input_tensor]) x = Activation('relu')(x) return x def conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)): filters1, filters2, filters3 = filters conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' x = Conv2D(filters1, (1, 1), strides=strides, kernel_initializer=RandomNormal(stddev=0.02), name=conv_name_base + '2a')(input_tensor) x = BatchNormalization(trainable=False, name=bn_name_base + '2a')(x) x = Activation('relu')(x) x = Conv2D(filters2, kernel_size, padding='same', kernel_initializer=RandomNormal(stddev=0.02), name=conv_name_base + '2b')(x) x = BatchNormalization(trainable=False, name=bn_name_base + '2b')(x) x = Activation('relu')(x) x = Conv2D(filters3, (1, 1), kernel_initializer=RandomNormal(stddev=0.02), name=conv_name_base + '2c')(x) x = BatchNormalization(trainable=False, name=bn_name_base + '2c')(x) shortcut = Conv2D(filters3, (1, 1), strides=strides, kernel_initializer=RandomNormal(stddev=0.02), name=conv_name_base + '1')(input_tensor) shortcut = BatchNormalization(trainable=False, name=bn_name_base + '1')(shortcut) x = layers.add([x, shortcut]) x = Activation('relu')(x) return x def ResNet50(inputs): #-----------------------------------# # 假设输入进来的图片是600,600,3 #-----------------------------------# img_input = inputs # 600,600,3 -> 300,300,64 x = ZeroPadding2D((3, 3))(img_input) x = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(x) x = BatchNormalization(trainable=False, name='bn_conv1')(x) x = Activation('relu')(x) # 300,300,64 -> 150,150,64 x = MaxPooling2D((3, 3), strides=(2, 2), padding="same")(x) # 150,150,64 -> 150,150,256 x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', 
strides=(1, 1)) x = identity_block(x, 3, [64, 64, 256], stage=2, block='b') x = identity_block(x, 3, [64, 64, 256], stage=2, block='c') # 150,150,256 -> 75,75,512 x = conv_block(x, 3, [128, 128, 512], stage=3, block='a') x = identity_block(x, 3, [128, 128, 512], stage=3, block='b') x = identity_block(x, 3, [128, 128, 512], stage=3, block='c') x = identity_block(x, 3, [128, 128, 512], stage=3, block='d') # 75,75,512 -> 38,38,1024 x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a') x = identity_block(x, 3, [256, 256, 1024], stage=4, block='b') x = identity_block(x, 3, [256, 256, 1024], stage=4, block='c') x = identity_block(x, 3, [256, 256, 1024], stage=4, block='d') x = identity_block(x, 3, [256, 256, 1024], stage=4, block='e') x = identity_block(x, 3, [256, 256, 1024], stage=4, block='f') # 最终获得一个38,38,1024的共享特征层 return x def identity_block_td(input_tensor, kernel_size, filters, stage, block): nb_filter1, nb_filter2, nb_filter3 = filters conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' x = TimeDistributed(Conv2D(nb_filter1, (1, 1), kernel_initializer='normal'), name=conv_name_base + '2a')(input_tensor) x = TimeDistributed(BatchNormalization(trainable=False), name=bn_name_base + '2a')(x) x = Activation('relu')(x) x = TimeDistributed(Conv2D(nb_filter2, (kernel_size, kernel_size), kernel_initializer='normal',padding='same'), name=conv_name_base + '2b')(x) x = TimeDistributed(BatchNormalization(trainable=False), name=bn_name_base + '2b')(x) x = Activation('relu')(x) x = TimeDistributed(Conv2D(nb_filter3, (1, 1), kernel_initializer='normal'), name=conv_name_base + '2c')(x) x = TimeDistributed(BatchNormalization(trainable=False), name=bn_name_base + '2c')(x) x = Add()([x, input_tensor]) x = Activation('relu')(x) return x def conv_block_td(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)): nb_filter1, nb_filter2, nb_filter3 = filters conv_name_base = 'res' + str(stage) + block + '_branch' 
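# Editor's sketch (hypothetical helper, not in the repository): the ResNet50
# trunk above halves the spatial size four times (conv1, the max pool, stage 3
# and stage 4), so the shared feature map is roughly input_size / 16 with
# ceiling-style rounding at each halving: 600 -> 300 -> 150 -> 75 -> 38.

```python
import math

def feature_map_size(size, n_halvings=4):
    for _ in range(n_halvings):
        size = math.ceil(size / 2)
    return size

print(feature_map_size(600))  # 38, matching the 38,38,1024 comments above
```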
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = TimeDistributed(Conv2D(nb_filter1, (1, 1), strides=strides, kernel_initializer='normal'), name=conv_name_base + '2a')(input_tensor)
    x = TimeDistributed(BatchNormalization(trainable=False), name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = TimeDistributed(Conv2D(nb_filter2, (kernel_size, kernel_size), padding='same', kernel_initializer='normal'), name=conv_name_base + '2b')(x)
    x = TimeDistributed(BatchNormalization(trainable=False), name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = TimeDistributed(Conv2D(nb_filter3, (1, 1), kernel_initializer='normal'), name=conv_name_base + '2c')(x)
    x = TimeDistributed(BatchNormalization(trainable=False), name=bn_name_base + '2c')(x)

    shortcut = TimeDistributed(Conv2D(nb_filter3, (1, 1), strides=strides, kernel_initializer='normal'), name=conv_name_base + '1')(input_tensor)
    shortcut = TimeDistributed(BatchNormalization(trainable=False), name=bn_name_base + '1')(shortcut)

    x = Add()([x, shortcut])
    x = Activation('relu')(x)
    return x

def resnet50_classifier_layers(x):
    # batch_size, num_rois, 14, 14, 1024 -> batch_size, num_rois, 7, 7, 2048
    x = conv_block_td(x, 3, [512, 512, 2048], stage=5, block='a', strides=(2, 2))
    # batch_size, num_rois, 7, 7, 2048 -> batch_size, num_rois, 7, 7, 2048
    x = identity_block_td(x, 3, [512, 512, 2048], stage=5, block='b')
    # batch_size, num_rois, 7, 7, 2048 -> batch_size, num_rois, 7, 7, 2048
    x = identity_block_td(x, 3, [512, 512, 2048], stage=5, block='c')
    # batch_size, num_rois, 7, 7, 2048 -> batch_size, num_rois, 1, 1, 2048
    x = TimeDistributed(AveragePooling2D((7, 7)), name='avg_pool')(x)
    return x

================================================
FILE: nets/rpn.py
================================================
from tensorflow.keras.initializers import RandomNormal
from tensorflow.keras.layers import Conv2D, Reshape

#----------------------------------------------------#
#   Build the region proposal network, which adjusts
#   the anchors to obtain region proposals
#----------------------------------------------------#
def get_rpn(base_layers, num_anchors):
    #----------------------------------------------------#
    #   Integrate features with a 512-channel 3x3 conv
    #----------------------------------------------------#
    x = Conv2D(512, (3, 3), padding='same', activation='relu', kernel_initializer=RandomNormal(stddev=0.02), name='rpn_conv1')(base_layers)

    #----------------------------------------------------#
    #   Adjust the channel count with 1x1 convs to
    #   obtain the predictions
    #----------------------------------------------------#
    x_class = Conv2D(num_anchors, (1, 1), activation='sigmoid', kernel_initializer=RandomNormal(stddev=0.02), name='rpn_out_class')(x)
    x_regr = Conv2D(num_anchors * 4, (1, 1), activation='linear', kernel_initializer=RandomNormal(stddev=0.02), name='rpn_out_regress')(x)

    x_class = Reshape((-1, 1), name="classification")(x_class)
    x_regr = Reshape((-1, 4), name="regression")(x_regr)
    return [x_class, x_regr]

================================================
FILE: nets/vgg.py
================================================
from tensorflow.keras.layers import (Conv2D, Dense, Flatten, MaxPooling2D,
                                     TimeDistributed)


def VGG16(inputs):
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(inputs)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

    # Fourth convolutional block
    # 14,14,512
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name
= 'block4_conv1')(x) x = Conv2D(512,(3,3),activation = 'relu',padding = 'same', name = 'block4_conv2')(x) x = Conv2D(512,(3,3),activation = 'relu',padding = 'same', name = 'block4_conv3')(x) x = MaxPooling2D((2,2),strides = (2,2), name = 'block4_pool')(x) # 第五个卷积部分 # 7,7,512 x = Conv2D(512,(3,3),activation = 'relu', padding = 'same', name = 'block5_conv1')(x) x = Conv2D(512,(3,3),activation = 'relu', padding = 'same', name = 'block5_conv2')(x) x = Conv2D(512,(3,3),activation = 'relu', padding = 'same', name = 'block5_conv3')(x) return x def vgg_classifier_layers(x): # num_rois, 14, 14, 1024 -> num_rois, 7, 7, 2048 x = TimeDistributed(Flatten(name='flatten'))(x) x = TimeDistributed(Dense(4096, activation='relu'), name='fc1')(x) x = TimeDistributed(Dense(4096, activation='relu'), name='fc2')(x) return x ================================================ FILE: predict.py ================================================ #----------------------------------------------------# # 将单张图片预测、摄像头检测和FPS测试功能 # 整合到了一个py文件中,通过指定mode进行模式的修改。 #----------------------------------------------------# import time import cv2 import numpy as np import tensorflow as tf from PIL import Image from frcnn import FRCNN gpus = tf.config.experimental.list_physical_devices(device_type='GPU') for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) if __name__ == "__main__": frcnn = FRCNN() #----------------------------------------------------------------------------------------------------------# # mode用于指定测试的模式: # 'predict' 表示单张图片预测,如果想对预测过程进行修改,如保存图片,截取对象等,可以先看下方详细的注释 # 'video' 表示视频检测,可调用摄像头或者视频进行检测,详情查看下方注释。 # 'fps' 表示测试fps,使用的图片是img里面的street.jpg,详情查看下方注释。 # 'dir_predict' 表示遍历文件夹进行检测并保存。默认遍历img文件夹,保存img_out文件夹,详情查看下方注释。 #----------------------------------------------------------------------------------------------------------# mode = "predict" #-------------------------------------------------------------------------# # crop 指定了是否在单张图片预测后对目标进行截取 # count 指定了是否进行目标的计数 # 
crop、count仅在mode='predict'时有效 #-------------------------------------------------------------------------# crop = False count = False #----------------------------------------------------------------------------------------------------------# # video_path 用于指定视频的路径,当video_path=0时表示检测摄像头 # 想要检测视频,则设置如video_path = "xxx.mp4"即可,代表读取出根目录下的xxx.mp4文件。 # video_save_path 表示视频保存的路径,当video_save_path=""时表示不保存 # 想要保存视频,则设置如video_save_path = "yyy.mp4"即可,代表保存为根目录下的yyy.mp4文件。 # video_fps 用于保存的视频的fps # # video_path、video_save_path和video_fps仅在mode='video'时有效 # 保存视频时需要ctrl+c退出或者运行到最后一帧才会完成完整的保存步骤。 #----------------------------------------------------------------------------------------------------------# video_path = 0 video_save_path = "" video_fps = 25.0 #----------------------------------------------------------------------------------------------------------# # test_interval 用于指定测量fps的时候,图片检测的次数。理论上test_interval越大,fps越准确。 # fps_image_path 用于指定测试的fps图片 # # test_interval和fps_image_path仅在mode='fps'有效 #----------------------------------------------------------------------------------------------------------# test_interval = 100 fps_image_path = "img/street.jpg" #-------------------------------------------------------------------------# # dir_origin_path 指定了用于检测的图片的文件夹路径 # dir_save_path 指定了检测完图片的保存路径 # # dir_origin_path和dir_save_path仅在mode='dir_predict'时有效 #-------------------------------------------------------------------------# dir_origin_path = "img/" dir_save_path = "img_out/" if mode == "predict": ''' 1、该代码无法直接进行批量预测,如果想要批量预测,可以利用os.listdir()遍历文件夹,利用Image.open打开图片文件进行预测。 具体流程可以参考get_dr_txt.py,在get_dr_txt.py即实现了遍历还实现了目标信息的保存。 2、如果想要进行检测完的图片的保存,利用r_image.save("img.jpg")即可保存,直接在predict.py里进行修改即可。 3、如果想要获得预测框的坐标,可以进入frcnn.detect_image函数,在绘图部分读取top,left,bottom,right这四个值。 4、如果想要利用预测框截取下目标,可以进入frcnn.detect_image函数,在绘图部分利用获取到的top,left,bottom,right这四个值 在原图上利用矩阵的方式进行截取。 5、如果想要在预测图上写额外的字,比如检测到的特定目标的数量,可以进入frcnn.detect_image函数,在绘图部分对predicted_class进行判断, 比如判断if predicted_class == 'car': 
即可判断当前目标是否为车,然后记录数量即可。利用draw.text即可写字。 ''' while True: img = input('Input image filename:') try: image = Image.open(img) except: print('Open Error! Try again!') continue else: r_image = frcnn.detect_image(image, crop = crop, count = count) r_image.show() elif mode == "video": capture=cv2.VideoCapture(video_path) if video_save_path!="": fourcc = cv2.VideoWriter_fourcc(*'XVID') size = (int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)), int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))) out = cv2.VideoWriter(video_save_path, fourcc, video_fps, size) fps = 0.0 while(True): t1 = time.time() # 读取某一帧 ref,frame=capture.read() # 格式转变,BGRtoRGB frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) # 转变成Image frame = Image.fromarray(np.uint8(frame)) # 进行检测 frame = np.array(frcnn.detect_image(frame)) # RGBtoBGR满足opencv显示格式 frame = cv2.cvtColor(frame,cv2.COLOR_RGB2BGR) fps = ( fps + (1./(time.time()-t1)) ) / 2 print("fps= %.2f"%(fps)) frame = cv2.putText(frame, "fps= %.2f"%(fps), (0, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) cv2.imshow("video",frame) c= cv2.waitKey(1) & 0xff if video_save_path!="": out.write(frame) if c==27: capture.release() break capture.release() out.release() cv2.destroyAllWindows() elif mode == "fps": img = Image.open(fps_image_path) tact_time = frcnn.get_FPS(img, test_interval) print(str(tact_time) + ' seconds, ' + str(1/tact_time) + 'FPS, @batch_size 1') elif mode == "dir_predict": import os from tqdm import tqdm img_names = os.listdir(dir_origin_path) for img_name in tqdm(img_names): if img_name.lower().endswith(('.bmp', '.dib', '.png', '.jpg', '.jpeg', '.pbm', '.pgm', '.ppm', '.tif', '.tiff')): image_path = os.path.join(dir_origin_path, img_name) image = Image.open(image_path) r_image = frcnn.detect_image(image) if not os.path.exists(dir_save_path): os.makedirs(dir_save_path) r_image.save(os.path.join(dir_save_path, img_name.replace(".jpg", ".png")), quality=95, subsampling=0) else: raise AssertionError("Please specify the correct mode: 'predict', 'video', 'fps' or 
'dir_predict'.")

================================================
FILE: requirements.txt
================================================
scipy==1.4.1
numpy==1.18.4
matplotlib==3.2.1
opencv_python==4.2.0.34
tensorflow_gpu==2.2.0
tqdm==4.46.1
Pillow==8.2.0
h5py==2.10.0

================================================
FILE: summary.py
================================================
#--------------------------------------------#
#   This script is used to inspect the
#   network structure
#--------------------------------------------#
from nets.frcnn import get_model
from utils.utils import net_flops

if __name__ == "__main__":
    input_shape = [600, 600]
    num_classes = 21

    _, model = get_model(num_classes, 'vgg', input_shape=[input_shape[0], input_shape[1], 3])
    #--------------------------------------------------------#
    #   Because faster-rcnn-keras contains layers whose cost
    #   cannot be counted, such as RoiPooling, the FLOPs
    #   cannot be computed correctly here; see the pytorch
    #   version for reference.
    #--------------------------------------------------------#
    #--------------------------------------------#
    #   Show the network structure
    #--------------------------------------------#
    model.summary()
    #--------------------------------------------#
    #   Compute the network's FLOPS
    #--------------------------------------------#
    net_flops(model, table=False)
    #--------------------------------------------#
    #   Get the name and index of every layer
    #--------------------------------------------#
    # for i, layer in enumerate(model.layers):
    #     print(i, layer.name)

================================================
FILE: train.py
================================================
import datetime
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.optimizers import SGD, Adam

from nets.frcnn import get_model
from nets.frcnn_training import (ProposalTargetCreator, classifier_cls_loss,
                                 classifier_smooth_l1, get_lr_scheduler,
                                 rpn_cls_loss, rpn_smooth_l1)
from utils.anchors import get_anchors
from utils.callbacks import EvalCallback, LossHistory
from utils.dataloader import FRCNNDatasets, OrderedEnqueuer
from utils.utils import
get_classes, show_config from utils.utils_bbox import BBoxUtility from utils.utils_fit import fit_one_epoch ''' 训练自己的目标检测模型一定需要注意以下几点: 1、训练前仔细检查自己的格式是否满足要求,该库要求数据集格式为VOC格式,需要准备好的内容有输入图片和标签 输入图片为.jpg图片,无需固定大小,传入训练前会自动进行resize。 灰度图会自动转成RGB图片进行训练,无需自己修改。 输入图片如果后缀非jpg,需要自己批量转成jpg后再开始训练。 标签为.xml格式,文件中会有需要检测的目标信息,标签文件和输入图片文件相对应。 2、损失值的大小用于判断是否收敛,比较重要的是有收敛的趋势,即验证集损失不断下降,如果验证集损失基本上不改变的话,模型基本上就收敛了。 损失值的具体大小并没有什么意义,大和小只在于损失的计算方式,并不是接近于0才好。如果想要让损失好看点,可以直接到对应的损失函数里面除上10000。 训练过程中的损失值会保存在logs文件夹下的loss_%Y_%m_%d_%H_%M_%S文件夹中 3、训练好的权值文件保存在logs文件夹中,每个训练世代(Epoch)包含若干训练步长(Step),每个训练步长(Step)进行一次梯度下降。 如果只是训练了几个Step是不会保存的,Epoch和Step的概念要捋清楚一下。 ''' if __name__ == "__main__": #---------------------------------------------------------------------# # train_gpu 训练用到的GPU # 默认为第一张卡、双卡为[0, 1]、三卡为[0, 1, 2] # 在使用多GPU时,每个卡上的batch为总batch除以卡的数量。 #---------------------------------------------------------------------# train_gpu = [0,] #---------------------------------------------------------------------# # classes_path 指向model_data下的txt,与自己训练的数据集相关 # 训练前一定要修改classes_path,使其对应自己的数据集 #---------------------------------------------------------------------# classes_path = 'model_data/voc_classes.txt' #----------------------------------------------------------------------------------------------------------------------------# # 权值文件的下载请看README,可以通过网盘下载。模型的 预训练权重 对不同数据集是通用的,因为特征是通用的。 # 模型的 预训练权重 比较重要的部分是 主干特征提取网络的权值部分,用于进行特征提取。 # 预训练权重对于99%的情况都必须要用,不用的话主干部分的权值太过随机,特征提取效果不明显,网络训练的结果也不会好 # # 如果训练过程中存在中断训练的操作,可以将model_path设置成logs文件夹下的权值文件,将已经训练了一部分的权值再次载入。 # 同时修改下方的 冻结阶段 或者 解冻阶段 的参数,来保证模型epoch的连续性。 # # 当model_path = ''的时候不加载整个模型的权值。 # # 此处使用的是整个模型的权重,因此是在train.py进行加载的。 # 如果想要让模型从主干的预训练权值开始训练,则设置model_path为主干网络的权值,此时仅加载主干。 # 如果想要让模型从0开始训练,则设置model_path = '',Freeze_Train = Fasle,此时从0开始训练,且没有冻结主干的过程。 # # 一般来讲,网络从0开始的训练效果会很差,因为权值太过随机,特征提取效果不明显,因此非常、非常、非常不建议大家从0开始训练! 
    #   If you must start from scratch, look into the imagenet dataset: first train a classification model to
    #   obtain backbone weights; the backbone of the classification model is shared with this model, and training can build on it.
    #----------------------------------------------------------------------------------------------------------------------------#
    model_path      = 'model_data/voc_weights_resnet.h5'
    #------------------------------------------------------#
    #   input_shape     the input shape
    #------------------------------------------------------#
    input_shape     = [600, 600]
    #---------------------------------------------#
    #   vgg
    #   resnet50
    #---------------------------------------------#
    backbone        = "resnet50"
    #------------------------------------------------------------------------#
    #   anchors_size sets the size of the anchor boxes; every feature point has 9 anchors.
    #   Each number in anchors_size corresponds to 3 anchors.
    #   When anchors_size = [128, 256, 512], the generated anchors have widths and heights of roughly:
    #   [128, 128]; [256, 256]; [512, 512]; [128, 256];
    #   [256, 512]; [512, 1024]; [256, 128]; [512, 256];
    #   [1024, 512]; see anchors.py for details.
    #   To detect small objects, reduce the leading values of anchors_size,
    #   e.g. anchors_size = [64, 256, 512]
    #------------------------------------------------------------------------#
    anchors_size    = [128, 256, 512]
    #----------------------------------------------------------------------------------------------------------------------------#
    #   Training is split into two stages: a frozen stage and an unfrozen stage. The frozen stage exists to
    #   accommodate machines with limited performance.
    #   Frozen training needs less GPU memory; on a very weak card you can set Freeze_Epoch equal to
    #   UnFreeze_Epoch so that only frozen training is performed.
    #
    #   Some suggested parameter settings; adjust them flexibly according to your needs:
    #   (1) Training from the pretrained weights of the whole model:
    #       Adam:
    #           Init_Epoch = 0, Freeze_Epoch = 50, UnFreeze_Epoch = 100, Freeze_Train = True, optimizer_type = 'adam', Init_lr = 1e-4. (frozen)
    #           Init_Epoch = 0, UnFreeze_Epoch = 100, Freeze_Train = False, optimizer_type = 'adam', Init_lr = 1e-4. (not frozen)
    #       SGD:
    #           Init_Epoch = 0, Freeze_Epoch = 50, UnFreeze_Epoch = 150, Freeze_Train = True, optimizer_type = 'sgd', Init_lr = 1e-2. (frozen)
    #           Init_Epoch = 0, UnFreeze_Epoch = 150, Freeze_Train = False, optimizer_type = 'sgd', Init_lr = 1e-2. (not frozen)
    #       where UnFreeze_Epoch can be adjusted between 100 and 300.
    #   (2) Training from the pretrained weights of the backbone:
    #       Adam:
    #           Init_Epoch = 0, Freeze_Epoch = 50, UnFreeze_Epoch = 100, Freeze_Train = True, optimizer_type = 'adam', Init_lr = 1e-4. (frozen)
    #           Init_Epoch = 0, UnFreeze_Epoch = 100, Freeze_Train = False, optimizer_type = 'adam', Init_lr = 1e-4. (not frozen)
    #       SGD:
    #           Init_Epoch = 0, Freeze_Epoch = 50, UnFreeze_Epoch = 150, Freeze_Train = True, optimizer_type = 'sgd', Init_lr = 1e-2. (frozen)
    #           Init_Epoch = 0, UnFreeze_Epoch = 150, Freeze_Train = False, optimizer_type = 'sgd', Init_lr = 1e-2. (not frozen)
    #       where: since training starts from the backbone's pretrained weights, which are not necessarily
    #       suited to object detection, more training is needed to escape local optima.
    #       UnFreeze_Epoch can be adjusted between 150 and 300; YOLOV5 and YOLOX both recommend 300.
    #       Adam converges faster than SGD, so UnFreeze_Epoch can in theory be smaller, but more epochs are still recommended.
    #   (3) Setting batch_size:
    #       As large as your GPU can accept. Running out of memory has nothing to do with dataset size; on an
    #       OOM or CUDA out of memory error, reduce batch_size.
    #       The BatchNormalization layers of faster rcnn are frozen, so batch_size can be 1.
    #----------------------------------------------------------------------------------------------------------------------------#
    #------------------------------------------------------------------#
    #   Frozen-stage training parameters
    #   Here the backbone of the model is frozen and the feature extraction network does not change
    #   Less GPU memory is used; only fine-tuning is performed
    #   Init_Epoch          the epoch at which training currently starts; it may be greater than Freeze_Epoch, e.g.:
    #                       Init_Epoch = 60, Freeze_Epoch = 50, UnFreeze_Epoch = 100
    #                       skips the frozen stage, starts directly from epoch 60, and adjusts the learning rate accordingly.
    #                       (used when resuming from a checkpoint)
    #   Freeze_Epoch        the number of epochs of frozen training
    #                       (ignored when Freeze_Train=False)
    #   Freeze_batch_size   the batch_size of frozen training
    #                       (ignored when Freeze_Train=False)
    #------------------------------------------------------------------#
    Init_Epoch          = 0
    Freeze_Epoch        = 50
    Freeze_batch_size   = 4
    #------------------------------------------------------------------#
    #   Unfrozen-stage training parameters
    #   Here the backbone is no longer frozen and the feature extraction network changes
    #   More GPU memory is used; all parameters of the network change
    #   UnFreeze_Epoch          the total number of training epochs
    #                           SGD takes longer to converge, so set a larger UnFreeze_Epoch
    #                           Adam can use a relatively smaller UnFreeze_Epoch
    #   Unfreeze_batch_size     the batch_size after unfreezing
    #------------------------------------------------------------------#
    UnFreeze_Epoch      = 100
    Unfreeze_batch_size = 2
    #------------------------------------------------------------------#
    #   Freeze_Train    whether to perform frozen training
    #                   by default the backbone is first frozen for training, then unfrozen.
    #                   if you set Freeze_Train=False, sgd is the recommended optimizer
    #------------------------------------------------------------------#
    Freeze_Train        = True

    #------------------------------------------------------------------#
    #   Other training parameters: learning rate, optimizer, learning rate schedule
    #------------------------------------------------------------------#
    #------------------------------------------------------------------#
    #   Init_lr         the maximum learning rate of the model
    #                   Init_lr=1e-4 is suggested with the Adam optimizer
    #                   Init_lr=1e-2 is suggested with the SGD optimizer
    #   Min_lr          the minimum learning rate of the model, defaults to 0.01 of the maximum learning rate
    #------------------------------------------------------------------#
    Init_lr             = 1e-4
    Min_lr              = Init_lr * 0.01
    #------------------------------------------------------------------#
    #   optimizer_type  the optimizer to use: adam or sgd
    #                   Init_lr=1e-4 is suggested with the Adam optimizer
    #                   Init_lr=1e-2 is suggested with the SGD optimizer
    #   momentum        the momentum parameter used inside the optimizer
    #------------------------------------------------------------------#
    optimizer_type      = "adam"
    momentum            = 0.9
    #------------------------------------------------------------------#
    #   lr_decay_type   the learning rate decay to use: 'step' or 'cos'
    #------------------------------------------------------------------#
    lr_decay_type       = 'cos'
    #------------------------------------------------------------------#
    #   save_period     save the weights every save_period epochs
    #------------------------------------------------------------------#
    save_period         = 5
    #------------------------------------------------------------------#
    #   save_dir        the folder where weights and logs are saved
    #------------------------------------------------------------------#
    save_dir            = 'logs'
    #------------------------------------------------------------------#
    #   eval_flag       whether to evaluate on the validation set during training
    #                   installing pycocotools gives a better evaluation experience.
    #   eval_period     evaluate every eval_period epochs; frequent evaluation is not recommended,
    #                   as it takes a lot of time and slows training down considerably
    #   The mAP obtained here will differ from that of get_map.py, for two reasons:
    #   (1) the mAP here is the mAP on the validation set.
    #   (2) the evaluation parameters here are conservative, to speed evaluation up.
    #------------------------------------------------------------------#
    eval_flag           = True
    eval_period         = 5
    #------------------------------------------------------------------#
    #   num_workers     whether to use multi-threaded data loading; 1 disables it
    #                   enabling it speeds up data loading but uses more memory
    #                   enable it only when IO is the bottleneck, i.e. the GPU is much faster than image loading.
    #------------------------------------------------------------------#
    num_workers         = 1
    #------------------------------------------------------#
    #   train_annotation_path   training image paths and labels
    #   val_annotation_path     validation image paths and labels
    #------------------------------------------------------#
    train_annotation_path   = '2007_train.txt'
    val_annotation_path     = '2007_val.txt'

    #------------------------------------------------------#
    #   Set the GPUs to use
    #------------------------------------------------------#
    os.environ["CUDA_VISIBLE_DEVICES"]  = ','.join(str(x) for x in train_gpu)
    ngpus_per_node                      = len(train_gpu)

    gpus = tf.config.experimental.list_physical_devices(device_type='GPU')
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)

    if ngpus_per_node > 1:
        strategy = tf.distribute.MirroredStrategy()
    else:
        strategy = None
    print('Number of devices: {}'.format(ngpus_per_node))

    #----------------------------------------------------#
    #   Get classes and anchors
    #----------------------------------------------------#
    class_names, num_classes = get_classes(classes_path)
    num_classes += 1
    anchors = get_anchors(input_shape, backbone, anchors_size)

    #----------------------------------------------------#
    #   Decide whether to load the model and pretrained weights on multiple GPUs
    #----------------------------------------------------#
    if ngpus_per_node > 1:
        with strategy.scope():
            model_rpn, model_all = get_model(num_classes, backbone = backbone)
            if model_path != '':
                #------------------------------------------------------#
                #   Load pretrained weights
                #------------------------------------------------------#
                print('Load weights {}.'.format(model_path))
                model_rpn.load_weights(model_path, by_name=True)
                model_all.load_weights(model_path, by_name=True)
    else:
        model_rpn, model_all = get_model(num_classes, backbone = backbone)
        if model_path != '':
            #------------------------------------------------------#
            #   Load pretrained weights
            #------------------------------------------------------#
            print('Load weights {}.'.format(model_path))
            model_rpn.load_weights(model_path, by_name=True)
            model_all.load_weights(model_path, by_name=True)

    time_str    = datetime.datetime.strftime(datetime.datetime.now(), '%Y_%m_%d_%H_%M_%S')
    log_dir     = os.path.join(save_dir, "loss_" + str(time_str))
    #--------------------------------------------#
    #   Training parameter setup
    #--------------------------------------------#
    callback        = tf.summary.create_file_writer(log_dir)
    loss_history    = LossHistory(log_dir)

    bbox_util       = BBoxUtility(num_classes)
    roi_helper      = ProposalTargetCreator(num_classes)
    #---------------------------#
    #   Read the dataset txt files
    #---------------------------#
    with open(train_annotation_path, encoding='utf-8') as f:
        train_lines = f.readlines()
    with open(val_annotation_path, encoding='utf-8') as f:
        val_lines   = f.readlines()
    num_train   = len(train_lines)
    num_val     = len(val_lines)

    show_config(
        classes_path = classes_path, model_path = model_path, input_shape = input_shape, \
        Init_Epoch = Init_Epoch, Freeze_Epoch = Freeze_Epoch, UnFreeze_Epoch = UnFreeze_Epoch, Freeze_batch_size = Freeze_batch_size, Unfreeze_batch_size = Unfreeze_batch_size, Freeze_Train = Freeze_Train, \
        Init_lr = Init_lr, Min_lr = Min_lr, optimizer_type = optimizer_type, momentum = momentum, lr_decay_type = lr_decay_type, \
        save_period = save_period, save_dir = save_dir, num_workers = num_workers, num_train = num_train, num_val = num_val
    )
    #---------------------------------------------------------#
    #   The total number of epochs is how many times the whole dataset is traversed
    #   The total number of steps is the total number of gradient descents
    #   Each epoch contains several steps, and each step performs one gradient descent.
    #   Only a minimum number of epochs is suggested here; there is no upper limit.
    #   The calculation only considers the unfrozen stage
    #----------------------------------------------------------#
    wanted_step = 5e4 if optimizer_type == "sgd" else 1.5e4
    total_step  = num_train // Unfreeze_batch_size * UnFreeze_Epoch
    if total_step <= wanted_step:
        if num_train // Unfreeze_batch_size == 0:
            raise ValueError('The dataset is too small for training; please expand it.')
        wanted_epoch = wanted_step // (num_train // Unfreeze_batch_size) + 1
        print("\n\033[1;33;44m[Warning] When using the %s optimizer, it is recommended to set the total number of training steps above %d.\033[0m"%(optimizer_type, wanted_step))
        print("\033[1;33;44m[Warning] This run has %d training samples, an Unfreeze_batch_size of %d, and trains for %d epochs, giving %d total training steps.\033[0m"%(num_train, Unfreeze_batch_size, UnFreeze_Epoch, total_step))
        print("\033[1;33;44m[Warning] Since the total number of steps is %d, less than the suggested %d, it is recommended to set the total number of epochs to %d.\033[0m"%(total_step, wanted_step, wanted_epoch))

    #------------------------------------------------------#
    #   Backbone features are generic; frozen training speeds training up
    #   and prevents the weights from being destroyed early in training.
    #   Init_Epoch is the starting epoch
    #   Freeze_Epoch is the number of epochs of frozen training
    #   UnFreeze_Epoch is the total number of training epochs
    #   On OOM or insufficient GPU memory, reduce Batch_size
    #------------------------------------------------------#
    if True:
        UnFreeze_flag = False
        if Freeze_Train:
            freeze_layers = {'vgg' : 17, 'resnet50' : 141}[backbone]
            for i in range(freeze_layers):
                if type(model_all.layers[i]) != tf.keras.layers.BatchNormalization:
                    model_all.layers[i].trainable = False
            print('Freeze the first {} layers of total {} layers.'.format(freeze_layers, len(model_all.layers)))

        #-------------------------------------------------------------------#
        #   If not freezing training, set batch_size directly to Unfreeze_batch_size
        #-------------------------------------------------------------------#
        batch_size = Freeze_batch_size if Freeze_Train else Unfreeze_batch_size

        #-------------------------------------------------------------------#
        #   Adjust the learning rate adaptively according to the current batch_size
        #-------------------------------------------------------------------#
        nbs             = 16
        lr_limit_max    = 1e-4 if optimizer_type == 'adam' else 5e-2
        lr_limit_min    = 1e-4 if optimizer_type == 'adam' else 5e-4
        Init_lr_fit     = min(max(batch_size / nbs * Init_lr, lr_limit_min), lr_limit_max)
        Min_lr_fit      = min(max(batch_size / nbs * Min_lr, lr_limit_min * 1e-2), lr_limit_max * 1e-2)

        optimizer = {
            'adam'  : Adam(lr = Init_lr_fit, beta_1 = momentum),
            'sgd'   : SGD(lr = Init_lr_fit, momentum = momentum, nesterov=True)
        }[optimizer_type]

        if ngpus_per_node > 1:
            with strategy.scope():
                model_rpn.compile(
                    loss = {'classification' : rpn_cls_loss(), 'regression' : rpn_smooth_l1()}, optimizer = optimizer
                )
                model_all.compile(
                    loss = {
                        'classification'                        : rpn_cls_loss(),
                        'regression'                            : rpn_smooth_l1(),
                        'dense_class_{}'.format(num_classes)    : classifier_cls_loss(),
                        'dense_regress_{}'.format(num_classes)  : classifier_smooth_l1(num_classes - 1)
                    }, optimizer = optimizer
                )
        else:
            model_rpn.compile(
                loss = {'classification' : rpn_cls_loss(), 'regression' : rpn_smooth_l1()}, optimizer = optimizer
            )
            model_all.compile(
                loss = {
                    'classification'                        : rpn_cls_loss(),
                    'regression'                            : rpn_smooth_l1(),
                    'dense_class_{}'.format(num_classes)    : classifier_cls_loss(),
                    'dense_regress_{}'.format(num_classes)  : classifier_smooth_l1(num_classes - 1)
                }, optimizer = optimizer
            )

        #---------------------------------------#
        #   Get the learning rate schedule
        #---------------------------------------#
        lr_scheduler_func = get_lr_scheduler(lr_decay_type, Init_lr_fit, Min_lr_fit, UnFreeze_Epoch)

        epoch_step      = num_train // batch_size
        epoch_step_val  = num_val // batch_size

        if epoch_step == 0 or epoch_step_val == 0:
            raise ValueError('The dataset is too small for training; please expand it.')

        train_dataloader    = FRCNNDatasets(train_lines, input_shape, anchors, batch_size, num_classes, train = True)
        val_dataloader      = FRCNNDatasets(val_lines, input_shape, anchors, batch_size, num_classes, train = False)

        #---------------------------------------#
        #   Evaluation dataset used during training
        #---------------------------------------#
        eval_callback       = EvalCallback(model_rpn, model_all, backbone, input_shape, anchors_size, class_names, num_classes, val_lines, log_dir, \
                                        eval_flag=eval_flag, period=eval_period)

        #---------------------------------------#
        #   Build multi-threaded data loaders
        #---------------------------------------#
        gen_enqueuer        = OrderedEnqueuer(train_dataloader, use_multiprocessing=True if num_workers > 1 else False, shuffle=True)
        gen_val_enqueuer    = OrderedEnqueuer(val_dataloader, use_multiprocessing=True if num_workers > 1 else False, shuffle=True)
        gen_enqueuer.start(workers=num_workers, max_queue_size=10)
        gen_val_enqueuer.start(workers=num_workers, max_queue_size=10)
        gen     = gen_enqueuer.get()
        gen_val = gen_val_enqueuer.get()

        for epoch in range(Init_Epoch, UnFreeze_Epoch):
            #---------------------------------------#
            #   If the model has a frozen part,
            #   unfreeze it and set the parameters
            #---------------------------------------#
            if epoch >= Freeze_Epoch and not UnFreeze_flag and Freeze_Train:
                batch_size = Unfreeze_batch_size

                #-------------------------------------------------------------------#
                #   Adjust the learning rate adaptively according to the current batch_size
                #-------------------------------------------------------------------#
                nbs             = 16
                lr_limit_max    = 1e-4 if optimizer_type == 'adam' else 5e-2
                lr_limit_min    = 1e-4 if optimizer_type == 'adam' else 5e-4
                Init_lr_fit     = min(max(batch_size / nbs * Init_lr, lr_limit_min), lr_limit_max)
                Min_lr_fit      = min(max(batch_size / nbs * Min_lr, lr_limit_min * 1e-2), lr_limit_max * 1e-2)
                #---------------------------------------#
                #   Get the learning rate schedule
                #---------------------------------------#
                lr_scheduler_func = get_lr_scheduler(lr_decay_type, Init_lr_fit, Min_lr_fit, UnFreeze_Epoch)

                for i in range(freeze_layers):
                    if type(model_all.layers[i]) != tf.keras.layers.BatchNormalization:
                        model_all.layers[i].trainable = True

                if ngpus_per_node > 1:
                    with strategy.scope():
                        model_rpn.compile(
                            loss = {'classification' : rpn_cls_loss(), 'regression' : rpn_smooth_l1()}, optimizer = optimizer
                        )
                        model_all.compile(
                            loss = {
                                'classification'                        : rpn_cls_loss(),
                                'regression'                            : rpn_smooth_l1(),
                                'dense_class_{}'.format(num_classes)    : classifier_cls_loss(),
                                'dense_regress_{}'.format(num_classes)  : classifier_smooth_l1(num_classes - 1)
                            }, optimizer = optimizer
                        )
                else:
                    model_rpn.compile(
                        loss = {'classification' : rpn_cls_loss(), 'regression' : rpn_smooth_l1()}, optimizer = optimizer
                    )
                    model_all.compile(
                        loss = {
                            'classification'                        : rpn_cls_loss(),
                            'regression'                            : rpn_smooth_l1(),
                            'dense_class_{}'.format(num_classes)    : classifier_cls_loss(),
                            'dense_regress_{}'.format(num_classes)  : classifier_smooth_l1(num_classes - 1)
                        }, optimizer = optimizer
                    )

                epoch_step      = num_train // batch_size
                epoch_step_val  = num_val // batch_size

                if epoch_step == 0 or epoch_step_val == 0:
                    raise ValueError("The dataset is too small to continue training; please expand it.")

                train_dataloader.batch_size = batch_size
                val_dataloader.batch_size   = batch_size

                gen_enqueuer.stop()
                gen_val_enqueuer.stop()
                #---------------------------------------#
                #   Build multi-threaded data loaders
                #---------------------------------------#
                gen_enqueuer        = OrderedEnqueuer(train_dataloader, use_multiprocessing=True if num_workers > 1 else False, shuffle=True)
                gen_val_enqueuer    = OrderedEnqueuer(val_dataloader, use_multiprocessing=True if num_workers > 1 else False, shuffle=True)
                gen_enqueuer.start(workers=num_workers, max_queue_size=10)
                gen_val_enqueuer.start(workers=num_workers, max_queue_size=10)
                gen     = gen_enqueuer.get()
                gen_val = gen_val_enqueuer.get()

                UnFreeze_flag = True

            lr = lr_scheduler_func(epoch)
            K.set_value(optimizer.lr, lr)

            fit_one_epoch(model_rpn, model_all, loss_history, eval_callback, callback, epoch, epoch_step, epoch_step_val, gen, gen_val, UnFreeze_Epoch,
                    anchors, bbox_util, roi_helper, save_period, save_dir)

================================================
FILE: utils/__init__.py
================================================
#

================================================
FILE: utils/anchors.py
================================================
import numpy as np
from tensorflow import keras


#---------------------------------------------------#
#   Generate the base anchors
#---------------------------------------------------#
def generate_anchors(sizes = [128, 256, 512], ratios = [[1, 1], [1, 2], [2, 1]]):
    num_anchors = len(sizes) * len(ratios)

    anchors = np.zeros((num_anchors, 4))
    anchors[:, 2:] = np.tile(sizes, (2, len(ratios))).T

    for i in range(len(ratios)):
        anchors[3 * i: 3 * i + 3, 2] = anchors[3 * i: 3 * i + 3, 2] * ratios[i][0]
        anchors[3 * i: 3 * i + 3, 3] = anchors[3 * i: 3 * i + 3, 3] * ratios[i][1]

    anchors[:, 0::2] -= np.tile(anchors[:, 2] * 0.5, (2, 1)).T
    anchors[:, 1::2] -= np.tile(anchors[:, 3] * 0.5, (2, 1)).T
    return anchors

#---------------------------------------------------#
#   Tile the base anchors over the feature map to obtain all proposal anchors
#---------------------------------------------------#
def shift(shape, anchors, stride=16):
    #---------------------------------------------------#
    #   [0, 1, 2, 3, 4, 5, ..., 37]
    #   [0.5, 1.5, 2.5, ..., 37.5]
    #   [8, 24, ...]
    #---------------------------------------------------#
    shift_x = (np.arange(0, shape[1], dtype=keras.backend.floatx()) + 0.5) * stride
    shift_y = (np.arange(0, shape[0], dtype=keras.backend.floatx()) + 0.5) * stride

    shift_x, shift_y = np.meshgrid(shift_x, shift_y)
    shift_x = np.reshape(shift_x, [-1])
    shift_y = np.reshape(shift_y, [-1])

    shifts = np.stack([
        shift_x,
        shift_y,
        shift_x,
        shift_y
    ], axis=0)
    shifts = np.transpose(shifts)

    number_of_anchors   = np.shape(anchors)[0]
    k                   = np.shape(shifts)[0]

    shifted_anchors = np.reshape(anchors, [1, number_of_anchors, 4]) + np.array(np.reshape(shifts, [k, 1, 4]), keras.backend.floatx())
    shifted_anchors = np.reshape(shifted_anchors, [k * number_of_anchors, 4])
    return shifted_anchors

#---------------------------------------------------#
#   Get the base-layer size for resnet50
#---------------------------------------------------#
def get_resnet50_output_length(height, width):
    def get_output_length(input_length):
        filter_sizes    = [7, 3, 1, 1]
        padding         = [3, 1, 0, 0]
        stride          = 2
        for i in range(4):
            input_length = (input_length + 2 * padding[i] - filter_sizes[i]) // stride + 1
        return input_length
    return get_output_length(height), get_output_length(width)

#---------------------------------------------------#
#   Get the base-layer size for vgg
#---------------------------------------------------#
def get_vgg_output_length(height, width):
    def get_output_length(input_length):
        filter_sizes    = [2, 2, 2, 2]
        padding         = [0, 0, 0, 0]
        stride          = 2
        for i in range(4):
            input_length = (input_length + 2 * padding[i] - filter_sizes[i]) // stride + 1
        return input_length
    return get_output_length(height), get_output_length(width)

def get_anchors(input_shape, backbone, sizes = [128, 256, 512], ratios = [[1, 1], [1, 2], [2, 1]], stride=16):
    if backbone == 'vgg':
        feature_shape = get_vgg_output_length(input_shape[0], input_shape[1])
    else:
        feature_shape = get_resnet50_output_length(input_shape[0], input_shape[1])
    anchors = generate_anchors(sizes = sizes, ratios = ratios)
    anchors = shift(feature_shape, anchors, stride = stride)
    anchors[:, ::2]  /= input_shape[1]
    anchors[:, 1::2] /= input_shape[0]
    anchors = np.clip(anchors, 0, 1)
    return anchors
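The anchor construction above can be sanity-checked in isolation. The snippet below is a standalone sketch (independent of the rest of the repo) that copies the `generate_anchors` arithmetic from `utils/anchors.py` and reads off the nine base (width, height) pairs that the `anchors_size` comment in train.py lists:

```python
import numpy as np

def generate_anchors(sizes=[128, 256, 512], ratios=[[1, 1], [1, 2], [2, 1]]):
    # Same arithmetic as utils/anchors.py: 9 base anchors stored as
    # [x_min, y_min, x_max, y_max].
    num_anchors = len(sizes) * len(ratios)
    anchors = np.zeros((num_anchors, 4))
    # Fill widths/heights: sizes repeated once per ratio group.
    anchors[:, 2:] = np.tile(sizes, (2, len(ratios))).T
    # Scale each group of 3 anchors by its aspect ratio.
    for i in range(len(ratios)):
        anchors[3 * i: 3 * i + 3, 2] *= ratios[i][0]
        anchors[3 * i: 3 * i + 3, 3] *= ratios[i][1]
    # Re-centre each box on the origin.
    anchors[:, 0::2] -= np.tile(anchors[:, 2] * 0.5, (2, 1)).T
    anchors[:, 1::2] -= np.tile(anchors[:, 3] * 0.5, (2, 1)).T
    return anchors

base    = generate_anchors()
widths  = base[:, 2] - base[:, 0]
heights = base[:, 3] - base[:, 1]
# Nine (width, height) pairs: three squares, three tall boxes (ratio 1:2),
# three wide boxes (ratio 2:1), matching the anchors_size comment in train.py:
# (128, 128) (256, 256) (512, 512) (128, 256) (256, 512) (512, 1024)
# (256, 128) (512, 256) (1024, 512)
```

`get_anchors` then tiles these nine boxes over every feature-map cell via `shift` (a 38 x 38 map for a 600 x 600 input at stride 16, per `get_resnet50_output_length`) and normalizes them to [0, 1], giving 38 * 38 * 9 = 12996 anchors in total.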
================================================
FILE: utils/callbacks.py
================================================
import os

import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot as plt
import scipy.signal

import shutil
import numpy as np

from tensorflow.keras.applications.imagenet_utils import preprocess_input
from tensorflow import keras
from PIL import Image
from tqdm import tqdm

from .anchors import get_anchors
from .utils import cvtColor, get_new_img_size, resize_image
from .utils_bbox import BBoxUtility
from .utils_map import get_coco_map, get_map


class LossHistory(keras.callbacks.Callback):
    def __init__(self, log_dir):
        self.log_dir    = log_dir
        self.losses     = []
        self.val_loss   = []

        if not os.path.exists(self.log_dir):
            os.makedirs(self.log_dir)

    def on_epoch_end(self, epoch, logs={}):
        if not os.path.exists(self.log_dir):
            os.makedirs(self.log_dir)

        self.losses.append(logs.get('loss'))
        self.val_loss.append(logs.get('val_loss'))

        with open(os.path.join(self.log_dir, "epoch_loss.txt"), 'a') as f:
            f.write(str(logs.get('loss')))
            f.write("\n")
        with open(os.path.join(self.log_dir, "epoch_val_loss.txt"), 'a') as f:
            f.write(str(logs.get('val_loss')))
            f.write("\n")
        self.loss_plot()

    def loss_plot(self):
        iters = range(len(self.losses))

        plt.figure()
        plt.plot(iters, self.losses, 'red', linewidth = 2, label='train loss')
        plt.plot(iters, self.val_loss, 'coral', linewidth = 2, label='val loss')
        try:
            if len(self.losses) < 25:
                num = 5
            else:
                num = 15

            plt.plot(iters, scipy.signal.savgol_filter(self.losses, num, 3), 'green', linestyle = '--', linewidth = 2, label='smooth train loss')
            plt.plot(iters, scipy.signal.savgol_filter(self.val_loss, num, 3), '#8B4513', linestyle = '--', linewidth = 2, label='smooth val loss')
        except:
            pass

        plt.grid(True)
        plt.xlabel('Epoch')
        plt.ylabel('Loss')
        plt.title('A Loss Curve')
        plt.legend(loc="upper right")

        plt.savefig(os.path.join(self.log_dir, "epoch_loss.png"))

        plt.cla()
        plt.close("all")


class EvalCallback(keras.callbacks.Callback):
    def __init__(self, model_rpn, model_all, backbone, input_shape, anchors_size, class_names, num_classes, val_lines, log_dir, \
            map_out_path=".temp_map_out", max_boxes=100, confidence=0.05, nms_iou=0.5, letterbox_image=True, MINOVERLAP=0.5, eval_flag=True, period=1):
        super(EvalCallback, self).__init__()

        self.model_rpn          = model_rpn
        self.model_all          = model_all
        self.backbone           = backbone
        self.input_shape        = input_shape
        self.anchors_size       = anchors_size
        self.class_names        = class_names
        self.num_classes        = num_classes
        self.val_lines          = val_lines
        self.log_dir            = log_dir
        self.map_out_path       = map_out_path
        self.max_boxes          = max_boxes
        self.confidence         = confidence
        self.nms_iou            = nms_iou
        self.letterbox_image    = letterbox_image
        self.MINOVERLAP         = MINOVERLAP
        self.eval_flag          = eval_flag
        self.period             = period

        #---------------------------------------------------#
        #   Create a toolbox used for decoding
        #   It uses at most min_k proposal boxes, 150 by default
        #---------------------------------------------------#
        self.bbox_util          = BBoxUtility(self.num_classes, nms_iou = self.nms_iou, min_k = 150)

        self.maps       = [0]
        self.epoches    = [0]
        if self.eval_flag:
            with open(os.path.join(self.log_dir, "epoch_map.txt"), 'a') as f:
                f.write(str(0))
                f.write("\n")

    def get_map_txt(self, image_id, image, class_names, map_out_path):
        f = open(os.path.join(map_out_path, "detection-results/"+image_id+".txt"), "w")
        #---------------------------------------------------#
        #   Compute the height and width of the input image
        #---------------------------------------------------#
        image_shape = np.array(np.shape(image)[0:2])
        input_shape = get_new_img_size(image_shape[0], image_shape[1])
        #---------------------------------------------------------#
        #   Convert the image to RGB here to prevent grayscale
        #   images from raising errors during prediction.
        #   The code only supports predicting RGB images; all other
        #   image types are converted to RGB
        #---------------------------------------------------------#
        image       = cvtColor(image)

        #---------------------------------------------------------#
        #   Resize the original image so that its short side is 600
        #---------------------------------------------------------#
        image_data  = resize_image(image, [input_shape[1], input_shape[0]])
        #---------------------------------------------------------#
        #   Add the batch_size dimension
        #---------------------------------------------------------#
        image_data  = np.expand_dims(preprocess_input(np.array(image_data, dtype='float32')), 0)

        #---------------------------------------------------------#
        #   Get the rpn prediction and the base_layer
        #---------------------------------------------------------#
        rpn_pred        = self.model_rpn(image_data)
        rpn_pred        = [x.numpy() for x in rpn_pred]
        #---------------------------------------------------------#
        #   Generate anchors and decode them
        #---------------------------------------------------------#
        anchors         = get_anchors(input_shape, self.backbone, self.anchors_size)
        rpn_results     = self.bbox_util.detection_out_rpn(rpn_pred, anchors)

        #-------------------------------------------------------------#
        #   Use the proposal boxes to get the classifier prediction
        #-------------------------------------------------------------#
        classifier_pred = self.model_all([image_data, rpn_results[:, :, [1, 0, 3, 2]]])[-2:]
        classifier_pred = [x.numpy() for x in classifier_pred]
        #-------------------------------------------------------------#
        #   Use the classifier prediction to decode the proposals and obtain the final boxes
        #-------------------------------------------------------------#
        results         = self.bbox_util.detection_out_classifier(classifier_pred, rpn_results, image_shape, input_shape, self.confidence)

        #--------------------------------------#
        #   If no object is detected, return the original image
        #--------------------------------------#
        if len(results[0]) <= 0:
            return

        top_label   = np.array(results[0][:, 5], dtype = 'int32')
        top_conf    = results[0][:, 4]
        top_boxes   = results[0][:, :4]

        top_100     = np.argsort(top_conf)[::-1][:self.max_boxes]
        top_boxes   = top_boxes[top_100]
        top_conf    = top_conf[top_100]
        top_label   = top_label[top_100]

        for i, c in list(enumerate(top_label)):
            predicted_class = self.class_names[int(c)]
            box             = top_boxes[i]
            score           = str(top_conf[i])

            top, left, bottom, right = box
            if predicted_class not in class_names:
                continue

            f.write("%s %s %s %s %s %s\n" % (predicted_class, score[:6], str(int(left)), str(int(top)), str(int(right)), str(int(bottom))))

        f.close()
        return

    def on_epoch_end(self, epoch, logs=None):
        temp_epoch = epoch + 1
        if temp_epoch % self.period == 0 and self.eval_flag:
            if not os.path.exists(self.map_out_path):
                os.makedirs(self.map_out_path)
            if not os.path.exists(os.path.join(self.map_out_path, "ground-truth")):
                os.makedirs(os.path.join(self.map_out_path, "ground-truth"))
            if not os.path.exists(os.path.join(self.map_out_path, "detection-results")):
                os.makedirs(os.path.join(self.map_out_path, "detection-results"))
            print("Get map.")
            for annotation_line in tqdm(self.val_lines):
                line        = annotation_line.split()
                image_id    = os.path.basename(line[0]).split('.')[0]
                #------------------------------#
                #   Read the image and convert it to RGB
                #------------------------------#
                image       = Image.open(line[0])
                #------------------------------#
                #   Get the ground-truth boxes
                #------------------------------#
                gt_boxes    = np.array([np.array(list(map(int, box.split(',')))) for box in line[1:]])
                #------------------------------#
                #   Get the prediction txt
                #------------------------------#
                self.get_map_txt(image_id, image, self.class_names, self.map_out_path)

                #------------------------------#
                #   Get the ground-truth txt
                #------------------------------#
                with open(os.path.join(self.map_out_path, "ground-truth/"+image_id+".txt"), "w") as new_f:
                    for box in gt_boxes:
                        left, top, right, bottom, obj = box
                        obj_name = self.class_names[obj]
                        new_f.write("%s %s %s %s %s\n" % (obj_name, left, top, right, bottom))

            print("Calculate Map.")
            try:
                temp_map = get_coco_map(class_names = self.class_names, path = self.map_out_path)[1]
            except:
                temp_map = get_map(self.MINOVERLAP, False, path = self.map_out_path)
            self.maps.append(temp_map)
            self.epoches.append(temp_epoch)

            with open(os.path.join(self.log_dir, "epoch_map.txt"), 'a') as f:
                f.write(str(temp_map))
                f.write("\n")

            plt.figure()
            plt.plot(self.epoches, self.maps, 'red', linewidth = 2, label='train map')

            plt.grid(True)
            plt.xlabel('Epoch')
            plt.ylabel('Map %s'%str(self.MINOVERLAP))
            plt.title('A Map Curve')
            plt.legend(loc="upper right")
            plt.savefig(os.path.join(self.log_dir, "epoch_map.png"))
            plt.cla()
            plt.close("all")

            print("Get map done.")
            shutil.rmtree(self.map_out_path)

================================================
FILE: utils/dataloader.py
================================================
import math
import multiprocessing
import random
import threading
import time
from abc import abstractmethod
from contextlib import closing
from multiprocessing.pool import ThreadPool
from random import shuffle

import cv2
import numpy as np
import six
from PIL import Image
from tensorflow.keras.applications.imagenet_utils import preprocess_input
from tensorflow import keras

try:
    import queue
except ImportError:
    import Queue as queue

from utils.utils import cvtColor


class FRCNNDatasets(keras.utils.Sequence):
    def __init__(self, annotation_lines, input_shape, anchors, batch_size, num_classes, train, n_sample = 256, ignore_threshold = 0.3, overlap_threshold = 0.7):
        self.annotation_lines   = annotation_lines
        self.length             = len(self.annotation_lines)

        self.input_shape        = input_shape
        self.anchors            = anchors
        self.num_anchors        = len(anchors)
        self.batch_size         = batch_size
        self.num_classes        = num_classes
        self.train              = train

        self.n_sample           = n_sample
        self.ignore_threshold   = ignore_threshold
        self.overlap_threshold  = overlap_threshold

    def __len__(self):
        return math.ceil(len(self.annotation_lines) / float(self.batch_size))

    def __getitem__(self, index):
        image_data      = []
        classifications = []
        regressions     = []
        targets         = []

        for i in range(index * self.batch_size, (index + 1) * self.batch_size):
            i = i % self.length
            #---------------------------------------------------#
            #   Random data augmentation during training,
            #   none during validation
            #---------------------------------------------------#
            image, box  = self.get_random_data(self.annotation_lines[i], self.input_shape, random = self.train)
            if len(box) != 0:
                boxes               = np.array(box[:, :4], dtype=np.float32)
                boxes[:, [0, 2]]    = boxes[:, [0, 2]] / self.input_shape[1]
                boxes[:, [1, 3]]    = boxes[:, [1, 3]] / self.input_shape[0]
                box                 = np.concatenate([boxes, box[:, -1:]], axis=-1)

            assignment      = self.assign_boxes(box)
            classification  = assignment[:, 4]
            regression      = assignment[:, :]
            #---------------------------------------------------#
            #   Select positive and negative samples; the total
            #   number of training samples is 256
            #---------------------------------------------------#
            pos_index = np.where(classification > 0)[0]
            if len(pos_index) > self.n_sample / 2:
                disable_index = np.random.choice(pos_index, size=(len(pos_index) - self.n_sample // 2), replace=False)
                classification[disable_index]   = -1
                regression[disable_index, -1]   = -1

            #-----------------------------------------------------#
            #   Balance positive and negative samples, keeping the total at 256
            #-----------------------------------------------------#
            n_neg = self.n_sample - np.sum(classification > 0)
            neg_index = np.where(classification == 0)[0]
            if len(neg_index) > n_neg:
                disable_index = np.random.choice(neg_index, size=(len(neg_index) - n_neg), replace=False)
                classification[disable_index]   = -1
                regression[disable_index, -1]   = -1

            image_data.append(preprocess_input(np.array(image, np.float32)))
            classifications.append(np.expand_dims(classification, -1))
            regressions.append(regression)
            targets.append(box)

        return np.array(image_data), [np.array(classifications, dtype=np.float32), np.array(regressions, dtype=np.float32)], targets

    def generate(self):
        i = 0
        while True:
            image_data      = []
            classifications = []
            regressions     = []
            targets         = []
            for b in range(self.batch_size):
                if i == 0:
                    np.random.shuffle(self.annotation_lines)
                #---------------------------------------------------#
                #   Random data augmentation during training,
                #   none during validation
                #---------------------------------------------------#
                image, box  = self.get_random_data(self.annotation_lines[i], self.input_shape, random = self.train)
                if len(box) != 0:
                    boxes               = np.array(box[:, :4], dtype=np.float32)
                    boxes[:, [0, 2]]    = boxes[:, [0, 2]] / self.input_shape[1]
                    boxes[:, [1, 3]]    = boxes[:, [1, 3]] / self.input_shape[0]
                    box                 = np.concatenate([boxes, box[:, -1:]], axis=-1)

                assignment      = self.assign_boxes(box)
                classification  = assignment[:, 4]
                regression      = assignment[:, :]
                #---------------------------------------------------#
                #   Select positive and negative samples; the total
                #   number of training samples is 256
                #---------------------------------------------------#
                pos_index = np.where(classification > 0)[0]
                if len(pos_index) > self.n_sample / 2:
                    disable_index = np.random.choice(pos_index, size=(len(pos_index) - self.n_sample // 2), replace=False)
                    classification[disable_index]   = -1
                    regression[disable_index, -1]   = -1

                #-----------------------------------------------------#
                #   Balance positive and negative samples, keeping the total at 256
                #-----------------------------------------------------#
                n_neg = self.n_sample - np.sum(classification > 0)
                neg_index = np.where(classification == 0)[0]
                if len(neg_index) > n_neg:
                    disable_index = np.random.choice(neg_index, size=(len(neg_index) - n_neg), replace=False)
                    classification[disable_index]   = -1
                    regression[disable_index, -1]   = -1

                i = (i + 1) % self.length
                image_data.append(preprocess_input(np.array(image, np.float32)))
                classifications.append(np.expand_dims(classification, -1))
                regressions.append(regression)
                targets.append(box)

            yield np.array(image_data), [np.array(classifications, dtype=np.float32), np.array(regressions, dtype=np.float32)], targets

    def on_epoch_end(self):
        shuffle(self.annotation_lines)

    def rand(self, a=0, b=1):
        return np.random.rand() * (b - a) + a

    def get_random_data(self, annotation_line, input_shape, jitter=.3, hue=.1, sat=0.7, val=0.4, random=True):
        line = annotation_line.split()
        #------------------------------#
        #   Read the image and convert it to RGB
        #------------------------------#
        image   = Image.open(line[0])
        image   = cvtColor(image)
        #------------------------------#
        #   Get the image size and the target size
        #------------------------------#
        iw, ih  = image.size
        h, w    = input_shape
        #------------------------------#
        #   Get the ground-truth boxes
        #------------------------------#
        box     = np.array([np.array(list(map(int, box.split(',')))) for box in line[1:]])

        if not random:
            scale = min(w/iw, h/ih)
            nw = int(iw*scale)
            nh = int(ih*scale)
            dx = (w-nw)//2
            dy = (h-nh)//2

            #---------------------------------#
            #   Pad the unused part of the image with gray bars
            #---------------------------------#
image = image.resize((nw,nh), Image.BICUBIC) new_image = Image.new('RGB', (w,h), (128,128,128)) new_image.paste(image, (dx, dy)) image_data = np.array(new_image, np.float32) #---------------------------------# # 对真实框进行调整 #---------------------------------# if len(box)>0: np.random.shuffle(box) box[:, [0,2]] = box[:, [0,2]]*nw/iw + dx box[:, [1,3]] = box[:, [1,3]]*nh/ih + dy box[:, 0:2][box[:, 0:2]<0] = 0 box[:, 2][box[:, 2]>w] = w box[:, 3][box[:, 3]>h] = h box_w = box[:, 2] - box[:, 0] box_h = box[:, 3] - box[:, 1] box = box[np.logical_and(box_w>1, box_h>1)] # discard invalid box return image_data, box #------------------------------------------# # 对图像进行缩放并且进行长和宽的扭曲 #------------------------------------------# new_ar = iw/ih * self.rand(1-jitter,1+jitter) / self.rand(1-jitter,1+jitter) scale = self.rand(.25, 2) if new_ar < 1: nh = int(scale*h) nw = int(nh*new_ar) else: nw = int(scale*w) nh = int(nw/new_ar) image = image.resize((nw,nh), Image.BICUBIC) #------------------------------------------# # 将图像多余的部分加上灰条 #------------------------------------------# dx = int(self.rand(0, w-nw)) dy = int(self.rand(0, h-nh)) new_image = Image.new('RGB', (w,h), (128,128,128)) new_image.paste(image, (dx, dy)) image = new_image #------------------------------------------# # 翻转图像 #------------------------------------------# flip = self.rand()<.5 if flip: image = image.transpose(Image.FLIP_LEFT_RIGHT) image_data = np.array(image, np.uint8) #---------------------------------# # 对图像进行色域变换 # 计算色域变换的参数 #---------------------------------# r = np.random.uniform(-1, 1, 3) * [hue, sat, val] + 1 #---------------------------------# # 将图像转到HSV上 #---------------------------------# hue, sat, val = cv2.split(cv2.cvtColor(image_data, cv2.COLOR_RGB2HSV)) dtype = image_data.dtype #---------------------------------# # 应用变换 #---------------------------------# x = np.arange(0, 256, dtype=r.dtype) lut_hue = ((x * r[0]) % 180).astype(dtype) lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) lut_val = 
np.clip(x * r[2], 0, 255).astype(dtype) image_data = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))) image_data = cv2.cvtColor(image_data, cv2.COLOR_HSV2RGB) #---------------------------------# # 对真实框进行调整 #---------------------------------# if len(box)>0: np.random.shuffle(box) box[:, [0,2]] = box[:, [0,2]]*nw/iw + dx box[:, [1,3]] = box[:, [1,3]]*nh/ih + dy if flip: box[:, [0,2]] = w - box[:, [2,0]] box[:, 0:2][box[:, 0:2]<0] = 0 box[:, 2][box[:, 2]>w] = w box[:, 3][box[:, 3]>h] = h box_w = box[:, 2] - box[:, 0] box_h = box[:, 3] - box[:, 1] box = box[np.logical_and(box_w>1, box_h>1)] return image_data, box def iou(self, box): #---------------------------------------------# # 计算出每个真实框与所有的先验框的iou # 判断真实框与先验框的重合情况 #---------------------------------------------# inter_upleft = np.maximum(self.anchors[:, :2], box[:2]) inter_botright = np.minimum(self.anchors[:, 2:4], box[2:]) inter_wh = inter_botright - inter_upleft inter_wh = np.maximum(inter_wh, 0) inter = inter_wh[:, 0] * inter_wh[:, 1] #---------------------------------------------# # 真实框的面积 #---------------------------------------------# area_true = (box[2] - box[0]) * (box[3] - box[1]) #---------------------------------------------# # 先验框的面积 #---------------------------------------------# area_gt = (self.anchors[:, 2] - self.anchors[:, 0])*(self.anchors[:, 3] - self.anchors[:, 1]) #---------------------------------------------# # 计算iou #---------------------------------------------# union = area_true + area_gt - inter iou = inter / union return iou def encode_ignore_box(self, box, return_iou=True, variances = [0.25, 0.25, 0.25, 0.25]): #---------------------------------------------# # 计算当前真实框和先验框的重合情况 #---------------------------------------------# iou = self.iou(box) ignored_box = np.zeros((self.num_anchors, 1)) #---------------------------------------------------# # 找到处于忽略门限值范围内的先验框 #---------------------------------------------------# assign_mask_ignore = (iou > 
self.ignore_threshold) & (iou < self.overlap_threshold) ignored_box[:, 0][assign_mask_ignore] = iou[assign_mask_ignore] encoded_box = np.zeros((self.num_anchors, 4 + return_iou)) #---------------------------------------------------# # 找到每一个真实框,重合程度较高的先验框 #---------------------------------------------------# assign_mask = iou > self.overlap_threshold #---------------------------------------------# # 如果没有一个先验框重合度大于self.overlap_threshold # 则选择重合度最大的为正样本 #---------------------------------------------# if not assign_mask.any(): assign_mask[iou.argmax()] = True #---------------------------------------------# # 利用iou进行赋值 #---------------------------------------------# if return_iou: encoded_box[:, -1][assign_mask] = iou[assign_mask] #---------------------------------------------# # 找到对应的先验框 #---------------------------------------------# assigned_anchors = self.anchors[assign_mask] #---------------------------------------------# # 逆向编码,将真实框转化为FRCNN预测结果的格式 # 先计算真实框的中心与长宽 #---------------------------------------------# box_center = 0.5 * (box[:2] + box[2:]) box_wh = box[2:] - box[:2] #---------------------------------------------# # 再计算重合度较高的先验框的中心与长宽 #---------------------------------------------# assigned_anchors_center = 0.5 * (assigned_anchors[:, :2] + assigned_anchors[:, 2:4]) assigned_anchors_wh = assigned_anchors[:, 2:4] - assigned_anchors[:, :2] # 逆向求取FasterRCNN应该有的预测结果 encoded_box[:, :2][assign_mask] = box_center - assigned_anchors_center encoded_box[:, :2][assign_mask] /= assigned_anchors_wh encoded_box[:, :2][assign_mask] /= np.array(variances)[:2] encoded_box[:, 2:4][assign_mask] = np.log(box_wh / assigned_anchors_wh) encoded_box[:, 2:4][assign_mask] /= np.array(variances)[2:4] return encoded_box.ravel(), ignored_box.ravel() def assign_boxes(self, boxes): #---------------------------------------------------# # assignment分为2个部分 # :4 的内容为网络应该有的回归预测结果 # 4 的内容为先验框是否包含物体,默认为背景 #---------------------------------------------------# assignment = 
np.zeros((self.num_anchors, 4 + 1)) assignment[:, 4] = 0.0 if len(boxes) == 0: return assignment #---------------------------------------------------# # Compute the IoU against all anchors for every ground-truth box #---------------------------------------------------# apply_along_axis_boxes = np.apply_along_axis(self.encode_ignore_box, 1, boxes[:, :4]) encoded_boxes = np.array([apply_along_axis_boxes[i, 0] for i in range(len(apply_along_axis_boxes))]) ignored_boxes = np.array([apply_along_axis_boxes[i, 1] for i in range(len(apply_along_axis_boxes))]) #---------------------------------------------------# # After the reshape, ignored_boxes has shape: # [num_true_box, num_anchors, 1], where the 1 is the iou #---------------------------------------------------# ignored_boxes = ignored_boxes.reshape(-1, self.num_anchors, 1) ignore_iou = ignored_boxes[:, :, 0].max(axis=0) ignore_iou_mask = ignore_iou > 0 assignment[:, 4][ignore_iou_mask] = -1 #---------------------------------------------------# # After the reshape, encoded_boxes has shape: # [num_true_box, num_anchors, 4+1] # 4 is the encoded box, 1 is the iou #---------------------------------------------------# encoded_boxes = encoded_boxes.reshape(-1, self.num_anchors, 5) #---------------------------------------------------# # [num_anchors] For each anchor, find the ground-truth box with the highest iou #---------------------------------------------------# best_iou = encoded_boxes[:, :, -1].max(axis=0) best_iou_idx = encoded_boxes[:, :, -1].argmax(axis=0) best_iou_mask = best_iou > 0 best_iou_idx = best_iou_idx[best_iou_mask] #---------------------------------------------------# # Count how many anchors qualify #---------------------------------------------------# assign_num = len(best_iou_idx) # Take out the encoded ground-truth boxes encoded_boxes = encoded_boxes[:, best_iou_mask, :] assignment[:, :4][best_iou_mask] = encoded_boxes[best_iou_idx,np.arange(assign_num), :4] #----------------------------------------------------------# # Index 4 marks whether the anchor contains an object; set it to 1 # because these anchors are matched to a ground-truth box #----------------------------------------------------------# assignment[:, 4][best_iou_mask] = 1 # Through assign_boxes we obtain the targets the network should predict for this input image 
return assignment #----------------------------------------------------------# # 多进程进行数据读取的代码,Copy From Keras==2.1.5 # Training-related part of the Keras engine. #----------------------------------------------------------# _SHARED_SEQUENCES = {} _SEQUENCE_COUNTER = None def init_pool(seqs): global _SHARED_SEQUENCES _SHARED_SEQUENCES = seqs def get_index(uid, i): """Get the value from the Sequence `uid` at index `i`. To allow multiple Sequences to be used at the same time, we use `uid` to get a specific one. A single Sequence would cause the validation to overwrite the training Sequence. # Arguments uid: int, Sequence identifier i: index # Returns The value at index `i`. """ return _SHARED_SEQUENCES[uid][i] class SequenceEnqueuer(object): """Base class to enqueue inputs. The task of an Enqueuer is to use parallelism to speed up preprocessing. This is done with processes or threads. # Examples ```python enqueuer = SequenceEnqueuer(...) enqueuer.start() datas = enqueuer.get() for data in datas: # Use the inputs; training, evaluating, predicting. # ... stop sometime. enqueuer.close() ``` The `enqueuer.get()` should be an infinite stream of datas. """ @abstractmethod def is_running(self): raise NotImplementedError @abstractmethod def start(self, workers=1, max_queue_size=10): """Starts the handler's workers. # Arguments workers: number of worker threads max_queue_size: queue size (when full, threads could block on `put()`). """ raise NotImplementedError @abstractmethod def stop(self, timeout=None): """Stop running threads and wait for them to exit, if necessary. Should be called by the same thread which called start(). # Arguments timeout: maximum time to wait on thread.join() """ raise NotImplementedError @abstractmethod def get(self): """Creates a generator to extract data from the queue. Skip the data if it is `None`. # Returns Generator yielding tuples `(inputs, targets)` or `(inputs, targets, sample_weights)`. 
""" raise NotImplementedError class OrderedEnqueuer(SequenceEnqueuer): """Builds a Enqueuer from a Sequence. Used in `fit_generator`, `evaluate_generator`, `predict_generator`. # Arguments sequence: A `keras.utils.data_utils.Sequence` object. use_multiprocessing: use multiprocessing if True, otherwise threading shuffle: whether to shuffle the data at the beginning of each epoch """ def __init__(self, sequence, use_multiprocessing=False, shuffle=False): self.sequence = sequence self.use_multiprocessing = use_multiprocessing global _SEQUENCE_COUNTER if _SEQUENCE_COUNTER is None: try: _SEQUENCE_COUNTER = multiprocessing.Value('i', 0) except OSError: # In this case the OS does not allow us to use # multiprocessing. We resort to an int # for enqueuer indexing. _SEQUENCE_COUNTER = 0 if isinstance(_SEQUENCE_COUNTER, int): self.uid = _SEQUENCE_COUNTER _SEQUENCE_COUNTER += 1 else: # Doing Multiprocessing.Value += x is not process-safe. with _SEQUENCE_COUNTER.get_lock(): self.uid = _SEQUENCE_COUNTER.value _SEQUENCE_COUNTER.value += 1 self.shuffle = shuffle self.workers = 0 self.executor_fn = None self.queue = None self.run_thread = None self.stop_signal = None def is_running(self): return self.stop_signal is not None and not self.stop_signal.is_set() def start(self, workers=1, max_queue_size=10): """Start the handler's workers. # Arguments workers: number of worker threads max_queue_size: queue size (when full, workers could block on `put()`) """ if self.use_multiprocessing: self.executor_fn = lambda seqs: multiprocessing.Pool(workers, initializer=init_pool, initargs=(seqs,)) else: # We do not need the init since it's threads. 
self.executor_fn = lambda _: ThreadPool(workers) self.workers = workers self.queue = queue.Queue(max_queue_size) self.stop_signal = threading.Event() self.run_thread = threading.Thread(target=self._run) self.run_thread.daemon = True self.run_thread.start() def _wait_queue(self): """Wait for the queue to be empty.""" while True: time.sleep(0.1) if self.queue.unfinished_tasks == 0 or self.stop_signal.is_set(): return def _run(self): """Submits request to the executor and queue the `Future` objects.""" sequence = list(range(len(self.sequence))) self._send_sequence() # Share the initial sequence while True: if self.shuffle: random.shuffle(sequence) with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor: for i in sequence: if self.stop_signal.is_set(): return self.queue.put( executor.apply_async(get_index, (self.uid, i)), block=True) # Done with the current epoch, waiting for the final batches self._wait_queue() if self.stop_signal.is_set(): # We're done return # Call the internal on epoch end. self.sequence.on_epoch_end() self._send_sequence() # Update the pool def get(self): """Creates a generator to extract data from the queue. Skip the data if it is `None`. # Yields The next element in the queue, i.e. a tuple `(inputs, targets)` or `(inputs, targets, sample_weights)`. """ try: while self.is_running(): inputs = self.queue.get(block=True).get() self.queue.task_done() if inputs is not None: yield inputs except Exception as e: self.stop() six.raise_from(StopIteration(e), e) def _send_sequence(self): """Send current Sequence to all workers.""" # For new processes that may spawn _SHARED_SEQUENCES[self.uid] = self.sequence def stop(self, timeout=None): """Stops running threads and wait for them to exit, if necessary. Should be called by the same thread which called `start()`. 
# Arguments timeout: maximum time to wait on `thread.join()` """ self.stop_signal.set() with self.queue.mutex: self.queue.queue.clear() self.queue.unfinished_tasks = 0 self.queue.not_full.notify() self.run_thread.join(timeout) _SHARED_SEQUENCES[self.uid] = None ================================================ FILE: utils/utils.py ================================================ import numpy as np from PIL import Image #---------------------------------------------------------# # 将图像转换成RGB图像,防止灰度图在预测时报错。 # 代码仅仅支持RGB图像的预测,所有其它类型的图像都会转化成RGB #---------------------------------------------------------# def cvtColor(image): if len(np.shape(image)) == 3 and np.shape(image)[2] == 3: return image else: image = image.convert('RGB') return image #---------------------------------------------------# # 对输入图像进行resize #---------------------------------------------------# def resize_image(image, size): w, h = size new_image = image.resize((w, h), Image.BICUBIC) return new_image #---------------------------------------------------# # 获得类 #---------------------------------------------------# def get_classes(classes_path): with open(classes_path, encoding='utf-8') as f: class_names = f.readlines() class_names = [c.strip() for c in class_names] return class_names, len(class_names) def show_config(**kwargs): print('Configurations:') print('-' * 70) print('|%25s | %40s|' % ('keys', 'values')) print('-' * 70) for key, value in kwargs.items(): print('|%25s | %40s|' % (str(key), str(value))) print('-' * 70) #---------------------------------------------------# # 获得输入图片的大小 #---------------------------------------------------# def get_new_img_size(height, width, img_min_side=600): if width <= height: f = float(img_min_side) / width resized_height = int(f * height) resized_width = int(img_min_side) else: f = float(img_min_side) / height resized_width = int(f * width) resized_height = int(img_min_side) return resized_height, resized_width 
#-------------------------------------------------------------------------------------------------------------------------------# # From https://github.com/ckyrkou/Keras_FLOP_Estimator # Fix lots of bugs #-------------------------------------------------------------------------------------------------------------------------------# def net_flops(model, table=False, print_result=True): if (table == True): print("\n") print('%25s | %16s | %16s | %16s | %16s | %6s | %6s' % ( 'Layer Name', 'Input Shape', 'Output Shape', 'Kernel Size', 'Filters', 'Strides', 'FLOPS')) print('=' * 120) #---------------------------------------------------# # 总的FLOPs #---------------------------------------------------# t_flops = 0 factor = 1e9 for l in model.layers: try: #--------------------------------------# # 所需参数的初始化定义 #--------------------------------------# o_shape, i_shape, strides, ks, filters = ('', '', ''), ('', '', ''), (1, 1), (0, 0), 0 flops = 0 #--------------------------------------# # 获得层的名字 #--------------------------------------# name = l.name if ('InputLayer' in str(l)): i_shape = l.get_input_shape_at(0)[1:4] o_shape = l.get_output_shape_at(0)[1:4] #--------------------------------------# # Reshape层 #--------------------------------------# elif ('Reshape' in str(l)): i_shape = l.get_input_shape_at(0)[1:4] o_shape = l.get_output_shape_at(0)[1:4] #--------------------------------------# # 填充层 #--------------------------------------# elif ('Padding' in str(l)): i_shape = l.get_input_shape_at(0)[1:4] o_shape = l.get_output_shape_at(0)[1:4] #--------------------------------------# # 平铺层 #--------------------------------------# elif ('Flatten' in str(l)): i_shape = l.get_input_shape_at(0)[1:4] o_shape = l.get_output_shape_at(0)[1:4] #--------------------------------------# # 激活函数层 #--------------------------------------# elif 'Activation' in str(l): i_shape = l.get_input_shape_at(0)[1:4] o_shape = l.get_output_shape_at(0)[1:4] #--------------------------------------# # 
LeakyReLU #--------------------------------------# elif 'LeakyReLU' in str(l): for i in range(len(l._inbound_nodes)): i_shape = l.get_input_shape_at(i)[1:4] o_shape = l.get_output_shape_at(i)[1:4] flops += i_shape[0] * i_shape[1] * i_shape[2] #--------------------------------------# # 池化层 #--------------------------------------# elif 'MaxPooling' in str(l): i_shape = l.get_input_shape_at(0)[1:4] o_shape = l.get_output_shape_at(0)[1:4] #--------------------------------------# # 池化层 #--------------------------------------# elif ('AveragePooling' in str(l) and 'Global' not in str(l)): strides = l.strides ks = l.pool_size for i in range(len(l._inbound_nodes)): i_shape = l.get_input_shape_at(i)[1:4] o_shape = l.get_output_shape_at(i)[1:4] flops += o_shape[0] * o_shape[1] * o_shape[2] #--------------------------------------# # 全局池化层 #--------------------------------------# elif ('AveragePooling' in str(l) and 'Global' in str(l)): for i in range(len(l._inbound_nodes)): i_shape = l.get_input_shape_at(i)[1:4] o_shape = l.get_output_shape_at(i)[1:4] flops += (i_shape[0] * i_shape[1] + 1) * i_shape[2] #--------------------------------------# # 标准化层 #--------------------------------------# elif ('BatchNormalization' in str(l)): for i in range(len(l._inbound_nodes)): i_shape = l.get_input_shape_at(i)[1:4] o_shape = l.get_output_shape_at(i)[1:4] temp_flops = 1 for i in range(len(i_shape)): temp_flops *= i_shape[i] temp_flops *= 2 flops += temp_flops #--------------------------------------# # 全连接层 #--------------------------------------# elif ('Dense' in str(l)): for i in range(len(l._inbound_nodes)): i_shape = l.get_input_shape_at(i)[1:4] o_shape = l.get_output_shape_at(i)[1:4] temp_flops = 1 for i in range(len(o_shape)): temp_flops *= o_shape[i] if (i_shape[-1] == None): temp_flops = temp_flops * o_shape[-1] else: temp_flops = temp_flops * i_shape[-1] flops += temp_flops #--------------------------------------# # 普通卷积层 #--------------------------------------# elif ('Conv2D' in 
str(l) and 'DepthwiseConv2D' not in str(l) and 'SeparableConv2D' not in str(l)): strides = l.strides ks = l.kernel_size filters = l.filters bias = 1 if l.use_bias else 0 for i in range(len(l._inbound_nodes)): i_shape = l.get_input_shape_at(i)[1:4] o_shape = l.get_output_shape_at(i)[1:4] if (filters == None): filters = i_shape[2] flops += filters * o_shape[0] * o_shape[1] * (ks[0] * ks[1] * i_shape[2] + bias) #--------------------------------------# # Depthwise convolution #--------------------------------------# elif ('Conv2D' in str(l) and 'DepthwiseConv2D' in str(l) and 'SeparableConv2D' not in str(l)): strides = l.strides ks = l.kernel_size filters = l.filters bias = 1 if l.use_bias else 0 for i in range(len(l._inbound_nodes)): i_shape = l.get_input_shape_at(i)[1:4] o_shape = l.get_output_shape_at(i)[1:4] if (filters == None): filters = i_shape[2] flops += filters * o_shape[0] * o_shape[1] * (ks[0] * ks[1] + bias) #--------------------------------------# # Depthwise separable convolution #--------------------------------------# elif ('Conv2D' in str(l) and 'DepthwiseConv2D' not in str(l) and 'SeparableConv2D' in str(l)): strides = l.strides ks = l.kernel_size filters = l.filters bias = 1 if l.use_bias else 0 for i in range(len(l._inbound_nodes)): i_shape = l.get_input_shape_at(i)[1:4] o_shape = l.get_output_shape_at(i)[1:4] if (filters == None): filters = i_shape[2] flops += i_shape[2] * o_shape[0] * o_shape[1] * (ks[0] * ks[1] + bias) + \ filters * o_shape[0] * o_shape[1] * (1 * 1 * i_shape[2] + bias) #--------------------------------------# # Nested model #--------------------------------------# elif 'Model' in str(l): flops = net_flops(l, print_result=False) t_flops += flops if (table == True): print('%25s | %16s | %16s | %16s | %16s | %6s | %5.4f' % ( name[:25], str(i_shape), str(o_shape), str(ks), str(filters), str(strides), flops)) except: pass t_flops = t_flops * 2 if print_result: show_flops = t_flops / factor print('Total GFLOPs: %.3fG' % (show_flops)) return t_flops ================================================ FILE: 
utils/utils_bbox.py ================================================ import math import numpy as np import tensorflow as tf import tensorflow.keras.backend as K class BBoxUtility(object): def __init__(self, num_classes, rpn_pre_boxes = 12000, rpn_nms = 0.7, nms_iou = 0.3, min_k = 300): #---------------------------------------------------# # 种类数量 #---------------------------------------------------# self.num_classes = num_classes #---------------------------------------------------# # 建议框非极大抑制前的框的数量 #---------------------------------------------------# self.rpn_pre_boxes = rpn_pre_boxes #---------------------------------------------------# # 非极大抑制的iou #---------------------------------------------------# self.rpn_nms = rpn_nms self.nms_iou = nms_iou #---------------------------------------------------# # 建议框非极大抑制后的框的数量 #---------------------------------------------------# self._min_k = min_k def decode_boxes(self, mbox_loc, anchors, variances): # 获得先验框的宽与高 anchor_width = anchors[:, 2] - anchors[:, 0] anchor_height = anchors[:, 3] - anchors[:, 1] # 获得先验框的中心点 anchor_center_x = 0.5 * (anchors[:, 2] + anchors[:, 0]) anchor_center_y = 0.5 * (anchors[:, 3] + anchors[:, 1]) # 真实框距离先验框中心的xy轴偏移情况 detections_center_x = mbox_loc[:, 0] * anchor_width * variances[0] detections_center_x += anchor_center_x detections_center_y = mbox_loc[:, 1] * anchor_height * variances[1] detections_center_y += anchor_center_y # 真实框的宽与高的求取 detections_width = np.exp(mbox_loc[:, 2] * variances[2]) detections_width *= anchor_width detections_height = np.exp(mbox_loc[:, 3] * variances[3]) detections_height *= anchor_height # 获取真实框的左上角与右下角 detections_xmin = detections_center_x - 0.5 * detections_width detections_ymin = detections_center_y - 0.5 * detections_height detections_xmax = detections_center_x + 0.5 * detections_width detections_ymax = detections_center_y + 0.5 * detections_height # 真实框的左上角与右下角进行堆叠 detections = np.concatenate((detections_xmin[:, None], detections_ymin[:, None], 
detections_xmax[:, None], detections_ymax[:, None]), axis=-1) # 防止超出0与1 detections = np.minimum(np.maximum(detections, 0.0), 1.0) return detections def detection_out_rpn(self, predictions, anchors, variances = [0.25, 0.25, 0.25, 0.25]): #---------------------------------------------------# # 获得种类的置信度 #---------------------------------------------------# mbox_conf = predictions[0] #---------------------------------------------------# # mbox_loc是回归预测结果 #---------------------------------------------------# mbox_loc = predictions[1] results = [] # 对每一张图片进行处理,由于在predict.py的时候,我们只输入一张图片,所以for i in range(len(mbox_loc))只进行一次 for i in range(len(mbox_loc)): #--------------------------------# # 利用回归结果对先验框进行解码 #--------------------------------# detections = self.decode_boxes(mbox_loc[i], anchors, variances) #--------------------------------# # 取出先验框内包含物体的概率 #--------------------------------# c_confs = mbox_conf[i, :, 0] c_confs_argsort = np.argsort(c_confs)[::-1][:self.rpn_pre_boxes] #------------------------------------# # 原始的预测框较多,先选一些高分框 #------------------------------------# confs_to_process = c_confs[c_confs_argsort] boxes_to_process = detections[c_confs_argsort, :] #--------------------------------# # 进行iou的非极大抑制 #--------------------------------# idx = tf.image.non_max_suppression(boxes_to_process, confs_to_process, self._min_k, iou_threshold = self.rpn_nms).numpy() #--------------------------------# # 取出在非极大抑制中效果较好的内容 #--------------------------------# good_boxes = boxes_to_process[idx] results.append(good_boxes) return np.array(results) def frcnn_correct_boxes(self, box_xy, box_wh, input_shape, image_shape): #-----------------------------------------------------------------# # 把y轴放前面是因为方便预测框和图像的宽高进行相乘 #-----------------------------------------------------------------# box_yx = box_xy[..., ::-1] box_hw = box_wh[..., ::-1] input_shape = np.array(input_shape) image_shape = np.array(image_shape) box_mins = box_yx - (box_hw / 2.) box_maxes = box_yx + (box_hw / 2.) 
boxes = np.concatenate([box_mins[..., 0:1], box_mins[..., 1:2], box_maxes[..., 0:1], box_maxes[..., 1:2]], axis=-1) boxes *= np.concatenate([image_shape, image_shape], axis=-1) return boxes def detection_out_classifier(self, predictions, rpn_results, image_shape, input_shape, confidence = 0.5, variances = [0.125, 0.125, 0.25, 0.25]): #---------------------------------------------------# # proposal_conf是种类的置信度 #---------------------------------------------------# proposal_conf = predictions[0] #---------------------------------------------------# # proposal_loc是回归预测结果 #---------------------------------------------------# proposal_loc = predictions[1] results = [] #------------------------------------------------------------------------------------------------------------------# # 对每一张图片进行处理,由于在predict.py的时候,我们只输入一张图片,所以for i in range(len(mbox_loc))只进行一次 #------------------------------------------------------------------------------------------------------------------# for i in range(len(proposal_conf)): results.append([]) #------------------------------------------# # 利用classifier预测结果 # 对建议框进行解码,并且判断物品种类 #------------------------------------------# detections = [] #---------------------------------------------------# # 计算建议框中心和宽高 #---------------------------------------------------# rpn_results[i, :, 2] = rpn_results[i, :, 2] - rpn_results[i, :, 0] rpn_results[i, :, 3] = rpn_results[i, :, 3] - rpn_results[i, :, 1] rpn_results[i, :, 0] = rpn_results[i, :, 0] + rpn_results[i, :, 2] / 2 rpn_results[i, :, 1] = rpn_results[i, :, 1] + rpn_results[i, :, 3] / 2 for j in range(proposal_conf[i].shape[0]): #---------------------------------------------------# # 计算建议框的种类 #---------------------------------------------------# score = np.max(proposal_conf[i][j, :-1]) label = np.argmax(proposal_conf[i][j, :-1]) if score < confidence: continue #---------------------------------------------------# # 对建议框中心宽高进行调整获得预测框 #---------------------------------------------------# x, y, w, h = 
rpn_results[i, j, :]
                tx, ty, tw, th = proposal_loc[i][j, 4 * label: 4 * (label + 1)]
                x1 = tx * variances[0] * w + x
                y1 = ty * variances[1] * h + y
                w1 = math.exp(tw * variances[2]) * w
                h1 = math.exp(th * variances[3]) * h

                xmin = x1 - w1 / 2.
                ymin = y1 - h1 / 2.
                xmax = x1 + w1 / 2.
                ymax = y1 + h1 / 2.

                detections.append([xmin, ymin, xmax, ymax, score, label])

            detections = np.array(detections)
            if len(detections) > 0:
                for c in range(self.num_classes):
                    c_confs_m = detections[:, -1] == c
                    if len(detections[c_confs_m]) > 0:
                        boxes_to_process = detections[:, :4][c_confs_m]
                        confs_to_process = detections[:, 4][c_confs_m]
                        #-----------------------------------------#
                        #   Perform IoU-based non-maximum suppression
                        #-----------------------------------------#
                        idx = tf.image.non_max_suppression(boxes_to_process, confs_to_process, self._min_k, iou_threshold=self.nms_iou).numpy()
                        #-----------------------------------------#
                        #   Keep the boxes that survive NMS
                        #-----------------------------------------#
                        results[-1].extend(detections[c_confs_m][idx])

            if len(results[-1]) > 0:
                results[-1] = np.array(results[-1])
                box_xy, box_wh = (results[-1][:, 0:2] + results[-1][:, 2:4]) / 2, results[-1][:, 2:4] - results[-1][:, 0:2]
                results[-1][:, :4] = self.frcnn_correct_boxes(box_xy, box_wh, input_shape, image_shape)

        return results

================================================
FILE: utils/utils_fit.py
================================================
import os

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
from tqdm import tqdm


def write_log(callback, names, logs, batch_no):
    with callback.as_default():
        for name, value in zip(names, logs):
            tf.summary.scalar(name, value, step=batch_no)
        callback.flush()

def fit_one_epoch(model_rpn, model_all, loss_history, eval_callback, callback, epoch, epoch_step, epoch_step_val, gen, gen_val, Epoch,
                  anchors, bbox_util, roi_helper, save_period, save_dir):
    total_loss      = 0
    rpn_loc_loss    = 0
    rpn_cls_loss    = 0
    roi_loc_loss    = 0
    roi_cls_loss    = 0
    val_loss        = 0
    with tqdm(total=epoch_step, desc=f'Epoch {epoch + 1}/{Epoch}', postfix=dict, mininterval=0.3) as pbar:
        for iteration, batch in enumerate(gen):
            if iteration >= epoch_step:
                break
            X, Y, boxes = batch[0], batch[1], batch[2]
            P_rpn       = model_rpn.predict_on_batch(X)
            results     = bbox_util.detection_out_rpn(P_rpn, anchors)

            roi_inputs  = []
            out_classes = []
            out_regrs   = []
            for i in range(len(X)):
                R = results[i]
                X2, Y1, Y2 = roi_helper.calc_iou(R, boxes[i])
                roi_inputs.append(X2)
                out_classes.append(Y1)
                out_regrs.append(Y2)

            loss_class = model_all.train_on_batch([X, np.array(roi_inputs)], [Y[0], Y[1], np.array(out_classes), np.array(out_regrs)])

            write_log(callback, ['total_loss', 'rpn_cls_loss', 'rpn_reg_loss', 'detection_cls_loss', 'detection_reg_loss'], loss_class, iteration)

            rpn_cls_loss += loss_class[1]
            rpn_loc_loss += loss_class[2]
            roi_cls_loss += loss_class[3]
            roi_loc_loss += loss_class[4]
            total_loss = rpn_loc_loss + rpn_cls_loss + roi_loc_loss + roi_cls_loss

            pbar.set_postfix(**{'total'   : total_loss / (iteration + 1),
                                'rpn_cls' : rpn_cls_loss / (iteration + 1),
                                'rpn_loc' : rpn_loc_loss / (iteration + 1),
                                'roi_cls' : roi_cls_loss / (iteration + 1),
                                'roi_loc' : roi_loc_loss / (iteration + 1),
                                'lr'      : K.get_value(model_rpn.optimizer.lr)})
            pbar.update(1)

    print('Start Validation')
    with tqdm(total=epoch_step_val, desc=f'Epoch {epoch + 1}/{Epoch}', postfix=dict, mininterval=0.3) as pbar:
        for iteration, batch in enumerate(gen_val):
            if iteration >= epoch_step_val:
                break
            X, Y, boxes = batch[0], batch[1], batch[2]
            P_rpn       = model_rpn.predict_on_batch(X)
            results     = bbox_util.detection_out_rpn(P_rpn, anchors)

            roi_inputs  = []
            out_classes = []
            out_regrs   = []
            for i in range(len(X)):
                R = results[i]
                X2, Y1, Y2 = roi_helper.calc_iou(R, boxes[i])
                roi_inputs.append(X2)
                out_classes.append(Y1)
                out_regrs.append(Y2)

            loss_class = model_all.test_on_batch([X, np.array(roi_inputs)], [Y[0], Y[1], np.array(out_classes), np.array(out_regrs)])
            val_loss += loss_class[0]
            pbar.set_postfix(**{'total' : val_loss / (iteration + 1)})
            pbar.update(1)

    logs = {'loss': total_loss / epoch_step, 'val_loss': val_loss / epoch_step_val}
    loss_history.on_epoch_end([], logs)
    eval_callback.on_epoch_end(epoch, logs)
    print('Epoch:' + str(epoch + 1) + '/' + str(Epoch))
    print('Total Loss: %.3f || Val Loss: %.3f ' % (total_loss / epoch_step, val_loss / epoch_step_val))

    #-----------------------------------------------#
    #   Save weights
    #-----------------------------------------------#
    if (epoch + 1) % save_period == 0 or epoch + 1 == Epoch:
        model_all.save_weights(os.path.join(save_dir, 'ep%03d-loss%.3f-val_loss%.3f.h5' % (epoch + 1, total_loss / epoch_step, val_loss / epoch_step_val)))

    if len(loss_history.val_loss) <= 1 or (val_loss / epoch_step_val) <= min(loss_history.val_loss):
        print('Save best model to best_epoch_weights.h5')
        model_all.save_weights(os.path.join(save_dir, "best_epoch_weights.h5"))

    model_all.save_weights(os.path.join(save_dir, "last_epoch_weights.h5"))

================================================
FILE: utils/utils_map.py
================================================
import glob
import json
import math
import operator
import os
import shutil
import sys

try:
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval
except:
    pass
import cv2
import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot as plt
import numpy as np

'''
 0,0 ------> x (width)
  |
  |  (Left,Top)
  |      *_________
  |      |         |
  |      |         |
  y      |_________|
(height)           *
             (Right,Bottom)
'''

def log_average_miss_rate(precision, fp_cumsum, num_images):
    """
        log-average miss rate:
            Calculated by averaging miss rates at 9 evenly spaced FPPI points
            between 10e-2 and 10e0, in log-space.

        output:
                lamr | log-average miss rate
                mr   | miss rate
                fppi | false positives per image

        references:
            [1] Dollar, Piotr, et al. "Pedestrian Detection: An Evaluation of the
                State of the Art." Pattern Analysis and Machine Intelligence, IEEE
                Transactions on 34.4 (2012): 743 - 761.
""" if precision.size == 0: lamr = 0 mr = 1 fppi = 0 return lamr, mr, fppi fppi = fp_cumsum / float(num_images) mr = (1 - precision) fppi_tmp = np.insert(fppi, 0, -1.0) mr_tmp = np.insert(mr, 0, 1.0) ref = np.logspace(-2.0, 0.0, num = 9) for i, ref_i in enumerate(ref): j = np.where(fppi_tmp <= ref_i)[-1][-1] ref[i] = mr_tmp[j] lamr = math.exp(np.mean(np.log(np.maximum(1e-10, ref)))) return lamr, mr, fppi """ throw error and exit """ def error(msg): print(msg) sys.exit(0) """ check if the number is a float between 0.0 and 1.0 """ def is_float_between_0_and_1(value): try: val = float(value) if val > 0.0 and val < 1.0: return True else: return False except ValueError: return False """ Calculate the AP given the recall and precision array 1st) We compute a version of the measured precision/recall curve with precision monotonically decreasing 2nd) We compute the AP as the area under this curve by numerical integration. """ def voc_ap(rec, prec): """ --- Official matlab code VOC2012--- mrec=[0 ; rec ; 1]; mpre=[0 ; prec ; 0]; for i=numel(mpre)-1:-1:1 mpre(i)=max(mpre(i),mpre(i+1)); end i=find(mrec(2:end)~=mrec(1:end-1))+1; ap=sum((mrec(i)-mrec(i-1)).*mpre(i)); """ rec.insert(0, 0.0) # insert 0.0 at begining of list rec.append(1.0) # insert 1.0 at end of list mrec = rec[:] prec.insert(0, 0.0) # insert 0.0 at begining of list prec.append(0.0) # insert 0.0 at end of list mpre = prec[:] """ This part makes the precision monotonically decreasing (goes from the end to the beginning) matlab: for i=numel(mpre)-1:-1:1 mpre(i)=max(mpre(i),mpre(i+1)); """ for i in range(len(mpre)-2, -1, -1): mpre[i] = max(mpre[i], mpre[i+1]) """ This part creates a list of indexes where the recall changes matlab: i=find(mrec(2:end)~=mrec(1:end-1))+1; """ i_list = [] for i in range(1, len(mrec)): if mrec[i] != mrec[i-1]: i_list.append(i) # if it was matlab would be i + 1 """ The Average Precision (AP) is the area under the curve (numerical integration) matlab: ap=sum((mrec(i)-mrec(i-1)).*mpre(i)); 
""" ap = 0.0 for i in i_list: ap += ((mrec[i]-mrec[i-1])*mpre[i]) return ap, mrec, mpre """ Convert the lines of a file to a list """ def file_lines_to_list(path): # open txt file lines to a list with open(path) as f: content = f.readlines() # remove whitespace characters like `\n` at the end of each line content = [x.strip() for x in content] return content """ Draws text in image """ def draw_text_in_image(img, text, pos, color, line_width): font = cv2.FONT_HERSHEY_PLAIN fontScale = 1 lineType = 1 bottomLeftCornerOfText = pos cv2.putText(img, text, bottomLeftCornerOfText, font, fontScale, color, lineType) text_width, _ = cv2.getTextSize(text, font, fontScale, lineType)[0] return img, (line_width + text_width) """ Plot - adjust axes """ def adjust_axes(r, t, fig, axes): # get text width for re-scaling bb = t.get_window_extent(renderer=r) text_width_inches = bb.width / fig.dpi # get axis width in inches current_fig_width = fig.get_figwidth() new_fig_width = current_fig_width + text_width_inches propotion = new_fig_width / current_fig_width # get axis limit x_lim = axes.get_xlim() axes.set_xlim([x_lim[0], x_lim[1]*propotion]) """ Draw plot using Matplotlib """ def draw_plot_func(dictionary, n_classes, window_title, plot_title, x_label, output_path, to_show, plot_color, true_p_bar): # sort the dictionary by decreasing value, into a list of tuples sorted_dic_by_value = sorted(dictionary.items(), key=operator.itemgetter(1)) # unpacking the list of tuples into two lists sorted_keys, sorted_values = zip(*sorted_dic_by_value) # if true_p_bar != "": """ Special case to draw in: - green -> TP: True Positives (object detected and matches ground-truth) - red -> FP: False Positives (object detected but does not match ground-truth) - orange -> FN: False Negatives (object not detected but present in the ground-truth) """ fp_sorted = [] tp_sorted = [] for key in sorted_keys: fp_sorted.append(dictionary[key] - true_p_bar[key]) tp_sorted.append(true_p_bar[key]) 
plt.barh(range(n_classes), fp_sorted, align='center', color='crimson', label='False Positive') plt.barh(range(n_classes), tp_sorted, align='center', color='forestgreen', label='True Positive', left=fp_sorted) # add legend plt.legend(loc='lower right') """ Write number on side of bar """ fig = plt.gcf() # gcf - get current figure axes = plt.gca() r = fig.canvas.get_renderer() for i, val in enumerate(sorted_values): fp_val = fp_sorted[i] tp_val = tp_sorted[i] fp_str_val = " " + str(fp_val) tp_str_val = fp_str_val + " " + str(tp_val) # trick to paint multicolor with offset: # first paint everything and then repaint the first number t = plt.text(val, i, tp_str_val, color='forestgreen', va='center', fontweight='bold') plt.text(val, i, fp_str_val, color='crimson', va='center', fontweight='bold') if i == (len(sorted_values)-1): # largest bar adjust_axes(r, t, fig, axes) else: plt.barh(range(n_classes), sorted_values, color=plot_color) """ Write number on side of bar """ fig = plt.gcf() # gcf - get current figure axes = plt.gca() r = fig.canvas.get_renderer() for i, val in enumerate(sorted_values): str_val = " " + str(val) # add a space before if val < 1.0: str_val = " {0:.2f}".format(val) t = plt.text(val, i, str_val, color=plot_color, va='center', fontweight='bold') # re-set axes to show number inside the figure if i == (len(sorted_values)-1): # largest bar adjust_axes(r, t, fig, axes) # set window title fig.canvas.set_window_title(window_title) # write classes in y axis tick_font_size = 12 plt.yticks(range(n_classes), sorted_keys, fontsize=tick_font_size) """ Re-scale height accordingly """ init_height = fig.get_figheight() # comput the matrix height in points and inches dpi = fig.dpi height_pt = n_classes * (tick_font_size * 1.4) # 1.4 (some spacing) height_in = height_pt / dpi # compute the required figure height top_margin = 0.15 # in percentage of the figure height bottom_margin = 0.05 # in percentage of the figure height figure_height = height_in / (1 - top_margin 
- bottom_margin) # set new height if figure_height > init_height: fig.set_figheight(figure_height) # set plot title plt.title(plot_title, fontsize=14) # set axis titles # plt.xlabel('classes') plt.xlabel(x_label, fontsize='large') # adjust size of window fig.tight_layout() # save the plot fig.savefig(output_path) # show image if to_show: plt.show() # close the plot plt.close() def get_map(MINOVERLAP, draw_plot, score_threhold=0.5, path = './map_out'): GT_PATH = os.path.join(path, 'ground-truth') DR_PATH = os.path.join(path, 'detection-results') IMG_PATH = os.path.join(path, 'images-optional') TEMP_FILES_PATH = os.path.join(path, '.temp_files') RESULTS_FILES_PATH = os.path.join(path, 'results') show_animation = True if os.path.exists(IMG_PATH): for dirpath, dirnames, files in os.walk(IMG_PATH): if not files: show_animation = False else: show_animation = False if not os.path.exists(TEMP_FILES_PATH): os.makedirs(TEMP_FILES_PATH) if os.path.exists(RESULTS_FILES_PATH): shutil.rmtree(RESULTS_FILES_PATH) else: os.makedirs(RESULTS_FILES_PATH) if draw_plot: try: matplotlib.use('TkAgg') except: pass os.makedirs(os.path.join(RESULTS_FILES_PATH, "AP")) os.makedirs(os.path.join(RESULTS_FILES_PATH, "F1")) os.makedirs(os.path.join(RESULTS_FILES_PATH, "Recall")) os.makedirs(os.path.join(RESULTS_FILES_PATH, "Precision")) if show_animation: os.makedirs(os.path.join(RESULTS_FILES_PATH, "images", "detections_one_by_one")) ground_truth_files_list = glob.glob(GT_PATH + '/*.txt') if len(ground_truth_files_list) == 0: error("Error: No ground-truth files found!") ground_truth_files_list.sort() gt_counter_per_class = {} counter_images_per_class = {} for txt_file in ground_truth_files_list: file_id = txt_file.split(".txt", 1)[0] file_id = os.path.basename(os.path.normpath(file_id)) temp_path = os.path.join(DR_PATH, (file_id + ".txt")) if not os.path.exists(temp_path): error_msg = "Error. 
File not found: {}\n".format(temp_path) error(error_msg) lines_list = file_lines_to_list(txt_file) bounding_boxes = [] is_difficult = False already_seen_classes = [] for line in lines_list: try: if "difficult" in line: class_name, left, top, right, bottom, _difficult = line.split() is_difficult = True else: class_name, left, top, right, bottom = line.split() except: if "difficult" in line: line_split = line.split() _difficult = line_split[-1] bottom = line_split[-2] right = line_split[-3] top = line_split[-4] left = line_split[-5] class_name = "" for name in line_split[:-5]: class_name += name + " " class_name = class_name[:-1] is_difficult = True else: line_split = line.split() bottom = line_split[-1] right = line_split[-2] top = line_split[-3] left = line_split[-4] class_name = "" for name in line_split[:-4]: class_name += name + " " class_name = class_name[:-1] bbox = left + " " + top + " " + right + " " + bottom if is_difficult: bounding_boxes.append({"class_name":class_name, "bbox":bbox, "used":False, "difficult":True}) is_difficult = False else: bounding_boxes.append({"class_name":class_name, "bbox":bbox, "used":False}) if class_name in gt_counter_per_class: gt_counter_per_class[class_name] += 1 else: gt_counter_per_class[class_name] = 1 if class_name not in already_seen_classes: if class_name in counter_images_per_class: counter_images_per_class[class_name] += 1 else: counter_images_per_class[class_name] = 1 already_seen_classes.append(class_name) with open(TEMP_FILES_PATH + "/" + file_id + "_ground_truth.json", 'w') as outfile: json.dump(bounding_boxes, outfile) gt_classes = list(gt_counter_per_class.keys()) gt_classes = sorted(gt_classes) n_classes = len(gt_classes) dr_files_list = glob.glob(DR_PATH + '/*.txt') dr_files_list.sort() for class_index, class_name in enumerate(gt_classes): bounding_boxes = [] for txt_file in dr_files_list: file_id = txt_file.split(".txt",1)[0] file_id = os.path.basename(os.path.normpath(file_id)) temp_path = 
os.path.join(GT_PATH, (file_id + ".txt")) if class_index == 0: if not os.path.exists(temp_path): error_msg = "Error. File not found: {}\n".format(temp_path) error(error_msg) lines = file_lines_to_list(txt_file) for line in lines: try: tmp_class_name, confidence, left, top, right, bottom = line.split() except: line_split = line.split() bottom = line_split[-1] right = line_split[-2] top = line_split[-3] left = line_split[-4] confidence = line_split[-5] tmp_class_name = "" for name in line_split[:-5]: tmp_class_name += name + " " tmp_class_name = tmp_class_name[:-1] if tmp_class_name == class_name: bbox = left + " " + top + " " + right + " " +bottom bounding_boxes.append({"confidence":confidence, "file_id":file_id, "bbox":bbox}) bounding_boxes.sort(key=lambda x:float(x['confidence']), reverse=True) with open(TEMP_FILES_PATH + "/" + class_name + "_dr.json", 'w') as outfile: json.dump(bounding_boxes, outfile) sum_AP = 0.0 ap_dictionary = {} lamr_dictionary = {} with open(RESULTS_FILES_PATH + "/results.txt", 'w') as results_file: results_file.write("# AP and precision/recall per class\n") count_true_positives = {} for class_index, class_name in enumerate(gt_classes): count_true_positives[class_name] = 0 dr_file = TEMP_FILES_PATH + "/" + class_name + "_dr.json" dr_data = json.load(open(dr_file)) nd = len(dr_data) tp = [0] * nd fp = [0] * nd score = [0] * nd score_threhold_idx = 0 for idx, detection in enumerate(dr_data): file_id = detection["file_id"] score[idx] = float(detection["confidence"]) if score[idx] >= score_threhold: score_threhold_idx = idx if show_animation: ground_truth_img = glob.glob1(IMG_PATH, file_id + ".*") if len(ground_truth_img) == 0: error("Error. Image not found with id: " + file_id) elif len(ground_truth_img) > 1: error("Error. 
Multiple image with id: " + file_id) else: img = cv2.imread(IMG_PATH + "/" + ground_truth_img[0]) img_cumulative_path = RESULTS_FILES_PATH + "/images/" + ground_truth_img[0] if os.path.isfile(img_cumulative_path): img_cumulative = cv2.imread(img_cumulative_path) else: img_cumulative = img.copy() bottom_border = 60 BLACK = [0, 0, 0] img = cv2.copyMakeBorder(img, 0, bottom_border, 0, 0, cv2.BORDER_CONSTANT, value=BLACK) gt_file = TEMP_FILES_PATH + "/" + file_id + "_ground_truth.json" ground_truth_data = json.load(open(gt_file)) ovmax = -1 gt_match = -1 bb = [float(x) for x in detection["bbox"].split()] for obj in ground_truth_data: if obj["class_name"] == class_name: bbgt = [ float(x) for x in obj["bbox"].split() ] bi = [max(bb[0],bbgt[0]), max(bb[1],bbgt[1]), min(bb[2],bbgt[2]), min(bb[3],bbgt[3])] iw = bi[2] - bi[0] + 1 ih = bi[3] - bi[1] + 1 if iw > 0 and ih > 0: ua = (bb[2] - bb[0] + 1) * (bb[3] - bb[1] + 1) + (bbgt[2] - bbgt[0] + 1) * (bbgt[3] - bbgt[1] + 1) - iw * ih ov = iw * ih / ua if ov > ovmax: ovmax = ov gt_match = obj if show_animation: status = "NO MATCH FOUND!" min_overlap = MINOVERLAP if ovmax >= min_overlap: if "difficult" not in gt_match: if not bool(gt_match["used"]): tp[idx] = 1 gt_match["used"] = True count_true_positives[class_name] += 1 with open(gt_file, 'w') as f: f.write(json.dumps(ground_truth_data)) if show_animation: status = "MATCH!" else: fp[idx] = 1 if show_animation: status = "REPEATED MATCH!" 
else: fp[idx] = 1 if ovmax > 0: status = "INSUFFICIENT OVERLAP" """ Draw image to show animation """ if show_animation: height, widht = img.shape[:2] white = (255,255,255) light_blue = (255,200,100) green = (0,255,0) light_red = (30,30,255) margin = 10 # 1nd line v_pos = int(height - margin - (bottom_border / 2.0)) text = "Image: " + ground_truth_img[0] + " " img, line_width = draw_text_in_image(img, text, (margin, v_pos), white, 0) text = "Class [" + str(class_index) + "/" + str(n_classes) + "]: " + class_name + " " img, line_width = draw_text_in_image(img, text, (margin + line_width, v_pos), light_blue, line_width) if ovmax != -1: color = light_red if status == "INSUFFICIENT OVERLAP": text = "IoU: {0:.2f}% ".format(ovmax*100) + "< {0:.2f}% ".format(min_overlap*100) else: text = "IoU: {0:.2f}% ".format(ovmax*100) + ">= {0:.2f}% ".format(min_overlap*100) color = green img, _ = draw_text_in_image(img, text, (margin + line_width, v_pos), color, line_width) # 2nd line v_pos += int(bottom_border / 2.0) rank_pos = str(idx+1) text = "Detection #rank: " + rank_pos + " confidence: {0:.2f}% ".format(float(detection["confidence"])*100) img, line_width = draw_text_in_image(img, text, (margin, v_pos), white, 0) color = light_red if status == "MATCH!": color = green text = "Result: " + status + " " img, line_width = draw_text_in_image(img, text, (margin + line_width, v_pos), color, line_width) font = cv2.FONT_HERSHEY_SIMPLEX if ovmax > 0: bbgt = [ int(round(float(x))) for x in gt_match["bbox"].split() ] cv2.rectangle(img,(bbgt[0],bbgt[1]),(bbgt[2],bbgt[3]),light_blue,2) cv2.rectangle(img_cumulative,(bbgt[0],bbgt[1]),(bbgt[2],bbgt[3]),light_blue,2) cv2.putText(img_cumulative, class_name, (bbgt[0],bbgt[1] - 5), font, 0.6, light_blue, 1, cv2.LINE_AA) bb = [int(i) for i in bb] cv2.rectangle(img,(bb[0],bb[1]),(bb[2],bb[3]),color,2) cv2.rectangle(img_cumulative,(bb[0],bb[1]),(bb[2],bb[3]),color,2) cv2.putText(img_cumulative, class_name, (bb[0],bb[1] - 5), font, 0.6, color, 1, 
cv2.LINE_AA) cv2.imshow("Animation", img) cv2.waitKey(20) output_img_path = RESULTS_FILES_PATH + "/images/detections_one_by_one/" + class_name + "_detection" + str(idx) + ".jpg" cv2.imwrite(output_img_path, img) cv2.imwrite(img_cumulative_path, img_cumulative) cumsum = 0 for idx, val in enumerate(fp): fp[idx] += cumsum cumsum += val cumsum = 0 for idx, val in enumerate(tp): tp[idx] += cumsum cumsum += val rec = tp[:] for idx, val in enumerate(tp): rec[idx] = float(tp[idx]) / np.maximum(gt_counter_per_class[class_name], 1) prec = tp[:] for idx, val in enumerate(tp): prec[idx] = float(tp[idx]) / np.maximum((fp[idx] + tp[idx]), 1) ap, mrec, mprec = voc_ap(rec[:], prec[:]) F1 = np.array(rec)*np.array(prec)*2 / np.where((np.array(prec)+np.array(rec))==0, 1, (np.array(prec)+np.array(rec))) sum_AP += ap text = "{0:.2f}%".format(ap*100) + " = " + class_name + " AP " #class_name + " AP = {0:.2f}%".format(ap*100) if len(prec)>0: F1_text = "{0:.2f}".format(F1[score_threhold_idx]) + " = " + class_name + " F1 " Recall_text = "{0:.2f}%".format(rec[score_threhold_idx]*100) + " = " + class_name + " Recall " Precision_text = "{0:.2f}%".format(prec[score_threhold_idx]*100) + " = " + class_name + " Precision " else: F1_text = "0.00" + " = " + class_name + " F1 " Recall_text = "0.00%" + " = " + class_name + " Recall " Precision_text = "0.00%" + " = " + class_name + " Precision " rounded_prec = [ '%.2f' % elem for elem in prec ] rounded_rec = [ '%.2f' % elem for elem in rec ] results_file.write(text + "\n Precision: " + str(rounded_prec) + "\n Recall :" + str(rounded_rec) + "\n\n") if len(prec)>0: print(text + "\t||\tscore_threhold=" + str(score_threhold) + " : " + "F1=" + "{0:.2f}".format(F1[score_threhold_idx])\ + " ; Recall=" + "{0:.2f}%".format(rec[score_threhold_idx]*100) + " ; Precision=" + "{0:.2f}%".format(prec[score_threhold_idx]*100)) else: print(text + "\t||\tscore_threhold=" + str(score_threhold) + " : " + "F1=0.00% ; Recall=0.00% ; Precision=0.00%") 
ap_dictionary[class_name] = ap n_images = counter_images_per_class[class_name] lamr, mr, fppi = log_average_miss_rate(np.array(rec), np.array(fp), n_images) lamr_dictionary[class_name] = lamr if draw_plot: plt.plot(rec, prec, '-o') area_under_curve_x = mrec[:-1] + [mrec[-2]] + [mrec[-1]] area_under_curve_y = mprec[:-1] + [0.0] + [mprec[-1]] plt.fill_between(area_under_curve_x, 0, area_under_curve_y, alpha=0.2, edgecolor='r') fig = plt.gcf() fig.canvas.set_window_title('AP ' + class_name) plt.title('class: ' + text) plt.xlabel('Recall') plt.ylabel('Precision') axes = plt.gca() axes.set_xlim([0.0,1.0]) axes.set_ylim([0.0,1.05]) fig.savefig(RESULTS_FILES_PATH + "/AP/" + class_name + ".png") plt.cla() plt.plot(score, F1, "-", color='orangered') plt.title('class: ' + F1_text + "\nscore_threhold=" + str(score_threhold)) plt.xlabel('Score_Threhold') plt.ylabel('F1') axes = plt.gca() axes.set_xlim([0.0,1.0]) axes.set_ylim([0.0,1.05]) fig.savefig(RESULTS_FILES_PATH + "/F1/" + class_name + ".png") plt.cla() plt.plot(score, rec, "-H", color='gold') plt.title('class: ' + Recall_text + "\nscore_threhold=" + str(score_threhold)) plt.xlabel('Score_Threhold') plt.ylabel('Recall') axes = plt.gca() axes.set_xlim([0.0,1.0]) axes.set_ylim([0.0,1.05]) fig.savefig(RESULTS_FILES_PATH + "/Recall/" + class_name + ".png") plt.cla() plt.plot(score, prec, "-s", color='palevioletred') plt.title('class: ' + Precision_text + "\nscore_threhold=" + str(score_threhold)) plt.xlabel('Score_Threhold') plt.ylabel('Precision') axes = plt.gca() axes.set_xlim([0.0,1.0]) axes.set_ylim([0.0,1.05]) fig.savefig(RESULTS_FILES_PATH + "/Precision/" + class_name + ".png") plt.cla() if show_animation: cv2.destroyAllWindows() if n_classes == 0: print("未检测到任何种类,请检查标签信息与get_map.py中的classes_path是否修改。") return 0 results_file.write("\n# mAP of all classes\n") mAP = sum_AP / n_classes text = "mAP = {0:.2f}%".format(mAP*100) results_file.write(text + "\n") print(text) shutil.rmtree(TEMP_FILES_PATH) """ Count total of 
detection-results """ det_counter_per_class = {} for txt_file in dr_files_list: lines_list = file_lines_to_list(txt_file) for line in lines_list: class_name = line.split()[0] if class_name in det_counter_per_class: det_counter_per_class[class_name] += 1 else: det_counter_per_class[class_name] = 1 dr_classes = list(det_counter_per_class.keys()) """ Write number of ground-truth objects per class to results.txt """ with open(RESULTS_FILES_PATH + "/results.txt", 'a') as results_file: results_file.write("\n# Number of ground-truth objects per class\n") for class_name in sorted(gt_counter_per_class): results_file.write(class_name + ": " + str(gt_counter_per_class[class_name]) + "\n") """ Finish counting true positives """ for class_name in dr_classes: if class_name not in gt_classes: count_true_positives[class_name] = 0 """ Write number of detected objects per class to results.txt """ with open(RESULTS_FILES_PATH + "/results.txt", 'a') as results_file: results_file.write("\n# Number of detected objects per class\n") for class_name in sorted(dr_classes): n_det = det_counter_per_class[class_name] text = class_name + ": " + str(n_det) text += " (tp:" + str(count_true_positives[class_name]) + "" text += ", fp:" + str(n_det - count_true_positives[class_name]) + ")\n" results_file.write(text) """ Plot the total number of occurences of each class in the ground-truth """ if draw_plot: window_title = "ground-truth-info" plot_title = "ground-truth\n" plot_title += "(" + str(len(ground_truth_files_list)) + " files and " + str(n_classes) + " classes)" x_label = "Number of objects per class" output_path = RESULTS_FILES_PATH + "/ground-truth-info.png" to_show = False plot_color = 'forestgreen' draw_plot_func( gt_counter_per_class, n_classes, window_title, plot_title, x_label, output_path, to_show, plot_color, '', ) # """ # Plot the total number of occurences of each class in the "detection-results" folder # """ # if draw_plot: # window_title = "detection-results-info" # # Plot title # 
plot_title = "detection-results\n" # plot_title += "(" + str(len(dr_files_list)) + " files and " # count_non_zero_values_in_dictionary = sum(int(x) > 0 for x in list(det_counter_per_class.values())) # plot_title += str(count_non_zero_values_in_dictionary) + " detected classes)" # # end Plot title # x_label = "Number of objects per class" # output_path = RESULTS_FILES_PATH + "/detection-results-info.png" # to_show = False # plot_color = 'forestgreen' # true_p_bar = count_true_positives # draw_plot_func( # det_counter_per_class, # len(det_counter_per_class), # window_title, # plot_title, # x_label, # output_path, # to_show, # plot_color, # true_p_bar # ) """ Draw log-average miss rate plot (Show lamr of all classes in decreasing order) """ if draw_plot: window_title = "lamr" plot_title = "log-average miss rate" x_label = "log-average miss rate" output_path = RESULTS_FILES_PATH + "/lamr.png" to_show = False plot_color = 'royalblue' draw_plot_func( lamr_dictionary, n_classes, window_title, plot_title, x_label, output_path, to_show, plot_color, "" ) """ Draw mAP plot (Show AP's of all classes in decreasing order) """ if draw_plot: window_title = "mAP" plot_title = "mAP = {0:.2f}%".format(mAP*100) x_label = "Average Precision" output_path = RESULTS_FILES_PATH + "/mAP.png" to_show = True plot_color = 'royalblue' draw_plot_func( ap_dictionary, n_classes, window_title, plot_title, x_label, output_path, to_show, plot_color, "" ) return mAP def preprocess_gt(gt_path, class_names): image_ids = os.listdir(gt_path) results = {} images = [] bboxes = [] for i, image_id in enumerate(image_ids): lines_list = file_lines_to_list(os.path.join(gt_path, image_id)) boxes_per_image = [] image = {} image_id = os.path.splitext(image_id)[0] image['file_name'] = image_id + '.jpg' image['width'] = 1 image['height'] = 1 #-----------------------------------------------------------------# # 感谢 多学学英语吧 的提醒 # 解决了'Results do not correspond to current coco set'问题 
#-----------------------------------------------------------------# image['id'] = str(image_id) for line in lines_list: difficult = 0 if "difficult" in line: line_split = line.split() left, top, right, bottom, _difficult = line_split[-5:] class_name = "" for name in line_split[:-5]: class_name += name + " " class_name = class_name[:-1] difficult = 1 else: line_split = line.split() left, top, right, bottom = line_split[-4:] class_name = "" for name in line_split[:-4]: class_name += name + " " class_name = class_name[:-1] left, top, right, bottom = float(left), float(top), float(right), float(bottom) if class_name not in class_names: continue cls_id = class_names.index(class_name) + 1 bbox = [left, top, right - left, bottom - top, difficult, str(image_id), cls_id, (right - left) * (bottom - top) - 10.0] boxes_per_image.append(bbox) images.append(image) bboxes.extend(boxes_per_image) results['images'] = images categories = [] for i, cls in enumerate(class_names): category = {} category['supercategory'] = cls category['name'] = cls category['id'] = i + 1 categories.append(category) results['categories'] = categories annotations = [] for i, box in enumerate(bboxes): annotation = {} annotation['area'] = box[-1] annotation['category_id'] = box[-2] annotation['image_id'] = box[-3] annotation['iscrowd'] = box[-4] annotation['bbox'] = box[:4] annotation['id'] = i annotations.append(annotation) results['annotations'] = annotations return results def preprocess_dr(dr_path, class_names): image_ids = os.listdir(dr_path) results = [] for image_id in image_ids: lines_list = file_lines_to_list(os.path.join(dr_path, image_id)) image_id = os.path.splitext(image_id)[0] for line in lines_list: line_split = line.split() confidence, left, top, right, bottom = line_split[-5:] class_name = "" for name in line_split[:-5]: class_name += name + " " class_name = class_name[:-1] left, top, right, bottom = float(left), float(top), float(right), float(bottom) result = {} result["image_id"] = 
str(image_id)
            if class_name not in class_names:
                continue
            result["category_id"] = class_names.index(class_name) + 1
            result["bbox"] = [left, top, right - left, bottom - top]
            result["score"] = float(confidence)
            results.append(result)
    return results

def get_coco_map(class_names, path):
    GT_PATH = os.path.join(path, 'ground-truth')
    DR_PATH = os.path.join(path, 'detection-results')
    COCO_PATH = os.path.join(path, 'coco_eval')

    if not os.path.exists(COCO_PATH):
        os.makedirs(COCO_PATH)

    GT_JSON_PATH = os.path.join(COCO_PATH, 'instances_gt.json')
    DR_JSON_PATH = os.path.join(COCO_PATH, 'instances_dr.json')

    with open(GT_JSON_PATH, "w") as f:
        results_gt = preprocess_gt(GT_PATH, class_names)
        json.dump(results_gt, f, indent=4)

    with open(DR_JSON_PATH, "w") as f:
        results_dr = preprocess_dr(DR_PATH, class_names)
        json.dump(results_dr, f, indent=4)
        if len(results_dr) == 0:
            print("No objects were detected.")
            return [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

    cocoGt = COCO(GT_JSON_PATH)
    cocoDt = cocoGt.loadRes(DR_JSON_PATH)
    cocoEval = COCOeval(cocoGt, cocoDt, 'bbox')
    cocoEval.evaluate()
    cocoEval.accumulate()
    cocoEval.summarize()
    return cocoEval.stats

================================================
FILE: vision_for_anchor.py
================================================
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np

#---------------------------------------------------#
#   Generate the base anchor boxes
#---------------------------------------------------#
def generate_anchors(sizes = [128, 256, 512], ratios = [[1, 1], [1, 2], [2, 1]]):
    num_anchors = len(sizes) * len(ratios)

    anchors = np.zeros((num_anchors, 4))
    anchors[:, 2:] = np.tile(sizes, (2, len(ratios))).T

    for i in range(len(ratios)):
        anchors[3 * i: 3 * i + 3, 2] = anchors[3 * i: 3 * i + 3, 2] * ratios[i][0]
        anchors[3 * i: 3 * i + 3, 3] = anchors[3 * i: 3 * i + 3, 3] * ratios[i][1]

    anchors[:, 0::2] -= np.tile(anchors[:, 2] * 0.5, (2, 1)).T
    anchors[:, 1::2] -= np.tile(anchors[:, 3] * 0.5, (2, 1)).T
    return anchors
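As a quick illustration of what `generate_anchors` returns (this snippet is an added sketch, not part of the repository; it inlines the same tiling logic so it runs standalone), the nine base anchors are `(x1, y1, x2, y2)` boxes centred on the origin, one per size/ratio pair:

```python
import numpy as np

def generate_anchors(sizes=[128, 256, 512], ratios=[[1, 1], [1, 2], [2, 1]]):
    # Same construction as in vision_for_anchor.py: widths/heights from the
    # size list, scaled per ratio, then re-centred on (0, 0).
    num_anchors = len(sizes) * len(ratios)
    anchors = np.zeros((num_anchors, 4))
    anchors[:, 2:] = np.tile(sizes, (2, len(ratios))).T
    for i in range(len(ratios)):
        anchors[3 * i: 3 * i + 3, 2] *= ratios[i][0]
        anchors[3 * i: 3 * i + 3, 3] *= ratios[i][1]
    anchors[:, 0::2] -= np.tile(anchors[:, 2] * 0.5, (2, 1)).T
    anchors[:, 1::2] -= np.tile(anchors[:, 3] * 0.5, (2, 1)).T
    return anchors

anchors = generate_anchors()
print(anchors.shape)   # (9, 4)
print(anchors[0])      # [-64. -64.  64.  64.] — the 128x128 square anchor
```

At stride 16 these nine boxes are then translated to every feature-map cell centre by `shift`, giving `k * 9` proposals for a `k`-cell feature map.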
#---------------------------------------------------#
#   Expand the base anchors over the feature map
#   to obtain all proposal boxes
#---------------------------------------------------#
def shift(shape, anchors, stride=16):
    #---------------------------------------------------#
    #   [0,1,2,3,4,5……37]
    #   [0.5,1.5,2.5……37.5]
    #   [8,24,……]
    #---------------------------------------------------#
    shift_x = (np.arange(0, shape[0], dtype=keras.backend.floatx()) + 0.5) * stride
    shift_y = (np.arange(0, shape[1], dtype=keras.backend.floatx()) + 0.5) * stride

    shift_x, shift_y = np.meshgrid(shift_x, shift_y)
    shift_x = np.reshape(shift_x, [-1])
    shift_y = np.reshape(shift_y, [-1])
    # print(shift_x, shift_y)
    shifts = np.stack([
        shift_x,
        shift_y,
        shift_x,
        shift_y
    ], axis=0)
    shifts = np.transpose(shifts)

    number_of_anchors = np.shape(anchors)[0]
    k = np.shape(shifts)[0]

    shifted_anchors = np.reshape(anchors, [1, number_of_anchors, 4]) + np.array(np.reshape(shifts, [k, 1, 4]), keras.backend.floatx())
    shifted_anchors = np.reshape(shifted_anchors, [k * number_of_anchors, 4])

    #---------------------------------------------------#
    #   Draw the image
    #---------------------------------------------------#
    fig = plt.figure()
    ax = fig.add_subplot(111)
    plt.ylim(-300, 900)
    plt.xlim(-300, 900)
    # plt.ylim(0, 600)
    # plt.xlim(0, 600)
    plt.scatter(shift_x, shift_y)
    box_widths  = shifted_anchors[:, 2] - shifted_anchors[:, 0]
    box_heights = shifted_anchors[:, 3] - shifted_anchors[:, 1]
    initial = 0
    for i in [initial + 0, initial + 1, initial + 2, initial + 3, initial + 4, initial + 5, initial + 6, initial + 7, initial + 8]:
        rect = plt.Rectangle([shifted_anchors[i, 0], shifted_anchors[i, 1]], box_widths[i], box_heights[i], color="r", fill=False)
        ax.add_patch(rect)
    plt.show()
    return shifted_anchors

#---------------------------------------------------#
#   Get the base-layer size for resnet50
#---------------------------------------------------#
def get_resnet50_output_length(height, width):
    def get_output_length(input_length):
        filter_sizes = [7, 3, 1, 1]
        padding = [3, 1, 0, 0]
        stride = 2
        for i in range(4):
            input_length = (input_length + 2 * padding[i] - filter_sizes[i]) // stride + 1
        return input_length
    return get_output_length(height), get_output_length(width)

#---------------------------------------------------#
#   Get the base-layer size for vgg
#---------------------------------------------------#
def get_vgg_output_length(height, width):
    def get_output_length(input_length):
        filter_sizes = [2, 2, 2, 2]
        padding = [0, 0, 0, 0]
        stride = 2
        for i in range(4):
            input_length = (input_length + 2 * padding[i] - filter_sizes[i]) // stride + 1
        return input_length
    return get_output_length(height), get_output_length(width)

def get_anchors(input_shape, backbone, sizes = [128, 256, 512], ratios = [[1, 1], [1, 2], [2, 1]], stride=16):
    if backbone == 'vgg':
        feature_shape = get_vgg_output_length(input_shape[0], input_shape[1])
        print(feature_shape)
    else:
        feature_shape = get_resnet50_output_length(input_shape[0], input_shape[1])
    anchors = generate_anchors(sizes = sizes, ratios = ratios)
    anchors = shift(feature_shape, anchors, stride = stride)
    anchors[:, ::2]  /= input_shape[1]
    anchors[:, 1::2] /= input_shape[0]
    anchors = np.clip(anchors, 0, 1)
    return anchors

if __name__ == "__main__":
    get_anchors([600, 600], 'resnet50')

================================================
FILE: voc_annotation.py
================================================
import os
import random
import xml.etree.ElementTree as ET

import numpy as np

from utils.utils import get_classes

#--------------------------------------------------------------------------------------------------------------------------------#
#   annotation_mode selects what this script computes when it is run
#   annotation_mode = 0: the whole labelling pipeline — the txt files in VOCdevkit/VOC2007/ImageSets plus the training files 2007_train.txt and 2007_val.txt
#   annotation_mode = 1: only the txt files in VOCdevkit/VOC2007/ImageSets
#   annotation_mode = 2: only the training files 2007_train.txt and 2007_val.txt
#--------------------------------------------------------------------------------------------------------------------------------#
annotation_mode     = 0
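For reference, each line that voc_annotation.py appends to the generated 2007_train.txt is the absolute image path followed by space-separated boxes of the form `xmin,ymin,xmax,ymax,class_id` (one comma-joined tuple per object, as written by `convert_annotation` below). A minimal parsing sketch, where the sample line and its values are hypothetical:

```python
# Hypothetical sample line from a generated 2007_train.txt
line = "/data/VOCdevkit/VOC2007/JPEGImages/000001.jpg 48,240,195,371,11 8,12,352,498,14"

parts = line.split()
image_path = parts[0]
# Each remaining token is one object: xmin, ymin, xmax, ymax, class_id
boxes = [list(map(int, box.split(','))) for box in parts[1:]]
print(image_path)   # /data/VOCdevkit/VOC2007/JPEGImages/000001.jpg
print(boxes)        # [[48, 240, 195, 371, 11], [8, 12, 352, 498, 14]]
```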
#-------------------------------------------------------------------#
#   Must be changed: provides the class names written into
#   2007_train.txt / 2007_val.txt; must match the classes_path
#   used for training and prediction.
#   If the generated 2007_train.txt contains no box information,
#   the classes were not set correctly.
#   Only used when annotation_mode is 0 or 2.
#-------------------------------------------------------------------#
classes_path        = 'model_data/voc_classes.txt'
#--------------------------------------------------------------------------------------------------------------------------------#
#   trainval_percent sets the (train + val) : test split; by default (train + val) : test = 9 : 1.
#   train_percent sets the train : val split inside (train + val); by default train : val = 9 : 1.
#   Only used when annotation_mode is 0 or 1.
#--------------------------------------------------------------------------------------------------------------------------------#
trainval_percent    = 0.9
train_percent       = 0.9
#-------------------------------------------------------#
#   Path to the folder containing the VOC dataset.
#   Defaults to VOCdevkit in the repository root.
#-------------------------------------------------------#
VOCdevkit_path  = 'VOCdevkit'

VOCdevkit_sets  = [('2007', 'train'), ('2007', 'val')]
classes, _      = get_classes(classes_path)

#-------------------------------------------------------#
#   Count images and objects per class
#-------------------------------------------------------#
photo_nums  = np.zeros(len(VOCdevkit_sets))
nums        = np.zeros(len(classes))
def convert_annotation(year, image_id, list_file):
    in_file = open(os.path.join(VOCdevkit_path, 'VOC%s/Annotations/%s.xml'%(year, image_id)), encoding='utf-8')
    tree    = ET.parse(in_file)
    root    = tree.getroot()

    for obj in root.iter('object'):
        difficult = 0
        if obj.find('difficult') != None:
            difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (int(float(xmlbox.find('xmin').text)), int(float(xmlbox.find('ymin').text)), int(float(xmlbox.find('xmax').text)), int(float(xmlbox.find('ymax').text)))
        list_file.write(" " + ",".join([str(a) for a in b]) + ',' + str(cls_id))

        nums[classes.index(cls)] = nums[classes.index(cls)] + 1

if __name__ == "__main__":
    random.seed(0)
    if " " in os.path.abspath(VOCdevkit_path):
        raise ValueError("The dataset folder path and image names must not contain spaces; they break training. Please rename them.")

    if annotation_mode == 0 or annotation_mode == 1:
        print("Generate txt in ImageSets.")
        xmlfilepath     = os.path.join(VOCdevkit_path, 'VOC2007/Annotations')
        saveBasePath    = os.path.join(VOCdevkit_path, 'VOC2007/ImageSets/Main')
        temp_xml        = os.listdir(xmlfilepath)
        total_xml       = []
        for xml in temp_xml:
            if xml.endswith(".xml"):
                total_xml.append(xml)

        num         = len(total_xml)
        indices     = range(num)
        tv          = int(num * trainval_percent)
        tr          = int(tv * train_percent)
        trainval    = random.sample(indices, tv)
        train       = random.sample(trainval, tr)

        print("train and val size", tv)
        print("train size", tr)
        ftrainval   = open(os.path.join(saveBasePath, 'trainval.txt'), 'w')
        ftest       = open(os.path.join(saveBasePath, 'test.txt'), 'w')
        ftrain      = open(os.path.join(saveBasePath, 'train.txt'), 'w')
        fval        = open(os.path.join(saveBasePath, 'val.txt'), 'w')

        for i in indices:
            name = total_xml[i][:-4] + '\n'
            if i in trainval:
                ftrainval.write(name)
                if i in train:
                    ftrain.write(name)
                else:
                    fval.write(name)
            else:
                ftest.write(name)

        ftrainval.close()
        ftrain.close()
        fval.close()
        ftest.close()
        print("Generate txt in ImageSets done.")

    if annotation_mode == 0 or annotation_mode == 2:
        print("Generate 2007_train.txt and 2007_val.txt for train.")
        type_index = 0
        for year, image_set in VOCdevkit_sets:
            image_ids = open(os.path.join(VOCdevkit_path, 'VOC%s/ImageSets/Main/%s.txt'%(year, image_set)), encoding='utf-8').read().strip().split()
            list_file = open('%s_%s.txt'%(year, image_set), 'w', encoding='utf-8')
            for image_id in image_ids:
                list_file.write('%s/VOC%s/JPEGImages/%s.jpg'%(os.path.abspath(VOCdevkit_path), year, image_id))
                convert_annotation(year, image_id, list_file)
                list_file.write('\n')
            photo_nums[type_index] = len(image_ids)
            type_index += 1
            list_file.close()
        print("Generate 2007_train.txt and 2007_val.txt for train done.")

        def printTable(List1, List2):
            for i in range(len(List1[0])):
                print("|", end=' ')
                for j in range(len(List1)):
                    print(List1[j][i].rjust(int(List2[j])), end=' ')
                    print("|", end=' ')
                print()

        str_nums    = [str(int(x)) for x in nums]
        tableData   = [classes, str_nums]
        colWidths   = [0] * len(tableData)
        for i in range(len(tableData)):
            for j in range(len(tableData[i])):
                if len(tableData[i][j]) > colWidths[i]:
                    colWidths[i] = len(tableData[i][j])
        printTable(tableData, colWidths)

        if photo_nums[0] <= 500:
            print("The training set has fewer than 500 images, which is quite small; set a larger number of epochs to get enough gradient-descent steps.")

        if np.sum(nums) == 0:
            print("No objects were found in the dataset. Point classes_path at your own dataset's classes and make sure the label names are correct, otherwise training will have no effect!")
            print("No objects were found in the dataset. Point classes_path at your own dataset's classes and make sure the label names are correct, otherwise training will have no effect!")
            print("No objects were found in the dataset. Point classes_path at your own dataset's classes and make sure the label names are correct, otherwise training will have no effect!")
            print("(Repeated three times because it matters.)")


================================================
FILE: 常见问题汇总.md
================================================
The blog post collecting these FAQs is at [https://blog.csdn.net/weixin_44791964/article/details/107517428](https://blog.csdn.net/weixin_44791964/article/details/107517428).

# FAQ

## 1. Download questions
### a. Downloading the code
**Q: Up, could you send me a copy of the code? Where do I download it?
A: The GitHub address is in the video description; copy it and you can download from there.**

**Q: Up, why does the archive I downloaded say it is corrupted?
A: Download it again from GitHub.**

**Q: Up, why is the code I downloaded different from what you show in the videos and blog posts?
A: I update the code frequently; the actual code in the repository is authoritative.**

### b. Downloading weights
**Q: Up, why is there no .pth or .h5 file under model_data in the code I downloaded?
A: I usually upload the weights to GitHub and Baidu Netdisk; you can find the links in the README on GitHub.**

### c. Downloading datasets
**Q: Up, where do I download the XXXX dataset?
A: Dataset download links are usually in the README; most of them are there. If one is missing, please contact me to add it by opening a GitHub issue.**

## 2. Environment setup problems
### a. Environments currently used by these repositories
**The pytorch code targets pytorch 1.2; the corresponding blog post is** [https://blog.csdn.net/weixin_44791964/article/details/106037141](https://blog.csdn.net/weixin_44791964/article/details/106037141).

**The keras code targets tensorflow 1.13.2 with keras 2.1.5; the corresponding blog post is** [https://blog.csdn.net/weixin_44791964/article/details/104702142](https://blog.csdn.net/weixin_44791964/article/details/104702142).

**The tf2 code targets tensorflow 2.2.0 and does not need keras installed; the corresponding blog post is** [https://blog.csdn.net/weixin_44791964/article/details/109161493](https://blog.csdn.net/weixin_44791964/article/details/109161493).

**Q: Will your code work with tensorflow/pytorch version such-and-such?
A: It is best to follow my recommended configuration; there are setup tutorials too! I have not tried other versions. Problems may appear, but they are usually minor and need only small code changes.**

### b. Setup for 30-series GPUs
Because of framework updates, 30-series GPUs cannot use the setup above. The 30-series configurations I have tested to work are:

**pytorch code: pytorch 1.7.0, cuda 11.0, cudnn 8.0.5.**

**keras code: cuda 11 cannot be configured under win10; under ubuntu you can look it up, using tensorflow 1.15.4 with keras 2.1.5 or 2.3.1 (a few function interfaces differ, so minor code adjustments may be needed).**

**tf2 code: tensorflow 2.4.0, cuda 11.0, cudnn 8.0.5.**

### c. GPU utilization and environment usage
**Q: Why isn't my GPU used for training even though I installed tensorflow-gpu?
A: Confirm tensorflow-gpu is properly installed (check the version with pip list), then use Task Manager or the nvidia command-line tools to see whether the GPU is in use; in Task Manager, look at GPU memory usage.**

**Q: Up, I don't seem to be training on the GPU. How do I check whether the GPU is being used?
A: Usually with NVIDIA's command-line tools. If you use Task Manager, check GPU memory usage in the Performance tab, or watch the Cuda graph rather than Copy.**

![image](https://img-blog.csdnimg.cn/20201013234241524.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NDc5MTk2NA==,size_16,color_FFFFFF,t_70#pic_center)

**Q: Up, why does it still not work after I followed your environment setup?
A: Please send me your GPU, CUDA, CUDNN, TF and PYTORCH versions via Bilibili private message.**

**Q: I get the following error:**
```python
Traceback (most recent call last):
  File "C:\Users\focus\Anaconda3\ana\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\focus\Anaconda3\ana\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
    pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Users\focus\Anaconda3\ana\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Users\focus\Anaconda3\ana\envs\tensorflow-gpu\lib\imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Users\focus\Anaconda3\ana\envs\tensorflow-gpu\lib\imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: 找不到指定的模块。
```
**A: If you haven't rebooted, reboot first; otherwise reinstall following the steps. If it still fails, send me your GPU, CUDA, CUDNN, TF and PYTORCH versions in a private message.**

### d. "No module" problems
**Q: Why do I get "no module named utils.utils" (or no module named nets.yolo, no module named nets.ssd, and so on)?
A: utils is not installed with pip; it sits in the root of the repository I uploaded. This error means your working directory is wrong. Look up the difference between relative paths and the project root, and it should become clear.**

**Q: Why do I get "no module named matplotlib" (no module named PIL, no module named cv2, etc.)?
A: The library isn't installed; open a command line and install it, e.g. pip install matplotlib.**

**Q: I already installed opencv (pillow, matplotlib, etc.) with pip, so why do I still get "no module named cv2"?
A: You installed it without activating the environment; activate the corresponding conda environment before installing.**

**Q: Why "No module named 'torch'"?
A: Honestly, I would really like to know too... how is pytorch not installed? There are generally two cases: it truly isn't installed, or it was installed into a different environment than the one currently activated.**

**Q: Why "No module named 'tensorflow'"?
A: Same as above.**

### e. CUDA installation failures
CUDA generally requires Visual Studio to be installed first; the 2017 version is fine.

### f. Ubuntu
**All the code works under Ubuntu; I have tried both systems.**

### g. VSCODE error squiggles
**Q: Why does VSCODE show me a pile of errors?
A: Mine shows a pile of errors too, but it doesn't matter; it's a VSCODE issue. If you don't want to see them, install Pycharm.**

### h. Training and predicting on CPU
**For the keras and tf2 code, just install the CPU build of tensorflow to train and predict on CPU.**

**For the pytorch code, change cuda=True to cuda=False.**

### i. tqdm "pos" problem
**Q: Running the code raises 'tqdm' object has no attribute 'pos'.
A: Reinstall tqdm with a different version.**

### j. decode("utf-8") problem
**Because of h5py library updates, installation automatically pulls in h5py>=3.0.0, which causes decode("utf-8") errors! After installing tensorflow, be sure to install h5py==2.10.0!**
```
pip install h5py==2.10.0
```

### k. TypeError: __array__() takes 1 positional argument but 2 were given
This can be fixed by changing the pillow version:
```
pip install pillow==8.2.0
```

### l. Other problems
**Q: Why do I get TypeError: cat() got an unexpected keyword argument 'axis', Traceback (most recent call last), or AttributeError: 'Tensor' object has no attribute 'bool'?
A: These are version problems; use torch 1.2 or newer.**

**Many other odd problems are version problems; follow my video tutorials to install Keras and tensorflow. For instance, if you installed tensorflow 2, don't ask me why Keras-yolo won't run. It naturally won't.**

## 3. Object-detection questions (also applicable to the face-detection and classification repositories)
### a. Shape mismatch
#### 1) Shape mismatch during training
**Q: Up, why does running train.py report a shape mismatch?
A: In keras, because you train with a different number of classes than the original, the network structure changes and the final layers' shapes won't fully match.**

#### 2) Shape mismatch during prediction
**Q: Why does running predict.py report a shape mismatch?
In Pytorch it looks like this:**

![image](https://img-blog.csdnimg.cn/20200722171631901.png)

In Keras it looks like this:

![image](https://img-blog.csdnimg.cn/20200722171523380.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NDc5MTk2NA==,size_16,color_FFFFFF,t_70)

**A: There are three main causes:
1. In ssd and FasterRCNN, num_classes in train.py may not have been changed.
2. model_path was not changed.
3. classes_path was not changed.
Check carefully! Make sure your model_path and classes_path match each other, and also check the num_classes or classes_path used for training!**

### b. Out-of-memory problems
**Q: Why does the command window flash past when I run train.py, with some OOM message?
A: That is keras running out of GPU memory; reduce batch_size. SSD has the lowest memory footprint, so consider SSD:
2 GB VRAM: SSD, YOLOV4-TINY
4 GB VRAM: YOLOV3
6 GB VRAM: YOLOV4, Retinanet, M2det, Efficientdet, Faster RCNN, etc.
8 GB+ VRAM: take your pick.**

**Note that, because of BatchNorm2d, batch_size cannot be 1; it must be at least 2.**

**Q: Why do I get RuntimeError: CUDA out of memory. Tried to allocate 52.00 MiB (GPU 0; 15.90 GiB total capacity; 14.85 GiB already allocated; 51.88 MiB free; 15.07 GiB reserved in total by PyTorch)?
A: That is pytorch running out of GPU memory; same as above.**

**Q: Why does it run out of memory when my GPU memory isn't even being used?
A: It already ran out, so of course it isn't being used; the model never started training.**

### c. Training questions (freezing, LOSS, training quality, etc.)
**Q: Why freeze and then unfreeze during training?
A: It is the transfer-learning idea: the features extracted by a backbone are general-purpose, so freezing it speeds up training and prevents the weights from being damaged early on.**

During the frozen stage the backbone is frozen and the feature-extraction network does not change; memory usage is low and only the head is fine-tuned.
During the unfrozen stage the backbone is unfrozen and the feature-extraction network does change; memory usage is high and all parameters are updated.

**Q: Why doesn't my network converge? My LOSS is XXXX.
A: LOSS differs between networks; it is only a reference for whether the network is converging, not a measure of how good it is. My yolo code does not normalize the loss, so the values look high. The absolute value does not matter; what matters is whether it keeps decreasing and whether predictions work.**

**Q: Why are my results poor? Predictions show no boxes (or inaccurate boxes).
A:** Consider these points:
1. Label information: check that 2007_train.txt contains box information; if not, fix voc_annotation.py.
2. Dataset size: with fewer than 500 images, consider enlarging the dataset, and test different models to confirm the dataset itself is fine.
3. Unfreezing: if your data distribution differs greatly from ordinary scenes, unfreeze and fine-tune the backbone to strengthen feature extraction.
4. Network choice: e.g. SSD is weak on small objects because its anchors are fixed.
5. Training length: some students train only a few epochs and conclude it doesn't work; train fully with the default settings.
6. Confirm you followed the steps, e.g. whether classes in voc_annotation.py was changed.
7. LOSS differs between networks; it is only a reference for convergence, not a measure of quality. The value does not matter; what matters is whether it converges.

**Q: Why do I get a gbk codec error:**
```python
UnicodeDecodeError: 'gbk' codec can't decode byte 0xa6 in position 446: illegal multibyte sequence
```
**A: Do not use Chinese in labels and paths. If you must, handle the encoding: open files with encoding='utf-8'.**

**Q: My images are xxx*xxx resolution; can I use them?**
**A: Yes; the code resizes them (and applies data augmentation) automatically.**

**Q: How do I train on multiple GPUs?
A: Most of the pytorch code can use the gpu directly; for keras, just search online. It is not complicated to implement, but I have only one card and cannot test it in detail, so you will have to work it out yourselves.**

### d. Grayscale images
**Q: Can I train on (or predict) grayscale images?
A: Most of my repositories convert grayscale to RGB for training and prediction. If a repository cannot train on or predict grayscale images, try converting the result of Image.open to RGB inside get_random_data, and try the same at prediction time. (For reference only.)**

### e. Resuming training
**Q: I have already trained for several epochs; can I continue from there?
A: Yes. Before training, load the previously trained weights the same way you would load pretrained weights. Trained weights are saved in the logs folder; set model_path to the checkpoint you want to resume from.**

### f. Pretrained weights
**Q: What do I do about pretrained weights if I train on a different dataset?**

**A: Pretrained weights are transferable across datasets, because the features are general. They are necessary in 99% of cases; without them the weights are too random, feature extraction is weak, and training results will be poor.**

**Q: Up, I modified the network; can I still use the pretrained weights?
A: If you modified the backbone and it is not an existing off-the-shelf network, the pretrained weights basically cannot be used; either match the conv-kernel shapes in the weight file yourself, or pretrain from scratch. If you modified only the later parts, the backbone's pretrained weights are still usable: in pytorch, change the weight-loading code to compare shapes before loading; in keras, simply use by_name=True, skip_mismatch=True.**

Shape-matched weight loading can be done like this:
```python
# Speed up training by loading only shape-compatible pretrained weights
print('Loading weights into state dict...')
device          = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_dict      = model.state_dict()
pretrained_dict = torch.load(model_path, map_location=device)
a = {}
for k, v in pretrained_dict.items():
    try:
        if np.shape(model_dict[k]) == np.shape(v):
            a[k] = v
    except:
        pass
model_dict.update(a)
model.load_state_dict(model_dict)
print('Finished!')
```

**Q: How do I train without pretrained weights?
A: Just comment out the code that loads them.**

**Q: Why are my results so bad without pretrained weights?
A: Because randomly initialized weights extract poor features, so training goes badly; voc07+12 and coco+voc07+12 give different results. Pretrained weights really are very important.**

### g. Video and webcam detection
**Q: How do I run detection on a webcam?
A: Change the parameters in predict.py; there is also a video explaining the webcam-detection approach in detail.**

**Q: How do I run detection on a video?
A: Same as above.**

### h. Training from scratch
**Q: How do I train a model from scratch?
A: With limited compute and tuning experience, training from scratch is pointless. A model with randomly initialized parameters has very poor feature-extraction ability; without strong tuning skills and compute, the network will not converge properly.**
If you insist on starting from scratch, note the following:
- Do not load pretrained weights.
- Do not use frozen training; comment out the freezing code.

**Q: Why are my results so bad without pretrained weights?
A: Because randomly initialized weights extract poor features, so training goes badly; voc07+12 and coco+voc07+12 give different results. Pretrained weights really are very important.**

### i. Saving results
**Q: How do I save the detected image?
A: Object detection generally uses PIL's Image, so look up how PIL's Image is saved. See the comments in predict.py for details.**

**Q: How do I save video output?
A: See the comments in predict.py for details.**

### j. Iterating over folders
**Q: How do I run detection over every image in a folder?
A: Generally, use os.listdir to find all the images in the folder, then detect each one following the flow in predict.py; see the comments in predict.py for details.**

**Q: How do I run detection over every image in a folder and save the results?
A: For iterating, use os.listdir to find the images and follow the flow in predict.py. For saving, object detection generally uses PIL's Image, so look up how PIL's Image is saved; if a repository uses cv2, look up how cv2 saves images. See the comments in predict.py for details.**

### k. Path problems (No such file or directory)
**Q: Why do I get this error:**
```python
FileNotFoundError: 【Errno 2】 No such file or directory
……………………………………
……………………………………
```
**A: Check the folder paths and whether the corresponding files exist; also check whether the file paths inside 2007_train.txt are wrong.**
A few important points about paths:
**Folder names must never contain spaces.
Mind the difference between relative and absolute paths.
Read up on how paths work.**

**Almost all path problems are working-directory problems; study the concept of relative paths!**

### l. Comparison with the original implementations
**Q: How does your code compare with the original? Can it reach the original's results?
A: Basically yes. I have tested everything on voc data; I don't have a good GPU, so I cannot test or train on coco.**

**Q: Did you implement all of yolov4's tricks? How big is the gap to the original?
A: Not all of them. YOLOV4 uses so many improvements that it is hard to implement, or even list, them all; I implemented only the ones I found interesting and clearly effective. The SAM attention module mentioned in the paper is not used in the authors' own source either. Not every trick brings an improvement, and I cannot implement them all. As for the gap to the original, I cannot train on coco, but students who have used the code report the gap is small.**

### m. FPS (detection speed)
**Q: What FPS can this reach? Can it reach XX FPS?
A: FPS depends on the machine: high-end hardware is fast, low-end hardware is slow.**

**Q: Why do I only get around ten FPS testing yolov4 (or others) on a server?
A: Check that tensorflow-gpu or the GPU build of pytorch is installed correctly. If it is, use time.time() inside detect_image to find which part takes longest (it is not only the network; other processing such as drawing also takes time).**

**Q: Why doesn't it reach the speed XX claimed in the paper?
A: Check that tensorflow-gpu or the GPU build of pytorch is installed correctly; if so, profile detect_image with time.time() to find which part takes longest (it is not only the network; other processing such as drawing also takes time). Some papers also predict on multiple images per batch, which I have not implemented.**

### n. Predicted image not shown
**Q: Why doesn't your code display the image after prediction, only printing the detected targets in the terminal?
A: Install an image viewer on your system.**

### o. Evaluation (map, PR curves, Recall, Precision for object detection)
**Q: How do I compute map?
A: Watch the map video; it is all one workflow.**

**Q: When computing map, what is MINOVERLAP in get_map.py for? Is it iou?
A: It is iou. It judges the overlap between predicted and ground-truth boxes: a prediction counts as correct if the overlap exceeds MINOVERLAP.**

**Q: Why is self.confidence (self.score) in get_map.py set so low?
A: See the theory part of the map video: you need all detections before drawing the pr curve.**

**Q: Can you explain how to draw PR curves and the like?
A: See the mAP video; the results include PR curves.**

**Q: How do I compute the Recall and Precision metrics?
A: These two metrics are defined relative to a specific confidence threshold; they are also produced while computing map.**

### p. Training on COCO
**Q: How do I train object detection on the COCO dataset?
A: The txt files needed for coco training can be generated following qqwweee's yolo3 repository; the format is the same.**

### q. Model improvement (model modification)
**Q: Up, do you have code for the YOLO series with Focal LOSS? Does it help?
A: Many people have tried it; the gain is small (sometimes results even get worse). YOLO has its own way of balancing positive and negative samples.**

**Q: Up, I modified the network; can I still use the pretrained weights?
A: If you modified the backbone and it is not an existing off-the-shelf network, the pretrained weights basically cannot be used; either match the conv-kernel shapes in the weight file yourself, or pretrain from scratch. If you modified only the later parts, the backbone's pretrained weights are still usable: in pytorch, change the weight-loading code to compare shapes before loading; in keras, simply use by_name=True, skip_mismatch=True.**

Shape-matched weight loading can be done like this:
```python
# Speed up training by loading only shape-compatible pretrained weights
print('Loading weights into state dict...')
device          = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_dict      = model.state_dict()
pretrained_dict = torch.load(model_path, map_location=device)
a = {}
for k, v in pretrained_dict.items():
    try:
        if np.shape(model_dict[k]) == np.shape(v):
            a[k] = v
    except:
        pass
model_dict.update(a)
model.load_state_dict(model_dict)
print('Finished!')
```

**Q: Up, how do I modify the model? I want to publish a small paper!
A: Look at the differences between yolov3 and yolov4, then read the yolov4 paper; as a giant tuning experiment with many tricks, it is very instructive. My advice is to study more classic models, then extract their highlight structures and reuse them.**

### r. Deployment
I have not deployed to phones or similar devices, so I don't know much about deployment...

## 4. Semantic-segmentation questions
### a. Shape mismatch
#### 1) Shape mismatch during training
**Q: Up, why does running train.py report a shape mismatch?
A: In keras, because you train with a different number of classes than the original, the network structure changes and the final layers' shapes won't fully match.**

#### 2) Shape mismatch during prediction
**Q: Why does running predict.py report a shape mismatch?
In Pytorch it looks like this:**

![image](https://img-blog.csdnimg.cn/20200722171631901.png)

In Keras it looks like this:

![image](https://img-blog.csdnimg.cn/20200722171523380.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NDc5MTk2NA==,size_16,color_FFFFFF,t_70)

**A: There are two main causes:
1. num_classes in train.py was not changed.
2. num_classes at prediction time was not changed.
Check carefully! The num_classes used for training and for prediction both need checking!**

### b. Out-of-memory problems
**Q: Why does the command window flash past when I run train.py, with some OOM message?
A: That is keras running out of GPU memory; reduce batch_size.**

**Note that, because of BatchNorm2d, batch_size cannot be 1; it must be at least 2.**

**Q: Why do I get RuntimeError: CUDA out of memory. Tried to allocate 52.00 MiB (GPU 0; 15.90 GiB total capacity; 14.85 GiB already allocated; 51.88 MiB free; 15.07 GiB reserved in total by PyTorch)?
A: That is pytorch running out of GPU memory; same as above.**

**Q: Why does it run out of memory when my GPU memory isn't even being used?
A: It already ran out, so of course it isn't being used; the model never started training.**

### c. Training questions (freezing, LOSS, training quality, etc.)
**Q: Why freeze and then unfreeze during training?
A: It is the transfer-learning idea: the features extracted by a backbone are general-purpose, so freezing it speeds up training and prevents the weights from being damaged early on.**

**During the frozen stage the backbone is frozen and the feature-extraction network does not change; memory usage is low and only the head is fine-tuned.**

**During the unfrozen stage the backbone is unfrozen and the feature-extraction network does change; memory usage is high and all parameters are updated.**

**Q: Why doesn't my network converge? My LOSS is XXXX.
A: LOSS differs between networks; it is only a reference for whether the network is converging, not a measure of how good it is. My yolo code does not normalize the loss, so the values look high. The absolute value does not matter; what matters is whether it keeps decreasing and whether predictions work.**

**Q: Why are my results poor? Prediction finds no targets and the output is all black.
A:**

**Consider these points:
1. The dataset. This is the most important one. With fewer than 500 images, consider enlarging the dataset. Above all, check the labels: the videos explain the VOC format in detail, and it is not enough to have an input image and an output label; every pixel of the label must hold the index of its class. The most common mistake is a label with a black background and white targets, so target pixels are 255; that cannot train properly, the target pixels must be 1.
2. Unfreezing: if your data distribution differs greatly from ordinary scenes, unfreeze and fine-tune the backbone to strengthen feature extraction.
3. Network choice: try different networks.
4. Training length: some students train only a few epochs and conclude it doesn't work; train fully with the default settings.
5. Confirm you followed the steps.
6. LOSS differs between networks; it is only a reference for convergence, not a measure of quality. The value does not matter; what matters is whether it converges.**

**Q: Why are my results poor? Predictions on small targets are inaccurate.
A: For deeplab and pspnet you can change downsample_factor: at 16 the downsampling is too aggressive and results suffer; change it to 8.**

**Q: Why do I get a gbk codec error:**
```python
UnicodeDecodeError: 'gbk' codec can't decode byte 0xa6 in position 446: illegal multibyte sequence
```
**A: Do not use Chinese in labels and paths. If you must, handle the encoding: open files with encoding='utf-8'.**

**Q: My images are xxx*xxx resolution; can I use them?**
**A: Yes; the code resizes them (and applies data augmentation) automatically.**

**Q: How do I train on multiple GPUs?
A: Most of the pytorch code can use the gpu directly; for keras, just search online. It is not complicated to implement, but I have only one card and cannot test it in detail, so you will have to work it out yourselves.**

### d. Grayscale images
**Q: Can I train on (or predict) grayscale images?
A: Most of my repositories convert grayscale to RGB for training and prediction. If a repository cannot train on or predict grayscale images, try converting the result of Image.open to RGB inside get_random_data, and try the same at prediction time. (For reference only.)**

### e. Resuming training
**Q: I have already trained for several epochs; can I continue from there?
A: Yes. Before training, load the previously trained weights the same way you would load pretrained weights. Trained weights are saved in the logs folder; set model_path to the checkpoint you want to resume from.**

### f. Pretrained weights
**Q: What do I do about pretrained weights if I train on a different dataset?**

**A: Pretrained weights are transferable across datasets, because the features are general. They are necessary in 99% of cases; without them the weights are too random, feature extraction is weak, and training results will be poor.**

**Q: Up, I modified the network; can I still use the pretrained weights?
A: If you modified the backbone and it is not an existing off-the-shelf network, the pretrained weights basically cannot be used; either match the conv-kernel shapes in the weight file yourself, or pretrain from scratch. If you modified only the later parts, the backbone's pretrained weights are still usable: in pytorch, change the weight-loading code to compare shapes before loading; in keras, simply use by_name=True, skip_mismatch=True.**

Shape-matched weight loading can be done like this:
```python
# Speed up training by loading only shape-compatible pretrained weights
print('Loading weights into state dict...')
device          = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_dict      = model.state_dict()
pretrained_dict = torch.load(model_path, map_location=device)
a = {}
for k, v in pretrained_dict.items():
    try:
        if np.shape(model_dict[k]) == np.shape(v):
            a[k] = v
    except:
        pass
model_dict.update(a)
model.load_state_dict(model_dict)
print('Finished!')
```

**Q: How do I train without pretrained weights?
A: Just comment out the code that loads them.**

**Q: Why are my results so bad without pretrained weights?
A: Because randomly initialized weights extract poor features, so training goes badly. Pretrained weights really are very important.**

### g. Video and webcam detection
**Q: How do I run detection on a webcam?
A: Change the parameters in predict.py; there is also a video explaining the webcam-detection approach in detail.**

**Q: How do I run detection on a video?
A: Same as above.**

### h. Training from scratch
**Q: How do I train a model from scratch?
A: With limited compute and tuning experience, training from scratch is pointless. A model with randomly initialized parameters has very poor feature-extraction ability; without strong tuning skills and compute, the network will not converge properly.**
If you insist on starting from scratch, note the following:
- Do not load pretrained weights.
- Do not use frozen training; comment out the freezing code.

**Q: Why are my results so bad without pretrained weights?
A: Because randomly initialized weights extract poor features, so training goes badly. Pretrained weights really are very important.**

### i. Saving results
**Q: How do I save the detected image?
A: Object detection generally uses PIL's Image, so look up how PIL's Image is saved. See the comments in predict.py for details.**

**Q: How do I save video output?
A: See the comments in predict.py for details.**

### j. Iterating over folders
**Q: How do I run detection over every image in a folder?
A: Generally, use os.listdir to find all the images in the folder, then detect each one following the flow in predict.py; see the comments in predict.py for details.**

**Q: How do I run detection over every image in a folder and save the results?
A: For iterating, use os.listdir to find the images and follow the flow in predict.py. For saving, object detection generally uses PIL's Image, so look up how PIL's Image is saved; if a repository uses cv2, look up how cv2 saves images. See the comments in predict.py for details.**

### k. Path problems (No such file or directory)
**Q: Why do I get this error:**
```python
FileNotFoundError: 【Errno 2】 No such file or directory
……………………………………
……………………………………
```
**A: Check the folder paths and whether the corresponding files exist; also check whether the file paths inside 2007_train.txt are wrong.**
A few important points about paths:
**Folder names must never contain spaces.
Mind the difference between relative and absolute paths.
Read up on how paths work.**

**Almost all path problems are working-directory problems; study the concept of relative paths!**

### l. FPS (detection speed)
**Q: What FPS can this reach? Can it reach XX FPS?
A: FPS depends on the machine: high-end hardware is fast, low-end hardware is slow.**

**Q: Why doesn't it reach the speed XX claimed in the paper?
A: Check that tensorflow-gpu or the GPU build of pytorch is installed correctly; if so, profile detect_image with time.time() to find which part takes longest (it is not only the network; other processing such as drawing also takes time). Some papers also predict on multiple images per batch, which I have not implemented.**

### m. Predicted image not shown
**Q: Why doesn't your code display the image after prediction, only printing the detected targets in the terminal?
A: Install an image viewer on your system.**

### n. Evaluation (miou)
**Q: How do I compute miou?
A: See the miou-measurement part of the videos.**

**Q: How do I compute the Recall and Precision metrics?
A: The current code cannot produce them; you will need to understand the confusion matrix and compute them yourselves.**

### o. Model improvement (model modification)
**Q: Up, I modified the network; can I still use the pretrained weights?
A: If you modified the backbone and it is not an existing off-the-shelf network, the pretrained weights basically cannot be used; either match the conv-kernel shapes in the weight file yourself, or pretrain from scratch. If you modified only the later parts, the backbone's pretrained weights are still usable: in pytorch, change the weight-loading code to compare shapes before loading; in keras, simply use by_name=True, skip_mismatch=True.**

Shape-matched weight loading can be done like this:
```python
# Speed up training by loading only shape-compatible pretrained weights
print('Loading weights into state dict...')
device          = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_dict      = model.state_dict()
pretrained_dict = torch.load(model_path, map_location=device)
a = {}
for k, v in pretrained_dict.items():
    try:
        if np.shape(model_dict[k]) == np.shape(v):
            a[k] = v
    except:
        pass
model_dict.update(a)
model.load_state_dict(model_dict)
print('Finished!')
```

**Q: Up, how do I modify the model? I want to publish a small paper!
A: Read the yolov4 object-detection paper; as a giant tuning experiment with many tricks, it is very instructive. My advice is to study more classic models, then extract their highlight structures and reuse them. Common tricks such as attention mechanisms are worth trying.**

### p. Deployment
I have not deployed to phones or similar devices, so I don't know much about deployment...

## 5. Chat groups
**Q: Up, is there a QQ group or anything like that?
A: No, no; I don't have time to manage a QQ group...**

## 6. How to learn
**Q: Up, what was your learning path? I'm a beginner; how should I learn?
A: A few caveats first:
1. I am not an expert; there is a lot I don't know either, and my path may not suit everyone.
2. My lab does not do deep learning, so much of this is self-taught exploration; I can't vouch that it is all correct.
3. Personally I believe learning depends mostly on self-study.**

As for my path: I first worked through Mofan's python tutorials and got started with tensorflow, keras, and pytorch; after that I learned SSD and YOLO, then studied many classic convolutional networks, and then started reading lots of different codebases. My method is to read code line by line, understanding the whole execution flow and how the feature maps' shapes change. It took a lot of time and there is no shortcut; you just have to put in the hours.
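As a footnote to the folder-traversal questions in sections 3.j and 4.j above, here is a minimal sketch (an editor's illustration, not part of the repository; `list_images` is my name, and the commented detection loop assumes the `frcnn.detect_image` flow from predict.py):

```python
import os

def list_images(image_dir, exts=(".jpg", ".jpeg", ".png")):
    # Collect image filenames in the folder (sorted for a stable order)
    return sorted(n for n in os.listdir(image_dir) if n.lower().endswith(exts))

# Detection-and-save loop sketch (Image is PIL.Image, frcnn comes from predict.py):
# for name in list_images("img"):
#     image   = Image.open(os.path.join("img", name))
#     r_image = frcnn.detect_image(image)
#     r_image.save(os.path.join("img_out", name))
```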