Repository: EMI-Group/FaPN
Branch: main
Commit: 4d400719d3f2
Files: 39
Total size: 198.5 KB

Directory structure:
gitextract_3pcsu2_j/
├── DCNv2/
│   ├── .gitignore
│   ├── LICENSE
│   ├── README.md
│   ├── __init__.py
│   ├── dcn_v2.py
│   ├── make.sh
│   ├── setup.py
│   ├── src/
│   │   ├── cpu/
│   │   │   ├── dcn_v2_cpu.cpp
│   │   │   ├── dcn_v2_im2col_cpu.cpp
│   │   │   ├── dcn_v2_im2col_cpu.h
│   │   │   ├── dcn_v2_psroi_pooling_cpu.cpp
│   │   │   └── vision.h
│   │   ├── cuda/
│   │   │   ├── dcn_v2_cuda.cu
│   │   │   ├── dcn_v2_im2col_cuda.cu
│   │   │   ├── dcn_v2_im2col_cuda.h
│   │   │   ├── dcn_v2_psroi_pooling_cuda.cu
│   │   │   └── vision.h
│   │   ├── dcn_v2.h
│   │   └── vision.cpp
│   └── test/
│       ├── test.py
│       ├── testcpu.py
│       └── testcuda.py
├── LICENSE
├── README.md
├── configs/
│   ├── Base-RCNN-FAN.yaml
│   ├── COCO-Detection/
│   │   ├── faster_rcnn_R_101_FAN_3x.yaml
│   │   └── faster_rcnn_R_50_FAN_1x.yaml
│   ├── COCO-InstanceSegmentation/
│   │   ├── mask_rcnn_R_101_FAN_3x.yaml
│   │   └── mask_rcnn_R_50_FAN_1x.yaml
│   └── COCO-PanopticSegmentation/
│       ├── Base-Panoptic-FAN.yaml
│       ├── panoptic_fan_R_101_3x.yaml
│       └── panoptic_fan_R_50_1x.yaml
├── detectron2/
│   └── modeling/
│       └── backbone/
│           ├── __init__.py
│           └── fan.py
└── projects/
    └── PointRend/
        └── configs/
            ├── InstanceSegmentation/
            │   ├── Base-PointRend-RCNN-FAN.yaml
            │   └── pointrend_rcnn_R_50_FAN_1x_coco.yaml
            └── SemanticSegmentation/
                ├── Base-PointRend-Semantic-FAN.yaml
                ├── pointrend_semantic_R_101_FAN_1x_cityscapes.yaml
                └── pointrend_semantic_R_50_FAN_1x_cityscapes.yaml

================================================
FILE CONTENTS
================================================

================================================
FILE: DCNv2/.gitignore
================================================
.vscode
.idea
*.so
*.o
*pyc
_ext
build
DCNv2.egg-info
dist

================================================
FILE: DCNv2/LICENSE
================================================
BSD 3-Clause License

Copyright (c) 2019, Charles Shang
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
================================================
FILE: DCNv2/README.md
================================================
## Deformable Convolutional Networks V2 with PyTorch 1.7

### Build

```bash
./make.sh         # build
python test.py    # run examples and gradient check
```

### An Example

- deformable conv

```python
import torch
from dcn_v2 import DCN

input = torch.randn(2, 64, 128, 128).cuda()
# wrap everything (offset and mask) in DCN
dcn = DCN(64, 64, kernel_size=(3, 3), stride=1, padding=1, deformable_groups=2).cuda()
output = dcn(input)
print(output.shape)
```

- deformable roi pooling

```python
import torch
from dcn_v2 import DCNPooling

input = torch.randn(2, 32, 64, 64).cuda()
batch_inds = torch.randint(2, (20, 1)).cuda().float()
x = torch.randint(256, (20, 1)).cuda().float()
y = torch.randint(256, (20, 1)).cuda().float()
w = torch.randint(64, (20, 1)).cuda().float()
h = torch.randint(64, (20, 1)).cuda().float()
rois = torch.cat((batch_inds, x, y, x + w, y + h), dim=1)

# modulated deformable pooling (v2)
# wrap everything (offset and mask) in DCNPooling
dpooling = DCNPooling(spatial_scale=1.0 / 4,
                      pooled_size=7,
                      output_dim=32,
                      no_trans=False,
                      group_size=1,
                      trans_std=0.1).cuda()
dout = dpooling(input, rois)
```

### Note

The master branch now targets PyTorch 1.0 (the new ATen API); you can switch back to PyTorch 0.4 with:

```bash
git checkout pytorch_0.4
```

### Known Issues:

- [x] Gradient check w.r.t. offset (solved)
- [ ] Backward is not reentrant (minor)

This is an adaptation of the official [Deformable-ConvNets](https://github.com/msracver/Deformable-ConvNets/tree/master/DCNv2_op).

I have run the gradient check many times with DOUBLE type. Every tensor **except offset** passes. However, when I set the offset to 0.5, it passes. I'm still wondering what causes this problem. Could it be due to some non-differentiable points?

Update: all gradient checks pass with double precision.

Another issue is that it raises `RuntimeError: Backward is not reentrant`. However, the error is very small (`<1e-7` for float, `<1e-15` for double), so it may not be a serious problem (?)

Please post an issue or PR if you have any comments.
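The examples above use `DCN`, which predicts its offsets and masks internally. The lower-level `DCNv2` module defined in `dcn_v2.py` below takes the offset and mask as explicit inputs, and its forward pass asserts the channel counts shown here. A minimal sketch (shapes and values are illustrative, not from the repo; zero offsets plus an all-ones mask make the layer behave like a plain 3x3 convolution with the same weights, which is a handy sanity check):

```python
import torch
from dcn_v2 import DCNv2

kh, kw, groups = 3, 3, 2
dcn = DCNv2(64, 64, kernel_size=(kh, kw), stride=1, padding=1,
            deformable_groups=groups).cuda()

x = torch.randn(2, 64, 128, 128).cuda()
# offset: two coordinate channels per sampling point -> 2 * groups * kh * kw channels
offset = torch.zeros(2, 2 * groups * kh * kw, 128, 128).cuda()
# mask: one modulation scalar per sampling point -> groups * kh * kw channels
mask = torch.ones(2, groups * kh * kw, 128, 128).cuda()

out = dcn(x, offset, mask)
print(out.shape)  # torch.Size([2, 64, 128, 128])
```

Note that `DCNv2` applies the mask as given (no sigmoid); `DCN` is the variant that squashes its predicted mask through a sigmoid before calling the op.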
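The double-precision gradient checks mentioned under "Known Issues" can be reproduced along these lines. This is a sketch only, assuming the extension is built and a GPU is available; the tiny shapes and the tolerances are arbitrary choices to keep the numerical check fast:

```python
import torch
from torch.autograd import gradcheck
from dcn_v2 import dcn_v2_conv

kh, kw, groups = 3, 3, 1
kwargs = dict(dtype=torch.double, device="cuda", requires_grad=True)

input = torch.randn(2, 4, 4, 4, **kwargs)
offset = torch.randn(2, 2 * groups * kh * kw, 4, 4, **kwargs)  # 18 channels
mask = torch.rand(2, groups * kh * kw, 4, 4, **kwargs)         # 9 channels
weight = torch.randn(4, 4, kh, kw, **kwargs)
bias = torch.randn(4, **kwargs)

# positional signature of the autograd function:
# (input, offset, mask, weight, bias, stride, padding, dilation, deformable_groups)
print(gradcheck(dcn_v2_conv, (input, offset, mask, weight, bias, 1, 1, 1, groups),
                eps=1e-6, atol=1e-4))
```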
================================================
FILE: DCNv2/__init__.py
================================================


================================================
FILE: DCNv2/dcn_v2.py
================================================
#!/usr/bin/env python
from __future__ import absolute_import, division, print_function

import math

import torch
from torch import nn
from torch.autograd import Function
from torch.autograd.function import once_differentiable
from torch.nn.modules.utils import _pair

import PIL
from PIL import Image
import os
import numpy as np

import _ext as _backend


class _DCNv2(Function):
    @staticmethod
    def forward(ctx, input, offset, mask, weight, bias, stride, padding, dilation, deformable_groups):
        ctx.stride = _pair(stride)
        ctx.padding = _pair(padding)
        ctx.dilation = _pair(dilation)
        ctx.kernel_size = _pair(weight.shape[2:4])
        ctx.deformable_groups = deformable_groups
        output = _backend.dcn_v2_forward(
            input,
            weight,
            bias,
            offset,
            mask,
            ctx.kernel_size[0],
            ctx.kernel_size[1],
            ctx.stride[0],
            ctx.stride[1],
            ctx.padding[0],
            ctx.padding[1],
            ctx.dilation[0],
            ctx.dilation[1],
            ctx.deformable_groups,
        )
        ctx.save_for_backward(input, offset, mask, weight, bias)
        return output

    @staticmethod
    @once_differentiable
    def backward(ctx, grad_output):
        input, offset, mask, weight, bias = ctx.saved_tensors
        grad_input, grad_offset, grad_mask, grad_weight, grad_bias = _backend.dcn_v2_backward(
            input,
            weight,
            bias,
            offset,
            mask,
            grad_output,
            ctx.kernel_size[0],
            ctx.kernel_size[1],
            ctx.stride[0],
            ctx.stride[1],
            ctx.padding[0],
            ctx.padding[1],
            ctx.dilation[0],
            ctx.dilation[1],
            ctx.deformable_groups,
        )
        return (
            grad_input,
            grad_offset,
            grad_mask,
            grad_weight,
            grad_bias,
            None,
            None,
            None,
            None,
        )


dcn_v2_conv = _DCNv2.apply


class DCNv2(nn.Module):
    def __init__(
        self,
        in_channels,
        out_channels,
        kernel_size,
        stride,
        padding,
        dilation=1,
        deformable_groups=1,
    ):
        super(DCNv2, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = _pair(kernel_size)
        self.stride = _pair(stride)
        self.padding = _pair(padding)
        self.dilation = _pair(dilation)
        self.deformable_groups = deformable_groups

        self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels, *self.kernel_size))
        self.bias = nn.Parameter(torch.Tensor(out_channels))
        self.reset_parameters()

    def reset_parameters(self):
        n = self.in_channels
        for k in self.kernel_size:
            n *= k
        stdv = 1.0 / math.sqrt(n)
        self.weight.data.uniform_(-stdv, stdv)
        self.bias.data.zero_()

    def forward(self, input, offset, mask):
        assert 2 * self.deformable_groups * self.kernel_size[0] * self.kernel_size[1] == offset.shape[1]
        assert self.deformable_groups * self.kernel_size[0] * self.kernel_size[1] == mask.shape[1]
        return dcn_v2_conv(
            input,
            offset,
            mask,
            self.weight,
            self.bias,
            self.stride,
            self.padding,
            self.dilation,
            self.deformable_groups,
        )


class DCN(DCNv2):
    def __init__(self, in_channels, out_channels, kernel_size, stride, padding,
                 dilation=1, deformable_groups=1, extra_offset_mask=False):
        super(DCN, self).__init__(in_channels, out_channels, kernel_size, stride,
                                  padding, dilation, deformable_groups)
        self.extra_offset_mask = extra_offset_mask

        channels_ = self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1]
        self.conv_offset_mask = nn.Conv2d(self.in_channels,
                                          channels_,
                                          kernel_size=self.kernel_size,
                                          stride=self.stride,
                                          padding=self.padding,
                                          bias=True)
        self.init_offset()

    def init_offset(self):
        self.conv_offset_mask.weight.data.zero_()
        self.conv_offset_mask.bias.data.zero_()

    def forward(self, input, main_path=None):
        if self.extra_offset_mask:
            out = self.conv_offset_mask(input[1])
            input = input[0]
        else:
            out = self.conv_offset_mask(input)
        # each chunk has self.deformable_groups * self.kernel_size[0] * self.kernel_size[1] channels
        o1, o2, mask = torch.chunk(out, 3, dim=1)
        offset = torch.cat((o1, o2), dim=1)  # x, y [0-8]: the first group,
        # o1, o2 = o1.data.cpu().numpy(), o2.data.cpu().numpy()
        # o = o1[0]  # first image in the batch
        # print(o1[0])
        # return
        # k = 0
        # img_h, img_w = 128, 256
        # img_r, img_c, inter = 3, 3, 5
        # img_size = ((img_w + inter) * img_c, img_r * (img_h + inter))
        # to_image = Image.new('RGB', img_size, 'white')
        # print()
        # for j in range(o.shape[0]):  # for group
        #     if j % 9 == 0 and j != 0:  # different kernel
        #         dst_file = os.path.join(main_path, 'offset_{}to{}.png'.format(j-9, j))
        #         # save first
        #         to_image.save(dst_file)
        #         # new image
        #         img_size = ((img_w + inter) * img_c, img_r * (img_h + inter))
        #         to_image = Image.new('RGB', img_size, 'white')
        #     feature_img = np.asarray(feature_img * 255, dtype=np.uint8)  # for x, y in range
        #     feature_img = Image.fromarray(cv2.cvtColor(feature_img, cv2.COLOR_BGR2RGB))
        #     index_r, index_c = j // img_c, j % img_c
        #     to_image.paste(feature_img, (index_c * (img_w + inter), index_r * (img_h + inter)))
        mask = torch.sigmoid(mask)
        return dcn_v2_conv(
            input,
            offset,
            mask,
            self.weight,
            self.bias,
            self.stride,
            self.padding,
            self.dilation,
            self.deformable_groups,
        )


class _DCNv2Pooling(Function):
    @staticmethod
    def forward(
        ctx,
        input,
        rois,
        offset,
        spatial_scale,
        pooled_size,
        output_dim,
        no_trans,
        group_size=1,
        part_size=None,
        sample_per_part=4,
        trans_std=0.0,
    ):
        ctx.spatial_scale = spatial_scale
        ctx.no_trans = int(no_trans)
        ctx.output_dim = output_dim
        ctx.group_size = group_size
        ctx.pooled_size = pooled_size
        ctx.part_size = pooled_size if part_size is None else part_size
        ctx.sample_per_part = sample_per_part
        ctx.trans_std = trans_std

        output, output_count = _backend.dcn_v2_psroi_pooling_forward(
            input,
            rois,
            offset,
            ctx.no_trans,
            ctx.spatial_scale,
            ctx.output_dim,
            ctx.group_size,
            ctx.pooled_size,
            ctx.part_size,
            ctx.sample_per_part,
            ctx.trans_std,
        )
        ctx.save_for_backward(input, rois, offset, output_count)
        return output

    @staticmethod
    @once_differentiable
    def backward(ctx, grad_output):
        input, rois, offset, output_count = ctx.saved_tensors
        grad_input, grad_offset = _backend.dcn_v2_psroi_pooling_backward(
            grad_output,
            input,
            rois,
            offset,
            output_count,
            ctx.no_trans,
            ctx.spatial_scale,
            ctx.output_dim,
            ctx.group_size,
            ctx.pooled_size,
            ctx.part_size,
            ctx.sample_per_part,
            ctx.trans_std,
        )
        return grad_input, None, grad_offset, None, None, None, None, None, None, None, None


dcn_v2_pooling = _DCNv2Pooling.apply


class DCNv2Pooling(nn.Module):
    def __init__(
        self,
        spatial_scale,
        pooled_size,
        output_dim,
        no_trans,
        group_size=1,
        part_size=None,
        sample_per_part=4,
        trans_std=0.0,
    ):
        super(DCNv2Pooling, self).__init__()
        self.spatial_scale = spatial_scale
        self.pooled_size = pooled_size
        self.output_dim = output_dim
        self.no_trans = no_trans
        self.group_size = group_size
        self.part_size = pooled_size if part_size is None else part_size
        self.sample_per_part = sample_per_part
        self.trans_std = trans_std

    def forward(self, input, rois, offset):
        assert input.shape[1] == self.output_dim
        if self.no_trans:
            offset = input.new()
        return dcn_v2_pooling(
            input,
            rois,
            offset,
            self.spatial_scale,
            self.pooled_size,
            self.output_dim,
            self.no_trans,
            self.group_size,
            self.part_size,
            self.sample_per_part,
            self.trans_std,
        )


class DCNPooling(DCNv2Pooling):
    def __init__(
        self,
        spatial_scale,
        pooled_size,
        output_dim,
        no_trans,
        group_size=1,
        part_size=None,
        sample_per_part=4,
        trans_std=0.0,
        deform_fc_dim=1024,
    ):
        super(DCNPooling, self).__init__(
            spatial_scale,
            pooled_size,
            output_dim,
            no_trans,
            group_size,
            part_size,
            sample_per_part,
            trans_std,
        )
        self.deform_fc_dim = deform_fc_dim

        if not no_trans:
            self.offset_mask_fc = nn.Sequential(
                nn.Linear(self.pooled_size * self.pooled_size * self.output_dim, self.deform_fc_dim),
                nn.ReLU(inplace=True),
                nn.Linear(self.deform_fc_dim, self.deform_fc_dim),
                nn.ReLU(inplace=True),
                nn.Linear(self.deform_fc_dim, self.pooled_size * self.pooled_size * 3),
            )
            self.offset_mask_fc[4].weight.data.zero_()
            self.offset_mask_fc[4].bias.data.zero_()

    def forward(self, input, rois):
        offset = input.new()

        if not self.no_trans:
            # do roi_align first
            n = rois.shape[0]
            roi = dcn_v2_pooling(
                input,
                rois,
                offset,
                self.spatial_scale,
                self.pooled_size,
                self.output_dim,
                True,  # no trans
                self.group_size,
                self.part_size,
                self.sample_per_part,
                self.trans_std,
            )

            # build mask and offset
            offset_mask = self.offset_mask_fc(roi.view(n, -1))
            offset_mask = offset_mask.view(n, 3, self.pooled_size, self.pooled_size)
            o1, o2, mask = torch.chunk(offset_mask, 3, dim=1)
            offset = torch.cat((o1, o2), dim=1)
            mask = torch.sigmoid(mask)

            # do pooling with offset and mask
            return (
                dcn_v2_pooling(
                    input,
                    rois,
                    offset,
                    self.spatial_scale,
                    self.pooled_size,
                    self.output_dim,
                    self.no_trans,
                    self.group_size,
                    self.part_size,
                    self.sample_per_part,
                    self.trans_std,
                )
                * mask
            )
        # only roi_align
        return dcn_v2_pooling(
            input,
            rois,
            offset,
            self.spatial_scale,
            self.pooled_size,
            self.output_dim,
            self.no_trans,
            self.group_size,
            self.part_size,
            self.sample_per_part,
            self.trans_std,
        )

================================================
FILE: DCNv2/make.sh
================================================
#!/usr/bin/env bash
python setup.py build develop

================================================
FILE: DCNv2/setup.py
================================================
#!/usr/bin/env python

import glob
import os

import torch
from setuptools import find_packages, setup
from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension

requirements = ["torch", "torchvision"]


def get_extensions():
    this_dir = os.path.dirname(os.path.abspath(__file__))
    extensions_dir = os.path.join(this_dir, "src")

    main_file = glob.glob(os.path.join(extensions_dir, "*.cpp"))
    source_cpu = glob.glob(os.path.join(extensions_dir, "cpu", "*.cpp"))
    source_cuda = glob.glob(os.path.join(extensions_dir, "cuda", "*.cu"))

    os.environ["CC"] = "g++"
    sources = main_file + source_cpu
    extension = CppExtension
    extra_compile_args = {"cxx": []}
    define_macros = []

    if torch.cuda.is_available() and CUDA_HOME is not None:
        extension = CUDAExtension
        sources += source_cuda
        define_macros += [("WITH_CUDA", None)]
        extra_compile_args["nvcc"] = [
            "-DCUDA_HAS_FP16=1",
            "-D__CUDA_NO_HALF_OPERATORS__",
            "-D__CUDA_NO_HALF_CONVERSIONS__",
            "-D__CUDA_NO_HALF2_OPERATORS__",
        ]
    else:
        # raise NotImplementedError('Cuda is not available')
        pass

    sources = [os.path.join(extensions_dir, s) for s in sources]
    include_dirs = [extensions_dir]
    ext_modules = [
        extension(
            "_ext",
            sources,
            include_dirs=include_dirs,
            define_macros=define_macros,
            extra_compile_args=extra_compile_args,
        )
    ]
    return ext_modules


setup(
    name="DCNv2",
    version="0.1",
    author="charlesshang",
    url="https://github.com/charlesshang/DCNv2",
    description="deformable convolutional networks",
    packages=find_packages(
        exclude=(
            "configs",
            "tests",
        )
    ),
    # install_requires=requirements,
    ext_modules=get_extensions(),
    cmdclass={"build_ext":
torch.utils.cpp_extension.BuildExtension}, ) ================================================ FILE: DCNv2/src/cpu/dcn_v2_cpu.cpp ================================================ #include #include "cpu/dcn_v2_im2col_cpu.h" #include #include //#include #include //#include //#include //extern THCState *state; // author: Charles Shang // https://github.com/torch/cunn/blob/master/lib/THCUNN/generic/SpatialConvolutionMM.cu // modified from the CUDA version for CPU use by Daniel K. Suhendro // edit by: James Bockman and Matthew Howe // modified for torch implementation to remove use of deprecated torch access to Blas at::Tensor dcn_v2_cpu_forward(const at::Tensor &input, const at::Tensor &weight, const at::Tensor &bias, const at::Tensor &offset, const at::Tensor &mask, const int kernel_h, const int kernel_w, const int stride_h, const int stride_w, const int pad_h, const int pad_w, const int dilation_h, const int dilation_w, const int deformable_group) { // THCAssertSameGPU(THCudaTensor_checkGPU(state, 5, input, weight, bias, offset, mask)); /*AT_ASSERTM(input.type().is_cuda(), "input must be a CUDA tensor"); AT_ASSERTM(weight.type().is_cuda(), "weight must be a CUDA tensor"); AT_ASSERTM(bias.type().is_cuda(), "bias must be a CUDA tensor"); AT_ASSERTM(offset.type().is_cuda(), "offset must be a CUDA tensor"); AT_ASSERTM(mask.type().is_cuda(), "mask must be a CUDA tensor");*/ const int batch = input.size(0); const int channels = input.size(1); const int height = input.size(2); const int width = input.size(3); const int channels_out = weight.size(0); const int channels_kernel = weight.size(1); const int kernel_h_ = weight.size(2); const int kernel_w_ = weight.size(3); // printf("Kernels: %d %d %d %d\n", kernel_h_, kernel_w_, kernel_w, kernel_h); // printf("Channels: %d %d\n", channels, channels_kernel); // printf("Channels: %d %d\n", channels_out, channels_kernel); AT_ASSERTM(kernel_h_ == kernel_h && kernel_w_ == kernel_w, "Input shape and kernel shape wont match: (%d x %d vs %d x %d).", kernel_h_, kernel_w, kernel_h_, kernel_w_); AT_ASSERTM(channels == channels_kernel, "Input shape and kernel channels wont match: (%d vs %d).", channels, channels_kernel); const int height_out = (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; const int width_out = (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; // auto ones = at::ones({height_out, width_out}, input.options()); auto ones = at::ones({bias.sizes()[0], height_out, width_out}, input.options()); auto columns = at::empty({channels * kernel_h * kernel_w, 1 * height_out * width_out}, input.options()); auto output = at::zeros({batch, channels_out, height_out, width_out}, input.options()); using scalar_t = float; for (int b = 0; b < batch; b++) { auto input_n = input.select(0, b); auto offset_n = offset.select(0, b); auto mask_n = mask.select(0, b); auto output_n = output.select(0, b); // std::cout << "output_n: " << output_n << "output.select(0,b): " << output.select(0,b) << "\n"; // Do Bias first: // M,N,K are dims of matrix A and B // (see http://docs.nvidia.com/cuda/cublas/#cublas-lt-t-gt-gemm) // (N x 1) (1 x M) // torch implementation auto ones_T = at::transpose(ones.contiguous(), 2, 0); ones_T = at::mul(ones_T, bias.contiguous()); ones_T = at::transpose(ones_T, 2, 0); output_n = at::add(output_n, ones_T); modulated_deformable_im2col_cpu(input_n.data_ptr(), offset_n.data_ptr(), mask_n.data_ptr(), 1, channels, height, width, height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, 
dilation_h, dilation_w, deformable_group, columns.data_ptr()); //(k * m) x (m * n) // Y = WC // torch implementation auto weight_flat = weight.view({channels_out, channels * kernel_h * kernel_w}); auto product = at::matmul(weight_flat, columns); output.select(0, b) = at::add(output_n, product.view({channels_out, height_out, width_out})); } return output; } std::vector dcn_v2_cpu_backward(const at::Tensor &input, const at::Tensor &weight, const at::Tensor &bias, const at::Tensor &offset, const at::Tensor &mask, const at::Tensor &grad_output, int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, int pad_w, int dilation_h, int dilation_w, int deformable_group) { THArgCheck(input.is_contiguous(), 1, "input tensor has to be contiguous"); THArgCheck(weight.is_contiguous(), 2, "weight tensor has to be contiguous"); /*AT_ASSERTM(input.type().is_cuda(), "input must be a CUDA tensor"); AT_ASSERTM(weight.type().is_cuda(), "weight must be a CUDA tensor"); AT_ASSERTM(bias.type().is_cuda(), "bias must be a CUDA tensor"); AT_ASSERTM(offset.type().is_cuda(), "offset must be a CUDA tensor"); AT_ASSERTM(mask.type().is_cuda(), "mask must be a CUDA tensor");*/ const int batch = input.size(0); const int channels = input.size(1); const int height = input.size(2); const int width = input.size(3); const int channels_out = weight.size(0); const int channels_kernel = weight.size(1); const int kernel_h_ = weight.size(2); const int kernel_w_ = weight.size(3); AT_ASSERTM(kernel_h_ == kernel_h && kernel_w_ == kernel_w, "Input shape and kernel shape wont match: (%d x %d vs %d x %d).", kernel_h_, kernel_w, kernel_h_, kernel_w_); AT_ASSERTM(channels == channels_kernel, "Input shape and kernel channels wont match: (%d vs %d).", channels, channels_kernel); const int height_out = (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; const int width_out = (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; auto ones = at::ones({height_out, width_out}, input.options()); auto columns = at::zeros({channels * kernel_h * kernel_w, 1 * height_out * width_out}, input.options()); auto output = at::empty({batch, channels_out, height_out, width_out}, input.options()); auto grad_input = at::zeros_like(input); auto grad_weight = at::zeros_like(weight); auto grad_bias = at::zeros_like(bias); auto grad_offset = at::zeros_like(offset); auto grad_mask = at::zeros_like(mask); using scalar_t = float; for (int b = 0; b < batch; b++) { auto input_n = input.select(0, b); auto offset_n = offset.select(0, b); auto mask_n = mask.select(0, b); auto grad_output_n = grad_output.select(0, b); auto grad_input_n = grad_input.select(0, b); auto grad_offset_n = grad_offset.select(0, b); auto grad_mask_n = grad_mask.select(0, b); // Torch implementation auto weight_flat = weight.view({channels_out, channels*kernel_h*kernel_w}); weight_flat = at::transpose(weight_flat, 1, 0); auto grad_output_n_flat = grad_output_n.view({channels_out, height_out*width_out}); columns = at::matmul(weight_flat, grad_output_n_flat); // gradient w.r.t. input coordinate data modulated_deformable_col2im_coord_cpu(columns.data_ptr(), input_n.data_ptr(), offset_n.data_ptr(), mask_n.data_ptr(), 1, channels, height, width, height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, deformable_group, grad_offset_n.data_ptr(), grad_mask_n.data_ptr()); // gradient w.r.t. 
input data modulated_deformable_col2im_cpu(columns.data_ptr(), offset_n.data_ptr(), mask_n.data_ptr(), 1, channels, height, width, height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, deformable_group, grad_input_n.data_ptr()); // gradient w.r.t. weight, dWeight should accumulate across the batch and group modulated_deformable_im2col_cpu(input_n.data_ptr(), offset_n.data_ptr(), mask_n.data_ptr(), 1, channels, height, width, height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, deformable_group, columns.data_ptr()); // Torch implementation auto product = at::matmul(grad_output_n_flat, at::transpose(columns, 1, 0)); grad_weight = at::add(grad_weight, product.view({channels_out, channels, kernel_h, kernel_w})); // Torch implementation auto ones_flat = ones.view({height_out*width_out}); product = at::matmul(grad_output_n_flat, ones_flat); grad_bias = at::add(grad_bias, product); } return { grad_input, grad_offset, grad_mask, grad_weight, grad_bias }; } ================================================ FILE: DCNv2/src/cpu/dcn_v2_im2col_cpu.cpp ================================================ #include "dcn_v2_im2col_cpu.h" #include #include #include #include //#include #include //#include //#include // modified from the CUDA version for CPU use by Daniel K. Suhendro /*#define CUDA_KERNEL_LOOP(i, n) \ for (int i = blockIdx.x * blockDim.x + threadIdx.x; \ i < (n); \ i += blockDim.x * gridDim.x) const int CUDA_NUM_THREADS = 1024; inline int GET_BLOCKS(const int N) { return (N + CUDA_NUM_THREADS - 1) / CUDA_NUM_THREADS; }*/ float dmcn_im2col_bilinear_cpu(const float *bottom_data, const int data_width, const int height, const int width, float h, float w) { int h_low = floor(h); int w_low = floor(w); int h_high = h_low + 1; int w_high = w_low + 1; float lh = h - h_low; float lw = w - w_low; float hh = 1 - lh, hw = 1 - lw; float v1 = 0; if (h_low >= 0 && w_low >= 0) v1 = bottom_data[h_low * data_width + w_low]; float v2 = 0; if (h_low >= 0 && w_high <= width - 1) v2 = bottom_data[h_low * data_width + w_high]; float v3 = 0; if (h_high <= height - 1 && w_low >= 0) v3 = bottom_data[h_high * data_width + w_low]; float v4 = 0; if (h_high <= height - 1 && w_high <= width - 1) v4 = bottom_data[h_high * data_width + w_high]; float w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; float val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); return val; } float dmcn_get_gradient_weight_cpu(float argmax_h, float argmax_w, const int h, const int w, const int height, const int width) { if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width) { //empty return 0; } int argmax_h_low = floor(argmax_h); int argmax_w_low = floor(argmax_w); int argmax_h_high = argmax_h_low + 1; int argmax_w_high = argmax_w_low + 1; float weight = 0; if (h == argmax_h_low && w == argmax_w_low) weight = (h + 1 - argmax_h) * (w + 1 - argmax_w); if (h == argmax_h_low && w == argmax_w_high) weight = (h + 1 - argmax_h) * (argmax_w + 1 - w); if (h == argmax_h_high && w == argmax_w_low) weight = (argmax_h + 1 - h) * (w + 1 - argmax_w); if (h == argmax_h_high && w == argmax_w_high) weight = (argmax_h + 1 - h) * (argmax_w + 1 - w); return weight; } float dmcn_get_coordinate_weight_cpu(float argmax_h, float argmax_w, const int height, const int width, const float *im_data, const int data_width, const int bp_dir) { if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width) { //empty return 0; } int argmax_h_low = 
floor(argmax_h); int argmax_w_low = floor(argmax_w); int argmax_h_high = argmax_h_low + 1; int argmax_w_high = argmax_w_low + 1; float weight = 0; if (bp_dir == 0) { if (argmax_h_low >= 0 && argmax_w_low >= 0) weight += -1 * (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_low * data_width + argmax_w_low]; if (argmax_h_low >= 0 && argmax_w_high <= width - 1) weight += -1 * (argmax_w - argmax_w_low) * im_data[argmax_h_low * data_width + argmax_w_high]; if (argmax_h_high <= height - 1 && argmax_w_low >= 0) weight += (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_high * data_width + argmax_w_low]; if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) weight += (argmax_w - argmax_w_low) * im_data[argmax_h_high * data_width + argmax_w_high]; } else if (bp_dir == 1) { if (argmax_h_low >= 0 && argmax_w_low >= 0) weight += -1 * (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_low]; if (argmax_h_low >= 0 && argmax_w_high <= width - 1) weight += (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_high]; if (argmax_h_high <= height - 1 && argmax_w_low >= 0) weight += -1 * (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_low]; if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) weight += (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_high]; } return weight; } void modulated_deformable_im2col_cpu_kernel(const int n, const float *data_im, const float *data_offset, const float *data_mask, const int height, const int width, const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, const int stride_h, const int stride_w, const int dilation_h, const int dilation_w, const int channel_per_deformable_group, const int batch_size, const int num_channels, const int deformable_group, const int height_col, const int width_col, float *data_col) { // launch channels * batch_size * height_col * width_col cores for(int index=0; index(0); const float h_im = h_in + i * dilation_h + offset_h; const float w_im = w_in + j * dilation_w + offset_w; //if (h_im >= 0 && w_im >= 0 && h_im < height && w_im < width) { if (h_im > -1 && w_im > -1 && h_im < height && w_im < width) { //const float map_h = i * dilation_h + offset_h; //const float map_w = j * dilation_w + offset_w; //const int cur_height = height - h_in; //const int cur_width = width - w_in; //val = dmcn_im2col_bilinear_cpu(data_im_ptr, width, cur_height, cur_width, map_h, map_w); val = dmcn_im2col_bilinear_cpu(data_im_ptr, width, height, width, h_im, w_im); } *data_col_ptr = val * mask; // data_col_ptr += batch_size * height_col * width_col; data_col_ptr += height_col * width_col; } } } } void modulated_deformable_col2im_cpu_kernel(const int n, const float *data_col, const float *data_offset, const float *data_mask, const int channels, const int height, const int width, const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, const int stride_h, const int stride_w, const int dilation_h, const int dilation_w, const int channel_per_deformable_group, const int batch_size, const int deformable_group, const int height_col, const int width_col, float *grad_im) { for(int index = 0; index < n; index++) { const int j = (index / width_col / height_col / batch_size) % kernel_w; const int i = (index / width_col / height_col / batch_size / kernel_w) % kernel_h; const int c = index / width_col / height_col / batch_size / kernel_w / kernel_h; // compute the start and end of the output const int deformable_group_index = c / 
channel_per_deformable_group; int w_out = index % width_col; int h_out = (index / width_col) % height_col; int b = (index / width_col / height_col) % batch_size; int w_in = w_out * stride_w - pad_w; int h_in = h_out * stride_h - pad_h; const float *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col; const float *data_mask_ptr = data_mask + (b * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col; const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out; const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out; const int data_mask_hw_ptr = ((i * kernel_w + j) * height_col + h_out) * width_col + w_out; const float offset_h = data_offset_ptr[data_offset_h_ptr]; const float offset_w = data_offset_ptr[data_offset_w_ptr]; const float mask = data_mask_ptr[data_mask_hw_ptr]; const float cur_inv_h_data = h_in + i * dilation_h + offset_h; const float cur_inv_w_data = w_in + j * dilation_w + offset_w; const float cur_top_grad = data_col[index] * mask; const int cur_h = (int)cur_inv_h_data; const int cur_w = (int)cur_inv_w_data; for (int dy = -2; dy <= 2; dy++) { for (int dx = -2; dx <= 2; dx++) { if (cur_h + dy >= 0 && cur_h + dy < height && cur_w + dx >= 0 && cur_w + dx < width && abs(cur_inv_h_data - (cur_h + dy)) < 1 && abs(cur_inv_w_data - (cur_w + dx)) < 1) { int cur_bottom_grad_pos = ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx; float weight = dmcn_get_gradient_weight_cpu(cur_inv_h_data, cur_inv_w_data, cur_h + dy, cur_w + dx, height, width); //atomicAdd(grad_im + cur_bottom_grad_pos, weight * cur_top_grad); *(grad_im + cur_bottom_grad_pos) += weight * cur_top_grad; } } } } } void modulated_deformable_col2im_coord_cpu_kernel(const int n, const float *data_col, const float *data_im, const float *data_offset, const float *data_mask, const int channels, const int height, const int width, const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, const int stride_h, const int stride_w, const int dilation_h, const int dilation_w, const int channel_per_deformable_group, const int batch_size, const int offset_channels, const int deformable_group, const int height_col, const int width_col, float *grad_offset, float *grad_mask) { for(int index = 0; index < n; index++) { float val = 0, mval = 0; int w = index % width_col; int h = (index / width_col) % height_col; int c = (index / width_col / height_col) % offset_channels; int b = (index / width_col / height_col) / offset_channels; // compute the start and end of the output const int deformable_group_index = c / (2 * kernel_h * kernel_w); const int col_step = kernel_h * kernel_w; int cnt = 0; const float *data_col_ptr = data_col + deformable_group_index * channel_per_deformable_group * batch_size * width_col * height_col; const float *data_im_ptr = data_im + (b * deformable_group + deformable_group_index) * channel_per_deformable_group / kernel_h / kernel_w * height * width; const float *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col; const float *data_mask_ptr = data_mask + (b * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col; const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w; for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; col_c += col_step) { const 
int col_pos = (((col_c * batch_size + b) * height_col) + h) * width_col + w; const int bp_dir = offset_c % 2; int j = (col_pos / width_col / height_col / batch_size) % kernel_w; int i = (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h; int w_out = col_pos % width_col; int h_out = (col_pos / width_col) % height_col; int w_in = w_out * stride_w - pad_w; int h_in = h_out * stride_h - pad_h; const int data_offset_h_ptr = (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out); const int data_offset_w_ptr = (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out); const int data_mask_hw_ptr = (((i * kernel_w + j) * height_col + h_out) * width_col + w_out); const float offset_h = data_offset_ptr[data_offset_h_ptr]; const float offset_w = data_offset_ptr[data_offset_w_ptr]; const float mask = data_mask_ptr[data_mask_hw_ptr]; float inv_h = h_in + i * dilation_h + offset_h; float inv_w = w_in + j * dilation_w + offset_w; if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width) { inv_h = inv_w = -2; } else { mval += data_col_ptr[col_pos] * dmcn_im2col_bilinear_cpu(data_im_ptr + cnt * height * width, width, height, width, inv_h, inv_w); } const float weight = dmcn_get_coordinate_weight_cpu( inv_h, inv_w, height, width, data_im_ptr + cnt * height * width, width, bp_dir); val += weight * data_col_ptr[col_pos] * mask; cnt += 1; } // KERNEL_ASSIGN(grad_offset[index], offset_req, val); grad_offset[index] = val; if (offset_c % 2 == 0) // KERNEL_ASSIGN(grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * height_col + h) * width_col + w], mask_req, mval); grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * height_col + h) * width_col + w] = mval; } } void modulated_deformable_im2col_cpu(const float* data_im, const float* data_offset, const float* data_mask, const int batch_size, const int channels, const int height_im, const int width_im, const int height_col, const int width_col, const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, const int stride_h, const int stride_w, const int dilation_h, const int dilation_w, const int deformable_group, float* data_col) { // num_axes should be smaller than block size const int channel_per_deformable_group = channels / deformable_group; const int num_kernels = channels * batch_size * height_col * width_col; modulated_deformable_im2col_cpu_kernel( num_kernels, data_im, data_offset, data_mask, height_im, width_im, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, channel_per_deformable_group, batch_size, channels, deformable_group, height_col, width_col, data_col); /*cudaError_t err = cudaGetLastError(); if (err != cudaSuccess) { printf("error in modulated_deformable_im2col_cuda: %s\n", cudaGetErrorString(err)); }*/ } void modulated_deformable_col2im_cpu(const float* data_col, const float* data_offset, const float* data_mask, const int batch_size, const int channels, const int height_im, const int width_im, const int height_col, const int width_col, const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, const int stride_h, const int stride_w, const int dilation_h, const int dilation_w, const int deformable_group, float* grad_im){ const int channel_per_deformable_group = channels / deformable_group; const int num_kernels = channels * kernel_h * kernel_w * batch_size * height_col * width_col; modulated_deformable_col2im_cpu_kernel( num_kernels, 
      data_col, data_offset, data_mask,
      channels, height_im, width_im,
      kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,
      dilation_h, dilation_w, channel_per_deformable_group,
      batch_size, deformable_group, height_col, width_col, grad_im);
  /*cudaError_t err = cudaGetLastError();
  if (err != cudaSuccess)
  {
    printf("error in modulated_deformable_col2im_cuda: %s\n", cudaGetErrorString(err));
  }*/
}

void modulated_deformable_col2im_coord_cpu(const float* data_col, const float* data_im,
                                           const float* data_offset, const float* data_mask,
                                           const int batch_size, const int channels,
                                           const int height_im, const int width_im,
                                           const int height_col, const int width_col,
                                           const int kernel_h, const int kernel_w,
                                           const int pad_h, const int pad_w,
                                           const int stride_h, const int stride_w,
                                           const int dilation_h, const int dilation_w,
                                           const int deformable_group,
                                           float* grad_offset, float* grad_mask)
{
  const int num_kernels = batch_size * height_col * width_col * 2 * kernel_h * kernel_w * deformable_group;
  const int channel_per_deformable_group = channels * kernel_h * kernel_w / deformable_group;
  modulated_deformable_col2im_coord_cpu_kernel(
      num_kernels, data_col, data_im, data_offset, data_mask,
      channels, height_im, width_im,
      kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,
      dilation_h, dilation_w, channel_per_deformable_group,
      batch_size, 2 * kernel_h * kernel_w * deformable_group, deformable_group,
      height_col, width_col, grad_offset, grad_mask);
  /*cudaError_t err = cudaGetLastError();
  if (err != cudaSuccess)
  {
    printf("error in modulated_deformable_col2im_coord_cuda: %s\n", cudaGetErrorString(err));
  }*/
}

================================================
FILE: DCNv2/src/cpu/dcn_v2_im2col_cpu.h
================================================
/*!
 ******************* BEGIN Caffe Copyright Notice and Disclaimer ****************
 *
 * COPYRIGHT
 *
 * All contributions by the University of California:
 * Copyright (c) 2014-2017 The Regents of the University of California (Regents)
 * All rights reserved.
 *
 * All other contributions:
 * Copyright (c) 2014-2017, the respective contributors
 * All rights reserved.
 *
 * Caffe uses a shared copyright model: each contributor holds copyright over
 * their contributions to Caffe. The project versioning records all such
 * contribution and copyright details. If a contributor wants to further mark
 * their specific copyright on a particular contribution, they should indicate
 * their copyright solely in the commit message of the change when it is
 * committed.
 *
 * LICENSE
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice, this
 * list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 * this list of conditions and the following disclaimer in the documentation
 * and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
 * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
 * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
 * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * CONTRIBUTION AGREEMENT
 *
 * By contributing to the BVLC/caffe repository through pull-request, comment,
 * or otherwise, the contributor releases their content to the
 * license and copyright terms herein.
 *
 ***************** END Caffe Copyright Notice and Disclaimer ********************
 *
 * Copyright (c) 2018 Microsoft
 * Licensed under The MIT License [see LICENSE for details]
 * \file modulated_deformable_im2col.h
 * \brief Function definitions of converting an image to
 * column matrix based on kernel, padding, dilation, and offset.
 * These functions are mainly used in deformable convolution operators.
 * \ref: https://arxiv.org/abs/1811.11168
 * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai, Xizhou Zhu, Han Hu
 */

/***************** Adapted by Charles Shang *********************/
// modified from the CUDA version for CPU use by Daniel K. Suhendro

#ifndef DCN_V2_IM2COL_CPU
#define DCN_V2_IM2COL_CPU

#ifdef __cplusplus
extern "C"
{
#endif

  void modulated_deformable_im2col_cpu(const float *data_im, const float *data_offset, const float *data_mask,
                                       const int batch_size, const int channels, const int height_im, const int width_im,
                                       const int height_col, const int width_col, const int kernel_h, const int kernel_w,
                                       const int pad_h, const int pad_w, const int stride_h, const int stride_w,
                                       const int dilation_h, const int dilation_w,
                                       const int deformable_group, float *data_col);

  void modulated_deformable_col2im_cpu(const float *data_col, const float *data_offset, const float *data_mask,
                                       const int batch_size, const int channels, const int height_im, const int width_im,
                                       const int height_col, const int width_col, const int kernel_h, const int kernel_w,
                                       const int pad_h, const int pad_w, const int stride_h, const int stride_w,
                                       const int dilation_h, const int dilation_w,
                                       const int deformable_group, float *grad_im);

  void modulated_deformable_col2im_coord_cpu(const float *data_col, const float *data_im, const float *data_offset, const float *data_mask,
                                             const int batch_size, const int channels, const int height_im, const int width_im,
                                             const int height_col, const int width_col, const int kernel_h, const int kernel_w,
                                             const int pad_h, const int pad_w, const int stride_h, const int stride_w,
                                             const int dilation_h, const int dilation_w,
                                             const int deformable_group,
                                             float *grad_offset, float *grad_mask);

#ifdef __cplusplus
}
#endif

#endif

================================================
FILE: DCNv2/src/cpu/dcn_v2_psroi_pooling_cpu.cpp
================================================
/*!
 * Copyright (c) 2017 Microsoft
 * Licensed under The MIT License [see LICENSE for details]
 * \file deformable_psroi_pooling.cu
 * \brief
 * \author Yi Li, Guodong Zhang, Jifeng Dai
 */
/***************** Adapted by Charles Shang *********************/
// modified from the CUDA version for CPU use by Daniel K.
Suhendro #include #include #include #include //#include #include //#include //#include /*#define CUDA_KERNEL_LOOP(i, n) \ for (int i = blockIdx.x * blockDim.x + threadIdx.x; \ i < (n); \ i += blockDim.x * gridDim.x) const int CUDA_NUM_THREADS = 1024; inline int GET_BLOCKS(const int N) { return (N + CUDA_NUM_THREADS - 1) / CUDA_NUM_THREADS; }*/ template T bilinear_interp_cpu( const T *data, const T x, const T y, const int width, const int height) { int x1 = floor(x); int x2 = ceil(x); int y1 = floor(y); int y2 = ceil(y); T dist_x = static_cast(x - x1); T dist_y = static_cast(y - y1); T value11 = data[y1 * width + x1]; T value12 = data[y2 * width + x1]; T value21 = data[y1 * width + x2]; T value22 = data[y2 * width + x2]; T value = (1 - dist_x) * (1 - dist_y) * value11 + (1 - dist_x) * dist_y * value12 + dist_x * (1 - dist_y) * value21 + dist_x * dist_y * value22; return value; } template void DeformablePSROIPoolForwardKernelCpu( const int count, const T *bottom_data, const T spatial_scale, const int channels, const int height, const int width, const int pooled_height, const int pooled_width, const T *bottom_rois, const T *bottom_trans, const int no_trans, const T trans_std, const int sample_per_part, const int output_dim, const int group_size, const int part_size, const int num_classes, const int channels_each_class, T *top_data, T *top_count) { for(int index = 0; index < count; index++) { // The output is in order (n, ctop, ph, pw) int pw = index % pooled_width; int ph = (index / pooled_width) % pooled_height; int ctop = (index / pooled_width / pooled_height) % output_dim; int n = index / pooled_width / pooled_height / output_dim; // [start, end) interval for spatial sampling const T *offset_bottom_rois = bottom_rois + n * 5; int roi_batch_ind = offset_bottom_rois[0]; T roi_start_w = static_cast(round(offset_bottom_rois[1])) * spatial_scale - 0.5; T roi_start_h = static_cast(round(offset_bottom_rois[2])) * spatial_scale - 0.5; T roi_end_w = static_cast(round(offset_bottom_rois[3]) + 1.) * spatial_scale - 0.5; T roi_end_h = static_cast(round(offset_bottom_rois[4]) + 1.) * spatial_scale - 0.5; // Force too small ROIs to be 1x1 T roi_width = std::max(roi_end_w - roi_start_w, T(0.1)); //avoid 0 T roi_height = std::max(roi_end_h - roi_start_h, T(0.1)); // Compute w and h at bottom T bin_size_h = roi_height / static_cast(pooled_height); T bin_size_w = roi_width / static_cast(pooled_width); T sub_bin_size_h = bin_size_h / static_cast(sample_per_part); T sub_bin_size_w = bin_size_w / static_cast(sample_per_part); int part_h = floor(static_cast(ph) / pooled_height * part_size); int part_w = floor(static_cast(pw) / pooled_width * part_size); int class_id = ctop / channels_each_class; T trans_x = no_trans ? static_cast(0) : bottom_trans[(((n * num_classes + class_id) * 2) * part_size + part_h) * part_size + part_w] * trans_std; T trans_y = no_trans ? 
static_cast(0) : bottom_trans[(((n * num_classes + class_id) * 2 + 1) * part_size + part_h) * part_size + part_w] * trans_std; T wstart = static_cast(pw) * bin_size_w + roi_start_w; wstart += trans_x * roi_width; T hstart = static_cast(ph) * bin_size_h + roi_start_h; hstart += trans_y * roi_height; T sum = 0; int count = 0; int gw = floor(static_cast(pw) * group_size / pooled_width); int gh = floor(static_cast(ph) * group_size / pooled_height); gw = std::min(std::max(gw, 0), group_size - 1); gh = std::min(std::max(gh, 0), group_size - 1); const T *offset_bottom_data = bottom_data + (roi_batch_ind * channels) * height * width; for (int ih = 0; ih < sample_per_part; ih++) { for (int iw = 0; iw < sample_per_part; iw++) { T w = wstart + iw * sub_bin_size_w; T h = hstart + ih * sub_bin_size_h; // bilinear interpolation if (w < -0.5 || w > width - 0.5 || h < -0.5 || h > height - 0.5) { continue; } w = std::min(std::max(w, T(0.)), width - T(1.)); h = std::min(std::max(h, T(0.)), height - T(1.)); int c = (ctop * group_size + gh) * group_size + gw; T val = bilinear_interp_cpu(offset_bottom_data + c * height * width, w, h, width, height); sum += val; count++; } } top_data[index] = count == 0 ? static_cast(0) : sum / count; top_count[index] = count; } } template void DeformablePSROIPoolBackwardAccKernelCpu( const int count, const T *top_diff, const T *top_count, const int num_rois, const T spatial_scale, const int channels, const int height, const int width, const int pooled_height, const int pooled_width, const int output_dim, T *bottom_data_diff, T *bottom_trans_diff, const T *bottom_data, const T *bottom_rois, const T *bottom_trans, const int no_trans, const T trans_std, const int sample_per_part, const int group_size, const int part_size, const int num_classes, const int channels_each_class) { for(int index = 0; index < count; index++) { // The output is in order (n, ctop, ph, pw) int pw = index % pooled_width; int ph = (index / pooled_width) % pooled_height; int ctop = (index / pooled_width / pooled_height) % output_dim; int n = index / pooled_width / pooled_height / output_dim; // [start, end) interval for spatial sampling const T *offset_bottom_rois = bottom_rois + n * 5; int roi_batch_ind = offset_bottom_rois[0]; T roi_start_w = static_cast(round(offset_bottom_rois[1])) * spatial_scale - 0.5; T roi_start_h = static_cast(round(offset_bottom_rois[2])) * spatial_scale - 0.5; T roi_end_w = static_cast(round(offset_bottom_rois[3]) + 1.) * spatial_scale - 0.5; T roi_end_h = static_cast(round(offset_bottom_rois[4]) + 1.) * spatial_scale - 0.5; // Force too small ROIs to be 1x1 T roi_width = std::max(roi_end_w - roi_start_w, T(0.1)); //avoid 0 T roi_height = std::max(roi_end_h - roi_start_h, T(0.1)); // Compute w and h at bottom T bin_size_h = roi_height / static_cast(pooled_height); T bin_size_w = roi_width / static_cast(pooled_width); T sub_bin_size_h = bin_size_h / static_cast(sample_per_part); T sub_bin_size_w = bin_size_w / static_cast(sample_per_part); int part_h = floor(static_cast(ph) / pooled_height * part_size); int part_w = floor(static_cast(pw) / pooled_width * part_size); int class_id = ctop / channels_each_class; T trans_x = no_trans ? static_cast(0) : bottom_trans[(((n * num_classes + class_id) * 2) * part_size + part_h) * part_size + part_w] * trans_std; T trans_y = no_trans ? 
static_cast(0) : bottom_trans[(((n * num_classes + class_id) * 2 + 1) * part_size + part_h) * part_size + part_w] * trans_std; T wstart = static_cast(pw) * bin_size_w + roi_start_w; wstart += trans_x * roi_width; T hstart = static_cast(ph) * bin_size_h + roi_start_h; hstart += trans_y * roi_height; if (top_count[index] <= 0) { continue; } T diff_val = top_diff[index] / top_count[index]; const T *offset_bottom_data = bottom_data + roi_batch_ind * channels * height * width; T *offset_bottom_data_diff = bottom_data_diff + roi_batch_ind * channels * height * width; int gw = floor(static_cast(pw) * group_size / pooled_width); int gh = floor(static_cast(ph) * group_size / pooled_height); gw = std::min(std::max(gw, 0), group_size - 1); gh = std::min(std::max(gh, 0), group_size - 1); for (int ih = 0; ih < sample_per_part; ih++) { for (int iw = 0; iw < sample_per_part; iw++) { T w = wstart + iw * sub_bin_size_w; T h = hstart + ih * sub_bin_size_h; // bilinear interpolation if (w < -0.5 || w > width - 0.5 || h < -0.5 || h > height - 0.5) { continue; } w = std::min(std::max(w, T(0.)), width - T(1.)); h = std::min(std::max(h, T(0.)), height - T(1.)); int c = (ctop * group_size + gh) * group_size + gw; // backward on feature int x0 = floor(w); int x1 = ceil(w); int y0 = floor(h); int y1 = ceil(h); T dist_x = w - x0, dist_y = h - y0; T q00 = (1 - dist_x) * (1 - dist_y); T q01 = (1 - dist_x) * dist_y; T q10 = dist_x * (1 - dist_y); T q11 = dist_x * dist_y; int bottom_index_base = c * height * width; /*atomicAdd(offset_bottom_data_diff + bottom_index_base + y0 * width + x0, q00 * diff_val); atomicAdd(offset_bottom_data_diff + bottom_index_base + y1 * width + x0, q01 * diff_val); atomicAdd(offset_bottom_data_diff + bottom_index_base + y0 * width + x1, q10 * diff_val); atomicAdd(offset_bottom_data_diff + bottom_index_base + y1 * width + x1, q11 * diff_val);*/ *(offset_bottom_data_diff + bottom_index_base + y0 * width + x0) += q00 * diff_val; *(offset_bottom_data_diff + bottom_index_base + y1 * width + x0) += q01 * diff_val; *(offset_bottom_data_diff + bottom_index_base + y0 * width + x1) += q10 * diff_val; *(offset_bottom_data_diff + bottom_index_base + y1 * width + x1) += q11 * diff_val; if (no_trans) { continue; } T U00 = offset_bottom_data[bottom_index_base + y0 * width + x0]; T U01 = offset_bottom_data[bottom_index_base + y1 * width + x0]; T U10 = offset_bottom_data[bottom_index_base + y0 * width + x1]; T U11 = offset_bottom_data[bottom_index_base + y1 * width + x1]; T diff_x = (U11 * dist_y + U10 * (1 - dist_y) - U01 * dist_y - U00 * (1 - dist_y)) * trans_std * diff_val; diff_x *= roi_width; T diff_y = (U11 * dist_x + U01 * (1 - dist_x) - U10 * dist_x - U00 * (1 - dist_x)) * trans_std * diff_val; diff_y *= roi_height; /*atomicAdd(bottom_trans_diff + (((n * num_classes + class_id) * 2) * part_size + part_h) * part_size + part_w, diff_x); atomicAdd(bottom_trans_diff + (((n * num_classes + class_id) * 2 + 1) * part_size + part_h) * part_size + part_w, diff_y);*/ *(bottom_trans_diff + (((n * num_classes + class_id) * 2) * part_size + part_h) * part_size + part_w) += diff_x; *(bottom_trans_diff + (((n * num_classes + class_id) * 2 + 1) * part_size + part_h) * part_size + part_w) += diff_y; } } } } std::tuple dcn_v2_psroi_pooling_cpu_forward(const at::Tensor &input, const at::Tensor &bbox, const at::Tensor &trans, const int no_trans, const float spatial_scale, const int output_dim, const int group_size, const int pooled_size, const int part_size, const int sample_per_part, const float trans_std) { 
/*AT_ASSERTM(input.type().is_cuda(), "input must be a CUDA tensor"); AT_ASSERTM(bbox.type().is_cuda(), "rois must be a CUDA tensor"); AT_ASSERTM(trans.type().is_cuda(), "trans must be a CUDA tensor");*/ const int batch = input.size(0); const int channels = input.size(1); const int height = input.size(2); const int width = input.size(3); const int channels_trans = no_trans ? 2 : trans.size(1); const int num_bbox = bbox.size(0); AT_ASSERTM(channels == output_dim, "input channels and output channels must equal"); auto pooled_height = pooled_size; auto pooled_width = pooled_size; auto out = at::empty({num_bbox, output_dim, pooled_height, pooled_width}, input.options()); long out_size = num_bbox * output_dim * pooled_height * pooled_width; auto top_count = at::zeros({num_bbox, output_dim, pooled_height, pooled_width}, input.options()); const int num_classes = no_trans ? 1 : channels_trans / 2; const int channels_each_class = no_trans ? output_dim : output_dim / num_classes; //cudaStream_t stream = at::cuda::getCurrentCUDAStream(); if (out.numel() == 0) { //THCudaCheck(cudaGetLastError()); return std::make_tuple(out, top_count); } /*dim3 grid(std::min(THCCeilDiv(out_size, 512L), 4096L)); dim3 block(512);*/ AT_DISPATCH_FLOATING_TYPES(input.type(), "dcn_v2_psroi_pooling_cpu_forward", [&] { DeformablePSROIPoolForwardKernelCpu( out_size, input.contiguous().data(), spatial_scale, channels, height, width, pooled_height, pooled_width, bbox.contiguous().data(), trans.contiguous().data(), no_trans, trans_std, sample_per_part, output_dim, group_size, part_size, num_classes, channels_each_class, out.data(), top_count.data()); }); //THCudaCheck(cudaGetLastError()); return std::make_tuple(out, top_count); } std::tuple dcn_v2_psroi_pooling_cpu_backward(const at::Tensor &out_grad, const at::Tensor &input, const at::Tensor &bbox, const at::Tensor &trans, const at::Tensor &top_count, const int no_trans, const float spatial_scale, const int output_dim, const int group_size, const int pooled_size, const int part_size, const int sample_per_part, const float trans_std) { /*AT_ASSERTM(out_grad.type().is_cuda(), "out_grad must be a CUDA tensor"); AT_ASSERTM(input.type().is_cuda(), "input must be a CUDA tensor"); AT_ASSERTM(bbox.type().is_cuda(), "bbox must be a CUDA tensor"); AT_ASSERTM(trans.type().is_cuda(), "trans must be a CUDA tensor"); AT_ASSERTM(top_count.type().is_cuda(), "top_count must be a CUDA tensor");*/ const int batch = input.size(0); const int channels = input.size(1); const int height = input.size(2); const int width = input.size(3); const int channels_trans = no_trans ? 2 : trans.size(1); const int num_bbox = bbox.size(0); AT_ASSERTM(channels == output_dim, "input channels and output channels must equal"); auto pooled_height = pooled_size; auto pooled_width = pooled_size; long out_size = num_bbox * output_dim * pooled_height * pooled_width; const int num_classes = no_trans ? 1 : channels_trans / 2; const int channels_each_class = no_trans ? 
output_dim : output_dim / num_classes; auto input_grad = at::zeros({batch, channels, height, width}, out_grad.options()); auto trans_grad = at::zeros_like(trans); if (input_grad.numel() == 0) { //THCudaCheck(cudaGetLastError()); return std::make_tuple(input_grad, trans_grad); } /*dim3 grid(std::min(THCCeilDiv(out_size, 512L), 4096L)); dim3 block(512); cudaStream_t stream = at::cuda::getCurrentCUDAStream();*/ AT_DISPATCH_FLOATING_TYPES(out_grad.type(), "dcn_v2_psroi_pooling_cpu_backward", [&] { DeformablePSROIPoolBackwardAccKernelCpu( out_size, out_grad.contiguous().data(), top_count.contiguous().data(), num_bbox, spatial_scale, channels, height, width, pooled_height, pooled_width, output_dim, input_grad.contiguous().data(), trans_grad.contiguous().data(), input.contiguous().data(), bbox.contiguous().data(), trans.contiguous().data(), no_trans, trans_std, sample_per_part, group_size, part_size, num_classes, channels_each_class); }); //THCudaCheck(cudaGetLastError()); return std::make_tuple(input_grad, trans_grad); } ================================================ FILE: DCNv2/src/cpu/vision.h ================================================ #pragma once #include at::Tensor dcn_v2_cpu_forward(const at::Tensor &input, const at::Tensor &weight, const at::Tensor &bias, const at::Tensor &offset, const at::Tensor &mask, const int kernel_h, const int kernel_w, const int stride_h, const int stride_w, const int pad_h, const int pad_w, const int dilation_h, const int dilation_w, const int deformable_group); std::vector dcn_v2_cpu_backward(const at::Tensor &input, const at::Tensor &weight, const at::Tensor &bias, const at::Tensor &offset, const at::Tensor &mask, const at::Tensor &grad_output, int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, int pad_w, int dilation_h, int dilation_w, int deformable_group); std::tuple dcn_v2_psroi_pooling_cpu_forward(const at::Tensor &input, const at::Tensor &bbox, const at::Tensor &trans, const int no_trans, const float spatial_scale, const int output_dim, const int group_size, const int pooled_size, const int part_size, const int sample_per_part, const float trans_std); std::tuple dcn_v2_psroi_pooling_cpu_backward(const at::Tensor &out_grad, const at::Tensor &input, const at::Tensor &bbox, const at::Tensor &trans, const at::Tensor &top_count, const int no_trans, const float spatial_scale, const int output_dim, const int group_size, const int pooled_size, const int part_size, const int sample_per_part, const float trans_std); ================================================ FILE: DCNv2/src/cuda/dcn_v2_cuda.cu ================================================ #include #include "cuda/dcn_v2_im2col_cuda.h" #include #include #include #include #include THCState *state = at::globalContext().lazyInitCUDA(); // author: Charles Shang // https://github.com/torch/cunn/blob/master/lib/THCUNN/generic/SpatialConvolutionMM.cu // [batch gemm] // https://github.com/pytorch/pytorch/blob/master/aten/src/THC/generic/THCTensorMathBlas.cu __global__ void createBatchGemmBuffer(const float **input_b, float **output_b, float **columns_b, const float **ones_b, const float **weight_b, const float **bias_b, float *input, float *output, float *columns, float *ones, float *weight, float *bias, const int input_stride, const int output_stride, const int columns_stride, const int ones_stride, const int num_batches) { const int idx = blockIdx.x * blockDim.x + threadIdx.x; if (idx < num_batches) { input_b[idx] = input + idx * input_stride; output_b[idx] = output + idx * output_stride; 
        columns_b[idx] = columns + idx * columns_stride;
        ones_b[idx] = ones + idx * ones_stride;
        // share weights and bias within a Mini-Batch
        weight_b[idx] = weight;
        bias_b[idx] = bias;
    }
}

at::Tensor
dcn_v2_cuda_forward(const at::Tensor &input,
                    const at::Tensor &weight,
                    const at::Tensor &bias,
                    const at::Tensor &offset,
                    const at::Tensor &mask,
                    const int kernel_h, const int kernel_w,
                    const int stride_h, const int stride_w,
                    const int pad_h, const int pad_w,
                    const int dilation_h, const int dilation_w,
                    const int deformable_group)
{
    using scalar_t = float;
    // THCAssertSameGPU(THCudaTensor_checkGPU(state, 5, input, weight, bias, offset, mask));
    AT_ASSERTM(input.type().is_cuda(), "input must be a CUDA tensor");
    AT_ASSERTM(weight.type().is_cuda(), "weight must be a CUDA tensor");
    AT_ASSERTM(bias.type().is_cuda(), "bias must be a CUDA tensor");
    AT_ASSERTM(offset.type().is_cuda(), "offset must be a CUDA tensor");
    AT_ASSERTM(mask.type().is_cuda(), "mask must be a CUDA tensor");

    const int batch = input.size(0);
    const int channels = input.size(1);
    const int height = input.size(2);
    const int width = input.size(3);

    const int channels_out = weight.size(0);
    const int channels_kernel = weight.size(1);
    const int kernel_h_ = weight.size(2);
    const int kernel_w_ = weight.size(3);

    // printf("Kernels: %d %d %d %d\n", kernel_h_, kernel_w_, kernel_w, kernel_h);
    // printf("Channels: %d %d\n", channels, channels_kernel);
    // printf("Channels: %d %d\n", channels_out, channels_kernel);

    AT_ASSERTM(kernel_h_ == kernel_h && kernel_w_ == kernel_w,
               "Input shape and kernel shape won't match: (%d x %d vs %d x %d).",
               kernel_h, kernel_w, kernel_h_, kernel_w_);

    AT_ASSERTM(channels == channels_kernel,
               "Input shape and kernel channels won't match: (%d vs %d).",
               channels, channels_kernel);

    // standard convolution output size: floor((in + 2*pad - effective_kernel) / stride) + 1,
    // where effective_kernel = dilation * (kernel - 1) + 1
    const int height_out = (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1;
    const int width_out = (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1;

    auto ones = at::ones({batch, height_out, width_out}, input.options());
    auto columns = at::empty({batch, channels * kernel_h * kernel_w, 1 * height_out * width_out}, input.options());
    auto output = at::empty({batch, channels_out, height_out, width_out}, input.options());

    // prepare for batch-wise computing, which is significantly faster than instance-wise computing
    // when batch size is large.
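    // The batched path below proceeds in three steps:
    //   1) createBatchGemmBuffer fills the per-sample pointer tables;
    //   2) the first THCudaBlas_SgemmBatched multiplies the all-ones matrix by
    //      the bias vector, writing the bias into every output location;
    //   3) modulated_deformable_im2col_cuda gathers the deformably-sampled,
    //      mask-modulated input into `columns`, and the second SgemmBatched
    //      accumulates weight * columns on top of the bias (beta = 1.0f).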
    // launch batch threads
    int matrices_size = batch * sizeof(float *);
    auto input_b = static_cast<const float **>(THCudaMalloc(state, matrices_size));
    auto output_b = static_cast<float **>(THCudaMalloc(state, matrices_size));
    auto columns_b = static_cast<float **>(THCudaMalloc(state, matrices_size));
    auto ones_b = static_cast<const float **>(THCudaMalloc(state, matrices_size));
    auto weight_b = static_cast<const float **>(THCudaMalloc(state, matrices_size));
    auto bias_b = static_cast<const float **>(THCudaMalloc(state, matrices_size));

    const int block = 128;
    const int grid = (batch + block - 1) / block;

    createBatchGemmBuffer<<<grid, block, 0, c10::cuda::getCurrentCUDAStream()>>>(
        input_b, output_b, columns_b, ones_b, weight_b, bias_b,
        input.data<scalar_t>(), output.data<scalar_t>(),
        columns.data<scalar_t>(), ones.data<scalar_t>(),
        weight.data<scalar_t>(), bias.data<scalar_t>(),
        channels * width * height,
        channels_out * width_out * height_out,
        channels * kernel_h * kernel_w * height_out * width_out,
        height_out * width_out,
        batch);

    long m_ = channels_out;
    long n_ = height_out * width_out;
    long k_ = 1;
    THCudaBlas_SgemmBatched(state, 't', 'n', n_, m_, k_, 1.0f,
                            ones_b, k_,
                            bias_b, k_,
                            0.0f,
                            output_b, n_,
                            batch);

    modulated_deformable_im2col_cuda(c10::cuda::getCurrentCUDAStream(),
                                     input.data<scalar_t>(),
                                     offset.data<scalar_t>(),
                                     mask.data<scalar_t>(),
                                     batch, channels, height, width,
                                     height_out, width_out, kernel_h, kernel_w,
                                     pad_h, pad_w, stride_h, stride_w,
                                     dilation_h, dilation_w, deformable_group,
                                     columns.data<scalar_t>());

    long m = channels_out;
    long n = height_out * width_out;
    long k = channels * kernel_h * kernel_w;
    THCudaBlas_SgemmBatched(state, 'n', 'n', n, m, k, 1.0f,
                            (const float **)columns_b, n,
                            weight_b, k,
                            1.0f,
                            output_b, n,
                            batch);

    THCudaFree(state, input_b);
    THCudaFree(state, output_b);
    THCudaFree(state, columns_b);
    THCudaFree(state, ones_b);
    THCudaFree(state, weight_b);
    THCudaFree(state, bias_b);
    return output;
}

__global__ void createBatchGemmBufferBackward(
    float **grad_output_b, float **columns_b, float **ones_b,
    float **weight_b, float **grad_weight_b, float **grad_bias_b,
    float *grad_output, float *columns, float *ones,
    float *weight, float *grad_weight, float *grad_bias,
    const int grad_output_stride, const int columns_stride,
    const int ones_stride, const int num_batches)
{
    const int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < num_batches)
    {
        grad_output_b[idx] = grad_output + idx * grad_output_stride;
        columns_b[idx] = columns + idx * columns_stride;
        ones_b[idx] = ones + idx * ones_stride;

        // share weights and bias within a Mini-Batch
        weight_b[idx] = weight;
        grad_weight_b[idx] = grad_weight;
        grad_bias_b[idx] = grad_bias;
    }
}

std::vector<at::Tensor> dcn_v2_cuda_backward(const at::Tensor &input,
                                             const at::Tensor &weight,
                                             const at::Tensor &bias,
                                             const at::Tensor &offset,
                                             const at::Tensor &mask,
                                             const at::Tensor &grad_output,
                                             int kernel_h, int kernel_w,
                                             int stride_h, int stride_w,
                                             int pad_h, int pad_w,
                                             int dilation_h, int dilation_w,
                                             int deformable_group)
{
    THArgCheck(input.is_contiguous(), 1, "input tensor has to be contiguous");
    THArgCheck(weight.is_contiguous(), 2, "weight tensor has to be contiguous");

    AT_ASSERTM(input.type().is_cuda(), "input must be a CUDA tensor");
    AT_ASSERTM(weight.type().is_cuda(), "weight must be a CUDA tensor");
    AT_ASSERTM(bias.type().is_cuda(), "bias must be a CUDA tensor");
    AT_ASSERTM(offset.type().is_cuda(), "offset must be a CUDA tensor");
    AT_ASSERTM(mask.type().is_cuda(), "mask must be a CUDA tensor");

    const int batch = input.size(0);
    const int channels = input.size(1);
    const int height = input.size(2);
    const int width = input.size(3);

    const int channels_out = weight.size(0);
    const int channels_kernel = weight.size(1);
    const int kernel_h_ = weight.size(2);
    const int kernel_w_ = weight.size(3);

    AT_ASSERTM(kernel_h_ == kernel_h && kernel_w_ == kernel_w,
               "Input shape and kernel shape won't match: (%d x %d vs %d x %d).",
               kernel_h, kernel_w, kernel_h_, kernel_w_);

    AT_ASSERTM(channels == channels_kernel,
               "Input shape and kernel channels won't match: (%d vs %d).",
               channels, channels_kernel);

    const int height_out = (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1;
    const int width_out = (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1;

    auto ones = at::ones({height_out, width_out}, input.options());
    auto columns = at::empty({channels * kernel_h * kernel_w, 1 * height_out * width_out}, input.options());
    auto output = at::empty({batch, channels_out, height_out, width_out}, input.options());

    auto grad_input = at::zeros_like(input);
    auto grad_weight = at::zeros_like(weight);
    auto grad_bias = at::zeros_like(bias);
    auto grad_offset = at::zeros_like(offset);
    auto grad_mask = at::zeros_like(mask);

    using scalar_t = float;

    for (int b = 0; b < batch; b++)
    {
        auto input_n = input.select(0, b);
        auto offset_n = offset.select(0, b);
        auto mask_n = mask.select(0, b);
        auto grad_output_n = grad_output.select(0, b);
        auto grad_input_n = grad_input.select(0, b);
        auto grad_offset_n = grad_offset.select(0, b);
        auto grad_mask_n = grad_mask.select(0, b);

        // columns = weight^T * grad_output_n, i.e. the gradient flowing back
        // into this sample's im2col buffer
        long m = channels * kernel_h * kernel_w;
        long n = height_out * width_out;
        long k = channels_out;

        THCudaBlas_Sgemm(state, 'n', 't', n, m, k, 1.0f,
                         grad_output_n.data<scalar_t>(), n,
                         weight.data<scalar_t>(), m, 0.0f,
                         columns.data<scalar_t>(), n);

        // gradient w.r.t. input coordinate data
        modulated_deformable_col2im_coord_cuda(c10::cuda::getCurrentCUDAStream(),
                                               columns.data<scalar_t>(),
                                               input_n.data<scalar_t>(),
                                               offset_n.data<scalar_t>(),
                                               mask_n.data<scalar_t>(),
                                               1, channels, height, width,
                                               height_out, width_out, kernel_h, kernel_w,
                                               pad_h, pad_w, stride_h, stride_w,
                                               dilation_h, dilation_w, deformable_group,
                                               grad_offset_n.data<scalar_t>(),
                                               grad_mask_n.data<scalar_t>());

        // gradient w.r.t. input data
        modulated_deformable_col2im_cuda(c10::cuda::getCurrentCUDAStream(),
                                         columns.data<scalar_t>(),
                                         offset_n.data<scalar_t>(),
                                         mask_n.data<scalar_t>(),
                                         1, channels, height, width,
                                         height_out, width_out, kernel_h, kernel_w,
                                         pad_h, pad_w, stride_h, stride_w,
                                         dilation_h, dilation_w, deformable_group,
                                         grad_input_n.data<scalar_t>());

        // gradient w.r.t. weight, dWeight should accumulate across the batch and group
        modulated_deformable_im2col_cuda(c10::cuda::getCurrentCUDAStream(),
                                         input_n.data<scalar_t>(),
                                         offset_n.data<scalar_t>(),
                                         mask_n.data<scalar_t>(),
                                         1, channels, height, width,
                                         height_out, width_out, kernel_h, kernel_w,
                                         pad_h, pad_w, stride_h, stride_w,
                                         dilation_h, dilation_w, deformable_group,
                                         columns.data<scalar_t>());

        long m_ = channels_out;
        long n_ = channels * kernel_h * kernel_w;
        long k_ = height_out * width_out;

        THCudaBlas_Sgemm(state, 't', 'n', n_, m_, k_, 1.0f,
                         columns.data<scalar_t>(), k_,
                         grad_output_n.data<scalar_t>(), k_, 1.0f,
                         grad_weight.data<scalar_t>(), n_);

        // gradient w.r.t.
bias // long m_ = channels_out; // long k__ = height_out * width_out; // THCudaBlas_Sgemm(state, // 't', 'n', // k_, m_, 1, 1.0f, // grad_output_n.data(), k_, // ones.data(), 1, 1.0f, // grad_bias.data(), 1); THCudaBlas_Sgemm(state, 'N', 'N', 1, m_, k_, 1.0f, ones.data(), 1, grad_output_n.data(), k_, 1.0f, grad_bias.data(), 1); } return { grad_input, grad_offset, grad_mask, grad_weight, grad_bias }; } ================================================ FILE: DCNv2/src/cuda/dcn_v2_im2col_cuda.cu ================================================ #include "dcn_v2_im2col_cuda.h" #include #include #include #include #include #include #include #include #define CUDA_KERNEL_LOOP(i, n) \ for (int i = blockIdx.x * blockDim.x + threadIdx.x; \ i < (n); \ i += blockDim.x * gridDim.x) const int CUDA_NUM_THREADS = 1024; inline int GET_BLOCKS(const int N) { return (N + CUDA_NUM_THREADS - 1) / CUDA_NUM_THREADS; } __device__ float dmcn_im2col_bilinear_cuda(const float *bottom_data, const int data_width, const int height, const int width, float h, float w) { int h_low = floor(h); int w_low = floor(w); int h_high = h_low + 1; int w_high = w_low + 1; float lh = h - h_low; float lw = w - w_low; float hh = 1 - lh, hw = 1 - lw; float v1 = 0; if (h_low >= 0 && w_low >= 0) v1 = bottom_data[h_low * data_width + w_low]; float v2 = 0; if (h_low >= 0 && w_high <= width - 1) v2 = bottom_data[h_low * data_width + w_high]; float v3 = 0; if (h_high <= height - 1 && w_low >= 0) v3 = bottom_data[h_high * data_width + w_low]; float v4 = 0; if (h_high <= height - 1 && w_high <= width - 1) v4 = bottom_data[h_high * data_width + w_high]; float w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; float val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); return val; } __device__ float dmcn_get_gradient_weight_cuda(float argmax_h, float argmax_w, const int h, const int w, const int height, const int width) { if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width) { //empty return 0; } int argmax_h_low = floor(argmax_h); int argmax_w_low = floor(argmax_w); int argmax_h_high = argmax_h_low + 1; int argmax_w_high = argmax_w_low + 1; float weight = 0; if (h == argmax_h_low && w == argmax_w_low) weight = (h + 1 - argmax_h) * (w + 1 - argmax_w); if (h == argmax_h_low && w == argmax_w_high) weight = (h + 1 - argmax_h) * (argmax_w + 1 - w); if (h == argmax_h_high && w == argmax_w_low) weight = (argmax_h + 1 - h) * (w + 1 - argmax_w); if (h == argmax_h_high && w == argmax_w_high) weight = (argmax_h + 1 - h) * (argmax_w + 1 - w); return weight; } __device__ float dmcn_get_coordinate_weight_cuda(float argmax_h, float argmax_w, const int height, const int width, const float *im_data, const int data_width, const int bp_dir) { if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width) { //empty return 0; } int argmax_h_low = floor(argmax_h); int argmax_w_low = floor(argmax_w); int argmax_h_high = argmax_h_low + 1; int argmax_w_high = argmax_w_low + 1; float weight = 0; if (bp_dir == 0) { if (argmax_h_low >= 0 && argmax_w_low >= 0) weight += -1 * (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_low * data_width + argmax_w_low]; if (argmax_h_low >= 0 && argmax_w_high <= width - 1) weight += -1 * (argmax_w - argmax_w_low) * im_data[argmax_h_low * data_width + argmax_w_high]; if (argmax_h_high <= height - 1 && argmax_w_low >= 0) weight += (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_high * data_width + argmax_w_low]; if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) weight += (argmax_w - 
argmax_w_low) * im_data[argmax_h_high * data_width + argmax_w_high]; } else if (bp_dir == 1) { if (argmax_h_low >= 0 && argmax_w_low >= 0) weight += -1 * (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_low]; if (argmax_h_low >= 0 && argmax_w_high <= width - 1) weight += (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_high]; if (argmax_h_high <= height - 1 && argmax_w_low >= 0) weight += -1 * (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_low]; if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) weight += (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_high]; } return weight; } __global__ void modulated_deformable_im2col_gpu_kernel(const int n, const float *data_im, const float *data_offset, const float *data_mask, const int height, const int width, const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, const int stride_h, const int stride_w, const int dilation_h, const int dilation_w, const int channel_per_deformable_group, const int batch_size, const int num_channels, const int deformable_group, const int height_col, const int width_col, float *data_col) { // launch channels * batch_size * height_col * width_col cores CUDA_KERNEL_LOOP(index, n) { // NOTE(CharlesShang): different from Dai Jifeng's MXNet implementation, col_buffer is of shape (c*kw*kh, N, oh, ow) // here columns is of shape (N, c*kw*kh, oh * ow), need to adapt axis // index index of output matrix const int w_col = index % width_col; const int h_col = (index / width_col) % height_col; // const int b_col = (index / width_col / height_col) % batch_size; const int b_col = (index / width_col / height_col / num_channels) % batch_size; // const int c_im = (index / width_col / height_col) / batch_size; const int c_im = (index / width_col / height_col) % num_channels; // const int c_col = c_im * kernel_h * kernel_w; const int c_col = c_im * kernel_h * kernel_w; // compute deformable group index const int deformable_group_index = c_im / channel_per_deformable_group; const int h_in = h_col * stride_h - pad_h; const int w_in = w_col * stride_w - pad_w; // float *data_col_ptr = data_col + ((c_col * batch_size + b_col) * height_col + h_col) * width_col + w_col; float *data_col_ptr = data_col + ((b_col * num_channels * kernel_w * kernel_h + c_col) * height_col + h_col) * width_col + w_col; //const float* data_im_ptr = data_im + ((b_col * num_channels + c_im) * height + h_in) * width + w_in; const float *data_im_ptr = data_im + (b_col * num_channels + c_im) * height * width; const float *data_offset_ptr = data_offset + (b_col * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col; const float *data_mask_ptr = data_mask + (b_col * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col; for (int i = 0; i < kernel_h; ++i) { for (int j = 0; j < kernel_w; ++j) { const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_col) * width_col + w_col; const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_col) * width_col + w_col; const int data_mask_hw_ptr = ((i * kernel_w + j) * height_col + h_col) * width_col + w_col; const float offset_h = data_offset_ptr[data_offset_h_ptr]; const float offset_w = data_offset_ptr[data_offset_w_ptr]; const float mask = data_mask_ptr[data_mask_hw_ptr]; float val = static_cast(0); const float h_im = h_in + i * dilation_h + offset_h; const float w_im = w_in + j * 
dilation_w + offset_w; //if (h_im >= 0 && w_im >= 0 && h_im < height && w_im < width) { if (h_im > -1 && w_im > -1 && h_im < height && w_im < width) { //const float map_h = i * dilation_h + offset_h; //const float map_w = j * dilation_w + offset_w; //const int cur_height = height - h_in; //const int cur_width = width - w_in; //val = dmcn_im2col_bilinear_cuda(data_im_ptr, width, cur_height, cur_width, map_h, map_w); val = dmcn_im2col_bilinear_cuda(data_im_ptr, width, height, width, h_im, w_im); } *data_col_ptr = val * mask; // data_col_ptr += batch_size * height_col * width_col; data_col_ptr += height_col * width_col; } } } } __global__ void modulated_deformable_col2im_gpu_kernel(const int n, const float *data_col, const float *data_offset, const float *data_mask, const int channels, const int height, const int width, const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, const int stride_h, const int stride_w, const int dilation_h, const int dilation_w, const int channel_per_deformable_group, const int batch_size, const int deformable_group, const int height_col, const int width_col, float *grad_im) { CUDA_KERNEL_LOOP(index, n) { const int j = (index / width_col / height_col / batch_size) % kernel_w; const int i = (index / width_col / height_col / batch_size / kernel_w) % kernel_h; const int c = index / width_col / height_col / batch_size / kernel_w / kernel_h; // compute the start and end of the output const int deformable_group_index = c / channel_per_deformable_group; int w_out = index % width_col; int h_out = (index / width_col) % height_col; int b = (index / width_col / height_col) % batch_size; int w_in = w_out * stride_w - pad_w; int h_in = h_out * stride_h - pad_h; const float *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col; const float *data_mask_ptr = data_mask + (b * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col; const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out; const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out; const int data_mask_hw_ptr = ((i * kernel_w + j) * height_col + h_out) * width_col + w_out; const float offset_h = data_offset_ptr[data_offset_h_ptr]; const float offset_w = data_offset_ptr[data_offset_w_ptr]; const float mask = data_mask_ptr[data_mask_hw_ptr]; const float cur_inv_h_data = h_in + i * dilation_h + offset_h; const float cur_inv_w_data = w_in + j * dilation_w + offset_w; const float cur_top_grad = data_col[index] * mask; const int cur_h = (int)cur_inv_h_data; const int cur_w = (int)cur_inv_w_data; for (int dy = -2; dy <= 2; dy++) { for (int dx = -2; dx <= 2; dx++) { if (cur_h + dy >= 0 && cur_h + dy < height && cur_w + dx >= 0 && cur_w + dx < width && abs(cur_inv_h_data - (cur_h + dy)) < 1 && abs(cur_inv_w_data - (cur_w + dx)) < 1) { int cur_bottom_grad_pos = ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx; float weight = dmcn_get_gradient_weight_cuda(cur_inv_h_data, cur_inv_w_data, cur_h + dy, cur_w + dx, height, width); atomicAdd(grad_im + cur_bottom_grad_pos, weight * cur_top_grad); } } } } } __global__ void modulated_deformable_col2im_coord_gpu_kernel(const int n, const float *data_col, const float *data_im, const float *data_offset, const float *data_mask, const int channels, const int height, const int width, const int kernel_h, const int kernel_w, const int pad_h, const int pad_w, const int 
stride_h, const int stride_w, const int dilation_h, const int dilation_w, const int channel_per_deformable_group, const int batch_size, const int offset_channels, const int deformable_group, const int height_col, const int width_col, float *grad_offset, float *grad_mask) { CUDA_KERNEL_LOOP(index, n) { float val = 0, mval = 0; int w = index % width_col; int h = (index / width_col) % height_col; int c = (index / width_col / height_col) % offset_channels; int b = (index / width_col / height_col) / offset_channels; // compute the start and end of the output const int deformable_group_index = c / (2 * kernel_h * kernel_w); const int col_step = kernel_h * kernel_w; int cnt = 0; const float *data_col_ptr = data_col + deformable_group_index * channel_per_deformable_group * batch_size * width_col * height_col; const float *data_im_ptr = data_im + (b * deformable_group + deformable_group_index) * channel_per_deformable_group / kernel_h / kernel_w * height * width; const float *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col; const float *data_mask_ptr = data_mask + (b * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col; const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w; for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; col_c += col_step) { const int col_pos = (((col_c * batch_size + b) * height_col) + h) * width_col + w; const int bp_dir = offset_c % 2; int j = (col_pos / width_col / height_col / batch_size) % kernel_w; int i = (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h; int w_out = col_pos % width_col; int h_out = (col_pos / width_col) % height_col; int w_in = w_out * stride_w - pad_w; int h_in = h_out * stride_h - pad_h; const int data_offset_h_ptr = (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out); const int data_offset_w_ptr = (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out); const int data_mask_hw_ptr = (((i * kernel_w + j) * height_col + h_out) * width_col + w_out); const float offset_h = data_offset_ptr[data_offset_h_ptr]; const float offset_w = data_offset_ptr[data_offset_w_ptr]; const float mask = data_mask_ptr[data_mask_hw_ptr]; float inv_h = h_in + i * dilation_h + offset_h; float inv_w = w_in + j * dilation_w + offset_w; if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width) { inv_h = inv_w = -2; } else { mval += data_col_ptr[col_pos] * dmcn_im2col_bilinear_cuda(data_im_ptr + cnt * height * width, width, height, width, inv_h, inv_w); } const float weight = dmcn_get_coordinate_weight_cuda( inv_h, inv_w, height, width, data_im_ptr + cnt * height * width, width, bp_dir); val += weight * data_col_ptr[col_pos] * mask; cnt += 1; } // KERNEL_ASSIGN(grad_offset[index], offset_req, val); grad_offset[index] = val; if (offset_c % 2 == 0) // KERNEL_ASSIGN(grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * height_col + h) * width_col + w], mask_req, mval); grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * height_col + h) * width_col + w] = mval; } } void modulated_deformable_im2col_cuda(cudaStream_t stream, const float* data_im, const float* data_offset, const float* data_mask, const int batch_size, const int channels, const int height_im, const int width_im, const int height_col, const int width_col, const int kernel_h, const int kernel_w, const int 
pad_h, const int pad_w, const int stride_h, const int stride_w,
                                      const int dilation_h, const int dilation_w,
                                      const int deformable_group, float* data_col)
{
    // num_axes should be smaller than block size
    const int channel_per_deformable_group = channels / deformable_group;
    const int num_kernels = channels * batch_size * height_col * width_col;

    modulated_deformable_im2col_gpu_kernel
        <<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS, 0, stream>>>(
            num_kernels, data_im, data_offset, data_mask, height_im, width_im,
            kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,
            dilation_h, dilation_w, channel_per_deformable_group,
            batch_size, channels, deformable_group, height_col, width_col, data_col);

    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
    {
        printf("error in modulated_deformable_im2col_cuda: %s\n", cudaGetErrorString(err));
    }
}

void modulated_deformable_col2im_cuda(cudaStream_t stream,
                                      const float* data_col, const float* data_offset, const float* data_mask,
                                      const int batch_size, const int channels,
                                      const int height_im, const int width_im,
                                      const int height_col, const int width_col,
                                      const int kernel_h, const int kernel_w,
                                      const int pad_h, const int pad_w,
                                      const int stride_h, const int stride_w,
                                      const int dilation_h, const int dilation_w,
                                      const int deformable_group, float* grad_im)
{
    const int channel_per_deformable_group = channels / deformable_group;
    const int num_kernels = channels * kernel_h * kernel_w * batch_size * height_col * width_col;

    modulated_deformable_col2im_gpu_kernel
        <<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS, 0, stream>>>(
            num_kernels, data_col, data_offset, data_mask, channels, height_im, width_im,
            kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,
            dilation_h, dilation_w, channel_per_deformable_group,
            batch_size, deformable_group, height_col, width_col, grad_im);

    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
    {
        printf("error in modulated_deformable_col2im_cuda: %s\n", cudaGetErrorString(err));
    }
}

void modulated_deformable_col2im_coord_cuda(cudaStream_t stream,
                                            const float* data_col, const float* data_im,
                                            const float* data_offset, const float* data_mask,
                                            const int batch_size, const int channels,
                                            const int height_im, const int width_im,
                                            const int height_col, const int width_col,
                                            const int kernel_h, const int kernel_w,
                                            const int pad_h, const int pad_w,
                                            const int stride_h, const int stride_w,
                                            const int dilation_h, const int dilation_w,
                                            const int deformable_group,
                                            float* grad_offset, float* grad_mask)
{
    const int num_kernels = batch_size * height_col * width_col * 2 * kernel_h * kernel_w * deformable_group;
    const int channel_per_deformable_group = channels * kernel_h * kernel_w / deformable_group;

    modulated_deformable_col2im_coord_gpu_kernel
        <<<GET_BLOCKS(num_kernels), CUDA_NUM_THREADS, 0, stream>>>(
            num_kernels, data_col, data_im, data_offset, data_mask, channels, height_im, width_im,
            kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w,
            dilation_h, dilation_w, channel_per_deformable_group,
            batch_size, 2 * kernel_h * kernel_w * deformable_group, deformable_group,
            height_col, width_col, grad_offset, grad_mask);

    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
    {
        printf("error in modulated_deformable_col2im_coord_cuda: %s\n", cudaGetErrorString(err));
    }
}

================================================
FILE: DCNv2/src/cuda/dcn_v2_im2col_cuda.h
================================================
/*!
 ******************* BEGIN Caffe Copyright Notice and Disclaimer ****************
 *
 * COPYRIGHT
 *
 * All contributions by the University of California:
 * Copyright (c) 2014-2017 The Regents of the University of California (Regents)
 * All rights reserved.
* * All other contributions: * Copyright (c) 2014-2017, the respective contributors * All rights reserved. * * Caffe uses a shared copyright model: each contributor holds copyright over * their contributions to Caffe. The project versioning records all such * contribution and copyright details. If a contributor wants to further mark * their specific copyright on a particular contribution, they should indicate * their copyright solely in the commit message of the change when it is * committed. * * LICENSE * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, this * list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright notice, * this list of conditions and the following disclaimer in the documentation * and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * CONTRIBUTION AGREEMENT * * By contributing to the BVLC/caffe repository through pull-request, comment, * or otherwise, the contributor releases their content to the * license and copyright terms herein. * ***************** END Caffe Copyright Notice and Disclaimer ******************** * * Copyright (c) 2018 Microsoft * Licensed under The MIT License [see LICENSE for details] * \file modulated_deformable_im2col.h * \brief Function definitions of converting an image to * column matrix based on kernel, padding, dilation, and offset. * These functions are mainly used in deformable convolution operators. 
 * \ref: https://arxiv.org/abs/1811.11168
 * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai, Xizhou Zhu, Han Hu
 */

/***************** Adapted by Charles Shang *********************/

#ifndef DCN_V2_IM2COL_CUDA
#define DCN_V2_IM2COL_CUDA

#ifdef __cplusplus
extern "C"
{
#endif

    void modulated_deformable_im2col_cuda(cudaStream_t stream,
                                          const float *data_im, const float *data_offset, const float *data_mask,
                                          const int batch_size, const int channels,
                                          const int height_im, const int width_im,
                                          const int height_col, const int width_col,
                                          const int kernel_h, const int kernel_w,
                                          const int pad_h, const int pad_w,
                                          const int stride_h, const int stride_w,
                                          const int dilation_h, const int dilation_w,
                                          const int deformable_group, float *data_col);

    void modulated_deformable_col2im_cuda(cudaStream_t stream,
                                          const float *data_col, const float *data_offset, const float *data_mask,
                                          const int batch_size, const int channels,
                                          const int height_im, const int width_im,
                                          const int height_col, const int width_col,
                                          const int kernel_h, const int kernel_w,
                                          const int pad_h, const int pad_w,
                                          const int stride_h, const int stride_w,
                                          const int dilation_h, const int dilation_w,
                                          const int deformable_group, float *grad_im);

    void modulated_deformable_col2im_coord_cuda(cudaStream_t stream,
                                                const float *data_col, const float *data_im,
                                                const float *data_offset, const float *data_mask,
                                                const int batch_size, const int channels,
                                                const int height_im, const int width_im,
                                                const int height_col, const int width_col,
                                                const int kernel_h, const int kernel_w,
                                                const int pad_h, const int pad_w,
                                                const int stride_h, const int stride_w,
                                                const int dilation_h, const int dilation_w,
                                                const int deformable_group,
                                                float *grad_offset, float *grad_mask);

#ifdef __cplusplus
}
#endif

#endif

================================================
FILE: DCNv2/src/cuda/dcn_v2_psroi_pooling_cuda.cu
================================================
/*!
* Copyright (c) 2017 Microsoft * Licensed under The MIT License [see LICENSE for details] * \file deformable_psroi_pooling.cu * \brief * \author Yi Li, Guodong Zhang, Jifeng Dai */ /***************** Adapted by Charles Shang *********************/ #include #include #include #include #include #include #include #include #include #define CUDA_KERNEL_LOOP(i, n) \ for (int i = blockIdx.x * blockDim.x + threadIdx.x; \ i < (n); \ i += blockDim.x * gridDim.x) const int CUDA_NUM_THREADS = 1024; inline int GET_BLOCKS(const int N) { return (N + CUDA_NUM_THREADS - 1) / CUDA_NUM_THREADS; } template __device__ T bilinear_interp_cuda( const T *data, const T x, const T y, const int width, const int height) { int x1 = floor(x); int x2 = ceil(x); int y1 = floor(y); int y2 = ceil(y); T dist_x = static_cast(x - x1); T dist_y = static_cast(y - y1); T value11 = data[y1 * width + x1]; T value12 = data[y2 * width + x1]; T value21 = data[y1 * width + x2]; T value22 = data[y2 * width + x2]; T value = (1 - dist_x) * (1 - dist_y) * value11 + (1 - dist_x) * dist_y * value12 + dist_x * (1 - dist_y) * value21 + dist_x * dist_y * value22; return value; } template __global__ void DeformablePSROIPoolForwardKernelCuda( const int count, const T *bottom_data, const T spatial_scale, const int channels, const int height, const int width, const int pooled_height, const int pooled_width, const T *bottom_rois, const T *bottom_trans, const int no_trans, const T trans_std, const int sample_per_part, const int output_dim, const int group_size, const int part_size, const int num_classes, const int channels_each_class, T *top_data, T *top_count) { CUDA_KERNEL_LOOP(index, count) { // The output is in order (n, ctop, ph, pw) int pw = index % pooled_width; int ph = (index / pooled_width) % pooled_height; int ctop = (index / pooled_width / pooled_height) % output_dim; int n = index / pooled_width / pooled_height / output_dim; // [start, end) interval for spatial sampling const T *offset_bottom_rois = bottom_rois + n * 5; int roi_batch_ind = offset_bottom_rois[0]; T roi_start_w = static_cast(round(offset_bottom_rois[1])) * spatial_scale - 0.5; T roi_start_h = static_cast(round(offset_bottom_rois[2])) * spatial_scale - 0.5; T roi_end_w = static_cast(round(offset_bottom_rois[3]) + 1.) * spatial_scale - 0.5; T roi_end_h = static_cast(round(offset_bottom_rois[4]) + 1.) * spatial_scale - 0.5; // Force too small ROIs to be 1x1 T roi_width = max(roi_end_w - roi_start_w, 0.1); //avoid 0 T roi_height = max(roi_end_h - roi_start_h, 0.1); // Compute w and h at bottom T bin_size_h = roi_height / static_cast(pooled_height); T bin_size_w = roi_width / static_cast(pooled_width); T sub_bin_size_h = bin_size_h / static_cast(sample_per_part); T sub_bin_size_w = bin_size_w / static_cast(sample_per_part); int part_h = floor(static_cast(ph) / pooled_height * part_size); int part_w = floor(static_cast(pw) / pooled_width * part_size); int class_id = ctop / channels_each_class; T trans_x = no_trans ? static_cast(0) : bottom_trans[(((n * num_classes + class_id) * 2) * part_size + part_h) * part_size + part_w] * trans_std; T trans_y = no_trans ? 
static_cast(0) : bottom_trans[(((n * num_classes + class_id) * 2 + 1) * part_size + part_h) * part_size + part_w] * trans_std; T wstart = static_cast(pw) * bin_size_w + roi_start_w; wstart += trans_x * roi_width; T hstart = static_cast(ph) * bin_size_h + roi_start_h; hstart += trans_y * roi_height; T sum = 0; int count = 0; int gw = floor(static_cast(pw) * group_size / pooled_width); int gh = floor(static_cast(ph) * group_size / pooled_height); gw = min(max(gw, 0), group_size - 1); gh = min(max(gh, 0), group_size - 1); const T *offset_bottom_data = bottom_data + (roi_batch_ind * channels) * height * width; for (int ih = 0; ih < sample_per_part; ih++) { for (int iw = 0; iw < sample_per_part; iw++) { T w = wstart + iw * sub_bin_size_w; T h = hstart + ih * sub_bin_size_h; // bilinear interpolation if (w < -0.5 || w > width - 0.5 || h < -0.5 || h > height - 0.5) { continue; } w = min(max(w, 0.), width - 1.); h = min(max(h, 0.), height - 1.); int c = (ctop * group_size + gh) * group_size + gw; T val = bilinear_interp_cuda(offset_bottom_data + c * height * width, w, h, width, height); sum += val; count++; } } top_data[index] = count == 0 ? static_cast(0) : sum / count; top_count[index] = count; } } template __global__ void DeformablePSROIPoolBackwardAccKernelCuda( const int count, const T *top_diff, const T *top_count, const int num_rois, const T spatial_scale, const int channels, const int height, const int width, const int pooled_height, const int pooled_width, const int output_dim, T *bottom_data_diff, T *bottom_trans_diff, const T *bottom_data, const T *bottom_rois, const T *bottom_trans, const int no_trans, const T trans_std, const int sample_per_part, const int group_size, const int part_size, const int num_classes, const int channels_each_class) { CUDA_KERNEL_LOOP(index, count) { // The output is in order (n, ctop, ph, pw) int pw = index % pooled_width; int ph = (index / pooled_width) % pooled_height; int ctop = (index / pooled_width / pooled_height) % output_dim; int n = index / pooled_width / pooled_height / output_dim; // [start, end) interval for spatial sampling const T *offset_bottom_rois = bottom_rois + n * 5; int roi_batch_ind = offset_bottom_rois[0]; T roi_start_w = static_cast(round(offset_bottom_rois[1])) * spatial_scale - 0.5; T roi_start_h = static_cast(round(offset_bottom_rois[2])) * spatial_scale - 0.5; T roi_end_w = static_cast(round(offset_bottom_rois[3]) + 1.) * spatial_scale - 0.5; T roi_end_h = static_cast(round(offset_bottom_rois[4]) + 1.) * spatial_scale - 0.5; // Force too small ROIs to be 1x1 T roi_width = max(roi_end_w - roi_start_w, 0.1); //avoid 0 T roi_height = max(roi_end_h - roi_start_h, 0.1); // Compute w and h at bottom T bin_size_h = roi_height / static_cast(pooled_height); T bin_size_w = roi_width / static_cast(pooled_width); T sub_bin_size_h = bin_size_h / static_cast(sample_per_part); T sub_bin_size_w = bin_size_w / static_cast(sample_per_part); int part_h = floor(static_cast(ph) / pooled_height * part_size); int part_w = floor(static_cast(pw) / pooled_width * part_size); int class_id = ctop / channels_each_class; T trans_x = no_trans ? static_cast(0) : bottom_trans[(((n * num_classes + class_id) * 2) * part_size + part_h) * part_size + part_w] * trans_std; T trans_y = no_trans ? 
static_cast(0) : bottom_trans[(((n * num_classes + class_id) * 2 + 1) * part_size + part_h) * part_size + part_w] * trans_std; T wstart = static_cast(pw) * bin_size_w + roi_start_w; wstart += trans_x * roi_width; T hstart = static_cast(ph) * bin_size_h + roi_start_h; hstart += trans_y * roi_height; if (top_count[index] <= 0) { continue; } T diff_val = top_diff[index] / top_count[index]; const T *offset_bottom_data = bottom_data + roi_batch_ind * channels * height * width; T *offset_bottom_data_diff = bottom_data_diff + roi_batch_ind * channels * height * width; int gw = floor(static_cast(pw) * group_size / pooled_width); int gh = floor(static_cast(ph) * group_size / pooled_height); gw = min(max(gw, 0), group_size - 1); gh = min(max(gh, 0), group_size - 1); for (int ih = 0; ih < sample_per_part; ih++) { for (int iw = 0; iw < sample_per_part; iw++) { T w = wstart + iw * sub_bin_size_w; T h = hstart + ih * sub_bin_size_h; // bilinear interpolation if (w < -0.5 || w > width - 0.5 || h < -0.5 || h > height - 0.5) { continue; } w = min(max(w, 0.), width - 1.); h = min(max(h, 0.), height - 1.); int c = (ctop * group_size + gh) * group_size + gw; // backward on feature int x0 = floor(w); int x1 = ceil(w); int y0 = floor(h); int y1 = ceil(h); T dist_x = w - x0, dist_y = h - y0; T q00 = (1 - dist_x) * (1 - dist_y); T q01 = (1 - dist_x) * dist_y; T q10 = dist_x * (1 - dist_y); T q11 = dist_x * dist_y; int bottom_index_base = c * height * width; atomicAdd(offset_bottom_data_diff + bottom_index_base + y0 * width + x0, q00 * diff_val); atomicAdd(offset_bottom_data_diff + bottom_index_base + y1 * width + x0, q01 * diff_val); atomicAdd(offset_bottom_data_diff + bottom_index_base + y0 * width + x1, q10 * diff_val); atomicAdd(offset_bottom_data_diff + bottom_index_base + y1 * width + x1, q11 * diff_val); if (no_trans) { continue; } T U00 = offset_bottom_data[bottom_index_base + y0 * width + x0]; T U01 = offset_bottom_data[bottom_index_base + y1 * width + x0]; T U10 = offset_bottom_data[bottom_index_base + y0 * width + x1]; T U11 = offset_bottom_data[bottom_index_base + y1 * width + x1]; T diff_x = (U11 * dist_y + U10 * (1 - dist_y) - U01 * dist_y - U00 * (1 - dist_y)) * trans_std * diff_val; diff_x *= roi_width; T diff_y = (U11 * dist_x + U01 * (1 - dist_x) - U10 * dist_x - U00 * (1 - dist_x)) * trans_std * diff_val; diff_y *= roi_height; atomicAdd(bottom_trans_diff + (((n * num_classes + class_id) * 2) * part_size + part_h) * part_size + part_w, diff_x); atomicAdd(bottom_trans_diff + (((n * num_classes + class_id) * 2 + 1) * part_size + part_h) * part_size + part_w, diff_y); } } } } std::tuple dcn_v2_psroi_pooling_cuda_forward(const at::Tensor &input, const at::Tensor &bbox, const at::Tensor &trans, const int no_trans, const float spatial_scale, const int output_dim, const int group_size, const int pooled_size, const int part_size, const int sample_per_part, const float trans_std) { AT_ASSERTM(input.type().is_cuda(), "input must be a CUDA tensor"); AT_ASSERTM(bbox.type().is_cuda(), "rois must be a CUDA tensor"); AT_ASSERTM(trans.type().is_cuda(), "trans must be a CUDA tensor"); const int batch = input.size(0); const int channels = input.size(1); const int height = input.size(2); const int width = input.size(3); const int channels_trans = no_trans ? 
2 : trans.size(1); const int num_bbox = bbox.size(0); AT_ASSERTM(channels == output_dim, "input channels and output channels must equal"); auto pooled_height = pooled_size; auto pooled_width = pooled_size; auto out = at::empty({num_bbox, output_dim, pooled_height, pooled_width}, input.options()); long out_size = num_bbox * output_dim * pooled_height * pooled_width; auto top_count = at::zeros({num_bbox, output_dim, pooled_height, pooled_width}, input.options()); const int num_classes = no_trans ? 1 : channels_trans / 2; const int channels_each_class = no_trans ? output_dim : output_dim / num_classes; cudaStream_t stream = at::cuda::getCurrentCUDAStream(); if (out.numel() == 0) { THCudaCheck(cudaGetLastError()); return std::make_tuple(out, top_count); } dim3 grid(std::min(THCCeilDiv(out_size, 512L), 4096L)); dim3 block(512); AT_DISPATCH_FLOATING_TYPES(input.type(), "dcn_v2_psroi_pooling_cuda_forward", [&] { DeformablePSROIPoolForwardKernelCuda<<>>( out_size, input.contiguous().data(), spatial_scale, channels, height, width, pooled_height, pooled_width, bbox.contiguous().data(), trans.contiguous().data(), no_trans, trans_std, sample_per_part, output_dim, group_size, part_size, num_classes, channels_each_class, out.data(), top_count.data()); }); THCudaCheck(cudaGetLastError()); return std::make_tuple(out, top_count); } std::tuple dcn_v2_psroi_pooling_cuda_backward(const at::Tensor &out_grad, const at::Tensor &input, const at::Tensor &bbox, const at::Tensor &trans, const at::Tensor &top_count, const int no_trans, const float spatial_scale, const int output_dim, const int group_size, const int pooled_size, const int part_size, const int sample_per_part, const float trans_std) { AT_ASSERTM(out_grad.type().is_cuda(), "out_grad must be a CUDA tensor"); AT_ASSERTM(input.type().is_cuda(), "input must be a CUDA tensor"); AT_ASSERTM(bbox.type().is_cuda(), "bbox must be a CUDA tensor"); AT_ASSERTM(trans.type().is_cuda(), "trans must be a CUDA tensor"); AT_ASSERTM(top_count.type().is_cuda(), "top_count must be a CUDA tensor"); const int batch = input.size(0); const int channels = input.size(1); const int height = input.size(2); const int width = input.size(3); const int channels_trans = no_trans ? 2 : trans.size(1); const int num_bbox = bbox.size(0); AT_ASSERTM(channels == output_dim, "input channels and output channels must equal"); auto pooled_height = pooled_size; auto pooled_width = pooled_size; long out_size = num_bbox * output_dim * pooled_height * pooled_width; const int num_classes = no_trans ? 1 : channels_trans / 2; const int channels_each_class = no_trans ? 
output_dim : output_dim / num_classes; auto input_grad = at::zeros({batch, channels, height, width}, out_grad.options()); auto trans_grad = at::zeros_like(trans); if (input_grad.numel() == 0) { THCudaCheck(cudaGetLastError()); return std::make_tuple(input_grad, trans_grad); } dim3 grid(std::min(THCCeilDiv(out_size, 512L), 4096L)); dim3 block(512); cudaStream_t stream = at::cuda::getCurrentCUDAStream(); AT_DISPATCH_FLOATING_TYPES(out_grad.type(), "dcn_v2_psroi_pooling_cuda_backward", [&] { DeformablePSROIPoolBackwardAccKernelCuda<<>>( out_size, out_grad.contiguous().data(), top_count.contiguous().data(), num_bbox, spatial_scale, channels, height, width, pooled_height, pooled_width, output_dim, input_grad.contiguous().data(), trans_grad.contiguous().data(), input.contiguous().data(), bbox.contiguous().data(), trans.contiguous().data(), no_trans, trans_std, sample_per_part, group_size, part_size, num_classes, channels_each_class); }); THCudaCheck(cudaGetLastError()); return std::make_tuple(input_grad, trans_grad); } ================================================ FILE: DCNv2/src/cuda/vision.h ================================================ #pragma once #include at::Tensor dcn_v2_cuda_forward(const at::Tensor &input, const at::Tensor &weight, const at::Tensor &bias, const at::Tensor &offset, const at::Tensor &mask, const int kernel_h, const int kernel_w, const int stride_h, const int stride_w, const int pad_h, const int pad_w, const int dilation_h, const int dilation_w, const int deformable_group); std::vector dcn_v2_cuda_backward(const at::Tensor &input, const at::Tensor &weight, const at::Tensor &bias, const at::Tensor &offset, const at::Tensor &mask, const at::Tensor &grad_output, int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, int pad_w, int dilation_h, int dilation_w, int deformable_group); std::tuple dcn_v2_psroi_pooling_cuda_forward(const at::Tensor &input, const at::Tensor &bbox, const at::Tensor &trans, const int no_trans, const float spatial_scale, const int output_dim, const int group_size, const int pooled_size, const int part_size, const int sample_per_part, const float trans_std); std::tuple dcn_v2_psroi_pooling_cuda_backward(const at::Tensor &out_grad, const at::Tensor &input, const at::Tensor &bbox, const at::Tensor &trans, const at::Tensor &top_count, const int no_trans, const float spatial_scale, const int output_dim, const int group_size, const int pooled_size, const int part_size, const int sample_per_part, const float trans_std); ================================================ FILE: DCNv2/src/dcn_v2.h ================================================ #pragma once #include "cpu/vision.h" #ifdef WITH_CUDA #include "cuda/vision.h" #endif at::Tensor dcn_v2_forward(const at::Tensor &input, const at::Tensor &weight, const at::Tensor &bias, const at::Tensor &offset, const at::Tensor &mask, const int kernel_h, const int kernel_w, const int stride_h, const int stride_w, const int pad_h, const int pad_w, const int dilation_h, const int dilation_w, const int deformable_group) { if (input.type().is_cuda()) { #ifdef WITH_CUDA return dcn_v2_cuda_forward(input, weight, bias, offset, mask, kernel_h, kernel_w, stride_h, stride_w, pad_h, pad_w, dilation_h, dilation_w, deformable_group); #else AT_ERROR("Not compiled with GPU support"); #endif } else{ return dcn_v2_cpu_forward(input, weight, bias, offset, mask, kernel_h, kernel_w, stride_h, stride_w, pad_h, pad_w, dilation_h, dilation_w, deformable_group); } } std::vector dcn_v2_backward(const at::Tensor &input, const 
at::Tensor &weight, const at::Tensor &bias, const at::Tensor &offset, const at::Tensor &mask, const at::Tensor &grad_output, int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, int pad_w, int dilation_h, int dilation_w, int deformable_group) { if (input.type().is_cuda()) { #ifdef WITH_CUDA return dcn_v2_cuda_backward(input, weight, bias, offset, mask, grad_output, kernel_h, kernel_w, stride_h, stride_w, pad_h, pad_w, dilation_h, dilation_w, deformable_group); #else AT_ERROR("Not compiled with GPU support"); #endif } else{ return dcn_v2_cpu_backward(input, weight, bias, offset, mask, grad_output, kernel_h, kernel_w, stride_h, stride_w, pad_h, pad_w, dilation_h, dilation_w, deformable_group); } } std::tuple dcn_v2_psroi_pooling_forward(const at::Tensor &input, const at::Tensor &bbox, const at::Tensor &trans, const int no_trans, const float spatial_scale, const int output_dim, const int group_size, const int pooled_size, const int part_size, const int sample_per_part, const float trans_std) { if (input.type().is_cuda()) { #ifdef WITH_CUDA return dcn_v2_psroi_pooling_cuda_forward(input, bbox, trans, no_trans, spatial_scale, output_dim, group_size, pooled_size, part_size, sample_per_part, trans_std); #else AT_ERROR("Not compiled with GPU support"); #endif } else{ return dcn_v2_psroi_pooling_cpu_forward(input, bbox, trans, no_trans, spatial_scale, output_dim, group_size, pooled_size, part_size, sample_per_part, trans_std); } } std::tuple dcn_v2_psroi_pooling_backward(const at::Tensor &out_grad, const at::Tensor &input, const at::Tensor &bbox, const at::Tensor &trans, const at::Tensor &top_count, const int no_trans, const float spatial_scale, const int output_dim, const int group_size, const int pooled_size, const int part_size, const int sample_per_part, const float trans_std) { if (input.type().is_cuda()) { #ifdef WITH_CUDA return dcn_v2_psroi_pooling_cuda_backward(out_grad, input, bbox, trans, top_count, no_trans, spatial_scale, output_dim, group_size, pooled_size, part_size, sample_per_part, trans_std); #else AT_ERROR("Not compiled with GPU support"); #endif } else{ return dcn_v2_psroi_pooling_cpu_backward(out_grad, input, bbox, trans, top_count, no_trans, spatial_scale, output_dim, group_size, pooled_size, part_size, sample_per_part, trans_std); } } ================================================ FILE: DCNv2/src/vision.cpp ================================================ #include "dcn_v2.h" PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { m.def("dcn_v2_forward", &dcn_v2_forward, "dcn_v2_forward"); m.def("dcn_v2_backward", &dcn_v2_backward, "dcn_v2_backward"); m.def("dcn_v2_psroi_pooling_forward", &dcn_v2_psroi_pooling_forward, "dcn_v2_psroi_pooling_forward"); m.def("dcn_v2_psroi_pooling_backward", &dcn_v2_psroi_pooling_backward, "dcn_v2_psroi_pooling_backward"); } ================================================ FILE: DCNv2/test/test.py ================================================ #!/usr/bin/env python from __future__ import absolute_import, division, print_function import torch import torch.nn as nn from torch.autograd import gradcheck from dcn_v2 import DCN, DCNPooling, DCNv2, DCNv2Pooling, dcn_v2_conv, dcn_v2_pooling deformable_groups = 1 N, inC, inH, inW = 2, 2, 4, 4 outC = 2 kH, kW = 3, 3 def conv_identify(weight, bias): weight.data.zero_() bias.data.zero_() o, i, h, w = weight.shape y = h // 2 x = w // 2 for p in range(i): for q in range(o): if p == q: weight.data[q, p, y, x] = 1.0 def check_zero_offset(): conv_offset = nn.Conv2d( inC, deformable_groups * 2 * kH * kW, 
        kernel_size=(kH, kW),
        stride=(1, 1),
        padding=(1, 1),
        bias=True,
    ).cuda()
    conv_mask = nn.Conv2d(
        inC,
        deformable_groups * 1 * kH * kW,
        kernel_size=(kH, kW),
        stride=(1, 1),
        padding=(1, 1),
        bias=True,
    ).cuda()

    dcn_v2 = DCNv2(inC, outC, (kH, kW), stride=1, padding=1, dilation=1, deformable_groups=deformable_groups).cuda()

    conv_offset.weight.data.zero_()
    conv_offset.bias.data.zero_()
    conv_mask.weight.data.zero_()
    conv_mask.bias.data.zero_()
    conv_identify(dcn_v2.weight, dcn_v2.bias)

    input = torch.randn(N, inC, inH, inW).cuda()
    offset = conv_offset(input)
    mask = conv_mask(input)
    mask = torch.sigmoid(mask)
    output = dcn_v2(input, offset, mask)
    output *= 2
    d = (input - output).abs().max()
    if d < 1e-10:
        print("Zero offset passed")
    else:
        print("Zero offset failed")
        print(input)
        print(output)


def check_gradient_dconv():
    input = torch.rand(N, inC, inH, inW).cuda() * 0.01
    input.requires_grad = True
    offset = torch.randn(N, deformable_groups * 2 * kW * kH, inH, inW).cuda() * 2
    # offset.data.zero_()
    # offset.data -= 0.5
    offset.requires_grad = True
    mask = torch.rand(N, deformable_groups * 1 * kW * kH, inH, inW).cuda()
    # mask.data.zero_()
    mask.requires_grad = True
    mask = torch.sigmoid(mask)
    weight = torch.randn(outC, inC, kH, kW).cuda()
    weight.requires_grad = True
    bias = torch.rand(outC).cuda()
    bias.requires_grad = True
    stride = 1
    padding = 1
    dilation = 1

    print(
        "check_gradient_dconv: ",
        gradcheck(
            dcn_v2_conv,
            (input, offset, mask, weight, bias, stride, padding, dilation, deformable_groups),
            eps=1e-3,
            atol=1e-4,
            rtol=1e-2,
        ),
    )


def check_pooling_zero_offset():
    input = torch.randn(2, 16, 64, 64).cuda().zero_()
    input[0, :, 16:26, 16:26] = 1.0
    input[1, :, 10:20, 20:30] = 2.0
    rois = (
        torch.tensor(
            [
                [0, 65, 65, 103, 103],
                [1, 81, 41, 119, 79],
            ]
        )
        .cuda()
        .float()
    )
    pooling = DCNv2Pooling(
        spatial_scale=1.0 / 4,
        pooled_size=7,
        output_dim=16,
        no_trans=True,
        group_size=1,
        trans_std=0.0,
    ).cuda()
    out = pooling(input, rois, input.new())
    s = ", ".join(["%f" % out[i, :, :, :].mean().item() for i in range(rois.shape[0])])
    print(s)

    dpooling = DCNv2Pooling(
        spatial_scale=1.0 / 4,
        pooled_size=7,
        output_dim=16,
        no_trans=False,
        group_size=1,
        trans_std=0.0,
    ).cuda()
    offset = torch.randn(20, 2, 7, 7).cuda().zero_()
    dout = dpooling(input, rois, offset)
    s = ", ".join(["%f" % dout[i, :, :, :].mean().item() for i in range(rois.shape[0])])
    print(s)


def check_gradient_dpooling():
    input = torch.randn(2, 3, 5, 5).cuda() * 0.01
    N = 4
    batch_inds = torch.randint(2, (N, 1)).cuda().float()
    x = torch.rand((N, 1)).cuda().float() * 15
    y = torch.rand((N, 1)).cuda().float() * 15
    w = torch.rand((N, 1)).cuda().float() * 10
    h = torch.rand((N, 1)).cuda().float() * 10
    rois = torch.cat((batch_inds, x, y, x + w, y + h), dim=1)
    offset = torch.randn(N, 2, 3, 3).cuda()
    input.requires_grad = True
    offset.requires_grad = True

    spatial_scale = 1.0 / 4
    pooled_size = 3
    output_dim = 3
    no_trans = 0
    group_size = 1
    trans_std = 0.0
    sample_per_part = 4
    part_size = pooled_size

    print(
        "check_gradient_dpooling:",
        gradcheck(
            dcn_v2_pooling,
            (input, rois, offset, spatial_scale, pooled_size, output_dim,
             no_trans, group_size, part_size, sample_per_part, trans_std),
            eps=1e-4,
        ),
    )


def example_dconv():
    input = torch.randn(2, 64, 128, 128).cuda()
    # wrap all things (offset and mask) in DCN
    dcn = DCN(64, 64, kernel_size=(3, 3), stride=1, padding=1, deformable_groups=2).cuda()
    # print(dcn.weight.shape, input.shape)
    output = dcn(input)
    target = output.new(*output.size())
    target.data.uniform_(-0.01, 0.01)
    error = (target - output).mean()
    error.backward()
    print(output.shape)
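# Illustration of the offset/mask layout DCNv2 expects (it matches the
# conv_offset/conv_mask shapes built in check_zero_offset above): for each
# deformable group, the offset channels are interleaved as (dy, dx) per kernel
# tap, and the mask tensor holds one channel per tap per group.
# `split_offset` is a hypothetical helper added for exposition only; it is not
# part of the dcn_v2 package.
def split_offset(offset, dg=deformable_groups, kh=kH, kw=kW):
    n, c, h, w = offset.shape
    assert c == dg * 2 * kh * kw
    o = offset.view(n, dg, kh * kw, 2, h, w)
    dy, dx = o[:, :, :, 0], o[:, :, :, 1]  # per-tap offsets, in input pixels
    return dy, dx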
def example_dpooling(): input = torch.randn(2, 32, 64, 64).cuda() batch_inds = torch.randint(2, (20, 1)).cuda().float() x = torch.randint(256, (20, 1)).cuda().float() y = torch.randint(256, (20, 1)).cuda().float() w = torch.randint(64, (20, 1)).cuda().float() h = torch.randint(64, (20, 1)).cuda().float() rois = torch.cat((batch_inds, x, y, x + w, y + h), dim=1) offset = torch.randn(20, 2, 7, 7).cuda() input.requires_grad = True offset.requires_grad = True # normal roi_align pooling = DCNv2Pooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=32, no_trans=True, group_size=1, trans_std=0.1, ).cuda() # deformable pooling dpooling = DCNv2Pooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=32, no_trans=False, group_size=1, trans_std=0.1, ).cuda() out = pooling(input, rois, offset) dout = dpooling(input, rois, offset) print(out.shape) print(dout.shape) target_out = out.new(*out.size()) target_out.data.uniform_(-0.01, 0.01) target_dout = dout.new(*dout.size()) target_dout.data.uniform_(-0.01, 0.01) e = (target_out - out).mean() e.backward() e = (target_dout - dout).mean() e.backward() def example_mdpooling(): input = torch.randn(2, 32, 64, 64).cuda() input.requires_grad = True batch_inds = torch.randint(2, (20, 1)).cuda().float() x = torch.randint(256, (20, 1)).cuda().float() y = torch.randint(256, (20, 1)).cuda().float() w = torch.randint(64, (20, 1)).cuda().float() h = torch.randint(64, (20, 1)).cuda().float() rois = torch.cat((batch_inds, x, y, x + w, y + h), dim=1) # mdformable pooling (V2) dpooling = DCNPooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=32, no_trans=False, group_size=1, trans_std=0.1, deform_fc_dim=1024, ).cuda() dout = dpooling(input, rois) target = dout.new(*dout.size()) target.data.uniform_(-0.1, 0.1) error = (target - dout).mean() error.backward() print(dout.shape) if __name__ == "__main__": example_dconv() example_dpooling() example_mdpooling() check_pooling_zero_offset() # zero offset check if inC == outC: check_zero_offset() check_gradient_dpooling() check_gradient_dconv() # """ # ****** Note: backward is not reentrant error may not be a serious problem, # ****** since the max error is less than 1e-7, # ****** Still looking for what trigger this problem # """ ================================================ FILE: DCNv2/test/testcpu.py ================================================ #!/usr/bin/env python from __future__ import absolute_import, division, print_function import torch import torch.nn as nn from torch.autograd import gradcheck from dcn_v2 import DCN, DCNPooling, DCNv2, DCNv2Pooling, dcn_v2_conv, dcn_v2_pooling deformable_groups = 1 N, inC, inH, inW = 2, 2, 4, 4 outC = 2 kH, kW = 3, 3 def conv_identify(weight, bias): weight.data.zero_() bias.data.zero_() o, i, h, w = weight.shape y = h // 2 x = w // 2 for p in range(i): for q in range(o): if p == q: weight.data[q, p, y, x] = 1.0 def check_zero_offset(): conv_offset = nn.Conv2d( inC, deformable_groups * 2 * kH * kW, kernel_size=(kH, kW), stride=(1, 1), padding=(1, 1), bias=True, ) conv_mask = nn.Conv2d( inC, deformable_groups * 1 * kH * kW, kernel_size=(kH, kW), stride=(1, 1), padding=(1, 1), bias=True, ) dcn_v2 = DCNv2(inC, outC, (kH, kW), stride=1, padding=1, dilation=1, deformable_groups=deformable_groups) conv_offset.weight.data.zero_() conv_offset.bias.data.zero_() conv_mask.weight.data.zero_() conv_mask.bias.data.zero_() conv_identify(dcn_v2.weight, dcn_v2.bias) input = torch.randn(N, inC, inH, inW) offset = conv_offset(input) mask = conv_mask(input) mask = torch.sigmoid(mask) output = 
dcn_v2(input, offset, mask) output *= 2 d = (input - output).abs().max() if d < 1e-10: print("Zero offset passed") else: print("Zero offset failed") print(input) print(output) def check_gradient_dconv(): input = torch.rand(N, inC, inH, inW) * 0.01 input.requires_grad = True offset = torch.randn(N, deformable_groups * 2 * kW * kH, inH, inW) * 2 # offset.data.zero_() # offset.data -= 0.5 offset.requires_grad = True mask = torch.rand(N, deformable_groups * 1 * kW * kH, inH, inW) # mask.data.zero_() mask.requires_grad = True mask = torch.sigmoid(mask) weight = torch.randn(outC, inC, kH, kW) weight.requires_grad = True bias = torch.rand(outC) bias.requires_grad = True stride = 1 padding = 1 dilation = 1 print( "check_gradient_dconv: ", gradcheck( dcn_v2_conv, (input, offset, mask, weight, bias, stride, padding, dilation, deformable_groups), eps=1e-3, atol=1e-4, rtol=1e-2, ), ) def check_pooling_zero_offset(): input = torch.randn(2, 16, 64, 64).zero_() input[0, :, 16:26, 16:26] = 1.0 input[1, :, 10:20, 20:30] = 2.0 rois = torch.tensor( [ [0, 65, 65, 103, 103], [1, 81, 41, 119, 79], ] ).float() pooling = DCNv2Pooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=16, no_trans=True, group_size=1, trans_std=0.0, ) out = pooling(input, rois, input.new()) s = ", ".join(["%f" % out[i, :, :, :].mean().item() for i in range(rois.shape[0])]) print(s) dpooling = DCNv2Pooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=16, no_trans=False, group_size=1, trans_std=0.0, ) offset = torch.randn(20, 2, 7, 7).zero_() dout = dpooling(input, rois, offset) s = ", ".join(["%f" % dout[i, :, :, :].mean().item() for i in range(rois.shape[0])]) print(s) def check_gradient_dpooling(): input = torch.randn(2, 3, 5, 5) * 0.01 N = 4 batch_inds = torch.randint(2, (N, 1)).float() x = torch.rand((N, 1)).float() * 15 y = torch.rand((N, 1)).float() * 15 w = torch.rand((N, 1)).float() * 10 h = torch.rand((N, 1)).float() * 10 rois = torch.cat((batch_inds, x, y, x + w, y + h), dim=1) offset = torch.randn(N, 2, 3, 3) input.requires_grad = True offset.requires_grad = True spatial_scale = 1.0 / 4 pooled_size = 3 output_dim = 3 no_trans = 0 group_size = 1 trans_std = 0.0 sample_per_part = 4 part_size = pooled_size print( "check_gradient_dpooling:", gradcheck( dcn_v2_pooling, ( input, rois, offset, spatial_scale, pooled_size, output_dim, no_trans, group_size, part_size, sample_per_part, trans_std, ), eps=1e-4, ), ) def example_dconv(): input = torch.randn(2, 64, 128, 128) # wrap all things (offset and mask) in DCN dcn = DCN(64, 64, kernel_size=(3, 3), stride=1, padding=1, deformable_groups=2) # print(dcn.weight.shape, input.shape) output = dcn(input) targert = output.new(*output.size()) targert.data.uniform_(-0.01, 0.01) error = (targert - output).mean() error.backward() print(output.shape) def example_dpooling(): input = torch.randn(2, 32, 64, 64) batch_inds = torch.randint(2, (20, 1)).float() x = torch.randint(256, (20, 1)).float() y = torch.randint(256, (20, 1)).float() w = torch.randint(64, (20, 1)).float() h = torch.randint(64, (20, 1)).float() rois = torch.cat((batch_inds, x, y, x + w, y + h), dim=1) offset = torch.randn(20, 2, 7, 7) input.requires_grad = True offset.requires_grad = True # normal roi_align pooling = DCNv2Pooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=32, no_trans=True, group_size=1, trans_std=0.1, ) # deformable pooling dpooling = DCNv2Pooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=32, no_trans=False, group_size=1, trans_std=0.1, ) out = pooling(input, rois, offset) dout = 
dpooling(input, rois, offset) print(out.shape) print(dout.shape) target_out = out.new(*out.size()) target_out.data.uniform_(-0.01, 0.01) target_dout = dout.new(*dout.size()) target_dout.data.uniform_(-0.01, 0.01) e = (target_out - out).mean() e.backward() e = (target_dout - dout).mean() e.backward() def example_mdpooling(): input = torch.randn(2, 32, 64, 64) input.requires_grad = True batch_inds = torch.randint(2, (20, 1)).float() x = torch.randint(256, (20, 1)).float() y = torch.randint(256, (20, 1)).float() w = torch.randint(64, (20, 1)).float() h = torch.randint(64, (20, 1)).float() rois = torch.cat((batch_inds, x, y, x + w, y + h), dim=1) # mdformable pooling (V2) dpooling = DCNPooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=32, no_trans=False, group_size=1, trans_std=0.1, deform_fc_dim=1024, ) dout = dpooling(input, rois) target = dout.new(*dout.size()) target.data.uniform_(-0.1, 0.1) error = (target - dout).mean() error.backward() print(dout.shape) if __name__ == "__main__": example_dconv() example_dpooling() example_mdpooling() check_pooling_zero_offset() # zero offset check if inC == outC: check_zero_offset() check_gradient_dpooling() check_gradient_dconv() # """ # ****** Note: backward is not reentrant error may not be a serious problem, # ****** since the max error is less than 1e-7, # ****** Still looking for what trigger this problem # """ ================================================ FILE: DCNv2/test/testcuda.py ================================================ #!/usr/bin/env python from __future__ import absolute_import, division, print_function import torch import torch.nn as nn from torch.autograd import gradcheck from dcn_v2 import DCN, DCNPooling, DCNv2, DCNv2Pooling, dcn_v2_conv, dcn_v2_pooling deformable_groups = 1 N, inC, inH, inW = 2, 2, 4, 4 outC = 2 kH, kW = 3, 3 def conv_identify(weight, bias): weight.data.zero_() bias.data.zero_() o, i, h, w = weight.shape y = h // 2 x = w // 2 for p in range(i): for q in range(o): if p == q: weight.data[q, p, y, x] = 1.0 def check_zero_offset(): conv_offset = nn.Conv2d( inC, deformable_groups * 2 * kH * kW, kernel_size=(kH, kW), stride=(1, 1), padding=(1, 1), bias=True, ).cuda() conv_mask = nn.Conv2d( inC, deformable_groups * 1 * kH * kW, kernel_size=(kH, kW), stride=(1, 1), padding=(1, 1), bias=True, ).cuda() dcn_v2 = DCNv2(inC, outC, (kH, kW), stride=1, padding=1, dilation=1, deformable_groups=deformable_groups).cuda() conv_offset.weight.data.zero_() conv_offset.bias.data.zero_() conv_mask.weight.data.zero_() conv_mask.bias.data.zero_() conv_identify(dcn_v2.weight, dcn_v2.bias) input = torch.randn(N, inC, inH, inW).cuda() offset = conv_offset(input) mask = conv_mask(input) mask = torch.sigmoid(mask) output = dcn_v2(input, offset, mask) output *= 2 d = (input - output).abs().max() if d < 1e-10: print("Zero offset passed") else: print("Zero offset failed") print(input) print(output) def check_gradient_dconv(): input = torch.rand(N, inC, inH, inW).cuda() * 0.01 input.requires_grad = True offset = torch.randn(N, deformable_groups * 2 * kW * kH, inH, inW).cuda() * 2 # offset.data.zero_() # offset.data -= 0.5 offset.requires_grad = True mask = torch.rand(N, deformable_groups * 1 * kW * kH, inH, inW).cuda() # mask.data.zero_() mask.requires_grad = True mask = torch.sigmoid(mask) weight = torch.randn(outC, inC, kH, kW).cuda() weight.requires_grad = True bias = torch.rand(outC).cuda() bias.requires_grad = True stride = 1 padding = 1 dilation = 1 print( "check_gradient_dconv: ", gradcheck( dcn_v2_conv, (input, offset, mask, 
weight, bias, stride, padding, dilation, deformable_groups), eps=1e-3, atol=1e-4, rtol=1e-2, ), ) def check_pooling_zero_offset(): input = torch.randn(2, 16, 64, 64).cuda().zero_() input[0, :, 16:26, 16:26] = 1.0 input[1, :, 10:20, 20:30] = 2.0 rois = ( torch.tensor( [ [0, 65, 65, 103, 103], [1, 81, 41, 119, 79], ] ) .cuda() .float() ) pooling = DCNv2Pooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=16, no_trans=True, group_size=1, trans_std=0.0, ).cuda() out = pooling(input, rois, input.new()) s = ", ".join(["%f" % out[i, :, :, :].mean().item() for i in range(rois.shape[0])]) print(s) dpooling = DCNv2Pooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=16, no_trans=False, group_size=1, trans_std=0.0, ).cuda() offset = torch.randn(20, 2, 7, 7).cuda().zero_() dout = dpooling(input, rois, offset) s = ", ".join(["%f" % dout[i, :, :, :].mean().item() for i in range(rois.shape[0])]) print(s) def check_gradient_dpooling(): input = torch.randn(2, 3, 5, 5).cuda().float() * 0.01 N = 4 batch_inds = torch.randint(2, (N, 1)).cuda().float() x = torch.rand((N, 1)).cuda().float() * 15 y = torch.rand((N, 1)).cuda().float() * 15 w = torch.rand((N, 1)).cuda().float() * 10 h = torch.rand((N, 1)).cuda().float() * 10 rois = torch.cat((batch_inds, x, y, x + w, y + h), dim=1) offset = torch.randn(N, 2, 3, 3).cuda() input.requires_grad = True offset.requires_grad = True spatial_scale = 1.0 / 4 pooled_size = 3 output_dim = 3 no_trans = 0 group_size = 1 trans_std = 0.0 sample_per_part = 4 part_size = pooled_size print( "check_gradient_dpooling:", gradcheck( dcn_v2_pooling, ( input, rois, offset, spatial_scale, pooled_size, output_dim, no_trans, group_size, part_size, sample_per_part, trans_std, ), eps=1e-4, ), ) def example_dconv(): input = torch.randn(2, 64, 128, 128).cuda() # wrap all things (offset and mask) in DCN dcn = DCN(64, 64, kernel_size=(3, 3), stride=1, padding=1, deformable_groups=2).cuda() # print(dcn.weight.shape, input.shape) output = dcn(input) targert = output.new(*output.size()) targert.data.uniform_(-0.01, 0.01) error = (targert - output).mean() error.backward() print(output.shape) def example_dpooling(): input = torch.randn(2, 32, 64, 64).cuda() batch_inds = torch.randint(2, (20, 1)).cuda().float() x = torch.randint(256, (20, 1)).cuda().float() y = torch.randint(256, (20, 1)).cuda().float() w = torch.randint(64, (20, 1)).cuda().float() h = torch.randint(64, (20, 1)).cuda().float() rois = torch.cat((batch_inds, x, y, x + w, y + h), dim=1) offset = torch.randn(20, 2, 7, 7).cuda() input.requires_grad = True offset.requires_grad = True # normal roi_align pooling = DCNv2Pooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=32, no_trans=True, group_size=1, trans_std=0.1, ).cuda() # deformable pooling dpooling = DCNv2Pooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=32, no_trans=False, group_size=1, trans_std=0.1, ).cuda() out = pooling(input, rois, offset) dout = dpooling(input, rois, offset) print(out.shape) print(dout.shape) target_out = out.new(*out.size()) target_out.data.uniform_(-0.01, 0.01) target_dout = dout.new(*dout.size()) target_dout.data.uniform_(-0.01, 0.01) e = (target_out - out).mean() e.backward() e = (target_dout - dout).mean() e.backward() def example_mdpooling(): input = torch.randn(2, 32, 64, 64).cuda() input.requires_grad = True batch_inds = torch.randint(2, (20, 1)).cuda().float() x = torch.randint(256, (20, 1)).cuda().float() y = torch.randint(256, (20, 1)).cuda().float() w = torch.randint(64, (20, 1)).cuda().float() h = torch.randint(64, (20, 
1)).cuda().float() rois = torch.cat((batch_inds, x, y, x + w, y + h), dim=1) # mdformable pooling (V2) dpooling = DCNPooling( spatial_scale=1.0 / 4, pooled_size=7, output_dim=32, no_trans=False, group_size=1, trans_std=0.1, deform_fc_dim=1024, ).cuda() dout = dpooling(input, rois) target = dout.new(*dout.size()) target.data.uniform_(-0.1, 0.1) error = (target - dout).mean() error.backward() print(dout.shape) if __name__ == "__main__": example_dconv() example_dpooling() example_mdpooling() check_pooling_zero_offset() # zero offset check if inC == outC: check_zero_offset() check_gradient_dpooling() check_gradient_dconv() # """ # ****** Note: backward is not reentrant error may not be a serious problem, # ****** since the max error is less than 1e-7, # ****** Still looking for what trigger this problem # """ ================================================ FILE: LICENSE ================================================ Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) 
The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

================================================
FILE: README.md
================================================

# FaPN: Feature-aligned Pyramid Network for Dense Image Prediction

[[arXiv]](https://arxiv.org/pdf/2108.07058.pdf) [[Project Page]](http://www.shihuahuang.cn/fapn/)

```BibTex
@inproceedings{
  huang2021fapn,
  title={{FaPN}: Feature-aligned Pyramid Network for Dense Image Prediction},
  author={Shihua Huang and Zhichao Lu and Ran Cheng and Cheng He},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021}
}
```

## Overview

FaPN vs. FPN | Before vs. After Alignment
:-------------------------:|:-------------------------:
*(figure)* | *(figure)*

This project provides the official implementation of our ICCV 2021 paper "[FaPN: Feature-aligned Pyramid Network for Dense Image Prediction](https://arxiv.org/pdf/2108.07058.pdf)", built on [Detectron2](https://github.com/facebookresearch/detectron2). FaPN is a simple yet effective top-down pyramidal architecture for generating multi-scale features for dense image prediction. Comprising a feature alignment module (FAM) and a feature selection module (FSM), FaPN addresses the feature-misalignment issue of the original [FPN](https://arxiv.org/abs/1612.03144), leading to substantial improvements on various dense prediction tasks, such as object detection and semantic, instance, and panoptic segmentation.

## Installation

This project is based on [Detectron2](https://github.com/facebookresearch/detectron2) and can be set up as follows.

* Install Detectron2 following [the instructions](https://detectron2.readthedocs.io/tutorials/install.html).
* Set up the datasets following [the structure](https://github.com/facebookresearch/detectron2/blob/master/datasets/README.md).
* Copy this project to `/path/to/detectron2`.
* Install DCNv2 following [Install DCNv2.md](./DCNv2/README.md).

## Training

To train a model with 8 GPUs, run:

```bash
cd /path/to/detectron2/tools
python3 train_net.py --config-file <config.yaml> --num-gpus 8
```

For example, to launch Faster R-CNN training (1x schedule) with a ResNet-50 backbone on 8 GPUs, execute:

```bash
cd /path/to/detectron2/tools
python3 train_net.py --config-file ../configs/COCO-Detection/faster_rcnn_R_50_FAN_1x.yaml --num-gpus 8
```

## Evaluation

To evaluate a pre-trained model with 8 GPUs, run:

```bash
cd /path/to/detectron2/tools
python3 train_net.py --config-file <config.yaml> --num-gpus 8 --eval-only MODEL.WEIGHTS /path/to/model_checkpoint
```
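The same launch can also be scripted with Detectron2's stock `DefaultTrainer` instead of `tools/train_net.py`. The snippet below is a minimal sketch, not a script shipped with this repository: it assumes the project (including the compiled DCNv2 op and the FAN backbone registration in `detectron2/modeling/backbone/__init__.py`) has been copied into Detectron2 as described above, and the `OUTPUT_DIR` value is a hypothetical example.

```python
# Minimal programmatic equivalent of the train_net.py launch above (a sketch).
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
# Config shipped with this repo; build_resnet_fan_backbone must already be
# registered (via the modified backbone package) for this file to resolve.
cfg.merge_from_file("configs/COCO-Detection/faster_rcnn_R_50_FAN_1x.yaml")
cfg.OUTPUT_DIR = "./faster_rcnn_r50_1x_fan"  # hypothetical output directory

trainer = DefaultTrainer(cfg)          # builds the model, optimizer, and data loaders
trainer.resume_or_load(resume=False)   # start from the ImageNet weights in the config
trainer.train()
```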
## Results

### COCO Object Detection

#### Faster R-CNN + FaPN:

Name | lr sched | box AP | box APs | box APm | box APl | download
:---:|:---:|:---:|:---:|:---:|:---:|:---:
R50 | 1x | 39.2 | 24.5 | 43.3 | 49.1 | model \| log
R101 | 3x | 42.8 | 27.0 | 46.2 | 54.9 | model \| log
### Cityscapes Semantic Segmentation

#### PointRend + FaPN:

Name | lr sched | mask mIoU | mask i_IoU | mask IoU_sup | mask iIoU_sup | download
:---:|:---:|:---:|:---:|:---:|:---:|:---:
R50 | 1x | 80.0 | 61.3 | 90.6 | 78.5 | model \| log
R101 | 1x | 80.1 | 62.2 | 90.8 | 78.6 | model \| log
### COCO Instance Segmentation

#### Mask R-CNN + FaPN:

Name | lr sched | mask AP | mask APs | box AP | box APs | download
:---:|:---:|:---:|:---:|:---:|:---:|:---:
R50 | 1x | 36.4 | 18.1 | 39.8 | 24.3 | model \| log
R101 | 3x | 39.4 | 20.9 | 43.8 | 27.4 | model \| log
#### PointRend + FaPN:

Name | lr sched | mask AP | mask APs | box AP | box APs | download
:---:|:---:|:---:|:---:|:---:|:---:|:---:
R50 | 1x | 37.6 | 18.6 | 39.4 | 24.2 | model \| log
### COCO Panoptic Segmentation

#### PanopticFPN + FaPN:

Name | lr sched | PQ | mask mIoU | St PQ | box AP | Th PQ | download
:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:
R50 | 1x | 41.1 | 43.4 | 32.5 | 38.7 | 46.9 | model \| log
R101 | 3x | 44.2 | 45.7 | 35.0 | 43.0 | 53.3 | model \| log
================================================ FILE: configs/Base-RCNN-FAN.yaml ================================================ MODEL: META_ARCHITECTURE: "GeneralizedRCNN" BACKBONE: NAME: "build_resnet_fan_backbone" # build_resnet_fan_backbone RESNETS: OUT_FEATURES: ["res2", "res3", "res4", "res5"] FPN: IN_FEATURES: ["res2", "res3", "res4", "res5"] ANCHOR_GENERATOR: SIZES: [[32], [64], [128], [256], [512]] # One size for each in feature map ASPECT_RATIOS: [[0.5, 1.0, 2.0]] # Three aspect ratios (same for all in feature maps) RPN: IN_FEATURES: ["p2", "p3", "p4", "p5", "p6"] PRE_NMS_TOPK_TRAIN: 2000 # Per FPN level PRE_NMS_TOPK_TEST: 1000 # Per FPN level # Detectron1 uses 2000 proposals per-batch, # (See "modeling/rpn/rpn_outputs.py" for details of this legacy issue) # which is approximately 1000 proposals per-image since the default batch size for FPN is 2. POST_NMS_TOPK_TRAIN: 1000 POST_NMS_TOPK_TEST: 1000 ROI_HEADS: NAME: "StandardROIHeads" IN_FEATURES: ["p2", "p3", "p4", "p5"] ROI_BOX_HEAD: NAME: "FastRCNNConvFCHead" NUM_FC: 2 POOLER_RESOLUTION: 7 ROI_MASK_HEAD: NAME: "MaskRCNNConvUpsampleHead" NUM_CONV: 4 POOLER_RESOLUTION: 14 DATASETS: TRAIN: ("coco_2017_train",) TEST: ("coco_2017_val",) SOLVER: IMS_PER_BATCH: 16 BASE_LR: 0.02 STEPS: (60000, 80000) MAX_ITER: 90000 INPUT: MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800) VERSION: 2 ================================================ FILE: configs/COCO-Detection/faster_rcnn_R_101_FAN_3x.yaml ================================================ _BASE_: "../Base-RCNN-FAN.yaml" MODEL: WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl" # WEIGHTS: "path/faster_rcnn_r101_3x_fan/model_final.pth" MASK_ON: False RESNETS: DEPTH: 101 SOLVER: STEPS: (210000, 250000) MAX_ITER: 270000 ================================================ FILE: configs/COCO-Detection/faster_rcnn_R_50_FAN_1x.yaml ================================================ _BASE_: "../Base-RCNN-FAN.yaml" MODEL: WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl" # WEIGHTS: "path/faster_rcnn_r50_1x_fan/model_final.pth" MASK_ON: False RESNETS: DEPTH: 50 ================================================ FILE: configs/COCO-InstanceSegmentation/mask_rcnn_R_101_FAN_3x.yaml ================================================ _BASE_: "../Base-RCNN-FAN.yaml" MODEL: WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl" # WEIGHTS: "path/mask_rcnn_r101_3x_fan/model_final.pth" MASK_ON: True RESNETS: DEPTH: 101 SOLVER: STEPS: (210000, 250000) MAX_ITER: 270000 ================================================ FILE: configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FAN_1x.yaml ================================================ _BASE_: "../Base-RCNN-FAN.yaml" MODEL: WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl" # WEIGHTS: "/home/cseadmin/huangsh/codes/detectron2/tools/mask_rcnn_r50_1x_fan/model_final.pth" MASK_ON: True RESNETS: DEPTH: 50 ================================================ FILE: configs/COCO-PanopticSegmentation/Base-Panoptic-FAN.yaml ================================================ _BASE_: "../Base-RCNN-FAN.yaml" MODEL: META_ARCHITECTURE: "PanopticFPN" MASK_ON: True SEM_SEG_HEAD: LOSS_WEIGHT: 0.5 DATASETS: TRAIN: ("coco_2017_train_panoptic_separated",) TEST: ("coco_2017_val_panoptic_separated",) DATALOADER: FILTER_EMPTY_ANNOTATIONS: False ================================================ FILE: configs/COCO-PanopticSegmentation/panoptic_fan_R_101_3x.yaml ================================================ _BASE_: "Base-Panoptic-FAN.yaml" MODEL: WEIGHTS: 
"detectron2://ImageNetPretrained/MSRA/R-101.pkl" # WEIGHTS: "path/panoptic_r101_3x_fan/model_final.pth" RESNETS: DEPTH: 101 SOLVER: STEPS: (210000, 250000) MAX_ITER: 270000 ================================================ FILE: configs/COCO-PanopticSegmentation/panoptic_fan_R_50_1x.yaml ================================================ _BASE_: "Base-Panoptic-FAN.yaml" MODEL: WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl" # WEIGHTS: "path/panoptic_r50_1x_fan/model_final.pth" RESNETS: DEPTH: 50 ================================================ FILE: detectron2/modeling/backbone/__init__.py ================================================ # Copyright (c) Facebook, Inc. and its affiliates. from .build import build_backbone, BACKBONE_REGISTRY # noqa F401 isort:skip from .backbone import Backbone from .fpn import FPN from .regnet import RegNet from .fan import FAN from .resnet import ( BasicStem, ResNet, ResNetBlockBase, build_resnet_backbone, make_stage, BottleneckBlock, ) __all__ = [k for k in globals().keys() if not k.startswith("_")] # TODO can expose more resnet blocks after careful consideration ================================================ FILE: detectron2/modeling/backbone/fan.py ================================================ # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved import math import fvcore.nn.weight_init as weight_init import torch.nn.functional as F import torch from torch import nn import os import torch import torchvision as tv import torchvision.transforms as transforms import torch.nn as nn import numpy as np import cv2 import PIL from PIL import Image import matplotlib.pyplot as plt import matplotlib.cm as mpl_color_map from detectron2.layers import Conv2d, ShapeSpec, get_norm from .backbone import Backbone from .build import BACKBONE_REGISTRY from .resnet import build_resnet_backbone __all__ = ["build_resnet_fan_backbone", "build_retinanet_resnet_fan_backbone", "FAN"] # from dcn_v2 import DCN, DCNPooling, DCNv2, DCNv2Pooling, dcn_v2_conv, dcn_v2_pooling from dcn_v2 import DCN as dcn_v2 from detectron2.layers import (CNNBlockBase, Conv2d, DeformConv, ModulatedDeformConv, ShapeSpec, get_norm, ) class FeatureSelectionModule(nn.Module): def __init__(self, in_chan, out_chan, norm="GN"): super(FeatureSelectionModule, self).__init__() self.conv_atten = Conv2d(in_chan, in_chan, kernel_size=1, bias=False, norm=get_norm(norm, in_chan)) self.sigmoid = nn.Sigmoid() self.conv = Conv2d(in_chan, out_chan, kernel_size=1, bias=False, norm=get_norm('', out_chan)) weight_init.c2_xavier_fill(self.conv_atten) weight_init.c2_xavier_fill(self.conv) def forward(self, x): atten = self.sigmoid(self.conv_atten(F.avg_pool2d(x, x.size()[2:]))) feat = torch.mul(x, atten) x = x + feat feat = self.conv(x) return feat class FeatureAlign_V2(nn.Module): # FaPN full version def __init__(self, in_nc=128, out_nc=128, norm=None): super(FeatureAlign_V2, self).__init__() self.lateral_conv = FeatureSelectionModule(in_nc, out_nc, norm="") self.offset = Conv2d(out_nc * 2, out_nc, kernel_size=1, stride=1, padding=0, bias=False, norm=norm) self.dcpack_L2 = dcn_v2(out_nc, out_nc, 3, stride=1, padding=1, dilation=1, deformable_groups=8, extra_offset_mask=True) self.relu = nn.ReLU(inplace=True) weight_init.c2_xavier_fill(self.offset) def forward(self, feat_l, feat_s, main_path=None): HW = feat_l.size()[2:] if feat_l.size()[2:] != feat_s.size()[2:]: feat_up = F.interpolate(feat_s, HW, mode='bilinear', align_corners=False) else: feat_up = feat_s feat_arm = self.lateral_conv(feat_l) # 
0~1 * feats offset = self.offset(torch.cat([feat_arm, feat_up * 2], dim=1)) # concat for offset by compute the dif feat_align = self.relu(self.dcpack_L2([feat_up, offset], main_path)) # [feat, offset] return feat_align + feat_arm class FAN(Backbone): """ This module implements :paper:`FPN`. It creates pyramid features built on top of some input feature maps. """ def __init__(self, bottom_up, in_features, out_channels, norm="", top_block=None, fuse_type="sum"): """ Args: bottom_up (Backbone): module representing the bottom up subnetwork. Must be a subclass of :class:`Backbone`. The multi-scale feature maps generated by the bottom up network, and listed in `in_features`, are used to generate FPN levels. in_features (list[str]): names of the input feature maps coming from the backbone to which FPN is attached. For example, if the backbone produces ["res2", "res3", "res4"], any *contiguous* sublist of these may be used; order must be from high to low resolution. out_channels (int): number of channels in the output feature maps. norm (str): the normalization to use. top_block (nn.Module or None): if provided, an extra operation will be performed on the output of the last (smallest resolution) FPN output, and the result will extend the result list. The top_block further downsamples the feature map. It must have an attribute "num_levels", meaning the number of extra FPN levels added by this block, and "in_feature", which is a string representing its input feature (e.g., p5). fuse_type (str): types for fusing the top down features and the lateral ones. It can be "sum" (default), which sums up element-wise; or "avg", which takes the element-wise mean of the two. """ super(FAN, self).__init__() assert isinstance(bottom_up, Backbone) # Feature map strides and channels from the bottom up network (e.g. ResNet) input_shapes = bottom_up.output_shape() strides = [input_shapes[f].stride for f in in_features] in_channels_per_feature = [input_shapes[f].channels for f in in_features] _assert_strides_are_log2_contiguous(strides) align_modules = [] output_convs = [] use_bias = norm == "" for idx, in_channels in enumerate(in_channels_per_feature[:-1]): stage = int(math.log2(strides[idx])) lateral_norm = get_norm(norm, out_channels) align_module = FeatureAlign_V2(in_channels, out_channels, norm=lateral_norm) # proposed fapn self.add_module("fan_align{}".format(stage), align_module) align_modules.append(align_module) output_conv = Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=use_bias, norm=get_norm(norm, out_channels), ) weight_init.c2_xavier_fill(output_conv) self.add_module("fpn_output{}".format(stage), output_conv) output_convs.append(output_conv) stage = int(math.log2(strides[len(in_channels_per_feature) - 1])) lateral_conv = Conv2d(in_channels_per_feature[-1], out_channels, kernel_size=1, bias=use_bias, norm=get_norm(norm, out_channels)) align_modules.append(lateral_conv) self.add_module("fan_align{}".format(stage), lateral_conv) # Place convs into top-down order (from low to high resolution) to make the top-down computation in forward clearer. self.align_modules = align_modules[::-1] self.output_convs = output_convs[::-1] self.top_block = top_block self.in_features = in_features self.bottom_up = bottom_up # Return feature names are "p", like ["p2", "p3", ..., "p6"] self._out_feature_strides = {"p{}".format(int(math.log2(s))): s for s in strides} # top block output feature maps. 
if self.top_block is not None: for s in range(stage, stage + self.top_block.num_levels): self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1) self._out_features = list(self._out_feature_strides.keys()) self._out_feature_channels = {k: out_channels for k in self._out_features} self._size_divisibility = strides[-1] assert fuse_type in {"avg", "sum"} self._fuse_type = fuse_type @property def size_divisibility(self): return self._size_divisibility def forward(self, x): """ Args: input (dict[str->Tensor]): mapping feature map name (e.g., "res5") to feature map tensor for each feature level in high to low resolution order. Returns: dict[str->Tensor]: mapping from feature map name to FPN feature map tensor in high to low resolution order. Returned feature names follow the FPN paper convention: "p", where stage has stride = 2 ** stage e.g., ["p2", "p3", ..., "p6"]. """ # Reverse feature maps into top-down order (from low to high resolution) bottom_up_features = self.bottom_up(x) x = [bottom_up_features[f] for f in self.in_features[::-1]] results = [] prev_features = self.align_modules[0](x[0]) results.append(prev_features) for features, align_module, output_conv in zip(x[1:], self.align_modules[1:], self.output_convs[0:]): prev_features = align_module(features, prev_features) results.insert(0, output_conv(prev_features)) if self.top_block is not None: top_block_in_feature = bottom_up_features.get(self.top_block.in_feature, None) if top_block_in_feature is None: top_block_in_feature = results[self._out_features.index(self.top_block.in_feature)] results.extend(self.top_block(top_block_in_feature)) assert len(self._out_features) == len(results) return dict(zip(self._out_features, results)) def output_shape(self): return {name: ShapeSpec(channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]) for name in self._out_features} def _assert_strides_are_log2_contiguous(strides): """ Assert that each stride is 2x times its preceding stride, i.e. "contiguous in log2". """ for i, stride in enumerate(strides[1:], 1): assert stride == 2 * strides[i - 1], "Strides {} {} are not log2 contiguous".format(stride, strides[i - 1]) class LastLevelMaxPool(nn.Module): """ This module is used in the original FPN to generate a downsampled P6 feature from P5. """ def __init__(self): super().__init__() self.num_levels = 1 self.in_feature = "p5" def forward(self, x): return [F.max_pool2d(x, kernel_size=1, stride=2, padding=0)] class LastLevelP6P7(nn.Module): """ This module is used in RetinaNet to generate extra layers, P6 and P7 from C5 feature. """ def __init__(self, in_channels, out_channels, in_feature="res5"): super().__init__() self.num_levels = 2 self.in_feature = in_feature self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) for module in [self.p6, self.p7]: weight_init.c2_xavier_fill(module) def forward(self, c5): p6 = self.p6(c5) p7 = self.p7(F.relu(p6)) return [p6, p7] @BACKBONE_REGISTRY.register() def build_resnet_fan_backbone(cfg, input_shape: ShapeSpec): """ Args: cfg: a detectron2 CfgNode Returns: backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. 
""" bottom_up = build_resnet_backbone(cfg, input_shape) in_features = cfg.MODEL.FPN.IN_FEATURES out_channels = cfg.MODEL.FPN.OUT_CHANNELS backbone = FAN(bottom_up=bottom_up, in_features=in_features, out_channels=out_channels, norm=cfg.MODEL.FPN.NORM, top_block=LastLevelMaxPool(), fuse_type=cfg.MODEL.FPN.FUSE_TYPE, ) return backbone @BACKBONE_REGISTRY.register() def build_retinanet_resnet_fan_backbone(cfg, input_shape: ShapeSpec): """ Args: cfg: a detectron2 CfgNode Returns: backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. """ bottom_up = build_resnet_backbone(cfg, input_shape) in_features = cfg.MODEL.FPN.IN_FEATURES out_channels = cfg.MODEL.FPN.OUT_CHANNELS in_channels_p6p7 = bottom_up.output_shape()["res5"].channels backbone = FAN(bottom_up=bottom_up, in_features=in_features, out_channels=out_channels, norm=cfg.MODEL.FPN.NORM, top_block=LastLevelP6P7(in_channels_p6p7, out_channels), fuse_type=cfg.MODEL.FPN.FUSE_TYPE, ) return backbone ================================================ FILE: projects/PointRend/configs/InstanceSegmentation/Base-PointRend-RCNN-FAN.yaml ================================================ _BASE_: "../../../../configs/Base-RCNN-FAN.yaml" MODEL: MASK_ON: true ROI_HEADS: NAME: "PointRendROIHeads" IN_FEATURES: ["p2", "p3", "p4", "p5"] ROI_BOX_HEAD: TRAIN_ON_PRED_BOXES: True ROI_MASK_HEAD: NAME: "CoarseMaskHead" FC_DIM: 1024 NUM_FC: 2 OUTPUT_SIDE_RESOLUTION: 7 IN_FEATURES: ["p2"] POINT_HEAD_ON: True POINT_HEAD: FC_DIM: 256 NUM_FC: 3 IN_FEATURES: ["p2"] INPUT: # PointRend for instance segmenation does not work with "polygon" mask_format. MASK_FORMAT: "bitmask" #DATASETS: # TEST: ("lvis_v1_val_cocofied",) ================================================ FILE: projects/PointRend/configs/InstanceSegmentation/pointrend_rcnn_R_50_FAN_1x_coco.yaml ================================================ _BASE_: Base-PointRend-RCNN-FAN.yaml MODEL: WEIGHTS: detectron2://ImageNetPretrained/MSRA/R-50.pkl # WEIGHTS: "path/pointrend_coco_r50_1x_fan/model_final.pth" RESNETS: DEPTH: 50 # To add COCO AP evaluation against the higher-quality LVIS annotations. 
# DATASETS: # TEST: ("coco_2017_val", "lvis_v0.5_val_cocofied") ================================================ FILE: projects/PointRend/configs/SemanticSegmentation/Base-PointRend-Semantic-FAN.yaml ================================================ _BASE_: "../../../../configs/Base-RCNN-FAN.yaml" MODEL: META_ARCHITECTURE: "SemanticSegmentor" BACKBONE: FREEZE_AT: 0 SEM_SEG_HEAD: NAME: "PointRendSemSegHead" POINT_HEAD: NUM_CLASSES: 54 FC_DIM: 256 NUM_FC: 3 IN_FEATURES: ["p2"] TRAIN_NUM_POINTS: 1024 SUBDIVISION_STEPS: 2 SUBDIVISION_NUM_POINTS: 8192 COARSE_SEM_SEG_HEAD_NAME: "SemSegFPNHead" COARSE_PRED_EACH_LAYER: False DATASETS: TRAIN: ("coco_2017_train_panoptic_stuffonly",) TEST: ("coco_2017_val_panoptic_stuffonly",) ================================================ FILE: projects/PointRend/configs/SemanticSegmentation/pointrend_semantic_R_101_FAN_1x_cityscapes.yaml ================================================ _BASE_: Base-PointRend-Semantic-FAN.yaml MODEL: WEIGHTS: detectron2://ImageNetPretrained/MSRA/R-101.pkl # WEIGHTS: "path/pointrend_cityscapes_r101_1x_fan/model_final.pth" RESNETS: DEPTH: 101 SEM_SEG_HEAD: NUM_CLASSES: 19 POINT_HEAD: NUM_CLASSES: 19 TRAIN_NUM_POINTS: 2048 SUBDIVISION_NUM_POINTS: 8192 DATASETS: TRAIN: ("cityscapes_fine_sem_seg_train",) TEST: ("cityscapes_fine_sem_seg_val",) SOLVER: BASE_LR: 0.01 STEPS: (40000, 55000) MAX_ITER: 65000 IMS_PER_BATCH: 32 INPUT: MIN_SIZE_TRAIN: (512, 768, 1024, 1280, 1536, 1792, 2048) MIN_SIZE_TRAIN_SAMPLING: "choice" MIN_SIZE_TEST: 1024 MAX_SIZE_TRAIN: 4096 MAX_SIZE_TEST: 2048 CROP: ENABLED: True TYPE: "absolute" SIZE: (512, 1024) SINGLE_CATEGORY_MAX_AREA: 0.75 COLOR_AUG_SSD: True DATALOADER: NUM_WORKERS: 10 OUTPUT_DIR: "./pointrend_cityscapes_r101_1x_fan" ================================================ FILE: projects/PointRend/configs/SemanticSegmentation/pointrend_semantic_R_50_FAN_1x_cityscapes.yaml ================================================ _BASE_: Base-PointRend-Semantic-FAN.yaml MODEL: WEIGHTS: detectron2://ImageNetPretrained/MSRA/R-50.pkl # WEIGHTS: "path/pointrend_cityscapes_r50_1x_fan/model_final.pth" RESNETS: DEPTH: 50 SEM_SEG_HEAD: NUM_CLASSES: 19 POINT_HEAD: NUM_CLASSES: 19 TRAIN_NUM_POINTS: 2048 SUBDIVISION_NUM_POINTS: 8192 DATASETS: TRAIN: ("cityscapes_fine_sem_seg_train",) TEST: ("cityscapes_fine_sem_seg_val",) SOLVER: BASE_LR: 0.01 STEPS: (40000, 55000) MAX_ITER: 65000 IMS_PER_BATCH: 32 INPUT: MIN_SIZE_TRAIN: (512, 768, 1024, 1280, 1536, 1792, 2048) MIN_SIZE_TRAIN_SAMPLING: "choice" MIN_SIZE_TEST: 1024 MAX_SIZE_TRAIN: 4096 MAX_SIZE_TEST: 2048 CROP: ENABLED: True TYPE: "absolute" SIZE: (512, 1024) SINGLE_CATEGORY_MAX_AREA: 0.75 COLOR_AUG_SSD: True DATALOADER: NUM_WORKERS: 10
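Because `build_resnet_fan_backbone` is registered in Detectron2's `BACKBONE_REGISTRY` (see `detectron2/modeling/backbone/fan.py` above), the FAN pyramid can also be built and inspected on its own. The following is a minimal sketch under the same assumptions as the configs above (Detectron2 plus this project installed, DCNv2 compiled, a CUDA GPU available); it is not a script from the repository.

```python
# Build the FAN backbone from the base config and print its output pyramid (a sketch).
import torch
from detectron2.config import get_cfg
from detectron2.modeling import build_backbone

cfg = get_cfg()
cfg.merge_from_file("configs/Base-RCNN-FAN.yaml")  # sets BACKBONE.NAME to build_resnet_fan_backbone

backbone = build_backbone(cfg).cuda().eval()
with torch.no_grad():
    # 512 is divisible by the backbone's size_divisibility, so no padding is needed
    features = backbone(torch.randn(1, 3, 512, 512).cuda())

for name, feat in features.items():
    # Expect p2..p6 with strides 4..64 and MODEL.FPN.OUT_CHANNELS channels each
    print(name, tuple(feat.shape))
```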