Repository: JuewenPeng/BokehMe
Branch: main
Commit: 8b3ed556dc14
Files: 9
Total size: 10.6 MB
Directory structure:
BokehMe/
├── LICENSE
├── README.md
├── checkpoints/
│   ├── arnet.pth
│   └── iunet.pth
├── classical_renderer/
│   ├── scatter.py
│   └── scatter_ex.py
├── demo.py
├── neural_renderer.py
└── requirements.txt
================================================
FILE CONTENTS
================================================
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: README.md
================================================
# BokehMe: When Neural Rendering Meets Classical Rendering (CVPR 2022 Oral)
[Juewen Peng](https://scholar.google.com/citations?hl=en&user=fYC6lCUAAAAJ)<sup>1</sup>,
[Zhiguo Cao](http://english.aia.hust.edu.cn/info/1085/1528.htm)<sup>1</sup>,
[Xianrui Luo](https://scholar.google.com/citations?hl=en&user=tUeWQ5AAAAAJ)<sup>1</sup>,
[Hao Lu](http://faculty.hust.edu.cn/LUHAO/en/index.htm)<sup>1</sup>,
[Ke Xian](https://sites.google.com/site/kexian1991/)<sup>1*</sup>,
[Jianming Zhang](https://jimmie33.github.io/)<sup>2</sup>

<sup>1</sup>Huazhong University of Science and Technology, <sup>2</sup>Adobe Research
### [Project](https://juewenpeng.github.io/BokehMe/) | [Paper](https://github.com/JuewenPeng/BokehMe/blob/main/pdf/BokehMe.pdf) | [Supp](https://github.com/JuewenPeng/BokehMe/blob/main/pdf/BokehMe-supp.pdf) | [Poster](https://github.com/JuewenPeng/BokehMe/blob/main/pdf/BokehMe-poster.pdf) | [Video](https://www.youtube.com/watch?v=e-zr_wCxNc8) | [Data](#blb-dataset)
This repository is the official PyTorch implementation of the CVPR 2022 paper "BokehMe: When Neural Rendering Meets Classical Rendering".
**NOTE**: There is a citation mistake in the conference version of the paper. In Section 4.1, the disparity maps of the EBB400 dataset are predicted by MiDaS [1], not DPT [2].
> [1] Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer
> [2] Vision Transformers for Dense Prediction
## Installation
```
git clone https://github.com/JuewenPeng/BokehMe.git
cd BokehMe
pip install -r requirements.txt
```
## Usage
```
python demo.py --image_path 'inputs/21.jpg' --disp_path 'inputs/21.png' --save_dir 'outputs' --K 60 --disp_focus 0.353 --gamma 4 --highlight
```
- `image_path`: path of the input all-in-focus image
- `disp_path`: path of the input disparity map (predicted by [DPT](https://github.com/isl-org/DPT) in this example)
- `save_dir`: directory to save the results
- `K`: blur parameter
- `disp_focus`: refocused disparity (range from 0 to 1; pass a decimal value such as 0.353 ≈ 90/255, since the argument is parsed as a float)
- `gamma`: gamma value (range from 1 to 5)
- `highlight`: enhance RGB values of highlights before rendering for stunning bokeh balls
See `demo.py` for more details.
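
Internally, `demo.py` converts these arguments into a signed defocus map. A minimal sketch of that mapping (assuming a disparity map `disp` normalized to [0, 1] and the default `defocus_scale` of 10):

```
K = 60               # blur parameter
disp_focus = 0.353   # refocused disparity (about 90/255)
defocus_scale = 10.0
defocus = K * (disp - disp_focus) / defocus_scale  # signed defocus map fed to the renderers
```

Note that the classical renderer is implemented with CuPy CUDA kernels, so a CUDA-capable GPU is required.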
## BLB Dataset
The BLB dataset is synthesized by Blender 2.93. It contains 10 scenes, each consisting of an all-in-focus image, a disparity map, a stack of bokeh images with 5 blur amounts and 10 refocused disparities, and a parameter file. We additionally provide 15 corrupted disparity maps (through Gaussian blur, dilation, and erosion) for each scene. The BLB dataset can be downloaded from [Google Drive](https://drive.google.com/drive/folders/1URpab6AXQsNTqcBcighF73w5pFlvM0Ej?usp=sharing) or [Baidu Netdisk](https://pan.baidu.com/s/1U0XlFM_84-vVgnXGYz0ncQ?pwd=re8q).
**Instructions**:
- EXR images can be loaded by `image = cv2.imread(IMAGE_PATH, -1)[..., :3].astype(np.float32) ** (1/2.2)`. The loaded images are in BGR order, so convert them to RGB with `image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)` if necessary.
- EXR depth maps can be loaded by `depth = cv2.imread(DEPTH_PATH, -1)[..., 0].astype(np.float32)` and converted to disparity maps by `disp = 1 / depth`. Note that it is **unnecessary** to normalize the disparity maps: they have been pre-processed so that the signed defocus maps computed by `K * (disp - disp_focus)` match the experimental settings of the paper. See the loading sketch after this list.
- NOTE: Some pixel values may exceed 1 at highlights (most are below 1). Since some rendering methods can only output values between 0 and 1, we clip both the predicted bokeh images and the ground-truth ones to [0, 1] before evaluation. Values exceed 1 because the EXR images exported from Blender are in linear space, and we only apply gamma 2.2 correction without tone mapping. We will improve this in the future.
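
Putting the above together, a minimal loading sketch (the file paths are hypothetical; depending on your OpenCV build, you may need to set `OPENCV_IO_ENABLE_OPENEXR=1` before importing `cv2`):

```
import os
os.environ['OPENCV_IO_ENABLE_OPENEXR'] = '1'  # required by some OpenCV builds
import cv2
import numpy as np

IMAGE_PATH = 'blb/scene0/image.exr'  # hypothetical path
DEPTH_PATH = 'blb/scene0/depth.exr'  # hypothetical path

# linear-space EXR -> gamma 2.2 correction; keep the first 3 (BGR) channels
image = cv2.imread(IMAGE_PATH, -1)[..., :3].astype(np.float32) ** (1 / 2.2)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # optional BGR -> RGB

# depth -> disparity; no normalization needed (see the note above)
depth = cv2.imread(DEPTH_PATH, -1)[..., 0].astype(np.float32)
disp = 1 / depth

K, disp_focus = 20.0, 1.0          # hypothetical blur parameter and refocused disparity
defocus = K * (disp - disp_focus)  # signed defocus map
```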
## Citation
If you find our work useful in your research, please cite our paper.
```
@inproceedings{Peng2022BokehMe,
title = {BokehMe: When Neural Rendering Meets Classical Rendering},
author = {Peng, Juewen and Cao, Zhiguo and Luo, Xianrui and Lu, Hao and Xian, Ke and Zhang, Jianming},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022}
}
```
================================================
FILE: checkpoints/arnet.pth
================================================
[File too large to display: 10.6 MB]
================================================
FILE: classical_renderer/scatter.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import torch
import torch.nn as nn
import torch.nn.functional as F
import cupy
import re
kernel_Render_updateOutput = '''
extern "C" __global__ void kernel_Render_updateOutput(
const int n,
const float* image, // original image
const float* defocus, // signed defocus map
int* defocusDilate, // signed defocus map after dilating
float* bokehCum, // cumulative bokeh image
float* weightCum // cumulative weight map
)
{
for (int intIndex = (blockIdx.x * blockDim.x) + threadIdx.x; intIndex < n; intIndex += blockDim.x * gridDim.x) {
const int intN = ( intIndex / SIZE_3(weightCum) / SIZE_2(weightCum) / SIZE_1(weightCum) ) % SIZE_0(weightCum);
// const int intC = ( intIndex / SIZE_3(weightCum) / SIZE_2(weightCum) ) % SIZE_1(weightCum);
const int intY = ( intIndex / SIZE_3(weightCum) ) % SIZE_2(weightCum);
const int intX = ( intIndex ) % SIZE_3(weightCum);
float fltDefocus = VALUE_4(defocus, intN, 0, intY, intX);
float fltRadius = fabsf(fltDefocus);
for (int intDeltaY = -(int)(fltRadius)-1; intDeltaY <= (int)(fltRadius)+1; ++intDeltaY) {
for (int intDeltaX = -(int)(fltRadius)-1; intDeltaX <= (int)(fltRadius)+1; ++intDeltaX) {
int intNeighborY = intY + intDeltaY;
int intNeighborX = intX + intDeltaX;
if ((intNeighborY >= 0) && (intNeighborY < SIZE_2(bokehCum)) && (intNeighborX >= 0) && (intNeighborX < SIZE_3(bokehCum))) {
float fltDist = sqrtf((float)(intDeltaY)*(float)(intDeltaY) + (float)(intDeltaX)*(float)(intDeltaX));
float fltWeight = (0.5 + 0.5 * tanhf(4 * (fltRadius - fltDist))) / (fltRadius * fltRadius + 0.2);
if (fltRadius >= fltDist) {
atomicMax(&defocusDilate[OFFSET_4(defocusDilate, intN, 0, intNeighborY, intNeighborX)], int(fltDefocus));
}
atomicAdd(&weightCum[OFFSET_4(weightCum, intN, 0, intNeighborY, intNeighborX)], fltWeight);
atomicAdd(&bokehCum[OFFSET_4(bokehCum, intN, 0, intNeighborY, intNeighborX)], fltWeight * VALUE_4(image, intN, 0, intY, intX));
atomicAdd(&bokehCum[OFFSET_4(bokehCum, intN, 1, intNeighborY, intNeighborX)], fltWeight * VALUE_4(image, intN, 1, intY, intX));
atomicAdd(&bokehCum[OFFSET_4(bokehCum, intN, 2, intNeighborY, intNeighborX)], fltWeight * VALUE_4(image, intN, 2, intY, intX));
}
}
}
}
}
'''
def cupy_kernel(strFunction, objVariables):
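    # Replace the SIZE_n(tensor), OFFSET_n(tensor, i, ...) and VALUE_n(tensor, i, ...)
    # macros in the CUDA source with concrete values: SIZE_n becomes the tensor's
    # n-th dimension, OFFSET_n a flat index built from the tensor strides, and
    # VALUE_n an indexed read from the tensor.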
strKernel = globals()[strFunction]
while True:
        objMatch = re.search(r'(SIZE_)([0-4])(\()([^\)]*)(\))', strKernel)
if objMatch is None:
break
# end
intArg = int(objMatch.group(2))
strTensor = objMatch.group(4)
intSizes = objVariables[strTensor].size()
strKernel = strKernel.replace(objMatch.group(), str(intSizes[intArg]))
# end
while True:
        objMatch = re.search(r'(OFFSET_)([0-4])(\()([^\)]+)(\))', strKernel)
if objMatch is None:
break
# end
intArgs = int(objMatch.group(2))
strArgs = objMatch.group(4).split(',')
strTensor = strArgs[0]
intStrides = objVariables[strTensor].stride()
strIndex = ['((' + strArgs[intArg + 1].replace('{', '(').replace('}', ')').strip() + ')*' + str(
intStrides[intArg]) + ')' for intArg in range(intArgs)]
strKernel = strKernel.replace(objMatch.group(0), '(' + str.join('+', strIndex) + ')')
# end
while True:
        objMatch = re.search(r'(VALUE_)([0-4])(\()([^\)]+)(\))', strKernel)
if objMatch is None:
break
# end
intArgs = int(objMatch.group(2))
strArgs = objMatch.group(4).split(',')
strTensor = strArgs[0]
intStrides = objVariables[strTensor].stride()
strIndex = ['((' + strArgs[intArg + 1].replace('{', '(').replace('}', ')').strip() + ')*' + str(
intStrides[intArg]) + ')' for intArg in range(intArgs)]
strKernel = strKernel.replace(objMatch.group(0), strTensor + '[' + str.join('+', strIndex) + ']')
# end
return strKernel
# end
# @cupy.util.memoize(for_each_device=True)
@cupy.memoize(for_each_device=True)
def cupy_launch(strFunction, strKernel):
return cupy.cuda.compile_with_cache(strKernel).get_function(strFunction)
# end
class _FunctionRender(torch.autograd.Function):
@staticmethod
def forward(self, image, defocus):
# self.save_for_backward(image, defocus)
defocus_dilate = defocus.int()
bokeh_cum = torch.zeros_like(image)
weight_cum = torch.zeros_like(defocus)
        if defocus.is_cuda:
n = weight_cum.nelement()
cupy_launch('kernel_Render_updateOutput', cupy_kernel('kernel_Render_updateOutput', {
'image': image,
'defocus': defocus,
'defocusDilate': defocus_dilate,
'bokehCum': bokeh_cum,
'weightCum': weight_cum
}))(
grid=tuple([int((n + 512 - 1) / 512), 1, 1]),
block=tuple([512, 1, 1]),
args=[
                    int(n),  # a plain Python int; cupy.int was removed in CuPy v10
image.data_ptr(),
defocus.data_ptr(),
defocus_dilate.data_ptr(),
bokeh_cum.data_ptr(),
weight_cum.data_ptr()
]
)
        else:
            raise NotImplementedError()
# end
return defocus_dilate.float(), bokeh_cum, weight_cum
# end
# @staticmethod
# def backward(self, gradBokehCum, gradWeightCum):
# end
# end
def FunctionRender(image, defocus):
defocus_dilate, bokeh_cum, weight_cum = _FunctionRender.apply(image, defocus)
return defocus_dilate, bokeh_cum, weight_cum
# end
class ModuleRenderScatter(torch.nn.Module):
def __init__(self):
super(ModuleRenderScatter, self).__init__()
# end
def forward(self, image, defocus):
defocus_dilate, bokeh_cum, weight_cum = FunctionRender(image, defocus)
bokeh = bokeh_cum / weight_cum
return bokeh, defocus_dilate
# end
# end
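# Example usage (a sketch; assumes a CUDA device, an image tensor of shape
# (N, 3, H, W) with values in [0, 1], and a signed defocus map of shape (N, 1, H, W)):
#   renderer = ModuleRenderScatter().cuda()
#   bokeh, defocus_dilate = renderer(image, defocus)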
================================================
FILE: classical_renderer/scatter_ex.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import torch
import cupy
import re
kernel_Render_updateOutput = '''
extern "C" __global__ void kernel_Render_updateOutput(
const int n,
const int polySides,
const float initAngle,
const float* image, // original image
const float* defocus, // signed defocus map
int* defocusDilate, // signed defocus map after dilating
float* bokehCum, // cumulative bokeh image
float* weightCum // cumulative weight map
)
{
// int polySides = 6;
float PI = 3.1415926536;
float fltAngle1 = 2 * PI / (float)(polySides);
float fltAngle2 = PI / 2 - PI / (float)(polySides);
// float initAngle = PI / 2;
float donutRatio = 0; // (0 -> 0.5 : circle -> donut)
for (int intIndex = (blockIdx.x * blockDim.x) + threadIdx.x; intIndex < n; intIndex += blockDim.x * gridDim.x) {
const int intN = ( intIndex / SIZE_3(weightCum) / SIZE_2(weightCum) / SIZE_1(weightCum) ) % SIZE_0(weightCum);
// const int intC = ( intIndex / SIZE_3(weightCum) / SIZE_2(weightCum) ) % SIZE_1(weightCum);
const int intY = ( intIndex / SIZE_3(weightCum) ) % SIZE_2(weightCum);
const int intX = ( intIndex ) % SIZE_3(weightCum);
float fltDefocus = VALUE_4(defocus, intN, 0, intY, intX);
float fltRadius = fabsf(fltDefocus);
float fltRadiusSquare = fltRadius * fltRadius;
// float fltWeight = 1.0 / (fltRadiusSquare + 0.4);
for (int intDeltaY = -(int)(fltRadius)-1; intDeltaY <= (int)(fltRadius)+1; intDeltaY++) {
for (int intDeltaX = -(int)(fltRadius)-1; intDeltaX <= (int)(fltRadius)+1; intDeltaX++) {
int intNeighborY = intY + intDeltaY;
int intNeighborX = intX + intDeltaX;
float fltAngle = atan2f((float)(intDeltaY), (float)(intDeltaX));
fltAngle = fmodf(fabsf(fltAngle + initAngle), fltAngle1);
if ((intNeighborY >= 0) & (intNeighborY < SIZE_2(bokehCum)) & (intNeighborX >= 0) & (intNeighborX < SIZE_3(bokehCum))) {
float fltDist = sqrtf((float)(intDeltaY)*(float)(intDeltaY) + (float)(intDeltaX)*(float)(intDeltaX));
float fltWeight = (0.5 + 0.5 * tanhf(4 * (fltRadius * sinf(fltAngle2)/sinf(fltAngle+fltAngle2) - fltDist))) * (1 - donutRatio + donutRatio * tanhf(0.2 * (1 + fltDist - fltRadius * sinf(fltAngle2)/sinf(fltAngle+fltAngle2)))) / (fltRadius * fltRadius + 0.2);
if (fltRadius >= fltDist) {
atomicMax(&defocusDilate[OFFSET_4(defocusDilate, intN, 0, intNeighborY, intNeighborX)], int(fltDefocus));
}
atomicAdd(&weightCum[OFFSET_4(weightCum, intN, 0, intNeighborY, intNeighborX)], fltWeight);
atomicAdd(&bokehCum[OFFSET_4(bokehCum, intN, 0, intNeighborY, intNeighborX)], fltWeight * VALUE_4(image, intN, 0, intY, intX));
atomicAdd(&bokehCum[OFFSET_4(bokehCum, intN, 1, intNeighborY, intNeighborX)], fltWeight * VALUE_4(image, intN, 1, intY, intX));
atomicAdd(&bokehCum[OFFSET_4(bokehCum, intN, 2, intNeighborY, intNeighborX)], fltWeight * VALUE_4(image, intN, 2, intY, intX));
}
}
}
}
}
'''
def cupy_kernel(strFunction, objVariables):
strKernel = globals()[strFunction]
while True:
        objMatch = re.search(r'(SIZE_)([0-4])(\()([^\)]*)(\))', strKernel)
if objMatch is None:
break
# end
intArg = int(objMatch.group(2))
strTensor = objMatch.group(4)
intSizes = objVariables[strTensor].size()
strKernel = strKernel.replace(objMatch.group(), str(intSizes[intArg]))
# end
while True:
        objMatch = re.search(r'(OFFSET_)([0-4])(\()([^\)]+)(\))', strKernel)
if objMatch is None:
break
# end
intArgs = int(objMatch.group(2))
strArgs = objMatch.group(4).split(',')
strTensor = strArgs[0]
intStrides = objVariables[strTensor].stride()
strIndex = ['((' + strArgs[intArg + 1].replace('{', '(').replace('}', ')').strip() + ')*' + str(
intStrides[intArg]) + ')' for intArg in range(intArgs)]
strKernel = strKernel.replace(objMatch.group(0), '(' + str.join('+', strIndex) + ')')
# end
while True:
        objMatch = re.search(r'(VALUE_)([0-4])(\()([^\)]+)(\))', strKernel)
if objMatch is None:
break
# end
intArgs = int(objMatch.group(2))
strArgs = objMatch.group(4).split(',')
strTensor = strArgs[0]
intStrides = objVariables[strTensor].stride()
strIndex = ['((' + strArgs[intArg + 1].replace('{', '(').replace('}', ')').strip() + ')*' + str(
intStrides[intArg]) + ')' for intArg in range(intArgs)]
strKernel = strKernel.replace(objMatch.group(0), strTensor + '[' + str.join('+', strIndex) + ']')
# end
return strKernel
# end
# @cupy.util.memoize(for_each_device=True)
@cupy.memoize(for_each_device=True)
def cupy_launch(strFunction, strKernel):
return cupy.cuda.compile_with_cache(strKernel).get_function(strFunction)
# end
class _FunctionRender(torch.autograd.Function):
@staticmethod
def forward(self, image, defocus, poly_sides, init_angle):
# self.save_for_backward(image, signedDisp)
defocus_dilate = defocus.int()
bokeh_cum = torch.zeros_like(image)
weight_cum = torch.zeros_like(defocus)
        if defocus.is_cuda:
n = weight_cum.nelement()
cupy_launch('kernel_Render_updateOutput', cupy_kernel('kernel_Render_updateOutput', {
'poly_sides': poly_sides,
'init_angle': init_angle,
'image': image,
'defocus': defocus,
'defocusDilate': defocus_dilate,
'bokehCum': bokeh_cum,
'weightCum': weight_cum,
}))(
grid=tuple([int((n + 512 - 1) / 512), 1, 1]),
block=tuple([512, 1, 1]),
args=[
                    int(n),  # plain Python ints; cupy.int was removed in CuPy v10
                    int(poly_sides),
cupy.float32(init_angle),
image.data_ptr(),
defocus.data_ptr(),
defocus_dilate.data_ptr(),
bokeh_cum.data_ptr(),
weight_cum.data_ptr()
]
)
        else:
            raise NotImplementedError()
# end
return defocus_dilate.float(), bokeh_cum, weight_cum
# end
# @staticmethod
# def backward(self, gradBokehCum, gradWeightCum):
# end
# end
def FunctionRender(image, defocus, poly_sides, init_angle):
defocus_dilate, bokeh_cum, weight_cum = _FunctionRender.apply(image, defocus, poly_sides, init_angle)
return defocus_dilate, bokeh_cum, weight_cum
# end
class ModuleRenderScatterEX(torch.nn.Module):
def __init__(self):
super(ModuleRenderScatterEX, self).__init__()
# end
def forward(self, image, defocus, poly_sides=10000, init_angle=3.1415926536/2):
defocus_dilate, bokeh_cum, weight_cum = FunctionRender(image, defocus, poly_sides, init_angle)
bokeh = bokeh_cum / weight_cum
return bokeh, defocus_dilate
# end
# end
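# Example usage (a sketch; same tensor shapes as in scatter.py, here with a
# 6-sided polygonal aperture):
#   renderer = ModuleRenderScatterEX().cuda()
#   bokeh, defocus_dilate = renderer(image, defocus, poly_sides=6)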
================================================
FILE: demo.py
================================================
#!/usr/bin/env python
# encoding: utf-8
import os
# os.environ['CUDA_VISIBLE_DEVICES'] = '7'
import matplotlib.pyplot as plt
import numpy as np
import cv2
import argparse
import torch
import torch.nn.functional as F
from neural_renderer import ARNet, IUNet
from classical_renderer.scatter import ModuleRenderScatter # circular aperture
from classical_renderer.scatter_ex import ModuleRenderScatterEX # adjustable aperture shape
def gaussian_blur(x, r, sigma=None):
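    # Blur a single-channel map x of shape (N, 1, H, W) with a (2r+1) x (2r+1)
    # Gaussian kernel; the default sigma follows an OpenCV-style heuristic.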
r = int(round(r))
if sigma is None:
sigma = 0.3 * (r - 1) + 0.8
x_grid, y_grid = torch.meshgrid(torch.arange(-int(r), int(r) + 1), torch.arange(-int(r), int(r) + 1))
kernel = torch.exp(-(x_grid ** 2 + y_grid ** 2) / 2 / sigma ** 2)
kernel = kernel.float() / kernel.sum()
kernel = kernel.expand(1, 1, 2*r+1, 2*r+1).to(x.device)
x = F.pad(x, pad=(r, r, r, r), mode='replicate')
x = F.conv2d(x, weight=kernel, padding=0)
return x
def pipeline(classical_renderer, arnet, iunet, image, defocus, gamma, args):
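    # Hybrid rendering: (1) render at full resolution with the classical scattering
    # renderer in gamma-decoded space; (2) render with ARNet at a reduced resolution
    # chosen so that |defocus| <= 1; (3) iteratively upsample the neural result with
    # IUNet, restricting the refinement to a soft band around depth boundaries;
    # (4) fuse the classical and neural renderings with the predicted error map.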
bokeh_classical, defocus_dilate = classical_renderer(image**gamma, defocus*args.defocus_scale)
# bokeh_classical, defocus_dilate = classical_renderer_ex(image**gamma, defocus*args.defocus_scale, poly_sides=6)
bokeh_classical = bokeh_classical ** (1/gamma)
defocus_dilate = defocus_dilate / args.defocus_scale
gamma = (gamma - args.gamma_min) / (args.gamma_max - args.gamma_min)
adapt_scale = max(defocus.abs().max().item(), 1)
image_re = F.interpolate(image, scale_factor=1/adapt_scale, mode='bilinear', align_corners=True)
defocus_re = 1 / adapt_scale * F.interpolate(defocus, scale_factor=1/adapt_scale, mode='bilinear', align_corners=True)
bokeh_neural, error_map = arnet(image_re, defocus_re, gamma)
error_map = F.interpolate(error_map, size=(image.shape[2], image.shape[3]), mode='bilinear', align_corners=True)
bokeh_neural.clamp_(0, 1e5)
if args.save_intermediate:
cv2.imwrite(os.path.join(save_root, 'bokeh_neural_s0.jpg'), bokeh_neural[0].cpu().permute(1, 2, 0).numpy()[..., ::-1] * 255)
scale = -1
for scale in range(int(np.log2(adapt_scale))):
ratio = 2**(scale+1) / adapt_scale
h_re, w_re = int(ratio * image.shape[2]), int(ratio * image.shape[3])
image_re = F.interpolate(image, size=(h_re, w_re), mode='bilinear', align_corners=True)
defocus_re = ratio * F.interpolate(defocus, size=(h_re, w_re), mode='bilinear', align_corners=True)
defocus_dilate_re = ratio * F.interpolate(defocus_dilate, size=(h_re, w_re), mode='bilinear', align_corners=True)
bokeh_neural_refine = iunet(image_re, defocus_re.clamp(-1, 1), bokeh_neural, gamma).clamp(0, 1e5)
mask = gaussian_blur(((defocus_dilate_re < 1) * (defocus_dilate_re > -1)).float(), 0.005 * (defocus_dilate_re.shape[2] + defocus_dilate_re.shape[3]))
bokeh_neural = mask * bokeh_neural_refine + (1 - mask) * F.interpolate(bokeh_neural, size=(h_re, w_re), mode='bilinear', align_corners=True)
if args.save_intermediate:
cv2.imwrite(os.path.join(save_root, f'bokeh_neural_s{scale+1}.jpg'), bokeh_neural[0].cpu().permute(1, 2, 0).numpy()[..., ::-1] * 255)
cv2.imwrite(os.path.join(save_root, f'fmask_neural_s{scale+1}.jpg'), mask[0][0].cpu().numpy() * 255)
bokeh_neural_refine = iunet(image, defocus.clamp(-1, 1), bokeh_neural, gamma).clamp(0, 1e5)
mask = gaussian_blur(((defocus_dilate < 1) * (defocus_dilate > -1)).float(), 0.005 * (defocus_dilate.shape[2] + defocus_dilate.shape[3]))
bokeh_neural = mask * bokeh_neural_refine + (1 - mask) * F.interpolate(bokeh_neural, size=(image.shape[2], image.shape[3]), mode='bilinear', align_corners=True)
if args.save_intermediate:
cv2.imwrite(os.path.join(save_root, f'bokeh_neural_s{scale+2}.jpg'), bokeh_neural[0].cpu().permute(1, 2, 0).numpy()[..., ::-1] * 255)
cv2.imwrite(os.path.join(save_root, f'fmask_neural_s{scale+2}.jpg'), mask[0][0].cpu().numpy() * 255)
bokeh_pred = bokeh_classical * (1 - error_map) + bokeh_neural * error_map
return bokeh_pred.clamp(0, 1), bokeh_classical.clamp(0, 1), bokeh_neural.clamp(0, 1), error_map
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
parser = argparse.ArgumentParser(description='Bokeh Rendering', fromfile_prefix_chars='@')
parser.add_argument('--defocus_scale', type=float, default=10.)
parser.add_argument('--gamma_min', type=float, default=1.)
parser.add_argument('--gamma_max', type=float, default=5.)
# Model 1
parser.add_argument('--arnet_shuffle_rate', type=int, default=2)
parser.add_argument('--arnet_in_channels', type=int, default=5)
parser.add_argument('--arnet_out_channels', type=int, default=4)
parser.add_argument('--arnet_middle_channels', type=int, default=128)
parser.add_argument('--arnet_num_block', type=int, default=3)
parser.add_argument('--arnet_share_weight', action='store_true')
parser.add_argument('--arnet_connect_mode', type=str, default='distinct_source')
parser.add_argument('--arnet_use_bn', action='store_true')
parser.add_argument('--arnet_activation', type=str, default='elu')
# Model 2
parser.add_argument('--iunet_shuffle_rate', type=int, default=2)
parser.add_argument('--iunet_in_channels', type=int, default=8)
parser.add_argument('--iunet_out_channels', type=int, default=3)
parser.add_argument('--iunet_middle_channels', type=int, default=64)
parser.add_argument('--iunet_num_block', type=int, default=3)
parser.add_argument('--iunet_share_weight', action='store_true')
parser.add_argument('--iunet_connect_mode', type=str, default='distinct_source')
parser.add_argument('--iunet_use_bn', action='store_true')
parser.add_argument('--iunet_activation', type=str, default='elu')
# Checkpoint
parser.add_argument('--arnet_checkpoint_path', type=str, default='./checkpoints/arnet.pth')
parser.add_argument('--iunet_checkpoint_path', type=str, default='./checkpoints/iunet.pth')
# Input
parser.add_argument('--image_path', type=str, default='./inputs/21.jpg')
parser.add_argument('--disp_path', type=str, default='./inputs/21.png')
parser.add_argument('--save_dir', type=str, default='./outputs')
parser.add_argument('--K', type=float, default=60, help='blur parameter')
parser.add_argument('--disp_focus', type=float, default=90/255, help='refocused disparity (0~1)')
parser.add_argument('--gamma', type=float, default=4, help='gamma value (1~5)')
parser.add_argument('--highlight', action='store_true', help='forcibly enhance RGB values of highlights')
parser.add_argument('--highlight_RGB_threshold', type=float, default=220/255)
parser.add_argument('--highlight_enhance_ratio', type=float, default=0.4)
parser.add_argument('--save_intermediate', action='store_true', help='save intermediate results')
args = parser.parse_args()
arnet_checkpoint_path = args.arnet_checkpoint_path
iunet_checkpoint_path = args.iunet_checkpoint_path
classical_renderer = ModuleRenderScatter().to(device)
# classical_renderer_ex = ModuleRenderScatterEX().to(device)
arnet = ARNet(args.arnet_shuffle_rate, args.arnet_in_channels, args.arnet_out_channels, args.arnet_middle_channels,
args.arnet_num_block, args.arnet_share_weight, args.arnet_connect_mode, args.arnet_use_bn, args.arnet_activation)
iunet = IUNet(args.iunet_shuffle_rate, args.iunet_in_channels, args.iunet_out_channels, args.iunet_middle_channels,
args.iunet_num_block, args.iunet_share_weight, args.iunet_connect_mode, args.iunet_use_bn, args.iunet_activation)
arnet.to(device)
iunet.to(device)
checkpoint = torch.load(arnet_checkpoint_path, map_location=device)
arnet.load_state_dict(checkpoint['model'])
checkpoint = torch.load(iunet_checkpoint_path, map_location=device)
iunet.load_state_dict(checkpoint['model'])
arnet.eval()
iunet.eval()
save_root = os.path.join(args.save_dir, os.path.splitext(os.path.basename(args.image_path))[0])
os.makedirs(save_root, exist_ok=True)
K = args.K # blur parameter
disp_focus = args.disp_focus # 0~1
gamma = args.gamma # 1~5
image = cv2.imread(args.image_path).astype(np.float32) / 255.0
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image_ori = image.copy()
disp = np.float32(cv2.imread(args.disp_path, cv2.IMREAD_GRAYSCALE))
disp = (disp - disp.min()) / (disp.max() - disp.min())
########## Highlights ##########
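# Soft masks: mask1 selects out-of-focus regions (|disp - disp_focus| large),
# mask2 selects near-saturated pixels; their product gates the highlight boost.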
if args.highlight:
mask1 = np.clip(np.tanh(200 * (np.abs(disp - disp_focus)**2 - 0.01)), 0, 1)[..., np.newaxis] # out-of-focus areas
# mask2 = (np.max(image, axis=2, keepdims=True) > args.highlight_RGB_threshold) # highlight areas
mask2 = np.clip(np.tanh(10*(image - args.highlight_RGB_threshold)), 0, 1) # highlight areas
mask = mask1 * mask2
image = image * (1 + mask * args.highlight_enhance_ratio)
################################
defocus = K * (disp - disp_focus) / args.defocus_scale
with torch.no_grad():
image = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)
defocus = torch.from_numpy(defocus).unsqueeze(0).unsqueeze(0)
    image = image.to(device)
    defocus = defocus.to(device)
bokeh_pred, bokeh_classical, bokeh_neural, error_map = pipeline(
classical_renderer, arnet, iunet, image, defocus, gamma, args
)
defocus = defocus[0][0].cpu().numpy()
error_map = error_map[0][0].cpu().numpy()
bokeh_classical = bokeh_classical[0].cpu().permute(1, 2, 0).numpy()
bokeh_neural = bokeh_neural[0].cpu().permute(1, 2, 0).detach().numpy()
bokeh_pred = bokeh_pred[0].cpu().permute(1, 2, 0).detach().numpy()
cv2.imwrite(os.path.join(save_root, 'image.jpg'), image_ori[..., ::-1] * 255)
plt.imsave(os.path.join(save_root, 'defocus.jpg'), defocus, cmap='coolwarm', vmin=-max(defocus.max(), -defocus.min()), vmax=max(defocus.max(), -defocus.min()))
cv2.imwrite(os.path.join(save_root, 'disparity.jpg'), disp * 255)
cv2.imwrite(os.path.join(save_root, 'error_map.jpg'), error_map * 255)
cv2.imwrite(os.path.join(save_root, 'bokeh_classical.jpg'), bokeh_classical[..., ::-1] * 255)
cv2.imwrite(os.path.join(save_root, 'bokeh_neural.jpg'), bokeh_neural[..., ::-1] * 255)
cv2.imwrite(os.path.join(save_root, 'bokeh_pred.jpg'), bokeh_pred[..., ::-1] * 255)
================================================
FILE: neural_renderer.py
================================================
#!/usr/bin/env python
# encoding: utf-8
import os
# os.environ['CUDA_VISIBLE_DEVICES'] = '5'
import torch
import torch.nn as nn
import torch.nn.functional as F
class Space2Depth(nn.Module):
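    # Rearrange non-overlapping down_factor x down_factor spatial blocks into the
    # channel dimension (the inverse of nn.PixelShuffle).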
def __init__(self, down_factor):
super(Space2Depth, self).__init__()
self.down_factor = down_factor
def forward(self, x):
n, c, h, w = x.size()
unfolded_x = torch.nn.functional.unfold(x, self.down_factor, stride=self.down_factor)
return unfolded_x.view(n, c * self.down_factor ** 2, h // self.down_factor, w // self.down_factor)
def conv_bn_activation(in_channels, out_channels, kernel_size, stride, padding, use_bn, activation):
module = nn.Sequential()
# module.add_module('pad', nn.ReflectionPad2d(padding))
module.add_module('conv', nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding))
    if use_bn:
        module.add_module('bn', nn.BatchNorm2d(out_channels))
    if activation:
        module.add_module('activation', activation)
return module
class BlockStack(nn.Module):
def __init__(self, channels, num_block, share_weight, connect_mode, use_bn, activation):
# connect_mode: refer to "Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks"
super(BlockStack, self).__init__()
self.num_block = num_block
self.connect_mode = connect_mode
self.blocks = nn.ModuleList()
if share_weight is True:
block = nn.Sequential(
conv_bn_activation(
in_channels=channels,
out_channels=channels,
kernel_size=3, stride=1, padding=1,
use_bn=use_bn, activation=activation
),
conv_bn_activation(
in_channels=channels,
out_channels=channels,
kernel_size=3, stride=1, padding=1,
use_bn=use_bn, activation=activation
)
)
for i in range(num_block):
self.blocks.append(block)
else:
for i in range(num_block):
block = nn.Sequential(
conv_bn_activation(
in_channels=channels,
out_channels=channels,
kernel_size=3, stride=1, padding=1,
use_bn=use_bn, activation=activation
),
conv_bn_activation(
in_channels=channels,
out_channels=channels,
kernel_size=3, stride=1, padding=1,
use_bn=use_bn, activation=activation
)
)
self.blocks.append(block)
def forward(self, x):
if self.connect_mode == 'no':
for i in range(self.num_block):
x = self.blocks[i](x)
elif self.connect_mode == 'distinct_source':
for i in range(self.num_block):
x = self.blocks[i](x) + x
elif self.connect_mode == 'shared_source':
x0 = x
for i in range(self.num_block):
x = self.blocks[i](x) + x0
        else:
            raise ValueError(f'unknown connect_mode: {self.connect_mode}')
return x
class ARNet(nn.Module): # Adaptive Rendering Network
def __init__(self, shuffle_rate=2, in_channels=5, out_channels=4, middle_channels=128, num_block=3, share_weight=False, connect_mode='distinct_source', use_bn=False, activation='elu'):
super(ARNet, self).__init__()
self.shuffle_rate = shuffle_rate
self.connect_mode = connect_mode
if activation == 'relu':
activation = nn.ReLU(inplace=True)
elif activation == 'leaky_relu':
activation = nn.LeakyReLU(inplace=True)
elif activation == 'elu':
activation = nn.ELU(inplace=True)
        else:
            raise ValueError(f'unknown activation: {activation}')
self.downsample = Space2Depth(shuffle_rate)
self.conv0 = conv_bn_activation(
in_channels=(in_channels - 1) * shuffle_rate ** 2 + 1,
out_channels=middle_channels,
kernel_size=3, stride=1, padding=1,
use_bn=use_bn, activation=activation
)
self.block_stack = BlockStack(
channels=middle_channels,
num_block=num_block, share_weight=share_weight, connect_mode=connect_mode,
use_bn=use_bn, activation=activation
)
self.conv1 = conv_bn_activation(
in_channels=middle_channels,
out_channels=out_channels * shuffle_rate ** 2,
kernel_size=3, stride=1, padding=1,
use_bn=False, activation=None
)
self.upsample = nn.PixelShuffle(shuffle_rate)
def forward(self, image, defocus, gamma):
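        # Predict a low-resolution bokeh rendering plus a sigmoid error mask that
        # later weights the neural result against the classical one.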
_, _, h, w = image.shape
h_re = int(h // self.shuffle_rate * self.shuffle_rate)
w_re = int(w // self.shuffle_rate * self.shuffle_rate)
x = torch.cat((image, defocus), dim=1)
x = F.interpolate(x, size=(h_re, w_re), mode='bilinear', align_corners=True)
x = self.downsample(x)
gamma = torch.ones_like(x[:, :1]) * gamma
x = torch.cat((x, gamma), dim=1)
x = self.conv0(x)
x = self.block_stack(x)
x = self.conv1(x)
x = self.upsample(x)
x = F.interpolate(x, size=(h, w), mode='bilinear', align_corners=True)
bokeh = x[:, :-1]
mask = torch.sigmoid(x[:, -1:])
return bokeh, mask
class IUNet(nn.Module): # Iterative Upsampling Network
def __init__(self, shuffle_rate=2, in_channels=8, out_channels=3, middle_channels=64, num_block=3, share_weight=False, connect_mode='distinct_source', use_bn=False, activation='elu'):
super(IUNet, self).__init__()
self.shuffle_rate = shuffle_rate
self.connect_mode = connect_mode
if activation == 'relu':
activation = nn.ReLU(inplace=True)
elif activation == 'leaky_relu':
activation = nn.LeakyReLU(inplace=True)
elif activation == 'elu':
activation = nn.ELU(inplace=True)
        else:
            raise ValueError(f'unknown activation: {activation}')
self.downsample = Space2Depth(shuffle_rate)
self.conv0 = conv_bn_activation(
in_channels=(in_channels - 4) * shuffle_rate ** 2 + 4,
out_channels=middle_channels,
kernel_size=3, stride=1, padding=1,
use_bn=use_bn, activation=activation
)
self.block_stack = BlockStack(
channels=middle_channels,
num_block=num_block, share_weight=share_weight, connect_mode=connect_mode,
use_bn=use_bn, activation=activation
)
self.conv1 = conv_bn_activation(
in_channels=middle_channels,
out_channels=out_channels * shuffle_rate ** 2,
kernel_size=3, stride=1, padding=1,
use_bn=False, activation=None
)
self.upsample = nn.PixelShuffle(shuffle_rate)
def forward(self, image, defocus, bokeh_coarse, gamma):
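        # Refine the coarse bokeh rendering at the resolution of `image`, conditioned
        # on the clamped defocus map and the normalized gamma.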
_, _, h, w = image.shape
h_re = int(h // self.shuffle_rate * self.shuffle_rate)
w_re = int(w // self.shuffle_rate * self.shuffle_rate)
x = torch.cat((image, defocus), dim=1)
x = F.interpolate(x, size=(h_re, w_re), mode='bilinear', align_corners=True)
x = self.downsample(x)
if bokeh_coarse.shape[2] != x.shape[2] or bokeh_coarse.shape[3] != x.shape[3]:
bokeh_coarse = F.interpolate(bokeh_coarse, size=(x.shape[2], x.shape[3]), mode='bilinear', align_corners=False)
gamma = torch.ones_like(x[:, :1]) * gamma
x = torch.cat((x, bokeh_coarse, gamma), dim=1)
x = self.conv0(x)
x = self.block_stack(x)
x = self.conv1(x)
x = self.upsample(x)
bokeh_refine = F.interpolate(x, size=(h, w), mode='bilinear', align_corners=True)
return bokeh_refine
================================================
FILE: requirements.txt
================================================
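# NOTE: cupy and cupy_cuda90 are alternative builds of the same package;
# install only the one matching your CUDA toolkit.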
cupy==10.5.0
cupy_cuda90==7.7.0
matplotlib==3.5.1
numpy==1.18.5
opencv_python==4.2.0.34
Pillow==9.1.1
torch==1.8.1
torchvision==0.9.1