Repository: weitw/ImageDenoise
Branch: master
Commit: 94f314358bec
Files: 151
Total size: 258.8 KB
Directory structure:
gitextract_hro3tm_6/
├── BM3D/
│ ├── BM3D-SAPCA/
│ │ ├── BM3DSAPCA2009.p
│ │ ├── README-BM3D-SAPCA.txt
│ │ ├── demo_BM3DSAPCA.m
│ │ ├── function_AnisLPAICI8.p
│ │ ├── function_CreateLPAKernels.m
│ │ ├── function_LPAKernelMatrixTheta.m
│ │ ├── function_WOSFilters.p
│ │ └── function_Window2D.m
│ ├── BM3D.m
│ ├── BM3DDEB.m
│ ├── BM3DSHARP.m
│ ├── BM3D_CFA.m
│ ├── CBM3D.m
│ ├── CVBM3D.m
│ ├── ClipComp16b.p
│ ├── IDDBM3D/
│ │ ├── BM3DDEB_init.m
│ │ ├── BlockMatch.mexw32
│ │ ├── BlockMatch.mexw64
│ │ ├── Demo_IDDBM3D.m
│ │ ├── GroupProcessor.mexw32
│ │ ├── GroupProcessor.mexw64
│ │ └── IDDBM3D.p
│ ├── LEGAL_NOTICE.txt
│ ├── README.txt
│ ├── VBM3D.m
│ ├── bm3d_CFA_thr.mexa64
│ ├── bm3d_CFA_thr.mexglx
│ ├── bm3d_CFA_thr.mexmaci64
│ ├── bm3d_CFA_thr.mexw32
│ ├── bm3d_CFA_thr.mexw64
│ ├── bm3d_CFA_wiener.mexa64
│ ├── bm3d_CFA_wiener.mexglx
│ ├── bm3d_CFA_wiener.mexmaci64
│ ├── bm3d_CFA_wiener.mexw32
│ ├── bm3d_CFA_wiener.mexw64
│ ├── bm3d_thr.mexa64
│ ├── bm3d_thr.mexglx
│ ├── bm3d_thr.mexmaci
│ ├── bm3d_thr.mexmaci64
│ ├── bm3d_thr.mexw32
│ ├── bm3d_thr.mexw64
│ ├── bm3d_thr_color.mexa64
│ ├── bm3d_thr_color.mexglx
│ ├── bm3d_thr_color.mexmaci
│ ├── bm3d_thr_color.mexmaci64
│ ├── bm3d_thr_color.mexw32
│ ├── bm3d_thr_color.mexw64
│ ├── bm3d_thr_colored_noise.mexa64
│ ├── bm3d_thr_colored_noise.mexglx
│ ├── bm3d_thr_colored_noise.mexmaci
│ ├── bm3d_thr_colored_noise.mexmaci64
│ ├── bm3d_thr_colored_noise.mexw32
│ ├── bm3d_thr_colored_noise.mexw64
│ ├── bm3d_thr_sharpen_var.mexa64
│ ├── bm3d_thr_sharpen_var.mexglx
│ ├── bm3d_thr_sharpen_var.mexmaci
│ ├── bm3d_thr_sharpen_var.mexmaci64
│ ├── bm3d_thr_sharpen_var.mexw32
│ ├── bm3d_thr_sharpen_var.mexw64
│ ├── bm3d_thr_video.mexa64
│ ├── bm3d_thr_video.mexglx
│ ├── bm3d_thr_video.mexmaci
│ ├── bm3d_thr_video.mexmaci64
│ ├── bm3d_thr_video.mexw32
│ ├── bm3d_thr_video.mexw64
│ ├── bm3d_thr_video_c.mexw32
│ ├── bm3d_thr_video_c.mexw64
│ ├── bm3d_wiener.mexa64
│ ├── bm3d_wiener.mexglx
│ ├── bm3d_wiener.mexmaci
│ ├── bm3d_wiener.mexmaci64
│ ├── bm3d_wiener.mexw32
│ ├── bm3d_wiener.mexw64
│ ├── bm3d_wiener_color.mexa64
│ ├── bm3d_wiener_color.mexglx
│ ├── bm3d_wiener_color.mexmaci
│ ├── bm3d_wiener_color.mexmaci64
│ ├── bm3d_wiener_color.mexw32
│ ├── bm3d_wiener_color.mexw64
│ ├── bm3d_wiener_colored_noise.mexa64
│ ├── bm3d_wiener_colored_noise.mexglx
│ ├── bm3d_wiener_colored_noise.mexmaci
│ ├── bm3d_wiener_colored_noise.mexmaci64
│ ├── bm3d_wiener_colored_noise.mexw32
│ ├── bm3d_wiener_colored_noise.mexw64
│ ├── bm3d_wiener_video.mexa64
│ ├── bm3d_wiener_video.mexglx
│ ├── bm3d_wiener_video.mexmaci
│ ├── bm3d_wiener_video.mexmaci64
│ ├── bm3d_wiener_video.mexw32
│ ├── bm3d_wiener_video.mexw64
│ ├── bm3d_wiener_video_c.mexw32
│ ├── bm3d_wiener_video_c.mexw64
│ └── main.m
├── DnCNN/
│ ├── Demo_FDnCNN_Color.m
│ ├── Demo_FDnCNN_Color_Clip.m
│ ├── Demo_FDnCNN_Gray.m
│ ├── Demo_FDnCNN_Gray_Clip.m
│ ├── Demo_test_CDnCNN_Specific.m
│ ├── Demo_test_DnCNN.m
│ ├── Demo_test_DnCNN3.m
│ ├── Demo_test_DnCNN_C.m
│ ├── model/
│ │ ├── DnCNN3.mat
│ │ ├── FDnCNN_Clip_color.mat
│ │ ├── FDnCNN_Clip_gray.mat
│ │ ├── FDnCNN_color.mat
│ │ ├── FDnCNN_gray.mat
│ │ ├── GD_Color_Blind.mat
│ │ ├── GD_Gray_Blind.mat
│ │ ├── README.txt
│ │ ├── specifics/
│ │ │ ├── sigma=10.mat
│ │ │ ├── sigma=15.mat
│ │ │ ├── sigma=20.mat
│ │ │ ├── sigma=25.mat
│ │ │ ├── sigma=30.mat
│ │ │ ├── sigma=35.mat
│ │ │ ├── sigma=40.mat
│ │ │ ├── sigma=45.mat
│ │ │ ├── sigma=50.mat
│ │ │ ├── sigma=55.mat
│ │ │ ├── sigma=60.mat
│ │ │ ├── sigma=65.mat
│ │ │ ├── sigma=70.mat
│ │ │ └── sigma=75.mat
│ │ └── specifics_color/
│ │ ├── Add (color) specific models.md
│ │ ├── color_sigma=05.mat
│ │ ├── color_sigma=10.mat
│ │ ├── color_sigma=15.mat
│ │ ├── color_sigma=25.mat
│ │ ├── color_sigma=35.mat
│ │ ├── color_sigma=50.mat
│ │ ├── model_sigma=00to10.mat
│ │ ├── model_sigma=20to30.mat
│ │ ├── model_sigma=40to50.mat
│ │ ├── model_sigma=60to70.mat
│ │ └── model_sigma=80to90.mat
│ └── utilities/
│ ├── Cal_PSNRSSIM.m
│ ├── Merge_Bnorm_Demo.m
│ ├── data_augmentation.m
│ ├── modcrop.m
│ ├── shave.m
│ ├── sigma=25_Bnorm.mat
│ ├── simplenn_matlab.m
│ ├── vl_ffdnet_concise.m
│ ├── vl_ffdnet_matlab.m
│ ├── vl_simplenn.m
│ └── vl_simplenn_mergebnorm.m
├── README.md
├── avefilter/
│ └── avefilt.m
├── medianfilter/
│ └── medianfilt.m
└── nlm-image-denoising/
└── NLmeansfilt.m
================================================
FILE CONTENTS
================================================
================================================
FILE: BM3D/BM3D-SAPCA/README-BM3D-SAPCA.txt
================================================
--------------------------------------------------------------------
BM3D-SAPCA : BM3D with Shape-Adaptive Principal Component Analysis
v1.00, 2009
--------------------------------------------------------------------
Copyright (c) 2009-2011 Tampere University of Technology.
All rights reserved.
This work should be used for nonprofit purposes only.
Author: Alessandro Foi
BM3D web page: http://www.cs.tut.fi/~foi/GCF-BM3D
BM3D-SAPCA is an algorithm for attenuation of additive white
Gaussian noise (AWGN) from grayscale images.
This software package reproduces the results from the article:
K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian,
"BM3D Image Denoising with Shape-Adaptive Principal
Component Analysis", Proc. Workshop on Signal Processing
with Adaptive Sparse Structured Representations (SPARS'09),
Saint-Malo, France, April 2009.
( PDF available at http://www.cs.tut.fi/~foi/GCF-BM3D )
--------------------------------------------------------------------
This demo package includes routines from both the
LASIP 2D demobox (http://www.cs.tut.fi/~lasip/2D/) and the
Pointwise SA-DCT demobox (http://www.cs.tut.fi/~foi/SA-DCT/).
--------------------------------------------------------------------
--------------------------------------------------------------------
Disclaimer
--------------------------------------------------------------------
Any unauthorized use of these routines for industrial or profit-
oriented activities is expressly prohibited. By downloading
and/or using any of these files, you implicitly agree to all the
terms of the TUT limited license, as specified in the document
Legal_Notice.txt (included in this package) and online at
http://www.cs.tut.fi/~foi/GCF-BM3D/legal_notice.html
================================================
FILE: BM3D/BM3D-SAPCA/demo_BM3DSAPCA.m
================================================
% BM3D-SAPCA : BM3D with Shape-Adaptive Principal Component Analysis (v1.00, 2009)
% (demo script)
%
% BM3D-SAPCA is an algorithm for attenuation of additive white Gaussian noise (AWGN)
% from grayscale images. This algorithm reproduces the results from the article:
% K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "BM3D Image Denoising with
% Shape-Adaptive Principal Component Analysis", Proc. Workshop on Signal Processing
% with Adaptive Sparse Structured Representations (SPARS'09), Saint-Malo, France,
% April 2009. (PDF available at http://www.cs.tut.fi/~foi/GCF-BM3D )
%
%
% SYNTAX:
%
% y_est = BM3DSAPCA2009(z, sigma)
%
% where z is an image corrupted by AWGN with noise standard deviation sigma
% and y_est is an estimate of the noise-free image.
% Signals are assumed on the intensity range [0,1].
%
%
% USAGE EXAMPLE:
%
% y = im2double(imread('Cameraman256.png'));
% sigma=25/255;
% z=y+sigma*randn(size(y));
% y_est = BM3DSAPCA2009(z,sigma);
%
%
%
% Copyright (c) 2009-2011 Tampere University of Technology. All rights reserved.
% This work should only be used for nonprofit purposes.
%
% author: Alessandro Foi, email: firstname.lastname@tut.fi
%
%%
clear all
y = im2double(imread('Cameraman256.png'));
% y = im2double(imread('Lena512.png'));
randn('seed',0);
sigma=25/255;
z=y+sigma*randn(size(y));
y_est = BM3DSAPCA2009(z,sigma);
PSNR = 10*log10(1/mean((y(:)-y_est(:)).^2));
disp(['PSNR = ',num2str(PSNR)])
if exist('ssim_index')
[mssim ssim_map] = ssim_index(y*255, y_est*255);
disp(['SSIM = ',num2str(mssim)])
end
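The demo above scores the estimate with `10*log10(1/mean((y(:)-y_est(:)).^2))`, i.e. PSNR for images on the intensity range [0,1]. A minimal pure-Python sketch of that formula (the function name `psnr` and the flat-list inputs are illustrative, not part of this package):

```python
import math

def psnr(y, y_est):
    """PSNR in dB for images on the intensity range [0, 1].

    Mirrors the MATLAB expression 10*log10(1/mean((y - y_est).^2));
    inputs here are flat lists of pixel values.
    """
    mse = sum((a - b) ** 2 for a, b in zip(y, y_est)) / len(y)
    return 10 * math.log10(1.0 / mse)

# Example: a constant error of 25/255 (the demo's sigma) gives
# 20*log10(255/25) ~ 20.17 dB regardless of image content.
y = [0.5] * 100
z = [0.5 + 25 / 255] * 100
print(round(psnr(y, z), 2))  # 20.17
```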
================================================
FILE: BM3D/BM3D-SAPCA/function_CreateLPAKernels.m
================================================
% Creates LPA kernels cell array (function_CreateLPAKernels)
%
% Alessandro Foi - Tampere University of Technology - 2003-2005
% ---------------------------------------------------------------
%
% Builds kernels cell arrays kernels{direction,size}
% and kernels_higher_order{direction,size,1:2}
% kernels_higher_order{direction,size,1} is the 3D matrix
% of all kernels for that particular direction/size
% kernels_higher_order{direction,size,2} is the 2D matrix
% containing the orders indices for the kernels
% contained in kernels_higher_order{direction,size,1}
%
% ---------------------------------------------------------------------
%
% kernels_higher_order{direction,size,1}(:,:,1) is the function estimate kernel
% kernels_higher_order{direction,size,1}(:,:,2) is a first derivative estimate kernel
%
% kernels_higher_order{direction,size,1}(:,:,n) is a higher order derivative estimate kernel
% whose orders with respect to x and y are specified in
% kernels_higher_order{direction,size,2}(n,:)=
% =[xorder yorder xorder+yorder]
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [kernels, kernels_higher_order]=function_createLPAkernels(m,h1,h2,TYPE,window_type,directional_resolution,sig_winds,beta)
%--------------------------------------------------------------------------
% LPA ORDER AND KERNELS SIZES
%--------------------------------------------------------------------------
% m=[2,0]; % THE VECTOR ORDER OF LPA;
% h1=[1 2 3 4 5]; % sizes of the kernel
% h2=[1 2 3 4 5]; % row vectors h1 and h2 need to have the same length
%--------------------------------------------------------------------------
% WINDOWS PARAMETERS
%--------------------------------------------------------------------------
% sig_winds=[h1*1 ; h1*1]; % Gaussian parameter
% beta=1; % Parameter of window 6
% window_type=1 ; % window_type=1 for uniform, window_type=2 for Gaussian
% window_type=6 for exponential (generalized Gaussian) with beta
% window_type=8 for Interpolation
% TYPE=00; % TYPE IS A SYMMETRY OF THE WINDOW
% 00 SYMMETRIC
% 10 NONSYMMETRIC ON X1 and SYMMETRIC ON X2
% 11 NONSYMMETRIC ON X1,X2 (Quadrants)
%
% for rotated directional kernels the method that is used for rotation can be specified by adding
% a binary digit in front of these types, as follows:
%
% 10
% 11 ARE "STANDARD" USING NN (Nearest Neighb.) (you can think of these numbers with a 0 in front)
% 00
%
% 110
% 111 ARE EXACT SAMPLING OF THE EXACT ROTATED KERNEL
% 100
%
% 210
% 211 ARE WITH BILINEAR INTERP
% 200
%
% 310
% 311 ARE WITH BICUBIC INTERP (not recommended)
% 300
%--------------------------------------------------------------------------
% DIRECTIONAL PARAMETERS
%--------------------------------------------------------------------------
% directional_resolution=4; % number of directions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% From this point onwards this file and the create_LPA_kernels.m should be identical %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
lenh=max(length(h1),length(h2));
clear kernels
clear kernels_higher_order
kernels=cell(directional_resolution,lenh);
kernels_higher_order=cell(directional_resolution,lenh,2);
THETASTEP=2*pi/directional_resolution;
THETA=[0:THETASTEP:2*pi-THETASTEP];
s1=0;
for theta=THETA,
s1=s1+1;
for s=1:lenh,
[gh,gh1,gh1degrees]=function_LPAKernelMatrixTheta(ceil(h2(s)),ceil(h1(s)),window_type,[sig_winds(1,s) sig_winds(2,s)],TYPE,theta, m);
kernels{s1,s}=gh; % degree=0 kernel
kernels_higher_order{s1,s,1}=gh1; % degree>=0 kernels
kernels_higher_order{s1,s,2}=gh1degrees; % polynomial indexes matrix
end % different lengths loop
end % different directions loop
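The outer loop above walks the equally spaced direction angles `THETA = 0:THETASTEP:2*pi-THETASTEP`, building one column of the kernel cell array per direction. A minimal Python sketch of the same angle grid (the helper name `direction_angles` is illustrative):

```python
import math

def direction_angles(directional_resolution):
    # Equally spaced angles in [0, 2*pi), mirroring
    # THETASTEP = 2*pi/directional_resolution; THETA = 0:THETASTEP:2*pi-THETASTEP
    step = 2 * math.pi / directional_resolution
    return [k * step for k in range(directional_resolution)]

# With directional_resolution = 4 the kernels are built for
# 0, pi/2, pi and 3*pi/2 (the four axis-aligned directions).
print(direction_angles(4))
```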
================================================
FILE: BM3D/BM3D-SAPCA/function_LPAKernelMatrixTheta.m
================================================
% Return the discrete kernels for LPA estimation and their degrees matrix
%
% function [G, G1, index_polynomials]=function_LPAKernelMatrixTheta(h2,h1,window_type,sig_wind,TYPE,theta, m)
%
%
% Outputs:
%
% G kernel for function estimation
% G1 kernels for function and derivative estimation
% G1(:,:,j), j=1 for function estimation, j=2 for d/dx, j=3 for d/dy,
% contains 0 and all higher order kernels (sorted by degree:
% 1 x y x^2 xy y^2 x^3 x^2y xy^2 y^3 etc...)
% index_polynomials matrix of degrees first column x powers, second
% column y powers, third column total degree
%
%
% Inputs:
%
% h2, h1 size of the kernel (size of the "asymmetrical portion")
% m=[m(1) m(2)] the vector order of the LPA; any order combination should work
% "theta" is the angle of the directed window
% "TYPE" is a type of the window support
% "sig_wind" - vector - sigma parameters of the Gaussian window
% "beta"- parameter of the power in some weights for the window function
% (these last 3 parameters are fed into function_Window2D function)
%
%
% Alessandro Foi, 6 March 2004
function [G, G1, index_polynomials]=function_LPAKernelMatrixTheta(h2,h1,window_type,sig_wind,TYPE,theta, m)
global beta
%G1=0;
m(1)=min(h1,m(1));
m(2)=min(h2,m(2));
% builds ordered matrix of the monomes powers
number_of_polynomials=(min(m)+1)*(max(m)-min(m)+1)+(min(m)+1)*min(m)/2; % =size(index_polynomials,1)
index_polynomials=zeros(number_of_polynomials,2);
index3=1;
for index1=1:min(m)+1
for index2=1:max(m)+2-index1
index_polynomials(index3,:)=[index1-1,index2-1];
index3=index3+1;
end
end
if m(1)>m(2)
index_polynomials=fliplr(index_polynomials);
end
index_polynomials(:,3)=index_polynomials(:,1)+index_polynomials(:,2); %calculates degrees of polynomials
index_polynomials=sortrows(sortrows(index_polynomials,2),3); %sorts polynomials by degree (x first)
%=====================================================================================================================================
halfH=max(h1,h2);
H=-halfH+1:halfH-1;
% creates window function and then rotates it
% win_fun=zeros(halfH-1,halfH-1);
for x1=H
for x2=H
if TYPE==00|TYPE==200|TYPE==300 % SYMMETRIC WINDOW
win_fun1(x2+halfH,x1+halfH)=function_Window2D(x1/h1/(1-1000*eps),x2/h2/(1-1000*eps),window_type,sig_wind,beta,h2/h1); % weight
end
if TYPE==11|TYPE==211|TYPE==311 % NONSYMMETRIC ON X1,X2 WINDOW
win_fun1(x2+halfH,x1+halfH)=(x1>=-0.05)*(x2>=-0.05)*function_Window2D(x1/h1/(1-1000*eps),x2/h2/(1-1000*eps),window_type,sig_wind,beta,h2/h1); % weight
end
if TYPE==10|TYPE==210|TYPE==310 % NONSYMMETRIC ON X1 WINDOW
win_fun1(x2+halfH,x1+halfH)=(x1>=-0.05)*function_Window2D(x1/h1/(1-1000*eps),x2/h2/(1-1000*eps),window_type,sig_wind,beta,h2/h1); % weight
end
if TYPE==100|TYPE==110|TYPE==111 % exact sampling
xt1=x1*cos(-theta)+x2*sin(-theta);
xt2=x2*cos(-theta)-x1*sin(-theta);
if TYPE==100 % SYMMETRIC WINDOW
win_fun1(x2+halfH,x1+halfH)=function_Window2D(xt1/h1/(1-1000*eps),xt2/h2/(1-1000*eps),window_type,sig_wind,beta,h2/h1); % weight
end
if TYPE==111 % NONSYMMETRIC ON X1,X2 WINDOW
win_fun1(x2+halfH,x1+halfH)=(xt1>=-0.05)*(xt2>=-0.05)*function_Window2D(xt1/h1/(1-1000*eps),xt2/h2/(1-1000*eps),window_type,sig_wind,beta,h2/h1); % weight
end
if TYPE==110 % NONSYMMETRIC ON X1 WINDOW
win_fun1(x2+halfH,x1+halfH)=(xt1>=-0.05)*function_Window2D(xt1/h1/(1-1000*eps),xt2/h2/(1-1000*eps),window_type,sig_wind,beta,h2/h1); % weight
end
end
end
end
win_fun=win_fun1;
if (theta~=0)&(TYPE<100)
win_fun=imrotate(win_fun1,theta*180/pi,'nearest'); % use 'nearest' or 'bilinear' for different interpolation schemes ('bicubic'...?)
end
if (theta~=0)&(TYPE>=200)&(TYPE<300)
win_fun=imrotate(win_fun1,theta*180/pi,'bilinear'); % use 'nearest' or 'bilinear' for different interpolation schemes ('bicubic'...?)
end
if (theta~=0)&(TYPE>=300)
win_fun=imrotate(win_fun1,theta*180/pi,'bicubic'); % use 'nearest' or 'bilinear' for different interpolation schemes ('bicubic'...?)
end
% make the weight support a square
win_fun2=zeros(max(size(win_fun)));
win_fun2((max(size(win_fun))-size(win_fun,1))/2+1:max(size(win_fun))-((max(size(win_fun))-size(win_fun,1))/2),(max(size(win_fun))-size(win_fun,2))/2+1:max(size(win_fun))-((max(size(win_fun))-size(win_fun,2))/2))=win_fun;
win_fun=win_fun2;
%=====================================================================================================================================
%%%% rotated coordinates
H=-(size(win_fun,1)-1)/2:(size(win_fun,1)-1)/2;
halfH=(size(win_fun,1)+1)/2;
h_radious=halfH;
Hcos=H*cos(theta); Hsin=H*sin(theta);
%%%% Calculation of FI matrix
FI=zeros(number_of_polynomials);
i1=0;
for s1=H
i1=i1+1;
i2=0;
for s2=H
i2=i2+1;
x1=Hcos(s1+h_radious)-Hsin(s2+h_radious);
x2=Hsin(s1+h_radious)+Hcos(s2+h_radious);
phi=sqrt(win_fun(s2+halfH,s1+halfH))*(prod(((ones(number_of_polynomials,1)*[x1 x2]).^index_polynomials(:,1:2)),2)./prod(gamma(index_polynomials(:,1:2)+1),2).*(-ones(number_of_polynomials,1)).^index_polynomials(:,3));
FI=FI+phi*phi';
end % end of s2
end % end of s1
%FI_inv=((FI+1*eps*eye(size(FI)))^(-1)); % invert FI matrix
FI_inv=pinv(FI); % invert FI matrix (using pseudoinverse)
G1=zeros([size(H,2) size(H,2) number_of_polynomials]);
%%%% Calculation of mask
i1=0;
for s1=H
i1=i1+1;
i2=0;
for s2=H
i2=i2+1;
x1=Hcos(s1+h_radious)-Hsin(s2+h_radious);
x2=Hsin(s1+h_radious)+Hcos(s2+h_radious);
phi=FI_inv*win_fun(s2+halfH,s1+halfH)*(prod(((ones(number_of_polynomials,1)*[x1 x2]).^index_polynomials(:,1:2)),2)./prod(gamma(index_polynomials(:,1:2)+1),2).*(-ones(number_of_polynomials,1)).^index_polynomials(:,3));
G(i2,i1,1)=phi(1); % Function Est
G1(i2,i1,:)=phi(:)'; % Function est & Der est on X Y etc...
end % end of s1
end % end of s2
%keyboard
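The `index_polynomials` construction above (the nested `index1`/`index2` loops, the `fliplr` when `m(1)>m(2)`, and the double `sortrows`) produces the monomial powers ordered by total degree. A Python sketch of the same ordering, assuming the MATLAB semantics read from the code above (the helper name `monomial_indices` is illustrative):

```python
def monomial_indices(m1, m2):
    """Ordered (x_power, y_power, total_degree) rows, mirroring the
    index_polynomials matrix built in function_LPAKernelMatrixTheta."""
    lo, hi = min(m1, m2), max(m1, m2)
    rows = []
    for i in range(lo + 1):             # index1 = 1 : min(m)+1
        for j in range(hi + 1 - i):     # index2 = 1 : max(m)+2-index1
            rows.append((i, j))
    if m1 > m2:                         # fliplr: swap the power columns
        rows = [(j, i) for i, j in rows]
    rows = [(x, y, x + y) for x, y in rows]
    # sortrows(sortrows(A,2),3): sort by y power, then stably by total degree
    rows.sort(key=lambda r: (r[2], r[1]))
    return rows

print(monomial_indices(2, 0))  # [(0, 0, 0), (1, 0, 1), (2, 0, 2)]
```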
================================================
FILE: BM3D/BM3D-SAPCA/function_Window2D.m
================================================
% Returns scalar/matrix weights (window function) for the LPA estimates
% function w=function_Window2D(X,Y,window,sig_wind, beta);
% X,Y scalar/matrix variables
% window - type of the window weight
% sig_wind - std scaling for the Gaussian rho-weight
% beta - parameter of the degree in the weights
%----------------------------------------------------------------------------------
% V. Katkovnik & A. Foi - Tampere University of Technology - 2002-2005
function w=function_Window2D(X,Y,window,sig_wind, beta,ratio);
if nargin == 5
ratio=1;
end
IND=(abs(X)<=1)&(abs(Y)<=1);
IND2=((X.^2+Y.^2)<=1);
IND3=((X.^2+(Y*ratio).^2)<=1);
if window==1 % rectangular symmetric window
w=IND; end
if window==2 %Gaussian
X=X/sig_wind(1);
Y=Y/sig_wind(2);
w = IND.*exp(-(X.^2 + Y.^2)/2); %*(abs(Y)<=0.1*abs(X));%.*IND2; %((X.^2+Y.^2)<=1);
end
if window==3 % Quadratic window
w=(1-(X.^2+Y.^2)).*((X.^2+Y.^2)<=1); end
if window==4 % triangular symmetric window
w=(1-abs(X)).*(1-abs(Y)).*((X.^2+Y.^2)<=1); end
if window==5 % Epanechnikov symmetric window
w=(1-X.^2).*(1-Y.^2).*((X.^2+Y.^2)<=1);
end
if window==6 % Generalized Gaussian
X=X/sig_wind;
Y=Y/sig_wind;
w = exp(-((X.^2 + Y.^2).^beta)/2).*((X.^2+Y.^2)<=1); end
if window==7
X=X/sig_wind;
Y=Y/sig_wind;
w = exp(-abs(X) - abs(Y)).*IND; end
if window==8 % Interpolation
w=(1./(abs(X).^4+abs(Y).^4+0.0001)).*IND2;
end
if window==9 % Interpolation
NORM=(abs(X)).^2+(abs(Y)).^2+0.0001;
w=(1./NORM.*(1-sqrt(NORM)).^2).*(NORM<=1);
end
if window==10
w=((X.^2+Y.^2)<=1);
end
if window==11
temp=asin(Y./sqrt(X.^2+Y.^2+eps));
temp=temp*0.6; % Width of Beam
temp=(temp>0)*min(temp,1)+(temp<=0)*max(temp,-1);
w=max(0,IND.*cos(pi*temp));
end
if window==111
temp=asin(Y./sqrt(X.^2+Y.^2+eps));
temp=temp*0.8; % Width of Beam
temp=(temp>0)*min(temp,1)+(temp<=0)*max(temp,-1);
w=max(0,IND3.*(cos(pi*temp)>0));
% w=((X.^2+Y.^2)<=1);
end
if window==112
temp=atan(Y/(X+eps));
%temp=temp*0.8; % Width of Beam
%temp=(temp>0)*min(temp,1)+(temp<=0)*max(temp,-1);
w=max(0,IND3.*((abs(temp))<=pi/4));
% w=((X.^2+Y.^2)<=1);
end
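Window type 2 above is a Gaussian weight restricted to the square support `|X|<=1, |Y|<=1`. A minimal Python sketch of that one case, evaluated pointwise (the function name `gaussian_window` is illustrative; the MATLAB original operates on matrices):

```python
import math

def gaussian_window(x, y, sig_x, sig_y):
    """Window type 2: Gaussian weight on the square support |x|<=1, |y|<=1,
    with per-axis sigma scaling as in function_Window2D."""
    ind = 1.0 if (abs(x) <= 1 and abs(y) <= 1) else 0.0   # IND mask
    xs, ys = x / sig_x, y / sig_y
    return ind * math.exp(-(xs * xs + ys * ys) / 2)

print(gaussian_window(0, 0, 1, 1))  # 1.0 at the window center
print(gaussian_window(2, 0, 1, 1))  # 0.0 outside the support
```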
================================================
FILE: BM3D/BM3D.m
================================================
function [PSNR, SSIM, y_est] = BM3D(y, z, sigma, profile, print_to_screen)
image_name = [
% 'montage.png'
'Cameraman256.png'
% 'boat.png'
% 'Lena512.png'
% 'house.png'
% 'barbara.png'
% 'peppers256.png'
% 'fingerprint.png'
% 'couple.png'
% 'hill.png'
% 'man.png'
];
if (exist('profile') ~= 1)
profile = 'np'; %% default profile
end
if (exist('sigma') ~= 1)
sigma = 10; %% default standard deviation of the AWGN
end
%%%% Following are the parameters for the Normal Profile.
%%%% Select transforms ('dct', 'dst', 'hadamard', or anything that is listed by 'help wfilters'):
transform_2D_HT_name = 'bior1.5'; %% transform used for the HT filt. of size N1 x N1
transform_2D_Wiener_name = 'dct'; %% transform used for the Wiener filt. of size N1_wiener x N1_wiener
transform_3rd_dim_name = 'haar'; %% transform used in the 3-rd dim, the same for HT and Wiener filt.
%%%% Hard-thresholding (HT) parameters:
N1 = 8; %% N1 x N1 is the block size used for the hard-thresholding (HT) filtering
Nstep = 3; %% sliding step to process every next reference block
N2 = 16; %% maximum number of similar blocks (maximum size of the 3rd dimension of a 3D array)
Ns = 39; %% length of the side of the search neighborhood for full-search block-matching (BM), must be odd
tau_match = 3000;%% threshold for the block-distance (d-distance)
lambda_thr2D = 0; %% threshold parameter for the coarse initial denoising used in the d-distance measure
lambda_thr3D = 2.7; %% threshold parameter for the hard-thresholding in 3D transform domain
beta = 2.0; %% parameter of the 2D Kaiser window used in the reconstruction
%%%% Wiener filtering parameters:
N1_wiener = 8;
Nstep_wiener = 3;
N2_wiener = 32;
Ns_wiener = 39;
tau_match_wiener = 400;
beta_wiener = 2.0;
%%%% Block-matching parameters:
stepFS = 1; %% step that forces to switch to full-search BM, "1" implies always full-search
smallLN = 'not used in np'; %% if stepFS > 1, then this specifies the size of the small local search neighb.
stepFSW = 1;
smallLNW = 'not used in np';
thrToIncStep = 8; % if the number of non-zero coefficients after HT is less than thrToIncStep,
% then the sliding step to the next reference block is increased to (nm1-1)
if strcmp(profile, 'lc') == 1
Nstep = 6;
Ns = 25;
Nstep_wiener = 5;
N2_wiener = 16;
Ns_wiener = 25;
thrToIncStep = 3;
smallLN = 3;
stepFS = 6*Nstep;
smallLNW = 2;
stepFSW = 5*Nstep_wiener;
end
if (strcmp(profile, 'vn') == 1) || (sigma > 40)
N2 = 32;
Nstep = 4;
N1_wiener = 11;
Nstep_wiener = 6;
lambda_thr3D = 2.8;
thrToIncStep = 3;
tau_match_wiener = 3500;
tau_match = 25000;
Ns_wiener = 39;
end
% The 'vn_old' profile corresponds to the original parameters for strong noise proposed in [1].
if (strcmp(profile, 'vn_old') == 1) && (sigma > 40)
transform_2D_HT_name = 'dct';
N1 = 12;
Nstep = 4;
N1_wiener = 11;
Nstep_wiener = 6;
lambda_thr3D = 2.8;
lambda_thr2D = 2.0;
thrToIncStep = 3;
tau_match_wiener = 3500;
tau_match = 5000;
Ns_wiener = 39;
end
decLevel = 0; %% dec. levels of the dyadic wavelet 2D transform for blocks (0 means full decomposition, higher values decrease the dec. number)
thr_mask = ones(N1); %% N1xN1 mask of threshold scaling coeff. --- by default there is no scaling, however the use of different thresholds for different wavelet decomposition subbands can be done with this matrix
if strcmp(profile, 'high') == 1 %% this profile is not documented in [1]
decLevel = 1;
Nstep = 2;
Nstep_wiener = 2;
lambda_thr3D = 2.5;
vMask = ones(N1,1); vMask((end/4+1):end/2)= 1.01; vMask((end/2+1):end) = 1.07; %% this allows different thresholds for the finest and next-to-the-finest subbands
thr_mask = vMask * vMask';
beta = 2.5;
beta_wiener = 1.5;
end
%%% Check whether to dump information to the screen or remain silent
dump_output_information = 1;
if (exist('print_to_screen') == 1) && (print_to_screen == 0)
dump_output_information = 0;
end
%%%% Create transform matrices, etc.
%%%%
[Tfor, Tinv] = getTransfMatrix(N1, transform_2D_HT_name, decLevel); %% get (normalized) forward and inverse transform matrices
[TforW, TinvW] = getTransfMatrix(N1_wiener, transform_2D_Wiener_name, 0); %% get (normalized) forward and inverse transform matrices
if (strcmp(transform_3rd_dim_name, 'haar') == 1) || (strcmp(transform_3rd_dim_name(end-2:end), '1.1') == 1)
%%% If Haar is used in the 3-rd dimension, then a fast internal transform is used, thus no need to generate transform
%%% matrices.
hadper_trans_single_den = {};
inverse_hadper_trans_single_den = {};
else
%%% Create transform matrices. The transforms are later applied by
%%% matrix-vector multiplication for the 1D case.
for hpow = 0:ceil(log2(max(N2,N2_wiener)))
h = 2^hpow;
[Tfor3rd, Tinv3rd] = getTransfMatrix(h, transform_3rd_dim_name, 0);
hadper_trans_single_den{h} = single(Tfor3rd);
inverse_hadper_trans_single_den{h} = single(Tinv3rd');
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% 2D Kaiser windows used in the aggregation of block-wise estimates
%%%%
if beta_wiener==2 && beta==2 && N1_wiener==8 && N1==8 % hardcode the window function so that the signal processing toolbox is not needed by default
Wwin2D = [ 0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924];
Wwin2D_wiener = Wwin2D;
else
Wwin2D = kaiser(N1, beta) * kaiser(N1, beta)'; % Kaiser window used in the aggregation of the HT part
Wwin2D_wiener = kaiser(N1_wiener, beta_wiener) * kaiser(N1_wiener, beta_wiener)'; % Kaiser window used in the aggregation of the Wiener filt. part
end
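The hardcoded 8x8 matrix above is exactly `kaiser(8,2)*kaiser(8,2)'`. As a sanity check, a pure-Python sketch of the 1D Kaiser window using the power series for the modified Bessel function I0 (helper names `i0` and `kaiser_window` are illustrative, not part of this package):

```python
import math

def i0(x, terms=25):
    # Modified Bessel function of the first kind, order 0 (power series).
    return sum((x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def kaiser_window(n, beta):
    """1D Kaiser window of length n, as computed by MATLAB's kaiser(n, beta)."""
    half = (n - 1) / 2
    return [i0(beta * math.sqrt(1 - ((i - half) / half) ** 2)) / i0(beta)
            for i in range(n)]

w = kaiser_window(8, 2.0)
# The 2D aggregation window is the outer product w * w'; its corner entry
# reproduces the hardcoded 0.1924 above.
print(round(w[0] * w[0], 4))  # 0.1924
```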
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% If needed, read images, generate noise, or scale the images to the
%%%% [0,1] interval
%%%%
if (exist('y') ~= 1) || (exist('z') ~= 1)
y = im2double(imread(image_name)); %% read a noise-free image and put in intensity range [0,1]
randn('seed', 0); %% generate seed
z = y + (sigma/255)*randn(size(y)); %% create a noisy image
else % external images
image_name = 'External image';
% convert z to double precision if needed
z = double(z);
% convert y to double precision if needed
y = double(y);
% if z's range is [0, 255], then convert to [0, 1]
if (max(z(:)) > 10) % a naive check for intensity range
z = z / 255;
end
% if y's range is [0, 255], then convert to [0, 1]
if (max(y(:)) > 10) % a naive check for intensity range
y = y / 255;
end
end
if (size(z,3) ~= 1) || (size(y,3) ~= 1)
error('BM3D accepts only grayscale 2D images.');
end
% Check if the true image y is a valid one; if not, then we cannot compute PSNR, etc.
y_is_invalid_image = (length(size(z)) ~= length(size(y))) | (size(z,1) ~= size(y,1)) | (size(z,2) ~= size(y,2));
if (y_is_invalid_image)
dump_output_information = 0;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Print image information to the screen
%%%%
if dump_output_information == 1
fprintf('Image: %s (%dx%d), sigma: %.1f\n', image_name, size(z,1), size(z,2), sigma);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Step 1. Produce the basic estimate by HT filtering
%%%%
tic;
y_hat = bm3d_thr(z, hadper_trans_single_den, Nstep, N1, N2, lambda_thr2D,...
lambda_thr3D, tau_match*N1*N1/(255*255), (Ns-1)/2, (sigma/255), thrToIncStep, single(Tfor), single(Tinv)', inverse_hadper_trans_single_den, single(thr_mask), Wwin2D, smallLN, stepFS );
estimate_elapsed_time = toc;
if dump_output_information == 1
PSNR_INITIAL_ESTIMATE = 10*log10(1/mean((y(:)-double(y_hat(:))).^2));
fprintf('BASIC ESTIMATE, PSNR: %.2f dB\n', PSNR_INITIAL_ESTIMATE);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Step 2. Produce the final estimate by Wiener filtering (using the
%%%% hard-thresholding initial estimate)
%%%%
tic;
y_est = bm3d_wiener(z, y_hat, hadper_trans_single_den, Nstep_wiener, N1_wiener, N2_wiener, ...
'unused arg', tau_match_wiener*N1_wiener*N1_wiener/(255*255), (Ns_wiener-1)/2, (sigma/255), 'unused arg', single(TforW), single(TinvW)', inverse_hadper_trans_single_den, Wwin2D_wiener, smallLNW, stepFSW, single(ones(N1_wiener)) );
wiener_elapsed_time = toc;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Calculate the final estimate's PSNR, print it, and show the
%%%% denoised image next to the noisy one
%%%%
y_est = double(y_est);
PSNR = 0; %% Remains 0 if the true image y is not available
SSIM = 0;
if (~y_is_invalid_image) % checks if y is a valid image
PSNR = 10*log10(1/mean((y(:)-y_est(:)).^2)); % y is valid
SSIM = ssim(y, y_est);
end
if dump_output_information == 1
fprintf('FINAL ESTIMATE (total time: %.1f sec), PSNR: %.2f dB\n', ...
wiener_elapsed_time + estimate_elapsed_time, PSNR);
figure, imshow(z); title(sprintf('Noisy %s, PSNR: %.3f dB (sigma: %d)', ...
image_name(1:end-4), 10*log10(1/mean((y(:)-z(:)).^2)), sigma));
figure, imshow(y_est); title(sprintf('Denoised %s, PSNR: %.3f dB', ...
image_name(1:end-4), PSNR));
end
return;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Some auxiliary functions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% Create forward and inverse transform matrices, which allow for perfect
% reconstruction. The forward transform matrix is normalized so that the
% l2-norm of each basis element is 1.
%
% [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% INPUTS:
%
% N --> Size of the transform (for wavelets, must be 2^K)
%
% transform_type --> 'dct', 'dst', 'hadamard', or anything that is
% listed by 'help wfilters' (bi-orthogonal wavelets)
% 'DCrand' -- an orthonormal transform with a DC and all
% the other basis elements of random nature
%
% dec_levels --> If a wavelet transform is generated, this is the
% desired decomposition level. Must be in the
% range [0, log2(N)-1], where "0" implies
% full decomposition.
%
% OUTPUTS:
%
% Tforward --> (N x N) Forward transform matrix
%
% Tinverse --> (N x N) Inverse transform matrix
%
if exist('dec_levels') ~= 1
dec_levels = 0;
end
if N == 1
Tforward = 1;
elseif strcmp(transform_type, 'hadamard') == 1
Tforward = hadamard(N);
elseif (N == 8) && strcmp(transform_type, 'bior1.5')==1 % hardcoded transform so that the wavelet toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.219417649252501 0.449283757993216 0.449283757993216 0.219417649252501 -0.219417649252501 -0.449283757993216 -0.449283757993216 -0.219417649252501;
0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846 -0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284;
-0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284 0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846;
0.707106781186547 -0.707106781186547 0 0 0 0 0 0;
0 0 0.707106781186547 -0.707106781186547 0 0 0 0;
0 0 0 0 0.707106781186547 -0.707106781186547 0 0;
0 0 0 0 0 0 0.707106781186547 -0.707106781186547];
elseif (N == 8) && strcmp(transform_type, 'dct')==1 % hardcoded transform so that the signal processing toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.490392640201615 0.415734806151273 0.277785116509801 0.097545161008064 -0.097545161008064 -0.277785116509801 -0.415734806151273 -0.490392640201615;
0.461939766255643 0.191341716182545 -0.191341716182545 -0.461939766255643 -0.461939766255643 -0.191341716182545 0.191341716182545 0.461939766255643;
0.415734806151273 -0.097545161008064 -0.490392640201615 -0.277785116509801 0.277785116509801 0.490392640201615 0.097545161008064 -0.415734806151273;
0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274 0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274;
0.277785116509801 -0.490392640201615 0.097545161008064 0.415734806151273 -0.415734806151273 -0.097545161008064 0.490392640201615 -0.277785116509801;
0.191341716182545 -0.461939766255643 0.461939766255643 -0.191341716182545 -0.191341716182545 0.461939766255643 -0.461939766255643 0.191341716182545;
0.097545161008064 -0.277785116509801 0.415734806151273 -0.490392640201615 0.490392640201615 -0.415734806151273 0.277785116509801 -0.097545161008064];
elseif (N == 8) && strcmp(transform_type, 'dst')==1 % hardcoded transform so that the PDE toolbox is not needed to generate it
Tforward = [ 0.161229841765317 0.303012985114696 0.408248290463863 0.464242826880013 0.464242826880013 0.408248290463863 0.303012985114696 0.161229841765317;
0.303012985114696 0.464242826880013 0.408248290463863 0.161229841765317 -0.161229841765317 -0.408248290463863 -0.464242826880013 -0.303012985114696;
0.408248290463863 0.408248290463863 0 -0.408248290463863 -0.408248290463863 0 0.408248290463863 0.408248290463863;
0.464242826880013 0.161229841765317 -0.408248290463863 -0.303012985114696 0.303012985114696 0.408248290463863 -0.161229841765317 -0.464242826880013;
0.464242826880013 -0.161229841765317 -0.408248290463863 0.303012985114696 0.303012985114696 -0.408248290463863 -0.161229841765317 0.464242826880013;
0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863;
0.303012985114696 -0.464242826880013 0.408248290463863 -0.161229841765317 -0.161229841765317 0.408248290463863 -0.464242826880013 0.303012985114696;
0.161229841765317 -0.303012985114696 0.408248290463863 -0.464242826880013 0.464242826880013 -0.408248290463863 0.303012985114696 -0.161229841765317];
elseif strcmp(transform_type, 'dct') == 1
Tforward = dct(eye(N));
elseif strcmp(transform_type, 'dst') == 1
Tforward = dst(eye(N));
elseif strcmp(transform_type, 'DCrand') == 1
x = randn(N); x(1:end,1) = 1; [Q,R] = qr(x);
if (Q(1) < 0)
Q = -Q;
end
Tforward = Q';
else %% a wavelet decomposition supported by 'wavedec'
%%% Set periodic boundary conditions, to preserve bi-orthogonality
dwtmode('per','nodisp');
Tforward = zeros(N,N);
for i = 1:N
Tforward(:,i)=wavedec(circshift([1 zeros(1,N-1)],[dec_levels i-1]), log2(N), transform_type); %% construct transform matrix
end
end
%%% Normalize the basis elements
Tforward = (Tforward' * diag(sqrt(1./sum(Tforward.^2,2))))';
%%% Compute the inverse transform matrix
Tinverse = inv(Tforward);
return;
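For intuition, the row normalization and inversion performed by getTransfMatrix can be reproduced outside MATLAB. Below is a minimal Python/NumPy analogue of the 'dct' branch (function name and sizes are illustrative, not part of this toolbox): it builds the DCT-II basis, normalizes every basis element to unit l2-norm as the MATLAB code does, and checks the perfect-reconstruction property.

```python
import numpy as np

def get_transf_matrix_dct(n):
    """Build an n x n DCT-II matrix, normalize each basis row to unit
    l2-norm, and return the forward/inverse pair (mirroring what
    getTransfMatrix does for transform_type = 'dct')."""
    k = np.arange(n)
    # Unnormalized DCT-II basis: rows indexed by frequency, columns by sample.
    t_forward = np.cos(np.pi * np.outer(k, 2 * k + 1) / (2 * n))
    # Normalize every basis element (row) to l2-norm 1, as the
    # "Normalize the basis elements" step above does for the rows.
    t_forward /= np.linalg.norm(t_forward, axis=1, keepdims=True)
    t_inverse = np.linalg.inv(t_forward)
    return t_forward, t_inverse

tf, ti = get_transf_matrix_dct(8)
# Perfect reconstruction: Tinverse * Tforward == identity.
print(np.allclose(ti @ tf, np.eye(8)))  # True
# For an orthonormal transform the inverse is simply the transpose.
print(np.allclose(ti, tf.T))            # True
# The DC row matches the hardcoded value 0.353553390593274 = 1/sqrt(8).
print(abs(tf[0, 0] - 0.353553390593274) < 1e-12)  # True
```

The same normalization step is what makes the hardcoded 8x8 'dct', 'dst', and 'bior1.5' matrices above directly usable without any toolbox.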
================================================
FILE: BM3D/BM3DDEB.m
================================================
function [ISNR, y_hat_RWI] = BM3DDEB(experiment_number, test_image_name)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Copyright (c) 2008-2014 Tampere University of Technology. All rights reserved.
% This work should only be used for nonprofit purposes.
%
% AUTHORS:
% Kostadin Dabov
% Alessandro Foi email: alessandro.foi _at_ tut.fi
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% This function implements the image deblurring method proposed in:
%
% [1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image
% restoration by sparse 3D transform-domain collaborative filtering,"
%  Proc. SPIE Electronic Imaging, January 2008.
%
% FUNCTION INTERFACE:
%
%  [ISNR, y_hat_RWI] = BM3DDEB(experiment_number, test_image_name)
%
% INPUT:
% 1) experiment_number: 1 -> PSF 1, sigma^2 = 2
% 2 -> PSF 1, sigma^2 = 8
% 3 -> PSF 2, sigma^2 = 0.308
% 4 -> PSF 3, sigma^2 = 49
% 5 -> PSF 4, sigma^2 = 4
% 6 -> PSF 5, sigma^2 = 64
%
% 2) test_image_name: a valid filename of a grayscale test image
%
% OUTPUT:
% 1) ISNR: the output improvement in SNR, dB
% 2) y_hat_RWI: the restored image
%
%  ! The function can work without any of the input arguments,
%  in which case the internal defaults are used !
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Fixed regularization parameters (obtained empirically after a rough optimization)
Regularization_alpha_RI = 4e-4;
Regularization_alpha_RWI = 5e-3;
%%%% Experiment number (see below for details, e.g. how the blur is generated, etc.)
if (exist('experiment_number') ~= 1)
experiment_number = 3; % 1 -- 6
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Select a single image filename (might contain path)
%%%%
if (exist('test_image_name') ~= 1)
test_image_name = [
% 'Lena512.png'
'Cameraman256.png'
% 'barbara.png'
% 'house.png'
];
end
%%%% Select 2D transforms ('dct', 'dst', 'hadamard', or anything that is listed by 'help wfilters'):
transform_2D_HT_name = 'dst'; %% 2D transform (of size N1 x N1) used in Step 1
transform_2D_Wiener_name = 'dct'; %% 2D transform (of size N1_wiener x N1_wiener) used in Step 2
transform_3rd_dimage_name = 'haar'; %% 1D transform used in the 3rd dimension, the same for both steps
%%%% Step 1 (BM3D with collaborative hard-thresholding) parameters:
N1 = 8; %% N1 x N1 is the block size
Nstep = 3; %% sliding step to process every next reference block
N2 = 16; %% maximum number of similar blocks (maximum size of the 3rd dimension of a 3D array)
Ns = 39; %% length of the side of the search neighborhood for full-search block-matching (BM)
tau_match = 6000;%% threshold for the block distance (d-distance)
lambda_thr2D = 0; %% threshold for the coarse initial denoising used in the d-distance measure
lambda_thr3D = 2.9; %% threshold for the hard-thresholding
beta = 0; %% the beta parameter of the 2D Kaiser window used in the reconstruction
%%%% Step 2 (BM3D with collaborative Wiener filtering) parameters:
N1_wiener = 8;
Nstep_wiener = 2;
N2_wiener = 16;
Ns_wiener = 39;
tau_match_wiener = 800;
beta_wiener = 0;
%%%% Specify whether to print results and display images
print_to_screen = 1;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Note: touch below this point only if you know what you are doing!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Make parameters compatible with the interface of the mex-functions
%%%%
[Tfor, Tinv] = getTransfMatrix(N1, transform_2D_HT_name, 0); %% get (normalized) forward and inverse transform matrices
[TforW, TinvW] = getTransfMatrix(N1_wiener, transform_2D_Wiener_name, 0); %% get (normalized) forward and inverse transform matrices
if (strcmp(transform_3rd_dimage_name, 'haar') == 1),
%%% Fast internal transform is used, no need to generate transform
%%% matrices.
hadper_trans_single_den = {};
inverse_hadper_trans_single_den = {};
else
%%% Create transform matrices. The transforms are later applied by
%%% vector-matrix multiplications
for hpow = 0:ceil(log2(max(N2,N2_wiener))),
h = 2^hpow;
[Tfor3rd, Tinv3rd] = getTransfMatrix(h, transform_3rd_dimage_name, 0);
hadper_trans_single_den{h} = single(Tfor3rd);
inverse_hadper_trans_single_den{h} = single(Tinv3rd');
end
end
if beta == 0 & beta_wiener == 0
Wwin2D = ones(N1,N1);
Wwin2D_wiener = ones(N1_wiener,N1_wiener);
else
Wwin2D = kaiser(N1, beta) * kaiser(N1, beta)'; % Kaiser window used in the hard-thresholding part
Wwin2D_wiener = kaiser(N1_wiener, beta_wiener) * kaiser(N1_wiener, beta_wiener)'; % Kaiser window used in the Wiener filtering part
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Read an image and generate a blurred and noisy image
%%%%
y = im2double(imread(test_image_name));
if experiment_number==1
sigma=sqrt(2)/255;
for x1=-7:7; for x2=-7:7; v(x1+8,x2+8)=1/(x1^2+x2^2+1); end, end; v=v./sum(v(:));
end
if experiment_number==2
sigma=sqrt(8)/255;
s1=0; for a1=-7:7; s1=s1+1; s2=0; for a2=-7:7; s2=s2+1; v(s1,s2)=1/(a1^2+a2^2+1); end, end; v=v./sum(v(:));
end
if experiment_number==3
BSNR=40; sigma=-1; % if "sigma=-1", then the value of sigma depends on the BSNR
v=ones(9); v=v./sum(v(:));
end
if experiment_number==4
sigma=7/255;
v=[1 4 6 4 1]'*[1 4 6 4 1]; v=v./sum(v(:)); % PSF
end
if experiment_number==5
sigma=2/255;
v=fspecial('gaussian', 25, 1.6);
end
if experiment_number==6
sigma=8/255;
v=fspecial('gaussian', 25, .4);
end
[Xv, Xh] = size(y);
[ghy,ghx] = size(v);
big_v = zeros(Xv,Xh); big_v(1:ghy,1:ghx)=v; big_v=circshift(big_v,-round([(ghy-1)/2 (ghx-1)/2])); % pad PSF with zeros to whole image domain, and center it
V = fft2(big_v); % frequency response of the PSF
y_blur = imfilter(y, v(end:-1:1,end:-1:1), 'circular'); % performs blurring (by circular convolution)
randn('seed',0); %%% fix seed for the random number generator
if sigma == -1 %% check whether to use BSNR in order to define value of sigma
sigma=sqrt(norm(y_blur(:)-mean(y_blur(:)),2)^2 /(Xh*Xv*10^(BSNR/10))); % compute sigma from the desired BSNR
end
%%%% Create a blurred and noisy observation
z = y_blur + sigma*randn(Xv,Xh);
tic;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Step 1: Final estimate by Regularized Inversion (RI) followed by
%%%% BM3D with collaborative hard-thresholding
%%%%
%%%% Step 1.1. Regularized Inversion
RI= conj(V)./( (abs(V).^2) + Regularization_alpha_RI * Xv*Xh*sigma^2); % Transfer Matrix for RI %% Standard Tikhonov Regularization
zRI=real(ifft2( fft2(z).* RI )); % Regularized Inverse Estimate (RI OBSERVATION)
stdRI = zeros(N1, N1);
for ii = 1:N1,
for jj = 1:N1,
UnitMatrix = zeros(N1,N1); UnitMatrix(ii,jj)=1;
BasisElementPadded = zeros(Xv, Xh); BasisElementPadded(1:N1,1:N1) = Tinv*UnitMatrix*Tinv';
TransfBasisElementPadded = fft2(BasisElementPadded);
stdRI(ii,jj) = sqrt( (1/(Xv*Xh)) * sum(sum(abs(TransfBasisElementPadded.*RI).^2)) )*sigma;
end,
end
%%%% Step 1.2. Colored noise suppression by BM3D with collaborative hard-
%%%% thresholding
y_hat_RI = bm3d_thr_colored_noise(zRI, hadper_trans_single_den, Nstep, N1, N2, lambda_thr2D,...
lambda_thr3D, tau_match*N1*N1/(255*255), (Ns-1)/2, sigma, 0, single(Tfor), single(Tinv)',...
inverse_hadper_trans_single_den, single(stdRI'), Wwin2D, 0, 1 );
PSNR_INITIAL_ESTIMATE = 10*log10(1/mean((y(:)-y_hat_RI(:)).^2));
ISNR_INITIAL_ESTIMATE = PSNR_INITIAL_ESTIMATE - 10*log10(1/mean((y(:)-z(:)).^2));
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Step 2: Final estimate by Regularized Wiener Inversion (RWI) followed
%%%% by BM3D with collaborative Wiener filtering
%%%%
%%%% Step 2.1. Regularized Wiener Inversion
Wiener_Pilot = abs(fft2(double(y_hat_RI))); %%% Wiener reference estimate
RWI = conj(V).*Wiener_Pilot.^2./(Wiener_Pilot.^2.*(abs(V).^2) + Regularization_alpha_RWI*Xv*Xh*sigma^2); % Transfer Matrix for RWI (uses standard regularization 'a-la-Tikhonov')
zRWI = real(ifft2(fft2(z).*RWI)); % RWI OBSERVATION
stdRWI = zeros(N1_wiener, N1_wiener);
for ii = 1:N1_wiener,
for jj = 1:N1_wiener,
UnitMatrix = zeros(N1_wiener,N1_wiener); UnitMatrix(ii,jj)=1;
BasisElementPadded = zeros(Xv, Xh); BasisElementPadded(1:N1_wiener,1:N1_wiener) = TinvW*UnitMatrix*TinvW';
TransfBasisElementPadded = fft2(BasisElementPadded);
stdRWI(ii,jj) = sqrt( (1/(Xv*Xh)) * sum(sum(abs(TransfBasisElementPadded.*RWI).^2)) )*sigma;
end,
end
%%%% Step 2.2. Colored noise suppression by BM3D with collaborative Wiener
%%%% filtering
y_hat_RWI = bm3d_wiener_colored_noise(zRWI, y_hat_RI, hadper_trans_single_den, Nstep_wiener, N1_wiener, N2_wiener, ...
0, tau_match_wiener*N1_wiener*N1_wiener/(255*255), (Ns_wiener-1)/2, 0, single(stdRWI'), single(TforW), single(TinvW)',...
inverse_hadper_trans_single_den, Wwin2D_wiener, 0, 1, single(ones(N1_wiener)) );
elapsed_time = toc;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Calculate the final estimate's PSNR and ISNR, print them, and show the
%%%% restored image
%%%%
PSNR = 10*log10(1/mean((y(:)-y_hat_RWI(:)).^2));
ISNR = PSNR - 10*log10(1/mean((y(:)-z(:)).^2));
if print_to_screen == 1
fprintf('Image: %s, Exp %d, Time: %.1f sec, PSNR-RI: %.2f dB, PSNR-RWI: %.2f dB, ISNR-RWI: %.2f dB\n', ...
test_image_name, experiment_number, elapsed_time, PSNR_INITIAL_ESTIMATE, PSNR, ISNR);
figure,imshow(z);
figure,imshow(double(y_hat_RWI));
end
return;
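Step 1.1 above is plain Tikhonov-regularized deconvolution in the FFT domain: the inverse filter conj(V)/(|V|^2 + reg) is applied to the observation's spectrum. A minimal Python/NumPy sketch of that step, using an illustrative 3x3 box PSF and a hand-picked regularization constant rather than the Regularization_alpha_RI * Xv*Xh*sigma^2 term used above:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.random((8, 8))                 # "clean" test image

# Circularly pad and center a 3x3 box PSF, as done for big_v above.
v = np.full((3, 3), 1 / 9)
big_v = np.zeros_like(y)
big_v[:3, :3] = v
big_v = np.roll(big_v, (-1, -1), axis=(0, 1))
V = np.fft.fft2(big_v)                 # frequency response of the PSF

# Circular blur of the image (noiseless here, for clarity).
z = np.real(np.fft.ifft2(np.fft.fft2(y) * V))

# Tikhonov-regularized inverse filter: conj(V) / (|V|^2 + reg).
reg = 1e-8
RI = np.conj(V) / (np.abs(V) ** 2 + reg)
z_ri = np.real(np.fft.ifft2(np.fft.fft2(z) * RI))

# With a noiseless observation and tiny regularization the inversion
# recovers the image almost exactly; with noise, reg trades off noise
# amplification against bias, and BM3D then removes the residual
# colored noise.
print(np.abs(z_ri - y).max() < 1e-4)   # True
```

The colored-noise standard deviations stdRI computed in the double loop above describe exactly how this inverse filter reshapes the noise spectrum within each transform subband.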
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Some auxiliary functions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% Create forward and inverse transform matrices, which allow for perfect
% reconstruction. The forward transform matrix is normalized so that the
% l2-norm of each basis element is 1.
%
% [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% INPUTS:
%
% N --> Size of the transform (for wavelets, must be 2^K)
%
%  transform_type   --> 'dct', 'dst', 'hadamard', or anything that is
%                       listed by 'help wfilters' (bi-orthogonal wavelets);
%                       'DCrand' -- an orthonormal transform with a DC basis
%                       element and the remaining basis elements generated
%                       at random
%
% dec_levels --> If a wavelet transform is generated, this is the
% desired decomposition level. Must be in the
% range [0, log2(N)-1], where "0" implies
% full decomposition.
%
% OUTPUTS:
%
% Tforward --> (N x N) Forward transform matrix
%
% Tinverse --> (N x N) Inverse transform matrix
%
if exist('dec_levels') ~= 1,
dec_levels = 0;
end
if N == 1,
Tforward = 1;
elseif strcmp(transform_type, 'hadamard') == 1,
Tforward = hadamard(N);
elseif (N == 8) & strcmp(transform_type, 'bior1.5')==1 % hardcoded transform so that the wavelet toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.219417649252501 0.449283757993216 0.449283757993216 0.219417649252501 -0.219417649252501 -0.449283757993216 -0.449283757993216 -0.219417649252501;
0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846 -0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284;
-0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284 0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846;
0.707106781186547 -0.707106781186547 0 0 0 0 0 0;
0 0 0.707106781186547 -0.707106781186547 0 0 0 0;
0 0 0 0 0.707106781186547 -0.707106781186547 0 0;
0 0 0 0 0 0 0.707106781186547 -0.707106781186547];
elseif (N == 8) & strcmp(transform_type, 'dct')==1 % hardcoded transform so that the signal processing toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.490392640201615 0.415734806151273 0.277785116509801 0.097545161008064 -0.097545161008064 -0.277785116509801 -0.415734806151273 -0.490392640201615;
0.461939766255643 0.191341716182545 -0.191341716182545 -0.461939766255643 -0.461939766255643 -0.191341716182545 0.191341716182545 0.461939766255643;
0.415734806151273 -0.097545161008064 -0.490392640201615 -0.277785116509801 0.277785116509801 0.490392640201615 0.097545161008064 -0.415734806151273;
0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274 0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274;
0.277785116509801 -0.490392640201615 0.097545161008064 0.415734806151273 -0.415734806151273 -0.097545161008064 0.490392640201615 -0.277785116509801;
0.191341716182545 -0.461939766255643 0.461939766255643 -0.191341716182545 -0.191341716182545 0.461939766255643 -0.461939766255643 0.191341716182545;
0.097545161008064 -0.277785116509801 0.415734806151273 -0.490392640201615 0.490392640201615 -0.415734806151273 0.277785116509801 -0.097545161008064];
elseif (N == 8) & strcmp(transform_type, 'dst')==1 % hardcoded transform so that the PDE toolbox is not needed to generate it
Tforward = [ 0.161229841765317 0.303012985114696 0.408248290463863 0.464242826880013 0.464242826880013 0.408248290463863 0.303012985114696 0.161229841765317;
0.303012985114696 0.464242826880013 0.408248290463863 0.161229841765317 -0.161229841765317 -0.408248290463863 -0.464242826880013 -0.303012985114696;
0.408248290463863 0.408248290463863 0 -0.408248290463863 -0.408248290463863 0 0.408248290463863 0.408248290463863;
0.464242826880013 0.161229841765317 -0.408248290463863 -0.303012985114696 0.303012985114696 0.408248290463863 -0.161229841765317 -0.464242826880013;
0.464242826880013 -0.161229841765317 -0.408248290463863 0.303012985114696 0.303012985114696 -0.408248290463863 -0.161229841765317 0.464242826880013;
0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863;
0.303012985114696 -0.464242826880013 0.408248290463863 -0.161229841765317 -0.161229841765317 0.408248290463863 -0.464242826880013 0.303012985114696;
0.161229841765317 -0.303012985114696 0.408248290463863 -0.464242826880013 0.464242826880013 -0.408248290463863 0.303012985114696 -0.161229841765317];
elseif strcmp(transform_type, 'dct') == 1,
Tforward = dct(eye(N));
elseif strcmp(transform_type, 'dst') == 1,
Tforward = dst(eye(N));
elseif strcmp(transform_type, 'DCrand') == 1,
x = randn(N); x(1:end,1) = 1; [Q,R] = qr(x);
if (Q(1) < 0),
Q = -Q;
end;
Tforward = Q';
else %% a wavelet decomposition supported by 'wavedec'
%%% Set periodic boundary conditions, to preserve bi-orthogonality
dwtmode('per','nodisp');
Tforward = zeros(N,N);
for i = 1:N
Tforward(:,i)=wavedec(circshift([1 zeros(1,N-1)],[dec_levels i-1]), log2(N), transform_type); %% construct transform matrix
end
end
%%% Normalize the basis elements
Tforward = (Tforward' * diag(sqrt(1./sum(Tforward.^2,2))))';
%%% Compute the inverse transform matrix
Tinverse = inv(Tforward);
return;
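The quality figures reported by BM3DDEB are simple functions of the mean squared error against the original image y. A short Python sketch of the PSNR/ISNR formulas used above (intensity range [0, 1]; the arrays here are synthetic stand-ins, not outputs of the actual filter):

```python
import numpy as np

def psnr(y, x):
    """PSNR in dB for images in [0, 1]: 10*log10(1 / MSE)."""
    return 10 * np.log10(1.0 / np.mean((y - x) ** 2))

rng = np.random.default_rng(1)
y = rng.random((16, 16))                           # original image
z = y + 0.10 * rng.standard_normal(y.shape)        # degraded observation
y_hat = y + 0.05 * rng.standard_normal(y.shape)    # (pretend) restored image

# ISNR: improvement of the restored image over the observation, in dB.
# Equivalently, ISNR = 10*log10(MSE(z) / MSE(y_hat)).
isnr = psnr(y, y_hat) - psnr(y, z)
print(isnr > 0)  # True: y_hat is closer to y than z is
```

A positive ISNR means the restoration improved on the blurred/noisy input; halving the residual noise standard deviation, as in this synthetic example, yields roughly a 6 dB improvement.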
================================================
FILE: BM3D/BM3DSHARP.m
================================================
function [y_hat] = BM3DSHARP(z, sigma, alpha_sharp, profile, print_to_screen)
%
% Joint sharpening and denoising with BM3D. This is implementation of the
% BM3D-SH3D sharpening method that is developed in:
%
% [1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Joint image
% sharpening and denoising by 3D transform-domain collaborative filtering,"
% Proc. 2007 Int. TICSP Workshop Spectral Meth. Multirate Signal Process.,
% SMMSP 2007, Moscow, Russia, September 2007.
%
% FUNCTION INTERFACE:
%
% [ysharp] = BM3DSHARP(z, sigma, alpha_sharp, profile, print_to_screen)
%
% The function can work without any of the input arguments, hence they are
% optional!
%
% INPUTS (OPTIONAL):
%
% 1) z (matrix, size MxN) : Input image (noisy and with poor contrast)
%  2) sigma (double)            : Standard deviation of the noise, if any
%                                 (signal assumed in the range [0, 255])
% 3) alpha_sharp (double) : Sharpening parameter (default: 1.5):
% (1,inf) -> sharpen
% 1 -> no sharpening
% (0,1) -> de-sharpen
% 4) profile (char vector) : 'lc' --> fast
% 'np' --> normal (default)
%  5) print_to_screen (boolean) : 0 --> do not print output information
%                                       (and do not plot figures)
%                                 1 --> print output information and
%                                       plot figures (default)
%
% OUTPUTS:
% 1) ysharp (matrix, size MxN) : Sharpened image (in the range [0,1])
%
% BASIC USAGE EXAMPLES:
%
% sigma = 10;
% z = im2double(imread('cameraman.tif'));
% z = z + (sigma/255)*randn(size(z));
% alpha_sharp = 1.3;
% [ysharp] = BM3DSHARP(z, sigma, alpha_sharp);
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Copyright 2007 Tampere University of Technology. All rights reserved.
% This work should only be used for nonprofit purposes.
%
% AUTHORS:
% Kostadin Dabov (2007), email: kostadin.dabov _at_ tut.fi
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% In case an input image z is not provided, the filename below is
%%%% used to read an original image (it may also contain a path).
%%%% Artificial AWGN is then added, and the noisy image is processed
%%%% by BM3D-SH3D.
%%%%
if (exist('image_name') ~= 1)
image_name = [
%
%%%% Grayscale images
% 'barco.png'
% 'pentagon.tif'
'Cameraman256.png'
% 'boat.png'
% 'Lena512.png'
% 'house.png'
% 'barbara.png'
];
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Quality/complexity trade-off profile selection
%%%%
%%%% 'np'   --> Normal Profile (balanced quality)
%%%% 'lc'   --> Low Complexity Profile (fast, lower quality)
%%%% 'vn'   --> Very Noisy Profile (also applied automatically when sigma > 40)
%%%%
%%%% 'high' --> High Profile (high quality, not documented in [1])
%%%%
if (exist('profile') ~= 1)
profile = 'np'; %% default profile
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Specify the std. dev. of the corrupting noise
%%%%
if (exist('sigma') ~= 1),
if (exist('z') ~= 1)
sigma = 20; %% default standard deviation of the AWGN
else
fprintf('Please specify the noise standard deviation "sigma"\n');
y_hat = 0;
return;
end
end
if (exist('alpha_sharp') ~= 1)
alpha_sharp = 3/2;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Following are the parameters for the Normal Profile.
%%%%
%%%% Select transforms ('dct', 'dst', 'hadamard', or anything that is listed by 'help wfilters'):
transform_2D_HT_name = 'bior1.5'; %% transform used for the HT filt. of size N1 x N1
transform_3rd_dim_name = 'haar'; %% transform used in the 3-rd dim, the same for HT and Wiener filt.
%%%% Hard-thresholding (HT) parameters:
N1 = 8; %% N1 x N1 is the block size used for the hard-thresholding (HT) filtering
Nstep = 3; %% sliding step to process every next reference block
N2 = 16; %% maximum number of similar blocks (maximum size of the 3rd dimension of a 3D array)
Ns = 39; %% length of the side of the search neighborhood for full-search block-matching (BM), must be odd
tau_match = 3000;%% threshold for the block-distance (d-distance)
lambda_thr2D = 0; %% threshold parameter for the coarse initial denoising used in the d-distance measure
lambda_thr3D = 2.7; %% threshold parameter for the hard-thresholding in 3D transform domain
beta = 2.0; %% parameter of the 2D Kaiser window used in the reconstruction
%%%% Block-matching parameters:
stepFS = 1; %% step that forces to switch to full-search BM, "1" implies always full-search
smallLN = 'not used in np'; %% if stepFS > 1, then this specifies the size of the small local search neighb.
thrToIncStep = 8; %% used in the HT filtering to increase the sliding step in uniform regions
if strcmp(profile, 'lc') == 1,
Nstep = 6;
Ns = 25;
thrToIncStep = 3;
smallLN = 3;
stepFS = 6*Nstep;
end
if (strcmp(profile, 'vn') == 1) | (sigma > 40),
transform_2D_HT_name = 'dct';
N1 = 12;
Nstep = 4;
lambda_thr3D = 2.8;
lambda_thr2D = 2.0;
thrToIncStep = 3;
tau_match = 5000;
end
decLevel = 0; %% dec. levels of the dyadic wavelet 2D transform for blocks (0 means full decomposition, higher values decrease the dec. number)
thr_mask = ones(N1); %% N1 x N1 mask of threshold-scaling coefficients --- by default there is no scaling; different thresholds for different wavelet decomposition subbands can be set through this matrix
if strcmp(profile, 'high') == 1, %% this profile is not documented in [1]
decLevel = 1;
Nstep = 2;
lambda_thr3D = 2.5;
vMask = ones(N1,1); vMask((end/4+1):end/2)= 1.01; vMask((end/2+1):end) = 1.07; %% allows different thresholds for the finest and next-to-finest subbands
thr_mask = vMask * vMask';
beta = 2.5;
beta_wiener = 1.5;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Note: touch below this point only if you know what you are doing!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Check whether to dump information to the screen or remain silent
dump_output_information = 1;
if (exist('print_to_screen') == 1) & (print_to_screen == 0),
dump_output_information = 0;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Create transform matrices, etc.
%%%%
[Tfor, Tinv] = getTransfMatrix(N1, transform_2D_HT_name, decLevel); %% get (normalized) forward and inverse transform matrices
if (strcmp(transform_3rd_dim_name, 'haar') == 1) | (strcmp(transform_3rd_dim_name(end-2:end), '1.1') == 1),
%%% If Haar is used in the 3-rd dimension, then a fast internal transform is used, thus no need to generate transform
%%% matrices.
hadper_trans_single_den = {};
inverse_hadper_trans_single_den = {};
else
%%% Create transform matrices. The transforms are later applied by
%%% matrix-vector multiplication for the 1D case.
for hpow = 0:ceil(log2(N2)),
h = 2^hpow;
[Tfor3rd, Tinv3rd] = getTransfMatrix(h, transform_3rd_dim_name, 0);
hadper_trans_single_den{h} = single(Tfor3rd);
inverse_hadper_trans_single_den{h} = single(Tinv3rd');
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% 2D Kaiser windows used in the aggregation of block-wise estimates
%%%%
if beta==2 & N1==8 % hardcode the window function so that the signal processing toolbox is not needed by default
Wwin2D = [ 0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924];
else
Wwin2D = kaiser(N1, beta) * kaiser(N1, beta)'; % Kaiser window used in the aggregation of the HT part
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% If needed, read images, generate noise, or scale the images to the
%%%% [0,1] interval
%%%%
if (exist('z') ~= 1)
y = im2double(imread(image_name)); %% read a noise-free image and put in intensity range [0,1]
randn('seed', 0); %% fix seed for the random number generator
z = y + (sigma/255)*randn(size(y)); %% create a noisy image
else % external images
image_name = 'External image';
% convert z to double precision if needed
z = double(z);
% if z's range is [0, 255], then convert to [0, 1]
if (max(z(:)) > 10), % a naive check for intensity range
z = z / 255;
end
end
if (size(z,3) ~= 1),
error('BM3D-SH3D accepts only grayscale 2D images.');
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Print image information to the screen
%%%%
if dump_output_information == 1,
fprintf('Image: %s (%dx%d), sigma: %.1f\n', image_name, size(z,1), size(z,2), sigma);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Apply the filtering MEX-subroutine
%%%%
tic;
y_hat = bm3d_thr_sharpen_var(z, hadper_trans_single_den, Nstep, N1, N2, lambda_thr2D,...
lambda_thr3D, tau_match*N1*N1/(255*255), (Ns-1)/2, (sigma/255), thrToIncStep, single(Tfor), single(Tinv)', inverse_hadper_trans_single_den, single(thr_mask), Wwin2D, smallLN, stepFS, 1/alpha_sharp );
estimate_elapsed_time = toc;
if dump_output_information == 1,
fprintf('SHARPENING COMPLETED (total time: %.1f sec)\n', ...
estimate_elapsed_time);
imshow(z); figure, imshow(double(y_hat));
end
return;
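The hardcoded 8x8 window used above for aggregation (the beta==2, N1==8 case) is just the outer product of a 1-D Kaiser window with itself, which can be verified outside MATLAB; NumPy's np.kaiser uses the same definition as MATLAB's kaiser, so the table can be reproduced to the printed precision:

```python
import numpy as np

# First row of the hardcoded 8x8 window in BM3DSHARP (beta = 2, N1 = 8).
first_row = [0.1924, 0.2989, 0.3846, 0.4325, 0.4325, 0.3846, 0.2989, 0.1924]

# np.kaiser(M, beta) matches MATLAB's kaiser(M, beta); the 2-D window is
# the outer product of the 1-D window with itself.
w1d = np.kaiser(8, 2.0)
w2d = np.outer(w1d, w1d)

print(np.allclose(w2d[0], first_row, atol=5e-4))  # True
# The window is symmetric and peaks at the block center, so block
# borders are down-weighted when overlapping estimates are aggregated.
print(np.allclose(w2d, w2d[::-1, ::-1]))          # True
```

This is why setting beta = 0 (as BM3DDEB does) degenerates the window to all ones, i.e. uniform aggregation weights.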
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Some auxiliary functions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% Create forward and inverse transform matrices, which allow for perfect
% reconstruction. The forward transform matrix is normalized so that the
% l2-norm of each basis element is 1.
%
% [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% INPUTS:
%
% N --> Size of the transform (for wavelets, must be 2^K)
%
%  transform_type   --> 'dct', 'dst', 'hadamard', or anything that is
%                       listed by 'help wfilters' (bi-orthogonal wavelets);
%                       'DCrand' -- an orthonormal transform with a DC basis
%                       element and the remaining basis elements generated
%                       at random
%
% dec_levels --> If a wavelet transform is generated, this is the
% desired decomposition level. Must be in the
% range [0, log2(N)-1], where "0" implies
% full decomposition.
%
% OUTPUTS:
%
% Tforward --> (N x N) Forward transform matrix
%
% Tinverse --> (N x N) Inverse transform matrix
%
if exist('dec_levels') ~= 1,
dec_levels = 0;
end
if N == 1,
Tforward = 1;
elseif strcmp(transform_type, 'hadamard') == 1,
Tforward = hadamard(N);
elseif (N == 8) & strcmp(transform_type, 'bior1.5')==1 % hardcoded transform so that the wavelet toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.219417649252501 0.449283757993216 0.449283757993216 0.219417649252501 -0.219417649252501 -0.449283757993216 -0.449283757993216 -0.219417649252501;
0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846 -0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284;
-0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284 0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846;
0.707106781186547 -0.707106781186547 0 0 0 0 0 0;
0 0 0.707106781186547 -0.707106781186547 0 0 0 0;
0 0 0 0 0.707106781186547 -0.707106781186547 0 0;
0 0 0 0 0 0 0.707106781186547 -0.707106781186547];
elseif (N == 8) & strcmp(transform_type, 'dct')==1 % hardcoded transform so that the signal processing toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.490392640201615 0.415734806151273 0.277785116509801 0.097545161008064 -0.097545161008064 -0.277785116509801 -0.415734806151273 -0.490392640201615;
0.461939766255643 0.191341716182545 -0.191341716182545 -0.461939766255643 -0.461939766255643 -0.191341716182545 0.191341716182545 0.461939766255643;
0.415734806151273 -0.097545161008064 -0.490392640201615 -0.277785116509801 0.277785116509801 0.490392640201615 0.097545161008064 -0.415734806151273;
0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274 0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274;
0.277785116509801 -0.490392640201615 0.097545161008064 0.415734806151273 -0.415734806151273 -0.097545161008064 0.490392640201615 -0.277785116509801;
0.191341716182545 -0.461939766255643 0.461939766255643 -0.191341716182545 -0.191341716182545 0.461939766255643 -0.461939766255643 0.191341716182545;
0.097545161008064 -0.277785116509801 0.415734806151273 -0.490392640201615 0.490392640201615 -0.415734806151273 0.277785116509801 -0.097545161008064];
elseif (N == 8) & strcmp(transform_type, 'dst')==1 % hardcoded transform so that the PDE toolbox is not needed to generate it
Tforward = [ 0.161229841765317 0.303012985114696 0.408248290463863 0.464242826880013 0.464242826880013 0.408248290463863 0.303012985114696 0.161229841765317;
0.303012985114696 0.464242826880013 0.408248290463863 0.161229841765317 -0.161229841765317 -0.408248290463863 -0.464242826880013 -0.303012985114696;
0.408248290463863 0.408248290463863 0 -0.408248290463863 -0.408248290463863 0 0.408248290463863 0.408248290463863;
0.464242826880013 0.161229841765317 -0.408248290463863 -0.303012985114696 0.303012985114696 0.408248290463863 -0.161229841765317 -0.464242826880013;
0.464242826880013 -0.161229841765317 -0.408248290463863 0.303012985114696 0.303012985114696 -0.408248290463863 -0.161229841765317 0.464242826880013;
0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863;
0.303012985114696 -0.464242826880013 0.408248290463863 -0.161229841765317 -0.161229841765317 0.408248290463863 -0.464242826880013 0.303012985114696;
0.161229841765317 -0.303012985114696 0.408248290463863 -0.464242826880013 0.464242826880013 -0.408248290463863 0.303012985114696 -0.161229841765317];
elseif strcmp(transform_type, 'dct') == 1,
Tforward = dct(eye(N));
elseif strcmp(transform_type, 'dst') == 1,
Tforward = dst(eye(N));
elseif strcmp(transform_type, 'DCrand') == 1,
x = randn(N); x(1:end,1) = 1; [Q,R] = qr(x);
if (Q(1) < 0),
Q = -Q;
end;
Tforward = Q';
else %% a wavelet decomposition supported by 'wavedec'
%%% Set periodic boundary conditions, to preserve bi-orthogonality
dwtmode('per','nodisp');
Tforward = zeros(N,N);
for i = 1:N
Tforward(:,i)=wavedec(circshift([1 zeros(1,N-1)],[dec_levels i-1]), log2(N), transform_type); %% construct transform matrix
end
end
%%% Normalize the basis elements
Tforward = (Tforward' * diag(sqrt(1./sum(Tforward.^2,2))))';
%%% Compute the inverse transform matrix
Tinverse = inv(Tforward);
return;
================================================
FILE: BM3D/BM3D_CFA.m
================================================
function [varargout] = BM3D_CFA(z, sigma)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% BM3D_CFA is a modification of the BM3D algorithm for the attenuation of additive white Gaussian noise in
% Bayer CFA images. This algorithm reproduces the results from the article:
%
% [1] A. Danielyan, M. Vehvilainen, A. Foi, V. Katkovnik, and K. Egiazarian,
% Cross-color BM3D filtering of noisy raw data,
% Proc. Int. Workshop on Local and Non-Local Approx. in Image Process.,
% LNLA 2009, Tuusula, Finland, pp. 125-129, August 2009.
%
% FUNCTION INTERFACE:
%
% [y_wiener, y_ht] = BM3D_CFA(z, sigma)
%
% ! The function can work without any of the input arguments,
% in which case, the internal default ones are used !
% INPUT ARGUMENTS (OPTIONAL):
%
% 1) z (matrix M x N): Noisy image (intensities in range [0,1] or [0,255])
% 2) sigma (double) : Std. dev. of the noise (corresponding to intensities
% in range [0,255] even if the range of z is [0,1])
% OUTPUTS:
% 1) y_wiener (matrix M x N): Final(wiener) estimate (in the range [0,1])
% 2) y_ht (matrix M x N): Basic (hard-thresholding) estimate (in the range [0,1])
%
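% BASIC USAGE EXAMPLE:
%
% A sketch mirroring the demo code further below in this file; it assumes
% 'kodim07.png' (the default test image of this file) is on the path.
%
%   % Mosaic the RGB planes into a GRBG Bayer CFA image, add AWGN, denoise
%   yRGB = im2double(imread('kodim07.png'));
%   y = zeros(size(yRGB,1), size(yRGB,2));
%   y(1:2:end,1:2:end) = yRGB(1:2:end,1:2:end,2);  % G
%   y(2:2:end,2:2:end) = yRGB(2:2:end,2:2:end,2);  % G
%   y(1:2:end,2:2:end) = yRGB(1:2:end,2:2:end,1);  % R
%   y(2:2:end,1:2:end) = yRGB(2:2:end,1:2:end,3);  % B
%   sigma = 25;
%   z = y + (sigma/255)*randn(size(y));
%   [y_wiener, y_ht] = BM3D_CFA(z, sigma);
%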
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Copyright (c) 2009-2014 Tampere University of Technology.
% All rights reserved.
% This work should only be used for nonprofit purposes.
%
% AUTHORS:
% Aram Danielyan, email: aram dot danielyan _at_ tut.fi
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% In case a noisy image z is not provided, the filename below is used to
%%%% read an original image (it may also contain a path). Artificial AWGN
%%%% noise is then added, and the resulting noisy image is processed by
%%%% BM3D_CFA.
%%%%
image_name = [
'kodim07.png'
% 'kodim08.png'
% 'kodim19.png'
% 'kodim23.png'
];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Quality/complexity trade-off profile selection
%%%%
%%%% 'np' --> Normal Profile (balanced quality)
if ~exist('profile','var')
profile = 'np'; %% default profile
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Specify the std. dev. of the corrupting noise
%%%%
if ~exist('sigma','var')
sigma = 25; %% default standard deviation of the AWGN
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Following are the parameters for the Normal Profile.
%%%%
%%%% Select transforms ('dct', 'dst', 'hadamard', or anything that is listed by 'help wfilters'):
transform_2D_HT_name = 'dct'; %% transform used for the HT filt. of size N1 x N1
transform_2D_Wiener_name = 'dct';
transform_3rd_dim_name = 'haar'; %% transform used in the 3-rd dim, the same for HT and Wiener filt.
%%%% Hard-thresholding (HT) parameters:
N1 = 5; %% N1 x N1 is the block size used for the hard-thresholding (HT) filtering
Nstep = 3; %% sliding step to process every next reference block
N2 = 16; %% maximum number of similar blocks (maximum size of the 3rd dimension of a 3D array)
Ns = 39; %% length of the side of the search neighborhood for full-search block-matching (BM), must be odd
lambda_thr2D = 0; %% threshold parameter for the coarse initial denoising used in the d-distance measure
tau_match = 3000;%% threshold for the block-distance (d-distance)
lambda_thr3D = 2.7; %% threshold parameter for the hard-thresholding in 3D transform domain
beta = 2.0; %% parameter of the 2D Kaiser window used in the reconstruction
%%%% Step 2: Wiener filtering parameters:
N1_wiener = 6;
Nstep_wiener = 3;
N2_wiener = 32;
Ns_wiener = 39;
tau_match_wiener = 400;
beta_wiener = 2.0;
%%%% Block-matching parameters:
stepFS = 1; %% step that forces to switch to full-search BM, "1" implies always full-search
smallLN = 'not used in np'; %% if stepFS > 1, then this specifies the size of the small local search neighb.
stepFSW = 1;
smallLNW = 'not used in np';
thrToIncStep = 8; % if the number of non-zero coefficients after HT is less than thrToIncStep,
% then the sliding step to the next reference block is increased to (nm1-1)
decLevel = 0; %% dec. levels of the dyadic wavelet 2D transform for blocks (0 means full decomposition, higher values decrease the dec. number)
thr_mask = ones(N1); %% N1xN1 mask of threshold scaling coeff. --- by default there is no scaling, however the use of different thresholds for different wavelet decomposition subbands can be done with this matrix
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Note: touch below this point only if you know what you are doing!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Create transform matrices, etc.
%%%%
[Tfor, Tinv] = getTransfMatrix(N1, transform_2D_HT_name, decLevel); %% get (normalized) forward and inverse transform matrices
[TforW, TinvW] = getTransfMatrix(N1_wiener, transform_2D_Wiener_name, 0); %% get (normalized) forward and inverse transform matrices
if (strcmp(transform_3rd_dim_name, 'haar') == 1) | (strcmp(transform_3rd_dim_name(end-2:end), '1.1') == 1),
%%% If Haar is used in the 3-rd dimension, then a fast internal transform is used, thus no need to generate transform
%%% matrices.
hadper_trans_single_den = {};
inverse_hadper_trans_single_den = {};
else
%%% Create transform matrices. The transforms are later applied by
%%% matrix-vector multiplication for the 1D case.
for hpow = 0:ceil(log2(max(N2,N2_wiener))),
h = 2^hpow;
[Tfor3rd, Tinv3rd] = getTransfMatrix(h, transform_3rd_dim_name, 0);
hadper_trans_single_den{h} = single(Tfor3rd);
inverse_hadper_trans_single_den{h} = single(Tinv3rd');
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% 2D Kaiser windows used in the aggregation of block-wise estimates
%%%%
if beta_wiener==2 & beta==2 & N1_wiener==8 & N1==8 % hardcode the window function so that the signal processing toolbox is not needed by default
Wwin2D = [ 0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924];
Wwin2D_wiener = Wwin2D;
else
Wwin2D = kaiser(N1, beta) * kaiser(N1, beta)'; % Kaiser window used in the aggregation of the HT part
Wwin2D_wiener = kaiser(N1_wiener, beta_wiener) * kaiser(N1_wiener, beta_wiener)'; % Kaiser window used in the aggregation of the Wiener filt. part
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% If needed, read images, generate noise, or scale the images to the
%%%% [0,1] interval
%%%%
if ~exist('z','var')
yRGB = im2double(imread(image_name)); %% read a noise-free image and put in intensity range [0,1]
%% Mosaic the RGB planes into a GRBG Bayer CFA image
y = zeros(size(yRGB,1), size(yRGB,2));
y(1:2:end,1:2:end) = yRGB(1:2:end,1:2:end,2); %% G at odd rows, odd columns
y(2:2:end,2:2:end) = yRGB(2:2:end,2:2:end,2); %% G at even rows, even columns
y(1:2:end,2:2:end) = yRGB(1:2:end,2:2:end,1); %% R at odd rows, even columns
y(2:2:end,1:2:end) = yRGB(2:2:end,1:2:end,3); %% B at even rows, odd columns
randn('seed', 0); %% fix the random seed for reproducibility
z = y + (sigma/255)*randn(size(y)); %% create a noisy image
else % external images
image_name = 'External image';
% convert z to double precision if needed
z = double(z);
y= [];
end
if (size(z,3) ~= 1)
error('BM3D accepts only grayscale 2D images.');
end
%%% Check whether to dump information to the screen or remain silent
dump_output_information = ~isempty(y);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Print image information to the screen
%%%%
if dump_output_information
fprintf('Image: %s (%dx%d), sigma: %.1f\n', image_name, size(z,1), size(z,2), sigma);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Step 1. Produce the basic estimate by HT filtering
%%%%
tic;
y_ht = bm3d_CFA_thr(z, hadper_trans_single_den, Nstep, N1, N2, lambda_thr2D,...
lambda_thr3D, tau_match*N1*N1/(255*255), (Ns-1)/2, (sigma/255), thrToIncStep, single(Tfor), single(Tinv)', inverse_hadper_trans_single_den, single(thr_mask), Wwin2D, smallLN, stepFS );
estimate_elapsed_time = toc;
if dump_output_information
PSNR_INITIAL_ESTIMATE = 10*log10(1/mean((y(:)-double(y_ht(:))).^2));
fprintf('BASIC ESTIMATE, PSNR: %.2f dB\n', PSNR_INITIAL_ESTIMATE);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Step 2. Produce the final estimate by Wiener filtering (using the
%%%% hard-thresholding initial estimate)
%%%%
tic;
y_wiener = bm3d_CFA_wiener(z, y_ht, hadper_trans_single_den, Nstep_wiener, N1_wiener, N2_wiener, ...
'unused arg', tau_match_wiener*N1_wiener*N1_wiener/(255*255), (Ns_wiener-1)/2, (sigma/255), 'unused arg', single(TforW), single(TinvW)', inverse_hadper_trans_single_den, Wwin2D_wiener, smallLNW, stepFSW, single(ones(N1_wiener)) );
wiener_elapsed_time = toc;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Calculate the final estimate's PSNR, print it, and show the
%%%% denoised image next to the noisy one
%%%%
y_wiener = double(y_wiener);
if dump_output_information
PSNR = 10*log10(1/mean((y(:)-y_wiener(:)).^2)); % y is valid
fprintf('FINAL ESTIMATE (total time: %.1f sec), PSNR: %.2f dB\n', ...
wiener_elapsed_time + estimate_elapsed_time, PSNR);
figure, imshow(z); title(sprintf('Noisy %s, PSNR: %.3f dB (sigma: %d)', ...
image_name(1:end-4), 10*log10(1/mean((y(:)-z(:)).^2)), sigma));
figure, imshow(y_wiener); title(sprintf('Denoised %s, PSNR: %.3f dB', ...
image_name(1:end-4), PSNR));
end
if nargout==0
varargout={};
else
varargout{1}=y_wiener;
varargout{2}=y_ht;
end
return;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Some auxiliary functions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% Create forward and inverse transform matrices, which allow for perfect
% reconstruction. The forward transform matrix is normalized so that the
% l2-norm of each basis element is 1.
%
% [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% INPUTS:
%
% N --> Size of the transform (for wavelets, must be 2^K)
%
% transform_type --> 'dct', 'dst', 'hadamard', or anything that is
% listed by 'help wfilters' (bi-orthogonal wavelets)
% 'DCrand' -- an orthonormal transform with a DC and all
% the other basis elements of random nature
%
% dec_levels --> If a wavelet transform is generated, this is the
% desired decomposition level. Must be in the
% range [0, log2(N)-1], where "0" implies
% full decomposition.
%
% OUTPUTS:
%
% Tforward --> (N x N) Forward transform matrix
%
% Tinverse --> (N x N) Inverse transform matrix
%
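% EXAMPLE (a quick sanity check of the two documented properties above;
% not part of the original interface):
%
%   [Tf, Ti] = getTransfMatrix(8, 'dct');
%   norm(Ti*Tf - eye(8))   % ~0: the pair gives perfect reconstruction
%   sum(Tf.^2, 2)          % each row (basis element) has unit l2-norm
%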
if exist('dec_levels') ~= 1,
dec_levels = 0;
end
if N == 1,
Tforward = 1;
elseif strcmp(transform_type, 'hadamard') == 1,
Tforward = hadamard(N);
elseif (N == 8) & strcmp(transform_type, 'bior1.5')==1 % hardcoded transform so that the wavelet toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.219417649252501 0.449283757993216 0.449283757993216 0.219417649252501 -0.219417649252501 -0.449283757993216 -0.449283757993216 -0.219417649252501;
0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846 -0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284;
-0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284 0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846;
0.707106781186547 -0.707106781186547 0 0 0 0 0 0;
0 0 0.707106781186547 -0.707106781186547 0 0 0 0;
0 0 0 0 0.707106781186547 -0.707106781186547 0 0;
0 0 0 0 0 0 0.707106781186547 -0.707106781186547];
elseif (N == 8) & strcmp(transform_type, 'dct')==1 % hardcoded transform so that the signal processing toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.490392640201615 0.415734806151273 0.277785116509801 0.097545161008064 -0.097545161008064 -0.277785116509801 -0.415734806151273 -0.490392640201615;
0.461939766255643 0.191341716182545 -0.191341716182545 -0.461939766255643 -0.461939766255643 -0.191341716182545 0.191341716182545 0.461939766255643;
0.415734806151273 -0.097545161008064 -0.490392640201615 -0.277785116509801 0.277785116509801 0.490392640201615 0.097545161008064 -0.415734806151273;
0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274 0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274;
0.277785116509801 -0.490392640201615 0.097545161008064 0.415734806151273 -0.415734806151273 -0.097545161008064 0.490392640201615 -0.277785116509801;
0.191341716182545 -0.461939766255643 0.461939766255643 -0.191341716182545 -0.191341716182545 0.461939766255643 -0.461939766255643 0.191341716182545;
0.097545161008064 -0.277785116509801 0.415734806151273 -0.490392640201615 0.490392640201615 -0.415734806151273 0.277785116509801 -0.097545161008064];
elseif (N == 8) & strcmp(transform_type, 'dst')==1 % hardcoded transform so that the PDE toolbox is not needed to generate it
Tforward = [ 0.161229841765317 0.303012985114696 0.408248290463863 0.464242826880013 0.464242826880013 0.408248290463863 0.303012985114696 0.161229841765317;
0.303012985114696 0.464242826880013 0.408248290463863 0.161229841765317 -0.161229841765317 -0.408248290463863 -0.464242826880013 -0.303012985114696;
0.408248290463863 0.408248290463863 0 -0.408248290463863 -0.408248290463863 0 0.408248290463863 0.408248290463863;
0.464242826880013 0.161229841765317 -0.408248290463863 -0.303012985114696 0.303012985114696 0.408248290463863 -0.161229841765317 -0.464242826880013;
0.464242826880013 -0.161229841765317 -0.408248290463863 0.303012985114696 0.303012985114696 -0.408248290463863 -0.161229841765317 0.464242826880013;
0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863;
0.303012985114696 -0.464242826880013 0.408248290463863 -0.161229841765317 -0.161229841765317 0.408248290463863 -0.464242826880013 0.303012985114696;
0.161229841765317 -0.303012985114696 0.408248290463863 -0.464242826880013 0.464242826880013 -0.408248290463863 0.303012985114696 -0.161229841765317];
elseif strcmp(transform_type, 'dct') == 1,
Tforward = dct(eye(N));
elseif strcmp(transform_type, 'dst') == 1,
Tforward = dst(eye(N));
elseif strcmp(transform_type, 'DCrand') == 1,
x = randn(N); x(1:end,1) = 1; [Q,R] = qr(x);
if (Q(1) < 0),
Q = -Q;
end;
Tforward = Q';
else %% a wavelet decomposition supported by 'wavedec'
%%% Set periodic boundary conditions, to preserve bi-orthogonality
dwtmode('per','nodisp');
Tforward = zeros(N,N);
for i = 1:N
Tforward(:,i)=wavedec(circshift([1 zeros(1,N-1)],[dec_levels i-1]), log2(N), transform_type); %% construct transform matrix
end
end
%%% Normalize the basis elements
Tforward = (Tforward' * diag(sqrt(1./sum(Tforward.^2,2))))';
%%% Compute the inverse transform matrix
Tinverse = inv(Tforward);
return;
================================================
FILE: BM3D/CBM3D.m
================================================
function [PSNR, yRGB_est] = CBM3D(yRGB, zRGB, sigma, profile, print_to_screen, colorspace)
%
% CBM3D is an algorithm for the attenuation of additive white Gaussian noise in
% color RGB images. This algorithm reproduces the results from the article:
%
% [1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Color image
% denoising via sparse 3D collaborative filtering with grouping constraint in
% luminance-chrominance space," submitted to IEEE Int. Conf. Image Process.,
% January 2007, in review, preprint at http://www.cs.tut.fi/~foi/GCF-BM3D.
%
% FUNCTION INTERFACE:
%
% [PSNR, yRGB_est] = CBM3D(yRGB, zRGB, sigma, profile, print_to_screen, colorspace)
%
% ! The function can work without any of the input arguments,
% in which case, the internal default ones are used !
%
% BASIC USAGE EXAMPLES:
%
% Case 1) Using the default parameters (i.e., image name, sigma, etc.)
%
% [PSNR, yRGB_est] = CBM3D;
%
% Case 2) Using an external noisy image:
%
% % Read an RGB image and scale its intensities in range [0,1]
% yRGB = im2double(imread('image_House256rgb.png'));
% % Generate the same seed used in the experimental results of [1]
% randn('seed', 0);
% % Standard deviation of the noise --- corresponding to intensity
% % range [0,255], despite that the input was scaled in [0,1]
% sigma = 25;
% % Add the AWGN with zero mean and standard deviation 'sigma'
% zRGB = yRGB + (sigma/255)*randn(size(yRGB));
% % Denoise 'zRGB'. The denoised image is 'yRGB_est'; the first output
% % 'NA' is not a meaningful PSNR, because the true image was not provided
% [NA, yRGB_est] = CBM3D(1, zRGB, sigma);
% % Compute the output PSNR
% PSNR = 10*log10(1/mean((yRGB(:)-yRGB_est(:)).^2))
% % show the noisy image 'zRGB' and the denoised 'yRGB_est'
% figure; imshow(min(max(zRGB,0),1));
% figure; imshow(min(max(yRGB_est,0),1));
%
% Case 3) If the original image yRGB is provided as the first input
% argument, then some additional information is printed (PSNRs,
% figures, etc.). That is, "[NA, yRGB_est] = CBM3D(1, zRGB, sigma);" in the
% above code should be replaced with:
%
% [PSNR, yRGB_est] = CBM3D(yRGB, zRGB, sigma);
%
%
% INPUT ARGUMENTS (OPTIONAL):
% 1) yRGB (M x N x 3): Noise-free RGB image (needed for computing PSNR),
% replace with the scalar 1 if not available.
% 2) zRGB (M x N x 3): Noisy RGB image (intensities in range [0,1] or [0,255])
% 3) sigma (double) : Std. dev. of the noise (corresponding to intensities
% in range [0,255] even if the range of zRGB is [0,1])
% 4) profile (char) : 'np' --> Normal Profile
% 'lc' --> Low Complexity Profile
% 5) print_to_screen : 0 --> do not print output information (and do
% not plot figures)
% 1 --> print information and plot figures
% 6) colorspace (char): 'opp' --> use opponent colorspace
% 'yCbCr' --> use yCbCr colorspace
%
% OUTPUTS:
% 1) PSNR (double) : Output PSNR (dB), only if the original
% image is available, otherwise PSNR = 0
% 2) yRGB_est (M x N x 3): Final RGB estimate (in the range [0,1])
%
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Copyright (c) 2007-2011 Tampere University of Technology.
% All rights reserved.
% This work should only be used for nonprofit purposes.
%
% AUTHORS:
% Kostadin Dabov, email: dabov _at_ cs.tut.fi
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% In case there is no input image (zRGB or yRGB), the filename below is
%%%% used to read an original image (it may also contain a path). Artificial
%%%% AWGN noise is then added, and the resulting noisy image is processed
%%%% by CBM3D.
%%%%
image_name = [
% 'kodim12.png'
'image_Lena512rgb.png'
% 'image_House256rgb.png'
% 'image_Peppers512rgb.png'
% 'image_Baboon512rgb.png'
% 'image_F16_512rgb.png'
];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Quality/complexity trade-off
%%%%
%%%% 'np' --> Normal Profile (balanced quality)
%%%% 'lc' --> Low Complexity Profile (fast, lower quality)
%%%%
%%%% 'high' --> High Profile (high quality, not documented in [1])
%%%%
%%%% 'vn' --> This profile is automatically enabled for high noise
%%%% when sigma > 40
%%%%
%%%% 'vn_old' --> This is the old 'vn' profile that was used in [1].
%%%% It gives inferior results to 'vn' in most cases.
%%%%
if (exist('profile') ~= 1)
profile = 'np'; %% default profile
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Specify the std. dev. of the corrupting noise
%%%%
if (exist('sigma') ~= 1),
sigma = 50; %% default standard deviation of the AWGN
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Colorspace in which we perform denoising. BM is applied to the first
%%%% component and the matching information is re-used for the other two.
%%%%
if (exist('colorspace') ~= 1),
colorspace = 'opp'; %%% (valid colorspaces are: 'yCbCr' and 'opp')
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Following are the parameters for the Normal Profile.
%%%%
%%%% Select transforms ('dct', 'dst', 'hadamard', or anything that is listed by 'help wfilters'):
transform_2D_HT_name = 'bior1.5'; %% transform used for the HT filt. of size N1 x N1
transform_2D_Wiener_name = 'dct'; %% transform used for the Wiener filt. of size N1_wiener x N1_wiener
transform_3rd_dim_name = 'haar'; %% transform used in the 3-rd dim, the same for HT and Wiener filt.
%%%% Hard-thresholding (HT) parameters:
N1 = 8; %% N1 x N1 is the block size used for the hard-thresholding (HT) filtering
Nstep = 3; %% sliding step to process every next reference block
N2 = 16; %% maximum number of similar blocks (maximum size of the 3rd dimension of a 3D array)
Ns = 39; %% length of the side of the search neighborhood for full-search block-matching (BM), must be odd
tau_match = 3000;%% threshold for the block-distance (d-distance)
lambda_thr2D = 0; %% threshold parameter for the coarse initial denoising used in the d-distance measure
lambda_thr3D = 2.7; %% threshold parameter for the hard-thresholding in 3D transform domain
beta = 2.0; %% parameter of the 2D Kaiser window used in the reconstruction
%%%% Wiener filtering parameters:
N1_wiener = 8;
Nstep_wiener = 3;
N2_wiener = 32;
Ns_wiener = 39;
tau_match_wiener = 400;
beta_wiener = 2.0;
%%%% Block-matching parameters:
stepFS = 1; %% step that forces to switch to full-search BM, "1" implies always full-search
smallLN = 'not used in np'; %% if stepFS > 1, then this specifies the size of the small local search neighb.
stepFSW = 1;
smallLNW = 'not used in np';
thrToIncStep = 8; %% used in the HT filtering to increase the sliding step in uniform regions
if strcmp(profile, 'lc') == 1,
Nstep = 6;
Ns = 25;
Nstep_wiener = 5;
N2_wiener = 16;
Ns_wiener = 25;
thrToIncStep = 3;
smallLN = 3;
stepFS = 6*Nstep;
smallLNW = 2;
stepFSW = 5*Nstep_wiener;
end
% Profile 'vn' was proposed in
% Y. Hou, C. Zhao, D. Yang, and Y. Cheng, 'Comment on "Image Denoising by Sparse 3D Transform-Domain
% Collaborative Filtering"', accepted for publication, IEEE Trans. on Image Processing, July, 2010.
% as a better alternative to that initially proposed in [1] (which is currently in profile 'vn_old')
if (strcmp(profile, 'vn') == 1) | (sigma > 40),
N2 = 32;
Nstep = 4;
N1_wiener = 11;
Nstep_wiener = 6;
lambda_thr3D = 2.8;
thrToIncStep = 3;
tau_match_wiener = 3500;
tau_match = 25000;
Ns_wiener = 39;
end
% The 'vn_old' profile corresponds to the original parameters for strong noise proposed in [1].
if (strcmp(profile, 'vn_old') == 1) & (sigma > 40),
transform_2D_HT_name = 'dct';
N1 = 12;
Nstep = 4;
N1_wiener = 11;
Nstep_wiener = 6;
lambda_thr3D = 2.8;
lambda_thr2D = 2.0;
thrToIncStep = 3;
tau_match_wiener = 3500;
tau_match = 5000;
Ns_wiener = 39;
end
decLevel = 0; %% dec. levels of the dyadic wavelet 2D transform for blocks (0 means full decomposition, higher values decrease the dec. number)
thr_mask = ones(N1); %% N1xN1 mask of threshold scaling coeff. --- by default there is no scaling, however the use of different thresholds for different wavelet decomposition subbands can be done with this matrix
if strcmp(profile, 'high') == 1,
decLevel = 1;
Nstep = 2;
Nstep_wiener = 2;
lambda_thr3D = 2.5;
vMask = ones(N1,1); vMask((end/4+1):end/2)= 1.01; vMask((end/2+1):end) = 1.07; %% this allows different thresholds for the finest and next-to-the-finest subbands
thr_mask = vMask * vMask';
beta = 2.5;
beta_wiener = 1.5;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Note: touch below this point only if you know what you are doing!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Check whether to dump information to the screen or remain silent
dump_output_information = 1;
if (exist('print_to_screen') == 1) & (print_to_screen == 0),
dump_output_information = 0;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Create transform matrices, etc.
%%%%
[Tfor, Tinv] = getTransfMatrix(N1, transform_2D_HT_name, decLevel); %% get (normalized) forward and inverse transform matrices
[TforW, TinvW] = getTransfMatrix(N1_wiener, transform_2D_Wiener_name); %% get (normalized) forward and inverse transform matrices
if (strcmp(transform_3rd_dim_name, 'haar') == 1) | (strcmp(transform_3rd_dim_name(end-2:end), '1.1') == 1),
%%% If Haar is used in the 3-rd dimension, then a fast internal transform is used, thus no need to generate transform
%%% matrices.
hadper_trans_single_den = {};
inverse_hadper_trans_single_den = {};
else
%%% Create transform matrices. The transforms are later applied by
%%% matrix-vector multiplication for the 1D case.
for hpow = 0:ceil(log2(max(N2,N2_wiener))),
h = 2^hpow;
[Tfor3rd, Tinv3rd] = getTransfMatrix(h, transform_3rd_dim_name, 0);
hadper_trans_single_den{h} = single(Tfor3rd);
inverse_hadper_trans_single_den{h} = single(Tinv3rd');
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% 2D Kaiser windows used in the aggregation of block-wise estimates
%%%%
if beta_wiener==2 & beta==2 & N1_wiener==8 & N1==8 % hardcode the window function so that the signal processing toolbox is not needed by default
Wwin2D = [ 0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924];
Wwin2D_wiener = Wwin2D;
else
Wwin2D = kaiser(N1, beta) * kaiser(N1, beta)'; % Kaiser window used in the aggregation of the HT part
Wwin2D_wiener = kaiser(N1_wiener, beta_wiener) * kaiser(N1_wiener, beta_wiener)'; % Kaiser window used in the aggregation of the Wiener filt. part
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% If needed, read images, generate noise, or scale the images to the
%%%% [0,1] interval
%%%%
if (exist('yRGB') ~= 1) | (exist('zRGB') ~= 1)
yRGB = im2double(imread(image_name)); %% read a noise-free image
randn('seed', 0); %% fix the random seed for reproducibility
zRGB = yRGB + (sigma/255)*randn(size(yRGB)); %% create a noisy image
else % external images
image_name = 'External image';
% convert zRGB to double precision
zRGB = double(zRGB);
% convert yRGB to double precision
yRGB = double(yRGB);
% if zRGB's range is [0, 255], then convert to [0, 1]
if (max(zRGB(:)) > 10), % a naive check for intensity range
zRGB = zRGB / 255;
end
% if yRGB's range is [0, 255], then convert to [0, 1]
if (max(yRGB(:)) > 10), % a naive check for intensity range
yRGB = yRGB / 255;
end
end
if (size(zRGB,3) ~= 3) | (size(zRGB,4) ~= 1),
error('CBM3D accepts only input RGB images (i.e. matrices of size M x N x 3).');
end
% Check if the true image yRGB is a valid one; if not, then we cannot compute PSNR, etc.
yRGB_is_invalid_image = (length(size(zRGB)) ~= length(size(yRGB))) | (size(zRGB,1) ~= size(yRGB,1)) | (size(zRGB,2) ~= size(yRGB,2)) | (size(zRGB,3) ~= size(yRGB,3));
if (yRGB_is_invalid_image),
dump_output_information = 0;
end
[Xv, Xh, numSlices] = size(zRGB); %%% obtain image sizes
if numSlices ~= 3
fprintf('Error, an RGB color image is required!\n');
return;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Change colorspace, compute the l2-norms of the new color channels
%%%%
[zColSpace l2normLumChrom] = function_rgb2LumChrom(zRGB, colorspace);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Print image information to the screen
%%%%
if dump_output_information == 1,
fprintf('Image: %s (%dx%dx%d), sigma: %.1f\n', image_name, Xv, Xh, numSlices, sigma);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Step 1. Basic estimate by collaborative hard-thresholding and using
%%%% the grouping constraint on the chrominances.
%%%%
tic;
y_hat = bm3d_thr_color(zColSpace, hadper_trans_single_den, Nstep, N1, N2, lambda_thr2D,...
lambda_thr3D, tau_match*N1*N1/(255*255), (Ns-1)/2, sigma/255, thrToIncStep, single(Tfor), single(Tinv)', inverse_hadper_trans_single_den, single(thr_mask), 'unused arg', 'unused arg', l2normLumChrom, Wwin2D, smallLN, stepFS );
estimate_elapsed_time = toc;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Step 2. Final estimate by collaborative Wiener filtering and using
%%%% the grouping constraint on the chrominances.
%%%%
tic;
yRGB_est = bm3d_wiener_color(zColSpace, y_hat, hadper_trans_single_den, Nstep_wiener, N1_wiener, N2_wiener, ...
'unused_arg', tau_match_wiener*N1_wiener*N1_wiener/(255*255), (Ns_wiener-1)/2, sigma/255, 'unused arg', single(TforW), single(TinvW)', inverse_hadper_trans_single_den, 'unused arg', 'unused arg', l2normLumChrom, Wwin2D_wiener, smallLNW, stepFSW );
wiener_elapsed_time = toc;
yRGB_est = double(yRGB_est);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Convert back to RGB colorspace
%%%%
yRGB_est = function_LumChrom2rgb(yRGB_est, colorspace);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Calculate final estimate's PSNR and ISNR, print them, and show the
%%%% denoised image
%%%%
PSNR = 0; %% Remains 0 if the true image yRGB is not available
if (~yRGB_is_invalid_image), % then we assume yRGB is a valid image
PSNR = 10*log10(1/mean((yRGB(:)-yRGB_est(:)).^2));
end
if dump_output_information == 1,
fprintf(sprintf('FINAL ESTIMATE (total time: %.1f sec), PSNR: %.2f dB\n', ...
wiener_elapsed_time + estimate_elapsed_time, PSNR));
figure, imshow(min(max(zRGB,0),1)); title(sprintf('Noisy %s, PSNR: %.3f dB (sigma: %d)', ...
image_name(1:end-4), 10*log10(1/mean((yRGB(:)-zRGB(:)).^2)), sigma));
figure, imshow(min(max(yRGB_est,0),1)); title(sprintf('Denoised %s, PSNR: %.3f dB', ...
image_name(1:end-4), PSNR));
end
return;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Some auxiliary functions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% Create forward and inverse transform matrices, which allow for perfect
% reconstruction. The forward transform matrix is normalized so that the
% l2-norm of each basis element is 1.
%
% [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% INPUTS:
%
% N --> Size of the transform (for wavelets, must be 2^K)
%
% transform_type --> 'dct', 'dst', 'hadamard', or anything that is
% listed by 'help wfilters' (bi-orthogonal wavelets)
% 'DCrand' -- an orthonormal transform with a DC and all
% the other basis elements of random nature
%
% dec_levels --> If a wavelet transform is generated, this is the
% desired decomposition level. Must be in the
% range [0, log2(N)-1], where "0" implies
% full decomposition.
%
% OUTPUTS:
%
% Tforward --> (N x N) Forward transform matrix
%
% Tinverse --> (N x N) Inverse transform matrix
%
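% USAGE EXAMPLE (illustrative sketch, not part of the original release):
%
%   [Tfor, Tinv] = getTransfMatrix(8, 'dct');
%   max(max(abs(Tfor*Tinv - eye(8))))    %% ~0: perfect reconstruction
%   max(abs(sqrt(sum(Tfor.^2,2)) - 1))   %% ~0: each basis row has unit l2-norm
%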
if exist('dec_levels') ~= 1,
dec_levels = 0;
end
if N == 1,
Tforward = 1;
elseif strcmp(transform_type, 'hadamard') == 1,
Tforward = hadamard(N);
elseif (N == 8) & strcmp(transform_type, 'bior1.5')==1 % hardcoded transform so that the wavelet toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.219417649252501 0.449283757993216 0.449283757993216 0.219417649252501 -0.219417649252501 -0.449283757993216 -0.449283757993216 -0.219417649252501;
0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846 -0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284;
-0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284 0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846;
0.707106781186547 -0.707106781186547 0 0 0 0 0 0;
0 0 0.707106781186547 -0.707106781186547 0 0 0 0;
0 0 0 0 0.707106781186547 -0.707106781186547 0 0;
0 0 0 0 0 0 0.707106781186547 -0.707106781186547];
elseif (N == 8) & strcmp(transform_type, 'dct')==1 % hardcoded transform so that the signal processing toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.490392640201615 0.415734806151273 0.277785116509801 0.097545161008064 -0.097545161008064 -0.277785116509801 -0.415734806151273 -0.490392640201615;
0.461939766255643 0.191341716182545 -0.191341716182545 -0.461939766255643 -0.461939766255643 -0.191341716182545 0.191341716182545 0.461939766255643;
0.415734806151273 -0.097545161008064 -0.490392640201615 -0.277785116509801 0.277785116509801 0.490392640201615 0.097545161008064 -0.415734806151273;
0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274 0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274;
0.277785116509801 -0.490392640201615 0.097545161008064 0.415734806151273 -0.415734806151273 -0.097545161008064 0.490392640201615 -0.277785116509801;
0.191341716182545 -0.461939766255643 0.461939766255643 -0.191341716182545 -0.191341716182545 0.461939766255643 -0.461939766255643 0.191341716182545;
0.097545161008064 -0.277785116509801 0.415734806151273 -0.490392640201615 0.490392640201615 -0.415734806151273 0.277785116509801 -0.097545161008064];
elseif (N == 8) & strcmp(transform_type, 'dst')==1 % hardcoded transform so that the PDE toolbox is not needed to generate it
Tforward = [ 0.161229841765317 0.303012985114696 0.408248290463863 0.464242826880013 0.464242826880013 0.408248290463863 0.303012985114696 0.161229841765317;
0.303012985114696 0.464242826880013 0.408248290463863 0.161229841765317 -0.161229841765317 -0.408248290463863 -0.464242826880013 -0.303012985114696;
0.408248290463863 0.408248290463863 0 -0.408248290463863 -0.408248290463863 0 0.408248290463863 0.408248290463863;
0.464242826880013 0.161229841765317 -0.408248290463863 -0.303012985114696 0.303012985114696 0.408248290463863 -0.161229841765317 -0.464242826880013;
0.464242826880013 -0.161229841765317 -0.408248290463863 0.303012985114696 0.303012985114696 -0.408248290463863 -0.161229841765317 0.464242826880013;
0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863;
0.303012985114696 -0.464242826880013 0.408248290463863 -0.161229841765317 -0.161229841765317 0.408248290463863 -0.464242826880013 0.303012985114696;
0.161229841765317 -0.303012985114696 0.408248290463863 -0.464242826880013 0.464242826880013 -0.408248290463863 0.303012985114696 -0.161229841765317];
elseif strcmp(transform_type, 'dct') == 1,
Tforward = dct(eye(N));
elseif strcmp(transform_type, 'dst') == 1,
Tforward = dst(eye(N));
elseif strcmp(transform_type, 'DCrand') == 1,
x = randn(N); x(1:end,1) = 1; [Q,R] = qr(x);
if (Q(1) < 0),
Q = -Q;
end;
Tforward = Q';
else %% a wavelet decomposition supported by 'wavedec'
%%% Set periodic boundary conditions, to preserve bi-orthogonality
dwtmode('per','nodisp');
Tforward = zeros(N,N);
for i = 1:N
Tforward(:,i)=wavedec(circshift([1 zeros(1,N-1)],[dec_levels i-1]), log2(N), transform_type); %% construct transform matrix
end
end
%%% Normalize the basis elements
Tforward = (Tforward' * diag(sqrt(1./sum(Tforward.^2,2))))';
%%% Compute the inverse transform matrix
Tinverse = inv(Tforward);
return;
function [y, A, l2normLumChrom]=function_rgb2LumChrom(xRGB, colormode)
% Forward color-space transformation ( inverse transformation is function_LumChrom2rgb.m )
%
% Alessandro Foi - Tampere University of Technology - 2005 - 2006 Public release v1.03 (March 2006)
% -----------------------------------------------------------------------------------------------------------------------------------------------
%
% SYNTAX:
%
% [y A l2normLumChrom] = function_rgb2LumChrom(xRGB, colormode);
%
% INPUTS:
% xRGB is RGB image with range [0 1]^3
%
% colormode = 'opp', 'yCbCr', 'pca', or a custom 3x3 matrix
%
% 'opp' Opponent color space ('opp' is equirange version)
% 'yCbCr' The standard yCbCr (e.g. for JPEG images)
% 'pca' Principal components (note that this transformation is renormalized to be equirange)
%
% OUTPUTS:
% y is color-transformed image (with range typically included in or equal to [0 1]^3, depending on the transformation matrix)
%
% l2normLumChrom (optional) l2-norm of the transformation (useful for noise std calculation)
% A                 the 3x3 transformation matrix (needed when colormode='pca')
%
% NOTES: - If only two outputs are used, then the second output is l2normLumChrom, unless colormode='pca';
% - 'opp' is used by default if no colormode is specified.
%
%
% USAGE EXAMPLE FOR PCA TRANSFORMATION:
% %%%% -- forward color transformation --
% if colormode=='pca'
% [zLumChrom colormode] = function_rgb2LumChrom(zRGB,colormode); % 'colormode' is assigned a 3x3 transform matrix
% else
% zLumChrom = function_rgb2LumChrom(zRGB,colormode);
% end
%
% %%%% [ ... ] Some processing [ ... ]
%
% %%%% -- inverse color transformation --
% zRGB=function_LumChrom2rgb(zLumChrom,colormode);
%
if nargin==1
colormode='opp';
end
change_output=0;
if size(colormode)==[3 3]
A=colormode;
l2normLumChrom=sqrt(sum(A.^2,2));
else
if strcmp(colormode,'opp')
A=[1/3 1/3 1/3; 0.5 0 -0.5; 0.25 -0.5 0.25];
end
if strcmp(colormode,'yCbCr')
A=[0.299 0.587 0.114; -0.16873660714285 -0.33126339285715 0.5; 0.5 -0.4186875 -0.0813125];
end
if strcmp(colormode,'pca')
A=princomp(reshape(xRGB,[size(xRGB,1)*size(xRGB,2) 3]))';
A=A./repmat(sum(A.*(A>0),2)-sum(A.*(A<0),2),[1 3]); %% ranges are normalized to unitary length;
else
if nargout==2
change_output=1;
end
end
end
%%%% Make sure that each channel's intensity range is [0,1]
maxV = sum(A.*(A>0),2);
minV = sum(A.*(A<0),2);
yNormal = (reshape(xRGB,[size(xRGB,1)*size(xRGB,2) 3]) * A' - repmat(minV, [1 size(xRGB,1)*size(xRGB,2)])') * diag(1./(maxV-minV)); % put in range [0,1]
y = reshape(yNormal, [size(xRGB,1) size(xRGB,2) 3]);
%%%% The l2-norm of each of the 3 transform basis elements
l2normLumChrom = diag(1./(maxV-minV))*sqrt(sum(A.^2,2));
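%%%% Informal note (added for illustration): if the RGB image is corrupted by
%%%% AWGN with standard deviation sigma, then channel c of the transformed
%%%% image has noise std of approximately sigma*l2normLumChrom(c); these norms
%%%% are passed to the mex-functions so thresholds can be scaled per channel.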
if change_output
A=l2normLumChrom;
end
return;
function yRGB=function_LumChrom2rgb(x,colormode)
% Inverse color-space transformation ( forward transformation is function_rgb2LumChrom.m )
%
% Alessandro Foi - Tampere University of Technology - 2005 - 2006 Public release v1.03 (March 2006)
% -----------------------------------------------------------------------------------------------------------------------------------------------
%
% SYNTAX:
%
% yRGB = function_LumChrom2rgb(x,colormode);
%
% INPUTS:
% x is color-transformed image (with range typically included in or equal to [0 1]^3, depending on the transformation matrix)
%
% colormode = 'opp', 'yCbCr', or a custom 3x3 matrix (e.g. provided by the forward transform when 'pca' is selected)
%
% 'opp' opponent color space ('opp' is equirange version)
% 'yCbCr' standard yCbCr (e.g. for JPEG images)
%
% OUTPUTS:
% yRGB is the RGB image (with range [0 1]^3)
%
%
% NOTE: 'opp' is used by default if no colormode is specified
%
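% USAGE EXAMPLE (round-trip sketch, added for illustration):
%
%   yLC  = function_rgb2LumChrom(xRGB, 'opp');  %% forward transform
%   xRec = function_LumChrom2rgb(yLC, 'opp');   %% xRec approximates xRGB
%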
if nargin==1
colormode='opp';
end
if size(colormode)==[3 3]
A=colormode;
B=inv(A);
else
if strcmp(colormode,'opp')
A =[1/3 1/3 1/3; 0.5 0 -0.5; 0.25 -0.5 0.25];
B =[1 1 2/3;1 0 -4/3;1 -1 2/3];
end
if strcmp(colormode,'yCbCr')
A=[0.299 0.587 0.114; -0.16873660714285 -0.33126339285715 0.5; 0.5 -0.4186875 -0.0813125];
B=inv(A);
end
end
%%%% Undo the [0,1] range normalization applied by the forward transform
maxV = sum(A.*(A>0),2);
minV = sum(A.*(A<0),2);
xNormal = reshape(x,[size(x,1)*size(x,2) 3]) * diag(maxV-minV) + repmat(minV, [1 size(x,1)*size(x,2)])'; % undo the [0,1] normalization
yRGB = reshape(xNormal * B', [ size(x,1) size(x,2) 3]);
return;
================================================
FILE: BM3D/CVBM3D.m
================================================
function [Xdenoised] = CVBM3D(Xnoisy, sigma, Xorig)
% CVBM3D denoising of RGB videos corrupted with AWGN.
%
%
% [Xdenoised] = CVBM3D(Xnoisy, sigma, Xorig)
%
% INPUTS:
%
% 1) Xnoisy --> Either a filename of a noisy AVI RGB uncompressed video (e.g. 'SMg20.avi')
% or a 4-D matrix of dimensions (M x N x 3 x NumberOfFrames)
% The intensity range is [0,255]!
% 2) sigma --> Noise standard deviation (assumed intensity range is [0,255])
%
% 3) Xorig (optional parameter) --> Filename of the original video
%
% OUTPUT: .avi files are written to the current matlab folder
%
% 1) Xdenoised --> A 4-D matrix with the denoised RGB-video
%
% USAGE EXAMPLES:
% 1) To denoise a video:
% CVBM3D('SMg20.avi', 20)
%
% 2) To denoise a video and print PSNR:
% CVBM3D('SMg20.avi', 20, 'SM.avi')
%
% 3) To denoise a 4-D matrix representing a noisy RGB video:
% CVBM3D(X_4D_matrix, 20)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Copyright 2009 Tampere University of Technology. All rights reserved.
% This work should only be used for nonprofit purposes.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% If no input argument is provided, then use the internal ones from below:
if exist('sigma', 'var') ~= 1,
Xnoisy = 'SMg20.avi'; sigma = 20;
end
% Whether or not to print information to the screen
dump_information = 1;
% If the input is a 4-D matrix, then save it as AVI file that is used as
% input to the denoising
if ischar(Xnoisy) == 0;
NumberOfFrames = size(Xnoisy,4);
if NumberOfFrames <= 1
error('The input RGB video should be a 4-D matrix (M x N x 3 x NumberOfFrames)');
end
avi_filename = sprintf('ExternalMatrix_%.6d.avi', round(rand*50000));
if exist(avi_filename, 'file') == 2,
delete(avi_filename);
end
mov = avifile(avi_filename, 'Colormap', gray(256), 'compression', 'None', 'fps', 30);
if mean2(Xnoisy) <= 1
fprintf('Possible error: the input RGB-videos should be in range [0,255] and not in [0,1]!\n');
else
for ii = [1:NumberOfFrames],
mov = addframe(mov, uint8(Xnoisy(:,:,:,ii)));
end
end
mov = close(mov);
if dump_information == 1
fprintf('The input 4-D matrix was written to: %s.\n', avi_filename);
end
clear Xnoisy
Xnoisy = avi_filename;
end
% Read some properties of the noisy RGB video
noi_avi_file_info = aviinfo(Xnoisy);
NumberOfFrames = noi_avi_file_info.NumFrames;
%%% Read Xorig video --- needed if one wants to compute PSNR and ISNR
if exist('Xorig', 'var') == 1,
if ischar(Xorig) == 1;
org_avi_file_info = aviinfo(Xorig);
mo = aviread(Xorig);
Xorig = zeros([size(mo(1).cdata), NumberOfFrames], 'single');
for cf = 1:NumberOfFrames
Xorig(:,:,:,cf) = single(mo(cf).cdata(:,:,:));
end
clear mo;
if (org_avi_file_info.NumFrames == noi_avi_file_info.NumFrames && org_avi_file_info.FramesPerSecond == noi_avi_file_info.FramesPerSecond && ...
org_avi_file_info.Width == noi_avi_file_info.Width && org_avi_file_info.Height == noi_avi_file_info.Height)
dump_information = 1;
end
else
Xorig = single(Xorig);
if mean2(Xorig) <= 1
fprintf('Possible error: the input RGB-videos should be in range [0,255] and not in [0,1]!\n');
end
end
end
denoiseFrames = min(9, NumberOfFrames);
denoiseFramesW = min(9, NumberOfFrames);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Quality/complexity trade-off
%%%%
%%%% 'np' --> Normal Profile (balanced quality)
%%%% 'lc' --> Low Complexity Profile (fast, lower quality)
%%%%
if (exist('bm3dProfile') ~= 1)
bm3dProfile = 'np';
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Following are the parameters for the Normal Profile.
%%%%
%%%% Select transforms ('dct', 'dst', 'hadamard', or anything that is listed by 'help wfilters'):
transform_2D_HT_name = 'bior1.5'; %% transform used for the HT filt. of size N1 x N1
transform_2D_Wiener_name = 'dct'; %% transform used for the Wiener filt. of size N1_wiener x N1_wiener
transform_3rd_dim_name = 'haar'; %% transform used in the 3rd dimension, the same for HT and Wiener filt.
%%%% Step 1: Hard-thresholding (HT) parameters:
N1 = 8; %% N1 x N1 is the block size used for the hard-thresholding (HT) filtering
Nstep = 5; %% sliding step to process every next reference block
N2 = 8; %% maximum number of similar blocks (maximum size of the 3rd dimension of the 3D groups)
Ns = 7; %% length of the side of the search neighborhood for full-search block-matching (BM)
Npr = 3; %% length of the side of the motion-adaptive search neighborhood, used in the predictive-search BM
tau_match = 3000; %% threshold for the block distance (d-distance)
lambda_thr3D = 2.7; %% threshold parameter for the hard-thresholding in 3D DFT domain
dsub = 13; %% a small value subtracted from the distance of blocks with the same spatial coordinate as the reference one
Nb = 2; %% number of blocks to follow in each next frame, used in the predictive-search BM
beta = 2.0; %% the beta parameter of the 2D Kaiser window used in the reconstruction
%%%% Step 2: Wiener filtering parameters:
N1_wiener = 7;
Nstep_wiener = 4;
N2_wiener = 8;
Ns_wiener = 7;
Npr_wiener = 3;
tau_match_wiener = 1000;
beta_wiener = 2.0;
dsub_wiener = 1.5;
Nb_wiener = 2;
%%%% Block-matching parameters:
stepFS = 1; %% step that forces a switch to full-search BM, "1" implies always full-search
stepFSW = 1;
thrToIncStep = 8; %% used in the HT filtering to increase the sliding step in uniform regions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Following are the parameters for the Low Complexity Profile.
%%%%
if strcmp(bm3dProfile, 'lc') == 1,
lambda_thr3D = 2.8;
denoiseFrames = min(5, NumberOfFrames);
denoiseFramesW = min(5, NumberOfFrames);
N2_wiener = 4;
N2 = 4;
Ns = 3;
Ns_wiener = 3;
Nb = 1;
Nb_wiener = 1;
end
if strcmp(bm3dProfile, 'hi') == 1,
Nstep = 3;
Nstep_wiener = 3;
end
if sigma > 30,
N1_wiener = 8;
tau_match = 4500;
tau_match_wiener = 3000;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Note: touch below this point only if you know what you are doing!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Create transform matrices, etc.
%%%%
decLevel = 0; %% dec. level of the dyadic wavelet 2D transform for blocks (0 means full decomposition, higher values reduce the number of decomposition levels)
decLevel3 = 0; %% dec. level for the wavelet transform in the 3rd dimension
[Tfor, Tinv] = getTransfMatrix(N1, transform_2D_HT_name, decLevel); %% get (normalized) forward and inverse transform matrices
[TforW, TinvW] = getTransfMatrix(N1_wiener, transform_2D_Wiener_name); %% get (normalized) forward and inverse transform matrices
if (strcmp(transform_3rd_dim_name, 'haar') == 1 || strcmp(transform_3rd_dim_name(end-2:end), '1.1') == 1),
%%% Fast internal transform is used, no need to generate transform
%%% matrices.
hadper_trans_single_den = {};
inverse_hadper_trans_single_den = {};
else
%%% Create transform matrices. The transforms are later computed by
%%% matrix multiplication with them
for hh = [1 2 4 8 16 32];
[Tfor3rd, Tinv3rd] = getTransfMatrix(hh, transform_3rd_dim_name, decLevel3);
hadper_trans_single_den{hh} = single(Tfor3rd);
inverse_hadper_trans_single_den{hh} = single(Tinv3rd');
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% 2D Kaiser windows that scale the reconstructed blocks
%%%%
if beta_wiener==2 & beta==2 & N1_wiener==7 & N1==8 % hardcode the window function so that the signal processing toolbox is not needed by default
Wwin2D = [ 0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924 ];
Wwin2D_wiener = [ 0.1924 0.3151 0.4055 0.4387 0.4055 0.3151 0.1924;
0.3151 0.5161 0.6640 0.7184 0.6640 0.5161 0.3151;
0.4055 0.6640 0.8544 0.9243 0.8544 0.6640 0.4055;
0.4387 0.7184 0.9243 1.0000 0.9243 0.7184 0.4387;
0.4055 0.6640 0.8544 0.9243 0.8544 0.6640 0.4055;
0.3151 0.5161 0.6640 0.7184 0.6640 0.5161 0.3151;
0.1924 0.3151 0.4055 0.4387 0.4055 0.3151 0.1924 ];
else
Wwin2D = kaiser(N1, beta) * kaiser(N1, beta)'; % Kaiser window used in the aggregation of the HT part
Wwin2D_wiener = kaiser(N1_wiener, beta_wiener) * kaiser(N1_wiener, beta_wiener)'; % Kaiser window used in the aggregation of the Wiener filt. part
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Read an image, generate noise and add it to the image
%%%%
if dump_information == 1
fprintf('Input video: %s, sigma: %.1f\n', Xnoisy, sigma);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Determine unique filenames of intermediate avi files
%%%%
HT_avi_file = sprintf('%s_cvbm3d_step1_0.avi', Xnoisy(1:end-4));
Denoised_avi_file = sprintf('%s_cvbm3d_0.avi', Xnoisy(1:end-4));
i = 1;
while (exist(['./' HT_avi_file], 'file') ~= 0) | (exist(['./' Denoised_avi_file],'file') ~= 0)
HT_avi_file = sprintf('%s_cvbm3d_step1_%d.avi', Xnoisy(1:end-4),i);
Denoised_avi_file = sprintf('%s_cvbm3d_%d.avi', Xnoisy(1:end-4),i);
i = i + 1;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Initial estimate by hard-thresholding filtering
HT_IO = {which(Xnoisy), HT_avi_file};
tic;
bm3d_thr_video_c(HT_IO, hadper_trans_single_den, Nstep, N1, N2, 0,...
lambda_thr3D, tau_match*N1*N1/(255*255), (Ns-1)/2, sigma/255, thrToIncStep,...
single(Tfor), single(Tinv)', inverse_hadper_trans_single_den, single(ones(N1)),...
'unused arg', dsub*dsub/255 * (sigma^2 / 255), ones(NumberOfFrames,1), Wwin2D,...
(Npr-1)/2, stepFS, denoiseFrames, Nb, 0 );
estimate_elapsed_time = toc;
if dump_information == 1
% mo = aviread(HT_avi_file);
% y_hat = zeros([size(mo(1).cdata(:,:,1)), 3, NumberOfFrames], 'single');
% for cf = 1:NumberOfFrames
% y_hat(:,:,:,cf) = single(mo(cf).cdata(:,:,:))/255;
% end
% clear mo
%
% PSNR_HT_ESTIMATE = 10*log10(1/mean2((Xorig-y_hat).^2));
% fprintf('HT ESTIMATE, PSNR: %.3f dB\n', PSNR_HT_ESTIMATE);
% clear y_hat;
fprintf('STEP1 completed!\n');
end
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %%%% Final estimate by Wiener filtering (using the hard-thresholding
% initial estimate)
lut_ic = ClipComp16b(sigma/255);
WIE_IO = {which(Xnoisy), HT_avi_file, Denoised_avi_file};
tic;
bm3d_wiener_video_c(WIE_IO, 'unused', hadper_trans_single_den, Nstep_wiener, N1_wiener, N2_wiener, ...
'unused_arg', tau_match_wiener*N1_wiener*N1_wiener/(255*255), (Ns_wiener-1)/2, sigma/255, 'unused arg',...
single(TforW), single(TinvW)', inverse_hadper_trans_single_den, 'unused arg', dsub_wiener*dsub_wiener/255*(sigma^2 / 255),...
ones(NumberOfFrames,1), Wwin2D_wiener, (Npr_wiener-1)/2, stepFSW, denoiseFramesW, Nb_wiener, 0, lut_ic);
wiener_elapsed_time = toc;
if nargout == 1
mo = aviread(Denoised_avi_file);
Xdenoised = zeros([size(mo(1).cdata(:,:,1)), 3, NumberOfFrames], 'single');
for cf = 1:NumberOfFrames
Xdenoised(:,:,:,cf) = single(mo(cf).cdata(:,:,:));
end
clear mo
end
if dump_information == 1
if nargout ~= 1
mo = aviread(Denoised_avi_file);
Xdenoised = zeros([size(mo(1).cdata(:,:,1)), 3, NumberOfFrames], 'single');
for cf = 1:NumberOfFrames
Xdenoised(:,:,:,cf) = single(mo(cf).cdata(:,:,:));
end
clear mo
end
PSNR_TEXT='';
if exist('Xorig', 'var') == 1
PSNR = 10*log10(255*255/mean((Xorig(:)-Xdenoised(:)).^2));
PSNR_TEXT=sprintf(' PSNR: %.3f dB,', PSNR);
New_Denoised_avi_file = sprintf('%s_PSNR%.2f.avi',Denoised_avi_file(1:end-4),PSNR);
movefile(Denoised_avi_file, New_Denoised_avi_file);
Denoised_avi_file = New_Denoised_avi_file;
end
% PSNRs = zeros(NumberOfFrames,1);
% for ii = 1:NumberOfFrames,
% PSNRs(ii) = 10*log10(1/mean2( (Xorig(:,:,:,ii)-Xdenoised(:,:,:,ii)).^2));
% fprintf('Frame: %d, PSNR: %.2f\n', ii, PSNRs(ii));
% end
if nargout == 0
clear Xdenoised
end
fprintf('FILTERING COMPLETED (frames/sec: %.2f,%s denoised video saved as %s)\n', ...
NumberOfFrames/(wiener_elapsed_time + estimate_elapsed_time), PSNR_TEXT, Denoised_avi_file);
end
return;
function [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% Create forward and inverse transform matrices, which allow for perfect
% reconstruction. The forward transform matrix is normalized so that the
% l2-norm of each basis element is 1.
%
% [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% INPUTS:
%
% N --> Size of the transform (for wavelets, must be 2^K)
%
% transform_type --> 'dct', 'dst', 'hadamard', or anything that is
% listed by 'help wfilters' (bi-orthogonal wavelets)
% 'DCrand' -- an orthonormal transform with a DC and all
% the other basis elements of random nature
%
% dec_levels --> If a wavelet transform is generated, this is the
% desired decomposition level. Must be in the
% range [0, log2(N)-1], where "0" implies
% full decomposition.
%
% OUTPUTS:
%
% Tforward --> (N x N) Forward transform matrix
%
% Tinverse --> (N x N) Inverse transform matrix
%
if exist('dec_levels') ~= 1,
dec_levels = 0;
end
if N == 1,
Tforward = 1;
elseif strcmp(transform_type, 'hadamard') == 1,
Tforward = hadamard(N);
elseif (N == 8) & strcmp(transform_type, 'bior1.5')==1 % hardcoded transform so that the wavelet toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.219417649252501 0.449283757993216 0.449283757993216 0.219417649252501 -0.219417649252501 -0.449283757993216 -0.449283757993216 -0.219417649252501;
0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846 -0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284;
-0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284 0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846;
0.707106781186547 -0.707106781186547 0 0 0 0 0 0;
0 0 0.707106781186547 -0.707106781186547 0 0 0 0;
0 0 0 0 0.707106781186547 -0.707106781186547 0 0;
0 0 0 0 0 0 0.707106781186547 -0.707106781186547];
elseif (N == 8) & strcmp(transform_type, 'dct')==1 % hardcoded transform so that the signal processing toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.490392640201615 0.415734806151273 0.277785116509801 0.097545161008064 -0.097545161008064 -0.277785116509801 -0.415734806151273 -0.490392640201615;
0.461939766255643 0.191341716182545 -0.191341716182545 -0.461939766255643 -0.461939766255643 -0.191341716182545 0.191341716182545 0.461939766255643;
0.415734806151273 -0.097545161008064 -0.490392640201615 -0.277785116509801 0.277785116509801 0.490392640201615 0.097545161008064 -0.415734806151273;
0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274 0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274;
0.277785116509801 -0.490392640201615 0.097545161008064 0.415734806151273 -0.415734806151273 -0.097545161008064 0.490392640201615 -0.277785116509801;
0.191341716182545 -0.461939766255643 0.461939766255643 -0.191341716182545 -0.191341716182545 0.461939766255643 -0.461939766255643 0.191341716182545;
0.097545161008064 -0.277785116509801 0.415734806151273 -0.490392640201615 0.490392640201615 -0.415734806151273 0.277785116509801 -0.097545161008064];
elseif (N == 8) & strcmp(transform_type, 'dst')==1 % hardcoded transform so that the PDE toolbox is not needed to generate it
Tforward = [ 0.161229841765317 0.303012985114696 0.408248290463863 0.464242826880013 0.464242826880013 0.408248290463863 0.303012985114696 0.161229841765317;
0.303012985114696 0.464242826880013 0.408248290463863 0.161229841765317 -0.161229841765317 -0.408248290463863 -0.464242826880013 -0.303012985114696;
0.408248290463863 0.408248290463863 0 -0.408248290463863 -0.408248290463863 0 0.408248290463863 0.408248290463863;
0.464242826880013 0.161229841765317 -0.408248290463863 -0.303012985114696 0.303012985114696 0.408248290463863 -0.161229841765317 -0.464242826880013;
0.464242826880013 -0.161229841765317 -0.408248290463863 0.303012985114696 0.303012985114696 -0.408248290463863 -0.161229841765317 0.464242826880013;
0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863;
0.303012985114696 -0.464242826880013 0.408248290463863 -0.161229841765317 -0.161229841765317 0.408248290463863 -0.464242826880013 0.303012985114696;
0.161229841765317 -0.303012985114696 0.408248290463863 -0.464242826880013 0.464242826880013 -0.408248290463863 0.303012985114696 -0.161229841765317];
elseif (N == 7) & strcmp(transform_type, 'dct')==1 % hardcoded transform so that the signal processing toolbox is not needed to generate it
Tforward =[ 0.377964473009227 0.377964473009227 0.377964473009227 0.377964473009227 0.377964473009227 0.377964473009227 0.377964473009227;
0.521120889169602 0.417906505941275 0.231920613924330 0 -0.231920613924330 -0.417906505941275 -0.521120889169602;
0.481588117120063 0.118942442321354 -0.333269317528993 -0.534522483824849 -0.333269317528993 0.118942442321354 0.481588117120063;
0.417906505941275 -0.231920613924330 -0.521120889169602 0 0.521120889169602 0.231920613924330 -0.417906505941275;
0.333269317528993 -0.481588117120063 -0.118942442321354 0.534522483824849 -0.118942442321354 -0.481588117120063 0.333269317528993;
0.231920613924330 -0.521120889169602 0.417906505941275 0 -0.417906505941275 0.521120889169602 -0.231920613924330;
0.118942442321354 -0.333269317528993 0.481588117120063 -0.534522483824849 0.481588117120063 -0.333269317528993 0.118942442321354];
elseif strcmp(transform_type, 'dct') == 1,
Tforward = dct(eye(N));
elseif strcmp(transform_type, 'dst') == 1,
Tforward = dst(eye(N));
elseif strcmp(transform_type, 'DCrand') == 1,
x = randn(N); x(1:end,1) = 1; [Q,R] = qr(x);
if (Q(1) < 0),
Q = -Q;
end;
Tforward = Q';
else %% a wavelet decomposition supported by 'wavedec'
%%% Set periodic boundary conditions, to preserve bi-orthogonality
dwtmode('per','nodisp');
Tforward = zeros(N,N);
for i = 1:N
Tforward(:,i)=wavedec(circshift([1 zeros(1,N-1)],[dec_levels i-1]), log2(N), transform_type); %% construct transform matrix
end
end
%%% Normalize the basis elements
Tforward = (Tforward' * diag(sqrt(1./sum(Tforward.^2,2))))';
%%% Compute the inverse transform matrix
Tinverse = inv(Tforward);
return;
================================================
FILE: BM3D/IDDBM3D/BM3DDEB_init.m
================================================
function [ISNR, y_hat_RI,y_hat_RWI,zRI] = BM3DDEB_init(experiment_number, y, z, v, sigma)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Copyright 2008 Tampere University of Technology. All rights reserved.
% This work should only be used for nonprofit purposes.
%
% AUTHORS:
% Kostadin Dabov, email: kostadin.dabov _at_ tut.fi
% Alessandro Foi
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% This function implements the image deblurring method proposed in:
%
% [1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image
% restoration by sparse 3D transform-domain collaborative filtering,"
% Proc. SPIE Electronic Imaging, January 2008.
%
% FUNCTION INTERFACE:
%
% [ISNR, y_hat_RI, y_hat_RWI, zRI] = BM3DDEB_init(experiment_number, y, z, v, sigma)
%
% INPUT:
% 1) experiment_number: 1 -> PSF 1, sigma^2 = 2
% 2 -> PSF 1, sigma^2 = 8
% 3 -> PSF 2, sigma^2 = 0.308
% 4 -> PSF 3, sigma^2 = 49
% 5 -> PSF 4, sigma^2 = 4
% 6 -> PSF 5, sigma^2 = 64
%
% 2) test_image_name: a valid filename of a grayscale test image
%
% OUTPUT:
% 1) ISNR: the output improvement in SNR, dB
% 2) y_hat_RWI: the restored image
%
% ! The function can work without any of the input arguments,
% in which case, the internal default ones are used !
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Fixed regularization parameters (obtained empirically after a rough optimization)
Regularization_alpha_RI = 4e-4;
Regularization_alpha_RWI = 5e-3;
%%%% Experiment number (see below for details, e.g. how the blur is generated, etc.)
if (exist('experiment_number') ~= 1)
experiment_number = 3; % 1 -- 6
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Select a single image filename (might contain path)
%%%%
% if (exist('test_image_name') ~= 1)
% test_image_name = [
% % 'Lena512.png'
% 'Cameraman256.png'
% % 'barbara.png'
% % 'house.png'
% ];
% end
%%%% Select 2D transforms ('dct', 'dst', 'hadamard', or anything that is listed by 'help wfilters'):
transform_2D_HT_name = 'dst'; %% 2D transform (of size N1 x N1) used in Step 1
transform_2D_Wiener_name = 'dct'; %% 2D transform (of size N1_wiener x N1_wiener) used in Step 2
transform_3rd_dimage_name = 'haar'; %% 1D transform used in the 3rd dimension, the same for both steps
%%%% Step 1 (BM3D with collaborative hard-thresholding) parameters:
N1 = 8; %% N1 x N1 is the block size
Nstep = 3; %% sliding step to process every next reference block
N2 = 16; %% maximum number of similar blocks (maximum size of the 3rd dimension of a 3D array)
Ns = 39; %% length of the side of the search neighborhood for full-search block-matching (BM)
tau_match = 6000;%% threshold for the block distance (d-distance)
lambda_thr2D = 0; %% threshold for the coarse initial denoising used in the d-distance measure
lambda_thr3D = 2.9; %% threshold for the hard-thresholding
beta = 0; %% the beta parameter of the 2D Kaiser window used in the reconstruction
%%%% Step 2 (BM3D with collaborative Wiener filtering) parameters:
N1_wiener = 8;
Nstep_wiener = 2;
N2_wiener = 16;
Ns_wiener = 39;
tau_match_wiener = 800;
beta_wiener = 0;
%%%% Specify whether to print results and display images
print_to_screen = 0;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Note: touch below this point only if you know what you are doing!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Make parameters compatible with the interface of the mex-functions
%%%%
[Tfor, Tinv] = getTransfMatrix(N1, transform_2D_HT_name, 0); %% get (normalized) forward and inverse transform matrices
[TforW, TinvW] = getTransfMatrix(N1_wiener, transform_2D_Wiener_name, 0); %% get (normalized) forward and inverse transform matrices
if (strcmp(transform_3rd_dimage_name, 'haar') == 1),
%%% Fast internal transform is used, no need to generate transform
%%% matrices.
hadper_trans_single_den = {};
inverse_hadper_trans_single_den = {};
else
%%% Create transform matrices. The transforms are later applied by
%%% vector-matrix multiplications
for hpow = 0:ceil(log2(max(N2,N2_wiener))),
h = 2^hpow;
[Tfor3rd, Tinv3rd] = getTransfMatrix(h, transform_3rd_dimage_name, 0);
hadper_trans_single_den{h} = single(Tfor3rd);
inverse_hadper_trans_single_den{h} = single(Tinv3rd');
end
end
if beta == 0 && beta_wiener == 0
    Wwin2D = ones(N1,N1);
    Wwin2D_wiener = ones(N1_wiener,N1_wiener);
else
    Wwin2D = kaiser(N1, beta) * kaiser(N1, beta)'; % Kaiser window used in the hard-thresholding part
    Wwin2D_wiener = kaiser(N1_wiener, beta_wiener) * kaiser(N1_wiener, beta_wiener)'; % Kaiser window used in the Wiener filtering part
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %%%% Read an image and generate a blurred and noisy image
% %%%%
% y = im2double(imread(test_image_name));
%
% if experiment_number==1
% sigma=sqrt(2)/255;
% for x1=-7:7; for x2=-7:7; v(x1+8,x2+8)=1/(x1^2+x2^2+1); end, end; v=v./sum(v(:));
% end
% if experiment_number==2
% sigma=sqrt(8)/255;
% s1=0; for a1=-7:7; s1=s1+1; s2=0; for a2=-7:7; s2=s2+1; v(s1,s2)=1/(a1^2+a2^2+1); end, end; v=v./sum(v(:));
% end
% if experiment_number==3
% BSNR=40; sigma=-1; % if "sigma=-1", then the value of sigma depends on the BSNR
% v=ones(9); v=v./sum(v(:));
% end
% if experiment_number==4
% sigma=7/255;
% v=[1 4 6 4 1]'*[1 4 6 4 1]; v=v./sum(v(:)); % PSF
% end
% if experiment_number==5
% sigma=2/255;
% v=fspecial('gaussian', 25, 1.6);
% end
% if experiment_number==6
% sigma=8/255;
% v=fspecial('gaussian', 25, .4);
% end
%
%
[Xv, Xh] = size(y);
[ghy,ghx] = size(v);
big_v = zeros(Xv,Xh); big_v(1:ghy,1:ghx)=v; big_v=circshift(big_v,-round([(ghy-1)/2 (ghx-1)/2])); % pad PSF with zeros to whole image domain, and center it
V = fft2(big_v); % frequency response of the PSF
% y_blur = imfilter(y, v, 'circular'); % performs blurring (by circular convolution)
%
% randn('seed',0); %%% fix seed for the random number generator
% if sigma == -1; %% check whether to use BSNR in order to define value of sigma
% sigma=sqrt(norm(y_blur(:)-mean(y_blur(:)),2)^2 /(Xh*Xv*10^(BSNR/10))); % compute sigma from the desired BSNR
% end
%
% %%%% Create a blurred and noisy observation
% z = y_blur + sigma*randn(Xv,Xh);
tic;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Step 1: Final estimate by Regularized Inversion (RI) followed by
%%%% BM3D with collaborative hard-thresholding
%%%%
%%%% Step 1.1. Regularized Inversion
RI= conj(V)./( (abs(V).^2) + Regularization_alpha_RI * Xv*Xh*sigma^2); % Transfer Matrix for RI %% Standard Tikhonov Regularization
zRI=real(ifft2( fft2(z).* RI )); % Regularized Inverse Estimate (RI OBSERVATION)
stdRI = zeros(N1, N1);
for ii = 1:N1,
for jj = 1:N1,
UnitMatrix = zeros(N1,N1); UnitMatrix(ii,jj)=1;
BasisElementPadded = zeros(Xv, Xh); BasisElementPadded(1:N1,1:N1) = Tinv*UnitMatrix*Tinv';
TransfBasisElementPadded = fft2(BasisElementPadded);
stdRI(ii,jj) = sqrt( (1/(Xv*Xh)) * sum(sum(abs(TransfBasisElementPadded.*RI).^2)) )*sigma;
end,
end
%%%% Step 1.2. Colored noise suppression by BM3D with collaborative hard-
%%%% thresholding
y_hat_RI = bm3d_thr_colored_noise(zRI, hadper_trans_single_den, Nstep, N1, N2, lambda_thr2D,...
lambda_thr3D, tau_match*N1*N1/(255*255), (Ns-1)/2, sigma, 0, single(Tfor), single(Tinv)',...
inverse_hadper_trans_single_den, single(stdRI'), Wwin2D, 0, 1 );
PSNR_INITIAL_ESTIMATE = 10*log10(1/mean((y(:)-y_hat_RI(:)).^2));
ISNR_INITIAL_ESTIMATE = PSNR_INITIAL_ESTIMATE - 10*log10(1/mean((y(:)-z(:)).^2));
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Step 2: Final estimate by Regularized Wiener Inversion (RWI) followed
%%%% by BM3D with collaborative Wiener filtering
%%%%
%%%% Step 2.1. Regularized Wiener Inversion
Wiener_Pilot = abs(fft2(double(y_hat_RI))); %%% Wiener reference estimate
RWI = conj(V).*Wiener_Pilot.^2./(Wiener_Pilot.^2.*(abs(V).^2) + Regularization_alpha_RWI*Xv*Xh*sigma^2); % Transfer Matrix for RWI (uses standard regularization 'a-la-Tikhonov')
zRWI = real(ifft2(fft2(z).*RWI)); % RWI OBSERVATION
stdRWI = zeros(N1_wiener, N1_wiener);
for ii = 1:N1_wiener,
    for jj = 1:N1_wiener,
        UnitMatrix = zeros(N1_wiener,N1_wiener); UnitMatrix(ii,jj)=1;
        BasisElementPadded = zeros(Xv, Xh); BasisElementPadded(1:N1_wiener,1:N1_wiener) = TinvW*UnitMatrix*TinvW'; % use the Step-2 inverse transform (the previous hardcoded idct2 assumed the default 8x8 DCT)
TransfBasisElementPadded = fft2(BasisElementPadded);
stdRWI(ii,jj) = sqrt( (1/(Xv*Xh)) * sum(sum(abs(TransfBasisElementPadded.*RWI).^2)) )*sigma;
end,
end
%%%% Step 2.2. Colored noise suppression by BM3D with collaborative Wiener
%%%% filtering
y_hat_RWI = bm3d_wiener_colored_noise(zRWI, y_hat_RI, hadper_trans_single_den, Nstep_wiener, N1_wiener, N2_wiener, ...
0, tau_match_wiener*N1_wiener*N1_wiener/(255*255), (Ns_wiener-1)/2, 0, single(stdRWI'), single(TforW), single(TinvW)',...
inverse_hadper_trans_single_den, Wwin2D_wiener, 0, 1, single(ones(N1_wiener)) );
elapsed_time = toc;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Calculate the final estimate's PSNR and ISNR, print them, and show the
%%%% restored image
%%%%
PSNR = 10*log10(1/mean((y(:)-y_hat_RWI(:)).^2));
ISNR = PSNR - 10*log10(1/mean((y(:)-z(:)).^2));
if print_to_screen == 1
fprintf('Exp %d, Time: %.1f sec, PSNR-RI: %.2f dB, PSNR-RWI: %.2f, ISNR-RWI: %.2f dB\n', ...
    experiment_number, elapsed_time, PSNR_INITIAL_ESTIMATE, PSNR, ISNR);
figure,imshow(z);
figure,imshow(double(y_hat_RWI));
end
return;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Some auxiliary functions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% Create forward and inverse transform matrices, which allow for perfect
% reconstruction. The forward transform matrix is normalized so that the
% l2-norm of each basis element is 1.
%
% [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% INPUTS:
%
% N --> Size of the transform (for wavelets, must be 2^K)
%
% transform_type --> 'dct', 'dst', 'hadamard', or anything that is
% listed by 'help wfilters' (bi-orthogonal wavelets)
% 'DCrand' -- an orthonormal transform with a DC and all
% the other basis elements of random nature
%
% dec_levels --> If a wavelet transform is generated, this is the
% desired decomposition level. Must be in the
% range [0, log2(N)-1], where "0" implies
% full decomposition.
%
% OUTPUTS:
%
% Tforward --> (N x N) Forward transform matrix
%
% Tinverse --> (N x N) Inverse transform matrix
%
if exist('dec_levels') ~= 1,
dec_levels = 0;
end
if N == 1,
Tforward = 1;
elseif strcmp(transform_type, 'hadamard') == 1,
Tforward = hadamard(N);
elseif (N == 8) & strcmp(transform_type, 'bior1.5')==1 % hardcoded transform so that the wavelet toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.219417649252501 0.449283757993216 0.449283757993216 0.219417649252501 -0.219417649252501 -0.449283757993216 -0.449283757993216 -0.219417649252501;
0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846 -0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284;
-0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284 0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846;
0.707106781186547 -0.707106781186547 0 0 0 0 0 0;
0 0 0.707106781186547 -0.707106781186547 0 0 0 0;
0 0 0 0 0.707106781186547 -0.707106781186547 0 0;
0 0 0 0 0 0 0.707106781186547 -0.707106781186547];
elseif (N == 8) & strcmp(transform_type, 'dct')==1 % hardcoded transform so that the signal processing toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.490392640201615 0.415734806151273 0.277785116509801 0.097545161008064 -0.097545161008064 -0.277785116509801 -0.415734806151273 -0.490392640201615;
0.461939766255643 0.191341716182545 -0.191341716182545 -0.461939766255643 -0.461939766255643 -0.191341716182545 0.191341716182545 0.461939766255643;
0.415734806151273 -0.097545161008064 -0.490392640201615 -0.277785116509801 0.277785116509801 0.490392640201615 0.097545161008064 -0.415734806151273;
0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274 0.353553390593274 -0.353553390593274 -0.353553390593274 0.353553390593274;
0.277785116509801 -0.490392640201615 0.097545161008064 0.415734806151273 -0.415734806151273 -0.097545161008064 0.490392640201615 -0.277785116509801;
0.191341716182545 -0.461939766255643 0.461939766255643 -0.191341716182545 -0.191341716182545 0.461939766255643 -0.461939766255643 0.191341716182545;
0.097545161008064 -0.277785116509801 0.415734806151273 -0.490392640201615 0.490392640201615 -0.415734806151273 0.277785116509801 -0.097545161008064];
elseif (N == 8) & strcmp(transform_type, 'dst')==1 % hardcoded transform so that the PDE toolbox is not needed to generate it
Tforward = [ 0.161229841765317 0.303012985114696 0.408248290463863 0.464242826880013 0.464242826880013 0.408248290463863 0.303012985114696 0.161229841765317;
0.303012985114696 0.464242826880013 0.408248290463863 0.161229841765317 -0.161229841765317 -0.408248290463863 -0.464242826880013 -0.303012985114696;
0.408248290463863 0.408248290463863 0 -0.408248290463863 -0.408248290463863 0 0.408248290463863 0.408248290463863;
0.464242826880013 0.161229841765317 -0.408248290463863 -0.303012985114696 0.303012985114696 0.408248290463863 -0.161229841765317 -0.464242826880013;
0.464242826880013 -0.161229841765317 -0.408248290463863 0.303012985114696 0.303012985114696 -0.408248290463863 -0.161229841765317 0.464242826880013;
0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863 0 0.408248290463863 -0.408248290463863;
0.303012985114696 -0.464242826880013 0.408248290463863 -0.161229841765317 -0.161229841765317 0.408248290463863 -0.464242826880013 0.303012985114696;
0.161229841765317 -0.303012985114696 0.408248290463863 -0.464242826880013 0.464242826880013 -0.408248290463863 0.303012985114696 -0.161229841765317];
elseif strcmp(transform_type, 'dct') == 1,
Tforward = dct(eye(N));
elseif strcmp(transform_type, 'dst') == 1,
Tforward = dst(eye(N));
elseif strcmp(transform_type, 'DCrand') == 1,
x = randn(N); x(1:end,1) = 1; [Q,R] = qr(x);
if (Q(1) < 0),
Q = -Q;
end;
Tforward = Q';
else %% a wavelet decomposition supported by 'wavedec'
%%% Set periodic boundary conditions, to preserve bi-orthogonality
dwtmode('per','nodisp');
Tforward = zeros(N,N);
for i = 1:N
Tforward(:,i)=wavedec(circshift([1 zeros(1,N-1)],[dec_levels i-1]), log2(N), transform_type); %% construct transform matrix
end
end
%%% Normalize the basis elements
Tforward = (Tforward' * diag(sqrt(1./sum(Tforward.^2,2))))';
%%% Compute the inverse transform matrix
Tinverse = inv(Tforward);
return;
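For reference, Step 1.1 above (the Tikhonov-regularized inverse applied before the collaborative hard-thresholding) can be sketched outside Matlab. The following NumPy re-implementation is illustrative only and is not part of the BM3D package; it assumes a periodic blur model and an odd-sized PSF, and mirrors the `RI` / `zRI` expressions of the M-file:

```python
import numpy as np

def regularized_inverse(z, psf, sigma, alpha=4e-4):
    """Tikhonov-regularized inversion, mirroring Step 1.1 of BM3DDEB_init.

    z     -- blurred, noisy observation (2-D array, values in [0, 1])
    psf   -- blur kernel (small 2-D array with odd side lengths, sums to 1)
    sigma -- standard deviation of the additive noise
    alpha -- regularization weight (Regularization_alpha_RI in the M-file)
    """
    Xv, Xh = z.shape
    gy, gx = psf.shape
    # Pad the PSF with zeros to the whole image domain and center it at the
    # origin (matches the circshift in the M-file for odd-sized PSFs).
    big_v = np.zeros((Xv, Xh))
    big_v[:gy, :gx] = psf
    big_v = np.roll(big_v, (-((gy - 1) // 2), -((gx - 1) // 2)), axis=(0, 1))
    V = np.fft.fft2(big_v)  # frequency response of the PSF
    # RI transfer function: conj(V) / (|V|^2 + alpha * Xv * Xh * sigma^2)
    RI = np.conj(V) / (np.abs(V) ** 2 + alpha * Xv * Xh * sigma ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(z) * RI))
```

With sigma = 0 and a PSF whose frequency response has no zeros this reduces to exact inverse filtering; when noise is present, the alpha term keeps the division bounded near the zeros of V.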
================================================
FILE: BM3D/IDDBM3D/Demo_IDDBM3D.m
================================================
function [isnr, y_hat] = Demo_IDDBM3D(experiment_number, test_image_name)
% ------------------------------------------------------------------------------------------
%
% Demo software for BM3D-frame based image deblurring
% Public release ver. 0.8 (beta) (June 03, 2011)
%
% ------------------------------------------------------------------------------------------
%
% This function implements the IDDBM3D image deblurring algorithm proposed in:
%
% [1] A.Danielyan, V. Katkovnik, and K. Egiazarian, "BM3D frames and
% variational image deblurring," submitted to IEEE TIP, May 2011
%
% ------------------------------------------------------------------------------------------
%
% authors: Aram Danielyan
% Vladimir Katkovnik
%
% web page: http://www.cs.tut.fi/~foi/GCF-BM3D/
%
% contact: firstname.lastname@tut.fi
%
% ------------------------------------------------------------------------------------------
% Copyright (c) 2011 Tampere University of Technology.
% All rights reserved.
% This work should be used for nonprofit purposes only.
% ------------------------------------------------------------------------------------------
%
% Disclaimer
% ----------
%
% Any unauthorized use of these routines for industrial or profit-oriented activities is
% expressively prohibited. By downloading and/or using any of these files, you implicitly
% agree to all the terms of the TUT limited license (included in the file Legal_Notice.txt).
% ------------------------------------------------------------------------------------------
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% FUNCTION INTERFACE:
%
% [psnr, y_hat] = Demo_IDDBM3D(experiment_number, test_image_name)
%
% INPUT:
% 1) experiment_number: 1 -> PSF 1, sigma^2 = 2
% 2 -> PSF 1, sigma^2 = 8
% 3 -> PSF 2, sigma^2 = 0.308
% 4 -> PSF 3, sigma^2 = 49
% 5 -> PSF 4, sigma^2 = 4
% 6 -> PSF 5, sigma^2 = 64
% 7-13 -> experiments 7-13 are not described in [1].
% see this file for the blur and noise parameters.
% 2) test_image_name: a valid filename of a grayscale test image
%
% OUTPUT:
% 1) isnr the output improvement in SNR, dB
% 2) y_hat: the restored image
%
% ! The function can work without any of the input arguments,
% in which case, the internal default ones are used !
%
% To run this demo, the functions of the BM3D package must be accessible to Matlab (e.g., on the Matlab path)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
addpath('../')
if ~exist('experiment_number','var'), experiment_number=3; end
if ~exist('test_image_name','var'), test_image_name='Cameraman256.png'; end
filename=test_image_name;
if 1 %
initType = 'bm3ddeb'; %use output of the BM3DDEB to initialize the algorithm
else
initType = 'zeros'; %use zero image to initialize the algorithm
end
matchType = 'bm3ddeb'; %build groups using output of the BM3DDEB algorithm
numIt = 200;
fprintf('Experiment number: %d\n', experiment_number);
fprintf('Image: %s\n', filename);
%% ------- Generating observation ---------------------------------------------
disp('--- Generating observation ----');
y=im2double(imread(filename));
[yN,xN]=size(y);
switch experiment_number
case 1
sigma=sqrt(2)/255;
for x1=-7:7; for x2=-7:7; h(x1+8,x2+8)=1/(x1^2+x2^2+1); end, end; h=h./sum(h(:));
case 2
sigma=sqrt(8)/255;
s1=0; for a1=-7:7; s1=s1+1; s2=0; for a2=-7:7; s2=s2+1; h(s1,s2)=1/(a1^2+a2^2+1); end, end; h=h./sum(h(:));
case 3
BSNR=40;
sigma=-1; % if "sigma=-1", then the value of sigma depends on the BSNR
h=ones(9); h=h./sum(h(:));
case 4
sigma=7/255;
h=[1 4 6 4 1]'*[1 4 6 4 1]; h=h./sum(h(:)); % PSF
case 5
sigma=2/255;
h=fspecial('gaussian', 25, 1.6);
case 6
sigma=8/255;
h=fspecial('gaussian', 25, .4);
%extra experiments
case 7
BSNR=30;
sigma=-1;
h=ones(9); h=h./sum(h(:));
case 8
BSNR=20;
sigma=-1;
h=ones(9); h=h./sum(h(:));
case 9
BSNR=40;
sigma=-1;
h=fspecial('gaussian', 25, 1.6);
case 10
BSNR=20;
sigma=-1;
h=fspecial('gaussian', 25, 1.6);
case 11
BSNR=15;
sigma=-1;
h=fspecial('gaussian', 25, 1.6);
case 12
BSNR=40;
sigma=-1; % if "sigma=-1", then the value of sigma depends on the BSNR
h=ones(19); h=h./sum(h(:));
case 13
BSNR=25;
sigma=-1; % if "sigma=-1", then the value of sigma depends on the BSNR
h=ones(19); h=h./sum(h(:));
end
y_blur = imfilter(y, h, 'circular'); % performs blurring (by circular convolution)
if sigma == -1; %% check whether to use BSNR in order to define value of sigma
sigma=sqrt(norm(y_blur(:)-mean(y_blur(:)),2)^2 /(yN*xN*10^(BSNR/10)));
    % compute sigma from the desired BSNR
end
%%%% Create a blurred and noisy observation
randn('seed',0);
z = y_blur + sigma*randn(yN, xN);
bsnr=10*log10(norm(y_blur(:)-mean(y_blur(:)),2)^2 /sigma^2/yN/xN);
psnr_z =PSNR(y,z,1,0);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
fprintf('Observation BSNR: %4.2f, PSNR: %4.2f\n', bsnr, psnr_z);
%% ----- Computing initial estimate ---------------------
disp('--- Computing initial estimate ----');
[dummy, y_hat_RI,y_hat_RWI,zRI] = BM3DDEB_init(experiment_number, y, z, h, sigma);
switch lower(initType)
case 'zeros'
y_hat_init=zeros(size(z));
case 'zri'
y_hat_init=zRI;
case 'ri'
y_hat_init=y_hat_RI;
case 'bm3ddeb'
y_hat_init=y_hat_RWI;
end
switch lower(matchType)
case 'z'
match_im = z;
case 'y'
match_im = y;
case 'zri'
match_im = zRI;
case 'ri'
match_im = y_hat_RI;
case 'bm3ddeb'
match_im = y_hat_RWI;
end
psnr_init = PSNR(y, y_hat_init,1,0);
fprintf('Initialization method: %s\n', initType);
fprintf('Initial estimate ISNR: %4.2f, PSNR: %4.2f\n', psnr_init-psnr_z, psnr_init);
%% ------- Core algorithm ---------------------
%------ Description of the parameters of the IDDBM3D function ----------
%y - true image (use [] if the true image is unavailable)
%z - observed (blurred and noisy) image
%h - blurring PSF
%y_hat_init - initial estimate y_0
%match_im - image used to construct groups and calculate weights g_r
%sigma - standard deviation of the noise
%threshType - thresholding type: 'h' for hard-thresholding, 's' for soft-thresholding
%numIt - number of iterations
%gamma - regularization parameter, see [1]
%tau - regularization parameter, see [1] (thresholding level)
%xi - regularization parameter, see [1]; it is always set to 1 in this implementation
%showFigure - set to true to display a figure with the current estimate
%--------------------------------------------------------------------
threshType = 'h';
showFigure = true;
switch threshType
case {'s'}
gamma_tau_xi_inits= [
0.0004509 0.70 1;%1
0.0006803 0.78 1;%2
0.0003485 0.65 1;%3
0.0005259 0.72 1;%4
0.0005327 0.82 1;%5
7.632e-05 0.25 1;%6
0.0005818 0.81 1;%7
0.001149 1.18 1;%8
0.0004155 0.74 1;%9
0.0005591 0.74 1;%10
0.0007989 0.82 1;%11
0.0006702 0.75 1;%12
0.001931 1.83 1;%13
];
case {'h'}
gamma_tau_xi_inits= [
0.00051 3.13 1;%1
0.0006004 2.75 1;%2
0.0004573 2.91 1;%3
0.0005959 2.82 1;%4
0.0006018 3.63 1;%5
0.0001726 2.24 1;%6
0.00062 2.98 1;%7
0.001047 3.80 1;%8
0.0005125 3.00 1;%9
0.0005685 2.80 1;%10
0.0005716 2.75 1;%11
0.0005938 2.55 1;%12
0.001602 4.16 1;%13
];
end
gamma = gamma_tau_xi_inits(experiment_number,1);
tau = gamma_tau_xi_inits(experiment_number,2)/255*2.7;
xi = gamma_tau_xi_inits(experiment_number,3);
disp('-------- Start ----------');
fprintf('Number of iterations to perform: %d\n', numIt);
fprintf('Thresholding type: %s\n', threshType);
y_hat = IDDBM3D(y, h, z, y_hat_init, match_im, sigma, threshType, numIt, gamma, tau, xi, showFigure);
psnr = PSNR(y,y_hat,1,0);
isnr = psnr-psnr_z;
disp('-------- Results --------');
fprintf('Final estimate ISNR: %4.2f, PSNR: %4.2f\n', isnr, psnr);
return;
end
function PSNRdb = PSNR(x, y, maxval, borders)
if ~exist('borders', 'var'), borders = 0; end
if ~exist('maxval', 'var'), maxval = 255; end
xx=borders+1:size(x,1)-borders;
yy=borders+1:size(x,2)-borders;
PSNRdb = zeros(1,size(x,3));
for fr=1:size(x,3)
err = x(xx,yy,fr) - y(xx,yy,fr);
PSNRdb(fr) = 10 * log10((maxval^2)/mean2(err.^2));
end
end
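The quality bookkeeping used throughout this demo (the 2-D case of the PSNR helper above, the `bsnr` expression, and the `sigma == -1` branch that derives sigma from a target BSNR) can be sketched compactly. This NumPy version is illustrative only and is not part of the package; it mirrors the Matlab expressions for 2-D grayscale images:

```python
import numpy as np

def psnr_db(x, y, maxval=1.0, borders=0):
    """PSNR in dB between 2-D images x and y, ignoring a frame of `borders`
    pixels (mirrors the PSNR helper at the end of Demo_IDDBM3D)."""
    b = borders
    err = (x[b:x.shape[0] - b, b:x.shape[1] - b]
           - y[b:y.shape[0] - b, b:y.shape[1] - b])
    return 10.0 * np.log10(maxval ** 2 / np.mean(err ** 2))

def bsnr_db(y_blur, sigma):
    """Blurred SNR in dB: energy of the blurred image around its mean
    relative to the noise energy (the `bsnr` expression in the demo)."""
    n = y_blur.size
    return 10.0 * np.log10(np.sum((y_blur - y_blur.mean()) ** 2) / (n * sigma ** 2))

def sigma_from_bsnr(y_blur, target_bsnr_db):
    """Noise standard deviation that achieves a target BSNR
    (the `sigma == -1` branch of the demo)."""
    n = y_blur.size
    return np.sqrt(np.sum((y_blur - y_blur.mean()) ** 2)
                   / (n * 10.0 ** (target_bsnr_db / 10.0)))
```

ISNR is then simply the PSNR gain of the estimate over the observation, i.e. `psnr_db(y, y_hat) - psnr_db(y, z)`, as computed at the end of the demo.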
================================================
FILE: BM3D/LEGAL_NOTICE.txt
================================================
Legal Notice
By accessing these World Wide Web pages you agree to the following terms. If you do not agree to the following terms, please notice that you are not allowed to use the site.
Copyright, author rights, trademarks and other intellectual property rights
This website and its contents are protected by copyright, author rights and/or other intellectual property rights which are the property of Tampere University of Technology ("TUT"), its researchers and/or third parties. Reproduction, modification, and use of the materials (or any information incorporated thereto such as but not limited to reports, publications, software, pictures, diagrams, video material) published on this website are hereby authorized provided that:
(i) reproduction, use, and modification are for informational and non-commercial or personal use only and will not be copied or posted on any network computer or broadcast in any media; and
(ii) any reproduction or modification retains all original notices including proprietary or copyright notices; and
(iii) reference to the original authors is given whenever results, which arise from the use of the provided material or any modification of it, are made public.
No other use of the materials and of any information incorporated thereto is hereby authorized.
In addition, be informed that some names are protected by trademarks which are the property of TUT, its researchers and/or other third parties whether a specific mention in that respect is made or not.
Disclaimers
The material, which is found on this website, is provided for general information only and should not be relied upon or used as the basis for making any transactions of any kind whatsoever. All the information and any part thereof provided on this website are provided AS IS without warranty of any kind either expressed or implied including, without limitation, warranties of merchantability, fitness for a particular purpose or non infringement of intellectual property rights.
TUT makes no representations or warranties as to the accuracy or completeness of any materials and information incorporated thereto and contained on this website. TUT makes no representations or warranties that access to this website will be uninterrupted or error-free, that this website (the materials and/or any information incorporated thereto) will be secure and free of virus or other harmful components.
The use of the materials (or any information incorporated thereto), in whole or in part, contained in this website is your sole responsibility. TUT disclaims any liability for any damages whatsoever including without limitation direct, indirect, incidental and/or consequential damages resulting from access to the website and use of the materials provided therein.
This website may contain links to third party sites. The links are provided to you only as a convenience and the inclusion of any link does not imply either an endorsement by TUT of the linked sites or any warranty from TUT on said sites. Access to said linked sites is at your own risk.
Transmission of user information
Any and all information or request for information you may direct to TUT through this website or through e-mail as may be linked to this website is to be considered as not confidential.
You may also address your information or request through mail to TUT's registered office for the attention of the department identified in the relevant part of this website.
Modifications
TUT reserves the right to revise the site or withdraw access to it at any time.
================================================
FILE: BM3D/README.txt
================================================
-------------------------------------------------------------------
BM3D demo software for image/video restoration and enhancement
Public release v2.00 (30 January 2014)
-------------------------------------------------------------------
Copyright (c) 2006-2014 Tampere University of Technology.
All rights reserved.
This work should be used for nonprofit purposes only.
Authors: Kostadin Dabov
         Aram Danielyan
Alessandro Foi
BM3D web page: http://www.cs.tut.fi/~foi/GCF-BM3D
-------------------------------------------------------------------
Contents
-------------------------------------------------------------------
The package comprises these functions
*) BM3D.m : BM3D grayscale-image denoising [1]
*) CBM3D.m : CBM3D RGB-image denoising [2]
*) VBM3D.m : VBM3D grayscale-video denoising [3]
*) CVBM3D.m : CVBM3D RGB-video denoising
*) BM3DSHARP.m : BM3D-SHARP grayscale-image sharpening &
denoising [4]
*) BM3DDEB.m : BM3D-DEB grayscale-image deblurring [5]
*) IDDBM3D\Demo_IDDBM3D : IDDBM3D grayscale-image deblurring [8]
*) BM3D-SAPCA\BM3DSAPCA2009 : BM3D-SAPCA grayscale-image denoising [9]
*) BM3D_CFA.m : BM3D denoising of Bayer data [10]
For help on how to use these scripts, you can e.g. use "help BM3D"
or "help CBM3D".
Each demo calls MEX-functions; all parameters used in the algorithm
can be changed from within the corresponding M-file.
-------------------------------------------------------------------
Installation
-------------------------------------------------------------------
Unzip both BM3D.zip (contains codes) and BM3D_images.zip (contains
test images) in a folder that is in the MATLAB path.
-------------------------------------------------------------------
Requirements
-------------------------------------------------------------------
*) MS Windows (32 or 64 bit), Linux (32 bit or 64 bit)
or Mac OS X (32 or 64 bit)
*) Matlab v.7.1 or later with installed:
-- Image Processing Toolbox (for visualization with "imshow")
*) CVBM3D currently supports only 32-bit and 64-bit Windows.
*) IDDBM3D currently supports only 32-bit and 64-bit Windows and
requires Microsoft Visual C++ 2008 SP1 Redistributable Package
to be installed. It can be downloaded from:
(x86) http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A5C84275-3B97-4AB7-A40D-3802B2AF5FC2
(x64) http://www.microsoft.com/downloads/en/details.aspx?FamilyID=BA9257CA-337F-4B40-8C14-157CFDFFEE4E
-------------------------------------------------------------------
Change log
-------------------------------------------------------------------
v2.00 (30 January 2014)
+ Added BM3D_CFA denoising algorithm for Bayer data [10].
! Various fixes in BM3DDEB main script: now works correctly with
asymmetric PSFs; corrected several typos which caused first or
second collaborative filtering stages to fail whenever the block
sizes and 2-D transforms differed from the default ones.
v1.9 (26 August 2011)
+ Added BM3D-SAPCA denoising algorithm [9].
v1.8 (4 July 2011)
+ Added IDDBM3D deblurring algorithm [8].
! Improved float precision of BM3D, CBM3D, and BM3DDEB mex-files.
v1.7.6 (4 February 2011)
+ Added support for Matlab running on Mac OSX 32-bit
. Changed the strong-noise parameters ("vn" profile) in CBM3D.m,
as proposed in [6].
v1.7.5 (7 July 2010)
. Changed the strong-noise parameters ("vn" profile) in BM3D.m,
as proposed in [6].
v1.7.4 (3 May 2010)
+ Added support for Matlab running on Mac OSX 64-bit
v1.7.3 (15 March 2010)
! Fixed a problem with writing to AVI files in CVBM3D
! Fixed a problem with VBM3D when the input is a 3-D matrix
v1.7.2 (8 Dec 2009)
! Fixed the output of CVBM3D to be in range [0,255] instead of
in range [0,1]
v1.7.1 (2 Dec 2009)
! Fixed a bug in VBM3D.m introduced in v1.7 that concerns the
declipping
v1.7 (12 Nov 2009)
+ Added CVBM3D.m script that performs denoising on RGB-videos with
AWGN
! Fixed VBM3D.m to use declipping in the case when noisy AVI file
is provided
v1.6 (17 June 2009)
! Made a few fixes to the "getTransfMatrix" internal function.
   If used with default parameters, BM3D no longer requires
   the Wavelet, PDE, or Signal Processing toolboxes.
+ Added support for x86_64 Linux
v1.5.1 (20 Nov 2008)
! Fixed bugs for older versions of Matlab
+ Added support for 32-bit Linux
+ improved the structure of the VBM3D.m script
v1.5 (18 Oct 2008)
+ Added x86_64 version of the MEX-files that run on 64-bit Matlab
under Windows
+ Added a missing function in BM3DDEB.m
+ Improved some of the comments in the code
! Fixed a bug in VBM3D when only an input noisy video is provided
v1.4.1 (26 Feb 2008)
! Fixed a bug in the grayscale-image deblurring codes and made
these codes compatible with Matlab 7 or newer versions.
v1.4 (1 Feb 2008)
+ Added grayscale-image deblurring
v1.3 (12 Oct 2007)
+ Added grayscale-image joint sharpening and denoising
v1.2.1 (4 Sept 2007)
! Fixed the output of the VBM3D to be the final Wiener estimate
rather than the intermediate basic estimate
! Fixed a problem when the original video is provided as a 3D
matrix
v1.2 (11 June 2007)
+ Added grayscale-video denoising files
v1.1.3 (4 May 2007)
+ Added support for Linux x86-compatible platforms
v1.1.2
! Fixed bugs related to Matlab v.6.1
v1.1.1 (8 March 2007)
! Fixed bugs related to Matlab v.6 (e.g., "isfloat" was not
available and "imshow" did not work with single precision)
+ Improved the usage examples shown by executing "help BM3D"
or "help CBM3D" MATLAB commands
v1.1 (6 March 2007)
! Fixed a bug in comparisons of the image sizes, which was
causing problems when executing "CBM3D(1,z,sigma);"
! Fixed a bug that was causing a crash when the input images are
of type "uint8"
! Fixed a problem that has caused some versions of imshow to
report an error
! Fixed a few typos in the comments of the functions
. Made the parameters of the BM3D and the C-BM3D the same
v1.0 (9 December 2006)
+ Initial version, based on BM3D-DFT [7] package (November 2005)
-------------------------------------------------------------------
References
-------------------------------------------------------------------
[1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image
denoising by sparse 3D transform-domain collaborative filtering,"
IEEE Trans. Image Process., vol. 16, no. 8, August 2007.
[2] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Color
image denoising via sparse 3D collaborative filtering with
grouping constraint in luminance-chrominance space," Proc. IEEE
Int. Conf. Image Process., ICIP 2007, San Antonio (TX), USA,
September 2007.
[3] K. Dabov, A. Foi, and K. Egiazarian, "Video denoising by
sparse 3D transform-domain collaborative filtering," Proc.
European Signal Process. Conf., EUSIPCO 2007, Poznan, Poland,
September 2007.
[4] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Joint
image sharpening and denoising by 3D transform-domain
collaborative filtering," Proc. 2007 Int. TICSP Workshop Spectral
Meth. Multirate Signal Process., SMMSP 2007, Moscow, Russia,
September 2007.
[5] K. Dabov, A. Foi, and K. Egiazarian, "Image restoration by
sparse 3D transform-domain collaborative filtering," Proc. SPIE
Electronic Imaging '08, vol. 6812, no. 6812-1D, San Jose (CA),
USA, January 2008.
[6] Y. Hou, C. Zhao, D. Yang, and Y. Cheng, 'Comment on "Image
Denoising by Sparse 3D Transform-Domain Collaborative Filtering"'
accepted for publication, IEEE Trans. Image Process., July, 2010.
[7] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image
denoising with block-matching and 3D filtering," Proc. SPIE
Electronic Imaging '06, vol. 6064, no. 6064A-30, San Jose (CA),
USA, January 2006.
[8] A. Danielyan, V. Katkovnik, and K. Egiazarian, "BM3D frames and
variational image deblurring," accepted for publication in IEEE
Trans. Image Process.
Preprint online at http://www.cs.tut.fi/~foi/GCF-BM3D
[9] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "BM3D Image
Denoising with Shape-Adaptive Principal Component Analysis", Proc.
Workshop on Signal Processing with Adaptive Sparse Structured
Representations (SPARS'09), Saint-Malo, France, April 2009.
[10] A. Danielyan, M. Vehviläinen, A. Foi, V. Katkovnik, and
K. Egiazarian, "Cross-color BM3D filtering of noisy raw data",
Proc. Int. Workshop on Local and Non-Local Approx. in Image Process.,
LNLA 2009, Tuusula, Finland, pp. 125-129, August 2009.
-------------------------------------------------------------------
Disclaimer
-------------------------------------------------------------------
Any unauthorized use of these routines for industrial or profit-
oriented activities is expressly prohibited. By downloading
and/or using any of these files, you implicitly agree to all the
terms of the TUT limited license:
http://www.cs.tut.fi/~foi/GCF-BM3D/legal_notice.html
-------------------------------------------------------------------
Feedback
-------------------------------------------------------------------
If you have any comment, suggestion, or question, please
contact Alessandro Foi at firstname.lastname@tut.fi
================================================
FILE: BM3D/VBM3D.m
================================================
function [PSNR_FINAL_ESTIMATE, y_hat_wi] = VBM3D(Xnoisy, sigma, NumberOfFrames, dump_information, Xorig, bm3dProfile)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% VBM3D is a Matlab function for attenuation of additive white Gaussian
% noise from grayscale videos. This algorithm reproduces the results from the article:
%
% [1] K. Dabov, A. Foi, and K. Egiazarian, "Video denoising by sparse 3D
% transform-domain collaborative filtering," European Signal Processing
% Conference (EUSIPCO-2007), September 2007. (accepted)
%
% INTERFACE:
%
% [PSNR, Xest] = VBM3D(Xnoisy, Sigma, NFrames, PrintInfo, Xorig)
%
% INPUTS:
% 1) Xnoisy --> A filename of a noisy .avi video, e.g. Xnoisy = 'gstennisg20.avi'
%      OR
%    Xnoisy --> A 3D matrix of a noisy video (floating-point data in range [0,1],
%               or integer data in range [0,255])
% 2) Sigma --> Noise standard deviation (assumed range is [0,255], no matter what is
% the input's range)
%
% 3) NFrames (optional parameter!) --> Number of frames to process. If set to 0 or
%    omitted, then all frames are processed (default: 0).
%
% 4) PrintInfo (optional parameter!) --> If non-zero, then print info to the screen
%    and save the denoised video in .AVI format. (default: 1)
%
% 5) Xorig (optional parameter!) --> Original video's filename or 3D matrix.
%    If provided, PSNR and ISNR will be computed.
%
% NOTE: If Xorig == Xnoisy, then artificial noise is added internally and the
% obtained noisy video is denoised.
%
% OUTPUTS:
%
% 1) PSNR --> If Xorig is a valid video, then this contains the PSNR of the
%    denoised one
%
% 2) Xest --> Final video estimate in a 3D matrix (intensities in range [0,1])
%
% *) If "PrintInfo" is non-zero, then save the denoised video in the current
% MATLAB folder.
%
% USAGE EXAMPLES:
%
% 1) Denoise a noisy (clipped in [0,255] range) video sequence, e.g.
% 'gsalesmang20.avi' corrupted with AWGN with std. dev. 20:
%
% Xest = VBM3D('gsalesmang20.avi', 20, 0, 1);
%
% 2) The same, but also compute and print PSNR and ISNR (the original video is provided):
%
% Xest = VBM3D('gsalesmang20.avi', 20, 0, 1, 'gsalesman.avi');
%
% 3) Add artificial noise to a video, then denoise it (without
% considering clipping in [0,255]):
%
% Xest = VBM3D('gsalesman.avi', 20, 0, 1, 'gsalesman.avi');
%
%
% RESTRICTIONS:
%
% Since the video sequences are read into memory as 3D matrices, the
% size of the input video is limited by the maximum amount of memory
% that Matlab can allocate.
%
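% A rough, illustrative memory estimate (added comment, not from the
% original authors): a single-precision video of size H x W x T occupies
% H*W*T*4 bytes, and this function keeps several arrays of that size in
% memory at once (z, y_hat, y_hat_wi, and optionally y), e.g.
%
%   bytesPerCopy = 704 * 576 * 300 * 4;  % about 487 MB per copy for a
%                                        % 704x576 video with 300 frames
%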
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Copyright 2007 Tampere University of Technology. All rights reserved.
% This work should only be used for nonprofit purposes.
%
% AUTHORS:
% Kostadin Dabov, email: dabov _at_ cs.tut.fi
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% If no input argument is provided, then use these internal ones:
if exist('sigma', 'var') ~= 1,
Xnoisy = 'gsalesmang20.avi'; Xorig = 'gsalesman.avi'; sigma = 20;
%Xnoisy = 'gstennisg20.avi'; Xorig = 'gstennis.avi'; sigma = 20;
%Xnoisy = 'gflowersg20.avi'; Xorig = 'gflower.avi'; sigma = 20;
%Xnoisy = 'gsalesman.avi'; Xorig = Xnoisy; sigma = 20;
NumberOfFrames = 0; %% 0 means process ALL frames.
end
if exist('dump_information', 'var') ~= 1,
dump_information = 1; % 1 -> print information to the screen and save the processed video as an AVI file
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Obtain information about the input noisy video
%%%%
if (ischar(Xnoisy) == 1), % if the input is a video filename
isCharacterName = 1;
Xnoisy_name = Xnoisy;
videoInfo = aviinfo(Xnoisy);
videoHeight = videoInfo.Height;
videoWidth = videoInfo.Width;
TotalFrames = videoInfo.NumFrames;
elseif length(size(Xnoisy)) == 3 % the input argument is a 3D video (spatio-temporal) matrix
Xnoisy_name = 'Input 3D matrix';
isCharacterName = 0;
[videoHeight, videoWidth, TotalFrames] = size(Xnoisy);
else
fprintf('Oops! The input argument Xnoisy should be either a filename or a 3D matrix!\n');
PSNR_FINAL_ESTIMATE = 0;
y_hat_wi = 0;
return;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Check if we want to process all frames, and save as 'NumberOfFrames'
%%%% the desired number of frames to process
%%%%
if exist('NumberOfFrames', 'var') == 1,
if NumberOfFrames <= 0,
NumberOfFrames = TotalFrames;
else
NumberOfFrames = max(min(NumberOfFrames, TotalFrames), 1);
end
else
NumberOfFrames = TotalFrames;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Quality/complexity trade-off
%%%%
%%%% 'np' --> Normal Profile (balanced quality)
%%%% 'lc' --> Low Complexity Profile (fast, lower quality)
%%%%
if (exist('bm3dProfile', 'var') ~= 1)
bm3dProfile = 'np';
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Parameters for the Normal Profile.
%%%%
%%%% Select transforms ('dct', 'dst', 'hadamard', or anything that is listed by 'help wfilters'):
transform_2D_HT_name = 'bior1.5'; %% transform used for the HT filt. of size N1 x N1
transform_2D_Wiener_name = 'dct'; %% transform used for the Wiener filt. of size N1_wiener x N1_wiener
transform_3rd_dim_name = 'haar'; %% tranform used in the 3-rd dim, the same for HT and Wiener filt.
%%%% Step 1: Hard-thresholding (HT) parameters:
denoiseFrames = min(9, NumberOfFrames); % number of frames in the temporal window (should not exceed the total number of frames 'NumberOfFrames')
N1 = 8; %% N1 x N1 is the block size used for the hard-thresholding (HT) filtering
Nstep = 6; %% sliding step to process every next reference block
N2 = 8; %% maximum number of similar blocks (maximum size of the 3rd dimension of the 3D groups)
Ns = 7; %% length of the side of the search neighborhood for full-search block-matching (BM)
Npr = 5; %% length of the side of the motion-adaptive search neighborhood, used in the predictive-search BM
tau_match = 3000; %% threshold for the block distance (d-distance)
lambda_thr3D = 2.7; %% threshold parameter for the hard-thresholding in 3D DFT domain
dsub = 7; %% a small value subtracted from the distance of blocks with the same spatial coordinate as the reference one
Nb = 2; %% number of blocks to follow in each next frame, used in the predictive-search BM
beta = 2.0; %% the beta parameter of the 2D Kaiser window used in the reconstruction
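% Illustrative note (added): the distance thresholds above are given on
% the [0,255] intensity scale; before being passed to the mex routines
% later in this file, they are rescaled to the [0,1] range and to a
% whole-block distance, e.g.
%
%   tau_match_scaled = tau_match * N1 * N1 / (255*255);
%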
%%%% Step 2: Wiener filtering parameters:
denoiseFramesW = min(9, NumberOfFrames);
N1_wiener = 7;
Nstep_wiener = 4;
N2_wiener = 8;
Ns_wiener = 7;
Npr_wiener = 5;
tau_match_wiener = 1500;
beta_wiener = 2.0;
dsub_wiener = 3;
Nb_wiener = 2;
%%%% Block-matching parameters:
stepFS = 1; %% step that forces to switch to full-search BM, "1" implies always full-search
smallLN = 3; %% if stepFS > 1, then this specifies the size of the small local search neighb.
stepFSW = 1;
smallLNW = 3;
thrToIncStep = 8; %% used in the HT filtering to increase the sliding step in uniform regions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Parameters for the Low Complexity Profile.
%%%%
if strcmp(bm3dProfile, 'lc') == 1,
lambda_thr3D = 2.8;
smallLN = 2;
smallLNW = 2;
denoiseFrames = min(5, NumberOfFrames);
denoiseFramesW = min(5, NumberOfFrames);
N2_wiener = 4;
N2 = 4;
Ns = 3;
Ns_wiener = 3;
Nb = 1;
Nb_wiener = 1;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Parameters for the High Profile.
%%%%
if strcmp(bm3dProfile, 'hi') == 1,
Nstep = 3;
Nstep_wiener = 3;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Parameters for the "Very Noisy" Profile.
%%%%
if sigma > 30,
N1 = 8;
N1_wiener = 8;
Nstep = 6;
tau_match = 4500;
tau_match_wiener = 3000;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Note: touch below this point only if you know what you are doing!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Extract the input noisy video and make sure intensities are in [0,1]
%%%% interval, using single-precision float
if isCharacterName,
mno = aviread(Xnoisy_name);
z = zeros([videoHeight, videoWidth, NumberOfFrames], 'single');
for cf = 1:NumberOfFrames
z(:,:,cf) = single(mno(cf).cdata(:,:,1)) * 0.0039216; % 1/255 = 0.0039216
end
clear mno
else
if isinteger(Xnoisy) == 1,
z = single(Xnoisy) * 0.0039216; % 1/255 = 0.0039216
elseif isfloat(Xnoisy) == 0,
fprintf('Unknown format of "Xnoisy"! Must be a filename (array of char) or a 3D array of either floating point data (range [0,1]) or integer data (range [0,255]). \n');
return;
else
z = single(Xnoisy);
end
end
clear Xnoisy;
%%%% If the original video is provided, then extract it to 'Xorig'
%%%% which is later used to compute PSNR and ISNR
if exist('Xorig', 'var') == 1,
randn('seed', 0);
if ischar(Xorig) == 0,
if isinteger(Xorig) == 1,
y = single(Xorig) * 0.0039216; % 1/255 = 0.0039216
elseif isfloat(Xorig) == 0,
fprintf('Unknown format of "Xorig"! Must be a filename (array of char) or a 3D array of either floating point data (range [0,1]) or integer data (range [0,255]). \n');
return;
else
y = single(Xorig);
end
else
if strcmp(Xorig, Xnoisy_name) == 1, %% special case, noise is artificially added
y = z;
z = z + (sigma/255) * randn(size(z));
else
mo = aviread(Xorig);
y = zeros([videoHeight, videoWidth, NumberOfFrames], 'single');
for cf = 1:NumberOfFrames
y(:,:,cf) = single(mo(cf).cdata(:,:,1)) * 0.0039216; % 1/255 = 0.0039216
end
clear mo
end
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Create transform matrices, etc.
%%%%
decLevel = 0; %% dec. levels of the dyadic wavelet 2D transform for blocks (0 means full decomposition, higher values decrease the dec. number)
decLevel3 = 0; %% dec. level for the wavelet transform in the 3rd dimension
[Tfor, Tinv] = getTransfMatrix(N1, transform_2D_HT_name, decLevel); %% get (normalized) forward and inverse transform matrices
[TforW, TinvW] = getTransfMatrix(N1_wiener, transform_2D_Wiener_name); %% get (normalized) forward and inverse transform matrices
thr_mask = ones(N1); %% N1xN1 mask of threshold scaling coefficients --- by default there is no scaling; different thresholds for different wavelet decomposition subbands can be set via this matrix
if (strcmp(transform_3rd_dim_name, 'haar') == 1 || strcmp(transform_3rd_dim_name(end-2:end), '1.1') == 1),
%%% Fast internal transform is used, no need to generate transform
%%% matrices.
hadper_trans_single_den = {};
inverse_hadper_trans_single_den = {};
else
%%% Create transform matrices. The transforms are later computed by
%%% matrix multiplication with them
for hh = [1 2 4 8 16 32]
[Tfor3rd, Tinv3rd] = getTransfMatrix(hh, transform_3rd_dim_name, decLevel3);
hadper_trans_single_den{hh} = single(Tfor3rd);
inverse_hadper_trans_single_den{hh} = single(Tinv3rd');
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% 2D Kaiser windows that scale the reconstructed blocks
%%%%
if beta_wiener==2 && beta==2 && N1_wiener==7 && N1==8 % hardcode the window function so that the signal processing toolbox is not needed by default
Wwin2D = [ 0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.4325 0.6717 0.8644 0.9718 0.9718 0.8644 0.6717 0.4325;
0.3846 0.5974 0.7688 0.8644 0.8644 0.7688 0.5974 0.3846;
0.2989 0.4642 0.5974 0.6717 0.6717 0.5974 0.4642 0.2989;
0.1924 0.2989 0.3846 0.4325 0.4325 0.3846 0.2989 0.1924 ];
Wwin2D_wiener = [ 0.1924 0.3151 0.4055 0.4387 0.4055 0.3151 0.1924;
0.3151 0.5161 0.6640 0.7184 0.6640 0.5161 0.3151;
0.4055 0.6640 0.8544 0.9243 0.8544 0.6640 0.4055;
0.4387 0.7184 0.9243 1.0000 0.9243 0.7184 0.4387;
0.4055 0.6640 0.8544 0.9243 0.8544 0.6640 0.4055;
0.3151 0.5161 0.6640 0.7184 0.6640 0.5161 0.3151;
0.1924 0.3151 0.4055 0.4387 0.4055 0.3151 0.1924 ];
else
Wwin2D = kaiser(N1, beta) * kaiser(N1, beta)'; % Kaiser window used in the aggregation of the HT part
Wwin2D_wiener = kaiser(N1_wiener, beta_wiener) * kaiser(N1_wiener, beta_wiener)'; % Kaiser window used in the aggregation of the Wiener filt. part
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Print information about the input video and the noise level
%%%%
l2normLumChrom = ones(NumberOfFrames,1); %%% NumberOfFrames == nSl !
if dump_information == 1,
fprintf('Video: %s (%dx%dx%d), sigma: %.1f\n', Xnoisy_name, videoHeight, videoWidth, NumberOfFrames, sigma);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Initial estimate by hard-thresholding filtering
tic;
y_hat = bm3d_thr_video(z, hadper_trans_single_den, Nstep, N1, N2, 0,...
lambda_thr3D, tau_match*N1*N1/(255*255), (Ns-1)/2, sigma/255, thrToIncStep, single(Tfor), single(Tinv)', inverse_hadper_trans_single_den, single(thr_mask), 'unused arg', dsub*dsub/255, l2normLumChrom, Wwin2D, (Npr-1)/2, stepFS, denoiseFrames, Nb );
estimate_elapsed_time = toc;
if exist('Xorig', 'var') == 1,
PSNR_INITIAL_ESTIMATE = 10*log10(1/mean((double(y(:))-double(y_hat(:))).^2));
PSNR_NOISE = 10*log10(1/mean((double(y(:))-double(z(:))).^2));
ISNR_INITIAL_ESTIMATE = PSNR_INITIAL_ESTIMATE - PSNR_NOISE;
if dump_information == 1,
fprintf('BASIC ESTIMATE (time: %.1f sec), PSNR: %.3f dB, ISNR: %.3f dB\n', ...
estimate_elapsed_time, PSNR_INITIAL_ESTIMATE, ISNR_INITIAL_ESTIMATE);
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Final estimate by Wiener filtering (using the hard-thresholding
%%%% initial estimate)
tic;
y_hat_wi = bm3d_wiener_video(z, y_hat, hadper_trans_single_den, Nstep_wiener, N1_wiener, N2_wiener, ...
'unused_arg', tau_match_wiener*N1_wiener*N1_wiener/(255*255), (Ns_wiener-1)/2, sigma/255, 'unused arg', single(TforW), single(TinvW)', inverse_hadper_trans_single_den, 'unused arg', dsub_wiener*dsub_wiener/255, l2normLumChrom, Wwin2D_wiener, (Npr_wiener-1)/2, stepFSW, denoiseFramesW, Nb_wiener );
% If the input noisy video is clipped to [0,1], apply declipping
if isCharacterName
if exist('Xorig', 'var') == 1
if ~strcmp(Xorig, Xnoisy_name)
[y_hat_wi] = ClipComp16b(sigma/255, y_hat_wi);
end
else
[y_hat_wi] = ClipComp16b(sigma/255, y_hat_wi);
end
end
wiener_elapsed_time = toc;
PSNR_FINAL_ESTIMATE = 0;
if exist('Xorig', 'var') == 1,
PSNR_FINAL_ESTIMATE = 10*log10(1/mean((double(y(:))-double(y_hat_wi(:))).^2));
ISNR_FINAL_ESTIMATE = PSNR_FINAL_ESTIMATE - 10*log10(1/mean((double(y(:))-double(z(:))).^2));
end
if dump_information == 1,
text_psnr = '';
if exist('Xorig', 'var') == 1
%%%% Un-comment the following to print the PSNR of each frame
%
% PSNRs = zeros(NumberOfFrames,1);
% for ii = [1:NumberOfFrames],
% PSNRs(ii) = 10*log10(1/mean2((y(:,:,ii)-y_hat_wi(:,:,ii)).^2));
% fprintf(['Frame: ' sprintf('%d',ii) ', PSNR: ' sprintf('%.2f',PSNRs(ii)) '\n']);
% end
%
fprintf('FINAL ESTIMATE, PSNR: %.3f dB, ISNR: %.3f dB\n', ...
PSNR_FINAL_ESTIMATE, ISNR_FINAL_ESTIMATE);
figure, imshow(double(z(:,:,ceil(NumberOfFrames/2)))); % show the central frame
title(sprintf('Noisy frame #%d',ceil(NumberOfFrames/2)));
figure, imshow(double(y_hat_wi(:,:,ceil(NumberOfFrames/2)))); % show the central frame
title(sprintf('Denoised frame #%d',ceil(NumberOfFrames/2)));
text_psnr = sprintf('_PSNR%.2f', PSNR_FINAL_ESTIMATE);
end
fprintf('The denoising took: %.1f sec (%.4f sec/frame). ', ...
wiener_elapsed_time+estimate_elapsed_time, (wiener_elapsed_time+estimate_elapsed_time)/NumberOfFrames);
text_vid = 'Denoised';
FRATE = 30; % default value
if isCharacterName,
text_vid = Xnoisy_name(1:end-4);
ainfo = aviinfo(Xnoisy_name);
FRATE = ainfo.FramesPerSecond;
end
avi_filename = sprintf('%s%s_%s_BM3D.avi', text_vid, text_psnr, bm3dProfile);
if exist(avi_filename, 'file') ~= 0,
delete(avi_filename);
end
mov = avifile(avi_filename, 'Colormap', gray(256), 'compression', 'None', 'fps', FRATE);
for ii = [1:NumberOfFrames],
mov = addframe(mov, uint8(round(255*double(y_hat_wi(:,:,ii)))));
end
mov = close(mov);
fprintf('The denoised video written to: %s.\n\n', avi_filename);
end
return;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Some auxiliary functions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% Create forward and inverse transform matrices, which allow for perfect
% reconstruction. The forward transform matrix is normalized so that the
% l2-norm of each basis element is 1.
%
% [Tforward, Tinverse] = getTransfMatrix (N, transform_type, dec_levels)
%
% INPUTS:
%
% N --> Size of the transform (for wavelets, must be 2^K)
%
% transform_type --> 'dct', 'dst', 'hadamard', or anything that is
% listed by 'help wfilters' (bi-orthogonal wavelets)
% 'DCrand' -- an orthonormal transform with a DC basis
% element and the remaining basis elements of random nature
%
% dec_levels --> If a wavelet transform is generated, this is the
% desired decomposition level. Must be in the
% range [0, log2(N)-1], where "0" implies
% full decomposition.
%
% OUTPUTS:
%
% Tforward --> (N x N) Forward transform matrix
%
% Tinverse --> (N x N) Inverse transform matrix
%
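% USAGE EXAMPLE (illustrative comment, mirroring the calls made earlier
% in this file):
%
%   [Tfor, Tinv] = getTransfMatrix(8, 'bior1.5', 0); % 8x8 bior1.5 transform,
%                                                    % full decomposition
%
% Perfect reconstruction means that Tinv*Tfor equals the 8x8 identity
% matrix (up to floating-point round-off).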
if exist('dec_levels', 'var') ~= 1,
dec_levels = 0;
end
if N == 1,
Tforward = 1;
elseif strcmp(transform_type, 'hadamard') == 1,
Tforward = hadamard(N);
elseif (N == 8) && strcmp(transform_type, 'bior1.5')==1 % hardcoded transform so that the wavelet toolbox is not needed to generate it
Tforward = [ 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274 0.353553390593274;
0.219417649252501 0.449283757993216 0.449283757993216 0.219417649252501 -0.219417649252501 -0.449283757993216 -0.449283757993216 -0.219417649252501;
0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846 -0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284;
-0.083506045090284 0.083506045090284 -0.083506045090284 0.083506045090284 0.569359398342846 0.402347308162278 -0.402347308162278 -0.569359398342846;
0.707106781186547 -0.707106781186547 0 0 0