Repository: TomTomTommi/HiNet
Branch: main
Commit: 5c2682a6be3f
Files: 25
Total size: 716.3 KB
Directory structure:
gitextract_0034reul/
├── README.md
├── calculate_PSNR_SSIM.py
├── config.py
├── datasets.py
├── environment.yml
├── hinet.py
├── image/
│ ├── cover/
│ │ └── 1
│ ├── secret/
│ │ └── 1
│ ├── secret-rev/
│ │ └── 1
│ └── steg/
│ └── 1
├── invblock.py
├── logging/
│ ├── 1
│ ├── train__211222-183515.log
│ ├── train__211223-100502.log
│ └── train__211224-105010.log
├── model/
│ └── 1
├── model.py
├── modules/
│ ├── Unet_common.py
│ └── module_util.py
├── rrdb_denselayer.py
├── test.py
├── train.py
├── train_logging.py
├── util.py
└── viz.py
================================================
FILE CONTENTS
================================================
================================================
FILE: README.md
================================================
# HiNet: Deep Image Hiding by Invertible Network
This repo is the official code for
* [**HiNet: Deep Image Hiding by Invertible Network.**](https://openaccess.thecvf.com/content/ICCV2021/html/Jing_HiNet_Deep_Image_Hiding_by_Invertible_Network_ICCV_2021_paper.html)
* [*Junpeng Jing*](https://tomtomtommi.github.io/), [*Xin Deng*](http://www.commsp.ee.ic.ac.uk/~xindeng/), [*Mai Xu*](http://shi.buaa.edu.cn/MaiXu/zh_CN/index.htm), [*Jianyi Wang*](http://buaamc2.net/html/Members/jianyiwang.html), [*Zhenyu Guan*](http://cst.buaa.edu.cn/info/1071/2542.htm).
Published at [**ICCV 2021**](http://iccv2021.thecvf.com/home).
By [MC2 Lab](http://buaamc2.net/) @ [Beihang University](http://ev.buaa.edu.cn/).
## Dependencies and Installation
- Python 3 (we recommend [Anaconda](https://www.anaconda.com/download/#linux)).
- [PyTorch 1.0.1](https://pytorch.org/).
- See [environment.yml](https://github.com/TomTomTommi/HiNet/blob/main/environment.yml) for other dependencies.
## Get Started
- Run `python train.py` for training.
- Run `python test.py` for testing.
- Set the model path (where the trained model is saved) and the image path (where images are saved during testing) to your local paths:
`line45: MODEL_PATH = '' `
`line49: IMAGE_PATH = '' `
## Dataset
- In this paper, we use the commonly used DIV2K, COCO, and ImageNet datasets.
- To train or test on your own dataset, change the paths in `config.py`:
`line30: TRAIN_PATH = '' `
`line31: VAL_PATH = '' `
## Trained Model
- Here we provide a trained [model](https://drive.google.com/drive/folders/1l3XBFYPMaNFdvCWyOHfB2qIPkpjIxZgE?usp=sharing).
- Fill in `MODEL_PATH` and the checkpoint file name `suffix` before testing with the trained model.
- For example, if the model name is `model.pt` and its path is `/home/usrname/Hinet/model/`,
set `MODEL_PATH = '/home/usrname/Hinet/model/'` and file name `suffix = 'model.pt'`.
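In `config.py`, those two settings would then look like this (the path below is illustrative; adjust it to your machine):

```python
# config.py -- illustrative values; point these at your own checkpoint
MODEL_PATH = '/home/usrname/Hinet/model/'  # directory containing the checkpoint
suffix = 'model.pt'                        # checkpoint file name inside MODEL_PATH
```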
## Training Demo (2021/12/25 Updated)
- Here we provide a training demo showing how to obtain a converged model in the early training stage. During this process, the loss may explode. Our solution is to stop training at a healthy checkpoint and reduce the learning rate, then continue training from that checkpoint.
- Note that, to log the training process, we import the `logging` package and slightly modify the `train_logging.py` and `util.py` files.
- Stage1:
Run `python train_logging.py` for training with the initial `config.py` (learning rate = 10^-4.5).
The logging file is [train__211222-183515.log](https://github.com/TomTomTommi/HiNet/blob/main/logging/train__211222-183515.log).
(The values of r_loss and g_loss are swapped due to a small bug, which is fixed in stage 2.)
See the tensorboard:
Note that at the 507th epoch the model exploded. Thus, we stop stage 1 at epoch 500.
- Stage2:
Set `suffix = 'model_checkpoint_00500.pt'`, `tain_next = True`, and `trained_epoch = 500`.
Change the learning rate from 10^-4.5 to 10^-5.0.
Run `python train_logging.py` for training.
The logging file is [train__211223-100502.log](https://github.com/TomTomTommi/HiNet/blob/main/logging/train__211223-100502.log).
See the tensorboard:
Note that at the 1692nd epoch the model exploded. Thus, we stop stage 2 at epoch 1690.
- Stage3:
Proceed as in stage 2.
Change the learning rate from 10^-5.0 to 10^-5.2.
The logging file is [train__211224-105010.log](https://github.com/TomTomTommi/HiNet/blob/main/logging/train__211224-105010.log).
See the tensorboard:
We can see that the network has initially converged. You can then adjust the hyper-parameters `lamda` according to the PSNR to balance the quality of the stego image against that of the recovered image. Note that the PSNR in the tensorboard is RGB-PSNR, while the PSNR in our paper is Y-PSNR.
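The stage-2 setup above corresponds to the following edits in `config.py` (values taken from this demo; `tain_next` is the flag name actually used in `config.py`):

```python
# config.py -- resume stage 2 from the epoch-500 checkpoint
log10_lr = -5.0                        # lowered from -4.5
lr = 10 ** log10_lr
suffix = 'model_checkpoint_00500.pt'   # checkpoint saved at the end of stage 1
tain_next = True                       # continue from a saved checkpoint
trained_epoch = 500
```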
## Others
- The `batchsize_val` in `config.py` should be at least twice the number of GPUs, and it should be divisible by the number of GPUs.
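This constraint can be expressed as a small check (a sketch; `valid_batchsize_val` is not part of this repo):

```python
def valid_batchsize_val(batchsize_val, n_gpus):
    # at least two validation samples per GPU, split evenly across GPUs
    return batchsize_val >= 2 * n_gpus and batchsize_val % n_gpus == 0

print(valid_batchsize_val(2, 1))   # -> True: the default config with one GPU
print(valid_batchsize_val(3, 2))   # -> False: fewer than 2 per GPU, not divisible
```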
## Citation
If you find our paper or code useful for your research, please cite:
```
@InProceedings{Jing_2021_ICCV,
author = {Jing, Junpeng and Deng, Xin and Xu, Mai and Wang, Jianyi and Guan, Zhenyu},
title = {HiNet: Deep Image Hiding by Invertible Network},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {4733-4742}
}
```
================================================
FILE: calculate_PSNR_SSIM.py
================================================
'''
Calculate PSNR and SSIM.
Matches MATLAB's results.
'''
import os
import math
import numpy as np
import cv2
import glob
from natsort import natsorted
def main():
# Configurations
# GT - Ground-truth;
# Gen: Generated / Restored / Recovered images
folder_GT = '/home/jjp/Hinet/image/cover/'
folder_Gen = '/home/jjp/Hinet/image/steg/'
crop_border = 1
suffix = '_secret_rev' # suffix for Gen images
test_Y = True # True: test Y channel only; False: test RGB channels
PSNR_all = []
SSIM_all = []
img_list = sorted(glob.glob(folder_GT + '/*'))
img_list = natsorted(img_list)
if test_Y:
print('Testing Y channel.')
else:
print('Testing RGB channels.')
for i, img_path in enumerate(img_list):
base_name = os.path.splitext(os.path.basename(img_path))[0]
# base_name = base_name[:5]
im_GT = cv2.imread(img_path) / 255.
# print(base_name)
# print(img_path)
# print(os.path.join(folder_Gen, base_name + '.png'))
im_Gen = cv2.imread(os.path.join(folder_Gen, base_name + '.png')) / 255.
if test_Y and im_GT.shape[2] == 3: # evaluate on Y channel in YCbCr color space
im_GT_in = bgr2ycbcr(im_GT)
im_Gen_in = bgr2ycbcr(im_Gen)
else:
im_GT_in = im_GT
im_Gen_in = im_Gen
# # crop borders
# if im_GT_in.ndim == 3:
# cropped_GT = im_GT_in[crop_border:-crop_border, crop_border:-crop_border, :]
# cropped_Gen = im_Gen_in[crop_border:-crop_border, crop_border:-crop_border, :]
# elif im_GT_in.ndim == 2:
# cropped_GT = im_GT_in[crop_border:-crop_border, crop_border:-crop_border]
# cropped_Gen = im_Gen_in[crop_border:-crop_border, crop_border:-crop_border]
# else:
# raise ValueError('Wrong image dimension: {}. Should be 2 or 3.'.format(im_GT_in.ndim))
# calculate PSNR and SSIM
PSNR = calculate_psnr(im_GT_in * 255, im_Gen_in * 255)
SSIM = calculate_ssim(im_GT_in * 255, im_Gen_in * 255)
print('{:3d} - {:25}. \tPSNR: {:.6f} dB, \tSSIM: {:.6f}'.format(
i + 1, base_name, PSNR, SSIM))
PSNR_all.append(PSNR)
SSIM_all.append(SSIM)
print('Average: PSNR: {:.6f} dB, SSIM: {:.6f}'.format(
sum(PSNR_all) / len(PSNR_all),
sum(SSIM_all) / len(SSIM_all)))
with open('1.txt', 'w') as f:
f.write(str(PSNR_all))
def calculate_psnr(img1, img2):
# img1 and img2 have range [0, 255]
img1 = img1.astype(np.float64)
img2 = img2.astype(np.float64)
mse = np.mean((img1 - img2)**2)
if mse == 0:
return float('inf')
return 20 * math.log10(255.0 / math.sqrt(mse))
def ssim(img1, img2):
C1 = (0.01 * 255)**2
C2 = (0.03 * 255)**2
img1 = img1.astype(np.float64)
img2 = img2.astype(np.float64)
kernel = cv2.getGaussianKernel(11, 1.5)
window = np.outer(kernel, kernel.transpose())
mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid
mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
mu1_sq = mu1**2
mu2_sq = mu2**2
mu1_mu2 = mu1 * mu2
sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
(sigma1_sq + sigma2_sq + C2))
return ssim_map.mean()
def calculate_ssim(img1, img2):
'''calculate SSIM
the same outputs as MATLAB's
img1, img2: [0, 255]
'''
if not img1.shape == img2.shape:
raise ValueError('Input images must have the same dimensions.')
if img1.ndim == 2:
return ssim(img1, img2)
elif img1.ndim == 3:
if img1.shape[2] == 3:
ssims = []
for i in range(3):
                ssims.append(ssim(img1[:, :, i], img2[:, :, i]))
return np.array(ssims).mean()
elif img1.shape[2] == 1:
return ssim(np.squeeze(img1), np.squeeze(img2))
else:
raise ValueError('Wrong input image dimensions.')
def bgr2ycbcr(img, only_y=True):
'''same as matlab rgb2ycbcr
only_y: only return Y channel
Input:
uint8, [0, 255]
float, [0, 1]
'''
in_img_type = img.dtype
    img = img.astype(np.float32)
if in_img_type != np.uint8:
img *= 255.
# convert
if only_y:
rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0
else:
rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
[65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]
if in_img_type == np.uint8:
rlt = rlt.round()
else:
rlt /= 255.
return rlt.astype(in_img_type)
if __name__ == '__main__':
main()
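As a standalone sanity check of the metrics above (a sketch, independent of this script; `bgr2ycbcr_y` reimplements only the Y-channel path of `bgr2ycbcr` for a float BGR image in [0, 1], returning Y in the [16, 235] range):

```python
import math
import numpy as np

def calculate_psnr(img1, img2):
    # same formula as in the script: 20*log10(255/sqrt(MSE)) over [0, 255] images
    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    mse = np.mean((img1 - img2) ** 2)
    if mse == 0:
        return float('inf')
    return 20 * math.log10(255.0 / math.sqrt(mse))

def bgr2ycbcr_y(img):
    # Y channel of MATLAB-style rgb2ycbcr for a float BGR image in [0, 1]
    return np.dot(img * 255., [24.966, 128.553, 65.481]) / 255.0 + 16.0

a = np.zeros((8, 8))
b = np.full((8, 8), 255.0)
print(calculate_psnr(a, a))     # -> inf for identical images
print(calculate_psnr(a, b))     # -> 0.0 dB for a uniform 255 error
white = np.ones((1, 1, 3))      # white pixel, BGR in [0, 1]
print(bgr2ycbcr_y(white))       # -> about 235 (BT.601 limited range)
```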
================================================
FILE: config.py
================================================
# Super parameters
clamp = 2.0
channels_in = 3
log10_lr = -4.5
lr = 10 ** log10_lr
epochs = 1000
weight_decay = 1e-5
init_scale = 0.01
lamda_reconstruction = 5
lamda_guide = 1
lamda_low_frequency = 1
device_ids = [0]
# Train:
batch_size = 16
cropsize = 224
betas = (0.5, 0.999)
weight_step = 1000
gamma = 0.5
# Val:
cropsize_val = 1024
batchsize_val = 2
shuffle_val = False
val_freq = 50
# Dataset
TRAIN_PATH = '/home/jjp/Dataset/DIV2K/DIV2K_train_HR/'
VAL_PATH = '/home/jjp/Dataset/DIV2K/DIV2K_valid_HR/'
format_train = 'png'
format_val = 'png'
# Display and logging:
loss_display_cutoff = 2.0
loss_names = ['L', 'lr']
silent = False
live_visualization = False
progress_bar = False
# Saving checkpoints:
MODEL_PATH = '/home/jjp/Hinet/model/'
checkpoint_on_error = True
SAVE_freq = 50
IMAGE_PATH = '/home/jjp/Hinet/image/'
IMAGE_PATH_cover = IMAGE_PATH + 'cover/'
IMAGE_PATH_secret = IMAGE_PATH + 'secret/'
IMAGE_PATH_steg = IMAGE_PATH + 'steg/'
IMAGE_PATH_secret_rev = IMAGE_PATH + 'secret-rev/'
# Load:
suffix = 'model.pt'
tain_next = False
trained_epoch = 0
================================================
FILE: datasets.py
================================================
import glob
from PIL import Image
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as T
import config as c
from natsort import natsorted
def to_rgb(image):
rgb_image = Image.new("RGB", image.size)
rgb_image.paste(image)
return rgb_image
class Hinet_Dataset(Dataset):
def __init__(self, transforms_=None, mode="train"):
self.transform = transforms_
self.mode = mode
if mode == 'train':
# train
self.files = natsorted(sorted(glob.glob(c.TRAIN_PATH + "/*." + c.format_train)))
else:
# test
self.files = sorted(glob.glob(c.VAL_PATH + "/*." + c.format_val))
    def __getitem__(self, index):
        try:
            image = Image.open(self.files[index])
            image = to_rgb(image)
            item = self.transform(image)
            return item
        except Exception:
            # skip unreadable images and wrap around at the end of the list
            return self.__getitem__((index + 1) % len(self.files))
    def __len__(self):
        return len(self.files)
transform = T.Compose([
T.RandomHorizontalFlip(),
T.RandomVerticalFlip(),
T.RandomCrop(c.cropsize),
T.ToTensor()
])
transform_val = T.Compose([
T.CenterCrop(c.cropsize_val),
T.ToTensor(),
])
# Training data loader
trainloader = DataLoader(
Hinet_Dataset(transforms_=transform, mode="train"),
batch_size=c.batch_size,
shuffle=True,
pin_memory=True,
num_workers=8,
drop_last=True
)
# Test data loader
testloader = DataLoader(
Hinet_Dataset(transforms_=transform_val, mode="val"),
batch_size=c.batchsize_val,
shuffle=False,
pin_memory=True,
num_workers=1,
drop_last=True
)
================================================
FILE: environment.yml
================================================
name: pytorch1.01
channels:
- pytorch
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- absl-py=0.11.0=pyhd3eb1b0_1
- aiohttp=3.7.3=py37h27cfd23_1
- async-timeout=3.0.1=py37_0
- attrs=20.3.0=pyhd3eb1b0_0
- blas=1.0=mkl
- blinker=1.4=py37_0
- brotlipy=0.7.0=py37h27cfd23_1003
- bzip2=1.0.8=h7b6447c_0
- c-ares=1.17.1=h27cfd23_0
- ca-certificates=2020.12.8=h06a4308_0
- cachetools=4.2.0=pyhd3eb1b0_0
- cairo=1.14.12=h8948797_3
- certifi=2020.12.5=py37h06a4308_0
- cffi=1.14.4=py37h261ae71_0
- chardet=3.0.4=py37h06a4308_1003
- click=7.1.2=py_0
- cloudpickle=1.6.0=py_0
- cryptography=2.9.2=py37h1ba5d50_0
- cudatoolkit=10.0.130=0
- cycler=0.10.0=py37_0
- cytoolz=0.11.0=py37h7b6447c_0
- dask-core=2020.12.0=pyhd3eb1b0_0
- dbus=1.13.18=hb2f20db_0
- decorator=4.4.2=py_0
- expat=2.2.10=he6710b0_2
- ffmpeg=4.0=hcdf2ecd_0
- fontconfig=2.13.0=h9420a91_0
- freeglut=3.0.0=hf484d3e_5
- freetype=2.10.4=h5ab3b9f_0
- glib=2.66.1=h92f7085_0
- google-auth=1.24.0=pyhd3eb1b0_0
- google-auth-oauthlib=0.4.2=pyhd3eb1b0_2
- graphite2=1.3.14=h23475e2_0
- grpcio=1.31.0=py37hf8bcb03_0
- gst-plugins-base=1.14.0=hbbd80ab_1
- gstreamer=1.14.0=hb31296c_0
- harfbuzz=1.8.8=hffaf4a1_0
- hdf5=1.10.2=hba1933b_1
- icu=58.2=he6710b0_3
- idna=2.10=py_0
- imageio=2.9.0=py_0
- importlib-metadata=2.0.0=py_1
- intel-openmp=2020.2=254
- jasper=2.0.14=h07fcdf6_1
- jpeg=9b=h024ee3a_2
- kiwisolver=1.3.0=py37h2531618_0
- lcms2=2.11=h396b838_0
- ld_impl_linux-64=2.33.1=h53a641e_7
- libedit=3.1.20191231=h14c3975_1
- libffi=3.3=he6710b0_2
- libgcc-ng=9.1.0=hdf63c60_0
- libgfortran-ng=7.3.0=hdf63c60_0
- libglu=9.0.0=hf484d3e_1
- libopencv=3.4.2=hb342d67_1
- libopus=1.3.1=h7b6447c_0
- libpng=1.6.37=hbc83047_0
- libprotobuf=3.13.0.1=hd408876_0
- libstdcxx-ng=9.1.0=hdf63c60_0
- libtiff=4.1.0=h2733197_1
- libuuid=1.0.3=h1bed415_2
- libvpx=1.7.0=h439df22_0
- libxcb=1.14=h7b6447c_0
- libxml2=2.9.10=hb55368b_3
- lz4-c=1.9.2=heb0550a_3
- markdown=3.3.3=py37h06a4308_0
- matplotlib=2.2.3=py37hb69df0a_0
- mkl=2020.2=256
- mkl-service=2.3.0=py37he8ac12f_0
- mkl_fft=1.2.0=py37h23d657b_0
- mkl_random=1.1.1=py37h0573a6f_0
- multidict=4.7.6=py37h7b6447c_1
- natsort=7.1.0=pyhd3eb1b0_0
- ncurses=6.2=he6710b0_1
- networkx=2.5=py_0
- ninja=1.10.2=py37hff7bd54_0
- numpy=1.19.2=py37h54aff64_0
- numpy-base=1.19.2=py37hfa32c7d_0
- oauthlib=3.1.0=py_0
- olefile=0.46=py37_0
- opencv=3.4.2=py37h6fd60c2_1
- openssl=1.1.1i=h27cfd23_0
- pandas=1.2.0=py37ha9443f7_0
- pcre=8.44=he6710b0_0
- pillow=6.0.0=py37h34e0f95_0
- pip=20.3.3=py37h06a4308_0
- pixman=0.40.0=h7b6447c_0
- py-opencv=3.4.2=py37hb342d67_1
- pyasn1=0.4.8=py_0
- pyasn1-modules=0.2.8=py_0
- pycparser=2.20=py_2
- pyjwt=2.0.0=py37h06a4308_0
- pyopenssl=20.0.1=pyhd3eb1b0_1
- pyparsing=2.4.7=py_0
- pyqt=5.9.2=py37h05f1152_2
- pysocks=1.7.1=py37_1
- python=3.7.9=h7579374_0
- python-dateutil=2.8.1=py_0
- pytorch=1.0.1=py3.7_cuda10.0.130_cudnn7.4.2_2
- pytz=2020.4=pyhd3eb1b0_0
- pywavelets=1.1.1=py37h7b6447c_2
- pyyaml=5.3.1=py37h7b6447c_1
- qt=5.9.7=h5867ecd_1
- readline=8.0=h7b6447c_0
- requests=2.25.1=pyhd3eb1b0_0
- requests-oauthlib=1.3.0=py_0
- rsa=4.6=py_0
- scikit-image=0.14.2=py37he6710b0_0
- scikit-learn=0.20.3=py37hd81dba3_0
- scipy=1.5.2=py37h0b6359f_0
- setuptools=51.0.0=py37h06a4308_2
- sip=4.19.8=py37hf484d3e_0
- six=1.15.0=py37h06a4308_0
- sqlite=3.33.0=h62c20be_0
- tensorboard=2.3.0=pyh4dce500_0
- tensorboard-plugin-wit=1.6.0=py_0
- tk=8.6.10=hbc83047_0
- toolz=0.11.1=py_0
- torchvision=0.2.2=py_3
- tornado=6.1=py37h27cfd23_0
- tqdm=4.54.1=pyhd3eb1b0_0
- typing-extensions=3.7.4.3=0
- typing_extensions=3.7.4.3=py_0
- urllib3=1.26.2=pyhd3eb1b0_0
- werkzeug=1.0.1=py_0
- wheel=0.36.2=pyhd3eb1b0_0
- xz=5.2.5=h7b6447c_0
- yaml=0.2.5=h7b6447c_0
- yarl=1.5.1=py37h7b6447c_0
- zipp=3.4.0=pyhd3eb1b0_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.5=h9ceee32_0
- pip:
- backcall==0.2.0
- coloredlogs==15.0
- et-xmlfile==1.0.1
- humanfriendly==9.1
- ipdb==0.13.7
- ipython==7.22.0
- ipython-genutils==0.2.0
- jdcal==1.4.1
- jedi==0.18.0
- openpyxl==3.0.5
- parso==0.8.2
- pexpect==4.8.0
- pickleshare==0.7.5
- prompt-toolkit==3.0.18
- protobuf==3.14.0
- ptyprocess==0.7.0
- pygments==2.8.1
- tensorboardx==2.1
- toml==0.10.2
- torchsummary==1.5.1
- traitlets==5.0.5
- wcwidth==0.2.5
prefix: /home/jjp/anaconda3/envs/pytorch1.01
================================================
FILE: hinet.py
================================================
from model import *
from invblock import INV_block
class Hinet(nn.Module):
def __init__(self):
super(Hinet, self).__init__()
self.inv1 = INV_block()
self.inv2 = INV_block()
self.inv3 = INV_block()
self.inv4 = INV_block()
self.inv5 = INV_block()
self.inv6 = INV_block()
self.inv7 = INV_block()
self.inv8 = INV_block()
self.inv9 = INV_block()
self.inv10 = INV_block()
self.inv11 = INV_block()
self.inv12 = INV_block()
self.inv13 = INV_block()
self.inv14 = INV_block()
self.inv15 = INV_block()
self.inv16 = INV_block()
def forward(self, x, rev=False):
if not rev:
out = self.inv1(x)
out = self.inv2(out)
out = self.inv3(out)
out = self.inv4(out)
out = self.inv5(out)
out = self.inv6(out)
out = self.inv7(out)
out = self.inv8(out)
out = self.inv9(out)
out = self.inv10(out)
out = self.inv11(out)
out = self.inv12(out)
out = self.inv13(out)
out = self.inv14(out)
out = self.inv15(out)
out = self.inv16(out)
else:
out = self.inv16(x, rev=True)
out = self.inv15(out, rev=True)
out = self.inv14(out, rev=True)
out = self.inv13(out, rev=True)
out = self.inv12(out, rev=True)
out = self.inv11(out, rev=True)
out = self.inv10(out, rev=True)
out = self.inv9(out, rev=True)
out = self.inv8(out, rev=True)
out = self.inv7(out, rev=True)
out = self.inv6(out, rev=True)
out = self.inv5(out, rev=True)
out = self.inv4(out, rev=True)
out = self.inv3(out, rev=True)
out = self.inv2(out, rev=True)
out = self.inv1(out, rev=True)
return out
================================================
FILE: image/cover/1
================================================
================================================
FILE: image/secret/1
================================================
================================================
FILE: image/secret-rev/1
================================================
================================================
FILE: image/steg/1
================================================
================================================
FILE: invblock.py
================================================
from math import exp
import torch
import torch.nn as nn
import config as c
from rrdb_denselayer import ResidualDenseBlock_out
class INV_block(nn.Module):
def __init__(self, subnet_constructor=ResidualDenseBlock_out, clamp=c.clamp, harr=True, in_1=3, in_2=3):
super().__init__()
if harr:
self.split_len1 = in_1 * 4
self.split_len2 = in_2 * 4
self.clamp = clamp
# ρ
self.r = subnet_constructor(self.split_len1, self.split_len2)
# η
self.y = subnet_constructor(self.split_len1, self.split_len2)
# φ
self.f = subnet_constructor(self.split_len2, self.split_len1)
def e(self, s):
return torch.exp(self.clamp * 2 * (torch.sigmoid(s) - 0.5))
    def forward(self, x, rev=False):
        # split the channel dimension into the two coupling branches
        x1, x2 = (x.narrow(1, 0, self.split_len1),
                  x.narrow(1, self.split_len1, self.split_len2))
        if not rev:
            # forward (hiding) pass of the affine coupling
            t2 = self.f(x2)
            y1 = x1 + t2
            s1, t1 = self.r(y1), self.y(y1)
            y2 = self.e(s1) * x2 + t1
        else:
            # reverse (recovery) pass: invert the operations above
            s1, t1 = self.r(x1), self.y(x1)
            y2 = (x2 - t1) / self.e(s1)
            t2 = self.f(y2)
            y1 = (x1 - t2)
        return torch.cat((y1, y2), 1)
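The two branches of `forward` form an affine coupling layer, so the block is invertible by construction. A minimal NumPy sketch of this (with arbitrary stand-ins for the ρ/η/φ sub-networks, on which invertibility does not depend):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def e(s, clamp=2.0):
    # clamped exponential scale, mirroring INV_block.e
    return np.exp(clamp * 2 * (sigmoid(s) - 0.5))

# arbitrary stand-ins for the rho (r), eta (y), phi (f) sub-networks
r = lambda v: 0.3 * v
y = lambda v: v - 1.0
f = lambda v: 2.0 * v + 0.5

def forward(x1, x2):
    t2 = f(x2)
    y1 = x1 + t2
    s1, t1 = r(y1), y(y1)
    return y1, e(s1) * x2 + t1

def reverse(x1, x2):
    s1, t1 = r(x1), y(x1)
    y2 = (x2 - t1) / e(s1)
    return x1 - f(y2), y2

rng = np.random.default_rng(0)
a, b = rng.normal(size=4), rng.normal(size=4)
ra, rb = reverse(*forward(a, b))
print(np.allclose(ra, a), np.allclose(rb, b))  # -> True True
```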
================================================
FILE: logging/1
================================================
================================================
FILE: logging/train__211222-183515.log
================================================
21-12-22 18:35:15.784 - INFO: DataParallel(
(module): Model(
(model): Hinet(
(inv1): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv2): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv3): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv4): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv5): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv6): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv7): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv8): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv9): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv10): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv11): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv12): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv13): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv14): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv15): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv16): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
)
)
)
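The channel counts repeated throughout the module dump above follow the dense-connection pattern of `ResidualDenseBlock_out`: each 3x3 conv receives the block input concatenated with the outputs of all earlier convs (12 input channels, growth 32, 12 output channels). A minimal sketch of that arithmetic in plain Python (function name chosen for illustration, not from the repo):

```python
def dense_block_channels(channel_in=12, channel_out=12, gc=32, n_convs=5):
    # In/out channel counts for each conv in a dense block: conv i sees the
    # block input plus the outputs of all i-1 earlier convs (growth gc each).
    ins = [channel_in + i * gc for i in range(n_convs)]
    outs = [gc] * (n_convs - 1) + [channel_out]
    return list(zip(ins, outs))

# Matches the Conv2d lines printed for every (r)/(y)/(f) sub-block:
print(dense_block_channels())  # [(12, 32), (44, 32), (76, 32), (108, 32), (140, 12)]
```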
21-12-22 18:36:26.983 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:36:26.983 - INFO: Train epoch 1: Loss: 1916530.6087 | r_Loss: 102788.5950 | g_Loss: 360085.9344 | l_Loss: 13312.3456 |
21-12-22 18:37:38.852 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:37:38.853 - INFO: Train epoch 2: Loss: 283685.8447 | r_Loss: 35807.4640 | g_Loss: 48669.8036 | l_Loss: 4529.3622 |
21-12-22 18:38:50.731 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:38:50.732 - INFO: Train epoch 3: Loss: 224030.6595 | r_Loss: 21267.3226 | g_Loss: 40083.8091 | l_Loss: 2344.2918 |
21-12-22 18:40:02.623 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:40:02.624 - INFO: Train epoch 4: Loss: 158706.2889 | r_Loss: 14590.3483 | g_Loss: 28442.3184 | l_Loss: 1904.3474 |
21-12-22 18:41:14.682 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:41:14.683 - INFO: Train epoch 5: Loss: 163240.7969 | r_Loss: 12599.3926 | g_Loss: 29776.1817 | l_Loss: 1760.4959 |
21-12-22 18:42:26.607 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:42:26.608 - INFO: Train epoch 6: Loss: 145886.6773 | r_Loss: 11285.0963 | g_Loss: 26611.1108 | l_Loss: 1546.0276 |
21-12-22 18:43:38.485 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:43:38.486 - INFO: Train epoch 7: Loss: 134074.5503 | r_Loss: 9025.2018 | g_Loss: 24783.6321 | l_Loss: 1131.1864 |
21-12-22 18:44:50.323 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:44:50.323 - INFO: Train epoch 8: Loss: 121420.2602 | r_Loss: 8606.2344 | g_Loss: 22327.0586 | l_Loss: 1178.7340 |
21-12-22 18:46:02.176 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:46:02.177 - INFO: Train epoch 9: Loss: 122202.0555 | r_Loss: 6858.1651 | g_Loss: 22904.1718 | l_Loss: 823.0301 |
21-12-22 18:47:13.896 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:47:13.896 - INFO: Train epoch 10: Loss: 136471.0877 | r_Loss: 9423.7402 | g_Loss: 25191.3217 | l_Loss: 1090.7403 |
21-12-22 18:48:25.691 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:48:25.691 - INFO: Train epoch 11: Loss: 107961.1444 | r_Loss: 5840.4038 | g_Loss: 20244.6550 | l_Loss: 897.4663 |
21-12-22 18:49:37.521 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:49:37.522 - INFO: Train epoch 12: Loss: 485130.6858 | r_Loss: 31769.5842 | g_Loss: 89893.0403 | l_Loss: 3895.9129 |
21-12-22 18:50:49.266 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:50:49.266 - INFO: Train epoch 13: Loss: 151705.1645 | r_Loss: 16805.2286 | g_Loss: 26605.7868 | l_Loss: 1871.0028 |
21-12-22 18:52:00.949 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:52:00.950 - INFO: Train epoch 14: Loss: 128397.9205 | r_Loss: 10097.1638 | g_Loss: 23420.3183 | l_Loss: 1199.1644 |
21-12-22 18:53:12.643 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:53:12.644 - INFO: Train epoch 15: Loss: 111989.2177 | r_Loss: 8957.6058 | g_Loss: 20382.2197 | l_Loss: 1120.5144 |
21-12-22 18:54:24.350 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:54:24.351 - INFO: Train epoch 16: Loss: 105015.4739 | r_Loss: 7037.8217 | g_Loss: 19443.5949 | l_Loss: 759.6777 |
21-12-22 18:55:36.173 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:55:36.174 - INFO: Train epoch 17: Loss: 123558.2725 | r_Loss: 7039.1299 | g_Loss: 23123.6531 | l_Loss: 900.8759 |
21-12-22 18:56:48.103 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:56:48.103 - INFO: Train epoch 18: Loss: 112097.1477 | r_Loss: 6368.6927 | g_Loss: 20990.6381 | l_Loss: 775.2658 |
21-12-22 18:57:59.910 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:57:59.910 - INFO: Train epoch 19: Loss: 114564.5614 | r_Loss: 5753.4407 | g_Loss: 21630.5812 | l_Loss: 658.2140 |
21-12-22 18:59:11.722 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 18:59:11.722 - INFO: Train epoch 20: Loss: 99518.6668 | r_Loss: 6046.9635 | g_Loss: 18545.9561 | l_Loss: 741.9222 |
21-12-22 19:00:23.722 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:00:23.722 - INFO: Train epoch 21: Loss: 110340.7299 | r_Loss: 5825.3805 | g_Loss: 20766.3231 | l_Loss: 683.7340 |
21-12-22 19:01:35.705 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:01:35.706 - INFO: Train epoch 22: Loss: 100620.7459 | r_Loss: 5382.7266 | g_Loss: 18893.8807 | l_Loss: 768.6155 |
21-12-22 19:02:47.318 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:02:47.319 - INFO: Train epoch 23: Loss: 99746.6170 | r_Loss: 5043.1425 | g_Loss: 18801.0269 | l_Loss: 698.3406 |
21-12-22 19:03:59.287 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:03:59.288 - INFO: Train epoch 24: Loss: 93780.7573 | r_Loss: 5846.0949 | g_Loss: 17432.3253 | l_Loss: 773.0354 |
21-12-22 19:05:11.109 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:05:11.110 - INFO: Train epoch 25: Loss: 100899.1276 | r_Loss: 5416.6849 | g_Loss: 18970.5545 | l_Loss: 629.6696 |
21-12-22 19:06:22.924 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:06:22.925 - INFO: Train epoch 26: Loss: 95658.3858 | r_Loss: 5295.2835 | g_Loss: 17922.5744 | l_Loss: 750.2308 |
21-12-22 19:07:34.800 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:07:34.800 - INFO: Train epoch 27: Loss: 92960.0682 | r_Loss: 5096.9779 | g_Loss: 17449.4505 | l_Loss: 615.8386 |
21-12-22 19:08:46.616 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:08:46.617 - INFO: Train epoch 28: Loss: 94073.5911 | r_Loss: 4942.2597 | g_Loss: 17705.2431 | l_Loss: 605.1158 |
21-12-22 19:09:58.361 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:09:58.362 - INFO: Train epoch 29: Loss: 96355.1770 | r_Loss: 5353.4785 | g_Loss: 18086.9778 | l_Loss: 566.8100 |
21-12-22 19:11:43.908 - INFO: TEST: PSNR_S: 19.8193 | PSNR_C: 23.4743 |
21-12-22 19:11:43.909 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:11:43.909 - INFO: Train epoch 30: Loss: 99024.9319 | r_Loss: 5586.3135 | g_Loss: 18544.4066 | l_Loss: 716.5852 |
21-12-22 19:12:55.624 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:12:55.625 - INFO: Train epoch 31: Loss: 98538.9102 | r_Loss: 5540.8903 | g_Loss: 18463.3595 | l_Loss: 681.2220 |
21-12-22 19:14:07.539 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:14:07.540 - INFO: Train epoch 32: Loss: 91550.5468 | r_Loss: 5660.9612 | g_Loss: 17050.9749 | l_Loss: 634.7113 |
21-12-22 19:15:19.441 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:15:19.441 - INFO: Train epoch 33: Loss: 93384.6013 | r_Loss: 6166.8778 | g_Loss: 17276.7829 | l_Loss: 833.8093 |
21-12-22 19:16:31.351 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:16:31.351 - INFO: Train epoch 34: Loss: 77829.5837 | r_Loss: 7108.6995 | g_Loss: 13963.7100 | l_Loss: 902.3337 |
21-12-22 19:17:43.210 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:17:43.211 - INFO: Train epoch 35: Loss: 74310.4207 | r_Loss: 9494.7295 | g_Loss: 12721.3827 | l_Loss: 1208.7780 |
21-12-22 19:18:55.045 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:18:55.046 - INFO: Train epoch 36: Loss: 69724.0864 | r_Loss: 10863.8109 | g_Loss: 11573.4593 | l_Loss: 992.9794 |
21-12-22 19:20:06.848 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:20:06.849 - INFO: Train epoch 37: Loss: 52294.3337 | r_Loss: 9285.8723 | g_Loss: 8385.4427 | l_Loss: 1081.2468 |
21-12-22 19:21:18.739 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:21:18.740 - INFO: Train epoch 38: Loss: 56965.3714 | r_Loss: 8709.5626 | g_Loss: 9380.5371 | l_Loss: 1353.1239 |
21-12-22 19:22:30.787 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:22:30.788 - INFO: Train epoch 39: Loss: 51276.8025 | r_Loss: 8020.2103 | g_Loss: 8421.2233 | l_Loss: 1150.4749 |
21-12-22 19:23:42.753 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:23:42.753 - INFO: Train epoch 40: Loss: 47764.1348 | r_Loss: 7001.6987 | g_Loss: 7970.4975 | l_Loss: 909.9489 |
21-12-22 19:24:54.511 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:24:54.511 - INFO: Train epoch 41: Loss: 48155.8277 | r_Loss: 7282.1811 | g_Loss: 8003.3746 | l_Loss: 856.7744 |
21-12-22 19:26:06.384 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:26:06.385 - INFO: Train epoch 42: Loss: 48455.2601 | r_Loss: 6392.9273 | g_Loss: 8245.4818 | l_Loss: 834.9239 |
21-12-22 19:27:18.100 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:27:18.101 - INFO: Train epoch 43: Loss: 46717.3673 | r_Loss: 6299.9798 | g_Loss: 7906.3217 | l_Loss: 885.7791 |
21-12-22 19:28:29.850 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:28:29.851 - INFO: Train epoch 44: Loss: 41815.2180 | r_Loss: 5934.1662 | g_Loss: 7036.7091 | l_Loss: 697.5062 |
21-12-22 19:29:41.729 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:29:41.730 - INFO: Train epoch 45: Loss: 44937.6304 | r_Loss: 5596.1181 | g_Loss: 7712.8649 | l_Loss: 777.1872 |
21-12-22 19:30:53.470 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:30:53.471 - INFO: Train epoch 46: Loss: 45333.4879 | r_Loss: 5821.4603 | g_Loss: 7761.8428 | l_Loss: 702.8132 |
21-12-22 19:32:05.093 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:32:05.093 - INFO: Train epoch 47: Loss: 37154.6009 | r_Loss: 5362.4525 | g_Loss: 6238.6018 | l_Loss: 599.1395 |
21-12-22 19:33:16.787 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:33:16.788 - INFO: Train epoch 48: Loss: 42004.4616 | r_Loss: 5252.6135 | g_Loss: 7223.1537 | l_Loss: 636.0801 |
21-12-22 19:34:28.564 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:34:28.565 - INFO: Train epoch 49: Loss: 41553.0154 | r_Loss: 5276.1923 | g_Loss: 7121.4157 | l_Loss: 669.7442 |
21-12-22 19:35:40.588 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:35:40.588 - INFO: Train epoch 50: Loss: 54623.5339 | r_Loss: 5371.9379 | g_Loss: 9709.1540 | l_Loss: 705.8253 |
21-12-22 19:36:52.466 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:36:52.466 - INFO: Train epoch 51: Loss: 42918.8186 | r_Loss: 6267.7249 | g_Loss: 7175.5440 | l_Loss: 773.3741 |
21-12-22 19:38:04.354 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:38:04.354 - INFO: Train epoch 52: Loss: 38656.3892 | r_Loss: 5336.7248 | g_Loss: 6528.5062 | l_Loss: 677.1332 |
21-12-22 19:39:16.221 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:39:16.222 - INFO: Train epoch 53: Loss: 38940.0750 | r_Loss: 5003.8909 | g_Loss: 6628.4260 | l_Loss: 794.0546 |
21-12-22 19:40:28.120 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:40:28.121 - INFO: Train epoch 54: Loss: 38471.1475 | r_Loss: 4886.3822 | g_Loss: 6569.5278 | l_Loss: 737.1263 |
21-12-22 19:41:39.998 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:41:39.998 - INFO: Train epoch 55: Loss: 41086.1729 | r_Loss: 4651.1002 | g_Loss: 7173.2689 | l_Loss: 568.7273 |
21-12-22 19:42:51.759 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:42:51.759 - INFO: Train epoch 56: Loss: 59183.1506 | r_Loss: 5361.0480 | g_Loss: 10614.2246 | l_Loss: 750.9801 |
21-12-22 19:44:03.496 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:44:03.496 - INFO: Train epoch 57: Loss: 38515.6164 | r_Loss: 5358.6152 | g_Loss: 6492.8062 | l_Loss: 692.9702 |
21-12-22 19:45:15.333 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:45:15.333 - INFO: Train epoch 58: Loss: 35177.1862 | r_Loss: 4979.8452 | g_Loss: 5917.4729 | l_Loss: 609.9764 |
21-12-22 19:46:27.325 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:46:27.326 - INFO: Train epoch 59: Loss: 36637.2636 | r_Loss: 4687.0900 | g_Loss: 6280.7927 | l_Loss: 546.2105 |
21-12-22 19:48:12.983 - INFO: TEST: PSNR_S: 23.8853 | PSNR_C: 24.5243 |
21-12-22 19:48:12.984 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:48:12.985 - INFO: Train epoch 60: Loss: 42134.2096 | r_Loss: 4571.7637 | g_Loss: 7398.5892 | l_Loss: 569.4997 |
21-12-22 19:49:24.758 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:49:24.759 - INFO: Train epoch 61: Loss: 32280.4110 | r_Loss: 4497.7907 | g_Loss: 5442.2084 | l_Loss: 571.5780 |
21-12-22 19:50:36.656 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:50:36.657 - INFO: Train epoch 62: Loss: 31958.5495 | r_Loss: 4431.0904 | g_Loss: 5381.6426 | l_Loss: 619.2462 |
21-12-22 19:51:48.611 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:51:48.611 - INFO: Train epoch 63: Loss: 32817.9409 | r_Loss: 4119.5277 | g_Loss: 5649.4970 | l_Loss: 450.9283 |
21-12-22 19:53:00.550 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:53:00.551 - INFO: Train epoch 64: Loss: 38051.7816 | r_Loss: 4232.9049 | g_Loss: 6652.6408 | l_Loss: 555.6725 |
21-12-22 19:54:12.457 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:54:12.458 - INFO: Train epoch 65: Loss: 34867.1996 | r_Loss: 4277.1156 | g_Loss: 6015.3669 | l_Loss: 513.2493 |
21-12-22 19:55:24.292 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:55:24.293 - INFO: Train epoch 66: Loss: 31819.8045 | r_Loss: 4115.4255 | g_Loss: 5454.0983 | l_Loss: 433.8873 |
21-12-22 19:56:36.094 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:56:36.095 - INFO: Train epoch 67: Loss: 34828.6469 | r_Loss: 4002.6987 | g_Loss: 6071.9324 | l_Loss: 466.2861 |
21-12-22 19:57:47.939 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:57:47.939 - INFO: Train epoch 68: Loss: 37350.5337 | r_Loss: 3994.9277 | g_Loss: 6561.5923 | l_Loss: 547.6441 |
21-12-22 19:58:59.832 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 19:58:59.833 - INFO: Train epoch 69: Loss: 33193.3755 | r_Loss: 4052.6953 | g_Loss: 5722.1048 | l_Loss: 530.1563 |
21-12-22 20:00:11.604 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:00:11.605 - INFO: Train epoch 70: Loss: 31803.6246 | r_Loss: 4103.4976 | g_Loss: 5424.5795 | l_Loss: 577.2293 |
21-12-22 20:01:23.397 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:01:23.397 - INFO: Train epoch 71: Loss: 35658.5235 | r_Loss: 3969.1713 | g_Loss: 6245.4957 | l_Loss: 461.8740 |
21-12-22 20:02:35.376 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:02:35.376 - INFO: Train epoch 72: Loss: 33273.9690 | r_Loss: 4031.3716 | g_Loss: 5754.7480 | l_Loss: 468.8574 |
21-12-22 20:03:47.374 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:03:47.374 - INFO: Train epoch 73: Loss: 31997.0918 | r_Loss: 3894.0523 | g_Loss: 5515.6542 | l_Loss: 524.7681 |
21-12-22 20:04:59.301 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:04:59.301 - INFO: Train epoch 74: Loss: 29749.8680 | r_Loss: 3743.6633 | g_Loss: 5094.6368 | l_Loss: 533.0205 |
21-12-22 20:06:11.214 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:06:11.215 - INFO: Train epoch 75: Loss: 31316.6220 | r_Loss: 3727.8082 | g_Loss: 5418.7688 | l_Loss: 494.9694 |
21-12-22 20:07:23.168 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:07:23.169 - INFO: Train epoch 76: Loss: 29193.5801 | r_Loss: 3461.8510 | g_Loss: 5045.4350 | l_Loss: 504.5544 |
21-12-22 20:08:35.203 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:08:35.204 - INFO: Train epoch 77: Loss: 29583.6322 | r_Loss: 3454.7472 | g_Loss: 5152.3946 | l_Loss: 366.9120 |
21-12-22 20:09:47.122 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:09:47.123 - INFO: Train epoch 78: Loss: 32468.2160 | r_Loss: 3839.8608 | g_Loss: 5639.9090 | l_Loss: 428.8102 |
21-12-22 20:10:58.909 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:10:58.909 - INFO: Train epoch 79: Loss: 28015.7399 | r_Loss: 3704.3633 | g_Loss: 4761.1647 | l_Loss: 505.5524 |
21-12-22 20:12:10.749 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:12:10.750 - INFO: Train epoch 80: Loss: 342866.0490 | r_Loss: 34705.5339 | g_Loss: 60719.6382 | l_Loss: 4562.3195 |
21-12-22 20:13:22.641 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:13:22.641 - INFO: Train epoch 81: Loss: 64890.4735 | r_Loss: 12810.6736 | g_Loss: 10067.3952 | l_Loss: 1742.8233 |
21-12-22 20:14:34.632 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:14:34.633 - INFO: Train epoch 82: Loss: 46769.0341 | r_Loss: 10368.6590 | g_Loss: 7025.7791 | l_Loss: 1271.4796 |
21-12-22 20:15:46.573 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:15:46.573 - INFO: Train epoch 83: Loss: 40247.8239 | r_Loss: 8742.3530 | g_Loss: 6041.4586 | l_Loss: 1298.1774 |
21-12-22 20:16:58.541 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:16:58.542 - INFO: Train epoch 84: Loss: 36324.4422 | r_Loss: 7551.5208 | g_Loss: 5542.6111 | l_Loss: 1059.8660 |
21-12-22 20:18:10.549 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:18:10.550 - INFO: Train epoch 85: Loss: 33744.4767 | r_Loss: 6900.0976 | g_Loss: 5205.4819 | l_Loss: 816.9696 |
21-12-22 20:19:22.385 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:19:22.385 - INFO: Train epoch 86: Loss: 29984.0971 | r_Loss: 6302.2685 | g_Loss: 4570.4455 | l_Loss: 829.6008 |
21-12-22 20:20:34.159 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:20:34.160 - INFO: Train epoch 87: Loss: 28899.3025 | r_Loss: 5858.6954 | g_Loss: 4466.2121 | l_Loss: 709.5463 |
21-12-22 20:21:46.039 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:21:46.040 - INFO: Train epoch 88: Loss: 28071.3953 | r_Loss: 5501.5555 | g_Loss: 4364.7567 | l_Loss: 746.0566 |
21-12-22 20:22:57.999 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:22:58.000 - INFO: Train epoch 89: Loss: 27034.3433 | r_Loss: 5542.3232 | g_Loss: 4158.2607 | l_Loss: 700.7162 |
21-12-22 20:24:43.789 - INFO: TEST: PSNR_S: 25.8681 | PSNR_C: 24.2186 |
21-12-22 20:24:43.790 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:24:43.790 - INFO: Train epoch 90: Loss: 27429.1166 | r_Loss: 5167.5514 | g_Loss: 4311.4618 | l_Loss: 704.2561 |
21-12-22 20:25:55.660 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:25:55.661 - INFO: Train epoch 91: Loss: 26321.7543 | r_Loss: 5063.5385 | g_Loss: 4123.2744 | l_Loss: 641.8438 |
21-12-22 20:27:07.504 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:27:07.505 - INFO: Train epoch 92: Loss: 24511.1963 | r_Loss: 5070.5907 | g_Loss: 3775.2423 | l_Loss: 564.3946 |
21-12-22 20:28:19.329 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:28:19.330 - INFO: Train epoch 93: Loss: 24642.1308 | r_Loss: 4865.9146 | g_Loss: 3842.0976 | l_Loss: 565.7282 |
21-12-22 20:29:31.122 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:29:31.123 - INFO: Train epoch 94: Loss: 20589.7835 | r_Loss: 4519.5283 | g_Loss: 3107.4175 | l_Loss: 533.1675 |
21-12-22 20:30:43.029 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:30:43.030 - INFO: Train epoch 95: Loss: 20890.6618 | r_Loss: 4341.8538 | g_Loss: 3214.4236 | l_Loss: 476.6901 |
21-12-22 20:31:54.839 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:31:54.839 - INFO: Train epoch 96: Loss: 68466.5346 | r_Loss: 7802.2862 | g_Loss: 11950.4663 | l_Loss: 911.9162 |
21-12-22 20:33:06.575 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:33:06.575 - INFO: Train epoch 97: Loss: 40537.8210 | r_Loss: 14383.5867 | g_Loss: 4940.4311 | l_Loss: 1452.0787 |
21-12-22 20:34:18.527 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:34:18.527 - INFO: Train epoch 98: Loss: 26632.5315 | r_Loss: 7718.0495 | g_Loss: 3597.9815 | l_Loss: 924.5744 |
21-12-22 20:35:30.541 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:35:30.542 - INFO: Train epoch 99: Loss: 24876.1990 | r_Loss: 6436.3721 | g_Loss: 3523.4448 | l_Loss: 822.6029 |
21-12-22 20:36:42.475 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:36:42.476 - INFO: Train epoch 100: Loss: 23284.8408 | r_Loss: 6006.3252 | g_Loss: 3314.9716 | l_Loss: 703.6574 |
21-12-22 20:37:54.590 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:37:54.591 - INFO: Train epoch 101: Loss: 23025.3239 | r_Loss: 5575.1147 | g_Loss: 3341.5128 | l_Loss: 742.6451 |
21-12-22 20:39:06.572 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:39:06.573 - INFO: Train epoch 102: Loss: 19243.0120 | r_Loss: 5033.9181 | g_Loss: 2716.7555 | l_Loss: 625.3164 |
21-12-22 20:40:18.736 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:40:18.737 - INFO: Train epoch 103: Loss: 19524.4420 | r_Loss: 4836.3808 | g_Loss: 2823.7097 | l_Loss: 569.5127 |
21-12-22 20:41:30.836 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:41:30.836 - INFO: Train epoch 104: Loss: 17540.4176 | r_Loss: 4479.8870 | g_Loss: 2505.5700 | l_Loss: 532.6805 |
21-12-22 20:42:42.784 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:42:42.784 - INFO: Train epoch 105: Loss: 17820.4678 | r_Loss: 4233.8928 | g_Loss: 2626.8312 | l_Loss: 452.4191 |
21-12-22 20:43:54.672 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:43:54.672 - INFO: Train epoch 106: Loss: 19130.2410 | r_Loss: 4222.2630 | g_Loss: 2878.3505 | l_Loss: 516.2255 |
21-12-22 20:45:06.722 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:45:06.722 - INFO: Train epoch 107: Loss: 16477.7752 | r_Loss: 3923.4162 | g_Loss: 2410.0183 | l_Loss: 504.2671 |
21-12-22 20:46:18.888 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:46:18.889 - INFO: Train epoch 108: Loss: 18694.0237 | r_Loss: 4054.0613 | g_Loss: 2820.0210 | l_Loss: 539.8571 |
21-12-22 20:47:30.951 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:47:30.952 - INFO: Train epoch 109: Loss: 16835.4866 | r_Loss: 3874.2568 | g_Loss: 2493.3063 | l_Loss: 494.6984 |
21-12-22 20:48:42.900 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:48:42.901 - INFO: Train epoch 110: Loss: 16738.1378 | r_Loss: 3635.6416 | g_Loss: 2531.9113 | l_Loss: 442.9398 |
21-12-22 20:49:54.951 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:49:54.952 - INFO: Train epoch 111: Loss: 17317.6649 | r_Loss: 3691.6990 | g_Loss: 2629.7406 | l_Loss: 477.2632 |
21-12-22 20:51:07.106 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:51:07.106 - INFO: Train epoch 112: Loss: 16501.1131 | r_Loss: 3524.7622 | g_Loss: 2505.2059 | l_Loss: 450.3214 |
21-12-22 20:52:19.131 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:52:19.132 - INFO: Train epoch 113: Loss: 15375.0421 | r_Loss: 3321.5725 | g_Loss: 2319.4758 | l_Loss: 456.0907 |
21-12-22 20:53:31.093 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:53:31.094 - INFO: Train epoch 114: Loss: 15642.6089 | r_Loss: 3292.4873 | g_Loss: 2385.1636 | l_Loss: 424.3036 |
21-12-22 20:54:43.086 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:54:43.087 - INFO: Train epoch 115: Loss: 15422.0306 | r_Loss: 3226.7952 | g_Loss: 2366.9861 | l_Loss: 360.3051 |
21-12-22 20:55:54.956 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:55:54.956 - INFO: Train epoch 116: Loss: 14017.6159 | r_Loss: 3124.4570 | g_Loss: 2106.5922 | l_Loss: 360.1975 |
21-12-22 20:57:06.900 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:57:06.901 - INFO: Train epoch 117: Loss: 15065.3337 | r_Loss: 3137.7820 | g_Loss: 2298.5352 | l_Loss: 434.8757 |
21-12-22 20:58:19.089 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:58:19.090 - INFO: Train epoch 118: Loss: 14413.3867 | r_Loss: 3047.6079 | g_Loss: 2188.1003 | l_Loss: 425.2776 |
21-12-22 20:59:31.062 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 20:59:31.062 - INFO: Train epoch 119: Loss: 13817.0255 | r_Loss: 3010.6503 | g_Loss: 2085.2487 | l_Loss: 380.1320 |
21-12-22 21:01:16.916 - INFO: TEST: PSNR_S: 28.9198 | PSNR_C: 26.6148 |
21-12-22 21:01:16.918 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:01:16.918 - INFO: Train epoch 120: Loss: 15093.5819 | r_Loss: 3092.1541 | g_Loss: 2326.4818 | l_Loss: 369.0190 |
21-12-22 21:02:28.848 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:02:28.848 - INFO: Train epoch 121: Loss: 13238.4354 | r_Loss: 2782.3987 | g_Loss: 2011.2357 | l_Loss: 399.8580 |
21-12-22 21:03:40.842 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:03:40.843 - INFO: Train epoch 122: Loss: 12575.9218 | r_Loss: 2744.5891 | g_Loss: 1904.6636 | l_Loss: 308.0145 |
21-12-22 21:04:52.803 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:04:52.803 - INFO: Train epoch 123: Loss: 12814.6647 | r_Loss: 2743.1511 | g_Loss: 1955.9732 | l_Loss: 291.6477 |
21-12-22 21:06:04.722 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:06:04.723 - INFO: Train epoch 124: Loss: 14915.0112 | r_Loss: 3222.7633 | g_Loss: 2270.0663 | l_Loss: 341.9163 |
21-12-22 21:07:16.678 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:07:16.679 - INFO: Train epoch 125: Loss: 11573.1813 | r_Loss: 2620.3512 | g_Loss: 1731.1305 | l_Loss: 297.1779 |
21-12-22 21:08:28.660 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:08:28.661 - INFO: Train epoch 126: Loss: 13091.2749 | r_Loss: 2902.8270 | g_Loss: 1971.6948 | l_Loss: 329.9736 |
21-12-22 21:09:40.519 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:09:40.520 - INFO: Train epoch 127: Loss: 12052.0927 | r_Loss: 2727.2504 | g_Loss: 1800.3996 | l_Loss: 322.8440 |
21-12-22 21:10:52.504 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:10:52.504 - INFO: Train epoch 128: Loss: 12619.7724 | r_Loss: 2767.9229 | g_Loss: 1890.8982 | l_Loss: 397.3584 |
21-12-22 21:12:04.514 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:12:04.514 - INFO: Train epoch 129: Loss: 11980.8025 | r_Loss: 2695.4229 | g_Loss: 1791.9514 | l_Loss: 325.6227 |
21-12-22 21:13:16.614 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:13:16.615 - INFO: Train epoch 130: Loss: 11367.0313 | r_Loss: 2569.9454 | g_Loss: 1699.9351 | l_Loss: 297.4106 |
21-12-22 21:14:28.554 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:14:28.555 - INFO: Train epoch 131: Loss: 153057.9031 | r_Loss: 21148.5858 | g_Loss: 25922.1142 | l_Loss: 2298.7500 |
21-12-22 21:15:40.508 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:15:40.509 - INFO: Train epoch 132: Loss: 26568.9323 | r_Loss: 9641.7427 | g_Loss: 3128.6491 | l_Loss: 1283.9438 |
21-12-22 21:16:52.486 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:16:52.486 - INFO: Train epoch 133: Loss: 21816.5433 | r_Loss: 7345.8246 | g_Loss: 2696.2840 | l_Loss: 989.2988 |
21-12-22 21:18:04.344 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:18:04.344 - INFO: Train epoch 134: Loss: 17683.9966 | r_Loss: 5917.6610 | g_Loss: 2219.3588 | l_Loss: 669.5414 |
21-12-22 21:19:16.484 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:19:16.484 - INFO: Train epoch 135: Loss: 15724.2366 | r_Loss: 5162.4216 | g_Loss: 1977.8150 | l_Loss: 672.7402 |
21-12-22 21:20:28.369 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:20:28.370 - INFO: Train epoch 136: Loss: 14803.4069 | r_Loss: 4561.9909 | g_Loss: 1939.7383 | l_Loss: 542.7245 |
21-12-22 21:21:40.201 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:21:40.202 - INFO: Train epoch 137: Loss: 14500.0333 | r_Loss: 4277.5843 | g_Loss: 1935.3035 | l_Loss: 545.9314 |
21-12-22 21:22:52.235 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:22:52.236 - INFO: Train epoch 138: Loss: 14151.8251 | r_Loss: 3919.8744 | g_Loss: 1932.1346 | l_Loss: 571.2779 |
21-12-22 21:24:04.104 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:24:04.105 - INFO: Train epoch 139: Loss: 14036.5457 | r_Loss: 3714.1949 | g_Loss: 1949.6293 | l_Loss: 574.2039 |
21-12-22 21:25:16.058 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:25:16.059 - INFO: Train epoch 140: Loss: 13167.4898 | r_Loss: 3459.4391 | g_Loss: 1862.5013 | l_Loss: 395.5444 |
21-12-22 21:26:28.009 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:26:28.010 - INFO: Train epoch 141: Loss: 13768.2388 | r_Loss: 3555.1968 | g_Loss: 1946.7979 | l_Loss: 479.0525 |
21-12-22 21:27:39.922 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:27:39.923 - INFO: Train epoch 142: Loss: 12731.8326 | r_Loss: 3287.8198 | g_Loss: 1812.6276 | l_Loss: 380.8750 |
21-12-22 21:28:51.863 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:28:51.863 - INFO: Train epoch 143: Loss: 13741.3181 | r_Loss: 3219.7089 | g_Loss: 2029.7813 | l_Loss: 372.7031 |
21-12-22 21:30:03.746 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:30:03.746 - INFO: Train epoch 144: Loss: 11677.5802 | r_Loss: 3072.1389 | g_Loss: 1642.8255 | l_Loss: 391.3137 |
21-12-22 21:31:15.728 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:31:15.728 - INFO: Train epoch 145: Loss: 13175.8396 | r_Loss: 3107.9181 | g_Loss: 1940.9301 | l_Loss: 363.2711 |
21-12-22 21:32:27.614 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:32:27.614 - INFO: Train epoch 146: Loss: 11668.8457 | r_Loss: 2827.7383 | g_Loss: 1703.5023 | l_Loss: 323.5960 |
21-12-22 21:33:39.534 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:33:39.535 - INFO: Train epoch 147: Loss: 11387.6898 | r_Loss: 2770.7158 | g_Loss: 1654.1003 | l_Loss: 346.4726 |
21-12-22 21:34:51.451 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:34:51.452 - INFO: Train epoch 148: Loss: 11241.4477 | r_Loss: 2774.2883 | g_Loss: 1628.7248 | l_Loss: 323.5353 |
21-12-22 21:36:03.506 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:36:03.507 - INFO: Train epoch 149: Loss: 11857.5487 | r_Loss: 2705.2344 | g_Loss: 1757.1296 | l_Loss: 366.6663 |
21-12-22 21:37:49.077 - INFO: TEST: PSNR_S: 30.0401 | PSNR_C: 27.2125 |
21-12-22 21:37:49.078 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:37:49.078 - INFO: Train epoch 150: Loss: 10109.3984 | r_Loss: 2457.5742 | g_Loss: 1473.9373 | l_Loss: 282.1374 |
21-12-22 21:39:00.761 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:39:00.762 - INFO: Train epoch 151: Loss: 12083.3736 | r_Loss: 2636.5949 | g_Loss: 1818.3451 | l_Loss: 355.0531 |
21-12-22 21:40:12.265 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:40:12.265 - INFO: Train epoch 152: Loss: 11401.8536 | r_Loss: 2502.1409 | g_Loss: 1716.5796 | l_Loss: 316.8147 |
21-12-22 21:41:23.856 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:41:23.857 - INFO: Train epoch 153: Loss: 10431.1598 | r_Loss: 2477.2819 | g_Loss: 1536.2734 | l_Loss: 272.5108 |
21-12-22 21:42:35.335 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:42:35.335 - INFO: Train epoch 154: Loss: 11366.5958 | r_Loss: 2479.2515 | g_Loss: 1714.5349 | l_Loss: 314.6695 |
21-12-22 21:43:46.846 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:43:46.846 - INFO: Train epoch 155: Loss: 10427.5536 | r_Loss: 2335.3958 | g_Loss: 1553.6620 | l_Loss: 323.8479 |
21-12-22 21:44:58.508 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:44:58.509 - INFO: Train epoch 156: Loss: 10158.8200 | r_Loss: 2384.1641 | g_Loss: 1495.0311 | l_Loss: 299.5003 |
21-12-22 21:46:09.935 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:46:09.937 - INFO: Train epoch 157: Loss: 9750.4756 | r_Loss: 2289.5608 | g_Loss: 1432.5681 | l_Loss: 298.0740 |
21-12-22 21:47:21.568 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:47:21.569 - INFO: Train epoch 158: Loss: 10112.7539 | r_Loss: 2357.4700 | g_Loss: 1495.2955 | l_Loss: 278.8064 |
21-12-22 21:48:33.113 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:48:33.113 - INFO: Train epoch 159: Loss: 10353.9041 | r_Loss: 2297.2995 | g_Loss: 1549.0748 | l_Loss: 311.2303 |
21-12-22 21:49:44.709 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:49:44.710 - INFO: Train epoch 160: Loss: 9400.0015 | r_Loss: 2254.9525 | g_Loss: 1372.0443 | l_Loss: 284.8276 |
21-12-22 21:50:56.299 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:50:56.300 - INFO: Train epoch 161: Loss: 9231.3387 | r_Loss: 2132.1035 | g_Loss: 1370.3484 | l_Loss: 247.4931 |
21-12-22 21:52:07.922 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:52:07.923 - INFO: Train epoch 162: Loss: 10126.0316 | r_Loss: 2308.3611 | g_Loss: 1505.4462 | l_Loss: 290.4396 |
21-12-22 21:53:19.567 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:53:19.568 - INFO: Train epoch 163: Loss: 8476.0053 | r_Loss: 2042.9946 | g_Loss: 1233.3927 | l_Loss: 266.0473 |
21-12-22 21:54:31.222 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:54:31.222 - INFO: Train epoch 164: Loss: 9145.2572 | r_Loss: 2066.6966 | g_Loss: 1365.8818 | l_Loss: 249.1516 |
21-12-22 21:55:42.829 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:55:42.830 - INFO: Train epoch 165: Loss: 8793.2883 | r_Loss: 2056.3526 | g_Loss: 1298.6873 | l_Loss: 243.4990 |
21-12-22 21:56:54.440 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:56:54.441 - INFO: Train epoch 166: Loss: 20993.6458 | r_Loss: 3631.8276 | g_Loss: 3395.4360 | l_Loss: 384.6382 |
21-12-22 21:58:06.375 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:58:06.376 - INFO: Train epoch 167: Loss: 17164.5957 | r_Loss: 5154.3174 | g_Loss: 2261.2553 | l_Loss: 704.0014 |
21-12-22 21:59:18.334 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 21:59:18.335 - INFO: Train epoch 168: Loss: 10405.9059 | r_Loss: 3375.0306 | g_Loss: 1314.6944 | l_Loss: 457.4032 |
21-12-22 22:00:30.002 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:00:30.003 - INFO: Train epoch 169: Loss: 9725.5509 | r_Loss: 2876.9022 | g_Loss: 1298.8499 | l_Loss: 354.3994 |
21-12-22 22:01:41.555 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:01:41.557 - INFO: Train epoch 170: Loss: 8695.2549 | r_Loss: 2529.4026 | g_Loss: 1164.5558 | l_Loss: 343.0736 |
21-12-22 22:02:53.372 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:02:53.373 - INFO: Train epoch 171: Loss: 8195.9290 | r_Loss: 2395.4719 | g_Loss: 1096.0586 | l_Loss: 320.1642 |
21-12-22 22:04:05.155 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:04:05.156 - INFO: Train epoch 172: Loss: 8931.5243 | r_Loss: 2379.0117 | g_Loss: 1257.2123 | l_Loss: 266.4514 |
21-12-22 22:05:16.875 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:05:16.876 - INFO: Train epoch 173: Loss: 7951.7283 | r_Loss: 2181.7680 | g_Loss: 1100.8653 | l_Loss: 265.6338 |
21-12-22 22:06:28.642 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:06:28.643 - INFO: Train epoch 174: Loss: 8434.3090 | r_Loss: 2087.9266 | g_Loss: 1213.4428 | l_Loss: 279.1684 |
21-12-22 22:07:40.668 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:07:40.669 - INFO: Train epoch 175: Loss: 7469.6696 | r_Loss: 1984.7773 | g_Loss: 1043.0773 | l_Loss: 269.5061 |
21-12-22 22:08:52.461 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:08:52.462 - INFO: Train epoch 176: Loss: 7376.2131 | r_Loss: 1917.1701 | g_Loss: 1044.3765 | l_Loss: 237.1603 |
21-12-22 22:10:04.421 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:10:04.422 - INFO: Train epoch 177: Loss: 7369.1734 | r_Loss: 1872.5814 | g_Loss: 1048.3791 | l_Loss: 254.6967 |
21-12-22 22:11:16.376 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:11:16.377 - INFO: Train epoch 178: Loss: 260857.1239 | r_Loss: 34758.7546 | g_Loss: 44232.2272 | l_Loss: 4937.2243 |
21-12-22 22:12:28.354 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:12:28.354 - INFO: Train epoch 179: Loss: 28152.0204 | r_Loss: 10423.5548 | g_Loss: 3261.2980 | l_Loss: 1421.9757 |
21-12-22 22:14:14.185 - INFO: TEST: PSNR_S: 27.9124 | PSNR_C: 22.7089 |
21-12-22 22:14:14.186 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:14:14.186 - INFO: Train epoch 180: Loss: 20223.5536 | r_Loss: 7535.9725 | g_Loss: 2365.8103 | l_Loss: 858.5298 |
21-12-22 22:15:26.091 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:15:26.092 - INFO: Train epoch 181: Loss: 17685.6930 | r_Loss: 6537.2388 | g_Loss: 2051.1212 | l_Loss: 892.8479 |
21-12-22 22:16:38.060 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:16:38.061 - INFO: Train epoch 182: Loss: 15731.1795 | r_Loss: 5684.9290 | g_Loss: 1862.0956 | l_Loss: 735.7723 |
21-12-22 22:17:49.928 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:17:49.928 - INFO: Train epoch 183: Loss: 13936.3919 | r_Loss: 5198.1452 | g_Loss: 1601.4376 | l_Loss: 731.0585 |
21-12-22 22:19:01.785 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:19:01.785 - INFO: Train epoch 184: Loss: 13343.9719 | r_Loss: 4594.5654 | g_Loss: 1630.8295 | l_Loss: 595.2589 |
21-12-22 22:20:13.813 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:20:13.813 - INFO: Train epoch 185: Loss: 12075.0075 | r_Loss: 4014.3104 | g_Loss: 1515.7239 | l_Loss: 482.0779 |
21-12-22 22:21:25.727 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:21:25.727 - INFO: Train epoch 186: Loss: 11096.0552 | r_Loss: 3999.1119 | g_Loss: 1314.8982 | l_Loss: 522.4521 |
21-12-22 22:22:37.608 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:22:37.608 - INFO: Train epoch 187: Loss: 10748.5395 | r_Loss: 3596.2278 | g_Loss: 1326.7601 | l_Loss: 518.5114 |
21-12-22 22:23:49.590 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:23:49.590 - INFO: Train epoch 188: Loss: 10528.2911 | r_Loss: 3383.1433 | g_Loss: 1340.4959 | l_Loss: 442.6680 |
21-12-22 22:25:01.535 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:25:01.536 - INFO: Train epoch 189: Loss: 9977.2218 | r_Loss: 3284.1858 | g_Loss: 1259.7103 | l_Loss: 394.4847 |
21-12-22 22:26:13.466 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:26:13.467 - INFO: Train epoch 190: Loss: 9999.0100 | r_Loss: 3032.5809 | g_Loss: 1315.2906 | l_Loss: 389.9760 |
21-12-22 22:27:25.302 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:27:25.302 - INFO: Train epoch 191: Loss: 9507.4832 | r_Loss: 2937.5907 | g_Loss: 1239.7936 | l_Loss: 370.9246 |
21-12-22 22:28:37.175 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:28:37.176 - INFO: Train epoch 192: Loss: 9290.9749 | r_Loss: 2858.5347 | g_Loss: 1218.4205 | l_Loss: 340.3376 |
21-12-22 22:29:48.997 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:29:48.997 - INFO: Train epoch 193: Loss: 8557.4781 | r_Loss: 2662.7123 | g_Loss: 1104.2844 | l_Loss: 373.3438 |
21-12-22 22:31:00.692 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:31:00.693 - INFO: Train epoch 194: Loss: 8260.1197 | r_Loss: 2543.7279 | g_Loss: 1073.7744 | l_Loss: 347.5198 |
21-12-22 22:32:12.481 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:32:12.481 - INFO: Train epoch 195: Loss: 7968.7023 | r_Loss: 2407.7852 | g_Loss: 1057.6126 | l_Loss: 272.8542 |
21-12-22 22:33:24.296 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:33:24.296 - INFO: Train epoch 196: Loss: 9668.4835 | r_Loss: 2709.9647 | g_Loss: 1323.3420 | l_Loss: 341.8088 |
21-12-22 22:34:36.208 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:34:36.210 - INFO: Train epoch 197: Loss: 7584.2450 | r_Loss: 2428.9753 | g_Loss: 983.2439 | l_Loss: 239.0502 |
21-12-22 22:35:48.013 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:35:48.014 - INFO: Train epoch 198: Loss: 8122.3154 | r_Loss: 2444.6574 | g_Loss: 1076.4419 | l_Loss: 295.4482 |
21-12-22 22:36:59.880 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:36:59.881 - INFO: Train epoch 199: Loss: 7871.4775 | r_Loss: 2295.2394 | g_Loss: 1053.8641 | l_Loss: 306.9176 |
21-12-22 22:38:11.723 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:38:11.723 - INFO: Train epoch 200: Loss: 7506.8081 | r_Loss: 2232.1525 | g_Loss: 999.0520 | l_Loss: 279.3957 |
21-12-22 22:39:23.652 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:39:23.653 - INFO: Train epoch 201: Loss: 10355.6376 | r_Loss: 2686.1228 | g_Loss: 1468.8906 | l_Loss: 325.0616 |
21-12-22 22:40:35.554 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:40:35.555 - INFO: Train epoch 202: Loss: 7213.8257 | r_Loss: 2422.0954 | g_Loss: 896.5640 | l_Loss: 308.9101 |
21-12-22 22:41:47.496 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:41:47.497 - INFO: Train epoch 203: Loss: 7006.0121 | r_Loss: 2270.1539 | g_Loss: 887.6660 | l_Loss: 297.5281 |
21-12-22 22:42:59.405 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:42:59.405 - INFO: Train epoch 204: Loss: 6613.7662 | r_Loss: 2058.7276 | g_Loss: 854.1419 | l_Loss: 284.3291 |
21-12-22 22:44:11.330 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:44:11.331 - INFO: Train epoch 205: Loss: 6477.2133 | r_Loss: 2015.2777 | g_Loss: 839.0306 | l_Loss: 266.7824 |
21-12-22 22:45:23.286 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:45:23.287 - INFO: Train epoch 206: Loss: 7017.6158 | r_Loss: 2071.7649 | g_Loss: 944.4041 | l_Loss: 223.8303 |
21-12-22 22:46:35.237 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:46:35.238 - INFO: Train epoch 207: Loss: 6294.4180 | r_Loss: 1901.1697 | g_Loss: 829.3361 | l_Loss: 246.5676 |
21-12-22 22:47:47.175 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:47:47.175 - INFO: Train epoch 208: Loss: 318688.2810 | r_Loss: 76999.6836 | g_Loss: 46665.8245 | l_Loss: 8359.4774 |
21-12-22 22:48:59.120 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:48:59.120 - INFO: Train epoch 209: Loss: 37220.0410 | r_Loss: 16509.8942 | g_Loss: 3733.9525 | l_Loss: 2040.3844 |
21-12-22 22:50:44.815 - INFO: TEST: PSNR_S: 28.0055 | PSNR_C: 21.7095 |
21-12-22 22:50:44.816 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:50:44.816 - INFO: Train epoch 210: Loss: 25296.2641 | r_Loss: 10724.9305 | g_Loss: 2607.4682 | l_Loss: 1533.9922 |
21-12-22 22:51:56.736 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:51:56.736 - INFO: Train epoch 211: Loss: 19998.1698 | r_Loss: 8048.4201 | g_Loss: 2201.5121 | l_Loss: 942.1892 |
21-12-22 22:53:08.598 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:53:08.599 - INFO: Train epoch 212: Loss: 16356155.9448 | r_Loss: 290475.9205 | g_Loss: 3210019.5555 | l_Loss: 15582.1974 |
21-12-22 22:54:20.591 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:54:20.592 - INFO: Train epoch 213: Loss: 485173.1156 | r_Loss: 149723.5105 | g_Loss: 62513.3905 | l_Loss: 22882.6554 |
21-12-22 22:55:32.480 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:55:32.480 - INFO: Train epoch 214: Loss: 296655.6138 | r_Loss: 101776.2773 | g_Loss: 36463.4864 | l_Loss: 12561.9058 |
21-12-22 22:56:44.344 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:56:44.345 - INFO: Train epoch 215: Loss: 196693.5850 | r_Loss: 68276.4258 | g_Loss: 23930.8153 | l_Loss: 8763.0820 |
21-12-22 22:57:56.252 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:57:56.252 - INFO: Train epoch 216: Loss: 142519.2347 | r_Loss: 50261.9923 | g_Loss: 17197.7760 | l_Loss: 6268.3621 |
21-12-22 22:59:08.153 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 22:59:08.153 - INFO: Train epoch 217: Loss: 116399.3428 | r_Loss: 41317.1180 | g_Loss: 14001.1953 | l_Loss: 5076.2483 |
21-12-22 23:00:20.084 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:00:20.084 - INFO: Train epoch 218: Loss: 97622.2209 | r_Loss: 34708.5604 | g_Loss: 11564.2887 | l_Loss: 5092.2172 |
21-12-22 23:01:32.169 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:01:32.169 - INFO: Train epoch 219: Loss: 80574.1048 | r_Loss: 27217.6677 | g_Loss: 9937.9180 | l_Loss: 3666.8469 |
21-12-22 23:02:44.100 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:02:44.100 - INFO: Train epoch 220: Loss: 73058.6069 | r_Loss: 24076.3061 | g_Loss: 9228.2075 | l_Loss: 2841.2632 |
21-12-22 23:03:56.039 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:03:56.039 - INFO: Train epoch 221: Loss: 63305.4026 | r_Loss: 20342.4619 | g_Loss: 8090.4730 | l_Loss: 2510.5760 |
21-12-22 23:05:08.076 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:05:08.077 - INFO: Train epoch 222: Loss: 59256.0605 | r_Loss: 18502.3356 | g_Loss: 7643.9121 | l_Loss: 2534.1649 |
21-12-22 23:06:19.893 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:06:19.893 - INFO: Train epoch 223: Loss: 52702.6927 | r_Loss: 16010.8567 | g_Loss: 6914.6416 | l_Loss: 2118.6283 |
21-12-22 23:07:31.899 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:07:31.900 - INFO: Train epoch 224: Loss: 49100.3635 | r_Loss: 15198.0849 | g_Loss: 6381.8987 | l_Loss: 1992.7853 |
21-12-22 23:08:43.881 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:08:43.882 - INFO: Train epoch 225: Loss: 47575.7043 | r_Loss: 14590.5826 | g_Loss: 6219.0061 | l_Loss: 1890.0916 |
21-12-22 23:09:55.835 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:09:55.835 - INFO: Train epoch 226: Loss: 43915.9430 | r_Loss: 13452.6959 | g_Loss: 5765.0614 | l_Loss: 1637.9398 |
21-12-22 23:11:07.791 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:11:07.791 - INFO: Train epoch 227: Loss: 43477.1537 | r_Loss: 13517.7257 | g_Loss: 5665.5662 | l_Loss: 1631.5975 |
21-12-22 23:12:19.703 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:12:19.703 - INFO: Train epoch 228: Loss: 40644.4197 | r_Loss: 12611.7203 | g_Loss: 5305.0600 | l_Loss: 1507.3994 |
21-12-22 23:13:31.823 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:13:31.823 - INFO: Train epoch 229: Loss: 43367.6341 | r_Loss: 12640.5584 | g_Loss: 5835.8925 | l_Loss: 1547.6136 |
21-12-22 23:14:43.750 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:14:43.750 - INFO: Train epoch 230: Loss: 38841.8880 | r_Loss: 11995.4763 | g_Loss: 5116.9549 | l_Loss: 1261.6373 |
21-12-22 23:15:55.699 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:15:55.700 - INFO: Train epoch 231: Loss: 37574.2473 | r_Loss: 11596.5421 | g_Loss: 4911.7888 | l_Loss: 1418.7609 |
21-12-22 23:17:07.566 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:17:07.567 - INFO: Train epoch 232: Loss: 41148.0815 | r_Loss: 11821.8676 | g_Loss: 5549.3229 | l_Loss: 1579.6001 |
21-12-22 23:18:19.556 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:18:19.556 - INFO: Train epoch 233: Loss: 35724.1842 | r_Loss: 10977.8349 | g_Loss: 4657.0187 | l_Loss: 1461.2563 |
21-12-22 23:19:31.444 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:19:31.444 - INFO: Train epoch 234: Loss: 36877.4441 | r_Loss: 11084.5892 | g_Loss: 4885.5531 | l_Loss: 1365.0897 |
21-12-22 23:20:43.323 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:20:43.323 - INFO: Train epoch 235: Loss: 37378.3702 | r_Loss: 10955.2792 | g_Loss: 5021.8655 | l_Loss: 1313.7633 |
21-12-22 23:21:55.189 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:21:55.189 - INFO: Train epoch 236: Loss: 35682.0084 | r_Loss: 10844.4903 | g_Loss: 4694.2800 | l_Loss: 1366.1179 |
21-12-22 23:23:07.151 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:23:07.152 - INFO: Train epoch 237: Loss: 32782.9121 | r_Loss: 10385.1061 | g_Loss: 4210.8551 | l_Loss: 1343.5303 |
21-12-22 23:24:18.999 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:24:19.000 - INFO: Train epoch 238: Loss: 33814.2976 | r_Loss: 10203.8223 | g_Loss: 4472.5565 | l_Loss: 1247.6927 |
21-12-22 23:25:30.861 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:25:30.862 - INFO: Train epoch 239: Loss: 33784.9310 | r_Loss: 10374.7283 | g_Loss: 4433.7007 | l_Loss: 1241.6988 |
21-12-22 23:27:16.567 - INFO: TEST: PSNR_S: 25.2313 | PSNR_C: 21.3945 |
21-12-22 23:27:16.568 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:27:16.569 - INFO: Train epoch 240: Loss: 33330.5516 | r_Loss: 10019.8823 | g_Loss: 4411.0040 | l_Loss: 1255.6492 |
21-12-22 23:28:28.365 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:28:28.366 - INFO: Train epoch 241: Loss: 33261.5474 | r_Loss: 9526.5993 | g_Loss: 4512.6571 | l_Loss: 1171.6628 |
21-12-22 23:29:40.248 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:29:40.249 - INFO: Train epoch 242: Loss: 35353.6446 | r_Loss: 9923.5511 | g_Loss: 4853.4422 | l_Loss: 1162.8822 |
21-12-22 23:30:52.071 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:30:52.071 - INFO: Train epoch 243: Loss: 32346.1270 | r_Loss: 9930.9724 | g_Loss: 4237.4089 | l_Loss: 1228.1105 |
21-12-22 23:32:03.937 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:32:03.938 - INFO: Train epoch 244: Loss: 30202.1777 | r_Loss: 9430.2957 | g_Loss: 3921.9381 | l_Loss: 1162.1913 |
21-12-22 23:33:15.730 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:33:15.731 - INFO: Train epoch 245: Loss: 30991.7988 | r_Loss: 9320.6023 | g_Loss: 4098.6576 | l_Loss: 1177.9084 |
21-12-22 23:34:27.529 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:34:27.529 - INFO: Train epoch 246: Loss: 33025.7210 | r_Loss: 9626.0362 | g_Loss: 4421.6954 | l_Loss: 1291.2078 |
21-12-22 23:35:39.423 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:35:39.423 - INFO: Train epoch 247: Loss: 38868.8523 | r_Loss: 9308.6511 | g_Loss: 5695.7887 | l_Loss: 1081.2577 |
21-12-22 23:36:51.309 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:36:51.309 - INFO: Train epoch 248: Loss: 30223.4276 | r_Loss: 9357.5339 | g_Loss: 3930.2882 | l_Loss: 1214.4525 |
21-12-22 23:38:03.129 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:38:03.129 - INFO: Train epoch 249: Loss: 29526.3746 | r_Loss: 8946.4145 | g_Loss: 3880.8810 | l_Loss: 1175.5552 |
21-12-22 23:39:15.010 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:39:15.010 - INFO: Train epoch 250: Loss: 29522.3144 | r_Loss: 8932.8795 | g_Loss: 3911.0775 | l_Loss: 1034.0469 |
21-12-22 23:40:26.961 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:40:26.962 - INFO: Train epoch 251: Loss: 33959.3302 | r_Loss: 9187.2619 | g_Loss: 4695.3398 | l_Loss: 1295.3690 |
21-12-22 23:41:39.011 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:41:39.012 - INFO: Train epoch 252: Loss: 29914.6573 | r_Loss: 8812.9355 | g_Loss: 4012.4347 | l_Loss: 1039.5487 |
21-12-22 23:42:51.013 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:42:51.013 - INFO: Train epoch 253: Loss: 30179.4750 | r_Loss: 8913.8867 | g_Loss: 4029.2364 | l_Loss: 1119.4062 |
21-12-22 23:44:02.891 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:44:02.891 - INFO: Train epoch 254: Loss: 30102.4033 | r_Loss: 8800.9074 | g_Loss: 4053.4485 | l_Loss: 1034.2531 |
21-12-22 23:45:14.785 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:45:14.785 - INFO: Train epoch 255: Loss: 29027.6973 | r_Loss: 8653.7850 | g_Loss: 3876.1986 | l_Loss: 992.9193 |
21-12-22 23:46:26.741 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:46:26.741 - INFO: Train epoch 256: Loss: 29141.0061 | r_Loss: 8413.1448 | g_Loss: 3922.0572 | l_Loss: 1117.5754 |
21-12-22 23:47:38.716 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:47:38.717 - INFO: Train epoch 257: Loss: 28846.4665 | r_Loss: 8460.1701 | g_Loss: 3853.5795 | l_Loss: 1118.3988 |
21-12-22 23:48:50.566 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:48:50.567 - INFO: Train epoch 258: Loss: 29710.5227 | r_Loss: 8469.1082 | g_Loss: 4021.4516 | l_Loss: 1134.1565 |
21-12-22 23:50:02.585 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:50:02.585 - INFO: Train epoch 259: Loss: 27896.0012 | r_Loss: 8364.8101 | g_Loss: 3698.9704 | l_Loss: 1036.3389 |
21-12-22 23:51:14.593 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:51:14.594 - INFO: Train epoch 260: Loss: 26786.8151 | r_Loss: 8202.2817 | g_Loss: 3499.7821 | l_Loss: 1085.6232 |
21-12-22 23:52:26.711 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:52:26.711 - INFO: Train epoch 261: Loss: 29912.9254 | r_Loss: 8607.3411 | g_Loss: 4053.5729 | l_Loss: 1037.7202 |
21-12-22 23:53:38.607 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:53:38.608 - INFO: Train epoch 262: Loss: 31990.6874 | r_Loss: 8410.3527 | g_Loss: 4515.6659 | l_Loss: 1002.0048 |
21-12-22 23:54:50.560 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:54:50.560 - INFO: Train epoch 263: Loss: 28016.4844 | r_Loss: 8277.7443 | g_Loss: 3723.0460 | l_Loss: 1123.5099 |
21-12-22 23:56:02.485 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:56:02.486 - INFO: Train epoch 264: Loss: 26881.6298 | r_Loss: 8097.0604 | g_Loss: 3544.4853 | l_Loss: 1062.1428 |
21-12-22 23:57:14.336 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:57:14.336 - INFO: Train epoch 265: Loss: 27519.5763 | r_Loss: 8247.0236 | g_Loss: 3652.9412 | l_Loss: 1007.8465 |
21-12-22 23:58:26.155 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:58:26.156 - INFO: Train epoch 266: Loss: 29045.1296 | r_Loss: 8099.1104 | g_Loss: 3994.3679 | l_Loss: 974.1800 |
21-12-22 23:59:37.874 - INFO: Learning rate: 3.1622776601683795e-05
21-12-22 23:59:37.875 - INFO: Train epoch 267: Loss: 32425.0663 | r_Loss: 8330.3982 | g_Loss: 4604.1349 | l_Loss: 1073.9938 |
21-12-23 00:00:49.785 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:00:49.786 - INFO: Train epoch 268: Loss: 31832.7883 | r_Loss: 8298.6226 | g_Loss: 4504.5371 | l_Loss: 1011.4798 |
21-12-23 00:02:01.635 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:02:01.635 - INFO: Train epoch 269: Loss: 25283.5530 | r_Loss: 7768.8665 | g_Loss: 3294.2007 | l_Loss: 1043.6832 |
21-12-23 00:03:47.171 - INFO: TEST: PSNR_S: 26.4845 | PSNR_C: 22.3635 |
21-12-23 00:03:47.172 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:03:47.173 - INFO: Train epoch 270: Loss: 24940.4664 | r_Loss: 7810.1703 | g_Loss: 3234.0099 | l_Loss: 960.2467 |
21-12-23 00:04:59.068 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:04:59.069 - INFO: Train epoch 271: Loss: 26433.9353 | r_Loss: 8004.5279 | g_Loss: 3503.0925 | l_Loss: 913.9447 |
21-12-23 00:06:11.023 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:06:11.023 - INFO: Train epoch 272: Loss: 37823.2063 | r_Loss: 8313.3134 | g_Loss: 5676.1302 | l_Loss: 1129.2414 |
21-12-23 00:07:22.865 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:07:22.866 - INFO: Train epoch 273: Loss: 25927.9129 | r_Loss: 8210.4238 | g_Loss: 3316.3547 | l_Loss: 1135.7156 |
21-12-23 00:08:34.757 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:08:34.758 - INFO: Train epoch 274: Loss: 25154.1087 | r_Loss: 8023.3709 | g_Loss: 3233.5925 | l_Loss: 962.7753 |
21-12-23 00:09:46.560 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:09:46.561 - INFO: Train epoch 275: Loss: 24383.8802 | r_Loss: 7756.3962 | g_Loss: 3146.6013 | l_Loss: 894.4779 |
21-12-23 00:10:58.531 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:10:58.532 - INFO: Train epoch 276: Loss: 25655.9582 | r_Loss: 7674.9408 | g_Loss: 3410.9644 | l_Loss: 926.1956 |
21-12-23 00:12:10.384 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:12:10.385 - INFO: Train epoch 277: Loss: 23848.1662 | r_Loss: 7479.8582 | g_Loss: 3100.4134 | l_Loss: 866.2407 |
21-12-23 00:13:22.459 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:13:22.460 - INFO: Train epoch 278: Loss: 25165.8549 | r_Loss: 7480.3507 | g_Loss: 3326.8677 | l_Loss: 1051.1658 |
21-12-23 00:14:34.517 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:14:34.518 - INFO: Train epoch 279: Loss: 27052.5182 | r_Loss: 7269.1677 | g_Loss: 3768.2325 | l_Loss: 942.1885 |
21-12-23 00:15:46.498 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:15:46.499 - INFO: Train epoch 280: Loss: 24790.4353 | r_Loss: 7516.4600 | g_Loss: 3277.5702 | l_Loss: 886.1241 |
21-12-23 00:16:58.546 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:16:58.546 - INFO: Train epoch 281: Loss: 27569.1877 | r_Loss: 7623.5946 | g_Loss: 3808.0570 | l_Loss: 905.3081 |
21-12-23 00:18:10.572 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:18:10.573 - INFO: Train epoch 282: Loss: 110607.1827 | r_Loss: 28939.8343 | g_Loss: 15682.1208 | l_Loss: 3256.7449 |
21-12-23 00:19:22.489 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:19:22.490 - INFO: Train epoch 283: Loss: 30429.8356 | r_Loss: 9965.2452 | g_Loss: 3845.6239 | l_Loss: 1236.4707 |
21-12-23 00:20:34.501 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:20:34.502 - INFO: Train epoch 284: Loss: 27473.7874 | r_Loss: 8742.7148 | g_Loss: 3530.5307 | l_Loss: 1078.4193 |
21-12-23 00:21:46.455 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:21:46.455 - INFO: Train epoch 285: Loss: 27334.5130 | r_Loss: 8474.3199 | g_Loss: 3537.5581 | l_Loss: 1172.4023 |
21-12-23 00:22:58.271 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:22:58.271 - INFO: Train epoch 286: Loss: 25159.7466 | r_Loss: 8281.4039 | g_Loss: 3192.7659 | l_Loss: 914.5131 |
21-12-23 00:24:10.312 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:24:10.313 - INFO: Train epoch 287: Loss: 24910.0437 | r_Loss: 7881.1415 | g_Loss: 3219.7054 | l_Loss: 930.3752 |
21-12-23 00:25:22.176 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:25:22.176 - INFO: Train epoch 288: Loss: 23893.2264 | r_Loss: 7697.2618 | g_Loss: 3036.0643 | l_Loss: 1015.6432 |
21-12-23 00:26:33.996 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:26:33.997 - INFO: Train epoch 289: Loss: 24718.5386 | r_Loss: 7692.4801 | g_Loss: 3199.8930 | l_Loss: 1026.5936 |
21-12-23 00:27:45.939 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:27:45.939 - INFO: Train epoch 290: Loss: 23559.4226 | r_Loss: 7493.5462 | g_Loss: 3018.7371 | l_Loss: 972.1908 |
21-12-23 00:28:57.710 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:28:57.711 - INFO: Train epoch 291: Loss: 23503.1714 | r_Loss: 7263.1288 | g_Loss: 3053.8667 | l_Loss: 970.7092 |
21-12-23 00:30:09.686 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:30:09.687 - INFO: Train epoch 292: Loss: 22288.3211 | r_Loss: 7050.1085 | g_Loss: 2870.4958 | l_Loss: 885.7330 |
21-12-23 00:31:21.396 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:31:21.396 - INFO: Train epoch 293: Loss: 22557.3284 | r_Loss: 7087.1750 | g_Loss: 2928.4786 | l_Loss: 827.7602 |
21-12-23 00:32:33.220 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:32:33.221 - INFO: Train epoch 294: Loss: 22994.4687 | r_Loss: 7083.1813 | g_Loss: 3009.4356 | l_Loss: 864.1091 |
21-12-23 00:33:45.075 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:33:45.076 - INFO: Train epoch 295: Loss: 22353.5265 | r_Loss: 6893.9204 | g_Loss: 2899.5301 | l_Loss: 961.9553 |
21-12-23 00:34:56.977 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:34:56.978 - INFO: Train epoch 296: Loss: 22142.7202 | r_Loss: 6735.2985 | g_Loss: 2923.1697 | l_Loss: 791.5731 |
21-12-23 00:36:08.855 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:36:08.856 - INFO: Train epoch 297: Loss: 23286.7130 | r_Loss: 6861.6527 | g_Loss: 3100.2424 | l_Loss: 923.8484 |
21-12-23 00:37:20.715 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:37:20.716 - INFO: Train epoch 298: Loss: 37223.1082 | r_Loss: 7240.7138 | g_Loss: 5810.0692 | l_Loss: 932.0484 |
21-12-23 00:38:32.518 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:38:32.519 - INFO: Train epoch 299: Loss: 22550.0774 | r_Loss: 7024.0438 | g_Loss: 2914.3119 | l_Loss: 954.4743 |
21-12-23 00:40:18.270 - INFO: TEST: PSNR_S: 27.0598 | PSNR_C: 22.9168 |
21-12-23 00:40:18.271 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:40:18.271 - INFO: Train epoch 300: Loss: 21950.5937 | r_Loss: 6937.4901 | g_Loss: 2822.5739 | l_Loss: 900.2341 |
21-12-23 00:41:30.096 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:41:30.097 - INFO: Train epoch 301: Loss: 22007.4772 | r_Loss: 6888.8248 | g_Loss: 2837.2155 | l_Loss: 932.5750 |
21-12-23 00:42:41.860 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:42:41.861 - INFO: Train epoch 302: Loss: 36509.1650 | r_Loss: 7374.6324 | g_Loss: 5651.3124 | l_Loss: 877.9711 |
21-12-23 00:43:53.700 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:43:53.700 - INFO: Train epoch 303: Loss: 21657.5835 | r_Loss: 7125.0249 | g_Loss: 2728.2633 | l_Loss: 891.2424 |
21-12-23 00:45:05.575 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:45:05.577 - INFO: Train epoch 304: Loss: 21593.6668 | r_Loss: 6983.2373 | g_Loss: 2747.1572 | l_Loss: 874.6432 |
21-12-23 00:46:17.676 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:46:17.676 - INFO: Train epoch 305: Loss: 21703.0510 | r_Loss: 6921.9866 | g_Loss: 2780.1954 | l_Loss: 880.0872 |
21-12-23 00:47:29.625 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:47:29.626 - INFO: Train epoch 306: Loss: 21953.0861 | r_Loss: 6876.9023 | g_Loss: 2844.7325 | l_Loss: 852.5212 |
21-12-23 00:48:41.548 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:48:41.549 - INFO: Train epoch 307: Loss: 20932.2853 | r_Loss: 6526.5196 | g_Loss: 2699.2591 | l_Loss: 909.4705 |
21-12-23 00:49:53.378 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:49:53.379 - INFO: Train epoch 308: Loss: 21116.0415 | r_Loss: 6538.6809 | g_Loss: 2735.0307 | l_Loss: 902.2069 |
21-12-23 00:51:05.251 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:51:05.251 - INFO: Train epoch 309: Loss: 22796.5592 | r_Loss: 6405.9175 | g_Loss: 3130.2981 | l_Loss: 739.1513 |
21-12-23 00:52:17.046 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:52:17.046 - INFO: Train epoch 310: Loss: 21013.2574 | r_Loss: 6688.0473 | g_Loss: 2690.3645 | l_Loss: 873.3874 |
21-12-23 00:53:28.932 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:53:28.932 - INFO: Train epoch 311: Loss: 20700.1745 | r_Loss: 6432.3559 | g_Loss: 2701.3009 | l_Loss: 761.3139 |
21-12-23 00:54:40.908 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:54:40.909 - INFO: Train epoch 312: Loss: 21684.7816 | r_Loss: 6270.7829 | g_Loss: 2924.6032 | l_Loss: 790.9827 |
21-12-23 00:55:52.769 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:55:52.769 - INFO: Train epoch 313: Loss: 20430.3515 | r_Loss: 6223.6449 | g_Loss: 2687.9915 | l_Loss: 766.7489 |
21-12-23 00:57:04.539 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:57:04.540 - INFO: Train epoch 314: Loss: 45189.1502 | r_Loss: 7303.6509 | g_Loss: 7417.8520 | l_Loss: 796.2392 |
21-12-23 00:58:16.456 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:58:16.456 - INFO: Train epoch 315: Loss: 20252.2469 | r_Loss: 6841.4512 | g_Loss: 2510.4354 | l_Loss: 858.6190 |
21-12-23 00:59:28.195 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 00:59:28.196 - INFO: Train epoch 316: Loss: 20034.3008 | r_Loss: 6740.4241 | g_Loss: 2490.0542 | l_Loss: 843.6061 |
21-12-23 01:00:40.098 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:00:40.099 - INFO: Train epoch 317: Loss: 20020.9072 | r_Loss: 6731.3849 | g_Loss: 2503.3332 | l_Loss: 772.8562 |
21-12-23 01:01:51.932 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:01:51.933 - INFO: Train epoch 318: Loss: 18795.2733 | r_Loss: 6454.8957 | g_Loss: 2304.1768 | l_Loss: 819.4934 |
21-12-23 01:03:03.825 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:03:03.826 - INFO: Train epoch 319: Loss: 18898.0679 | r_Loss: 6304.1970 | g_Loss: 2363.1115 | l_Loss: 778.3133 |
21-12-23 01:04:15.625 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:04:15.626 - INFO: Train epoch 320: Loss: 19736.1385 | r_Loss: 6336.4236 | g_Loss: 2515.0323 | l_Loss: 824.5532 |
21-12-23 01:05:27.452 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:05:27.453 - INFO: Train epoch 321: Loss: 17250.8990 | r_Loss: 5879.1876 | g_Loss: 2141.3324 | l_Loss: 665.0494 |
21-12-23 01:06:39.308 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:06:39.309 - INFO: Train epoch 322: Loss: 19973.4644 | r_Loss: 6271.9891 | g_Loss: 2593.7450 | l_Loss: 732.7504 |
21-12-23 01:07:51.063 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:07:51.064 - INFO: Train epoch 323: Loss: 19149.2659 | r_Loss: 6139.4011 | g_Loss: 2456.6171 | l_Loss: 726.7792 |
21-12-23 01:09:02.778 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:09:02.778 - INFO: Train epoch 324: Loss: 20056.5721 | r_Loss: 5965.4164 | g_Loss: 2671.6335 | l_Loss: 732.9882 |
21-12-23 01:10:14.580 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:10:14.581 - INFO: Train epoch 325: Loss: 22249.0127 | r_Loss: 6141.0407 | g_Loss: 3060.8522 | l_Loss: 803.7111 |
21-12-23 01:11:26.511 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:11:26.511 - INFO: Train epoch 326: Loss: 18392.0377 | r_Loss: 6171.1794 | g_Loss: 2292.0203 | l_Loss: 760.7570 |
21-12-23 01:12:38.265 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:12:38.265 - INFO: Train epoch 327: Loss: 17189.1331 | r_Loss: 5804.0737 | g_Loss: 2139.5762 | l_Loss: 687.1784 |
21-12-23 01:13:50.047 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:13:50.048 - INFO: Train epoch 328: Loss: 36749.3786 | r_Loss: 6688.9591 | g_Loss: 5845.1076 | l_Loss: 834.8820 |
21-12-23 01:15:01.957 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:15:01.958 - INFO: Train epoch 329: Loss: 17083.5208 | r_Loss: 6617.9282 | g_Loss: 1933.8147 | l_Loss: 796.5191 |
21-12-23 01:16:47.684 - INFO: TEST: PSNR_S: 28.9331 | PSNR_C: 23.3559 |
21-12-23 01:16:47.686 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:16:47.686 - INFO: Train epoch 330: Loss: 16275.0314 | r_Loss: 6149.4920 | g_Loss: 1876.6092 | l_Loss: 742.4933 |
21-12-23 01:17:59.586 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:17:59.586 - INFO: Train epoch 331: Loss: 16400.3199 | r_Loss: 5956.7668 | g_Loss: 1936.8623 | l_Loss: 759.2418 |
21-12-23 01:19:11.473 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:19:11.474 - INFO: Train epoch 332: Loss: 16779.4268 | r_Loss: 6061.0683 | g_Loss: 1978.1575 | l_Loss: 827.5707 |
21-12-23 01:20:23.207 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:20:23.207 - INFO: Train epoch 333: Loss: 16483.8184 | r_Loss: 5921.2417 | g_Loss: 1953.4058 | l_Loss: 795.5477 |
21-12-23 01:21:35.145 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:21:35.146 - INFO: Train epoch 334: Loss: 16152.4714 | r_Loss: 5769.6935 | g_Loss: 1927.7513 | l_Loss: 744.0212 |
21-12-23 01:22:46.897 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:22:46.897 - INFO: Train epoch 335: Loss: 16072.2029 | r_Loss: 5503.9425 | g_Loss: 1958.5561 | l_Loss: 775.4801 |
21-12-23 01:23:58.589 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:23:58.589 - INFO: Train epoch 336: Loss: 15531.9900 | r_Loss: 5456.0943 | g_Loss: 1873.2148 | l_Loss: 709.8217 |
21-12-23 01:25:10.393 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:25:10.393 - INFO: Train epoch 337: Loss: 16511.3959 | r_Loss: 5621.2523 | g_Loss: 2048.4508 | l_Loss: 647.8896 |
21-12-23 01:26:22.240 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:26:22.240 - INFO: Train epoch 338: Loss: 25254.5085 | r_Loss: 5897.0637 | g_Loss: 3732.4790 | l_Loss: 695.0499 |
21-12-23 01:27:34.026 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:27:34.027 - INFO: Train epoch 339: Loss: 17305.6172 | r_Loss: 5812.5284 | g_Loss: 2162.6737 | l_Loss: 679.7201 |
21-12-23 01:28:45.836 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:28:45.837 - INFO: Train epoch 340: Loss: 15370.1355 | r_Loss: 5544.4636 | g_Loss: 1814.7220 | l_Loss: 752.0618 |
21-12-23 01:29:57.703 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:29:57.704 - INFO: Train epoch 341: Loss: 17994.0434 | r_Loss: 5821.8792 | g_Loss: 2260.9857 | l_Loss: 867.2358 |
21-12-23 01:31:09.454 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:31:09.455 - INFO: Train epoch 342: Loss: 14907.3081 | r_Loss: 5387.4571 | g_Loss: 1770.1059 | l_Loss: 669.3215 |
21-12-23 01:32:21.239 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:32:21.239 - INFO: Train epoch 343: Loss: 15105.4968 | r_Loss: 5313.5861 | g_Loss: 1832.1390 | l_Loss: 631.2157 |
21-12-23 01:33:33.008 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:33:33.009 - INFO: Train epoch 344: Loss: 15280.1007 | r_Loss: 5200.9836 | g_Loss: 1894.8067 | l_Loss: 605.0837 |
21-12-23 01:34:44.713 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:34:44.714 - INFO: Train epoch 345: Loss: 24360.5750 | r_Loss: 5741.6536 | g_Loss: 3566.3423 | l_Loss: 787.2096 |
21-12-23 01:35:56.535 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:35:56.536 - INFO: Train epoch 346: Loss: 15922.2618 | r_Loss: 5862.2866 | g_Loss: 1866.1090 | l_Loss: 729.4299 |
21-12-23 01:37:08.317 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:37:08.318 - INFO: Train epoch 347: Loss: 14696.1419 | r_Loss: 5457.3837 | g_Loss: 1718.8337 | l_Loss: 644.5896 |
21-12-23 01:38:20.079 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:38:20.080 - INFO: Train epoch 348: Loss: 15175.1455 | r_Loss: 5169.3856 | g_Loss: 1879.8152 | l_Loss: 606.6840 |
21-12-23 01:39:31.910 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:39:31.911 - INFO: Train epoch 349: Loss: 15881.6981 | r_Loss: 5111.1638 | g_Loss: 2019.7915 | l_Loss: 671.5767 |
21-12-23 01:40:43.710 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:40:43.710 - INFO: Train epoch 350: Loss: 14614.2115 | r_Loss: 5238.4374 | g_Loss: 1738.8292 | l_Loss: 681.6282 |
21-12-23 01:41:55.641 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:41:55.641 - INFO: Train epoch 351: Loss: 16775.7723 | r_Loss: 5171.2568 | g_Loss: 2191.1424 | l_Loss: 648.8033 |
21-12-23 01:43:07.353 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:43:07.354 - INFO: Train epoch 352: Loss: 14125.5992 | r_Loss: 5150.1672 | g_Loss: 1661.8090 | l_Loss: 666.3870 |
21-12-23 01:44:19.151 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:44:19.152 - INFO: Train epoch 353: Loss: 21422.2970 | r_Loss: 5423.6455 | g_Loss: 3073.0584 | l_Loss: 633.3591 |
21-12-23 01:45:30.860 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:45:30.861 - INFO: Train epoch 354: Loss: 15591.6053 | r_Loss: 5523.3497 | g_Loss: 1882.9169 | l_Loss: 653.6710 |
21-12-23 01:46:42.582 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:46:42.582 - INFO: Train epoch 355: Loss: 14200.8528 | r_Loss: 5117.1998 | g_Loss: 1700.6044 | l_Loss: 580.6310 |
21-12-23 01:47:54.316 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:47:54.316 - INFO: Train epoch 356: Loss: 18788.5635 | r_Loss: 5421.4153 | g_Loss: 2528.8590 | l_Loss: 722.8532 |
21-12-23 01:49:05.944 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:49:05.945 - INFO: Train epoch 357: Loss: 13617.5169 | r_Loss: 5047.5662 | g_Loss: 1599.5479 | l_Loss: 572.2112 |
21-12-23 01:50:17.638 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:50:17.639 - INFO: Train epoch 358: Loss: 14446.0879 | r_Loss: 5079.7533 | g_Loss: 1749.5595 | l_Loss: 618.5374 |
21-12-23 01:51:29.484 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:51:29.485 - INFO: Train epoch 359: Loss: 14275.9589 | r_Loss: 4912.0642 | g_Loss: 1754.5438 | l_Loss: 591.1757 |
21-12-23 01:53:14.924 - INFO: TEST: PSNR_S: 29.8297 | PSNR_C: 24.3808 |
21-12-23 01:53:14.925 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:53:14.925 - INFO: Train epoch 360: Loss: 14839.4481 | r_Loss: 4829.6083 | g_Loss: 1903.8023 | l_Loss: 490.8283 |
21-12-23 01:54:26.624 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:54:26.624 - INFO: Train epoch 361: Loss: 21345.2220 | r_Loss: 5208.5082 | g_Loss: 3092.9173 | l_Loss: 672.1274 |
21-12-23 01:55:38.440 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:55:38.440 - INFO: Train epoch 362: Loss: 13974.5867 | r_Loss: 5154.1519 | g_Loss: 1623.0808 | l_Loss: 705.0309 |
21-12-23 01:56:50.179 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:56:50.179 - INFO: Train epoch 363: Loss: 13414.7101 | r_Loss: 4949.1201 | g_Loss: 1562.8145 | l_Loss: 651.5174 |
21-12-23 01:58:01.915 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:58:01.916 - INFO: Train epoch 364: Loss: 23651.8983 | r_Loss: 5365.5396 | g_Loss: 3528.0746 | l_Loss: 645.9854 |
21-12-23 01:59:13.680 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 01:59:13.681 - INFO: Train epoch 365: Loss: 13859.2505 | r_Loss: 5198.8591 | g_Loss: 1593.1981 | l_Loss: 694.4010 |
21-12-23 02:00:25.533 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:00:25.534 - INFO: Train epoch 366: Loss: 13309.0780 | r_Loss: 5030.2516 | g_Loss: 1525.4210 | l_Loss: 651.7215 |
21-12-23 02:01:37.417 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:01:37.417 - INFO: Train epoch 367: Loss: 13121.3016 | r_Loss: 4787.7462 | g_Loss: 1548.1085 | l_Loss: 593.0129 |
21-12-23 02:02:49.297 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:02:49.297 - INFO: Train epoch 368: Loss: 13176.3025 | r_Loss: 4767.8495 | g_Loss: 1558.4113 | l_Loss: 616.3964 |
21-12-23 02:04:01.144 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:04:01.145 - INFO: Train epoch 369: Loss: 13649.4340 | r_Loss: 4791.4196 | g_Loss: 1656.5483 | l_Loss: 575.2729 |
21-12-23 02:05:12.938 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:05:12.939 - INFO: Train epoch 370: Loss: 20549.5284 | r_Loss: 5097.4257 | g_Loss: 2972.2743 | l_Loss: 590.7311 |
21-12-23 02:06:24.726 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:06:24.727 - INFO: Train epoch 371: Loss: 13562.2129 | r_Loss: 4958.6341 | g_Loss: 1592.0886 | l_Loss: 643.1357 |
21-12-23 02:07:36.661 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:07:36.661 - INFO: Train epoch 372: Loss: 14645.4524 | r_Loss: 4793.2074 | g_Loss: 1858.9755 | l_Loss: 557.3675 |
21-12-23 02:08:48.388 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:08:48.389 - INFO: Train epoch 373: Loss: 12612.9723 | r_Loss: 4660.8065 | g_Loss: 1481.8046 | l_Loss: 543.1428 |
21-12-23 02:10:00.095 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:10:00.095 - INFO: Train epoch 374: Loss: 14367.0894 | r_Loss: 4642.2497 | g_Loss: 1827.4430 | l_Loss: 587.6247 |
21-12-23 02:11:11.752 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:11:11.752 - INFO: Train epoch 375: Loss: 15571.2664 | r_Loss: 4664.3974 | g_Loss: 2065.9923 | l_Loss: 576.9074 |
21-12-23 02:12:23.436 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:12:23.436 - INFO: Train epoch 376: Loss: 12429.7220 | r_Loss: 4628.5494 | g_Loss: 1440.2062 | l_Loss: 600.1414 |
21-12-23 02:13:35.236 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:13:35.237 - INFO: Train epoch 377: Loss: 12666.3535 | r_Loss: 4477.5534 | g_Loss: 1521.1705 | l_Loss: 582.9472 |
21-12-23 02:14:46.872 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:14:46.872 - INFO: Train epoch 378: Loss: 15778.2456 | r_Loss: 4374.2035 | g_Loss: 2163.2780 | l_Loss: 587.6522 |
21-12-23 02:15:58.716 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:15:58.717 - INFO: Train epoch 379: Loss: 13257.6915 | r_Loss: 4616.3751 | g_Loss: 1618.1989 | l_Loss: 550.3217 |
21-12-23 02:17:10.412 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:17:10.413 - INFO: Train epoch 380: Loss: 11552.0918 | r_Loss: 4286.0666 | g_Loss: 1355.0366 | l_Loss: 490.8421 |
21-12-23 02:18:22.141 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:18:22.141 - INFO: Train epoch 381: Loss: 15743.7421 | r_Loss: 4606.6200 | g_Loss: 2119.5394 | l_Loss: 539.4252 |
21-12-23 02:19:33.884 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:19:33.884 - INFO: Train epoch 382: Loss: 12209.1638 | r_Loss: 4268.7347 | g_Loss: 1476.8788 | l_Loss: 556.0352 |
21-12-23 02:20:45.532 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:20:45.532 - INFO: Train epoch 383: Loss: 19493.1004 | r_Loss: 4436.5979 | g_Loss: 2906.1670 | l_Loss: 525.6672 |
21-12-23 02:21:57.259 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:21:57.260 - INFO: Train epoch 384: Loss: 12666.3575 | r_Loss: 4746.9444 | g_Loss: 1459.0293 | l_Loss: 624.2665 |
21-12-23 02:23:09.009 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:23:09.009 - INFO: Train epoch 385: Loss: 12412.3199 | r_Loss: 4511.6522 | g_Loss: 1447.9272 | l_Loss: 661.0319 |
21-12-23 02:24:20.694 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:24:20.694 - INFO: Train epoch 386: Loss: 11236.3745 | r_Loss: 4265.4271 | g_Loss: 1290.8317 | l_Loss: 516.7889 |
21-12-23 02:25:32.324 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:25:32.325 - INFO: Train epoch 387: Loss: 11405.3783 | r_Loss: 4133.0039 | g_Loss: 1357.1133 | l_Loss: 486.8080 |
21-12-23 02:26:44.174 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:26:44.174 - INFO: Train epoch 388: Loss: 14206.5084 | r_Loss: 4214.5707 | g_Loss: 1896.3955 | l_Loss: 509.9602 |
21-12-23 02:27:55.853 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:27:55.853 - INFO: Train epoch 389: Loss: 12798.5236 | r_Loss: 4268.9004 | g_Loss: 1599.4716 | l_Loss: 532.2650 |
21-12-23 02:29:41.453 - INFO: TEST: PSNR_S: 30.2444 | PSNR_C: 24.6377 |
21-12-23 02:29:41.454 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:29:41.455 - INFO: Train epoch 390: Loss: 26492.5751 | r_Loss: 5163.5157 | g_Loss: 4139.9431 | l_Loss: 629.3437 |
21-12-23 02:30:53.140 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:30:53.140 - INFO: Train epoch 391: Loss: 11669.0579 | r_Loss: 4458.1491 | g_Loss: 1331.0785 | l_Loss: 555.5162 |
21-12-23 02:32:04.922 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:32:04.922 - INFO: Train epoch 392: Loss: 11234.8184 | r_Loss: 4326.4149 | g_Loss: 1282.3501 | l_Loss: 496.6530 |
21-12-23 02:33:16.670 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:33:16.671 - INFO: Train epoch 393: Loss: 11422.7090 | r_Loss: 4237.9353 | g_Loss: 1335.1783 | l_Loss: 508.8823 |
21-12-23 02:34:28.291 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:34:28.291 - INFO: Train epoch 394: Loss: 11244.3117 | r_Loss: 4099.1840 | g_Loss: 1327.1188 | l_Loss: 509.5338 |
21-12-23 02:35:39.953 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:35:39.954 - INFO: Train epoch 395: Loss: 11126.5678 | r_Loss: 3982.5186 | g_Loss: 1322.3259 | l_Loss: 532.4196 |
21-12-23 02:36:51.603 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:36:51.603 - INFO: Train epoch 396: Loss: 20952.7730 | r_Loss: 4714.7699 | g_Loss: 3133.8375 | l_Loss: 568.8155 |
21-12-23 02:38:03.300 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:38:03.300 - INFO: Train epoch 397: Loss: 10956.4181 | r_Loss: 4241.6536 | g_Loss: 1232.7605 | l_Loss: 550.9619 |
21-12-23 02:39:15.150 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:39:15.151 - INFO: Train epoch 398: Loss: 10998.3049 | r_Loss: 4077.3556 | g_Loss: 1291.6715 | l_Loss: 462.5916 |
21-12-23 02:40:26.823 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:40:26.824 - INFO: Train epoch 399: Loss: 10624.8870 | r_Loss: 3929.1426 | g_Loss: 1253.8797 | l_Loss: 426.3459 |
21-12-23 02:41:38.537 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:41:38.538 - INFO: Train epoch 400: Loss: 10954.2412 | r_Loss: 3916.7600 | g_Loss: 1321.9163 | l_Loss: 427.8996 |
21-12-23 02:42:50.269 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:42:50.269 - INFO: Train epoch 401: Loss: 11720.4431 | r_Loss: 3988.1445 | g_Loss: 1437.5689 | l_Loss: 544.4542 |
21-12-23 02:44:02.025 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:44:02.026 - INFO: Train epoch 402: Loss: 11470.2733 | r_Loss: 3665.2225 | g_Loss: 1470.5415 | l_Loss: 452.3435 |
21-12-23 02:45:13.739 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:45:13.739 - INFO: Train epoch 403: Loss: 11034.8915 | r_Loss: 3801.3666 | g_Loss: 1360.5381 | l_Loss: 430.8344 |
21-12-23 02:46:25.428 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:46:25.429 - INFO: Train epoch 404: Loss: 11753.1652 | r_Loss: 3745.8827 | g_Loss: 1504.1179 | l_Loss: 486.6930 |
21-12-23 02:47:36.979 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:47:36.979 - INFO: Train epoch 405: Loss: 10309.5302 | r_Loss: 3672.1870 | g_Loss: 1241.8019 | l_Loss: 428.3337 |
21-12-23 02:48:48.569 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:48:48.569 - INFO: Train epoch 406: Loss: 11090.5613 | r_Loss: 3809.2497 | g_Loss: 1362.5489 | l_Loss: 468.5672 |
21-12-23 02:50:00.239 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:50:00.239 - INFO: Train epoch 407: Loss: 35482.9791 | r_Loss: 5940.8085 | g_Loss: 5758.8128 | l_Loss: 748.1066 |
21-12-23 02:51:12.009 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:51:12.010 - INFO: Train epoch 408: Loss: 11414.6707 | r_Loss: 4404.3054 | g_Loss: 1279.9127 | l_Loss: 610.8015 |
21-12-23 02:52:23.776 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:52:23.777 - INFO: Train epoch 409: Loss: 10954.1838 | r_Loss: 4177.2390 | g_Loss: 1253.0599 | l_Loss: 511.6452 |
21-12-23 02:53:35.336 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:53:35.337 - INFO: Train epoch 410: Loss: 10294.0424 | r_Loss: 4113.0410 | g_Loss: 1144.8300 | l_Loss: 456.8513 |
21-12-23 02:54:47.179 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:54:47.180 - INFO: Train epoch 411: Loss: 9730.0686 | r_Loss: 3856.8166 | g_Loss: 1083.2366 | l_Loss: 457.0691 |
21-12-23 02:55:58.923 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:55:58.924 - INFO: Train epoch 412: Loss: 9795.4695 | r_Loss: 3711.7440 | g_Loss: 1120.1334 | l_Loss: 483.0583 |
21-12-23 02:57:10.544 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:57:10.544 - INFO: Train epoch 413: Loss: 10925.8177 | r_Loss: 3839.6706 | g_Loss: 1318.8606 | l_Loss: 491.8442 |
21-12-23 02:58:22.242 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:58:22.243 - INFO: Train epoch 414: Loss: 9502.7316 | r_Loss: 3506.0428 | g_Loss: 1114.4190 | l_Loss: 424.5936 |
21-12-23 02:59:33.940 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 02:59:33.940 - INFO: Train epoch 415: Loss: 9361.9698 | r_Loss: 3474.9199 | g_Loss: 1094.1546 | l_Loss: 416.2767 |
21-12-23 03:00:45.669 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:00:45.670 - INFO: Train epoch 416: Loss: 10580.2208 | r_Loss: 3424.8232 | g_Loss: 1349.8672 | l_Loss: 406.0616 |
21-12-23 03:01:57.409 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:01:57.409 - INFO: Train epoch 417: Loss: 11842.9808 | r_Loss: 3534.1054 | g_Loss: 1571.5621 | l_Loss: 451.0646 |
21-12-23 03:03:09.058 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:03:09.059 - INFO: Train epoch 418: Loss: 10402.6159 | r_Loss: 3602.1109 | g_Loss: 1265.0401 | l_Loss: 475.3047 |
21-12-23 03:04:20.746 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:04:20.746 - INFO: Train epoch 419: Loss: 10660.8961 | r_Loss: 3500.9599 | g_Loss: 1351.5335 | l_Loss: 402.2686 |
21-12-23 03:06:06.298 - INFO: TEST: PSNR_S: 30.5967 | PSNR_C: 26.0113 |
21-12-23 03:06:06.299 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:06:06.299 - INFO: Train epoch 420: Loss: 10367.7832 | r_Loss: 3391.9752 | g_Loss: 1308.2700 | l_Loss: 434.4581 |
21-12-23 03:07:17.979 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:07:17.980 - INFO: Train epoch 421: Loss: 9705.6518 | r_Loss: 3379.5889 | g_Loss: 1181.8005 | l_Loss: 417.0603 |
21-12-23 03:08:29.546 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:08:29.547 - INFO: Train epoch 422: Loss: 9560.5492 | r_Loss: 3232.3931 | g_Loss: 1179.1448 | l_Loss: 432.4323 |
21-12-23 03:09:41.253 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:09:41.254 - INFO: Train epoch 423: Loss: 10189.3723 | r_Loss: 3208.9958 | g_Loss: 1307.3704 | l_Loss: 443.5246 |
21-12-23 03:10:52.918 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:10:52.919 - INFO: Train epoch 424: Loss: 13181.7352 | r_Loss: 3425.2209 | g_Loss: 1873.9020 | l_Loss: 387.0043 |
21-12-23 03:12:04.618 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:12:04.618 - INFO: Train epoch 425: Loss: 9408.1547 | r_Loss: 3367.3613 | g_Loss: 1104.9423 | l_Loss: 516.0815 |
21-12-23 03:13:16.235 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:13:16.236 - INFO: Train epoch 426: Loss: 9106.8974 | r_Loss: 3142.7094 | g_Loss: 1116.2603 | l_Loss: 382.8862 |
21-12-23 03:14:27.788 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:14:27.789 - INFO: Train epoch 427: Loss: 9419.1417 | r_Loss: 3218.8378 | g_Loss: 1154.9314 | l_Loss: 425.6471 |
21-12-23 03:15:39.402 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:15:39.402 - INFO: Train epoch 428: Loss: 9141.0693 | r_Loss: 3130.9726 | g_Loss: 1119.3839 | l_Loss: 413.1773 |
21-12-23 03:16:51.016 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:16:51.016 - INFO: Train epoch 429: Loss: 10455.6350 | r_Loss: 3106.2981 | g_Loss: 1400.2102 | l_Loss: 348.2860 |
21-12-23 03:18:02.849 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:18:02.849 - INFO: Train epoch 430: Loss: 10399.3192 | r_Loss: 3020.0319 | g_Loss: 1404.1303 | l_Loss: 358.6359 |
21-12-23 03:19:14.432 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:19:14.432 - INFO: Train epoch 431: Loss: 8679.0131 | r_Loss: 3061.6476 | g_Loss: 1035.3843 | l_Loss: 440.4441 |
21-12-23 03:20:26.047 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:20:26.048 - INFO: Train epoch 432: Loss: 9699.3092 | r_Loss: 3090.5465 | g_Loss: 1226.7283 | l_Loss: 475.1210 |
21-12-23 03:21:37.665 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:21:37.665 - INFO: Train epoch 433: Loss: 8745.0439 | r_Loss: 2916.1669 | g_Loss: 1093.2418 | l_Loss: 362.6679 |
21-12-23 03:22:49.424 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:22:49.425 - INFO: Train epoch 434: Loss: 9485.0230 | r_Loss: 2897.1263 | g_Loss: 1246.4102 | l_Loss: 355.8458 |
21-12-23 03:24:01.080 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:24:01.081 - INFO: Train epoch 435: Loss: 8658.5377 | r_Loss: 2812.6936 | g_Loss: 1104.5963 | l_Loss: 322.8624 |
21-12-23 03:25:12.670 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:25:12.671 - INFO: Train epoch 436: Loss: 10065.2405 | r_Loss: 2979.4971 | g_Loss: 1342.6297 | l_Loss: 372.5948 |
21-12-23 03:26:24.321 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:26:24.321 - INFO: Train epoch 437: Loss: 8700.3867 | r_Loss: 2992.3917 | g_Loss: 1069.4363 | l_Loss: 360.8136 |
21-12-23 03:27:35.918 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:27:35.919 - INFO: Train epoch 438: Loss: 8923.5978 | r_Loss: 2906.8792 | g_Loss: 1139.3494 | l_Loss: 319.9717 |
21-12-23 03:28:47.695 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:28:47.695 - INFO: Train epoch 439: Loss: 14685.0800 | r_Loss: 3319.8672 | g_Loss: 2194.7064 | l_Loss: 391.6807 |
21-12-23 03:29:59.505 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:29:59.505 - INFO: Train epoch 440: Loss: 8202.1856 | r_Loss: 2984.8211 | g_Loss: 960.1211 | l_Loss: 416.7589 |
21-12-23 03:31:11.200 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:31:11.200 - INFO: Train epoch 441: Loss: 8684.4565 | r_Loss: 3004.6178 | g_Loss: 1057.5804 | l_Loss: 391.9365 |
21-12-23 03:32:23.031 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:32:23.032 - INFO: Train epoch 442: Loss: 8421.4715 | r_Loss: 2890.4307 | g_Loss: 1035.9770 | l_Loss: 351.1558 |
21-12-23 03:33:34.746 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:33:34.747 - INFO: Train epoch 443: Loss: 8200.9959 | r_Loss: 2760.3290 | g_Loss: 1007.7055 | l_Loss: 402.1393 |
21-12-23 03:34:46.575 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:34:46.576 - INFO: Train epoch 444: Loss: 12755.0547 | r_Loss: 3192.5577 | g_Loss: 1842.5013 | l_Loss: 349.9905 |
21-12-23 03:35:58.257 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:35:58.258 - INFO: Train epoch 445: Loss: 7778.7741 | r_Loss: 2787.0343 | g_Loss: 925.4604 | l_Loss: 364.4379 |
21-12-23 03:37:10.005 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:37:10.005 - INFO: Train epoch 446: Loss: 8074.3802 | r_Loss: 2769.8957 | g_Loss: 984.1166 | l_Loss: 383.9014 |
21-12-23 03:38:21.687 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:38:21.687 - INFO: Train epoch 447: Loss: 9526.8055 | r_Loss: 2713.9801 | g_Loss: 1293.1829 | l_Loss: 346.9112 |
21-12-23 03:39:33.330 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:39:33.330 - INFO: Train epoch 448: Loss: 7866.1437 | r_Loss: 2783.9809 | g_Loss: 953.6021 | l_Loss: 314.1524 |
21-12-23 03:40:45.105 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:40:45.105 - INFO: Train epoch 449: Loss: 7903.0747 | r_Loss: 2548.7569 | g_Loss: 1010.6178 | l_Loss: 301.2286 |
21-12-23 03:42:30.545 - INFO: TEST: PSNR_S: 32.5207 | PSNR_C: 27.1142 |
21-12-23 03:42:30.546 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:42:30.547 - INFO: Train epoch 450: Loss: 8769.2354 | r_Loss: 2762.1883 | g_Loss: 1138.5454 | l_Loss: 314.3202 |
21-12-23 03:43:42.326 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:43:42.326 - INFO: Train epoch 451: Loss: 7214.7866 | r_Loss: 2530.7021 | g_Loss: 875.4015 | l_Loss: 307.0771 |
21-12-23 03:44:54.070 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:44:54.071 - INFO: Train epoch 452: Loss: 10778.9354 | r_Loss: 2817.3871 | g_Loss: 1527.0728 | l_Loss: 326.1841 |
21-12-23 03:46:05.941 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:46:05.942 - INFO: Train epoch 453: Loss: 8413.8678 | r_Loss: 2894.5788 | g_Loss: 1037.7525 | l_Loss: 330.5265 |
21-12-23 03:47:17.659 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:47:17.659 - INFO: Train epoch 454: Loss: 7444.6331 | r_Loss: 2572.8334 | g_Loss: 913.7081 | l_Loss: 303.2593 |
21-12-23 03:48:29.438 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:48:29.438 - INFO: Train epoch 455: Loss: 7200.6897 | r_Loss: 2466.7895 | g_Loss: 870.6057 | l_Loss: 380.8715 |
21-12-23 03:49:41.093 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:49:41.093 - INFO: Train epoch 456: Loss: 8534.9453 | r_Loss: 2520.4811 | g_Loss: 1139.7617 | l_Loss: 315.6556 |
21-12-23 03:50:52.862 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:50:52.863 - INFO: Train epoch 457: Loss: 7321.0849 | r_Loss: 2501.7373 | g_Loss: 904.5652 | l_Loss: 296.5213 |
21-12-23 03:52:04.622 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:52:04.623 - INFO: Train epoch 458: Loss: 7605.6561 | r_Loss: 2427.5458 | g_Loss: 978.1983 | l_Loss: 287.1190 |
21-12-23 03:53:16.089 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:53:16.090 - INFO: Train epoch 459: Loss: 7721.3848 | r_Loss: 2453.4491 | g_Loss: 996.9978 | l_Loss: 282.9469 |
21-12-23 03:54:27.824 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:54:27.825 - INFO: Train epoch 460: Loss: 7007.1259 | r_Loss: 2321.8965 | g_Loss: 875.4271 | l_Loss: 308.0939 |
21-12-23 03:55:39.580 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:55:39.581 - INFO: Train epoch 461: Loss: 8823.8314 | r_Loss: 2445.2728 | g_Loss: 1209.1469 | l_Loss: 332.8241 |
21-12-23 03:56:51.242 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:56:51.243 - INFO: Train epoch 462: Loss: 7280.8016 | r_Loss: 2382.7776 | g_Loss: 918.4256 | l_Loss: 305.8960 |
21-12-23 03:58:02.968 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:58:02.968 - INFO: Train epoch 463: Loss: 6643.5540 | r_Loss: 2246.5058 | g_Loss: 814.8903 | l_Loss: 322.5969 |
21-12-23 03:59:14.652 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 03:59:14.652 - INFO: Train epoch 464: Loss: 25558.2799 | r_Loss: 5162.1808 | g_Loss: 3958.9579 | l_Loss: 601.3097 |
21-12-23 04:00:26.420 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:00:26.420 - INFO: Train epoch 465: Loss: 8021.1324 | r_Loss: 3162.9856 | g_Loss: 896.7169 | l_Loss: 374.5624 |
21-12-23 04:01:38.106 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:01:38.106 - INFO: Train epoch 466: Loss: 7416.3237 | r_Loss: 2870.8118 | g_Loss: 848.9422 | l_Loss: 300.8009 |
21-12-23 04:02:49.798 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:02:49.799 - INFO: Train epoch 467: Loss: 7275.7177 | r_Loss: 2726.8007 | g_Loss: 843.5767 | l_Loss: 331.0334 |
21-12-23 04:04:01.547 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:04:01.548 - INFO: Train epoch 468: Loss: 7389.3110 | r_Loss: 2640.6350 | g_Loss: 873.2954 | l_Loss: 382.1992 |
21-12-23 04:05:13.329 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:05:13.330 - INFO: Train epoch 469: Loss: 7627.2383 | r_Loss: 2686.7852 | g_Loss: 920.7403 | l_Loss: 336.7519 |
21-12-23 04:06:25.116 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:06:25.116 - INFO: Train epoch 470: Loss: 8238.7661 | r_Loss: 2543.4272 | g_Loss: 1072.0579 | l_Loss: 335.0494 |
21-12-23 04:07:36.823 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:07:36.824 - INFO: Train epoch 471: Loss: 7073.0487 | r_Loss: 2453.7339 | g_Loss: 864.3529 | l_Loss: 297.5501 |
21-12-23 04:08:48.517 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:08:48.517 - INFO: Train epoch 472: Loss: 6479.1092 | r_Loss: 2260.0638 | g_Loss: 797.2865 | l_Loss: 232.6128 |
21-12-23 04:10:00.254 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:10:00.255 - INFO: Train epoch 473: Loss: 9870.5615 | r_Loss: 2427.2520 | g_Loss: 1428.5164 | l_Loss: 300.7273 |
21-12-23 04:11:12.117 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:11:12.118 - INFO: Train epoch 474: Loss: 7002.3957 | r_Loss: 2552.5949 | g_Loss: 832.7738 | l_Loss: 285.9317 |
21-12-23 04:12:23.871 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:12:23.872 - INFO: Train epoch 475: Loss: 6834.8962 | r_Loss: 2373.8888 | g_Loss: 824.5074 | l_Loss: 338.4704 |
21-12-23 04:13:35.695 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:13:35.695 - INFO: Train epoch 476: Loss: 6752.7770 | r_Loss: 2324.4938 | g_Loss: 831.7957 | l_Loss: 269.3047 |
21-12-23 04:14:47.388 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:14:47.388 - INFO: Train epoch 477: Loss: 6160.4993 | r_Loss: 2106.5983 | g_Loss: 761.8698 | l_Loss: 244.5522 |
21-12-23 04:15:59.051 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:15:59.051 - INFO: Train epoch 478: Loss: 7272.7048 | r_Loss: 2234.5346 | g_Loss: 952.6600 | l_Loss: 274.8701 |
21-12-23 04:17:10.657 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:17:10.657 - INFO: Train epoch 479: Loss: 6016.2310 | r_Loss: 2130.5388 | g_Loss: 725.6513 | l_Loss: 257.4357 |
21-12-23 04:18:56.346 - INFO: TEST: PSNR_S: 33.1542 | PSNR_C: 27.6782 |
21-12-23 04:18:56.347 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:18:56.347 - INFO: Train epoch 480: Loss: 9724.5515 | r_Loss: 2518.2101 | g_Loss: 1376.2782 | l_Loss: 324.9508 |
21-12-23 04:20:08.058 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:20:08.059 - INFO: Train epoch 481: Loss: 6919.2375 | r_Loss: 2368.0789 | g_Loss: 831.4278 | l_Loss: 394.0195 |
21-12-23 04:21:19.836 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:21:19.837 - INFO: Train epoch 482: Loss: 6204.2054 | r_Loss: 2167.0514 | g_Loss: 738.4867 | l_Loss: 344.7207 |
21-12-23 04:22:31.642 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:22:31.642 - INFO: Train epoch 483: Loss: 6340.0248 | r_Loss: 2193.9803 | g_Loss: 777.2485 | l_Loss: 259.8021 |
21-12-23 04:23:43.365 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:23:43.365 - INFO: Train epoch 484: Loss: 5872.2413 | r_Loss: 1957.3414 | g_Loss: 734.5748 | l_Loss: 242.0258 |
21-12-23 04:24:54.992 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:24:54.992 - INFO: Train epoch 485: Loss: 6674.4973 | r_Loss: 2072.8166 | g_Loss: 867.8152 | l_Loss: 262.6050 |
21-12-23 04:26:06.783 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:26:06.783 - INFO: Train epoch 486: Loss: 6413.0732 | r_Loss: 1942.0373 | g_Loss: 839.3889 | l_Loss: 274.0916 |
21-12-23 04:27:18.565 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:27:18.565 - INFO: Train epoch 487: Loss: 5771.1652 | r_Loss: 1903.9559 | g_Loss: 723.5073 | l_Loss: 249.6730 |
21-12-23 04:28:30.324 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:28:30.324 - INFO: Train epoch 488: Loss: 6320.0959 | r_Loss: 1949.4927 | g_Loss: 822.0010 | l_Loss: 260.5980 |
21-12-23 04:29:42.083 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:29:42.083 - INFO: Train epoch 489: Loss: 5858.4408 | r_Loss: 1853.6124 | g_Loss: 760.5309 | l_Loss: 202.1738 |
21-12-23 04:30:53.983 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:30:53.983 - INFO: Train epoch 490: Loss: 6055.2920 | r_Loss: 1870.4032 | g_Loss: 793.8156 | l_Loss: 215.8109 |
21-12-23 04:32:05.765 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:32:05.766 - INFO: Train epoch 491: Loss: 6664.4012 | r_Loss: 1958.4647 | g_Loss: 897.4886 | l_Loss: 218.4935 |
21-12-23 04:33:17.566 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:33:17.567 - INFO: Train epoch 492: Loss: 5568.6590 | r_Loss: 1854.8614 | g_Loss: 699.6046 | l_Loss: 215.7745 |
21-12-23 04:34:29.250 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:34:29.250 - INFO: Train epoch 493: Loss: 6944.8521 | r_Loss: 1942.6610 | g_Loss: 952.0167 | l_Loss: 242.1076 |
21-12-23 04:35:40.955 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:35:40.956 - INFO: Train epoch 494: Loss: 5789.7048 | r_Loss: 1927.3347 | g_Loss: 722.1743 | l_Loss: 251.4986 |
21-12-23 04:36:52.884 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:36:52.885 - INFO: Train epoch 495: Loss: 5788.8558 | r_Loss: 1888.2412 | g_Loss: 733.8262 | l_Loss: 231.4836 |
21-12-23 04:38:04.637 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:38:04.637 - INFO: Train epoch 496: Loss: 5885.0661 | r_Loss: 1807.1582 | g_Loss: 773.5281 | l_Loss: 210.2673 |
21-12-23 04:39:16.422 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:39:16.423 - INFO: Train epoch 497: Loss: 5688.9253 | r_Loss: 1834.2997 | g_Loss: 723.6805 | l_Loss: 236.2232 |
21-12-23 04:40:28.022 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:40:28.022 - INFO: Train epoch 498: Loss: 6494.9683 | r_Loss: 1858.5022 | g_Loss: 885.9756 | l_Loss: 206.5881 |
21-12-23 04:41:39.598 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:41:39.599 - INFO: Train epoch 499: Loss: 5312.2237 | r_Loss: 1791.8323 | g_Loss: 656.5604 | l_Loss: 237.5894 |
21-12-23 04:42:51.444 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:42:51.445 - INFO: Train epoch 500: Loss: 5443.4818 | r_Loss: 1724.8783 | g_Loss: 704.4543 | l_Loss: 196.3318 |
21-12-23 04:44:03.151 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:44:03.152 - INFO: Train epoch 501: Loss: 5421.3279 | r_Loss: 1748.3725 | g_Loss: 696.4048 | l_Loss: 190.9315 |
21-12-23 04:45:14.852 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:45:14.853 - INFO: Train epoch 502: Loss: 5695.0684 | r_Loss: 1692.6523 | g_Loss: 755.1905 | l_Loss: 226.4637 |
21-12-23 04:46:26.554 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:46:26.555 - INFO: Train epoch 503: Loss: 5830.2503 | r_Loss: 1841.8773 | g_Loss: 753.4787 | l_Loss: 220.9793 |
21-12-23 04:47:38.407 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:47:38.407 - INFO: Train epoch 504: Loss: 5560.3546 | r_Loss: 1711.8192 | g_Loss: 727.0392 | l_Loss: 213.3392 |
21-12-23 04:48:50.140 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:48:50.140 - INFO: Train epoch 505: Loss: 5220.4477 | r_Loss: 1662.7969 | g_Loss: 674.5196 | l_Loss: 185.0526 |
21-12-23 04:50:01.794 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:50:01.794 - INFO: Train epoch 506: Loss: 5048.8368 | r_Loss: 1535.9554 | g_Loss: 666.9926 | l_Loss: 177.9186 |
21-12-23 04:51:13.329 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:51:13.329 - INFO: Train epoch 507: Loss: 78825545657348880.0000 | r_Loss: 1203295664.3973 | g_Loss: 15765109681171420.0000 | l_Loss: 53101508.2087 |
21-12-23 04:52:25.114 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:52:25.114 - INFO: Train epoch 508: Loss: 89760514918.4000 | r_Loss: 1609844.7737 | g_Loss: 17951746805.7600 | l_Loss: 170768.1112 |
21-12-23 04:53:36.808 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:53:36.809 - INFO: Train epoch 509: Loss: 13351978071.0400 | r_Loss: 1380072.8700 | g_Loss: 2670085977.6000 | l_Loss: 168119.2546 |
21-12-23 04:55:22.307 - INFO: TEST: PSNR_S: -30.5618 | PSNR_C: -0.5705 |
21-12-23 04:55:22.308 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:55:22.308 - INFO: Train epoch 510: Loss: 9578494274.5600 | r_Loss: 1455827.4512 | g_Loss: 1915367184.6400 | l_Loss: 202430.6602 |
21-12-23 04:56:34.034 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:56:34.034 - INFO: Train epoch 511: Loss: 7285151687.6800 | r_Loss: 1426894.7788 | g_Loss: 1456707645.4400 | l_Loss: 186589.9700 |
21-12-23 04:57:45.729 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:57:45.730 - INFO: Train epoch 512: Loss: 6259058437.1200 | r_Loss: 1507723.7737 | g_Loss: 1251471107.2000 | l_Loss: 195144.1238 |
21-12-23 04:58:57.387 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 04:58:57.387 - INFO: Train epoch 513: Loss: 5284519936.0000 | r_Loss: 1418807.6963 | g_Loss: 1056585367.0400 | l_Loss: 174314.2737 |
21-12-23 05:00:09.068 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:00:09.068 - INFO: Train epoch 514: Loss: 4585196756.4800 | r_Loss: 1353128.9375 | g_Loss: 916730558.0800 | l_Loss: 190783.1759 |
21-12-23 05:01:20.679 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:01:20.680 - INFO: Train epoch 515: Loss: 4505471139.8400 | r_Loss: 1511752.7988 | g_Loss: 900759932.1600 | l_Loss: 159722.8573 |
21-12-23 05:02:32.204 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:02:32.205 - INFO: Train epoch 516: Loss: 4021383037.4400 | r_Loss: 1420848.4400 | g_Loss: 803957783.0400 | l_Loss: 173237.3057 |
21-12-23 05:03:43.870 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:03:43.871 - INFO: Train epoch 517: Loss: 3717954951.6800 | r_Loss: 1443600.4825 | g_Loss: 743266588.1600 | l_Loss: 178435.0941 |
21-12-23 05:04:55.451 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:04:55.451 - INFO: Train epoch 518: Loss: 3709898250.2400 | r_Loss: 1515585.1763 | g_Loss: 741640915.2000 | l_Loss: 178057.6409 |
21-12-23 05:06:06.902 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:06:06.903 - INFO: Train epoch 519: Loss: 3344384803.8400 | r_Loss: 1422321.3000 | g_Loss: 668560216.3200 | l_Loss: 161423.5355 |
21-12-23 05:07:18.474 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:07:18.474 - INFO: Train epoch 520: Loss: 3129837073.9200 | r_Loss: 1450231.0037 | g_Loss: 625646471.6800 | l_Loss: 154497.8175 |
21-12-23 05:08:30.072 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:08:30.072 - INFO: Train epoch 521: Loss: 2848975160.3200 | r_Loss: 1356461.8525 | g_Loss: 569495344.0000 | l_Loss: 141961.3315 |
21-12-23 05:09:41.660 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:09:41.660 - INFO: Train epoch 522: Loss: 2740289134.0800 | r_Loss: 1418609.5200 | g_Loss: 547740353.2800 | l_Loss: 168755.0709 |
21-12-23 05:10:53.197 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:10:53.198 - INFO: Train epoch 523: Loss: 2686676551.6800 | r_Loss: 1491146.0875 | g_Loss: 537006122.5600 | l_Loss: 154782.8391 |
21-12-23 05:12:04.818 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:12:04.819 - INFO: Train epoch 524: Loss: 2593800084.4800 | r_Loss: 1482650.5312 | g_Loss: 518430753.9200 | l_Loss: 163691.3444 |
21-12-23 05:13:16.319 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:13:16.320 - INFO: Train epoch 525: Loss: 2469694448.6400 | r_Loss: 1436760.3587 | g_Loss: 493614819.2000 | l_Loss: 183579.1839 |
21-12-23 05:14:27.782 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:14:27.783 - INFO: Train epoch 526: Loss: 2461043630.0800 | r_Loss: 1557566.7488 | g_Loss: 491854680.6400 | l_Loss: 212670.9609 |
21-12-23 05:15:39.482 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:15:39.483 - INFO: Train epoch 527: Loss: 2342671831.0400 | r_Loss: 1512593.9013 | g_Loss: 468190960.3200 | l_Loss: 204414.3314 |
21-12-23 05:16:50.988 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:16:50.989 - INFO: Train epoch 528: Loss: 2260056396.8000 | r_Loss: 1442587.4688 | g_Loss: 451692565.1200 | l_Loss: 150994.9207 |
21-12-23 05:18:02.541 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:18:02.542 - INFO: Train epoch 529: Loss: 2142793139.2000 | r_Loss: 1438110.4175 | g_Loss: 428232579.2000 | l_Loss: 192140.8879 |
21-12-23 05:19:14.115 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:19:14.115 - INFO: Train epoch 530: Loss: 2026425880.3200 | r_Loss: 1449491.5050 | g_Loss: 404960872.3200 | l_Loss: 172023.7397 |
21-12-23 05:20:25.691 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:20:25.691 - INFO: Train epoch 531: Loss: 1971024861.4400 | r_Loss: 1440213.8650 | g_Loss: 393877263.3600 | l_Loss: 198324.8710 |
21-12-23 05:21:37.401 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:21:37.402 - INFO: Train epoch 532: Loss: 1859539727.3600 | r_Loss: 1433838.6925 | g_Loss: 371589333.4400 | l_Loss: 159231.1795 |
21-12-23 05:22:49.227 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:22:49.228 - INFO: Train epoch 533: Loss: 1873340600.3200 | r_Loss: 1508036.5513 | g_Loss: 374335927.3600 | l_Loss: 152942.5454 |
21-12-23 05:24:01.011 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:24:01.012 - INFO: Train epoch 534: Loss: 1383228807.6800 | r_Loss: 1316760.4625 | g_Loss: 276351008.3200 | l_Loss: 157015.5646 |
21-12-23 05:25:12.827 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:25:12.828 - INFO: Train epoch 535: Loss: 864557076.4800 | r_Loss: 1506813.3250 | g_Loss: 172574098.7200 | l_Loss: 179770.0672 |
21-12-23 05:26:24.385 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:26:24.385 - INFO: Train epoch 536: Loss: 592266478.7200 | r_Loss: 1457797.4975 | g_Loss: 118129790.6400 | l_Loss: 159722.1018 |
21-12-23 05:27:35.942 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:27:35.942 - INFO: Train epoch 537: Loss: 577308040.3200 | r_Loss: 1407351.4788 | g_Loss: 115145084.6400 | l_Loss: 175271.5176 |
21-12-23 05:28:47.589 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:28:47.590 - INFO: Train epoch 538: Loss: 544920802.7200 | r_Loss: 1372597.0762 | g_Loss: 108674113.0400 | l_Loss: 177629.2521 |
21-12-23 05:29:59.124 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:29:59.124 - INFO: Train epoch 539: Loss: 624942969.6000 | r_Loss: 1449852.9463 | g_Loss: 124667199.5200 | l_Loss: 157123.2663 |
21-12-23 05:31:44.560 - INFO: TEST: PSNR_S: -18.5014 | PSNR_C: -0.3992 |
21-12-23 05:31:44.561 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:31:44.561 - INFO: Train epoch 540: Loss: 543863113.1200 | r_Loss: 1398406.0362 | g_Loss: 108453731.0800 | l_Loss: 196055.9570 |
21-12-23 05:32:56.171 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:32:56.172 - INFO: Train epoch 541: Loss: 453131077.7600 | r_Loss: 1253228.2075 | g_Loss: 90346256.7600 | l_Loss: 146557.6797 |
21-12-23 05:34:07.746 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:34:07.746 - INFO: Train epoch 542: Loss: 583447247.8400 | r_Loss: 1434548.1238 | g_Loss: 116372927.5600 | l_Loss: 148067.0249 |
21-12-23 05:35:19.328 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:35:19.329 - INFO: Train epoch 543: Loss: 529525024.6400 | r_Loss: 1403104.2650 | g_Loss: 105591454.0000 | l_Loss: 164647.5010 |
21-12-23 05:36:30.949 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:36:30.949 - INFO: Train epoch 544: Loss: 509937079.6800 | r_Loss: 1368076.1087 | g_Loss: 101684614.0800 | l_Loss: 145932.9205 |
21-12-23 05:37:42.570 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:37:42.570 - INFO: Train epoch 545: Loss: 501165195.8400 | r_Loss: 1382004.6200 | g_Loss: 99921858.2000 | l_Loss: 173899.0923 |
21-12-23 05:38:54.261 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:38:54.262 - INFO: Train epoch 546: Loss: 459029931.8400 | r_Loss: 1290544.7775 | g_Loss: 91518765.2400 | l_Loss: 145556.0534 |
21-12-23 05:40:05.942 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:40:05.942 - INFO: Train epoch 547: Loss: 454608742.8800 | r_Loss: 1321348.1638 | g_Loss: 90623058.4800 | l_Loss: 172106.0782 |
21-12-23 05:41:17.561 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:41:17.561 - INFO: Train epoch 548: Loss: 499533592.6400 | r_Loss: 1362458.9187 | g_Loss: 99601021.4400 | l_Loss: 166026.6384 |
21-12-23 05:42:29.197 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:42:29.198 - INFO: Train epoch 549: Loss: 505851523.3600 | r_Loss: 1356125.0962 | g_Loss: 100866787.5600 | l_Loss: 161462.5587 |
21-12-23 05:43:40.800 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:43:40.801 - INFO: Train epoch 550: Loss: 445021493.2800 | r_Loss: 1305426.3750 | g_Loss: 88710922.5600 | l_Loss: 161453.6801 |
21-12-23 05:44:52.407 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:44:52.407 - INFO: Train epoch 551: Loss: 381135919.6800 | r_Loss: 1230348.2750 | g_Loss: 75944716.4000 | l_Loss: 181987.0639 |
21-12-23 05:46:04.149 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:46:04.149 - INFO: Train epoch 552: Loss: 446510874.5600 | r_Loss: 1303685.1863 | g_Loss: 89009235.7600 | l_Loss: 161003.9435 |
21-12-23 05:47:15.866 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:47:15.866 - INFO: Train epoch 553: Loss: 410013046.4000 | r_Loss: 1269881.2437 | g_Loss: 81717317.5200 | l_Loss: 156578.4471 |
21-12-23 05:48:27.609 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:48:27.610 - INFO: Train epoch 554: Loss: 408456991.3600 | r_Loss: 1259481.5550 | g_Loss: 81406623.6000 | l_Loss: 164396.8704 |
21-12-23 05:49:39.292 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:49:39.293 - INFO: Train epoch 555: Loss: 420058293.9200 | r_Loss: 1267153.6000 | g_Loss: 83725942.8000 | l_Loss: 161423.5507 |
21-12-23 05:50:50.789 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:50:50.790 - INFO: Train epoch 556: Loss: 442129038.8800 | r_Loss: 1306198.2750 | g_Loss: 88131976.6800 | l_Loss: 162961.0782 |
21-12-23 05:52:02.376 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:52:02.376 - INFO: Train epoch 557: Loss: 407324664.9600 | r_Loss: 1297448.8162 | g_Loss: 81173088.2400 | l_Loss: 161777.3782 |
21-12-23 05:53:13.903 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:53:13.903 - INFO: Train epoch 558: Loss: 401721315.8400 | r_Loss: 1271861.9438 | g_Loss: 80059685.5200 | l_Loss: 151022.0339 |
21-12-23 05:54:25.530 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:54:25.530 - INFO: Train epoch 559: Loss: 420950201.1200 | r_Loss: 1310228.0913 | g_Loss: 83899096.0800 | l_Loss: 144498.9439 |
21-12-23 05:55:37.144 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:55:37.145 - INFO: Train epoch 560: Loss: 379787268.1600 | r_Loss: 1230309.4775 | g_Loss: 75683902.8800 | l_Loss: 137446.9425 |
21-12-23 05:56:48.598 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:56:48.599 - INFO: Train epoch 561: Loss: 400873314.5600 | r_Loss: 1272568.6500 | g_Loss: 79894409.0800 | l_Loss: 128704.2321 |
21-12-23 05:58:00.260 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:58:00.260 - INFO: Train epoch 562: Loss: 386475136.1600 | r_Loss: 1273973.1338 | g_Loss: 77006814.1200 | l_Loss: 167092.6040 |
21-12-23 05:59:11.793 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 05:59:11.793 - INFO: Train epoch 563: Loss: 414594341.7600 | r_Loss: 1304300.5012 | g_Loss: 82630132.9600 | l_Loss: 139376.2365 |
21-12-23 06:00:23.395 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:00:23.396 - INFO: Train epoch 564: Loss: 387063482.7200 | r_Loss: 1257788.5888 | g_Loss: 77130854.8800 | l_Loss: 151419.9034 |
21-12-23 06:01:34.998 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:01:34.999 - INFO: Train epoch 565: Loss: 391377302.7200 | r_Loss: 1282027.9100 | g_Loss: 77987807.8400 | l_Loss: 156240.7463 |
21-12-23 06:02:46.734 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:02:46.735 - INFO: Train epoch 566: Loss: 389541903.0400 | r_Loss: 1257434.8613 | g_Loss: 77628456.3200 | l_Loss: 142188.4620 |
21-12-23 06:03:58.276 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:03:58.277 - INFO: Train epoch 567: Loss: 385313962.8800 | r_Loss: 1302443.4163 | g_Loss: 76767484.2000 | l_Loss: 174098.8862 |
21-12-23 06:05:09.815 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:05:09.816 - INFO: Train epoch 568: Loss: 364540605.9200 | r_Loss: 1231824.7412 | g_Loss: 72634150.8800 | l_Loss: 138026.9271 |
21-12-23 06:06:21.304 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:06:21.304 - INFO: Train epoch 569: Loss: 358850068.9600 | r_Loss: 1244120.9375 | g_Loss: 71491751.2000 | l_Loss: 147184.2926 |
21-12-23 06:08:06.611 - INFO: TEST: PSNR_S: -16.8178 | PSNR_C: 0.1372 |
21-12-23 06:08:06.613 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:08:06.613 - INFO: Train epoch 570: Loss: 358402115.5200 | r_Loss: 1233486.9925 | g_Loss: 71396523.4400 | l_Loss: 186016.1194 |
21-12-23 06:09:18.152 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:09:18.153 - INFO: Train epoch 571: Loss: 317850326.4000 | r_Loss: 1174301.0950 | g_Loss: 63306055.4800 | l_Loss: 145746.6366 |
21-12-23 06:10:29.689 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:10:29.690 - INFO: Train epoch 572: Loss: 348569897.1200 | r_Loss: 1244392.5562 | g_Loss: 69437847.8000 | l_Loss: 136263.3053 |
21-12-23 06:11:41.394 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:11:41.394 - INFO: Train epoch 573: Loss: 314567266.0800 | r_Loss: 1185060.4587 | g_Loss: 62650697.2000 | l_Loss: 128722.2025 |
21-12-23 06:12:52.978 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:12:52.978 - INFO: Train epoch 574: Loss: 363270031.6800 | r_Loss: 1284660.6850 | g_Loss: 72360425.1600 | l_Loss: 183245.1332 |
21-12-23 06:14:04.620 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:14:04.621 - INFO: Train epoch 575: Loss: 298230154.7200 | r_Loss: 1145986.4400 | g_Loss: 59389953.8400 | l_Loss: 134400.4333 |
21-12-23 06:15:16.390 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:15:16.391 - INFO: Train epoch 576: Loss: 329672990.0800 | r_Loss: 1208137.9612 | g_Loss: 65667074.1200 | l_Loss: 129482.8604 |
21-12-23 06:16:28.273 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:16:28.273 - INFO: Train epoch 577: Loss: 311933877.9200 | r_Loss: 1164676.2000 | g_Loss: 62124346.8800 | l_Loss: 147465.9124 |
21-12-23 06:17:40.086 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:17:40.087 - INFO: Train epoch 578: Loss: 307952201.2800 | r_Loss: 1188090.8862 | g_Loss: 61319503.0000 | l_Loss: 166593.8794 |
21-12-23 06:18:51.991 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:18:51.992 - INFO: Train epoch 579: Loss: 300256493.4400 | r_Loss: 1159992.3388 | g_Loss: 59787720.0000 | l_Loss: 157896.0322 |
21-12-23 06:20:03.779 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:20:03.780 - INFO: Train epoch 580: Loss: 286305670.0800 | r_Loss: 1151125.5537 | g_Loss: 57005390.0800 | l_Loss: 127591.1261 |
21-12-23 06:21:15.745 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:21:15.746 - INFO: Train epoch 581: Loss: 292243275.3600 | r_Loss: 1174208.0025 | g_Loss: 58186326.4400 | l_Loss: 137435.0490 |
21-12-23 06:22:27.532 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:22:27.532 - INFO: Train epoch 582: Loss: 286219823.3600 | r_Loss: 1148151.3300 | g_Loss: 56985390.4400 | l_Loss: 144718.0075 |
21-12-23 06:23:39.013 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:23:39.014 - INFO: Train epoch 583: Loss: 301802632.1600 | r_Loss: 1199963.8175 | g_Loss: 60089485.3200 | l_Loss: 155243.6304 |
21-12-23 06:24:50.670 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:24:50.671 - INFO: Train epoch 584: Loss: 262338130.0800 | r_Loss: 1094576.8613 | g_Loss: 52223220.4800 | l_Loss: 127448.8954 |
21-12-23 06:26:02.390 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:26:02.391 - INFO: Train epoch 585: Loss: 277295504.9600 | r_Loss: 1135849.4106 | g_Loss: 55203853.9400 | l_Loss: 140389.0959 |
21-12-23 06:27:14.207 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:27:14.208 - INFO: Train epoch 586: Loss: 295526458.5600 | r_Loss: 1215247.0275 | g_Loss: 58832261.0000 | l_Loss: 149906.1103 |
21-12-23 06:28:25.752 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:28:25.753 - INFO: Train epoch 587: Loss: 232784954.8000 | r_Loss: 1068779.1738 | g_Loss: 46315695.7000 | l_Loss: 137698.2009 |
21-12-23 06:29:37.192 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:29:37.192 - INFO: Train epoch 588: Loss: 249713314.0800 | r_Loss: 1096222.7200 | g_Loss: 49695936.8000 | l_Loss: 137407.0126 |
21-12-23 06:30:48.705 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:30:48.706 - INFO: Train epoch 589: Loss: 242950426.8800 | r_Loss: 1113262.8613 | g_Loss: 48341613.6200 | l_Loss: 129097.6726 |
21-12-23 06:32:00.394 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:32:00.394 - INFO: Train epoch 590: Loss: 282325114.4000 | r_Loss: 1178255.4450 | g_Loss: 56202104.0000 | l_Loss: 136341.1667 |
21-12-23 06:33:11.942 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:33:11.943 - INFO: Train epoch 591: Loss: 274632065.7600 | r_Loss: 1162901.1187 | g_Loss: 54662436.6000 | l_Loss: 156981.8608 |
21-12-23 06:34:23.777 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:34:23.778 - INFO: Train epoch 592: Loss: 206128433.5200 | r_Loss: 1031310.0863 | g_Loss: 40991851.2200 | l_Loss: 137869.0339 |
21-12-23 06:35:35.629 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:35:35.629 - INFO: Train epoch 593: Loss: 265540846.4000 | r_Loss: 1167670.8462 | g_Loss: 52847664.6000 | l_Loss: 134851.1548 |
21-12-23 06:36:47.368 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:36:47.369 - INFO: Train epoch 594: Loss: 245167314.0800 | r_Loss: 1127186.2850 | g_Loss: 48779143.0400 | l_Loss: 144412.8429 |
21-12-23 06:37:59.180 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:37:59.180 - INFO: Train epoch 595: Loss: 242791554.4800 | r_Loss: 1114661.5600 | g_Loss: 48303501.9600 | l_Loss: 159383.1341 |
21-12-23 06:39:10.789 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:39:10.790 - INFO: Train epoch 596: Loss: 239475917.1200 | r_Loss: 1132091.2488 | g_Loss: 47640491.7600 | l_Loss: 141368.2036 |
21-12-23 06:40:22.273 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:40:22.273 - INFO: Train epoch 597: Loss: 223930824.4800 | r_Loss: 1110108.8750 | g_Loss: 44537881.0000 | l_Loss: 131310.4106 |
21-12-23 06:41:33.732 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:41:33.732 - INFO: Train epoch 598: Loss: 233702785.0400 | r_Loss: 1117287.3462 | g_Loss: 46489236.2000 | l_Loss: 139316.4277 |
21-12-23 06:42:45.390 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:42:45.391 - INFO: Train epoch 599: Loss: 218168822.4000 | r_Loss: 1088013.2100 | g_Loss: 43387738.5800 | l_Loss: 142117.8493 |
21-12-23 06:44:30.837 - INFO: TEST: PSNR_S: -14.9013 | PSNR_C: 0.5614 |
21-12-23 06:44:30.838 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:44:30.839 - INFO: Train epoch 600: Loss: 216027661.1200 | r_Loss: 1092630.6550 | g_Loss: 42962102.4200 | l_Loss: 124518.4271 |
21-12-23 06:45:42.436 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:45:42.436 - INFO: Train epoch 601: Loss: 215593152.4800 | r_Loss: 1108757.4150 | g_Loss: 42867678.7600 | l_Loss: 146000.6322 |
21-12-23 06:46:54.010 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:46:54.011 - INFO: Train epoch 602: Loss: 184897291.1200 | r_Loss: 1050193.5456 | g_Loss: 36742888.5800 | l_Loss: 132654.6871 |
21-12-23 06:48:05.725 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:48:05.726 - INFO: Train epoch 603: Loss: 223290036.6400 | r_Loss: 1123533.3125 | g_Loss: 44404555.6800 | l_Loss: 143724.4187 |
21-12-23 06:49:17.251 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:49:17.251 - INFO: Train epoch 604: Loss: 204363545.2000 | r_Loss: 1070574.2850 | g_Loss: 40629101.1200 | l_Loss: 147464.8943 |
21-12-23 06:50:28.896 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:50:28.897 - INFO: Train epoch 605: Loss: 201126058.7200 | r_Loss: 1077210.3700 | g_Loss: 39985033.2800 | l_Loss: 123680.1464 |
21-12-23 06:51:40.480 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:51:40.481 - INFO: Train epoch 606: Loss: 205779402.7200 | r_Loss: 1096594.8762 | g_Loss: 40908179.5400 | l_Loss: 141908.6312 |
21-12-23 06:52:51.913 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:52:51.913 - INFO: Train epoch 607: Loss: 164954852.6400 | r_Loss: 1009616.5138 | g_Loss: 32768198.8600 | l_Loss: 104244.2007 |
21-12-23 06:54:03.967 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:54:03.967 - INFO: Train epoch 608: Loss: 197109519.0400 | r_Loss: 1092758.8600 | g_Loss: 39178607.7800 | l_Loss: 123719.5361 |
21-12-23 06:55:15.684 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:55:15.685 - INFO: Train epoch 609: Loss: 172145136.1600 | r_Loss: 1025122.8413 | g_Loss: 34199699.7000 | l_Loss: 121514.0175 |
21-12-23 06:56:27.641 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:56:27.642 - INFO: Train epoch 610: Loss: 183981480.3200 | r_Loss: 1053322.7756 | g_Loss: 36552313.2200 | l_Loss: 166589.4325 |
21-12-23 06:57:39.418 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:57:39.419 - INFO: Train epoch 611: Loss: 191598910.2400 | r_Loss: 1039056.2087 | g_Loss: 38085399.4600 | l_Loss: 132857.1682 |
21-12-23 06:58:50.889 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 06:58:50.889 - INFO: Train epoch 612: Loss: 192485089.2000 | r_Loss: 1090336.1812 | g_Loss: 38252879.3600 | l_Loss: 130357.6633 |
21-12-23 07:00:02.324 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:00:02.325 - INFO: Train epoch 613: Loss: 177470560.0000 | r_Loss: 1052173.9675 | g_Loss: 35255058.2200 | l_Loss: 143096.4656 |
21-12-23 07:01:13.857 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:01:13.858 - INFO: Train epoch 614: Loss: 159715049.6800 | r_Loss: 1008616.4637 | g_Loss: 31717880.7600 | l_Loss: 117028.6537 |
21-12-23 07:02:25.363 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:02:25.364 - INFO: Train epoch 615: Loss: 156807853.4400 | r_Loss: 978620.5062 | g_Loss: 31141080.5400 | l_Loss: 123829.4958 |
21-12-23 07:03:36.901 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:03:36.901 - INFO: Train epoch 616: Loss: 152364682.4800 | r_Loss: 985264.0637 | g_Loss: 30251087.1800 | l_Loss: 123981.4872 |
21-12-23 07:04:48.629 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:04:48.629 - INFO: Train epoch 617: Loss: 156854233.6800 | r_Loss: 972194.8113 | g_Loss: 31153558.2000 | l_Loss: 114247.3982 |
21-12-23 07:06:00.121 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:06:00.121 - INFO: Train epoch 618: Loss: 136189774.0800 | r_Loss: 946655.3075 | g_Loss: 27024557.8800 | l_Loss: 120328.9325 |
21-12-23 07:07:11.734 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:07:11.735 - INFO: Train epoch 619: Loss: 134762224.4800 | r_Loss: 945004.6388 | g_Loss: 26740212.1400 | l_Loss: 116160.6611 |
21-12-23 07:08:23.344 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:08:23.345 - INFO: Train epoch 620: Loss: 123556338.1600 | r_Loss: 931824.7350 | g_Loss: 24500423.3600 | l_Loss: 122397.4247 |
21-12-23 07:09:35.022 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:09:35.022 - INFO: Train epoch 621: Loss: 119949220.1600 | r_Loss: 943148.8888 | g_Loss: 23781963.0800 | l_Loss: 96254.8271 |
21-12-23 07:10:46.556 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:10:46.557 - INFO: Train epoch 622: Loss: 90896532.4800 | r_Loss: 828016.7550 | g_Loss: 17994009.5800 | l_Loss: 98467.6586 |
21-12-23 07:11:58.164 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:11:58.164 - INFO: Train epoch 623: Loss: 92615349.4400 | r_Loss: 864387.1750 | g_Loss: 18328388.1800 | l_Loss: 109022.2906 |
21-12-23 07:13:09.822 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:13:09.823 - INFO: Train epoch 624: Loss: 88061754.4800 | r_Loss: 855713.9950 | g_Loss: 17421307.1600 | l_Loss: 99504.9866 |
21-12-23 07:14:21.491 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:14:21.491 - INFO: Train epoch 625: Loss: 74963703.6800 | r_Loss: 790401.1656 | g_Loss: 14815436.5800 | l_Loss: 96118.3635 |
21-12-23 07:15:33.085 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:15:33.086 - INFO: Train epoch 626: Loss: 77667373.5200 | r_Loss: 836760.6138 | g_Loss: 15344453.2800 | l_Loss: 108346.3162 |
21-12-23 07:16:44.785 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:16:44.785 - INFO: Train epoch 627: Loss: 72423299.2800 | r_Loss: 809193.6094 | g_Loss: 14304157.8000 | l_Loss: 93317.3286 |
21-12-23 07:17:56.399 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:17:56.400 - INFO: Train epoch 628: Loss: 78794555.2000 | r_Loss: 851284.0306 | g_Loss: 15569208.0800 | l_Loss: 97231.3470 |
21-12-23 07:19:08.069 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:19:08.070 - INFO: Train epoch 629: Loss: 67764676.9600 | r_Loss: 798531.9900 | g_Loss: 13373221.0300 | l_Loss: 100039.8863 |
21-12-23 07:20:53.570 - INFO: TEST: PSNR_S: -10.2789 | PSNR_C: 1.8093 |
21-12-23 07:20:53.571 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:20:53.571 - INFO: Train epoch 630: Loss: 67839748.8800 | r_Loss: 820919.2681 | g_Loss: 13379475.3700 | l_Loss: 121452.3935 |
21-12-23 07:22:05.160 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:22:05.161 - INFO: Train epoch 631: Loss: 63135812.5600 | r_Loss: 783662.2431 | g_Loss: 12450658.1200 | l_Loss: 98859.0805 |
21-12-23 07:23:16.849 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:23:16.850 - INFO: Train epoch 632: Loss: 69891668.8800 | r_Loss: 840820.4531 | g_Loss: 13789062.8700 | l_Loss: 105534.5962 |
21-12-23 07:24:28.526 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:24:28.527 - INFO: Train epoch 633: Loss: 69930630.0000 | r_Loss: 857650.8250 | g_Loss: 13793548.7000 | l_Loss: 105234.9674 |
21-12-23 07:25:40.052 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:25:40.052 - INFO: Train epoch 634: Loss: 61125707.1200 | r_Loss: 794880.5563 | g_Loss: 12046061.8800 | l_Loss: 100517.1528 |
21-12-23 07:26:51.692 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:26:51.692 - INFO: Train epoch 635: Loss: 62947608.9200 | r_Loss: 818731.1181 | g_Loss: 12404134.0800 | l_Loss: 108207.5200 |
21-12-23 07:28:03.273 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:28:03.273 - INFO: Train epoch 636: Loss: 69382046.3600 | r_Loss: 856576.1519 | g_Loss: 13685004.6800 | l_Loss: 100446.6230 |
21-12-23 07:29:14.937 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:29:14.938 - INFO: Train epoch 637: Loss: 63124793.6000 | r_Loss: 822004.0012 | g_Loss: 12439421.2900 | l_Loss: 105683.1111 |
21-12-23 07:30:26.579 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:30:26.580 - INFO: Train epoch 638: Loss: 59088759.0400 | r_Loss: 810464.0188 | g_Loss: 11635471.2200 | l_Loss: 100938.4848 |
21-12-23 07:31:38.137 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:31:38.138 - INFO: Train epoch 639: Loss: 61325913.2000 | r_Loss: 850314.1325 | g_Loss: 12073726.7900 | l_Loss: 106964.9011 |
21-12-23 07:32:49.780 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:32:49.781 - INFO: Train epoch 640: Loss: 61698631.0400 | r_Loss: 853074.2662 | g_Loss: 12146393.9600 | l_Loss: 113588.0880 |
21-12-23 07:34:01.568 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:34:01.569 - INFO: Train epoch 641: Loss: 60080608.8800 | r_Loss: 840788.0950 | g_Loss: 11829226.6400 | l_Loss: 93687.1879 |
21-12-23 07:35:13.231 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:35:13.232 - INFO: Train epoch 642: Loss: 62012964.8400 | r_Loss: 861315.9012 | g_Loss: 12205523.7900 | l_Loss: 124029.7314 |
21-12-23 07:36:25.012 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:36:25.012 - INFO: Train epoch 643: Loss: 57089096.4400 | r_Loss: 828570.4800 | g_Loss: 11231818.1200 | l_Loss: 101435.7157 |
21-12-23 07:37:36.729 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:37:36.730 - INFO: Train epoch 644: Loss: 53445330.3200 | r_Loss: 804326.6431 | g_Loss: 10504679.0200 | l_Loss: 117608.5127 |
21-12-23 07:38:48.448 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:38:48.449 - INFO: Train epoch 645: Loss: 55418740.1600 | r_Loss: 833195.1887 | g_Loss: 10897913.8100 | l_Loss: 95976.2220 |
21-12-23 07:40:00.018 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:40:00.019 - INFO: Train epoch 646: Loss: 58657607.4800 | r_Loss: 876028.8531 | g_Loss: 11533991.7900 | l_Loss: 111620.1738 |
21-12-23 07:41:11.713 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:41:11.714 - INFO: Train epoch 647: Loss: 54856537.1200 | r_Loss: 854396.1200 | g_Loss: 10777087.5900 | l_Loss: 116702.9671 |
21-12-23 07:42:23.489 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:42:23.489 - INFO: Train epoch 648: Loss: 55137062.3600 | r_Loss: 870371.8575 | g_Loss: 10830635.9900 | l_Loss: 113510.4107 |
21-12-23 07:43:35.192 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:43:35.193 - INFO: Train epoch 649: Loss: 50337756.4000 | r_Loss: 819094.7987 | g_Loss: 9881945.5200 | l_Loss: 108933.6228 |
21-12-23 07:44:46.858 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:44:46.858 - INFO: Train epoch 650: Loss: 52409647.1600 | r_Loss: 863488.9281 | g_Loss: 10285907.5200 | l_Loss: 116621.1762 |
21-12-23 07:45:58.439 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:45:58.440 - INFO: Train epoch 651: Loss: 51628065.6800 | r_Loss: 846141.8287 | g_Loss: 10137475.5400 | l_Loss: 94546.5831 |
21-12-23 07:47:10.046 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:47:10.047 - INFO: Train epoch 652: Loss: 52534666.2000 | r_Loss: 889116.0669 | g_Loss: 10305805.7400 | l_Loss: 116521.1639 |
21-12-23 07:48:21.836 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:48:21.837 - INFO: Train epoch 653: Loss: 50547632.5200 | r_Loss: 872534.5413 | g_Loss: 9913457.8200 | l_Loss: 107809.1283 |
21-12-23 07:49:33.557 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:49:33.558 - INFO: Train epoch 654: Loss: 46970078.7200 | r_Loss: 861816.8556 | g_Loss: 9198671.6500 | l_Loss: 114903.6330 |
21-12-23 07:50:45.288 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:50:45.289 - INFO: Train epoch 655: Loss: 44833038.8400 | r_Loss: 860045.3888 | g_Loss: 8775192.8100 | l_Loss: 97029.4268 |
21-12-23 07:51:56.946 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:51:56.947 - INFO: Train epoch 656: Loss: 47458367.4400 | r_Loss: 886940.1550 | g_Loss: 9292864.3300 | l_Loss: 107105.2183 |
21-12-23 07:53:08.541 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:53:08.541 - INFO: Train epoch 657: Loss: 44977942.2400 | r_Loss: 866162.5563 | g_Loss: 8801579.9500 | l_Loss: 103879.8895 |
21-12-23 07:54:20.289 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:54:20.289 - INFO: Train epoch 658: Loss: 47136382.9200 | r_Loss: 899000.0925 | g_Loss: 9225602.4700 | l_Loss: 109370.6696 |
21-12-23 07:55:31.892 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:55:31.892 - INFO: Train epoch 659: Loss: 46460251.1200 | r_Loss: 912975.9400 | g_Loss: 9086783.0000 | l_Loss: 113360.4937 |
21-12-23 07:57:17.278 - INFO: TEST: PSNR_S: -8.3206 | PSNR_C: 1.4701 |
21-12-23 07:57:17.280 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:57:17.280 - INFO: Train epoch 660: Loss: 48168629.9200 | r_Loss: 922898.9769 | g_Loss: 9423312.1700 | l_Loss: 129170.3698 |
21-12-23 07:58:29.044 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:58:29.045 - INFO: Train epoch 661: Loss: 44714711.4800 | r_Loss: 901556.0369 | g_Loss: 8737483.2300 | l_Loss: 125739.2605 |
21-12-23 07:59:40.761 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 07:59:40.762 - INFO: Train epoch 662: Loss: 41691923.5200 | r_Loss: 893976.0019 | g_Loss: 8138431.0600 | l_Loss: 105792.5574 |
21-12-23 08:00:52.427 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:00:52.427 - INFO: Train epoch 663: Loss: 45921889.9600 | r_Loss: 940065.0337 | g_Loss: 8971948.2550 | l_Loss: 122083.8114 |
21-12-23 08:02:03.968 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:02:03.969 - INFO: Train epoch 664: Loss: 39720050.5200 | r_Loss: 870803.1562 | g_Loss: 7746770.8300 | l_Loss: 115393.2703 |
21-12-23 08:03:15.517 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:03:15.518 - INFO: Train epoch 665: Loss: 44407647.9200 | r_Loss: 951733.5513 | g_Loss: 8670938.3800 | l_Loss: 101222.6450 |
21-12-23 08:04:27.108 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:04:27.109 - INFO: Train epoch 666: Loss: 40459879.8000 | r_Loss: 896718.1875 | g_Loss: 7889032.3300 | l_Loss: 118000.0236 |
21-12-23 08:05:38.856 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:05:38.856 - INFO: Train epoch 667: Loss: 39514370.3200 | r_Loss: 899746.9287 | g_Loss: 7700645.7900 | l_Loss: 111394.5682 |
21-12-23 08:06:50.446 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:06:50.446 - INFO: Train epoch 668: Loss: 40036589.8800 | r_Loss: 950077.0988 | g_Loss: 7792118.2800 | l_Loss: 125921.0782 |
21-12-23 08:08:02.032 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:08:02.033 - INFO: Train epoch 669: Loss: 37361738.9200 | r_Loss: 929937.1325 | g_Loss: 7262141.8650 | l_Loss: 121093.0181 |
21-12-23 08:09:13.727 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:09:13.727 - INFO: Train epoch 670: Loss: 36456533.0400 | r_Loss: 922978.2338 | g_Loss: 7084626.1100 | l_Loss: 110424.5288 |
21-12-23 08:10:25.353 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:10:25.353 - INFO: Train epoch 671: Loss: 34089864.2400 | r_Loss: 924269.7288 | g_Loss: 6612023.0700 | l_Loss: 105479.1968 |
21-12-23 08:11:36.968 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:11:36.968 - INFO: Train epoch 672: Loss: 34206007.3200 | r_Loss: 999340.9962 | g_Loss: 6618739.9250 | l_Loss: 112966.7977 |
21-12-23 08:12:48.584 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:12:48.584 - INFO: Train epoch 673: Loss: 33615893.1200 | r_Loss: 1048601.7150 | g_Loss: 6486004.7550 | l_Loss: 137267.5980 |
21-12-23 08:14:00.295 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:14:00.295 - INFO: Train epoch 674: Loss: 32164512.8000 | r_Loss: 1067892.7075 | g_Loss: 6190948.9700 | l_Loss: 141874.9606 |
21-12-23 08:15:11.891 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:15:11.892 - INFO: Train epoch 675: Loss: 31131519.3600 | r_Loss: 1096657.2712 | g_Loss: 5978551.9350 | l_Loss: 142102.5273 |
21-12-23 08:16:23.618 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:16:23.619 - INFO: Train epoch 676: Loss: 29829637.6400 | r_Loss: 1110575.7725 | g_Loss: 5718411.5200 | l_Loss: 127004.4289 |
21-12-23 08:17:35.243 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:17:35.244 - INFO: Train epoch 677: Loss: 28290148.3200 | r_Loss: 1112440.7750 | g_Loss: 5410184.1800 | l_Loss: 126786.7232 |
21-12-23 08:18:46.838 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:18:46.839 - INFO: Train epoch 678: Loss: 28370519.7800 | r_Loss: 1140281.0244 | g_Loss: 5416022.6250 | l_Loss: 150125.7048 |
21-12-23 08:19:58.457 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:19:58.458 - INFO: Train epoch 679: Loss: 29637712.2000 | r_Loss: 1266459.3925 | g_Loss: 5641654.0550 | l_Loss: 162982.3919 |
21-12-23 08:21:10.093 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:21:10.093 - INFO: Train epoch 680: Loss: 26203652.3800 | r_Loss: 1111855.9888 | g_Loss: 4986052.6850 | l_Loss: 161533.2257 |
21-12-23 08:22:21.889 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:22:21.890 - INFO: Train epoch 681: Loss: 27004421.6200 | r_Loss: 1204261.5275 | g_Loss: 5128406.1550 | l_Loss: 158129.3767 |
21-12-23 08:23:33.450 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:23:33.450 - INFO: Train epoch 682: Loss: 26094673.4400 | r_Loss: 1207099.2125 | g_Loss: 4953211.1200 | l_Loss: 121518.7352 |
21-12-23 08:24:45.149 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:24:45.149 - INFO: Train epoch 683: Loss: 26732533.8400 | r_Loss: 1304803.2325 | g_Loss: 5058096.2400 | l_Loss: 137249.2602 |
21-12-23 08:25:56.721 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:25:56.721 - INFO: Train epoch 684: Loss: 25694972.7800 | r_Loss: 1286016.3762 | g_Loss: 4852142.1700 | l_Loss: 148245.2970 |
21-12-23 08:27:08.477 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:27:08.478 - INFO: Train epoch 685: Loss: 25806595.6600 | r_Loss: 1307444.5913 | g_Loss: 4870892.7250 | l_Loss: 144687.7984 |
21-12-23 08:28:19.993 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:28:19.994 - INFO: Train epoch 686: Loss: 25873526.5400 | r_Loss: 1394798.8300 | g_Loss: 4859231.8300 | l_Loss: 182568.3155 |
21-12-23 08:29:31.673 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:29:31.674 - INFO: Train epoch 687: Loss: 23892662.1600 | r_Loss: 1363960.7650 | g_Loss: 4470850.1050 | l_Loss: 174450.7663 |
21-12-23 08:30:43.348 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:30:43.348 - INFO: Train epoch 688: Loss: 23599915.8600 | r_Loss: 1362919.3162 | g_Loss: 4413273.4600 | l_Loss: 170629.3304 |
21-12-23 08:31:54.940 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:31:54.941 - INFO: Train epoch 689: Loss: 25428884.0400 | r_Loss: 1576873.7100 | g_Loss: 4732591.2500 | l_Loss: 189053.9780 |
21-12-23 08:33:40.418 - INFO: TEST: PSNR_S: -5.3467 | PSNR_C: -1.0991 |
21-12-23 08:33:40.419 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:33:40.419 - INFO: Train epoch 690: Loss: 23897444.3600 | r_Loss: 1653187.2712 | g_Loss: 4413615.7900 | l_Loss: 176178.1972 |
21-12-23 08:34:51.954 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:34:51.955 - INFO: Train epoch 691: Loss: 22366142.6800 | r_Loss: 1535119.5762 | g_Loss: 4129352.2250 | l_Loss: 184261.9464 |
21-12-23 08:36:03.545 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:36:03.546 - INFO: Train epoch 692: Loss: 22936492.4600 | r_Loss: 1733740.0012 | g_Loss: 4195908.9050 | l_Loss: 223208.0294 |
21-12-23 08:37:15.205 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:37:15.205 - INFO: Train epoch 693: Loss: 21958441.3800 | r_Loss: 1733629.9837 | g_Loss: 3997467.4150 | l_Loss: 237474.4165 |
21-12-23 08:38:26.804 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:38:26.805 - INFO: Train epoch 694: Loss: 21659330.9800 | r_Loss: 1801592.1250 | g_Loss: 3920727.9950 | l_Loss: 254098.7306 |
21-12-23 08:39:38.438 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:39:38.439 - INFO: Train epoch 695: Loss: 21530520.5400 | r_Loss: 1780270.6838 | g_Loss: 3910180.6000 | l_Loss: 199346.8565 |
21-12-23 08:40:49.941 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:40:49.942 - INFO: Train epoch 696: Loss: 21781581.8600 | r_Loss: 1905639.1038 | g_Loss: 3916966.2500 | l_Loss: 291111.5294 |
21-12-23 08:42:01.556 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:42:01.557 - INFO: Train epoch 697: Loss: 21418818.5400 | r_Loss: 1850728.4812 | g_Loss: 3872693.2600 | l_Loss: 204623.8649 |
21-12-23 08:43:13.217 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:43:13.218 - INFO: Train epoch 698: Loss: 20543897.7000 | r_Loss: 1861788.1612 | g_Loss: 3688399.3125 | l_Loss: 240113.0713 |
21-12-23 08:44:24.721 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:44:24.721 - INFO: Train epoch 699: Loss: 20177230.3200 | r_Loss: 1801076.3987 | g_Loss: 3627756.3900 | l_Loss: 237371.9056 |
21-12-23 08:45:36.175 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:45:36.176 - INFO: Train epoch 700: Loss: 20631020.0800 | r_Loss: 1881722.0837 | g_Loss: 3701829.6450 | l_Loss: 240149.8105 |
21-12-23 08:46:47.823 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:46:47.823 - INFO: Train epoch 701: Loss: 20379790.0600 | r_Loss: 1997369.5662 | g_Loss: 3624054.8150 | l_Loss: 262146.2973 |
21-12-23 08:47:59.492 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:47:59.492 - INFO: Train epoch 702: Loss: 20627120.6600 | r_Loss: 2019960.7025 | g_Loss: 3670962.4000 | l_Loss: 252347.6248 |
21-12-23 08:49:11.008 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:49:11.009 - INFO: Train epoch 703: Loss: 21061596.0600 | r_Loss: 2264558.6888 | g_Loss: 3704818.9550 | l_Loss: 272942.2992 |
21-12-23 08:50:22.552 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:50:22.553 - INFO: Train epoch 704: Loss: 19763132.7800 | r_Loss: 2038603.3050 | g_Loss: 3506642.2600 | l_Loss: 191318.4615 |
21-12-23 08:51:34.083 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:51:34.083 - INFO: Train epoch 705: Loss: 19450186.0200 | r_Loss: 2026726.4325 | g_Loss: 3428489.5500 | l_Loss: 281011.8970 |
21-12-23 08:52:45.774 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:52:45.774 - INFO: Train epoch 706: Loss: 19581083.8800 | r_Loss: 2122055.6250 | g_Loss: 3417527.8650 | l_Loss: 371389.0206 |
21-12-23 08:53:57.255 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:53:57.256 - INFO: Train epoch 707: Loss: 19364130.6800 | r_Loss: 2070567.9725 | g_Loss: 3407761.0800 | l_Loss: 254757.2705 |
21-12-23 08:55:08.894 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:55:08.895 - INFO: Train epoch 708: Loss: 19178584.0000 | r_Loss: 2183483.4300 | g_Loss: 3345878.5450 | l_Loss: 265707.6803 |
21-12-23 08:56:20.445 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:56:20.445 - INFO: Train epoch 709: Loss: 19304216.1400 | r_Loss: 2251750.7275 | g_Loss: 3357556.7550 | l_Loss: 264681.7369 |
21-12-23 08:57:31.970 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:57:31.970 - INFO: Train epoch 710: Loss: 18939246.5200 | r_Loss: 2310038.4450 | g_Loss: 3270580.3100 | l_Loss: 276306.5292 |
21-12-23 08:58:43.605 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:58:43.605 - INFO: Train epoch 711: Loss: 18632885.5400 | r_Loss: 2325031.1475 | g_Loss: 3207531.2250 | l_Loss: 270198.2048 |
21-12-23 08:59:55.153 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 08:59:55.153 - INFO: Train epoch 712: Loss: 17756690.4000 | r_Loss: 2297872.8675 | g_Loss: 3046037.4825 | l_Loss: 228630.1697 |
21-12-23 09:01:06.656 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:01:06.657 - INFO: Train epoch 713: Loss: 17685599.6400 | r_Loss: 2264002.8350 | g_Loss: 3032762.4100 | l_Loss: 257784.6343 |
21-12-23 09:02:18.186 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:02:18.187 - INFO: Train epoch 714: Loss: 17212725.5200 | r_Loss: 2471556.4450 | g_Loss: 2893083.4100 | l_Loss: 275752.0733 |
21-12-23 09:03:29.763 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:03:29.764 - INFO: Train epoch 715: Loss: 16032564.3400 | r_Loss: 2106445.5162 | g_Loss: 2730766.4850 | l_Loss: 272286.3216 |
21-12-23 09:04:41.197 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:04:41.197 - INFO: Train epoch 716: Loss: 15923125.2000 | r_Loss: 2273976.4512 | g_Loss: 2662748.2525 | l_Loss: 335407.5166 |
21-12-23 09:05:52.678 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:05:52.678 - INFO: Train epoch 717: Loss: 16117188.8400 | r_Loss: 2453670.6775 | g_Loss: 2670701.6700 | l_Loss: 310009.7373 |
21-12-23 09:07:04.159 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:07:04.159 - INFO: Train epoch 718: Loss: 15914250.2800 | r_Loss: 2524379.8275 | g_Loss: 2605778.9825 | l_Loss: 360975.6768 |
21-12-23 09:08:15.661 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:08:15.662 - INFO: Train epoch 719: Loss: 16289348.4200 | r_Loss: 2767300.0175 | g_Loss: 2643973.7825 | l_Loss: 302179.5767 |
21-12-23 09:10:01.140 - INFO: TEST: PSNR_S: -2.9895 | PSNR_C: -2.9415 |
21-12-23 09:10:01.141 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:10:01.141 - INFO: Train epoch 720: Loss: 15160841.5000 | r_Loss: 2550901.7250 | g_Loss: 2463365.5425 | l_Loss: 293112.0502 |
21-12-23 09:11:12.597 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:11:12.597 - INFO: Train epoch 721: Loss: 15236670.2800 | r_Loss: 2681793.4050 | g_Loss: 2444960.7500 | l_Loss: 330072.9830 |
21-12-23 09:12:24.072 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:12:24.073 - INFO: Train epoch 722: Loss: 15033387.4600 | r_Loss: 2597366.3550 | g_Loss: 2411054.8700 | l_Loss: 380746.7720 |
21-12-23 09:13:35.685 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:13:35.685 - INFO: Train epoch 723: Loss: 14601922.3400 | r_Loss: 2526218.8875 | g_Loss: 2357974.5700 | l_Loss: 285830.5411 |
21-12-23 09:14:47.169 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:14:47.170 - INFO: Train epoch 724: Loss: 14424741.3600 | r_Loss: 2740522.6900 | g_Loss: 2277243.0375 | l_Loss: 298003.5609 |
21-12-23 09:15:58.679 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:15:58.680 - INFO: Train epoch 725: Loss: 14017187.7100 | r_Loss: 2614121.2588 | g_Loss: 2197083.4075 | l_Loss: 417649.3145 |
21-12-23 09:17:10.143 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:17:10.144 - INFO: Train epoch 726: Loss: 14130631.9400 | r_Loss: 2905722.2450 | g_Loss: 2182737.5425 | l_Loss: 311222.0709 |
21-12-23 09:18:21.507 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:18:21.508 - INFO: Train epoch 727: Loss: 13639048.8200 | r_Loss: 2646995.2850 | g_Loss: 2131122.9575 | l_Loss: 336438.9068 |
21-12-23 09:19:32.988 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:19:32.989 - INFO: Train epoch 728: Loss: 13726933.0600 | r_Loss: 2659083.0175 | g_Loss: 2145709.3225 | l_Loss: 339303.4544 |
21-12-23 09:20:44.447 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:20:44.448 - INFO: Train epoch 729: Loss: 13591318.7900 | r_Loss: 2836316.5825 | g_Loss: 2077279.9875 | l_Loss: 368602.2794 |
21-12-23 09:21:55.959 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:21:55.960 - INFO: Train epoch 730: Loss: 13211514.9400 | r_Loss: 2805671.9475 | g_Loss: 2022602.6200 | l_Loss: 292829.9267 |
21-12-23 09:23:07.403 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:23:07.404 - INFO: Train epoch 731: Loss: 13127105.4400 | r_Loss: 2686136.7750 | g_Loss: 2037143.0550 | l_Loss: 255253.4432 |
21-12-23 09:24:18.788 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:24:18.788 - INFO: Train epoch 732: Loss: 12440257.9600 | r_Loss: 2856878.1825 | g_Loss: 1858866.4900 | l_Loss: 289047.2602 |
21-12-23 09:25:30.208 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:25:30.209 - INFO: Train epoch 733: Loss: 12269793.4200 | r_Loss: 2790974.8400 | g_Loss: 1844600.4350 | l_Loss: 255816.3684 |
21-12-23 09:26:41.594 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:26:41.595 - INFO: Train epoch 734: Loss: 12384653.5400 | r_Loss: 2841457.0700 | g_Loss: 1837752.3575 | l_Loss: 354434.7245 |
21-12-23 09:27:53.050 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:27:53.050 - INFO: Train epoch 735: Loss: 11493798.2300 | r_Loss: 2498860.6175 | g_Loss: 1742483.3250 | l_Loss: 282520.9400 |
21-12-23 09:29:04.506 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:29:04.507 - INFO: Train epoch 736: Loss: 11372856.2600 | r_Loss: 2652595.9875 | g_Loss: 1677050.0025 | l_Loss: 335010.2470 |
21-12-23 09:30:16.002 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:30:16.002 - INFO: Train epoch 737: Loss: 9417554195.2600 | r_Loss: 2781802.8588 | g_Loss: 1882886499.7450 | l_Loss: 340175.6546 |
21-12-23 09:31:27.482 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:31:27.482 - INFO: Train epoch 738: Loss: 1211367889.2800 | r_Loss: 758270.1637 | g_Loss: 242102727.2800 | l_Loss: 95981.9461 |
21-12-23 09:32:38.934 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:32:38.934 - INFO: Train epoch 739: Loss: 362397786.5600 | r_Loss: 737582.9550 | g_Loss: 72313710.8800 | l_Loss: 91647.3198 |
21-12-23 09:33:50.512 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:33:50.513 - INFO: Train epoch 740: Loss: 323425886.0800 | r_Loss: 746265.8512 | g_Loss: 64519525.0400 | l_Loss: 81995.4438 |
21-12-23 09:35:01.927 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:35:01.928 - INFO: Train epoch 741: Loss: 276828122.7200 | r_Loss: 719118.5900 | g_Loss: 55203219.2800 | l_Loss: 92909.8851 |
21-12-23 09:36:13.407 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:36:13.407 - INFO: Train epoch 742: Loss: 232409402.5600 | r_Loss: 724670.4206 | g_Loss: 46317853.6400 | l_Loss: 95463.6913 |
21-12-23 09:37:24.935 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:37:24.936 - INFO: Train epoch 743: Loss: 204889567.5200 | r_Loss: 715549.1963 | g_Loss: 40815100.4000 | l_Loss: 98516.5876 |
21-12-23 09:38:36.436 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:38:36.437 - INFO: Train epoch 744: Loss: 195117994.2400 | r_Loss: 719454.7788 | g_Loss: 38861255.8800 | l_Loss: 92260.0813 |
21-12-23 09:39:47.949 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:39:47.949 - INFO: Train epoch 745: Loss: 194697128.9600 | r_Loss: 744439.0637 | g_Loss: 38771361.9200 | l_Loss: 95881.3857 |
21-12-23 09:40:59.230 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:40:59.231 - INFO: Train epoch 746: Loss: 164023612.6400 | r_Loss: 709407.3825 | g_Loss: 32642727.8000 | l_Loss: 100566.6169 |
21-12-23 09:42:10.671 - INFO: Learning rate: 3.1622776601683795e-05
21-12-23 09:42:10.671 - INFO: Train epoch 747: Loss: 154628368.8000 | r_Loss: 710731.2412 | g_Loss: 30765499.0400 | l_Loss: 90143.9358 |
================================================
FILE: logging/train__211223-100502.log
================================================
21-12-23 10:05:02.218 - INFO: DataParallel(
(module): Model(
(model): Hinet(
(inv1): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv2): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv3): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv4): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv5): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv6): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv7): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv8): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv9): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv10): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv11): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv12): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv13): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv14): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv15): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
(inv16): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
)
)
)
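The module dump above shows each `INV_block` built from three identical `ResidualDenseBlock_out` subnets whose conv in-channels grow 12, 44, 76, 108, 140: every conv sees the 12-channel block input concatenated with all previous 32-channel feature maps. A minimal sketch reconstructed purely from those printed shapes (class and argument names are assumptions, not necessarily the repo's own in `rrdb_denselayer.py`):

```python
import torch
import torch.nn as nn

class DenseBlockSketch(nn.Module):
    """Dense block matching the printed shapes: conv_i takes the block
    input (12 ch) plus one 32-channel map per preceding conv, so the
    in-channel counts are 12, 44, 76, 108, 140."""
    def __init__(self, channel_in=12, channel_out=12, gc=32):
        super().__init__()
        self.conv1 = nn.Conv2d(channel_in, gc, 3, 1, 1)
        self.conv2 = nn.Conv2d(channel_in + gc, gc, 3, 1, 1)
        self.conv3 = nn.Conv2d(channel_in + 2 * gc, gc, 3, 1, 1)
        self.conv4 = nn.Conv2d(channel_in + 3 * gc, gc, 3, 1, 1)
        self.conv5 = nn.Conv2d(channel_in + 4 * gc, channel_out, 3, 1, 1)
        self.lrelu = nn.LeakyReLU(negative_slope=0.01, inplace=True)

    def forward(self, x):
        # Concatenate the input with every intermediate feature map.
        x1 = self.lrelu(self.conv1(x))
        x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
        x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
        x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
        return self.conv5(torch.cat((x, x1, x2, x3, x4), 1))

block = DenseBlockSketch()
out = block(torch.zeros(1, 12, 16, 16))
print(tuple(out.shape))  # (1, 12, 16, 16): output channels match the input
```

The 12-channel width is consistent with a 3-channel RGB image after a 2x2 squeeze (3 x 4 = 12), which is how invertible hiding networks typically feed images into coupling blocks.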
21-12-23 10:06:13.296 - INFO: Learning rate: 1e-05
21-12-23 10:06:13.296 - INFO: Train epoch 501: Loss: 5333.5981 | r_Loss: 684.6151 | g_Loss: 1694.3937 | l_Loss: 216.1288 |
21-12-23 10:07:24.838 - INFO: Learning rate: 1e-05
21-12-23 10:07:24.839 - INFO: Train epoch 502: Loss: 5230.3957 | r_Loss: 671.7932 | g_Loss: 1672.4524 | l_Loss: 198.9772 |
21-12-23 10:08:36.441 - INFO: Learning rate: 1e-05
21-12-23 10:08:36.441 - INFO: Train epoch 503: Loss: 5370.5528 | r_Loss: 693.1737 | g_Loss: 1708.1788 | l_Loss: 196.5057 |
21-12-23 10:09:47.896 - INFO: Learning rate: 1e-05
21-12-23 10:09:47.897 - INFO: Train epoch 504: Loss: 5501.9308 | r_Loss: 724.5668 | g_Loss: 1692.4590 | l_Loss: 186.6376 |
21-12-23 10:10:59.498 - INFO: Learning rate: 1e-05
21-12-23 10:10:59.498 - INFO: Train epoch 505: Loss: 4856.7065 | r_Loss: 607.5158 | g_Loss: 1610.2577 | l_Loss: 208.8696 |
21-12-23 10:12:11.093 - INFO: Learning rate: 1e-05
21-12-23 10:12:11.093 - INFO: Train epoch 506: Loss: 5003.4981 | r_Loss: 641.7618 | g_Loss: 1597.8735 | l_Loss: 196.8156 |
21-12-23 10:13:22.569 - INFO: Learning rate: 1e-05
21-12-23 10:13:22.570 - INFO: Train epoch 507: Loss: 5597.0475 | r_Loss: 736.1739 | g_Loss: 1698.0675 | l_Loss: 218.1102 |
21-12-23 10:14:34.112 - INFO: Learning rate: 1e-05
21-12-23 10:14:34.112 - INFO: Train epoch 508: Loss: 4950.7749 | r_Loss: 637.3977 | g_Loss: 1579.9180 | l_Loss: 183.8682 |
21-12-23 10:15:45.481 - INFO: Learning rate: 1e-05
21-12-23 10:15:45.481 - INFO: Train epoch 509: Loss: 4820.9102 | r_Loss: 609.9854 | g_Loss: 1560.8623 | l_Loss: 210.1209 |
21-12-23 10:17:30.675 - INFO: TEST: PSNR_S: 33.5715 | PSNR_C: 29.7774 |
21-12-23 10:17:30.676 - INFO: Learning rate: 1e-05
21-12-23 10:17:30.676 - INFO: Train epoch 510: Loss: 4890.9884 | r_Loss: 634.5965 | g_Loss: 1519.8821 | l_Loss: 198.1240 |
21-12-23 10:18:42.165 - INFO: Learning rate: 1e-05
21-12-23 10:18:42.165 - INFO: Train epoch 511: Loss: 4656.6479 | r_Loss: 594.9295 | g_Loss: 1483.3086 | l_Loss: 198.6917 |
21-12-23 10:19:53.468 - INFO: Learning rate: 1e-05
21-12-23 10:19:53.469 - INFO: Train epoch 512: Loss: 4612.4464 | r_Loss: 597.3787 | g_Loss: 1451.2765 | l_Loss: 174.2764 |
21-12-23 10:21:04.954 - INFO: Learning rate: 1e-05
21-12-23 10:21:04.954 - INFO: Train epoch 513: Loss: 4330.2585 | r_Loss: 564.1655 | g_Loss: 1348.4075 | l_Loss: 161.0233 |
21-12-23 10:22:16.380 - INFO: Learning rate: 1e-05
21-12-23 10:22:16.380 - INFO: Train epoch 514: Loss: 27378.5879 | r_Loss: 4638.6651 | g_Loss: 3764.3318 | l_Loss: 420.9304 |
21-12-23 10:23:27.807 - INFO: Learning rate: 1e-05
21-12-23 10:23:27.808 - INFO: Train epoch 515: Loss: 7116.4908 | r_Loss: 847.5854 | g_Loss: 2565.8531 | l_Loss: 312.7105 |
21-12-23 10:24:39.283 - INFO: Learning rate: 1e-05
21-12-23 10:24:39.284 - INFO: Train epoch 516: Loss: 5586.6713 | r_Loss: 653.9187 | g_Loss: 2028.6815 | l_Loss: 288.3961 |
21-12-23 10:25:50.810 - INFO: Learning rate: 1e-05
21-12-23 10:25:50.811 - INFO: Train epoch 517: Loss: 5484.6187 | r_Loss: 663.8721 | g_Loss: 1919.0955 | l_Loss: 246.1625 |
21-12-23 10:27:02.376 - INFO: Learning rate: 1e-05
21-12-23 10:27:02.376 - INFO: Train epoch 518: Loss: 4899.8742 | r_Loss: 576.1682 | g_Loss: 1809.8109 | l_Loss: 209.2224 |
21-12-23 10:28:13.879 - INFO: Learning rate: 1e-05
21-12-23 10:28:13.879 - INFO: Train epoch 519: Loss: 4970.4092 | r_Loss: 586.3060 | g_Loss: 1836.3392 | l_Loss: 202.5400 |
21-12-23 10:29:25.401 - INFO: Learning rate: 1e-05
21-12-23 10:29:25.402 - INFO: Train epoch 520: Loss: 4729.9816 | r_Loss: 566.0047 | g_Loss: 1711.7860 | l_Loss: 188.1723 |
21-12-23 10:30:36.888 - INFO: Learning rate: 1e-05
21-12-23 10:30:36.888 - INFO: Train epoch 521: Loss: 4839.3611 | r_Loss: 570.0642 | g_Loss: 1771.2683 | l_Loss: 217.7715 |
21-12-23 10:31:48.483 - INFO: Learning rate: 1e-05
21-12-23 10:31:48.483 - INFO: Train epoch 522: Loss: 4560.1329 | r_Loss: 534.0276 | g_Loss: 1678.2235 | l_Loss: 211.7713 |
21-12-23 10:33:00.056 - INFO: Learning rate: 1e-05
21-12-23 10:33:00.057 - INFO: Train epoch 523: Loss: 4197.9750 | r_Loss: 483.8682 | g_Loss: 1568.9835 | l_Loss: 209.6506 |
21-12-23 10:34:11.506 - INFO: Learning rate: 1e-05
21-12-23 10:34:11.506 - INFO: Train epoch 524: Loss: 4502.3567 | r_Loss: 547.0884 | g_Loss: 1582.8807 | l_Loss: 184.0342 |
21-12-23 10:35:23.200 - INFO: Learning rate: 1e-05
21-12-23 10:35:23.201 - INFO: Train epoch 525: Loss: 4619.3896 | r_Loss: 567.2574 | g_Loss: 1595.8040 | l_Loss: 187.2985 |
21-12-23 10:36:34.824 - INFO: Learning rate: 1e-05
21-12-23 10:36:34.825 - INFO: Train epoch 526: Loss: 4465.4261 | r_Loss: 533.4336 | g_Loss: 1576.7989 | l_Loss: 221.4591 |
21-12-23 10:37:46.408 - INFO: Learning rate: 1e-05
21-12-23 10:37:46.408 - INFO: Train epoch 527: Loss: 4469.2376 | r_Loss: 546.0716 | g_Loss: 1577.7927 | l_Loss: 161.0867 |
21-12-23 10:38:57.991 - INFO: Learning rate: 1e-05
21-12-23 10:38:57.991 - INFO: Train epoch 528: Loss: 4563.2396 | r_Loss: 556.3824 | g_Loss: 1562.7311 | l_Loss: 218.5965 |
21-12-23 10:40:09.469 - INFO: Learning rate: 1e-05
21-12-23 10:40:09.469 - INFO: Train epoch 529: Loss: 4240.0762 | r_Loss: 516.0205 | g_Loss: 1494.8668 | l_Loss: 165.1069 |
21-12-23 10:41:21.062 - INFO: Learning rate: 1e-05
21-12-23 10:41:21.063 - INFO: Train epoch 530: Loss: 4134.1073 | r_Loss: 512.5174 | g_Loss: 1407.9547 | l_Loss: 163.5657 |
21-12-23 10:42:32.609 - INFO: Learning rate: 1e-05
21-12-23 10:42:32.610 - INFO: Train epoch 531: Loss: 4468.9494 | r_Loss: 552.9248 | g_Loss: 1508.3003 | l_Loss: 196.0249 |
21-12-23 10:43:44.099 - INFO: Learning rate: 1e-05
21-12-23 10:43:44.099 - INFO: Train epoch 532: Loss: 3992.3783 | r_Loss: 484.7236 | g_Loss: 1399.6632 | l_Loss: 169.0971 |
21-12-23 10:44:55.693 - INFO: Learning rate: 1e-05
21-12-23 10:44:55.694 - INFO: Train epoch 533: Loss: 4222.8661 | r_Loss: 528.7561 | g_Loss: 1425.4025 | l_Loss: 153.6829 |
21-12-23 10:46:07.267 - INFO: Learning rate: 1e-05
21-12-23 10:46:07.268 - INFO: Train epoch 534: Loss: 4534.6596 | r_Loss: 587.4505 | g_Loss: 1427.3123 | l_Loss: 170.0947 |
21-12-23 10:47:18.946 - INFO: Learning rate: 1e-05
21-12-23 10:47:18.946 - INFO: Train epoch 535: Loss: 4065.3345 | r_Loss: 501.5339 | g_Loss: 1373.9592 | l_Loss: 183.7057 |
21-12-23 10:48:30.472 - INFO: Learning rate: 1e-05
21-12-23 10:48:30.472 - INFO: Train epoch 536: Loss: 4176.2602 | r_Loss: 524.1068 | g_Loss: 1383.4934 | l_Loss: 172.2329 |
21-12-23 10:49:42.132 - INFO: Learning rate: 1e-05
21-12-23 10:49:42.132 - INFO: Train epoch 537: Loss: 4128.6631 | r_Loss: 524.7287 | g_Loss: 1346.5154 | l_Loss: 158.5041 |
21-12-23 10:50:53.612 - INFO: Learning rate: 1e-05
21-12-23 10:50:53.613 - INFO: Train epoch 538: Loss: 4374.3897 | r_Loss: 566.1836 | g_Loss: 1334.1037 | l_Loss: 209.3681 |
21-12-23 10:52:05.212 - INFO: Learning rate: 1e-05
21-12-23 10:52:05.212 - INFO: Train epoch 539: Loss: 4043.3745 | r_Loss: 500.8336 | g_Loss: 1361.3735 | l_Loss: 177.8329 |
21-12-23 10:53:50.694 - INFO: TEST: PSNR_S: 35.2460 | PSNR_C: 30.2942 |
21-12-23 10:53:50.695 - INFO: Learning rate: 1e-05
21-12-23 10:53:50.696 - INFO: Train epoch 540: Loss: 4074.6017 | r_Loss: 513.5159 | g_Loss: 1344.8008 | l_Loss: 162.2212 |
21-12-23 10:55:02.344 - INFO: Learning rate: 1e-05
21-12-23 10:55:02.345 - INFO: Train epoch 541: Loss: 4082.7191 | r_Loss: 521.9114 | g_Loss: 1309.5539 | l_Loss: 163.6080 |
21-12-23 10:56:14.134 - INFO: Learning rate: 1e-05
21-12-23 10:56:14.134 - INFO: Train epoch 542: Loss: 4099.1094 | r_Loss: 527.6644 | g_Loss: 1303.6317 | l_Loss: 157.1555 |
21-12-23 10:57:25.815 - INFO: Learning rate: 1e-05
21-12-23 10:57:25.815 - INFO: Train epoch 543: Loss: 3940.2907 | r_Loss: 499.8314 | g_Loss: 1286.9467 | l_Loss: 154.1870 |
21-12-23 10:58:37.327 - INFO: Learning rate: 1e-05
21-12-23 10:58:37.327 - INFO: Train epoch 544: Loss: 3937.8674 | r_Loss: 499.2811 | g_Loss: 1263.6828 | l_Loss: 177.7792 |
21-12-23 10:59:48.943 - INFO: Learning rate: 1e-05
21-12-23 10:59:48.943 - INFO: Train epoch 545: Loss: 3871.6920 | r_Loss: 494.9316 | g_Loss: 1230.0400 | l_Loss: 166.9938 |
21-12-23 11:01:00.582 - INFO: Learning rate: 1e-05
21-12-23 11:01:00.582 - INFO: Train epoch 546: Loss: 4093.3081 | r_Loss: 525.8604 | g_Loss: 1285.9035 | l_Loss: 178.1028 |
21-12-23 11:02:12.223 - INFO: Learning rate: 1e-05
21-12-23 11:02:12.223 - INFO: Train epoch 547: Loss: 3914.1195 | r_Loss: 500.8673 | g_Loss: 1256.8843 | l_Loss: 152.8987 |
21-12-23 11:03:23.797 - INFO: Learning rate: 1e-05
21-12-23 11:03:23.797 - INFO: Train epoch 548: Loss: 3991.1681 | r_Loss: 506.8451 | g_Loss: 1279.4473 | l_Loss: 177.4951 |
21-12-23 11:04:35.494 - INFO: Learning rate: 1e-05
21-12-23 11:04:35.494 - INFO: Train epoch 549: Loss: 4116.7702 | r_Loss: 536.7680 | g_Loss: 1266.7762 | l_Loss: 166.1539 |
21-12-23 11:05:47.109 - INFO: Learning rate: 1e-05
21-12-23 11:05:47.110 - INFO: Train epoch 550: Loss: 3998.6238 | r_Loss: 517.7928 | g_Loss: 1246.5840 | l_Loss: 163.0761 |
21-12-23 11:06:58.958 - INFO: Learning rate: 1e-05
21-12-23 11:06:58.958 - INFO: Train epoch 551: Loss: 3744.7837 | r_Loss: 467.1726 | g_Loss: 1244.8799 | l_Loss: 164.0411 |
21-12-23 11:08:10.545 - INFO: Learning rate: 1e-05
21-12-23 11:08:10.545 - INFO: Train epoch 552: Loss: 3956.3266 | r_Loss: 523.5772 | g_Loss: 1211.9694 | l_Loss: 126.4710 |
21-12-23 11:09:22.038 - INFO: Learning rate: 1e-05
21-12-23 11:09:22.039 - INFO: Train epoch 553: Loss: 3573.2057 | r_Loss: 449.7874 | g_Loss: 1181.1788 | l_Loss: 143.0901 |
21-12-23 11:10:33.581 - INFO: Learning rate: 1e-05
21-12-23 11:10:33.582 - INFO: Train epoch 554: Loss: 3678.0730 | r_Loss: 480.1946 | g_Loss: 1125.4559 | l_Loss: 151.6442 |
21-12-23 11:11:45.036 - INFO: Learning rate: 1e-05
21-12-23 11:11:45.037 - INFO: Train epoch 555: Loss: 3632.1381 | r_Loss: 460.2220 | g_Loss: 1179.4551 | l_Loss: 151.5732 |
21-12-23 11:12:56.601 - INFO: Learning rate: 1e-05
21-12-23 11:12:56.601 - INFO: Train epoch 556: Loss: 3523.3741 | r_Loss: 442.8117 | g_Loss: 1168.2173 | l_Loss: 141.0980 |
21-12-23 11:14:08.046 - INFO: Learning rate: 1e-05
21-12-23 11:14:08.047 - INFO: Train epoch 557: Loss: 3843.4709 | r_Loss: 510.4278 | g_Loss: 1160.4463 | l_Loss: 130.8856 |
21-12-23 11:15:19.615 - INFO: Learning rate: 1e-05
21-12-23 11:15:19.616 - INFO: Train epoch 558: Loss: 3347.8744 | r_Loss: 416.2964 | g_Loss: 1131.4718 | l_Loss: 134.9207 |
21-12-23 11:16:31.156 - INFO: Learning rate: 1e-05
21-12-23 11:16:31.156 - INFO: Train epoch 559: Loss: 3771.4207 | r_Loss: 494.9288 | g_Loss: 1168.2115 | l_Loss: 128.5655 |
21-12-23 11:17:42.829 - INFO: Learning rate: 1e-05
21-12-23 11:17:42.829 - INFO: Train epoch 560: Loss: 3483.1330 | r_Loss: 437.6874 | g_Loss: 1155.3596 | l_Loss: 139.3364 |
21-12-23 11:18:54.524 - INFO: Learning rate: 1e-05
21-12-23 11:18:54.525 - INFO: Train epoch 561: Loss: 3806.1281 | r_Loss: 496.1071 | g_Loss: 1192.6769 | l_Loss: 132.9155 |
21-12-23 11:20:06.138 - INFO: Learning rate: 1e-05
21-12-23 11:20:06.138 - INFO: Train epoch 562: Loss: 3499.2408 | r_Loss: 437.5269 | g_Loss: 1167.2877 | l_Loss: 144.3185 |
21-12-23 11:21:17.828 - INFO: Learning rate: 1e-05
21-12-23 11:21:17.829 - INFO: Train epoch 563: Loss: 3505.9648 | r_Loss: 439.4704 | g_Loss: 1152.3946 | l_Loss: 156.2181 |
21-12-23 11:22:29.410 - INFO: Learning rate: 1e-05
21-12-23 11:22:29.410 - INFO: Train epoch 564: Loss: 3454.7352 | r_Loss: 430.9269 | g_Loss: 1152.0529 | l_Loss: 148.0479 |
21-12-23 11:23:41.084 - INFO: Learning rate: 1e-05
21-12-23 11:23:41.084 - INFO: Train epoch 565: Loss: 3677.9605 | r_Loss: 478.2313 | g_Loss: 1140.5349 | l_Loss: 146.2691 |
21-12-23 11:24:52.806 - INFO: Learning rate: 1e-05
21-12-23 11:24:52.807 - INFO: Train epoch 566: Loss: 3352.0198 | r_Loss: 420.3588 | g_Loss: 1091.3110 | l_Loss: 158.9150 |
21-12-23 11:26:04.642 - INFO: Learning rate: 1e-05
21-12-23 11:26:04.642 - INFO: Train epoch 567: Loss: 3666.6106 | r_Loss: 483.4909 | g_Loss: 1107.4619 | l_Loss: 141.6941 |
21-12-23 11:27:16.278 - INFO: Learning rate: 1e-05
21-12-23 11:27:16.279 - INFO: Train epoch 568: Loss: 3575.8680 | r_Loss: 452.3694 | g_Loss: 1165.0699 | l_Loss: 148.9513 |
21-12-23 11:28:27.935 - INFO: Learning rate: 1e-05
21-12-23 11:28:27.935 - INFO: Train epoch 569: Loss: 3152.0740 | r_Loss: 387.9516 | g_Loss: 1052.1534 | l_Loss: 160.1624 |
21-12-23 11:30:13.390 - INFO: TEST: PSNR_S: 36.1697 | PSNR_C: 31.1308 |
21-12-23 11:30:13.391 - INFO: Learning rate: 1e-05
21-12-23 11:30:13.391 - INFO: Train epoch 570: Loss: 3418.0109 | r_Loss: 432.7191 | g_Loss: 1116.4241 | l_Loss: 137.9914 |
21-12-23 11:31:24.969 - INFO: Learning rate: 1e-05
21-12-23 11:31:24.970 - INFO: Train epoch 571: Loss: 3489.9692 | r_Loss: 453.2293 | g_Loss: 1100.4547 | l_Loss: 123.3681 |
21-12-23 11:32:36.536 - INFO: Learning rate: 1e-05
21-12-23 11:32:36.536 - INFO: Train epoch 572: Loss: 3460.7526 | r_Loss: 448.1032 | g_Loss: 1084.7338 | l_Loss: 135.5028 |
21-12-23 11:33:48.163 - INFO: Learning rate: 1e-05
21-12-23 11:33:48.163 - INFO: Train epoch 573: Loss: 3531.0863 | r_Loss: 450.2804 | g_Loss: 1148.5777 | l_Loss: 131.1068 |
21-12-23 11:34:59.826 - INFO: Learning rate: 1e-05
21-12-23 11:34:59.827 - INFO: Train epoch 574: Loss: 3455.6948 | r_Loss: 440.2320 | g_Loss: 1106.3738 | l_Loss: 148.1610 |
21-12-23 11:36:11.521 - INFO: Learning rate: 1e-05
21-12-23 11:36:11.521 - INFO: Train epoch 575: Loss: 3406.1451 | r_Loss: 427.4706 | g_Loss: 1135.6554 | l_Loss: 133.1370 |
21-12-23 11:37:23.122 - INFO: Learning rate: 1e-05
21-12-23 11:37:23.123 - INFO: Train epoch 576: Loss: 3123.2018 | r_Loss: 387.1659 | g_Loss: 1058.3107 | l_Loss: 129.0614 |
21-12-23 11:38:34.744 - INFO: Learning rate: 1e-05
21-12-23 11:38:34.745 - INFO: Train epoch 577: Loss: 3438.4331 | r_Loss: 443.6089 | g_Loss: 1068.1280 | l_Loss: 152.2605 |
21-12-23 11:39:46.424 - INFO: Learning rate: 1e-05
21-12-23 11:39:46.424 - INFO: Train epoch 578: Loss: 3222.4686 | r_Loss: 401.6988 | g_Loss: 1096.1025 | l_Loss: 117.8723 |
21-12-23 11:40:58.134 - INFO: Learning rate: 1e-05
21-12-23 11:40:58.135 - INFO: Train epoch 579: Loss: 3494.1287 | r_Loss: 450.9863 | g_Loss: 1110.6762 | l_Loss: 128.5211 |
21-12-23 11:42:09.816 - INFO: Learning rate: 1e-05
21-12-23 11:42:09.817 - INFO: Train epoch 580: Loss: 3015.6368 | r_Loss: 367.2406 | g_Loss: 1032.4993 | l_Loss: 146.9343 |
21-12-23 11:43:21.467 - INFO: Learning rate: 1e-05
21-12-23 11:43:21.468 - INFO: Train epoch 581: Loss: 3172.3694 | r_Loss: 392.6935 | g_Loss: 1084.6921 | l_Loss: 124.2100 |
21-12-23 11:44:33.076 - INFO: Learning rate: 1e-05
21-12-23 11:44:33.076 - INFO: Train epoch 582: Loss: 3173.7128 | r_Loss: 400.1912 | g_Loss: 1027.0533 | l_Loss: 145.7033 |
21-12-23 11:45:44.724 - INFO: Learning rate: 1e-05
21-12-23 11:45:44.724 - INFO: Train epoch 583: Loss: 3142.6951 | r_Loss: 385.3832 | g_Loss: 1088.9273 | l_Loss: 126.8521 |
21-12-23 11:46:56.323 - INFO: Learning rate: 1e-05
21-12-23 11:46:56.324 - INFO: Train epoch 584: Loss: 3229.6186 | r_Loss: 399.8622 | g_Loss: 1095.3074 | l_Loss: 135.0004 |
21-12-23 11:48:07.866 - INFO: Learning rate: 1e-05
21-12-23 11:48:07.866 - INFO: Train epoch 585: Loss: 3044.8809 | r_Loss: 376.3016 | g_Loss: 1038.1719 | l_Loss: 125.2011 |
21-12-23 11:49:19.493 - INFO: Learning rate: 1e-05
21-12-23 11:49:19.494 - INFO: Train epoch 586: Loss: 3020.4498 | r_Loss: 376.1945 | g_Loss: 1022.9614 | l_Loss: 116.5157 |
21-12-23 11:50:31.072 - INFO: Learning rate: 1e-05
21-12-23 11:50:31.072 - INFO: Train epoch 587: Loss: 3124.6334 | r_Loss: 391.4837 | g_Loss: 1028.6038 | l_Loss: 138.6112 |
21-12-23 11:51:42.548 - INFO: Learning rate: 1e-05
21-12-23 11:51:42.548 - INFO: Train epoch 588: Loss: 3201.9852 | r_Loss: 397.0794 | g_Loss: 1062.8761 | l_Loss: 153.7120 |
21-12-23 11:52:54.333 - INFO: Learning rate: 1e-05
21-12-23 11:52:54.334 - INFO: Train epoch 589: Loss: 3755.6810 | r_Loss: 481.0478 | g_Loss: 1167.0843 | l_Loss: 183.3578 |
21-12-23 11:54:05.808 - INFO: Learning rate: 1e-05
21-12-23 11:54:05.809 - INFO: Train epoch 590: Loss: 2976.4869 | r_Loss: 365.4883 | g_Loss: 1021.6853 | l_Loss: 127.3603 |
21-12-23 11:55:17.354 - INFO: Learning rate: 1e-05
21-12-23 11:55:17.355 - INFO: Train epoch 591: Loss: 3064.5911 | r_Loss: 379.2948 | g_Loss: 1046.4642 | l_Loss: 121.6531 |
21-12-23 11:56:28.898 - INFO: Learning rate: 1e-05
21-12-23 11:56:28.899 - INFO: Train epoch 592: Loss: 3220.8139 | r_Loss: 410.3736 | g_Loss: 1055.1710 | l_Loss: 113.7750 |
21-12-23 11:57:40.425 - INFO: Learning rate: 1e-05
21-12-23 11:57:40.425 - INFO: Train epoch 593: Loss: 2744.2878 | r_Loss: 335.7094 | g_Loss: 954.2378 | l_Loss: 111.5029 |
21-12-23 11:58:51.979 - INFO: Learning rate: 1e-05
21-12-23 11:58:51.979 - INFO: Train epoch 594: Loss: 2972.0689 | r_Loss: 379.1042 | g_Loss: 975.0665 | l_Loss: 101.4816 |
21-12-23 12:00:03.622 - INFO: Learning rate: 1e-05
21-12-23 12:00:03.623 - INFO: Train epoch 595: Loss: 2960.9495 | r_Loss: 364.8257 | g_Loss: 1000.9791 | l_Loss: 135.8417 |
21-12-23 12:01:15.242 - INFO: Learning rate: 1e-05
21-12-23 12:01:15.243 - INFO: Train epoch 596: Loss: 2684.4375 | r_Loss: 332.2172 | g_Loss: 915.4698 | l_Loss: 107.8820 |
21-12-23 12:02:26.792 - INFO: Learning rate: 1e-05
21-12-23 12:02:26.792 - INFO: Train epoch 597: Loss: 2732.8197 | r_Loss: 334.2411 | g_Loss: 943.2641 | l_Loss: 118.3500 |
21-12-23 12:03:38.177 - INFO: Learning rate: 1e-05
21-12-23 12:03:38.178 - INFO: Train epoch 598: Loss: 2924.0167 | r_Loss: 369.3799 | g_Loss: 963.3665 | l_Loss: 113.7505 |
21-12-23 12:04:49.639 - INFO: Learning rate: 1e-05
21-12-23 12:04:49.639 - INFO: Train epoch 599: Loss: 2998.3882 | r_Loss: 382.1528 | g_Loss: 994.6712 | l_Loss: 92.9530 |
21-12-23 12:06:35.024 - INFO: TEST: PSNR_S: 36.9241 | PSNR_C: 31.6685 |
21-12-23 12:06:35.025 - INFO: Learning rate: 1e-05
21-12-23 12:06:35.025 - INFO: Train epoch 600: Loss: 2924.6567 | r_Loss: 362.4827 | g_Loss: 981.8504 | l_Loss: 130.3927 |
21-12-23 12:07:46.605 - INFO: Learning rate: 1e-05
21-12-23 12:07:46.606 - INFO: Train epoch 601: Loss: 2952.5476 | r_Loss: 362.9754 | g_Loss: 1024.7947 | l_Loss: 112.8760 |
21-12-23 12:08:58.111 - INFO: Learning rate: 1e-05
21-12-23 12:08:58.112 - INFO: Train epoch 602: Loss: 2803.4234 | r_Loss: 337.3057 | g_Loss: 982.1159 | l_Loss: 134.7788 |
21-12-23 12:10:09.556 - INFO: Learning rate: 1e-05
21-12-23 12:10:09.556 - INFO: Train epoch 603: Loss: 3002.8400 | r_Loss: 370.9302 | g_Loss: 1030.0962 | l_Loss: 118.0929 |
21-12-23 12:11:21.100 - INFO: Learning rate: 1e-05
21-12-23 12:11:21.100 - INFO: Train epoch 604: Loss: 2932.5660 | r_Loss: 364.5713 | g_Loss: 976.3800 | l_Loss: 133.3297 |
21-12-23 12:12:32.540 - INFO: Learning rate: 1e-05
21-12-23 12:12:32.541 - INFO: Train epoch 605: Loss: 2588.2875 | r_Loss: 311.9615 | g_Loss: 919.0950 | l_Loss: 109.3848 |
21-12-23 12:13:44.152 - INFO: Learning rate: 1e-05
21-12-23 12:13:44.152 - INFO: Train epoch 606: Loss: 2935.2169 | r_Loss: 369.7789 | g_Loss: 975.8864 | l_Loss: 110.4359 |
21-12-23 12:14:55.804 - INFO: Learning rate: 1e-05
21-12-23 12:14:55.805 - INFO: Train epoch 607: Loss: 2590.2548 | r_Loss: 312.2203 | g_Loss: 904.4397 | l_Loss: 124.7135 |
21-12-23 12:16:07.326 - INFO: Learning rate: 1e-05
21-12-23 12:16:07.327 - INFO: Train epoch 608: Loss: 2692.6603 | r_Loss: 333.0358 | g_Loss: 910.5648 | l_Loss: 116.9164 |
21-12-23 12:17:18.771 - INFO: Learning rate: 1e-05
21-12-23 12:17:18.772 - INFO: Train epoch 609: Loss: 2847.4008 | r_Loss: 341.2558 | g_Loss: 1022.3991 | l_Loss: 118.7225 |
21-12-23 12:18:30.200 - INFO: Learning rate: 1e-05
21-12-23 12:18:30.201 - INFO: Train epoch 610: Loss: 2575.4319 | r_Loss: 314.6771 | g_Loss: 890.6049 | l_Loss: 111.4417 |
21-12-23 12:19:41.698 - INFO: Learning rate: 1e-05
21-12-23 12:19:41.698 - INFO: Train epoch 611: Loss: 40116.7412 | r_Loss: 6954.8715 | g_Loss: 4801.5338 | l_Loss: 540.8498 |
21-12-23 12:20:53.223 - INFO: Learning rate: 1e-05
21-12-23 12:20:53.223 - INFO: Train epoch 612: Loss: 5800.2003 | r_Loss: 609.9307 | g_Loss: 2458.7524 | l_Loss: 291.7945 |
21-12-23 12:22:04.776 - INFO: Learning rate: 1e-05
21-12-23 12:22:04.776 - INFO: Train epoch 613: Loss: 4539.7730 | r_Loss: 484.8065 | g_Loss: 1877.7287 | l_Loss: 238.0116 |
21-12-23 12:23:16.390 - INFO: Learning rate: 1e-05
21-12-23 12:23:16.391 - INFO: Train epoch 614: Loss: 4048.2565 | r_Loss: 428.1127 | g_Loss: 1667.5951 | l_Loss: 240.0977 |
21-12-23 12:24:27.796 - INFO: Learning rate: 1e-05
21-12-23 12:24:27.796 - INFO: Train epoch 615: Loss: 3780.0469 | r_Loss: 395.2661 | g_Loss: 1603.7259 | l_Loss: 199.9904 |
21-12-23 12:25:39.371 - INFO: Learning rate: 1e-05
21-12-23 12:25:39.372 - INFO: Train epoch 616: Loss: 3360.3691 | r_Loss: 342.4255 | g_Loss: 1464.7645 | l_Loss: 183.4769 |
21-12-23 12:26:50.869 - INFO: Learning rate: 1e-05
21-12-23 12:26:50.870 - INFO: Train epoch 617: Loss: 3449.0004 | r_Loss: 357.9279 | g_Loss: 1475.7311 | l_Loss: 183.6295 |
21-12-23 12:28:02.354 - INFO: Learning rate: 1e-05
21-12-23 12:28:02.355 - INFO: Train epoch 618: Loss: 3436.8920 | r_Loss: 360.5415 | g_Loss: 1433.0168 | l_Loss: 201.1678 |
21-12-23 12:29:13.805 - INFO: Learning rate: 1e-05
21-12-23 12:29:13.806 - INFO: Train epoch 619: Loss: 3271.9874 | r_Loss: 353.4528 | g_Loss: 1358.1037 | l_Loss: 146.6194 |
21-12-23 12:30:25.338 - INFO: Learning rate: 1e-05
21-12-23 12:30:25.338 - INFO: Train epoch 620: Loss: 3255.6668 | r_Loss: 342.0766 | g_Loss: 1368.2006 | l_Loss: 177.0834 |
21-12-23 12:31:36.876 - INFO: Learning rate: 1e-05
21-12-23 12:31:36.877 - INFO: Train epoch 621: Loss: 3266.1277 | r_Loss: 347.6735 | g_Loss: 1361.6848 | l_Loss: 166.0753 |
21-12-23 12:32:48.389 - INFO: Learning rate: 1e-05
21-12-23 12:32:48.390 - INFO: Train epoch 622: Loss: 2936.9307 | r_Loss: 302.3424 | g_Loss: 1275.2729 | l_Loss: 149.9459 |
21-12-23 12:34:00.055 - INFO: Learning rate: 1e-05
21-12-23 12:34:00.055 - INFO: Train epoch 623: Loss: 2913.6749 | r_Loss: 301.1834 | g_Loss: 1214.6471 | l_Loss: 193.1106 |
21-12-23 12:35:11.529 - INFO: Learning rate: 1e-05
21-12-23 12:35:11.529 - INFO: Train epoch 624: Loss: 2927.9037 | r_Loss: 308.2676 | g_Loss: 1233.2554 | l_Loss: 153.3103 |
21-12-23 12:36:23.018 - INFO: Learning rate: 1e-05
21-12-23 12:36:23.019 - INFO: Train epoch 625: Loss: 3042.9241 | r_Loss: 334.3915 | g_Loss: 1244.0516 | l_Loss: 126.9151 |
21-12-23 12:37:34.487 - INFO: Learning rate: 1e-05
21-12-23 12:37:34.488 - INFO: Train epoch 626: Loss: 3024.7637 | r_Loss: 328.7160 | g_Loss: 1240.9042 | l_Loss: 140.2795 |
21-12-23 12:38:46.046 - INFO: Learning rate: 1e-05
21-12-23 12:38:46.046 - INFO: Train epoch 627: Loss: 2874.1129 | r_Loss: 307.4195 | g_Loss: 1171.1242 | l_Loss: 165.8911 |
21-12-23 12:39:57.685 - INFO: Learning rate: 1e-05
21-12-23 12:39:57.686 - INFO: Train epoch 628: Loss: 2795.6582 | r_Loss: 304.9058 | g_Loss: 1126.3304 | l_Loss: 144.7986 |
21-12-23 12:41:09.376 - INFO: Learning rate: 1e-05
21-12-23 12:41:09.377 - INFO: Train epoch 629: Loss: 2723.2345 | r_Loss: 295.9111 | g_Loss: 1124.6220 | l_Loss: 119.0569 |
21-12-23 12:42:54.846 - INFO: TEST: PSNR_S: 37.6705 | PSNR_C: 31.0688 |
21-12-23 12:42:54.847 - INFO: Learning rate: 1e-05
21-12-23 12:42:54.848 - INFO: Train epoch 630: Loss: 2904.1619 | r_Loss: 325.3938 | g_Loss: 1148.5679 | l_Loss: 128.6252 |
21-12-23 12:44:06.435 - INFO: Learning rate: 1e-05
21-12-23 12:44:06.435 - INFO: Train epoch 631: Loss: 2945.8964 | r_Loss: 328.3761 | g_Loss: 1151.7482 | l_Loss: 152.2674 |
21-12-23 12:45:18.054 - INFO: Learning rate: 1e-05
21-12-23 12:45:18.055 - INFO: Train epoch 632: Loss: 2560.4073 | r_Loss: 281.4469 | g_Loss: 1031.1157 | l_Loss: 122.0573 |
21-12-23 12:46:29.554 - INFO: Learning rate: 1e-05
21-12-23 12:46:29.555 - INFO: Train epoch 633: Loss: 2583.1672 | r_Loss: 282.3451 | g_Loss: 1027.3392 | l_Loss: 144.1022 |
21-12-23 12:47:41.265 - INFO: Learning rate: 1e-05
21-12-23 12:47:41.266 - INFO: Train epoch 634: Loss: 2752.8623 | r_Loss: 307.6167 | g_Loss: 1085.3085 | l_Loss: 129.4705 |
21-12-23 12:48:52.924 - INFO: Learning rate: 1e-05
21-12-23 12:48:52.924 - INFO: Train epoch 635: Loss: 2578.5851 | r_Loss: 290.6718 | g_Loss: 991.9179 | l_Loss: 133.3079 |
21-12-23 12:50:04.550 - INFO: Learning rate: 1e-05
21-12-23 12:50:04.551 - INFO: Train epoch 636: Loss: 2686.4646 | r_Loss: 305.7105 | g_Loss: 1017.5227 | l_Loss: 140.3893 |
21-12-23 12:51:16.084 - INFO: Learning rate: 1e-05
21-12-23 12:51:16.085 - INFO: Train epoch 637: Loss: 2641.6088 | r_Loss: 296.5265 | g_Loss: 1017.6852 | l_Loss: 141.2912 |
21-12-23 12:52:27.737 - INFO: Learning rate: 1e-05
21-12-23 12:52:27.738 - INFO: Train epoch 638: Loss: 2414.6743 | r_Loss: 270.0759 | g_Loss: 949.8459 | l_Loss: 114.4491 |
21-12-23 12:53:39.361 - INFO: Learning rate: 1e-05
21-12-23 12:53:39.361 - INFO: Train epoch 639: Loss: 2531.6845 | r_Loss: 290.3179 | g_Loss: 954.0748 | l_Loss: 126.0202 |
21-12-23 12:54:51.028 - INFO: Learning rate: 1e-05
21-12-23 12:54:51.028 - INFO: Train epoch 640: Loss: 2476.3539 | r_Loss: 280.9281 | g_Loss: 941.0220 | l_Loss: 130.6915 |
21-12-23 12:56:02.607 - INFO: Learning rate: 1e-05
21-12-23 12:56:02.608 - INFO: Train epoch 641: Loss: 2501.4576 | r_Loss: 292.2123 | g_Loss: 945.2457 | l_Loss: 95.1504 |
21-12-23 12:57:14.217 - INFO: Learning rate: 1e-05
21-12-23 12:57:14.217 - INFO: Train epoch 642: Loss: 2525.6068 | r_Loss: 296.6128 | g_Loss: 931.9887 | l_Loss: 110.5539 |
21-12-23 12:58:25.778 - INFO: Learning rate: 1e-05
21-12-23 12:58:25.778 - INFO: Train epoch 643: Loss: 2591.4203 | r_Loss: 304.4343 | g_Loss: 949.1241 | l_Loss: 120.1249 |
21-12-23 12:59:37.280 - INFO: Learning rate: 1e-05
21-12-23 12:59:37.280 - INFO: Train epoch 644: Loss: 2463.4826 | r_Loss: 285.6992 | g_Loss: 910.3077 | l_Loss: 124.6788 |
21-12-23 13:00:48.846 - INFO: Learning rate: 1e-05
21-12-23 13:00:48.846 - INFO: Train epoch 645: Loss: 2484.3699 | r_Loss: 289.1657 | g_Loss: 926.3745 | l_Loss: 112.1669 |
21-12-23 13:02:00.423 - INFO: Learning rate: 1e-05
21-12-23 13:02:00.423 - INFO: Train epoch 646: Loss: 2453.7866 | r_Loss: 285.0547 | g_Loss: 907.6403 | l_Loss: 120.8727 |
21-12-23 13:03:12.015 - INFO: Learning rate: 1e-05
21-12-23 13:03:12.016 - INFO: Train epoch 647: Loss: 2518.9562 | r_Loss: 311.0949 | g_Loss: 858.2460 | l_Loss: 105.2356 |
21-12-23 13:04:23.695 - INFO: Learning rate: 1e-05
21-12-23 13:04:23.696 - INFO: Train epoch 648: Loss: 2425.1120 | r_Loss: 288.7182 | g_Loss: 868.4706 | l_Loss: 113.0502 |
21-12-23 13:05:35.314 - INFO: Learning rate: 1e-05
21-12-23 13:05:35.314 - INFO: Train epoch 649: Loss: 2375.8438 | r_Loss: 274.1542 | g_Loss: 882.0447 | l_Loss: 123.0282 |
21-12-23 13:06:46.843 - INFO: Learning rate: 1e-05
21-12-23 13:06:46.843 - INFO: Train epoch 650: Loss: 2384.4989 | r_Loss: 279.6536 | g_Loss: 870.8639 | l_Loss: 115.3669 |
21-12-23 13:07:58.350 - INFO: Learning rate: 1e-05
21-12-23 13:07:58.351 - INFO: Train epoch 651: Loss: 2410.9296 | r_Loss: 291.4885 | g_Loss: 854.4796 | l_Loss: 99.0073 |
21-12-23 13:09:10.033 - INFO: Learning rate: 1e-05
21-12-23 13:09:10.033 - INFO: Train epoch 652: Loss: 2438.7350 | r_Loss: 291.1041 | g_Loss: 871.0325 | l_Loss: 112.1823 |
21-12-23 13:10:21.634 - INFO: Learning rate: 1e-05
21-12-23 13:10:21.635 - INFO: Train epoch 653: Loss: 2380.3262 | r_Loss: 286.7815 | g_Loss: 845.1659 | l_Loss: 101.2526 |
21-12-23 13:11:33.190 - INFO: Learning rate: 1e-05
21-12-23 13:11:33.190 - INFO: Train epoch 654: Loss: 2274.9515 | r_Loss: 269.7259 | g_Loss: 819.2277 | l_Loss: 107.0941 |
21-12-23 13:12:44.828 - INFO: Learning rate: 1e-05
21-12-23 13:12:44.829 - INFO: Train epoch 655: Loss: 2600.7672 | r_Loss: 330.2305 | g_Loss: 861.5378 | l_Loss: 88.0767 |
21-12-23 13:13:56.478 - INFO: Learning rate: 1e-05
21-12-23 13:13:56.478 - INFO: Train epoch 656: Loss: 2398.3880 | r_Loss: 290.9847 | g_Loss: 845.7414 | l_Loss: 97.7229 |
21-12-23 13:15:08.164 - INFO: Learning rate: 1e-05
21-12-23 13:15:08.165 - INFO: Train epoch 657: Loss: 2455.7461 | r_Loss: 291.9101 | g_Loss: 870.4532 | l_Loss: 125.7425 |
21-12-23 13:16:19.736 - INFO: Learning rate: 1e-05
21-12-23 13:16:19.737 - INFO: Train epoch 658: Loss: 2392.6146 | r_Loss: 291.9105 | g_Loss: 827.5439 | l_Loss: 105.5184 |
21-12-23 13:17:31.228 - INFO: Learning rate: 1e-05
21-12-23 13:17:31.229 - INFO: Train epoch 659: Loss: 2420.1930 | r_Loss: 294.3154 | g_Loss: 832.2904 | l_Loss: 116.3257 |
21-12-23 13:19:16.624 - INFO: TEST: PSNR_S: 38.1697 | PSNR_C: 32.4814 |
21-12-23 13:19:16.625 - INFO: Learning rate: 1e-05
21-12-23 13:19:16.625 - INFO: Train epoch 660: Loss: 2239.2743 | r_Loss: 265.9113 | g_Loss: 802.3977 | l_Loss: 107.3203 |
21-12-23 13:20:28.041 - INFO: Learning rate: 1e-05
21-12-23 13:20:28.042 - INFO: Train epoch 661: Loss: 2475.9464 | r_Loss: 302.8889 | g_Loss: 845.5945 | l_Loss: 115.9075 |
21-12-23 13:21:39.581 - INFO: Learning rate: 1e-05
21-12-23 13:21:39.582 - INFO: Train epoch 662: Loss: 2434.0033 | r_Loss: 295.4540 | g_Loss: 838.2260 | l_Loss: 118.5074 |
21-12-23 13:22:51.195 - INFO: Learning rate: 1e-05
21-12-23 13:22:51.196 - INFO: Train epoch 663: Loss: 2305.1996 | r_Loss: 277.0183 | g_Loss: 813.4246 | l_Loss: 106.6834 |
21-12-23 13:24:02.737 - INFO: Learning rate: 1e-05
21-12-23 13:24:02.737 - INFO: Train epoch 664: Loss: 2397.9321 | r_Loss: 296.8862 | g_Loss: 813.2157 | l_Loss: 100.2854 |
21-12-23 13:25:14.374 - INFO: Learning rate: 1e-05
21-12-23 13:25:14.375 - INFO: Train epoch 665: Loss: 2146.9900 | r_Loss: 253.6616 | g_Loss: 779.1139 | l_Loss: 99.5682 |
21-12-23 13:26:25.962 - INFO: Learning rate: 1e-05
21-12-23 13:26:25.962 - INFO: Train epoch 666: Loss: 2298.0384 | r_Loss: 283.2704 | g_Loss: 790.7280 | l_Loss: 90.9582 |
21-12-23 13:27:37.573 - INFO: Learning rate: 1e-05
21-12-23 13:27:37.574 - INFO: Train epoch 667: Loss: 2267.6275 | r_Loss: 269.2958 | g_Loss: 814.5411 | l_Loss: 106.6075 |
21-12-23 13:28:49.293 - INFO: Learning rate: 1e-05
21-12-23 13:28:49.293 - INFO: Train epoch 668: Loss: 2110.1202 | r_Loss: 250.0411 | g_Loss: 768.4160 | l_Loss: 91.4985 |
21-12-23 13:30:00.818 - INFO: Learning rate: 1e-05
21-12-23 13:30:00.818 - INFO: Train epoch 669: Loss: 2397.1697 | r_Loss: 295.4537 | g_Loss: 824.4726 | l_Loss: 95.4285 |
21-12-23 13:31:12.414 - INFO: Learning rate: 1e-05
21-12-23 13:31:12.414 - INFO: Train epoch 670: Loss: 2198.7967 | r_Loss: 263.5984 | g_Loss: 780.0929 | l_Loss: 100.7116 |
21-12-23 13:32:23.846 - INFO: Learning rate: 1e-05
21-12-23 13:32:23.846 - INFO: Train epoch 671: Loss: 2162.0910 | r_Loss: 255.5423 | g_Loss: 780.6979 | l_Loss: 103.6815 |
21-12-23 13:33:35.384 - INFO: Learning rate: 1e-05
21-12-23 13:33:35.385 - INFO: Train epoch 672: Loss: 2248.8529 | r_Loss: 272.2911 | g_Loss: 795.5750 | l_Loss: 91.8226 |
21-12-23 13:34:46.801 - INFO: Learning rate: 1e-05
21-12-23 13:34:46.802 - INFO: Train epoch 673: Loss: 2048.5265 | r_Loss: 239.4590 | g_Loss: 750.5444 | l_Loss: 100.6871 |
21-12-23 13:35:58.325 - INFO: Learning rate: 1e-05
21-12-23 13:35:58.326 - INFO: Train epoch 674: Loss: 2112.0176 | r_Loss: 250.1949 | g_Loss: 770.2479 | l_Loss: 90.7951 |
21-12-23 13:37:09.917 - INFO: Learning rate: 1e-05
21-12-23 13:37:09.917 - INFO: Train epoch 675: Loss: 2536.7000 | r_Loss: 315.1751 | g_Loss: 865.3057 | l_Loss: 95.5187 |
21-12-23 13:38:21.600 - INFO: Learning rate: 1e-05
21-12-23 13:38:21.601 - INFO: Train epoch 676: Loss: 2114.5028 | r_Loss: 246.1107 | g_Loss: 805.1549 | l_Loss: 78.7943 |
21-12-23 13:39:33.248 - INFO: Learning rate: 1e-05
21-12-23 13:39:33.248 - INFO: Train epoch 677: Loss: 2172.1522 | r_Loss: 255.2717 | g_Loss: 788.9462 | l_Loss: 106.8473 |
21-12-23 13:40:44.800 - INFO: Learning rate: 1e-05
21-12-23 13:40:44.800 - INFO: Train epoch 678: Loss: 2169.6968 | r_Loss: 261.3716 | g_Loss: 757.0141 | l_Loss: 105.8248 |
21-12-23 13:41:56.411 - INFO: Learning rate: 1e-05
21-12-23 13:41:56.411 - INFO: Train epoch 679: Loss: 2099.8948 | r_Loss: 240.8054 | g_Loss: 799.5950 | l_Loss: 96.2728 |
21-12-23 13:43:08.105 - INFO: Learning rate: 1e-05
21-12-23 13:43:08.105 - INFO: Train epoch 680: Loss: 2207.1080 | r_Loss: 263.8231 | g_Loss: 795.6497 | l_Loss: 92.3428 |
21-12-23 13:44:19.652 - INFO: Learning rate: 1e-05
21-12-23 13:44:19.652 - INFO: Train epoch 681: Loss: 2347.1016 | r_Loss: 288.7082 | g_Loss: 796.8013 | l_Loss: 106.7594 |
21-12-23 13:45:31.306 - INFO: Learning rate: 1e-05
21-12-23 13:45:31.306 - INFO: Train epoch 682: Loss: 2078.6915 | r_Loss: 246.0163 | g_Loss: 756.8923 | l_Loss: 91.7177 |
21-12-23 13:46:42.893 - INFO: Learning rate: 1e-05
21-12-23 13:46:42.894 - INFO: Train epoch 683: Loss: 2062.5200 | r_Loss: 239.3746 | g_Loss: 775.1074 | l_Loss: 90.5395 |
21-12-23 13:47:54.461 - INFO: Learning rate: 1e-05
21-12-23 13:47:54.461 - INFO: Train epoch 684: Loss: 51674.8586 | r_Loss: 8966.3527 | g_Loss: 6088.0635 | l_Loss: 755.0307 |
21-12-23 13:49:06.152 - INFO: Learning rate: 1e-05
21-12-23 13:49:06.152 - INFO: Train epoch 685: Loss: 6452.5402 | r_Loss: 647.2330 | g_Loss: 2867.7252 | l_Loss: 348.6500 |
21-12-23 13:50:17.805 - INFO: Learning rate: 1e-05
21-12-23 13:50:17.805 - INFO: Train epoch 686: Loss: 4585.1033 | r_Loss: 479.2592 | g_Loss: 1924.8220 | l_Loss: 263.9856 |
21-12-23 13:51:29.532 - INFO: Learning rate: 1e-05
21-12-23 13:51:29.533 - INFO: Train epoch 687: Loss: 3934.1691 | r_Loss: 399.8976 | g_Loss: 1713.2061 | l_Loss: 221.4748 |
21-12-23 13:52:41.113 - INFO: Learning rate: 1e-05
21-12-23 13:52:41.113 - INFO: Train epoch 688: Loss: 3884.2353 | r_Loss: 407.6559 | g_Loss: 1646.3717 | l_Loss: 199.5841 |
21-12-23 13:53:52.758 - INFO: Learning rate: 1e-05
21-12-23 13:53:52.759 - INFO: Train epoch 689: Loss: 3456.7862 | r_Loss: 354.3784 | g_Loss: 1487.4190 | l_Loss: 197.4750 |
21-12-23 13:55:38.174 - INFO: TEST: PSNR_S: 36.8600 | PSNR_C: 29.9088 |
21-12-23 13:55:38.175 - INFO: Learning rate: 1e-05
21-12-23 13:55:38.175 - INFO: Train epoch 690: Loss: 3350.3112 | r_Loss: 339.8261 | g_Loss: 1455.0126 | l_Loss: 196.1682 |
21-12-23 13:56:49.822 - INFO: Learning rate: 1e-05
21-12-23 13:56:49.823 - INFO: Train epoch 691: Loss: 3263.6648 | r_Loss: 332.5478 | g_Loss: 1412.0113 | l_Loss: 188.9147 |
21-12-23 13:58:01.440 - INFO: Learning rate: 1e-05
21-12-23 13:58:01.441 - INFO: Train epoch 692: Loss: 3066.7167 | r_Loss: 312.9925 | g_Loss: 1362.3894 | l_Loss: 139.3647 |
21-12-23 13:59:13.088 - INFO: Learning rate: 1e-05
21-12-23 13:59:13.088 - INFO: Train epoch 693: Loss: 3021.2702 | r_Loss: 310.8263 | g_Loss: 1322.3138 | l_Loss: 144.8249 |
21-12-23 14:00:24.816 - INFO: Learning rate: 1e-05
21-12-23 14:00:24.817 - INFO: Train epoch 694: Loss: 2862.9427 | r_Loss: 296.1570 | g_Loss: 1223.1806 | l_Loss: 158.9772 |
21-12-23 14:01:36.453 - INFO: Learning rate: 1e-05
21-12-23 14:01:36.454 - INFO: Train epoch 695: Loss: 2873.0441 | r_Loss: 292.5829 | g_Loss: 1224.5763 | l_Loss: 185.5535 |
21-12-23 14:02:48.014 - INFO: Learning rate: 1e-05
21-12-23 14:02:48.015 - INFO: Train epoch 696: Loss: 2876.7102 | r_Loss: 293.5023 | g_Loss: 1238.1925 | l_Loss: 171.0060 |
21-12-23 14:03:59.518 - INFO: Learning rate: 1e-05
21-12-23 14:03:59.518 - INFO: Train epoch 697: Loss: 2766.8386 | r_Loss: 289.6065 | g_Loss: 1174.1915 | l_Loss: 144.6146 |
21-12-23 14:05:11.088 - INFO: Learning rate: 1e-05
21-12-23 14:05:11.089 - INFO: Train epoch 698: Loss: 2780.6860 | r_Loss: 295.2685 | g_Loss: 1178.2866 | l_Loss: 126.0570 |
21-12-23 14:06:22.518 - INFO: Learning rate: 1e-05
21-12-23 14:06:22.519 - INFO: Train epoch 699: Loss: 2553.4551 | r_Loss: 264.3218 | g_Loss: 1098.8354 | l_Loss: 133.0108 |
21-12-23 14:07:34.233 - INFO: Learning rate: 1e-05
21-12-23 14:07:34.233 - INFO: Train epoch 700: Loss: 2640.9342 | r_Loss: 270.9379 | g_Loss: 1130.2448 | l_Loss: 155.9999 |
21-12-23 14:08:45.768 - INFO: Learning rate: 1e-05
21-12-23 14:08:45.768 - INFO: Train epoch 701: Loss: 2601.1266 | r_Loss: 267.7223 | g_Loss: 1128.0435 | l_Loss: 134.4716 |
21-12-23 14:09:57.379 - INFO: Learning rate: 1e-05
21-12-23 14:09:57.380 - INFO: Train epoch 702: Loss: 2627.6218 | r_Loss: 278.7556 | g_Loss: 1104.1867 | l_Loss: 129.6570 |
21-12-23 14:11:08.977 - INFO: Learning rate: 1e-05
21-12-23 14:11:08.978 - INFO: Train epoch 703: Loss: 2697.4908 | r_Loss: 290.6171 | g_Loss: 1122.7820 | l_Loss: 121.6234 |
21-12-23 14:12:20.492 - INFO: Learning rate: 1e-05
21-12-23 14:12:20.492 - INFO: Train epoch 704: Loss: 2451.4886 | r_Loss: 255.2068 | g_Loss: 1050.3504 | l_Loss: 125.1042 |
21-12-23 14:13:32.084 - INFO: Learning rate: 1e-05
21-12-23 14:13:32.084 - INFO: Train epoch 705: Loss: 2608.8994 | r_Loss: 278.0968 | g_Loss: 1088.9833 | l_Loss: 129.4321 |
21-12-23 14:14:43.594 - INFO: Learning rate: 1e-05
21-12-23 14:14:43.595 - INFO: Train epoch 706: Loss: 2594.0988 | r_Loss: 274.5101 | g_Loss: 1073.3857 | l_Loss: 148.1626 |
21-12-23 14:15:55.054 - INFO: Learning rate: 1e-05
21-12-23 14:15:55.055 - INFO: Train epoch 707: Loss: 2392.0486 | r_Loss: 248.1932 | g_Loss: 1022.7660 | l_Loss: 128.3167 |
21-12-23 14:17:06.734 - INFO: Learning rate: 1e-05
21-12-23 14:17:06.734 - INFO: Train epoch 708: Loss: 2517.6090 | r_Loss: 266.8308 | g_Loss: 1035.0278 | l_Loss: 148.4273 |
21-12-23 14:18:18.372 - INFO: Learning rate: 1e-05
21-12-23 14:18:18.372 - INFO: Train epoch 709: Loss: 2445.8006 | r_Loss: 269.0686 | g_Loss: 979.4421 | l_Loss: 121.0153 |
21-12-23 14:19:29.981 - INFO: Learning rate: 1e-05
21-12-23 14:19:29.981 - INFO: Train epoch 710: Loss: 2231.2253 | r_Loss: 236.1906 | g_Loss: 947.0803 | l_Loss: 103.1918 |
21-12-23 14:20:41.505 - INFO: Learning rate: 1e-05
21-12-23 14:20:41.506 - INFO: Train epoch 711: Loss: 2515.3924 | r_Loss: 279.1659 | g_Loss: 999.0532 | l_Loss: 120.5096 |
21-12-23 14:21:52.960 - INFO: Learning rate: 1e-05
21-12-23 14:21:52.961 - INFO: Train epoch 712: Loss: 2333.3745 | r_Loss: 247.5870 | g_Loss: 971.6097 | l_Loss: 123.8297 |
21-12-23 14:23:04.489 - INFO: Learning rate: 1e-05
21-12-23 14:23:04.489 - INFO: Train epoch 713: Loss: 2281.2832 | r_Loss: 247.2844 | g_Loss: 929.5506 | l_Loss: 115.3107 |
21-12-23 14:24:16.043 - INFO: Learning rate: 1e-05
21-12-23 14:24:16.043 - INFO: Train epoch 714: Loss: 2427.0463 | r_Loss: 271.0319 | g_Loss: 960.0811 | l_Loss: 111.8055 |
21-12-23 14:25:27.476 - INFO: Learning rate: 1e-05
21-12-23 14:25:27.477 - INFO: Train epoch 715: Loss: 2735.6046 | r_Loss: 332.3080 | g_Loss: 970.5401 | l_Loss: 103.5247 |
21-12-23 14:26:39.029 - INFO: Learning rate: 1e-05
21-12-23 14:26:39.030 - INFO: Train epoch 716: Loss: 2365.8779 | r_Loss: 253.7954 | g_Loss: 977.8437 | l_Loss: 119.0573 |
21-12-23 14:27:50.557 - INFO: Learning rate: 1e-05
21-12-23 14:27:50.557 - INFO: Train epoch 717: Loss: 2477.1313 | r_Loss: 265.3869 | g_Loss: 1023.1619 | l_Loss: 127.0348 |
21-12-23 14:29:02.038 - INFO: Learning rate: 1e-05
21-12-23 14:29:02.038 - INFO: Train epoch 718: Loss: 2298.5290 | r_Loss: 246.4366 | g_Loss: 938.2021 | l_Loss: 128.1441 |
21-12-23 14:30:13.548 - INFO: Learning rate: 1e-05
21-12-23 14:30:13.548 - INFO: Train epoch 719: Loss: 2229.9324 | r_Loss: 240.0370 | g_Loss: 909.1080 | l_Loss: 120.6392 |
21-12-23 14:31:58.796 - INFO: TEST: PSNR_S: 38.6678 | PSNR_C: 31.8764 |
21-12-23 14:31:58.797 - INFO: Learning rate: 1e-05
21-12-23 14:31:58.798 - INFO: Train epoch 720: Loss: 2151.9306 | r_Loss: 236.1982 | g_Loss: 875.4920 | l_Loss: 95.4477 |
21-12-23 14:33:10.475 - INFO: Learning rate: 1e-05
21-12-23 14:33:10.475 - INFO: Train epoch 721: Loss: 2286.1088 | r_Loss: 260.9386 | g_Loss: 882.6847 | l_Loss: 98.7313 |
21-12-23 14:34:22.006 - INFO: Learning rate: 1e-05
21-12-23 14:34:22.006 - INFO: Train epoch 722: Loss: 2831.0308 | r_Loss: 349.3498 | g_Loss: 963.0455 | l_Loss: 121.2363 |
21-12-23 14:35:33.538 - INFO: Learning rate: 1e-05
21-12-23 14:35:33.538 - INFO: Train epoch 723: Loss: 2115.1728 | r_Loss: 225.0582 | g_Loss: 880.0144 | l_Loss: 109.8671 |
21-12-23 14:36:45.033 - INFO: Learning rate: 1e-05
21-12-23 14:36:45.033 - INFO: Train epoch 724: Loss: 2093.9775 | r_Loss: 217.9358 | g_Loss: 878.7587 | l_Loss: 125.5396 |
21-12-23 14:37:56.476 - INFO: Learning rate: 1e-05
21-12-23 14:37:56.477 - INFO: Train epoch 725: Loss: 2360.3682 | r_Loss: 269.1684 | g_Loss: 905.1290 | l_Loss: 109.3971 |
21-12-23 14:39:07.969 - INFO: Learning rate: 1e-05
21-12-23 14:39:07.969 - INFO: Train epoch 726: Loss: 2246.5521 | r_Loss: 244.7462 | g_Loss: 900.9394 | l_Loss: 121.8816 |
21-12-23 14:40:19.556 - INFO: Learning rate: 1e-05
21-12-23 14:40:19.557 - INFO: Train epoch 727: Loss: 2184.4118 | r_Loss: 240.7898 | g_Loss: 878.9234 | l_Loss: 101.5393 |
21-12-23 14:41:31.026 - INFO: Learning rate: 1e-05
21-12-23 14:41:31.026 - INFO: Train epoch 728: Loss: 2316.5956 | r_Loss: 258.5817 | g_Loss: 885.5165 | l_Loss: 138.1706 |
21-12-23 14:42:42.706 - INFO: Learning rate: 1e-05
21-12-23 14:42:42.707 - INFO: Train epoch 729: Loss: 2337.6542 | r_Loss: 261.9321 | g_Loss: 925.0623 | l_Loss: 102.9311 |
21-12-23 14:43:54.138 - INFO: Learning rate: 1e-05
21-12-23 14:43:54.139 - INFO: Train epoch 730: Loss: 2332.6767 | r_Loss: 270.0456 | g_Loss: 881.9920 | l_Loss: 100.4568 |
21-12-23 14:45:05.610 - INFO: Learning rate: 1e-05
21-12-23 14:45:05.611 - INFO: Train epoch 731: Loss: 2151.4839 | r_Loss: 246.1166 | g_Loss: 835.1099 | l_Loss: 85.7907 |
21-12-23 14:46:17.231 - INFO: Learning rate: 1e-05
21-12-23 14:46:17.231 - INFO: Train epoch 732: Loss: 2204.1772 | r_Loss: 243.8627 | g_Loss: 876.0089 | l_Loss: 108.8550 |
21-12-23 14:47:28.749 - INFO: Learning rate: 1e-05
21-12-23 14:47:28.750 - INFO: Train epoch 733: Loss: 2151.5722 | r_Loss: 249.2094 | g_Loss: 806.3089 | l_Loss: 99.2162 |
21-12-23 14:48:40.308 - INFO: Learning rate: 1e-05
21-12-23 14:48:40.309 - INFO: Train epoch 734: Loss: 2249.8257 | r_Loss: 252.7407 | g_Loss: 860.8800 | l_Loss: 125.2423 |
21-12-23 14:49:51.996 - INFO: Learning rate: 1e-05
21-12-23 14:49:51.997 - INFO: Train epoch 735: Loss: 2231.8970 | r_Loss: 259.6645 | g_Loss: 836.3899 | l_Loss: 97.1847 |
21-12-23 14:51:03.477 - INFO: Learning rate: 1e-05
21-12-23 14:51:03.477 - INFO: Train epoch 736: Loss: 2106.9939 | r_Loss: 236.1188 | g_Loss: 828.0329 | l_Loss: 98.3670 |
21-12-23 14:52:14.978 - INFO: Learning rate: 1e-05
21-12-23 14:52:14.979 - INFO: Train epoch 737: Loss: 1986.3457 | r_Loss: 222.2805 | g_Loss: 771.9690 | l_Loss: 102.9744 |
21-12-23 14:53:26.575 - INFO: Learning rate: 1e-05
21-12-23 14:53:26.576 - INFO: Train epoch 738: Loss: 2248.6196 | r_Loss: 262.4116 | g_Loss: 826.1064 | l_Loss: 110.4553 |
21-12-23 14:54:38.213 - INFO: Learning rate: 1e-05
21-12-23 14:54:38.213 - INFO: Train epoch 739: Loss: 1944.0011 | r_Loss: 216.8667 | g_Loss: 763.4426 | l_Loss: 96.2251 |
21-12-23 14:55:49.833 - INFO: Learning rate: 1e-05
21-12-23 14:55:49.833 - INFO: Train epoch 740: Loss: 2091.8053 | r_Loss: 237.4431 | g_Loss: 805.9071 | l_Loss: 98.6828 |
21-12-23 14:57:01.415 - INFO: Learning rate: 1e-05
21-12-23 14:57:01.415 - INFO: Train epoch 741: Loss: 2226.9803 | r_Loss: 253.2435 | g_Loss: 849.9636 | l_Loss: 110.7993 |
21-12-23 14:58:12.893 - INFO: Learning rate: 1e-05
21-12-23 14:58:12.893 - INFO: Train epoch 742: Loss: 2006.2315 | r_Loss: 228.5126 | g_Loss: 773.5989 | l_Loss: 90.0698 |
21-12-23 14:59:24.477 - INFO: Learning rate: 1e-05
21-12-23 14:59:24.478 - INFO: Train epoch 743: Loss: 2014.4561 | r_Loss: 226.2680 | g_Loss: 777.0113 | l_Loss: 106.1045 |
21-12-23 15:00:36.080 - INFO: Learning rate: 1e-05
21-12-23 15:00:36.080 - INFO: Train epoch 744: Loss: 2634.6383 | r_Loss: 340.2460 | g_Loss: 815.8611 | l_Loss: 117.5473 |
21-12-23 15:01:47.558 - INFO: Learning rate: 1e-05
21-12-23 15:01:47.558 - INFO: Train epoch 745: Loss: 2005.3851 | r_Loss: 221.8409 | g_Loss: 798.8901 | l_Loss: 97.2903 |
21-12-23 15:02:59.145 - INFO: Learning rate: 1e-05
21-12-23 15:02:59.146 - INFO: Train epoch 746: Loss: 2105.9916 | r_Loss: 235.7831 | g_Loss: 814.5506 | l_Loss: 112.5255 |
21-12-23 15:04:10.897 - INFO: Learning rate: 1e-05
21-12-23 15:04:10.897 - INFO: Train epoch 747: Loss: 2174.5765 | r_Loss: 257.4191 | g_Loss: 788.3166 | l_Loss: 99.1642 |
21-12-23 15:05:22.489 - INFO: Learning rate: 1e-05
21-12-23 15:05:22.489 - INFO: Train epoch 748: Loss: 1970.9344 | r_Loss: 222.6954 | g_Loss: 766.0204 | l_Loss: 91.4367 |
21-12-23 15:06:34.226 - INFO: Learning rate: 1e-05
21-12-23 15:06:34.227 - INFO: Train epoch 749: Loss: 2021.4702 | r_Loss: 227.4886 | g_Loss: 793.9439 | l_Loss: 90.0834 |
21-12-23 15:08:19.855 - INFO: TEST: PSNR_S: 39.2539 | PSNR_C: 32.7713 |
21-12-23 15:08:19.856 - INFO: Learning rate: 1e-05
21-12-23 15:08:19.856 - INFO: Train epoch 750: Loss: 2098.9111 | r_Loss: 238.9369 | g_Loss: 790.1816 | l_Loss: 114.0449 |
21-12-23 15:09:31.561 - INFO: Learning rate: 1e-05
21-12-23 15:09:31.561 - INFO: Train epoch 751: Loss: 2010.2912 | r_Loss: 238.5974 | g_Loss: 718.5851 | l_Loss: 98.7192 |
21-12-23 15:10:43.180 - INFO: Learning rate: 1e-05
21-12-23 15:10:43.181 - INFO: Train epoch 752: Loss: 2028.3297 | r_Loss: 235.6060 | g_Loss: 761.0400 | l_Loss: 89.2599 |
21-12-23 15:11:54.763 - INFO: Learning rate: 1e-05
21-12-23 15:11:54.764 - INFO: Train epoch 753: Loss: 1954.1530 | r_Loss: 215.8575 | g_Loss: 778.1154 | l_Loss: 96.7501 |
21-12-23 15:13:06.349 - INFO: Learning rate: 1e-05
21-12-23 15:13:06.349 - INFO: Train epoch 754: Loss: 1994.5976 | r_Loss: 226.9320 | g_Loss: 757.3002 | l_Loss: 102.6373 |
21-12-23 15:14:17.935 - INFO: Learning rate: 1e-05
21-12-23 15:14:17.935 - INFO: Train epoch 755: Loss: 2023.7797 | r_Loss: 233.5620 | g_Loss: 754.5753 | l_Loss: 101.3943 |
21-12-23 15:15:29.791 - INFO: Learning rate: 1e-05
21-12-23 15:15:29.792 - INFO: Train epoch 756: Loss: 2211.0537 | r_Loss: 263.8166 | g_Loss: 795.5557 | l_Loss: 96.4149 |
21-12-23 15:16:41.442 - INFO: Learning rate: 1e-05
21-12-23 15:16:41.442 - INFO: Train epoch 757: Loss: 1871.7400 | r_Loss: 206.8120 | g_Loss: 741.9991 | l_Loss: 95.6807 |
21-12-23 15:17:53.172 - INFO: Learning rate: 1e-05
21-12-23 15:17:53.173 - INFO: Train epoch 758: Loss: 2047.1447 | r_Loss: 236.0775 | g_Loss: 780.2219 | l_Loss: 86.5355 |
21-12-23 15:19:04.765 - INFO: Learning rate: 1e-05
21-12-23 15:19:04.765 - INFO: Train epoch 759: Loss: 1917.5058 | r_Loss: 225.6582 | g_Loss: 708.3841 | l_Loss: 80.8308 |
21-12-23 15:20:16.353 - INFO: Learning rate: 1e-05
21-12-23 15:20:16.354 - INFO: Train epoch 760: Loss: 1905.0308 | r_Loss: 218.1950 | g_Loss: 726.2266 | l_Loss: 87.8294 |
21-12-23 15:21:27.952 - INFO: Learning rate: 1e-05
21-12-23 15:21:27.953 - INFO: Train epoch 761: Loss: 1954.0539 | r_Loss: 226.7653 | g_Loss: 729.9463 | l_Loss: 90.2809 |
21-12-23 15:22:39.648 - INFO: Learning rate: 1e-05
21-12-23 15:22:39.648 - INFO: Train epoch 762: Loss: 2056.1854 | r_Loss: 249.8823 | g_Loss: 728.6187 | l_Loss: 78.1551 |
21-12-23 15:23:51.240 - INFO: Learning rate: 1e-05
21-12-23 15:23:51.240 - INFO: Train epoch 763: Loss: 2013.2305 | r_Loss: 235.1039 | g_Loss: 731.5983 | l_Loss: 106.1129 |
21-12-23 15:25:02.851 - INFO: Learning rate: 1e-05
21-12-23 15:25:02.852 - INFO: Train epoch 764: Loss: 1876.9388 | r_Loss: 213.2428 | g_Loss: 726.3247 | l_Loss: 84.3999 |
21-12-23 15:26:14.585 - INFO: Learning rate: 1e-05
21-12-23 15:26:14.585 - INFO: Train epoch 765: Loss: 1931.5649 | r_Loss: 221.3826 | g_Loss: 729.7231 | l_Loss: 94.9288 |
21-12-23 15:27:26.076 - INFO: Learning rate: 1e-05
21-12-23 15:27:26.077 - INFO: Train epoch 766: Loss: 1948.8119 | r_Loss: 229.1760 | g_Loss: 724.2351 | l_Loss: 78.6971 |
21-12-23 15:28:37.761 - INFO: Learning rate: 1e-05
21-12-23 15:28:37.761 - INFO: Train epoch 767: Loss: 2089.2002 | r_Loss: 239.1538 | g_Loss: 784.0303 | l_Loss: 109.4009 |
21-12-23 15:29:49.438 - INFO: Learning rate: 1e-05
21-12-23 15:29:49.439 - INFO: Train epoch 768: Loss: 1904.8977 | r_Loss: 219.9479 | g_Loss: 727.3625 | l_Loss: 77.7959 |
21-12-23 15:31:01.077 - INFO: Learning rate: 1e-05
21-12-23 15:31:01.078 - INFO: Train epoch 769: Loss: 2065.9677 | r_Loss: 251.1589 | g_Loss: 726.0325 | l_Loss: 84.1408 |
21-12-23 15:32:12.672 - INFO: Learning rate: 1e-05
21-12-23 15:32:12.672 - INFO: Train epoch 770: Loss: 1782.7507 | r_Loss: 198.6802 | g_Loss: 700.7291 | l_Loss: 88.6207 |
21-12-23 15:33:24.342 - INFO: Learning rate: 1e-05
21-12-23 15:33:24.343 - INFO: Train epoch 771: Loss: 1934.2765 | r_Loss: 225.6341 | g_Loss: 720.3793 | l_Loss: 85.7265 |
21-12-23 15:34:36.112 - INFO: Learning rate: 1e-05
21-12-23 15:34:36.112 - INFO: Train epoch 772: Loss: 1867.9745 | r_Loss: 218.0266 | g_Loss: 704.9543 | l_Loss: 72.8871 |
21-12-23 15:35:47.756 - INFO: Learning rate: 1e-05
21-12-23 15:35:47.757 - INFO: Train epoch 773: Loss: 2028.2914 | r_Loss: 239.1389 | g_Loss: 736.1105 | l_Loss: 96.4863 |
21-12-23 15:36:59.392 - INFO: Learning rate: 1e-05
21-12-23 15:36:59.392 - INFO: Train epoch 774: Loss: 1973.4193 | r_Loss: 229.8252 | g_Loss: 735.5849 | l_Loss: 88.7083 |
21-12-23 15:38:11.190 - INFO: Learning rate: 1e-05
21-12-23 15:38:11.191 - INFO: Train epoch 775: Loss: 2457.3420 | r_Loss: 324.5364 | g_Loss: 742.3084 | l_Loss: 92.3517 |
21-12-23 15:39:22.755 - INFO: Learning rate: 1e-05
21-12-23 15:39:22.756 - INFO: Train epoch 776: Loss: 2126.5925 | r_Loss: 252.9430 | g_Loss: 769.2494 | l_Loss: 92.6281 |
21-12-23 15:40:34.554 - INFO: Learning rate: 1e-05
21-12-23 15:40:34.555 - INFO: Train epoch 777: Loss: 1817.0000 | r_Loss: 194.1199 | g_Loss: 744.8374 | l_Loss: 101.5633 |
21-12-23 15:41:46.295 - INFO: Learning rate: 1e-05
21-12-23 15:41:46.296 - INFO: Train epoch 778: Loss: 1721.9180 | r_Loss: 191.4007 | g_Loss: 678.1065 | l_Loss: 86.8081 |
21-12-23 15:42:57.898 - INFO: Learning rate: 1e-05
21-12-23 15:42:57.899 - INFO: Train epoch 779: Loss: 1780.9560 | r_Loss: 200.3800 | g_Loss: 702.2722 | l_Loss: 76.7839 |
21-12-23 15:44:43.279 - INFO: TEST: PSNR_S: 39.3423 | PSNR_C: 33.2496 |
21-12-23 15:44:43.280 - INFO: Learning rate: 1e-05
21-12-23 15:44:43.280 - INFO: Train epoch 780: Loss: 1799.4167 | r_Loss: 210.0959 | g_Loss: 670.8203 | l_Loss: 78.1171 |
21-12-23 15:45:54.840 - INFO: Learning rate: 1e-05
21-12-23 15:45:54.841 - INFO: Train epoch 781: Loss: 5847.8613 | r_Loss: 965.9415 | g_Loss: 904.8306 | l_Loss: 113.3233 |
21-12-23 15:47:06.492 - INFO: Learning rate: 1e-05
21-12-23 15:47:06.492 - INFO: Train epoch 782: Loss: 11488.3404 | r_Loss: 1805.4236 | g_Loss: 2170.6074 | l_Loss: 290.6154 |
21-12-23 15:48:18.206 - INFO: Learning rate: 1e-05
21-12-23 15:48:18.207 - INFO: Train epoch 783: Loss: 2860.3192 | r_Loss: 281.9218 | g_Loss: 1286.5422 | l_Loss: 164.1679 |
21-12-23 15:49:29.732 - INFO: Learning rate: 1e-05
21-12-23 15:49:29.733 - INFO: Train epoch 784: Loss: 2629.2479 | r_Loss: 255.9515 | g_Loss: 1190.5684 | l_Loss: 158.9220 |
21-12-23 15:50:41.587 - INFO: Learning rate: 1e-05
21-12-23 15:50:41.588 - INFO: Train epoch 785: Loss: 2334.0816 | r_Loss: 220.1572 | g_Loss: 1094.4731 | l_Loss: 138.8226 |
21-12-23 15:51:53.211 - INFO: Learning rate: 1e-05
21-12-23 15:51:53.212 - INFO: Train epoch 786: Loss: 2200.2087 | r_Loss: 206.8578 | g_Loss: 1046.7495 | l_Loss: 119.1705 |
21-12-23 15:53:04.785 - INFO: Learning rate: 1e-05
21-12-23 15:53:04.786 - INFO: Train epoch 787: Loss: 2059.1067 | r_Loss: 195.7715 | g_Loss: 968.0311 | l_Loss: 112.2183 |
21-12-23 15:54:16.390 - INFO: Learning rate: 1e-05
21-12-23 15:54:16.390 - INFO: Train epoch 788: Loss: 2048.8195 | r_Loss: 198.7731 | g_Loss: 941.4209 | l_Loss: 113.5330 |
21-12-23 15:55:27.985 - INFO: Learning rate: 1e-05
21-12-23 15:55:27.985 - INFO: Train epoch 789: Loss: 2108.7882 | r_Loss: 210.6455 | g_Loss: 943.5661 | l_Loss: 111.9946 |
21-12-23 15:56:39.613 - INFO: Learning rate: 1e-05
21-12-23 15:56:39.613 - INFO: Train epoch 790: Loss: 1970.3182 | r_Loss: 194.3662 | g_Loss: 866.6931 | l_Loss: 131.7939 |
21-12-23 15:57:51.396 - INFO: Learning rate: 1e-05
21-12-23 15:57:51.396 - INFO: Train epoch 791: Loss: 2019.6876 | r_Loss: 200.8991 | g_Loss: 905.4071 | l_Loss: 109.7852 |
21-12-23 15:59:03.020 - INFO: Learning rate: 1e-05
21-12-23 15:59:03.021 - INFO: Train epoch 792: Loss: 2005.5982 | r_Loss: 203.7160 | g_Loss: 870.2730 | l_Loss: 116.7451 |
21-12-23 16:00:14.729 - INFO: Learning rate: 1e-05
21-12-23 16:00:14.730 - INFO: Train epoch 793: Loss: 1879.1115 | r_Loss: 193.4080 | g_Loss: 805.1218 | l_Loss: 106.9498 |
21-12-23 16:01:26.385 - INFO: Learning rate: 1e-05
21-12-23 16:01:26.385 - INFO: Train epoch 794: Loss: 1773.8076 | r_Loss: 176.2611 | g_Loss: 786.8961 | l_Loss: 105.6059 |
21-12-23 16:02:37.979 - INFO: Learning rate: 1e-05
21-12-23 16:02:37.979 - INFO: Train epoch 795: Loss: 1870.4314 | r_Loss: 197.2659 | g_Loss: 801.8852 | l_Loss: 82.2165 |
21-12-23 16:03:49.539 - INFO: Learning rate: 1e-05
21-12-23 16:03:49.540 - INFO: Train epoch 796: Loss: 1935.9597 | r_Loss: 209.8577 | g_Loss: 794.4003 | l_Loss: 92.2707 |
21-12-23 16:05:01.323 - INFO: Learning rate: 1e-05
21-12-23 16:05:01.323 - INFO: Train epoch 797: Loss: 1875.9414 | r_Loss: 202.7145 | g_Loss: 778.1761 | l_Loss: 84.1928 |
21-12-23 16:06:12.952 - INFO: Learning rate: 1e-05
21-12-23 16:06:12.953 - INFO: Train epoch 798: Loss: 1829.0543 | r_Loss: 195.3134 | g_Loss: 762.5527 | l_Loss: 89.9346 |
21-12-23 16:07:24.625 - INFO: Learning rate: 1e-05
21-12-23 16:07:24.626 - INFO: Train epoch 799: Loss: 1771.9060 | r_Loss: 195.0857 | g_Loss: 721.0045 | l_Loss: 75.4732 |
21-12-23 16:08:36.512 - INFO: Learning rate: 1e-05
21-12-23 16:08:36.512 - INFO: Train epoch 800: Loss: 1811.8344 | r_Loss: 193.2989 | g_Loss: 739.9544 | l_Loss: 105.3854 |
21-12-23 16:09:48.334 - INFO: Learning rate: 1e-05
21-12-23 16:09:48.335 - INFO: Train epoch 801: Loss: 1804.2102 | r_Loss: 196.3956 | g_Loss: 731.7997 | l_Loss: 90.4326 |
21-12-23 16:11:00.012 - INFO: Learning rate: 1e-05
21-12-23 16:11:00.012 - INFO: Train epoch 802: Loss: 1882.9141 | r_Loss: 204.0472 | g_Loss: 763.0054 | l_Loss: 99.6727 |
21-12-23 16:12:11.726 - INFO: Learning rate: 1e-05
21-12-23 16:12:11.727 - INFO: Train epoch 803: Loss: 1938.5405 | r_Loss: 218.0827 | g_Loss: 757.1204 | l_Loss: 91.0065 |
21-12-23 16:13:23.535 - INFO: Learning rate: 1e-05
21-12-23 16:13:23.535 - INFO: Train epoch 804: Loss: 1789.2127 | r_Loss: 197.7217 | g_Loss: 720.6236 | l_Loss: 79.9804 |
21-12-23 16:14:35.288 - INFO: Learning rate: 1e-05
21-12-23 16:14:35.288 - INFO: Train epoch 805: Loss: 1793.2338 | r_Loss: 200.2762 | g_Loss: 710.5620 | l_Loss: 81.2908 |
21-12-23 16:15:47.118 - INFO: Learning rate: 1e-05
21-12-23 16:15:47.119 - INFO: Train epoch 806: Loss: 1717.5513 | r_Loss: 192.3341 | g_Loss: 675.7938 | l_Loss: 80.0872 |
21-12-23 16:16:58.780 - INFO: Learning rate: 1e-05
21-12-23 16:16:58.780 - INFO: Train epoch 807: Loss: 1810.5755 | r_Loss: 202.8695 | g_Loss: 710.5520 | l_Loss: 85.6760 |
21-12-23 16:18:10.510 - INFO: Learning rate: 1e-05
21-12-23 16:18:10.510 - INFO: Train epoch 808: Loss: 1778.8331 | r_Loss: 203.8575 | g_Loss: 660.2485 | l_Loss: 99.2971 |
21-12-23 16:19:22.150 - INFO: Learning rate: 1e-05
21-12-23 16:19:22.151 - INFO: Train epoch 809: Loss: 1795.1777 | r_Loss: 200.7067 | g_Loss: 693.5422 | l_Loss: 98.1018 |
21-12-23 16:21:07.763 - INFO: TEST: PSNR_S: 40.1119 | PSNR_C: 33.3976 |
21-12-23 16:21:07.764 - INFO: Learning rate: 1e-05
21-12-23 16:21:07.764 - INFO: Train epoch 810: Loss: 1828.2775 | r_Loss: 211.0461 | g_Loss: 682.0591 | l_Loss: 90.9879 |
21-12-23 16:22:19.461 - INFO: Learning rate: 1e-05
21-12-23 16:22:19.462 - INFO: Train epoch 811: Loss: 1852.5449 | r_Loss: 210.5338 | g_Loss: 716.4130 | l_Loss: 83.4628 |
21-12-23 16:23:31.156 - INFO: Learning rate: 1e-05
21-12-23 16:23:31.156 - INFO: Train epoch 812: Loss: 1695.1917 | r_Loss: 191.6048 | g_Loss: 655.9816 | l_Loss: 81.1862 |
21-12-23 16:24:42.850 - INFO: Learning rate: 1e-05
21-12-23 16:24:42.850 - INFO: Train epoch 813: Loss: 1521.7583 | r_Loss: 165.3115 | g_Loss: 623.7408 | l_Loss: 71.4599 |
21-12-23 16:25:54.594 - INFO: Learning rate: 1e-05
21-12-23 16:25:54.594 - INFO: Train epoch 814: Loss: 1614.2363 | r_Loss: 181.1548 | g_Loss: 633.4172 | l_Loss: 75.0451 |
21-12-23 16:27:06.546 - INFO: Learning rate: 1e-05
21-12-23 16:27:06.547 - INFO: Train epoch 815: Loss: 1683.8236 | r_Loss: 192.5753 | g_Loss: 636.2825 | l_Loss: 84.6646 |
21-12-23 16:28:18.492 - INFO: Learning rate: 1e-05
21-12-23 16:28:18.493 - INFO: Train epoch 816: Loss: 2012.9405 | r_Loss: 260.6006 | g_Loss: 626.8821 | l_Loss: 83.0553 |
21-12-23 16:29:30.254 - INFO: Learning rate: 1e-05
21-12-23 16:29:30.255 - INFO: Train epoch 817: Loss: 1755.6706 | r_Loss: 199.8437 | g_Loss: 669.8142 | l_Loss: 86.6377 |
21-12-23 16:30:42.067 - INFO: Learning rate: 1e-05
21-12-23 16:30:42.067 - INFO: Train epoch 818: Loss: 1699.1565 | r_Loss: 185.1658 | g_Loss: 683.0382 | l_Loss: 90.2893 |
21-12-23 16:31:53.824 - INFO: Learning rate: 1e-05
21-12-23 16:31:53.824 - INFO: Train epoch 819: Loss: 1661.1017 | r_Loss: 187.3060 | g_Loss: 649.3011 | l_Loss: 75.2706 |
21-12-23 16:33:05.476 - INFO: Learning rate: 1e-05
21-12-23 16:33:05.476 - INFO: Train epoch 820: Loss: 1674.0841 | r_Loss: 189.4063 | g_Loss: 635.5919 | l_Loss: 91.4608 |
21-12-23 16:34:17.316 - INFO: Learning rate: 1e-05
21-12-23 16:34:17.317 - INFO: Train epoch 821: Loss: 1666.8875 | r_Loss: 191.1536 | g_Loss: 636.0454 | l_Loss: 75.0739 |
21-12-23 16:35:28.937 - INFO: Learning rate: 1e-05
21-12-23 16:35:28.938 - INFO: Train epoch 822: Loss: 1612.1713 | r_Loss: 180.5383 | g_Loss: 623.2959 | l_Loss: 86.1838 |
21-12-23 16:36:40.690 - INFO: Learning rate: 1e-05
21-12-23 16:36:40.691 - INFO: Train epoch 823: Loss: 1630.2900 | r_Loss: 183.8541 | g_Loss: 634.9888 | l_Loss: 76.0309 |
21-12-23 16:37:52.445 - INFO: Learning rate: 1e-05
21-12-23 16:37:52.446 - INFO: Train epoch 824: Loss: 1605.8592 | r_Loss: 184.8042 | g_Loss: 625.4937 | l_Loss: 56.3445 |
21-12-23 16:39:04.101 - INFO: Learning rate: 1e-05
21-12-23 16:39:04.101 - INFO: Train epoch 825: Loss: 1595.6919 | r_Loss: 186.0694 | g_Loss: 593.3402 | l_Loss: 72.0045 |
21-12-23 16:40:15.902 - INFO: Learning rate: 1e-05
21-12-23 16:40:15.902 - INFO: Train epoch 826: Loss: 1642.2557 | r_Loss: 177.6910 | g_Loss: 673.5973 | l_Loss: 80.2034 |
21-12-23 16:41:27.671 - INFO: Learning rate: 1e-05
21-12-23 16:41:27.672 - INFO: Train epoch 827: Loss: 1545.2358 | r_Loss: 173.9017 | g_Loss: 598.8409 | l_Loss: 76.8862 |
21-12-23 16:42:39.538 - INFO: Learning rate: 1e-05
21-12-23 16:42:39.538 - INFO: Train epoch 828: Loss: 1589.6074 | r_Loss: 183.3671 | g_Loss: 601.3772 | l_Loss: 71.3945 |
21-12-23 16:43:51.277 - INFO: Learning rate: 1e-05
21-12-23 16:43:51.277 - INFO: Train epoch 829: Loss: 1658.8139 | r_Loss: 195.2142 | g_Loss: 590.9122 | l_Loss: 91.8306 |
21-12-23 16:45:03.203 - INFO: Learning rate: 1e-05
21-12-23 16:45:03.204 - INFO: Train epoch 830: Loss: 1713.1576 | r_Loss: 192.8143 | g_Loss: 659.3721 | l_Loss: 89.7140 |
21-12-23 16:46:15.125 - INFO: Learning rate: 1e-05
21-12-23 16:46:15.125 - INFO: Train epoch 831: Loss: 1645.1573 | r_Loss: 186.6773 | g_Loss: 617.8246 | l_Loss: 93.9464 |
21-12-23 16:47:26.903 - INFO: Learning rate: 1e-05
21-12-23 16:47:26.904 - INFO: Train epoch 832: Loss: 1533.3390 | r_Loss: 177.8856 | g_Loss: 578.3434 | l_Loss: 65.5676 |
21-12-23 16:48:38.642 - INFO: Learning rate: 1e-05
21-12-23 16:48:38.643 - INFO: Train epoch 833: Loss: 1641.0240 | r_Loss: 186.1240 | g_Loss: 638.1773 | l_Loss: 72.2270 |
21-12-23 16:49:50.556 - INFO: Learning rate: 1e-05
21-12-23 16:49:50.557 - INFO: Train epoch 834: Loss: 1565.8819 | r_Loss: 182.1364 | g_Loss: 587.6686 | l_Loss: 67.5311 |
21-12-23 16:51:02.408 - INFO: Learning rate: 1e-05
21-12-23 16:51:02.409 - INFO: Train epoch 835: Loss: 1715.3004 | r_Loss: 199.2577 | g_Loss: 628.6384 | l_Loss: 90.3734 |
21-12-23 16:52:14.156 - INFO: Learning rate: 1e-05
21-12-23 16:52:14.157 - INFO: Train epoch 836: Loss: 1754.9819 | r_Loss: 210.7504 | g_Loss: 628.0616 | l_Loss: 73.1684 |
21-12-23 16:53:25.995 - INFO: Learning rate: 1e-05
21-12-23 16:53:25.996 - INFO: Train epoch 837: Loss: 1520.8039 | r_Loss: 172.3499 | g_Loss: 593.7662 | l_Loss: 65.2882 |
21-12-23 16:54:37.711 - INFO: Learning rate: 1e-05
21-12-23 16:54:37.712 - INFO: Train epoch 838: Loss: 1586.7033 | r_Loss: 182.1516 | g_Loss: 599.5373 | l_Loss: 76.4081 |
21-12-23 16:55:49.597 - INFO: Learning rate: 1e-05
21-12-23 16:55:49.598 - INFO: Train epoch 839: Loss: 1630.4005 | r_Loss: 185.6715 | g_Loss: 622.3620 | l_Loss: 79.6808 |
21-12-23 16:57:35.034 - INFO: TEST: PSNR_S: 40.5767 | PSNR_C: 34.0037 |
21-12-23 16:57:35.036 - INFO: Learning rate: 1e-05
21-12-23 16:57:35.036 - INFO: Train epoch 840: Loss: 1622.4966 | r_Loss: 184.7989 | g_Loss: 629.4733 | l_Loss: 69.0290 |
21-12-23 16:58:46.844 - INFO: Learning rate: 1e-05
21-12-23 16:58:46.845 - INFO: Train epoch 841: Loss: 1621.6523 | r_Loss: 188.5229 | g_Loss: 601.5145 | l_Loss: 77.5234 |
21-12-23 16:59:58.834 - INFO: Learning rate: 1e-05
21-12-23 16:59:58.834 - INFO: Train epoch 842: Loss: 1642.5255 | r_Loss: 189.1381 | g_Loss: 608.7371 | l_Loss: 88.0980 |
21-12-23 17:01:10.763 - INFO: Learning rate: 1e-05
21-12-23 17:01:10.763 - INFO: Train epoch 843: Loss: 1496.9881 | r_Loss: 169.2218 | g_Loss: 584.0803 | l_Loss: 66.7991 |
21-12-23 17:02:22.502 - INFO: Learning rate: 1e-05
21-12-23 17:02:22.502 - INFO: Train epoch 844: Loss: 24733.5498 | r_Loss: 4392.3312 | g_Loss: 2493.5511 | l_Loss: 278.3430 |
21-12-23 17:03:34.285 - INFO: Learning rate: 1e-05
21-12-23 17:03:34.285 - INFO: Train epoch 845: Loss: 3595.3556 | r_Loss: 369.7172 | g_Loss: 1546.3009 | l_Loss: 200.4685 |
21-12-23 17:04:46.048 - INFO: Learning rate: 1e-05
21-12-23 17:04:46.049 - INFO: Train epoch 846: Loss: 2821.0674 | r_Loss: 281.6806 | g_Loss: 1279.6803 | l_Loss: 132.9840 |
21-12-23 17:05:57.742 - INFO: Learning rate: 1e-05
21-12-23 17:05:57.743 - INFO: Train epoch 847: Loss: 2522.2718 | r_Loss: 247.3979 | g_Loss: 1134.0767 | l_Loss: 151.2054 |
21-12-23 17:07:09.442 - INFO: Learning rate: 1e-05
21-12-23 17:07:09.443 - INFO: Train epoch 848: Loss: 2347.0196 | r_Loss: 230.8418 | g_Loss: 1071.3289 | l_Loss: 121.4815 |
21-12-23 17:08:21.300 - INFO: Learning rate: 1e-05
21-12-23 17:08:21.301 - INFO: Train epoch 849: Loss: 2192.4811 | r_Loss: 204.0803 | g_Loss: 1024.6560 | l_Loss: 147.4238 |
21-12-23 17:09:33.206 - INFO: Learning rate: 1e-05
21-12-23 17:09:33.207 - INFO: Train epoch 850: Loss: 2091.7918 | r_Loss: 203.9173 | g_Loss: 957.0738 | l_Loss: 115.1315 |
21-12-23 17:10:45.029 - INFO: Learning rate: 1e-05
21-12-23 17:10:45.030 - INFO: Train epoch 851: Loss: 2019.8664 | r_Loss: 195.7514 | g_Loss: 922.1968 | l_Loss: 118.9126 |
21-12-23 17:11:56.807 - INFO: Learning rate: 1e-05
21-12-23 17:11:56.808 - INFO: Train epoch 852: Loss: 1957.5528 | r_Loss: 191.0283 | g_Loss: 888.9475 | l_Loss: 113.4636 |
21-12-23 17:13:08.428 - INFO: Learning rate: 1e-05
21-12-23 17:13:08.428 - INFO: Train epoch 853: Loss: 1962.2333 | r_Loss: 190.7207 | g_Loss: 886.1035 | l_Loss: 122.5261 |
21-12-23 17:14:20.281 - INFO: Learning rate: 1e-05
21-12-23 17:14:20.281 - INFO: Train epoch 854: Loss: 1878.7302 | r_Loss: 183.9259 | g_Loss: 852.0326 | l_Loss: 107.0680 |
21-12-23 17:15:32.232 - INFO: Learning rate: 1e-05
21-12-23 17:15:32.232 - INFO: Train epoch 855: Loss: 1788.5851 | r_Loss: 176.1437 | g_Loss: 808.4322 | l_Loss: 99.4343 |
21-12-23 17:16:43.883 - INFO: Learning rate: 1e-05
21-12-23 17:16:43.884 - INFO: Train epoch 856: Loss: 1770.0863 | r_Loss: 170.7654 | g_Loss: 816.7649 | l_Loss: 99.4943 |
21-12-23 17:17:55.686 - INFO: Learning rate: 1e-05
21-12-23 17:17:55.687 - INFO: Train epoch 857: Loss: 1759.6429 | r_Loss: 178.6752 | g_Loss: 782.3133 | l_Loss: 83.9535 |
21-12-23 17:19:07.445 - INFO: Learning rate: 1e-05
21-12-23 17:19:07.446 - INFO: Train epoch 858: Loss: 1822.4576 | r_Loss: 186.7889 | g_Loss: 792.8002 | l_Loss: 95.7130 |
21-12-23 17:20:19.107 - INFO: Learning rate: 1e-05
21-12-23 17:20:19.107 - INFO: Train epoch 859: Loss: 1689.4671 | r_Loss: 169.4427 | g_Loss: 757.4595 | l_Loss: 84.7941 |
21-12-23 17:21:30.861 - INFO: Learning rate: 1e-05
21-12-23 17:21:30.862 - INFO: Train epoch 860: Loss: 1752.7910 | r_Loss: 175.9832 | g_Loss: 777.3944 | l_Loss: 95.4805 |
21-12-23 17:22:42.575 - INFO: Learning rate: 1e-05
21-12-23 17:22:42.576 - INFO: Train epoch 861: Loss: 1651.6188 | r_Loss: 166.7435 | g_Loss: 736.7625 | l_Loss: 81.1385 |
21-12-23 17:23:54.254 - INFO: Learning rate: 1e-05
21-12-23 17:23:54.255 - INFO: Train epoch 862: Loss: 1620.7929 | r_Loss: 166.0159 | g_Loss: 701.2649 | l_Loss: 89.4483 |
21-12-23 17:25:05.841 - INFO: Learning rate: 1e-05
21-12-23 17:25:05.841 - INFO: Train epoch 863: Loss: 1589.1775 | r_Loss: 164.9814 | g_Loss: 684.4146 | l_Loss: 79.8559 |
21-12-23 17:26:17.537 - INFO: Learning rate: 1e-05
21-12-23 17:26:17.538 - INFO: Train epoch 864: Loss: 1573.6407 | r_Loss: 163.3445 | g_Loss: 673.0768 | l_Loss: 83.8415 |
21-12-23 17:27:29.216 - INFO: Learning rate: 1e-05
21-12-23 17:27:29.216 - INFO: Train epoch 865: Loss: 1573.4463 | r_Loss: 162.4973 | g_Loss: 677.5100 | l_Loss: 83.4497 |
21-12-23 17:28:40.958 - INFO: Learning rate: 1e-05
21-12-23 17:28:40.959 - INFO: Train epoch 866: Loss: 1599.2166 | r_Loss: 166.2851 | g_Loss: 684.7719 | l_Loss: 83.0193 |
21-12-23 17:29:52.795 - INFO: Learning rate: 1e-05
21-12-23 17:29:52.795 - INFO: Train epoch 867: Loss: 1560.4094 | r_Loss: 156.8624 | g_Loss: 675.7377 | l_Loss: 100.3599 |
21-12-23 17:31:04.508 - INFO: Learning rate: 1e-05
21-12-23 17:31:04.508 - INFO: Train epoch 868: Loss: 1571.6767 | r_Loss: 168.4934 | g_Loss: 649.5524 | l_Loss: 79.6575 |
21-12-23 17:32:16.146 - INFO: Learning rate: 1e-05
21-12-23 17:32:16.146 - INFO: Train epoch 869: Loss: 1557.4884 | r_Loss: 170.1372 | g_Loss: 628.7779 | l_Loss: 78.0246 |
21-12-23 17:34:01.475 - INFO: TEST: PSNR_S: 40.8350 | PSNR_C: 33.5544 |
21-12-23 17:34:01.476 - INFO: Learning rate: 1e-05
21-12-23 17:34:01.476 - INFO: Train epoch 870: Loss: 1474.8661 | r_Loss: 156.1446 | g_Loss: 623.7409 | l_Loss: 70.4020 |
21-12-23 17:35:13.098 - INFO: Learning rate: 1e-05
21-12-23 17:35:13.099 - INFO: Train epoch 871: Loss: 1510.9979 | r_Loss: 158.0364 | g_Loss: 631.7385 | l_Loss: 89.0774 |
21-12-23 17:36:24.708 - INFO: Learning rate: 1e-05
21-12-23 17:36:24.708 - INFO: Train epoch 872: Loss: 1517.4111 | r_Loss: 161.3950 | g_Loss: 631.7087 | l_Loss: 78.7274 |
21-12-23 17:37:36.337 - INFO: Learning rate: 1e-05
21-12-23 17:37:36.337 - INFO: Train epoch 873: Loss: 1460.7510 | r_Loss: 156.4007 | g_Loss: 590.9927 | l_Loss: 87.7547 |
21-12-23 17:38:48.016 - INFO: Learning rate: 1e-05
21-12-23 17:38:48.016 - INFO: Train epoch 874: Loss: 1646.7704 | r_Loss: 184.2063 | g_Loss: 644.2882 | l_Loss: 81.4508 |
21-12-23 17:39:59.568 - INFO: Learning rate: 1e-05
21-12-23 17:39:59.568 - INFO: Train epoch 875: Loss: 1484.8489 | r_Loss: 154.5955 | g_Loss: 616.4858 | l_Loss: 95.3855 |
21-12-23 17:41:11.132 - INFO: Learning rate: 1e-05
21-12-23 17:41:11.132 - INFO: Train epoch 876: Loss: 1519.0323 | r_Loss: 162.3934 | g_Loss: 620.2613 | l_Loss: 86.8042 |
21-12-23 17:42:22.802 - INFO: Learning rate: 1e-05
21-12-23 17:42:22.802 - INFO: Train epoch 877: Loss: 1553.6235 | r_Loss: 166.1414 | g_Loss: 628.9544 | l_Loss: 93.9619 |
21-12-23 17:43:34.404 - INFO: Learning rate: 1e-05
21-12-23 17:43:34.405 - INFO: Train epoch 878: Loss: 1418.3649 | r_Loss: 153.2871 | g_Loss: 582.1536 | l_Loss: 69.7759 |
21-12-23 17:44:45.967 - INFO: Learning rate: 1e-05
21-12-23 17:44:45.968 - INFO: Train epoch 879: Loss: 1413.7567 | r_Loss: 148.4877 | g_Loss: 596.1936 | l_Loss: 75.1246 |
21-12-23 17:45:57.593 - INFO: Learning rate: 1e-05
21-12-23 17:45:57.594 - INFO: Train epoch 880: Loss: 1479.0239 | r_Loss: 166.1619 | g_Loss: 578.1890 | l_Loss: 70.0255 |
21-12-23 17:47:09.137 - INFO: Learning rate: 1e-05
21-12-23 17:47:09.138 - INFO: Train epoch 881: Loss: 1470.0833 | r_Loss: 160.8210 | g_Loss: 604.1871 | l_Loss: 61.7912 |
21-12-23 17:48:20.646 - INFO: Learning rate: 1e-05
21-12-23 17:48:20.647 - INFO: Train epoch 882: Loss: 1489.5524 | r_Loss: 163.4608 | g_Loss: 588.0291 | l_Loss: 84.2192 |
21-12-23 17:49:32.162 - INFO: Learning rate: 1e-05
21-12-23 17:49:32.163 - INFO: Train epoch 883: Loss: 1521.7459 | r_Loss: 165.2667 | g_Loss: 611.6874 | l_Loss: 83.7248 |
21-12-23 17:50:43.672 - INFO: Learning rate: 1e-05
21-12-23 17:50:43.673 - INFO: Train epoch 884: Loss: 1628.5625 | r_Loss: 181.8681 | g_Loss: 628.4010 | l_Loss: 90.8210 |
21-12-23 17:51:55.300 - INFO: Learning rate: 1e-05
21-12-23 17:51:55.301 - INFO: Train epoch 885: Loss: 1524.9089 | r_Loss: 166.9028 | g_Loss: 604.5569 | l_Loss: 85.8381 |
21-12-23 17:53:06.842 - INFO: Learning rate: 1e-05
21-12-23 17:53:06.842 - INFO: Train epoch 886: Loss: 1573.2162 | r_Loss: 176.9872 | g_Loss: 611.3550 | l_Loss: 76.9251 |
21-12-23 17:54:18.442 - INFO: Learning rate: 1e-05
21-12-23 17:54:18.442 - INFO: Train epoch 887: Loss: 1442.5434 | r_Loss: 160.2555 | g_Loss: 573.7593 | l_Loss: 67.5067 |
21-12-23 17:55:30.107 - INFO: Learning rate: 1e-05
21-12-23 17:55:30.108 - INFO: Train epoch 888: Loss: 1425.4487 | r_Loss: 154.6529 | g_Loss: 572.3476 | l_Loss: 79.8365 |
21-12-23 17:56:41.902 - INFO: Learning rate: 1e-05
21-12-23 17:56:41.902 - INFO: Train epoch 889: Loss: 1454.6154 | r_Loss: 160.3876 | g_Loss: 563.9040 | l_Loss: 88.7733 |
21-12-23 17:57:53.578 - INFO: Learning rate: 1e-05
21-12-23 17:57:53.578 - INFO: Train epoch 890: Loss: 1568.9689 | r_Loss: 179.0434 | g_Loss: 608.6538 | l_Loss: 65.0982 |
21-12-23 17:59:05.213 - INFO: Learning rate: 1e-05
21-12-23 17:59:05.214 - INFO: Train epoch 891: Loss: 1345.1378 | r_Loss: 143.2111 | g_Loss: 553.3187 | l_Loss: 75.7638 |
21-12-23 18:00:17.066 - INFO: Learning rate: 1e-05
21-12-23 18:00:17.066 - INFO: Train epoch 892: Loss: 1510.4094 | r_Loss: 168.7898 | g_Loss: 580.3068 | l_Loss: 86.1535 |
21-12-23 18:01:28.754 - INFO: Learning rate: 1e-05
21-12-23 18:01:28.755 - INFO: Train epoch 893: Loss: 1656.9427 | r_Loss: 199.2018 | g_Loss: 587.9227 | l_Loss: 73.0111 |
21-12-23 18:02:40.385 - INFO: Learning rate: 1e-05
21-12-23 18:02:40.386 - INFO: Train epoch 894: Loss: 1367.0224 | r_Loss: 147.0232 | g_Loss: 562.2518 | l_Loss: 69.6545 |
21-12-23 18:03:52.108 - INFO: Learning rate: 1e-05
21-12-23 18:03:52.108 - INFO: Train epoch 895: Loss: 1411.8245 | r_Loss: 154.1974 | g_Loss: 570.1465 | l_Loss: 70.6910 |
21-12-23 18:05:03.701 - INFO: Learning rate: 1e-05
21-12-23 18:05:03.701 - INFO: Train epoch 896: Loss: 1340.0005 | r_Loss: 147.8835 | g_Loss: 533.9559 | l_Loss: 66.6270 |
21-12-23 18:06:15.318 - INFO: Learning rate: 1e-05
21-12-23 18:06:15.318 - INFO: Train epoch 897: Loss: 1416.0644 | r_Loss: 157.5393 | g_Loss: 554.0348 | l_Loss: 74.3333 |
21-12-23 18:07:27.102 - INFO: Learning rate: 1e-05
21-12-23 18:07:27.103 - INFO: Train epoch 898: Loss: 1317.2966 | r_Loss: 151.2650 | g_Loss: 506.8371 | l_Loss: 54.1343 |
21-12-23 18:08:38.785 - INFO: Learning rate: 1e-05
21-12-23 18:08:38.785 - INFO: Train epoch 899: Loss: 1452.5079 | r_Loss: 161.7146 | g_Loss: 572.2058 | l_Loss: 71.7293 |
21-12-23 18:10:24.054 - INFO: TEST: PSNR_S: 41.1392 | PSNR_C: 34.3053 |
21-12-23 18:10:24.055 - INFO: Learning rate: 1e-05
21-12-23 18:10:24.055 - INFO: Train epoch 900: Loss: 1469.3813 | r_Loss: 166.6547 | g_Loss: 566.9058 | l_Loss: 69.2022 |
21-12-23 18:11:35.675 - INFO: Learning rate: 1e-05
21-12-23 18:11:35.676 - INFO: Train epoch 901: Loss: 1366.4600 | r_Loss: 150.7002 | g_Loss: 538.3737 | l_Loss: 74.5850 |
21-12-23 18:12:47.466 - INFO: Learning rate: 1e-05
21-12-23 18:12:47.467 - INFO: Train epoch 902: Loss: 1418.6891 | r_Loss: 156.4806 | g_Loss: 559.9522 | l_Loss: 76.3337 |
21-12-23 18:13:59.170 - INFO: Learning rate: 1e-05
21-12-23 18:13:59.171 - INFO: Train epoch 903: Loss: 1500.3479 | r_Loss: 176.6090 | g_Loss: 553.5421 | l_Loss: 63.7608 |
21-12-23 18:15:10.869 - INFO: Learning rate: 1e-05
21-12-23 18:15:10.869 - INFO: Train epoch 904: Loss: 1275.4768 | r_Loss: 138.3451 | g_Loss: 520.7337 | l_Loss: 63.0179 |
21-12-23 18:16:22.550 - INFO: Learning rate: 1e-05
21-12-23 18:16:22.551 - INFO: Train epoch 905: Loss: 1781.7259 | r_Loss: 212.8395 | g_Loss: 632.5839 | l_Loss: 84.9442 |
21-12-23 18:17:34.285 - INFO: Learning rate: 1e-05
21-12-23 18:17:34.285 - INFO: Train epoch 906: Loss: 1375.6753 | r_Loss: 148.2729 | g_Loss: 564.6918 | l_Loss: 69.6192 |
21-12-23 18:18:46.054 - INFO: Learning rate: 1e-05
21-12-23 18:18:46.054 - INFO: Train epoch 907: Loss: 1418.8498 | r_Loss: 155.3002 | g_Loss: 570.8857 | l_Loss: 71.4633 |
21-12-23 18:19:57.813 - INFO: Learning rate: 1e-05
21-12-23 18:19:57.813 - INFO: Train epoch 908: Loss: 1488.9823 | r_Loss: 169.6374 | g_Loss: 568.7967 | l_Loss: 71.9989 |
21-12-23 18:21:09.413 - INFO: Learning rate: 1e-05
21-12-23 18:21:09.414 - INFO: Train epoch 909: Loss: 1390.5681 | r_Loss: 159.3992 | g_Loss: 526.4772 | l_Loss: 67.0948 |
21-12-23 18:22:21.121 - INFO: Learning rate: 1e-05
21-12-23 18:22:21.122 - INFO: Train epoch 910: Loss: 1320.9879 | r_Loss: 150.3739 | g_Loss: 511.1433 | l_Loss: 57.9754 |
21-12-23 18:23:32.789 - INFO: Learning rate: 1e-05
21-12-23 18:23:32.789 - INFO: Train epoch 911: Loss: 1386.9233 | r_Loss: 156.5051 | g_Loss: 536.5439 | l_Loss: 67.8538 |
21-12-23 18:24:44.428 - INFO: Learning rate: 1e-05
21-12-23 18:24:44.429 - INFO: Train epoch 912: Loss: 1374.2417 | r_Loss: 152.5398 | g_Loss: 545.1653 | l_Loss: 66.3776 |
21-12-23 18:25:56.047 - INFO: Learning rate: 1e-05
21-12-23 18:25:56.047 - INFO: Train epoch 913: Loss: 1467.6437 | r_Loss: 175.1543 | g_Loss: 524.0821 | l_Loss: 67.7903 |
21-12-23 18:27:07.770 - INFO: Learning rate: 1e-05
21-12-23 18:27:07.770 - INFO: Train epoch 914: Loss: 1354.1729 | r_Loss: 150.6281 | g_Loss: 539.7769 | l_Loss: 61.2556 |
21-12-23 18:28:19.319 - INFO: Learning rate: 1e-05
21-12-23 18:28:19.320 - INFO: Train epoch 915: Loss: 1360.4544 | r_Loss: 152.5827 | g_Loss: 529.6368 | l_Loss: 67.9044 |
21-12-23 18:29:30.836 - INFO: Learning rate: 1e-05
21-12-23 18:29:30.837 - INFO: Train epoch 916: Loss: 1302.6601 | r_Loss: 144.6941 | g_Loss: 508.0710 | l_Loss: 71.1188 |
21-12-23 18:30:42.426 - INFO: Learning rate: 1e-05
21-12-23 18:30:42.426 - INFO: Train epoch 917: Loss: 1443.2637 | r_Loss: 163.3435 | g_Loss: 563.4208 | l_Loss: 63.1256 |
21-12-23 18:31:54.195 - INFO: Learning rate: 1e-05
21-12-23 18:31:54.195 - INFO: Train epoch 918: Loss: 1379.7579 | r_Loss: 154.9193 | g_Loss: 535.4933 | l_Loss: 69.6680 |
21-12-23 18:33:05.789 - INFO: Learning rate: 1e-05
21-12-23 18:33:05.789 - INFO: Train epoch 919: Loss: 1288.0191 | r_Loss: 143.1270 | g_Loss: 504.2544 | l_Loss: 68.1297 |
21-12-23 18:34:17.556 - INFO: Learning rate: 1e-05
21-12-23 18:34:17.557 - INFO: Train epoch 920: Loss: 1292.6431 | r_Loss: 143.0372 | g_Loss: 511.5948 | l_Loss: 65.8621 |
21-12-23 18:35:29.234 - INFO: Learning rate: 1e-05
21-12-23 18:35:29.234 - INFO: Train epoch 921: Loss: 1381.6242 | r_Loss: 154.5040 | g_Loss: 540.2129 | l_Loss: 68.8915 |
21-12-23 18:36:40.921 - INFO: Learning rate: 1e-05
21-12-23 18:36:40.921 - INFO: Train epoch 922: Loss: 1340.6655 | r_Loss: 153.3110 | g_Loss: 523.6812 | l_Loss: 50.4295 |
21-12-23 18:37:52.624 - INFO: Learning rate: 1e-05
21-12-23 18:37:52.624 - INFO: Train epoch 923: Loss: 1627.9508 | r_Loss: 200.8937 | g_Loss: 552.5692 | l_Loss: 70.9133 |
21-12-23 18:39:04.381 - INFO: Learning rate: 1e-05
21-12-23 18:39:04.382 - INFO: Train epoch 924: Loss: 1223.5683 | r_Loss: 127.8651 | g_Loss: 519.2269 | l_Loss: 65.0157 |
21-12-23 18:40:15.937 - INFO: Learning rate: 1e-05
21-12-23 18:40:15.937 - INFO: Train epoch 925: Loss: 62550.2288 | r_Loss: 11276.5133 | g_Loss: 5522.9130 | l_Loss: 644.7483 |
21-12-23 18:41:27.585 - INFO: Learning rate: 1e-05
21-12-23 18:41:27.585 - INFO: Train epoch 926: Loss: 7641.5755 | r_Loss: 765.8800 | g_Loss: 3353.6920 | l_Loss: 458.4836 |
21-12-23 18:42:39.219 - INFO: Learning rate: 1e-05
21-12-23 18:42:39.219 - INFO: Train epoch 927: Loss: 4027.0625 | r_Loss: 419.3618 | g_Loss: 1703.4500 | l_Loss: 226.8037 |
21-12-23 18:43:50.765 - INFO: Learning rate: 1e-05
21-12-23 18:43:50.765 - INFO: Train epoch 928: Loss: 3501.7313 | r_Loss: 356.9479 | g_Loss: 1531.1605 | l_Loss: 185.8313 |
21-12-23 18:45:02.367 - INFO: Learning rate: 1e-05
21-12-23 18:45:02.368 - INFO: Train epoch 929: Loss: 3209.2824 | r_Loss: 317.7629 | g_Loss: 1439.8025 | l_Loss: 180.6653 |
21-12-23 18:46:47.625 - INFO: TEST: PSNR_S: 37.3777 | PSNR_C: 30.0969 |
21-12-23 18:46:47.626 - INFO: Learning rate: 1e-05
21-12-23 18:46:47.627 - INFO: Train epoch 930: Loss: 3060.9652 | r_Loss: 301.6995 | g_Loss: 1397.2552 | l_Loss: 155.2127 |
21-12-23 18:47:59.162 - INFO: Learning rate: 1e-05
21-12-23 18:47:59.163 - INFO: Train epoch 931: Loss: 2875.8756 | r_Loss: 275.3032 | g_Loss: 1325.3573 | l_Loss: 174.0022 |
21-12-23 18:49:10.802 - INFO: Learning rate: 1e-05
21-12-23 18:49:10.802 - INFO: Train epoch 932: Loss: 2768.9875 | r_Loss: 266.3426 | g_Loss: 1272.3334 | l_Loss: 164.9411 |
21-12-23 18:50:22.536 - INFO: Learning rate: 1e-05
21-12-23 18:50:22.537 - INFO: Train epoch 933: Loss: 2601.6355 | r_Loss: 249.9061 | g_Loss: 1206.4005 | l_Loss: 145.7048 |
21-12-23 18:51:34.120 - INFO: Learning rate: 1e-05
21-12-23 18:51:34.120 - INFO: Train epoch 934: Loss: 2509.6458 | r_Loss: 243.9195 | g_Loss: 1161.0559 | l_Loss: 128.9923 |
21-12-23 18:52:45.892 - INFO: Learning rate: 1e-05
21-12-23 18:52:45.892 - INFO: Train epoch 935: Loss: 2393.0968 | r_Loss: 229.9477 | g_Loss: 1108.8629 | l_Loss: 134.4956 |
21-12-23 18:53:57.443 - INFO: Learning rate: 1e-05
21-12-23 18:53:57.443 - INFO: Train epoch 936: Loss: 2420.8386 | r_Loss: 227.9350 | g_Loss: 1131.3641 | l_Loss: 149.7996 |
21-12-23 18:55:09.118 - INFO: Learning rate: 1e-05
21-12-23 18:55:09.118 - INFO: Train epoch 937: Loss: 2281.7550 | r_Loss: 214.5723 | g_Loss: 1071.6765 | l_Loss: 137.2168 |
21-12-23 18:56:20.788 - INFO: Learning rate: 1e-05
21-12-23 18:56:20.789 - INFO: Train epoch 938: Loss: 2308.3816 | r_Loss: 218.5509 | g_Loss: 1073.6120 | l_Loss: 142.0152 |
21-12-23 18:57:32.217 - INFO: Learning rate: 1e-05
21-12-23 18:57:32.217 - INFO: Train epoch 939: Loss: 2162.2055 | r_Loss: 208.5124 | g_Loss: 1008.5093 | l_Loss: 111.1344 |
21-12-23 18:58:43.798 - INFO: Learning rate: 1e-05
21-12-23 18:58:43.799 - INFO: Train epoch 940: Loss: 2183.5531 | r_Loss: 207.3367 | g_Loss: 1023.2291 | l_Loss: 123.6407 |
21-12-23 18:59:55.520 - INFO: Learning rate: 1e-05
21-12-23 18:59:55.521 - INFO: Train epoch 941: Loss: 2007.1136 | r_Loss: 191.1979 | g_Loss: 934.0760 | l_Loss: 117.0483 |
21-12-23 19:01:07.062 - INFO: Learning rate: 1e-05
21-12-23 19:01:07.063 - INFO: Train epoch 942: Loss: 1946.2607 | r_Loss: 181.1672 | g_Loss: 929.3618 | l_Loss: 111.0632 |
21-12-23 19:02:18.657 - INFO: Learning rate: 1e-05
21-12-23 19:02:18.658 - INFO: Train epoch 943: Loss: 2080.8357 | r_Loss: 197.0509 | g_Loss: 969.9514 | l_Loss: 125.6300 |
21-12-23 19:03:30.186 - INFO: Learning rate: 1e-05
21-12-23 19:03:30.186 - INFO: Train epoch 944: Loss: 1961.7944 | r_Loss: 186.9776 | g_Loss: 906.7732 | l_Loss: 120.1333 |
21-12-23 19:04:41.704 - INFO: Learning rate: 1e-05
21-12-23 19:04:41.704 - INFO: Train epoch 945: Loss: 1931.5028 | r_Loss: 179.0218 | g_Loss: 932.2503 | l_Loss: 104.1434 |
21-12-23 19:05:53.979 - INFO: Learning rate: 1e-05
21-12-23 19:05:53.980 - INFO: Train epoch 946: Loss: 1920.3268 | r_Loss: 184.0250 | g_Loss: 899.3894 | l_Loss: 100.8123 |
21-12-23 19:07:07.216 - INFO: Learning rate: 1e-05
21-12-23 19:07:07.217 - INFO: Train epoch 947: Loss: 1886.0189 | r_Loss: 180.1300 | g_Loss: 871.3213 | l_Loss: 114.0477 |
21-12-23 19:08:20.983 - INFO: Learning rate: 1e-05
21-12-23 19:08:20.984 - INFO: Train epoch 948: Loss: 1830.3704 | r_Loss: 175.9696 | g_Loss: 848.1690 | l_Loss: 102.3532 |
21-12-23 19:09:34.280 - INFO: Learning rate: 1e-05
21-12-23 19:09:34.281 - INFO: Train epoch 949: Loss: 1771.6142 | r_Loss: 161.7952 | g_Loss: 850.6243 | l_Loss: 112.0139 |
21-12-23 19:10:47.825 - INFO: Learning rate: 1e-05
21-12-23 19:10:47.826 - INFO: Train epoch 950: Loss: 1760.7308 | r_Loss: 167.3490 | g_Loss: 812.6655 | l_Loss: 111.3201 |
21-12-23 19:12:01.557 - INFO: Learning rate: 1e-05
21-12-23 19:12:01.558 - INFO: Train epoch 951: Loss: 1868.2160 | r_Loss: 177.7972 | g_Loss: 868.8899 | l_Loss: 110.3401 |
21-12-23 19:13:15.050 - INFO: Learning rate: 1e-05
21-12-23 19:13:15.051 - INFO: Train epoch 952: Loss: 1789.3605 | r_Loss: 174.1339 | g_Loss: 821.5344 | l_Loss: 97.1568 |
21-12-23 19:14:28.869 - INFO: Learning rate: 1e-05
21-12-23 19:14:28.869 - INFO: Train epoch 953: Loss: 1714.0309 | r_Loss: 165.8766 | g_Loss: 801.5830 | l_Loss: 83.0649 |
21-12-23 19:15:42.976 - INFO: Learning rate: 1e-05
21-12-23 19:15:42.977 - INFO: Train epoch 954: Loss: 1694.7740 | r_Loss: 165.8948 | g_Loss: 768.5818 | l_Loss: 96.7182 |
21-12-23 19:16:56.851 - INFO: Learning rate: 1e-05
21-12-23 19:16:56.852 - INFO: Train epoch 955: Loss: 1536.3939 | r_Loss: 147.8430 | g_Loss: 698.9221 | l_Loss: 98.2570 |
21-12-23 19:18:10.752 - INFO: Learning rate: 1e-05
21-12-23 19:18:10.753 - INFO: Train epoch 956: Loss: 1706.1731 | r_Loss: 169.9146 | g_Loss: 758.1712 | l_Loss: 98.4292 |
21-12-23 19:19:24.489 - INFO: Learning rate: 1e-05
21-12-23 19:19:24.490 - INFO: Train epoch 957: Loss: 1804.7033 | r_Loss: 184.6709 | g_Loss: 775.3737 | l_Loss: 105.9751 |
21-12-23 19:20:38.219 - INFO: Learning rate: 1e-05
21-12-23 19:20:38.219 - INFO: Train epoch 958: Loss: 1673.8692 | r_Loss: 165.1127 | g_Loss: 756.9531 | l_Loss: 91.3525 |
21-12-23 19:21:51.986 - INFO: Learning rate: 1e-05
21-12-23 19:21:51.987 - INFO: Train epoch 959: Loss: 1549.2878 | r_Loss: 153.4627 | g_Loss: 696.0621 | l_Loss: 85.9125 |
21-12-23 19:23:41.602 - INFO: TEST: PSNR_S: 39.5879 | PSNR_C: 32.9270 |
21-12-23 19:23:41.604 - INFO: Learning rate: 1e-05
21-12-23 19:23:41.605 - INFO: Train epoch 960: Loss: 1635.0551 | r_Loss: 168.5235 | g_Loss: 702.0454 | l_Loss: 90.3924 |
21-12-23 19:24:55.699 - INFO: Learning rate: 1e-05
21-12-23 19:24:55.700 - INFO: Train epoch 961: Loss: 1631.6119 | r_Loss: 165.4678 | g_Loss: 716.5421 | l_Loss: 87.7306 |
21-12-23 19:26:09.673 - INFO: Learning rate: 1e-05
21-12-23 19:26:09.674 - INFO: Train epoch 962: Loss: 1569.6736 | r_Loss: 156.2406 | g_Loss: 708.3883 | l_Loss: 80.0822 |
21-12-23 19:27:23.983 - INFO: Learning rate: 1e-05
21-12-23 19:27:23.983 - INFO: Train epoch 963: Loss: 1628.6122 | r_Loss: 167.9155 | g_Loss: 695.5088 | l_Loss: 93.5257 |
21-12-23 19:28:38.408 - INFO: Learning rate: 1e-05
21-12-23 19:28:38.409 - INFO: Train epoch 964: Loss: 1748.4535 | r_Loss: 184.4447 | g_Loss: 722.5049 | l_Loss: 103.7250 |
21-12-23 19:29:52.192 - INFO: Learning rate: 1e-05
21-12-23 19:29:52.193 - INFO: Train epoch 965: Loss: 1545.9899 | r_Loss: 157.6852 | g_Loss: 682.9314 | l_Loss: 74.6323 |
21-12-23 19:31:06.209 - INFO: Learning rate: 1e-05
21-12-23 19:31:06.210 - INFO: Train epoch 966: Loss: 1523.1294 | r_Loss: 153.0211 | g_Loss: 671.1048 | l_Loss: 86.9190 |
21-12-23 19:32:20.485 - INFO: Learning rate: 1e-05
21-12-23 19:32:20.486 - INFO: Train epoch 967: Loss: 1553.5140 | r_Loss: 163.1118 | g_Loss: 661.2448 | l_Loss: 76.7101 |
21-12-23 19:33:35.048 - INFO: Learning rate: 1e-05
21-12-23 19:33:35.049 - INFO: Train epoch 968: Loss: 1445.3791 | r_Loss: 146.6149 | g_Loss: 633.8292 | l_Loss: 78.4756 |
21-12-23 19:34:48.714 - INFO: Learning rate: 1e-05
21-12-23 19:34:48.715 - INFO: Train epoch 969: Loss: 1586.3196 | r_Loss: 171.1205 | g_Loss: 647.8786 | l_Loss: 82.8385 |
21-12-23 19:36:02.739 - INFO: Learning rate: 1e-05
21-12-23 19:36:02.740 - INFO: Train epoch 970: Loss: 1573.6613 | r_Loss: 161.9741 | g_Loss: 690.1127 | l_Loss: 73.6783 |
21-12-23 19:37:16.460 - INFO: Learning rate: 1e-05
21-12-23 19:37:16.461 - INFO: Train epoch 971: Loss: 1527.3126 | r_Loss: 161.4306 | g_Loss: 647.9465 | l_Loss: 72.2132 |
21-12-23 19:38:30.219 - INFO: Learning rate: 1e-05
21-12-23 19:38:30.220 - INFO: Train epoch 972: Loss: 1553.4402 | r_Loss: 167.2469 | g_Loss: 638.3888 | l_Loss: 78.8168 |
21-12-23 19:39:44.163 - INFO: Learning rate: 1e-05
21-12-23 19:39:44.164 - INFO: Train epoch 973: Loss: 1457.1344 | r_Loss: 148.0741 | g_Loss: 626.8964 | l_Loss: 89.8674 |
21-12-23 19:40:57.581 - INFO: Learning rate: 1e-05
21-12-23 19:40:57.582 - INFO: Train epoch 974: Loss: 1479.5859 | r_Loss: 151.8336 | g_Loss: 638.7586 | l_Loss: 81.6593 |
21-12-23 19:42:10.861 - INFO: Learning rate: 1e-05
21-12-23 19:42:10.862 - INFO: Train epoch 975: Loss: 1580.5505 | r_Loss: 172.2804 | g_Loss: 638.7024 | l_Loss: 80.4463 |
21-12-23 19:43:24.492 - INFO: Learning rate: 1e-05
21-12-23 19:43:24.493 - INFO: Train epoch 976: Loss: 1488.3155 | r_Loss: 153.7677 | g_Loss: 643.0975 | l_Loss: 76.3793 |
21-12-23 19:44:38.400 - INFO: Learning rate: 1e-05
21-12-23 19:44:38.401 - INFO: Train epoch 977: Loss: 1444.3314 | r_Loss: 150.5457 | g_Loss: 603.8787 | l_Loss: 87.7240 |
21-12-23 19:45:52.112 - INFO: Learning rate: 1e-05
21-12-23 19:45:52.113 - INFO: Train epoch 978: Loss: 1373.7217 | r_Loss: 140.7499 | g_Loss: 588.9269 | l_Loss: 81.0451 |
21-12-23 19:47:05.976 - INFO: Learning rate: 1e-05
21-12-23 19:47:05.977 - INFO: Train epoch 979: Loss: 1633.2498 | r_Loss: 188.5825 | g_Loss: 626.0859 | l_Loss: 64.2516 |
21-12-23 19:48:20.008 - INFO: Learning rate: 1e-05
21-12-23 19:48:20.009 - INFO: Train epoch 980: Loss: 1388.4172 | r_Loss: 145.6743 | g_Loss: 595.7950 | l_Loss: 64.2508 |
21-12-23 19:49:33.966 - INFO: Learning rate: 1e-05
21-12-23 19:49:33.968 - INFO: Train epoch 981: Loss: 1411.1907 | r_Loss: 144.2055 | g_Loss: 611.9396 | l_Loss: 78.2238 |
21-12-23 19:50:47.826 - INFO: Learning rate: 1e-05
21-12-23 19:50:47.827 - INFO: Train epoch 982: Loss: 1432.3211 | r_Loss: 156.1591 | g_Loss: 574.8083 | l_Loss: 76.7175 |
21-12-23 19:52:01.753 - INFO: Learning rate: 1e-05
21-12-23 19:52:01.754 - INFO: Train epoch 983: Loss: 1429.7382 | r_Loss: 147.5982 | g_Loss: 599.6862 | l_Loss: 92.0608 |
21-12-23 19:53:15.314 - INFO: Learning rate: 1e-05
21-12-23 19:53:15.315 - INFO: Train epoch 984: Loss: 1349.9985 | r_Loss: 141.2338 | g_Loss: 577.8236 | l_Loss: 66.0061 |
21-12-23 19:54:29.341 - INFO: Learning rate: 1e-05
21-12-23 19:54:29.342 - INFO: Train epoch 985: Loss: 1462.3735 | r_Loss: 159.3861 | g_Loss: 592.3378 | l_Loss: 73.1051 |
21-12-23 19:55:43.532 - INFO: Learning rate: 1e-05
21-12-23 19:55:43.533 - INFO: Train epoch 986: Loss: 1400.7341 | r_Loss: 150.9212 | g_Loss: 569.3479 | l_Loss: 76.7803 |
21-12-23 19:56:57.515 - INFO: Learning rate: 1e-05
21-12-23 19:56:57.516 - INFO: Train epoch 987: Loss: 1323.6707 | r_Loss: 133.2791 | g_Loss: 582.3726 | l_Loss: 74.9029 |
21-12-23 19:58:10.914 - INFO: Learning rate: 1e-05
21-12-23 19:58:10.914 - INFO: Train epoch 988: Loss: 1660.0184 | r_Loss: 200.4720 | g_Loss: 587.6961 | l_Loss: 69.9623 |
21-12-23 19:59:25.041 - INFO: Learning rate: 1e-05
21-12-23 19:59:25.042 - INFO: Train epoch 989: Loss: 1295.2254 | r_Loss: 129.4511 | g_Loss: 573.9461 | l_Loss: 74.0240 |
21-12-23 20:01:15.167 - INFO: TEST: PSNR_S: 41.6021 | PSNR_C: 34.0992 |
21-12-23 20:01:15.168 - INFO: Learning rate: 1e-05
21-12-23 20:01:15.169 - INFO: Train epoch 990: Loss: 1365.6968 | r_Loss: 135.9652 | g_Loss: 608.6291 | l_Loss: 77.2418 |
21-12-23 20:02:28.966 - INFO: Learning rate: 1e-05
21-12-23 20:02:28.967 - INFO: Train epoch 991: Loss: 1318.7177 | r_Loss: 138.5858 | g_Loss: 557.5755 | l_Loss: 68.2132 |
21-12-23 20:03:42.711 - INFO: Learning rate: 1e-05
21-12-23 20:03:42.712 - INFO: Train epoch 992: Loss: 1340.0940 | r_Loss: 148.1158 | g_Loss: 544.7075 | l_Loss: 54.8076 |
21-12-23 20:04:56.552 - INFO: Learning rate: 1e-05
21-12-23 20:04:56.552 - INFO: Train epoch 993: Loss: 1338.2648 | r_Loss: 141.3432 | g_Loss: 567.0327 | l_Loss: 64.5162 |
21-12-23 20:06:10.236 - INFO: Learning rate: 1e-05
21-12-23 20:06:10.237 - INFO: Train epoch 994: Loss: 1344.2777 | r_Loss: 149.0700 | g_Loss: 531.2824 | l_Loss: 67.6452 |
21-12-23 20:07:24.093 - INFO: Learning rate: 1e-05
21-12-23 20:07:24.094 - INFO: Train epoch 995: Loss: 1293.3003 | r_Loss: 132.9282 | g_Loss: 537.3446 | l_Loss: 91.3149 |
21-12-23 20:08:37.729 - INFO: Learning rate: 1e-05
21-12-23 20:08:37.730 - INFO: Train epoch 996: Loss: 1324.8310 | r_Loss: 140.9474 | g_Loss: 549.3936 | l_Loss: 70.7004 |
21-12-23 20:09:51.613 - INFO: Learning rate: 1e-05
21-12-23 20:09:51.614 - INFO: Train epoch 997: Loss: 1310.7167 | r_Loss: 144.7198 | g_Loss: 527.6649 | l_Loss: 59.4530 |
21-12-23 20:11:05.403 - INFO: Learning rate: 1e-05
21-12-23 20:11:05.404 - INFO: Train epoch 998: Loss: 1309.5609 | r_Loss: 141.6173 | g_Loss: 533.9374 | l_Loss: 67.5372 |
21-12-23 20:12:19.257 - INFO: Learning rate: 1e-05
21-12-23 20:12:19.258 - INFO: Train epoch 999: Loss: 1339.1516 | r_Loss: 148.5987 | g_Loss: 537.2867 | l_Loss: 58.8714 |
21-12-23 20:13:33.541 - INFO: Learning rate: 1e-05
21-12-23 20:13:33.542 - INFO: Train epoch 1000: Loss: 1289.7755 | r_Loss: 140.2547 | g_Loss: 530.2381 | l_Loss: 58.2638 |
21-12-23 20:14:47.382 - INFO: Learning rate: 1e-05
21-12-23 20:14:47.383 - INFO: Train epoch 1001: Loss: 1335.6879 | r_Loss: 147.0345 | g_Loss: 535.3285 | l_Loss: 65.1871 |
21-12-23 20:16:01.075 - INFO: Learning rate: 1e-05
21-12-23 20:16:01.076 - INFO: Train epoch 1002: Loss: 1290.9907 | r_Loss: 138.8669 | g_Loss: 529.6498 | l_Loss: 67.0062 |
21-12-23 20:17:15.131 - INFO: Learning rate: 1e-05
21-12-23 20:17:15.132 - INFO: Train epoch 1003: Loss: 1226.8704 | r_Loss: 131.5402 | g_Loss: 506.2891 | l_Loss: 62.8805 |
21-12-23 20:18:29.098 - INFO: Learning rate: 1e-05
21-12-23 20:18:29.099 - INFO: Train epoch 1004: Loss: 1426.5562 | r_Loss: 166.0946 | g_Loss: 525.6478 | l_Loss: 70.4356 |
21-12-23 20:19:42.904 - INFO: Learning rate: 1e-05
21-12-23 20:19:42.905 - INFO: Train epoch 1005: Loss: 1343.6908 | r_Loss: 144.0090 | g_Loss: 557.0754 | l_Loss: 66.5704 |
21-12-23 20:20:56.703 - INFO: Learning rate: 1e-05
21-12-23 20:20:56.704 - INFO: Train epoch 1006: Loss: 1255.9830 | r_Loss: 136.0178 | g_Loss: 505.5625 | l_Loss: 70.3317 |
21-12-23 20:22:11.090 - INFO: Learning rate: 1e-05
21-12-23 20:22:11.091 - INFO: Train epoch 1007: Loss: 1323.5037 | r_Loss: 150.1498 | g_Loss: 510.6339 | l_Loss: 62.1205 |
21-12-23 20:23:25.020 - INFO: Learning rate: 1e-05
21-12-23 20:23:25.021 - INFO: Train epoch 1008: Loss: 1211.8777 | r_Loss: 128.5743 | g_Loss: 508.7527 | l_Loss: 60.2534 |
21-12-23 20:24:38.899 - INFO: Learning rate: 1e-05
21-12-23 20:24:38.900 - INFO: Train epoch 1009: Loss: 1330.1480 | r_Loss: 150.5246 | g_Loss: 513.6365 | l_Loss: 63.8884 |
21-12-23 20:25:53.161 - INFO: Learning rate: 1e-05
21-12-23 20:25:53.162 - INFO: Train epoch 1010: Loss: 1363.9835 | r_Loss: 154.3264 | g_Loss: 516.8952 | l_Loss: 75.4564 |
21-12-23 20:27:07.265 - INFO: Learning rate: 1e-05
21-12-23 20:27:07.266 - INFO: Train epoch 1011: Loss: 1290.9496 | r_Loss: 140.3837 | g_Loss: 524.4627 | l_Loss: 64.5686 |
21-12-23 20:28:21.146 - INFO: Learning rate: 1e-05
21-12-23 20:28:21.148 - INFO: Train epoch 1012: Loss: 1314.1590 | r_Loss: 145.9203 | g_Loss: 525.8482 | l_Loss: 58.7092 |
21-12-23 20:29:34.919 - INFO: Learning rate: 1e-05
21-12-23 20:29:34.920 - INFO: Train epoch 1013: Loss: 1321.6479 | r_Loss: 146.3925 | g_Loss: 523.6428 | l_Loss: 66.0427 |
21-12-23 20:30:48.402 - INFO: Learning rate: 1e-05
21-12-23 20:30:48.403 - INFO: Train epoch 1014: Loss: 1209.6231 | r_Loss: 132.6625 | g_Loss: 486.3984 | l_Loss: 59.9121 |
21-12-23 20:32:02.373 - INFO: Learning rate: 1e-05
21-12-23 20:32:02.375 - INFO: Train epoch 1015: Loss: 1187.4909 | r_Loss: 131.3813 | g_Loss: 476.6160 | l_Loss: 53.9684 |
21-12-23 20:33:16.172 - INFO: Learning rate: 1e-05
21-12-23 20:33:16.174 - INFO: Train epoch 1016: Loss: 1252.6324 | r_Loss: 137.5468 | g_Loss: 505.2456 | l_Loss: 59.6529 |
21-12-23 20:34:29.944 - INFO: Learning rate: 1e-05
21-12-23 20:34:29.945 - INFO: Train epoch 1017: Loss: 1172.5840 | r_Loss: 126.9927 | g_Loss: 476.0623 | l_Loss: 61.5582 |
21-12-23 20:35:43.728 - INFO: Learning rate: 1e-05
21-12-23 20:35:43.729 - INFO: Train epoch 1018: Loss: 1230.0906 | r_Loss: 135.2923 | g_Loss: 488.8008 | l_Loss: 64.8284 |
21-12-23 20:36:57.632 - INFO: Learning rate: 1e-05
21-12-23 20:36:57.633 - INFO: Train epoch 1019: Loss: 1312.9296 | r_Loss: 147.0416 | g_Loss: 521.4597 | l_Loss: 56.2617 |
21-12-23 20:38:47.856 - INFO: TEST: PSNR_S: 41.1483 | PSNR_C: 34.7749 |
21-12-23 20:38:47.858 - INFO: Learning rate: 1e-05
21-12-23 20:38:47.858 - INFO: Train epoch 1020: Loss: 1235.8880 | r_Loss: 136.9510 | g_Loss: 494.2328 | l_Loss: 56.9000 |
21-12-23 20:40:01.749 - INFO: Learning rate: 1e-05
21-12-23 20:40:01.750 - INFO: Train epoch 1021: Loss: 1299.5984 | r_Loss: 149.5622 | g_Loss: 495.4376 | l_Loss: 56.3500 |
21-12-23 20:41:15.944 - INFO: Learning rate: 1e-05
21-12-23 20:41:15.945 - INFO: Train epoch 1022: Loss: 1217.5018 | r_Loss: 132.5715 | g_Loss: 493.3320 | l_Loss: 61.3125 |
21-12-23 20:42:29.767 - INFO: Learning rate: 1e-05
21-12-23 20:42:29.768 - INFO: Train epoch 1023: Loss: 1200.8365 | r_Loss: 132.1967 | g_Loss: 485.2504 | l_Loss: 54.6026 |
21-12-23 20:43:43.691 - INFO: Learning rate: 1e-05
21-12-23 20:43:43.692 - INFO: Train epoch 1024: Loss: 1215.7387 | r_Loss: 134.5154 | g_Loss: 476.6087 | l_Loss: 66.5531 |
21-12-23 20:44:57.730 - INFO: Learning rate: 1e-05
21-12-23 20:44:57.731 - INFO: Train epoch 1025: Loss: 1233.2807 | r_Loss: 136.6254 | g_Loss: 478.4698 | l_Loss: 71.6839 |
21-12-23 20:46:11.608 - INFO: Learning rate: 1e-05
21-12-23 20:46:11.609 - INFO: Train epoch 1026: Loss: 1289.3043 | r_Loss: 151.6446 | g_Loss: 479.1914 | l_Loss: 51.8899 |
21-12-23 20:47:25.629 - INFO: Learning rate: 1e-05
21-12-23 20:47:25.630 - INFO: Train epoch 1027: Loss: 1245.8340 | r_Loss: 141.6101 | g_Loss: 476.8377 | l_Loss: 60.9460 |
21-12-23 20:48:39.256 - INFO: Learning rate: 1e-05
21-12-23 20:48:39.257 - INFO: Train epoch 1028: Loss: 17097.6620 | r_Loss: 3178.8012 | g_Loss: 1090.8655 | l_Loss: 112.7909 |
21-12-23 20:49:53.094 - INFO: Learning rate: 1e-05
21-12-23 20:49:53.095 - INFO: Train epoch 1029: Loss: 10648.3756 | r_Loss: 1530.3810 | g_Loss: 2690.8106 | l_Loss: 305.6601 |
21-12-23 20:51:07.100 - INFO: Learning rate: 1e-05
21-12-23 20:51:07.101 - INFO: Train epoch 1030: Loss: 2822.4163 | r_Loss: 263.8752 | g_Loss: 1333.2431 | l_Loss: 169.7974 |
21-12-23 20:52:21.139 - INFO: Learning rate: 1e-05
21-12-23 20:52:21.140 - INFO: Train epoch 1031: Loss: 2498.0469 | r_Loss: 232.1391 | g_Loss: 1192.8447 | l_Loss: 144.5065 |
21-12-23 20:53:35.106 - INFO: Learning rate: 1e-05
21-12-23 20:53:35.107 - INFO: Train epoch 1032: Loss: 2189.5297 | r_Loss: 200.1486 | g_Loss: 1064.2086 | l_Loss: 124.5783 |
21-12-23 20:54:48.994 - INFO: Learning rate: 1e-05
21-12-23 20:54:48.995 - INFO: Train epoch 1033: Loss: 1957.5681 | r_Loss: 176.6908 | g_Loss: 957.5725 | l_Loss: 116.5414 |
21-12-23 20:56:02.917 - INFO: Learning rate: 1e-05
21-12-23 20:56:02.918 - INFO: Train epoch 1034: Loss: 1899.3041 | r_Loss: 171.7523 | g_Loss: 919.8587 | l_Loss: 120.6838 |
21-12-23 20:57:16.840 - INFO: Learning rate: 1e-05
21-12-23 20:57:16.841 - INFO: Train epoch 1035: Loss: 1881.2373 | r_Loss: 178.8421 | g_Loss: 875.1504 | l_Loss: 111.8765 |
21-12-23 20:58:31.080 - INFO: Learning rate: 1e-05
21-12-23 20:58:31.081 - INFO: Train epoch 1036: Loss: 1744.8551 | r_Loss: 164.1085 | g_Loss: 820.3293 | l_Loss: 103.9834 |
21-12-23 20:59:44.760 - INFO: Learning rate: 1e-05
21-12-23 20:59:44.761 - INFO: Train epoch 1037: Loss: 1721.1231 | r_Loss: 159.6710 | g_Loss: 818.3891 | l_Loss: 104.3791 |
21-12-23 21:00:58.964 - INFO: Learning rate: 1e-05
21-12-23 21:00:58.965 - INFO: Train epoch 1038: Loss: 1692.0006 | r_Loss: 154.3335 | g_Loss: 811.0434 | l_Loss: 109.2898 |
21-12-23 21:02:12.647 - INFO: Learning rate: 1e-05
21-12-23 21:02:12.648 - INFO: Train epoch 1039: Loss: 1562.9277 | r_Loss: 145.8781 | g_Loss: 746.0966 | l_Loss: 87.4407 |
21-12-23 21:03:26.746 - INFO: Learning rate: 1e-05
21-12-23 21:03:26.747 - INFO: Train epoch 1040: Loss: 1506.5265 | r_Loss: 139.4411 | g_Loss: 715.0832 | l_Loss: 94.2377 |
21-12-23 21:04:40.388 - INFO: Learning rate: 1e-05
21-12-23 21:04:40.389 - INFO: Train epoch 1041: Loss: 1543.1416 | r_Loss: 148.2083 | g_Loss: 715.7728 | l_Loss: 86.3274 |
21-12-23 21:05:54.200 - INFO: Learning rate: 1e-05
21-12-23 21:05:54.201 - INFO: Train epoch 1042: Loss: 1549.9717 | r_Loss: 150.5451 | g_Loss: 711.6236 | l_Loss: 85.6226 |
21-12-23 21:07:08.104 - INFO: Learning rate: 1e-05
21-12-23 21:07:08.105 - INFO: Train epoch 1043: Loss: 1476.3264 | r_Loss: 138.8860 | g_Loss: 687.3390 | l_Loss: 94.5574 |
21-12-23 21:08:22.238 - INFO: Learning rate: 1e-05
21-12-23 21:08:22.239 - INFO: Train epoch 1044: Loss: 1416.8741 | r_Loss: 132.6323 | g_Loss: 672.0820 | l_Loss: 81.6308 |
21-12-23 21:09:36.135 - INFO: Learning rate: 1e-05
21-12-23 21:09:36.136 - INFO: Train epoch 1045: Loss: 1422.7023 | r_Loss: 134.9415 | g_Loss: 665.9823 | l_Loss: 82.0126 |
21-12-23 21:10:50.026 - INFO: Learning rate: 1e-05
21-12-23 21:10:50.027 - INFO: Train epoch 1046: Loss: 1386.6085 | r_Loss: 135.4288 | g_Loss: 643.0908 | l_Loss: 66.3735 |
21-12-23 21:12:03.657 - INFO: Learning rate: 1e-05
21-12-23 21:12:03.658 - INFO: Train epoch 1047: Loss: 1319.6418 | r_Loss: 125.6415 | g_Loss: 621.3892 | l_Loss: 70.0449 |
21-12-23 21:13:17.261 - INFO: Learning rate: 1e-05
21-12-23 21:13:17.262 - INFO: Train epoch 1048: Loss: 1359.0686 | r_Loss: 131.5888 | g_Loss: 627.5663 | l_Loss: 73.5586 |
21-12-23 21:14:31.852 - INFO: Learning rate: 1e-05
21-12-23 21:14:31.854 - INFO: Train epoch 1049: Loss: 1334.1793 | r_Loss: 130.1538 | g_Loss: 611.5915 | l_Loss: 71.8190 |
21-12-23 21:16:22.247 - INFO: TEST: PSNR_S: 41.7302 | PSNR_C: 33.8924 |
21-12-23 21:16:22.249 - INFO: Learning rate: 1e-05
21-12-23 21:16:22.250 - INFO: Train epoch 1050: Loss: 1328.2815 | r_Loss: 131.5871 | g_Loss: 595.1034 | l_Loss: 75.2425 |
21-12-23 21:17:36.603 - INFO: Learning rate: 1e-05
21-12-23 21:17:36.604 - INFO: Train epoch 1051: Loss: 1334.0986 | r_Loss: 130.3512 | g_Loss: 611.2053 | l_Loss: 71.1370 |
21-12-23 21:18:50.583 - INFO: Learning rate: 1e-05
21-12-23 21:18:50.584 - INFO: Train epoch 1052: Loss: 1345.9883 | r_Loss: 132.7426 | g_Loss: 597.4247 | l_Loss: 84.8506 |
21-12-23 21:20:04.470 - INFO: Learning rate: 1e-05
21-12-23 21:20:04.471 - INFO: Train epoch 1053: Loss: 1311.5071 | r_Loss: 128.4577 | g_Loss: 583.3858 | l_Loss: 85.8328 |
21-12-23 21:21:18.678 - INFO: Learning rate: 1e-05
21-12-23 21:21:18.679 - INFO: Train epoch 1054: Loss: 1246.8809 | r_Loss: 126.6200 | g_Loss: 550.7526 | l_Loss: 63.0282 |
21-12-23 21:22:32.470 - INFO: Learning rate: 1e-05
21-12-23 21:22:32.472 - INFO: Train epoch 1055: Loss: 1267.5519 | r_Loss: 129.0923 | g_Loss: 553.3383 | l_Loss: 68.7523 |
21-12-23 21:23:46.361 - INFO: Learning rate: 1e-05
21-12-23 21:23:46.363 - INFO: Train epoch 1056: Loss: 1243.7622 | r_Loss: 126.0049 | g_Loss: 542.9102 | l_Loss: 70.8276 |
21-12-23 21:25:00.356 - INFO: Learning rate: 1e-05
21-12-23 21:25:00.357 - INFO: Train epoch 1057: Loss: 1339.6667 | r_Loss: 137.9619 | g_Loss: 575.0931 | l_Loss: 74.7640 |
21-12-23 21:26:14.137 - INFO: Learning rate: 1e-05
21-12-23 21:26:14.138 - INFO: Train epoch 1058: Loss: 1237.7565 | r_Loss: 129.8051 | g_Loss: 523.7247 | l_Loss: 65.0062 |
21-12-23 21:27:28.165 - INFO: Learning rate: 1e-05
21-12-23 21:27:28.166 - INFO: Train epoch 1059: Loss: 1199.6947 | r_Loss: 122.5603 | g_Loss: 525.1479 | l_Loss: 61.7453 |
21-12-23 21:28:42.000 - INFO: Learning rate: 1e-05
21-12-23 21:28:42.002 - INFO: Train epoch 1060: Loss: 1296.9859 | r_Loss: 134.7139 | g_Loss: 549.5888 | l_Loss: 73.8276 |
21-12-23 21:29:55.645 - INFO: Learning rate: 1e-05
21-12-23 21:29:55.647 - INFO: Train epoch 1061: Loss: 1170.7674 | r_Loss: 121.5454 | g_Loss: 498.2918 | l_Loss: 64.7486 |
21-12-23 21:31:09.639 - INFO: Learning rate: 1e-05
21-12-23 21:31:09.640 - INFO: Train epoch 1062: Loss: 1669.1902 | r_Loss: 205.5894 | g_Loss: 563.2927 | l_Loss: 77.9506 |
21-12-23 21:32:23.684 - INFO: Learning rate: 1e-05
21-12-23 21:32:23.685 - INFO: Train epoch 1063: Loss: 1240.7735 | r_Loss: 124.9625 | g_Loss: 546.8988 | l_Loss: 69.0622 |
21-12-23 21:33:38.168 - INFO: Learning rate: 1e-05
21-12-23 21:33:38.169 - INFO: Train epoch 1064: Loss: 1237.6612 | r_Loss: 125.2925 | g_Loss: 541.2909 | l_Loss: 69.9079 |
21-12-23 21:34:52.337 - INFO: Learning rate: 1e-05
21-12-23 21:34:52.338 - INFO: Train epoch 1065: Loss: 1233.1273 | r_Loss: 132.0690 | g_Loss: 511.4706 | l_Loss: 61.3116 |
21-12-23 21:36:06.386 - INFO: Learning rate: 1e-05
21-12-23 21:36:06.387 - INFO: Train epoch 1066: Loss: 1184.5541 | r_Loss: 121.7185 | g_Loss: 501.0202 | l_Loss: 74.9417 |
21-12-23 21:37:20.494 - INFO: Learning rate: 1e-05
21-12-23 21:37:20.495 - INFO: Train epoch 1067: Loss: 1207.0617 | r_Loss: 127.8067 | g_Loss: 505.2611 | l_Loss: 62.7670 |
21-12-23 21:38:34.662 - INFO: Learning rate: 1e-05
21-12-23 21:38:34.662 - INFO: Train epoch 1068: Loss: 1197.6145 | r_Loss: 121.0903 | g_Loss: 522.7408 | l_Loss: 69.4223 |
21-12-23 21:39:48.336 - INFO: Learning rate: 1e-05
21-12-23 21:39:48.337 - INFO: Train epoch 1069: Loss: 1209.5991 | r_Loss: 125.5520 | g_Loss: 512.8770 | l_Loss: 68.9622 |
21-12-23 21:41:02.424 - INFO: Learning rate: 1e-05
21-12-23 21:41:02.425 - INFO: Train epoch 1070: Loss: 1124.3185 | r_Loss: 118.0193 | g_Loss: 482.1503 | l_Loss: 52.0718 |
21-12-23 21:42:16.014 - INFO: Learning rate: 1e-05
21-12-23 21:42:16.015 - INFO: Train epoch 1071: Loss: 1208.5420 | r_Loss: 132.3798 | g_Loss: 484.5019 | l_Loss: 62.1410 |
21-12-23 21:43:29.623 - INFO: Learning rate: 1e-05
21-12-23 21:43:29.624 - INFO: Train epoch 1072: Loss: 1135.4237 | r_Loss: 124.0023 | g_Loss: 467.9085 | l_Loss: 47.5037 |
21-12-23 21:44:43.488 - INFO: Learning rate: 1e-05
21-12-23 21:44:43.489 - INFO: Train epoch 1073: Loss: 1175.9579 | r_Loss: 125.6087 | g_Loss: 482.4652 | l_Loss: 65.4489 |
21-12-23 21:45:57.128 - INFO: Learning rate: 1e-05
21-12-23 21:45:57.129 - INFO: Train epoch 1074: Loss: 1109.2395 | r_Loss: 119.1812 | g_Loss: 460.6950 | l_Loss: 52.6386 |
21-12-23 21:47:10.957 - INFO: Learning rate: 1e-05
21-12-23 21:47:10.959 - INFO: Train epoch 1075: Loss: 1119.6381 | r_Loss: 120.9846 | g_Loss: 451.9718 | l_Loss: 62.7433 |
21-12-23 21:48:24.758 - INFO: Learning rate: 1e-05
21-12-23 21:48:24.759 - INFO: Train epoch 1076: Loss: 1173.0992 | r_Loss: 125.1513 | g_Loss: 484.2961 | l_Loss: 63.0468 |
21-12-23 21:49:38.786 - INFO: Learning rate: 1e-05
21-12-23 21:49:38.787 - INFO: Train epoch 1077: Loss: 1215.5844 | r_Loss: 134.8298 | g_Loss: 482.1157 | l_Loss: 59.3194 |
21-12-23 21:50:52.744 - INFO: Learning rate: 1e-05
21-12-23 21:50:52.746 - INFO: Train epoch 1078: Loss: 1238.2672 | r_Loss: 132.8146 | g_Loss: 498.8903 | l_Loss: 75.3042 |
21-12-23 21:52:06.433 - INFO: Learning rate: 1e-05
21-12-23 21:52:06.434 - INFO: Train epoch 1079: Loss: 1153.4988 | r_Loss: 125.0662 | g_Loss: 467.7936 | l_Loss: 60.3742 |
21-12-23 21:53:56.395 - INFO: TEST: PSNR_S: 42.0356 | PSNR_C: 35.0199 |
21-12-23 21:53:56.396 - INFO: Learning rate: 1e-05
21-12-23 21:53:56.396 - INFO: Train epoch 1080: Loss: 1134.9678 | r_Loss: 124.1608 | g_Loss: 457.5058 | l_Loss: 56.6578 |
21-12-23 21:55:09.842 - INFO: Learning rate: 1e-05
21-12-23 21:55:09.843 - INFO: Train epoch 1081: Loss: 1205.1208 | r_Loss: 131.1019 | g_Loss: 488.2061 | l_Loss: 61.4054 |
21-12-23 21:56:23.529 - INFO: Learning rate: 1e-05
21-12-23 21:56:23.530 - INFO: Train epoch 1082: Loss: 1595.5973 | r_Loss: 196.3004 | g_Loss: 538.8530 | l_Loss: 75.2422 |
21-12-23 21:57:36.925 - INFO: Learning rate: 1e-05
21-12-23 21:57:36.926 - INFO: Train epoch 1083: Loss: 1096.1962 | r_Loss: 114.2524 | g_Loss: 468.5786 | l_Loss: 56.3555 |
21-12-23 21:58:51.272 - INFO: Learning rate: 1e-05
21-12-23 21:58:51.273 - INFO: Train epoch 1084: Loss: 1099.5718 | r_Loss: 116.5302 | g_Loss: 454.6048 | l_Loss: 62.3158 |
21-12-23 22:00:05.591 - INFO: Learning rate: 1e-05
21-12-23 22:00:05.592 - INFO: Train epoch 1085: Loss: 1217.3232 | r_Loss: 137.9959 | g_Loss: 476.8017 | l_Loss: 50.5422 |
21-12-23 22:01:19.445 - INFO: Learning rate: 1e-05
21-12-23 22:01:19.446 - INFO: Train epoch 1086: Loss: 1060.4339 | r_Loss: 109.6492 | g_Loss: 452.3456 | l_Loss: 59.8422 |
21-12-23 22:02:33.276 - INFO: Learning rate: 1e-05
21-12-23 22:02:33.278 - INFO: Train epoch 1087: Loss: 1200.1040 | r_Loss: 133.6964 | g_Loss: 479.4009 | l_Loss: 52.2209 |
21-12-23 22:03:47.387 - INFO: Learning rate: 1e-05
21-12-23 22:03:47.388 - INFO: Train epoch 1088: Loss: 1156.1442 | r_Loss: 126.4622 | g_Loss: 468.5145 | l_Loss: 55.3187 |
21-12-23 22:05:01.679 - INFO: Learning rate: 1e-05
21-12-23 22:05:01.680 - INFO: Train epoch 1089: Loss: 1178.3777 | r_Loss: 131.9520 | g_Loss: 465.0855 | l_Loss: 53.5321 |
21-12-23 22:06:15.818 - INFO: Learning rate: 1e-05
21-12-23 22:06:15.819 - INFO: Train epoch 1090: Loss: 1175.8897 | r_Loss: 128.0248 | g_Loss: 478.2267 | l_Loss: 57.5388 |
21-12-23 22:07:30.002 - INFO: Learning rate: 1e-05
21-12-23 22:07:30.003 - INFO: Train epoch 1091: Loss: 1190.0251 | r_Loss: 129.1524 | g_Loss: 476.7532 | l_Loss: 67.5097 |
21-12-23 22:08:44.114 - INFO: Learning rate: 1e-05
21-12-23 22:08:44.116 - INFO: Train epoch 1092: Loss: 1085.6527 | r_Loss: 117.2807 | g_Loss: 440.4243 | l_Loss: 58.8248 |
21-12-23 22:09:57.682 - INFO: Learning rate: 1e-05
21-12-23 22:09:57.683 - INFO: Train epoch 1093: Loss: 1091.7928 | r_Loss: 119.1575 | g_Loss: 437.8602 | l_Loss: 58.1448 |
21-12-23 22:11:11.392 - INFO: Learning rate: 1e-05
21-12-23 22:11:11.393 - INFO: Train epoch 1094: Loss: 1179.2169 | r_Loss: 131.2334 | g_Loss: 471.4563 | l_Loss: 51.5936 |
21-12-23 22:12:25.377 - INFO: Learning rate: 1e-05
21-12-23 22:12:25.379 - INFO: Train epoch 1095: Loss: 1184.6417 | r_Loss: 132.0283 | g_Loss: 466.3801 | l_Loss: 58.1199 |
21-12-23 22:13:39.466 - INFO: Learning rate: 1e-05
21-12-23 22:13:39.467 - INFO: Train epoch 1096: Loss: 1122.1415 | r_Loss: 121.9766 | g_Loss: 452.2835 | l_Loss: 59.9750 |
21-12-23 22:14:53.702 - INFO: Learning rate: 1e-05
21-12-23 22:14:53.703 - INFO: Train epoch 1097: Loss: 1155.5662 | r_Loss: 126.5303 | g_Loss: 468.6159 | l_Loss: 54.2989 |
21-12-23 22:16:07.874 - INFO: Learning rate: 1e-05
21-12-23 22:16:07.875 - INFO: Train epoch 1098: Loss: 1128.2137 | r_Loss: 119.4479 | g_Loss: 472.2201 | l_Loss: 58.7540 |
21-12-23 22:17:21.635 - INFO: Learning rate: 1e-05
21-12-23 22:17:21.635 - INFO: Train epoch 1099: Loss: 1214.5239 | r_Loss: 131.7882 | g_Loss: 487.5287 | l_Loss: 68.0543 |
21-12-23 22:18:35.373 - INFO: Learning rate: 1e-05
21-12-23 22:18:35.374 - INFO: Train epoch 1100: Loss: 1070.8683 | r_Loss: 114.8312 | g_Loss: 445.9771 | l_Loss: 50.7351 |
21-12-23 22:19:49.581 - INFO: Learning rate: 1e-05
21-12-23 22:19:49.582 - INFO: Train epoch 1101: Loss: 1115.0335 | r_Loss: 121.0746 | g_Loss: 453.1298 | l_Loss: 56.5308 |
21-12-23 22:21:03.806 - INFO: Learning rate: 1e-05
21-12-23 22:21:03.806 - INFO: Train epoch 1102: Loss: 1221.8691 | r_Loss: 139.2820 | g_Loss: 467.7175 | l_Loss: 57.7419 |
21-12-23 22:22:17.448 - INFO: Learning rate: 1e-05
21-12-23 22:22:17.449 - INFO: Train epoch 1103: Loss: 1048.3051 | r_Loss: 112.8951 | g_Loss: 429.9667 | l_Loss: 53.8630 |
21-12-23 22:23:31.508 - INFO: Learning rate: 1e-05
21-12-23 22:23:31.509 - INFO: Train epoch 1104: Loss: 1059.0970 | r_Loss: 111.6217 | g_Loss: 442.3678 | l_Loss: 58.6207 |
21-12-23 22:24:45.491 - INFO: Learning rate: 1e-05
21-12-23 22:24:45.492 - INFO: Train epoch 1105: Loss: 1168.4150 | r_Loss: 131.8697 | g_Loss: 446.0608 | l_Loss: 63.0057 |
21-12-23 22:25:59.238 - INFO: Learning rate: 1e-05
21-12-23 22:25:59.239 - INFO: Train epoch 1106: Loss: 1077.6794 | r_Loss: 116.7013 | g_Loss: 432.7822 | l_Loss: 61.3905 |
21-12-23 22:27:12.966 - INFO: Learning rate: 1e-05
21-12-23 22:27:12.967 - INFO: Train epoch 1107: Loss: 1046.3342 | r_Loss: 112.6460 | g_Loss: 434.9721 | l_Loss: 48.1321 |
21-12-23 22:28:26.749 - INFO: Learning rate: 1e-05
21-12-23 22:28:26.750 - INFO: Train epoch 1108: Loss: 1117.3461 | r_Loss: 125.5783 | g_Loss: 444.9923 | l_Loss: 44.4625 |
21-12-23 22:29:40.758 - INFO: Learning rate: 1e-05
21-12-23 22:29:40.759 - INFO: Train epoch 1109: Loss: 1177.4500 | r_Loss: 134.8602 | g_Loss: 449.1839 | l_Loss: 53.9650 |
21-12-23 22:31:30.271 - INFO: TEST: PSNR_S: 42.3669 | PSNR_C: 35.3275 |
21-12-23 22:31:30.273 - INFO: Learning rate: 1e-05
21-12-23 22:31:30.273 - INFO: Train epoch 1110: Loss: 996.8911 | r_Loss: 104.7291 | g_Loss: 423.0823 | l_Loss: 50.1631 |
21-12-23 22:32:44.508 - INFO: Learning rate: 1e-05
21-12-23 22:32:44.509 - INFO: Train epoch 1111: Loss: 1061.8766 | r_Loss: 112.0538 | g_Loss: 446.6093 | l_Loss: 54.9986 |
21-12-23 22:33:58.211 - INFO: Learning rate: 1e-05
21-12-23 22:33:58.212 - INFO: Train epoch 1112: Loss: 1069.3040 | r_Loss: 116.0241 | g_Loss: 424.6286 | l_Loss: 64.5549 |
21-12-23 22:35:12.113 - INFO: Learning rate: 1e-05
21-12-23 22:35:12.114 - INFO: Train epoch 1113: Loss: 1170.1459 | r_Loss: 134.2864 | g_Loss: 445.1309 | l_Loss: 53.5832 |
21-12-23 22:36:26.234 - INFO: Learning rate: 1e-05
21-12-23 22:36:26.235 - INFO: Train epoch 1114: Loss: 1225.2860 | r_Loss: 135.8017 | g_Loss: 471.9446 | l_Loss: 74.3331 |
21-12-23 22:37:40.093 - INFO: Learning rate: 1e-05
21-12-23 22:37:40.095 - INFO: Train epoch 1115: Loss: 1074.6113 | r_Loss: 117.9201 | g_Loss: 430.8181 | l_Loss: 54.1927 |
21-12-23 22:38:54.089 - INFO: Learning rate: 1e-05
21-12-23 22:38:54.090 - INFO: Train epoch 1116: Loss: 1224.6147 | r_Loss: 136.3807 | g_Loss: 482.0596 | l_Loss: 60.6513 |
21-12-23 22:40:08.153 - INFO: Learning rate: 1e-05
21-12-23 22:40:08.155 - INFO: Train epoch 1117: Loss: 1029.9062 | r_Loss: 111.4709 | g_Loss: 424.7569 | l_Loss: 47.7949 |
21-12-23 22:41:22.054 - INFO: Learning rate: 1e-05
21-12-23 22:41:22.056 - INFO: Train epoch 1118: Loss: 1048.5456 | r_Loss: 114.9448 | g_Loss: 419.7333 | l_Loss: 54.0884 |
21-12-23 22:42:36.315 - INFO: Learning rate: 1e-05
21-12-23 22:42:36.316 - INFO: Train epoch 1119: Loss: 1179.5726 | r_Loss: 134.4578 | g_Loss: 450.2354 | l_Loss: 57.0483 |
21-12-23 22:43:50.819 - INFO: Learning rate: 1e-05
21-12-23 22:43:50.820 - INFO: Train epoch 1120: Loss: 1064.4634 | r_Loss: 115.2140 | g_Loss: 435.4061 | l_Loss: 52.9873 |
21-12-23 22:45:05.246 - INFO: Learning rate: 1e-05
21-12-23 22:45:05.247 - INFO: Train epoch 1121: Loss: 1131.0314 | r_Loss: 123.4398 | g_Loss: 469.9755 | l_Loss: 43.8570 |
21-12-23 22:46:19.181 - INFO: Learning rate: 1e-05
21-12-23 22:46:19.182 - INFO: Train epoch 1122: Loss: 1066.0068 | r_Loss: 117.1369 | g_Loss: 431.5798 | l_Loss: 48.7425 |
21-12-23 22:47:33.084 - INFO: Learning rate: 1e-05
21-12-23 22:47:33.085 - INFO: Train epoch 1123: Loss: 1239.6998 | r_Loss: 143.2282 | g_Loss: 462.9653 | l_Loss: 60.5933 |
21-12-23 22:48:47.303 - INFO: Learning rate: 1e-05
21-12-23 22:48:47.304 - INFO: Train epoch 1124: Loss: 1082.5403 | r_Loss: 115.7078 | g_Loss: 446.9273 | l_Loss: 57.0740 |
21-12-23 22:50:01.275 - INFO: Learning rate: 1e-05
21-12-23 22:50:01.276 - INFO: Train epoch 1125: Loss: 1080.5097 | r_Loss: 116.2461 | g_Loss: 440.6508 | l_Loss: 58.6285 |
21-12-23 22:51:15.348 - INFO: Learning rate: 1e-05
21-12-23 22:51:15.349 - INFO: Train epoch 1126: Loss: 1002.9756 | r_Loss: 107.3552 | g_Loss: 413.2159 | l_Loss: 52.9835 |
21-12-23 22:52:29.223 - INFO: Learning rate: 1e-05
21-12-23 22:52:29.224 - INFO: Train epoch 1127: Loss: 45997.5388 | r_Loss: 8414.6949 | g_Loss: 3483.1444 | l_Loss: 440.9181 |
21-12-23 22:53:43.321 - INFO: Learning rate: 1e-05
21-12-23 22:53:43.322 - INFO: Train epoch 1128: Loss: 5215.8397 | r_Loss: 555.3888 | g_Loss: 2182.6210 | l_Loss: 256.2748 |
21-12-23 22:54:57.402 - INFO: Learning rate: 1e-05
21-12-23 22:54:57.403 - INFO: Train epoch 1129: Loss: 3522.5630 | r_Loss: 344.7211 | g_Loss: 1612.3985 | l_Loss: 186.5591 |
21-12-23 22:56:11.352 - INFO: Learning rate: 1e-05
21-12-23 22:56:11.353 - INFO: Train epoch 1130: Loss: 3045.7960 | r_Loss: 284.4891 | g_Loss: 1445.8308 | l_Loss: 177.5197 |
21-12-23 22:57:24.998 - INFO: Learning rate: 1e-05
21-12-23 22:57:24.999 - INFO: Train epoch 1131: Loss: 2620.0985 | r_Loss: 246.2288 | g_Loss: 1247.7549 | l_Loss: 141.1995 |
21-12-23 22:58:39.241 - INFO: Learning rate: 1e-05
21-12-23 22:58:39.242 - INFO: Train epoch 1132: Loss: 2631.7828 | r_Loss: 254.5561 | g_Loss: 1218.4052 | l_Loss: 140.5972 |
21-12-23 22:59:53.128 - INFO: Learning rate: 1e-05
21-12-23 22:59:53.129 - INFO: Train epoch 1133: Loss: 2440.0685 | r_Loss: 227.3376 | g_Loss: 1166.6291 | l_Loss: 136.7514 |
21-12-23 23:01:06.842 - INFO: Learning rate: 1e-05
21-12-23 23:01:06.843 - INFO: Train epoch 1134: Loss: 2300.7670 | r_Loss: 222.4695 | g_Loss: 1063.6427 | l_Loss: 124.7767 |
21-12-23 23:02:20.815 - INFO: Learning rate: 1e-05
21-12-23 23:02:20.817 - INFO: Train epoch 1135: Loss: 2228.4750 | r_Loss: 209.3634 | g_Loss: 1053.5090 | l_Loss: 128.1491 |
21-12-23 23:03:34.643 - INFO: Learning rate: 1e-05
21-12-23 23:03:34.644 - INFO: Train epoch 1136: Loss: 2093.7509 | r_Loss: 196.2930 | g_Loss: 1000.4768 | l_Loss: 111.8090 |
21-12-23 23:04:48.537 - INFO: Learning rate: 1e-05
21-12-23 23:04:48.538 - INFO: Train epoch 1137: Loss: 2036.8476 | r_Loss: 187.4224 | g_Loss: 980.1005 | l_Loss: 119.6353 |
21-12-23 23:06:02.845 - INFO: Learning rate: 1e-05
21-12-23 23:06:02.846 - INFO: Train epoch 1138: Loss: 2072.7607 | r_Loss: 192.2168 | g_Loss: 996.6402 | l_Loss: 115.0365 |
21-12-23 23:07:16.737 - INFO: Learning rate: 1e-05
21-12-23 23:07:16.738 - INFO: Train epoch 1139: Loss: 1938.9393 | r_Loss: 179.2641 | g_Loss: 941.0028 | l_Loss: 101.6162 |
21-12-23 23:09:06.698 - INFO: TEST: PSNR_S: 39.8940 | PSNR_C: 32.0248 |
21-12-23 23:09:06.700 - INFO: Learning rate: 1e-05
21-12-23 23:09:06.700 - INFO: Train epoch 1140: Loss: 2002.6216 | r_Loss: 182.1477 | g_Loss: 950.6388 | l_Loss: 141.2445 |
21-12-23 23:10:20.751 - INFO: Learning rate: 1e-05
21-12-23 23:10:20.752 - INFO: Train epoch 1141: Loss: 1718.2876 | r_Loss: 156.9621 | g_Loss: 829.3379 | l_Loss: 104.1393 |
21-12-23 23:11:34.513 - INFO: Learning rate: 1e-05
21-12-23 23:11:34.514 - INFO: Train epoch 1142: Loss: 1847.2632 | r_Loss: 171.2697 | g_Loss: 870.7571 | l_Loss: 120.1577 |
21-12-23 23:12:48.218 - INFO: Learning rate: 1e-05
21-12-23 23:12:48.219 - INFO: Train epoch 1143: Loss: 1658.9028 | r_Loss: 148.5444 | g_Loss: 795.6571 | l_Loss: 120.5239 |
21-12-23 23:14:02.258 - INFO: Learning rate: 1e-05
21-12-23 23:14:02.259 - INFO: Train epoch 1144: Loss: 1840.2357 | r_Loss: 174.7769 | g_Loss: 867.3310 | l_Loss: 99.0203 |
21-12-23 23:15:15.822 - INFO: Learning rate: 1e-05
21-12-23 23:15:15.823 - INFO: Train epoch 1145: Loss: 1668.3078 | r_Loss: 156.3168 | g_Loss: 784.1027 | l_Loss: 102.6212 |
21-12-23 23:16:29.429 - INFO: Learning rate: 1e-05
21-12-23 23:16:29.430 - INFO: Train epoch 1146: Loss: 1601.2461 | r_Loss: 150.7784 | g_Loss: 757.3108 | l_Loss: 90.0434 |
21-12-23 23:17:43.765 - INFO: Learning rate: 1e-05
21-12-23 23:17:43.766 - INFO: Train epoch 1147: Loss: 1523.8087 | r_Loss: 139.8437 | g_Loss: 748.0865 | l_Loss: 76.5034 |
21-12-23 23:18:57.680 - INFO: Learning rate: 1e-05
21-12-23 23:18:57.681 - INFO: Train epoch 1148: Loss: 1567.2515 | r_Loss: 148.9777 | g_Loss: 742.7748 | l_Loss: 79.5880 |
21-12-23 23:20:11.330 - INFO: Learning rate: 1e-05
21-12-23 23:20:11.331 - INFO: Train epoch 1149: Loss: 1563.6029 | r_Loss: 142.1093 | g_Loss: 753.6176 | l_Loss: 99.4390 |
21-12-23 23:21:25.330 - INFO: Learning rate: 1e-05
21-12-23 23:21:25.332 - INFO: Train epoch 1150: Loss: 1538.1818 | r_Loss: 138.9185 | g_Loss: 738.7563 | l_Loss: 104.8332 |
21-12-23 23:22:39.501 - INFO: Learning rate: 1e-05
21-12-23 23:22:39.502 - INFO: Train epoch 1151: Loss: 1451.9362 | r_Loss: 132.0525 | g_Loss: 698.8129 | l_Loss: 92.8607 |
21-12-23 23:23:53.180 - INFO: Learning rate: 1e-05
21-12-23 23:23:53.181 - INFO: Train epoch 1152: Loss: 1384.8158 | r_Loss: 123.4315 | g_Loss: 683.5238 | l_Loss: 84.1344 |
21-12-23 23:25:06.889 - INFO: Learning rate: 1e-05
21-12-23 23:25:06.890 - INFO: Train epoch 1153: Loss: 1403.2504 | r_Loss: 129.2242 | g_Loss: 683.6620 | l_Loss: 73.4676 |
21-12-23 23:26:20.534 - INFO: Learning rate: 1e-05
21-12-23 23:26:20.535 - INFO: Train epoch 1154: Loss: 1375.5923 | r_Loss: 125.1005 | g_Loss: 671.4678 | l_Loss: 78.6218 |
21-12-23 23:27:34.326 - INFO: Learning rate: 1e-05
21-12-23 23:27:34.326 - INFO: Train epoch 1155: Loss: 1287.5403 | r_Loss: 116.1385 | g_Loss: 624.5258 | l_Loss: 82.3222 |
21-12-23 23:28:48.352 - INFO: Learning rate: 1e-05
21-12-23 23:28:48.353 - INFO: Train epoch 1156: Loss: 1289.4230 | r_Loss: 119.2487 | g_Loss: 619.0372 | l_Loss: 74.1423 |
21-12-23 23:30:02.451 - INFO: Learning rate: 1e-05
21-12-23 23:30:02.453 - INFO: Train epoch 1157: Loss: 1296.2902 | r_Loss: 119.6509 | g_Loss: 612.7112 | l_Loss: 85.3243 |
21-12-23 23:31:16.284 - INFO: Learning rate: 1e-05
21-12-23 23:31:16.285 - INFO: Train epoch 1158: Loss: 1383.3033 | r_Loss: 129.3370 | g_Loss: 630.7599 | l_Loss: 105.8587 |
21-12-23 23:32:30.164 - INFO: Learning rate: 1e-05
21-12-23 23:32:30.165 - INFO: Train epoch 1159: Loss: 1260.2016 | r_Loss: 120.1902 | g_Loss: 587.2652 | l_Loss: 71.9854 |
21-12-23 23:33:44.081 - INFO: Learning rate: 1e-05
21-12-23 23:33:44.082 - INFO: Train epoch 1160: Loss: 1263.7156 | r_Loss: 121.5165 | g_Loss: 578.0807 | l_Loss: 78.0524 |
21-12-23 23:34:57.913 - INFO: Learning rate: 1e-05
21-12-23 23:34:57.914 - INFO: Train epoch 1161: Loss: 1296.4288 | r_Loss: 125.2069 | g_Loss: 595.9853 | l_Loss: 74.4089 |
21-12-23 23:36:11.710 - INFO: Learning rate: 1e-05
21-12-23 23:36:11.711 - INFO: Train epoch 1162: Loss: 1213.1543 | r_Loss: 112.8418 | g_Loss: 570.6522 | l_Loss: 78.2930 |
21-12-23 23:37:25.700 - INFO: Learning rate: 1e-05
21-12-23 23:37:25.701 - INFO: Train epoch 1163: Loss: 1207.4361 | r_Loss: 115.3606 | g_Loss: 565.8882 | l_Loss: 64.7450 |
21-12-23 23:38:39.673 - INFO: Learning rate: 1e-05
21-12-23 23:38:39.675 - INFO: Train epoch 1164: Loss: 1153.5062 | r_Loss: 107.8142 | g_Loss: 541.3813 | l_Loss: 73.0541 |
21-12-23 23:39:53.394 - INFO: Learning rate: 1e-05
21-12-23 23:39:53.395 - INFO: Train epoch 1165: Loss: 1199.2669 | r_Loss: 113.8613 | g_Loss: 563.2587 | l_Loss: 66.7016 |
21-12-23 23:41:07.473 - INFO: Learning rate: 1e-05
21-12-23 23:41:07.474 - INFO: Train epoch 1166: Loss: 1127.3049 | r_Loss: 108.9071 | g_Loss: 533.5645 | l_Loss: 49.2048 |
21-12-23 23:42:21.216 - INFO: Learning rate: 1e-05
21-12-23 23:42:21.217 - INFO: Train epoch 1167: Loss: 1282.8858 | r_Loss: 131.9181 | g_Loss: 558.6866 | l_Loss: 64.6089 |
21-12-23 23:43:34.878 - INFO: Learning rate: 1e-05
21-12-23 23:43:34.879 - INFO: Train epoch 1168: Loss: 1171.9214 | r_Loss: 114.0821 | g_Loss: 538.5548 | l_Loss: 62.9561 |
21-12-23 23:44:48.313 - INFO: Learning rate: 1e-05
21-12-23 23:44:48.313 - INFO: Train epoch 1169: Loss: 1226.2217 | r_Loss: 117.2307 | g_Loss: 566.4213 | l_Loss: 73.6467 |
21-12-23 23:46:38.155 - INFO: TEST: PSNR_S: 42.3867 | PSNR_C: 34.4997 |
21-12-23 23:46:38.157 - INFO: Learning rate: 1e-05
21-12-23 23:46:38.157 - INFO: Train epoch 1170: Loss: 1122.0068 | r_Loss: 108.2141 | g_Loss: 519.3764 | l_Loss: 61.5597 |
21-12-23 23:47:51.934 - INFO: Learning rate: 1e-05
21-12-23 23:47:51.935 - INFO: Train epoch 1171: Loss: 1127.7878 | r_Loss: 111.1555 | g_Loss: 507.3960 | l_Loss: 64.6145 |
21-12-23 23:49:05.767 - INFO: Learning rate: 1e-05
21-12-23 23:49:05.768 - INFO: Train epoch 1172: Loss: 1086.3033 | r_Loss: 104.8291 | g_Loss: 493.0832 | l_Loss: 69.0747 |
21-12-23 23:50:19.624 - INFO: Learning rate: 1e-05
21-12-23 23:50:19.625 - INFO: Train epoch 1173: Loss: 1129.9686 | r_Loss: 112.4365 | g_Loss: 514.2122 | l_Loss: 53.5737 |
21-12-23 23:51:33.240 - INFO: Learning rate: 1e-05
21-12-23 23:51:33.241 - INFO: Train epoch 1174: Loss: 1106.5476 | r_Loss: 108.8656 | g_Loss: 500.7044 | l_Loss: 61.5153 |
21-12-23 23:52:46.889 - INFO: Learning rate: 1e-05
21-12-23 23:52:46.890 - INFO: Train epoch 1175: Loss: 1031.5899 | r_Loss: 98.0482 | g_Loss: 469.7742 | l_Loss: 71.5747 |
21-12-23 23:54:00.775 - INFO: Learning rate: 1e-05
21-12-23 23:54:00.776 - INFO: Train epoch 1176: Loss: 1402.7145 | r_Loss: 165.8653 | g_Loss: 505.2605 | l_Loss: 68.1274 |
21-12-23 23:55:14.858 - INFO: Learning rate: 1e-05
21-12-23 23:55:14.859 - INFO: Train epoch 1177: Loss: 1104.7977 | r_Loss: 106.2684 | g_Loss: 509.1820 | l_Loss: 64.2735 |
21-12-23 23:56:28.753 - INFO: Learning rate: 1e-05
21-12-23 23:56:28.754 - INFO: Train epoch 1178: Loss: 1034.3127 | r_Loss: 99.1962 | g_Loss: 482.9636 | l_Loss: 55.3683 |
21-12-23 23:57:42.742 - INFO: Learning rate: 1e-05
21-12-23 23:57:42.743 - INFO: Train epoch 1179: Loss: 1146.1637 | r_Loss: 114.8446 | g_Loss: 504.0145 | l_Loss: 67.9259 |
21-12-23 23:58:56.754 - INFO: Learning rate: 1e-05
21-12-23 23:58:56.755 - INFO: Train epoch 1180: Loss: 1125.9998 | r_Loss: 110.8610 | g_Loss: 503.5029 | l_Loss: 68.1917 |
21-12-24 00:00:10.668 - INFO: Learning rate: 1e-05
21-12-24 00:00:10.669 - INFO: Train epoch 1181: Loss: 1048.3531 | r_Loss: 103.5994 | g_Loss: 474.0827 | l_Loss: 56.2734 |
21-12-24 00:01:24.838 - INFO: Learning rate: 1e-05
21-12-24 00:01:24.839 - INFO: Train epoch 1182: Loss: 1057.4029 | r_Loss: 103.8950 | g_Loss: 479.5095 | l_Loss: 58.4184 |
21-12-24 00:02:38.785 - INFO: Learning rate: 1e-05
21-12-24 00:02:38.786 - INFO: Train epoch 1183: Loss: 1109.9167 | r_Loss: 111.9568 | g_Loss: 487.5864 | l_Loss: 62.5465 |
21-12-24 00:03:52.266 - INFO: Learning rate: 1e-05
21-12-24 00:03:52.267 - INFO: Train epoch 1184: Loss: 1086.3356 | r_Loss: 111.8291 | g_Loss: 470.2454 | l_Loss: 56.9448 |
21-12-24 00:05:06.339 - INFO: Learning rate: 1e-05
21-12-24 00:05:06.340 - INFO: Train epoch 1185: Loss: 1074.4539 | r_Loss: 108.3844 | g_Loss: 474.3463 | l_Loss: 58.1855 |
21-12-24 00:06:20.378 - INFO: Learning rate: 1e-05
21-12-24 00:06:20.379 - INFO: Train epoch 1186: Loss: 1057.6474 | r_Loss: 104.4844 | g_Loss: 468.4771 | l_Loss: 66.7480 |
21-12-24 00:07:34.439 - INFO: Learning rate: 1e-05
21-12-24 00:07:34.440 - INFO: Train epoch 1187: Loss: 1068.2461 | r_Loss: 110.7864 | g_Loss: 462.1286 | l_Loss: 52.1856 |
21-12-24 00:08:48.768 - INFO: Learning rate: 1e-05
21-12-24 00:08:48.769 - INFO: Train epoch 1188: Loss: 1020.9330 | r_Loss: 104.7033 | g_Loss: 448.2770 | l_Loss: 49.1396 |
21-12-24 00:10:02.689 - INFO: Learning rate: 1e-05
21-12-24 00:10:02.690 - INFO: Train epoch 1189: Loss: 1020.3339 | r_Loss: 108.1923 | g_Loss: 435.8883 | l_Loss: 43.4841 |
21-12-24 00:11:16.706 - INFO: Learning rate: 1e-05
21-12-24 00:11:16.707 - INFO: Train epoch 1190: Loss: 1067.5008 | r_Loss: 110.9915 | g_Loss: 452.7227 | l_Loss: 59.8206 |
21-12-24 00:12:30.768 - INFO: Learning rate: 1e-05
21-12-24 00:12:30.769 - INFO: Train epoch 1191: Loss: 1061.9413 | r_Loss: 111.0371 | g_Loss: 449.7901 | l_Loss: 56.9657 |
21-12-24 00:13:45.211 - INFO: Learning rate: 1e-05
21-12-24 00:13:45.211 - INFO: Train epoch 1192: Loss: 1149.1983 | r_Loss: 132.4027 | g_Loss: 441.3973 | l_Loss: 45.7873 |
21-12-24 00:14:58.904 - INFO: Learning rate: 1e-05
21-12-24 00:14:58.906 - INFO: Train epoch 1193: Loss: 1003.2841 | r_Loss: 101.7521 | g_Loss: 443.7638 | l_Loss: 50.7599 |
21-12-24 00:16:12.972 - INFO: Learning rate: 1e-05
21-12-24 00:16:12.973 - INFO: Train epoch 1194: Loss: 1051.2305 | r_Loss: 104.5319 | g_Loss: 453.2235 | l_Loss: 75.3478 |
21-12-24 00:17:26.648 - INFO: Learning rate: 1e-05
21-12-24 00:17:26.649 - INFO: Train epoch 1195: Loss: 999.6932 | r_Loss: 103.9088 | g_Loss: 426.6648 | l_Loss: 53.4846 |
21-12-24 00:18:40.724 - INFO: Learning rate: 1e-05
21-12-24 00:18:40.725 - INFO: Train epoch 1196: Loss: 1062.1070 | r_Loss: 110.5575 | g_Loss: 449.7210 | l_Loss: 59.5983 |
21-12-24 00:19:54.463 - INFO: Learning rate: 1e-05
21-12-24 00:19:54.464 - INFO: Train epoch 1197: Loss: 1075.3004 | r_Loss: 114.6674 | g_Loss: 445.8270 | l_Loss: 56.1367 |
21-12-24 00:21:08.134 - INFO: Learning rate: 1e-05
21-12-24 00:21:08.135 - INFO: Train epoch 1198: Loss: 1028.2659 | r_Loss: 110.3764 | g_Loss: 425.6464 | l_Loss: 50.7373 |
21-12-24 00:22:22.211 - INFO: Learning rate: 1e-05
21-12-24 00:22:22.212 - INFO: Train epoch 1199: Loss: 1067.7970 | r_Loss: 112.1620 | g_Loss: 446.9580 | l_Loss: 60.0288 |
21-12-24 00:24:12.259 - INFO: TEST: PSNR_S: 42.9617 | PSNR_C: 35.5130 |
21-12-24 00:24:12.260 - INFO: Learning rate: 1e-05
21-12-24 00:24:12.261 - INFO: Train epoch 1200: Loss: 1027.0358 | r_Loss: 106.3421 | g_Loss: 435.9406 | l_Loss: 59.3845 |
21-12-24 00:25:26.644 - INFO: Learning rate: 1e-05
21-12-24 00:25:26.645 - INFO: Train epoch 1201: Loss: 979.4820 | r_Loss: 103.7943 | g_Loss: 408.7641 | l_Loss: 51.7465 |
21-12-24 00:26:40.551 - INFO: Learning rate: 1e-05
21-12-24 00:26:40.552 - INFO: Train epoch 1202: Loss: 1061.9953 | r_Loss: 117.6528 | g_Loss: 420.3286 | l_Loss: 53.4027 |
21-12-24 00:27:54.718 - INFO: Learning rate: 1e-05
21-12-24 00:27:54.720 - INFO: Train epoch 1203: Loss: 992.8072 | r_Loss: 103.7047 | g_Loss: 416.6493 | l_Loss: 57.6342 |
21-12-24 00:29:08.119 - INFO: Learning rate: 1e-05
21-12-24 00:29:08.120 - INFO: Train epoch 1204: Loss: 1047.3063 | r_Loss: 112.3199 | g_Loss: 428.4297 | l_Loss: 57.2770 |
21-12-24 00:30:22.110 - INFO: Learning rate: 1e-05
21-12-24 00:30:22.111 - INFO: Train epoch 1205: Loss: 1042.4017 | r_Loss: 109.1272 | g_Loss: 441.0767 | l_Loss: 55.6891 |
21-12-24 00:31:35.872 - INFO: Learning rate: 1e-05
21-12-24 00:31:35.873 - INFO: Train epoch 1206: Loss: 1032.2351 | r_Loss: 111.0735 | g_Loss: 415.4876 | l_Loss: 61.3801 |
21-12-24 00:32:49.667 - INFO: Learning rate: 1e-05
21-12-24 00:32:49.668 - INFO: Train epoch 1207: Loss: 1006.8492 | r_Loss: 109.0921 | g_Loss: 420.9085 | l_Loss: 40.4803 |
21-12-24 00:34:03.551 - INFO: Learning rate: 1e-05
21-12-24 00:34:03.552 - INFO: Train epoch 1208: Loss: 969.9537 | r_Loss: 103.0438 | g_Loss: 406.4693 | l_Loss: 48.2655 |
21-12-24 00:35:17.334 - INFO: Learning rate: 1e-05
21-12-24 00:35:17.335 - INFO: Train epoch 1209: Loss: 1081.6065 | r_Loss: 118.0626 | g_Loss: 440.9567 | l_Loss: 50.3369 |
21-12-24 00:36:31.052 - INFO: Learning rate: 1e-05
21-12-24 00:36:31.053 - INFO: Train epoch 1210: Loss: 973.3711 | r_Loss: 101.6515 | g_Loss: 407.4363 | l_Loss: 57.6771 |
21-12-24 00:37:44.852 - INFO: Learning rate: 1e-05
21-12-24 00:37:44.853 - INFO: Train epoch 1211: Loss: 1378.2119 | r_Loss: 171.6827 | g_Loss: 466.0218 | l_Loss: 53.7767 |
21-12-24 00:38:58.320 - INFO: Learning rate: 1e-05
21-12-24 00:38:58.321 - INFO: Train epoch 1212: Loss: 1038.6742 | r_Loss: 107.4724 | g_Loss: 442.3161 | l_Loss: 58.9961 |
21-12-24 00:40:12.530 - INFO: Learning rate: 1e-05
21-12-24 00:40:12.531 - INFO: Train epoch 1213: Loss: 957.0372 | r_Loss: 99.9293 | g_Loss: 413.0962 | l_Loss: 44.2943 |
21-12-24 00:41:26.222 - INFO: Learning rate: 1e-05
21-12-24 00:41:26.223 - INFO: Train epoch 1214: Loss: 970.4898 | r_Loss: 102.0620 | g_Loss: 410.8446 | l_Loss: 49.3350 |
21-12-24 00:42:39.714 - INFO: Learning rate: 1e-05
21-12-24 00:42:39.715 - INFO: Train epoch 1215: Loss: 997.3697 | r_Loss: 109.4053 | g_Loss: 405.9649 | l_Loss: 44.3780 |
21-12-24 00:43:53.226 - INFO: Learning rate: 1e-05
21-12-24 00:43:53.227 - INFO: Train epoch 1216: Loss: 945.9642 | r_Loss: 98.8808 | g_Loss: 395.1498 | l_Loss: 56.4104 |
21-12-24 00:45:06.773 - INFO: Learning rate: 1e-05
21-12-24 00:45:06.774 - INFO: Train epoch 1217: Loss: 942.2636 | r_Loss: 101.2567 | g_Loss: 391.6555 | l_Loss: 44.3247 |
21-12-24 00:46:20.687 - INFO: Learning rate: 1e-05
21-12-24 00:46:20.689 - INFO: Train epoch 1218: Loss: 988.3206 | r_Loss: 106.8688 | g_Loss: 407.1231 | l_Loss: 46.8536 |
21-12-24 00:47:34.263 - INFO: Learning rate: 1e-05
21-12-24 00:47:34.264 - INFO: Train epoch 1219: Loss: 935.5970 | r_Loss: 100.2764 | g_Loss: 390.7355 | l_Loss: 43.4795 |
21-12-24 00:48:48.471 - INFO: Learning rate: 1e-05
21-12-24 00:48:48.472 - INFO: Train epoch 1220: Loss: 959.8482 | r_Loss: 104.1802 | g_Loss: 392.8159 | l_Loss: 46.1314 |
21-12-24 00:50:03.129 - INFO: Learning rate: 1e-05
21-12-24 00:50:03.131 - INFO: Train epoch 1221: Loss: 980.6547 | r_Loss: 99.1846 | g_Loss: 418.0053 | l_Loss: 66.7266 |
21-12-24 00:51:17.073 - INFO: Learning rate: 1e-05
21-12-24 00:51:17.075 - INFO: Train epoch 1222: Loss: 60476.0957 | r_Loss: 10124.4288 | g_Loss: 8662.5829 | l_Loss: 1191.3688 |
21-12-24 00:52:30.535 - INFO: Learning rate: 1e-05
21-12-24 00:52:30.536 - INFO: Train epoch 1223: Loss: 8938.4797 | r_Loss: 944.7012 | g_Loss: 3780.8473 | l_Loss: 434.1267 |
21-12-24 00:53:44.529 - INFO: Learning rate: 1e-05
21-12-24 00:53:44.530 - INFO: Train epoch 1224: Loss: 4463.5561 | r_Loss: 451.1925 | g_Loss: 1970.6651 | l_Loss: 236.9284 |
21-12-24 00:54:58.379 - INFO: Learning rate: 1e-05
21-12-24 00:54:58.380 - INFO: Train epoch 1225: Loss: 3562.9751 | r_Loss: 346.6097 | g_Loss: 1612.0093 | l_Loss: 217.9173 |
21-12-24 00:56:12.296 - INFO: Learning rate: 1e-05
21-12-24 00:56:12.297 - INFO: Train epoch 1226: Loss: 3127.2377 | r_Loss: 302.4484 | g_Loss: 1439.1906 | l_Loss: 175.8052 |
21-12-24 00:57:26.409 - INFO: Learning rate: 1e-05
21-12-24 00:57:26.410 - INFO: Train epoch 1227: Loss: 2745.2797 | r_Loss: 257.2906 | g_Loss: 1298.1836 | l_Loss: 160.6432 |
21-12-24 00:58:40.304 - INFO: Learning rate: 1e-05
21-12-24 00:58:40.306 - INFO: Train epoch 1228: Loss: 2702.7693 | r_Loss: 248.6383 | g_Loss: 1281.2419 | l_Loss: 178.3358 |
21-12-24 00:59:54.287 - INFO: Learning rate: 1e-05
21-12-24 00:59:54.288 - INFO: Train epoch 1229: Loss: 2483.3802 | r_Loss: 228.6513 | g_Loss: 1186.1121 | l_Loss: 154.0118 |
21-12-24 01:01:44.295 - INFO: TEST: PSNR_S: 38.8125 | PSNR_C: 30.9892 |
21-12-24 01:01:44.297 - INFO: Learning rate: 1e-05
21-12-24 01:01:44.297 - INFO: Train epoch 1230: Loss: 2437.9361 | r_Loss: 226.9857 | g_Loss: 1156.6379 | l_Loss: 146.3694 |
21-12-24 01:02:58.026 - INFO: Learning rate: 1e-05
21-12-24 01:02:58.027 - INFO: Train epoch 1231: Loss: 2245.1310 | r_Loss: 208.2555 | g_Loss: 1068.6853 | l_Loss: 135.1680 |
21-12-24 01:04:11.971 - INFO: Learning rate: 1e-05
21-12-24 01:04:11.972 - INFO: Train epoch 1232: Loss: 2167.4921 | r_Loss: 199.5126 | g_Loss: 1046.6040 | l_Loss: 123.3251 |
21-12-24 01:05:25.698 - INFO: Learning rate: 1e-05
21-12-24 01:05:25.699 - INFO: Train epoch 1233: Loss: 2001.5363 | r_Loss: 185.5918 | g_Loss: 953.4284 | l_Loss: 120.1486 |
21-12-24 01:06:39.759 - INFO: Learning rate: 1e-05
21-12-24 01:06:39.760 - INFO: Train epoch 1234: Loss: 2141.8730 | r_Loss: 196.5839 | g_Loss: 1021.1607 | l_Loss: 137.7929 |
21-12-24 01:07:53.448 - INFO: Learning rate: 1e-05
21-12-24 01:07:53.449 - INFO: Train epoch 1235: Loss: 2039.4523 | r_Loss: 189.3061 | g_Loss: 974.4047 | l_Loss: 118.5171 |
21-12-24 01:09:07.579 - INFO: Learning rate: 1e-05
21-12-24 01:09:07.580 - INFO: Train epoch 1236: Loss: 1859.9922 | r_Loss: 170.3037 | g_Loss: 889.1001 | l_Loss: 119.3738 |
21-12-24 01:10:21.301 - INFO: Learning rate: 1e-05
21-12-24 01:10:21.302 - INFO: Train epoch 1237: Loss: 1799.7456 | r_Loss: 163.7036 | g_Loss: 870.7900 | l_Loss: 110.4376 |
21-12-24 01:11:34.876 - INFO: Learning rate: 1e-05
21-12-24 01:11:34.877 - INFO: Train epoch 1238: Loss: 1758.6394 | r_Loss: 161.5486 | g_Loss: 833.8199 | l_Loss: 117.0765 |
21-12-24 01:12:48.321 - INFO: Learning rate: 1e-05
21-12-24 01:12:48.322 - INFO: Train epoch 1239: Loss: 1863.3011 | r_Loss: 172.1015 | g_Loss: 884.5994 | l_Loss: 118.1945 |
21-12-24 01:14:02.348 - INFO: Learning rate: 1e-05
21-12-24 01:14:02.350 - INFO: Train epoch 1240: Loss: 1700.0817 | r_Loss: 154.4906 | g_Loss: 836.2606 | l_Loss: 91.3680 |
21-12-24 01:15:16.640 - INFO: Learning rate: 1e-05
21-12-24 01:15:16.642 - INFO: Train epoch 1241: Loss: 1591.2282 | r_Loss: 143.5645 | g_Loss: 783.6065 | l_Loss: 89.7992 |
21-12-24 01:16:30.520 - INFO: Learning rate: 1e-05
21-12-24 01:16:30.521 - INFO: Train epoch 1242: Loss: 1675.2864 | r_Loss: 153.0942 | g_Loss: 803.3176 | l_Loss: 106.4979 |
21-12-24 01:17:44.247 - INFO: Learning rate: 1e-05
21-12-24 01:17:44.248 - INFO: Train epoch 1243: Loss: 1595.5013 | r_Loss: 143.8069 | g_Loss: 767.8726 | l_Loss: 108.5939 |
21-12-24 01:18:57.879 - INFO: Learning rate: 1e-05
21-12-24 01:18:57.880 - INFO: Train epoch 1244: Loss: 1612.9566 | r_Loss: 149.0586 | g_Loss: 770.0250 | l_Loss: 97.6385 |
21-12-24 01:20:11.375 - INFO: Learning rate: 1e-05
21-12-24 01:20:11.376 - INFO: Train epoch 1245: Loss: 1557.2010 | r_Loss: 141.4541 | g_Loss: 752.5981 | l_Loss: 97.3323 |
21-12-24 01:21:25.078 - INFO: Learning rate: 1e-05
21-12-24 01:21:25.079 - INFO: Train epoch 1246: Loss: 1514.2965 | r_Loss: 139.6992 | g_Loss: 722.2083 | l_Loss: 93.5923 |
21-12-24 01:22:39.080 - INFO: Learning rate: 1e-05
21-12-24 01:22:39.081 - INFO: Train epoch 1247: Loss: 1518.5264 | r_Loss: 137.5209 | g_Loss: 728.8863 | l_Loss: 102.0357 |
21-12-24 01:23:52.569 - INFO: Learning rate: 1e-05
21-12-24 01:23:52.570 - INFO: Train epoch 1248: Loss: 1444.3151 | r_Loss: 130.4896 | g_Loss: 703.5486 | l_Loss: 88.3183 |
21-12-24 01:25:06.633 - INFO: Learning rate: 1e-05
21-12-24 01:25:06.634 - INFO: Train epoch 1249: Loss: 1463.2040 | r_Loss: 131.8421 | g_Loss: 709.2958 | l_Loss: 94.6980 |
21-12-24 01:26:20.097 - INFO: Learning rate: 1e-05
21-12-24 01:26:20.099 - INFO: Train epoch 1250: Loss: 1408.4067 | r_Loss: 127.3477 | g_Loss: 695.2026 | l_Loss: 76.4655 |
21-12-24 01:27:33.957 - INFO: Learning rate: 1e-05
21-12-24 01:27:33.958 - INFO: Train epoch 1251: Loss: 1286.4926 | r_Loss: 117.0179 | g_Loss: 635.1637 | l_Loss: 66.2392 |
21-12-24 01:28:47.980 - INFO: Learning rate: 1e-05
21-12-24 01:28:47.981 - INFO: Train epoch 1252: Loss: 1369.3980 | r_Loss: 125.6115 | g_Loss: 662.6701 | l_Loss: 78.6704 |
21-12-24 01:30:01.658 - INFO: Learning rate: 1e-05
21-12-24 01:30:01.659 - INFO: Train epoch 1253: Loss: 1333.9069 | r_Loss: 121.8068 | g_Loss: 641.4045 | l_Loss: 83.4686 |
21-12-24 01:31:15.654 - INFO: Learning rate: 1e-05
21-12-24 01:31:15.655 - INFO: Train epoch 1254: Loss: 1330.3528 | r_Loss: 123.9121 | g_Loss: 641.1076 | l_Loss: 69.6847 |
21-12-24 01:32:29.637 - INFO: Learning rate: 1e-05
21-12-24 01:32:29.639 - INFO: Train epoch 1255: Loss: 1290.2896 | r_Loss: 118.8290 | g_Loss: 619.6318 | l_Loss: 76.5129 |
21-12-24 01:33:43.244 - INFO: Learning rate: 1e-05
21-12-24 01:33:43.246 - INFO: Train epoch 1256: Loss: 1264.7306 | r_Loss: 113.0780 | g_Loss: 620.1249 | l_Loss: 79.2156 |
21-12-24 01:34:56.640 - INFO: Learning rate: 1e-05
21-12-24 01:34:56.641 - INFO: Train epoch 1257: Loss: 1284.1737 | r_Loss: 118.6600 | g_Loss: 612.4231 | l_Loss: 78.4507 |
21-12-24 01:36:10.053 - INFO: Learning rate: 1e-05
21-12-24 01:36:10.054 - INFO: Train epoch 1258: Loss: 1313.3242 | r_Loss: 126.0409 | g_Loss: 614.9520 | l_Loss: 68.1674 |
21-12-24 01:37:24.139 - INFO: Learning rate: 1e-05
21-12-24 01:37:24.140 - INFO: Train epoch 1259: Loss: 1247.3496 | r_Loss: 115.4738 | g_Loss: 605.3246 | l_Loss: 64.6561 |
21-12-24 01:39:14.166 - INFO: TEST: PSNR_S: 42.0700 | PSNR_C: 33.8853 |
21-12-24 01:39:14.168 - INFO: Learning rate: 1e-05
21-12-24 01:39:14.168 - INFO: Train epoch 1260: Loss: 1264.2175 | r_Loss: 116.0097 | g_Loss: 602.9799 | l_Loss: 81.1892 |
21-12-24 01:40:27.436 - INFO: Learning rate: 1e-05
21-12-24 01:40:27.437 - INFO: Train epoch 1261: Loss: 1266.4199 | r_Loss: 119.9494 | g_Loss: 578.0416 | l_Loss: 88.6314 |
21-12-24 01:41:41.372 - INFO: Learning rate: 1e-05
21-12-24 01:41:41.373 - INFO: Train epoch 1262: Loss: 1251.1982 | r_Loss: 117.6840 | g_Loss: 581.4236 | l_Loss: 81.3548 |
21-12-24 01:42:54.928 - INFO: Learning rate: 1e-05
21-12-24 01:42:54.929 - INFO: Train epoch 1263: Loss: 1263.8779 | r_Loss: 125.0770 | g_Loss: 564.0122 | l_Loss: 74.4807 |
21-12-24 01:44:09.078 - INFO: Learning rate: 1e-05
21-12-24 01:44:09.079 - INFO: Train epoch 1264: Loss: 1217.0894 | r_Loss: 115.1585 | g_Loss: 568.6223 | l_Loss: 72.6745 |
21-12-24 01:45:22.770 - INFO: Learning rate: 1e-05
21-12-24 01:45:22.770 - INFO: Train epoch 1265: Loss: 1188.8566 | r_Loss: 112.2727 | g_Loss: 548.4060 | l_Loss: 79.0872 |
21-12-24 01:46:36.406 - INFO: Learning rate: 1e-05
21-12-24 01:46:36.407 - INFO: Train epoch 1266: Loss: 1232.3613 | r_Loss: 118.4735 | g_Loss: 576.9184 | l_Loss: 63.0756 |
21-12-24 01:47:50.115 - INFO: Learning rate: 1e-05
21-12-24 01:47:50.116 - INFO: Train epoch 1267: Loss: 1191.6354 | r_Loss: 115.1691 | g_Loss: 540.3436 | l_Loss: 75.4464 |
21-12-24 01:49:04.076 - INFO: Learning rate: 1e-05
21-12-24 01:49:04.078 - INFO: Train epoch 1268: Loss: 1138.0406 | r_Loss: 109.6923 | g_Loss: 515.2674 | l_Loss: 74.3117 |
21-12-24 01:50:17.574 - INFO: Learning rate: 1e-05
21-12-24 01:50:17.575 - INFO: Train epoch 1269: Loss: 1124.8749 | r_Loss: 109.9626 | g_Loss: 515.4637 | l_Loss: 59.5980 |
21-12-24 01:51:31.364 - INFO: Learning rate: 1e-05
21-12-24 01:51:31.365 - INFO: Train epoch 1270: Loss: 1159.4309 | r_Loss: 114.3718 | g_Loss: 521.9578 | l_Loss: 65.6144 |
21-12-24 01:52:44.790 - INFO: Learning rate: 1e-05
21-12-24 01:52:44.791 - INFO: Train epoch 1271: Loss: 1113.8931 | r_Loss: 109.5499 | g_Loss: 503.8853 | l_Loss: 62.2584 |
21-12-24 01:53:58.611 - INFO: Learning rate: 1e-05
21-12-24 01:53:58.612 - INFO: Train epoch 1272: Loss: 1092.2600 | r_Loss: 106.7547 | g_Loss: 501.3833 | l_Loss: 57.1033 |
21-12-24 01:55:11.941 - INFO: Learning rate: 1e-05
21-12-24 01:55:11.943 - INFO: Train epoch 1273: Loss: 1132.3126 | r_Loss: 111.4874 | g_Loss: 506.9936 | l_Loss: 67.8821 |
21-12-24 01:56:25.971 - INFO: Learning rate: 1e-05
21-12-24 01:56:25.972 - INFO: Train epoch 1274: Loss: 1094.8646 | r_Loss: 106.8976 | g_Loss: 506.4291 | l_Loss: 53.9474 |
21-12-24 01:57:39.501 - INFO: Learning rate: 1e-05
21-12-24 01:57:39.502 - INFO: Train epoch 1275: Loss: 1170.4621 | r_Loss: 115.7196 | g_Loss: 525.8101 | l_Loss: 66.0543 |
21-12-24 01:58:53.318 - INFO: Learning rate: 1e-05
21-12-24 01:58:53.319 - INFO: Train epoch 1276: Loss: 1078.0763 | r_Loss: 105.4669 | g_Loss: 491.7463 | l_Loss: 58.9952 |
21-12-24 02:00:06.845 - INFO: Learning rate: 1e-05
21-12-24 02:00:06.846 - INFO: Train epoch 1277: Loss: 1050.6718 | r_Loss: 103.5338 | g_Loss: 478.0420 | l_Loss: 54.9609 |
21-12-24 02:01:20.324 - INFO: Learning rate: 1e-05
21-12-24 02:01:20.326 - INFO: Train epoch 1278: Loss: 1037.1165 | r_Loss: 102.5039 | g_Loss: 470.7500 | l_Loss: 53.8471 |
21-12-24 02:02:34.327 - INFO: Learning rate: 1e-05
21-12-24 02:02:34.328 - INFO: Train epoch 1279: Loss: 1111.2879 | r_Loss: 110.4682 | g_Loss: 490.9034 | l_Loss: 68.0434 |
21-12-24 02:03:48.291 - INFO: Learning rate: 1e-05
21-12-24 02:03:48.292 - INFO: Train epoch 1280: Loss: 1057.7525 | r_Loss: 104.6062 | g_Loss: 477.0408 | l_Loss: 57.6808 |
21-12-24 02:05:02.041 - INFO: Learning rate: 1e-05
21-12-24 02:05:02.041 - INFO: Train epoch 1281: Loss: 1140.0585 | r_Loss: 121.2475 | g_Loss: 479.7216 | l_Loss: 54.0992 |
21-12-24 02:06:15.603 - INFO: Learning rate: 1e-05
21-12-24 02:06:15.604 - INFO: Train epoch 1282: Loss: 1031.8481 | r_Loss: 103.7500 | g_Loss: 456.6214 | l_Loss: 56.4769 |
21-12-24 02:07:29.190 - INFO: Learning rate: 1e-05
21-12-24 02:07:29.191 - INFO: Train epoch 1283: Loss: 1028.1900 | r_Loss: 106.4118 | g_Loss: 451.2340 | l_Loss: 44.8970 |
21-12-24 02:08:42.645 - INFO: Learning rate: 1e-05
21-12-24 02:08:42.646 - INFO: Train epoch 1284: Loss: 1038.0233 | r_Loss: 106.7304 | g_Loss: 449.0735 | l_Loss: 55.2976 |
21-12-24 02:09:56.366 - INFO: Learning rate: 1e-05
21-12-24 02:09:56.367 - INFO: Train epoch 1285: Loss: 1044.0653 | r_Loss: 107.8526 | g_Loss: 452.9471 | l_Loss: 51.8554 |
21-12-24 02:11:09.615 - INFO: Learning rate: 1e-05
21-12-24 02:11:09.616 - INFO: Train epoch 1286: Loss: 937.4645 | r_Loss: 93.5045 | g_Loss: 415.4173 | l_Loss: 54.5246 |
21-12-24 02:12:23.167 - INFO: Learning rate: 1e-05
21-12-24 02:12:23.169 - INFO: Train epoch 1287: Loss: 976.0847 | r_Loss: 96.7537 | g_Loss: 426.8757 | l_Loss: 65.4403 |
21-12-24 02:13:36.949 - INFO: Learning rate: 1e-05
21-12-24 02:13:36.950 - INFO: Train epoch 1288: Loss: 1005.4676 | r_Loss: 101.8360 | g_Loss: 438.9658 | l_Loss: 57.3220 |
21-12-24 02:14:50.606 - INFO: Learning rate: 1e-05
21-12-24 02:14:50.607 - INFO: Train epoch 1289: Loss: 1059.7622 | r_Loss: 114.2978 | g_Loss: 436.6870 | l_Loss: 51.5862 |
21-12-24 02:16:40.348 - INFO: TEST: PSNR_S: 42.8436 | PSNR_C: 35.3022 |
21-12-24 02:16:40.349 - INFO: Learning rate: 1e-05
21-12-24 02:16:40.350 - INFO: Train epoch 1290: Loss: 983.5233 | r_Loss: 101.4250 | g_Loss: 427.9801 | l_Loss: 48.4181 |
21-12-24 02:17:53.788 - INFO: Learning rate: 1e-05
21-12-24 02:17:53.789 - INFO: Train epoch 1291: Loss: 1019.2801 | r_Loss: 102.1034 | g_Loss: 445.8420 | l_Loss: 62.9210 |
21-12-24 02:19:07.761 - INFO: Learning rate: 1e-05
21-12-24 02:19:07.762 - INFO: Train epoch 1292: Loss: 1040.1446 | r_Loss: 110.1119 | g_Loss: 436.7201 | l_Loss: 52.8648 |
21-12-24 02:20:21.254 - INFO: Learning rate: 1e-05
21-12-24 02:20:21.255 - INFO: Train epoch 1293: Loss: 951.1250 | r_Loss: 96.2380 | g_Loss: 422.0851 | l_Loss: 47.8500 |
21-12-24 02:21:34.716 - INFO: Learning rate: 1e-05
21-12-24 02:21:34.717 - INFO: Train epoch 1294: Loss: 1051.3976 | r_Loss: 112.0511 | g_Loss: 434.1257 | l_Loss: 57.0165 |
21-12-24 02:22:48.218 - INFO: Learning rate: 1e-05
21-12-24 02:22:48.219 - INFO: Train epoch 1295: Loss: 998.8125 | r_Loss: 106.1121 | g_Loss: 417.7409 | l_Loss: 50.5114 |
21-12-24 02:24:02.129 - INFO: Learning rate: 1e-05
21-12-24 02:24:02.131 - INFO: Train epoch 1296: Loss: 1061.4147 | r_Loss: 113.9733 | g_Loss: 441.8975 | l_Loss: 49.6507 |
21-12-24 02:25:15.487 - INFO: Learning rate: 1e-05
21-12-24 02:25:15.488 - INFO: Train epoch 1297: Loss: 1041.6503 | r_Loss: 113.8754 | g_Loss: 422.1235 | l_Loss: 50.1497 |
21-12-24 02:26:29.099 - INFO: Learning rate: 1e-05
21-12-24 02:26:29.100 - INFO: Train epoch 1298: Loss: 989.1647 | r_Loss: 98.4097 | g_Loss: 431.2631 | l_Loss: 65.8532 |
21-12-24 02:27:42.826 - INFO: Learning rate: 1e-05
21-12-24 02:27:42.828 - INFO: Train epoch 1299: Loss: 938.9079 | r_Loss: 96.8450 | g_Loss: 405.5289 | l_Loss: 49.1538 |
21-12-24 02:28:56.201 - INFO: Learning rate: 1e-05
21-12-24 02:28:56.203 - INFO: Train epoch 1300: Loss: 991.5879 | r_Loss: 102.5065 | g_Loss: 418.1431 | l_Loss: 60.9122 |
21-12-24 02:30:09.606 - INFO: Learning rate: 1e-05
21-12-24 02:30:09.607 - INFO: Train epoch 1301: Loss: 1101.1417 | r_Loss: 122.4245 | g_Loss: 435.6376 | l_Loss: 53.3818 |
21-12-24 02:31:23.061 - INFO: Learning rate: 1e-05
21-12-24 02:31:23.062 - INFO: Train epoch 1302: Loss: 990.8414 | r_Loss: 105.5629 | g_Loss: 416.5693 | l_Loss: 46.4577 |
21-12-24 02:32:36.567 - INFO: Learning rate: 1e-05
21-12-24 02:32:36.568 - INFO: Train epoch 1303: Loss: 960.8110 | r_Loss: 101.2498 | g_Loss: 406.6784 | l_Loss: 47.8837 |
21-12-24 02:33:50.035 - INFO: Learning rate: 1e-05
21-12-24 02:33:50.036 - INFO: Train epoch 1304: Loss: 940.5432 | r_Loss: 95.2561 | g_Loss: 409.8939 | l_Loss: 54.3690 |
21-12-24 02:35:04.077 - INFO: Learning rate: 1e-05
21-12-24 02:35:04.078 - INFO: Train epoch 1305: Loss: 1003.8557 | r_Loss: 105.7759 | g_Loss: 425.2805 | l_Loss: 49.6956 |
21-12-24 02:36:17.542 - INFO: Learning rate: 1e-05
21-12-24 02:36:17.543 - INFO: Train epoch 1306: Loss: 975.5116 | r_Loss: 103.5684 | g_Loss: 405.3489 | l_Loss: 52.3204 |
21-12-24 02:37:31.130 - INFO: Learning rate: 1e-05
21-12-24 02:37:31.131 - INFO: Train epoch 1307: Loss: 39821.0646 | r_Loss: 7131.9888 | g_Loss: 3729.6862 | l_Loss: 431.4343 |
21-12-24 02:38:44.667 - INFO: Learning rate: 1e-05
21-12-24 02:38:44.668 - INFO: Train epoch 1308: Loss: 3601.9276 | r_Loss: 341.4147 | g_Loss: 1663.6595 | l_Loss: 231.1947 |
21-12-24 02:39:58.333 - INFO: Learning rate: 1e-05
21-12-24 02:39:58.334 - INFO: Train epoch 1309: Loss: 2711.7348 | r_Loss: 259.2240 | g_Loss: 1258.1367 | l_Loss: 157.4781 |
21-12-24 02:41:11.466 - INFO: Learning rate: 1e-05
21-12-24 02:41:11.467 - INFO: Train epoch 1310: Loss: 2456.2357 | r_Loss: 235.4125 | g_Loss: 1148.1528 | l_Loss: 131.0204 |
21-12-24 02:42:24.754 - INFO: Learning rate: 1e-05
21-12-24 02:42:24.755 - INFO: Train epoch 1311: Loss: 2366.9995 | r_Loss: 227.9085 | g_Loss: 1095.4634 | l_Loss: 131.9934 |
21-12-24 02:43:38.102 - INFO: Learning rate: 1e-05
21-12-24 02:43:38.103 - INFO: Train epoch 1312: Loss: 2197.2720 | r_Loss: 212.0054 | g_Loss: 1025.0152 | l_Loss: 112.2295 |
21-12-24 02:44:51.325 - INFO: Learning rate: 1e-05
21-12-24 02:44:51.326 - INFO: Train epoch 1313: Loss: 2007.8618 | r_Loss: 190.9786 | g_Loss: 940.6156 | l_Loss: 112.3530 |
21-12-24 02:46:04.948 - INFO: Learning rate: 1e-05
21-12-24 02:46:04.949 - INFO: Train epoch 1314: Loss: 2010.9699 | r_Loss: 191.2231 | g_Loss: 927.1270 | l_Loss: 127.7273 |
21-12-24 02:47:18.834 - INFO: Learning rate: 1e-05
21-12-24 02:47:18.835 - INFO: Train epoch 1315: Loss: 1852.7870 | r_Loss: 174.4806 | g_Loss: 870.1014 | l_Loss: 110.2828 |
21-12-24 02:48:32.265 - INFO: Learning rate: 1e-05
21-12-24 02:48:32.266 - INFO: Train epoch 1316: Loss: 1857.1023 | r_Loss: 180.6630 | g_Loss: 856.9285 | l_Loss: 96.8586 |
21-12-24 02:49:46.080 - INFO: Learning rate: 1e-05
21-12-24 02:49:46.081 - INFO: Train epoch 1317: Loss: 1779.5644 | r_Loss: 169.6609 | g_Loss: 838.5481 | l_Loss: 92.7117 |
21-12-24 02:50:59.510 - INFO: Learning rate: 1e-05
21-12-24 02:50:59.511 - INFO: Train epoch 1318: Loss: 1700.0594 | r_Loss: 161.7292 | g_Loss: 798.2506 | l_Loss: 93.1630 |
21-12-24 02:52:12.554 - INFO: Learning rate: 1e-05
21-12-24 02:52:12.555 - INFO: Train epoch 1319: Loss: 1787.8487 | r_Loss: 170.2190 | g_Loss: 827.3226 | l_Loss: 109.4312 |
21-12-24 02:54:01.746 - INFO: TEST: PSNR_S: 40.2887 | PSNR_C: 32.7186 |
21-12-24 02:54:01.748 - INFO: Learning rate: 1e-05
21-12-24 02:54:01.748 - INFO: Train epoch 1320: Loss: 1629.7896 | r_Loss: 156.0044 | g_Loss: 757.5520 | l_Loss: 92.2155 |
21-12-24 02:55:15.077 - INFO: Learning rate: 1e-05
21-12-24 02:55:15.078 - INFO: Train epoch 1321: Loss: 1611.7824 | r_Loss: 153.4091 | g_Loss: 752.1578 | l_Loss: 92.5793 |
21-12-24 02:56:28.390 - INFO: Learning rate: 1e-05
21-12-24 02:56:28.391 - INFO: Train epoch 1322: Loss: 1596.5422 | r_Loss: 151.6792 | g_Loss: 742.0422 | l_Loss: 96.1038 |
21-12-24 02:57:42.039 - INFO: Learning rate: 1e-05
21-12-24 02:57:42.040 - INFO: Train epoch 1323: Loss: 1504.2029 | r_Loss: 140.0926 | g_Loss: 712.8351 | l_Loss: 90.9046 |
21-12-24 02:58:55.387 - INFO: Learning rate: 1e-05
21-12-24 02:58:55.388 - INFO: Train epoch 1324: Loss: 1586.8546 | r_Loss: 147.1101 | g_Loss: 759.1195 | l_Loss: 92.1845 |
21-12-24 03:00:08.846 - INFO: Learning rate: 1e-05
21-12-24 03:00:08.847 - INFO: Train epoch 1325: Loss: 1428.0473 | r_Loss: 132.9949 | g_Loss: 671.4607 | l_Loss: 91.6120 |
21-12-24 03:01:22.384 - INFO: Learning rate: 1e-05
21-12-24 03:01:22.385 - INFO: Train epoch 1326: Loss: 1496.9613 | r_Loss: 142.7453 | g_Loss: 697.2959 | l_Loss: 85.9387 |
21-12-24 03:02:36.133 - INFO: Learning rate: 1e-05
21-12-24 03:02:36.134 - INFO: Train epoch 1327: Loss: 1393.8685 | r_Loss: 131.2569 | g_Loss: 660.0854 | l_Loss: 77.4988 |
21-12-24 03:03:49.420 - INFO: Learning rate: 1e-05
21-12-24 03:03:49.421 - INFO: Train epoch 1328: Loss: 1389.4476 | r_Loss: 131.3089 | g_Loss: 656.6981 | l_Loss: 76.2052 |
21-12-24 03:05:03.389 - INFO: Learning rate: 1e-05
21-12-24 03:05:03.390 - INFO: Train epoch 1329: Loss: 1400.6796 | r_Loss: 131.3109 | g_Loss: 670.4873 | l_Loss: 73.6381 |
21-12-24 03:06:17.157 - INFO: Learning rate: 1e-05
21-12-24 03:06:17.158 - INFO: Train epoch 1330: Loss: 1353.4191 | r_Loss: 127.1531 | g_Loss: 642.3818 | l_Loss: 75.2719 |
21-12-24 03:07:30.849 - INFO: Learning rate: 1e-05
21-12-24 03:07:30.850 - INFO: Train epoch 1331: Loss: 1331.4793 | r_Loss: 125.0227 | g_Loss: 627.4245 | l_Loss: 78.9414 |
21-12-24 03:08:44.804 - INFO: Learning rate: 1e-05
21-12-24 03:08:44.805 - INFO: Train epoch 1332: Loss: 1305.9373 | r_Loss: 125.8833 | g_Loss: 605.2256 | l_Loss: 71.2954 |
21-12-24 03:09:58.014 - INFO: Learning rate: 1e-05
21-12-24 03:09:58.015 - INFO: Train epoch 1333: Loss: 1298.3714 | r_Loss: 121.4975 | g_Loss: 607.9154 | l_Loss: 82.9686 |
21-12-24 03:11:11.678 - INFO: Learning rate: 1e-05
21-12-24 03:11:11.679 - INFO: Train epoch 1334: Loss: 1267.8175 | r_Loss: 118.3323 | g_Loss: 599.2617 | l_Loss: 76.8942 |
21-12-24 03:12:25.279 - INFO: Learning rate: 1e-05
21-12-24 03:12:25.280 - INFO: Train epoch 1335: Loss: 1213.7211 | r_Loss: 113.6400 | g_Loss: 571.3122 | l_Loss: 74.2089 |
21-12-24 03:13:38.385 - INFO: Learning rate: 1e-05
21-12-24 03:13:38.386 - INFO: Train epoch 1336: Loss: 1268.0872 | r_Loss: 121.4092 | g_Loss: 596.8381 | l_Loss: 64.2034 |
21-12-24 03:14:51.662 - INFO: Learning rate: 1e-05
21-12-24 03:14:51.663 - INFO: Train epoch 1337: Loss: 1296.5836 | r_Loss: 122.8527 | g_Loss: 587.7227 | l_Loss: 94.5975 |
21-12-24 03:16:05.058 - INFO: Learning rate: 1e-05
21-12-24 03:16:05.059 - INFO: Train epoch 1338: Loss: 1213.9761 | r_Loss: 112.1667 | g_Loss: 586.1684 | l_Loss: 66.9742 |
21-12-24 03:17:18.449 - INFO: Learning rate: 1e-05
21-12-24 03:17:18.450 - INFO: Train epoch 1339: Loss: 1164.2017 | r_Loss: 110.1543 | g_Loss: 547.7454 | l_Loss: 65.6848 |
21-12-24 03:18:32.364 - INFO: Learning rate: 1e-05
21-12-24 03:18:32.365 - INFO: Train epoch 1340: Loss: 1241.6131 | r_Loss: 119.6896 | g_Loss: 570.0039 | l_Loss: 73.1610 |
21-12-24 03:19:45.800 - INFO: Learning rate: 1e-05
21-12-24 03:19:45.801 - INFO: Train epoch 1341: Loss: 1144.6250 | r_Loss: 108.8413 | g_Loss: 535.3990 | l_Loss: 65.0196 |
21-12-24 03:20:59.560 - INFO: Learning rate: 1e-05
21-12-24 03:20:59.561 - INFO: Train epoch 1342: Loss: 1268.4171 | r_Loss: 126.4365 | g_Loss: 568.3020 | l_Loss: 67.9325 |
21-12-24 03:22:13.020 - INFO: Learning rate: 1e-05
21-12-24 03:22:13.021 - INFO: Train epoch 1343: Loss: 1113.7449 | r_Loss: 103.6503 | g_Loss: 532.5164 | l_Loss: 62.9771 |
21-12-24 03:23:26.722 - INFO: Learning rate: 1e-05
21-12-24 03:23:26.723 - INFO: Train epoch 1344: Loss: 1212.2290 | r_Loss: 114.7298 | g_Loss: 551.5346 | l_Loss: 87.0456 |
21-12-24 03:24:40.204 - INFO: Learning rate: 1e-05
21-12-24 03:24:40.205 - INFO: Train epoch 1345: Loss: 1126.5336 | r_Loss: 108.4731 | g_Loss: 518.5638 | l_Loss: 65.6044 |
21-12-24 03:25:53.689 - INFO: Learning rate: 1e-05
21-12-24 03:25:53.690 - INFO: Train epoch 1346: Loss: 1140.9737 | r_Loss: 106.3187 | g_Loss: 533.1311 | l_Loss: 76.2492 |
21-12-24 03:27:06.833 - INFO: Learning rate: 1e-05
21-12-24 03:27:06.834 - INFO: Train epoch 1347: Loss: 1123.2940 | r_Loss: 107.0127 | g_Loss: 515.8964 | l_Loss: 72.3344 |
21-12-24 03:28:20.216 - INFO: Learning rate: 1e-05
21-12-24 03:28:20.217 - INFO: Train epoch 1348: Loss: 1093.1159 | r_Loss: 106.9302 | g_Loss: 506.7378 | l_Loss: 51.7273 |
21-12-24 03:29:33.557 - INFO: Learning rate: 1e-05
21-12-24 03:29:33.559 - INFO: Train epoch 1349: Loss: 1111.8893 | r_Loss: 106.0639 | g_Loss: 527.3735 | l_Loss: 54.1962 |
21-12-24 03:31:23.080 - INFO: TEST: PSNR_S: 41.5415 | PSNR_C: 34.6210 |
21-12-24 03:31:23.081 - INFO: Learning rate: 1e-05
21-12-24 03:31:23.081 - INFO: Train epoch 1350: Loss: 1096.0432 | r_Loss: 106.3729 | g_Loss: 503.4847 | l_Loss: 60.6940 |
21-12-24 03:32:37.106 - INFO: Learning rate: 1e-05
21-12-24 03:32:37.107 - INFO: Train epoch 1351: Loss: 1103.9552 | r_Loss: 111.1139 | g_Loss: 488.1086 | l_Loss: 60.2769 |
21-12-24 03:33:50.442 - INFO: Learning rate: 1e-05
21-12-24 03:33:50.443 - INFO: Train epoch 1352: Loss: 1170.4594 | r_Loss: 119.3000 | g_Loss: 507.0505 | l_Loss: 66.9089 |
21-12-24 03:35:04.106 - INFO: Learning rate: 1e-05
21-12-24 03:35:04.106 - INFO: Train epoch 1353: Loss: 1099.2244 | r_Loss: 108.0474 | g_Loss: 502.5359 | l_Loss: 56.4516 |
21-12-24 03:36:17.180 - INFO: Learning rate: 1e-05
21-12-24 03:36:17.181 - INFO: Train epoch 1354: Loss: 1120.7723 | r_Loss: 110.7517 | g_Loss: 498.6589 | l_Loss: 68.3546 |
21-12-24 03:37:30.714 - INFO: Learning rate: 1e-05
21-12-24 03:37:30.715 - INFO: Train epoch 1355: Loss: 1179.9807 | r_Loss: 126.3940 | g_Loss: 488.3090 | l_Loss: 59.7019 |
21-12-24 03:38:44.241 - INFO: Learning rate: 1e-05
21-12-24 03:38:44.242 - INFO: Train epoch 1356: Loss: 1034.9993 | r_Loss: 103.1756 | g_Loss: 464.6628 | l_Loss: 54.4586 |
21-12-24 03:39:57.707 - INFO: Learning rate: 1e-05
21-12-24 03:39:57.708 - INFO: Train epoch 1357: Loss: 1069.3077 | r_Loss: 108.6179 | g_Loss: 473.0225 | l_Loss: 53.1956 |
21-12-24 03:41:11.236 - INFO: Learning rate: 1e-05
21-12-24 03:41:11.237 - INFO: Train epoch 1358: Loss: 1051.5371 | r_Loss: 105.8646 | g_Loss: 464.4015 | l_Loss: 57.8125 |
21-12-24 03:42:24.925 - INFO: Learning rate: 1e-05
21-12-24 03:42:24.926 - INFO: Train epoch 1359: Loss: 1019.0156 | r_Loss: 101.1627 | g_Loss: 457.1577 | l_Loss: 56.0443 |
21-12-24 03:43:38.549 - INFO: Learning rate: 1e-05
21-12-24 03:43:38.550 - INFO: Train epoch 1360: Loss: 1071.4371 | r_Loss: 113.2269 | g_Loss: 454.2627 | l_Loss: 51.0399 |
21-12-24 03:44:51.976 - INFO: Learning rate: 1e-05
21-12-24 03:44:51.977 - INFO: Train epoch 1361: Loss: 1048.0358 | r_Loss: 104.8219 | g_Loss: 459.6984 | l_Loss: 64.2279 |
21-12-24 03:46:05.602 - INFO: Learning rate: 1e-05
21-12-24 03:46:05.603 - INFO: Train epoch 1362: Loss: 1046.7818 | r_Loss: 104.8950 | g_Loss: 469.5145 | l_Loss: 52.7922 |
21-12-24 03:47:19.278 - INFO: Learning rate: 1e-05
21-12-24 03:47:19.279 - INFO: Train epoch 1363: Loss: 1078.6888 | r_Loss: 111.0044 | g_Loss: 466.7359 | l_Loss: 56.9311 |
21-12-24 03:48:32.805 - INFO: Learning rate: 1e-05
21-12-24 03:48:32.806 - INFO: Train epoch 1364: Loss: 992.0264 | r_Loss: 101.2314 | g_Loss: 431.9204 | l_Loss: 53.9491 |
21-12-24 03:49:46.165 - INFO: Learning rate: 1e-05
21-12-24 03:49:46.165 - INFO: Train epoch 1365: Loss: 1037.6227 | r_Loss: 104.7849 | g_Loss: 446.6833 | l_Loss: 67.0148 |
21-12-24 03:50:59.863 - INFO: Learning rate: 1e-05
21-12-24 03:50:59.864 - INFO: Train epoch 1366: Loss: 1047.4963 | r_Loss: 108.4835 | g_Loss: 444.8981 | l_Loss: 60.1805 |
21-12-24 03:52:13.409 - INFO: Learning rate: 1e-05
21-12-24 03:52:13.410 - INFO: Train epoch 1367: Loss: 999.3765 | r_Loss: 105.7094 | g_Loss: 420.7867 | l_Loss: 50.0426 |
21-12-24 03:53:26.957 - INFO: Learning rate: 1e-05
21-12-24 03:53:26.958 - INFO: Train epoch 1368: Loss: 1069.1170 | r_Loss: 108.2246 | g_Loss: 474.2460 | l_Loss: 53.7478 |
21-12-24 03:54:41.003 - INFO: Learning rate: 1e-05
21-12-24 03:54:41.004 - INFO: Train epoch 1369: Loss: 1058.7504 | r_Loss: 110.2584 | g_Loss: 447.0358 | l_Loss: 60.4227 |
21-12-24 03:55:53.944 - INFO: Learning rate: 1e-05
21-12-24 03:55:53.946 - INFO: Train epoch 1370: Loss: 991.3155 | r_Loss: 100.7644 | g_Loss: 428.1818 | l_Loss: 59.3117 |
21-12-24 03:57:07.277 - INFO: Learning rate: 1e-05
21-12-24 03:57:07.278 - INFO: Train epoch 1371: Loss: 961.1491 | r_Loss: 96.7301 | g_Loss: 426.1519 | l_Loss: 51.3469 |
21-12-24 03:58:20.662 - INFO: Learning rate: 1e-05
21-12-24 03:58:20.663 - INFO: Train epoch 1372: Loss: 971.1338 | r_Loss: 100.1812 | g_Loss: 427.5997 | l_Loss: 42.6281 |
21-12-24 03:59:33.850 - INFO: Learning rate: 1e-05
21-12-24 03:59:33.851 - INFO: Train epoch 1373: Loss: 1011.8247 | r_Loss: 105.5572 | g_Loss: 424.5886 | l_Loss: 59.4499 |
21-12-24 04:00:46.996 - INFO: Learning rate: 1e-05
21-12-24 04:00:46.997 - INFO: Train epoch 1374: Loss: 962.0126 | r_Loss: 99.1143 | g_Loss: 415.7941 | l_Loss: 50.6468 |
21-12-24 04:02:00.508 - INFO: Learning rate: 1e-05
21-12-24 04:02:00.509 - INFO: Train epoch 1375: Loss: 995.1156 | r_Loss: 109.0373 | g_Loss: 401.2199 | l_Loss: 48.7093 |
21-12-24 04:03:14.052 - INFO: Learning rate: 1e-05
21-12-24 04:03:14.053 - INFO: Train epoch 1376: Loss: 1038.7362 | r_Loss: 111.8432 | g_Loss: 423.7114 | l_Loss: 55.8088 |
21-12-24 04:04:27.452 - INFO: Learning rate: 1e-05
21-12-24 04:04:27.453 - INFO: Train epoch 1377: Loss: 949.3995 | r_Loss: 98.2501 | g_Loss: 405.1895 | l_Loss: 52.9596 |
21-12-24 04:05:40.454 - INFO: Learning rate: 1e-05
21-12-24 04:05:40.455 - INFO: Train epoch 1378: Loss: 940.5459 | r_Loss: 96.6861 | g_Loss: 412.7549 | l_Loss: 44.3607 |
21-12-24 04:06:53.826 - INFO: Learning rate: 1e-05
21-12-24 04:06:53.827 - INFO: Train epoch 1379: Loss: 973.1639 | r_Loss: 102.3971 | g_Loss: 406.0342 | l_Loss: 55.1443 |
21-12-24 04:08:43.364 - INFO: TEST: PSNR_S: 42.8872 | PSNR_C: 35.6316 |
21-12-24 04:08:43.365 - INFO: Learning rate: 1e-05
21-12-24 04:08:43.365 - INFO: Train epoch 1380: Loss: 983.9230 | r_Loss: 101.5628 | g_Loss: 415.0996 | l_Loss: 61.0094 |
21-12-24 04:09:56.580 - INFO: Learning rate: 1e-05
21-12-24 04:09:56.581 - INFO: Train epoch 1381: Loss: 1003.4190 | r_Loss: 105.5399 | g_Loss: 426.5395 | l_Loss: 49.1802 |
21-12-24 04:11:09.715 - INFO: Learning rate: 1e-05
21-12-24 04:11:09.716 - INFO: Train epoch 1382: Loss: 1036.2560 | r_Loss: 110.0595 | g_Loss: 417.1118 | l_Loss: 68.8466 |
21-12-24 04:12:22.957 - INFO: Learning rate: 1e-05
21-12-24 04:12:22.958 - INFO: Train epoch 1383: Loss: 1026.2500 | r_Loss: 112.6169 | g_Loss: 414.1111 | l_Loss: 49.0545 |
21-12-24 04:13:36.367 - INFO: Learning rate: 1e-05
21-12-24 04:13:36.368 - INFO: Train epoch 1384: Loss: 935.2461 | r_Loss: 95.3915 | g_Loss: 408.5841 | l_Loss: 49.7046 |
21-12-24 04:14:50.031 - INFO: Learning rate: 1e-05
21-12-24 04:14:50.032 - INFO: Train epoch 1385: Loss: 998.4139 | r_Loss: 104.4298 | g_Loss: 416.6357 | l_Loss: 59.6291 |
21-12-24 04:16:03.411 - INFO: Learning rate: 1e-05
21-12-24 04:16:03.412 - INFO: Train epoch 1386: Loss: 1104.4749 | r_Loss: 129.7715 | g_Loss: 403.5257 | l_Loss: 52.0915 |
21-12-24 04:17:16.712 - INFO: Learning rate: 1e-05
21-12-24 04:17:16.713 - INFO: Train epoch 1387: Loss: 888.8967 | r_Loss: 88.4345 | g_Loss: 394.9367 | l_Loss: 51.7873 |
21-12-24 04:18:30.024 - INFO: Learning rate: 1e-05
21-12-24 04:18:30.025 - INFO: Train epoch 1388: Loss: 887.4300 | r_Loss: 88.8638 | g_Loss: 389.8322 | l_Loss: 53.2790 |
21-12-24 04:19:43.103 - INFO: Learning rate: 1e-05
21-12-24 04:19:43.104 - INFO: Train epoch 1389: Loss: 893.7871 | r_Loss: 91.5811 | g_Loss: 391.5476 | l_Loss: 44.3340 |
21-12-24 04:20:56.423 - INFO: Learning rate: 1e-05
21-12-24 04:20:56.424 - INFO: Train epoch 1390: Loss: 902.2587 | r_Loss: 95.1244 | g_Loss: 377.4967 | l_Loss: 49.1400 |
21-12-24 04:22:09.541 - INFO: Learning rate: 1e-05
21-12-24 04:22:09.541 - INFO: Train epoch 1391: Loss: 927.6828 | r_Loss: 97.6267 | g_Loss: 400.0664 | l_Loss: 39.4831 |
21-12-24 04:23:22.922 - INFO: Learning rate: 1e-05
21-12-24 04:23:22.923 - INFO: Train epoch 1392: Loss: 1004.7408 | r_Loss: 113.1828 | g_Loss: 388.4013 | l_Loss: 50.4254 |
21-12-24 04:24:36.303 - INFO: Learning rate: 1e-05
21-12-24 04:24:36.304 - INFO: Train epoch 1393: Loss: 936.0605 | r_Loss: 96.0519 | g_Loss: 405.4127 | l_Loss: 50.3883 |
21-12-24 04:25:49.578 - INFO: Learning rate: 1e-05
21-12-24 04:25:49.579 - INFO: Train epoch 1394: Loss: 974.9865 | r_Loss: 101.4389 | g_Loss: 418.9067 | l_Loss: 48.8855 |
21-12-24 04:27:03.536 - INFO: Learning rate: 1e-05
21-12-24 04:27:03.537 - INFO: Train epoch 1395: Loss: 985.8234 | r_Loss: 109.2602 | g_Loss: 394.6255 | l_Loss: 44.8971 |
21-12-24 04:28:17.228 - INFO: Learning rate: 1e-05
21-12-24 04:28:17.230 - INFO: Train epoch 1396: Loss: 924.4977 | r_Loss: 95.1476 | g_Loss: 393.7209 | l_Loss: 55.0389 |
21-12-24 04:29:30.353 - INFO: Learning rate: 1e-05
21-12-24 04:29:30.354 - INFO: Train epoch 1397: Loss: 882.7709 | r_Loss: 93.2726 | g_Loss: 367.7255 | l_Loss: 48.6824 |
21-12-24 04:30:43.566 - INFO: Learning rate: 1e-05
21-12-24 04:30:43.567 - INFO: Train epoch 1398: Loss: 1013.6272 | r_Loss: 112.8892 | g_Loss: 400.7966 | l_Loss: 48.3847 |
21-12-24 04:31:56.527 - INFO: Learning rate: 1e-05
21-12-24 04:31:56.529 - INFO: Train epoch 1399: Loss: 955.5547 | r_Loss: 102.4217 | g_Loss: 397.0028 | l_Loss: 46.4435 |
21-12-24 04:33:09.823 - INFO: Learning rate: 1e-05
21-12-24 04:33:09.824 - INFO: Train epoch 1400: Loss: 880.9895 | r_Loss: 93.3075 | g_Loss: 370.7474 | l_Loss: 43.7045 |
21-12-24 04:34:23.336 - INFO: Learning rate: 1e-05
21-12-24 04:34:23.337 - INFO: Train epoch 1401: Loss: 1050.4689 | r_Loss: 125.2096 | g_Loss: 377.3422 | l_Loss: 47.0785 |
21-12-24 04:35:36.740 - INFO: Learning rate: 1e-05
21-12-24 04:35:36.740 - INFO: Train epoch 1402: Loss: 1005.8975 | r_Loss: 112.0624 | g_Loss: 401.1886 | l_Loss: 44.3970 |
21-12-24 04:36:49.629 - INFO: Learning rate: 1e-05
21-12-24 04:36:49.630 - INFO: Train epoch 1403: Loss: 885.5494 | r_Loss: 88.6666 | g_Loss: 395.6027 | l_Loss: 46.6138 |
21-12-24 04:38:03.208 - INFO: Learning rate: 1e-05
21-12-24 04:38:03.209 - INFO: Train epoch 1404: Loss: 932.7386 | r_Loss: 96.2269 | g_Loss: 399.0505 | l_Loss: 52.5534 |
21-12-24 04:39:16.249 - INFO: Learning rate: 1e-05
21-12-24 04:39:16.250 - INFO: Train epoch 1405: Loss: 923.8991 | r_Loss: 100.8191 | g_Loss: 373.7012 | l_Loss: 46.1023 |
21-12-24 04:40:29.717 - INFO: Learning rate: 1e-05
21-12-24 04:40:29.718 - INFO: Train epoch 1406: Loss: 996.7082 | r_Loss: 108.7076 | g_Loss: 399.9551 | l_Loss: 53.2152 |
21-12-24 04:41:43.122 - INFO: Learning rate: 1e-05
21-12-24 04:41:43.123 - INFO: Train epoch 1407: Loss: 901.9463 | r_Loss: 95.4419 | g_Loss: 377.9136 | l_Loss: 46.8230 |
21-12-24 04:42:56.705 - INFO: Learning rate: 1e-05
21-12-24 04:42:56.706 - INFO: Train epoch 1408: Loss: 944.4892 | r_Loss: 102.5243 | g_Loss: 384.3416 | l_Loss: 47.5262 |
21-12-24 04:44:09.767 - INFO: Learning rate: 1e-05
21-12-24 04:44:09.768 - INFO: Train epoch 1409: Loss: 922.1227 | r_Loss: 96.2772 | g_Loss: 394.0402 | l_Loss: 46.6966 |
21-12-24 04:45:59.174 - INFO: TEST: PSNR_S: 42.9011 | PSNR_C: 35.9116 |
21-12-24 04:45:59.175 - INFO: Learning rate: 1e-05
21-12-24 04:45:59.175 - INFO: Train epoch 1410: Loss: 962.3926 | r_Loss: 108.5767 | g_Loss: 370.5797 | l_Loss: 48.9296 |
21-12-24 04:47:12.584 - INFO: Learning rate: 1e-05
21-12-24 04:47:12.585 - INFO: Train epoch 1411: Loss: 842.4328 | r_Loss: 88.6479 | g_Loss: 356.3303 | l_Loss: 42.8628 |
21-12-24 04:48:25.757 - INFO: Learning rate: 1e-05
21-12-24 04:48:25.758 - INFO: Train epoch 1412: Loss: 913.2200 | r_Loss: 96.3576 | g_Loss: 382.6273 | l_Loss: 48.8045 |
21-12-24 04:49:38.952 - INFO: Learning rate: 1e-05
21-12-24 04:49:38.953 - INFO: Train epoch 1413: Loss: 926.5625 | r_Loss: 102.7802 | g_Loss: 372.8179 | l_Loss: 39.8438 |
21-12-24 04:50:52.038 - INFO: Learning rate: 1e-05
21-12-24 04:50:52.039 - INFO: Train epoch 1414: Loss: 928.3291 | r_Loss: 99.4026 | g_Loss: 380.8118 | l_Loss: 50.5044 |
21-12-24 04:52:05.243 - INFO: Learning rate: 1e-05
21-12-24 04:52:05.244 - INFO: Train epoch 1415: Loss: 976.8739 | r_Loss: 109.7858 | g_Loss: 378.5114 | l_Loss: 49.4335 |
21-12-24 04:53:18.598 - INFO: Learning rate: 1e-05
21-12-24 04:53:18.599 - INFO: Train epoch 1416: Loss: 970.2621 | r_Loss: 104.1153 | g_Loss: 405.2333 | l_Loss: 44.4523 |
21-12-24 04:54:31.686 - INFO: Learning rate: 1e-05
21-12-24 04:54:31.687 - INFO: Train epoch 1417: Loss: 984.1521 | r_Loss: 106.1418 | g_Loss: 407.0831 | l_Loss: 46.3601 |
21-12-24 04:55:45.300 - INFO: Learning rate: 1e-05
21-12-24 04:55:45.301 - INFO: Train epoch 1418: Loss: 880.5465 | r_Loss: 91.3292 | g_Loss: 377.6452 | l_Loss: 46.2556 |
21-12-24 04:56:58.920 - INFO: Learning rate: 1e-05
21-12-24 04:56:58.921 - INFO: Train epoch 1419: Loss: 33459.6407 | r_Loss: 5697.7980 | g_Loss: 4217.9453 | l_Loss: 752.7055 |
21-12-24 04:58:12.088 - INFO: Learning rate: 1e-05
21-12-24 04:58:12.089 - INFO: Train epoch 1420: Loss: 7758.6559 | r_Loss: 877.8306 | g_Loss: 2984.3886 | l_Loss: 385.1146 |
21-12-24 04:59:25.053 - INFO: Learning rate: 1e-05
21-12-24 04:59:25.054 - INFO: Train epoch 1421: Loss: 4281.4156 | r_Loss: 486.1813 | g_Loss: 1650.6427 | l_Loss: 199.8665 |
21-12-24 05:00:38.837 - INFO: Learning rate: 1e-05
21-12-24 05:00:38.838 - INFO: Train epoch 1422: Loss: 3476.0958 | r_Loss: 384.0465 | g_Loss: 1393.1241 | l_Loss: 162.7394 |
21-12-24 05:01:52.029 - INFO: Learning rate: 1e-05
21-12-24 05:01:52.030 - INFO: Train epoch 1423: Loss: 2990.1205 | r_Loss: 325.2726 | g_Loss: 1204.2947 | l_Loss: 159.4626 |
21-12-24 05:03:05.231 - INFO: Learning rate: 1e-05
21-12-24 05:03:05.232 - INFO: Train epoch 1424: Loss: 2705.0480 | r_Loss: 291.3493 | g_Loss: 1121.4883 | l_Loss: 126.8132 |
21-12-24 05:04:18.405 - INFO: Learning rate: 1e-05
21-12-24 05:04:18.407 - INFO: Train epoch 1425: Loss: 2432.0969 | r_Loss: 255.4743 | g_Loss: 1017.4241 | l_Loss: 137.3013 |
21-12-24 05:05:31.643 - INFO: Learning rate: 1e-05
21-12-24 05:05:31.644 - INFO: Train epoch 1426: Loss: 2292.1710 | r_Loss: 249.1374 | g_Loss: 934.6181 | l_Loss: 111.8658 |
21-12-24 05:06:44.799 - INFO: Learning rate: 1e-05
21-12-24 05:06:44.800 - INFO: Train epoch 1427: Loss: 2204.0707 | r_Loss: 233.4393 | g_Loss: 923.9992 | l_Loss: 112.8752 |
21-12-24 05:07:58.736 - INFO: Learning rate: 1e-05
21-12-24 05:07:58.737 - INFO: Train epoch 1428: Loss: 2286.0496 | r_Loss: 242.7417 | g_Loss: 950.3758 | l_Loss: 121.9654 |
21-12-24 05:09:11.997 - INFO: Learning rate: 1e-05
21-12-24 05:09:11.998 - INFO: Train epoch 1429: Loss: 2110.7778 | r_Loss: 219.4379 | g_Loss: 893.1394 | l_Loss: 120.4489 |
21-12-24 05:10:25.253 - INFO: Learning rate: 1e-05
21-12-24 05:10:25.254 - INFO: Train epoch 1430: Loss: 1918.2907 | r_Loss: 201.0773 | g_Loss: 824.7499 | l_Loss: 88.1542 |
21-12-24 05:11:38.586 - INFO: Learning rate: 1e-05
21-12-24 05:11:38.588 - INFO: Train epoch 1431: Loss: 1808.6565 | r_Loss: 188.0794 | g_Loss: 764.5304 | l_Loss: 103.7292 |
21-12-24 05:12:52.049 - INFO: Learning rate: 1e-05
21-12-24 05:12:52.051 - INFO: Train epoch 1432: Loss: 1794.3559 | r_Loss: 184.7045 | g_Loss: 769.5755 | l_Loss: 101.2581 |
21-12-24 05:14:05.579 - INFO: Learning rate: 1e-05
21-12-24 05:14:05.580 - INFO: Train epoch 1433: Loss: 1918.6842 | r_Loss: 199.8189 | g_Loss: 821.5060 | l_Loss: 98.0838 |
21-12-24 05:15:18.884 - INFO: Learning rate: 1e-05
21-12-24 05:15:18.886 - INFO: Train epoch 1434: Loss: 1699.5455 | r_Loss: 173.0891 | g_Loss: 742.0854 | l_Loss: 92.0145 |
21-12-24 05:16:32.235 - INFO: Learning rate: 1e-05
21-12-24 05:16:32.236 - INFO: Train epoch 1435: Loss: 1671.6757 | r_Loss: 170.3780 | g_Loss: 729.5538 | l_Loss: 90.2320 |
21-12-24 05:17:45.741 - INFO: Learning rate: 1e-05
21-12-24 05:17:45.743 - INFO: Train epoch 1436: Loss: 1718.3768 | r_Loss: 181.6927 | g_Loss: 717.9360 | l_Loss: 91.9772 |
21-12-24 05:18:58.730 - INFO: Learning rate: 1e-05
21-12-24 05:18:58.731 - INFO: Train epoch 1437: Loss: 1648.5207 | r_Loss: 170.7217 | g_Loss: 714.3638 | l_Loss: 80.5483 |
21-12-24 05:20:12.023 - INFO: Learning rate: 1e-05
21-12-24 05:20:12.024 - INFO: Train epoch 1438: Loss: 1621.8657 | r_Loss: 174.8387 | g_Loss: 662.0370 | l_Loss: 85.6350 |
21-12-24 05:21:24.932 - INFO: Learning rate: 1e-05
21-12-24 05:21:24.933 - INFO: Train epoch 1439: Loss: 1710.4776 | r_Loss: 181.4406 | g_Loss: 715.6519 | l_Loss: 87.6226 |
21-12-24 05:23:14.035 - INFO: TEST: PSNR_S: 40.0295 | PSNR_C: 33.3809 |
21-12-24 05:23:14.036 - INFO: Learning rate: 1e-05
21-12-24 05:23:14.037 - INFO: Train epoch 1440: Loss: 1545.6358 | r_Loss: 160.9399 | g_Loss: 662.4575 | l_Loss: 78.4786 |
21-12-24 05:24:27.081 - INFO: Learning rate: 1e-05
21-12-24 05:24:27.082 - INFO: Train epoch 1441: Loss: 1531.2170 | r_Loss: 160.0560 | g_Loss: 659.2467 | l_Loss: 71.6902 |
21-12-24 05:25:40.540 - INFO: Learning rate: 1e-05
21-12-24 05:25:40.541 - INFO: Train epoch 1442: Loss: 1573.7997 | r_Loss: 164.1498 | g_Loss: 670.9637 | l_Loss: 82.0872 |
21-12-24 05:26:53.754 - INFO: Learning rate: 1e-05
21-12-24 05:26:53.755 - INFO: Train epoch 1443: Loss: 1373.7676 | r_Loss: 140.1856 | g_Loss: 598.6961 | l_Loss: 74.1433 |
21-12-24 05:28:07.147 - INFO: Learning rate: 1e-05
21-12-24 05:28:07.148 - INFO: Train epoch 1444: Loss: 1414.7327 | r_Loss: 146.5074 | g_Loss: 612.6952 | l_Loss: 69.5007 |
21-12-24 05:29:20.292 - INFO: Learning rate: 1e-05
21-12-24 05:29:20.293 - INFO: Train epoch 1445: Loss: 1517.1232 | r_Loss: 168.6837 | g_Loss: 610.4599 | l_Loss: 63.2449 |
21-12-24 05:30:33.469 - INFO: Learning rate: 1e-05
21-12-24 05:30:33.470 - INFO: Train epoch 1446: Loss: 1577.5883 | r_Loss: 175.8645 | g_Loss: 625.8444 | l_Loss: 72.4214 |
21-12-24 05:31:46.914 - INFO: Learning rate: 1e-05
21-12-24 05:31:46.915 - INFO: Train epoch 1447: Loss: 1334.0751 | r_Loss: 132.7303 | g_Loss: 584.1815 | l_Loss: 86.2418 |
21-12-24 05:33:00.356 - INFO: Learning rate: 1e-05
21-12-24 05:33:00.357 - INFO: Train epoch 1448: Loss: 1320.8276 | r_Loss: 133.5072 | g_Loss: 595.3964 | l_Loss: 57.8954 |
21-12-24 05:34:13.793 - INFO: Learning rate: 1e-05
21-12-24 05:34:13.794 - INFO: Train epoch 1449: Loss: 1360.1746 | r_Loss: 134.7169 | g_Loss: 597.3821 | l_Loss: 89.2081 |
21-12-24 05:35:27.263 - INFO: Learning rate: 1e-05
21-12-24 05:35:27.264 - INFO: Train epoch 1450: Loss: 1361.8845 | r_Loss: 140.7485 | g_Loss: 586.5650 | l_Loss: 71.5768 |
21-12-24 05:36:40.773 - INFO: Learning rate: 1e-05
21-12-24 05:36:40.774 - INFO: Train epoch 1451: Loss: 1277.4742 | r_Loss: 129.1322 | g_Loss: 576.2170 | l_Loss: 55.5961 |
21-12-24 05:37:54.381 - INFO: Learning rate: 1e-05
21-12-24 05:37:54.382 - INFO: Train epoch 1452: Loss: 1320.4533 | r_Loss: 135.7737 | g_Loss: 569.2840 | l_Loss: 72.3010 |
21-12-24 05:39:07.781 - INFO: Learning rate: 1e-05
21-12-24 05:39:07.782 - INFO: Train epoch 1453: Loss: 1294.4954 | r_Loss: 135.0121 | g_Loss: 554.5850 | l_Loss: 64.8499 |
21-12-24 05:40:20.927 - INFO: Learning rate: 1e-05
21-12-24 05:40:20.928 - INFO: Train epoch 1454: Loss: 1255.1136 | r_Loss: 130.9772 | g_Loss: 534.4119 | l_Loss: 65.8158 |
21-12-24 05:41:34.306 - INFO: Learning rate: 1e-05
21-12-24 05:41:34.307 - INFO: Train epoch 1455: Loss: 1267.8270 | r_Loss: 131.0082 | g_Loss: 548.0864 | l_Loss: 64.6995 |
21-12-24 05:42:47.475 - INFO: Learning rate: 1e-05
21-12-24 05:42:47.476 - INFO: Train epoch 1456: Loss: 1526.8043 | r_Loss: 185.9449 | g_Loss: 529.3019 | l_Loss: 67.7779 |
21-12-24 05:44:00.533 - INFO: Learning rate: 1e-05
21-12-24 05:44:00.535 - INFO: Train epoch 1457: Loss: 1207.7062 | r_Loss: 115.5905 | g_Loss: 560.8923 | l_Loss: 68.8612 |
21-12-24 05:45:14.012 - INFO: Learning rate: 1e-05
21-12-24 05:45:14.013 - INFO: Train epoch 1458: Loss: 1204.4795 | r_Loss: 122.2602 | g_Loss: 528.4222 | l_Loss: 64.7564 |
21-12-24 05:46:27.388 - INFO: Learning rate: 1e-05
21-12-24 05:46:27.389 - INFO: Train epoch 1459: Loss: 1210.2442 | r_Loss: 121.3848 | g_Loss: 538.5852 | l_Loss: 64.7349 |
21-12-24 05:47:40.775 - INFO: Learning rate: 1e-05
21-12-24 05:47:40.776 - INFO: Train epoch 1460: Loss: 1220.8853 | r_Loss: 131.3481 | g_Loss: 507.1445 | l_Loss: 57.0006 |
21-12-24 05:48:54.127 - INFO: Learning rate: 1e-05
21-12-24 05:48:54.128 - INFO: Train epoch 1461: Loss: 1362.7098 | r_Loss: 156.8073 | g_Loss: 512.8634 | l_Loss: 65.8100 |
21-12-24 05:50:07.748 - INFO: Learning rate: 1e-05
21-12-24 05:50:07.749 - INFO: Train epoch 1462: Loss: 1198.5385 | r_Loss: 115.7932 | g_Loss: 547.9127 | l_Loss: 71.6598 |
21-12-24 05:51:20.714 - INFO: Learning rate: 1e-05
21-12-24 05:51:20.715 - INFO: Train epoch 1463: Loss: 1214.6252 | r_Loss: 122.2204 | g_Loss: 536.3450 | l_Loss: 67.1780 |
21-12-24 05:52:34.148 - INFO: Learning rate: 1e-05
21-12-24 05:52:34.149 - INFO: Train epoch 1464: Loss: 1110.6456 | r_Loss: 110.2567 | g_Loss: 501.3710 | l_Loss: 57.9913 |
21-12-24 05:53:47.454 - INFO: Learning rate: 1e-05
21-12-24 05:53:47.455 - INFO: Train epoch 1465: Loss: 1271.9722 | r_Loss: 138.6235 | g_Loss: 510.4688 | l_Loss: 68.3859 |
21-12-24 05:55:00.562 - INFO: Learning rate: 1e-05
21-12-24 05:55:00.563 - INFO: Train epoch 1466: Loss: 1056.5823 | r_Loss: 101.3914 | g_Loss: 494.2481 | l_Loss: 55.3770 |
21-12-24 05:56:13.785 - INFO: Learning rate: 1e-05
21-12-24 05:56:13.787 - INFO: Train epoch 1467: Loss: 1421.8830 | r_Loss: 170.2343 | g_Loss: 496.0464 | l_Loss: 74.6650 |
21-12-24 05:57:27.073 - INFO: Learning rate: 1e-05
21-12-24 05:57:27.074 - INFO: Train epoch 1468: Loss: 1077.9139 | r_Loss: 103.4644 | g_Loss: 503.7710 | l_Loss: 56.8208 |
21-12-24 05:58:40.372 - INFO: Learning rate: 1e-05
21-12-24 05:58:40.374 - INFO: Train epoch 1469: Loss: 1093.6566 | r_Loss: 107.2308 | g_Loss: 499.4035 | l_Loss: 58.0992 |
21-12-24 06:00:29.536 - INFO: TEST: PSNR_S: 42.4932 | PSNR_C: 34.7090 |
21-12-24 06:00:29.538 - INFO: Learning rate: 1e-05
21-12-24 06:00:29.538 - INFO: Train epoch 1470: Loss: 1089.4840 | r_Loss: 107.1111 | g_Loss: 490.0071 | l_Loss: 63.9214 |
21-12-24 06:01:42.725 - INFO: Learning rate: 1e-05
21-12-24 06:01:42.726 - INFO: Train epoch 1471: Loss: 1185.5260 | r_Loss: 123.8945 | g_Loss: 503.4952 | l_Loss: 62.5581 |
21-12-24 06:02:55.785 - INFO: Learning rate: 1e-05
21-12-24 06:02:55.786 - INFO: Train epoch 1472: Loss: 1259.4520 | r_Loss: 144.8329 | g_Loss: 476.4078 | l_Loss: 58.8799 |
21-12-24 06:04:09.031 - INFO: Learning rate: 1e-05
21-12-24 06:04:09.032 - INFO: Train epoch 1473: Loss: 1028.8969 | r_Loss: 99.5170 | g_Loss: 470.9889 | l_Loss: 60.3233 |
21-12-24 06:05:22.580 - INFO: Learning rate: 1e-05
21-12-24 06:05:22.581 - INFO: Train epoch 1474: Loss: 1085.1291 | r_Loss: 108.6470 | g_Loss: 479.2546 | l_Loss: 62.6396 |
21-12-24 06:06:36.307 - INFO: Learning rate: 1e-05
21-12-24 06:06:36.308 - INFO: Train epoch 1475: Loss: 1155.7759 | r_Loss: 116.7267 | g_Loss: 506.1371 | l_Loss: 66.0054 |
21-12-24 06:07:49.266 - INFO: Learning rate: 1e-05
21-12-24 06:07:49.267 - INFO: Train epoch 1476: Loss: 1084.7163 | r_Loss: 109.6645 | g_Loss: 485.6286 | l_Loss: 50.7652 |
21-12-24 06:09:02.597 - INFO: Learning rate: 1e-05
21-12-24 06:09:02.599 - INFO: Train epoch 1477: Loss: 1111.7213 | r_Loss: 116.6095 | g_Loss: 471.7478 | l_Loss: 56.9258 |
21-12-24 06:10:15.268 - INFO: Learning rate: 1e-05
21-12-24 06:10:15.269 - INFO: Train epoch 1478: Loss: 1062.8827 | r_Loss: 106.5213 | g_Loss: 471.0422 | l_Loss: 59.2342 |
21-12-24 06:11:28.283 - INFO: Learning rate: 1e-05
21-12-24 06:11:28.284 - INFO: Train epoch 1479: Loss: 1069.6036 | r_Loss: 105.4864 | g_Loss: 468.3389 | l_Loss: 73.8328 |
21-12-24 06:12:41.854 - INFO: Learning rate: 1e-05
21-12-24 06:12:41.855 - INFO: Train epoch 1480: Loss: 1134.5332 | r_Loss: 123.2077 | g_Loss: 462.7305 | l_Loss: 55.7642 |
21-12-24 06:13:55.126 - INFO: Learning rate: 1e-05
21-12-24 06:13:55.127 - INFO: Train epoch 1481: Loss: 1129.0083 | r_Loss: 121.8509 | g_Loss: 463.2835 | l_Loss: 56.4702 |
21-12-24 06:15:08.088 - INFO: Learning rate: 1e-05
21-12-24 06:15:08.089 - INFO: Train epoch 1482: Loss: 1007.3030 | r_Loss: 103.0968 | g_Loss: 441.3895 | l_Loss: 50.4298 |
21-12-24 06:16:21.279 - INFO: Learning rate: 1e-05
21-12-24 06:16:21.281 - INFO: Train epoch 1483: Loss: 1144.7920 | r_Loss: 124.6398 | g_Loss: 466.1979 | l_Loss: 55.3953 |
21-12-24 06:17:34.452 - INFO: Learning rate: 1e-05
21-12-24 06:17:34.453 - INFO: Train epoch 1484: Loss: 1035.5258 | r_Loss: 102.8521 | g_Loss: 468.9477 | l_Loss: 52.3174 |
21-12-24 06:18:47.456 - INFO: Learning rate: 1e-05
21-12-24 06:18:47.457 - INFO: Train epoch 1485: Loss: 995.5215 | r_Loss: 99.9167 | g_Loss: 438.4977 | l_Loss: 57.4404 |
21-12-24 06:20:00.912 - INFO: Learning rate: 1e-05
21-12-24 06:20:00.913 - INFO: Train epoch 1486: Loss: 1191.8096 | r_Loss: 134.1542 | g_Loss: 463.2370 | l_Loss: 57.8015 |
21-12-24 06:21:14.113 - INFO: Learning rate: 1e-05
21-12-24 06:21:14.114 - INFO: Train epoch 1487: Loss: 1016.6915 | r_Loss: 103.0192 | g_Loss: 441.9749 | l_Loss: 59.6209 |
21-12-24 06:22:27.452 - INFO: Learning rate: 1e-05
21-12-24 06:22:27.453 - INFO: Train epoch 1488: Loss: 1016.6777 | r_Loss: 105.1555 | g_Loss: 431.2039 | l_Loss: 59.6965 |
21-12-24 06:23:41.154 - INFO: Learning rate: 1e-05
21-12-24 06:23:41.155 - INFO: Train epoch 1489: Loss: 1169.1705 | r_Loss: 136.2454 | g_Loss: 439.2930 | l_Loss: 48.6504 |
21-12-24 06:24:54.670 - INFO: Learning rate: 1e-05
21-12-24 06:24:54.671 - INFO: Train epoch 1490: Loss: 905.1151 | r_Loss: 85.9432 | g_Loss: 420.6009 | l_Loss: 54.7979 |
21-12-24 06:26:08.109 - INFO: Learning rate: 1e-05
21-12-24 06:26:08.110 - INFO: Train epoch 1491: Loss: 1015.8108 | r_Loss: 101.4975 | g_Loss: 451.0608 | l_Loss: 57.2623 |
21-12-24 06:27:21.102 - INFO: Learning rate: 1e-05
21-12-24 06:27:21.103 - INFO: Train epoch 1492: Loss: 1020.0875 | r_Loss: 106.0282 | g_Loss: 430.3442 | l_Loss: 59.6024 |
21-12-24 06:28:34.480 - INFO: Learning rate: 1e-05
21-12-24 06:28:34.481 - INFO: Train epoch 1493: Loss: 998.1437 | r_Loss: 99.8678 | g_Loss: 441.4951 | l_Loss: 57.3093 |
21-12-24 06:29:47.754 - INFO: Learning rate: 1e-05
21-12-24 06:29:47.755 - INFO: Train epoch 1494: Loss: 1074.8037 | r_Loss: 115.8475 | g_Loss: 437.6294 | l_Loss: 57.9366 |
21-12-24 06:31:01.316 - INFO: Learning rate: 1e-05
21-12-24 06:31:01.317 - INFO: Train epoch 1495: Loss: 994.3202 | r_Loss: 100.6391 | g_Loss: 438.2426 | l_Loss: 52.8821 |
21-12-24 06:32:14.724 - INFO: Learning rate: 1e-05
21-12-24 06:32:14.725 - INFO: Train epoch 1496: Loss: 1017.4405 | r_Loss: 110.8445 | g_Loss: 419.0601 | l_Loss: 44.1578 |
21-12-24 06:33:28.598 - INFO: Learning rate: 1e-05
21-12-24 06:33:28.599 - INFO: Train epoch 1497: Loss: 962.0044 | r_Loss: 96.3109 | g_Loss: 430.7774 | l_Loss: 49.6726 |
21-12-24 06:34:42.043 - INFO: Learning rate: 1e-05
21-12-24 06:34:42.045 - INFO: Train epoch 1498: Loss: 919.3093 | r_Loss: 92.0158 | g_Loss: 412.9307 | l_Loss: 46.2997 |
21-12-24 06:35:55.319 - INFO: Learning rate: 1e-05
21-12-24 06:35:55.320 - INFO: Train epoch 1499: Loss: 1025.8329 | r_Loss: 111.7804 | g_Loss: 409.2249 | l_Loss: 57.7058 |
21-12-24 06:37:44.586 - INFO: TEST: PSNR_S: 42.4042 | PSNR_C: 35.4709 |
21-12-24 06:37:44.587 - INFO: Learning rate: 1e-05
21-12-24 06:37:44.587 - INFO: Train epoch 1500: Loss: 939.3879 | r_Loss: 94.3850 | g_Loss: 411.0910 | l_Loss: 56.3721 |
21-12-24 06:38:57.974 - INFO: Learning rate: 1e-05
21-12-24 06:38:57.975 - INFO: Train epoch 1501: Loss: 1039.8827 | r_Loss: 111.6742 | g_Loss: 429.2974 | l_Loss: 52.2145 |
21-12-24 06:40:11.179 - INFO: Learning rate: 1e-05
21-12-24 06:40:11.180 - INFO: Train epoch 1502: Loss: 970.7383 | r_Loss: 97.0065 | g_Loss: 422.7793 | l_Loss: 62.9266 |
21-12-24 06:41:24.684 - INFO: Learning rate: 1e-05
21-12-24 06:41:24.685 - INFO: Train epoch 1503: Loss: 1010.5019 | r_Loss: 105.6312 | g_Loss: 428.7270 | l_Loss: 53.6189 |
21-12-24 06:42:37.762 - INFO: Learning rate: 1e-05
21-12-24 06:42:37.763 - INFO: Train epoch 1504: Loss: 893.2399 | r_Loss: 89.9939 | g_Loss: 400.6638 | l_Loss: 42.6066 |
21-12-24 06:43:50.959 - INFO: Learning rate: 1e-05
21-12-24 06:43:50.960 - INFO: Train epoch 1505: Loss: 1285.4016 | r_Loss: 158.5286 | g_Loss: 438.9014 | l_Loss: 53.8569 |
21-12-24 06:45:03.841 - INFO: Learning rate: 1e-05
21-12-24 06:45:03.842 - INFO: Train epoch 1506: Loss: 918.2126 | r_Loss: 88.8448 | g_Loss: 424.9848 | l_Loss: 49.0038 |
21-12-24 06:46:17.212 - INFO: Learning rate: 1e-05
21-12-24 06:46:17.212 - INFO: Train epoch 1507: Loss: 960.1238 | r_Loss: 98.5687 | g_Loss: 411.7751 | l_Loss: 55.5050 |
21-12-24 06:47:30.490 - INFO: Learning rate: 1e-05
21-12-24 06:47:30.491 - INFO: Train epoch 1508: Loss: 958.3850 | r_Loss: 100.6332 | g_Loss: 398.7363 | l_Loss: 56.4824 |
21-12-24 06:48:43.831 - INFO: Learning rate: 1e-05
21-12-24 06:48:43.832 - INFO: Train epoch 1509: Loss: 953.3668 | r_Loss: 96.2118 | g_Loss: 415.8335 | l_Loss: 56.4743 |
21-12-24 06:49:56.585 - INFO: Learning rate: 1e-05
21-12-24 06:49:56.586 - INFO: Train epoch 1510: Loss: 980.9852 | r_Loss: 103.8355 | g_Loss: 417.1463 | l_Loss: 44.6615 |
21-12-24 06:51:09.732 - INFO: Learning rate: 1e-05
21-12-24 06:51:09.733 - INFO: Train epoch 1511: Loss: 918.0234 | r_Loss: 95.8811 | g_Loss: 392.0460 | l_Loss: 46.5717 |
21-12-24 06:52:23.126 - INFO: Learning rate: 1e-05
21-12-24 06:52:23.127 - INFO: Train epoch 1512: Loss: 936.8964 | r_Loss: 94.8778 | g_Loss: 412.8901 | l_Loss: 49.6175 |
21-12-24 06:53:36.357 - INFO: Learning rate: 1e-05
21-12-24 06:53:36.358 - INFO: Train epoch 1513: Loss: 923.5264 | r_Loss: 95.4944 | g_Loss: 402.9942 | l_Loss: 43.0601 |
21-12-24 06:54:49.419 - INFO: Learning rate: 1e-05
21-12-24 06:54:49.420 - INFO: Train epoch 1514: Loss: 931.5718 | r_Loss: 98.2043 | g_Loss: 385.1580 | l_Loss: 55.3924 |
21-12-24 06:56:02.978 - INFO: Learning rate: 1e-05
21-12-24 06:56:02.980 - INFO: Train epoch 1515: Loss: 964.8293 | r_Loss: 107.0113 | g_Loss: 384.2987 | l_Loss: 45.4740 |
21-12-24 06:57:16.566 - INFO: Learning rate: 1e-05
21-12-24 06:57:16.567 - INFO: Train epoch 1516: Loss: 890.1540 | r_Loss: 91.1354 | g_Loss: 388.9259 | l_Loss: 45.5510 |
21-12-24 06:58:29.436 - INFO: Learning rate: 1e-05
21-12-24 06:58:29.437 - INFO: Train epoch 1517: Loss: 964.5886 | r_Loss: 103.3235 | g_Loss: 396.3421 | l_Loss: 51.6288 |
21-12-24 06:59:43.024 - INFO: Learning rate: 1e-05
21-12-24 06:59:43.026 - INFO: Train epoch 1518: Loss: 945.4512 | r_Loss: 102.4893 | g_Loss: 391.0134 | l_Loss: 41.9912 |
21-12-24 07:00:56.092 - INFO: Learning rate: 1e-05
21-12-24 07:00:56.093 - INFO: Train epoch 1519: Loss: 983.8245 | r_Loss: 105.1768 | g_Loss: 406.3150 | l_Loss: 51.6256 |
21-12-24 07:02:08.848 - INFO: Learning rate: 1e-05
21-12-24 07:02:08.849 - INFO: Train epoch 1520: Loss: 960.8949 | r_Loss: 104.1729 | g_Loss: 383.7093 | l_Loss: 56.3210 |
21-12-24 07:03:21.791 - INFO: Learning rate: 1e-05
21-12-24 07:03:21.792 - INFO: Train epoch 1521: Loss: 911.3031 | r_Loss: 94.9830 | g_Loss: 385.8014 | l_Loss: 50.5868 |
21-12-24 07:04:35.160 - INFO: Learning rate: 1e-05
21-12-24 07:04:35.161 - INFO: Train epoch 1522: Loss: 943.0239 | r_Loss: 99.1389 | g_Loss: 394.6128 | l_Loss: 52.7164 |
21-12-24 07:05:48.612 - INFO: Learning rate: 1e-05
21-12-24 07:05:48.613 - INFO: Train epoch 1523: Loss: 988.5649 | r_Loss: 109.6670 | g_Loss: 389.0083 | l_Loss: 51.2217 |
21-12-24 07:07:01.804 - INFO: Learning rate: 1e-05
21-12-24 07:07:01.804 - INFO: Train epoch 1524: Loss: 887.7513 | r_Loss: 87.4605 | g_Loss: 393.2353 | l_Loss: 57.2138 |
21-12-24 07:08:15.119 - INFO: Learning rate: 1e-05
21-12-24 07:08:15.120 - INFO: Train epoch 1525: Loss: 928.4758 | r_Loss: 97.2519 | g_Loss: 385.8404 | l_Loss: 56.3757 |
21-12-24 07:09:28.095 - INFO: Learning rate: 1e-05
21-12-24 07:09:28.096 - INFO: Train epoch 1526: Loss: 1033.2981 | r_Loss: 118.7593 | g_Loss: 391.6849 | l_Loss: 47.8168 |
21-12-24 07:10:41.373 - INFO: Learning rate: 1e-05
21-12-24 07:10:41.374 - INFO: Train epoch 1527: Loss: 915.4940 | r_Loss: 93.7117 | g_Loss: 384.8452 | l_Loss: 62.0904 |
21-12-24 07:11:54.483 - INFO: Learning rate: 1e-05
21-12-24 07:11:54.485 - INFO: Train epoch 1528: Loss: 882.0019 | r_Loss: 92.3885 | g_Loss: 380.1756 | l_Loss: 39.8836 |
21-12-24 07:13:07.936 - INFO: Learning rate: 1e-05
21-12-24 07:13:07.937 - INFO: Train epoch 1529: Loss: 892.5908 | r_Loss: 95.3769 | g_Loss: 369.0782 | l_Loss: 46.6281 |
21-12-24 07:14:57.458 - INFO: TEST: PSNR_S: 42.6364 | PSNR_C: 35.9807 |
21-12-24 07:14:57.460 - INFO: Learning rate: 1e-05
21-12-24 07:14:57.460 - INFO: Train epoch 1530: Loss: 913.0629 | r_Loss: 97.4293 | g_Loss: 382.8758 | l_Loss: 43.0408 |
21-12-24 07:16:10.806 - INFO: Learning rate: 1e-05
21-12-24 07:16:10.807 - INFO: Train epoch 1531: Loss: 960.1440 | r_Loss: 101.1213 | g_Loss: 403.5195 | l_Loss: 51.0180 |
21-12-24 07:17:24.035 - INFO: Learning rate: 1e-05
21-12-24 07:17:24.036 - INFO: Train epoch 1532: Loss: 888.6593 | r_Loss: 95.1113 | g_Loss: 363.5742 | l_Loss: 49.5284 |
21-12-24 07:18:37.542 - INFO: Learning rate: 1e-05
21-12-24 07:18:37.544 - INFO: Train epoch 1533: Loss: 923.1717 | r_Loss: 97.4038 | g_Loss: 381.2951 | l_Loss: 54.8576 |
21-12-24 07:19:50.880 - INFO: Learning rate: 1e-05
21-12-24 07:19:50.881 - INFO: Train epoch 1534: Loss: 926.6430 | r_Loss: 100.9127 | g_Loss: 378.3125 | l_Loss: 43.7670 |
21-12-24 07:21:04.085 - INFO: Learning rate: 1e-05
21-12-24 07:21:04.085 - INFO: Train epoch 1535: Loss: 875.7419 | r_Loss: 89.6710 | g_Loss: 372.1605 | l_Loss: 55.2265 |
21-12-24 07:22:17.493 - INFO: Learning rate: 1e-05
21-12-24 07:22:17.494 - INFO: Train epoch 1536: Loss: 865.7361 | r_Loss: 90.2611 | g_Loss: 367.7519 | l_Loss: 46.6785 |
21-12-24 07:23:30.924 - INFO: Learning rate: 1e-05
21-12-24 07:23:30.925 - INFO: Train epoch 1537: Loss: 923.2648 | r_Loss: 97.1670 | g_Loss: 387.4491 | l_Loss: 49.9805 |
21-12-24 07:24:44.083 - INFO: Learning rate: 1e-05
21-12-24 07:24:44.084 - INFO: Train epoch 1538: Loss: 882.8223 | r_Loss: 94.9022 | g_Loss: 363.1493 | l_Loss: 45.1622 |
21-12-24 07:25:57.523 - INFO: Learning rate: 1e-05
21-12-24 07:25:57.524 - INFO: Train epoch 1539: Loss: 852.4812 | r_Loss: 89.0298 | g_Loss: 358.1718 | l_Loss: 49.1603 |
21-12-24 07:27:10.814 - INFO: Learning rate: 1e-05
21-12-24 07:27:10.815 - INFO: Train epoch 1540: Loss: 918.2688 | r_Loss: 101.6222 | g_Loss: 360.5421 | l_Loss: 49.6157 |
21-12-24 07:28:24.289 - INFO: Learning rate: 1e-05
21-12-24 07:28:24.290 - INFO: Train epoch 1541: Loss: 889.8417 | r_Loss: 94.1264 | g_Loss: 376.0410 | l_Loss: 43.1687 |
21-12-24 07:29:37.821 - INFO: Learning rate: 1e-05
21-12-24 07:29:37.822 - INFO: Train epoch 1542: Loss: 892.3665 | r_Loss: 94.8577 | g_Loss: 373.5027 | l_Loss: 44.5754 |
21-12-24 07:30:50.874 - INFO: Learning rate: 1e-05
21-12-24 07:30:50.875 - INFO: Train epoch 1543: Loss: 933.0694 | r_Loss: 97.5890 | g_Loss: 400.6843 | l_Loss: 44.4403 |
21-12-24 07:32:04.214 - INFO: Learning rate: 1e-05
21-12-24 07:32:04.215 - INFO: Train epoch 1544: Loss: 949.5479 | r_Loss: 105.9694 | g_Loss: 374.5172 | l_Loss: 45.1839 |
21-12-24 07:33:17.625 - INFO: Learning rate: 1e-05
21-12-24 07:33:17.626 - INFO: Train epoch 1545: Loss: 877.4792 | r_Loss: 91.5441 | g_Loss: 370.6823 | l_Loss: 49.0766 |
21-12-24 07:34:31.144 - INFO: Learning rate: 1e-05
21-12-24 07:34:31.145 - INFO: Train epoch 1546: Loss: 906.6167 | r_Loss: 97.0280 | g_Loss: 370.6912 | l_Loss: 50.7854 |
21-12-24 07:35:44.393 - INFO: Learning rate: 1e-05
21-12-24 07:35:44.394 - INFO: Train epoch 1547: Loss: 868.4502 | r_Loss: 92.1418 | g_Loss: 364.8855 | l_Loss: 42.8555 |
21-12-24 07:36:57.416 - INFO: Learning rate: 1e-05
21-12-24 07:36:57.417 - INFO: Train epoch 1548: Loss: 23779.0632 | r_Loss: 4653.6608 | g_Loss: 451.8976 | l_Loss: 58.8611 |
21-12-24 07:38:10.886 - INFO: Learning rate: 1e-05
21-12-24 07:38:10.887 - INFO: Train epoch 1549: Loss: 10115.1577 | r_Loss: 1597.2164 | g_Loss: 1908.3985 | l_Loss: 220.6769 |
21-12-24 07:39:24.444 - INFO: Learning rate: 1e-05
21-12-24 07:39:24.445 - INFO: Train epoch 1550: Loss: 2119.6862 | r_Loss: 200.4396 | g_Loss: 995.0658 | l_Loss: 122.4226 |
21-12-24 07:40:37.908 - INFO: Learning rate: 1e-05
21-12-24 07:40:37.909 - INFO: Train epoch 1551: Loss: 1807.7630 | r_Loss: 168.0993 | g_Loss: 869.0519 | l_Loss: 98.2147 |
21-12-24 07:41:51.266 - INFO: Learning rate: 1e-05
21-12-24 07:41:51.267 - INFO: Train epoch 1552: Loss: 1703.9922 | r_Loss: 155.4712 | g_Loss: 821.4425 | l_Loss: 105.1939 |
21-12-24 07:43:05.107 - INFO: Learning rate: 1e-05
21-12-24 07:43:05.108 - INFO: Train epoch 1553: Loss: 1564.6467 | r_Loss: 142.7099 | g_Loss: 754.5127 | l_Loss: 96.5845 |
21-12-24 07:44:18.423 - INFO: Learning rate: 1e-05
21-12-24 07:44:18.424 - INFO: Train epoch 1554: Loss: 1519.2203 | r_Loss: 135.9506 | g_Loss: 734.2628 | l_Loss: 105.2046 |
21-12-24 07:45:31.708 - INFO: Learning rate: 1e-05
21-12-24 07:45:31.709 - INFO: Train epoch 1555: Loss: 1461.4952 | r_Loss: 129.1009 | g_Loss: 718.1800 | l_Loss: 97.8110 |
21-12-24 07:46:44.912 - INFO: Learning rate: 1e-05
21-12-24 07:46:44.914 - INFO: Train epoch 1556: Loss: 1319.3761 | r_Loss: 113.7853 | g_Loss: 661.9185 | l_Loss: 88.5310 |
21-12-24 07:47:58.074 - INFO: Learning rate: 1e-05
21-12-24 07:47:58.075 - INFO: Train epoch 1557: Loss: 1313.6571 | r_Loss: 112.9261 | g_Loss: 651.3918 | l_Loss: 97.6347 |
21-12-24 07:49:11.255 - INFO: Learning rate: 1e-05
21-12-24 07:49:11.256 - INFO: Train epoch 1558: Loss: 1239.2991 | r_Loss: 103.9044 | g_Loss: 626.9999 | l_Loss: 92.7774 |
21-12-24 07:50:24.279 - INFO: Learning rate: 1e-05
21-12-24 07:50:24.281 - INFO: Train epoch 1559: Loss: 1254.3397 | r_Loss: 111.4281 | g_Loss: 625.2784 | l_Loss: 71.9207 |
21-12-24 07:52:14.000 - INFO: TEST: PSNR_S: 42.2697 | PSNR_C: 33.6824 |
21-12-24 07:52:14.002 - INFO: Learning rate: 1e-05
21-12-24 07:52:14.002 - INFO: Train epoch 1560: Loss: 1198.7266 | r_Loss: 104.1641 | g_Loss: 605.4250 | l_Loss: 72.4812 |
21-12-24 07:53:27.250 - INFO: Learning rate: 1e-05
21-12-24 07:53:27.251 - INFO: Train epoch 1561: Loss: 1194.6411 | r_Loss: 106.2620 | g_Loss: 596.1687 | l_Loss: 67.1626 |
21-12-24 07:54:40.388 - INFO: Learning rate: 1e-05
21-12-24 07:54:40.389 - INFO: Train epoch 1562: Loss: 1178.9287 | r_Loss: 101.8022 | g_Loss: 593.8013 | l_Loss: 76.1165 |
21-12-24 07:55:54.139 - INFO: Learning rate: 1e-05
21-12-24 07:55:54.140 - INFO: Train epoch 1563: Loss: 1153.4564 | r_Loss: 97.5431 | g_Loss: 576.7912 | l_Loss: 88.9495 |
21-12-24 07:57:07.888 - INFO: Learning rate: 1e-05
21-12-24 07:57:07.889 - INFO: Train epoch 1564: Loss: 1078.5040 | r_Loss: 93.8860 | g_Loss: 540.4247 | l_Loss: 68.6491 |
21-12-24 07:58:21.608 - INFO: Learning rate: 1e-05
21-12-24 07:58:21.609 - INFO: Train epoch 1565: Loss: 1082.1639 | r_Loss: 92.8806 | g_Loss: 554.1370 | l_Loss: 63.6237 |
21-12-24 07:59:35.217 - INFO: Learning rate: 1e-05
21-12-24 07:59:35.218 - INFO: Train epoch 1566: Loss: 1105.7677 | r_Loss: 96.9902 | g_Loss: 557.5944 | l_Loss: 63.2221 |
21-12-24 08:00:49.055 - INFO: Learning rate: 1e-05
21-12-24 08:00:49.056 - INFO: Train epoch 1567: Loss: 1057.7090 | r_Loss: 92.8165 | g_Loss: 531.6466 | l_Loss: 61.9801 |
21-12-24 08:02:02.445 - INFO: Learning rate: 1e-05
21-12-24 08:02:02.446 - INFO: Train epoch 1568: Loss: 1152.4383 | r_Loss: 102.8500 | g_Loss: 570.5599 | l_Loss: 67.6282 |
21-12-24 08:03:15.725 - INFO: Learning rate: 1e-05
21-12-24 08:03:15.726 - INFO: Train epoch 1569: Loss: 1020.7019 | r_Loss: 89.4215 | g_Loss: 512.2784 | l_Loss: 61.3160 |
21-12-24 08:04:29.250 - INFO: Learning rate: 1e-05
21-12-24 08:04:29.251 - INFO: Train epoch 1570: Loss: 992.8266 | r_Loss: 86.5780 | g_Loss: 492.3595 | l_Loss: 67.5769 |
21-12-24 08:05:42.827 - INFO: Learning rate: 1e-05
21-12-24 08:05:42.828 - INFO: Train epoch 1571: Loss: 1012.1849 | r_Loss: 87.6874 | g_Loss: 510.5587 | l_Loss: 63.1892 |
21-12-24 08:06:55.726 - INFO: Learning rate: 1e-05
21-12-24 08:06:55.727 - INFO: Train epoch 1572: Loss: 1024.8097 | r_Loss: 91.9953 | g_Loss: 494.1948 | l_Loss: 70.6384 |
21-12-24 08:08:09.139 - INFO: Learning rate: 1e-05
21-12-24 08:08:09.140 - INFO: Train epoch 1573: Loss: 988.0187 | r_Loss: 87.8894 | g_Loss: 481.8792 | l_Loss: 66.6927 |
21-12-24 08:09:22.114 - INFO: Learning rate: 1e-05
21-12-24 08:09:22.115 - INFO: Train epoch 1574: Loss: 945.4643 | r_Loss: 86.5923 | g_Loss: 462.6975 | l_Loss: 49.8051 |
21-12-24 08:10:35.316 - INFO: Learning rate: 1e-05
21-12-24 08:10:35.317 - INFO: Train epoch 1575: Loss: 1012.2601 | r_Loss: 93.2636 | g_Loss: 483.9323 | l_Loss: 62.0098 |
21-12-24 08:11:48.972 - INFO: Learning rate: 1e-05
21-12-24 08:11:48.973 - INFO: Train epoch 1576: Loss: 902.0597 | r_Loss: 81.2075 | g_Loss: 451.7555 | l_Loss: 44.2669 |
21-12-24 08:13:02.633 - INFO: Learning rate: 1e-05
21-12-24 08:13:02.634 - INFO: Train epoch 1577: Loss: 951.2802 | r_Loss: 85.4869 | g_Loss: 469.3929 | l_Loss: 54.4526 |
21-12-24 08:14:15.782 - INFO: Learning rate: 1e-05
21-12-24 08:14:15.783 - INFO: Train epoch 1578: Loss: 949.3511 | r_Loss: 87.0678 | g_Loss: 454.8662 | l_Loss: 59.1460 |
21-12-24 08:15:29.111 - INFO: Learning rate: 1e-05
21-12-24 08:15:29.112 - INFO: Train epoch 1579: Loss: 982.6193 | r_Loss: 93.3625 | g_Loss: 464.4076 | l_Loss: 51.3995 |
21-12-24 08:16:42.429 - INFO: Learning rate: 1e-05
21-12-24 08:16:42.430 - INFO: Train epoch 1580: Loss: 962.6468 | r_Loss: 87.5392 | g_Loss: 465.3188 | l_Loss: 59.6320 |
21-12-24 08:17:56.052 - INFO: Learning rate: 1e-05
21-12-24 08:17:56.053 - INFO: Train epoch 1581: Loss: 917.1540 | r_Loss: 84.2269 | g_Loss: 444.2776 | l_Loss: 51.7418 |
21-12-24 08:19:09.536 - INFO: Learning rate: 1e-05
21-12-24 08:19:09.537 - INFO: Train epoch 1582: Loss: 930.3050 | r_Loss: 88.3068 | g_Loss: 440.2644 | l_Loss: 48.5067 |
21-12-24 08:20:23.081 - INFO: Learning rate: 1e-05
21-12-24 08:20:23.082 - INFO: Train epoch 1583: Loss: 907.6884 | r_Loss: 83.9473 | g_Loss: 431.8910 | l_Loss: 56.0608 |
21-12-24 08:21:36.302 - INFO: Learning rate: 1e-05
21-12-24 08:21:36.303 - INFO: Train epoch 1584: Loss: 898.1195 | r_Loss: 84.8714 | g_Loss: 424.7705 | l_Loss: 48.9921 |
21-12-24 08:22:49.604 - INFO: Learning rate: 1e-05
21-12-24 08:22:49.605 - INFO: Train epoch 1585: Loss: 891.8636 | r_Loss: 83.7088 | g_Loss: 423.0393 | l_Loss: 50.2805 |
21-12-24 08:24:02.817 - INFO: Learning rate: 1e-05
21-12-24 08:24:02.818 - INFO: Train epoch 1586: Loss: 867.0509 | r_Loss: 81.7234 | g_Loss: 408.0015 | l_Loss: 50.4323 |
21-12-24 08:25:16.264 - INFO: Learning rate: 1e-05
21-12-24 08:25:16.265 - INFO: Train epoch 1587: Loss: 896.1341 | r_Loss: 87.9731 | g_Loss: 412.7231 | l_Loss: 43.5458 |
21-12-24 08:26:30.021 - INFO: Learning rate: 1e-05
21-12-24 08:26:30.022 - INFO: Train epoch 1588: Loss: 917.4606 | r_Loss: 89.3972 | g_Loss: 413.5203 | l_Loss: 56.9543 |
21-12-24 08:27:43.380 - INFO: Learning rate: 1e-05
21-12-24 08:27:43.381 - INFO: Train epoch 1589: Loss: 813.9731 | r_Loss: 76.4071 | g_Loss: 388.5248 | l_Loss: 43.4130 |
21-12-24 08:29:32.517 - INFO: TEST: PSNR_S: 42.2787 | PSNR_C: 35.6705 |
21-12-24 08:29:32.518 - INFO: Learning rate: 1e-05
21-12-24 08:29:32.518 - INFO: Train epoch 1590: Loss: 877.7025 | r_Loss: 85.2753 | g_Loss: 405.3415 | l_Loss: 45.9843 |
21-12-24 08:30:46.076 - INFO: Learning rate: 1e-05
21-12-24 08:30:46.077 - INFO: Train epoch 1591: Loss: 840.5245 | r_Loss: 82.4082 | g_Loss: 382.1677 | l_Loss: 46.3157 |
21-12-24 08:31:59.217 - INFO: Learning rate: 1e-05
21-12-24 08:31:59.218 - INFO: Train epoch 1592: Loss: 880.8191 | r_Loss: 85.3492 | g_Loss: 410.3245 | l_Loss: 43.7485 |
21-12-24 08:33:12.677 - INFO: Learning rate: 1e-05
21-12-24 08:33:12.678 - INFO: Train epoch 1593: Loss: 865.7928 | r_Loss: 83.0056 | g_Loss: 402.1137 | l_Loss: 48.6511 |
21-12-24 08:34:26.020 - INFO: Learning rate: 1e-05
21-12-24 08:34:26.021 - INFO: Train epoch 1594: Loss: 853.4760 | r_Loss: 83.4974 | g_Loss: 387.3700 | l_Loss: 48.6191 |
21-12-24 08:35:39.478 - INFO: Learning rate: 1e-05
21-12-24 08:35:39.479 - INFO: Train epoch 1595: Loss: 868.1502 | r_Loss: 86.9675 | g_Loss: 381.3366 | l_Loss: 51.9762 |
21-12-24 08:36:52.699 - INFO: Learning rate: 1e-05
21-12-24 08:36:52.700 - INFO: Train epoch 1596: Loss: 883.4293 | r_Loss: 88.6664 | g_Loss: 381.4901 | l_Loss: 58.6071 |
21-12-24 08:38:06.197 - INFO: Learning rate: 1e-05
21-12-24 08:38:06.198 - INFO: Train epoch 1597: Loss: 907.2558 | r_Loss: 94.4188 | g_Loss: 387.8458 | l_Loss: 47.3160 |
21-12-24 08:39:19.739 - INFO: Learning rate: 1e-05
21-12-24 08:39:19.740 - INFO: Train epoch 1598: Loss: 909.0284 | r_Loss: 95.5417 | g_Loss: 386.2588 | l_Loss: 45.0613 |
21-12-24 08:40:33.025 - INFO: Learning rate: 1e-05
21-12-24 08:40:33.026 - INFO: Train epoch 1599: Loss: 821.8634 | r_Loss: 79.6302 | g_Loss: 377.0275 | l_Loss: 46.6847 |
21-12-24 08:41:46.175 - INFO: Learning rate: 1e-05
21-12-24 08:41:46.176 - INFO: Train epoch 1600: Loss: 852.5313 | r_Loss: 83.5799 | g_Loss: 385.4567 | l_Loss: 49.1750 |
21-12-24 08:42:59.512 - INFO: Learning rate: 1e-05
21-12-24 08:42:59.513 - INFO: Train epoch 1601: Loss: 813.5366 | r_Loss: 81.2760 | g_Loss: 367.2696 | l_Loss: 39.8873 |
21-12-24 08:44:13.083 - INFO: Learning rate: 1e-05
21-12-24 08:44:13.084 - INFO: Train epoch 1602: Loss: 833.1190 | r_Loss: 85.5183 | g_Loss: 358.4474 | l_Loss: 47.0803 |
21-12-24 08:45:26.195 - INFO: Learning rate: 1e-05
21-12-24 08:45:26.196 - INFO: Train epoch 1603: Loss: 886.2272 | r_Loss: 95.5086 | g_Loss: 363.0395 | l_Loss: 45.6445 |
21-12-24 08:46:39.493 - INFO: Learning rate: 1e-05
21-12-24 08:46:39.494 - INFO: Train epoch 1604: Loss: 817.4150 | r_Loss: 80.5345 | g_Loss: 372.3468 | l_Loss: 42.3955 |
21-12-24 08:47:52.592 - INFO: Learning rate: 1e-05
21-12-24 08:47:52.593 - INFO: Train epoch 1605: Loss: 847.1175 | r_Loss: 82.5145 | g_Loss: 375.7832 | l_Loss: 58.7619 |
21-12-24 08:49:06.029 - INFO: Learning rate: 1e-05
21-12-24 08:49:06.030 - INFO: Train epoch 1606: Loss: 806.2132 | r_Loss: 80.9179 | g_Loss: 355.6748 | l_Loss: 45.9491 |
21-12-24 08:50:19.674 - INFO: Learning rate: 1e-05
21-12-24 08:50:19.675 - INFO: Train epoch 1607: Loss: 847.7960 | r_Loss: 85.8724 | g_Loss: 377.1846 | l_Loss: 41.2492 |
21-12-24 08:51:32.952 - INFO: Learning rate: 1e-05
21-12-24 08:51:32.952 - INFO: Train epoch 1608: Loss: 789.9701 | r_Loss: 79.0263 | g_Loss: 354.0438 | l_Loss: 40.7948 |
21-12-24 08:52:45.963 - INFO: Learning rate: 1e-05
21-12-24 08:52:45.964 - INFO: Train epoch 1609: Loss: 843.9422 | r_Loss: 86.9474 | g_Loss: 367.1991 | l_Loss: 42.0062 |
21-12-24 08:53:59.367 - INFO: Learning rate: 1e-05
21-12-24 08:53:59.368 - INFO: Train epoch 1610: Loss: 825.2340 | r_Loss: 83.6943 | g_Loss: 363.0553 | l_Loss: 43.7074 |
21-12-24 08:55:12.535 - INFO: Learning rate: 1e-05
21-12-24 08:55:12.536 - INFO: Train epoch 1611: Loss: 797.9378 | r_Loss: 82.0849 | g_Loss: 345.5697 | l_Loss: 41.9437 |
21-12-24 08:56:26.154 - INFO: Learning rate: 1e-05
21-12-24 08:56:26.155 - INFO: Train epoch 1612: Loss: 881.1047 | r_Loss: 95.5384 | g_Loss: 365.5317 | l_Loss: 37.8810 |
21-12-24 08:57:39.902 - INFO: Learning rate: 1e-05
21-12-24 08:57:39.903 - INFO: Train epoch 1613: Loss: 842.9822 | r_Loss: 88.3553 | g_Loss: 360.7207 | l_Loss: 40.4850 |
21-12-24 08:58:53.610 - INFO: Learning rate: 1e-05
21-12-24 08:58:53.611 - INFO: Train epoch 1614: Loss: 840.0405 | r_Loss: 84.3770 | g_Loss: 364.7939 | l_Loss: 53.3618 |
21-12-24 09:00:06.968 - INFO: Learning rate: 1e-05
21-12-24 09:00:06.969 - INFO: Train epoch 1615: Loss: 894.7621 | r_Loss: 94.6246 | g_Loss: 368.6192 | l_Loss: 53.0197 |
21-12-24 09:01:20.069 - INFO: Learning rate: 1e-05
21-12-24 09:01:20.070 - INFO: Train epoch 1616: Loss: 792.0350 | r_Loss: 79.9195 | g_Loss: 339.9133 | l_Loss: 52.5239 |
21-12-24 09:02:33.221 - INFO: Learning rate: 1e-05
21-12-24 09:02:33.222 - INFO: Train epoch 1617: Loss: 841.6074 | r_Loss: 87.8457 | g_Loss: 359.0245 | l_Loss: 43.3544 |
21-12-24 09:03:46.779 - INFO: Learning rate: 1e-05
21-12-24 09:03:46.780 - INFO: Train epoch 1618: Loss: 916.7770 | r_Loss: 104.5749 | g_Loss: 346.4422 | l_Loss: 47.4602 |
21-12-24 09:05:00.582 - INFO: Learning rate: 1e-05
21-12-24 09:05:00.583 - INFO: Train epoch 1619: Loss: 812.6384 | r_Loss: 84.1405 | g_Loss: 344.5693 | l_Loss: 47.3666 |
21-12-24 09:06:50.380 - INFO: TEST: PSNR_S: 43.5987 | PSNR_C: 36.3974 |
21-12-24 09:06:50.381 - INFO: Learning rate: 1e-05
21-12-24 09:06:50.382 - INFO: Train epoch 1620: Loss: 784.5048 | r_Loss: 79.3974 | g_Loss: 347.8174 | l_Loss: 39.7006 |
21-12-24 09:08:03.757 - INFO: Learning rate: 1e-05
21-12-24 09:08:03.758 - INFO: Train epoch 1621: Loss: 825.9391 | r_Loss: 85.5302 | g_Loss: 360.6045 | l_Loss: 37.6835 |
21-12-24 09:09:16.879 - INFO: Learning rate: 1e-05
21-12-24 09:09:16.880 - INFO: Train epoch 1622: Loss: 784.6610 | r_Loss: 79.5903 | g_Loss: 341.2135 | l_Loss: 45.4958 |
21-12-24 09:10:30.572 - INFO: Learning rate: 1e-05
21-12-24 09:10:30.573 - INFO: Train epoch 1623: Loss: 1003.8703 | r_Loss: 120.9199 | g_Loss: 349.9160 | l_Loss: 49.3547 |
21-12-24 09:11:43.910 - INFO: Learning rate: 1e-05
21-12-24 09:11:43.911 - INFO: Train epoch 1624: Loss: 763.0539 | r_Loss: 74.4592 | g_Loss: 349.8810 | l_Loss: 40.8772 |
21-12-24 09:12:57.384 - INFO: Learning rate: 1e-05
21-12-24 09:12:57.385 - INFO: Train epoch 1625: Loss: 827.1236 | r_Loss: 89.3634 | g_Loss: 345.1559 | l_Loss: 35.1509 |
21-12-24 09:14:10.562 - INFO: Learning rate: 1e-05
21-12-24 09:14:10.563 - INFO: Train epoch 1626: Loss: 813.1795 | r_Loss: 84.2767 | g_Loss: 346.7731 | l_Loss: 45.0227 |
21-12-24 09:15:23.582 - INFO: Learning rate: 1e-05
21-12-24 09:15:23.583 - INFO: Train epoch 1627: Loss: 819.5827 | r_Loss: 85.6343 | g_Loss: 351.6307 | l_Loss: 39.7806 |
21-12-24 09:16:36.685 - INFO: Learning rate: 1e-05
21-12-24 09:16:36.686 - INFO: Train epoch 1628: Loss: 836.1941 | r_Loss: 87.9744 | g_Loss: 350.1587 | l_Loss: 46.1632 |
21-12-24 09:17:49.775 - INFO: Learning rate: 1e-05
21-12-24 09:17:49.776 - INFO: Train epoch 1629: Loss: 903.0125 | r_Loss: 95.1289 | g_Loss: 367.5988 | l_Loss: 59.7692 |
21-12-24 09:19:03.504 - INFO: Learning rate: 1e-05
21-12-24 09:19:03.504 - INFO: Train epoch 1630: Loss: 829.0356 | r_Loss: 86.1717 | g_Loss: 350.3935 | l_Loss: 47.7837 |
21-12-24 09:20:16.659 - INFO: Learning rate: 1e-05
21-12-24 09:20:16.660 - INFO: Train epoch 1631: Loss: 749.2853 | r_Loss: 74.4446 | g_Loss: 338.4570 | l_Loss: 38.6053 |
21-12-24 09:21:29.700 - INFO: Learning rate: 1e-05
21-12-24 09:21:29.701 - INFO: Train epoch 1632: Loss: 875.4438 | r_Loss: 93.2961 | g_Loss: 367.9199 | l_Loss: 41.0432 |
21-12-24 09:22:43.032 - INFO: Learning rate: 1e-05
21-12-24 09:22:43.033 - INFO: Train epoch 1633: Loss: 885.7709 | r_Loss: 98.0124 | g_Loss: 355.1045 | l_Loss: 40.6043 |
21-12-24 09:23:56.228 - INFO: Learning rate: 1e-05
21-12-24 09:23:56.228 - INFO: Train epoch 1634: Loss: 815.2492 | r_Loss: 83.8577 | g_Loss: 357.3648 | l_Loss: 38.5960 |
21-12-24 09:25:09.364 - INFO: Learning rate: 1e-05
21-12-24 09:25:09.365 - INFO: Train epoch 1635: Loss: 768.7360 | r_Loss: 78.6747 | g_Loss: 333.4236 | l_Loss: 41.9390 |
21-12-24 09:26:22.680 - INFO: Learning rate: 1e-05
21-12-24 09:26:22.681 - INFO: Train epoch 1636: Loss: 814.2074 | r_Loss: 84.5408 | g_Loss: 346.8333 | l_Loss: 44.6701 |
21-12-24 09:27:35.650 - INFO: Learning rate: 1e-05
21-12-24 09:27:35.652 - INFO: Train epoch 1637: Loss: 842.4867 | r_Loss: 91.5926 | g_Loss: 345.3658 | l_Loss: 39.1580 |
21-12-24 09:28:48.875 - INFO: Learning rate: 1e-05
21-12-24 09:28:48.876 - INFO: Train epoch 1638: Loss: 14405.1835 | r_Loss: 2678.5474 | g_Loss: 909.8101 | l_Loss: 102.6365 |
21-12-24 09:30:02.169 - INFO: Learning rate: 1e-05
21-12-24 09:30:02.170 - INFO: Train epoch 1639: Loss: 2346.5485 | r_Loss: 217.3854 | g_Loss: 1141.0493 | l_Loss: 118.5723 |
21-12-24 09:31:15.460 - INFO: Learning rate: 1e-05
21-12-24 09:31:15.460 - INFO: Train epoch 1640: Loss: 1707.9083 | r_Loss: 149.2330 | g_Loss: 860.4238 | l_Loss: 101.3194 |
21-12-24 09:32:28.674 - INFO: Learning rate: 1e-05
21-12-24 09:32:28.675 - INFO: Train epoch 1641: Loss: 1465.4634 | r_Loss: 120.1125 | g_Loss: 761.1771 | l_Loss: 103.7239 |
21-12-24 09:33:42.704 - INFO: Learning rate: 1e-05
21-12-24 09:33:42.705 - INFO: Train epoch 1642: Loss: 1273.8757 | r_Loss: 103.5727 | g_Loss: 662.7674 | l_Loss: 93.2448 |
21-12-24 09:34:56.384 - INFO: Learning rate: 1e-05
21-12-24 09:34:56.385 - INFO: Train epoch 1643: Loss: 1341.7553 | r_Loss: 115.2021 | g_Loss: 670.8085 | l_Loss: 94.9364 |
21-12-24 09:36:10.133 - INFO: Learning rate: 1e-05
21-12-24 09:36:10.134 - INFO: Train epoch 1644: Loss: 1223.4247 | r_Loss: 99.1240 | g_Loss: 639.2811 | l_Loss: 88.5237 |
21-12-24 09:37:24.001 - INFO: Learning rate: 1e-05
21-12-24 09:37:24.002 - INFO: Train epoch 1645: Loss: 1191.5650 | r_Loss: 98.0392 | g_Loss: 624.6568 | l_Loss: 76.7122 |
21-12-24 09:38:37.531 - INFO: Learning rate: 1e-05
21-12-24 09:38:37.532 - INFO: Train epoch 1646: Loss: 1145.2553 | r_Loss: 98.5115 | g_Loss: 583.5797 | l_Loss: 69.1180 |
21-12-24 09:39:50.812 - INFO: Learning rate: 1e-05
21-12-24 09:39:50.813 - INFO: Train epoch 1647: Loss: 1086.6809 | r_Loss: 90.3222 | g_Loss: 562.6948 | l_Loss: 72.3751 |
21-12-24 09:41:04.240 - INFO: Learning rate: 1e-05
21-12-24 09:41:04.242 - INFO: Train epoch 1648: Loss: 1057.4094 | r_Loss: 89.7136 | g_Loss: 540.7197 | l_Loss: 68.1217 |
21-12-24 09:42:17.932 - INFO: Learning rate: 1e-05
21-12-24 09:42:17.933 - INFO: Train epoch 1649: Loss: 994.1378 | r_Loss: 84.9609 | g_Loss: 510.1755 | l_Loss: 59.1575 |
21-12-24 09:44:07.546 - INFO: TEST: PSNR_S: 43.5524 | PSNR_C: 34.6063 |
21-12-24 09:44:07.547 - INFO: Learning rate: 1e-05
21-12-24 09:44:07.547 - INFO: Train epoch 1650: Loss: 983.3585 | r_Loss: 81.0479 | g_Loss: 510.8286 | l_Loss: 67.2904 |
21-12-24 09:45:21.268 - INFO: Learning rate: 1e-05
21-12-24 09:45:21.269 - INFO: Train epoch 1651: Loss: 1001.9569 | r_Loss: 85.9370 | g_Loss: 503.2385 | l_Loss: 69.0335 |
21-12-24 09:46:34.644 - INFO: Learning rate: 1e-05
21-12-24 09:46:34.645 - INFO: Train epoch 1652: Loss: 952.6495 | r_Loss: 79.8033 | g_Loss: 485.2719 | l_Loss: 68.3610 |
21-12-24 09:47:47.468 - INFO: Learning rate: 1e-05
21-12-24 09:47:47.469 - INFO: Train epoch 1653: Loss: 901.5193 | r_Loss: 75.6322 | g_Loss: 463.1238 | l_Loss: 60.2348 |
21-12-24 09:49:00.577 - INFO: Learning rate: 1e-05
21-12-24 09:49:00.578 - INFO: Train epoch 1654: Loss: 872.8831 | r_Loss: 71.8349 | g_Loss: 466.2055 | l_Loss: 47.5029 |
21-12-24 09:50:13.819 - INFO: Learning rate: 1e-05
21-12-24 09:50:13.820 - INFO: Train epoch 1655: Loss: 916.4867 | r_Loss: 77.1195 | g_Loss: 464.0018 | l_Loss: 66.8875 |
21-12-24 09:51:27.160 - INFO: Learning rate: 1e-05
21-12-24 09:51:27.161 - INFO: Train epoch 1656: Loss: 929.3526 | r_Loss: 81.8718 | g_Loss: 462.8461 | l_Loss: 57.1476 |
21-12-24 09:52:40.738 - INFO: Learning rate: 1e-05
21-12-24 09:52:40.739 - INFO: Train epoch 1657: Loss: 863.2178 | r_Loss: 76.1591 | g_Loss: 422.5891 | l_Loss: 59.8331 |
21-12-24 09:53:54.035 - INFO: Learning rate: 1e-05
21-12-24 09:53:54.036 - INFO: Train epoch 1658: Loss: 847.4463 | r_Loss: 75.1713 | g_Loss: 419.5601 | l_Loss: 52.0296 |
21-12-24 09:55:07.548 - INFO: Learning rate: 1e-05
21-12-24 09:55:07.549 - INFO: Train epoch 1659: Loss: 873.4359 | r_Loss: 79.8546 | g_Loss: 429.3905 | l_Loss: 44.7724 |
21-12-24 09:56:20.782 - INFO: Learning rate: 1e-05
21-12-24 09:56:20.782 - INFO: Train epoch 1660: Loss: 850.9858 | r_Loss: 77.9583 | g_Loss: 409.0373 | l_Loss: 52.1571 |
21-12-24 09:57:34.134 - INFO: Learning rate: 1e-05
21-12-24 09:57:34.135 - INFO: Train epoch 1661: Loss: 855.8289 | r_Loss: 77.2884 | g_Loss: 414.8659 | l_Loss: 54.5208 |
21-12-24 09:58:47.480 - INFO: Learning rate: 1e-05
21-12-24 09:58:47.481 - INFO: Train epoch 1662: Loss: 784.4591 | r_Loss: 70.9478 | g_Loss: 387.8596 | l_Loss: 41.8606 |
21-12-24 10:00:00.738 - INFO: Learning rate: 1e-05
21-12-24 10:00:00.739 - INFO: Train epoch 1663: Loss: 806.6972 | r_Loss: 73.4454 | g_Loss: 394.7007 | l_Loss: 44.7696 |
21-12-24 10:01:14.162 - INFO: Learning rate: 1e-05
21-12-24 10:01:14.164 - INFO: Train epoch 1664: Loss: 817.9001 | r_Loss: 75.3585 | g_Loss: 393.2926 | l_Loss: 47.8152 |
21-12-24 10:02:27.685 - INFO: Learning rate: 1e-05
21-12-24 10:02:27.686 - INFO: Train epoch 1665: Loss: 797.7165 | r_Loss: 72.3306 | g_Loss: 387.2248 | l_Loss: 48.8385 |
21-12-24 10:03:41.114 - INFO: Learning rate: 1e-05
21-12-24 10:03:41.115 - INFO: Train epoch 1666: Loss: 875.6423 | r_Loss: 87.3374 | g_Loss: 394.5212 | l_Loss: 44.4341 |
21-12-24 10:04:54.323 - INFO: Learning rate: 1e-05
21-12-24 10:04:54.324 - INFO: Train epoch 1667: Loss: 773.9687 | r_Loss: 71.6283 | g_Loss: 366.8915 | l_Loss: 48.9355 |
21-12-24 10:06:07.715 - INFO: Learning rate: 1e-05
21-12-24 10:06:07.717 - INFO: Train epoch 1668: Loss: 810.2192 | r_Loss: 78.4768 | g_Loss: 376.1192 | l_Loss: 41.7159 |
21-12-24 10:07:21.287 - INFO: Learning rate: 1e-05
21-12-24 10:07:21.288 - INFO: Train epoch 1669: Loss: 777.7164 | r_Loss: 74.3028 | g_Loss: 366.4278 | l_Loss: 39.7745 |
21-12-24 10:08:34.690 - INFO: Learning rate: 1e-05
21-12-24 10:08:34.691 - INFO: Train epoch 1670: Loss: 956.5442 | r_Loss: 106.3131 | g_Loss: 381.3545 | l_Loss: 43.6243 |
21-12-24 10:09:48.338 - INFO: Learning rate: 1e-05
21-12-24 10:09:48.340 - INFO: Train epoch 1671: Loss: 762.2459 | r_Loss: 73.2126 | g_Loss: 357.2680 | l_Loss: 38.9149 |
21-12-24 10:11:01.576 - INFO: Learning rate: 1e-05
21-12-24 10:11:01.577 - INFO: Train epoch 1672: Loss: 784.1566 | r_Loss: 75.1956 | g_Loss: 363.9480 | l_Loss: 44.2303 |
21-12-24 10:12:15.095 - INFO: Learning rate: 1e-05
21-12-24 10:12:15.096 - INFO: Train epoch 1673: Loss: 785.5053 | r_Loss: 75.5129 | g_Loss: 358.6584 | l_Loss: 49.2827 |
21-12-24 10:13:28.637 - INFO: Learning rate: 1e-05
21-12-24 10:13:28.637 - INFO: Train epoch 1674: Loss: 851.5965 | r_Loss: 86.7678 | g_Loss: 373.3325 | l_Loss: 44.4249 |
21-12-24 10:14:42.728 - INFO: Learning rate: 1e-05
21-12-24 10:14:42.729 - INFO: Train epoch 1675: Loss: 816.6315 | r_Loss: 80.7188 | g_Loss: 364.3383 | l_Loss: 48.6992 |
21-12-24 10:15:55.992 - INFO: Learning rate: 1e-05
21-12-24 10:15:55.992 - INFO: Train epoch 1676: Loss: 756.5006 | r_Loss: 74.4684 | g_Loss: 349.3871 | l_Loss: 34.7716 |
21-12-24 10:17:09.112 - INFO: Learning rate: 1e-05
21-12-24 10:17:09.113 - INFO: Train epoch 1677: Loss: 781.9788 | r_Loss: 75.8099 | g_Loss: 354.5783 | l_Loss: 48.3509 |
21-12-24 10:18:22.964 - INFO: Learning rate: 1e-05
21-12-24 10:18:22.965 - INFO: Train epoch 1678: Loss: 830.9963 | r_Loss: 83.7044 | g_Loss: 365.6374 | l_Loss: 46.8369 |
21-12-24 10:19:36.299 - INFO: Learning rate: 1e-05
21-12-24 10:19:36.300 - INFO: Train epoch 1679: Loss: 776.8798 | r_Loss: 77.0451 | g_Loss: 349.1785 | l_Loss: 42.4759 |
21-12-24 10:21:26.129 - INFO: TEST: PSNR_S: 43.9917 | PSNR_C: 36.4497 |
21-12-24 10:21:26.131 - INFO: Learning rate: 1e-05
21-12-24 10:21:26.132 - INFO: Train epoch 1680: Loss: 791.6092 | r_Loss: 80.2605 | g_Loss: 347.9493 | l_Loss: 42.3573 |
21-12-24 10:22:39.142 - INFO: Learning rate: 1e-05
21-12-24 10:22:39.143 - INFO: Train epoch 1681: Loss: 811.9097 | r_Loss: 81.7508 | g_Loss: 359.3081 | l_Loss: 43.8476 |
21-12-24 10:23:52.634 - INFO: Learning rate: 1e-05
21-12-24 10:23:52.635 - INFO: Train epoch 1682: Loss: 829.2183 | r_Loss: 87.0514 | g_Loss: 347.4479 | l_Loss: 46.5132 |
21-12-24 10:25:06.120 - INFO: Learning rate: 1e-05
21-12-24 10:25:06.121 - INFO: Train epoch 1683: Loss: 771.4503 | r_Loss: 77.6023 | g_Loss: 336.3731 | l_Loss: 47.0656 |
21-12-24 10:26:19.821 - INFO: Learning rate: 1e-05
21-12-24 10:26:19.823 - INFO: Train epoch 1684: Loss: 921.2358 | r_Loss: 98.5604 | g_Loss: 375.6672 | l_Loss: 52.7667 |
21-12-24 10:27:33.492 - INFO: Learning rate: 1e-05
21-12-24 10:27:33.493 - INFO: Train epoch 1685: Loss: 729.1279 | r_Loss: 72.3978 | g_Loss: 330.7991 | l_Loss: 36.3398 |
21-12-24 10:28:46.851 - INFO: Learning rate: 1e-05
21-12-24 10:28:46.852 - INFO: Train epoch 1686: Loss: 771.4386 | r_Loss: 75.4112 | g_Loss: 347.4656 | l_Loss: 46.9170 |
21-12-24 10:30:00.282 - INFO: Learning rate: 1e-05
21-12-24 10:30:00.282 - INFO: Train epoch 1687: Loss: 748.3135 | r_Loss: 75.2537 | g_Loss: 326.3650 | l_Loss: 45.6799 |
21-12-24 10:31:14.047 - INFO: Learning rate: 1e-05
21-12-24 10:31:14.049 - INFO: Train epoch 1688: Loss: 777.9094 | r_Loss: 81.2759 | g_Loss: 335.0053 | l_Loss: 36.5244 |
21-12-24 10:32:28.015 - INFO: Learning rate: 1e-05
21-12-24 10:32:28.016 - INFO: Train epoch 1689: Loss: 744.9148 | r_Loss: 76.2981 | g_Loss: 326.5564 | l_Loss: 36.8678 |
21-12-24 10:33:41.172 - INFO: Learning rate: 1e-05
21-12-24 10:33:41.173 - INFO: Train epoch 1690: Loss: 762.6484 | r_Loss: 79.3934 | g_Loss: 322.5522 | l_Loss: 43.1292 |
21-12-24 10:34:54.394 - INFO: Learning rate: 1e-05
21-12-24 10:34:54.395 - INFO: Train epoch 1691: Loss: 806.7924 | r_Loss: 83.2526 | g_Loss: 341.4248 | l_Loss: 49.1046 |
21-12-24 10:36:07.646 - INFO: Learning rate: 1e-05
21-12-24 10:36:07.647 - INFO: Train epoch 1692: Loss: 48777.0852 | r_Loss: 8865.4685 | g_Loss: 3922.1753 | l_Loss: 527.5679 |
================================================
FILE: logging/train__211224-105010.log
================================================
21-12-24 10:50:10.472 - INFO: DataParallel(
(module): Model(
(model): Hinet(
(inv1): INV_block(
(r): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(y): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
(f): ResidualDenseBlock_out(
(conv1): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(44, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(76, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(108, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(140, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.01, inplace)
)
)
      (inv2) ... (inv16): 15 further INV_block modules, each with the same
        (r), (y), (f) ResidualDenseBlock_out structure as (inv1) above
        (conv1..conv5 with in-channels 12/44/76/108/140, LeakyReLU(0.01, inplace))
)
)
)
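For reference, the `ResidualDenseBlock_out` printed above (it appears three times per `INV_block`, as the `r`, `y`, and `f` branches) can be sketched in PyTorch as follows. The actual implementation lives in `rrdb_denselayer.py`; this is only a reconstruction that mirrors the channel counts (12 in, 32-channel growth per dense layer, 12 out) and the LeakyReLU activation shown in the dump, so parameter names here are assumptions.

```python
import torch
import torch.nn as nn


class ResidualDenseBlock_out(nn.Module):
    """Dense block matching the dump: each conv sees the input plus all
    previous 32-channel feature maps (in-channels 12, 44, 76, 108, 140),
    and the final conv projects back to 12 channels."""

    def __init__(self, channel_in=12, channel_out=12, growth=32):
        super().__init__()
        self.conv1 = nn.Conv2d(channel_in, growth, 3, 1, 1)
        self.conv2 = nn.Conv2d(channel_in + growth, growth, 3, 1, 1)
        self.conv3 = nn.Conv2d(channel_in + 2 * growth, growth, 3, 1, 1)
        self.conv4 = nn.Conv2d(channel_in + 3 * growth, growth, 3, 1, 1)
        self.conv5 = nn.Conv2d(channel_in + 4 * growth, channel_out, 3, 1, 1)
        self.lrelu = nn.LeakyReLU(negative_slope=0.01, inplace=True)

    def forward(self, x):
        # Dense connectivity: concatenate the input with every intermediate
        # feature map before each successive convolution.
        x1 = self.lrelu(self.conv1(x))
        x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
        x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
        x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
        return self.conv5(torch.cat((x, x1, x2, x3, x4), 1))


block = ResidualDenseBlock_out()
y = block(torch.randn(1, 12, 16, 16))
print(y.shape)  # torch.Size([1, 12, 16, 16])
```

The 3x3 convolutions use stride 1 and padding 1 throughout, so spatial dimensions are preserved and only channel counts change, which is what allows the block to be used inside an invertible coupling layer.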
21-12-24 10:51:24.005 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 10:51:24.006 - INFO: Train epoch 1651: Loss: 996.6664 | r_Loss: 87.0803 | g_Loss: 499.5519 | l_Loss: 61.7132 |
21-12-24 10:52:37.878 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 10:52:37.879 - INFO: Train epoch 1652: Loss: 934.8195 | r_Loss: 81.4228 | g_Loss: 465.0092 | l_Loss: 62.6962 |
21-12-24 10:53:51.594 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 10:53:51.595 - INFO: Train epoch 1653: Loss: 909.2602 | r_Loss: 85.9651 | g_Loss: 422.5426 | l_Loss: 56.8919 |
21-12-24 10:55:05.322 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 10:55:05.323 - INFO: Train epoch 1654: Loss: 834.6590 | r_Loss: 79.7185 | g_Loss: 393.1451 | l_Loss: 42.9211 |
21-12-24 10:56:18.955 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 10:56:18.956 - INFO: Train epoch 1655: Loss: 1296.1235 | r_Loss: 151.0322 | g_Loss: 462.2363 | l_Loss: 78.7261 |
21-12-24 10:57:32.335 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 10:57:32.336 - INFO: Train epoch 1656: Loss: 836.4362 | r_Loss: 74.6929 | g_Loss: 416.5005 | l_Loss: 46.4711 |
21-12-24 10:58:45.987 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 10:58:45.988 - INFO: Train epoch 1657: Loss: 770.1497 | r_Loss: 69.8997 | g_Loss: 372.9066 | l_Loss: 47.7447 |
21-12-24 10:59:59.057 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 10:59:59.058 - INFO: Train epoch 1658: Loss: 879.3608 | r_Loss: 84.0686 | g_Loss: 396.0211 | l_Loss: 62.9966 |
21-12-24 11:01:12.716 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:01:12.717 - INFO: Train epoch 1659: Loss: 795.9085 | r_Loss: 77.6426 | g_Loss: 362.8962 | l_Loss: 44.7994 |
21-12-24 11:02:26.173 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:02:26.173 - INFO: Train epoch 1660: Loss: 812.1923 | r_Loss: 80.2613 | g_Loss: 361.0567 | l_Loss: 49.8293 |
21-12-24 11:03:39.323 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:03:39.325 - INFO: Train epoch 1661: Loss: 844.8791 | r_Loss: 85.5502 | g_Loss: 362.2384 | l_Loss: 54.8897 |
21-12-24 11:04:53.042 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:04:53.043 - INFO: Train epoch 1662: Loss: 760.9282 | r_Loss: 75.5782 | g_Loss: 341.6751 | l_Loss: 41.3623 |
21-12-24 11:06:06.290 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:06:06.291 - INFO: Train epoch 1663: Loss: 783.3647 | r_Loss: 78.4957 | g_Loss: 348.9161 | l_Loss: 41.9701 |
21-12-24 11:07:19.850 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:07:19.851 - INFO: Train epoch 1664: Loss: 714.5819 | r_Loss: 70.6148 | g_Loss: 325.3690 | l_Loss: 36.1389 |
21-12-24 11:08:33.661 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:08:33.662 - INFO: Train epoch 1665: Loss: 828.1388 | r_Loss: 86.6758 | g_Loss: 353.0427 | l_Loss: 41.7171 |
21-12-24 11:09:47.023 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:09:47.024 - INFO: Train epoch 1666: Loss: 743.3908 | r_Loss: 73.9117 | g_Loss: 335.4105 | l_Loss: 38.4218 |
21-12-24 11:11:00.957 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:11:00.958 - INFO: Train epoch 1667: Loss: 757.1275 | r_Loss: 77.4477 | g_Loss: 329.1774 | l_Loss: 40.7116 |
21-12-24 11:12:14.564 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:12:14.565 - INFO: Train epoch 1668: Loss: 785.3291 | r_Loss: 77.8449 | g_Loss: 346.5850 | l_Loss: 49.5194 |
21-12-24 11:13:28.145 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:13:28.147 - INFO: Train epoch 1669: Loss: 728.2735 | r_Loss: 72.8905 | g_Loss: 321.7528 | l_Loss: 42.0681 |
21-12-24 11:14:41.429 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:14:41.430 - INFO: Train epoch 1670: Loss: 732.5073 | r_Loss: 74.8059 | g_Loss: 315.3424 | l_Loss: 43.1353 |
21-12-24 11:15:55.292 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:15:55.293 - INFO: Train epoch 1671: Loss: 771.3462 | r_Loss: 80.3056 | g_Loss: 325.4308 | l_Loss: 44.3874 |
21-12-24 11:17:08.662 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:17:08.663 - INFO: Train epoch 1672: Loss: 842.4972 | r_Loss: 92.4568 | g_Loss: 339.2938 | l_Loss: 40.9191 |
21-12-24 11:18:22.081 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:18:22.082 - INFO: Train epoch 1673: Loss: 8610.9523 | r_Loss: 1506.7883 | g_Loss: 958.7069 | l_Loss: 118.3040 |
21-12-24 11:19:34.991 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:19:34.991 - INFO: Train epoch 1674: Loss: 1021.1046 | r_Loss: 94.6165 | g_Loss: 483.1054 | l_Loss: 64.9166 |
21-12-24 11:20:48.303 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:20:48.304 - INFO: Train epoch 1675: Loss: 951.6090 | r_Loss: 84.5241 | g_Loss: 471.4960 | l_Loss: 57.4927 |
21-12-24 11:22:01.863 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:22:01.864 - INFO: Train epoch 1676: Loss: 890.1895 | r_Loss: 77.6636 | g_Loss: 442.2973 | l_Loss: 59.5743 |
21-12-24 11:23:14.835 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:23:14.836 - INFO: Train epoch 1677: Loss: 885.7175 | r_Loss: 78.1532 | g_Loss: 446.9193 | l_Loss: 48.0322 |
21-12-24 11:24:28.151 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:24:28.152 - INFO: Train epoch 1678: Loss: 824.7316 | r_Loss: 71.6695 | g_Loss: 419.0889 | l_Loss: 47.2951 |
21-12-24 11:25:41.614 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:25:41.616 - INFO: Train epoch 1679: Loss: 869.4428 | r_Loss: 77.9959 | g_Loss: 419.5999 | l_Loss: 59.8634 |
21-12-24 11:27:31.224 - INFO: TEST: PSNR_S: 44.2622 | PSNR_C: 35.5947 |
21-12-24 11:27:31.225 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:27:31.225 - INFO: Train epoch 1680: Loss: 866.8122 | r_Loss: 76.9340 | g_Loss: 423.7489 | l_Loss: 58.3931 |
21-12-24 11:28:44.604 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:28:44.605 - INFO: Train epoch 1681: Loss: 826.8867 | r_Loss: 71.9132 | g_Loss: 408.8606 | l_Loss: 58.4598 |
21-12-24 11:29:58.321 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:29:58.322 - INFO: Train epoch 1682: Loss: 748.3975 | r_Loss: 65.2267 | g_Loss: 381.7432 | l_Loss: 40.5209 |
21-12-24 11:31:11.977 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:31:11.978 - INFO: Train epoch 1683: Loss: 813.6189 | r_Loss: 71.7737 | g_Loss: 397.9074 | l_Loss: 56.8428 |
21-12-24 11:32:25.389 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:32:25.390 - INFO: Train epoch 1684: Loss: 825.0517 | r_Loss: 73.7446 | g_Loss: 408.7961 | l_Loss: 47.5326 |
21-12-24 11:33:38.962 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:33:38.963 - INFO: Train epoch 1685: Loss: 775.1033 | r_Loss: 69.8173 | g_Loss: 383.5295 | l_Loss: 42.4871 |
21-12-24 11:34:52.218 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:34:52.219 - INFO: Train epoch 1686: Loss: 806.1944 | r_Loss: 71.8815 | g_Loss: 385.0441 | l_Loss: 61.7429 |
21-12-24 11:36:05.957 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:36:05.958 - INFO: Train epoch 1687: Loss: 778.4672 | r_Loss: 71.2008 | g_Loss: 380.8677 | l_Loss: 41.5954 |
21-12-24 11:37:19.420 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:37:19.421 - INFO: Train epoch 1688: Loss: 750.4331 | r_Loss: 66.9524 | g_Loss: 376.2731 | l_Loss: 39.3979 |
21-12-24 11:38:33.515 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:38:33.516 - INFO: Train epoch 1689: Loss: 781.7109 | r_Loss: 72.6099 | g_Loss: 369.7764 | l_Loss: 48.8849 |
21-12-24 11:39:46.912 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:39:46.913 - INFO: Train epoch 1690: Loss: 772.5375 | r_Loss: 70.7663 | g_Loss: 372.8491 | l_Loss: 45.8568 |
21-12-24 11:41:00.649 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:41:00.650 - INFO: Train epoch 1691: Loss: 763.5094 | r_Loss: 70.2843 | g_Loss: 368.8476 | l_Loss: 43.2403 |
21-12-24 11:42:14.152 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:42:14.153 - INFO: Train epoch 1692: Loss: 769.3426 | r_Loss: 70.8585 | g_Loss: 371.8062 | l_Loss: 43.2441 |
21-12-24 11:43:27.780 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:43:27.781 - INFO: Train epoch 1693: Loss: 781.7902 | r_Loss: 74.1837 | g_Loss: 370.6968 | l_Loss: 40.1747 |
21-12-24 11:44:41.724 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:44:41.725 - INFO: Train epoch 1694: Loss: 789.3826 | r_Loss: 75.9377 | g_Loss: 359.0156 | l_Loss: 50.6786 |
21-12-24 11:45:55.547 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:45:55.548 - INFO: Train epoch 1695: Loss: 749.3144 | r_Loss: 69.1799 | g_Loss: 354.0228 | l_Loss: 49.3919 |
21-12-24 11:47:08.998 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:47:08.999 - INFO: Train epoch 1696: Loss: 753.8007 | r_Loss: 71.2242 | g_Loss: 344.8389 | l_Loss: 52.8406 |
21-12-24 11:48:22.610 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:48:22.611 - INFO: Train epoch 1697: Loss: 743.3431 | r_Loss: 70.1488 | g_Loss: 347.5078 | l_Loss: 45.0913 |
21-12-24 11:49:36.235 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:49:36.236 - INFO: Train epoch 1698: Loss: 719.0849 | r_Loss: 66.2507 | g_Loss: 346.8729 | l_Loss: 40.9583 |
21-12-24 11:50:49.769 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:50:49.770 - INFO: Train epoch 1699: Loss: 767.6972 | r_Loss: 74.3354 | g_Loss: 353.6101 | l_Loss: 42.4099 |
21-12-24 11:52:03.341 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:52:03.342 - INFO: Train epoch 1700: Loss: 719.5966 | r_Loss: 68.5536 | g_Loss: 336.9405 | l_Loss: 39.8880 |
21-12-24 11:53:17.154 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:53:17.156 - INFO: Train epoch 1701: Loss: 719.8267 | r_Loss: 68.8395 | g_Loss: 333.4468 | l_Loss: 42.1825 |
21-12-24 11:54:30.797 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:54:30.799 - INFO: Train epoch 1702: Loss: 743.6404 | r_Loss: 72.3179 | g_Loss: 337.3837 | l_Loss: 44.6671 |
21-12-24 11:55:44.226 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:55:44.227 - INFO: Train epoch 1703: Loss: 769.4671 | r_Loss: 76.2474 | g_Loss: 347.0613 | l_Loss: 41.1687 |
21-12-24 11:56:57.361 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:56:57.362 - INFO: Train epoch 1704: Loss: 747.3007 | r_Loss: 73.5217 | g_Loss: 341.2174 | l_Loss: 38.4746 |
21-12-24 11:58:10.751 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:58:10.753 - INFO: Train epoch 1705: Loss: 759.1546 | r_Loss: 74.5607 | g_Loss: 339.2166 | l_Loss: 47.1346 |
21-12-24 11:59:24.179 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 11:59:24.180 - INFO: Train epoch 1706: Loss: 710.6230 | r_Loss: 70.4662 | g_Loss: 319.5313 | l_Loss: 38.7606 |
21-12-24 12:00:37.830 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:00:37.831 - INFO: Train epoch 1707: Loss: 700.4298 | r_Loss: 70.4592 | g_Loss: 308.3489 | l_Loss: 39.7849 |
21-12-24 12:01:51.071 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:01:51.072 - INFO: Train epoch 1708: Loss: 666.6130 | r_Loss: 63.6407 | g_Loss: 311.5169 | l_Loss: 36.8924 |
21-12-24 12:03:04.688 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:03:04.689 - INFO: Train epoch 1709: Loss: 710.7510 | r_Loss: 72.6898 | g_Loss: 312.6813 | l_Loss: 34.6208 |
21-12-24 12:04:54.376 - INFO: TEST: PSNR_S: 44.5309 | PSNR_C: 36.7876 |
21-12-24 12:04:54.378 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:04:54.379 - INFO: Train epoch 1710: Loss: 748.2437 | r_Loss: 73.5660 | g_Loss: 332.4286 | l_Loss: 47.9853 |
21-12-24 12:06:08.112 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:06:08.113 - INFO: Train epoch 1711: Loss: 730.4399 | r_Loss: 73.9148 | g_Loss: 319.4427 | l_Loss: 41.4229 |
21-12-24 12:07:21.838 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:07:21.839 - INFO: Train epoch 1712: Loss: 706.9692 | r_Loss: 70.8856 | g_Loss: 314.8030 | l_Loss: 37.7382 |
21-12-24 12:08:35.159 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:08:35.160 - INFO: Train epoch 1713: Loss: 736.0849 | r_Loss: 75.6680 | g_Loss: 314.9232 | l_Loss: 42.8219 |
21-12-24 12:09:48.651 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:09:48.652 - INFO: Train epoch 1714: Loss: 711.8496 | r_Loss: 71.4041 | g_Loss: 314.3108 | l_Loss: 40.5185 |
21-12-24 12:11:02.199 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:11:02.200 - INFO: Train epoch 1715: Loss: 753.2928 | r_Loss: 77.4933 | g_Loss: 326.8601 | l_Loss: 38.9664 |
21-12-24 12:12:15.538 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:12:15.539 - INFO: Train epoch 1716: Loss: 734.1763 | r_Loss: 75.5589 | g_Loss: 316.3935 | l_Loss: 39.9884 |
21-12-24 12:13:29.243 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:13:29.245 - INFO: Train epoch 1717: Loss: 714.3771 | r_Loss: 73.4440 | g_Loss: 304.0243 | l_Loss: 43.1328 |
21-12-24 12:14:42.505 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:14:42.505 - INFO: Train epoch 1718: Loss: 714.9687 | r_Loss: 72.2714 | g_Loss: 310.6197 | l_Loss: 42.9923 |
21-12-24 12:15:55.500 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:15:55.501 - INFO: Train epoch 1719: Loss: 704.3244 | r_Loss: 71.2147 | g_Loss: 307.3500 | l_Loss: 40.9007 |
21-12-24 12:17:09.363 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:17:09.365 - INFO: Train epoch 1720: Loss: 691.3586 | r_Loss: 71.5271 | g_Loss: 294.6778 | l_Loss: 39.0454 |
21-12-24 12:18:22.588 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:18:22.589 - INFO: Train epoch 1721: Loss: 664.2191 | r_Loss: 67.4086 | g_Loss: 293.2737 | l_Loss: 33.9023 |
21-12-24 12:19:36.656 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:19:36.657 - INFO: Train epoch 1722: Loss: 657.1564 | r_Loss: 65.6682 | g_Loss: 288.2780 | l_Loss: 40.5373 |
21-12-24 12:20:49.840 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:20:49.841 - INFO: Train epoch 1723: Loss: 711.8719 | r_Loss: 75.0266 | g_Loss: 295.2638 | l_Loss: 41.4753 |
21-12-24 12:22:03.414 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:22:03.415 - INFO: Train epoch 1724: Loss: 705.0644 | r_Loss: 70.7484 | g_Loss: 311.1821 | l_Loss: 40.1403 |
21-12-24 12:23:16.876 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:23:16.877 - INFO: Train epoch 1725: Loss: 720.0282 | r_Loss: 74.0241 | g_Loss: 310.4715 | l_Loss: 39.4361 |
21-12-24 12:24:30.434 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:24:30.435 - INFO: Train epoch 1726: Loss: 732.0975 | r_Loss: 77.5719 | g_Loss: 305.1166 | l_Loss: 39.1216 |
21-12-24 12:25:43.854 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:25:43.855 - INFO: Train epoch 1727: Loss: 714.6404 | r_Loss: 74.9158 | g_Loss: 303.4786 | l_Loss: 36.5827 |
21-12-24 12:26:57.341 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:26:57.343 - INFO: Train epoch 1728: Loss: 710.4739 | r_Loss: 75.9522 | g_Loss: 300.6479 | l_Loss: 30.0649 |
21-12-24 12:28:11.148 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:28:11.149 - INFO: Train epoch 1729: Loss: 698.6187 | r_Loss: 71.9532 | g_Loss: 300.3817 | l_Loss: 38.4709 |
21-12-24 12:29:24.938 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:29:24.939 - INFO: Train epoch 1730: Loss: 666.1466 | r_Loss: 68.0425 | g_Loss: 290.4145 | l_Loss: 35.5197 |
21-12-24 12:30:38.439 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:30:38.440 - INFO: Train epoch 1731: Loss: 738.4115 | r_Loss: 79.9069 | g_Loss: 304.8984 | l_Loss: 33.9787 |
21-12-24 12:31:52.032 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:31:52.033 - INFO: Train epoch 1732: Loss: 715.5669 | r_Loss: 75.0963 | g_Loss: 303.7472 | l_Loss: 36.3380 |
21-12-24 12:33:05.460 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:33:05.461 - INFO: Train epoch 1733: Loss: 716.5168 | r_Loss: 74.1250 | g_Loss: 307.2458 | l_Loss: 38.6461 |
21-12-24 12:34:19.710 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:34:19.711 - INFO: Train epoch 1734: Loss: 755.9281 | r_Loss: 80.3550 | g_Loss: 316.5978 | l_Loss: 37.5553 |
21-12-24 12:35:33.259 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:35:33.260 - INFO: Train epoch 1735: Loss: 698.3875 | r_Loss: 76.0861 | g_Loss: 283.9956 | l_Loss: 33.9616 |
21-12-24 12:36:46.572 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:36:46.573 - INFO: Train epoch 1736: Loss: 695.3843 | r_Loss: 73.2075 | g_Loss: 298.3507 | l_Loss: 30.9962 |
21-12-24 12:38:00.649 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:38:00.650 - INFO: Train epoch 1737: Loss: 722.9559 | r_Loss: 76.7901 | g_Loss: 305.3573 | l_Loss: 33.6481 |
21-12-24 12:39:14.767 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:39:14.768 - INFO: Train epoch 1738: Loss: 750.9706 | r_Loss: 78.2312 | g_Loss: 315.2320 | l_Loss: 44.5824 |
21-12-24 12:40:28.346 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:40:28.347 - INFO: Train epoch 1739: Loss: 736.5281 | r_Loss: 78.5347 | g_Loss: 307.1150 | l_Loss: 36.7398 |
21-12-24 12:42:17.876 - INFO: TEST: PSNR_S: 44.5592 | PSNR_C: 37.1728 |
21-12-24 12:42:17.877 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:42:17.878 - INFO: Train epoch 1740: Loss: 688.5199 | r_Loss: 69.7688 | g_Loss: 304.6378 | l_Loss: 35.0379 |
21-12-24 12:43:31.222 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:43:31.223 - INFO: Train epoch 1741: Loss: 665.3084 | r_Loss: 68.0839 | g_Loss: 292.8388 | l_Loss: 32.0500 |
21-12-24 12:44:44.551 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:44:44.552 - INFO: Train epoch 1742: Loss: 735.3801 | r_Loss: 77.4265 | g_Loss: 310.8973 | l_Loss: 37.3504 |
21-12-24 12:45:57.866 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:45:57.867 - INFO: Train epoch 1743: Loss: 695.8742 | r_Loss: 70.2452 | g_Loss: 295.6354 | l_Loss: 49.0126 |
21-12-24 12:47:11.589 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:47:11.591 - INFO: Train epoch 1744: Loss: 691.0718 | r_Loss: 73.7173 | g_Loss: 289.4705 | l_Loss: 33.0146 |
21-12-24 12:48:25.438 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:48:25.439 - INFO: Train epoch 1745: Loss: 733.7554 | r_Loss: 78.5749 | g_Loss: 298.6014 | l_Loss: 42.2794 |
21-12-24 12:49:39.087 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:49:39.088 - INFO: Train epoch 1746: Loss: 683.7668 | r_Loss: 70.8411 | g_Loss: 295.1287 | l_Loss: 34.4324 |
21-12-24 12:50:52.392 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:50:52.394 - INFO: Train epoch 1747: Loss: 659.3999 | r_Loss: 66.7903 | g_Loss: 290.0912 | l_Loss: 35.3573 |
21-12-24 12:52:05.981 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:52:05.982 - INFO: Train epoch 1748: Loss: 681.8155 | r_Loss: 70.6287 | g_Loss: 291.1384 | l_Loss: 37.5336 |
21-12-24 12:53:19.309 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:53:19.310 - INFO: Train epoch 1749: Loss: 682.0511 | r_Loss: 69.2791 | g_Loss: 295.0000 | l_Loss: 40.6557 |
21-12-24 12:54:32.975 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:54:32.976 - INFO: Train epoch 1750: Loss: 694.0233 | r_Loss: 74.4357 | g_Loss: 290.2026 | l_Loss: 31.6420 |
21-12-24 12:55:46.359 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:55:46.360 - INFO: Train epoch 1751: Loss: 679.9877 | r_Loss: 71.4661 | g_Loss: 285.6241 | l_Loss: 37.0331 |
21-12-24 12:57:00.474 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:57:00.475 - INFO: Train epoch 1752: Loss: 658.5425 | r_Loss: 66.5151 | g_Loss: 291.9117 | l_Loss: 34.0555 |
21-12-24 12:58:13.947 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:58:13.948 - INFO: Train epoch 1753: Loss: 667.6711 | r_Loss: 68.3255 | g_Loss: 288.2551 | l_Loss: 37.7888 |
21-12-24 12:59:27.901 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 12:59:27.902 - INFO: Train epoch 1754: Loss: 698.7648 | r_Loss: 73.4291 | g_Loss: 295.5948 | l_Loss: 36.0244 |
21-12-24 13:00:41.804 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:00:41.805 - INFO: Train epoch 1755: Loss: 696.7584 | r_Loss: 73.2533 | g_Loss: 293.8157 | l_Loss: 36.6764 |
21-12-24 13:01:55.604 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:01:55.606 - INFO: Train epoch 1756: Loss: 725.7841 | r_Loss: 77.8175 | g_Loss: 296.2575 | l_Loss: 40.4394 |
21-12-24 13:03:09.485 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:03:09.486 - INFO: Train epoch 1757: Loss: 884.0669 | r_Loss: 104.0200 | g_Loss: 316.1741 | l_Loss: 47.7926 |
21-12-24 13:04:22.981 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:04:22.982 - INFO: Train epoch 1758: Loss: 694.3711 | r_Loss: 70.8976 | g_Loss: 298.5111 | l_Loss: 41.3721 |
21-12-24 13:05:36.913 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:05:36.914 - INFO: Train epoch 1759: Loss: 695.9676 | r_Loss: 69.2147 | g_Loss: 307.5922 | l_Loss: 42.3021 |
21-12-24 13:06:50.393 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:06:50.394 - INFO: Train epoch 1760: Loss: 712.6273 | r_Loss: 73.1807 | g_Loss: 310.1358 | l_Loss: 36.5878 |
21-12-24 13:08:04.164 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:08:04.165 - INFO: Train epoch 1761: Loss: 697.5223 | r_Loss: 73.8286 | g_Loss: 295.4237 | l_Loss: 32.9554 |
21-12-24 13:09:17.993 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:09:17.994 - INFO: Train epoch 1762: Loss: 674.5606 | r_Loss: 70.1752 | g_Loss: 289.6457 | l_Loss: 34.0391 |
21-12-24 13:10:31.925 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:10:31.926 - INFO: Train epoch 1763: Loss: 711.1937 | r_Loss: 76.1995 | g_Loss: 296.9209 | l_Loss: 33.2753 |
21-12-24 13:11:45.527 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:11:45.528 - INFO: Train epoch 1764: Loss: 688.7824 | r_Loss: 72.6844 | g_Loss: 290.7455 | l_Loss: 34.6148 |
21-12-24 13:12:59.072 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:12:59.073 - INFO: Train epoch 1765: Loss: 742.8691 | r_Loss: 78.6104 | g_Loss: 309.5635 | l_Loss: 40.2537 |
21-12-24 13:14:12.598 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:14:12.599 - INFO: Train epoch 1766: Loss: 712.0398 | r_Loss: 76.8489 | g_Loss: 291.0487 | l_Loss: 36.7467 |
21-12-24 13:15:26.393 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:15:26.394 - INFO: Train epoch 1767: Loss: 689.3794 | r_Loss: 69.6028 | g_Loss: 287.1737 | l_Loss: 54.1916 |
21-12-24 13:16:39.835 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:16:39.836 - INFO: Train epoch 1768: Loss: 691.9738 | r_Loss: 72.2257 | g_Loss: 287.1889 | l_Loss: 43.6561 |
21-12-24 13:17:52.922 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:17:52.923 - INFO: Train epoch 1769: Loss: 681.9947 | r_Loss: 73.5295 | g_Loss: 285.3682 | l_Loss: 28.9791 |
21-12-24 13:19:42.987 - INFO: TEST: PSNR_S: 44.6176 | PSNR_C: 37.4177 |
21-12-24 13:19:42.989 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:19:42.990 - INFO: Train epoch 1770: Loss: 730.8154 | r_Loss: 77.5737 | g_Loss: 298.4191 | l_Loss: 44.5278 |
21-12-24 13:20:56.877 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:20:56.878 - INFO: Train epoch 1771: Loss: 692.0084 | r_Loss: 75.0203 | g_Loss: 285.2535 | l_Loss: 31.6536 |
21-12-24 13:22:10.851 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:22:10.852 - INFO: Train epoch 1772: Loss: 725.6709 | r_Loss: 77.1570 | g_Loss: 303.5220 | l_Loss: 36.3639 |
21-12-24 13:23:24.636 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:23:24.637 - INFO: Train epoch 1773: Loss: 764.1123 | r_Loss: 76.6675 | g_Loss: 340.7704 | l_Loss: 40.0043 |
21-12-24 13:24:38.281 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:24:38.282 - INFO: Train epoch 1774: Loss: 659.8171 | r_Loss: 67.0161 | g_Loss: 289.4995 | l_Loss: 35.2372 |
21-12-24 13:25:51.553 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:25:51.554 - INFO: Train epoch 1775: Loss: 676.6983 | r_Loss: 69.6707 | g_Loss: 286.0884 | l_Loss: 42.2561 |
21-12-24 13:27:04.840 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:27:04.841 - INFO: Train epoch 1776: Loss: 658.6153 | r_Loss: 68.2647 | g_Loss: 281.1455 | l_Loss: 36.1463 |
21-12-24 13:28:18.394 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:28:18.395 - INFO: Train epoch 1777: Loss: 617.6532 | r_Loss: 61.3925 | g_Loss: 272.6765 | l_Loss: 38.0141 |
21-12-24 13:29:31.964 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:29:31.965 - INFO: Train epoch 1778: Loss: 699.6648 | r_Loss: 73.1363 | g_Loss: 296.0462 | l_Loss: 37.9370 |
21-12-24 13:30:45.685 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:30:45.686 - INFO: Train epoch 1779: Loss: 645.1173 | r_Loss: 68.0590 | g_Loss: 274.8078 | l_Loss: 30.0147 |
21-12-24 13:31:59.456 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:31:59.457 - INFO: Train epoch 1780: Loss: 736.3907 | r_Loss: 85.5363 | g_Loss: 275.2029 | l_Loss: 33.5065 |
21-12-24 13:33:12.936 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:33:12.937 - INFO: Train epoch 1781: Loss: 27199.5270 | r_Loss: 4951.1001 | g_Loss: 2165.3570 | l_Loss: 278.6699 |
21-12-24 13:34:26.689 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:34:26.690 - INFO: Train epoch 1782: Loss: 1888.3177 | r_Loss: 178.5296 | g_Loss: 882.1664 | l_Loss: 113.5032 |
21-12-24 13:35:40.244 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:35:40.245 - INFO: Train epoch 1783: Loss: 1413.8933 | r_Loss: 129.4908 | g_Loss: 688.0817 | l_Loss: 78.3574 |
21-12-24 13:36:53.641 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:36:53.642 - INFO: Train epoch 1784: Loss: 1389.9506 | r_Loss: 128.6146 | g_Loss: 663.9321 | l_Loss: 82.9453 |
21-12-24 13:38:07.355 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:38:07.357 - INFO: Train epoch 1785: Loss: 1283.1526 | r_Loss: 118.0200 | g_Loss: 616.7083 | l_Loss: 76.3441 |
21-12-24 13:39:21.441 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:39:21.442 - INFO: Train epoch 1786: Loss: 1148.8036 | r_Loss: 102.8540 | g_Loss: 566.2212 | l_Loss: 68.3123 |
21-12-24 13:40:35.080 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:40:35.081 - INFO: Train epoch 1787: Loss: 1195.0455 | r_Loss: 106.0786 | g_Loss: 600.8800 | l_Loss: 63.7727 |
21-12-24 13:41:48.641 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:41:48.642 - INFO: Train epoch 1788: Loss: 1090.6651 | r_Loss: 95.9577 | g_Loss: 544.1177 | l_Loss: 66.7587 |
21-12-24 13:43:01.827 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:43:01.828 - INFO: Train epoch 1789: Loss: 1026.4087 | r_Loss: 94.0206 | g_Loss: 500.7070 | l_Loss: 55.5987 |
21-12-24 13:44:14.952 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:44:14.953 - INFO: Train epoch 1790: Loss: 997.3083 | r_Loss: 90.9614 | g_Loss: 484.6996 | l_Loss: 57.8017 |
21-12-24 13:45:28.664 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:45:28.665 - INFO: Train epoch 1791: Loss: 975.2179 | r_Loss: 86.8784 | g_Loss: 488.3203 | l_Loss: 52.5054 |
21-12-24 13:46:42.290 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:46:42.291 - INFO: Train epoch 1792: Loss: 962.6513 | r_Loss: 84.0564 | g_Loss: 492.5444 | l_Loss: 49.8250 |
21-12-24 13:47:55.914 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:47:55.915 - INFO: Train epoch 1793: Loss: 970.4572 | r_Loss: 88.2591 | g_Loss: 470.4195 | l_Loss: 58.7422 |
21-12-24 13:49:09.677 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:49:09.678 - INFO: Train epoch 1794: Loss: 931.1605 | r_Loss: 83.7026 | g_Loss: 458.2896 | l_Loss: 54.3581 |
21-12-24 13:50:23.000 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:50:23.001 - INFO: Train epoch 1795: Loss: 954.0510 | r_Loss: 86.8411 | g_Loss: 458.6640 | l_Loss: 61.1813 |
21-12-24 13:51:36.615 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:51:36.616 - INFO: Train epoch 1796: Loss: 900.0978 | r_Loss: 81.6517 | g_Loss: 435.6577 | l_Loss: 56.1816 |
21-12-24 13:52:50.309 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:52:50.310 - INFO: Train epoch 1797: Loss: 891.5913 | r_Loss: 81.1589 | g_Loss: 432.3371 | l_Loss: 53.4599 |
21-12-24 13:54:04.073 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:54:04.074 - INFO: Train epoch 1798: Loss: 851.1819 | r_Loss: 74.6060 | g_Loss: 428.3256 | l_Loss: 49.8262 |
21-12-24 13:55:17.724 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:55:17.725 - INFO: Train epoch 1799: Loss: 844.9514 | r_Loss: 75.8346 | g_Loss: 416.5922 | l_Loss: 49.1864 |
21-12-24 13:57:07.039 - INFO: TEST: PSNR_S: 43.9725 | PSNR_C: 35.5244 |
21-12-24 13:57:07.040 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:57:07.040 - INFO: Train epoch 1800: Loss: 836.3367 | r_Loss: 77.4354 | g_Loss: 405.5109 | l_Loss: 43.6486 |
21-12-24 13:58:20.329 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:58:20.331 - INFO: Train epoch 1801: Loss: 805.8711 | r_Loss: 70.8932 | g_Loss: 402.9265 | l_Loss: 48.4787 |
21-12-24 13:59:34.663 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 13:59:34.664 - INFO: Train epoch 1802: Loss: 820.0081 | r_Loss: 72.4727 | g_Loss: 404.8849 | l_Loss: 52.7595 |
21-12-24 14:00:47.701 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:00:47.702 - INFO: Train epoch 1803: Loss: 777.7857 | r_Loss: 69.2405 | g_Loss: 379.6074 | l_Loss: 51.9759 |
21-12-24 14:02:01.086 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:02:01.087 - INFO: Train epoch 1804: Loss: 771.2026 | r_Loss: 67.6609 | g_Loss: 377.1543 | l_Loss: 55.7439 |
21-12-24 14:03:14.531 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:03:14.532 - INFO: Train epoch 1805: Loss: 808.4646 | r_Loss: 74.1017 | g_Loss: 382.9217 | l_Loss: 55.0345 |
21-12-24 14:04:28.040 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:04:28.041 - INFO: Train epoch 1806: Loss: 752.9266 | r_Loss: 66.2621 | g_Loss: 373.4613 | l_Loss: 48.1547 |
21-12-24 14:05:41.439 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:05:41.440 - INFO: Train epoch 1807: Loss: 771.5367 | r_Loss: 70.2679 | g_Loss: 370.3910 | l_Loss: 49.8061 |
21-12-24 14:06:54.644 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:06:54.645 - INFO: Train epoch 1808: Loss: 786.3187 | r_Loss: 71.5571 | g_Loss: 383.4627 | l_Loss: 45.0707 |
21-12-24 14:08:08.253 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:08:08.254 - INFO: Train epoch 1809: Loss: 804.5923 | r_Loss: 73.8412 | g_Loss: 387.4913 | l_Loss: 47.8951 |
21-12-24 14:09:21.810 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:09:21.811 - INFO: Train epoch 1810: Loss: 798.1764 | r_Loss: 75.3360 | g_Loss: 375.2541 | l_Loss: 46.2421 |
21-12-24 14:10:35.363 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:10:35.364 - INFO: Train epoch 1811: Loss: 775.0778 | r_Loss: 69.6894 | g_Loss: 371.9672 | l_Loss: 54.6635 |
21-12-24 14:11:49.217 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:11:49.218 - INFO: Train epoch 1812: Loss: 752.2019 | r_Loss: 70.8881 | g_Loss: 353.6663 | l_Loss: 44.0949 |
21-12-24 14:13:02.676 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:13:02.677 - INFO: Train epoch 1813: Loss: 748.0589 | r_Loss: 67.3545 | g_Loss: 366.3562 | l_Loss: 44.9300 |
21-12-24 14:14:16.495 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:14:16.496 - INFO: Train epoch 1814: Loss: 717.4607 | r_Loss: 63.5924 | g_Loss: 359.1337 | l_Loss: 40.3648 |
21-12-24 14:15:30.137 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:15:30.138 - INFO: Train epoch 1815: Loss: 706.5849 | r_Loss: 63.7316 | g_Loss: 342.7885 | l_Loss: 45.1386 |
21-12-24 14:16:43.558 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:16:43.559 - INFO: Train epoch 1816: Loss: 714.4677 | r_Loss: 63.0740 | g_Loss: 352.6346 | l_Loss: 46.4630 |
21-12-24 14:17:56.758 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:17:56.759 - INFO: Train epoch 1817: Loss: 755.3819 | r_Loss: 70.8035 | g_Loss: 356.0815 | l_Loss: 45.2829 |
21-12-24 14:19:10.791 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:19:10.792 - INFO: Train epoch 1818: Loss: 718.0255 | r_Loss: 65.1656 | g_Loss: 343.7945 | l_Loss: 48.4029 |
21-12-24 14:20:24.294 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:20:24.296 - INFO: Train epoch 1819: Loss: 737.0700 | r_Loss: 68.8999 | g_Loss: 352.0209 | l_Loss: 40.5498 |
21-12-24 14:21:38.014 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:21:38.015 - INFO: Train epoch 1820: Loss: 742.7412 | r_Loss: 70.2665 | g_Loss: 347.9807 | l_Loss: 43.4283 |
21-12-24 14:22:51.623 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:22:51.624 - INFO: Train epoch 1821: Loss: 679.8593 | r_Loss: 60.2650 | g_Loss: 336.2007 | l_Loss: 42.3334 |
21-12-24 14:24:04.937 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:24:04.938 - INFO: Train epoch 1822: Loss: 720.1367 | r_Loss: 68.2421 | g_Loss: 342.1020 | l_Loss: 36.8243 |
21-12-24 14:25:18.388 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:25:18.389 - INFO: Train epoch 1823: Loss: 693.1084 | r_Loss: 63.7521 | g_Loss: 332.6333 | l_Loss: 41.7147 |
21-12-24 14:26:32.182 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:26:32.182 - INFO: Train epoch 1824: Loss: 686.0146 | r_Loss: 64.0352 | g_Loss: 329.2265 | l_Loss: 36.6120 |
21-12-24 14:27:46.192 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:27:46.193 - INFO: Train epoch 1825: Loss: 688.8245 | r_Loss: 62.8375 | g_Loss: 331.6962 | l_Loss: 42.9406 |
21-12-24 14:28:59.633 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:28:59.634 - INFO: Train epoch 1826: Loss: 711.7002 | r_Loss: 70.4528 | g_Loss: 324.2055 | l_Loss: 35.2307 |
21-12-24 14:30:13.167 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:30:13.168 - INFO: Train epoch 1827: Loss: 673.0654 | r_Loss: 60.8528 | g_Loss: 332.2980 | l_Loss: 36.5034 |
21-12-24 14:31:26.771 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:31:26.772 - INFO: Train epoch 1828: Loss: 672.9244 | r_Loss: 64.6316 | g_Loss: 310.8706 | l_Loss: 38.8958 |
21-12-24 14:32:40.221 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:32:40.222 - INFO: Train epoch 1829: Loss: 682.3891 | r_Loss: 64.7451 | g_Loss: 326.1835 | l_Loss: 32.4800 |
21-12-24 14:34:29.696 - INFO: TEST: PSNR_S: 44.7531 | PSNR_C: 36.7158 |
21-12-24 14:34:29.697 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:34:29.698 - INFO: Train epoch 1830: Loss: 713.1444 | r_Loss: 68.6513 | g_Loss: 331.5930 | l_Loss: 38.2950 |
21-12-24 14:35:43.375 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:35:43.376 - INFO: Train epoch 1831: Loss: 674.9687 | r_Loss: 64.8393 | g_Loss: 312.9078 | l_Loss: 37.8644 |
21-12-24 14:36:56.907 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:36:56.908 - INFO: Train epoch 1832: Loss: 729.9047 | r_Loss: 67.8977 | g_Loss: 332.4869 | l_Loss: 57.9293 |
21-12-24 14:38:10.569 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:38:10.570 - INFO: Train epoch 1833: Loss: 673.7224 | r_Loss: 62.5087 | g_Loss: 314.4841 | l_Loss: 46.6946 |
21-12-24 14:39:24.160 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:39:24.161 - INFO: Train epoch 1834: Loss: 702.8203 | r_Loss: 68.0163 | g_Loss: 322.8687 | l_Loss: 39.8700 |
21-12-24 14:40:37.556 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:40:37.557 - INFO: Train epoch 1835: Loss: 681.4264 | r_Loss: 64.5655 | g_Loss: 319.8888 | l_Loss: 38.7100 |
21-12-24 14:41:50.818 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:41:50.819 - INFO: Train epoch 1836: Loss: 704.8515 | r_Loss: 67.5858 | g_Loss: 328.6246 | l_Loss: 38.2980 |
21-12-24 14:43:04.405 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:43:04.406 - INFO: Train epoch 1837: Loss: 688.5916 | r_Loss: 65.6036 | g_Loss: 315.1386 | l_Loss: 45.4349 |
21-12-24 14:44:17.843 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:44:17.844 - INFO: Train epoch 1838: Loss: 666.2931 | r_Loss: 64.6960 | g_Loss: 300.1363 | l_Loss: 42.6768 |
21-12-24 14:45:31.440 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:45:31.441 - INFO: Train epoch 1839: Loss: 684.7102 | r_Loss: 66.1549 | g_Loss: 314.6221 | l_Loss: 39.3138 |
21-12-24 14:46:45.028 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:46:45.029 - INFO: Train epoch 1840: Loss: 682.8615 | r_Loss: 66.8284 | g_Loss: 307.8528 | l_Loss: 40.8665 |
21-12-24 14:47:58.424 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:47:58.425 - INFO: Train epoch 1841: Loss: 674.7829 | r_Loss: 64.2279 | g_Loss: 313.0539 | l_Loss: 40.5897 |
21-12-24 14:49:12.179 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:49:12.180 - INFO: Train epoch 1842: Loss: 705.4461 | r_Loss: 69.1956 | g_Loss: 318.0932 | l_Loss: 41.3752 |
21-12-24 14:50:26.334 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:50:26.335 - INFO: Train epoch 1843: Loss: 658.2322 | r_Loss: 64.3833 | g_Loss: 303.1554 | l_Loss: 33.1605 |
21-12-24 14:51:39.936 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:51:39.937 - INFO: Train epoch 1844: Loss: 709.8064 | r_Loss: 74.3519 | g_Loss: 302.2732 | l_Loss: 35.7735 |
21-12-24 14:52:53.280 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:52:53.282 - INFO: Train epoch 1845: Loss: 664.4267 | r_Loss: 63.9445 | g_Loss: 302.0227 | l_Loss: 42.6813 |
21-12-24 14:54:07.087 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:54:07.088 - INFO: Train epoch 1846: Loss: 682.1295 | r_Loss: 66.1928 | g_Loss: 307.7126 | l_Loss: 43.4530 |
21-12-24 14:55:20.602 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:55:20.603 - INFO: Train epoch 1847: Loss: 679.1990 | r_Loss: 69.0970 | g_Loss: 295.2008 | l_Loss: 38.5129 |
21-12-24 14:56:34.185 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:56:34.186 - INFO: Train epoch 1848: Loss: 670.6749 | r_Loss: 66.4798 | g_Loss: 302.5884 | l_Loss: 35.6876 |
21-12-24 14:57:48.070 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:57:48.072 - INFO: Train epoch 1849: Loss: 698.5283 | r_Loss: 69.9021 | g_Loss: 301.7318 | l_Loss: 47.2860 |
21-12-24 14:59:01.995 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 14:59:01.996 - INFO: Train epoch 1850: Loss: 672.0781 | r_Loss: 66.1542 | g_Loss: 301.5685 | l_Loss: 39.7388 |
21-12-24 15:00:15.202 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:00:15.203 - INFO: Train epoch 1851: Loss: 682.8503 | r_Loss: 68.8609 | g_Loss: 303.9424 | l_Loss: 34.6033 |
21-12-24 15:01:28.748 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:01:28.748 - INFO: Train epoch 1852: Loss: 681.8842 | r_Loss: 68.4287 | g_Loss: 303.4624 | l_Loss: 36.2782 |
21-12-24 15:02:42.278 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:02:42.279 - INFO: Train epoch 1853: Loss: 657.8965 | r_Loss: 66.5098 | g_Loss: 291.5939 | l_Loss: 33.7534 |
21-12-24 15:03:56.072 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:03:56.073 - INFO: Train epoch 1854: Loss: 633.9719 | r_Loss: 60.7079 | g_Loss: 295.9366 | l_Loss: 34.4956 |
21-12-24 15:05:10.000 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:05:10.001 - INFO: Train epoch 1855: Loss: 625.3379 | r_Loss: 61.9797 | g_Loss: 279.1265 | l_Loss: 36.3129 |
21-12-24 15:06:23.193 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:06:23.195 - INFO: Train epoch 1856: Loss: 666.8640 | r_Loss: 70.0536 | g_Loss: 282.9565 | l_Loss: 33.6396 |
21-12-24 15:07:36.858 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:07:36.859 - INFO: Train epoch 1857: Loss: 725.2017 | r_Loss: 75.1786 | g_Loss: 306.9707 | l_Loss: 42.3381 |
21-12-24 15:08:50.631 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:08:50.632 - INFO: Train epoch 1858: Loss: 630.8458 | r_Loss: 63.8043 | g_Loss: 276.4533 | l_Loss: 35.3708 |
21-12-24 15:10:04.172 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:10:04.173 - INFO: Train epoch 1859: Loss: 653.6839 | r_Loss: 66.5865 | g_Loss: 285.3445 | l_Loss: 35.4069 |
21-12-24 15:11:54.014 - INFO: TEST: PSNR_S: 45.0200 | PSNR_C: 37.3160 |
21-12-24 15:11:54.015 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:11:54.015 - INFO: Train epoch 1860: Loss: 675.3478 | r_Loss: 69.4280 | g_Loss: 286.3882 | l_Loss: 41.8196 |
21-12-24 15:13:07.377 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:13:07.378 - INFO: Train epoch 1861: Loss: 735.9340 | r_Loss: 73.5292 | g_Loss: 324.1120 | l_Loss: 44.1758 |
21-12-24 15:14:21.002 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:14:21.003 - INFO: Train epoch 1862: Loss: 644.5409 | r_Loss: 65.6331 | g_Loss: 281.6457 | l_Loss: 34.7298 |
21-12-24 15:15:34.572 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:15:34.573 - INFO: Train epoch 1863: Loss: 678.3646 | r_Loss: 69.0200 | g_Loss: 294.2955 | l_Loss: 38.9690 |
21-12-24 15:16:47.936 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:16:47.937 - INFO: Train epoch 1864: Loss: 643.1459 | r_Loss: 65.5357 | g_Loss: 277.2762 | l_Loss: 38.1911 |
21-12-24 15:18:01.557 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:18:01.558 - INFO: Train epoch 1865: Loss: 666.7288 | r_Loss: 70.3506 | g_Loss: 275.4242 | l_Loss: 39.5514 |
21-12-24 15:19:15.906 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:19:15.907 - INFO: Train epoch 1866: Loss: 705.0768 | r_Loss: 74.8824 | g_Loss: 301.8500 | l_Loss: 28.8151 |
21-12-24 15:20:29.795 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:20:29.796 - INFO: Train epoch 1867: Loss: 605.0372 | r_Loss: 57.4753 | g_Loss: 271.8301 | l_Loss: 45.8307 |
21-12-24 15:21:43.137 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:21:43.138 - INFO: Train epoch 1868: Loss: 670.4567 | r_Loss: 68.6070 | g_Loss: 295.8435 | l_Loss: 31.5782 |
21-12-24 15:22:56.951 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:22:56.952 - INFO: Train epoch 1869: Loss: 667.2033 | r_Loss: 68.8726 | g_Loss: 285.9217 | l_Loss: 36.9188 |
21-12-24 15:24:10.512 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:24:10.513 - INFO: Train epoch 1870: Loss: 607.8520 | r_Loss: 60.5080 | g_Loss: 269.0332 | l_Loss: 36.2788 |
21-12-24 15:25:24.101 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:25:24.103 - INFO: Train epoch 1871: Loss: 646.1062 | r_Loss: 67.4696 | g_Loss: 270.2080 | l_Loss: 38.5501 |
21-12-24 15:26:37.607 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:26:37.608 - INFO: Train epoch 1872: Loss: 632.7603 | r_Loss: 65.2664 | g_Loss: 268.3516 | l_Loss: 38.0768 |
21-12-24 15:27:50.678 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:27:50.680 - INFO: Train epoch 1873: Loss: 635.1568 | r_Loss: 64.3990 | g_Loss: 275.7519 | l_Loss: 37.4102 |
21-12-24 15:29:04.465 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:29:04.466 - INFO: Train epoch 1874: Loss: 691.5381 | r_Loss: 76.1473 | g_Loss: 278.2882 | l_Loss: 32.5134 |
21-12-24 15:30:18.315 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:30:18.316 - INFO: Train epoch 1875: Loss: 645.2537 | r_Loss: 64.4660 | g_Loss: 281.3924 | l_Loss: 41.5316 |
21-12-24 15:31:31.939 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:31:31.940 - INFO: Train epoch 1876: Loss: 682.9236 | r_Loss: 73.2017 | g_Loss: 286.0000 | l_Loss: 30.9152 |
21-12-24 15:32:45.637 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:32:45.638 - INFO: Train epoch 1877: Loss: 697.0865 | r_Loss: 74.1529 | g_Loss: 290.5709 | l_Loss: 35.7511 |
21-12-24 15:33:59.194 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:33:59.195 - INFO: Train epoch 1878: Loss: 643.6496 | r_Loss: 65.3598 | g_Loss: 281.4079 | l_Loss: 35.4425 |
21-12-24 15:35:12.681 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:35:12.682 - INFO: Train epoch 1879: Loss: 627.7748 | r_Loss: 63.3863 | g_Loss: 273.2666 | l_Loss: 37.5768 |
21-12-24 15:36:26.642 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:36:26.644 - INFO: Train epoch 1880: Loss: 676.9040 | r_Loss: 71.7531 | g_Loss: 282.0461 | l_Loss: 36.0926 |
21-12-24 15:37:40.369 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:37:40.370 - INFO: Train epoch 1881: Loss: 697.5580 | r_Loss: 74.1386 | g_Loss: 288.5074 | l_Loss: 38.3574 |
21-12-24 15:38:54.024 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:38:54.025 - INFO: Train epoch 1882: Loss: 678.3468 | r_Loss: 76.7319 | g_Loss: 260.3516 | l_Loss: 34.3355 |
21-12-24 15:40:07.595 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:40:07.596 - INFO: Train epoch 1883: Loss: 598.1925 | r_Loss: 59.2218 | g_Loss: 266.1003 | l_Loss: 35.9834 |
21-12-24 15:41:20.961 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:41:20.962 - INFO: Train epoch 1884: Loss: 638.1156 | r_Loss: 67.0024 | g_Loss: 272.1458 | l_Loss: 30.9578 |
21-12-24 15:42:34.515 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:42:34.516 - INFO: Train epoch 1885: Loss: 586.9661 | r_Loss: 59.0618 | g_Loss: 260.7465 | l_Loss: 30.9108 |
21-12-24 15:43:48.069 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:43:48.070 - INFO: Train epoch 1886: Loss: 621.7998 | r_Loss: 63.6335 | g_Loss: 269.0621 | l_Loss: 34.5701 |
21-12-24 15:45:01.901 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:45:01.902 - INFO: Train epoch 1887: Loss: 632.1954 | r_Loss: 63.2525 | g_Loss: 279.8214 | l_Loss: 36.1113 |
21-12-24 15:46:15.501 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:46:15.502 - INFO: Train epoch 1888: Loss: 658.1341 | r_Loss: 65.4487 | g_Loss: 285.3926 | l_Loss: 45.4980 |
21-12-24 15:47:29.234 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:47:29.234 - INFO: Train epoch 1889: Loss: 909.0179 | r_Loss: 115.5207 | g_Loss: 290.2729 | l_Loss: 41.1415 |
21-12-24 15:49:19.279 - INFO: TEST: PSNR_S: 40.8152 | PSNR_C: 34.4156 |
21-12-24 15:49:19.281 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:49:19.281 - INFO: Train epoch 1890: Loss: 11961.4131 | r_Loss: 2206.1668 | g_Loss: 837.1147 | l_Loss: 93.4647 |
21-12-24 15:50:32.795 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:50:32.796 - INFO: Train epoch 1891: Loss: 1056.5178 | r_Loss: 105.6675 | g_Loss: 472.8878 | l_Loss: 55.2924 |
21-12-24 15:51:45.912 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:51:45.913 - INFO: Train epoch 1892: Loss: 937.6495 | r_Loss: 86.5954 | g_Loss: 453.7878 | l_Loss: 50.8846 |
21-12-24 15:52:59.979 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:52:59.980 - INFO: Train epoch 1893: Loss: 892.4382 | r_Loss: 80.5243 | g_Loss: 431.7219 | l_Loss: 58.0949 |
21-12-24 15:54:13.609 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:54:13.610 - INFO: Train epoch 1894: Loss: 820.0113 | r_Loss: 73.0635 | g_Loss: 401.6995 | l_Loss: 52.9941 |
21-12-24 15:55:27.435 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:55:27.435 - INFO: Train epoch 1895: Loss: 856.1665 | r_Loss: 76.1337 | g_Loss: 424.6960 | l_Loss: 50.8020 |
21-12-24 15:56:41.103 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:56:41.104 - INFO: Train epoch 1896: Loss: 794.8029 | r_Loss: 68.6865 | g_Loss: 400.6130 | l_Loss: 50.7576 |
21-12-24 15:57:54.452 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:57:54.453 - INFO: Train epoch 1897: Loss: 754.6464 | r_Loss: 66.9735 | g_Loss: 380.2996 | l_Loss: 39.4793 |
21-12-24 15:59:08.282 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 15:59:08.284 - INFO: Train epoch 1898: Loss: 777.1012 | r_Loss: 68.3311 | g_Loss: 395.6892 | l_Loss: 39.7566 |
21-12-24 16:00:22.361 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:00:22.361 - INFO: Train epoch 1899: Loss: 713.2783 | r_Loss: 59.8727 | g_Loss: 366.4051 | l_Loss: 47.5099 |
21-12-24 16:01:35.782 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:01:35.783 - INFO: Train epoch 1900: Loss: 761.8036 | r_Loss: 69.1024 | g_Loss: 374.0197 | l_Loss: 42.2716 |
21-12-24 16:02:49.407 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:02:49.408 - INFO: Train epoch 1901: Loss: 722.4671 | r_Loss: 64.3758 | g_Loss: 356.5424 | l_Loss: 44.0456 |
21-12-24 16:04:02.702 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:04:02.703 - INFO: Train epoch 1902: Loss: 709.2062 | r_Loss: 61.4920 | g_Loss: 361.7559 | l_Loss: 39.9904 |
21-12-24 16:05:15.947 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:05:15.948 - INFO: Train epoch 1903: Loss: 702.7438 | r_Loss: 63.0182 | g_Loss: 349.6163 | l_Loss: 38.0367 |
21-12-24 16:06:29.683 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:06:29.684 - INFO: Train epoch 1904: Loss: 720.2896 | r_Loss: 64.3664 | g_Loss: 350.9877 | l_Loss: 47.4701 |
21-12-24 16:07:43.428 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:07:43.429 - INFO: Train epoch 1905: Loss: 738.7760 | r_Loss: 66.3340 | g_Loss: 359.4850 | l_Loss: 47.6209 |
21-12-24 16:08:56.876 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:08:56.877 - INFO: Train epoch 1906: Loss: 724.6545 | r_Loss: 66.7975 | g_Loss: 346.2669 | l_Loss: 44.4001 |
21-12-24 16:10:10.565 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:10:10.566 - INFO: Train epoch 1907: Loss: 662.3915 | r_Loss: 59.9612 | g_Loss: 324.4650 | l_Loss: 38.1203 |
21-12-24 16:11:24.295 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:11:24.296 - INFO: Train epoch 1908: Loss: 711.5876 | r_Loss: 64.3935 | g_Loss: 346.6104 | l_Loss: 43.0099 |
21-12-24 16:12:37.761 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:12:37.763 - INFO: Train epoch 1909: Loss: 671.8348 | r_Loss: 60.9112 | g_Loss: 325.7905 | l_Loss: 41.4885 |
21-12-24 16:13:51.279 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:13:51.280 - INFO: Train epoch 1910: Loss: 679.2678 | r_Loss: 61.1218 | g_Loss: 329.8926 | l_Loss: 43.7663 |
21-12-24 16:15:04.835 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:15:04.836 - INFO: Train epoch 1911: Loss: 703.2545 | r_Loss: 63.8917 | g_Loss: 341.1247 | l_Loss: 42.6712 |
21-12-24 16:16:18.379 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:16:18.380 - INFO: Train epoch 1912: Loss: 685.0896 | r_Loss: 63.7026 | g_Loss: 321.7318 | l_Loss: 44.8450 |
21-12-24 16:17:32.177 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:17:32.179 - INFO: Train epoch 1913: Loss: 688.0055 | r_Loss: 62.2110 | g_Loss: 337.6132 | l_Loss: 39.3373 |
21-12-24 16:18:46.178 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:18:46.180 - INFO: Train epoch 1914: Loss: 658.8671 | r_Loss: 60.6661 | g_Loss: 313.2766 | l_Loss: 42.2601 |
21-12-24 16:19:59.918 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:19:59.919 - INFO: Train epoch 1915: Loss: 632.3056 | r_Loss: 56.3226 | g_Loss: 310.9987 | l_Loss: 39.6941 |
21-12-24 16:21:13.566 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:21:13.568 - INFO: Train epoch 1916: Loss: 637.0151 | r_Loss: 60.5904 | g_Loss: 296.2315 | l_Loss: 37.8315 |
21-12-24 16:22:27.256 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:22:27.257 - INFO: Train epoch 1917: Loss: 625.8112 | r_Loss: 59.2367 | g_Loss: 293.0478 | l_Loss: 36.5800 |
21-12-24 16:23:40.931 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:23:40.932 - INFO: Train epoch 1918: Loss: 624.0560 | r_Loss: 58.8510 | g_Loss: 288.7823 | l_Loss: 41.0186 |
21-12-24 16:24:55.134 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:24:55.135 - INFO: Train epoch 1919: Loss: 679.4857 | r_Loss: 66.6878 | g_Loss: 305.2384 | l_Loss: 40.8083 |
21-12-24 16:26:45.192 - INFO: TEST: PSNR_S: 45.3411 | PSNR_C: 37.1076 |
21-12-24 16:26:45.193 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:26:45.194 - INFO: Train epoch 1920: Loss: 636.0305 | r_Loss: 58.8598 | g_Loss: 299.1346 | l_Loss: 42.5970 |
21-12-24 16:27:58.768 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:27:58.769 - INFO: Train epoch 1921: Loss: 650.4100 | r_Loss: 60.3835 | g_Loss: 307.8647 | l_Loss: 40.6280 |
21-12-24 16:29:12.412 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:29:12.414 - INFO: Train epoch 1922: Loss: 658.5720 | r_Loss: 62.3796 | g_Loss: 305.2417 | l_Loss: 41.4323 |
21-12-24 16:30:26.234 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:30:26.235 - INFO: Train epoch 1923: Loss: 646.2142 | r_Loss: 62.8893 | g_Loss: 296.9604 | l_Loss: 34.8074 |
21-12-24 16:31:39.850 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:31:39.851 - INFO: Train epoch 1924: Loss: 677.0013 | r_Loss: 64.8470 | g_Loss: 310.8356 | l_Loss: 41.9306 |
21-12-24 16:32:53.311 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:32:53.312 - INFO: Train epoch 1925: Loss: 635.3908 | r_Loss: 60.9040 | g_Loss: 297.3460 | l_Loss: 33.5247 |
21-12-24 16:34:07.179 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:34:07.180 - INFO: Train epoch 1926: Loss: 616.5689 | r_Loss: 58.4606 | g_Loss: 274.0732 | l_Loss: 50.1926 |
21-12-24 16:35:20.906 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:35:20.908 - INFO: Train epoch 1927: Loss: 632.1898 | r_Loss: 64.2374 | g_Loss: 283.7478 | l_Loss: 27.2549 |
21-12-24 16:36:35.009 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:36:35.010 - INFO: Train epoch 1928: Loss: 598.7088 | r_Loss: 57.3555 | g_Loss: 274.9868 | l_Loss: 36.9442 |
21-12-24 16:37:48.498 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:37:48.499 - INFO: Train epoch 1929: Loss: 634.3287 | r_Loss: 62.2063 | g_Loss: 290.7204 | l_Loss: 32.5767 |
21-12-24 16:39:02.076 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:39:02.077 - INFO: Train epoch 1930: Loss: 623.9433 | r_Loss: 59.8283 | g_Loss: 287.3509 | l_Loss: 37.4509 |
21-12-24 16:40:15.622 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:40:15.623 - INFO: Train epoch 1931: Loss: 600.1869 | r_Loss: 57.8146 | g_Loss: 275.4819 | l_Loss: 35.6321 |
21-12-24 16:41:29.239 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:41:29.240 - INFO: Train epoch 1932: Loss: 643.4028 | r_Loss: 63.5580 | g_Loss: 286.0055 | l_Loss: 39.6074 |
21-12-24 16:42:42.872 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:42:42.873 - INFO: Train epoch 1933: Loss: 648.3691 | r_Loss: 64.9372 | g_Loss: 280.2251 | l_Loss: 43.4581 |
21-12-24 16:43:56.672 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:43:56.673 - INFO: Train epoch 1934: Loss: 637.2269 | r_Loss: 59.8688 | g_Loss: 293.7668 | l_Loss: 44.1164 |
21-12-24 16:45:10.102 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:45:10.103 - INFO: Train epoch 1935: Loss: 585.4444 | r_Loss: 56.4364 | g_Loss: 270.4253 | l_Loss: 32.8372 |
21-12-24 16:46:23.793 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:46:23.794 - INFO: Train epoch 1936: Loss: 607.2162 | r_Loss: 59.5265 | g_Loss: 270.3215 | l_Loss: 39.2621 |
21-12-24 16:47:37.425 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:47:37.426 - INFO: Train epoch 1937: Loss: 618.0518 | r_Loss: 62.3136 | g_Loss: 273.2433 | l_Loss: 33.2404 |
21-12-24 16:48:50.828 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:48:50.829 - INFO: Train epoch 1938: Loss: 605.9599 | r_Loss: 59.9540 | g_Loss: 279.9192 | l_Loss: 26.2707 |
21-12-24 16:50:04.920 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:50:04.921 - INFO: Train epoch 1939: Loss: 636.2723 | r_Loss: 63.2826 | g_Loss: 289.7265 | l_Loss: 30.1326 |
21-12-24 16:51:18.408 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:51:18.409 - INFO: Train epoch 1940: Loss: 636.5492 | r_Loss: 65.3328 | g_Loss: 273.3096 | l_Loss: 36.5757 |
21-12-24 16:52:32.258 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:52:32.259 - INFO: Train epoch 1941: Loss: 628.7292 | r_Loss: 62.4662 | g_Loss: 277.4726 | l_Loss: 38.9255 |
21-12-24 16:53:45.801 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:53:45.803 - INFO: Train epoch 1942: Loss: 661.5400 | r_Loss: 67.2657 | g_Loss: 284.9851 | l_Loss: 40.2263 |
21-12-24 16:54:59.754 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:54:59.755 - INFO: Train epoch 1943: Loss: 680.2688 | r_Loss: 71.9889 | g_Loss: 282.0589 | l_Loss: 38.2652 |
21-12-24 16:56:13.443 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:56:13.444 - INFO: Train epoch 1944: Loss: 586.0674 | r_Loss: 59.2695 | g_Loss: 264.7375 | l_Loss: 24.9826 |
21-12-24 16:57:26.931 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:57:26.932 - INFO: Train epoch 1945: Loss: 697.1279 | r_Loss: 76.0401 | g_Loss: 287.3003 | l_Loss: 29.6272 |
21-12-24 16:58:40.215 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:58:40.216 - INFO: Train epoch 1946: Loss: 623.6015 | r_Loss: 63.8315 | g_Loss: 270.3138 | l_Loss: 34.1305 |
21-12-24 16:59:53.558 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 16:59:53.559 - INFO: Train epoch 1947: Loss: 648.2332 | r_Loss: 65.9899 | g_Loss: 284.1317 | l_Loss: 34.1520 |
21-12-24 17:01:06.921 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:01:06.923 - INFO: Train epoch 1948: Loss: 619.3136 | r_Loss: 63.2762 | g_Loss: 266.4289 | l_Loss: 36.5038 |
21-12-24 17:02:20.815 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:02:20.816 - INFO: Train epoch 1949: Loss: 587.3828 | r_Loss: 60.7599 | g_Loss: 251.8979 | l_Loss: 31.6854 |
21-12-24 17:04:10.451 - INFO: TEST: PSNR_S: 45.3047 | PSNR_C: 37.6403 |
21-12-24 17:04:10.453 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:04:10.453 - INFO: Train epoch 1950: Loss: 612.2067 | r_Loss: 63.2890 | g_Loss: 268.3908 | l_Loss: 27.3711 |
21-12-24 17:05:24.228 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:05:24.229 - INFO: Train epoch 1951: Loss: 618.8943 | r_Loss: 65.4116 | g_Loss: 263.3706 | l_Loss: 28.4659 |
21-12-24 17:06:37.589 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:06:37.590 - INFO: Train epoch 1952: Loss: 793.5032 | r_Loss: 96.9822 | g_Loss: 277.7022 | l_Loss: 30.8901 |
21-12-24 17:07:51.062 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:07:51.063 - INFO: Train epoch 1953: Loss: 630.4517 | r_Loss: 63.2685 | g_Loss: 281.3518 | l_Loss: 32.7573 |
21-12-24 17:09:05.053 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:09:05.054 - INFO: Train epoch 1954: Loss: 613.0163 | r_Loss: 62.1412 | g_Loss: 270.5010 | l_Loss: 31.8094 |
21-12-24 17:10:18.853 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:10:18.854 - INFO: Train epoch 1955: Loss: 595.5946 | r_Loss: 59.1312 | g_Loss: 267.1997 | l_Loss: 32.7387 |
21-12-24 17:11:32.409 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:11:32.411 - INFO: Train epoch 1956: Loss: 618.6234 | r_Loss: 61.4558 | g_Loss: 279.1566 | l_Loss: 32.1875 |
21-12-24 17:12:46.183 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:12:46.184 - INFO: Train epoch 1957: Loss: 586.3047 | r_Loss: 59.2662 | g_Loss: 262.7378 | l_Loss: 27.2361 |
21-12-24 17:13:59.741 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:13:59.742 - INFO: Train epoch 1958: Loss: 587.9225 | r_Loss: 57.8819 | g_Loss: 265.8033 | l_Loss: 32.7095 |
21-12-24 17:15:13.911 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:15:13.912 - INFO: Train epoch 1959: Loss: 620.8686 | r_Loss: 62.0006 | g_Loss: 272.1723 | l_Loss: 38.6931 |
21-12-24 17:16:27.320 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:16:27.321 - INFO: Train epoch 1960: Loss: 578.1654 | r_Loss: 58.2360 | g_Loss: 253.7806 | l_Loss: 33.2050 |
21-12-24 17:17:40.645 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:17:40.646 - INFO: Train epoch 1961: Loss: 610.2687 | r_Loss: 62.0060 | g_Loss: 270.1520 | l_Loss: 30.0868 |
21-12-24 17:18:54.000 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:18:54.002 - INFO: Train epoch 1962: Loss: 612.2146 | r_Loss: 62.1913 | g_Loss: 264.2066 | l_Loss: 37.0517 |
21-12-24 17:20:07.578 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:20:07.579 - INFO: Train epoch 1963: Loss: 625.5423 | r_Loss: 64.3910 | g_Loss: 271.1981 | l_Loss: 32.3893 |
21-12-24 17:21:20.993 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:21:20.994 - INFO: Train epoch 1964: Loss: 633.2656 | r_Loss: 64.5620 | g_Loss: 273.6413 | l_Loss: 36.8142 |
21-12-24 17:22:34.909 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:22:34.910 - INFO: Train epoch 1965: Loss: 624.3680 | r_Loss: 64.8415 | g_Loss: 262.3753 | l_Loss: 37.7853 |
21-12-24 17:23:48.488 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:23:48.489 - INFO: Train epoch 1966: Loss: 628.7107 | r_Loss: 65.1324 | g_Loss: 268.7161 | l_Loss: 34.3324 |
21-12-24 17:25:01.861 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:25:01.862 - INFO: Train epoch 1967: Loss: 795.4167 | r_Loss: 90.9734 | g_Loss: 298.1314 | l_Loss: 42.4184 |
21-12-24 17:26:15.707 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:26:15.708 - INFO: Train epoch 1968: Loss: 632.6498 | r_Loss: 64.8428 | g_Loss: 274.1638 | l_Loss: 34.2718 |
21-12-24 17:27:29.110 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:27:29.111 - INFO: Train epoch 1969: Loss: 581.5862 | r_Loss: 59.0073 | g_Loss: 258.3323 | l_Loss: 28.2175 |
21-12-24 17:28:42.559 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:28:42.560 - INFO: Train epoch 1970: Loss: 635.6687 | r_Loss: 63.4828 | g_Loss: 277.9344 | l_Loss: 40.3203 |
21-12-24 17:29:55.941 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:29:55.942 - INFO: Train epoch 1971: Loss: 632.5222 | r_Loss: 68.0719 | g_Loss: 261.3104 | l_Loss: 30.8526 |
21-12-24 17:31:09.359 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:31:09.360 - INFO: Train epoch 1972: Loss: 611.2465 | r_Loss: 62.1317 | g_Loss: 269.5029 | l_Loss: 31.0851 |
21-12-24 17:32:22.906 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:32:22.907 - INFO: Train epoch 1973: Loss: 613.8267 | r_Loss: 64.7421 | g_Loss: 256.6255 | l_Loss: 33.4906 |
21-12-24 17:33:36.698 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:33:36.699 - INFO: Train epoch 1974: Loss: 614.9026 | r_Loss: 63.6278 | g_Loss: 260.1656 | l_Loss: 36.5981 |
21-12-24 17:34:50.377 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:34:50.378 - INFO: Train epoch 1975: Loss: 631.7577 | r_Loss: 65.7075 | g_Loss: 262.1822 | l_Loss: 41.0379 |
21-12-24 17:36:03.938 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:36:03.939 - INFO: Train epoch 1976: Loss: 587.8239 | r_Loss: 58.8955 | g_Loss: 265.3509 | l_Loss: 27.9954 |
21-12-24 17:37:17.355 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:37:17.356 - INFO: Train epoch 1977: Loss: 587.2894 | r_Loss: 59.9837 | g_Loss: 259.3176 | l_Loss: 28.0534 |
21-12-24 17:38:30.883 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:38:30.885 - INFO: Train epoch 1978: Loss: 613.0306 | r_Loss: 61.8243 | g_Loss: 266.7836 | l_Loss: 37.1257 |
21-12-24 17:39:44.347 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:39:44.348 - INFO: Train epoch 1979: Loss: 619.4720 | r_Loss: 63.5345 | g_Loss: 272.2066 | l_Loss: 29.5930 |
21-12-24 17:41:33.922 - INFO: TEST: PSNR_S: 45.4568 | PSNR_C: 37.8514 |
21-12-24 17:41:33.924 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:41:33.924 - INFO: Train epoch 1980: Loss: 583.4941 | r_Loss: 61.8989 | g_Loss: 246.5665 | l_Loss: 27.4330 |
21-12-24 17:42:47.060 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:42:47.061 - INFO: Train epoch 1981: Loss: 632.6852 | r_Loss: 66.2075 | g_Loss: 275.2881 | l_Loss: 26.3594 |
21-12-24 17:44:00.728 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:44:00.729 - INFO: Train epoch 1982: Loss: 629.7519 | r_Loss: 64.3060 | g_Loss: 273.7787 | l_Loss: 34.4431 |
21-12-24 17:45:13.919 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:45:13.920 - INFO: Train epoch 1983: Loss: 620.1705 | r_Loss: 63.3343 | g_Loss: 264.7587 | l_Loss: 38.7405 |
21-12-24 17:46:27.423 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:46:27.424 - INFO: Train epoch 1984: Loss: 607.7524 | r_Loss: 62.4182 | g_Loss: 258.6069 | l_Loss: 37.0544 |
21-12-24 17:47:40.867 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:47:40.868 - INFO: Train epoch 1985: Loss: 649.9184 | r_Loss: 65.0347 | g_Loss: 288.6903 | l_Loss: 36.0547 |
21-12-24 17:48:54.303 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:48:54.304 - INFO: Train epoch 1986: Loss: 587.1150 | r_Loss: 60.9735 | g_Loss: 257.6428 | l_Loss: 24.6049 |
21-12-24 17:50:07.616 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:50:07.617 - INFO: Train epoch 1987: Loss: 12177.0717 | r_Loss: 2223.8411 | g_Loss: 941.7559 | l_Loss: 116.1102 |
21-12-24 17:51:21.285 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:51:21.287 - INFO: Train epoch 1988: Loss: 1326.1720 | r_Loss: 147.1959 | g_Loss: 529.3060 | l_Loss: 60.8864 |
21-12-24 17:52:34.615 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:52:34.616 - INFO: Train epoch 1989: Loss: 1111.7152 | r_Loss: 116.3364 | g_Loss: 474.1011 | l_Loss: 55.9324 |
21-12-24 17:53:47.975 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:53:47.976 - INFO: Train epoch 1990: Loss: 1013.3722 | r_Loss: 102.3423 | g_Loss: 446.8372 | l_Loss: 54.8237 |
21-12-24 17:55:01.241 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:55:01.242 - INFO: Train epoch 1991: Loss: 1012.6008 | r_Loss: 99.3762 | g_Loss: 455.9877 | l_Loss: 59.7324 |
21-12-24 17:56:14.555 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:56:14.556 - INFO: Train epoch 1992: Loss: 894.5727 | r_Loss: 86.6377 | g_Loss: 404.6056 | l_Loss: 56.7788 |
21-12-24 17:57:27.893 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:57:27.895 - INFO: Train epoch 1993: Loss: 863.3781 | r_Loss: 81.6897 | g_Loss: 404.1323 | l_Loss: 50.7975 |
21-12-24 17:58:41.868 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:58:41.869 - INFO: Train epoch 1994: Loss: 822.8185 | r_Loss: 76.9013 | g_Loss: 387.2363 | l_Loss: 51.0759 |
21-12-24 17:59:55.435 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 17:59:55.436 - INFO: Train epoch 1995: Loss: 802.0839 | r_Loss: 77.0935 | g_Loss: 374.6093 | l_Loss: 42.0071 |
21-12-24 18:01:08.990 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:01:08.991 - INFO: Train epoch 1996: Loss: 812.7463 | r_Loss: 78.2166 | g_Loss: 377.6234 | l_Loss: 44.0399 |
21-12-24 18:02:22.508 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:02:22.509 - INFO: Train epoch 1997: Loss: 785.9746 | r_Loss: 73.8684 | g_Loss: 367.8361 | l_Loss: 48.7967 |
21-12-24 18:03:35.956 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:03:35.957 - INFO: Train epoch 1998: Loss: 749.2332 | r_Loss: 70.6028 | g_Loss: 356.0359 | l_Loss: 40.1833 |
21-12-24 18:04:49.300 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:04:49.301 - INFO: Train epoch 1999: Loss: 749.4429 | r_Loss: 71.3410 | g_Loss: 350.8643 | l_Loss: 41.8736 |
21-12-24 18:06:02.458 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:06:02.460 - INFO: Train epoch 2000: Loss: 751.5022 | r_Loss: 71.1040 | g_Loss: 351.7186 | l_Loss: 44.2635 |
21-12-24 18:07:16.259 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:07:16.261 - INFO: Train epoch 2001: Loss: 725.8917 | r_Loss: 67.8326 | g_Loss: 345.8735 | l_Loss: 40.8550 |
21-12-24 18:08:29.417 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:08:29.418 - INFO: Train epoch 2002: Loss: 706.2910 | r_Loss: 67.3700 | g_Loss: 327.3787 | l_Loss: 42.0621 |
21-12-24 18:09:42.476 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:09:42.476 - INFO: Train epoch 2003: Loss: 681.5402 | r_Loss: 63.0435 | g_Loss: 328.7410 | l_Loss: 37.5814 |
21-12-24 18:10:56.069 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:10:56.070 - INFO: Train epoch 2004: Loss: 717.3996 | r_Loss: 65.1688 | g_Loss: 340.7284 | l_Loss: 50.8273 |
21-12-24 18:12:09.344 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:12:09.344 - INFO: Train epoch 2005: Loss: 716.3013 | r_Loss: 68.1353 | g_Loss: 333.9845 | l_Loss: 41.6404 |
21-12-24 18:13:22.821 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:13:22.821 - INFO: Train epoch 2006: Loss: 698.4550 | r_Loss: 65.5618 | g_Loss: 333.4192 | l_Loss: 37.2271 |
21-12-24 18:14:36.338 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:14:36.339 - INFO: Train epoch 2007: Loss: 693.6376 | r_Loss: 65.3407 | g_Loss: 324.8096 | l_Loss: 42.1247 |
21-12-24 18:15:49.738 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:15:49.740 - INFO: Train epoch 2008: Loss: 688.8421 | r_Loss: 64.6166 | g_Loss: 324.3113 | l_Loss: 41.4478 |
21-12-24 18:17:03.167 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:17:03.168 - INFO: Train epoch 2009: Loss: 716.0856 | r_Loss: 67.7879 | g_Loss: 333.4016 | l_Loss: 43.7444 |
21-12-24 18:18:53.098 - INFO: TEST: PSNR_S: 45.1175 | PSNR_C: 36.7520 |
21-12-24 18:18:53.100 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:18:53.101 - INFO: Train epoch 2010: Loss: 608.4285 | r_Loss: 55.8832 | g_Loss: 291.8996 | l_Loss: 37.1130 |
21-12-24 18:20:06.437 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:20:06.438 - INFO: Train epoch 2011: Loss: 708.6412 | r_Loss: 68.5439 | g_Loss: 327.3189 | l_Loss: 38.6029 |
21-12-24 18:21:19.923 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:21:19.924 - INFO: Train epoch 2012: Loss: 663.0554 | r_Loss: 61.6973 | g_Loss: 315.9949 | l_Loss: 38.5741 |
21-12-24 18:22:33.164 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:22:33.165 - INFO: Train epoch 2013: Loss: 668.1866 | r_Loss: 63.8422 | g_Loss: 315.4560 | l_Loss: 33.5198 |
21-12-24 18:23:46.737 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:23:46.738 - INFO: Train epoch 2014: Loss: 634.6868 | r_Loss: 59.9277 | g_Loss: 295.8869 | l_Loss: 39.1615 |
21-12-24 18:25:00.204 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:25:00.205 - INFO: Train epoch 2015: Loss: 619.1524 | r_Loss: 57.6669 | g_Loss: 290.9759 | l_Loss: 39.8420 |
21-12-24 18:26:14.051 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:26:14.052 - INFO: Train epoch 2016: Loss: 622.1457 | r_Loss: 60.3048 | g_Loss: 285.7946 | l_Loss: 34.8270 |
21-12-24 18:27:27.353 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:27:27.354 - INFO: Train epoch 2017: Loss: 635.6873 | r_Loss: 61.4487 | g_Loss: 296.2848 | l_Loss: 32.1588 |
21-12-24 18:28:40.918 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:28:40.918 - INFO: Train epoch 2018: Loss: 655.0203 | r_Loss: 64.0857 | g_Loss: 294.2851 | l_Loss: 40.3068 |
21-12-24 18:29:54.052 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:29:54.053 - INFO: Train epoch 2019: Loss: 656.4808 | r_Loss: 63.7495 | g_Loss: 302.9159 | l_Loss: 34.8174 |
21-12-24 18:31:07.340 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:31:07.341 - INFO: Train epoch 2020: Loss: 658.0201 | r_Loss: 65.2244 | g_Loss: 295.9575 | l_Loss: 35.9406 |
21-12-24 18:32:20.971 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:32:20.972 - INFO: Train epoch 2021: Loss: 676.6056 | r_Loss: 64.7296 | g_Loss: 309.2870 | l_Loss: 43.6705 |
21-12-24 18:33:34.548 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:33:34.549 - INFO: Train epoch 2022: Loss: 600.8246 | r_Loss: 56.0690 | g_Loss: 286.6454 | l_Loss: 33.8341 |
21-12-24 18:34:47.994 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:34:47.995 - INFO: Train epoch 2023: Loss: 629.6818 | r_Loss: 60.5555 | g_Loss: 287.4399 | l_Loss: 39.4643 |
21-12-24 18:36:01.388 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:36:01.389 - INFO: Train epoch 2024: Loss: 608.7965 | r_Loss: 60.1987 | g_Loss: 272.1046 | l_Loss: 35.6987 |
21-12-24 18:37:15.356 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:37:15.357 - INFO: Train epoch 2025: Loss: 656.4832 | r_Loss: 64.4006 | g_Loss: 291.9456 | l_Loss: 42.5345 |
21-12-24 18:38:28.730 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:38:28.731 - INFO: Train epoch 2026: Loss: 667.5681 | r_Loss: 65.8187 | g_Loss: 298.9589 | l_Loss: 39.5158 |
21-12-24 18:39:42.184 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:39:42.185 - INFO: Train epoch 2027: Loss: 621.1933 | r_Loss: 60.8942 | g_Loss: 282.9756 | l_Loss: 33.7467 |
21-12-24 18:40:55.798 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:40:55.799 - INFO: Train epoch 2028: Loss: 604.4173 | r_Loss: 56.4200 | g_Loss: 288.3503 | l_Loss: 33.9670 |
21-12-24 18:42:08.941 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:42:08.942 - INFO: Train epoch 2029: Loss: 636.4399 | r_Loss: 62.1395 | g_Loss: 280.3760 | l_Loss: 45.3662 |
21-12-24 18:43:22.672 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:43:22.673 - INFO: Train epoch 2030: Loss: 594.2249 | r_Loss: 58.3599 | g_Loss: 265.2315 | l_Loss: 37.1941 |
21-12-24 18:44:36.398 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:44:36.399 - INFO: Train epoch 2031: Loss: 620.3902 | r_Loss: 63.3986 | g_Loss: 267.7229 | l_Loss: 35.6744 |
21-12-24 18:45:50.123 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:45:50.124 - INFO: Train epoch 2032: Loss: 571.7649 | r_Loss: 56.1656 | g_Loss: 261.3795 | l_Loss: 29.5571 |
21-12-24 18:47:03.557 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:47:03.558 - INFO: Train epoch 2033: Loss: 606.9786 | r_Loss: 58.5625 | g_Loss: 282.9861 | l_Loss: 31.1802 |
21-12-24 18:48:16.783 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:48:16.784 - INFO: Train epoch 2034: Loss: 595.9614 | r_Loss: 58.6328 | g_Loss: 267.4713 | l_Loss: 35.3261 |
21-12-24 18:49:30.256 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:49:30.257 - INFO: Train epoch 2035: Loss: 621.5777 | r_Loss: 63.5517 | g_Loss: 272.7368 | l_Loss: 31.0824 |
21-12-24 18:50:43.463 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:50:43.464 - INFO: Train epoch 2036: Loss: 606.8375 | r_Loss: 61.0973 | g_Loss: 273.1465 | l_Loss: 28.2046 |
21-12-24 18:51:56.897 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:51:56.898 - INFO: Train epoch 2037: Loss: 618.4793 | r_Loss: 62.8638 | g_Loss: 271.7600 | l_Loss: 32.4002 |
21-12-24 18:53:10.148 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:53:10.149 - INFO: Train epoch 2038: Loss: 659.1715 | r_Loss: 66.9698 | g_Loss: 290.3307 | l_Loss: 33.9918 |
21-12-24 18:54:23.249 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:54:23.250 - INFO: Train epoch 2039: Loss: 614.0109 | r_Loss: 59.1002 | g_Loss: 276.4518 | l_Loss: 42.0579 |
21-12-24 18:56:12.845 - INFO: TEST: PSNR_S: 45.3207 | PSNR_C: 37.5570 |
21-12-24 18:56:12.846 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:56:12.846 - INFO: Train epoch 2040: Loss: 621.7762 | r_Loss: 62.8809 | g_Loss: 272.9762 | l_Loss: 34.3955 |
21-12-24 18:57:26.674 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:57:26.675 - INFO: Train epoch 2041: Loss: 599.3791 | r_Loss: 58.8264 | g_Loss: 264.2889 | l_Loss: 40.9582 |
21-12-24 18:58:40.213 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:58:40.215 - INFO: Train epoch 2042: Loss: 613.6683 | r_Loss: 64.3475 | g_Loss: 266.3007 | l_Loss: 25.6302 |
21-12-24 18:59:53.832 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 18:59:53.834 - INFO: Train epoch 2043: Loss: 583.6748 | r_Loss: 57.4175 | g_Loss: 260.2652 | l_Loss: 36.3222 |
21-12-24 19:01:07.145 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:01:07.146 - INFO: Train epoch 2044: Loss: 610.9311 | r_Loss: 61.7668 | g_Loss: 267.2660 | l_Loss: 34.8314 |
21-12-24 19:02:20.963 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:02:20.964 - INFO: Train epoch 2045: Loss: 577.8724 | r_Loss: 57.4485 | g_Loss: 258.1535 | l_Loss: 32.4765 |
21-12-24 19:03:34.212 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:03:34.213 - INFO: Train epoch 2046: Loss: 647.8192 | r_Loss: 67.0812 | g_Loss: 279.7912 | l_Loss: 32.6220 |
21-12-24 19:04:47.616 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:04:47.617 - INFO: Train epoch 2047: Loss: 580.2238 | r_Loss: 56.5505 | g_Loss: 261.5964 | l_Loss: 35.8749 |
21-12-24 19:06:01.048 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:06:01.049 - INFO: Train epoch 2048: Loss: 620.5916 | r_Loss: 63.2416 | g_Loss: 268.9603 | l_Loss: 35.4235 |
21-12-24 19:07:14.489 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:07:14.490 - INFO: Train epoch 2049: Loss: 591.4287 | r_Loss: 59.7033 | g_Loss: 259.7360 | l_Loss: 33.1760 |
21-12-24 19:08:28.303 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:08:28.304 - INFO: Train epoch 2050: Loss: 585.8399 | r_Loss: 58.8181 | g_Loss: 258.4100 | l_Loss: 33.3393 |
21-12-24 19:09:41.680 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:09:41.681 - INFO: Train epoch 2051: Loss: 585.3919 | r_Loss: 58.3329 | g_Loss: 260.6730 | l_Loss: 33.0546 |
21-12-24 19:10:54.995 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:10:54.996 - INFO: Train epoch 2052: Loss: 614.2025 | r_Loss: 61.7678 | g_Loss: 271.7930 | l_Loss: 33.5707 |
21-12-24 19:12:09.113 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:12:09.114 - INFO: Train epoch 2053: Loss: 811.2597 | r_Loss: 98.3334 | g_Loss: 283.1136 | l_Loss: 36.4792 |
21-12-24 19:13:22.423 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:13:22.424 - INFO: Train epoch 2054: Loss: 588.0144 | r_Loss: 55.9099 | g_Loss: 265.8312 | l_Loss: 42.6335 |
21-12-24 19:14:35.782 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:14:35.783 - INFO: Train epoch 2055: Loss: 584.1778 | r_Loss: 59.3582 | g_Loss: 259.4061 | l_Loss: 27.9807 |
21-12-24 19:15:49.214 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:15:49.215 - INFO: Train epoch 2056: Loss: 670.0383 | r_Loss: 68.5046 | g_Loss: 297.3853 | l_Loss: 30.1300 |
21-12-24 19:17:02.748 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:17:02.749 - INFO: Train epoch 2057: Loss: 593.3636 | r_Loss: 57.8398 | g_Loss: 270.8876 | l_Loss: 33.2772 |
21-12-24 19:18:16.064 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:18:16.065 - INFO: Train epoch 2058: Loss: 590.3879 | r_Loss: 58.7892 | g_Loss: 265.0000 | l_Loss: 31.4419 |
21-12-24 19:19:29.704 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:19:29.705 - INFO: Train epoch 2059: Loss: 626.5792 | r_Loss: 63.9581 | g_Loss: 265.3119 | l_Loss: 41.4768 |
21-12-24 19:20:43.319 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:20:43.320 - INFO: Train epoch 2060: Loss: 605.0838 | r_Loss: 61.2045 | g_Loss: 258.5201 | l_Loss: 40.5410 |
21-12-24 19:21:57.451 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:21:57.452 - INFO: Train epoch 2061: Loss: 615.6128 | r_Loss: 62.7622 | g_Loss: 267.6820 | l_Loss: 34.1196 |
21-12-24 19:23:10.847 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:23:10.849 - INFO: Train epoch 2062: Loss: 608.1586 | r_Loss: 63.1778 | g_Loss: 261.1231 | l_Loss: 31.1465 |
21-12-24 19:24:24.456 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:24:24.458 - INFO: Train epoch 2063: Loss: 600.4301 | r_Loss: 61.0596 | g_Loss: 266.0264 | l_Loss: 29.1057 |
21-12-24 19:25:38.278 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:25:38.279 - INFO: Train epoch 2064: Loss: 568.5084 | r_Loss: 56.5029 | g_Loss: 262.2236 | l_Loss: 23.7703 |
21-12-24 19:26:51.723 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:26:51.724 - INFO: Train epoch 2065: Loss: 605.0953 | r_Loss: 60.4288 | g_Loss: 264.3672 | l_Loss: 38.5840 |
21-12-24 19:28:05.768 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:28:05.769 - INFO: Train epoch 2066: Loss: 586.4251 | r_Loss: 61.1549 | g_Loss: 252.6677 | l_Loss: 27.9831 |
21-12-24 19:29:19.507 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:29:19.509 - INFO: Train epoch 2067: Loss: 565.4580 | r_Loss: 56.7114 | g_Loss: 248.3273 | l_Loss: 33.5736 |
21-12-24 19:30:33.077 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:30:33.078 - INFO: Train epoch 2068: Loss: 640.4105 | r_Loss: 66.3730 | g_Loss: 268.9954 | l_Loss: 39.5498 |
21-12-24 19:31:46.666 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:31:46.668 - INFO: Train epoch 2069: Loss: 663.9168 | r_Loss: 72.2682 | g_Loss: 271.5784 | l_Loss: 30.9972 |
21-12-24 19:33:36.325 - INFO: TEST: PSNR_S: 45.6989 | PSNR_C: 37.9242 |
21-12-24 19:33:36.326 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:33:36.327 - INFO: Train epoch 2070: Loss: 575.8256 | r_Loss: 58.9576 | g_Loss: 252.4620 | l_Loss: 28.5757 |
21-12-24 19:34:49.929 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:34:49.931 - INFO: Train epoch 2071: Loss: 589.4905 | r_Loss: 58.2241 | g_Loss: 256.0445 | l_Loss: 42.3257 |
21-12-24 19:36:03.817 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:36:03.818 - INFO: Train epoch 2072: Loss: 578.0674 | r_Loss: 59.1621 | g_Loss: 254.7216 | l_Loss: 27.5352 |
21-12-24 19:37:17.415 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:37:17.416 - INFO: Train epoch 2073: Loss: 599.3524 | r_Loss: 60.7755 | g_Loss: 265.2043 | l_Loss: 30.2708 |
21-12-24 19:38:30.719 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:38:30.720 - INFO: Train epoch 2074: Loss: 601.7044 | r_Loss: 61.9335 | g_Loss: 253.4678 | l_Loss: 38.5691 |
21-12-24 19:39:44.111 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:39:44.112 - INFO: Train epoch 2075: Loss: 602.0565 | r_Loss: 61.3546 | g_Loss: 265.5864 | l_Loss: 29.6972 |
21-12-24 19:40:58.112 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:40:58.114 - INFO: Train epoch 2076: Loss: 587.1171 | r_Loss: 59.7722 | g_Loss: 248.5386 | l_Loss: 39.7174 |
21-12-24 19:42:11.807 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:42:11.808 - INFO: Train epoch 2077: Loss: 574.5993 | r_Loss: 59.2057 | g_Loss: 251.7776 | l_Loss: 26.7933 |
21-12-24 19:43:25.569 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:43:25.570 - INFO: Train epoch 2078: Loss: 626.3902 | r_Loss: 66.5088 | g_Loss: 265.7307 | l_Loss: 28.1154 |
21-12-24 19:44:38.789 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:44:38.791 - INFO: Train epoch 2079: Loss: 567.8792 | r_Loss: 56.3761 | g_Loss: 247.6813 | l_Loss: 38.3177 |
21-12-24 19:45:52.260 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:45:52.261 - INFO: Train epoch 2080: Loss: 603.9240 | r_Loss: 62.0461 | g_Loss: 262.5035 | l_Loss: 31.1898 |
21-12-24 19:47:05.730 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:47:05.732 - INFO: Train epoch 2081: Loss: 603.5378 | r_Loss: 62.5034 | g_Loss: 257.5332 | l_Loss: 33.4875 |
21-12-24 19:48:19.210 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:48:19.211 - INFO: Train epoch 2082: Loss: 620.7764 | r_Loss: 64.5120 | g_Loss: 266.9704 | l_Loss: 31.2460 |
21-12-24 19:49:32.620 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:49:32.622 - INFO: Train epoch 2083: Loss: 629.5641 | r_Loss: 64.3834 | g_Loss: 275.2602 | l_Loss: 32.3868 |
21-12-24 19:50:46.175 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:50:46.176 - INFO: Train epoch 2084: Loss: 596.5758 | r_Loss: 62.3626 | g_Loss: 262.3234 | l_Loss: 22.4395 |
21-12-24 19:51:59.607 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:51:59.608 - INFO: Train epoch 2085: Loss: 583.8447 | r_Loss: 60.7892 | g_Loss: 253.8456 | l_Loss: 26.0531 |
21-12-24 19:53:13.722 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:53:13.723 - INFO: Train epoch 2086: Loss: 620.0317 | r_Loss: 65.0637 | g_Loss: 263.4192 | l_Loss: 31.2940 |
21-12-24 19:54:27.411 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:54:27.412 - INFO: Train epoch 2087: Loss: 576.4570 | r_Loss: 58.0738 | g_Loss: 249.5292 | l_Loss: 36.5586 |
21-12-24 19:55:41.084 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:55:41.085 - INFO: Train epoch 2088: Loss: 627.9779 | r_Loss: 63.9151 | g_Loss: 272.6048 | l_Loss: 35.7974 |
21-12-24 19:56:54.862 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:56:54.863 - INFO: Train epoch 2089: Loss: 549.3365 | r_Loss: 54.8260 | g_Loss: 247.4111 | l_Loss: 27.7955 |
21-12-24 19:58:08.195 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:58:08.196 - INFO: Train epoch 2090: Loss: 620.7209 | r_Loss: 65.9888 | g_Loss: 254.2906 | l_Loss: 36.4861 |
21-12-24 19:59:21.983 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 19:59:21.985 - INFO: Train epoch 2091: Loss: 604.6688 | r_Loss: 62.2217 | g_Loss: 258.7897 | l_Loss: 34.7707 |
21-12-24 20:00:35.579 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:00:35.580 - INFO: Train epoch 2092: Loss: 593.5051 | r_Loss: 60.1060 | g_Loss: 256.4493 | l_Loss: 36.5256 |
21-12-24 20:01:48.887 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:01:48.888 - INFO: Train epoch 2093: Loss: 598.3478 | r_Loss: 61.9879 | g_Loss: 259.2312 | l_Loss: 29.1773 |
21-12-24 20:03:02.119 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:03:02.121 - INFO: Train epoch 2094: Loss: 569.9269 | r_Loss: 56.1280 | g_Loss: 252.0562 | l_Loss: 37.2309 |
21-12-24 20:04:15.620 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:04:15.621 - INFO: Train epoch 2095: Loss: 563.9193 | r_Loss: 58.4704 | g_Loss: 243.7142 | l_Loss: 27.8531 |
21-12-24 20:05:29.199 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:05:29.200 - INFO: Train epoch 2096: Loss: 648.5836 | r_Loss: 73.5737 | g_Loss: 253.2119 | l_Loss: 27.5034 |
21-12-24 20:06:42.575 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:06:42.576 - INFO: Train epoch 2097: Loss: 573.1320 | r_Loss: 57.1994 | g_Loss: 250.6656 | l_Loss: 36.4691 |
21-12-24 20:07:56.233 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:07:56.234 - INFO: Train epoch 2098: Loss: 564.7176 | r_Loss: 57.6931 | g_Loss: 241.1465 | l_Loss: 35.1057 |
21-12-24 20:09:09.686 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:09:09.688 - INFO: Train epoch 2099: Loss: 542.8961 | r_Loss: 55.6516 | g_Loss: 232.7191 | l_Loss: 31.9189 |
21-12-24 20:10:59.605 - INFO: TEST: PSNR_S: 39.4885 | PSNR_C: 33.3258 |
21-12-24 20:10:59.607 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:10:59.607 - INFO: Train epoch 2100: Loss: 12985.4924 | r_Loss: 2373.9663 | g_Loss: 1001.7327 | l_Loss: 113.9282 |
21-12-24 20:12:13.919 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:12:13.920 - INFO: Train epoch 2101: Loss: 1261.9005 | r_Loss: 133.1727 | g_Loss: 530.8890 | l_Loss: 65.1478 |
21-12-24 20:13:27.355 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:13:27.356 - INFO: Train epoch 2102: Loss: 1000.2285 | r_Loss: 101.4475 | g_Loss: 441.1989 | l_Loss: 51.7921 |
21-12-24 20:14:40.894 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:14:40.895 - INFO: Train epoch 2103: Loss: 985.7283 | r_Loss: 98.3942 | g_Loss: 438.1642 | l_Loss: 55.5932 |
21-12-24 20:15:54.518 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:15:54.519 - INFO: Train epoch 2104: Loss: 882.8259 | r_Loss: 80.9873 | g_Loss: 410.0627 | l_Loss: 67.8269 |
21-12-24 20:17:07.843 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:17:07.844 - INFO: Train epoch 2105: Loss: 899.1950 | r_Loss: 83.1695 | g_Loss: 424.8014 | l_Loss: 58.5463 |
21-12-24 20:18:21.677 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:18:21.678 - INFO: Train epoch 2106: Loss: 783.1773 | r_Loss: 71.8375 | g_Loss: 375.1869 | l_Loss: 48.8028 |
21-12-24 20:19:35.342 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:19:35.344 - INFO: Train epoch 2107: Loss: 761.7270 | r_Loss: 69.7515 | g_Loss: 362.9575 | l_Loss: 50.0123 |
21-12-24 20:20:49.160 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:20:49.161 - INFO: Train epoch 2108: Loss: 776.4543 | r_Loss: 69.4831 | g_Loss: 379.4947 | l_Loss: 49.5439 |
21-12-24 20:22:02.467 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:22:02.468 - INFO: Train epoch 2109: Loss: 779.2716 | r_Loss: 70.7938 | g_Loss: 372.8234 | l_Loss: 52.4791 |
21-12-24 20:23:15.955 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:23:15.956 - INFO: Train epoch 2110: Loss: 744.9255 | r_Loss: 66.1459 | g_Loss: 358.1913 | l_Loss: 56.0046 |
21-12-24 20:24:29.694 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:24:29.695 - INFO: Train epoch 2111: Loss: 746.9425 | r_Loss: 67.1245 | g_Loss: 364.4836 | l_Loss: 46.8364 |
21-12-24 20:25:43.505 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:25:43.506 - INFO: Train epoch 2112: Loss: 717.2841 | r_Loss: 63.9799 | g_Loss: 355.1667 | l_Loss: 42.2178 |
21-12-24 20:26:57.440 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:26:57.441 - INFO: Train epoch 2113: Loss: 672.0792 | r_Loss: 60.9283 | g_Loss: 328.8383 | l_Loss: 38.5995 |
21-12-24 20:28:10.736 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:28:10.737 - INFO: Train epoch 2114: Loss: 690.0522 | r_Loss: 60.4152 | g_Loss: 339.5373 | l_Loss: 48.4389 |
21-12-24 20:29:24.105 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:29:24.106 - INFO: Train epoch 2115: Loss: 695.0441 | r_Loss: 61.5661 | g_Loss: 349.3268 | l_Loss: 37.8868 |
21-12-24 20:30:37.433 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:30:37.434 - INFO: Train epoch 2116: Loss: 700.4783 | r_Loss: 63.3824 | g_Loss: 340.4502 | l_Loss: 43.1159 |
21-12-24 20:31:51.342 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:31:51.343 - INFO: Train epoch 2117: Loss: 645.9065 | r_Loss: 57.9492 | g_Loss: 321.8809 | l_Loss: 34.2796 |
21-12-24 20:33:04.823 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:33:04.824 - INFO: Train epoch 2118: Loss: 662.4146 | r_Loss: 60.5944 | g_Loss: 317.4227 | l_Loss: 42.0198 |
21-12-24 20:34:18.181 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:34:18.182 - INFO: Train epoch 2119: Loss: 649.1915 | r_Loss: 59.9825 | g_Loss: 313.2727 | l_Loss: 36.0063 |
21-12-24 20:35:31.229 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:35:31.230 - INFO: Train epoch 2120: Loss: 640.7685 | r_Loss: 57.0155 | g_Loss: 318.6620 | l_Loss: 37.0288 |
21-12-24 20:36:44.713 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:36:44.715 - INFO: Train epoch 2121: Loss: 618.3744 | r_Loss: 55.2547 | g_Loss: 303.8970 | l_Loss: 38.2037 |
21-12-24 20:37:57.854 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:37:57.855 - INFO: Train epoch 2122: Loss: 621.7857 | r_Loss: 53.8981 | g_Loss: 306.0681 | l_Loss: 46.2273 |
21-12-24 20:39:10.964 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:39:10.965 - INFO: Train epoch 2123: Loss: 626.1538 | r_Loss: 55.7208 | g_Loss: 306.4480 | l_Loss: 41.1020 |
21-12-24 20:40:24.673 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:40:24.674 - INFO: Train epoch 2124: Loss: 649.8204 | r_Loss: 58.9257 | g_Loss: 312.6841 | l_Loss: 42.5076 |
21-12-24 20:41:38.256 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:41:38.257 - INFO: Train epoch 2125: Loss: 644.4760 | r_Loss: 56.8312 | g_Loss: 308.0693 | l_Loss: 52.2509 |
21-12-24 20:42:51.968 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:42:51.969 - INFO: Train epoch 2126: Loss: 599.9067 | r_Loss: 54.4805 | g_Loss: 290.1981 | l_Loss: 37.3063 |
21-12-24 20:44:05.124 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:44:05.125 - INFO: Train epoch 2127: Loss: 603.2827 | r_Loss: 55.7047 | g_Loss: 292.8055 | l_Loss: 31.9538 |
21-12-24 20:45:18.569 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:45:18.570 - INFO: Train epoch 2128: Loss: 603.6346 | r_Loss: 55.4636 | g_Loss: 287.4851 | l_Loss: 38.8315 |
21-12-24 20:46:31.937 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:46:31.939 - INFO: Train epoch 2129: Loss: 587.3629 | r_Loss: 54.3161 | g_Loss: 272.0835 | l_Loss: 43.6987 |
21-12-24 20:48:21.557 - INFO: TEST: PSNR_S: 45.6841 | PSNR_C: 37.2848 |
21-12-24 20:48:21.558 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:48:21.558 - INFO: Train epoch 2130: Loss: 569.6523 | r_Loss: 51.7360 | g_Loss: 279.4352 | l_Loss: 31.5371 |
21-12-24 20:49:34.972 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:49:34.973 - INFO: Train epoch 2131: Loss: 649.6059 | r_Loss: 63.3131 | g_Loss: 296.8643 | l_Loss: 36.1762 |
21-12-24 20:50:48.497 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:50:48.498 - INFO: Train epoch 2132: Loss: 625.8270 | r_Loss: 59.8831 | g_Loss: 286.6863 | l_Loss: 39.7253 |
21-12-24 20:52:01.591 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:52:01.592 - INFO: Train epoch 2133: Loss: 571.3463 | r_Loss: 51.2623 | g_Loss: 276.4195 | l_Loss: 38.6153 |
21-12-24 20:53:14.859 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:53:14.860 - INFO: Train epoch 2134: Loss: 581.8075 | r_Loss: 54.5641 | g_Loss: 271.7849 | l_Loss: 37.2023 |
21-12-24 20:54:28.620 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:54:28.621 - INFO: Train epoch 2135: Loss: 572.1239 | r_Loss: 52.6107 | g_Loss: 268.1429 | l_Loss: 40.9273 |
21-12-24 20:55:41.959 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:55:41.960 - INFO: Train epoch 2136: Loss: 577.8400 | r_Loss: 53.4013 | g_Loss: 271.3547 | l_Loss: 39.4790 |
21-12-24 20:56:55.280 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:56:55.282 - INFO: Train epoch 2137: Loss: 578.5917 | r_Loss: 53.7497 | g_Loss: 277.1518 | l_Loss: 32.6915 |
21-12-24 20:58:08.680 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:58:08.681 - INFO: Train epoch 2138: Loss: 585.8887 | r_Loss: 55.1588 | g_Loss: 267.8288 | l_Loss: 42.2662 |
21-12-24 20:59:22.161 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 20:59:22.163 - INFO: Train epoch 2139: Loss: 574.4831 | r_Loss: 55.7636 | g_Loss: 259.6620 | l_Loss: 36.0029 |
21-12-24 21:00:35.932 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:00:35.933 - INFO: Train epoch 2140: Loss: 560.4972 | r_Loss: 52.1855 | g_Loss: 266.1804 | l_Loss: 33.3892 |
21-12-24 21:01:49.336 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:01:49.338 - INFO: Train epoch 2141: Loss: 638.3803 | r_Loss: 66.5477 | g_Loss: 274.0852 | l_Loss: 31.5568 |
21-12-24 21:03:03.605 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:03:03.607 - INFO: Train epoch 2142: Loss: 588.6420 | r_Loss: 60.8198 | g_Loss: 254.8868 | l_Loss: 29.6563 |
21-12-24 21:04:17.207 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:04:17.208 - INFO: Train epoch 2143: Loss: 574.2052 | r_Loss: 56.8124 | g_Loss: 259.2476 | l_Loss: 30.8956 |
21-12-24 21:05:30.871 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:05:30.873 - INFO: Train epoch 2144: Loss: 544.9614 | r_Loss: 51.2841 | g_Loss: 255.2096 | l_Loss: 33.3315 |
21-12-24 21:06:44.309 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:06:44.310 - INFO: Train epoch 2145: Loss: 557.4074 | r_Loss: 53.8924 | g_Loss: 248.3736 | l_Loss: 39.5717 |
21-12-24 21:07:57.988 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:07:57.989 - INFO: Train epoch 2146: Loss: 585.8423 | r_Loss: 56.0155 | g_Loss: 273.7731 | l_Loss: 31.9919 |
21-12-24 21:09:11.717 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:09:11.718 - INFO: Train epoch 2147: Loss: 622.3124 | r_Loss: 63.9751 | g_Loss: 275.2998 | l_Loss: 27.1373 |
21-12-24 21:10:25.256 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:10:25.257 - INFO: Train epoch 2148: Loss: 594.3797 | r_Loss: 58.3430 | g_Loss: 269.0681 | l_Loss: 33.5965 |
21-12-24 21:11:39.266 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:11:39.268 - INFO: Train epoch 2149: Loss: 555.4528 | r_Loss: 55.2923 | g_Loss: 251.7839 | l_Loss: 27.2076 |
21-12-24 21:12:52.838 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:12:52.839 - INFO: Train epoch 2150: Loss: 637.5213 | r_Loss: 69.1145 | g_Loss: 258.1448 | l_Loss: 33.8042 |
21-12-24 21:14:06.454 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:14:06.456 - INFO: Train epoch 2151: Loss: 581.9780 | r_Loss: 57.7892 | g_Loss: 262.0933 | l_Loss: 30.9385 |
21-12-24 21:15:19.765 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:15:19.766 - INFO: Train epoch 2152: Loss: 554.7170 | r_Loss: 53.2336 | g_Loss: 253.9608 | l_Loss: 34.5883 |
21-12-24 21:16:33.797 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:16:33.798 - INFO: Train epoch 2153: Loss: 591.5201 | r_Loss: 57.8580 | g_Loss: 263.8130 | l_Loss: 38.4172 |
21-12-24 21:17:47.610 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:17:47.611 - INFO: Train epoch 2154: Loss: 569.0643 | r_Loss: 56.6049 | g_Loss: 255.5427 | l_Loss: 30.4971 |
21-12-24 21:19:00.944 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:19:00.945 - INFO: Train epoch 2155: Loss: 543.3545 | r_Loss: 54.1023 | g_Loss: 251.7685 | l_Loss: 21.0745 |
21-12-24 21:20:14.725 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:20:14.726 - INFO: Train epoch 2156: Loss: 598.1650 | r_Loss: 61.0127 | g_Loss: 253.7936 | l_Loss: 39.3081 |
21-12-24 21:21:28.041 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:21:28.042 - INFO: Train epoch 2157: Loss: 679.5854 | r_Loss: 78.0138 | g_Loss: 261.8476 | l_Loss: 27.6686 |
21-12-24 21:22:41.456 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:22:41.457 - INFO: Train epoch 2158: Loss: 509.4235 | r_Loss: 48.6873 | g_Loss: 238.9889 | l_Loss: 26.9980 |
21-12-24 21:23:55.254 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:23:55.255 - INFO: Train epoch 2159: Loss: 544.3661 | r_Loss: 51.7189 | g_Loss: 253.1107 | l_Loss: 32.6609 |
21-12-24 21:25:45.299 - INFO: TEST: PSNR_S: 45.9178 | PSNR_C: 37.9525 |
21-12-24 21:25:45.300 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:25:45.301 - INFO: Train epoch 2160: Loss: 584.2674 | r_Loss: 59.4333 | g_Loss: 260.3028 | l_Loss: 26.7979 |
21-12-24 21:26:58.668 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:26:58.669 - INFO: Train epoch 2161: Loss: 538.2529 | r_Loss: 53.0745 | g_Loss: 243.8108 | l_Loss: 29.0696 |
21-12-24 21:28:12.361 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:28:12.362 - INFO: Train epoch 2162: Loss: 548.1948 | r_Loss: 55.3505 | g_Loss: 239.3783 | l_Loss: 32.0643 |
21-12-24 21:29:25.981 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:29:25.982 - INFO: Train epoch 2163: Loss: 573.2271 | r_Loss: 58.2188 | g_Loss: 252.6660 | l_Loss: 29.4672 |
21-12-24 21:30:39.603 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:30:39.603 - INFO: Train epoch 2164: Loss: 568.6893 | r_Loss: 57.9410 | g_Loss: 244.0272 | l_Loss: 34.9569 |
21-12-24 21:31:53.387 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:31:53.388 - INFO: Train epoch 2165: Loss: 579.2773 | r_Loss: 60.1101 | g_Loss: 250.6473 | l_Loss: 28.0795 |
21-12-24 21:33:07.198 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:33:07.199 - INFO: Train epoch 2166: Loss: 593.2783 | r_Loss: 59.4033 | g_Loss: 258.0129 | l_Loss: 38.2488 |
21-12-24 21:34:21.029 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:34:21.031 - INFO: Train epoch 2167: Loss: 608.0112 | r_Loss: 64.8020 | g_Loss: 251.7587 | l_Loss: 32.2426 |
21-12-24 21:35:34.842 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:35:34.843 - INFO: Train epoch 2168: Loss: 547.3459 | r_Loss: 54.4931 | g_Loss: 244.5583 | l_Loss: 30.3222 |
21-12-24 21:36:49.008 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:36:49.009 - INFO: Train epoch 2169: Loss: 588.4928 | r_Loss: 60.1922 | g_Loss: 256.0623 | l_Loss: 31.4698 |
21-12-24 21:38:02.822 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:38:02.824 - INFO: Train epoch 2170: Loss: 529.8836 | r_Loss: 52.6310 | g_Loss: 239.5981 | l_Loss: 27.1307 |
21-12-24 21:39:16.732 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:39:16.733 - INFO: Train epoch 2171: Loss: 535.6014 | r_Loss: 53.9670 | g_Loss: 238.4299 | l_Loss: 27.3367 |
21-12-24 21:40:30.320 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:40:30.322 - INFO: Train epoch 2172: Loss: 556.3048 | r_Loss: 59.1984 | g_Loss: 232.0845 | l_Loss: 28.2283 |
21-12-24 21:41:43.853 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:41:43.855 - INFO: Train epoch 2173: Loss: 532.2913 | r_Loss: 52.9403 | g_Loss: 240.5537 | l_Loss: 27.0361 |
21-12-24 21:42:57.258 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:42:57.260 - INFO: Train epoch 2174: Loss: 562.1075 | r_Loss: 57.4236 | g_Loss: 245.0388 | l_Loss: 29.9509 |
21-12-24 21:44:10.469 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:44:10.470 - INFO: Train epoch 2175: Loss: 548.4500 | r_Loss: 57.3763 | g_Loss: 235.8689 | l_Loss: 25.6999 |
21-12-24 21:45:23.852 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:45:23.854 - INFO: Train epoch 2176: Loss: 563.7111 | r_Loss: 58.7508 | g_Loss: 236.6083 | l_Loss: 33.3487 |
21-12-24 21:46:37.709 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:46:37.710 - INFO: Train epoch 2177: Loss: 589.8760 | r_Loss: 62.5430 | g_Loss: 250.1051 | l_Loss: 27.0556 |
21-12-24 21:47:51.465 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:47:51.466 - INFO: Train epoch 2178: Loss: 561.9281 | r_Loss: 57.7367 | g_Loss: 241.2211 | l_Loss: 32.0233 |
21-12-24 21:49:05.315 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:49:05.316 - INFO: Train epoch 2179: Loss: 575.0350 | r_Loss: 59.4882 | g_Loss: 244.3447 | l_Loss: 33.2493 |
21-12-24 21:50:18.718 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:50:18.719 - INFO: Train epoch 2180: Loss: 585.6135 | r_Loss: 63.3769 | g_Loss: 235.9704 | l_Loss: 32.7587 |
21-12-24 21:51:32.582 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:51:32.583 - INFO: Train epoch 2181: Loss: 541.4172 | r_Loss: 53.0427 | g_Loss: 248.3192 | l_Loss: 27.8846 |
21-12-24 21:52:46.093 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:52:46.094 - INFO: Train epoch 2182: Loss: 589.6021 | r_Loss: 60.4910 | g_Loss: 254.7928 | l_Loss: 32.3542 |
21-12-24 21:53:59.383 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:53:59.384 - INFO: Train epoch 2183: Loss: 685.3559 | r_Loss: 81.8635 | g_Loss: 251.0409 | l_Loss: 24.9976 |
21-12-24 21:55:13.444 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:55:13.445 - INFO: Train epoch 2184: Loss: 550.1818 | r_Loss: 56.8561 | g_Loss: 237.1976 | l_Loss: 28.7039 |
21-12-24 21:56:27.081 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:56:27.083 - INFO: Train epoch 2185: Loss: 540.4189 | r_Loss: 54.6699 | g_Loss: 240.8385 | l_Loss: 26.2307 |
21-12-24 21:57:40.820 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:57:40.821 - INFO: Train epoch 2186: Loss: 520.5557 | r_Loss: 52.4013 | g_Loss: 227.3156 | l_Loss: 31.2335 |
21-12-24 21:58:54.438 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 21:58:54.439 - INFO: Train epoch 2187: Loss: 525.3087 | r_Loss: 51.6909 | g_Loss: 235.1048 | l_Loss: 31.7494 |
21-12-24 22:00:07.681 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:00:07.682 - INFO: Train epoch 2188: Loss: 535.2937 | r_Loss: 53.1692 | g_Loss: 233.3436 | l_Loss: 36.1042 |
21-12-24 22:01:20.982 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:01:20.984 - INFO: Train epoch 2189: Loss: 626.6628 | r_Loss: 66.9413 | g_Loss: 262.6578 | l_Loss: 29.2984 |
21-12-24 22:03:10.074 - INFO: TEST: PSNR_S: 45.9636 | PSNR_C: 38.2067 |
21-12-24 22:03:10.075 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:03:10.075 - INFO: Train epoch 2190: Loss: 547.6243 | r_Loss: 55.0541 | g_Loss: 234.4792 | l_Loss: 37.8746 |
21-12-24 22:04:23.618 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:04:23.619 - INFO: Train epoch 2191: Loss: 566.5976 | r_Loss: 58.4422 | g_Loss: 238.6080 | l_Loss: 35.7788 |
21-12-24 22:05:37.065 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:05:37.067 - INFO: Train epoch 2192: Loss: 552.9415 | r_Loss: 55.8960 | g_Loss: 241.1275 | l_Loss: 32.3338 |
21-12-24 22:06:50.793 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:06:50.794 - INFO: Train epoch 2193: Loss: 604.0481 | r_Loss: 66.1002 | g_Loss: 242.2749 | l_Loss: 31.2724 |
21-12-24 22:08:04.251 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:08:04.252 - INFO: Train epoch 2194: Loss: 567.7270 | r_Loss: 60.0358 | g_Loss: 237.0575 | l_Loss: 30.4906 |
21-12-24 22:09:17.907 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:09:17.908 - INFO: Train epoch 2195: Loss: 608.3099 | r_Loss: 67.9516 | g_Loss: 238.2721 | l_Loss: 30.2797 |
21-12-24 22:10:31.395 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:10:31.396 - INFO: Train epoch 2196: Loss: 733.5658 | r_Loss: 81.4677 | g_Loss: 291.4242 | l_Loss: 34.8032 |
21-12-24 22:11:44.620 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:11:44.621 - INFO: Train epoch 2197: Loss: 562.1504 | r_Loss: 59.1387 | g_Loss: 236.2972 | l_Loss: 30.1598 |
21-12-24 22:12:58.032 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:12:58.032 - INFO: Train epoch 2198: Loss: 522.5658 | r_Loss: 50.9258 | g_Loss: 240.7770 | l_Loss: 27.1600 |
21-12-24 22:14:11.275 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:14:11.276 - INFO: Train epoch 2199: Loss: 507.9292 | r_Loss: 49.8444 | g_Loss: 222.9874 | l_Loss: 35.7200 |
21-12-24 22:15:24.382 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:15:24.383 - INFO: Train epoch 2200: Loss: 525.8504 | r_Loss: 52.6209 | g_Loss: 237.3011 | l_Loss: 25.4446 |
21-12-24 22:16:38.066 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:16:38.067 - INFO: Train epoch 2201: Loss: 555.4937 | r_Loss: 56.6271 | g_Loss: 239.8141 | l_Loss: 32.5441 |
21-12-24 22:17:51.596 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:17:51.597 - INFO: Train epoch 2202: Loss: 9370.6059 | r_Loss: 1671.2400 | g_Loss: 909.2712 | l_Loss: 105.1348 |
21-12-24 22:19:05.445 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:19:05.446 - INFO: Train epoch 2203: Loss: 1082.3501 | r_Loss: 96.9667 | g_Loss: 527.4396 | l_Loss: 70.0770 |
21-12-24 22:20:19.400 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:20:19.401 - INFO: Train epoch 2204: Loss: 892.8647 | r_Loss: 78.3881 | g_Loss: 444.5592 | l_Loss: 56.3651 |
21-12-24 22:21:32.940 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:21:32.941 - INFO: Train epoch 2205: Loss: 847.7340 | r_Loss: 73.9008 | g_Loss: 428.0726 | l_Loss: 50.1573 |
21-12-24 22:22:46.504 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:22:46.505 - INFO: Train epoch 2206: Loss: 821.9043 | r_Loss: 72.1013 | g_Loss: 406.1403 | l_Loss: 55.2576 |
21-12-24 22:23:59.916 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:23:59.917 - INFO: Train epoch 2207: Loss: 781.5538 | r_Loss: 67.7019 | g_Loss: 397.2187 | l_Loss: 45.8255 |
21-12-24 22:25:13.585 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:25:13.586 - INFO: Train epoch 2208: Loss: 747.7819 | r_Loss: 63.8553 | g_Loss: 380.0820 | l_Loss: 48.4235 |
21-12-24 22:26:27.654 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:26:27.655 - INFO: Train epoch 2209: Loss: 736.9800 | r_Loss: 62.8040 | g_Loss: 372.5997 | l_Loss: 50.3601 |
21-12-24 22:27:41.487 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:27:41.488 - INFO: Train epoch 2210: Loss: 691.3281 | r_Loss: 61.1668 | g_Loss: 343.8013 | l_Loss: 41.6928 |
21-12-24 22:28:55.649 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:28:55.650 - INFO: Train epoch 2211: Loss: 683.6727 | r_Loss: 59.4943 | g_Loss: 343.9326 | l_Loss: 42.2685 |
21-12-24 22:30:09.355 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:30:09.356 - INFO: Train epoch 2212: Loss: 667.3518 | r_Loss: 58.8452 | g_Loss: 336.4959 | l_Loss: 36.6299 |
21-12-24 22:31:23.069 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:31:23.070 - INFO: Train epoch 2213: Loss: 628.3894 | r_Loss: 54.1896 | g_Loss: 319.9674 | l_Loss: 37.4739 |
21-12-24 22:32:36.912 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:32:36.913 - INFO: Train epoch 2214: Loss: 673.9013 | r_Loss: 59.2793 | g_Loss: 335.8018 | l_Loss: 41.7031 |
21-12-24 22:33:50.570 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:33:50.571 - INFO: Train epoch 2215: Loss: 612.5422 | r_Loss: 52.8346 | g_Loss: 310.9995 | l_Loss: 37.3695 |
21-12-24 22:35:04.378 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:35:04.379 - INFO: Train epoch 2216: Loss: 663.9732 | r_Loss: 58.1644 | g_Loss: 322.3055 | l_Loss: 50.8457 |
21-12-24 22:36:17.961 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:36:17.962 - INFO: Train epoch 2217: Loss: 620.7436 | r_Loss: 55.5886 | g_Loss: 303.0856 | l_Loss: 39.7149 |
21-12-24 22:37:31.412 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:37:31.413 - INFO: Train epoch 2218: Loss: 617.1009 | r_Loss: 53.7173 | g_Loss: 308.9065 | l_Loss: 39.6080 |
21-12-24 22:38:44.807 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:38:44.808 - INFO: Train epoch 2219: Loss: 628.9003 | r_Loss: 56.1106 | g_Loss: 316.2271 | l_Loss: 32.1205 |
21-12-24 22:40:34.463 - INFO: TEST: PSNR_S: 45.8428 | PSNR_C: 37.0659 |
21-12-24 22:40:34.464 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:40:34.465 - INFO: Train epoch 2220: Loss: 641.6720 | r_Loss: 57.3796 | g_Loss: 313.1102 | l_Loss: 41.6638 |
21-12-24 22:41:47.866 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:41:47.867 - INFO: Train epoch 2221: Loss: 584.5024 | r_Loss: 51.8279 | g_Loss: 292.5428 | l_Loss: 32.8201 |
21-12-24 22:43:01.380 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:43:01.381 - INFO: Train epoch 2222: Loss: 600.3762 | r_Loss: 54.3070 | g_Loss: 289.8842 | l_Loss: 38.9571 |
21-12-24 22:44:14.917 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:44:14.918 - INFO: Train epoch 2223: Loss: 615.2847 | r_Loss: 54.4142 | g_Loss: 304.9131 | l_Loss: 38.3005 |
21-12-24 22:45:28.232 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:45:28.234 - INFO: Train epoch 2224: Loss: 576.7264 | r_Loss: 53.2380 | g_Loss: 278.4628 | l_Loss: 32.0734 |
21-12-24 22:46:41.769 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:46:41.770 - INFO: Train epoch 2225: Loss: 600.9607 | r_Loss: 55.4335 | g_Loss: 285.1135 | l_Loss: 38.6799 |
21-12-24 22:47:55.136 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:47:55.137 - INFO: Train epoch 2226: Loss: 580.1297 | r_Loss: 54.6379 | g_Loss: 271.4629 | l_Loss: 35.4771 |
21-12-24 22:49:08.486 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:49:08.487 - INFO: Train epoch 2227: Loss: 562.3743 | r_Loss: 51.4520 | g_Loss: 270.9107 | l_Loss: 34.2035 |
21-12-24 22:50:22.192 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:50:22.193 - INFO: Train epoch 2228: Loss: 560.2322 | r_Loss: 50.5938 | g_Loss: 272.7748 | l_Loss: 34.4886 |
21-12-24 22:51:35.566 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:51:35.567 - INFO: Train epoch 2229: Loss: 563.3413 | r_Loss: 51.3419 | g_Loss: 272.2261 | l_Loss: 34.4056 |
21-12-24 22:52:49.673 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:52:49.674 - INFO: Train epoch 2230: Loss: 593.2982 | r_Loss: 56.6171 | g_Loss: 276.2645 | l_Loss: 33.9481 |
21-12-24 22:54:03.362 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:54:03.364 - INFO: Train epoch 2231: Loss: 558.3047 | r_Loss: 52.5051 | g_Loss: 265.9420 | l_Loss: 29.8370 |
21-12-24 22:55:16.933 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:55:16.935 - INFO: Train epoch 2232: Loss: 590.0099 | r_Loss: 56.6229 | g_Loss: 270.2928 | l_Loss: 36.6025 |
21-12-24 22:56:30.319 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:56:30.320 - INFO: Train epoch 2233: Loss: 517.4610 | r_Loss: 48.5732 | g_Loss: 241.0689 | l_Loss: 33.5259 |
21-12-24 22:57:43.695 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:57:43.696 - INFO: Train epoch 2234: Loss: 537.5602 | r_Loss: 49.2045 | g_Loss: 253.6910 | l_Loss: 37.8468 |
21-12-24 22:58:57.238 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 22:58:57.239 - INFO: Train epoch 2235: Loss: 561.6677 | r_Loss: 53.4126 | g_Loss: 260.1073 | l_Loss: 34.4973 |
21-12-24 23:00:10.605 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:00:10.606 - INFO: Train epoch 2236: Loss: 544.1911 | r_Loss: 52.4044 | g_Loss: 250.4940 | l_Loss: 31.6753 |
21-12-24 23:01:24.215 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:01:24.216 - INFO: Train epoch 2237: Loss: 563.1623 | r_Loss: 54.5182 | g_Loss: 258.7159 | l_Loss: 31.8554 |
21-12-24 23:02:37.834 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:02:37.835 - INFO: Train epoch 2238: Loss: 554.7508 | r_Loss: 53.4426 | g_Loss: 260.8233 | l_Loss: 26.7145 |
21-12-24 23:03:51.352 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:03:51.353 - INFO: Train epoch 2239: Loss: 533.6239 | r_Loss: 51.2934 | g_Loss: 246.9779 | l_Loss: 30.1789 |
21-12-24 23:05:05.037 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:05:05.038 - INFO: Train epoch 2240: Loss: 567.9228 | r_Loss: 55.4309 | g_Loss: 255.9483 | l_Loss: 34.8202 |
21-12-24 23:06:18.314 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:06:18.315 - INFO: Train epoch 2241: Loss: 580.2831 | r_Loss: 58.5438 | g_Loss: 251.1991 | l_Loss: 36.3653 |
21-12-24 23:07:31.338 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:07:31.339 - INFO: Train epoch 2242: Loss: 574.5751 | r_Loss: 56.6286 | g_Loss: 259.3903 | l_Loss: 32.0417 |
21-12-24 23:08:45.132 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:08:45.134 - INFO: Train epoch 2243: Loss: 575.7647 | r_Loss: 62.4061 | g_Loss: 236.8497 | l_Loss: 26.8846 |
21-12-24 23:09:59.138 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:09:59.140 - INFO: Train epoch 2244: Loss: 527.2879 | r_Loss: 49.7797 | g_Loss: 245.6156 | l_Loss: 32.7741 |
21-12-24 23:11:12.590 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:11:12.591 - INFO: Train epoch 2245: Loss: 555.4638 | r_Loss: 53.8722 | g_Loss: 255.0856 | l_Loss: 31.0173 |
21-12-24 23:12:25.858 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:12:25.860 - INFO: Train epoch 2246: Loss: 527.2573 | r_Loss: 51.8597 | g_Loss: 237.4236 | l_Loss: 30.5350 |
21-12-24 23:13:39.245 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:13:39.247 - INFO: Train epoch 2247: Loss: 523.5084 | r_Loss: 51.7867 | g_Loss: 237.2720 | l_Loss: 27.3031 |
21-12-24 23:14:52.498 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:14:52.499 - INFO: Train epoch 2248: Loss: 562.0823 | r_Loss: 55.1117 | g_Loss: 251.6608 | l_Loss: 34.8631 |
21-12-24 23:16:06.257 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:16:06.257 - INFO: Train epoch 2249: Loss: 525.1644 | r_Loss: 50.9943 | g_Loss: 238.9767 | l_Loss: 31.2159 |
21-12-24 23:17:55.885 - INFO: TEST: PSNR_S: 45.8980 | PSNR_C: 38.0806 |
21-12-24 23:17:55.887 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:17:55.887 - INFO: Train epoch 2250: Loss: 537.3012 | r_Loss: 54.8207 | g_Loss: 236.6396 | l_Loss: 26.5581 |
21-12-24 23:19:09.461 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:19:09.462 - INFO: Train epoch 2251: Loss: 552.0327 | r_Loss: 57.1658 | g_Loss: 236.8245 | l_Loss: 29.3795 |
21-12-24 23:20:23.161 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:20:23.162 - INFO: Train epoch 2252: Loss: 565.1949 | r_Loss: 55.4526 | g_Loss: 256.6073 | l_Loss: 31.3247 |
21-12-24 23:21:37.190 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:21:37.191 - INFO: Train epoch 2253: Loss: 516.6459 | r_Loss: 50.7900 | g_Loss: 235.5577 | l_Loss: 27.1383 |
21-12-24 23:22:50.976 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:22:50.977 - INFO: Train epoch 2254: Loss: 584.8931 | r_Loss: 63.2864 | g_Loss: 238.6480 | l_Loss: 29.8132 |
21-12-24 23:24:04.789 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:24:04.791 - INFO: Train epoch 2255: Loss: 524.7437 | r_Loss: 51.5452 | g_Loss: 240.0989 | l_Loss: 26.9188 |
21-12-24 23:25:18.204 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:25:18.205 - INFO: Train epoch 2256: Loss: 560.4452 | r_Loss: 53.8183 | g_Loss: 254.0767 | l_Loss: 37.2768 |
21-12-24 23:26:31.845 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:26:31.846 - INFO: Train epoch 2257: Loss: 548.5045 | r_Loss: 56.1922 | g_Loss: 241.9424 | l_Loss: 25.6012 |
21-12-24 23:27:45.637 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:27:45.638 - INFO: Train epoch 2258: Loss: 568.8508 | r_Loss: 56.8831 | g_Loss: 250.9516 | l_Loss: 33.4835 |
21-12-24 23:28:59.260 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:28:59.261 - INFO: Train epoch 2259: Loss: 515.3305 | r_Loss: 50.8720 | g_Loss: 233.2791 | l_Loss: 27.6914 |
21-12-24 23:30:12.810 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:30:12.811 - INFO: Train epoch 2260: Loss: 656.6171 | r_Loss: 78.3992 | g_Loss: 238.1589 | l_Loss: 26.4623 |
21-12-24 23:31:26.125 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:31:26.125 - INFO: Train epoch 2261: Loss: 529.3076 | r_Loss: 51.5866 | g_Loss: 239.9126 | l_Loss: 31.4623 |
21-12-24 23:32:39.253 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:32:39.254 - INFO: Train epoch 2262: Loss: 624.5541 | r_Loss: 68.5543 | g_Loss: 256.7944 | l_Loss: 24.9881 |
21-12-24 23:33:52.593 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:33:52.594 - INFO: Train epoch 2263: Loss: 499.3988 | r_Loss: 45.5909 | g_Loss: 239.5720 | l_Loss: 31.8725 |
21-12-24 23:35:05.862 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:35:05.863 - INFO: Train epoch 2264: Loss: 505.4967 | r_Loss: 48.0329 | g_Loss: 238.1155 | l_Loss: 27.2165 |
21-12-24 23:36:19.913 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:36:19.914 - INFO: Train epoch 2265: Loss: 531.9198 | r_Loss: 54.0208 | g_Loss: 238.0679 | l_Loss: 23.7480 |
21-12-24 23:37:33.379 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:37:33.380 - INFO: Train epoch 2266: Loss: 551.1444 | r_Loss: 53.6198 | g_Loss: 248.4166 | l_Loss: 34.6288 |
21-12-24 23:38:47.015 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:38:47.016 - INFO: Train epoch 2267: Loss: 563.3258 | r_Loss: 58.9211 | g_Loss: 240.8125 | l_Loss: 27.9077 |
21-12-24 23:40:01.005 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:40:01.006 - INFO: Train epoch 2268: Loss: 540.5093 | r_Loss: 56.0102 | g_Loss: 229.7371 | l_Loss: 30.7212 |
21-12-24 23:41:14.506 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:41:14.507 - INFO: Train epoch 2269: Loss: 549.7305 | r_Loss: 56.5312 | g_Loss: 240.3619 | l_Loss: 26.7126 |
21-12-24 23:42:28.085 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:42:28.086 - INFO: Train epoch 2270: Loss: 535.3517 | r_Loss: 56.1207 | g_Loss: 223.3306 | l_Loss: 31.4176 |
21-12-24 23:43:41.540 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:43:41.541 - INFO: Train epoch 2271: Loss: 548.0916 | r_Loss: 55.6452 | g_Loss: 237.6634 | l_Loss: 32.2023 |
21-12-24 23:44:55.232 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:44:55.233 - INFO: Train epoch 2272: Loss: 632.3427 | r_Loss: 65.9823 | g_Loss: 264.1156 | l_Loss: 38.3156 |
21-12-24 23:46:08.847 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:46:08.848 - INFO: Train epoch 2273: Loss: 571.3724 | r_Loss: 59.1456 | g_Loss: 239.2592 | l_Loss: 36.3852 |
21-12-24 23:47:22.646 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:47:22.647 - INFO: Train epoch 2274: Loss: 527.6513 | r_Loss: 53.1407 | g_Loss: 234.5072 | l_Loss: 27.4406 |
21-12-24 23:48:35.819 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:48:35.820 - INFO: Train epoch 2275: Loss: 546.1312 | r_Loss: 55.8121 | g_Loss: 234.2498 | l_Loss: 32.8211 |
21-12-24 23:49:49.096 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:49:49.097 - INFO: Train epoch 2276: Loss: 518.3216 | r_Loss: 51.3318 | g_Loss: 227.0802 | l_Loss: 34.5822 |
21-12-24 23:51:02.063 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:51:02.064 - INFO: Train epoch 2277: Loss: 526.9438 | r_Loss: 53.4114 | g_Loss: 227.3622 | l_Loss: 32.5245 |
21-12-24 23:52:15.441 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:52:15.442 - INFO: Train epoch 2278: Loss: 600.4945 | r_Loss: 65.8542 | g_Loss: 243.7953 | l_Loss: 27.4281 |
21-12-24 23:53:28.843 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:53:28.844 - INFO: Train epoch 2279: Loss: 556.3044 | r_Loss: 57.5616 | g_Loss: 242.9264 | l_Loss: 25.5698 |
21-12-24 23:55:18.298 - INFO: TEST: PSNR_S: 46.0362 | PSNR_C: 38.3627 |
21-12-24 23:55:18.300 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:55:18.300 - INFO: Train epoch 2280: Loss: 490.9463 | r_Loss: 47.2177 | g_Loss: 226.0017 | l_Loss: 28.8563 |
21-12-24 23:56:31.701 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:56:31.702 - INFO: Train epoch 2281: Loss: 551.1044 | r_Loss: 57.0590 | g_Loss: 236.5189 | l_Loss: 29.2906 |
21-12-24 23:57:45.108 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:57:45.109 - INFO: Train epoch 2282: Loss: 15554.4924 | r_Loss: 2878.5883 | g_Loss: 1027.4469 | l_Loss: 134.1048 |
21-12-24 23:58:58.461 - INFO: Learning rate: 6.30957344480193e-06
21-12-24 23:58:58.462 - INFO: Train epoch 2283: Loss: 1198.2839 | r_Loss: 114.8008 | g_Loss: 562.5998 | l_Loss: 61.6802 |
21-12-25 00:00:11.974 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:00:11.975 - INFO: Train epoch 2284: Loss: 981.8724 | r_Loss: 86.3036 | g_Loss: 489.1636 | l_Loss: 61.1907 |
21-12-25 00:01:25.401 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:01:25.402 - INFO: Train epoch 2285: Loss: 871.0357 | r_Loss: 73.4831 | g_Loss: 453.7753 | l_Loss: 49.8450 |
21-12-25 00:02:38.419 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:02:38.420 - INFO: Train epoch 2286: Loss: 909.6067 | r_Loss: 77.4449 | g_Loss: 470.1583 | l_Loss: 52.2241 |
21-12-25 00:03:51.422 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:03:51.422 - INFO: Train epoch 2287: Loss: 812.1475 | r_Loss: 67.3929 | g_Loss: 421.2068 | l_Loss: 53.9761 |
21-12-25 00:05:05.105 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:05:05.107 - INFO: Train epoch 2288: Loss: 771.8752 | r_Loss: 61.9152 | g_Loss: 411.0179 | l_Loss: 51.2811 |
21-12-25 00:06:18.350 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:06:18.352 - INFO: Train epoch 2289: Loss: 753.2390 | r_Loss: 59.4849 | g_Loss: 409.7472 | l_Loss: 46.0674 |
21-12-25 00:07:31.663 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:07:31.663 - INFO: Train epoch 2290: Loss: 741.5206 | r_Loss: 59.2353 | g_Loss: 399.4246 | l_Loss: 45.9194 |
21-12-25 00:08:44.928 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:08:44.929 - INFO: Train epoch 2291: Loss: 754.2676 | r_Loss: 62.7355 | g_Loss: 391.9558 | l_Loss: 48.6341 |
21-12-25 00:09:57.958 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:09:57.959 - INFO: Train epoch 2292: Loss: 704.1602 | r_Loss: 59.5543 | g_Loss: 362.7389 | l_Loss: 43.6496 |
21-12-25 00:11:11.378 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:11:11.379 - INFO: Train epoch 2293: Loss: 726.4703 | r_Loss: 60.7820 | g_Loss: 377.1049 | l_Loss: 45.4555 |
21-12-25 00:12:24.846 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:12:24.848 - INFO: Train epoch 2294: Loss: 663.6293 | r_Loss: 54.3206 | g_Loss: 340.6436 | l_Loss: 51.3829 |
21-12-25 00:13:37.707 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:13:37.708 - INFO: Train epoch 2295: Loss: 660.2029 | r_Loss: 53.6791 | g_Loss: 342.7833 | l_Loss: 49.0244 |
21-12-25 00:14:50.712 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:14:50.714 - INFO: Train epoch 2296: Loss: 700.9518 | r_Loss: 60.8736 | g_Loss: 349.7464 | l_Loss: 46.8376 |
21-12-25 00:16:04.099 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:16:04.100 - INFO: Train epoch 2297: Loss: 608.3557 | r_Loss: 48.7966 | g_Loss: 325.8371 | l_Loss: 38.5356 |
21-12-25 00:17:17.544 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:17:17.545 - INFO: Train epoch 2298: Loss: 631.1877 | r_Loss: 52.9217 | g_Loss: 332.0702 | l_Loss: 34.5092 |
21-12-25 00:18:31.117 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:18:31.118 - INFO: Train epoch 2299: Loss: 632.1683 | r_Loss: 53.0559 | g_Loss: 326.9994 | l_Loss: 39.8895 |
21-12-25 00:19:44.227 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:19:44.228 - INFO: Train epoch 2300: Loss: 593.7039 | r_Loss: 47.4100 | g_Loss: 315.3421 | l_Loss: 41.3120 |
21-12-25 00:20:57.998 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:20:57.999 - INFO: Train epoch 2301: Loss: 638.5890 | r_Loss: 54.6725 | g_Loss: 319.4002 | l_Loss: 45.8262 |
21-12-25 00:22:11.478 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:22:11.479 - INFO: Train epoch 2302: Loss: 602.2284 | r_Loss: 51.1254 | g_Loss: 313.5902 | l_Loss: 33.0111 |
21-12-25 00:23:24.815 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:23:24.817 - INFO: Train epoch 2303: Loss: 617.5701 | r_Loss: 52.0699 | g_Loss: 313.2749 | l_Loss: 43.9456 |
21-12-25 00:24:38.731 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:24:38.732 - INFO: Train epoch 2304: Loss: 578.2340 | r_Loss: 47.9656 | g_Loss: 302.8288 | l_Loss: 35.5773 |
21-12-25 00:25:52.188 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:25:52.189 - INFO: Train epoch 2305: Loss: 574.3229 | r_Loss: 49.0097 | g_Loss: 296.4165 | l_Loss: 32.8581 |
21-12-25 00:27:05.917 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:27:05.918 - INFO: Train epoch 2306: Loss: 584.3216 | r_Loss: 49.0761 | g_Loss: 299.2289 | l_Loss: 39.7121 |
21-12-25 00:28:19.179 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:28:19.180 - INFO: Train epoch 2307: Loss: 581.7638 | r_Loss: 51.0953 | g_Loss: 290.6789 | l_Loss: 35.6085 |
21-12-25 00:29:32.480 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:29:32.481 - INFO: Train epoch 2308: Loss: 593.0509 | r_Loss: 50.6168 | g_Loss: 298.0807 | l_Loss: 41.8864 |
21-12-25 00:30:45.980 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:30:45.981 - INFO: Train epoch 2309: Loss: 543.5585 | r_Loss: 44.6192 | g_Loss: 286.3366 | l_Loss: 34.1259 |
21-12-25 00:32:35.389 - INFO: TEST: PSNR_S: 46.1986 | PSNR_C: 37.1016 |
21-12-25 00:32:35.390 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:32:35.391 - INFO: Train epoch 2310: Loss: 555.2208 | r_Loss: 47.7247 | g_Loss: 282.1182 | l_Loss: 34.4790 |
21-12-25 00:33:49.347 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:33:49.348 - INFO: Train epoch 2311: Loss: 587.5638 | r_Loss: 51.1533 | g_Loss: 293.3883 | l_Loss: 38.4090 |
21-12-25 00:35:02.716 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:35:02.718 - INFO: Train epoch 2312: Loss: 565.1783 | r_Loss: 47.4908 | g_Loss: 285.5307 | l_Loss: 42.1935 |
21-12-25 00:36:16.415 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:36:16.416 - INFO: Train epoch 2313: Loss: 612.9824 | r_Loss: 56.1397 | g_Loss: 298.5032 | l_Loss: 33.7805 |
21-12-25 00:37:30.063 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:37:30.065 - INFO: Train epoch 2314: Loss: 593.2221 | r_Loss: 52.4669 | g_Loss: 291.9922 | l_Loss: 38.8954 |
21-12-25 00:38:43.708 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:38:43.709 - INFO: Train epoch 2315: Loss: 599.6309 | r_Loss: 55.7542 | g_Loss: 287.9778 | l_Loss: 32.8822 |
21-12-25 00:39:57.598 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:39:57.599 - INFO: Train epoch 2316: Loss: 547.5635 | r_Loss: 47.7261 | g_Loss: 275.9639 | l_Loss: 32.9689 |
21-12-25 00:41:11.207 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:41:11.208 - INFO: Train epoch 2317: Loss: 537.9161 | r_Loss: 47.4418 | g_Loss: 269.7167 | l_Loss: 30.9902 |
21-12-25 00:42:24.926 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:42:24.928 - INFO: Train epoch 2318: Loss: 597.1974 | r_Loss: 53.7531 | g_Loss: 289.7165 | l_Loss: 38.7156 |
21-12-25 00:43:38.558 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:43:38.559 - INFO: Train epoch 2319: Loss: 539.3552 | r_Loss: 47.6779 | g_Loss: 270.5255 | l_Loss: 30.4402 |
21-12-25 00:44:52.308 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:44:52.309 - INFO: Train epoch 2320: Loss: 574.9428 | r_Loss: 53.8114 | g_Loss: 271.4746 | l_Loss: 34.4113 |
21-12-25 00:46:05.654 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:46:05.655 - INFO: Train epoch 2321: Loss: 549.3258 | r_Loss: 48.6596 | g_Loss: 270.5582 | l_Loss: 35.4696 |
21-12-25 00:47:19.147 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:47:19.149 - INFO: Train epoch 2322: Loss: 558.1726 | r_Loss: 51.0731 | g_Loss: 270.2212 | l_Loss: 32.5858 |
21-12-25 00:48:32.808 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:48:32.809 - INFO: Train epoch 2323: Loss: 547.3787 | r_Loss: 51.4249 | g_Loss: 258.9902 | l_Loss: 31.2639 |
21-12-25 00:49:46.563 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:49:46.565 - INFO: Train epoch 2324: Loss: 535.1498 | r_Loss: 47.4887 | g_Loss: 261.5795 | l_Loss: 36.1269 |
21-12-25 00:51:00.072 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:51:00.072 - INFO: Train epoch 2325: Loss: 538.3331 | r_Loss: 48.9646 | g_Loss: 262.0272 | l_Loss: 31.4828 |
21-12-25 00:52:13.929 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:52:13.930 - INFO: Train epoch 2326: Loss: 551.3453 | r_Loss: 49.5943 | g_Loss: 265.0771 | l_Loss: 38.2965 |
21-12-25 00:53:27.530 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:53:27.532 - INFO: Train epoch 2327: Loss: 582.5015 | r_Loss: 56.8783 | g_Loss: 268.8203 | l_Loss: 29.2897 |
21-12-25 00:54:41.391 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:54:41.393 - INFO: Train epoch 2328: Loss: 557.3595 | r_Loss: 51.9218 | g_Loss: 271.0646 | l_Loss: 26.6862 |
21-12-25 00:55:54.644 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:55:54.645 - INFO: Train epoch 2329: Loss: 529.8472 | r_Loss: 48.3402 | g_Loss: 259.8088 | l_Loss: 28.3373 |
21-12-25 00:57:07.764 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:57:07.764 - INFO: Train epoch 2330: Loss: 569.4316 | r_Loss: 54.7389 | g_Loss: 259.8483 | l_Loss: 35.8886 |
21-12-25 00:58:21.210 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:58:21.212 - INFO: Train epoch 2331: Loss: 555.7792 | r_Loss: 52.2239 | g_Loss: 267.2934 | l_Loss: 27.3662 |
21-12-25 00:59:34.389 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 00:59:34.390 - INFO: Train epoch 2332: Loss: 540.7391 | r_Loss: 49.9480 | g_Loss: 258.9018 | l_Loss: 32.0973 |
21-12-25 01:00:47.800 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:00:47.800 - INFO: Train epoch 2333: Loss: 526.3609 | r_Loss: 49.4269 | g_Loss: 245.5421 | l_Loss: 33.6842 |
21-12-25 01:02:01.282 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:02:01.283 - INFO: Train epoch 2334: Loss: 571.5211 | r_Loss: 56.7671 | g_Loss: 256.0060 | l_Loss: 31.6793 |
21-12-25 01:03:14.767 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:03:14.769 - INFO: Train epoch 2335: Loss: 548.4768 | r_Loss: 50.5715 | g_Loss: 256.1366 | l_Loss: 39.4829 |
21-12-25 01:04:28.541 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:04:28.542 - INFO: Train epoch 2336: Loss: 515.4724 | r_Loss: 49.8374 | g_Loss: 239.5876 | l_Loss: 26.6977 |
21-12-25 01:05:41.985 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:05:41.986 - INFO: Train epoch 2337: Loss: 526.6212 | r_Loss: 50.2350 | g_Loss: 245.6817 | l_Loss: 29.7647 |
21-12-25 01:06:55.431 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:06:55.432 - INFO: Train epoch 2338: Loss: 537.1566 | r_Loss: 51.7775 | g_Loss: 248.6192 | l_Loss: 29.6498 |
21-12-25 01:08:08.964 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:08:08.965 - INFO: Train epoch 2339: Loss: 582.4573 | r_Loss: 56.8437 | g_Loss: 265.0380 | l_Loss: 33.2006 |
21-12-25 01:09:58.495 - INFO: TEST: PSNR_S: 46.0104 | PSNR_C: 38.0676 |
21-12-25 01:09:58.497 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:09:58.498 - INFO: Train epoch 2340: Loss: 517.0374 | r_Loss: 49.1681 | g_Loss: 242.8224 | l_Loss: 28.3744 |
21-12-25 01:11:12.224 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:11:12.225 - INFO: Train epoch 2341: Loss: 506.4868 | r_Loss: 48.0991 | g_Loss: 237.0196 | l_Loss: 28.9719 |
21-12-25 01:12:25.665 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:12:25.666 - INFO: Train epoch 2342: Loss: 556.9807 | r_Loss: 56.8940 | g_Loss: 246.1493 | l_Loss: 26.3613 |
21-12-25 01:13:38.825 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:13:38.826 - INFO: Train epoch 2343: Loss: 558.9902 | r_Loss: 55.7772 | g_Loss: 253.5045 | l_Loss: 26.5996 |
21-12-25 01:14:52.224 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:14:52.225 - INFO: Train epoch 2344: Loss: 560.5134 | r_Loss: 54.5148 | g_Loss: 254.4769 | l_Loss: 33.4627 |
21-12-25 01:16:05.582 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:16:05.583 - INFO: Train epoch 2345: Loss: 515.2969 | r_Loss: 50.3586 | g_Loss: 227.4313 | l_Loss: 36.0728 |
21-12-25 01:17:18.613 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:17:18.614 - INFO: Train epoch 2346: Loss: 548.7055 | r_Loss: 53.1401 | g_Loss: 253.3370 | l_Loss: 29.6683 |
21-12-25 01:18:31.925 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:18:31.927 - INFO: Train epoch 2347: Loss: 545.9482 | r_Loss: 57.0997 | g_Loss: 237.9592 | l_Loss: 22.4907 |
21-12-25 01:19:45.597 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:19:45.598 - INFO: Train epoch 2348: Loss: 545.1905 | r_Loss: 52.8707 | g_Loss: 245.8277 | l_Loss: 35.0092 |
21-12-25 01:20:58.896 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:20:58.898 - INFO: Train epoch 2349: Loss: 580.4621 | r_Loss: 61.8283 | g_Loss: 243.6077 | l_Loss: 27.7127 |
21-12-25 01:22:12.648 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:22:12.649 - INFO: Train epoch 2350: Loss: 528.6869 | r_Loss: 53.3037 | g_Loss: 235.2375 | l_Loss: 26.9309 |
21-12-25 01:23:26.391 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:23:26.392 - INFO: Train epoch 2351: Loss: 521.1472 | r_Loss: 50.3607 | g_Loss: 238.1780 | l_Loss: 31.1655 |
21-12-25 01:24:39.760 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:24:39.761 - INFO: Train epoch 2352: Loss: 517.1468 | r_Loss: 51.2368 | g_Loss: 235.0523 | l_Loss: 25.9107 |
21-12-25 01:25:52.814 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:25:52.815 - INFO: Train epoch 2353: Loss: 509.4376 | r_Loss: 51.5281 | g_Loss: 228.1673 | l_Loss: 23.6301 |
21-12-25 01:27:06.145 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:27:06.147 - INFO: Train epoch 2354: Loss: 554.5674 | r_Loss: 55.2176 | g_Loss: 245.7976 | l_Loss: 32.6820 |
21-12-25 01:28:19.828 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:28:19.829 - INFO: Train epoch 2355: Loss: 521.4647 | r_Loss: 52.4332 | g_Loss: 225.3365 | l_Loss: 33.9623 |
21-12-25 01:29:33.026 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:29:33.028 - INFO: Train epoch 2356: Loss: 529.6195 | r_Loss: 52.1303 | g_Loss: 243.1336 | l_Loss: 25.8343 |
21-12-25 01:30:46.598 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:30:46.599 - INFO: Train epoch 2357: Loss: 502.4849 | r_Loss: 49.9504 | g_Loss: 224.0133 | l_Loss: 28.7194 |
21-12-25 01:32:00.213 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:32:00.215 - INFO: Train epoch 2358: Loss: 628.0449 | r_Loss: 69.8588 | g_Loss: 246.0062 | l_Loss: 32.7445 |
21-12-25 01:33:13.908 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:33:13.909 - INFO: Train epoch 2359: Loss: 526.6016 | r_Loss: 52.9491 | g_Loss: 235.4561 | l_Loss: 26.3997 |
21-12-25 01:34:27.627 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:34:27.628 - INFO: Train epoch 2360: Loss: 542.9046 | r_Loss: 55.9923 | g_Loss: 238.8892 | l_Loss: 24.0539 |
21-12-25 01:35:40.886 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:35:40.887 - INFO: Train epoch 2361: Loss: 529.8635 | r_Loss: 54.6967 | g_Loss: 226.1261 | l_Loss: 30.2537 |
21-12-25 01:36:54.292 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:36:54.293 - INFO: Train epoch 2362: Loss: 520.6398 | r_Loss: 53.7164 | g_Loss: 229.8283 | l_Loss: 22.2295 |
21-12-25 01:38:07.580 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:38:07.581 - INFO: Train epoch 2363: Loss: 525.8970 | r_Loss: 53.6390 | g_Loss: 229.5370 | l_Loss: 28.1652 |
21-12-25 01:39:20.971 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:39:20.972 - INFO: Train epoch 2364: Loss: 628.1777 | r_Loss: 70.8524 | g_Loss: 244.6956 | l_Loss: 29.2201 |
21-12-25 01:40:34.465 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:40:34.466 - INFO: Train epoch 2365: Loss: 514.8101 | r_Loss: 52.3690 | g_Loss: 221.2188 | l_Loss: 31.7462 |
21-12-25 01:41:47.981 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:41:47.982 - INFO: Train epoch 2366: Loss: 511.4810 | r_Loss: 51.0970 | g_Loss: 225.1586 | l_Loss: 30.8375 |
21-12-25 01:43:01.819 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:43:01.821 - INFO: Train epoch 2367: Loss: 541.2529 | r_Loss: 55.6238 | g_Loss: 232.7183 | l_Loss: 30.4156 |
21-12-25 01:44:15.364 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:44:15.365 - INFO: Train epoch 2368: Loss: 570.0276 | r_Loss: 62.1307 | g_Loss: 234.1528 | l_Loss: 25.2213 |
21-12-25 01:45:28.745 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:45:28.746 - INFO: Train epoch 2369: Loss: 503.1099 | r_Loss: 50.6191 | g_Loss: 224.2231 | l_Loss: 25.7915 |
21-12-25 01:47:18.162 - INFO: TEST: PSNR_S: 46.2189 | PSNR_C: 38.3961 |
21-12-25 01:47:18.164 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:47:18.164 - INFO: Train epoch 2370: Loss: 684.9937 | r_Loss: 67.6447 | g_Loss: 308.7837 | l_Loss: 37.9868 |
21-12-25 01:48:31.649 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:48:31.650 - INFO: Train epoch 2371: Loss: 494.8132 | r_Loss: 46.9838 | g_Loss: 226.0647 | l_Loss: 33.8297 |
21-12-25 01:49:45.109 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:49:45.110 - INFO: Train epoch 2372: Loss: 528.4147 | r_Loss: 53.0512 | g_Loss: 227.7566 | l_Loss: 35.4022 |
21-12-25 01:50:58.565 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:50:58.566 - INFO: Train epoch 2373: Loss: 564.7626 | r_Loss: 62.4788 | g_Loss: 224.7520 | l_Loss: 27.6164 |
21-12-25 01:52:11.741 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:52:11.742 - INFO: Train epoch 2374: Loss: 511.6679 | r_Loss: 51.7063 | g_Loss: 221.3911 | l_Loss: 31.7452 |
21-12-25 01:53:25.447 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:53:25.448 - INFO: Train epoch 2375: Loss: 601.4907 | r_Loss: 66.9559 | g_Loss: 237.9568 | l_Loss: 28.7544 |
21-12-25 01:54:38.691 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:54:38.692 - INFO: Train epoch 2376: Loss: 534.5712 | r_Loss: 54.6470 | g_Loss: 233.9944 | l_Loss: 27.3416 |
21-12-25 01:55:52.028 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:55:52.029 - INFO: Train epoch 2377: Loss: 504.3491 | r_Loss: 51.2720 | g_Loss: 218.4094 | l_Loss: 29.5798 |
21-12-25 01:57:05.291 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:57:05.292 - INFO: Train epoch 2378: Loss: 500.0047 | r_Loss: 51.7101 | g_Loss: 217.2800 | l_Loss: 24.1742 |
21-12-25 01:58:18.679 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:58:18.680 - INFO: Train epoch 2379: Loss: 543.4469 | r_Loss: 57.2796 | g_Loss: 226.4684 | l_Loss: 30.5807 |
21-12-25 01:59:32.577 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 01:59:32.578 - INFO: Train epoch 2380: Loss: 17131.7992 | r_Loss: 3139.5422 | g_Loss: 1281.5427 | l_Loss: 152.5447 |
21-12-25 02:00:45.984 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:00:45.985 - INFO: Train epoch 2381: Loss: 1354.0727 | r_Loss: 148.4921 | g_Loss: 545.0004 | l_Loss: 66.6116 |
21-12-25 02:01:59.383 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:01:59.384 - INFO: Train epoch 2382: Loss: 1093.5193 | r_Loss: 118.3370 | g_Loss: 450.7965 | l_Loss: 51.0379 |
21-12-25 02:03:13.325 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:03:13.326 - INFO: Train epoch 2383: Loss: 992.9677 | r_Loss: 102.1385 | g_Loss: 426.4975 | l_Loss: 55.7777 |
21-12-25 02:04:26.901 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:04:26.902 - INFO: Train epoch 2384: Loss: 936.2768 | r_Loss: 94.5277 | g_Loss: 414.4832 | l_Loss: 49.1550 |
21-12-25 02:05:40.542 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:05:40.543 - INFO: Train epoch 2385: Loss: 927.6883 | r_Loss: 92.5370 | g_Loss: 407.4944 | l_Loss: 57.5088 |
21-12-25 02:06:53.844 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:06:53.845 - INFO: Train epoch 2386: Loss: 865.9554 | r_Loss: 83.9064 | g_Loss: 397.3185 | l_Loss: 49.1048 |
21-12-25 02:08:07.329 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:08:07.330 - INFO: Train epoch 2387: Loss: 804.4538 | r_Loss: 77.2732 | g_Loss: 372.0340 | l_Loss: 46.0536 |
21-12-25 02:09:20.984 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:09:20.985 - INFO: Train epoch 2388: Loss: 775.2244 | r_Loss: 73.7620 | g_Loss: 369.1002 | l_Loss: 37.3143 |
21-12-25 02:10:34.228 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:10:34.229 - INFO: Train epoch 2389: Loss: 752.0492 | r_Loss: 70.0609 | g_Loss: 359.4300 | l_Loss: 42.3147 |
21-12-25 02:11:47.588 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:11:47.589 - INFO: Train epoch 2390: Loss: 807.1253 | r_Loss: 76.1131 | g_Loss: 376.4182 | l_Loss: 50.1417 |
21-12-25 02:13:00.806 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:13:00.807 - INFO: Train epoch 2391: Loss: 734.2572 | r_Loss: 68.2701 | g_Loss: 347.7280 | l_Loss: 45.1789 |
21-12-25 02:14:14.519 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:14:14.520 - INFO: Train epoch 2392: Loss: 726.2777 | r_Loss: 66.3163 | g_Loss: 351.7452 | l_Loss: 42.9510 |
21-12-25 02:15:27.741 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:15:27.742 - INFO: Train epoch 2393: Loss: 724.3739 | r_Loss: 65.6116 | g_Loss: 344.6818 | l_Loss: 51.6343 |
21-12-25 02:16:40.854 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:16:40.855 - INFO: Train epoch 2394: Loss: 710.3021 | r_Loss: 63.4905 | g_Loss: 349.3163 | l_Loss: 43.5332 |
21-12-25 02:17:53.861 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:17:53.862 - INFO: Train epoch 2395: Loss: 683.6789 | r_Loss: 63.2281 | g_Loss: 330.0240 | l_Loss: 37.5146 |
21-12-25 02:19:06.879 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:19:06.880 - INFO: Train epoch 2396: Loss: 650.8397 | r_Loss: 57.9640 | g_Loss: 322.6667 | l_Loss: 38.3530 |
21-12-25 02:20:19.583 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:20:19.583 - INFO: Train epoch 2397: Loss: 664.6895 | r_Loss: 59.4739 | g_Loss: 324.9543 | l_Loss: 42.3659 |
21-12-25 02:21:32.147 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:21:32.148 - INFO: Train epoch 2398: Loss: 653.5669 | r_Loss: 59.6754 | g_Loss: 317.2515 | l_Loss: 37.9386 |
21-12-25 02:22:44.218 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:22:44.219 - INFO: Train epoch 2399: Loss: 642.1230 | r_Loss: 57.2631 | g_Loss: 314.2686 | l_Loss: 41.5388 |
21-12-25 02:24:30.730 - INFO: TEST: PSNR_S: 45.2787 | PSNR_C: 36.7859 |
21-12-25 02:24:30.731 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:24:30.731 - INFO: Train epoch 2400: Loss: 654.0469 | r_Loss: 59.5798 | g_Loss: 312.9599 | l_Loss: 43.1881 |
21-12-25 02:25:42.957 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:25:42.958 - INFO: Train epoch 2401: Loss: 667.6104 | r_Loss: 59.3684 | g_Loss: 320.3366 | l_Loss: 50.4320 |
21-12-25 02:26:55.028 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:26:55.028 - INFO: Train epoch 2402: Loss: 639.4166 | r_Loss: 58.8107 | g_Loss: 312.6307 | l_Loss: 32.7326 |
21-12-25 02:28:06.870 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:28:06.871 - INFO: Train epoch 2403: Loss: 594.1779 | r_Loss: 53.5819 | g_Loss: 292.2786 | l_Loss: 33.9896 |
21-12-25 02:29:18.790 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:29:18.791 - INFO: Train epoch 2404: Loss: 644.9507 | r_Loss: 57.1595 | g_Loss: 321.2288 | l_Loss: 37.9246 |
21-12-25 02:30:30.656 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:30:30.657 - INFO: Train epoch 2405: Loss: 579.7712 | r_Loss: 51.3926 | g_Loss: 292.3069 | l_Loss: 30.5011 |
21-12-25 02:31:42.571 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:31:42.572 - INFO: Train epoch 2406: Loss: 618.8758 | r_Loss: 55.1945 | g_Loss: 299.7042 | l_Loss: 43.1990 |
21-12-25 02:32:54.347 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:32:54.347 - INFO: Train epoch 2407: Loss: 610.3457 | r_Loss: 56.6872 | g_Loss: 291.2435 | l_Loss: 35.6663 |
21-12-25 02:34:05.992 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:34:05.993 - INFO: Train epoch 2408: Loss: 587.7059 | r_Loss: 51.3578 | g_Loss: 289.5920 | l_Loss: 41.3251 |
21-12-25 02:35:17.690 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:35:17.691 - INFO: Train epoch 2409: Loss: 579.8854 | r_Loss: 52.9739 | g_Loss: 278.9579 | l_Loss: 36.0582 |
21-12-25 02:36:29.471 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:36:29.472 - INFO: Train epoch 2410: Loss: 544.1614 | r_Loss: 47.5717 | g_Loss: 272.5361 | l_Loss: 33.7670 |
21-12-25 02:37:41.115 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:37:41.116 - INFO: Train epoch 2411: Loss: 624.0030 | r_Loss: 57.9281 | g_Loss: 299.1735 | l_Loss: 35.1889 |
21-12-25 02:38:52.934 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:38:52.935 - INFO: Train epoch 2412: Loss: 579.6454 | r_Loss: 52.7104 | g_Loss: 279.1813 | l_Loss: 36.9121 |
21-12-25 02:40:04.854 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:40:04.854 - INFO: Train epoch 2413: Loss: 582.4074 | r_Loss: 53.0019 | g_Loss: 281.6038 | l_Loss: 35.7939 |
21-12-25 02:41:16.615 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:41:16.616 - INFO: Train epoch 2414: Loss: 565.8019 | r_Loss: 51.5643 | g_Loss: 274.3907 | l_Loss: 33.5898 |
21-12-25 02:42:28.478 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:42:28.479 - INFO: Train epoch 2415: Loss: 549.3623 | r_Loss: 50.1367 | g_Loss: 261.7040 | l_Loss: 36.9748 |
21-12-25 02:43:40.096 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:43:40.097 - INFO: Train epoch 2416: Loss: 594.7902 | r_Loss: 56.1263 | g_Loss: 281.6216 | l_Loss: 32.5372 |
21-12-25 02:44:51.694 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:44:51.695 - INFO: Train epoch 2417: Loss: 564.3811 | r_Loss: 53.3273 | g_Loss: 268.5734 | l_Loss: 29.1712 |
21-12-25 02:46:03.241 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:46:03.242 - INFO: Train epoch 2418: Loss: 525.3645 | r_Loss: 47.3125 | g_Loss: 260.3907 | l_Loss: 28.4116 |
21-12-25 02:47:14.830 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:47:14.831 - INFO: Train epoch 2419: Loss: 531.3112 | r_Loss: 47.7483 | g_Loss: 261.5718 | l_Loss: 30.9979 |
21-12-25 02:48:26.317 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:48:26.318 - INFO: Train epoch 2420: Loss: 541.5926 | r_Loss: 49.0787 | g_Loss: 260.0481 | l_Loss: 36.1512 |
21-12-25 02:49:37.822 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:49:37.823 - INFO: Train epoch 2421: Loss: 576.4315 | r_Loss: 55.9181 | g_Loss: 268.2088 | l_Loss: 28.6324 |
21-12-25 02:50:49.361 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:50:49.361 - INFO: Train epoch 2422: Loss: 547.1382 | r_Loss: 51.7960 | g_Loss: 259.1356 | l_Loss: 29.0225 |
21-12-25 02:52:00.878 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:52:00.879 - INFO: Train epoch 2423: Loss: 593.3767 | r_Loss: 56.3466 | g_Loss: 277.9003 | l_Loss: 33.7436 |
21-12-25 02:53:12.342 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:53:12.343 - INFO: Train epoch 2424: Loss: 533.5651 | r_Loss: 48.5347 | g_Loss: 258.7306 | l_Loss: 32.1610 |
21-12-25 02:54:24.059 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:54:24.060 - INFO: Train epoch 2425: Loss: 557.9335 | r_Loss: 51.8688 | g_Loss: 260.4701 | l_Loss: 38.1194 |
21-12-25 02:55:35.929 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:55:35.930 - INFO: Train epoch 2426: Loss: 522.1577 | r_Loss: 49.3278 | g_Loss: 247.4121 | l_Loss: 28.1064 |
21-12-25 02:56:47.401 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:56:47.402 - INFO: Train epoch 2427: Loss: 539.6482 | r_Loss: 53.3056 | g_Loss: 248.9155 | l_Loss: 24.2049 |
21-12-25 02:57:59.142 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:57:59.143 - INFO: Train epoch 2428: Loss: 564.6049 | r_Loss: 52.8897 | g_Loss: 262.0299 | l_Loss: 38.1262 |
21-12-25 02:59:10.645 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 02:59:10.645 - INFO: Train epoch 2429: Loss: 555.5662 | r_Loss: 53.8182 | g_Loss: 257.8520 | l_Loss: 28.6233 |
21-12-25 03:00:55.816 - INFO: TEST: PSNR_S: 46.2312 | PSNR_C: 37.8948 |
21-12-25 03:00:55.817 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:00:55.818 - INFO: Train epoch 2430: Loss: 532.3445 | r_Loss: 50.5117 | g_Loss: 249.3390 | l_Loss: 30.4469 |
21-12-25 03:02:07.336 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:02:07.337 - INFO: Train epoch 2431: Loss: 514.0337 | r_Loss: 48.7943 | g_Loss: 242.5386 | l_Loss: 27.5235 |
21-12-25 03:03:18.788 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:03:18.789 - INFO: Train epoch 2432: Loss: 578.6062 | r_Loss: 58.4545 | g_Loss: 257.0507 | l_Loss: 29.2828 |
21-12-25 03:04:30.247 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:04:30.249 - INFO: Train epoch 2433: Loss: 529.6103 | r_Loss: 49.0216 | g_Loss: 252.6170 | l_Loss: 31.8851 |
21-12-25 03:05:41.741 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:05:41.742 - INFO: Train epoch 2434: Loss: 521.5708 | r_Loss: 50.4497 | g_Loss: 238.2762 | l_Loss: 31.0459 |
21-12-25 03:06:53.385 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:06:53.386 - INFO: Train epoch 2435: Loss: 570.0690 | r_Loss: 57.7315 | g_Loss: 246.4012 | l_Loss: 35.0105 |
21-12-25 03:08:04.962 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:08:04.962 - INFO: Train epoch 2436: Loss: 542.2142 | r_Loss: 50.5726 | g_Loss: 248.6518 | l_Loss: 40.6995 |
21-12-25 03:09:16.277 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:09:16.277 - INFO: Train epoch 2437: Loss: 504.9266 | r_Loss: 49.4760 | g_Loss: 234.3069 | l_Loss: 23.2397 |
21-12-25 03:10:27.791 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:10:27.792 - INFO: Train epoch 2438: Loss: 577.9817 | r_Loss: 56.9366 | g_Loss: 261.4588 | l_Loss: 31.8401 |
21-12-25 03:11:39.246 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:11:39.247 - INFO: Train epoch 2439: Loss: 673.2995 | r_Loss: 80.5479 | g_Loss: 232.7726 | l_Loss: 37.7872 |
21-12-25 03:12:50.700 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:12:50.701 - INFO: Train epoch 2440: Loss: 525.6825 | r_Loss: 50.6070 | g_Loss: 247.7199 | l_Loss: 24.9279 |
21-12-25 03:14:02.169 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:14:02.170 - INFO: Train epoch 2441: Loss: 512.4093 | r_Loss: 47.3163 | g_Loss: 244.8047 | l_Loss: 31.0230 |
21-12-25 03:15:13.561 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:15:13.562 - INFO: Train epoch 2442: Loss: 514.8897 | r_Loss: 50.0098 | g_Loss: 239.8084 | l_Loss: 25.0324 |
21-12-25 03:16:24.939 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:16:24.940 - INFO: Train epoch 2443: Loss: 544.6029 | r_Loss: 50.9063 | g_Loss: 251.9552 | l_Loss: 38.1164 |
21-12-25 03:17:36.463 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:17:36.464 - INFO: Train epoch 2444: Loss: 534.7057 | r_Loss: 53.0721 | g_Loss: 242.7644 | l_Loss: 26.5807 |
21-12-25 03:18:47.793 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:18:47.793 - INFO: Train epoch 2445: Loss: 518.2090 | r_Loss: 49.8617 | g_Loss: 244.2150 | l_Loss: 24.6855 |
21-12-25 03:19:59.163 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:19:59.164 - INFO: Train epoch 2446: Loss: 521.1395 | r_Loss: 53.1068 | g_Loss: 230.0529 | l_Loss: 25.5525 |
21-12-25 03:21:10.622 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:21:10.623 - INFO: Train epoch 2447: Loss: 531.7296 | r_Loss: 51.7054 | g_Loss: 240.0103 | l_Loss: 33.1921 |
21-12-25 03:22:21.896 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:22:21.896 - INFO: Train epoch 2448: Loss: 559.8033 | r_Loss: 59.3945 | g_Loss: 235.4942 | l_Loss: 27.3366 |
21-12-25 03:23:33.336 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:23:33.336 - INFO: Train epoch 2449: Loss: 528.3924 | r_Loss: 52.1222 | g_Loss: 242.0923 | l_Loss: 25.6891 |
21-12-25 03:24:44.756 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:24:44.757 - INFO: Train epoch 2450: Loss: 534.9117 | r_Loss: 52.9691 | g_Loss: 241.8709 | l_Loss: 28.1954 |
21-12-25 03:25:56.316 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:25:56.317 - INFO: Train epoch 2451: Loss: 490.9374 | r_Loss: 46.4724 | g_Loss: 229.9270 | l_Loss: 28.6485 |
21-12-25 03:27:07.723 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:27:07.724 - INFO: Train epoch 2452: Loss: 532.4828 | r_Loss: 54.0895 | g_Loss: 229.9531 | l_Loss: 32.0822 |
21-12-25 03:28:19.125 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:28:19.126 - INFO: Train epoch 2453: Loss: 549.6169 | r_Loss: 56.0847 | g_Loss: 237.3832 | l_Loss: 31.8104 |
21-12-25 03:29:30.590 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:29:30.591 - INFO: Train epoch 2454: Loss: 524.9898 | r_Loss: 52.1665 | g_Loss: 233.4181 | l_Loss: 30.7394 |
21-12-25 03:30:42.141 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:30:42.142 - INFO: Train epoch 2455: Loss: 559.2126 | r_Loss: 58.5581 | g_Loss: 233.9642 | l_Loss: 32.4578 |
21-12-25 03:31:53.512 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:31:53.513 - INFO: Train epoch 2456: Loss: 519.1936 | r_Loss: 52.7651 | g_Loss: 231.8444 | l_Loss: 23.5236 |
21-12-25 03:33:04.867 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:33:04.868 - INFO: Train epoch 2457: Loss: 527.9630 | r_Loss: 53.8230 | g_Loss: 230.5837 | l_Loss: 28.2642 |
21-12-25 03:34:16.317 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:34:16.318 - INFO: Train epoch 2458: Loss: 625.9518 | r_Loss: 72.2532 | g_Loss: 232.3272 | l_Loss: 32.3585 |
21-12-25 03:35:27.727 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:35:27.727 - INFO: Train epoch 2459: Loss: 519.4454 | r_Loss: 53.9559 | g_Loss: 226.0782 | l_Loss: 23.5877 |
21-12-25 03:37:12.771 - INFO: TEST: PSNR_S: 46.0173 | PSNR_C: 38.4116 |
21-12-25 03:37:12.772 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:37:12.772 - INFO: Train epoch 2460: Loss: 480.5616 | r_Loss: 47.4966 | g_Loss: 216.6562 | l_Loss: 26.4224 |
21-12-25 03:38:24.182 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:38:24.182 - INFO: Train epoch 2461: Loss: 537.2360 | r_Loss: 53.9596 | g_Loss: 234.6886 | l_Loss: 32.7496 |
21-12-25 03:39:35.628 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:39:35.629 - INFO: Train epoch 2462: Loss: 506.6324 | r_Loss: 51.2584 | g_Loss: 224.2261 | l_Loss: 26.1140 |
21-12-25 03:40:46.933 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:40:46.934 - INFO: Train epoch 2463: Loss: 500.7383 | r_Loss: 50.4235 | g_Loss: 214.5202 | l_Loss: 34.1009 |
21-12-25 03:41:58.393 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:41:58.393 - INFO: Train epoch 2464: Loss: 524.6089 | r_Loss: 53.9049 | g_Loss: 228.4945 | l_Loss: 26.5900 |
21-12-25 03:43:09.876 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:43:09.876 - INFO: Train epoch 2465: Loss: 743.3496 | r_Loss: 94.0937 | g_Loss: 239.6599 | l_Loss: 33.2211 |
21-12-25 03:44:21.291 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:44:21.292 - INFO: Train epoch 2466: Loss: 505.0781 | r_Loss: 49.9287 | g_Loss: 227.0026 | l_Loss: 28.4318 |
21-12-25 03:45:32.757 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:45:32.758 - INFO: Train epoch 2467: Loss: 519.1321 | r_Loss: 51.7094 | g_Loss: 234.2461 | l_Loss: 26.3391 |
21-12-25 03:46:44.318 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:46:44.319 - INFO: Train epoch 2468: Loss: 501.6971 | r_Loss: 50.0969 | g_Loss: 217.7218 | l_Loss: 33.4907 |
21-12-25 03:47:55.707 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:47:55.707 - INFO: Train epoch 2469: Loss: 524.8399 | r_Loss: 53.5454 | g_Loss: 227.7760 | l_Loss: 29.3370 |
21-12-25 03:49:07.321 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:49:07.322 - INFO: Train epoch 2470: Loss: 473.0527 | r_Loss: 46.4562 | g_Loss: 214.8187 | l_Loss: 25.9533 |
21-12-25 03:50:18.766 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:50:18.767 - INFO: Train epoch 2471: Loss: 473.1289 | r_Loss: 47.9015 | g_Loss: 208.9659 | l_Loss: 24.6554 |
21-12-25 03:51:30.234 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:51:30.234 - INFO: Train epoch 2472: Loss: 509.8244 | r_Loss: 53.8440 | g_Loss: 218.8916 | l_Loss: 21.7126 |
21-12-25 03:52:41.633 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:52:41.634 - INFO: Train epoch 2473: Loss: 480.3654 | r_Loss: 47.6069 | g_Loss: 217.6837 | l_Loss: 24.6472 |
21-12-25 03:53:53.121 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:53:53.122 - INFO: Train epoch 2474: Loss: 532.7886 | r_Loss: 55.9290 | g_Loss: 229.0749 | l_Loss: 24.0690 |
21-12-25 03:55:04.480 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:55:04.481 - INFO: Train epoch 2475: Loss: 521.8409 | r_Loss: 52.9491 | g_Loss: 228.9115 | l_Loss: 28.1838 |
21-12-25 03:56:15.755 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:56:15.756 - INFO: Train epoch 2476: Loss: 513.2559 | r_Loss: 52.9538 | g_Loss: 220.4581 | l_Loss: 28.0289 |
21-12-25 03:57:27.322 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:57:27.322 - INFO: Train epoch 2477: Loss: 544.3214 | r_Loss: 57.5259 | g_Loss: 230.0454 | l_Loss: 26.6463 |
21-12-25 03:58:38.672 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:58:38.673 - INFO: Train epoch 2478: Loss: 586.4746 | r_Loss: 63.0860 | g_Loss: 239.8973 | l_Loss: 31.1474 |
21-12-25 03:59:50.069 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 03:59:50.070 - INFO: Train epoch 2479: Loss: 530.5715 | r_Loss: 54.0560 | g_Loss: 234.3665 | l_Loss: 25.9248 |
21-12-25 04:01:01.453 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:01:01.453 - INFO: Train epoch 2480: Loss: 668.3572 | r_Loss: 61.3316 | g_Loss: 315.9031 | l_Loss: 45.7962 |
21-12-25 04:02:12.828 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:02:12.828 - INFO: Train epoch 2481: Loss: 508.3763 | r_Loss: 50.7561 | g_Loss: 222.3467 | l_Loss: 32.2491 |
21-12-25 04:03:24.232 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:03:24.232 - INFO: Train epoch 2482: Loss: 475.1092 | r_Loss: 47.0861 | g_Loss: 206.7997 | l_Loss: 32.8792 |
21-12-25 04:04:35.576 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:04:35.576 - INFO: Train epoch 2483: Loss: 503.1578 | r_Loss: 53.0187 | g_Loss: 210.2706 | l_Loss: 27.7937 |
21-12-25 04:05:46.983 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:05:46.984 - INFO: Train epoch 2484: Loss: 526.7890 | r_Loss: 54.0608 | g_Loss: 228.1291 | l_Loss: 28.3561 |
21-12-25 04:06:58.366 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:06:58.366 - INFO: Train epoch 2485: Loss: 485.5523 | r_Loss: 49.6892 | g_Loss: 211.2431 | l_Loss: 25.8632 |
21-12-25 04:08:09.652 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:08:09.653 - INFO: Train epoch 2486: Loss: 529.5257 | r_Loss: 55.8876 | g_Loss: 223.3456 | l_Loss: 26.7420 |
21-12-25 04:09:21.192 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:09:21.193 - INFO: Train epoch 2487: Loss: 505.2279 | r_Loss: 52.2013 | g_Loss: 213.5878 | l_Loss: 30.6338 |
21-12-25 04:10:32.606 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:10:32.607 - INFO: Train epoch 2488: Loss: 526.7861 | r_Loss: 55.2119 | g_Loss: 219.5051 | l_Loss: 31.2215 |
21-12-25 04:11:44.149 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:11:44.150 - INFO: Train epoch 2489: Loss: 553.6469 | r_Loss: 60.3268 | g_Loss: 227.6807 | l_Loss: 24.3319 |
21-12-25 04:13:29.208 - INFO: TEST: PSNR_S: 46.2123 | PSNR_C: 38.6302 |
21-12-25 04:13:29.209 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:13:29.210 - INFO: Train epoch 2490: Loss: 514.5740 | r_Loss: 53.1071 | g_Loss: 223.0622 | l_Loss: 25.9765 |
21-12-25 04:14:40.624 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:14:40.625 - INFO: Train epoch 2491: Loss: 526.3600 | r_Loss: 54.1835 | g_Loss: 223.6600 | l_Loss: 31.7824 |
21-12-25 04:15:51.954 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:15:51.954 - INFO: Train epoch 2492: Loss: 488.9454 | r_Loss: 51.0736 | g_Loss: 210.2411 | l_Loss: 23.3363 |
21-12-25 04:17:03.375 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:17:03.375 - INFO: Train epoch 2493: Loss: 479.5040 | r_Loss: 46.9320 | g_Loss: 216.9067 | l_Loss: 27.9373 |
21-12-25 04:18:14.833 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:18:14.833 - INFO: Train epoch 2494: Loss: 495.4421 | r_Loss: 50.6100 | g_Loss: 214.4719 | l_Loss: 27.9201 |
21-12-25 04:19:26.162 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:19:26.163 - INFO: Train epoch 2495: Loss: 564.3062 | r_Loss: 63.3150 | g_Loss: 224.1447 | l_Loss: 23.5866 |
21-12-25 04:20:37.650 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:20:37.651 - INFO: Train epoch 2496: Loss: 543.7808 | r_Loss: 54.6692 | g_Loss: 241.6121 | l_Loss: 28.8225 |
21-12-25 04:21:49.096 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:21:49.096 - INFO: Train epoch 2497: Loss: 495.5474 | r_Loss: 50.8746 | g_Loss: 212.8009 | l_Loss: 28.3734 |
21-12-25 04:23:00.489 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:23:00.489 - INFO: Train epoch 2498: Loss: 480.0072 | r_Loss: 47.5579 | g_Loss: 215.0250 | l_Loss: 27.1925 |
21-12-25 04:24:11.834 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:24:11.834 - INFO: Train epoch 2499: Loss: 475.4468 | r_Loss: 47.4292 | g_Loss: 211.7769 | l_Loss: 26.5237 |
21-12-25 04:25:23.162 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:25:23.162 - INFO: Train epoch 2500: Loss: 533.3730 | r_Loss: 55.9585 | g_Loss: 221.2485 | l_Loss: 32.3322 |
21-12-25 04:26:34.742 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:26:34.742 - INFO: Train epoch 2501: Loss: 506.4606 | r_Loss: 52.6755 | g_Loss: 214.1828 | l_Loss: 28.9002 |
21-12-25 04:27:46.165 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:27:46.165 - INFO: Train epoch 2502: Loss: 498.9593 | r_Loss: 50.3467 | g_Loss: 220.2078 | l_Loss: 27.0179 |
21-12-25 04:28:57.591 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:28:57.592 - INFO: Train epoch 2503: Loss: 532.4010 | r_Loss: 55.9234 | g_Loss: 224.7217 | l_Loss: 28.0623 |
21-12-25 04:30:09.004 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:30:09.004 - INFO: Train epoch 2504: Loss: 461.4187 | r_Loss: 44.9227 | g_Loss: 207.2576 | l_Loss: 29.5478 |
21-12-25 04:31:20.512 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:31:20.513 - INFO: Train epoch 2505: Loss: 524.5798 | r_Loss: 54.4271 | g_Loss: 223.8492 | l_Loss: 28.5951 |
21-12-25 04:32:31.840 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:32:31.840 - INFO: Train epoch 2506: Loss: 476.8811 | r_Loss: 51.0325 | g_Loss: 197.9785 | l_Loss: 23.7403 |
21-12-25 04:33:43.271 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:33:43.272 - INFO: Train epoch 2507: Loss: 492.0287 | r_Loss: 51.4170 | g_Loss: 208.5277 | l_Loss: 26.4160 |
21-12-25 04:34:54.729 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:34:54.730 - INFO: Train epoch 2508: Loss: 487.8066 | r_Loss: 50.1894 | g_Loss: 206.7255 | l_Loss: 30.1339 |
21-12-25 04:36:06.098 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:36:06.099 - INFO: Train epoch 2509: Loss: 13097.1434 | r_Loss: 2344.3612 | g_Loss: 1215.0778 | l_Loss: 160.2598 |
21-12-25 04:37:17.482 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:37:17.482 - INFO: Train epoch 2510: Loss: 1194.0798 | r_Loss: 112.8265 | g_Loss: 559.8247 | l_Loss: 70.1225 |
21-12-25 04:38:29.021 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:38:29.021 - INFO: Train epoch 2511: Loss: 941.5805 | r_Loss: 85.3703 | g_Loss: 460.8761 | l_Loss: 53.8527 |
21-12-25 04:39:40.674 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:39:40.674 - INFO: Train epoch 2512: Loss: 851.2711 | r_Loss: 74.6786 | g_Loss: 424.6540 | l_Loss: 53.2242 |
21-12-25 04:40:52.311 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:40:52.311 - INFO: Train epoch 2513: Loss: 778.8825 | r_Loss: 69.1431 | g_Loss: 387.0597 | l_Loss: 46.1073 |
21-12-25 04:42:03.712 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:42:03.713 - INFO: Train epoch 2514: Loss: 759.5253 | r_Loss: 65.2503 | g_Loss: 378.5102 | l_Loss: 54.7635 |
21-12-25 04:43:15.208 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:43:15.209 - INFO: Train epoch 2515: Loss: 731.5379 | r_Loss: 62.1045 | g_Loss: 375.5555 | l_Loss: 45.4601 |
21-12-25 04:44:26.746 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:44:26.747 - INFO: Train epoch 2516: Loss: 647.4414 | r_Loss: 53.0709 | g_Loss: 345.4238 | l_Loss: 36.6629 |
21-12-25 04:45:38.215 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:45:38.215 - INFO: Train epoch 2517: Loss: 665.4657 | r_Loss: 58.1614 | g_Loss: 335.2191 | l_Loss: 39.4396 |
21-12-25 04:46:49.664 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:46:49.664 - INFO: Train epoch 2518: Loss: 657.7020 | r_Loss: 55.3500 | g_Loss: 342.4506 | l_Loss: 38.5014 |
21-12-25 04:48:01.069 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:48:01.069 - INFO: Train epoch 2519: Loss: 655.9516 | r_Loss: 56.0691 | g_Loss: 331.4765 | l_Loss: 44.1297 |
21-12-25 04:49:46.100 - INFO: TEST: PSNR_S: 45.7236 | PSNR_C: 36.6793 |
21-12-25 04:49:46.101 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:49:46.101 - INFO: Train epoch 2520: Loss: 652.0832 | r_Loss: 55.9563 | g_Loss: 329.1806 | l_Loss: 43.1209 |
21-12-25 04:50:57.629 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:50:57.630 - INFO: Train epoch 2521: Loss: 639.8889 | r_Loss: 51.1653 | g_Loss: 323.0834 | l_Loss: 60.9792 |
21-12-25 04:52:09.041 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:52:09.041 - INFO: Train epoch 2522: Loss: 591.3267 | r_Loss: 49.1194 | g_Loss: 307.3110 | l_Loss: 38.4188 |
21-12-25 04:53:20.539 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:53:20.539 - INFO: Train epoch 2523: Loss: 595.2659 | r_Loss: 50.8392 | g_Loss: 297.3971 | l_Loss: 43.6727 |
21-12-25 04:54:31.995 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:54:31.995 - INFO: Train epoch 2524: Loss: 577.7930 | r_Loss: 49.9393 | g_Loss: 295.4700 | l_Loss: 32.6266 |
21-12-25 04:55:43.544 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:55:43.545 - INFO: Train epoch 2525: Loss: 598.3942 | r_Loss: 52.9362 | g_Loss: 300.9795 | l_Loss: 32.7338 |
21-12-25 04:56:54.986 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:56:54.986 - INFO: Train epoch 2526: Loss: 614.4634 | r_Loss: 54.2106 | g_Loss: 304.5258 | l_Loss: 38.8844 |
21-12-25 04:58:06.564 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:58:06.564 - INFO: Train epoch 2527: Loss: 611.7775 | r_Loss: 53.8316 | g_Loss: 299.6432 | l_Loss: 42.9762 |
21-12-25 04:59:18.016 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 04:59:18.017 - INFO: Train epoch 2528: Loss: 552.2027 | r_Loss: 47.3481 | g_Loss: 277.4898 | l_Loss: 37.9725 |
21-12-25 05:00:29.455 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:00:29.455 - INFO: Train epoch 2529: Loss: 569.2748 | r_Loss: 48.8736 | g_Loss: 287.3879 | l_Loss: 37.5190 |
21-12-25 05:01:40.954 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:01:40.955 - INFO: Train epoch 2530: Loss: 542.1972 | r_Loss: 46.8958 | g_Loss: 275.5151 | l_Loss: 32.2030 |
21-12-25 05:02:52.359 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:02:52.360 - INFO: Train epoch 2531: Loss: 558.2325 | r_Loss: 49.5172 | g_Loss: 275.3353 | l_Loss: 35.3113 |
21-12-25 05:04:03.819 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:04:03.820 - INFO: Train epoch 2532: Loss: 561.5244 | r_Loss: 48.6323 | g_Loss: 279.7420 | l_Loss: 38.6209 |
21-12-25 05:05:15.195 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:05:15.195 - INFO: Train epoch 2533: Loss: 565.6875 | r_Loss: 49.1383 | g_Loss: 276.8189 | l_Loss: 43.1771 |
21-12-25 05:06:26.557 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:06:26.558 - INFO: Train epoch 2534: Loss: 549.9360 | r_Loss: 50.3353 | g_Loss: 264.0376 | l_Loss: 34.2218 |
21-12-25 05:07:37.994 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:07:37.995 - INFO: Train epoch 2535: Loss: 530.7122 | r_Loss: 47.3468 | g_Loss: 260.4886 | l_Loss: 33.4895 |
21-12-25 05:08:49.475 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:08:49.475 - INFO: Train epoch 2536: Loss: 535.9086 | r_Loss: 47.1095 | g_Loss: 271.9069 | l_Loss: 28.4544 |
21-12-25 05:10:01.137 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:10:01.137 - INFO: Train epoch 2537: Loss: 512.9748 | r_Loss: 46.8392 | g_Loss: 247.0396 | l_Loss: 31.7392 |
21-12-25 05:11:12.436 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:11:12.436 - INFO: Train epoch 2538: Loss: 539.4229 | r_Loss: 47.7586 | g_Loss: 265.9704 | l_Loss: 34.6596 |
21-12-25 05:12:23.791 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:12:23.792 - INFO: Train epoch 2539: Loss: 495.6705 | r_Loss: 44.6151 | g_Loss: 242.2830 | l_Loss: 30.3119 |
21-12-25 05:13:35.213 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:13:35.213 - INFO: Train epoch 2540: Loss: 547.7158 | r_Loss: 51.3487 | g_Loss: 259.2914 | l_Loss: 31.6809 |
21-12-25 05:14:46.600 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:14:46.601 - INFO: Train epoch 2541: Loss: 527.1906 | r_Loss: 49.1135 | g_Loss: 253.3648 | l_Loss: 28.2582 |
21-12-25 05:15:58.114 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:15:58.114 - INFO: Train epoch 2542: Loss: 574.2122 | r_Loss: 55.3446 | g_Loss: 265.0724 | l_Loss: 32.4166 |
21-12-25 05:17:09.491 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:17:09.492 - INFO: Train epoch 2543: Loss: 514.8705 | r_Loss: 47.7477 | g_Loss: 249.2892 | l_Loss: 26.8427 |
21-12-25 05:18:20.893 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:18:20.893 - INFO: Train epoch 2544: Loss: 525.5632 | r_Loss: 49.5486 | g_Loss: 249.6842 | l_Loss: 28.1358 |
21-12-25 05:19:32.209 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:19:32.209 - INFO: Train epoch 2545: Loss: 511.1071 | r_Loss: 48.5656 | g_Loss: 242.1515 | l_Loss: 26.1276 |
21-12-25 05:20:43.544 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:20:43.545 - INFO: Train epoch 2546: Loss: 513.1423 | r_Loss: 47.2593 | g_Loss: 241.7085 | l_Loss: 35.1373 |
21-12-25 05:21:54.978 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:21:54.979 - INFO: Train epoch 2547: Loss: 506.1609 | r_Loss: 46.6949 | g_Loss: 241.8422 | l_Loss: 30.8443 |
21-12-25 05:23:06.483 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:23:06.484 - INFO: Train epoch 2548: Loss: 530.6919 | r_Loss: 50.6240 | g_Loss: 245.7501 | l_Loss: 31.8219 |
21-12-25 05:24:17.831 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:24:17.832 - INFO: Train epoch 2549: Loss: 487.7698 | r_Loss: 46.8241 | g_Loss: 230.3702 | l_Loss: 23.2794 |
21-12-25 05:26:02.900 - INFO: TEST: PSNR_S: 46.5937 | PSNR_C: 38.2370 |
21-12-25 05:26:02.901 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:26:02.901 - INFO: Train epoch 2550: Loss: 531.1269 | r_Loss: 48.7786 | g_Loss: 248.9608 | l_Loss: 38.2731 |
21-12-25 05:27:14.439 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:27:14.439 - INFO: Train epoch 2551: Loss: 482.3716 | r_Loss: 44.5362 | g_Loss: 231.0791 | l_Loss: 28.6116 |
21-12-25 05:28:25.844 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:28:25.845 - INFO: Train epoch 2552: Loss: 490.1549 | r_Loss: 47.0435 | g_Loss: 228.5041 | l_Loss: 26.4334 |
21-12-25 05:29:37.224 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:29:37.225 - INFO: Train epoch 2553: Loss: 462.1085 | r_Loss: 43.5549 | g_Loss: 220.4955 | l_Loss: 23.8386 |
21-12-25 05:30:48.562 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:30:48.563 - INFO: Train epoch 2554: Loss: 499.2391 | r_Loss: 49.9231 | g_Loss: 221.1464 | l_Loss: 28.4769 |
21-12-25 05:32:00.039 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:32:00.041 - INFO: Train epoch 2555: Loss: 475.3848 | r_Loss: 45.4951 | g_Loss: 223.4284 | l_Loss: 24.4810 |
21-12-25 05:33:11.525 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:33:11.525 - INFO: Train epoch 2556: Loss: 516.8465 | r_Loss: 53.4023 | g_Loss: 226.6185 | l_Loss: 23.2163 |
21-12-25 05:34:22.891 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:34:22.892 - INFO: Train epoch 2557: Loss: 486.2054 | r_Loss: 46.8697 | g_Loss: 218.3355 | l_Loss: 33.5216 |
21-12-25 05:35:34.263 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:35:34.263 - INFO: Train epoch 2558: Loss: 475.4511 | r_Loss: 46.1694 | g_Loss: 217.8461 | l_Loss: 26.7578 |
21-12-25 05:36:45.616 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:36:45.616 - INFO: Train epoch 2559: Loss: 513.6005 | r_Loss: 49.6571 | g_Loss: 238.2963 | l_Loss: 27.0185 |
21-12-25 05:37:57.012 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:37:57.013 - INFO: Train epoch 2560: Loss: 512.8723 | r_Loss: 54.1288 | g_Loss: 216.3177 | l_Loss: 25.9105 |
21-12-25 05:39:08.497 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:39:08.498 - INFO: Train epoch 2561: Loss: 499.4531 | r_Loss: 49.1149 | g_Loss: 227.3793 | l_Loss: 26.4992 |
21-12-25 05:40:19.931 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:40:19.932 - INFO: Train epoch 2562: Loss: 483.1891 | r_Loss: 46.5005 | g_Loss: 226.5556 | l_Loss: 24.1312 |
21-12-25 05:41:31.288 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:41:31.289 - INFO: Train epoch 2563: Loss: 461.0279 | r_Loss: 44.7644 | g_Loss: 208.5552 | l_Loss: 28.6505 |
21-12-25 05:42:42.640 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:42:42.641 - INFO: Train epoch 2564: Loss: 541.9674 | r_Loss: 58.2442 | g_Loss: 221.8322 | l_Loss: 28.9143 |
21-12-25 05:43:54.076 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:43:54.077 - INFO: Train epoch 2565: Loss: 523.6719 | r_Loss: 54.3297 | g_Loss: 219.8252 | l_Loss: 32.1980 |
21-12-25 05:45:05.514 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:45:05.515 - INFO: Train epoch 2566: Loss: 499.1330 | r_Loss: 49.8580 | g_Loss: 224.1069 | l_Loss: 25.7363 |
21-12-25 05:46:16.886 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:46:16.887 - INFO: Train epoch 2567: Loss: 516.6525 | r_Loss: 51.0949 | g_Loss: 221.9862 | l_Loss: 39.1917 |
21-12-25 05:47:28.308 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:47:28.309 - INFO: Train epoch 2568: Loss: 516.3277 | r_Loss: 52.6685 | g_Loss: 223.2629 | l_Loss: 29.7222 |
21-12-25 05:48:39.780 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:48:39.781 - INFO: Train epoch 2569: Loss: 466.9882 | r_Loss: 49.2118 | g_Loss: 195.8327 | l_Loss: 25.0965 |
21-12-25 05:49:51.294 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:49:51.296 - INFO: Train epoch 2570: Loss: 477.0271 | r_Loss: 47.2952 | g_Loss: 217.6071 | l_Loss: 22.9442 |
21-12-25 05:51:02.805 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:51:02.806 - INFO: Train epoch 2571: Loss: 484.2273 | r_Loss: 48.3601 | g_Loss: 211.2136 | l_Loss: 31.2133 |
21-12-25 05:52:14.179 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:52:14.179 - INFO: Train epoch 2572: Loss: 506.5063 | r_Loss: 49.7142 | g_Loss: 226.6092 | l_Loss: 31.3260 |
21-12-25 05:53:25.523 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:53:25.524 - INFO: Train epoch 2573: Loss: 496.8390 | r_Loss: 48.9168 | g_Loss: 221.2764 | l_Loss: 30.9786 |
21-12-25 05:54:36.841 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:54:36.842 - INFO: Train epoch 2574: Loss: 482.4966 | r_Loss: 50.7039 | g_Loss: 204.0507 | l_Loss: 24.9266 |
21-12-25 05:55:48.311 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:55:48.312 - INFO: Train epoch 2575: Loss: 520.6712 | r_Loss: 53.7816 | g_Loss: 226.4334 | l_Loss: 25.3300 |
21-12-25 05:56:59.672 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:56:59.673 - INFO: Train epoch 2576: Loss: 516.2070 | r_Loss: 53.8902 | g_Loss: 215.6935 | l_Loss: 31.0623 |
21-12-25 05:58:11.080 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:58:11.081 - INFO: Train epoch 2577: Loss: 501.9393 | r_Loss: 51.8336 | g_Loss: 218.9374 | l_Loss: 23.8340 |
21-12-25 05:59:22.474 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 05:59:22.475 - INFO: Train epoch 2578: Loss: 488.1436 | r_Loss: 50.7385 | g_Loss: 211.8733 | l_Loss: 22.5776 |
21-12-25 06:00:33.843 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:00:33.843 - INFO: Train epoch 2579: Loss: 479.0214 | r_Loss: 46.4909 | g_Loss: 216.5606 | l_Loss: 30.0061 |
21-12-25 06:02:18.984 - INFO: TEST: PSNR_S: 46.4461 | PSNR_C: 38.6120 |
21-12-25 06:02:18.985 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:02:18.985 - INFO: Train epoch 2580: Loss: 606.9228 | r_Loss: 60.3255 | g_Loss: 265.5506 | l_Loss: 39.7447 |
21-12-25 06:03:30.433 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:03:30.433 - INFO: Train epoch 2581: Loss: 487.4634 | r_Loss: 47.9742 | g_Loss: 216.3393 | l_Loss: 31.2533 |
21-12-25 06:04:41.815 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:04:41.816 - INFO: Train epoch 2582: Loss: 510.1474 | r_Loss: 53.5990 | g_Loss: 219.4326 | l_Loss: 22.7199 |
21-12-25 06:05:53.258 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:05:53.259 - INFO: Train epoch 2583: Loss: 518.7743 | r_Loss: 50.1761 | g_Loss: 235.0498 | l_Loss: 32.8439 |
21-12-25 06:07:04.697 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:07:04.698 - INFO: Train epoch 2584: Loss: 461.0568 | r_Loss: 45.6634 | g_Loss: 211.1503 | l_Loss: 21.5893 |
21-12-25 06:08:16.130 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:08:16.131 - INFO: Train epoch 2585: Loss: 495.9845 | r_Loss: 51.7031 | g_Loss: 213.6740 | l_Loss: 23.7949 |
21-12-25 06:09:27.483 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:09:27.483 - INFO: Train epoch 2586: Loss: 566.4190 | r_Loss: 64.8021 | g_Loss: 215.0585 | l_Loss: 27.3500 |
21-12-25 06:10:38.921 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:10:38.921 - INFO: Train epoch 2587: Loss: 499.5660 | r_Loss: 50.1578 | g_Loss: 222.7348 | l_Loss: 26.0423 |
21-12-25 06:11:50.379 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:11:50.380 - INFO: Train epoch 2588: Loss: 494.9228 | r_Loss: 50.7825 | g_Loss: 213.8579 | l_Loss: 27.1525 |
21-12-25 06:13:01.951 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:13:01.951 - INFO: Train epoch 2589: Loss: 483.1206 | r_Loss: 48.5553 | g_Loss: 218.5085 | l_Loss: 21.8358 |
21-12-25 06:14:13.315 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:14:13.316 - INFO: Train epoch 2590: Loss: 495.2682 | r_Loss: 51.6453 | g_Loss: 213.5151 | l_Loss: 23.5263 |
21-12-25 06:15:24.597 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:15:24.598 - INFO: Train epoch 2591: Loss: 478.1581 | r_Loss: 46.3452 | g_Loss: 216.7897 | l_Loss: 29.6421 |
21-12-25 06:16:35.939 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:16:35.940 - INFO: Train epoch 2592: Loss: 464.6821 | r_Loss: 48.0434 | g_Loss: 198.6745 | l_Loss: 25.7906 |
21-12-25 06:17:47.284 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:17:47.285 - INFO: Train epoch 2593: Loss: 465.2062 | r_Loss: 46.2408 | g_Loss: 207.1541 | l_Loss: 26.8480 |
21-12-25 06:18:58.753 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:18:58.753 - INFO: Train epoch 2594: Loss: 495.6946 | r_Loss: 50.8641 | g_Loss: 214.2105 | l_Loss: 27.1634 |
21-12-25 06:20:10.152 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:20:10.152 - INFO: Train epoch 2595: Loss: 514.1759 | r_Loss: 55.4325 | g_Loss: 212.3857 | l_Loss: 24.6279 |
21-12-25 06:21:21.688 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:21:21.689 - INFO: Train epoch 2596: Loss: 479.0516 | r_Loss: 46.7633 | g_Loss: 212.2360 | l_Loss: 32.9994 |
21-12-25 06:22:33.009 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:22:33.009 - INFO: Train epoch 2597: Loss: 518.1991 | r_Loss: 50.8990 | g_Loss: 231.1784 | l_Loss: 32.5257 |
21-12-25 06:23:44.409 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:23:44.409 - INFO: Train epoch 2598: Loss: 527.7706 | r_Loss: 54.0034 | g_Loss: 229.4943 | l_Loss: 28.2594 |
21-12-25 06:24:55.809 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:24:55.809 - INFO: Train epoch 2599: Loss: 525.3492 | r_Loss: 55.0709 | g_Loss: 221.5410 | l_Loss: 28.4537 |
21-12-25 06:26:07.258 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:26:07.259 - INFO: Train epoch 2600: Loss: 463.5351 | r_Loss: 46.5877 | g_Loss: 205.4933 | l_Loss: 25.1030 |
21-12-25 06:27:18.758 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:27:18.759 - INFO: Train epoch 2601: Loss: 525.4193 | r_Loss: 53.8425 | g_Loss: 222.0408 | l_Loss: 34.1662 |
21-12-25 06:28:30.142 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:28:30.143 - INFO: Train epoch 2602: Loss: 487.3361 | r_Loss: 49.8235 | g_Loss: 211.9399 | l_Loss: 26.2785 |
21-12-25 06:29:41.682 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:29:41.683 - INFO: Train epoch 2603: Loss: 536.6281 | r_Loss: 57.3719 | g_Loss: 219.0536 | l_Loss: 30.7153 |
21-12-25 06:30:53.026 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:30:53.026 - INFO: Train epoch 2604: Loss: 483.2541 | r_Loss: 49.0865 | g_Loss: 209.2957 | l_Loss: 28.5258 |
21-12-25 06:32:04.397 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:32:04.397 - INFO: Train epoch 2605: Loss: 480.9482 | r_Loss: 48.5561 | g_Loss: 212.4468 | l_Loss: 25.7211 |
21-12-25 06:33:15.870 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:33:15.871 - INFO: Train epoch 2606: Loss: 503.1602 | r_Loss: 51.7712 | g_Loss: 215.7921 | l_Loss: 28.5121 |
21-12-25 06:34:27.379 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:34:27.380 - INFO: Train epoch 2607: Loss: 531.0892 | r_Loss: 56.5596 | g_Loss: 220.3909 | l_Loss: 27.9003 |
21-12-25 06:35:38.752 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:35:38.753 - INFO: Train epoch 2608: Loss: 587.5714 | r_Loss: 60.1132 | g_Loss: 258.2395 | l_Loss: 28.7658 |
21-12-25 06:36:50.074 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:36:50.075 - INFO: Train epoch 2609: Loss: 468.2030 | r_Loss: 46.8871 | g_Loss: 205.1697 | l_Loss: 28.5977 |
21-12-25 06:38:35.097 - INFO: TEST: PSNR_S: 46.5736 | PSNR_C: 38.9002 |
21-12-25 06:38:35.098 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:38:35.098 - INFO: Train epoch 2610: Loss: 476.0072 | r_Loss: 48.6917 | g_Loss: 208.3713 | l_Loss: 24.1773 |
21-12-25 06:39:46.481 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:39:46.482 - INFO: Train epoch 2611: Loss: 461.5085 | r_Loss: 46.8730 | g_Loss: 200.9263 | l_Loss: 26.2174 |
21-12-25 06:40:57.795 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:40:57.795 - INFO: Train epoch 2612: Loss: 495.7278 | r_Loss: 51.3066 | g_Loss: 214.9249 | l_Loss: 24.2698 |
21-12-25 06:42:09.088 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:42:09.089 - INFO: Train epoch 2613: Loss: 501.4949 | r_Loss: 55.2506 | g_Loss: 203.6176 | l_Loss: 21.6241 |
21-12-25 06:43:20.434 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:43:20.434 - INFO: Train epoch 2614: Loss: 468.2513 | r_Loss: 47.0554 | g_Loss: 207.9060 | l_Loss: 25.0684 |
21-12-25 06:44:31.802 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:44:31.802 - INFO: Train epoch 2615: Loss: 486.8071 | r_Loss: 49.6962 | g_Loss: 214.1289 | l_Loss: 24.1969 |
21-12-25 06:45:43.111 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:45:43.112 - INFO: Train epoch 2616: Loss: 548.9882 | r_Loss: 49.9268 | g_Loss: 264.6555 | l_Loss: 34.6988 |
21-12-25 06:46:54.746 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:46:54.748 - INFO: Train epoch 2617: Loss: 447.3260 | r_Loss: 43.0672 | g_Loss: 203.5805 | l_Loss: 28.4093 |
21-12-25 06:48:06.139 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:48:06.140 - INFO: Train epoch 2618: Loss: 460.5417 | r_Loss: 47.0120 | g_Loss: 202.7282 | l_Loss: 22.7536 |
21-12-25 06:49:17.602 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:49:17.603 - INFO: Train epoch 2619: Loss: 460.0218 | r_Loss: 48.9335 | g_Loss: 193.7499 | l_Loss: 21.6042 |
21-12-25 06:50:29.080 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:50:29.080 - INFO: Train epoch 2620: Loss: 467.9618 | r_Loss: 48.1959 | g_Loss: 199.5298 | l_Loss: 27.4524 |
21-12-25 06:51:40.450 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:51:40.451 - INFO: Train epoch 2621: Loss: 491.0335 | r_Loss: 51.3541 | g_Loss: 206.1034 | l_Loss: 28.1594 |
21-12-25 06:52:51.831 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:52:51.832 - INFO: Train epoch 2622: Loss: 489.9996 | r_Loss: 51.2547 | g_Loss: 209.6301 | l_Loss: 24.0959 |
21-12-25 06:54:03.151 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:54:03.151 - INFO: Train epoch 2623: Loss: 516.1200 | r_Loss: 55.0183 | g_Loss: 215.2084 | l_Loss: 25.8203 |
21-12-25 06:55:14.408 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:55:14.409 - INFO: Train epoch 2624: Loss: 495.7952 | r_Loss: 52.8863 | g_Loss: 207.8607 | l_Loss: 23.5030 |
21-12-25 06:56:25.792 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:56:25.792 - INFO: Train epoch 2625: Loss: 489.1505 | r_Loss: 52.6698 | g_Loss: 199.1264 | l_Loss: 26.6749 |
21-12-25 06:57:37.073 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:57:37.073 - INFO: Train epoch 2626: Loss: 504.3033 | r_Loss: 52.5722 | g_Loss: 216.0066 | l_Loss: 25.4358 |
21-12-25 06:58:48.448 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:58:48.448 - INFO: Train epoch 2627: Loss: 496.2779 | r_Loss: 52.7514 | g_Loss: 210.6037 | l_Loss: 21.9175 |
21-12-25 06:59:59.940 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 06:59:59.941 - INFO: Train epoch 2628: Loss: 17473.6444 | r_Loss: 3289.7952 | g_Loss: 917.7672 | l_Loss: 106.9012 |
21-12-25 07:01:11.326 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:01:11.327 - INFO: Train epoch 2629: Loss: 1555.9582 | r_Loss: 159.4541 | g_Loss: 670.2982 | l_Loss: 88.3896 |
21-12-25 07:02:22.687 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:02:22.688 - INFO: Train epoch 2630: Loss: 1101.5687 | r_Loss: 109.6845 | g_Loss: 490.5104 | l_Loss: 62.6356 |
21-12-25 07:03:34.214 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:03:34.215 - INFO: Train epoch 2631: Loss: 1020.7480 | r_Loss: 101.5129 | g_Loss: 459.5558 | l_Loss: 53.6278 |
21-12-25 07:04:45.623 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:04:45.623 - INFO: Train epoch 2632: Loss: 908.4358 | r_Loss: 88.9581 | g_Loss: 414.0130 | l_Loss: 49.6320 |
21-12-25 07:05:56.964 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:05:56.965 - INFO: Train epoch 2633: Loss: 870.0538 | r_Loss: 84.8451 | g_Loss: 403.0071 | l_Loss: 42.8213 |
21-12-25 07:07:08.325 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:07:08.325 - INFO: Train epoch 2634: Loss: 839.9791 | r_Loss: 77.7293 | g_Loss: 395.9987 | l_Loss: 55.3338 |
21-12-25 07:08:19.782 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:08:19.783 - INFO: Train epoch 2635: Loss: 801.4016 | r_Loss: 75.3113 | g_Loss: 383.4948 | l_Loss: 41.3504 |
21-12-25 07:09:31.150 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:09:31.151 - INFO: Train epoch 2636: Loss: 756.8034 | r_Loss: 69.7931 | g_Loss: 355.7903 | l_Loss: 52.0473 |
21-12-25 07:10:42.553 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:10:42.554 - INFO: Train epoch 2637: Loss: 700.6959 | r_Loss: 62.8130 | g_Loss: 338.1548 | l_Loss: 48.4762 |
21-12-25 07:11:54.111 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:11:54.111 - INFO: Train epoch 2638: Loss: 702.9282 | r_Loss: 64.5779 | g_Loss: 337.0010 | l_Loss: 43.0379 |
21-12-25 07:13:05.689 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:13:05.690 - INFO: Train epoch 2639: Loss: 699.7741 | r_Loss: 65.3985 | g_Loss: 329.8719 | l_Loss: 42.9099 |
21-12-25 07:14:50.739 - INFO: TEST: PSNR_S: 44.8195 | PSNR_C: 36.6805 |
21-12-25 07:14:50.740 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:14:50.740 - INFO: Train epoch 2640: Loss: 689.5708 | r_Loss: 63.1441 | g_Loss: 327.5522 | l_Loss: 46.2981 |
21-12-25 07:16:02.089 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:16:02.089 - INFO: Train epoch 2641: Loss: 642.2575 | r_Loss: 57.7631 | g_Loss: 313.3600 | l_Loss: 40.0822 |
21-12-25 07:17:13.575 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:17:13.576 - INFO: Train epoch 2642: Loss: 627.9345 | r_Loss: 56.3645 | g_Loss: 306.0630 | l_Loss: 40.0491 |
21-12-25 07:18:24.966 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:18:24.967 - INFO: Train epoch 2643: Loss: 615.8475 | r_Loss: 56.5429 | g_Loss: 301.2605 | l_Loss: 31.8723 |
21-12-25 07:19:36.268 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:19:36.268 - INFO: Train epoch 2644: Loss: 627.9972 | r_Loss: 57.2363 | g_Loss: 305.6271 | l_Loss: 36.1888 |
21-12-25 07:20:47.607 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:20:47.608 - INFO: Train epoch 2645: Loss: 580.0189 | r_Loss: 51.6630 | g_Loss: 287.3789 | l_Loss: 34.3250 |
21-12-25 07:21:58.925 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:21:58.925 - INFO: Train epoch 2646: Loss: 573.2387 | r_Loss: 49.9311 | g_Loss: 292.8654 | l_Loss: 30.7181 |
21-12-25 07:23:10.278 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:23:10.279 - INFO: Train epoch 2647: Loss: 583.9423 | r_Loss: 53.2909 | g_Loss: 283.5048 | l_Loss: 33.9831 |
21-12-25 07:24:21.485 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:24:21.486 - INFO: Train epoch 2648: Loss: 605.5548 | r_Loss: 56.4830 | g_Loss: 288.4339 | l_Loss: 34.7060 |
21-12-25 07:25:32.790 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:25:32.791 - INFO: Train epoch 2649: Loss: 605.6663 | r_Loss: 55.3324 | g_Loss: 296.7406 | l_Loss: 32.2636 |
21-12-25 07:26:44.169 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:26:44.170 - INFO: Train epoch 2650: Loss: 580.4760 | r_Loss: 51.9225 | g_Loss: 288.2142 | l_Loss: 32.6493 |
21-12-25 07:27:55.686 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:27:55.687 - INFO: Train epoch 2651: Loss: 551.0341 | r_Loss: 47.2282 | g_Loss: 278.1238 | l_Loss: 36.7693 |
21-12-25 07:29:07.103 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:29:07.104 - INFO: Train epoch 2652: Loss: 564.2302 | r_Loss: 48.7854 | g_Loss: 281.6596 | l_Loss: 38.6434 |
21-12-25 07:30:18.473 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:30:18.474 - INFO: Train epoch 2653: Loss: 544.8573 | r_Loss: 47.7582 | g_Loss: 273.7220 | l_Loss: 32.3446 |
21-12-25 07:31:29.930 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:31:29.931 - INFO: Train epoch 2654: Loss: 574.9890 | r_Loss: 52.4701 | g_Loss: 287.2211 | l_Loss: 25.4176 |
21-12-25 07:32:41.374 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:32:41.374 - INFO: Train epoch 2655: Loss: 560.6316 | r_Loss: 52.6549 | g_Loss: 264.1081 | l_Loss: 33.2489 |
21-12-25 07:33:53.111 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:33:53.112 - INFO: Train epoch 2656: Loss: 529.5768 | r_Loss: 47.7969 | g_Loss: 261.0363 | l_Loss: 29.5557 |
21-12-25 07:35:06.619 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:35:06.621 - INFO: Train epoch 2657: Loss: 490.0669 | r_Loss: 42.7911 | g_Loss: 248.8164 | l_Loss: 27.2949 |
21-12-25 07:36:20.297 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:36:20.298 - INFO: Train epoch 2658: Loss: 513.5538 | r_Loss: 45.3402 | g_Loss: 252.0729 | l_Loss: 34.7799 |
21-12-25 07:37:33.793 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:37:33.794 - INFO: Train epoch 2659: Loss: 548.8489 | r_Loss: 49.6901 | g_Loss: 267.5171 | l_Loss: 32.8814 |
21-12-25 07:38:47.406 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:38:47.407 - INFO: Train epoch 2660: Loss: 519.3883 | r_Loss: 46.9668 | g_Loss: 255.6516 | l_Loss: 28.9024 |
21-12-25 07:40:00.621 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:40:00.623 - INFO: Train epoch 2661: Loss: 532.3969 | r_Loss: 49.4813 | g_Loss: 253.9351 | l_Loss: 31.0553 |
21-12-25 07:41:14.527 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:41:14.528 - INFO: Train epoch 2662: Loss: 513.5206 | r_Loss: 45.7538 | g_Loss: 254.8653 | l_Loss: 29.8861 |
21-12-25 07:42:27.622 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:42:27.623 - INFO: Train epoch 2663: Loss: 485.3695 | r_Loss: 43.6021 | g_Loss: 235.9925 | l_Loss: 31.3667 |
21-12-25 07:43:40.826 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:43:40.828 - INFO: Train epoch 2664: Loss: 547.2359 | r_Loss: 51.7315 | g_Loss: 253.8906 | l_Loss: 34.6878 |
21-12-25 07:44:54.061 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:44:54.062 - INFO: Train epoch 2665: Loss: 497.6874 | r_Loss: 43.9821 | g_Loss: 245.9662 | l_Loss: 31.8105 |
21-12-25 07:46:07.160 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:46:07.161 - INFO: Train epoch 2666: Loss: 557.8218 | r_Loss: 53.2416 | g_Loss: 258.5951 | l_Loss: 33.0190 |
21-12-25 07:47:20.566 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:47:20.567 - INFO: Train epoch 2667: Loss: 509.8515 | r_Loss: 46.8851 | g_Loss: 248.9672 | l_Loss: 26.4587 |
21-12-25 07:48:34.021 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:48:34.022 - INFO: Train epoch 2668: Loss: 518.6234 | r_Loss: 48.1671 | g_Loss: 245.5572 | l_Loss: 32.2307 |
21-12-25 07:49:47.465 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:49:47.467 - INFO: Train epoch 2669: Loss: 537.9017 | r_Loss: 49.9089 | g_Loss: 252.4515 | l_Loss: 35.9058 |
21-12-25 07:51:37.387 - INFO: TEST: PSNR_S: 46.4675 | PSNR_C: 38.1006 |
21-12-25 07:51:37.388 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:51:37.388 - INFO: Train epoch 2670: Loss: 517.0916 | r_Loss: 49.2986 | g_Loss: 244.0195 | l_Loss: 26.5792 |
21-12-25 07:52:51.359 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:52:51.360 - INFO: Train epoch 2671: Loss: 501.3219 | r_Loss: 48.1487 | g_Loss: 232.0338 | l_Loss: 28.5448 |
21-12-25 07:54:04.961 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:54:04.962 - INFO: Train epoch 2672: Loss: 495.6819 | r_Loss: 47.1518 | g_Loss: 226.0027 | l_Loss: 33.9201 |
21-12-25 07:55:18.470 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:55:18.471 - INFO: Train epoch 2673: Loss: 495.5098 | r_Loss: 46.2732 | g_Loss: 233.3655 | l_Loss: 30.7782 |
21-12-25 07:56:31.975 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:56:31.976 - INFO: Train epoch 2674: Loss: 493.6840 | r_Loss: 49.0881 | g_Loss: 225.8917 | l_Loss: 22.3515 |
21-12-25 07:57:45.681 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:57:45.682 - INFO: Train epoch 2675: Loss: 514.7359 | r_Loss: 49.2644 | g_Loss: 238.3315 | l_Loss: 30.0824 |
21-12-25 07:58:59.566 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 07:58:59.567 - INFO: Train epoch 2676: Loss: 533.1851 | r_Loss: 52.6055 | g_Loss: 234.5032 | l_Loss: 35.6543 |
21-12-25 08:00:13.227 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:00:13.228 - INFO: Train epoch 2677: Loss: 515.0404 | r_Loss: 48.6762 | g_Loss: 239.0149 | l_Loss: 32.6446 |
21-12-25 08:01:26.802 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:01:26.804 - INFO: Train epoch 2678: Loss: 443.4486 | r_Loss: 40.0162 | g_Loss: 217.4613 | l_Loss: 25.9063 |
21-12-25 08:02:40.003 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:02:40.004 - INFO: Train epoch 2679: Loss: 466.5776 | r_Loss: 43.5607 | g_Loss: 225.0300 | l_Loss: 23.7442 |
21-12-25 08:03:53.254 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:03:53.255 - INFO: Train epoch 2680: Loss: 481.5348 | r_Loss: 44.8223 | g_Loss: 230.3687 | l_Loss: 27.0547 |
21-12-25 08:05:06.706 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:05:06.707 - INFO: Train epoch 2681: Loss: 478.8752 | r_Loss: 46.2146 | g_Loss: 217.0235 | l_Loss: 30.7786 |
21-12-25 08:06:19.824 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:06:19.825 - INFO: Train epoch 2682: Loss: 495.4275 | r_Loss: 48.7154 | g_Loss: 223.9995 | l_Loss: 27.8510 |
21-12-25 08:07:33.519 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:07:33.520 - INFO: Train epoch 2683: Loss: 508.4954 | r_Loss: 49.1311 | g_Loss: 227.3670 | l_Loss: 35.4730 |
21-12-25 08:08:47.097 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:08:47.098 - INFO: Train epoch 2684: Loss: 519.2049 | r_Loss: 51.6811 | g_Loss: 234.7456 | l_Loss: 26.0540 |
21-12-25 08:10:01.190 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:10:01.191 - INFO: Train epoch 2685: Loss: 507.0130 | r_Loss: 50.8712 | g_Loss: 227.5824 | l_Loss: 25.0743 |
21-12-25 08:11:14.735 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:11:14.736 - INFO: Train epoch 2686: Loss: 475.2980 | r_Loss: 46.4129 | g_Loss: 217.7397 | l_Loss: 25.4937 |
21-12-25 08:12:28.357 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:12:28.358 - INFO: Train epoch 2687: Loss: 517.1740 | r_Loss: 51.6960 | g_Loss: 226.7598 | l_Loss: 31.9342 |
21-12-25 08:13:42.038 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:13:42.039 - INFO: Train epoch 2688: Loss: 473.1408 | r_Loss: 46.4924 | g_Loss: 214.9035 | l_Loss: 25.7753 |
21-12-25 08:14:55.325 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:14:55.326 - INFO: Train epoch 2689: Loss: 491.3363 | r_Loss: 49.0186 | g_Loss: 219.9917 | l_Loss: 26.2515 |
21-12-25 08:16:09.202 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:16:09.203 - INFO: Train epoch 2690: Loss: 486.9230 | r_Loss: 46.6619 | g_Loss: 224.9733 | l_Loss: 28.6402 |
21-12-25 08:17:22.417 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:17:22.418 - INFO: Train epoch 2691: Loss: 485.0293 | r_Loss: 48.4647 | g_Loss: 214.7118 | l_Loss: 27.9941 |
21-12-25 08:18:35.879 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:18:35.880 - INFO: Train epoch 2692: Loss: 467.2219 | r_Loss: 46.5818 | g_Loss: 212.9205 | l_Loss: 21.3924 |
21-12-25 08:19:49.330 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:19:49.331 - INFO: Train epoch 2693: Loss: 453.7348 | r_Loss: 44.5928 | g_Loss: 204.7856 | l_Loss: 25.9853 |
21-12-25 08:21:02.486 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:21:02.487 - INFO: Train epoch 2694: Loss: 516.0616 | r_Loss: 52.5295 | g_Loss: 224.6354 | l_Loss: 28.7789 |
21-12-25 08:22:16.133 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:22:16.135 - INFO: Train epoch 2695: Loss: 483.3773 | r_Loss: 48.4775 | g_Loss: 215.8105 | l_Loss: 25.1794 |
21-12-25 08:23:30.062 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:23:30.062 - INFO: Train epoch 2696: Loss: 500.3132 | r_Loss: 48.4745 | g_Loss: 229.3538 | l_Loss: 28.5867 |
21-12-25 08:24:43.550 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:24:43.551 - INFO: Train epoch 2697: Loss: 488.6547 | r_Loss: 47.6438 | g_Loss: 223.5544 | l_Loss: 26.8812 |
21-12-25 08:25:56.805 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:25:56.806 - INFO: Train epoch 2698: Loss: 485.8296 | r_Loss: 47.5704 | g_Loss: 222.1804 | l_Loss: 25.7973 |
21-12-25 08:27:10.544 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:27:10.545 - INFO: Train epoch 2699: Loss: 464.2048 | r_Loss: 44.8081 | g_Loss: 210.9088 | l_Loss: 29.2556 |
21-12-25 08:29:00.224 - INFO: TEST: PSNR_S: 46.4384 | PSNR_C: 38.6615 |
21-12-25 08:29:00.226 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:29:00.226 - INFO: Train epoch 2700: Loss: 508.4540 | r_Loss: 52.2328 | g_Loss: 224.0855 | l_Loss: 23.2044 |
21-12-25 08:30:13.387 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:30:13.389 - INFO: Train epoch 2701: Loss: 455.6639 | r_Loss: 46.7031 | g_Loss: 201.8943 | l_Loss: 20.2540 |
21-12-25 08:31:26.939 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:31:26.941 - INFO: Train epoch 2702: Loss: 468.5351 | r_Loss: 46.8879 | g_Loss: 211.0745 | l_Loss: 23.0212 |
21-12-25 08:32:40.436 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:32:40.437 - INFO: Train epoch 2703: Loss: 518.5437 | r_Loss: 53.4657 | g_Loss: 222.7540 | l_Loss: 28.4611 |
21-12-25 08:33:54.051 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:33:54.052 - INFO: Train epoch 2704: Loss: 477.0348 | r_Loss: 46.8953 | g_Loss: 212.7885 | l_Loss: 29.7698 |
21-12-25 08:35:07.535 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:35:07.536 - INFO: Train epoch 2705: Loss: 478.6535 | r_Loss: 49.3291 | g_Loss: 208.4573 | l_Loss: 23.5504 |
21-12-25 08:36:20.977 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:36:20.979 - INFO: Train epoch 2706: Loss: 456.4161 | r_Loss: 46.3455 | g_Loss: 202.6088 | l_Loss: 22.0798 |
21-12-25 08:37:34.590 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:37:34.592 - INFO: Train epoch 2707: Loss: 476.6995 | r_Loss: 48.5732 | g_Loss: 207.0518 | l_Loss: 26.7816 |
21-12-25 08:38:48.696 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:38:48.697 - INFO: Train epoch 2708: Loss: 492.2907 | r_Loss: 49.4946 | g_Loss: 216.3653 | l_Loss: 28.4525 |
21-12-25 08:40:02.410 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:40:02.412 - INFO: Train epoch 2709: Loss: 506.0249 | r_Loss: 54.0277 | g_Loss: 213.9007 | l_Loss: 21.9859 |
21-12-25 08:41:15.922 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:41:15.924 - INFO: Train epoch 2710: Loss: 486.1356 | r_Loss: 49.6746 | g_Loss: 209.3309 | l_Loss: 28.4319 |
21-12-25 08:42:29.148 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:42:29.149 - INFO: Train epoch 2711: Loss: 459.3936 | r_Loss: 45.9743 | g_Loss: 207.6655 | l_Loss: 21.8564 |
21-12-25 08:43:42.838 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:43:42.839 - INFO: Train epoch 2712: Loss: 515.9418 | r_Loss: 54.6799 | g_Loss: 216.5196 | l_Loss: 26.0228 |
21-12-25 08:44:56.341 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:44:56.342 - INFO: Train epoch 2713: Loss: 453.0714 | r_Loss: 45.5341 | g_Loss: 202.6036 | l_Loss: 22.7976 |
21-12-25 08:46:09.934 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:46:09.936 - INFO: Train epoch 2714: Loss: 482.6426 | r_Loss: 49.3251 | g_Loss: 210.5662 | l_Loss: 25.4509 |
21-12-25 08:47:23.470 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:47:23.472 - INFO: Train epoch 2715: Loss: 524.2757 | r_Loss: 56.7267 | g_Loss: 211.2584 | l_Loss: 29.3836 |
21-12-25 08:48:37.365 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:48:37.366 - INFO: Train epoch 2716: Loss: 456.0974 | r_Loss: 45.1423 | g_Loss: 203.7995 | l_Loss: 26.5866 |
21-12-25 08:49:50.930 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:49:50.931 - INFO: Train epoch 2717: Loss: 454.0880 | r_Loss: 44.2597 | g_Loss: 205.1025 | l_Loss: 27.6869 |
21-12-25 08:51:04.471 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:51:04.472 - INFO: Train epoch 2718: Loss: 513.0525 | r_Loss: 53.3520 | g_Loss: 221.8994 | l_Loss: 24.3933 |
21-12-25 08:52:17.906 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:52:17.907 - INFO: Train epoch 2719: Loss: 478.2975 | r_Loss: 48.2926 | g_Loss: 213.5096 | l_Loss: 23.3252 |
21-12-25 08:53:31.479 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:53:31.481 - INFO: Train epoch 2720: Loss: 509.7644 | r_Loss: 53.0184 | g_Loss: 221.9999 | l_Loss: 22.6725 |
21-12-25 08:54:44.920 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:54:44.921 - INFO: Train epoch 2721: Loss: 495.6359 | r_Loss: 50.5231 | g_Loss: 213.4139 | l_Loss: 29.6067 |
21-12-25 08:55:58.493 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:55:58.494 - INFO: Train epoch 2722: Loss: 488.6997 | r_Loss: 50.3069 | g_Loss: 208.1030 | l_Loss: 29.0620 |
21-12-25 08:57:12.270 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:57:12.272 - INFO: Train epoch 2723: Loss: 427.4884 | r_Loss: 41.6634 | g_Loss: 194.4649 | l_Loss: 24.7066 |
21-12-25 08:58:25.767 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:58:25.768 - INFO: Train epoch 2724: Loss: 546.6820 | r_Loss: 60.6736 | g_Loss: 216.0492 | l_Loss: 27.2648 |
21-12-25 08:59:39.086 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 08:59:39.087 - INFO: Train epoch 2725: Loss: 440.5881 | r_Loss: 43.6274 | g_Loss: 197.7720 | l_Loss: 24.6794 |
21-12-25 09:00:52.495 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:00:52.497 - INFO: Train epoch 2726: Loss: 461.5932 | r_Loss: 48.2919 | g_Loss: 198.8651 | l_Loss: 21.2686 |
21-12-25 09:02:06.076 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:02:06.077 - INFO: Train epoch 2727: Loss: 468.4431 | r_Loss: 46.8816 | g_Loss: 211.8835 | l_Loss: 22.1518 |
21-12-25 09:03:19.397 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:03:19.398 - INFO: Train epoch 2728: Loss: 478.2073 | r_Loss: 47.8581 | g_Loss: 212.6425 | l_Loss: 26.2743 |
21-12-25 09:04:32.640 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:04:32.641 - INFO: Train epoch 2729: Loss: 477.3383 | r_Loss: 49.7978 | g_Loss: 203.2278 | l_Loss: 25.1217 |
21-12-25 09:06:22.280 - INFO: TEST: PSNR_S: 46.8635 | PSNR_C: 38.8173 |
21-12-25 09:06:22.281 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:06:22.281 - INFO: Train epoch 2730: Loss: 692.8583 | r_Loss: 82.4746 | g_Loss: 248.9869 | l_Loss: 31.4985 |
21-12-25 09:07:35.611 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:07:35.612 - INFO: Train epoch 2731: Loss: 474.2635 | r_Loss: 47.8284 | g_Loss: 207.6040 | l_Loss: 27.5175 |
21-12-25 09:08:49.442 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:08:49.444 - INFO: Train epoch 2732: Loss: 439.8822 | r_Loss: 44.3799 | g_Loss: 199.5445 | l_Loss: 18.4380 |
21-12-25 09:10:02.852 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:10:02.853 - INFO: Train epoch 2733: Loss: 472.4011 | r_Loss: 46.0822 | g_Loss: 211.9969 | l_Loss: 29.9933 |
21-12-25 09:11:16.458 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:11:16.459 - INFO: Train epoch 2734: Loss: 488.4403 | r_Loss: 49.4366 | g_Loss: 214.5422 | l_Loss: 26.7153 |
21-12-25 09:12:30.000 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:12:30.001 - INFO: Train epoch 2735: Loss: 447.6338 | r_Loss: 44.9180 | g_Loss: 199.0901 | l_Loss: 23.9534 |
21-12-25 09:13:43.995 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:13:43.996 - INFO: Train epoch 2736: Loss: 447.8890 | r_Loss: 44.4735 | g_Loss: 201.4309 | l_Loss: 24.0908 |
21-12-25 09:14:57.528 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:14:57.530 - INFO: Train epoch 2737: Loss: 475.8793 | r_Loss: 49.6348 | g_Loss: 202.9841 | l_Loss: 24.7214 |
21-12-25 09:16:10.674 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:16:10.676 - INFO: Train epoch 2738: Loss: 479.4847 | r_Loss: 49.2509 | g_Loss: 212.3667 | l_Loss: 20.8634 |
21-12-25 09:17:24.307 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:17:24.309 - INFO: Train epoch 2739: Loss: 469.8435 | r_Loss: 49.1271 | g_Loss: 199.9245 | l_Loss: 24.2836 |
21-12-25 09:18:37.856 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:18:37.858 - INFO: Train epoch 2740: Loss: 498.9031 | r_Loss: 53.7568 | g_Loss: 204.7199 | l_Loss: 25.3991 |
21-12-25 09:19:51.637 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:19:51.639 - INFO: Train epoch 2741: Loss: 484.2422 | r_Loss: 48.6064 | g_Loss: 215.9395 | l_Loss: 25.2709 |
21-12-25 09:21:04.880 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:21:04.880 - INFO: Train epoch 2742: Loss: 507.0431 | r_Loss: 53.3761 | g_Loss: 214.1817 | l_Loss: 25.9806 |
21-12-25 09:22:18.550 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:22:18.551 - INFO: Train epoch 2743: Loss: 487.0943 | r_Loss: 50.2155 | g_Loss: 212.1295 | l_Loss: 23.8871 |
21-12-25 09:23:31.953 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:23:31.954 - INFO: Train epoch 2744: Loss: 470.2634 | r_Loss: 46.7981 | g_Loss: 205.6625 | l_Loss: 30.6101 |
21-12-25 09:24:45.207 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:24:45.208 - INFO: Train epoch 2745: Loss: 476.9521 | r_Loss: 49.9028 | g_Loss: 202.1049 | l_Loss: 25.3330 |
21-12-25 09:25:58.588 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:25:58.589 - INFO: Train epoch 2746: Loss: 450.8879 | r_Loss: 44.8835 | g_Loss: 200.8577 | l_Loss: 25.6126 |
21-12-25 09:27:12.465 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:27:12.466 - INFO: Train epoch 2747: Loss: 525.4247 | r_Loss: 57.3748 | g_Loss: 214.5028 | l_Loss: 24.0478 |
21-12-25 09:28:26.253 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:28:26.254 - INFO: Train epoch 2748: Loss: 544.0491 | r_Loss: 59.7761 | g_Loss: 221.2333 | l_Loss: 23.9356 |
21-12-25 09:29:40.193 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:29:40.194 - INFO: Train epoch 2749: Loss: 493.7104 | r_Loss: 50.4416 | g_Loss: 211.4403 | l_Loss: 30.0622 |
21-12-25 09:30:54.167 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:30:54.168 - INFO: Train epoch 2750: Loss: 437.8213 | r_Loss: 42.5260 | g_Loss: 196.2963 | l_Loss: 28.8950 |
21-12-25 09:32:07.746 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:32:07.747 - INFO: Train epoch 2751: Loss: 467.4733 | r_Loss: 45.9122 | g_Loss: 209.6056 | l_Loss: 28.3064 |
21-12-25 09:33:21.279 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:33:21.280 - INFO: Train epoch 2752: Loss: 469.2616 | r_Loss: 46.7846 | g_Loss: 209.4615 | l_Loss: 25.8772 |
21-12-25 09:34:34.243 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:34:34.244 - INFO: Train epoch 2753: Loss: 490.2742 | r_Loss: 50.0669 | g_Loss: 210.8956 | l_Loss: 29.0442 |
21-12-25 09:35:47.996 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:35:47.997 - INFO: Train epoch 2754: Loss: 539.8011 | r_Loss: 51.9971 | g_Loss: 250.4828 | l_Loss: 29.3325 |
21-12-25 09:37:01.592 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:37:01.593 - INFO: Train epoch 2755: Loss: 454.9907 | r_Loss: 44.6469 | g_Loss: 204.6962 | l_Loss: 27.0600 |
21-12-25 09:38:14.700 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:38:14.702 - INFO: Train epoch 2756: Loss: 435.5329 | r_Loss: 44.3764 | g_Loss: 188.7650 | l_Loss: 24.8857 |
21-12-25 09:39:28.110 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:39:28.111 - INFO: Train epoch 2757: Loss: 483.7542 | r_Loss: 52.3790 | g_Loss: 200.9783 | l_Loss: 20.8810 |
21-12-25 09:40:41.635 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:40:41.636 - INFO: Train epoch 2758: Loss: 462.2755 | r_Loss: 47.4588 | g_Loss: 202.0374 | l_Loss: 22.9443 |
21-12-25 09:41:54.838 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:41:54.839 - INFO: Train epoch 2759: Loss: 464.3709 | r_Loss: 47.9350 | g_Loss: 200.9198 | l_Loss: 23.7760 |
21-12-25 09:43:44.400 - INFO: TEST: PSNR_S: 33.9373 | PSNR_C: 30.9399 |
21-12-25 09:43:44.402 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:43:44.403 - INFO: Train epoch 2760: Loss: 9759.0377 | r_Loss: 1845.9202 | g_Loss: 467.6803 | l_Loss: 61.7570 |
21-12-25 09:44:58.022 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:44:58.023 - INFO: Train epoch 2761: Loss: 1564.9100 | r_Loss: 181.5903 | g_Loss: 577.4706 | l_Loss: 79.4876 |
21-12-25 09:46:11.937 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:46:11.938 - INFO: Train epoch 2762: Loss: 984.3894 | r_Loss: 101.1467 | g_Loss: 427.9016 | l_Loss: 50.7546 |
21-12-25 09:47:25.054 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:47:25.055 - INFO: Train epoch 2763: Loss: 855.4714 | r_Loss: 80.6128 | g_Loss: 399.5837 | l_Loss: 52.8238 |
21-12-25 09:48:38.702 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:48:38.703 - INFO: Train epoch 2764: Loss: 743.3869 | r_Loss: 68.0136 | g_Loss: 354.7352 | l_Loss: 48.5839 |
21-12-25 09:49:51.780 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:49:51.781 - INFO: Train epoch 2765: Loss: 708.5815 | r_Loss: 63.3438 | g_Loss: 339.8338 | l_Loss: 52.0289 |
21-12-25 09:51:05.401 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:51:05.402 - INFO: Train epoch 2766: Loss: 734.7515 | r_Loss: 67.5651 | g_Loss: 353.0044 | l_Loss: 43.9214 |
21-12-25 09:52:18.930 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:52:18.931 - INFO: Train epoch 2767: Loss: 700.1715 | r_Loss: 63.3429 | g_Loss: 333.9655 | l_Loss: 49.4915 |
21-12-25 09:53:32.112 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:53:32.113 - INFO: Train epoch 2768: Loss: 665.8520 | r_Loss: 60.3400 | g_Loss: 325.8033 | l_Loss: 38.3489 |
21-12-25 09:54:45.771 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:54:45.773 - INFO: Train epoch 2769: Loss: 686.2235 | r_Loss: 61.7587 | g_Loss: 332.0010 | l_Loss: 45.4288 |
21-12-25 09:55:59.124 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:55:59.125 - INFO: Train epoch 2770: Loss: 653.9449 | r_Loss: 60.4577 | g_Loss: 308.6261 | l_Loss: 43.0304 |
21-12-25 09:57:12.401 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:57:12.401 - INFO: Train epoch 2771: Loss: 617.5961 | r_Loss: 57.3774 | g_Loss: 297.2568 | l_Loss: 33.4521 |
21-12-25 09:58:25.802 - INFO: Learning rate: 6.30957344480193e-06
21-12-25 09:58:25.803 - INFO: Train epoch 2772: Loss: 577.6340 | r_Loss: 53.7793 | g_Loss: 280.5089 | l_Loss: 28.2287 |
================================================
FILE: model/1
================================================
================================================
FILE: model.py
================================================
import torch.optim
import torch.nn as nn
import config as c
from hinet import Hinet
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.model = Hinet()
def forward(self, x, rev=False):
if not rev:
out = self.model(x)
else:
out = self.model(x, rev=True)
return out
def init_model(mod):
for key, param in mod.named_parameters():
split = key.split('.')
if param.requires_grad:
param.data = c.init_scale * torch.randn(param.data.shape).cuda()
if split[-2] == 'conv5':
param.data.fill_(0.)
================================================
FILE: modules/Unet_common.py
================================================
import math
import torch
import torch.nn as nn
from modules import module_util as mutil
import functools
def default_conv(in_channels, out_channels, kernel_size, bias=True, dilation=1, use_snorm=False):
if use_snorm:
return nn.utils.spectral_norm(nn.Conv2d(
in_channels, out_channels, kernel_size,
padding=(kernel_size//2)+dilation-1, bias=bias, dilation=dilation))
else:
return nn.Conv2d(
in_channels, out_channels, kernel_size,
padding=(kernel_size//2)+dilation-1, bias=bias, dilation=dilation)
def default_conv1(in_channels, out_channels, kernel_size, bias=True, groups=3, use_snorm=False):
if use_snorm:
return nn.utils.spectral_norm(nn.Conv2d(
in_channels,out_channels, kernel_size,
padding=(kernel_size//2), bias=bias, groups=groups))
else:
return nn.Conv2d(
in_channels,out_channels, kernel_size,
padding=(kernel_size//2), bias=bias, groups=groups)
def default_conv3d(in_channels, out_channels, kernel_size, t_kernel=3, bias=True, dilation=1, groups=1, use_snorm=False):
if use_snorm:
return nn.utils.spectral_norm(nn.Conv3d(
in_channels,out_channels, (t_kernel, kernel_size, kernel_size), stride=1,
padding=(0,kernel_size//2,kernel_size//2), bias=bias, dilation=dilation, groups=groups))
else:
return nn.Conv3d(
in_channels,out_channels, (t_kernel, kernel_size, kernel_size), stride=1,
padding=(0,kernel_size//2,kernel_size//2), bias=bias, dilation=dilation, groups=groups)
#def shuffle_channel()
def channel_shuffle(x, groups):
batchsize, num_channels, height, width = x.size()
channels_per_group = num_channels // groups
# reshape
x = x.view(batchsize, groups,
channels_per_group, height, width)
x = torch.transpose(x, 1, 2).contiguous()
# flatten
x = x.view(batchsize, -1, height, width)
return x
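`channel_shuffle` interleaves channels across groups, as in ShuffleNet. A minimal NumPy sketch of the same reshape/transpose permutation (NumPy standing in for torch so it runs CPU-only; the helper name mirrors the function above but is a separate illustration):

```python
import numpy as np

def channel_shuffle(x, groups):
    # Same permutation as the torch version above: split channels into
    # `groups` blocks, then interleave one channel from each block.
    n, c, h, w = x.shape
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap group and per-group axes
    return x.reshape(n, c, h, w)

x = np.arange(6).reshape(1, 6, 1, 1)
y = channel_shuffle(x, 2)
# channels (0,1,2,3,4,5) in 2 groups become (0,3,1,4,2,5)
```

Note that shuffling again with `groups = channels_per_group` undoes the permutation, so the operation is trivially invertible.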
def pixel_down_shuffle(x, downscale_factor):
    batchsize, num_channels, height, width = x.size()
    out_height = height // downscale_factor
    out_width = width // downscale_factor
    input_view = x.contiguous().view(batchsize, num_channels, out_height, downscale_factor,
                                     out_width, downscale_factor)
    num_channels *= downscale_factor ** 2
    unshuffle_out = input_view.permute(0, 1, 3, 5, 2, 4).contiguous()
    return unshuffle_out.view(batchsize, num_channels, out_height, out_width)
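`pixel_down_shuffle` is an inverse pixel shuffle: each `r x r` spatial block is folded into `r*r` channels. A small NumPy sketch of the same view/permute sequence (NumPy in place of torch, purely for illustration):

```python
import numpy as np

def pixel_down_shuffle(x, r):
    # Fold every r x r spatial block into r*r channels,
    # mirroring the view/permute(0,1,3,5,2,4) in the torch version.
    n, c, h, w = x.shape
    oh, ow = h // r, w // r
    v = x.reshape(n, c, oh, r, ow, r)
    v = v.transpose(0, 1, 3, 5, 2, 4)
    return v.reshape(n, c * r * r, oh, ow)

x = np.arange(16).reshape(1, 1, 4, 4)
y = pixel_down_shuffle(x, 2)
# y has shape (1, 4, 2, 2); channel 0 holds the even-row/even-col samples
```

Each output channel is one strided phase of the input, so the map is lossless and invertible (this is what `nn.PixelUnshuffle` provides in newer PyTorch releases).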
def sp_init(x):
x01 = x[:, :, 0::2, :]
x02 = x[:, :, 1::2, :]
x_LL = x01[:, :, :, 0::2]
x_HL = x02[:, :, :, 0::2]
x_LH = x01[:, :, :, 1::2]
x_HH = x02[:, :, :, 1::2]
return torch.cat((x_LL, x_HL, x_LH, x_HH), 1)
def dwt_init3d(x):
x01 = x[:, :, :, 0::2, :] / 2
x02 = x[:, :, :, 1::2, :] / 2
x1 = x01[:, :, :, :, 0::2]
x2 = x02[:, :, :, :, 0::2]
x3 = x01[:, :, :, :, 1::2]
x4 = x02[:, :, :, :, 1::2]
x_LL = x1 + x2 + x3 + x4
x_HL = -x1 - x2 + x3 + x4
x_LH = -x1 + x2 - x3 + x4
x_HH = x1 - x2 - x3 + x4
return torch.cat((x_LL, x_HL, x_LH, x_HH), 1)
def dwt_init(x):
x01 = x[:, :, 0::2, :] / 2
x02 = x[:, :, 1::2, :] / 2
x1 = x01[:, :, :, 0::2]
x2 = x02[:, :, :, 0::2]
x3 = x01[:, :, :, 1::2]
x4 = x02[:, :, :, 1::2]
x_LL = x1 + x2 + x3 + x4
x_HL = -x1 - x2 + x3 + x4
x_LH = -x1 + x2 - x3 + x4
x_HH = x1 - x2 - x3 + x4
return torch.cat((x_LL, x_HL, x_LH, x_HH), 1)
def iwt_init(x):
r = 2
in_batch, in_channel, in_height, in_width = x.size()
#print([in_batch, in_channel, in_height, in_width])
out_batch, out_channel, out_height, out_width = in_batch, int(
in_channel / (r ** 2)), r * in_height, r * in_width
x1 = x[:, 0:out_channel, :, :] / 2
x2 = x[:, out_channel:out_channel * 2, :, :] / 2
x3 = x[:, out_channel * 2:out_channel * 3, :, :] / 2
x4 = x[:, out_channel * 3:out_channel * 4, :, :] / 2
h = torch.zeros([out_batch, out_channel, out_height, out_width]).float().cuda()
h[:, :, 0::2, 0::2] = x1 - x2 - x3 + x4
h[:, :, 1::2, 0::2] = x1 - x2 + x3 - x4
h[:, :, 0::2, 1::2] = x1 + x2 - x3 - x4
h[:, :, 1::2, 1::2] = x1 + x2 + x3 + x4
return h
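`dwt_init` and `iwt_init` implement one level of the 2-D Haar wavelet transform and its exact inverse, which is what makes them safe to use inside an invertible network. A minimal CPU-only NumPy sketch of the same arithmetic (distinct names `dwt_haar`/`iwt_haar` to keep it self-contained) confirming perfect reconstruction:

```python
import numpy as np

def dwt_haar(x):
    # (N, C, H, W) -> (N, 4C, H/2, W/2): one-level Haar analysis,
    # same sign pattern as dwt_init above.
    x1 = x[:, :, 0::2, 0::2] / 2  # even row, even col
    x2 = x[:, :, 1::2, 0::2] / 2  # odd row,  even col
    x3 = x[:, :, 0::2, 1::2] / 2  # even row, odd col
    x4 = x[:, :, 1::2, 1::2] / 2  # odd row,  odd col
    ll = x1 + x2 + x3 + x4
    hl = -x1 - x2 + x3 + x4
    lh = -x1 + x2 - x3 + x4
    hh = x1 - x2 - x3 + x4
    return np.concatenate((ll, hl, lh, hh), axis=1)

def iwt_haar(x):
    # Exact inverse: mirrors iwt_init above, without the .cuda() call.
    n, c4, h, w = x.shape
    c = c4 // 4
    x1, x2, x3, x4 = (x[:, i * c:(i + 1) * c] / 2 for i in range(4))
    out = np.zeros((n, c, 2 * h, 2 * w), dtype=x.dtype)
    out[:, :, 0::2, 0::2] = x1 - x2 - x3 + x4
    out[:, :, 1::2, 0::2] = x1 - x2 + x3 - x4
    out[:, :, 0::2, 1::2] = x1 + x2 - x3 - x4
    out[:, :, 1::2, 1::2] = x1 + x2 + x3 + x4
    return out

x = np.random.rand(1, 3, 8, 8)
rec = iwt_haar(dwt_haar(x))  # round-trip recovers x exactly (up to float error)
```

The `/ 2` factors on both sides make the transform orthonormal up to a constant, so `DWT` followed by `IWT` is the identity, which the test below checks.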
class ResidualDenseBlock(nn.Module):
def __init__(self, nf=64, gc=32, kernel_size = 3, bias=True, use_snorm=False):
super(ResidualDenseBlock, self).__init__()
# gc: growth channel, i.e. intermediate channels
if use_snorm:
self.conv1 = nn.utils.spectral_norm(nn.Conv2d(nf, gc, 3, 1, 1, bias=bias))
self.conv2 = nn.utils.spectral_norm(nn.Conv2d(nf + gc, gc, 3, 1, 1, bias=bias))
self.conv3 = nn.utils.spectral_norm(nn.Conv2d(nf + 2 * gc, gc, 3, 1, 1, bias=bias))
self.conv4 = nn.utils.spectral_norm(nn.Conv2d(nf + 3 * gc, gc, 3, 1, 1, bias=bias))
self.conv5 = nn.utils.spectral_norm(nn.Conv2d(nf + 4 * gc, nf, 3, 1, 1, bias=bias))
else:
self.conv1 = nn.Conv2d(nf, gc, 3, 1, 1, bias=bias)
self.conv2 = nn.Conv2d(nf + gc, gc, 3, 1, 1, bias=bias)
self.conv3 = nn.Conv2d(nf + 2 * gc, gc, 3, 1, 1, bias=bias)
self.conv4 = nn.Conv2d(nf + 3 * gc, gc, 3, 1, 1, bias=bias)
self.conv5 = nn.Conv2d(nf + 4 * gc, nf, 3, 1, 1, bias=bias)
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
# initialization
mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1)
# mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4], 0.1)
def forward(self, x):
x1 = self.lrelu(self.conv1(x))
x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
# x4 = self.conv4(torch.cat((x, x1, x2, x3), 1))
x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
return x5 * 0.2 + x
class RRDB(nn.Module):
    '''Residual in Residual Dense Block'''
    def __init__(self, nf, gc=32, use_snorm=False):
        super(RRDB, self).__init__()
        # Pass use_snorm by keyword: the third positional argument of
        # ResidualDenseBlock is kernel_size, so passing it positionally
        # would silently leave spectral norm disabled.
        self.RDB1 = ResidualDenseBlock(nf, gc, use_snorm=use_snorm)
        # self.RDB2 = ResidualDenseBlock(nf, gc)
        # self.RDB3 = ResidualDenseBlock(nf, gc)
    def forward(self, x):
        out = self.RDB1(x)
        # out = self.RDB2(out)
        # out = self.RDB3(out)
        return out * 0.2 + x
class RRDBblock(nn.Module):
'''Residual in Residual Dense Block'''
def __init__(self, nf, gc=32, nb=23, use_snorm=False):
super(RRDBblock, self).__init__()
RRDB_block_f = functools.partial(RRDB, nf=nf, gc=gc, use_snorm=use_snorm)
self.RRDB_trunk = mutil.make_layer(RRDB_block_f, nb)
if use_snorm:
self.trunk_conv = nn.utils.spectral_norm(nn.Conv2d(nf, nf, 3, 1, 1, bias=True))
else:
self.trunk_conv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
def forward(self, x):
return self.trunk_conv(self.RRDB_trunk(x))
class Channel_Shuffle(nn.Module):
def __init__(self, conv_groups):
super(Channel_Shuffle, self).__init__()
self.conv_groups = conv_groups
self.requires_grad = False
def forward(self, x):
return channel_shuffle(x, self.conv_groups)
class SP(nn.Module):
def __init__(self):
super(SP, self).__init__()
self.requires_grad = False
def forward(self, x):
return sp_init(x)
class Pixel_Down_Shuffle(nn.Module):
def __init__(self):
super(Pixel_Down_Shuffle, self).__init__()
self.requires_grad = False
def forward(self, x):
return pixel_down_shuffle(x, 2)
class DWT(nn.Module):
def __init__(self):
super(DWT, self).__init__()
self.requires_grad = False
def forward(self, x):
return dwt_init(x)
class DWT3d(nn.Module):
def __init__(self):
super(DWT3d, self).__init__()
self.requires_grad = False
def forward(self, x):
return dwt_init3d(x)
class IWT(nn.Module):
def __init__(self):
super(IWT, self).__init__()
self.requires_grad = False
def forward(self, x):
return iwt_init(x)
class MeanShift(nn.Conv2d):
def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
super(MeanShift, self).__init__(3, 3, kernel_size=1)
std = torch.Tensor(rgb_std)
self.weight.data = torch.eye(3).view(3, 3, 1, 1)
self.weight.data.div_(std.view(3, 1, 1, 1))
self.bias.data = sign * rgb_range * torch.Tensor(rgb_mean)
self.bias.data.div_(std)
self.requires_grad = False
if sign==-1:
self.create_graph = False
self.volatile = True
class MeanShift2(nn.Conv2d):
def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
super(MeanShift2, self).__init__(4, 4, kernel_size=1)
std = torch.Tensor(rgb_std)
self.weight.data = torch.eye(4).view(4, 4, 1, 1)
self.weight.data.div_(std.view(4, 1, 1, 1))
self.bias.data = sign * rgb_range * torch.Tensor(rgb_mean)
self.bias.data.div_(std)
self.requires_grad = False
if sign==-1:
self.volatile = True
class BasicBlock(nn.Sequential):
def __init__(
self, in_channels, out_channels, kernel_size, stride=1, bias=False,
bn=False, act=nn.LeakyReLU(True), use_snorm=False):
if use_snorm:
m = [nn.utils.spectral_norm(nn.Conv2d(
in_channels, out_channels, kernel_size,
padding=(kernel_size//2), stride=stride, bias=bias))
]
else:
m = [nn.Conv2d(
in_channels, out_channels, kernel_size,
padding=(kernel_size//2), stride=stride, bias=bias)
]
if bn: m.append(nn.BatchNorm2d(out_channels))
if act is not None: m.append(act)
super(BasicBlock, self).__init__(*m)
class Block3d(nn.Sequential):
def __init__(
self, in_channels, out_channels, kernel_size, t_kernel=3,
bias=True, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(Block3d, self).__init__()
m = []
m.append(default_conv3d(in_channels, out_channels, kernel_size, bias=bias, use_snorm=use_snorm))
m.append(act)
m.append(default_conv3d(out_channels, out_channels, kernel_size, bias=bias, use_snorm=use_snorm))
m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
x = self.body(x)
return x
class BBlock(nn.Module):
def __init__(
self, conv, in_channels, out_channels, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(BBlock, self).__init__()
m = []
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
x = self.body(x)
return x
class DBlock_com(nn.Module):
def __init__(
self, conv, in_channels, out_channels, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(DBlock_com, self).__init__()
m = []
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, dilation=2, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, dilation=3, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
x = self.body(x)
return x
class DBlock_inv(nn.Module):
def __init__(
self, conv, in_channels, out_channels, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(DBlock_inv, self).__init__()
m = []
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, dilation=3, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, dilation=2, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
x = self.body(x)
return x
class DBlock_com1(nn.Module):
def __init__(
self, conv, in_channels, out_channels, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(DBlock_com1, self).__init__()
m = []
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, dilation=2, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
m.append(conv(out_channels, out_channels, kernel_size, bias=bias, dilation=1, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
x = self.body(x)
return x
class DBlock_inv1(nn.Module):
def __init__(
self, conv, in_channels, out_channels, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(DBlock_inv1, self).__init__()
m = []
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, dilation=2, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
m.append(conv(out_channels, out_channels, kernel_size, bias=bias, dilation=1, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
x = self.body(x)
return x
class DBlock_com2(nn.Module):
def __init__(
self, conv, in_channels, out_channels, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(DBlock_com2, self).__init__()
m = []
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, dilation=2, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, dilation=2, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
x = self.body(x)
return x
class DBlock_inv2(nn.Module):
def __init__(
self, conv, in_channels, out_channels, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(DBlock_inv2, self).__init__()
m = []
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, dilation=2, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, dilation=2, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels, eps=1e-4, momentum=0.95))
m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
x = self.body(x)
return x
class ShuffleBlock(nn.Module):
def __init__(
self, conv, in_channels, out_channels, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1,conv_groups=1, use_snorm=False):
super(ShuffleBlock, self).__init__()
m = []
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, use_snorm=use_snorm))
m.append(Channel_Shuffle(conv_groups))
if bn: m.append(nn.BatchNorm2d(out_channels))
m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
x = self.body(x).mul(self.res_scale)
return x
class DWBlock(nn.Module):
def __init__(
self, conv, conv1, in_channels, out_channels, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(DWBlock, self).__init__()
m = []
m.append(conv(in_channels, out_channels, kernel_size, bias=bias, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels))
m.append(act)
m.append(conv1(in_channels, out_channels, 1, bias=bias, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(out_channels))
m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
x = self.body(x).mul(self.res_scale)
return x
class ResBlock(nn.Module):
def __init__(
self, conv, n_feat, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(ResBlock, self).__init__()
m = []
for i in range(2):
m.append(conv(n_feat, n_feat, kernel_size, bias=bias, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(n_feat))
if i == 0: m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
def forward(self, x):
res = self.body(x).mul(self.res_scale)
res += x
return res
class Block(nn.Module):
def __init__(
self, conv, n_feat, kernel_size,
bias=True, bn=False, act=nn.LeakyReLU(True), res_scale=1, use_snorm=False):
super(Block, self).__init__()
m = []
for i in range(4):
m.append(conv(n_feat, n_feat, kernel_size, bias=bias, use_snorm=use_snorm))
if bn: m.append(nn.BatchNorm2d(n_feat))
if i == 0: m.append(act)
self.body = nn.Sequential(*m)
self.res_scale = res_scale
    def forward(self, x):
        res = self.body(x).mul(self.res_scale)
        # unlike ResBlock, no skip connection is added here
        return res
class Upsampler(nn.Sequential):
def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True, use_snorm=False):
m = []
if (scale & (scale - 1)) == 0: # Is scale = 2^n?
for _ in range(int(math.log(scale, 2))):
m.append(conv(n_feat, 4 * n_feat, 3, bias, use_snorm=use_snorm))
m.append(nn.PixelShuffle(2))
if bn: m.append(nn.BatchNorm2d(n_feat))
if act: m.append(act())
elif scale == 3:
m.append(conv(n_feat, 9 * n_feat, 3, bias, use_snorm=use_snorm))
m.append(nn.PixelShuffle(3))
if bn: m.append(nn.BatchNorm2d(n_feat))
if act: m.append(act())
else:
raise NotImplementedError
super(Upsampler, self).__init__(*m)
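Upsampler relies on nn.PixelShuffle: a conv first expands channels by r², then pixel shuffle rearranges (C·r², H, W) into (C, H·r, W·r). A NumPy sketch of that rearrangement, assuming the standard nn.PixelShuffle channel layout:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) -> (C, H*r, W*r), matching nn.PixelShuffle."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)    # split the channel dim into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)  # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(4, dtype=np.float32).reshape(4, 1, 1)  # 4 channels, 1x1 spatial
y = pixel_shuffle(x, 2)                              # -> 1 channel, 2x2 spatial
```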
class VGG_conv0(nn.Module):
def __init__(self, in_nc, nf):
super(VGG_conv0, self).__init__()
# [64, 128, 128]
self.conv0_0 = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
self.conv0_1 = nn.Conv2d(nf, nf, 4, 2, 1, bias=False)
self.bn0_1 = nn.BatchNorm2d(nf, affine=True)
# [64, 64, 64]
self.conv1_0 = nn.Conv2d(nf, nf * 2, 3, 1, 1, bias=False)
self.bn1_0 = nn.BatchNorm2d(nf * 2, affine=True)
self.conv1_1 = nn.Conv2d(nf * 2, nf * 2, 4, 2, 1, bias=False)
self.bn1_1 = nn.BatchNorm2d(nf * 2, affine=True)
# [128, 32, 32]
self.conv2_0 = nn.Conv2d(nf * 2, nf * 4, 3, 1, 1, bias=False)
self.bn2_0 = nn.BatchNorm2d(nf * 4, affine=True)
self.conv2_1 = nn.Conv2d(nf * 4, nf * 4, 4, 2, 1, bias=False)
self.bn2_1 = nn.BatchNorm2d(nf * 4, affine=True)
# [256, 16, 16]
self.conv3_0 = nn.Conv2d(nf * 4, nf * 8, 3, 1, 1, bias=False)
self.bn3_0 = nn.BatchNorm2d(nf * 8, affine=True)
self.conv3_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
self.bn3_1 = nn.BatchNorm2d(nf * 8, affine=True)
# [512, 8, 8]
self.conv4_0 = nn.Conv2d(nf * 8, nf * 8, 3, 1, 1, bias=False)
self.bn4_0 = nn.BatchNorm2d(nf * 8, affine=True)
self.conv4_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
self.bn4_1 = nn.BatchNorm2d(nf * 8, affine=True)
# self.avg_pool = nn.AvgPool2d(3, stride=2, padding=0, ceil_mode=True) # /2
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
def forward(self, x):
fea = self.lrelu(self.conv0_0(x))
fea = self.lrelu(self.bn0_1(self.conv0_1(fea)))
fea = self.lrelu(self.bn1_0(self.conv1_0(fea)))
fea = self.lrelu(self.bn1_1(self.conv1_1(fea)))
fea = self.lrelu(self.bn2_0(self.conv2_0(fea)))
fea = self.lrelu(self.bn2_1(self.conv2_1(fea)))
fea = self.lrelu(self.bn3_0(self.conv3_0(fea)))
fea = self.lrelu(self.bn3_1(self.conv3_1(fea)))
fea = self.lrelu(self.bn4_0(self.conv4_0(fea)))
fea = self.lrelu(self.bn4_1(self.conv4_1(fea)))
# fea = self.avg_pool(fea)
return fea
class VGG_conv1(nn.Module):
def __init__(self, in_nc, nf):
super(VGG_conv1, self).__init__()
# [64, 128, 128]
self.conv0_0 = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
self.conv0_1 = nn.Conv2d(nf, nf, 4, 2, 1, bias=False)
self.bn0_1 = nn.BatchNorm2d(nf, affine=True)
# [64, 64, 64]
self.conv1_0 = nn.Conv2d(nf, nf * 2, 3, 1, 1, bias=False)
self.bn1_0 = nn.BatchNorm2d(nf * 2, affine=True)
self.conv1_1 = nn.Conv2d(nf * 2, nf * 2, 4, 2, 1, bias=False)
self.bn1_1 = nn.BatchNorm2d(nf * 2, affine=True)
# [128, 32, 32]
self.conv2_0 = nn.Conv2d(nf * 2, nf * 4, 3, 1, 1, bias=False)
self.bn2_0 = nn.BatchNorm2d(nf * 4, affine=True)
self.conv2_1 = nn.Conv2d(nf * 4, nf * 4, 4, 2, 1, bias=False)
self.bn2_1 = nn.BatchNorm2d(nf * 4, affine=True)
# [256, 16, 16]
self.conv3_0 = nn.Conv2d(nf * 4, nf * 8, 3, 1, 1, bias=False)
self.bn3_0 = nn.BatchNorm2d(nf * 8, affine=True)
self.conv3_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
self.bn3_1 = nn.BatchNorm2d(nf * 8, affine=True)
# [512, 8, 8]
self.conv4_0 = nn.Conv2d(nf * 8, nf * 8, 3, 1, 1, bias=False)
self.bn4_0 = nn.BatchNorm2d(nf * 8, affine=True)
self.conv4_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
self.bn4_1 = nn.BatchNorm2d(nf * 8, affine=True)
# self.avg_pool = nn.AvgPool2d(2, stride=1, padding=0, ceil_mode=True) # /2
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
def forward(self, x):
fea = self.lrelu(self.conv0_0(x))
fea = self.lrelu(self.bn0_1(self.conv0_1(fea)))
fea = self.lrelu(self.bn1_0(self.conv1_0(fea)))
fea = self.lrelu(self.bn1_1(self.conv1_1(fea)))
fea = self.lrelu(self.bn2_0(self.conv2_0(fea)))
fea = self.lrelu(self.bn2_1(self.conv2_1(fea)))
fea = self.lrelu(self.bn3_0(self.conv3_0(fea)))
fea = self.lrelu(self.bn3_1(self.conv3_1(fea)))
fea = self.lrelu(self.bn4_0(self.conv4_0(fea)))
fea = self.lrelu(self.bn4_1(self.conv4_1(fea)))
# fea = self.avg_pool(fea)
return fea
class VGG_conv2(nn.Module):
def __init__(self, in_nc, nf):
super(VGG_conv2, self).__init__()
# [64, 128, 128]
self.conv0_0 = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
self.conv0_1 = nn.Conv2d(nf, nf, 4, 2, 1, bias=False)
self.bn0_1 = nn.BatchNorm2d(nf, affine=True)
# [64, 64, 64]
self.conv1_0 = nn.Conv2d(nf, nf * 2, 3, 1, 1, bias=False)
self.bn1_0 = nn.BatchNorm2d(nf * 2, affine=True)
self.conv1_1 = nn.Conv2d(nf * 2, nf * 2, 4, 2, 1, bias=False)
self.bn1_1 = nn.BatchNorm2d(nf * 2, affine=True)
# [128, 32, 32]
self.conv2_0 = nn.Conv2d(nf * 2, nf * 4, 3, 1, 1, bias=False)
self.bn2_0 = nn.BatchNorm2d(nf * 4, affine=True)
self.conv2_1 = nn.Conv2d(nf * 4, nf * 4, 4, 2, 1, bias=False)
self.bn2_1 = nn.BatchNorm2d(nf * 4, affine=True)
# [256, 16, 16]
self.conv3_0 = nn.Conv2d(nf * 4, nf * 8, 3, 1, 1, bias=False)
self.bn3_0 = nn.BatchNorm2d(nf * 8, affine=True)
self.conv3_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
self.bn3_1 = nn.BatchNorm2d(nf * 8, affine=True)
# [512, 8, 8]
# self.conv4_0 = nn.Conv2d(nf * 8, nf * 8, 3, 1, 1, bias=False)
# self.bn4_0 = nn.BatchNorm2d(nf * 8, affine=True)
# self.conv4_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
# self.bn4_1 = nn.BatchNorm2d(nf * 8, affine=True)
# self.avg_pool = nn.AvgPool2d(3, stride=2, padding=0, ceil_mode=True) # /2
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
def forward(self, x):
fea = self.lrelu(self.conv0_0(x))
fea = self.lrelu(self.bn0_1(self.conv0_1(fea)))
fea = self.lrelu(self.bn1_0(self.conv1_0(fea)))
fea = self.lrelu(self.bn1_1(self.conv1_1(fea)))
fea = self.lrelu(self.bn2_0(self.conv2_0(fea)))
fea = self.lrelu(self.bn2_1(self.conv2_1(fea)))
fea = self.lrelu(self.bn3_0(self.conv3_0(fea)))
fea = self.lrelu(self.bn3_1(self.conv3_1(fea)))
# fea = self.lrelu(self.bn4_0(self.conv4_0(fea)))
# fea = self.lrelu(self.bn4_1(self.conv4_1(fea)))
# fea = self.avg_pool(fea)
return fea
================================================
FILE: modules/module_util.py
================================================
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.nn.functional as F
def initialize_weights(net_l, scale=1):
if not isinstance(net_l, list):
net_l = [net_l]
for net in net_l:
for m in net.modules():
if isinstance(m, nn.Conv2d):
init.kaiming_normal_(m.weight, a=0, mode='fan_in')
m.weight.data *= scale # for residual block
if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
init.kaiming_normal_(m.weight, a=0, mode='fan_in')
m.weight.data *= scale
if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
init.constant_(m.weight, 1)
init.constant_(m.bias.data, 0.0)
def make_layer(block, n_layers):
layers = []
for _ in range(n_layers):
layers.append(block())
return nn.Sequential(*layers)
class ResidualBlock_noBN(nn.Module):
'''Residual block w/o BN
---Conv-ReLU-Conv-+-
|________________|
'''
def __init__(self, nf=64):
super(ResidualBlock_noBN, self).__init__()
self.conv1 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
self.conv2 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
# initialization
initialize_weights([self.conv1, self.conv2], 0.1)
def forward(self, x):
identity = x
out = F.relu(self.conv1(x), inplace=True)
out = self.conv2(out)
return identity + out
def flow_warp(x, flow, interp_mode='bilinear', padding_mode='zeros'):
    """Warp an image or feature map with optical flow.

    Args:
        x (Tensor): size (N, C, H, W)
        flow (Tensor): size (N, 2, H, W); permuted to (N, H, W, 2) internally
        interp_mode (str): 'nearest' or 'bilinear'
        padding_mode (str): 'zeros', 'border' or 'reflection'

    Returns:
        Tensor: warped image or feature map
    """
flow = flow.permute(0,2,3,1)
assert x.size()[-2:] == flow.size()[1:3]
B, C, H, W = x.size()
# mesh grid
grid_y, grid_x = torch.meshgrid(torch.arange(0, H), torch.arange(0, W))
grid = torch.stack((grid_x, grid_y), 2).float() # W(x), H(y), 2
grid.requires_grad = False
grid = grid.type_as(x)
vgrid = grid + flow
# scale grid to [-1,1]
vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(W - 1, 1) - 1.0
vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(H - 1, 1) - 1.0
vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3)
output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode)
return output
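flow_warp samples x at grid + flow via F.grid_sample. Its effect for integer flow is easy to verify with a nearest-neighbour NumPy sketch (border padding assumed): a flow of +1 in x makes each output pixel read its right-hand neighbour.

```python
import numpy as np

def warp_nearest(img, flow_x, flow_y):
    """Nearest-neighbour warp with border padding: out[y, x] = img[y+fy, x+fx]."""
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            sx = min(max(int(round(x + flow_x)), 0), w - 1)  # clamp to border
            sy = min(max(int(round(y + flow_y)), 0), h - 1)
            out[y, x] = img[sy, sx]
    return out

img = np.arange(9, dtype=np.float32).reshape(3, 3)
shifted = warp_nearest(img, flow_x=1, flow_y=0)  # each pixel reads its right neighbour
```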
================================================
FILE: rrdb_denselayer.py
================================================
import torch
import torch.nn as nn
import modules.module_util as mutil
# Dense connection
class ResidualDenseBlock_out(nn.Module):
def __init__(self, input, output, bias=True):
super(ResidualDenseBlock_out, self).__init__()
self.conv1 = nn.Conv2d(input, 32, 3, 1, 1, bias=bias)
self.conv2 = nn.Conv2d(input + 32, 32, 3, 1, 1, bias=bias)
self.conv3 = nn.Conv2d(input + 2 * 32, 32, 3, 1, 1, bias=bias)
self.conv4 = nn.Conv2d(input + 3 * 32, 32, 3, 1, 1, bias=bias)
self.conv5 = nn.Conv2d(input + 4 * 32, output, 3, 1, 1, bias=bias)
self.lrelu = nn.LeakyReLU(inplace=True)
        # initialization: zero-init the last conv so the block initially outputs zero
        mutil.initialize_weights([self.conv5], 0.)
def forward(self, x):
x1 = self.lrelu(self.conv1(x))
x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
return x5
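Each conv in ResidualDenseBlock_out sees the block input concatenated with all previous 32-channel outputs, so the k-th conv takes input + 32·k channels, and conv5 maps input + 4·32 channels back to the output width. A quick check of that channel arithmetic (the 12-channel figure assumes a DWT of an RGB image, i.e. channels_in = 3):

```python
GROWTH = 32  # channels added by each intermediate conv

def dense_in_channels(n_in, k, growth=GROWTH):
    """Input channel count of the k-th conv (k = 0..4) in the dense block."""
    return n_in + k * growth

# Assumed 12-channel input (4 DWT sub-bands x 3 RGB channels):
widths = [dense_in_channels(12, k) for k in range(5)]
```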
================================================
FILE: test.py
================================================
import math
import torch
import torch.nn
import torch.optim
import torchvision
import numpy as np
from model import *
import config as c
import datasets
import modules.Unet_common as common
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def load(name):
state_dicts = torch.load(name)
network_state_dict = {k:v for k,v in state_dicts['net'].items() if 'tmp_var' not in k}
net.load_state_dict(network_state_dict)
    try:
        optim.load_state_dict(state_dicts['opt'])
    except Exception:
        print('Cannot load optimizer for some reason or other')
def gauss_noise(shape):
    # Drawing the whole tensor at once is equivalent to the per-sample loop it replaces.
    return torch.randn(shape).cuda()
def computePSNR(origin, pred):
    origin = np.asarray(origin, dtype=np.float32)
    pred = np.asarray(pred, dtype=np.float32)
    mse = np.mean((origin - pred) ** 2)
    if mse < 1.0e-10:
        return 100
    return 10 * math.log10(255.0 ** 2 / mse)
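computePSNR implements PSNR = 10·log10(255² / MSE) for 8-bit pixel ranges, returning 100 as a cap when the two images are (near-)identical. A standalone check against a hand-computable MSE:

```python
import math
import numpy as np

def psnr_255(origin, pred):
    """PSNR in dB for images in the 0..255 range (mirrors computePSNR)."""
    origin = np.asarray(origin, dtype=np.float32)
    pred = np.asarray(pred, dtype=np.float32)
    mse = np.mean((origin - pred) ** 2)
    if mse < 1.0e-10:
        return 100  # cap for identical images
    return 10 * math.log10(255.0 ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 10.0)  # MSE = 100 -> 10 * log10(65025 / 100) ~ 28.13 dB
```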
net = Model()
net.cuda()
init_model(net)
net = torch.nn.DataParallel(net, device_ids=c.device_ids)
params_trainable = (list(filter(lambda p: p.requires_grad, net.parameters())))
optim = torch.optim.Adam(params_trainable, lr=c.lr, betas=c.betas, eps=1e-6, weight_decay=c.weight_decay)
weight_scheduler = torch.optim.lr_scheduler.StepLR(optim, c.weight_step, gamma=c.gamma)
load(c.MODEL_PATH + c.suffix)
net.eval()
dwt = common.DWT()
iwt = common.IWT()
with torch.no_grad():
for i, data in enumerate(datasets.testloader):
data = data.to(device)
cover = data[data.shape[0] // 2:, :, :, :]
secret = data[:data.shape[0] // 2, :, :, :]
cover_input = dwt(cover)
secret_input = dwt(secret)
input_img = torch.cat((cover_input, secret_input), 1)
#################
# forward: #
#################
output = net(input_img)
output_steg = output.narrow(1, 0, 4 * c.channels_in)
output_z = output.narrow(1, 4 * c.channels_in, output.shape[1] - 4 * c.channels_in)
steg_img = iwt(output_steg)
backward_z = gauss_noise(output_z.shape)
#################
# backward: #
#################
        output_rev = torch.cat((output_steg, backward_z), 1)
        backward_img = net(output_rev, rev=True)
        secret_rev = backward_img.narrow(1, 4 * c.channels_in, backward_img.shape[1] - 4 * c.channels_in)
        secret_rev = iwt(secret_rev)
        cover_rev = backward_img.narrow(1, 0, 4 * c.channels_in)
        cover_rev = iwt(cover_rev)
        # residuals scaled x20 for visibility (computed but not saved by default)
        resi_cover = (steg_img - cover) * 20
        resi_secret = (secret_rev - secret) * 20
torchvision.utils.save_image(cover, c.IMAGE_PATH_cover + '%.5d.png' % i)
torchvision.utils.save_image(secret, c.IMAGE_PATH_secret + '%.5d.png' % i)
torchvision.utils.save_image(steg_img, c.IMAGE_PATH_steg + '%.5d.png' % i)
torchvision.utils.save_image(secret_rev, c.IMAGE_PATH_secret_rev + '%.5d.png' % i)
================================================
FILE: train.py
================================================
#!/usr/bin/env python
import torch
import torch.nn
import torch.optim
import math
import numpy as np
from model import *
import config as c
from tensorboardX import SummaryWriter
import datasets
import viz
import modules.Unet_common as common
import warnings
warnings.filterwarnings("ignore")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def gauss_noise(shape):
    # Drawing the whole tensor at once is equivalent to the per-sample loop it replaces.
    return torch.randn(shape).cuda()
def guide_loss(output, bicubic_image):
    loss_fn = torch.nn.MSELoss(reduction='sum')
    loss = loss_fn(output, bicubic_image)
    return loss.to(device)

def reconstruction_loss(rev_input, input):
    loss_fn = torch.nn.MSELoss(reduction='sum')
    loss = loss_fn(rev_input, input)
    return loss.to(device)

def low_frequency_loss(ll_input, gt_input):
    loss_fn = torch.nn.MSELoss(reduction='sum')
    loss = loss_fn(ll_input, gt_input)
    return loss.to(device)
# Number of network parameters
def get_parameter_number(net):
total_num = sum(p.numel() for p in net.parameters())
trainable_num = sum(p.numel() for p in net.parameters() if p.requires_grad)
return {'Total': total_num, 'Trainable': trainable_num}
def computePSNR(origin, pred):
    origin = np.asarray(origin, dtype=np.float32)
    pred = np.asarray(pred, dtype=np.float32)
    mse = np.mean((origin - pred) ** 2)
    if mse < 1.0e-10:
        return 100
    return 10 * math.log10(255.0 ** 2 / mse)
def load(name):
state_dicts = torch.load(name)
network_state_dict = {k: v for k, v in state_dicts['net'].items() if 'tmp_var' not in k}
net.load_state_dict(network_state_dict)
    try:
        optim.load_state_dict(state_dicts['opt'])
    except Exception:
        print('Cannot load optimizer for some reason or other')
#####################
# Model initialize: #
#####################
net = Model()
net.cuda()
init_model(net)
net = torch.nn.DataParallel(net, device_ids=c.device_ids)
para = get_parameter_number(net)
print(para)
params_trainable = (list(filter(lambda p: p.requires_grad, net.parameters())))
optim = torch.optim.Adam(params_trainable, lr=c.lr, betas=c.betas, eps=1e-6, weight_decay=c.weight_decay)
weight_scheduler = torch.optim.lr_scheduler.StepLR(optim, c.weight_step, gamma=c.gamma)
dwt = common.DWT()
iwt = common.IWT()
if c.tain_next:  # 'tain_next' matches the (misspelled) flag name defined in config.py
    load(c.MODEL_PATH + c.suffix)
try:
writer = SummaryWriter(comment='hinet', filename_suffix="steg")
for i_epoch in range(c.epochs):
i_epoch = i_epoch + c.trained_epoch + 1
loss_history = []
#################
# train: #
#################
for i_batch, data in enumerate(datasets.trainloader):
data = data.to(device)
cover = data[data.shape[0] // 2:]
secret = data[:data.shape[0] // 2]
cover_input = dwt(cover)
secret_input = dwt(secret)
input_img = torch.cat((cover_input, secret_input), 1)
#################
# forward: #
#################
output = net(input_img)
output_steg = output.narrow(1, 0, 4 * c.channels_in)
output_z = output.narrow(1, 4 * c.channels_in, output.shape[1] - 4 * c.channels_in)
steg_img = iwt(output_steg)
#################
# backward: #
#################
            output_z_gauss = gauss_noise(output_z.shape)
            output_rev = torch.cat((output_steg, output_z_gauss), 1)
output_image = net(output_rev, rev=True)
secret_rev = output_image.narrow(1, 4 * c.channels_in, output_image.shape[1] - 4 * c.channels_in)
secret_rev = iwt(secret_rev)
#################
# loss: #
#################
g_loss = guide_loss(steg_img.cuda(), cover.cuda())
r_loss = reconstruction_loss(secret_rev, secret)
steg_low = output_steg.narrow(1, 0, c.channels_in)
cover_low = cover_input.narrow(1, 0, c.channels_in)
l_loss = low_frequency_loss(steg_low, cover_low)
total_loss = c.lamda_reconstruction * r_loss + c.lamda_guide * g_loss + c.lamda_low_frequency * l_loss
total_loss.backward()
optim.step()
optim.zero_grad()
loss_history.append([total_loss.item(), 0.])
epoch_losses = np.mean(np.array(loss_history), axis=0)
epoch_losses[1] = np.log10(optim.param_groups[0]['lr'])
#################
# val: #
#################
if i_epoch % c.val_freq == 0:
with torch.no_grad():
psnr_s = []
psnr_c = []
net.eval()
for x in datasets.testloader:
x = x.to(device)
cover = x[x.shape[0] // 2:, :, :, :]
secret = x[:x.shape[0] // 2, :, :, :]
cover_input = dwt(cover)
secret_input = dwt(secret)
input_img = torch.cat((cover_input, secret_input), 1)
#################
# forward: #
#################
output = net(input_img)
output_steg = output.narrow(1, 0, 4 * c.channels_in)
steg = iwt(output_steg)
output_z = output.narrow(1, 4 * c.channels_in, output.shape[1] - 4 * c.channels_in)
output_z = gauss_noise(output_z.shape)
#################
# backward: #
#################
output_steg = output_steg.cuda()
output_rev = torch.cat((output_steg, output_z), 1)
output_image = net(output_rev, rev=True)
secret_rev = output_image.narrow(1, 4 * c.channels_in, output_image.shape[1] - 4 * c.channels_in)
secret_rev = iwt(secret_rev)
                    secret_rev = secret_rev.cpu().numpy().squeeze() * 255
                    secret_rev = np.clip(secret_rev, 0, 255)
                    secret = secret.cpu().numpy().squeeze() * 255
                    secret = np.clip(secret, 0, 255)
                    cover = cover.cpu().numpy().squeeze() * 255
                    cover = np.clip(cover, 0, 255)
                    steg = steg.cpu().numpy().squeeze() * 255
                    steg = np.clip(steg, 0, 255)
psnr_temp = computePSNR(secret_rev, secret)
psnr_s.append(psnr_temp)
psnr_temp_c = computePSNR(cover, steg)
psnr_c.append(psnr_temp_c)
                writer.add_scalars("PSNR_S", {"average psnr": np.mean(psnr_s)}, i_epoch)
                writer.add_scalars("PSNR_C", {"average psnr": np.mean(psnr_c)}, i_epoch)
                net.train()  # restore training mode after validation
viz.show_loss(epoch_losses)
writer.add_scalars("Train", {"Train_Loss": epoch_losses[0]}, i_epoch)
if i_epoch > 0 and (i_epoch % c.SAVE_freq) == 0:
torch.save({'opt': optim.state_dict(),
'net': net.state_dict()}, c.MODEL_PATH + 'model_checkpoint_%.5i' % i_epoch + '.pt')
weight_scheduler.step()
torch.save({'opt': optim.state_dict(),
'net': net.state_dict()}, c.MODEL_PATH + 'model' + '.pt')
writer.close()
except:
if c.checkpoint_on_error:
torch.save({'opt': optim.state_dict(),
'net': net.state_dict()}, c.MODEL_PATH + 'model_ABORT' + '.pt')
raise
finally:
viz.signal_stop()
================================================
FILE: train_logging.py
================================================
#!/usr/bin/env python
import torch
import torch.nn
import torch.optim
import math
import numpy as np
from model import *
import config as c
from tensorboardX import SummaryWriter
import datasets
import viz
import modules.Unet_common as common
import warnings
import logging
import util
warnings.filterwarnings("ignore")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def gauss_noise(shape):
    # Drawing the whole tensor at once is equivalent to the per-sample loop it replaces.
    return torch.randn(shape).cuda()
def guide_loss(output, bicubic_image):
    loss_fn = torch.nn.MSELoss(reduction='sum')
    loss = loss_fn(output, bicubic_image)
    return loss.to(device)

def reconstruction_loss(rev_input, input):
    loss_fn = torch.nn.MSELoss(reduction='sum')
    loss = loss_fn(rev_input, input)
    return loss.to(device)

def low_frequency_loss(ll_input, gt_input):
    loss_fn = torch.nn.MSELoss(reduction='sum')
    loss = loss_fn(ll_input, gt_input)
    return loss.to(device)
# Number of network parameters
def get_parameter_number(net):
total_num = sum(p.numel() for p in net.parameters())
trainable_num = sum(p.numel() for p in net.parameters() if p.requires_grad)
return {'Total': total_num, 'Trainable': trainable_num}
def computePSNR(origin, pred):
    origin = np.asarray(origin, dtype=np.float32)
    pred = np.asarray(pred, dtype=np.float32)
    mse = np.mean((origin - pred) ** 2)
    if mse < 1.0e-10:
        return 100
    return 10 * math.log10(255.0 ** 2 / mse)
def load(name):
state_dicts = torch.load(name)
network_state_dict = {k: v for k, v in state_dicts['net'].items() if 'tmp_var' not in k}
net.load_state_dict(network_state_dict)
    try:
        optim.load_state_dict(state_dicts['opt'])
    except Exception:
        print('Cannot load optimizer for some reason or other')
#####################
# Model initialize: #
#####################
net = Model()
net.cuda()
init_model(net)
net = torch.nn.DataParallel(net, device_ids=c.device_ids)
para = get_parameter_number(net)
print(para)
params_trainable = (list(filter(lambda p: p.requires_grad, net.parameters())))
optim = torch.optim.Adam(params_trainable, lr=c.lr, betas=c.betas, eps=1e-6, weight_decay=c.weight_decay)
weight_scheduler = torch.optim.lr_scheduler.StepLR(optim, c.weight_step, gamma=c.gamma)
dwt = common.DWT()
iwt = common.IWT()
if c.tain_next:  # 'tain_next' matches the (misspelled) flag name defined in config.py
    load(c.MODEL_PATH + c.suffix)
# Re-create the optimizer to reset the learning rate after loading a checkpoint.
optim = torch.optim.Adam(params_trainable, lr=c.lr, betas=c.betas, eps=1e-6, weight_decay=c.weight_decay)
# Set the logging root below to your local path.
util.setup_logger('train', '/home/jjp/HiNet-main2/', 'train_', level=logging.INFO, screen=True, tofile=True)
logger_train = logging.getLogger('train')
logger_train.info(net)
try:
writer = SummaryWriter(comment='hinet', filename_suffix="steg")
for i_epoch in range(c.epochs):
i_epoch = i_epoch + c.trained_epoch + 1
loss_history = []
g_loss_history = []
r_loss_history = []
l_loss_history = []
#################
# train: #
#################
for i_batch, data in enumerate(datasets.trainloader):
data = data.to(device)
cover = data[data.shape[0] // 2:]
secret = data[:data.shape[0] // 2]
cover_input = dwt(cover)
secret_input = dwt(secret)
input_img = torch.cat((cover_input, secret_input), 1)
#################
# forward: #
#################
output = net(input_img)
output_steg = output.narrow(1, 0, 4 * c.channels_in)
output_z = output.narrow(1, 4 * c.channels_in, output.shape[1] - 4 * c.channels_in)
steg_img = iwt(output_steg)
#################
# backward: #
#################
            output_z_gauss = gauss_noise(output_z.shape)
            output_rev = torch.cat((output_steg, output_z_gauss), 1)
output_image = net(output_rev, rev=True)
secret_rev = output_image.narrow(1, 4 * c.channels_in, output_image.shape[1] - 4 * c.channels_in)
secret_rev = iwt(secret_rev)
#################
# loss: #
#################
g_loss = guide_loss(steg_img.cuda(), cover.cuda())
r_loss = reconstruction_loss(secret_rev, secret)
steg_low = output_steg.narrow(1, 0, c.channels_in)
cover_low = cover_input.narrow(1, 0, c.channels_in)
l_loss = low_frequency_loss(steg_low, cover_low)
total_loss = c.lamda_reconstruction * r_loss + c.lamda_guide * g_loss + c.lamda_low_frequency * l_loss
total_loss.backward()
optim.step()
optim.zero_grad()
loss_history.append([total_loss.item(), 0.])
g_loss_history.append([g_loss.item(), 0.])
r_loss_history.append([r_loss.item(), 0.])
l_loss_history.append([l_loss.item(), 0.])
epoch_losses = np.mean(np.array(loss_history), axis=0)
r_epoch_losses = np.mean(np.array(r_loss_history), axis=0)
g_epoch_losses = np.mean(np.array(g_loss_history), axis=0)
l_epoch_losses = np.mean(np.array(l_loss_history), axis=0)
epoch_losses[1] = np.log10(optim.param_groups[0]['lr'])
#################
# val: #
#################
if i_epoch % c.val_freq == 0:
with torch.no_grad():
psnr_s = []
psnr_c = []
net.eval()
for x in datasets.testloader:
x = x.to(device)
cover = x[x.shape[0] // 2:, :, :, :]
secret = x[:x.shape[0] // 2, :, :, :]
cover_input = dwt(cover)
secret_input = dwt(secret)
input_img = torch.cat((cover_input, secret_input), 1)
#################
# forward: #
#################
output = net(input_img)
output_steg = output.narrow(1, 0, 4 * c.channels_in)
steg = iwt(output_steg)
output_z = output.narrow(1, 4 * c.channels_in, output.shape[1] - 4 * c.channels_in)
output_z = gauss_noise(output_z.shape)
#################
# backward: #
#################
output_steg = output_steg.cuda()
output_rev = torch.cat((output_steg, output_z), 1)
output_image = net(output_rev, rev=True)
secret_rev = output_image.narrow(1, 4 * c.channels_in, output_image.shape[1] - 4 * c.channels_in)
secret_rev = iwt(secret_rev)
                    secret_rev = secret_rev.cpu().numpy().squeeze() * 255
                    secret_rev = np.clip(secret_rev, 0, 255)
                    secret = secret.cpu().numpy().squeeze() * 255
                    secret = np.clip(secret, 0, 255)
                    cover = cover.cpu().numpy().squeeze() * 255
                    cover = np.clip(cover, 0, 255)
                    steg = steg.cpu().numpy().squeeze() * 255
                    steg = np.clip(steg, 0, 255)
psnr_temp = computePSNR(secret_rev, secret)
psnr_s.append(psnr_temp)
psnr_temp_c = computePSNR(cover, steg)
psnr_c.append(psnr_temp_c)
                writer.add_scalars("PSNR_S", {"average psnr": np.mean(psnr_s)}, i_epoch)
                writer.add_scalars("PSNR_C", {"average psnr": np.mean(psnr_c)}, i_epoch)
                logger_train.info(
                    f"TEST: "
                    f'PSNR_S: {np.mean(psnr_s):.4f} | '
                    f'PSNR_C: {np.mean(psnr_c):.4f} | '
                )
                net.train()  # restore training mode after validation
viz.show_loss(epoch_losses)
writer.add_scalars("Train", {"Train_Loss": epoch_losses[0]}, i_epoch)
logger_train.info(f"Learning rate: {optim.param_groups[0]['lr']}")
logger_train.info(
f"Train epoch {i_epoch}: "
f'Loss: {epoch_losses[0].item():.4f} | '
f'r_Loss: {r_epoch_losses[0].item():.4f} | '
f'g_Loss: {g_epoch_losses[0].item():.4f} | '
f'l_Loss: {l_epoch_losses[0].item():.4f} | '
)
if i_epoch > 0 and (i_epoch % c.SAVE_freq) == 0:
torch.save({'opt': optim.state_dict(),
'net': net.state_dict()}, c.MODEL_PATH + 'model_checkpoint_%.5i' % i_epoch + '.pt')
weight_scheduler.step()
torch.save({'opt': optim.state_dict(),
'net': net.state_dict()}, c.MODEL_PATH + 'model' + '.pt')
writer.close()
except:
if c.checkpoint_on_error:
torch.save({'opt': optim.state_dict(),
'net': net.state_dict()}, c.MODEL_PATH + 'model_ABORT' + '.pt')
raise
finally:
viz.signal_stop()
================================================
FILE: util.py
================================================
import os
import logging
from datetime import datetime
def get_timestamp():
return datetime.now().strftime('%y%m%d-%H%M%S')
def setup_logger(logger_name, root, phase, level=logging.INFO, screen=False, tofile=False):
'''set up logger'''
lg = logging.getLogger(logger_name)
formatter = logging.Formatter('%(asctime)s.%(msecs)03d - %(levelname)s: %(message)s',
datefmt='%y-%m-%d %H:%M:%S')
lg.setLevel(level)
if tofile:
log_file = os.path.join(root, phase + '_{}.log'.format(get_timestamp()))
fh = logging.FileHandler(log_file, mode='w')
fh.setFormatter(formatter)
lg.addHandler(fh)
if screen:
sh = logging.StreamHandler()
sh.setFormatter(formatter)
lg.addHandler(sh)
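setup_logger attaches file and/or stream handlers to a named logger, which train_logging.py later retrieves with logging.getLogger('train'). A self-contained usage sketch that writes to a temporary directory; the logger name 'demo' and the mirrored helper below are illustrative, not part of the repo's API:

```python
import logging
import os
import tempfile
from datetime import datetime

def setup_logger(logger_name, root, phase, level=logging.INFO, screen=False, tofile=False):
    """Mirror of util.setup_logger: a named logger with optional file/stream handlers."""
    lg = logging.getLogger(logger_name)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s: %(message)s')
    lg.setLevel(level)
    if tofile:
        stamp = datetime.now().strftime('%y%m%d-%H%M%S')
        fh = logging.FileHandler(os.path.join(root, phase + '_%s.log' % stamp), mode='w')
        fh.setFormatter(formatter)
        lg.addHandler(fh)
    if screen:
        sh = logging.StreamHandler()
        sh.setFormatter(formatter)
        lg.addHandler(sh)

root = tempfile.mkdtemp()
setup_logger('demo', root, 'train', tofile=True)
logging.getLogger('demo').info('epoch 1 done')
for h in logging.getLogger('demo').handlers:
    h.flush()  # make sure the line hits disk before we inspect the file
log_files = [f for f in os.listdir(root) if f.endswith('.log')]
```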
================================================
FILE: viz.py
================================================
from os.path import join
from scipy.ndimage import zoom
import matplotlib.pyplot as plt
import numpy as np
import config as c
import datasets
n_imgs = 4
n_plots = 2
figsize = (4,4)
class Visualizer:
def __init__(self, loss_labels):
self.n_losses = len(loss_labels)
self.loss_labels = loss_labels
self.counter = 1
header = 'Epoch'
for l in loss_labels:
header += '\t\t%s' % (l)
self.config_str = ""
self.config_str += "==="*30 + "\n"
self.config_str += "Config options:\n\n"
        for v in dir(c):
            if v[0] == '_':
                continue
            s = getattr(c, v)  # safer than eval() for reading config attributes
            self.config_str += " {:25}\t{}\n".format(v, s)
self.config_str += "==="*30 + "\n"
print(self.config_str)
print(header)
def update_losses(self, losses, *args):
print('\r', ' '*20, end='')
line = '\r%.3i' % (self.counter)
for l in losses:
line += '\t\t%.4f' % (l)
print(line)
self.counter += 1
def update_images(self, *img_list):
w = img_list[0].shape[2]
k = 0
k_img = 0
show_img = np.zeros((3, w*n_imgs, w*n_imgs), dtype=np.uint8)
img_list_np = []
for im in img_list:
im_np = im
img_list_np.append(np.clip((255. * im_np), 0, 255).astype(np.uint8))
for i in range(n_imgs):
for j in range(n_imgs):
show_img[:, w*i:w*i+w, w*j:w*j+w] = img_list_np[k]
k += 1
if k >= len(img_list_np):
k = 0
k_img += 1
plt.imsave(join(c.img_folder, '%.4d.jpg'%(self.counter)), show_img.transpose(1,2,0))
return zoom(show_img, (1., c.preview_upscale, c.preview_upscale), order=0)
def update_hist(self, *args):
pass
def update_running(self, *args):
pass
visualizer = Visualizer(c.loss_names)
def show_loss(losses, logscale=False):
visualizer.update_losses(losses)
def show_imgs(*imgs):
visualizer.update_images(*imgs)
def show_hist(data):
visualizer.update_hist(data.data)
def signal_start():
visualizer.update_running(True)
def signal_stop():
visualizer.update_running(False)
def close():
    # Visualizer defines no close(); close any open matplotlib figures instead.
    plt.close('all')