akhaliq3 committed
Commit bf91721
Parent(s): e40c95c

spaces demo
Files changed:

- FAQ.md +9 -0
- LICENSE +29 -0
- MANIFEST.in +8 -0
- Training.md +100 -0
- VERSION +1 -0
- experiments/pretrained_models/README.md +1 -0
- inference_realesrgan.py +79 -0
- options/train_realesrgan_x4plus.yml +186 -0
- options/train_realesrnet_x4plus.yml +144 -0
- realesrgan/__init__.py +6 -0
- realesrgan/archs/__init__.py +10 -0
- realesrgan/archs/discriminator_arch.py +60 -0
- realesrgan/data/__init__.py +10 -0
- realesrgan/data/realesrgan_dataset.py +175 -0
- realesrgan/models/__init__.py +10 -0
- realesrgan/models/realesrgan_model.py +242 -0
- realesrgan/models/realesrnet_model.py +172 -0
- realesrgan/train.py +11 -0
- realesrgan/utils.py +231 -0
- realesrgan/weights/README.md +3 -0
- requirements.txt +4 -0
- scripts/pytorch2onnx.py +17 -0
- setup.cfg +22 -0
- setup.py +113 -0
FAQ.md
ADDED
@@ -0,0 +1,9 @@
+# FAQ
+
+1. **What is the difference between `--netscale` and `--outscale`?**
+
+    A: TODO.
+
+1. **How to select models?**
+
+    A: TODO.
LICENSE
ADDED
@@ -0,0 +1,29 @@
+BSD 3-Clause License
+
+Copyright (c) 2021, Xintao Wang
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+   this list of conditions and the following disclaimer in the documentation
+   and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+   contributors may be used to endorse or promote products derived from
+   this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
MANIFEST.in
ADDED
@@ -0,0 +1,8 @@
+include assets/*
+include inputs/*
+include scripts/*.py
+include inference_realesrgan.py
+include VERSION
+include LICENSE
+include requirements.txt
+include realesrgan/weights/README.md
Training.md
ADDED
@@ -0,0 +1,100 @@
+# :computer: How to Train Real-ESRGAN
+
+The training codes have been released. <br>
+Note that the codes have been heavily refactored, so there may be some bugs or performance drops. You are welcome to report issues, and I will also retrain the models.
+
+## Overview
+
+The training is divided into two stages. These two stages share the same data synthesis process and training pipeline, except for the loss functions. Specifically,
+
+1. We first train Real-ESRNet with L1 loss from the pre-trained model ESRGAN.
+1. We then use the trained Real-ESRNet model as an initialization of the generator, and train the Real-ESRGAN with a combination of L1 loss, perceptual loss and GAN loss.
+
+## Dataset Preparation
+
+We use the DF2K (DIV2K and Flickr2K) + OST datasets for training. Only HR images are required. <br>
+You can download them from:
+
+1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip
+2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar
+3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip
+
+For the DF2K dataset, we use a multi-scale strategy, *i.e.*, we downsample the HR images to obtain several ground-truth images at different scales.
+
+We then crop the DF2K images into sub-images for faster IO and processing.
+
+You need to prepare a txt file containing the image paths. The following are some example lines from `meta_info_DF2Kmultiscale+OST_sub.txt` (as different users may partition the sub-images differently, this file will not match your data; you need to prepare your own txt file, and a small generation sketch follows the example below):
+
+```txt
+DF2K_HR_sub/000001_s001.png
+DF2K_HR_sub/000001_s002.png
+DF2K_HR_sub/000001_s003.png
+...
+```
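A minimal sketch (not part of the released codes; the folder names and output path are assumptions) of how such a meta-info txt file could be generated from your own sub-image folders:

```python
# Walk the assumed sub-image folders under dataroot_gt and write each image
# path (relative to dataroot_gt, one per line) into a meta-info txt file.
import os

dataroot_gt = 'datasets/DF2K'                 # assumption: your GT root
sub_folders = ['DF2K_HR_sub', 'OST_sub']      # assumption: your sub-image folders

with open('meta_info_own.txt', 'w') as fout:
    for folder in sub_folders:
        root = os.path.join(dataroot_gt, folder)
        for name in sorted(os.listdir(root)):
            if name.endswith('.png'):
                fout.write(f'{folder}/{name}\n')
```

Point the `meta_info` option at the generated file.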
+
+## Train Real-ESRNet
+
+1. Download the pre-trained model [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) into `experiments/pretrained_models`.
+    ```bash
+    wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models
+    ```
+1. Modify the content in the option file `options/train_realesrnet_x4plus.yml` accordingly:
+    ```yml
+    train:
+      name: DF2K+OST
+      type: RealESRGANDataset
+      dataroot_gt: datasets/DF2K  # modify to the root path of your folder
+      meta_info: realesrgan/data/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt  # modify to your own generated meta-info txt
+      io_backend:
+        type: disk
+    ```
+1. If you want to perform validation during training, uncomment those lines and modify them accordingly:
+    ```yml
+    # Uncomment these for validation
+    # val:
+    #   name: validation
+    #   type: PairedImageDataset
+    #   dataroot_gt: path_to_gt
+    #   dataroot_lq: path_to_lq
+    #   io_backend:
+    #     type: disk
+
+    ...
+
+    # Uncomment these for validation
+    # validation settings
+    # val:
+    #   val_freq: !!float 5e3
+    #   save_img: True
+
+    #   metrics:
+    #     psnr: # metric name, can be arbitrary
+    #       type: calculate_psnr
+    #       crop_border: 4
+    #       test_y_channel: false
+    ```
+1. Before the formal training, you may run in `--debug` mode to check whether everything is OK. We use four GPUs for training:
+    ```bash
+    CUDA_VISIBLE_DEVICES=0,1,2,3 \
+    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug
+    ```
+1. The formal training. We use four GPUs for training, and the `--auto_resume` argument to automatically resume the training if necessary.
+    ```bash
+    CUDA_VISIBLE_DEVICES=0,1,2,3 \
+    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
+    ```
+
+## Train Real-ESRGAN
+
+1. After the training of Real-ESRNet, you now have the file `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/models/net_g_1000000.pth`. If you need to specify a different pre-trained path, modify the `pretrain_network_g` value in the option file `train_realesrgan_x4plus.yml`.
+1. Modify the option file `train_realesrgan_x4plus.yml` accordingly. Most modifications are similar to those listed above.
+1. Before the formal training, you may run in `--debug` mode to check whether everything is OK. We use four GPUs for training:
+    ```bash
+    CUDA_VISIBLE_DEVICES=0,1,2,3 \
+    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug
+    ```
+1. The formal training. We use four GPUs for training, and the `--auto_resume` argument to automatically resume the training if necessary.
+    ```bash
+    CUDA_VISIBLE_DEVICES=0,1,2,3 \
+    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume
+    ```
VERSION
ADDED
@@ -0,0 +1 @@
+0.2.1
experiments/pretrained_models/README.md
ADDED
@@ -0,0 +1 @@
+# Put downloaded pre-trained models here
inference_realesrgan.py
ADDED
@@ -0,0 +1,79 @@
+import argparse
+import cv2
+import glob
+import os
+
+from realesrgan import RealESRGANer
+
+
+def main():
+    parser = argparse.ArgumentParser()
+    parser.add_argument('--input', type=str, default='inputs', help='Input image or folder')
+    parser.add_argument(
+        '--model_path',
+        type=str,
+        default='experiments/pretrained_models/RealESRGAN_x4plus.pth',
+        help='Path to the pre-trained model')
+    parser.add_argument('--output', type=str, default='results', help='Output folder')
+    parser.add_argument('--netscale', type=int, default=4, help='Upsample scale factor of the network')
+    parser.add_argument('--outscale', type=float, default=4, help='The final upsampling scale of the image')
+    parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored image')
+    parser.add_argument('--tile', type=int, default=0, help='Tile size, 0 for no tile during testing')
+    parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding')
+    parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border')
+    parser.add_argument('--half', action='store_true', help='Use half precision during inference')
+    parser.add_argument(
+        '--alpha_upsampler',
+        type=str,
+        default='realesrgan',
+        help='The upsampler for the alpha channels. Options: realesrgan | bicubic')
+    parser.add_argument(
+        '--ext',
+        type=str,
+        default='auto',
+        help='Image extension. Options: auto | jpg | png, auto means using the same extension as inputs')
+    args = parser.parse_args()
+
+    upsampler = RealESRGANer(
+        scale=args.netscale,
+        model_path=args.model_path,
+        tile=args.tile,
+        tile_pad=args.tile_pad,
+        pre_pad=args.pre_pad,
+        half=args.half)
+    os.makedirs(args.output, exist_ok=True)
+    if os.path.isfile(args.input):
+        paths = [args.input]
+    else:
+        paths = sorted(glob.glob(os.path.join(args.input, '*')))
+
+    for idx, path in enumerate(paths):
+        imgname, extension = os.path.splitext(os.path.basename(path))
+        print('Testing', idx, imgname)
+
+        img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
+        h, w = img.shape[0:2]
+        if max(h, w) > 1000 and args.netscale == 4:
+            import warnings
+            warnings.warn('The input image is large, try the X2 model for better performance.')
+        if max(h, w) < 500 and args.netscale == 2:
+            import warnings
+            warnings.warn('The input image is small, try the X4 model for better performance.')
+
+        try:
+            output, img_mode = upsampler.enhance(img, outscale=args.outscale)
+        except Exception as error:
+            print('Error', error)
+        else:
+            if args.ext == 'auto':
+                extension = extension[1:]
+            else:
+                extension = args.ext
+            if img_mode == 'RGBA':  # RGBA images should be saved in png format
+                extension = 'png'
+            save_path = os.path.join(args.output, f'{imgname}_{args.suffix}.{extension}')
+            cv2.imwrite(save_path, output)
+
+
+if __name__ == '__main__':
+    main()
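The script above is a thin CLI around the `RealESRGANer` helper from `realesrgan/utils.py`. A minimal sketch of using it programmatically (the model and image paths are assumptions):

```python
import cv2

from realesrgan import RealESRGANer

# Build the upsampler the same way inference_realesrgan.py does with defaults.
upsampler = RealESRGANer(
    scale=4,
    model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',  # assumed path
    tile=0,
    tile_pad=10,
    pre_pad=0,
    half=False)

img = cv2.imread('inputs/example.png', cv2.IMREAD_UNCHANGED)  # hypothetical input
output, img_mode = upsampler.enhance(img, outscale=4)
cv2.imwrite('results/example_out.png', output)
```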
options/train_realesrgan_x4plus.yml
ADDED
@@ -0,0 +1,186 @@
+# general settings
+name: train_RealESRGANx4plus_400k_B12G4_fromRealESRNet
+model_type: RealESRGANModel
+scale: 4
+num_gpu: 4
+manual_seed: 0
+
+# ----------------- options for synthesizing training data in RealESRGANModel ----------------- #
+# USM the ground-truth
+l1_gt_usm: True
+percep_gt_usm: True
+gan_gt_usm: False
+
+# the first degradation process
+resize_prob: [0.2, 0.7, 0.1]  # up, down, keep
+resize_range: [0.15, 1.5]
+gaussian_noise_prob: 0.5
+noise_range: [1, 30]
+poisson_scale_range: [0.05, 3]
+gray_noise_prob: 0.4
+jpeg_range: [30, 95]
+
+# the second degradation process
+second_blur_prob: 0.8
+resize_prob2: [0.3, 0.4, 0.3]  # up, down, keep
+resize_range2: [0.3, 1.2]
+gaussian_noise_prob2: 0.5
+noise_range2: [1, 25]
+poisson_scale_range2: [0.05, 2.5]
+gray_noise_prob2: 0.4
+jpeg_range2: [30, 95]
+
+gt_size: 256
+queue_size: 180
+
+# dataset and data loader settings
+datasets:
+  train:
+    name: DF2K+OST
+    type: RealESRGANDataset
+    dataroot_gt: datasets/DF2K
+    meta_info: realesrgan/data/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt
+    io_backend:
+      type: disk
+
+    blur_kernel_size: 21
+    kernel_list: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
+    kernel_prob: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
+    sinc_prob: 0.1
+    blur_sigma: [0.2, 3]
+    betag_range: [0.5, 4]
+    betap_range: [1, 2]
+
+    blur_kernel_size2: 21
+    kernel_list2: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
+    kernel_prob2: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
+    sinc_prob2: 0.1
+    blur_sigma2: [0.2, 1.5]
+    betag_range2: [0.5, 4]
+    betap_range2: [1, 2]
+
+    final_sinc_prob: 0.8
+
+    gt_size: 256
+    use_hflip: True
+    use_rot: False
+
+    # data loader
+    use_shuffle: true
+    num_worker_per_gpu: 5
+    batch_size_per_gpu: 12
+    dataset_enlarge_ratio: 1
+    prefetch_mode: ~
+
+  # Uncomment these for validation
+  # val:
+  #   name: validation
+  #   type: PairedImageDataset
+  #   dataroot_gt: path_to_gt
+  #   dataroot_lq: path_to_lq
+  #   io_backend:
+  #     type: disk
+
+# network structures
+network_g:
+  type: RRDBNet
+  num_in_ch: 3
+  num_out_ch: 3
+  num_feat: 64
+  num_block: 23
+  num_grow_ch: 32
+
+network_d:
+  type: UNetDiscriminatorSN
+  num_in_ch: 3
+  num_feat: 64
+  skip_connection: True
+
+# path
+path:
+  # use the pre-trained Real-ESRNet model
+  pretrain_network_g: experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/models/net_g_1000000.pth
+  param_key_g: params_ema
+  strict_load_g: true
+  resume_state: ~
+
+# training settings
+train:
+  ema_decay: 0.999
+  optim_g:
+    type: Adam
+    lr: !!float 1e-4
+    weight_decay: 0
+    betas: [0.9, 0.99]
+  optim_d:
+    type: Adam
+    lr: !!float 1e-4
+    weight_decay: 0
+    betas: [0.9, 0.99]
+
+  scheduler:
+    type: MultiStepLR
+    milestones: [400000]
+    gamma: 0.5
+
+  total_iter: 400000
+  warmup_iter: -1  # no warm up
+
+  # losses
+  pixel_opt:
+    type: L1Loss
+    loss_weight: 1.0
+    reduction: mean
+  # perceptual loss (content and style losses)
+  perceptual_opt:
+    type: PerceptualLoss
+    layer_weights:
+      # before relu
+      'conv1_2': 0.1
+      'conv2_2': 0.1
+      'conv3_4': 1
+      'conv4_4': 1
+      'conv5_4': 1
+    vgg_type: vgg19
+    use_input_norm: true
+    perceptual_weight: !!float 1.0
+    style_weight: 0
+    range_norm: false
+    criterion: l1
+  # gan loss
+  gan_opt:
+    type: GANLoss
+    gan_type: vanilla
+    real_label_val: 1.0
+    fake_label_val: 0.0
+    loss_weight: !!float 1e-1
+
+  net_d_iters: 1
+  net_d_init_iters: 0
+
+# Uncomment these for validation
+# validation settings
+# val:
+#   val_freq: !!float 5e3
+#   save_img: True
+
+#   metrics:
+#     psnr: # metric name, can be arbitrary
+#       type: calculate_psnr
+#       crop_border: 4
+#       test_y_channel: false
+
+# logging settings
+logger:
+  print_freq: 100
+  save_checkpoint_freq: !!float 5e3
+  use_tb_logger: true
+  wandb:
+    project: ~
+    resume_id: ~
+
+# dist training settings
+dist_params:
+  backend: nccl
+  port: 29500
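A quick sanity check (a sketch, not part of the commit) that the option file above parses and that the `!!float` tags resolve to plain numbers:

```python
import yaml  # PyYAML

with open('options/train_realesrgan_x4plus.yml') as f:
    opt = yaml.safe_load(f)

print(opt['train']['optim_g']['lr'])                    # 0.0001
print(opt['train']['gan_opt']['loss_weight'])           # 0.1
print(opt['datasets']['train']['batch_size_per_gpu'])   # 12
```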
options/train_realesrnet_x4plus.yml
ADDED
@@ -0,0 +1,144 @@
+# general settings
+name: train_RealESRNetx4plus_1000k_B12G4_fromESRGAN
+model_type: RealESRNetModel
+scale: 4
+num_gpu: 4
+manual_seed: 0
+
+# ----------------- options for synthesizing training data in RealESRNetModel ----------------- #
+gt_usm: True  # USM the ground-truth
+
+# the first degradation process
+resize_prob: [0.2, 0.7, 0.1]  # up, down, keep
+resize_range: [0.15, 1.5]
+gaussian_noise_prob: 0.5
+noise_range: [1, 30]
+poisson_scale_range: [0.05, 3]
+gray_noise_prob: 0.4
+jpeg_range: [30, 95]
+
+# the second degradation process
+second_blur_prob: 0.8
+resize_prob2: [0.3, 0.4, 0.3]  # up, down, keep
+resize_range2: [0.3, 1.2]
+gaussian_noise_prob2: 0.5
+noise_range2: [1, 25]
+poisson_scale_range2: [0.05, 2.5]
+gray_noise_prob2: 0.4
+jpeg_range2: [30, 95]
+
+gt_size: 256
+queue_size: 180
+
+# dataset and data loader settings
+datasets:
+  train:
+    name: DF2K+OST
+    type: RealESRGANDataset
+    dataroot_gt: datasets/DF2K
+    meta_info: realesrgan/data/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt
+    io_backend:
+      type: disk
+
+    blur_kernel_size: 21
+    kernel_list: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
+    kernel_prob: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
+    sinc_prob: 0.1
+    blur_sigma: [0.2, 3]
+    betag_range: [0.5, 4]
+    betap_range: [1, 2]
+
+    blur_kernel_size2: 21
+    kernel_list2: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
+    kernel_prob2: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
+    sinc_prob2: 0.1
+    blur_sigma2: [0.2, 1.5]
+    betag_range2: [0.5, 4]
+    betap_range2: [1, 2]
+
+    final_sinc_prob: 0.8
+
+    gt_size: 256
+    use_hflip: True
+    use_rot: False
+
+    # data loader
+    use_shuffle: true
+    num_worker_per_gpu: 5
+    batch_size_per_gpu: 12
+    dataset_enlarge_ratio: 1
+    prefetch_mode: ~
+
+  # Uncomment these for validation
+  # val:
+  #   name: validation
+  #   type: PairedImageDataset
+  #   dataroot_gt: path_to_gt
+  #   dataroot_lq: path_to_lq
+  #   io_backend:
+  #     type: disk
+
+# network structures
+network_g:
+  type: RRDBNet
+  num_in_ch: 3
+  num_out_ch: 3
+  num_feat: 64
+  num_block: 23
+  num_grow_ch: 32
+
+# path
+path:
+  pretrain_network_g: experiments/pretrained_models/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth
+  param_key_g: params_ema
+  strict_load_g: true
+  resume_state: ~
+
+# training settings
+train:
+  ema_decay: 0.999
+  optim_g:
+    type: Adam
+    lr: !!float 2e-4
+    weight_decay: 0
+    betas: [0.9, 0.99]
+
+  scheduler:
+    type: MultiStepLR
+    milestones: [1000000]
+    gamma: 0.5
+
+  total_iter: 1000000
+  warmup_iter: -1  # no warm up
+
+  # losses
+  pixel_opt:
+    type: L1Loss
+    loss_weight: 1.0
+    reduction: mean
+
+# Uncomment these for validation
+# validation settings
+# val:
+#   val_freq: !!float 5e3
+#   save_img: True
+
+#   metrics:
+#     psnr: # metric name, can be arbitrary
+#       type: calculate_psnr
+#       crop_border: 4
+#       test_y_channel: false
+
+# logging settings
+logger:
+  print_freq: 100
+  save_checkpoint_freq: !!float 5e3
+  use_tb_logger: true
+  wandb:
+    project: ~
+    resume_id: ~
+
+# dist training settings
+dist_params:
+  backend: nccl
+  port: 29500
realesrgan/__init__.py
ADDED
@@ -0,0 +1,6 @@
+# flake8: noqa
+from .archs import *
+from .data import *
+from .models import *
+from .utils import *
+from .version import __gitsha__, __version__
realesrgan/archs/__init__.py
ADDED
@@ -0,0 +1,10 @@
+import importlib
+from basicsr.utils import scandir
+from os import path as osp
+
+# automatically scan and import arch modules for registry
+# scan all the files that end with '_arch.py' under the archs folder
+arch_folder = osp.dirname(osp.abspath(__file__))
+arch_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(arch_folder) if v.endswith('_arch.py')]
+# import all the arch modules
+_arch_modules = [importlib.import_module(f'realesrgan.archs.{file_name}') for file_name in arch_filenames]
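Because of this scanner, adding a new architecture only requires dropping a `*_arch.py` file in this folder and registering the class. A hypothetical sketch (the file name `toy_arch.py` and class `ToyArch` are made up for illustration):

```python
# realesrgan/archs/toy_arch.py (hypothetical): auto-imported by the scanner
# above, which would make `type: ToyArch` usable in an option file.
from basicsr.utils.registry import ARCH_REGISTRY
from torch import nn


@ARCH_REGISTRY.register()
class ToyArch(nn.Module):

    def __init__(self, num_in_ch=3):
        super().__init__()
        self.conv = nn.Conv2d(num_in_ch, num_in_ch, 3, 1, 1)

    def forward(self, x):
        return self.conv(x)
```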
realesrgan/archs/discriminator_arch.py
ADDED
@@ -0,0 +1,60 @@
+from basicsr.utils.registry import ARCH_REGISTRY
+from torch import nn as nn
+from torch.nn import functional as F
+from torch.nn.utils import spectral_norm
+
+
+@ARCH_REGISTRY.register()
+class UNetDiscriminatorSN(nn.Module):
+    """Defines a U-Net discriminator with spectral normalization (SN)"""
+
+    def __init__(self, num_in_ch, num_feat=64, skip_connection=True):
+        super(UNetDiscriminatorSN, self).__init__()
+        self.skip_connection = skip_connection
+        norm = spectral_norm
+
+        self.conv0 = nn.Conv2d(num_in_ch, num_feat, kernel_size=3, stride=1, padding=1)
+
+        self.conv1 = norm(nn.Conv2d(num_feat, num_feat * 2, 4, 2, 1, bias=False))
+        self.conv2 = norm(nn.Conv2d(num_feat * 2, num_feat * 4, 4, 2, 1, bias=False))
+        self.conv3 = norm(nn.Conv2d(num_feat * 4, num_feat * 8, 4, 2, 1, bias=False))
+        # upsample
+        self.conv4 = norm(nn.Conv2d(num_feat * 8, num_feat * 4, 3, 1, 1, bias=False))
+        self.conv5 = norm(nn.Conv2d(num_feat * 4, num_feat * 2, 3, 1, 1, bias=False))
+        self.conv6 = norm(nn.Conv2d(num_feat * 2, num_feat, 3, 1, 1, bias=False))
+
+        # extra
+        self.conv7 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False))
+        self.conv8 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False))
+
+        self.conv9 = nn.Conv2d(num_feat, 1, 3, 1, 1)
+
+    def forward(self, x):
+        x0 = F.leaky_relu(self.conv0(x), negative_slope=0.2, inplace=True)
+        x1 = F.leaky_relu(self.conv1(x0), negative_slope=0.2, inplace=True)
+        x2 = F.leaky_relu(self.conv2(x1), negative_slope=0.2, inplace=True)
+        x3 = F.leaky_relu(self.conv3(x2), negative_slope=0.2, inplace=True)
+
+        # upsample
+        x3 = F.interpolate(x3, scale_factor=2, mode='bilinear', align_corners=False)
+        x4 = F.leaky_relu(self.conv4(x3), negative_slope=0.2, inplace=True)
+
+        if self.skip_connection:
+            x4 = x4 + x2
+        x4 = F.interpolate(x4, scale_factor=2, mode='bilinear', align_corners=False)
+        x5 = F.leaky_relu(self.conv5(x4), negative_slope=0.2, inplace=True)
+
+        if self.skip_connection:
+            x5 = x5 + x1
+        x5 = F.interpolate(x5, scale_factor=2, mode='bilinear', align_corners=False)
+        x6 = F.leaky_relu(self.conv6(x5), negative_slope=0.2, inplace=True)
+
+        if self.skip_connection:
+            x6 = x6 + x0
+
+        # extra
+        out = F.leaky_relu(self.conv7(x6), negative_slope=0.2, inplace=True)
+        out = F.leaky_relu(self.conv8(out), negative_slope=0.2, inplace=True)
+        out = self.conv9(out)
+
+        return out
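A small shape check (a sketch, not part of the commit) for this discriminator: the three stride-2 convolutions downsample by 8, and the three bilinear interpolations bring the feature map back to the input resolution, so the output is a per-pixel logit map:

```python
import torch

from realesrgan.archs.discriminator_arch import UNetDiscriminatorSN

net_d = UNetDiscriminatorSN(num_in_ch=3, num_feat=64, skip_connection=True)
x = torch.randn(1, 3, 256, 256)  # a gt_size patch, as in the training configs
with torch.no_grad():
    print(net_d(x).shape)  # torch.Size([1, 1, 256, 256])
```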
realesrgan/data/__init__.py
ADDED
@@ -0,0 +1,10 @@
+import importlib
+from basicsr.utils import scandir
+from os import path as osp
+
+# automatically scan and import dataset modules for registry
+# scan all the files that end with '_dataset.py' under the data folder
+data_folder = osp.dirname(osp.abspath(__file__))
+dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')]
+# import all the dataset modules
+_dataset_modules = [importlib.import_module(f'realesrgan.data.{file_name}') for file_name in dataset_filenames]
realesrgan/data/realesrgan_dataset.py
ADDED
@@ -0,0 +1,175 @@
+import cv2
+import math
+import numpy as np
+import os
+import os.path as osp
+import random
+import time
+import torch
+from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
+from basicsr.data.transforms import augment
+from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor
+from basicsr.utils.registry import DATASET_REGISTRY
+from torch.utils import data as data
+
+
+@DATASET_REGISTRY.register()
+class RealESRGANDataset(data.Dataset):
+    """
+    Dataset used for Real-ESRGAN model.
+    """
+
+    def __init__(self, opt):
+        super(RealESRGANDataset, self).__init__()
+        self.opt = opt
+        # file client (io backend)
+        self.file_client = None
+        self.io_backend_opt = opt['io_backend']
+        self.gt_folder = opt['dataroot_gt']
+
+        if self.io_backend_opt['type'] == 'lmdb':
+            self.io_backend_opt['db_paths'] = [self.gt_folder]
+            self.io_backend_opt['client_keys'] = ['gt']
+            if not self.gt_folder.endswith('.lmdb'):
+                raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}")
+            with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin:
+                self.paths = [line.split('.')[0] for line in fin]
+        else:
+            with open(self.opt['meta_info']) as fin:
+                paths = [line.strip() for line in fin]
+            self.paths = [os.path.join(self.gt_folder, v) for v in paths]
+
+        # blur settings for the first degradation
+        self.blur_kernel_size = opt['blur_kernel_size']
+        self.kernel_list = opt['kernel_list']
+        self.kernel_prob = opt['kernel_prob']
+        self.blur_sigma = opt['blur_sigma']
+        self.betag_range = opt['betag_range']
+        self.betap_range = opt['betap_range']
+        self.sinc_prob = opt['sinc_prob']
+
+        # blur settings for the second degradation
+        self.blur_kernel_size2 = opt['blur_kernel_size2']
+        self.kernel_list2 = opt['kernel_list2']
+        self.kernel_prob2 = opt['kernel_prob2']
+        self.blur_sigma2 = opt['blur_sigma2']
+        self.betag_range2 = opt['betag_range2']
+        self.betap_range2 = opt['betap_range2']
+        self.sinc_prob2 = opt['sinc_prob2']
+
+        # a final sinc filter
+        self.final_sinc_prob = opt['final_sinc_prob']
+
+        self.kernel_range = [2 * v + 1 for v in range(3, 11)]  # kernel size ranges from 7 to 21
+        self.pulse_tensor = torch.zeros(21, 21).float()  # convolving with a pulse tensor brings no blurry effect
+        self.pulse_tensor[10, 10] = 1
+
+    def __getitem__(self, index):
+        if self.file_client is None:
+            self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
+
+        # -------------------------------- Load gt images -------------------------------- #
+        # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32.
+        gt_path = self.paths[index]
+        # avoid errors caused by high latency in reading files
+        retry = 3
+        while retry > 0:
+            try:
+                img_bytes = self.file_client.get(gt_path, 'gt')
+            except Exception as e:
+                logger = get_root_logger()
+                logger.warning(f'File client error: {e}, remaining retry times: {retry - 1}')
+                # change to another file to read; randint is inclusive, so subtract 1 to stay in range
+                index = random.randint(0, self.__len__() - 1)
+                gt_path = self.paths[index]
+                time.sleep(1)  # sleep 1s for occasional server congestion
+            else:
+                break
+            finally:
+                retry -= 1
+        img_gt = imfrombytes(img_bytes, float32=True)
+
+        # -------------------- augmentation for training: flip, rotation -------------------- #
+        img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot'])
+
+        # crop or pad to 400: 400 is hard-coded. You may change it accordingly
+        h, w = img_gt.shape[0:2]
+        crop_pad_size = 400
+        # pad
+        if h < crop_pad_size or w < crop_pad_size:
+            pad_h = max(0, crop_pad_size - h)
+            pad_w = max(0, crop_pad_size - w)
+            img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101)
+        # crop
+        if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size:
+            h, w = img_gt.shape[0:2]
+            # randomly choose top and left coordinates
+            top = random.randint(0, h - crop_pad_size)
+            left = random.randint(0, w - crop_pad_size)
+            img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...]
+
+        # ------------------------ Generate kernels (used in the first degradation) ------------------------ #
+        kernel_size = random.choice(self.kernel_range)
+        if np.random.uniform() < self.opt['sinc_prob']:
+            # this sinc filter setting is for kernels ranging from [7, 21]
+            if kernel_size < 13:
+                omega_c = np.random.uniform(np.pi / 3, np.pi)
+            else:
+                omega_c = np.random.uniform(np.pi / 5, np.pi)
+            kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
+        else:
+            kernel = random_mixed_kernels(
+                self.kernel_list,
+                self.kernel_prob,
+                kernel_size,
+                self.blur_sigma,
+                self.blur_sigma, [-math.pi, math.pi],
+                self.betag_range,
+                self.betap_range,
+                noise_range=None)
+        # pad kernel
+        pad_size = (21 - kernel_size) // 2
+        kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))
+
+        # ------------------------ Generate kernels (used in the second degradation) ------------------------ #
+        kernel_size = random.choice(self.kernel_range)
+        if np.random.uniform() < self.opt['sinc_prob2']:
+            if kernel_size < 13:
+                omega_c = np.random.uniform(np.pi / 3, np.pi)
+            else:
+                omega_c = np.random.uniform(np.pi / 5, np.pi)
+            kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
+        else:
+            kernel2 = random_mixed_kernels(
+                self.kernel_list2,
+                self.kernel_prob2,
+                kernel_size,
+                self.blur_sigma2,
+                self.blur_sigma2, [-math.pi, math.pi],
+                self.betag_range2,
+                self.betap_range2,
+                noise_range=None)
+
+        # pad kernel
+        pad_size = (21 - kernel_size) // 2
+        kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size)))
+
+        # ------------------------------------- sinc kernel ------------------------------------- #
+        if np.random.uniform() < self.opt['final_sinc_prob']:
+            kernel_size = random.choice(self.kernel_range)
+            omega_c = np.random.uniform(np.pi / 3, np.pi)
+            sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21)
+            sinc_kernel = torch.FloatTensor(sinc_kernel)
+        else:
+            sinc_kernel = self.pulse_tensor
+
+        # BGR to RGB, HWC to CHW, numpy to tensor
+        img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0]
+        kernel = torch.FloatTensor(kernel)
+        kernel2 = torch.FloatTensor(kernel2)
+
+        return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path}
+        return return_d
+
+    def __len__(self):
+        return len(self.paths)
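A minimal sketch (the dataset paths are assumptions) of constructing this dataset with the same option values used in `options/train_realesrnet_x4plus.yml` and drawing one sample:

```python
from realesrgan.data.realesrgan_dataset import RealESRGANDataset

opt = {
    'io_backend': {'type': 'disk'},
    'dataroot_gt': 'datasets/DF2K',  # assumption: HR sub-images live here
    'meta_info': 'realesrgan/data/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt',
    'use_hflip': True, 'use_rot': False,
    # first-degradation blur settings (values taken from the option files)
    'blur_kernel_size': 21,
    'kernel_list': ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso'],
    'kernel_prob': [0.45, 0.25, 0.12, 0.03, 0.12, 0.03],
    'sinc_prob': 0.1, 'blur_sigma': [0.2, 3], 'betag_range': [0.5, 4], 'betap_range': [1, 2],
    # second-degradation blur settings
    'blur_kernel_size2': 21,
    'kernel_list2': ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso'],
    'kernel_prob2': [0.45, 0.25, 0.12, 0.03, 0.12, 0.03],
    'sinc_prob2': 0.1, 'blur_sigma2': [0.2, 1.5], 'betag_range2': [0.5, 4], 'betap_range2': [1, 2],
    'final_sinc_prob': 0.8,
}
dataset = RealESRGANDataset(opt)
sample = dataset[0]
print(sample['gt'].shape, sample['kernel1'].shape, sample['sinc_kernel'].shape)
# expected: torch.Size([3, 400, 400]) torch.Size([21, 21]) torch.Size([21, 21])
```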
realesrgan/models/__init__.py
ADDED
@@ -0,0 +1,10 @@
+import importlib
+from basicsr.utils import scandir
+from os import path as osp
+
+# automatically scan and import model modules for registry
+# scan all the files that end with '_model.py' under the model folder
+model_folder = osp.dirname(osp.abspath(__file__))
+model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]
+# import all the model modules
+_model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames]
realesrgan/models/realesrgan_model.py
ADDED
@@ -0,0 +1,242 @@
+import numpy as np
+import random
+import torch
+from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
+from basicsr.data.transforms import paired_random_crop
+from basicsr.models.srgan_model import SRGANModel
+from basicsr.utils import DiffJPEG, USMSharp
+from basicsr.utils.img_process_util import filter2D
+from basicsr.utils.registry import MODEL_REGISTRY
+from collections import OrderedDict
+from torch.nn import functional as F
+
+
+@MODEL_REGISTRY.register()
+class RealESRGANModel(SRGANModel):
+    """RealESRGAN Model"""
+
+    def __init__(self, opt):
+        super(RealESRGANModel, self).__init__(opt)
+        self.jpeger = DiffJPEG(differentiable=False).cuda()
+        self.usm_sharpener = USMSharp().cuda()
+        self.queue_size = opt['queue_size']
+
+    @torch.no_grad()
+    def _dequeue_and_enqueue(self):
+        # training pair pool
+        # initialize
+        b, c, h, w = self.lq.size()
+        if not hasattr(self, 'queue_lr'):
+            assert self.queue_size % b == 0, 'queue size should be divisible by batch size'
+            self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
+            _, c, h, w = self.gt.size()
+            self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
+            self.queue_ptr = 0
+        if self.queue_ptr == self.queue_size:  # full
+            # do dequeue and enqueue
+            # shuffle
+            idx = torch.randperm(self.queue_size)
+            self.queue_lr = self.queue_lr[idx]
+            self.queue_gt = self.queue_gt[idx]
+            # get
+            lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
+            gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
+            # update
+            self.queue_lr[0:b, :, :, :] = self.lq.clone()
+            self.queue_gt[0:b, :, :, :] = self.gt.clone()
+
+            self.lq = lq_dequeue
+            self.gt = gt_dequeue
+        else:
+            # only do enqueue
+            self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
+            self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
+            self.queue_ptr = self.queue_ptr + b
+
+    @torch.no_grad()
+    def feed_data(self, data):
+        if self.is_train:
+            # training data synthesis
+            self.gt = data['gt'].to(self.device)
+            self.gt_usm = self.usm_sharpener(self.gt)
+
+            self.kernel1 = data['kernel1'].to(self.device)
+            self.kernel2 = data['kernel2'].to(self.device)
+            self.sinc_kernel = data['sinc_kernel'].to(self.device)
+
+            ori_h, ori_w = self.gt.size()[2:4]
+
+            # ----------------------- The first degradation process ----------------------- #
+            # blur
+            out = filter2D(self.gt_usm, self.kernel1)
+            # random resize
+            updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
+            if updown_type == 'up':
+                scale = np.random.uniform(1, self.opt['resize_range'][1])
+            elif updown_type == 'down':
+                scale = np.random.uniform(self.opt['resize_range'][0], 1)
+            else:
+                scale = 1
+            mode = random.choice(['area', 'bilinear', 'bicubic'])
+            out = F.interpolate(out, scale_factor=scale, mode=mode)
+            # noise
+            gray_noise_prob = self.opt['gray_noise_prob']
+            if np.random.uniform() < self.opt['gaussian_noise_prob']:
+                out = random_add_gaussian_noise_pt(
+                    out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
+            else:
+                out = random_add_poisson_noise_pt(
+                    out,
+                    scale_range=self.opt['poisson_scale_range'],
+                    gray_prob=gray_noise_prob,
+                    clip=True,
+                    rounds=False)
+            # JPEG compression
+            jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
+            out = torch.clamp(out, 0, 1)
+            out = self.jpeger(out, quality=jpeg_p)
+
+            # ----------------------- The second degradation process ----------------------- #
+            # blur
+            if np.random.uniform() < self.opt['second_blur_prob']:
+                out = filter2D(out, self.kernel2)
+            # random resize
+            updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
+            if updown_type == 'up':
+                scale = np.random.uniform(1, self.opt['resize_range2'][1])
+            elif updown_type == 'down':
+                scale = np.random.uniform(self.opt['resize_range2'][0], 1)
+            else:
+                scale = 1
+            mode = random.choice(['area', 'bilinear', 'bicubic'])
+            out = F.interpolate(
+                out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
+            # noise
+            gray_noise_prob = self.opt['gray_noise_prob2']
+            if np.random.uniform() < self.opt['gaussian_noise_prob2']:
+                out = random_add_gaussian_noise_pt(
+                    out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
+            else:
+                out = random_add_poisson_noise_pt(
+                    out,
+                    scale_range=self.opt['poisson_scale_range2'],
+                    gray_prob=gray_noise_prob,
+                    clip=True,
+                    rounds=False)
+
+            # JPEG compression + the final sinc filter
+            # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
+            # as one operation.
+            # We consider two orders:
+            #   1. [resize back + sinc filter] + JPEG compression
+            #   2. JPEG compression + [resize back + sinc filter]
+            # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
+            if np.random.uniform() < 0.5:
+                # resize back + the final sinc filter
+                mode = random.choice(['area', 'bilinear', 'bicubic'])
+                out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
+                out = filter2D(out, self.sinc_kernel)
+                # JPEG compression
+                jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
+                out = torch.clamp(out, 0, 1)
+                out = self.jpeger(out, quality=jpeg_p)
+            else:
+                # JPEG compression
+                jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
+                out = torch.clamp(out, 0, 1)
+                out = self.jpeger(out, quality=jpeg_p)
+                # resize back + the final sinc filter
+                mode = random.choice(['area', 'bilinear', 'bicubic'])
+                out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
+                out = filter2D(out, self.sinc_kernel)
+
+            # clamp and round
+            self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.
+
+            # random crop
+            gt_size = self.opt['gt_size']
+            (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size,
+                                                                 self.opt['scale'])
+
+            # training pair pool
+            self._dequeue_and_enqueue()
+            # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue
+            self.gt_usm = self.usm_sharpener(self.gt)
+        else:
+            self.lq = data['lq'].to(self.device)
+            if 'gt' in data:
+                self.gt = data['gt'].to(self.device)
+
+    def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
+        # do not use the synthetic process during validation
+        self.is_train = False
+        super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
+        self.is_train = True
+
+    def optimize_parameters(self, current_iter):
+        l1_gt = self.gt_usm
+        percep_gt = self.gt_usm
+        gan_gt = self.gt_usm
+        if self.opt['l1_gt_usm'] is False:
+            l1_gt = self.gt
+        if self.opt['percep_gt_usm'] is False:
+            percep_gt = self.gt
+        if self.opt['gan_gt_usm'] is False:
+            gan_gt = self.gt
+
+        # optimize net_g
+        for p in self.net_d.parameters():
+            p.requires_grad = False
+
+        self.optimizer_g.zero_grad()
+        self.output = self.net_g(self.lq)
+
+        l_g_total = 0
+        loss_dict = OrderedDict()
+        if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters):
+            # pixel loss
+            if self.cri_pix:
+                l_g_pix = self.cri_pix(self.output, l1_gt)
+                l_g_total += l_g_pix
+                loss_dict['l_g_pix'] = l_g_pix
+            # perceptual loss
+            if self.cri_perceptual:
+                l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt)
+                if l_g_percep is not None:
+                    l_g_total += l_g_percep
+                    loss_dict['l_g_percep'] = l_g_percep
+                if l_g_style is not None:
+                    l_g_total += l_g_style
+                    loss_dict['l_g_style'] = l_g_style
+            # gan loss
+            fake_g_pred = self.net_d(self.output)
+            l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)
+            l_g_total += l_g_gan
+            loss_dict['l_g_gan'] = l_g_gan
+
+            l_g_total.backward()
+            self.optimizer_g.step()
+
+        # optimize net_d
+        for p in self.net_d.parameters():
+            p.requires_grad = True
+
+        self.optimizer_d.zero_grad()
+        # real
+        real_d_pred = self.net_d(gan_gt)
+        l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)
+        loss_dict['l_d_real'] = l_d_real
+        loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())
+        l_d_real.backward()
+        # fake
+        fake_d_pred = self.net_d(self.output.detach().clone())  # clone for pt1.9
+        l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)
+        loss_dict['l_d_fake'] = l_d_fake
+        loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())
+        l_d_fake.backward()
+        self.optimizer_d.step()
+
+        if self.ema_decay > 0:
+            self.model_ema(decay=self.ema_decay)
+
+        self.log_dict = self.reduce_loss_dict(loss_dict)
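The `_dequeue_and_enqueue` pool above decorrelates synthesized training pairs across iterations. An isolated sketch of the same mechanism (written for illustration, not taken from the commit):

```python
import torch

queue_size, b = 8, 2                        # pool capacity and batch size
queue = torch.zeros(queue_size, 3, 64, 64)  # toy image pool
ptr = 0


def push_pop(batch):
    """Enqueue a fresh batch; once the pool is full, return a shuffled batch."""
    global ptr, queue
    if ptr == queue_size:                   # full: shuffle, then swap slots
        idx = torch.randperm(queue_size)
        queue = queue[idx]
        out = queue[:b].clone()
        queue[:b] = batch
        return out
    queue[ptr:ptr + b] = batch              # warm-up phase: enqueue only
    ptr += b
    return batch
```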
realesrgan/models/realesrnet_model.py
ADDED
@@ -0,0 +1,172 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
import numpy as np
|
2 |
+
import random
|
3 |
+
import torch
|
4 |
+
from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
|
5 |
+
from basicsr.data.transforms import paired_random_crop
|
6 |
+
from basicsr.models.sr_model import SRModel
|
7 |
+
from basicsr.utils import DiffJPEG, USMSharp
|
8 |
+
from basicsr.utils.img_process_util import filter2D
|
9 |
+
from basicsr.utils.registry import MODEL_REGISTRY
|
10 |
+
from torch.nn import functional as F
|
11 |
+
|
12 |
+
|
13 |
+
@MODEL_REGISTRY.register()
|
14 |
+
class RealESRNetModel(SRModel):
|
15 |
+
"""RealESRNet Model"""
|
16 |
+
|
17 |
+
def __init__(self, opt):
|
18 |
+
super(RealESRNetModel, self).__init__(opt)
|
19 |
+
self.jpeger = DiffJPEG(differentiable=False).cuda()
|
20 |
+
self.usm_sharpener = USMSharp().cuda()
|
21 |
+
self.queue_size = opt['queue_size']
|
22 |
+
|
23 |
+
@torch.no_grad()
|
24 |
+
def _dequeue_and_enqueue(self):
|
25 |
+
# training pair pool
|
26 |
+
# initialize
|
27 |
+
b, c, h, w = self.lq.size()
|
28 |
+
if not hasattr(self, 'queue_lr'):
|
29 |
+
assert self.queue_size % b == 0, 'queue size should be divisible by batch size'
|
30 |
+
self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
|
31 |
+
_, c, h, w = self.gt.size()
|
32 |
+
self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
|
33 |
+
self.queue_ptr = 0
|
34 |
+
if self.queue_ptr == self.queue_size: # full
|
35 |
+
# do dequeue and enqueue
|
36 |
+
# shuffle
|
37 |
+
idx = torch.randperm(self.queue_size)
|
38 |
+
self.queue_lr = self.queue_lr[idx]
|
39 |
+
self.queue_gt = self.queue_gt[idx]
|
40 |
+
# get
|
41 |
+
lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
|
42 |
+
gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
|
43 |
+
# update
|
44 |
+
self.queue_lr[0:b, :, :, :] = self.lq.clone()
|
45 |
+
self.queue_gt[0:b, :, :, :] = self.gt.clone()
|
46 |
+
|
47 |
+
self.lq = lq_dequeue
|
48 |
+
self.gt = gt_dequeue
|
49 |
+
else:
|
50 |
+
# only do enqueue
|
51 |
+
self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
|
52 |
+
self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
|
53 |
+
self.queue_ptr = self.queue_ptr + b
|
54 |
+
|
55 |
+
@torch.no_grad()
|
56 |
+
def feed_data(self, data):
|
57 |
+
if self.is_train:
|
58 |
+
# training data synthesis
|
59 |
+
self.gt = data['gt'].to(self.device)
|
60 |
+
# USM the GT images
|
61 |
+
if self.opt['gt_usm'] is True:
|
62 |
+
self.gt = self.usm_sharpener(self.gt)
|
63 |
+
|
64 |
+
self.kernel1 = data['kernel1'].to(self.device)
|
65 |
+
self.kernel2 = data['kernel2'].to(self.device)
|
66 |
+
self.sinc_kernel = data['sinc_kernel'].to(self.device)
|
67 |
+
|
68 |
+
ori_h, ori_w = self.gt.size()[2:4]
|
69 |
+
|
70 |
+
# ----------------------- The first degradation process ----------------------- #
|
71 |
+
# blur
|
72 |
+
out = filter2D(self.gt, self.kernel1)
|
73 |
+
# random resize
|
74 |
+
updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
|
75 |
+
if updown_type == 'up':
|
76 |
+
scale = np.random.uniform(1, self.opt['resize_range'][1])
|
77 |
+
elif updown_type == 'down':
|
78 |
+
scale = np.random.uniform(self.opt['resize_range'][0], 1)
|
79 |
            else:
                scale = 1
            mode = random.choice(['area', 'bilinear', 'bicubic'])
            out = F.interpolate(out, scale_factor=scale, mode=mode)
            # noise
            gray_noise_prob = self.opt['gray_noise_prob']
            if np.random.uniform() < self.opt['gaussian_noise_prob']:
                out = random_add_gaussian_noise_pt(
                    out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
            else:
                out = random_add_poisson_noise_pt(
                    out,
                    scale_range=self.opt['poisson_scale_range'],
                    gray_prob=gray_noise_prob,
                    clip=True,
                    rounds=False)
            # JPEG compression
            jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
            out = torch.clamp(out, 0, 1)
            out = self.jpeger(out, quality=jpeg_p)

            # ----------------------- The second degradation process ----------------------- #
            # blur
            if np.random.uniform() < self.opt['second_blur_prob']:
                out = filter2D(out, self.kernel2)
            # random resize
            updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
            if updown_type == 'up':
                scale = np.random.uniform(1, self.opt['resize_range2'][1])
            elif updown_type == 'down':
                scale = np.random.uniform(self.opt['resize_range2'][0], 1)
            else:
                scale = 1
            mode = random.choice(['area', 'bilinear', 'bicubic'])
            out = F.interpolate(
                out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
            # noise
            gray_noise_prob = self.opt['gray_noise_prob2']
            if np.random.uniform() < self.opt['gaussian_noise_prob2']:
                out = random_add_gaussian_noise_pt(
                    out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
            else:
                out = random_add_poisson_noise_pt(
                    out,
                    scale_range=self.opt['poisson_scale_range2'],
                    gray_prob=gray_noise_prob,
                    clip=True,
                    rounds=False)

            # JPEG compression + the final sinc filter
            # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
            # as one operation.
            # We consider two orders:
            #   1. [resize back + sinc filter] + JPEG compression
            #   2. JPEG compression + [resize back + sinc filter]
            # Empirically, we find other combinations (sinc + JPEG + resize) will introduce twisted lines.
            if np.random.uniform() < 0.5:
                # resize back + the final sinc filter
                mode = random.choice(['area', 'bilinear', 'bicubic'])
                out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
                out = filter2D(out, self.sinc_kernel)
                # JPEG compression
                jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
                out = torch.clamp(out, 0, 1)
                out = self.jpeger(out, quality=jpeg_p)
            else:
                # JPEG compression
                jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
                out = torch.clamp(out, 0, 1)
                out = self.jpeger(out, quality=jpeg_p)
                # resize back + the final sinc filter
                mode = random.choice(['area', 'bilinear', 'bicubic'])
                out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
                out = filter2D(out, self.sinc_kernel)

            # clamp and round
            self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.

            # random crop
            gt_size = self.opt['gt_size']
            self.gt, self.lq = paired_random_crop(self.gt, self.lq, gt_size, self.opt['scale'])

            # training pair pool
            self._dequeue_and_enqueue()
        else:
            self.lq = data['lq'].to(self.device)
            if 'gt' in data:
                self.gt = data['gt'].to(self.device)

    def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
        # do not use the synthetic process during validation
        self.is_train = False
        super(RealESRNetModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
        self.is_train = True
realesrgan/train.py
ADDED
@@ -0,0 +1,11 @@
# flake8: noqa
import os.path as osp
from basicsr.train import train_pipeline

import realesrgan.archs
import realesrgan.data
import realesrgan.models

if __name__ == '__main__':
    root_path = osp.abspath(osp.join(__file__, osp.pardir, osp.pardir))
    train_pipeline(root_path)
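
This entry script only registers the realesrgan architectures, datasets and models with BasicSR and then hands control to its train_pipeline, so training is launched like any BasicSR experiment, e.g. python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml (option files as added in this commit; the exact CLI flags are those of BasicSR's train_pipeline).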
realesrgan/utils.py
ADDED
@@ -0,0 +1,231 @@
import cv2
import math
import numpy as np
import os
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet
from torch.hub import download_url_to_file, get_dir
from torch.nn import functional as F
from urllib.parse import urlparse

ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))


class RealESRGANer():

    def __init__(self, scale, model_path, tile=0, tile_pad=10, pre_pad=10, half=False):
        self.scale = scale
        self.tile_size = tile
        self.tile_pad = tile_pad
        self.pre_pad = pre_pad
        self.mod_scale = None
        self.half = half

        # initialize model
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=scale)

        if model_path.startswith('https://'):
            model_path = load_file_from_url(
                url=model_path, model_dir='realesrgan/weights', progress=True, file_name=None)
        loadnet = torch.load(model_path)
        if 'params_ema' in loadnet:
            keyname = 'params_ema'
        else:
            keyname = 'params'
        model.load_state_dict(loadnet[keyname], strict=True)
        model.eval()
        self.model = model.to(self.device)
        if self.half:
            self.model = self.model.half()

    def pre_process(self, img):
        img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
        self.img = img.unsqueeze(0).to(self.device)
        if self.half:
            self.img = self.img.half()

        # pre_pad
        if self.pre_pad != 0:
            self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')
        # mod pad
        if self.scale == 2:
            self.mod_scale = 2
        elif self.scale == 1:
            self.mod_scale = 4
        if self.mod_scale is not None:
            self.mod_pad_h, self.mod_pad_w = 0, 0
            _, _, h, w = self.img.size()
            if (h % self.mod_scale != 0):
                self.mod_pad_h = (self.mod_scale - h % self.mod_scale)
            if (w % self.mod_scale != 0):
                self.mod_pad_w = (self.mod_scale - w % self.mod_scale)
            self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect')

    def process(self):
        self.output = self.model(self.img)

    def tile_process(self):
        """Modified from: https://github.com/ata4/esrgan-launcher
        """
        batch, channel, height, width = self.img.shape
        output_height = height * self.scale
        output_width = width * self.scale
        output_shape = (batch, channel, output_height, output_width)

        # start with black image
        self.output = self.img.new_zeros(output_shape)
        tiles_x = math.ceil(width / self.tile_size)
        tiles_y = math.ceil(height / self.tile_size)

        # loop over all tiles
        for y in range(tiles_y):
            for x in range(tiles_x):
                # extract tile from input image
                ofs_x = x * self.tile_size
                ofs_y = y * self.tile_size
                # input tile area on total image
                input_start_x = ofs_x
                input_end_x = min(ofs_x + self.tile_size, width)
                input_start_y = ofs_y
                input_end_y = min(ofs_y + self.tile_size, height)

                # input tile area on total image with padding
                input_start_x_pad = max(input_start_x - self.tile_pad, 0)
                input_end_x_pad = min(input_end_x + self.tile_pad, width)
                input_start_y_pad = max(input_start_y - self.tile_pad, 0)
                input_end_y_pad = min(input_end_y + self.tile_pad, height)

                # input tile dimensions
                input_tile_width = input_end_x - input_start_x
                input_tile_height = input_end_y - input_start_y
                tile_idx = y * tiles_x + x + 1
                input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad]

                # upscale tile
                try:
                    with torch.no_grad():
                        output_tile = self.model(input_tile)
                except Exception as error:
                    print('Error', error)
                print(f'\tTile {tile_idx}/{tiles_x * tiles_y}')

                # output tile area on total image
                output_start_x = input_start_x * self.scale
                output_end_x = input_end_x * self.scale
                output_start_y = input_start_y * self.scale
                output_end_y = input_end_y * self.scale

                # output tile area without padding
                output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale
                output_end_x_tile = output_start_x_tile + input_tile_width * self.scale
                output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale
                output_end_y_tile = output_start_y_tile + input_tile_height * self.scale

                # put tile into output image
                self.output[:, :, output_start_y:output_end_y,
                            output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile,
                                                                       output_start_x_tile:output_end_x_tile]

    def post_process(self):
        # remove extra pad
        if self.mod_scale is not None:
            _, _, h, w = self.output.size()
            self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale]
        # remove prepad
        if self.pre_pad != 0:
            _, _, h, w = self.output.size()
            self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale]
        return self.output

    @torch.no_grad()
    def enhance(self, img, outscale=None, alpha_upsampler='realesrgan'):
        h_input, w_input = img.shape[0:2]
        # img: numpy
        img = img.astype(np.float32)
        if np.max(img) > 255:  # 16-bit image
            max_range = 65535
            print('\tInput is a 16-bit image')
        else:
            max_range = 255
        img = img / max_range
        if len(img.shape) == 2:  # gray image
            img_mode = 'L'
            img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
        elif img.shape[2] == 4:  # RGBA image with alpha channel
            img_mode = 'RGBA'
            alpha = img[:, :, 3]
            img = img[:, :, 0:3]
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            if alpha_upsampler == 'realesrgan':
                alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB)
        else:
            img_mode = 'RGB'
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

        # ------------------- process image (without the alpha channel) ------------------- #
        self.pre_process(img)
        if self.tile_size > 0:
            self.tile_process()
        else:
            self.process()
        output_img = self.post_process()
        output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy()
        output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))
        if img_mode == 'L':
            output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY)

        # ------------------- process the alpha channel if necessary ------------------- #
        if img_mode == 'RGBA':
            if alpha_upsampler == 'realesrgan':
                self.pre_process(alpha)
                if self.tile_size > 0:
                    self.tile_process()
                else:
                    self.process()
                output_alpha = self.post_process()
                output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy()
                output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0))
                output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY)
            else:
                h, w = alpha.shape[0:2]
                output_alpha = cv2.resize(alpha, (w * self.scale, h * self.scale), interpolation=cv2.INTER_LINEAR)

            # merge the alpha channel
            output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA)
            output_img[:, :, 3] = output_alpha

        # ------------------------------ return ------------------------------ #
        if max_range == 65535:  # 16-bit image
            output = (output_img * 65535.0).round().astype(np.uint16)
        else:
            output = (output_img * 255.0).round().astype(np.uint8)

        if outscale is not None and outscale != float(self.scale):
            output = cv2.resize(
                output, (
                    int(w_input * outscale),
                    int(h_input * outscale),
                ), interpolation=cv2.INTER_LANCZOS4)

        return output, img_mode


def load_file_from_url(url, model_dir=None, progress=True, file_name=None):
    """Ref: https://github.com/1adrianb/face-alignment/blob/master/face_alignment/utils.py
    """
    if model_dir is None:
        hub_dir = get_dir()
        model_dir = os.path.join(hub_dir, 'checkpoints')

    os.makedirs(os.path.join(ROOT_DIR, model_dir), exist_ok=True)

    parts = urlparse(url)
    filename = os.path.basename(parts.path)
    if file_name is not None:
        filename = file_name
    cached_file = os.path.abspath(os.path.join(ROOT_DIR, model_dir, filename))
    if not os.path.exists(cached_file):
        print(f'Downloading: "{url}" to {cached_file}\n')
        download_url_to_file(url, cached_file, hash_prefix=None, progress=progress)
    return cached_file
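
As a quick orientation for readers of this diff, here is a minimal usage sketch of the class above; the weight file and image paths are hypothetical examples, not part of this commit:

# Minimal usage sketch for RealESRGANer (paths are hypothetical).
import cv2
from realesrgan.utils import RealESRGANer

upsampler = RealESRGANer(
    scale=4,
    model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',  # assumed local weight file
    tile=0,  # 0 = run the whole image through the network; >0 enables tile_process()
    tile_pad=10,
    pre_pad=10,
    half=False)
img = cv2.imread('inputs/example.png', cv2.IMREAD_UNCHANGED)  # hypothetical input image
output, img_mode = upsampler.enhance(img, outscale=4)
cv2.imwrite('results/example_out.png', output)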
realesrgan/weights/README.md
ADDED
@@ -0,0 +1,3 @@
# Weights

Put the downloaded weights into this folder.
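
Note that weights do not have to be fetched by hand: when RealESRGANer in realesrgan/utils.py is given an https:// model path, load_file_from_url downloads the file into this folder automatically.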
requirements.txt
ADDED
@@ -0,0 +1,4 @@
basicsr
numpy
opencv-python
torch>=1.7
scripts/pytorch2onnx.py
ADDED
@@ -0,0 +1,17 @@
import torch
import torch.onnx
from basicsr.archs.rrdbnet_arch import RRDBNet

# An instance of your model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32)
model.load_state_dict(torch.load('experiments/pretrained_models/RealESRGAN_x4plus.pth')['params_ema'])
# set the train mode to false since we will only run the forward pass
model.train(False)
model.cpu().eval()

# An example input you would normally provide to your model's forward() method
x = torch.rand(1, 3, 64, 64)

# Export the model
with torch.no_grad():
    torch.onnx.export(model, x, 'realesrgan-x4.onnx', opset_version=11, export_params=True)
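
A short sanity check of the exported graph can be run with onnxruntime (an assumed extra dependency; it is not in requirements.txt):

# Sanity-check sketch for the exported model (assumes onnxruntime is installed).
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('realesrgan-x4.onnx')
inp = np.random.rand(1, 3, 64, 64).astype(np.float32)
out = sess.run(None, {sess.get_inputs()[0].name: inp})[0]
print(out.shape)  # expect (1, 3, 256, 256) for the x4 model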
setup.cfg
ADDED
@@ -0,0 +1,22 @@
[flake8]
ignore =
    # line break before binary operator (W503)
    W503,
    # line break after binary operator (W504)
    W504,
max-line-length=120

[yapf]
based_on_style = pep8
column_limit = 120
blank_line_before_nested_class_or_def = true
split_before_expression_after_opening_paren = true

[isort]
line_length = 120
multi_line_output = 0
known_standard_library = pkg_resources,setuptools
known_first_party = realesrgan
known_third_party = basicsr,cv2,numpy,torch
no_lines_before = STDLIB,LOCALFOLDER
default_section = THIRDPARTY
setup.py
ADDED
@@ -0,0 +1,113 @@
#!/usr/bin/env python

from setuptools import find_packages, setup

import os
import subprocess
import time

version_file = 'realesrgan/version.py'


def readme():
    with open('README.md', encoding='utf-8') as f:
        content = f.read()
    return content


def get_git_hash():

    def _minimal_ext_cmd(cmd):
        # construct minimal environment
        env = {}
        for k in ['SYSTEMROOT', 'PATH', 'HOME']:
            v = os.environ.get(k)
            if v is not None:
                env[k] = v
        # LANGUAGE is used on win32
        env['LANGUAGE'] = 'C'
        env['LANG'] = 'C'
        env['LC_ALL'] = 'C'
        out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
        return out

    try:
        out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
        sha = out.strip().decode('ascii')
    except OSError:
        sha = 'unknown'

    return sha


def get_hash():
    if os.path.exists('.git'):
        sha = get_git_hash()[:7]
    elif os.path.exists(version_file):
        try:
            from realesrgan.version import __version__
            sha = __version__.split('+')[-1]
        except ImportError:
            raise ImportError('Unable to get git version')
    else:
        sha = 'unknown'

    return sha


def write_version_py():
    content = """# GENERATED VERSION FILE
# TIME: {}
__version__ = '{}'
__gitsha__ = '{}'
version_info = ({})
"""
    sha = get_hash()
    with open('VERSION', 'r') as f:
        SHORT_VERSION = f.read().strip()
    VERSION_INFO = ', '.join([x if x.isdigit() else f'"{x}"' for x in SHORT_VERSION.split('.')])

    version_file_str = content.format(time.asctime(), SHORT_VERSION, sha, VERSION_INFO)
    with open(version_file, 'w') as f:
        f.write(version_file_str)


def get_version():
    with open(version_file, 'r') as f:
        exec(compile(f.read(), version_file, 'exec'))
    return locals()['__version__']


def get_requirements(filename='requirements.txt'):
    here = os.path.dirname(os.path.realpath(__file__))
    with open(os.path.join(here, filename), 'r') as f:
        requires = [line.replace('\n', '') for line in f.readlines()]
    return requires


if __name__ == '__main__':
    write_version_py()
    setup(
        name='realesrgan',
        version=get_version(),
        description='Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration',
        long_description=readme(),
        long_description_content_type='text/markdown',
        author='Xintao Wang',
        author_email='xintao.wang@outlook.com',
        keywords='computer vision, pytorch, image restoration, super-resolution, esrgan, real-esrgan',
        url='https://github.com/xinntao/Real-ESRGAN',
        include_package_data=True,
        packages=find_packages(exclude=('options', 'datasets', 'experiments', 'results', 'tb_logger', 'wandb')),
        classifiers=[
            'Development Status :: 4 - Beta',
            'License :: OSI Approved :: BSD License',
            'Operating System :: OS Independent',
            'Programming Language :: Python :: 3',
            'Programming Language :: Python :: 3.7',
            'Programming Language :: Python :: 3.8',
        ],
        license='BSD-3-Clause License',
        setup_requires=['cython', 'numpy'],
        install_requires=get_requirements(),
        zip_safe=False)
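
For reference, write_version_py above stamps realesrgan/version.py at build time. With a hypothetical VERSION file containing 0.1.0 and a hypothetical short git hash abc1234, the generated file would read:

# GENERATED VERSION FILE
# TIME: Mon Sep  6 12:00:00 2021
__version__ = '0.1.0'
__gitsha__ = 'abc1234'
version_info = (0, 1, 0)

Digit-only components stay unquoted in version_info thanks to the isdigit() check, so a suffix such as the '0rc1' in 0.1.0rc1 would render as a quoted string instead.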