---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
---
Name: 4xRealWebPhoto_RGT
License: CC BY 4.0
Author: Philip Hofmann
Network: RGT
Scale: 4
Release Date: 10.03.2024
Purpose: 4x real web photo upscaler, meant for upscaling photos downloaded from the web
Iterations: 260'000
epoch: 71
batch_size: 12
HR_size: 128
Dataset: 4xRealWebPhoto
Number of train images: 6118
OTF Training: No
Pretrained_Model_G: RGT_x4
Description:
4x real web photo upscaler, meant for upscaling photos downloaded from the web. Trained on v1 of my 4xRealWebPhoto dataset, it should be able to handle noise, JPG and WebP (re)compression, (re)scaling, and a bit of lens blur. I tried to simulate the use case of someone uploading a photo (maybe noisy, maybe blurry, maybe both) to the web (for example social media), where the provider downscales and compresses the image, and then another user downloads and re-uploads it, causing another round of rescaling and recompression; a rough sketch of this degradation is shown below.
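The following is a hypothetical sketch of that two-round web degradation using Pillow. The parameters, filenames, and exact steps here are illustrative assumptions, not the actual dataset workflow (see the attached PDF for that).

```python
from io import BytesIO

from PIL import Image, ImageFilter


def simulate_web_round(img: Image.Image, scale: float, fmt: str, quality: int) -> Image.Image:
    """Rescale and recompress an image, roughly as a web provider might."""
    w, h = img.size
    img = img.resize((max(1, int(w * scale)), max(1, int(h * scale))), Image.Resampling.BICUBIC)
    buf = BytesIO()
    img.save(buf, format=fmt, quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


photo = Image.open("photo.png").convert("RGB")       # placeholder filename
photo = photo.filter(ImageFilter.GaussianBlur(0.5))  # a bit of lens blur
# First upload: the provider downscales and JPEG-compresses.
web1 = simulate_web_round(photo, scale=0.75, fmt="JPEG", quality=85)
# Re-upload by another user: another rescale plus WebP recompression.
web2 = simulate_web_round(web1, scale=0.9, fmt="WEBP", quality=80)
web2.save("photo_degraded.png")
```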
The workflow for the dataset degradation can be found in the attached PDF; information about the very first attempt at this approach, v0, is in the appendix of that PDF. v0 was basically trained on a degraded RSBlur dataset with strong motion blur, v1 (this one) was trained on nomos8k_sfw, and v2 was trained on nomos8k with multiscale and variants, plus different degradation values.
This was my first RealWebPhoto model (so v1).
v0 is appended simply for information retention purposes; it was only trained for 70k iterations on a degraded RSBlur dataset, but it became apparent that the motion blur was too extreme for this use case. The initial idea was to get realistic blur by using a pre-blurred dataset like RSBlur instead of adding blur synthetically.
![4xRealWebPhoto_RGT](https://github.com/Phhofm/models/assets/14755670/0667688d-c85e-4efd-9601-cb9bc8a6e8e5)
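As a minimal inference sketch, assuming the spandrel library for loading the checkpoint and placeholder filenames (adapt paths and device to your setup):

```python
import numpy as np
import torch
from PIL import Image
from spandrel import ModelLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load the 4x RGT checkpoint (placeholder filename).
model = ModelLoader().load_from_file("4xRealWebPhoto_RGT.pth").to(device).eval()

img = Image.open("input.png").convert("RGB")
x = torch.from_numpy(np.array(img)).permute(2, 0, 1).float().div(255).unsqueeze(0).to(device)

with torch.no_grad():
    y = model(x)  # expected shape: (1, 3, 4*H, 4*W)

out = y.squeeze(0).permute(1, 2, 0).clamp(0, 1).mul(255).byte().cpu().numpy()
Image.fromarray(out).save("output.png")
```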