# Frequently Asked Questions

### 1. How do I reproduce your results in the [PIRM18-SR Challenge](https://www.pirm2018.org/PIRM-SR.html) (with a low perceptual index)?
First, the ESRGAN model released on GitHub (`RRDB_ESRGAN_x4.pth`) is **different** from the model we submitted to the competition.

We found that a lower perceptual index does not always guarantee better visual quality.

The aims of the competition and of our ESRGAN work differ slightly: the competition targets the lowest perceptual index, while our ESRGAN work targets the best visual quality.
> More analyses can be found in Sec. 4.1 and Sec. 5 of the [PIRM18-SR Challenge report](https://arxiv.org/pdf/1809.07517.pdf).
> It points out that the PI (perceptual index) correlates well with human opinion scores on a coarse scale, but not always on a finer scale. This highlights the urgent need for better perceptual quality metrics.

Therefore, in the PIRM18-SR Challenge we used several tricks to obtain the best perceptual index (see Section 4.5 in the [paper](https://arxiv.org/abs/1809.00219)).
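As background, the challenge report defines the perceptual index as a combination of the Ma et al. score and NIQE, both no-reference quality metrics computed with their reference implementations. A minimal sketch of that formula (the scores in the example are made up for illustration):

```python
# Sketch of the perceptual index (PI) used by the PIRM18-SR Challenge:
# PI = 0.5 * ((10 - Ma) + NIQE). Lower PI predicts better perceptual quality.

def perceptual_index(ma_score: float, niqe_score: float) -> float:
    """Combine a Ma score and a NIQE score into the challenge's PI."""
    return 0.5 * ((10.0 - ma_score) + niqe_score)

# Made-up scores for illustration only:
print(perceptual_index(ma_score=8.5, niqe_score=4.0))  # 2.75
```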
Here we provide the models and codes used in the competition, which can reproduce the following results on the `PIRM test dataset` (we use MATLAB 2016b/2017a):
| Group | Perceptual index | RMSE |
| ------------- |:-------------:| -----:|
| SuperSR | 1.978 | 15.30 |
> 1. Download the model and codes from [GoogleDrive](https://drive.google.com/file/d/1l0gBRMqhVLpL_-7R7aN-q-3hnv5ADFSM/view?usp=sharing)
> 2. Put the LR input images in the `LR` folder
> 3. Run `python test.py`
> 4. Run `main_reverse_filter.m` in MATLAB as post-processing
> 5. The results on my computer are: perceptual index **1.9777** and RMSE **15.304**
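For reference, the RMSE figure reported above measures pixel-wise distortion between the super-resolved output and the ground truth. A minimal NumPy sketch of the core formula (the official evaluation scripts additionally handle details such as border cropping and color conversion, which are omitted here):

```python
import numpy as np

def rmse(sr: np.ndarray, hr: np.ndarray) -> float:
    """Root-mean-square error between two images of the same shape."""
    diff = sr.astype(np.float64) - hr.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy example: a constant difference of 3 gives an RMSE of exactly 3.0.
print(rmse(np.zeros((4, 4)), np.full((4, 4), 3)))  # 3.0
```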
### 2. How do you get the perceptual index in your ESRGAN paper?

In our paper, we report the perceptual index in two places.
1). In Fig. 2, the perceptual index on the PIRM self-validation dataset is obtained with the **model we submitted to the competition**, since the purpose of this figure is to show the perception-distortion plane. We also apply the same post-processing here as in the competition.
2). In Fig. 7, the perceptual indices are provided as references; they are measured on the data generated by the released ESRGAN model `RRDB_ESRGAN_x4.pth` on GitHub.

Also, there is **no** post-processing when testing the ESRGAN model, in order to obtain better visual quality.