Phips committed on
Commit 0bbf475
1 Parent(s): bc9a828

Update app.py

Files changed (1)
  1. app.py +2 -2
app.py CHANGED
@@ -169,7 +169,7 @@ def main():
 
  These two 2x models are a Compact (SRVGGNet) for '2x Fast Upscale' and an ESRGAN (RRDBNet) for '2x Upscale', both of which I recently trained and released (December 23).
  2x Fast Upscale: [2xNomosUni_compact_multijpg](https://openmodeldb.info/models/2x-NomosUni-compact-multijpg)
- 2x Upscale: 2xNomosUni_esrgan_multijpg (not on openmodeldb yet)
+ 2x Upscale: 2xNomosUni_esrgan_multijpg (not on openmodeldb yet, but on my [google drive](https://drive.google.com/drive/folders/12zKVS74mz0NtBKGlkx_r0ytUVoeDiTX3?usp=drive_link))
 
  These two models are general upscalers with the goal of handling jpg compression and preserving depth of field for the most part.
 
@@ -186,7 +186,7 @@ def main():
  After discovering and using [chaiNNer](https://github.com/chaiNNer-org/chaiNNer) with the [upscale wiki model database](https://upscale.wiki/w/index.php?title=Model_Database&oldid=1571), I thought that having visual outputs instead of only textual model descriptions would be nice, so one could not just read about these models but visually see what they do.
  So I gathered all of the upscaling models on there, created a [youtube vid](https://youtu.be/0TYRDmQ5LZk) to compare ESRGAN models, made a [reddit post](https://www.reddit.com/r/StableDiffusion/comments/yev37i/comparison_of_upscaling_models_for_ai_generated/), and built a whole [Interactive Visual Comparison of Upscaling Models website](https://phhofm.github.io/upscale/) with [vitepress](https://vitepress.dev/) (which had reached 1.0.0-alpha.26 at that time) to compare the visual outputs of over 300 different upscaling models.
  Instead of only using and comparing upscaling models, I started learning about and training models myself, and in March 23 released my very first upscaling model, [4xLSDIRCompact](https://openmodeldb.info/models/4x-LSDIRCompact), a Compact model based on the [LSDIR](https://data.vision.ee.ethz.ch/yawli/) dataset.
- Since then I have trained and released over 50 models of different networks/architectures like SRVGGNet, [RRDBNet](https://github.com/xinntao/Real-ESRGAN), [SwinIR](https://github.com/JingyunLiang/SwinIR), [SRFormer](https://github.com/HVision-NKU/SRFormer) where my model got mentioned on the dev's readme, [GRL](https://github.com/ofsoundof/GRL-Image-Restoration), [OmniSR](https://github.com/Francis0625/Omni-SR), [EDSR](https://github.com/sanghyun-son/EDSR-PyTorch), [HAT](https://github.com/XPixelGroup/HAT), [DAT](https://github.com/zhengchen1999/DAT) where my model got mentioned on the dev's readme, and [SPAN](https://github.com/hongyuanyu/SPAN).
+ Since then I have trained and released over 50 models of different networks/architectures like SRVGGNet, [RRDBNet](https://github.com/xinntao/Real-ESRGAN), [SwinIR](https://github.com/JingyunLiang/SwinIR), [SRFormer](https://github.com/HVision-NKU/SRFormer) (my model got mentioned on the readme), [GRL](https://github.com/ofsoundof/GRL-Image-Restoration), [OmniSR](https://github.com/Francis0625/Omni-SR), [EDSR](https://github.com/sanghyun-son/EDSR-PyTorch), [HAT](https://github.com/XPixelGroup/HAT), [DAT](https://github.com/zhengchen1999/DAT) (my model got mentioned on the readme), and [SPAN](https://github.com/hongyuanyu/SPAN).
  I helped with testing and bug reporting for [neosr](https://github.com/muslll/neosr), released the datasets FaceUp and SSDIR, and made a [youtube video](https://www.youtube.com/watch?v=TBiVIzQkptI) about them.
  It has been fun and fascinating so far :D
 
 
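For anyone who wants to try the two 2x models mentioned in this diff, here is a minimal sketch (not part of the commit) of how such a model can be loaded and run, assuming the [spandrel](https://github.com/chaiNNer-org/spandrel) package (the model-loading library behind chaiNNer) and a locally downloaded checkpoint; the file name and the random input tensor are illustrative.

```python
# Minimal sketch: load and run one of the 2x models with spandrel.
# Assumes the .pth checkpoint has been downloaded locally; the file
# name below is illustrative, not part of this commit.
import torch
from spandrel import ImageModelDescriptor, ModelLoader

model = ModelLoader().load_from_file("2xNomosUni_compact_multijpg.pth")
assert isinstance(model, ImageModelDescriptor)  # single-image SR model
model.eval()

# One RGB image as a float tensor in [0, 1], shape (1, 3, H, W).
lr = torch.rand(1, 3, 64, 64)  # placeholder input
with torch.no_grad():
    sr = model(lr)
print(sr.shape)  # (1, 3, 128, 128) for a 2x model
```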