Update README.md

README.md

---
tags:
- huggan
- gan
datasets:
- huggan/maps
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---

# Pix2Pix trained on the maps dataset

## Model description

This model is a [Pix2Pix](https://arxiv.org/abs/1611.07004) model trained on the [huggan/maps](https://huggingface.co/datasets/huggan/maps) dataset. The goal of the model is to turn a satellite image into a Google Maps-style street map, and the other way around.

The model was trained using the [example script](https://github.com/huggingface/community-events/tree/main/huggan/pytorch/pix2pix) provided by Hugging Face as part of the [HugGAN sprint](https://github.com/huggingface/community-events/tree/main/huggan).

## Intended uses & limitations

#### How to use

```python
from huggan.pytorch.pix2pix import GeneratorUNet

# Load the pre-trained U-Net generator from the Hugging Face Hub
generator = GeneratorUNet.from_pretrained("huggan/pix2pix-maps")
```
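
Building on the snippet above, a minimal inference sketch could look as follows. The 256×256 input size and the [-1, 1] normalization are assumptions based on the standard Pix2Pix setup, and `satellite.png` stands in for your own input image; neither is confirmed by this card.

```python
from PIL import Image
import torch
from torchvision import transforms

# Assumed preprocessing: resize to 256x256 and normalize to [-1, 1],
# the standard Pix2Pix setup (not confirmed by this card)
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

image = Image.open("satellite.png").convert("RGB")  # placeholder input image
inputs = preprocess(image).unsqueeze(0)             # add a batch dimension

with torch.no_grad():
    outputs = generator(inputs)  # generator loaded in the snippet above

# Map the generator output back from [-1, 1] to [0, 1] and save it
result = transforms.ToPILImage()(((outputs[0] + 1) / 2).clamp(0, 1))
result.save("map.png")
```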

#### Limitations and bias

## Training data

The model was trained on the [huggan/maps](https://huggingface.co/datasets/huggan/maps) dataset.
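
Since the dataset is hosted on the Hub, it can be pulled with the `datasets` library. A quick way to inspect it before training (assuming a standard `train` split; the exact column names are whatever the dataset exposes):

```python
from datasets import load_dataset

# Download the paired satellite/map images from the Hub
dataset = load_dataset("huggan/maps")

# Inspect the splits and features before wiring the data into a training loop
print(dataset)
print(dataset["train"][0].keys())
```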

## Training procedure
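
This card does not document the training details. For background, the paper cited below trains Pix2Pix by combining a conditional-GAN loss with an L1 reconstruction term weighted by λ = 100. The following is a schematic PyTorch training step illustrating that objective, with hypothetical `generator` and `discriminator` modules (the discriminator being PatchGAN-style, scoring an (input, output) pair); it is a sketch of the loss, not the actual HugGAN example script.

```python
import torch
import torch.nn.functional as F

LAMBDA_L1 = 100.0  # weight of the L1 term, as in Isola et al.

def pix2pix_step(generator, discriminator, g_opt, d_opt, src, tgt):
    # --- Discriminator update: real (input, target) pairs vs. generated pairs ---
    fake = generator(src)
    d_real = discriminator(src, tgt)
    d_fake = discriminator(src, fake.detach())
    d_loss = (
        F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
        + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    )
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # --- Generator update: fool the discriminator and stay close to the target ---
    d_fake = discriminator(src, fake)
    g_loss = (
        F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
        + LAMBDA_L1 * F.l1_loss(fake, tgt)
    )
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```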

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/IsolaZZE16,
  author     = {Phillip Isola and
                Jun{-}Yan Zhu and
                Tinghui Zhou and
                Alexei A. Efros},
  title      = {Image-to-Image Translation with Conditional Adversarial Networks},
  journal    = {CoRR},
  volume     = {abs/1611.07004},
  year       = {2016},
  url        = {http://arxiv.org/abs/1611.07004},
  eprinttype = {arXiv},
  eprint     = {1611.07004},
  timestamp  = {Mon, 13 Aug 2018 16:49:05 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/IsolaZZE16.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```