---
tags:
  - huggan
  - gan
datasets:
  - huggan/maps
license: mit
---

# Pix2Pix trained on the maps dataset

## Model description

This model is a Pix2Pix model trained on the huggan/maps dataset. Its goal is to translate a satellite image into a map rendering à la Google Maps, and the other way around.

The model was trained using the example script provided by Hugging Face as part of the HugGAN sprint.

## Intended uses & limitations

### How to use

```python
import torch
from huggan.pytorch.pix2pix.modeling_pix2pix import GeneratorUNet
from PIL import Image
from torchvision import transforms as T
from torchvision.utils import save_image

# Preprocessing assumed to match the training setup: 256x256 inputs normalized to [-1, 1].
transform = T.Compose([
    T.Resize((256, 256)),
    T.ToTensor(),
    T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

image = Image.open("...").convert("RGB")
generator = GeneratorUNet.from_pretrained("huggan/pix2pix-maps")
generator.eval()

pixel_values = transform(image).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    output = generator(pixel_values)
save_image(output, "output.png", normalize=True)  # rescale [-1, 1] back before writing the file
```
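
If you want to work with the result in memory rather than writing it to disk, you can undo the [-1, 1] normalization yourself. The snippet below is a minimal sketch of that post-processing step, reusing the `output` tensor from above.

```python
from torchvision.transforms.functional import to_pil_image

# Map the generator output from [-1, 1] back to [0, 1], drop the batch
# dimension, and convert to a PIL image.
result = to_pil_image(((output.squeeze(0) + 1) / 2).clamp(0, 1))
result.show()
```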

### Limitations and bias

The model was trained only on the paired satellite/map tiles in huggan/maps, so it is unlikely to generalize well to imagery from other providers, regions, zoom levels, or rendering styles. As with most GANs, outputs can contain artifacts such as blurred or hallucinated roads and labels, and should not be treated as geographically accurate.

## Training data

The data used was huggan/maps.
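
As a quick sanity check, the dataset can be pulled straight from the Hub with 🤗 Datasets. This is a small sketch; the exact column names and splits are best confirmed on the dataset page.

```python
from datasets import load_dataset

# Paired satellite/map tiles used for training.
dataset = load_dataset("huggan/maps")

print(dataset)  # available splits, number of examples and column names
```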

## Training procedure

The following command was used:

```bash
accelerate launch train.py --dataset huggan/maps --push_to_hub --model_name pix2pix-maps --checkpoint_interval 1
```
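
For context, pix2pix combines a conditional GAN loss with an L1 reconstruction loss weighted by λ (100 in the paper cited below). The sketch below illustrates that generator objective in plain PyTorch; `generator`, `discriminator`, `input_img` and `target_img` are placeholders, and the exact loss formulation may differ from the actual training script.

```python
import torch
import torch.nn.functional as F

lambda_l1 = 100  # L1 weight used in the original pix2pix paper

def generator_loss(generator, discriminator, input_img, target_img):
    # Translate the input; the discriminator judges (input, output) pairs.
    fake = generator(input_img)
    pred_fake = discriminator(torch.cat([input_img, fake], dim=1))

    # Adversarial term: push the discriminator to rate the fake pair as real.
    adv_loss = F.binary_cross_entropy_with_logits(
        pred_fake, torch.ones_like(pred_fake)
    )

    # L1 term keeps the output close to the ground-truth target tile.
    l1_loss = F.l1_loss(fake, target_img)
    return adv_loss + lambda_l1 * l1_loss
```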

## Eval results

### Generated Images


## BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/IsolaZZE16,
  author    = {Phillip Isola and
               Jun{-}Yan Zhu and
               Tinghui Zhou and
               Alexei A. Efros},
  title     = {Image-to-Image Translation with Conditional Adversarial Networks},
  journal   = {CoRR},
  volume    = {abs/1611.07004},
  year      = {2016},
  url       = {http://arxiv.org/abs/1611.07004},
  eprinttype = {arXiv},
  eprint    = {1611.07004},
  timestamp = {Mon, 13 Aug 2018 16:49:05 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/IsolaZZE16.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```