---
tags:
- huggan
- gan
datasets:
- huggan/maps
task: image-to-image
license: mit
---

# Pix2Pix trained on the maps dataset

## Model description

This model is a [Pix2Pix](https://arxiv.org/abs/1611.07004) model trained on the [huggan/maps](https://huggingface.co/datasets/huggan/maps) dataset. The goal of the model is to turn a satellite image into a map à la Google Maps, and the other way around.

The model was trained using the [example script](https://github.com/huggingface/community-events/tree/main/huggan/pytorch/pix2pix) provided by Hugging Face as part of the [HugGAN sprint](https://github.com/huggingface/community-events/tree/main/huggan).

## Intended uses & limitations

#### How to use

```python
from huggan.pytorch.pix2pix import GeneratorUNet

# Load the pretrained generator weights from the Hub
generator = GeneratorUNet.from_pretrained("huggan/pix2pix-maps")
```

#### Limitations and bias

The model has only seen the paired satellite/map tiles in huggan/maps, so its outputs reflect the geographic coverage and rendering style of that dataset; quality on regions, zoom levels, or imagery sources that differ from the training pairs is untested. As with any GAN, generated images can contain artifacts and should not be treated as geographically accurate maps.

## Training data

The model was trained on the [huggan/maps](https://huggingface.co/datasets/huggan/maps) dataset, which consists of aligned pairs of satellite images and their corresponding map tiles.

## Training procedure

The model was trained with the example script linked above; refer to that script for the preprocessing pipeline and the default hyperparameters.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/IsolaZZE16,
  author     = {Phillip Isola and
                Jun{-}Yan Zhu and
                Tinghui Zhou and
                Alexei A. Efros},
  title      = {Image-to-Image Translation with Conditional Adversarial Networks},
  journal    = {CoRR},
  volume     = {abs/1611.07004},
  year       = {2016},
  url        = {http://arxiv.org/abs/1611.07004},
  eprinttype = {arXiv},
  eprint     = {1611.07004},
  timestamp  = {Mon, 13 Aug 2018 16:49:05 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/IsolaZZE16.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
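
## Full usage example

Expanding on the snippet under *How to use*, here is a minimal end-to-end sketch for translating a satellite image into a map. It is not taken verbatim from the example script: it assumes `torch`, `torchvision`, and `Pillow` are installed, that the generator expects 256×256 RGB inputs normalized to [-1, 1] (the usual pix2pix convention), and it uses a hypothetical input file `satellite.jpg`.

```python
import torch
from PIL import Image
from torchvision import transforms
from huggan.pytorch.pix2pix import GeneratorUNet

# Load the pretrained generator and switch to inference mode
generator = GeneratorUNet.from_pretrained("huggan/pix2pix-maps")
generator.eval()

# Preprocessing assumed to match training: resize to 256x256 and
# normalize each RGB channel to [-1, 1]
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

image = Image.open("satellite.jpg").convert("RGB")  # hypothetical input file
batch = preprocess(image).unsqueeze(0)              # add a batch dimension

with torch.no_grad():
    fake_map = generator(batch)

# Undo the [-1, 1] normalization and save the generated map
result = (fake_map[0] * 0.5 + 0.5).clamp(0, 1)
transforms.ToPILImage()(result).save("map.png")
```

If the checkpoint was trained at a different resolution, adjust the `Resize` step to match; mismatched preprocessing is the most common cause of poor outputs.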
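
## Training objective (for reference)

The Pix2Pix paper cited above optimizes a conditional GAN loss plus an L1 reconstruction term weighted by a factor λ (the paper uses λ = 100): G* = arg min_G max_D L_cGAN(G, D) + λ·L_L1(G). Below is a sketch of the generator-side loss with hypothetical tensor names, using the paper's cross-entropy adversarial term; the exact loss used for this checkpoint is whatever the example script implements.

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_logits_on_fake, fake_map, real_map, lambda_pixel=100.0):
    # Adversarial term: the generator wants the discriminator to
    # score its (input, generated) pairs as real, i.e. targets of 1
    adversarial = F.binary_cross_entropy_with_logits(
        disc_logits_on_fake, torch.ones_like(disc_logits_on_fake)
    )
    # L1 reconstruction term keeps outputs close to the paired ground truth
    reconstruction = F.l1_loss(fake_map, real_map)
    return adversarial + lambda_pixel * reconstruction
```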