---
tags:
- huggan
- gan
datasets:
- arakesh/uavid-15-hq-mixedres
license: mit
---
# pix2pix-uavid-15
## Model description
Pix2Pix is a conditional adversarial network, a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. The authors demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
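For reference, the objective from the pix2pix paper pairs a conditional GAN loss with an L1 reconstruction term that keeps the output close to the ground truth (the paper sets λ = 100):

```latex
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big]
                         + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big]

\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\big[\lVert y - G(x, z) \rVert_1\big]

G^{*} = \arg\min_{G} \max_{D} \; \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathcal{L}_{L1}(G)
```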
## Intended uses & limitations
This model is intended for reconstructing images from edge maps.
#### How to use
```python
from PIL import Image
import torch
from torchvision.transforms import Compose, Resize, ToTensor, Normalize
from torchvision.utils import save_image

from huggan.pytorch.pix2pix.modeling_pix2pix import GeneratorUNet

transform = Compose(
    [
        Resize((256, 256), Image.BICUBIC),
        ToTensor(),
        Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)

model = GeneratorUNet.from_pretrained('huggan/pix2pix-uavid-15')
model.eval()  # disable dropout/batch-norm updates for inference

def predict_fn(img):
    inp = transform(img).unsqueeze(0)
    with torch.no_grad():
        out = model(inp)
    save_image(out, 'out.png', normalize=True)
    return 'out.png'

img = Image.open('input.png')  # path to your own input image
predict_fn(img)
```
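If you would rather keep the result in memory instead of writing `out.png`, a minimal variation (not from the original card; it reuses the `transform` and `model` defined above) is:

```python
import torch
from torchvision.transforms.functional import to_pil_image

def predict_pil(img):
    inp = transform(img).unsqueeze(0)
    with torch.no_grad():              # inference only, no gradients needed
        out = model(inp)[0]
    out = (out.clamp(-1, 1) + 1) / 2   # undo the [-1, 1] normalization
    return to_pil_image(out)
```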
#### Limitations and bias
- Can produce unrealistic colors in the generated images
## Training data

The model was trained on the [arakesh/uavid-15-hq-mixedres](https://huggingface.co/datasets/arakesh/uavid-15-hq-mixedres) dataset.
## Training procedure
```bash
# clone the repository and install the package
git clone https://github.com/huggingface/community-events.git
cd community-events
pip install .

# change directory
cd huggan/pytorch/pix2pix/

# define the accelerate config
accelerate config

# launch training with the required parameters
accelerate launch train.py --checkpoint_interval 1 --dataset arakesh/uavid-15-hq-mixedres --push_to_hub --model_name pix2pix-uavid-15 --batch_size 2 --n_epochs 50 --image_size 1024 --sample_interval 500
```
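For intuition about what `train.py` optimizes, here is a minimal sketch of one pix2pix training step. This is an illustrative simplification, not the actual script: names such as `generator`, `discriminator`, `opt_g`, and `opt_d` are assumptions, and the real code additionally handles `accelerate`, checkpointing, and sample logging.

```python
import torch
import torch.nn as nn

adv_loss = nn.MSELoss()  # LSGAN-style adversarial loss (assumption; the paper uses BCE)
l1_loss = nn.L1Loss()
lambda_l1 = 100          # weight of the L1 term, as in the paper

def train_step(generator, discriminator, opt_g, opt_d, src, tgt):
    # Generator update: fool the discriminator while staying close to the target.
    fake = generator(src)
    pred_fake = discriminator(fake, src)
    g_loss = adv_loss(pred_fake, torch.ones_like(pred_fake)) + lambda_l1 * l1_loss(fake, tgt)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Discriminator update: separate real (target, input) pairs from generated ones.
    pred_real = discriminator(tgt, src)
    pred_fake = discriminator(fake.detach(), src)
    d_loss = 0.5 * (adv_loss(pred_real, torch.ones_like(pred_real))
                    + adv_loss(pred_fake, torch.zeros_like(pred_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    return g_loss.item(), d_loss.item()
```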
## Generated Images
In the sample grids below:
- First row: input image
- Second row: generated image
- Third row: target image
### BibTeX entry and citation info
```bibtex
@article{pix2pix2017,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  journal={CVPR},
  year={2017}
}
```