---
license: bigscience-bloom-rail-1.0
language:
- en
library_name: diffusers
tags:
- stable-diffusion
- text-to-image
---
# pony-diffusion-g5 - a new generation ~~of waifus~~
pony-diffusion-g5 is a latent text-to-image diffusion model that has been conditioned on high-quality pony images through fine-tuning.
Fine-tuned for the MLP G5 main characters, starting from [AstraliteHeart/pony-diffusion](https://huggingface.co/AstraliteHeart/pony-diffusion).
__!!IMPORTANT: DUE TO A LACK OF TRAINING DATA, ONLY SUNNY AND IZZY GENERATE QUALITY IMAGES__
__!!IMPORTANT: TRY THE NEGATIVE PROMPT "3d, sfm"__
<img src="https://huggingface.co/GrieferPig/pony-diffusion-g5/resolve/main/doc/demo5.png" width=50% height=50%>
<img src="https://huggingface.co/GrieferPig/pony-diffusion-g5/resolve/main/doc/demo1.png" width=50% height=50%>
<img src="https://huggingface.co/GrieferPig/pony-diffusion-g5/resolve/main/doc/demo4.png" width=50% height=50%>
<img src="https://huggingface.co/GrieferPig/pony-diffusion-g5/resolve/main/doc/demo3.png" width=50% height=50%>
<img src="https://huggingface.co/GrieferPig/pony-diffusion-g5/resolve/main/doc/demo2.png" width=50% height=50%>
## Dataset criteria
All training images come from Derpibooru, collected with the search criteria below (a sketch of reproducing the query through the Derpibooru API follows the list).
- General: `g5, safe, solo, score.gte:250, -webm, -animate || g5, suggestive, solo, score.gte:250, -webm, -animate` (856 entries without GIFs, ~15 epochs)
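
To reproduce the collection step, something like the sketch below should work. This is a minimal sketch, assuming Derpibooru's public JSON search endpoint (`/api/v1/json/search/images`); it only gathers image URLs and skips rate limiting and the actual downloads.

```python
# Hedged sketch: list dataset candidates via Derpibooru's JSON search API.
# Endpoint and response fields are assumptions based on the public API.
import requests

QUERY = (
    "g5, safe, solo, score.gte:250, -webm, -animate || "
    "g5, suggestive, solo, score.gte:250, -webm, -animate"
)

def fetch_page(page, per_page=50):
    resp = requests.get(
        "https://derpibooru.org/api/v1/json/search/images",
        params={"q": QUERY, "page": page, "per_page": per_page},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["images"]

urls, page = [], 1
while True:
    images = fetch_page(page)
    if not images:
        break
    # Drop animated formats that the -webm/-animate filters may miss
    urls += [img["view_url"] for img in images if img["format"].lower() != "gif"]
    page += 1
print(f"{len(urls)} candidate images")
```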
## Why is the model's quality bad?
Few G5 pony images match the search criteria, so don't expect the quality to be as high as the original model's.
~~_Also, because I'm new to AI, I don't know how to train on datasets correctly; if you could help me out, that would be great, thanks!_~~
## Example code
```python
from diffusers import StableDiffusionPipeline, DDIMScheduler
import torch

model_path = "GrieferPig/pony-diffusion-g5"
prompt = (
    "(((izzy moonbow))), pony, looking at you, smiling, sitting on beach, "
    "cute, portrait, intricate, digital painting, smooth, sharp, focus, "
    "depth of field"
)
negative = "3d, sfm"
# torch.manual_seed(1145141919810)

# Load the fine-tuned weights in fp16 with a DDIM scheduler
pipe = StableDiffusionPipeline.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    scheduler=DDIMScheduler(
        beta_start=0.00085,
        beta_end=0.012,
        beta_schedule="scaled_linear",
        clip_sample=False,
        set_alpha_to_one=True,
    ),
    # safety_checker=None
)
pipe = pipe.to("cuda")

# Generate five 512x512 images in one call
images = pipe(
    prompt,
    width=512,
    height=512,
    num_inference_steps=50,
    num_images_per_prompt=5,
    negative_prompt=negative,
).images
for i, image in enumerate(images):
    image.save(f"test-{i}.png")
```
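
For reproducible outputs, pass a seeded `torch.Generator` to the call instead of relying on the global `torch.manual_seed` left commented out above; this is standard `diffusers` usage, and the seed value here is arbitrary.

```python
# Reproducible generation: the same seed yields the same image.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe(
    prompt,
    negative_prompt=negative,
    num_inference_steps=50,
    generator=generator,
).images[0]
image.save("seeded.png")
```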
## Thanks
[AstraliteHeart/pony-diffusion](https://huggingface.co/AstraliteHeart/pony-diffusion), for providing a solid starting point to train on.
This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/).
With special thanks to [Waifu-Diffusion](https://huggingface.co/hakurei/waifu-diffusion) for providing finetuning expertise and [Novel AI](https://novelai.net/) for providing necessary compute.