twodgirl committed
Commit 87294e4
1 Parent(s): 25537b9

Create README.md

Files changed (1): README.md +31 -0

README.md ADDED

# OneDiffusion

This repository provides inference code for the text-to-image workflow. The modified code is not required, it is for demo purposes only, and it has lighter dependencies than the [original repo](https://github.com/lehduong/OneDiffusion/tree/b6024589cc56b5af36268761828878b25af5e2fb). Inference speed is currently about 7 s/it with the flash attention module removed.

If you need a prompt that describes an existing image, you can generate one with the [Molmo spaces](https://huggingface.co/spaces?search=molmo).

## Installation

```
pip install accelerate diffusers einops sentencepiece transformers
```

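The pip line above does not list torch explicitly; the inference example assumes a CUDA-enabled PyTorch build is already installed, since the pipeline is moved to the GPU. A minimal sanity check:

```python
import torch

# Confirm a CUDA-capable PyTorch build is available before
# running the pipeline with device='cuda'.
print(torch.__version__)
print(torch.cuda.is_available())
```
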
## Inference

```python
from onediffusion.pipeline.onediffusion import OneDiffusionPipeline
import torch

if __name__ == '__main__':
    prompt = 'A bipedal black cat wearing a huge oversized witch hat, a wizards robe, casting a spell, in an enchanted forest. The scene is filled with fireflies and moss on surrounding rocks and trees'
    negative_prompt = 'monochrome, greyscale, low-res, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation'
    # Load the bf16 checkpoint and move it to the GPU.
    pipeline = OneDiffusionPipeline.from_pretrained("twodgirl/onediffusion-bf16").to(device='cuda',
                                                                                     dtype=torch.bfloat16)
    # The [[text2image]] prefix selects the text-to-image task.
    image = pipeline(prompt='[[text2image]] {}'.format(prompt),
                     negative_prompt=negative_prompt,
                     num_inference_steps=30,
                     guidance_scale=4,
                     height=1024,
                     width=1024).images[0]
    image.save('cat.png')
```
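
If the 1024x1024 run does not fit in VRAM, the standard diffusers offloading helper may work here. This is only a sketch and assumes OneDiffusionPipeline follows the diffusers DiffusionPipeline interface, which this README does not confirm:

```python
from onediffusion.pipeline.onediffusion import OneDiffusionPipeline
import torch

# Sketch: keep submodules on the CPU and move each to the GPU only while
# it runs. Assumes OneDiffusionPipeline exposes the standard diffusers
# DiffusionPipeline API (from_pretrained with torch_dtype,
# enable_model_cpu_offload).
pipeline = OneDiffusionPipeline.from_pretrained('twodgirl/onediffusion-bf16',
                                                torch_dtype=torch.bfloat16)
pipeline.enable_model_cpu_offload()
```

This trades speed for memory; the accelerate package installed above is required for the offload hooks.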