---
library_name: diffusers
base_model: runwayml/stable-diffusion-v1-5
tags:
- text-to-image
license: creativeml-openrail-m
inference: false
---
## yujiepan/dreamshaper-8-lcm-openvino
This model fuses the `latent-consistency/lcm-lora-sdv1-5` LCM LoRA into the base model `Lykon/dreamshaper-8` and exports the result to OpenVINO format.
#### Usage
```python
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline
pipeline = OVStableDiffusionPipeline.from_pretrained(
    'yujiepan/dreamshaper-8-lcm-openvino',
    device='CPU',
)
prompt = 'cute dog typing at a laptop, 4k, details'
images = pipeline(prompt=prompt, num_inference_steps=8, guidance_scale=1.0).images
```
![output image](./assets/cute-dog-typing-at-a-laptop-4k-details.png)
#### TODO
- The fp16 base model is currently exported to OpenVINO in fp32, which is unnecessary.
#### Scripts
The model was generated with the following script:
```python
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline
base_model_id = "Lykon/dreamshaper-8"
adapter_id = "latent-consistency/lcm-lora-sdv1-5"
save_torch_folder = './dreamshaper-8-lcm'
save_ov_folder = './dreamshaper-8-lcm-openvino'
torch_pipeline = AutoPipelineForText2Image.from_pretrained(
    base_model_id, torch_dtype=torch.float16, variant="fp16")
torch_pipeline.scheduler = LCMScheduler.from_config(
    torch_pipeline.scheduler.config)
# load and fuse lcm lora
torch_pipeline.load_lora_weights(adapter_id)
torch_pipeline.fuse_lora()
torch_pipeline.save_pretrained(save_torch_folder)
ov_pipeline = OVStableDiffusionPipeline.from_pretrained(
    save_torch_folder,
    device='CPU',
    export=True,
)
ov_pipeline.save_pretrained(save_ov_folder)
```