---
library_name: diffusers
base_model: runwayml/stable-diffusion-v1-5
tags:
- text-to-image
license: creativeml-openrail-m
inference: false
---

## yujiepan/dreamshaper-8-lcm-openvino

This model is `Lykon/dreamshaper-8` with the `latent-consistency/lcm-lora-sdv1-5` LCM-LoRA fused in, exported to OpenVINO format.

#### Usage

```python
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline

# Load the exported OpenVINO pipeline and run it on CPU
pipeline = OVStableDiffusionPipeline.from_pretrained(
    'yujiepan/dreamshaper-8-lcm-openvino',
    device='CPU',
)

# LCM needs only a few inference steps; guidance_scale=1.0 disables classifier-free guidance
prompt = 'cute dog typing at a laptop, 4k, details'
images = pipeline(prompt=prompt, num_inference_steps=8, guidance_scale=1.0).images
```
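
The pipeline returns standard PIL images, so the result can be written to disk directly (the file name below is just an example):

```python
# Save the first generated image; the output path is arbitrary
images[0].save('cute-dog.png')
```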

![output image](./assets/cute-dog-typing-at-a-laptop-4k-details.png)

#### TODO

- The fp16 base model is exported to OpenVINO in fp32, which is unnecessary; the exported weights could be compressed back to fp16 (see the sketch below).
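
A possible fix, sketched below and untested: reload the exported pipeline, compress its weights to fp16, and re-save it. This assumes the installed `optimum-intel` version supports `half()` and `compile=False` on `OVStableDiffusionPipeline`; the fp16 output folder name is only an example.

```python
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline

# Assumption: `compile=False` and `half()` are available in the installed optimum-intel version.
ov_pipeline = OVStableDiffusionPipeline.from_pretrained(
    './dreamshaper-8-lcm-openvino',
    device='CPU',
    compile=False,  # no need to compile just to re-save the weights
)
ov_pipeline.half()  # compress the sub-model weights to fp16
ov_pipeline.save_pretrained('./dreamshaper-8-lcm-openvino-fp16')  # example output path
```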

#### Scripts

The model was generated with the following code:

```python
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline

base_model_id = "Lykon/dreamshaper-8"
adapter_id = "latent-consistency/lcm-lora-sdv1-5"
save_torch_folder = './dreamshaper-8-lcm'
save_ov_folder = './dreamshaper-8-lcm-openvino'

# Load the fp16 base model and switch to the LCM scheduler
torch_pipeline = AutoPipelineForText2Image.from_pretrained(
    base_model_id, torch_dtype=torch.float16, variant="fp16")
torch_pipeline.scheduler = LCMScheduler.from_config(
    torch_pipeline.scheduler.config)

# Load and fuse the LCM LoRA into the base weights, then save the merged pipeline
torch_pipeline.load_lora_weights(adapter_id)
torch_pipeline.fuse_lora()
torch_pipeline.save_pretrained(save_torch_folder)

# Export the merged pipeline to OpenVINO and save it
ov_pipeline = OVStableDiffusionPipeline.from_pretrained(
    save_torch_folder,
    device='CPU',
    export=True,
)
ov_pipeline.save_pretrained(save_ov_folder)
```