---
license: creativeml-openrail-m
thumbnail: >-
  https://huggingface.co/coreml/coreml-anything-v3-1/resolve/main/example-images/thumbnail.png
language:
- en
tags:
- coreml
- stable-diffusion
- stable-diffusion-diffusers
---
# Core ML Converted Model

This model was converted to Core ML for use on Apple Silicon devices by following Apple's conversion instructions.
Provide the converted model to an app such as Mochi Diffusion to generate images.
- The `split_einsum` version is compatible with all compute unit options, including the Neural Engine.
- The `original` version is only compatible with the CPU & GPU compute unit options.
# 🧩 Paper Cut model V1

This is a Stable Diffusion model fine-tuned on Paper Cut images.
Use `PaperCut` in your prompts to invoke the style.
Sample images:
Based on the Stable Diffusion 1.5 model.

## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information, please have a look at the Stable Diffusion documentation.
You can also export the model to ONNX, MPS, and/or Flax/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "Fictiverse/Stable_Diffusion_PaperCut_Model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "PaperCut R2-D2"
image = pipe(prompt).images[0]

image.save("./R2-D2.png")
```
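
If you want to run the 🧨 Diffusers pipeline directly on an Apple Silicon Mac instead of CUDA, a minimal sketch is shown below. It assumes a PyTorch build with MPS support and enough memory for float32 weights; the `mps` device name and the one-step warm-up pass are the main differences from the CUDA example above.

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "Fictiverse/Stable_Diffusion_PaperCut_Model"

# Load in float32; float16 on MPS can be inconsistent depending on the PyTorch version.
pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("mps")

# Optional: reduce peak memory use on machines with limited unified memory.
pipe.enable_attention_slicing()

# One-step warm-up pass, a commonly recommended workaround for first-inference issues on MPS.
_ = pipe("PaperCut R2-D2", num_inference_steps=1)

image = pipe("PaperCut R2-D2").images[0]
image.save("./R2-D2_mps.png")
```

Expect MPS generation to be slower than CUDA; for best performance on Apple Silicon, use the Core ML version of the model with an app such as Mochi Diffusion, as described above.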