# Chinese Stable Diffusion Model Card
`svjack/Stable-Diffusion-FineTuned-zh-v0` is a Chinese-specific latent text-to-image diffusion model capable of generating images from any Chinese text input.
This model was fine-tuned with 🤗's Diffusers library; see train_zh_model.py for details of the training method. It builds on the baseline model Taiyi-Stable-Diffusion-1B-Chinese-v0.1 from IDEA-CCNL.
## Model Details
- Developed by: Zhipeng Yang
- Model type: Diffusion-based text-to-image generation model
- Language(s): Chinese
- License: The CreativeML OpenRAIL M license is an Open RAIL M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying out in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license, on which our license is based.
- Model Description: A model that generates and modifies images based on text prompts. It is a Latent Diffusion Model (LDM) fine-tuned from the pre-trained Stable Diffusion weights.
- Resources for more information: https://github.com/svjack/Stable-Diffusion-Chinese-Extend
## Examples
First, install the following dependencies. This package is a modified version of 🤗's Diffusers library, adapted to run Chinese Stable Diffusion:
```
diffusers==0.6.0
transformers
torch
datasets
accelerate
sentencepiece
```
Run this command to log in with your HF Hub token if you haven't before:
```shell
huggingface-cli login
```
Running the pipeline with the LMSDiscreteScheduler:
```python
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

# Classic LMS scheduler settings for Stable Diffusion v1-style models
scheduler = LMSDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
)
pipeline = StableDiffusionPipeline.from_pretrained(
    "svjack/Stable-Diffusion-FineTuned-zh-v0", scheduler=scheduler
)
# Replace the NSFW safety checker with a pass-through
pipeline.safety_checker = lambda images, clip_input: (images, False)
pipeline = pipeline.to("cuda")

prompt = '女孩们打开了另一世界的大门'  # "The girls opened the door to another world"
image = pipeline(prompt, guidance_scale=7.5).images[0]
```
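The `safety_checker` override above swaps the pipeline's NSFW filter for a pass-through. Its contract is simple: it receives the decoded images plus CLIP features and must return the images unchanged along with per-image "NSFW detected" flags. A minimal, diffusers-independent sketch of that contract (the function name and dummy inputs here are hypothetical, for illustration only):

```python
# Pass-through stand-in for the pipeline's safety checker.
# It must return the images unchanged plus a per-image NSFW flag.
def passthrough_safety_checker(images, clip_input):
    # Report "not NSFW" for every image so nothing is blacked out.
    return images, [False] * len(images)

# Hypothetical dummy inputs standing in for decoded images / CLIP features.
fake_images = ["img0", "img1"]
checked, flags = passthrough_safety_checker(fake_images, clip_input=None)
```

Returning a per-image list of flags is slightly more faithful to the real checker's output shape than the scalar `False` used in the lambda above; both disable filtering in practice.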
## Generation Results Comparison