---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- sd3
- sd3-diffusers
- template:sd-lora
instance_prompt: a photo of sks dog
widget:
- text: A photo of sks dog in a bucket
  output:
    url: image_0.png
- text: A photo of sks dog in a bucket
  output:
    url: image_1.png
- text: A photo of sks dog in a bucket
  output:
    url: image_2.png
- text: A photo of sks dog in a bucket
  output:
    url: image_3.png
---

# SD3 DreamBooth - ZhangxinruBIT/trained-sd3

## Model description

These are ZhangxinruBIT/trained-sd3 DreamBooth weights for stabilityai/stable-diffusion-3-medium-diffusers.

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).

The text encoder was not fine-tuned.

## Trigger words

You should use `a photo of sks dog` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the fine-tuned DreamBooth pipeline in half precision and move it to the GPU.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "ZhangxinruBIT/trained-sd3", torch_dtype=torch.float16
).to("cuda")

# Generate an image using the trigger phrase from the training prompt.
image = pipeline("A photo of sks dog in a bucket").images[0]
```

A slightly fuller, seeded example is sketched in the usage notes at the end of this card.

## License

Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE).

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
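
## Usage notes

The snippet below is a minimal sketch expanding the example in the "Use it with the 🧨 diffusers library" section above: it seeds the generator for reproducibility, passes explicit sampling parameters, and saves the result. The step count, guidance scale, and output filename are illustrative assumptions, not settings taken from the training run.

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the fine-tuned DreamBooth pipeline in half precision and move it to the GPU.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "ZhangxinruBIT/trained-sd3", torch_dtype=torch.float16
).to("cuda")

# On memory-constrained GPUs, model CPU offload (requires `accelerate`) can be used
# instead of the `.to("cuda")` call above:
# pipeline.enable_model_cpu_offload()

# Fix the seed so the same prompt reproduces the same image.
generator = torch.Generator(device="cuda").manual_seed(0)

image = pipeline(
    "A photo of sks dog in a bucket",
    num_inference_steps=28,  # illustrative value; not tuned for this checkpoint
    guidance_scale=7.0,      # illustrative value; not tuned for this checkpoint
    generator=generator,
).images[0]

image.save("sks_dog.png")  # hypothetical output path
```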