---
tags:
  - text-to-image
  - flux
  - lora
  - diffusers
  - template:sd-lora
  - ai-toolkit
widget:
  - text: A person in a bustling cafe in the style of tarsila do amaral
    output:
      url: samples/1725254450130__000001000_0.jpg
  - text: A mecha robot in a favela in the style of tarsila do amaral
    output:
      url: samples/1725254552381__000001000_1.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: in the style of tarsila do amaral
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# tarsila-captioned

Model trained with AI Toolkit by Ostris

Sample prompts:

- A person in a bustling cafe in the style of tarsila do amaral
- A mecha robot in a favela in the style of tarsila do amaral

## Trigger words

You should use `in the style of tarsila do amaral` to trigger the image generation.

## Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
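If you prefer to fetch the weights programmatically, here is a minimal sketch using `huggingface_hub`. The filename `tarsila-captioned.safetensors` is an assumption based on the loading snippet below; check the Files & versions tab for the exact name.

```py
from huggingface_hub import hf_hub_download

# Download the LoRA weights from this repository into the local HF cache.
# The filename is assumed; verify it in the Files & versions tab.
lora_path = hf_hub_download(
    repo_id="multimodalart/tarsila-captioned",
    filename="tarsila-captioned.safetensors",
)
print(lora_path)  # local path you can point ComfyUI, AUTOMATIC1111, etc. at
```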

## Use it with the 🧨 diffusers library

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in bfloat16 and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')

# Load the LoRA weights from this repository (check the Files & versions tab for the exact filename)
pipeline.load_lora_weights('multimodalart/tarsila-captioned', weight_name='tarsila-captioned.safetensors')

# Generate an image with the trigger phrase and save it
image = pipeline('A person in a bustling cafe in the style of tarsila do amaral').images[0]
image.save("my_image.png")
```

For more details, including weighting, merging and fusing LoRAs, check the documentation on [loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
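As a rough sketch of what weighting and fusing look like in practice, assuming a recent diffusers version with the PEFT backend installed (the adapter name `tarsila` below is a hypothetical label chosen for illustration):

```py
# Load the LoRA under a named adapter so its strength can be adjusted
pipeline.load_lora_weights(
    'multimodalart/tarsila-captioned',
    weight_name='tarsila-captioned.safetensors',
    adapter_name='tarsila',  # hypothetical adapter name for illustration
)

# Scale the adapter's influence on generation (1.0 = full strength)
pipeline.set_adapters(['tarsila'], adapter_weights=[0.8])

# Or permanently merge the LoRA into the base model weights at a given scale
pipeline.fuse_lora(lora_scale=0.8)
```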