PTtuts committed on
Commit c6f21fb
1 Parent(s): e5db091

Upload folder using huggingface_hub

Files changed (3):
  1. README.md +38 -0
  2. config.yaml +59 -0
  3. lora.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,38 @@
+ ---
+ license: other
+ license_name: flux-1-dev-non-commercial-license
+ license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
+ language:
+ - en
+ tags:
+ - flux
+ - diffusers
+ - lora
+ base_model: "black-forest-labs/FLUX.1-dev"
+ pipeline_tag: text-to-image
+ instance_prompt: TOK
+ ---
+
+ # Flux New Emoji Model M
+
+ Trained on Replicate using:
+
+ https://replicate.com/ostris/flux-dev-lora-trainer/train
+
+ ## Trigger words
+
+ Use `TOK` in your prompt to trigger the image generation.
+
+ ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
+
+ ```py
+ from diffusers import AutoPipelineForText2Image
+ import torch
+
+ # FLUX.1-dev is trained in bfloat16; float16 can overflow and produce black images.
+ pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
+ pipeline.load_lora_weights('PTtuts/flux-new-emoji-model-m', weight_name='lora.safetensors')
+ image = pipeline('your prompt').images[0]
+ ```
+
+ For more details, including weighting, merging and fusing LoRAs, see the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
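The README defers weighting and fusing to the diffusers docs. As a rough sketch of the underlying math only (not the diffusers implementation): fusing a LoRA adds `scale * (alpha / rank) * (B @ A)` to the base weight, with `alpha` and `rank` corresponding to the `linear_alpha`/`linear` values (16/16) in config.yaml. The `fuse_lora` helper and all matrix values below are hypothetical, kept tiny and pure-Python so the arithmetic is visible.

```python
def matmul(b, a):
    # b: m x r, a: r x n  ->  m x n, plain nested-list matrix product
    return [[sum(b[i][k] * a[k][j] for k in range(len(a)))
             for j in range(len(a[0]))] for i in range(len(b))]

def fuse_lora(w, a, b, alpha, rank, scale=1.0):
    # W' = W + scale * (alpha / rank) * (B @ A)
    delta = matmul(b, a)
    factor = scale * alpha / rank
    return [[w[i][j] + factor * delta[i][j]
             for j in range(len(w[0]))] for i in range(len(w))]

# 2x2 base weight and a rank-1 LoRA (hypothetical numbers).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # out_features x rank
A = [[3.0, 4.0]]     # rank x in_features
fused = fuse_lora(W, A, B, alpha=16, rank=16, scale=0.5)
print(fused)  # base weight plus 0.5 * (B @ A)
```

With `alpha == rank`, as in this training config, the `alpha / rank` factor is 1 and only the user-facing `scale` attenuates the adapter.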
config.yaml ADDED
@@ -0,0 +1,59 @@
+ job: extension
+ config:
+   name: flux_train_replicate
+   process:
+   - type: sd_trainer
+     training_folder: output
+     device: cuda:0
+     trigger_word: TOK
+     network:
+       type: lora
+       linear: 16
+       linear_alpha: 16
+     save:
+       dtype: float16
+       save_every: 1001
+       max_step_saves_to_keep: 1
+     datasets:
+     - folder_path: input_images
+       caption_ext: filename
+       caption_dropout_rate: 0.05
+       shuffle_tokens: false
+       cache_latents_to_disk: true
+       resolution:
+       - 512
+       - 768
+       - 1024
+     train:
+       batch_size: 1
+       steps: 1000
+       gradient_accumulation_steps: 1
+       train_unet: true
+       train_text_encoder: false
+       content_or_style: balanced
+       gradient_checkpointing: true
+       noise_scheduler: flowmatch
+       optimizer: adamw8bit
+       lr: 0.0004
+       ema_config:
+         use_ema: true
+         ema_decay: 0.99
+       dtype: bf16
+     model:
+       name_or_path: FLUX.1-dev
+       is_flux: true
+       quantize: true
+     sample:
+       sampler: flowmatch
+       sample_every: 1001
+       width: 1024
+       height: 1024
+       prompts: []
+       neg: ''
+       seed: 42
+       walk_seed: true
+       guidance_scale: 4
+       sample_steps: 20
+ meta:
+   name: flux_train_replicate
+   version: '1.0'
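One detail worth noting in the config above: `save_every` and `sample_every` are 1001 while `steps` is 1000, so no intermediate checkpoint or preview sample ever fires and only the final state is written. A minimal sketch of that periodic schedule (the `schedule_events` helper is hypothetical, not part of the trainer):

```python
def schedule_events(steps, every):
    """Training steps at which a periodic event (save/sample) fires."""
    return [s for s in range(1, steps + 1) if s % every == 0]

STEPS = 1000
print(len(schedule_events(STEPS, 1001)))  # 0 -> no intermediate saves/samples
print(len(schedule_events(STEPS, 250)))   # 4 -> hypothetical every-250-steps schedule
```

Setting the interval one past the step count is a common trick to disable intermediate output without a separate on/off flag.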
lora.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c72375cd2643d8bb9e6e92d53bb24bca4a1a6cc9ecdffe5c219471ecb7c225fb
+ size 171969408
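The lora.safetensors entry above is a Git LFS pointer, not the weights themselves: the real file is resolved through its sha256 `oid` and byte `size`. A hedged sketch of verifying a downloaded blob against such a pointer (the blob bytes here are a tiny stand-in, not the actual ~172 MB file, and both helper names are hypothetical):

```python
import hashlib

def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into (sha256 hex digest, expected size)."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return fields["oid"].removeprefix("sha256:"), int(fields["size"])

def verify_blob(data, expected_oid, expected_size):
    """Check downloaded bytes against the pointer's oid and size."""
    return (len(data) == expected_size
            and hashlib.sha256(data).hexdigest() == expected_oid)

# Hypothetical tiny blob standing in for lora.safetensors.
blob = b"example weights"
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
    f"size {len(blob)}\n"
)
oid, size = parse_lfs_pointer(pointer)
print(verify_blob(blob, oid, size))  # True
```

Checking the size first is cheap and catches truncated downloads before paying for a full hash pass.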