jimmycarter committed on
Commit f55ac24
1 Parent(s): 32a7748

Upload README.md

Files changed (1)
  1. README.md +29 -0
README.md CHANGED
@@ -94,6 +94,35 @@ For usage in ComfyUI, [a single transformer file is provided](https://huggingfac
 
 The model can be easily finetuned using [SimpleTuner](https://github.com/bghira/SimpleTuner) and the `--flux_attention_masked_training` training option **and the model found in [jimmycarter/LibreFlux-SimpleTuner](https://huggingface.co/jimmycarter/LibreFlux-SimpleTuner)**. This is the same model with the custom pipeline removed, which currently interferes with SimpleTuner's ability to finetune it. SimpleTuner has extensive support for parameter-efficient fine-tuning via [LyCORIS](https://github.com/KohakuBlueleaf/LyCORIS), in addition to full-rank fine-tuning. For inference, use the custom pipeline from this repo and [follow the example in SimpleTuner to patch in your LyCORIS weights](https://github.com/bghira/SimpleTuner/blob/main/documentation/LYCORIS.md).
 
+ ```py
+ import torch
+ from diffusers import DiffusionPipeline
+ from lycoris import create_lycoris_from_weights
+
+ pipe = DiffusionPipeline.from_pretrained(
+     "jimmycarter/LibreFLUX",
+     custom_pipeline="jimmycarter/LibreFLUX",
+     use_safetensors=True,
+     torch_dtype=torch.bfloat16,
+     trust_remote_code=True,
+ )
+
+ # merge the trained LyCORIS weights into the transformer before inference
+ lycoris_safetensors_path = 'pytorch_lora_weights.safetensors'
+ wrapper, _ = create_lycoris_from_weights(1.0, lycoris_safetensors_path, pipe.transformer)
+ wrapper.merge_to()
+ del wrapper
+
+ prompt = "Photograph of a chalk board on which is written: 'I thought what I'd do was, I'd pretend I was one of those deaf-mutes.'"
+ negative_prompt = "blurry"
+ images = pipe(
+     prompt=prompt,
+     negative_prompt=negative_prompt,
+     return_dict=False,
+ )
+ images[0][0].save('chalkboard.png')
+
+ # optionally, save a merged pipeline with the LyCORIS weights baked in:
+ # pipe.save_pretrained('/path/to/output/pipeline')
+ ```
+
 # Non-technical Report on Schnell De-distillation
 
 Welcome to my non-technical report on de-distilling FLUX.1-schnell in the most un-scientific way possible with extremely limited resources. I'm not going to claim I made a good model, but I did make a model. It was trained on about 1,500 H100 hour equivalents.