Guidance scale greater than 1 gives nonsensical results?
#1 opened by asdaweqw12
Please check out the blog post for recommendations around guidance_scale: https://huggingface.co/blog/lcm_lora
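For reference, here is a minimal sketch in diffusers (assuming the SD v1.5 base model and the LCM-LoRA weights used in that post) that keeps guidance_scale in the recommended 1.0–2.0 range:

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Load a base SD v1.5 pipeline and swap in the LCM scheduler
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Attach the LCM-LoRA weights
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# Few steps, guidance_scale kept between 1.0 and 2.0
# (1.0 disables CFG entirely; much larger values quickly degrade quality)
image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=4,
    guidance_scale=1.5,
).images[0]
image.save("lcm_lora_out.png")
```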
It says in the info to stay between 1 and 2, doesn't it?
@dipstik Yeah, but is there a good explanation for why this breaks? For a normal LCM, guidance = 8 doesn't give broken results.
@asdaweqw12 Their report says the model was already "trained" with a CFG scale of w = 7.5, so the model already emphasizes your prompt without the two forward passes used in standard CFG-based inference.
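To make that concrete (this is just a paraphrase of the usual CFG formula, not something from their report): standard CFG combines an unconditional and a conditional forward pass at each step,

$$\hat{\epsilon} = \epsilon_\theta(x_t, \varnothing) + w\,\bigl(\epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing)\bigr).$$

With LCM-LoRA, the effect of a guidance scale around w = 7.5 is distilled into the single conditional prediction, so setting guidance_scale well above 1 at inference stacks a second round of guidance on top of that, which is presumably why outputs fall apart here while non-distilled models handle w = 8 fine.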
Has anyone made it work with a1111?