How to run using Diffusers?
#15 opened 14 days ago by mileyYYYYY
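(A minimal sketch of one possible answer, assuming a recent diffusers release that ships QwenImagePipeline, QwenImageTransformer2DModel, and GGUF single-file loading; the local GGUF filename and prompt below are placeholders, and whether this particular checkpoint loads through this path is an assumption, not something confirmed in the thread.)

```python
# Sketch: loading a GGUF-quantized Qwen-Image transformer with Diffusers.
# Assumes: pip install -U diffusers transformers accelerate gguf
import torch
from diffusers import GGUFQuantizationConfig, QwenImagePipeline, QwenImageTransformer2DModel

# Placeholder path to a GGUF file downloaded from this repo.
gguf_path = "qwen-image-Q4_K_M.gguf"

# Load only the transformer from the GGUF file; weights stay GGUF-quantized
# and are dequantized to bfloat16 on the fly at compute time.
transformer = QwenImageTransformer2DModel.from_single_file(
    gguf_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
    config="Qwen/Qwen-Image",   # pull the transformer config from the base repo
    subfolder="transformer",
)

# Text encoder, VAE, and scheduler come from the original base repo.
pipe = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # useful on GPUs with limited VRAM

image = pipe(
    "a corgi astronaut riding a skateboard on the moon",
    num_inference_steps=50,
).images[0]
image.save("qwen_image_gguf_test.png")
```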
Boss, could you GGUF this model? Thanks!
#14 opened 26 days ago by allen666
Is it true llama.cpp always de-quantizes GGUF to FP16? (2 replies)
#13 opened 26 days ago by Phaserblast
Which Q5 quant gives better quality? (1 reply)
#12 opened about 2 months ago by ZeroCool22
Could you please share some details on how to quantize the Qwen-Image models? (1 reply)
#11 opened about 2 months ago by dzfancy
[ANSWERED] Is it normal for it to take this long to generate a photo?
#9 opened 2 months ago by ogNomad
Man, the RTX 20-series graphics cards don't support it. My RTX 2080 Ti with 22 GB VRAM takes 10 minutes to generate a single image. (7 replies)
#8 opened 2 months ago by dwedwqe21w
Example workflow type issue (4 replies)
#6 opened 3 months ago by Ahnyth
A strange discovery (5 replies)
#5 opened 3 months ago by makisekurisu-jp
Qwen Image Distill GGUF? (2 replies)
#4 opened 3 months ago by ruleez
Workflow (4 replies)
#3 opened 3 months ago by rw0101
Failed to run 'qwen-image-Q6_K.gguf' with Ollama (3 replies)
#2 opened 3 months ago by liweiC
Black output (15 replies)
#1 opened 3 months ago by ruleez