City (city96)
AI & ML interests: LLMs, diffusion models, anime
Recent Activity
- Updated a collection 2 days ago: GGUF Image Model Quants
- New activity 4 days ago in city96/FLUX.1-dev-gguf: "Fp16 -> GGUF 8B script?"
- New activity 5 days ago in city96/Flux.1-Heavy-17B: "Lol"
Organizations: None yet
city96's activity
- Fp16 -> GGUF 8B script? (1 reply) #40 opened 4 days ago by vdruts
- Q16 (1 reply) #39 opened 13 days ago by noelsalazar
- InvokeAI support? (2 replies) #7 opened 20 days ago by HuggingAny
- Q4_0, Q4_1, Q5_0, Q5_1 can be dropped? (1 reply) #1 opened 19 days ago by CHNtentes
- SD3.5 medium pls (1 reply) #1 opened 26 days ago by CassyPrivate
- it'd be awesome if nf4 was provided (1 reply) #2 opened 28 days ago by MayensGuds
- how many steps needed? (1 reply) #3 opened 27 days ago by pikkaa
- Not able to download it (1 reply) #38 opened about 1 month ago by iffishells
- need fp8 for speed (3 replies) #1 opened about 1 month ago by Ai11Ali
- Does it work with gguf of t5xxl? (5 replies) #2 opened about 1 month ago by razvanab
- Create an equivalent to GGUF for Diffusers models? (1 reply) #37 opened about 1 month ago by julien-c
- which vae file should be used with this gguf? (6 replies) #2 opened about 1 month ago by slacktahr
- The load dual clip GGUF can use this encoder, but what about clip_i? (2 replies) #9 opened about 1 month ago by witchercher
- Forge support and updated convert script. (3 replies) #1 opened about 1 month ago by city96
- llama cpp (5 replies) #31 opened 3 months ago by goodasdgood
- Is it using ggml to compute? (1 reply) #30 opened 3 months ago by CHNtentes
- How to use the model? (1 reply) #8 opened 3 months ago by AIer0107
- FLUX GGUF conversion (1 reply) #29 opened 3 months ago by bsingh1324
- code for use this quantized model (3 replies) #18 opened 3 months ago by Mahdimohseni0333