xldistance
AI & ML interests
None yet
Recent Activity
New activity 11 days ago in rombodawg/Rombos-Coder-V2.5-Qwen-32b: The most powerful open source code large model!!!
Organizations
None yet
xldistance's activity
How to reduce the problem of 2.25bpw quantized models often responding incoherently
1 · #2 opened 3 days ago by xldistance
Your trained model frequently becomes unresponsive when called via the ollama API; ollama must be restarted before it replies again.
1 · #3 opened 11 days ago by xldistance
The most powerful open source code large model!!!
3 · #1 opened 14 days ago by xldistance
GGUF model not loading properly in ollama
3 · #1 opened 4 months ago by xldistance
Can you quantize this model in exl2?
1 · #7 opened 6 months ago by xldistance
Can you provide an EXL2 quantized model?
1 · #1 opened 9 months ago by xldistance
Create GGUF for this please
8 · #2 opened 9 months ago by ishanparihar
Can you produce a 2.4bpw exl2 quantisation of this model?
1 · #2 opened 9 months ago by xldistance
Can you quantize the model?
5 · #1 opened 10 months ago by xldistance
Can you make a 2.4bpw exl2 quantisation for this model?
4 · #1 opened 10 months ago by xldistance
GGUF Version?
20 · #1 opened 10 months ago by johnnnna
Can you quantize this model to 2.4 bpw?
#2 opened 10 months ago by xldistance
Can you do a 2.0bpw quantization model?
#4 opened 10 months ago by xldistance
Maximum context length
2 · #3 opened 10 months ago by MaziyarPanahi
Can you make a 2.4bpw quantization?
5 · #1 opened 11 months ago by xldistance
Can you quantize this model?
5 · #1 opened 11 months ago by xldistance
2.4bpw quantized models can produce broken or non-responsive outputs
#1 opened 11 months ago by xldistance
Can you make a 2.4bpw quantization?
1 · #1 opened 11 months ago by xldistance
Is there a big performance difference in conversations between 2-bit and 4-bit quantization?
1 · #2 opened 11 months ago by xldistance