Jack
qwertyjack
AI & ML interests
None yet
Recent Activity
new activity 5 days ago: unsloth/DeepSeek-R1-GGUF: Where did the BF16 come from?
liked a model 10 days ago: winninghealth/WiNGPT-Babel
Organizations
qwertyjack's activity
Where did the BF16 come from?
8
#10 opened 11 days ago by gshpychka
New research paper: R1-type reasoning models can be drastically improved in quality
1
#19 opened 6 days ago by krustik
Please make V3-lite
3
#12 opened about 1 month ago by rombodawg
Sync with Qwen/QwQ-32B-Preview
1
#2 opened 2 months ago by qwertyjack
It seems the int4 GPTQ-quantized version of the new Mistrial-LargeV3 requires significantly more VRAM
2
#1 opened 3 months ago by YanchengQian
How to run the model OpenGVLab/InternVL2-40B-AWQ with vllm docker image?
2
#2 opened 6 months ago by andryevinnik
May I ask, what is the difference between cogvlm and glm4v?
3
#1 opened 8 months ago by rangehow
Which one is better? deepseek-coder-7b-ins-v1.5 or CodeQwen1.5-7B-Chat?
1
#2 opened 9 months ago by qwertyjack
What about AWQ?
#1 opened 9 months ago by qwertyjack
Having trouble loading this with transformers
5
#8 opened 10 months ago by codelion
GPTQ plz
10
#3 opened 9 months ago by Parkerlambert123
Do you plan to optimize ChatGLM2-6B, and if so, when?
4
#47 opened over 1 year ago by Zuyuan