Michael Han (shimmyshimmer)
AI & ML interests: None yet
Recent Activity
New activity about 15 hours ago in unsloth/DeepSeek-R1-GGUF: "UD-IQ1_M models for distilled R1 versions?"
New activity about 15 hours ago in unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit: "thank you for the great work, can this be converted to gguf?"
New activity about 15 hours ago in unsloth/DeepSeek-R1-GGUF: "Got it running after downloading some RAM!"
shimmyshimmer's activity
UD-IQ1_M models for distilled R1 versions? (2) · #6 opened 1 day ago by SamPurkis
thank you for the great work, can this be converted to gguf? (1) · #1 opened 1 day ago by asidaddy
Got it running after downloading some RAM! (2) · #7 opened 1 day ago by ubergarm
Update README.md (1) · #5 opened 1 day ago by promiseokoli
Dynamic quants (2) · #13 opened 1 day ago by XelotX
Dynamic Quant? (1) · #5 opened about 19 hours ago by wyes1
Inference speed (2) · #9 opened about 23 hours ago by Iker
Error loading on lm-studio (3) · #1 opened 8 days ago by victor-Des
license issue (2) · #3 opened 5 days ago by sandi99
Are the Q4 and Q5 models R1 or R1-Zero (18) · #2 opened 9 days ago by gng2info
What is the VRAM requirement to run this ? (5) · #1 opened 9 days ago by RageshAntony
Llama.cpp server chat template (2) · #4 opened 4 days ago by softwareweaver
Llama 3.2 vision 11B model for OCR task (1) · #1 opened 6 days ago by Pavankumar03
can vllm launch this model? (5) · #2 opened about 1 month ago by chopin1998
Quality vs 4bnb version (5) · #2 opened 8 days ago by supercharge19
Please consider awq version (1) · #4 opened 9 days ago by destranged
IQ4_XS? (1) · #1 opened 9 days ago by AaronFeng753
The `tokenizer_config.json` is missing the `chat_template` jinja? (1) · #1 opened 9 days ago by ubergarm
unknown pre-tokenizer type: 'deepseek-r1-qwen' (7) · #1 opened 9 days ago by Neman
Use with ollama (1) · #2 opened 9 days ago by Michael22