The modelfile I wrote caused Ollama to fail to generate the think section of the R1 model
#11 opened 1 day ago by caimj
Deployment error with lmdeploy: RuntimeError: Could not find model architecture from config
#10 opened 5 days ago by ismailyenigul
R1 32b is much worse than QwQ ...
22
#6 opened 12 days ago by mirek190
Oobabooga Errors
3
#3 opened 13 days ago by SekkSea
IQ3_XS and IQ3_M missing in ollama deployment
#2 opened 13 days ago by deleted
FIXED: Error with llama-server `unknown pre-tokenizer type: 'deepseek-r1-qwen'`
4
#1 opened 13 days ago by ubergarm