Munish Kumar (munish0838)

AI & ML interests: LLM Quantizations
- Improve model card: Add GGUF usage, paper link, and correct metadata · #1 opened 6 months ago by nielsr
- GGUF (3) · #1 opened 11 months ago by amogusgaysex
- Heads up: this isn't the new Ministral 3B (2) · #1 opened about 1 year ago by bartowski
- Requesting re-quant: tokenizer updated with better ChatML support (1) · #1 opened about 1 year ago by Luni
- Error loading model in llama.cpp? (8) · #1 opened over 1 year ago by ubergarm
- Add paper and citation · #1 opened over 1 year ago by maximegmd
- Adding `safetensors` variant of this model · #1 opened over 1 year ago by SFconvertbot (four identical automated PRs)
- What am I doing wrong? Using Oobabooga. (3) · #1 opened over 1 year ago by Goldenblood56
- does not appear to have a file named config.json (2) · #2 opened over 1 year ago by atubong
- Compatibility with llama-cpp and Ollama (6) · #17 opened over 1 year ago by liashchynskyi
- Is the original model allganize/Llama-3-Alpha-Ko-8B-Instruct? (3) · #1 opened over 1 year ago by coconut00
- error when loading the model (5) · #3 opened over 1 year ago by StefanStroescu
- How to convert to HF format? (5) · #6 opened over 1 year ago by ddh0
- Rename configuration_internlm.py to configuration_internlm2.py · #1 opened over 1 year ago by munish0838
- Rename configuration_internlm.py to configuration_internlm2.py · #1 opened over 1 year ago by munish0838
- I have now tried two quantizations (8_0 and 6_K); both fail as shown below. (3) · #2 opened over 1 year ago by BigDeeper
- Working with llama.cpp? (4) · #1 opened over 1 year ago by ivanpzk