openbmb/MiniCPM-Llama3-V-2_5-gguf
Tags: GGUF · llama.cpp
3 contributors · 22 commits

Latest commit: 7155cb6 (verified) by tc-mb, 6 months ago — "Upload ggml-model-Q5_K_M.gguf with huggingface_hub"
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 1.56 kB | Add model | 6 months ago |
| README.md | 347 Bytes | Update README.md | 6 months ago |
| ggml-model-F16.gguf | 16.1 GB (LFS) | Add fp16 | 6 months ago |
| ggml-model-IQ2_M.gguf | 259 MB (LFS) | Upload ggml-model-IQ2_M.gguf with huggingface_hub | 6 months ago |
| ggml-model-IQ3_M.gguf | 3.79 GB (LFS) | Upload ggml-model-IQ3_M.gguf with huggingface_hub | 6 months ago |
| ggml-model-IQ3_S.gguf | 3.68 GB (LFS) | Upload ggml-model-IQ3_S.gguf with huggingface_hub | 6 months ago |
| ggml-model-IQ3_XS.gguf | 3.52 GB (LFS) | Upload ggml-model-IQ3_XS.gguf with huggingface_hub | 6 months ago |
| ggml-model-IQ3_XXS.gguf | 312 MB (LFS) | Upload ggml-model-IQ3_XXS.gguf with huggingface_hub | 6 months ago |
| ggml-model-IQ4_NL.gguf | 4.71 GB (LFS) | Upload ggml-model-IQ4_NL.gguf with huggingface_hub | 6 months ago |
| ggml-model-Q2_K.gguf | 3.18 GB (LFS) | Upload ggml-model-Q2_K.gguf with huggingface_hub | 6 months ago |
| ggml-model-Q3_K_L.gguf | 4.32 GB (LFS) | Upload ggml-model-Q3_K_L.gguf with huggingface_hub | 6 months ago |
| ggml-model-Q3_K_M.gguf | 4.02 GB (LFS) | Upload ggml-model-Q3_K_M.gguf with huggingface_hub | 6 months ago |
| ggml-model-Q3_K_S.gguf | 3.67 GB (LFS) | Upload ggml-model-Q3_K_S.gguf with huggingface_hub | 6 months ago |
| ggml-model-Q4_0.gguf | 4.66 GB (LFS) | Upload ggml-model-Q4_0.gguf with huggingface_hub | 6 months ago |
| ggml-model-Q4_K.gguf | 4.92 GB (LFS) | Upload ggml-model-Q4_K.gguf with huggingface_hub | 6 months ago |
| ggml-model-Q4_K_M.gguf | 4.92 GB (LFS) | Add model | 6 months ago |
| ggml-model-Q4_K_S.gguf | 4.69 GB (LFS) | Upload ggml-model-Q4_K_S.gguf with huggingface_hub | 6 months ago |
| ggml-model-Q5_K_M.gguf | 5.73 GB (LFS) | Upload ggml-model-Q5_K_M.gguf with huggingface_hub | 6 months ago |
| ggml-model-Q5_K_S.gguf | 5.6 GB (LFS) | Upload ggml-model-Q5_K_S.gguf with huggingface_hub | 6 months ago |
| ggml-model-Q8_0.gguf | 8.54 GB (LFS) | Upload ggml-model-Q8_0.gguf with huggingface_hub | 6 months ago |
| mmproj-model-f16.gguf | 1.03 GB (LFS) | Add model | 6 months ago |
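Any file in the listing above can be fetched directly, since the Hub serves repo files (including LFS-backed GGUF blobs) at a predictable `resolve` URL. A minimal sketch, assuming the standard Hugging Face URL convention `https://huggingface.co/<repo_id>/resolve/<revision>/<filename>` and using the repo id and filenames from this listing (the helper name `hub_file_url` is our own, not part of any library):

```python
# Sketch: build direct download URLs for files in this repo.
# Assumes the standard Hub "resolve" URL pattern; repo id and
# filenames are taken from the file listing above.

REPO_ID = "openbmb/MiniCPM-Llama3-V-2_5-gguf"
REVISION = "main"

def hub_file_url(filename: str, repo_id: str = REPO_ID, revision: str = REVISION) -> str:
    """Return the direct (LFS-aware) download URL for a file in a Hub repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# A common quality/size trade-off is the Q4_K_M quant (4.92 GB above);
# multimodal use with llama.cpp also needs the mmproj projector file.
model_url = hub_file_url("ggml-model-Q4_K_M.gguf")
mmproj_url = hub_file_url("mmproj-model-f16.gguf")
print(model_url)
print(mmproj_url)
```

In practice you would normally let `huggingface_hub.hf_hub_download(repo_id=..., filename=...)` handle the download and local caching rather than hitting the URL yourself; the sketch just shows where the files in the table live.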