dranger003/c4ai-command-r-plus-iMat.GGUF
Text Generation · GGUF · Inference Endpoints · 137 likes
License: cc-by-nc-4.0
c4ai-command-r-plus-iMat.GGUF · 2 contributors · History: 27 commits
Latest commit 938e450 (verified, 7 months ago) by dranger003: "Upload ggml-c4ai-command-r-plus-q6_k-00002-of-00002.gguf with huggingface_hub"
| File | Size | LFS | Last commit message | Last updated |
|---|---|---|---|---|
| .gitattributes | 2.83 kB | | Upload ggml-c4ai-command-r-plus-q6_k-00002-of-00002.gguf with huggingface_hub | 7 months ago |
| README.md | 2.18 kB | | Update README.md | 7 months ago |
| ggml-c4ai-command-r-plus-iq1_m.gguf | 29.3 GB | LFS | Upload ggml-c4ai-command-r-plus-iq1_m.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-iq1_s.gguf | 27.3 GB | LFS | Upload ggml-c4ai-command-r-plus-iq1_s.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-iq2_m.gguf | 40.2 GB | LFS | Upload ggml-c4ai-command-r-plus-iq2_m.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-iq2_xxs.gguf | 32.7 GB | LFS | Upload ggml-c4ai-command-r-plus-iq2_xxs.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-iq3_m-00001-of-00002.gguf | 31 GB | LFS | Upload ggml-c4ai-command-r-plus-iq3_m-00001-of-00002.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-iq3_m-00002-of-00002.gguf | 20.4 GB | LFS | Upload ggml-c4ai-command-r-plus-iq3_m-00002-of-00002.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-iq3_xxs.gguf | 44.8 GB | LFS | Upload ggml-c4ai-command-r-plus-iq3_xxs.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-iq4_xs-00001-of-00002.gguf | 35.4 GB | LFS | Upload ggml-c4ai-command-r-plus-iq4_xs-00001-of-00002.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-iq4_xs-00002-of-00002.gguf | 24.5 GB | LFS | Upload ggml-c4ai-command-r-plus-iq4_xs-00002-of-00002.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-q5_k-00001-of-00002.gguf | 42.7 GB | LFS | Upload ggml-c4ai-command-r-plus-q5_k-00001-of-00002.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-q5_k-00002-of-00002.gguf | 34.7 GB | LFS | Upload ggml-c4ai-command-r-plus-q5_k-00002-of-00002.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-q6_k-00002-of-00002.gguf | 41.3 GB | LFS | Upload ggml-c4ai-command-r-plus-q6_k-00002-of-00002.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-q8_0-00001-of-00003.gguf | 48.1 GB | LFS | Upload ggml-c4ai-command-r-plus-q8_0-00001-of-00003.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-q8_0-00002-of-00003.gguf | 41.4 GB | LFS | Upload ggml-c4ai-command-r-plus-q8_0-00002-of-00003.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-q8_0-00003-of-00003.gguf | 23.8 GB | LFS | Upload ggml-c4ai-command-r-plus-q8_0-00003-of-00003.gguf with huggingface_hub | 7 months ago |
| ggml-c4ai-command-r-plus-q8_0-imatrix.dat | 27.5 MB | LFS | Upload ggml-c4ai-command-r-plus-q8_0-imatrix.dat with huggingface_hub | 7 months ago |
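
The files above can be fetched programmatically with the same huggingface_hub library referenced in the commit messages. Below is a minimal sketch, assuming the standard `hf_hub_download` API; the quant choices are illustrative only. Note that multi-part quants (e.g. the iq4_xs pair) must have every shard downloaded into the same directory; recent llama.cpp builds can then be pointed at the first shard and will pick up the remaining parts.

```python
# Minimal sketch: download GGUF files from this repo with huggingface_hub.
# The specific quants chosen here are assumptions for illustration, not recommendations.
from huggingface_hub import hf_hub_download

repo_id = "dranger003/c4ai-command-r-plus-iMat.GGUF"

# Single-file quant: one download is enough.
model_path = hf_hub_download(
    repo_id=repo_id,
    filename="ggml-c4ai-command-r-plus-iq2_m.gguf",
)
print("model at:", model_path)

# Split quant: download every shard; llama.cpp is then given the
# first shard (00001-of-00002) and loads the rest from the same directory.
for part in (
    "ggml-c4ai-command-r-plus-iq4_xs-00001-of-00002.gguf",
    "ggml-c4ai-command-r-plus-iq4_xs-00002-of-00002.gguf",
):
    hf_hub_download(repo_id=repo_id, filename=part)
```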