ikawrakow/mixtral-8x7b-quantized-gguf

GGUF · License: apache-2.0
1 contributor · History: 4 commits
Latest commit: Adding legacy llama.cpp quants (ikawrakow, dfdf4f1, 11 months ago)
File                           Size      Storage   Last commit                       Last modified
.gitattributes                 1.56 kB             Adding Mixtral quantized models   11 months ago
README.md                      1.53 kB             Update README.md                  11 months ago
mixtral-8x7b-q2k.gguf          15.4 GB   LFS       Adding Mixtral quantized models   11 months ago
mixtral-8x7b-q3k-medium.gguf   22.4 GB   LFS       Adding Mixtral quantized models   11 months ago
mixtral-8x7b-q3k-small.gguf    20.3 GB   LFS       Adding Mixtral quantized models   11 months ago
mixtral-8x7b-q40.gguf          26.4 GB   LFS       Adding legacy llama.cpp quants    11 months ago
mixtral-8x7b-q41.gguf          29.3 GB   LFS       Adding legacy llama.cpp quants    11 months ago
mixtral-8x7b-q4k-medium.gguf   28.4 GB   LFS       Adding Mixtral quantized models   11 months ago
mixtral-8x7b-q4k-small.gguf    26.7 GB   LFS       Adding Mixtral quantized models   11 months ago
mixtral-8x7b-q50.gguf          32.2 GB   LFS       Adding legacy llama.cpp quants    11 months ago
mixtral-8x7b-q5k-small.gguf    32.2 GB   LFS       Adding Mixtral quantized models   11 months ago
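
The GGUF files above can be fetched individually rather than cloning the whole repository. Below is a minimal sketch, assuming the huggingface_hub Python package is installed; the repo id and file name are taken from the listing above, and the llama.cpp binary name mentioned in the comment may differ between versions.

```python
# Minimal sketch: download one quantized GGUF file from this repo.
# Assumes `pip install huggingface_hub`; swap the filename for another
# quantization level from the table above if desired.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="ikawrakow/mixtral-8x7b-quantized-gguf",
    filename="mixtral-8x7b-q4k-medium.gguf",  # 28.4 GB, k-quant medium
)
print(model_path)  # local path to the downloaded GGUF file

# The file can then be loaded by any GGUF-compatible runtime, e.g. llama.cpp
# (CLI name varies by version):
#   ./main -m <model_path> -p "Hello" -n 128
```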