
ikawrakow/mixtral-8x7b-quantized-gguf (GGUF)
252 GB · 1 contributor · History: 6 commits
Latest commit: c60d8b0 by ikawrakow, "Adding IQ3_XXS and fixed _M models", almost 2 years ago
  • .gitattributes · 1.56 kB · Adding Mixtral quantized models · almost 2 years ago
  • README.md · 1.53 kB · Update README.md · almost 2 years ago
  • mixtral-8x7b-iq3-xxs.gguf · 18.3 GB · Adding IQ3_XXS and fixed _M models · almost 2 years ago
  • mixtral-8x7b-q2k.gguf · 15.4 GB · Adding Mixtral quantized models · almost 2 years ago
  • mixtral-8x7b-q3k-medium.gguf · 22.5 GB · Adding IQ3_XXS and fixed _M models · almost 2 years ago
  • mixtral-8x7b-q3k-small.gguf · 20.3 GB · Adding Mixtral quantized models · almost 2 years ago
  • mixtral-8x7b-q40.gguf · 26.4 GB · Adding legacy llama.cpp quants · almost 2 years ago
  • mixtral-8x7b-q41.gguf · 29.3 GB · Adding legacy llama.cpp quants · almost 2 years ago
  • mixtral-8x7b-q4k-medium.gguf · 28.4 GB · Adding IQ3_XXS and fixed _M models · almost 2 years ago
  • mixtral-8x7b-q4k-small.gguf · 26.7 GB · Adding Mixtral quantized models · almost 2 years ago
  • mixtral-8x7b-q50.gguf · 32.2 GB · Adding legacy llama.cpp quants · almost 2 years ago
  • mixtral-8x7b-q5k-small.gguf · 32.2 GB · Adding Mixtral quantized models · almost 2 years ago
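
To fetch one of the GGUF files listed above for local use, a minimal sketch with the huggingface_hub Python package is shown below. The repo id and filename are taken from this listing; the llama.cpp binary name at the end is an assumption about a local build, not something documented in this repository.

```python
# Sketch: download one quantized Mixtral file from this repo, assuming
# `pip install huggingface_hub` has been run. Filename and size come from
# the file listing above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="ikawrakow/mixtral-8x7b-quantized-gguf",
    filename="mixtral-8x7b-iq3-xxs.gguf",  # 18.3 GB per the listing above
)
print(model_path)

# With a local llama.cpp build, a typical invocation would then be something
# like (binary name varies by llama.cpp version, e.g. ./main in older builds):
#   ./llama-cli -m <model_path> -p "Hello"
```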