This is the Q3_K_M GGUF port of the [lightblue/Karasu-Mixtral-8x22B-v0.1](https://huggingface.co/lightblue/Karasu-Mixtral-8x22B-v0.1) model.
### How to use
The easiest way to run this model directly is with the llama.cpp package.
```bash
# Build llama.cpp from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Download the split GGUF files from Hugging Face
huggingface-cli download lightblue/Karasu-Mixtral-8x22B-v0.1-gguf --local-dir /some/folder/

# Run inference; pointing -m at the first shard loads the remaining shards automatically
./main -m /some/folder/Karasu-Mixtral-8x22B-v0.1-Q3_K_M-00001-of-00005.gguf -p "<s>[INST] Tell me a really funny joke. No puns! [/INST]" -n 256 -e
```
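If you would rather query the model over HTTP, llama.cpp also builds a small server binary. The sketch below assumes the same build and download location as above; the context size, port, and completion payload are illustrative rather than taken from the original card.

```bash
# Serve the model over HTTP (context size and port are illustrative assumptions)
./server -m /some/folder/Karasu-Mixtral-8x22B-v0.1-Q3_K_M-00001-of-00005.gguf -c 4096 --port 8080

# Query it from another shell via the server's /completion endpoint
curl http://localhost:8080/completion -d '{"prompt": "<s>[INST] Tell me a really funny joke. No puns! [/INST]", "n_predict": 256}'
```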
If you would like an easy-to-use GUI and have more than 64GB of RAM, you can also run this model with [LM Studio](https://lmstudio.ai/) by searching for it in the search bar.
### Commands used to make this model
```bash
cd llama.cpp
# Convert the merged HF model to a single f16 GGUF file
./convert.py --outfile /workspace/Karasu-Mixtral-8x22B-v0.1.gguf --outtype f16 /workspace/llm_training/axolotl/mixtral_8x22B_training/merged_model_multiling
# Quantize the f16 GGUF down to Q3_K_M
./quantize /workspace/Karasu-Mixtral-8x22B-v0.1.gguf /workspace/Karasu-Mixtral-8x22B-v0.1_q3_k_m.gguf Q3_K_M
# Split the quantized file into shards of at most 5GB for upload
./gguf-split --split --split-max-size 5G /workspace/Karasu-Mixtral-8x22B-v0.1_q3_k_m.gguf /workspace/somewhere-sensible
```
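Note that the last argument to `gguf-split` is an output prefix rather than a directory, so the shards should come out named like `somewhere-sensible-00001-of-0000N.gguf`. A quick sanity check, with shard names assumed to follow that scheme:

```bash
# List the generated shards (names assume gguf-split's <prefix>-0000X-of-0000N.gguf scheme)
ls -lh /workspace/somewhere-sensible-*.gguf

# Smoke test: load from the first shard (the rest are picked up automatically)
./main -m /workspace/somewhere-sensible-00001-of-00005.gguf -p "Hello" -n 16
```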