|
This is the Q3_K_M GGUF port of the [lightblue/Karasu-Mixtral-8x22B-v0.1](https://huggingface.co/lightblue/Karasu-Mixtral-8x22B-v0.1) model. |
|
### How to use |
|
The easiest way to run this model directly is with the [llama.cpp](https://github.com/ggerganov/llama.cpp) package.
|
```bash
# Clone and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Download the GGUF shards from the Hugging Face Hub
huggingface-cli download lightblue/Karasu-Mixtral-8x22B-v0.1-gguf --local-dir /some/folder/

# Run inference; point llama.cpp at the first shard and it loads the rest automatically
./main -m /some/folder/Karasu-Mixtral-8x22B-v0.1-Q3_K_M-00001-of-00005.gguf -p "<s>[INST] Tell me a really funny joke. No puns! [/INST]" -n 256 -e
```
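
If you have an NVIDIA GPU, you can offload some of the layers to it for a large speed-up. A minimal sketch, assuming a CUDA toolkit is installed and that your llama.cpp checkout supports the `LLAMA_CUDA` Makefile flag (older checkouts used `LLAMA_CUBLAS`); the `-ngl` value here is illustrative, so tune it to your VRAM:

```bash
# Rebuild with CUDA support (LLAMA_CUDA is assumed to be available in your checkout)
make clean && make LLAMA_CUDA=1

# Offload 20 layers to the GPU; 20 is an illustrative value, not a recommendation
./main -m /some/folder/Karasu-Mixtral-8x22B-v0.1-Q3_K_M-00001-of-00005.gguf \
  -p "<s>[INST] Tell me a really funny joke. No puns! [/INST]" \
  -n 256 -e -ngl 20
```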
|
If you would prefer an easy GUI and have more than 64GB of RAM, you can also run this model with [LM Studio](https://lmstudio.ai/) by searching for it in the search bar.
|
### Commands used to make this model
|
```bash
cd llama.cpp

# Convert the merged Hugging Face model to a single f16 GGUF file
./convert-hf-to-gguf.py --outfile /workspace/Karasu-Mixtral-8x22B-v0.1.gguf --outtype f16 /workspace/llm_training/axolotl/mixtral_8x22B_training/merged_model_multiling

# Quantize the f16 GGUF down to Q3_K_M
./quantize /workspace/Karasu-Mixtral-8x22B-v0.1.gguf /workspace/Karasu-Mixtral-8x22B-v0.1-Q3_K_M.gguf Q3_K_M

# Split the quantized model into shards of at most 128 tensors each for upload
./gguf-split --split --split-max-tensors 128 /workspace/Karasu-Mixtral-8x22B-v0.1-Q3_K_M.gguf /workspace/split_gguf_q3km/Karasu-Mixtral-8x22B-v0.1-Q3_K_M
```
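
The shards load directly in llama.cpp, but if you ever need a single file again (for example, for a tool that cannot read split GGUFs), `gguf-split` can also merge them. A minimal sketch, assuming the shards were downloaded to `/some/folder/` (the output path is illustrative):

```bash
# Merge the shards back into one GGUF by pointing --merge at the first shard
./gguf-split --merge \
  /some/folder/Karasu-Mixtral-8x22B-v0.1-Q3_K_M-00001-of-00005.gguf \
  /some/folder/Karasu-Mixtral-8x22B-v0.1-Q3_K_M.gguf
```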