---
license: apache-2.0
---
# Zephyr 7B Beta Llamafiles
## See [here](https://dev.to/timesurgelabs/llamafile-ai-integration-deployment-made-easy-44cg#how-to-use-llamafiles) for a guide on how to use llamafiles!
* Original Model: [Zephyr 7B Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* Quantized Model: [Zephyr 7B Beta GGUF](https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF)
* Llamafile Source Code: [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile)
  * Built with [Llamafile `5ea929c`](https://github.com/Mozilla-Ocho/llamafile/tree/5ea929c618e9a2b162d39d8cc1c91cb564934a9f)
Both the server and the CLI are based on [TheBloke's Zephyr 7B Beta GGUF Q4_K_M](https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF) model.
## Usage
**NOTE:** Because the executable is larger than 4GB, it currently does not run on Windows. I will publish a Windows-friendly build of Zephyr 7B Beta when I can.
```bash
# replace the filename with the CLI llamafile if you prefer
wget https://huggingface.co/TimeSurgeLabs/zephyr-7b-beta-llamafile/resolve/main/zephyr-beta-server.llamafile
chmod +x zephyr-beta-server.llamafile
./zephyr-beta-server.llamafile
```
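Once the server llamafile is running, you can query it over HTTP. The sketch below assumes llama.cpp server defaults (port `8080`, the `/completion` endpoint) and formats the prompt with Zephyr's chat template from the original model card; adjust if your setup differs.

```bash
# Zephyr's chat template uses <|user|> / <|assistant|> markers with </s> separators.
# Port 8080 and the /completion endpoint are llama.cpp server defaults (assumed here).
PROMPT='<|user|>\nWhat is a llamafile?</s>\n<|assistant|>\n'
curl -s http://localhost:8080/completion \
  -H 'Content-Type: application/json' \
  -d "{\"prompt\": \"$PROMPT\", \"n_predict\": 128}"
```

The response is a JSON object whose `content` field holds the generated text.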