---
library_name: transformers
license: apache-2.0
quantized_by: stillerman
tags:
- llamafile
- gguf
language:
- en
datasets:
- HuggingFaceTB/smollm-corpus
---
# SmolLM-135M-Instruct - llamafile
This repo contains `.gguf` and `.llamafile` files for [SmolLM-135M-Instruct](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966). A [llamafile](https://llamafile.ai/) is a single-file executable that runs locally on most computers, with no installation required.
# Use it in 3 lines!
```
wget https://huggingface.co/stillerman/SmolLM-135M-Instruct-Llamafile/resolve/main/SmolLM-135M-Instruct-F16.llamafile
chmod a+x SmolLM-135M-Instruct-F16.llamafile
./SmolLM-135M-Instruct-F16.llamafile
```
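Once launched, the llamafile starts a local web server with an OpenAI-compatible chat completions endpoint. A minimal sketch of querying it with `curl`, assuming the default port `8080` (check the console output after launch; the `model` field value here is illustrative):
```
# Send a chat request to the locally running llamafile server
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "SmolLM-135M-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```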
# Thank you to
- Hugging Face for the [SmolLM model family](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966)
- Mozilla for [Llamafile](https://llamafile.ai/)
- [llama.cpp](https://github.com/ggerganov/llama.cpp/)
- [Justine Tunney](https://huggingface.co/jartine) and [Compilade](https://github.com/compilade) for help