chatbench-distilgpt2

Model creator: microsoft
Original model: microsoft/chatbench-distilgpt2
GGUF quantization: provided by ysn-rfd using llama.cpp

Special thanks

๐Ÿ™ Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

Use with Ollama

ollama run "hf.co/ysn-rfd/chatbench-distilgpt2-GGUF:Q3_K_M"
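
To run a one-off prompt instead of an interactive session, pass the prompt directly on the command line (the prompt text below is only an example):

ollama run "hf.co/ysn-rfd/chatbench-distilgpt2-GGUF:Q3_K_M" "The meaning of life and the universe is"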

Use with LM Studio

lms load "ysn-rfd/chatbench-distilgpt2-GGUF"
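
Once loaded, the model can also be served over LM Studio's local OpenAI-compatible API. A minimal sketch, assuming the default server port (1234); the model identifier in the request may differ from the repo name, so check lms ls for the exact name:

lms server start
curl http://localhost:1234/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ysn-rfd/chatbench-distilgpt2-GGUF", "prompt": "The meaning of life and the universe is", "max_tokens": 32}'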

Use with llama.cpp CLI

llama-cli -hf "ysn-rfd/chatbench-distilgpt2-GGUF:Q3_K_M" -p "The meaning of life and the universe is"
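
Generation length and sampling can be adjusted with the usual llama.cpp flags; the prompt and values below are only illustrative:

llama-cli -hf "ysn-rfd/chatbench-distilgpt2-GGUF:Q3_K_M" -p "Once upon a time" -n 128 --temp 0.8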

Use with llama.cpp Server

llama-server -hf "ysn-rfd/chatbench-distilgpt2-GGUF:Q3_K_M" -c 4096
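
Once the server is up, send requests to its HTTP API. A minimal completion request, assuming the default host and port (127.0.0.1:8080):

curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning of life and the universe is", "n_predict": 64}'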

Model details

Format: GGUF
Model size: 81.9M params
Architecture: gpt2

Available quantizations: 3-bit, 4-bit, 5-bit, 8-bit, 16-bit
