MaziyarPanahi/TinyMistral-248M-SFT-v4-GGUF
Tags: Text Generation, Transformers, GGUF, Safetensors, mistral, quantized, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, dataset:OpenAssistant/oasst_top1_2023-08-25, has_space, text-generation-inference, conversational
License: apache-2.0
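
Below is a minimal sketch of how one might fetch one of the GGUF quants from this repo and run it locally with `huggingface_hub` and `llama-cpp-python`. The quant filename shown is an assumption based on common naming conventions; check the Files and versions tab for the names actually published in the repo.

```python
# Sketch: download a GGUF quant from this repo and run a short completion.
# Assumes llama-cpp-python and huggingface_hub are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename -- the repo publishes one .gguf file per quantization level
# (2-bit through 8-bit); verify the exact name before running.
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/TinyMistral-248M-SFT-v4-GGUF",
    filename="TinyMistral-248M-SFT-v4.Q4_K_M.gguf",  # assumed 4-bit quant name
)

# Load the quantized model on CPU and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("What is GGUF quantization?", max_tokens=128)
print(out["choices"][0]["text"])
```

Picking a lower-bit quant (e.g. 2-bit) trades answer quality for a smaller file and less memory, while the 8-bit file stays closest to the original weights.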
Community (1 open discussion):
Why different sizes for same quants? (#1, opened 10 months ago by supercharge19, 6 comments)