Tags: Transformers · GGUF · Inference Endpoints


QuantFactory/starcoder2-3b-instruct-v0.1-GGUF

This is a quantized version of onekq-ai/starcoder2-3b-instruct-v0.1, created using llama.cpp.
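A GGUF quant like this can be loaded directly from the Hub with llama-cpp-python; a minimal sketch, assuming the `llama-cpp-python` package and a Q4_K_M quant file (the filename pattern below is an assumption — check the repository's file list for the exact names):

```python
# Sketch: load one of this repo's GGUF quants with llama-cpp-python.
# The filename glob is an assumption; pick any quant listed in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/starcoder2-3b-instruct-v0.1-GGUF",
    filename="*Q4_K_M.gguf",  # assumed naming convention for the 4-bit quant
    n_ctx=4096,               # context length; adjust to your memory budget
)

out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```

`Llama.from_pretrained` downloads and caches the matching GGUF file via the Hugging Face Hub before loading it.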

Original Model Card

Starcoder2-3b, fine-tuned the same way as https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1, using https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k

Epochs: 1
Learning Rate: 0.0001
LoRA Rank: 8
Batch Size: 16
Evaluation Split: 0
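These hyperparameters imply a rough optimizer-step count for the run. A back-of-the-envelope sketch, assuming the dataset holds about 50k examples (per its name) and no gradient accumulation:

```python
# Rough optimizer-step count implied by the hyperparameters above:
# one epoch over the ~50k-example dataset at batch size 16.
# The exact example count and accumulation settings are assumptions.
import math

DATASET_SIZE = 50_000  # approximate, from the dataset name
BATCH_SIZE = 16
EPOCHS = 1

steps_per_epoch = math.ceil(DATASET_SIZE / BATCH_SIZE)
total_steps = steps_per_epoch * EPOCHS
print(total_steps)  # 3125
```

With LoRA rank 8, only the low-rank adapter weights are updated over those steps, not the full 3B-parameter backbone.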

Downloads last month: 252
Format: GGUF
Model size: 3.03B params
Architecture: starcoder2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
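The bit-width determines the rough on-disk size of each quant. A back-of-the-envelope sketch for this 3.03B-parameter model, counting only weight storage (real GGUF files run somewhat larger because of per-block quantization scales and metadata):

```python
# Approximate GGUF file sizes for a 3.03B-parameter model at the
# listed weight bit-widths (weight storage only; overhead ignored).
PARAMS = 3.03e9  # parameter count from the model card

def approx_size_gb(bits_per_weight: float) -> float:
    """Pure weight storage in gigabytes at the given bits per weight."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.2f} GB")
```

So the 2-bit quant lands under 1 GB while the 8-bit quant needs roughly 3 GB, which is the usual size/quality trade-off when picking a quant.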


Model tree for QuantFactory/starcoder2-3b-instruct-v0.1-GGUF: this model is one of 7 quantized variants of onekq-ai/starcoder2-3b-instruct-v0.1.

Dataset used to train QuantFactory/starcoder2-3b-instruct-v0.1-GGUF: bigcode/self-oss-instruct-sc2-exec-filter-50k