
The data used to train the model are available on Hugging Face under siacus/dv_subject.
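If you want a local copy of the data, it can be fetched with the `huggingface-cli` tool from the `huggingface_hub` package. This is a sketch, not part of the model card's own instructions; the local directory name is an arbitrary assumption.

```shell
# Download the siacus/dv_subject dataset from the Hub
# (requires `pip install huggingface_hub`; target directory is arbitrary).
if command -v huggingface-cli >/dev/null 2>&1; then
  huggingface-cli download siacus/dv_subject --repo-type dataset --local-dir dv_subject
fi
```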

The small-dv version of the fine-tuned model was trained on 5,000 randomly sampled records.

The large version was trained on the full set of 76.1K training records.

The test set contains 32.6K rows.

The F16 version was created from the merged weights with llama.cpp on a CUDA GPU, and the 4-bit quantized version was created on a Mac M2 Ultra (Metal architecture). If you want to use the 4-bit quantized version on CUDA, please quantize it directly from the F16 version.
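The re-quantization step above can be done with llama.cpp's quantize tool. A minimal sketch, assuming llama.cpp has been built and is on the `PATH`; the GGUF filenames below are hypothetical and should be replaced with the actual downloaded files.

```shell
# Hypothetical filenames; substitute the real F16 GGUF downloaded from this repo.
F16=llama-2-7b-small-dv-f16.gguf
Q4=llama-2-7b-small-dv-q4_0.gguf

# llama.cpp ships the tool as `llama-quantize` (older builds name it `quantize`).
# Q4_0 is one of several 4-bit quantization types llama.cpp supports.
if command -v llama-quantize >/dev/null 2>&1; then
  llama-quantize "$F16" "$Q4" Q4_0
fi
```

Running the quantization on the CUDA machine itself avoids any incompatibility from a GGUF quantized on a different backend.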

For more information about this model, refer to the main repository for the supplementary material of the manuscript *Rethinking Scale: The Efficacy of Fine-Tuned Open-Source LLMs in Large-Scale Reproducible Social Science Research*.

Model size: 6.74B params
Architecture: llama
Format: GGUF (4-bit and 16-bit versions)

