
The base model is Meta's Llama-2-7b-hf. It was fine-tuned with supervised fine-tuning (SFT) on the Guanaco dataset, and the prompt format is similar to that of the original Guanaco model. This repo contains the merged fp16 model.
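Since the prompt format follows the original Guanaco model, a minimal sketch of building such a prompt might look like the following. This assumes the `### Human: ... ### Assistant:` template used by Guanaco; the helper name and example instruction are illustrative, not part of this repo.

```python
# Hedged sketch: Guanaco-style prompt formatting.
# Assumes the "### Human: ... ### Assistant:" template of the
# original Guanaco model; verify against this repo's model card.

def format_guanaco_prompt(instruction: str) -> str:
    """Wrap a user instruction in the assumed Guanaco chat template."""
    return f"### Human: {instruction}\n### Assistant:"

prompt = format_guanaco_prompt("What is the capital of France?")
print(prompt)
# To generate with the merged fp16 weights, one would typically load the
# repo with Hugging Face Transformers, e.g. (sketch, repo id assumed):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("<this-repo-id>")
# model = AutoModelForCausalLM.from_pretrained("<this-repo-id>",
#                                              torch_dtype="float16")
```

The generation snippet is left commented out because it requires downloading the full 7B checkpoint.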

Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.



Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 43.83 |
| ARC (25-shot) | 52.47 |
| HellaSwag (10-shot) | 78.75 |
| MMLU (5-shot) | 45.33 |
| TruthfulQA (0-shot) | 43.9 |
| Winogrande (5-shot) | 74.19 |
| GSM8K (5-shot) | 6.07 |
| DROP (3-shot) | 6.11 |