---
license: apache-2.0
datasets:
  - 64bits/lima_vicuna_format
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

Buy Me A Coffee

This is OpenLLaMA 3B V2 fine-tuned on LIMA (ShareGPT format) for 2 epochs.

Prompt template:

```
### HUMAN:
{prompt}

### RESPONSE:
<leave a newline for the model to answer>
```
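
A minimal inference sketch with the `transformers` library, assuming the repository ID `acrastt/Bean-3B` (inferred from this card) and the prompt format above; adjust the repo ID and generation settings to your setup.

```python
# Minimal sketch, not part of the original card; the repository ID below is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "acrastt/Bean-3B"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the prompt in the template above, leaving a newline after "### RESPONSE:".
prompt = "### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens so only the model's answer is printed.
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```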

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 35.2  |
| ARC (25-shot)       | 40.36 |
| HellaSwag (10-shot) | 72.0  |
| MMLU (5-shot)       | 26.43 |
| TruthfulQA (0-shot) | 36.11 |
| Winogrande (5-shot) | 65.67 |
| GSM8K (5-shot)      | 0.53  |
| DROP (3-shot)       | 5.28  |