mgoin and shubhrapandit committed
Commit 6456992
Parent: a1ae06a

Update README.md (#1)

- Update README.md (545f9414e580ebf80d8aee4a4cdc695ecbdcf92c)


Co-authored-by: Shubhra Pandit <shubhrapandit@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +7 -8
README.md CHANGED
@@ -50,17 +50,16 @@ Model evaluation metrics and results.
 
 | Benchmark | Metric | Llama-2-7b-instruct | Llama-2-7b-pruned70-retrained-instruct |
 |------------------------------------------------|---------------|-------------|-------------------------------|
-| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | xxxx | xxxx |
-| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | xxxx | xxxx |
-| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | xxxx | xxxx |
-| [ARC-c](https://arxiv.org/abs/1911.01547) | | xxxx | xxxx |
-| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | xxxx | xxxx |
-| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | xxxx | xxxx |
-| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | xxxx | xxxx |
+| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot | 48.60% | 42.33% |
+| [HellaSwag](https://arxiv.org/abs/1905.07830) | 10-shot | 79.45% | 77.21% |
+| [WinoGrande](https://arxiv.org/abs/1907.10641) | 5-shot | 75.69% | 71.90% |
+| [ARC-c](https://arxiv.org/abs/1911.01547) | 25-shot | 53.92% | 47.35% |
+| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 0-shot | 43.63% | 42.25% |
+| [GSM8K](https://arxiv.org/abs/2110.14168) | 5-shot | 15.92% | 14.25% |
 
 ## Model Training Details
 
-Coming soon.
+This model was obtained by sparse transfer of the sparse foundational model [Llama-2-7b-pruned70-retrained](https://huggingface.co/neuralmagic/Llama-2-7b-pruned70-retrained) on a blend of the [Open Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), 10% [Open Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca), and 10% [Dolphin](https://huggingface.co/datasets/cognitivecomputations/dolphin) datasets. Training was performed for 6 epochs.
 
 ## Help
 
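
The few-shot settings in the updated table (MMLU 5-shot, HellaSwag 10-shot, ARC-c 25-shot, etc.) match the Open LLM Leaderboard defaults, which are produced with EleutherAI's lm-evaluation-harness. Below is a minimal sketch of reproducing one row under that assumption; the card itself does not state which evaluation stack was used, and the repo id is taken from the table header.

```python
# pip install lm-eval
# Sketch: reproduce the ARC-c row (25-shot) from the table above, assuming
# the lm-evaluation-harness / Open LLM Leaderboard setup (not confirmed by
# the card).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=neuralmagic/Llama-2-7b-pruned70-retrained-instruct",
    tasks=["arc_challenge"],  # ARC-c
    num_fewshot=25,           # few-shot count from the table
)
print(results["results"]["arc_challenge"])
```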
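Since the training note describes sparse transfer from a 70%-pruned foundational model, the sparsity should be visible directly in the checkpoint weights. Here is a quick sketch for checking it with transformers; restricting the count to linear layers is an assumption about where the pruning was applied, not something the card states.

```python
# Sketch: estimate weight sparsity of the released checkpoint by counting
# exact zeros in the linear layers (assumed to be the pruned layers).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "neuralmagic/Llama-2-7b-pruned70-retrained-instruct",
    torch_dtype=torch.float16,
)

total = zeros = 0
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        w = module.weight
        total += w.numel()
        zeros += (w == 0).sum().item()

# For a 70%-pruned model this should print roughly 70%.
print(f"zeroed linear weights: {zeros / total:.1%}")
```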