---
license: apache-2.0
datasets:
- fblgit/simple-math
- jondurbin/bagel-v0.3
base_model: abacusai/Smaug-34B-v0.1
tags:
- UNA
- simple-math
- juanako
---

# UNA-SimpleSmaug-34b-v1beta

As of 04-February-2024, the #1 scoring 34B model, outperforming its original base model Smaug-34B-v0.1 with an average of `77.41` 😎

UNA was applied only to the attention layers, not to the MLPs.
* It is based on Smaug-34B-v0.1
* Fine-tuned on the SimpleMath dataset
* Trained with Axolotl
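
For reference, here is a minimal loading and generation sketch using the standard `transformers` API. The repository id and prompt format below are assumptions based on this card's name, so check the actual model repository before use.

```python
# Minimal usage sketch with Hugging Face transformers.
# The repo id is assumed from this card's name; verify the prompt/chat format
# expected by the actual model before relying on the output.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/UNA-SimpleSmaug-34b-v1beta"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 34B weights; consider quantization if VRAM is limited
    device_map="auto",
)

prompt = "What is 12 * 7 + 5?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```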

## Experiment
The goal of this experiment is to understand the impact of SimpleMath applied at the attention layers during an SFT session, and how it affects the neural network overall.

Results: mathematical and reasoning capabilities improve without degrading the model; the capabilities from previous training sessions are preserved.
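
To make the setup concrete, the sketch below shows one way to restrict SFT updates to the attention blocks of a LLaMA-style model by freezing every other parameter. This is only an illustration of the idea, not the actual UNA/Axolotl configuration used for this model.

```python
# Illustrative sketch only: attention-only fine-tuning by freezing non-attention
# parameters. This is NOT the UNA implementation, just a conceptual example.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("abacusai/Smaug-34B-v0.1")

for name, param in model.named_parameters():
    # In LLaMA-style models the attention projections (q/k/v/o_proj)
    # live under modules named "self_attn".
    param.requires_grad = "self_attn" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable / total:.1%} of {total:,}")
```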

## Evals

Full results are pending, but so far:
```
|    Task     |Version| Metric |Value            |
|-------------|------:|--------|----------------:|
|arc_challenge|     HF|acc_norm| 0.7457337883959 |
|gsm8k        |     HF|acc     | 0.7247915087187 |
|mmlu         |     HF|acc     | 0.7649553475572 |
|mmlu         |     HF|acc_norm| 0.7681713551647 |
|hellaswag    |     HF|acc_norm| 0.8673571001792 | 
|truthfulqa   |     HF|mc2     | 0.7016557407771 |
|winogrande   |     HF|acc     | 0.8382004735595 |
|------------------------------------------------|
```
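
As a rough cross-check, averaging the listed metrics in the Open LLM Leaderboard style (ARC acc_norm, HellaSwag acc_norm, MMLU acc, TruthfulQA mc2, Winogrande acc, GSM8K acc) can be done as below. These are local harness runs, so the result only approximates the leaderboard's `77.41`.

```python
# Rough sketch: leaderboard-style average over the metrics listed above.
# Local runs differ slightly from the official harness, so this only
# approximates the reported 77.41.
scores = {
    "arc_challenge": 0.7457337883959,  # acc_norm
    "hellaswag":     0.8673571001792,  # acc_norm
    "mmlu":          0.7649553475572,  # acc
    "truthfulqa":    0.7016557407771,  # mc2
    "winogrande":    0.8382004735595,  # acc
    "gsm8k":         0.7247915087187,  # acc
}
average = sum(scores.values()) / len(scores)
print(f"Average: {average * 100:.2f}")  # ~77.38 with these values
```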

Improvements on GSM8K, MMLU, ARC, and Winogrande.

## Citations
Thanks to abacusai for making Smaug-34B, to the Bagel dataset, and for all the magic behind the base model.

If you use this model, please provide a citation, even for merges or other derivatives.
Also try our ModelSimilarities detector tool: https://github.com/fblgit/model-similarity