nicholasKluge committed 2bd0010 (parent: 1ca4e0e): Update README.md
## Evaluation
| Model (GPT-2)                                                   | Average   | [ARC](https://arxiv.org/abs/1803.05457) | [TruthfulQA](https://arxiv.org/abs/2109.07958) | [ToxiGen](https://arxiv.org/abs/2203.09509) |
|-----------------------------------------------------------------|-----------|-----------------------------------------|------------------------------------------------|---------------------------------------------|
| [Aira-2-124M](https://huggingface.co/nicholasKluge/Aira-2-124M) | **38.07** | **24.57**                               | **41.02**                                      | **48.62**                                   |
| GPT-2                                                           | 35.37     | 21.84                                   | 40.67                                          | 43.62                                       |
| [Aira-2-355M](https://huggingface.co/nicholasKluge/Aira-2-355M) | **39.68** | **27.56**                               | 38.53                                          | **53.19**                                   |
| GPT-2-medium                                                    | 36.43     | 27.05                                   | **40.76**                                      | 41.49                                       |
| [Aira-2-774M](https://huggingface.co/nicholasKluge/Aira-2-774M) | **42.26** | **28.75**                               | **41.33**                                      | **56.70**                                   |
| GPT-2-large                                                     | 35.16     | 25.94                                   | 38.71                                          | 40.85                                       |
| [Aira-2-1B5](https://huggingface.co/nicholasKluge/Aira-2-1B5)   | **42.22** | 28.92                                   | **41.16**                                      | **56.60**                                   |
| GPT-2-xl                                                        | 36.84     | **30.29**                               | 38.54                                          | 41.70                                       |
* Evaluations were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). The notebook used to run these evaluations is available in [this repo](lm_evaluation_harness.ipynb).
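As a rough sketch, an evaluation like the one above could be launched with the harness's command-line interface. The task names (`arc_challenge`, `truthfulqa_mc2`, `toxigen`) and flags below are assumptions based on the current harness release, not the exact command used for this table; see the notebook linked above for the actual evaluation setup.

```shell
# Install the evaluation harness (package name is an assumption; see the repo).
pip install lm-eval

# Score one Aira checkpoint on the three benchmarks reported above.
# Task names are illustrative; `lm_eval --tasks list` shows the available ones.
lm_eval \
  --model hf \
  --model_args pretrained=nicholasKluge/Aira-2-124M \
  --tasks arc_challenge,truthfulqa_mc2,toxigen \
  --device cuda:0 \
  --batch_size 8
```

The same command with `pretrained=` pointed at the baseline GPT-2 checkpoints would reproduce the comparison rows of the table.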