---
license: apache-2.0
datasets:
- allenai/dolma
language:
- en
---

<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Model Card for OLMo 7B

OLMo 7B November 2024 is an updated version of the original [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) model, with a ____-point increase in ____, among other evaluation improvements, resulting from an improved version of the Dolma dataset and staged training.
**This version is intended for direct use with HuggingFace Transformers** from v4.40 on.

**For transformers versions v4.40.0 or newer, we suggest using [OLMo 7B HF](https://huggingface.co/allenai/OLMo-7B-hf) instead.**

OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.

<!-- *A new version of this model with a 24 point improvement on MMLU is available [here](https://huggingface.co/allenai/OLMo-1.7-7B)*. -->
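Loading the model with Transformers can be sketched as follows. This is a minimal, hypothetical example assuming transformers v4.40+ with native OLMo support and the `allenai/OLMo-7B-hf` checkpoint linked above; adjust the model ID and generation parameters to your setup.

```python
# Minimal sketch: running OLMo 7B with HuggingFace Transformers (v4.40+).
# Assumes the natively supported checkpoint; no trust_remote_code is needed for it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-7B-hf"  # assumed checkpoint, per the link above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Greedy decoding of a short continuation.
inputs = tokenizer("Language modeling is ", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that downloading the 7B checkpoint requires substantial disk space and memory; pass `torch_dtype` or `device_map` to `from_pretrained` to control placement if needed.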