<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>
# Model Card for OLMo 7B
OLMo 7B November 2024 is an updated version of the original [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) model, with a ____ point increase in ____, among other evaluation improvements, resulting from an improved version of the Dolma dataset and staged training.
**This version is for direct use with HuggingFace Transformers** from v4.40 on.
**If you are using the original [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) with transformers v4.40.0 or newer, we suggest [OLMo 7B HF](https://huggingface.co/allenai/OLMo-7B-hf) instead.**
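A minimal sketch of direct use with Transformers, assuming transformers v4.40+ is installed; the prompt and generation settings below are illustrative, not prescribed by this card:

```python
# Load OLMo 2 7B directly with Hugging Face Transformers (v4.40+ assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")

# Tokenize an example prompt and sample a short continuation.
inputs = tokenizer("Language modeling is ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```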
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.
<!-- *A new version of this model with a 24 point improvement on MMLU is available [here](https://huggingface.co/allenai/OLMo-1.7-7B)*. -->