Model Card for OLMo 7B
OLMo 7B November 2024 is an updated version of the original OLMo 7B model, with a ____ point increase in ____, among other evaluation improvements, resulting from an improved version of the Dolma dataset and staged training. This version is for direct use with HuggingFace Transformers from v4.40 onward.
For transformers versions v4.40.0 or newer, we suggest using OLMo 7B HF instead.
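As a minimal sketch of loading the model with HuggingFace Transformers, assuming `transformers` v4.40+ is installed; the repo id below is illustrative, not confirmed by this card — check the model's hub page for the exact id:

```python
# Hedged sketch: the exact Hugging Face repo id for this release is an
# assumption ("allenai/OLMo-7B-0724-hf" is illustrative only).
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate(prompt: str, repo_id: str = "allenai/OLMo-7B-0724-hf") -> str:
    """Load the model and tokenizer, then generate a continuation of `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    # Sample up to 64 new tokens with nucleus sampling.
    output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Note that loading a 7B-parameter checkpoint requires substantial memory; passing `torch_dtype` or `device_map` arguments to `from_pretrained` can reduce the footprint.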
OLMo is a series of Open Language Models designed to enable the science of language models. The OLMo models are trained on the Dolma dataset. We release all code, checkpoints, logs (coming soon), and details involved in training these models.