amanrangapur committed
Commit 01777cb
1 Parent(s): c68226a

Update README.md

Files changed (1): README.md +225 −5

README.md CHANGED
@@ -1,16 +1,236 @@
  <img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
 
- # Model Card for OLMo 7B
 
- OLMo 7B November 2024 is an updated version of the original [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) model rocking a ____ point increase in ____, among other evaluations improvements, from an improved version of the Dolma dataset and staged training.
- **This version is for direct use with HuggingFace Transformers** from v4.40 on.
 
- **For transformers versions v4.40.0 or newer, we suggest using [OLMo 7B HF](https://huggingface.co/allenai/OLMo-7B-hf) instead.**
 
  OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
  The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
  We release all code, checkpoints, logs (coming soon), and details involved in training these models.
 
- <!-- *A new version of this model with a 24 point improvement on MMLU is available [here](https://huggingface.co/allenai/OLMo-1.7-7B)*. -->

+ ---
+ license: apache-2.0
+ datasets:
+ - allenai/dolma
+ language:
+ - en
+ ---
  <img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
+ # Model Card for OLMo2 7B
+
+ OLMo2 7B November 2024 is an updated version of the original [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) model, featuring a ____ point increase in ____, among other evaluation improvements, thanks to an improved version of the Dolma dataset and staged training.
+
  OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
  The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
  We release all code, checkpoints, logs (coming soon), and details involved in training these models.
 
+
+ The core models released in this batch are the following:
+
+ | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
+ |------|-----------------|--------|-------------|-----------------|----------------|
+ | [OLMo2 7B November 2024](https://huggingface.co/allenai/OLMo2-7B-1124) | 4 Trillion | 32 | 4096 | 32 | 4096 |
+ | OLMo2 13B November 2024 | 5 Trillion | 42 | 5120 | 16 | 4096 |
+
+ We have released checkpoints for these models for every 1000 training steps.
+ The naming convention is `stepXXX-tokensYYYB`.
+
+ To load a specific model revision with HuggingFace, simply add the argument `revision`:
+ ```python
+ from transformers import AutoModelForCausalLM
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo2-7B-1124", revision="step1000-tokens5B")
+ ```
+
+ Or, you can access all the revisions for the models via the following code snippet:
+ ```python
+ from huggingface_hub import list_repo_refs
+ out = list_repo_refs("allenai/OLMo2-7B-1124")
+ branches = [b.name for b in out.branches]
+ ```
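+
+ For instance, here is a minimal follow-up sketch (assuming the `stepXXX-tokensYYYB` branch naming above) that picks one of the listed intermediate checkpoints and loads it:
+ ```python
+ from huggingface_hub import list_repo_refs
+ from transformers import AutoModelForCausalLM
+
+ # List all revision branches for the repo ("main" plus the intermediate checkpoints).
+ out = list_repo_refs("allenai/OLMo2-7B-1124")
+ checkpoints = sorted(b.name for b in out.branches if b.name.startswith("step"))
+
+ # Load one intermediate checkpoint by passing its branch name as `revision`.
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo2-7B-1124", revision=checkpoints[0])
+ ```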
+
+ ### Model Description
+
+ - **Developed by:** Allen Institute for AI (Ai2)
+ - **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
+ - **Model type:** a Transformer-style autoregressive language model.
+ - **Language(s) (NLP):** English
+ - **License:** The code and model are released under Apache 2.0.
+ - **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
+ - **Date cutoff:** Oct. 2023, with most data from Feb./March 2023, based on the Dolma dataset version.
+
+
+ ### Model Sources
+
+ - **Project Page:** https://allenai.org/olmo
+ - **Repositories:**
+   - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
+   - Evaluation code: https://github.com/allenai/OLMo-Eval
+   - Further fine-tuning code: https://github.com/allenai/open-instruct
+ - **Paper:** [Link](https://arxiv.org/abs/2402.00838)
+ - **Technical blog post:** https://blog.allenai.org/olmo-1-7-7b-a-24-point-improvement-on-mmlu-92b43f7d269d
+ - **W&B Logs:** [pretraining](https://wandb.ai/ai2-llm/OLMo-7B/groups/OLMo-1.7-7B), [annealing](https://wandb.ai/ai2-llm/OLMo-7B/groups/OLMo-1.7-7B-anneal)
+
+
+ ## Uses
+
+ ### Inference
+
+ Proceed as usual with HuggingFace:
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo2-7B-1124")
+ tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo2-7B-1124")
+ message = ["Language modeling is "]
+ inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
+ # optional: move the inputs and model to CUDA
+ # inputs = {k: v.to('cuda') for k,v in inputs.items()}
+ # olmo = olmo.to('cuda')
+ response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
+ print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
+ >> 'Language modeling is the first step to build natural language generation...'
+ ```
+
+ Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo2-7B-1124", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
+ The quantized model is more sensitive to data types and CUDA placement, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
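+
+ As a rough sketch (illustrative only; the exact flags depend on your `transformers` and `bitsandbytes` versions), 8-bit loading and generation could look like this:
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the model in 8-bit; requires the `bitsandbytes` package and a CUDA device.
+ olmo = AutoModelForCausalLM.from_pretrained(
+     "allenai/OLMo2-7B-1124", torch_dtype=torch.float16, load_in_8bit=True
+ )
+ tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo2-7B-1124")
+
+ inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
+ # Pass only the token ids, moved to CUDA, to sidestep dtype/device issues with the quantized model.
+ response = olmo.generate(inputs.input_ids.to("cuda"), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
+ print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
+ ```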
+
+ ### Fine-tuning
+ Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or from many of the intermediate checkpoints. Two recipes for tuning are available.
+ 1. Fine-tune with the OLMo repository:
+ ```bash
+ torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
+     --data.paths=[{path_to_data}/input_ids.npy] \
+     --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
+     --load_path={path_to_checkpoint} \
+     --reset_trainer_state
+ ```
+ For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo?tab=readme-ov-file#fine-tuning); a sketch of the data these flags expect follows this list.
+
+ 2. Further fine-tuning support is under development in Ai2's Open Instruct repository. Details are [here](https://github.com/allenai/open-instruct).
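+
+ As a purely illustrative sketch of what the two arrays above conceptually contain (fixed-length token IDs plus a boolean loss mask), and not necessarily the exact on-disk format, which is defined by the OLMo repo's preprocessing scripts:
+ ```python
+ import numpy as np
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo2-7B-1124")
+ seq_len = 4096
+ examples = ["Language modeling is a core task in NLP."]  # your training texts
+
+ ids = np.zeros((len(examples), seq_len), dtype=np.int32)  # token IDs, padded to seq_len
+ mask = np.zeros((len(examples), seq_len), dtype=bool)     # True where tokens contribute to the loss
+ for i, text in enumerate(examples):
+     toks = tokenizer(text, truncation=True, max_length=seq_len)["input_ids"]
+     ids[i, : len(toks)] = toks
+     mask[i, : len(toks)] = True
+
+ np.save("input_ids.npy", ids)
+ np.save("label_mask.npy", mask)
+ ```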
+
+ <!-- TODO -->
+ ## Evaluation
+
+ Core model results for OLMo 7B models are found below.
+
+ | Task | Llama-7b | Llama2-7b | Falcon-7b | Mpt-7b | OLMo-7B | Llama2-13b | OLMo 7B April 2024 | **OLMo 7B July 2024** |
+ |-------------------|----------|-----------|-----------|--------|---------|------------|--------------------|-----------------------|
+ | arc_c | 44.5 | 48.5 | 47.5 | 46.5 | 48.5 | 52.8 | 42.5 | 43.8 |
+ | arc_e | 67.9 | 69.5 | 70.4 | 70.5 | 65.4 | 73.7 | 67.2 | 68.8 |
+ | boolq | 75.4 | 80.2 | 74.6 | 74.2 | 73.4 | 82.2 | 83.7 | 78.9 |
+ | copa | 91.0 | 86.0 | 86.0 | 85.0 | 90.0 | 90.0 | 86.0 | 84.0 |
+ | hellaswag | 76.2 | 76.8 | 75.9 | 77.6 | 76.4 | 78.6 | 75.5 | 77.4 |
+ | openbookqa | 51.2 | 48.4 | 53.0 | 48.6 | 50.4 | 51.8 | 50.0 | 48.2 |
+ | piqa | 77.2 | 76.7 | 78.5 | 77.3 | 78.4 | 79.0 | 77.5 | 78.2 |
+ | sciq | 93.9 | 94.5 | 93.9 | 93.7 | 93.8 | 95.5 | 96.7 | 97.0 |
+ | winogrande | 70.5 | 69.4 | 68.9 | 69.9 | 67.9 | 73.5 | 69.8 | 68.8 |
+ | truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33.0 | 36.0 | 36.8 | 35.8 | 36.5 |
+ | MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 | 55.5 | 52.0 | 53.4 |
+ | GSM8k | 10.0 | 12.0 | 4.0 | 4.5 | 8.5 | 25.0 | 29.0 | 35.0 |
+ | Full average | 60.3 | 62.1 | 59.2 | 59.3 | 59.8 | 66.2 | 63.8 | 64.2 |
+
+ And for 1B models:
+
+ | task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | [OLMo 1.0 1B](https://huggingface.co/allenai/OLMo-1B-hf) | **OLMo 1B July 2024** |
+ |---------------|--------|-----------------|-----------|----------------|-------------|--------|
+ | arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 | 36.5 |
+ | arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 | 55.3 |
+ | boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 | 67.5 |
+ | copa | 50 | 84 | 72 | 78 | 79 | 83.0 |
+ | hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 | 66.9 |
+ | openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 | 46.4 |
+ | piqa | 50 | 74 | 69.1 | 71.1 | 73.7 | 74.9 |
+ | sciq | 25 | 94.7 | 86 | 90.5 | 88.1 | 93.4 |
+ | winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 | 61.4 |
+ | Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 | 65.0 |
+
+ \*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.
+
+ ## Model Details
+
+ ### Data
+ For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation.
+ **This model uses the new 1.7 version with more data sources, better deduplication, and quality filtering.**
+ During the annealing phase we use a higher quality subset of Dolma with a linearly decaying learning rate to 0.
+
+ ### Staged training / annealing
+
+ In contrast to OLMo 1.0, we trained OLMo 7B July with a two-stage curriculum:
+ * In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. We cut off this stage after 2.7T tokens, when the learning rate is still somewhat high.
+ * At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. See exact token counts and relative proportions of this second stage mix below.
+
+ Both stages contribute equally to the final performance of the OLMo model. After the first stage, OLMo 1.7 already outperforms OLMo 1.0. The second stage consistently adds 2 to 3 points of performance on top.
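+
+ To make the schedule concrete, here is a small illustrative sketch (not the actual training code) of the two-stage learning-rate curve described above:
+ ```python
+ import math
+
+ PEAK_LR, FINAL_LR = 3e-4, 3e-5          # stage-1 cosine goes from the peak toward 3e-5 at 3T tokens
+ WARMUP_STEPS = 2500
+ COSINE_END_TOKENS = 3_000_000_000_000   # cosine is defined out to 3T tokens...
+ STAGE1_END_TOKENS = 2_700_000_000_000   # ...but stage 1 is cut off at 2.7T tokens
+ STAGE2_TOKENS = 50_000_000_000          # stage 2 decays linearly to 0 over 50B tokens
+
+ def lr_at(tokens: float, step: int) -> float:
+     """Illustrative two-stage schedule: warmup + cosine, then a linear anneal to 0."""
+     if step < WARMUP_STEPS:                               # linear warmup over 2500 steps
+         return PEAK_LR * step / WARMUP_STEPS
+     cosine = lambda t: FINAL_LR + 0.5 * (PEAK_LR - FINAL_LR) * (1 + math.cos(math.pi * t / COSINE_END_TOKENS))
+     if tokens <= STAGE1_END_TOKENS:                       # stage 1: cosine decay
+         return cosine(tokens)
+     remaining = max(0.0, 1 - (tokens - STAGE1_END_TOKENS) / STAGE2_TOKENS)
+     return cosine(STAGE1_END_TOKENS) * remaining          # stage 2: linear decay to 0
+ ```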
+
+ ### Architecture
+
+ OLMo 7B architecture with peer models for comparison.
+
+ | | **OLMo 7B July 2024** | [OLMo 1.0 7B](https://huggingface.co/allenai/OLMo-7B-hf) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B |
+ |------------------------|-------------------|-------------------|---------------------|--------------------|--------------------|------------------|
+ | d_model | 4096 | 4096 | 4096 | 4096 | 4544 | 4096 |
+ | num heads | 32 | 32 | 32 | 32 | 71 | 16 |
+ | num layers | 32 | 32 | 32 | 32 | 32 | 32 |
+ | MLP ratio | ~8/3 | ~8/3 | ~8/3 | ~8/3 | 4 | 4 |
+ | LayerNorm type | non-parametric LN | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN |
+ | pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE | RoPE |
+ | attention variant | full | full | GQA | full | MQA | MQA |
+ | biases | none | none | none | in LN only | in LN only | none |
+ | block type | sequential | sequential | sequential | sequential | parallel | parallel |
+ | activation | SwiGLU | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU |
+ | sequence length | 4096 | 2048 | 4096 | 2048 | 2048 | 2048 |
+ | batch size (instances) | 1024 | 2160 | 1024 | 2048 | 2304 | 512 |
+ | batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~4M | ~1M |
+ | weight tying | no | no | no | no | no | yes |
+
+ ### Hyperparameters
+
+ AdamW optimizer parameters are shown below.
+
+ | Size | Peak LR | Betas | Epsilon | Weight Decay |
+ |------|---------|-------------|---------|--------------|
+ | 7B | 3.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
+ | 13B | 9.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
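+
+ As an illustrative sketch only (the actual trainer lives in the OLMo repository), the 7B settings above map onto a PyTorch optimizer roughly as follows:
+ ```python
+ import torch
+
+ model = torch.nn.Linear(8, 8)  # stand-in module; substitute the real model
+ optimizer = torch.optim.AdamW(
+     model.parameters(),
+     lr=3.0e-4,            # peak LR for the 7B; 9.0e-4 for the 13B
+     betas=(0.9, 0.95),
+     eps=1.0e-5,
+     weight_decay=0.1,
+ )
+ ```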
+
+ Optimizer settings comparison with peer models.
+
+ | | **OLMo2 7B** | [OLMo 1.0 7B](https://huggingface.co/allenai/OLMo-7B-hf) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) |
+ |-----------------------|------------------|------------------|---------------------|--------------------|--------------------|
+ | warmup steps | 2000 | 5000 | 2000 | 2000 | 1000 |
+ | peak LR | 4.0E-04 | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 |
+ | minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 |
+ | weight decay | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
+ | beta1 | 0.9 | 0.9 | 0.9 | 0.9 | 0.99 |
+ | beta2 | 0.95 | 0.95 | 0.95 | 0.95 | 0.999 |
+ | epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 |
+ | LR schedule | cosine | linear | cosine | cosine | cosine |
+ | gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 | global 1.0 |
+ | gradient reduce dtype | FP32 | FP32 | FP32 | FP32 | BF16 |
+ | optimizer state dtype | FP32 | FP32 | most likely FP32 | FP32 | FP32 |
+
+
+ ## Bias, Risks, and Limitations
+
+ Like any base language model or fine-tuned model without safety filtering, these models can readily be prompted to generate harmful or otherwise sensitive content.
+ Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
+
+ Additionally, many statements from OLMo, as from any LLM, are often inaccurate, so outputs should be verified.
+
+ ## Citation
+
+ **BibTeX:**
+
+ ```
+ @article{Groeneveld2023OLMo,
+   title={OLMo: Accelerating the Science of Language Models},
+   author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
+   journal={Preprint},
+   year={2024}
+ }
+ ```
+
+ **APA:**
+
+ Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
+
+ ## Model Card Contact
+
+ For errors in this model card, contact Nathan, `{nathanl} at allenai dot org`.