Why are the released weights in fp32?
The paper mentions using PyTorch AMP (bfloat16) for training, but the model is released in float32.
Is there a specific reason for this? I assume you released the master weights directly (maybe for exact reproducibility, or to match OLMo's checkpointing style)?
Just curious, since most recent models are released as 2-byte tensors (bf16/fp16).
P.S. Thanks for extending the max context length to 36k! I previously asked about the 4k limit in Molmo (https://huggingface.co/allenai/Molmo-72B-0924/discussions/15), so I'm really happy to see this massive upgrade.
Sorry for the delayed response.
As in Molmo, we kept the model weights in full precision while performing computations in half precision.
We uploaded the original checkpoint without modification.
For more details, please refer to Figure 6 in the appendix of the Molmo paper: https://arxiv.org/pdf/2409.17146
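For intuition on why the fp32 master weights and bf16 compute can coexist, here is a small stdlib-only sketch of what casting to bfloat16 does to a value: bf16 is just fp32 with the low 16 mantissa bits rounded away. (This is an illustration of the format, not code from our training stack; the rounding helper is hypothetical and assumes finite, normal fp32 inputs.)

```python
import struct

def to_bf16(x: float) -> float:
    """Round a Python float to bfloat16 precision (round-to-nearest-even).

    Works via the fp32 bit pattern: bf16 keeps the sign, the full 8-bit
    exponent, and only the top 7 mantissa bits. Assumes finite, normal inputs.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Add the rounding bias (plus the tie-breaking bit), then drop the
    # low 16 mantissa bits that bf16 does not store.
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# Powers of two and small integers survive the cast exactly ...
print(to_bf16(1.0))   # exactly 1.0
# ... while a value like 0.1 picks up a relative error of roughly 1e-3,
# which is the scale of difference we observed in the outputs.
print(to_bf16(0.1))
```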
Additionally, we evaluated loading the model weights in bfloat16 and observed only negligible differences in the outputs when using the SDPA attention backend (`attn_implementation="sdpa"`).
Therefore, loading the model in bfloat16 is also safe in practice.
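For reference, a minimal sketch of loading in bfloat16 with `transformers` (the model ID below is a placeholder for this repo's name, and the exact kwargs should be checked against your `transformers` version):

```python
def bf16_load_kwargs():
    """Keyword arguments for from_pretrained to load the fp32 weights as bf16.

    Sketch only: torch_dtype accepts the string form, so torch itself is not
    needed to build the dict.
    """
    return {
        "torch_dtype": "bfloat16",        # cast the fp32 master weights on load
        "attn_implementation": "sdpa",    # the backend we validated above
    }

# Usage (run where transformers and the weights are available):
#   from transformers import AutoModelForCausalLM
#   model = AutoModelForCausalLM.from_pretrained(
#       "allenai/<model-id>",             # placeholder: use this repo's name
#       trust_remote_code=True,
#       **bf16_load_kwargs(),
#   )
```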