chrisc36 committed on
Commit
1afe021
1 Parent(s): 9cf34ca

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -22,7 +22,7 @@ Molmo is a family of open vision-language models developed by the Allen Institut
 Molmo models are trained on PixMo, a dataset of 1 million, highly-curated image-text pairs.
 It has state-of-the-art performance among multimodal models with a similar size while being fully open-source.
 You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
-**Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog).
+**Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog) or the [paper](https://huggingface.co/papers/2409.17146).
 
 Molmo 7B-O is based on [OLMo-7B-1124]() (to be released) and uses [OpenAI CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336) as vision backbone.
 It performs comfortably between GPT-4V and GPT-4o on both academic benchmarks and human evaluation.