---
license: apache-2.0
language:
- en
base_model:
- openai/clip-vit-large-patch14-336
- Qwen/Qwen2-7B
pipeline_tag: image-text-to-text
tags:
- multimodal
- olmo
- molmo
- pixmo
library_name: transformers
---

# Molmo 7B-D Model Card with Endpoint Usage

This is a copy of the original [Molmo 7B-D model card](https://huggingface.co/allenai/Molmo-7B-D-0924) with additional information about using the model via Hugging Face Inference Endpoints.

## Using the Model via Inference Endpoints

**Note: The following implementation is a community-contributed endpoint handler and is not an official implementation. For the official model and its usage, please refer to the [official Molmo 7B-D model page](https://huggingface.co/allenai/Molmo-7B-D-0924).**

You should see an option to `Deploy` via Inference Endpoints at the top of this model card.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60107b385ac3e86b3ea4fc34/kHR0wO_GchczmsmHtjJ1u.png)

Currently, this handler uses `bfloat16` for inference. The original authors note that this can produce slightly different results than running with `float32` weights.
In my initial experiments the outputs did not degrade noticeably, but I may change this implementation in the future.

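For reference, a custom Inference Endpoints handler for this model might look roughly like the sketch below. This is a minimal illustration and **not** the actual handler shipped with this repository: the `EndpointHandler` class and `__call__` method follow the standard custom-handler interface, but the payload keys (`image`, `text_prompt`) and the generation settings are assumptions based on the request format shown further down.

```python
# handler.py -- minimal sketch of a custom Inference Endpoints handler (assumed, not the shipped code)
import base64
from io import BytesIO
from typing import Any, Dict, List

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig


class EndpointHandler:
    def __init__(self, path: str = ""):
        # load processor and model from the repository the endpoint was deployed from
        self.processor = AutoProcessor.from_pretrained(path, trust_remote_code=True)
        self.model = AutoModelForCausalLM.from_pretrained(
            path,
            trust_remote_code=True,
            torch_dtype=torch.bfloat16,
            device_map="auto",
        )

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, str]]:
        inputs = data.get("inputs", {})
        # decode the base64-encoded image and force RGB (Molmo expects 3-channel input)
        image = Image.open(BytesIO(base64.b64decode(inputs["image"]))).convert("RGB")
        prompt = inputs.get("text_prompt", "Describe this image.")

        # build a batch of size 1 and move it to the model's device
        batch = self.processor.process(images=[image], text=prompt)
        batch = {k: v.to(self.model.device).unsqueeze(0) for k, v in batch.items()}
        # cast image tensors to match the bfloat16 weights (mirrors the bfloat16 note in the original card)
        batch["images"] = batch["images"].to(torch.bfloat16)

        output = self.model.generate_from_batch(
            batch,
            GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
            tokenizer=self.processor.tokenizer,
        )
        # keep only the newly generated tokens and decode them
        generated = output[0, batch["input_ids"].size(1):]
        text = self.processor.tokenizer.decode(generated, skip_special_tokens=True)
        return [{"generated_text": text}]
```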

If you've deployed the model using Hugging Face's Inference Endpoints with a community-contributed handler, you can use it with the following code:

```python
import requests
import json
import base64
from IPython.display import Image, display

API_URL = "YOUR_ENDPOINT_URL_HERE"
headers = {
    "Accept" : "application/json",
    "Authorization": "Bearer hf_TOKEN_HERE",
    "Content-Type": "application/json" 
}

# Function to encode image to base64
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

# Path to your local image file
image_path = "hf-logo-with-title.png"

# Display the image (if in a Jupyter notebook)
display(Image(filename=image_path))

# Encode the image
base64_image = encode_image(image_path)

# Prepare the payload
payload = {
    "inputs": {
        "image": base64_image,
        "text_prompt": "Describe this image in detail."
    }
}

# Make the POST request
response = requests.post(API_URL, headers=headers, json=payload)

# Check if the request was successful
if response.status_code == 200:
    # Parse the JSON response
    result = response.json()
    print(result)
else:
    print("Error:", response.status_code)
    print("Response:", response.text)

# Print some debug information
print("\nDebug Information:")
print(f"API URL: {API_URL}")
print(f"Image Path: {image_path}")
print(f"Payload size: {len(json.dumps(payload))} bytes")
print(f"Response status code: {response.status_code}")
```

Example output:

```
[{'generated_text': ' The image features a simple, cartoon-style emoji on the left side, set against a white background. The emoji is a yellow circle with a white outline, depicting a smiling face with black eyes and a red tongue sticking out. The face has two small yellow dots on its cheeks, giving it a cheerful expression. The emoji\'s hands are positioned in front of its chest, as if it is hugging itself. To the right of the emoji, in large, dark blue text, the words "Hugging Face" are displayed. The overall design is minimalistic, with the emoji and text being the only elements in the image.'}]
```

This code snippet demonstrates how to use the model with an image file, encode it to base64, and send it to the inference endpoint for processing. Make sure to replace `"hf_TOKEN_HERE"` with your actual Hugging Face API token.
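Assuming the response follows the schema shown in the example output above (a list with a single dictionary), extracting the plain text is straightforward:

```python
# pull the generated caption out of the endpoint response (schema as in the example output above)
if response.status_code == 200:
    generated_text = response.json()[0]["generated_text"]
    print(generated_text.strip())
```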

Remember that this is a community implementation and may not reflect the most up-to-date or official way to use the model. For the latest official information and usage instructions, always refer to the [official Molmo 7B-D model page](https://huggingface.co/allenai/Molmo-7B-D-0924).

---

# Original Molmo 7B-D Model Card

The content below is a copy of the original model card. For the most up-to-date information, please refer to the [official Molmo 7B-D model page](https://huggingface.co/allenai/Molmo-7B-D-0924).

<img src="molmo_logo.png" alt="Logo for the Molmo Project" style="width: auto; height: 50px;">

# Molmo 7B-D

Molmo is a family of open vision-language models developed by the Allen Institute for AI. Molmo models are trained on PixMo, a dataset of 1 million highly curated image-text pairs. Molmo achieves state-of-the-art performance among multimodal models of a similar size while being fully open-source. You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
**Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog).

Molmo 7B-D is based on [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) and uses [OpenAI CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336) as its vision backbone.
It performs comfortably between GPT-4V and GPT-4o on both academic benchmarks and human evaluation.
It powers the **Molmo demo at** [**molmo.allenai.org**](https://molmo.allenai.org).

This checkpoint is a **preview** of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.

[**Sign up here**](https://docs.google.com/forms/d/e/1FAIpQLSdML1MhNNBDsCHpgWG65Oydg2SjZzVasyqlP08nBrWjZp_c7A/viewform) to be the first to know when artifacts are released.

Quick links:
- ๐Ÿ’ฌ [Demo](https://molmo.allenai.org/)
- ๐Ÿ“‚ [All Models](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19)
- ๐Ÿ“ƒ [Paper](https://molmo.allenai.org/paper.pdf)
- ๐ŸŽฅ [Blog with Videos](https://molmo.allenai.org/blog)


## Quick Start

To run Molmo, first install dependencies:

```bash
pip install einops torchvision
```

Then, follow these steps:

```python
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image
import requests

# load the processor
processor = AutoProcessor.from_pretrained(
    'allenai/Molmo-7B-D-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)

# load the model
model = AutoModelForCausalLM.from_pretrained(
    'allenai/Molmo-7B-D-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)

# process the image and text
inputs = processor.process(
    images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
    text="Describe this image."
)

# move inputs to the correct device and make a batch of size 1
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer
)

# only get generated tokens; decode them to text
generated_tokens = output[0,inputs['input_ids'].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)

# print the generated text
print(generated_text)

# >>>  This image features an adorable black Labrador puppy, captured from a top-down
#      perspective. The puppy is sitting on a wooden deck, which is composed ...
```

To make inference more efficient, run with autocast:

```python
import torch

with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
  output = model.generate_from_batch(
      inputs,
      GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
      tokenizer=processor.tokenizer
  )
```

We did most of our evaluations in this setting (autocast on, but float32 weights).

To even further reduce the memory requirements, the model can be run with bfloat16 weights:

```python
model.to(dtype=torch.bfloat16)
inputs["images"] = inputs["images"].to(torch.bfloat16)
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer
)
```

Note that we have observed that this can change the output of the model compared to running with float32 weights.

## Evaluations 

| Model                       | Average Score on 11 Academic Benchmarks | Human Preference Elo Rating |
|-----------------------------|-----------------------------------------|-----------------------------|
| Molmo 72B                   | 81.2                                    | 1077                        |
| **Molmo 7B-D (this model)** | **77.3**                                | **1056**                    |
| Molmo 7B-O                  | 74.6                                    | 1051                        |
| MolmoE 1B                   | 68.6                                    | 1032                        |
| GPT-4o                      | 78.5                                    | 1079                        |
| GPT-4V                      | 71.1                                    | 1041                        |
| Gemini 1.5 Pro              | 78.3                                    | 1074                        |
| Gemini 1.5 Flash            | 75.1                                    | 1054                        |
| Claude 3.5 Sonnet           | 76.7                                    | 1069                        |
| Claude 3 Opus               | 66.4                                    |  971                        |
| Claude 3 Haiku              | 65.3                                    |  999                        |
| Qwen VL2 72B                | 79.4                                    | 1037                        |
| Qwen VL2 7B                 | 73.7                                    | 1025                        |
| Intern VL2 LLAMA 76B        | 77.1                                    | 1018                        |
| Intern VL2 8B               | 69.4                                    |  953                        |
| Pixtral 12B                 | 69.5                                    | 1016                        |
| Phi3.5-Vision 4B            | 59.7                                    |  982                        |
| PaliGemma 3B                | 50.0                                    |  937                        |
| LLAVA OneVision 72B         | 76.6                                    | 1051                        |
| LLAVA OneVision 7B          | 72.0                                    | 1024                        |
| Cambrian-1 34B              | 66.8                                    |  953                        |
| Cambrian-1 8B               | 63.4                                    |  952                        |
| xGen - MM - Interleave 4B   | 59.5                                    |  979                        |
| LLAVA-1.5 13B               | 43.9                                    |  960                        |
| LLAVA-1.5 7B                | 40.7                                    |  951                        |

*Benchmarks: AI2D test, ChartQA test, VQA v2.0 test, DocQA test, InfographicVQA test, TextVQA val, RealWorldQA, MMMU val, MathVista testmini, CountBenchQA, Flickr Count (we collected this new dataset that is significantly harder than CountBenchQA).*

## FAQs

### I'm getting a broadcast error when processing images!

Your image might not be in RGB format. You can convert it using the following code snippet:

```python
from PIL import Image

image = Image.open(...)

if image.mode != "RGB":
    image = image.convert("RGB")
```

### Molmo doesn't work great with transparent images!

We received reports that Molmo models might struggle with transparent images. 
For the time being, we recommend adding a white or dark background to your images before passing them to the model. The code snippet below shows how to do this using the Python Imaging Library (PIL):

```python
import requests
from PIL import Image, ImageStat
from transformers import AutoProcessor

# Load the image
url = "..."
image = Image.open(requests.get(url, stream=True).raw)

# Convert the image to grayscale to calculate brightness
gray_image = image.convert('L')  # Convert to grayscale

# Calculate the average brightness
stat = ImageStat.Stat(gray_image)
average_brightness = stat.mean[0]  # Get the average value

# Define background color based on brightness (threshold can be adjusted)
bg_color = (0, 0, 0) if average_brightness > 127 else (255, 255, 255)

# Create a new image with the same size as the original, filled with the background color
new_image = Image.new('RGB', image.size, bg_color)

# Paste the original image on top of the background (use image as a mask if needed)
new_image.paste(image, (0, 0), image if image.mode == 'RGBA' else None)

# Now you can pass the new_image to Molmo
processor = AutoProcessor.from_pretrained(
    'allenai/Molmo-7B-D-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)
```

## License and Use

This model is licensed under Apache 2.0. It is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).