---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: item_ID
    dtype: string
  - name: query
    dtype: string
  - name: title
    dtype: string
  - name: position
    dtype: int64
  splits:
  - name: data
    num_bytes: 22251545141.2
    num_examples: 982700
  download_size: 21955883446
  dataset_size: 22251545141.2
configs:
- config_name: default
  data_files:
  - split: data
    path: data/data-*
---
<div style="display: flex; align-items: center; gap: 10px;">
<a href="https://www.marqo.ai/blog/introducing-marqos-ecommerce-embedding-models">
<img src="https://img.shields.io/badge/Model_Release-Blog-blue?logo=font-awesome&logoColor=white&style=flat&logo=pencil-alt" alt="Blog">
</a>
<a href="https://github.com/marqo-ai/marqo-ecommerce-embeddings">
<img src="https://img.shields.io/badge/GitHub-Repo-black?logo=github" alt="GitHub Repo">
</a>
<a href="https://www.marqo.ai/blog/how-to-build-an-ecommerce-image-search-application">
<img src="https://img.shields.io/badge/Ecommerce Search-Blog-red?logo=font-awesome&logoColor=white&style=flat&logo=pencil-alt" alt="Blog">
</a>
<a href="https://join.slack.com/t/marqo-community/shared_invite/zt-2b4nsvbd2-TDf8agPszzWH5hYKBMIgDA">
<img src="https://img.shields.io/badge/Slack-Join_Marqo_Community-purple?logo=Slack" alt=Slack Community">
</a>
</div>
# Marqo Ecommerce Embedding Models
**In this work, we introduce the GoogleShopping-1m dataset for evaluation.** This dataset comes with the release of our state-of-the-art embedding models for ecommerce products: [Marqo-Ecommerce-B](https://huggingface.co/Marqo/marqo-ecommerce-embeddings-B) and [Marqo-Ecommerce-L](https://huggingface.co/Marqo/marqo-ecommerce-embeddings-L).
**Released Content**:
1) Marqo-Ecommerce-B and Marqo-Ecommerce-L embedding models
2) GoogleShopping-1m and AmazonProducts-3m for evaluation
3) Evaluation Code
The benchmarking results show that the Marqo-Ecommerce models consistently outperformed *all other evaluated models* across metrics. Specifically, `marqo-ecommerce-L` achieved an average improvement of **17.6% in MRR** and **20.5% in nDCG@10** over the current best open-source model, `ViT-SO400M-14-SigLIP`, across all three tasks in the `marqo-ecommerce-hard` dataset. When compared with the best private model, `Amazon-Titan-Multimodal`, we saw an average improvement of **38.9% in MRR** and **45.1% in nDCG@10** across all three tasks, and **35.9% in Recall** across the Text-to-Image tasks in the `marqo-ecommerce-hard` dataset.
<img src="https://raw.githubusercontent.com/marqo-ai/marqo-ecommerce-embeddings/main/performance.png" alt="multi split visual" width="700"/>
More benchmarking results can be found below.
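The headline relative improvements can be cross-checked against the marqo-ecommerce-hard tables further down. The short sketch below (ours, not part of the released evaluation code) recomputes the average MRR and nDCG@10 gains of `marqo-ecommerce-L` over `ViT-SO400M-14-SigLIP` from the table values:

```python
# Sanity check: average relative improvement of Marqo-Ecommerce-L over
# ViT-SO400M-14-SigLIP on the three marqo-ecommerce-hard tasks.
# Values are copied from the result tables in this README.
marqo_l_mrr  = [0.683, 0.822, 0.663]   # GS-Text2Image, GS-Category2Image, AP-Text2Image
siglip_mrr   = [0.574, 0.707, 0.564]
marqo_l_ndcg = [0.726, 0.666, 0.703]
siglip_ndcg  = [0.613, 0.529, 0.599]

def avg_rel_improvement(ours, baseline):
    """Mean of per-task relative improvements, in percent."""
    return 100 * sum(o / b - 1 for o, b in zip(ours, baseline)) / len(ours)

print(f"MRR:     {avg_rel_improvement(marqo_l_mrr, siglip_mrr):.1f}%")    # ~17.6%
print(f"nDCG@10: {avg_rel_improvement(marqo_l_ndcg, siglip_ndcg):.1f}%")  # ~20.6% (matches ~20.5% up to rounding)
```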
## Models
| **Embedding Model** | **#Params (m)** | **Dimension** | **HuggingFace** | **Download .pt** |
|---------------------| --- |---------------|------------------------------------|-------------------------------------------------------------------------------------------------------------|
| Marqo-Ecommerce-B | 203 | 768 | [Marqo/marqo-ecommerce-embeddings-B](https://huggingface.co/Marqo/marqo-ecommerce-embeddings-B) | [link](https://marqo-gcl-public.s3.us-west-2.amazonaws.com/marqo-general-ecomm/marqo-ecomm-embeddings-b.pt) |
| Marqo-Ecommerce-L | 652 | 1024 | [Marqo/marqo-ecommerce-embeddings-L](https://huggingface.co/Marqo/marqo-ecommerce-embeddings-L) | [link](https://marqo-gcl-public.s3.us-west-2.amazonaws.com/marqo-general-ecomm/marqo-ecomm-embeddings-l.pt) |
### Load from HuggingFace with transformers
To load the models in Transformers, see below. The models are hosted on [Hugging Face](https://huggingface.co/collections/Marqo/marqo-ecommerce-embeddings-66f611b9bb9d035a8d164fbb) and loaded using [Transformers](https://github.com/huggingface/transformers).
```python
from transformers import AutoModel, AutoProcessor
import torch
from PIL import Image
import requests

model_name = 'Marqo/marqo-ecommerce-embeddings-L'
# model_name = 'Marqo/marqo-ecommerce-embeddings-B'

model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)

# Load an example product image and define candidate texts
img = Image.open(requests.get('https://raw.githubusercontent.com/marqo-ai/marqo-ecommerce-embeddings/refs/heads/main/images/dining-chairs.png', stream=True).raw).convert("RGB")
image = [img]
text = ["dining chairs", "a laptop", "toothbrushes"]
processed = processor(text=text, images=image, padding='max_length', return_tensors="pt")
processor.image_processor.do_rescale = False

with torch.no_grad():
    image_features = model.get_image_features(processed['pixel_values'], normalize=True)
    text_features = model.get_text_features(processed['input_ids'], normalize=True)

    text_probs = (100 * image_features @ text_features.T).softmax(dim=-1)
    print(text_probs)
    # [1.0000e+00, 8.3131e-12, 5.2173e-12]
```
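For product search, the same embeddings are typically used for retrieval rather than classification: embed the catalog images once, embed the query text, and rank by cosine similarity. The sketch below is a minimal illustration reusing the same `get_image_features` / `get_text_features` calls as above; the catalog list and its single placeholder URL are ours, not part of the model release.

```python
# Minimal text-to-image retrieval sketch over a tiny "catalog" of product images.
# Replace catalog_urls with your own product images.
import torch
import requests
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_name = 'Marqo/marqo-ecommerce-embeddings-B'
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)

catalog_urls = [
    'https://raw.githubusercontent.com/marqo-ai/marqo-ecommerce-embeddings/refs/heads/main/images/dining-chairs.png',
    # add your own product image URLs here
]
images = [Image.open(requests.get(u, stream=True).raw).convert("RGB") for u in catalog_urls]
query = ["dining chairs"]

processed = processor(text=query, images=images, padding='max_length', return_tensors="pt")

with torch.no_grad():
    doc_embs = model.get_image_features(processed['pixel_values'], normalize=True)   # [N, D]
    query_emb = model.get_text_features(processed['input_ids'], normalize=True)      # [1, D]

# Embeddings are L2-normalized, so the dot product equals cosine similarity.
scores = (query_emb @ doc_embs.T).squeeze(0)
ranking = torch.argsort(scores, descending=True)
for rank, idx in enumerate(ranking.tolist(), start=1):
    print(rank, catalog_urls[idx], float(scores[idx]))
```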
### Load from HuggingFace with OpenCLIP
To load the models in OpenCLIP, see below. The models are hosted on [Hugging Face](https://huggingface.co/collections/Marqo/marqo-ecommerce-embeddings-66f611b9bb9d035a8d164fbb) and loaded using [OpenCLIP](https://github.com/mlfoundations/open_clip). You can also find this code inside `run_models.py`.
```
pip install open_clip_torch
```
```python
from PIL import Image
import open_clip
import requests
import torch

# Specify model from Hugging Face Hub
model_name = 'hf-hub:Marqo/marqo-ecommerce-embeddings-L'
# model_name = 'hf-hub:Marqo/marqo-ecommerce-embeddings-B'

model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms(model_name)
tokenizer = open_clip.get_tokenizer(model_name)

# Preprocess the image and tokenize text inputs
# Load an example image from a URL
img = Image.open(requests.get('https://raw.githubusercontent.com/marqo-ai/marqo-ecommerce-embeddings/refs/heads/main/images/dining-chairs.png', stream=True).raw)
image = preprocess_val(img).unsqueeze(0)
text = tokenizer(["dining chairs", "a laptop", "toothbrushes"])

# Perform inference
with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)

    # Calculate similarity probabilities
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

# Display the label probabilities
print("Label probs:", text_probs)
# [1.0000e+00, 8.3131e-12, 5.2173e-12]
```
### Evaluation
[Generalised Contrastive Learning](https://github.com/marqo-ai/GCL) (GCL) is used for the evaluation. The following code can also be found in `scripts`.
```
git clone https://github.com/marqo-ai/GCL
```
Install the packages required by GCL.
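For example (assuming the GCL repository ships a standard `requirements.txt`; see its README for the authoritative install steps):
```
cd ./GCL
pip install -r requirements.txt
```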
**1. GoogleShopping-Text2Image Retrieval.**
```
cd ./GCL
MODEL=hf-hub:Marqo/marqo-ecommerce-embeddings-B
outdir=/MarqoModels/GE/marqo-ecommerce-B/gs-title2image
hfdataset=Marqo/google-shopping-general-eval
python evals/eval_hf_datasets_v1.py \
--model_name $MODEL \
--hf-dataset $hfdataset \
--output-dir $outdir \
--batch-size 1024 \
--num_workers 8 \
--left-key "['title']" \
--right-key "['image']" \
--img-or-txt "[['txt'], ['img']]" \
--left-weight "[1]" \
--right-weight "[1]" \
--run-queries-cpu \
--top-q 4000 \
--doc-id-key item_ID \
--context-length "[[64], [0]]"
```
**2. GoogleShopping-Category2Image Retrieval.**
```
cd ./GCL
MODEL=hf-hub:Marqo/marqo-ecommerce-embeddings-B
outdir=/MarqoModels/GE/marqo-ecommerce-B/gs-cat2image
hfdataset=Marqo/google-shopping-general-eval
python evals/eval_hf_datasets_v1.py \
--model_name $MODEL \
--hf-dataset $hfdataset \
--output-dir $outdir \
--batch-size 1024 \
--num_workers 8 \
--left-key "['query']" \
--right-key "['image']" \
--img-or-txt "[['txt'], ['img']]" \
--left-weight "[1]" \
--right-weight "[1]" \
--run-queries-cpu \
--top-q 4000 \
--doc-id-key item_ID \
--context-length "[[64], [0]]"
```
**3. AmazonProducts-Text2Image Retrieval.**
```
cd ./GCL
MODEL=hf-hub:Marqo/marqo-ecommerce-embeddings-B
outdir=/MarqoModels/GE/marqo-ecommerce-B/ap-title2image
hfdataset=Marqo/amazon-products-eval
python evals/eval_hf_datasets_v1.py \
--model_name $MODEL \
--hf-dataset $hfdataset \
--output-dir $outdir \
--batch-size 1024 \
--num_workers 8 \
--left-key "['title']" \
--right-key "['image']" \
--img-or-txt "[['txt'], ['img']]" \
--left-weight "[1]" \
--right-weight "[1]" \
--run-queries-cpu \
--top-q 4000 \
--doc-id-key item_ID \
--context-length "[[64], [0]]"
```
## Detailed Performance
Our benchmarking process was divided into two distinct regimes, each using a different dataset of ecommerce product listings: marqo-ecommerce-hard and marqo-ecommerce-easy. Both datasets contain product images and text and differ only in size. The "easy" dataset is approximately 10-30 times smaller (200k vs 4M products) and is designed to accommodate rate-limited models, specifically Cohere-Embeddings-v3 and GCP-Vertex (with limits of 0.66 rps and 2 rps respectively). The "hard" dataset represents the true challenge, since it contains four million ecommerce product listings and is more representative of real-world ecommerce search scenarios.
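As a rough back-of-the-envelope illustration (our own estimate, not from the benchmark) of why those rate limits make the full 4M corpus impractical:

```python
# Approximate wall-clock time to embed each corpus at the stated rate limits,
# assuming one request per product image.
for n_products in (200_000, 4_000_000):
    for name, rps in (("Cohere-Embeddings-v3", 0.66), ("GCP-Vertex", 2.0)):
        hours = n_products / rps / 3600
        print(f"{name}: {n_products:,} products -> ~{hours:,.0f} hours")
# 200k at 0.66 rps is roughly 84 hours; 4M would be roughly 1,700 hours (~70 days).
```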
Within both these scenarios, the models were benchmarked against three different tasks:
* Google Shopping Text-to-Image
* Google Shopping Category-to-Image
* Amazon Products Text-to-Image
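The tables below report mAP, Recall@10 (Precision@10 for category queries), MRR, and nDCG@10. For reference, here is a minimal toy sketch of how MRR and nDCG@10 are computed from ranked results with binary relevance; the official numbers come from the GCL evaluation code, not this snippet.

```python
import math

def mrr(ranked_relevance):
    """Mean Reciprocal Rank: 1 / rank of the first relevant document, averaged over queries."""
    total = 0.0
    for rels in ranked_relevance:
        rr = 0.0
        for rank, rel in enumerate(rels, start=1):
            if rel:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_relevance)

def ndcg_at_10(ranked_relevance):
    """nDCG@10 with binary relevance: DCG of the top-10 divided by DCG of the ideal ranking."""
    total = 0.0
    for rels in ranked_relevance:
        dcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(rels[:10], start=1))
        ideal = sorted(rels, reverse=True)[:10]
        idcg = sum(rel / math.log2(rank + 1) for rank, rel in enumerate(ideal, start=1))
        total += dcg / idcg if idcg > 0 else 0.0
    return total / len(ranked_relevance)

# Toy example: two queries, binary relevance of the top retrieved documents.
queries = [
    [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],  # first relevant hit at rank 2
    [1, 0, 1, 0, 0, 0, 0, 0, 0, 0],  # relevant hits at ranks 1 and 3
]
print("MRR:", mrr(queries))          # (1/2 + 1/1) / 2 = 0.75
print("nDCG@10:", ndcg_at_10(queries))  # ~0.78
```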
### Marqo-Ecommerce-Hard
Marqo-Ecommerce-Hard covers the comprehensive evaluation conducted on the full 4-million-product dataset, highlighting the robust performance of our models in a real-world context.
**GoogleShopping-Text2Image Retrieval.**
| **Embedding Model** | **mAP** | **R@10** | **MRR** | **nDCG@10** |
|-------------------------|------|-------|------|---------|
| **Marqo-Ecommerce-L** | **0.682**| **0.878** | **0.683**| **0.726** |
| Marqo-Ecommerce-B | 0.623| 0.832 | 0.624| 0.668 |
| ViT-SO400M-14-SigLip | 0.573| 0.763 | 0.574| 0.613 |
| ViT-L-16-SigLip | 0.540| 0.722 | 0.540| 0.577 |
| ViT-B-16-SigLip | 0.476| 0.660 | 0.477| 0.513 |
| Amazon-Titan-MultiModal | 0.475| 0.648 | 0.475| 0.509 |
| Jina-V1-CLIP | 0.285| 0.402 | 0.285| 0.306 |
**GoogleShopping-Category2Image Retrieval.**
| **Embedding Model** | **mAP** | **P@10** | **MRR** | **nDCG@10** |
|-----------------------------|---------|----------|---------|-------------|
| **Marqo-Ecommerce-L** | **0.463** | **0.652** | **0.822** | **0.666** |
| Marqo-Ecommerce-B | 0.423 | 0.629 | 0.810 | 0.644 |
| ViT-SO400M-14-SigLip | 0.352 | 0.516 | 0.707 | 0.529 |
| ViT-L-16-SigLip | 0.324 | 0.497 | 0.687 | 0.509 |
| ViT-B-16-SigLip | 0.277 | 0.458 | 0.660 | 0.473 |
| Amazon-Titan-MultiModal | 0.246 | 0.429 | 0.642 | 0.446 |
| Jina-V1-CLIP | 0.123 | 0.275 | 0.504 | 0.294 |
**AmazonProducts-Text2Image Retrieval.**
| **Embedding Model** | **mAP** | **R@10** | **MRR** | **nDCG@10** |
|-----------------------------|---------|----------|---------|-------------|
| **Marqo-Ecommerce-L** | **0.658** | **0.854** | **0.663** | **0.703** |
| Marqo-Ecommerce-B | 0.592 | 0.795 | 0.597 | 0.637 |
| ViT-SO400M-14-SigLip | 0.560 | 0.742 | 0.564 | 0.599 |
| ViT-L-16-SigLip | 0.544 | 0.715 | 0.548 | 0.580 |
| ViT-B-16-SigLip | 0.480 | 0.650 | 0.484 | 0.515 |
| Amazon-Titan-MultiModal | 0.456 | 0.627 | 0.457 | 0.491 |
| Jina-V1-CLIP | 0.265 | 0.378 | 0.266 | 0.285 |
### Marqo-Ecommerce-Easy
This dataset is about 10-30 times smaller than Marqo-Ecommerce-Hard and is designed to accommodate rate-limited models, specifically Cohere-Embeddings-v3 and GCP-Vertex.
**GoogleShopping-Text2Image Retrieval.**
| **Embedding Model** | **mAP** | **R@10** | **MRR** | **nDCG@10** |
|-----------------------------|---------|----------|---------|-------------|
| **Marqo-Ecommerce-L** | **0.879** | **0.971** | **0.879** | **0.901** |
| Marqo-Ecommerce-B | 0.842 | 0.961 | 0.842 | 0.871 |
| ViT-SO400M-14-SigLip | 0.792 | 0.935 | 0.792 | 0.825 |
| GCP-Vertex | 0.740 | 0.910 | 0.740 | 0.779 |
| ViT-L-16-SigLip | 0.754 | 0.907 | 0.754 | 0.789 |
| ViT-B-16-SigLip | 0.701 | 0.870 | 0.701 | 0.739 |
| Amazon-Titan-MultiModal | 0.694 | 0.868 | 0.693 | 0.733 |
| Jina-V1-CLIP | 0.480 | 0.638 | 0.480 | 0.511 |
| Cohere-embedding-v3 | 0.358 | 0.515 | 0.358 | 0.389 |
**GoogleShopping-Category2Image Retrieval.**
| **Embedding Model** | **mAP** | **P@10** | **MRR** | **nDCG@10** |
|-----------------------------|---------|----------|---------|-------------|
| **Marqo-Ecommerce-L** | **0.515** | **0.358** | **0.764** | **0.590** |
| Marqo-Ecommerce-B | 0.479 | 0.336 | 0.744 | 0.558 |
| ViT-SO400M-14-SigLip | 0.423 | 0.302 | 0.644 | 0.487 |
| GCP-Vertex | 0.417 | 0.298 | 0.636 | 0.481 |
| ViT-L-16-SigLip | 0.392 | 0.281 | 0.627 | 0.458 |
| ViT-B-16-SigLip | 0.347 | 0.252 | 0.594 | 0.414 |
| Amazon-Titan-MultiModal | 0.308 | 0.231 | 0.558 | 0.377 |
| Jina-V1-CLIP | 0.175 | 0.122 | 0.369 | 0.229 |
| Cohere-embedding-v3 | 0.136 | 0.110 | 0.315 | 0.178 |
**AmazonProducts-Text2Image Retrieval.**
| **Embedding Model** | **mAP** | **R@10** | **MRR** | **nDCG@10** |
|-----------------------------|---------|----------|---------|-------------|
| **Marqo-Ecommerce-L** | **0.92** | **0.978** | **0.928** | **0.940** |
| Marqo-Ecommerce-B | 0.897 | 0.967 | 0.897 | 0.914 |
| ViT-SO400M-14-SigLip | 0.860 | 0.954 | 0.860 | 0.882 |
| ViT-L-16-SigLip | 0.842 | 0.940 | 0.842 | 0.865 |
| GCP-Vertex | 0.808 | 0.933 | 0.808 | 0.837 |
| ViT-B-16-SigLip | 0.797 | 0.917 | 0.797 | 0.825 |
| Amazon-Titan-MultiModal | 0.762 | 0.889 | 0.763 | 0.791 |
| Jina-V1-CLIP | 0.530 | 0.699 | 0.530 | 0.565 |
| Cohere-embedding-v3 | 0.433 | 0.597 | 0.433 | 0.465 |
## Citation
```
@software{zhu2024marqoecommembed_2024,
author = {Tianyu Zhu and Jesse Clark},
month = oct,
title = {{Marqo Ecommerce Embeddings - Foundation Model for Product Embeddings}},
url = {https://github.com/marqo-ai/marqo-ecommerce-embeddings/},
version = {1.0.0},
year = {2024}
}
```