---
license: apache-2.0
datasets:
- tattrongvu/vqa_de_en_batch1
- vidore/colpali_train_set
- tattrongvu/sharegpt4v_vqa_200k_batch1
language:
- en
- de
base_model:
- Qwen/Qwen2-VL-7B-Instruct
tags:
- vidore
- multimodal-embedding
library_name: peft
---
# ColQwen2-7B: Visual Retriever based on Qwen2-VL-7B-Instruct with ColBERT strategy

### This is the base version trained with a batch size of 8x64 for 5 epochs and with the updated pad token

ColQwen is a model based on a novel architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2-VL-7B](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).

This version was initialized from the untrained base version to guarantee deterministic projection layer initialization.
<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>

## Version specificity


This model accepts dynamic image resolutions as input and does not resize them, so unlike ColPali it does not change their aspect ratio.
The maximal resolution is set so that at most 768 image patches are created. Experiments show clear improvements with a larger number of image patches, at the cost of higher memory requirements.
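
As a rough intuition for how the 768-patch budget constrains input size, the sketch below estimates an image's visual-token count. The 28-pixels-per-token factor is an assumption (Qwen2-VL's 14x14 patches with 2x2 merging), not something stated on this card, and the processor's exact rounding may differ.

```python
import math

PATCH_PIXELS = 28   # assumed pixels per visual token along each axis (14x14 patches, 2x2 merge)
MAX_PATCHES = 768   # patch budget mentioned above


def num_visual_tokens(width: int, height: int) -> int:
    """Approximate number of visual tokens for an image of the given size."""
    return math.ceil(width / PATCH_PIXELS) * math.ceil(height / PATCH_PIXELS)


def fits_budget(width: int, height: int) -> bool:
    """True if the image stays within the 768-token budget without downscaling."""
    return num_visual_tokens(width, height) <= MAX_PATCHES


print(num_visual_tokens(1024, 1024))  # 1369 -> above the budget, would be downscaled
print(fits_budget(756, 756))          # 27 * 27 = 729 tokens -> fits
```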

This version is trained with `colpali-engine==0.3.4`.

Data is the same as the ColPali data described in the paper.


## Model Training

### Dataset
The dataset extends the original ColPali train set with Gemini 1.5 Flash-generated QA pairs on 35k images scraped from the internet.

*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.*

### Parameters
We train models using low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=64` and `r=64` on the transformer layers of the language model,
as well as on the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8xH100 GPU setup with distributed data parallelism (via `accelerate`), a learning rate of 2e-4 with linear decay and 1% warmup steps, a per-device batch size of 64, and `bfloat16` precision.
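
For illustration, here is a minimal sketch of the adapter configuration described above using the `peft` library. The target module names and the projection-layer name (`custom_text_proj`) are assumptions, since the card only states that adapters cover the language model's transformer layers and the final projection layer.

```python
from peft import LoraConfig

# Sketch only: hyperparameters taken from the paragraph above, module names assumed.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    target_modules=[
        # assumption: the usual Qwen2-VL language-model projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    modules_to_save=["custom_text_proj"],  # assumption: name of the final projection layer
    task_type="FEATURE_EXTRACTION",
)
```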

## Usage

Make sure `colpali-engine` is installed from source or at version 0.3.4 or later.
The `transformers` version must be > 4.46.1.

```bash
pip install git+https://github.com/illuin-tech/colpali
```

```python
import torch
from PIL import Image

from colpali_engine.models import ColQwen2, ColQwen2Processor

model = ColQwen2.from_pretrained(
    "tsystems/colqwen2-7b-v1.0",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" if on Apple Silicon
).eval()
processor = ColQwen2Processor.from_pretrained("tsystems/colqwen2-7b-v1.0")

# Your inputs
images = [
    Image.new("RGB", (32, 32), color="white"),
    Image.new("RGB", (16, 16), color="black"),
]
queries = [
    "Is attention really all you need?",
    "What is the amount of bananas farmed in Salvador?",
]

# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```
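
`score_multi_vector` returns one late-interaction score per (query, image) pair. Assuming a `(n_queries, n_images)` score tensor, images can then be ranked for each query, for example:

```python
# Rank images for each query (sketch, assuming `scores` has shape (n_queries, n_images)).
for q_idx, query in enumerate(queries):
    ranking = scores[q_idx].argsort(descending=True)
    best = ranking[0].item()
    print(f"{query!r} -> best image index {best} (score {scores[q_idx, best]:.2f})")
```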


## Limitations

 - **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
 - **Support**: The model relies on multi-vector retrieval derived from the ColBERT late-interaction mechanism, which may require engineering effort to adapt to widely used vector retrieval frameworks that lack native multi-vector support (see the sketch after this list).
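
For context on that last point, below is a generic sketch of ColBERT-style late-interaction (MaxSim) scoring. It illustrates what "native multi-vector support" would have to provide; it is not the exact `colpali-engine` implementation.

```python
import torch


def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """Late-interaction score between one query and one document.

    query_emb: (n_query_tokens, dim), doc_emb: (n_doc_tokens, dim).
    For each query token, take its maximum similarity over all document tokens,
    then sum over the query tokens.
    """
    sim = query_emb @ doc_emb.T            # (n_query_tokens, n_doc_tokens)
    return sim.max(dim=1).values.sum()     # scalar score


# Toy example with random multi-vector embeddings
q = torch.randn(12, 128)
d = torch.randn(768, 128)
print(maxsim_score(q, d))
```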

## License

ColQwen2's vision-language backbone model (Qwen2-VL) is under the `apache-2.0` license. The adapters attached to the model are under the MIT license.

## Citation

If you use any datasets or models from this organization in your research, please cite the original work as follows:

```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
  title={ColPali: Efficient Document Retrieval with Vision Language Models}, 
  author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2407.01449},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.01449}, 
}
```