---
license: apache-2.0
datasets:
- tattrongvu/vqa_de_en_batch1
- vidore/colpali_train_set
- tattrongvu/sharegpt4v_vqa_200k_batch1
language:
- en
- de
base_model:
- Qwen/Qwen2-VL-7B-Instruct
tags:
- vidore
- multimodal-embedding
---
# ColQwen2-7B: Visual Retriever based on Qwen2-VL-7B-Instruct with ColBERT strategy
### This is the base version, trained with a batch size of 8x64 for 5 epochs and with the updated pad token
ColQwen is a model built on a novel architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2-VL-7B](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).
The adapter was initialized from the untrained base version, which guarantees deterministic projection layer initialization.
<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>
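The ColBERT-style scoring mentioned above is a late-interaction (MaxSim) operation: each query token embedding is compared against every image patch embedding, and the per-token maxima are summed. A minimal sketch (illustrative names; `colpali-engine` ships a batched equivalent):

```python
import torch

def maxsim_score(query_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late-interaction score between one query and one image.

    query_emb: (n_query_tokens, dim) multi-vector query embedding
    image_emb: (n_image_patches, dim) multi-vector image embedding
    Vectors are assumed L2-normalized, so dot products are cosine similarities.
    """
    sim = query_emb @ image_emb.T       # (n_query_tokens, n_image_patches)
    return sim.max(dim=1).values.sum()  # best patch per query token, summed
```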
## Version specificity
This model takes dynamic image resolutions as input and does not resize them, so their aspect ratio is preserved (unlike in ColPali).
The maximal resolution is set so that at most 768 image patches are created. Experiments show clear improvements with larger numbers of image patches, at the cost of higher memory requirements.
This version is trained with `colpali-engine==0.3.4`.
Data is the same as the ColPali data described in the paper.
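## Usage

A minimal inference sketch with `colpali-engine`, following the library's ColQwen2 API. The repository id below is an assumption based on this card's location:

```python
import torch
from PIL import Image
from colpali_engine.models import ColQwen2, ColQwen2Processor

model_name = "vidore/colqwen2-7b-v1.0"  # assumption: this card's repository id

model = ColQwen2.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
).eval()
processor = ColQwen2Processor.from_pretrained(model_name)

# Dummy inputs
images = [Image.new("RGB", (32, 32), color="white")]
queries = ["What is shown in the document?"]

batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

# Late-interaction (MaxSim) scores between each query and each image
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```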
## Model Training
### Dataset
The dataset was extended from the original ColPali train set with Gemini 1.5 Flash-generated QA pairs on 35k images scraped from the internet.
*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.*
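A sketch of assembling the training mix from the datasets listed in this card's metadata. The split names are assumptions, and the three datasets are assumed to share a common schema (otherwise columns must be aligned before concatenation):

```python
from datasets import concatenate_datasets, load_dataset

parts = [
    load_dataset("vidore/colpali_train_set", split="train"),
    load_dataset("tattrongvu/vqa_de_en_batch1", split="train"),
    load_dataset("tattrongvu/sharegpt4v_vqa_200k_batch1", split="train"),
]
train_set = concatenate_datasets(parts)
print(train_set)
```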
### Parameters
We train low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=64` and `r=64` on the transformer layers of the language model,
as well as on the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8xH100 GPU setup with distributed data parallelism (via accelerate), a learning rate of 2e-4 with linear decay and 1% warmup steps, a per-device batch size of 64, and `bfloat16` precision.
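
A sketch of the corresponding configuration with `peft` and `transformers`. The `target_modules` list is an assumption about Qwen2-VL's module names; the other values mirror the parameters above:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Assumption: attention and MLP projections of the language-model layers.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

training_args = TrainingArguments(
    per_device_train_batch_size=64,  # 8 GPUs x 64 = effective batch size of 512
    num_train_epochs=5,
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,               # 1% warmup steps
    optim="paged_adamw_8bit",
    bf16=True,
)
```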