---
title: README
emoji: 👀
colorFrom: indigo
colorTo: red
sdk: static
pinned: true
---

# ColPali: Efficient Document Retrieval with Vision Language Models 👀

[![arXiv](https://img.shields.io/badge/arXiv-2407.01449-b31b1b.svg?style=for-the-badge)](https://arxiv.org/abs/2407.01449)

This organization contains all artifacts released with our preprint [*ColPali: Efficient Document Retrieval with Vision Language Models*](https://arxiv.org/abs/2407.01449), including the [ViDoRe](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) benchmark and our SOTA document retrieval model [*ColPali*](https://huggingface.co/vidore/colpali).

### Abstract

Documents are visually rich structures that convey information through text, as well as tables, figures, page layouts, or fonts. While modern document retrieval systems exhibit strong performance on query-to-text matching, they struggle to exploit visual cues efficiently, hindering their performance on practical document retrieval applications such as Retrieval-Augmented Generation. To benchmark current systems on visually rich document retrieval, we introduce the Visual Document Retrieval Benchmark *ViDoRe*, composed of various page-level retrieval tasks spanning multiple domains, languages, and settings. The inherent shortcomings of modern systems motivate the introduction of a new retrieval model architecture, *ColPali*, which leverages the document understanding capabilities of recent Vision Language Models to produce high-quality contextualized embeddings solely from images of document pages. Combined with a late interaction matching mechanism, *ColPali* largely outperforms modern document retrieval pipelines while being drastically faster and end-to-end trainable.

## Models

- [*ColPali*](https://huggingface.co/vidore/colpali-v1.2): our main model contribution. It is built on a novel architecture and training strategy that leverage Vision Language Models (VLMs) to efficiently index documents from their visual features. It extends [PaliGemma-3B](https://huggingface.co/google/paligemma-3b-mix-448) to generate [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images (a minimal usage sketch follows this list).
- [*BiPali*](https://huggingface.co/vidore/bipali): an extension of the original SigLIP architecture in which the SigLIP-generated patch embeddings are fed to a text language model, PaliGemma-3B, to obtain LLM-contextualized output patch embeddings. These representations are average-pooled into a single vector, yielding a PaliGemma bi-encoder, *BiPali*.
- [*BiSigLIP*](https://huggingface.co/vidore/bisiglip): a fine-tuned version of the original [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384), a strong vision-language bi-encoder model.
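The snippet below is a minimal inference sketch using the `colpali-engine` package (see the Code section below). It assumes a recent release exposing the `ColPali` and `ColPaliProcessor` classes; exact class and method names may differ across versions, and the inputs here are placeholders rather than real document pages.

```python
import torch
from PIL import Image

from colpali_engine.models import ColPali, ColPaliProcessor

model_name = "vidore/colpali-v1.2"

# Load the model and its processor (bfloat16 on GPU, float32 on CPU).
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ColPali.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16 if device != "cpu" else torch.float32,
    device_map=device,
).eval()
processor = ColPaliProcessor.from_pretrained(model_name)

# Placeholder inputs: in practice, pass rendered images of your document pages.
images = [Image.new("RGB", (448, 448), color="white")]
queries = ["What is the projected energy mix in 2030?"]

# Each page image and each query is encoded as a bag of patch/token embeddings.
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

# Late-interaction (ColBERT-style MaxSim) scoring: for every query token, keep its
# best-matching page patch and sum the similarities, yielding one score per pair.
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
print(scores)  # tensor of shape (num_queries, num_pages)
```

In a retrieval setting, the page-side embeddings would typically be computed and indexed offline, with only the query-side forward pass and the MaxSim scoring performed at search time.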
## Benchmark

- [*Leaderboard*](https://huggingface.co/spaces/vidore/vidore-leaderboard): the ViDoRe leaderboard, which tracks model performance on our new Visual Document Retrieval Benchmark, composed of various page-level retrieval tasks spanning multiple domains, languages, and settings.

## Datasets

We organized the datasets into collections that constitute our benchmark ViDoRe and its derivatives (OCR and captioning). Below is a brief description of each of them.

- [*ViDoRe Benchmark*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d): the collection regrouping all datasets constituting the ViDoRe benchmark. It includes the test sets from academic datasets ([ArXiVQA](https://huggingface.co/datasets/vidore/arxivqa_test_subsampled), [DocVQA](https://huggingface.co/datasets/vidore/docvqa_test_subsampled), [InfoVQA](https://huggingface.co/datasets/vidore/infovqa_test_subsampled), [TATDQA](https://huggingface.co/datasets/vidore/tatdqa_test), [TabFQuAD](https://huggingface.co/datasets/vidore/tabfquad_test_subsampled)) and from synthetically generated datasets spanning various themes and industrial applications ([Artificial Intelligence](https://huggingface.co/datasets/vidore/syntheticDocQA_artificial_intelligence_test), [Government Reports](https://huggingface.co/datasets/vidore/syntheticDocQA_government_reports_test), [Healthcare Industry](https://huggingface.co/datasets/vidore/syntheticDocQA_healthcare_industry_test), [Energy](https://huggingface.co/datasets/vidore/syntheticDocQA_energy_test), and [Shift Project](https://huggingface.co/datasets/vidore/shiftproject_test)). Further details can be found on the corresponding dataset cards.
- [*OCR Baseline*](https://huggingface.co/collections/vidore/vidore-chunk-ocr-baseline-666acce88c294ef415548a56): the same datasets as in ViDoRe, but preprocessed for text-only retrieval. Each page of the original ViDoRe benchmark was partitioned into chunks with Unstructured, and visual chunks were OCRed with Tesseract.
- [*Captioning Baseline*](https://huggingface.co/collections/vidore/vidore-captioning-baseline-6658a2a62d857c7a345195fd): the same datasets as in ViDoRe, but preprocessed for text-only retrieval. Each page of the original ViDoRe benchmark was partitioned into chunks with Unstructured, and visual chunks were captioned using Claude Sonnet.

## Code

- [*ColPali Engine*](https://github.com/illuin-tech/colpali): the code used to train and run inference with the ColPali architecture.
- [*ViDoRe Benchmark*](https://github.com/illuin-tech/vidore-benchmark): a Python package/CLI tool to evaluate document retrieval systems on the ViDoRe benchmark.

## Extra

- [*Demo*](https://huggingface.co/spaces/manu/ColPali-demo): a demo to try it out! It will be improved in the coming days!
- [*Preprint*](https://huggingface.co/papers/2407.01449): the paper with all the details!

## Contact

- Manuel Faysse: manuel.faysse@illuin.tech
- Hugues Sibille: hugues.sibille@illuin.tech
- Tony Wu: tony.wu@illuin.tech

## Citation

If you use any datasets or models from this organization in your research, please cite our work as follows:

```latex
@misc{faysse2024colpaliefficientdocumentretrieval,
  title={ColPali: Efficient Document Retrieval with Vision Language Models},
  author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2407.01449},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.01449},
}
```

## Acknowledgments

This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/) and by a grant from ANRT France. This work was performed using HPC resources from CINES ADASTRA through Grant 2024-AD011015443.