---
base_model: colbert-ir/colbertv2.0
language:
- en
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- ColBERT
- PyLate
widget: []
license: mit
---
# PyLate version of colbert-ir/colbertv2.0
This checkpoint is a version of [colbert-ir/colbertv2.0](https://huggingface.co/colbert-ir/colbertv2.0) compatible with the [PyLate](https://github.com/lightonai/pylate) library.
All credit belongs to the original authors; we thank Omar Khattab for allowing us to share this version of the model.
Please refer to the [original repository](https://huggingface.co/colbert-ir/colbertv2.0) and the [paper](https://arxiv.org/abs/2112.01488) for more information about the model, and to the [PyLate repository](https://github.com/lightonai/pylate) for usage instructions.
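For a quick start, the sketch below follows PyLate's documented indexing-and-retrieval flow; the corpus, index folder, and the `lightonai/colbertv2.0` repo id are illustrative assumptions, so adjust them to your setup:

```python
from pylate import indexes, models, retrieve

# Load this checkpoint through PyLate (repo id assumed; use a local path if needed).
model = models.ColBERT(model_name_or_path="lightonai/colbertv2.0")

# Build a Voyager (HNSW) index over a tiny illustrative corpus.
index = indexes.Voyager(index_folder="pylate-index", index_name="index", override=True)

documents_ids = ["1", "2"]
documents = [
    "ColBERTv2 is a late-interaction retriever with residual compression.",
    "Paris is the capital of France.",
]

documents_embeddings = model.encode(documents, is_query=False)
index.add_documents(documents_ids=documents_ids, documents_embeddings=documents_embeddings)

# Encode queries (is_query=True triggers query-side preprocessing) and retrieve top-k.
retriever = retrieve.ColBERT(index=index)
queries_embeddings = model.encode(["What is ColBERTv2?"], is_query=True)
results = retriever.retrieve(queries_embeddings=queries_embeddings, k=2)
print(results)  # one list of scored document ids per query
```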
## Model Details
The model maps queries and documents to sequences of 128-dimensional dense vectors and scores their semantic similarity with the MaxSim operator.
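For intuition, here is a minimal PyTorch sketch of that scoring step, assuming both token-embedding matrices are already L2-normalized so the dot product equals cosine similarity:

```python
import torch

def maxsim_score(query_embeddings: torch.Tensor, doc_embeddings: torch.Tensor) -> torch.Tensor:
    """MaxSim late-interaction score.

    query_embeddings: (num_query_tokens, 128), L2-normalized
    doc_embeddings:   (num_doc_tokens, 128),   L2-normalized
    """
    # Cosine similarity between every query token and every document token.
    similarity = query_embeddings @ doc_embeddings.T  # (q_tokens, d_tokens)
    # For each query token, keep its best-matching document token, then sum.
    return similarity.max(dim=1).values.sum()
```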
### Model Description
- **Model Type:** PyLate model
- **Base model:** [colbert-ir/colbertv2.0](https://huggingface.co/colbert-ir/colbertv2.0) <!-- at revision c1e84128e85ef755c096a95bdb06b47793b13acf -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 128 dimensions per token
- **Similarity Function:** MaxSim (cosine similarity per token)
### Full Model Architecture
```
ColBERT(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
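The Dense head above projects BERT's 768-dimensional token states down to the 128-dimensional vectors used for scoring. Beyond index-based retrieval, PyLate also exposes a reranking helper for scoring pre-retrieved candidates against a query; a minimal sketch (queries, candidate texts, and ids are illustrative):

```python
from pylate import models, rank

model = models.ColBERT(model_name_or_path="lightonai/colbertv2.0")  # repo id assumed

queries = ["What is ColBERTv2?"]
documents = [[  # one candidate list per query
    "ColBERTv2 is a late-interaction retriever with residual compression.",
    "Paris is the capital of France.",
]]
documents_ids = [["doc-1", "doc-2"]]

queries_embeddings = model.encode(queries, is_query=True)
documents_embeddings = model.encode(documents, is_query=False)

# Scores every candidate with MaxSim and sorts each list by relevance.
reranked = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
```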
### Citation
```
@inproceedings{santhanam-etal-2022-colbertv2,
title = "{C}ol{BERT}v2: Effective and Efficient Retrieval via Lightweight Late Interaction",
author = "Santhanam, Keshav and
Khattab, Omar and
Saad-Falcon, Jon and
Potts, Christopher and
Zaharia, Matei",
editor = "Carpuat, Marine and
de Marneffe, Marie-Catherine and
Meza Ruiz, Ivan Vladimir",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.272",
doi = "10.18653/v1/2022.naacl-main.272",
pages = "3715--3734",
abstract = "Neural information retrieval (IR) has greatly advanced search and other knowledge-intensive language tasks. While many neural IR methods encode queries and documents into single-vector representations, late interaction models produce multi-vector representations at the granularity of each token and decompose relevance modeling into scalable token-level computations. This decomposition has been shown to make late interaction more effective, but it inflates the space footprint of these models by an order of magnitude. In this work, we introduce ColBERTv2, a retriever that couples an aggressive residual compression mechanism with a denoised supervision strategy to simultaneously improve the quality and space footprint of late interaction. We evaluate ColBERTv2 across a wide range of benchmarks, establishing state-of-the-art quality within and outside the training domain while reducing the space footprint of late interaction models by 6{--}10x.",
}
```