---
license: cc-by-sa-3.0
datasets:
- natural_questions
language:
- en
tags:
- colbert
- natural questions
- checkpoint
- text retrieval
metrics:
- type: NQ 10 Recall
  value: 71.1
- type: NQ 20 Recall
  value: 76.3
- type: NQ 50 Recall
  value: 80.4
- type: NQ 100 Recall
  value: 82.7
- type: NQ 10 MRR
  value: 52.1
- type: NQ 20 MRR
  value: 52.3
- type: NQ 50 MRR
  value: 52.5
- type: NQ 100 MRR
  value: 52.5
---
# ColBERT NQ Checkpoint
The ColBERT NQ Checkpoint is a trained retrieval model based on the ColBERT architecture, which uses a BERT encoder to embed queries and passages at the token level. It was trained on the Natural Questions (NQ) dataset for text retrieval.
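ColBERT's late-interaction ("MaxSim") scoring can be sketched in a few lines of plain Python. This is illustrative only: the toy 2-d vectors below stand in for the token embeddings that the BERT encoder actually produces.

```python
def maxsim_score(query_embs, doc_embs):
    """ColBERT late interaction: for each query token embedding, take the
    maximum dot product over all document token embeddings, then sum
    those maxima across query tokens."""
    return sum(
        max(sum(q_i * d_i for q_i, d_i in zip(q, d)) for d in doc_embs)
        for q in query_embs
    )

# Hypothetical 2-d unit-vector "embeddings":
query = [[1.0, 0.0], [0.0, 1.0]]
doc_a = [[1.0, 0.0], [0.0, 1.0]]   # matches both query tokens
doc_b = [[1.0, 0.0]]               # matches only the first query token
score_a = maxsim_score(query, doc_a)  # 2.0
score_b = maxsim_score(query, doc_b)  # 1.0
```

Because each query token independently seeks its best-matching document token, documents covering more of the query score higher.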
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors | ? |
| Date | Feb 7, 2023 |
| Version | Checkpoint |
| Type | Text retrieval |
| Paper or Other Resources | Base Model: [ColBERT](https://github.com/stanford-futuredata/ColBERT) Dataset: [Natural Questions](https://huggingface.co/datasets/natural_questions) |
| License | CC BY-SA 3.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/ColBERT-NQ/discussions) and [Intel DevHub Discord](https://discord.gg/rv2Gp55UJQ)|

| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | This model is designed for text retrieval tasks, allowing users to submit queries and receive relevant passages from a corpus, in this case, Wikipedia. It can be integrated into applications requiring efficient and accurate retrieval of information based on user queries. |
| Primary intended users | Researchers, developers, and organizations looking for a powerful text retrieval solution that can be integrated into their systems or workflows, especially those requiring retrieval from large, diverse corpora like Wikipedia. |
| Out-of-scope uses | The model is not intended for tasks beyond text retrieval, such as text generation, sentiment analysis, or other forms of natural language processing not related to retrieving relevant text passages. |
# Evaluation
The ColBERT NQ Checkpoint model has been evaluated on the NQ dev dataset with the following results, showcasing its effectiveness in retrieving relevant passages across varying numbers of retrieved documents:
| Top-k passages | Recall@k | MRR@k |
| ----------- | ----------- | ----------- |
| 10 | 71.1 | 52.0 |
| 20 | 76.3 | 52.3 |
| 50 | 80.4 | 52.5 |
| 100 | 82.7 | 52.5 |
These metrics demonstrate the model's ability to accurately retrieve relevant information from a corpus: recall improves substantially as more passages are considered, while mean reciprocal rank (MRR) stays nearly flat, indicating that the first relevant passage is typically ranked near the top regardless of cutoff.
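Metrics like those above can be computed from per-query rankings. A minimal sketch, assuming Recall@k counts queries with at least one relevant passage in the top k, and MRR@k averages the reciprocal rank of the first relevant passage (0 when none is retrieved within k):

```python
def recall_at_k(first_rel_ranks, k):
    # first_rel_ranks: for each query, the 1-based rank of the first
    # relevant passage retrieved, or None if none was retrieved.
    hits = sum(1 for r in first_rel_ranks if r is not None and r <= k)
    return hits / len(first_rel_ranks)

def mrr_at_k(first_rel_ranks, k):
    # Average reciprocal rank of the first relevant passage within top k.
    rr = sum(1.0 / r for r in first_rel_ranks if r is not None and r <= k)
    return rr / len(first_rel_ranks)

# Hypothetical rankings for four queries:
ranks = [1, 3, None, 2]
r_at_2 = recall_at_k(ranks, 2)   # 0.5: queries 1 and 4 hit within top 2
m_at_3 = mrr_at_k(ranks, 3)      # (1 + 1/3 + 1/2) / 4
```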
# Ethical Considerations
While not specifically mentioned, ethical considerations for using the ColBERT NQ Checkpoint model should include awareness of potential biases present in the training corpus (Wikipedia), and the implications of those biases on retrieved results. Users should also consider the privacy and data use implications when deploying this model in applications.
# Caveats and Recommendations
- Index Creation: Users need to build a vector index from their corpus using the ColBERT codebase before running queries. This process requires computational resources and expertise in setting up and managing search indices.
- Data Bias and Fairness: Given the Wikipedia-based training corpus, users should be mindful of potential biases and the representation of information within Wikipedia, adjusting their use case or implementation as necessary to address these concerns.
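The index-then-query workflow described above can be illustrated end to end with a toy in-memory example. The hash-based token "embeddings" here are purely illustrative stand-ins for ColBERT's BERT encodings, and the tiny dict is a stand-in for a real vector index; actual indexing should use the ColBERT codebase linked above.

```python
import hashlib

def embed(text, dim=8):
    # Hypothetical per-token "embeddings" via hashing -- NOT real ColBERT
    # encodings, just deterministic unit vectors for illustration.
    vecs = []
    for tok in text.lower().split():
        raw = hashlib.md5(tok.encode()).digest()[:dim]
        v = [b / 255.0 for b in raw]
        norm = sum(x * x for x in v) ** 0.5
        vecs.append([x / norm for x in v])
    return vecs

def maxsim(q_vecs, d_vecs):
    # ColBERT-style late interaction: sum over query tokens of the
    # best dot product against any document token.
    return sum(max(sum(a * b for a, b in zip(q, d)) for d in d_vecs) for q in q_vecs)

# 1) "Index" a tiny corpus by pre-computing token vectors per passage.
corpus = {
    "p1": "the eiffel tower is in paris",
    "p2": "bert is a transformer encoder",
    "p3": "colbert performs late interaction retrieval",
}
index = {pid: embed(text) for pid, text in corpus.items()}

# 2) Query: score every passage against the query and rank by MaxSim.
q_vecs = embed("late interaction retrieval")
ranked = sorted(index, key=lambda pid: maxsim(q_vecs, index[pid]), reverse=True)
```

A real deployment replaces step 1 with ColBERT's offline index build over the full corpus and step 2 with its approximate search, which avoids scoring every passage exhaustively.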