This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch).
For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai).
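
As a quick illustration, here is a minimal sketch of scoring a query–passage pair with the Hugging Face `transformers` API directly (pygaggle provides a higher-level reranking interface). It assumes the standard monoT5 input format `Query: ... Document: ... Relevant:` and uses the probability of generating the token `true` as the relevance score; `model_name` is a placeholder for this repository's model id.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Placeholder: replace with this repository's model id on the Hugging Face Hub.
model_name = "<this-model-id>"

tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).eval()

query = "who proposed the geocentric theory"
passage = "The geocentric model was described by the Greek astronomer Ptolemy."

# monoT5 input format: the model is asked whether the document is relevant
# to the query and answers with the tokens "true" or "false".
input_text = f"Query: {query} Document: {passage} Relevant:"
inputs = tokenizer(input_text, return_tensors="pt")

# IDs of the "true" / "false" tokens in the T5 vocabulary.
true_id = tokenizer.encode("true", add_special_tokens=False)[0]
false_id = tokenizer.encode("false", add_special_tokens=False)[0]

with torch.no_grad():
    # Run a single decoding step and compare the logits of "true" vs "false".
    decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id)
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, 0]
    score = torch.log_softmax(logits[[false_id, true_id]], dim=0)[1].item()

print(f"relevance score (log-prob of 'true'): {score:.4f}")
```

Passages can be reranked by computing this score for each candidate and sorting in descending order.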
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
This model also achieves state-of-the-art results on the BEIR benchmark:
- Paper: [No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval](https://arxiv.org/abs/2206.02873)
- [Repository](https://github.com/guilhermemr04/scaling-zero-shot-retrieval)