---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers

---

# bowdpr_wiki_triviaft

This is a retriever fine-tuned on the TriviaQA task (without distillation). We introduce a novel pre-training paradigm, Bag-of-Word Prediction, for dense retrieval. 
This retriever is initialized from a base-sized pre-trained model, [bowdpr/bowdpr_wiki](https://huggingface.co/bowdpr/bowdpr_wiki). Please refer to our [paper](https://arxiv.org/abs/2401.11248) for detailed pre-training and fine-tuning settings. 

Fine-tuning on QA datasets follows a two-stage pipeline (a minimal sketch of the mining step follows the list):
 - Stage 1 (s1): train with BM25 negatives
 - Stage 2 (s2): train with BM25 negatives plus hard negatives mined with the Stage-1 retriever
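
The snippet below is a rough, hypothetical sketch of the Stage-2 mining step, not the authors' actual training code: it encodes a toy corpus with a Stage-1 checkpoint (the path, corpus, and variable names are made up) and takes the top-ranked non-gold passages as hard negatives.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical Stage-1 checkpoint path; substitute your own.
s1_model = SentenceTransformer('path/to/stage1_checkpoint')

corpus = ["passage one ...", "passage two ...", "passage three ..."]
queries = ["example trivia question?"]
gold = {0: {0}}  # query index -> indices of gold (positive) passages

# Encode and score; DPR-style retrievers typically use dot-product similarity.
p_emb = s1_model.encode(corpus)
q_emb = s1_model.encode(queries)
scores = q_emb @ p_emb.T  # shape: (num_queries, num_passages)

# Top-ranked passages that are not gold answers become mined hard negatives;
# Stage 2 trains on these together with the original BM25 negatives.
for qi, row in enumerate(scores):
    ranked = np.argsort(-row)
    hard_negs = [int(pi) for pi in ranked if int(pi) not in gold[qi]][:5]
    print(f"query {qi}: mined hard negatives {hard_negs}")
```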


## Usage (Sentence-Transformers)

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('bowdpr/bowdpr_wiki_triviaft')
embeddings = model.encode(sentences)
print(embeddings)
```
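
Since this is a retriever, the typical use is scoring a query against candidate passages. A minimal sketch, assuming dot-product similarity (a common choice for DPR-style retrievers; see the paper for the exact scoring function):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('bowdpr/bowdpr_wiki_triviaft')

query = "Who wrote the novel Nineteen Eighty-Four?"
passages = [
    "Nineteen Eighty-Four is a dystopian novel by George Orwell, published in 1949.",
    "The Great Wall of China is a series of fortifications in northern China.",
]

q_emb = model.encode(query, convert_to_tensor=True)
p_emb = model.encode(passages, convert_to_tensor=True)

# Higher dot-product score = more relevant passage.
scores = util.dot_score(q_emb, p_emb)
print(scores)
```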



## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model; then you apply the right pooling operation on top of the contextualized word embeddings. This model uses CLS pooling, i.e. the embedding of the first (`[CLS]`) token.

```python
from transformers import AutoTokenizer, AutoModel
import torch


def cls_pooling(model_output, attention_mask):
    # CLS pooling: take the hidden state of the first ([CLS]) token.
    # The attention mask is not needed for CLS pooling.
    return model_output[0][:, 0]


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bowdpr/bowdpr_wiki_triviaft')
model = AutoModel.from_pretrained('bowdpr/bowdpr_wiki_triviaft')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
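
Continuing from the block above, the CLS embeddings can be compared with a dot product, a reasonable default for DPR-style retrievers (an assumption here; consult the paper for the scoring used in evaluation):

```python
# Assumes `sentence_embeddings` from the previous block is still in scope.
scores = sentence_embeddings @ sentence_embeddings.T
print(scores)  # pairwise dot-product similarities
```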


## Full Model Architecture
```
SentenceTransformerforCL(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

If you are interested in our work, please consider citing our paper.

```
@misc{ma2024bow_pred,
      title={Drop your Decoder: Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval}, 
      author={Guangyuan Ma and Xing Wu and Zijia Lin and Songlin Hu},
      year={2024},
      eprint={2401.11248},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```
