antoinelouis committed • Commit 30b61bb • Parent(s): a0b2916
Update README.md

README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 - sentence-similarity
 library_name: sentence-transformers
 ---
-# crossencoder-mMiniLMv2-L12-
+# crossencoder-mMiniLMv2-L12-mmarcoFR

 This is a [sentence-transformers](https://www.SBERT.net) model trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.

@@ -33,7 +33,7 @@ Then you can use the model like this:
 from sentence_transformers import CrossEncoder
 pairs = [('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]

-model = CrossEncoder('antoinelouis/crossencoder-mMiniLMv2-L12-
+model = CrossEncoder('antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR')
 scores = model.predict(pairs)
 print(scores)
 ```
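As a side note, the scores returned by `model.predict` come back in input order; to actually re-rank passages, sort the pairs by score. A minimal sketch, where the `scores` list is a made-up placeholder standing in for the output of `model.predict(pairs)`:

```python
pairs = [('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]
scores = [0.12, 0.87, 0.45]  # placeholder values standing in for model.predict(pairs)

# Higher score = more relevant: sort passages by descending score
ranking = sorted(zip(pairs, scores), key=lambda item: item[1], reverse=True)
for (query, passage), score in ranking:
    print(f'{score:.2f}  {passage}')  # prints Paragraph2, then Paragraph3, then Paragraph1
```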
@@ -46,8 +46,8 @@ Without [sentence-transformers](https://www.SBERT.net), you can use the model as
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch

-model = AutoModelForSequenceClassification.from_pretrained('antoinelouis/crossencoder-mMiniLMv2-L12-
-tokenizer = AutoTokenizer.from_pretrained('antoinelouis/crossencoder-mMiniLMv2-L12-
+model = AutoModelForSequenceClassification.from_pretrained('antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR')
+tokenizer = AutoTokenizer.from_pretrained('antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR')

 pairs = [('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]
 features = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt')
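The hunk above stops after tokenization. For reference, a hedged sketch of the scoring step that typically follows for a cross-encoder with a single-logit relevance head: the tensor below is an illustrative stand-in for `model(**features).logits`, since running the real model is outside this snippet.

```python
import torch

# Stand-in for model(**features).logits: a cross-encoder with a single
# relevance head returns one logit per (query, passage) pair, shape (N, 1).
# These values are illustrative, not real model outputs.
logits = torch.tensor([[0.3], [2.1], [-0.7]])

with torch.no_grad():
    scores = logits.squeeze(-1)  # shape (N,): one relevance score per pair

print(scores.tolist())
```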
@@ -65,15 +65,18 @@ We evaluated the model on 500 random queries from the mMARCO-fr train set (which

 Below, we compare the model performance with other cross-encoder models fine-tuned on the same dataset. We report the R-precision (RP), mean reciprocal rank (MRR), and recall at various cut-offs (R@k).

-| | model
-|
-| 1 | [crossencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-camembert-base-mmarcoFR)
-| 2 | **crossencoder-mMiniLMv2-L12-
-| 3 | [crossencoder-
-| 4 | [crossencoder-
-| 5 | [crossencoder-
-| 6 | [crossencoder-
-| 7 | [crossencoder-
+| | model | Vocab. | #Param. | Size | RP | MRR@10 | R@10(↑) | R@20 | R@50 | R@100 |
+|---:|:-----------------------------------------------------------------------------------------------------------------------------|:-------|--------:|------:|-------:|---------:|---------:|-------:|-------:|--------:|
+| 1 | [crossencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-camembert-base-mmarcoFR) | fr | 110M | 443MB | 35.65 | 50.44 | 82.95 | 91.50 | 96.80 | 98.80 |
+| 2 | **crossencoder-mMiniLMv2-L12-mmarcoFR** | fr,99+ | 118M | 471MB | 34.37 | 51.01 | 82.23 | 90.60 | 96.45 | 98.40 |
+| 3 | [crossencoder-mpnet-base-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-mpnet-base-mmarcoFR) | en | 109M | 438MB | 29.68 | 46.13 | 80.45 | 87.90 | 93.15 | 96.60 |
+| 4 | [crossencoder-distilcamembert-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-distilcamembert-mmarcoFR) | fr | 68M | 272MB | 27.28 | 43.71 | 80.30 | 89.10 | 95.55 | 98.60 |
+| 5 | [crossencoder-electra-base-french-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-electra-base-french-mmarcoFR) | fr | 110M | 443MB | 28.32 | 45.28 | 79.22 | 87.15 | 93.15 | 95.75 |
+| 6 | [crossencoder-mMiniLMv2-L6-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-mMiniLMv2-L6-mmarcoFR) | fr,99+ | 107M | 428MB | 33.92 | 49.33 | 79.00 | 88.35 | 94.80 | 98.20 |
+| 7 | [crossencoder-MiniLM-L12-msmarco-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-MiniLM-L12-msmarco-mmarcoFR) | en | 33M | 134MB | 29.07 | 44.41 | 77.83 | 88.10 | 95.55 | 99.00 |
+| 8 | [crossencoder-MiniLM-L6-msmarco-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-MiniLM-L6-msmarco-mmarcoFR) | en | 23M | 91MB | 32.92 | 47.56 | 77.27 | 88.15 | 94.85 | 98.15 |
+| 9 | [crossencoder-MiniLM-L4-msmarco-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-MiniLM-L4-msmarco-mmarcoFR) | en | 19M | 77MB | 30.98 | 46.22 | 76.35 | 85.80 | 94.35 | 97.55 |
+| 10 | [crossencoder-MiniLM-L2-msmarco-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-MiniLM-L2-msmarco-mmarcoFR) | en | 15M | 62MB | 30.82 | 44.30 | 72.03 | 82.65 | 93.35 | 98.10 |

 ## Training
 ***
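The table reports MRR@10 among other metrics. As a reference, a small sketch of how mean reciprocal rank at a cutoff is computed; the ranks below are hypothetical and not taken from the actual evaluation.

```python
def mrr_at_k(first_relevant_ranks, k=10):
    """Mean reciprocal rank: average of 1/rank of the first relevant
    passage per query, counting 0 when it falls outside the top k."""
    return sum(1.0 / r if r <= k else 0.0 for r in first_relevant_ranks) / len(first_relevant_ranks)

# Hypothetical ranks of the first relevant passage for four queries
print(mrr_at_k([1, 2, 11, 5]))  # (1 + 0.5 + 0 + 0.2) / 4 = 0.425
```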
@@ -96,10 +99,10 @@ We used the French version of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco)
 ```bibtex
 @online{louis2023,
 author = 'Antoine Louis',
-title = 'crossencoder-mMiniLMv2-L12-
+title = 'crossencoder-mMiniLMv2-L12-mmarcoFR: A Cross-Encoder Model Trained on 1M sentence pairs in French',
 publisher = 'Hugging Face',
 month = 'september',
 year = '2023',
-url = 'https://huggingface.co/antoinelouis/crossencoder-mMiniLMv2-L12-
+url = 'https://huggingface.co/antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR',
 }
 ```