antoinelouis committed on
Commit 095fae1
1 Parent(s): abda7af

Update README.md

Files changed (1)
  1. README.md +11 -26
README.md CHANGED
@@ -44,7 +44,7 @@ model-index:
 
  # biencoder-mMiniLMv2-L6-mmarcoFR
 
- This is a dense single-vector bi-encoder model. It maps sentences and paragraphs to a 384-dimensional dense vector space and should be used for semantic search. The model was trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) retrieval dataset.
+ This is a dense single-vector bi-encoder model for **French** that can be used for semantic search. The model maps queries and passages to 384-dimensional dense vectors, which are used to compute relevance through cosine similarity.
 
  ## Usage
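
The card's full usage snippet sits between these hunks and is elided from the diff (only its last lines surface in the next hunk header). As a minimal sketch of the behavior the description promises, assuming the standard sentence-transformers API:

```python
from sentence_transformers import SentenceTransformer

# Load the bi-encoder; it produces 384-dimensional embeddings.
model = SentenceTransformer('antoinelouis/biencoder-mMiniLMv2-L6-mmarcoFR')

queries = ["Combien de personnes vivent à Paris ?"]
passages = ["Paris compte environ 2,1 millions d'habitants.",
            "Le Louvre est le musée le plus visité au monde."]

# With L2-normalized embeddings, the dot product equals cosine
# similarity, which the card uses as the relevance score.
q_embeddings = model.encode(queries, normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)

similarity = q_embeddings @ p_embeddings.T  # one row per query
print(similarity)
```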
 
@@ -121,24 +121,11 @@ similarity = q_embeddings @ p_embeddings.T
  print(similarity)
  ```
 
- ***
-
  ## Evaluation
 
- We evaluate the model on the smaller development set of [mMARCO-fr](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/), which consists of 6,980 queries for a corpus of 8.8M candidate passages. Below, we compare the model's performance with that of other bi-encoder models fine-tuned on the same dataset. We report the mean reciprocal rank (MRR), normalized discounted cumulative gain (NDCG), mean average precision (MAP), and recall at various cut-offs (R@k).
-
- |    | model | Vocab. | #Param. | Size | MRR@10 | NDCG@10 | MAP@10 | R@10 | R@100 | R@500 |
- |---:|:------|:-------|--------:|-----:|-------:|--------:|-------:|-----:|------:|------:|
- |  1 | [biencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camembert-base-mmarcoFR) | 🇫🇷 | 110M | 443MB | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
- |  2 | [biencoder-mpnet-base-all-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mpnet-base-all-v2-mmarcoFR) | 🇬🇧 | 109M | 438MB | 28.04 | 33.28 | 27.50 | 51.07 | 77.68 | 88.67 |
- |  3 | [biencoder-distilcamembert-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-mmarcoFR) | 🇫🇷 | 68M | 272MB | 26.80 | 31.87 | 26.23 | 49.20 | 76.44 | 87.87 |
- |  4 | [biencoder-MiniLM-L6-all-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L6-all-v2-mmarcoFR) | 🇬🇧 | 23M | 91MB | 25.49 | 30.39 | 24.99 | 47.10 | 73.48 | 86.09 |
- |  5 | [biencoder-mMiniLMv2-L12-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR) | 🇫🇷,99+ | 117M | 471MB | 24.74 | 29.41 | 24.23 | 45.40 | 71.52 | 84.42 |
- |  6 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 🇫🇷 | 112M | 447MB | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
- |  7 | [biencoder-electra-base-french-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-french-mmarcoFR) | 🇫🇷 | 110M | 440MB | 23.38 | 27.97 | 22.91 | 43.50 | 68.96 | 81.61 |
- |  8 | **biencoder-mMiniLMv2-L6-mmarcoFR** | 🇫🇷,99+ | 107M | 428MB | 22.29 | 26.57 | 21.80 | 41.25 | 66.78 | 79.83 |
-
- ***
+ The model is evaluated on the smaller development set of [mMARCO-fr](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/), which consists of 6,980 queries for a corpus of
+ 8.8M candidate passages. We report the mean reciprocal rank (MRR), normalized discounted cumulative gain (NDCG), mean average precision (MAP), and recall at various cut-offs (R@k).
+ To see how it compares to other neural retrievers in French, check out the [*DécouvrIR*](https://huggingface.co/spaces/antoinelouis/decouvrir) leaderboard.
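
The metrics named above are standard; as a hedged illustration (not the author's evaluation script), MRR@k and R@k can be computed from ranked results like this:

```python
def mrr_at_k(rankings, relevant, k=10):
    """Mean of 1/rank of the first relevant passage in the top k (0 if absent)."""
    total = 0.0
    for qid, ranked in rankings.items():
        for rank, pid in enumerate(ranked[:k], start=1):
            if pid in relevant[qid]:
                total += 1.0 / rank
                break
    return total / len(rankings)

def recall_at_k(rankings, relevant, k=100):
    """Mean fraction of a query's relevant passages retrieved in the top k."""
    total = 0.0
    for qid, ranked in rankings.items():
        total += len(set(ranked[:k]) & relevant[qid]) / len(relevant[qid])
    return total / len(rankings)

# rankings: {query_id: [passage_id, ...]} sorted by descending similarity
# relevant: {query_id: {relevant_passage_id, ...}} taken from the qrels
```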
 
  ## Training
 
@@ -153,17 +140,15 @@ checkpoint and optimized via the cross-entropy loss (as in [DPR](https://doi.org
  NVIDIA V100 GPU for 20 epochs (i.e., 65.7k steps) using the AdamW optimizer with a batch size of 152, a peak learning rate of 2e-5 with warm-up along the first 500 steps
  and linear scheduling. We set the maximum sequence lengths for both the questions and passages to 128 tokens. We use the cosine similarity to compute relevance scores.
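
The hunk header above cites a cross-entropy loss as in DPR, i.e., training with in-batch negatives. A minimal sketch of that objective (illustrative only, not the author's training code; the function name is made up):

```python
import torch
import torch.nn.functional as F

def in_batch_cross_entropy(q_embeddings, p_embeddings):
    """DPR-style loss: the i-th query's positive passage is the i-th passage;
    every other passage in the batch serves as a negative."""
    q = F.normalize(q_embeddings, dim=-1)   # cosine similarity = dot product
    p = F.normalize(p_embeddings, dim=-1)   # of L2-normalized vectors
    scores = q @ p.T                        # (batch, batch) similarity matrix
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)  # positives sit on the diagonal
```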
 
- ***
-
  ## Citation
 
  ```bibtex
- @online{louis2023,
-   author = 'Antoine Louis',
-   title = 'biencoder-mMiniLMv2-L6-mmarcoFR: A Biencoder Model Trained on French mMARCO',
-   publisher = 'Hugging Face',
-   month = 'may',
-   year = '2023',
-   url = 'https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-mmarcoFR',
+ @online{louis2024decouvrir,
+   author = 'Antoine Louis',
+   title = 'DécouvrIR: A Benchmark for Evaluating the Robustness of Information Retrieval Models in French',
+   publisher = 'Hugging Face',
+   month = 'mar',
+   year = '2024',
+   url = 'https://huggingface.co/spaces/antoinelouis/decouvrir',
  }
  ```
 