Update README.md
README.md
CHANGED
@@ -15,42 +15,32 @@ datasets:
- This is a text embedding model based on RoFormer with a maximum input sequence length of 1024.
- The model is pre-trained with Wikipedia and cc100 and fine-tuned as a sentence embedding model.
- Fine-tuning begins with weakly supervised learning using mc4 and MQA.
- After that, we perform the same 3-stage learning process as [GLuCoSE v2](https://huggingface.co/pkshatech/GLuCoSE-base-ja-v2).
- - **Model Type:** Sentence Transformer
- - **Maximum Sequence Length:** 1024 tokens
- - **Output Dimensionality:** 768 tokens
- - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
- You can perform inference using
@@ -124,64 +115,78 @@ print(similarities)
- ### Out-of-Scope Use
- ## Bias, Risks and Limitations
- ### Recommendations
- ### Retieval
- * The time-consuming datasets ['amazon_review_classification', 'mrtydi', 'jaqket', 'esci'] were excluded, and the evaluation was conducted on the other 12 datasets.
- * The average is a macro-average per task.
- |:--:|:--:|:--:|:--:|:----:|:-------:|:-------:|:------:|
- | [mE5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 0.3B | 75.1 | 80.6 | 80.5 | **52.6** | 62.4 | 70.2 |
- | [GLuCoSE](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.1B | **82.6** | 69.8 | 78.2 | 51.5 | **66.2** | 69.7 |
- | RoSEtta | 0.2B | 79.0 | **84.3** | **81.4** | **53.2** | 61.7 | **71.9** |

- hpprc/mqa-ja
- google-research-datasets/paws-x
---

# RoSEtta

RoSEtta (**Ro**Former-based **S**entence **E**ncoder **t**hrough Dis**t**ill**a**tion) is a general-purpose Japanese text embedding model that excels at retrieval tasks. It has a maximum sequence length of 1024 tokens, allowing long sentences as input. It can run on a CPU and is designed both to measure semantic similarity between sentences and to serve as a retrieval system that searches for passages based on queries.

Key features:

- Uses RoPE (Rotary Position Embedding); see the sketch after this list
- Maximum sequence length of 1024 tokens
- Distilled from large sentence embedding models
- Specialized for retrieval tasks
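
For intuition, the rotation that RoPE applies can be sketched in a few lines. This is a generic illustration of the standard formulation, not code from this repository; the head size and inputs below are made up.

```python
# Generic illustration of Rotary Position Embedding (RoPE); the head size and
# inputs are arbitrary, not RoSEtta's actual configuration.
import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply RoPE to x of shape (seq_len, dim); dim must be even."""
    seq_len, dim = x.shape
    half = dim // 2
    inv_freq = 1.0 / (base ** (np.arange(half) / half))   # one frequency per dimension pair
    angles = np.outer(np.arange(seq_len), inv_freq)       # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # Rotate each (x1, x2) pair by its position-dependent angle.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

q = np.random.randn(8, 64)      # 8 tokens, one 64-dimensional attention head
k = np.random.randn(8, 64)
scores = rope(q) @ rope(k).T    # dot products now encode relative token offsets
```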

During inference, the prefix "query: " or "passage: " is required. Please check the Usage section for details.

## Model Description

This model is based on the RoFormer architecture. After pre-training with an MLM loss, weakly supervised learning was performed. Further training was then conducted through distillation from several large embedding models and multi-stage contrastive learning (as in [GLuCoSE v2](https://huggingface.co/pkshatech/GLuCoSE-base-ja-v2)).

- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity

## Usage

### Direct Usage (Sentence Transformers)

You can perform inference using SentenceTransformer with the following code:

```python
from sentence_transformers import SentenceTransformer

# [0.5910, 1.0000, 0.4977, 0.6969],
# [0.4332, 0.4977, 1.0000, 0.7475],
# [0.5421, 0.6969, 0.7475, 1.0000]]

```
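
Only the tail of the card's example is visible in this diff. As a complete, minimal sketch: the repository id `pkshatech/RoSEtta-base-ja`, the `trust_remote_code=True` flag, and the sample sentences are assumptions rather than values taken from this page, and the printed scores will differ for different inputs; the "query: "/"passage: " prefixes follow the convention described above.

```python
# Minimal end-to-end sketch (model id, flag, and sentences are assumptions).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("pkshatech/RoSEtta-base-ja", trust_remote_code=True)

# Every input must start with "query: " or "passage: ".
sentences = [
    "query: 日本で一番高い山は？",
    "passage: 富士山は日本で最も高い山である。",
    "query: 日本で一番深い湖は？",
    "passage: 田沢湖は日本で最も深い湖である。",
]

embeddings = model.encode(sentences)                      # shape: (4, 768)
similarities = model.similarity(embeddings, embeddings)   # cosine similarity matrix
print(similarities)
```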

### Direct Usage (Transformers)

```python
# [0.5910, 1.0000, 0.4977, 0.6969],
# [0.4332, 0.4977, 1.0000, 0.7475],
# [0.5421, 0.6969, 0.7475, 1.0000]]

```
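
Only the expected output is visible here. A plain-Transformers sketch typically tokenizes, pools the last hidden state, and normalizes; the mean-pooling choice and the model id below are assumptions, so check the repository files for the exact configuration.

```python
# Sketch of plain-Transformers inference; pooling strategy and model id are assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "pkshatech/RoSEtta-base-ja"   # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

sentences = [
    "query: 富士山の高さは？",
    "passage: 富士山の標高は3776メートルです。",
]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=1024,
                  return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# Mean pooling over non-padding tokens.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = F.normalize(embeddings, p=2, dim=1)

similarities = embeddings @ embeddings.T   # cosine similarities (rows are normalized)
print(similarities)
```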

## Training Details

The fine-tuning of RoSEtta is carried out through the following steps:

**Step 1: Pre-training**

- The model is pre-trained based on the RoFormer architecture.
- Training data: [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch/) and [cc100](https://data.statmt.org/cc-100/); an illustrative MLM sketch follows below.
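
Pre-training of this kind is normally a masked-language-modeling objective over the raw corpora. The following is an illustrative sketch only: the tokenizer, model size, and toy texts are stand-ins, not the configuration actually used for RoSEtta.

```python
# Illustrative MLM pre-training step for a RoFormer encoder.
# The tokenizer and texts are placeholders, not the RoSEtta recipe.
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          RoFormerConfig, RoFormerForMaskedLM)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")  # placeholder
config = RoFormerConfig(vocab_size=tokenizer.vocab_size, max_position_embeddings=1024)
model = RoFormerForMaskedLM(config)

# Randomly masks 15% of tokens and sets the corresponding labels.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

texts = ["ウィキペディアは百科事典です。", "これは事前学習コーパスの代わりの例文です。"]
features = [tokenizer(t, truncation=True, max_length=1024) for t in texts]
batch = collator(features)

loss = model(**batch).loss   # cross-entropy on the masked positions
loss.backward()
```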

**Step 2: Weakly supervised learning**

- Training data: [MQA](https://huggingface.co/datasets/clips/mqa) and [mc4](https://huggingface.co/datasets/legacy-datasets/mc4).

**Step 3: Ensemble distillation**

- The embedded representation was distilled using [E5-mistral](https://huggingface.co/intfloat/e5-mistral-7b-instruct), [gte-Qwen2](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct), and [mE5-large](https://huggingface.co/intfloat/multilingual-e5-large) as teacher models; a conceptual sketch follows below.
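
The card only names the teachers, so the sketch below is conceptual: a projection head per teacher maps the 768-dimensional student embedding into that teacher's space, and a cosine objective pulls it toward the teacher vector. The loss, the projection heads, and the listed teacher dimensions are assumptions for illustration.

```python
# Conceptual ensemble-distillation objective (loss and projections are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

student_dim = 768
teacher_dims = {"e5-mistral": 4096, "gte-Qwen2": 3584, "mE5-large": 1024}  # illustrative

# One linear head per teacher, mapping the student embedding into that teacher's space.
heads = nn.ModuleDict({name: nn.Linear(student_dim, dim) for name, dim in teacher_dims.items()})

def distillation_loss(student_emb, teacher_embs):
    """student_emb: (batch, 768); teacher_embs[name]: (batch, teacher_dim)."""
    per_teacher = []
    for name, t_emb in teacher_embs.items():
        projected = heads[name](student_emb)
        per_teacher.append((1 - F.cosine_similarity(projected, t_emb, dim=-1)).mean())
    return torch.stack(per_teacher).mean()   # average the per-teacher losses

# Toy batch: random tensors stand in for real student/teacher embeddings.
student = torch.randn(4, student_dim, requires_grad=True)
teachers = {name: torch.randn(4, dim) for name, dim in teacher_dims.items()}
distillation_loss(student, teachers).backward()
```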

**Step 4: Contrastive learning**

- Triplets were created from [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88), [MNLI](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7), [PAWS-X](https://huggingface.co/datasets/paws-x), [JSeM](https://github.com/DaisukeBekki/JSeM) and [Mr.TyDi](https://huggingface.co/datasets/castorini/mr-tydi) and used for training.
- This training aimed to improve the overall performance as a sentence embedding model; a sketch of a typical triplet objective follows below.
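
The exact loss is not stated on this card. A common way to train on (anchor, positive, negative) triplets in the Sentence Transformers ecosystem is MultipleNegativesRankingLoss, sketched here with made-up triplets and an assumed model id; prefix handling is omitted for brevity.

```python
# Hedged sketch of triplet-based contrastive fine-tuning; the loss choice,
# hyperparameters, model id, and example triplets are illustrative only.
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("pkshatech/RoSEtta-base-ja", trust_remote_code=True)

train_examples = [
    InputExample(texts=["日本で一番高い山は？",            # anchor
                        "富士山は日本で最も高い山である。",  # positive
                        "琵琶湖は日本最大の湖である。"]),    # hard negative
    InputExample(texts=["犬が公園を走っている。",
                        "公園で犬が駆け回っている。",
                        "猫がソファで眠っている。"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives plus the explicit hard negative of each triplet.
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```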

**Step 5: Search-specific contrastive learning**

- In order to make the model more robust to retrieval tasks, additional two-stage training with QA and retrieval data was conducted.
- In the first stage, the synthetic dataset [auto-wiki-qa](https://huggingface.co/datasets/cl-nagoya/auto-wiki-qa) was used for training, while in the second stage, [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA), [MQA](https://huggingface.co/datasets/hpprc/mqa-ja), [Japanese Wikipedia Human Retrieval, Mr.TyDi, MIRACL, Quiz Works and Quiz No Mori](https://huggingface.co/datasets/hpprc/emb) were used.

## Benchmarks

### Retrieval

Evaluated with [MIRACL-ja](https://huggingface.co/datasets/miracl/miracl), [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA), [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR) and [MLDR-ja](https://huggingface.co/datasets/Shitao/MLDR).

| Model | Size | MIRACL<br>Recall@5 | JQaRA<br>nDCG@10 | JaCWIR<br>MAP@10 | MLDR<br>nDCG@10 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.6B | 89.2 | 55.4 | **87.6** | 29.8 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 0.3B | 78.7 | 62.4 | 85.0 | **37.5** |
| | | | | | |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 0.3B | **84.2** | 47.2 | **85.3** | 25.4 |
| [cl-nagoya/ruri-base](https://huggingface.co/cl-nagoya/ruri-base) | 0.1B | 74.3 | **58.1** | 84.6 | **35.3** |
| [pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.1B | 53.3 | 30.8 | 68.6 | 25.2 |
| RoSEtta | 0.2B | 79.3 | 57.7 | 83.8 | 32.3 |

Note: Results for the OpenAI small embeddings on JQaRA and JaCWIR are quoted from the [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA) and [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR) dataset pages.
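
For reference, the ranking metrics in the table can be computed in a few lines; this toy snippet only illustrates binary-relevance Recall@k and nDCG@k and is unrelated to the actual evaluation code.

```python
# Toy illustration of Recall@k and nDCG@k with binary relevance labels.
import math

def recall_at_k(ranked, relevant, k):
    """Fraction of the relevant documents that appear in the top-k results."""
    return len(set(ranked[:k]) & set(relevant)) / len(relevant)

def ndcg_at_k(ranked, relevant, k):
    """DCG of the ranking divided by the best possible DCG at cutoff k."""
    dcg = sum(1.0 / math.log2(i + 2) for i, d in enumerate(ranked[:k]) if d in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal

ranked = ["d3", "d1", "d7", "d2", "d9"]   # system ranking for one query
relevant = {"d1", "d2"}                   # gold relevant documents
print(recall_at_k(ranked, relevant, 5))   # 1.0
print(ndcg_at_k(ranked, relevant, 5))     # ≈ 0.65
```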

### JMTEB

Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).

The average score is a macro-average over the task categories.
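
Concretely, the Avg. column is the unweighted mean of the six per-category scores (each category is itself averaged over its datasets); for example, using the RoSEtta row from the table below:

```python
# Macro-average over task categories, reproducing the Avg. value of the RoSEtta row.
rosetta = {"Retrieval": 73.21, "STS": 81.39, "Classification": 72.41,
           "Reranking": 92.69, "Clustering": 53.23, "PairClassification": 61.74}
avg = sum(rosetta.values()) / len(rosetta)
print(avg)   # ≈ 72.45
```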

| Model | Size | Avg. | Retrieval | STS | Classification | Reranking | Clustering | PairClassification |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| OpenAI/text-embedding-3-small | - | 69.18 | 66.39 | 79.46 | 73.06 | 92.92 | 51.06 | 62.27 |
| OpenAI/text-embedding-3-large | - | 74.05 | 74.48 | 82.52 | 77.58 | 93.58 | 53.32 | 62.35 |
| | | | | | | | | |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.6B | 70.90 | 70.98 | 79.70 | 72.89 | 92.96 | 51.24 | 62.15 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 0.3B | 73.31 | 73.02 | 83.13 | 77.43 | 92.99 | 51.82 | 62.29 |
| | | | | | | | | |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 0.3B | 68.61 | 68.21 | 79.84 | 69.30 | **92.85** | 48.26 | 62.26 |
| [cl-nagoya/ruri-base](https://huggingface.co/cl-nagoya/ruri-base) | 0.1B | 71.91 | 69.82 | **82.87** | 75.58 | 92.91 | **54.16** | 62.38 |
| [pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.1B | 67.29 | 59.02 | 78.71 | **76.82** | 91.90 | 49.78 | **66.39** |
| RoSEtta | 0.2B | **72.45** | **73.21** | 81.39 | 72.41 | 92.69 | 53.23 | 61.74 |

## Authors

Chihiro Yano, Mocho Go, Hideyuki Tachibana, Hiroto Takegawa, Yotaro Watanabe

## License

This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).