hpprc committed
Commit 323f589
1 Parent(s): 2f8b9b2

Update README.md

Files changed (1)
  1. README.md +63 -91
README.md CHANGED
@@ -10,43 +10,15 @@ base_model: line-corporation/line-distilbert-base-japanese
  widget: []
  pipeline_tag: sentence-similarity
  license: apache-2.0
  ---

- # SentenceTransformer based on line-corporation/line-distilbert-base-japanese
-
- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [line-corporation/line-distilbert-base-japanese](https://huggingface.co/line-corporation/line-distilbert-base-japanese). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
-
- ## Model Details
-
- ### Model Description
- - **Model Type:** Sentence Transformer
- - **Base model:** [line-corporation/line-distilbert-base-japanese](https://huggingface.co/line-corporation/line-distilbert-base-japanese) <!-- at revision 93bd4811608eecb95ffaaba957646efd9a909cc8 -->
- - **Maximum Sequence Length:** 512 tokens
- - **Output Dimensionality:** 768 tokens
- - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
-
- ### Model Sources
-
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
- ### Full Model Architecture
-
- ```
- MySentenceTransformer(
-   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
-   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
- )
- ```

  ## Usage

- ### Direct Usage (Sentence Transformers)
-
  First install the Sentence Transformers library:

  ```bash
@@ -55,63 +27,77 @@ pip install -U sentence-transformers

  Then you can load this model and run inference.
  ```python
  from sentence_transformers import SentenceTransformer

  # Download from the 🤗 Hub
- model = SentenceTransformer("sentence_transformers_model_id")
- # Run inference
  sentences = [
- 'The weather is lovely today.',
- "It's so sunny outside!",
- 'He drove to the stadium.',
  ]
- embeddings = model.encode(sentences)
- print(embeddings.shape)
- # [3, 768]
-
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
- print(similarities.shape)
- # [3, 3]
- ```
-
- <!--
- ### Direct Usage (Transformers)
-
- <details><summary>Click to see the direct usage in Transformers</summary>
-
- </details>
- -->
-
- <!--
- ### Downstream Usage (Sentence Transformers)
-
- You can finetune this model on your own dataset.
-
- <details><summary>Click to expand</summary>
-
- </details>
- -->
-
- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
-
- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
-
- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->

- ## Training Details

  ### Framework Versions
  - Python: 3.10.13
@@ -122,24 +108,10 @@ You can finetune this model on your own dataset.
  - Datasets: 2.19.1
  - Tokenizers: 0.19.1

- ## Citation

  ### BibTeX

- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->

  widget: []
  pipeline_tag: sentence-similarity
  license: apache-2.0
+ datasets:
+ - cl-nagoya/ruri-dataset-ft
  ---

+ # Ruri: Japanese General Text Embeddings

  ## Usage

  First install the Sentence Transformers library:

  ```bash
  pip install -U sentence-transformers
  ```

  Then you can load this model and run inference.
  ```python
+ import torch.nn.functional as F
  from sentence_transformers import SentenceTransformer

  # Download from the 🤗 Hub
+ model = SentenceTransformer("cl-nagoya/ruri-small", trust_remote_code=True)
+
+ # Don't forget to add the prefix "クエリ: " for query-side or "文章: " for passage-side texts.
  sentences = [
+ "クエリ: 瑠璃色はどんな色?",
+ "文章: 瑠璃色(るりいろ)は、紫みを帯びた濃い青。名は、半貴石の瑠璃(ラピスラズリ、英: lapis lazuli)による。JIS慣用色名では「こい紫みの青」(略号 dp-pB)と定義している[1][2]。",
+ "クエリ: ワシやタカのように、鋭いくちばしと爪を持った大型の鳥類を総称して「何類」というでしょう?",
+ "文章: ワシ、タカ、ハゲワシ、ハヤブサ、コンドル、フクロウが代表的である。これらの猛禽類はリンネ前後の時代(17~18世紀)には鷲類・鷹類・隼類及び梟類に分類された。ちなみにリンネは狩りをする鳥を単一の目(もく)にまとめ、vultur(コンドル、ハゲワシ)、falco(ワシ、タカ、ハヤブサなど)、strix(フクロウ)、lanius(モズ)の4属を含めている。",
  ]

+ embeddings = model.encode(sentences, convert_to_tensor=True)
+ print(embeddings.size())
+ # [4, 768]

+ similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
+ print(similarities)
+ ```
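
+ The prefixes matter: the model is trained with them, so texts embedded without the appropriate "クエリ: " or "文章: " prefix will generally give less reliable similarity and retrieval scores.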

+ ## Benchmarks
+
+ ### JMTEB
+ Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
+
+ |Model|#Param.|Avg.|Retrieval|STS|Classification|Reranking|Clustering|PairClassification|
+ |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+ |[cl-nagoya/sup-simcse-ja-base](https://huggingface.co/cl-nagoya/sup-simcse-ja-base)|111M|68.56|49.64|82.05|73.47|91.83|51.79|62.57|
+ |[cl-nagoya/sup-simcse-ja-large](https://huggingface.co/cl-nagoya/sup-simcse-ja-large)|337M|66.51|37.62|83.18|73.73|91.48|50.56|62.51|
+ |[cl-nagoya/unsup-simcse-ja-base](https://huggingface.co/cl-nagoya/unsup-simcse-ja-base)|111M|65.07|40.23|78.72|73.07|91.16|44.77|62.44|
+ |[cl-nagoya/unsup-simcse-ja-large](https://huggingface.co/cl-nagoya/unsup-simcse-ja-large)|337M|66.27|40.53|80.56|74.66|90.95|48.41|62.49|
+ |[pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja)|133M|70.44|59.02|78.71|76.82|91.90|49.78|66.39|
+ ||||||||||
+ |[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)|472M|64.70|40.12|76.56|72.66|91.63|44.88|62.33|
+ |[intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small)|118M|69.52|67.27|80.07|67.62|93.03|46.91|62.19|
+ |[intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base)|278M|70.12|68.21|79.84|69.30|92.85|48.26|62.26|
+ |[intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)|560M|71.65|70.98|79.70|72.89|92.96|51.24|62.15|
+ ||||||||||
+ |OpenAI/text-embedding-ada-002|-|69.48|64.38|79.02|69.75|93.04|48.30|62.40|
+ |OpenAI/text-embedding-3-small|-|70.86|66.39|79.46|73.06|92.92|51.06|62.27|
+ |OpenAI/text-embedding-3-large|-|73.97|74.48|82.52|77.58|93.58|53.32|62.35|
+ ||||||||||
+ |[Ruri-Small](https://huggingface.co/cl-nagoya/ruri-small)|68M|71.53|69.41|82.79|76.22|93.00|51.19|62.11|
+ |[Ruri-Base](https://huggingface.co/cl-nagoya/ruri-base)|111M|71.91|69.82|82.87|75.58|92.91|54.16|62.38|
+ |[Ruri-Large](https://huggingface.co/cl-nagoya/ruri-large)|337M|73.31|73.02|83.13|77.43|92.99|51.82|62.29|
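
+ In the retrieval setting, the same prefix convention applies: queries take "クエリ: " and passages take "文章: ". A minimal sketch of query-to-passage ranking (the strings and variable names are illustrative, not taken from the benchmark):
+
+ ```python
+ import torch.nn.functional as F
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("cl-nagoya/ruri-small", trust_remote_code=True)
+
+ # Queries and passages get different prefixes (asymmetric retrieval).
+ queries = ["クエリ: 瑠璃色はどんな色?"]
+ passages = [
+     "文章: 瑠璃色(るりいろ)は、紫みを帯びた濃い青。",
+     "文章: ワシ、タカ、ハゲワシ、ハヤブサ、コンドル、フクロウが代表的である。",
+ ]
+
+ q_emb = model.encode(queries, convert_to_tensor=True)   # [1, 768]
+ p_emb = model.encode(passages, convert_to_tensor=True)  # [2, 768]
+
+ # Rank passages by cosine similarity to each query.
+ scores = F.cosine_similarity(q_emb.unsqueeze(1), p_emb.unsqueeze(0), dim=2)  # [1, 2]
+ print(scores.argsort(dim=1, descending=True))  # passage indices, best first
+ ```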

+ ## Model Details

+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [line-corporation/line-distilbert-base-japanese](https://huggingface.co/line-corporation/line-distilbert-base-japanese)
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768
+ - **Similarity Function:** Cosine Similarity
+ - **Language:** Japanese
+ - **License:** Apache 2.0
+ - **Paper:** https://arxiv.org/abs/2409.07737

+ ### Full Model Architecture

+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
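
+ Because pooling here is a plain attention-mask-weighted mean over token embeddings, roughly equivalent embeddings can be computed with 🤗 Transformers alone. A sketch under that assumption (the tokenizer needs trust_remote_code=True plus the base model's Japanese tokenization dependencies; the output is unnormalized, matching the architecture above, which has no Normalize module):
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("cl-nagoya/ruri-small", trust_remote_code=True)
+ model = AutoModel.from_pretrained("cl-nagoya/ruri-small")
+
+ batch = tokenizer(["クエリ: 瑠璃色はどんな色?"], padding=True, truncation=True, return_tensors="pt")
+ with torch.no_grad():
+     token_embs = model(**batch).last_hidden_state  # [batch, seq_len, 768]
+
+ # Mean pooling: average token embeddings, ignoring padding positions.
+ mask = batch["attention_mask"].unsqueeze(-1).float()
+ embeddings = (token_embs * mask).sum(dim=1) / mask.sum(dim=1)
+ print(embeddings.shape)  # [1, 768]
+ ```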
  ### Framework Versions
  - Python: 3.10.13
  - Datasets: 2.19.1
  - Tokenizers: 0.19.1

+ <!-- ## Citation

  ### BibTeX
+ -->

+ ## License
+ This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).