bhlim committed
Commit 62a9198 · verified · 1 parent: da63f2e

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
{
    "word_embedding_dimension": 768,
    "pooling_mode_cls_token": true,
    "pooling_mode_mean_tokens": false,
    "pooling_mode_max_tokens": false,
    "pooling_mode_mean_sqrt_len_tokens": false,
    "pooling_mode_weightedmean_tokens": false,
    "pooling_mode_lasttoken": false,
    "include_prompt": true
}
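
This pooling configuration keeps only the `[CLS]` token embedding; mean, max and last-token pooling are disabled. As a minimal sketch (assuming sentence-transformers is installed), the equivalent module can be built directly:

```python
from sentence_transformers import models

# Rebuild the pooling layer described by 1_Pooling/config.json:
# only the [CLS] token embedding is used, mean/max pooling stay disabled.
pooling = models.Pooling(
    word_embedding_dimension=768,
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
    pooling_mode_max_tokens=False,
)
print(pooling.get_pooling_mode_str())  # -> "cls"
```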
README.md ADDED
---
base_model: BAAI/bge-base-en-v1.5
datasets:
- bhlim/patentmatch_for_finetuning
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10136
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: The UE sends the uplink signal including the identifier of the
    uplink serving node to the downlink serving node and in this case the downlink
    serving node learns the mapping relationship among the UE the uplink serving node
    and the downlink serving node.The UE sends the uplink signal including the identifier
    of the downlink serving node to the uplink serving node an in thiscase the uplink
    serving node learns the mapping relationship among the UE the uplink serving node
    and the downlink serving node.
  sentences:
  - A terminal for use in a wireless communication network comprising a plurality
    of base stations the terminal arranged to communicate with the network via at
    least two cells of a plurality of cells and to transmit a request for uplink resources
    wherein the terminal is arranged to select at least one cell from among said plurality
    of said cells for transmission of said request a resource for transmission of
    said request from among a plurality of resources provided by a cell and a characteristic
    of a signal used to transmit said request and to perform the selection in dependence
    on at least one of the reason for said request the characteristics of an uplink
    channel for transmission of said request and the preference of the network.
  - The electronic device of any of claims 15 wherein the processor is further configured
    to check whether the specific audio data is stored at the memory in response to
    a play request on the specific audio data.
  - The system of claim 1 or claim 2 comprising a plurality of said radiation emitting
    devices.
- source_sentence: Further in the example of Fig.35 the sound adjusting circuit 210
    controls the sound outputs of the first to fourth speakers 161 to 164 based on
    the sound data from the first to fifth detection sensors 420 to 428 so that the
    first sound corresponding to the second display image is localized in the first
    area 810 where the occupant of the driver seat 13 and the occupant of the rear
    seat 18 equipped with the headrest 25 are located.Likewise the sound adjusting
    circuit 210 controls the sound outputs of the first to fourth speakers 161 to
    164 based on the sound data from the first to fifth detection sensors 420 to 428
    so that the second sound corresponding to the first display image is localized
    in the second area 820 where the occupant of the assistant drivers seat 12 and
    the occupants of the rear seats 18 equipped with the headrests 26 and 27 respectively
    are located.Accordingly the occupant of the rear seat 18 equipped with the headrest
    26 who is located in the crosstalk area 603 in Fig.33 can now hear the second
    sound clearly.
  sentences:
  - A gas turbine engine comprising a bladed rotor assembly 100200300400 according
    to any one of Claims 1 to 9.
  - The method of claim 1 further comprising sensing a distance between the display
    and a user wherein applying the sound setting comprises applying the sound setting
    based on the sensed distance between the display and the user and the obtained
    curvature of the panel of the display.
  - A developer carrying member that is capable of carrying a developer on a surface
    thereof and that supplies the developer carried on the surface to a surface of
    an image bearing member when a voltage is applied thereto comprising an elastic
    layer and a surface layer that covers the elastic layer contains alumina and has
    a higher volume resistivity than the elastic layer.
- source_sentence: In the example of fig.1 a user 107 who arrives in the underground
    area 109 and who has not yet subscribed to the electronic ticket service may subscribe
    to the service by connecting his Bluetooth device 107a to a Bluetooth access point
    104 of the service provider via a Bluetooth service device 104a.At the access
    point 104 the customer 104 may perform a payment transaction select a desired
    subscription and receive a link key.With the link key the users Bluetooth device
    107a may subsequently establish secure Bluetooth connections with the Bluetooth
    transceivers 101 and 102af.
  sentences:
  - A wireless communications device 102 for setting up a local service session in
    a shortrange wireless communication network comprising means for sending 222 a
    request for preconfiguration information over a longrange network 104 to a remote
    destination 112 the preconfiguration information enabling establishment of the
    local service session with a proximate wireless communications device 110means
    for receiving 222 from the remote destination 112 the requested preconfiguration
    information wherein the requested preconfiguration information includes one or
    more security keys for performing an authentication process with the proximate
    wireless communications device 110 over shortrange wireless communication means
    for performing 220 an authentication process for establishing the local service
    session with the proximate wireless communications device 110 over the shortrange
    wireless communication using the received one or more security keys and means
    for establishing 220 the local service session with the proximate wireless communications
    device 112 over the shortrange wireless communications after the authentication
    process.
  - The mobile terminal any one of claims 2 to 4 wherein the controller 180 is further
    configured to differently process a color of the image corresponding to the trajectory
    of the second touch based on a position of the first touch.
  - A detergent box assembly for a washing machine comprising a detergent box a distributor
    box having a front plate a rear plate and a receiving chamber provided therebetween
    said receiving chamber configured to store a laundry treat agent the distributor
    box being movably disposed within the detergent box and adapted to move between
    an open position and a closed position a keypress being provided in the front
    plate and a driving subassembly disposed in at least one of the detergent box
    and the distributor box and configured to drive the distributor box to move from
    the closed position to the open position when the keypress is pressed.
- source_sentence: The step of determining may comprisemeasuring a distance between
    each surrogate server and each subnetwork according to the subnetwork of the user
    selecting a surrogate server with the smallest distance.
  sentences:
  - The computer system of Claim 13 comprising a memory storing instructions which
    when implemented on the one or more processors configure the computer system to
    carry out the method of any one of Claims 1 to 10
  - A cooking oven 1 comprising a housing 2 a cooking cavity 3 formed in the housing
    2 and closable by a door 5 heating means 6 6 placed in thermal exchange relationship
    with the cooking cavity 3 ventilating means placed in the housing 2 and having
    one or more electrical fans 7 8 7 8 adapted to ventilate on one or more thermally
    sensitive areas of the oven 1 a control system 10 connected to the heating means
    6 6 and to the ventilating means and having a temperature detector 12 associated
    with the cooking cavity 3 wherein the control system is configured to activate
    and deactivate the heating means 6 6 depending on a temperature detected by the
    temperature detector 12 characterized in that the control system 10 activates
    and deactivates at least one of said one or more fans 7 8 7 8 automatically together
    with the respective activation and deactivation of the heating means 6 6.
  - The method of claim 12 wherein selecting the target control parameter further
    comprises for the respective selected control parameters comparing the initial
    turbine output with the predicted turbine output while operating the selected
    control parameter with the adjustment of the selected control parameter to determine
    an adjustment differential and selecting the target control parameter having the
    target adjustment by using the adjustment differential of the target control parameter.
- source_sentence: Referring to FIG.32 a a sink device 3200 is designed to display
    thumbnail images in the metadata of contents received from source devices connected
    via an integrated wire interface.As mentioned in the foregoing description if
    a remote controller 3250 capable of outputting a pointing signal is situated within
    a region of a specific thumbnail image 3260 side information e.g.Amanda 1st album
    singer.Song etc.is displayed together.
  sentences:
  - The method of any one of claims 8 to 12 wherein the requesting for the broadcast
    channel information comprises transmitting to the server image data obtained by
    capturing the content being reproduced by the display apparatus or audio data
    obtained by recording the content for a certain time.
  - The electrode assembly of any one of the preceding claims wherein the first electrode
    comprises a substrate 113 wherein the first active material layer comprises active
    material layers 112 on both surfaces of the substrate and the ceramic layer comprises
    ceramic material layers 50 on both surfaces of the substrate.
  - A method according to claim 1 wherein said topsheet assembly is a threeply laminate
    comprising an acquisition layer a nonwoven layer and a cuff assembly.
model-index:
- name: BGE base PatentMatch Matryoshka
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.042620363062352014
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.10142067876874507
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.14483030781373324
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.23204419889502761
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.042620363062352014
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.03380689292291502
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.02896606156274665
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.023204419889502764
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.042620363062352014
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.10142067876874507
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.14483030781373324
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.23204419889502761
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.12169609468606697
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.08838588842535165
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.10140867877546615
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.04222573007103394
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.09352801894238358
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.14285714285714285
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.22454617205998423
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.04222573007103394
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.031176006314127862
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.028571428571428574
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.02245461720599842
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.04222573007103394
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.09352801894238358
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.14285714285714285
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.22454617205998423
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.11822400593872298
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.08611580912291245
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.09959411357742169
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.04025256511444357
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.09155485398579322
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.13970007892659828
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.21981057616416733
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.04025256511444357
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.03051828466193107
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.02794001578531966
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.021981057616416732
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.04025256511444357
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.09155485398579322
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.13970007892659828
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.21981057616416733
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.11513294301691931
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.08350856917352567
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.09631638060202527
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.037884767166535126
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.08602999210734018
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.13180741910023677
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.2079715864246251
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.037884767166535126
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.028676664035780054
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.02636148382004736
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.02079715864246251
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.037884767166535126
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.08602999210734018
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.13180741910023677
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.2079715864246251
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.10894233297304821
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.07907489883614581
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.09087791679720966
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.032754538279400155
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.07419100236779795
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.11444356748224152
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.18468823993685873
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.032754538279400155
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.024730334122599312
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.022888713496448304
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.018468823993685875
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.032754538279400155
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.07419100236779795
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.11444356748224152
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.18468823993685873
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.0959638876946607
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.06921471166735564
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.08022788346205763
      name: Cosine Map@100
---

# BGE base PatentMatch Matryoshka

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the [bhlim/patentmatch_for_finetuning](https://huggingface.co/datasets/bhlim/patentmatch_for_finetuning) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [bhlim/patentmatch_for_finetuning](https://huggingface.co/datasets/bhlim/patentmatch_for_finetuning)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("bhlim/bge-base-patentmatch")
# Run inference
sentences = [
    'Referring to FIG.32 a a sink device 3200 is designed to display thumbnail images in the metadata of contents received from source devices connected via an integrated wire interface.As mentioned in the foregoing description if a remote controller 3250 capable of outputting a pointing signal is situated within a region of a specific thumbnail image 3260 side information e.g.Amanda 1st album singer.Song etc.is displayed together.',
    'The method of any one of claims 8 to 12 wherein the requesting for the broadcast channel information comprises transmitting to the server image data obtained by capturing the content being reproduced by the display apparatus or audio data obtained by recording the content for a certain time.',
    'The electrode assembly of any one of the preceding claims wherein the first electrode comprises a substrate 113 wherein the first active material layer comprises active material layers 112 on both surfaces of the substrate and the ceramic layer comprises ceramic material layers 50 on both surfaces of the substrate.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
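
Because the model was trained with `MatryoshkaLoss` over the dimensions 768/512/256/128/64 (see Training Details below), its embeddings can also be truncated to a smaller size at load time. A minimal sketch, assuming a Sentence Transformers release that supports the `truncate_dim` argument (2.7 or newer); the two example sentences are illustrative:

```python
from sentence_transformers import SentenceTransformer

# Load the same model but keep only the first 256 embedding dimensions
model_256 = SentenceTransformer("bhlim/bge-base-patentmatch", truncate_dim=256)

embeddings = model_256.encode([
    "A detergent box assembly for a washing machine.",
    "A wireless communications device for setting up a local service session.",
])
print(embeddings.shape)
# (2, 256)

# similarity() uses cosine similarity by default, so the truncated
# (no longer unit-norm) embeddings are still compared correctly.
print(model_256.similarity(embeddings, embeddings))
```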

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0426 |
| cosine_accuracy@3 | 0.1014 |
| cosine_accuracy@5 | 0.1448 |
| cosine_accuracy@10 | 0.232 |
| cosine_precision@1 | 0.0426 |
| cosine_precision@3 | 0.0338 |
| cosine_precision@5 | 0.029 |
| cosine_precision@10 | 0.0232 |
| cosine_recall@1 | 0.0426 |
| cosine_recall@3 | 0.1014 |
| cosine_recall@5 | 0.1448 |
| cosine_recall@10 | 0.232 |
| cosine_ndcg@10 | 0.1217 |
| cosine_mrr@10 | 0.0884 |
| **cosine_map@100** | **0.1014** |

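The card does not include the evaluation script itself; the following is only a sketch of how per-dimension results like these are typically produced with `InformationRetrievalEvaluator`. The `queries`, `corpus` and `relevant_docs` dictionaries are illustrative placeholders, not the actual PatentMatch evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("bhlim/bge-base-patentmatch")

# Toy data: query id -> text, doc id -> text, query id -> set of relevant doc ids
queries = {"q1": "A cooking oven comprising a housing and a cooking cavity."}
corpus = {
    "d1": "A cooking oven 1 comprising a housing 2 a cooking cavity 3 formed in the housing 2.",
    "d2": "A developer carrying member comprising an elastic layer and a surface layer.",
}
relevant_docs = {"q1": {"d1"}}

# One evaluator per Matryoshka dimension; truncate_dim controls the embedding size
for dim in [768, 512, 256, 128, 64]:
    ir_evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,
    )
    results = ir_evaluator(model)  # dict of metrics, e.g. results[f"dim_{dim}_cosine_map@100"]
    print(dim, results)
```
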
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0422 |
| cosine_accuracy@3 | 0.0935 |
| cosine_accuracy@5 | 0.1429 |
| cosine_accuracy@10 | 0.2245 |
| cosine_precision@1 | 0.0422 |
| cosine_precision@3 | 0.0312 |
| cosine_precision@5 | 0.0286 |
| cosine_precision@10 | 0.0225 |
| cosine_recall@1 | 0.0422 |
| cosine_recall@3 | 0.0935 |
| cosine_recall@5 | 0.1429 |
| cosine_recall@10 | 0.2245 |
| cosine_ndcg@10 | 0.1182 |
| cosine_mrr@10 | 0.0861 |
| **cosine_map@100** | **0.0996** |

#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0403 |
| cosine_accuracy@3 | 0.0916 |
| cosine_accuracy@5 | 0.1397 |
| cosine_accuracy@10 | 0.2198 |
| cosine_precision@1 | 0.0403 |
| cosine_precision@3 | 0.0305 |
| cosine_precision@5 | 0.0279 |
| cosine_precision@10 | 0.022 |
| cosine_recall@1 | 0.0403 |
| cosine_recall@3 | 0.0916 |
| cosine_recall@5 | 0.1397 |
| cosine_recall@10 | 0.2198 |
| cosine_ndcg@10 | 0.1151 |
| cosine_mrr@10 | 0.0835 |
| **cosine_map@100** | **0.0963** |

#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0379 |
| cosine_accuracy@3 | 0.086 |
| cosine_accuracy@5 | 0.1318 |
| cosine_accuracy@10 | 0.208 |
| cosine_precision@1 | 0.0379 |
| cosine_precision@3 | 0.0287 |
| cosine_precision@5 | 0.0264 |
| cosine_precision@10 | 0.0208 |
| cosine_recall@1 | 0.0379 |
| cosine_recall@3 | 0.086 |
| cosine_recall@5 | 0.1318 |
| cosine_recall@10 | 0.208 |
| cosine_ndcg@10 | 0.1089 |
| cosine_mrr@10 | 0.0791 |
| **cosine_map@100** | **0.0909** |

#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0328 |
| cosine_accuracy@3 | 0.0742 |
| cosine_accuracy@5 | 0.1144 |
| cosine_accuracy@10 | 0.1847 |
| cosine_precision@1 | 0.0328 |
| cosine_precision@3 | 0.0247 |
| cosine_precision@5 | 0.0229 |
| cosine_precision@10 | 0.0185 |
| cosine_recall@1 | 0.0328 |
| cosine_recall@3 | 0.0742 |
| cosine_recall@5 | 0.1144 |
| cosine_recall@10 | 0.1847 |
| cosine_ndcg@10 | 0.096 |
| cosine_mrr@10 | 0.0692 |
| **cosine_map@100** | **0.0802** |

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### bhlim/patentmatch_for_finetuning

* Dataset: [bhlim/patentmatch_for_finetuning](https://huggingface.co/datasets/bhlim/patentmatch_for_finetuning) at [8d60f21](https://huggingface.co/datasets/bhlim/patentmatch_for_finetuning/tree/8d60f211ba8eb3b64fcdd4615dd0d297cf713843)
* Size: 10,136 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
  | | positive | anchor |
  |:--------|:---------|:-------|
  | type | string | string |
  | details | <ul><li>min: 5 tokens</li><li>mean: 136.61 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 76.35 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
  | positive | anchor |
  |:---------|:-------|
  | <code>Furthermore according to this liquid consuming apparatus if the decompression level acting on the liquid sensing chamber 21 of the liquid container 1 i.e.the pressure loss arising in the connecting passage between the liquid storage portion 7 and the liquid sensing chamber 21 due to the flow rate outflowing from the liquid storage portion 7 because of distension of the diaphragm pump through application of the external force when external force is applied in the direction of expansion of volume of the diaphragm pump 42 asdepicted in FIG.6 has been set to a low level if sufficient liquid is present in the liquid container 1 the liquid sensing chamber 21 will experience substantially no change in volume.</code> | <code>The liquid cartridge according to any of claims 4 to 5 further comprising a ground terminal 175c 176c 177c positioned in the second line.</code> |
  | <code>It is highly desirable for tires to have good wet skid resistance low rolling resistance and good wear characteristics.It has traditionally been very difficult to improve a tires wear characteristics without sacrificing its wet skid resistance and traction characteristics.These properties depend to a great extent on the dynamic viscoelastic properties of the rubbers utilized in making the tire.</code> | <code>The pneumatic tire of at least one of the previous claims wherein the rubber composition comprises from 5 to 20 phr of the oil and from 45 to 70 phr of the terpene phenol resin.</code> |
  | <code>Before setting the environment of the mobile communication terminal a user stores a multimedia message composed of different kinds of contents i.e.images sounds and texts.For example reference block 201 indicates a multimedia message composed of several images sounds and texts.The user can select an image A a sound A and a text A for environment setting elements of the mobile communication terminal from the contents of the multimedia message and construct a theme like in block 203 using the selected image A sound A and text A.The MPU 101 maps the contents of the theme to environment setting elements of the mobile communication terminal i.e.a background screen a ringtone and a user name like in block 205.The MPU 101 then sets the environment of the mobile communication terminal using the mapped elements like in block 207 thereby automatically and collectively changing the environment of the mobile communication terminal.Mapping information about mapping between the selected contents of the multimediamessage and the environment setting elements of the mobile communication terminal is stored in the flash RAM 107.</code> | <code>A terminal for processing data comprising an output unit configured to output a chatting service window a receiving unit configured to receive a request for executing a chatting service and a first download request for downloading first data through the chatting service from a user and a controller configured to control to output the first data downloaded in response to the received first download request to a background screen of the chatting service window.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
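
The loss is not spelled out in code in this card; a minimal sketch of how a loss with exactly these parameters is typically constructed (the base model name is taken from the Model Details section):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# MultipleNegativesRankingLoss is the inner loss; MatryoshkaLoss applies it
# at every truncation dimension with equal weight.
inner_loss = MultipleNegativesRankingLoss(model)
train_loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)
```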

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates

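As a sketch only (assuming the Sentence Transformers 3.x trainer; the output directory and `save_strategy` are assumptions not listed above), these non-default values map onto `SentenceTransformerTrainingArguments` roughly as follows:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-patentmatch-matryoshka",  # hypothetical output path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed; required when load_best_model_at_end=True
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```
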
699
+ <details><summary>Click to expand</summary>
700
+
701
+ - `overwrite_output_dir`: False
702
+ - `do_predict`: False
703
+ - `eval_strategy`: epoch
704
+ - `prediction_loss_only`: True
705
+ - `per_device_train_batch_size`: 32
706
+ - `per_device_eval_batch_size`: 16
707
+ - `per_gpu_train_batch_size`: None
708
+ - `per_gpu_eval_batch_size`: None
709
+ - `gradient_accumulation_steps`: 16
710
+ - `eval_accumulation_steps`: None
711
+ - `learning_rate`: 2e-05
712
+ - `weight_decay`: 0.0
713
+ - `adam_beta1`: 0.9
714
+ - `adam_beta2`: 0.999
715
+ - `adam_epsilon`: 1e-08
716
+ - `max_grad_norm`: 1.0
717
+ - `num_train_epochs`: 4
718
+ - `max_steps`: -1
719
+ - `lr_scheduler_type`: cosine
720
+ - `lr_scheduler_kwargs`: {}
721
+ - `warmup_ratio`: 0.1
722
+ - `warmup_steps`: 0
723
+ - `log_level`: passive
724
+ - `log_level_replica`: warning
725
+ - `log_on_each_node`: True
726
+ - `logging_nan_inf_filter`: True
727
+ - `save_safetensors`: True
728
+ - `save_on_each_node`: False
729
+ - `save_only_model`: False
730
+ - `restore_callback_states_from_checkpoint`: False
731
+ - `no_cuda`: False
732
+ - `use_cpu`: False
733
+ - `use_mps_device`: False
734
+ - `seed`: 42
735
+ - `data_seed`: None
736
+ - `jit_mode_eval`: False
737
+ - `use_ipex`: False
738
+ - `bf16`: True
739
+ - `fp16`: False
740
+ - `fp16_opt_level`: O1
741
+ - `half_precision_backend`: auto
742
+ - `bf16_full_eval`: False
743
+ - `fp16_full_eval`: False
744
+ - `tf32`: True
745
+ - `local_rank`: 0
746
+ - `ddp_backend`: None
747
+ - `tpu_num_cores`: None
748
+ - `tpu_metrics_debug`: False
749
+ - `debug`: []
750
+ - `dataloader_drop_last`: False
751
+ - `dataloader_num_workers`: 0
752
+ - `dataloader_prefetch_factor`: None
753
+ - `past_index`: -1
754
+ - `disable_tqdm`: False
755
+ - `remove_unused_columns`: True
756
+ - `label_names`: None
757
+ - `load_best_model_at_end`: True
758
+ - `ignore_data_skip`: False
759
+ - `fsdp`: []
760
+ - `fsdp_min_num_params`: 0
761
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
762
+ - `fsdp_transformer_layer_cls_to_wrap`: None
763
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
764
+ - `deepspeed`: None
765
+ - `label_smoothing_factor`: 0.0
766
+ - `optim`: adamw_torch_fused
767
+ - `optim_args`: None
768
+ - `adafactor`: False
769
+ - `group_by_length`: False
770
+ - `length_column_name`: length
771
+ - `ddp_find_unused_parameters`: None
772
+ - `ddp_bucket_cap_mb`: None
773
+ - `ddp_broadcast_buffers`: False
774
+ - `dataloader_pin_memory`: True
775
+ - `dataloader_persistent_workers`: False
776
+ - `skip_memory_metrics`: True
777
+ - `use_legacy_prediction_loop`: False
778
+ - `push_to_hub`: False
779
+ - `resume_from_checkpoint`: None
780
+ - `hub_model_id`: None
781
+ - `hub_strategy`: every_save
782
+ - `hub_private_repo`: False
783
+ - `hub_always_push`: False
784
+ - `gradient_checkpointing`: False
785
+ - `gradient_checkpointing_kwargs`: None
786
+ - `include_inputs_for_metrics`: False
787
+ - `eval_do_concat_batches`: True
788
+ - `fp16_backend`: auto
789
+ - `push_to_hub_model_id`: None
790
+ - `push_to_hub_organization`: None
791
+ - `mp_parameters`:
792
+ - `auto_find_batch_size`: False
793
+ - `full_determinism`: False
794
+ - `torchdynamo`: None
795
+ - `ray_scope`: last
796
+ - `ddp_timeout`: 1800
797
+ - `torch_compile`: False
798
+ - `torch_compile_backend`: None
799
+ - `torch_compile_mode`: None
800
+ - `dispatch_batches`: None
801
+ - `split_batches`: None
802
+ - `include_tokens_per_second`: False
803
+ - `include_num_input_tokens_seen`: False
804
+ - `neftune_noise_alpha`: None
805
+ - `optim_target_modules`: None
806
+ - `batch_eval_metrics`: False
807
+ - `batch_sampler`: no_duplicates
808
+ - `multi_dataset_batch_sampler`: proportional
809
+
810
+ </details>

### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.5047 | 10 | 10.0459 | - | - | - | - | - |
| 0.9590 | 19 | - | 0.0849 | 0.0915 | 0.0939 | 0.0778 | 0.0966 |
| 1.0095 | 20 | 7.1373 | - | - | - | - | - |
| 1.5142 | 30 | 5.9969 | - | - | - | - | - |
| 1.9685 | 39 | - | 0.0890 | 0.0965 | 0.1007 | 0.0795 | 0.1012 |
| 2.0189 | 40 | 5.2984 | - | - | - | - | - |
| 2.5237 | 50 | 4.884 | - | - | - | - | - |
| **2.9779** | **59** | **-** | **0.091** | **0.0967** | **0.099** | **0.0801** | **0.1013** |
| 3.0284 | 60 | 4.6633 | - | - | - | - | - |
| 3.5331 | 70 | 4.5226 | - | - | - | - | - |
| 3.8360 | 76 | - | 0.0909 | 0.0963 | 0.0996 | 0.0802 | 0.1014 |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "_name_or_path": "BAAI/bge-base-en-v1.5",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.41.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
{
  "__version__": {
    "sentence_transformers": "3.0.1",
    "transformers": "4.41.2",
    "pytorch": "2.1.2+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:6ad1b1d02b1d6371697e40440adf9472e5dda2109b8c1867a4cea7f4f15bbe54
size 437951328
modules.json ADDED
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
sentence_bert_config.json ADDED
{
  "max_seq_length": 512,
  "do_lower_case": true
}
special_tokens_map.json ADDED
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff