jebish7 committed
Commit 9d16a14
1 Parent(s): 4a7b697

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
{
  "word_embedding_dimension": 384,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
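
This configuration enables CLS-token pooling only: each sentence embedding is the hidden state of the first (`[CLS]`) token, which the model's final `Normalize` module then scales to unit length. A rough, illustrative sketch of that step in plain PyTorch (not the library's actual implementation):

```python
import torch
import torch.nn.functional as F

def cls_pool(last_hidden_state: torch.Tensor) -> torch.Tensor:
    """Illustrative CLS pooling for this configuration.

    last_hidden_state: (batch, seq_len, 384) output of the underlying BertModel.
    Returns (batch, 384) sentence embeddings, L2-normalised as the model's
    Normalize module would do.
    """
    cls_embeddings = last_hidden_state[:, 0]        # hidden state of the [CLS] token
    return F.normalize(cls_embeddings, p=2, dim=1)  # unit-length vectors for cosine similarity
```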
README.md ADDED
---
base_model: BAAI/bge-small-en-v1.5
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:29545
- loss:MultipleNegativesSymmetricRankingLoss
widget:
- source_sentence: Could you clarify the process for determining whether an entity
    is subject to FATCA and the ADGM Common Reporting Standard Regulations 2017?
  sentences:
  - If Rule ‎7.5.3(b) or ‎7.5.3(c) applies, the Insurance Intermediary must, if requested
    by the Retail Client, provide to that Client a list of insurers with whom it deals
    or may deal in relation to the relevant Contracts of Insurance.
  - 'REGULATORY REQUIREMENTS FOR AUTHORISED PERSONS ENGAGED IN REGULATED ACTIVITIES
    IN RELATION TO VIRTUAL ASSETS

    International Tax Reporting Obligations

    COBS Rule 17.4 requires Authorised Persons to consider and, if applicable, adhere
    to their tax reporting obligations including, as applicable, under the Foreign
    Account Tax Compliance Act (“FATCA”) and the ADGM Common Reporting Standard Regulations
    2017.

    '
  - "The following lists some of the items that an Authorised Person should consider\
    \ including in its internal reporting of Operational Risks:\na.\tthe results of\
    \ monitoring activities;\nb.\tassessments of the Operational Risk framework performed\
    \ by control functions such as internal audit, compliance, risk management and/or\
    \ external audit;\nc.\treports generated by (and/or for) supervisory authorities;\n\
    d.\tmaterial breaches of the Authorised Person's risk appetite and tolerance with\
    \ respect to Operational Risk;\ne.\tdetails of recent significant internal Operational\
    \ Risk events and losses, including near misses or events that resulted in a positive\
    \ return; and\nf.\trelevant external events and any potential impact on the Authorised\
    \ Person and its Operational Risk framework, including Operational Risk capital."
- source_sentence: Could you provide specific examples of how a Relevant Person should
    verify the existence of any secrecy or data protection laws in a third-party's
    country of incorporation that might impede access to CDD information?
  sentences:
  - A Relevant Person should verify whether any secrecy or data protection law exists
    in the country of incorporation of the business partner that would prevent access
    to relevant data.
  - "An Applicant for a Financial Services Permission must pay to the Regulator an\
    \ application fee of $10,000 to carry on the Regulated Activity of:\n(a)\tArranging\
    \ Credit;\n(b)\tOperating a Multilateral Trading Facility;\n(c)\tOperating an\
    \ Organised Trading Facility;\n(d)\tManaging a Collective Investment Fund;\n(e)\t\
    Managing a Venture Capital Fund and co-investments;\n(f)\tActing as the Administrator\
    \ of a Collective Investment Fund;\n(g)\tActing as Trustee of an Investment Trust;\n\
    (h)\tOperating a Credit Rating Agency; or\n(i)\tOperating a Private Financing\
    \ Platform."
  - A Business Reorganisation Plan may be further amended following its initial implementation
    if the Regulator is of the view that changes to the plan are required to achieve
    the long-term viability of the Institution.
- source_sentence: What specific criteria must a conventional custodian meet to be
    approved by the FSRA as a Digital Security Facility (DSF) for the custody of Digital
    Securities?
  sentences:
  - "Such Rules may prescribe—\n(a)\tthe circumstances in which an Issuer is required\
    \ to appoint a sponsor, and a Reporting Entity is required to appoint a compliance\
    \ adviser or other expert adviser;\n(b)\tthe requirements applicable to the Issuer\
    \ or Reporting Entity, and a person Appointed as a sponsor, compliance adviser\
    \ or other expert adviser; and\n(c)\tany other matter necessary to give effect\
    \ to such appointments.\n"
  - If a Fund Manager is unable to manage a conflict of interest as provided above,
    it must dismiss or replace the member as appropriate.
  - 'DIGITAL SECURITIES – INTERMEDIARIES

    Intermediaries conducting a Regulated Activity in relation to Virtual Assets –
    Extension into Digital Securities

    Virtual Asset Custodians may apply to the FSRA to be a DSF in order to provide
    custody of Digital Securities. Refer to paragraphs 73 to 75 for further information
    on the requirements that will apply.

    '
- source_sentence: Can you elaborate on the types of records an Authorised Person
    must retain related to ESG disclosures and corporate governance practices?
  sentences:
  - 'REGULATORY REQUIREMENTS - SPOT COMMODITY ACTIVITIES

    Default Rules

    The FSRA suggests that an Applicant/Authorised Person consider different scenarios/circumstances
    where it may need to utilise the powers provided to it under its Default Rules,
    and take appropriate action as required. Scenario testing of this kind could
    relate to when there is a financial and/or technical ‘default’ in relation to,
    for example, delivery failure, storage failure or wider banking arrangements. Due
    to the short settlement cycle of Spot Commodity markets, the impact of a ‘default’
    may be on a per-transaction basis or structural basis, in limiting the ability
    of Members to fulfil their delivery obligations (and therefore the ability of
    the MTF to operate on a fair and orderly basis).

    '
  - Risk control. Authorised Persons should recognise and control the Credit Risk
    arising from their new products and services. Well in advance of entering into
    business transactions involving new types of products and activities, they should
    ensure that they understand the risks fully and have established appropriate Credit
    Risk policies, procedures and controls, which should be approved by the Governing
    Body or its appropriate delegated committee. A formal risk assessment of new products
    and activities should also be performed and documented.
  - 'Records: An Authorised Person must make and retain records of matters and dealings,
    including Accounting Records and corporate governance practices which are the
    subject of requirements and standards under the Regulations and Rules.

    '
- source_sentence: How does ADGM ensure that FinTech Participants remain compliant
    with evolving regulatory standards, particularly in the context of new and developing
    technologies?
  sentences:
  - 'DIGITAL SECURITIES – INTERMEDIARIES

    Conventional Intermediaries – Digital Securities

    Intermediaries intending to operate solely, in the context of Digital Securities,
    as a broker or dealer for Clients (including the operation of an OTC broking or
    dealing desk) are not permitted to structure their broking / dealing service or
    platform in such a way that would have it be considered as operating a RIE or
    MTF. The FSRA would consider features such as allowing for price discovery, displaying
    a public trading order book (accessible to any member of the public, regardless
    of whether they are Clients), and allowing trades to automatically be matched
    using an exchange-type matching engine as characteristic of a RIE or MTF, and
    not activities acceptable for an Digital Securities intermediary to undertake.

    '
  - "The Guidance is applicable to the following Persons:\n(a)\tan applicant for a\
    \ Financial Services Permission to carry on the Regulated Activity of Developing\
    \ Financial Technology Services within the RegLab in or from ADGM; and/or\n(b)\t\
    a FinTech Participant."
  - Where an individual is appointed under this Rule, the Regulator may exercise any
    powers it would otherwise be entitled to exercise as if the individual held Approved
    Person status.
---

# SentenceTransformer based on BAAI/bge-small-en-v1.5

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on a CSV dataset of 29,545 anchor–positive pairs. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jebish7/bge_MNSR")
# Run inference
sentences = [
    'How does ADGM ensure that FinTech Participants remain compliant with evolving regulatory standards, particularly in the context of new and developing technologies?',
    'The Guidance is applicable to the following Persons:\n(a)\tan applicant for a Financial Services Permission to carry on the Regulated Activity of Developing Financial Technology Services within the RegLab in or from ADGM; and/or\n(b)\ta FinTech Participant.',
    'DIGITAL SECURITIES – INTERMEDIARIES\nConventional Intermediaries – Digital Securities\nIntermediaries intending to operate solely, in the context of Digital Securities, as a broker or dealer for Clients (including the operation of an OTC broking or dealing desk) are not permitted to structure their broking / dealing service or platform in such a way that would have it be considered as operating a RIE or MTF. The FSRA would consider features such as allowing for price discovery, displaying a public trading order book (accessible to any member of the public, regardless of whether they are Clients), and allowing trades to automatically be matched using an exchange-type matching engine as characteristic of a RIE or MTF, and not activities acceptable for an Digital Securities intermediary to undertake.\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
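
Because the embeddings are unit-normalised, cosine similarity can be used directly for retrieval. A minimal semantic-search sketch (the query and corpus strings below are illustrative, not taken from the training data):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jebish7/bge_MNSR")

# Illustrative corpus of short regulatory-style passages.
corpus = [
    "Authorised Persons must maintain adequate systems and controls to manage Operational Risk.",
    "An applicant for a Financial Services Permission must pay the prescribed application fee.",
    "Relevant Persons must comply with applicable Targeted Financial Sanctions.",
]
query = "What fee does an applicant pay for a Financial Services Permission?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity between the query and every corpus passage.
scores = model.similarity(query_embedding, corpus_embeddings)  # shape: [1, 3]
best = scores[0].argmax().item()
print(corpus[best])
```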

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### csv

* Dataset: csv
* Size: 29,545 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string |
  | details | <ul><li>min: 19 tokens</li><li>mean: 34.47 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 113.88 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | <code>In the case of a cross-border transaction involving jurisdictions with differing sanctions regimes, how should a Relevant Person prioritize and reconcile these requirements?</code> | <code>Sanctions. UNSC Sanctions and Sanctions issued or administered by the U.A.E., including Targeted Financial Sanctions, apply in the ADGM. Relevant Persons must comply with Targeted Financial Sanctions. Sanctions compliance is emphasised by specific obligations contained in the AML Rulebook requiring Relevant Persons to establish and maintain effective systems and controls to comply with applicable Sanctions, including in particular Targeted Financial Sanctions, as set out in Chapter ‎11.</code> |
  | <code>How does the FSRA monitor and assess the deployment scalability of a FinTech proposal within the UAE and ADGM beyond the RegLab validity period?</code> | <code>Evaluation Criteria. To qualify for authorisation under the RegLab framework, the applicant must demonstrate how it satisfies the following evaluation criteria:<br>(a) the FinTech Proposal promotes FinTech innovation, in terms of the business application and deployment model of the technology.<br>(b) the FinTech Proposal has the potential to:<br>i. promote significant growth, efficiency or competition in the financial sector;<br>ii. promote better risk management solutions and regulatory outcomes for the financial industry; or<br>iii. improve the choices and welfare of clients.<br>(c) the FinTech Proposal is at a sufficiently advanced stage of development to mount a live test.<br>(d) the FinTech Proposal can be deployed in the ADGM and the UAE on a broader scale or contribute to the development of ADGM as a financial centre, and, if so, how the applicant intends to do so on completion of the validity period.<br><br></code> |
  | <code>How does the ADGM define "distinct risks" that arise from conducting business entirely in an NFTF manner compared to a mix of face-to-face and NFTF interactions, and what specific risk mitigation strategies should be employed in these scenarios?</code> | <code>The risk assessment under Rule ‎6.2.1(c) should identify actions to mitigate risks associated with undertaking NFTF business generally, and the use of eKYC specifically. This is because distinct risks are often likely to arise where business is conducted entirely in an NFTF manner, compared to when the business relationship includes a mix of face-to-face and NFTF interactions. The assessment should make reference to risk mitigation measures recommended by the Regulator, a competent authority of the U.A.E., FATF, and other relevant bodies.<br><br></code> |
* Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
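
For reference, this loss uses the other in-batch positives as negatives for each anchor and, symmetrically, the other anchors as negatives for each positive. A minimal construction sketch (the training script itself is not part of this repository; a fuller end-to-end sketch follows the Training Hyperparameters section below):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesSymmetricRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# scale and similarity_fct match the parameters listed above.
loss = MultipleNegativesSymmetricRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```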

### Evaluation Dataset

#### csv

* Dataset: csv
* Size: 3,676 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string |
  | details | <ul><li>min: 18 tokens</li><li>mean: 34.98 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 114.8 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | <code>How should our firm approach the development and implementation of a risk management system that addresses the full spectrum of risks listed, including technology, compliance, and legal risks?</code> | <code>Management of particular risks<br>Without prejudice to the generality of Rule ‎2.4(1, a Captive Insurer must develop, implement and maintain a risk management system to identify and address risks, including but not limited to:<br>(a) reserving risk;<br>(b) investment risk (including risks associated with the use of Derivatives);<br>(c) underwriting risk;<br>(d) market risk;<br>(e) liquidity management risk;<br>(f) credit quality risk;<br>(g) fraud and other fiduciary risks;<br>(h) compliance risk;<br>(i) outsourcing risk; and<br>(j) reinsurance risk. Reinsurance risk refers to risks associated with the Captive Insurer's use of reinsurance arrangements as Cedant.</code> |
  | <code>What measures could an Authorised Person take to ensure non-repudiation and accountability, so that individuals or systems processing information cannot deny their actions?</code> | <code><br>In establishing its systems and controls to address information security risks, an Authorised Person should have regard to:<br>a. confidentiality: information should be accessible only to Persons or systems with appropriate authority, which may require firewalls within a system, as well as entry restrictions;<br>b. the risk of loss or theft of customer data;<br>c. integrity: safeguarding the accuracy and completeness of information and its processing;<br>d. non repudiation and accountability: ensuring that the Person or system that processed the information cannot deny their actions; and<br>e. internal security: including premises security, staff vetting; access rights and portable media, staff internet and email access, encryption, safe disposal of customer data, and training and awareness.</code> |
  | <code>What authority does the Regulator have over the terms and conditions applied to the escrow account holding funds from a Prospectus Offer?</code> | <code>The Regulator may, during the Offer Period or such other longer period as specified, impose a requirement that the monies held by a Person making a Prospectus Offer or his agent pursuant to the Prospectus Offer or issuance are held in an escrow account for a specified period and on specified terms.</code> |
* Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
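
Taken together with the loss constructed earlier, the run roughly corresponds to the sketch below. Only the hyperparameters listed in this card are taken from the actual run; the CSV file names, output directory, and save strategy are assumptions.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesSymmetricRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Hypothetical file names; the CSV files are not included in this repository.
train_dataset = load_dataset("csv", data_files="train.csv", split="train")  # columns: anchor, positive
eval_dataset = load_dataset("csv", data_files="eval.csv", split="train")

loss = MultipleNegativesSymmetricRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="bge_MNSR",                      # assumed
    num_train_epochs=10,
    per_device_train_batch_size=64,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    eval_strategy="epoch",
    save_strategy="epoch",                      # assumed; required by load_best_model_at_end
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate anchors within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```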

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch      | Step     | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.8658     | 200      | 1.6059        | -               |
| 1.2684     | 293      | -             | 0.4773          |
| 1.4632     | 400      | 0.8247        | -               |
| 2.2684     | 586      | -             | 0.4313          |
| 2.0606     | 600      | 0.7352        | -               |
| 2.9264     | 800      | 1.0011        | -               |
| 3.2684     | 879      | -             | 0.4038          |
| 3.5238     | 1000     | 0.646         | -               |
| 4.2684     | 1172     | -             | 0.3926          |
| 4.1212     | 1200     | 0.6207        | -               |
| 4.9870     | 1400     | 0.8652        | -               |
| 5.2684     | 1465     | -             | 0.3769          |
| 5.5844     | 1600     | 0.5708        | -               |
| 6.2684     | 1758     | -             | 0.3691          |
| 6.1818     | 1800     | 0.5588        | -               |
| 7.0476     | 2000     | 0.7551        | -               |
| 7.2684     | 2051     | -             | 0.3608          |
| 7.6450     | 2200     | 0.5758        | -               |
| **8.1212** | **2310** | **-**         | **0.3561**      |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "_name_or_path": "BAAI/bge-small-en-v1.5",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.45.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
{
  "__version__": {
    "sentence_transformers": "3.1.1",
    "transformers": "4.45.2",
    "pytorch": "2.4.0"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a4c0f4b8759ab93185e8bb63021e47b8ce4df535ff4bca885399b5edf386a0a6
size 133462128
modules.json ADDED
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
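
modules.json lists the three modules that make up the pipeline, in order: the BERT backbone, the CLS-pooling layer saved under `1_Pooling`, and a final normalisation step. Loosely, this corresponds to building the model as follows (a sketch of an equivalent construction, not the library's loader code):

```python
from sentence_transformers import SentenceTransformer, models

# Rough equivalent of the three-stage pipeline described by modules.json.
word_embedding = models.Transformer("BAAI/bge-small-en-v1.5", max_seq_length=512)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="cls")
normalize = models.Normalize()
model = SentenceTransformer(modules=[word_embedding, pooling, normalize])
```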
sentence_bert_config.json ADDED
{
  "max_seq_length": 512,
  "do_lower_case": true
}
special_tokens_map.json ADDED
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff