wwydmanski committed
Commit 7bf3502
1 Parent(s): cb09e22

Upload folder using huggingface_hub

Files changed (2):
  1. README.md +131 -114
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,49 +1,81 @@
  ---
  base_model: allenai/specter2_base
  library_name: sentence-transformers
  pipeline_tag: sentence-similarity
  tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
- - dataset_size:6574
  - loss:MultipleNegativesRankingLoss
  widget:
- - source_sentence: sigma N protein interactions
  sentences:
- - 'Smoking Relapse After Lung Transplantation: Is a Second Transplant Justified? '
- - 'Core RNA polymerase and promoter DNA interactions of purified domains of sigma
- N: bipartite functions. '
- - 'Protein-protein interactions mapped by artificial proteases: where sigma factors
- bind to RNA polymerase. '
- - source_sentence: Frailty pathway co-design
  sentences:
- - 'High-Sensitivity Cardiac Troponin I Levels in Normal and Hypertensive Pregnancy. '
- - 'The systematic approach to improving care for Frail Older Patients (SAFE) study:
- A protocol for co-designing a frail older person''s pathway. '
- - 'Frailty: successful clinical practice implementation. '
- - source_sentence: Diurnal lipid metabolism in lactating sheep
  sentences:
- - 'Interpreting and applying the EUFEST results using number needed to treat: antipsychotic
- effectiveness in first-episode schizophrenia. '
- - 'Diurnal variations in the concentration, arteriovenous difference, extraction
- ratio, and uptake of 3-hydroxybutyrate and plasma free fatty acids in the hind
- limb of lactating sheep. '
- - 'Diurnal regulation of milk lipid production and milk secretion in the rat: effect
- of dietary protein and energy restriction. '
- - source_sentence: Ectopic gastric mucosa
  sentences:
- - '[Ectopic cardia and gastroesophageal reflux]. '
- - 'A bacterial toxicity assay performed with microplates, microluminometry and Microtox
- reagent. '
- - 'Gastric polyp. '
- - source_sentence: monograph editing
  sentences:
- - 'Monographs editor. '
- - 'Maternal stress and high-fat diet effect on maternal behavior, milk composition,
- and pup ingestive behavior. '
- - 'The editing life. '
  ---

  # SentenceTransformer based on allenai/specter2_base
@@ -96,9 +128,9 @@ from sentence_transformers import SentenceTransformer
  model = SentenceTransformer("sentence_transformers_model_id")
  # Run inference
  sentences = [
- 'monograph editing',
- 'Monographs editor. ',
- 'The editing life. ',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
@@ -134,6 +166,22 @@ You can finetune this model on your own dataset.
  *List how the model may foreseeably be misused and address what users ought not to do with the model.*
  -->

  <!--
  ## Bias, Risks and Limitations
@@ -153,19 +201,19 @@ You can finetune this model on your own dataset.
  #### json

  * Dataset: json
- * Size: 6,574 training samples
  * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
  * Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:--------|:---------|:---------|:---------|
  | type | string | string | string |
- | details | <ul><li>min: 3 tokens</li><li>mean: 7.59 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 19.89 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 11.97 tokens</li><li>max: 50 tokens</li></ul> |
  * Samples:
- | anchor | positive | negative |
- |:---------|:---------|:---------|
- | <code>α-Alumina Nanoparticle Grafting</code> | <code>Grafting PMMA Brushes from α-Alumina Nanoparticles via SI-ATRP. </code> | <code>Mesoporous alumina from colloidal biotemplating of Al clusters. </code> |
- | <code>Congenital candidiasis septic shock</code> | <code>Congenital candidiasis presenting as septic shock without rash. </code> | <code>Congenital cutaneous candidiasis: clinical presentation, pathogenesis, and management guidelines. </code> |
- | <code>Chronic Venous Occlusion</code> | <code>Anatomic response of canine hindlimb vasculature to chronic venous occlusion. </code> | <code>Chronic venous insufficiency. </code> |
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
@@ -177,10 +225,11 @@ You can finetune this model on your own dataset.
  ### Training Hyperparameters
  #### Non-Default Hyperparameters

  - `per_device_train_batch_size`: 32
  - `per_device_eval_batch_size`: 32
  - `learning_rate`: 2e-05
- - `num_train_epochs`: 1
  - `lr_scheduler_type`: cosine_with_restarts
  - `warmup_ratio`: 0.1
  - `bf16`: True
@@ -191,7 +240,7 @@ You can finetune this model on your own dataset.
  - `overwrite_output_dir`: False
  - `do_predict`: False
- - `eval_strategy`: no
  - `prediction_loss_only`: True
  - `per_device_train_batch_size`: 32
  - `per_device_eval_batch_size`: 32
@@ -206,7 +255,7 @@ You can finetune this model on your own dataset.
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
  - `max_grad_norm`: 1.0
- - `num_train_epochs`: 1
  - `max_steps`: -1
  - `lr_scheduler_type`: cosine_with_restarts
  - `lr_scheduler_kwargs`: {}
@@ -305,77 +354,45 @@ You can finetune this model on your own dataset.
  </details>

  ### Training Logs
- | Epoch | Step | Training Loss |
- |:------:|:----:|:-------------:|
- | 0.0145 | 1 | 2.8777 |
- | 0.0290 | 2 | 2.8723 |
- | 0.0435 | 3 | 2.7432 |
- | 0.0580 | 4 | 2.8806 |
- | 0.0725 | 5 | 2.3007 |
- | 0.0870 | 6 | 2.2423 |
- | 0.1014 | 7 | 1.995 |
- | 0.1159 | 8 | 1.5115 |
- | 0.1304 | 9 | 1.41 |
- | 0.1449 | 10 | 1.243 |
- | 0.1594 | 11 | 1.1634 |
- | 0.1739 | 12 | 1.1996 |
- | 0.1884 | 13 | 1.3653 |
- | 0.2029 | 14 | 1.5704 |
- | 0.2174 | 15 | 1.3556 |
- | 0.2319 | 16 | 1.4051 |
- | 0.2464 | 17 | 1.0999 |
- | 0.2609 | 18 | 1.0826 |
- | 0.2754 | 19 | 1.0449 |
- | 0.2899 | 20 | 1.0517 |
- | 0.3043 | 21 | 0.9716 |
- | 0.3188 | 22 | 1.1993 |
- | 0.3333 | 23 | 1.1375 |
- | 0.3478 | 24 | 0.9875 |
- | 0.3623 | 25 | 0.7656 |
- | 0.3768 | 26 | 1.2773 |
- | 0.3913 | 27 | 0.7802 |
- | 0.4058 | 28 | 0.882 |
- | 0.4203 | 29 | 1.0534 |
- | 0.4348 | 30 | 0.9073 |
- | 0.4493 | 31 | 0.916 |
- | 0.4638 | 32 | 0.9702 |
- | 0.4783 | 33 | 1.2868 |
- | 0.4928 | 34 | 1.0854 |
- | 0.5072 | 35 | 0.8832 |
- | 0.5217 | 36 | 0.9139 |
- | 0.5362 | 37 | 0.9032 |
- | 0.5507 | 38 | 0.965 |
- | 0.5652 | 39 | 0.7222 |
- | 0.5797 | 40 | 0.6682 |
- | 0.5942 | 41 | 0.8562 |
- | 0.6087 | 42 | 0.9248 |
- | 0.6232 | 43 | 0.9867 |
- | 0.6377 | 44 | 0.7328 |
- | 0.6522 | 45 | 0.7506 |
- | 0.6667 | 46 | 0.7952 |
- | 0.6812 | 47 | 0.7979 |
- | 0.6957 | 48 | 1.0043 |
- | 0.7101 | 49 | 1.0428 |
- | 0.7246 | 50 | 0.8772 |
- | 0.7391 | 51 | 0.6598 |
- | 0.7536 | 52 | 0.7804 |
- | 0.7681 | 53 | 0.599 |
- | 0.7826 | 54 | 0.7974 |
- | 0.7971 | 55 | 0.7489 |
- | 0.8116 | 56 | 0.8701 |
- | 0.8261 | 57 | 0.8903 |
- | 0.8406 | 58 | 0.7223 |
- | 0.8551 | 59 | 0.925 |
- | 0.8696 | 60 | 1.0247 |
- | 0.8841 | 61 | 0.7531 |
- | 0.8986 | 62 | 0.9684 |
- | 0.9130 | 63 | 0.7462 |
- | 0.9275 | 64 | 0.8555 |
- | 0.9420 | 65 | 0.8016 |
- | 0.9565 | 66 | 0.7603 |
- | 0.9710 | 67 | 1.1052 |
- | 0.9855 | 68 | 0.9505 |
- | 1.0 | 69 | 0.6259 |

  ### Framework Versions
 
@@ -1,49 +1,81 @@
  ---
  base_model: allenai/specter2_base
  library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy
+ - dot_accuracy
+ - manhattan_accuracy
+ - euclidean_accuracy
+ - max_accuracy
  pipeline_tag: sentence-similarity
  tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
+ - dataset_size:10053
  - loss:MultipleNegativesRankingLoss
  widget:
+ - source_sentence: HBV-endemic area diagnostic criteria comparison
  sentences:
+ - 'Comparison of usefulness of clinical diagnostic criteria for hepatocellular carcinoma
+ in a hepatitis B endemic area. '
+ - 'The validation of the 2010 American Association for the Study of Liver Diseases
+ guideline for the diagnosis of hepatocellular carcinoma in an endemic area. '
+ - 'Which admission electrocardiographic parameter is more powerful predictor of
+ no-reflow in patients with acute anterior myocardial infarction who underwent
+ primary percutaneous intervention? '
+ - source_sentence: Family history of alcoholism classification schemes
  sentences:
+ - 'Developing the mentor/protege relationship. '
+ - 'Family history of alcoholism in schizophrenia. '
+ - 'Family history models of alcoholism: age of onset, consequences and dependence. '
+ - source_sentence: Intellectual Property Commercialization
  sentences:
+ - 'ALEPH-2, a suspected anxiolytic and putative hallucinogenic phenylisopropylamine
+ derivative, is a 5-HT2a and 5-HT2c receptor agonist. '
+ - 'Technology transfer and monitoring practices. '
+ - '[From intellectual property to commercial property]. '
+ - source_sentence: Transmembrane domain mutants
  sentences:
+ - 'Dysgerminoma; case with pulmonary metastases; result of treatment with irradiation
+ and male sex hormone. '
+ - 'Toward a high-resolution structure of phospholamban: design of soluble transmembrane
+ domain mutants. '
+ - 'Scanning N-glycosylation mutagenesis of membrane proteins. '
+ - source_sentence: Six-coordinate low-spin iron(III) porphyrinate complexes
  sentences:
+ - 'Molecular structures and magnetic resonance spectroscopic investigations of highly
+ distorted six-coordinate low-spin iron(III) porphyrinate complexes. '
+ - 'Saddle-shaped six-coordinate iron(iii) porphyrin complex with unusual intermediate-spin
+ electronic structure. '
+ - 'Performing Economic Evaluation of Integrated Care: Highway to Hell or Stairway
+ to Heaven? '
+ model-index:
+ - name: SentenceTransformer based on allenai/specter2_base
+ results:
+ - task:
+ type: triplet
+ name: Triplet
+ dataset:
+ name: triplet dev
+ type: triplet-dev
+ metrics:
+ - type: cosine_accuracy
+ value: 0.606
+ name: Cosine Accuracy
+ - type: dot_accuracy
+ value: 0.395
+ name: Dot Accuracy
+ - type: manhattan_accuracy
+ value: 0.603
+ name: Manhattan Accuracy
+ - type: euclidean_accuracy
+ value: 0.615
+ name: Euclidean Accuracy
+ - type: max_accuracy
+ value: 0.615
+ name: Max Accuracy
  ---

  # SentenceTransformer based on allenai/specter2_base
 
@@ -96,9 +128,9 @@ from sentence_transformers import SentenceTransformer
  model = SentenceTransformer("sentence_transformers_model_id")
  # Run inference
  sentences = [
+ 'Six-coordinate low-spin iron(III) porphyrinate complexes',
+ 'Molecular structures and magnetic resonance spectroscopic investigations of highly distorted six-coordinate low-spin iron(III) porphyrinate complexes. ',
+ 'Saddle-shaped six-coordinate iron(iii) porphyrin complex with unusual intermediate-spin electronic structure. ',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
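The snippet above stops at printing the embedding shape; the usual next step is ranking the candidate titles against the query by cosine similarity. A minimal numpy sketch of that step, using stand-in vectors in place of real `model.encode` output so it runs without downloading the model (`cosine_sim` is an illustrative helper, not part of the library):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Stand-ins for `model.encode(sentences)`: row 0 = query, rows 1-2 = candidates.
embeddings = np.array([
    [1.0, 0.0, 1.0],
    [0.9, 0.1, 0.8],
    [0.0, 1.0, 0.1],
])
sims = cosine_sim(embeddings[:1], embeddings[1:])[0]
best = int(np.argmax(sims))  # index of the candidate closest to the query
print(best)  # prints 0
```

With real embeddings from this model, the same `argmax` over similarities picks the paper title most related to the query phrase.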
 
@@ -134,6 +166,22 @@ You can finetune this model on your own dataset.
  *List how the model may foreseeably be misused and address what users ought not to do with the model.*
  -->

+ ## Evaluation
+
+ ### Metrics
+
+ #### Triplet
+ * Dataset: `triplet-dev`
+ * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
+
+ | Metric | Value |
+ |:--------------------|:----------|
+ | **cosine_accuracy** | **0.606** |
+ | dot_accuracy | 0.395 |
+ | manhattan_accuracy | 0.603 |
+ | euclidean_accuracy | 0.615 |
+ | max_accuracy | 0.615 |
+
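The triplet accuracies added above count, for each (anchor, positive, negative) triplet, whether the anchor lands closer to the positive than to the negative under the given distance. A self-contained numpy sketch of the cosine variant on toy vectors (`triplet_cosine_accuracy` is an illustrative helper, not the TripletEvaluator API):

```python
import numpy as np

def triplet_cosine_accuracy(anchor: np.ndarray, positive: np.ndarray,
                            negative: np.ndarray) -> float:
    """Fraction of triplets where cos(anchor, positive) > cos(anchor, negative)."""
    def unit(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    a, p, n = unit(anchor), unit(positive), unit(negative)
    pos_sim = (a * p).sum(axis=1)  # row-wise cosine similarities
    neg_sim = (a * n).sum(axis=1)
    return float((pos_sim > neg_sim).mean())

# Toy triplets: the first two positives are clearly closer, the third is not.
a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
p = np.array([[0.9, 0.1], [0.1, 0.9], [-1.0, 0.0]])
n = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 0.9]])
print(triplet_cosine_accuracy(a, p, n))  # prints 0.6666666666666666
```

The dot/Manhattan/Euclidean accuracies in the table follow the same scheme with the comparison done under the respective similarity or distance.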
  <!--
  ## Bias, Risks and Limitations
@@ -153,19 +201,19 @@ You can finetune this model on your own dataset.
  #### json

  * Dataset: json
+ * Size: 10,053 training samples
  * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
  * Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
  |:--------|:---------|:---------|:---------|
  | type | string | string | string |
+ | details | <ul><li>min: 4 tokens</li><li>mean: 7.49 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.08 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.46 tokens</li><li>max: 48 tokens</li></ul> |
  * Samples:
+ | anchor | positive | negative |
+ |:---------|:---------|:---------|
+ | <code>COM-induced secretome changes in U937 monocytes</code> | <code>Characterization of calcium oxalate crystal-induced changes in the secretome of U937 human monocytes. </code> | <code>Monocytes. </code> |
+ | <code>Metamaterials</code> | <code>Sound attenuation optimization using metaporous materials tuned on exceptional points. </code> | <code>Metamaterials: A cat's eye for all directions. </code> |
+ | <code>Pediatric Parasitology</code> | <code>Parasitic infections among school age children 6 to 11-years-of-age in the Eastern province. </code> | <code>[DIALOGUE ON PEDIATRIC PARASITOLOGY]. </code> |
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
 
@@ -177,10 +225,11 @@ You can finetune this model on your own dataset.
  ### Training Hyperparameters
  #### Non-Default Hyperparameters

+ - `eval_strategy`: steps
  - `per_device_train_batch_size`: 32
  - `per_device_eval_batch_size`: 32
  - `learning_rate`: 2e-05
+ - `num_train_epochs`: 6
  - `lr_scheduler_type`: cosine_with_restarts
  - `warmup_ratio`: 0.1
  - `bf16`: True
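The loss used throughout this card, MultipleNegativesRankingLoss, treats each anchor's positive as the correct "class" in a cross-entropy over scaled similarities, with every other in-batch positive acting as a negative (in the library, the explicit negative column of triplet data is appended to that candidate pool). A minimal numpy sketch of the in-batch idea, not the sentence-transformers implementation; `mnr_loss` is a hypothetical helper and `scale=20.0` mirrors the library's documented default:

```python
import numpy as np

def mnr_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """Cross-entropy over scaled cosine similarities: for anchor i, positive i
    is the target and every other in-batch positive is a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                    # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct "class" for anchor i is positive i, i.e. the diagonal.
    return float(-np.diag(log_probs).mean())

rng = np.random.default_rng(0)
anchors = rng.normal(size=(32, 16))
random_positives = rng.normal(size=(32, 16))
print(mnr_loss(anchors, anchors))           # aligned pairs: loss near zero
print(mnr_loss(anchors, random_positives))  # unrelated pairs: much higher loss
```

This in-batch construction is why larger `per_device_train_batch_size` values (32 here) tend to help: each anchor sees more negatives per step.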
 
@@ -191,7 +240,7 @@ You can finetune this model on your own dataset.

  - `overwrite_output_dir`: False
  - `do_predict`: False
+ - `eval_strategy`: steps
  - `prediction_loss_only`: True
  - `per_device_train_batch_size`: 32
  - `per_device_eval_batch_size`: 32
 
@@ -206,7 +255,7 @@ You can finetune this model on your own dataset.
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
  - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 6
  - `max_steps`: -1
  - `lr_scheduler_type`: cosine_with_restarts
  - `lr_scheduler_kwargs`: {}
 
@@ -305,77 +354,45 @@ You can finetune this model on your own dataset.
  </details>

  ### Training Logs
+ | Epoch | Step | Training Loss | triplet-dev_cosine_accuracy |
+ |:------:|:----:|:-------------:|:---------------------------:|
+ | 0 | 0 | - | 0.373 |
+ | 0.1667 | 1 | 3.138 | - |
+ | 0.3333 | 2 | 2.9761 | - |
+ | 0.5 | 3 | 2.7135 | - |
+ | 0.6667 | 4 | 2.5144 | - |
+ | 0.8333 | 5 | 1.9797 | - |
+ | 1.0 | 6 | 1.2683 | - |
+ | 1.1667 | 7 | 1.6058 | - |
+ | 1.3333 | 8 | 1.3236 | - |
+ | 1.5 | 9 | 1.1134 | - |
+ | 1.6667 | 10 | 1.1205 | - |
+ | 1.8333 | 11 | 0.9369 | - |
+ | 2.0 | 12 | 0.6215 | - |
+ | 2.1667 | 13 | 1.0374 | - |
+ | 2.3333 | 14 | 0.9355 | - |
+ | 2.5 | 15 | 0.7118 | - |
+ | 2.6667 | 16 | 0.7967 | - |
+ | 2.8333 | 17 | 0.5739 | - |
+ | 3.0 | 18 | 0.4515 | - |
+ | 3.1667 | 19 | 0.8018 | - |
+ | 3.3333 | 20 | 0.6557 | - |
+ | 3.5 | 21 | 0.6027 | - |
+ | 3.6667 | 22 | 0.6747 | - |
+ | 3.8333 | 23 | 0.5013 | - |
+ | 4.0 | 24 | 0.1428 | - |
+ | 4.1667 | 25 | 0.5889 | 0.596 |
+ | 4.3333 | 26 | 0.5439 | - |
+ | 4.5 | 27 | 0.4742 | - |
+ | 4.6667 | 28 | 0.5734 | - |
+ | 4.8333 | 29 | 0.3966 | - |
+ | 5.0 | 30 | 0.1793 | - |
+ | 5.1667 | 31 | 0.5408 | - |
+ | 5.3333 | 32 | 0.5174 | - |
+ | 5.5 | 33 | 0.4179 | - |
+ | 5.6667 | 34 | 0.4589 | - |
+ | 5.8333 | 35 | 0.3683 | - |
+ | 6.0 | 36 | 0.1442 | 0.606 |

  ### Framework Versions
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8bde86f785555d47618677bc7c74848231a3556a1eb547e6ded8a24d9917051b
+ oid sha256:08d5e8be928eb50a2410dc88bc791f5b18353249539d816ed452827e06ed169a
  size 439696224