gokulsrinivasagan committed on
Commit 44a2462 · verified · 1 Parent(s): 68ba646

Model save

README.md ADDED
@@ -0,0 +1,78 @@
---
library_name: transformers
license: apache-2.0
base_model: google/bert_uncased_L-4_H-256_A-4
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: bert_uncased_L-4_H-256_A-4_stsb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert_uncased_L-4_H-256_A-4_stsb

This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4), apparently on the GLUE STS-B (Semantic Textual Similarity Benchmark) dataset, judging by the model name and the Pearson/Spearman metrics.
It achieves the following results on the evaluation set:
- Loss: 0.6379
- Pearson: 0.8559
- Spearmanr: 0.8555
- Combined Score: 0.8557

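As a quick orientation, the sketch below shows one way to load this checkpoint for sentence-pair similarity scoring with the `transformers` API. It is a hedged example, not code taken from this repository: the repository id `gokulsrinivasagan/bert_uncased_L-4_H-256_A-4_stsb`, the single-output regression head (`num_labels=1`), and the roughly 0–5 STS-B score scale are all assumptions based on the model name and metrics above.

```python
# Hedged usage sketch: assumes this checkpoint is published as
# "gokulsrinivasagan/bert_uncased_L-4_H-256_A-4_stsb" and carries a
# single-output regression head, as is standard for STS-B fine-tuning.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "gokulsrinivasagan/bert_uncased_L-4_H-256_A-4_stsb"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

sentence_a = "A man is playing a guitar."
sentence_b = "Someone is strumming a guitar."

# STS-B sentence pairs are encoded as one sequence: [CLS] a [SEP] b [SEP]
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

print(f"Predicted similarity (STS-B scale, roughly 0-5): {score:.2f}")
```
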
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50

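For reference, those hyperparameters map roughly onto a `Trainer` setup like the one below. This is a hedged reconstruction rather than the exact training script: the output directory, evaluation/save strategy, and best-checkpoint handling are assumptions (the results table stops after 16 of the 50 configured epochs, which suggests early stopping, but the exact callback setup is not recorded here).

```python
# Hedged sketch of a Trainer configuration matching the hyperparameters listed
# above; dataset loading, tokenization, and the metric function are omitted.
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "google/bert_uncased_L-4_H-256_A-4",
    num_labels=1,  # regression head for STS-B (assumed)
)

training_args = TrainingArguments(
    output_dir="bert_uncased_L-4_H-256_A-4_stsb",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="epoch",        # assumed; the log shows per-epoch evaluation
    save_strategy="epoch",        # assumed
    load_best_model_at_end=True,  # assumed
)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=..., eval_dataset=..., compute_metrics=...  (omitted here)
)
```
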
### Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 5.5773 | 1.0 | 23 | 2.7412 | 0.3845 | 0.3343 | 0.3594 |
| 2.5793 | 2.0 | 46 | 1.9158 | 0.7727 | 0.7557 | 0.7642 |
| 1.5767 | 3.0 | 69 | 0.9541 | 0.7706 | 0.7473 | 0.7590 |
| 0.9474 | 4.0 | 92 | 0.7628 | 0.8133 | 0.8070 | 0.8101 |
| 0.7258 | 5.0 | 115 | 0.6785 | 0.8383 | 0.8429 | 0.8406 |
| 0.6162 | 6.0 | 138 | 0.6756 | 0.8436 | 0.8439 | 0.8437 |
| 0.5455 | 7.0 | 161 | 0.6391 | 0.8480 | 0.8504 | 0.8492 |
| 0.4912 | 8.0 | 184 | 0.6582 | 0.8461 | 0.8472 | 0.8466 |
| 0.4443 | 9.0 | 207 | 0.6561 | 0.8472 | 0.8482 | 0.8477 |
| 0.3995 | 10.0 | 230 | 0.6429 | 0.8504 | 0.8503 | 0.8503 |
| 0.3689 | 11.0 | 253 | 0.6283 | 0.8545 | 0.8542 | 0.8543 |
| 0.3418 | 12.0 | 276 | 0.6592 | 0.8520 | 0.8520 | 0.8520 |
| 0.3302 | 13.0 | 299 | 0.6507 | 0.8524 | 0.8530 | 0.8527 |
| 0.319 | 14.0 | 322 | 0.6484 | 0.8528 | 0.8526 | 0.8527 |
| 0.2863 | 15.0 | 345 | 0.6397 | 0.8526 | 0.8527 | 0.8526 |
| 0.2774 | 16.0 | 368 | 0.6379 | 0.8559 | 0.8555 | 0.8557 |

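The Combined Score column is consistent with a simple average of the Pearson and Spearman correlations, e.g. (0.8559 + 0.8555) / 2 ≈ 0.8557. A minimal sketch of such a metric function is shown below; it assumes `scipy` is available and that predictions arrive from a regression head of shape (N, 1), which is an assumption about this setup rather than recorded fact.

```python
# Hedged sketch of a GLUE STS-B style metric function: Pearson and Spearman
# correlation between predicted and gold similarity scores, plus their mean.
import numpy as np
from scipy.stats import pearsonr, spearmanr


def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.squeeze(predictions)  # regression head outputs shape (N, 1)
    pearson_corr = pearsonr(predictions, labels)[0]
    spearman_corr = spearmanr(predictions, labels)[0]
    return {
        "pearson": pearson_corr,
        "spearmanr": spearman_corr,
        "combined_score": (pearson_corr + spearman_corr) / 2.0,
    }
```
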
### Framework versions

- Transformers 4.46.3
- PyTorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3

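When reproducing results, it can help to confirm the runtime environment matches the versions listed above; a small, hedged check (assuming all four libraries are importable) might look like this:

```python
# Minimal environment check against the versions listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.46.3
print("PyTorch:", torch.__version__)              # expected 2.2.1+cu118
print("Datasets:", datasets.__version__)          # expected 2.17.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.20.3
```
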
logs/events.out.tfevents.1733332970.ki-g0008.1761130.12 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9b15f09fbf5da29f384131bdedee85704cb791a555db2be47f6e5c4679a78114
- size 14086
+ oid sha256:ec4b716599f25bbc5d31393d16ff501f61ecac76c394f21b305b56234442331a
+ size 15728
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:aef882085692e89c5fad499b5c35be25db398d593c20b48259f8bd89530964b4
+ oid sha256:5d3e9d33368637ee4c0cfea039e0b95d7f3e8f5b40014ffcc5fcf9679325058e
  size 44691580