alenatz committed
Commit 2c8e0ed · verified · 1 Parent(s): a3ad9fc

alenatz/relation-extraction-bert-biocause
README.md CHANGED
@@ -18,15 +18,16 @@ should probably proofread and complete it, then remove this comment. -->
 
 # relation-bert-biocause
 
-This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
+This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0342
-- Precision: 0.6948
-- Recall: 0.4193
-- F1: 0.4710
-- Accuracy: 0.9927
-- Balanced Accuracy: 0.4193
-- Kappa: 0.2100
+- Loss: 0.2130
+- Precision: 0.1019
+- Recall: 0.5855
+- F1: 0.1737
+- Accuracy: 0.9399
+- Relation P: 0.1019
+- Relation R: 0.5855
+- Relation F1: 0.1737
 
 ## Model description
 
@@ -45,26 +46,30 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- learning_rate: 4e-05
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 1
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy | Balanced Accuracy | Kappa  |
-|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-----------------:|:------:|
-| 0.0776        | 0.5764 | 100  | 0.0367          | 0.7530    | 0.3905 | 0.4323 | 0.9927   | 0.3905            | 0.1475 |
+| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy | Relation P | Relation R | Relation F1 |
+|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:----------:|:----------:|:-----------:|
+| 0.7103        | 0.1282 | 20   | 0.3074          | 0.0214    | 0.2368 | 0.0392 | 0.8048   | 0.0214     | 0.2368     | 0.0392      |
+| 0.7103        | 0.2564 | 40   | 0.2230          | 0.0523    | 0.3882 | 0.0922 | 0.8985   | 0.0523     | 0.3882     | 0.0922      |
+| 0.7103        | 0.3846 | 60   | 0.2568          | 0.0983    | 0.5987 | 0.1688 | 0.9413   | 0.0983     | 0.5987     | 0.1688      |
+| 0.7103        | 0.5128 | 80   | 0.2166          | 0.0593    | 0.4671 | 0.1053 | 0.9000   | 0.0593     | 0.4671     | 0.1053      |
+| 0.7103        | 0.6410 | 100  | 0.2308          | 0.1240    | 0.6842 | 0.2099 | 0.9489   | 0.1240     | 0.6842     | 0.2099      |
+| 0.7103        | 0.7692 | 120  | 0.2246          | 0.1080    | 0.625  | 0.1841 | 0.9435   | 0.1080     | 0.625      | 0.1841      |
+| 0.7103        | 0.8974 | 140  | 0.2290          | 0.1196    | 0.6316 | 0.2010 | 0.9483   | 0.1196     | 0.6316     | 0.2010      |
 
 
 ### Framework versions
 
-- Transformers 4.41.2
+- Transformers 4.42.4
 - Pytorch 2.3.0+cu121
 - Datasets 2.20.0
 - Tokenizers 0.19.1
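
The hyperparameter list in the updated card corresponds to a plain Hugging Face `Trainer` setup. As a reconstruction from the card alone (not the author's actual script), the arguments would look roughly like this; `output_dir` and the 20-step evaluation cadence, read off the results table, are assumptions:

```python
from transformers import TrainingArguments

# Reconstructed from the updated model card (Transformers 4.42.4).
# Adam defaults betas=(0.9, 0.999) and epsilon=1e-08; linear LR schedule; 1 epoch.
training_args = TrainingArguments(
    output_dir="relation-bert-biocause",  # assumed; not stated in the card
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="steps",  # the results table reports metrics every 20 steps
    eval_steps=20,
)
```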
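
For readers of the card, a minimal inference sketch: the repo id comes from this commit's header, the input sentence is invented, and the call assumes the checkpoint loads through the standard token-classification pipeline despite the custom architecture string (see the config.json note below).

```python
from transformers import pipeline

# Run the fine-tuned checkpoint as a token-classification pipeline and merge
# consecutive I-REL tokens into spans with the "simple" aggregation strategy.
nlp = pipeline(
    "token-classification",
    model="alenatz/relation-extraction-bert-biocause",
    aggregation_strategy="simple",
)

# Invented biomedical example; real inputs would be BioCause-style text.
print(nlp("Overexpression of sigB leads to activation of the stress regulon."))
```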
config.json CHANGED
@@ -1,7 +1,7 @@
 {
   "_name_or_path": "bert-base-cased",
   "architectures": [
-    "BertForTokenClassification"
+    "BertForUnbalancedTokenClassification"
   ],
   "attention_probs_dropout_prob": 0.1,
   "classifier_dropout": null,
@@ -11,14 +11,12 @@
   "hidden_size": 768,
   "id2label": {
     "0": "O",
-    "1": "B-RELATION",
-    "2": "I-RELATION"
+    "1": "I-REL"
   },
   "initializer_range": 0.02,
   "intermediate_size": 3072,
   "label2id": {
-    "B-RELATION": 1,
-    "I-RELATION": 2,
+    "I-REL": 1,
     "O": 0
   },
   "layer_norm_eps": 1e-12,
@@ -29,7 +27,7 @@
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
   "torch_dtype": "float32",
-  "transformers_version": "4.41.2",
+  "transformers_version": "4.42.4",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 28996
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e67f29a0634d271027b3f82bebaadf272e947bb474c5967785313e09405d6561
-size 430911284
+oid sha256:9fe954b1fbb2265c59319c44afb705da8fb3722ce34294dc2f02df1f8ac0c8ef
+size 433270744
runs/Jul13_20-07-13_a518198a24e3/events.out.tfevents.1720901242.a518198a24e3.2562.12 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af99f53483db3667eade388ebef84ed854dad1c89dfbc573629131440804b8a5
+size 9876
runs/Jul13_20-07-13_a518198a24e3/events.out.tfevents.1720901362.a518198a24e3.2562.13 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d692ce1963f1ad64446c67f95e536f1724541d7c331a4b3764f61ebe34c9646d
+size 723
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ba7bd6094f1df4848a18ad7ba9f68fdc4b8a36ebf0555f908045cd92efd744c1
-size 5048
+oid sha256:bd0a710f1e0519b4d45d49708236592299e5ce368a7b6ba8a193961e147b7a07
+size 5112