vishnun committed on
Commit
dd32923
1 Parent(s): 04e0398

End of training

README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 license: apache-2.0
+base_model: distilbert-base-uncased
 tags:
 - generated_from_trainer
 metrics:
@@ -7,28 +8,27 @@ metrics:
 - recall
 - f1
 - accuracy
-base_model: distilbert-base-uncased
 model-index:
-- name: kg_model
+- name: knowledge-graph-nlp
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# kg_model
+# knowledge-graph-nlp
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3039
-- Precision: 0.7629
-- Recall: 0.7025
-- F1: 0.7315
-- Accuracy: 0.8965
+- Loss: 0.2719
+- Precision: 0.8636
+- Recall: 0.8409
+- F1: 0.8521
+- Accuracy: 0.9291
 
 ## Model description
 
-Lite model to extract entities and the relations between them; can be leveraged for question-answering and querying tasks.
+More information needed
 
 ## Intended uses & limitations
 
@@ -55,15 +55,15 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| 0.3736        | 1.0   | 1063 | 0.3379          | 0.7542    | 0.6217 | 0.6816 | 0.8813   |
-| 0.3078        | 2.0   | 2126 | 0.3075          | 0.7728    | 0.6678 | 0.7164 | 0.8929   |
-| 0.267         | 3.0   | 3189 | 0.3017          | 0.7597    | 0.6999 | 0.7285 | 0.8954   |
-| 0.2455        | 4.0   | 4252 | 0.3039          | 0.7629    | 0.7025 | 0.7315 | 0.8965   |
+| 0.1915        | 1.0   | 855  | 0.2499          | 0.8465    | 0.8228 | 0.8345 | 0.9227   |
+| 0.1345        | 2.0   | 1710 | 0.2609          | 0.8528    | 0.8370 | 0.8448 | 0.9259   |
+| 0.1078        | 3.0   | 2565 | 0.2664          | 0.8558    | 0.8450 | 0.8504 | 0.9285   |
+| 0.0949        | 4.0   | 3420 | 0.2719          | 0.8636    | 0.8409 | 0.8521 | 0.9291   |
 
 
 ### Framework versions
 
-- Transformers 4.27.3
-- Pytorch 1.13.1+cu116
-- Datasets 2.10.1
-- Tokenizers 0.13.2
+- Transformers 4.35.2
+- Pytorch 2.1.0+cu121
+- Datasets 2.17.0
+- Tokenizers 0.15.1
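The precision, recall, and F1 figures in the card above are entity-level token-classification metrics, conventionally computed (e.g. by `seqeval`) from BIO tag sequences rather than per-token labels. A minimal pure-Python sketch of that computation follows; the tag sequences in the test are hypothetical examples, not drawn from the model's actual evaluation set.

```python
# Entity-level precision/recall/F1 in the style of seqeval, the usual
# source of the metrics reported in Trainer-generated model cards.

def extract_entities(tags):
    """Collect (type, start, end) spans from a BIO tag sequence."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last span
        # Close the open span on O, on a new B-, or on an I- of another type.
        if tag == "O" or tag.startswith("B-") or (
            tag.startswith("I-") and tag[2:] != etype
        ):
            if start is not None:
                entities.append((etype, start, i))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            start, etype = i, tag[2:]  # tolerate I- without a leading B-
    return entities

def f1_scores(gold, pred):
    """Entity-level precision, recall, and F1 over exact span matches."""
    g, p = set(extract_entities(gold)), set(extract_entities(pred))
    tp = len(g & p)
    precision = tp / len(p) if p else 0.0
    recall = tp / len(g) if g else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

A predicted span counts as correct only if both its boundaries and its type match a gold span exactly, which is why entity-level F1 is stricter than per-token accuracy.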
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e31714562dce8a9dde1d977a0d60f7e7c4207dd582e0d9630db448cd2708d424
+oid sha256:1da3b9c94539958d9e413d6f03b563fda02240db8c4e8f6bb6476e39639c2cf7
 size 265476168
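The three-line files diffed here are Git LFS pointer files, not the weights themselves: each records the LFS spec version, a `sha256` object id, and the blob size in bytes. A small sketch of parsing one such pointer (using the new `model.safetensors` pointer from this commit):

```python
# Parse a Git LFS pointer file (https://git-lfs.github.com/spec/v1)
# into its key/value fields: version, oid, size.

def parse_lfs_pointer(text):
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1da3b9c94539958d9e413d6f03b563fda02240db8c4e8f6bb6476e39639c2cf7
size 265476168
"""
info = parse_lfs_pointer(pointer)
```

Because only the pointer lives in git history, a weights update changes the `oid` line (and `size`, if the file length changed) while the actual blob is stored out of band by LFS.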
runs/Feb14_08-09-42_c87e355fe59e/events.out.tfevents.1707898194.c87e355fe59e.433.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68152f05f5bc7e98bc1c51fae543f4f88767e207e47f24af0334de2a0b2cfc52
+size 7482
tokenizer_config.json CHANGED
@@ -1,11 +1,53 @@
 {
+  "added_tokens_decoder": {
+    "0": {
+      "content": "[PAD]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "100": {
+      "content": "[UNK]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "101": {
+      "content": "[CLS]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "102": {
+      "content": "[SEP]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "103": {
+      "content": "[MASK]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    }
+  },
+  "clean_up_tokenization_spaces": true,
   "cls_token": "[CLS]",
   "do_lower_case": true,
   "mask_token": "[MASK]",
   "model_max_length": 512,
   "pad_token": "[PAD]",
   "sep_token": "[SEP]",
-  "special_tokens_map_file": null,
   "strip_accents": null,
   "tokenize_chinese_chars": true,
   "tokenizer_class": "DistilBertTokenizer",
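The updated `tokenizer_config.json` serializes special tokens under `added_tokens_decoder`, a map from vocabulary id to token metadata. A sketch of recovering the id-to-token mapping from that field, using a copy of the JSON above trimmed to the keys the sketch needs:

```python
import json

# Recover the id -> special-token mapping from the new-style
# "added_tokens_decoder" field of tokenizer_config.json.
# config_text is a trimmed excerpt of the file shown in the diff.
config_text = """
{
  "added_tokens_decoder": {
    "0":   {"content": "[PAD]",  "special": true},
    "100": {"content": "[UNK]",  "special": true},
    "101": {"content": "[CLS]",  "special": true},
    "102": {"content": "[SEP]",  "special": true},
    "103": {"content": "[MASK]", "special": true}
  }
}
"""
config = json.loads(config_text)
id_to_token = {
    int(idx): entry["content"]
    for idx, entry in config["added_tokens_decoder"].items()
}
```

Note that JSON object keys are strings, so the ids must be cast back to `int`; the ids here (0, 100-103) are the standard BERT/DistilBERT special-token positions.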
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7706abd267d9ac612b77ec2e8b1fab84ed50562ca97e89f2059f8c2ee63795cb
-size 3579
+oid sha256:0fb2d8a08d684d2ca4cbf387bfb296b7d9475e900d25897f19d502a358f4df58
+size 4600