Commit 254a705 · verified · 1 Parent(s): eaa096b
mp-02 committed

layoutlmv3-finetuned-cord

Files changed (2):
1. README.md +25 -43
2. model.safetensors +1 -1
README.md CHANGED
@@ -1,9 +1,8 @@
 ---
-base_model: layoutlmv3
+license: cc-by-nc-sa-4.0
+base_model: microsoft/layoutlmv3-base
 tags:
 - generated_from_trainer
-datasets:
-- mp-02/cord
 metrics:
 - precision
 - recall
@@ -11,26 +10,7 @@ metrics:
 - accuracy
 model-index:
 - name: layoutlmv3-finetuned-cord
-  results:
-  - task:
-      name: Token Classification
-      type: token-classification
-    dataset:
-      name: mp-02/cord
-      type: mp-02/cord
-    metrics:
-    - name: Precision
-      type: precision
-      value: 0.963984674329502
-    - name: Recall
-      type: recall
-      value: 0.9767080745341615
-    - name: F1
-      type: f1
-      value: 0.9703046664095644
-    - name: Accuracy
-      type: accuracy
-      value: 0.9690152801358234
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -38,13 +18,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # layoutlmv3-finetuned-cord
 
-This model is a fine-tuned version of [layoutlmv3](https://huggingface.co/layoutlmv3) on the mp-02/cord dataset.
+This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2087
-- Precision: 0.9640
-- Recall: 0.9767
-- F1: 0.9703
-- Accuracy: 0.9690
+- Loss: 0.2108
+- Precision: 0.9573
+- Recall: 0.9744
+- F1: 0.9658
+- Accuracy: 0.9652
 
 ## Model description
 
@@ -63,9 +43,9 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
-- train_batch_size: 5
-- eval_batch_size: 5
+- learning_rate: 5e-05
+- train_batch_size: 10
+- eval_batch_size: 10
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -73,21 +53,23 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch   | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
-|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log        | 1.5625  | 250  | 0.2302          | 0.9594    | 0.9720 | 0.9657 | 0.9656   |
-| 0.041         | 3.125   | 500  | 0.2176          | 0.9542    | 0.9705 | 0.9623 | 0.9618   |
-| 0.041         | 4.6875  | 750  | 0.1903          | 0.9573    | 0.9736 | 0.9654 | 0.9682   |
-| 0.0302        | 6.25    | 1000 | 0.2027          | 0.9602    | 0.9744 | 0.9672 | 0.9660   |
-| 0.0302        | 7.8125  | 1250 | 0.2174          | 0.9670    | 0.9775 | 0.9722 | 0.9703   |
-| 0.019         | 9.375   | 1500 | 0.2018          | 0.9640    | 0.9775 | 0.9707 | 0.9711   |
-| 0.019         | 10.9375 | 1750 | 0.2084          | 0.9677    | 0.9783 | 0.9730 | 0.9694   |
-| 0.0115        | 12.5    | 2000 | 0.2087          | 0.9640    | 0.9767 | 0.9703 | 0.9690   |
+| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+| No log        | 2.5   | 200  | 0.3004          | 0.9060    | 0.9425 | 0.9239 | 0.9304   |
+| No log        | 5.0   | 400  | 0.1818          | 0.9351    | 0.9620 | 0.9483 | 0.9559   |
+| 0.4239        | 7.5   | 600  | 0.1737          | 0.9543    | 0.9728 | 0.9635 | 0.9677   |
+| 0.4239        | 10.0  | 800  | 0.1698          | 0.9594    | 0.9728 | 0.9661 | 0.9660   |
+| 0.0418        | 12.5  | 1000 | 0.1981          | 0.9602    | 0.9736 | 0.9668 | 0.9610   |
+| 0.0418        | 15.0  | 1200 | 0.2044          | 0.9565    | 0.9728 | 0.9646 | 0.9622   |
+| 0.0418        | 17.5  | 1400 | 0.1810          | 0.9581    | 0.9759 | 0.9669 | 0.9699   |
+| 0.014         | 20.0  | 1600 | 0.2161          | 0.9572    | 0.9720 | 0.9646 | 0.9665   |
+| 0.014         | 22.5  | 1800 | 0.2066          | 0.9610    | 0.9752 | 0.9680 | 0.9648   |
+| 0.0061        | 25.0  | 2000 | 0.2108          | 0.9573    | 0.9744 | 0.9658 | 0.9652   |
 
 
 ### Framework versions
 
 - Transformers 4.42.4
-- Pytorch 2.3.1+cu121
+- Pytorch 2.4.0+cu118
 - Datasets 2.21.0
 - Tokenizers 0.19.1
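As a quick sanity check on the card's updated evaluation numbers, the reported F1 should be the harmonic mean of the reported precision and recall. A minimal sketch, using the final-row values from the diff above (four-decimal rounding assumed to match the card):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Final evaluation row from the updated card (step 2000).
precision, recall = 0.9573, 0.9744
print(round(f1_score(precision, recall), 4))  # 0.9658, the F1 reported in the card
```

The same identity holds for the removed metrics (precision 0.9640, recall 0.9767 give the old F1 of 0.9703), so both versions of the card are internally consistent.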
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:104ced35cb76f12d543b9a3d7f6df835cc1121a5d5882b8fdb37599cb45f2025
+oid sha256:7145f6fc6c90bdea5043c0524cf4b1b3ae11b9363cc67fa5b964dcb3f85f45de
 size 503816564
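The model.safetensors entry is a Git LFS pointer: the repo stores only the blob's sha256 oid and byte size, so this commit swaps one oid for another while the size stays 503816564. A minimal sketch of checking a downloaded blob against such a pointer (the pointer layout follows the LFS v1 spec; the sample bytes and helper names are illustrative, not part of this repo):

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build the Git LFS v1 pointer text for a blob."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

def matches_pointer(data: bytes, pointer: str) -> bool:
    """Check a downloaded blob against its LFS pointer text."""
    return lfs_pointer(data) == pointer

blob = b"example weights"  # stand-in for the real model.safetensors bytes
ptr = lfs_pointer(blob)
print(matches_pointer(blob, ptr))        # True for the matching blob
print(matches_pointer(b"tampered", ptr)) # False once the bytes differ
```

This is why the diff above touches only the `oid` line: any change to the weights produces a new sha256, while the pointer's `version` and (here) `size` lines can stay identical.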