brandaobrandisborges committed on
Commit e2d4bcb
1 Parent(s): 6288998

End of training

README.md ADDED
@@ -0,0 +1,73 @@
+ ---
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: layoutlm-synthchecking-padding
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # layoutlm-synthchecking-padding
+
+ This model is a fine-tuned version of [microsoft/layoutlm-large-uncased](https://huggingface.co/microsoft/layoutlm-large-uncased) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0005
+ - Ank Address: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30}
+ - Ank Name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30}
+ - Ayee Address: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30}
+ - Ayee Name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30}
+ - Icr: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30}
+ - Mount: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30}
+ - Overall Precision: 1.0
+ - Overall Recall: 1.0
+ - Overall F1: 1.0
+ - Overall Accuracy: 1.0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 4
+ - eval_batch_size: 2
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 8
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Ank Address | Ank Name | Ayee Address | Ayee Name | Icr | Mount | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:------------:|:---------:|:---:|:-----:|:-----------------:|:--------------:|:----------:|:----------------:|
+ | 1.3656 | 1.0 | 30 | 0.8294 | {'precision': 0.17721518987341772, 'recall': 0.4666666666666667, 'f1': 0.25688073394495414, 'number': 30} | {'precision': 0.23076923076923078, 'recall': 0.1, 'f1': 0.13953488372093023, 'number': 30} | {'precision': 0.011235955056179775, 'recall': 0.03333333333333333, 'f1': 0.01680672268907563, 'number': 30} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | 0.2989 | 0.4333 | 0.3537 | 0.7804 |
+ | 0.418 | 2.0 | 60 | 0.0552 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 0.9666666666666667, 'recall': 0.9666666666666667, 'f1': 0.9666666666666667, 'number': 30} | {'precision': 0.9666666666666667, 'recall': 0.9666666666666667, 'f1': 0.9666666666666667, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | 0.9889 | 0.9889 | 0.9889 | 0.9984 |
+ | 0.033 | 3.0 | 90 | 0.0022 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | 1.0 | 1.0 | 1.0 | 1.0 |
+ | 0.0056 | 4.0 | 120 | 0.0010 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | 1.0 | 1.0 | 1.0 | 1.0 |
+ | 0.0032 | 5.0 | 150 | 0.0007 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | 1.0 | 1.0 | 1.0 | 1.0 |
+ | 0.0025 | 6.0 | 180 | 0.0006 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | 1.0 | 1.0 | 1.0 | 1.0 |
+ | 0.0028 | 7.0 | 210 | 0.0005 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | 1.0 | 1.0 | 1.0 | 1.0 |
+ | 0.0022 | 8.0 | 240 | 0.0005 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 30} | 1.0 | 1.0 | 1.0 | 1.0 |
+
+
+ ### Framework versions
+
+ - Transformers 4.27.1
+ - Pytorch 1.13.1+cu116
+ - Datasets 2.10.1
+ - Tokenizers 0.13.2
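
The hyperparameters listed in the card above map directly onto Hugging Face `TrainingArguments`. Below is a minimal sketch of an equivalent `Trainer` setup, not the repository's actual training script: the output directory, label count, and the `train_dataset`/`eval_dataset` objects are placeholders, and only values reported in the card are set explicitly.

```python
from transformers import AutoModelForTokenClassification, Trainer, TrainingArguments

# Mirrors the "Training hyperparameters" section of the card above.
training_args = TrainingArguments(
    output_dir="layoutlm-synthchecking-padding",  # placeholder output directory
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                 # Adam betas/epsilon as listed (also the defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                      # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",    # assumption: the card reports metrics once per epoch
    logging_strategy="epoch",
)

model = AutoModelForTokenClassification.from_pretrained(
    "microsoft/layoutlm-large-uncased",
    num_labels=13,  # assumption: 6 entity types in BIO tagging plus "O"
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder: the card does not name the dataset
    eval_dataset=eval_dataset,    # placeholder
)
trainer.train()
```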
logs/events.out.tfevents.1679210624.9cfe23f4f26d.195.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ed656bad40668981bebd5d228f10e9a2806f698ec6aeac775972d416ed89bc91
- size 4862
+ oid sha256:09160f8377c504aaa793e0567a3714ec0fcb8adfde7ec6e4e691a6bcd5d0d839
+ size 9807
preprocessor_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "apply_ocr": true,
+   "do_resize": true,
+   "feature_extractor_type": "LayoutLMv2FeatureExtractor",
+   "image_processor_type": "LayoutLMv2ImageProcessor",
+   "ocr_lang": null,
+   "processor_class": "LayoutLMv2Processor",
+   "resample": 2,
+   "size": {
+     "height": 224,
+     "width": 224
+   },
+   "tesseract_config": ""
+ }
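
The preprocessor config above specifies a LayoutLMv2-style processor with built-in OCR (`apply_ocr: true`) and 224×224 resizing. A minimal inference sketch against the fine-tuned checkpoint might look like the following; the repository id and the input image path are assumptions, and Tesseract/`pytesseract` must be installed for the OCR step.

```python
from PIL import Image
from transformers import AutoModelForTokenClassification, LayoutLMv2Processor

repo_id = "brandaobrandisborges/layoutlm-synthchecking-padding"  # assumed repo id
processor = LayoutLMv2Processor.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

image = Image.open("check_sample.png").convert("RGB")  # hypothetical input image

# With apply_ocr=true the processor runs Tesseract and returns token ids plus
# bounding boxes normalized to a 0-1000 coordinate space.
encoding = processor(image, return_tensors="pt", truncation=True, padding="max_length")

# Pass only the inputs the LayoutLM text model expects (the processor also
# returns the resized image, which this model does not consume).
outputs = model(
    input_ids=encoding["input_ids"],
    bbox=encoding["bbox"],
    attention_mask=encoding["attention_mask"],
    token_type_ids=encoding["token_type_ids"],
)

# Token-level predictions (includes special tokens and subword pieces).
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```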
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fc9e1e79ae4c762760a2f2bd42d96150b9fdba73551969ec0473c0c8fd01956e
+ oid sha256:39f58bc9e91aa8ee1b3245438356159526b077805302d9f4c753966457a033a0
  size 1357511173
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "additional_special_tokens": null,
+   "apply_ocr": false,
+   "cls_token": "[CLS]",
+   "cls_token_box": [
+     0,
+     0,
+     0,
+     0
+   ],
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "only_label_first_subword": true,
+   "pad_token": "[PAD]",
+   "pad_token_box": [
+     0,
+     0,
+     0,
+     0
+   ],
+   "pad_token_label": -100,
+   "processor_class": "LayoutLMv2Processor",
+   "sep_token": "[SEP]",
+   "sep_token_box": [
+     1000,
+     1000,
+     1000,
+     1000
+   ],
+   "special_tokens_map_file": null,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "LayoutLMv2Tokenizer",
+   "unk_token": "[UNK]"
+ }
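
The tokenizer config above encodes the LayoutLM-specific conventions: padding tokens get the bounding box `[0, 0, 0, 0]` and the label `-100` (ignored by the loss), `[SEP]` gets box `[1000, 1000, 1000, 1000]`, and only the first subword of each word keeps its label. A small sketch of how this behaves when encoding pre-extracted words and boxes, assuming the same repo id as above; the example words, boxes, and label ids are made up.

```python
from transformers import LayoutLMv2TokenizerFast

tokenizer = LayoutLMv2TokenizerFast.from_pretrained(
    "brandaobrandisborges/layoutlm-synthchecking-padding"  # assumed repo id
)

# Hypothetical OCR output: words with boxes normalized to a 0-1000 scale.
words = ["Pay", "to", "John", "Doe"]
boxes = [[48, 84, 92, 101], [98, 84, 120, 101], [130, 84, 190, 101], [196, 84, 250, 101]]
word_labels = [0, 0, 1, 2]  # hypothetical label ids

encoding = tokenizer(
    text=words,
    boxes=boxes,
    word_labels=word_labels,
    padding="max_length",
    truncation=True,
    max_length=512,
    return_tensors="pt",
)

# Padding positions receive bbox [0, 0, 0, 0] and label -100 (pad_token_label);
# continuation subwords are also labeled -100 because only_label_first_subword is true.
print(encoding["bbox"][0, -1])    # -> tensor([0, 0, 0, 0])
print(encoding["labels"][0, -1])  # -> tensor(-100)
```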
vocab.txt ADDED
The diff for this file is too large to render. See raw diff