lmv2-g-pan-143doc-06-12

This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set (the per-entity metric format is illustrated after the list):

  • Loss: 0.0443
  • Dob Precision: 1.0
  • Dob Recall: 1.0
  • Dob F1: 1.0
  • Dob Number: 27
  • Fname Precision: 1.0
  • Fname Recall: 0.9643
  • Fname F1: 0.9818
  • Fname Number: 28
  • Name Precision: 0.9630
  • Name Recall: 0.9630
  • Name F1: 0.9630
  • Name Number: 27
  • Pan Precision: 1.0
  • Pan Recall: 1.0
  • Pan F1: 1.0
  • Pan Number: 26
  • Overall Precision: 0.9907
  • Overall Recall: 0.9815
  • Overall F1: 0.9860
  • Overall Accuracy: 0.9978
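
The per-entity precision, recall, F1, and support ("Number") quadruples match the reporting format of the seqeval library, which the Hugging Face token-classification examples use for evaluation. A minimal illustrative sketch, assuming seqeval is installed; the tag sequences below are invented and are not drawn from this model's data:

```python
from seqeval.metrics import classification_report, f1_score

# Invented gold and predicted BIO tag sequences using some of the card's
# entity types; these are illustrative only, not the model's actual output.
y_true = [["B-NAME", "I-NAME", "O", "B-PAN", "O", "B-DOB"]]
y_pred = [["B-NAME", "I-NAME", "O", "B-PAN", "O", "O"]]

print(classification_report(y_true, y_pred))  # per-entity precision/recall/F1/support
print("overall F1:", f1_score(y_true, y_pred))
```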

Model description

More information needed. Judging from the model name and label set (dob, fname, name, pan), this appears to be a LayoutLMv2 token-classification model for extracting the date of birth, father's name, holder name, and PAN number from scanned Indian PAN card documents.

Intended uses & limitations

More information needed
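
The card does not document usage, so here is a minimal inference sketch, assuming the checkpoint loads with the standard LayoutLMv2 classes. The checkpoint path and image filename are placeholders; the processor's built-in OCR requires pytesseract, and LayoutLMv2's visual backbone requires detectron2.

```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# Processor from the base checkpoint; by default it runs OCR (pytesseract)
# to extract words and bounding boxes from the image.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")

# Placeholder path: substitute the actual location of this fine-tuned checkpoint.
model = LayoutLMv2ForTokenClassification.from_pretrained("lmv2-g-pan-143doc-06-12")

image = Image.open("pan_card.png").convert("RGB")  # hypothetical input scan
encoding = processor(image, return_tensors="pt", truncation=True)

outputs = model(**encoding)
pred_ids = outputs.logits.argmax(-1).squeeze().tolist()

# Map predicted label ids back to tag names (dob / fname / name / pan).
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"].squeeze().tolist())
for token, label_id in zip(tokens, pred_ids):
    label = model.config.id2label[label_id]
    if label != "O":
        print(token, label)
```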

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (reconstructed as a Trainer configuration sketch after the list):

  • learning_rate: 4e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • num_epochs: 30
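
A hedged reconstruction of the corresponding TrainingArguments; only the values listed above come from the card, while the model, datasets, and evaluation strategy are assumptions:

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="lmv2-g-pan-143doc-06-12",
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=30,
    evaluation_strategy="epoch",  # assumption: the results table logs metrics once per epoch
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments defaults,
# matching the optimizer settings listed above.

trainer = Trainer(
    model=model,                  # LayoutLMv2ForTokenClassification (see usage sketch above)
    args=training_args,
    train_dataset=train_dataset,  # placeholder: the card does not describe the dataset
    eval_dataset=eval_dataset,    # placeholder
)
trainer.train()
```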

Training results

| Training Loss | Epoch | Step | Validation Loss | Dob Precision | Dob Recall | Dob F1 | Dob Number | Fname Precision | Fname Recall | Fname F1 | Fname Number | Name Precision | Name Recall | Name F1 | Name Number | Pan Precision | Pan Recall | Pan F1 | Pan Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.274 | 1.0 | 114 | 0.9098 | 0.9310 | 1.0 | 0.9643 | 27 | 0.1481 | 0.1429 | 0.1455 | 28 | 0.1639 | 0.3704 | 0.2273 | 27 | 0.8125 | 1.0 | 0.8966 | 26 | 0.4497 | 0.6204 | 0.5214 | 0.9143 |
| 0.7133 | 2.0 | 228 | 0.5771 | 0.9310 | 1.0 | 0.9643 | 27 | 0.2093 | 0.3214 | 0.2535 | 28 | 0.6562 | 0.7778 | 0.7119 | 27 | 0.9630 | 1.0 | 0.9811 | 26 | 0.6336 | 0.7685 | 0.6946 | 0.9443 |
| 0.4593 | 3.0 | 342 | 0.4018 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8276 | 0.8571 | 0.8421 | 28 | 0.9259 | 0.9259 | 0.9259 | 27 | 1.0 | 1.0 | 1.0 | 26 | 0.9273 | 0.9444 | 0.9358 | 0.9655 |
| 0.3011 | 4.0 | 456 | 0.2638 | 0.9643 | 1.0 | 0.9818 | 27 | 1.0 | 0.9286 | 0.9630 | 28 | 0.9259 | 0.9259 | 0.9259 | 27 | 0.9630 | 1.0 | 0.9811 | 26 | 0.9630 | 0.9630 | 0.9630 | 0.9811 |
| 0.2209 | 5.0 | 570 | 0.2108 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8621 | 0.8929 | 0.8772 | 28 | 0.9286 | 0.9630 | 0.9455 | 27 | 0.9286 | 1.0 | 0.9630 | 26 | 0.9204 | 0.9630 | 0.9412 | 0.9811 |
| 0.1724 | 6.0 | 684 | 0.1671 | 0.9643 | 1.0 | 0.9818 | 27 | 0.9286 | 0.9286 | 0.9286 | 28 | 0.8667 | 0.9630 | 0.9123 | 27 | 0.8966 | 1.0 | 0.9455 | 26 | 0.9130 | 0.9722 | 0.9417 | 0.9844 |
| 0.1285 | 7.0 | 798 | 0.1754 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8929 | 0.8929 | 0.8929 | 28 | 0.9630 | 0.9630 | 0.9630 | 27 | 0.9630 | 1.0 | 0.9811 | 26 | 0.9455 | 0.9630 | 0.9541 | 0.9788 |
| 0.0999 | 8.0 | 912 | 0.1642 | 0.9643 | 1.0 | 0.9818 | 27 | 0.9615 | 0.8929 | 0.9259 | 28 | 0.9630 | 0.9630 | 0.9630 | 27 | 0.9630 | 1.0 | 0.9811 | 26 | 0.9630 | 0.9630 | 0.9630 | 0.9811 |
| 0.0862 | 9.0 | 1026 | 0.1417 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8966 | 0.9286 | 0.9123 | 28 | 0.8966 | 0.9630 | 0.9286 | 27 | 0.9630 | 1.0 | 0.9811 | 26 | 0.9292 | 0.9722 | 0.9502 | 0.9788 |
| 0.0722 | 10.0 | 1140 | 0.1317 | 0.9643 | 1.0 | 0.9818 | 27 | 0.9630 | 0.9286 | 0.9455 | 28 | 0.9286 | 0.9630 | 0.9455 | 27 | 0.9630 | 1.0 | 0.9811 | 26 | 0.9545 | 0.9722 | 0.9633 | 0.9822 |
| 0.0748 | 11.0 | 1254 | 0.1220 | 0.9643 | 1.0 | 0.9818 | 27 | 1.0 | 0.8929 | 0.9434 | 28 | 1.0 | 0.9630 | 0.9811 | 27 | 0.9286 | 1.0 | 0.9630 | 26 | 0.9720 | 0.9630 | 0.9674 | 0.9833 |
| 0.0549 | 12.0 | 1368 | 0.1157 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8966 | 0.9286 | 0.9123 | 28 | 0.8667 | 0.9630 | 0.9123 | 27 | 0.8966 | 1.0 | 0.9455 | 26 | 0.9052 | 0.9722 | 0.9375 | 0.9811 |
| 0.0444 | 13.0 | 1482 | 0.1198 | 0.9643 | 1.0 | 0.9818 | 27 | 1.0 | 0.8929 | 0.9434 | 28 | 0.9630 | 0.9630 | 0.9630 | 27 | 0.9630 | 1.0 | 0.9811 | 26 | 0.9720 | 0.9630 | 0.9674 | 0.9811 |
| 0.0371 | 14.0 | 1596 | 0.1082 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8966 | 0.9286 | 0.9123 | 28 | 0.8966 | 0.9630 | 0.9286 | 27 | 0.7879 | 1.0 | 0.8814 | 26 | 0.8824 | 0.9722 | 0.9251 | 0.9833 |
| 0.036 | 15.0 | 1710 | 0.1257 | 0.9643 | 1.0 | 0.9818 | 27 | 0.9630 | 0.9286 | 0.9455 | 28 | 0.9630 | 0.9630 | 0.9630 | 27 | 0.8966 | 1.0 | 0.9455 | 26 | 0.9459 | 0.9722 | 0.9589 | 0.9800 |
| 0.0291 | 16.0 | 1824 | 0.0930 | 0.9643 | 1.0 | 0.9818 | 27 | 0.9643 | 0.9643 | 0.9643 | 28 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8667 | 1.0 | 0.9286 | 26 | 0.9386 | 0.9907 | 0.9640 | 0.9900 |
| 0.0267 | 17.0 | 1938 | 0.0993 | 0.9643 | 1.0 | 0.9818 | 27 | 0.9286 | 0.9286 | 0.9286 | 28 | 0.9286 | 0.9630 | 0.9455 | 27 | 0.9286 | 1.0 | 0.9630 | 26 | 0.9375 | 0.9722 | 0.9545 | 0.9844 |
| 0.023 | 18.0 | 2052 | 0.1240 | 0.9643 | 1.0 | 0.9818 | 27 | 0.7941 | 0.9643 | 0.8710 | 28 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8387 | 1.0 | 0.9123 | 26 | 0.8843 | 0.9907 | 0.9345 | 0.9800 |
| 0.0379 | 19.0 | 2166 | 0.1154 | 0.9643 | 1.0 | 0.9818 | 27 | 1.0 | 0.9286 | 0.9630 | 28 | 0.9286 | 0.9630 | 0.9455 | 27 | 0.9286 | 1.0 | 0.9630 | 26 | 0.9545 | 0.9722 | 0.9633 | 0.9833 |
| 0.0199 | 20.0 | 2280 | 0.1143 | 0.9643 | 1.0 | 0.9818 | 27 | 1.0 | 0.9286 | 0.9630 | 28 | 0.8966 | 0.9630 | 0.9286 | 27 | 0.8667 | 1.0 | 0.9286 | 26 | 0.9292 | 0.9722 | 0.9502 | 0.9844 |
| 0.0256 | 21.0 | 2394 | 0.1175 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8667 | 0.9286 | 0.8966 | 28 | 0.9286 | 0.9630 | 0.9455 | 27 | 0.9286 | 1.0 | 0.9630 | 26 | 0.9211 | 0.9722 | 0.9459 | 0.9811 |
| 0.0388 | 22.0 | 2508 | 0.0964 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8966 | 0.9286 | 0.9123 | 28 | 0.9310 | 1.0 | 0.9643 | 27 | 0.8966 | 1.0 | 0.9455 | 26 | 0.9217 | 0.9815 | 0.9507 | 0.9855 |
| 0.0334 | 23.0 | 2622 | 0.1186 | 0.9643 | 1.0 | 0.9818 | 27 | 1.0 | 0.9286 | 0.9630 | 28 | 1.0 | 0.9630 | 0.9811 | 27 | 0.8966 | 1.0 | 0.9455 | 26 | 0.9633 | 0.9722 | 0.9677 | 0.9833 |
| 0.0134 | 24.0 | 2736 | 0.1193 | 0.9643 | 1.0 | 0.9818 | 27 | 0.9630 | 0.9286 | 0.9455 | 28 | 1.0 | 0.9630 | 0.9811 | 27 | 0.9286 | 1.0 | 0.9630 | 26 | 0.9633 | 0.9722 | 0.9677 | 0.9822 |
| 0.0157 | 25.0 | 2850 | 0.1078 | 1.0 | 1.0 | 1.0 | 27 | 0.9259 | 0.8929 | 0.9091 | 28 | 0.9286 | 0.9630 | 0.9455 | 27 | 0.8966 | 1.0 | 0.9455 | 26 | 0.9369 | 0.9630 | 0.9498 | 0.9833 |
| 0.0157 | 26.0 | 2964 | 0.0758 | 1.0 | 1.0 | 1.0 | 27 | 0.8929 | 0.8929 | 0.8929 | 28 | 1.0 | 1.0 | 1.0 | 27 | 0.8966 | 1.0 | 0.9455 | 26 | 0.9459 | 0.9722 | 0.9589 | 0.9911 |
| 0.0096 | 27.0 | 3078 | 0.0766 | 1.0 | 1.0 | 1.0 | 27 | 0.8929 | 0.8929 | 0.8929 | 28 | 1.0 | 1.0 | 1.0 | 27 | 0.8966 | 1.0 | 0.9455 | 26 | 0.9459 | 0.9722 | 0.9589 | 0.9889 |
| 0.0135 | 28.0 | 3192 | 0.0443 | 1.0 | 1.0 | 1.0 | 27 | 1.0 | 0.9643 | 0.9818 | 28 | 0.9630 | 0.9630 | 0.9630 | 27 | 1.0 | 1.0 | 1.0 | 26 | 0.9907 | 0.9815 | 0.9860 | 0.9978 |
| 0.012 | 29.0 | 3306 | 0.1153 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8966 | 0.9286 | 0.9123 | 28 | 0.8667 | 0.9630 | 0.9123 | 27 | 0.8966 | 1.0 | 0.9455 | 26 | 0.9052 | 0.9722 | 0.9375 | 0.9822 |
| 0.0069 | 30.0 | 3420 | 0.1373 | 0.9643 | 1.0 | 0.9818 | 27 | 0.8966 | 0.9286 | 0.9123 | 28 | 0.9286 | 0.9630 | 0.9455 | 27 | 0.8966 | 1.0 | 0.9455 | 26 | 0.9211 | 0.9722 | 0.9459 | 0.9777 |

Framework versions

  • Transformers 4.20.0.dev0
  • Pytorch 1.11.0+cu113
  • Datasets 2.2.2
  • Tokenizers 0.12.1