
layoutlmv2-base-uncased_finetuned_docvqa

This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 5.2024
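
The base checkpoint is LayoutLMv2, an extractive model that consumes the page image together with OCR text and word bounding boxes, and the repository name indicates a DocVQA-style fine-tune. The card itself gives no usage details, so the following is only a minimal inference sketch: it assumes the processor is loaded from the base checkpoint (the fine-tuned repo may not ship processor files), that detectron2 and pytesseract are installed (LayoutLMv2 needs them for its visual backbone and OCR), and it uses a hypothetical `invoice.png` and question.

```python
from transformers import LayoutLMv2Processor, LayoutLMv2ForQuestionAnswering
from PIL import Image
import torch

# Processor loaded from the base checkpoint (assumption: the fine-tuned repo
# may not include processor/tokenizer files). Default processor runs OCR via
# pytesseract; the model's visual backbone requires detectron2.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForQuestionAnswering.from_pretrained(
    "yuanzheng625/layoutlmv2-base-uncased_finetuned_docvqa"
)

image = Image.open("invoice.png").convert("RGB")  # hypothetical document image
question = "What is the invoice number?"          # hypothetical question

# The processor OCRs the image and packs question + OCR words + boxes + image.
encoding = processor(image, question, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**encoding)

# Decode the highest-scoring start/end span back to text.
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
answer = processor.tokenizer.decode(encoding.input_ids[0][start : end + 1])
print(answer)
```

Because the head is extractive, answers are spans over the OCR tokens; text that does not appear verbatim in the document cannot be produced.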

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
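
These settings map directly onto the Hugging Face `TrainingArguments` consumed by the `Trainer`. A hedged sketch of that mapping follows; `output_dir` and the evaluation cadence are assumptions (the 50-step cadence is inferred from the evaluation log below), not values stated on the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlmv2-base-uncased_finetuned_docvqa",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",    # assumed; the card logs eval every 50 steps
    eval_steps=50,
)
```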

Training results

Training Loss Epoch Step Validation Loss
5.2725 0.22 50 4.6295
4.532 0.44 100 4.2254
4.0834 0.66 150 3.8568
3.9787 0.88 200 3.6523
3.6224 1.11 250 3.8143
3.2397 1.33 300 3.1685
3.0931 1.55 350 3.0822
2.9593 1.77 400 2.9521
2.5904 1.99 450 2.7995
2.1358 2.21 500 2.4827
2.0137 2.43 550 2.3651
1.951 2.65 600 2.2448
1.684 2.88 650 2.3455
1.6708 3.1 700 2.5003
1.3927 3.32 750 2.3116
1.5237 3.54 800 2.6236
1.2826 3.76 850 2.4859
1.5274 3.98 900 2.1857
1.0727 4.2 950 2.5041
0.9465 4.42 1000 2.7958
1.0889 4.65 1050 2.3797
0.9121 4.87 1100 2.7570
0.8847 5.09 1150 3.0968
0.8864 5.31 1200 2.8488
0.8693 5.53 1250 2.6848
0.5451 5.75 1300 3.3272
0.8121 5.97 1350 3.6097
0.6214 6.19 1400 3.1954
0.5576 6.42 1450 3.2427
0.5576 6.64 1500 3.4471
0.4858 6.86 1550 3.2469
0.5947 7.08 1600 3.2522
0.4889 7.3 1650 3.3459
0.3126 7.52 1700 3.9616
0.3291 7.74 1750 3.9943
0.5337 7.96 1800 3.6498
0.2384 8.19 1850 4.2966
0.3566 8.41 1900 4.1365
0.3539 8.63 1950 4.1291
0.3219 8.85 2000 4.3024
0.2307 9.07 2050 4.1780
0.1922 9.29 2100 4.4078
0.1721 9.51 2150 4.2569
0.1541 9.73 2200 4.2138
0.3044 9.96 2250 4.2793
0.2642 10.18 2300 4.2676
0.1725 10.4 2350 4.0887
0.2223 10.62 2400 3.9813
0.2107 10.84 2450 4.1089
0.3262 11.06 2500 3.9025
0.383 11.28 2550 4.1511
0.1437 11.5 2600 4.1774
0.2707 11.73 2650 4.1749
0.1692 11.95 2700 4.3012
0.1651 12.17 2750 4.3723
0.1388 12.39 2800 4.3273
0.1072 12.61 2850 4.7238
0.1748 12.83 2900 4.3425
0.1053 13.05 2950 4.0696
0.1929 13.27 3000 4.6322
0.028 13.5 3050 4.4843
0.0207 13.72 3100 4.9324
0.0662 13.94 3150 4.9421
0.0644 14.16 3200 4.8991
0.0321 14.38 3250 4.7757
0.0567 14.6 3300 4.9158
0.0552 14.82 3350 5.0722
0.0695 15.04 3400 5.0160
0.081 15.27 3450 5.1969
0.084 15.49 3500 5.2285
0.0372 15.71 3550 5.2621
0.1193 15.93 3600 5.1806
0.0166 16.15 3650 5.2799
0.029 16.37 3700 5.2543
0.048 16.59 3750 5.1176
0.143 16.81 3800 5.1800
0.033 17.04 3850 5.1635
0.0424 17.26 3900 5.1982
0.004 17.48 3950 5.2322
0.0143 17.7 4000 5.2242
0.0261 17.92 4050 5.3110
0.0076 18.14 4100 5.3329
0.0036 18.36 4150 5.3355
0.0182 18.58 4200 5.3223
0.0466 18.81 4250 5.2396
0.0036 19.03 4300 5.2409
0.0278 19.25 4350 5.2128
0.015 19.47 4400 5.2227
0.0394 19.69 4450 5.2018
0.0034 19.91 4500 5.2024

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0
