
layoutlm_qa

This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 4.7055
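
The card itself provides no usage details, but the LayoutLMv2 base and the `_qa` suffix suggest extractive document question answering. The snippet below is a minimal inference sketch, not an official example: it assumes the checkpoint is compatible with the `document-question-answering` pipeline, and the image path and question are illustrative placeholders. Note that LayoutLMv2 additionally requires `pytesseract` (for OCR) and `detectron2` (for its visual backbone).

```python
# Hedged sketch: assumes this checkpoint works with the
# document-question-answering pipeline. Requires pytesseract;
# LayoutLMv2 also needs detectron2 installed.
from transformers import pipeline

qa = pipeline("document-question-answering", model="PrimWong/layoutlm_qa")

# "invoice.png" and the question are illustrative placeholders.
result = qa(image="invoice.png", question="What is the invoice number?")
print(result)  # list of {"score", "answer", "start", "end"} candidates
```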

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
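
The results table below indicates evaluation ran every 50 steps. As a rough reconstruction (the original training script is unpublished; `output_dir` is a placeholder), these settings map onto `TrainingArguments` as follows:

```python
# Sketch of a TrainingArguments setup matching the reported hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm_qa",        # placeholder; actual path unknown
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="steps",     # eval every 50 steps, per the results table
    eval_steps=50,
    logging_steps=50,
)
```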

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 5.2983 | 0.22 | 50 | 4.5220 |
| 4.4844 | 0.44 | 100 | 4.1165 |
| 4.1775 | 0.66 | 150 | 3.8581 |
| 3.8202 | 0.88 | 200 | 3.5512 |
| 3.5174 | 1.11 | 250 | 3.9044 |
| 3.3304 | 1.33 | 300 | 3.3451 |
| 3.1339 | 1.55 | 350 | 3.0255 |
| 2.9657 | 1.77 | 400 | 2.9532 |
| 2.7647 | 1.99 | 450 | 3.0166 |
| 2.3376 | 2.21 | 500 | 2.9174 |
| 1.9903 | 2.43 | 550 | 2.7034 |
| 1.9975 | 2.65 | 600 | 2.4877 |
| 1.8642 | 2.88 | 650 | 2.3439 |
| 1.6613 | 3.1 | 700 | 2.3873 |
| 1.4884 | 3.32 | 750 | 2.1284 |
| 1.3033 | 3.54 | 800 | 2.3192 |
| 1.3821 | 3.76 | 850 | 3.0033 |
| 1.4121 | 3.98 | 900 | 2.3074 |
| 1.0226 | 4.2 | 950 | 2.5772 |
| 0.8721 | 4.42 | 1000 | 2.8909 |
| 1.1364 | 4.65 | 1050 | 2.6966 |
| 1.1504 | 4.87 | 1100 | 2.7247 |
| 0.7333 | 5.09 | 1150 | 3.3075 |
| 0.7097 | 5.31 | 1200 | 3.2459 |
| 0.7138 | 5.53 | 1250 | 3.2652 |
| 0.6852 | 5.75 | 1300 | 3.0537 |
| 0.6396 | 5.97 | 1350 | 3.1964 |
| 0.6756 | 6.19 | 1400 | 3.3380 |
| 0.5771 | 6.42 | 1450 | 3.4396 |
| 0.6753 | 6.64 | 1500 | 3.0820 |
| 0.5361 | 6.86 | 1550 | 3.3736 |
| 0.5659 | 7.08 | 1600 | 3.3211 |
| 0.6637 | 7.3 | 1650 | 3.2642 |
| 0.5321 | 7.52 | 1700 | 3.3275 |
| 0.3525 | 7.74 | 1750 | 3.5490 |
| 0.4964 | 7.96 | 1800 | 3.5147 |
| 0.4882 | 8.19 | 1850 | 3.4210 |
| 0.3879 | 8.41 | 1900 | 3.9024 |
| 0.4991 | 8.63 | 1950 | 3.5269 |
| 0.5084 | 8.85 | 2000 | 3.7400 |
| 0.3502 | 9.07 | 2050 | 3.6098 |
| 0.2492 | 9.29 | 2100 | 3.8580 |
| 0.2889 | 9.51 | 2150 | 3.6365 |
| 0.2672 | 9.73 | 2200 | 3.5260 |
| 0.4289 | 9.96 | 2250 | 3.1862 |
| 0.1803 | 10.18 | 2300 | 3.9092 |
| 0.2014 | 10.4 | 2350 | 3.8147 |
| 0.3197 | 10.62 | 2400 | 3.7593 |
| 0.1503 | 10.84 | 2450 | 3.8731 |
| 0.1766 | 11.06 | 2500 | 3.6034 |
| 0.3074 | 11.28 | 2550 | 3.6639 |
| 0.1637 | 11.5 | 2600 | 3.9461 |
| 0.2674 | 11.73 | 2650 | 3.6418 |
| 0.2074 | 11.95 | 2700 | 3.7350 |
| 0.1034 | 12.17 | 2750 | 4.0971 |
| 0.1438 | 12.39 | 2800 | 3.8840 |
| 0.0739 | 12.61 | 2850 | 3.9797 |
| 0.2329 | 12.83 | 2900 | 4.0602 |
| 0.2348 | 13.05 | 2950 | 3.9343 |
| 0.1119 | 13.27 | 3000 | 4.2030 |
| 0.0955 | 13.5 | 3050 | 4.3291 |
| 0.0787 | 13.72 | 3100 | 4.1507 |
| 0.1446 | 13.94 | 3150 | 4.1370 |
| 0.0202 | 14.16 | 3200 | 4.2964 |
| 0.1201 | 14.38 | 3250 | 4.3851 |
| 0.0783 | 14.6 | 3300 | 4.2924 |
| 0.0536 | 14.82 | 3350 | 4.2803 |
| 0.1042 | 15.04 | 3400 | 4.2722 |
| 0.1374 | 15.27 | 3450 | 4.3609 |
| 0.096 | 15.49 | 3500 | 4.3868 |
| 0.0223 | 15.71 | 3550 | 4.3771 |
| 0.0573 | 15.93 | 3600 | 4.4002 |
| 0.0688 | 16.15 | 3650 | 4.4771 |
| 0.0052 | 16.37 | 3700 | 4.5400 |
| 0.0128 | 16.59 | 3750 | 4.5740 |
| 0.0913 | 16.81 | 3800 | 4.6113 |
| 0.0783 | 17.04 | 3850 | 4.2686 |
| 0.0344 | 17.26 | 3900 | 4.3120 |
| 0.0064 | 17.48 | 3950 | 4.4239 |
| 0.1358 | 17.7 | 4000 | 4.5027 |
| 0.0299 | 17.92 | 4050 | 4.5290 |
| 0.0157 | 18.14 | 4100 | 4.6270 |
| 0.0141 | 18.36 | 4150 | 4.6847 |
| 0.0382 | 18.58 | 4200 | 4.6527 |
| 0.0069 | 18.81 | 4250 | 4.5969 |
| 0.0698 | 19.03 | 4300 | 4.6249 |
| 0.0303 | 19.25 | 4350 | 4.6679 |
| 0.0076 | 19.47 | 4400 | 4.7096 |
| 0.0161 | 19.69 | 4450 | 4.7129 |
| 0.0572 | 19.91 | 4500 | 4.7055 |
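
The best validation loss (2.1284 at step 750, around epoch 3.3) occurs long before training ends, and the final checkpoint is considerably worse (4.7055), so a rerun might retain the best checkpoint instead of the last one. This is a hedged sketch of how that could look with the `Trainer` API, not part of the original run:

```python
# Sketch: keep the checkpoint with the lowest validation loss rather than
# the final one. Illustrative only; not from the original training script.
from transformers import TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="layoutlm_qa",          # placeholder
    evaluation_strategy="steps",
    eval_steps=50,
    save_strategy="steps",             # save cadence must match eval cadence
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

# Pass callbacks=[EarlyStoppingCallback(early_stopping_patience=5)] to the
# Trainer to stop once the validation loss stops improving.
```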

Framework versions

  • Transformers 4.35.2
  • PyTorch 2.1.1+cu121
  • Datasets 2.15.0
  • Tokenizers 0.15.0