---
tags:
  - generated_from_trainer
model-index:
  - name: layoutlm-custom
    results: []
---

# layoutlm-custom

This model was trained from scratch on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.1583
- Noise: {'precision': 0.8818897637795275, 'recall': 0.8736349453978159, 'f1': 0.8777429467084641, 'number': 641}
- Signal: {'precision': 0.861198738170347, 'recall': 0.853125, 'f1': 0.8571428571428572, 'number': 640}
- Overall Precision: 0.8716
- Overall Recall: 0.8634
- Overall F1: 0.8675
- Overall Accuracy: 0.9656
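
The overall numbers are micro-averages over the two entity types: true positives, predicted spans, and gold spans are pooled across Noise and Signal before the ratios are taken. A minimal sketch that recovers them from the per-type dicts above (only standard micro-averaging arithmetic, no assumptions beyond the values on this card):

```python
# Recover the overall (micro-averaged) metrics from the per-type results above.
per_type = {
    "Noise":  {"precision": 0.8818897637795275, "recall": 0.8736349453978159, "number": 641},
    "Signal": {"precision": 0.861198738170347, "recall": 0.853125, "number": 640},
}

tp = pred = gold = 0.0
for m in per_type.values():
    t = m["recall"] * m["number"]  # true positives for this type
    tp += t
    pred += t / m["precision"]     # spans predicted as this type
    gold += m["number"]            # gold spans of this type

precision, recall = tp / pred, tp / gold
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 4), round(recall, 4), round(f1, 4))  # 0.8716 0.8634 0.8675
```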

## Model description

More information needed

## Intended uses & limitations

More information needed
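
Absent documented usage, the model name and the token-level Noise/Signal metrics suggest a LayoutLM token-classification checkpoint. A hypothetical loading sketch, assuming the repo id `uttam333/layoutlm-custom` (inferred from this card, not stated in it) and dummy all-zero bounding boxes standing in for real layout coordinates:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Repo id is an assumption inferred from the card title; adjust to the real path.
repo = "uttam333/layoutlm-custom"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

encoding = tokenizer("example input text", return_tensors="pt")
# LayoutLM also expects one 0-1000-normalized bounding box per token;
# zeros are placeholders for real OCR/layout coordinates.
bbox = torch.zeros(*encoding["input_ids"].shape, 4, dtype=torch.long)

with torch.no_grad():
    logits = model(**encoding, bbox=bbox).logits
labels = [model.config.id2label[int(i)] for i in logits.argmax(-1)[0]]
print(list(zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"][0]), labels)))
```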

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
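
As referenced above, a hedged sketch of these settings as `transformers.TrainingArguments` (the output directory is a placeholder; the Adam betas/epsilon and linear schedule listed are also the Trainer defaults):

```python
from transformers import TrainingArguments

# Sketch only: maps the listed hyperparameters onto TrainingArguments fields.
training_args = TrainingArguments(
    output_dir="layoutlm-custom",  # placeholder, not necessarily the original
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,  # "Native AMP" mixed-precision training
)
```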

### Training results

| Training Loss | Epoch | Step | Validation Loss | Noise | Signal | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3882 | 1.0 | 18 | 0.2617 | {'precision': 0.6654804270462633, 'recall': 0.5834633385335414, 'f1': 0.6217788861180383, 'number': 641} | {'precision': 0.6149732620320856, 'recall': 0.5390625, 'f1': 0.5745212323064114, 'number': 640} | 0.6402 | 0.5613 | 0.5982 | 0.8986 |
| 0.1694 | 2.0 | 36 | 0.1752 | {'precision': 0.7387820512820513, 'recall': 0.719188767550702, 'f1': 0.7288537549407115, 'number': 641} | {'precision': 0.709470304975923, 'recall': 0.690625, 'f1': 0.6999208234362629, 'number': 640} | 0.7241 | 0.7049 | 0.7144 | 0.9296 |
| 0.1039 | 3.0 | 54 | 0.1356 | {'precision': 0.7865168539325843, 'recall': 0.7644305772230889, 'f1': 0.7753164556962026, 'number': 641} | {'precision': 0.77491961414791, 'recall': 0.753125, 'f1': 0.7638668779714739, 'number': 640} | 0.7807 | 0.7588 | 0.7696 | 0.9439 |
| 0.064 | 4.0 | 72 | 0.1342 | {'precision': 0.8220472440944881, 'recall': 0.8143525741029641, 'f1': 0.8181818181818181, 'number': 641} | {'precision': 0.8028391167192429, 'recall': 0.7953125, 'f1': 0.7990580847723705, 'number': 640} | 0.8125 | 0.8048 | 0.8086 | 0.9522 |
| 0.0433 | 5.0 | 90 | 0.1241 | {'precision': 0.8544303797468354, 'recall': 0.8424336973478939, 'f1': 0.8483896307934014, 'number': 641} | {'precision': 0.8320126782884311, 'recall': 0.8203125, 'f1': 0.8261211644374509, 'number': 640} | 0.8432 | 0.8314 | 0.8373 | 0.9601 |
| 0.0293 | 6.0 | 108 | 0.1274 | {'precision': 0.8650793650793651, 'recall': 0.8502340093603744, 'f1': 0.8575924468922109, 'number': 641} | {'precision': 0.8378378378378378, 'recall': 0.8234375, 'f1': 0.830575256107171, 'number': 640} | 0.8515 | 0.8368 | 0.8441 | 0.9617 |
| 0.0199 | 7.0 | 126 | 0.1372 | {'precision': 0.8722397476340694, 'recall': 0.8627145085803433, 'f1': 0.8674509803921568, 'number': 641} | {'precision': 0.8530805687203792, 'recall': 0.84375, 'f1': 0.8483896307934015, 'number': 640} | 0.8627 | 0.8532 | 0.8579 | 0.9640 |
| 0.0139 | 8.0 | 144 | 0.1386 | {'precision': 0.8839427662957074, 'recall': 0.8673946957878315, 'f1': 0.8755905511811023, 'number': 641} | {'precision': 0.856687898089172, 'recall': 0.840625, 'f1': 0.8485804416403785, 'number': 640} | 0.8703 | 0.8540 | 0.8621 | 0.9656 |
| 0.0126 | 9.0 | 162 | 0.1467 | {'precision': 0.8829113924050633, 'recall': 0.8705148205928237, 'f1': 0.8766692851531814, 'number': 641} | {'precision': 0.8541996830427893, 'recall': 0.8421875, 'f1': 0.848151062155783, 'number': 640} | 0.8686 | 0.8564 | 0.8624 | 0.9654 |
| 0.0114 | 10.0 | 180 | 0.1531 | {'precision': 0.8694968553459119, 'recall': 0.8627145085803433, 'f1': 0.8660924040720438, 'number': 641} | {'precision': 0.8472440944881889, 'recall': 0.840625, 'f1': 0.8439215686274509, 'number': 640} | 0.8584 | 0.8517 | 0.8550 | 0.9631 |
| 0.0099 | 11.0 | 198 | 0.1581 | {'precision': 0.8703125, 'recall': 0.8689547581903276, 'f1': 0.8696330991412958, 'number': 641} | {'precision': 0.8450704225352113, 'recall': 0.84375, 'f1': 0.8444096950742768, 'number': 640} | 0.8577 | 0.8564 | 0.8570 | 0.9634 |
| 0.0064 | 12.0 | 216 | 0.1543 | {'precision': 0.8866141732283465, 'recall': 0.8783151326053042, 'f1': 0.8824451410658307, 'number': 641} | {'precision': 0.8643533123028391, 'recall': 0.85625, 'f1': 0.8602825745682888, 'number': 640} | 0.8755 | 0.8673 | 0.8714 | 0.9659 |
| 0.0059 | 13.0 | 234 | 0.1628 | {'precision': 0.8732394366197183, 'recall': 0.8705148205928237, 'f1': 0.871875, 'number': 641} | {'precision': 0.8526645768025078, 'recall': 0.85, 'f1': 0.8513302034428795, 'number': 640} | 0.8630 | 0.8603 | 0.8616 | 0.9645 |
| 0.0056 | 14.0 | 252 | 0.1587 | {'precision': 0.878740157480315, 'recall': 0.8705148205928237, 'f1': 0.8746081504702194, 'number': 641} | {'precision': 0.8580441640378549, 'recall': 0.85, 'f1': 0.8540031397174254, 'number': 640} | 0.8684 | 0.8603 | 0.8643 | 0.9651 |
| 0.005 | 15.0 | 270 | 0.1583 | {'precision': 0.8818897637795275, 'recall': 0.8736349453978159, 'f1': 0.8777429467084641, 'number': 641} | {'precision': 0.861198738170347, 'recall': 0.853125, 'f1': 0.8571428571428572, 'number': 640} | 0.8716 | 0.8634 | 0.8675 | 0.9656 |

### Framework versions

- Transformers 4.36.2
- PyTorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0