# roberta-base_legal_ner_finetuned
This model is a fine-tuned version of FacebookAI/roberta-base on the NER dataset from the Darrow LegalLens shared task. It achieves the following results on the evaluation set:
- Loss: 0.2416
- Law Precision: 0.8319
- Law Recall: 0.8785
- Law F1: 0.8545
- Law Number: 107
- Violated by Precision: 0.8361
- Violated by Recall: 0.7183
- Violated by F1: 0.7727
- Violated by Number: 71
- Violated on Precision: 0.5
- Violated on Recall: 0.5
- Violated on F1: 0.5
- Violated on Number: 64
- Violation Precision: 0.6494
- Violation Recall: 0.7032
- Violation F1: 0.6752
- Violation Number: 374
- Overall Precision: 0.6843
- Overall Recall: 0.7143
- Overall F1: 0.6990
- Overall Accuracy: 0.9553
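Each F1 score above is the harmonic mean of the corresponding precision and recall. As a quick sanity check in plain Python (using the reported overall precision and recall):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Overall Precision = 0.6843, Overall Recall = 0.7143 (from the table above)
overall_f1 = f1_score(0.6843, 0.7143)
print(f"{overall_f1:.4f}")  # agrees with the reported Overall F1 of 0.6990
```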
## Model description
More information needed
## Intended uses & limitations
More information needed
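No usage example is provided. A minimal inference sketch using the transformers token-classification pipeline might look like the following; the example sentence is illustrative, and the entity labels emitted depend on the label set in the model's config, so inspect the output rather than assuming specific tags:

```python
from transformers import pipeline

# Token-classification pipeline; "simple" aggregation merges subword
# pieces back into whole-entity spans.
ner = pipeline(
    "token-classification",
    model="khalidrajan/roberta-base_legal_ner_finetuned",
    aggregation_strategy="simple",
)

text = (
    "The complaint alleges that the company violated "
    "Section 10(b) of the Securities Exchange Act."
)
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```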
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
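These settings map directly onto `transformers.TrainingArguments`; a sketch of the equivalent configuration is shown below (the `output_dir` is a placeholder, and the dataset loading and `Trainer` wiring are omitted). The Adam betas (0.9, 0.999) and epsilon (1e-8) listed above are the transformers defaults, so they need no explicit flags:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base_legal_ner_finetuned",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",  # linear decay after warmup
    warmup_steps=500,
    num_train_epochs=10,
)
```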
### Training results
Training Loss | Epoch | Step | Validation Loss | Law Precision | Law Recall | Law F1 | Law Number | Violated by Precision | Violated by Recall | Violated by F1 | Violated by Number | Violated on Precision | Violated on Recall | Violated on F1 | Violated on Number | Violation Precision | Violation Recall | Violation F1 | Violation Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
No log | 1.0 | 85 | 0.7386 | 0.0 | 0.0 | 0.0 | 107 | 0.0 | 0.0 | 0.0 | 71 | 0.0 | 0.0 | 0.0 | 64 | 0.0 | 0.0 | 0.0 | 374 | 0.0 | 0.0 | 0.0 | 0.7707 |
No log | 2.0 | 170 | 0.3510 | 0.0 | 0.0 | 0.0 | 107 | 0.0 | 0.0 | 0.0 | 71 | 0.0 | 0.0 | 0.0 | 64 | 0.2072 | 0.2781 | 0.2374 | 374 | 0.2072 | 0.1688 | 0.1860 | 0.8901 |
No log | 3.0 | 255 | 0.2471 | 0.4265 | 0.2710 | 0.3314 | 107 | 0.0 | 0.0 | 0.0 | 71 | 0.3810 | 0.125 | 0.1882 | 64 | 0.3965 | 0.4813 | 0.4348 | 374 | 0.3996 | 0.3523 | 0.3745 | 0.9199 |
No log | 4.0 | 340 | 0.1996 | 0.7596 | 0.7383 | 0.7488 | 107 | 0.5128 | 0.5634 | 0.5369 | 71 | 0.3827 | 0.4844 | 0.4276 | 64 | 0.5101 | 0.6096 | 0.5554 | 374 | 0.5324 | 0.6136 | 0.5701 | 0.9385 |
No log | 5.0 | 425 | 0.1984 | 0.7946 | 0.8318 | 0.8128 | 107 | 0.64 | 0.6761 | 0.6575 | 71 | 0.5091 | 0.4375 | 0.4706 | 64 | 0.5102 | 0.6684 | 0.5787 | 374 | 0.5669 | 0.6737 | 0.6157 | 0.9449 |
0.5018 | 6.0 | 510 | 0.2447 | 0.7456 | 0.7944 | 0.7692 | 107 | 0.75 | 0.6761 | 0.7111 | 71 | 0.4068 | 0.375 | 0.3902 | 64 | 0.6110 | 0.6845 | 0.6456 | 374 | 0.6296 | 0.6705 | 0.6494 | 0.9465 |
0.5018 | 7.0 | 595 | 0.2264 | 0.8125 | 0.8505 | 0.8311 | 107 | 0.7736 | 0.5775 | 0.6613 | 71 | 0.4754 | 0.4531 | 0.4640 | 64 | 0.6276 | 0.7166 | 0.6692 | 374 | 0.6570 | 0.6964 | 0.6761 | 0.9511 |
0.5018 | 8.0 | 680 | 0.2243 | 0.8598 | 0.8598 | 0.8598 | 107 | 0.7812 | 0.7042 | 0.7407 | 71 | 0.4912 | 0.4375 | 0.4628 | 64 | 0.6209 | 0.7139 | 0.6642 | 374 | 0.6641 | 0.7094 | 0.6860 | 0.9541 |
0.5018 | 9.0 | 765 | 0.2327 | 0.7934 | 0.8972 | 0.8421 | 107 | 0.7808 | 0.8028 | 0.7917 | 71 | 0.4231 | 0.5156 | 0.4648 | 64 | 0.6037 | 0.7005 | 0.6485 | 374 | 0.6346 | 0.7273 | 0.6778 | 0.9547 |
0.5018 | 10.0 | 850 | 0.2416 | 0.8319 | 0.8785 | 0.8545 | 107 | 0.8361 | 0.7183 | 0.7727 | 71 | 0.5 | 0.5 | 0.5 | 64 | 0.6494 | 0.7032 | 0.6752 | 374 | 0.6843 | 0.7143 | 0.6990 | 0.9553 |
### Framework versions
- Transformers 4.44.0
- PyTorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1