# legal_long_legal_ver2_test_sm
This model is a fine-tuned version of [kiddothe2b/legal-longformer-base](https://huggingface.co/kiddothe2b/legal-longformer-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.7379
- Accuracy: 0.5519
- Precision: 0.5139
- Recall: 0.5663
- F1: 0.5388
- D-index: 1.2690
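
Since the card does not document usage, here is a minimal inference sketch. The Hub repo id below is a hypothetical placeholder for wherever this checkpoint is published; substitute the actual location:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id; replace with the actual Hub location of this checkpoint.
model_id = "your-username/legal_long_legal_ver2_test_sm"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The parties hereby agree to the following terms..."
inputs = tokenizer(text, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```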
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
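
The reported Adam settings (betas=(0.9, 0.999), epsilon=1e-08) correspond to the default optimizer of the Transformers `Trainer`. Below is a minimal sketch that mirrors the documented hyperparameters; the placeholder data, label count, and output path are assumptions, since the actual dataset is not documented:

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model_name = "kiddothe2b/legal-longformer-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2  # num_labels=2 is an assumption; the card does not say
)

# Placeholder data; the actual training/evaluation data is not documented.
raw = Dataset.from_dict(
    {"text": ["Example legal clause ...", "Another clause ..."], "label": [0, 1]}
)

def tokenize(batch):
    # legal-longformer-base supports sequences up to 4096 tokens
    return tokenizer(batch["text"], truncation=True, max_length=4096)

dataset = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="legal_long_legal_ver2_test_sm",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,  # 8 x 8 = effective train batch size of 64
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=20,
    seed=42,
    evaluation_strategy="epoch",  # assumption: the results below are per-epoch
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    eval_dataset=dataset,  # placeholder; use a held-out split in practice
)
trainer.train()
```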
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|---|---|---|---|---|---|---|---|---|
| No log | 0.98 | 26 | 0.6957 | 0.4599 | 0.4400 | 0.6173 | 0.5138 | 1.1026 |
| No log | 2.0 | 53 | 0.6960 | 0.4528 | 0.4262 | 0.5306 | 0.4727 | 1.0831 |
| No log | 2.98 | 79 | 0.6978 | 0.4481 | 0.4312 | 0.6071 | 0.5042 | 1.0794 |
| No log | 4.0 | 106 | 0.7232 | 0.4788 | 0.4622 | 0.7806 | 0.5806 | 1.1493 |
| No log | 4.98 | 132 | 0.7340 | 0.5189 | 0.4828 | 0.5714 | 0.5234 | 1.2095 |
| No log | 6.0 | 159 | 0.8623 | 0.5425 | 0.5049 | 0.5255 | 0.5150 | 1.2493 |
| No log | 6.98 | 185 | 1.2325 | 0.5448 | 0.5116 | 0.3367 | 0.4062 | 1.2412 |
| No log | 8.0 | 212 | 1.4773 | 0.5165 | 0.4717 | 0.3827 | 0.4225 | 1.1925 |
| No log | 8.98 | 238 | 1.6199 | 0.5330 | 0.4941 | 0.4286 | 0.4590 | 1.2258 |
| No log | 10.0 | 265 | 1.8976 | 0.5259 | 0.4900 | 0.6276 | 0.5503 | 1.2261 |
| No log | 10.98 | 291 | 2.1687 | 0.4953 | 0.4622 | 0.5612 | 0.5069 | 1.1653 |
| No log | 12.0 | 318 | 2.3087 | 0.4882 | 0.4578 | 0.5816 | 0.5124 | 1.1535 |
| No log | 12.98 | 344 | 2.5168 | 0.4953 | 0.4667 | 0.6429 | 0.5408 | 1.1708 |
| No log | 14.0 | 371 | 2.5389 | 0.5142 | 0.4788 | 0.5765 | 0.5231 | 1.2012 |
| No log | 14.98 | 397 | 2.4224 | 0.5330 | 0.4957 | 0.5918 | 0.5395 | 1.2366 |
| No log | 16.0 | 424 | 2.6391 | 0.5212 | 0.4852 | 0.5867 | 0.5312 | 1.2148 |
| No log | 16.98 | 450 | 2.7235 | 0.5307 | 0.4932 | 0.5510 | 0.5205 | 1.2297 |
| No log | 18.0 | 477 | 2.7272 | 0.5425 | 0.5045 | 0.5765 | 0.5381 | 1.2527 |
| 0.2333 | 18.98 | 503 | 2.7222 | 0.5495 | 0.5117 | 0.5561 | 0.5330 | 1.2641 |
| 0.2333 | 19.62 | 520 | 2.7379 | 0.5519 | 0.5139 | 0.5663 | 0.5388 | 1.2690 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
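
To reproduce this environment, pinning the versions above (e.g. `pip install transformers==4.27.4 datasets==2.11.0 tokenizers==0.13.2`, plus PyTorch 1.13.1 with the matching CUDA 11.6 build) should match the reported setup.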