# rubert-electra-srl
This model is a fine-tuned version of [ai-forever/ruElectra-medium](https://huggingface.co/ai-forever/ruElectra-medium) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0448
- Addressee Precision: 0.9583
- Addressee Recall: 1.0
- Addressee F1: 0.9787
- Addressee Number: 23
- Benefactive Precision: 0.0
- Benefactive Recall: 0.0
- Benefactive F1: 0.0
- Benefactive Number: 2
- Causator Precision: 0.9773
- Causator Recall: 0.9773
- Causator F1: 0.9773
- Causator Number: 44
- Cause Precision: 0.9259
- Cause Recall: 0.7143
- Cause F1: 0.8065
- Cause Number: 35
- Contrsubject Precision: 1.0
- Contrsubject Recall: 0.9429
- Contrsubject F1: 0.9706
- Contrsubject Number: 35
- Deliberative Precision: 0.9231
- Deliberative Recall: 1.0
- Deliberative F1: 0.9600
- Deliberative Number: 24
- Destinative Precision: 1.0
- Destinative Recall: 1.0
- Destinative F1: 1.0
- Destinative Number: 7
- Directivefinal Precision: 1.0
- Directivefinal Recall: 1.0
- Directivefinal F1: 1.0
- Directivefinal Number: 1
- Experiencer Precision: 0.9030
- Experiencer Recall: 0.9441
- Experiencer F1: 0.9231
- Experiencer Number: 286
- Instrument Precision: 0.9
- Instrument Recall: 0.9
- Instrument F1: 0.9
- Instrument Number: 10
- Object Precision: 0.9484
- Object Recall: 0.9519
- Object F1: 0.9502
- Object Number: 541
- Overall Precision: 0.9369
- Overall Recall: 0.9425
- Overall F1: 0.9397
- Overall Accuracy: 0.9883
- Limitative Precision: 0.0
- Limitative Recall: 0.0
- Limitative F1: 0.0
- Limitative Number: 0
- Directiveinitial Precision: 0.0
- Directiveinitial Recall: 0.0
- Directiveinitial F1: 0.0
- Directiveinitial Number: 0
- Mediative Precision: 0.0
- Mediative Recall: 0.0
- Mediative F1: 0.0
- Mediative Number: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
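
The model card does not include a usage example, so here is a minimal inference sketch, assuming the checkpoint is hosted as `dl-ru/rubert-electra-srl` with a standard token-classification head emitting the role labels listed above; the example sentence is illustrative.

```python
from transformers import pipeline

# Load the fine-tuned SRL tagger; aggregation_strategy="simple" merges
# word pieces back into whole-word spans with their role labels.
srl = pipeline(
    "token-classification",
    model="dl-ru/rubert-electra-srl",
    aggregation_strategy="simple",
)

# Illustrative Russian input ("Mom gave her son a book.").
for span in srl("Мама подарила сыну книгу."):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```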
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00016666401556632117
- train_batch_size: 1
- eval_batch_size: 1
- seed: 708526
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.21
- num_epochs: 3
- mixed_precision_training: Native AMP
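
For reference, a sketch of how these values map onto `transformers.TrainingArguments` (an assumption; the card does not include the training script). The Adam betas and epsilon listed above are the `TrainingArguments` defaults, so they need no explicit setting.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="rubert-electra-srl",   # hypothetical output path
    learning_rate=0.00016666401556632117,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=708526,
    gradient_accumulation_steps=8,     # effective train batch size: 8
    lr_scheduler_type="linear",
    warmup_ratio=0.21,
    num_train_epochs=3,
    fp16=True,                         # "Native AMP" mixed precision
)
```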
### Training results
Training Loss | Epoch | Step | Validation Loss | Addressee Precision | Addressee Recall | Addressee F1 | Addressee Number | Benefactive Precision | Benefactive Recall | Benefactive F1 | Benefactive Number | Causator Precision | Causator Recall | Causator F1 | Causator Number | Cause Precision | Cause Recall | Cause F1 | Cause Number | Contrsubject Precision | Contrsubject Recall | Contrsubject F1 | Contrsubject Number | Deliberative Precision | Deliberative Recall | Deliberative F1 | Deliberative Number | Destinative Precision | Destinative Recall | Destinative F1 | Destinative Number | Directivefinal Precision | Directivefinal Recall | Directivefinal F1 | Directivefinal Number | Experiencer Precision | Experiencer Recall | Experiencer F1 | Experiencer Number | Instrument Precision | Instrument Recall | Instrument F1 | Instrument Number | Object Precision | Object Recall | Object F1 | Object Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Limitative F1 | Limitative Number | Limitative Precision | Limitative Recall | Directiveinitial F1 | Directiveinitial Number | Directiveinitial Precision | Directiveinitial Recall | Mediative F1 | Mediative Number | Mediative Precision | Mediative Recall |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.1548 | 1.0 | 1471 | 0.1755 | 0.6667 | 0.5217 | 0.5854 | 23 | 0.0 | 0.0 | 0.0 | 2 | 0.5714 | 0.8182 | 0.6729 | 44 | 0.5217 | 0.3429 | 0.4138 | 35 | 0.4103 | 0.4571 | 0.4324 | 35 | 0.0 | 0.0 | 0.0 | 24 | 0.0 | 0.0 | 0.0 | 7 | 0.0 | 0.0 | 0.0 | 1 | 0.8645 | 0.8252 | 0.8444 | 286 | 0.0 | 0.0 | 0.0 | 10 | 0.7711 | 0.8965 | 0.8291 | 541 | 0.7627 | 0.7907 | 0.7764 | 0.9582 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.1209 | 2.0 | 2942 | 0.0797 | 0.9130 | 0.9130 | 0.9130 | 23 | 0.0 | 0.0 | 0.0 | 2 | 0.9348 | 0.9773 | 0.9556 | 44 | 0.8462 | 0.6286 | 0.7213 | 35 | 0.8889 | 0.9143 | 0.9014 | 35 | 0.75 | 0.875 | 0.8077 | 24 | 1.0 | 0.4286 | 0.6 | 7 | 0.0 | 0.0 | 0.0 | 1 | 0.8993 | 0.8741 | 0.8865 | 286 | 0.875 | 0.7 | 0.7778 | 10 | 0.9336 | 0.9094 | 0.9213 | 541 | 0.9138 | 0.8839 | 0.8986 | 0.9808 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0559 | 3.0 | 4413 | 0.0448 | 0.9583 | 1.0 | 0.9787 | 23 | 0.0 | 0.0 | 0.0 | 2 | 0.9773 | 0.9773 | 0.9773 | 44 | 0.9259 | 0.7143 | 0.8065 | 35 | 1.0 | 0.9429 | 0.9706 | 35 | 0.9231 | 1.0 | 0.9600 | 24 | 1.0 | 1.0 | 1.0 | 7 | 1.0 | 1.0 | 1.0 | 1 | 0.9030 | 0.9441 | 0.9231 | 286 | 0.9 | 0.9 | 0.9 | 10 | 0.9484 | 0.9519 | 0.9502 | 541 | 0.9369 | 0.9425 | 0.9397 | 0.9883 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
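
The per-role precision/recall/F1/number layout matches span-level evaluation in the style of `seqeval` (an assumption; the card does not name the metric implementation). A minimal sketch with illustrative BIO-tagged sequences:

```python
from seqeval.metrics import classification_report

# Hypothetical reference and predicted label sequences for one sentence.
references = [["B-Experiencer", "O", "B-Object", "I-Object", "O"]]
predictions = [["B-Experiencer", "O", "B-Object", "O", "O"]]

# Prints per-role precision, recall, F1, and support ("number"),
# mirroring the layout of the evaluation table above.
print(classification_report(references, predictions))
```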
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1