scenario-TCR-XLMV-XCOPA-6_data-xcopa_all

This model is a fine-tuned version of facebook/xlm-v-base on the XCOPA dataset (xcopa_all). It achieves the following results on the evaluation set (a hedged inference sketch follows the results):

  • Loss: 0.6931
  • Accuracy: 0.5083
  • F1: 0.4627
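
The checkpoint can be loaded with the standard Transformers auto classes. The sketch below is a minimal example, assuming the checkpoint carries a multiple-choice head (XCOPA is a two-choice causal-reasoning task); if it was exported with a different head, swap in the matching auto class. The premise and choices are illustrative, not taken from XCOPA. Note that the reported loss (0.6931, roughly ln 2) and ~0.51 accuracy are near chance level for a two-choice task.

```python
# Minimal inference sketch, assuming the checkpoint exposes a multiple-choice head.
# The premise/choices below are illustrative, not drawn from the XCOPA dataset.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "haryoaw/scenario-TCR-XLMV-XCOPA-6_data-xcopa_all"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

premise = "The man turned on the faucet."
choices = ["The toilet filled with water.", "Water flowed from the spout."]

# Pair the premise with each candidate continuation -> tensors of shape (num_choices, seq_len).
enc = tokenizer([premise] * len(choices), choices, return_tensors="pt", padding=True)
batch = {k: v.unsqueeze(0) for k, v in enc.items()}  # add batch dim -> (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**batch).logits  # (1, num_choices)
print("Predicted choice:", int(logits.argmax(dim=-1)))
```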

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 341241
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 500
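
A minimal sketch of how these values map onto a Transformers TrainingArguments object is shown below; dataset loading, preprocessing, model instantiation, and the metric function are omitted, and the per-device interpretation of the listed batch sizes is an assumption.

```python
# Hedged sketch of TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scenario-TCR-XLMV-XCOPA-6_data-xcopa_all",
    learning_rate=5e-5,
    per_device_train_batch_size=32,  # listed train_batch_size; assumed per device
    per_device_eval_batch_size=32,   # listed eval_batch_size; assumed per device
    seed=341241,
    lr_scheduler_type="linear",
    num_train_epochs=500,
    adam_beta1=0.9,                  # listed Adam betas/epsilon (also the library defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",     # the results table reports metrics every 5 steps
    eval_steps=5,
)
```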

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 0.38  | 5    | 0.6931          | 0.5133   | 0.4904 |
| No log        | 0.77  | 10   | 0.6931          | 0.5542   | 0.5344 |
| No log        | 1.15  | 15   | 0.6931          | 0.5358   | 0.5093 |
| No log        | 1.54  | 20   | 0.6931          | 0.5508   | 0.5373 |
| No log        | 1.92  | 25   | 0.6931          | 0.5033   | 0.4716 |
| No log        | 2.31  | 30   | 0.6931          | 0.53     | 0.5261 |
| No log        | 2.69  | 35   | 0.6931          | 0.5383   | 0.5257 |
| No log        | 3.08  | 40   | 0.6931          | 0.5308   | 0.5159 |
| No log        | 3.46  | 45   | 0.6931          | 0.4933   | 0.4856 |
| No log        | 3.85  | 50   | 0.6931          | 0.5308   | 0.5233 |
| No log        | 4.23  | 55   | 0.6931          | 0.5517   | 0.5410 |
| No log        | 4.62  | 60   | 0.6931          | 0.5625   | 0.5570 |
| No log        | 5.0   | 65   | 0.6931          | 0.5433   | 0.5308 |
| No log        | 5.38  | 70   | 0.6931          | 0.53     | 0.5236 |
| No log        | 5.77  | 75   | 0.6931          | 0.5267   | 0.5103 |
| No log        | 6.15  | 80   | 0.6931          | 0.5308   | 0.4987 |
| No log        | 6.54  | 85   | 0.6931          | 0.5017   | 0.4889 |
| No log        | 6.92  | 90   | 0.6931          | 0.5267   | 0.5009 |
| No log        | 7.31  | 95   | 0.6931          | 0.5367   | 0.5062 |
| No log        | 7.69  | 100  | 0.6931          | 0.5133   | 0.4859 |
| No log        | 8.08  | 105  | 0.6931          | 0.4817   | 0.4610 |
| No log        | 8.46  | 110  | 0.6932          | 0.5      | 0.4854 |
| No log        | 8.85  | 115  | 0.6931          | 0.4992   | 0.4788 |
| No log        | 9.23  | 120  | 0.6931          | 0.5      | 0.4881 |
| No log        | 9.62  | 125  | 0.6931          | 0.5042   | 0.4884 |
| No log        | 10.0  | 130  | 0.6931          | 0.5025   | 0.4936 |
| No log        | 10.38 | 135  | 0.6931          | 0.4908   | 0.4764 |
| No log        | 10.77 | 140  | 0.6931          | 0.4942   | 0.4816 |
| No log        | 11.15 | 145  | 0.6932          | 0.5033   | 0.4532 |
| No log        | 11.54 | 150  | 0.6931          | 0.525    | 0.4732 |
| No log        | 11.92 | 155  | 0.6931          | 0.53     | 0.4946 |
| No log        | 12.31 | 160  | 0.6931          | 0.5192   | 0.4489 |
| No log        | 12.69 | 165  | 0.6931          | 0.5225   | 0.4729 |
| No log        | 13.08 | 170  | 0.6931          | 0.5117   | 0.4624 |
| No log        | 13.46 | 175  | 0.6931          | 0.5383   | 0.4851 |
| No log        | 13.85 | 180  | 0.6931          | 0.5208   | 0.4691 |
| No log        | 14.23 | 185  | 0.6931          | 0.5258   | 0.4717 |
| No log        | 14.62 | 190  | 0.6931          | 0.5417   | 0.4860 |
| No log        | 15.0  | 195  | 0.6931          | 0.5233   | 0.4838 |
| No log        | 15.38 | 200  | 0.6931          | 0.5258   | 0.4851 |
| No log        | 15.77 | 205  | 0.6931          | 0.5392   | 0.4931 |
| No log        | 16.15 | 210  | 0.6931          | 0.5083   | 0.4627 |

Framework versions

  • Transformers 4.33.3
  • Pytorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.13.3
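
The snippet below is a small, optional check that the local environment matches the versions listed above; the expected version strings are the only values carried over from this card.

```python
# Optional environment check against the framework versions listed in this card.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.33.3",
    "torch": "2.1.1+cu121",
    "datasets": "2.14.5",
    "tokenizers": "0.13.3",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, version in installed.items():
    flag = "OK" if version == expected[name] else "differs"
    print(f"{name}: installed {version}, card lists {expected[name]} ({flag})")
```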