
PhoBertLexical-finetuned_70KURL

This model is a fine-tuned version of vinai/phobert-base-v2 on an unknown dataset. It achieves the following results on the evaluation set (see the usage sketch after this list):

  • Loss: 0.1508
  • Accuracy: 0.9565
  • F1: 0.9565
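
The card does not state the task, but the reported accuracy/F1 and the model name suggest URL classification with a sequence-classification head on PhoBERT. Below is a minimal inference sketch under that assumption; the example input is hypothetical, and the label mapping depends on what the fine-tuning run stored in the config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gechim/PhoBertLexical-finetuned_70KURL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

url = "http://example.com/account/verify"  # hypothetical input; the training data is not documented
inputs = tokenizer(url, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# id2label may be a generic LABEL_0/LABEL_1 mapping if none was set during fine-tuning
print(pred, model.config.id2label.get(pred, str(pred)))
```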

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2150
  • num_epochs: 20
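
For reference, these settings map onto Transformers' TrainingArguments roughly as follows. This is a reconstruction, not the author's training script; the output directory and the evaluation cadence (every 200 steps, per the results table below) are inferred.

```python
from transformers import TrainingArguments

# Sketch reconstructing the listed hyperparameters. The Adam betas/epsilon
# above match the Transformers defaults, so they need no explicit arguments.
training_args = TrainingArguments(
    output_dir="PhoBertLexical-finetuned_70KURL",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=2,   # 64 x 2 = total train batch size 128
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_steps=2150,
    seed=42,
    eval_strategy="steps",           # the table below evaluates every 200 steps
    eval_steps=200,
)
```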

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| No log        | 0.4651 | 200  | 0.4128          | 0.8255   | 0.7984 |
| No log        | 0.9302 | 400  | 0.2414          | 0.9038   | 0.9057 |
| No log        | 1.3953 | 600  | 0.1894          | 0.9289   | 0.9294 |
| No log        | 1.8605 | 800  | 0.1859          | 0.9293   | 0.9307 |
| 0.3236        | 2.3256 | 1000 | 0.1580          | 0.9423   | 0.9430 |
| 0.3236        | 2.7907 | 1200 | 0.1532          | 0.9405   | 0.9413 |
| 0.3236        | 3.2558 | 1400 | 0.1412          | 0.9478   | 0.9483 |
| 0.3236        | 3.7209 | 1600 | 0.1345          | 0.9514   | 0.9517 |
| 0.1539        | 4.1860 | 1800 | 0.1399          | 0.9498   | 0.9505 |
| 0.1539        | 4.6512 | 2000 | 0.1382          | 0.9504   | 0.9511 |
| 0.1539        | 5.1163 | 2200 | 0.1457          | 0.9492   | 0.9499 |
| 0.1539        | 5.5814 | 2400 | 0.1343          | 0.9543   | 0.9543 |
| 0.1146        | 6.0465 | 2600 | 0.1382          | 0.9573   | 0.9576 |
| 0.1146        | 6.5116 | 2800 | 0.1483          | 0.9518   | 0.9524 |
| 0.1146        | 6.9767 | 3000 | 0.1392          | 0.9501   | 0.9508 |
| 0.1146        | 7.4419 | 3200 | 0.1388          | 0.9500   | 0.9507 |
| 0.1146        | 7.9070 | 3400 | 0.1508          | 0.9565   | 0.9565 |
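
The Accuracy and F1 columns are presumably produced by a compute_metrics hook passed to the Trainer. A minimal sketch using the evaluate library is below; the weighted F1 average is an assumption, since the card does not say which averaging mode was used.

```python
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    """Turn raw eval logits into the accuracy/F1 columns reported above."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=preds, references=labels)["accuracy"],
        # averaging mode is an assumption; the card does not specify it
        "f1": f1_metric.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```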

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.1.2
  • Datasets 2.19.1
  • Tokenizers 0.19.1