RobertaLr3.024030044726418e-06Wd0.004218621374361941E20

This model is a fine-tuned version of deepset/roberta-base-squad2 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2435

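For reference, the checkpoint can be loaded with the standard Transformers question-answering pipeline. The snippet below is a minimal usage sketch; the question and context strings are illustrative placeholders, not taken from this card.

```python
# Minimal usage sketch: load this checkpoint with the standard
# question-answering pipeline from the Transformers library.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="hsmith-morganhill/RobertaLr3.024030044726418e-06Wd0.004218621374361941E20",
)

# Placeholder question/context, for illustration only.
result = qa(
    question="Who wrote the report?",
    context="The report was written by the finance team in 2023.",
)
print(result["answer"], result["score"])
```
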
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3.024030044726418e-06
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20

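These settings map directly onto the Hugging Face Trainer API. The sketch below shows one way to reproduce them with TrainingArguments; it is an approximation, not the original training script. The dataset and preprocessing are not documented here, so the Trainer call is left commented out with placeholder dataset names, and the weight-decay value is inferred from the model name rather than from the list above.

```python
# Approximate reconstruction of the reported hyperparameters using the
# Hugging Face Trainer; dataset loading/preprocessing are not documented
# in this card, so `train_ds` and `eval_ds` are placeholders.
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForQuestionAnswering.from_pretrained(base_model)

args = TrainingArguments(
    output_dir="roberta-squad2-finetuned",
    learning_rate=3.024030044726418e-06,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    weight_decay=0.004218621374361941,  # inferred from the model name, not listed above
    evaluation_strategy="epoch",        # assumption: matches the per-epoch table below
)

# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```
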
Training results

Training Loss   Epoch   Step    Validation Loss
0.0186          1.0     510     0.6067
1.0125          2.0     1020    0.4395
0.1619          3.0     1530    0.3656
0.2597          4.0     2040    0.3204
0.006           5.0     2550    0.2991
0.0046          6.0     3060    0.2798
0.0456          7.0     3570    0.2706
0.0833          8.0     4080    0.2442
0.0061          9.0     4590    0.2366
0.0636          10.0    5100    0.2461
0.4491          11.0    5610    0.2436
0.0047          12.0    6120    0.2466
0.3552          13.0    6630    0.2422
0.0023          14.0    7140    0.2441
0.0001          15.0    7650    0.2419
0.0007          16.0    8160    0.2595
0.001           17.0    8670    0.2407
0.0383          18.0    9180    0.2411
0.002           19.0    9690    0.2433
0.0114          20.0    10200   0.2435

Framework versions

  • Transformers 4.41.2
  • PyTorch 2.5.1
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Model size

  • 124M parameters (safetensors, F32 tensors)
