m2m100-lg-to-en-v2

This model is a fine-tuned version of facebook/m2m100_418M on an unspecified dataset (the model name suggests a Luganda ("lg") to English translation task). It achieves the following results on the evaluation set:

  • Loss: 11.5952
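
Below is a minimal usage sketch, not part of the original card: it assumes the standard Transformers M2M100 API, that this checkpoint is hosted as MubarakB/m2m100-lg-to-en-v2, and that the source language is Luganda ("lg"), as the model name suggests. The input sentence is a placeholder.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "MubarakB/m2m100-lg-to-en-v2"  # repo ID taken from this card's page
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "lg"  # assumption: Luganda source, inferred from the model name
inputs = tokenizer("Oli otya?", return_tensors="pt")  # placeholder Luganda input

# M2M100 selects the target language by forcing its BOS token.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("en"),
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```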

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-08
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
  • mixed_precision_training: Native AMP
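
As a sketch, these settings map onto the standard Transformers Seq2SeqTrainingArguments roughly as follows; the output directory and per-epoch evaluation strategy are assumptions (the latter matching the per-epoch results table below), and the listed Adam betas/epsilon are the library defaults.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="m2m100-lg-to-en-v2",  # placeholder
    learning_rate=1e-8,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="epoch",      # assumption: matches the per-epoch table below
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults
)
```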

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 12.3804       | 1.0   | 119  | 12.6272         |
| 12.389        | 2.0   | 238  | 12.5538         |
| 12.3486       | 3.0   | 357  | 12.4810         |
| 12.3239       | 4.0   | 476  | 12.4177         |
| 12.2828       | 5.0   | 595  | 12.1366         |
| 12.0986       | 6.0   | 714  | 12.0700         |
| 12.0733       | 7.0   | 833  | 12.0214         |
| 12.0746       | 8.0   | 952  | 11.9687         |
| 12.0377       | 9.0   | 1071 | 11.9397         |
| 12.0163       | 10.0  | 1190 | 11.9031         |
| 11.95         | 11.0  | 1309 | 11.8120         |
| 11.9303       | 12.0  | 1428 | 11.7297         |
| 11.8673       | 13.0  | 1547 | 11.6198         |
| 11.8117       | 14.0  | 1666 | 11.5958         |
| 11.7915       | 15.0  | 1785 | 11.5952         |

Framework versions

  • Transformers 4.41.1
  • Pytorch 2.1.2
  • Datasets 2.19.1
  • Tokenizers 0.19.1
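
To reproduce this environment, the listed versions can be pinned, for example:

```
pip install transformers==4.41.1 torch==2.1.2 datasets==2.19.1 tokenizers==0.19.1
```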