llama-pro-8b-english-to-hinglish-translation
This model is a fine-tuned version of TencentARC/LLaMA-Pro-8B for English-to-Hinglish translation; the training dataset is not specified. It achieves the following results on the evaluation set:
- Loss: 0.7581
- Rouge scores (rounded to four decimals): rouge1 0.9204, rouge2 0.8261, rougeL 0.8630, rougeLsum 0.9204
- Bleu scores (rounded to four decimals): 0.0800, 0.0784, 0.0766, 0.0748
- Gen Len: 2048.0
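The evaluation script is not published, but the ROUGE and BLEU numbers above were presumably computed along these lines. A minimal sketch using the Hugging Face `evaluate` library; the `predictions` and `references` values are hypothetical placeholders, and the reading of the four BLEU values as n-gram precisions is an assumption:

```python
import evaluate

# Hypothetical placeholders; the actual evaluation set is not published.
predictions = ["Aaj aap kaise ho?"]
references = ["Aaj aap kaise ho?"]

rouge = evaluate.load("rouge")
rouge_scores = rouge.compute(predictions=predictions, references=references)
# -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}

bleu = evaluate.load("bleu")
bleu_scores = bleu.compute(
    predictions=predictions,
    references=[[r] for r in references],  # one reference list per prediction
)
# bleu_scores["precisions"] is a list of four n-gram precisions (n = 1..4),
# which is plausibly what the four "Bleu scores" values above report.
print(rouge_scores, bleu_scores["precisions"])
```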
Model description
More information needed
Intended uses & limitations
More information needed
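Pending an official usage example, here is a minimal inference sketch. It assumes this repository holds a PEFT (LoRA) adapter for TencentARC/LLaMA-Pro-8B, consistent with the framework versions listed below; the prompt template is a hypothetical guess, since the training format is not documented.

```python
# Minimal inference sketch, assuming this repo is a PEFT adapter on
# TencentARC/LLaMA-Pro-8B. Install roughly matching versions first:
#   pip install transformers peft torch accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TencentARC/LLaMA-Pro-8B"
adapter_id = "DrishtiSharma/llama-pro-8b-english-to-hinglish-translation"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Hypothetical prompt template; the actual training format is undocumented.
prompt = "Translate English to Hinglish: How are you?\nHinglish:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```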
Training and evaluation data
More information needed
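Although the dataset is unspecified, a training record for this task would plausibly pair an English sentence with its Hinglish (romanized, Hindi-English code-mixed) rendering; the example below is purely hypothetical and only illustrates the task.

```python
# Purely hypothetical training record; the actual dataset and its
# schema are not published.
example = {
    "english": "How are you doing today?",
    "hinglish": "Aaj aap kaise ho?",
}
```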
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a configuration sketch mirroring them follows the list):
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
- mixed_precision_training: Native AMP
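A minimal sketch of a `TrainingArguments` configuration mirroring the hyperparameters above; the LoRA settings are assumptions, since the actual training script and adapter config are not published.

```python
# Sketch of TrainingArguments mirroring the reported hyperparameters.
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="llama-pro-8b-english-to-hinglish-translation",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 4
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=2,
    fp16=True,                       # "Native AMP" mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# Hypothetical LoRA adapter config; the actual values are unknown.
lora_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)
```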
Training results
| Training Loss | Epoch | Step | Validation Loss | rouge1 | rouge2 | rougeL | rougeLsum | Bleu (rounded) | Gen Len |
|---|---|---|---|---|---|---|---|---|---|
| 0.8367 | 1.0 | 500 | 0.7705 | 0.9218 | 0.8260 | 0.8612 | 0.9217 | 0.0800, 0.0783, 0.0765, 0.0747 | 2048.0 |
| 0.5810 | 2.0 | 1000 | 0.7581 | 0.9204 | 0.8261 | 0.8630 | 0.9204 | 0.0800, 0.0784, 0.0766, 0.0748 | 2048.0 |
Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.16.2.dev0
- Tokenizers 0.15.1