 
# Workshop-on-Asian-Translation-2025-FinetunedModels
For the T2T task of the Workshop on Asian Translation (2025), these are fine-tuned models using NLLB-200 (600M to 3.3B parameters) as the base model, trained on the WAT data plus 100k Samanantar pairs.
All scores are for WAT2025. The name of the first 3.3B model was lost in extraction and is left blank.

| Model | Size | Challenge set (BLEU / RIBES) | Evaluation set (BLEU / RIBES) |
|---|---|---|---|
| (name missing) | 3.3B | 56.90 / 0.870254 | 45.10 / 0.831282 |
| OdiaGenAI/facebook-nllb-200-3.3B-finetuned-bengali | 3.3B | 50.10 / 0.830882 | 49.50 / 0.804158 |
| OdiaGenAI/facebook-nllb-200-3.3B-finetuned-malayalam | 3.3B | 44.20 / 0.775824 | 43.20 / 0.708217 |
| OdiaGenAI/facebook-nllb-200-3.3B-finetuned-odia | 3.3B | 56.40 / 0.916177 | 62.90 / 0.903659 |
| DebasishDhal99/facebook-nllb-200-1.3B-finetuned-hindi | 1.3B | 55.50 / 0.867866 | 44.70 / 0.828884 |
| DebasishDhal99/facebook-nllb-200-1.3B-finetuned-odia | 1.3B | 53.70 / 0.909711 | 60.10 / 0.896546 |
| DebasishDhal99/facebook-nllb-200-distilled-600M-finetuned-odia | 600M | 50.00 / 0.902548 | 54.60 / 0.884445 |
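The tables report BLEU (0 to 100) and RIBES (0 to 1). As a rough illustration of what the BLEU column measures, below is a minimal, self-contained sentence-level BLEU sketch in plain Python. This is not the official WAT evaluation pipeline (which applies its own tokenization and corpus-level scoring); it is only a sketch of the metric's core: modified n-gram precision up to 4-grams combined geometrically, times a brevity penalty.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU (0-100): geometric mean of clipped n-gram
    precisions for n = 1..max_n, scaled by the brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hyp, n)
        ref_counts = ngrams(ref, n)
        # Clipped overlap: each hypothesis n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum((hyp_counts & ref_counts).values())
        if overlap == 0:
            return 0.0  # geometric mean collapses if any precision is zero
        precisions.append(overlap / max(sum(hyp_counts.values()), 1))
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n) * 100
```

A perfect match scores 100; production evaluations should use a standard implementation such as sacrebleu so that scores like those above are comparable across systems.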