2024-09-05 12:15:34-finetune_distributed.py:144-INFO >> Batch 1 of epoch 1/10, training loss : 0.6476805210113525
2024-09-05 12:15:36-finetune_distributed.py:144-INFO >> Batch 1 of epoch 2/10, training loss : 0.4015043079853058
2024-09-05 12:15:36-finetune_distributed.py:144-INFO >> Batch 1 of epoch 3/10, training loss : 0.2477995604276657
2024-09-05 12:15:37-finetune_distributed.py:144-INFO >> Batch 1 of epoch 4/10, training loss : 0.15607638657093048
2024-09-05 12:15:38-finetune_distributed.py:144-INFO >> Batch 1 of epoch 5/10, training loss : 0.0982329472899437
2024-09-05 12:15:38-finetune_distributed.py:144-INFO >> Batch 1 of epoch 6/10, training loss : 0.06631980836391449
2024-09-05 12:15:39-finetune_distributed.py:144-INFO >> Batch 1 of epoch 7/10, training loss : 0.04755895212292671
2024-09-05 12:15:40-finetune_distributed.py:144-INFO >> Batch 1 of epoch 8/10, training loss : 0.03629200905561447
2024-09-05 12:15:40-finetune_distributed.py:144-INFO >> Batch 1 of epoch 9/10, training loss : 0.02855665795505047
2024-09-05 12:15:41-finetune_distributed.py:144-INFO >> Batch 1 of epoch 10/10, training loss : 0.023430630564689636
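The loss at the first batch of each epoch falls steadily, from roughly 0.65 in epoch 1 to about 0.02 in epoch 10. For reference, below is a minimal sketch of the kind of training loop and logging configuration that could produce lines in this layout. It uses Python's standard logging module with a format string inferred from the output above (timestamp, filename, line number, level, then the message); the toy model, optimizer, and dataset are hypothetical stand-ins, not the actual contents of finetune_distributed.py.

```python
import logging

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Format string inferred from the log layout above (an assumption; the real
# configuration in finetune_distributed.py is not shown).
logging.basicConfig(
    format="%(asctime)s-%(filename)s:%(lineno)d-%(levelname)s >> %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    level=logging.INFO,
)
logger = logging.getLogger(__name__)

# Toy model and data, standing in for the real fine-tuning setup.
model = nn.Linear(8, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
dataset = TensorDataset(torch.randn(64, 8), torch.randint(0, 2, (64,)))
loader = DataLoader(dataset, batch_size=16)

num_epochs = 10
for epoch in range(num_epochs):
    for batch_idx, (inputs, targets) in enumerate(loader, start=1):
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
        # Log only the first batch of each epoch, matching the output above.
        if batch_idx == 1:
            logger.info(
                "Batch %d of epoch %d/%d, training loss : %s",
                batch_idx, epoch + 1, num_epochs, loss.item(),
            )
```

Logging only the first batch per epoch keeps the output compact, but it samples the loss at a single point; averaging the loss over each epoch would give a less noisy picture of training progress.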