llama3.1-8b-gpt4o_100k_coding-k / train_results.json
{
  "epoch": 1.0,
  "total_flos": 7.971483567941222e+17,
  "train_loss": 0.6255532653243453,
  "train_runtime": 2916.7528,
  "train_samples": 116368,
  "train_samples_per_second": 5.915,
  "train_steps_per_second": 0.093
}
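A minimal sketch of how one might read these Trainer metrics, assuming the standard Hugging Face `train_results.json` layout: dividing samples-per-second by steps-per-second recovers the effective samples consumed per optimizer step (i.e. the global batch size across devices and gradient accumulation), which for the numbers above comes out to roughly 64.

```python
# Training summary values copied from train_results.json above.
# The derived quantity below is an inference from the reported
# throughput figures, not a value stored in the file itself.
results = {
    "epoch": 1.0,
    "total_flos": 7.971483567941222e+17,
    "train_loss": 0.6255532653243453,
    "train_runtime": 2916.7528,
    "train_samples": 116368,
    "train_samples_per_second": 5.915,
    "train_steps_per_second": 0.093,
}

# samples/s divided by steps/s = samples per optimizer step
# (effective global batch size, including gradient accumulation).
effective_batch = (
    results["train_samples_per_second"] / results["train_steps_per_second"]
)
print(round(effective_batch))  # prints 64
```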