llama3.1-8b-gpt4o_100k_coding-fft / train_results.json
chansung · Model save · commit 6f3318e (verified)
{
    "epoch": 0.9990732159406858,
    "total_flos": 28187736145920.0,
    "train_loss": 1.0674916249259283,
    "train_runtime": 10422.8572,
    "train_samples": 116368,
    "train_samples_per_second": 1.655,
    "train_steps_per_second": 0.052
}
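These metrics are in the format the Hugging Face Trainer writes to `train_results.json`. A minimal sketch of loading them and deriving a few human-readable figures (the JSON is inlined here to stay self-contained; in practice you would read the file from the repository):

```python
import json

# The metrics from train_results.json, inlined for illustration.
raw = """{
    "epoch": 0.9990732159406858,
    "total_flos": 28187736145920.0,
    "train_loss": 1.0674916249259283,
    "train_runtime": 10422.8572,
    "train_samples": 116368,
    "train_samples_per_second": 1.655,
    "train_steps_per_second": 0.052
}"""

metrics = json.loads(raw)

# Runtime in hours, and the optimizer step count implied by the step rate.
hours = metrics["train_runtime"] / 3600
steps = metrics["train_runtime"] * metrics["train_steps_per_second"]

print(f"final loss: {metrics['train_loss']:.4f}")
print(f"runtime:    {hours:.1f} h (~{steps:.0f} optimizer steps)")
```

This puts the run at roughly 2.9 hours of training for just under one full epoch over the 116,368 samples.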