validation loss
#3 · by jacek2024 · opened
I see that results from this model are worse than from the original Llama 3, and I wonder why the validation loss does not change during your training. Is this correct behaviour?
It is not worse than the original Llama 3. We do not train the instruct version; we train the base model.
Crystalcareai changed discussion status to closed