Context-awareness in instruction finetuning
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T on the yihanwang617/WizardLM_70k_processed_indicator_unfiltered dataset. It achieves the following results on the evaluation set:

- Loss: 0.7533
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure
The following hyperparameters were used during training:
Training results

| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.7385 | 0.9989 | 449 | 0.7580 |
| 0.616 | 1.9978 | 898 | 0.7533 |
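The card does not show how to format a prompt for the resulting chat model. Below is a minimal, hedged sketch of building a Zephyr-style prompt, the template used by TinyLlama chat checkpoints; this particular fine-tune may use a different template, so verify against the tokenizer's `chat_template` before relying on it.

```python
def build_prompt(user_msg: str,
                 system_msg: str = "You are a helpful assistant.") -> str:
    """Assemble a Zephyr-style chat prompt (assumed template, not
    confirmed for this specific fine-tune)."""
    return (
        f"<|system|>\n{system_msg}</s>\n"   # system turn
        f"<|user|>\n{user_msg}</s>\n"       # user turn
        f"<|assistant|>\n"                  # model continues from here
    )

prompt = build_prompt("What is instruction fine-tuning?")
print(prompt)
```

In practice, prefer `tokenizer.apply_chat_template(...)` from the `transformers` tokenizer, which reads the template stored with the checkpoint instead of hard-coding it.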