# Multifaceted Attention Analysis in Llama2 w En & Hi Dolly15k
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 1.2447
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
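Since usage details are still marked "More information needed", the snippet below is a minimal, illustrative sketch of how a fine-tune of unsloth/llama-2-7b is typically loaded for generation with Unsloth's `FastLanguageModel`. The repo id `your-username/llama-2-7b-dolly-en-hi` is a placeholder, not the published model id, and the prompt template is an assumption.

```python
from unsloth import FastLanguageModel

# Placeholder repo id: this card does not state where the fine-tuned
# weights are published.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="your-username/llama-2-7b-dolly-en-hi",
    max_seq_length=2048,
    load_in_4bit=True,  # assumption: 4-bit loading, Unsloth's common default
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster decoding path

# Assumed instruction-style prompt; the actual training template is not documented.
prompt = "### Instruction:\nName three primary colors.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```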
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3591        | 0.64  | 100  | 1.2760          |
| 1.2391        | 1.27  | 200  | 1.2535          |
| 1.2263        | 1.91  | 300  | 1.2466          |
| 1.2207        | 2.55  | 400  | 1.2447          |
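The hyperparameter list itself did not survive in this card, so the sketch below only reconstructs the overall training setup under stated assumptions: Unsloth QLoRA-style fine-tuning with TRL's `SFTTrainer`, Dolly-15k as a stand-in for the undocumented "generator" dataset (suggested by the collection title), and placeholder values for everything the table does not pin down. Only the 100-step eval cadence and roughly three epochs are inferred from the results table; API details also vary across `trl` versions.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Base model in 4-bit (assumption: QLoRA-style workflow, Unsloth's usual path;
# the card does not state the precision used).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-2-7b",
    max_seq_length=2048,  # placeholder; actual sequence length not documented
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # placeholder LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Stand-in data: the card only says "the generator dataset"; Dolly-15k is
# assumed here because of the collection title.
raw = load_dataset("databricks/databricks-dolly-15k", split="train")
raw = raw.map(lambda ex: {
    "text": f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['response']}"
})
split = raw.train_test_split(test_size=0.05, seed=42)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        num_train_epochs=3,             # the table reaches epoch 2.55 by step 400
        learning_rate=2e-4,             # placeholder; not documented
        per_device_train_batch_size=2,  # placeholder; not documented
        gradient_accumulation_steps=4,  # placeholder; not documented
        evaluation_strategy="steps",
        eval_steps=100,                 # matches the 100-step eval cadence above
        logging_steps=100,
    ),
)
trainer.train()
```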