🌐 NoMIRACL Dataset [EMNLP'24] Collection
A collection of multilingual relevance-assessment datasets. We also provide SFT fine-tuned models (Mistral-7B & Llama-3 8B).
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 on the nthakur/nomiracl-instruct dataset. It achieves the following results on the evaluation set:
- Loss: 1.4019

Model description: More information needed
Intended uses & limitations: More information needed
Training and evaluation data: More information needed
The following results were observed during training:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.4364 | 1.0 | 671 | 1.4019 |
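Since the model is instruction-tuned for relevance assessment, a typical use is to ask it whether a set of retrieved passages answers a query. The sketch below shows one way to build such a prompt; the exact wording, the answer labels, and the repo id are assumptions for illustration, not taken from this card or the nomiracl-instruct dataset.

```python
def build_relevance_prompt(query: str, passages: list[str]) -> str:
    """Format an illustrative relevance-assessment prompt.

    The phrasing and labels here are assumptions; consult the
    nthakur/nomiracl-instruct dataset for the actual template.
    """
    # Number each passage so the model can refer to them.
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "I will give you a question and several passages. "
        "Respond 'Yes, answer is present' if any passage answers the question, "
        "otherwise respond 'I don't know'.\n\n"
        f"Question: {query}\n\nPassages:\n{context}\n\nAnswer:"
    )

prompt = build_relevance_prompt(
    "Who wrote Hamlet?",
    ["Hamlet is a tragedy written by William Shakespeare."],
)

# To generate with the fine-tuned model (repo id below is hypothetical):
# from transformers import pipeline
# generator = pipeline("text-generation", model="<this-model-repo-id>")
# print(generator(prompt, max_new_tokens=16)[0]["generated_text"])
```

The generation call is commented out because it downloads multi-gigabyte weights; the prompt builder itself is plain Python.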
Base model: mistralai/Mistral-7B-v0.3