---
base_model: unsloth/Qwen2-1.5B-Instruct-bnb-4bit
language:
- en
- it
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
datasets:
- gsarti/clean_mc4_it
- FreedomIntelligence/alpaca-gpt4-italian
---

This model has been fine-tuned with the continued pretraining mode of Unsloth on the gsarti/clean_mc4_it dataset (only 100k rows) to improve its Italian language capabilities. A second fine-tuning pass was then performed on the instruction dataset FreedomIntelligence/alpaca-gpt4-italian.
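As a usage sketch: Qwen2 instruct models expect prompts in the ChatML format, so a prompt for this model can be built as below. This is a minimal illustration, not code from this repository; in practice you would load the tokenizer with Transformers and call `tokenizer.apply_chat_template(...)`, which produces the same layout.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-style prompt as used by Qwen2 instruct models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example prompt in Italian, matching the model's target language.
prompt = build_chatml_prompt(
    "Sei un assistente utile che risponde in italiano.",
    "Qual è la capitale d'Italia?",
)
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to generate the assistant turn.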
## Uploaded model
- Developed by: e-palmisano
- License: apache-2.0
- Finetuned from model: unsloth/Qwen2-1.5B-Instruct-bnb-4bit
## Evaluation
For a detailed comparison of model performance, check out the Leaderboard for Italian Language Models.
Here's a breakdown of the performance metrics:
| Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|---|---|---|---|---|
| Accuracy Normalized | 0.4689 | | | |
This qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.