🇮🇹 Artemide (/arˈtɛmide/) 3.5
This model is a fine-tuned version of Phi-3.5-mini-instruct on the ReDiX/DataForge dataset, a mixture of high-quality Italian and English multi-turn conversations.
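Since Artemide-3.5 is fine-tuned from Phi-3.5-mini-instruct, it presumably inherits the base model's chat layout. As a minimal sketch (the tag layout below is the Phi-3.5 convention, not confirmed in this card), a prompt can be rendered like this:

```python
# Sketch: render a conversation in the Phi-3.5-style chat layout,
# which this finetune presumably inherits from its base model.
def build_prompt(messages):
    """Render a list of {role, content} dicts into a Phi-3.5-style prompt string."""
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>\n" for m in messages]
    parts.append("<|assistant|>\n")  # generation continues after this tag
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "Sei un assistente utile."},
    {"role": "user", "content": "Qual è la capitale d'Italia?"},
])
print(prompt)
```

In practice, prefer `tokenizer.apply_chat_template(...)` from the `transformers` library, which reads the template shipped with the model rather than hard-coding it.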
🏆 Evaluation (OPEN ITA LLM Leaderboard)
| Model | Parameters | Average | MMLU_IT | ARC_IT | HELLASWAG_IT |
|---|---|---|---|---|---|
| ReDiX/Artemide-3.5 | 3.82 B | 57.87 | 60.16 | 52.10 | 61.36 |
| meta-llama/Meta-Llama-3.1-8B-Instruct | 8.03 B | 56.97 | 58.43 | 48.42 | 64.07 |
| microsoft/Phi-3.5-mini-instruct | 3.82 B | 56.82 | 60.03 | 49.19 | 61.25 |
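The Average column appears to be the plain arithmetic mean of the three Italian benchmarks, which can be checked directly:

```python
# Scores copied from the leaderboard table above: (MMLU_IT, ARC_IT, HELLASWAG_IT).
scores = {
    "ReDiX/Artemide-3.5": (60.16, 52.10, 61.36),
    "meta-llama/Meta-Llama-3.1-8B-Instruct": (58.43, 48.42, 64.07),
    "microsoft/Phi-3.5-mini-instruct": (60.03, 49.19, 61.25),
}

# Recompute the Average column as the unweighted mean of the three benchmarks.
for model, benchmarks in scores.items():
    avg = round(sum(benchmarks) / len(benchmarks), 2)
    print(f"{model}: {avg}")
```

The recomputed means (57.87, 56.97, 56.82) match the table's Average column, confirming it is an unweighted mean.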
Model tree for ReDiX/Artemide-3.5
Base model: microsoft/Phi-3.5-mini-instruct