## Model Details
| Property | Value |
|---|---|
| Base Model | Google Gemma 3 (270M parameters) |
| Fine-tuning Method | LoRA using Unsloth |
| Final Format | Merged (base + LoRA weights) |
| Model Size | 270M parameters |
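The "merged" final format means the trained LoRA update is folded back into the base weights, so inference needs no separate adapter. A minimal numerical sketch of that merge, using NumPy; the dimensions, rank, and scaling shown are illustrative, not this model's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16   # illustrative sizes; r is the LoRA rank

W = rng.normal(size=(d_out, d_in))    # frozen base weight
A = rng.normal(size=(r, d_in))        # trained LoRA down-projection
B = rng.normal(size=(d_out, r))       # trained LoRA up-projection

def adapter_forward(x):
    # Base path plus the low-rank LoRA path, scaled by alpha / r
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

# Merging folds the low-rank update into the base matrix once, up front
W_merged = W + (alpha / r) * B @ A

def merged_forward(x):
    return x @ W_merged.T

x = rng.normal(size=(3, d_in))
print(np.allclose(adapter_forward(x), merged_forward(x)))  # True
```

Because the merge is exact, the two forward passes agree to floating-point precision; the merged matrix simply trades adapter flexibility for a single plain weight tensor.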
## Training Datasets
- facebook/belebele
- alexandrainst/multi-wiki-qa
- ilsp/truthful_qa_greek
- ilsp/medical_mcqa_greek
- ilsp/greek_civics_qa
- ilsp/hellaswag_greek
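Most of these datasets are multiple-choice QA. A small helper sketch for rendering one item as a lettered prompt; the `(question, choices, answer_idx)` layout is a hypothetical normalized schema for illustration, since each dataset has its own column names:

```python
def format_mcqa(question: str, choices: list[str], answer_idx: int) -> tuple[str, str]:
    """Render a multiple-choice item as a lettered prompt.

    The (question, choices, answer_idx) layout is an assumed
    normalized schema; actual field names differ per dataset.
    """
    letters = "ABCD"
    options = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(choices))
    prompt = f"{question}\n{options}\nAnswer:"
    return prompt, letters[answer_idx]

prompt, gold = format_mcqa(
    "Ποια είναι η πρωτεύουσα της Ελλάδας;",
    ["Θεσσαλονίκη", "Αθήνα", "Πάτρα", "Ηράκλειο"],
    answer_idx=1,
)
print(gold)  # B
```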
## Uploaded Fine-tuned Model
- Developed by: alexliap
- License: MIT
- Fine-tuned from model: unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with Unsloth and Hugging Face's TRL library.

