---
library_name: transformers
language:
- ja
- en
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
---
# Model Card for jaeyong2/Qwen2.5-0.5B-Instruct-JaMagpie-Preview
## Model Details
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) targeting Japanese and English instruction following; as the "JaMagpie" name suggests, it was tuned on a Japanese Magpie-style instruction dataset.
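The model can be loaded with the standard `transformers` chat workflow. A minimal inference sketch (the prompt is illustrative, not from the card):
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jaeyong2/Qwen2.5-0.5B-Instruct-JaMagpie-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Japanese example prompt: "Please briefly explain Mount Fuji."
messages = [{"role": "user", "content": "富士山について簡単に説明してください。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```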
## Evaluation
### llm-jp-eval script (Colab)
```
# Clone the llm-jp-eval harness and install it
!git clone https://github.com/llm-jp/llm-jp-eval.git
!cd llm-jp-eval && pip install -e .

# Download and preprocess all evaluation datasets
!cd llm-jp-eval && python scripts/preprocess_dataset.py --dataset-name all --output-dir ./dataset_dir

# Evaluate this model on the preprocessed test split
!cd llm-jp-eval && python scripts/evaluate_llm.py -cn config.yaml model.pretrained_model_name_or_path=jaeyong2/Qwen2.5-0.5B-Instruct-JaMagpie-Preview tokenizer.pretrained_model_name_or_path=jaeyong2/Qwen2.5-0.5B-Instruct-JaMagpie-Preview dataset_dir=./dataset_dir/1.4.1/evaluation/test
```
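The card does not state how the English MMLU score below was measured; one plausible way to reproduce it (an assumption, not confirmed here) is EleutherAI's lm-evaluation-harness:
```
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=jaeyong2/Qwen2.5-0.5B-Instruct-JaMagpie-Preview \
  --tasks mmlu \
  --batch_size 8
```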
Results on MMLU (English) and llm-jp-eval (Japanese); higher is better.

| Benchmark | Qwen2.5-0.5B-Instruct | Fine-tuned model |
|:----------|----------------------:|-----------------:|
| MMLU      | 0.4592                | 0.4614           |

| llm-jp-eval | Qwen2.5-0.5B-Instruct | Fine-tuned model |
|:------------|----------------------:|-----------------:|
| AVG | 0.3037 | 0.3176 |
| CG  | 0.0000 | 0.0000 |
| EL  | 0.2637 | 0.3146 |
| FA  | 0.0386 | 0.0419 |
| HE  | 0.2700 | 0.3250 |
| MC  | 0.4033 | 0.3733 |
| MR  | 0.0900 | 0.2700 |
| MT  | 0.6148 | 0.6691 |
| NLI | 0.5460 | 0.3180 |
| QA  | 0.2608 | 0.2791 |
| RC  | 0.5495 | 0.5847 |
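Category abbreviations follow llm-jp-eval's task grouping: CG = code generation, EL = entity linking, FA = fundamental analysis, HE = human examination, MC = multiple-choice QA, MR = mathematical reasoning, MT = machine translation, NLI = natural language inference, QA = question answering, RC = reading comprehension.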
### License
Qwen/Qwen2.5-3B-Instruct: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
### Acknowledgement
This research was supported by Google's TPU Research Cloud (TRC) program.