# Model Card for jaeyong2/Qwen2.5-1.5B-Instruct-JaMagpie-Preview

## Evaluation
### llm-jp-eval script (Colab)

```shell
!git clone https://github.com/llm-jp/llm-jp-eval.git
!cd llm-jp-eval && pip install -e .
!cd llm-jp-eval && python scripts/preprocess_dataset.py --dataset-name all --output-dir ./dataset_dir
!cd llm-jp-eval && python scripts/evaluate_llm.py -cn config.yaml \
    model.pretrained_model_name_or_path=jaeyong2/Qwen2.5-1.5B-Instruct-JaMagpie-Preview \
    tokenizer.pretrained_model_name_or_path=jaeyong2/Qwen2.5-1.5B-Instruct-JaMagpie-Preview \
    dataset_dir=./dataset_dir/1.4.1/evaluation/test
```
| llm-jp-eval | Qwen2.5-1.5B-Instruct | google/gemma-2-2b-jpn-it | finetuning-model |
|---|---|---|---|
| AVG | 0.4343 | 0.4315 | 0.4540 |
| CG | 0.0600 | 0.0000 | 0.1500 |
| EL | 0.3952 | 0.3222 | 0.4106 |
| FA | 0.0690 | 0.0846 | 0.0000 |
| HE | 0.4400 | 0.4350 | 0.4300 |
| MC | 0.6800 | 0.6000 | 0.6400 |
| MR | 0.4700 | 0.4900 | 0.5800 |
| MT | 0.6137 | 0.7666 | 0.7915 |
| NLI | 0.5500 | 0.5260 | 0.4440 |
| QA | 0.2443 | 0.2813 | 0.3054 |
| RC | 0.8208 | 0.8097 | 0.7881 |
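
The AVG row appears to be the unweighted mean of the ten task-category scores. A minimal sketch in Python checking this for the finetuning-model column (category labels taken from the table above):

```python
# Per-category llm-jp-eval scores for the finetuning-model column of the table above.
scores = {
    "CG": 0.1500, "EL": 0.4106, "FA": 0.0000, "HE": 0.4300, "MC": 0.6400,
    "MR": 0.5800, "MT": 0.7915, "NLI": 0.4440, "QA": 0.3054, "RC": 0.7881,
}

# An unweighted mean over the ten categories reproduces the reported AVG.
avg = sum(scores.values()) / len(scores)
print(f"{avg:.4f}")  # → 0.4540
```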