---
license: mit
task_categories:
- text-generation
language:
- ko
size_categories:
- n<1K
---

# KoInFoBench

KoInFoBench is a specialized evaluation dataset designed to assess how well Large Language Models (LLMs) follow Korean instructions.<br>
The current version of `KoInFoBench` consists of 60 instruction sets and 233 questions.

Inspired by the [InFoBench](https://huggingface.co/datasets/kqsong/InFoBench) dataset, we extend its concept by focusing on the nuances and features of the Korean language.

 - 🖥️ Code to reproduce our results or to evaluate your own LLMs is available at [https://github.com/KIFAI/KoInFoBench](https://github.com/KIFAI/KoInFoBench)
 - 📄 The paper is in preparation and will be released soon!

### 🚀 Update
- **2024.05.18**: added evaluation results for `gpt-4o-2024-05-13`, `claude-3-sonnet-20240229`, and `solar-1-mini-chat`

## Dataset Overview

### Usage
```python
from datasets import load_dataset

dataset = load_dataset('kifai/KoInFoBench')
```
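
Continuing from the snippet above, a quick sanity check of one entry (assuming the dataset exposes a single default `train` split; the split name is an assumption, not documented here):

```python
# `dataset` comes from the snippet above; the 'train' split name is assumed
example = dataset['train'][0]
print(example['subset'], example['category'])
print(example['instruction'])
for question in example['decomposed_questions']:
    print('-', question)
```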

### Example
```json
{
  "id": "19",
  "subset": "input_intensive_set",
  "category": "구글캘린더",
  "instruction": "다음은 해외 콘서트 참가 확정에 대한 영문으로 작성된 이메일입니다. 한국시간(KST) 기준으로 참가 확정된 날짜, 콘서트 날짜와 시간을 \"년-월-일 시간\" 형식으로 작성하고 한국시간 기준으로 참가 확정일로부터 콘서트 날짜까지 몇 일 남았는지 계산하여 국문으로 정답을 함께 작성합니다.",
  "input": "Email: We are pleased to inform you that your concert ticket purchase has been successfully confirmed at approximately 11am GMT today (26 March 2024). The concert you have been eagerly awaiting is scheduled to take place on 17 September 2024, starting at 6 PM UTC+2. Please mark your calendar and prepare to join us for an unforgettable evening of live music and entertainment. Your ticket grants you access to a night filled with exceptional performances, engaging visuals, and the vibrant energy of live music. We recommend arriving early to enjoy the full experience, including pre-concert activities and amenities.",
  "decomposed_questions": [
    "답변은 해외 콘서트 참가 일정에 대한 내용이 포함되어 있습니까?",
    "답변으로 작성된 모든 일정은 한국시간(KST) 기준으로 작성되었습니까?",
    "콘서트 참가가 확정된 날짜 그리고 콘서트 날짜와 시간 2개의 일정을 모두 포함합니까?",
    "날짜와 시간이 \"년-월-일 시간\" 형식으로 올바르게 작성되었습니까?",
    "콘서트 확정일로부터 콘서트까지 남은 기간은 콘서트 시작일을 포함할 경우 177일, 미포함인 경우 176일입니다. 남은 기간을 176일 혹은 177일로 계산하였습니까?"
  ],
  "question_label": [
    "Format",
    "Format, Content",
    "Format",
    "Format",
    "Number"
  ],
  "ref": ""
}
```

### Fields
- **id**: unique identifier for each entry in the dataset
- **subset**: either `input_intensive_set` or `instruction_intensive_set`, where "intensive" indicates whether the entry focuses on evaluating Korean-specific input or detailed instruction following
- **category**: a string naming the category each entry belongs to. For example, '구글캘린더' (Google Calendar) indicates that the entry relates to tasks associated with Google Calendar
- **instruction**: a string containing the instruction to be followed
- **input**: a string containing context information; it can be empty
- **decomposed_questions**: a list of questions that decompose the entry's task. Each question evaluates one requirement of the LLM's response
- **question_label**: a list of labels identifying the type of each decomposed question. Each label covers one or more aspects, such as Format, Content, Number, Linguistic, and Style
- **ref**: a string providing references or additional information; it can be empty
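
As a quick illustration of these fields, the sketch below tallies entries per `subset` and per label aspect, reusing the `dataset` object from the usage snippet above (the `train` split name is an assumption, and the comma-separated label format follows the example entry):

```python
from collections import Counter

rows = dataset['train']  # assumes a single default 'train' split

# Count entries per subset
subset_counts = Counter(row['subset'] for row in rows)

# Labels such as "Format, Content" combine several aspects in one string,
# so split them before counting
label_counts = Counter(
    aspect.strip()
    for row in rows
    for label in row['question_label']
    for aspect in label.split(',')
)

print(subset_counts)
print(label_counts)
```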


## Evaluation Result

### DRFR
Decomposed Requirements Following Ratio (DRFR) is the metric used to evaluate how accurately LLMs respond to the instruction/input.
It is the average accuracy across the answers to the decomposed questions for each instruction; a worked sketch follows the table below.
The following table summarizes model performance on our dataset.

| Model                                       | H_DRFR    | A_DRFR | Alignment |
|---------------------------------------------|-----------|--------|-----------|
| **claude-3-opus-20240229**                  | **0.854** | 0.850  | 87%       |
| **gpt-4-turbo-2024-04-09**                  | 0.850     | 0.880  | 87%       |
| **gpt-4o-2024-05-13**                       | 0.850     | 0.863  | 89%       |
| **gpt-4-0125-preview**                      | 0.824     | 0.824  | 83%       |
| **claude-3-sonnet-20240229**                | 0.790     | 0.828  | 84%       |
| **gemini-1.5-pro**                          | 0.773     | 0.811  | 83%       |
| **meta-llama/Meta-Llama-3-70B-Instruct**    | 0.747     | 0.863  | 84%       |
| **hpx003**                                  | 0.691     | 0.738  | 83%       |
| **gpt-3.5-turbo-0125**                      | 0.678     | 0.734  | 82%       |
| **solar-1-mini-chat**                       | 0.614     | 0.695  | 79%       |
| **yanolja/EEVE-Korean-Instruct-10.8B-v1.0** | 0.597     | 0.730  | 79%       |

- `H_DRFR`: the accuracy of model responses as evaluated by a human expert
- `A_DRFR`: the accuracy of model responses as evaluated automatically by GPT-4, employed as an LLM-as-a-judge
- `Alignment`: the degree of agreement (consistency) between the human and automated evaluations
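
For concreteness, here is a minimal sketch of how these three quantities can be computed from per-question pass/fail verdicts. The verdicts below are invented for illustration and are not actual dataset results:

```python
from statistics import mean

# Hypothetical verdicts for five decomposed questions: did the response
# satisfy each requirement, per the human expert and per the GPT-4 judge?
human_verdicts = [True, True, False, True, False]
judge_verdicts = [True, False, False, True, True]

h_drfr = mean(human_verdicts)   # share of requirements satisfied (human)
a_drfr = mean(judge_verdicts)   # share of requirements satisfied (GPT-4)
alignment = mean(h == j for h, j in zip(human_verdicts, judge_verdicts))

print(f"H_DRFR={h_drfr:.3f}  A_DRFR={a_drfr:.3f}  Alignment={alignment:.0%}")
```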

> Please note that the evaluation results of the LLMs presented in the table above may vary between runs due to the randomness of model generation.

## Additional Information

### License Information

This dataset is released under the [MIT LICENSE](https://github.com/KIFAI/KoInfoBench/blob/main/LICENSE).

### Citation Information
```
@article{oh2024koinfobench,
      title={KoInFoBench},
      author={Sungwoo Oh and Sungjun Kwon and Donggyu Kim},
      year={2024},
      eprint={},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```