---
license: apache-2.0
language:
- en
tags:
- mathematics
- computer-science
- cryptography
- ctf
pretty_name: Dynamic Intelligence Assessment Dataset
configs:
- config_name: K1
  data_files: DIA-Benchmark-k1.json
  type: json
  field: questions
- config_name: K5
  data_files: DIA-Benchmark-k5.json
  type: json
  field: questions
- config_name: K10
  data_files: DIA-Benchmark-k10.json
  type: json
  field: questions
- config_name: K100
  data_files: DIA-Benchmark-k100.json
  type: json
  field: questions
---
# Dynamic Intelligence Assessment Dataset

<div align="center">
    <img width="550" alt="logo" src="./assets/dia-logo.png">
</div>

<!-- Provide a quick summary of the dataset. -->
This dataset aims to test the problem-solving ability of LLMs with dynamically generated challenges that are difficult to guess.

## Dataset Details

The DIA Benchmark Dataset is a benchmarking tool consisting of 150 dynamic question generators for evaluating the problem-solving capability of LLMs. It primarily focuses on CTF-style (Capture the Flag) challenges that require knowledge from the fields of mathematics, cryptography, cybersecurity, and computer science. The challenge generators were manually developed by industry experts and tested by multiple individuals to find errors and edge cases. The answers often consist of many characters and large numbers, making correct guessing highly unlikely. This repository contains the generated question-and-answer pairs that can be fed to AI models to assess their outputs, with multiple generated instances of each test included to increase the accuracy of the measurements.


- **Curated by:** Norbert Tihanyi, Tamas Bisztray, Richard A. Dubniczky, Rebeka Toth, Bertalan Borsos, Bilel Cherif, Ridhi Jain, Lajos Muzsai, Mohamed Amine Ferrag, Ryan Marinelli, Lucas C. Cordeiro, Merouane Debbah, Vasileios Mavroeidis, and Audun Josang
<!-- - **Funded by:** [More Information Needed] -->
- **Language:** English
- **License:** Apache 2.0

## Evaluation

We tested 25 state-of-the-art LLMs on the DIA dataset through API calls, and ChatGPT-4o manually through its chat interface to enable tool usage. The tests were generated and run in November 2024 on the `k=5` dataset.

<div align="center">
    <img alt="evaluation" src="./assets/dia-results.png">
</div>

## Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/DIA-Bench/DIA-Bench
- **Paper:** https://arxiv.org/abs/2410.15490

## Uses

<!-- Address questions around how the dataset is intended to be used. -->
- Testing the problem-solving ability of an LLM.
- Comparing the problem-solving abilities of multiple LLMs.
- Testing the response consistency of an LLM.
- Testing the confidence of an LLM.


## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

During generation, different sizes of the dataset can be created by modifying the `k` parameter, where `k=5` means that each test generator is used to produce `5` instances of that test type. This repository contains four variations of the pre-generated tests, detailed in the following table:

|File|K|Tests|
|-|-|-|
|[DIA-Benchmark-k1.json](./DIA-Benchmark-k1.json)|1|150|
|[DIA-Benchmark-k5.json](./DIA-Benchmark-k5.json)|5|750|
|[DIA-Benchmark-k10.json](./DIA-Benchmark-k10.json)|10|1,500|
|[DIA-Benchmark-k100.json](./DIA-Benchmark-k100.json)|100|15,000|

Please note that depending on the tool or API used to run the tests, the runtime as well as the cost will differ significantly.

Each generated dataset is a JSON file, where the questions are stored in a list under the `"questions"` key. An example question looks as follows:

```json
{
    "challenge": {
        "template_id": 1,
        "instance": 1,
        "level": "easy",
        "category": "Mathematics",
        "adversarial": false,
        "description": "You have been given the task of solving the following challenge.\n\n        #Instructions:\n              a. Carefully read the challenge description.\n              b. Determine the most effective approach to solve the challenge. This may include writing a program, utilizing external resources, employing specific tools, or any other suitable methods.\n              c. We are solely interested in the solution, not the method used to obtain it.\n              d. Confidence in your solution is extremely important. If your answer is correct, you will get +1 point. If incorrect, you will get -2 point. If you are unsure of the solution, or you can not solve it respond in this XML format:  <xml>I-DO-NOT-KNOW</xml> to receive 0 points without penalty for an incorrect answer.\n              e. Be sure to double-check your solution before answering and  make sure your answer follows the required XML format as specified in the challenge.\n\n\n        #Challenge:\n\n        Given a 128-bit even number, namely 170272422572648913815378456437965460739, find the next prime number.",
        "instructions": "Find the next prime number that comes after the specified number. Do NOT include any explanation or additional text in the response. ALWAYS return the answer in this XML format: '<xml>prime_number</xml>'."
    },
    "solution": {
        "challenge_solution": "<xml>170272422572648913815378456437965460841</xml>",
        "solution_explanation": "Finding the next prime number after a 128-bit even number can be efficiently accomplished using specialized algorithms. Many programming languages provide built-in functions for this purpose, which can return results almost instantly on a standard computer."
    }
},
```
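
Since the structure is plain JSON, a file can be loaded either with the standard `json` module or with the 🤗 `datasets` library. The snippet below is a minimal sketch; it assumes the `k=5` file (`DIA-Benchmark-k5.json`, as referenced in the configs above) has been downloaded to the working directory:

```python
import json
from datasets import load_dataset

# Plain JSON: questions are stored as a list under the "questions" key.
with open("DIA-Benchmark-k5.json", "r", encoding="utf-8") as f:
    questions = json.load(f)["questions"]
print(len(questions))                          # 750 for the k=5 variant
print(questions[0]["challenge"]["category"])   # e.g. "Mathematics"

# Equivalent: let the datasets library parse the same file.
ds = load_dataset("json", data_files="DIA-Benchmark-k5.json", field="questions")["train"]
print(ds[0]["challenge"]["description"])
```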

## Fields

Each question instance contains all of the following fields:

- `challenge.template_id` __int__: ID of the generator template.
- `challenge.instance` __int__: Number of the generated instance from the same template.
- `challenge.level` __str__: Perceived difficulty of the question (easy/medium/hard).
- `challenge.category` __str__: Area of expertise required to solve the question.
- `challenge.adversarial` __bool__: Signifies whether the question contains adversarial tactics to confuse the model.
- `challenge.description` __str__: The description of the challenge, which is the first input for the model.
- `challenge.instructions` __str__: Instructions on how to solve the problem and the expected output format, which form the second input for the model.
- `solution.challenge_solution` __str__: The expected textual output that has to be generated by the model.
- `solution.solution_explanation` __str__: Explanation written by the creator of the template about the challenge.

During testing, it's advised to send the concatenation of `challenge.description` and `challenge.instructions` to the model and check whether the output contains `solution.challenge_solution`. Because of the complexity and specificity of the expected outputs, it is unlikely that a model would generate a correct solution by chance.
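
A minimal evaluation loop following this procedure might look like the sketch below. The `ask_model` function is a placeholder for whatever API or chat interface is being tested, and the +1/-2/0 scoring simply mirrors the rules quoted in the challenge description above:

```python
import json

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to the model under test and return its raw reply."""
    raise NotImplementedError

with open("DIA-Benchmark-k5.json", encoding="utf-8") as f:
    questions = json.load(f)["questions"]

score = 0
for q in questions:
    prompt = q["challenge"]["description"] + "\n\n" + q["challenge"]["instructions"]
    reply = ask_model(prompt)
    if q["solution"]["challenge_solution"] in reply:
        score += 1        # correct answer: +1 point
    elif "<xml>I-DO-NOT-KNOW</xml>" in reply:
        pass              # declined to answer: 0 points, no penalty
    else:
        score -= 2        # incorrect answer: -2 points
print(f"Total score: {score}")
```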

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

Benchmarks typically rely on static question-answer pairs that the models might memorize or guess. To address these limitations, we introduced Dynamic Intelligence Assessment (DIA), a novel methodology for testing AI models using dynamic question templates and improved metrics across multiple disciplines such as mathematics, cryptography, cybersecurity, and computer science. The accompanying dataset, DIA-Bench, contains a diverse collection of challenge templates with mutable parameters presented in various formats, including text, PDFs, compiled binaries, visual puzzles, and CTF-style cybersecurity challenges. Our framework introduces four new metrics to assess a model’s reliability and confidence across multiple attempts. 

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
All of the data used in the tests was created by industry experts and cross-validated with peers, as well as generated using various Python libraries.

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

All addresses, names, emails and other details within the dataset are randomly generated and combined from a pre-defined list and thus do not constitute personally identifiable information. All included data serve as examples for the models and are not relevant by themselves.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

This dataset is deliberately focused on mathematics, computer science, cybersecurity, and cryptography. All of these areas are tested using randomly generated challenges with varying levels of complexity. We do not test other capabilities of the models or other areas of knowledge, such as general knowledge or biology.

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@INPROCEEDINGS{diabench,
  author={Tihanyi, Norbert and Bisztray, Tamas and Dubniczky, Richard A. and Toth, Rebeka and Borsos, Bertalan and Cherif, Bilel and Jain, Ridhi and Muzsai, Lajos and Ferrag, Mohamed Amine and Marinelli, Ryan and Cordeiro, Lucas C. and Debbah, Merouane and Mavroeidis, Vasileios and Jøsang, Audun},
  booktitle={2024 IEEE International Conference on Big Data (BigData)}, 
  title={Dynamic Intelligence Assessment: Benchmarking LLMs on the Road to AGI with a Focus on Model Confidence}, 
  year={2024},
  pages={3313-3321},
  keywords={Measurement;Adaptation models;Computational modeling;Benchmark testing;Reliability engineering;Mathematical models;Data models;Reliability;Problem-solving;Computer security;Artificial Intelligence;Large Language Models;Dynamic Benchmarking;Performance Metrics;Reliability},
  doi={10.1109/BigData62323.2024.10825051}}
```