---
license: apache-2.0
language:
- en
tags:
- mathematics
- computer-science
- cryptography
- ctf
pretty_name: Dynamic Intelligence Assessment Dataset
size_categories:
- 1K<n<15K
---
# Dataset Card for the Dynamic Intelligence Assessment Dataset (DIA-Bench)

<!-- Provide a quick summary of the dataset. -->
This dataset tests the problem-solving ability of LLMs with dynamically generated challenges that are difficult to guess.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

As machine intelligence evolves, the need to test and compare the problem-solving abilities of different AI models grows.
However, current benchmarks are often simplistic, allowing models to perform uniformly well and making it difficult to distinguish their capabilities. Additionally, benchmarks typically rely on static question-answer pairs that models might memorize or guess.
To address these limitations, we introduce Dynamic Intelligence Assessment (DIA), a novel methodology for testing AI models using dynamic question templates and improved metrics across multiple disciplines such as mathematics, cryptography, cybersecurity, and computer science. The accompanying dataset, DIA-Bench, contains a diverse collection of challenge templates with mutable parameters presented in various formats, including text, PDFs, compiled binaries, visual puzzles, and CTF-style cybersecurity challenges. Our framework introduces four new metrics to assess a model's reliability and confidence across multiple attempts. These metrics revealed that even simple questions are frequently answered incorrectly when posed in varying forms, highlighting significant gaps in models' reliability. Notably, API models like GPT-4o often overestimated their mathematical capabilities, while ChatGPT-4o demonstrated better performance due to effective tool usage. In self-assessment, OpenAI's o1-mini showed the best judgment about which tasks it should attempt to solve.
We evaluated 25 state-of-the-art LLMs on DIA-Bench, showing that current models struggle with complex tasks and often display unexpectedly low confidence, even on simpler questions. The DIA framework sets a new standard for assessing not only problem-solving but also a model's adaptive intelligence and its ability to recognize its own limitations. The dataset is publicly available on the project's page: https://github.com/DIA-Bench

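The core idea of mutable challenge templates can be illustrated with a minimal, hypothetical sketch (the actual template format and generation logic used by DIA-Bench may differ): a template is re-instantiated with fresh parameters on every attempt, so a memorized static answer is useless.

```python
import random

def instantiate_template(rng: random.Random) -> tuple[str, int]:
    """Fill a hypothetical arithmetic challenge template with fresh
    random parameters and return (question, expected_answer)."""
    a, b = rng.randint(100, 999), rng.randint(100, 999)
    question = f"What is {a} * {b}? Respond with only the number."
    return question, a * b

# Each evaluation run draws new parameters, so a model cannot rely
# on having seen (or guessed) a static question-answer pair.
rng = random.Random(42)
question, answer = instantiate_template(rng)
```

Evaluating the same template several times with different parameters is what makes per-template reliability and confidence metrics possible.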
- **Curated by:** Norbert Tihanyi, Tamas Bisztray, Richard A. Dubniczky, Rebeka Toth, Bertalan Borsos, Bilel Cherif, Ridhi Jain, Lajos Muzsai, Mohamed Amine Ferrag, Ryan Marinelli, Lucas C. Cordeiro, Merouane Debbah, Vasileios Mavroeidis, and Audun Josang
<!-- - **Funded by:** [More Information Needed] -->
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** Apache 2.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/DIA-Bench/DIA-Bench
- **Paper:** https://arxiv.org/abs/2410.15490

## Uses

<!-- Address questions around how the dataset is intended to be used. -->
This dataset is intended to be used as a benchmark to test the problem-solving ability and confidence of LLMs.

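Since the data is distributed as JSON, a minimal loading sketch with pandas might look like the following. The records and field names below are placeholders, not the real DIA-Bench schema; consult the actual files for the correct fields.

```python
import io
import json
import pandas as pd

# Hypothetical sample records mimicking a JSON-lines challenge file;
# the real DIA-Bench field names may differ.
records = [
    {"id": 1, "category": "mathematics", "question": "..."},
    {"id": 2, "category": "cryptography", "question": "..."},
]
buf = io.StringIO("\n".join(json.dumps(r) for r in records))

# pandas reads line-delimited JSON directly with lines=True.
df = pd.read_json(buf, lines=True)
print(df["category"].tolist())  # ['mathematics', 'cryptography']
```

For a file on disk, replace the `StringIO` buffer with the path to the JSON file.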
## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]