Added results to readme
- README.md +25 -3
- assets/dia-logo.png +3 -0
- assets/dia-results.png +3 -0
README.md CHANGED
@@ -13,16 +13,30 @@ configs:
   data_files: DIA-Benchmark-k1.json
   type: json
   field: questions
+- config_name: K5
+  data_files: DIA-Benchmark-k5.json
+  type: json
+  field: questions
+- config_name: K10
+  data_files: DIA-Benchmark-k10.json
+  type: json
+  field: questions
+- config_name: K100
+  data_files: DIA-Benchmark-k100.json
+  type: json
+  field: questions
 ---
 # Dynamic Intelligence Assessment Dataset
 
+<div align="center">
+  <img width="550" alt="logo" src="./assets/dia-logo.png">
+</div>
+
 <!-- Provide a quick summary of the dataset. -->
 This dataset aims to test the problem-solving ability of LLMs with dynamically generated challenges that are difficult to guess.
 
 ## Dataset Details
 
-### Dataset Description
-
 The DIA Benchmark Dataset is a benchmarking tool consisting of 150 dynamic question generators for the evaluation of the problem-solving capability of LLMs. It primarily focuses on CTF-style (Capture the Flag) challenges that require knowledge from the fields of mathematics, cryptography, cybersecurity, and computer science. The challenge generators were manually developed by industry experts and tested by multiple individuals to find errors and edge cases. The answers are often long strings or large numbers, making correct guesses highly unlikely. This repository contains the generated question and answer pairs that can be fed to AI models to assess their outputs. The repository contains multiple generated instances of each test to increase the accuracy of the measurements.
 
 
@@ -31,7 +45,15 @@ The DIA Benchmark Dataset is a benchmarking tool consisting of 150 dynamic quest
 - **Language:** English
 - **License:** AL 2.0
 
-
+## Evaluation
+
+We tested 25 state-of-the-art LLMs on the DIA dataset through API calls, and ChatGPT-4o manually through its chat interface to enable tool usage. The tests were generated and run in November 2024 on the `k=5` dataset.
+
+<div align="center">
+  <img alt="evaluation" src="./assets/dia-results.png">
+</div>
+
+## Sources
 
 <!-- Provide the basic links for the dataset. -->
 
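For quick reference, a minimal sketch of loading one of the configurations added above with the Hugging Face `datasets` library. The repository id, the `train` split name, and the record layout are illustrative assumptions, not part of the commit:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual hub path.
REPO_ID = "org-name/DIA-Benchmark"

# "K5" is one of the configs added above (K1/K5/K10/K100); each points at a
# DIA-Benchmark-k*.json file and reads records from its "questions" field.
dia_k5 = load_dataset(REPO_ID, "K5")

print(dia_k5)              # available splits and row counts
print(dia_k5["train"][0])  # first generated question/answer record
```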
assets/dia-logo.png ADDED (Git LFS)
assets/dia-results.png ADDED (Git LFS)
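The new Evaluation section describes API-based testing on the `k=5` instances. Below is a hedged sketch of what such a scoring loop could look like with strict exact matching; the field names (`question`, `answer`), the split name, and the `ask_model` helper are assumptions for illustration, not the published evaluation harness:

```python
from datasets import load_dataset

REPO_ID = "org-name/DIA-Benchmark"  # placeholder repository id


def ask_model(prompt: str) -> str:
    """Hypothetical helper: send one question to an LLM API and return its answer."""
    raise NotImplementedError


def exact_match_score(config: str = "K5") -> float:
    """Rough analogue of the evaluation described above: strict exact-match
    scoring over the generated question/answer pairs of one config."""
    records = load_dataset(REPO_ID, config, split="train")
    correct = 0
    for item in records:
        # Field names "question" and "answer" are assumed; adjust to the JSON schema.
        prediction = ask_model(item["question"]).strip()
        correct += prediction == str(item["answer"]).strip()
    return correct / len(records)
```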