---
license: cc0-1.0
task_categories:
- question-answering
language:
- en
size_categories:
- 100M<n<1B
---
# ComplexTempQA Dataset

ComplexTempQA is a large-scale dataset designed for complex temporal question answering (TQA). It consists of over 100 million question-answer pairs, making it one of the most extensive datasets available for TQA. The dataset is generated from Wikipedia and Wikidata and covers questions spanning 36 years (1987–2023).

**Note:** A smaller version is also available, consisting of questions from the period 1987–2007.

## Dataset Description

ComplexTempQA categorizes questions into three main types:
- Attribute Questions
- Comparison Questions
- Counting Questions

These categories are further divided based on their relation to events, entities, or time periods. 

### Question Types and Counts

| ID | Question Type         | Subtype             | Count         |
|--|-----------------------|---------------------|---------------|
|1a| Attribute             | Event               | 83,798        |
|1b| Attribute             | Entity              | 84,079        |
|1c| Attribute             | Time                | 9,454         |
|2a| Comparison            | Event               | 25,353,340    |
|2b| Comparison            | Entity              | 74,678,117    |
|2c| Comparison            | Time                | 54,022,952    |
|3a| Counting              | Event               | 18,325        |
|3b| Counting              | Entity              | 10,798        |
|3c| Counting              | Time                | 12,732        |
|  | Multi-Hop             |                     | 76,933        |
|  | Unnamed Event         |                     | 8,707,123     |
|  | **Total**             |                     | **100,228,457**|

### Metadata

- **id**: A unique identifier for each question.
- **question**: The text of the question being asked.
- **answer**: The answer(s) to the question.
- **type**: The type of question based on the dataset’s taxonomy.
- **rating**: A numerical rating indicating the difficulty of the question (`0` for easy, `1` for hard).
- **timeframe**: The start and end dates relevant to the question.
- **question_entity**: List of Wikidata IDs related to the entities in the question.
- **answer_entity**: List of Wikidata IDs related to the entities in the answer.
- **question_country**: List of Wikidata IDs of the countries associated with the questioned entities or events.
- **answer_country**: List of Wikidata IDs of the countries associated with the answered entities or events.
- **is_unnamed**: A flag indicating if the question contains an implicitly described event (`1` for yes, `0` for no).
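A record with these fields can be handled as in the following minimal sketch. The field names follow the metadata list above, but the example values are purely illustrative (not actual dataset entries), and the on-disk serialization (JSON Lines, Parquet, etc.) may differ:

```python
# Minimal sketch of parsing a ComplexTempQA-style record.
# Field names follow the metadata list above; the values below are
# illustrative placeholders, not real dataset entries.
import json

record_json = """{
  "id": "example-0001",
  "question": "Which event happened first, X or Y?",
  "answer": ["X"],
  "type": "2a",
  "rating": 0,
  "timeframe": ["1990-01-01", "1995-12-31"],
  "question_entity": ["Q1", "Q2"],
  "answer_entity": ["Q1"],
  "question_country": ["Q30"],
  "answer_country": ["Q30"],
  "is_unnamed": 0
}"""

record = json.loads(record_json)

# rating: 0 = easy, 1 = hard (per the metadata description above)
is_hard = record["rating"] == 1
# timeframe: start and end dates relevant to the question
start, end = record["timeframe"]

print(record["type"], is_hard, start, end)
```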



## Dataset Characteristics

### Size
ComplexTempQA comprises over 100 million question-answer pairs, focusing on events, entities, and time periods from 1987 to 2023.

### Complexity
Questions require advanced reasoning skills, including multi-hop question answering, temporal aggregation, and across-time comparisons.

### Taxonomy
The dataset follows a unique taxonomy categorizing questions into attributes, comparisons, and counting types, ensuring comprehensive coverage of temporal queries.

### Evaluation
The dataset has been evaluated for readability, ease of answering before and after web searches, and overall clarity. Human raters have assessed a sample of questions to ensure high quality.

## Usage

### Evaluation and Training
ComplexTempQA can be used for:
- Evaluating the temporal reasoning capabilities of large language models (LLMs)
- Fine-tuning language models for better temporal understanding
- Developing and testing retrieval-augmented generation (RAG) systems
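For evaluation use cases like these, the `type` and `rating` fields make it easy to build difficulty-stratified test samples. The sketch below is hypothetical: it assumes each record is a dict carrying the metadata fields described above, and the toy records are illustrative only:

```python
# Hypothetical sketch: build a small evaluation sample stratified by
# question type and difficulty, assuming each record is a dict with
# the metadata fields described above.
from collections import defaultdict

def stratified_sample(records, per_bucket=2):
    """Group records by (type, rating) and keep up to per_bucket per group."""
    buckets = defaultdict(list)
    for rec in records:
        key = (rec["type"], rec["rating"])
        if len(buckets[key]) < per_bucket:
            buckets[key].append(rec)
    return buckets

# Toy records with illustrative values (not actual dataset entries).
records = [
    {"id": "1", "type": "2a", "rating": 0},
    {"id": "2", "type": "2a", "rating": 0},
    {"id": "3", "type": "2a", "rating": 1},
    {"id": "4", "type": "3c", "rating": 0},
    {"id": "5", "type": "2a", "rating": 0},  # exceeds bucket cap, dropped
]

sample = stratified_sample(records)
print({k: [r["id"] for r in v] for k, v in sample.items()})
```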

### Research Applications
The dataset supports research in:
- Temporal question answering
- Information retrieval
- Language understanding

### Adaptation and Continual Learning
ComplexTempQA's temporal metadata facilitates the development of online adaptation and continual training approaches for LLMs, aiding in the exploration of time-based learning and evaluation.
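As one sketch of such time-based usage, the `timeframe` field can be used to split records chronologically, e.g. training on questions about earlier periods and evaluating on later ones. This assumes `timeframe` holds ISO-8601 date strings, which compare correctly as plain strings; the cutoff and toy records are illustrative:

```python
# Sketch: chronological train/eval split using the timeframe metadata,
# assuming timeframe is a pair of ISO-8601 date strings (which sort
# correctly under plain string comparison).
def temporal_split(records, cutoff="2015-01-01"):
    """Questions whose timeframe starts before cutoff go to train,
    the rest to eval."""
    train, eval_ = [], []
    for rec in records:
        start, _end = rec["timeframe"]
        (train if start < cutoff else eval_).append(rec)
    return train, eval_

# Toy records with illustrative timeframes (not actual dataset entries).
records = [
    {"id": "a", "timeframe": ["1991-05-01", "1991-06-01"]},
    {"id": "b", "timeframe": ["2020-01-01", "2020-12-31"]},
]

train, eval_ = temporal_split(records)
print([r["id"] for r in train], [r["id"] for r in eval_])
```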

## Access

The dataset and code are freely available at [https://github.com/DataScienceUIBK/ComplexTempQA](https://github.com/DataScienceUIBK/ComplexTempQA).