---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
task_categories:
- text-generation
- fill-mask
task_ids:
- slot-filling
paperswithcode_id: numersense
pretty_name: NumerSense
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: train
    num_bytes: 825865
    num_examples: 10444
  - name: test_core
    num_bytes: 62652
    num_examples: 1132
  - name: test_all
    num_bytes: 184180
    num_examples: 3146
  download_size: 985463
  dataset_size: 1072697
---

# Dataset Card for NumerSense

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://inklab.usc.edu/NumerSense/
- **Repository:** https://github.com/INK-USC/NumerSense
- **Paper:** https://arxiv.org/abs/2005.00683
- **Leaderboard:** https://inklab.usc.edu/NumerSense/#exp
- **Point of Contact:** Author emails listed in [paper](https://arxiv.org/abs/2005.00683)

### Dataset Summary

NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145
masked-word-prediction probes. The general idea is to mask numbers between 0 and 10 in sentences mined from a
commonsense corpus and to evaluate whether a language model can correctly predict the masked value.
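
The probes can be loaded with the Hugging Face `datasets` library; a minimal sketch (assuming the dataset is
published on the Hub under the id `numersense`):

```python
# Minimal sketch: load the NumerSense probes with the `datasets` library.
# The dataset id "numersense" is an assumption based on this card's location.
from datasets import load_dataset

dataset = load_dataset("numersense")
print(dataset["train"][0])
# e.g. {'sentence': 'Black bears are about <mask> metres tall.', 'target': 'two'}
```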

### Supported Tasks and Leaderboards

The dataset supports the task of slot-filling, specifically as an evaluation of numerical common sense. A leaderboard
is maintained on the [dataset webpage](https://inklab.usc.edu/NumerSense/#exp) with benchmarks for GPT-2, RoBERTa,
BERT, and human performance. Leaderboards are provided for both the core set and the adversarial set discussed below.
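
Because the dataset's sentences use the `<mask>` token, which happens to match RoBERTa's mask format, they can be fed
directly to a fill-mask pipeline. A hedged sketch of such a probe (an illustrative setup, not the paper's exact
evaluation protocol):

```python
# Hedged sketch: probe a masked LM on one NumerSense-style example using the
# `transformers` fill-mask pipeline. roberta-base is chosen here only because
# its mask token is `<mask>`, matching the dataset's sentences.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")
for pred in fill_mask("Black bears are about <mask> metres tall."):
    print(pred["token_str"], round(pred["score"], 3))
```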

### Languages

This dataset is in English.

## Dataset Structure

### Data Instances

Each instance consists of a sentence with a masked numerical value between 0 and 10 and, in the train set, a target.
An example from the training set:

```
sentence: Black bears are about <mask> metres tall.
target: two
```

### Data Fields

Each instance consists of the following fields:
- `sentence`: The sentence with a number masked out with the `<mask>` token.
- `target`: The ground truth target value. Since the test sets do not include the ground truth, the `target` field
values are empty strings in the `test_core` and `test_all` splits.
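
The empty test-split targets can be checked directly; a small sketch, reusing the assumed `numersense` id:

```python
# Sketch: train instances carry a ground-truth target; test instances do not.
from datasets import load_dataset

ds = load_dataset("numersense")
print(ds["train"][0]["target"])      # a number word such as "two"
print(ds["test_core"][0]["target"])  # "" (empty string, no ground truth)
```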

### Data Splits

The dataset includes the following pre-defined data splits:

- A train set with 10,444 labeled examples (i.e., each containing a ground-truth value)
- A core test set (`test_core`) with 1,132 examples (no ground truth provided)
- An expanded test set (`test_all`) encompassing `test_core` plus adversarial examples, for a total of
3,146 examples. See section 2.2 of [the paper](https://arxiv.org/abs/2005.00683) for a discussion of how these
examples are constructed.
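
A short sketch of inspecting the split sizes (the expected counts below come from this card's metadata):

```python
# Sketch: enumerate the pre-defined splits and their sizes.
from datasets import load_dataset

ds = load_dataset("numersense")
for split in ("train", "test_core", "test_all"):
    print(split, len(ds[split]))
# expected: train 10444, test_core 1132, test_all 3146
```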

## Dataset Creation

### Curation Rationale

The purpose of this dataset is "to study whether PTLMs capture numerical commonsense knowledge, i.e., commonsense
knowledge that provides an understanding of the numeric relation between entities." This work is motivated by prior
research exploring whether language models possess _commonsense knowledge_.

### Source Data

#### Initial Data Collection and Normalization

The dataset is an extension of the [Open Mind Common Sense](https://huggingface.co/datasets/open_mind_common_sense)
corpus. A query was performed to discover sentences containing numbers between 0 and 10, after which the resulting
sentences were manually evaluated for inaccuracies, typos, and the expression of commonsense knowledge. The numerical
values were then masked.
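
A hypothetical sketch of the masking step described above (the authors' actual pipeline is not reproduced here, and
the number-word list is an assumption based on the 0-10 range):

```python
# Hypothetical sketch of the masking step: replace the first number word in a
# sentence with the <mask> token and keep the word as the target. The word
# list is an assumption, not the authors' exact vocabulary.
import re

NUMBER_WORDS = ["zero", "one", "two", "three", "four", "five",
                "six", "seven", "eight", "nine", "ten"]
PATTERN = re.compile(r"\b(" + "|".join(NUMBER_WORDS) + r")\b", re.IGNORECASE)

def mask_number(sentence: str):
    match = PATTERN.search(sentence)
    if match is None:
        return None
    masked = PATTERN.sub("<mask>", sentence, count=1)
    return masked, match.group(1).lower()

print(mask_number("Black bears are about two metres tall."))
# ('Black bears are about <mask> metres tall.', 'two')
```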

#### Who are the source language producers?

The [Open Mind Common Sense](https://huggingface.co/datasets/open_mind_common_sense) corpus, from which this dataset
is sourced, is a crowdsourced dataset maintained by the MIT Media Lab.

### Annotations

#### Annotation process

No annotations are present in this dataset beyond the `target` values automatically sourced from the masked
sentences, as discussed above.

#### Who are the annotators?

Curation and inspection were done in two rounds by graduate students.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

The motivation of measuring a model's ability to associate numerical values with real-world concepts appears
relatively innocuous. However, as discussed in the following section, the source dataset may well have biases encoded
from crowdworkers, particularly in terms of factoid coverage. A model's ability to perform well on this benchmark
should therefore not be taken as evidence that it is less biased or more objective than a human performing similar
tasks.

### Discussion of Biases

This dataset is sourced from a crowdsourced commonsense knowledge base. While the information contained in the graph
is generally considered to be of high quality, its coverage of all possible commonsense knowledge is considered to be
very low. The representation of certain factoids may also be skewed by the demographics of the
crowdworkers. As one possible example, the term "homophobia" is connected with "Islam" in the ConceptNet knowledge
base, but not with any other religion or group, possibly due to the biases of crowdworkers contributing to the
project.

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

This dataset was collected by Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren, computer science researchers
at the University of Southern California.

### Licensing Information

The data is hosted in a GitHub repository with the
[MIT License](https://github.com/INK-USC/NumerSense/blob/main/LICENSE).

### Citation Information

```
@inproceedings{lin2020numersense,
  title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},
  author={Bill Yuchen Lin and Seyeon Lee and Rahul Khanna and Xiang Ren}, 
  booktitle={Proceedings of EMNLP},
  year={2020},
  note={to appear}
}
```

### Contributions

Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.