---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
paperswithcode_id: dbpedia
pretty_name: DBpedia
dataset_info:
  config_name: dbpedia_14
  features:
  - name: label
    dtype:
      class_label:
        names:
          '0': Company
          '1': EducationalInstitution
          '2': Artist
          '3': Athlete
          '4': OfficeHolder
          '5': MeanOfTransportation
          '6': Building
          '7': NaturalPlace
          '8': Village
          '9': Animal
          '10': Plant
          '11': Album
          '12': Film
          '13': WrittenWork
  - name: title
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 178428970
    num_examples: 560000
  - name: test
    num_bytes: 22310285
    num_examples: 70000
  download_size: 119424374
  dataset_size: 200739255
configs:
- config_name: dbpedia_14
  data_files:
  - split: train
    path: dbpedia_14/train-*
  - split: test
    path: dbpedia_14/test-*
  default: true
---

# Dataset Card for DBpedia14

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** https://github.com/zhangxiangxiao/Crepe
- **Paper:** https://arxiv.org/abs/1509.01626
- **Point of Contact:** [Xiang Zhang](mailto:xiang.zhang@nyu.edu)

### Dataset Summary

The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes
from DBpedia 2014; they are listed in classes.txt. From each of these 14 ontology classes, we
randomly chose 40,000 training samples and 5,000 testing samples, giving a training set of
560,000 samples and a test set of 70,000 samples.
There are 3 columns in the dataset (the same for the train and test splits), corresponding to class index
(1 to 14 in the original release; 0 to 13 in this version), title, and content. The title and content are
escaped using double quotes ("), and any internal double quote is escaped by doubling it ("").
There are no newlines in title or content.
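
A quick way to inspect the data is through the 🤗 Datasets library. A minimal sketch (the identifier `dbpedia_14` is assumed to resolve to this repository):

```python
from datasets import load_dataset

# Load both splits of DBpedia14.
ds = load_dataset("dbpedia_14")

print(ds)                                   # train: 560,000 rows, test: 70,000 rows
print(ds["train"][0])                       # {'label': ..., 'title': ..., 'content': ...}
print(ds["train"].features["label"].names)  # ['Company', 'EducationalInstitution', ...]
```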

### Supported Tasks and Leaderboards

- `text-classification`, `topic-classification`: The dataset is mainly used for text classification: given the title
and the content, predict the correct topic, i.e. one of the 14 ontology classes.
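
As an illustration only (this is not the benchmark setup from the paper), a simple bag-of-words baseline could look like the sketch below, which assumes the `dbpedia_14` identifier and scikit-learn:

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ds = load_dataset("dbpedia_14")
train = ds["train"].shuffle(seed=0).select(range(20_000))  # subsample for speed
test = ds["test"]

def titles_plus_contents(split):
    # Concatenate title and content into a single input string per example.
    return [t + " " + c for t, c in zip(split["title"], split["content"])]

vec = TfidfVectorizer(max_features=50_000)
X_train = vec.fit_transform(titles_plus_contents(train))
X_test = vec.transform(titles_plus_contents(test))

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train["label"])
print("test accuracy:", accuracy_score(test["label"], clf.predict(X_test)))
```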

### Languages

Although DBpedia is a multilingual knowledge base, the DBpedia14 extract is mainly in English; text in other
languages may appear (e.g. a film whose title is not originally in English).

## Dataset Structure

### Data Instances

A typical data point comprises a title, a content string, and the corresponding label.

An example from the DBpedia test set looks as follows:
```
{
    'title':'',
    'content':" TY KU /taɪkuː/ is an American alcoholic beverage company that specializes in sake and other spirits. The privately-held company was founded in 2004 and is headquartered in New York City New York. While based in New York TY KU's beverages are made in Japan through a joint venture with two sake breweries. Since 2011 TY KU's growth has extended its products into all 50 states.",
    'label':0
}
```
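
The integer label can be mapped back to its class name through the `ClassLabel` feature (sketch, assuming the dataset was loaded as shown above):

```python
from datasets import load_dataset

ds = load_dataset("dbpedia_14")
print(ds["test"].features["label"].int2str(0))  # 'Company'
```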

### Data Fields

- 'title': a string containing the title of the document, escaped using double quotes (") with any internal double quote escaped by doubling it ("").
- 'content': a string containing the body of the document, escaped in the same way.
- 'label': one of the 14 possible topics.
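
This quoting convention is standard CSV escaping, which Python's csv module handles out of the box (illustrative sketch; the row below is made up):

```python
import csv
import io

# A made-up row in the escaping convention described above:
# class index, quoted title, quoted content with an internal doubled quote.
row = '1,"Acme Corp","The slogan is ""We deliver"" in all markets."'
class_index, title, content = next(csv.reader(io.StringIO(row)))
print(title)    # Acme Corp
print(content)  # The slogan is "We deliver" in all markets.
```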

### Data Splits

The data is split into a training and a test set, with 40,000 training samples and 5,000 test samples per class:

| Split | Samples per class | Total   |
|-------|-------------------|---------|
| train | 40,000            | 560,000 |
| test  | 5,000             | 70,000  |

## Dataset Creation

### Curation Rationale

The DBpedia ontology classification dataset was constructed by Xiang Zhang (xiang.zhang@nyu.edu) and is licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).

### Source Data

#### Initial Data Collection and Normalization

Source data is taken from DBpedia: https://wiki.dbpedia.org/develop/datasets

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was constructed by Xiang Zhang (xiang.zhang@nyu.edu); see [Curation Rationale](#curation-rationale) above for details and licensing.

### Licensing Information

The DBpedia ontology classification dataset is licensed under the terms of the Creative Commons Attribution-ShareAlike 3.0 License and the GNU Free Documentation License.

### Citation Information

```
@inproceedings{NIPS2015_250cf8b5,
 author = {Zhang, Xiang and Zhao, Junbo and LeCun, Yann},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {C. Cortes and N. Lawrence and D. Lee and M. Sugiyama and R. Garnett},
 pages = {},
 publisher = {Curran Associates, Inc.},
 title = {Character-level Convolutional Networks for Text Classification},
 url = {https://proceedings.neurips.cc/paper_files/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf},
 volume = {28},
 year = {2015}
}
```

Lehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. "DBpedia–a large-scale, multilingual knowledge base extracted from Wikipedia." Semantic web 6, no. 2 (2015): 167-195.

### Contributions

Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.