---
license: mit
task_categories:
- text-classification
language:
- nl
pretty_name: DBRD
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - train/neg/*
    - train/pos/*
  - split: test
    path:
    - test/neg/*
    - test/pos/*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64  # 1 for positive, -1 for negative
  splits:
  - name: train
    num_examples: 20027
  - name: test
    num_examples: 2223
  download_size: 79.1MB
  dataset_size: 773.4MB
---
# Dataset Card for "DBRD: Dutch Book Reviews Dataset"


Translation of the [Dutch Book Review Dataset (DBRD)](https://github.com/benjaminvdb/DBRD), an extensive collection of over 110k book reviews, of which around 22k have associated binary sentiment polarity labels. The dataset is designed for sentiment classification in Dutch and is influenced by the [Large Movie Review Dataset](http://ai.stanford.edu/~amaas/data/sentiment/).

The dataset and the scripts used for scraping the reviews from [Hebban](https://www.hebban.nl), a Dutch platform for book enthusiasts, can be found in the [DBRD GitHub repository](https://github.com/benjaminvdb/DBRD).
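
The splits follow the directory layout given in the config above (`train/neg/*`, `train/pos/*`, and likewise for `test`). As a rough sketch of how they can be assembled with pandas alone, here is a synthetic miniature of that layout built in a temporary directory (the file names and the one-review-per-file content are illustrative, not part of the dataset):

```python
import glob
import os
import tempfile

import pandas as pd

# Build a tiny stand-in for the DBRD directory layout:
# <root>/{train,test}/{neg,pos}/*.csv, with 1 = positive and -1 = negative.
root = tempfile.mkdtemp()
for split in ("train", "test"):
    for polarity, label in (("neg", -1), ("pos", 1)):
        d = os.path.join(root, split, polarity)
        os.makedirs(d)
        pd.DataFrame(
            {"text": [f"voorbeeldrecensie ({polarity})"], "label": [label]}
        ).to_csv(os.path.join(d, "part-0.csv"), index=False)

def load_split(split):
    """Concatenate all CSV shards of one split into a single DataFrame."""
    files = sorted(glob.glob(os.path.join(root, split, "*", "*.csv")))
    return pd.concat((pd.read_csv(f) for f in files), ignore_index=True)

train = load_split("train")
print(len(train), sorted(train["label"].unique().tolist()))  # → 2 [-1, 1]
```

The same glob patterns are what the `datasets` library resolves via the `data_files` entries in the YAML config.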

# Labels

Distribution of positive/negative/neutral labels, in rounded percentages:
```
training: 50/50/ 0
test:     50/50/ 0
```
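
The rounded percentages above can be reproduced with a one-liner in pandas; a minimal sketch on toy data using this dataset's 1/-1 label convention (the four-row frame is illustrative):

```python
import pandas as pd

# Toy frame with the dataset's label convention: 1 = positive, -1 = negative.
df = pd.DataFrame({"label": [1, -1, 1, -1]})

# Share of each label as a rounded percentage, as in the table above.
dist = (df["label"].value_counts(normalize=True) * 100).round().astype(int)
print(dist.to_dict())
```

On the real splits this yields the 50/50 balance shown above, since the labeled portion of DBRD is balanced by construction.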

# Attribution

Please use the following citation when making use of this dataset in your work:

```bibtex
@article{DBLP:journals/corr/abs-1910-00896,
  author    = {Benjamin van der Burgh and
               Suzan Verberne},
  title     = {The merits of Universal Language Model Fine-tuning for Small Datasets
               - a case with Dutch book reviews},
  journal   = {CoRR},
  volume    = {abs/1910.00896},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.00896},
  archivePrefix = {arXiv},
  eprint    = {1910.00896},
  timestamp = {Fri, 04 Oct 2019 12:28:06 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1910-00896.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

# Acknowledgements (as per the Git repository)

This dataset was created for testing out the ULMFiT (by Jeremy Howard and Sebastian Ruder) deep learning algorithm for text classification. It is implemented in the fastai Python library, which has taught me a lot. I'd also like to thank Timo Block for making his 10kGNAD dataset publicly available and giving me a starting point for this dataset. The dataset structure is based on the Large Movie Review Dataset by Andrew L. Maas et al. Thanks to Andreas van Cranenburgh for pointing out a problem with the dataset.

And of course I'd like to thank all the reviewers on Hebban for having taken the time to write all these reviews. You've made both book enthusiasts and NLP researchers very happy :)
