---
license: cc-by-sa-4.0
language:
- sl
- en
- cs
- bs
- hr
- sr
- sk
tags:
- sentiment
- classification
- parliament
- parlament
pretty_name: ParlaSent
size_categories:
- 10K<n<100K
configs:
- config_name: EN
  split: train_dev_test
  data_files: ParlaSent_EN.jsonl
- config_name: BCS
  split: train_dev_test
  data_files: ParlaSent_BCS.jsonl
- config_name: CZ
  split: train_dev_test
  data_files: ParlaSent_CZ.jsonl
- config_name: SK
  split: train_dev_test
  data_files: ParlaSent_SK.jsonl
- config_name: SL
  split: train_dev_test
  data_files: ParlaSent_SL.jsonl
- config_name: EN_additional_test
  split: test
  data_files: ParlaSent_EN_test.jsonl
- config_name: BCS_additional_test
  split: test
  data_files: ParlaSent_BCS_test.jsonl
task_categories:
- text-classification
---

# The multilingual sentiment dataset of parliamentary debates ParlaSent 1.0

## Dataset Description

- **Repository: [Clarin.si repo](http://hdl.handle.net/11356/1868)** 
- **Paper: https://arxiv.org/abs/2309.09783** 

### Dataset Summary

This dataset was created and used for sentiment-analysis experiments on parliamentary proceedings.

The dataset consists of five training datasets and two test sets. The test-set files carry a `_test.jsonl` suffix and appear in the Dataset Viewer as `_additional_test` configurations.

Each test set consists of 2,600 sentences, annotated by one highly trained annotator. The training datasets were internally split into "train", "dev" and "test" portions for performing language-specific experiments.

The 6-level annotation schema used by the annotators is the following:
- Positive for sentences that are entirely or predominantly positive
- Negative for sentences that are entirely or predominantly negative
- M_Positive for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the positive sentiment
- M_Negative for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the negative sentiment
- P_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the positive sentiment
- N_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the negative sentiment
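
The 6-level schema collapses onto the dataset's 3-level `label` attribute. The sketch below shows one plausible mapping, assumed from the label names (the M_* labels fall to the polarity they lean towards, the *_Neutral labels to neutral); the authoritative mapping is the one described in the paper.

```python
# Assumed collapse of the 6-level annotation schema to the 3-level label;
# see the paper for the authoritative definition.
SIX_TO_THREE = {
    "Positive": "positive",
    "M_Positive": "positive",
    "Negative": "negative",
    "M_Negative": "negative",
    "P_Neutral": "neutral",
    "N_Neutral": "neutral",
}

def collapse(label_6: str) -> str:
    """Map a 6-level annotation to its 3-level sentiment label."""
    return SIX_TO_THREE[label_6]

print(collapse("M_Positive"))  # positive
```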

The dataset is described in detail in our [paper](https://arxiv.org/abs/2309.09783).


### Data Attributes
The attributes in the training data are the following:
- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first annotator's annotation
- annotator2 - second annotator's annotation
- reconciliation - the final label agreed upon after reconciliation
- label - three-level (positive, negative, neutral) label based on the reconciliation label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- split - whether the sentence is to be used as a training, development or testing instance when evaluation is performed on the training portion of the dataset
- ruling - whether the MP was part of the coalition or the opposition at the time of giving the speech
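
Since each training file is in JSON-lines format and carries the `split` attribute, the train/dev/test portions can be separated with pandas (one of the libraries listed for this dataset). A minimal, self-contained sketch with invented stand-in rows; with the real data you would read e.g. `ParlaSent_SL.jsonl` instead:

```python
import io
import pandas as pd

# Invented stand-in rows mimicking the JSON-lines layout described above.
sample = io.StringIO(
    '{"sentence": "First.", "label": "positive", "split": "train"}\n'
    '{"sentence": "Second.", "label": "neutral", "split": "dev"}\n'
    '{"sentence": "Third.", "label": "negative", "split": "test"}\n'
)

# With the real file: df = pd.read_json("ParlaSent_SL.jsonl", lines=True)
df = pd.read_json(sample, lines=True)

# One DataFrame per portion, keyed by the "split" value.
portions = {name: part for name, part in df.groupby("split")}
print({name: len(part) for name, part in portions.items()})
```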

The attributes in the test data (`_test.jsonl` files) are the following:
- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - the first (and only) annotator's annotation, used as the final annotation
- label - three-level (positive, negative, neutral) label based on the annotator1 label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- ruling - whether the MP was part of the coalition or the opposition at the time of giving the speech
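
The test files can be read the same way. A self-contained sketch with invented two-row sample data (field names from the attribute list above, values made up); with the real file you would read e.g. `ParlaSent_EN_test.jsonl`:

```python
import io
import pandas as pd

# Invented sample rows in the test-file layout described above.
sample = io.StringIO(
    '{"sentence": "Example.", "country": "GB", "annotator1": "Positive",'
    ' "label": "positive"}\n'
    '{"sentence": "Another.", "country": "GB", "annotator1": "N_Neutral",'
    ' "label": "neutral"}\n'
)

# With the real file: df = pd.read_json("ParlaSent_EN_test.jsonl", lines=True)
df = pd.read_json(sample, lines=True)
print(df["label"].value_counts().to_dict())
```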

### Citation information

Please cite the following paper:
```
@article{Mochtak_Rupnik_Ljubešić_2023,
 title={The ParlaSent multilingual training dataset for sentiment identification in parliamentary proceedings},
 rights={All rights reserved},
 url={http://arxiv.org/abs/2309.09783},
 abstractNote={Sentiments inherently drive politics. How we receive and process information plays an essential role in political decision-making, shaping our judgment with strategic consequences both on the level of legislators and the masses. If sentiment plays such an important role in politics, how can we study and measure it systematically? The paper presents a new dataset of sentiment-annotated sentences, which are used in a series of experiments focused on training a robust sentiment classifier for parliamentary proceedings. The paper also introduces the first domain-specific LLM for political science applications additionally pre-trained on 1.72 billion domain-specific words from proceedings of 27 European parliaments. We present experiments demonstrating how the additional pre-training of LLM on parliamentary data can significantly improve the model downstream performance on the domain-specific tasks, in our case, sentiment detection in parliamentary proceedings. We further show that multilingual models perform very well on unseen languages and that additional data from other languages significantly improves the target parliament’s results. The paper makes an important contribution to multiple domains of social sciences and bridges them with computer science and computational linguistics. Lastly, it sets up a more robust approach to sentiment analysis of political texts in general, which allows scholars to study political sentiment from a comparative perspective using standardized tools and techniques.},
 note={arXiv:2309.09783 [cs]},
 number={arXiv:2309.09783},
 publisher={arXiv},
 author={Mochtak, Michal and Rupnik, Peter and Ljubešić, Nikola},
 year={2023},
 month={Sep},
 language={en}
}

```