# ScandiSent
Sentiment Corpus for Swedish 🇸🇪 Norwegian 🇳🇴 Danish 🇩🇰 Finnish 🇫🇮 (and English 🏴)
## Information
The corpus is crawled from [se.trustpilot.com](https://se.trustpilot.com/), [no.trustpilot.com](https://no.trustpilot.com/), [dk.trustpilot.com](https://dk.trustpilot.com/), [fi.trustpilot.com](https://fi.trustpilot.com/) and [trustpilot.com](https://trustpilot.com/).
It consists of reviews from all 22 of the corresponding categories:
```javascript
categories = ['animals_pets', 'electronics_technology', 'events_entertainment', 'vehicles_transportation',
'business_services', 'health_medical', 'home_garden', 'hobbies_crafts', 'home_services',
'legal_services_government', 'construction_manufactoring', 'food_beverages_tobacco', 'media_publishing',
'money_insurance', 'travel_vacation', 'restaurants_bars', 'public_local_services', 'shopping_fashion',
'education_training', 'beauty_wellbeing', 'sports', 'housing_utility_company']
```
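For orientation, here is a minimal sketch of how the per-language category pages could be enumerated, reusing the `categories` list above. The `/categories/<slug>` URL pattern and the domain-to-language mapping are assumptions for illustration, not the exact crawler used for ScandiSent.

```python
# Hypothetical enumeration of the Trustpilot category pages per language.
# Assumes the "/categories/<slug>" URL pattern; reuses `categories` defined above.
domains = {
    "sv": "https://se.trustpilot.com",
    "no": "https://no.trustpilot.com",
    "da": "https://dk.trustpilot.com",
    "fi": "https://fi.trustpilot.com",
    "en": "https://www.trustpilot.com",
}

category_urls = {
    lang: [f"{base}/categories/{slug}" for slug in categories]
    for lang, base in domains.items()
}

print(len(category_urls["sv"]))  # 22 category pages per language
```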
The size for each language is 10 000 texts, evenly balanced between positive and negative reviews. A positive review is a text rated `4` or `5`, and a negative review is a text rated `1` or `2`; texts rated `3` were not used. The zip files contain one csv file per language with the columns `text` and `label`, where `label` == `1` marks a positive review and `label` == `0` marks a negative review.
For our paper, [Should we Stop Training More Monolingual Models, and Simply Use Machine Translation Instead?](https://arxiv.org/pdf/2104.10441.pdf), we used the first 7500 texts for training and the last 2500 texts for evaluation.
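As a minimal sketch, the per-language CSV files can be loaded and split into the 7500/2500 train/evaluation portions described above; the file name `sv.csv` is an assumed example, so substitute the CSVs extracted from the zip files.

```python
# Load one per-language CSV and reproduce the 7500/2500 train/eval split.
import pandas as pd

df = pd.read_csv("sv.csv")          # columns: text, label (1 = positive, 0 = negative)

train_df = df.iloc[:7500]           # first 7500 texts for training
eval_df = df.iloc[7500:]            # last 2500 texts for evaluation

print(train_df["label"].value_counts())  # roughly balanced between 0 and 1
```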
#### [ScandiSent.zip](ScandiSent.zip) 🇸🇪 🇳🇴 🇩🇰 🇫🇮 + 🏴
Contains the raw data for each language. We used [fastText](https://fasttext.cc/docs/en/language-identification.html) language identification to ensure that each text is in the expected language.
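A hedged sketch of such a fastText-based filter is shown below; it uses the pretrained `lid.176.ftz` identification model from the fastText website, while the confidence threshold and example texts are illustrative assumptions rather than the exact settings used here.

```python
# Keep only texts whose predicted language matches the expected one.
import fasttext

lid_model = fasttext.load_model("lid.176.ftz")  # pretrained language-ID model

def is_language(text: str, expected: str = "sv", threshold: float = 0.5) -> bool:
    # fastText predict() expects single-line input, hence the newline replacement.
    labels, probs = lid_model.predict(text.replace("\n", " "))
    return labels[0] == f"__label__{expected}" and probs[0] >= threshold

texts = ["Mycket bra service och snabb leverans!", "Great service and fast delivery!"]
swedish_only = [t for t in texts if is_language(t, "sv")]
```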
#### [ScandiSent-mt.zip](ScandiSent-mt.zip) 🏴
Consists of the raw data from `ScandiSent` machine translated to English 🏴 using Google's Neural Machine Translation API.
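For reference, a minimal sketch of translating a review with the Google Cloud Translation client (Basic/v2 edition) follows; the specific client, project setup, and credentials handling are assumptions, not necessarily the exact pipeline used to build `ScandiSent-mt`.

```python
# Translate a single review to English with the Google Cloud Translation API (v2).
from google.cloud import translate_v2 as translate

client = translate.Client()  # requires GOOGLE_APPLICATION_CREDENTIALS to be set

result = client.translate(
    "Mycket bra service och snabb leverans!",
    source_language="sv",
    target_language="en",
)
print(result["translatedText"])
```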
## Version 1.0
2021-02-06
---
license: openrail
task_categories:
- text-classification
language:
- sv
- 'no'
- da
- en
- fi
pretty_name: ScandiSent
size_categories:
- 1K<n<10K
---
# Dataset Card for ScandiSent

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]