Update README.md
README.md CHANGED
@@ -3,7 +3,8 @@ language: multilingual
 license: mit
 tags:
 - emotion
--
+- emotion-analysis
+- multilingual
 
 widget:
 - text: "Guarda! ci sono dei bellissimi capibara!"
@@ -16,6 +17,11 @@ widget:
 
 ---
 
+#
+[Federico Bianchi](https://federicobianchi.io/) •
+[Debora Nozza](http://dnozza.github.io/) •
+[Dirk Hovy](http://www.dirkhovy.com/)
+
 ## Abstract
 
 Detecting emotion in text allows social and computational scientists to study how people behave and react to online events. However, developing these tools for different languages requires data that is not always available. This paper collects the available emotion detection datasets across 19 languages. We train a multilingual emotion prediction model for social media data, XLM-EMO. The model shows competitive performance in a zero-shot setting, suggesting it is helpful in the context of low-resource languages. We release our model to the community so that interested researchers can directly use it.
@@ -29,7 +35,7 @@ This model is the fine-tuned version of the [XLM-T](https://arxiv.org/abs/2104.1
 This model had an F1 of 0.85 on the test set.
 
 ## Citation
-Please use the following
+Please use the following BibTeX entry if you use this model in your project:
 ```
 @inproceedings{bianchi2021feel,
 title = "{XLM-EMO: Multilingual Emotion Prediction in Social Media Text}",
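
The card states that the released model can be used directly. As a quick illustration (not part of the diff itself), here is a minimal sketch of querying the fine-tuned checkpoint through the Hugging Face `transformers` text-classification pipeline; the Hub model identifier used below is an assumption, not something stated in this README.

```python
# Minimal usage sketch for the released XLM-EMO model via the `transformers`
# text-classification pipeline. The Hub model ID below is an assumption;
# adjust it to the actual identifier of the published checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MilaNLProc/xlm-emo-t",  # assumed Hub ID
    top_k=None,                    # return a score for every emotion label
)

# Same example text as the card's widget
# (Italian: "Look! there are some beautiful capybaras!")
predictions = classifier("Guarda! ci sono dei bellissimi capibara!")
print(predictions)  # list of {"label": ..., "score": ...} entries
```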