Update README.md

This is [DeBERTa-v3](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli) fine-tuned for the Emotion Cause Extraction (ECE) task.

Given an input text, i.e. a sequence of tokens describing a situation with emotional coloring, the task is to determine the subset of tokens that justify the speaker's emotional state. Formally, it is convenient to treat the problem as binary token classification, where a label of one means that the corresponding token belongs to the desired subset.
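
To make that formulation concrete, here is a minimal usage sketch with 🤗 Transformers. The model id below is a placeholder for this repository's id, and the example sentence is invented for illustration; per the description above, label 1 marks tokens in the emotion-cause subset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Placeholder: substitute the Hugging Face id (or local path) of this model.
MODEL_ID = "<this-model-id>"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(MODEL_ID)

text = "I finally passed my driving test after failing it twice."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

# Label 1 = the token belongs to the emotion-cause subset, label 0 = it does not.
labels = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

cause_tokens = [tok for tok, lab in zip(tokens, labels) if lab == 1]
print(cause_tokens)
```

Merging sub-word pieces back into whole words (e.g. via the tokenizer's word alignment) is left out for brevity.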
## Training

The code used to train this model is available on my [GitHub](https://github.com/akira225/emotion-cause-detection).
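
For orientation only, the snippet below is a generic sketch of fine-tuning the base checkpoint for binary token classification with the 🤗 Trainer. The toy dataset, label alignment, and hyperparameters are illustrative assumptions, not the actual training setup; see the linked repository for the real code.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification, Trainer, TrainingArguments)

BASE = "sileod/deberta-v3-base-tasksource-nli"  # base checkpoint named in this card
tokenizer = AutoTokenizer.from_pretrained(BASE)
# A fresh 2-way token-classification head replaces the base model's NLI head.
model = AutoModelForTokenClassification.from_pretrained(
    BASE, num_labels=2, ignore_mismatched_sizes=True
)

# Toy training data: words of a situation plus per-word 0/1 cause labels (illustrative only).
raw = Dataset.from_dict({
    "words": [["I", "lost", "my", "wallet", "on", "the", "bus", "today"]],
    "word_labels": [[0, 1, 1, 1, 0, 0, 0, 0]],
})

def tokenize_and_align(example):
    enc = tokenizer(example["words"], is_split_into_words=True, truncation=True)
    # Propagate each word's label to its sub-word pieces; mask special tokens with -100.
    enc["labels"] = [
        -100 if wid is None else example["word_labels"][wid] for wid in enc.word_ids()
    ]
    return enc

train_dataset = raw.map(tokenize_and_align, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ece-deberta", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-5),
    train_dataset=train_dataset,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```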
## Evaluation

The model has the following results on [EmoCause](https://github.com/skywalker023/focused-empathy) and [EmpatheticDialogues](https://github.com/facebookresearch/EmpatheticDialogues):

| Accuracy | Top-1 Recall | Top-3 Recall | Top-5 Recall |
| ------------- | ------------- | ------------- | ------------- |
| 0.59 | 0.249 | 0.623 | 0.806 |
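
For reference, the sketch below shows one straightforward reading of the top-k recall columns: rank the words of a situation by their predicted cause probability and measure what fraction of the gold cause words lands in the top k. This is an illustrative assumption about the metric, not necessarily the exact evaluation script used for the numbers above.

```python
def top_k_recall(word_scores, gold_cause_indices, k):
    """Fraction of gold cause words found among the k highest-scoring words."""
    ranked = sorted(range(len(word_scores)), key=lambda i: word_scores[i], reverse=True)
    hits = set(ranked[:k]) & set(gold_cause_indices)
    return len(hits) / len(gold_cause_indices)

# Toy example: 8 words, the gold emotion cause is words 1-3.
scores = [0.05, 0.90, 0.15, 0.80, 0.10, 0.05, 0.02, 0.30]
gold = {1, 2, 3}
print(top_k_recall(scores, gold, k=1))  # 0.33... (only word 1 is in the top 1)
print(top_k_recall(scores, gold, k=3))  # 0.66... (words 1 and 3 are in the top 3)
```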