Update README.md

Optimizing the threshold per label for the value that gives the optimum F1 yields the following per-label metrics:

| surprise | 0.977 | 0.543 | 0.674 | 0.601 | 0.593 | 141 | 0.15 |
| neutral | 0.758 | 0.598 | 0.810 | 0.688 | 0.513 | 1787 | 0.25 |
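
As a rough illustration of how such per-label thresholds can be found, the sketch below walks each label's precision-recall curve and keeps the cut-off that maximizes F1. It assumes scikit-learn is available; `Y_true`, `Y_prob`, and `labels` are hypothetical stand-ins (filled with synthetic data here), not names from this repository.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_f1_threshold(y_true, y_prob):
    """Return (threshold, F1) for the cut-off that maximizes F1 on one label."""
    precision, recall, thresholds = precision_recall_curve(y_true, y_prob)
    # precision and recall carry one more entry than thresholds (a final
    # precision=1, recall=0 point); drop it so the arrays align.
    f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
    best = int(np.argmax(f1))
    return float(thresholds[best]), float(f1[best])

# Hypothetical stand-in data: (n_samples, n_labels) probabilities and targets.
rng = np.random.default_rng(0)
Y_prob = rng.random((1000, 3))
Y_true = (Y_prob + rng.normal(0.0, 0.3, Y_prob.shape) > 0.5).astype(int)
labels = ["surprise", "neutral", "relief"]

per_label_threshold = {
    label: best_f1_threshold(Y_true[:, i], Y_prob[:, i])
    for i, label in enumerate(labels)
}
```
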
This improves the overall metrics (the unweighted macro average across labels):

- Precision: 0.542
- Recall: 0.577
- F1: 0.541

Or, if each label's contribution is instead weighted by the relative size of its support (both averaging schemes are sketched after the lists):

- Precision: 0.572
- Recall: 0.677
- F1: 0.611
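
A minimal sketch of the two averaging schemes, continuing the hypothetical names from the previous snippet and assuming multilabel indicator arrays:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Binarize with the tuned per-label thresholds from the previous sketch.
thresholds = np.array([per_label_threshold[label][0] for label in labels])
Y_pred = (Y_prob >= thresholds).astype(int)  # broadcasts across label columns

# Unweighted mean of the per-label scores.
p_macro, r_macro, f1_macro, _ = precision_recall_fscore_support(
    Y_true, Y_pred, average="macro", zero_division=0
)

# Mean weighted by each label's support (its number of true instances).
p_wtd, r_wtd, f1_wtd, _ = precision_recall_fscore_support(
    Y_true, Y_pred, average="weighted", zero_division=0
)
```

With `average="weighted"`, high-support labels such as neutral dominate the mean, which is consistent with the weighted figures above sitting higher than the macro ones.
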
#### Commentary on the dataset
Some labels (e.g. gratitude) perform very strongly when considered independently, with F1 exceeding 0.9, whilst others (e.g. relief) perform very poorly.