---
language:
- ru

tags:
- sentiment analysis
- Russian
---

## XLM-RoBERTa-Large-ru-sentiment-RuSentiment
XLM-RoBERTa-Large-ru-sentiment-RuSentiment is an [XLM-RoBERTa-Large](https://huggingface.co/xlm-roberta-large) model fine-tuned on the [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
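
A minimal usage sketch via the Hugging Face Transformers pipeline API. The model ID below is an assumption inferred from this card's title, not confirmed by the card itself; adjust it to the actual repository path if it differs:

```python
from transformers import pipeline

# Hypothetical model ID inferred from the card title -- adjust if needed.
model_id = "sismetanin/xlm_roberta_large-ru-sentiment-rusentiment"

# Build a sentiment-analysis pipeline backed by this fine-tuned checkpoint.
classifier = pipeline("sentiment-analysis", model=model_id)

# "This was a great movie!" in Russian.
print(classifier("Это было отличное кино!"))
# -> e.g. [{'label': ..., 'score': ...}]; labels depend on the checkpoint config.
```

The leaderboard below compares this model against other baselines on Russian-language sentiment datasets.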
<table>
<thead>
  <tr>
    <th rowspan="4">Model</th>
    <th rowspan="4">Score</th>
    <th rowspan="4">Rank</th>
    <th colspan="12">Dataset</th>
  </tr>
  <tr>
    <td colspan="6">SentiRuEval-2016</td>
    <td colspan="2" rowspan="2">RuSentiment</td>
    <td rowspan="2">KRND</td>
    <td rowspan="2">LINIS Crowd</td>
    <td rowspan="2">RuTweetCorp</td>
    <td rowspan="2">RuReviews</td>
  </tr>
  <tr>
    <td colspan="3">TC</td>
    <td colspan="3">Banks</td>
  </tr>
  <tr>
    <td>micro F1</td>
    <td>macro F1</td>
    <td>F1</td>
    <td>micro F1</td>
    <td>macro F1</td>
    <td>F1</td>
    <td>weighted F1</td>
    <td>F1</td>
    <td>F1</td>
    <td>F1</td>
    <td>F1</td>
    <td>F1</td>
  </tr>
</thead>
<tbody>
  <tr>
    <td>SOTA</td>
    <td>n/s</td>
    <td></td>
    <td>76.71</td>
    <td>66.40</td>
    <td>70.68</td>
    <td>67.51</td>
    <td>69.53</td>
    <td>74.06</td>
    <td>78.50</td>
    <td>n/s</td>
    <td>73.63</td>
    <td>60.51</td>
    <td>83.68</td>
    <td>77.44</td>
  </tr>
  <tr>
    <td>XLM-RoBERTa-Large</td>
    <td>76.37</td>
    <td>1</td>
    <td>82.26</td>
    <td>76.36</td>
    <td>79.42</td>
    <td>76.35</td>
    <td>76.08</td>
    <td>80.89</td>
    <td>78.31</td>
    <td>75.27</td>
    <td>75.17</td>
    <td>60.03</td>
    <td>88.91</td>
    <td>78.81</td>
  </tr>
  <tr>
    <td>SBERT-Large</td>
    <td>75.43</td>
    <td>2</td>
    <td>78.40</td>
    <td>71.36</td>
    <td>75.14</td>
    <td>72.39</td>
    <td>71.87</td>
    <td>77.72</td>
    <td>78.58</td>
    <td>75.85</td>
    <td>74.20</td>
    <td>60.64</td>
    <td>88.66</td>
    <td>77.41</td>
  </tr>
  <tr>
    <td>MBARTRuSumGazeta</td>
    <td>74.70</td>
    <td>3</td>
    <td>76.06</td>
    <td>68.95</td>
    <td>73.04</td>
    <td>72.34</td>
    <td>71.93</td>
    <td>77.83</td>
    <td>76.71</td>
    <td>73.56</td>
    <td>74.18</td>
    <td>60.54</td>
    <td>87.22</td>
    <td>77.51</td>
  </tr>
  <tr>
    <td>Conversational RuBERT</td>
    <td>74.44</td>
    <td>4</td>
    <td>76.69</td>
    <td>69.09</td>
    <td>73.11</td>
    <td>69.44</td>
    <td>68.68</td>
    <td>75.56</td>
    <td>77.31</td>
    <td>74.40</td>
    <td>73.10</td>
    <td>59.95</td>
    <td>87.86</td>
    <td>77.78</td>
  </tr>
  <tr>
    <td>LaBSE</td>
    <td>74.11</td>
    <td>5</td>
    <td>77.00</td>
    <td>69.19</td>
    <td>73.55</td>
    <td>70.34</td>
    <td>69.83</td>
    <td>76.38</td>
    <td>74.94</td>
    <td>70.84</td>
    <td>73.20</td>
    <td>59.52</td>
    <td>87.89</td>
    <td>78.47</td>
  </tr>
  <tr>
    <td>XLM-RoBERTa-Base</td>
    <td>73.60</td>
    <td>6</td>
    <td>76.35</td>
    <td>69.37</td>
    <td>73.42</td>
    <td>68.45</td>
    <td>67.45</td>
    <td>74.05</td>
    <td>74.26</td>
    <td>70.44</td>
    <td>71.40</td>
    <td>60.19</td>
    <td>87.90</td>
    <td>78.28</td>
  </tr>
  <tr>
    <td>RuBERT</td>
    <td>73.45</td>
    <td>7</td>
    <td>74.03</td>
    <td>66.14</td>
    <td>70.75</td>
    <td>66.46</td>
    <td>66.40</td>
    <td>73.37</td>
    <td>75.49</td>
    <td>71.86</td>
    <td>72.15</td>
    <td>60.55</td>
    <td>86.99</td>
    <td>77.41</td>
  </tr>
  <tr>
    <td>MBART-50-Large-Many-to-Many</td>
    <td>73.15</td>
    <td>8</td>
    <td>75.38</td>
    <td>67.81</td>
    <td>72.26</td>
    <td>67.13</td>
    <td>66.97</td>
    <td>73.85</td>
    <td>74.78</td>
    <td>70.98</td>
    <td>71.98</td>
    <td>59.20</td>
    <td>87.05</td>
    <td>77.24</td>
  </tr>
  <tr>
    <td>SlavicBERT</td>
    <td>71.96</td>
    <td>9</td>
    <td>71.45</td>
    <td>63.03</td>
    <td>68.44</td>
    <td>64.32</td>
    <td>63.99</td>
    <td>71.31</td>
    <td>72.13</td>
    <td>67.57</td>
    <td>72.54</td>
    <td>58.70</td>
    <td>86.43</td>
    <td>77.16</td>
  </tr>
  <tr>
    <td>EnRuDR-BERT</td>
    <td>71.51</td>
    <td>10</td>
    <td>72.56</td>
    <td>64.74</td>
    <td>69.07</td>
    <td>61.44</td>
    <td>60.21</td>
    <td>68.34</td>
    <td>74.19</td>
    <td>69.94</td>
    <td>69.33</td>
    <td>56.55</td>
    <td>87.12</td>
    <td>77.95</td>
  </tr>
  <tr>
    <td>RuDR-BERT</td>
    <td>71.14</td>
    <td>11</td>
    <td>72.79</td>
    <td>64.23</td>
    <td>68.36</td>
    <td>61.86</td>
    <td>60.92</td>
    <td>68.48</td>
    <td>74.65</td>
    <td>70.63</td>
    <td>68.74</td>
    <td>54.45</td>
    <td>87.04</td>
    <td>77.91</td>
  </tr>
  <tr>
    <td>MBART-50-Large</td>
    <td>69.46</td>
    <td>12</td>
    <td>70.91</td>
    <td>62.67</td>
    <td>67.24</td>
    <td>61.12</td>
    <td>60.25</td>
    <td>68.41</td>
    <td>72.88</td>
    <td>68.63</td>
    <td>70.52</td>
    <td>46.39</td>
    <td>86.48</td>
    <td>77.52</td>
  </tr>
</tbody>
</table>

The table shows per-task scores and a macro-average of those scores, which determines a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the task score when computing the overall macro-average. The same model-comparison strategy is applied in the GLUE benchmark.
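
As an illustration, here is a toy sketch of this scoring scheme; the dataset names and metric values below are placeholders, not the leaderboard numbers:

```python
# Toy illustration of the leaderboard scoring scheme described above.
# Metric values are placeholders, not actual leaderboard results.
task_metrics = {
    "RuSentiment": [75.0, 78.0],  # multiple metrics: averaged first
    "RuReviews": [77.0],
    "RuTweetCorp": [88.0],
}

# Step 1: unweighted average of the metrics within each task.
task_scores = {task: sum(m) / len(m) for task, m in task_metrics.items()}

# Step 2: macro-average across tasks gives the overall leaderboard score.
overall_score = sum(task_scores.values()) / len(task_scores)
print(round(overall_score, 2))
```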

## Citation
If you find this repository helpful, feel free to cite our publication:

```
@article{Smetanin2021Deep,
    author = {Sergey Smetanin and Mikhail Komarov},
    title = {Deep transfer learning baselines for sentiment analysis in Russian},
    journal = {Information Processing \& Management},
    volume = {58},
    number = {3},
    pages = {102484},
    year = {2021},
    issn = {0306-4573},
    doi = {10.1016/j.ipm.2020.102484}
}
```

Dataset:
```
@inproceedings{rogers2018rusentiment,
    title = {RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
    author = {Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
    booktitle = {Proceedings of the 27th International Conference on Computational Linguistics},
    pages = {755--763},
    year = {2018}
}
```