Update README.md
README.md (CHANGED)
@@ -17,7 +17,7 @@ datasets:

[MLSUM es](https://huggingface.co/datasets/viewer/?dataset=mlsum)

## Results

| Set | Metric | Value |
|-----|--------|-------|

@@ -27,6 +27,20 @@ datasets:

| Test | Rouge1 - fmeasure | 28.83 |
| Test | RougeL - fmeasure | 23.15 |

Raw metrics using HF/metrics `rouge`:

```python
rouge = datasets.load_metric("rouge")
rouge.compute(predictions=results["pred_summary"], references=results["summary"])

{'rouge1': AggregateScore(low=Score(precision=0.30393366820245, recall=0.27905239591639935, fmeasure=0.283148902808752), mid=Score(precision=0.3068521142101569, recall=0.2817252494122592, fmeasure=0.28560373425206464), high=Score(precision=0.30972608774202665, recall=0.28458152325781716, fmeasure=0.2883786700591887)),
 'rougeL': AggregateScore(low=Score(precision=0.24184668819794716, recall=0.22401171380621518, fmeasure=0.22624104698839514), mid=Score(precision=0.24470388406868163, recall=0.22665793214539162, fmeasure=0.2289118878817394), high=Score(precision=0.2476594458951327, recall=0.22932683203591905, fmeasure=0.23153001570662513))}

rouge.compute(predictions=results["pred_summary"], references=results["summary"], rouge_types=["rouge2"])["rouge2"].mid

Score(precision=0.11423200347113865, recall=0.10588038944902506, fmeasure=0.1069921217219595)
```
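
For context, `results` in the snippet above is a dataset that holds both the reference `summary` column and a model-generated `pred_summary` column. The card does not show how it is built, so the following is only a minimal sketch of one way to produce it with the `transformers` summarization pipeline on the MLSUM es test split; the model id placeholder, the batch size, and the helper name `add_predictions` are illustrative assumptions, not taken from the card:

```python
from datasets import load_dataset, load_metric
from transformers import pipeline

# Placeholder: replace with this model's Hub id.
model_id = "..."

# MLSUM Spanish test split: provides "text" (article) and "summary" (reference) columns.
test = load_dataset("mlsum", "es", split="test")

# Summarization pipeline; device=0 assumes a GPU is available (use -1 for CPU).
summarizer = pipeline("summarization", model=model_id, device=0)

def add_predictions(batch):
    # Generate a summary for each article and store it in a new "pred_summary" column.
    outputs = summarizer(batch["text"], truncation=True)
    batch["pred_summary"] = [out["summary_text"] for out in outputs]
    return batch

results = test.map(add_predictions, batched=True, batch_size=8)

# Same metric call as in the card.
rouge = load_metric("rouge")
scores = rouge.compute(predictions=results["pred_summary"], references=results["summary"])
print(scores["rouge1"].mid, scores["rougeL"].mid)
```

On newer library versions `datasets.load_metric` is deprecated; the equivalent metric is loaded with `evaluate.load("rouge")` from the separate `evaluate` package.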

## Usage

```python