Commit ca76b17 by oliverguhr
Parent(s): a4b84ab
updated readme

README.md CHANGED

@@ -8,7 +8,9 @@ language:
 tags:
 - punctuation prediction
 - punctuation
 datasets:
+- wmt/europarl
+- SoNaR
 license: mit
 widget:
 - text: "Ho sentito che ti sei laureata il che mi fa molto piacere"

@@ -25,4 +27,67 @@ metrics:
 - f1
 ---

This model predicts the punctuation of English, Italian, French, German and Dutch texts. We developed it to restore the punctuation of transcribed spoken language.

This multilingual model was trained on the [Europarl Dataset](https://huggingface.co/datasets/wmt/europarl) provided by the [SEPP-NLG Shared Task](https://sites.google.com/view/sentence-segmentation); for Dutch we additionally included the [SoNaR Dataset](http://hdl.handle.net/10032/tm-a2-h5). *Please note that the Europarl data consists of political speeches, so the model might perform differently on texts from other domains.*

The model restores the following punctuation markers: **"." "," "?" "-" ":"**

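The easiest way to use the model is the `deepmultilingualpunctuation` package described in the next section. If you only need raw per-token labels, the model can also be loaded directly with a Hugging Face `token-classification` pipeline. The snippet below is a minimal sketch rather than official sample code; it assumes the label names match the markers listed above (with `"0"` meaning "no punctuation", as in the prediction output further down), and it returns subword-level predictions that you would still have to merge into words yourself:

```python
from transformers import pipeline

# Load the model through the generic token-classification pipeline.
punct = pipeline(
    "token-classification",
    model="oliverguhr/fullstop-punctuation-multilingual-sonar-base",
)

text = "My name is Clara and I live in Berkeley California"

# Each prediction holds a (subword) token, its punctuation label and a confidence score.
for pred in punct(text):
    print(pred["word"], pred["entity"], round(pred["score"], 3))
```
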
## Sample Code

We provide a simple Python package that allows you to process text of any length.

## Install

To get started, install the package from [PyPI](https://pypi.org/project/deepmultilingualpunctuation/):

```bash
pip install deepmultilingualpunctuation
```

### Restore Punctuation

```python
from deepmultilingualpunctuation import PunctuationModel

model = PunctuationModel(model="oliverguhr/fullstop-punctuation-multilingual-sonar-base")
text = "My name is Clara and I live in Berkeley California Ist das eine Frage Frau Müller"
result = model.restore_punctuation(text)
print(result)
```

**output**
> My name is Clara and I live in Berkeley, California. Ist das eine Frage, Frau Müller?

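Because the package accepts text of any length, the same call also works for a complete transcript. A minimal sketch, where `transcript.txt` is just a placeholder for your own ASR output:

```python
from deepmultilingualpunctuation import PunctuationModel

model = PunctuationModel(model="oliverguhr/fullstop-punctuation-multilingual-sonar-base")

# "transcript.txt" is a placeholder path; point it at your own transcript file.
with open("transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

print(model.restore_punctuation(transcript))
```
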
### Predict Labels

```python
from deepmultilingualpunctuation import PunctuationModel

model = PunctuationModel(model="oliverguhr/fullstop-punctuation-multilingual-sonar-base")
text = "My name is Clara and I live in Berkeley California Ist das eine Frage Frau Müller"
clean_text = model.preprocess(text)
labeled_words = model.predict(clean_text)
print(labeled_words)
```

**output**

> [['My', '0', 0.99998856], ['name', '0', 0.9999708], ['is', '0', 0.99975926], ['Clara', '0', 0.6117834], ['and', '0', 0.9999014], ['I', '0', 0.9999808], ['live', '0', 0.9999666], ['in', '0', 0.99990165], ['Berkeley', ',', 0.9941764], ['California', '.', 0.9952892], ['Ist', '0', 0.9999577], ['das', '0', 0.9999678], ['eine', '0', 0.99998224], ['Frage', ',', 0.9952265], ['Frau', '0', 0.99995995], ['Müller', '?', 0.972517]]

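Each entry pairs a word with its predicted label (`"0"` means "no punctuation") and the model's confidence. If you want to turn the labels back into text yourself, for example to apply a confidence threshold, a small helper along the following lines is enough. This is an illustrative sketch only; `restore_punctuation` already does this for you, and the threshold value is arbitrary:

```python
def rebuild_text(labeled_words, threshold=0.8):
    """Reassemble [word, label, score] triples into punctuated text.

    Labels predicted with a confidence below `threshold` are ignored.
    """
    parts = []
    for word, label, score in labeled_words:
        if label != "0" and score >= threshold:
            parts.append(word + label)
        else:
            parts.append(word)
    return " ".join(parts)

print(rebuild_text(labeled_words))
```
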
## Results

Performance differs between the individual punctuation markers, because hyphens and colons are often optional and can be substituted by a comma or a full stop. The model achieves the following F1 scores for the different languages:

| Label         | English | German | French | Italian | Dutch |
| ------------- | ------- | ------ | ------ | ------- | ----- |
| 0             | 0.990   | 0.996  | 0.991  | 0.988   | 0.994 |
| .             | 0.924   | 0.951  | 0.921  | 0.917   | 0.959 |
| ?             | 0.825   | 0.829  | 0.800  | 0.736   | 0.817 |
| ,             | 0.798   | 0.937  | 0.811  | 0.778   | 0.813 |
| :             | 0.535   | 0.608  | 0.578  | 0.544   | 0.657 |
| -             | 0.345   | 0.384  | 0.353  | 0.344   | 0.464 |
| macro average | 0.736   | 0.784  | 0.742  | 0.718   | 0.784 |
| micro average | 0.975   | 0.987  | 0.977  | 0.972   | 0.983 |