Update README.md
This model was trained on the basis of the German BERT large model from [deepset](https://huggingface.co/deepset/gbert-large).
For this purpose, we translated the sentence pairs in these datasets to German.

If you are a German speaker, you may also have a look at our blog post about zero-shot classification and our model.

### Model Details

|   | Description or Link |
|---|---|
|**Base model** | [```gbert-large```](https://huggingface.co/deepset/gbert-large) |
|**Finetuning task**| Text Pair Classification / Natural Language Inference |
|**Source datasets**| [```mnli```](https://huggingface.co/datasets/multi_nli); [```anli```](https://huggingface.co/datasets/anli); [```snli```](https://huggingface.co/datasets/snli) |

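
The NLI finetuning task above is what makes zero-shot classification possible: each candidate label is slotted into a hypothesis template, the model scores how strongly the input sequence entails each hypothesis, and the entailment logits are softmaxed across the labels. A minimal sketch of this scoring step, using made-up logits (the label names and numbers here are illustrative, not model output):

```python
import math

# Hypothetical entailment logits -- in the real pipeline these come from
# the NLI model scoring each (sequence, hypothesis) pair.
entailment_logits = {"Computer": 1.2, "Handy": 3.4, "Tablet": 0.1}

# Each candidate label is slotted into the hypothesis template.
hypothesis_template = "In diesem Satz geht es um das Thema {}."
hypotheses = [hypothesis_template.format(label) for label in entailment_logits]

# Single-label mode: softmax the entailment logits across candidate labels.
exps = {label: math.exp(z) for label, z in entailment_logits.items()}
total = sum(exps.values())
scores = {label: e / total for label, e in exps.items()}

best_label = max(scores, key=scores.get)
```
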
### Performance

The next table shows the results as well as a comparison with other German language models.

| deepset/gbert-base | 0.65 |

## How to use

The simplest way to use the model is the Hugging Face transformers pipeline tool. Just initialize the pipeline, specifying the task as "zero-shot-classification":

```python
from transformers import pipeline

zeroshot_pipeline = pipeline("zero-shot-classification",
                             model="svalabs/gbert-large-zeroshot-nli")

sequence = "Ich habe ein Problem mit meinem Iphone das so schnell wie möglich gelöst werden muss"
labels = ["Computer", "Handy", "Tablet", "dringend", "nicht dringend"]

# Since this is a monolingual model, it is sensitive to the hypothesis template;
# experiment with alternatives:
hypothesis_template = "In diesem Satz geht es um das Thema {}."
# hypothesis_template = "Dieser Satz drückt ein Gefühl von {} aus."

zeroshot_pipeline(sequence, labels, hypothesis_template=hypothesis_template)
```

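
The pipeline returns a dict whose `labels` come back sorted by descending `scores`. A sketch of reading such a result, with made-up numbers (the scores below are illustrative, not actual predictions of this model):

```python
# Hypothetical result in the shape the zero-shot pipeline returns;
# labels are already sorted by descending score.
result = {
    "sequence": "Ich habe ein Problem mit meinem Iphone ...",
    "labels": ["Handy", "dringend", "Computer", "Tablet", "nicht dringend"],
    "scores": [0.52, 0.31, 0.09, 0.05, 0.03],
}

# Top prediction, plus any labels above a confidence threshold.
top_label = result["labels"][0]
confident = [l for l, s in zip(result["labels"], result["scores"]) if s > 0.2]
print(top_label, confident)
```
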
## Other Applications

label: "Furcht, Freude, Wut, Überraschung, Traurigkeit, Ekel, Verachtung"

### Contact
- Daniel Ehnes, daniel.ehnes@sva.de
- Baran Avinc, baran.avinc@sva.de