Update README.md
README.md CHANGED
@@ -31,7 +31,9 @@ tokenizer.batch_decode(paraphrased_sentences, skip_special_tokens=True)
## Dataset

We used 50,994 question-sentence pairs, created manually, to train our model. The dataset was provided by our mentor. The sentences were extracted from the titles of topics on the popular Turkish forum website donanimhaber.com. We augmented the dataset by writing ten thousand sentences per person.
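Since the usage snippet above already relies on the `transformers` tokenizer API (see the `tokenizer.batch_decode` call in the hunk context), here is a minimal sketch of how such question-sentence pairs could be loaded and tokenized for seq2seq fine-tuning. The file name `question_pairs.tsv`, the `source`/`target` column names, and the `google/mt5-small` checkpoint are placeholders for illustration, not part of this repository.

```python
# Minimal sketch, not the project's actual training pipeline.
# Assumed: a tab-separated file with one "source question<TAB>paraphrase" pair per line.
import pandas as pd
from transformers import AutoTokenizer

# Hypothetical file and column names.
pairs = pd.read_csv("question_pairs.tsv", sep="\t", names=["source", "target"])

# Any multilingual seq2seq tokenizer works for illustration; the real checkpoint may differ.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

# Tokenize the source questions and their paraphrases into model-ready tensors;
# `text_target` fills the `labels` field expected by seq2seq models.
batch = tokenizer(
    pairs["source"].tolist(),
    text_target=pairs["target"].tolist(),
    max_length=64,
    truncation=True,
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape, batch["labels"].shape)
```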
## Authors

<a href="https://github.com/metinbinbir">Metin Binbir</a>
<a href="https://github.com/sercaksoy">Sercan Aksoy</a>