secometo committed on
Commit
85d057f
1 Parent(s): 8549a9f

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -31,7 +31,9 @@ tokenizer.batch_decode(paraphrased_sentences, skip_special_tokens=True)
 
 ## Dataset
 
-We used 50994 question sentence pairs, which are created manually, to train our model. The dataset is provided our mentor. Sentences were extracted from the titles of topics in popular Turkish forum website donanimhaber.com. We augmented the dataset by writing ten thousand sentences per person with <a href="https://github.com/sercaksoy?tab=repositories">Sercan Aksoy</a>.
+We used 50,994 manually created question-sentence pairs to train our model. The dataset was provided by our mentor. The sentences were extracted from topic titles on the popular Turkish forum website donanimhaber.com. We augmented the dataset by writing ten thousand sentences per person.
 
 
-
+## Authors
+<a href="https://github.com/metinbinbir">Metin Binbir</a>
+<a href="https://github.com/sercaksoy">Sercan Aksoy</a>