__Word2Bezbar__ models are __Word2Vec__ models trained on __French rap lyrics__ sourced from __Genius__. Tokenization was done with the __NLTK__ French `word_tokenize` function, after a preprocessing step that removed __French oral contractions__. The dataset was __323MB__ in size, corresponding to __77M tokens__.
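As an illustration, the contraction-removal step might look like the sketch below. The contraction list and replacement logic here are hypothetical (the actual list used for Word2Bezbar is not documented); the real pipeline then tokenized the expanded text with NLTK's French `word_tokenize`.

```python
import re

# Hypothetical mapping of common French oral contractions to expanded forms;
# the actual list used to preprocess the Word2Bezbar corpus is not published.
CONTRACTIONS = {
    "j'suis": "je suis",
    "t'as": "tu as",
    "y'a": "il y a",
}

def expand_contractions(text: str) -> str:
    """Replace oral contractions before tokenization."""
    for short, full in CONTRACTIONS.items():
        text = re.sub(re.escape(short), full, text, flags=re.IGNORECASE)
    return text

print(expand_contractions("y'a du son dans le quartier"))
# → "il y a du son dans le quartier"
```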
The model captures the __semantic relationships__ between words in the context of __French rap__, providing a useful tool for the study of __French slang__ and for __music lyrics analysis__.
## Model Details