---
license: apache-2.0
datasets:
- wikipedia
language:
- it
---
--------------------------------------------------------------------------------------------------

<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: Word2Vec</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>

--------------------------------------------------------------------------------------------------

<h3>Model description</h3>

This model is a <b>lightweight</b> and uncased version of <b>Word2Vec</b> <b>[1]</b> for the <b>Italian</b> language. It is implemented in Gensim and provides embeddings for 560,509 uncased Italian words in a 100-dimensional vector space, for a total model size of about 245 MB.


<h3>Training procedure</h3>

The model was trained on the Italian split of the Wikipedia dataset (about 3.7 GB, lowercased and pre-processed) for 10 epochs, using a window size of 5, a minimum word count of 10, and an initial learning rate of 2.5e-3.
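
For reference, a run with these hyperparameters could be reproduced in Gensim roughly as follows. This is a minimal sketch, not the original training script: the corpus path, the preprocessing, and any parameter not listed above (e.g. the CBOW/skip-gram choice, which here falls back to Gensim's default) are assumptions.

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Hypothetical path: assumes a lowercased, pre-processed dump of the
# Italian Wikipedia split with one sentence per line
corpus = LineSentence("./wikipedia_it_lowercased.txt")

model = Word2Vec(
    sentences=corpus,
    vector_size=100,   # 100-dimensional embeddings
    window=5,          # context window of 5
    min_count=10,      # discard words with fewer than 10 occurrences
    alpha=2.5e-3,      # initial learning rate
    epochs=10,         # 10 passes over the corpus
)

# Save only the word vectors, in the format loaded by the quick-usage example below
model.wv.save("./word2vec-light-uncased-it/word2vec.wordvectors")
```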


<h3>Quick usage</h3>

Download the files into a local folder called "word2vec-light-uncased-it", then run:

```python
from gensim.models import KeyedVectors

# Load the word vectors, memory-mapped to keep RAM usage low
model = KeyedVectors.load("./word2vec-light-uncased-it/word2vec.wordvectors", mmap='r')

model.most_similar("poesia", topn=5)
```

Expected output:

```
[('letteratura', 0.8193784356117249),
 ('poetica', 0.8115736246109009),
 ('narrativa', 0.7729100584983826),
 ('drammaturgia', 0.7576397061347961),
 ('prosa', 0.7552034854888916)]
```
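
Since the loaded object is a standard Gensim KeyedVectors instance, the usual queries also work, for example:

```python
# Cosine similarity between two vocabulary words
model.similarity("poesia", "prosa")

# Raw 100-dimensional embedding for a word (numpy array of shape (100,))
vector = model["poesia"]

# Membership test over the 560,509-word vocabulary
"poesia" in model.key_to_index
```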

<h3>Limitations</h3>

This lightweight model is trained on Wikipedia, so it is particularly suitable for natively digital text
from the World Wide Web, written in a correct and fluent form (like wikis, web pages, news, etc.).

However, it may show limitations on chaotic text containing errors and slang expressions
(like social media posts), or on domain-specific text (like medical, financial, or legal content).

<h3>References</h3>

[1] T. Mikolov et al., "Efficient Estimation of Word Representations in Vector Space", https://arxiv.org/abs/1301.3781

<h3>License</h3>

The model is released under the <b>Apache-2.0</b> license.