---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
|
|
|
## Information |
|
Pretrained Word2vec word embeddings for English, trained on the English Wikipedia dump of 2018-04-20 with a context window of 10 and 100-dimensional vectors, using the Wikipedia2Vec toolkit. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
|
|
|
## How to use

```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# Download the pretrained vectors from the Hub and load them with gensim
vectors_path = hf_hub_download(
    repo_id="Word2vec/wikipedia2vec_enwiki_20180420_win10_100d",
    filename="enwiki_20180420_win10_100d.txt",
)
model = KeyedVectors.load_word2vec_format(vectors_path)

# Nearest neighbours of a word (replace "your_word" with a word in the vocabulary)
model.most_similar("your_word")
```
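
The loaded object is a standard gensim `KeyedVectors` instance, so the usual query methods are available beyond `most_similar`. The snippet below is a small sketch of a few common operations; the example words are placeholders, and querying a word that is missing from the vocabulary raises a `KeyError`.

```python
# Cosine similarity between two in-vocabulary words
model.similarity("king", "queen")

# Raw 100-dimensional vector for a word
vec = model["king"]

# Analogy query: king - man + woman ~ ?
model.most_similar(positive=["king", "woman"], negative=["man"], topn=5)

# Check membership first to avoid a KeyError on out-of-vocabulary words
if "king" in model:
    print(model.most_similar("king", topn=5))
```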
|
|
|
## Citation |
|
```bibtex
@inproceedings{yamada2020wikipedia2vec,
  title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
  author = {Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
  year = {2020},
  publisher = {Association for Computational Linguistics},
  pages = {23--30}
}
```
|
|