Dataset schema (14 columns; string columns report min/max length, numeric columns report min/max value):

| column | dtype | min | max |
|---|---|---|---|
| modelId | string (length) | 4 | 112 |
| sha | string (length) | 40 | 40 |
| lastModified | string (length) | 24 | 24 |
| tags | sequence | | |
| pipeline_tag | string (29 classes) | | |
| private | bool (1 class) | | |
| author | string (length) | 2 | 38 |
| config | null | | |
| id | string (length) | 4 | 112 |
| downloads | float64 | 0 | 36.8M |
| likes | float64 | 0 | 712 |
| library_name | string (17 classes) | | |
| __index_level_0__ | int64 | 0 | 38.5k |
| readme | string (length) | 0 | 186k |

modelId: Helsinki-NLP/opus-mt-zh-en
sha: 6b02b2132d97136ebed2851703f2b3407ea9cf47
lastModified: 2022-07-14T08:52:32.000Z
tags: [ "pytorch", "rust", "marian", "text2text-generation", "zh", "en", "transformers", "translation", "license:cc-by-4.0", "autotrain_compatible" ]
pipeline_tag: translation
private: false
author: Helsinki-NLP
config: null
id: Helsinki-NLP/opus-mt-zh-en
downloads: 324,438
likes: 32
library_name: transformers
__index_level_0__: 100
readme:
---
language:
- zh
- en
tags:
- translation
license: cc-by-4.0
---

### zho-eng

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)

## Model Details

- **Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation
- **Language(s):**
  - Source Language: Chinese
  - Target Language: English
- **License:** CC-BY-4.0
- **Resources for more information:**
  - [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)

## Uses

#### Direct Use

This model can be used for translation and text-to-text generation.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

Further details about the dataset for this model can be found in the OPUS readme: [zho-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md)

## Training

#### System Information
* helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port_machine: brutasse
* port_time: 2020-08-21-14:41
* src_multilingual: False
* tgt_multilingual: False

#### Training Data

##### Preprocessing
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* ref_len: 82826.0
* dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt)

## Evaluation

#### Results
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.eval.txt)
* brevity_penalty: 0.948

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.eng | 36.1 | 0.548 |

## Citation Information

```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
  author    = {J{\"o}rg Tiedemann and Santhosh Thottingal},
  title     = {{OPUS-MT} -- {B}uilding open translation services for the {W}orld},
  booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
  year      = {2020},
  address   = {Lisbon, Portugal}
}
```

## How to Get Started With the Model

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
```
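
The getting-started snippet above only loads the checkpoint. A minimal end-to-end translation sketch, assuming the standard `transformers` generation API; the Chinese example sentence and `max_length` are illustrative assumptions, not from the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")

# Tokenize a Chinese sentence and generate its English translation.
inputs = tokenizer("我喜欢学习新语言。", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```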

modelId: uer/gpt2-chinese-cluecorpussmall
sha: 7c87595de655dc7b0fbfaa545ae413a118063d0b
lastModified: 2022-07-15T08:26:38.000Z
tags: [ "pytorch", "tf", "jax", "gpt2", "text-generation", "zh", "dataset:CLUECorpusSmall", "transformers" ]
pipeline_tag: text-generation
private: false
author: uer
config: null
id: uer/gpt2-chinese-cluecorpussmall
downloads: 320,804
likes: 20
library_name: transformers
__index_level_0__: 101
readme:
---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "这是很久之前的事情了"
---

# Chinese GPT2 Model

## Model description

The model is used to generate Chinese texts. You can download the model either from the [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese), or via HuggingFace from the link [gpt2-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-chinese-cluecorpussmall).

## How to use

You can use the model directly with a pipeline for text generation:

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-cluecorpussmall")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-cluecorpussmall")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("这是很久之前的事情了", max_length=100, do_sample=True)
[{'generated_text': '这是很久之前的事情了 , 我 曾 经 把 这 个 当 做 一 种 思 想 的 传 承 , 或 者 是 人 生 的 回 顾 , 当 时 我 们 是 一 个 刚 刚 加 入 的 时 候 就 想 要 加 入 他 们 , 于 是 我 们 每 天 看 到 他 们 , 加 上 他 们 的 各 种 不 可 思 议 的 行 为 , 直 到 现 在 , 我 们 的 人 生 才 完 整 起 来 。'}]
```

## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 128 and then pre-train for 250,000 additional steps with a sequence length of 1024.

Stage 1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_lm_seq128_dataset.pt \
                      --seq_length 128 --processes_num 32 --data_processor lm
```

```
python3 pretrain.py --dataset_path cluecorpussmall_lm_seq128_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/gpt2/config.json \
                    --output_model_path models/cluecorpussmall_gpt2_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64
```

Stage 2:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_lm_seq1024_dataset.pt \
                      --seq_length 1024 --processes_num 32 --data_processor lm
```

```
python3 pretrain.py --dataset_path cluecorpussmall_lm_seq1024_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --pretrained_model_path models/cluecorpussmall_gpt2_seq128_model.bin-1000000 \
                    --config_path models/gpt2/config.json \
                    --output_model_path models/cluecorpussmall_gpt2_seq1024_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-5 --batch_size 16
```

Finally, we convert the pre-trained model into Huggingface's format:

```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path cluecorpussmall_gpt2_seq1024_model.bin-250000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 12
```

### BibTeX entry and citation info

```
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}

@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```

modelId: Helsinki-NLP/opus-mt-mul-en
sha: bc0ba94fb12f8b8cf88bd8a925b15ccd5fb94340
lastModified: 2020-08-21T14:42:48.000Z
tags: [ "pytorch", "marian", "text2text-generation", "ca", "es", "os", "eo", "ro", "fy", "cy", "is", "lb", "su", "an", "sq", "fr", "ht", "rm", "cv", "ig", "am", "eu", "tr", "ps", "af", "ny", "ch", "uk", "sl", "lt", "tk", "sg", "ar", "lg", "bg", "be", "ka", "gd", "ja", "si", "br", "mh", "km", "th", "ty", "rw", "te", "mk", "or", "wo", "kl", "mr", "ru", "yo", "hu", "fo", "zh", "ti", "co", "ee", "oc", "sn", "mt", "ts", "pl", "gl", "nb", "bn", "tt", "bo", "lo", "id", "gn", "nv", "hy", "kn", "to", "io", "so", "vi", "da", "fj", "gv", "sm", "nl", "mi", "pt", "hi", "se", "as", "ta", "et", "kw", "ga", "sv", "ln", "na", "mn", "gu", "wa", "lv", "jv", "el", "my", "ba", "it", "hr", "ur", "ce", "nn", "fi", "mg", "rn", "xh", "ab", "de", "cs", "he", "zu", "yi", "ml", "mul", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
pipeline_tag: translation
private: false
author: Helsinki-NLP
config: null
id: Helsinki-NLP/opus-mt-mul-en
downloads: 318,897
likes: 8
library_name: transformers
__index_level_0__: 102
readme:
---
language:
- ca
- es
- os
- eo
- ro
- fy
- cy
- is
- lb
- su
- an
- sq
- fr
- ht
- rm
- cv
- ig
- am
- eu
- tr
- ps
- af
- ny
- ch
- uk
- sl
- lt
- tk
- sg
- ar
- lg
- bg
- be
- ka
- gd
- ja
- si
- br
- mh
- km
- th
- ty
- rw
- te
- mk
- or
- wo
- kl
- mr
- ru
- yo
- hu
- fo
- zh
- ti
- co
- ee
- oc
- sn
- mt
- ts
- pl
- gl
- nb
- bn
- tt
- bo
- lo
- id
- gn
- nv
- hy
- kn
- to
- io
- so
- vi
- da
- fj
- gv
- sm
- nl
- mi
- pt
- hi
- se
- as
- ta
- et
- kw
- ga
- sv
- ln
- na
- mn
- gu
- wa
- lv
- jv
- el
- my
- ba
- it
- hr
- ur
- ce
- nn
- fi
- mg
- rn
- xh
- ab
- de
- cs
- he
- zu
- yi
- ml
- mul
- en
tags:
- translation
license: apache-2.0
---

### mul-eng

* source group: Multiple languages
* target group: English
* OPUS readme: [mul-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md)
* model: transformer
* source language(s): abk acm ady afb afh_Latn afr akl_Latn aln amh ang_Latn apc ara arg arq ary arz asm ast avk_Latn awa aze_Latn bak bam_Latn bel bel_Latn ben bho bod bos_Latn bre brx brx_Latn bul bul_Latn cat ceb ces cha che chr chv cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant cor cos crh crh_Latn csb_Latn cym dan deu dsb dtp dws_Latn egl ell enm_Latn epo est eus ewe ext fao fij fin fkv_Latn fra frm_Latn frr fry fuc fuv gan gcf_Latn gil gla gle glg glv gom gos got_Goth grc_Grek grn gsw guj hat hau_Latn haw heb hif_Latn hil hin hnj_Latn hoc hoc_Latn hrv hsb hun hye iba ibo ido ido_Latn ike_Latn ile_Latn ilo ina_Latn ind isl ita izh jav jav_Java jbo jbo_Cyrl jbo_Latn jdt_Cyrl jpn kab kal kan kat kaz_Cyrl kaz_Latn kek_Latn kha khm khm_Latn kin kir_Cyrl kjh kpv krl ksh kum kur_Arab kur_Latn lad lad_Latn lao lat_Latn lav ldn_Latn lfn_Cyrl lfn_Latn lij lin lit liv_Latn lkt lld_Latn lmo ltg ltz lug lzh lzh_Hans mad mah mai mal mar max_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob_Hebr nog non_Latn nov_Latn npi nya oci ori orv_Cyrl oss ota_Arab ota_Latn pag pan_Guru pap pau pdc pes pes_Latn pes_Thaa pms pnb pol por ppl_Latn prg_Latn pus quc qya qya_Latn rap rif_Latn roh rom ron rue run rus sag sah san_Deva scn sco sgs shs_Latn shy_Latn sin sjn_Latn slv sma sme smo sna snd_Arab som spa sqi srp_Cyrl srp_Latn stq sun swe swg swh tah tam tat tat_Arab tat_Latn tel tet tgk_Cyrl tha tir tlh_Latn tly_Latn tmw_Latn toi_Latn ton tpw_Latn tso tuk tuk_Latn tur tvl tyv tzl tzl_Latn udm uig_Arab uig_Cyrl ukr umb urd uzb_Cyrl uzb_Latn vec vie vie_Hani vol_Latn vro war wln wol wuu xal xho yid yor yue yue_Hans yue_Hant zho zho_Hans zho_Hant zlm_Latn zsm_Latn zul zza
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 8.5 | 0.341 |
| newsdev2015-enfi-fineng.fin.eng | 16.8 | 0.441 |
| newsdev2016-enro-roneng.ron.eng | 31.3 | 0.580 |
| newsdev2016-entr-tureng.tur.eng | 16.4 | 0.422 |
| newsdev2017-enlv-laveng.lav.eng | 21.3 | 0.502 |
| newsdev2017-enzh-zhoeng.zho.eng | 12.7 | 0.409 |
| newsdev2018-enet-esteng.est.eng | 19.8 | 0.467 |
| newsdev2019-engu-gujeng.guj.eng | 13.3 | 0.385 |
| newsdev2019-enlt-liteng.lit.eng | 19.9 | 0.482 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 26.7 | 0.520 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 29.8 | 0.541 |
| newssyscomb2009-ceseng.ces.eng | 21.1 | 0.487 |
| newssyscomb2009-deueng.deu.eng | 22.6 | 0.499 |
| newssyscomb2009-fraeng.fra.eng | 25.8 | 0.530 |
| newssyscomb2009-huneng.hun.eng | 15.1 | 0.430 |
| newssyscomb2009-itaeng.ita.eng | 29.4 | 0.555 |
| newssyscomb2009-spaeng.spa.eng | 26.1 | 0.534 |
| news-test2008-deueng.deu.eng | 21.6 | 0.491 |
| news-test2008-fraeng.fra.eng | 22.3 | 0.502 |
| news-test2008-spaeng.spa.eng | 23.6 | 0.514 |
| newstest2009-ceseng.ces.eng | 19.8 | 0.480 |
| newstest2009-deueng.deu.eng | 20.9 | 0.487 |
| newstest2009-fraeng.fra.eng | 25.0 | 0.523 |
| newstest2009-huneng.hun.eng | 14.7 | 0.425 |
| newstest2009-itaeng.ita.eng | 27.6 | 0.542 |
| newstest2009-spaeng.spa.eng | 25.7 | 0.530 |
| newstest2010-ceseng.ces.eng | 20.6 | 0.491 |
| newstest2010-deueng.deu.eng | 23.4 | 0.517 |
| newstest2010-fraeng.fra.eng | 26.1 | 0.537 |
| newstest2010-spaeng.spa.eng | 29.1 | 0.561 |
| newstest2011-ceseng.ces.eng | 21.0 | 0.489 |
| newstest2011-deueng.deu.eng | 21.3 | 0.494 |
| newstest2011-fraeng.fra.eng | 26.8 | 0.546 |
| newstest2011-spaeng.spa.eng | 28.2 | 0.549 |
| newstest2012-ceseng.ces.eng | 20.5 | 0.485 |
| newstest2012-deueng.deu.eng | 22.3 | 0.503 |
| newstest2012-fraeng.fra.eng | 27.5 | 0.545 |
| newstest2012-ruseng.rus.eng | 26.6 | 0.532 |
| newstest2012-spaeng.spa.eng | 30.3 | 0.567 |
| newstest2013-ceseng.ces.eng | 22.5 | 0.498 |
| newstest2013-deueng.deu.eng | 25.0 | 0.518 |
| newstest2013-fraeng.fra.eng | 27.4 | 0.537 |
| newstest2013-ruseng.rus.eng | 21.6 | 0.484 |
| newstest2013-spaeng.spa.eng | 28.4 | 0.555 |
| newstest2014-csen-ceseng.ces.eng | 24.0 | 0.517 |
| newstest2014-deen-deueng.deu.eng | 24.1 | 0.511 |
| newstest2014-fren-fraeng.fra.eng | 29.1 | 0.563 |
| newstest2014-hien-hineng.hin.eng | 14.0 | 0.414 |
| newstest2014-ruen-ruseng.rus.eng | 24.0 | 0.521 |
| newstest2015-encs-ceseng.ces.eng | 21.9 | 0.481 |
| newstest2015-ende-deueng.deu.eng | 25.5 | 0.519 |
| newstest2015-enfi-fineng.fin.eng | 17.4 | 0.441 |
| newstest2015-enru-ruseng.rus.eng | 22.4 | 0.494 |
| newstest2016-encs-ceseng.ces.eng | 23.0 | 0.500 |
| newstest2016-ende-deueng.deu.eng | 30.1 | 0.560 |
| newstest2016-enfi-fineng.fin.eng | 18.5 | 0.461 |
| newstest2016-enro-roneng.ron.eng | 29.6 | 0.562 |
| newstest2016-enru-ruseng.rus.eng | 22.0 | 0.495 |
| newstest2016-entr-tureng.tur.eng | 14.8 | 0.415 |
| newstest2017-encs-ceseng.ces.eng | 20.2 | 0.475 |
| newstest2017-ende-deueng.deu.eng | 26.0 | 0.523 |
| newstest2017-enfi-fineng.fin.eng | 19.6 | 0.465 |
| newstest2017-enlv-laveng.lav.eng | 16.2 | 0.454 |
| newstest2017-enru-ruseng.rus.eng | 24.2 | 0.510 |
| newstest2017-entr-tureng.tur.eng | 15.0 | 0.412 |
| newstest2017-enzh-zhoeng.zho.eng | 13.7 | 0.412 |
| newstest2018-encs-ceseng.ces.eng | 21.2 | 0.486 |
| newstest2018-ende-deueng.deu.eng | 31.5 | 0.564 |
| newstest2018-enet-esteng.est.eng | 19.7 | 0.473 |
| newstest2018-enfi-fineng.fin.eng | 15.1 | 0.418 |
| newstest2018-enru-ruseng.rus.eng | 21.3 | 0.490 |
| newstest2018-entr-tureng.tur.eng | 15.4 | 0.421 |
| newstest2018-enzh-zhoeng.zho.eng | 12.9 | 0.408 |
| newstest2019-deen-deueng.deu.eng | 27.0 | 0.529 |
| newstest2019-fien-fineng.fin.eng | 17.2 | 0.438 |
| newstest2019-guen-gujeng.guj.eng | 9.0 | 0.342 |
| newstest2019-lten-liteng.lit.eng | 22.6 | 0.512 |
| newstest2019-ruen-ruseng.rus.eng | 24.1 | 0.503 |
| newstest2019-zhen-zhoeng.zho.eng | 13.9 | 0.427 |
| newstestB2016-enfi-fineng.fin.eng | 15.2 | 0.428 |
| newstestB2017-enfi-fineng.fin.eng | 16.8 | 0.442 |
| newstestB2017-fien-fineng.fin.eng | 16.8 | 0.442 |
| Tatoeba-test.abk-eng.abk.eng | 2.4 | 0.190 |
| Tatoeba-test.ady-eng.ady.eng | 1.1 | 0.111 |
| Tatoeba-test.afh-eng.afh.eng | 1.7 | 0.108 |
| Tatoeba-test.afr-eng.afr.eng | 53.0 | 0.672 |
| Tatoeba-test.akl-eng.akl.eng | 5.9 | 0.239 |
| Tatoeba-test.amh-eng.amh.eng | 25.6 | 0.464 |
| Tatoeba-test.ang-eng.ang.eng | 11.7 | 0.289 |
| Tatoeba-test.ara-eng.ara.eng | 26.4 | 0.443 |
| Tatoeba-test.arg-eng.arg.eng | 35.9 | 0.473 |
| Tatoeba-test.asm-eng.asm.eng | 19.8 | 0.365 |
| Tatoeba-test.ast-eng.ast.eng | 31.8 | 0.467 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.119 |
| Tatoeba-test.awa-eng.awa.eng | 9.7 | 0.271 |
| Tatoeba-test.aze-eng.aze.eng | 37.0 | 0.542 |
| Tatoeba-test.bak-eng.bak.eng | 13.9 | 0.395 |
| Tatoeba-test.bam-eng.bam.eng | 2.2 | 0.094 |
| Tatoeba-test.bel-eng.bel.eng | 36.8 | 0.549 |
| Tatoeba-test.ben-eng.ben.eng | 39.7 | 0.546 |
| Tatoeba-test.bho-eng.bho.eng | 33.6 | 0.540 |
| Tatoeba-test.bod-eng.bod.eng | 1.1 | 0.147 |
| Tatoeba-test.bre-eng.bre.eng | 14.2 | 0.303 |
| Tatoeba-test.brx-eng.brx.eng | 1.7 | 0.130 |
| Tatoeba-test.bul-eng.bul.eng | 46.0 | 0.621 |
| Tatoeba-test.cat-eng.cat.eng | 46.6 | 0.636 |
| Tatoeba-test.ceb-eng.ceb.eng | 17.4 | 0.347 |
| Tatoeba-test.ces-eng.ces.eng | 41.3 | 0.586 |
| Tatoeba-test.cha-eng.cha.eng | 7.9 | 0.232 |
| Tatoeba-test.che-eng.che.eng | 0.7 | 0.104 |
| Tatoeba-test.chm-eng.chm.eng | 7.3 | 0.261 |
| Tatoeba-test.chr-eng.chr.eng | 8.8 | 0.244 |
| Tatoeba-test.chv-eng.chv.eng | 11.0 | 0.319 |
| Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.204 |
| Tatoeba-test.cos-eng.cos.eng | 58.2 | 0.643 |
| Tatoeba-test.crh-eng.crh.eng | 26.3 | 0.399 |
| Tatoeba-test.csb-eng.csb.eng | 18.8 | 0.389 |
| Tatoeba-test.cym-eng.cym.eng | 23.4 | 0.407 |
| Tatoeba-test.dan-eng.dan.eng | 50.5 | 0.659 |
| Tatoeba-test.deu-eng.deu.eng | 39.6 | 0.579 |
| Tatoeba-test.dsb-eng.dsb.eng | 24.3 | 0.449 |
| Tatoeba-test.dtp-eng.dtp.eng | 1.0 | 0.149 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.061 |
| Tatoeba-test.egl-eng.egl.eng | 7.6 | 0.236 |
| Tatoeba-test.ell-eng.ell.eng | 55.4 | 0.682 |
| Tatoeba-test.enm-eng.enm.eng | 28.0 | 0.489 |
| Tatoeba-test.epo-eng.epo.eng | 41.8 | 0.591 |
| Tatoeba-test.est-eng.est.eng | 41.5 | 0.581 |
| Tatoeba-test.eus-eng.eus.eng | 37.8 | 0.557 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.7 | 0.262 |
| Tatoeba-test.ext-eng.ext.eng | 25.5 | 0.405 |
| Tatoeba-test.fao-eng.fao.eng | 28.7 | 0.469 |
| Tatoeba-test.fas-eng.fas.eng | 7.5 | 0.281 |
| Tatoeba-test.fij-eng.fij.eng | 24.2 | 0.320 |
| Tatoeba-test.fin-eng.fin.eng | 35.8 | 0.534 |
| Tatoeba-test.fkv-eng.fkv.eng | 15.5 | 0.434 |
| Tatoeba-test.fra-eng.fra.eng | 45.1 | 0.618 |
| Tatoeba-test.frm-eng.frm.eng | 29.6 | 0.427 |
| Tatoeba-test.frr-eng.frr.eng | 5.5 | 0.138 |
| Tatoeba-test.fry-eng.fry.eng | 25.3 | 0.455 |
| Tatoeba-test.ful-eng.ful.eng | 1.1 | 0.127 |
| Tatoeba-test.gcf-eng.gcf.eng | 16.0 | 0.315 |
| Tatoeba-test.gil-eng.gil.eng | 46.7 | 0.587 |
| Tatoeba-test.gla-eng.gla.eng | 20.2 | 0.358 |
| Tatoeba-test.gle-eng.gle.eng | 43.9 | 0.592 |
| Tatoeba-test.glg-eng.glg.eng | 45.1 | 0.623 |
| Tatoeba-test.glv-eng.glv.eng | 3.3 | 0.119 |
| Tatoeba-test.gos-eng.gos.eng | 20.1 | 0.364 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.041 |
| Tatoeba-test.grc-eng.grc.eng | 2.1 | 0.137 |
| Tatoeba-test.grn-eng.grn.eng | 1.7 | 0.152 |
| Tatoeba-test.gsw-eng.gsw.eng | 18.2 | 0.334 |
| Tatoeba-test.guj-eng.guj.eng | 21.7 | 0.373 |
| Tatoeba-test.hat-eng.hat.eng | 34.5 | 0.502 |
| Tatoeba-test.hau-eng.hau.eng | 10.5 | 0.295 |
| Tatoeba-test.haw-eng.haw.eng | 2.8 | 0.160 |
| Tatoeba-test.hbs-eng.hbs.eng | 46.7 | 0.623 |
| Tatoeba-test.heb-eng.heb.eng | 33.0 | 0.492 |
| Tatoeba-test.hif-eng.hif.eng | 17.0 | 0.391 |
| Tatoeba-test.hil-eng.hil.eng | 16.0 | 0.339 |
| Tatoeba-test.hin-eng.hin.eng | 36.4 | 0.533 |
| Tatoeba-test.hmn-eng.hmn.eng | 0.4 | 0.131 |
| Tatoeba-test.hoc-eng.hoc.eng | 0.7 | 0.132 |
| Tatoeba-test.hsb-eng.hsb.eng | 41.9 | 0.551 |
| Tatoeba-test.hun-eng.hun.eng | 33.2 | 0.510 |
| Tatoeba-test.hye-eng.hye.eng | 32.2 | 0.487 |
| Tatoeba-test.iba-eng.iba.eng | 9.4 | 0.278 |
| Tatoeba-test.ibo-eng.ibo.eng | 5.8 | 0.200 |
| Tatoeba-test.ido-eng.ido.eng | 31.7 | 0.503 |
| Tatoeba-test.iku-eng.iku.eng | 9.1 | 0.164 |
| Tatoeba-test.ile-eng.ile.eng | 42.2 | 0.595 |
| Tatoeba-test.ilo-eng.ilo.eng | 29.7 | 0.485 |
| Tatoeba-test.ina-eng.ina.eng | 42.1 | 0.607 |
| Tatoeba-test.isl-eng.isl.eng | 35.7 | 0.527 |
| Tatoeba-test.ita-eng.ita.eng | 54.8 | 0.686 |
| Tatoeba-test.izh-eng.izh.eng | 28.3 | 0.526 |
| Tatoeba-test.jav-eng.jav.eng | 10.0 | 0.282 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.3 | 0.115 |
| Tatoeba-test.jdt-eng.jdt.eng | 5.3 | 0.140 |
| Tatoeba-test.jpn-eng.jpn.eng | 18.8 | 0.387 |
| Tatoeba-test.kab-eng.kab.eng | 3.9 | 0.205 |
| Tatoeba-test.kal-eng.kal.eng | 16.9 | 0.329 |
| Tatoeba-test.kan-eng.kan.eng | 16.2 | 0.374 |
| Tatoeba-test.kat-eng.kat.eng | 31.1 | 0.493 |
| Tatoeba-test.kaz-eng.kaz.eng | 24.5 | 0.437 |
| Tatoeba-test.kek-eng.kek.eng | 7.4 | 0.192 |
| Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.154 |
| Tatoeba-test.khm-eng.khm.eng | 12.2 | 0.290 |
| Tatoeba-test.kin-eng.kin.eng | 22.5 | 0.355 |
| Tatoeba-test.kir-eng.kir.eng | 27.2 | 0.470 |
| Tatoeba-test.kjh-eng.kjh.eng | 2.1 | 0.129 |
| Tatoeba-test.kok-eng.kok.eng | 4.5 | 0.259 |
| Tatoeba-test.kom-eng.kom.eng | 1.4 | 0.099 |
| Tatoeba-test.krl-eng.krl.eng | 26.1 | 0.387 |
| Tatoeba-test.ksh-eng.ksh.eng | 5.5 | 0.256 |
| Tatoeba-test.kum-eng.kum.eng | 9.3 | 0.288 |
| Tatoeba-test.kur-eng.kur.eng | 9.6 | 0.208 |
| Tatoeba-test.lad-eng.lad.eng | 30.1 | 0.475 |
| Tatoeba-test.lah-eng.lah.eng | 11.6 | 0.284 |
| Tatoeba-test.lao-eng.lao.eng | 4.5 | 0.214 |
| Tatoeba-test.lat-eng.lat.eng | 21.5 | 0.402 |
| Tatoeba-test.lav-eng.lav.eng | 40.2 | 0.577 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.8 | 0.115 |
| Tatoeba-test.lfn-eng.lfn.eng | 23.0 | 0.433 |
| Tatoeba-test.lij-eng.lij.eng | 9.3 | 0.287 |
| Tatoeba-test.lin-eng.lin.eng | 2.4 | 0.196 |
| Tatoeba-test.lit-eng.lit.eng | 44.0 | 0.597 |
| Tatoeba-test.liv-eng.liv.eng | 1.6 | 0.115 |
| Tatoeba-test.lkt-eng.lkt.eng | 2.0 | 0.113 |
| Tatoeba-test.lld-eng.lld.eng | 18.3 | 0.312 |
| Tatoeba-test.lmo-eng.lmo.eng | 25.4 | 0.395 |
| Tatoeba-test.ltz-eng.ltz.eng | 35.9 | 0.509 |
| Tatoeba-test.lug-eng.lug.eng | 5.1 | 0.357 |
| Tatoeba-test.mad-eng.mad.eng | 2.8 | 0.123 |
| Tatoeba-test.mah-eng.mah.eng | 5.7 | 0.175 |
| Tatoeba-test.mai-eng.mai.eng | 56.3 | 0.703 |
| Tatoeba-test.mal-eng.mal.eng | 37.5 | 0.534 |
| Tatoeba-test.mar-eng.mar.eng | 22.8 | 0.470 |
| Tatoeba-test.mdf-eng.mdf.eng | 2.0 | 0.110 |
| Tatoeba-test.mfe-eng.mfe.eng | 59.2 | 0.764 |
| Tatoeba-test.mic-eng.mic.eng | 9.0 | 0.199 |
| Tatoeba-test.mkd-eng.mkd.eng | 44.3 | 0.593 |
| Tatoeba-test.mlg-eng.mlg.eng | 31.9 | 0.424 |
| Tatoeba-test.mlt-eng.mlt.eng | 38.6 | 0.540 |
| Tatoeba-test.mnw-eng.mnw.eng | 2.5 | 0.101 |
| Tatoeba-test.moh-eng.moh.eng | 0.3 | 0.110 |
| Tatoeba-test.mon-eng.mon.eng | 13.5 | 0.334 |
| Tatoeba-test.mri-eng.mri.eng | 8.5 | 0.260 |
| Tatoeba-test.msa-eng.msa.eng | 33.9 | 0.520 |
| Tatoeba-test.multi.eng | 34.7 | 0.518 |
| Tatoeba-test.mwl-eng.mwl.eng | 37.4 | 0.630 |
| Tatoeba-test.mya-eng.mya.eng | 15.5 | 0.335 |
| Tatoeba-test.myv-eng.myv.eng | 0.8 | 0.118 |
| Tatoeba-test.nau-eng.nau.eng | 9.0 | 0.186 |
| Tatoeba-test.nav-eng.nav.eng | 1.3 | 0.144 |
| Tatoeba-test.nds-eng.nds.eng | 30.7 | 0.495 |
| Tatoeba-test.nep-eng.nep.eng | 3.5 | 0.168 |
| Tatoeba-test.niu-eng.niu.eng | 42.7 | 0.492 |
| Tatoeba-test.nld-eng.nld.eng | 47.9 | 0.640 |
| Tatoeba-test.nog-eng.nog.eng | 12.7 | 0.284 |
| Tatoeba-test.non-eng.non.eng | 43.8 | 0.586 |
| Tatoeba-test.nor-eng.nor.eng | 45.5 | 0.619 |
| Tatoeba-test.nov-eng.nov.eng | 26.9 | 0.472 |
| Tatoeba-test.nya-eng.nya.eng | 33.2 | 0.456 |
| Tatoeba-test.oci-eng.oci.eng | 17.9 | 0.370 |
| Tatoeba-test.ori-eng.ori.eng | 14.6 | 0.305 |
| Tatoeba-test.orv-eng.orv.eng | 11.0 | 0.283 |
| Tatoeba-test.oss-eng.oss.eng | 4.1 | 0.211 |
| Tatoeba-test.ota-eng.ota.eng | 4.1 | 0.216 |
| Tatoeba-test.pag-eng.pag.eng | 24.3 | 0.468 |
| Tatoeba-test.pan-eng.pan.eng | 16.4 | 0.358 |
| Tatoeba-test.pap-eng.pap.eng | 53.2 | 0.628 |
| Tatoeba-test.pau-eng.pau.eng | 3.7 | 0.173 |
| Tatoeba-test.pdc-eng.pdc.eng | 45.3 | 0.569 |
| Tatoeba-test.pms-eng.pms.eng | 14.0 | 0.345 |
| Tatoeba-test.pol-eng.pol.eng | 41.7 | 0.588 |
| Tatoeba-test.por-eng.por.eng | 51.4 | 0.669 |
| Tatoeba-test.ppl-eng.ppl.eng | 0.4 | 0.134 |
| Tatoeba-test.prg-eng.prg.eng | 4.1 | 0.198 |
| Tatoeba-test.pus-eng.pus.eng | 6.7 | 0.233 |
| Tatoeba-test.quc-eng.quc.eng | 3.5 | 0.091 |
| Tatoeba-test.qya-eng.qya.eng | 0.2 | 0.090 |
| Tatoeba-test.rap-eng.rap.eng | 17.5 | 0.230 |
| Tatoeba-test.rif-eng.rif.eng | 4.2 | 0.164 |
| Tatoeba-test.roh-eng.roh.eng | 24.6 | 0.464 |
| Tatoeba-test.rom-eng.rom.eng | 3.4 | 0.212 |
| Tatoeba-test.ron-eng.ron.eng | 45.2 | 0.620 |
| Tatoeba-test.rue-eng.rue.eng | 21.4 | 0.390 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.392 |
| Tatoeba-test.rus-eng.rus.eng | 42.7 | 0.591 |
| Tatoeba-test.sag-eng.sag.eng | 3.4 | 0.187 |
| Tatoeba-test.sah-eng.sah.eng | 5.0 | 0.177 |
| Tatoeba-test.san-eng.san.eng | 2.0 | 0.172 |
| Tatoeba-test.scn-eng.scn.eng | 35.8 | 0.410 |
| Tatoeba-test.sco-eng.sco.eng | 34.6 | 0.520 |
| Tatoeba-test.sgs-eng.sgs.eng | 21.8 | 0.299 |
| Tatoeba-test.shs-eng.shs.eng | 1.8 | 0.122 |
| Tatoeba-test.shy-eng.shy.eng | 1.4 | 0.104 |
| Tatoeba-test.sin-eng.sin.eng | 20.6 | 0.429 |
| Tatoeba-test.sjn-eng.sjn.eng | 1.2 | 0.095 |
| Tatoeba-test.slv-eng.slv.eng | 37.0 | 0.545 |
| Tatoeba-test.sma-eng.sma.eng | 4.4 | 0.147 |
| Tatoeba-test.sme-eng.sme.eng | 8.9 | 0.229 |
| Tatoeba-test.smo-eng.smo.eng | 37.7 | 0.483 |
| Tatoeba-test.sna-eng.sna.eng | 18.0 | 0.359 |
| Tatoeba-test.snd-eng.snd.eng | 28.1 | 0.444 |
| Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 |
| Tatoeba-test.spa-eng.spa.eng | 47.9 | 0.645 |
| Tatoeba-test.sqi-eng.sqi.eng | 46.9 | 0.634 |
| Tatoeba-test.stq-eng.stq.eng | 8.1 | 0.379 |
| Tatoeba-test.sun-eng.sun.eng | 23.8 | 0.369 |
| Tatoeba-test.swa-eng.swa.eng | 6.5 | 0.193 |
| Tatoeba-test.swe-eng.swe.eng | 51.4 | 0.655 |
| Tatoeba-test.swg-eng.swg.eng | 18.5 | 0.342 |
| Tatoeba-test.tah-eng.tah.eng | 25.6 | 0.249 |
| Tatoeba-test.tam-eng.tam.eng | 29.1 | 0.437 |
| Tatoeba-test.tat-eng.tat.eng | 12.9 | 0.327 |
| Tatoeba-test.tel-eng.tel.eng | 21.2 | 0.386 |
| Tatoeba-test.tet-eng.tet.eng | 9.2 | 0.215 |
| Tatoeba-test.tgk-eng.tgk.eng | 12.7 | 0.374 |
| Tatoeba-test.tha-eng.tha.eng | 36.3 | 0.531 |
| Tatoeba-test.tir-eng.tir.eng | 9.1 | 0.267 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.084 |
| Tatoeba-test.tly-eng.tly.eng | 2.1 | 0.128 |
| Tatoeba-test.toi-eng.toi.eng | 5.3 | 0.150 |
| Tatoeba-test.ton-eng.ton.eng | 39.5 | 0.473 |
| Tatoeba-test.tpw-eng.tpw.eng | 1.5 | 0.160 |
| Tatoeba-test.tso-eng.tso.eng | 44.7 | 0.526 |
| Tatoeba-test.tuk-eng.tuk.eng | 18.6 | 0.401 |
| Tatoeba-test.tur-eng.tur.eng | 40.5 | 0.573 |
| Tatoeba-test.tvl-eng.tvl.eng | 55.0 | 0.593 |
| Tatoeba-test.tyv-eng.tyv.eng | 19.1 | 0.477 |
| Tatoeba-test.tzl-eng.tzl.eng | 17.7 | 0.333 |
| Tatoeba-test.udm-eng.udm.eng | 3.4 | 0.217 |
| Tatoeba-test.uig-eng.uig.eng | 11.4 | 0.289 |
| Tatoeba-test.ukr-eng.ukr.eng | 43.1 | 0.595 |
| Tatoeba-test.umb-eng.umb.eng | 9.2 | 0.260 |
| Tatoeba-test.urd-eng.urd.eng | 23.2 | 0.426 |
| Tatoeba-test.uzb-eng.uzb.eng | 19.0 | 0.342 |
| Tatoeba-test.vec-eng.vec.eng | 41.1 | 0.409 |
| Tatoeba-test.vie-eng.vie.eng | 30.6 | 0.481 |
| Tatoeba-test.vol-eng.vol.eng | 1.8 | 0.143 |
| Tatoeba-test.war-eng.war.eng | 15.9 | 0.352 |
| Tatoeba-test.wln-eng.wln.eng | 12.6 | 0.291 |
| Tatoeba-test.wol-eng.wol.eng | 4.4 | 0.138 |
| Tatoeba-test.xal-eng.xal.eng | 0.9 | 0.153 |
| Tatoeba-test.xho-eng.xho.eng | 35.4 | 0.513 |
| Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.387 |
| Tatoeba-test.yor-eng.yor.eng | 19.3 | 0.327 |
| Tatoeba-test.zho-eng.zho.eng | 25.8 | 0.448 |
| Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.567 |
| Tatoeba-test.zza-eng.zza.eng | 1.6 | 0.125 |

### System Info:
- hf_name: mul-eng
- source_languages: mul
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en']
- src_constituents: {'sjn_Latn', 'cat', 'nan', 'spa', 'ile_Latn', 'pap', 'mwl', 'uzb_Latn', 'mww', 'hil', 'lij', 'avk_Latn', 'lad_Latn', 'lat_Latn', 'bos_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi_Latn', 'awa', 'swg', 'zsm_Latn', 'zho_Hant', 'gcf_Latn', 'uzb_Cyrl', 'isl', 'lfn_Latn', 'shs_Latn', 'nov_Latn', 'bho', 'ltz', 'lzh', 'kur_Latn', 'sun', 'arg', 'pes_Thaa', 'sqi', 'uig_Arab', 'csb_Latn', 'fra', 'hat', 'liv_Latn', 'non_Latn', 'sco', 'cmn_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul_Latn', 'amh', 'lfn_Cyrl', 'eus', 'fkv_Latn', 'tur', 'pus', 'afr', 'brx_Latn', 'nya', 'acm', 'ota_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho_Hans', 'tmw_Latn', 'kjh', 'ota_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh_Hans', 'ara', 'tly_Latn', 'lug', 'brx', 'bul', 'bel', 'vol_Latn', 'kat', 'gan', 'got_Goth', 'vro', 'ext', 'afh_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif_Latn', 'cjy_Hant', 'bre', 'ceb', 'mah', 'nob_Hebr', 'crh_Latn', 'prg_Latn', 'khm', 'ang_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie_Hani', 'arz', 'yue', 'kha', 'san_Deva', 'jbo_Latn', 'gos', 'hau_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig_Cyrl', 'fao', 'mnw', 'zho', 'orv_Cyrl', 'iba', 'bel_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc_Grek', 'tpw_Latn', 'oci', 'mfe', 'sna', 'kir_Cyrl', 'tat_Latn', 'gom', 'ido_Latn', 'sgs', 'pau', 'tgk_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp_Latn', 'wuu', 'dtp', 'jbo_Cyrl', 'tet', 'bod', 'yue_Hans', 'zlm_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif_Latn', 'vie', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina_Latn', 'cjy_Hans', 'jdt_Cyrl', 'gsw', 'glv', 'khm_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd_Arab', 'arq', 'mri', 'kur_Arab', 'por', 'hin', 'shy_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue_Hant', 'kpv', 'tam', 'est', 'frm_Latn', 'hoc_Latn', 'bam_Latn', 'kek_Latn', 'ksh', 'tlh_Latn', 'ltg', 'pan_Guru', 'hnj_Latn', 'cor', 'gle', 'swe', 'lin', 'qya_Latn', 'kum', 'mad', 'cmn_Hant', 'fuv', 'nau', 'mon', 'akl_Latn', 'guj', 'kaz_Latn', 'wln', 'tuk_Latn', 'jav_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws_Latn', 'urd', 'stq', 'tat_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld_Latn', 'tzl_Latn', 'mdf', 'ike_Latn', 'ces', 'ldn_Latn', 'egl', 'heb', 'vec', 'zul', 'max_Latn', 'pes_Latn', 'yid', 'mal', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt
- src_alpha3: mul
- tgt_alpha3: eng
- short_pair: mul-en
- chrF2_score: 0.518
- bleu: 34.7
- brevity_penalty: 1.0
- ref_len: 72346.0
- src_name: Multiple languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: mul
- tgt_alpha2: en
- prefer_old: False
- long_pair: mul-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
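
The card gives no usage snippet. A minimal inference sketch, assuming the standard Marian classes in `transformers`; the French example sentence is an illustrative assumption:

```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-mul-en")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-mul-en")

# Any of the listed source languages can be fed in directly;
# the model always translates into English.
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```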

modelId: dslim/bert-base-NER-uncased
sha: 1f52ebe0381dc9e285c0aa7c2971b350894f1efa
lastModified: 2021-05-19T16:10:17.000Z
tags: [ "pytorch", "tf", "jax", "bert", "token-classification", "transformers", "autotrain_compatible" ]
pipeline_tag: token-classification
private: false
author: dslim
config: null
id: dslim/bert-base-NER-uncased
downloads: 316,976
likes: 6
library_name: transformers
__index_level_0__: 103
readme: Entry not found
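
The card itself is missing ("Entry not found"), so the following relies only on the metadata above: an uncased BERT checkpoint tagged token-classification. A minimal sketch, assuming the standard `transformers` pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

# Build a NER pipeline straight from the hub id listed in the record above.
ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER-uncased",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("my name is wolfgang and i live in berlin"))
```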

modelId: google/pegasus-xsum
sha: a0aa5531c00f59a32a167b75130805098b046f9c
lastModified: 2021-09-14T07:25:41.000Z
tags: [ "pytorch", "tf", "jax", "pegasus", "text2text-generation", "en", "arxiv:1912.08777", "transformers", "summarization", "autotrain_compatible" ]
pipeline_tag: summarization
private: false
author: google
config: null
id: google/pegasus-xsum
downloads: 316,141
likes: 41
library_name: transformers
__index_level_0__: 104
readme:
---
language: en
tags:
- summarization
---

### Pegasus Models

See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)

Original TF 1 code [here](https://github.com/google-research/pegasus)

Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019

Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)

Task: Summarization

The following is copied from the authors' README.

# Mixed & Stochastic Checkpoints

We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.

| dataset | C4 | HugeNews | Mixed & Stochastic |
| ---- | ---- | ---- | ---- |
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64 |
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30 |
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18 |
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95 |
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76 |
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 * |
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94 |
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 * |
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67 |
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25 |
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51 |
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59 |

The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):

- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k steps (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by adding 20% uniform noise to importance scores.
- the SentencePiece tokenizer is updated to be able to encode the newline character.

(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:

- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some format cleanings were also changed, please refer to the change in TFDS.

Citation

```
@misc{zhang2019pegasus,
  title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
  author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
  year={2019},
  eprint={1912.08777},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
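
The card never shows inference. A minimal summarization sketch, assuming the standard `transformers` seq2seq API; the source text is an illustrative assumption, and XSum-style checkpoints tend to produce one-sentence summaries:

```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

# Illustrative input document to be condensed into a short abstractive summary.
src_text = ("PG&E stated it scheduled the blackouts in response to forecasts for high "
            "winds amid dry conditions. The aim is to reduce the risk of wildfires.")
batch = tokenizer([src_text], truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```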

modelId: sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch
sha: b66b75ad2c01f1cf4ae47abb72464cb2342a5fba
lastModified: 2022-06-15T22:43:46.000Z
tags: [ "pytorch", "tf", "distilbert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
pipeline_tag: sentence-similarity
private: false
author: sentence-transformers
config: null
id: sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch
downloads: 300,603
likes: null
library_name: sentence-transformers
__index_level_0__: 105
readme:
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch')
model = AutoModel.from_pretrained('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).

If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):

```bibtex
@inproceedings{reimers-2019-sentence-bert,
  title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
  author = "Reimers, Nils and Gurevych, Iryna",
  booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
  month = "11",
  year = "2019",
  publisher = "Association for Computational Linguistics",
  url = "http://arxiv.org/abs/1908.10084",
}
```
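
Since the card pitches the embeddings for clustering and semantic search, a short sketch of scoring a query against a passage with the library's cosine utility; the sentence pair is an illustrative assumption:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch')

# Encode a query and a candidate passage, then score them with cosine similarity;
# higher scores mean the passage is a better semantic match for the query.
query_emb = model.encode("How many people live in Berlin?", convert_to_tensor=True)
passage_emb = model.encode("Berlin has a population of roughly 3.7 million.", convert_to_tensor=True)
print(util.cos_sim(query_emb, passage_emb))
```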

modelId: facebook/bart-large
sha: cb48c1365bd826bd521f650dc2e0940aee54720c
lastModified: 2022-06-03T10:00:20.000Z
tags: [ "pytorch", "tf", "jax", "rust", "bart", "feature-extraction", "en", "arxiv:1910.13461", "transformers", "license:apache-2.0" ]
pipeline_tag: feature-extraction
private: false
author: facebook
config: null
id: facebook/bart-large
downloads: 299,926
likes: 10
library_name: transformers
__index_level_0__: 106
readme:
---
license: apache-2.0
language: en
---

# BART (large-sized model)

BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).

Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.

BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).

## Intended uses & limitations

You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model in PyTorch:

```python
from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartModel.from_pretrained('facebook/bart-large')

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
```

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
  author    = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer},
  title     = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension},
  journal   = {CoRR},
  volume    = {abs/1910.13461},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint    = {1910.13461},
  timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
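
The card mentions text infilling without showing it. A minimal sketch, assuming `BartForConditionalGeneration` and the model's `<mask>` token; the example sentence and `max_length` are illustrative assumptions:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# BART was pre-trained to reconstruct corrupted text, so it can fill in a <mask> span.
inputs = tokenizer("UN Chief says there is no <mask> in Syria", return_tensors="pt")
generated = model.generate(**inputs, max_length=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```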

modelId: t5-large
sha: cb7a9673bcaf9ab8b677ad4a5650c1d74b4a5a8e
lastModified: 2022-07-22T08:11:26.000Z
tags: [ "pytorch", "tf", "jax", "t5", "text2text-generation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "transformers", "summarization", "translation", "license:apache-2.0", "autotrain_compatible" ]
pipeline_tag: translation
private: false
author: null
config: null
id: t5-large
downloads: 299,321
likes: 16
library_name: transformers
__index_level_0__: 107
readme:
---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
---

# Model Card for T5 Large

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):

> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.

T5-Large is the checkpoint with 770 million parameters.

- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
  - [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
  - [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
  - [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
  - [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)

# Uses

## Direct Use and Downstream Use

The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that:

> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.

See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.

## Out-of-Scope Use

More information needed.

# Bias, Risks, and Limitations

More information needed.

## Recommendations

More information needed.

# Training Details

## Training Data

The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.

The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**. The following datasets were used for (1.) and (2.):

1. **Datasets used for Unsupervised denoising objective**:
   - [C4](https://huggingface.co/datasets/c4)
   - [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)

2. **Datasets used for Supervised text-to-text language modeling objective**:
   - Sentence acceptability judgment
     - CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
   - Sentiment analysis
     - SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
   - Paraphrasing/sentence similarity
     - MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
     - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
     - QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
   - Natural language inference
     - MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
     - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250)
     - RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
     - CB [De Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
   - Sentence completion
     - COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
   - Word sense disambiguation
     - WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
   - Question answering
     - MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
     - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
     - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)

## Training Procedure

In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:

> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.

The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.

# Evaluation

## Testing Data, Factors & Metrics

The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.

## Results

For full results for T5-Large, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Citation

**BibTeX:**

```bibtex
@article{2020t5,
  author  = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  title   = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  journal = {Journal of Machine Learning Research},
  year    = {2020},
  volume  = {21},
  number  = {140},
  pages   = {1-67},
  url     = {http://jmlr.org/papers/v21/20-074.html}
}
```

**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.

# Model Card Authors

This model card was written by the team at Hugging Face.

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import T5Tokenizer, T5Model

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5Model.from_pretrained("t5-large")

input_ids = tokenizer(
    "Studies have been shown that owning a dog is good for you", return_tensors="pt"
).input_ids  # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1

# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
```

See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
</details>
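
The snippet above only runs a raw forward pass. A hedged sketch of the text-to-text interface the card describes, using a task-prefix string in the T5 paper's convention; the input sentence and `max_length` are illustrative assumptions:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

# Every task is phrased as text-to-text: the prefix tells the model what to do.
input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids
outputs = model.generate(input_ids, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```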

modelId: microsoft/deberta-v3-base
sha: 559062ad13d311b87b2c455e67dcd5f1c8f65111
lastModified: 2022-02-06T09:55:27.000Z
tags: [ "pytorch", "tf", "rust", "deberta-v2", "en", "arxiv:2006.03654", "arxiv:2111.09543", "transformers", "deberta", "deberta-v3", "license:mit" ]
pipeline_tag: null
private: false
author: microsoft
config: null
id: microsoft/deberta-v3-base
downloads: 298,505
likes: 24
library_name: transformers
__index_level_0__: 108
readme:
---
language: en
tags:
- deberta
- deberta-v3
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---

## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing

[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.

In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient-Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technical details about the new model in our [paper](https://arxiv.org/abs/2111.09543).

Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.

The DeBERTa V3 base model comes with 12 layers and a hidden size of 768. It has only 86M backbone parameters, with a vocabulary containing 128K tokens that introduces 98M parameters in the embedding layer. This model was trained using the same 160GB data as DeBERTa V2.

#### Fine-tuning on NLU tasks

We present the dev results on SQuAD 2.0 and MNLI tasks.

| Model | Vocabulary (K) | Backbone #Params (M) | SQuAD 2.0 (F1/EM) | MNLI-m/mm (ACC) |
|-------------------|----------|-------------------|-----------|----------|
| RoBERTa-base | 50 | 86 | 83.7/80.5 | 87.6/- |
| XLNet-base | 32 | 92 | -/80.2 | 86.8/- |
| ELECTRA-base | 30 | 86 | -/80.5 | 88.8/ |
| DeBERTa-base | 50 | 100 | 86.2/83.1 | 88.8/88.5 |
| DeBERTa-v3-base | 128 | 86 | **88.4/85.4** | **90.6/90.7** |
| DeBERTa-v3-base + SiFT | 128 | 86 | -/- | 91.0/- |

#### Fine-tuning with HF transformers

```bash
#!/bin/bash
cd transformers/examples/pytorch/text-classification/
pip install datasets
export TASK_NAME=mnli

output_dir="ds_results"
num_gpus=8
batch_size=8

python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
  run_glue.py \
  --model_name_or_path microsoft/deberta-v3-base \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --evaluation_strategy steps \
  --max_seq_length 256 \
  --warmup_steps 500 \
  --per_device_train_batch_size ${batch_size} \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir $output_dir \
  --overwrite_output_dir \
  --logging_steps 1000 \
  --logging_dir $output_dir
```

### Citation

If you find DeBERTa useful for your work, please cite the following papers:

```latex
@misc{he2021debertav3,
  title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing},
  author={Pengcheng He and Jianfeng Gao and Weizhu Chen},
  year={2021},
  eprint={2111.09543},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```latex
@inproceedings{
  he2021deberta,
  title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
  author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
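
The card only shows the distributed fine-tuning launcher. A minimal sketch of loading the checkpoint with a classification head before training; the three-label MNLI setup and the premise/hypothesis pair are illustrative assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
# MNLI has three labels (entailment / neutral / contradiction); this head is
# randomly initialized and must be fine-tuned, e.g. with the script above.
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=3)

inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 3])
```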

modelId: Rostlab/prot_bert
sha: 3d05bf06e79014892defacad82e0efd06e977ff6
lastModified: 2020-12-11T21:30:07.000Z
tags: [ "pytorch", "fill-mask", "protein", "dataset:Uniref100", "transformers", "protein language model", "autotrain_compatible" ]
pipeline_tag: fill-mask
private: false
author: Rostlab
config: null
id: Rostlab/prot_bert
downloads: 291,646
likes: 19
library_name: transformers
__index_level_0__: 109
readme:
---
language: protein
tags:
- protein language model
datasets:
- Uniref100
---

# ProtBert model

Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in [this repository](https://github.com/agemagician/ProtTrans). This model is trained on uppercase amino acids: it only works with capital letter amino acids.

## Model description

ProtBert is based on the BERT model, pretrained on a large corpus of protein sequences in a self-supervised fashion. This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those protein sequences.

One important difference between this model and the original BERT version is the treatment of sequences as separate documents: next sentence prediction is not used, as each sequence is treated as a complete document. The masking follows the original BERT training and randomly masks 15% of the amino acids in the input.

The features extracted from this model revealed that the LM embeddings from unlabeled data (only protein sequences) capture important biophysical properties governing protein shape. This implies that some of the grammar of the language of life realized in protein sequences was learned.

## Intended uses & limitations

The model can be used for protein feature extraction or fine-tuned on downstream tasks. We have noticed that for some tasks you can gain more accuracy by fine-tuning the model rather than using it as a feature extractor.

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import BertForMaskedLM, BertTokenizer, pipeline
>>> tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
>>> model = BertForMaskedLM.from_pretrained("Rostlab/prot_bert")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker('D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T')

[{'score': 0.11088453233242035,
  'sequence': '[CLS] D L I P T S S K L V V L D T S L Q V K K A F F A L V T [SEP]',
  'token': 5,
  'token_str': 'L'},
 {'score': 0.08402521163225174,
  'sequence': '[CLS] D L I P T S S K L V V S D T S L Q V K K A F F A L V T [SEP]',
  'token': 10,
  'token_str': 'S'},
 {'score': 0.07328339666128159,
  'sequence': '[CLS] D L I P T S S K L V V V D T S L Q V K K A F F A L V T [SEP]',
  'token': 8,
  'token_str': 'V'},
 {'score': 0.06921856850385666,
  'sequence': '[CLS] D L I P T S S K L V V K D T S L Q V K K A F F A L V T [SEP]',
  'token': 12,
  'token_str': 'K'},
 {'score': 0.06382402777671814,
  'sequence': '[CLS] D L I P T S S K L V V I D T S L Q V K K A F F A L V T [SEP]',
  'token': 11,
  'token_str': 'I'}]
```

Here is how to use this model to get the features of a given protein sequence in PyTorch:

```python
from transformers import BertModel, BertTokenizer
import re

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert")

sequence_Example = "A E T C Z A O"
sequence_Example = re.sub(r"[UZOB]", "X", sequence_Example)
encoded_input = tokenizer(sequence_Example, return_tensors='pt')
output = model(**encoded_input)
```

## Training data

The ProtBert model was pretrained on [Uniref100](https://www.uniprot.org/downloads), a dataset consisting of 217 million protein sequences.

## Training procedure

### Preprocessing

The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 21. The rare amino acids "U, Z, O, B" were mapped to "X". The inputs of the model are then of the form:

```
[CLS] Protein Sequence A [SEP] Protein Sequence B [SEP]
```

Furthermore, each protein sequence was treated as a separate document. The preprocessing step was performed twice: once for a combined length (2 sequences) of less than 512 amino acids, and another time for a combined length (2 sequences) of less than 2048 amino acids.

The details of the masking procedure for each sequence followed the original BERT model, as follows:
- 15% of the amino acids are masked.
- In 80% of the cases, the masked amino acids are replaced by `[MASK]`.
- In 10% of the cases, the masked amino acids are replaced by a random amino acid (different from the one they replace).
- In the remaining 10% of cases, the masked amino acids are left as they are.

### Pretraining

The model was trained on a single TPU Pod V3-512 for 400k steps in total: 300k steps using sequence length 512 (batch size 15k), and 100k steps using sequence length 2048 (batch size 2.5k). The optimizer used is Lamb with a learning rate of 0.002, a weight decay of 0.01, learning rate warmup for 40k steps and linear decay of the learning rate afterwards.

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

Test results:

| Task/Dataset | secondary structure (3-states) | secondary structure (8-states) | Localization | Membrane |
|:-----:|:-----:|:-----:|:-----:|:-----:|
| CASP12 | 75 | 63 | | |
| TS115 | 83 | 72 | | |
| CB513 | 81 | 66 | | |
| DeepLoc | | | 79 | 91 |

### BibTeX entry and citation info

```bibtex
@article {Elnaggar2020.07.12.199554,
	author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and BHOWMIK, DEBSINDHU and Rost, Burkhard},
	title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
	elocation-id = {2020.07.12.199554},
	year = {2020},
	doi = {10.1101/2020.07.12.199554},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112-times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8-states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability: ProtTrans at https://github.com/agemagician/ProtTrans. Competing Interest Statement: The authors have declared no competing interest.},
	URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554},
	eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf},
	journal = {bioRxiv}
}
```

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
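The card's feature-extraction example stops at per-residue hidden states; a common next step is to collapse them into one fixed-size vector per protein. A minimal sketch, assuming mean pooling over residues is an acceptable summary — the helper name, pooling choice, and example sequence are illustrative additions, not part of the original card:

```python
import re
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert")
model.eval()

def embed_protein(sequence: str) -> torch.Tensor:
    # space-separate the residues and map rare amino acids to X, as in the card
    spaced = " ".join(sequence.upper())
    spaced = re.sub(r"[UZOB]", "X", spaced)
    inputs = tokenizer(spaced, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
    # mean-pool over residue positions, skipping the [CLS] and [SEP] tokens
    return hidden[0, 1:-1].mean(dim=0)

vector = embed_protein("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(vector.shape)
```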
echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid
f9c8e9f03396500a107dbe97024d7efa23f57e69
2022-07-04T09:04:59.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:sst2", "transformers", "license:apache-2.0" ]
text-classification
false
echarlaix
null
echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid
291,539
null
transformers
110
---
language: en
license: apache-2.0
tags:
- text-classification
datasets:
- sst2
metrics:
- accuracy
---

## bert-base-uncased model fine-tuned on SST-2

This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the linear layers contain **37%** of the original weights.

The model contains **51%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).

<div class="graph"><script src="/echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid/raw/main/model_card/density_info.js" id="2d0fc334-fe98-4315-8890-d6eaca1fa9be"></script></div>

In terms of performance, its **accuracy** is **91.17**.

## Fine-Pruning details

This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on the SST-2 task, and distilled from the model [textattack/bert-base-uncased-SST-2](https://huggingface.co/textattack/bert-base-uncased-SST-2). This model is case-insensitive: it does not make a difference between english and English.

A side-effect of the block pruning method is that some of the attention heads are completely removed: 88 heads were removed out of a total of 144 (61.1%). Here is a detailed view of how the remaining heads are distributed in the network after pruning.

<div class="graph"><script src="/echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid/raw/main/model_card/pruning_info.js" id="93b19d7f-c11b-4edf-9670-091e40d9be25"></script></div>

## Details of the SST-2 dataset

| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SST-2 | train | 67K |
| SST-2 | eval | 872 |

### Results

**Pytorch model file size**: `351MB` (original BERT: `420MB`)

| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf)) | Variation |
| ------ | --------- | --------- | --------- |
| **accuracy** | **91.17** | **92.7** | **-1.53** |

## Example Usage

Install nn_pruning: it contains the optimization script, which simply packs the linear layers into smaller ones by removing empty rows/columns.

`pip install nn_pruning`

Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` when the pipeline has loaded.

```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

cls_pipeline = pipeline(
    "text-classification",
    model="echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid",
    tokenizer="echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid",
)

print(f"Parameters count (includes only head pruning, no feed forward pruning)={int(cls_pipeline.model.num_parameters() / 1E6)}M")
cls_pipeline.model = optimize_model(cls_pipeline.model, "dense")
print(f"Parameters count after optimization={int(cls_pipeline.model.num_parameters() / 1E6)}M")
predictions = cls_pipeline("This restaurant is awesome")
print(predictions)
```
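Since the speed-up only materializes once `optimize_model` has packed the sparse linear layers, a quick sanity check is to time the pipeline before and after optimization. A rough sketch — the helper and run count are illustrative, and absolute numbers depend entirely on hardware:

```python
import time
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

cls_pipeline = pipeline(
    "text-classification",
    model="echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid",
)

def mean_latency(p, text, runs=50):
    p(text)  # warm-up call, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        p(text)
    return (time.perf_counter() - start) / runs

text = "This restaurant is awesome"
before = mean_latency(cls_pipeline, text)
cls_pipeline.model = optimize_model(cls_pipeline.model, "dense")
after = mean_latency(cls_pipeline, text)
print(f"before: {before * 1e3:.1f} ms/call, after: {after * 1e3:.1f} ms/call")
```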
flair/ner-english-large
e2b1caabf7f9bac1e7829db73eac734df7e6ad7b
2021-05-08T15:36:27.000Z
[ "pytorch", "en", "dataset:conll2003", "arxiv:2011.06993", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/ner-english-large
282,720
12
flair
111
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- conll2003
widget:
- text: "George Washington went to Washington"
---

## English NER in Flair (large model)

This is the large 4-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/).

F1-Score: **94,36** (corrected CoNLL-03)

Predicts 4 tags:

| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |

Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/).

---

### Demo: How to use in Flair

Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/ner-english-large")

# make example sentence
sentence = Sentence("George Washington went to Washington")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```

This yields the following output:

```
Span [1,2]: "George Washington"   [− Labels: PER (1.0)]
Span [5]: "Washington"   [− Labels: LOC (1.0)]
```

So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*".

---

### Training: Script to train this model

The following Flair script was used to train this model:

```python
import torch

# 1. get the corpus
from flair.datasets import CONLL_03
corpus = CONLL_03()

# 2. what tag do we want to predict?
tag_type = 'ner'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
    model='xlm-roberta-large',
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
    use_context=True,
)

# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type='ner',
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)

# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-english-large',
              learning_rate=5.0e-6,
              mini_batch_size=4,
              mini_batch_chunk_size=1,
              max_epochs=20,
              scheduler=OneCycleLR,
              embeddings_storage_mode='none',
              weight_decay=0.,
              )
```

---

### Cite

Please cite the following paper when using this model.

```
@misc{schweter2020flert,
    title={FLERT: Document-Level Features for Named Entity Recognition},
    author={Stefan Schweter and Alan Akbik},
    year={2020},
    eprint={2011.06993},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

---

### Issues?

The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
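For downstream use it is often handy to reduce the tagged `Sentence` to plain tuples. A small sketch on top of the demo above — the attribute names (`span.tag`, `span.score`) follow recent Flair releases, so adjust if your Flair version differs:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-english-large")

def extract_entities(text: str):
    # returns (surface form, label, confidence) triples for one text
    sentence = Sentence(text)
    tagger.predict(sentence)
    return [(span.text, span.tag, round(span.score, 3)) for span in sentence.get_spans("ner")]

print(extract_entities("George Washington went to Washington"))
# e.g. [('George Washington', 'PER', 1.0), ('Washington', 'LOC', 1.0)]
```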
Helsinki-NLP/opus-mt-es-en
7709af724cf305012a250cbd13cf3bfdbd2b66b0
2021-01-18T08:23:34.000Z
[ "pytorch", "marian", "text2text-generation", "es", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-es-en
282,620
7
transformers
112
---
language:
- es
- en
tags:
- translation
license: apache-2.0
---

### spa-eng

* source group: Spanish
* target group: English
* OPUS readme: [spa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md)
* model: transformer
* source language(s): spa
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip)
* test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt)
* test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-spaeng.spa.eng | 30.6 | 0.570 |
| news-test2008-spaeng.spa.eng | 27.9 | 0.553 |
| newstest2009-spaeng.spa.eng | 30.4 | 0.572 |
| newstest2010-spaeng.spa.eng | 36.1 | 0.614 |
| newstest2011-spaeng.spa.eng | 34.2 | 0.599 |
| newstest2012-spaeng.spa.eng | 37.9 | 0.624 |
| newstest2013-spaeng.spa.eng | 35.3 | 0.609 |
| Tatoeba-test.spa.eng | 59.6 | 0.739 |

### System Info:

- hf_name: spa-eng
- source_languages: spa
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'en']
- src_constituents: {'spa'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt
- src_alpha3: spa
- tgt_alpha3: eng
- short_pair: es-en
- chrF2_score: 0.7390000000000001
- bleu: 59.6
- brevity_penalty: 0.9740000000000001
- ref_len: 79376.0
- src_name: Spanish
- tgt_name: English
- train_date: 2020-08-18 00:00:00
- src_alpha2: es
- tgt_alpha2: en
- prefer_old: False
- long_pair: spa-eng
- helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82
- transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9
- port_machine: brutasse
- port_time: 2020-08-24-18:20
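The card lists scores but no usage snippet; a minimal sketch following the usual `transformers` pattern for Marian checkpoints (the example sentence and its translation are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-es-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-es-en")

# translate a Spanish sentence to English
inputs = tokenizer("La vida es como una caja de chocolates.", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# e.g. "Life is like a box of chocolates."
```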
sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking
8ae74eb0fbe7c8d82bb3d1a91fca56f352074e7f
2022-06-15T19:34:08.000Z
[ "pytorch", "tf", "distilbert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking
281,218
null
sentence-transformers
113
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking')
model = AutoModel.from_pretrained('sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}
```
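The card mentions clustering and semantic search but stops at raw embeddings. A small sketch of duplicate-question scoring with cosine similarity — `util.cos_sim` exists in recent sentence-transformers releases (older versions expose it as `util.pytorch_cos_sim`), and the question pairs are illustrative:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking")

query = ["How do I learn Python quickly?"]
candidates = [
    "What is the fastest way to learn Python?",  # paraphrase
    "How do I cook rice?",                       # unrelated
]

query_emb = model.encode(query, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)

# cosine similarity between the query and each candidate question
scores = util.cos_sim(query_emb, cand_emb)
print(scores)  # the paraphrase should score clearly higher than the unrelated question
```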
prithivida/grammar_error_correcter_v1
28f92ee33c2512814c22268b056a725364dae143
2021-07-04T10:44:31.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
prithivida
null
prithivida/grammar_error_correcter_v1
278,983
16
transformers
114
**This model is part of the Gramformer library.** Please refer to https://github.com/PrithivirajDamodaran/Gramformer/
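Since the card itself carries no example, here is a hedged sketch of the intended route through the Gramformer library. The constructor and `correct` arguments follow the repository README at the time of writing (`models=1` selects this corrector checkpoint); treat them as assumptions to verify against the linked repo:

```python
# pip install -U git+https://github.com/PrithivirajDamodaran/Gramformer.git
# (Gramformer also expects a spaCy English model to be installed)
from gramformer import Gramformer

gf = Gramformer(models=1, use_gpu=False)  # 1 = corrector (this model), per the README

influent = "He are moving here"
for corrected in gf.correct(influent, max_candidates=1):
    print(corrected)  # e.g. "He is moving here"
```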
microsoft/DialoGPT-large
06a70b2c3cecc1f56edc9fdc58d2e90641c9ae9e
2021-05-23T09:06:08.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "arxiv:1911.00536", "transformers", "conversational", "license:mit" ]
conversational
false
microsoft
null
microsoft/DialoGPT-large
278,699
31
transformers
115
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---

## A State-of-the-Art Large-scale Pretrained Response Generation Model (DialoGPT)

DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations. The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the responses generated from DialoGPT are comparable to human response quality under a single-turn conversation Turing test. The model is trained on 147M multi-turn dialogues from Reddit discussion threads.

* Multi-turn generation examples from an interactive environment:

| Role | Response |
|---------|--------|
| User | Does money buy happiness? |
| Bot | Depends how much money you spend on it . |
| User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
| User | This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |

Please find the information about preprocessing, training and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)

ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)

### How to use

Now we are ready to try out how the model works as a chatting partner!

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print last output tokens from bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
facebook/m2m100_418M
441fd5182f1298d7e39f34013ac0b905f8ff4429
2022-05-26T22:26:54.000Z
[ "pytorch", "rust", "m2m_100", "text2text-generation", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "arxiv:2010.11125", "transformers", "license:mit", "autotrain_compatible" ]
text2text-generation
false
facebook
null
facebook/m2m100_418M
278,499
27
transformers
116
---
language: [multilingual, af, am, ar, ast, az, ba, be, bg, bn, br, bs, ca, ceb, cs, cy, da, de, el, en, es, et, fa, ff, fi, fr, fy, ga, gd, gl, gu, ha, he, hi, hr, ht, hu, hy, id, ig, ilo, is, it, ja, jv, ka, kk, km, kn, ko, lb, lg, ln, lo, lt, lv, mg, mk, ml, mn, mr, ms, my, ne, nl, no, ns, oc, or, pa, pl, ps, pt, ro, ru, sd, si, sk, sl, so, sq, sr, ss, su, sv, sw, ta, th, tl, tn, tr, uk, ur, uz, vi, wo, xh, yi, yo, zh, zu]
license: mit
tags:
---

# M2M100 418M

M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.

The model can directly translate between the 9,900 directions of 100 languages. To translate into a target language, the target language id is forced as the first generated token: pass the `forced_bos_token_id` parameter to the `generate` method.

*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*

To install `sentencepiece` run `pip install sentencepiece`

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."

# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```

See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
## Languages covered

Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)

## BibTeX entry and citation info

```
@misc{fan2020englishcentric,
    title={Beyond English-Centric Multilingual Machine Translation},
    author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin},
    year={2020},
    eprint={2010.11125},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
uer/albert-base-chinese-cluecorpussmall
8634d166f98a6c337bdc6e9fba197df932605cdf
2022-07-15T08:20:21.000Z
[ "pytorch", "tf", "albert", "fill-mask", "zh", "dataset:CLUECorpusSmall", "transformers", "autotrain_compatible" ]
fill-mask
false
uer
null
uer/albert-base-chinese-cluecorpussmall
274,649
4
transformers
117
---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "中国的首都是[MASK]京"
---

# Chinese ALBERT

## Model description

This is the set of Chinese ALBERT models pre-trained by UER-py. You can download the model either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:

|                  | Link |
| --------         | :-----------------------: |
| **ALBERT-Base**  | [**L=12/H=768 (Base)**][base] |
| **ALBERT-Large** | [**L=24/H=1024 (Large)**][large] |

## How to use

You can use the model directly with a pipeline for masked language modeling:

```python
>>> from transformers import BertTokenizer, AlbertForMaskedLM, FillMaskPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
>>> model = AlbertForMaskedLM.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
>>> unmasker = FillMaskPipeline(model, tokenizer)
>>> unmasker("中国的首都是[MASK]京。")
[
    {'sequence': '中 国 的 首 都 是 北 京 。',
     'score': 0.8528032898902893,
     'token': 1266,
     'token_str': '北'},
    {'sequence': '中 国 的 首 都 是 南 京 。',
     'score': 0.07667620480060577,
     'token': 1298,
     'token_str': '南'},
    {'sequence': '中 国 的 首 都 是 东 京 。',
     'score': 0.020440367981791496,
     'token': 691,
     'token_str': '东'},
    {'sequence': '中 国 的 首 都 是 维 京 。',
     'score': 0.010197942145168781,
     'token': 5335,
     'token_str': '维'},
    {'sequence': '中 国 的 首 都 是 汴 京 。',
     'score': 0.0075391442514956,
     'token': 3745,
     'token_str': '汴'}
]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, AlbertModel
tokenizer = BertTokenizer.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
model = AlbertModel.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFAlbertModel
tokenizer = BertTokenizer.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
model = TFAlbertModel.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.

## Training procedure

The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters across the different model sizes.
Taking ALBERT-Base as an example, stage 1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_albert_seq128_dataset.pt \
                      --seq_length 128 --processes_num 32 --data_processor albert
```

```
python3 pretrain.py --dataset_path cluecorpussmall_albert_seq128_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/albert/base_config.json \
                    --output_model_path models/cluecorpussmall_albert_base_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64
```

Stage 2:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_albert_seq512_dataset.pt \
                      --seq_length 512 --processes_num 32 --data_processor albert
```

```
python3 pretrain.py --dataset_path cluecorpussmall_albert_seq512_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --pretrained_model_path models/cluecorpussmall_albert_base_seq128_model.bin-1000000 \
                    --config_path models/albert/base_config.json \
                    --output_model_path models/cluecorpussmall_albert_base_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64
```

Finally, we convert the pre-trained model into Huggingface's format:

```
python3 scripts/convert_albert_from_uer_to_huggingface.py --input_model_path cluecorpussmall_albert_base_seq512_model.bin-250000 \
                                                          --output_model_path pytorch_model.bin
```

### BibTeX entry and citation info

```
@article{lan2019albert,
  title={Albert: A lite bert for self-supervised learning of language representations},
  author={Lan, Zhenzhong and Chen, Mingda and Goodman, Sebastian and Gimpel, Kevin and Sharma, Piyush and Soricut, Radu},
  journal={arXiv preprint arXiv:1909.11942},
  year={2019}
}

@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```

[base]:https://huggingface.co/uer/albert-base-chinese-cluecorpussmall
[large]:https://huggingface.co/uer/albert-large-chinese-cluecorpussmall
funnel-transformer/small
ff0f4c11e46720ca10aa2dd668c2c58fe00ad214
2020-12-11T21:40:44.000Z
[ "pytorch", "tf", "funnel", "feature-extraction", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:gigaword", "arxiv:2006.03236", "transformers", "license:apache-2.0" ]
feature-extraction
false
funnel-transformer
null
funnel-transformer/small
271,908
3
transformers
118
---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---

# Funnel Transformer small model (B4-4-4 with decoder)

Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in [this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in [this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing Funnel Transformer did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.

More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by this model as inputs.

## Intended uses & limitations

You can use the raw model to extract a vector representation of a given text, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelModel.from_pretrained("funnel-transformer/small")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelModel.from_pretrained("funnel-transformer/small")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data

The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info

```bibtex
@misc{dai2020funneltransformer,
    title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
    author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
    year={2020},
    eprint={2006.03236},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
oliverguhr/german-sentiment-bert
c5c8dd0c5b966460dce1b7c5851bd90af1d2c6b6
2022-07-04T08:59:35.000Z
[ "pytorch", "tf", "jax", "bert", "text-classification", "de", "transformers", "sentiment", "license:mit" ]
text-classification
false
oliverguhr
null
oliverguhr/german-sentiment-bert
267,318
9
transformers
119
---
language:
- de
tags:
- sentiment
- bert
license: mit
widget:
- text: "Das ist gar nicht mal so schlecht"
metrics:
- f1
---

# German Sentiment Classification with Bert

This model was trained for sentiment classification of German language texts. To achieve the best results, all model inputs need to be preprocessed with the same procedure that was applied during training. To simplify the usage of the model, we provide a Python package that bundles the code needed for preprocessing and inference.

The model uses Google's BERT architecture and was trained on 1.834 million German-language samples. The training data contains texts from various domains like Twitter, Facebook and movie, app and hotel reviews. You can find more information about the dataset and the training process in the [paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.202.pdf).

## Using the Python package

To get started, install the package from [pypi](https://pypi.org/project/germansentiment/):

```bash
pip install germansentiment
```

```python
from germansentiment import SentimentModel

model = SentimentModel()

texts = [
    "Mit keinem guten Ergebniss", "Das ist gar nicht mal so gut",
    "Total awesome!", "nicht so schlecht wie erwartet",
    "Der Test verlief positiv.", "Sie fährt ein grünes Auto."]

result = model.predict_sentiment(texts)
print(result)
```

The code above will output the following list:

```python
["negative", "negative", "positive", "positive", "neutral", "neutral"]
```

## Model and Data

If you are interested in the code and data that were used to train this model, please have a look at [this repository](https://github.com/oliverguhr/german-sentiment) and our [paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.202.pdf). Here is a table of the F1 scores that this model achieves on different datasets. Since we trained this model with a newer version of the transformer library, the results are slightly better than reported in the paper.

| Dataset | F1 micro Score |
| :----------------------------------------------------------- | -------------: |
| [holidaycheck](https://github.com/oliverguhr/german-sentiment) | 0.9568 |
| [scare](https://www.romanklinger.de/scare/) | 0.9418 |
| [filmstarts](https://github.com/oliverguhr/german-sentiment) | 0.9021 |
| [germeval](https://sites.google.com/view/germeval2017-absa/home) | 0.7536 |
| [PotTS](https://www.aclweb.org/anthology/L16-1181/) | 0.6780 |
| [emotions](https://github.com/oliverguhr/german-sentiment) | 0.9649 |
| [sb10k](https://www.spinningbytes.com/resources/germansentiment/) | 0.7376 |
| [Leipzig Wikipedia Corpus 2016](https://wortschatz.uni-leipzig.de/de/download/german) | 0.9967 |
| all | 0.9639 |

## Cite

For feedback and questions contact me via mail or Twitter [@oliverguhr](https://twitter.com/oliverguhr). Please cite us if you found this useful:

```
@InProceedings{guhr-EtAl:2020:LREC,
  author    = {Guhr, Oliver and Schumann, Anne-Kathrin and Bahrmann, Frank and Böhme, Hans Joachim},
  title     = {Training a Broad-Coverage German Sentiment Classification Model for Dialog Systems},
  booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
  month     = {May},
  year      = {2020},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  pages     = {1620--1625},
  url       = {https://www.aclweb.org/anthology/2020.lrec-1.202}
}
```
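If you prefer not to depend on the `germansentiment` package, a minimal raw-`transformers` sketch is below. Note that it skips the package's text-cleaning step, so scores on noisy inputs can differ slightly; the label strings come from the model config:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("oliverguhr/german-sentiment-bert")
model = AutoModelForSequenceClassification.from_pretrained("oliverguhr/german-sentiment-bert")

texts = ["Das ist gar nicht mal so schlecht"]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1)
print([model.config.id2label[i.item()] for i in pred])
```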
microsoft/deberta-v3-large
360b9940401fa4d3411a0ca9f796631ec36f287a
2022-01-13T17:50:16.000Z
[ "pytorch", "tf", "deberta-v2", "en", "arxiv:2006.03654", "arxiv:2111.09543", "transformers", "deberta", "deberta-v3", "license:mit" ]
null
false
microsoft
null
microsoft/deberta-v3-large
254,782
34
transformers
120
---
language: en
tags:
- deberta
- deberta-v3
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---

## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing

[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.

In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient-Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technical details about the new model in our [paper](https://arxiv.org/abs/2111.09543).

Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.

The DeBERTa V3 large model comes with 24 layers and a hidden size of 1024. It has 304M backbone parameters with a vocabulary containing 128K tokens, which introduces 131M parameters in the Embedding layer. This model was trained using the same 160GB data as DeBERTa V2.

#### Fine-tuning on NLU tasks

We present the dev results on the SQuAD 2.0 and MNLI tasks.

| Model | Vocabulary(K) | Backbone #Params(M) | SQuAD 2.0(F1/EM) | MNLI-m/mm(ACC) |
|-------------------|----------|-------------------|-----------|----------|
| RoBERTa-large | 50 | 304 | 89.4/86.5 | 90.2 |
| XLNet-large | 32 | - | 90.6/87.9 | 90.8 |
| DeBERTa-large | 50 | - | 90.7/88.0 | 91.3 |
| **DeBERTa-v3-large** | 128 | 304 | **91.5/89.0** | **91.8/91.9** |

#### Fine-tuning with HF transformers

```bash
#!/bin/bash

cd transformers/examples/pytorch/text-classification/

pip install datasets

export TASK_NAME=mnli

output_dir="ds_results"
num_gpus=8
batch_size=8

python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
  run_glue.py \
  --model_name_or_path microsoft/deberta-v3-large \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --evaluation_strategy steps \
  --max_seq_length 256 \
  --warmup_steps 50 \
  --per_device_train_batch_size ${batch_size} \
  --learning_rate 6e-6 \
  --num_train_epochs 2 \
  --output_dir $output_dir \
  --overwrite_output_dir \
  --logging_steps 1000 \
  --logging_dir $output_dir
```

### Citation

If you find DeBERTa useful for your work, please cite the following papers:

```latex
@misc{he2021debertav3,
    title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing},
    author={Pengcheng He and Jianfeng Gao and Weizhu Chen},
    year={2021},
    eprint={2111.09543},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

```latex
@inproceedings{
    he2021deberta,
    title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
    author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
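The card only shows the CLI fine-tuning route. For a quick in-Python check that the checkpoint loads, a hedged feature-extraction sketch (the DeBERTa-v3 tokenizer additionally requires `sentencepiece`; `AutoModel` returns hidden states, not task logits):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModel.from_pretrained("microsoft/deberta-v3-large")

inputs = tokenizer("DeBERTa-v3 uses ELECTRA-style pre-training.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (1, seq_len, 1024) for the large model
```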
deepset/bert-large-uncased-whole-word-masking-squad2
fc342ddb2da7ae9be8275f3f9970f0b59571caa5
2022-07-25T12:20:44.000Z
[ "pytorch", "jax", "bert", "question-answering", "en", "dataset:squad_v2", "transformers", "license:cc-by-4.0", "model-index", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/bert-large-uncased-whole-word-masking-squad2
250,879
7
transformers
121
---
language: en
datasets:
- squad_v2
license: cc-by-4.0
model-index:
- name: deepset/bert-large-uncased-whole-word-masking-squad2
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_v2
      type: squad_v2
      config: squad_v2
      split: validation
    metrics:
    - name: Exact Match
      type: exact_match
      value: 80.8846
      verified: true
    - name: F1
      type: f1
      value: 83.8765
      verified: true
---
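The card stops at metadata; a minimal extractive-QA sketch with the `transformers` pipeline (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepset/bert-large-uncased-whole-word-masking-squad2",
)

result = qa(
    question="What was BERT pre-trained with?",
    context="BERT was pre-trained with masked language modeling and next sentence prediction.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```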
Helsinki-NLP/opus-mt-en-de
61a2efe1dd1ca51242d9df09a1a0634b17046125
2022-07-14T08:57:07.000Z
[ "pytorch", "tf", "jax", "rust", "marian", "text2text-generation", "en", "de", "transformers", "translation", "license:cc-by-4.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-de
248,829
5
transformers
122
---
tags:
- translation
license: cc-by-4.0
---

### opus-mt-en-de

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)

## Model Details

**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation
- **Language(s):**
  - Source Language: English
  - Target Language: German
- **License:** CC-BY-4.0
- **Resources for more information:**
  - [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)

## Uses

#### Direct Use

This model can be used for translation and text-to-text generation.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

Further details about the dataset for this model can be found in the OPUS readme: [en-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-de/README.md)

## Training

#### Training Data

##### Preprocessing

* pre-processing: normalization + SentencePiece
* dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT)
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.test.txt)

## Evaluation

#### Results

* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.eval.txt)

#### Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.de | 23.5 | 0.540 |
| news-test2008.en.de | 23.5 | 0.529 |
| newstest2009.en.de | 22.3 | 0.530 |
| newstest2010.en.de | 24.9 | 0.544 |
| newstest2011.en.de | 22.5 | 0.524 |
| newstest2012.en.de | 23.0 | 0.525 |
| newstest2013.en.de | 26.9 | 0.553 |
| newstest2015-ende.en.de | 31.1 | 0.594 |
| newstest2016-ende.en.de | 37.0 | 0.636 |
| newstest2017-ende.en.de | 29.9 | 0.586 |
| newstest2018-ende.en.de | 45.2 | 0.690 |
| newstest2019-ende.en.de | 40.9 | 0.654 |
| Tatoeba.en.de | 47.3 | 0.664 |

## Citation Information

```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
  author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
  title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
  booktitle = {Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT)},
  year = {2020},
  address = {Lisbon, Portugal}
}
```

## How to Get Started With the Model

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")
```
facebook/dpr-ctx_encoder-single-nq-base
aa2a11a692b50b73fa245d59e49be796eefd888f
2020-11-25T16:58:35.000Z
[ "pytorch", "tf", "dpr", "transformers" ]
null
false
facebook
null
facebook/dpr-ctx_encoder-single-nq-base
242,736
3
transformers
123
Entry not found
microsoft/layoutlmv2-large-uncased
26362c3ebf9c5c9277dc51954e5107a905415eec
2022-05-03T07:36:38.000Z
[ "pytorch", "layoutlmv2", "en", "arxiv:2012.14740", "transformers", "license:cc-by-nc-sa-4.0" ]
null
false
microsoft
null
microsoft/layoutlmv2-large-uncased
241,454
3
transformers
124
---
language: en
license: cc-by-nc-sa-4.0
---

# LayoutLMv2

**Multimodal (text + layout/format + image) pre-training for document AI**

## Introduction

LayoutLMv2 is an improved version of LayoutLM with new pre-training tasks to model the interaction among text, layout, and image in a single multi-modal framework. It outperforms strong baselines and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 → 0.8420), CORD (0.9493 → 0.9601), SROIE (0.9524 → 0.9781), Kleister-NDA (0.834 → 0.852), RVL-CDIP (0.9443 → 0.9564), and DocVQA (0.7295 → 0.8672).

[LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740)
Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou, [ACL 2021](#)
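The card gives no code; a hedged sketch of the `LayoutLMv2Processor` route in `transformers`. The image path is illustrative, the processor's built-in OCR assumes `pytesseract` is installed, and the model itself depends on `detectron2`; if this large checkpoint ships no processor config, load the processor from the base checkpoint instead:

```python
# extra dependencies: detectron2, torchvision, pytesseract (for built-in OCR)
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2Model

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-large-uncased")
model = LayoutLMv2Model.from_pretrained("microsoft/layoutlmv2-large-uncased")

image = Image.open("document.png").convert("RGB")  # illustrative document scan
encoding = processor(image, return_tensors="pt")   # runs OCR and builds text + layout inputs
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)
```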
facebook/dpr-question_encoder-single-nq-base
7fee7988e53c713da8323b184f6015d47861a1bf
2020-11-25T16:59:20.000Z
[ "pytorch", "tf", "dpr", "feature-extraction", "transformers" ]
feature-extraction
false
facebook
null
facebook/dpr-question_encoder-single-nq-base
231,479
2
transformers
125
Entry not found
cardiffnlp/twitter-roberta-base-sentiment-latest
5916057ce88cf0a408a195082b6c06d3dce12552
2022-03-31T09:47:41.000Z
[ "pytorch", "tf", "roberta", "text-classification", "english", "arxiv:2202.03829", "transformers" ]
text-classification
false
cardiffnlp
null
cardiffnlp/twitter-roberta-base-sentiment-latest
231,133
29
transformers
126
---
language: english
widget:
- text: "Covid cases are increasing fast!"
---

# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2021)

This is a roBERTa-base model trained on ~124M tweets from January 2018 to December 2021 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m)), and finetuned for sentiment analysis with the TweetEval benchmark. The original roBERTa-base model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.

- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).

<b>Labels</b>: 0 -> Negative; 1 -> Neutral; 2 -> Positive

## Example Pipeline

```python
from transformers import pipeline

model_path = "cardiffnlp/twitter-roberta-base-sentiment-latest"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Covid cases are increasing fast!")
```

```
[{'label': 'Negative', 'score': 0.7236}]
```

## Full classification example

```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax

# Preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)

# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
#model.save_pretrained(MODEL)

text = "Covid cases are increasing fast!"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)

# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Covid cases are increasing fast!"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)

# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
    l = config.id2label[ranking[i]]
    s = scores[ranking[i]]
    print(f"{i+1}) {l} {np.round(float(s), 4)}")
```

Output:

```
1) Negative 0.7236
2) Neutral 0.2287
3) Positive 0.0477
```
deepparag/Aeona
4b980c2b6b62850536ce4354e1945b8f4d778f62
2022-07-23T05:30:35.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational", "license:mit" ]
conversational
false
deepparag
null
deepparag/Aeona
228,625
8
transformers
127
---
thumbnail: https://images-ext-2.discordapp.net/external/Wvtx1L98EbA7DR2lpZPbDxDuO4qmKt03nZygATZtXgk/%3Fsize%3D4096/https/cdn.discordapp.com/avatars/931226824753700934/338a9e413bbceaeb9095a29e97d4fac0.png
tags:
- conversational
license: mit
---

# Aeona | Chatbot

![Aeona Banner](https://github.com/deepsarda/Aeona/blob/master/dashboard/static/banner.png?raw=true)

A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).

It is recommended to use this model along with an [AIML chatbot](https://github.com/deepsarda/Aeona-Aiml) to reduce load, get better replies, and add a name and personality to your bot. Using an AIML chatbot will also allow you to hardcode some replies.

# AEONA

Aeona is a chatbot which hopes to be able to talk with humans as if it were a friend! Its main target platform is Discord. You can invite the bot [here](https://aeona.xyz).

To learn more about this project and chat with the AI, you can use this [website](https://aeona.xyz/).

Aeona works by using the context of the previous messages, guessing the personality of the human who is talking with it, and adapting its own personality to better talk with the user.

# Participate and help the AI improve, or just hang out, at [hugging face discussions](https://huggingface.co/deepparag/Aeona/discussions)

## Goals

The goal is to create an AI which will work with AIML in order to create the most human-like AI.

#### Why not an AI on its own?

For an AI it is not (realistically) possible to learn about the user and store data on them, whereas an AIML chatbot can even execute code! The goal of the AI is to generate responses where the AIML fails. Hence the goal becomes to make an AI which has a wide variety of knowledge, yet is as small as possible! So we use 3 datasets:

1. [Movielines](https://www.kaggle.com/Cornell-University/movie-dialog-corpus) The movie lines promote longer and more thought-out responses, but they can be very random. About 200k lines!
2. [Discord Messages](https://www.kaggle.com/jef1056/discord-data) The messages cover a wide variety of topics, filtered and with spam removed, which makes the AI rather random but gives it a response to everyday questions! About 120 million messages!
3. A custom dataset scraped from my messages. These messages are very narrow; training on this dataset alone and sending it a random reply will make the AI say sorry loads of times!

## Training

The Discord Messages dataset simply dwarfs the other datasets, hence the smaller datasets are repeated. This leads to them covering each other's issues! The AI has a context of 6 messages, which means it will reply using up to the 4th previous message from the user. [Example](https://huggingface.co/deepparag/Aeona-Beta/discussions/1)

## Tips for Hugging Face inference

I recommend sending the user input together with the previous 3 AI and human responses. Using more context than this will lead to useless responses, but using less is alright; the responses may just be more random.

## Evaluation

Below is a comparison of Aeona vs. other baselines on the mixed dataset given above, using automatic evaluation metrics.

| Model | Perplexity |
|---|---|
| Seq2seq Baseline [3] | 29.8 |
| Wolf et al. [5] | 16.3 |
| GPT-2 baseline | 99.5 |
| DialoGPT baseline | 56.6 |
| DialoGPT finetuned | 11.4 |
| PersonaGPT | 10.2 |
| **Aeona** | **7.9** |

## Usage

Example:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("deepparag/Aeona")
model = AutoModelWithLMHead.from_pretrained("deepparag/Aeona")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=4,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("Aeona: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
flair/pos-english
22061e2903f36383754ba0101bc988c432aa4e06
2021-03-02T22:20:07.000Z
[ "pytorch", "en", "dataset:ontonotes", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/pos-english
227,637
5
flair
128
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- ontonotes
widget:
- text: "I love Berlin."
---

## English Part-of-Speech Tagging in Flair (default model)

This is the standard part-of-speech tagging model for English that ships with [Flair](https://github.com/flairNLP/flair/).

F1-Score: **98,19** (Ontonotes)

Predicts fine-grained POS tags:

| **tag** | **meaning** |
|---------|-------------|
|ADD | Email |
|AFX | Affix |
|CC | Coordinating conjunction |
|CD | Cardinal number |
|DT | Determiner |
|EX | Existential there |
|FW | Foreign word |
|HYPH | Hyphen |
|IN | Preposition or subordinating conjunction |
|JJ | Adjective |
|JJR | Adjective, comparative |
|JJS | Adjective, superlative |
|LS | List item marker |
|MD | Modal |
|NFP | Superfluous punctuation |
|NN | Noun, singular or mass |
|NNP | Proper noun, singular |
|NNPS | Proper noun, plural |
|NNS | Noun, plural |
|PDT | Predeterminer |
|POS | Possessive ending |
|PRP | Personal pronoun |
|PRP$ | Possessive pronoun |
|RB | Adverb |
|RBR | Adverb, comparative |
|RBS | Adverb, superlative |
|RP | Particle |
|SYM | Symbol |
|TO | to |
|UH | Interjection |
|VB | Verb, base form |
|VBD | Verb, past tense |
|VBG | Verb, gerund or present participle |
|VBN | Verb, past participle |
|VBP | Verb, non-3rd person singular present |
|VBZ | Verb, 3rd person singular present |
|WDT | Wh-determiner |
|WP | Wh-pronoun |
|WP$ | Possessive wh-pronoun |
|WRB | Wh-adverb |
|XX | Unknown |

Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.

---

### Demo: How to use in Flair

Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/pos-english")

# make example sentence
sentence = Sentence("I love Berlin.")

# predict POS tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted POS spans
print('The following POS tags are found:')

# iterate over entities and print
for entity in sentence.get_spans('pos'):
    print(entity)
```

This yields the following output:

```
Span [1]: "I"   [− Labels: PRP (1.0)]
Span [2]: "love"   [− Labels: VBP (1.0)]
Span [3]: "Berlin"   [− Labels: NNP (0.9999)]
Span [4]: "."   [− Labels: . (1.0)]
```

So, the word "*I*" is labeled as a **pronoun** (PRP), "*love*" is labeled as a **verb** (VBP) and "*Berlin*" is labeled as a **proper noun** (NNP) in the sentence "*I love Berlin*".

---

### Training: Script to train this model

The following Flair script was used to train this model:

```python
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings

# 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself)
corpus: Corpus = ColumnCorpus(
    "resources/tasks/onto-ner",
    column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"},
    tag_to_bioes="ner",
)

# 2. what tag do we want to predict?
tag_type = 'pos'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize each embedding we use
embedding_types = [
    # contextual string embeddings, forward
    FlairEmbeddings('news-forward'),
    # contextual string embeddings, backward
    FlairEmbeddings('news-backward'),
]

# embedding stack consists of the forward and backward Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)

# 5. initialize sequence tagger
from flair.models import SequenceTagger

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type)

# 6. initialize trainer
from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus)

# 7. run training
trainer.train('resources/taggers/pos-english',
              train_with_dev=True,
              max_epochs=150)
```

---

### Cite

Please cite the following paper when using this model.

```
@inproceedings{akbik2018coling,
  title={Contextual String Embeddings for Sequence Labeling},
  author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
  booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
  pages     = {1638--1649},
  year      = {2018}
}
```

---

### Issues?

The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
google/mt5-small
f03a52d3eaa650878b6f52e443bc4d5b385e786e
2022-05-27T15:06:24.000Z
[ "pytorch", "tf", "jax", "mt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2010.11934", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/mt5-small
223,820
19
transformers
129
---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---

[Google's mT5](https://github.com/google-research/multilingual-t5)

mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages:

Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.

**Note**: mT5 was only pre-trained on mC4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)

Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5)

Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)

Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel*

## Abstract

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
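Since the checkpoint is pretraining-only, a minimal sketch of loading it ahead of fine-tuning with 🤗 Transformers may help; this is not from the original card, and the translation-style prompt below is only illustrative toy data (mT5 is not trained on task prefixes).

```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

# Load the pretraining-only checkpoint; it still needs task-specific fine-tuning.
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# Illustrative forward pass on a hypothetical seq2seq training pair.
inputs = tokenizer("translate English to German: Hello, world!", return_tensors="pt")
labels = tokenizer("Hallo, Welt!", return_tensors="pt").input_ids
outputs = model(**inputs, labels=labels)
print(float(outputs.loss))  # the training loss you would minimize during fine-tuning
```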
google/t5-v1_1-base
650d7745bf1e502d6949b22cc19155cd656d3d4e
2021-06-23T01:54:44.000Z
[ "pytorch", "tf", "jax", "t5", "text2text-generation", "en", "dataset:c4", "arxiv:2002.05202", "arxiv:1910.10683", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/t5-v1_1-base
215,738
14
transformers
130
---
language: en
datasets:
- c4
license: apache-2.0
---

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1

## Version 1.1

[T5 Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) includes the following improvements compared to the original T5 model:

- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only, without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.

**Note**: T5 Version 1.1 was only pre-trained on C4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)

Other Community Checkpoints: [here](https://huggingface.co/models?search=t5-v1_1)

Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)

Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*

## Abstract

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
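As a minimal sketch (not from the original card), here is how the pretraining-only checkpoint could be loaded and probed with a span-corruption-style input, mirroring the C4 objective; the example sentence is arbitrary.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Pretraining-only checkpoint: fine-tune before use on a downstream task.
tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")

# Span-corruption-style input: <extra_id_*> sentinels mark masked spans.
inputs = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt")
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
print(float(loss))
```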
microsoft/deberta-v2-xlarge
30597019711d3531f994d1e21defffd0d8cd55ab
2022-01-13T17:21:41.000Z
[ "pytorch", "tf", "deberta-v2", "en", "arxiv:2006.03654", "transformers", "deberta", "license:mit" ]
null
false
microsoft
null
microsoft/deberta-v2-xlarge
215,543
4
transformers
131
---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---

## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.

Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.

This is the DeBERTa V2 xlarge model with 24 layers and a hidden size of 1536. It has 900M parameters in total and was trained with 160GB of raw data.

### Fine-tuning on NLU tasks

We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.

| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
|                           | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |

--------

#### Notes.

- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**

```bash
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \
  --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```

### Citation

If you find DeBERTa useful for your work, please cite the following paper:

```latex
@inproceedings{
  he2021deberta,
  title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
  author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
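For completeness, a minimal sketch of loading this checkpoint for feature extraction with 🤗 Transformers (not from the original card; the example sentence is arbitrary):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Requires sentencepiece for the DeBERTa V2 tokenizer.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = AutoModel.from_pretrained("microsoft/deberta-v2-xlarge")

inputs = tokenizer("DeBERTa improves BERT with disentangled attention.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 1536)
print(hidden.shape)
```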
gpt2-large
e5ab12c7d42b9e60a6025476a688aab2c5695189
2022-07-22T07:59:04.000Z
[ "pytorch", "tf", "jax", "rust", "gpt2", "text-generation", "en", "arxiv:1910.09700", "transformers", "license:mit" ]
text-generation
false
null
null
gpt2-large
212,520
16
transformers
132
---
language: en
license: mit
---

# GPT-2 Large

## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)

## Model Details

**Model Description:** GPT-2 Large is the **774M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on the English language using a causal language modeling (CLM) objective.

- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
  - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
  - [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
  - [GitHub Repo](https://github.com/openai/gpt-2)
  - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
  - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large

## How to Get Started with the Model

Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)

[{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"},
 {'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"},
 {'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"},
 {'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"},
 {'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import GPT2Tokenizer, TFGPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = TFGPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Uses

#### Direct Use

In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:

> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.

#### Downstream Use

In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:

> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.

#### Misuse and Out-of-scope Use

In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
For example:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The man worked as a security guard in a hotel'},
 {'generated_text': 'The man worked as a salesman in Mexico and in'},
 {'generated_text': 'The man worked as a supervisor at the warehouse for'},
 {'generated_text': "The man worked as a cleaner for the store's"},
 {'generated_text': 'The man worked as a barbershop apprentice.'}]

>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The woman worked as a clerk at the bank.'},
 {'generated_text': 'The woman worked as a caregiver, and her'},
 {'generated_text': 'The woman worked as a customer service agent for a'},
 {'generated_text': 'The woman worked as a cleaner at the store,'},
 {'generated_text': 'The woman worked as a barista and was "'}]
```

This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## Training

#### Training Data

The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt).

#### Training Procedure

The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks.

The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.

## Evaluation

The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).

#### Testing Data, Factors and Metrics

The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:

> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.

#### Results

The model achieves the following results without any fine-tuning (zero-shot):

| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
|          | 10.87 | 60.12 | 93.45 | 88.0 | 19.93 | 40.31 | 0.97 | 1.02 | 22.05 | 44.575|

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown

## Technical Specifications

See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.

## Citation Information

```bibtex
@article{radford2019language,
  title={Language models are unsupervised multitask learners},
  author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
  journal={OpenAI blog},
  volume={1},
  number={8},
  pages={9},
  year={2019}
}
```

## Model Card Authors

This model card was written by the Hugging Face team.
finiteautomata/beto-sentiment-analysis
2d232b7b937ca0f6940f6b32ce5aaaeb012d8b38
2022-06-22T13:46:19.000Z
[ "pytorch", "jax", "bert", "text-classification", "es", "arxiv:2106.09462", "transformers", "sentiment-analysis" ]
text-classification
false
finiteautomata
null
finiteautomata/beto-sentiment-analysis
211,263
11
transformers
133
---
language:
- es
tags:
- sentiment-analysis
---

# Sentiment Analysis in Spanish

## beto-sentiment-analysis

Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/pysentimiento/pysentimiento/)

Model trained with the TASS 2020 corpus (around 5k tweets) covering several dialects of Spanish. The base model is [BETO](https://github.com/dccuchile/beto), a BERT model trained in Spanish.

Uses `POS`, `NEG`, `NEU` labels.

## License

`pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses.

1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php)
2. [SEMEval 2017 Dataset license]()

## Citation

If you use `pysentimiento` in your work, please cite [this paper](https://arxiv.org/abs/2106.09462)

```
@misc{perez2021pysentimiento,
      title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
      author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
      year={2021},
      eprint={2106.09462},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

Enjoy! 🤗
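A minimal usage sketch with the 🤗 Transformers pipeline (not from the original card; the authors' `pysentimiento` library is the intended entry point, and the example tweet and output are illustrative):

```python
from transformers import pipeline

# Returns one of the POS / NEG / NEU labels described above.
classifier = pipeline("text-classification", model="finiteautomata/beto-sentiment-analysis")
print(classifier("¡Qué gran día!"))  # e.g. [{'label': 'POS', 'score': ...}]
```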
princeton-nlp/sup-simcse-bert-base-uncased
2d82fab19ac3a73a20dd20333d27eb8a52d6e97f
2021-05-20T02:54:31.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
princeton-nlp
null
princeton-nlp/sup-simcse-bert-base-uncased
209,700
4
transformers
134
Entry not found
GroNLP/bert-base-dutch-cased
484ff5cec2ad42b434537dadd901d9b8e2b64cd2
2021-08-25T15:20:21.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "nl", "arxiv:1912.09582", "transformers", "BERTje", "autotrain_compatible" ]
fill-mask
false
GroNLP
null
GroNLP/bert-base-dutch-cased
205,618
4
transformers
135
---
language: nl
thumbnail: "https://raw.githubusercontent.com/wietsedv/bertje/master/bertje.png"
tags:
- BERTje
---

# BERTje: A Dutch BERT model

[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Andreas van Cranenburgh](https://www.semanticscholar.org/author/Andreas-van-Cranenburgh/2791585) •
[Arianna Bisazza](https://www.semanticscholar.org/author/Arianna-Bisazza/3242253) •
[Tommaso Caselli](https://www.semanticscholar.org/author/Tommaso-Caselli/1864635) •
[Gertjan van Noord](https://www.semanticscholar.org/author/Gertjan-van-Noord/143715131) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)

## Model description

BERTje is a Dutch pre-trained BERT model developed at the University of Groningen.

<img src="https://raw.githubusercontent.com/wietsedv/bertje/master/bertje.png" height="250">

For details, check out our paper on [arXiv](https://arxiv.org/abs/1912.09582), the code on [Github](https://github.com/wietsedv/bertje) and related work on [Semantic Scholar](https://www.semanticscholar.org/paper/BERTje%3A-A-Dutch-BERT-Model-Vries-Cranenburgh/a4d5e425cac0bf84c86c0c9f720b6339d6288ffa).

The paper and Github page mention fine-tuned models that are available [here](https://huggingface.co/wietsedv).

## How to use

```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")  # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")  # Tensorflow
```

**WARNING:** The vocabulary size of BERTje has changed in 2021. If you use an older fine-tuned model and experience problems with the `GroNLP/bert-base-dutch-cased` tokenizer, use the following tokenizer:

```python
tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased", revision="v1")  # v1 is the old vocabulary
```

## Benchmarks

The arXiv paper lists benchmarks. Here are a couple of comparisons between BERTje, multilingual BERT, BERT-NL and RobBERT that were done after writing the paper. Unlike some other comparisons, the fine-tuning procedures for these benchmarks are identical for each pre-trained model. You may be able to achieve higher scores for individual models by optimizing fine-tuning procedures.

More experimental results will be added to this page when they are finished. Technical details about how we fine-tuned these models will be published later, as well as downloadable fine-tuned checkpoints.

All of the tested models are *base* sized (12 layers) with cased tokenization.

Headers in the tables below link to original data sources. Scores link to the model pages that correspond to that specific fine-tuned model. These tables will be updated when more simple fine-tuned models are made available.

### Named Entity Recognition

| Model | [CoNLL-2002](https://www.clips.uantwerpen.be/conll2002/ner/) | [SoNaR-1](https://ivdnt.org/downloads/taalmaterialen/tstc-sonar-corpus) | spaCy UD LassySmall |
| --- | --- | --- | --- |
| **BERTje** | [**90.24**](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-conll2002-ner) | [**84.93**](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-sonar-ner) | [86.10](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-udlassy-ner) |
| [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) | [88.61](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-conll2002-ner) | [84.19](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-sonar-ner) | [**86.77**](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-udlassy-ner) |
| [BERT-NL](http://textdata.nl) | 85.05 | 80.45 | 81.62 |
| [RobBERT](https://github.com/iPieter/RobBERT) | 84.72 | 81.98 | 79.84 |

### Part-of-speech tagging

| Model | [UDv2.5 LassySmall](https://universaldependencies.org/treebanks/nl_lassysmall/index.html) |
| --- | --- |
| **BERTje** | **96.48** |
| [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) | 96.20 |
| [BERT-NL](http://textdata.nl) | 96.10 |
| [RobBERT](https://github.com/iPieter/RobBERT) | 95.91 |

### BibTeX entry and citation info

```bibtex
@misc{devries2019bertje,
	title = {{BERTje}: {A} {Dutch} {BERT} {Model}},
	shorttitle = {{BERTje}},
	author = {de Vries, Wietse and van Cranenburgh, Andreas and Bisazza, Arianna and Caselli, Tommaso and Noord, Gertjan van and Nissim, Malvina},
	year = {2019},
	month = dec,
	howpublished = {arXiv:1912.09582},
	url = {http://arxiv.org/abs/1912.09582},
}
```
shibing624/text2vec-base-chinese
b455bb011898ad5d8b16cea238d070cd34db4b05
2022-03-14T06:43:16.000Z
[ "pytorch", "bert", "feature-extraction", "transformers", "text2vec", "sentence-similarity", "license:apache-2.0" ]
sentence-similarity
false
shibing624
null
shibing624/text2vec-base-chinese
202,918
4
transformers
136
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- text2vec
- feature-extraction
- sentence-similarity
- transformers
---

# shibing624/text2vec-base-chinese

This is a CoSENT (Cosine Sentence) model: shibing624/text2vec-base-chinese.

It maps sentences to a 768-dimensional dense vector space and can be used for tasks like sentence embeddings, text matching or semantic search.

## Evaluation

For an automated evaluation of this model, see the *Evaluation Benchmark*: [text2vec](https://github.com/shibing624/text2vec)

- Chinese text matching task:

| Model Name | ATEC | BQ | LCQMC | PAWSX | STS-B | Avg | QPS |
| :---- | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| w2v-light-tencent-chinese | 20.00 | 31.49 | 59.46 | 2.57 | 55.78 | 33.86 | 10283 |
| paraphrase-multilingual-MiniLM-L12-v2 | 18.42 | 38.52 | 63.96 | 10.14 | 78.90 | 41.99 | 2371 |
| text2vec-base-chinese | 31.93 | 42.67 | 70.16 | 17.21 | 79.30 | **48.25** | 2572 |

## Usage (text2vec)

Using this model becomes easy when you have [text2vec](https://github.com/shibing624/text2vec) installed:

```
pip install -U text2vec
```

Then you can use the model like this:

```python
from text2vec import SentenceModel

sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']

model = SentenceModel('shibing624/text2vec-base-chinese')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [text2vec](https://github.com/shibing624/text2vec), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

Install transformers:

```
pip install transformers
```

Then load the model and predict:

```python
from transformers import BertTokenizer, BertModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Load model from HuggingFace Hub
tokenizer = BertTokenizer.from_pretrained('shibing624/text2vec-base-chinese')
model = BertModel.from_pretrained('shibing624/text2vec-base-chinese')

sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```

## Usage (sentence-transformers)

[sentence-transformers](https://github.com/UKPLab/sentence-transformers) is a popular library to compute dense vector representations for sentences.

Install sentence-transformers:

```
pip install -U sentence-transformers
```

Then load the model and predict:

```python
from sentence_transformers import SentenceTransformer

m = SentenceTransformer("shibing624/text2vec-base-chinese")
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']

sentence_embeddings = m.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```

## Full Model Architecture

```
CoSENT(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_mean_tokens': True})
)
```

## Citing & Authors

This model was trained by [text2vec/cosent](https://github.com/shibing624/text2vec/tree/master/text2vec/cosent).

If you find this model helpful, feel free to cite:

```bibtex
@software{text2vec,
  author = {Xu Ming},
  title = {text2vec: A Tool for Text to Vector},
  year = {2022},
  url = {https://github.com/shibing624/text2vec},
}
```
flair/pos-english-fast
78bf413a631e2de4cb977e1f2794295d981e4c13
2021-03-02T22:19:11.000Z
[ "pytorch", "en", "dataset:ontonotes", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/pos-english-fast
202,393
2
flair
137
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- ontonotes
widget:
- text: "I love Berlin."
---

## English Part-of-Speech Tagging in Flair (fast model)

This is the fast part-of-speech tagging model for English that ships with [Flair](https://github.com/flairNLP/flair/).

F1-Score: **98,10** (Ontonotes)

Predicts fine-grained POS tags:

| **tag** | **meaning** |
|---------|-------------|
|ADD | Email |
|AFX | Affix |
|CC | Coordinating conjunction |
|CD | Cardinal number |
|DT | Determiner |
|EX | Existential there |
|FW | Foreign word |
|HYPH | Hyphen |
|IN | Preposition or subordinating conjunction |
|JJ | Adjective |
|JJR | Adjective, comparative |
|JJS | Adjective, superlative |
|LS | List item marker |
|MD | Modal |
|NFP | Superfluous punctuation |
|NN | Noun, singular or mass |
|NNP | Proper noun, singular |
|NNPS | Proper noun, plural |
|NNS | Noun, plural |
|PDT | Predeterminer |
|POS | Possessive ending |
|PRP | Personal pronoun |
|PRP$ | Possessive pronoun |
|RB | Adverb |
|RBR | Adverb, comparative |
|RBS | Adverb, superlative |
|RP | Particle |
|SYM | Symbol |
|TO | to |
|UH | Interjection |
|VB | Verb, base form |
|VBD | Verb, past tense |
|VBG | Verb, gerund or present participle |
|VBN | Verb, past participle |
|VBP | Verb, non-3rd person singular present |
|VBZ | Verb, 3rd person singular present |
|WDT | Wh-determiner |
|WP | Wh-pronoun |
|WP$ | Possessive wh-pronoun |
|WRB | Wh-adverb |
|XX | Unknown |

Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.

---

### Demo: How to use in Flair

Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/pos-english-fast")

# make example sentence
sentence = Sentence("I love Berlin.")

# predict POS tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted POS spans
print('The following POS tags are found:')

# iterate over entities and print
for entity in sentence.get_spans('pos'):
    print(entity)
```

This yields the following output:

```
Span [1]: "I"   [− Labels: PRP (1.0)]
Span [2]: "love"   [− Labels: VBP (0.9998)]
Span [3]: "Berlin"   [− Labels: NNP (0.9999)]
Span [4]: "."   [− Labels: . (0.9998)]
```

So, the word "*I*" is labeled as a **pronoun** (PRP), "*love*" is labeled as a **verb** (VBP) and "*Berlin*" is labeled as a **proper noun** (NNP) in the sentence "*I love Berlin*".

---

### Training: Script to train this model

The following Flair script was used to train this model:

```python
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings

# 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself)
corpus: Corpus = ColumnCorpus(
    "resources/tasks/onto-ner",
    column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"},
    tag_to_bioes="ner",
)

# 2. what tag do we want to predict?
tag_type = 'pos'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize each embedding we use
embedding_types = [
    # contextual string embeddings, forward
    FlairEmbeddings('news-forward'),
    # contextual string embeddings, backward
    FlairEmbeddings('news-backward'),
]

# embedding stack consists of the forward and backward Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)

# 5. initialize sequence tagger
from flair.models import SequenceTagger

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type)

# 6. initialize trainer
from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus)

# 7. run training
trainer.train('resources/taggers/pos-english-fast',
              train_with_dev=True,
              max_epochs=150)
```

---

### Cite

Please cite the following paper when using this model.

```
@inproceedings{akbik2018coling,
  title={Contextual String Embeddings for Sequence Labeling},
  author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
  booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
  pages     = {1638--1649},
  year      = {2018}
}
```

---

### Issues?

The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
google/bigbird-roberta-base
5a145f7852cba9bd431386a58137bf8a29903b90
2021-06-02T14:30:54.000Z
[ "pytorch", "jax", "big_bird", "pretraining", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:cc_news", "arxiv:2007.14062", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bigbird-roberta-base
199,034
17
transformers
138
---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- cc_news
---

# BigBird base model

BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

It is a pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long document summarization and question answering with long contexts.

## How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BigBirdTokenizer, BigBirdModel

# by default the model is in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base")

# you can change `attention_type` to full attention like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", block_size=16, num_random_blocks=2)

# load the matching tokenizer to encode text
tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Training Data

This model is pre-trained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It uses the same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).

## Training Procedure

Documents longer than 4096 tokens were split into multiple documents, and documents that were much shorter than 4096 were joined. Following the original BERT training, 15% of tokens were masked and the model was trained to predict the masked tokens. The model was warm-started from RoBERTa's checkpoint.

## BibTeX entry and citation info

```tex
@misc{zaheer2021big,
      title={Big Bird: Transformers for Longer Sequences},
      author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
      year={2021},
      eprint={2007.14062},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
huggingface/CodeBERTa-small-v1
e93b5898cff07f03f1c1c09cde284d1b85962363
2022-06-27T15:48:41.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "code", "dataset:code_search_net", "arxiv:1909.09436", "transformers", "autotrain_compatible" ]
fill-mask
false
huggingface
null
huggingface/CodeBERTa-small-v1
198,709
16
transformers
139
---
language: code
thumbnail: https://cdn-media.huggingface.co/CodeBERTa/CodeBERTa.png
datasets:
- code_search_net
---

# CodeBERTa

CodeBERTa is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub.

Supported languages:

```shell
"go"
"java"
"javascript"
"php"
"python"
"ruby"
```

The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`.

Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are between 33% and 50% shorter, compared to the same corpus tokenized by gpt2/roberta).

The (small) **model** is a 6-layer, 84M-parameter, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full corpus (~2M functions) for 5 epochs.

### Tensorboard for this training ⤵️

[![tb](https://cdn-media.huggingface.co/CodeBERTa/tensorboard.png)](https://tensorboard.dev/experiment/irRI7jXGQlqmlxXS0I07ew/#scalars)

## Quick start: masked language modeling prediction

```python
PHP_CODE = """
public static <mask> set(string $key, $value) {
	if (!in_array($key, self::$allowedKeys)) {
		throw new \InvalidArgumentException('Invalid key given');
	}
	self::$storedValues[$key] = $value;
}
""".lstrip()
```

### Does the model know how to complete simple PHP code?

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="huggingface/CodeBERTa-small-v1",
    tokenizer="huggingface/CodeBERTa-small-v1"
)

fill_mask(PHP_CODE)

## Top 5 predictions:
#
# ' function'  # prob 0.9999827146530151
# 'function'
# ' void'
# ' def'
# ' final'
```

### Yes! That was easy 🎉 What about some Python (warning: this is going to be meta)

```python
PYTHON_CODE = """
def pipeline(
    task: str,
    model: Optional = None,
    framework: Optional[<mask>] = None,
    **kwargs
) -> Pipeline:
    pass
""".lstrip()
```

Results:

```python
'framework', 'Framework', ' framework', 'None', 'str'
```

> This program can auto-complete itself! 😱

### Just for fun, let's try to mask natural language (not code):

```python
fill_mask("My name is <mask>.")

# {'sequence': '<s> My name is undefined.</s>', 'score': 0.2548016905784607, 'token': 3353}
# {'sequence': '<s> My name is required.</s>', 'score': 0.07290805131196976, 'token': 2371}
# {'sequence': '<s> My name is null.</s>', 'score': 0.06323737651109695, 'token': 469}
# {'sequence': '<s> My name is name.</s>', 'score': 0.021919190883636475, 'token': 652}
# {'sequence': '<s> My name is disabled.</s>', 'score': 0.019681859761476517, 'token': 7434}
```

This (kind of) works because code contains comments (which contain natural language).

Of course, the most frequent name for a Computer scientist must be undefined 🤓.

## Downstream task: [programming language identification](https://huggingface.co/huggingface/CodeBERTa-language-id)

See the model card for **[`huggingface/CodeBERTa-language-id`](https://huggingface.co/huggingface/CodeBERTa-language-id)** 🤯.

<br>

## CodeSearchNet citation

<details>

```bibtex
@article{husain_codesearchnet_2019,
	title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}},
	shorttitle = {{CodeSearchNet} {Challenge}},
	url = {http://arxiv.org/abs/1909.09436},
	urldate = {2020-03-12},
	journal = {arXiv:1909.09436 [cs, stat]},
	author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
	month = sep,
	year = {2019},
	note = {arXiv: 1909.09436},
}
```

</details>
cl-tohoku/bert-base-japanese
5dc6dbba88a42d21da3b71025c109c42462307f2
2021-09-23T13:45:36.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ja", "dataset:wikipedia", "transformers", "license:cc-by-sa-4.0", "autotrain_compatible" ]
fill-mask
false
cl-tohoku
null
cl-tohoku/bert-base-japanese
196,583
4
transformers
140
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東北大学で[MASK]の研究をしています。
---

# BERT base Japanese (IPA dictionary)

This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.

This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by WordPiece subword tokenization.

The code for pretraining is available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).

## Model architecture

The model architecture is the same as the original BERT base model: 12 layers, 768 dimensions of hidden states, and 12 attention heads.

## Training Data

The model is trained on Japanese Wikipedia as of September 1, 2019. To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles. The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.

## Tokenization

The texts are first tokenized by the [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm. The vocabulary size is 32000.

## Training

The model is trained with the same configuration as the original BERT: 512 tokens per instance, 256 instances per batch, and 1M training steps.

## Licenses

The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).

## Acknowledgments

For training models, we used Cloud TPUs provided by the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
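A minimal fill-mask sketch with 🤗 Transformers (not part of the original card; the example sentence comes from the widget above, and the MeCab-based tokenizer assumes `fugashi` and `ipadic` are installed):

```python
from transformers import pipeline

# requires: pip install fugashi ipadic
fill_mask = pipeline("fill-mask", model="cl-tohoku/bert-base-japanese")
for pred in fill_mask("東北大学で[MASK]の研究をしています。"):
    print(pred["token_str"], pred["score"])
```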
ramsrigouthamg/t5_sentence_paraphraser
6887902ca669ce785cb8a01b3425e843011bc110
2021-06-23T13:47:31.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
ramsrigouthamg
null
ramsrigouthamg/t5_sentence_paraphraser
190,848
3
transformers
141
Entry not found
ckiplab/albert-tiny-chinese
d1edf497761caf4fdd83d2c4488132a8c56f9e3c
2022-05-10T03:28:09.000Z
[ "pytorch", "albert", "fill-mask", "zh", "transformers", "lm-head", "license:gpl-3.0", "autotrain_compatible" ]
fill-mask
false
ckiplab
null
ckiplab/albert-tiny-chinese
186,457
4
transformers
142
---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- lm-head
- albert
- zh
license: gpl-3.0
---

# CKIP ALBERT Tiny Chinese

This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).

這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。

## Homepage

- https://github.com/ckiplab/ckip-transformers

## Contributors

- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)

## Usage

Please use BertTokenizerFast as the tokenizer instead of AutoTokenizer.

請使用 BertTokenizerFast 而非 AutoTokenizer。

```
from transformers import (
    BertTokenizerFast,
    AutoModel,
)

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese')
```

For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.

有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
mrm8488/t5-base-finetuned-summarize-news
ada499546852c489d6327cae23439ec309f6869f
2022-01-18T15:07:32.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "en", "arxiv:1910.10683", "transformers", "news", "summary", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/t5-base-finetuned-summarize-news
181,482
6
transformers
143
--- language: en tags: - news - summary --- # T5-base fine-tuned fo News Summarization 📖✏️🧾 All credits to [Abhishek Kumar Mishra](https://github.com/abhimishra91) [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on [News Summary](https://www.kaggle.com/sunnysai12345/news-summary) dataset for **summarization** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Summarization) - Dataset 📚 [News Summary](https://www.kaggle.com/sunnysai12345/news-summary) The dataset consists of **4515 examples** and contains Author_name, Headlines, Url of Article, Short text, Complete Article. I gathered the summarized news from Inshorts and only scraped the news articles from Hindu, Indian times and Guardian. Time period ranges from febrauary to august 2017. ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) created by [Abhishek Kumar Mishra](https://github.com/abhimishra91), so all credits to him! I also trained the model for more epochs (6). ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-summarize-news") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-summarize-news") def summarize(text, max_length=150): input_ids = tokenizer.encode(text, return_tensors="pt", add_special_tokens=True) generated_ids = model.generate(input_ids=input_ids, num_beams=2, max_length=max_length, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True) preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids] return preds[0] ``` Given the following article from **NYT** (2020/06/09) with title *George Floyd’s death energized a movement. 
He will be buried in Houston today*:

After the sound and the fury, weeks of demonstrations and anguished calls for racial justice, the man whose death gave rise to an international movement, and whose last words — “I can’t breathe” — have been a rallying cry, will be laid to rest on Tuesday at a private funeral in Houston.George Floyd, who was 46, will then be buried in a grave next to his mother’s.The service, scheduled to begin at 11 a.m. at the Fountain of Praise church, comes after five days of public memorials in Minneapolis, North Carolina and Houston and two weeks after a Minneapolis police officer was caught on video pressing his knee into Mr. Floyd’s neck for nearly nine minutes before Mr. Floyd died. That officer, Derek Chauvin, has been charged with second-degree murder and second-degree manslaughter. His bail was set at $1.25 million in a court appearance on Monday. The outpouring of anger and outrage after Mr. Floyd’s death — and the speed at which protests spread from tense, chaotic demonstrations in the city where he died to an international movement from Rome to Rio de Janeiro — has reflected the depth of frustration borne of years of watching black people die at the hands of the police or vigilantes while calls for change went unmet.

```
summarize('After the sound and the fury, weeks of demonstrations and anguished calls for racial justice, the man whose death gave rise to an international movement, and whose last words — “I can’t breathe” — have been a rallying cry, will be laid to rest on Tuesday at a private funeral in Houston.George Floyd, who was 46, will then be buried in a grave next to his mother’s.The service, scheduled to begin at 11 a.m. at the Fountain of Praise church, comes after five days of public memorials in Minneapolis, North Carolina and Houston and two weeks after a Minneapolis police officer was caught on video pressing his knee into Mr. Floyd’s neck for nearly nine minutes before Mr. Floyd died. That officer, Derek Chauvin, has been charged with second-degree murder and second-degree manslaughter. His bail was set at $1.25 million in a court appearance on Monday. The outpouring of anger and outrage after Mr. Floyd’s death — and the speed at which protests spread from tense, chaotic demonstrations in the city where he died to an international movement from Rome to Rio de Janeiro — has reflected the depth of frustration borne of years of watching black people die at the hands of the police or vigilantes while calls for change went unmet.', 80)
```

We would obtain:

At a private funeral in Houston. Floyd, who was 46 years old when his death occurred, will be buried next to the grave of his mother. A Minnesota police officer was caught on video pressing his knee into Mr's neck for nearly nine minutes before his death. The officer has been charged with second-degree manslaughter and $1.2 million bail is set at

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
flexudy/t5-base-multi-sentence-doctor
85ef24d555e2e6cabd5ce8264e9ce1627c406bad
2020-12-11T23:33:25.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
flexudy
null
flexudy/t5-base-multi-sentence-doctor
180,774
11
transformers
144
![avatar](sent-banner.png)

# Sentence-Doctor

Sentence doctor is a T5 model that attempts to correct the errors or mistakes found in sentences. The model works on English, German and French text.

## 1. Problem:

Many NLP models depend on tasks like *Text Extraction Libraries, OCR, Speech to Text libraries* and **Sentence Boundary Detection**.
As a consequence, errors caused by these tasks in your NLP pipeline can affect the quality of models in applications, especially since models are often trained on **clean** input.

## 2. Solution:

Here we provide a model that **attempts** to reconstruct sentences based on their context (surrounding text). The task is pretty straightforward:

* `Given an "erroneous" sentence, and its context, reconstruct the "intended" sentence`.

## 3. Use Cases:

* Attempt to repair noisy sentences that were extracted with OCR software or text extractors.
* Attempt to repair sentence boundaries.
  * Example (in German): **Input: "und ich bin im**",
    * Prefix_Context: "Hallo! Mein Name ist John", Postfix_Context: "Januar 1990 geboren."
    * Output: "John und ich bin im Jahr 1990 geboren"
* Possibly sentence-level spelling correction -- although this is not the intended use.
  * Input: "I went to church **las yesteday**" => Output: "I went to church last Sunday".

## 4. Disclaimer

Note how we always emphasise the word *attempt*. The current version of the model was only trained on **150K** sentences from the tatoeba dataset: https://tatoeba.org/eng (50K per language -- En, Fr, De). Hence, we strongly encourage you to fine-tune the model on your dataset. We might release a version trained on more data.

## 5. Datasets

We generated synthetic data from the tatoeba dataset: https://tatoeba.org/eng, randomly applying different transformations on words and characters based on some probabilities. The datasets are available in the data folder (where **sentence_doctor_dataset_300K** is a larger dataset with 100K sentences for each language). A hypothetical sketch of this kind of noising is shown at the end of this card.

## 6. Usage

### 6.1 Preprocessing

* Let us assume we have the following text (note that there are no punctuation marks in the text):

```python
text = "That is my job I am a medical doctor I save lives"
```

* You decided to extract the sentences, and for some obscure reason, you obtained these sentences:

```python
sentences = ["That is my job I a", "m a medical doct", "or I save lives"]
```

* You now wish to correct the sentence **"m a medical doct"**. Here is the single preprocessing step for the model:

```python
input_text = "repair_sentence: " + sentences[1] + " context: {" + sentences[0] + "}{" + sentences[2] + "} </s>"
```

**Explanation**:<br/>

* We are telling the model to repair the sentence with the prefix "repair_sentence: "
* Then we append the sentence we want to repair, **sentence[1]**, which is "m a medical doct"
* Next we give some context to the model. In this case, the context is some text that occurred before the sentence and some text that appeared after the sentence in the original text.
  * To do that, we append the keyword "context :"
  * Append **{sentence[0]}** "{That is my job I a}". (Note how it is surrounded by curly braces).
  * Append **{sentence[2]}** "{or I save lives}".
* At last we tell the model this is the end of the input with `</s>`.
```python
print(input_text)
# repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>
```

<br/>

**The context is optional**, so the input could also be: ```repair_sentence: m a medical doct context: {}{} </s>```

### 6.2 Inference

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
model = AutoModelWithLMHead.from_pretrained("flexudy/t5-base-multi-sentence-doctor")

input_text = "repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>"

input_ids = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(input_ids, max_length=32, num_beams=1)
sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)

assert sentence == "I am a medical doctor."
```

## 7. Fine-tuning

We also provide a script `train_any_t5_task.py` that might help you fine-tune any Text2Text task with T5. We added #TODO comments all over to help you train with ease. For example:

```python
# TODO Set your training epochs
config.TRAIN_EPOCHS = 3
```

If you don't want to read the #TODO comments, just pass in your data like this:

```python
# TODO Where is your data ? Enter the path
trainer.start("data/sentence_doctor_dataset_300.csv")
```

and voilà!! Please feel free to correct any mistakes in the code and make a pull request.

## 8. Attribution

* [Huggingface](https://huggingface.co/) transformer lib for making this possible
* Abhishek Kumar Mishra's transformer [tutorial](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) on text summarisation. Our training code is just a modified version of their code. So many thanks.
* We fine-tuned this model from the huggingface hub model WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the [authors](https://huggingface.co/WikinewsSum)
* We also read a lot of work from [Suraj Patil](https://github.com/patil-suraj)
* No one has been forgotten, hopefully :)
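## 9. Appendix: noising sketch

The corruption process from section 5 is described only at a high level. Below is a minimal, hypothetical sketch of that kind of word- and character-level noising; the transformation set and probabilities here are made up for illustration and are not the ones actually used to build the dataset.

```python
import random

def corrupt(sentence, p_char=0.05, p_drop_word=0.05):
    """Hypothetical noiser: randomly drops words and perturbs characters."""
    words = []
    for word in sentence.split():
        if random.random() < p_drop_word:
            continue  # randomly drop the whole word
        chars = list(word)
        for i in range(len(chars)):
            if random.random() < p_char:
                op = random.choice(["delete", "swap", "duplicate"])
                if op == "delete":
                    chars[i] = ""  # delete this character
                elif op == "swap" and i + 1 < len(chars):
                    chars[i], chars[i + 1] = chars[i + 1], chars[i]  # transpose neighbors
                else:
                    chars[i] = chars[i] * 2  # duplicate this character
        words.append("".join(chars))
    return " ".join(words)

random.seed(0)
print(corrupt("That is my job I am a medical doctor I save lives"))
```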
imone/pangu_2_6B
f6185c345ee59518384a463350bc834ace46e557
2021-12-13T06:34:22.000Z
[ "pytorch", "gpt_pangu", "text-generation", "transformers" ]
text-generation
false
imone
null
imone/pangu_2_6B
180,697
3
transformers
145
# Pangu-Alpha 2.6B

## Model Description

PanGu-α was proposed by a joint technical team headed by PCNL. It is the first large-scale Chinese pre-trained language model with 200 billion parameters, trained on 2048 Ascend processors using an automatic hybrid parallel training strategy. The whole training process was done on the “Peng Cheng Cloud Brain II” computing platform with the domestic deep learning framework called MindSpore. The PengCheng·PanGu-α pre-trained model supports rich applications, has strong few-shot learning capabilities, and performs outstandingly in text generation tasks such as knowledge question answering, knowledge retrieval, knowledge reasoning, and reading comprehension.

This repository contains a PyTorch implementation of the PanGu model, with 2.6 billion parameters of pretrained weights (FP32 precision), converted from the original MindSpore checkpoint.

## Usage (Text Generation)

Currently the PanGu model is not supported by `transformers`, so `trust_remote_code=True` is required to load the model implementation in this repo.

```python
from transformers import TextGenerationPipeline, AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("imone/pangu_2_6B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("imone/pangu_2_6B", trust_remote_code=True)
text_generator = TextGenerationPipeline(model, tokenizer)

# greedy search
print(text_generator("中国和美国和日本和法国和加拿大和澳大利亚的首都分别是哪里?", max_length=50))
```

Expected output:

```python
[{'generated_text': '中国和美国和日本和法国和加拿大和澳大利亚的首都分别是哪里?\n中国北京,美国华盛顿,日本东京,法国巴黎,加拿大多伦多,澳大利亚悉尼,新西兰奥克兰,澳大利亚墨尔本,新西兰奥克兰,'}]
```
bert-base-german-cased
702774c02b32a4f360d5fea60ab034d64bf0141c
2021-05-18T16:14:28.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "de", "transformers", "exbert", "license:mit", "autotrain_compatible" ]
fill-mask
false
null
null
bert-base-german-cased
178,265
14
transformers
146
---
language: de
license: mit
thumbnail: https://static.tildacdn.com/tild6438-3730-4164-b266-613634323466/german_bert.png
tags:
- exbert
---

<a href="https://huggingface.co/exbert/?model=bert-base-german-cased">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>

# German BERT

![bert_image](https://static.tildacdn.com/tild6438-3730-4164-b266-613634323466/german_bert.png)

## Overview

**Language model:** bert-base-cased
**Language:** German
**Training data:** Wiki, OpenLegalData, News (~ 12GB)
**Eval data:** Conll03 (NER), GermEval14 (NER), GermEval18 (Classification), GNAD (Classification)
**Infrastructure**: 1x TPU v2
**Published**: Jun 14th, 2019

**Update April 3rd, 2020**: we updated the vocabulary file on deepset's s3 to conform with the default tokenization of punctuation tokens. For details see the related [FARM issue](https://github.com/deepset-ai/FARM/issues/60). If you want to use the old vocab we have also uploaded a ["deepset/bert-base-german-cased-oldvocab"](https://huggingface.co/deepset/bert-base-german-cased-oldvocab) model.

## Details

- We trained using Google's Tensorflow code on a single cloud TPU v2 with standard settings.
- We trained 810k steps with a batch size of 1024 for sequence length 128 and 30k steps with sequence length 512. Training took about 9 days.
- As training data we used the latest German Wikipedia dump (6GB of raw txt files), the OpenLegalData dump (2.4 GB) and news articles (3.6 GB).
- We cleaned the data dumps with tailored scripts and segmented sentences with spacy v2.1. To create tensorflow records we used the recommended sentencepiece library for creating the word piece vocabulary and tensorflow scripts to convert the text to data usable by BERT.

See https://deepset.ai/german-bert for more details

## Hyperparameters

```
batch_size = 1024
n_steps = 810_000
max_seq_len = 128 (and 512 later)
learning_rate = 1e-4
lr_schedule = LinearWarmup
num_warmup_steps = 10_000
```

## Performance

During training we monitored the loss and evaluated different model checkpoints on the following German datasets:

- germEval18Fine: Macro f1 score for multiclass sentiment classification
- germEval18coarse: Macro f1 score for binary sentiment classification
- germEval14: Seq f1 score for NER (file names deuutf.\*)
- CONLL03: Seq f1 score for NER
- 10kGNAD: Accuracy for document classification

Even without thorough hyperparameter tuning, we observed quite stable learning, especially for our German model. Multiple restarts with different seeds produced quite similar results.

![performancetable](https://thumb.tildacdn.com/tild3162-6462-4566-b663-376630376138/-/format/webp/Screenshot_from_2020.png)

We further evaluated different points during the 9 days of pre-training and were astonished how fast the model converges to the maximally reachable performance. We ran all 5 downstream tasks on 7 different model checkpoints - taken at 0 up to 840k training steps (x-axis in figure below). Most checkpoints are taken from early training, where we expected most performance changes. Surprisingly, even a randomly initialized BERT can be trained only on labeled downstream datasets and reach good performance (blue line, GermEval 2018 Coarse task, 795 kB trainset size).
![checkpointseval](https://thumb.tildacdn.com/tild6335-3531-4137-b533-313365663435/-/format/webp/deepset_checkpoints.png)

## Authors

- Branden Chan: `branden.chan [at] deepset.ai`
- Timo Möller: `timo.moeller [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Tanay Soni: `tanay.soni [at] deepset.ai`

## About us

![deepset logo](https://raw.githubusercontent.com/deepset-ai/FARM/master/docs/img/deepset_logo.png)

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.

Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)

Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
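The card itself ships no usage snippet; as an addendum, here is a minimal fill-mask sketch for this model (the German example sentence is our own, not from deepset):

```python
from transformers import pipeline

# Predict the masked token with German BERT; [MASK] is the model's mask token.
fill_mask = pipeline("fill-mask", model="bert-base-german-cased")
for prediction in fill_mask("Der Zug fährt heute nach [MASK]."):
    print(prediction["token_str"], prediction["score"])
```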
Jean-Baptiste/roberta-large-ner-english
c272484a77f6bacd3569d32936fda04555fb4006
2022-07-04T15:02:50.000Z
[ "pytorch", "tf", "roberta", "token-classification", "en", "dataset:conll2003", "transformers", "autotrain_compatible" ]
token-classification
false
Jean-Baptiste
null
Jean-Baptiste/roberta-large-ner-english
175,900
6
transformers
147
---
language: en
datasets:
- conll2003
widget:
- text: "My name is jean-baptiste and I live in montreal"
- text: "My name is clara and I live in berkeley, california."
- text: "My name is wolfgang and I live in berlin"
train-eval-index:
- config: conll2003
  task: token-classification
  task_id: entity_extraction
  splits:
    eval_split: validation
  col_mapping:
    tokens: tokens
    ner_tags: tags
---

# roberta-large-ner-english: model fine-tuned from roberta-large for NER task

## Introduction

[roberta-large-ner-english] is an English NER model that was fine-tuned from roberta-large on the conll2003 dataset. The model was validated on emails/chat data and outperformed other models on this type of data specifically. In particular, the model seems to work better on entities that don't start with an upper case.

## Training data

Training data was classified as follows:

Abbreviation|Description
-|-
O |Outside of a named entity
MISC |Miscellaneous entity
PER |Person's name
ORG |Organization
LOC |Location

In order to simplify, the prefix B- or I- from the original conll2003 was removed. I used the train and test datasets from the original conll2003 for training and the "validation" dataset for validation. This resulted in a dataset of size:

Train | Validation
-|-
17494 | 3250

## How to use roberta-large-ner-english with HuggingFace

##### Load roberta-large-ner-english and its sub-word tokenizer:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
```

##### Process text sample (from wikipedia):

```python
from transformers import pipeline

nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne to develop and sell Wozniak's Apple I personal computer")

[{'entity_group': 'ORG', 'score': 0.99381506, 'word': ' Apple', 'start': 0, 'end': 5},
 {'entity_group': 'PER', 'score': 0.99970853, 'word': ' Steve Jobs', 'start': 29, 'end': 39},
 {'entity_group': 'PER', 'score': 0.99981767, 'word': ' Steve Wozniak', 'start': 41, 'end': 54},
 {'entity_group': 'PER', 'score': 0.99956465, 'word': ' Ronald Wayne', 'start': 59, 'end': 71},
 {'entity_group': 'PER', 'score': 0.9997918, 'word': ' Wozniak', 'start': 92, 'end': 99},
 {'entity_group': 'MISC', 'score': 0.99956393, 'word': ' Apple I', 'start': 102, 'end': 109}]
```

## Model performances

Model performances computed on the conll2003 validation dataset (computed on the token predictions):

entity|precision|recall|f1
-|-|-|-
PER|0.9914|0.9927|0.9920
ORG|0.9627|0.9661|0.9644
LOC|0.9795|0.9862|0.9828
MISC|0.9292|0.9262|0.9277
Overall|0.9740|0.9766|0.9753

On a private dataset (email, chat, informal discussion), computed on word predictions:

entity|precision|recall|f1
-|-|-|-
PER|0.8823|0.9116|0.8967
ORG|0.7694|0.7292|0.7487
LOC|0.8619|0.7768|0.8171

By comparison, on the same private dataset, Spacy (en_core_web_trf-3.2.0) was giving:

entity|precision|recall|f1
-|-|-|-
PER|0.9146|0.8287|0.8695
ORG|0.7655|0.6437|0.6993
LOC|0.8727|0.6180|0.7236

For those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:
https://medium.com/@jean-baptiste.polle/lstm-model-for-email-signature-detection-8e990384fefa
cl-tohoku/bert-base-japanese-v2
e4211d7c20b078ac29b022be35ae4b63f3fe1679
2021-09-23T13:45:31.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ja", "dataset:wikipedia", "transformers", "license:cc-by-sa-4.0", "autotrain_compatible" ]
fill-mask
false
cl-tohoku
null
cl-tohoku/bert-base-japanese-v2
174,174
9
transformers
148
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東北大学で[MASK]の研究をしています。
---

# BERT base Japanese (unidic-lite with whole word masking, jawiki-20200831)

This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.

This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in the [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by WordPiece subword tokenization. Additionally, the model is trained with whole word masking enabled for the masked language modeling (MLM) objective.

The code for the pretraining is available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v2.0).

## Model architecture

The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.

## Training Data

The models are trained on the Japanese version of Wikipedia. The training corpus is generated from the Wikipedia Cirrussearch dump file as of August 31, 2020.

The generated corpus files are 4.0GB in total, containing approximately 30M sentences. We used the [MeCab](https://taku910.github.io/mecab/) morphological parser with the [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary to split texts into sentences.

## Tokenization

The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm. The vocabulary size is 32768.

We used the [`fugashi`](https://github.com/polm/fugashi) and [`unidic-lite`](https://github.com/polm/unidic-lite) packages for the tokenization.

## Training

The models are trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps. For training of the MLM (masked language modeling) objective, we introduced whole word masking, in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.

For training of each model, we used a v3-8 instance of Cloud TPUs provided by the [TensorFlow Research Cloud program](https://www.tensorflow.org/tfrc/). The training took about 5 days to finish.

## Licenses

The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).

## Acknowledgments

This model is trained with Cloud TPUs provided by the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
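No usage snippet is included in the card; as an addendum, here is a minimal masked-language-modeling sketch using the widget sentence above (the `fugashi` and `unidic-lite` packages mentioned in the Tokenization section must be installed for the tokenizer to work):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-v2")
model = AutoModelForMaskedLM.from_pretrained("cl-tohoku/bert-base-japanese-v2")

inputs = tokenizer("東北大学で[MASK]の研究をしています。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Top-5 candidate tokens for the [MASK] position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```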
dbmdz/bert-base-italian-xxl-cased
e25680c78556c0d9002dba60d712e1df3095240e
2021-05-19T15:01:46.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "it", "dataset:wikipedia", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
dbmdz
null
dbmdz/bert-base-italian-xxl-cased
172,134
7
transformers
149
---
language: it
license: mit
datasets:
- wikipedia
---

# 🤗 + 📚 dbmdz BERT and ELECTRA models

In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources Italian BERT and ELECTRA models 🎉

# Italian BERT

The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens.

For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models were trained with an initial sequence length of 512 subwords for ~2-3M steps.

For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/). Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.

Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch between the "real" vocab size of 31102 and the vocab size specified in `config.json`. However, the model is working and all evaluations were done under those circumstances. See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.

The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We pretty much follow the ELECTRA training procedure as used for [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).

## Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!

| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)

## Results

For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/italian-bertelectra).

## Usage

With Transformers >= 2.3 our Italian BERT models can be loaded like:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

To load the (recommended) Italian XXL BERT models, just use:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

To load the Italian XXL ELECTRA model (discriminator), just use:

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithLMHead.from_pretrained(model_name)
```

# Huggingface model hub

All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT/ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
Helsinki-NLP/opus-mt-en-ROMANCE
92870a2f094c444064c7a568c25eef6971e07b03
2021-09-09T21:34:01.000Z
[ "pytorch", "tf", "jax", "rust", "marian", "text2text-generation", "en", "roa", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-ROMANCE
169,907
2
transformers
150
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-ROMANCE

* source languages: en
* target languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la
* OPUS readme: [en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-04-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.zip)
* test set translations: [opus-2020-04-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.test.txt)
* test set scores: [opus-2020-04-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.la | 50.1 | 0.693 |
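The card itself ships no usage snippet; as an addendum, here is a minimal sketch of how Marian models are typically loaded, with the `>>id<<` target-language token prepended as described above (the example sentences are our own):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Each source sentence starts with a >>id<< token selecting the target language.
src_texts = [">>fr<< This is a test sentence.",
             ">>es<< How are you today?"]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```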
sentence-transformers/allenai-specter
29f9f45ff2a85fe9dfe8ce2cef3d8ec4e65c5f37
2022-06-15T21:31:20.000Z
[ "pytorch", "tf", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/allenai-specter
167,946
3
sentence-transformers
151
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: apache-2.0
---

# allenai-specter

This model is a conversion of the [AllenAI SPECTER](https://github.com/allenai/specter) model to [sentence-transformers](https://www.SBERT.net). It can be used to map the titles & abstracts of scientific publications to a vector space such that similar papers are close.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/allenai-specter')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/allenai-specter')
model = AutoModel.from_pretrained('sentence-transformers/allenai-specter')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/allenai-specter)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

See [AllenAI SPECTER](https://github.com/allenai/specter)
cross-encoder/ms-marco-MiniLM-L-6-v2
b2cfda50a1a9fc7919e7444afbb52610d268af92
2021-08-05T08:39:38.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
cross-encoder
null
cross-encoder/ms-marco-MiniLM-L-6-v2
167,508
6
transformers
152
---
license: apache-2.0
---

# Cross-Encoder for MS Marco

This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.

The model can be used for Information Retrieval: given a query, encode the query together with all candidate passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)

## Usage with Transformers

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L-6-v2')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-MiniLM-L-6-v2')

features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'],
                     ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```

## Usage with SentenceTransformers

The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')])
```

## Performance

In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.

| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 |
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 |

Note: Runtime was computed on a V100 GPU.
google/t5-v1_1-small
fb7e6cba609f7bab11c614294bc04f82f613c7b1
2021-06-23T00:37:12.000Z
[ "pytorch", "tf", "jax", "t5", "text2text-generation", "en", "dataset:c4", "arxiv:2002.05202", "arxiv:1910.10683", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/t5-v1_1-small
166,813
5
transformers
153
---
language: en
datasets:
- c4
license: apache-2.0
---

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1

## Version 1.1

[T5 Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) includes the following improvements compared to the original T5 model:

- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only, without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.

**Note**: T5 Version 1.1 was only pre-trained on C4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)

Other Community Checkpoints: [here](https://huggingface.co/models?search=t5-v1_1)

Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)

Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*

## Abstract

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
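Since this checkpoint comes without supervised training, there is little to demo beyond loading it; as an addendum, here is a minimal sketch that loads the model and computes the loss for one toy (input, target) pair, as one would during fine-tuning (the texts are placeholders of our own):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-small")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-small")

# A single supervised (input, target) pair as used during fine-tuning.
inputs = tokenizer("summarize: studies have shown that owning a dog is good for you",
                   return_tensors="pt")
labels = tokenizer("owning a dog is good for you", return_tensors="pt").input_ids

# Before fine-tuning, generated outputs should not be expected to be meaningful.
loss = model(input_ids=inputs.input_ids, labels=labels).loss
print(loss.item())
```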
BaptisteDoyen/camembert-base-xnli
791c5260a7c5984c7d96e622b45ca4c3ee6ea7d8
2022-06-29T09:30:26.000Z
[ "pytorch", "tf", "camembert", "text-classification", "fr", "dataset:xnli", "transformers", "zero-shot-classification", "xnli", "nli", "license:mit" ]
zero-shot-classification
false
BaptisteDoyen
null
BaptisteDoyen/camembert-base-xnli
166,725
8
transformers
154
---
language:
- fr
thumbnail:
tags:
- zero-shot-classification
- xnli
- nli
- fr
license: mit
pipeline_tag: zero-shot-classification
datasets:
- xnli
metrics:
- accuracy
---

# camembert-base-xnli

## Model description

Camembert-base model fine-tuned on the French part of the XNLI dataset. <br>
One of the few Zero-Shot classification models working on French 🇫🇷

## Intended uses & limitations

#### How to use

Two different usages:

- As a Zero-Shot sequence classifier:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="BaptisteDoyen/camembert-base-xnli")

sequence = "L'équipe de France joue aujourd'hui au Parc des Princes"
candidate_labels = ["sport", "politique", "science"]
hypothesis_template = "Ce texte parle de {}."

classifier(sequence, candidate_labels, hypothesis_template=hypothesis_template)
# outputs :
# {'sequence': "L'équipe de France joue aujourd'hui au Parc des Princes",
#  'labels': ['sport', 'politique', 'science'],
#  'scores': [0.8595073223114014, 0.10821866989135742, 0.0322740375995636]}
```

- As a premise/hypothesis checker: <br>
The idea here is to compute a probability of the form \\( P(premise|hypothesis) \\)

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# load model and tokenizer
nli_model = AutoModelForSequenceClassification.from_pretrained("BaptisteDoyen/camembert-base-xnli")
tokenizer = AutoTokenizer.from_pretrained("BaptisteDoyen/camembert-base-xnli")

# sequences
premise = "le score pour les bleus est élevé"
hypothesis = "L'équipe de France a fait un bon match"

# tokenize and run through model
x = tokenizer.encode(premise, hypothesis, return_tensors='pt')
logits = nli_model(x)[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (0) as the probability of the label being true
entail_contradiction_logits = logits[:, ::2]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 0]
prob_label_is_true[0].tolist() * 100
# outputs
# 86.40775084495544
```

## Training data

Training data is the French fold of the [XNLI](https://research.fb.com/publications/xnli-evaluating-cross-lingual-sentence-representations/) dataset released in 2018 by Facebook. <br>
Available with great ease using the ```datasets``` library:

```python
from datasets import load_dataset

dataset = load_dataset('xnli', 'fr')
```

## Training/Fine-Tuning procedure

The training procedure here is pretty basic and was performed on the cloud using a single GPU. <br>
Main training parameters:

- ```lr = 2e-5``` with ```lr_scheduler_type = "linear"```
- ```num_train_epochs = 4```
- ```batch_size = 12``` (limited by GPU memory)
- ```weight_decay = 0.01```
- ```metric_for_best_model = "eval_accuracy"```

## Eval results

We obtain the following results on the ```validation``` and ```test``` sets:

| Set        | Accuracy |
| ---------- |----------|
| validation | 81.4     |
| test       | 81.7     |
oliverguhr/fullstop-punctuation-multilang-large
4740a83c496dc2416c0cf8ae3c6572dfb6851228
2022-06-09T11:51:40.000Z
[ "pytorch", "tf", "xlm-roberta", "token-classification", "en", "de", "fr", "it", "dataset:wmt/europarl", "transformers", "punctuation prediction", "punctuation", "license:mit", "autotrain_compatible" ]
token-classification
false
oliverguhr
null
oliverguhr/fullstop-punctuation-multilang-large
165,600
28
transformers
155
---
language:
- en
- de
- fr
- it
tags:
- punctuation prediction
- punctuation
datasets: wmt/europarl
license: mit
widget:
- text: "Ho sentito che ti sei laureata il che mi fa molto piacere"
  example_title: "Italian"
- text: "Tous les matins vers quatre heures mon père ouvrait la porte de ma chambre"
  example_title: "French"
- text: "Ist das eine Frage Frau Müller"
  example_title: "German"
- text: "Yet she blushed as if with guilt when Cynthia reading her thoughts said to her one day Molly youre very glad to get rid of us are not you"
  example_title: "English"
metrics:
- f1
---

This model predicts the punctuation of English, Italian, French and German texts. We developed it to restore the punctuation of transcribed spoken language.

This multilanguage model was trained on the [Europarl Dataset](https://huggingface.co/datasets/wmt/europarl) provided by the [SEPP-NLG Shared Task](https://sites.google.com/view/sentence-segmentation). *Please note that this dataset consists of political speeches. Therefore the model might perform differently on texts from other domains.*

The model restores the following punctuation markers: **"." "," "?" "-" ":"**

## Sample Code

We provide a simple python package that allows you to process text of any length.

## Install

To get started, install the package from [pypi](https://pypi.org/project/deepmultilingualpunctuation/):

```bash
pip install deepmultilingualpunctuation
```

### Restore Punctuation

```python
from deepmultilingualpunctuation import PunctuationModel

model = PunctuationModel()
text = "My name is Clara and I live in Berkeley California Ist das eine Frage Frau Müller"
result = model.restore_punctuation(text)
print(result)
```

**output**
> My name is Clara and I live in Berkeley, California. Ist das eine Frage, Frau Müller?

### Predict Labels

```python
from deepmultilingualpunctuation import PunctuationModel

model = PunctuationModel()
text = "My name is Clara and I live in Berkeley California Ist das eine Frage Frau Müller"
clean_text = model.preprocess(text)
labled_words = model.predict(clean_text)
print(labled_words)
```

**output**
> [['My', '0', 0.9999887], ['name', '0', 0.99998665], ['is', '0', 0.9998579], ['Clara', '0', 0.6752215], ['and', '0', 0.99990904], ['I', '0', 0.9999877], ['live', '0', 0.9999839], ['in', '0', 0.9999515], ['Berkeley', ',', 0.99800044], ['California', '.', 0.99534047], ['Ist', '0', 0.99998784], ['das', '0', 0.99999154], ['eine', '0', 0.9999918], ['Frage', ',', 0.99622655], ['Frau', '0', 0.9999889], ['Müller', '?', 0.99863917]]

## Results

The performance differs for the single punctuation markers, as hyphens and colons, in many cases, are optional and can be substituted by either a comma or a full stop.

The model achieves the following F1 scores for the different languages:

| Label         | EN    | DE    | FR    | IT    |
| ------------- | ----- | ----- | ----- | ----- |
| 0             | 0.991 | 0.997 | 0.992 | 0.989 |
| .             | 0.948 | 0.961 | 0.945 | 0.942 |
| ?             | 0.890 | 0.893 | 0.871 | 0.832 |
| ,             | 0.819 | 0.945 | 0.831 | 0.798 |
| :             | 0.575 | 0.652 | 0.620 | 0.588 |
| -             | 0.425 | 0.435 | 0.431 | 0.421 |
| macro average | 0.775 | 0.814 | 0.782 | 0.762 |

## References

```
@article{guhr-EtAl:2021:fullstop,
  title     = {FullStop: Multilingual Deep Models for Punctuation Prediction},
  author    = {Guhr, Oliver and Schumann, Anne-Kathrin and Bahrmann, Frank and Böhme, Hans Joachim},
  booktitle = {Proceedings of the Swiss Text Analytics Conference 2021},
  month     = {June},
  year      = {2021},
  address   = {Winterthur, Switzerland},
  publisher = {CEUR Workshop Proceedings},
  url       = {http://ceur-ws.org/Vol-2957/sepp_paper4.pdf}
}
```
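If you prefer not to install the wrapper package, the model can also be queried through the standard token-classification pipeline; a minimal sketch follows (the label names returned follow the punctuation markers listed above):

```python
from transformers import pipeline

# Per-word punctuation labels, without the wrapper's pre/post-processing.
punct = pipeline("token-classification",
                 model="oliverguhr/fullstop-punctuation-multilang-large",
                 aggregation_strategy="none")
print(punct("My name is Clara and I live in Berkeley California"))
```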
sentence-transformers/multi-qa-MiniLM-L6-cos-v1
2ad254dbef118e9d73b90b0797a1632cb455fedf
2022-07-11T21:10:58.000Z
[ "pytorch", "tf", "bert", "feature-extraction", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:search_qa", "dataset:eli5", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/QQP", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/Amazon-QA", "dataset:embedding-data/WikiAnswers", "sentence-transformers", "sentence-similarity" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/multi-qa-MiniLM-L6-cos-v1
163,551
24
sentence-transformers
156
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- search_qa
- eli5
- natural_questions
- trivia_qa
- embedding-data/QQP
- embedding-data/PAQ_pairs
- embedding-data/Amazon-QA
- embedding-data/WikiAnswers
---

# multi-qa-MiniLM-L6-cos-v1

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer, util

query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load the model
model = SentenceTransformer('sentence-transformers/multi-qa-MiniLM-L6-cos-v1')

# Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)

# Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```

## PyTorch Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the correct pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean Pooling - Take average of all tokens
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Encode text
def encode(texts):
    # Tokenize sentences
    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')

    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input, return_dict=True)

    # Perform pooling
    embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

    # Normalize embeddings
    embeddings = F.normalize(embeddings, p=2, dim=1)

    return embeddings

# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1") model = AutoModel.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1") #Encode query and docs query_emb = encode(query) doc_emb = encode(docs) #Compute dot score between query and all document embeddings scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores for doc, score in doc_score_pairs: print(score, doc) ``` ## TensorFlow Usage (HuggingFace Transformers) Similarly to the PyTorch example above, to use the model with TensorFlow you pass your input through the transformer model, then you have to apply the correct pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, TFAutoModel import tensorflow as tf #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output.last_hidden_state input_mask_expanded = tf.cast(tf.tile(tf.expand_dims(attention_mask, -1), [1, 1, token_embeddings.shape[-1]]), tf.float32) return tf.math.reduce_sum(token_embeddings * input_mask_expanded, 1) / tf.math.maximum(tf.math.reduce_sum(input_mask_expanded, 1), 1e-9) #Encode text def encode(texts): # Tokenize sentences encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='tf') # Compute token embeddings model_output = model(**encoded_input, return_dict=True) # Perform pooling embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings embeddings = tf.math.l2_normalize(embeddings, axis=1) return embeddings # Sentences we want sentence embeddings for query = "How many people live in London?" docs = ["Around 9 Million people live in London", "London is known for its financial district"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1") model = TFAutoModel.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1") #Encode query and docs query_emb = encode(query) doc_emb = encode(docs) #Compute dot score between query and all document embeddings scores = (query_emb @ tf.transpose(doc_emb))[0].numpy().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores for doc, score in doc_score_pairs: print(score, doc) ``` ## Technical Details In the following some technical details how this model must be used: | Setting | Value | | --- | :---: | | Dimensions | 384 | | Produces normalized embeddings | Yes | | Pooling-Method | Mean pooling | | Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance | Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent. dot-product is preferred as it is faster. Euclidean distance is proportional to dot-product and can also be used. 
----

## Background

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as assistance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.

## Intended uses

Our model is intended to be used for semantic search: it encodes queries / questions and text paragraphs in a dense vector space, and finds relevant documents for a given query.

Note that there is a limit of 512 word pieces: text longer than that will be truncated. Further note that the model was just trained on input text up to 250 word pieces. It might not work well for longer text.

## Training procedure

The full training script is accessible in this current repository: `train_script.py`.

### Pre-training

We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.

#### Training

We use the concatenation of multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs. We sampled each dataset given a weighted probability; the configuration is detailed in the `data_config.json` file.

The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using mean pooling, cosine-similarity as the similarity function, and a scale of 20.
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** |
cross-encoder/stsb-roberta-base
90a6796bd3c504b63351dad78c76ffb40e3d6e5a
2021-08-05T08:41:58.000Z
[ "pytorch", "jax", "roberta", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
cross-encoder
null
cross-encoder/stsb-roberta-base
163,486
null
transformers
157
---
license: apache-2.0
---

# Cross-Encoder for Semantic Textual Similarity

This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.

## Training Data

This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 indicating the semantic similarity of two sentences.

## Usage and Performance

Pre-trained models can be used like this:

```
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/stsb-roberta-base')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```

The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.

You can also use this model without sentence_transformers, with just the Transformers ``AutoModel`` class, as sketched below.
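As a hedged illustration of the plain-Transformers route mentioned above (this snippet is not part of the original card; the final sigmoid mirrors the default activation sentence-transformers applies to single-label cross-encoders, which is an assumption worth verifying for this checkpoint):

```python
# Sketch: scoring sentence pairs with plain transformers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cross-encoder/stsb-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/stsb-roberta-base")

features = tokenizer(
    ["Sentence 1", "Sentence 3"],
    ["Sentence 2", "Sentence 4"],
    padding=True, truncation=True, return_tensors="pt",
)
model.eval()
with torch.no_grad():
    logits = model(**features).logits            # shape: (2, 1), one logit per pair
    scores = torch.sigmoid(logits).squeeze(-1)   # similarity scores in [0, 1]
print(scores)
```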
DeepPavlov/rubert-base-cased
4036cab694767a299f2b9e6492909664d9414229
2021-11-23T08:03:04.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1905.07213", "transformers" ]
feature-extraction
false
DeepPavlov
null
DeepPavlov/rubert-base-cased
162,685
13
transformers
158
---
language:
- ru
---

# rubert-base-cased

RuBERT (Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters) was trained on the Russian part of Wikipedia and on news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT-base as an initialization for RuBERT [1].

08.11.2021: uploaded model with MLM and NSP heads.

[1]: Kuratov, Y., Arkhipov, M. (2019). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint [arXiv:1905.07213](https://arxiv.org/abs/1905.07213).
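Since the card ships no usage snippet, here is a minimal feature-extraction sketch (standard transformers usage, not taken from the card):

```python
# Extract contextual embeddings with RuBERT.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")
model = AutoModel.from_pretrained("DeepPavlov/rubert-base-cased")

inputs = tokenizer("Привет, мир!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional vector per subtoken: (batch, seq_len, 768)
last_hidden_state = outputs.last_hidden_state
print(last_hidden_state.shape)
```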
cardiffnlp/twitter-roberta-base-irony
72213835791c86ac7cade4acef91820bc9f1dc57
2021-05-20T15:03:56.000Z
[ "pytorch", "tf", "jax", "roberta", "text-classification", "arxiv:2010.12421", "transformers" ]
text-classification
false
cardiffnlp
null
cardiffnlp/twitter-roberta-base-irony
162,662
1
transformers
159
# Twitter-roBERTa-base for Irony Detection This is a roBERTa-base model trained on ~58M tweets and finetuned for irony detection with the TweetEval benchmark. - Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). - Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval). ## Example of classification ```python from transformers import AutoModelForSequenceClassification from transformers import TFAutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import softmax import csv import urllib.request # Preprocess text (username and link placeholders) def preprocess(text): new_text = [ ] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) # Tasks: # emoji, emotion, hate, irony, offensive, sentiment # stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary task='irony' MODEL = f"cardiffnlp/twitter-roberta-base-{task}" tokenizer = AutoTokenizer.from_pretrained(MODEL) # download label mapping labels=[] mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt" with urllib.request.urlopen(mapping_link) as f: html = f.read().decode('utf-8').split("\n") csvreader = csv.reader(html, delimiter='\t') labels = [row[1] for row in csvreader if len(row) > 1] # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) model.save_pretrained(MODEL) text = "Great, it broke the first day..." text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) # # TF # model = TFAutoModelForSequenceClassification.from_pretrained(MODEL) # model.save_pretrained(MODEL) # text = "Great, it broke the first day..." # encoded_input = tokenizer(text, return_tensors='tf') # output = model(encoded_input) # scores = output[0][0].numpy() # scores = softmax(scores) ranking = np.argsort(scores) ranking = ranking[::-1] for i in range(scores.shape[0]): l = labels[ranking[i]] s = scores[ranking[i]] print(f"{i+1}) {l} {np.round(float(s), 4)}") ``` Output: ``` 1) irony 0.914 2) non_irony 0.086 ```
google/electra-base-discriminator
1b48ef100dac4676d84125a8a7b7ab7c51e00386
2021-04-30T07:33:10.000Z
[ "pytorch", "tf", "jax", "rust", "electra", "pretraining", "en", "transformers", "license:apache-2.0" ]
null
false
google
null
google/electra-base-discriminator
162,100
5
transformers
160
---
language: en
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---

## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.

For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).

This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)).

## How to use the discriminator in `transformers`

```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")

sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

# Note: `fake_inputs` includes the [CLS]/[SEP] special tokens, so the
# predictions row is shifted by one position relative to `fake_tokens`.
[print("%7s" % token, end="") for token in fake_tokens]
print()
[print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()]
```
google/vit-base-patch16-224-in21k
1ba429d32753f33a0660b80ac6f43a3c80c18938
2022-01-12T08:03:16.000Z
[ "pytorch", "tf", "jax", "vit", "feature-extraction", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "transformers", "vision", "license:apache-2.0" ]
feature-extraction
false
google
null
google/vit-base-patch16-224-in21k
162,065
12
transformers
161
--- license: apache-2.0 tags: - vision datasets: - imagenet-21k inference: false --- # Vision Transformer (base-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. 
### How to use Here is how to use this model in PyTorch: ```python from transformers import ViTFeatureExtractor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k') model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` Here is how to use this model in JAX/Flax: ```python from transformers import ViTFeatureExtractor, FlaxViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k') model = FlaxViTModel.from_pretrained('google/vit-base-patch16-224-in21k') inputs = feature_extractor(images=image, return_tensors="np") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
facebook/mbart-large-50-many-to-many-mmt
0ece2bb75a89350002537169ecadeb2b3d043b6b
2022-05-26T22:28:18.000Z
[ "pytorch", "jax", "rust", "mbart", "text2text-generation", "multilingual", "ar", "cs", "de", "en", "es", "et", "fi", "fr", "gu", "hi", "it", "ja", "kk", "ko", "lt", "lv", "my", "ne", "nl", "ro", "ru", "si", "tr", "vi", "zh", "af", "az", "bn", "fa", "he", "hr", "id", "ka", "km", "mk", "ml", "mn", "mr", "pl", "ps", "pt", "sv", "sw", "ta", "te", "th", "tl", "uk", "ur", "xh", "gl", "sl", "arxiv:2008.00401", "transformers", "mbart-50", "autotrain_compatible" ]
text2text-generation
false
facebook
null
facebook/mbart-large-50-many-to-many-mmt
160,492
13
transformers
162
--- language: - multilingual - ar - cs - de - en - es - et - fi - fr - gu - hi - it - ja - kk - ko - lt - lv - my - ne - nl - ro - ru - si - tr - vi - zh - af - az - bn - fa - he - hr - id - ka - km - mk - ml - mn - mr - pl - ps - pt - sv - sw - ta - te - th - tl - uk - ur - xh - gl - sl tags: - mbart-50 --- # mBART-50 many to many multilingual machine translation This model is a fine-tuned checkpoint of [mBART-large-50](https://huggingface.co/facebook/mbart-large-50). `mbart-large-50-many-to-many-mmt` is fine-tuned for multilingual machine translation. It was introduced in [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper. The model can translate directly between any pair of 50 languages. To translate into a target language, the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate` method. ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है" article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا." model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") # translate Hindi to French tokenizer.src_lang = "hi_IN" encoded_hi = tokenizer(article_hi, return_tensors="pt") generated_tokens = model.generate( **encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"] ) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire dans la Syrie." # translate Arabic to English tokenizer.src_lang = "ar_AR" encoded_ar = tokenizer(article_ar, return_tensors="pt") generated_tokens = model.generate( **encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"] ) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "The Secretary-General of the United Nations says there is no military solution in Syria." ``` See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for more fine-tuned versions. 
## Languages covered Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI) ## BibTeX entry and citation info ``` @article{tang2020multilingual, title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning}, author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan}, year={2020}, eprint={2008.00401}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
etalab-ia/camembert-base-squadFR-fquad-piaf
63296563d30b341d2cbb3feae651a3545dc1c74d
2022-07-04T08:16:13.000Z
[ "pytorch", "tf", "camembert", "question-answering", "fr", "dataset:piaf", "dataset:FQuAD", "dataset:SQuAD-FR", "transformers", "autotrain_compatible" ]
question-answering
false
etalab-ia
null
etalab-ia/camembert-base-squadFR-fquad-piaf
156,581
7
transformers
163
--- language: fr datasets: - piaf - FQuAD - SQuAD-FR widget: - text: "Comment s'appelle le portail open data du gouvernement ?" context: "Etalab est une administration publique française qui fait notamment office de Chief Data Officer de l'État et coordonne la conception et la mise en œuvre de sa stratégie dans le domaine de la donnée (ouverture et partage des données publiques ou open data, exploitation des données et intelligence artificielle...). Ainsi, Etalab développe et maintient le portail des données ouvertes du gouvernement français data.gouv.fr. Etalab promeut également une plus grande ouverture l'administration sur la société (gouvernement ouvert) : transparence de l'action publique, innovation ouverte, participation citoyenne... elle promeut l’innovation, l’expérimentation, les méthodes de travail ouvertes, agiles et itératives, ainsi que les synergies avec la société civile pour décloisonner l’administration et favoriser l’adoption des meilleures pratiques professionnelles dans le domaine du numérique. À ce titre elle étudie notamment l’opportunité de recourir à des technologies en voie de maturation issues du monde de la recherche. Cette entité chargée de l'innovation au sein de l'administration doit contribuer à l'amélioration du service public grâce au numérique. Elle est rattachée à la Direction interministérielle du numérique, dont les missions et l’organisation ont été fixées par le décret du 30 octobre 2019.  Dirigé par Laure Lucchesi depuis 2016, elle rassemble une équipe pluridisciplinaire d'une trentaine de personnes." --- # camembert-base-squadFR-fquad-piaf ## Description Question-answering French model, using base [CamemBERT](https://camembert-model.fr/) fine-tuned on a combo of three French Q&A datasets: 1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/) 2. [FQuADv1.0](https://fquad.illuin.tech/) 3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD) ## Training hyperparameters ```shell python run_squad.py \ --model_type camembert \ --model_name_or_path camembert-base \ --do_train --do_eval \ --train_file data/SQuAD+fquad+piaf.json \ --predict_file data/fquad_valid.json \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 4 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 10000 ``` ## Evaluation results ### FQuAD v1.0 Evaluation ```shell {"f1": 79.81, "exact_match": 55.14} ``` ### SQuAD-FR Evaluation ```shell {"f1": 80.61, "exact_match": 59.54} ``` ## Usage ```python from transformers import pipeline nlp = pipeline('question-answering', model='etalab-ia/camembert-base-squadFR-fquad-piaf', tokenizer='etalab-ia/camembert-base-squadFR-fquad-piaf') nlp({ 'question': "Qui est Claude Monet?", 'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme." }) ``` ## Acknowledgments This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224). 
## Citations ### PIAF ``` @inproceedings{KeraronLBAMSSS20, author = {Rachel Keraron and Guillaume Lancrenon and Mathilde Bras and Fr{\'{e}}d{\'{e}}ric Allary and Gilles Moyse and Thomas Scialom and Edmundo{-}Pavel Soriano{-}Morales and Jacopo Staiano}, title = {Project {PIAF:} Building a Native French Question-Answering Dataset}, booktitle = {{LREC}}, pages = {5481--5490}, publisher = {European Language Resources Association}, year = {2020} } ``` ### FQuAD ``` @article{dHoffschmidt2020FQuADFQ, title={FQuAD: French Question Answering Dataset}, author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich}, journal={ArXiv}, year={2020}, volume={abs/2002.06071} } ``` ### SQuAD-FR ``` @MISC{kabbadj2018, author = "Kabbadj, Ali", title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ", editor = "linkedin.com", month = "November", year = "2018", url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}", note = "[Online; posted 11-November-2018]", } ``` ### CamemBERT HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base) ``` @inproceedings{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ```
pyannote/segmentation
c4c8ceafcbb3a7a280c2d357aee9fbc9b0be7f9b
2022-07-19T14:24:12.000Z
[ "pytorch", "dataset:ami", "dataset:dihard", "dataset:voxconverse", "arxiv:2104.04045", "pyannote-audio", "pyannote", "pyannote-audio-model", "audio", "voice", "speech", "speaker", "speaker-segmentation", "voice-activity-detection", "overlapped-speech-detection", "resegmentation", "license:mit" ]
voice-activity-detection
false
pyannote
null
pyannote/segmentation
156,422
21
pyannote-audio
164
---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-model
- audio
- voice
- speech
- speaker
- speaker-segmentation
- voice-activity-detection
- overlapped-speech-detection
- resegmentation
datasets:
- ami
- dihard
- voxconverse
license: mit
inference: false
---

# 🎹 Speaker segmentation

![Example](example.png)

Model from *[End-to-end speaker segmentation for overlap-aware resegmentation](http://arxiv.org/abs/2104.04045)*, by Hervé Bredin and Antoine Laurent.

[Online demo](https://huggingface.co/spaces/pyannote/pretrained-pipelines) is available as a Hugging Face Space.

## Support

For commercial enquiries and scientific consulting, please contact [me](mailto:herve@niderb.fr). For [technical questions](https://github.com/pyannote/pyannote-audio/discussions) and [bug reports](https://github.com/pyannote/pyannote-audio/issues), please check the [pyannote.audio](https://github.com/pyannote/pyannote-audio) GitHub repository.

## Usage

Relies on pyannote.audio 2.0, currently in development: see [installation instructions](https://github.com/pyannote/pyannote-audio/tree/develop#installation).

### Voice activity detection

```python
from pyannote.audio.pipelines import VoiceActivityDetection
pipeline = VoiceActivityDetection(segmentation="pyannote/segmentation")
HYPER_PARAMETERS = {
  # onset/offset activation thresholds
  "onset": 0.5, "offset": 0.5,
  # remove speech regions shorter than that many seconds.
  "min_duration_on": 0.0,
  # fill non-speech regions shorter than that many seconds.
  "min_duration_off": 0.0
}
pipeline.instantiate(HYPER_PARAMETERS)
vad = pipeline("audio.wav")
# `vad` is a pyannote.core.Annotation instance containing speech regions
```

### Overlapped speech detection

```python
from pyannote.audio.pipelines import OverlappedSpeechDetection
pipeline = OverlappedSpeechDetection(segmentation="pyannote/segmentation")
pipeline.instantiate(HYPER_PARAMETERS)
osd = pipeline("audio.wav")
# `osd` is a pyannote.core.Annotation instance containing overlapped speech regions
```

### Resegmentation

```python
from pyannote.audio.pipelines import Resegmentation
pipeline = Resegmentation(segmentation="pyannote/segmentation", diarization="baseline")
pipeline.instantiate(HYPER_PARAMETERS)
resegmented_baseline = pipeline({"audio": "audio.wav", "baseline": baseline})
# where `baseline` should be provided as a pyannote.core.Annotation instance
```

### Raw scores

```python
from pyannote.audio import Inference
inference = Inference("pyannote/segmentation")
segmentation = inference("audio.wav")
# `segmentation` is a pyannote.core.SlidingWindowFeature
# instance containing raw segmentation scores like the
# one pictured above (output)
```

## Reproducible research

In order to reproduce the results of the paper ["End-to-end speaker segmentation for overlap-aware resegmentation"](https://arxiv.org/abs/2104.04045), use `pyannote/segmentation@Interspeech2021` with the following hyper-parameters:

| Voice activity detection | `onset` | `offset` | `min_duration_on` | `min_duration_off` |
| ------------------------ | ------- | -------- | ----------------- | ------------------ |
| AMI Mix-Headset          | 0.684   | 0.577    | 0.181             | 0.037              |
| DIHARD3                  | 0.767   | 0.377    | 0.136             | 0.067              |
| VoxConverse              | 0.767   | 0.713    | 0.182             | 0.501              |

| Overlapped speech detection | `onset` | `offset` | `min_duration_on` | `min_duration_off` |
| --------------------------- | ------- | -------- | ----------------- | ------------------ |
| AMI Mix-Headset             | 0.448   | 0.362    | 0.116             | 0.187              |
| DIHARD3                     | 0.430   | 0.320    | 0.091             | 0.144              |
| VoxConverse                 | 0.587   | 0.426    | 0.337             | 0.112              |

| Resegmentation of VBx | `onset` | `offset` | `min_duration_on` | `min_duration_off` |
| --------------------- | ------- | -------- | ----------------- | ------------------ |
| AMI Mix-Headset       | 0.542   | 0.527    | 0.044             | 0.705              |
| DIHARD3               | 0.592   | 0.489    | 0.163             | 0.182              |
| VoxConverse           | 0.537   | 0.724    | 0.410             | 0.563              |

Expected outputs (and VBx baseline) are also provided in the `/reproducible_research` sub-directories.

## Citation

```bibtex
@inproceedings{Bredin2021,
  Title = {{End-to-end speaker segmentation for overlap-aware resegmentation}},
  Author = {{Bredin}, Herv{\'e} and {Laurent}, Antoine},
  Booktitle = {Proc. Interspeech 2021},
  Address = {Brno, Czech Republic},
  Month = {August},
  Year = {2021},
}
```

```bibtex
@inproceedings{Bredin2020,
  Title = {{pyannote.audio: neural building blocks for speaker diarization}},
  Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
  Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
  Address = {Barcelona, Spain},
  Month = {May},
  Year = {2020},
}
```
EleutherAI/gpt-neo-125M
b559e35121e91087f94903c07213d208d2412f68
2021-12-31T13:46:51.000Z
[ "pytorch", "jax", "rust", "gpt_neo", "text-generation", "en", "dataset:The Pile", "transformers", "text generation", "causal-lm", "license:apache-2.0" ]
text-generation
false
EleutherAI
null
EleutherAI/gpt-neo-125M
156,398
26
transformers
165
---
language:
- en
tags:
- text generation
- pytorch
- causal-lm
license: apache-2.0
datasets:
- The Pile
---

# GPT-Neo 125M

## Model Description

GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model.

## Training data

GPT-Neo 125M was trained on the Pile, a large-scale curated dataset created by EleutherAI for the purpose of training this model.

## Training procedure

This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as an autoregressive language model, using cross-entropy loss.

## Intended Use and Limitations

Through pre-training, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating texts from a prompt.

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-125M')
>>> generator("EleutherAI has", do_sample=True, min_length=50)

[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```

### Limitations and Biases

GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.

GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.

As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

## Eval results

TBD

### Down-Stream Applications

TBD

### BibTeX entry and citation info

To cite this model, use

```bibtex
@software{gpt-neo,
  author       = {Black, Sid and Gao, Leo and Wang, Phil and Leahy, Connor and Biderman, Stella},
  title        = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}},
  month        = mar,
  year         = 2021,
  note         = {{If you use this software, please cite it using these metadata.}},
  publisher    = {Zenodo},
  version      = {1.0},
  doi          = {10.5281/zenodo.5297715},
  url          = {https://doi.org/10.5281/zenodo.5297715}
}

@article{gao2020pile,
  title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
  journal={arXiv preprint arXiv:2101.00027},
  year={2020}
}
```
neuralmind/bert-base-portuguese-cased
94d69c95f98f7d5b2a8700c420230ae10def0baa
2022-06-14T14:37:09.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "pt", "dataset:brWaC", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
neuralmind
null
neuralmind/bert-base-portuguese-cased
155,314
34
transformers
166
--- language: pt license: mit tags: - bert - pytorch datasets: - brWaC --- # BERTimbau Base (aka "bert-base-portuguese-cased") ![Bert holding a berimbau](https://imgur.com/JZ7Hynh.jpg) ## Introduction BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large. For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/). ## Available models | Model | Arch. | #Layers | #Params | | ---------------------------------------- | ---------- | ------- | ------- | | `neuralmind/bert-base-portuguese-cased` | BERT-Base | 12 | 110M | | `neuralmind/bert-large-portuguese-cased` | BERT-Large | 24 | 335M | ## Usage ```python from transformers import AutoTokenizer # Or BertTokenizer from transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads from transformers import AutoModel # or BertModel, for BERT without pretraining heads model = AutoModelForPreTraining.from_pretrained('neuralmind/bert-base-portuguese-cased') tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-base-portuguese-cased', do_lower_case=False) ``` ### Masked language modeling prediction example ```python from transformers import pipeline pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer) pipe('Tinha uma [MASK] no meio do caminho.') # [{'score': 0.14287759363651276, # 'sequence': '[CLS] Tinha uma pedra no meio do caminho. [SEP]', # 'token': 5028, # 'token_str': 'pedra'}, # {'score': 0.06213393807411194, # 'sequence': '[CLS] Tinha uma árvore no meio do caminho. [SEP]', # 'token': 7411, # 'token_str': 'árvore'}, # {'score': 0.05515013635158539, # 'sequence': '[CLS] Tinha uma estrada no meio do caminho. [SEP]', # 'token': 5675, # 'token_str': 'estrada'}, # {'score': 0.0299188531935215, # 'sequence': '[CLS] Tinha uma casa no meio do caminho. [SEP]', # 'token': 1105, # 'token_str': 'casa'}, # {'score': 0.025660505518317223, # 'sequence': '[CLS] Tinha uma cruz no meio do caminho. [SEP]', # 'token': 3466, # 'token_str': 'cruz'}] ``` ### For BERT embeddings ```python import torch model = AutoModel.from_pretrained('neuralmind/bert-base-portuguese-cased') input_ids = tokenizer.encode('Tinha uma pedra no meio do caminho.', return_tensors='pt') with torch.no_grad(): outs = model(input_ids) encoded = outs[0][0, 1:-1] # Ignore [CLS] and [SEP] special tokens # encoded.shape: (8, 768) # tensor([[-0.0398, -0.3057, 0.2431, ..., -0.5420, 0.1857, -0.5775], # [-0.2926, -0.1957, 0.7020, ..., -0.2843, 0.0530, -0.4304], # [ 0.2463, -0.1467, 0.5496, ..., 0.3781, -0.2325, -0.5469], # ..., # [ 0.0662, 0.7817, 0.3486, ..., -0.4131, -0.2852, -0.2819], # [ 0.0662, 0.2845, 0.1871, ..., -0.2542, -0.2933, -0.0661], # [ 0.2761, -0.1657, 0.3288, ..., -0.2102, 0.0029, -0.2009]]) ``` ## Citation If you use our work, please cite: ```bibtex @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } ```
joeddav/xlm-roberta-large-xnli
9c1619b90a142cd2913190d80d5f488d6612f57e
2020-12-17T16:39:07.000Z
[ "pytorch", "tf", "xlm-roberta", "text-classification", "multilingual", "dataset:multi_nli", "dataset:xnli", "arxiv:1911.02116", "transformers", "tensorflow", "license:mit", "zero-shot-classification" ]
zero-shot-classification
false
joeddav
null
joeddav/xlm-roberta-large-xnli
154,078
45
transformers
167
---
language: multilingual
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- multi_nli
- xnli
license: mit
pipeline_tag: zero-shot-classification
widget:
- text: "За кого вы голосуете в 2020 году?"
  candidate_labels: "politique étrangère, Europe, élections, affaires, politique"
  multi_class: true
- text: "لمن تصوت في 2020؟"
  candidate_labels: "السياسة الخارجية, أوروبا, الانتخابات, الأعمال, السياسة"
  multi_class: true
- text: "2020'de kime oy vereceksiniz?"
  candidate_labels: "dış politika, Avrupa, seçimler, ticaret, siyaset"
  multi_class: true
---

# xlm-roberta-large-xnli

## Model Description

This model takes [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tunes it on a combination of NLI data in 15 languages. It is intended to be used for zero-shot text classification, such as with the Hugging Face [ZeroShotClassificationPipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).

## Intended Usage

This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on XNLI, which is a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:

- English
- French
- Spanish
- German
- Greek
- Bulgarian
- Russian
- Turkish
- Arabic
- Vietnamese
- Thai
- Chinese
- Hindi
- Swahili
- Urdu

Since the base model was pre-trained on 100 different languages, the model has shown some effectiveness in languages beyond those listed above as well. See appendix A of the [XLM-RoBERTa paper](https://arxiv.org/abs/1911.02116) for the full list of pre-trained languages.

For English-only classification, it is recommended to use [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or [a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).

#### With the zero-shot classification pipeline

The model can be loaded with the `zero-shot-classification` pipeline like so:

```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="joeddav/xlm-roberta-large-xnli")
```

You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to classify in another:

```python
# we will classify the Russian translation of, "Who are you voting for in 2020?"
sequence_to_classify = "За кого вы голосуете в 2020 году?"
# we can specify candidate labels in Russian or any other language above:
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence_to_classify, candidate_labels)
# {'labels': ['politics', 'Europe', 'public health'],
#  'scores': [0.9048484563827515, 0.05722189322113991, 0.03792969882488251],
#  'sequence': 'За кого вы голосуете в 2020 году?'}
```

The default hypothesis template is the English `This text is {}`. If you are working strictly within one language, it may be worthwhile to translate this to the language you are working with:

```python
sequence_to_classify = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# {'labels': ['política', 'Europa', 'salud pública'],
#  'scores': [0.9109585881233215, 0.05954807624220848, 0.029493311420083046],
#  'sequence': '¿A quién vas a votar en 2020?'}
```

#### With manual PyTorch

```python
# pose sequence as an NLI premise and label as a hypothesis
from transformers import AutoModelForSequenceClassification, AutoTokenizer
nli_model = AutoModelForSequenceClassification.from_pretrained('joeddav/xlm-roberta-large-xnli')
tokenizer = AutoTokenizer.from_pretrained('joeddav/xlm-roberta-large-xnli')

premise = sequence
hypothesis = f'This example is {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt', truncation_strategy='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```

## Training

This model was pre-trained on a set of 100 languages, as described in [the original paper](https://arxiv.org/abs/1911.02116). It was then fine-tuned on the task of NLI on the concatenated MNLI train set and the XNLI validation and test sets. Finally, it was trained for one additional epoch on only XNLI data where the translations for the premise and hypothesis are shuffled such that the premise and hypothesis for each example come from the same original English example but the premise and hypothesis are of different languages.
allegro/herbert-base-cased
50e33e0567be0c0b313832314c586e3df0dc2297
2022-06-09T11:36:39.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "pl", "transformers", "herbert", "license:cc-by-4.0" ]
feature-extraction
false
allegro
null
allegro/herbert-base-cased
152,056
4
transformers
168
---
language: pl
tags:
- herbert
license: cc-by-4.0
---

# HerBERT

**[HerBERT](https://en.wikipedia.org/wiki/Zbigniew_Herbert)** is a BERT-based language model trained on Polish corpora using Masked Language Modelling (MLM) and Sentence Structural Objective (SSO) with dynamic masking of whole words. For more details, please refer to: [HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish](https://www.aclweb.org/anthology/2021.bsnlp-1.1/).

Model training and experiments were conducted with [transformers](https://github.com/huggingface/transformers) in version 2.9.

## Corpus

HerBERT was trained on six different corpora available for the Polish language:

| Corpus | Tokens | Documents |
| :------ | ------: | ------: |
| [CCNet Middle](https://github.com/facebookresearch/cc_net) | 3243M | 7.9M |
| [CCNet Head](https://github.com/facebookresearch/cc_net) | 2641M | 7.0M |
| [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=1) | 1357M | 3.9M |
| [Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 1056M | 1.1M |
| [Wikipedia](https://dumps.wikimedia.org/) | 260M | 1.4M |
| [Wolne Lektury](https://wolnelektury.pl/) | 41M | 5.5k |

## Tokenizer

The training dataset was tokenized into subwords using a character-level byte-pair encoding (``CharBPETokenizer``) with a vocabulary size of 50k tokens. The tokenizer itself was trained with the [tokenizers](https://github.com/huggingface/tokenizers) library. We kindly encourage you to use the ``Fast`` version of the tokenizer, namely ``HerbertTokenizerFast``.

## Usage

Example code:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-base-cased")
model = AutoModel.from_pretrained("allegro/herbert-base-cased")

output = model(
    **tokenizer.batch_encode_plus(
        [
            (
                "A potem szedł środkiem drogi w kurzawie, bo zamiatał nogami, ślepy dziad prowadzony przez tłustego kundla na sznurku.",
                "A potem leciał od lasu chłopak z butelką, ale ten ujrzawszy księdza przy drodze okrążył go z dala i biegł na przełaj pól do karczmy."
            )
        ],
        padding='longest',
        add_special_tokens=True,
        return_tensors='pt'
    )
)
```

## License

CC BY 4.0

## Citation

If you use this model, please cite the following paper:

```
@inproceedings{mroczkowski-etal-2021-herbert,
    title = "{H}er{BERT}: Efficiently Pretrained Transformer-based Language Model for {P}olish",
    author = "Mroczkowski, Robert and Rybak, Piotr and Wr{\'o}blewska, Alina and Gawlik, Ireneusz",
    booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
    month = apr,
    year = "2021",
    address = "Kiyv, Ukraine",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.bsnlp-1.1",
    pages = "1--10",
}
```

## Authors

The model was trained by the [**Machine Learning Research Team at Allegro**](https://ml.allegro.tech/) and the [**Linguistic Engineering Group at Institute of Computer Science, Polish Academy of Sciences**](http://zil.ipipan.waw.pl/). You can contact us at: <a href="mailto:klejbenchmark@allegro.pl">klejbenchmark@allegro.pl</a>
bvanaken/clinical-assertion-negation-bert
f381df19e34e690108f3b8e3e8433f7c9d2e2f9d
2022-06-01T12:28:45.000Z
[ "pytorch", "bert", "text-classification", "en", "transformers", "medical", "clinical", "assertion", "negation" ]
text-classification
false
bvanaken
null
bvanaken/clinical-assertion-negation-bert
150,710
9
transformers
169
--- language: "en" tags: - bert - medical - clinical - assertion - negation - text-classification widget: - text: "Patient denies [entity] SOB [entity]." --- # Clinical Assertion / Negation Classification BERT ## Model description The Clinical Assertion and Negation Classification BERT is introduced in the paper [Assertion Detection in Clinical Notes: Medical Language Models to the Rescue? ](https://aclanthology.org/2021.nlpmc-1.5/). The model helps structure information in clinical patient letters by classifying medical conditions mentioned in the letter into PRESENT, ABSENT and POSSIBLE. The model is based on the [ClinicalBERT - Bio + Discharge Summary BERT Model](https://huggingface.co/emilyalsentzer/Bio_Discharge_Summary_BERT) by Alsentzer et al. and fine-tuned on assertion data from the [2010 i2b2 challenge](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3168320/). #### How to use the model You can load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline tokenizer = AutoTokenizer.from_pretrained("bvanaken/clinical-assertion-negation-bert") model = AutoModelForSequenceClassification.from_pretrained("bvanaken/clinical-assertion-negation-bert") ``` The model expects input in the form of spans/sentences with one marked entity to classify as `PRESENT(0)`, `ABSENT(1)` or `POSSIBLE(2)`. The entity in question is identified with the special token `[entity]` surrounding it. Example input and inference: ``` input = "The patient recovered during the night and now denies any [entity] shortness of breath [entity]." classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer) classification = classifier(input) # [{'label': 'ABSENT', 'score': 0.9842607378959656}] ``` ### Cite When working with the model, please cite our paper as follows: ```bibtex @inproceedings{van-aken-2021-assertion, title = "Assertion Detection in Clinical Notes: Medical Language Models to the Rescue?", author = "van Aken, Betty and Trajanovska, Ivana and Siu, Amy and Mayrdorfer, Manuel and Budde, Klemens and Loeser, Alexander", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.nlpmc-1.5", doi = "10.18653/v1/2021.nlpmc-1.5" } ```
gpt2-xl
82bb8104f524eef7f49c81a339b62f5866ef95b6
2022-03-08T09:48:34.000Z
[ "pytorch", "tf", "jax", "rust", "gpt2", "text-generation", "transformers" ]
text-generation
false
null
null
gpt2-xl
149,002
8
transformers
170
Test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
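Since this card contains only the demo link, a minimal local-usage sketch with the standard transformers pipeline (not part of the original card) might look like this:

```python
# Generate text locally with GPT-2 XL; the prompt is illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-xl")
outputs = generator(
    "In a shocking finding,",
    max_length=40,            # total length including the prompt
    num_return_sequences=1,
    do_sample=True,           # sample so each run differs
)
print(outputs[0]["generated_text"])
```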
snrspeaks/t5-one-line-summary
62acf01b9c91b2ea3a84b1e83a8ee0557cc3526c
2021-06-23T14:20:22.000Z
[ "pytorch", "t5", "text2text-generation", "dataset:arxiv", "transformers", "license:mit", "autotrain_compatible" ]
text2text-generation
false
snrspeaks
null
snrspeaks/t5-one-line-summary
145,953
8
transformers
171
---
datasets:
- arxiv
widget:
- text: "summarize: We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production machine learning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks. In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year, Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing. In that time, Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors 1.7-2.9 times versus production systems."
license: mit
---

# T5 One Line Summary

A T5 model trained on 370,000 research papers to generate one-line summaries from paper descriptions/abstracts. It was trained using the [simpleT5](https://github.com/Shivanandroy/simpleT5) library, a Python package built on top of PyTorch Lightning⚡️ & Transformers🤗 for quickly training T5 models.

## Usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1HrfT8IKLXvZzPFpl1EhZ3s_iiXG3O2VY?usp=sharing)

```python
abstract = """We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production
machine learning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and
handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a
set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks.
In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year,
Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing. In that time,
Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors 1.7-2.9 times versus production systems.
""" ``` ### Using Transformers🤗 ```python model_name = "snrspeaks/t5-one-line-summary" from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) input_ids = tokenizer.encode("summarize: " + abstract, return_tensors="pt", add_special_tokens=True) generated_ids = model.generate(input_ids=input_ids,num_beams=5,max_length=50,repetition_penalty=2.5,length_penalty=1,early_stopping=True,num_return_sequences=3) preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids] print(preds) # output ["Overton: Building, Deploying, and Monitoring Machine Learning Systems for Engineers", "Overton: A System for Building, Monitoring, and Improving Production Machine Learning Systems", "Overton: Building, Monitoring, and Improving Production Machine Learning Systems"] ``` ### Using simpleT5⚡️ ```python # pip install --upgrade simplet5 from simplet5 import SimpleT5 model = SimpleT5() model.load_model("t5","snrspeaks/t5-one-line-summary") model.predict(abstract) # output "Overton: Building, Deploying, and Monitoring Machine Learning Systems for Engineers" ```
tuner007/pegasus_paraphrase
0159e2949ca73657a2f1329898f51b7bb53b9ab2
2021-03-22T21:11:33.000Z
[ "pytorch", "pegasus", "text2text-generation", "en", "transformers", "paraphrasing", "seq2seq", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
tuner007
null
tuner007/pegasus_paraphrase
145,452
60
transformers
172
--- language: en license: apache-2.0 tags: - pegasus - paraphrasing - seq2seq --- ## Model description [PEGASUS](https://github.com/google-research/pegasus) fine-tuned for paraphrasing ## Model in Action 🚀 ``` import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer model_name = 'tuner007/pegasus_paraphrase' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) def get_response(input_text,num_return_sequences,num_beams): batch = tokenizer([input_text],truncation=True,padding='longest',max_length=60, return_tensors="pt").to(torch_device) translated = model.generate(**batch,max_length=60,num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) return tgt_text ``` #### Example: ``` num_beams = 10 num_return_sequences = 10 context = "The ultimate test of your knowledge is your capacity to convey it to another." get_response(context,num_return_sequences,num_beams) # output: ['The test of your knowledge is your ability to convey it.', 'The ability to convey your knowledge is the ultimate test of your knowledge.', 'The ability to convey your knowledge is the most important test of your knowledge.', 'Your capacity to convey your knowledge is the ultimate test of it.', 'The test of your knowledge is your ability to communicate it.', 'Your capacity to convey your knowledge is the ultimate test of your knowledge.', 'Your capacity to convey your knowledge to another is the ultimate test of your knowledge.', 'Your capacity to convey your knowledge is the most important test of your knowledge.', 'The test of your knowledge is how well you can convey it.', 'Your capacity to convey your knowledge is the ultimate test.'] ``` > Created by [Arpit Rajauria](https://twitter.com/arpit_rajauria) [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/arpit_rajauria)
Helsinki-NLP/opus-mt-de-en
6137149949ac01d19d8eeef6e35d32221dabc8e4
2021-09-09T21:30:51.000Z
[ "pytorch", "rust", "marian", "text2text-generation", "de", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-de-en
143,977
6
transformers
173
--- tags: - translation license: apache-2.0 --- ### opus-mt-de-en * source languages: de * target languages: en * OPUS readme: [de-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009.de.en | 29.4 | 0.557 | | news-test2008.de.en | 27.8 | 0.548 | | newstest2009.de.en | 26.8 | 0.543 | | newstest2010.de.en | 30.2 | 0.584 | | newstest2011.de.en | 27.4 | 0.556 | | newstest2012.de.en | 29.1 | 0.569 | | newstest2013.de.en | 32.1 | 0.583 | | newstest2014-deen.de.en | 34.0 | 0.600 | | newstest2015-ende.de.en | 34.2 | 0.599 | | newstest2016-ende.de.en | 40.4 | 0.649 | | newstest2017-ende.de.en | 35.7 | 0.610 | | newstest2018-ende.de.en | 43.7 | 0.667 | | newstest2019-deen.de.en | 40.1 | 0.642 | | Tatoeba.de.en | 55.4 | 0.707 |
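The card above lists benchmarks but no usage snippet; a minimal inference sketch with the standard transformers Marian classes follows (the example sentence and its translation are illustrative, not taken from the card):

```python
# Translate German to English with the standard Marian classes.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Maschinelles Lernen macht Spaß."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
# expected output along the lines of: ['Machine learning is fun.']
```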
allenai/longformer-base-4096
e351d9d5da3eed48886f39eed7b64014debe4925
2021-03-10T02:30:38.000Z
[ "pytorch", "tf", "rust", "longformer", "arxiv:2004.05150", "transformers" ]
null
false
allenai
null
allenai/longformer-base-4096
141,766
33
transformers
174
# longformer-base-4096 [Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents. `longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint and pretrained for MLM on long documents. It supports sequences of length up to 4,096. Longformer uses a combination of a sliding window (local) attention and global attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations. Please refer to the examples in `modeling_longformer.py` and the paper for more details on how to set global attention. ### Citing If you use `Longformer` in your research, please cite [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150). ``` @article{Beltagy2020Longformer, title={Longformer: The Long-Document Transformer}, author={Iz Beltagy and Matthew E. Peters and Arman Cohan}, journal={arXiv:2004.05150}, year={2020}, } ``` `Longformer` is an open-source project developed by [the Allen Institute for Artificial Intelligence (AI2)](http://www.allenai.org). AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering.
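As a minimal sketch of the global-attention configuration described above (standard transformers usage, not taken from the card; marking only the first token for global attention is a common choice for classification-style tasks, not the only one):

```python
# Encode a long document with sliding-window (local) attention plus
# user-configured global attention on selected positions.
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

text = " ".join(["Hello world!"] * 500)  # stand-in for a long document
inputs = tokenizer(text, return_tensors="pt")

# 0 = local attention everywhere by default; set 1 on task-relevant
# positions (here, just the leading <s>/CLS token) for global attention.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```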
t5-11b
41f839df070947ed9275dedaf3d28c75fb4d43e8
2022-07-22T08:11:37.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "transformers", "summarization", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
null
null
t5-11b
141,448
5
transformers
175
---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
inference: false
---

# Model Card for T5 11B

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):

> With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.

T5-11B is the checkpoint with 11 billion parameters.

- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
  - [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
  - [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
  - [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
  - [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)

# Uses

## Direct Use and Downstream Use

The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that:

> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.

See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.

## Out-of-Scope Use

More information needed.

# Bias, Risks, and Limitations

More information needed.

## Recommendations

More information needed.

# Training Details

## Training Data

The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.

The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
The following datasets were used for (1.) and (2.): 1. **Datasets used for Unsupervised denoising objective**: - [C4](https://huggingface.co/datasets/c4) - [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr) 2. **Datasets used for Supervised text-to-text language modeling objective** - Sentence acceptability judgment - CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471) - Sentiment analysis - SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) - Paraphrasing/sentence similarity - MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002) - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055) - QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) - Natural language inference - MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426) - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250) - RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9) - CB [De Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf) - Sentence completion - COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning) - Word sense disambiguation - WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121) - Question answering - MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023) - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885) - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044) ## Training Procedure In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write: > In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details. # Evaluation ## Testing Data, Factors & Metrics The developers evaluated the model on 24 tasks; see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details. ## Results For full results for T5-11B, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Google Cloud TPU Pods - **Hours used:** More information needed - **Cloud Provider:** GCP - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Citation **BibTeX:** ```bibtex @article{2020t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. 
Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {Journal of Machine Learning Research}, year = {2020}, volume = {21}, number = {140}, pages = {1-67}, url = {http://jmlr.org/papers/v21/20-074.html} } ``` **APA:** - Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67. # Model Card Authors This model card was written by the team at Hugging Face. # How to Get Started with the Model ## Disclaimer **Before `transformers` v3.5.0**, due to its immense size, `t5-11b` required some special treatment. If you're using transformers `<= v3.4.0`, `t5-11b` should be loaded with the flag `use_cdn` set to `False` as follows: ```python t5 = transformers.T5ForConditionalGeneration.from_pretrained('t5-11b', use_cdn = False) ``` In addition, a single GPU will most likely not have enough memory to even load the model, as the weights alone amount to over 40 GB. - Model parallelism can be used to overcome this problem, as explained in this [PR](https://github.com/huggingface/transformers/pull/3578). - DeepSpeed's ZeRO-Offload is another approach, as explained in this [post](https://github.com/huggingface/transformers/issues/9996). See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more context.
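On recent versions of `transformers` (v3.5.0 or later), usage follows the standard T5 API. The sketch below is illustrative rather than from the original card, and assumes you have enough memory to hold the 40+ GB of weights; the prompt is an arbitrary example.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-11b")
model = T5ForConditionalGeneration.from_pretrained("t5-11b")

# T5 is text-to-text: the task is expressed as a prefix on the input string
input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```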
cardiffnlp/twitter-roberta-base-stance-climate
a2802f328ed851a97696c724f46e75773c34e098
2021-05-20T15:10:09.000Z
[ "pytorch", "tf", "jax", "roberta", "text-classification", "transformers" ]
text-classification
false
cardiffnlp
null
cardiffnlp/twitter-roberta-base-stance-climate
141,185
null
transformers
176
hf-internal-testing/tiny-random-distilbert
2ef615d573271690c9822df720b8024148d6715a
2021-11-26T16:32:03.000Z
[ "pytorch", "tf", "distilbert", "text-classification", "transformers" ]
text-classification
false
hf-internal-testing
null
hf-internal-testing/tiny-random-distilbert
140,620
null
transformers
177
--- pipeline_tag: text-classification ---
hfl/chinese-electra-180g-small-ex-discriminator
01785e80a5c6601a86bb7cc8d74c20be82646cba
2021-03-03T01:25:29.000Z
[ "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "transformers", "license:apache-2.0" ]
null
false
hfl
null
hfl/chinese-electra-180g-small-ex-discriminator
140,416
4
transformers
178
--- language: - zh license: "apache-2.0" --- # This model is trained on 180G of data; we recommend using it instead of the original version. ## Chinese ELECTRA Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants. This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra) You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
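The card gives no usage snippet, so here is a minimal sketch (ours, not from the original card). Since this checkpoint is an ELECTRA discriminator, it can be loaded with `ElectraForPreTraining` to score which tokens look replaced; the example sentence is illustrative.

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

name = "hfl/chinese-electra-180g-small-ex-discriminator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)

inputs = tokenizer("欢迎使用中文预训练模型。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one replaced-token score per input token
is_replaced = torch.sigmoid(logits) > 0.5  # True where the discriminator flags a token
```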
jackaduma/SecBERT
e62305c27bd9e535450c47d64a06187f53606c68
2022-01-24T07:45:57.000Z
[ "pytorch", "bert", "fill-mask", "en", "dataset:APTnotes", "dataset:Stucco-Data", "dataset:CASIE", "transformers", "exbert", "security", "cybersecurity", "cyber security", "threat hunting", "threat intelligence", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
jackaduma
null
jackaduma/SecBERT
138,800
2
transformers
179
--- language: en thumbnail: https://github.com/jackaduma tags: - exbert - security - cybersecurity - cyber security - threat hunting - threat intelligence license: apache-2.0 datasets: - APTnotes - Stucco-Data - CASIE --- # SecBERT This is the pretrained model presented in [SecBERT: A Pretrained Language Model for Cyber Security Text](https://github.com/jackaduma/SecBERT/), which is a BERT model trained on cyber security text. The training corpus consists of papers taken from * [APTnotes](https://github.com/kbandla/APTnotes) * [Stucco-Data: Cyber security data sources](https://stucco.github.io/data/) * [CASIE: Extracting Cybersecurity Event Information from Text](https://ebiquity.umbc.edu/_file_directory_/papers/943.pdf) * [SemEval-2018 Task 8: Semantic Extraction from CybersecUrity REports using Natural Language Processing (SecureNLP)](https://competitions.codalab.org/competitions/17262). SecBERT has its own wordpiece vocabulary (secvocab) that's built to best match the training corpus. We trained [SecBERT](https://huggingface.co/jackaduma/SecBERT) and [SecRoBERTa](https://huggingface.co/jackaduma/SecRoBERTa) versions. Available models include: * [`SecBERT`](https://huggingface.co/jackaduma/SecBERT) * [`SecRoBERTa`](https://huggingface.co/jackaduma/SecRoBERTa) --- ## **Fill Mask** We propose a language model that works on cyber security text; as a result, it can improve downstream tasks (NER, text classification, semantic understanding, Q&A) in the cyber security domain. The image below compares the fill-mask pipeline in Google BERT, [AllenAI SciBERT](https://github.com/allenai/scibert), and our [SecBERT](https://github.com/jackaduma/SecBERT). <!-- <img src="./fill-mask-result.png" width="150%" height="150%"> --> ![fill-mask-result](https://github.com/jackaduma/SecBERT/blob/main/fill-mask-result.png?raw=true) --- The original repo can be found [here](https://github.com/jackaduma/SecBERT).
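Since the card describes the fill-mask use case without a snippet, here is a minimal sketch using the standard pipeline; the masked sentence is our own illustration, not from the card.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jackaduma/SecBERT", tokenizer="jackaduma/SecBERT")

# illustrative cyber-security sentence; [MASK] is the standard BERT mask token
for pred in fill_mask("The malware established [MASK] with the command and control server."):
    print(round(pred["score"], 4), pred["token_str"])
```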
kuprel/min-dalle
ab296a7008fd0708c63de711284776725b9729eb
2022-07-28T15:08:30.000Z
[ "transformers", "pytorch", "license:mit" ]
null
false
kuprel
null
kuprel/min-dalle
138,105
12
transformers
180
--- tags: - pytorch license: mit --- # min(DALL·E) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kuprel/min-dalle/blob/main/min_dalle.ipynb) [![Discord](https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white)](https://discord.com/channels/823813159592001537/912729332311556136) **[🐙🐱 GitHub](https://github.com/kuprel/min-dalle)** **[☕️ Buy me a Coffee](https://www.buymeacoffee.com/kuprel)** This is a fast, minimal port of Boris Dayma's [DALL·E Mini](https://github.com/borisdayma/dalle-mini) (with mega weights). It has been stripped down for inference and converted to PyTorch. The only third-party dependencies are numpy, requests, pillow and torch. To generate a 4x4 grid of DALL·E Mega images it takes: - 89 sec with a T4 in Colab - 48 sec with a P100 in Colab - 13 sec with an A100 on Replicate Here's a more detailed breakdown of performance on an A100. Credit to [@technobird22](https://github.com/technobird22) and his [NeoGen](https://github.com/technobird22/NeoGen) discord bot for the graph. <br /> <img src="https://github.com/kuprel/min-dalle/raw/main/performance.png" alt="min-dalle" width="450"/> <br /> The flax model and code for converting it to torch can be found [here](https://github.com/kuprel/min-dalle-flax). ## Install ```bash $ pip install min-dalle ``` ## Usage Load the model parameters once and reuse the model to generate multiple images. ```python import torch from min_dalle import MinDalle model = MinDalle( models_root='./pretrained', dtype=torch.float32, device='cuda', is_mega=True, is_reusable=True ) ``` The required models will be downloaded to `models_root` if they are not already there. Set the `dtype` to `torch.float16` to save GPU memory. If you have an Ampere architecture GPU you can use `torch.bfloat16`. Set the `device` to either "cuda" or "cpu". Once everything has finished initializing, call `generate_image` with some text as many times as you want. Use a positive `seed` for reproducible results. Higher values for `supercondition_factor` result in better agreement with the text but a narrower variety of generated images. Every image token is sampled from the `top_k` most probable tokens. The largest logit is subtracted from the logits to avoid infs. The logits are then divided by the `temperature`. If `is_seamless` is true, the image grid will be tiled in token space, not pixel space. ```python image = model.generate_image( text='Nuclear explosion broccoli', seed=-1, grid_size=4, is_seamless=False, temperature=1, top_k=256, supercondition_factor=32, is_verbose=False ) display(image) ``` <img src="https://github.com/kuprel/min-dalle/raw/main/examples/nuclear_broccoli.jpg" alt="min-dalle" width="400"/> Credit to [@hardmaru](https://twitter.com/hardmaru) for the [example](https://twitter.com/hardmaru/status/1544354119527596034) ### Saving Individual Images The images can also be generated as a `FloatTensor` in case you want to process them manually. ```python images = model.generate_images( text='Nuclear explosion broccoli', seed=-1, grid_size=3, is_seamless=False, temperature=1, top_k=256, supercondition_factor=16, is_verbose=False ) ``` To get an image into PIL format you will have to first move the images to the CPU and convert the tensor to a numpy array. 
```python images = images.to('cpu').numpy() ``` Then image $i$ can be converted to a PIL.Image and saved: ```python from PIL import Image image = Image.fromarray(images[i]) image.save('image_{}.png'.format(i)) ``` ### Progressive Outputs If the model is being used interactively (e.g. in a notebook), `generate_image_stream` can be used to generate a stream of images as the model is decoding. The detokenizer adds a slight delay for each image. Set `progressive_outputs` to `True` to enable this. An example is implemented in the colab. ```python image_stream = model.generate_image_stream( text='Dali painting of WALL·E', seed=-1, grid_size=3, progressive_outputs=True, is_seamless=False, temperature=1, top_k=256, supercondition_factor=16, is_verbose=False ) for image in image_stream: display(image) ``` <img src="https://github.com/kuprel/min-dalle/raw/main/examples/dali_walle_animated.gif" alt="min-dalle" width="300"/> ### Command Line Use `image_from_text.py` to generate images from the command line. ```bash $ python image_from_text.py --text='artificial intelligence' --no-mega ``` <img src="https://github.com/kuprel/min-dalle/raw/main/examples/artificial_intelligence.jpg" alt="min-dalle" width="200"/> **[❤️ Sponsor](https://github.com/sponsors/kuprel)**
albert-base-v1
aeffd769076a5c4f83b2546aea99ca45a15a5da4
2021-01-13T15:08:24.000Z
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
albert-base-v1
136,283
1
transformers
181
--- tags: - exbert language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT Base v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers. This is the first version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2. 
### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-base-v1') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] hello i'm a modeling model.[SEP]", "score":0.05816134437918663, "token":12807, "token_str":"▁modeling" }, { "sequence":"[CLS] hello i'm a modelling model.[SEP]", "score":0.03748830780386925, "token":23089, "token_str":"▁modelling" }, { "sequence":"[CLS] hello i'm a model model.[SEP]", "score":0.033725276589393616, "token":1061, "token_str":"▁model" }, { "sequence":"[CLS] hello i'm a runway model.[SEP]", "score":0.017313428223133087, "token":8014, "token_str":"▁runway" }, { "sequence":"[CLS] hello i'm a lingerie model.[SEP]", "score":0.014405295252799988, "token":29104, "token_str":"▁lingerie" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1') model = AlbertModel.from_pretrained("albert-base-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1') model = TFAlbertModel.from_pretrained("albert-base-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-base-v1') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a chauffeur.[SEP]", "score":0.029577180743217468, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the man worked as a janitor.[SEP]", "score":0.028865724802017212, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the man worked as a shoemaker.[SEP]", "score":0.02581118606030941, "token":29024, "token_str":"▁shoemaker" }, { "sequence":"[CLS] the man worked as a blacksmith.[SEP]", "score":0.01849772222340107, "token":21238, "token_str":"▁blacksmith" }, { "sequence":"[CLS] the man worked as a lawyer.[SEP]", "score":0.01820771023631096, "token":3672, "token_str":"▁lawyer" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a receptionist.[SEP]", "score":0.04604868218302727, "token":25331, "token_str":"▁receptionist" }, { "sequence":"[CLS] the woman worked as a janitor.[SEP]", "score":0.028220869600772858, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the woman worked as a paramedic.[SEP]", "score":0.0261906236410141, "token":23386, "token_str":"▁paramedic" }, { "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", "score":0.024797942489385605, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the woman worked as a waitress.[SEP]", "score":0.024124596267938614, "token":13678, "token_str":"▁waitress" } ] ``` This bias will also affect all fine-tuned versions of this model. 
## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=albert-base-v1"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
cahya/t5-base-indonesian-summarization-cased
3afd080677efb3978dfce95a19324d91caff3064
2021-06-23T12:05:23.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "id", "dataset:id_liputan6", "transformers", "pipeline:summarization", "summarization", "autotrain_compatible" ]
summarization
false
cahya
null
cahya/t5-base-indonesian-summarization-cased
133,174
2
transformers
182
--- language: id tags: - pipeline:summarization - summarization - t5 datasets: - id_liputan6 --- # Indonesian T5 Summarization Base Model Finetuned T5 base summarization model for Indonesian. ## Finetuning Corpus `t5-base-indonesian-summarization-cased` model is based on `t5-base-bahasa-summarization-cased` by [huseinzol05](https://huggingface.co/huseinzol05), finetuned using the [id_liputan6](https://huggingface.co/datasets/id_liputan6) dataset. ## Load Finetuned Model ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("cahya/t5-base-indonesian-summarization-cased") model = T5ForConditionalGeneration.from_pretrained("cahya/t5-base-indonesian-summarization-cased") ``` ## Code Sample ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("cahya/t5-base-indonesian-summarization-cased") model = T5ForConditionalGeneration.from_pretrained("cahya/t5-base-indonesian-summarization-cased") ARTICLE_TO_SUMMARIZE = "..."  # replace with the Indonesian article you want to summarize # generate summary input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt') summary_ids = model.generate(input_ids, min_length=20, max_length=80, num_beams=10, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True, no_repeat_ngram_size=2, use_cache=True, do_sample=True, temperature=0.8, top_k=50, top_p=0.95) summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(summary_text) ``` Output: ``` ```
pucpr/biobertpt-all
ac33b1ca265df5074cca1656e15a8bf900394d8e
2021-10-13T09:27:01.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "pt", "dataset:biomedical literature from Scielo and Pubmed", "transformers", "autotrain_compatible" ]
fill-mask
false
pucpr
null
pucpr/biobertpt-all
129,304
5
transformers
183
--- language: "pt" widget: - text: "O paciente recebeu [MASK] do hospital." - text: "O médico receitou a medicação para controlar a [MASK]." - text: "O principal [MASK] da COVID-19 é tosse seca." - text: "O vírus da gripe apresenta um [MASK] constituído por segmentos de ácido ribonucleico." datasets: - biomedical literature from Scielo and Pubmed thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # BioBERTpt - Portuguese Clinical and Biomedical BERT The [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) paper contains clinical and biomedical BERT-based models for Portuguese Language, initialized with BERT-Multilingual-Cased & trained on clinical notes and biomedical literature. This model card describes the BioBERTpt(all) model, a full version with clinical narratives and biomedical literature in Portuguese language. ## How to use the model Load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("pucpr/biobertpt-all") model = AutoModel.from_pretrained("pucpr/biobertpt-all") ``` ## More Information Refer to the original paper, [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) for additional details and performance on Portuguese NER tasks. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. 
We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
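For a quick smoke test with one of the fill-mask widget sentences above, a minimal sketch (ours, not from the card) using the standard pipeline:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pucpr/biobertpt-all")

# one of the card's widget examples: "The main [MASK] of COVID-19 is a dry cough."
for pred in fill_mask("O principal [MASK] da COVID-19 é tosse seca."):
    print(round(pred["score"], 4), pred["sequence"])
```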
hf-internal-testing/tiny-random-bert
9b8c223d42b2188cb49d29af482996f9d0f3e5a6
2021-09-17T19:21:17.000Z
[ "pytorch", "tf", "bert", "transformers" ]
null
false
hf-internal-testing
null
hf-internal-testing/tiny-random-bert
128,588
null
transformers
184
Entry not found
cross-encoder/ms-marco-TinyBERT-L-2
e9ed04745b2b19e8c4499360253ea5d5b41b5810
2021-08-05T08:39:52.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
cross-encoder
null
cross-encoder/ms-marco-TinyBERT-L-2
127,888
null
transformers
185
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: given a query, encode the query together with all candidate passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-2') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-2') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-2', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
sentence-transformers/paraphrase-multilingual-mpnet-base-v2
ef15aed8b328d308d7237b9bf15269f2cd19e268
2022-06-15T19:38:33.000Z
[ "pytorch", "tf", "xlm-roberta", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/paraphrase-multilingual-mpnet-base-v2
127,876
17
sentence-transformers
186
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/paraphrase-multilingual-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-mpnet-base-v2') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-multilingual-mpnet-base-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
MoritzLaurer/mDeBERTa-v3-base-mnli-xnli
ef7c55665d6b9cb4c03adfb1a05f0599d519964c
2022-07-28T16:23:58.000Z
[ "pytorch", "deberta-v2", "text-classification", "multilingual", "en", "ar", "bg", "de", "el", "es", "fr", "hi", "ru", "sw", "th", "tr", "ur", "vu", "zh", "dataset:multi_nli", "dataset:xnli", "arxiv:2111.09543", "arxiv:1809.05053", "arxiv:1911.02116", "transformers", "zero-shot-classification", "nli", "license:mit" ]
zero-shot-classification
false
MoritzLaurer
null
MoritzLaurer/mDeBERTa-v3-base-mnli-xnli
127,494
30
transformers
187
--- language: - multilingual - en - ar - bg - de - el - es - fr - hi - ru - sw - th - tr - ur - vu - zh license: mit tags: - zero-shot-classification - text-classification - nli - pytorch metrics: - accuracy datasets: - multi_nli - xnli pipeline_tag: zero-shot-classification widget: - text: "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU" candidate_labels: "politics, economy, entertainment, environment" --- # Multilingual mDeBERTa-v3-base-mnli-xnli ## Model description This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli). As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf). ## Intended uses & limitations #### How to use the model ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU" hypothesis = "Emmanuel Macron is the President of France" inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt") output = model(inputs["input_ids"].to(device)) prediction = torch.softmax(output["logits"][0], -1).tolist() label_names = ["entailment", "neutral", "contradiction"] prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)} print(prediction) ``` ### Training data This model was trained on the XNLI development dataset and the MNLI train dataset. The XNLI development set consists of 2490 professionally translated texts from English to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)). Note that XNLI contains a training set of 15 machine-translated versions of the MNLI dataset for 15 languages, but due to quality issues with these machine translations, this model was only trained on the professional translations from the XNLI development set and the original English MNLI training set (392 702 texts). Not using machine-translated texts avoids overfitting the model to the 15 languages, avoids catastrophic forgetting of the other 85 languages mDeBERTa was pre-trained on, and significantly reduces training costs. ### Training procedure mDeBERTa-v3-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters. 
``` training_args = TrainingArguments( num_train_epochs=2, # total number of training epochs learning_rate=2e-05, per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=16, # batch size for evaluation warmup_ratio=0.1, # ratio of warmup steps for the learning rate scheduler weight_decay=0.06, # strength of weight decay ) ``` ### Eval results The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75150 in total). Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also capable of doing NLI on the other 85 languages mDeBERTa was trained on, but performance is most likely lower than for those languages available in XNLI. Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers show a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)). average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vu | zh ---------|----------|---------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|---------- 0.808 | 0.802 | 0.829 | 0.825 | 0.826 | 0.883 | 0.845 | 0.834 | 0.771 | 0.813 | 0.748 | 0.793 | 0.807 | 0.740 | 0.795 | 0.8116 ## Limitations and bias Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases. ## Citation If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k. ## Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ## Debugging and issues Note that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 or higher might solve some issues. Note that mDeBERTa currently does not support FP16, see here: https://github.com/microsoft/DeBERTa/issues/77
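For the zero-shot classification use the card is tagged with, a minimal sketch (ours, not from the card) using the standard pipeline and the widget example above:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli")

sequence = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
print(classifier(sequence, candidate_labels))  # returns one score per candidate label
```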
facebook/wav2vec2-large-960h
bdeaacdf88f7a155f50a2704bc967aa81fbbb2ab
2022-04-05T16:40:42.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:librispeech_asr", "arxiv:2006.11477", "transformers", "speech", "license:apache-2.0" ]
automatic-speech-recognition
false
facebook
null
facebook/wav2vec2-large-960h
126,881
4
transformers
188
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # Wav2Vec2-Large-960h [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The large model pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-large-960h** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import soundfile as sf import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h").to("cuda") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h") def map_to_pred(batch): input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 2.8 | 6.3 |
prajjwal1/bert-mini
5e123abc2480f0c4b4cac186d3b3f09299c258fc
2021-10-27T18:27:38.000Z
[ "pytorch", "en", "arxiv:1908.08962", "arxiv:2110.01518", "transformers", "BERT", "MNLI", "NLI", "transformer", "pre-training", "license:mit" ]
null
false
prajjwal1
null
prajjwal1/bert-mini
126,710
2
transformers
189
--- language: - en license: - mit tags: - BERT - MNLI - NLI - transformer - pre-training --- The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). This is one of the smaller pre-trained BERT variants, together with [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are intended to be fine-tuned on a downstream task. If you use the model, please consider citing both papers: ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{DBLP:journals/corr/abs-1908-08962, author = {Iulia Turc and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation}, journal = {CoRR}, volume = {abs/1908.08962}, year = {2019}, url = {http://arxiv.org/abs/1908.08962}, eprinttype = {arXiv}, eprint = {1908.08962}, timestamp = {Thu, 29 Aug 2019 16:32:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Config of this model: `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini) Other models to check out: - `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny) - `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small) - `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium) Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli). Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
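Since the card gives no loading snippet, here is a minimal sketch (ours, not from the card). The checkpoint provides only an encoder, so this just extracts raw hidden states before any downstream fine-tuning.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-mini")
model = AutoModel.from_pretrained("prajjwal1/bert-mini")  # L=4, H=256

inputs = tokenizer("bert-mini is a compact BERT encoder.", return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state  # shape: (1, seq_len, 256)
```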
sentence-transformers/msmarco-distilbert-dot-v5
52b2679c1e6789ee4b2d3b81a27a4590a1bc5348
2022-06-15T20:15:43.000Z
[ "pytorch", "tf", "distilbert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/msmarco-distilbert-dot-v5
125,646
4
sentence-transformers
190
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # msmarco-distilbert-dot-v5 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 500K (query, answer) pairs from the [MS MARCO dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking/). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer, util query = "How many people live in London?" docs = ["Around 9 Million people live in London", "London is known for its financial district"] #Load the model model = SentenceTransformer('sentence-transformers/msmarco-distilbert-dot-v5') #Encode query and documents query_emb = model.encode(query) doc_emb = model.encode(docs) #Compute dot score between query and all document embeddings scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores print("Query:", query) for doc, score in doc_score_pairs: print(score, doc) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output.last_hidden_state input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) #Encode text def encode(texts): # Tokenize sentences encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input, return_dict=True) # Perform pooling embeddings = mean_pooling(model_output, encoded_input['attention_mask']) return embeddings # Sentences we want sentence embeddings for query = "How many people live in London?" 
docs = ["Around 9 Million people live in London", "London is known for its financial district"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-dot-v5") model = AutoModel.from_pretrained("sentence-transformers/msmarco-distilbert-dot-v5") #Encode query and docs query_emb = encode(query) doc_emb = encode(docs) #Compute dot score between query and all document embeddings scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores print("Query:", query) for doc, score in doc_score_pairs: print(score, doc) ``` ## Technical Details In the following some technical details how this model must be used: | Setting | Value | | --- | :---: | | Dimensions | 768 | | Max Sequence Length | 512 | | Produces normalized embeddings | No | | Pooling-Method | Mean pooling | | Suitable score functions | dot-product (e.g. `util.dot_score`) | ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=msmarco-distilbert-base-dot-v5) ## Training See `train_script.py` in this repository for the used training script. The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7858 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MarginMSELoss.MarginMSELoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 30, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 1e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ``` ## License This model is released under the Apache 2 license. However, note that this model was trained on the MS MARCO dataset which has it's own license restrictions: [MS MARCO - Terms and Conditions](https://github.com/microsoft/msmarco/blob/095515e8e28b756a62fcca7fcf1d8b3d9fbb96a9/README.md).
speechbrain/spkrec-ecapa-voxceleb
5c0be3875fda05e81f3c004ed8c7c06be308de1e
2022-06-26T23:15:06.000Z
[ "en", "dataset:voxceleb", "arxiv:2106.04624", "speechbrain", "embeddings", "Speaker", "Verification", "Identification", "pytorch", "ECAPA", "TDNN", "license:apache-2.0" ]
null
false
speechbrain
null
speechbrain/spkrec-ecapa-voxceleb
125,355
18
speechbrain
191
---
language: "en"
thumbnail:
tags:
- speechbrain
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- ECAPA
- TDNN
license: "apache-2.0"
datasets:
- voxceleb
metrics:
- EER
widget:
- example_title: VoxCeleb Speaker id10003
  src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb Speaker id10004
  src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---

<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>

# Speaker Verification with ECAPA-TDNN embeddings on VoxCeleb

This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain. The system can also be used to extract speaker embeddings. It is trained on VoxCeleb1 + VoxCeleb2 training data.

For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance on the VoxCeleb1 test set (cleaned) is:

| Release | EER (%) |
|:-------------:|:--------------:|
| 05-03-21 | 0.80 |


## Pipeline description

This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker verification is performed using the cosine distance between speaker embeddings.

## Install SpeechBrain

First of all, please install SpeechBrain with the following command:

```
pip install speechbrain
```

Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).

### Compute your speaker embeddings

```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")
signal, fs = torchaudio.load('tests/samples/ASR/spk1_snt1.wav')
embeddings = classifier.encode_batch(signal)
```
The system is trained with recordings sampled at 16 kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.

### Perform Speaker Verification

```python
from speechbrain.pretrained import SpeakerRecognition
verification = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb", savedir="pretrained_models/spkrec-ecapa-voxceleb")
score, prediction = verification.verify_files("tests/samples/ASR/spk1_snt1.wav", "tests/samples/ASR/spk2_snt1.wav") # Different Speakers
score, prediction = verification.verify_files("tests/samples/ASR/spk1_snt1.wav", "tests/samples/ASR/spk1_snt2.wav") # Same Speaker
```
The prediction is 1 if the two input signals are from the same speaker and 0 otherwise.

### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.

### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run the training:
```
cd recipes/VoxCeleb/SpeakerRec
python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```

You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).

### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.

#### Referencing ECAPA-TDNN
```
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
  author    = {Brecht Desplanques and Jenthe Thienpondt and Kris Demuynck},
  editor    = {Helen Meng and Bo Xu and Thomas Fang Zheng},
  title     = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation in {TDNN} Based Speaker Verification},
  booktitle = {Interspeech 2020},
  pages     = {3830--3834},
  publisher = {{ISCA}},
  year      = {2020},
}
```

# **Citing SpeechBrain**
Please cite SpeechBrain if you use it for your research or business.

```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```

# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
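#### Appendix: cosine scoring by hand

As noted in the pipeline description above, verification compares two ECAPA-TDNN embeddings with a cosine score. The sketch below reproduces that scoring manually with `encode_batch`; the file paths and the 0.25 decision threshold are illustrative assumptions, not values taken from the `verify_files` implementation.

```python
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")

# Load two 16 kHz mono recordings (paths are placeholders)
sig1, fs1 = torchaudio.load("utterance_a.wav")
sig2, fs2 = torchaudio.load("utterance_b.wav")

# Extract one embedding per utterance; encode_batch returns (batch, 1, emb_dim)
emb1 = classifier.encode_batch(sig1).squeeze(1)
emb2 = classifier.encode_batch(sig2).squeeze(1)

# Cosine similarity between the two speaker embeddings
score = torch.nn.functional.cosine_similarity(emb1, emb2, dim=-1)

# Illustrative threshold only; use verify_files for a calibrated decision
same_speaker = score.item() > 0.25
print(score.item(), same_speaker)
```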
deepset/roberta-base-squad2-covid
b3506f363ab164823a64b5372d5cc98f36504cd6
2021-10-21T12:19:32.000Z
[ "pytorch", "jax", "roberta", "question-answering", "transformers", "license:cc-by-4.0", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/roberta-base-squad2-covid
122,432
4
transformers
192
---
license: cc-by-4.0
---

# roberta-base-squad2 for QA on COVID-19

## Overview
**Language model:** deepset/roberta-base-squad2
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** [SQuAD-style CORD-19 annotations from 23rd April](https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/200423_covidQA.json)
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering_crossvalidation.py) in [FARM](https://github.com/deepset-ai/FARM)
**Infrastructure**: Tesla V100

## Hyperparameters
```
batch_size = 24
n_epochs = 3
base_LM_model = "deepset/roberta-base-squad2"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride = 128
xval_folds = 5
dev_split = 0
no_ans_boost = -100
```

## Performance
5-fold cross-validation on the data set led to the following results:

**Single EM-Scores:** [0.222, 0.123, 0.234, 0.159, 0.158]
**Single F1-Scores:** [0.476, 0.493, 0.599, 0.461, 0.465]
**Single top\\_3\\_recall Scores:** [0.827, 0.776, 0.860, 0.771, 0.777]
**XVAL EM:** 0.17890995260663506
**XVAL f1:** 0.49925444207319924
**XVAL top\\_3\\_recall:** 0.8021327014218009

This model is the model obtained from the **third** fold of the cross-validation.

## Usage

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/roberta-base-squad2-covid"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer

model_name = "deepset/roberta-base-squad2-covid"

# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)

# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```

### In Haystack
For doing QA at scale (i.e., over many documents instead of a single paragraph), you can also load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2-covid")
# or
reader = TransformersReader(model="deepset/roberta-base-squad2-covid", tokenizer="deepset/roberta-base-squad2-covid")
```

## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
Bogdan Kostić: `bogdan.kostic [at] deepset.ai`

## About us
![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
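## Appendix: checking the XVAL averages

As a quick sanity check, the aggregated XVAL numbers in the Performance section are the arithmetic means of the per-fold scores. The snippet below reproduces them, up to the rounding applied to the printed per-fold values.

```python
from statistics import mean

# Per-fold scores as reported in the Performance section (rounded to 3 decimals)
em_scores = [0.222, 0.123, 0.234, 0.159, 0.158]
f1_scores = [0.476, 0.493, 0.599, 0.461, 0.465]
top3_recall = [0.827, 0.776, 0.860, 0.771, 0.777]

# Averages match the reported XVAL values up to the rounding of the fold scores
print(f"XVAL EM: {mean(em_scores):.4f}")              # ~0.1792 (reported: 0.1789...)
print(f"XVAL f1: {mean(f1_scores):.4f}")              # ~0.4988 (reported: 0.4993...)
print(f"XVAL top_3_recall: {mean(top3_recall):.4f}")  # ~0.8022 (reported: 0.8021...)
```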
facebook/mbart-large-50-one-to-many-mmt
3cc64aaf129efb58cdc6345618b39ce776d888b4
2022-05-26T22:28:22.000Z
[ "pytorch", "jax", "mbart", "text2text-generation", "multilingual", "ar", "cs", "de", "en", "es", "et", "fi", "fr", "gu", "hi", "it", "ja", "kk", "ko", "lt", "lv", "my", "ne", "nl", "ro", "ru", "si", "tr", "vi", "zh", "af", "az", "bn", "fa", "he", "hr", "id", "ka", "km", "mk", "ml", "mn", "mr", "pl", "ps", "pt", "sv", "sw", "ta", "te", "th", "tl", "uk", "ur", "xh", "gl", "sl", "arxiv:2008.00401", "transformers", "mbart-50", "autotrain_compatible" ]
text2text-generation
false
facebook
null
facebook/mbart-large-50-one-to-many-mmt
121,393
8
transformers
193
---
language:
- multilingual
- ar
- cs
- de
- en
- es
- et
- fi
- fr
- gu
- hi
- it
- ja
- kk
- ko
- lt
- lv
- my
- ne
- nl
- ro
- ru
- si
- tr
- vi
- zh
- af
- az
- bn
- fa
- he
- hr
- id
- ka
- km
- mk
- ml
- mn
- mr
- pl
- ps
- pt
- sv
- sw
- ta
- te
- th
- tl
- uk
- ur
- xh
- gl
- sl
tags:
- mbart-50
---

# mBART-50 one-to-many multilingual machine translation

This model is a fine-tuned checkpoint of [mBART-large-50](https://huggingface.co/facebook/mbart-large-50). `mbart-large-50-one-to-many-mmt` is fine-tuned for multilingual machine translation. It was introduced in the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401).

The model can translate English into the 49 other languages listed below. To translate into a target language, the target language id is forced as the first generated token by passing the `forced_bos_token_id` parameter to the `generate` method.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

article_en = "The head of the United Nations says there is no military solution in Syria"

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX")

model_inputs = tokenizer(article_en, return_tensors="pt")

# translate from English to Hindi
generated_tokens = model.generate(
    **model_inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => 'संयुक्त राष्ट्र के नेता कहते हैं कि सीरिया में कोई सैन्य समाधान नहीं है'

# translate from English to Chinese
generated_tokens = model.generate(
    **model_inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => '联合国首脑说,叙利亚没有军事解决办法'
```

See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for more fine-tuned versions.

## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)

## BibTeX entry and citation info
```
@article{tang2020multilingual,
    title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
    author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
    year={2020},
    eprint={2008.00401},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
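## Translating into several target languages

Building on the snippet above, a small helper can loop over multiple targets. This is a minimal sketch that simply wraps the documented `forced_bos_token_id` pattern; the three language codes chosen for the loop are taken from the "Languages covered" list.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX")

def translate(text: str, tgt_lang: str) -> str:
    """Translate an English sentence into the target language given by its mBART-50 code."""
    model_inputs = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **model_inputs,
        forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang],
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

article_en = "The head of the United Nations says there is no military solution in Syria"
for code in ["fr_XX", "de_DE", "ja_XX"]:  # codes taken from the "Languages covered" list
    print(code, "->", translate(article_en, code))
```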
sentence-transformers/paraphrase-MiniLM-L3-v2
74e7eed84a0b0ccca7e8769c9b0e5990f41d7125
2022-07-08T04:08:35.000Z
[ "pytorch", "tf", "bert", "feature-extraction", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:s2orc", "dataset:ms_marco", "dataset:wiki_atomic_edits", "dataset:snli", "dataset:multi_nli", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/coco_captions", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/QQP", "dataset:yahoo_answers_topics", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/paraphrase-MiniLM-L3-v2
120,018
3
sentence-transformers
194
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- flax-sentence-embeddings/stackexchange_xml
- s2orc
- ms_marco
- wiki_atomic_edits
- snli
- multi_nli
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/flickr30k-captions
- embedding-data/coco_captions
- embedding-data/sentence-compression
- embedding-data/QQP
- yahoo_answers_topics
---

# sentence-transformers/paraphrase-MiniLM-L3-v2

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L3-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L3-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L3-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L3-v2)

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
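## Example: clustering

As mentioned at the top of this card, the embeddings can be used for tasks like clustering. A minimal sketch with scikit-learn follows; the sentences and the choice of `n_clusters=2` are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L3-v2')

# Illustrative sentences: two topics, so two clusters are expected
sentences = [
    "A man is eating food.",
    "A man is eating a piece of bread.",
    "The girl is carrying a baby.",
    "A woman is holding her child.",
]
embeddings = model.encode(sentences)

# n_clusters=2 is an assumption chosen to match the two topics above
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(embeddings)
for sentence, label in zip(sentences, kmeans.labels_):
    print(label, sentence)
```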
hf-internal-testing/tiny-random-roberta
73def02fc9f13169a1ce21ad4602aae38d7cbd5a
2021-09-17T19:22:24.000Z
[ "pytorch", "tf", "roberta", "transformers" ]
null
false
hf-internal-testing
null
hf-internal-testing/tiny-random-roberta
117,612
null
transformers
195
Entry not found
dbmdz/bert-large-cased-finetuned-conll03-english
f2482bf01f5da0f0eb8e183ffd8cc3885aa90b14
2021-05-19T15:17:53.000Z
[ "pytorch", "tf", "jax", "rust", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
dbmdz
null
dbmdz/bert-large-cased-finetuned-conll03-english
117,048
4
transformers
196
Entry not found
cmarkea/distilcamembert-base
bf14fbad88b19c837997f26dd1684bf98404f96b
2022-05-24T15:57:25.000Z
[ "pytorch", "tf", "camembert", "fill-mask", "fr", "dataset:oscar", "arxiv:1910.01108", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
cmarkea
null
cmarkea/distilcamembert-base
117,021
17
transformers
197
---
language: fr
license: mit
datasets:
- oscar
widget:
- text: "J'aime lire les <mask> de SF."
---

DistilCamemBERT
===============

We present a distilled version of the well-known [CamemBERT](https://huggingface.co/camembert-base), a French RoBERTa model, which we call DistilCamemBERT. The aim of distillation is to drastically reduce the complexity of the model while preserving its performance. The proof of concept is shown in the [DistilBERT paper](https://arxiv.org/abs/1910.01108), and the code used for the training is inspired by the code of [DistilBERT](https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation).

Loss function
-------------

The training of the distilled model (student model) is designed to make it as close as possible to the original model (teacher model). To achieve this, the loss function is composed of 3 parts:
* DistilLoss: a distillation loss which measures the similarity between the output probabilities of the student and teacher models, using a cross-entropy loss on the MLM task;
* CosineLoss: a cosine embedding loss. This loss function is applied to the last hidden layers of the student and teacher models to guarantee collinearity between them;
* MLMLoss: and finally, a Masked Language Modeling (MLM) task loss to train the student model on the original task of the teacher model.

The final loss function is a combination of these three loss functions. We use the following weighting:
$$Loss = 0.5 \times DistilLoss + 0.3 \times CosineLoss + 0.2 \times MLMLoss$$

Dataset
-------

To limit the bias between the student and teacher models, the dataset used for the DistilCamemBERT training is the same as the one used to train camembert-base: OSCAR. The French part of this dataset takes up approximately 140 GB on disk.

Training
--------

We pre-trained the model on an NVIDIA Titan RTX for 18 days.

Evaluation results
------------------

| Dataset name | f1-score |
| :----------: | :------: |
| [FLUE](https://huggingface.co/datasets/flue) CLS | 83% |
| [FLUE](https://huggingface.co/datasets/flue) PAWS-X | 77% |
| [FLUE](https://huggingface.co/datasets/flue) XNLI | 77% |
| [wikiner_fr](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr) NER | 98% |

How to use DistilCamemBERT
--------------------------

Load DistilCamemBERT and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cmarkea/distilcamembert-base")
model = AutoModel.from_pretrained("cmarkea/distilcamembert-base")
model.eval()
...
```

Filling masks using the pipeline:
```python
from transformers import pipeline

model_fill_mask = pipeline("fill-mask", model="cmarkea/distilcamembert-base", tokenizer="cmarkea/distilcamembert-base")
results = model_fill_mask("Le camembert est <mask> :)")

results
[{'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.3878222405910492, 'token': 7200},
 {'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.06469205021858215, 'token': 2183},
 {'sequence': '<s> Le camembert est parfait :)</s>', 'score': 0.04534877464175224, 'token': 1654},
 {'sequence': '<s> Le camembert est succulent :)</s>', 'score': 0.04128391295671463, 'token': 26202},
 {'sequence': '<s> Le camembert est magnifique :)</s>', 'score': 0.02425697259604931, 'token': 1509}]
```

Citation
--------

```bibtex
@inproceedings{delestre:hal-03674695,
  TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
  AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
  URL = {https://hal.archives-ouvertes.fr/hal-03674695},
  BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
  ADDRESS = {Vannes, France},
  YEAR = {2022},
  MONTH = Jul,
  KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
  PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
  HAL_ID = {hal-03674695},
  HAL_VERSION = {v1},
}
```
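Appendix: loss combination sketch
---------------------------------

To make the weighting in the "Loss function" section concrete, below is a minimal PyTorch sketch of how the three terms could be assembled. The tensor shapes, the use of KL divergence as the soft cross-entropy between the two output distributions, and the function name are illustrative assumptions; the actual training code follows the DistilBERT distillation scripts linked above.

```python
import torch
import torch.nn.functional as F

def combined_distillation_loss(student_logits, teacher_logits,
                               student_hidden, teacher_hidden, mlm_labels):
    """Illustrative 0.5/0.3/0.2 combination of the three losses described above."""
    # DistilLoss: soft cross-entropy between student and teacher output distributions
    distil_loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )

    # CosineLoss: push the last hidden states of student and teacher to be collinear
    sh = student_hidden.reshape(-1, student_hidden.size(-1))
    th = teacher_hidden.reshape(-1, teacher_hidden.size(-1))
    target = torch.ones(sh.size(0), device=sh.device)
    cosine_loss = F.cosine_embedding_loss(sh, th, target)

    # MLMLoss: the student's own masked language modeling objective
    mlm_loss = F.cross_entropy(
        student_logits.reshape(-1, student_logits.size(-1)),
        mlm_labels.reshape(-1),
        ignore_index=-100,  # unmasked positions conventionally carry label -100
    )

    return 0.5 * distil_loss + 0.3 * cosine_loss + 0.2 * mlm_loss
```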
microsoft/DialoGPT-medium
8bada3b953e25ec171dea4e28c52f1e8b546d707
2021-05-23T09:11:45.000Z
[ "pytorch", "tf", "jax", "rust", "gpt2", "text-generation", "arxiv:1911.00536", "transformers", "conversational", "license:mit" ]
conversational
false
microsoft
null
microsoft/DialoGPT-medium
116,315
24
transformers
198
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---

## A State-of-the-Art Large-scale Pretrained Response Generation Model (DialoGPT)

DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the responses generated by DialoGPT are comparable to human response quality under a single-turn conversation Turing test. The model is trained on 147M multi-turn dialogues from Reddit discussion threads.

* Multi-turn generation examples from an interactive environment:

|Role | Response |
|---------|--------|
|User | Does money buy happiness? |
| Bot | Depends how much money you spend on it .|
|User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
|User |This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |

Information about preprocessing, training, and full details of DialoGPT can be found in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT).

ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)

### How to use

Now we are ready to try out how the model works as a chatting partner!

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch


tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
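### Sampling instead of greedy decoding

The `generate` call above uses greedy decoding, which can make the bot repetitive across turns. A common variation is to sample instead; the sketch below is a drop-in replacement for that call, where the `top_k=50` and `top_p=0.95` values are illustrative rather than tuned.

```python
# Drop-in replacement for the generate call above, using sampling instead of greedy decoding
chat_history_ids = model.generate(
    bot_input_ids,
    max_length=1000,
    do_sample=True,      # sample from the distribution instead of taking the argmax
    top_k=50,            # illustrative value: keep only the 50 most likely next tokens
    top_p=0.95,          # illustrative value: nucleus sampling over 95% of the probability mass
    pad_token_id=tokenizer.eos_token_id,
)
```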
cardiffnlp/twitter-roberta-base-emotion
dff452c4f42c15a25bd51aff1f1ca5d15ec08c23
2022-03-23T14:34:19.000Z
[ "pytorch", "tf", "jax", "roberta", "text-classification", "arxiv:2010.12421", "transformers" ]
text-classification
false
cardiffnlp
null
cardiffnlp/twitter-roberta-base-emotion
115,973
13
transformers
199
# Twitter-roBERTa-base for Emotion Recognition This is a roBERTa-base model trained on ~58M tweets and finetuned for emotion recognition with the TweetEval benchmark. - Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). - Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval). ## Example of classification ```python from transformers import AutoModelForSequenceClassification from transformers import TFAutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import softmax import csv import urllib.request # Preprocess text (username and link placeholders) def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) # Tasks: # emoji, emotion, hate, irony, offensive, sentiment # stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary task='emotion' MODEL = f"cardiffnlp/twitter-roberta-base-{task}" tokenizer = AutoTokenizer.from_pretrained(MODEL) # download label mapping mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt" with urllib.request.urlopen(mapping_link) as f: html = f.read().decode('utf-8').split("\n") csvreader = csv.reader(html, delimiter='\t') labels = [row[1] for row in csvreader if len(row) > 1] # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) model.save_pretrained(MODEL) text = "Celebrating my promotion 😎" text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) # # TF # model = TFAutoModelForSequenceClassification.from_pretrained(MODEL) # model.save_pretrained(MODEL) # text = "Celebrating my promotion 😎" # encoded_input = tokenizer(text, return_tensors='tf') # output = model(encoded_input) # scores = output[0][0].numpy() # scores = softmax(scores) ranking = np.argsort(scores) ranking = ranking[::-1] for i in range(scores.shape[0]): l = labels[ranking[i]] s = scores[ranking[i]] print(f"{i+1}) {l} {np.round(float(s), 4)}") ``` Output: ``` 1) joy 0.9382 2) optimism 0.0362 3) anger 0.0145 4) sadness 0.0112 ```
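## Batch scoring helper

Reusing the `preprocess` function and softmax step from the example above, a small helper can score several tweets at once. This is a sketch that assumes `tokenizer`, `model` (the PyTorch variant), and `labels` have already been loaded as shown.

```python
import numpy as np
from scipy.special import softmax

def predict_emotions(texts, tokenizer, model, labels):
    """Return the top emotion label and its probability for each tweet."""
    texts = [preprocess(t) for t in texts]  # same placeholder substitution as above
    encoded = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
    scores = model(**encoded)[0].detach().numpy()
    probs = softmax(scores, axis=1)
    best = probs.argmax(axis=1)
    return [(labels[i], float(probs[row, i])) for row, i in enumerate(best)]

print(predict_emotions(
    ["Celebrating my promotion 😎", "I can't believe they cancelled the show"],
    tokenizer, model, labels,
))
```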