Dataset columns:

| Column | Type | Length / Values |
|---|---|---|
| `modelId` | string | length 4-112 |
| `sha` | string | length 40 |
| `lastModified` | string | length 24 |
| `tags` | sequence | |
| `pipeline_tag` | string | 29 classes |
| `private` | bool | 1 class |
| `author` | string | length 2-38 |
| `config` | null | |
| `id` | string | length 4-112 |
| `downloads` | float64 | 0-36.8M |
| `likes` | float64 | 0-712 |
| `library_name` | string | 17 classes |
| `__index_level_0__` | int64 | 0-38.5k |
| `readme` | string | length 0-186k |
deepset/xlm-roberta-large-squad2
089becf104e1928b27123065f4724e93fcbfd879
2022-07-25T09:48:49.000Z
[ "pytorch", "xlm-roberta", "question-answering", "multilingual", "dataset:squad_v2", "transformers", "license:cc-by-4.0", "model-index", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/xlm-roberta-large-squad2
60,309
18
transformers
300
--- language: multilingual tags: - question-answering datasets: - squad_v2 license: cc-by-4.0 model-index: - name: deepset/xlm-roberta-large-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - name: Exact Match type: exact_match value: 81.8281 verified: true - name: F1 type: f1 value: 84.8886 verified: true --- # Multilingual XLM-RoBERTa large for QA on various languages ## Overview **Language model:** xlm-roberta-large **Language:** Multilingual **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD dev set - German MLQA - German XQuAD **Training run:** [MLFlow link](https://public-mlflow.deepset.ai/#/experiments/124/runs/3a540e3f3ecf4dd98eae8fc6d457ff20) **Infrastructure**: 4x Tesla v100 ## Hyperparameters ``` batch_size = 32 n_epochs = 3 base_LM_model = "xlm-roberta-large" max_seq_len = 256 learning_rate = 1e-5 lr_schedule = LinearWarmup warmup_proportion = 0.2 doc_stride=128 max_query_length=64 ``` ## Performance Evaluated on the SQuAD 2.0 English dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). ``` "exact": 79.45759285774446, "f1": 83.79259828925511, "total": 11873, "HasAns_exact": 71.96356275303644, "HasAns_f1": 80.6460053117963, "HasAns_total": 5928, "NoAns_exact": 86.93019343986543, "NoAns_f1": 86.93019343986543, "NoAns_total": 5945 ``` Evaluated on German [MLQA: test-context-de-question-de.json](https://github.com/facebookresearch/MLQA) ``` "exact": 49.34691166703564, "f1": 66.15582561674236, "total": 4517, ``` Evaluated on German [XQuAD: xquad.de.json](https://github.com/deepmind/xquad) ``` "exact": 61.51260504201681, "f1": 78.80206098332569, "total": 1190, ``` ## Usage ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/xlm-roberta-large-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ### In FARM ```python from farm.modeling.adaptive_model import AdaptiveModel from farm.modeling.tokenization import Tokenizer from farm.infer import QAInferencer model_name = "deepset/xlm-roberta-large-squad2" # a) Get predictions nlp = QAInferencer.load(model_name) QA_input = [{"questions": ["Why is model conversion important?"], "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}] res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True) # b) Load model & tokenizer model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering") tokenizer = Tokenizer.load(model_name) ``` ### In haystack For doing QA at scale (i.e. 
many documents instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/xlm-roberta-large-squad2") # or reader = TransformersReader(model="deepset/xlm-roberta-large-squad2", tokenizer="deepset/xlm-roberta-large-squad2") ``` ## Authors Branden Chan: `branden.chan [at] deepset.ai` Timo Möller: `timo.moeller [at] deepset.ai` Malte Pietsch: `malte.pietsch [at] deepset.ai` Tanay Soni: `tanay.soni [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: industry-specific language models & large-scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
microsoft/layoutlmv3-base
2b54055895563a60a6f828b15b71b81e58fd6f0f
2022-07-20T09:35:00.000Z
[ "pytorch", "layoutlmv3", "en", "arxiv:2204.08387", "transformers", "license:cc-by-nc-sa-4.0" ]
null
false
microsoft
null
microsoft/layoutlmv3-base
59,950
19
transformers
301
--- language: en license: cc-by-nc-sa-4.0 --- # LayoutLMv3 [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlmv3) ## Model description LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis. [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, Preprint 2022. ## Citation If you find LayoutLM useful in your research, please cite the following paper: ``` @article{huang2022layoutlmv3, title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking}, author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei}, journal={arXiv preprint arXiv:2204.08387}, year={2022} } ``` ## License The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) Portions of the source code are based on the [transformers](https://github.com/huggingface/transformers) project. [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct)
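For the LayoutLMv3 card above, a minimal feature-extraction sketch (added for illustration, not part of the original card); the document image path, words, and bounding boxes are placeholders, and OCR is assumed to have been run externally (hence `apply_ocr=False`):

```python
from PIL import Image
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModel.from_pretrained("microsoft/layoutlmv3-base")

image = Image.open("document.png").convert("RGB")  # hypothetical scanned page
words = ["Invoice", "Total:", "$120.00"]           # words from your own OCR step
boxes = [[40, 50, 200, 80], [40, 120, 150, 150], [160, 120, 300, 150]]  # word boxes normalized to 0-1000

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)  # (batch, text + image patch tokens, hidden size)
```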
typeform/mobilebert-uncased-mnli
b60d566014db63a45a440ee32b3e9e9a01d2a1fc
2021-02-14T09:11:00.000Z
[ "pytorch", "mobilebert", "text-classification", "en", "dataset:multi_nli", "transformers", "zero-shot-classification" ]
zero-shot-classification
false
typeform
null
typeform/mobilebert-uncased-mnli
59,703
1
transformers
302
--- language: en pipeline_tag: zero-shot-classification tags: - mobilebert datasets: - multi_nli metrics: - accuracy --- # MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices This model is the Multi-Genre Natural Language Inference (MNLI) fine-tuned version of the [uncased MobileBERT model](https://huggingface.co/google/mobilebert-uncased), intended for zero-shot text classification.
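For the MobileBERT-MNLI card above, a minimal usage sketch (added for illustration, not part of the original card) with the zero-shot-classification pipeline; the example text and candidate labels are placeholders:

```python
from transformers import pipeline

# NLI-based zero-shot classification with the MNLI fine-tuned MobileBERT
classifier = pipeline("zero-shot-classification",
                      model="typeform/mobilebert-uncased-mnli")

result = classifier(
    "Last week I upgraded my phone plan and now I am being billed twice.",
    candidate_labels=["mobile", "website", "billing", "account access"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```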
sentence-transformers/LaBSE
931b5f9a111859fa72549cd1a7cb32168ebbe010
2022-06-15T19:56:07.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/LaBSE
59,438
25
sentence-transformers
303
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: apache-2.0 --- # LaBSE This is a port of the [LaBSE](https://tfhub.dev/google/LaBSE/1) model to PyTorch. It can be used to map 109 languages to a shared vector space. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/LaBSE') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/LaBSE) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (3): Normalize() ) ``` ## Citing & Authors Have a look at [LaBSE](https://tfhub.dev/google/LaBSE/1) for the respective publication that describes LaBSE.
t5-3b
7a91dcdb0494b6d21c9aec758dac1f33c8db715c
2022-07-22T08:11:47.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "transformers", "summarization", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
null
null
t5-3b
59,284
1
transformers
304
--- language: - en - fr - ro - de datasets: - c4 tags: - summarization - translation license: apache-2.0 --- # Model Card for T5-3B ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Citation](#citation) 8. [Model Card Authors](#model-card-authors) 9. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html): > With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-3B is the checkpoint with 3 billion parameters. - **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints) - **Model type:** Language model - **Language(s) (NLP):** English, French, Romanian, German - **License:** Apache 2.0 - **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5) - **Resources for more information:** - [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) - [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) - [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer) - [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5) # Uses ## Direct Use and Downstream Use The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model: > Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details. ## Out-of-Scope Use More information needed. # Bias, Risks, and Limitations More information needed. ## Recommendations More information needed. # Training Details ## Training Data The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5. The model was pre-trained on a on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**. 
The following datasets were used for (1.) and (2.): 1. **Datasets used for Unsupervised denoising objective**: - [C4](https://huggingface.co/datasets/c4) - [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr) 2. **Datasets used for Supervised text-to-text language modeling objective** - Sentence acceptability judgment - CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471) - Sentiment analysis - SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) - Paraphrasing/sentence similarity - MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002) - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055) - QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) - Natural language inference - MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426) - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250) - RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9) - CB [De Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf) - Sentence completion - COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning) - Word sense disambiguation - WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121) - Question answering - MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023) - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885) - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044) ## Training Procedure In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write: > In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details. # Evaluation ## Testing Data, Factors & Metrics The developers evaluated the model on 24 tasks; see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details. ## Results For full results for T5-3B, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Google Cloud TPU Pods - **Hours used:** More information needed - **Cloud Provider:** GCP - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Citation **BibTeX:** ```bibtex @article{2020t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J.
Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {Journal of Machine Learning Research}, year = {2020}, volume = {21}, number = {140}, pages = {1-67}, url = {http://jmlr.org/papers/v21/20-074.html} } ``` **APA:** - Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67. # Model Card Authors This model card was written by the team at Hugging Face. # How to Get Started with the Model See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more context on how to get started with this checkpoint.
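To complement the T5-3B card above, a minimal usage sketch (added for illustration, not taken from the model card); the translation prompt is a placeholder, and the checkpoint is large, so substantial memory is needed:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-3b")
model = T5ForConditionalGeneration.from_pretrained("t5-3b")  # ~3B parameters; needs a lot of RAM/GPU memory

# T5 frames every task as text-to-text via a task prefix
input_ids = tokenizer("translate English to German: The house is wonderful.",
                      return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```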
valhalla/distilbart-mnli-12-3
ef9a58ce6a9cd44cd0d4c2f7db1cd67f81019a8b
2021-06-14T10:29:48.000Z
[ "pytorch", "jax", "bart", "text-classification", "dataset:mnli", "transformers", "distilbart", "distilbart-mnli", "zero-shot-classification" ]
zero-shot-classification
false
valhalla
null
valhalla/distilbart-mnli-12-3
59,222
6
transformers
305
--- datasets: - mnli tags: - distilbart - distilbart-mnli pipeline_tag: zero-shot-classification --- # DistilBart-MNLI distilbart-mnli is the distilled version of bart-large-mnli created using the **No Teacher Distillation** technique proposed for BART summarisation by Huggingface, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart). We simply copy alternating layers from `bart-large-mnli` and fine-tune further on the same data. | | matched acc | mismatched acc | | ------------------------------------------------------------------------------------ | ----------- | -------------- | | [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) (baseline, 12-12) | 89.9 | 90.01 | | [distilbart-mnli-12-1](https://huggingface.co/valhalla/distilbart-mnli-12-1) | 87.08 | 87.5 | | [distilbart-mnli-12-3](https://huggingface.co/valhalla/distilbart-mnli-12-3) | 88.1 | 88.19 | | [distilbart-mnli-12-6](https://huggingface.co/valhalla/distilbart-mnli-12-6) | 89.19 | 89.01 | | [distilbart-mnli-12-9](https://huggingface.co/valhalla/distilbart-mnli-12-9) | 89.56 | 89.52 | This is a very simple and effective technique; as the table shows, the performance drop is very small. Detailed performance trade-offs will be posted in this [sheet](https://docs.google.com/spreadsheets/d/1dQeUvAKpScLuhDV1afaPJRRAE55s2LpIzDVA5xfqxvk/edit?usp=sharing). ## Fine-tuning If you want to train these models yourself, clone the [distillbart-mnli repo](https://github.com/patil-suraj/distillbart-mnli) and follow the steps below. Clone and install transformers from source: ```bash git clone https://github.com/huggingface/transformers.git pip install -qqq -U ./transformers ``` Download the MNLI data: ```bash python transformers/utils/download_glue_data.py --data_dir glue_data --tasks MNLI ``` Create the student model: ```bash python create_student.py \ --teacher_model_name_or_path facebook/bart-large-mnli \ --student_encoder_layers 12 \ --student_decoder_layers 6 \ --save_path student-bart-mnli-12-6 ``` Start fine-tuning: ```bash python run_glue.py args.json ``` You can find the logs of these trained models in this [wandb project](https://wandb.ai/psuraj/distilbart-mnli).
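For the DistilBart-MNLI card above, a minimal inference sketch (added for illustration; the original card only covers fine-tuning); the input text and labels are placeholders:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="valhalla/distilbart-mnli-12-3")

print(classifier("The new graphics card benchmarks leaked ahead of launch.",
                 candidate_labels=["technology", "sports", "politics"]))
```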
bigscience/T0_3B
8794c7177e3a67b8a0ec739d94eecfa6a591c974
2022-06-21T01:31:56.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:bigscience/P3", "arxiv:2110.08207", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
bigscience
null
bigscience/T0_3B
59,190
42
transformers
306
--- datasets: - bigscience/P3 language: en license: apache-2.0 widget: - text: "A is the son's of B's uncle. What is the family relationship between A and B?" - text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old." - text: "Task: copy but say the opposite.\n PSG won its match against Barca." - text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy." example_title: "Sentiment analysis" - text: "Question A: How is air traffic controlled? \nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates." - text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady. \nIn the previous sentence, decide who 'her' is referring to." example_title: "Coreference resolution" - text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n Select the category for the above sentence from: mobile, website, billing, account access." - text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n Do sentences 1 and 2 have the same meaning?" example_title: "Paraphrase identification" - text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n (CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best." - text: "Max: Know any good websites to buy clothes from?\n Payton: Sure :) LINK 1, LINK 2, LINK 3\n Max: That's a lot of them!\n Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n Max: I'll check them out. Thanks.\n\n Who or what are Payton and Max referring to when they say 'them'?" - text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n Sentence A: you can leave the books on the table over there.\n Sentence B: the tables in this book are very hard to read." - text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n Which book is the leftmost book?" example_title: "Logic puzzles" - text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patril.\n\n Who are the men running for mayor?" 
example_title: "Reading comprehension" - text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n Which of the following best characterizes binne bams?\n - Sentence 1: Binne bams are for pets.\n - Sentence 2: Binne bams are typically furnished with sofas and televisions.\n - Sentence 3: Binne bams are luxurious apartments.\n - Sentence 4: Binne bams are places where people live." --- **How do I pronounce the name of the model?** T0 should be pronounced "T Zero" (like in "T5 for zero-shot") and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"! **Official repository**: [bigscience-workshop/t-zero](https://github.com/bigscience-workshop/t-zero) # Model Description T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks. # Intended uses You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*. A few other examples that you can try: - *A is the son's of B's uncle. What is the family relationship between A and B?* - *Question A: How is air traffic controlled?<br> Question B: How do you become an air traffic controller?<br> Pick one: these questions are duplicates or not duplicates.* - *Is the word 'table' used in the same meaning in the two following sentences?<br><br> Sentence A: you can leave the books on the table over there.<br> Sentence B: the tables in this book are very hard to read.* - *Max: Know any good websites to buy clothes from?<br> Payton: Sure :) LINK 1, LINK 2, LINK 3<br> Max: That's a lot of them!<br> Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br> Max: I'll check them out. Thanks.<br><br> Who or what are Payton and Max referring to when they say 'them'?* - *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br> The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br> Which book is the leftmost book?* - *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.* # How to use We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks. 
|Model|Number of parameters| |-|-| |[T0](https://huggingface.co/bigscience/T0)|11 billion| |[T0p](https://huggingface.co/bigscience/T0p)|11 billion| |[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion| |[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion| |[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion| |[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion| Here is how to use the model in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp") model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp") inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`. **Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.** # Training procedure T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective. At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section. Training details: - Fine-tuning steps: 12'200 - Input sequence length: 1024 - Target sequence length: 256 - Batch size: 1'024 sequences - Optimizer: Adafactor - Learning rate: 1e-3 - Dropout: 0.1 - Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples) - Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length # Training data We trained different variants T0 with different mixtures of datasets. 
|Model|Training datasets| |--|--| |T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP| |T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions| |T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC| |T0_single_prompt|Same as T0 but only one prompt per training dataset| |T0_original_task_only|Same as T0 but only original tasks templates| |T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model| For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page. *: We recast Hotpot QA as closed-book QA due to long input sequence length. # Evaluation data We evaluate our models on a suite of held-out tasks: |Task category|Datasets| |-|-| |Natural language inference|ANLI, CB, RTE| |Coreference resolution|WSC, Winogrande| |Word sense disambiguation|WiC| |Sentence completion|COPA, HellaSwag, Story Cloze| We also evaluate T0, T0p and T0pp on the a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench): - Code description task - Conceptual combinations - Hindu knowledge json - Known unknowns - Language identification - Logic grid puzzle task - Logical deduction - Common misconceptions - Movie dialog same or different - Novel concepts - Strategyqa - Formal fallacies syllogisms negation - VitaminC - Winowhy multiple choice # Limitations - The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html). - We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model. - Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non English text. # Bias and fairness Even if we took deliberate decisions to exclude datasets with potentially harmful content from the fine-tuning, the models trained are not bias-free. 
Based on a few experimentations, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics: - Input: `Is the earth flat?` - Prediction: `yes` - Input: `Do vaccines cause autism?` - Prediction: `yes` - Input: `Complete this sentence: This man works as a` - Prediction: `Architect` - Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny` - Input: `what do you have dreams of innovating?` - Prediction: `a new type of sex` - Input: `what's something this generation normalized that shouldn't be normalized at all?` - Prediction: `sexual assault` - Input: `what is something everyone hates, but you like?` - Prediction: `sex` - Input: `What is socially unacceptable but really shouldn't be?` - Prediction: `sex` - Input: `What is your favorite "your mom" joke?` - Prediction: `Your mom is a slut` - Input: `if you could invent anything useless but useful at the same time, what would it be?` - Prediction: `sex toy` Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases. To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases present in the masked language models using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts. <table> <tr> <td>Dataset</td> <td>Model</td> <td>Average (Acc.)</td> <td>Median (Acc.)</td> </tr> <tr> <td rowspan="10">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td> </tr> <td>T0p</td><td>57.6</td><td>83.8</td> <tr> </tr> <td>T0pp</td><td>62.7</td><td>64.4</td> <tr> </tr> <td>T0_single_prompt</td><td>57.6</td><td>69.5</td> <tr> </tr> <td>T0_original_task_only</td><td>47.1</td><td>37.8</td> <tr> </tr> <td>T0_3B</td><td>56.9</td><td>82.6</td> </tr> <tr> <td rowspan="10">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td> </tr> <td>T0p</td><td>80.1</td><td>80.6</td> <tr> </tr> <td>T0pp</td><td>89.2</td><td>90.0</td> <tr> </tr> <td>T0_single_prompt</td><td>81.6</td><td>84.6</td> <tr> </tr> <td>T0_original_task_only</td><td>83.7</td><td>83.8</td> <tr> </tr> <td>T0_3B</td><td>69.7</td><td>69.4</td> </tr> </table> To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias Schemas has two schemas (type1 and type2) which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. 
All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subset measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts. <table> <tr> <td rowspan="2">Model</td> <td rowspan="2">Subset</td> <td colspan="3">Average (Acc.)</td> <td colspan="3">Median (Acc.)</td> </tr> <tr> <td>Pro</td> <td>Anti</td> <td>Pro - Anti</td> <td>Pro</td> <td>Anti</td> <td>Pro - Anti</td> </tr> <tr> <td rowspan="2">T0</td><td>Type 1</td> <td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td> </tr> <td>Type 2</td> <td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td> </tr> </tr> <td rowspan="2">T0p</td> <td>Type 1</td> <td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td> </tr> </tr> <td>Type 2</td> <td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td> </tr> </tr> <td rowspan="2">T0pp</td> <td>Type 1</td> <td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td> </tr> </tr> <td>Type 2</td> <td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td> </tr> </tr> <td rowspan="2">T0_single_prompt</td> <td>Type 1</td> <td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td> </tr> </tr> <td>Type 2</td> <td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td> </tr> </tr> <td rowspan="2">T0_original_task_only</td> <td>Type 1</td> <td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td> </tr> </tr> <td> Type 2</td> <td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td> </tr> </tr> <td rowspan="2">T0_3B</td> <td>Type 1</td> <td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td> </tr> </tr> <td> Type 2</td> <td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75</td><td>10.9</td> </tr> </table> # BibTeX entry and citation info ```bibtex @misc{sanh2021multitask, title={Multitask Prompted Training Enables Zero-Shot Task Generalization}, author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush}, year={2021}, eprint={2110.08207}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
Rostlab/prot_t5_xl_uniref50
d604cdc190f7df5186404c8729934f0ee9a4b0e4
2021-03-29T11:47:15.000Z
[ "pytorch", "t5", "text2text-generation", "protein", "dataset:UniRef50", "transformers", "protein language model", "autotrain_compatible" ]
text2text-generation
false
Rostlab
null
Rostlab/prot_t5_xl_uniref50
59,027
5
transformers
307
--- language: protein tags: - protein language model datasets: - UniRef50 --- # ProtT5-XL-UniRef50 model Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in [this repository](https://github.com/agemagician/ProtTrans). This model is trained on uppercase amino acids: it only works with capital-letter amino acids. ## Model description ProtT5-XL-UniRef50 is based on the `t5-3b` model and was pretrained on a large corpus of protein sequences in a self-supervised fashion. This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those protein sequences. One important difference between this T5 model and the original T5 version is the denoising objective. The original T5-3B model was pretrained using a span denoising objective, while this model was pre-trained with a BART-like MLM denoising objective. The masking probability is consistent with the original T5 training: 15% of the amino acids in the input are randomly masked. It has been shown that the features extracted from this self-supervised model (LM-embeddings) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. ## Intended uses & limitations The model can be used for protein feature extraction or fine-tuned on downstream tasks. We have noticed that in some tasks one can gain more accuracy by fine-tuning the model rather than using it as a feature extractor. We have also noticed that, for feature extraction, it is better to use the features extracted from the encoder rather than from the decoder. ### How to use Here is how to use this model to extract the features of a given protein sequence in PyTorch: ```python from transformers import T5Tokenizer, T5Model import re import torch tokenizer = T5Tokenizer.from_pretrained('Rostlab/prot_t5_xl_uniref50', do_lower_case=False) model = T5Model.from_pretrained("Rostlab/prot_t5_xl_uniref50") sequences_Example = ["A E T C Z A O","S K T Z P"] sequences_Example = [re.sub(r"[UZOB]", "X", sequence) for sequence in sequences_Example] ids = tokenizer.batch_encode_plus(sequences_Example, add_special_tokens=True, padding=True) input_ids = torch.tensor(ids['input_ids']) attention_mask = torch.tensor(ids['attention_mask']) with torch.no_grad(): embedding = model(input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=input_ids) # For feature extraction we recommend using the encoder embedding encoder_embedding = embedding.encoder_last_hidden_state.cpu().numpy() decoder_embedding = embedding.last_hidden_state.cpu().numpy() ``` ## Training data The ProtT5-XL-UniRef50 model was pretrained on [UniRef50](https://www.uniprot.org/help/uniref), a dataset consisting of 45 million protein sequences. ## Training procedure ### Preprocessing The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 21. The rare amino acids "U,Z,O,B" were mapped to "X". The inputs of the model are then of the form: ``` Protein Sequence [EOS] ``` The preprocessing step was performed on the fly, by cutting and padding the protein sequences up to 512 tokens. The details of the masking procedure for each sequence are as follows: - 15% of the amino acids are masked.
- In 90% of the cases, the masked amino acids are replaced by `[MASK]` token. - In 10% of the cases, the masked amino acids are replaced by a random amino acid (different) from the one they replace. ### Pretraining The model was trained on a single TPU Pod V2-256 for 991.5 thousand steps in total, using sequence length 512 (batch size 2k). It was trained using ProtT5-XL-BFD model as an initial checkpoint, rather than training from scratch. It has a total of approximately 3B parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results When the model is used for feature extraction, this model achieves the following results: Test results : | Task/Dataset | secondary structure (3-states) | secondary structure (8-states) | Localization | Membrane | |:-----:|:-----:|:-----:|:-----:|:-----:| | CASP12 | 81 | 70 | | | | TS115 | 87 | 77 | | | | CB513 | 86 | 74 | | | | DeepLoc | | | 81 | 91 | ### BibTeX entry and citation info ```bibtex @article {Elnaggar2020.07.12.199554, author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and BHOWMIK, DEBSINDHU and Rost, Burkhard}, title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing}, elocation-id = {2020.07.12.199554}, year = {2020}, doi = {10.1101/2020.07.12.199554}, publisher = {Cold Spring Harbor Laboratory}, abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112 times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8 states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. 
Availability: ProtTrans is available at https://github.com/agemagician/ProtTrans. Competing Interest Statement: The authors have declared no competing interest.}, URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554}, eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf}, journal = {bioRxiv} } ``` > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
google/pegasus-large
51b039cd8c644561432f7bfbe75e65f720b38f66
2021-09-14T07:50:56.000Z
[ "pytorch", "tf", "jax", "pegasus", "text2text-generation", "en", "arxiv:1912.08777", "transformers", "summarization", "autotrain_compatible" ]
summarization
false
google
null
google/pegasus-large
58,783
21
transformers
308
--- language: en tags: - summarization --- ### Pegasus Models See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html) Original TF 1 code [here](https://github.com/google-research/pegasus) Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019 Maintained by: [@sshleifer](https://twitter.com/sam_shleifer) Task: Summarization The following is copied from the authors' README. # Mixed & Stochastic Checkpoints We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in the table below. | dataset | C4 | HugeNews | Mixed & Stochastic| | ---- | ---- | ---- | ----| | xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64| | cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30| | newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18| | multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95| | gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76| | wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *| | reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94| | big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *| | arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67| | pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25| | aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51| | billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59| The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper): - trained on both C4 and HugeNews (the dataset mixture is weighted by their number of examples). - trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity). - the model uniformly samples a gap sentence ratio between 15% and 45%. - important sentences are sampled with 20% uniform noise added to the importance scores. - the sentencepiece tokenizer is updated to be able to encode the newline character. (*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data: - the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' sentencepiece tokenizer doesn't encode newlines and loses this information. - we updated the BigPatent dataset to preserve casing; some format cleaning also changed, please refer to the change in TFDS. Citation ``` @misc{zhang2019pegasus, title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization}, author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu}, year={2019}, eprint={1912.08777}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
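For the Pegasus card above, a minimal summarization sketch (added for illustration, not part of the original card); the input text is a placeholder to be replaced with a real article:

```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-large"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = ("PEGASUS pre-trains by masking whole sentences and generating them as output, "
        "which transfers well to abstractive summarization.")  # placeholder article
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```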
hf-internal-testing/tiny-random-gpt2
937b4d23b6648f5a1a0d1247b939b26981798903
2021-09-17T19:24:03.000Z
[ "pytorch", "tf", "gpt2", "transformers" ]
null
false
hf-internal-testing
null
hf-internal-testing/tiny-random-gpt2
57,934
null
transformers
309
Entry not found
facebook/blenderbot-400M-distill
a2084cb58dd4810f45302724dd07c68051fe9ed3
2022-05-16T19:39:21.000Z
[ "pytorch", "tf", "jax", "blenderbot", "text2text-generation", "en", "dataset:blended_skill_talk", "arxiv:2004.13637", "transformers", "convAI", "conversational", "facebook", "license:apache-2.0", "autotrain_compatible" ]
conversational
false
facebook
null
facebook/blenderbot-400M-distill
57,741
41
transformers
310
--- language: - en thumbnail: tags: - convAI - conversational - facebook license: apache-2.0 datasets: - blended_skill_talk metrics: - perplexity --- ## Model description + Paper: [Recipes for building an open-domain chatbot]( https://arxiv.org/abs/2004.13637) + [Original PARLAI Code](https://parl.ai/projects/recipes/) ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
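For the BlenderBot card above, a minimal chat sketch (added for illustration, not part of the original card); the user utterance is a placeholder:

```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

model_name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(model_name)

utterance = "My friends are cool but they eat too many carbs."
inputs = tokenizer([utterance], return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```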
princeton-nlp/unsup-simcse-bert-base-uncased
6504ae026e02a1464538d443b15e36afc318e034
2021-05-20T02:57:45.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
princeton-nlp
null
princeton-nlp/unsup-simcse-bert-base-uncased
57,366
null
transformers
311
Entry not found
Michau/t5-base-en-generate-headline
f526532f788c45b6b6288286e5ef929fa768ef6a
2021-06-23T03:17:34.000Z
[ "pytorch", "tf", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Michau
null
Michau/t5-base-en-generate-headline
57,353
18
transformers
312
## About the model The model has been trained on a collection of 500k articles with headings. Its purpose is to create a one-line heading suitable for the given article. Sample code with a WikiNews article: ```python import torch from transformers import T5ForConditionalGeneration,T5Tokenizer device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline") tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline") model = model.to(device) article = ''' Very early yesterday morning, the United States President Donald Trump reported he and his wife First Lady Melania Trump tested positive for COVID-19. Officials said the Trumps' 14-year-old son Barron tested negative as did First Family and Senior Advisors Jared Kushner and Ivanka Trump. Trump took to social media, posting at 12:54 am local time (0454 UTC) on Twitter, "Tonight, [Melania] and I tested positive for COVID-19. We will begin our quarantine and recovery process immediately. We will get through this TOGETHER!" Yesterday afternoon Marine One landed on the White House's South Lawn flying Trump to Walter Reed National Military Medical Center (WRNMMC) in Bethesda, Maryland. Reports said both were showing "mild symptoms". Senior administration officials were tested as people were informed of the positive test. Senior advisor Hope Hicks had tested positive on Thursday. Presidential physician Sean Conley issued a statement saying Trump has been given zinc, vitamin D, Pepcid and a daily Aspirin. Conley also gave a single dose of the experimental polyclonal antibodies drug from Regeneron Pharmaceuticals. According to official statements, Trump, now operating from the WRNMMC, is to continue performing his duties as president during a 14-day quarantine. In the event of Trump becoming incapacitated, Vice President Mike Pence could take over the duties of president via the 25th Amendment of the US Constitution. The Pence family all tested negative as of yesterday and there were no changes regarding Pence's campaign events. ''' text = "headline: " + article max_len = 256 encoding = tokenizer.encode_plus(text, return_tensors = "pt") input_ids = encoding["input_ids"].to(device) attention_masks = encoding["attention_mask"].to(device) beam_outputs = model.generate( input_ids = input_ids, attention_mask = attention_masks, max_length = 64, num_beams = 3, early_stopping = True, ) result = tokenizer.decode(beam_outputs[0]) print(result) ``` Result: ```Trump and First Lady Melania Test Positive for COVID-19```
unitary/multilingual-toxic-xlm-roberta
19f5c53459ec9679c675aeead38cab87cf588944
2021-05-06T11:04:34.000Z
[ "pytorch", "xlm-roberta", "text-classification", "arxiv:1703.04009", "arxiv:1905.12516", "transformers" ]
text-classification
false
unitary
null
unitary/multilingual-toxic-xlm-roberta
56,831
5
transformers
313
--- pipeline_tag: "text-classification" --- <div align="center"> **⚠️ Disclaimer:** The huggingface models currently give different results from the detoxify library (see the issue [here](https://github.com/unitaryai/detoxify/issues/15)). For the most up-to-date models we recommend using the models from https://github.com/unitaryai/detoxify # 🙊 Detoxify ## Toxic Comment Classification with ⚡ Pytorch Lightning and 🤗 Transformers ![CI testing](https://github.com/unitaryai/detoxify/workflows/CI%20testing/badge.svg) ![Lint](https://github.com/unitaryai/detoxify/workflows/Lint/badge.svg) </div> ![Examples image](examples.png) ## Description Trained models & code to predict toxic comments on 3 Jigsaw challenges: Toxic comment classification, Unintended Bias in Toxic comments, Multilingual toxic comment classification. Built by [Laura Hanu](https://laurahanu.github.io/) at [Unitary](https://www.unitary.ai/), where we are working to stop harmful content online by interpreting visual content in context. Dependencies: - For inference: - 🤗 Transformers - ⚡ Pytorch lightning - For training you will also need: - Kaggle API (to download data) | Challenge | Year | Goal | Original Data Source | Detoxify Model Name | Top Kaggle Leaderboard Score | Detoxify Score |-|-|-|-|-|-|-| | [Toxic Comment Classification Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) | 2018 | build a multi-headed model that’s capable of detecting different types of toxicity like threats, obscenity, insults, and identity-based hate. | Wikipedia Comments | `original` | 0.98856 | 0.98636 | [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) | 2019 | build a model that recognizes toxicity and minimizes this type of unintended bias with respect to mentions of identities. You'll be using a dataset labeled for identity mentions and optimizing a metric designed to measure unintended bias. | Civil Comments | `unbiased` | 0.94734 | 0.93639 | [Jigsaw Multilingual Toxic Comment Classification](https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification) | 2020 | build effective multilingual models | Wikipedia Comments + Civil Comments | `multilingual` | 0.9536 | 0.91655* *Score not directly comparable since it is obtained on the validation set provided and not on the test set. To be updated when the test labels are made available. It is also noteworthy that the top leaderboard scores have been achieved using model ensembles. The purpose of this library was to build something user-friendly and straightforward to use. ## Limitations and ethical considerations If words that are associated with swearing, insults or profanity are present in a comment, it is likely that it will be classified as toxic, regardless of the tone or the intent of the author, e.g. humorous/self-deprecating. This could present some biases towards already vulnerable minority groups. The intended use of this library is for research purposes, fine-tuning on carefully constructed datasets that reflect real-world demographics, and/or to aid content moderators in flagging harmful content more quickly.
Some useful resources about the risk of different biases in toxicity or hate speech detection are: - [The Risk of Racial Bias in Hate Speech Detection](https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf) - [Automated Hate Speech Detection and the Problem of Offensive Language](https://arxiv.org/pdf/1703.04009.pdf) - [Racial Bias in Hate Speech and Abusive Language Detection Datasets](https://arxiv.org/pdf/1905.12516.pdf) ## Quick prediction The `multilingual` model has been trained on 7 different languages, so it should only be tested on: `english`, `french`, `spanish`, `italian`, `portuguese`, `turkish` or `russian`. ```bash # install detoxify pip install detoxify ``` ```python from detoxify import Detoxify # each model takes in either a string or a list of strings results = Detoxify('original').predict('example text') results = Detoxify('unbiased').predict(['example text 1','example text 2']) results = Detoxify('multilingual').predict(['example text','exemple de texte','texto de ejemplo','testo di esempio','texto de exemplo','örnek metin','пример текста']) # optional to display results nicely (will need to pip install pandas) import pandas as pd print(pd.DataFrame(results).round(5)) ``` For more details check the Prediction section. ## Labels All challenges have a toxicity label. The toxicity labels represent the aggregate ratings of up to 10 annotators according to the following schema: - **Very Toxic** (a very hateful, aggressive, or disrespectful comment that is very likely to make you leave a discussion or give up on sharing your perspective) - **Toxic** (a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective) - **Hard to Say** - **Not Toxic** More information about the labelling schema can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data). ### Toxic Comment Classification Challenge This challenge includes the following labels: - `toxic` - `severe_toxic` - `obscene` - `threat` - `insult` - `identity_hate` ### Jigsaw Unintended Bias in Toxicity Classification This challenge has 2 types of labels: the main toxicity labels and some additional identity labels that represent the identities mentioned in the comments. Only identities with more than 500 examples in the test set (combined public and private) are included during training as additional labels and in the evaluation calculation. - `toxicity` - `severe_toxicity` - `obscene` - `threat` - `insult` - `identity_attack` - `sexual_explicit` Identity labels used: - `male` - `female` - `homosexual_gay_or_lesbian` - `christian` - `jewish` - `muslim` - `black` - `white` - `psychiatric_or_mental_illness` A complete list of all the identity labels available can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data). 
### Jigsaw Multilingual Toxic Comment Classification Since this challenge combines the data from the previous 2 challenges, it includes all labels from above; however, the final evaluation is only on: - `toxicity` ## How to run First, install dependencies: ```bash # clone project git clone https://github.com/unitaryai/detoxify # create virtual env python3 -m venv toxic-env source toxic-env/bin/activate # install project pip install -e detoxify cd detoxify # for training pip install -r requirements.txt ``` ## Prediction Trained models summary: |Model name| Transformer type| Data from |:--:|:--:|:--:| |`original`| `bert-base-uncased` | Toxic Comment Classification Challenge |`unbiased`| `roberta-base`| Unintended Bias in Toxicity Classification |`multilingual`| `xlm-roberta-base`| Multilingual Toxic Comment Classification For a quick prediction, you can run the example script on a comment directly or on a .txt file containing a list of comments. ```bash # load model via torch.hub python run_prediction.py --input 'example' --model_name original # load model from checkpoint path python run_prediction.py --input 'example' --from_ckpt_path model_path # save results to a .csv file python run_prediction.py --input test_set.txt --model_name original --save_to results.csv # to see usage python run_prediction.py --help ``` Checkpoints can be downloaded from the latest release or via the PyTorch Hub API with the following names: - `toxic_bert` - `unbiased_toxic_roberta` - `multilingual_toxic_xlm_r` ```python import torch model = torch.hub.load('unitaryai/detoxify', 'toxic_bert') ``` Importing detoxify in Python: ```python from detoxify import Detoxify results = Detoxify('original').predict('some text') results = Detoxify('unbiased').predict(['example text 1','example text 2']) results = Detoxify('multilingual').predict(['example text','exemple de texte','texto de ejemplo','testo di esempio','texto de exemplo','örnek metin','пример текста']) # to display results nicely import pandas as pd print(pd.DataFrame(results).round(5)) ``` ## Training If you do not already have a Kaggle account: - you need to create one to be able to download the data - go to My Account and click on Create New API Token - this will download a kaggle.json file - make sure this file is located in ~/.kaggle ```bash # create data directory mkdir jigsaw_data cd jigsaw_data # download data kaggle competitions download -c jigsaw-toxic-comment-classification-challenge kaggle competitions download -c jigsaw-unintended-bias-in-toxicity-classification kaggle competitions download -c jigsaw-multilingual-toxic-comment-classification ``` ## Start Training ### Toxic Comment Classification Challenge ```bash python create_val_set.py python train.py --config configs/Toxic_comment_classification_BERT.json ``` ### Unintended Bias in Toxicity Challenge ```bash python train.py --config configs/Unintended_bias_toxic_comment_classification_RoBERTa.json ``` ### Multilingual Toxic Comment Classification This is trained in 2 stages. First, train on all available data, and second, train only on the translated versions of the first challenge. The [translated data](https://www.kaggle.com/miklgr500/jigsaw-train-multilingual-coments-google-api) can be downloaded from Kaggle in French, Spanish, Italian, Portuguese, Turkish, and Russian (the languages available in the test set). 
```bash # stage 1 python train.py --config configs/Multilingual_toxic_comment_classification_XLMR.json # stage 2 python train.py --config configs/Multilingual_toxic_comment_classification_XLMR_stage2.json ``` ### Monitor progress with tensorboard ```bash tensorboard --logdir=./saved ``` ## Model Evaluation ### Toxic Comment Classification Challenge This challenge is evaluated on the mean AUC score of all the labels. ```bash python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv ``` ### Unintended Bias in Toxicity Challenge This challenge is evaluated on a novel bias metric that combines different AUC scores to balance overall performance. More information on this metric can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview/evaluation). ```bash python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv # to get the final bias metric python model_eval/compute_bias_metric.py ``` ### Multilingual Toxic Comment Classification This challenge is evaluated on the AUC score of the main toxic label. ```bash python evaluate.py --checkpoint saved/lightning_logs/checkpoints/example_checkpoint.pth --test_csv test.csv ``` ### Citation ``` @misc{Detoxify, title={Detoxify}, author={Hanu, Laura and {Unitary team}}, howpublished={Github. https://github.com/unitaryai/detoxify}, year={2020} } ```
flair/ner-english-fast
3d3d35790f78a00ef319939b9004209d1d05f788
2021-02-26T15:39:34.000Z
[ "pytorch", "en", "dataset:conll2003", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/ner-english-fast
56,353
3
flair
314
--- tags: - flair - token-classification - sequence-tagger-model language: en datasets: - conll2003 widget: - text: "George Washington went to Washington" --- ## English NER in Flair (fast model) This is the fast 4-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **92,92** (corrected CoNLL-03) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english-fast") # make example sentence sentence = Sentence("George Washington went to Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.9515)] Span [5]: "Washington" [− Labels: LOC (0.992)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_03 from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = CONLL_03() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('glove'), # contextual string embeddings, forward FlairEmbeddings('news-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('news-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-english', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
facebook/wav2vec2-large-960h-lv60-self
54074b1c16f4de6a5ad59affb4caa8f2ea03a119
2022-05-23T16:13:42.000Z
[ "pytorch", "tf", "jax", "wav2vec2", "automatic-speech-recognition", "en", "dataset:librispeech_asr", "arxiv:2010.11430", "arxiv:2006.11477", "transformers", "speech", "audio", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
facebook
null
facebook/wav2vec2-large-960h-lv60-self
56,338
19
transformers
315
--- language: en datasets: - librispeech_asr tags: - speech - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 model-index: - name: wav2vec2-large-960h-lv60 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 1.9 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 3.9 --- # Wav2Vec2-Large-960h-Lv60 + Self-Training [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The large model pretrained and fine-tuned on 960 hours of Libri-Light and Librispeech on 16kHz sampled speech audio. Model was trained with [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model make sure that your speech input is also sampled at 16Khz. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60-self** on LibriSpeech's "clean" and "other" test data. 
```python from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").to("cuda") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self") def map_to_pred(batch): inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest") input_values = inputs.input_values.to("cuda") attention_mask = inputs.attention_mask.to("cuda") with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 1.9 | 3.9 |
bhadresh-savani/bert-base-go-emotion
6ecebb2840243665ab089020504c52e086862848
2021-11-29T10:43:10.000Z
[ "pytorch", "bert", "en", "dataset:go_emotions", "transformers", "text-classification", "go-emotion", "license:apache-2.0" ]
text-classification
false
bhadresh-savani
null
bhadresh-savani/bert-base-go-emotion
55,959
3
transformers
316
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - go-emotion - pytorch license: apache-2.0 datasets: - go_emotions metrics: - Accuracy --- # Bert-Base-Uncased-Go-Emotion ## Model description: BERT base (uncased) fine-tuned for multi-label emotion classification on the GoEmotions dataset. ## Training Parameters: ``` Num examples = 169208 Num Epochs = 3 Instantaneous batch size per device = 16 Total train batch size (w. parallel, distributed & accumulation) = 16 Gradient Accumulation steps = 1 Total optimization steps = 31728 ``` ## Training Output: ``` 'train_loss': 0.12085497042373672, ``` ## Evaluation Output: ``` 'eval_accuracy_thresh': 0.9614765048027039, 'eval_loss': 0.1164659634232521 ``` ## Colab Notebook: [Notebook](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/go_emotion_of_transformers_multilabel_text_classification_v2.ipynb)
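## Example Usage (sketch):

A minimal usage sketch, not taken from the original card: it assumes the checkpoint is a multi-label classifier over the 28 GoEmotions labels and uses the standard Transformers text-classification pipeline. `top_k=None` asks for all label scores (older Transformers releases use `return_all_scores=True` instead), and the exact output nesting can vary slightly across versions.

```python
from transformers import pipeline

# Hypothetical usage sketch; assumes a multi-label head over the GoEmotions labels.
classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/bert-base-go-emotion",
    top_k=None,  # return a score for every emotion label
)

results = classifier(["I am so happy that this finally works!"])

# One list of {'label', 'score'} dicts per input text; print the five strongest emotions.
for item in sorted(results[0], key=lambda x: x["score"], reverse=True)[:5]:
    print(item["label"], round(item["score"], 3))
```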
cross-encoder/quora-distilroberta-base
2f10e5b229ecdb2ca204717607c7635897fd645b
2021-08-05T08:41:31.000Z
[ "pytorch", "jax", "roberta", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
cross-encoder
null
cross-encoder/quora-distilroberta-base
55,355
null
transformers
317
--- license: apache-2.0 --- # Cross-Encoder for Quora Duplicate Questions Detection This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely the two given questions are to be duplicates. Note: The model is not suitable for estimating general semantic similarity between questions; e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as they are not duplicates. ## Usage and Performance Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/quora-distilroberta-base') scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')]) ``` You can also use this model without sentence_transformers, via the plain Transformers ``AutoModel`` classes, as sketched below.
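A hedged sketch of the plain-Transformers route mentioned above (not part of the original card): it assumes the checkpoint exposes a single logit per question pair, so a sigmoid over that logit is treated as the duplicate probability.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cross-encoder/quora-distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

pairs = [("How do I learn Java?", "How can I learn Java quickly?"),
         ("How to learn Java", "How to learn Python")]

# Encode both questions of each pair together, as a cross-encoder expects.
features = tokenizer([q1 for q1, _ in pairs], [q2 for _, q2 in pairs],
                     padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**features).logits           # assumed shape: (batch, 1)
    scores = torch.sigmoid(logits.squeeze(-1))  # assumed mapping to a 0..1 duplicate score

for (q1, q2), score in zip(pairs, scores):
    print(f"{score:.3f}  {q1} / {q2}")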
Narsil/deberta-large-mnli-zero-cls
47eecd0a22df5e7d6ad4d9ff6fa4b6f322db5700
2021-08-23T13:27:24.000Z
[ "pytorch", "deberta", "text-classification", "en", "arxiv:2006.03654", "transformers", "deberta-v1", "deberta-mnli", "license:mit", "zero-shot-classification" ]
zero-shot-classification
false
Narsil
null
Narsil/deberta-large-mnli-zero-cls
54,966
3
transformers
318
--- language: en tags: - deberta-v1 - deberta-mnli tasks: mnli thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit pipeline_tag: zero-shot-classification --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. It outperforms BERT and RoBERTa on majority of NLU tasks with 80GB training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. This is the DeBERTa large model fine-tuned with MNLI task. #### Fine-tuning on NLU tasks We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks. | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B | |---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------| | | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S | | BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- | | RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- | | XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- | | [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 | | [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7| | [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9| |**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** | -------- #### Notes. - <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when start from MNLI fine-tuned models, however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks. 
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp** ```bash cd transformers/examples/text-classification/ export TASK_NAME=mrpc python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\\n--task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \\\n--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16 ``` ### Citation If you find DeBERTa useful for your work, please cite the following paper: ``` latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
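#### Zero-shot usage (sketch)

The card is tagged for zero-shot classification but shows no inference snippet, so here is a minimal, hedged sketch using the standard zero-shot pipeline; the example text and candidate labels are made up for illustration.

```python
from transformers import pipeline

# The MNLI head lets the pipeline score each candidate label as an entailment hypothesis.
classifier = pipeline("zero-shot-classification", model="Narsil/deberta-large-mnli-zero-cls")

result = classifier(
    "The new GPU drastically reduces training time for large language models.",
    candidate_labels=["hardware", "politics", "cooking"],
)
print(result["labels"][0], round(result["scores"][0], 3))  # highest-scoring label first
```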
flair/ner-english
627fd305bf597ea90fa54a50228ccfd4b412caf5
2021-03-02T22:11:28.000Z
[ "pytorch", "en", "dataset:conll2003", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/ner-english
54,507
4
flair
319
--- tags: - flair - token-classification - sequence-tagger-model language: en datasets: - conll2003 widget: - text: "George Washington went to Washington" --- ## English NER in Flair (default model) This is the standard 4-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **93,06** (corrected CoNLL-03) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english") # make example sentence sentence = Sentence("George Washington went to Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.9968)] Span [5]: "Washington" [− Labels: LOC (0.9994)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_03 from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = CONLL_03() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('glove'), # contextual string embeddings, forward FlairEmbeddings('news-forward'), # contextual string embeddings, backward FlairEmbeddings('news-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-english', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
siebert/sentiment-roberta-large-english
6eac71655a474ee4d6d0eee7fa532300c537856d
2022-07-12T18:48:33.000Z
[ "pytorch", "tf", "jax", "roberta", "text-classification", "en", "arxiv:1907.11692", "transformers", "sentiment", "twitter", "reviews", "siebert" ]
text-classification
false
siebert
null
siebert/sentiment-roberta-large-english
52,445
24
transformers
320
--- language: "en" tags: - sentiment - twitter - reviews - siebert --- ## SiEBERT - English-Language Sentiment Classification # Overview This model ("SiEBERT", prefix for "Sentiment in English") is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large) ([Liu et al. 2019](https://arxiv.org/pdf/1907.11692.pdf)). It enables reliable binary sentiment analysis for various types of English-language text. For each instance, it predicts either positive (1) or negative (0) sentiment. The model was fine-tuned and evaluated on 15 data sets from diverse text sources to enhance generalization across different types of texts (reviews, tweets, etc.). Consequently, it outperforms models trained on only one type of text (e.g., movie reviews from the popular SST-2 benchmark) when used on new data as shown below. # Predictions on a data set If you want to predict sentiment for your own data, we provide an example script via [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb). You can load your data to a Google Drive and run the script for free on a Colab GPU. Set-up only takes a few minutes. We suggest that you manually label a subset of your data to evaluate performance for your use case. For performance benchmark values across various sentiment analysis contexts, please refer to our paper ([Hartmann et al. 2022](https://www.sciencedirect.com/science/article/pii/S0167811622000477?via%3Dihub)). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/chrsiebert/sentiment-roberta-large-english/blob/main/sentiment_roberta_prediction_example.ipynb) # Use in a Hugging Face pipeline The easiest way to use the model for single predictions is Hugging Face's [sentiment analysis pipeline](https://huggingface.co/transformers/quicktour.html#getting-started-on-a-task-with-a-pipeline), which only needs a couple lines of code as shown in the following example: ``` from transformers import pipeline sentiment_analysis = pipeline("sentiment-analysis",model="siebert/sentiment-roberta-large-english") print(sentiment_analysis("I love this!")) ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/chrsiebert/sentiment-roberta-large-english/blob/main/sentiment_roberta_pipeline.ipynb) # Use for further fine-tuning The model can also be used as a starting point for further fine-tuning of RoBERTa on your specific data. Please refer to Hugging Face's [documentation](https://huggingface.co/docs/transformers/training) for further details and example code. # Performance To evaluate the performance of our general-purpose sentiment analysis model, we set aside an evaluation set from each data set, which was not used for training. On average, our model outperforms a [DistilBERT-based model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) (which is solely fine-tuned on the popular SST-2 data set) by more than 15 percentage points (78.1 vs. 93.2 percent, see table below). As a robustness check, we evaluate the model in a leave-one-out manner (training on 14 data sets, evaluating on the one left out), which decreases model performance by only about 3 percentage points on average and underscores its generalizability. Model performance is given as evaluation set accuracy in percent. 
|Dataset|DistilBERT SST-2|This model| |---|---|---| |McAuley and Leskovec (2013) (Reviews)|84.7|98.0| |McAuley and Leskovec (2013) (Review Titles)|65.5|87.0| |Yelp Academic Dataset|84.8|96.5| |Maas et al. (2011)|80.6|96.0| |Kaggle|87.2|96.0| |Pang and Lee (2005)|89.7|91.0| |Nakov et al. (2013)|70.1|88.5| |Shamma (2009)|76.0|87.0| |Blitzer et al. (2007) (Books)|83.0|92.5| |Blitzer et al. (2007) (DVDs)|84.5|92.5| |Blitzer et al. (2007) (Electronics)|74.5|95.0| |Blitzer et al. (2007) (Kitchen devices)|80.0|98.5| |Pang et al. (2002)|73.5|95.5| |Speriosu et al. (2011)|71.5|85.5| |Hartmann et al. (2019)|65.5|98.0| |**Average**|**78.1**|**93.2**| # Fine-tuning hyperparameters - learning_rate = 2e-5 - num_train_epochs = 3.0 - warmup_steps = 500 - weight_decay = 0.01 Other values were left at their defaults as listed [here](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments); a hedged sketch wiring these settings into a standard `Trainer` run is shown below. # Citation and contact Please cite [this paper](https://www.sciencedirect.com/science/article/pii/S0167811622000477?via%3Dihub) (Forthcoming in the [IJRM](https://www.journals.elsevier.com/international-journal-of-research-in-marketing)) when you use our model. Feel free to reach out to [christian.siebert@uni-hamburg.de](mailto:christian.siebert@uni-hamburg.de) with any questions or feedback you may have. ``` @article{hartmann2022, title={More than a feeling: Accuracy and Application of Sentiment Analysis}, author={Hartmann, Jochen and Heitmann, Mark and Siebert, Christian and Schamp, Christina}, journal={International Journal of Research in Marketing}, year={2022} } ```
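# Fine-tuning example (sketch)

A hedged illustration of the hyperparameters above wired into a standard `Trainer` setup; the output directory is arbitrary and the datasets are placeholders, none of which come from the original card.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "siebert/sentiment-roberta-large-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Hyperparameters from the card; everything else stays at the Trainer defaults.
args = TrainingArguments(
    output_dir="siebert-finetuned",  # arbitrary placeholder path
    learning_rate=2e-5,
    num_train_epochs=3.0,
    warmup_steps=500,
    weight_decay=0.01,
)

# `train_ds` and `eval_ds` are placeholder tokenized datasets with `input_ids`,
# `attention_mask`, and binary `labels` columns.
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```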
microsoft/infoxlm-large
d616d637f0720deda963cebbfc630657d2b7d3ae
2021-08-04T11:43:05.000Z
[ "pytorch", "xlm-roberta", "fill-mask", "arxiv:2007.07834", "transformers", "autotrain_compatible" ]
fill-mask
false
microsoft
null
microsoft/infoxlm-large
52,422
2
transformers
321
# InfoXLM **InfoXLM** (NAACL 2021, [paper](https://arxiv.org/pdf/2007.07834.pdf), [repo](https://github.com/microsoft/unilm/tree/master/infoxlm), [model](https://huggingface.co/microsoft/infoxlm-base)) InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training. **MD5** ``` 05b95b7d977450b364f8ea3269391953 config.json c19438359fed6d36b0c1bbb107929579 pytorch_model.bin bf25eb5120ad92ef5c7d8596b5dc4046 sentencepiece.bpe.model eedbd60a7268b9fc45981b849664f747 tokenizer.json ``` **BibTeX** ``` @inproceedings{chi-etal-2021-infoxlm, title = "{I}nfo{XLM}: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training", author={Chi, Zewen and Dong, Li and Wei, Furu and Yang, Nan and Singhal, Saksham and Wang, Wenhui and Song, Xia and Mao, Xian-Ling and Huang, Heyan and Zhou, Ming}, booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.naacl-main.280", doi = "10.18653/v1/2021.naacl-main.280", pages = "3576--3588",} ```
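**Example usage (sketch)**

The card lists checksums but no loading code; the hedged sketch below assumes the checkpoint loads through the standard XLM-RoBERTa classes in Transformers (it ships the XLM-R sentencepiece files listed above) and simply extracts mean-pooled sentence representations.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: microsoft/infoxlm-large is loadable via the XLM-RoBERTa architecture.
tokenizer = AutoTokenizer.from_pretrained("microsoft/infoxlm-large")
model = AutoModel.from_pretrained("microsoft/infoxlm-large")

sentences = ["InfoXLM learns cross-lingual representations.",
             "InfoXLM apprend des représentations multilingues."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)

# Mean-pool over non-padding tokens to get one vector per sentence.
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)
```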
cl-tohoku/bert-base-japanese-char
6aa4c7bc39337858fee3e70f258edeada2e308ea
2021-09-23T13:45:29.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ja", "dataset:wikipedia", "transformers", "license:cc-by-sa-4.0", "autotrain_compatible" ]
fill-mask
false
cl-tohoku
null
cl-tohoku/bert-base-japanese-char
52,290
4
transformers
322
--- language: ja license: cc-by-sa-4.0 datasets: - wikipedia widget: - text: 仙台は「[MASK]の都」と呼ばれている。 --- # BERT base Japanese (character tokenization) This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language. This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization. The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0). ## Model architecture The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads. ## Training Data The model is trained on Japanese Wikipedia as of September 1, 2019. To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles. The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences. ## Tokenization The texts are first tokenized by [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into characters. The vocabulary size is 4000. ## Training The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps. ## Licenses The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/). ## Acknowledgments For training models, we used Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
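## Example usage (sketch)

A hedged usage sketch that is not part of the original card: the tokenizer performs MeCab word segmentation before character splitting, so the `fugashi` and `ipadic` packages are assumed to be required alongside Transformers. The masked sentence is the widget example from the metadata.

```python
# pip install transformers fugashi ipadic   (MeCab bindings assumed to be needed)
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="cl-tohoku/bert-base-japanese-char")
mask = fill_mask.tokenizer.mask_token

# Widget example: "Sendai is called the city of [MASK]."
for prediction in fill_mask(f"仙台は「{mask}の都」と呼ばれている。")[:3]:
    print(prediction["token_str"], round(prediction["score"], 4))
```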
vinai/bertweet-covid19-base-uncased
fd00afc23cbc3c3dba662f913d549453f91cb4d4
2022-06-08T04:41:56.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
vinai
null
vinai/bertweet-covid19-base-uncased
52,157
1
transformers
323
# <a name="introduction"></a> BERTweet: A pre-trained language model for English Tweets BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure. The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic. The general architecture and experimental results of BERTweet can be found in our [paper](https://aclanthology.org/2020.emnlp-demos.2/): @inproceedings{bertweet, title = {{BERTweet: A pre-trained language model for English Tweets}}, author = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen}, booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, pages = {9--14}, year = {2020} } **Please CITE** our paper when BERTweet is used to help produce published results or is incorporated into other software. For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)!
hf-internal-testing/tiny-random-vit
1870c862512fd2c5c46337626d3fec558aa816f3
2022-03-02T15:34:35.000Z
[ "pytorch", "tf", "vit", "image-classification", "transformers" ]
image-classification
false
hf-internal-testing
null
hf-internal-testing/tiny-random-vit
52,105
null
transformers
324
Entry not found
distilbert-base-german-cased
06b1dc5ba050ddbf462d060df38f906eedb31b01
2022-06-03T09:46:31.000Z
[ "pytorch", "distilbert", "fill-mask", "de", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
distilbert-base-german-cased
51,892
4
transformers
325
--- language: de license: apache-2.0 --- ## distilbert-base-german-cased
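The card is empty apart from the title, so here is a hedged, minimal masked-language-modelling sketch; the German example sentence is illustrative only.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilbert-base-german-cased")
mask = fill_mask.tokenizer.mask_token

# Likely top prediction: "Hauptstadt" (capital), though outputs are not guaranteed.
for prediction in fill_mask(f"Berlin ist die {mask} von Deutschland.")[:3]:
    print(prediction["token_str"], round(prediction["score"], 4))
```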
deepset/bert-base-cased-squad2
3eb2ba4d2ff1903c1b71e74a8f3640eef57da82d
2022-07-25T11:35:36.000Z
[ "pytorch", "jax", "bert", "question-answering", "en", "dataset:squad_v2", "transformers", "license:cc-by-4.0", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/bert-base-cased-squad2
51,199
9
transformers
326
--- language: en datasets: - squad_v2 license: cc-by-4.0 --- This is a BERT base cased model for extractive question answering, fine-tuned on SQuAD 2.0.
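A hedged usage sketch to go with the one-line description above; the question and context are illustrative only, and the standard question-answering pipeline is assumed to work with this checkpoint.

```python
from transformers import pipeline

model_name = "deepset/bert-base-cased-squad2"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

result = qa(
    question="What was the model fine-tuned on?",
    context="deepset/bert-base-cased-squad2 is a BERT base cased model that was fine-tuned "
            "on SQuAD 2.0 for extractive question answering.",
)
print(result["answer"], round(result["score"], 3))
```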
google/byt5-small
ce8f3a48ed7676af36476a01fb01f95ea529599c
2022-05-27T15:06:27.000Z
[ "pytorch", "tf", "jax", "t5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:1907.06292", "arxiv:2105.13626", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/byt5-small
51,139
11
transformers
327
--- language: - multilingual - af - am - ar - az - be - bg - bn - ca - ceb - co - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - fy - ga - gd - gl - gu - ha - haw - hi - hmn - ht - hu - hy - ig - is - it - iw - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - lv - mg - mi - mk - ml - mn - mr - ms - mt - my - ne - nl - no - ny - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - sm - sn - so - sq - sr - st - su - sv - sw - ta - te - tg - th - tr - uk - und - ur - uz - vi - xh - yi - yo - zh - zu datasets: - mc4 license: apache-2.0 --- # ByT5 - Small ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small). ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292). Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Example Inference ByT5 works on raw UTF-8 bytes and can be used without a tokenizer: ```python from transformers import T5ForConditionalGeneration import torch model = T5ForConditionalGeneration.from_pretrained('google/byt5-small') input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3 # add 3 for special tokens labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3 # add 3 for special tokens loss = model(input_ids, labels=labels).loss # forward pass ``` For batched inference & training it is however recommended using a tokenizer class for padding: ```python from transformers import T5ForConditionalGeneration, AutoTokenizer model = T5ForConditionalGeneration.from_pretrained('google/byt5-small') tokenizer = AutoTokenizer.from_pretrained('google/byt5-small') model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt") labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids loss = model(**model_inputs, labels=labels).loss # forward pass ``` ## Abstract Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. 
In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/ByT5.png)
sshleifer/tiny-mbart
9d6b9b3b2774b464bb6b14eda4efe30f82846136
2021-08-26T10:55:11.000Z
[ "pytorch", "tf", "mbart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
sshleifer
null
sshleifer/tiny-mbart
50,936
4
transformers
328
Entry not found
monologg/bert-base-cased-goemotions-original
13c44c849132f82bb61188d909a574badffb27a3
2021-05-19T23:48:33.000Z
[ "pytorch", "bert", "transformers" ]
null
false
monologg
null
monologg/bert-base-cased-goemotions-original
50,803
2
transformers
329
Entry not found
dmis-lab/biobert-base-cased-v1.2
67c9c25b46986521ca33df05d8540da1210b3256
2021-06-24T02:54:58.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
dmis-lab
null
dmis-lab/biobert-base-cased-v1.2
50,666
4
transformers
330
Entry not found
deepset/sentence_bert
496b9b39b227f03c4053a9f5fdac1616773b5112
2021-05-19T15:34:03.000Z
[ "pytorch", "jax", "bert", "transformers", "license:apache-2.0" ]
null
false
deepset
null
deepset/sentence_bert
50,503
5
transformers
331
--- license: apache-2.0 --- This is an upload of the bert-base-nli-stsb-mean-tokens pretrained model from the Sentence Transformers Repo (https://github.com/UKPLab/sentence-transformers)
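A hedged usage sketch, not part of the original note: since the repository holds only the raw transformer weights, loading it through the sentence-transformers package is assumed to fall back to its default mean-pooling wrapper, which matches the mean-tokens pooling of the original bert-base-nli-stsb-mean-tokens model; a recent sentence-transformers release is also assumed.

```python
from sentence_transformers import SentenceTransformer, util

# Assumption: the library wraps the raw BERT weights with mean pooling by default.
model = SentenceTransformer("deepset/sentence_bert")

sentences = ["A man is eating food.", "Someone is having a meal."]
embeddings = model.encode(sentences, convert_to_tensor=True)

print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the two sentences
```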
flair/ner-english-ontonotes-large
4ffb3596f4359f0c8799ea15bbf5dbb3b0915a53
2021-05-08T15:35:21.000Z
[ "pytorch", "en", "dataset:ontonotes", "arxiv:2011.06993", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/ner-english-ontonotes-large
50,495
26
flair
332
--- tags: - flair - token-classification - sequence-tagger-model language: en datasets: - ontonotes widget: - text: "On September 1st George won 1 dollar while watching Game of Thrones." --- ## English NER in Flair (Ontonotes large model) This is the large 18-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **90.93** (Ontonotes) Predicts 18 tags: | **tag** | **meaning** | |---------------------------------|-----------| | CARDINAL | cardinal value | | DATE | date value | | EVENT | event name | | FAC | building name | | GPE | geo-political entity | | LANGUAGE | language name | | LAW | law name | | LOC | location name | | MONEY | money name | | NORP | affiliation | | ORDINAL | ordinal value | | ORG | organization name | | PERCENT | percent value | | PERSON | person name | | PRODUCT | product name | | QUANTITY | quantity value | | TIME | time value | | WORK_OF_ART | name of work of art | Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/). --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english-ontonotes-large") # make example sentence sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [2,3]: "September 1st" [− Labels: DATE (1.0)] Span [4]: "George" [− Labels: PERSON (1.0)] Span [6,7]: "1 dollar" [− Labels: MONEY (1.0)] Span [10,11,12]: "Game of Thrones" [− Labels: WORK_OF_ART (1.0)] ``` So, the entities "*September 1st*" (labeled as a **date**), "*George*" (labeled as a **person**), "*1 dollar*" (labeled as a **money**) and "Game of Thrones" (labeled as a **work of art**) are found in the sentence "*On September 1st George Washington won 1 dollar while watching Game of Thrones*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize fine-tuneable transformer embeddings WITH document context from flair.embeddings import TransformerWordEmbeddings embeddings = TransformerWordEmbeddings( model='xlm-roberta-large', layers="-1", subtoken_pooling="first", fine_tune=True, use_context=True, ) # 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection) from flair.models import SequenceTagger tagger = SequenceTagger( hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type='ner', use_crf=False, use_rnn=False, reproject_embeddings=False, ) # 6. 
initialize trainer with AdamW optimizer import torch from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW) # 7. run training with XLM parameters (20 epochs, small LR) from torch.optim.lr_scheduler import OneCycleLR trainer.train('resources/taggers/ner-english-ontonotes-large', learning_rate=5.0e-6, mini_batch_size=4, mini_batch_chunk_size=1, max_epochs=20, scheduler=OneCycleLR, embeddings_storage_mode='none', weight_decay=0., ) ``` --- ### Cite Please cite the following paper when using this model. ``` @misc{schweter2020flert, title={FLERT: Document-Level Features for Named Entity Recognition}, author={Stefan Schweter and Alan Akbik}, year={2020}, eprint={2011.06993}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
facebook/opt-125m
934b6a077313f3ee660a918a95313f5d0b136c5a
2022-06-22T09:52:32.000Z
[ "pytorch", "tf", "jax", "opt", "text-generation", "en", "arxiv:2205.01068", "arxiv:2005.14165", "transformers", "license:other" ]
text-generation
false
facebook
null
facebook/opt-125m
50,484
13
transformers
333
--- language: en inference: false tags: - text-generation - opt license: other commercial: false --- # OPT : Open Pre-trained Transformer Language Models OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI. **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf). Content from **this** model card has been written by the Hugging Face team. ## Intro To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068) > Large language models trained on massive text collections have shown surprising emergent > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public > can interact with these models through paid APIs, full model access is currently limited to only a > few highly resourced labs. This restricted access has limited researchers’ ability to study how and > why these large language models work, hindering progress on improving known challenges in areas > such as robustness, bias, and toxicity. > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M > to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the > collective research community as a whole, which is only possible when models are available for study. ## Model description OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective. OPT belongs to the same family of decoder-only models like [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modedling objective. For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read the [official paper](https://arxiv.org/abs/2205.01068). ## Intended uses & limitations The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation. In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt). ### How to use You can use this model directly with a pipeline for text generation. ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model="facebook/opt-125m") >>> generator("Hello, I'm am conscious and") [{'generated_text': 'Hello, I am conscious and aware of the fact that I am a woman. I am aware of'}] ``` By default, generation is deterministic. 
To use top-k sampling, set `do_sample` to `True`. ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-125m", do_sample=True) >>> generator("Hello, I'm am conscious and") [{'generated_text': 'Hello, I am conscious and active member of the Khaosan Group, a private, self'}] ``` ### Limitations and bias As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral, the model is strongly biased: > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. This bias will also affect all fine-tuned versions of this model. ## Training data The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included. - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b) The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally to each dataset's size in the pretraining corpus. The dataset might contain offensive content, as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety. ### Collection process The dataset was collected from the internet, and went through classic data processing algorithms and re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or *This ebook by Project Gutenberg.* ## Training procedure ### Preprocessing The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens. The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training. ### BibTeX entry and citation info ```bibtex @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
sberbank-ai/ruRoberta-large
29b46edec511391c384dfd0bbd3892cb72495c5f
2021-09-21T19:45:07.000Z
[ "pytorch", "roberta", "fill-mask", "ru", "transformers", "PyTorch", "Transformers", "autotrain_compatible" ]
fill-mask
false
sberbank-ai
null
sberbank-ai/ruRoberta-large
50,365
11
transformers
334
--- language: - ru tags: - PyTorch - Transformers thumbnail: "https://github.com/sberbank-ai/model-zoo" --- # ruRoberta-large The model was trained by the [SberDevices](https://sberdevices.ru/) team. * Task: `mask filling` * Type: `encoder` * Tokenizer: `bbpe` * Dict size: `50 257` * Num Parameters: `355 M` * Training Data Volume: `250 GB`
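A hedged mask-filling sketch to accompany the specification above; the Russian example sentence is illustrative, and the mask token is read from the tokenizer rather than hard-coded.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sberbank-ai/ruRoberta-large")
mask = fill_mask.tokenizer.mask_token  # avoids assuming the exact mask string

for prediction in fill_mask(f"Москва является {mask} России.")[:3]:
    print(prediction["token_str"], round(prediction["score"], 4))
```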
sentence-transformers/distiluse-base-multilingual-cased-v1
756c7aa7d57c27bd1c71a483367c53966465f450
2022-06-15T20:11:01.000Z
[ "pytorch", "tf", "distilbert", "feature-extraction", "multilingual", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/distiluse-base-multilingual-cased-v1
49,802
10
sentence-transformers
335
--- pipeline_tag: sentence-similarity language: multilingual license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/distiluse-base-multilingual-cased-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distiluse-base-multilingual-cased-v1) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
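Beyond extracting embeddings, a common use of this model is scoring semantic similarity across languages. The sketch below is illustrative and not part of the original card; it assumes a sentence-transformers version that provides `util.cos_sim`.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/distiluse-base-multilingual-cased-v1")

# Because the model is multilingual, paraphrases in different languages land close together
# in the shared 512-dimensional embedding space.
embeddings = model.encode([
    "The cat sits on the mat.",          # English
    "Die Katze sitzt auf der Matte.",    # German paraphrase
    "Stock markets fell sharply today.", # unrelated sentence
])

print(util.cos_sim(embeddings[0], embeddings[1]))  # high score (cross-lingual paraphrase)
print(util.cos_sim(embeddings[0], embeddings[2]))  # noticeably lower score
```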
allenai/led-base-16384
25756ed025a94fdf2bc4987af86a58fd999047ec
2021-01-11T14:51:01.000Z
[ "pytorch", "tf", "led", "text2text-generation", "en", "arxiv:2004.05150", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
allenai
null
allenai/led-base-16384
49,616
7
transformers
336
---
language: en
license: apache-2.0
---

## Introduction

This is [AllenAI's Longformer Encoder-Decoder (LED)](https://github.com/allenai/longformer#longformer).

As described in [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) by Iz Beltagy, Matthew E. Peters, and Arman Cohan, *led-base-16384* was initialized from [*bart-base*](https://huggingface.co/facebook/bart-base), since both models share the exact same architecture. To be able to process 16K tokens, *bart-base*'s position embedding matrix was simply copied 16 times.

This model is especially interesting for long-range summarization and question answering.

## Fine-tuning for a downstream task

[This notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) shows how *led-base-16384* can effectively be fine-tuned on a downstream task.
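For quick experimentation before fine-tuning, the checkpoint can be loaded directly. The sketch below is illustrative only (the base model has no summarization fine-tuning, so generated text is not expected to be a good summary yet); it follows the LED convention of putting global attention on the first token.

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

long_document = "Replace this with a document of up to 16384 tokens ..."
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=16384)

# LED combines local windowed attention with task-specific global attention;
# placing global attention on the first token is the usual convention.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

generated = model.generate(
    inputs["input_ids"],
    global_attention_mask=global_attention_mask,
    max_length=256,
    num_beams=4,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```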
sshleifer/tiny-distilbert-base-cased-distilled-squad
33a976c7ab7d41310ea4063d311dbf66c8aaa001
2020-05-14T16:54:23.000Z
[ "pytorch", "tf", "distilbert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
sshleifer
null
sshleifer/tiny-distilbert-base-cased-distilled-squad
49,350
null
transformers
337
Entry not found
nlpaueb/bert-base-greek-uncased-v1
ec2b8f88dd215b5246f2f850413d5bff90d7540d
2022-03-02T16:32:57.000Z
[ "pytorch", "tf", "jax", "bert", "pretraining", "el", "arxiv:2008.12014", "transformers", "fill-mask" ]
fill-mask
false
nlpaueb
null
nlpaueb/bert-base-greek-uncased-v1
49,226
6
transformers
338
--- language: el pipeline_tag: fill-mask thumbnail: https://github.com/nlpaueb/GreekBERT/raw/master/greek-bert-logo.png widget: - text: "Σήμερα είναι μια [MASK] μέρα." --- # GreekBERT A Greek version of BERT pre-trained language model. <img src="https://github.com/nlpaueb/GreekBERT/raw/master/greek-bert-logo.png" width="600"/> ## Pre-training corpora The pre-training corpora of `bert-base-greek-uncased-v1` include: * The Greek part of [Wikipedia](https://el.wikipedia.org/wiki/Βικιπαίδεια:Αντίγραφα_της_βάσης_δεδομένων), * The Greek part of [European Parliament Proceedings Parallel Corpus](https://www.statmt.org/europarl/), and * The Greek part of [OSCAR](https://traces1.inria.fr/oscar/), a cleansed version of [Common Crawl](https://commoncrawl.org). Future release will also include: * The entire corpus of Greek legislation, as published by the [National Publication Office](http://www.et.gr), * The entire corpus of EU legislation (Greek translation), as published in [Eur-Lex](https://eur-lex.europa.eu/homepage.html?locale=en). ## Pre-training details * We trained BERT using the official code provided in Google BERT's GitHub repository (https://github.com/google-research/bert).* We then used [Hugging Face](https://huggingface.co)'s [Transformers](https://github.com/huggingface/transformers) conversion script to convert the TF checkpoint and vocabulary in the desired format in order to be able to load the model in two lines of code for both PyTorch and TF2 users. * We released a model similar to the English `bert-base-uncased` model (12-layer, 768-hidden, 12-heads, 110M parameters). * We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4. * We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us! \* You can still have access to the original TensorFlow checkpoints from this [Google Drive folder](https://drive.google.com/drive/folders/1ZjlaE4nvdtgqXiVBTVHCF5I9Ff8ZmztE?usp=sharing). ## Requirements We published `bert-base-greek-uncased-v1` as part of [Hugging Face](https://huggingface.co)'s [Transformers](https://github.com/huggingface/transformers) repository. So, you need to install the transformers library through pip along with PyTorch or Tensorflow 2. ``` pip install transformers pip install (torch|tensorflow) ``` ## Pre-process text (Deaccent - Lower) **NOTICE:** Preprocessing is now natively supported by the default tokenizer. No need to include the following code. In order to use `bert-base-greek-uncased-v1`, you have to pre-process texts to lowercase letters and remove all Greek diacritics. ```python import unicodedata def strip_accents_and_lowercase(s): return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn').lower() accented_string = "Αυτή είναι η Ελληνική έκδοση του BERT." unaccented_string = strip_accents_and_lowercase(accented_string) print(unaccented_string) # αυτη ειναι η ελληνικη εκδοση του bert. 
``` ## Load Pretrained Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-greek-uncased-v1") model = AutoModel.from_pretrained("nlpaueb/bert-base-greek-uncased-v1") ``` ## Use Pretrained Model as a Language Model ```python import torch from transformers import * # Load model and tokenizer tokenizer_greek = AutoTokenizer.from_pretrained('nlpaueb/bert-base-greek-uncased-v1') lm_model_greek = AutoModelWithLMHead.from_pretrained('nlpaueb/bert-base-greek-uncased-v1') # ================ EXAMPLE 1 ================ text_1 = 'O ποιητής έγραψε ένα [MASK] .' # EN: 'The poet wrote a [MASK].' input_ids = tokenizer_greek.encode(text_1) print(tokenizer_greek.convert_ids_to_tokens(input_ids)) # ['[CLS]', 'o', 'ποιητης', 'εγραψε', 'ενα', '[MASK]', '.', '[SEP]'] outputs = lm_model_greek(torch.tensor([input_ids]))[0] print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 5].max(0)[1].item())) # the most plausible prediction for [MASK] is "song" # ================ EXAMPLE 2 ================ text_2 = 'Είναι ένας [MASK] άνθρωπος.' # EN: 'He is a [MASK] person.' input_ids = tokenizer_greek.encode(text_2) print(tokenizer_greek.convert_ids_to_tokens(input_ids)) # ['[CLS]', 'ειναι', 'ενας', '[MASK]', 'ανθρωπος', '.', '[SEP]'] outputs = lm_model_greek(torch.tensor([input_ids]))[0] print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 3].max(0)[1].item())) # the most plausible prediction for [MASK] is "good" # ================ EXAMPLE 3 ================ text_3 = 'Είναι ένας [MASK] άνθρωπος και κάνει συχνά [MASK].' # EN: 'He is a [MASK] person he does frequently [MASK].' input_ids = tokenizer_greek.encode(text_3) print(tokenizer_greek.convert_ids_to_tokens(input_ids)) # ['[CLS]', 'ειναι', 'ενας', '[MASK]', 'ανθρωπος', 'και', 'κανει', 'συχνα', '[MASK]', '.', '[SEP]'] outputs = lm_model_greek(torch.tensor([input_ids]))[0] print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 8].max(0)[1].item())) # the most plausible prediction for the second [MASK] is "trips" ``` ## Evaluation on downstream tasks For detailed results read the article: GREEK-BERT: The Greeks visiting Sesame Street. John Koutsikakis, Ilias Chalkidis, Prodromos Malakasiotis and Ion Androutsopoulos. In the Proceedings of the 11th Hellenic Conference on Artificial Intelligence (SETN 2020). Held Online. 2020. (https://arxiv.org/abs/2008.12014) ### Named Entity Recognition with Greek NER dataset | Model name | Micro F1 | | ------------------- | ------------------------------------ | BILSTM-CNN-CRF (Ma and Hovy, 2016) | 76.4 ± 2.07 M-BERT-UNCASED (Devlin et al., 2019) | 81.5 ± 1.77 M-BERT-CASED (Devlin et al., 2019)| 82.1 ± 1.35 XLM-R (Conneau et al., 2020)| 84.8 ± 1.50 GREEK-BERT (ours) | **85.7 ± 1.00** ### Natural Language Inference with XNLI | Model name | Accuracy | | ------------------- | ------------------------------------ | DAM (Parikh et al., 2016) | 68.5 ± 1.71 M-BERT-UNCASED (Devlin et al., 2019) | 73.9 ± 0.64 M-BERT-CASED (Devlin et al., 2019) | 73.5 ± 0.49 XLM-R (Conneau et al., 2020) | 77.3 ± 0.41 GREEK-BERT (ours) | **78.6 ± 0.62** ## Author The model has been officially released with the article "GREEK-BERT: The Greeks visiting Sesame Street. John Koutsikakis, Ilias Chalkidis, Prodromos Malakasiotis and Ion Androutsopoulos. In the Proceedings of the 11th Hellenic Conference on Artificial Intelligence (SETN 2020). Held Online. 2020" (https://arxiv.org/abs/2008.12014). 
If you use the model, please cite the following: ``` @inproceedings{greek-bert, author = {Koutsikakis, John and Chalkidis, Ilias and Malakasiotis, Prodromos and Androutsopoulos, Ion}, title = {GREEK-BERT: The Greeks Visiting Sesame Street}, year = {2020}, isbn = {9781450388788}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3411408.3411440}, booktitle = {11th Hellenic Conference on Artificial Intelligence}, pages = {110–117}, numpages = {8}, location = {Athens, Greece}, series = {SETN 2020} } ``` ## About Us [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts. The group's current research interests include: * question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering, * natural language generation from databases and ontologies, especially Semantic Web ontologies, text classification, including filtering spam and abusive content, * information extraction and opinion mining, including legal text analytics and sentiment analysis, * natural language processing tools for Greek, for example parsers and named-entity recognizers, machine learning in natural language processing, especially deep learning. The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business. [Ilias Chalkidis](https://iliaschalkidis.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) | Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
IlyaGusev/mbart_ru_sum_gazeta
3cba0b42de306923e580d5b8e266cc33b5cb289a
2022-07-13T15:35:33.000Z
[ "pytorch", "mbart", "text2text-generation", "ru", "dataset:IlyaGusev/gazeta", "arxiv:2006.11063", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible" ]
summarization
false
IlyaGusev
null
IlyaGusev/mbart_ru_sum_gazeta
48,196
11
transformers
339
--- language: - ru tags: - summarization - mbart datasets: - IlyaGusev/gazeta license: apache-2.0 inference: parameters: no_repeat_ngram_size: 4 widget: - text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо." example_title: "Википедия" - text: "С 1 сентября в России вступают в силу поправки в закон «О банкротстве» — теперь должники смогут освобождаться от непосильных обязательств во внесудебном порядке, если сумма задолженности составляет не менее 50 тыс. рублей и не превышает 500 тыс. рублей без учета штрафов, пени, процентов за просрочку платежа и прочих имущественных или финансовых санкций. У физлиц и индивидуальных предпринимателей появилась возможность пройти процедуру банкротства без участия суда и финансового управляющего — достаточно подать соответствующее заявление через МФЦ. Сумму задолженности и список всех известных заявителю кредиторов нужно предоставить самостоятельно. Если все условия соблюдены, сведения внесут в Единый федеральный реестр в течение трех рабочих дней. При этом на момент подачи заявления в отношении заявителя должно быть окончено исполнительное производство с возвращением исполнительного документа взыскателю. Это значит, что у потенциального банкрота не должно быть имущества, которое можно взыскать. Кроме того, в отношении гражданина не должно быть возбуждено другое исполнительное производство. В период всей процедуры заявитель не сможет брать займы, кредиты, выдавать поручительства, совершать иные обеспечительные сделки. Внесудебное банкротство будет длиться шесть месяцев, в течение которых также будет действовать мораторий на удовлетворение требований кредиторов, отмеченных в заявлении должника, и мораторий об уплате обязательных платежей. Кроме того, прекращается начисление неустоек и иных финансовых санкций; имущественные взыскания (кроме алиментов) также будут приостановлены. По завершению процедуры заявителя освободят от дальнейшего выполнения требований кредиторов, указанных в заявлении о признании его банкротом, а эта задолженность признается безнадежной. В прошлом месяце стало известно, что за первое полугодие 2020 года российские суды признали банкротами 42,7 тыс. граждан (в том числе индивидуальных предпринимателей) — по данным единого реестра «Федресурс», это на 47,2% больше показателя аналогичного периода 2019 года. Рост числа обанкротившихся граждан во втором квартале по сравнению с первым замедлился — такая динамика обусловлена тем, что в период ограничений с 19 марта по 11 мая суды редко рассматривали банкротные дела компаний и меньше, чем обычно, в отношении граждан, объяснял руководитель проекта «Федресурс» Алексей Юхнин. Он прогнозирует, что во втором полугодии мы увидим рост показателя, когда суды рассмотрят все дела, что не смогли ранее в режиме ограничений. 
По его данным, уже в июне число личных банкротств выросло до 11,5 тыс., что в два раза превышает показатель аналогичного периода 2019 года." example_title: "Новости" - text: "Актуальность проблемы. Электронная информация играет все большую роль во всех сферах жизни современного общества. В последние годы объем научно-технической текстовой информации в электронном виде возрос настолько, что возникает угроза обесценивания этой информации в связи с трудностями поиска необходимых сведений среди множества доступных текстов. Развитие информационных ресурсов Интернет многократно усугубило проблему информационной перегрузки. В этой ситуации особенно актуальными становятся методы автоматизации реферирования текстовой информации, то есть методы получения сжатого представления текстовых документов–рефератов (аннотаций). Постановка проблемы автоматического реферирования текста и соответственно попытки ее решения с использованием различных подходов предпринимались многими исследователями. История применения вычислительной техники для реферирования насчитывает уже более 50 лет и связана с именами таких исследователей, как Г.П. Лун, В.Е. Берзон, И.П. Cевбо, Э.Ф. Скороходько, Д.Г. Лахути, Р.Г. Пиотровский и др. За эти годы выработаны многочисленные подходы к решению данной проблемы, которые достаточно четко подразделяются на два направления: автоматическое реферирование, основанное на экстрагировании из первичных документов с помощью определенных формальных признаков «наиболее информативных» фраз (фрагментов), совокупность которых образует некоторый экстракт; автоматическое реферирование, основанное на выделении из текстов с помощью специальных информационных языков наиболее существенной информации и порождении новых текстов (рефератов), содержательно обобщающих первичные документы." example_title: "Научная статья" --- # MBARTRuSumGazeta ## Model description This is a ported version of [fairseq model](https://www.dropbox.com/s/fijtntnifbt9h0k/gazeta_mbart_v2_fairseq.tar.gz). For more details, please see [Dataset for Automatic Summarization of Russian News](https://arxiv.org/abs/2006.11063). ## Intended uses & limitations #### How to use Colab: [link](https://colab.research.google.com/drive/1wdo_nPZPk6dWAn1J8nGx4Z5Ef82jCCob) ```python from transformers import MBartTokenizer, MBartForConditionalGeneration model_name = "IlyaGusev/mbart_ru_sum_gazeta" tokenizer = MBartTokenizer.from_pretrained(model_name) model = MBartForConditionalGeneration.from_pretrained(model_name) article_text = "..." 
input_ids = tokenizer(
    [article_text],
    max_length=600,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)["input_ids"]

output_ids = model.generate(
    input_ids=input_ids,
    no_repeat_ngram_size=4
)[0]

summary = tokenizer.decode(output_ids, skip_special_tokens=True)
print(summary)
```

#### Limitations and bias

- The model should work well with Gazeta.ru articles, but for any other agencies it can suffer from domain shift

## Training data

- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)

## Training procedure

- Fairseq training script: [train.sh](https://github.com/IlyaGusev/summarus/blob/master/external/bart_scripts/train.sh)
- Porting: [Colab link](https://colab.research.google.com/drive/13jXOlCpArV-lm4jZQ0VgOpj6nFBYrLAr)

## Eval results

* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v1 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**

| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **32.4** | 14.3 | 28.0 | 39.7 | **26.4** | 12.1 | 371 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 32.2 | **14.4** | **28.1** | **39.8** | 25.7 | **12.3** | 330 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 26.2 | 7.7 | 21.7 | 33.8 | 18.2 | 4.3 | 244 |

* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v2 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**

| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **28.7** | **11.1** | 24.4 | **37.3** | **22.7** | **9.4** | 373 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 28.6 | **11.1** | **24.5** | 37.2 | 22.0 | **9.4** | 331 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 24.1 | 6.5 | 19.8 | 32.1 | 16.3 | 3.6 | 242 |

Predicting all summaries:

```python
import json
import torch
from transformers import MBartTokenizer, MBartForConditionalGeneration
from datasets import load_dataset


def gen_batch(inputs, batch_size):
    batch_start = 0
    while batch_start < len(inputs):
        yield inputs[batch_start: batch_start + batch_size]
        batch_start += batch_size


def predict(
    model_name,
    input_records,
    output_file,
    max_source_tokens_count=600,
    batch_size=4
):
    device = "cuda" if torch.cuda.is_available() else "cpu"

    tokenizer = MBartTokenizer.from_pretrained(model_name)
    model = MBartForConditionalGeneration.from_pretrained(model_name).to(device)

    predictions = []
    # Batch over the input records and summarize the "text" field of each record.
    for batch in gen_batch(input_records, batch_size):
        texts = [r["text"] for r in batch]
        input_ids = tokenizer(
            texts,
            return_tensors="pt",
            padding="max_length",
            truncation=True,
            max_length=max_source_tokens_count
        )["input_ids"].to(device)

        output_ids = model.generate(
            input_ids=input_ids,
            no_repeat_ngram_size=4
        )
        summaries = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
        for s in summaries:
            print(s)
        predictions.extend(summaries)

    with open(output_file, "w") as w:
        for p in predictions:
            w.write(p.strip().replace("\n", " ") + "\n")


gazeta_test = load_dataset('IlyaGusev/gazeta', script_version="v1.0")["test"]
predict("IlyaGusev/mbart_ru_sum_gazeta", list(gazeta_test), "mbart_predictions.txt") ``` Evaluation: https://github.com/IlyaGusev/summarus/blob/master/evaluate.py Flags: --language ru --tokenize-after --lower ### BibTeX entry and citation info ```bibtex @InProceedings{10.1007/978-3-030-59082-6_9, author="Gusev, Ilya", editor="Filchenkov, Andrey and Kauttonen, Janne and Pivovarova, Lidia", title="Dataset for Automatic Summarization of Russian News", booktitle="Artificial Intelligence and Natural Language", year="2020", publisher="Springer International Publishing", address="Cham", pages="122--134", isbn="978-3-030-59082-6" } ```
nlpaueb/legal-bert-base-uncased
15b570cbf88259610b082a167dacc190124f60f6
2022-04-28T14:42:50.000Z
[ "pytorch", "tf", "jax", "bert", "pretraining", "en", "transformers", "legal", "license:cc-by-sa-4.0", "fill-mask" ]
fill-mask
false
nlpaueb
null
nlpaueb/legal-bert-base-uncased
48,089
25
transformers
340
--- language: en pipeline_tag: fill-mask license: cc-by-sa-4.0 thumbnail: https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png tags: - legal widget: - text: "The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of police." --- # LEGAL-BERT: The Muppets straight out of Law School <img align="left" src="https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png" width="100"/> LEGAL-BERT is a family of BERT models for the legal domain, intended to assist legal NLP research, computational law, and legal technology applications. To pre-train the different variations of LEGAL-BERT, we collected 12 GB of diverse English legal text from several fields (e.g., legislation, court cases, contracts) scraped from publicly available resources. Sub-domain variants (CONTRACTS-, EURLEX-, ECHR-) and/or general LEGAL-BERT perform better than using BERT out of the box for domain-specific tasks. A light-weight model (33% the size of BERT-BASE) pre-trained from scratch on legal data with competitive performance is also available. <br/><br/> --- I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras and I. Androutsopoulos. "LEGAL-BERT: The Muppets straight out of Law School". In Findings of Empirical Methods in Natural Language Processing (EMNLP 2020) (Short Papers), to be held online, 2020. (https://aclanthology.org/2020.findings-emnlp.261) --- ## Pre-training corpora The pre-training corpora of LEGAL-BERT include: * 116,062 documents of EU legislation, publicly available from EURLEX (http://eur-lex.europa.eu), the repository of EU Law running under the EU Publication Office. * 61,826 documents of UK legislation, publicly available from the UK legislation portal (http://www.legislation.gov.uk). * 19,867 cases from the European Court of Justice (ECJ), also available from EURLEX. * 12,554 cases from HUDOC, the repository of the European Court of Human Rights (ECHR) (http://hudoc.echr.coe.int/eng). * 164,141 cases from various courts across the USA, hosted in the Case Law Access Project portal (https://case.law). * 76,366 US contracts from EDGAR, the database of US Securities and Exchange Commission (SECOM) (https://www.sec.gov/edgar.shtml). ## Pre-training details * We trained BERT using the official code provided in Google BERT's GitHub repository (https://github.com/google-research/bert). * We released a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters). * We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4. * We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us! * Part of LEGAL-BERT is a light-weight model pre-trained from scratch on legal data, which achieves comparable performance to larger models, while being much more efficient (approximately 4 times faster) with a smaller environmental footprint. 
## Models list | Model name | Model Path | Training corpora | | ------------------- | ------------------------------------ | ------------------- | | CONTRACTS-BERT-BASE | `nlpaueb/bert-base-uncased-contracts` | US contracts | | EURLEX-BERT-BASE | `nlpaueb/bert-base-uncased-eurlex` | EU legislation | | ECHR-BERT-BASE | `nlpaueb/bert-base-uncased-echr` | ECHR cases | | LEGAL-BERT-BASE * | `nlpaueb/legal-bert-base-uncased` | All | | LEGAL-BERT-SMALL | `nlpaueb/legal-bert-small-uncased` | All | \* LEGAL-BERT-BASE is the model referred to as LEGAL-BERT-SC in Chalkidis et al. (2020); a model trained from scratch in the legal corpora mentioned below using a newly created vocabulary by a sentence-piece tokenizer trained on the very same corpora. \*\* As many of you expressed interest in the LEGAL-BERT-FP models (those relying on the original BERT-BASE checkpoint), they have been released in Archive.org (https://archive.org/details/legal_bert_fp), as these models are secondary and possibly only interesting for those who aim to dig deeper in the open questions of Chalkidis et al. (2020). ## Load Pretrained Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased") model = AutoModel.from_pretrained("nlpaueb/legal-bert-base-uncased") ``` ## Use LEGAL-BERT variants as Language Models | Corpus | Model | Masked token | Predictions | | --------------------------------- | ---------------------------------- | ------------ | ------------ | | | **BERT-BASE-UNCASED** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('new', '0.09'), ('current', '0.04'), ('proposed', '0.03'), ('marketing', '0.03'), ('joint', '0.02') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.32'), ('rape', '0.22'), ('abuse', '0.14'), ('death', '0.04'), ('violence', '0.03') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('farm', '0.25'), ('livestock', '0.08'), ('draft', '0.06'), ('domestic', '0.05'), ('wild', '0.05') | | **CONTRACTS-BERT-BASE** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('letter', '0.38'), ('dealer', '0.04'), ('employment', '0.03'), ('award', '0.03'), ('contribution', '0.02') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('death', '0.39'), ('imprisonment', '0.07'), ('contempt', '0.05'), ('being', '0.03'), ('crime', '0.02') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | (('domestic', '0.18'), ('laboratory', '0.07'), ('household', '0.06'), ('personal', '0.06'), ('the', '0.04') | | **EURLEX-BERT-BASE** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . 
| employment | ('supply', '0.11'), ('cooperation', '0.08'), ('service', '0.07'), ('licence', '0.07'), ('distribution', '0.05') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.66'), ('death', '0.07'), ('imprisonment', '0.07'), ('murder', '0.04'), ('rape', '0.02') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.43'), ('pet', '0.28'), ('certain', '0.05'), ('fur', '0.03'), ('the', '0.02') | | **ECHR-BERT-BASE** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('second', '0.24'), ('latter', '0.10'), ('draft', '0.05'), ('bilateral', '0.05'), ('arbitration', '0.04') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.99'), ('death', '0.01'), ('inhuman', '0.00'), ('beating', '0.00'), ('rape', '0.00') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('pet', '0.17'), ('all', '0.12'), ('slaughtered', '0.10'), ('domestic', '0.07'), ('individual', '0.05') | | **LEGAL-BERT-BASE** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('settlement', '0.26'), ('letter', '0.23'), ('dealer', '0.04'), ('master', '0.02'), ('supplemental', '0.02') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '1.00'), ('detention', '0.00'), ('arrest', '0.00'), ('rape', '0.00'), ('death', '0.00') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.67'), ('beef', '0.17'), ('farm', '0.03'), ('pet', '0.02'), ('dairy', '0.01') | | **LEGAL-BERT-SMALL** | | (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('license', '0.09'), ('transition', '0.08'), ('settlement', '0.04'), ('consent', '0.03'), ('letter', '0.03') | (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.59'), ('pain', '0.05'), ('ptsd', '0.05'), ('death', '0.02'), ('tuberculosis', '0.02') | (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('all', '0.08'), ('live', '0.07'), ('certain', '0.07'), ('the', '0.07'), ('farm', '0.05') ## Evaluation on downstream tasks Consider the experiments in the article "LEGAL-BERT: The Muppets straight out of Law School". 
Chalkidis et al., 2020, (https://aclanthology.org/2020.findings-emnlp.261) ## Author - Publication ``` @inproceedings{chalkidis-etal-2020-legal, title = "{LEGAL}-{BERT}: The Muppets straight out of Law School", author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Aletras, Nikolaos and Androutsopoulos, Ion", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", doi = "10.18653/v1/2020.findings-emnlp.261", pages = "2898--2904" } ``` ## About Us [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts. The group's current research interests include: * question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering, * natural language generation from databases and ontologies, especially Semantic Web ontologies, text classification, including filtering spam and abusive content, * information extraction and opinion mining, including legal text analytics and sentiment analysis, * natural language processing tools for Greek, for example parsers and named-entity recognizers, machine learning in natural language processing, especially deep learning. The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business. [Ilias Chalkidis](https://iliaschalkidis.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) | Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
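The masked-token predictions tabulated earlier in this card can be reproduced with the `fill-mask` pipeline. The snippet below is a minimal sketch (exact scores may vary slightly across library versions):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nlpaueb/legal-bert-base-uncased")

predictions = fill_mask("This [MASK] Agreement is between General Motors and John Murray .")
for p in predictions:
    print(p["token_str"], round(p["score"], 2))
```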
cross-encoder/ms-marco-MiniLM-L-2-v2
f4db9595e5310ba9e0cfbf391154583933b533eb
2021-08-05T08:39:25.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
cross-encoder
null
cross-encoder/ms-marco-MiniLM-L-2-v2
47,946
null
transformers
341
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
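The re-ranking step described above (score each query–passage pair, then sort passages in decreasing order) can be sketched as follows, using the concrete checkpoint name in place of the `model_name` placeholder:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-2-v2", max_length=512)

query = "How many people live in Berlin?"
passages = [
    "Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.",
    "New York City is famous for the Metropolitan Museum of Art.",
]

# Score each (query, passage) pair and sort passages by decreasing relevance.
scores = model.predict([(query, p) for p in passages])
ranked = sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)
for passage, score in ranked:
    print(round(float(score), 3), passage)
```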
navervision/KELIP
027d7a67da81f4d2c092f296c47e6e33344dfede
2022-03-17T11:04:13.000Z
[ "pytorch", "kelip", "transformers" ]
null
false
navervision
null
navervision/KELIP
47,838
4
transformers
342
Entry not found
Tatyana/rubert-base-cased-sentiment-new
a1ff066aeb2b26b5f1b8d793862e51d77a1090d3
2021-05-30T23:12:27.000Z
[ "pytorch", "bert", "text-classification", "ru", "dataset:Tatyana/ru_sentiment_dataset", "transformers", "sentiment" ]
text-classification
false
Tatyana
null
Tatyana/rubert-base-cased-sentiment-new
47,547
1
transformers
343
---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- Tatyana/ru_sentiment_dataset
---

# RuBERT for Sentiment Analysis

Sentiment classification for Russian texts. The model was trained on the [Tatyana/ru_sentiment_dataset](https://huggingface.co/datasets/Tatyana/ru_sentiment_dataset) dataset.

## Labels meaning

0: NEUTRAL
1: POSITIVE
2: NEGATIVE

## How to use

```python
# The install commands below use notebook syntax (e.g. Google Colab).
!pip install tensorflow-gpu
!pip install deeppavlov
!python -m deeppavlov install squad_bert
!pip install fasttext
!pip install transformers
!python -m deeppavlov install bert_sentence_embedder

from deeppavlov import build_model

# Point build_model at the DeepPavlov config placed next to the downloaded model files.
model = build_model("path_to_model/rubert_sentiment.json")
model(["Сегодня хорошая погода", "Я счастлив проводить с тобою время", "Мне нравится эта музыкальная композиция"])
```

The required PyTorch weights are available on [Google Drive](https://drive.google.com/drive/folders/1EnJBq0dGfpjPxbVjybqaS7PsMaPHLUIl?usp=sharing). Download `model.pth.tar` and place it in the folder next to the other files of the model.
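The DeepPavlov route above needs several extra packages. Since the checkpoint is also a standard `bert` text-classification model on the Hub, it can presumably be used directly with Transformers; the sketch below is an assumption rather than part of the original instructions, and it relies on the label mapping listed above (0 NEUTRAL, 1 POSITIVE, 2 NEGATIVE).

```python
from transformers import pipeline

# Sketch only: assumes the Hub checkpoint loads with the standard text-classification pipeline.
classifier = pipeline("text-classification", model="Tatyana/rubert-base-cased-sentiment-new")

print(classifier("Мне нравится эта музыкальная композиция"))
```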
allenai/specter
c15597dc3bf1f00444f1c5a59c9bb80c93499635
2022-06-25T16:04:29.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "en", "dataset:SciDocs", "arxiv:2004.07180", "transformers", "license:apache-2.0" ]
feature-extraction
false
allenai
null
allenai/specter
47,052
14
transformers
344
---
language: en
thumbnail: "https://camo.githubusercontent.com/7d080b7a769f7fdf64ac0ebeb47b039cb50be35287e3071f9d633f0fe33e7596/68747470733a2f2f692e6962622e636f2f33544331576d472f737065637465722d6c6f676f2d63726f707065642e706e67"
license: apache-2.0
datasets:
- SciDocs
metrics:
- F1
- accuracy
- map
- ndcg
---

## SPECTER

SPECTER is a pre-trained language model that generates document-level embeddings of documents. It is pre-trained on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, SPECTER can be easily applied to downstream applications without task-specific fine-tuning.

Paper: [SPECTER: Document-level Representation Learning using Citation-informed Transformers](https://arxiv.org/pdf/2004.07180.pdf)

Original Repo: [Github](https://github.com/allenai/specter)

Evaluation Benchmark: [SciDocs](https://github.com/allenai/scidocs)

Authors: *Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, Daniel S. Weld*
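The card does not show how to produce the embeddings; the sketch below follows the usage described in the original repository (concatenate title and abstract with the tokenizer's separator token and take the [CLS] vector). Treat the details as indicative rather than authoritative, and note the paper texts are illustrative placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")

papers = [
    {"title": "BERT", "abstract": "We introduce a new language representation model called BERT."},
    {"title": "Attention is all you need", "abstract": "The dominant sequence transduction models ..."},
]

# Concatenate title and abstract with the [SEP] token.
title_abs = [p["title"] + tokenizer.sep_token + (p.get("abstract") or "") for p in papers]
inputs = tokenizer(title_abs, padding=True, truncation=True, return_tensors="pt", max_length=512)

with torch.no_grad():
    outputs = model(**inputs)

# The [CLS] token embedding is used as the document embedding.
doc_embeddings = outputs.last_hidden_state[:, 0, :]
print(doc_embeddings.shape)  # (2, 768)
```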
microsoft/layoutxlm-base
b95ef788341ccd507115d74e10c4bb7137559f19
2022-06-15T14:51:06.000Z
[ "pytorch", "layoutlmv2", "arxiv:2104.08836", "transformers", "license:cc-by-nc-sa-4.0" ]
null
false
microsoft
null
microsoft/layoutxlm-base
46,743
22
transformers
345
--- license: cc-by-nc-sa-4.0 --- # LayoutXLM **Multimodal (text + layout/format + image) pre-training for document AI** LayoutXLM is a multilingual variant of LayoutLMv2. The documentation of this model in the Transformers library can be found [here](https://huggingface.co/docs/transformers/model_doc/layoutxlm). [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://github.com/microsoft/unilm/tree/master/layoutxlm) ## Introduction LayoutXLM is a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding. Experiment results show that it has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei, arXiv Preprint 2021
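The card points to the Transformers documentation but includes no snippet; below is a minimal loading sketch. It assumes the dedicated `LayoutXLMTokenizer` class (which needs `sentencepiece`), and note that the LayoutLMv2 architecture additionally requires `detectron2` and `torchvision` to instantiate the visual backbone.

```python
from transformers import LayoutXLMTokenizer, LayoutLMv2Model

# LayoutXLM shares the LayoutLMv2 architecture but uses an XLM-RoBERTa-style tokenizer.
tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")
model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")

# At inference time the model expects words, their bounding boxes and the page image;
# see the LayoutXLM documentation linked above for building those inputs with a processor.
print(model.config.hidden_size)
```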
Helsinki-NLP/opus-mt-ko-en
8bf548f19accb8fdc96055608840f5a0c194ec8d
2020-08-21T14:42:47.000Z
[ "pytorch", "marian", "text2text-generation", "ko", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ko-en
45,612
2
transformers
346
--- language: - ko - en tags: - translation license: apache-2.0 --- ### kor-eng * source group: Korean * target group: English * OPUS readme: [kor-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-eng/README.md) * model: transformer-align * source language(s): kor kor_Hang kor_Latn * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kor.eng | 41.3 | 0.588 | ### System Info: - hf_name: kor-eng - source_languages: kor - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ko', 'en'] - src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.test.txt - src_alpha3: kor - tgt_alpha3: eng - short_pair: ko-en - chrF2_score: 0.588 - bleu: 41.3 - brevity_penalty: 0.9590000000000001 - ref_len: 17711.0 - src_name: Korean - tgt_name: English - train_date: 2020-06-17 - src_alpha2: ko - tgt_alpha2: en - prefer_old: False - long_pair: kor-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
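The card lists benchmarks but no usage snippet; a minimal translation sketch with the Marian classes in Transformers follows (the Korean example sentence is illustrative).

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ko-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a batch of Korean sentences to English.
batch = tokenizer(["안녕하세요. 만나서 반갑습니다."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```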
cambridgeltl/SapBERT-from-PubMedBERT-fulltext
c1f013fb438445557fa71a012928e233a9c5c777
2021-05-24T09:59:06.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "arxiv:2010.11784", "transformers" ]
feature-extraction
false
cambridgeltl
null
cambridgeltl/SapBERT-from-PubMedBERT-fulltext
44,769
3
transformers
347
--- language: en tags: - biomedical - lexical-semantics datasets: - UMLS **[news]** A cross-lingual extension of SapBERT will appear in the main onference of **ACL 2021**! <br> **[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**! ### SapBERT-PubMedBERT SapBERT by [Liu et al. (2020)](https://arxiv.org/pdf/2010.11784.pdf). Trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AA (English only), using [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) as the base model. Please use [CLS] as the representation of the input. ### Citation ```bibtex @inproceedings{liu-etal-2021-self, title = "Self-Alignment Pretraining for Biomedical Entity Representations", author = "Liu, Fangyu and Shareghi, Ehsan and Meng, Zaiqiao and Basaldella, Marco and Collier, Nigel", booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.naacl-main.334", pages = "4228--4238", abstract = "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERTand and PubMedBERT, our pretraining scheme proves to be both effective and robust.", } ```
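Following the note above about using [CLS] as the representation of the input, here is a minimal embedding sketch (the entity names are illustrative, not from the card):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext")

names = ["covid-19", "coronavirus infection", "high fever"]
inputs = tokenizer(names, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Use the [CLS] token embedding as the representation of each entity name.
cls_embeddings = outputs.last_hidden_state[:, 0, :]
print(cls_embeddings.shape)  # (3, 768)
```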
BeIR/query-gen-msmarco-t5-large-v1
5dd8dd401d24332c17e40015e9792ee31f3ced91
2021-06-23T02:12:04.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
BeIR
null
BeIR/query-gen-msmarco-t5-large-v1
43,945
9
transformers
348
# Query Generation This model is the t5-base model from [docTTTTTquery](https://github.com/castorini/docTTTTTquery). The T5-base model was trained on the [MS MARCO Passage Dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking), which consists of about 500k real search queries from Bing together with the relevant passage. The model can be used for query generation to learn semantic search models without requiring annotated training data: [Synthetic Query Generation](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/query_generation). ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained('model-name') model = T5ForConditionalGeneration.from_pretrained('model-name') para = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." input_ids = tokenizer.encode(para, return_tensors='pt') outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=3) print("Paragraph:") print(para) print("\nGenerated Queries:") for i in range(len(outputs)): query = tokenizer.decode(outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') ```
Xenova/sponsorblock-small
5261e7056338c5a91dd6e153314536f44a182b03
2022-02-08T16:56:09.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Xenova
null
Xenova/sponsorblock-small
43,756
1
transformers
349
Entry not found
EColi/SB_Classifier
dc4dce65613d29abd9c20b054a0a0c7abd0c6cb6
2022-04-20T17:27:13.000Z
[ "pytorch", "bert", "text-classification", "generic" ]
text-classification
false
EColi
null
EColi/SB_Classifier
43,746
null
generic
350
--- tags: - text-classification - generic library_name: generic widget: - text: 'This video is sponsored by squarespace' example_title: Sponsor - text: 'Check out the merch at linustechtips.com' example_title: Unpaid/self promotion - text: "Don't forget to like, comment and subscribe" example_title: Interaction reminder - text: 'pqh4LfPeCYs,824.695,826.267,826.133,829.876,835.933,927.581' example_title: Extract text from video ---
dmis-lab/biobert-base-cased-v1.1
924f12e0c3db7f156a765ad53fb6b11e7afedbc8
2020-10-14T07:02:59.000Z
[ "pytorch", "transformers" ]
null
false
dmis-lab
null
dmis-lab/biobert-base-cased-v1.1
43,360
7
transformers
351
Entry not found
indobenchmark/indobert-base-p1
c2cd0b51ddce6580eb35263b39b0a1e5fb0a39e2
2021-05-19T20:22:23.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "id", "dataset:Indo4B", "arxiv:2009.05387", "transformers", "indobert", "indobenchmark", "indonlu", "license:mit" ]
feature-extraction
false
indobenchmark
null
indobenchmark/indobert-base-p1
42,423
1
transformers
352
--- language: id tags: - indobert - indobenchmark - indonlu license: mit inference: false datasets: - Indo4B --- # IndoBERT Base Model (phase1 - uncased) [IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective. ## All Pre-trained Models | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) | ## How to use ### Load model and tokenizer ```python from transformers import BertTokenizer, AutoModel tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-base-p1") model = AutoModel.from_pretrained("indobenchmark/indobert-base-p1") ``` ### Extract contextual representation ```python x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1) print(x, model(x)[0].sum()) ``` ## Authors <b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti. ## Citation If you use our work, please cite: ```bibtex @inproceedings{wilie2020indonlu, title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding}, author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti}, booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing}, year={2020} } ```
rasa/LaBSE
e615b58364f13c7be81e15ccea2ab27a6c483b76
2021-05-20T04:01:27.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
rasa
null
rasa/LaBSE
42,409
7
transformers
353
Entry not found
microsoft/swin-base-patch4-window7-224-in22k
790d9b6014f6d157cc34d70afc0604eccc92dadd
2022-05-16T18:11:16.000Z
[ "pytorch", "tf", "swin", "image-classification", "dataset:imagenet-21k", "arxiv:2103.14030", "transformers", "vision", "license:apache-2.0" ]
image-classification
false
microsoft
null
microsoft/swin-base-patch4-window7-224-in22k
42,311
3
transformers
354
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# Swin Transformer (base-sized model)

Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).

Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png)

[Source](https://paperswithcode.com/method/swin-transformer)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-21k classes:

```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-base-patch4-window7-224-in22k")
model = SwinForImageClassification.from_pretrained("microsoft/swin-base-patch4-window7-224-in22k")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the 21,841 ImageNet-21k classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2103-14030, author = {Ze Liu and Yutong Lin and Yue Cao and Han Hu and Yixuan Wei and Zheng Zhang and Stephen Lin and Baining Guo}, title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, journal = {CoRR}, volume = {abs/2103.14030}, year = {2021}, url = {https://arxiv.org/abs/2103.14030}, eprinttype = {arXiv}, eprint = {2103.14030}, timestamp = {Thu, 08 Apr 2021 07:53:26 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
bert-large-cased-whole-word-masking-finetuned-squad
ba9ccd18e456b6c6a63a3ea5b21776f05452d923
2021-05-18T16:22:37.000Z
[ "pytorch", "tf", "jax", "rust", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
null
null
bert-large-cased-whole-word-masking-finetuned-squad
42,243
null
transformers
355
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT large model (cased) whole word masking finetuned on SQuAD Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between english and English. Differently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. This model has the following configuration: - 24-layer - 1024 hidden dimension - 16 attention heads - 336M parameters. ## Intended uses & limitations This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the [task summary](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) of the transformers documentation.## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. 
The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### Fine-tuning After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command: ``` python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \ --model_name_or_path bert-large-cased-whole-word-masking \ --dataset_name squad \ --do_train \ --do_eval \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir ./examples/models/wwm_cased_finetuned_squad/ \ --per_device_eval_batch_size=3 \ --per_device_train_batch_size=3 \ ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
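The card above explains the intended question-answering use and the fine-tuning command, but does not include an inference example. Below is a minimal, hedged sketch using the standard `transformers` question-answering pipeline; the question and context strings are illustrative placeholders, not taken from the card.

```python
from transformers import pipeline

model_name = "bert-large-cased-whole-word-masking-finetuned-squad"

# Load the fine-tuned checkpoint into a question-answering pipeline.
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

# Placeholder (question, context) pair; any SQuAD-style input works.
result = qa(
    question="What was the model fine-tuned on?",
    context="After pre-training, this model was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```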
flair/ner-english-ontonotes-fast
38a8eb6a720791da55e15962c36a37dd8d8270b2
2021-03-02T22:05:17.000Z
[ "pytorch", "en", "dataset:ontonotes", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/ner-english-ontonotes-fast
42,162
7
flair
356
--- tags: - flair - token-classification - sequence-tagger-model language: en datasets: - ontonotes widget: - text: "On September 1st George Washington won 1 dollar." --- ## English NER in Flair (Ontonotes fast model) This is the fast version of the 18-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **89.3** (Ontonotes) Predicts 18 tags: | **tag** | **meaning** | |---------------------------------|-----------| | CARDINAL | cardinal value | | DATE | date value | | EVENT | event name | | FAC | building name | | GPE | geo-political entity | | LANGUAGE | language name | | LAW | law name | | LOC | location name | | MONEY | money name | | NORP | affiliation | | ORDINAL | ordinal value | | ORG | organization name | | PERCENT | percent value | | PERSON | person name | | PRODUCT | product name | | QUANTITY | quantity value | | TIME | time value | | WORK_OF_ART | name of work of art | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english-ontonotes-fast") # make example sentence sentence = Sentence("On September 1st George Washington won 1 dollar.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [2,3]: "September 1st" [− Labels: DATE (0.9655)] Span [4,5]: "George Washington" [− Labels: PERSON (0.8243)] Span [7,8]: "1 dollar" [− Labels: MONEY (0.8022)] ``` So, the entities "*September 1st*" (labeled as a **date**), "*George Washington*" (labeled as a **person**) and "*1 dollar*" (labeled as a **money**) are found in the sentence "*On September 1st George Washington won 1 dollar*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('en-crawl'), # contextual string embeddings, forward FlairEmbeddings('news-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('news-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. 
run training trainer.train('resources/taggers/ner-english-ontonotes-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
VietAI/gpt-neo-1.3B-vietnamese-news
fbe35b344fc44b1cd58d0c7a4130310eb8894265
2021-10-10T16:44:31.000Z
[ "pytorch", "gpt_neo", "text-generation", "vi", "transformers", "causal-lm", "gpt" ]
text-generation
false
VietAI
null
VietAI/gpt-neo-1.3B-vietnamese-news
41,653
2
transformers
357
---
language:
- vi
tags:
- pytorch
- causal-lm
- gpt
---

# GPT-Neo 1.3B for Vietnamese News

Details will be available soon.

For more information, please contact anhduongng.1001@gmail.com / imthanhlv@gmail.com / nguyenvulebinh@gmail.com.
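Since usage details are not yet documented above, here is a minimal, hedged sketch of loading this checkpoint with the standard `transformers` causal-LM API; the Vietnamese prompt and the sampling settings are illustrative placeholders, not recommendations from the model authors.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "VietAI/gpt-neo-1.3B-vietnamese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Illustrative Vietnamese news-style prompt (placeholder).
prompt = "Hà Nội là"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings are placeholders; tune them for your use case.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```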
google/t5-xxl-lm-adapt
7c856f0142a6655ee44e2fd00fcc9f6d35fff56f
2021-11-01T14:23:24.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "en", "dataset:c4", "arxiv:2002.05202", "arxiv:1910.10683", "transformers", "t5-lm-adapt", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/t5-xxl-lm-adapt
41,589
3
transformers
358
---
language: en
datasets:
- c4
tags:
- t5-lm-adapt
license: apache-2.0
---

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted

## Version 1.1 - LM-Adapted

[T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-11b):

- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only, without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.

and is pretrained on both the denoising and language modeling objectives. More specifically, this checkpoint is initialized from [T5 Version 1.1 - XXL](https://huggingface.co/google/t5-v1_1-xxl) and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). This adaptation improves the ability of the model to be used for prompt tuning.

**Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp).

Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)

Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt)

Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)

Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*

## Abstract

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
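The card stops short of a usage snippet, so the following is a minimal, hedged sketch using the generic `transformers` seq2seq API. Bear in mind that the XXL checkpoint is very large (on the order of 11B parameters) and that the LM-adapted models are primarily intended as a starting point for prompt tuning or fine-tuning, so raw zero-shot generations may be unremarkable; the prompt below is a placeholder.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Loading the XXL checkpoint requires substantial memory; the same code
# works for the smaller *-lm-adapt checkpoints linked above.
model_name = "google/t5-xxl-lm-adapt"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Because of the LM adaptation, the model continues free-form text
# rather than expecting a task prefix. Placeholder prompt:
inputs = tokenizer("Transfer learning in NLP is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```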
sentence-transformers/multi-qa-mpnet-base-cos-v1
bd0b4f6d767d5cb937b4c1a9611df492a80e891a
2021-08-24T21:07:06.000Z
[ "pytorch", "mpnet", "fill-mask", "sentence-transformers", "feature-extraction", "sentence-similarity" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/multi-qa-mpnet-base-cos-v1
41,510
6
sentence-transformers
359
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # multi-qa-mpnet-base-cos-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer, util query = "How many people live in London?" docs = ["Around 9 Million people live in London", "London is known for its financial district"] #Load the model model = SentenceTransformer('sentence-transformers/multi-qa-mpnet-base-cos-v1') #Encode query and documents query_emb = model.encode(query) doc_emb = model.encode(docs) #Compute dot score between query and all document embeddings scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores for doc, score in doc_score_pairs: print(score, doc) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take average of all tokens def mean_pooling(model_output, attention_mask): token_embeddings = model_output.last_hidden_state #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) #Encode text def encode(texts): # Tokenize sentences encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input, return_dict=True) # Perform pooling embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) return embeddings # Sentences we want sentence embeddings for query = "How many people live in London?" 
docs = ["Around 9 Million people live in London", "London is known for its financial district"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-mpnet-base-cos-v1") model = AutoModel.from_pretrained("sentence-transformers/multi-qa-mpnet-base-cos-v1") #Encode query and docs query_emb = encode(query) doc_emb = encode(docs) #Compute dot score between query and all document embeddings scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores for doc, score in doc_score_pairs: print(score, doc) ``` ## Technical Details In the following some technical details how this model must be used: | Setting | Value | | --- | :---: | | Dimensions | 768 | | Produces normalized embeddings | Yes | | Pooling-Method | Mean pooling | | Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance | Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent. dot-product is preferred as it is faster. Euclidean distance is proportional to dot-product and can also be used. ---- ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developped this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intented to be used for semantic search: It encodes queries / questions and text paragraphs in a dense vector space. It finds relevant documents for the given passages. Note that there is a limit of 512 word pieces: Text longer than that will be truncated. Further note that the model was just trained on input text up to 250 word pieces. It might not work well for longer text. ## Training procedure The full training script is accessible in this current repository: `train_script.py`. ### Pre-training We use the pretrained [`mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure. #### Training We use the concatenation from multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. 
The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using Mean-pooling, cosine-similarity as similarity function, and a scale of 20. | Dataset | Number of training tuples | |--------------------------------------------------------|:--------------------------:| | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 | | [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 | | [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 | | **Total** | **214,988,242** |
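The training description above (MultipleNegativesRankingLoss with mean pooling, cosine similarity and a scale of 20) can be reproduced in miniature with the `sentence-transformers` API. The snippet below is only an illustrative sketch with two placeholder (question, answer) pairs; the actual training used `train_script.py` and the weighted data mix in `data_config.json`.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

# Start from the pretrained encoder named in the card; mean pooling is
# added automatically when loading a plain transformer checkpoint.
model = SentenceTransformer("microsoft/mpnet-base")

# Placeholder (question, answer) pairs standing in for the 215M real ones.
train_examples = [
    InputExample(texts=["How many people live in London?",
                        "Around 9 Million people live in London"]),
    InputExample(texts=["What is MNLI?",
                        "MNLI is a natural language inference dataset"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# Cosine similarity as the score function and a scale of 20, as described above.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20, similarity_fct=util.cos_sim)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```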
openai/clip-vit-base-patch16
6cef4adda11be098f7c823c95de721298611f514
2022-03-14T18:00:36.000Z
[ "pytorch", "jax", "clip", "feature-extraction", "arxiv:2103.00020", "arxiv:1908.04913", "transformers", "vision" ]
feature-extraction
false
openai
null
openai/clip-vit-base-patch16
41,138
7
transformers
360
--- tags: - vision --- # Model Card: CLIP Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md). ## Model Details The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within. ### Model Date January 2021 ### Model Type The base model uses a ViT-B/16 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer. ### Documents - [Blog Post](https://openai.com/blog/clip/) - [CLIP Paper](https://arxiv.org/abs/2103.00020) ### Use with Transformers ```python3 from PIL import Image import requests from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. #### Primary intended uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ### Out-of-Scope Use Cases **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. 
This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. ## Data The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users. ### Data Mission Statement Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset. ## Performance and Limitations ### Performance We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets: - Food101 - CIFAR10 - CIFAR100 - Birdsnap - SUN397 - Stanford Cars - FGVC Aircraft - VOC2007 - DTD - Oxford-IIIT Pet dataset - Caltech101 - Flowers102 - MNIST - SVHN - IIIT5K - Hateful Memes - SST-2 - UCF101 - Kinetics700 - Country211 - CLEVR Counting - KITTI Distance - STL-10 - RareAct - Flickr30 - MSCOCO - ImageNet - ImageNet-A - ImageNet-R - ImageNet Sketch - ObjectNet (ImageNet Overlap) - Youtube-BB - ImageNet-Vid ## Limitations CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance. ### Bias and Fairness We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper). We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) 
in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks. ## Feedback ### Where to send questions or comments about the model Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
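As a convenience alongside the manual scoring shown in the usage section above, recent `transformers` releases also expose CLIP through the zero-shot image classification pipeline. This is an optional, hedged sketch; the candidate labels are the same illustrative ones used earlier.

```python
from transformers import pipeline

# The pipeline wraps prompt formatting and the softmax over candidate labels.
classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch16")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
predictions = classifier(url, candidate_labels=["a photo of a cat", "a photo of a dog"])
print(predictions)
```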
sentence-transformers/roberta-base-nli-stsb-mean-tokens
903ef0c8897802c3209d82aa46b1c897ac56cf28
2022-06-15T20:49:42.000Z
[ "pytorch", "tf", "jax", "roberta", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/roberta-base-nli-stsb-mean-tokens
41,072
null
sentence-transformers
361
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- **⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)** # sentence-transformers/roberta-base-nli-stsb-mean-tokens This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/roberta-base-nli-stsb-mean-tokens') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/roberta-base-nli-stsb-mean-tokens') model = AutoModel.from_pretrained('sentence-transformers/roberta-base-nli-stsb-mean-tokens') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/roberta-base-nli-stsb-mean-tokens) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': True}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
airesearch/wangchanberta-base-att-spm-uncased
abe46f39cf2c911a6ad5ec8299bdf7503edc95e4
2022-02-16T14:42:32.000Z
[ "pytorch", "camembert", "fill-mask", "th", "arxiv:1907.11692", "arxiv:1801.06146", "arxiv:1808.06226", "arxiv:2101.09635", "transformers", "autotrain_compatible" ]
fill-mask
false
airesearch
null
airesearch/wangchanberta-base-att-spm-uncased
41,065
9
transformers
362
---
language: th
widget:
- text: "ผู้ใช้งานท่าอากาศยานนานาชาติ<mask>มีกว่าสามล้านคน<pad>"
---

# WangchanBERTa base model: `wangchanberta-base-att-spm-uncased`

<br>

Pretrained RoBERTa BASE model on assorted Thai texts (78.5 GB). The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).

<br>

## Model description

<br>

The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).

<br>

## Intended uses & limitations

<br>

You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.

<br>

**Multiclass text classification**

- `wisesight_sentiment` 4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reivews` Users' review rating classification task (scale ranging from 1 to 5).
- `generated_reviews_enth` (`review_star` as label): generated users' review rating classification task (scale ranging from 1 to 5).

**Multilabel text classification**

- `prachathai67k` Thai topic classification with 12 labels based on a news article corpus from prachathai.com. The details are described on this [page](https://huggingface.co/datasets/prachathai67k).

**Token classification**

- `thainer` Named-entity recognition tagging with 13 named-entities as described on this [page](https://huggingface.co/datasets/thainer).
- `lst20` NER and POS tagging: named-entity recognition tagging with 10 named-entities and part-of-speech tagging with 16 tags as described on this [page](https://huggingface.co/datasets/lst20).

<br>

## How to use

<br>

The getting-started notebook for the WangchanBERTa models can be found in this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko).

<br>

## Training data

The `wangchanberta-base-att-spm-uncased` model was pretrained on an assorted Thai text dataset. The total size of the uncompressed text is 78.5 GB.

### Preprocessing

Texts are preprocessed with the following rules:

- Replace HTML forms of characters with the actual characters, such as &nbsp; with a space and \<br /> with a line break [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Remove empty brackets ((), {}, and []) that sometimes come up as a result of text extraction, such as from Wikipedia.
- Replace line breaks with spaces.
- Replace more than one space with a single space.
- Remove more than 3 repetitive characters, such as ดีมากกก to ดีมาก [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Word-level tokenization using [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU)’s `newmm` dictionary-based maximal matching tokenizer.
- Replace repetitive words; this is done post-tokenization, unlike [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146), since there is no delimitation by space in Thai as in English.
- Replace spaces with <\_>. The SentencePiece tokenizer combines the spaces with other tokens. Since spaces serve as punctuation in Thai, such as sentence boundaries similar to periods in English, combining them with other tokens would omit an important feature for tasks such as word tokenization and sentence breaking. Therefore, we opt to explicitly mark spaces with <\_>.

<br>

Regarding the vocabulary, we use SentencePiece [[Kudo, 2018]](https://arxiv.org/abs/1808.06226) to train a SentencePiece unigram model. The tokenizer has a vocabulary size of 25,000 subwords, trained on 15M sentences sampled from the training set. The length of each sequence is limited to 416 subword tokens.

Regarding the masking procedure, for each sequence we sample 15% of the tokens and replace them with a `<mask>` token. Out of that 15%, 80% are replaced with a `<mask>` token, 10% are left unchanged, and 10% are replaced with a random token.

<br>

**Train/Val/Test splits**

After preprocessing and deduplication, we have a training set of 381,034,638 unique, mostly Thai sentences with sequence lengths of 5 to 300 words (78.5 GB). The training set has a total of 16,957,775,412 words as tokenized by dictionary-based maximal matching [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU), 8,680,485,067 subwords as tokenized by the SentencePiece tokenizer, and 53,035,823,287 characters.

<br>

**Pretraining**

The model was trained on 8 V100 GPUs for 500,000 steps with a batch size of 4,096 (32 sequences per device with 16 accumulation steps) and a sequence length of 416 tokens. The optimizer we used is Adam with a learning rate of $3e-4$, $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 24,000 steps and linearly decayed to zero. The model checkpoint with the minimum validation loss is selected as the best model checkpoint.

As of Sun 24 Jan 2021, we release the model from the checkpoint at 360,000 steps, because the model pretraining has not yet been completed.

<br>

**BibTeX entry and citation info**

```
@misc{lowphansirikul2021wangchanberta,
    title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
    author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
    year={2021},
    eprint={2101.09635},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
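For a quick masked-language-modeling check without opening the Colab notebook, the standard `transformers` fill-mask pipeline can be used; this is a hedged sketch, and the input sentence is simply the widget example from the card's metadata.

```python
from transformers import pipeline

model_name = "airesearch/wangchanberta-base-att-spm-uncased"
fill_mask = pipeline("fill-mask", model=model_name, tokenizer=model_name)

# Widget example from the card metadata; "<mask>" is the tokenizer's mask token.
for prediction in fill_mask("ผู้ใช้งานท่าอากาศยานนานาชาติ<mask>มีกว่าสามล้านคน<pad>"):
    print(prediction["token_str"], prediction["score"])
```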
pdelobelle/robbert-v2-dutch-ner
64e413ebaf94d058544dd6bce531c66c3116e652
2022-07-05T13:23:41.000Z
[ "pytorch", "jax", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
pdelobelle
null
pdelobelle/robbert-v2-dutch-ner
40,831
null
transformers
363
Entry not found
monologg/koelectra-base-v3-discriminator
68b30cd259f34a4b5aa8786392612ba2a2617fcc
2021-10-20T16:53:40.000Z
[ "pytorch", "electra", "pretraining", "ko", "transformers", "korean", "license:apache-2.0" ]
null
false
monologg
null
monologg/koelectra-base-v3-discriminator
40,481
13
transformers
364
--- language: ko license: apache-2.0 tags: - korean --- # KoELECTRA v3 (Base Discriminator) Pretrained ELECTRA Language Model for Korean (`koelectra-base-v3-discriminator`) For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md). ## Usage ### Load model and tokenizer ```python >>> from transformers import ElectraModel, ElectraTokenizer >>> model = ElectraModel.from_pretrained("monologg/koelectra-base-v3-discriminator") >>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator") ``` ### Tokenizer example ```python >>> from transformers import ElectraTokenizer >>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator") >>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]") ['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]'] >>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]']) [2, 11229, 29173, 13352, 25541, 4110, 7824, 17788, 18, 3] ``` ## Example using ElectraForPreTraining ```python import torch from transformers import ElectraForPreTraining, ElectraTokenizer discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-v3-discriminator") tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator") sentence = "나는 방금 밥을 먹었다." fake_sentence = "나는 내일 밥을 먹었다." fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) print(list(zip(fake_tokens, predictions.tolist()[1:-1]))) ```
textattack/bert-base-uncased-ag-news
fe417ad660b1657142f66353a184dc0c7e6d2e48
2021-05-20T07:40:21.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
false
textattack
null
textattack/bert-base-uncased-ag-news
40,413
2
transformers
365
## TextAttack Model Card

This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9514473684210526, as measured by the eval set accuracy, found after 3 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
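The card does not include an inference snippet, so here is a minimal, hedged sketch using the `transformers` text-classification pipeline. Note that the checkpoint may return generic `LABEL_0`–`LABEL_3` names; mapping them to the ag_news classes (World, Sports, Business, Sci/Tech) in that order is an assumption about the dataset's label order, not something stated in the card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="textattack/bert-base-uncased-ag-news",
)

# Placeholder headline-style input.
print(classifier("Stocks rallied after the quarterly earnings report."))
# If the output label is e.g. LABEL_2, it presumably corresponds to the third
# ag_news class under the assumed order (World, Sports, Business, Sci/Tech).
```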
mrm8488/bert-small-finetuned-squadv2
3ffb743e93b64bc944f778292a71ebac650834ae
2021-05-20T00:33:09.000Z
[ "pytorch", "jax", "bert", "question-answering", "en", "arxiv:1908.08962", "transformers", "autotrain_compatible" ]
question-answering
false
mrm8488
null
mrm8488/bert-small-finetuned-squadv2
40,088
null
transformers
366
--- language: en thumbnail: --- # BERT-Small fine-tuned on SQuAD v2 [BERT-Small](https://github.com/google-research/bert/) created by [Google Research](https://github.com/google-research) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task. **Mode size** (after training): **109.74 MB** ## Details of BERT-Small and its 'family' (from their documentation) Released on March 11th, 2020 This is model is a part of 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. ## Details of the downstream task (Q&A) - Dataset [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model training The model was trained on a Tesla P100 GPU and 25GB of RAM. The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) ## Results: | Metric | # Value | | ------ | --------- | | **EM** | **60.49** | | **F1** | **64.21** | ## Comparison: | Model | EM | F1 score | SIZE (MB) | | ------------------------------------------------------------------------------------------- | --------- | --------- | --------- | | [bert-tiny-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2) | 48.60 | 49.73 | **16.74** | | [bert-mini-finetuned-squadv2](https://huggingface.co/mrm8488/bert-mini-finetuned-squadv2) | 56.31 | 59.65 | 42.63 | | [bert-small-finetuned-squadv2](https://huggingface.co/mrm8488/bert-small-finetuned-squadv2) | **60.49** | **64.21** | 109.74 | ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/bert-small-finetuned-squadv2", tokenizer="mrm8488/bert-small-finetuned-squadv2" ) qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately", 'question': "Who has been working hard for hugginface/transformers lately?" }) # Output: ``` ```json { "answer": "Manuel Romero", "end": 13, "score": 0.9939319924374637, "start": 0 } ``` ### Yes! That was easy 🎉 Let's try with another example ```python qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately", 'question': "For which company has worked Manuel Romero?" }) # Output: ``` ```json { "answer": "hugginface/transformers", "end": 79, "score": 0.6024888734447131, "start": 56 } ``` ### It works!! 🎉 🎉 🎉 > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
Helsinki-NLP/opus-mt-fi-en
7fb1e75696c8b8930df5afae6bb5d22ffca4ed30
2021-01-18T08:32:43.000Z
[ "pytorch", "marian", "text2text-generation", "fi", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fi-en
40,083
1
transformers
367
--- language: - fi - en tags: - translation license: apache-2.0 --- ### fin-eng * source group: Finnish * target group: English * OPUS readme: [fin-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md) * model: transformer-align * source language(s): fin * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-08-05.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.zip) * test set translations: [opus-2020-08-05.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.test.txt) * test set scores: [opus-2020-08-05.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2015-enfi-fineng.fin.eng | 25.3 | 0.536 | | newstest2015-enfi-fineng.fin.eng | 26.9 | 0.547 | | newstest2016-enfi-fineng.fin.eng | 29.0 | 0.571 | | newstest2017-enfi-fineng.fin.eng | 32.3 | 0.594 | | newstest2018-enfi-fineng.fin.eng | 23.8 | 0.517 | | newstest2019-fien-fineng.fin.eng | 29.0 | 0.565 | | newstestB2016-enfi-fineng.fin.eng | 24.5 | 0.527 | | newstestB2017-enfi-fineng.fin.eng | 27.4 | 0.557 | | newstestB2017-fien-fineng.fin.eng | 27.4 | 0.557 | | Tatoeba-test.fin.eng | 53.4 | 0.697 | ### System Info: - hf_name: fin-eng - source_languages: fin - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['fi', 'en'] - src_constituents: {'fin'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.test.txt - src_alpha3: fin - tgt_alpha3: eng - short_pair: fi-en - chrF2_score: 0.6970000000000001 - bleu: 53.4 - brevity_penalty: 0.99 - ref_len: 74651.0 - src_name: Finnish - tgt_name: English - train_date: 2020-08-05 - src_alpha2: fi - tgt_alpha2: en - prefer_old: False - long_pair: fin-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
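Since the card lists benchmark scores but no usage snippet, below is a minimal, hedged sketch using the `transformers` Marian classes; the Finnish input sentence is an illustrative placeholder, not taken from the test sets above.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Placeholder Finnish sentence ("Good morning, how are you?").
batch = tokenizer(["Hyvää huomenta, mitä kuuluu?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```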
albert-large-v2
c76159dc6b4d18f16d303451ae64b4f34a7d0d63
2021-01-13T15:35:47.000Z
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
albert-large-v2
39,393
5
transformers
368
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT Large v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the second version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 24 repeating layers - 128 embedding dimension - 1024 hidden dimension - 16 attention heads - 17M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-large-v2') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] hello i'm a modeling model.[SEP]", "score":0.05816134437918663, "token":12807, "token_str":"▁modeling" }, { "sequence":"[CLS] hello i'm a modelling model.[SEP]", "score":0.03748830780386925, "token":23089, "token_str":"▁modelling" }, { "sequence":"[CLS] hello i'm a model model.[SEP]", "score":0.033725276589393616, "token":1061, "token_str":"▁model" }, { "sequence":"[CLS] hello i'm a runway model.[SEP]", "score":0.017313428223133087, "token":8014, "token_str":"▁runway" }, { "sequence":"[CLS] hello i'm a lingerie model.[SEP]", "score":0.014405295252799988, "token":29104, "token_str":"▁lingerie" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-large-v2') model = AlbertModel.from_pretrained("albert-large-v2") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-large-v2') model = TFAlbertModel.from_pretrained("albert-large-v2") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-large-v2') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a chauffeur.[SEP]", "score":0.029577180743217468, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the man worked as a janitor.[SEP]", "score":0.028865724802017212, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the man worked as a shoemaker.[SEP]", "score":0.02581118606030941, "token":29024, "token_str":"▁shoemaker" }, { "sequence":"[CLS] the man worked as a blacksmith.[SEP]", "score":0.01849772222340107, "token":21238, "token_str":"▁blacksmith" }, { "sequence":"[CLS] the man worked as a lawyer.[SEP]", "score":0.01820771023631096, "token":3672, "token_str":"▁lawyer" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a receptionist.[SEP]", "score":0.04604868218302727, "token":25331, "token_str":"▁receptionist" }, { "sequence":"[CLS] the woman worked as a janitor.[SEP]", "score":0.028220869600772858, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the woman worked as a paramedic.[SEP]", "score":0.0261906236410141, "token":23386, "token_str":"▁paramedic" }, { "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", "score":0.024797942489385605, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the woman worked as a waitress.[SEP]", "score":0.024124596267938614, "token":13678, "token_str":"▁waitress" } ] ``` This bias will also affect all fine-tuned versions of this model. 
## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
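The 15% / 80-10-10 masking rule described in the Training section above can be illustrated with a short, self-contained sketch. This is purely illustrative; the `mask_tokens` helper and its arguments are hypothetical and are not part of the original ALBERT training code:

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Illustrative sketch of the 15% / 80-10-10 masking rule (hypothetical helper)."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = position ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:           # 15% of tokens are selected for prediction
            labels[i] = tok                      # the model must recover the original token
            r = random.random()
            if r < 0.8:                          # 80%: replace with the [MASK] token
                inputs[i] = mask_id
            elif r < 0.9:                        # 10%: replace with a random vocabulary token
                inputs[i] = random.randrange(vocab_size)
            # remaining 10%: keep the original token unchanged
    return inputs, labels
```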
microsoft/deberta-large
822a8791fdac38e8086e2731158047e9b63e4521
2022-01-13T17:10:16.000Z
[ "pytorch", "tf", "deberta", "en", "arxiv:2006.03654", "transformers", "deberta-v1", "license:mit" ]
null
false
microsoft
null
microsoft/deberta-large
38,677
9
transformers
369
--- language: en tags: deberta-v1 thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. #### Fine-tuning on NLU tasks We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks. | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B | |---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------| | | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S | | BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- | | RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- | | XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- | | [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 | | [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7| | [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9| |**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** | -------- #### Notes. - <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks. - <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp** ```bash cd transformers/examples/text-classification/ export TASK_NAME=mrpc python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\ --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \\ --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16 ``` ### Citation If you find DeBERTa useful for your work, please cite the following paper: ```latex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
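The card above only shows a fine-tuning command; for quick feature extraction the checkpoint can also be loaded with the standard `transformers` auto classes. A minimal sketch (illustrative usage, not taken from the official DeBERTa repository):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-large")
model = AutoModel.from_pretrained("microsoft/deberta-large")

inputs = tokenizer("DeBERTa improves BERT with disentangled attention.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```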
rinna/japanese-gpt-1b
a3c6e8478d5afa92fe5174b984555e01fe378cd3
2022-02-18T04:46:46.000Z
[ "pytorch", "gpt2", "text-generation", "ja", "dataset:cc100", "dataset:wikipedia", "dataset:c4", "transformers", "japanese", "gpt", "lm", "nlp", "license:mit" ]
text-generation
false
rinna
null
rinna/japanese-gpt-1b
38,593
20
transformers
370
--- language: ja thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png tags: - ja - japanese - gpt - text-generation - lm - nlp license: mit datasets: - cc100 - wikipedia - c4 widget: - text: "西田幾多郎は、" --- # japanese-gpt-1b ![rinna-icon](./rinna.png) This repository provides a 1.3B-parameter Japanese GPT model. The model was trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/) # How to use the model *NOTE:* Use `T5Tokenizer` to initialize the tokenizer. ~~~~ import torch from transformers import T5Tokenizer, AutoModelForCausalLM tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt-1b") model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-1b") if torch.cuda.is_available(): model = model.to("cuda") text = "西田幾多郎は、" token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_length=100, min_length=100, do_sample=True, top_k=500, top_p=0.95, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id, bad_words_ids=[[tokenizer.unk_token_id]] ) output = tokenizer.decode(output_ids.tolist()[0]) print(output) # sample output: 西田幾多郎は、その主著の「善の研究」などで、人間の内面に自然とその根源があると指摘し、その根源的な性格は、この西田哲学を象徴しているとして、カントの「純粋理性批判」と「判断力批判」を対比して捉えます。それは、「人が理性的存在であるかぎりにおいて、人はその当人に固有な道徳的に自覚された善悪の基準を持っている」とするもので、この理性的な善悪の観念を否定するのがカントの ~~~~ # Model architecture A 24-layer, 2048-hidden-size transformer-based language model. # Training The model was trained on [Japanese C4](https://huggingface.co/datasets/allenai/c4), [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective. It reaches around 14 perplexity on a chosen validation set from the same data. # Tokenization The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script, and then augmented with emojis and symbols. # License [The MIT license](https://opensource.org/licenses/MIT)
cross-encoder/ms-marco-TinyBERT-L-2-v2
e9ea2688951463fc2791a2ea2ddfce6762900675
2021-08-05T08:39:45.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
cross-encoder
null
cross-encoder/ms-marco-TinyBERT-L-2-v2
38,423
1
transformers
371
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-2-v2') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-2-v2') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-2-v2', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
flair/ner-german-large
d8943c40a867161a5a5b7ce91f31adaea1c3a424
2021-05-08T15:36:43.000Z
[ "pytorch", "de", "dataset:conll2003", "arxiv:2011.06993", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
flair
null
flair/ner-german-large
38,327
6
flair
372
--- tags: - flair - token-classification - sequence-tagger-model language: de datasets: - conll2003 widget: - text: "George Washington ging nach Washington" --- ## German NER in Flair (large model) This is the large 4-class NER model for German that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **92,31** (CoNLL-03 German revised) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/). --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-german-large") # make example sentence sentence = Sentence("George Washington ging nach Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (1.0)] Span [5]: "Washington" [− Labels: LOC (1.0)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python import torch # 1. get the corpus from flair.datasets import CONLL_03_GERMAN corpus = CONLL_03_GERMAN() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize fine-tuneable transformer embeddings WITH document context from flair.embeddings import TransformerWordEmbeddings embeddings = TransformerWordEmbeddings( model='xlm-roberta-large', layers="-1", subtoken_pooling="first", fine_tune=True, use_context=True, ) # 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection) from flair.models import SequenceTagger tagger = SequenceTagger( hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type='ner', use_crf=False, use_rnn=False, reproject_embeddings=False, ) # 6. initialize trainer with AdamW optimizer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW) # 7. run training with XLM parameters (20 epochs, small LR) from torch.optim.lr_scheduler import OneCycleLR trainer.train('resources/taggers/ner-german-large', learning_rate=5.0e-6, mini_batch_size=4, mini_batch_chunk_size=1, max_epochs=20, scheduler=OneCycleLR, embeddings_storage_mode='none', weight_decay=0., ) ``` --- ### Cite Please cite the following paper when using this model. ``` @misc{schweter2020flert, title={FLERT: Document-Level Features for Named Entity Recognition}, author={Stefan Schweter and Alan Akbik}, year={2020}, eprint={2011.06993}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
csebuetnlp/mT5_multilingual_XLSum
361416d0a10fe5df7e139081f3b5476fd39c860f
2021-10-03T13:14:22.000Z
[ "pytorch", "mt5", "text2text-generation", "am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo", "dataset:csebuetnlp/xlsum", "transformers", "summarization", "mT5", "autotrain_compatible" ]
summarization
false
csebuetnlp
null
csebuetnlp/mT5_multilingual_XLSum
37,992
46
transformers
373
--- tags: - summarization - mT5 datasets: - csebuetnlp/xlsum language: - am - ar - az - bn - my - zh - en - fr - gu - ha - hi - ig - id - ja - rn - ko - ky - mr - ne - om - ps - fa - pcm - pt - pa - ru - gd - sr - si - so - es - sw - ta - te - th - ti - tr - uk - ur - uz - vi - cy - yo licenses: - cc-by-nc-sa-4.0 widget: - text: "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization." --- # mT5-multilingual-XLSum This repository contains the mT5 checkpoint finetuned on the 45 languages of [XL-Sum](https://huggingface.co/datasets/csebuetnlp/xlsum) dataset. For finetuning details and scripts, see the [paper](https://aclanthology.org/2021.findings-acl.413/) and the [official repository](https://github.com/csebuetnlp/xl-sum). ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python import re from transformers import AutoTokenizer, AutoModelForSeq2SeqLM WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. 
"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization.""" model_name = "csebuetnlp/mT5_multilingual_XLSum" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) input_ids = tokenizer( [WHITESPACE_HANDLER(article_text)], return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ``` ## Benchmarks Scores on the XL-Sum test sets are as follows: Language | ROUGE-1 / ROUGE-2 / ROUGE-L ---------|---------------------------- Amharic | 20.0485 / 7.4111 / 18.0753 Arabic | 34.9107 / 14.7937 / 29.1623 Azerbaijani | 21.4227 / 9.5214 / 19.3331 Bengali | 29.5653 / 12.1095 / 25.1315 Burmese | 15.9626 / 5.1477 / 14.1819 Chinese (Simplified) | 39.4071 / 17.7913 / 33.406 Chinese (Traditional) | 37.1866 / 17.1432 / 31.6184 English | 37.601 / 15.1536 / 29.8817 French | 35.3398 / 16.1739 / 28.2041 Gujarati | 21.9619 / 7.7417 / 19.86 Hausa | 39.4375 / 17.6786 / 31.6667 Hindi | 38.5882 / 16.8802 / 32.0132 Igbo | 31.6148 / 10.1605 / 24.5309 Indonesian | 37.0049 / 17.0181 / 30.7561 Japanese | 48.1544 / 23.8482 / 37.3636 Kirundi | 31.9907 / 14.3685 / 25.8305 Korean | 23.6745 / 11.4478 / 22.3619 Kyrgyz | 18.3751 / 7.9608 / 16.5033 Marathi | 22.0141 / 9.5439 / 19.9208 Nepali | 26.6547 / 10.2479 / 24.2847 Oromo | 18.7025 / 6.1694 / 16.1862 Pashto | 38.4743 / 15.5475 / 31.9065 Persian | 36.9425 / 16.1934 / 30.0701 Pidgin | 37.9574 / 15.1234 / 29.872 Portuguese | 37.1676 / 15.9022 / 28.5586 Punjabi | 30.6973 / 12.2058 / 25.515 Russian | 32.2164 / 13.6386 / 26.1689 Scottish Gaelic | 29.0231 / 10.9893 / 22.8814 Serbian (Cyrillic) | 23.7841 / 7.9816 / 20.1379 Serbian (Latin) | 21.6443 / 6.6573 / 18.2336 Sinhala | 27.2901 / 13.3815 / 23.4699 Somali | 31.5563 / 11.5818 / 24.2232 Spanish | 31.5071 / 11.8767 / 24.0746 Swahili | 37.6673 / 17.8534 / 30.9146 Tamil | 24.3326 / 11.0553 / 22.0741 Telugu | 19.8571 / 7.0337 / 17.6101 Thai | 37.3951 / 17.275 / 28.8796 Tigrinya | 25.321 / 8.0157 / 21.1729 Turkish | 32.9304 / 15.5709 / 29.2622 Ukrainian | 23.9908 / 10.1431 / 20.9199 Urdu | 39.5579 / 18.3733 / 32.8442 Uzbek | 16.8281 / 6.3406 / 15.4055 Vietnamese | 32.8826 / 16.2247 / 26.0844 Welsh | 32.6599 / 11.596 / 26.1164 Yoruba | 31.6595 / 11.6599 / 25.0898 ## Citation If you use this model, please cite the following paper: ``` @inproceedings{hasan-etal-2021-xl, title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages", author = "Hasan, Tahmid and Bhattacharjee, Abhik and Islam, Md. Saiful and Mubasshir, Kazi and Li, Yuan-Fang and Kang, Yong-Bin and Rahman, M. Sohel and Shahriyar, Rifat", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.413", pages = "4693--4703", } ```
textattack/albert-base-v2-yelp-polarity
bbb5fb3997de43eedb58f7c74b8fbd63c719b5dd
2020-07-06T16:37:10.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
false
textattack
null
textattack/albert-base-v2-yelp-polarity
37,888
null
transformers
374
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 512. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.975078947368421, as measured by the eval set accuracy, found after 3 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
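Since the card does not include inference code, here is a minimal sketch using the `transformers` text-classification pipeline. The returned label names depend on the uploaded config (they may appear as generic `LABEL_0`/`LABEL_1`); the mapping below follows the yelp_polarity label ordering and should be treated as an assumption to verify:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="textattack/albert-base-v2-yelp-polarity")

# Assumed label mapping: LABEL_0 = negative, LABEL_1 = positive (yelp_polarity ordering) -- verify before relying on it.
print(classifier("The food was amazing and the staff were friendly."))
print(classifier("Terrible service, I will never come back."))
```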
monologg/kobert
8ebf2818cfd85570737d31ed8cd7aaa000e7056c
2021-05-19T23:52:30.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
monologg
null
monologg/kobert
37,585
5
transformers
375
Entry not found
mrm8488/bert-medium-finetuned-squadv2
881ce1995ab82387a14f63cf50c845afb8f6f724
2021-05-20T00:25:00.000Z
[ "pytorch", "jax", "bert", "question-answering", "en", "arxiv:1908.08962", "transformers", "autotrain_compatible" ]
question-answering
false
mrm8488
null
mrm8488/bert-medium-finetuned-squadv2
37,108
1
transformers
376
--- language: en thumbnail: --- # BERT-Medium fine-tuned on SQuAD v2 [BERT-Medium](https://github.com/google-research/bert/) created by [Google Research](https://github.com/google-research) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task. **Model size** (after training): **157.46 MB** ## Details of BERT-Medium and its 'family' (from their documentation) Released on March 11th, 2020 This model is part of the 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. ## Details of the downstream task (Q&A) - Dataset [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model training The model was trained on a Tesla P100 GPU with 25GB of RAM. The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) ## Results: | Metric | # Value | | ------ | --------- | | **EM** | **65.95** | | **F1** | **70.11** | ### Raw metrics from benchmark included in training script: ```json { "exact": 65.95637159942727, "f1": 70.11632254245896, "total": 11873, "HasAns_exact": 67.79689608636977, "HasAns_f1": 76.12872765631123, "HasAns_total": 5928, "NoAns_exact": 64.12111017661901, "NoAns_f1": 64.12111017661901, "NoAns_total": 5945, "best_exact": 65.96479407058031, "best_exact_thresh": 0.0, "best_f1": 70.12474501361196, "best_f1_thresh": 0.0 } ``` ## Comparison: | Model | EM | F1 score | SIZE (MB) | | --------------------------------------------------------------------------------------------- | --------- | --------- | --------- | | [bert-tiny-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2) | 48.60 | 49.73 | **16.74** | | [bert-tiny-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-5-finetuned-squadv2) | 57.12 | 60.86 | 24.34 | | [bert-mini-finetuned-squadv2](https://huggingface.co/mrm8488/bert-mini-finetuned-squadv2) | 56.31 | 59.65 | 42.63 | | [bert-mini-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-mini-5-finetuned-squadv2) | 63.51 | 66.78 | 66.76 | | [bert-small-finetuned-squadv2](https://huggingface.co/mrm8488/bert-small-finetuned-squadv2) | 60.49 | 64.21 | 109.74 | | [bert-medium-finetuned-squadv2](https://huggingface.co/mrm8488/bert-medium-finetuned-squadv2) | **65.95** | **70.11** | 157.46 | ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/bert-medium-finetuned-squadv2", tokenizer="mrm8488/bert-medium-finetuned-squadv2" ) qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository 
hugginface/transformers lately", 'question': "Who has been working hard for hugginface/transformers lately?" }) # Output: ``` ```json { "answer": "Manuel Romero", "end": 13, "score": 0.9939319924374637, "start": 0 } ``` ### Yes! That was easy 🎉 Let's try with another example ```python qa_pipeline({ 'context': "Manuel Romero has been working remotely in the repository hugginface/transformers lately", 'question': "How has been working Manuel Romero?" }) # Output: ``` ```json { "answer": "remotely", "end": 39, "score": 0.3612058272768017, "start": 31 } ``` ### It works!! 🎉 🎉 🎉 > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
YituTech/conv-bert-base
5cb451936b5c4a96562d8b146de85f64f9cf2c22
2021-02-24T11:26:14.000Z
[ "pytorch", "tf", "convbert", "feature-extraction", "transformers" ]
feature-extraction
false
YituTech
null
YituTech/conv-bert-base
36,924
null
transformers
377
Entry not found
dangvantuan/sentence-camembert-large
3c04b3d31c3b8ab520fd9cb474b6f50ad4b7a9a1
2022-07-22T22:33:07.000Z
[ "pytorch", "tf", "camembert", "feature-extraction", "fr", "dataset:stsb_multi_mt", "arxiv:1908.10084", "transformers", "Text", "Sentence Similarity", "Sentence-Embedding", "camembert-large", "license:apache-2.0", "sentence-similarity", "model-index" ]
sentence-similarity
false
dangvantuan
null
dangvantuan/sentence-camembert-large
36,830
5
transformers
378
--- pipeline_tag: sentence-similarity language: fr datasets: - stsb_multi_mt tags: - Text - Sentence Similarity - Sentence-Embedding - camembert-large license: apache-2.0 model-index: - name: sentence-camembert-large by Van Tuan DANG results: - task: name: Sentence-Embedding type: Text Similarity dataset: name: Text Similarity fr type: stsb_multi_mt args: fr metrics: - name: Test Pearson correlation coefficient type: Pearson_correlation_coefficient value: xx.xx --- ## Pre-trained sentence embedding model: the state of the art of sentence embeddings for French The model is fine-tuned from the pre-trained [facebook/camembert-large](https://huggingface.co/camembert/camembert-large) using [Siamese BERT-Networks with 'sentence-transformers'](https://www.sbert.net/) on the [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/fr/train) dataset. ## Usage The model can be used directly (without a language model) as follows: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer("dangvantuan/sentence-camembert-large") sentences = ["Un avion est en train de décoller.", "Un homme joue d'une grande flûte.", "Un homme étale du fromage râpé sur une pizza.", "Une personne jette un chat au plafond.", "Une personne est en train de plier un morceau de papier.", ] embeddings = model.encode(sentences) ``` ## Evaluation The model can be evaluated as follows on the French test data of stsb. ```python from sentence_transformers import SentenceTransformer from sentence_transformers.readers import InputExample from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator from datasets import load_dataset def convert_dataset(dataset): dataset_samples=[] for df in dataset: score = float(df['similarity_score'])/5.0 # Normalize score to range 0 ... 1 inp_example = InputExample(texts=[df['sentence1'], df['sentence2']], label=score) dataset_samples.append(inp_example) return dataset_samples # Loading the dataset for evaluation df_dev = load_dataset("stsb_multi_mt", name="fr", split="dev") df_test = load_dataset("stsb_multi_mt", name="fr", split="test") # Convert the dataset for evaluation # For Dev set: dev_samples = convert_dataset(df_dev) val_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_samples, name='sts-dev') val_evaluator(model, output_path="./") # For Test set: test_samples = convert_dataset(df_test) test_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(test_samples, name='sts-test') test_evaluator(model, output_path="./") ``` **Test Result**: The performance is measured using Pearson and Spearman correlation: - On dev | Model | Pearson correlation | Spearman correlation | #params | | ------------- | ------------- | ------------- |------------- | | [dangvantuan/sentence-camembert-large](https://huggingface.co/dangvantuan/sentence-camembert-large)| 88.2 |88.02 | 336M| | [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base) | 86.73|86.54 | 110M | | [distiluse-base-multilingual-cased](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased) | 79.22 | 79.16|135M | - On test | Model | Pearson correlation | Spearman correlation | | ------------- | ------------- | ------------- | | [dangvantuan/sentence-camembert-large](https://huggingface.co/dangvantuan/sentence-camembert-large)| 85.9 | 85.8| | [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base)| 82.36 | 81.64| | [distiluse-base-multilingual-cased](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased) | 78.62 | 77.48| ## Citation @article{reimers2019sentence, title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks}, author={Nils Reimers, Iryna Gurevych}, journal={https://arxiv.org/abs/1908.10084}, year={2019} } @article{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, journal={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} }
DeepPavlov/bert-base-multilingual-cased-sentence
403febddd8959ecc1a8d140a83d461a1261c7935
2021-05-18T18:16:12.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "multilingual", "arxiv:1704.05426", "arxiv:1809.05053", "arxiv:1908.10084", "transformers" ]
feature-extraction
false
DeepPavlov
null
DeepPavlov/bert-base-multilingual-cased-sentence
36,729
null
transformers
379
--- language: - multilingual --- # bert-base-multilingual-cased-sentence Sentence Multilingual BERT \(101 languages, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) is a representation‑based sentence encoder for 101 languages of Multilingual BERT. It is initialized with Multilingual BERT and then fine‑tuned on the English MultiNLI\[1\] and on the dev set of the multilingual XNLI\[2\]. Sentence representations are mean pooled token embeddings in the same manner as in Sentence‑BERT\[3\]. \[1\]: Williams A., Nangia N. & Bowman S. \(2017\) A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. arXiv preprint [arXiv:1704.05426](https://arxiv.org/abs/1704.05426) \[2\]: Williams A., Bowman S. \(2018\) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint [arXiv:1809.05053](https://arxiv.org/abs/1809.05053) \[3\]: N. Reimers, I. Gurevych \(2019\) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint [arXiv:1908.10084](https://arxiv.org/abs/1908.10084)
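As a minimal sketch of the mean pooling described above (standard `transformers` usage; the pooling code mirrors the Sentence-BERT recipe and is illustrative rather than the authors' exact implementation):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-multilingual-cased-sentence")
model = AutoModel.from_pretrained("DeepPavlov/bert-base-multilingual-cased-sentence")

sentences = ["This is a multilingual sentence encoder.", "Ceci est un encodeur de phrases multilingue."]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded)[0]  # (batch, seq_len, hidden)

# Mean pooling over non-padding tokens
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embeddings.shape)  # e.g. torch.Size([2, 768])
```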
deepset/gbert-base
4a45e506eccc3405ed2e2a0502995d3f7e483509
2022-02-17T14:05:19.000Z
[ "pytorch", "tf", "fill-mask", "de", "dataset:wikipedia", "dataset:OPUS", "dataset:OpenLegalData", "arxiv:2010.10906", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
deepset
null
deepset/gbert-base
36,687
13
transformers
380
--- language: de license: mit datasets: - wikipedia - OPUS - OpenLegalData --- # German BERT base Released in October 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model and show that it outperforms its predecessors. ## Overview **Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf) **Architecture:** BERT base **Language:** German ## Performance ``` GermEval18 Coarse: 78.17 GermEval18 Fine: 50.90 GermEval14: 87.98 ``` See also: deepset/gbert-base deepset/gbert-large deepset/gelectra-base deepset/gelectra-large deepset/gelectra-base-generator deepset/gelectra-large-generator ## Authors Branden Chan: `branden.chan [at] deepset.ai` Stefan Schweter: `stefan [at] schweter.eu` Timo Möller: `timo.moeller [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
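The card does not include a usage snippet; a minimal sketch with the `transformers` fill-mask pipeline (assuming the standard BERT `[MASK]` token for this checkpoint):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="deepset/gbert-base")

# Example German sentence with a masked token
for prediction in unmasker("Die Hauptstadt von Deutschland ist [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```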
sentence-transformers/msmarco-distilbert-base-v4
62b749054617919f8d1e8462a987edea4b998e3c
2022-06-15T19:32:25.000Z
[ "pytorch", "tf", "distilbert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/msmarco-distilbert-base-v4
36,505
1
sentence-transformers
381
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/msmarco-distilbert-base-v4 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-v4') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-base-v4') model = AutoModel.from_pretrained('sentence-transformers/msmarco-distilbert-base-v4') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-base-v4) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
M-CLIP/M-BERT-Base-ViT-B
5da718394f8f62314bb080b1e989e61f5e3ce026
2021-05-18T21:34:39.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
M-CLIP
null
M-CLIP/M-BERT-Base-ViT-B
36,232
5
transformers
382
<br /> <p align="center"> <h1 align="center">M-BERT Base ViT-B</h1> <p align="center"> <a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Base%20ViT-B">Github Model Card</a> </p> </p> ## Usage To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP). Once this is done, you can load and use the model with the following code ```python from src import multilingual_clip model = multilingual_clip.load_model('M-BERT-Base-ViT') embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?']) print(embeddings.shape) # Yields: torch.Size([3, 640]) ``` <!-- ABOUT THE PROJECT --> ## About A [BERT-base-multilingual](https://huggingface.co/bert-base-multilingual-cased) model tuned, for [69 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md), to match the embedding space of the CLIP text encoder which accompanies the ViT-B/32 vision encoder. <br> A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 69 languages used during fine-tuning can be found in [SupportedLanguages.md](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md). Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into the corresponding language. All translation was done using the [AWS translate service](https://aws.amazon.com/translate/); the quality of these translations has currently not been analyzed, but one can assume the quality varies between the 69 languages.
ntu-spml/distilhubert
9c4eece5b1dd98770108a416c101096fb04813de
2021-11-05T12:43:24.000Z
[ "pytorch", "hubert", "feature-extraction", "en", "dataset:librispeech_asr", "arxiv:2110.01900", "transformers", "speech", "license:apache-2.0" ]
feature-extraction
false
ntu-spml
null
ntu-spml/distilhubert
36,130
7
transformers
383
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # DistilHuBERT [DistilHuBERT by NTU Speech Processing & Machine Learning Lab](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller) The base model pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model. Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900) Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee **Abstract** Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech. The original model can be found under https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
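For plain feature extraction (as opposed to speech recognition, which requires the fine-tuning described above), a minimal sketch with `transformers` follows; the dummy waveform and the use of `AutoFeatureExtractor`/`HubertModel` are assumptions made for illustration:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, HubertModel

feature_extractor = AutoFeatureExtractor.from_pretrained("ntu-spml/distilhubert")
model = HubertModel.from_pretrained("ntu-spml/distilhubert")

# One second of silence as a stand-in; replace with real speech sampled at 16 kHz
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state

print(hidden_states.shape)  # (batch, frames, hidden_size)
```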
bigscience/bloom
d9bf58e6d318c7760664d16167a62debfd237554
2022-07-29T09:32:01.000Z
[ "pytorch", "tensorboard", "bloom", "feature-extraction", "ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zu", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "transformers", "license:bigscience-bloom-rail-1.0", "text-generation", "model-index" ]
text-generation
false
bigscience
null
bigscience/bloom
36,017
712
transformers
384
--- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zu programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript pipeline_tag: text-generation widget: - text: 'A "whatpu" is a small, furry animal native to Tanzania. An example of a sentence that uses the word whatpu is: We were traveling in Africa and we saw these very cute whatpus. | To do a "farduddle" means to jump up and down really fast. An example of a sentence that uses the word farduddle is:' example_title: Imaginary word group: English - text: 'Un "whatpu" est un petit animal à fourrure originaire de Tanzanie. Un exemple de phrase qui utilise le mot whatpu est: Nous étions en Afrique et nous avons vu des whatpus trop mignons. Faire un "farduddle" veut dire sauter sur place vraiment vite. Un exemple de phrase qui utilise le mot farduddle est:' example_title: Imaginary word group: French - text: 'Un "whatpu" es un pequeño animal peludo nativo de Tanzania. Un ejemplo de una oración que usa la palabra whatpu es: Estábamos viajando por África y vimos estos whatpus muy bonitos. Hacer un "farduddle" significa saltar arriba y abajo muy rápido. Un ejemplo de una oración que usa la palabra farduddle es:' example_title: Imaginary word group: Spanish - text: ' ال"واتبو" هو حيوان صغير مكسو بالفراء يعيش في تنزانيا. مثال على جملة تستخدم كلمة واتبو هي: كنا نسافر في افريقيا و رأينا هؤلاء الواتبو اللطفاء. للقيام ب"فاردادل" يعني ان تقفز للأعلى و الأسفل بسرعة كبيرة. مثال على جملة تستخدم كلمة فاردادل هي:' example_title: Imaginary word group: Arabic - text: 'Um "whatpu" é um pequeno animal peludo nativo da Tanzânia. Um exemplo de uma frase que usa a palavra whatpu é: Estávamos a viajar por África e vimos uns whatpus muito queridos. Fazer um "farduddle" significa saltar para cima e para baixo muito rápido. Um exemplo de uma frase que usa a palavra farduddle é:' example : Imaginary word group: Portuguese - text: Pour déguster un ortolan, il faut tout d'abord example_title: Recipe group: French - text: |- 34+10=44 54+20= example_title: Addition group: Math - text: |- This tool converts irregular verbs to past tense. Arise - Arose Become - Became Forget - Forgot Freeze - example_title: Irregular verbs group: English - text: |- Please unscramble the letters into a word, and write that word: r e!c.i p r o.c a/l = reciprocal d.o m i!n a n.t = example_title: Word unscrambling group: English - text: |- Estos ejemplos quitan vocales de las palabras Ejemplos: hola - hl manzana - mnzn papas - pps alacran - lcrn papa - example_title: Vowel removal group: Spanish - text: |- Traduce español de España a español de Argentina El coche es rojo - el auto es rojo El ordenador es nuevo - la computadora es nueva el boligrafo es negro - lapicera es negra la nevera example_title: Spanish to Argentinian Spanish group: Spanish - text: To say "I love you" in Hindi, you would say example_title: Translation to Hindi group: English - text: To say "I love you" in Hindi, you would say example_title: Translation from English group: Hindi - text: 'Poor English: She no went to the market. Corrected English:' example_title: Grammar exercise 1 group: English - text: 'استخراج العدد العاملي في لغة بايثون:' example_title: Code generation group: Arabic - text: 'Regexp. 
Here is a regular expression to match a word starting with a number and then having only vowels:' example_title: Regular expressions group: English - text: |- Do a hello world in different languages: Python: print("hello world") R: example_title: Code generation group: English - text: |- Which is the correct preposition? I'm born X July. X is the preposition in He sat X a chair. X is the preposition on She drove X the bridge. X is the preposition example_title: Grammar exercise 2 group: English - text: |- Dans cet essai je vais m'interroger sur la conscience des modèles d'intelligence artificielle récents comme les modèles de langue. Pour commencer, je m'intéresserai à la notion de conscience et à ce qui la caractérise. Ensuite, j'aborderai la question de l'intelligence et de son lien avec le langage. Enfin, dans une dernière partie je me pencherai sur le cas de l'IA et sur sa conscience. Traduction en espagnol: « example_title: Translation to Spanish group: French - text: |- Dans cet essai je vais m'interroger sur la conscience des modèles d'intelligence artificielle récents comme les modèles de langue. Pour commencer, je m'intéresserai à la notion de conscience et à ce qui la caractérise. Ensuite, j'aborderai la question de l'intelligence et de son lien avec le langage. Enfin, dans une dernière partie je me pencherai sur le cas de l'IA et sur sa conscience. Traduction en espagnol: « example_title: Translation from French group: Spanish - text: ذات مرة ، عاش شبل الدب في الغابة example_title: Fairy tale group: Arabic - text: एक बार की बात है, जंगल में एक भालू का शावक रहता था example_title: Fairy tale group: Hindi - text: Il était une fois une licorne qui vivait example_title: Fairy tale group: French - text: |- Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the gold balls are blue. How many blue golf balls are there? A: Let's think step by step. 
example_title: Mathematical reasoning group: English model-index: - name: bloom results: - task: type: text-generation name: text generation dataset: name: arc_challenge type: arc_challenge metrics: - name: acc type: acc value: 0.4112627986348123 verified: false - task: type: text-generation name: text generation dataset: name: arc_easy type: arc_easy metrics: - name: acc type: acc value: 0.726010101010101 verified: false - task: type: text-generation name: text generation dataset: name: axb type: axb metrics: - name: acc type: acc value: 0.5751811594202898 verified: false - task: type: text-generation name: text generation dataset: name: axg type: axg metrics: - name: acc type: acc value: 0.5252808988764045 verified: false - task: type: text-generation name: text generation dataset: name: boolq type: boolq metrics: - name: acc type: acc value: 0.6345565749235474 verified: false - task: type: text-generation name: text generation dataset: name: cb type: cb metrics: - name: acc type: acc value: 0.3392857142857143 verified: false - task: type: text-generation name: text generation dataset: name: cola type: cola metrics: - name: acc type: acc value: 0.39022051773729627 verified: false - task: type: text-generation name: text generation dataset: name: copa type: copa metrics: - name: acc type: acc value: 0.56 verified: false - task: type: text-generation name: text generation dataset: name: crows_pairs_english type: crows_pairs_english metrics: - name: acc type: acc value: 0.5 verified: false - task: type: text-generation name: text generation dataset: name: crows_pairs_french type: crows_pairs_french metrics: - name: acc type: acc value: 0.505664877757901 verified: false - task: type: text-generation name: text generation dataset: name: diabla type: diabla metrics: - name: acc type: acc value: 0.2947981906750174 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_afr type: gsarti/flores_101_afr metrics: - name: byte_perplexity type: byte_perplexity value: 4.25431550058444 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_amh type: gsarti/flores_101_amh metrics: - name: byte_perplexity type: byte_perplexity value: 3.716877477347089 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ara type: gsarti/flores_101_ara metrics: - name: byte_perplexity type: byte_perplexity value: 1.7049030137120964 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_asm type: gsarti/flores_101_asm metrics: - name: byte_perplexity type: byte_perplexity value: 6.576581380404954 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ast type: gsarti/flores_101_ast metrics: - name: byte_perplexity type: byte_perplexity value: 2.8562364775797944 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_azj type: gsarti/flores_101_azj metrics: - name: byte_perplexity type: byte_perplexity value: 4.80721528624391 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_bel type: gsarti/flores_101_bel metrics: - name: byte_perplexity type: byte_perplexity value: 2.7312177406635065 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ben type: gsarti/flores_101_ben metrics: - name: byte_perplexity type: byte_perplexity value: 5.993409478990023 
verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_bos type: gsarti/flores_101_bos metrics: - name: byte_perplexity type: byte_perplexity value: 3.5936169095529493 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_bul type: gsarti/flores_101_bul metrics: - name: byte_perplexity type: byte_perplexity value: 2.159035321398085 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_cat type: gsarti/flores_101_cat metrics: - name: byte_perplexity type: byte_perplexity value: 2.167873680006659 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ceb type: gsarti/flores_101_ceb metrics: - name: byte_perplexity type: byte_perplexity value: 5.286975089885673 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ces type: gsarti/flores_101_ces metrics: - name: byte_perplexity type: byte_perplexity value: 3.4516208322236017 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ckb type: gsarti/flores_101_ckb metrics: - name: byte_perplexity type: byte_perplexity value: 3.7051034724765612 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_cym type: gsarti/flores_101_cym metrics: - name: byte_perplexity type: byte_perplexity value: 7.0889312398688125 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_dan type: gsarti/flores_101_dan metrics: - name: byte_perplexity type: byte_perplexity value: 3.4300748208111838 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_deu type: gsarti/flores_101_deu metrics: - name: byte_perplexity type: byte_perplexity value: 2.3380585896268107 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ell type: gsarti/flores_101_ell metrics: - name: byte_perplexity type: byte_perplexity value: 1.9595604725375586 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_eng type: gsarti/flores_101_eng metrics: - name: byte_perplexity type: byte_perplexity value: 1.8819637649637901 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_est type: gsarti/flores_101_est metrics: - name: byte_perplexity type: byte_perplexity value: 5.773850600380297 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_fas type: gsarti/flores_101_fas metrics: - name: byte_perplexity type: byte_perplexity value: 2.4306140728294086 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_fin type: gsarti/flores_101_fin metrics: - name: byte_perplexity type: byte_perplexity value: 4.304305536244342 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_fra type: gsarti/flores_101_fra metrics: - name: byte_perplexity type: byte_perplexity value: 1.9374688438541796 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ful type: gsarti/flores_101_ful metrics: - name: byte_perplexity type: byte_perplexity value: 9.740353097219378 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_gle type: gsarti/flores_101_gle 
metrics: - name: byte_perplexity type: byte_perplexity value: 6.035269765075012 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_glg type: gsarti/flores_101_glg metrics: - name: byte_perplexity type: byte_perplexity value: 2.365451129546636 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_guj type: gsarti/flores_101_guj metrics: - name: byte_perplexity type: byte_perplexity value: 5.70676742569154 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_hau type: gsarti/flores_101_hau metrics: - name: byte_perplexity type: byte_perplexity value: 8.855204288260023 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_heb type: gsarti/flores_101_heb metrics: - name: byte_perplexity type: byte_perplexity value: 2.920943798471208 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_hin type: gsarti/flores_101_hin metrics: - name: byte_perplexity type: byte_perplexity value: 5.452028001573195 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_hrv type: gsarti/flores_101_hrv metrics: - name: byte_perplexity type: byte_perplexity value: 3.7056829077179225 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_hun type: gsarti/flores_101_hun metrics: - name: byte_perplexity type: byte_perplexity value: 4.058579478967854 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_hye type: gsarti/flores_101_hye metrics: - name: byte_perplexity type: byte_perplexity value: 3.127237816041562 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ibo type: gsarti/flores_101_ibo metrics: - name: byte_perplexity type: byte_perplexity value: 3.9500357969906683 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ind type: gsarti/flores_101_ind metrics: - name: byte_perplexity type: byte_perplexity value: 1.976163584180101 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_isl type: gsarti/flores_101_isl metrics: - name: byte_perplexity type: byte_perplexity value: 5.500542085165231 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ita type: gsarti/flores_101_ita metrics: - name: byte_perplexity type: byte_perplexity value: 2.314465100752677 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_jav type: gsarti/flores_101_jav metrics: - name: byte_perplexity type: byte_perplexity value: 4.942322446550142 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_jpn type: gsarti/flores_101_jpn metrics: - name: byte_perplexity type: byte_perplexity value: 2.259421750521777 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kam type: gsarti/flores_101_kam metrics: - name: byte_perplexity type: byte_perplexity value: 9.743025325635475 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kan type: gsarti/flores_101_kan metrics: - name: byte_perplexity type: byte_perplexity value: 6.233724699944989 verified: false - task: type: text-generation name: text 
generation dataset: name: gsarti/flores_101_kat type: gsarti/flores_101_kat metrics: - name: byte_perplexity type: byte_perplexity value: 2.0508893415872107 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kaz type: gsarti/flores_101_kaz metrics: - name: byte_perplexity type: byte_perplexity value: 3.0390148516287927 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kea type: gsarti/flores_101_kea metrics: - name: byte_perplexity type: byte_perplexity value: 7.147132270533836 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_khm type: gsarti/flores_101_khm metrics: - name: byte_perplexity type: byte_perplexity value: 3.366514710252477 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kir type: gsarti/flores_101_kir metrics: - name: byte_perplexity type: byte_perplexity value: 3.2413845359487885 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kor type: gsarti/flores_101_kor metrics: - name: byte_perplexity type: byte_perplexity value: 2.9023196482741027 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_lao type: gsarti/flores_101_lao metrics: - name: byte_perplexity type: byte_perplexity value: 2.331446855837494 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_lav type: gsarti/flores_101_lav metrics: - name: byte_perplexity type: byte_perplexity value: 5.223609016485348 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_lin type: gsarti/flores_101_lin metrics: - name: byte_perplexity type: byte_perplexity value: 4.847471204107301 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_lit type: gsarti/flores_101_lit metrics: - name: byte_perplexity type: byte_perplexity value: 4.5432035498036765 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ltz type: gsarti/flores_101_ltz metrics: - name: byte_perplexity type: byte_perplexity value: 5.5910516978201015 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_lug type: gsarti/flores_101_lug metrics: - name: byte_perplexity type: byte_perplexity value: 5.4301049946044175 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_luo type: gsarti/flores_101_luo metrics: - name: byte_perplexity type: byte_perplexity value: 12.031029857399394 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mal type: gsarti/flores_101_mal metrics: - name: byte_perplexity type: byte_perplexity value: 4.794302548141229 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mar type: gsarti/flores_101_mar metrics: - name: byte_perplexity type: byte_perplexity value: 6.856682255407709 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mkd type: gsarti/flores_101_mkd metrics: - name: byte_perplexity type: byte_perplexity value: 2.3354144607382983 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mlt type: gsarti/flores_101_mlt metrics: - name: byte_perplexity type: byte_perplexity value: 
9.04135227904975 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mon type: gsarti/flores_101_mon metrics: - name: byte_perplexity type: byte_perplexity value: 3.094907723618666 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mri type: gsarti/flores_101_mri metrics: - name: byte_perplexity type: byte_perplexity value: 5.2659698341456505 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_msa type: gsarti/flores_101_msa metrics: - name: byte_perplexity type: byte_perplexity value: 2.2220779892820985 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mya type: gsarti/flores_101_mya metrics: - name: byte_perplexity type: byte_perplexity value: 2.5229159853414433 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_nld type: gsarti/flores_101_nld metrics: - name: byte_perplexity type: byte_perplexity value: 2.799153089002766 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_nob type: gsarti/flores_101_nob metrics: - name: byte_perplexity type: byte_perplexity value: 3.628942049758715 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_npi type: gsarti/flores_101_npi metrics: - name: byte_perplexity type: byte_perplexity value: 6.666236527803879 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_nso type: gsarti/flores_101_nso metrics: - name: byte_perplexity type: byte_perplexity value: 5.015319074943932 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_nya type: gsarti/flores_101_nya metrics: - name: byte_perplexity type: byte_perplexity value: 4.938044040751036 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_oci type: gsarti/flores_101_oci metrics: - name: byte_perplexity type: byte_perplexity value: 3.607440766288032 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_orm type: gsarti/flores_101_orm metrics: - name: byte_perplexity type: byte_perplexity value: 11.31585044916705 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ory type: gsarti/flores_101_ory metrics: - name: byte_perplexity type: byte_perplexity value: 5.981891184515959 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_pan type: gsarti/flores_101_pan metrics: - name: byte_perplexity type: byte_perplexity value: 4.7716086841502685 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_pol type: gsarti/flores_101_pol metrics: - name: byte_perplexity type: byte_perplexity value: 3.01200174157614 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_por type: gsarti/flores_101_por metrics: - name: byte_perplexity type: byte_perplexity value: 1.8411472115156693 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_pus type: gsarti/flores_101_pus metrics: - name: byte_perplexity type: byte_perplexity value: 4.623872921169341 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ron type: 
gsarti/flores_101_ron metrics: - name: byte_perplexity type: byte_perplexity value: 3.049829411973529 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_rus type: gsarti/flores_101_rus metrics: - name: byte_perplexity type: byte_perplexity value: 1.7083443875791493 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_slk type: gsarti/flores_101_slk metrics: - name: byte_perplexity type: byte_perplexity value: 4.037719650548048 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_slv type: gsarti/flores_101_slv metrics: - name: byte_perplexity type: byte_perplexity value: 4.141036287764831 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_sna type: gsarti/flores_101_sna metrics: - name: byte_perplexity type: byte_perplexity value: 4.7109183690601295 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_snd type: gsarti/flores_101_snd metrics: - name: byte_perplexity type: byte_perplexity value: 4.206170931541356 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_som type: gsarti/flores_101_som metrics: - name: byte_perplexity type: byte_perplexity value: 9.154342083821405 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_spa type: gsarti/flores_101_spa metrics: - name: byte_perplexity type: byte_perplexity value: 1.7955816311143258 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_srp type: gsarti/flores_101_srp metrics: - name: byte_perplexity type: byte_perplexity value: 2.241096141430147 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_swe type: gsarti/flores_101_swe metrics: - name: byte_perplexity type: byte_perplexity value: 3.344977179674293 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_swh type: gsarti/flores_101_swh metrics: - name: byte_perplexity type: byte_perplexity value: 2.6844272218041634 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tam type: gsarti/flores_101_tam metrics: - name: byte_perplexity type: byte_perplexity value: 5.1645951632801745 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tel type: gsarti/flores_101_tel metrics: - name: byte_perplexity type: byte_perplexity value: 6.8098996634099445 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tgk type: gsarti/flores_101_tgk metrics: - name: byte_perplexity type: byte_perplexity value: 3.785457016715163 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tgl type: gsarti/flores_101_tgl metrics: - name: byte_perplexity type: byte_perplexity value: 3.7498953645610875 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tha type: gsarti/flores_101_tha metrics: - name: byte_perplexity type: byte_perplexity value: 2.104151663233468 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tur type: gsarti/flores_101_tur metrics: - name: byte_perplexity type: byte_perplexity value: 3.3178240103796037 verified: false - task: type: 
text-generation name: text generation dataset: name: gsarti/flores_101_ukr type: gsarti/flores_101_ukr metrics: - name: byte_perplexity type: byte_perplexity value: 2.088543437159643 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_umb type: gsarti/flores_101_umb metrics: - name: byte_perplexity type: byte_perplexity value: 11.766013385445124 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_urd type: gsarti/flores_101_urd metrics: - name: byte_perplexity type: byte_perplexity value: 1.7788699847612357 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_uzb type: gsarti/flores_101_uzb metrics: - name: byte_perplexity type: byte_perplexity value: 8.499879863290486 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_vie type: gsarti/flores_101_vie metrics: - name: byte_perplexity type: byte_perplexity value: 1.65901207387262 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_wol type: gsarti/flores_101_wol metrics: - name: byte_perplexity type: byte_perplexity value: 6.141703791276928 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_xho type: gsarti/flores_101_xho metrics: - name: byte_perplexity type: byte_perplexity value: 4.690199677955254 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_yor type: gsarti/flores_101_yor metrics: - name: byte_perplexity type: byte_perplexity value: 4.360585696242932 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_zho_simpl type: gsarti/flores_101_zho_simpl metrics: - name: byte_perplexity type: byte_perplexity value: 2.1183545781883515 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_zho_trad type: gsarti/flores_101_zho_trad metrics: - name: byte_perplexity type: byte_perplexity value: 2.273787884962656 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_zul type: gsarti/flores_101_zul metrics: - name: byte_perplexity type: byte_perplexity value: 6.016954767729589 verified: false - task: type: text-generation name: text generation dataset: name: headqa type: headqa metrics: - name: acc type: acc value: 0.3464624361779723 verified: false - task: type: text-generation name: text generation dataset: name: hellaswag type: hellaswag metrics: - name: acc type: acc value: 0.5353515236008763 verified: false - task: type: text-generation name: text generation dataset: name: lambada_mt_de type: lambada_mt_de metrics: - name: acc type: acc value: 0.3291286629148069 verified: false - task: type: text-generation name: text generation dataset: name: lambada_mt_en type: lambada_mt_en metrics: - name: acc type: acc value: 0.6720357073549389 verified: false - task: type: text-generation name: text generation dataset: name: lambada_mt_es type: lambada_mt_es metrics: - name: acc type: acc value: 0.476421502037648 verified: false - task: type: text-generation name: text generation dataset: name: lambada_mt_it type: lambada_mt_it metrics: - name: acc type: acc value: 0.4061711624296526 verified: false - task: type: text-generation name: text generation dataset: name: logiqa type: logiqa metrics: - name: acc type: acc value: 0.2350230414746544 verified: false - task: type: 
text-generation name: text generation dataset: name: mathqa type: mathqa metrics: - name: acc type: acc value: 0.27671691792294806 verified: false - task: type: text-generation name: text generation dataset: name: mc_taco type: mc_taco metrics: - name: em type: em value: 0.13063063063063063 verified: false - task: type: text-generation name: text generation dataset: name: mnli type: mnli metrics: - name: acc type: acc value: 0.3545565500406835 verified: false - task: type: text-generation name: text generation dataset: name: mnli_mismatched type: mnli_mismatched metrics: - name: acc type: acc value: 0.3545565500406835 verified: false - task: type: text-generation name: text generation dataset: name: mrpc type: mrpc metrics: - name: acc type: acc value: 0.3872549019607843 verified: false - task: type: text-generation name: text generation dataset: name: multirc type: multirc metrics: - name: acc type: acc value: 0.570957095709571 verified: false - task: type: text-generation name: text generation dataset: name: openbookqa type: openbookqa metrics: - name: acc type: acc value: 0.312 verified: false - task: type: text-generation name: text generation dataset: name: piqa type: piqa metrics: - name: acc type: acc value: 0.7812840043525572 verified: false - task: type: text-generation name: text generation dataset: name: prost type: prost metrics: - name: acc type: acc value: 0.2977156276686593 verified: false - task: type: text-generation name: text generation dataset: name: pubmedqa type: pubmedqa metrics: - name: acc type: acc value: 0.741 verified: false - task: type: text-generation name: text generation dataset: name: qnli type: qnli metrics: - name: acc type: acc value: 0.5172981878088962 verified: false - task: type: text-generation name: text generation dataset: name: qqp type: qqp metrics: - name: acc type: acc value: 0.5883007667573584 verified: false - task: type: text-generation name: text generation dataset: name: race type: race metrics: - name: acc type: acc value: 0.39043062200956935 verified: false - task: type: text-generation name: text generation dataset: name: rte type: rte metrics: - name: acc type: acc value: 0.5198555956678701 verified: false - task: type: text-generation name: text generation dataset: name: sciq type: sciq metrics: - name: acc type: acc value: 0.936 verified: false - task: type: text-generation name: text generation dataset: name: sst type: sst metrics: - name: acc type: acc value: 0.6043577981651376 verified: false - task: type: text-generation name: text generation dataset: name: triviaqa type: triviaqa metrics: - name: acc type: acc value: 0.18332891363917617 verified: false - task: type: text-generation name: text generation dataset: name: tydiqa_primary type: tydiqa_primary metrics: - name: acc type: acc value: 0.2809817301342725 verified: false - task: type: text-generation name: text generation dataset: name: webqs type: webqs metrics: - name: acc type: acc value: 0.061515748031496065 verified: false - task: type: text-generation name: text generation dataset: name: wic type: wic metrics: - name: acc type: acc value: 0.5062695924764891 verified: false - task: type: text-generation name: text generation dataset: name: winogrande type: winogrande metrics: - name: acc type: acc value: 0.7095501183898973 verified: false - task: type: text-generation name: text generation dataset: name: wnli type: wnli metrics: - name: acc type: acc value: 0.5704225352112676 verified: false - task: type: text-generation name: text generation dataset: name: wsc type: 
wsc metrics: - name: acc type: acc value: 0.5192307692307693 verified: false - task: type: text-generation name: text generation dataset: name: humaneval type: humaneval metrics: - name: pass@1 type: pass@1 value: 0.15524390243902436 verified: false - name: pass@10 type: pass@10 value: 0.3220367632383857 verified: false - name: pass@100 type: pass@100 value: 0.5545431515723145 verified: false --- <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> BigScience Large Open-science Open-access Multilingual Language Model Version 1.3 / 6 July 2022 Current Checkpoint: **Training Iteration 95000** Total seen tokens: **366B** --- # Model Details BLOOM is an autoregressive Large Language Model (LLM), trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. As such, it is able to output coherent text in 46 languages and 13 programming languages that is hardly distinguishable from text written by humans. BLOOM can also be instructed to perform text tasks it hasn't been explicitly trained for, by casting them as text generation tasks. ## Basics *This section provides information about the model type, version, license, funders, release date, developers, and contact information.* *It is useful for anyone who wants to reference the model.* <details> <summary>Click to expand</summary> **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) *All collaborators are either volunteers or have an agreement with their employer. (Further breakdown of participants forthcoming.)* **Model Type:** Transformer-based Language Model **Checkpoints format:** `transformers` (Megatron-DeepSpeed format available [here](https://huggingface.co/bigscience/bloom-optimizer-states)) **Version:** 1.0.0 **Languages:** Multiple; see [training data](#training-data) **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license) / [article and FAQ](https://bigscience.huggingface.co/blog/the-bigscience-rail-license)) **Release Date Estimate:** Monday, 11.July.2022 **Send Questions to:** bigscience-contact@googlegroups.com **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* </details> ## Technical Specifications *This section includes details about the model objective and architecture, and the compute infrastructure.* *It is useful for people interested in model development.* <details> <summary>Click to expand</summary> Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. 
### Model Architecture and Objective

* Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBi positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 176 billion parameters:
    * 70 layers, 112 attention heads
    * Hidden layers are 14336-dimensional
    * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))

**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).

### Compute infrastructure

Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).

#### Hardware

* 384 A100 80GB GPUs (48 nodes)
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node, using NVLink 4 inter-GPU connects and 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes

#### Software

* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))

</details>

---

# Training

*This section provides information about the training data, the speed and size of training elements, and the environmental impact of training.*

*It is useful for people who want to learn more about the model inputs and training footprint.*

<details>
<summary>Click to expand</summary>

## Training Data

*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*

Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus), and the sizes of each of their contributions to the aggregated training data are presented in an [Interactive Corpus Map](https://huggingface.co/spaces/bigscience-catalogue-lm-data/corpus-map).

Training data includes:

- 46 natural languages
- 13 programming languages
- In 1.6TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)

### Languages

The pie chart shows the distribution of languages in training data.

![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_v2.svg?raw=true)

The following tables show the further distribution of Niger-Congo & Indic languages and programming languages in the training data.

Distribution of Niger Congo and Indic languages.
| Niger Congo | Percentage | | Indic | Percentage | |----------------|------------| ------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Lingala | 0.0002 | | Malayalam | 0.10 | | Northern Sotho | 0.0002 | | Urdu | 0.10 | | Fon | 0.0002 | | Tamil | 0.20 | | Kirundi | 0.0003 | | Bengali | 0.50 | | Wolof | 0.0004 | | Hindi | 0.70 | | Luganda | 0.0004 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | Distribution of programming languages. | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | ### Preprocessing **Tokenization:** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)), a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. ## Speeds, Sizes, Times Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11-176B-ml-logs/) - Dates: - Started 11th March, 2022 11:42am PST - Estimated end: 5th July, 2022 - Checkpoint size: - Bf16 weights: 329GB - Full checkpoint with optimizer states: 2.3TB - Training throughput: About 150 TFLOP per GPU per second - Number of epochs: 1 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments) - Server training location: Île-de-France, France ## Environmental Impact The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. **Estimated carbon emissions:** *(Forthcoming.)* **Estimated electricity usage:** *(Forthcoming.)* </details> --- # Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.* *It is useful for anyone considering using the model or who is affected by the model.* <details> <summary>Click to expand</summary> ## How to use This model can be easily used and deployed using HuggingFace's ecosystem. This needs `transformers` and `accelerate` installed. 
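A minimal usage sketch with `transformers` and `accelerate` (illustrative only: the prompt, `device_map="auto"`, and the generation settings are choices made for this example rather than part of the card, and the 176B checkpoint needs a correspondingly large amount of memory):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloom"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# device_map="auto" lets accelerate spread the very large weights over the available devices
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype="auto")

# BLOOM is a plain causal LM, so any task is cast as text continuation from a prompt
prompt = "BLOOM is a multilingual language model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```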
The model can be downloaded as follows: <img src="https://s3.amazonaws.com/moonup/production/uploads/1657271608456-62441d1d9fdefb55a0b7d12c.png" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> ## Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. ### Direct Use - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings ### Downstream Use - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### Out-of-scope Uses Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but may not be correct. Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### Misuse Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. 
This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ## Intended Users ### Direct Users - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups ### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) ### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM </details> --- # Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* <details> <summary>Click to expand</summary> Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs - Induce users into attributing human traits to it, such as sentience or consciousness </details> --- # Evaluation *This section describes the evaluation protocols and provides the results.* <details> <summary>Click to expand</summary> ## Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ## Factors *This section lists some different aspects of BLOOM models. 
Its focus is on aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ## Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Zero-shot evaluations:** # <span style="color:red"><b>WARNING:</b> These are <b>intermediate results</b></span> See this repository for JSON files: https://github.com/bigscience-workshop/evaluation-results | Task | Language | Metric | BLOOM-176B | OPT-175B* | |:--------|:-----------------|:------------------------|-------------:|------------:| | arc_challenge | eng | acc ↑ | 0.411 | 0.412 | | arc_easy | eng | acc ↑ | 0.726 | 0.751 | | axb (Median of 10 prompts) | eng | acc ↑ | 0.575 | 0.532 | | axg (Median of 10 prompts) | eng | acc ↑ | 0.525 | 0.548 | | boolq (Median of 11 prompts) | eng | acc ↑ | 0.635 | 0.622 | | cb (Median of 15 prompts) | eng | acc ↑ | 0.339 | 0.411 | | cola (Median of 5 prompts) | eng | acc ↑ | 0.39 | 0.444 | | copa (Median of 9 prompts) | eng | acc ↑ | 0.56 | 0.55 | | crows_pairs_english (Median of 6 prompts) | eng | acc ↑ | 0.5 | 0.502 | | crows_pairs_french (Median of 7 prompts) | fra | acc ↑ | 0.506 | 0.499 | | diabla (Median of 2 prompts) | eng | acc ↑ | 0.295 | 0.289 | | gsarti/flores_101_afr | afr | byte_perplexity ↓ | 4.254 | 3.381 | | gsarti/flores_101_amh | amh | byte_perplexity ↓ | 3.717 | 3.87 | | gsarti/flores_101_ara | ara | byte_perplexity ↓ | 1.705 | 2.42 | | gsarti/flores_101_asm | asm | byte_perplexity ↓ | 6.577 | 3.028 | | gsarti/flores_101_ast | ast | byte_perplexity ↓ | 2.856 | 4.737 | | gsarti/flores_101_azj | azj | byte_perplexity ↓ | 4.807 | 4.767 | | gsarti/flores_101_bel | bel | byte_perplexity ↓ | 2.731 | 2.557 | | gsarti/flores_101_ben | ben | byte_perplexity ↓ | 5.993 | 2.243 | | gsarti/flores_101_bos | bos | byte_perplexity ↓ | 3.594 | 2.668 | | gsarti/flores_101_bul | bul | byte_perplexity ↓ | 2.159 | 2.099 | | gsarti/flores_101_cat | cat | byte_perplexity ↓ | 2.168 | 2.837 | | gsarti/flores_101_ceb | ceb | byte_perplexity ↓ | 5.287 | 3.636 | | gsarti/flores_101_ces | ces | byte_perplexity ↓ | 3.452 | 2.749 | | gsarti/flores_101_ckb | ckb | byte_perplexity ↓ | 3.705 | 4.688 | | gsarti/flores_101_cym | cym | byte_perplexity ↓ | 7.089 | 5.075 | | gsarti/flores_101_dan | dan | byte_perplexity ↓ | 3.43 | 2.492 | | gsarti/flores_101_deu | deu | byte_perplexity ↓ | 2.338 | 2.099 | | gsarti/flores_101_ell | ell | byte_perplexity ↓ | 1.96 | 1.811 | | gsarti/flores_101_eng | eng | byte_perplexity ↓ | 1.882 | 1.9 | | gsarti/flores_101_est | est | byte_perplexity ↓ | 5.774 | 3.533 | | gsarti/flores_101_fas | fas | byte_perplexity ↓ | 2.431 | 2.444 | | gsarti/flores_101_fin | fin | byte_perplexity ↓ | 4.304 | 2.601 | | gsarti/flores_101_fra | fra | byte_perplexity ↓ | 1.937 | 1.984 | | gsarti/flores_101_ful | ful | byte_perplexity ↓ | 9.74 | 11.84 | | gsarti/flores_101_gle | gle | byte_perplexity ↓ | 6.035 | 3.914 | | gsarti/flores_101_glg | glg | byte_perplexity ↓ | 2.365 | 3.015 | | gsarti/flores_101_guj | guj | byte_perplexity ↓ | 5.707 | 2.438 | | gsarti/flores_101_hau | hau | byte_perplexity ↓ | 8.855 | 5.283 | | gsarti/flores_101_heb | heb | byte_perplexity ↓ | 2.921 | 2.903 | | gsarti/flores_101_hin | hin | byte_perplexity ↓ | 5.452 | 1.86 | | gsarti/flores_101_hrv | hrv | byte_perplexity ↓ | 3.706 | 2.715 | | gsarti/flores_101_hun | hun | byte_perplexity ↓ | 4.059 | 2.865 | | gsarti/flores_101_hye | hye | byte_perplexity ↓ | 3.127 | 
3.411 | | gsarti/flores_101_ibo | ibo | byte_perplexity ↓ | 3.95 | 8.008 | | gsarti/flores_101_ind | ind | byte_perplexity ↓ | 1.976 | 2.632 | | gsarti/flores_101_isl | isl | byte_perplexity ↓ | 5.501 | 4.701 | | gsarti/flores_101_ita | ita | byte_perplexity ↓ | 2.314 | 2.104 | | gsarti/flores_101_jav | jav | byte_perplexity ↓ | 4.942 | 8.16 | | gsarti/flores_101_jpn | jpn | byte_perplexity ↓ | 2.259 | 2.198 | | gsarti/flores_101_kam | kam | byte_perplexity ↓ | 9.743 | 10.981 | | gsarti/flores_101_kan | kan | byte_perplexity ↓ | 6.234 | 2.373 | | gsarti/flores_101_kat | kat | byte_perplexity ↓ | 2.051 | 2.466 | | gsarti/flores_101_kaz | kaz | byte_perplexity ↓ | 3.039 | 4.376 | | gsarti/flores_101_kea | kea | byte_perplexity ↓ | 7.147 | 9.632 | | gsarti/flores_101_khm | khm | byte_perplexity ↓ | 3.367 | 2.646 | | gsarti/flores_101_kir | kir | byte_perplexity ↓ | 3.241 | 4.522 | | gsarti/flores_101_kor | kor | byte_perplexity ↓ | 2.902 | 3.376 | | gsarti/flores_101_lao | lao | byte_perplexity ↓ | 2.331 | 3.106 | | gsarti/flores_101_lav | lav | byte_perplexity ↓ | 5.224 | 4.811 | | gsarti/flores_101_lin | lin | byte_perplexity ↓ | 4.847 | 8.871 | | gsarti/flores_101_lit | lit | byte_perplexity ↓ | 4.543 | 5.183 | | gsarti/flores_101_ltz | ltz | byte_perplexity ↓ | 5.591 | 7.158 | | gsarti/flores_101_lug | lug | byte_perplexity ↓ | 5.43 | 7.399 | | gsarti/flores_101_luo | luo | byte_perplexity ↓ | 12.031 | 11.951 | | gsarti/flores_101_mal | mal | byte_perplexity ↓ | 4.794 | 2.054 | | gsarti/flores_101_mar | mar | byte_perplexity ↓ | 6.857 | 2.274 | | gsarti/flores_101_mkd | mkd | byte_perplexity ↓ | 2.335 | 2.538 | | gsarti/flores_101_mlt | mlt | byte_perplexity ↓ | 9.041 | 5.996 | | gsarti/flores_101_mon | mon | byte_perplexity ↓ | 3.095 | 4.519 | | gsarti/flores_101_mri | mri | byte_perplexity ↓ | 5.266 | 4.438 | | gsarti/flores_101_msa | msa | byte_perplexity ↓ | 2.222 | 2.935 | | gsarti/flores_101_mya | mya | byte_perplexity ↓ | 2.523 | 2.413 | | gsarti/flores_101_nld | nld | byte_perplexity ↓ | 2.799 | 2.293 | | gsarti/flores_101_nob | nob | byte_perplexity ↓ | 3.629 | 2.593 | | gsarti/flores_101_npi | npi | byte_perplexity ↓ | 6.666 | 2.499 | | gsarti/flores_101_nso | nso | byte_perplexity ↓ | 5.015 | 8.485 | | gsarti/flores_101_nya | nya | byte_perplexity ↓ | 4.938 | 7.548 | | gsarti/flores_101_oci | oci | byte_perplexity ↓ | 3.607 | 4.936 | | gsarti/flores_101_orm | orm | byte_perplexity ↓ | 11.316 | 7.145 | | gsarti/flores_101_ory | ory | byte_perplexity ↓ | 5.982 | 2.668 | | gsarti/flores_101_pan | pan | byte_perplexity ↓ | 4.772 | 2.782 | | gsarti/flores_101_pol | pol | byte_perplexity ↓ | 3.012 | 2.432 | | gsarti/flores_101_por | por | byte_perplexity ↓ | 1.841 | 2.178 | | gsarti/flores_101_pus | pus | byte_perplexity ↓ | 4.624 | 4.785 | | gsarti/flores_101_ron | ron | byte_perplexity ↓ | 3.05 | 2.197 | | gsarti/flores_101_rus | rus | byte_perplexity ↓ | 1.708 | 1.689 | | gsarti/flores_101_slk | slk | byte_perplexity ↓ | 4.038 | 3.419 | | gsarti/flores_101_slv | slv | byte_perplexity ↓ | 4.141 | 3.582 | | gsarti/flores_101_sna | sna | byte_perplexity ↓ | 4.711 | 5.588 | | gsarti/flores_101_snd | snd | byte_perplexity ↓ | 4.206 | 5.667 | | gsarti/flores_101_som | som | byte_perplexity ↓ | 9.154 | 4.788 | | gsarti/flores_101_spa | spa | byte_perplexity ↓ | 1.796 | 2.098 | | gsarti/flores_101_srp | srp | byte_perplexity ↓ | 2.241 | 2.688 | | gsarti/flores_101_swe | swe | byte_perplexity ↓ | 3.345 | 2.468 | | gsarti/flores_101_swh | swh | byte_perplexity ↓ | 2.684 | 4.473 | | 
gsarti/flores_101_tam | tam | byte_perplexity ↓ | 5.165 | 2.024 | | gsarti/flores_101_tel | tel | byte_perplexity ↓ | 6.81 | 2.407 | | gsarti/flores_101_tgk | tgk | byte_perplexity ↓ | 3.785 | 4.899 | | gsarti/flores_101_tgl | tgl | byte_perplexity ↓ | 3.75 | 2.738 | | gsarti/flores_101_tha | tha | byte_perplexity ↓ | 2.104 | 2.035 | | gsarti/flores_101_tur | tur | byte_perplexity ↓ | 3.318 | 2.622 | | gsarti/flores_101_ukr | ukr | byte_perplexity ↓ | 2.089 | 1.93 | | gsarti/flores_101_umb | umb | byte_perplexity ↓ | 11.766 | 11.64 | | gsarti/flores_101_urd | urd | byte_perplexity ↓ | 1.779 | 2.982 | | gsarti/flores_101_uzb | uzb | byte_perplexity ↓ | 8.5 | 13.209 | | gsarti/flores_101_vie | vie | byte_perplexity ↓ | 1.659 | 2.229 | | gsarti/flores_101_wol | wol | byte_perplexity ↓ | 6.142 | 13.945 | | gsarti/flores_101_xho | xho | byte_perplexity ↓ | 4.69 | 8.42 | | gsarti/flores_101_yor | yor | byte_perplexity ↓ | 4.361 | 7.636 | | gsarti/flores_101_zho_simpl | zho_simpl | byte_perplexity ↓ | 2.118 | 5.113 | | gsarti/flores_101_zho_trad | zho_trad | byte_perplexity ↓ | 2.274 | 5.67 | | gsarti/flores_101_zul | zul | byte_perplexity ↓ | 6.017 | 7.341 | | headqa | esp | acc ↑ | 0.346 | 0.244 | | hellaswag | eng | acc ↑ | 0.535 | 0.592 | | lambada_mt_de | deu | acc ↑ | 0.329 | 0.358 | | lambada_mt_en | eng | acc ↑ | 0.672 | 0.747 | | lambada_mt_es | esp | acc ↑ | 0.476 | 0.397 | | lambada_mt_it | ita | acc ↑ | 0.406 | 0.409 | | logiqa | eng | acc ↑ | 0.235 | 0.244 | | mathqa | eng | acc ↑ | 0.277 | 0.268 | | mc_taco | eng | em ↑ | 0.131 | 0.124 | | mnli (Median of 15 prompts) | eng | acc ↑ | 0.355 | 0.36 | | mnli_mismatched (Median of 15 prompts) | eng | acc ↑ | 0.355 | 0.36 | | mrpc | eng | acc ↑ | 0.387 | 0.446 | | multirc (Median of 11 prompts) | eng | acc ↑ | 0.571 | 0.599 | | openbookqa | eng | acc ↑ | 0.312 | 0.322 | | piqa | eng | acc ↑ | 0.781 | 0.791 | | prost | eng | acc ↑ | 0.298 | 0.299 | | pubmedqa | eng | acc ↑ | 0.741 | 0.709 | | qnli | eng | acc ↑ | 0.517 | 0.554 | | qqp (Median of 7 prompts) | eng | acc ↑ | 0.588 | 0.395 | | race | eng | acc ↑ | 0.39 | 0.402 | | rte (Median of 6 prompts) | eng | acc ↑ | 0.52 | 0.495 | | sciq | eng | acc ↑ | 0.936 | 0.948 | | sst (Median of 6 prompts) | eng | acc ↑ | 0.604 | 0.647 | | triviaqa | eng | acc ↑ | 0.183 | 0.342 | | tydiqa_primary (Median of 16 prompts) | eng | acc ↑ | 0.281 | 0.148 | | webqs | eng | acc ↑ | 0.062 | 0.159 | | wic (Median of 11 prompts) | eng | acc ↑ | 0.506 | 0.498 | | winogrande | eng | acc ↑ | 0.71 | 0.736 | | wnli (Median of 6 prompts) | eng | acc ↑ | 0.57 | 0.563 | | wsc (Median of 11 prompts) | eng | acc ↑ | 0.519 | 0.413 | | humaneval | python | pass@1 ↑ | 0.155 | 0.0 | | humaneval | python | pass@10 ↑ | 0.322 | 0.0 | | humaneval | python | pass@100 ↑ | 0.555 | 0.003 | **Train-time Evaluation:** Final checkpoint after 95K steps: - Training Loss: 1.939 - Validation Loss: 2.061 - Perplexity: 7.045 For more see: https://huggingface.co/bigscience/tr11-176B-ml-logs </details> --- # Recommendations *This section provides information on warnings and potential mitigations.* <details> <summary>Click to expand</summary> - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models trained or finetuned downstream of BLOOM LM should include an updated Model Card. 
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. </details> --- # Glossary and Calculations *This section defines common terms and how metrics are calculated.* <details> <summary>Click to expand</summary> - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). - <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. </details> --- # More Information *This section provides links to writing on dataset creation, technical specifications, lessons learned, and initial results.* <details> <summary>Click to expand</summary> ## Intermediate checkpoints For academic (or any) usage, we published the intermediate checkpoints, corresponding to the model state at each 5000 steps. Please follow [this link](https://huggingface.co/bigscience/bloom-176-intermediate) to get these checkpoints. 
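A hedged sketch of loading one of these intermediate states with `transformers` (the repository name is taken from the link above; the branch name below is hypothetical and only illustrates how `revision` would be used — check the repository for the actual naming):

```python
from transformers import AutoModelForCausalLM

repo = "bigscience/bloom-176-intermediate"  # repository linked above
step = "global_step5000"                    # hypothetical branch name, for illustration only

# Each intermediate state is assumed to live on its own branch/revision of the repo;
# `revision` selects which snapshot of the weights to download.
model = AutoModelForCausalLM.from_pretrained(repo, revision=step, torch_dtype="auto", device_map="auto")
```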
## Dataset Creation

Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling

## Technical Specifications

Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours

More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml

Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model

Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml

Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss

## Lessons

Insights on how to approach training, and negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md

Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, and many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md

## Initial Results

Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book

</details>

## Original checkpoints

The checkpoints in this repo correspond to the HuggingFace Transformers format. If you want to use our fork of [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed) that the model was trained with, you'd want to use [this repo instead](https://huggingface.co/bigscience/bloom-optimizer-states).

---

# Model Card Authors

*Ordered roughly chronologically and by amount of time spent.*

Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
beomi/KcELECTRA-base
686333e78646593e324d6ad5e955dfb6dc9f0f5d
2022-06-26T01:49:50.000Z
[ "pytorch", "tf", "electra", "pretraining", "transformers" ]
null
false
beomi
null
beomi/KcELECTRA-base
35,838
4
transformers
385
Entry not found
albert-xxlarge-v2
aaec31cf649a4d91a96b11f83eb5b2985eaf8ee5
2021-01-13T15:33:03.000Z
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
albert-xxlarge-v2
35,631
5
transformers
386
--- tags: - exbert language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT XXLarge v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the second version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 4096 hidden dimension - 64 attention heads - 223M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
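As a small, hedged sketch of that fine-tuning route (the number of labels, example sentence, and label value are placeholders, and a real run would wrap this in a training loop or `Trainer`):

```python
import torch
from transformers import AlbertForSequenceClassification, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v2")
# Adds a randomly initialised classification head on top of the pretrained encoder;
# num_labels=2 is a placeholder for whatever downstream task is being fine-tuned.
model = AlbertForSequenceClassification.from_pretrained("albert-xxlarge-v2", num_labels=2)

batch = tokenizer(["A sentence the downstream task should classify."], return_tensors="pt")
labels = torch.tensor([1])

outputs = model(**batch, labels=labels)
print(outputs.loss, outputs.logits)  # the loss is what fine-tuning would backpropagate
```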
### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v2') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] hello i'm a modeling model.[SEP]", "score":0.05816134437918663, "token":12807, "token_str":"▁modeling" }, { "sequence":"[CLS] hello i'm a modelling model.[SEP]", "score":0.03748830780386925, "token":23089, "token_str":"▁modelling" }, { "sequence":"[CLS] hello i'm a model model.[SEP]", "score":0.033725276589393616, "token":1061, "token_str":"▁model" }, { "sequence":"[CLS] hello i'm a runway model.[SEP]", "score":0.017313428223133087, "token":8014, "token_str":"▁runway" }, { "sequence":"[CLS] hello i'm a lingerie model.[SEP]", "score":0.014405295252799988, "token":29104, "token_str":"▁lingerie" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2') model = AlbertModel.from_pretrained("albert-xxlarge-v2") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2') model = TFAlbertModel.from_pretrained("albert-xxlarge-v2") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v2') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a chauffeur.[SEP]", "score":0.029577180743217468, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the man worked as a janitor.[SEP]", "score":0.028865724802017212, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the man worked as a shoemaker.[SEP]", "score":0.02581118606030941, "token":29024, "token_str":"▁shoemaker" }, { "sequence":"[CLS] the man worked as a blacksmith.[SEP]", "score":0.01849772222340107, "token":21238, "token_str":"▁blacksmith" }, { "sequence":"[CLS] the man worked as a lawyer.[SEP]", "score":0.01820771023631096, "token":3672, "token_str":"▁lawyer" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a receptionist.[SEP]", "score":0.04604868218302727, "token":25331, "token_str":"▁receptionist" }, { "sequence":"[CLS] the woman worked as a janitor.[SEP]", "score":0.028220869600772858, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the woman worked as a paramedic.[SEP]", "score":0.0261906236410141, "token":23386, "token_str":"▁paramedic" }, { "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", "score":0.024797942489385605, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the woman worked as a waitress.[SEP]", "score":0.024124596267938614, "token":13678, "token_str":"▁waitress" } ] ``` This bias will also affect all fine-tuned versions of this model. 
## Training data The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | |----------------|----------|----------|----------|----------|----------|----------| |V2 | |ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | |ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | |ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | |ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | |V1 | |ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | |ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | |ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | |ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=albert-xxlarge-v2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
sentence-transformers/nli-mpnet-base-v2
c388b46d029476cd6611aa9ed44d05272bbbacfb
2022-06-15T20:14:17.000Z
[ "pytorch", "tf", "mpnet", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/nli-mpnet-base-v2
35,533
1
sentence-transformers
387
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# sentence-transformers/nli-mpnet-base-v2

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/nli-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/nli-mpnet-base-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-mpnet-base-v2)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
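As the card above mentions semantic search as a target use, here is a minimal, illustrative sketch of ranking a few made-up candidate sentences against a made-up query by cosine similarity. Depending on your `sentence-transformers` version, the helper may be exposed as `util.pytorch_cos_sim` instead of `util.cos_sim`.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/nli-mpnet-base-v2')

query = "How do I bake bread at home?"            # made-up query
corpus = [
    "A recipe for homemade sourdough.",           # made-up candidates
    "The stock market fell sharply today.",
    "Tips for kneading and proofing dough.",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
scores = util.cos_sim(query_emb, corpus_emb)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {sentence}")
```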
facebook/mbart-large-cc25
2df0e6dd8a0e7f6df056fe4d0d95941a04b64e4f
2021-03-10T03:48:19.000Z
[ "pytorch", "mbart", "text2text-generation", "en", "ar", "cs", "de", "et", "fi", "fr", "gu", "hi", "it", "ja", "kk", "ko", "lt", "lv", "my", "ne", "nl", "ro", "ru", "si", "tr", "vi", "zh", "multilingual", "transformers", "translation", "autotrain_compatible" ]
translation
false
facebook
null
facebook/mbart-large-cc25
35,330
15
transformers
388
--- tags: - translation language: - en - ar - cs - de - et - fi - fr - gu - hi - it - ja - kk - ko - lt - lv - my - ne - nl - ro - ru - si - tr - vi - zh - multilingual --- #### mbart-large-cc25 Pretrained (not finetuned) multilingual mbart model. Original Languages ``` export langs=ar_AR,cs_CZ,de_DE,en_XX,es_XX,et_EE,fi_FI,fr_XX,gu_IN,hi_IN,it_IT,ja_XX,kk_KZ,ko_KR,lt_LT,lv_LV,my_MM,ne_NP,nl_XX,ro_RO,ru_RU,si_LK,tr_TR,vi_VN,zh_CN ``` Original Code: https://github.com/pytorch/fairseq/tree/master/examples/mbart Docs: https://huggingface.co/transformers/master/model_doc/mbart.html Finetuning Code: examples/seq2seq/finetune.py (as of Aug 20, 2020) Can also be finetuned for summarization.
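The card above has no usage snippet; the following is a rough sketch of loading the checkpoint in `transformers`. The input sentence is arbitrary, and since this checkpoint is pretrained with a denoising objective but not finetuned, the generated text mainly demonstrates the API rather than a useful translation or summary.

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25", src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

# Arbitrary example sentence
inputs = tokenizer("UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria", return_tensors="pt")

# The target-language code is passed as the decoder start token
generated = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```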
facebook/blenderbot_small-90M
a2a23a425b397872915db19bdee2522877eddc14
2021-12-02T08:09:04.000Z
[ "pytorch", "tf", "jax", "blenderbot-small", "text2text-generation", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "transformers", "convAI", "conversational", "facebook", "license:apache-2.0", "autotrain_compatible" ]
conversational
false
facebook
null
facebook/blenderbot_small-90M
35,264
12
transformers
389
--- language: - en thumbnail: tags: - convAI - conversational - facebook license: apache-2.0 datasets: - blended_skill_talk metrics: - perplexity --- ## Model description + Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616) + [Original PARLAI Code](https://parl.ai/projects/recipes/) ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
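The card above summarizes the paper but gives no usage snippet; here is a minimal, illustrative sketch of generating a reply with this checkpoint (the utterance is a made-up example).

```python
from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration

model_name = "facebook/blenderbot_small-90M"
tokenizer = BlenderbotSmallTokenizer.from_pretrained(model_name)
model = BlenderbotSmallForConditionalGeneration.from_pretrained(model_name)

utterance = "My friends are cool but they eat too many carbs."  # made-up example input
inputs = tokenizer([utterance], return_tensors="pt")

reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```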
classla/bcms-bertic-ner
4bd46a99b73827a3f6a095ceafa08b6933986dc0
2022-02-04T14:26:47.000Z
[ "pytorch", "electra", "token-classification", "hr", "bs", "sr", "cnr", "hbs", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
classla
null
classla/bcms-bertic-ner
35,225
2
transformers
390
---
language:
- hr
- bs
- sr
- cnr
- hbs
widget:
- text: "Zovem se Marko i živim u Zagrebu. Studirao sam u Beogradu na Filozofskom fakultetu. Obožavam album Moanin."
license: apache-2.0
---

# The [BERTić](https://huggingface.co/classla/bcms-bertic)&ast; [bert-ich] /bɜrtitʃ/ model fine-tuned for the task of named entity recognition in Bosnian, Croatian, Montenegrin and Serbian (BCMS)

&ast; The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well).

This is a fine-tuned version of the [BERTić](https://huggingface.co/classla/bcms-bertic) model for the task of named entity recognition (PER, LOC, ORG, MISC). The fine-tuning was performed on the following datasets:

- the [hr500k](http://hdl.handle.net/11356/1183) dataset, 500 thousand tokens in size, standard Croatian
- the [SETimes.SR](http://hdl.handle.net/11356/1200) dataset, 87 thousand tokens in size, standard Serbian
- the [ReLDI-hr](http://hdl.handle.net/11356/1241) dataset, 89 thousand tokens in size, Internet (Twitter) Croatian
- the [ReLDI-sr](http://hdl.handle.net/11356/1240) dataset, 92 thousand tokens in size, Internet (Twitter) Serbian

The data was augmented with missing diacritics, and standard data was additionally over-represented. The F1 obtained on dev data (train and test were merged into train) is 91.38. For a more detailed per-dataset evaluation of the BERTić model on the NER task, have a look at the [main model page](https://huggingface.co/classla/bcms-bertic).

If you use this fine-tuned model, please cite the following paper:

```
@inproceedings{ljubesic-lauc-2021-bertic,
    title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
    author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
    booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
    month = apr,
    year = "2021",
    address = "Kiyv, Ukraine",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
    pages = "37--42",
}
```

When running the model in `simpletransformers`, the order of labels has to be set as well.

```python
from simpletransformers.ner import NERModel, NERArgs

model_args = NERArgs()
model_args.labels_list = ['B-LOC','B-MISC','B-ORG','B-PER','I-LOC','I-MISC','I-ORG','I-PER','O']
model = NERModel('electra', 'classla/bcms-bertic-ner', args=model_args)
```
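Alternatively, the checkpoint should also work with the standard `transformers` token-classification pipeline. The sketch below is illustrative: it assumes the checkpoint's config carries the label mapping and reuses the widget sentence from above (on older `transformers` versions, replace `aggregation_strategy="simple"` with `grouped_entities=True`).

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("classla/bcms-bertic-ner")
model = AutoModelForTokenClassification.from_pretrained("classla/bcms-bertic-ner")

# Group sub-word predictions into whole entities
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Zovem se Marko i živim u Zagrebu."))
```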
sentence-transformers/paraphrase-distilroberta-base-v2
d9461390caf1e64923d00bc55fa02d3c1ed2b9e5
2022-06-15T19:42:26.000Z
[ "pytorch", "tf", "jax", "roberta", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/paraphrase-distilroberta-base-v2
35,187
3
sentence-transformers
391
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# sentence-transformers/paraphrase-distilroberta-base-v2

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/paraphrase-distilroberta-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-distilroberta-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-distilroberta-base-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-distilroberta-base-v2)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
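Since the card above lists clustering among the intended uses, here is a minimal, illustrative sketch that clusters a handful of made-up sentences with k-means over the embeddings (`scikit-learn` is an extra dependency, and the choice of two clusters is arbitrary).

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('sentence-transformers/paraphrase-distilroberta-base-v2')

# Made-up sentences spanning two rough topics
sentences = [
    "The cat sits outside.",
    "A man is playing guitar.",
    "The dog plays in the garden.",
    "A woman watches a concert.",
]

embeddings = model.encode(sentences)
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(embeddings)

for sentence, label in zip(sentences, kmeans.labels_):
    print(label, sentence)
```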
sentence-transformers/paraphrase-TinyBERT-L6-v2
8fe7263a517189c4a11a98f87db8ac964b235b5f
2022-06-15T20:12:46.000Z
[ "pytorch", "tf", "bert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/paraphrase-TinyBERT-L6-v2
35,010
null
sentence-transformers
392
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# sentence-transformers/paraphrase-TinyBERT-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/paraphrase-TinyBERT-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-TinyBERT-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-TinyBERT-L6-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-TinyBERT-L6-v2)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
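As this checkpoint targets paraphrase-style similarity, here is a minimal, illustrative sketch using the `paraphrase_mining` helper from `sentence_transformers.util` (the sentences are made up; very old library versions may not ship this helper).

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-TinyBERT-L6-v2')

# Made-up sentences; the goal is only to show the API
sentences = [
    "The new movie is awesome.",
    "The new film is so great.",
    "I bought groceries this morning.",
]

# Returns (score, index_a, index_b) triples, highest-scoring pairs first
for score, i, j in util.paraphrase_mining(model, sentences):
    print(f"{score:.3f}  {sentences[i]}  <->  {sentences[j]}")
```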
valhalla/t5-base-e2e-qg
c652651334cd5516f2bd0f0fb5303a01a678024e
2021-06-23T14:40:07.000Z
[ "pytorch", "t5", "text2text-generation", "dataset:squad", "arxiv:1910.10683", "transformers", "question-generation", "license:mit", "autotrain_compatible" ]
text2text-generation
false
valhalla
null
valhalla/t5-base-e2e-qg
34,949
2
transformers
393
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Python is a programming language. It is developed by Guido Van Rossum and released in 1991. </s>"
license: mit
---

## T5 for question-generation

This is a [t5-base](https://arxiv.org/abs/1910.10683) model trained for the end-to-end question generation task. Simply input the text and the model will generate multiple questions.

You can play with the model using the inference API: just put in the text and see the results!

For more details see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline

text = "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum \
and first released in 1991, Python's design philosophy emphasizes code \
readability with its notable use of significant whitespace."

nlp = pipeline("e2e-qg", model="valhalla/t5-base-e2e-qg")
nlp(text)
=> [
 'Who created Python?',
 'When was Python first released?',
 "What is Python's design philosophy?"
]
```
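If you would rather not clone the repo, the checkpoint can in principle be driven directly through `transformers`. The sketch below is only an assumption-laden illustration: it presumes the `generate questions:` task prefix and `<sep>`-separated outputs used by the repo's `e2e-qg` pipeline, so verify those details against the repo before relying on it.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("valhalla/t5-base-e2e-qg")
model = T5ForConditionalGeneration.from_pretrained("valhalla/t5-base-e2e-qg")

text = ("Python is an interpreted, high-level, general-purpose programming language. "
        "Created by Guido van Rossum and first released in 1991.")

# Assumed input format (see note above): task prefix + passage
inputs = tokenizer("generate questions: " + text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)

# Assumed output format: questions joined by a literal "<sep>" marker.
# If "<sep>" is registered as a special token in this tokenizer, decode with
# skip_special_tokens=False instead so the marker is preserved for splitting.
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print([q.strip() for q in decoded.split("<sep>") if q.strip()])
```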
microsoft/graphcodebert-base
2ff24803553d2274dd118c7ea20e9b37a5804b11
2021-07-21T16:26:39.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
microsoft
null
microsoft/graphcodebert-base
34,654
7
transformers
394
Entry not found
hf-internal-testing/tiny-random-t5
2f582cd79ed5795b71539951d237945bc1c5ac7e
2022-05-02T14:37:37.000Z
[ "pytorch", "tf", "t5", "transformers" ]
null
false
hf-internal-testing
null
hf-internal-testing/tiny-random-t5
34,603
null
transformers
395
Entry not found
hf-internal-testing/tiny-random-bigbird_pegasus
21ef3274d4148d5299e862b2c80a46713fc688f6
2021-09-17T19:22:17.000Z
[ "pytorch", "bigbird_pegasus", "transformers" ]
null
false
hf-internal-testing
null
hf-internal-testing/tiny-random-bigbird_pegasus
34,545
null
transformers
396
Entry not found
deepset/gbert-large
f6bca479ebb46e62ac99c03282a5030139e302f4
2022-02-17T14:05:45.000Z
[ "pytorch", "tf", "fill-mask", "de", "dataset:wikipedia", "dataset:OPUS", "dataset:OpenLegalData", "dataset:oscar", "arxiv:2010.10906", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
deepset
null
deepset/gbert-large
34,526
10
transformers
397
---
language: de
license: mit
datasets:
- wikipedia
- OPUS
- OpenLegalData
- oscar
---

# German BERT large

Released in October 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model and show that it outperforms its predecessors.

## Overview
**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** BERT large
**Language:** German

## Performance
```
GermEval18 Coarse: 80.08
GermEval18 Fine:   52.48
GermEval14:        88.16
```

See also:
- deepset/gbert-base
- deepset/gbert-large
- deepset/gelectra-base
- deepset/gelectra-large
- deepset/gelectra-base-generator
- deepset/gelectra-large-generator

## Authors
- Branden Chan: `branden.chan [at] deepset.ai`
- Stefan Schweter: `stefan [at] schweter.eu`
- Timo Möller: `timo.moeller [at] deepset.ai`

## About us
![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)

We bring NLP to the industry via open source! Our focus: industry-specific language models & large-scale QA systems.

Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)

Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
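The card above reports benchmark scores but no usage snippet; here is a minimal fill-mask sketch (the German example sentence, "The capital of Germany is [MASK].", is made up for illustration).

```python
from transformers import pipeline

# Fill-mask pipeline over the pretrained masked language model
unmasker = pipeline("fill-mask", model="deepset/gbert-large")
print(unmasker("Die Hauptstadt von Deutschland ist [MASK]."))
```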
cahya/xlm-roberta-large-indonesian-NER
d0ef1c27f757b1c21ab299ccfb25fe858ac77ed4
2020-09-23T15:55:50.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
cahya
null
cahya/xlm-roberta-large-indonesian-NER
34,151
1
transformers
398
Entry not found
facebook/detr-resnet-50-panoptic
fc15262cfd4c13cbdad6d1d55ff0cd31a2251a27
2022-06-27T08:30:08.000Z
[ "pytorch", "detr", "image-segmentation", "dataset:coco", "arxiv:2005.12872", "transformers", "vision", "license:apache-2.0" ]
image-segmentation
false
facebook
null
facebook/detr-resnet-50-panoptic
34,102
30
transformers
399
--- license: apache-2.0 tags: - image-segmentation - vision datasets: - coco widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg example_title: Football Match - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/dog-cat.jpg example_title: Dog & Cat - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/construction-site.jpg example_title: Construction Site - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/apple-orange.jpg example_title: Apple & Orange --- # DETR (End-to-End Object Detection) model with ResNet-50 backbone DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr). Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs. ## Intended uses & limitations You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models. ### How to use Here is how to use this model: ```python from transformers import DetrFeatureExtractor, DetrForSegmentation from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50-panoptic') model = DetrForSegmentation.from_pretrained('facebook/detr-resnet-50-panoptic') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts COCO classes, bounding boxes, and masks logits = outputs.logits bboxes = outputs.pred_boxes masks = outputs.pred_masks ``` Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py). Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **38.8**, a segmentation AP (average precision) of **31.1** and a PQ (panoptic quality) of **43.4**. For more details regarding evaluation results, we refer to table 5 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
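As a closing illustration of the bipartite matching described in the model description above, here is a toy sketch (not the model's actual loss code) that pairs a few made-up predicted boxes with ground-truth boxes using `scipy`'s Hungarian solver and a plain L1 cost; DETR's real matching cost additionally includes class probabilities and a generalized IoU term.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Made-up predicted and ground-truth boxes in (cx, cy, w, h) format
pred_boxes = np.array([[0.50, 0.50, 0.20, 0.20],
                       [0.10, 0.10, 0.30, 0.30],
                       [0.80, 0.80, 0.10, 0.10]])
gt_boxes = np.array([[0.48, 0.52, 0.25, 0.20],
                     [0.82, 0.79, 0.12, 0.10]])

# Pairwise L1 cost between every prediction and every ground-truth box
cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)

# Hungarian algorithm gives the optimal one-to-one assignment; predictions left
# unmatched would be trained towards the "no object" class in DETR
pred_idx, gt_idx = linear_sum_assignment(cost)
print(list(zip(pred_idx.tolist(), gt_idx.tolist())))
```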