# BERTweet: A pre-trained language model for English Tweets

BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure. The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic. The general architecture and experimental results of BERTweet can be found in our [paper](https://aclanthology.org/2020.emnlp-demos.2/):

```
@inproceedings{bertweet,
    title     = {{BERTweet: A pre-trained language model for English Tweets}},
    author    = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen},
    booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
    pages     = {9--14},
    year      = {2020}
}
```

**Please CITE** our paper when BERTweet is used to help produce published results or is incorporated into other software.

For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)!

### Main results

*(Result figures: BERTweet performance on POS tagging, NER, sentiment analysis, and irony detection. See the [paper](https://aclanthology.org/2020.emnlp-demos.2/) for the full result tables.)*

### Pre-trained models

Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/bertweet-base` | 135M | base | 850M English Tweets (cased)
`vinai/bertweet-covid19-base-cased` | 135M | base | 23M COVID-19 English Tweets (cased)
`vinai/bertweet-covid19-base-uncased` | 135M | base | 23M COVID-19 English Tweets (uncased)
`vinai/bertweet-large` | 355M | large | 873M English Tweets (cased)

### Example usage

```python
import torch
from transformers import AutoModel, AutoTokenizer

bertweet = AutoModel.from_pretrained("vinai/bertweet-large")
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-large")

# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"

input_ids = torch.tensor([tokenizer.encode(line)])

with torch.no_grad():
    features = bertweet(input_ids)  # Model outputs are now tuples

## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-large")
```

### Normalize raw input Tweets

Before applying BPE to the pre-training corpus of English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/URL links into the special tokens `@USER` and `HTTPURL`, respectively. It is thus recommended to also apply the same pre-processing step to the raw input Tweets in BERTweet-based downstream applications.

For `vinai/bertweet-large`, users can employ our [TweetNormalizer](https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py) module to obtain the same pre-processing output from raw input Tweets.

- Installation: `pip3 install nltk emoji`

```python
import torch
from transformers import AutoTokenizer
from TweetNormalizer import normalizeTweet

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-large")

line = normalizeTweet("DHEC confirms https://postandcourier.com/health/covid19/sc-has-first-two-presumptive-cases-of-coronavirus-dhec-confirms/article_bddfe4ae-5fd3-11ea-9ce4-5f495366cee6.html?utm_medium=social&utm_source=twitter&utm_campaign=user-share… via @postandcourier 😢")

input_ids = torch.tensor([tokenizer.encode(line)])
```
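For reference, the normalization described above boils down to roughly the sketch below. This is a simplified, hypothetical re-implementation for illustration only: the actual [TweetNormalizer](https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py) module handles additional edge cases, and the exact emoji text aliases depend on the installed `emoji` package version.

```python
from emoji import demojize
from nltk.tokenize import TweetTokenizer

tweet_tokenizer = TweetTokenizer()

def simple_normalize_tweet(tweet):
    """Simplified sketch of BERTweet-style Tweet normalization (not the official module)."""
    normalized = []
    for token in tweet_tokenizer.tokenize(tweet):  # tokenize with NLTK's TweetTokenizer
        lowered = token.lower()
        if token.startswith("@") and len(token) > 1:
            normalized.append("@USER")    # user mentions -> @USER
        elif lowered.startswith("http") or lowered.startswith("www"):
            normalized.append("HTTPURL")  # web/URL links -> HTTPURL
        else:
            # emotion icons -> text strings (alias spelling depends on the emoji package version)
            normalized.append(demojize(token))
    return " ".join(normalized)

# Illustrative raw Tweet (made-up handle and URL):
print(simple_normalize_tweet("SC has first two presumptive cases of coronavirus, DHEC confirms https://t.co/xyz via @DHEC 😢"))
```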