ColD Fusion model

A finetuned model intended to serve as a strong base model. It improves over RoBERTa base and was trained on 35 datasets. Full details are in the paper (https://arxiv.org/abs/2212.01378).

Paper Abstract:

Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams.

In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
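The fusion step described in the abstract can be pictured as simple parameter averaging over contributor models that were finetuned from a shared base. The sketch below is an illustrative approximation only, not the authors' released code: the fuse function, its arguments, and the choice of plain averaging are assumptions made for the sake of the example.

# Illustrative sketch of one ColD Fusion iteration (not the official implementation):
# contributors finetune copies of a shared base model on their own data, and the
# resulting checkpoints are fused by averaging their weights to form the next base.
import torch
from transformers import RobertaModel

def fuse(contributor_paths, base='roberta-base'):
    """Average the floating-point parameters of several finetuned checkpoints."""
    contributors = [RobertaModel.from_pretrained(p) for p in contributor_paths]
    fused = RobertaModel.from_pretrained(base)
    fused_state = fused.state_dict()
    for name, tensor in fused_state.items():
        if tensor.is_floating_point():  # skip integer buffers such as position ids
            fused_state[name] = torch.stack(
                [m.state_dict()[name] for m in contributors]
            ).mean(dim=0)
    fused.load_state_dict(fused_state)
    return fused

# e.g. fused = fuse(['path/to/contributor_a', 'path/to/contributor_b'])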

How to use

The best way to use this model is to finetune it on your own task, but you can also extract features from it directly. To get the features of a given text in PyTorch:

from transformers import RobertaTokenizer, RobertaModel

# Load the ColD Fusion checkpoint with the standard RoBERTa tokenizer and encoder
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the token-level features

and in TensorFlow:

from transformers import RobertaTokenizer, TFRobertaModel

# Same checkpoint, loaded with the TensorFlow model class
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
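Since finetuning is the intended use, here is a minimal finetuning sketch using the Hugging Face Trainer. The dataset (glue/sst2), the number of labels, and the hyperparameters are placeholders; adapt them to your own task.

# Minimal finetuning sketch (placeholder dataset and hyperparameters; adapt to your task)
from datasets import load_dataset
from transformers import (RobertaTokenizer, RobertaForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaForSequenceClassification.from_pretrained('ibm/ColD-Fusion', num_labels=2)

dataset = load_dataset('glue', 'sst2')  # example task; replace with your own data

def tokenize(batch):
    return tokenizer(batch['sentence'], truncation=True, padding='max_length', max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir='cold-fusion-finetuned',
                         per_device_train_batch_size=16,
                         num_train_epochs=3,
                         learning_rate=2e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset['train'],
                  eval_dataset=dataset['validation'])
trainer.train()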

Evaluation results

Model Recycling

Evaluation on 36 datasets using ibm/ColD-Fusion-itr13-seed2 as a base model yields an average score of 78.72, compared to 76.22 for roberta-base.

The model is ranked 1st among all tested models for the roberta-base architecture as of 13/12/2022.

Results:

Dataset | Score
20_newsgroup | 86.3648
ag_news | 89.3
amazon_reviews_multi | 66.72
anli | 53.0937
boolq | 82.0183
cb | 89.2857
cola | 83.605
copa | 73
dbpedia | 77.4667
esnli | 91.0423
financial_phrasebank | 87.3
imdb | 93.868
isear | 73.1421
mnli | 87.3881
mrpc | 87.7451
multirc | 63.6757
poem_sentiment | 88.4615
qnli | 92.678
qqp | 91.0809
rotten_tomatoes | 91.4634
rte | 83.3935
sst2 | 95.2982
sst_5bins | 58.1448
stsb | 91.6334
trec_coarse | 97
trec_fine | 91
tweet_ev_emoji | 44.95
tweet_ev_emotion | 83.0401
tweet_ev_hate | 52.5589
tweet_ev_irony | 77.0408
tweet_ev_offensive | 86.0465
tweet_ev_sentiment | 69.7818
wic | 70.0627
wnli | 49.2958
wsc | 63.4615
yahoo_answers | 72.5667

For more information, see: Model Recycling.

Citation

@article{ColDFusion,
  title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
  journal = {CoRR},
  volume = {abs/2212.01378},
  year = {2022},
  url = {https://arxiv.org/abs/2212.01378},
  archivePrefix = {arXiv},
  eprint = {2212.01378},
}


<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
    <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>