---
language: en
tags:
- exbert
license: mit
---

# ColD Fusion model

ColD Fusion is a finetuned model intended to serve as a strong base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details are in [this paper](https://arxiv.org/abs/2212.01378).

## Paper Abstract:

Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a 
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, 
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources 
that are only available to well-resourced teams.

In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed 
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic 
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that 
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on 
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find 
that ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse 
datasets, a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
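
To make the collaborative loop above concrete, here is a minimal sketch of one fusion round, assuming the simplest possible recipe: several contributor checkpoints, each finetuned from the same shared base, are fused by uniform parameter averaging to produce the next base model. The checkpoint names are hypothetical and the paper's actual procedure may differ in its details.

```python
import torch
from transformers import RobertaModel

# Hypothetical contributor checkpoints, each finetuned from the same shared RoBERTa base.
contributor_ids = ["contributor-a/model", "contributor-b/model", "contributor-c/model"]
state_dicts = [RobertaModel.from_pretrained(m).state_dict() for m in contributor_ids]

# Fuse by uniform parameter averaging; integer buffers (e.g. position ids) are identical
# across contributors, so they are copied from the first checkpoint as-is.
fused_state = {}
for name, tensor in state_dicts[0].items():
    if tensor.is_floating_point():
        fused_state[name] = torch.stack([sd[name] for sd in state_dicts]).mean(dim=0)
    else:
        fused_state[name] = tensor

# The fused weights become the base model for the next round of finetuning.
fused_base = RobertaModel.from_pretrained("roberta-base")
fused_base.load_state_dict(fused_state)
```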


### How to use
The best way to use this model is to finetune it on your own downstream task (a minimal finetuning sketch follows the feature-extraction examples below), but you can also extract features directly.
To get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel

# Load the ColD Fusion checkpoint and its tokenizer
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the token-level features
```

and in TensorFlow:

```python
from transformers import RobertaTokenizer, TFRobertaModel

# Load the ColD Fusion checkpoint and its tokenizer
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)  # output.last_hidden_state holds the token-level features
```
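
Since finetuning is the recommended way to use the model, here is a minimal finetuning sketch with the Hugging Face Trainer API. The task (SST-2 from GLUE), column names, and hyperparameters are illustrative assumptions, not settings from the paper.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative task: binary sentiment classification on SST-2 (GLUE).
dataset = load_dataset("glue", "sst2")
tokenizer = AutoTokenizer.from_pretrained("ibm/ColD-Fusion")
model = AutoModelForSequenceClassification.from_pretrained("ibm/ColD-Fusion", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

# Hypothetical output directory and hyperparameters, for illustration only.
args = TrainingArguments(
    output_dir="cold-fusion-sst2",
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```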

## Evaluation results

### Model Recycling

[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=2.50&mnli_lp=nan&20_newsgroup=1.08&ag_news=-0.47&amazon_reviews_multi=0.14&anli=2.75&boolq=3.32&cb=21.52&cola=0.07&copa=24.30&dbpedia=0.17&esnli=0.05&financial_phrasebank=2.19&imdb=-0.03&isear=0.67&mnli=0.41&mrpc=-0.12&multirc=2.46&poem_sentiment=4.52&qnli=0.27&qqp=0.37&rotten_tomatoes=3.04&rte=10.99&sst2=1.18&sst_5bins=1.47&stsb=1.72&trec_coarse=-0.11&trec_fine=3.24&tweet_ev_emoji=-1.35&tweet_ev_emotion=1.22&tweet_ev_hate=-0.34&tweet_ev_irony=5.48&tweet_ev_offensive=1.49&tweet_ev_sentiment=-1.25&wic=4.58&wnli=-5.49&wsc=0.19&yahoo_answers=0.16&model_name=ibm%2FColD-Fusion-itr13-seed2&base_name=roberta-base) using ibm/ColD-Fusion-itr13-seed2 as a base model yields an average score of 78.72, compared to 76.22 for roberta-base.

The model is ranked 1st among all tested models of the roberta-base architecture as of 13/12/2022.
Results:

|   20_newsgroup |   ag_news |   amazon_reviews_multi |    anli |   boolq |      cb |   cola |   copa |   dbpedia |   esnli |   financial_phrasebank |   imdb |   isear |    mnli |    mrpc |   multirc |   poem_sentiment |   qnli |     qqp |   rotten_tomatoes |     rte |    sst2 |   sst_5bins |    stsb |   trec_coarse |   trec_fine |   tweet_ev_emoji |   tweet_ev_emotion |   tweet_ev_hate |   tweet_ev_irony |   tweet_ev_offensive |   tweet_ev_sentiment |     wic |    wnli |     wsc |   yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|-------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|-------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
|        86.3648 |      89.3 |                  66.72 | 53.0937 | 82.0183 | 89.2857 | 83.605 |     73 |   77.4667 | 91.0423 |                   87.3 | 93.868 | 73.1421 | 87.3881 | 87.7451 |   63.6757 |          88.4615 | 92.678 | 91.0809 |           91.4634 | 83.3935 | 95.2982 |     58.1448 | 91.6334 |            97 |          91 |            44.95 |            83.0401 |         52.5589 |          77.0408 |              86.0465 |              69.7818 | 70.0627 | 49.2958 | 63.4615 |         72.5667 |


For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)

## Citation

If you use the model, please cite the paper:

```bibtex
@article{ColDFusion,
  title     = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
  journal   = {CoRR},
  volume    = {abs/2212.01378},
  year      = {2022},
  url       = {https://arxiv.org/abs/2212.01378},
  archivePrefix = {arXiv},
  eprint    = {2212.01378},
}
```

<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>