# CoNTACT

### Model description

<u>Co</u>ntextual <u>N</u>eural <u>T</u>ransformer <u>A</u>dapted to <u>C</u>OVID-19 <u>T</u>weets, or **CoNTACT**, is a Dutch RobBERT model (```pdelobelle/robbert-v2-dutch-base```) adapted to the domain of COVID-19 tweets. The model was developed at [CLiPS](https://www.uantwerpen.be/en/research-groups/clips/) by Jens Lemmens, Jens Van Nooten, Tim Kreutz and Walter Daelemans. A full description of the model, the data used and the experiments conducted can be found in this arXiv preprint: https://arxiv.org/abs/2203.07362

### Intended use

The model was developed with the intention of achieving high results on NLP tasks involving Dutch social media messages related to COVID-19.

### How to use

CoNTACT should be fine-tuned on a downstream task. This can be done by passing ```clips/contact``` to the ```--model_name_or_path``` argument of the Hugging Face Transformers example scripts, or by loading CoNTACT (as shown below) and fine-tuning it with your own code:

```
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('clips/contact')
tokenizer = AutoTokenizer.from_pretrained('clips/contact')

...
```
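
For concreteness, here is a minimal fine-tuning sketch using the Trainer API for a binary classification task. The CSV file names, column names and hyperparameters are illustrative assumptions, not part of the model release:

```
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# Hypothetical dataset with 'text' and 'label' columns; replace with your own data
dataset = load_dataset('csv', data_files={'train': 'train.csv', 'validation': 'dev.csv'})

tokenizer = AutoTokenizer.from_pretrained('clips/contact')
model = AutoModelForSequenceClassification.from_pretrained('clips/contact', num_labels=2)

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, padding='max_length', max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir='contact-finetuned',
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    train_dataset=dataset['train'],
    eval_dataset=dataset['validation'],
)
trainer.train()
```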

### Training data

CoNTACT was trained on 2.8M Dutch tweets related to COVID-19 that were posted in 2021.

### Training procedure

The model's pre-training phase was extended by performing Masked Language Modeling (MLM) on the training data described above. This was done for 4 epochs, using the largest batch size that fit in memory (32).
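
A domain-adaptation run along these lines could look like the sketch below. The corpus file name and all hyperparameters other than the 4 epochs and batch size of 32 are assumptions:

```
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

# Start from the base RobBERT checkpoint, as CoNTACT did
tokenizer = AutoTokenizer.from_pretrained('pdelobelle/robbert-v2-dutch-base')
model = AutoModelForMaskedLM.from_pretrained('pdelobelle/robbert-v2-dutch-base')

# Hypothetical corpus file with one tweet per line
dataset = load_dataset('text', data_files={'train': 'covid_tweets.txt'})

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True, remove_columns=['text'])

# Randomly mask tokens for the MLM objective (15% is the standard rate)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir='contact-mlm',
        num_train_epochs=4,               # as described above
        per_device_train_batch_size=32,   # as described above
    ),
    data_collator=collator,
    train_dataset=dataset['train'],
)
trainer.train()
```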

### Evaluation

The model was evaluated on two tasks using data from two social media platforms: Twitter and Facebook. Task 1 involved the binary classification of COVID-19 vaccine stance (hesitant vs. not hesitant), whereas Task 2 consisted of the multilabel, multiclass classification of arguments for vaccine hesitancy. CoNTACT outperformed out-of-the-box RobBERT in virtually all our experiments, and with statistical significance in most cases.
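
For a multilabel setup like Task 2, CoNTACT can be loaded with a multilabel classification head; the number of labels here is a placeholder:

```
from transformers import AutoModelForSequenceClassification

# problem_type switches the training loss to BCEWithLogitsLoss for multilabel
# targets; num_labels=8 is a placeholder for the actual number of argument classes
model = AutoModelForSequenceClassification.from_pretrained(
    'clips/contact',
    num_labels=8,
    problem_type='multi_label_classification',
)
```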

### How to cite

```
@misc{lemmens2022contact,
      title={CoNTACT: A Dutch COVID-19 Adapted BERT for Vaccine Hesitancy and Argumentation Detection},
      author={Jens Lemmens and Jens Van Nooten and Tim Kreutz and Walter Daelemans},
      year={2022},
      eprint={2203.07362},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```