midav committed on
Commit
d3b8ac9
1 Parent(s): 4a4008e

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +63 -0
README.md ADDED
@@ -0,0 +1,63 @@
---
inference: false
tags:
- onnx
- text-classification
- adapterhub:rc/multirc
- roberta
- adapter-transformers
language:
- en
---

# ONNX export of Adapter `AdapterHub/roberta-base-pf-multirc` for roberta-base

## Conversion of [AdapterHub/roberta-base-pf-multirc](https://huggingface.co/AdapterHub/roberta-base-pf-multirc) for UKP SQuARE

## Usage
```python
from huggingface_hub import hf_hub_download
from onnxruntime import InferenceSession
from transformers import AutoTokenizer
import numpy as np

onnx_path = hf_hub_download(repo_id='UKP-SQuARE/roberta-base-pf-multirc-onnx', filename='model.onnx') # or model_quant.onnx for quantization
onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider'])

context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.'
question = 'What are advantages of ONNX?'
choices = ["Cat", "Horse", "Tiger", "Fish"]
tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/roberta-base-pf-multirc-onnx')

# Pair the context with each question/choice combination, then batch-tokenize.
raw_input = [[context, question + ' ' + choice] for choice in choices]
inputs = tokenizer(raw_input, padding=True, truncation=True, return_tensors="np")
# Add a leading batch dimension expected by the exported model.
inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0)
inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0)
inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0)
outputs = onnx_model.run(input_feed=dict(inputs), output_names=None)
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and
      Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
      Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```