Muennighoff committed
Commit 08b4421
1 Parent(s): 78a8ec4
Add bibtex

README.md CHANGED
@@ -667,7 +667,7 @@ model-index:
 > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
 
 - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
-- **Paper:** [
+- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
 - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
 - **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
 - **BLOOMZ & mT0 Model Family:**

@@ -865,9 +865,16 @@ print(tokenizer.decode(outputs[0]))
 
 # Evaluation
 
-We refer to Table 7 from our paper
+We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
 
 # Citation
 ```bibtex
-
+@misc{muennighoff2022crosslingual,
+  title={Crosslingual Generalization through Multitask Finetuning},
+  author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
+  year={2022},
+  eprint={2211.01786},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
+}
 ```
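For context, the second hunk header ends with `print(tokenizer.decode(outputs[0]))`, the tail of the model card's usage snippet. Below is a minimal sketch of that kind of zero-shot instruction-following call with the `transformers` library; the checkpoint choice (`bigscience/bloomz-560m`) and the example prompt are assumptions for illustration, not part of this commit.

```python
# Minimal sketch: zero-shot instruction following with a BLOOMZ checkpoint.
# Assumes the smallest BLOOMZ variant for a quick CPU-friendly demo; mT0
# checkpoints are encoder-decoder models and would use AutoModelForSeq2SeqLM instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-560m"  # illustrative checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# An instruction-style prompt; no task-specific finetuning is applied.
inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```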