language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
license: bigscience-bloom-rail-1.0
tags:
- ggml
- bloom
datasets:
- bigscience/xP3mt
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
widget:
- text: >-
一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the
previous review as positive, neutral or negative?
example_title: zh-en sentiment
- text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
example_title: zh-zh sentiment
- text: Suggest at least five related search terms to "Mạng neural nhân tạo".
example_title: vi-en query
- text: >-
Proposez au moins cinq mots clés concernant «Réseau de neurones
artificiels».
example_title: fr-fr query
- text: >-
Explain in a sentence in Telugu what is backpropagation in neural
networks.
example_title: te-en qa
- text: Why is the sky blue?
example_title: en-en qa
- text: >-
Write a fairy tale about a troll saving a princess from a dangerous
dragon. The fairy tale is a masterpiece that has achieved praise worldwide
and its moral is "Heroes Come in All Shapes and Sizes". Story (in
Spanish):
example_title: es-en fable
- text: >-
Write a fable about wood elves living in a forest that is suddenly invaded
by ogres. The fable is a masterpiece that has achieved praise worldwide
and its moral is "Violence is the last refuge of the incompetent". Fable
(in Hindi):
example_title: hi-en fable
model-index:
- name: bloomz-7b1-mt
results:
- task:
type: Coreference resolution
dataset:
name: Winogrande XL (xl)
type: winogrande
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 56.51
- task:
type: Coreference resolution
dataset:
name: XWinograd (en)
type: Muennighoff/xwinograd
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 65.76
- task:
type: Coreference resolution
dataset:
name: XWinograd (fr)
type: Muennighoff/xwinograd
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 57.83
- task:
type: Coreference resolution
dataset:
name: XWinograd (jp)
type: Muennighoff/xwinograd
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 51.82
- task:
type: Coreference resolution
dataset:
name: XWinograd (pt)
type: Muennighoff/xwinograd
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 57.41
- task:
type: Coreference resolution
dataset:
name: XWinograd (ru)
type: Muennighoff/xwinograd
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 55.87
- task:
type: Coreference resolution
dataset:
name: XWinograd (zh)
type: Muennighoff/xwinograd
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 62.7
- task:
type: Natural language inference
dataset:
name: ANLI (r1)
type: anli
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 42.6
- task:
type: Natural language inference
dataset:
name: ANLI (r2)
type: anli
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 39.4
- task:
type: Natural language inference
dataset:
name: ANLI (r3)
type: anli
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 42
- task:
type: Natural language inference
dataset:
name: SuperGLUE (cb)
type: super_glue
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 83.93
- task:
type: Natural language inference
dataset:
name: SuperGLUE (rte)
type: super_glue
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 82.67
- task:
type: Natural language inference
dataset:
name: XNLI (ar)
type: xnli
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 55.58
- task:
type: Natural language inference
dataset:
name: XNLI (bg)
type: xnli
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 44.9
- task:
type: Natural language inference
dataset:
name: XNLI (de)
type: xnli
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 48.92
- task:
type: Natural language inference
dataset:
name: XNLI (el)
type: xnli
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 42.89
- task:
type: Natural language inference
dataset:
name: XNLI (en)
type: xnli
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 58.92
- task:
type: Natural language inference
dataset:
name: XNLI (es)
type: xnli
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 57.35
- task:
type: Natural language inference
dataset:
name: XNLI (fr)
type: xnli
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 56.67
- task:
type: Natural language inference
dataset:
name: XNLI (hi)
type: xnli
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.45
- task:
type: Natural language inference
dataset:
name: XNLI (ru)
type: xnli
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.24
- task:
type: Natural language inference
dataset:
name: XNLI (sw)
type: xnli
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 48.27
- task:
type: Natural language inference
dataset:
name: XNLI (th)
type: xnli
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 41.08
- task:
type: Natural language inference
dataset:
name: XNLI (tr)
type: xnli
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 38.71
- task:
type: Natural language inference
dataset:
name: XNLI (ur)
type: xnli
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 49.48
- task:
type: Natural language inference
dataset:
name: XNLI (vi)
type: xnli
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.5
- task:
type: Natural language inference
dataset:
name: XNLI (zh)
type: xnli
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.3
- task:
type: Program synthesis
dataset:
name: HumanEval
type: openai_humaneval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 7.23
- type: Pass@10
value: 14.46
- type: Pass@100
value: 25.86
- task:
type: Sentence completion
dataset:
name: StoryCloze (2016)
type: story_cloze
config: '2016'
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 89.58
- task:
type: Sentence completion
dataset:
name: SuperGLUE (copa)
type: super_glue
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 84
- task:
type: Sentence completion
dataset:
name: XCOPA (et)
type: xcopa
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 52
- task:
type: Sentence completion
dataset:
name: XCOPA (ht)
type: xcopa
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 54
- task:
type: Sentence completion
dataset:
name: XCOPA (id)
type: xcopa
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 73
- task:
type: Sentence completion
dataset:
name: XCOPA (it)
type: xcopa
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 62
- task:
type: Sentence completion
dataset:
name: XCOPA (qu)
type: xcopa
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61
- task:
type: Sentence completion
dataset:
name: XCOPA (sw)
type: xcopa
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61
- task:
type: Sentence completion
dataset:
name: XCOPA (ta)
type: xcopa
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 62
- task:
type: Sentence completion
dataset:
name: XCOPA (th)
type: xcopa
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61
- task:
type: Sentence completion
dataset:
name: XCOPA (tr)
type: xcopa
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56
- task:
type: Sentence completion
dataset:
name: XCOPA (vi)
type: xcopa
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 77
- task:
type: Sentence completion
dataset:
name: XCOPA (zh)
type: xcopa
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 80
- task:
type: Sentence completion
dataset:
name: XStoryCloze (ar)
type: Muennighoff/xstory_cloze
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 83.85
- task:
type: Sentence completion
dataset:
name: XStoryCloze (es)
type: Muennighoff/xstory_cloze
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 88.82
- task:
type: Sentence completion
dataset:
name: XStoryCloze (eu)
type: Muennighoff/xstory_cloze
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 73.26
- task:
type: Sentence completion
dataset:
name: XStoryCloze (hi)
type: Muennighoff/xstory_cloze
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 80.41
- task:
type: Sentence completion
dataset:
name: XStoryCloze (id)
type: Muennighoff/xstory_cloze
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 84.58
- task:
type: Sentence completion
dataset:
name: XStoryCloze (my)
type: Muennighoff/xstory_cloze
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 51.56
- task:
type: Sentence completion
dataset:
name: XStoryCloze (ru)
type: Muennighoff/xstory_cloze
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 64.26
- task:
type: Sentence completion
dataset:
name: XStoryCloze (sw)
type: Muennighoff/xstory_cloze
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 71.01
- task:
type: Sentence completion
dataset:
name: XStoryCloze (te)
type: Muennighoff/xstory_cloze
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 73.06
- task:
type: Sentence completion
dataset:
name: XStoryCloze (zh)
type: Muennighoff/xstory_cloze
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 85.9
# Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
- Repository: bigscience-workshop/xmtf
- Paper: Crosslingual Generalization through Multitask Finetuning
- Point of Contact: Niklas Muennighoff
- Languages: Refer to bloom for pretraining & xP3 for finetuning language proportions. It understands both pretraining & finetuning languages.
- BLOOMZ & mT0 Model Family:
| Multitask finetuned on xP3. Recommended for prompting in English. ||||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|
| Parameters | 300M | 580M | 1.2B | 3.7B | 13B | 560M | 1.1B | 1.7B | 3B | 7.1B | 176B |
| Finetuned Model | mt0-small | mt0-base | mt0-large | mt0-xl | mt0-xxl | bloomz-560m | bloomz-1b1 | bloomz-1b7 | bloomz-3b | bloomz-7b1 | bloomz |
| Multitask finetuned on xP3mt. Recommended for prompting in non-English. ||||||||||||
| Finetuned Model | | | | | mt0-xxl-mt | | | | | bloomz-7b1-mt | bloomz-mt |
| Multitask finetuned on P3. Released for research purposes only. Strictly inferior to above models! ||||||||||||
| Finetuned Model | | | | | mt0-xxl-p3 | | | | | bloomz-7b1-p3 | bloomz-p3 |
| Original pretrained checkpoints. Not recommended. ||||||||||||
| Pretrained Model | mt5-small | mt5-base | mt5-large | mt5-xl | mt5-xxl | bloom-560m | bloom-1b1 | bloom-1b7 | bloom-3b | bloom-7b1 | bloom |
# Use

## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "Translate to English: Je t’aime.", the model will most likely answer "I love you.". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
Feel free to share your generations in the Community tab!
## How to use

### CPU
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-7b1-mt"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### GPU
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-7b1-mt"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### GPU in 8bit
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-7b1-mt"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
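Note that `generate` with default arguments only returns a short continuation. A minimal follow-up sketch, continuing from any of the snippets above; the decoding values are illustrative, not recommendations from the paper:

```python
# Continuing from the snippet above: allow a longer answer and decode greedily.
# max_new_tokens=64 is only an illustrative output budget.
outputs = model.generate(inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```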
# Limitations
**Prompt Engineering:** Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "Translate to English: Je t'aime" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "Translate to English: Je t'aime.", "Translate to English: Je t'aime. Translation:", or "What is "Je t'aime." in English?", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "Explain in a sentence in Telugu what is backpropagation in neural networks.".
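A minimal sketch of this effect, reusing `model` and `tokenizer` from the CPU snippet above; the exact completions will vary:

```python
# Compare an ambiguous prompt against clearer variants (illustrative only).
prompts = [
    "Translate to English: Je t'aime",                # no full stop: the model may continue the French
    "Translate to English: Je t'aime.",               # the input clearly ends here
    "Translate to English: Je t'aime. Translation:",  # explicit cue for where the answer starts
]
for prompt in prompts:
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs, max_new_tokens=20)
    print(repr(tokenizer.decode(outputs[0], skip_special_tokens=True)))
```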
# Training

## Model
- Architecture: Same as bloom-7b1, also refer to the `config.json` file
- Finetuning steps: 1000
- Finetuning tokens: 4.19 billion
- Finetuning layout: 1x pipeline parallel, 1x tensor parallel, 64x data parallel
- Precision: float16
## Hardware
- CPUs: AMD CPUs with 512GB memory per node
- GPUs: 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- Communication: NCCL-communications network with a fully dedicated subnet
## Software
- Orchestration: Megatron-DeepSpeed
- Optimizer & parallelism: DeepSpeed
- Neural networks: PyTorch (pytorch-1.11 w/ CUDA-11.5)
- FP16 if applicable: apex
# Evaluation
We refer to Table 7 from our paper & bigscience/evaluation-results for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
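For context, zero-shot accuracy on such tasks is typically obtained by rank classification: each answer choice is appended to the prompt, scored by the model, and the highest-likelihood choice is taken as the prediction. Below is a simplified sketch of that idea, not the exact evaluation harness behind the reported numbers; the prompt and answer choices are hypothetical.

```python
# Simplified rank-classification sketch: score each candidate answer by the
# log-likelihood the model assigns to it after the prompt, then pick the best.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()

# Hypothetical example prompt and answer choices.
prompt = "Je t'aime. Is the previous sentence positive or negative? Answer:"
candidates = [" positive", " negative"]

def candidate_log_likelihood(prompt: str, candidate: str) -> float:
    # Assumes the tokenization of the prompt is a prefix of prompt + candidate,
    # which typically holds for continuations like these.
    prompt_ids = tokenizer.encode(prompt, return_tensors="pt")
    full_ids = tokenizer.encode(prompt + candidate, return_tensors="pt")
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    targets = full_ids[0, 1:]
    start = prompt_ids.shape[1] - 1  # first target position belonging to the candidate
    idx = torch.arange(start, targets.shape[0])
    return log_probs[idx, targets[start:]].sum().item()

best = max(candidates, key=lambda c: candidate_log_likelihood(prompt, c))
print(best)
```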
# Citation
```bibtex
@misc{muennighoff2022crosslingual,
  title={Crosslingual Generalization through Multitask Finetuning},
  author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
  year={2022},
  eprint={2211.01786},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```