---
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
---
# BigScience Large Language Model Training
Training a multilingual 176-billion-parameter model in the open
![BigScience Logo](https://assets.website-files.com/6139f3cdcbbff3a68486761d/613cd8997b270da063e230c5_Tekengebied%201-p-500.png)
[BigScience](https://bigscience.huggingface.co) is an open and collaborative workshop around the study and creation of very large language models, gathering more than 1,000 researchers around the world. You can find more information on the main website at https://bigscience.huggingface.co.
The training of BigScience’s main model started on **March 11, 2022 11:42am PST** and will continue for 3-4 months on 384 A100 80GB GPUs of the Jean Zay public supercomputer.
You can follow the training at [https://twitter.com/BigScienceLLM](https://twitter.com/BigScienceLLM)
## More information on the model, dataset, hardware, and environmental considerations
### **The model**
- 176B-parameter decoder-only architecture (GPT-like)
- 70 layers - 112 attention heads per layer - hidden dimensionality of 14,336 - sequence length of 2,048 tokens (a quick parameter-count check follows this list)
- ALiBi positional embeddings - GeLU activation function (a sketch of the ALiBi biases also follows this list)
- **More information**:
- Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: [https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours](https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours)
- More details on the architecture/optimizer: [https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml)
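As a quick sanity check on the figures above, the sketch below re-derives the parameter count from the layer count, hidden size and tokenizer vocabulary. It uses the standard ≈12·h² per-layer approximation (attention projections plus a 4x MLP expansion) and ignores biases, layer norms and other implementation details, so the exact total will differ slightly.

```python
# Back-of-the-envelope parameter count from the shape figures quoted above.
hidden = 14_336
layers = 70
vocab = 250_680

embedding = vocab * hidden              # token embedding matrix, ~3.6B parameters
attention = 4 * hidden * hidden         # Q, K, V and output projections
mlp = 2 * hidden * (4 * hidden)         # up- and down-projections with a 4x expansion
per_layer = attention + mlp             # ~12*h^2 ~ 2.47B parameters per layer

total = layers * per_layer + embedding
print(f"~{total / 1e9:.1f}B parameters")  # ~176.2B
```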
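Since ALiBi replaces learned position embeddings with a per-head linear penalty on the attention scores, here is a minimal sketch of how those biases can be constructed. It follows the generic ALiBi recipe (per-head geometric slopes, extended for the non-power-of-two head count); the actual training code lives in the Megatron-DeepSpeed setup linked above, so treat this as illustrative rather than the exact implementation.

```python
import math
import numpy as np

def alibi_slopes(n_heads):
    """Per-head slopes: a geometric sequence ending at 2**-8 for powers of two."""
    def power_of_2_slopes(n):
        start = 2 ** (-(2 ** -(math.log2(n) - 3)))   # equals 2**(-8 / n)
        return [start ** (i + 1) for i in range(n)]
    if math.log2(n_heads).is_integer():
        return power_of_2_slopes(n_heads)
    closest = 2 ** math.floor(math.log2(n_heads))
    extra = power_of_2_slopes(2 * closest)[0::2][: n_heads - closest]
    return power_of_2_slopes(closest) + extra

def alibi_bias(n_heads, seq_len):
    """Additive bias of shape (n_heads, seq_len, seq_len): slope * (key_pos - query_pos)."""
    slopes = np.array(alibi_slopes(n_heads))[:, None, None]
    positions = np.arange(seq_len)
    distance = positions[None, :] - positions[:, None]    # key index minus query index
    return slopes * distance[None, :, :]                   # <= 0 on and below the diagonal

# Small demo; the model itself uses 112 heads and 2,048-token sequences.
bias = alibi_bias(n_heads=112, seq_len=16)   # added to attention scores before softmax
print(bias.shape)                            # (112, 16, 16)
```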
### **The dataset**
- Multilingual: 46 languages (full list here: [https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling](https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling))
- 341.6 billion tokens (1.5 TB of text data); a quick consistency check of these figures follows this list
- Tokenizer vocabulary: 250,680 tokens
- More information:
- Blog post detailing the design choices during the dataset creation: [https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling](https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling)
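A quick consistency check of the dataset figures, using only the numbers quoted in this section (back-of-the-envelope arithmetic, not official corpus statistics):

```python
# Rough ratios derived from the figures quoted above.
tokens = 341.6e9        # training tokens
text_bytes = 1.5e12     # ~1.5 TB of raw text
params = 176e9          # model size, for context

print(f"~{text_bytes / tokens:.1f} bytes of text per token")    # ~4.4
print(f"~{tokens / params:.1f} training tokens per parameter")  # ~1.9
```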
### **The engineering side**
- number of GPUs used for the training: 384 A100 GPUs with 80 GB of memory each
- one copy of the model takes 48 GPUs (using 60 GB of memory on each GPU)
- checkpoint size: the bf16 weights are 329 GB, the full checkpoint with optimizer states is 2.3 TB
- training throughput: ~150 TFLOPs
- estimated training time: 3-4 months depending on throughput and unexpected events (a back-of-the-envelope check of these figures follows this list)
- **More information**:
- Blog post on the hardware/engineering side: [https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model](https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model)
- Details on the distributed setup used for the training: [https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml)
- Tensorboard updated during the training: [https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss](https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss)
- Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): [https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md](https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md)
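The sketch below cross-checks the engineering figures above. It rests on assumptions beyond what this section states: that the ~150 TFLOPs throughput is per GPU, that training FLOPs follow the common 6·N·D estimate, and that the optimizer keeps fp32 master weights plus two Adam moments alongside the bf16 weights.

```python
# Back-of-the-envelope checks of the engineering figures quoted above.
params = 176e9
tokens = 341.6e9
gpus = 384
per_gpu_flops = 150e12                           # assumed to be the per-GPU throughput

replicas = gpus // 48                            # 8 data-parallel copies of the model
total_flops = 6 * params * tokens                # ~3.6e23 FLOPs for one pass over the data
seconds = total_flops / (gpus * per_gpu_flops)   # ideal wall-clock time, no downtime
print(f"~{seconds / 86_400:.0f} days of ideal compute with {replicas} model replicas")  # ~72 days

bf16_weights = 2 * params                        # 2 bytes per parameter
full_checkpoint = (2 + 4 + 4 + 4) * params       # bf16 + fp32 master weights + 2 Adam moments
print(f"weights ~{bf16_weights / 2**30:.0f} GiB, full checkpoint ~{full_checkpoint / 2**40:.1f} TiB")
# ~328 GiB and ~2.2 TiB: in the same ballpark as the 329 GB / 2.3 TB quoted above
# (the exact sizes depend on how the optimizer states are sharded and stored)
```

Roughly two and a half months of ideal compute is consistent with the 3-4 month estimate above once hardware failures, restarts and throughput variation are taken into account.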
### **Environmental considerations**
- [Jean Zay](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html), the supercomputer we are using for model training, is mostly powered by nuclear energy, which is a low carbon energy source.
- Significant efforts were made to make sure that the computing infrastructure is as efficient as possible — the heat generated by the hardware even gets used for heating buildings on campus!
- **More information**:
- We are currently working on making a precise estimate of the carbon emitted during all of the steps of model training, including intermediate experiments as well as inference.
- More soon!