## Introduction
Sparse computing is increasingly recognized as an important direction for improving the computational efficiency of large language models (LLMs). Among various approaches, mixture-of-experts (MoE) methods, exemplified by models such as [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), show particular promise. MoE works by selectively activating different model components (experts), thereby optimizing resource usage.
Recent studies ([Zhang et al., 2021](https://arxiv.org/abs/2110.01786); [Liu et al., 2023](https://openreview.net/pdf?id=wIPIhHd00i); [Mirzadeh et al., 2023](https://arxiv.org/abs/2310.04564)) reveal that LLMs inherently exhibit properties conducive to sparse computation when employing the ReLU activation function. This insight opens up new avenues for model efficiency, akin to MoE's selective activation: by dynamically choosing which model parameters participate in each computation, we can substantially boost efficiency.
However, ReLU-based models have so far seen limited adoption in the LLM field. Here we introduce Bamboo, a new 7B ReLU-based LLM that achieves nearly 85% sparsity with performance on par with [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Model Architecture
ReGLU-based LLMs offer only limited sparsity; for example, [ReluLLaMA-7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B) reaches only about 67% sparsity. To push the model's sparsity further, we add a ReLU component after the GLU, so our FFN works as follows:
```Python
import torch.nn as nn
from transformers.activations import ACT2FN


class BambooMLP(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.config = config
        self.hidden_size = config.hidden_size
        self.intermediate_size = config.intermediate_size
        self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
        self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
        self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
        self.act_fn = ACT2FN[config.hidden_act]  # ReLU for Bamboo

    def forward(self, x):
        # With ReLU, activating both the gate and up projections is equivalent to
        # inserting an extra ReLU after the standard GLU output.
        return self.down_proj(self.act_fn(self.gate_proj(x)) * self.act_fn(self.up_proj(x)))
```
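Because activation sparsity is the central claim, it is useful to be able to check it directly. Below is a minimal sketch of our own (the checkpoint path, the Mistral-style `model.model.layers[i].mlp` layout, and the prompt are illustrative assumptions) that hooks each MLP and counts the fraction of intermediate activations that the ReLU zeroes out:

```Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "path/to/bamboo-7b" is a placeholder; substitute the released checkpoint id.
model_name = "path/to/bamboo-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

zeros, total = 0, 0

def count_sparsity(module, inputs, output):
    # Recompute the FFN intermediate activations and count exact zeros.
    global zeros, total
    act = module.act_fn(module.gate_proj(inputs[0])) * module.act_fn(module.up_proj(inputs[0]))
    zeros += (act == 0).sum().item()
    total += act.numel()

# Assumes a Mistral/LLaMA-style module layout (model.model.layers[i].mlp).
for layer in model.model.layers:
    layer.mlp.register_forward_hook(count_sparsity)

prompt = tokenizer("Sparse computing is", return_tensors="pt")
with torch.no_grad():
    model(**prompt)

print(f"FFN activation sparsity: {zeros / total:.2%}")
```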
## Training Details
In this section, we introduce the details of training our model, including the types of data used and the hyper-parameters.
We initialized the model from Mistral's weights, modified the FFN to the ReGLU+ReLU structure described above, and then continued pre-training on 200B tokens, divided into two phases:
**First phase**: We followed the data mix ratio and sources of the StableLM-3B model ([link](https://stability.wandb.io/stability-llm/stable-lm/reports/StableLM-3B-4E1T--VmlldzoyMjU4?accessToken=u3zujipenkx5g7rtcj9qojjgxpconyjktjkli2po09nffrffdhhchq045vp0wyfo)) and conducted further pre-training on 150B tokens.
The following table shows the hyper-parameters we used in this phase.
| Hyper-parameter       | Value       |
| --------------------- | ----------- |
| GPUs | 64 80G-A100 |
| Learning Rate Control | Cosine |
| Peak Learning Rate | 5e-5 |
| Batch Size | 4M |
| Weight Decay | 0.1 |
**Second phase**: We further adjusted the training corpus ratio, incorporating more domain-specific datasets (math and coding), and continued training for 50B tokens. The hyper-parameters for this phase are shown below.
| Hyper-parameter       | Value       |
| --------------------- | ----------- |
| GPUs | 64 80G-A100 |
| Learning Rate Control | Cosine |
| Peak Learning Rate | 5e-6 |
| Batch Size | 4M |
| Weight Decay | 0.01 |
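For concreteness, here is a rough sketch of how a cosine schedule with the listed peak learning rates could be set up in PyTorch. The warmup length and total step counts are our own assumptions (derived from dividing the token budget by a 4M-token batch, which is not an officially reported unit), and the model is a stand-in:

```Python
import torch
import torch.nn as nn
from transformers import get_cosine_schedule_with_warmup

model = nn.Linear(4096, 4096)   # stand-in for the actual model

# First-phase values from the table above; step counts are assumptions
# (e.g. ~150B tokens / 4M-token batches ~ 37,500 steps), not reported numbers.
peak_lr, weight_decay = 5e-5, 0.1
num_training_steps, num_warmup_steps = 37_500, 375

optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr, weight_decay=weight_decay)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps
)

for step in range(num_training_steps):
    # ... forward pass and loss.backward() would go here ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```

The second phase follows the same pattern with a peak learning rate of 5e-6 and weight decay of 0.01.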
## Performance Evaluation Results
Our evaluation is based on the lm-evaluation-harness and OpenCompass frameworks. The evaluation details are as follows (a sketch of how such an evaluation could be launched is given after the results table):
- Hugging Face Open LLM Leaderboard tasks.
- Commonsense: We report the average of PIQA, SIQA, ARC-Easy, ARC-Challenge, and CommonsenseQA.
- Other popular benchmarks: We report the average accuracies on Big Bench Hard (BBH) (3-shot), HumanEval, MBPP, and MATH.
| | MMLU | Winogrande | TruthfulQA | Hellaswag | GSM8K | Arc-C | HumanEval | BBH | Average |
| ------- | ------ | ---------- | ---------- | --------- | ------ | ------ | --------- | ---- | ------- |
| Ours | 0.6389 | 0.7593 | 0.4406 | 0.8217 | 0.5315 | 0.6195 | 0.256 | | |
| Mistral | 0.6265 | 0.7924 | 0.4262 | 0.8332 | 0.4018 | 0.6143 | 0.2621 | | |
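For reference, a run producing leaderboard-style scores like those above could be launched roughly as follows. The checkpoint path, task names, and default few-shot settings are assumptions tied to a 0.4-style lm-evaluation-harness release, not our exact evaluation commands:

```Python
import lm_eval
from lm_eval.models.huggingface import HFLM

# "path/to/bamboo-7b" is a placeholder; task names and API follow a 0.4-style
# lm-evaluation-harness release and may differ in other versions (assumption).
lm = HFLM(pretrained="path/to/bamboo-7b", batch_size=8)
results = lm_eval.simple_evaluate(
    model=lm,
    tasks=["mmlu", "winogrande", "hellaswag", "arc_challenge", "gsm8k"],
)
print(results["results"])
```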
## Speed Evaluation Results
We utilize [PowerInfer](https://arxiv.org/pdf/2312.12456.pdf), a state-of-the-art acceleration framework that leverages activation sparsity. Here we show the inference speed compared with llama.cpp and transformers.
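To build intuition for why activation sparsity translates into speed (a simplified sketch of the general idea, not PowerInfer's actual kernels), note that the down projection only needs the columns whose corresponding activations are non-zero:

```Python
import torch

hidden_size, intermediate_size = 4096, 14336   # Mistral-7B-like shapes (illustrative)

x = torch.randn(intermediate_size)             # FFN activations after ReLU(gate) * ReLU(up)
x[torch.rand(intermediate_size) < 0.85] = 0.0  # roughly 85% zeros, as reported for Bamboo
down_proj = torch.randn(hidden_size, intermediate_size)

# Dense down-projection: every column participates.
dense_out = down_proj @ x

# Sparse down-projection: only the columns matching non-zero activations are needed,
# so roughly 85% of the multiply-adds can be skipped.
nz = x.nonzero(as_tuple=True)[0]
sparse_out = down_proj[:, nz] @ x[nz]

assert torch.allclose(dense_out, sparse_out, atol=1e-3)
```

In practice, frameworks such as PowerInfer combine this idea with activation predictors and specialized kernels so that the skipped arithmetic actually turns into wall-clock savings.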
## Limitation & Disclaimer
- Bamboo, having undergone training with only 200B tokens, may still exhibit performance gaps in certain tasks.
- The Bamboo model has only been trained on English-language datasets, hence its capabilities in other languages are still lacking.
- The model may produce unexpected outputs due to its size and probabilistic generation paradigm.
## License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage.
## Citation
Please cite using the following BibTeX:
```
@misc{bamboo,
  title={Bamboo: Harmonizing Sparsity and Performance in Large Language Models},
  author={Yixin Song and Haotong Xie and Zeyu Mi and Haibo Chen},
  year={2024}
}
```