Update README.md
README.md
CHANGED
@@ -1,10 +1,10 @@
 ## Introduction

-Sparse computing is increasingly recognized as an important direction for improving the computational efficiency of large language models (LLMs).
+Sparse computing is increasingly recognized as an important direction for improving the computational efficiency of large language models (LLMs). For example, mixture-of-experts (MoE) methods show particular promise.

 Recent studies ([Zhang et al., 2021](https://arxiv.org/abs/2110.01786); [Liu et al., 2023](https://openreview.net/pdf?id=wIPIhHd00i); [Mirzadeh et al., 2023](https://arxiv.org/abs/2310.04564)) reveal that LLMs inherently exhibit properties conducive to sparse computation when employing the ReLU activation function. This insight opens up new avenues for model efficiency, akin to MoE's selective activation. By dynamically choosing which parameters to compute, we can substantially boost efficiency.

-However, the widespread adoption of ReLU-based models in the LLM field remains limited. Here we introduce a new 7B ReLU-based LLM, Bamboo, which boasts nearly 85% sparsity and performance on par with [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1).
+However, the widespread adoption of ReLU-based models in the LLM field remains limited. Here we introduce a new 7B ReLU-based LLM, Bamboo (GitHub: [https://github.com/SJTU-IPADS/Bamboo](https://github.com/SJTU-IPADS/Bamboo)), which boasts nearly 85% sparsity and performance on par with [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1).

 ## Model Architecture

@@ -38,7 +38,7 @@ The following table shows the hyper-parameters we used in our training process.

 | Hyper-parameters      |             |
 | --------------------- | ----------- |
-| GPUs                  | 64 80G-     |
+| GPUs                  | 64 80G-A800 |
 | Learning Rate Control | Cosine      |
 | Peak Learning Rate    | 5e-5        |
 | Batch Size            | 4M          |

@@ -48,7 +48,7 @@ The following table shows the hyper-parameters we used in our training process.

 | Hyper-parameters      |             |
 | --------------------- | ----------- |
-| GPUs                  | 64 80G-     |
+| GPUs                  | 64 80G-A800 |
 | Learning Rate Control | Cosine      |
 | Peak Learning Rate    | 5e-6        |
 | Batch Size            | 4M          |
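
The introduction above argues that ReLU activations let most of an MLP block's computation be skipped. As a rough illustration only (not Bamboo's actual inference code), the sketch below assumes a plain, un-gated ReLU MLP in NumPy: the down-projection is computed only over neurons whose activations are non-zero, the result matches the dense version exactly, and the work scales with the number of active neurons.

```python
import numpy as np

def relu_mlp_dense(x, w_up, w_down):
    # Standard dense MLP block: down_proj(ReLU(up_proj(x))).
    h = np.maximum(x @ w_up, 0.0)      # ReLU activations, many exactly zero
    return h @ w_down

def relu_mlp_sparse(x, w_up, w_down):
    # Same computation, but only rows of w_down whose activations are
    # non-zero contribute to the output, so the rest are skipped.
    h = np.maximum(x @ w_up, 0.0)
    active = np.flatnonzero(h)          # indices of active neurons
    return h[active] @ w_down[active]   # compute scales with activation sparsity

# Tiny self-contained check with random weights (illustrative sizes only).
rng = np.random.default_rng(0)
d_model, d_ffn = 64, 256
x = rng.standard_normal(d_model)
w_up = rng.standard_normal((d_model, d_ffn))
w_down = rng.standard_normal((d_ffn, d_model))

assert np.allclose(relu_mlp_dense(x, w_up, w_down),
                   relu_mlp_sparse(x, w_up, w_down))
```

In practice the saved work only turns into wall-clock speedup with sparse kernels or activation predictors, which this sketch does not model.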
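
Both training tables specify cosine learning-rate control with a peak learning rate (5e-5 in one stage, 5e-6 in the other). Below is a minimal sketch of such a schedule; the warmup length and minimum learning rate are illustrative assumptions, not values reported in the README.

```python
import math

def cosine_lr(step, total_steps, peak_lr=5e-5, min_lr=0.0, warmup_steps=0):
    """Cosine decay from peak_lr to min_lr, with optional linear warmup.

    peak_lr mirrors the table above; warmup_steps and min_lr are
    assumptions for illustration only.
    """
    if step < warmup_steps:
        # Linear warmup toward the peak learning rate.
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Example: learning rate at the midpoint of a 10,000-step run with 500 warmup steps.
print(cosine_lr(step=5_000, total_steps=10_000, peak_lr=5e-5, warmup_steps=500))
```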