JustinLin610 committed
Commit • 384a93d • 1 Parent(s): a9f04bc
Update README.md
README.md CHANGED
@@ -21,7 +21,7 @@ For more details, please refer to our [blog post](https://qwenlm.github.io/blog/
## Model Details

-Qwen1.5-MoE employs Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters during runtime, while achieching comparable performance to `Qwen1.5-7B`, it only requires
+Qwen1.5-MoE employs a Mixture of Experts (MoE) architecture in which the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters at runtime. While achieving comparable performance to `Qwen1.5-7B`, it only requires 25% of the training resources. We also observed that the inference speed is 1.74 times that of `Qwen1.5-7B`.

## Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. However, DPO leads to improvements in human preference evaluation but degradation in benchmark evaluation. In the very near future, we will fix both problems.
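The snippet below is a minimal sketch of how the model described in the updated section could be loaded and run with Hugging Face `transformers`. It assumes the checkpoint is published as `Qwen/Qwen1.5-MoE-A2.7B` and that the installed `transformers` (plus `accelerate` for `device_map`) is recent enough to include Qwen1.5-MoE support; adjust the repo id, dtype, and device placement for your setup.

```python
# Minimal sketch: loading the MoE checkpoint with Hugging Face transformers.
# Assumes the repo id "Qwen/Qwen1.5-MoE-A2.7B" and a transformers version that
# already ships Qwen1.5-MoE support; treat this as an illustration, not the
# official usage instructions from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native dtype when available
    device_map="auto",    # place/shard weights across available devices
)

# Only ~2.7B of the 14.3B parameters are activated per token at inference time,
# but the full 14.3B parameters still need to fit in (possibly offloaded) memory.
inputs = tokenizer("Qwen1.5-MoE is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```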