Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Qwen1.5-MoE-A2.7B - bnb 8bits
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B/
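
As a minimal sketch of what this quantization corresponds to (assuming `bitsandbytes`, `accelerate`, and a recent `transformers` are installed, and a CUDA GPU is available), an equivalent 8-bit load of the original model looks like this:

```python
# Minimal sketch: load the original model in 8-bit with bitsandbytes.
# Assumes `pip install transformers accelerate bitsandbytes` and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen1.5-MoE-A2.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # place the quantized weights on available GPUs
)
```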

Original model description:
---
license: other
license_name: tongyi-qianwen
license_link: >-
  https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
- moe
---

# Qwen1.5-MoE-A2.7B
|
32 |
+
|
33 |
+
|
34 |
+
## Introduction
|
35 |
+
|
36 |
+
Qwen1.5-MoE is a transformer-based MoE decoder-only language model pretrained on a large amount of data.
|
37 |
+
|
38 |
+
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen-moe/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
|
39 |
+
|
40 |
+
## Model Details
|
41 |
+
Qwen1.5-MoE employs Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters during runtime, while achieving comparable performance to `Qwen1.5-7B`, it only requires 25% of the training resources. We also observed that the inference speed is 1.74 times that of `Qwen1.5-7B`.
|

## Requirements

The code for Qwen1.5-MoE has been merged into the latest Hugging Face `transformers`, and we advise you to build from source with `pip install git+https://github.com/huggingface/transformers`; otherwise you might encounter the following error:
```
KeyError: 'qwen2_moe'
```
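
A quick way to check that your installed build includes the architecture (a sketch; loading the config alone is enough to trigger the error on an old build):

```python
# If this raises KeyError: 'qwen2_moe', the installed transformers is too old.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")
print(config.model_type)  # expected output: qwen2_moe
```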

## Usage

We do not advise using base language models for text generation directly. Instead, you can apply post-training, e.g., SFT, RLHF, or continued pretraining, to this model.
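
As one possible starting point, here is a minimal SFT sketch using the `trl` library (an illustration, not the model authors' recipe; it assumes a recent `trl` and `datasets`, and the dataset choice is a placeholder):

```python
# Illustrative SFT sketch with trl; the dataset is a placeholder, not an
# endorsed recipe. Assumes `pip install trl datasets` and ample GPU memory.
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="Qwen/Qwen1.5-MoE-A2.7B",  # trl loads the model from this id
    train_dataset=dataset,
)
trainer.train()
```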