Add model card
README.md CHANGED
@@ -1,3 +1,29 @@
 ---
+tags:
+- quantized
+- 4-bit
+- AWQ
+- autotrain_compatible
+- endpoints_compatible
+- text-generation-inference
 license: apache-2.0
+language:
+- en
+base_model: mistral-community/Mixtral-8x22B-v0.1
+model_creator: Vezora
+model_name: Mistral-22B-v0.1
+model_type: mistral
+pipeline_tag: text-generation
+inference: false
 ---
+# Vezora/Mistral-22B-v0.1 AWQ
+
+- Model creator: [Vezora](https://huggingface.co/Vezora)
+- Original model: [Mistral-22B-v0.1](https://huggingface.co/Vezora/Mistral-22B-v0.1)
+
+## Model Summary
+
+This model is not an MoE; it is in fact a 22B-parameter dense model!
+
+Just one day after the release of **Mixtral-8x22B**, we are excited to introduce our handcrafted experimental model, **Mistral-22B-v0.1**. It is the culmination of knowledge distilled equally from all of the experts into a single, dense 22B model. It is not a single trained expert; rather, it is a compressed MoE turned into a dense 22B model. This is the first working MoE-to-dense model conversion.
+
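
Note that `inference: false` in the metadata disables the hosted inference widget, so these AWQ weights are meant to be loaded locally. Below is a minimal sketch using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ); the repo id is a placeholder, since the card itself does not name the quantized repository.

```python
# Minimal sketch of loading a 4-bit AWQ checkpoint with AutoAWQ.
# The repo id below is a placeholder, not a real repository.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "your-org/Mistral-22B-v0.1-AWQ"  # placeholder repo id

# from_quantized loads the 4-bit AWQ weights; fuse_layers fuses attention/MLP
# modules for faster inference (requires a CUDA GPU).
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

tokens = tokenizer("Paris is the capital of", return_tensors="pt").input_ids.cuda()
output = model.generate(tokens, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```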