---
license: other
license_name: qwen
license_link: >-
  https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- llama
- qwen
- qwen1.5
- qwen2
---
This is the Mistral version of the [Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) model by Alibaba Cloud.

The original conversion code can be found at https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py; I have modified it to be compatible with Qwen1.5. This model was converted with https://github.com/Minami-su/character_AI_open/blob/main/mistral_qwen2.py.
## Special

1. Before using this model, you need to modify `modeling_mistral.py` in the transformers library.
2. Open it, e.g. `vim /root/anaconda3/envs/train/lib/python3.9/site-packages/transformers/models/mistral/modeling_mistral.py` (adjust the path to your own environment).
3. Find `MistralAttention`.
4. Change the q, k, v, and o projections from `bias=False` to `bias=config.attention_bias`, as sketched after the screenshots below.

Before:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d7f90b102d144db4b4245b/AKj_fwEoLUKWZ4mViYW-q.png)
After:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d7f90b102d144db4b4245b/A2gSwq9l6Zx8X1qegtgvE.png)
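In code form, a minimal sketch of the edit matching the Before/After screenshots (attribute names follow the `transformers` Mistral implementation at the time of writing and may differ between versions; `nn` is `torch.nn`, and `attention_bias` is assumed to be set in this checkpoint's `config.json`):

```python
# Excerpt from MistralAttention.__init__ in modeling_mistral.py.
# Only the bias arguments change (bias=False -> bias=config.attention_bias),
# so the attention biases carried over from Qwen1.5 can be loaded.
self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.attention_bias)
self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.attention_bias)
```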
## Differences between qwen2 mistral and qwen2 llamafy

Compared to qwen2 llamafy, qwen2 mistral can use sliding-window attention, which makes it faster and gives it better effective context length.
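One quick way to see the difference is the `sliding_window` field on the Mistral config, which the llamafied (Llama-architecture) conversion does not have (a sketch; whether the field is populated depends on this checkpoint's `config.json`):

```python
from transformers import AutoConfig

# MistralConfig exposes a sliding_window attribute; a Llama config does not.
config = AutoConfig.from_pretrained("Minami-su/Qwen1.5-0.5B-Chat_mistral")
print(config.model_type)      # "mistral"
print(config.sliding_window)  # window size used by sliding-window attention
```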
Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen1.5-0.5B-Chat_mistral")
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-0.5B-Chat_mistral", torch_dtype="auto", device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]
# Build the chat-formatted prompt, move it to the GPU, and stream the reply.
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
generate_ids = model.generate(inputs, max_length=2048, streamer=streamer)
```
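The test results below were obtained with the model loaded in 4-bit. The card does not say how; one common recipe is bitsandbytes quantization (a sketch and an assumption on my part, not necessarily the exact evaluation setup):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the model in 4-bit via bitsandbytes (pip install bitsandbytes).
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-0.5B-Chat_mistral",
    quantization_config=bnb_config,
    device_map="auto",
)
```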
## Test

Original Qwen1.5-0.5B-Chat, loaded in 4-bit:
```
hf-causal (pretrained=Qwen1.5-0.5B-Chat), limit: None, provide_description: False, num_fewshot: 0, batch_size: 32
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.2389|± |0.0125|
| | |acc_norm|0.2688|± |0.0130|
|truthfulqa_mc| 1|mc1 |0.2534|± |0.0152|
| | |mc2 |0.4322|± |0.0151|
|winogrande | 0|acc |0.5564|± |0.0140|
```

This model (Qwen1.5-0.5B-Chat_mistral), loaded in 4-bit:
```
hf-causal (pretrained=Qwen1.5-0.5B-Chat_mistral), limit: None, provide_description: False, num_fewshot: 0, batch_size: 32
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.2398|± |0.0125|
| | |acc_norm|0.2705|± |0.0130|
|truthfulqa_mc| 1|mc1 |0.2534|± |0.0152|
| | |mc2 |0.4322|± |0.0151|
|winogrande | 0|acc |0.5549|± |0.0140|
```