feihu.hf committed on
Commit: f07775d
Parent: c635280

update README & config.json

Files changed (2)
  1. README.md +23 -1
  2. config.json +2 -7
README.md CHANGED
@@ -33,7 +33,8 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
 - Number of Parameters (Non-Embedding): 6.53B
 - Number of Layers: 28
 - Number of Attention Heads (GQA): 28 for Q and 4 for KV
-- Context Length: 131,072 tokens
+- Context Length: Full 131,072 tokens
+- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
 
 **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model.
 
@@ -48,6 +49,27 @@ With `transformers<4.37.0`, you will encounter the following error:
 KeyError: 'qwen2'
 ```
 
+### Processing Long Texts
+
+The current `config.json` is set for a context length of up to 32,768 tokens.
+To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
+
+For supported frameworks, you can add the following to `config.json` to enable YaRN:
+```json
+{
+  ...,
+  "rope_scaling": {
+    "factor": 4.0,
+    "original_max_position_embeddings": 32768,
+    "type": "yarn"
+  }
+}
+```
+
+For deployment, we recommend using vLLM.
+Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
+Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
+We advise adding the `rope_scaling` configuration only when processing long contexts is required.
 
 ## Evaluation & Performance
 
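The README section added above asks users to paste the `rope_scaling` block into `config.json` by hand. As an editorial illustration, not part of this commit, here is a minimal Python sketch that applies the same edit programmatically; the path `Qwen2.5-Coder-7B/config.json` is a hypothetical local checkout of this repository.

```python
# Sketch only (not from this commit): add the YaRN settings quoted in the
# README to a local copy of config.json. The path is a hypothetical local
# checkout of the model repository.
import json

path = "Qwen2.5-Coder-7B/config.json"

with open(path) as f:
    config = json.load(f)

# Values copied verbatim from the README's "Processing Long Texts" section:
# 4x YaRN scaling of the 32,768-token native window (4 * 32768 = 131072).
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

with open(path, "w") as f:
    json.dump(config, f, indent=2)
```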
config.json CHANGED
@@ -9,7 +9,7 @@
   "hidden_size": 3584,
   "initializer_range": 0.02,
   "intermediate_size": 18944,
-  "max_position_embeddings": 131072,
+  "max_position_embeddings": 32768,
   "max_window_layers": 28,
   "model_type": "qwen2",
   "num_attention_heads": 28,
@@ -23,10 +23,5 @@
   "transformers_version": "4.45.0.dev0",
   "use_cache": true,
   "use_sliding_window": false,
-  "vocab_size": 152064,
-  "rope_scaling": {
-    "factor": 4.0,
-    "original_max_position_embeddings": 32768,
-    "type": "yarn"
-  }
+  "vocab_size": 152064
 }
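Net effect of the `config.json` change: the shipped default window drops to the native 32,768 tokens, and the full 131,072 tokens (4.0 × 32,768) are available only after re-adding the `rope_scaling` block as the README now instructs. A hedged sketch, assuming `transformers>=4.37.0` (the minimum the README already requires) and the hypothetical model id `Qwen/Qwen2.5-Coder-7B`, of enabling the override at load time without editing the file on disk:

```python
# Sketch only (not from this commit): re-enable YaRN at load time instead of
# editing config.json on disk. Assumes transformers>=4.37.0; the model id is
# a hypothetical reference to this repository.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-Coder-7B")

# The same block this commit removed from config.json and the README now
# tells long-context users to add back.
config.rope_scaling = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B",
    config=config,
    torch_dtype="auto",   # pick dtype from the checkpoint
    device_map="auto",    # requires accelerate to be installed
)
```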