Update README.md
README.md CHANGED

@@ -33,7 +33,7 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
 - Number of Parameters (Non-Embedding): 1.31B
 - Number of Layers: 28
 - Number of Attention Heads (GQA): 12 for Q and 2 for KV
-- Context Length: Full
+- Context Length: Full 32,768 tokens
 - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
 - Quantization: GPTQ 4-bit
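Since the hunk touches the context-length and quantization specs, a minimal usage sketch may help readers check that the GPTQ checkpoint behaves as described. It assumes the Hugging Face repo id `Qwen/Qwen2.5-Coder-1.5B-Instruct-GPTQ-Int4` (the 1.31B non-embedding / 28-layer / 12-and-2 GQA figures match the 1.5B variant) and the standard `transformers` chat-template flow; treat it as an illustrative sketch, not this repo's official quickstart.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; substitute the actual checkpoint this README documents.
model_name = "Qwen/Qwen2.5-Coder-1.5B-Instruct-GPTQ-Int4"

# GPTQ 4-bit weights are dequantized on the fly; device_map="auto" places
# the model on GPU if one is available.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat prompt with the model's chat template.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a quick sort algorithm in Python."},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# The full 32,768-token context is available; generation length here is modest.
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)
]
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

Loading a GPTQ checkpoint through `transformers` typically requires the `optimum` and `auto-gptq` (or `gptqmodel`) packages alongside a CUDA build of PyTorch; for inputs approaching the 32,768-token limit, see the linked [processing long texts](#processing-long-texts) section.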