feihu.hf committed ca6334f (1 parent: 74fbd20)

update README.md

Files changed (1): README.md (+8 -10)
README.md CHANGED
@@ -20,13 +20,13 @@ tags:
 
 ## Introduction
 
-Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers; Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
+Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
 
 - Significantly improvements in **code generation**, **code reasoning** and **code fixing**. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o.
 - A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies.
 - **Long-context Support** up to 128K tokens.
 
-**This repo contains the instruction-tuned 7B Qwen2.5-Coder model in the GGUF FOrmat**, which has the following features:
+**This repo contains the instruction-tuned 7B Qwen2.5-Coder model in the GGUF Format**, which has the following features:
 - Type: Causal Language Models
 - Training Stage: Pretraining & Post-training
 - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
@@ -38,7 +38,7 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
 - Note: Currently, only vLLM supports YARN for length extrapolating. If you want to process sequences up to 131,072 tokens, please refer to non-GGUF models.
 - Quantization: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0
 
-For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
+For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
 
 ## Quickstart
 
@@ -76,9 +76,7 @@ For users, to achieve chatbot-like experience, it is recommended to commence in
 
 ## Evaluation & Performance
 
-Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder/).
-
-For quantized models, the benchmark results against the original bfloat16 models can be found [here](https://qwen.readthedocs.io/en/latest/benchmark/quantization_benchmark.html)
+Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
 
 For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
 
@@ -88,10 +86,10 @@ If you find our work helpful, feel free to give us a cite.
 
 ```
 @article{hui2024qwen2,
-title={Qwen2. 5-Coder Technical Report},
-author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
-journal={arXiv preprint arXiv:2409.12186},
-year={2024}
+title={Qwen2. 5-Coder Technical Report},
+author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
+journal={arXiv preprint arXiv:2409.12186},
+year={2024}
 }
 @article{qwen2,
 title={Qwen2 Technical Report},
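
The Quickstart steps the README refers to are not part of these hunks. As a rough, non-authoritative sketch of how a GGUF quant from this repo can be run locally — here via the llama-cpp-python bindings rather than whatever the README's own Quickstart prescribes, and with an illustrative filename that should be checked against the repo's actual file list — something like the following works:

```python
# Minimal sketch: run a local Qwen2.5-Coder GGUF quant via llama-cpp-python.
# Assumptions: `pip install llama-cpp-python`, and the .gguf filename below is
# illustrative only -- substitute whichever quant (q4_K_M, q5_K_M, q8_0, ...)
# you actually downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-coder-7b-instruct-q4_k_m.gguf",  # hypothetical local path
    n_ctx=4096,        # context window for this session; raise it if you need more
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a quick sort algorithm in Python."},
    ],
    max_tokens=512,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```

Recent llama-cpp-python releases apply the chat template stored in the GGUF metadata when formatting the messages list, so no manual prompt templating is needed here; for the YaRN-extended 131,072-token context mentioned above, the non-GGUF repos remain the recommended route.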