---
license: apache-2.0
tags:
- java
- qwen2
- qwen2.java
---

# Pure quantizations of `Qwen2-Math-1.5B-Instruct` for [qwen2.java](https://github.com/mukel/qwen2.java).

In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure, e.g. the `output.weight` tensor is usually quantized with Q6_K instead of Q4_0.

A pure Q4_0 quantization can be generated from a high precision (F32, F16, BFLOAT16) .gguf source with the quantize utility from llama.cpp as follows:

```
./quantize --pure ./Qwen2-Math-1.5B-Instruct-F16.gguf ./Qwen2-Math-1.5B-Instruct-Q4_0.gguf Q4_0
```
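
The resulting pure Q4_0 file can then be loaded directly by qwen2.java. As a minimal sketch, assuming qwen2.java exposes the same `--model` and `--chat` options as its sibling llama3.java launcher, a chat session can be started via jbang:

```
# Sketch: chat with the pure Q4_0 quantization using qwen2.java.
# The flags below are assumptions based on mukel's sibling llama3.java project.
jbang Qwen2.java --model Qwen2-Math-1.5B-Instruct-Q4_0.gguf --chat
```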

Original model: [https://huggingface.co/Qwen/Qwen2-Math-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-Math-1.5B-Instruct)
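
If no high precision .gguf source is at hand, one can be exported from the original Hugging Face checkpoint with llama.cpp's convert_hf_to_gguf.py script. A sketch, assuming a local clone of the checkpoint and a recent llama.cpp checkout (older checkouts name the script convert-hf-to-gguf.py):

```
# Sketch: export an F16 .gguf from the original checkpoint; paths are assumptions.
python convert_hf_to_gguf.py ./Qwen2-Math-1.5B-Instruct \
  --outfile ./Qwen2-Math-1.5B-Instruct-F16.gguf --outtype f16
```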

## Model Details

For more details, please refer to the original [blog post](https://qwenlm.github.io/blog/qwen2-math/) and [GitHub repo](https://github.com/QwenLM/Qwen2-Math).