---
license: mit
language:
- en
---
My own (ZeroWw) quantizations.
Output and embed tensors are kept at f16.
All other tensors are quantized to q5_k or q6_k.
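
For reference, here is a minimal sketch of how you could verify this tensor layout yourself, assuming the `gguf` Python package that ships with llama.cpp; the file name `model.f16.q6.gguf` is a placeholder, not an actual file from this repo:

```python
# Sketch: inspect per-tensor quantization types in one of these GGUF files
# using the `gguf` Python package from llama.cpp (pip install gguf).
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("model.f16.q6.gguf")  # placeholder path, adjust to the downloaded file

type_counts = Counter()
for tensor in reader.tensors:
    # tensor_type is a GGMLQuantizationType enum member (e.g. F16, Q6_K, Q5_K)
    type_counts[tensor.tensor_type.name] += 1
    if "output" in tensor.name or "token_embd" in tensor.name:
        # output and embedding tensors should report F16
        print(f"{tensor.name}: {tensor.tensor_type.name}")

# Expected: mostly Q6_K (or Q5_K) tensors, with the output/embed tensors at F16
print(type_counts)
```
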
Result:
Both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization,
and they perform as well as pure f16.
Note:
For now, this model must be run with this fork of llama.cpp: https://github.com/mnlife/llama.cpp
The corresponding PR will later be merged into the main branch of llama.cpp.