---
license: mit
language:
- en
---
My own (ZeroWw) quantizations.

Output and embedding tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.

Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as the pure f16.
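As a rough illustration, a mixed quantization along these lines can be produced with llama.cpp's `llama-quantize` tool by keeping the output and token-embedding tensors at f16 while quantizing the rest. This is only a sketch of the general approach, not the exact commands used for this model; the file names are placeholders, and the flags are assumed to behave as in mainline llama.cpp.

```python
# Sketch (assumption): reproduce a similar mixed quantization with llama.cpp's
# llama-quantize tool. Paths and output file names are placeholders.
import subprocess

SRC = "model.f16.gguf"  # full-precision (f16) source GGUF

# Keep output and token-embedding tensors at f16; quantize all other tensors.
for quant_type, suffix in (("Q5_K", "q5"), ("Q6_K", "q6")):
    dst = f"model.f16.{suffix}.gguf"  # e.g. model.f16.q5.gguf
    subprocess.run(
        [
            "./llama-quantize",
            "--output-tensor-type", "f16",
            "--token-embedding-type", "f16",
            SRC,
            dst,
            quant_type,
        ],
        check=True,
    )
```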
Note: as of now, to run this model you must use https://github.com/mnlife/llama.cpp. Later on, the PR will be merged into the main branch of llama.cpp.