ZeroWw committed
Commit aa1ed19
Parent: a844567

Update README.md

Files changed (1): README.md +4 -0
README.md CHANGED
@@ -12,3 +12,7 @@ all other tensors quantized to q5_k or q6_k.
  Result:
  both f16.q6 and f16.q5 are smaller than q8_0 standard quantization
  and they perform as well as the pure f16.
+
+ Note:
+ as of now, to run this model you must use: https://github.com/mnlife/llama.cpp
+ Later on, the PR will be merged into the main branch of llama.cpp.