ktoprakucar committed
Commit eec63b3 · verified · Parent(s): 9a3b64e

Update README.md

Files changed (1): README.md (+0, -2)
README.md CHANGED
@@ -15,8 +15,6 @@ A quantized version of [Granite Guardian 3.1 2B](https://huggingface.co/ibm-gran
 
 Quantization is done by [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
-P.S. The llama.cpp library encountered issues during model initialization in both Python and llama-server modes, even with the quantized 8B version from other distributors. However, you can use [LM Studio](https://lmstudio.ai/) for inference!
-
 
 ## Model Summary (from original repository)
 
 
15
 
16
  Quantization is done by [llama.cpp](https://github.com/ggerganov/llama.cpp).
17
 
 
 
18
 
19
  ## Model Summary (from original repository)
20