Update README.md
README.md CHANGED
@@ -11,3 +11,5 @@ ggml Alpaca model is from https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/
 the two models also can be loaded by the [llama.cpp](https://github.com/ggerganov/llama.cpp) project.
 
 InferLLM support the ChatGLM model, the chatglm-q4 is the int4 quantized model from [chatglm-6b](https://huggingface.co/THUDM/chatglm-6b)
+
+InferLLM support the baichuan model, the baichuan-q4 is the int4 quantized model from [baichuan](https://huggingface.co/fireballoon/baichuan-vicuna-7b)
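
The model weights referenced in the diff above are hosted on Hugging Face. As a minimal sketch (not part of the commit), the repository linked for the baichuan model could be fetched with the `huggingface_hub` Python package; the repo id comes from the README link, while the download location and any file names inside the repo are assumptions:

```python
# Sketch: download the model repository referenced in the README above.
# Assumes `huggingface_hub` is installed (pip install huggingface_hub);
# the repo id is taken from the README link, the returned path is wherever
# huggingface_hub caches the snapshot locally.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="fireballoon/baichuan-vicuna-7b")
print("model files downloaded to:", local_dir)
```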