zsolx2 committed · Commit 6e0bd17 · 1 Parent(s): d8c6bb0

Update README.md

Files changed (1):
  1. README.md +1 -0
README.md CHANGED
@@ -16,6 +16,7 @@ tags:
  # zsolx2/SambaLingo-Hungarian-Chat-Q4_0-GGUF

  This model was converted to GGUF format from [`sambanovasystems/SambaLingo-Hungarian-Chat`](https://huggingface.co/sambanovasystems/SambaLingo-Hungarian-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/sambanovasystems/SambaLingo-Hungarian-Chat) for more details on the model.
+ This model was created to be used with GPT4All ver >= 3.0.

  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
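The diff above is cut off before the actual commands, but the "Use with llama.cpp" section it touches describes the standard GGUF-my-repo workflow: install llama.cpp via Homebrew, then run the model straight from the Hub. A minimal sketch of that workflow follows; the GGUF filename passed to `--hf-file` is an assumption, so check the repo's file list for the real name.

```shell
# Install llama.cpp via Homebrew (works on macOS and Linux)
brew install llama.cpp

# Run inference directly from the Hugging Face repo; llama-cli downloads
# the GGUF file on first use. NOTE: the filename below is an assumption --
# verify it against the files actually present in the repo.
llama-cli --hf-repo zsolx2/SambaLingo-Hungarian-Chat-Q4_0-GGUF \
  --hf-file sambalingo-hungarian-chat.Q4_0.gguf \
  -p "Szia! Hogy vagy?"
```

The `--hf-repo`/`--hf-file` flags make llama-cli fetch and cache the quantized model itself, so no manual download step is needed.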