Update README.md
README.md
CHANGED
````diff
@@ -1,12 +1,16 @@
 ---
-base_model:
+base_model: pythainlp/KhanomTanLLM-3B-Instruct
 library_name: transformers
 tags:
 - llama-cpp
 - gguf-my-repo
+license: apache-2.0
+language:
+- th
+- en
 ---
 
-# wannaphong/
+# wannaphong/KhanomTanLLM-3B-Instruct-Q2_K-GGUF
 This model was converted to GGUF format from pythainlp/KhanomTanLLM-3B-Instruct using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/pythainlp/KhanomTanLLM-3B-Instruct) for more details on the model.
 
@@ -48,4 +52,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo wannaphong/KhanomTanLLM-3B-Instruct-Q2_K-GGUF --hf-file ok_llm-q2_k.gguf -c 2048
-```
+```
````
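For context on the `llama-server` command shown in the diff: once the server is up, llama.cpp exposes an OpenAI-compatible HTTP API (by default on `http://localhost:8080`). A minimal sketch of querying the served model, assuming the default host and port and using an illustrative Thai prompt:

```shell
# Send a chat-completion request to a running llama-server instance.
# Assumes the server was started as in the README and listens on the
# default port 8080; the prompt below is just an example.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "สวัสดีครับ แนะนำตัวหน่อย"}
        ],
        "max_tokens": 128
      }'
```

The response is a JSON object in the OpenAI chat-completions shape, so existing OpenAI-compatible clients can point at this endpoint without code changes.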