Update README.md
README.md
CHANGED
@@ -14,7 +14,7 @@ base_model:
 - NousResearch/Hermes-2-Theta-Llama-3-8B
 - hfl/llama-3-chinese-8b-instruct-v2
 ---
-
+# This model is experimental, so results cannot be guaranteed.
 # wwe180/Llama3-15B-lingyang-v0.1-Q6_K-GGUF
 This model was converted to GGUF format from [`wwe180/Llama3-15B-lingyang-v0.1`](https://huggingface.co/wwe180/Llama3-15B-lingyang-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/wwe180/Llama3-15B-lingyang-v0.1) for more details on the model.
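For anyone trying the converted model, here is a minimal sketch of loading a Q6_K GGUF file like this one with the `llama-cpp-python` bindings. The filename and generation parameters are assumptions for illustration, not taken from the repo; check the actual file published in wwe180/Llama3-15B-lingyang-v0.1-Q6_K-GGUF.

```python
from llama_cpp import Llama

# Load the quantized model. The GGUF filename below is assumed; replace it
# with the file actually shipped in the Q6_K repo.
llm = Llama(
    model_path="llama3-15b-lingyang-v0.1-q6_k.gguf",
    n_ctx=8192,       # context window size (assumed; adjust to taste)
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# Run a single chat turn and print the reply.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself briefly."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```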