Update README.md
Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download 24bean/Llama-2-ko-7B-GGUF llama-2-ko-7b_q8_0.gguf --local-dir . --local-dir-use-symlinks False
```

Or you can download `llama-2-ko-7b.gguf`, the non-quantized model, with:

```shell
huggingface-cli download 24bean/Llama-2-ko-7B-GGUF llama-2-ko-7b.gguf --local-dir . --local-dir-use-symlinks False
```
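Each of the commands above fetches a single file. If you want every file in the repo (all quantization levels) in one go, omitting the filename argument makes `huggingface-cli` download the whole repository; this repo-wide form is a sketch based on the commands above, not part of the original instructions:

```shell
# Omitting the filename argument tells huggingface-cli to fetch every
# file in 24bean/Llama-2-ko-7B-GGUF into the current directory.
huggingface-cli download 24bean/Llama-2-ko-7B-GGUF --local-dir . --local-dir-use-symlinks False
```

Note that this downloads all quantizations at once, which can be many times the size of a single `.gguf` file.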

## Example `llama.cpp` command