TheBloke committed
Commit 4d30683
1 Parent(s): 3b1060b

Update README.md

Files changed (1):
  1. README.md +2 -7
README.md CHANGED
@@ -67,17 +67,12 @@ Don't expect any third-party UIs/tools to support them yet.
 
 I use the following command line; adjust for your tastes and needs:
 
-```
-./main -t 18 -m stable-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -r "### Human:" -i
-```
-Change `-t 18` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
-
-If you want to enter a prompt from the command line, use `-p <PROMPT>` like so:
-
 ```
 ./main -t 18 -m stable-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -r "### Human:" -p "### Human: write a story about llamas ### Assistant:"
 ```
 
+Change `-t 18` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
+
 ## How to run in `text-generation-webui`
 
 GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.
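
The line this commit adds tells readers to set `-t` to their physical core count rather than the logical CPU count reported under hyperthreading. A minimal sketch of computing that value automatically, assuming a Linux system where `lscpu` is available (the `PHYS_CORES` variable name is illustrative, not from the README):

```
# Physical cores = sockets * cores per socket; lscpu reports both fields.
# PHYS_CORES is a hypothetical helper variable, not a llama.cpp flag.
PHYS_CORES=$(( $(lscpu | awk -F: '/^Socket\(s\)/ {print $2}') * $(lscpu | awk -F: '/^Core\(s\) per socket/ {print $2}') ))
./main -t "$PHYS_CORES" -m stable-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -r "### Human:" -p "### Human: write a story about llamas ### Assistant:"
```

On the README's 8-core/16-thread example, `lscpu` reports 1 socket with 8 cores per socket, so this resolves to `-t 8`, matching the suggested value.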