TheBloke committed on
Commit 27cf5ed · 1 Parent(s): 40290f3

Update README.md

Files changed (1)
  1. README.md +18 -46
README.md CHANGED
@@ -19,14 +19,13 @@ license: other
 
  # Falcon 40B-Instruct GGML
 
- These files are GGML format model files for [Falcon 40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct).
 
- GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp)
- * [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
- * [ctransformers](https://github.com/marella/ctransformers)
 
  ## Repositories available
 
@@ -37,29 +36,20 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
  <!-- compatibility_ggml start -->
  ## Compatibility
 
- ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
-
- I have quantised the files for these 'original' methods using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
-
- They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
-
- ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
-
- These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.
 
- They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to come over the next few days.
-
- ## Explanation of the new k-quant methods
 
- The new methods available are listed below; a worked bpw example follows the list:
- * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
- * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
- * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
- * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
- * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
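As a worked example, take GGML_TYPE_Q4_K, assuming (as in llama.cpp's k-quant layout; this detail is not spelled out above) that each super-block also stores one fp16 scale and one fp16 min: a super-block covers 8 × 32 = 256 weights, stored as 256 × 4 = 1024 bits of quantized weights, plus 8 × (6 + 6) = 96 bits of block scales and mins, plus 2 × 16 = 32 bits of fp16 super-block values. That is (1024 + 96 + 32) / 256 = 4.5 bpw, matching the figure above; the same accounting with a single fp16 scale gives 6.5625 bpw for GGML_TYPE_Q6_K.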
 
- Refer to the Provided Files table below to see what files use which methods, and how.
  <!-- compatibility_ggml end -->
 
  ## Provided files
@@ -70,25 +60,7 @@ Refer to the Provided Files table below to see what files use which methods, and
  | Falcon-40b-Instruct.ggmlv3.q5_0.bin | q5_0 | 5 | 28.77 GB | 31.27 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
  | Falcon-40b-Instruct.ggmlv3.q5_1.bin | q5_1 | 5 | 31.38 GB | 33.88 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
 
- **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
-
- ## How to run in `llama.cpp`
-
- I use the following command line; adjust it for your tastes and needs:
-
- ```
- ./main -t 10 -ngl 32 -m Falcon-40b-Instruct.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
- ```
- Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
-
- Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.
-
- If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`, as in the example below.
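For example, the same command in chat mode might look like this (a sketch; it simply swaps `-p` for `-i -ins` as described above, with the filename taken from the Provided files table):

```
./main -t 10 -ngl 32 -m Falcon-40b-Instruct.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```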
-
- ## How to run in `text-generation-webui`
-
- Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
  <!-- footer start -->
  ## Discord
 
  # Falcon 40B-Instruct GGML
 
+ These files are **experimental** GGML format model files for [Falcon 40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct).
 
+ At the time of writing, these GGML files will **not** work in llama.cpp, or in any UI or library.
+
+ They currently work only with the basic command-line test tool from a fork of the ggml repo.
+
+ They are therefore uploaded purely for initial evaluation and experimentation. Support for these GGMLs should improve in the near future.
 
  ## Repositories available
 
  <!-- compatibility_ggml start -->
  ## Compatibility
 
+ To build the CLI tool needed to run these GGML files, follow these steps:
 
+ ```
+ git clone https://github.com/jploski/ggml falcon-ggml
+ cd falcon-ggml
+ git checkout falcon40b
+ mkdir build && cd build && cmake .. && cmake --build . --config Release
+ ```
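Assuming the fork keeps ggml's usual CMake layout (an assumption, not stated in this commit), the compiled example binaries land in `bin/` inside the `build` directory, which is why the next command is run from there.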
 
+ Then run:
+ ```
+ bin/falcon -m /workspace/process/wizard-falcon40b/ggml/Falcon-40b-Instruct.ggmlv3.q4_0.bin -t 10 -n 200 -p "write a story about llamas"
+ ```
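The `-m` path above is local to the machine the files were quantized on; point it at wherever you downloaded the `.bin` file. The other flags appear to follow the same conventions as the llama.cpp example above: `-t` for threads, `-n` for the number of tokens to generate, and `-p` for the prompt. A minimal sketch with a hypothetical download path:

```
bin/falcon -m ~/models/Falcon-40b-Instruct.ggmlv3.q4_0.bin -t 8 -n 200 -p "write a story about llamas"
```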
 
 
 
  <!-- compatibility_ggml end -->
 
  ## Provided files
 
  | Falcon-40b-Instruct.ggmlv3.q5_0.bin | q5_0 | 5 | 28.77 GB | 31.27 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
  | Falcon-40b-Instruct.ggmlv3.q5_1.bin | q5_1 | 5 | 31.38 GB | 33.88 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
 
+ A q8_0 file will be provided shortly. There is currently an issue preventing it from working; once that is fixed, it will be uploaded.
 
  <!-- footer start -->
  ## Discord