Update README.md
README.md CHANGED
@@ -29,7 +29,7 @@ The model has ~6.6B parameters and a vocabulary of 50,335 tokens. It is a founda
<h3>Quantization</h3>

-The released checkpoint is quantized in 8-bit, so that it can easily be loaded and used for training and inference on ordinary hardware, and it requires the installation of the <b>transformers</b> library version >= 4.30.1 and the <b>bitsandbytes</b> library, version >= 0.37.2
+The released checkpoint is quantized in 8-bit, so that it can easily be loaded and used for training and inference on ordinary hardware like consumer GPUs, and it requires the installation of the <b>transformers</b> library, version >= 4.30.1, and the <b>bitsandbytes</b> library, version >= 0.37.2.

On Windows operating systems, the <b>bitsandbytes-windows</b> module also needs to be installed on top. However, it appears that the module has not yet been updated with some recent features, such as the possibility to save 8-bit quantized models.
In order to include this feature, you can install the fork in [this repo](https://github.com/francesco-russo-githubber/bitsandbytes-windows), using:
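For orientation, here is a minimal usage sketch for a checkpoint quantized this way, assuming the requirements above are installed. It is not taken from this README: the pip commands in the comments, the causal-LM model class, and the placeholder model id are assumptions.

```python
# Hedged sketch, not the README's own instructions.
# Assumed setup (versions taken from the text above):
#   pip install "transformers>=4.30.1" "bitsandbytes>=0.37.2"
# On Windows, the bitsandbytes-windows module (or the linked fork) is needed on top.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-8bit-model"  # placeholder, not the real checkpoint id

tokenizer = AutoTokenizer.from_pretrained(model_id)

# load_in_8bit=True keeps the weights in 8-bit through bitsandbytes;
# device_map="auto" places them on the available GPU(s).
# AutoModelForCausalLM is an assumption about the model type.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,
    device_map="auto",
)

# Quick smoke test: generate a few tokens from a short prompt.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```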
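On Windows, the point of the fork above is being able to save the 8-bit quantized model back to disk. A minimal sketch of that step, assuming the fork is installed (follow the command given in the README) and reusing the hypothetical `model` and `tokenizer` objects from the previous snippet:

```python
# Hedged sketch: assumes the bitsandbytes-windows fork linked above is installed
# (one illustrative possibility: pip install git+https://github.com/francesco-russo-githubber/bitsandbytes-windows,
#  but follow the command the README actually gives) and reuses `model` and
# `tokenizer` from the previous snippet.
save_dir = "my-8bit-checkpoint"  # placeholder output directory

# With transformers >= 4.30.1 and bitsandbytes >= 0.37.2, save_pretrained can
# serialize 8-bit quantized weights; this saving step is the feature the fork
# adds on Windows.
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)

# The saved directory can be reloaded in 8-bit with from_pretrained, as above.
```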