Update README.md
README.md CHANGED
@@ -101,6 +101,4 @@ But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia)

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so there's a speed-versus-quality tradeoff you'll have to weigh.

-The I-quants are *not* compatible with Vulkan, which is also an AMD option, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
-
-Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
+The I-quants are *not* compatible with Vulkan, which is also an AMD option, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
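If you do end up on the Vulkan build and want to act on the note above, the practical move is to grab a K-quant instead of an I-quant. A minimal sketch of doing that with huggingface-cli (the repo and file names below are placeholders, not actual files from this repo):

```shell
# Hypothetical example: pull a Vulkan-friendly K-quant instead of an I-quant.
# Replace the repo and filename with the ones listed in the model card you're using.
huggingface-cli download bartowski/Example-Model-GGUF \
  --include "Example-Model-Q4_K_M.gguf" \
  --local-dir ./
```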