Update README.md
README.md CHANGED
@@ -108,7 +108,4 @@ But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia)
 
 These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
 
-The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
-
-Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
-
+The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
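The kept paragraph boils down to a small compatibility rule of thumb, so here is a minimal sketch that restates it in Python. It is not part of the commit: the function name and backend labels are illustrative, and treating cuBLAS as compatible is an inference from the truncated hunk context rather than something stated outright in the visible text.

```python
# Illustrative sketch only: restates the I-quant compatibility notes from the
# README text above as a lookup. The function and labels are hypothetical,
# not an official llama.cpp or LM Studio API.

def iquant_advice(backend: str) -> str:
    """Return the README's guidance on I-quants for a given backend label."""
    b = backend.strip().lower()
    if b == "vulkan":  # spelled "Vulcan" in the README text; also used on AMD cards
        return "Not compatible with I-quants; use K-quants or a rocBLAS/ROCm build instead."
    if b in {"cublas", "rocblas", "rocm"}:
        # Inferred: the README tells AMD users to double check they are on the
        # rocBLAS build, and the hunk context mentions cuBLAS for sub-Q4 sizes.
        return "I-quants should work on this build."
    if b in {"cpu", "metal"}:
        return "I-quants work, but slower than their K-quant equivalents (speed vs quality tradeoff)."
    return "Unknown backend; check which build of your inference engine you are actually running."

if __name__ == "__main__":
    for label in ("rocBLAS", "Vulkan", "Metal"):
        print(f"{label}: {iquant_advice(label)}")
```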