macadeliccc committed
Commit 37c900b
1 Parent(s): 2a74d95

Update README.md

Files changed (1)
  1. README.md +6 -0
README.md CHANGED
@@ -16,6 +16,8 @@ If this 2x7b model is loaded in 4 bit the hellaswag score is .8270 which is high
 
 The process is outlined in this [notebook](https://github.com/cognitivecomputations/laserRMT/blob/main/examples/laser-dolphin-mixtral-2x7b.ipynb)
 
+ **These quants will result in unpredictable behavior; I am working on new quants now that I have updated the model.**
+
 Quantizations provided by [TheBloke](https://huggingface.co/TheBloke/laser-dolphin-mixtral-2x7b-dpo-GGUF)
 
@@ -23,6 +25,10 @@ Quantizations provided by [TheBloke](https://huggingface.co/TheBloke/laser-dolphi
 
 This model follows the same prompt format as the aforementioned model.
 
+ However, there have been reports that this causes errors, even though both models are ChatML models.
+
+ The provided example code does not use this format.
+
 Prompt format:
 
 ```