Update README.md
README.md CHANGED

@@ -61,7 +61,7 @@ print(decoded[0])
 
 ## Bias, Risks and Limitations
 
-
+Azzurro has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
 responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition
 of the corpus was used to train the base model (mistralai/Mistral-7B-v0.2), however it is likely to have included a mix of Web data and technical sources
 like books and code.
@@ -76,7 +76,7 @@ like books and code.
 
 ## Quantized versions
 
 We have published as well the 4 bit and 8 bit versions of this model:
-https://huggingface.co/MoxoffSpA/
+https://huggingface.co/MoxoffSpA/AzzurroQuantized
 
 ## The Moxoff Team