Moxoff committed on
Commit
ada0fca
1 Parent(s): 3129cb0

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -61,7 +61,7 @@ print(decoded[0])
 
 ## Bias, Risks and Limitations
 
-Pompei has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
+Azzurro has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
 responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition
 of the corpus was used to train the base model (mistralai/Mistral-7B-v0.2), however it is likely to have included a mix of Web data and technical sources
 like books and code.
@@ -76,7 +76,7 @@ like books and code.
 ## Quantized versions
 
 We have published as well the 4 bit and 8 bit versions of this model:
-https://huggingface.co/MoxoffSpA/Pompei-Quantized
+https://huggingface.co/MoxoffSpA/AzzurroQuantized
 
 ## The Moxoff Team
 