Commit c3e0237
Parent(s): 718ef20
Update README.md

README.md CHANGED
@@ -18,6 +18,13 @@ license: apache-2.0
 
 # Model Card for Notus 7B
 
+<div align="center">
+<img src="https://cdn-uploads.huggingface.co/production/uploads/60f0608166e5701b80ed3f02/LU-vKiC0R7UxxITrwE1F_.png"/>
+<p style="text-align: center;">
+Image was artificially generated by Dalle-3 via ChatGPT Pro
+</p>
+</div>
+
 Notus is going to be a collection of fine-tuned models using DPO, similarly to Zephyr, but mainly focused
 on the Direct Preference Optimization (DPO) step, aiming to incorporate preference feedback into the LLMs
 when fine-tuning those. Notus models are intended to be used as assistants via chat-like applications, and