---
library_name: transformers
tags: []
---

# Mistral 7b Self-Alignment DPO Model

The Mistral 7b Self-Alignment DPO Model is an adapter for Mistral 7b, fine-tuned for self-alignment and harmlessness with the Direct Preference Optimization (DPO) technique. It was trained on the [Mistral Self-Alignment Preference Dataset](https://huggingface.co/datasets/August4293/Preference-Dataset).

Detailed information about the DPO fine-tuning process and its application to self-alignment is available on the corresponding [GitHub page](https://github.com/August-murr/Lab/tree/main/Mistral%20Self%20Alignment).
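
Because this repository ships an adapter rather than full model weights, it is typically loaded on top of the base checkpoint with the `peft` library. The sketch below is illustrative, not the author's exact code: the base checkpoint id and the adapter repo id are assumptions, so substitute this model's actual Hub id.

```python
# Minimal usage sketch; the two repo ids below are assumptions.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint for "Mistral 7b"
adapter_id = "August4293/mistral-7b-self-alignment-dpo"  # hypothetical id; use this model's actual repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the DPO adapter to the base weights

prompt = "How should I respond to an angry customer?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
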
## Model Details:
- **Base Model:** Mistral 7b
- **Fine-Tuning Purpose:** Self-Alignment and Harmlessness
- **Fine-Tuning Method:** Direct Preference Optimization (DPO), as sketched below
- **Fine-Tuning Dataset:** [Mistral Self-Alignment Preference Dataset](https://huggingface.co/datasets/August4293/Preference-Dataset)
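
For reference, a DPO adapter like this one can be produced with TRL's `DPOTrainer`. The sketch below is an illustration, not the author's training script: it assumes a recent TRL release, that the preference dataset exposes the `prompt`/`chosen`/`rejected` columns `DPOTrainer` expects, and hypothetical LoRA and training hyperparameters.

```python
# Illustrative DPO fine-tuning sketch; hyperparameters and dataset columns are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
train_dataset = load_dataset("August4293/Preference-Dataset", split="train")

# Train a LoRA adapter instead of full weights (hypothetical LoRA settings).
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)

# beta controls how far the tuned policy may drift from the reference model.
args = DPOConfig(output_dir="mistral-7b-dpo-adapter", beta=0.1, per_device_train_batch_size=1)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL releases
    peft_config=peft_config,
)
trainer.train()
trainer.save_model()  # with peft_config set, this saves only the adapter weights
```

Passing `peft_config` makes TRL wrap the base model in a LoRA adapter before optimization, which matches the adapter-only format of this repository.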