Update README.md

*phi2-pro* is a fine-tuned version of **[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)** on the **[argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)** preference dataset, using *Odds Ratio Preference Optimization (ORPO)*. The model has been trained for 1 epoch.

## LazyORPO

This model has been trained using **[LazyORPO](https://colab.research.google.com/drive/19ci5XIcJDxDVPY2xC1ftZ5z1kc2ah_rx?usp=sharing)**, a Colab notebook that makes the training process much easier. It is based on the [ORPO paper](https://huggingface.co/papers/2403.07691).
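
Below is a minimal sketch of what an ORPO fine-tuning run of this kind can look like with TRL's `ORPOTrainer`. It illustrates the general recipe rather than the exact LazyORPO notebook; the hyperparameters and dataset preprocessing shown here are assumptions.

```python
# Illustrative ORPO training sketch (not the exact LazyORPO notebook).
# Hyperparameters and dataset handling below are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # phi-2's tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_id)

# ORPOTrainer expects "prompt", "chosen" and "rejected" text columns;
# argilla/dpo-mix-7k stores chat-formatted pairs, so some preprocessing
# (e.g. applying a chat template) may be needed first.
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

config = ORPOConfig(
    output_dir="phi2-pro-orpo",
    num_train_epochs=1,           # the card states 1 epoch
    beta=0.1,                     # weight of the odds-ratio term (assumed)
    per_device_train_batch_size=2,
    learning_rate=8e-6,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

Because ORPO folds the preference signal directly into the supervised objective, no separate reference model is needed, which keeps the training loop simple.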



#### What is ORPO?

Odds Ratio Preference Optimization (ORPO) proposes a new method to train LLMs by combining SFT and alignment into a single objective (loss function), achieving state-of-the-art results.
Some highlights of this technique are:
* Mistral ORPO achieves 12.20% on AlpacaEval 2.0, 66.19% on IFEval, and 7.32 on MT-Bench, outperforming Hugging Face Zephyr Beta
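
As a rough sketch of the idea, ORPO adds an odds-ratio penalty on top of the usual SFT (negative log-likelihood) loss. The snippet below illustrates the shape of that objective given the average log-probabilities of the chosen and rejected responses; the function and variable names are mine, and `beta` is the weighting factor assumed from the paper.

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, chosen_nll, beta=0.1):
    """Illustrative ORPO objective: SFT loss plus a log-odds-ratio penalty.

    chosen_logps / rejected_logps: average log-probabilities the model assigns
    to the chosen and rejected responses. chosen_nll: the usual SFT negative
    log-likelihood on the chosen response.
    """
    # odds(y|x) = p / (1 - p), computed in log space for numerical stability
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))

    # penalize the model when the rejected response has higher odds than the chosen one
    odds_ratio_term = -F.logsigmoid(log_odds_chosen - log_odds_rejected)
    return (chosen_nll + beta * odds_ratio_term).mean()
```

Because the preference term is folded into the SFT loss, ORPO needs neither a separate reward model nor a frozen reference model during training.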

#### Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The repository id and prompt below are placeholders; point them at the
# phi2-pro checkpoint on the Hugging Face Hub and your own input.
model_id = "phi2-pro"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Explain the difference between supervised fine-tuning and preference alignment."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

## Evaluation

### COMING SOON