Update README.md
README.md CHANGED
@@ -18,8 +18,8 @@ tags:
NeuralBeagle14-7B is a DPO fine-tune of [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) using the [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference dataset and my DPO notebook from [this article](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac).

It is based on a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
-* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
-* [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)
+* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1), based on jondurbin's [repo](https://github.com/jondurbin/bagel) and [jondurbin/bagel-v0.3](https://huggingface.co/datasets/jondurbin/bagel-v0.3)
+* [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp), based on [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)

Thanks [Argilla](https://huggingface.co/argilla) for providing the dataset and the training recipe [here](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp). 💪
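For readers who want to see roughly what the DPO step described in this README looks like in code, here is a minimal sketch using Hugging Face's `trl` library. It is an illustration only, not the exact recipe from the linked notebook: it assumes a trl ~0.7-style `DPOTrainer` API, the `to_prompt_format` helper and its column handling are hypothetical placeholders, and the hyperparameters are assumed values; the real settings, chat template, and LoRA configuration live in the notebook/article referenced above.

```python
# Minimal sketch of a DPO fine-tune with trl.
# Assumptions (not from this commit): trl ~0.7.x DPOTrainer API, that the
# preference dataset is reshaped into prompt/chosen/rejected columns, and
# placeholder hyperparameters. See the linked notebook for the real recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mlabonne/Beagle14-7B"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Mistral-style tokenizers often have no pad token
model = AutoModelForCausalLM.from_pretrained(base)

dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")

def to_prompt_format(row):
    # Hypothetical helper: fold the system + input fields into a single prompt
    # string. Column names are assumed; the linked notebook applies a proper
    # chat template here instead of plain concatenation.
    return {
        "prompt": (row.get("system", "") + "\n" + row["input"]).strip(),
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }

dataset = dataset.map(to_prompt_format)

training_args = TrainingArguments(
    output_dir="neuralbeagle-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-5,     # assumed value, not taken from this commit
    max_steps=200,
    logging_steps=10,
)

trainer = DPOTrainer(
    model,
    ref_model=None,         # trl builds a frozen reference copy when None
    args=training_args,
    beta=0.1,               # DPO temperature; assumed value
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```

This is only meant to make the "DPO fine-tune on a preference dataset" sentence concrete; the merge step itself (LazyMergekit) is a separate process configured in the Colab linked above.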