hamishivi committed
Commit a4631a2
1 Parent(s): a759e6f

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -21,7 +21,7 @@ Tulu V2.5 is a series of models trained using DPO and PPO starting from the [Tul
 This model is trained on UltraFeedback using DPO, with the overall score used to determine the chosen and rejected completions.
 
 For more details, read the paper:
-[Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback](https://link.todo).
+[Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback](https://arxiv.org/abs/2406.09279).
 
 
 ## Model description
@@ -79,6 +79,7 @@ If you find Tulu 2.5 is useful in your work, please cite it with:
   title={{Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback}},
   author={{Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi}},
   year={2024},
+  eprint={2406.09279},
   archivePrefix={arXiv},
   primaryClass={cs.CL}
 }
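
The description line ("with the overall score used to determine the chosen and rejected completions") implies a simple pair-construction recipe. Below is a minimal sketch of that recipe, assuming the openbmb/UltraFeedback schema (`instruction`, `completions`, `overall_score`, `response`); the paper's actual preprocessing may differ.

```python
# Hedged sketch: one plausible way to derive DPO (chosen, rejected) pairs from
# UltraFeedback overall scores. Field names ("instruction", "completions",
# "overall_score", "response") are assumptions based on the openbmb/UltraFeedback
# schema and should be verified against the actual dataset.
from datasets import load_dataset

ds = load_dataset("openbmb/UltraFeedback", split="train")

def to_preference_pair(example):
    # Rank the candidate completions by their overall score.
    ranked = sorted(
        example["completions"],
        key=lambda c: float(c["overall_score"]),
        reverse=True,
    )
    # Highest-scoring completion becomes "chosen", lowest becomes "rejected".
    return {
        "prompt": example["instruction"],
        "chosen": ranked[0]["response"],
        "rejected": ranked[-1]["response"],
    }

# Keep only prompts with at least two completions, so a pair can be formed.
pairs = (
    ds.filter(lambda ex: len(ex["completions"]) >= 2)
      .map(to_preference_pair, remove_columns=ds.column_names)
)
```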