phi-instruct-segment Model Card

Method

The segment reward model assigns rewards to semantically meaningful text segments rather than to whole responses or individual tokens. Segment boundaries are determined dynamically by thresholding token-level entropy. The model is trained on binary human preference labels, minimizing a Bradley-Terry loss in which each response's reward is the average of its segment rewards.
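
As a rough, hypothetical sketch of this setup (not the authors' released code), the snippet below cuts a new segment wherever next-token predictive entropy exceeds a threshold, averages per-segment rewards into a sequence-level reward, and applies the Bradley-Terry preference loss. The threshold value and all function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def entropy_segment_boundaries(logits: torch.Tensor, threshold: float = 2.0):
    """Return indices where a new segment starts, cutting wherever the
    next-token predictive entropy exceeds `threshold` (value illustrative)."""
    probs = F.softmax(logits, dim=-1)                       # (seq_len, vocab)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(-1)   # (seq_len,)
    cuts = (entropy > threshold).nonzero(as_tuple=True)[0]
    return [0] + (cuts + 1).tolist()  # segments begin at 0 and after each cut

def bradley_terry_loss(chosen_seg_rewards: torch.Tensor,
                       rejected_seg_rewards: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss on sequence rewards formed by averaging the
    per-segment rewards of the chosen and rejected responses."""
    r_chosen = chosen_seg_rewards.mean()
    r_rejected = rejected_seg_rewards.mean()
    # Maximize P(chosen > rejected) = sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(r_chosen - r_rejected)
```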

Architecture

Figure: architecture of the segment-level reward model.

Training

The phi-instruct-segment model is fine-tuned from microsoft/Phi-3-mini-4k-instruct on the hendrydong/preference_700K dataset.
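
A minimal usage sketch follows, assuming the checkpoint loads through transformers' AutoModelForSequenceClassification with a scalar reward head; the actual loading code may differ, so consult the repository files. Note that this scores the whole response with one scalar; the segment-level scoring described above requires the segmentation machinery from the paper.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "yyqoni/Phi-3-mini-4k-instruct-segment-rm-700k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumption: the reward head is exposed as a single-label classification head.
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=1, torch_dtype=torch.bfloat16
)
model.eval()

messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits.squeeze().item()  # scalar sequence reward
print(f"reward: {reward:.3f}")
```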

Citation

If you find this model or our research useful, please consider citing our paper:

@misc{yin2025segmentingtextlearningrewards,
      title={Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model}, 
      author={Yueqin Yin and Shentao Yang and Yujia Xie and Ziyi Yang and Yuting Sun and Hany Awadalla and Weizhu Chen and Mingyuan Zhou},
      year={2025},
      eprint={2501.02790},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.02790},
}
Model size: 3.72B parameters · Tensor type: BF16 (safetensors)