LLaMA-3-8B-SFR-Iterative-DPO-Concise-R

This is a concise variant of Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R: a conciseness penalty is applied during training to encourage shorter responses.
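As a rough usage sketch (assuming the model follows the standard Llama-3 chat template and loads through the Transformers library; the generation settings below are illustrative only), the model can be run as follows:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id as listed on this card.
model_id = "Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-Concise-R"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

# Build a single-turn chat prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain iterative DPO in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding; adjust sampling parameters as needed.
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```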

Ethics disclaimer for Salesforce AI models, data, code

This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our standard AUP and AI AUP.

