Text Classification
Transformers
PyTorch
English
deberta-v2
reward-model
reward_model
RLHF
Inference Endpoints
theblackcat102 and lvwerra (HF staff) committed on
Commit 1f543c0
1 Parent(s): d93adaf

Python formatting (#2)


- Python formatting (00793a4cfb8d5d41497d5b45ca9a6044d9c2ac12)


Co-authored-by: Leandro von Werra <lvwerra@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -34,7 +34,7 @@ All models are train on these dataset with a same split seed across datasets (if
 
 # How to use
 
-```
+```python
 from transformers import AutoModelForSequenceClassification, AutoTokenizer
 reward_name = "OpenAssistant/reward-model-deberta-v3-large"
 rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
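The diff above only covers loading the model and tokenizer; for context, a minimal usage sketch, assuming the standard `transformers` sequence-classification API. The question/answer strings are illustrative, not part of this commit:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-deberta-v3-large"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
tokenizer = AutoTokenizer.from_pretrained(reward_name)

# Score a (question, answer) pair; higher logits mean the reward model
# prefers the answer. The example pair here is hypothetical.
question = "Explain nuclear fusion like I am five."
answer = "Nuclear fusion is when two small atoms stick together and release energy."
inputs = tokenizer(question, answer, return_tensors="pt")
score = rank_model(**inputs).logits[0].item()
print(score)
```

The tokenizer encodes the question and answer as a text pair in a single sequence, which is how DeBERTa-style reward models are typically fed for preference scoring.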