# Psychology LLaMA RLHF 🦙🙋♂️
This is a LLaMA-7B-based language model trained in the field of psychology using Reinforcement Learning from Human Feedback (RLHF). To learn more about RLHF, I recommend [this](https://huggingface.co/blog/rlhf) great blog post from Hugging Face. For insights into the process of fine-tuning with RLHF, there is another great blog post, also from Hugging Face, [here](https://huggingface.co/blog/stackllama)!

**Links**: [Reward model](https://huggingface.co/samhog/RLHF-psychology-alpaca-rm); [Base model](https://huggingface.co/samhog/psychology-llama-merged)

### Background 💡
This model was developed as part of a thesis project in the field of machine learning and psychology. The goal of the thesis was to compare reinforcement learning from human feedback and from AI feedback. Evaluation showed that the model performed significantly better (average score 2.70 out of 4) than the base model (1.22), but significantly worse than ChatGPT (3.20). Further, the evaluation found no significant difference between this model and the [RLAIF model](https://huggingface.co/samhog/psychology-llama-rlaif) (2.98). The model was trained on a total of 2,000 data points for 4 hours on a single A100 GPU through Google Colab. Even though this model sometimes outputs appropriate answers, it suffers from *the repetition problem*.
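
For readers curious what the RLHF training step roughly looks like in code, here is a minimal sketch in the style of the StackLLaMA blog post linked above, using the `trl` library. The hyperparameters, prompts, generation length, and the wrapping of the reward model as a text-classification pipeline are illustrative assumptions, not the exact setup used for this model.

```
import torch
from transformers import LlamaTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
tokenizer.pad_token = tokenizer.eos_token

# Policy to be tuned; trl wraps the causal LM with a value head for PPO.
model = AutoModelForCausalLMWithValueHead.from_pretrained("samhog/psychology-alpaca-merged")

# The reward model linked above, used here through a text-classification
# pipeline (an assumed interface for this sketch).
reward_pipe = pipeline("text-classification", model="samhog/RLHF-psychology-alpaca-rm")

ppo_trainer = PPOTrainer(PPOConfig(batch_size=2), model, tokenizer=tokenizer)

# One PPO step: generate an answer per prompt, score the answers with the
# reward model, and update the policy toward higher-reward answers.
prompts = ["How can I manage exam anxiety?", "What is cognitive behavioral therapy?"]
query_tensors = [tokenizer(p, return_tensors="pt").input_ids.squeeze(0) for p in prompts]
response_tensors = [
    ppo_trainer.generate(q, max_new_tokens=64)[0][q.shape[0]:]  # keep only the new tokens
    for q in query_tensors
]
answers = [tokenizer.decode(r, skip_special_tokens=True) for r in response_tensors]
rewards = [torch.tensor(out["score"]) for out in reward_pipe(answers)]
ppo_trainer.step(query_tensors, response_tensors, rewards)
```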

### Paper 📜
"Comparison Between RLHF and RLAIF in Fine-Tuning a Large Language Model"

When the paper is available, it will be linked here!

### Usage 🏂
As a base model, it is recommended to use [samhog/psychology-alpaca-merged](https://huggingface.co/samhog/psychology-alpaca-merged). Note that this combination does produce some answers suffering from the repetition problem, but not as frequently as [samhog/psychology-llama-merged](https://huggingface.co/samhog/psychology-llama-merged) does.
```
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

# Load the LLaMA tokenizer (the class is LlamaTokenizer in released
# versions of transformers, not LLaMATokenizer).
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Load the merged base model in 8-bit (requires bitsandbytes) and
# spread it across the available devices.
model = LlamaForCausalLM.from_pretrained(
    "samhog/psychology-alpaca-merged",
    load_in_8bit=True,
    device_map="auto",
)

# Apply the RL fine-tuned PEFT adapter on top of the base model.
model = PeftModel.from_pretrained(model, "samhog/psychology-llama-rlaif")
```
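
Once loaded, generation might look like the following sketch. The Alpaca-style instruction template and the sampling parameters are assumptions for illustration, not settings documented in this card.

```
# Hypothetical inference example; the instruction template follows the
# common Alpaca format (an assumption, not documented in this card).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nHow can I manage exam anxiety?\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generation_config = GenerationConfig(
    temperature=0.7,   # assumed sampling settings
    top_p=0.9,
    do_sample=True,
    max_new_tokens=128,
)
output = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```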

**Authors:**
Samuel Höglund, samhog@kth.se;
Josef Khedri, jkhedri@kth.se