
# Psychology Alpaca

This is a LLaMA-7B language model fine-tuned on 10,000 psychology-related prompts and answers generated by ChatGPT. The model was trained on a single A100 GPU on Google Colab. It shows some knowledge of the field of psychology and generally performs better than its LLaMA-7B base model.
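
Below is a minimal usage sketch with the Hugging Face `transformers` library. The repo id `samhog/psychology-alpaca` and the Alpaca-style prompt template are assumptions, not confirmed by this card; adjust both to the actual Hub path and training format.

```python
# Minimal usage sketch for loading and querying the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "samhog/psychology-alpaca"  # hypothetical Hub id; replace with the real path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Alpaca-style instruction template (assumed from the model's name;
# adjust if the training data used a different format).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is cognitive dissonance?\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```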

## Background

This model was developed as part of a thesis project at the intersection of machine learning and psychology. It served as the base model for further fine-tuning with reinforcement learning. The goal of the thesis was to compare reinforcement learning from human feedback (RLHF) with reinforcement learning from AI feedback (RLAIF). When the paper is available, it will be linked here!

Authors: Samuel Höglund, samhog@kth.se; Josef Khedri, jkhedri@kth.se