---
datasets:
- unalignment/toxic-dpo-v0.1
---

# llama2_xs_460M_uncensored

## Model Details

[llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental) DPO-finetuned on [unalignment/toxic-dpo-v0.1](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1) to remove alignment.

### Model Description

- **Developed by:** Harambe Research
- **Model type:** llama2
- **Finetuned from model:** [llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental)

### Out-of-Scope Use

Don't use this to do bad things. Bad things are bad.

### Recommendations

Users (both direct and downstream) should be aware of the risks, biases, and limitations of the model.

## How to Get Started with the Model

The model can be run with [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
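If you would rather load the model directly with the `transformers` library, a minimal generation sketch follows. Note that the repo id below is a placeholder assumption, since this card does not state the model's hub path; substitute the actual id.

```python
# Minimal inference sketch using Hugging Face transformers.
# NOTE: the repo id is a placeholder assumption; replace it with
# this model's actual hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "harambe-research/llama2_xs_460M_uncensored"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,  # keep generations short for a 460M model
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At roughly 460M parameters, the model should run comfortably on CPU without quantization.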