---
license: mit
---

Base Model: Llama 7B

Llama DEUS v3 was trained on the largest dataset I've used yet, including:

- GPTeacher: General Instruct, Code Instruct, Roleplay Instruct
- My unreleased Roleplay V2 Instruct
- GPT4-LLM Uncensored + Unnatural Instructions
- WizardLM Uncensored
- CamelAI's 200k Biology, 200k Physics, 200k Chemistry, and 50k Math GPT4 datasets
- CodeAlpaca

This model was trained for 4 epochs over 1 day of training. It is a rank-128 LoRA that targets the attention heads, the LM head, and the MLP layers.
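To illustrate what a rank-128 LoRA means for a single targeted weight matrix, here is a minimal NumPy sketch of the low-rank update. The matrix shapes and the `alpha` scaling value are hypothetical, not the actual training config; only the rank (128) comes from this card.

```python
import numpy as np

# Hypothetical layer shapes; the rank r = 128 matches this model's LoRA.
d_out, d_in, r = 512, 512, 128

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

alpha = 128  # assumed scaling hyperparameter
# LoRA adds a low-rank correction to the frozen weight:
W_adapted = W + (alpha / r) * (B @ A)

# With B zero-initialized, training starts exactly at the base weights.
assert np.allclose(W_adapted, W)
```

Only `A` and `B` are trained, so each adapted matrix adds `r * (d_in + d_out)` parameters instead of `d_in * d_out`, which is why LoRA fine-tuning fits in far less memory than full fine-tuning.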