Prompt format
It's based on the base Llama-1 model.
It responds well to either "Chat log between user and bot:" style prompts or Alpaca/chatbot-style instruction prompts (see the sketch at the end of this section).

The FimFicOmegaV3 dataset was cleaned of as much anomalous text as possible, and the model was fed the beginning-most sequence of tokens from each story as instruction/output pairs that were (hopefully) meant to influence responses to requests for factual information. It was also trained on a small custom instruction/output dataset of examples generated by Bing, answering both trivia about the FiM fictional universe and real-world trivia on a number of topics. Each output was prefixed with the name of a character from that universe followed by a colon and delivered in that character's signature speech pattern, with the question (most of the time) addressing the character giving the answer, in the hopes of further reinforcing good chatbot behavior.

Both datasets were trained via LoRA; I'm not sure of the exact training parameters. The FP16 version no longer exists (too many models, not enough drive space).
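As a rough illustration of the two prompt styles, here's a minimal sketch in Python. The exact templates are assumptions based on the common "Chat log" convention and the standard Alpaca instruction template, not verbatim strings from the training data, so adjust to taste.

```python
# Sketch of the two prompt styles this model responds to.
# Both templates below are assumptions, not exact training-time formats.

def chat_log_prompt(user_message: str) -> str:
    # "Chat log between user and bot:" style prompt
    return (
        "Chat log between user and bot:\n"
        f"User: {user_message}\n"
        "Bot:"
    )

def alpaca_prompt(instruction: str) -> str:
    # Standard Alpaca-style instruction/response template (assumed)
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

if __name__ == "__main__":
    print(chat_log_prompt("Twilight Sparkle, what is the capital of France?"))
    print(alpaca_prompt("Tell me a fact about the FiM universe."))
```

Addressing a character by name in the prompt (as in the first example) mirrors the custom trivia dataset described above, so it may help elicit in-character answers.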