---
license: apache-2.0
datasets:
- cognitivecomputations/dolphin
- cognitivecomputations/dolphin-coder
- ise-uiuc/Magicoder-OSS-Instruct-75K
- teknium/openhermes
- migtissera/Synthia-v1.3
language:
- en
tags:
- merge
- fine-tuned
---

# Model Description

This is a fine-tuned model based on EmbeddedLLM/Mistral-7B-Merge-14-v0.3, trained for 9,984 steps. The datasets used are:

* dolphin
* dolphin-coder
* Magicoder-OSS-Instruct-75K
* openhermes
* Synthia-v1.3

## Chat Template

This model uses the ChatML prompt format:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

# Training

The model is scheduled to be fine-tuned for 3 epochs on 4 A100s using axolotl.

# Shout-Out to OSS

Thank you to the Open Source AI community for bringing together marvelous code frameworks and datasets.
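
# Example Usage

A minimal inference sketch for the ChatML prompt format described above. The repo id below is a placeholder, and it is assumed that the tokenizer ships a ChatML chat template; if it does not, the prompt can be assembled manually as in the commented fallback.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-finetuned-model"  # placeholder repo id (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# Renders the ChatML turns and appends the "<|im_start|>assistant" generation prompt,
# assuming a ChatML chat template is configured for this tokenizer.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Fallback if no chat template is configured:
# prompt = (
#     "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
#     "<|im_start|>user\nWrite a Python function that reverses a string.<|im_end|>\n"
#     "<|im_start|>assistant\n"
# )

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```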