---
language:
- en
pipeline_tag: text-generation
tags:
- shining-valiant
- shining-valiant-2
- valiant
- valiant-labs
- llama
- llama-3
- llama-3-instruct
- llama-3-instruct-70b
- 70b
- conversational
- chat
- instruct
model_type: llama
license: llama3
---


**This model is legacy - we recommend [Shining Valiant 2](https://huggingface.co/ValiantLabs/Llama3.1-70B-ShiningValiant2) for Llama 3.1 70b!**


![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/EXX7TKbB-R6arxww2mk0R.jpeg)


Shining Valiant 2 is a chat model built on Llama 3 70b, finetuned on our data for friendship, insight, knowledge and enthusiasm.

- Finetuned on [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for best available general performance
- Trained on our data, focused on science, engineering, technical knowledge, and structured reasoning


## Version

This is the **2024-04-20** release of Shining Valiant 2 for Llama 3 70b.

We're working on more Llama 3 releases, including Shining Valiant and our Build Tools set of models. We're excited to bring these to everyone soon!


## Prompting Guide

Shining Valiant 2 uses the [Llama 3 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) prompt format:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_msg_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|>
```

Example input:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are Shining Valiant, a highly capable chat AI.<|eot_id|><|start_header_id|>user<|end_header_id|>

Hi, can you write me a cover letter for a data analyst position?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

A sketch of applying this format with the `transformers` chat template is included at the end of this card.


## WARNING: text-generation-webui

When using Llama 3 Instruct models (including Shining Valiant 2) with [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main), note that a current bug in webui can result in incorrect reading of the model's ending tokens, causing unfinished outputs and incorrect structure.

If you encounter this issue, a [temporary workaround](https://github.com/oobabooga/text-generation-webui/issues/5885) is to edit Shining Valiant 2's tokenizer_config file, changing

```
"eos_token": "<|end_of_text|>",
```

to

```
"eos_token": "<|eot_id|>",
```


## The Model

Shining Valiant 2 is built on top of Llama 3 70b Instruct, the highest-performance open-source model currently available.

Our private data adds specialist knowledge and Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn!


![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)


Shining Valiant 2 is created by [Valiant Labs.](http://valiantlabs.ca/)

[Check out our HuggingFace page to see all of our models!](https://huggingface.co/ValiantLabs)

[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)

We care about open source. For everyone to use. We encourage others to finetune further from our models.
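

## Usage Sketch: transformers

Below is a minimal sketch of running the model with the Hugging Face `transformers` library, using the tokenizer's chat template to produce the Llama 3 Instruct format shown in the Prompting Guide above. The repo id `ValiantLabs/Llama3-70B-ShiningValiant2` and the generation settings are assumptions; adjust them for your setup and hardware.

```python
# Minimal sketch: load Shining Valiant 2 and generate a chat response.
# The repo id below is an assumption based on this card; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Llama3-70B-ShiningValiant2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires sufficient GPU memory for a 70b model
)

messages = [
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Hi, can you write me a cover letter for a data analyst position?"},
]

# apply_chat_template emits the <|begin_of_text|>/<|start_header_id|>... structure
# from the Prompting Guide and appends the assistant header for generation.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"),  # stop at end-of-turn
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Passing `<|eot_id|>` as the generation `eos_token_id` mirrors the intent of the webui workaround above: generation stops at the end of the assistant's turn rather than running past it.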
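
If you'd rather script the text-generation-webui workaround than edit the file by hand, a small sketch like the following patches `tokenizer_config.json` in a local copy of the model. The file path is an assumption; point it at wherever your copy of the model files lives.

```python
# Hypothetical helper: apply the eos_token workaround to a local tokenizer_config.json.
import json
from pathlib import Path

# Assumed path: adjust to your local model directory.
config_path = Path("models/Llama3-70B-ShiningValiant2/tokenizer_config.json")

config = json.loads(config_path.read_text())
config["eos_token"] = "<|eot_id|>"  # was "<|end_of_text|>"
config_path.write_text(json.dumps(config, indent=2))
```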