---
base_model: LumiOpen/Viking-7B
language:
- en
- fi
- sv
- 'no'
- da
- is
- nn
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
datasets:
- mpasila/Magnum-V2-Mix
- anthracite-org/Stheno-Data-Filtered
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
library_name: peft
---
It seems fine, but I should probably add some instruction prompts to the dataset, or train it on an instruct dataset first and then train it on the RP data, to make it better.

Prompt format is: ChatML (see the example below)

Merged model: [mpasila/Viking-Magnum-v0.1-7B](https://huggingface.co/mpasila/Viking-Magnum-v0.1-7B)

# Uploaded model

- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** LumiOpen/Viking-7B

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[](https://github.com/unslothai/unsloth)
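## Prompt format

Since the model uses ChatML, a prompt is expected to look roughly like this (the system message is only illustrative):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Write a short story about a fox.<|im_end|>
<|im_start|>assistant
```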
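## Usage

A minimal sketch for loading the merged model linked above with Transformers and building the ChatML prompt via `apply_chat_template` (this assumes the tokenizer ships a ChatML chat template; adjust `torch_dtype` and `device_map` for your hardware):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mpasila/Viking-Magnum-v0.1-7B"  # merged model linked above

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # assumption: bf16-capable GPU; use float16 otherwise
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Kirjoita lyhyt tarina ketusta."},
]
# Builds the ChatML-formatted prompt and appends the assistant turn header.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```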