---
base_model: mistralai/Mistral-Nemo-Base-2407
license: apache-2.0
datasets:
- BeaverAI/Nemo-Inst-Tune-ds
language:
- en
library_name: transformers
---
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/mistral-doryV2-12b-GGUF
This is a quantized version of [BeaverAI/mistral-doryV2-12b](https://huggingface.co/BeaverAI/mistral-doryV2-12b), created using llama.cpp.
# Original Model Card

# Dory 12b (v2)

A redone instruct finetune of Mistral Nemo 12b's base model. *Not* (E)RP-focused; leave that to drummer.
![image/gif](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/BiBtgV_WEIha72WqETWfk.gif)

thanks to twisted again for the compute :3
## Prompting

alpaca-like:

```
### System:
[Optional system prompt]
### Instruction:
[Query]
### Response:
[Response]</s>
### Instruction:
[...]
```
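For multi-turn use, the template above can be assembled programmatically. Here is a minimal sketch; the helper name and signature are illustrative, not part of the model card, and the single-newline spacing between sections is an assumption based on the template as shown:

```python
def build_prompt(instruction, system=None, history=None):
    """Assemble the alpaca-like prompt format described above.

    history: optional list of (instruction, response) pairs from earlier turns.
    Past responses are closed with </s>, matching the template.
    """
    parts = []
    if system:
        parts.append(f"### System:\n{system}")
    for user, assistant in history or []:
        parts.append(f"### Instruction:\n{user}")
        parts.append(f"### Response:\n{assistant}</s>")
    parts.append(f"### Instruction:\n{instruction}")
    parts.append("### Response:\n")  # leave open for the model to complete
    return "\n".join(parts)

print(build_prompt("What is the capital of France?",
                   system="You are a helpful assistant."))
```

The resulting string can be passed as the raw prompt to any GGUF runtime (e.g. llama.cpp) with `</s>` as a stop sequence.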
## Training details

Rank 64 QDoRA, trained on the following data mix:

- All of [kalomaze/Opus_Instruct_3k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_3k)
- All conversations with a reward model rating above 5 in [Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered)
- 50k of [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned)
- All stories above 4.7 rating and published before 2020 in [Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered](https://huggingface.co/datasets/Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered)
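As a rough illustration, the threshold-based selection described in the list above could be sketched like this. The field names (`reward`, `rating`, `year`) are hypothetical and may not match the actual dataset schemas:

```python
def select_rows(rows, reward_min=None, rating_min=None, year_before=None):
    """Keep rows passing the given thresholds (hypothetical schema).

    reward_min:  minimum reward-model rating (e.g. 5 for the Magpie subset)
    rating_min:  minimum story rating (e.g. 4.7 for FallingThroughTheSkies)
    year_before: keep only rows published before this year (e.g. 2020)
    """
    kept = []
    for r in rows:
        if reward_min is not None and r.get("reward", 0) <= reward_min:
            continue
        if rating_min is not None and r.get("rating", 0) <= rating_min:
            continue
        if year_before is not None and r.get("year", 0) >= year_before:
            continue
        kept.append(r)
    return kept
```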