[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
# Dodona Pyg 15B 8K Preview
This fine-tune adds roughly 100MB of the v8p4 dataset on top of Dodona 15B 8K.
Dodona 15B 8K Preview is an experiment aimed at fan-fiction and character-AI use cases. It is built on StarCoder Plus for its 8K context length and further pretrained on a corpus of fan fiction and visual novels.
Lots of mistakes were made during the creation of this model, but we didn't want to throw $300 of model training time out the window, so we are releasing this as a preview.
If you would like to see us continue building models like this, please consider donating by sponsoring us on GitHub via the link above or through [Buy me a coffee](https://www.buymeacoffee.com/winglian).
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org).
# Prompts
While this model was only minimally fine-tuned with USER: / ASSISTANT: prompts, it seems to respond better to Alpaca-style prompts:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
...

### Response:
```
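For example, here is a minimal sketch of wiring that prompt format into 🤗 Transformers. The repo id, the sample instruction, and the sampling settings are placeholders and assumptions, not part of this card; substitute this model's actual Hugging Face id.

```python
# Minimal sketch (not an official example): build the Alpaca-style prompt
# above and generate with Hugging Face Transformers.
# Assumptions: the repo id below is a placeholder, enough GPU memory is
# available for a 15B model, and `accelerate` is installed for device_map.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openaccess-ai-collective/<this-model-repo>"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Hypothetical instruction for illustration only.
instruction = "Write the opening scene of a story set at the Oracle of Dodona."

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)

# Decode only the newly generated tokens after the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```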
<img src="https://huggingface.co/openaccess-ai-collective/dodona-15b-preview/resolve/main/dodona.png" alt="oracle of dodona" width="600" height="500"/>