Release date: December 18th

Fine-tuned from the state-of-the-art (SOTA) RWKV v5 12B one-state base model. More details will be provided soon. The model is optimized for systems with 24 GB of VRAM and supports fp16, and it can be fine-tuned on a single A100 GPU. To run it, use the RWKV Runner tool.

Fine-tuned from the Mobius 12B base.
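As a rough illustration of the fp16 setup mentioned above, here is a minimal loading sketch using the `rwkv` pip package. The checkpoint path is a placeholder, and it is an assumption that this checkpoint loads with the stock package and the standard RWKV world vocabulary; RWKV Runner wraps a similar setup in a GUI.

```python
# Minimal loading sketch (assumptions: the stock `rwkv` pip package can load
# this checkpoint and the standard world vocabulary applies; the path is a
# placeholder).
import os
os.environ["RWKV_JIT_ON"] = "1"  # optional JIT speed-up

from rwkv.model import RWKV
from rwkv.utils import PIPELINE

# "cuda fp16" follows the fp16 claim above; adjust the strategy string
# (e.g. "cuda fp16i8") if the weights do not fit on your GPU.
model = RWKV(model="/path/to/RWKV-v5-12B-one-state-chat-16k", strategy="cuda fp16")
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")
```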

Usage

Important Notes

Because fine-tuning has overfit certain instructions and weakened others, use completion-style prompts or simulate dialogues, as in the example and code sketch below.

  • completion prompt = 'User: make this content longer:\nxxxxxx\n\nAssistant: ok, longer content is'
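Continuing from the loading sketch above, a hedged example of the completion-style prompt in use (the sampling parameters are illustrative, not tuned recommendations):

```python
from rwkv.utils import PIPELINE_ARGS

# `pipeline` is the PIPELINE instance from the loading sketch above.
prompt = "User: make this content longer:\nxxxxxx\n\nAssistant: ok, longer content is"

# Illustrative sampling settings; stopping on token 0 (end-of-text) is an assumption.
args = PIPELINE_ARGS(temperature=1.0, top_p=0.5, token_stop=[0])
print(pipeline.generate(prompt, token_count=200, args=args))
```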

Data format

<s>User:xxxx\n\n</s>Assistant:xxx\n\n</s>User:xxxx\n\n</s>Assistant:xxx\n\n</s>

For optimal performance, run this model with this data format and the matching RWKV vocabulary files.
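For illustration, a small hypothetical helper (not part of the release) that assembles dialogue turns into the format shown above:

```python
def format_dialogue(turns):
    """Assemble (role, text) turns into the documented chat format shown above
    (hypothetical helper, plain Python)."""
    parts = ["<s>"]
    for role, text in turns:
        # Each turn ends with two newlines followed by the </s> delimiter.
        parts.append(f"{role}:{text}\n\n</s>")
    return "".join(parts)

sample = format_dialogue([
    ("User", "make this content longer:\nxxxxxx"),
    ("Assistant", "ok, longer content is ..."),
])
print(sample)
```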
