---
language:
- zh
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- trl
- sft
- yi
base_model: cognitivecomputations/dolphin-2.9.1-yi-1.5-9b
datasets:
- TouchNight/HumanlikeRP
---

# HumanlikeRP

This is an attempt to build a humanlike chatbot, designed to give short replies like a real human. The attempt failed: the dataset used for training has weak context relevancy, so the model often generates irrelevant answers, and it also overfits.

### Chat Format

This model has been trained to use the ChatML format. A minimal usage sketch is shown at the end of this card.

```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
```

# Uploaded model

- **Developed by:** TouchNight
- **License:** apache-2.0
- **Finetuned from model:** cognitivecomputations/dolphin-2.9.1-yi-1.5-9b

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
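### Example usage

Below is a minimal sketch of how the ChatML prompt above might be assembled and used with the `transformers` library. The repo id `TouchNight/HumanlikeRP`, the character and user names, and the system prompt are illustrative assumptions, not part of the original card.

```python
# Minimal sketch: build a ChatML prompt by hand and generate a short reply.
# Assumptions: the model is hosted as "TouchNight/HumanlikeRP"; names and
# system prompt below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TouchNight/HumanlikeRP"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The card uses the character and user names as ChatML role tags,
# so the prompt is assembled manually rather than via a chat template.
char, user = "Alice", "Bob"  # placeholder names
prompt = (
    "<|im_start|>system\n"
    "You are Alice, a friendly roleplay character.<|im_end|>\n"
    f"<|im_start|>{user}\n"
    "Hi, how was your day?<|im_end|>\n"
    f"<|im_start|>{char}\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens (the reply in {{char}}'s turn).
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```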