Trained on over 20k instruction-following examples, all generated by GPT-4 or written by humans.
Dataset features:
- 1000 long, evolved conversations based on LIMA
- A subset of the correct-solution data from PRM800K
- A subset of CamelAI's Physics and Chemistry data
The model was trained with QLoRA using Axolotl.
The prompt format is Vicuna 1.1:
User: ...
Assistant: ...
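As a minimal sketch, a Vicuna-1.1-style prompt can be assembled by alternating the two role labels and leaving the final `Assistant:` turn open for the model to complete. The `build_prompt` helper below is hypothetical, not part of the model's tooling:

```python
def build_prompt(history, user_message):
    """Build a Vicuna-1.1-style prompt string.

    history: list of (user, assistant) message pairs from earlier turns.
    user_message: the new user message to append.
    """
    lines = []
    for user, assistant in history:
        lines.append(f"User: {user}")
        lines.append(f"Assistant: {assistant}")
    lines.append(f"User: {user_message}")
    # Leave the assistant turn open so the model generates the reply.
    lines.append("Assistant:")
    return "\n".join(lines)

print(build_prompt([], "What is QLoRA?"))
```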