This model is a fine-tuned version of TinyLlama-1.1B-intermediate-step-240k-503b, trained on the sam-mosaic/orca-gpt4-chatml dataset.
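Since the training data uses the ChatML format, prompts at inference time should follow the same template. A minimal usage sketch is below; the repo id is a hypothetical placeholder (substitute this model's actual Hub id):

```python
# Minimal ChatML inference sketch; the repo id below is a placeholder,
# not this model's actual Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/tinyllama-1.1b-orca-gpt4-chatml"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# ChatML prompt, matching the sam-mosaic/orca-gpt4-chatml training format
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain QLoRA in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```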
## Training
- Method: QLoRA (a sketch of a comparable setup follows this list)
- Quantization: fp16
- Time: 20h on an RTX 4090 (rented from runpod.io)
- Cost: About $15
- Based on: https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g
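The linked notebook contains the original training code. Below is a rough sketch of what a comparable QLoRA run looks like with peft, bitsandbytes, and TRL. The hyperparameters are illustrative rather than the ones actually used, the dataset's "text" column is an assumption, and the SFTTrainer keywords follow older TRL releases (newer ones move them into SFTConfig):

```python
# Illustrative QLoRA sketch, not the exact notebook linked above.
# Assumes an older TRL release where SFTTrainer takes dataset_text_field
# and max_seq_length directly.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTTrainer

base_model = "TinyLlama/TinyLlama-1.1B-intermediate-step-240k-503b"
dataset = load_dataset("sam-mosaic/orca-gpt4-chatml", split="train")

# QLoRA: 4-bit NF4 base weights with fp16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

# LoRA adapter config; r/alpha/dropout here are illustrative defaults
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    dataset_text_field="text",  # assumes ChatML samples live in a "text" column
    max_seq_length=2048,
)
trainer.train()
```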