
Built with Axolotl

Base model:

PY007/TinyLlama-1.1B-intermediate-step-480k-1T

Dataset:

Fine-tuned on the OpenOrca GPT-4 subset for 1 epoch, using the ChatML format.
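Since the model was trained on ChatML-formatted conversations, prompts at inference time should follow the same layout. A minimal sketch of building such a prompt (the system message below is just a placeholder, not something the training data prescribes):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user message in ChatML markers,
    leaving the prompt open at the assistant turn for generation."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are a helpful assistant.",          # placeholder system message
    "Explain what a transformer model is.",  # example user message
)
print(prompt)
```

The string returned by `chatml_prompt` can be tokenized and passed to the model directly; generation should be stopped at the `<|im_end|>` marker.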

Model License:

Apache 2.0, following the TinyLlama base model.

Quantisation:

Hardware and training details:

Hardware: 1x RTX A5000, ~16 hours to complete 1 epoch. The GPU was rented from autodl.com, and this fine-tune cost around $3. See https://wandb.ai/jeff200402/TinyLlama-Orca?workspace= for training logs and details.

Model size:

1.1B params, BF16 (safetensors)

Model tree for jeff31415/TinyLlama-1.1B-1T-OpenOrca:

Finetunes: 4 models
Quantizations: 7 models