Fine-tuning Llama-2-7B-hf on a Hindi dataset after transtokenization

This model was trained on a 24 GB RTX A5000 for 3 hours, using 1% of the zicsx/mC4-Hindi-Cleaned-3.0 dataset.
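For reference, a 1% training slice like the one described above can be loaded with the standard `datasets` split syntax. This is a minimal sketch; the exact slice and preprocessing used for this model are not documented here.

```python
from datasets import load_dataset

# Load a 1% slice of the Hindi mC4 corpus. The exact slice and
# preprocessing used for this model are not documented in this card.
dataset = load_dataset("zicsx/mC4-Hindi-Cleaned-3.0", split="train[:1%]")
print(dataset)  # inspect row count and column names
```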

We used Hugging Face PEFT (LoRA) with PyTorch for training.
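The sketch below shows what a PEFT-LoRA setup of this kind typically looks like. It is a hypothetical reconstruction: the rank, target modules, sequence length, and training hyperparameters are illustrative assumptions, not the values used to train this model, and a `text` column is assumed in the dataset as in mC4.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
base = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

lora = LoraConfig(
    r=16,                                 # assumed LoRA rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()

# Tokenize the 1% Hindi split; a "text" column is assumed, as in mC4.
data = load_dataset("zicsx/mC4-Hindi-Cleaned-3.0", split="train[:1%]")
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-hindi-lora",   # hypothetical output path
        per_device_train_batch_size=4,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```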

The transtokenization process is described in --
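Since the write-up above is truncated, the following is only a loose, hypothetical illustration of the general transtokenization idea: swapping in a target-language tokenizer and re-initializing the embedding matrix, copying vectors for tokens shared with the original vocabulary and mean-initializing the rest. The Hindi tokenizer path is a placeholder, and the actual procedure used for this model may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
old_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
new_tok = AutoTokenizer.from_pretrained("path/to/hindi-tokenizer")  # placeholder

# Keep a copy of the original embeddings before resizing.
old_emb = model.get_input_embeddings().weight.data.clone()
old_vocab = old_tok.get_vocab()
mean_emb = old_emb.mean(dim=0)

# Resize to the new vocabulary, then copy embeddings for tokens shared
# with the original vocabulary and mean-initialize everything else.
# An untied lm_head would need the same treatment.
model.resize_token_embeddings(len(new_tok))
new_emb = model.get_input_embeddings().weight.data
for token, new_id in new_tok.get_vocab().items():
    new_emb[new_id] = old_emb[old_vocab[token]] if token in old_vocab else mean_emb
```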

Model size: 6.74B params (Safetensors)
Tensor types: F32, BF16

Dataset used to train subhrokomol/Llama2-7B-Hindi-finetuned: zicsx/mC4-Hindi-Cleaned-3.0
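A minimal inference sketch for the published checkpoint follows; the prompt and sampling parameters are illustrative, and BF16 loading matches the tensor types listed above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "subhrokomol/Llama2-7B-Hindi-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "भारत की राजधानी"  # "The capital of India"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```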