---
language:
- en
- hi
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- yahma/alpaca-cleaned
- ravithejads/samvaad-hi-filtered
- HydraIndicLM/hindi_alpaca_dolly_67k
---
# TinyLlama-1.1B-Hinglish-LORA-v1.0 model
- **Developed by:** [Kiran Kunapuli](https://www.linkedin.com/in/kirankunapuli/)
- **License:** apache-2.0
- **Finetuned from model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0
- **Model config:**
```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 64,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 128,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = True,
    random_state = 42,
    use_rslora = True,
    loftq_config = None,
)
```
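- **Base model loading (illustrative sketch, not from the card):** The `model` passed to `get_peft_model` above would first be loaded with Unsloth's `FastLanguageModel.from_pretrained`; the 4-bit flag and dtype auto-detection below are assumptions, not settings stated in this card.
```python
from unsloth import FastLanguageModel

max_seq_length = 4096  # TinyLlama's native context is 2048; Unsloth extends it via RoPE scaling

# Load the base chat model; Unsloth applies RoPE scaling when max_seq_length
# exceeds the model's native context window.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    max_seq_length = max_seq_length,
    dtype = None,          # auto-detect: bfloat16 on Ampere+ GPUs, else float16
    load_in_4bit = True,   # assumption: QLoRA-style 4-bit base weights
)
```
  Setting `max_seq_length = 4096` here is what triggers the RoPE scaling mentioned in the note at the end of this card.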
- **Training parameters:**
```python
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = True,
    args = TrainingArguments(
        per_device_train_batch_size = 12,
        gradient_accumulation_steps = 16,
        warmup_ratio = 0.1,
        num_train_epochs = 1,
        learning_rate = 2e-4,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "paged_adamw_32bit",
        weight_decay = 0.001,
        lr_scheduler_type = "cosine",
        seed = 42,
        output_dir = "outputs",
        report_to = "wandb",
    ),
)
```
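- **Prompt formatting (assumed sketch):** The trainer reads a pre-built `"text"` column (`dataset_text_field = "text"`). The exact prompt template and how the three listed datasets were combined are not specified in this card; below is a plausible Alpaca-style sketch for `yahma/alpaca-cleaned` only (its `instruction`/`input`/`output` fields are real, the template itself is an assumption, and `tokenizer` comes from the loading sketch above).
```python
from datasets import load_dataset

# Assumed Alpaca-style template; the template actually used for this model may differ.
alpaca_prompt = """### Instruction:
{}

### Input:
{}

### Response:
{}"""

def formatting_func(examples):
    texts = []
    for instruction, inp, output in zip(
        examples["instruction"], examples["input"], examples["output"]
    ):
        # Append EOS so the model learns where a response ends.
        texts.append(alpaca_prompt.format(instruction, inp, output) + tokenizer.eos_token)
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split = "train")
dataset = dataset.map(formatting_func, batched = True)
```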
- **Training details:**
```
==((====))== Unsloth - 2x faster free finetuning | Num GPUs = 1
\\ /| Num examples = 15,464 | Num Epochs = 1
O^O/ \_/ \ Batch size per device = 12 | Gradient Accumulation steps = 16
\ / Total batch size = 192 | Total steps = 80
"-____-" Number of trainable parameters = 50,462,720
GPU = NVIDIA GeForce RTX 3090. Max memory = 24.0 GB.
Total time taken for 1 epoch - 2h:35m:28s
9443.5288 seconds used for training.
157.39 minutes used for training.
Peak reserved memory = 17.641 GB.
Peak reserved memory for training = 15.344 GB.
Peak reserved memory % of max memory = 73.504 %.
Peak reserved memory for training % of max memory = 63.933 %.
```
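- **Memory reporting (sketch):** Figures like the peak-reserved-memory numbers above are typically collected with PyTorch's CUDA memory counters; a minimal sketch (variable names are illustrative, not taken from the training script):
```python
import torch

gpu_stats = torch.cuda.get_device_properties(0)
max_memory = round(gpu_stats.total_memory / 1024 ** 3, 3)                    # total VRAM in GB
start_reserved = round(torch.cuda.max_memory_reserved() / 1024 ** 3, 3)      # before training

# ... trainer.train() runs here ...

peak_reserved = round(torch.cuda.max_memory_reserved() / 1024 ** 3, 3)       # after training
peak_for_training = round(peak_reserved - start_reserved, 3)

print(f"Peak reserved memory = {peak_reserved} GB.")
print(f"Peak reserved memory for training = {peak_for_training} GB.")
print(f"Peak reserved memory % of max memory = {round(peak_reserved / max_memory * 100, 3)} %.")
print(f"Peak reserved memory for training % of max memory = {round(peak_for_training / max_memory * 100, 3)} %.")
```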
This Llama-based model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
**[NOTE]** TinyLlama's native maximum sequence length is 2048; we use RoPE scaling with Unsloth to extend it to 4096.
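A minimal inference sketch with Transformers + PEFT follows. The Hub repo id for the adapter is assumed from the model name (adjust it to the actual adapter location), and the plain Hinglish prompt below is illustrative; the card does not specify an inference prompt format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "kirankunapuli/TinyLlama-1.1B-Hinglish-LORA-v1.0"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype = torch.float16, device_map = "auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

prompt = "Mujhe ek chhoti si kahani sunao."  # Hinglish: "Tell me a short story."
inputs = tokenizer(prompt, return_tensors = "pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens = 128)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))
```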
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)