<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`

```yaml
base_model: maywell/Llama-3-Ko-Luxia-Instruct
trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: "../data/output_fix_real.json"
    type: alpaca
    conversation: chatml
dataset_prepared_path: ../data/1min-luxia-data-pre
val_set_size: 0.1
output_dir: ../data/output/1min-luxia-8b

sequence_len: 1024
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 10
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 2e-6

train_on_inputs: false
group_by_length: false
bf16: auto
fp16: null
tf32: false

gradient_checkpointing: true
early_stopping_patience: null
resume_from_checkpoint: null
local_rank: null
logging_steps: 1
xformers_attention: null
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size: null
eval_max_new_tokens: 128
saves_per_epoch: 1
save_total_limit: 4
debug: true
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.0
special_tokens:
  pad_token: <|end_of_text|>
```

</details>
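The `type: alpaca` entry means each record in `../data/output_fix_real.json` is expected to follow the Alpaca instruction format. A minimal sketch of producing one such record (the field contents here are hypothetical, not from the actual dataset):

```python
import json

# One record in Alpaca format, as consumed by axolotl's `type: alpaca`
# dataset loader. The instruction/input/output text is illustrative only.
record = {
    "instruction": "Write a one-minute YouTube script about the topic below.",
    "input": "Morning stretching routines",
    "output": "Good morning, everyone! Today we're covering three quick stretches...",
}

# Axolotl accepts a JSON file containing a list of such records.
with open("output_fix_real.json", "w", encoding="utf-8") as f:
    json.dump([record], f, ensure_ascii=False, indent=2)
```

Training would then typically be launched with axolotl's CLI, e.g. `accelerate launch -m axolotl.cli.train config.yml` (the config filename is an assumption).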
# data/output/1min-luxia-8b
This model is a fine-tuned version of maywell/Llama-3-Ko-Luxia-Instruct on the modified maywell/ko_youtube_transcription_sample dataset. It achieves the following results on the evaluation set:
- Loss: 2.5280
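Assuming the reported loss is mean per-token cross-entropy in nats (the Transformers convention), this corresponds to a perplexity of exp(2.5280) ≈ 12.5 on the held-out 10% validation split.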
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 8
- total_train_batch_size: 56
- total_eval_batch_size: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 10
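The effective batch sizes above follow directly from the per-device settings; a quick sanity check (the token count assumes fully packed 1024-token sequences, which `sample_packing` only approximates):

```python
# Derive the effective batch sizes reported above from the config values.
micro_batch_size = 1             # train_batch_size per device
gradient_accumulation_steps = 8
num_devices = 7
sequence_len = 1024

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 56

total_eval_batch_size = micro_batch_size * num_devices
assert total_eval_batch_size == 7

# Approximate tokens consumed per optimizer step; with sample_packing
# enabled most sequences are close to fully packed (an assumption).
tokens_per_step = total_train_batch_size * sequence_len
print(tokens_per_step)  # 57344
```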
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9998 | 0.2051 | 1  | 3.0382 |
| 3.0081 | 0.4103 | 2  | 3.0379 |
| 2.9024 | 0.6154 | 3  | 3.0356 |
| 2.9814 | 0.8205 | 4  | 3.0280 |
| 2.9813 | 1.0256 | 5  | 3.0136 |
| 2.9137 | 1.1795 | 6  | 2.9918 |
| 2.9909 | 1.3846 | 7  | 2.9426 |
| 2.8925 | 1.5897 | 8  | 2.9047 |
| 2.825  | 1.7949 | 9  | 2.8790 |
| 2.8329 | 2.0    | 10 | 2.7949 |
| 2.6496 | 2.1538 | 11 | 2.7632 |
| 2.6857 | 2.3590 | 12 | 2.7388 |
| 2.679  | 2.5641 | 13 | 2.7193 |
| 2.6802 | 2.7692 | 14 | 2.6748 |
| 2.6269 | 2.9744 | 15 | 2.6452 |
| 2.5546 | 3.1282 | 16 | 2.6286 |
| 2.574  | 3.3333 | 17 | 2.6168 |
| 2.5548 | 3.5385 | 18 | 2.6054 |
| 2.5145 | 3.7436 | 19 | 2.5952 |
| 2.452  | 3.9487 | 20 | 2.5863 |
| 2.4647 | 4.1026 | 21 | 2.5786 |
| 2.423  | 4.3077 | 22 | 2.5715 |
| 2.4104 | 4.5128 | 23 | 2.5648 |
| 2.3664 | 4.7179 | 24 | 2.5592 |
| 2.4211 | 4.9231 | 25 | 2.5536 |
| 2.4291 | 5.0769 | 26 | 2.5492 |
| 2.3475 | 5.2821 | 27 | 2.5455 |
| 2.3665 | 5.4872 | 28 | 2.5417 |
| 2.3862 | 5.6923 | 29 | 2.5387 |
| 2.3784 | 5.8974 | 30 | 2.5360 |
| 2.354  | 6.0513 | 31 | 2.5343 |
| 2.3442 | 6.2564 | 32 | 2.5321 |
| 2.3499 | 6.4615 | 33 | 2.5312 |
| 2.3312 | 6.6667 | 34 | 2.5297 |
| 2.3551 | 6.8718 | 35 | 2.5289 |
| 2.3363 | 7.0256 | 36 | 2.5289 |
| 2.3691 | 7.2308 | 37 | 2.5284 |
| 2.3267 | 7.4359 | 38 | 2.5281 |
| 2.3389 | 7.6410 | 39 | 2.5281 |
| 2.1969 | 7.8462 | 40 | 2.5280 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
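A minimal sketch of loading the resulting model for inference with Transformers, assuming the published repository id `esunn/1min-scriptgen-luxia-8b` from the original page; the prompt and generation parameters are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The fine-tuned checkpoint; trust_remote_code mirrors the axolotl config.
model_id = "esunn/1min-scriptgen-luxia-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 was used during training (bf16: auto)
    device_map="auto",
    trust_remote_code=True,
)

# Hypothetical prompt in the Alpaca style used for fine-tuning.
prompt = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\nWrite a one-minute YouTube script about morning stretching.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```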