Axolotl config (axolotl version: 0.9.2):
base_model: giux78/zagreus-test-202000
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
strict: false
# === Datasets ===
streaming: true
datasets:
- path: /leonardo_work/EUHPC_A04_045/.data
type: chat_template
chat_template: tokenizer_default_fallback_llama3
field_messages: conversations
message_property_mappings:
role: from
content: value
roles_to_train: ["gpt", "assistant"]
train_on_eos: "turn"
# === Sequencing / packing ===
sequence_len: 4096
sample_packing: true
remove_unused_columns: false # <-- add this line
eval_sample_packing: false
pad_to_sequence_len: false
streaming_multipack_buffer_size: 10000
# === Optimization ===
optimizer: adamw_torch_fused
learning_rate: 2e-5
lr_scheduler: cosine
warmup_ratio: 0.1
weight_decay: 0.0
# === Batch (per GPU) ===
micro_batch_size: 1
gradient_accumulation_steps: 8
# Eff. batch = micro_batch_size * grad_accum * num_gpus = 1 * 8 * 32 = 256
# === Precisione / memoria ===
bf16: auto
tf32: true
flash_attention: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
# === FSDP (Axolotl uses fsdp_config; the "fsdp:" key is deprecated) ===
fsdp_config:
fsdp_sharding_strategy: FULL_SHARD # shards params, grads, and optimizer state
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_use_orig_params: false
fsdp_sync_module_states: true
fsdp_limit_all_gathers: true
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: false # enable only if VRAM is tight (see variant)
fsdp_state_dict_type: SHARDED_STATE_DICT # lighter checkpoints on multi-node clusters
# === Loop di training ===
num_epochs: 1 # with 170GB, a single pass is enough
# max_steps: 200000 # alternative: budget by steps/tokens
# === Eval / checkpoint ===
val_set_size: 0.01
evals_per_epoch: 5
save_steps: 2000 # save every 2,000 steps (set whatever value you prefer)
save_total_limit: 5
logging_steps: 20
# === Tracciamento ===
wandb_mode: "offline"
wandb_project: zagreus-350M-sft
wandb_entity: mii-llm
wandb_name: sft
# === Token speciali ===
special_tokens:
pad_token: <|end_of_text|>
eos_token: <|end_of_text|>
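For reference, a minimal sketch of what the dataset section above declares: each record exposes a `conversations` list whose turns use `from`/`value` keys, and `message_property_mappings` remaps those to the `role`/`content` keys a chat template expects. The sample record and the `role_map` below are illustrative, not Axolotl internals, and the snippet assumes the tokenizer ships a chat template (the config otherwise falls back to the Llama 3 template).

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("giux78/zagreus-test-202000")

# Hypothetical ShareGPT-style record, matching field_messages: conversations
record = {
    "conversations": [
        {"from": "human", "value": "What is sample packing?"},
        {"from": "gpt", "value": "It concatenates short examples into one sequence."},
    ]
}

# message_property_mappings: role <- from, content <- value
# (the human/gpt -> user/assistant renaming is an assumption for illustration)
role_map = {"human": "user", "gpt": "assistant", "system": "system"}
messages = [
    {"role": role_map.get(turn["from"], turn["from"]), "content": turn["value"]}
    for turn in record["conversations"]
]

# Render with the tokenizer's chat template; only assistant turns (and their EOS,
# per roles_to_train / train_on_eos) contribute to the loss during training.
print(tokenizer.apply_chat_template(messages, tokenize=False))
```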
model-out
This model is a fine-tuned version of giux78/zagreus-test-202000, trained on the conversational dataset referenced in the config above. It achieves the following results on the evaluation set:
- Loss: 2.2814
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 340
- num_epochs: 1.0
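As a quick consistency check on the values above, the sketch below reproduces the effective batch size and shows how the 340 warmup steps follow from warmup_ratio: 0.1, implying roughly 3,400 optimizer steps in total, which matches the step counts in the results table.

```python
# Effective batch size: micro_batch_size * grad_accum * num_gpus
micro_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 32
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 256

# Warmup: 340 steps at warmup_ratio 0.1 implies ~3,400 total optimizer steps,
# consistent with ~2,724 steps at epoch 0.80 in the results below.
warmup_ratio = 0.1
warmup_steps = 340
print(warmup_steps / warmup_ratio)  # 3400.0
```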
Training results
| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| No log        | 0.0003 | 1    | 2.1506          |
| 3.6166        | 0.2003 | 681  | 2.1390          |
| 3.4709        | 0.4006 | 1362 | 2.2716          |
| 3.4327        | 0.6008 | 2043 | 2.2737          |
| 3.4102        | 0.8011 | 2724 | 2.2814          |
Framework versions
- Transformers 4.56.2
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.22.1
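For a quick test of the resulting checkpoint, a hedged inference sketch using the Transformers version listed above; the repository id is an assumption (use whatever id the checkpoint is actually published under), and the prompt and generation settings are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giux78/zagreus-test-202000-sft"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```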