Built with Axolotl

See axolotl config

axolotl version: 0.9.2

base_model: ./ale_outputs/opendata-sft-debug-reg/checkpoint-1500/  # <- previous final checkpoint
strict: false
output_dir: ./ale_outputs/opendata-sft-lastmile
seed: 42


chat_template: llama3
datasets:
  - path: /leonardo_work/EUHPC_A04_045/training/opendata-1000000
    type: chat_template
    field_messages: conversation
    roles_to_train: ["assistant"]
    train_on_eos: turn

dataset_prepared_path: ./ale_outputs/dataset_cache/opendata-sft

# ---- Training (last-mile fine-tuning) ----
max_steps: 800                    # 500–800 steps to consolidate
lr_scheduler: constant_with_warmup
learning_rate: 9.0e-6             # keep the LR "alive" for a few hundred steps
warmup_ratio: 0.0
weight_decay: 0.005
max_grad_norm: 1.0

micro_batch_size: 1
gradient_accumulation_steps: 8
bf16: auto
flash_attention: true
gradient_checkpointing: true

eval_strategy: steps
eval_steps: 100
save_strategy: steps
save_steps: 200
save_total_limit: 4
val_set_size: 10000

# ---- Token ----
special_tokens:
  pad_token: <|end_of_text|>
  eos_token: <|eot_id|>          # critical for train_on_eos: turn

# ---- fsdp ---- (if you still need it)
fsdp_config:
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
  fsdp_backward_prefetch_policy: BACKWARD_PRE
  fsdp_state_dict_type: FULL_STATE_DICT
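
The config above does not show what a row of the opendata-1000000 dataset looks like on disk. The sketch below is an assumed minimal example of the `type: chat_template` / `field_messages: conversation` layout (the field names follow the config; the message content is invented). With `roles_to_train: ["assistant"]` only assistant tokens contribute to the loss, and `train_on_eos: turn` additionally trains the <|eot_id|> token that closes each assistant turn, which is why `eos_token` is overridden in the special_tokens section.

import json

# Hypothetical record matching the dataset settings above:
#   type: chat_template, field_messages: conversation, chat_template: llama3
record = {
    "conversation": [
        {"role": "user", "content": "What does the open-data portal publish?"},
        {"role": "assistant", "content": "Machine-readable public datasets with open licenses."},
    ]
}

# One JSONL line as it might appear in the raw dataset (illustrative content only).
print(json.dumps(record, ensure_ascii=False))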


ale_outputs/opendata-sft-lastmile

This model was fine-tuned from ./ale_outputs/opendata-sft-debug-reg/checkpoint-1500/ on the opendata-1000000 dataset (see the Axolotl config above). It achieves the following results on the evaluation set:

  • Loss: 2.2857

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

Supervised fine-tuning data from /leonardo_work/EUHPC_A04_045/training/opendata-1000000 (chat-format conversations), with a 10,000-example held-out validation split (val_set_size: 10000).

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 9e-06
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 32
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 256 (see the sanity-check sketch after this list)
  • total_eval_batch_size: 32
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: constant_with_warmup
  • training_steps: 800
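
As a sanity check, the batch-size figures above are consistent with each other and with the epoch column of the results table below. The ~590k-example train split size in this sketch is an inference from those numbers, not something stated in the card.

# Reproducing the reported batch-size and epoch numbers (sketch; the train
# split size is inferred from the epoch column, not stated in the card).
micro_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 32

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 256

examples_seen = 800 * total_train_batch_size   # 204,800 examples over the whole run
epoch_at_step_800 = 0.3455                     # from the results table
approx_train_split = examples_seen / epoch_at_step_800
print(total_train_batch_size, examples_seen, round(approx_train_split))  # 256 204800 ~592764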

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| No log        | 0.0004 | 1    | 2.3001          |
| 1.9716        | 0.0432 | 100  | 2.2965          |
| 1.9648        | 0.0864 | 200  | 2.2945          |
| 1.9901        | 0.1296 | 300  | 2.2928          |
| 2.0033        | 0.1728 | 400  | 2.2915          |
| 1.9634        | 0.2160 | 500  | 2.2898          |
| 1.9957        | 0.2592 | 600  | 2.2882          |
| 1.9692        | 0.3023 | 700  | 2.2868          |
| 1.9827        | 0.3455 | 800  | 2.2857          |
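
For readers who prefer perplexity, the validation losses above convert directly, assuming they are mean token-level cross-entropy in nats:

import math

# Validation loss -> perplexity: 2.3001 at step 1 vs 2.2857 at step 800.
print(math.exp(2.3001))  # ≈ 9.98
print(math.exp(2.2857))  # ≈ 9.83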

Framework versions

  • Transformers 4.56.2
  • Pytorch 2.5.1+cu121
  • Datasets 3.5.1
  • Tokenizers 0.22.1
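
The card does not include a usage snippet. The sketch below is a minimal, unofficial example that assumes the final checkpoint saved under the config's output_dir (./ale_outputs/opendata-sft-lastmile) is a standard Hugging Face causal-LM checkpoint whose tokenizer carries the llama3 chat template, and that generation should stop on <|eot_id|> as trained.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path taken from output_dir in the config above; adjust to wherever the
# final checkpoint actually lives.
path = "./ale_outputs/opendata-sft-lastmile"

tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Summarize what 'open data' means."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(
    input_ids,
    max_new_tokens=128,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"),  # stop at end of turn
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))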
Model size: 438M parameters (Safetensors, BF16)