See axolotl config

axolotl version: `0.13.0.dev0`

```yaml
base_model: meta-llama/Llama-3.2-3B
hub_model_id: smohammadi/qat-nvfp4-llama3B

load_in_8bit: false
load_in_4bit: false
strict: false

# chunked_cross_entropy: true
# plugins:
#   - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
# liger_fused_linear_cross_entropy: true

datasets:
  - path: yahma/alpaca-cleaned
    type: alpaca
    split: train[:95%]

output_dir: ./outputs/bf16_out/
dataset_prepared_path: ./outputs/qat_out/dataset_prepared

sample_packing: false # true
sequence_len: 4096
flash_attention: true
# flex_attention: true
# flex_attn_compile_kwargs:
#   dynamic: false
#   mode: max-autotune-no-cudagraphs

quantization:
  activation_dtype: nvfp4
  weight_dtype: nvfp4
  group_size: 16 # only group_size of 16 is supported with nvfp4

wandb_project: qat_v2
wandb_entity:
wandb_watch:
wandb_name: bf16-nvfp4
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 64
num_epochs: 1
optimizer: adamw_torch_fused
gradient_checkpointing: true

cosine_constant_lr_ratio: 0
cosine_min_lr_ratio: 1.0
learning_rate: 2e-5

save_only_model: true
bf16: true

resume_from_checkpoint:
logging_steps: 1
evals_per_epoch: 1
saves_per_epoch: 1
warmup_ratio: 0.1
weight_decay: 0.0

special_tokens:
  pad_token: <|finetune_right_pad_id|>

# save_first_step: true # uncomment this to validate checkpoint saving works with your config
```
# qat-nvfp4-llama3B

This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset.
## Model description
More information needed
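Training used quantization-aware training (QAT): the `quantization` block in the config above fake-quantizes weights and activations to NVFP4 with a group size of 16 during the forward pass, so the model learns to tolerate the low-precision format it is intended to be served in. The snippet below is a toy sketch of per-group fake quantization for illustration only; it is not the torchao NVFP4 implementation, and the integer-style levels are a simplifying assumption about the real FP4 format.

```python
# Toy sketch of fake quantization, the core idea behind QAT: values are quantized
# and immediately dequantized per group so the training loss "sees" the rounding
# error. NOT the actual torchao NVFP4 kernels used by axolotl.
import torch

def fake_quantize(x: torch.Tensor, group_size: int = 16, n_levels: int = 7) -> torch.Tensor:
    groups = x.reshape(-1, group_size)
    # one scale per group, chosen so the largest magnitude maps to the top level
    scale = groups.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / n_levels
    q = (groups / scale).round().clamp(-n_levels, n_levels)  # quantize
    return (q * scale).reshape(x.shape)                      # dequantize

w = torch.randn(4, 32)
w_fq = fake_quantize(w)
print("max quantization error:", (w - w_fq).abs().max().item())
```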
## Intended uses & limitations
More information needed
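The checkpoint can be loaded like any other causal LM. Below is a minimal inference sketch, assuming a standard transformers setup: the hub id comes from the config above, while the Alpaca-style prompt and generation settings are illustrative choices, not part of this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "smohammadi/qat-nvfp4-llama3B"  # hub_model_id from the config above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt, matching the dataset format used for fine-tuning.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain quantization-aware training in one sentence.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```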
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 76
- training_steps: 769
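The step counts follow from the dataset split and batch size in the config. A quick sanity check (assuming yahma/alpaca-cleaned has about 51,760 examples; that figure is not stated in this card):

```python
import math

n_examples = 51_760                      # assumed size of yahma/alpaca-cleaned
train_examples = int(n_examples * 0.95)  # split: train[:95%]
steps = math.ceil(train_examples / 64)   # micro_batch_size 64, grad accumulation 1, 1 epoch
warmup = int(0.1 * steps)                # warmup_ratio: 0.1
print(steps, warmup)                     # -> 769 76
```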
### Training results
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4