0.0005_llama_nodpo_3iters_bs128_531lr_oldtrl_iter_2

This model is a fine-tuned version of ZhangShenao/0.0005_llama_nodpo_3iters_bs128_531lr_oldtrl_iter_1 on the updated and original datasets.

Model description

More information needed

Intended uses & limitations

More information needed
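
Pending fuller documentation, the snippet below is a minimal, hypothetical inference sketch rather than an officially supported usage example: it assumes the checkpoint loads through the standard πŸ€— Transformers APIs and ships a Llama-style chat template, neither of which this card confirms.

```python
# Hypothetical inference sketch; not part of the original card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GeorgiaTech/0.0005_llama_nodpo_3iters_bs128_531lr_oldtrl_iter_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the weights are published in BF16
    device_map="auto",
)

# Assumes the tokenizer defines a chat template (typical for Llama chat models).
messages = [{"role": "user", "content": "Summarize what DPO-style alignment does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```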

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 3e-07
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • total_eval_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
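
For reference only, the sketch below re-expresses the hyperparameters above as πŸ€— Transformers TrainingArguments. The card does not include the actual training script (the repository name hints at TRL, but that is unconfirmed), so the output path and the exact field mapping are assumptions.

```python
# Hedged sketch: restates the listed hyperparameters, not the authors' script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama_nodpo_iter_2",  # hypothetical path
    learning_rate=3e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,
    # effective train batch size: 4 per device x 8 GPUs x 4 accumulation steps = 128
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,  # the published weights are BF16
)
```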

Training results

More information needed

Framework versions

  • Transformers 4.36.2
  • Pytorch 2.1.2+cu121
  • Datasets 2.14.6
  • Tokenizers 0.15.2

Model size

8.03B parameters (BF16, Safetensors)