
lora_evo_ta_all_layers_18_attention_layers

This model is a fine-tuned version of togethercomputer/evo-1-8k-base. It achieves the following results on the evaluation set:

  • Loss: 2.8474

Model description

Trained on the single-ID-token "5K dataset", filtered to 4k sequences (20% held out as test data).

  • lora_alpha = 64
  • lora_dropout = 0.1
  • lora_r = 64
  • epochs = 3
  • learning_rate = 3e-4
  • warmup_steps = 500
  • gradient_accumulation_steps = 1
  • train_batch = 1
  • eval_batch = 1
  • LoRA applied to attention layers only (see the config sketch below)
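
A minimal PEFT LoraConfig sketch with the values above. The target_modules names are an assumption for illustration, not taken from this repo's adapter_config.json; Evo's custom architecture may use different attention projection names.

```python
# Sketch of the LoRA configuration described above. target_modules names
# are assumptions -- check the actual attention projection names in the
# base model before reusing this.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                # lora_r
    lora_alpha=64,
    lora_dropout=0.1,
    bias="none",
    target_modules=["Wqkv", "out_proj"],  # assumed attention-only targets
    # task_type omitted: this repo's adapter_config.json stores a
    # non-string value here, which PEFT warns about when parsing.
)
```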

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 3
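
For reference, a minimal transformers.TrainingArguments sketch matching the values above; output_dir is a placeholder, since the author's training script is not included in this repo.

```python
# Sketch of TrainingArguments matching the reported hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lora_evo_ta_all_layers_18_attention_layers",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=1,
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_steps=500,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the default optimizer
)
```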

Training results

| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
| 3.0886        | 0.375 | 1200 | 3.0465          |
| 3.0274        | 0.75  | 2400 | 2.9992          |
| 2.9835        | 1.125 | 3600 | 2.9622          |
| 2.9334        | 1.5   | 4800 | 2.9397          |
| 2.8989        | 1.875 | 6000 | 2.9026          |
| 2.8609        | 2.25  | 7200 | 2.8744          |
| 2.8413        | 2.625 | 8400 | 2.8584          |
| 2.8341        | 3.0   | 9600 | 2.8474          |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1
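
Since this repository contains a PEFT LoRA adapter for togethercomputer/evo-1-8k-base, it can presumably be loaded along the following lines (a minimal sketch; the trust_remote_code flag is assumed from the base model's custom architecture):

```python
# Minimal loading sketch: attach this LoRA adapter to the Evo base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/evo-1-8k-base",
    trust_remote_code=True,  # Evo uses a custom (StripedHyena) architecture
)
tokenizer = AutoTokenizer.from_pretrained(
    "togethercomputer/evo-1-8k-base", trust_remote_code=True
)
model = PeftModel.from_pretrained(
    base, "lsmille/lora_evo_ta_all_layers_18_attention_layers"
)
model.eval()
```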
