---
base_model: EleutherAI/pythia-160m-deduped
library_name: peft
license: apache-2.0
tags:
  - axolotl
  - relora
  - generated_from_trainer
model-index:
  - name: pythia-160m-dolphin-extended
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: 0.4.1

base_model: EleutherAI/pythia-160m-deduped
load_in_8bit: 
datasets:
  - path: lee-ite/med-alpaca
    type: alpaca
    shards: 4
  - path: vicgalle/alpaca-gpt4
    type: alpaca
  - path: iamtarun/python_code_instructions_18k_alpaca
    type: alpaca
  - path: llamafactory/alpaca_gpt4_en
    type: alpaca
  - path: cognitivecomputations/dolphin
    name: flan1m-alpaca-uncensored
    type: alpaca
    shards: 4

dataset_prepared_path: ds-mega-alpaca
#dataset_shard_num: 10
chat_template: inst
val_set_size: 0.001
adapter: lora
lora_model_dir: 
sequence_len: 2048
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - query_key_value
lora_target_linear: 
lora_fan_in_fan_out: true  # pythia/GPTNeoX lora specific
lora_modules_to_save:
  - embed_in
  - embed_out
  - lm_head
lora_on_cpu: false
# ReLoRA configuration
# # Must use either 'lora' or 'qlora' adapter, and does not support fsdp or deepspeed
# relora_steps: # Number of steps per ReLoRA restart
# relora_warmup_steps: # Number of per-restart warmup steps
# relora_anneal_steps: # Number of anneal steps for each relora cycle
# relora_prune_ratio: # threshold for optimizer magnitude when pruning
# relora_cpu_offload:  # True to perform lora weight merges on cpu during restarts, for modest gpu memory savings
relora_steps: 200
relora_warmup_steps: 10
relora_cpu_offload: false
wandb_project: pythia
wandb_entity:
wandb_watch:
wandb_name: pythia-160m-dolphin-extended
wandb_log_model:
output_dir: ./outputs/lora-alpaca-pythia-160m-dolphin-extended
gradient_accumulation_steps: 16
micro_batch_size: 1
num_epochs: 3
learning_rate: 0.0006
lr_scheduler: cosine_with_restarts
#cosine_min_lr_ratio: 0.1
train_on_inputs: false
group_by_length: false
#bf16: auto
#fp16: true
#tf32: false
float16: true
flash_attn: 
xformers_attention: true
optimizer: paged_adamw_8bit
gpu_memory_limit: 8GiB
hub_model_id: jtatman/pythia-160m-dolphin-extended
early_stopping_patience: 3
#resume_from_checkpoint: outputs/lora-alpaca-pythia-125m/checkpoint-51040
auto_resume_from_checkpoints: true
local_rank:
weight_decay: 0.0
#evals_per_epoch: 4
eval_steps: 200
logging_steps: 1
save_steps: 200
save_total_limit: 5
warmup_steps: 100
tokens:
  - "[INST]"
  - "[/INST]"

pythia-160m-dolphin-extended

This model is a LoRA (ReLoRA) fine-tuned adapter for EleutherAI/pythia-160m-deduped, trained on the Alpaca-format datasets listed in the config above (lee-ite/med-alpaca, vicgalle/alpaca-gpt4, iamtarun/python_code_instructions_18k_alpaca, llamafactory/alpaca_gpt4_en, and the flan1m-alpaca-uncensored subset of cognitivecomputations/dolphin). It achieves the following results on the evaluation set:

  • Loss: 9.6289

Model description

More information needed

Intended uses & limitations

More information needed
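
No usage example is provided by the author. As a rough, unofficial sketch, the adapter should load like any other PEFT LoRA checkpoint; the [INST]/[/INST] prompt style below is an assumption based on the training config, not documented behavior:

```python
# Unofficial sketch: load the adapter with PEFT and generate a short reply.
# The [INST]/[/INST] prompt format is assumed from the training config.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "jtatman/pythia-160m-dolphin-extended"
tokenizer = AutoTokenizer.from_pretrained(repo)  # assumes tokenizer files were pushed with the adapter
model = AutoPeftModelForCausalLM.from_pretrained(repo)  # pulls the base model named in adapter_config.json

prompt = "[INST] Summarize what a LoRA adapter is in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```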

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0006
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 16
  • optimizer: paged 8-bit AdamW (paged_adamw_8bit) with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine_with_restarts (see the sketch after this list)
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 3
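
For reference, a cosine-with-restarts schedule with 100 warmup steps can be built with the corresponding Transformers helper; a minimal sketch, with the total step count left as a placeholder since it depends on the dataset size, the 3 epochs, and the effective batch size of 16:

```python
# Sketch of the learning-rate schedule named above, built directly with
# Transformers. `num_training_steps` is a placeholder value.
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model parameters
optimizer = torch.optim.AdamW(params, lr=6e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=10_000,  # placeholder
    num_cycles=1,
)
```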

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 38.0524       | 0.0000 | 1    | 33.0385         |
| 8.859         | 0.0056 | 200  | 8.2423          |
| 7.2059        | 0.0113 | 400  | 7.4385          |
| 10.5864       | 0.0169 | 600  | 10.5324         |
| 10.3914       | 0.0226 | 800  | 10.2817         |
| 9.5214        | 0.0282 | 1000 | 9.6289          |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1