---
license: mit
tags:
  - generated_from_trainer
  - pytorch
model-index:
  - name: dolly-v2-3-openassistant-guanaco
    results: []
datasets:
  - timdettmers/openassistant-guanaco
library_name: peft
pipeline_tag: text-generation
---

# dolly-v2-3-openassistant-guanaco

This model is a fine-tuned version of databricks/dolly-v2-3b on the timdettmers/openassistant-guanaco dataset.

## Model description

This is a PEFT (LoRA) adapter model, so instead of full model weights the repository contains only the adapter weights and the adapter configuration (see the loading sketch after this list):

  • adapter_model.bin
  • adapter_config.json
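
A minimal loading sketch for inference, assuming the adapter is published under `hugger111/dolly-v2-3-openassistant-guanaco` (adjust the repo id to wherever the adapter files actually live):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "databricks/dolly-v2-3b"
adapter_id = "hugger111/dolly-v2-3-openassistant-guanaco"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "### Human: What is parameter-efficient fine-tuning?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The `### Human:` / `### Assistant:` prompt format follows the conversation format used in the openassistant-guanaco dataset.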

This fine-tuned model was created with the following bitsandbytes quantization config:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    # the bnb_4bit_* options below only take effect when loading with load_in_4bit=True
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
```

The peft_config is as follows:

```python
from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"],
)
```
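
Put together, the quantized base model and the LoRA adapter are created roughly as follows. This is a sketch, not the exact training script; the use of `prepare_model_for_kbit_training` is an assumption based on the standard PEFT + bitsandbytes recipe:

```python
from transformers import AutoModelForCausalLM
from peft import get_peft_model, prepare_model_for_kbit_training

# bnb_config and peft_config as defined above
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v2-3b",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # prepare the quantized model for training (fp32 norms, input grads)
model = get_peft_model(model, peft_config)      # inject LoRA adapters into the target modules
model.print_trainable_parameters()
```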

## Intended uses & limitations

The model is intended for fair use only.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine_with_restarts
  • lr_scheduler_warmup_ratio: 0.03
  • training_steps: 100
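
For reference, these values map onto a transformers `TrainingArguments` object roughly as follows (a sketch; the output directory is an assumption, not taken from the original run):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dolly-v2-3-openassistant-guanaco",  # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 x 4 = total train batch size of 16
    max_steps=100,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.03,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer
)
```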

### Training results

### Framework versions

  • Transformers 4.31.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.0
  • Tokenizers 0.13.3