# Model Card for Deema/lora-alpaca_alpagasus_ar

This repo contains a low-rank adapter (LoRA) for AceGPT-7B, fine-tuned on the arbml/alpagasus_cleaned_ar dataset.

## How to Get Started with the Model

Use the code below to get started with the model.

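The sketch below loads the base model with transformers and attaches this adapter with peft. It assumes the adapter repo id Deema/lora-alpaca_alpagasus_ar; the dtype, device placement, example prompt, and generation settings are illustrative choices rather than values taken from this repo.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "FreedomIntelligence/AceGPT-7B"
adapter_id = "Deema/lora-alpaca_alpagasus_ar"

# Load the base model in half precision and attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# The adapter was trained with the Alpaca prompt template, so wrap the
# instruction accordingly (Arabic instruction shown as an example).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nلخص مفهوم التعلم الآلي في جملتين.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```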

## Training Details

### Training Data

arbml/alpagasus_cleaned_ar
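
A quick way to inspect the training data is sketched below; it assumes the dataset is hosted on the Hugging Face Hub under this name and follows the usual Alpaca instruction/input/output schema.

```python
from datasets import load_dataset

# Load the Arabic AlpaGasus instruction-tuning data from the Hugging Face Hub.
dataset = load_dataset("arbml/alpagasus_cleaned_ar")
print(dataset)

# Alpaca-style records are assumed: instruction / input / output fields.
print(dataset["train"][0])
```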

### Training Hyperparameters

```
python finetune.py \
    --base_model 'FreedomIntelligence/AceGPT-7B' \
    --data_path 'alpagasus_cleaned_ar.json' \
    --output_dir 'lora-alpaca_alpagasus'

Training Alpaca-LoRA model with params:
base_model: FreedomIntelligence/AceGPT-7B
data_path: alpagasus_cleaned_ar.json
output_dir: lora-alpaca_alpagasus
batch_size: 128
micro_batch_size: 4
num_epochs: 3
learning_rate: 0.0003
cutoff_len: 256
val_set_size: 2000
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'v_proj']
train_on_inputs: True
add_eos_token: False
group_by_length: False
wandb_project:
wandb_run_name:
wandb_watch:
wandb_log_model:
resume_from_checkpoint: False
prompt_template: alpaca
```
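
For reference, the LoRA settings above correspond roughly to the following peft.LoraConfig. This is a sketch of the equivalent configuration, not the exact code used by finetune.py.

```python
from peft import LoraConfig, TaskType

# Approximate PEFT LoraConfig matching the hyperparameters listed above.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # lora_r
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # lora_target_modules
)
```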

### Framework versions

- PEFT 0.7.2.dev0