LoRA finetune sample data and scripts.
```
pip install transformers
pip install datasets
pip install peft==0.5.0
pip install trl
pip install auto-gptq
pip install optimum
```
Versions matter a lot here. For example, if you see an error like:
```
ValueError: Target module QuantLinear() is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported.
```
it means your peft version is too old.
I don't know whether all of these are strictly required, but these are the versions in my working environment:
* auto-gptq 0.4.1+cu117
* trl 0.7.1
* optimum 1.12.1.dev0
* transformers 4.32.1
* datasets 2.14.4
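To compare your own environment against the list above, a quick check like the following can help. This is just a sketch; the package names are the pip distribution names used in the install commands, and `report_versions` is a hypothetical helper, not part of any of these libraries.

```python
# Print the installed versions of the key packages so they can be
# compared against the versions listed above.
from importlib.metadata import version, PackageNotFoundError

def report_versions(packages):
    """Return {package: version string or None if not installed}."""
    found = {}
    for name in packages:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None  # package is missing from this environment
    return found

if __name__ == "__main__":
    pkgs = ["auto-gptq", "trl", "optimum", "transformers", "datasets", "peft"]
    for pkg, ver in report_versions(pkgs).items():
        print(f"{pkg:<14} {ver or 'NOT INSTALLED'}")
```

If `peft` reports a version older than 0.5.0, that is the likely cause of the `QuantLinear` error above.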
The documentation says to install from source, but that sometimes causes errors.
If you can't get it to work, it may be better to wait until a stable version is released.
Good luck!
- finetune.py: GPTQ finetune sample script.
- jawiki3.csv: sample data (Japanese).
- lora_test.py: use the LoRA adapter with this script after finetuning.
model.safetensors is the same file as ../gptq_model-4bit-128g.safetensors.
I couldn't find how to change the script's default model filename, so I copied the file.
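Since the copy exists only to satisfy the default filename, a symbolic link avoids duplicating a multi-gigabyte weights file. A minimal sketch (the paths and the `link_weights` helper are illustrative, not part of the scripts here):

```python
# Point the filename the script expects at the existing GPTQ weights,
# instead of keeping a second full copy on disk.
import os
import shutil

def link_weights(src, dst):
    """Create dst as a symlink to src; fall back to a plain copy if
    symlinks are unavailable (e.g. some Windows setups)."""
    if os.path.lexists(dst):
        os.remove(dst)  # replace any stale link or copy
    try:
        os.symlink(os.path.abspath(src), dst)
    except OSError:
        shutil.copyfile(src, dst)

# Example (paths are illustrative):
# link_weights("../gptq_model-4bit-128g.safetensors", "model.safetensors")
```

Either way, the script sees the weights under its expected default name.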